@@ -253,7 +254,7 @@ $(this).html(''); });titletitletitletitletitle ``` -接下来,定义表模型。 这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 dom 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 Ajax 连接。 请注意,使用 Groovy GString 调用 Grails **createLink()** 的方法创建 URL,在 **EmployeeController** 中指向 **browserLister** 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 +接下来,定义表模型。这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 DOM 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 AJAX 连接。 请注意,使用 Groovy GString 调用 Grails `createLink()` 的方法创建 URL,在 `EmployeeController` 中指向 `browserLister` 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 ``` var table = $('#employee_dt').DataTable( { @@ -302,7 +303,7 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen_4.png) -这是另一个屏幕截图,显示了过滤和多列排序(寻找 position 包括字符 “dev” 的员工,先按 office 排序,然后按姓氏排序): +这是另一个屏幕截图,显示了过滤和多列排序(寻找 “position” 包括字符 “dev” 的员工,先按 “office” 排序,然后按姓氏排序): ![](https://opensource.com/sites/default/files/uploads/screen_5.png) @@ -314,37 +315,37 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen7.png) -好的,视图部分看起来非常简单; 因此,控制器必须做所有繁重的工作,对吧? 让我们来看看… +好的,视图部分看起来非常简单;因此,控制器必须做所有繁重的工作,对吧? 让我们来看看…… #### 控制器 browserLister 操作 -回想一下,我们看到过这个字符串 +回想一下,我们看到过这个字符串: ``` "${createLink(controller: 'employee', action: 'browserLister')}" ``` -对于从 DataTables 模型中调用 Ajax 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 **EmployeeController** 的链接,位于: +对于从 DataTables 模型中调用 AJAX 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 `EmployeeController` 的链接,位于: ``` embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy ``` -特别是控制器方法 **browserLister()**。我在代码中留了一些 print 语句,以便在运行时能够在终端看到中间结果。 +特别是控制器方法 `browserLister()`。我在代码中留了一些 `print` 语句,以便在运行时能够在终端看到中间结果。 ```     def browserLister() {         // Applies filters and sorting to return a list of desired employees ``` -首先,打印出传递给 **browserLister()** 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 +首先,打印出传递给 `browserLister()` 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 ```       println "employee browserLister params $params"         println() ``` -接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 **jqdtParams**的 Groovy 映射: +接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 `jqdtParams` 的 Groovy 映射: ``` def jqdtParams = [:] @@ -363,7 +364,7 @@ println "employee dataTableParams $jqdtParams" println() ``` -接下来,列数据,一个名为 **columnMap**的 Groovy 映射: +接下来,列数据,一个名为 `columnMap` 的 Groovy 映射: ``` def columnMap = jqdtParams.columns.collectEntries { k, v -> @@ -386,7 +387,7 @@ println "employee columnMap $columnMap" println() ``` -接下来,从 **columnMap** 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 **allColumnList**和 **orderList**: +接下来,从 `columnMap` 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 `allColumnList` 和 `orderList` : ``` def allColumnList = columnMap.keySet() as List @@ -395,7 +396,7 @@ def orderList = jqdtParams.order.collect { k, v -> [allColumnList[v.column as In println "employee orderList $orderList" ``` -我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭; 在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 **createAlias** 以允许我们进入相关类别 Position 和 Office: +我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭;在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 `createAlias` 以允许我们进入相关类别 `Position` 和 `Office`: ``` def filterer = { @@ -424,14 +425,14 @@ def filterer = { } ``` -是时候应用上述内容了。第一步是获取分页代码所需的所有 Employee 实例的总数: +是时候应用上述内容了。第一步是获取分页代码所需的所有 
`Employee` 实例的总数: ```         def recordsTotal = Employee.count()         println "employee recordsTotal $recordsTotal" ``` -接下来,将过滤器应用于 Employee 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): +接下来,将过滤器应用于 `Employee` 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): ```         def c = Employee.createCriteria() @@ -467,7 +468,7 @@ def filterer = { 要完全清楚,JTable 中的分页代码管理三个计数:数据集中的记录总数,应用过滤器后得到的数字,以及要在页面上显示的数字(显示是滚动还是分页)。 排序应用于所有过滤的记录,并且分页应用于那些过滤的记录的块以用于显示目的。 -接下来,处理命令返回的结果,在每行中创建指向 Employee,Position 和 Office 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: +接下来,处理命令返回的结果,在每行中创建指向 `Employee`、`Position` 和 `Office` 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: ```         def dollarFormatter = new DecimalFormat('$##,###.##') @@ -490,14 +491,15 @@ def filterer = { } ``` -大功告成 +大功告成。 + 如果你熟悉 Grails,这可能看起来比你原先想象的要多,但这里没有火箭式的一步到位方法,只是很多分散的操作步骤。但是,如果你没有太多接触 Grails(或 Groovy),那么需要了解很多新东西 - 闭包,代理和构建器等等。 在那种情况下,从哪里开始? 最好的地方是了解 Groovy 本身,尤其是 [Groovy closures][18] 和 [Groovy delegates and builders][19]。然后再去阅读上面关于 Grails 和 Hibernate 条件查询的建议阅读文章。 ### 结语 -jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 +jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但 DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 我使用这种方法制作了几个数据浏览器,允许用户选择要查看和累积记录计数的列,或者只是浏览数据。即使在相对适度的 VPS 上的百万行表中,性能也很好。 @@ -512,7 +514,7 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables 作者:[Chris Hermansen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -528,11 +530,11 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables [9]: http://sdkman.io/ [10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html [11]: https://opensource.com/file/410061 -[12]: https://opensource.com/sites/default/files/uploads/screen_1.png "Embrow home screen" +[12]: https://opensource.com/sites/default/files/uploads/screen_1.png [13]: https://opensource.com/file/410066 -[14]: https://opensource.com/sites/default/files/uploads/screen_2.png "Office list screenshot" +[14]: https://opensource.com/sites/default/files/uploads/screen_2.png [15]: https://opensource.com/file/410071 -[16]: https://opensource.com/sites/default/files/uploads/screen3.png "Employee controller screenshot" +[16]: https://opensource.com/sites/default/files/uploads/screen3.png [17]: https://gsp.grails.org/latest/ref/Tags/createLink.html [18]: http://groovy-lang.org/closures.html [19]: http://groovy-lang.org/dsls.html diff --git a/published/201811/20180928 What containers can teach us about DevOps.md b/published/201811/20180928 What containers can teach us about DevOps.md new file mode 100644 index 0000000000..3a0a360603 --- /dev/null +++ b/published/201811/20180928 What containers can teach us about DevOps.md @@ -0,0 +1,98 @@ +容器技术对 DevOps 的一些启发 +====== + +> 容器技术的使用支撑了目前 DevOps 三大主要实践:工作流、及时反馈、持续学习。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) + +有人说容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 设计理念的流行,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑 DevOps 的三个实践][1]。 + +### 工作流 + +#### 容器中的工作流 + +每个容器都可以看成一个独立的运行环境,对于容器内部,不需要考虑外部的宿主环境、集群环境,以及其它基础设施。在容器内部,每个功能看起来都是以传统的方式运行。从外部来看,容器内运行的应用一般作为整个应用系统架构的一部分:比如 web 
API、web app 用户界面、数据库、任务执行、缓存系统、垃圾回收等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 + +#### 现实中的工作流 + +那些跟“容器”一样业务功能独立的团队,也可以借鉴这种容器思维。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造 [《杰森一家》中的斯贝斯利太空飞轮][2] 等),还是技术中的工作流(开发、测试、运维、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用这种线性的工作流有效降低了工作耦合性。 + +#### DevOps 中的工作流 + +DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及问题,并最终修复它们。 + +> 践行这样的工作流后,可以避免将一个已知缺陷带到工作流的下游,避免局部优化导致可能的全局性能下降,要不断探索如何优化工作流,持续加深对于系统的理解。 + +> —— Gene Kim,《[支撑 DevOps 的三个实践][3]》,IT 革命,2017.4.25 + +### 反馈 + +#### 容器中的反馈 + +除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的状态。比如目前[流行的][5] [Prometheus][4],可以用来收集容器和容器集群中相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但同时也带来了不透明的特性,这时,从中快速收集信息来解决其内部出现的问题就显得尤为重要了。 + +#### 现实中的反馈 + +在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速地定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队被淹没在大量不相关的事件时,那些真正需要快速反馈的重要信息很容易被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快地意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不那么搞笑了)。(LCTT 译注:露西和埃塞尔是上世纪 50 年代的著名黑白情景喜剧《我爱露西》中的主角) + +#### DevOps 中的反馈 + +DevOps 中的第二条原则,就是快速收集所有相关的有用信息,这样在问题影响到其它开发流程之前就可以被识别出。DevOps 团队应该努力去“优化下游”,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时地响应。 + +> 快速的反馈对于提高技术的质量、可用性、安全性至关重要。 + +> —— Gene Kim 等人,《DevOps 手册:如何在技术组织中创造世界级的敏捷性,可靠性和安全性》,IT 革命,2016 + +### 持续学习 + +#### 容器中的持续学习 + +践行第三条原则“持续学习”是一个不小的挑战。在不需要掌握太多边缘的或难以理解的东西的情况下,容器技术让我们的开发工程师和运营团队依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录和分享。 + +#### 现实中的持续学习 + +举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被安排对一个运行着某个大学核心 IT 部门网站的 Apache 虚拟主机配置进行更改。由于没有方便的测试环境,我直接在生产站点上修改配置,当时觉得配置没问题就发布了,几分钟后,我无意中听到了隔壁同事说: + +“等会,网站挂了?” + +“没错,怎么回事?” + +很多人蒙圈了…… + +在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 —— 我老板的老板的老板 —— 来到我的工位询问发生了什么事。“别担心,”她告诉我。“我们不会责怪你,这是一个错误,现在你已经学会了。” + +而在容器中,这种情形在我的笔记本上就很容易测试了,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 + +#### DevOps 中的持续学习 + +持续学习文化的一部分是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是成长而不是惩罚,所以不要畏惧失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。 + +随着系统越来越被细分,我们更需要将注意力集中在具体的点上:上面提到的两条原则主要关注整体流程,而持续学习关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。 + +> 实验和冒险让我们能够不懈地改进我们的工作,但也要求我们尝试之前未用过的工作方式。 + +> —— Gene Kim 等人,《[凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7]》,IT 革命,2013 + +### 容器技术带给 DevOps 的启迪 + +有效地应用容器技术可以学习 DevOps 的三条原则:工作流,反馈以及持续学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破隔阂,并作为一个团队工作,以提升整体表现和深度了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈机制,以便在问题发生影响之前发现问题。最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/containers-can-teach-us-devops + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[littleji](https://github.com/littleji) +校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ +[2]: https://en.wikipedia.org/wiki/The_Jetsons +[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops +[4]: https://prometheus.io/ +[5]: https://opensource.com/article/18/9/prometheus-operational-advantage +[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI +[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git a/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md 
b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md new file mode 100644 index 0000000000..734ac021cb --- /dev/null +++ b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md @@ -0,0 +1,259 @@ +使用 Pandoc 将你的书转换成网页和电子书 +====== + +> 通过 Markdown 和 Pandoc,可以做到编写一次,发布两次。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +Pandoc 是一个命令行工具,用于将文件从一种标记语言转换为另一种标记语言。在我 [对 Pandoc 的简介][1] 一文中,我演示了如何把 Markdown 编写的文本转换为网页、幻灯片和 PDF。 + +在这篇后续文章中,我将深入探讨 [Pandoc][2],展示如何从同一个 Markdown 源文件生成网页和 ePub 格式的电子书。我将使用我即将发布的电子书《[面向对象思想的 GRASP 原则][3]》为例进行讲解,这本电子书正是通过以下过程创建的。 + +首先,我将解释这本书使用的文件结构,然后介绍如何使用 Pandoc 生成网页并将其部署在 GitHub 上;最后,我演示了如何生成对应的 ePub 格式电子书。 + +你可以在我的 GitHub 仓库 [Programming Fight Club][4] 中找到相应代码。 + +### 设置图书结构 + +我用 Markdown 语法完成了所有的写作,你也可以使用 HTML 标记,但是当 Pandoc 将 Markdown 转换为 ePub 文档时,引入的 HTML 标记越多,出现问题的风险就越高。我的书按照每章一个文件的形式进行组织,用 Markdown 的 `H1` 标记(`#`)声明每章的标题。你也可以在每个文件中放置多个章节,但将它们放在单独的文件中可以更轻松地查找内容并在以后进行更新。 + +元信息遵循类似的模式,每种输出格式都有自己的元信息文件。元信息文件定义有关文档的信息,例如要添加到 HTML 中的文本或 ePub 的许可证。我将所有 Markdown 文档存储在名为 `parts` 的文件夹中(这对于用来生成网页和 ePub 的 Makefile 非常重要)。下面以一个例子进行说明,让我们看一下目录,前言和关于本书(分为 `toc.md`、`preface.md` 和 `about.md` 三个文件)这三部分,为清楚起见,我们将省略其余的章节。 + +关于本书这部分内容的开头部分类似: + +``` +# About this book {-} + +## Who should read this book {-} + +Before creating a complex software system one needs to create a solid foundation. +General Responsibility Assignment Software Principles (GRASP) are guidelines to assign +responsibilities to software classes in object-oriented programming. +``` + +每一章完成后,下一步就是添加元信息来设置网页和 ePub 的格式。 + +### 生成网页 + +#### 创建 HTML 元信息文件 + +我创建的网页的元信息文件(`web-metadata.yaml`)是一个简单的 YAML 文件,其中包含 ` ` 标签中的作者、标题、和版权等信息,以及 HTML 文件中开头和结尾的内容。 + +我建议(至少)包括 `web-metadata.yaml` 文件中的以下字段: + +``` +--- +title: GRASP principles for the Object-oriented mind +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +header-includes: +- | + ```{=html} + + + ``` +include-before: +- | + ```{=html} +

If you like this book, please consider + spreading the word or + + buying me a coffee + +

+ ``` +include-after: +- | + ```{=html} +
+
+
+ +
+
+ ``` +--- +``` + +下面几个变量需要注意一下: + +- `header-includes` 变量包含将要嵌入 `` 标签的 HTML 文本。 +- 调用变量后的下一行必须是 `- |`。再往下一行必须以与 `|` 对齐的三个反引号开始,否则 Pandoc 将无法识别。`{= html}` 告诉 Pandoc 其中的内容是原始文本,不应该作为 Markdown 处理。(为此,需要检查 Pandoc 中的 `raw_attribute` 扩展是否已启用。要进行此检查,键入 `pandoc --list-extensions | grep raw` 并确保返回的列表包含名为 `+ raw_html` 的项目,加号表示已启用。) +- 变量 `include-before` 在网页开头添加一些 HTML 文本,此处我请求读者帮忙宣传我的书或给我打赏。 +- `include-after` 变量在网页末尾添加原始 HTML 文本,同时显示我的图书许可证。 + +这些只是其中一部分可用的变量,查看 HTML 中的模板变量(我的文章 [Pandoc简介][1] 中介绍了如何查看 LaTeX 的模版变量,查看 HTML 模版变量的过程是相同的)对其余变量进行了解。 + +#### 将网页分成多章 + +网页可以作为一个整体生成,这会产生一个包含所有内容的长页面;也可以分成多章,我认为这样会更容易阅读。我将解释如何将网页划分为多章,以便读者不会被长网页吓到。 + +为了使网页易于在 GitHub Pages 上部署,需要创建一个名为 `docs` 的根文件夹(这是 GitHub Pages 默认用于渲染网页的根文件夹)。然后我们需要为 `docs` 下的每一章创建文件夹,将 HTML 内容放在各自的文件夹中,将文件内容放在名为 `index.html` 的文件中。 + +例如,`about.md` 文件将转换成名为 `index.html` 的文件,该文件位于名为 `about`(`about/index.html`)的文件夹中。这样,当用户键入 `http:///about/` 时,文件夹中的 `index.html` 文件将显示在其浏览器中。 + +下面的 `Makefile` 将执行上述所有操作: + +``` +# Your book files +DEPENDENCIES= toc preface about + +# Placement of your HTML files +DOCS=docs + +all: web + +web: setup $(DEPENDENCIES) +        @cp $(DOCS)/toc/index.html $(DOCS) + + +# Creation and copy of stylesheet and images into +# the assets folder. This is important to deploy the +# website to Github Pages. +setup: +        @mkdir -p $(DOCS) +        @cp -r assets $(DOCS) + + +# Creation of folder and index.html file on a +# per-chapter basis + +$(DEPENDENCIES): +        @mkdir -p $(DOCS)/$@ +        @pandoc -s --toc web-metadata.yaml parts/$@.md \ +        -c /assets/pandoc.css -o $(DOCS)/$@/index.html + +clean: +        @rm -rf $(DOCS) + +.PHONY: all clean web setup +``` + +选项 `- c /assets/pandoc.css` 声明要使用的 CSS 样式表,它将从 `/assets/pandoc.cs` 中获取。也就是说,在 `` 标签内,Pandoc 会添加这样一行: + +``` + +``` + +使用下面的命令生成网页: + +``` +make +``` + +根文件夹现在应该包含如下所示的文件结构: + +``` +.---parts +|    |--- toc.md +|    |--- preface.md +|    |--- about.md +| +|---docs +    |--- assets/ +    |--- index.html +    |--- toc +    |     |--- index.html +    | +    |--- preface +    |     |--- index.html +    | +    |--- about +          |--- index.html +    +``` + +#### 部署网页 + +通过以下步骤将网页部署到 GitHub 上: + +1. 创建一个新的 GitHub 仓库 +2. 将内容推送到新创建的仓库 +3. 找到仓库设置中的 GitHub Pages 部分,选择 `Source` 选项让 GitHub 使用主分支的内容 + +你可以在 [GitHub Pages][5] 的网站上获得更多详细信息。 + +[我的书的网页][6] 便是通过上述过程生成的,可以在网页上查看结果。 + +### 生成电子书 + +#### 创建 ePub 格式的元信息文件 + +ePub 格式的元信息文件 `epub-meta.yaml` 和 HTML 元信息文件是类似的。主要区别在于 ePub 提供了其他模板变量,例如 `publisher` 和 `cover-image` 。ePub 格式图书的样式表可能与网页所用的不同,在这里我使用一个名为 `epub.css` 的样式表。 + +``` +--- +title: 'GRASP principles for the Object-oriented Mind' +publisher: 'Programming Language Fight Club' +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +cover-image: assets/cover.png +stylesheet: assets/epub.css +... 
+``` + +将以下内容添加到之前的 `Makefile` 中: + +``` +epub: +        @pandoc -s --toc epub-meta.yaml \ +        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub +``` + +用于产生 ePub 格式图书的命令从 HTML 版本获取所有依赖项(每章的名称),向它们添加 Markdown 扩展,并在它们前面加上每一章的文件夹路径,以便让 Pandoc 知道如何进行处理。例如,如果 `$(DEPENDENCIES` 变量只包含 “前言” 和 “关于本书” 两章,那么 `Makefile` 将会这样调用: + +``` +@pandoc -s --toc epub-meta.yaml \ +parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub +``` + +Pandoc 将提取这两章的内容,然后进行组合,最后生成 ePub 格式的电子书,并放在 `Assets` 文件夹中。 + +这是使用此过程创建 ePub 格式电子书的一个 [示例][7]。 + +### 过程总结 + +从 Markdown 文件创建网页和 ePub 格式电子书的过程并不困难,但有很多细节需要注意。遵循以下大纲可能使你更容易使用 Pandoc。 + +- HTML 图书: + - 使用 Markdown 语法创建每章内容 + - 添加元信息 + - 创建一个 `Makefile` 将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 +- ePub 电子书: + - 使用之前创建的每一章内容 + - 添加新的元信息文件 + - 创建一个 `Makefile` 以将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 + + +------ + +via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[1]: https://linux.cn/article-10228-1.html +[2]: https://pandoc.org/ +[3]: https://www.programmingfightclub.com/ +[4]: https://github.com/kikofernandez/programmingfightclub +[5]: https://pages.github.com/ +[6]: https://www.programmingfightclub.com/grasp-principles/ +[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/translated/tech/20181002 4 open source invoicing tools for small businesses.md b/published/201811/20181002 4 open source invoicing tools for small businesses.md similarity index 85% rename from translated/tech/20181002 4 open source invoicing tools for small businesses.md rename to published/201811/20181002 4 open source invoicing tools for small businesses.md index f333c318bc..c1f5337122 100644 --- a/translated/tech/20181002 4 open source invoicing tools for small businesses.md +++ b/published/201811/20181002 4 open source invoicing tools for small businesses.md @@ -1,22 +1,23 @@ 适用于小型企业的 4 个开源发票工具 ====== -用基于 web 的发票软件管理你的账单,完成收款,十分简单。 + +> 用基于 web 的发票软件管理你的账单,轻松完成收款,十分简单。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp) 无论您开办小型企业的原因是什么,保持业务发展的关键是可以盈利。收款也就意味着向客户提供发票。 -使用 LibreOffice Writer 或 LibreOffice Calc 提供发票很容易,但有时候你需要的不止这些。从更专业的角度看。一种跟进发票的方法。提醒你何时跟进你发出的发票。 +使用 LibreOffice Writer 或 LibreOffice Calc 提供发票很容易,但有时候你需要的不止这些。从更专业的角度看,一种跟进发票的方法,可以提醒你何时跟进你发出的发票。 -在这里有各种各样的商业闭源发票管理工具。但是开源界的产品和相对应的闭源商业工具比起来,并不差,没准还更灵活。 +在这里有各种各样的商业闭源的发票管理工具。但是开源的产品和相对应的闭源商业工具比起来,并不差,没准还更灵活。 让我们一起了解这 4 款基于 web 的开源发票工具,它们很适用于预算紧张的自由职业者和小型企业。2014 年,我在本文的[早期版本][1]中提到了其中两个工具。这 4 个工具用起来都很简单,并且你可以在任何设备上使用它们。 ### Invoice Ninja -我不是很喜欢 ninja 这个词。尽管如此,我喜欢 [Invoice Ninja][2]。非常喜欢。它将功能融合在一个简单的界面,其中包含一组功能,可让创建,管理和向客户、消费者发送发票。 +我不是很喜欢 ninja (忍者)这个词。尽管如此,我喜欢 [Invoice Ninja][2]。非常喜欢。它将功能融合在一个简单的界面,其中包含一组可让你创建、管理和向客户、消费者发送发票的功能。 -您可以轻松配置多个客户端,跟进付款和未结清的发票,生成报价并用电子邮件发送发票。Invoice Ninja 与其竞争对手不同,它[集成][3]了超过 40 个流行支付方式,包括 PayPal,Stripe,WePay 以及 Apple Pay。 +您可以轻松配置多个客户端,跟进付款和未结清的发票,生成报价并用电子邮件发送发票。Invoice Ninja 与其竞争对手不同,它[集成][3]了超过 40 个流行支付方式,包括 PayPal、Stripe、WePay 以及 Apple Pay。 [下载][4]一个可以安装到自己服务器上的版本,或者获取一个[托管版][5]的账户,都可以使用 Invoice Ninja。它有免费版,也有每月 8 美元的收费版。 @@ -34,7 +35,7 @@ InvoicePlane 不仅可以生成或跟进发票。你还可以为任务或商品 [OpenSourceBilling][9] 
被它的开发者称赞为“非常简单的计费软件”,当之无愧。它拥有最简洁的交互界面,配置使用起来轻而易举。 -OpenSourceBilling 因它的商业智能仪表盘脱颖而出,它可以跟进跟进你当前和以前的发票,以及任何没有支付的款项。它以图表的形式整理信息,使之很容易阅读。 +OpenSourceBilling 因它的商业智能仪表盘脱颖而出,它可以跟进你当前和以前的发票,以及任何没有支付的款项。它以图表的形式整理信息,使之很容易阅读。 你可以在发票上配置很多信息。只需点几下鼠标按几下键盘,即可添加项目、税率、客户名称以及付款条件。OpenSourceBilling 将这些信息保存在你所有的发票当中,不管新发票还是旧发票。 @@ -57,7 +58,7 @@ via: https://opensource.com/article/18/10/open-source-invoicing-tools 作者:[Scott Nesbitt][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[fuowang](https://github.com/fuowang) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md new file mode 100644 index 0000000000..58996654e5 --- /dev/null +++ b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md @@ -0,0 +1,70 @@ + +Greg Kroah-Hartman 解释内核社区是如何使 Linux 安全的 +============ + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel-security_0.jpg?itok=hOaTQwWV) + +> 内核维护者 Greg Kroah-Hartman 谈论内核社区如何保护 Linux 不遭受损害。 + +由于 Linux 使用量持续扩大,内核社区去提高这个世界上使用最广泛的技术 —— Linux 内核的安全性的重要性越来越高。安全不仅对企业客户很重要,它对消费者也很重要,因为 80% 的移动设备都使用了 Linux。在本文中,Linux 内核维护者 Greg Kroah-Hartman 带我们了解内核社区如何应对威胁。 + +### bug 不可避免 + +![Greg Kroah-Hartman](https://www.linux.com/sites/lcom/files/styles/floated_images/public/greg-k-h.png?itok=p4fREYuj "Greg Kroah-Hartman") + +*Greg Kroah-Hartman [Linux 基金会][1]* + +正如 Linus Torvalds 曾经说过的,大多数安全问题都是 bug 造成的,而 bug 又是软件开发过程的一部分。是软件就有 bug。 + +Kroah-Hartman 说:“就算是 bug,我们也不知道它是安全的 bug 还是不安全的 bug。我修复的一个著名 bug,在三年后才被 Red Hat 认定为安全漏洞“。 + +在消除 bug 方面,内核社区没有太多的办法,只能做更多的测试来寻找 bug。内核社区现在已经有了自己的安全团队,它们是由熟悉内核核心的内核开发者组成。 + +Kroah-Hartman 说:”当我们收到一个报告时,我们就让参与这个领域的核心开发者去修复它。在一些情况下,他们可能是同一个人,让他们进入安全团队可以更快地解决问题“。但他也强调,内核所有部分的开发者都必须清楚地了解这些问题,因为内核是一个可信环境,它必须被保护起来。 + +Kroah-Hartman 说:”一旦我们修复了它,我们就将它放到我们的栈分析规则中,以便于以后不再重新出现这个 bug。“ + +除修复 bug 之外,内核社区也不断加固内核。Kroah-Hartman 说:“我们意识到,我们需要一些主动的缓减措施,因此我们需要加固内核。” + +Kees Cook 和其他一些人付出了巨大的努力,带来了一直在内核之外的加固特性,并将它们合并或适配到内核中。在每个内核发行后,Cook 都对所有新的加固特性做一个总结。但是只加固内核是不够的,供应商们必须要启用这些新特性来让它们充分发挥作用,但他们并没有这么做。 + +Kroah-Hartman [每周发布一个稳定版内核][5],而为了长期的支持,公司们只从中挑选一个,以便于设备制造商能够利用它。但是,Kroah-Hartman 注意到,除了 Google Pixel 之外,大多数 Android 手机并不包含这些额外的安全加固特性,这就意味着,所有的这些手机都是有漏洞的。他说:“人们应该去启用这些加固特性”。 + +Kroah-Hartman 说:“我购买了基于 Linux 内核 4.4 的所有旗舰级手机,去查看它们中哪些确实升级了新特性。结果我发现只有一家公司升级了它们的内核。……我在整个供应链中努力去解决这个问题,因为这是一个很棘手的问题。它涉及许多不同的组织 —— SoC 制造商、运营商等等。关键点是,需要他们把我们辛辛苦苦设计的内核去推送给大家。” + +好消息是,与消费电子产品不一样,像 Red Hat 和 SUSE 这样的大供应商,在企业环境中持续对内核进行更新。使用容器、pod 和虚拟化的现代系统做到这一点更容易了。无需停机就可以毫不费力地更新和重启。事实上,现在来保证系统安全相比过去容易多了。 + +### Meltdown 和 Spectre + +没有任何一个关于安全的讨论能够避免提及 Meltdown 和 Spectre 缺陷。内核社区一直致力于修改新发现的和已查明的安全漏洞。不管怎样,Intel 已经因为这些事情改变了它们的策略。 + +Kroah-Hartman 说:“他们已经重新研究如何处理安全 bug,以及如何与社区合作,因为他们知道他们做错了。内核已经修复了几乎所有大的 Spectre 问题,但是还有一些小问题仍在处理中”。 + +好消息是,这些 Intel 漏洞使得内核社区正在变得更好。Kroah-Hartman 说:“我们需要做更多的测试。对于最新一轮的安全补丁,在它们被发布之前,我们自己花了四个月时间来测试它们,因为我们要防止这个安全问题在全世界扩散。而一旦这些漏洞在真实的世界中被利用,将让我们认识到我们所依赖的基础设施是多么的脆弱,我们多年来一直在做这种测试,这确保了其它人不会遭到这些 bug 的伤害。所以说,Intel 的这些漏洞在某种程度上让内核社区变得更好了”。 + +对安全的日渐关注也为那些有才华的人创造了更多的工作机会。由于安全是个极具吸引力的领域,那些希望在内核空间中有所建树的人,安全将是他们一个很好的起点。 + +Kroah-Hartman 说:“如果有人想从事这方面的工作,我们有大量的公司愿意雇佣他们。我知道一些开始去修复 bug 的人已经被他们雇佣了。” + +你可以在下面链接的视频上查看更多的内容: + +[视频](https://youtu.be/jkGVabyMh1I) + 
+-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/greg-kroah-hartman-explains-how-kernel-community-securing-linux-0 + +作者:[SWAPNIL BHARTIYA][a] +选题:[oska874][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.linux.com/users/arnieswap +[b]:https://github.com/oska874 +[1]:https://www.linux.com/licenses/category/linux-foundation +[2]:https://www.linux.com/licenses/category/creative-commons-zero +[3]:https://www.linux.com/files/images/greg-k-hpng +[4]:https://www.linux.com/files/images/kernel-securityjpg-0 +[5]:https://www.kernel.org/category/releases.html diff --git a/published/201811/20181004 Functional programming in Python- Immutable data structures.md b/published/201811/20181004 Functional programming in Python- Immutable data structures.md new file mode 100644 index 0000000000..4b9bffdc51 --- /dev/null +++ b/published/201811/20181004 Functional programming in Python- Immutable data structures.md @@ -0,0 +1,191 @@ +Python 函数式编程:不可变数据结构 +====== + +> 不可变性可以帮助我们更好地理解我们的代码。下面我将讲述如何在不牺牲性能的条件下来实现它。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +在这个由两篇文章构成的系列中,我将讨论如何将函数式编程方法论中的思想引入至 Python 中,来充分发挥这两个领域的优势。 + +本文(也就是第一篇文章)中,我们将探讨不可变数据结构的优势。第二部分会探讨如何在 `toolz` 库的帮助下,用 Python 实现高层次的函数式编程理念。 + +为什么要用函数式编程?因为变化的东西更难推理。如果你已经确信变化会带来麻烦,那很棒。如果你还没有被说服,在文章结束时,你会明白这一点的。 + +我们从思考正方形和矩形开始。如果我们抛开实现细节,单从接口的角度考虑,正方形是矩形的子类吗? + +子类的定义基于[里氏替换原则][1]。一个子类必须能够完成超类所做的一切。 + +如何为矩形定义接口? + +``` +from zope.interface import Interface + +class IRectangle(Interface): +    def get_length(self): +        """正方形能做到""" +    def get_width(self): +        """正方形能做到""" +    def set_dimensions(self, length, width): +        """啊哦""" +``` + +如果我们这么定义,那正方形就不能成为矩形的子类:如果长度和宽度不等,它就无法对 `set_dimensions` 方法做出响应。 + +另一种方法,是选择将矩形做成不可变对象。 + +``` +class IRectangle(Interface): +    def get_length(self): +        """正方形能做到""" +    def get_width(self): +        """正方形能做到""" +    def with_dimensions(self, length, width): +        """返回一个新矩形""" +``` + +现在,我们可以将正方形视为矩形了。在调用 `with_dimensions` 时,它可以返回一个新的矩形(它不一定是个正方形),但它本身并没有变,依然是一个正方形。 + +这似乎像是个学术问题 —— 直到我们认为正方形和矩形可以在某种意义上看做一个容器的侧面。在理解了这个例子以后,我们会处理更传统的容器,以解决更现实的案例。比如,考虑一下随机存取数组。 + +我们现在有 `ISquare` 和 `IRectangle`,而且 `ISequere` 是 `IRectangle` 的子类。 + +我们希望把矩形放进随机存取数组中: + +``` +class IArrayOfRectangles(Interface): +    def get_element(self, i): +        """返回一个矩形""" +    def set_element(self, i, rectangle): +        """'rectangle' 可以是任意 IRectangle 对象""" +``` + +我们同样希望把正方形放进随机存取数组: + +``` +class IArrayOfSquare(Interface): +    def get_element(self, i): +        """返回一个正方形""" +    def set_element(self, i, square): +        """'square' 可以是任意 ISquare 对象""" +``` + +尽管 `ISquare` 是 `IRectangle` 的子集,但没有任何一个数组可以同时实现 `IArrayOfSquare` 和 `IArrayOfRectangle`. 
+ +为什么不能呢?假设 `bucket` 实现了这两个类的功能。 + +``` +>>> rectangle = make_rectangle(3, 4) +>>> bucket.set_element(0, rectangle) # 这是 IArrayOfRectangle 中的合法操作 +>>> thing = bucket.get_element(0) # IArrayOfSquare 要求 thing 必须是一个正方形 +>>> assert thing.height == thing.width +Traceback (most recent call last): +  File "", line 1, in +AssertionError +``` + +无法同时实现这两类功能,意味着这两个类无法构成继承关系,即使 `ISquare` 是 `IRectangle` 的子类。问题来自 `set_element` 方法:如果我们实现一个只读的数组,那 `IArrayOfSquare` 就可以是 `IArrayOfRectangle` 的子类了。 + +在可变的 `IRectangle` 和可变的 `IArrayOf*` 接口中,可变性都会使得对类型和子类的思考变得更加困难 —— 放弃变换的能力,意味着我们的直觉所希望的类型间关系能够成立了。 + +可变性还会带来作用域方面的影响。当一个共享对象被两个地方的代码改变时,这种问题就会发生。一个经典的例子是两个线程同时改变一个共享变量。不过在单线程程序中,即使在两个相距很远的地方共享一个变量,也是一件简单的事情。从 Python 语言的角度来思考,大多数对象都可以从很多位置来访问:比如在模块全局变量,或在一个堆栈跟踪中,或者以类属性来访问。 + +如果我们无法对共享做出约束,那我们可能要考虑对可变性来进行约束了。 + +这是一个不可变的矩形,它利用了 [attr][2] 库: + +``` +@attr.s(frozen=True) +class Rectange(object): +    length = attr.ib() +    width = attr.ib() +    @classmethod +    def with_dimensions(cls, length, width): +        return cls(length, width) +``` + +这是一个正方形: + +``` +@attr.s(frozen=True) +class Square(object): +    side = attr.ib() +    @classmethod +    def with_dimensions(cls, length, width): +        return Rectangle(length, width) +``` + +使用 `frozen` 参数,我们可以轻易地使 `attrs` 创建的类成为不可变类型。正确实现 `__setitem__` 方法的工作都交给别人完成了,对我们是不可见的。 + +修改对象仍然很容易;但是我们不可能改变它的本质。 + +``` +too_long = Rectangle(100, 4) +reasonable = attr.evolve(too_long, length=10) +``` + +[Pyrsistent][3] 能让我们拥有不可变的容器。 + +``` +# 由整数构成的向量 +a = pyrsistent.v(1, 2, 3) +# 并非由整数构成的向量 +b = a.set(1, "hello") +``` + +尽管 `b` 不是一个由整数构成的向量,但没有什么能够改变 `a` 只由整数构成的性质。 + +如果 `a` 有一百万个元素呢?`b` 会将其中的 999999 个元素复制一遍吗?`Pyrsistent` 具有“大 O”性能保证:所有操作的时间复杂度都是 `O(log n)`. 它还带有一个可选的 C 语言扩展,以在“大 O”性能之上进行提升。 + +修改嵌套对象时,会涉及到“变换器”的概念: + +``` +blog = pyrsistent.m( +    title="My blog", +    links=pyrsistent.v("github", "twitter"), +    posts=pyrsistent.v( +        pyrsistent.m(title="no updates", +                     content="I'm busy"), +        pyrsistent.m(title="still no updates", +                     content="still busy"))) +new_blog = blog.transform(["posts", 1, "content"], +                          "pretty busy") +``` + +`new_blog` 现在将是如下对象的不可变等价物: + +``` +{'links': ['github', 'twitter'], + 'posts': [{'content': "I'm busy", +            'title': 'no updates'}, +           {'content': 'pretty busy', +            'title': 'still no updates'}], + 'title': 'My blog'} +``` + +不过 `blog` 依然不变。这意味着任何拥有旧对象引用的人都没有受到影响:转换只会有局部效果。 + +当共享行为猖獗时,这会很有用。例如,函数的默认参数: + +``` +def silly_sum(a, b, extra=v(1, 2)): +    extra = extra.extend([a, b]) +    return sum(extra) +``` + +在本文中,我们了解了为什么不可变性有助于我们来思考我们的代码,以及如何在不带来过大性能负担的条件下实现它。下一篇,我们将学习如何借助不可变对象来实现强大的程序结构。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures + +作者:[Moshe Zadka][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[StdioA](https://github.com/StdioA) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle +[2]: https://www.attrs.org/en/stable/ +[3]: https://pyrsistent.readthedocs.io/en/latest/ diff --git a/published/201811/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/published/201811/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md new file 
mode 100644 index 0000000000..91718ae292 --- /dev/null +++ b/published/201811/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md @@ -0,0 +1,159 @@ +Terminalizer:一个记录您终端活动并且生成 Gif 图像的工具 +==== + +今天我们要讨论一个广为人知的主题,我们也围绕这个主题写过许多的文章,因此我不会针对这个如何记录终端会话流程给出太多具体的资料。 + +我们可以使用脚本命令来记录 Linux 的终端会话,这也是大家公认的一种办法。不过今天我们将来介绍一个能起到相同作用的工具 —— Terminalizer。 + +这个工具可以帮助我们记录用户的终端活动,以帮助我们从输出的文件中找到有用的信息。 + +### 什么是 Terminlizer + +用户可以用 Terminlizer 记录他们的终端活动并且生成一个 Gif 图像。它是一个允许高度定制的 CLI 工具。用户可以在网络播放器、在线播放器上用链接分享他们记录下的文件。 + +**推荐阅读:** + + - [Script – 一个记录您终端对话的简单工具][1] + - [在 Linux 上自动记录/捕捉所有用户的终端对话][2] + - [Teleconsole – 一个能立即与任何人分享您终端对话的工具][3] + - [tmate – 立即与任何人分享您的终端对话][4] + - [Peek – 在 Linux 里制造一个 Gif 记录器][5] + - [Kgif – 一个能生成 Gif 图片,以记录窗口活动的简单 Shell 脚本][6] +- [Gifine – 在 Ubuntu/Debian 里快速制造一个 Gif 视频][7] + +目前没有发行版拥有官方软件包来安装此实用程序,不过我们可以用 Node.js 来安装它。 + +### 如何在 Linux 上安装 Node.js + +安装 Node.js 有许多种方法。我们在这里将会教您一个常用的方法。 + +在 Ubuntu/LinuxMint 上可以使用 [APT-GET 命令][8] 或者 [APT 命令][9] 来安装 Node.js。 + +``` +$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - +$ sudo apt-get install -y nodejs +``` + +在 Debian 上使用 [APT-GET 命令][8] 或者 [APT 命令][9] 来安装 Node.js。 + +``` +# curl -sL https://deb.nodesource.com/setup_8.x | bash - +# apt-get install -y nodejs +``` + +在 RHEL/CentOS 上,使用 [YUM 命令][10] 来安装。 + +``` +$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash - +$ sudo yum install epel-release +$ sudo yum -y install nodejs +``` + +在 Fedora 上,用 [DNF 命令][11] 来安装 tmux。 + +``` +$ sudo dnf install nodejs +``` + +在 Arch Linux 上,用 [Pacman 命令][12] 来安装 tmux。 + +``` +$ sudo pacman -S nodejs npm +``` + +在 openSUSE 上,用 [Zypper Command][13] 来安装 tmux。 + +``` +$ sudo zypper in nodejs6 +``` + +### 如何安装 Terminalizer + +您已经安装了 Node.js 这个先决软件包,现在是时候在您的系统上安装 Terminalizer 了。简单执行如下的 `npm` 命令即可安装。 + +``` +$ sudo npm install -g terminalizer +``` + +### 如何使用 Terminalizer + +您只需要执行如下的命令,即可使用 Terminalizer 记录您的终端会话活动。您可以敲击 `CTRL+D` 来结束并且保存记录。 + +``` +# terminalizer record 2g-session + +defaultConfigPath +The recording session is started +Press CTRL+D to exit and save the recording +``` + +这将会将您记录的会话保存成一个 YAML 文件,在这个例子里,我的文件名将会是 2g-session-activity.yml。 + +![][15] + +``` +# logout +Successfully Recorded +The recording data is saved into the file: +/home/daygeek/2g-session.yml +You can edit the file and even change the configurations. 
+``` + +![][16] + +### 如何播放记录下来的文件 + +使用以下命令来播放您记录的 YAML 文件。在以下操作中,请确保您已经用了您的文件名来替换 “2g-session”。 + +``` +# terminalizer play 2g-session +``` + +将记录的文件渲染成 Gif 图像。 + +``` +# terminalizer render 2g-session +``` + +注意: 以下的两个命令在此版本尚且不可用,或许在下一版本这两个命令将会付诸使用。 + +如果您想要将记录的文件分享给其他人,您可以将您的文件上传到在线播放器,并且将链接分享给对方。 + +``` +terminalizer share 2g-session +``` + +为记录的文件生成一个网络播放器。 + +``` +# terminalizer generate 2g-session +``` + + -------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[thecyanbird](https://github.com/thecyanbird) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/ +[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/ +[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/ +[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/ +[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/ +[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/ +[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/ +[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif +[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif diff --git a/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md new file mode 100644 index 0000000000..63f74a4816 --- /dev/null +++ b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md @@ -0,0 +1,118 @@ +服务器的 LinuxBoot:告别 UEFI、拥抱开源 +============ + +[LinuxBoot][13] 是私有的 [UEFI][15] 固件的开源 [替代品][14]。它发布于去年,并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年,LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。 + +这个项目最初是由 Ron Minnich 在 2017 年 1 月提出,它是 LinuxBIOS 的创造人,并且在 Google 领导 [coreboot][17] 的工作。 + +Google、Facebook、[Horizon Computing Solutions][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。 + +它的开放性允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的 [运行时环境][22] 和用他们自己的密钥去 [刷入固件][23],而不需要等待供应商的更新。 + +下面是第一次使用 NERF BIOS 去引导 [Ubuntu Xenial][24] 的视频: + +[点击看视频](https://youtu.be/HBkZAN3xkJg) + +我们来讨论一下它与 UEFI 相比在服务器硬件方面的其它优势。 + +### LinuxBoot 超越 UEFI 的优势 + 
+![LinuxBoot vs UEFI](https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/linuxboot-uefi.png?w=800&ssl=1) + +下面是一些 LinuxBoot 超越 UEFI 的主要优势: + +#### 启动速度显著加快 + +它能在 20 秒钟以内完成服务器启动,而 UEFI 需要几分钟的时间。 + +#### 显著的灵活性 + +LinuxBoot 可以用在 Linux 支持的各种设备、文件系统和协议上。 + +#### 更加安全 + +相比 UEFI 而言,LinuxBoot 在设备驱动程序和文件系统方面进行更加严格的检查。 + +我们可能争辩说 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。 + +另一方面,LinuxBoot 有非常小的二进制文件,它仅用了大约几百 KB,相比而言,而 UEFI 的二进制文件有 32 MB。 + +严格来说,LinuxBoot 与 UEFI 不一样,更适合于[可信计算基础][28]。 + +LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动 Windows/非 Linux 内核,但这影响并不大,因为主流的云都是基于 Linux 的服务器。 + +### LinuxBoot 的采用者 + +自 2011 年, [Facebook][32] 发起了[开源计算项目(OCP)][31],它的一些服务器是基于[开源][33]设计的,目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试: + +* Winterfell +* Leopard +* Tioga Pass + +更多 [OCP][34] 硬件在[这里][35]有一个简短的描述。OCP 基金会通过[开源系统固件][36]运行一个专门的固件项目。 + +支持 LinuxBoot 的其它一些设备有: + +* [QEMU][9] 仿真的 [Q35][10] 系统 +* [Intel S2600wf][11] +* [Dell R630][12] + +上个月底(2018 年 9 月 24 日),[Equus 计算解决方案][37] [宣布][38] 发行它的 [白盒开放式™][39] M2660 和 M2760 服务器,作为它们的定制的、成本优化的、开放硬件服务器和存储平台的一部分。它们都支持 LinuxBoot 灵活定制服务器的 BIOS,以提升安全性和设计一个非常快的纯净的引导体验。 + +### 你认为 LinuxBoot 怎么样? + +LinuxBoot 在 [GitHub][40] 上有很丰富的文档。你喜欢它与 UEFI 不同的特性吗?由于 LinuxBoot 的开放式开发和未来,你愿意使用 LinuxBoot 而不是 UEFI 去启动你的服务器吗?请在下面的评论区告诉我们吧。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linuxboot-uefi/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[oska874][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/avimanyu/ +[b]:https://github.com/oska874 +[1]:https://itsfoss.com/linuxboot-uefi/# +[2]:https://itsfoss.com/linuxboot-uefi/# +[3]:https://itsfoss.com/linuxboot-uefi/# +[4]:https://itsfoss.com/linuxboot-uefi/# +[5]:https://itsfoss.com/linuxboot-uefi/# +[6]:https://itsfoss.com/linuxboot-uefi/# +[7]:https://itsfoss.com/author/avimanyu/ +[8]:https://itsfoss.com/linuxboot-uefi/#comments +[9]:https://en.wikipedia.org/wiki/QEMU +[10]:https://wiki.qemu.org/Features/Q35 +[11]:https://trmm.net/S2600 +[12]:https://trmm.net/NERF#Installing_on_a_Dell_R630 +[13]:https://www.linuxboot.org/ +[14]:https://www.phoronix.com/scan.php?page=news_item&px=LinuxBoot-OSFC-2018-State +[15]:https://itsfoss.com/check-uefi-or-bios/ +[16]:https://www.linuxfoundation.org/blog/2018/01/system-startup-gets-a-boost-with-new-linuxboot-project/ +[17]:https://en.wikipedia.org/wiki/Coreboot +[18]:http://www.horizon-computing.com/ +[19]:https://www.twosigma.com/ +[20]:https://trmm.net/LinuxBoot_34c3 +[21]:https://trmm.net/NERF +[22]:https://trmm.net/LinuxBoot_34c3#Runtimes +[23]:http://www.tech-faq.com/flashing-firmware.html +[24]:https://itsfoss.com/features-ubuntu-1604/ +[25]:https://www.tianocore.org/ +[26]:https://media.ccc.de/v/34c3-9056-bringing_linux_back_to_server_boot_roms_with_nerf_and_heads +[27]:https://medium.com/@bhumikagoyal/linux-kernel-development-cycle-52b4c55be06e +[28]:https://en.wikipedia.org/wiki/Trusted_computing_base +[29]:https://itsfoss.com/adobe-alternatives-linux/ +[30]:https://en.wikipedia.org/wiki/Kexec +[31]:https://en.wikipedia.org/wiki/Open_Compute_Project +[32]:https://github.com/facebook +[33]:https://github.com/opencomputeproject +[34]:https://www.networkworld.com/article/3266293/lan-wan/what-is-the-open-compute-project.html +[35]:http://hyperscaleit.com/ocp-server-hardware/ 
+[36]:https://www.opencompute.org/projects/open-system-firmware +[37]:https://www.equuscs.com/ +[38]:http://www.dcvelocity.com/products/Software_-_Systems/20180924-equus-compute-solutions-introduces-whitebox-open-m2660-and-m2760-servers/ +[39]:https://www.equuscs.com/servers/whitebox-open/ +[40]:https://github.com/linuxboot/linuxboot diff --git a/translated/talk/20181008 3 areas to drive DevOps change.md b/published/201811/20181008 3 areas to drive DevOps change.md similarity index 53% rename from translated/talk/20181008 3 areas to drive DevOps change.md rename to published/201811/20181008 3 areas to drive DevOps change.md index 2edb255af5..2efd0fc6c5 100644 --- a/translated/talk/20181008 3 areas to drive DevOps change.md +++ b/published/201811/20181008 3 areas to drive DevOps change.md @@ -1,12 +1,13 @@ 推动 DevOps 变革的三个方面 ====== -推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。 + +> 推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ) 避免痛苦是一种强大的动力。一些研究表明,[植物也会通过遭受疼痛的过程][1]以采取措施来保护自己。我们人类有时也会刻意让自己受苦——在剧烈运动之后,身体可能会发生酸痛,但我们仍然坚持运动。那是因为当人认为整个过程利大于弊时,几乎可以忍受任何事情。 -推动大规模的组织变革得过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。 +推动大规模的组织变革的过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。 我也曾经关注过一个团队耗费大量时间优化技术流程的过程,在这个过程中,团队逐渐将流程进行自动化改造,并最终获得了成功。 @@ -14,60 +15,64 @@ 图片来源:Lee Eason. CC BY-SA 4.0 -这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后,60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,净推荐值Net Promoter Score(NPS)也提高了,而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投资可以在业务成果上明显地体现出来。 +这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后,60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 人/天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,净推荐值Net Promoter Score(NPS)也提高了,而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投入可以在业务成果上明显地体现出来。 -而 DevOps 主导者在推动变革是必须关注这三个方面:团队管理,团队文化和团队活力。 +而 DevOps 主导者在推动变革时必须关注这三个方面:团队管理,团队文化和团队活力。 ### 团队管理 +最重要的是,改进对技术流程的投入可以转化为更好的业务成果。 + 组织架构越大,业务领导与一线员工之间的距离就会越大,当然发生误解的可能性也会越大。而且各种技术工具和实际应用都在以日新月异的速度变化,这就导致业务领导几乎不可能对 DevOps 或敏捷开发的转型方向有一个亲身的了解。 DevOps 主导者必须和管理层密切合作,在进行决策的时候给出相关的意见,以帮助他们做出正确的决策。 -公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。当管理层发现你在和软件团队执行自动化部署失败时,就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意作出进一步的改变。 +公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。假设你正在帮助一个软件开发团队实现自动化部署,当管理层得知某次部署失败时(这种情况是有的),就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意做出进一步的改变。 -如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,都会对你和管理层之间的信任造成难以消除的影响。所以,最好事先和管理层之间在各方面协调好,这会让你在后续的工作中避免很多麻烦。 +如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,重建管理层的信任并得到他们的支持比事先对他们进行教育需要更长的时间。所以,最好事先和管理层在各方面协调好,这会让你在后续的工作中避免很多麻烦。 对于和管理层之间的协调,这里有两条建议: - * 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。 - * 二是**将管理层的重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的停机是造成客户流失的主要原因,那么就可以让团队对故障的平均检测时间Mean Time To Detection(MTTD)和平均解决时间Mean Time To Resolution(MTTR)实行重点优化。你可以使用这些关键指标来量化团队的工作成果,而管理层对此也可以有一个直观的了解。 - - +* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。 +* 二是**将管理层重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的服务宕机是造成客户流失的主要原因,那么就可以让团队对故障的平均排查时间Mean Time To Detection(MTTD)和平均解决时间Mean Time To 
Resolution(MTTR)实行重点优化。你可以使用这些关键指标来量化团队的工作成果,而管理层对此也可以有一个直观的了解。 ### 团队文化 DevOps 是一种专注于持续改进代码、构建、部署和操作流程的文化,而团队文化代表了团队的价值观和行为。从本质上说,团队文化是要塑造团队成员的行为方式,而这并不是一件容易的事。 -我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。 +我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。如果这些你都没兴趣,说明你不是那个推动公司变革的人。如果你想成为那个人,那就开始学习吧! -理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。也不能将这些价值观强加到团队成员身上,只需要让团队成员在目前的硬件条件下力所能及地做到最好就可以了 +从本质上说,改变一个人真不是件容易的事。 -同时需要向团队成员阐明,公司正在发生组织上的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不通过共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。 +理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。但不能将这些价值观强加到团队成员身上,只需要让团队成员在现有条件下力所能及地做到最好就可以了。 -如果这种状况持续多年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。 +同时需要向团队成员阐明,公司正在发生组织和团队目标的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。 -你必须强调团队的价值观。每当团队按照价值观取得了一定的工作进展,都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、继续学习,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,并逐渐切合团队的价值观。 +如果这种状况持续几年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。如果一个选择能让团队运作更好,另一个选择只是短期来看成本便宜,那你应该选择前者。 + +你必须反复强调团队的价值观。每当团队取得了一定的工作进展(即使探索创新时出现一些小的失误),都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、吸取教训,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,不再顾虑为切合团队的价值观而做出改变。 ### 团队活力 -你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统最好,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。 +你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。 -我们一直都认为这就是软件开发的本质。但是如果我们不作出改变,这种循环就会一直保持下去。 +我们一直都认为这就是软件开发的自带属性。但是如果我们不作出改变,这种循环就会一直持续下去。 -熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵增加速,这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。 +熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵加速增长,这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。 -我们来看一个产品团队重构遗留代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。 +我们来看一个产品团队重构历史代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。 -产品团队经常会犯一个致命的错误,就是认为 IT 部门是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。 +产品团队经常会犯一个致命的错误,就是认为 IT 是消耗资源的部门,是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。 -实际上,产品团队在迁移到云时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构遗留的代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。 +实际上,产品团队在向云迁移的时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构历史代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。 -在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekita.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要,你需要不断学习,持续关注团队的弱项,因为它们会影响团队的交付能力,而修补这些弱项往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 的年轻人都认为个人发展机会是公司文化[最重要的的一环][10]。 +在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekita.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要,你需要不断学习,持续关注团队的短板,因为它们会影响团队的交付能力,而弥补这些短板往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 
的年轻人都认为个人发展机会是公司文化[最重要的的一环][10]。 ### 效果就是最好的证明 -DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更快乐。 +DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更满意。 + +Lee Eason 将于 10 月 21-23 日在北卡罗来纳州 Raleigh 举行的 [All Things Open][12] 上讲述 [DevOps 转型的故事][11]。 免责声明:本文中的内容仅为 Lee Eason 的个人立场,不代表 Ipreo 或 IHS Markit。 @@ -78,7 +83,7 @@ via: https://opensource.com/article/18/10/tales-devops-transformation 作者:[Lee Eason][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[pityonline](https://github.com/pityonline) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -96,4 +101,3 @@ via: https://opensource.com/article/18/10/tales-devops-transformation [10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook [11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/ [12]: https://allthingsopen.org/ - diff --git a/published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md new file mode 100644 index 0000000000..f8b6e2b5d9 --- /dev/null +++ b/published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md @@ -0,0 +1,102 @@ +KeeWeb:一个开源且跨平台的密码管理工具 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/keeweb-720x340.png) + +如果你长时间使用互联网,那很可能在很多网站上都有很多帐户。所有这些帐户都必须有密码,而且必须记住所有的密码,或者把它们写下来。在纸上写下密码可能不安全,如果有多个密码,记住它们实际上是不可能的。这就是密码管理工具在过去几年中大受欢迎的原因。密码管理工具就像一个中央存储库,你可以在其中存储所有帐户的所有密码,并为它设置一个主密码。使用这种方法,你唯一需要记住的只有主密码。 + +**KeePass** 就是一个这样的开源密码管理工具,它有一个官方客户端,但功能非常简单。也有许多 PC 端和手机端的其他密码管理工具,并且与 KeePass 存储加密密码的文件格式兼容。其中一个就是 **KeeWeb**。 + +KeeWeb 是一个开源、跨平台的密码管理工具,具有云同步,键盘快捷键和插件等功能。KeeWeb使用 Electron 框架,这意味着它可以在 Windows、Linux 和 Mac OS 上运行。 + +### KeeWeb 的使用 + +有两种方式可以使用 KeeWeb。第一种无需安装,直接在网页上使用,第二中就是在本地系统中安装 KeeWeb 客户端。 + +#### 在网页上使用 KeeWeb + +如果不想在系统中安装应用,可以去 [https://app.keeweb.info/][1] 使用KeeWeb。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-webapp.png) + +网页端具有桌面客户端的所有功能,当然也需要联网才能进行使用。 + +#### 在计算机中安装 KeeWeb + +如果喜欢客户端的舒适性和离线可用性,也可以将其安装在系统中。 + +如果使用 Ubuntu/Debian,你可以去 [发布页][2] 下载 KeeWeb 最新的 .deb 文件,然后通过下面的命令进行安装: + +``` +$ sudo dpkg -i KeeWeb-1.6.3.linux.x64.deb +``` + +如果用的是 Arch,在 [AUR][3] 上也有 KeeWeb,可以使用任何 AUR 助手进行安装,例如 [Yay][4]: + +``` +$ yay -S keeweb +``` + +安装后,从菜单中或应用程序启动器启动 KeeWeb。默认界面如下: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-desktop-client.png) + +### 总体布局 + +KeeWeb 界面主要显示所有密码的列表,在左侧展示所有标签。单击标签将对密码进行筛选,只显示带有那个标签的密码。在右侧,显示所选帐户的所有字段。你可以设置用户名、密码、网址,或者添加自定义的备注。你甚至可以创建自己的字段并将其标记为安全字段,这在存储信用卡信息等内容时非常有用。你只需单击即可复制密码。 KeeWeb 还显示账户的创建和修改日期。已删除的密码会保留在回收站中,可以在其中还原或永久删除。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-general-layout.png) + +### KeeWeb 功能 + +#### 云同步 + +KeeWeb 的主要功能之一是支持各种远程位置和云服务。除了加载本地文件,你可以从以下位置打开文件: + +1. WebDAV Servers +2. Google Drive +3. Dropbox +4. 
OneDrive + +这意味着如果你使用多台计算机,就可以在它们之间同步密码文件,因此不必担心某台设备无法访问所有密码。 + +#### 密码生成器 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-password-generator.png) + +除了对密码进行加密之外,为每个帐户创建新的强密码也很重要。这意味着,如果你的某个帐户遭到入侵,攻击者将无法使用相同的密码进入其他帐户。 + +为此,KeeWeb 有一个内置密码生成器,可以生成特定长度、包含指定字符的自定义密码。 + +#### 插件 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-plugins.png) + +你可以使用插件扩展 KeeWeb 的功能。其中一些插件用于更改界面语言,而其他插件则添加新功能,例如访问 https://haveibeenpwned.com 以查看密码是否暴露。 + +#### 本地备份 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/KeeWeb-backup.png) + +无论密码文件存储在何处,你都应该在计算机上保留一份本地备份。幸运的是,KeeWeb 内置了这个功能。你可以备份到特定路径,并将其设置为定期备份,或者只在文件更改时进行备份。 + +### 结论 + +我实际使用 KeeWeb 已经好几年了,它完全改变了我存储密码的方式。云同步是我长期使用 KeeWeb 的主要功能,这样我不必担心在多个设备上保存多个不同步的文件。如果你想要一个具有云同步功能的密码管理工具,KeeWeb 就是你应该关注的东西。 + +------ + +via: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ + +作者:[EDITOR][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[1]: https://app.keeweb.info/ +[2]: https://github.com/keeweb/keeweb/releases/latest +[3]: https://aur.archlinux.org/packages/keeweb/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md similarity index 56% rename from sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md rename to published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md index 8f3a5a38c5..c0859f1dc1 100644 --- a/sources/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md +++ b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -1,9 +1,9 @@ -在 Fedora 上使用 Steam play 和 Proton 来玩 Windows 游戏 +在 Fedora 上使用 Steam play 和 Proton 来玩 Windows 游戏 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg) -几周前,Steam 宣布要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 +之前,Steam [宣布][1]要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 据 Steam 网站称,测试版本中有以下这些新功能: @@ -13,29 +13,27 @@ * 改进了对游戏控制器的支持,游戏自动识别所有 Steam 支持的控制器,比起游戏的原始版本,能够获得更多开箱即用的控制器兼容性。 * 和 vanilla WINE 比起来,游戏的多线程性能得到了极大的提高。 - - ### 安装 如果你有兴趣,想尝试一下 Steam 和 Proton。请按照下面这些简单的步骤进行操作。(请注意,如果你已经安装了最新版本的 Steam,可以忽略启用 Steam 测试版这个第一步。在这种情况下,你不再需要通过 Steam 测试版来使用 Proton。) -打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持22个游戏。 +打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持 22 个游戏。 ![][3] -现在点击客户端顶部的 Steam 选项,这会显示一个下拉菜单。然后选择设置。 +现在点击客户端顶部的 “Steam” 选项,这会显示一个下拉菜单。然后选择“设置”。 ![][4] -现在弹出了设置窗口,选择账户选项,并在 Beta participation 旁边,点击更改。 +现在弹出了设置窗口,选择“账户”选项,并在 “参与 Beta 测试” 旁边,点击“更改”。 ![][5] -现在将 None 更改为 Steam Beta Update。 +现在将 “None” 更改为 “Steam Beta Update”。 ![][6] -点击确定,然后系统会提示你重新启动。 +点击“确定”,然后系统会提示你重新启动。 ![][7] @@ -43,11 +41,11 @@ ![][8] -在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定有为提供支持的游戏使用 Stream Play 这个复选框,让所有的游戏都使用 Steam Play 进行运行,而不是 steam 中游戏特定的选项。兼容性工具应该是 Proton。 +在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定勾选了“为提供支持的游戏使用 Stream Play” 、“让所有的游戏都使用 Steam Play 运行”,“使用这个工具替代 Steam 中游戏特定的选项”。这个兼容性工具应该就是 Proton。 ![][9] -Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Steam 账户,你的 Linux 
的游戏库就能得到扩展了。 +Steam 客户端会要求你重新启动,照做,然后重新登录你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。 ![][10] @@ -69,7 +67,7 @@ Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Stea ![][16] -一些游戏可能会受到 Proton 测试性质的影响,在下面这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 +一些游戏可能会受到 Proton 测试性质的影响,在这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 -------------------------------------------------------------------------------- @@ -79,25 +77,25 @@ via: https://fedoramagazine.org/play-windows-games-steam-play-proton/ 作者:[Francisco J. Vergara Torres][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/patxi/ [1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 [2]: https://fedoramagazine.org/third-party-repositories-fedora/ -[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png -[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png -[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png -[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png -[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png -[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png -[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png -[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png -[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png -[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png -[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png -[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png -[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png -[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png +[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-768x505.png +[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-768x432.png +[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-768x503.png +[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6.png +[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7.png +[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10.png +[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-768x503.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-768x501.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-768x498.png +[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-768x501.png +[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-768x500.png +[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-768x432.png +[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-768x432.png [17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831 diff --git a/published/201811/20181010 5 alerting and visualization tools for sysadmins.md b/published/201811/20181010 5 
alerting and visualization tools for sysadmins.md new file mode 100644 index 0000000000..2306e197cf --- /dev/null +++ b/published/201811/20181010 5 alerting and visualization tools for sysadmins.md @@ -0,0 +1,163 @@ +5 个适合系统管理员使用的告警可视化工具 +====== + +> 这些开源的工具能够通过输出帮助用户了解系统的运行状况,并对可能发生的潜在问题作出告警。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-) + +你大概已经知道(或猜到)告警可视化alerting and visualization工具是用来做什么的了。下面我们就要来说一下,为什么要讨论这样的工具,甚至某些系统专门将可视化作为特有的功能。 + +可观察性Observability的概念来自控制理论control theory,这个概念描述了我们通过对系统的输入和输出来了解其的能力。本文将重点介绍具有可观察性的输出组件。 + +告警可视化工具可以对其它系统的输出进行分析,进而对输出的信息进行结构化表示。告警实际上是对系统异常状态的描述,而可视化则是让用户能够直观理解的结构化表示。 + +### 常见的可视化告警 + +#### 告警 + +首先要明确一下告警alert的含义。在人员无法响应告警内容情况下,不应该发送告警 —— 包括那些发给多个人但只有其中少数人可以响应的告警,以及系统中的每个异常都触发的告警。因为这样会产生告警疲劳,告警接收者也往往会对这些过多的告警采取忽视的态度 —— 直到系统恶化到以少见的方式告警。 + +例如,如果管理员每天都会收到告警系统发来的数百封告警邮件,他就很容易会忽略告警系统的所有邮件。除非他真的看到问题发生,或者受到了客户或上级的询问时,管理员才会重新重视告警信息。在这种情况下,告警已经失去了原有的意义和用途。 + +告警不是一个持续的信息流或者状态更新。告警的目的在于暴露系统无法自动恢复的问题,而且告警应该只发送给最有可能解决问题的人员。超出这个定义的内容都不应该作为告警,否则将会对实际工作造成不良的影响。 + +不同的告警体系都会有各自的告警类型,因此不能用优先级(P1-P5)或者诸如“信息”、“警告”、“严重”之类的字眼来一概而论,下面我会介绍一些新兴的复杂系统的事件响应中出现的通用分类方式。 + +刚才我提到了一个“信息”这个告警类型,但实际上告警不应该是一个信息,尽管有些人可能会不这样认为。但我觉得如果一个告警没有发送给任何一个人,它就不应该是警报,而只是一些在许多系统中被视为警报的数据点,代表了一些应该知晓但不需要响应的事件。它更应该作为告警可视化工具的一部分,而不是会导致触发告警的事件。《[实用监控][1]》是这个领域的必读书籍,其作者 Mike Julian 在书中就介绍了他自己关于告警的看法。 + +而非信息警报则代表告警需要被响应以及需要相关的操作。我将这些告警大致分为内部故障和外部故障两种类型,而对于大多数公司来说,通常会有两个以上的级别来确定响应告警的优先级。系统性能下降就是一种故障,因为其对用户的影响通常都是未知的。 + +内部故障比外部故障的优先级低,但也需要快速响应。内部故障通常包括公司员工使用的内部系统或仅对公司员工可见的应用故障。 + +外部故障则包括任何马上会产生业务影响的系统故障,但不包括影响系统更新的故障。外部故障一般包括客户所面临的应用故障、数据库故障和导致系统可用性或一致性失效的网络故障,这些都会影响用户的正常使用。对于不直接影响用户的依赖组件故障也属于外部故障,随着应用程序的不断运行,一旦依赖组件发生故障,系统的性能也会受到波及。这种情况对于使用某些外部服务或数据源的系统来说很常见,尽管这些外部服务或数据源对于可能不涉及到系统的主要功能,但是当系统在处理相关依赖组件的错误时可能会出现较明显的延迟。 + +### 可视化 + +可视化的种类有很多,我就不一一赘述了。这是一个有趣的研究领域,在我这些年的数据分析经历当中,学习和应用可视化方面的知识可以说是相当有挑战性。我们需要将复杂的系统输出通过直观的方式来向他人展示,才能有效地把信息传播出去。[Google Charts][2] 和 [Tableau][3] 都提供了很多可视化方面的工具。下面将会介绍一些最常见的可视化创新解决方案。 + +#### 折线图 + +折线图可能是最常见的可视化方式了,它可以让用户很直观地按照时间维度了解系统的情况。系统中每个单一或聚合的指标都会以一条折线在图表中体现。但当同一个图表中同时存在多条折线时,就可能会对阅读有所影响(如下图所示),所以大多数情况下都可以选择仅查看其中的少数几条折线,而不是让所有折线同时显示。如果某个指标的数值产生了大于正常范围的波动,就会很容易发现。例如下图中异常的紫线、黄线、浅蓝线。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png) + +折线图的另一个用法是可以将多条折线堆叠起来以显示它们之间的关系。例如对于通过折线图反映服务器的请求数量,可以单独看到每台服务器上的请求,也可以聚合在一起看。这就可以在同一个图表中灵活查看整个系统以及每个实例的情况了。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png) + +#### 热力图 + +另一种常见的可视化方式是热力图。热力图与条形图比较类似,还可以在条形图的基础上显示某部分在整体中占比的变化情况。例如在查看网络请求延时的时候,就可以使用热力图快速查看到所有网络请求的总体趋势和分布情况,另外,它可以使用不同颜色来表示不同部分的数值。 + +在以下这个热力图中,通过竖直方向上每个时间段的色块数量分布,可以清楚地看到大部分数据集中在整个范围的中心位置。我们还可以发现,大多数时间段的色块分布都是比较宽松的,而 14:00 到 15:00 这一段则分布得很密集,这样的分布有可能意味着一种不健康的状态。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png) + +#### 仪表图 + +还有一种常见的可视化方式是仪表图,用户可以通过仪表图快速了解单个指标。仪表一般用于单个指标的显示,例如车速表代表汽车的行驶速度、油量表代表油箱中的汽油量等等。大多数的仪表图都有一个共通点,就是会划分出所示指标的对应状态。如下图所示,绿色表示正常的状态,橙色表示不良的状态,而红色则表示极差的状态。下图中间一行模拟了真实仪表的显示情况。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png) + +上面图表中,除了常规仪表样式的显示方式之外,还有较为直接的数据显示方式,配合相同的配色方案,一眼就可以看出各个指标所处的状态,这一点与和仪表的特点类似。所以,最下面一行可能是仪表图的最佳显示方式,用户不需要仔细阅读,就可以大致了解各个指标的不同状态。这种类型的可视化是我最常用的类型,在数秒钟之间,我就可以全面地总览系统各方面地运行情况。 + +#### 火焰图 + +由 [Netflix 的 Brendan Gregg][4] 在 2011 年开始使用的火焰图是一种较为少见地可视化方式。它不像仪表图那样可以从图表中快速得到关键信息,通常只会在需要解决某个应用的问题的时候才会用到这种图表。火焰图主要用于 CPU、内存和相关帧方面的表示,X 轴按字母顺序将帧一一列出,而 Y 
轴则表示堆栈的深度。图中每个矩形都是一个标明了调用的函数的堆栈帧。矩形越宽,就表示它在堆栈中出现越频繁。在分析系统性能问题的时候,火焰图能够起到很大的作用,大家不妨尝试一下。 + +![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png) + +### 工具的选择 + +在告警工具方面,有几个商用的工具相当不错。但由于这是一篇介绍开源技术的文章,我只会介绍那些已经被广泛使用的免费工具。希望你也能够为这些工具贡献你自己的代码,让它们更加完善。 + +### 告警工具 + +#### Bosun + +如果你的电脑出现问题,得多亏 Stack Exchange 你才能在网上查到解决办法。Stack Exchange 以众包问答的模式运营着很多不同类型的网站。其中就有广受开发者欢迎的 [Stack Overflow][5],以及运维方面有名的 [Super User][6]。除此以外,从育儿经验到科幻小说、从哲学讨论到单车论坛,Stack Exchange 都有涉猎。 + +Stack Exchange 开源了它的告警管理系统 [Bosun][7],同时也发布了 Prometheus 及其 [AlertManager][8] 系统。这两个系统有共通点。Bosun 和 Prometheus 一样使用 Golang 开发,但 Bosun 比 Prometheus 更为强大,因为它可以使用指标聚合metrics aggregation以外的方式与系统交互。Bosun 还可以从日志和事件收集系统中提取数据,并且支持 Graphite、InfluxDB、OpenTSDB 和 Elasticsearch。 + +Bosun 的架构包括一个单一的服务器的二进制文件,一个诸如 OpenTSDB 的后端、Redis 以及 [scollector 代理][9]。 scollector 代理会自动检测主机上正在运行的服务,并反馈这些进程和其它的系统资源的情况。这些数据将发送到后端。随后 Bosun 的二进制服务文件会向后端发起查询,确定是否需要触发告警。也可以通过 [Grafana][10] 这些工具通过一个通用接口查询 Bosun 的底层后端。而 Redis 则用于存储 Bosun 的状态信息和元数据。 + +Bosun 有一个非常巧妙的功能,就是可以根据历史数据来测试告警。这是我几年前在使用 Prometheus 的时候就非常需要的功能,当时我有一个异常的数据需要产生告警,但没有一个可以用于测试的简便方法。为了确保告警能够正常触发,我不得不造出对应的数据来进行测试。而 Bosun 让这个步骤的耗时大大缩短。 + +Bosun 更是涵盖了所有常用过的功能,包括简单的图形化表示和告警的创建。它还带有强大的用于编写告警规则的表达式语言。但 Bosun 默认只带有电子邮件通知配置和 HTTP 通知配置,因此如果需要连接到 Slack 或其它工具,就需要对配置作出更大程度的定制化([其文档中有][11])。类似于 Prometheus,Bosun 还可以使用模板通知,你可以使用 HTML 和 CSS 来创建你所需要的电子邮件通知。 + +#### Cabot + +[Cabot][12] 由 [Arachnys][13] 公司开发。你或许对 Arachnys 公司并不了解,但它很有影响力:Arachnys 公司构建了一个基于云的先进解决方案,用于防范金融犯罪。在之前的公司时,我也曾经参与过类似“[了解你的客户][14](KYC)”的工作。大多数公司都认为与恐怖组织产生联系会造成相当不好的影响,因为恐怖组织可能会利用自己的系统来筹集资金。而这些解决方案将有助于防范欺诈类犯罪,尽管这类犯罪情节相对较轻,但仍然也会对机构产生风险。 + +Arachnys 公司为什么要开发 Cabot 呢?其实只是因为 Arachnys 的开发人员对 [Nagios][15] 不太熟悉。Cabot 的出现对很多人来说都是一个好消息,它基于 Django 和 Bootstrap 开发,因此如果想对这个项目做出自己的贡献,门槛并不高。(另外值得一提的是,Cabot 这个名字来源于开发者的狗。) + +与 Bosun 类似,Cabot 也不对数据进行收集,而是使用监控对象的 API 提供的数据。因此,Cabot 告警的模式是拉取而不是推送。它通过访问每个监控对象的 API,根据特定的指标检索所需的数据,然后将告警数据使用 Redis 缓存,进而持久化存储到 Postgres 数据库。 + +Cabot 的一个较为少见的特点是,它原生支持 [Graphite][16],同时也支持 [Jenkins][17]。Jenkins 在这里被视为一个集中式的定时任务,它会以对待故障的方式去对待构建失败的状况。构建失败当然没有系统故障那么紧急,但一旦出现构建失败,还是需要团队采取措施去处理,毕竟并不是每个人在收到构建失败的电子邮件时都会亲自去检查 Jenkins。 + +Cabot 另一个有趣的功能是它可以接入 Google 日历安排值班人员,这个称为 Rota 的功能用处很大,希望其它告警系统也能加入类似的功能。Cabot 目前仅支持安排主备联系人,但还有继续改进的空间。它自己的文档也提到,如果需要全面的功能,更应该考虑付费的解决方案。 + +#### StatsAgg + +[Pearson][19] 作为一家开发了 [StatsAgg][18] 告警平台的出版公司,这是极为罕见的,当然也很值得敬佩。除此以外,Pearson 还运营着另外几个网站以及和 [O'Reilly Media][20] 合资的企业。但我仍然会将它视为出版教学书籍的公司。 + +StatsAgg 除了是一个告警平台,还是一个指标聚合平台,甚至也有点类似其它系统的代理。StatsAgg 支持通过 Graphite、StatsD、InfluxDB 和 OpenTSDB 输入数据,也支持将其转发到各种平台。但随着中心服务的负载不断增加,风险也不断增大。尽管如此,如果 StatsAgg 的基础架构足够强壮,即使后端存储平台出现故障,也不会对它产生告警的过程造成影响。 + +StatsAgg 是用 Java 开发的,为了尽可能降低复杂性,它仅包括主服务和一个 UI。StatsAgg 支持基于正则表达式匹配来发送告警,而且它更注重于服务方面的告警,而不是服务器基础告警。我认为它填补了开源监控工具方面的空白,而这正式它自己的目标。 + +### 可视化工具 + +#### Grafana + +[Grafana][10] 的知名度很高,它也被广泛采用。每当我需要用到数据面板的时候,我总是会想到它,因为它比我使用过的任何一款类似的产品都要好。Grafana 由 Torkel Ödegaard 开发的,像 Cabot 一样,也是在圣诞节期间开发的,并在 2014 年 1 月发布。在短短几年之间,它已经有了长足的发展。Grafana 基于 Kibana 开发,Torkel 开启了新的分支并将其命名为 Grafana。 + +Grafana 着重体现了实用性以及数据呈现的美观性。它天生就可以从 Graphite、Elasticsearch、OpenTSDB、Prometheus 和 InfluxDB 收集数据。此外有一个 Grafana 商用版插件可以从更多数据源获取数据,但是其他数据源插件也并非没有开源版本,Grafana 的插件生态系统已经提供了各种数据源。 + +Grafana 能做什么呢?Grafana 提供了一个中心化的了解系统的方式。它通过 web 来展示数据,任何人都有机会访问到相关信息,当然也可以使用身份验证来对访问进行限制。Grafana 使用各种可视化方式来提供对系统一目了然的了解。Grafana 还支持不同类型的可视化方式,包括集成告警可视化的功能。 + +现在你可以更直观地设置告警了。通过 Grafana,可以查看图表,还可以查看由于系统性能下降而触发告警的位置,单击要触发报警的位置,并告诉 Grafana 将告警发送何处。这是一个对告警平台非常强大的补充。告警平台不一定会因此而被取代,但告警系统一定会由此得到更多启发和发展。 + +Grafana 还引入了很多团队协作的功能。不同用户之间能够共享数据面板,你不再需要为 
[Kubernetes][21] 集群创建独立的数据面板,因为由 Kubernetes 开发者和 Grafana 开发者共同维护的一些数据面板已经可用了。 + +团队协作过程中一个重要的功能是注释。注释功能允许用户将上下文添加到图表当中,其他用户就可以通过上下文更直观地理解图表。当团队成员在处理某个事件,并且需要沟通和理解时,这个功能就十分重要了。将所有相关信息都放在需要的位置,可以让整个团队中快速达成共识。在团队需要调查故障原因和定位事件责任时,这个功能就可以发挥作用了。 + +#### Vizceral + +[Vizceral][22] 由 Netflix 开发,用于在故障发生时更有效地了解流量的情况。Grafana 是一种通用性更强的工具,而 Vizceral 则专用于某些领域。 尽管 Netflix 表示已经不再在内部使用 Vizceral,也不再主动对其展开维护,但 Vizceral 仍然会定期更新。我在这里介绍这个工具,主要是为了介绍它的的可视化机制,以及如何利用它来协助解决问题。你可以在样例环境中用它来更好地掌握这一类系统的特性。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins + +作者:[Dan Barker][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://www.practicalmonitoring.com/ +[2]: https://developers.google.com/chart/interactive/docs/gallery +[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401 +[4]: http://www.brendangregg.com/flamegraphs.html +[5]: https://stackoverflow.com/ +[6]: https://superuser.com/ +[7]: http://bosun.org/ +[8]: https://prometheus.io/docs/alerting/alertmanager/ +[9]: https://bosun.org/scollector/ +[10]: https://grafana.com/ +[11]: https://bosun.org/notifications +[12]: https://cabotapp.com/ +[13]: https://www.arachnys.com/ +[14]: https://en.wikipedia.org/wiki/Know_your_customer +[15]: https://www.nagios.org/ +[16]: https://graphiteapp.org/ +[17]: https://jenkins.io/ +[18]: https://github.com/PearsonEducation/StatsAgg +[19]: https://www.pearson.com/us/ +[20]: https://www.oreilly.com/ +[21]: https://opensource.com/resources/what-is-kubernetes +[22]: https://github.com/Netflix/vizceral + diff --git a/translated/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/published/201811/20181010 An introduction to using tcpdump at the Linux command line.md similarity index 81% rename from translated/tech/20181010 An introduction to using tcpdump at the Linux command line.md rename to published/201811/20181010 An introduction to using tcpdump at the Linux command line.md index 9926a2279c..8744ef5162 100644 --- a/translated/tech/20181010 An introduction to using tcpdump at the Linux command line.md +++ b/published/201811/20181010 An introduction to using tcpdump at the Linux command line.md @@ -1,41 +1,41 @@ - Linux 命令行中使用 tcpdump 抓包 +在 Linux 命令行中使用 tcpdump 抓包 ====== -Tcpdump 是一款灵活、功能强大的抓包工具,能有效地帮助排查网络故障问题。 +> `tcpdump` 是一款灵活、功能强大的抓包工具,能有效地帮助排查网络故障问题。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE) -根据我作为管理员的经验,在网络连接中经常遇到十分难以排查的故障问题。对于这类情况,tcpdump 便能派上用场。 +以我作为管理员的经验,在网络连接中经常遇到十分难以排查的故障问题。对于这类情况,`tcpdump` 便能派上用场。 -Tcpdump 是一个命令行实用工具,允许你抓取和分析经过系统的流量数据包。它通常被用作于网络故障分析工具以及安全工具。 +`tcpdump` 是一个命令行实用工具,允许你抓取和分析经过系统的流量数据包。它通常被用作于网络故障分析工具以及安全工具。 -Tcpdump 是一款强大的工具,支持多种选项和过滤规则,适用场景十分广泛。由于它是命令行工具,因此适用于在远程服务器或者没有图形界面的设备中收集数据包以便于事后分析。它可以在后台启动,也可以用 cron 等定时工具创建定时任务启用它。 +`tcpdump` 是一款强大的工具,支持多种选项和过滤规则,适用场景十分广泛。由于它是命令行工具,因此适用于在远程服务器或者没有图形界面的设备中收集数据包以便于事后分析。它可以在后台启动,也可以用 cron 等定时工具创建定时任务启用它。 -本文中,我们将讨论 tcpdump 最常用的一些功能。 +本文中,我们将讨论 `tcpdump` 最常用的一些功能。 -### 1\. 
在 Linux 中安装 tcpdump +### 1、在 Linux 中安装 tcpdump -Tcpdump 支持多种 Linux 发行版,所以你的系统中很有可能已经安装了它。用下面的命令检查一下是否已经安装了 tcpdump: +`tcpdump` 支持多种 Linux 发行版,所以你的系统中很有可能已经安装了它。用下面的命令检查一下是否已经安装了 `tcpdump`: ``` $ which tcpdump /usr/sbin/tcpdump ``` -如果还没有安装 tcpdump,你可以用软件包管理器安装它。 -例如,在 CentOS 或者 Red Hat Enterprise 系统中,用如下命令安装 tcpdump: +如果还没有安装 `tcpdump`,你可以用软件包管理器安装它。 +例如,在 CentOS 或者 Red Hat Enterprise 系统中,用如下命令安装 `tcpdump`: ``` $ sudo yum install -y tcpdump ``` -Tcpdump 依赖于 `libpcap`,该库文件用于捕获网络数据包。如果该库文件也没有安装,系统会根据依赖关系自动安装它。 +`tcpdump` 依赖于 `libpcap`,该库文件用于捕获网络数据包。如果该库文件也没有安装,系统会根据依赖关系自动安装它。 现在你可以开始抓包了。 -### 2\. 用 tcpdump 抓包 +### 2、用 tcpdump 抓包 -使用 tcpdump 抓包,需要管理员权限,因此下面的示例中绝大多数命令都是以 `sudo` 开头。 +使用 `tcpdump` 抓包,需要管理员权限,因此下面的示例中绝大多数命令都是以 `sudo` 开头。 首先,先用 `tcpdump -D` 命令列出可以抓包的网络接口: @@ -80,7 +80,7 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes $ ``` -Tcpdump 会持续抓包直到收到中断信号。你可以按 `Ctrl+C` 来停止抓包。正如上面示例所示,`tcpdump` 抓取了超过 9000 个数据包。在这个示例中,由于我是通过 `ssh` 连接到服务器,所以 tcpdump 也捕获了所有这类数据包。`-c` 选项可以用于限制 tcpdump 抓包的数量: +`tcpdump` 会持续抓包直到收到中断信号。你可以按 `Ctrl+C` 来停止抓包。正如上面示例所示,`tcpdump` 抓取了超过 9000 个数据包。在这个示例中,由于我是通过 `ssh` 连接到服务器,所以 `tcpdump` 也捕获了所有这类数据包。`-c` 选项可以用于限制 `tcpdump` 抓包的数量: ``` $ sudo tcpdump -i any -c 5 @@ -97,9 +97,9 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes $ ``` -如上所示,`tcpdump` 在抓取 5 个数据包后自动停止了抓包。这在有些场景中十分有用——比如你只需要抓取少量的数据包用于分析。当我们需要使用过滤规则抓取特定的数据包(如下所示)时,`-c` 的作用就十分突出了。 +如上所示,`tcpdump` 在抓取 5 个数据包后自动停止了抓包。这在有些场景中十分有用 —— 比如你只需要抓取少量的数据包用于分析。当我们需要使用过滤规则抓取特定的数据包(如下所示)时,`-c` 的作用就十分突出了。 -在上面示例中,tcpdump 默认是将 IP 地址和端口号解析为对应的接口名以及服务协议名称。而通常在网络故障排查中,使用 IP 地址和端口号更便于分析问题;用 `-n` 选项显示 IP 地址,`-nn` 选项显示端口号: +在上面示例中,`tcpdump` 默认是将 IP 地址和端口号解析为对应的接口名以及服务协议名称。而通常在网络故障排查中,使用 IP 地址和端口号更便于分析问题;用 `-n` 选项显示 IP 地址,`-nn` 选项显示端口号: ``` $ sudo tcpdump -i any -c5 -nn @@ -115,13 +115,13 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes 0 packets dropped by kernel ``` -如上所示,抓取的数据包中显示 IP 地址和端口号。这样还可以阻止 tcpdump 发出 DNS 查找,有助于在网络故障排查中减少数据流量。 +如上所示,抓取的数据包中显示 IP 地址和端口号。这样还可以阻止 `tcpdump` 发出 DNS 查找,有助于在网络故障排查中减少数据流量。 现在你已经会抓包了,让我们来分析一下这些抓包输出的含义吧。 -### 3\. 
理解抓取的报文 +### 3、理解抓取的报文 -Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP,ICMP 等等。虽然这里我们不可能介绍所有的数据报文类型,但可以分析下 TCP 类型的数据报文,来帮助你入门。更多有关 tcpdump 的详细介绍可以参考其 [帮助手册][1]。Tcpdump 抓取的 TCP 报文看起来如下: +`tcpdump` 能够抓取并解码多种协议类型的数据报文,如 TCP、UDP、ICMP 等等。虽然这里我们不可能介绍所有的数据报文类型,但可以分析下 TCP 类型的数据报文,来帮助你入门。更多有关 `tcpdump` 的详细介绍可以参考其 [帮助手册][1]。`tcpdump` 抓取的 TCP 报文看起来如下: ``` 08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372 @@ -137,7 +137,7 @@ Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP 在源 IP 和目的 IP 之后,可以看到是 TCP 报文标记段 `Flags [P.]`。该字段通常取值如下: -| Value | Flag Type | Description | +| 值 | 标志类型 | 描述 | | ----- | --------- | ----------------- | | S | SYN | Connection Start | | F | FIN | Connection Finish | @@ -149,19 +149,19 @@ Tcpdump 能够抓取并解码多种协议类型的数据报文,如 TCP,UDP 接下来是该数据包中数据的序列号。对于抓取的第一个数据包,该字段值是一个绝对数字,后续包使用相对数值,以便更容易查询跟踪。例如此处 `seq 196:568` 代表该数据包包含该数据流的第 196 到 568 字节。 -接下来是 ack 值:`ack 1`。该数据包是数据发送方,ack 值为1。在数据接收方,该字段代表数据流上的下一个预期字节数据,例如,该数据流中下一个数据包的 ack 值应该是 568。 +接下来是 ack 值:`ack 1`。该数据包是数据发送方,ack 值为 1。在数据接收方,该字段代表数据流上的下一个预期字节数据,例如,该数据流中下一个数据包的 ack 值应该是 568。 接下来字段是接收窗口大小 `win 309`,它表示接收缓冲区中可用的字节数,后跟 TCP 选项如 MSS(最大段大小)或者窗口比例值。更详尽的 TCP 协议内容请参考 [Transmission Control Protocol(TCP) Parameters][2]。 -最后,`length 372`代表数据包有效载荷字节长度。这个长度和 seq 序列号中字节数值长度是不一样的。 +最后,`length 372` 代表数据包有效载荷字节长度。这个长度和 seq 序列号中字节数值长度是不一样的。 现在让我们学习如何过滤数据报文以便更容易的分析定位问题。 -### 4\. 过滤数据包 +### 4、过滤数据包 -正如上面所提,tcpdump 可以抓取很多种类型的数据报文,其中很多可能和我们需要查找的问题并没有关系。举个例子,假设你正在定位一个与 web 服务器连接的网络问题,就不必关系 SSH 数据报文,因此在抓包结果中过滤掉 SSH 报文可能更便于你分析问题。 +正如上面所提,`tcpdump` 可以抓取很多种类型的数据报文,其中很多可能和我们需要查找的问题并没有关系。举个例子,假设你正在定位一个与 web 服务器连接的网络问题,就不必关系 SSH 数据报文,因此在抓包结果中过滤掉 SSH 报文可能更便于你分析问题。 -Tcpdump 有很多参数选项可以设置数据包过滤规则,例如根据源 IP 以及目的 IP 地址,端口号,协议等等规则来过滤数据包。下面就介绍一些最常用的过滤方法。 +`tcpdump` 有很多参数选项可以设置数据包过滤规则,例如根据源 IP 以及目的 IP 地址,端口号,协议等等规则来过滤数据包。下面就介绍一些最常用的过滤方法。 #### 协议 @@ -181,7 +181,7 @@ PING opensource.com (54.204.39.132) 56(84) bytes of data. 64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms ``` -回到运行 tcpdump 命令的终端中,可以看到它筛选出了 ICMP 报文。这里 tcpdump 并没有显示有关 `opensource.com`的域名解析数据包: +回到运行 `tcpdump` 命令的终端中,可以看到它筛选出了 ICMP 报文。这里 `tcpdump` 并没有显示有关 `opensource.com` 的域名解析数据包: ``` 09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64 @@ -215,7 +215,7 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes #### 端口号 -Tcpdump 可以根据服务类型或者端口号来筛选数据包。例如,抓取和 HTTP 服务相关的数据包: +`tcpdump` 可以根据服务类型或者端口号来筛选数据包。例如,抓取和 HTTP 服务相关的数据包: ``` $ sudo tcpdump -i any -c5 -nn port 80 @@ -303,11 +303,11 @@ listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes 该例子中我们只抓取了来自源 IP 为 `192.168.122.98` 或者 `54.204.39.132` 的 HTTP (端口号80)的数据包。使用该方法就很容易抓取到数据流中交互双方的数据包了。 -### 5\. 检查数据包内容 +### 5、检查数据包内容 -在以上的示例中,我们只按数据包头部的信息来建立规则筛选数据包,例如源地址、目的地址、端口号等等。有时我们需要分析网络连接问题,可能需要分析数据包中的内容来判断什么内容需要被发送、什么内容需要被接收等。Tcpdump 提供了两个选项可以查看数据包内容,`-X` 以十六进制打印出数据报文内容,`-A` 打印数据报文的 ASCII 值。 +在以上的示例中,我们只按数据包头部的信息来建立规则筛选数据包,例如源地址、目的地址、端口号等等。有时我们需要分析网络连接问题,可能需要分析数据包中的内容来判断什么内容需要被发送、什么内容需要被接收等。`tcpdump` 提供了两个选项可以查看数据包内容,`-X` 以十六进制打印出数据报文内容,`-A` 打印数据报文的 ASCII 值。 -例如,HTTP request 报文内容如下: +例如,HTTP 请求报文内容如下: ``` $ sudo tcpdump -i any -c10 -nn -A port 80 @@ -379,9 +379,9 @@ E..4..@.@.....zb6.'....P....o.............. 这对定位一些普通 HTTP 调用 API 接口的问题很有用。当然如果是加密报文,这个输出也就没多大用了。 -### 6\. 
保存抓包数据 +### 6、保存抓包数据 -Tcpdump 提供了保存抓包数据的功能以便后续分析数据包。例如,你可以夜里让它在那里抓包,然后早上起来再去分析它。同样当有很多数据包时,显示过快也不利于分析,将数据包保存下来,更有利于分析问题。 +`tcpdump` 提供了保存抓包数据的功能以便后续分析数据包。例如,你可以夜里让它在那里抓包,然后早上起来再去分析它。同样当有很多数据包时,显示过快也不利于分析,将数据包保存下来,更有利于分析问题。 使用 `-w` 选项来保存数据包而不是在屏幕上显示出抓取的数据包: @@ -398,7 +398,7 @@ tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 2621 正如示例中所示,保存数据包到文件中时屏幕上就没有任何有关数据报文的输出,其中 `-c10` 表示抓取到 10 个数据包后就停止抓包。如果想有一些反馈来提示确实抓取到了数据包,可以使用 `-v` 选项。 -Tcpdump 将数据包保存在二进制文件中,所以不能简单的用文本编辑器去打开它。使用 `-r` 选项参数来阅读该文件中的报文内容: +`tcpdump` 将数据包保存在二进制文件中,所以不能简单的用文本编辑器去打开它。使用 `-r` 选项参数来阅读该文件中的报文内容: ``` $ tcpdump -nn -r webserver.pcap @@ -418,7 +418,7 @@ $ 这里不需要管理员权限 `sudo` 了,因为此刻并不是在网络接口处抓包。 -你还可以使用我们讨论过的任何过滤规则来过滤文件中的内容,就像使用实时数据一样。 例如,通过执行以下命令从源 IP 地址`54.204.39.132` 检查文件中的数据包: +你还可以使用我们讨论过的任何过滤规则来过滤文件中的内容,就像使用实时数据一样。 例如,通过执行以下命令从源 IP 地址 `54.204.39.132` 检查文件中的数据包: ``` $ tcpdump -nn -r webserver.pcap src 54.204.39.132 @@ -431,11 +431,11 @@ reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked) ### 下一步做什么? -以上的基本功能已经可以帮助你使用强大的 tcpdump 抓包工具了。更多的内容请参考 [tcpdump网页][3] 以及它的 [帮助文件][4]。 +以上的基本功能已经可以帮助你使用强大的 `tcpdump` 抓包工具了。更多的内容请参考 [tcpdump 网站][3] 以及它的 [帮助文件][4]。 -Tcpdump 命令行工具为分析网络流量数据包提供了强大的灵活性。如果需要使用图形工具来抓包请参考 [Wireshark][5]。 +`tcpdump` 命令行工具为分析网络流量数据包提供了强大的灵活性。如果需要使用图形工具来抓包请参考 [Wireshark][5]。 -Wireshark 还可以用来读取 tcpdump 保存的 `pcap` 文件。你可以使用 tcpdump 命令行在没有 GUI 界面的远程机器上抓包然后在 Wireshark 中分析数据包。 +Wireshark 还可以用来读取 `tcpdump` 保存的 pcap 文件。你可以使用 `tcpdump` 命令行在没有 GUI 界面的远程机器上抓包然后在 Wireshark 中分析数据包。 -------------------------------------------------------------------------------- @@ -444,7 +444,7 @@ via: https://opensource.com/article/18/10/introduction-tcpdump 作者:[Ricardo Gerardi][a] 选题:[lujun9972][b] 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181014 How Lisp Became God-s Own Programming Language.md b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md new file mode 100644 index 0000000000..017a67799f --- /dev/null +++ b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md @@ -0,0 +1,186 @@ +Lisp 是怎么成为上帝的编程语言的 +====== + +当程序员们谈论各类编程语言的相对优势时,他们通常会采用相当平淡的措词,就好像这些语言是一条工具带上的各种工具似的 —— 有适合写操作系统的,也有适合把其它程序黏在一起来完成特殊工作的。这种讨论方式非常合理;不同语言的能力不同。不声明特定用途就声称某门语言比其他语言更优秀只能导致侮辱性的无用争论。 + +但有一门语言似乎受到和用途无关的特殊尊敬:那就是 Lisp。即使是恨不得给每个说出形如“某某语言比其他所有语言都好”这类话的人都来一拳的键盘远征军们,也会承认 Lisp 处于另一个层次。 Lisp 超越了用于评判其他语言的实用主义标准,因为普通程序员并不使用 Lisp 编写实用的程序 —— 而且,多半他们永远也不会这么做。然而,人们对 Lisp 的敬意是如此深厚,甚至于到了这门语言会时而被加上神话属性的程度。 + +大家都喜欢的网络漫画合集 xkcd 就至少在两组漫画中如此描绘过 Lisp:[其中一组漫画][1]中,某人得到了某种 Lisp 启示,而这好像使他理解了宇宙的基本构架。 + +![](https://imgs.xkcd.com/comics/lisp.jpg) + +在[另一组漫画][2]中,一个穿着长袍的老程序员给他的徒弟递了一沓圆括号,说这是“文明时代的优雅武器”,暗示着 Lisp 就像原力那样拥有各式各样的神秘力量。 + +![](https://imgs.xkcd.com/comics/lisp_cycles.png) + +另一个绝佳例子是 Bob Kanefsky 的滑稽剧插曲,《上帝就在人间》。这部剧叫做《永恒之火》,撰写于 1990 年代中期;剧中描述了上帝必然是使用 Lisp 创造世界的种种原因。完整的歌词可以在 [GNU 幽默合集][3]中找到,如下是一段摘抄: + +> 因为上帝用祂的 Lisp 代码 + +> 让树叶充满绿意。 + +> 分形的花儿和递归的根: + +> 我见过的奇技淫巧之中没什么比这更可爱。 + +> 当我对着雪花深思时, + +> 从未见过两片相同的, + +> 我知道,上帝偏爱那一门 + +> 名字是四个字母的语言。 + +(LCTT 译注:参见 “四个字母”,参见:[四字神名](https://zh.wikipedia.org/wiki/%E5%9B%9B%E5%AD%97%E7%A5%9E%E5%90%8D),致谢 [no1xsyzy](https://github.com/LCTT/TranslateProject/issues/11320)) + +以下这句话我实在不好在人前说;不过,我还是觉得,这样一种 “Lisp 是奥术魔法”的文化模因实在是有史以来最奇异、最迷人的东西。Lisp 是象牙塔的产物,是人工智能研究的工具;因此,它对于编程界的俗人而言总是陌生的,甚至是带有神秘色彩的。然而,当今的程序员们[开始怂恿彼此,“在你死掉之前至少试一试 Lisp”][4],就像这是一种令人恍惚入迷的致幻剂似的。尽管 Lisp 
是广泛使用的编程语言中第二古老的(只比 Fortran 年轻一岁)[^1] ,程序员们也仍旧在互相怂恿。想象一下,如果你的工作是为某种组织或者团队推广一门新的编程语言的话,忽悠大家让他们相信你的新语言拥有神力难道不是绝佳的策略吗?—— 但你如何能够做到这一点呢?或者,换句话说,一门编程语言究竟是如何变成人们口中“隐晦知识的载体”的呢? + +Lisp 究竟是怎么成为这样的? + +![Byte 杂志封面,1979年八月。][5] + +*Byte 杂志封面,1979年八月。* + +### 理论 A :公理般的语言 + +Lisp 的创造者约翰·麦卡锡John McCarthy最初并没有想过把 Lisp 做成优雅、精炼的计算法则结晶。然而,在一两次运气使然的深谋远虑和一系列优化之后,Lisp 的确变成了那样的东西。 保罗·格雷厄姆Paul Graham(我们一会儿之后才会聊到他)曾经这么写道, 麦卡锡通过 Lisp “为编程作出的贡献就像是欧几里得对几何学所做的贡献一般” [^2]。人们可能会在 Lisp 中看出更加隐晦的含义 —— 因为麦卡锡创造 Lisp 时使用的要素实在是过于基础,基础到连弄明白他到底是创造了这门语言、还是发现了这门语言,都是一件难事。 + +最初, 麦卡锡产生要造一门语言的想法,是在 1956 年的达特茅斯人工智能夏季研究项目Darthmouth Summer Research Project on Artificial Intelligence上。夏季研究项目是个持续数周的学术会议,直到现在也仍旧在举行;它是此类会议之中最早开始举办的会议之一。 麦卡锡当初还是个达特茅斯的数学助教,而“人工智能artificial intelligence(AI)”这个词事实上就是他建议举办该会议时发明的 [^3]。在整个会议期间大概有十人参加 [^4]。他们之中包括了艾伦·纽厄尔Allen Newell赫伯特·西蒙Herbert Simon,两名隶属于兰德公司RAND Corporation卡内基梅隆大学Carnegie Mellon的学者。这两人不久之前设计了一门语言,叫做 IPL。 + +当时,纽厄尔和西蒙正试图制作一套能够在命题演算中生成证明的系统。两人意识到,用电脑的原生指令集编写这套系统会非常困难;于是他们决定创造一门语言——他们的原话是“伪代码pseudo-code”,这样,他们就能更加轻松自然地表达这台“逻辑理论机器Logic Theory Machine”的底层逻辑了 [^5]。这门语言叫做 IPL,即“信息处理语言Information Processing Language”;比起我们现在认知中的编程语言,它更像是一种高层次的汇编语言方言。 纽厄尔和西蒙提到,当时人们开发的其它“伪代码”都抓着标准数学符号不放 —— 也许他们指的是 Fortran [^6];与此不同的是,他们的语言使用成组的符号方程来表示命题演算中的语句。通常,用 IPL 写出来的程序会调用一系列的汇编语言宏,以此在这些符号方程列表中对表达式进行变换和求值。 + +麦卡锡认为,一门实用的编程语言应该像 Fortran 那样使用代数表达式;因此,他并不怎么喜欢 IPL [^7]。然而,他也认为,在给人工智能领域的一些问题建模时,符号列表会是非常好用的工具 —— 而且在那些涉及演绎的问题上尤其有用。麦卡锡的渴望最终被诉诸行动;他要创造一门代数的列表处理语言 —— 这门语言会像 Fortran 一样使用代数表达式,但拥有和 IPL 一样的符号列表处理能力。 + +当然,今日的 Lisp 可不像 Fortran。在会议之后的几年中,麦卡锡关于“理想的列表处理语言”的见解似乎在逐渐演化。到 1957 年,他的想法发生了改变。他那时候正在用 Fortran 编写一个能下国际象棋的程序;越是长时间地使用 Fortran ,麦卡锡就越确信其设计中存在不当之处,而最大的问题就是尴尬的 `IF` 声明 [^8]。为此,他发明了一个替代品,即条件表达式 `true`;这个表达式会在给定的测试通过时返回子表达式 `A` ,而在测试未通过时返回子表达式 `B` ,*而且*,它只会对返回的子表达式进行求值。在 1958 年夏天,当麦卡锡设计一个能够求导的程序时,他意识到,他发明的 `true` 条件表达式让编写递归函数这件事变得更加简单自然了 [^9]。也是这个求导问题让麦卡锡创造了 `maplist` 函数;这个函数会将其它函数作为参数并将之作用于指定列表的所有元素 [^10]。在给项数多得叫人抓狂的多项式求导时,它尤其有用。 + +然而,以上的所有这些,在 Fortran 中都是没有的;因此,在 1958 年的秋天,麦卡锡请来了一群学生来实现 Lisp。因为他那时已经成了一名麻省理工助教,所以,这些学生可都是麻省理工的学生。当麦卡锡和学生们最终将他的主意变为能运行的代码时,这门语言得到了进一步的简化。这之中最大的改变涉及了 Lisp 的语法本身。最初,麦卡锡在设计语言时,曾经试图加入所谓的 “M 表达式”;这是一层语法糖,能让 Lisp 的语法变得类似于 Fortran。虽然 M 表达式可以被翻译为 S 表达式 —— 基础的、“用圆括号括起来的列表”,也就是 Lisp 最著名的特征 —— 但 S 表达式事实上是一种给机器看的低阶表达方法。唯一的问题是,麦卡锡用方括号标记 M 表达式,但他的团队在麻省理工使用的 IBM 026 键盘打孔机的键盘上根本没有方括号 [^11]。于是 Lisp 团队坚定不移地使用着 S 表达式,不仅用它们表示数据列表,也拿它们来表达函数的应用。麦卡锡和他的学生们还作了另外几样改进,包括将数学符号前置;他们也修改了内存模型,这样 Lisp 实质上就只有一种数据类型了 [^12]。 + +到 1960 年,麦卡锡发表了他关于 Lisp 的著名论文,《用符号方程表示的递归函数及它们的机器计算》。那时候,Lisp 已经被极大地精简,而这让麦卡锡意识到,他的作品其实是“一套优雅的数学系统”,而非普通的编程语言 [^13]。他后来这么写道,对 Lisp 的许多简化使其“成了一种描述可计算函数的方式,而且它比图灵机或者一般情况下用于递归函数理论的递归定义更加简洁” [^14]。在他的论文中,他不仅使用 Lisp 作为编程语言,也将它当作一套用于研究递归函数行为方式的表达方法。 + +通过“从一小撮规则中逐步实现出 Lisp”的方式,麦卡锡将这门语言介绍给了他的读者。后来,保罗·格雷厄姆在短文《[Lisp 之根][6]The Roots of Lisp》中用更易读的语言回顾了麦卡锡的步骤。格雷厄姆只用了七种原始运算符、两种函数写法,以及使用原始运算符定义的六个稍微高级一点的函数来解释 Lisp。毫无疑问,Lisp 的这种只需使用极少量的基本规则就能完整说明的特点加深了其神秘色彩。格雷厄姆称麦卡锡的论文为“使计算公理化”的一种尝试 [^15]。我认为,在思考 Lisp 的魅力从何而来时,这是一个极好的切入点。其它编程语言都有明显的人工构造痕迹,表现为 `While`,`typedef`,`public static void` 这样的关键词;而 Lisp 的设计却简直像是纯粹计算逻辑的鬼斧神工。Lisp 的这一性质,以及它和晦涩难懂的“递归函数理论”的密切关系,使它具备了获得如今声望的充分理由。 + +### 理论 B:属于未来的机器 + +Lisp 诞生二十年后,它成了著名的《[黑客词典][7]Hacker’s Dictionary》中所说的,人工智能研究的“母语”。Lisp 在此之前传播迅速,多半是托了语法规律的福 —— 不管在怎么样的电脑上,实现 Lisp 都是一件相对简单直白的事。而学者们之后坚持使用它乃是因为 Lisp 在处理符号表达式这方面有巨大的优势;在那个时代,人工智能很大程度上就意味着符号,于是这一点就显得十分重要。在许多重要的人工智能项目中都能见到 Lisp 的身影。这些项目包括了 [SHRDLU 自然语言程序][8]、[Macsyma 代数系统][9] 和 [ACL2 逻辑系统][10]。 + +然而,在 1970 年代中期,人工智能研究者们的电脑算力开始不够用了。PDP-10 就是一个典型。这个型号在人工智能学界曾经极受欢迎;但面对这些用 Lisp 写的 AI 程序,它的 18 位地址空间一天比一天显得吃紧 [^16]。许多的 AI 
程序在设计上可以与人互动。要让这些既极度要求硬件性能、又有互动功能的程序在分时系统上优秀发挥,是很有挑战性的。麻省理工的彼得·杜奇Peter Deutsch给出了解决方案:那就是针对 Lisp 程序来特别设计电脑。就像是我那[关于 Chaosnet 的上一篇文章][11]所说的那样,这些Lisp 计算机Lisp machines会给每个用户都专门分配一个为 Lisp 特别优化的处理器。到后来,考虑到硬核 Lisp 程序员的需求,这些计算机甚至还配备上了完全由 Lisp 编写的开发环境。在当时那样一个小型机时代已至尾声而微型机的繁盛尚未完全到来的尴尬时期,Lisp 计算机就是编程精英们的“高性能个人电脑”。 + +有那么一会儿,Lisp 计算机被当成是未来趋势。好几家公司雨后春笋般出现,追着赶着要把这项技术商业化。其中最成功的一家叫做 Symbolics,由麻省理工 AI 实验室的前成员创立。上世纪八十年代,这家公司生产了所谓的 3600 系列计算机,它们当时在 AI 领域和需要高性能计算的产业中应用极广。3600 系列配备了大屏幕、位图显示、鼠标接口,以及[强大的图形与动画软件][12]。它们都是惊人的机器,能让惊人的程序运行起来。例如,之前在推特上跟我聊过的机器人研究者 Bob Culley,就能用一台 1985 年生产的 Symbolics 3650 写出带有图形演示的寻路算法。他向我解释说,在 1980 年代,位图显示和面向对象编程(能够通过 [Flavors 扩展][13]在 Lisp 计算机上使用)都刚刚出现。Symbolics 站在时代的最前沿。 + +![Bob Culley 的寻路程序。][14] + +*Bob Culley 的寻路程序。* + +而以上这一切导致 Symbolics 的计算机奇贵无比。在 1983 年,一台 Symbolics 3600 能卖 111,000 美金 [^16]。所以,绝大部分人只可能远远地赞叹 Lisp 计算机的威力和操作员们用 Lisp 编写程序的奇妙技术。不止他们赞叹,从 1979 年到 1980 年代末,Byte 杂志曾经多次提到过 Lisp 和 Lisp 计算机。在 1979 年八月发行的、关于 Lisp 的一期特别杂志中,杂志编辑激情洋溢地写道,麻省理工正在开发的计算机配备了“大坨大坨的内存”和“先进的操作系统” [^17];他觉得,这些 Lisp 计算机的前途是如此光明,以至于它们的面世会让 1978 和 1977 年 —— 诞生了 Apple II、Commodore PET 和 TRS-80 的两年 —— 显得黯淡无光。五年之后,在 1985 年,一名 Byte 杂志撰稿人描述了为“复杂精巧、性能强悍的 Symbolics 3670”编写 Lisp 程序的体验,并力劝读者学习 Lisp,称其为“绝大数人工智能工作者的语言选择”,和将来的通用编程语言 [^18]。 + +我问过保罗·麦克琼斯Paul McJones(他在山景城Mountain View计算机历史博物馆Computer History Museum做了许多 Lisp 的[保护工作][15]),人们是什么时候开始将 Lisp 当作高维生物的赠礼一样谈论的呢?他说,这门语言自有的性质毋庸置疑地促进了这种现象的产生;然而,他也说,Lisp 上世纪六七十年代在人工智能领域得到的广泛应用,很有可能也起到了作用。当 1980 年代到来、Lisp 计算机进入市场时,象牙塔外的某些人由此接触到了 Lisp 的能力,于是传说开始滋生。时至今日,很少有人还记得 Lisp 计算机和 Symbolics 公司;但 Lisp 得以在八十年代一直保持神秘,很大程度上要归功于它们。 + +### 理论 C:学习编程 + +1985 年,两位麻省理工的教授,哈尔·阿伯尔森Harold "Hal" Abelson杰拉尔德·瑟斯曼Gerald Sussman,外加瑟斯曼的妻子朱莉·瑟斯曼Julie Sussman,出版了一本叫做《计算机程序的构造和解释Structure and Interpretation of Computer Programs》的教科书。这本书用 Scheme(一种 Lisp 方言)向读者们示范了如何编程。它被用于教授麻省理工入门编程课程长达二十年之久。出于直觉,我认为 SICP(这本书的名字通常缩写为 SICP)倍增了 Lisp 的“神秘要素”。SICP 使用 Lisp 描绘了深邃得几乎可以称之为哲学的编程理念。这些理念非常普适,可以用任意一种编程语言展现;但 SICP 的作者们选择了 Lisp。结果,这本阴阳怪气、卓越不凡、吸引了好几代程序员(还成了一种[奇特的模因][16])的著作臭名远扬之后,Lisp 的声望也顺带被提升了。Lisp 已不仅仅是一如既往的“麦卡锡的优雅表达方式”;它现在还成了“向你传授编程的不传之秘的语言”。 + +SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日,这本书的古怪之处和 Lisp 的古怪之处是相辅相成的。书的封面就透着一股古怪。那上面画着一位朝着桌子走去,准备要施法的巫师或者炼金术士。他的一只手里抓着一副测径仪 —— 或者圆规,另一只手上拿着个球,上书“eval”和“apply”。他对面的女人指着桌子;在背景中,希腊字母 λ (lambda)漂浮在半空,释放出光芒。 + +![SICP 封面上的画作][17] + +*SICP 封面上的画作。* + +说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 演算之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。 + +然而,这本书的内容通常并不比封面正常多少。SICP 跟你读过的所有计算机科学教科书都不同。在引言中,作者们表示,这本书不只教你怎么用 Lisp 编程 —— 它是关于“现象的三个焦点:人的心智、复数的计算机程序,和计算机”的作品 [^19]。在之后,他们对此进行了解释,描述了他们对如下观点的坚信:编程不该被当作是一种计算机科学的训练,而应该是“程序性认识论procedural epistemology”的一种新表达方式 [^20]。程序是将那些偶然被送入计算机的思想组织起来的全新方法。这本书的第一章简明地介绍了 Lisp,但是之后的绝大部分都在讲述更加抽象的概念。其中包括了对不同编程范式的讨论,对于面向对象系统中“时间”和“一致性”的讨论;在书中的某一处,还有关于通信的基本限制可能会如何带来同步问题的讨论 —— 而这些基本限制在通信中就像是光速不变在相对论中一样关键 [^21]。都是些高深难懂的东西。 + +以上这些并不是说这是本糟糕的书;这本书其实棒极了。在我读过的所有作品中,这本书对于重要的编程理念的讨论是最为深刻的;那些理念我琢磨了很久,却一直无力用文字去表达。一本入门编程教科书能如此迅速地开始描述面向对象编程的根本缺陷,和函数式语言“将可变状态降到最少”的优点,实在是一件让人印象深刻的事。而这种描述之后变为了另一种震撼人心的讨论:某种(可能类似于今日的 [RxJS][18] 的)流范式能如何同时具备两者的优秀特性。SICP 用和当初麦卡锡的 Lisp 论文相似的方式提纯出了高级程序设计的精华。你读完这本书之后,会立即想要将它推荐给你的程序员朋友们;如果他们找到这本书,看到了封面,但最终没有阅读的话,他们就只会记住长着动物腿的桌子上方那神秘的、根本的、给予魔法师特殊能力的、写着 eval/apply 的东西。话说回来,书上这两人的鞋子也让我印象颇深。 + +然而,SICP 最重要的影响恐怕是,它将 Lisp 由一门怪语言提升成了必要的教学工具。在 SICP 面世之前,人们互相推荐 Lisp,以学习这门语言为提升编程技巧的途径。1979 年的 Byte 杂志 Lisp 特刊印证了这一事实。之前提到的那位编辑不仅就麻省理工的新 Lisp 计算机大书特书,还说,Lisp 这门语言值得一学,因为它“代表了分析问题的另一种视角” [^22]。但 SICP 并未只把 Lisp 作为其它语言的陪衬来使用;SICP 将其作为*入门*语言。这就暗含了一种论点,那就是,Lisp 
是最能把握计算机编程基础的语言。可以认为,如今的程序员们彼此怂恿“在死掉之前至少试试 Lisp”的时候,他们很大程度上是因为 SICP 才这么说的。毕竟,编程语言 [Brainfuck][19] 想必同样也提供了“分析问题的另一种视角”;但人们学习 Lisp 而非学习 Brainfuck,那是因为他们知道,前者的那种 Lisp 视角在二十年中都被看作是极其有用的,有用到麻省理工在给他们的本科生教其它语言之前,必然会先教 Lisp。 + +### Lisp 的回归 + +在 SICP 出版的同一年,本贾尼·斯特劳斯特卢普Bjarne Stroustrup发布了 C++ 语言的首个版本,它将面向对象编程带到了大众面前。几年之后,Lisp 计算机的市场崩盘,AI 寒冬开始了。在下一个十年的变革中, C++ 和后来的 Java 成了前途无量的语言,而 Lisp 被冷落,无人问津。 + +理所当然地,确定人们对 Lisp 重新燃起热情的具体时间并不可能;但这多半是保罗·格雷厄姆发表他那几篇声称 Lisp 是首选入门语言的短文之后的事了。保罗·格雷厄姆是 Y-Combinator 的联合创始人和《Hacker News》的创始者,他这几篇短文有很大的影响力。例如,在短文《[胜于平庸][20]Beating the Averages》中,他声称 Lisp 宏使 Lisp 比其它语言更强。他说,因为他在自己创办的公司 Viaweb 中使用 Lisp,他得以比竞争对手更快地推出新功能。至少,[一部分程序员][21]被说服了。然而,庞大的主流程序员群体并未换用 Lisp。 + +实际上出现的情况是,Lisp 并未流行,但越来越多 Lisp 式的特性被加入到广受欢迎的语言中。Python 有了列表推导式。C# 有了 Linq。Ruby……嗯,[Ruby 是 Lisp 的一种][22]。就如格雷厄姆之前在 2001 年提到的那样,“在一系列常用语言中所体现出的‘默认语言’正越发朝着 Lisp 的方向演化” [^23]。尽管其它语言变得越来越像 Lisp,Lisp 本身仍然保留了其作为“很少人了解但是大家都该学的神秘语言”的特殊声望。在 1980 年,Lisp 的诞生二十周年纪念日上,麦卡锡写道,Lisp 之所以能够存活这么久,是因为它具备“编程语言领域中的某种近似局部最优” [^24]。这句话并未充分地表明 Lisp 的真正影响力。Lisp 能够存活超过半个世纪之久,并非因为程序员们一年年地勉强承认它就是最好的编程工具;事实上,即使绝大多数程序员根本不用它,它还是存活了下来。多亏了它的起源和它的人工智能研究用途,说不定还要多亏 SICP 的遗产,Lisp 一直都那么让人着迷。在我们能够想象上帝用其它新的编程语言创造世界之前,Lisp 都不会走下神坛。 + +-------------------------------------------------------------------------------- + +[^1]: John McCarthy, “History of Lisp”, 14, Stanford University, February 12, 1979, accessed October 14, 2018, http://jmc.stanford.edu/articles/lisp/lisp.pdf + +[^2]: Paul Graham, “The Roots of Lisp”, 1, January 18, 2002, accessed October 14, 2018, http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf. + +[^3]: Martin Childs, “John McCarthy: Computer scientist known as the father of AI”, The Independent, November 1, 2011, accessed on October 14, 2018, https://www.independent.co.uk/news/obituaries/john-mccarthy-computer-scientist-known-as-the-father-of-ai-6255307.html. + +[^4]: Lisp Bulletin History. http://www.artinfo-musinfo.org/scans/lb/lb3f.pdf + +[^5]: Allen Newell and Herbert Simon, “Current Developments in Complex Information Processing,” 19, May 1, 1956, accessed on October 14, 2018, http://bitsavers.org/pdf/rand/ipl/P-850_Current_Developments_In_Complex_Information_Processing_May56.pdf. + +[^6]: ibid. + +[^7]: Herbert Stoyan, “Lisp History”, 43, Lisp Bulletin #3, December 1979, accessed on October 14, 2018, http://www.artinfo-musinfo.org/scans/lb/lb3f.pdf + +[^8]: McCarthy, “History of Lisp”, 5. + +[^9]: ibid. + +[^10]: McCarthy “History of Lisp”, 6. + +[^11]: Stoyan, “Lisp History”, 45 + +[^12]: McCarthy, “History of Lisp”, 8. + +[^13]: McCarthy, “History of Lisp”, 2. + +[^14]: McCarthy, “History of Lisp”, 8. + +[^15]: Graham, “The Roots of Lisp”, 11. + +[^16]: Guy Steele and Richard Gabriel, “The Evolution of Lisp”, 22, History of Programming Languages 2, 1993, accessed on October 14, 2018, http://www.dreamsongs.com/Files/HOPL2-Uncut.pdf. 2 + +[^17]: Carl Helmers, “Editorial”, Byte Magazine, 154, August 1979, accessed on October 14, 2018, https://archive.org/details/byte-magazine-1979-08/page/n153. + +[^18]: Patrick Winston, “The Lisp Revolution”, 209, April 1985, accessed on October 14, 2018, https://archive.org/details/byte-magazine-1985-04/page/n207. + +[^19]: Harold Abelson, Gerald Jay. Sussman, and Julie Sussman, Structure and Interpretation of Computer Programs (Cambridge, Mass: MIT Press, 2010), xiii. + +[^20]: Abelson, xxiii. + +[^21]: Abelson, 428. + +[^22]: Helmers, 7. + +[^23]: Paul Graham, “What Made Lisp Different”, December 2001, accessed on October 14, 2018, http://www.paulgraham.com/diff.html. 
+ +[^24]: John McCarthy, “Lisp—Notes on its past and future”, 3, Stanford University, 1980, accessed on October 14, 2018, http://jmc.stanford.edu/articles/lisp20th/lisp20th.pdf. + +via: https://twobithistory.org/2018/10/14/lisp.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[Northurland](https://github.com/Northurland) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://xkcd.com/224/ +[2]: https://xkcd.com/297/ +[3]: https://www.gnu.org/fun/jokes/eternal-flame.en.html +[4]: https://www.reddit.com/r/ProgrammerHumor/comments/5c14o6/xkcd_lisp/d9szjnc/ +[5]: https://twobithistory.org/images/byte_lisp.jpg +[6]: http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf +[7]: https://en.wikipedia.org/wiki/Jargon_File +[8]: https://hci.stanford.edu/winograd/shrdlu/ +[9]: https://en.wikipedia.org/wiki/Macsyma +[10]: https://en.wikipedia.org/wiki/ACL2 +[11]: https://twobithistory.org/2018/09/30/chaosnet.html +[12]: https://youtu.be/gV5obrYaogU?t=201 +[13]: https://en.wikipedia.org/wiki/Flavors_(programming_language) +[14]: https://twobithistory.org/images/symbolics.jpg +[15]: http://www.softwarepreservation.org/projects/LISP/ +[16]: https://knowyourmeme.com/forums/meme-research/topics/47038-structure-and-interpretation-of-computer-programs-hugeass-image-dump-for-evidence +[17]: https://twobithistory.org/images/sicp.jpg +[18]: https://rxjs-dev.firebaseapp.com/ +[19]: https://en.wikipedia.org/wiki/Brainfuck +[20]: http://www.paulgraham.com/avg.html +[21]: https://web.archive.org/web/20061004035628/http://wiki.alu.org/Chris-Perkins +[22]: http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/ diff --git a/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md new file mode 100644 index 0000000000..01bdffbafd --- /dev/null +++ b/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md @@ -0,0 +1,247 @@ +如何使用 chkconfig 和 systemctl 命令启用或禁用 Linux 服务 +====== + +对于 Linux 管理员来说这是一个重要(美妙)的话题,所以每个人都必须知道,并练习怎样才能更高效的使用它们。 + +在 Linux 中,无论何时当你安装任何带有服务和守护进程的包,系统默认会把这些服务的初始化及 systemd 脚本添加进去,不过此时它们并没有被启用。 + +我们需要手动的开启或者关闭那些服务。Linux 中有三个著名的且一直在被使用的初始化系统。 + +### 什么是初始化系统? + +在以 Linux/Unix 为基础的操作系统上,`init` (初始化的简称) 是内核引导系统启动过程中第一个启动的进程。 + +`init` 的进程 id (pid)是 1,除非系统关机否则它将会一直在后台运行。 + +`init` 首先根据 `/etc/inittab` 文件决定 Linux 运行的级别,然后根据运行级别在后台启动所有其他进程和应用程序。 + +BIOS、MBR、GRUB 和内核程序在启动 `init` 之前就作为 Linux 的引导程序的一部分开始工作了。 + +下面是 Linux 中可以使用的运行级别(从 0~6 总共七个运行级别): + + * `0`:关机 + * `1`:单用户模式 + * `2`:多用户模式(没有NFS) + * `3`:完全的多用户模式 + * `4`:系统未使用 + * `5`:图形界面模式 + * `6`:重启 + +下面是 Linux 系统中最常用的三个初始化系统: + + * System V(Sys V) + * Upstart + * systemd + +### 什么是 System V(Sys V)? + +System V(Sys V)是类 Unix 系统第一个也是传统的初始化系统。`init` 是内核引导系统启动过程中第一支启动的程序,它是所有程序的父进程。 + +大部分 Linux 发行版最开始使用的是叫作 System V(Sys V)的传统的初始化系统。在过去的几年中,已经发布了好几个初始化系统以解决标准版本中的设计限制,例如:launchd、Service Management Facility、systemd 和 Upstart。 + +但是 systemd 已经被几个主要的 Linux 发行版所采用,以取代传统的 SysV 初始化系统。 + +### 什么是 Upstart? + +Upstart 是一个基于事件的 `/sbin/init` 守护进程的替代品,它在系统启动过程中处理任务和服务的启动,在系统运行期间监视它们,在系统关机的时候关闭它们。 + +它最初是为 Ubuntu 而设计,但是它也能够完美的部署在其他所有 Linux系统中,用来代替古老的 System-V。 + +Upstart 被用于 Ubuntu 从 9.10 到 Ubuntu 14.10 和基于 RHEL 6 的系统,之后它被 systemd 取代。 + +### 什么是 systemd? 
+ +systemd 是一个新的初始化系统和系统管理器,它被用于所有主要的 Linux 发行版,以取代传统的 SysV 初始化系统。 + +systemd 兼容 SysV 和 LSB 初始化脚本。它可以直接替代 SysV 初始化系统。systemd 是被内核启动的第一个程序,它的 PID 是 1。 + +systemd 是所有程序的父进程,Fedora 15 是第一个用 systemd 取代 upstart 的发行版。`systemctl` 用于命令行,它是管理 systemd 的守护进程/服务的主要工具,例如:(开启、重启、关闭、启用、禁用、重载和状态) + +systemd 使用 .service 文件而不是 bash 脚本(SysVinit 使用的)。systemd 将所有守护进程添加到 cgroups 中排序,你可以通过浏览 `/cgroup/systemd` 文件查看系统等级。 + +### 如何使用 chkconfig 命令启用或禁用引导服务? + +`chkconfig` 实用程序是一个命令行工具,允许你在指定运行级别下启动所选服务,以及列出所有可用服务及其当前设置。 + +此外,它还允许我们从启动中启用或禁用服务。前提是你有超级管理员权限(root 或者 `sudo`)运行这个命令。 + +所有的服务脚本位于 `/etc/rd.d/init.d`文件中 + +### 如何列出运行级别中所有的服务 + +`--list` 参数会展示所有的服务及其当前状态(启用或禁用服务的运行级别): + +``` +# chkconfig --list +NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off +abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off +abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off +acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off +atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off +auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +. +. +``` + +### 如何查看指定服务的状态 + +如果你想查看运行级别下某个服务的状态,你可以使用下面的格式匹配出需要的服务。 + +比如说我想查看运行级别中 `auditd` 服务的状态 + +``` +# chkconfig --list| grep auditd +auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +``` + +### 如何在指定运行级别中启用服务 + +使用 `--level` 参数启用指定运行级别下的某个服务,下面展示如何在运行级别 3 和运行级别 5 下启用 `httpd` 服务。 + + +``` +# chkconfig --level 35 httpd on +``` + +### 如何在指定运行级别下禁用服务 + +同样使用 `--level` 参数禁用指定运行级别下的服务,下面展示的是在运行级别 3 和运行级别 5 中禁用 `httpd` 服务。 + +``` +# chkconfig --level 35 httpd off +``` + +### 如何将一个新服务添加到启动列表中 + +`-–add` 参数允许我们添加任何新的服务到启动列表中,默认情况下,新添加的服务会在运行级别 2、3、4、5 下自动开启。 + +``` +# chkconfig --add nagios +``` + +### 如何从启动列表中删除服务 + +可以使用 `--del` 参数从启动列表中删除服务,下面展示的是如何从启动列表中删除 Nagios 服务。 + +``` +# chkconfig --del nagios +``` + +### 如何使用 systemctl 命令启用或禁用开机自启服务? + +`systemctl` 用于命令行,它是一个用来管理 systemd 的守护进程/服务的基础工具,例如:(开启、重启、关闭、启用、禁用、重载和状态)。 + +所有服务创建的 unit 文件位与 `/etc/systemd/system/`。 + +### 如何列出全部的服务 + +使用下面的命令列出全部的服务(包括启用的和禁用的)。 + +``` +# systemctl list-unit-files --type=service +UNIT FILE STATE +arp-ethers.service disabled +auditd.service enabled +autovt@.service enabled +blk-availability.service disabled +brandbot.service static +chrony-dnssrv@.service static +chrony-wait.service disabled +chronyd.service enabled +cloud-config.service enabled +cloud-final.service enabled +cloud-init-local.service enabled +cloud-init.service enabled +console-getty.service disabled +console-shell.service disabled +container-getty@.service static +cpupower.service disabled +crond.service enabled +. +. +150 unit files listed. +``` + +使用下面的格式通过正则表达式匹配出你想要查看的服务的当前状态。下面是使用 `systemctl` 命令查看 `httpd` 服务的状态。 + +``` +# systemctl list-unit-files --type=service | grep httpd +httpd.service disabled +``` + +### 如何让指定的服务开机自启 + +使用下面格式的 `systemctl` 命令启用一个指定的服务。启用服务将会创建一个符号链接,如下可见: + +``` +# systemctl enable httpd +Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. +``` + +运行下列命令再次确认服务是否被启用。 + +``` +# systemctl is-enabled httpd +enabled +``` + +### 如何禁用指定的服务 + +运行下面的命令禁用服务将会移除你启用服务时所创建的符号链接。 + +``` +# systemctl disable httpd +Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. 
+``` + +运行下面的命令再次确认服务是否被禁用。 + +``` +# systemctl is-enabled httpd +disabled +``` + +### 如何查看系统当前的运行级别 + +使用 `systemctl` 命令确认你系统当前的运行级别,`runlevel` 命令仍然可在 systemd 下工作,不过,运行级别对于 systemd 来说是一个历史遗留的概念。所以我建议你全部使用 `systemctl` 命令。 + +我们当前处于运行级别 3, 它等同于下面显示的 `multi-user.target`。 + +``` +# systemctl list-units --type=target +UNIT LOAD ACTIVE SUB DESCRIPTION +basic.target loaded active active Basic System +cloud-config.target loaded active active Cloud-config availability +cryptsetup.target loaded active active Local Encrypted Volumes +getty.target loaded active active Login Prompts +local-fs-pre.target loaded active active Local File Systems (Pre) +local-fs.target loaded active active Local File Systems +multi-user.target loaded active active Multi-User System +network-online.target loaded active active Network is Online +network-pre.target loaded active active Network (Pre) +network.target loaded active active Network +paths.target loaded active active Paths +remote-fs.target loaded active active Remote File Systems +slices.target loaded active active Slices +sockets.target loaded active active Sockets +swap.target loaded active active Swap +sysinit.target loaded active active System Initialization +timers.target loaded active active Timers +``` + +-------------------------------------------------------------------------------- + + +via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/ + + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[way-ww](https://github.com/way-ww) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 diff --git a/translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md b/published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md similarity index 63% rename from translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md rename to published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md index 4f01447600..26abe69e74 100644 --- a/translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md +++ b/published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md @@ -1,71 +1,74 @@ -Kali Linux:在开始使用之前你必须知道的 – FOSS Post +在你开始使用 Kali Linux 之前必须知道的事情 ====== ![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1) -Kali Linux 在渗透测试和白帽子方面,是业界领先的 Linux 发行版。默认情况下,该发行版附带了大量黑客和渗透工具和软件,并且在全世界都得到了广泛认可。即使在那些甚至可能不知道 Linux 是什么的 Windows 用户中也是如此。 +Kali Linux 在渗透测试和白帽子方面是业界领先的 Linux 发行版。默认情况下,该发行版附带了大量入侵和渗透的工具和软件,并且在全世界都得到了广泛认可。即使在那些甚至可能不知道 Linux 是什么的 Windows 用户中也是如此。 -由于后者的原因,许多人都试图单独使用 Kali Linux,尽管他们甚至不了解 Linux 系统的基础知识。原因可能各不相同,有的为了玩乐,有的是为了取悦女友而伪装成黑客,有的仅仅是试图破解邻居的 WiFi 网络以免费上网。如果你打算使用 Kali Linux,所有的这些都是不好的事情。 +由于后者的原因(LCTT 译注:Windows 用户),许多人都试图单独使用 Kali Linux,尽管他们甚至不了解 Linux 系统的基础知识。原因可能各不相同,有的为了玩乐,有的是为了取悦女友而伪装成黑客,有的仅仅是试图破解邻居的 WiFi 网络以免费上网。如果你打算使用 Kali Linux,记住,所有的这些都是不好的事情。 在计划使用 Kali Linux 之前,你应该了解一些提示。 ### Kali Linux 不适合初学者 ![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-000.png?resize=850%2C478&ssl=1) -Kali Linux 默认 GNOME 桌面 -如果你是几个月前刚开始使用 Linux 的人,或者你认为自己的知识水平低于平均水平,那么 Kali Linux 就不适合你。如果你打算问“如何在 Kali 上安装 Stream?如何让我的打印机在 Kali 上工作?如何解决 Kali 上的 APT 源错误?”这些东西,那么 Kali Linux 并不适合你。 +*Kali Linux 默认 GNOME 桌面* -Kali Linux 
主要面向想要运行渗透测试的专家或想要学习成为白帽子和数字取证的人。但即使你来自后者,普通的 Kali Linux 用户在日常使用时也会遇到很多麻烦。他还被要求以非常谨慎的方式使用工具和软件,而不仅仅是“让我们安装并运行一切”。每一个工具必须小心使用,你安装的每一个软件都必须仔细检查。 +如果你是几个月前刚开始使用 Linux 的人,或者你认为自己的知识水平低于平均水平,那么 Kali Linux 就不适合你。如果你打算问“如何在 Kali 上安装 Steam?如何让我的打印机在 Kali 上工作?如何解决 Kali 上的 APT 源错误?”这些东西,那么 Kali Linux 并不适合你。 -**建议阅读:** [Linux 系统的组件是什么?][1] +Kali Linux 主要面向想要运行渗透测试套件的专家或想要学习成为白帽子和数字取证的人。但即使你属于后者,普通的 Kali Linux 用户在日常使用时也会遇到很多麻烦。他还被要求以非常谨慎的方式使用工具和软件,而不仅仅是“让我们安装并运行一切”。每一个工具必须小心使用,你安装的每一个软件都必须仔细检查。 -普通 Linux 用户无法做正常的事情。(to 校正:这里什么意思呢?)一个更好的方法是花几周时间学习 Linux 及其守护进程,服务,软件,发行版及其工作方式,然后观看几十个关于白帽子攻击的视频和课程,然后再尝试使用 Kali 来应用你学习到的东西。 +**建议阅读:** [Linux 系统的组件有什么?][1] + +普通 Linux 用户都无法自如地使用它。一个更好的方法是花几周时间学习 Linux 及其守护进程、服务、软件、发行版及其工作方式,然后观看几十个关于白帽子攻击的视频和课程,然后再尝试使用 Kali 来应用你学习到的东西。 ### 它会让你被黑客攻击 ![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-001.png?resize=850%2C478&ssl=1) -Kali Linux 入侵和测试工具 + +*Kali Linux 入侵和测试工具* 在普通的 Linux 系统中,普通用户有一个账户,而 root 用户也有一个单独的账号。但在 Kali Linux 中并非如此。Kali Linux 默认使用 root 账户,不提供普通用户账户。这是因为 Kali 中几乎所有可用的安全工具都需要 root 权限,并且为了避免每分钟要求你输入 root 密码,所以这样设计。 -当然,你可以简单地创建一个普通用户账户并开始使用它。但是,这种方式仍然不推荐,因为这不是 Kali Linux 系统设计的工作方式。然后,在使用程序,打开端口,调试软件时,你会遇到很多问题,你会发现为什么这个东西不起作用,最终却发现它是一个奇怪的权限错误。另外每次在系统上做任何事情时,你会被每次运行工具都要求输入密码而烦恼。 +当然,你可以简单地创建一个普通用户账户并开始使用它。但是,这种方式仍然不推荐,因为这不是 Kali Linux 系统设计的工作方式。使用普通用户在使用程序,打开端口,调试软件时,你会遇到很多问题,你会发现为什么这个东西不起作用,最终却发现它是一个奇怪的权限错误。另外每次在系统上做任何事情时,你会被每次运行工具都要求输入密码而烦恼。 -现在,由于你被迫以 root 用户身份使用它,因此你在系统上运行的所有软件也将以 root 权限运行。如果你不知道自己在做什么,那么这很糟糕,因为如果 Firefox 中存在漏洞,并且你访问了一个受感染的网站,那么黑客能够在你的 PC 上获得全部 root 权限并入侵你。如果你使用的是普通用户账户,则会收到限制。此外,你安装和使用的某些工具可能会在你不知情的情况下打开端口并泄露信息,因此如果你不是非常小心,人们可能会以你尝试入侵他们的方式入侵你。 +现在,由于你被迫以 root 用户身份使用它,因此你在系统上运行的所有软件也将以 root 权限运行。如果你不知道自己在做什么,那么这很糟糕,因为如果 Firefox 中存在漏洞,并且你访问了一个受感染的网站,那么黑客能够在你的 PC 上获得全部 root 权限并入侵你。如果你使用的是普通用户账户,则会受到限制。此外,你安装和使用的某些工具可能会在你不知情的情况下打开端口并泄露信息,因此如果你不是非常小心,人们可能会以你尝试入侵他们的方式入侵你。 -如果你在一些情况下访问于与 Kali Linux 相关的 Facebook 群组,你会发现这些群组中几乎有四分之一的帖子是人们在寻求帮助,因为有人入侵了他们。 +如果你曾经访问过与 Kali Linux 相关的 Facebook 群组,你会发现这些群组中几乎有四分之一的帖子是人们在寻求帮助,因为有人入侵了他们。 ### 它可以让你入狱 -Kali Linux 仅提供软件。那么,如何使用它们完全是你自己的责任。 +Kali Linux 只是提供了软件。那么,如何使用它们完全是你自己的责任。 在世界上大多数发达国家,使用针对公共 WiFi 网络或其他设备的渗透测试工具很容易让你入狱。现在不要以为你使用了 Kali 就无法被跟踪,许多系统都配置了复杂的日志记录设备来简单地跟踪试图监听或入侵其网络的人,你可能无意间成为其中的一个,那么它会毁掉你的生活。 永远不要对不属于你的设备或网络使用 Kali Linux 系统,也不要明确允许对它们进行入侵。如果你说你不知道你在做什么,在法庭上它不会被当作借口来接受。 -### 修改了内核和软件 +### 修改了的内核和软件 -Kali [基于][2] Debian(测试分支,这意味着 Kali Linux 使用滚动发布模型),因此它使用了 Debian 的大部分软件体系结构,你会发现 Kali Linux 中的大部分软件跟 Debian 中的没什么区别。 +Kali [基于][2] Debian(“测试”分支,这意味着 Kali Linux 使用滚动发布模型),因此它使用了 Debian 的大部分软件体系结构,你会发现 Kali Linux 中的大部分软件跟 Debian 中的没什么区别。 但是,Kali 修改了一些包来加强安全性并修复了一些可能的漏洞。例如,Kali 使用的 Linux 内核被打了补丁,允许在各种设备上进行无线注入。这些补丁通常在普通内核中不可用。此外,Kali Linux 不依赖于 Debian 服务器和镜像,而是通过自己的服务器构建软件包。以下是最新版本中的默认软件源: + ``` - deb http://http.kali.org/kali kali-rolling main contrib non-free - deb-src http://http.kali.org/kali kali-rolling main contrib non-free +deb http://http.kali.org/kali kali-rolling main contrib non-free +deb-src http://http.kali.org/kali kali-rolling main contrib non-free ``` 这就是为什么,对于某些特定的软件,当你在 Kali Linux 和 Fedora 中使用相同的程序时,你会发现不同的行为。你可以从 [git.kali.org][3] 中查看 Kali Linux 软件的完整列表。你还可以在 Kali Linux(GNOME)上找到我们[自己生成的已安装包列表][4]。 -更重要的是,Kali Linux 官方文档极力建议不要添加任何其他第三方软件仓库,因为 Kali Linux 是一个滚动发行版,并且依赖于 Debian 测试,由于依赖关系冲突和包钩子,所以你很可能只是添加一个新的仓库源就会破坏系统。 +更重要的是,Kali Linux 官方文档极力建议不要添加任何其他第三方软件仓库,因为 Kali Linux 是一个滚动发行版,并且依赖于 Debian 测试分支,由于依赖关系冲突和包钩子,所以你很可能只是添加一个新的仓库源就会破坏系统。 ### 不要安装 Kali Linux 
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-002.png?resize=750%2C504&ssl=1) -使用 Kali Linux 在 fosspost.org 上运行 wpscan +*使用 Kali Linux 在 fosspost.org 上运行 wpscan* 我在极少数情况下使用 Kali Linux 来测试我部署的软件和服务器。但是,我永远不敢安装它并将其用作主系统。 -如果你要将其用作主系统,那么你必须保留自己的个人文件,密码,数据以及系统上的所有内容。你还需要安装大量日常使用的软件,以解放你的生活。但正如我们上面提到的,使用 Kali Linux 是非常危险的,应该非常小心地进行,如果你被入侵了,你将丢失所有数据,并且可能会暴露给更多的人。如果你在做一些不合法的事情,你的个人信息也可用于跟踪你。如果你不小心使用这些工具,那么你甚至可能会毁掉自己的数据。 +如果你要将其用作主系统,那么你必须保留自己的个人文件、密码、数据以及系统上的所有内容。你还需要安装大量日常使用的软件,以解放你的生活。但正如我们上面提到的,使用 Kali Linux 是非常危险的,应该非常小心地进行,如果你被入侵了,你将丢失所有数据,并且可能会暴露给更多的人。如果你在做一些不合法的事情,你的个人信息也可用于跟踪你。如果你不小心使用这些工具,那么你甚至可能会毁掉自己的数据。 即使是专业的白帽子也不建议将其作为主系统安装,而是通过 USB 使用它来进行渗透测试工作,然后再回到普通的 Linux 发行版。 @@ -83,7 +86,7 @@ via: https://fosspost.org/articles/must-know-before-using-kali-linux 作者:[M.Hanny Sabbagh][a] 选题:[lujun9972][b] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181017 Browsing the web with Min, a minimalist open source web browser.md b/published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md similarity index 90% rename from translated/tech/20181017 Browsing the web with Min, a minimalist open source web browser.md rename to published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md index aac33903d9..8b0244f58b 100644 --- a/translated/tech/20181017 Browsing the web with Min, a minimalist open source web browser.md +++ b/published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md @@ -1,9 +1,11 @@ 使用极简浏览器 Min 浏览网页 ====== + > 并非所有 web 浏览器都要做到无所不能,Min 就是一个极简主义风格的浏览器。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG) -现在还有开发新的网络浏览器的需要吗?即使现在浏览器领域已经成为了寡头市场,但仍然不断涌现出各种前所未有的浏览器产品。 +现在还有开发新的 Web 浏览器的需要吗?即使现在浏览器领域已经成为了寡头市场,但仍然不断涌现出各种前所未有的浏览器产品。 [Min][1] 就是其中一个。顾名思义,Min 是一个小的浏览器,也是一个极简主义的浏览器。但它麻雀虽小五脏俱全,而且还是一个开源的浏览器,它的 Apache 2.0 许可证引起了我的注意。 @@ -29,7 +31,7 @@ Min 号称是更智能、更快速的浏览器。经过尝试以后,我觉得 Min 和其它浏览器一样,支持页面选项卡。它还有一个称为 Tasks 的功能,可以对打开的选项卡进行分组。 -[DuckDuckGo][6]是我最喜欢的搜索引擎,而 Min 的默认搜索引擎恰好就是它,这正合我意。当然,如果你喜欢另一个搜索引擎,也可以在 Min 的偏好设置中配置你喜欢的搜索引擎作为默认搜索引擎。 +[DuckDuckGo][6] 是我最喜欢的搜索引擎,而 Min 的默认搜索引擎恰好就是它,这正合我意。当然,如果你喜欢另一个搜索引擎,也可以在 Min 的偏好设置中配置你喜欢的搜索引擎作为默认搜索引擎。 Min 没有使用类似 AdBlock 这样的插件来过滤你不想看到的内容,而是使用了一个名为 [EasyList][7] 的内置的广告拦截器,你可以使用它来屏蔽脚本和图片。另外 Min 还带有一个内置的防跟踪软件。 @@ -54,7 +56,7 @@ Min 确实也有自己的缺点,例如它无法将网站添加为书签。替 ### 总结 Min 算是一个中规中矩的浏览器,它可以凭借轻量、快速的优点吸引很多极简主义的用户。但是对于追求多功能的用户来说,Min 就显得相当捉襟见肘了。 -. 
+ 所以,如果你想摆脱当今多功能浏览器的束缚,我觉得可以试用一下 Min。 @@ -65,7 +67,7 @@ via: https://opensource.com/article/18/10/min-web-browser 作者:[Scott Nesbitt][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/sources/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md b/published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md similarity index 56% rename from sources/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md rename to published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md index 9f1d3f05be..b33670d461 100644 --- a/sources/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md +++ b/published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md @@ -1,49 +1,49 @@ -Chrony – An Alternative NTP Client And Server For Unix-like Systems +Chrony:一个类 Unix 系统上 NTP 客户端和服务器替代品 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg) -In this tutorial, we will be discussing how to install and configure **Chrony** , an alternative NTP client and server for Unix-like systems. Chrony can synchronise the system clock faster with better time accuracy and it can be particularly useful for the systems which are not online all the time. Chrony is free, open source and supports GNU/Linux and BSD variants such as FreeBSD, NetBSD, macOS, and Solaris. +在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上 NTP 客户端和服务器的替代品。Chrony 可以更快的同步系统时钟,具有更好的时钟准确度,并且它对于那些不是一直在线的系统很有帮助。Chrony 是自由开源的,并且支持 GNU/Linux 和 BSD 衍生版(比如 FreeBSD、NetBSD)、macOS 和 Solaris 等。 -### Installing Chrony +### 安装 Chrony -Chrony is available in the default repositories of most Linux distributions. If you’re on Arch Linux, run the following command to install it: +Chrony 可以从大多数 Linux 发行版的默认软件库中获得。如果你使用的是 Arch Linux,运行下面的命令来安装它: ``` $ sudo pacman -S chrony ``` -On Debian, Ubuntu, Linux Mint: +在 Debian、Ubuntu、Linux Mint 上: ``` $ sudo apt-get install chrony ``` -On Fedora: +在 Fedora 上: ``` $ sudo dnf install chrony ``` -Once installed, start **chronyd.service** daemon if it is not started already: +当安装完成后,如果之前没有启动过的话需启动 `chronyd.service` 守护进程: ``` $ sudo systemctl start chronyd.service ``` -Make it to start automatically on every reboot using command: +使用下面的命令让它每次重启系统后自动运行: ``` $ sudo systemctl enable chronyd.service ``` -To verify if the Chronyd.service has been started, run: +为了确认 `chronyd.service` 已经启动,运行: ``` $ sudo systemctl status chronyd.service ``` -If everything is OK, you will see an output something like below. +如果一切正常,你将看到类似下面的输出: ``` ● chrony.service - chrony, an NTP client/server @@ -67,13 +67,13 @@ Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 91.189.89.199 Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200 ``` -As you can see, Chrony service is started and working! +可以看到,Chrony 服务已经启动并且正在工作! -### Configure Chrony +### 配置 Chrony -The NTP clients needs to know which NTP servers it should contact to get the current time. We can specify the NTP servers in the **server** or **pool** directive in the NTP configuration file. Usually, the default configuration file is **/etc/chrony/chrony.conf** or **/etc/chrony.conf** depending upon the Linux distribution version. 
For better reliability, it is recommended to specify at least three servers. +NTP 客户端需要知道它要连接到哪个 NTP 服务器来获取当前时间。我们可以直接在该 NTP 配置文件中的 `server` 或者 `pool` 项指定 NTP 服务器。通常,默认的配置文件位于 `/etc/chrony/chrony.conf` 或者 `/etc/chrony.conf`,取决于 Linux 发行版版本。为了更可靠的同步时间,建议指定至少三个服务器。 -The following lines are just an example taken from my Ubuntu 18.04 LTS server. +下面几行是我的 Ubuntu 18.04 LTS 服务器上的一个示例。 ``` [...] @@ -87,22 +87,19 @@ pool 2.ubuntu.pool.ntp.org iburst maxsources 2 [...] ``` -As you see in the above output, [**NTP Pool Project**][1] has been set as the default time server. For those wondering, NTP pool project is the cluster of time servers that provides NTP service for tens of millions clients across the world. It is the default time server for Ubuntu and most of the other major Linux distributions. +从上面的输出中你可以看到,[NTP 服务器池项目][1] 已经被设置成为了默认的时间服务器。对于那些好奇的人,NTP 服务器池项目是一个时间服务器集群,用来为全世界千万个客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。 -Here, +在这里, + * `iburst` 选项用来加速初始的同步过程 + * `maxsources` 代表 NTP 源的最大数量 - * the **iburst** option is used to speed up the initial synchronisation. - * the **maxsources** refers the maximum number of NTP sources. +请确保你选择的 NTP 服务器是同步的、稳定的、离你的位置较近的,以便使用这些 NTP 源来提升时间准确度。 +### 在命令行中管理 Chronyd +chrony 有一个命令行工具叫做 `chronyc` 用来控制和监控 chrony 守护进程(`chronyd`)。 -Please make sure that the NTP servers you have chosen are well synchronised, stable and close to your location to improve the accuracy of the time with NTP sources. - -### Manage Chronyd from command line - -Chrony has a command line utility named **chronyc** to control and monitor the **chrony** daemon (chronyd). - -To check if **chrony** is synchronized, we can use the **tracking** command as shown below. +为了检查是否 chrony 已经同步,我们可以使用下面展示的 `tracking` 命令。 ``` $ chronyc tracking @@ -121,7 +118,7 @@ Update interval : 515.1 seconds Leap status : Normal ``` -We can verify the current time sources that chrony uses with command: +我们可以使用命令确认现在 chrony 使用的时间源: ``` $ chronyc sources @@ -138,7 +135,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample ^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms ``` -Chronyc utility can find the statistics of each sources, such as drift rate and offset estimation process, using **sourcestats** command. +`chronyc` 工具可以对每个源进行统计,比如使用 `sourcestats` 命令获得漂移速率和进行偏移估计。 ``` $ chronyc sourcestats @@ -155,7 +152,7 @@ sin1.m-d.net 29 13 83m +0.049 6.060 -8466us 9940us ns2.pulsation.fr 32 17 88m +0.784 9.834 -62ms 22ms ``` -If your system is not connected to Internet, you need to notify Chrony that the system is not connected to the Internet. To do so, run: +如果你的系统没有连接到互联网,你需要告知 Chrony 系统没有连接到 互联网。为了这样做,运行: ``` $ sudo chronyc offline @@ -163,7 +160,7 @@ $ sudo chronyc offline 200 OK ``` -To verify the status of your NTP sources, simply run: +为了确认你的 NTP 源的状态,只需要运行: ``` $ chronyc activity @@ -175,16 +172,16 @@ $ chronyc activity 0 sources with unknown address ``` -As you see, all my NTP sources are down at the moment. +可以看到,我的所有源此时都是离线状态。 -Once you’re connected to the Internet, just notify Chrony that your system is back online using command: +一旦你连接到互联网,只需要使用命令告知 Chrony 你的系统已经回到在线状态: ``` $ sudo chronyc online 200 OK ``` -To view the status of NTP source(s), run: +为了查看 NTP 源的状态,运行: ``` $ chronyc activity @@ -196,18 +193,16 @@ $ chronyc activity 0 sources with unknown address ``` -For more detailed explanation of all options and parameters, refer the man pages. +所有选项和参数的详细解释,请参考其帮助手册。 ``` $ man chronyc - $ man chronyd ``` -And, that’s all for now. Hope this was useful. 
In the subsequent tutorials, we will see how to setup a local NTP server using Chrony and configure the clients to use it to synchronise time. - -Stay tuned! +这就是文章的所有内容。希望对你有所帮助。在随后的教程中,我们会看到如何使用 Chrony 启动一个本地的 NTP 服务器并且配置客户端来使用这个服务器同步时间。 +保持关注! -------------------------------------------------------------------------------- @@ -216,8 +211,8 @@ via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-u 作者:[SK][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[zianglei](https://github.com/zianglei) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md b/published/201811/20181017 Design faster web pages, part 2- Image replacement.md similarity index 87% rename from translated/tech/20181017 Design faster web pages, part 2- Image replacement.md rename to published/201811/20181017 Design faster web pages, part 2- Image replacement.md index 55631b4713..98a8719844 100644 --- a/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md +++ b/published/201811/20181017 Design faster web pages, part 2- Image replacement.md @@ -1,7 +1,7 @@ 设计更快的网页(二):图片替换 ====== -![](https://fedoramagazine.org/wp-content/uploads/2018/03/fasterwebsites2-816x345.jpg) +![](https://fedoramagazine.org/wp-content/uploads/2018/03/fasterwebsites2-816x345.jpg) 欢迎回到我们为了构建更快网页所写的系列文章。上一篇[文章][1]讨论了只通过图片压缩实现这个目标的方法。这个例子从一开始有 1.2MB 的“浏览器脂肪”,然后它减轻到了 488.9KB 的大小。但这还不够快!那么本文继续来给浏览器“减肥”。你可能在这个过程中会认为我们所做的事情有点疯狂,但一旦完成,你就会明白为什么要这么做了。 @@ -21,17 +21,15 @@ $ sudo dnf install inkscape ![Getfedora 的页面,对其中的图片做了标记][5] -这次分析更好地以图形方式完成,这也就是它从屏幕截图开始的原因。上面的截图标记了页面中的所有图形元素。Fedora 网站团队已经针对两种情况措施(也有可能是四种,这样更好)来替换图像了。社交媒体的图标变成了字体的字形,而语言选择器变成了 SVG. +这次分析以图形方式完成更好,这也就是它从屏幕截图开始的原因。上面的截图标记了页面中的所有图形元素。Fedora 网站团队已经针对两种情况措施(也有可能是四种,这样更好)来替换图像了。社交媒体的图标变成了字体的字形,而语言选择器变成了 SVG. 我们有几个可以替换的选择: - + CSS3 + 字体 + SVG + HTML5 Canvas - #### HTML5 Canvas 简单来说,HTML5 Canvas 是一种 HTML 元素,它允许你借助脚本语言(通常是 JavaScript)在上面绘图,不过它现在还没有被广泛使用。因为它可以使用脚本语言来绘制,所以这个元素也可以用来做动画。这里有一些使用 HTML Canvas 实现的实例,比如[三角形模式][6]、[动态波浪][7]和[字体动画][8]。不过,在这种情况下,似乎这也不是最好的选择。 @@ -42,7 +40,7 @@ $ sudo dnf install inkscape #### 字体 -另外一种方式是使用字体来装饰网页,[Fontawesome][9] 在这方面很流行。比如,在这个例子中你可以使用字体来替换“风味”和“旋转”的图标。这种方法有一个负面影响,但解决起来很容易,我们会在本系列的下一部分中来介绍。 +另外一种方式是使用字体来装饰网页,[Fontawesome][9] 在这方面很流行。比如,在这个例子中你可以使用字体来替换“Flavor”和“Spin”的图标。这种方法有一个负面影响,但解决起来很容易,我们会在本系列的下一部分中来介绍。 #### SVG @@ -94,13 +92,13 @@ inkscape:connector-curvature="0" /> ![Inkscape - 激活节点工具][10] -这个例子中有五个不必要的节点——就是直线中间的那些。要删除它们,你可以使用已激活的节点工具依次选中它们,并按下 **Del** 键。然后,选中这条线的定义节点,并使用工具栏的工具把它们重新做成角。 +这个例子中有五个不必要的节点——就是直线中间的那些。要删除它们,你可以使用已激活的节点工具依次选中它们,并按下 `Del` 键。然后,选中这条线的定义节点,并使用工具栏的工具把它们重新做成角。 ![Inkscape - 将节点变成角的工具][11] 如果不修复这些角,我们还有方法可以定义这条曲线,这条曲线会被保存,也就会增加文件体积。你可以手动清理这些节点,因为它无法有效的自动完成。现在,你已经为下一阶段做好了准备。 -使用_另存为_功能,并选择_优化的 SVG_。这会弹出一个窗口,你可以在里面选择移除或保留哪些成分。 +使用“另存为”功能,并选择“优化的 SVG”。这会弹出一个窗口,你可以在里面选择移除或保留哪些成分。 ![Inkscape - “另存为”“优化的 SVG”][12] @@ -121,7 +119,7 @@ insgesamt 928K -rw-rw-r--. 1 user user 112K 19. 
Feb 19:05 greyscale-pattern-opti.svg.gz ``` -这是我为可视化这个主题所做的一个小测试的输出。你可能应该看到光栅图形——PNG——已经被压缩,不能再被压缩了。而 SVG,一个 XML 文件正相反。它是文本文件,所以可被压缩至原来的四分之一不到。因此,现在它的体积要比 PNG 小 50 KB 左右。 +这是我为可视化这个主题所做的一个小测试的输出。你可能应该看到光栅图形——PNG——已经被压缩,不能再被压缩了。而 SVG,它是一个 XML 文件正相反。它是文本文件,所以可被压缩至原来的四分之一不到。因此,现在它的体积要比 PNG 小 50 KB 左右。 现代浏览器可以以原生方式处理压缩文件。所以,许多 Web 服务器都打开了 mod_deflate (Apache) 和 gzip (Nginx) 模式。这样我们就可以在传输过程中节省空间。你可以在[这儿][13]看看你的服务器是不是启用了它。 @@ -129,18 +127,16 @@ insgesamt 928K 首先,没有人希望每次都要用 Inkscape 来优化 SVG. 你可以在命令行中脱离 GUI 来运行 Inkscape,但你找不到选项来将 Inkscape SVG 转换成优化的 SVG. 用这种方式只能导出光栅图像。但是我们替代品: - * SVGO (看起来开发过程已经不活跃了) - * Scour +* SVGO (看起来开发过程已经不活跃了) +* Scour - - -本例中我们使用 scour 来进行优化。先来安装它: +本例中我们使用 `scour` 来进行优化。先来安装它: ``` $ sudo dnf install scour ``` -要想自动优化 SVG 文件,请运行 scour,就像这样: +要想自动优化 SVG 文件,请运行 `scour`,就像这样: ``` [user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids @@ -156,13 +152,13 @@ via: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement 作者:[Sirko Kemter][a] 选题:[lujun9972][b] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/gnokii/ [b]: https://github.com/lujun9972 -[1]: https://wp.me/p3XX0v-5fJ +[1]: https://linux.cn/article-10166-1.html [2]: https://fedoramagazine.org/howto-use-sudo/ [3]: https://fedoramagazine.org/?s=Inkscape [4]: https://getfedora.org diff --git a/published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md b/published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md new file mode 100644 index 0000000000..884cbebaef --- /dev/null +++ b/published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md @@ -0,0 +1,120 @@ +如何弄清 Linux 系统运行何种系统管理程序 +====== + +虽然我们经常听到系统管理器System Manager这词,但很少有人深究其确切意义。现在我们将向你展示其区别。 + +我会尽自己所能来解释清楚一切。我们大多都知道 System V 和 systemd 两种系统管理器。 System V (简写 SysV) 是老式系统所使用的古老且传统的初始化系统及系统管理器。 + +Systemd 是全新的初始化系统及系统管理器,并且已被大部分主流 Linux 发行版所采用。 + +Linux 系统中主要有三种有名而仍在使用的初始化系统。大多数 Linux 发行版都使用其中之一。 + +### 什么是初始化系统管理器? + +在基于 Linux/Unix 的操作系统中,`init` (初始化的简称) 是内核启动系统时开启的第一个进程。 + +它持有的进程 ID(PID)号为 1,其在后台一直运行着,直到关机。 + +`init` 会查找 `/etc/inittab` 文件中相应配置信息来确定系统的运行级别,然后根据运行级别在后台启动所有的其它进程和应用。 + +作为 Linux 启动过程的一部分,BIOS、MBR、GRUB 和内核进程在此进程之前就被激活了。 + +下面列出的是 Linux 的可用运行级别(存在七个运行级别,从 0 到 6)。 + + * `0`:停机 + * `1`:单用户模式 + * `2`:多用户,无 NFS(LCTT 译注:NFS 即 Network File System,网络文件系统) + * `3`:全功能多用户模式 + * `4`:未使用 + * `5`:X11(GUI – 图形用户界面) + * `6`:重启 + +下面列出的是 Linux 系统中广泛使用的三种初始化系统。 + + * System V (Sys V):是类 Unix 操作系统传统的也是首款初始化系统。 + * Upstart:基于事件驱动,是 `/sbin/init` 守护进程的替代品。 + * Systemd:是一款全新的初始化系统及系统管理器,它被所有主流的 Linux 发行版实现/采用,以替代传统的 SysV 初始化系统。 + +### 什么是 System V (Sys V)? + +System V(Sys V)是类 Unix 操作系统传统的也是首款初始化系统。`init` 是系统由内核启动期间启动的第一个进程,它是所有进程的父进程。 + +起初,大多数 Linux 发行版都使用名为 System V(SysV)的传统的初始化系统。多年来,为了解决标准版本中的设计限制,发布了几个替代的初始化系统,例如 launchd、Service Management Facility、systemd 和 Upstart。 + +但只有 systemd 最终被几个主流 Linux 发行版所采用,以替代传统的 SysV。 + +### 什么是 Upstart? + +Upstart 基于事件驱动,是 `/sbin/init` 守护进程的替代品。用来在启动期间控制任务和服务的启动,在关机期间停止它们,及在系统运行过程中监视它们。 + +它最初是为 Ubuntu 发行版开发的,但也可以在所有的 Linux 发行版中部署运行,以替代古老的 System V 初始化系统。 + +它用于 Ubuntu 9.10 到 14.10 版本和基于 RHEL 6 的系统中,之后的被 systemd 取代了。 + +### 什么是 systemd? 
+ +systemd 是一款全新的初始化系统及系统管理器,它被所有主流的 Linux 发行版实现/采用,以替代传统的 SysV 初始化系统。 + +systemd 与 SysV 和 LSB(LCTT 译注:Linux Standards Base) 初始化脚本兼容。它可以作为 SysV 初始化系统的直接替代品。其是内核启动的第一个进程并占有数字 1 的 PID,它是所有进程的父进程。 + +Fedora 15 是第一个采用 systemd 而不是 upstart 的发行版。[systemctl][3] 是一款命令行工具,它是管理 systemd 守护进程/服务(如 `start`、`restart`、`stop`、`enable`、`disable`、`reload` 和 `status`)的主要工具。 + +systemd 使用 `.service` 文件而不是(SysV 初始化系统使用的) bash 脚本。systemd 把所有守护进程按顺序排列到自己 Cgroups (LCTT 译注:Cgroups 是 control groups 的缩写,是 Linux 内核提供的一种可以限制、记录、隔离进程组所使用的物理资源,如:cpu、memory、IO 等的机制。最初由 Google 的工程师提出,后来被整合进 Linux 内核。Cgroups 也是 LXC 为实现虚拟化所使用的资源管理手段,可以说没有 cgroups 就没有 LXC)中,所以通过查看 `/cgroup/systemd` 文件就可以查看系统层次结构。 + +### 在 Linux 上如何识别出系统管理器 + +在系统上运行如下命令来查看运行着什么系统管理器: + +(LCTT 译注:原文繁冗啰嗦,翻译时进行了裁剪整理。) + +#### 方法 1:使用 ps 命令 + +`ps` – 显示当前进程快照。`ps` 会显示选定的活动进程的信息。其输出不能确切区分出是 System V(SysV) 还是 upstart,所以我建议使用其它方法。 + +``` +# ps -p1 | grep "init\|upstart\|systemd" + 1 ? 00:00:00 init +``` + +#### 方法 2:使用 rpm 命令 + +RPM 即 Red Hat Package Manager (红帽包管理),是一款功能强大的[安装包管理][1]命令行工具,在基于 Red Hat 的发行版中使用,如 RHEL、CentOS、Fedora、openSUSE 和 Mageia。此工具可以在系统/服务上对软件进行安装、更新、删除、查询及验证等操作。通常 RPM 文件都带有 `.rpm` 后缀。 + +RPM 会使用必要的库和依赖库来构建软件,并且不会与系统上安装的其它包冲突。 + +``` +# rpm -qf /sbin/init +SysVinit-2.86-17.el5 +``` + +#### 方法 3:使用 /sbin/init 文件 + +`/sbin/init` 程序会将根文件系统从内存加载或切换到磁盘。 + +这是启动过程的主要部分。这个进程开始时的运行级别为 “N”(无)。`/sbin/init` 程序会按照 `/etc/inittab` 配制文件的描述来初始化系统。 + +``` +# /sbin/init --version +init (upstart 0.6.5) +Copyright (C) 2010 Canonical Ltd. + +This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +``` + + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-determine-which-init-system-manager-is-running-on-linux-system/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[runningwater](https://github.com/runningwater) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/package-management/ +[2]: https://www.2daygeek.com/rpm-command-examples/ +[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ diff --git a/translated/tech/20181019 Edit your videos with Pitivi on Fedora.md b/published/201811/20181019 Edit your videos with Pitivi on Fedora.md similarity index 89% rename from translated/tech/20181019 Edit your videos with Pitivi on Fedora.md rename to published/201811/20181019 Edit your videos with Pitivi on Fedora.md index 09c36fa71f..a9c25180fb 100644 --- a/translated/tech/20181019 Edit your videos with Pitivi on Fedora.md +++ b/published/201811/20181019 Edit your videos with Pitivi on Fedora.md @@ -1,10 +1,11 @@ -在 Fedora 上使用 Pitivi 编辑你的视频 +在 Fedora 上使用 Pitivi 编辑视频 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/10/pitivi-816x346.png) -想制作一部你本周末冒险的视频吗?视频编辑有很多选择。但是,如果你在寻找一个容易上手的视频编辑器,并且也可以在官方 Fedora 仓库中找到,请尝试一下[Pitivi][1]。 -Pitivi 是一个使用 GStreamer 框架的开源非线性视频编辑器。在 Fedora 下开箱即用,Pitivi 支持 OGG、WebM 和一系列其他格式。此外,通过 gstreamer 插件可以获得更多视频格式支持。Pitivi 也与 GNOME 桌面紧密集成,因此相比其他新的程序,它的 UI 在 Fedora Workstation 上会感觉很熟悉。 +想制作一部你本周末冒险的视频吗?视频编辑有很多选择。但是,如果你在寻找一个容易上手的视频编辑器,并且也可以在官方 Fedora 仓库中找到,请尝试一下 [Pitivi][1]。 + +Pitivi 是一个使用 GStreamer 框架的开源非线性视频编辑器。在 Fedora 下开箱即用,Pitivi 支持 OGG、WebM 和一系列其他格式。此外,通过 GStreamer 插件可以获得更多视频格式支持。Pitivi 也与 GNOME 桌面紧密集成,因此相比其他新的程序,它的 UI 在 Fedora Workstation 上会感觉很熟悉。 ### 
在 Fedora 上安装 Pitivi @@ -20,7 +21,7 @@ sudo dnf install pitivi ### 基本编辑 -Pitivi 内置了多种工具,可以快速有效地编辑剪辑。只需将视频、音频和图像导入 Pitivi 媒体库,然后将它们拖到时间线上即可。此外,除了时间线上的简单淡入淡出过渡之外,pitivi 还允许你轻松地将剪辑的各个部分分割、修剪和分组。 +Pitivi 内置了多种工具,可以快速有效地编辑剪辑。只需将视频、音频和图像导入 Pitivi 媒体库,然后将它们拖到时间线上即可。此外,除了时间线上的简单淡入淡出过渡之外,Pitivi 还允许你轻松地将剪辑的各个部分分割、修剪和分组。 ![][3] @@ -40,7 +41,7 @@ via: https://fedoramagazine.org/edit-your-videos-with-pitivi-on-fedora/ 作者:[Ryan Lerch][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181019 How to use Pandoc to produce a research paper.md b/published/201811/20181019 How to use Pandoc to produce a research paper.md similarity index 100% rename from published/20181019 How to use Pandoc to produce a research paper.md rename to published/201811/20181019 How to use Pandoc to produce a research paper.md diff --git a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md similarity index 83% rename from translated/talk/20181019 What is an SRE and how does it relate to DevOps.md rename to published/201811/20181019 What is an SRE and how does it relate to DevOps.md index 80700d6fb9..03bd773fa7 100644 --- a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md +++ b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md @@ -1,15 +1,15 @@ 什么是 SRE?它和 DevOps 是怎么关联的? ===== -大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 +> 大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) -虽然站点可靠性工程师(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 +虽然站点可靠性工程师site reliability engineer(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 ### 什么是站点可靠性工程? 
-谷歌的几个工程师写的《 [SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” +谷歌的几个工程师写的《[SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” 虽然系统管理员从很久之前就在写代码,但是过去的很多时候系统管理团队是手动管理机器的。当时他们管理的机器可能有几十台或者上百台,不过当这个数字涨到了几千甚至几十万的时候,就不能简单的靠人去解决问题了。规模如此大的情况下,很明显应该用代码去管理机器(以及机器上运行的软件)。 @@ -19,13 +19,13 @@ ### SRE 和 DevOps -站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“devs”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 +站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“dev”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在将服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 这种情况会导致大量失衡。开发和运维的目标总是不一致 —— 开发希望用户体验到“最新最棒”的代码,但是运维想要的是变更尽量少的稳定系统。运维是这样假定的,任何变更都可能引发不稳定,而不做任何变更的系统可以一直保持稳定。(减少软件的变更次数并不是避免故障的唯一因素,认识到这一点很重要。例如,虽然你的 web 应用保持不变,但是当用户数量涨到十倍时,服务可能就会以各种方式出问题。) DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开发团队时刻都想把新代码部署上线,那么他们也必须对新代码引起的故障负责。就像亚马逊的 [Werner Vogels 说的][4]那样,“谁开发,谁运维”(生产环境)。但是开发人员已经有一大堆问题了。他们不断的被推动着去开发老板要的产品功能。再让他们去了解基础设施,包括如何部署、配置还有监控服务,这对他们的要求有点太多了。所以就需要 SRE 了。 -开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师,图形设计师,前端工程师,后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署,配置,监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 +开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师、图形设计师、前端工程师、后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署、配置、监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 所以 SRE 不仅仅是“写代码的运维工程师”。相反,SRE 是开发团队的成员,他们有着不同的技能,特别是在发布部署、配置管理、监控、指标等方面。但是,就像前端工程师必须知道如何从数据库中获取数据一样,SRE 也不是只负责这些领域。为了提供更容易升级、管理和监控的产品,整个团队共同努力。 @@ -37,7 +37,7 @@ DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开 让开发人员做 SRE 最显著的优点是,团队规模变大的时候也能很好的扩展。而且,开发人员将会全面地了解应用的特性。但是,许多初创公司的基础设施包含了各种各样的 SaaS 产品,这种多样性在基础设施上体现的最明显,因为连基础设施本身也是多种多样。然后你们在某个基础设施上引入指标系统、站点监控、日志分析、容器等等。这些技术解决了一部分问题,也增加了复杂度。开发人员除了要了解应用程序的核心技术(比如开发语言),还要了解上述所有技术和服务。最终,掌握所有的这些技术让人无法承受。 -另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“ 三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 +另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 有一个关键信息我还没提到:其他的工程师。他们可能很渴望了解发布部署的原理,也很想尽全力学会使用指标系统。而且,雇一个 SRE 可不是一件简单的事儿。因为你要找的是一个既懂系统管理又懂软件工程的人。(我之所以明确地说软件工程而不是说“能写代码”,是因为除了写代码之外软件工程还包括很多东西,比如编写良好的测试或文档。) @@ -54,7 +54,7 @@ via: https://opensource.com/article/18/10/sre-startup 作者:[Craig Sebenik][a] 选题:[lujun9972][b] 译者:[BeliteX](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181022 5 tips for choosing the right open source database.md b/published/201811/20181022 5 tips for choosing the right open source database.md similarity index 89% rename from translated/tech/20181022 5 tips for choosing the right open source database.md rename to published/201811/20181022 5 tips for choosing the right open source database.md index 1111786922..9f0447086d 100644 --- a/translated/tech/20181022 5 tips for choosing the right open source database.md +++ b/published/201811/20181022 5 tips for choosing the right open source database.md @@ -1,10 +1,11 @@ 正确选择开源数据库的 5 个技巧 ====== + > 对关键应用的选择不容许丝毫错误。 
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8) -你或许会遇到需要选择适合的开源数据库的情况。但这无论对于开源方面的老手或是新手,都是一项艰巨的任务。 +你或许会遇到需要选择合适的开源数据库的情况。但这无论对于开源方面的老手或是新手,都是一项艰巨的任务。 在过去的几年中,采用开源技术的企业越来越多。面对这样的趋势,众多开源应用公司都纷纷承诺自己提供的解决方案能够各种问题、适应各种负载。但这些承诺不能轻信,在开源应用上的选择是重要而艰难的,尤其是数据库这种关键的应用。 @@ -20,7 +21,7 @@ ### 了解你的工作负载 -尽管开源数据库技术的功能越来越丰富,但这些新加入的功能都不太具有普适性。譬如 MongoDB 新增了事务的支持、MySQL 新增了 JSON 存储的功能等等。目前开源数据库的普遍趋势是不断加入新的功能,但很多人的误区却在于没有选择最适合的工具来完成自己的工作——这样的人或许是一个自大的开发者,又或许是一个视野狭窄的主管——最终导致公司业务上的损失。最致命的是,在业务初期,使用了不适合的工具往往也可以顺利地完成任务,但随着业务的增长,很快就会到达瓶颈,尽管这个时候还可以替换更合适的工具,但成本就比较高了。 +尽管开源数据库技术的功能越来越丰富,但这些新加入的功能都不太具有普适性。譬如 MongoDB 新增了事务的支持、MySQL 新增了 JSON 存储的功能等等。目前开源数据库的普遍趋势是不断加入新的功能,但很多人的误区却在于没有选择最适合的工具来完成自己的工作 —— 这样的人或许是一个自大的开发者,又或许是一个视野狭窄的主管 —— 最终导致公司业务上的损失。最致命的是,在业务初期,使用了不适合的工具往往也可以顺利地完成任务,但随着业务的增长,很快就会到达瓶颈,尽管这个时候还可以替换更合适的工具,但成本就比较高了。 例如,如果你需要的是数据分析仓库,关系数据库可能不是一个适合的选择;如果你处理事务的应用要求严格的数据完整性和一致性,就不要考虑 NoSQL 了。 @@ -30,7 +31,7 @@ Battery Ventures 是一家专注于技术的投资公司,最近推出了一个用于跟踪最受欢迎开源项目的 [BOSS 指数][2] 。它提供了对一些被广泛采用的开源项目和活跃的开源项目的详细情况。其中,数据库技术毫无悬念地占据了榜单的主导地位,在前十位之中占了一半。这个 BOSS 指数对于刚接触开源数据库领域的人来说,这是一个很好的切入点。当然,开源技术的提供者也会针对很多常见的典型问题给出对应的解决方案。 -我认为,你想要做的事情很可能已经有人解决过了。即使这些先行者的解决方案不一定完全契合你的需求,但也可以从他们成功或失败案例中根据你自己的需求修改得出合适的解决方案。 +我认为,你想要做的事情很可能已经有人解决过了。即使这些先行者的解决方案不一定完全契合你的需求,但也可以从他们成功或失败的案例中根据你自己的需求修改得出合适的解决方案。 如果你采用了一个最前沿的技术,这就是你探索的好机会了。如果你的工作负载刚好适合新的开源数据库技术,放胆去尝试吧。第一个吃螃蟹的人总是会得到意外的挑战和收获。 @@ -46,7 +47,7 @@ Battery Ventures 是一家专注于技术的投资公司,最近推出了一个 ### 有疑问,找专家 -如果你仍然不确定数据库选择得是否合适,可以在论坛、网站或者与软件的提供者处商讨。研究各种开源数据库是否满足自己的需求是一件很有意义的事,因为总会发现你从不知道的技术。而开源社区就是分享这些信息的地方。 +如果你仍然不确定数据库选择的是否合适,可以在论坛、网站或者与软件的提供者处商讨。研究各种开源数据库是否满足自己的需求是一件很有意义的事,因为总会发现你从不知道的技术。而开源社区就是分享这些信息的地方。 当你接触到开源软件和软件提供者时,有一件重要的事情需要注意。很多公司都有开放的核心业务模式,鼓励采用他们的数据库软件。你可以只接受他们的部分建议和指导,然后用你自己的能力去研究和探索替代方案。 @@ -62,7 +63,7 @@ via: https://opensource.com/article/18/10/tips-choosing-right-open-source-databa 作者:[Barrett Chambers][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md b/published/201811/20181022 How to set up WordPress on a Raspberry Pi.md similarity index 52% rename from translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md rename to published/201811/20181022 How to set up WordPress on a Raspberry Pi.md index 5153307eee..a3ca6d17ef 100644 --- a/translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md +++ b/published/201811/20181022 How to set up WordPress on a Raspberry Pi.md @@ -1,38 +1,39 @@ -如何在 Rasspberry Pi 上搭建 WordPress +如何在树莓派上搭建 WordPress ====== -这篇简单的教程可以让你在 Rasspberry Pi 上运行你的 WordPress 网站。 +> 这篇简单的教程可以让你在树莓派上运行你的 WordPress 网站。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W) WordPress 是一个非常受欢迎的开源博客平台和内容管理平台(CMS)。它很容易搭建,而且还有一个活跃的开发者社区构建网站、创建主题和插件供其他人使用。 -虽然通过一键式 WordPress 设置获得托管包很容易,但通过命令行就可以在 Linux 服务器上设置自己的托管包,而且 Raspberry Pi 是一种用来尝试它并顺便学习一些东西的相当好的途径。 +虽然通过一键式 WordPress 设置获得托管包很容易,但也可以简单地通过命令行在 Linux 服务器上设置自己的托管包,而且树莓派是一种用来尝试它并顺便学习一些东西的相当好的途径。 -使用一个 web 堆栈的四个部分是 Linux、Apache、MySQL 和 PHP。这里是你对它们每一个需要了解的。 +一个经常使用的 Web 套件的四个部分是 Linux、Apache、MySQL 和 PHP。这里是你对它们每一个需要了解的。 ### Linux -Raspberry Pi 上运行的系统是 Raspbian,这是一个基于 Debian,优化地可以很好的运行在 Raspberry Pi 硬件上的 Linux 发行版。你有两个选择:桌面版或是精简版。桌面版有一个熟悉的桌面还有很多教育软件和编程工具,像是 LibreOffice 
套件、Mincraft,还有一个 web 浏览器。精简版本没有桌面环境,因此它只有命令行以及一些必要的软件。 +树莓派上运行的系统是 Raspbian,这是一个基于 Debian,为运行在树莓派硬件上而优化的很好的 Linux 发行版。你有两个选择:桌面版或是精简版。桌面版有一个熟悉的桌面还有很多教育软件和编程工具,像是 LibreOffice 套件、Mincraft,还有一个 web 浏览器。精简版本没有桌面环境,因此它只有命令行以及一些必要的软件。 这篇教程在两个版本上都可以使用,但是如果你使用的是精简版,你必须要有另外一台电脑去访问你的站点。 ### Apache -Apache 是一个受欢迎的 web 服务器应用,你可以安装在你的 Raspberry Pi 上伺服你的 web 页面。就其自身而言,Apache 可以通过 HTTP 提供静态 HTML 文件。使用额外的模块,它也可以使用像是 PHP 的脚本语言提供动态网页。 +Apache 是一个受欢迎的 web 服务器应用,你可以安装在你的树莓派上伺服你的 web 页面。就其自身而言,Apache 可以通过 HTTP 提供静态 HTML 文件。使用额外的模块,它也可以使用像是 PHP 的脚本语言提供动态网页。 安装 Apache 非常简单。打开一个终端窗口,然后输入下面的命令: ``` sudo apt install apache2 -y ``` -Apache 默认放了一个测试文件在一个 web 目录中,你可以从你的电脑或是你网络中的其他计算机进行访问。只需要打开 web 浏览器,然后输入地址 ****。或者(特别是你使用的是 Raspbian Lite 的话)输入你的 Pi 的 IP 地址代替 **localhost**。你应该会在你的浏览器窗口中看到这样的内容: + +Apache 默认放了一个测试文件在一个 web 目录中,你可以从你的电脑或是你网络中的其他计算机进行访问。只需要打开 web 浏览器,然后输入地址 ``。或者(特别是你使用的是 Raspbian Lite 的话)输入你的树莓派的 IP 地址代替 `localhost`。你应该会在你的浏览器窗口中看到这样的内容: ![](https://opensource.com/sites/default/files/uploads/apache-it-works.png) 这意味着你的 Apache 已经开始工作了! -这个默认的网页仅仅是你文件系统里的一个文件。它在你本地的 **/var/www/html/index/html**。你可以使用 [Leafpad][2] 文本编辑器写一些 HTML 去替换这个文件的内容。 +这个默认的网页仅仅是你文件系统里的一个文件。它在你本地的 `/var/www/html/index/html`。你可以使用 [Leafpad][2] 文本编辑器写一些 HTML 去替换这个文件的内容。 ``` cd /var/www/html/ @@ -43,27 +44,27 @@ sudo leafpad index.html ### MySQL -MySQL (显然是 "my S-Q-L" 或者 "my sequel") 是一个很受欢迎的数据库引擎。就像 PHP,它被非常广泛的应用于网页服务,这也是为什么像 WordPress 一样的项目选择了它,以及这些项目是为何如此受欢迎。 +MySQL(读作 “my S-Q-L” 或者 “my sequel”)是一个很受欢迎的数据库引擎。就像 PHP,它被非常广泛的应用于网页服务,这也是为什么像 WordPress 一样的项目选择了它,以及这些项目是为何如此受欢迎。 -在一个终端窗口中输入以下命令安装 MySQL 服务: +在一个终端窗口中输入以下命令安装 MySQL 服务(LCTT 译注:实际上安装的是 MySQL 分支 MariaDB): ``` sudo apt-get install mysql-server -y ``` -WordPress 使用 MySQL 存储文章、页面、用户数据、还有许多其他的内容。 +WordPress 使用 MySQL 存储文章、页面、用户数据、还有许多其他的内容。 ### PHP -PHP 是一个预处理器:它是在服务器通过网络浏览器接受网页请求是运行的代码。它解决那些需要展示在网页上的内容,然后发送这些网页到浏览器上。,不像静态的 HTML,PHP 能在不同的情况下展示不同的内容。PHP 是一个在 web 上非常受欢迎的语言;很多像 Facebook 和 Wikipedia 的项目都使用 PHP 编写。 +PHP 是一个预处理器:它是在服务器通过网络浏览器接受网页请求是运行的代码。它解决那些需要展示在网页上的内容,然后发送这些网页到浏览器上。不像静态的 HTML,PHP 能在不同的情况下展示不同的内容。PHP 是一个在 web 上非常受欢迎的语言;很多像 Facebook 和 Wikipedia 的项目都使用 PHP 编写。 -安装 PHP 和 MySQL 的插件: +安装 PHP 和 MySQL 的插件: ``` sudo apt-get install php php-mysql -y ``` -删除 **index.html**,然后创建 **index.php**: +删除 `index.html`,然后创建 `index.php`: ``` sudo rm index.html @@ -82,16 +83,16 @@ sudo leafpad index.php ### WordPress -你可以使用 **wget** 命令从 [wordpress.org][3] 下载 WordPress。最新的 WordPress 总是使用 [wordpress.org/latest.tar.gz][4] 这个网址,所以你可以直接抓取这些文件,而无需到网页里面查看,现在的版本是 4.9.8。 +你可以使用 `wget` 命令从 [wordpress.org][3] 下载 WordPress。最新的 WordPress 总是使用 [wordpress.org/latest.tar.gz][4] 这个网址,所以你可以直接抓取这些文件,而无需到网页里面查看,现在的版本是 4.9.8。 -确保你在 **/var/www/html** 目录中,然后删除里面的所有内容: +确保你在 `/var/www/html` 目录中,然后删除里面的所有内容: ``` cd /var/www/html/ sudo rm * ``` -使用 **wget** 下载 WordPress,然后提取里面的内容,并移动提取的 WordPress 目录中的内容移动到 **html** 目录下: +使用 `wget` 下载 WordPress,然后提取里面的内容,并移动提取的 WordPress 目录中的内容移动到 `html` 目录下: ``` sudo wget http://wordpress.org/latest.tar.gz @@ -99,13 +100,13 @@ sudo tar xzf latest.tar.gz sudo mv wordpress/* . ``` -现在可以删除压缩包和空的 **wordpress** 目录: +现在可以删除压缩包和空的 `wordpress` 目录了: ``` sudo rm -rf wordpress latest.tar.gz ``` -运行 **ls** 或者 **tree -L 1** 命令显示 WordPress 项目下包含的内容: +运行 `ls` 或者 `tree -L 1` 命令显示 WordPress 项目下包含的内容: ``` . @@ -132,9 +133,9 @@ sudo rm -rf wordpress latest.tar.gz 3 directories, 16 files ``` -这是 WordPress 的默认安装源。在 **wp-content** 目录中,你可以编辑你的自定义安装。 +这是 WordPress 的默认安装源。在 `wp-content` 目录中,你可以编辑你的自定义安装。 -你现在应该把所有文件的所有权改为 Apache 用户: +你现在应该把所有文件的所有权改为 Apache 的运行用户 `www-data`: ``` sudo chown -R www-data: . 
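
# 上面的 chown 把整个目录递归交给 Apache 的运行用户 www-data,这样 WordPress 才能写入上传的文件并自行更新
# (可选)下面这行只是一个假设性的检查示例,用来抽查所有权是否已经递归生效,所列文件名仅作演示
ls -ld index.php wp-content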
@@ -152,24 +153,27 @@ sudo mysql_secure_installation 你将会被问到一系列的问题。这里原来没有设置密码,但是在下一步你应该设置一个。确保你记住了你输入的密码,后面你需要使用它去连接你的 WordPress。按回车确认下面的所有问题。 -当它完成之后,你将会看到 "All done!" 和 "Thanks for using MariaDB!" 的信息。 +当它完成之后,你将会看到 “All done!” 和 “Thanks for using MariaDB!” 的信息。 -在终端窗口运行 **mysql** 命令: +在终端窗口运行 `mysql` 命令: ``` sudo mysql -uroot -p ``` -输入你创建的 root 密码。你将看到 “Welcome to the MariaDB monitor.” 的欢迎信息。在 **MariaDB [(none)] >** 提示处使用以下命令,为你 WordPress 的安装创建一个数据库: + +输入你创建的 root 密码(LCTT 译注:不是 Linux 系统的 root 密码,是 MySQL 的 root 密码)。你将看到 “Welcome to the MariaDB monitor.” 的欢迎信息。在 “MariaDB [(none)] >” 提示处使用以下命令,为你 WordPress 的安装创建一个数据库: ``` create database wordpress; ``` + 注意声明最后的分号,如果命令执行成功,你将看到下面的提示: ``` Query OK, 1 row affected (0.00 sec) ``` -把 数据库权限交给 root 用户在声明的底部输入密码: + +把数据库权限交给 root 用户在声明的底部输入密码: ``` GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD'; @@ -181,13 +185,13 @@ GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPAS FLUSH PRIVILEGES; ``` -按 **Ctrl+D** 退出 MariaDB 提示,返回到 Bash shell。 +按 `Ctrl+D` 退出 MariaDB 提示符,返回到 Bash shell。 ### WordPress 配置 -在你的 Raspberry Pi 打开网页浏览器,地址栏输入 ****。选择一个你想要在 WordPress 使用的语言,然后点击 **继续**。你将会看到 WordPress 的欢迎界面。点击 **让我们开始吧** 按钮。 +在你的 树莓派 打开网页浏览器,地址栏输入 `http://localhost`。选择一个你想要在 WordPress 使用的语言,然后点击“Continue”。你将会看到 WordPress 的欢迎界面。点击 “Let's go!” 按钮。 -按照下面这样填写基本的站点信息: +按照下面这样填写基本的站点信息: ``` Database Name:      wordpress @@ -197,22 +201,23 @@ Database Host:      localhost Table Prefix:       wp_ ``` -点击 **提交** 继续,然后点击 **运行安装**。 +点击 “Submit” 继续,然后点击 “Run the install”。 ![](https://opensource.com/sites/default/files/uploads/wp-info.png) -按下面的格式填写:为你的站点设置一个标题、创建一个用户名和密码、输入你的 email 地址。点击 **安装 WordPress** 按钮,然后使用你刚刚创建的账号登录,你现在已经登录,而且你的站点已经设置好了,你可以在浏览器地址栏输入 **** 查看你的网站。 +按下面的格式填写:为你的站点设置一个标题、创建一个用户名和密码、输入你的 email 地址。点击 “Install WordPress” 按钮,然后使用你刚刚创建的账号登录,你现在已经登录,而且你的站点已经设置好了,你可以在浏览器地址栏输入 `http://localhost/wp-admin` 查看你的网站。 ### 永久链接 -更改你的永久链接,使得你的 URLs 更加友好是一个很好的想法。 +更改你的永久链接设置,使得你的 URL 更加友好是一个很好的想法。 -要这样做,首先登录你的 WordPress ,进入仪表盘。进入 **设置**,**永久链接**。选择 **文章名** 选项,然后点击 **保存更改**。接着你需要开启 Apache 的 **改写** 模块。 +要这样做,首先登录你的 WordPress ,进入仪表盘。进入 “Settings”,“Permalinks”。选择 “Post name” 选项,然后点击 “Save Changes”。接着你需要开启 Apache 的 `rewrite` 模块。 ``` sudo a2enmod rewrite ``` -你还需要告诉虚拟托管服务,站点允许改写请求。为你的虚拟主机编辑 Apache 配置文件 + +你还需要告诉虚拟托管服务,站点允许改写请求。为你的虚拟主机编辑 Apache 配置文件: ``` sudo leafpad /etc/apache2/sites-available/000-default.conf @@ -226,7 +231,7 @@ sudo leafpad /etc/apache2/sites-available/000-default.conf ``` -确保其中有像这样的内容 **< VirtualHost \*:80>** +确保其中有像这样的内容 ``: ``` @@ -244,17 +249,16 @@ sudo systemctl restart apache2 ### 下一步? 
-WordPress 是可以高度自定义的。在网站顶部横幅处点击你的站点名,你就会进入仪表盘,。在这里你可以修改主题、添加页面和文章、编辑菜单、添加插件、以及许多其他的事情。 +WordPress 是可以高度自定义的。在网站顶部横幅处点击你的站点名,你就会进入仪表盘。在这里你可以修改主题、添加页面和文章、编辑菜单、添加插件、以及许多其他的事情。 -这里有一些你可以在 Raspberry Pi 的网页服务上尝试的有趣的事情: +这里有一些你可以在树莓派的网页服务上尝试的有趣的事情: * 添加页面和文章到你的网站 * 从外观菜单安装不同的主题 * 自定义你的网站主题或是创建你自己的 * 使用你的网站服务向你的网络上的其他人显示有用的信息 - -不要忘记,Raspberry Pi 是一台 Linux 电脑。你也可以使用相同的结构在运行着 Debian 或者 Ubuntu 的服务器上安装 WordPress。 +不要忘记,树莓派是一台 Linux 电脑。你也可以使用相同的结构在运行着 Debian 或者 Ubuntu 的服务器上安装 WordPress。 -------------------------------------------------------------------------------- @@ -263,7 +267,7 @@ via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi 作者:[Ben Nuttall][a] 选题:[lujun9972][b] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md b/published/201811/20181023 Getting started with functional programming in Python using the toolz library.md similarity index 67% rename from translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md rename to published/201811/20181023 Getting started with functional programming in Python using the toolz library.md index 1f2606daa2..d23a45bc77 100644 --- a/translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md +++ b/published/201811/20181023 Getting started with functional programming in Python using the toolz library.md @@ -1,7 +1,7 @@ -使用Python的toolz库开始函数式编程 +使用 Python 的 toolz 库开始函数式编程 ====== -toolz库允许你操作函数,使其更容易理解,更容易测试代码。 +> toolz 库允许你操作函数,使其更容易理解,更容易测试代码。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy) @@ -20,7 +20,11 @@ def add_one_word(words, word): 这个函数假设它的第一个参数是一个不可变的类似字典的对象,它返回一个新的类似字典的在相关位置递增的对象:这就是一个简单的频率计数器。 -但是,只有将它应用于单词流并做归纳时才有用。 我们可以使用内置模块 `functools` 中的归纳器。 `functools.reduce(function, stream, initializer)` +但是,只有将它应用于单词流并做*归纳*时才有用。 我们可以使用内置模块 `functools` 中的归纳器。 + +``` +functools.reduce(function, stream, initializer) +``` 我们想要一个函数,应用于流,并且能能返回频率计数。 @@ -30,14 +34,12 @@ def add_one_word(words, word): add_all_words = curry(functools.reduce, add_one_word) ``` -使用此版本,我们需要提供初始化程序。 但是,我们不能只将 `pyrsistent.m` 函数添加到 `curry` 函数中中; 因为这个顺序是错误的。 +使用此版本,我们需要提供初始化程序。但是,我们不能只将 `pyrsistent.m` 函数添加到 `curry` 函数中; 因为这个顺序是错误的。 ``` add_all_words_flipped = flip(add_all_words) ``` -The `flip` higher-level function returns a function that calls the original, with arguments flipped. - `flip` 这个高阶函数返回一个调用原始函数的函数,并且翻转参数顺序。 ``` @@ -46,7 +48,7 @@ get_all_words = add_all_words_flipped(pyrsistent.m()) 我们利用 `flip` 自动调整其参数的特性给它一个初始值:一个空字典。 -现在我们可以执行 `get_all_words(word_stream)` 这个函数来获取频率字典。 但是,我们如何获得一个单词流呢? Python文件是行流的。 +现在我们可以执行 `get_all_words(word_stream)` 这个函数来获取频率字典。 但是,我们如何获得一个单词流呢? 
Python 文件是按行供流的。 ``` def to_words(lines): @@ -60,9 +62,9 @@ def to_words(lines): words_from_file = toolz.compose(get_all_words, to_words) ``` -在这种情况下,组合只是使两个函数很容易阅读:首先将文件的行流应用于 `to_words`,然后将 `get_all_words` 应用于 `to_words` 的结果。 散文似乎与代码相反。 +在这种情况下,组合只是使两个函数很容易阅读:首先将文件的行流应用于 `to_words`,然后将 `get_all_words` 应用于 `to_words` 的结果。 但是文字上读起来似乎与代码执行相反。 -当我们开始认真对待可组合性时,这很重要。 有时可以将代码编写为一个单元序列,单独测试每个单元,最后将它们全部组合。 如果有几个组合元素时,组合的顺序可能就很难理解。 +当我们开始认真对待可组合性时,这很重要。有时可以将代码编写为一个单元序列,单独测试每个单元,最后将它们全部组合。如果有几个组合元素时,组合的顺序可能就很难理解。 `toolz` 库借用了 Unix 命令行的做法,并使用 `pipe` 作为执行相同操作的函数,但顺序相反。 @@ -70,17 +72,13 @@ words_from_file = toolz.compose(get_all_words, to_words) words_from_file = toolz.pipe(to_words, get_all_words) ``` -Now it reads more intuitively: Pipe the input into `to_words`, and pipe the results into `get_all_words`. On a command line, the equivalent would look like this: - 现在读起来更直观了:将输入传递到 `to_words`,并将结果传递给 `get_all_words`。 在命令行上,等效写法如下所示: ``` $ cat files | to_words | get_all_words ``` -The `toolz` library allows us to manipulate functions, slicing, dicing, and composing them to make our code easier to understand and to test. - -`toolz` 库允许我们操作函数,切片,分割和组合,以使我们的代码更容易理解和测试。 +`toolz` 库允许我们操作函数,切片、分割和组合,以使我们的代码更容易理解和测试。 -------------------------------------------------------------------------------- @@ -89,10 +87,10 @@ via: https://opensource.com/article/18/10/functional-programming-python-toolz 作者:[Moshe Zadka][a] 选题:[lujun9972][b] 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/moshez [b]: https://github.com/lujun9972 -[1]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures \ No newline at end of file +[1]: https://linux.cn/article-10222-1.html diff --git a/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md b/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md new file mode 100644 index 0000000000..70e2146853 --- /dev/null +++ b/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md @@ -0,0 +1,77 @@ +COPR 仓库中 4 个很酷的新软件(2018.10) +====== + +![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg) + +COPR 是软件的个人存储库的[集合] [1],它包含那些不在标准的 Fedora 仓库中的软件。某些软件不符合允许轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是自由开源的。COPR 可以在标准的 Fedora 包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,或者是由项目自己背书的。但是,它是尝试新的或实验性软件的一种很好的方法。 + +这是 COPR 中一组新的有趣项目。 + +[编者按:这些项目里面有一个兵不适合通过 COPR 分发,所以从本文中 也删除了。相关的评论也删除了,以免误导读者。对此带来的不便,我们深表歉意。] + +(LCTT 译注:本文后来移除了对“GitKraken”项目的介绍。) + +### Music On Console + +[Music On Console][4] 播放器(简称 mocp)是一个简单的控制台音频播放器。它有一个类似于 “Midnight Commander” 的界面,并且很容易使用。你只需进入包含音乐的目录,然后选择要播放的文件或目录。此外,mocp 提供了一组命令,允许直接从命令行进行控制。 + +![][5] + +#### 安装说明 + +该仓库目前为 Fedora 28 和 29 提供 Music On Console 播放器。要安装 mocp,请使用以下命令: + +``` +sudo dnf copr enable Krzystof/Moc +sudo dnf install moc +``` + +### cnping + +[Cnping][6] 是小型的图形化 ping IPv4 工具,可用于可视化显示 RTT 的变化。它提供了一个选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外,cnping 还提供 RTT 和丢包的基本统计数据。 + +![][7] + +#### 安装说明 + +该仓库目前为 Fedora 27、28、29 和 Rawhide 提供 cnping。要安装 cnping,请使用以下命令: + +``` +sudo dnf copr enable dreua/cnping +sudo dnf install cnping +``` + +### Pdfsandwich + +[Pdfsandwich][8] 是将文本添加到图像形式的文本 PDF 文件 (如扫描书籍) 的工具。它使用光学字符识别 (OCR) 创建一个额外的图层, 包含了原始页面已识别的文本。这对于复制和处理文本很有用。 + +#### 安装说明 + +该仓库目前为 Fedora 27、28、29、Rawhide 以及 EPEL 7 提供 pdfsandwich。要安装 pdfsandwich,请使用以下命令: + +``` +sudo dnf 
copr enable merlinm/pdfsandwich +sudo dnf install pdfsandwich +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/ + +作者:[Dominik Turecek][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org +[b]: https://github.com/lujun9972 +[1]: https://copr.fedorainfracloud.org/ +[2]: https://www.gitkraken.com/git-client +[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-gitkraken.png +[4]: http://moc.daper.net/ +[5]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-mocp.png +[6]: https://github.com/cnlohr/cnping +[7]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-cnping.png +[8]: http://www.tobias-elze.de/pdfsandwich/ diff --git a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md b/published/201811/20181024 Get organized at the Linux command line with Calcurse.md similarity index 62% rename from translated/tech/20181024 Get organized at the Linux command line with Calcurse.md rename to published/201811/20181024 Get organized at the Linux command line with Calcurse.md index 6b6622dc5a..5d18f71ad5 100644 --- a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md +++ b/published/201811/20181024 Get organized at the Linux command line with Calcurse.md @@ -1,11 +1,11 @@ 使用 Calcurse 在 Linux 命令行中组织任务 ====== -使用 Calcurse 了解你的日历和待办事项列表。 +> 使用 Calcurse 了解你的日历和待办事项列表。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT) -你是否需要复杂,功能丰富的图形或 Web 程序才能保持井井有条?我不这么认为。正确的命令行工具可以完成工作并且做得很好。 +你是否需要复杂、功能丰富的图形或 Web 程序才能保持井井有条?我不这么认为。合适的命令行工具可以完成工作并且做得很好。 当然,说出命令行这个词可能会让一些 Linux 用户感到害怕。对他们来说,命令行是未知领域。 @@ -15,54 +15,51 @@ ### 获取软件 -如果你喜欢编译代码(我通常不喜欢),你可以从[Calcurse 网站][1]获取源码。否则,根据你的 Linux 发行版获取[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下不会有错的。 +如果你喜欢编译代码(我通常不喜欢),你可以从 [Calcurse 网站][1]获取源码。否则,根据你的 Linux 发行版获取[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下不会有错的。 编译或安装 Calcurse 后(两者都不用太长时间),你就可以开始使用了。 ### 使用 Calcurse -打开终端并输入 **calcurse**。 +打开终端并输入 `calcurse`。 ![](https://opensource.com/sites/default/files/uploads/calcurse-main.png) Calcurse 的界面由三个面板组成: - * 预约(屏幕左侧) -  * 日历(右上角) -  * 待办事项清单(右下角) + * 预约Appointments(屏幕左侧) +  * 日历Calendar(右上角) +  * 待办事项清单TODO(右下角) +按键盘上的 `Tab` 键在面板之间移动。要在面板添加新项目,请按下 `a`。Calcurse 将指导你完成添加项目所需的操作。 +一个有趣的地方地是预约和日历面板配合工作。你选中日历面板并添加一个预约。在那里,你选择一个预约的日期。完成后,你回到预约面板,你就看到了。 - -按键盘上的 Tab 键在面板之间移动。要在面板添加新项目,请按下 **a**。Calcurse 将指导你完成添加项目所需的操作。 - -一个有趣的地方地预约和日历面板一起生效。你选中日历面板并添加一个预约。在那里,你选择一个预约的日期。完成后,你回到预约面板。我知道。。。 - -按下 **a** 设置开始时间,持续时间(以分钟为单位)和预约说明。开始时间和持续时间是可选的。Calcurse 在它们到期的那天显示预约。 +按下 `a` 设置开始时间、持续时间(以分钟为单位)和预约说明。开始时间和持续时间是可选的。Calcurse 在它们到期的那天显示预约。 ![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png) -一天的预约看起来像: +一天的预约看起来像这样: ![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png) -待办事项列表独立运作。选中待办面板并(再次)按下 **a**。输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。 +待办事项列表独立运作。选中待办面板并(再次)按下 `a`。输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。 ![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png) -如果你的任务有很长的描述,那么 Calcurse 会截断它。你可以使用键盘上的向上或向下箭头键浏览任务,然后按下 **v** 查看描述。 +如果你的任务有很长的描述,那么 Calcurse 会截断它。你可以使用键盘上的向上或向下箭头键浏览任务,然后按下 `v` 查看描述。 
![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png) -Calcurse 将其信息以文本形式保存在你的主目录下名为 **.calcurse** 的隐藏文件夹中,例如 **/home/scott/.calcurse**。如果 Calcurse 停止工作,那也很容易找到你的信息。 +Calcurse 将其信息以文本形式保存在你的主目录下名为 `.calcurse` 的隐藏文件夹中,例如 `/home/scott/.calcurse`。如果 Calcurse 停止工作,那也很容易找到你的信息。 ### 其他有用的功能 -Calcurse 其他的功能包括设置重复预约的功能。要执行此操作,找出要重复的预约,然后在预约面板中按下 **r**。系统会要求你设置频率(例如,每天或每周)以及你希望重复预约的时间。 +Calcurse 其他的功能包括设置重复预约的功能。要执行此操作,找出要重复的预约,然后在预约面板中按下 `r`。系统会要求你设置频率(例如,每天或每周)以及你希望重复预约的时间。 你还可以导入 [ICAL][3] 格式的日历或以 ICAL 或 [PCAL][4] 格式导出数据。使用 ICAL,你可以与其他日历程序共享数据。使用 PCAL,你可以生成日历的 Postscript 版本。 -你还可以将许多命令行参数传递给 Calcurse。你可以[在文档中][5]阅读它们。 +你还可以将许多命令行参数传递给 Calcurse。你可以[在文档中][5]了解它们。 虽然很简单,但 Calcurse 可以帮助你保持井井有条。你需要更加关注自己的任务和预约,但是你将能够更好地关注你需要做什么以及你需要做的方向。 @@ -73,7 +70,7 @@ via: https://opensource.com/article/18/10/calcurse 作者:[Scott Nesbitt][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md new file mode 100644 index 0000000000..b8cfabc248 --- /dev/null +++ b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md @@ -0,0 +1,84 @@ +监测数据库的健康和行为:有哪些重要指标? +====== + +> 对数据库的监测可能过于困难或者没有找到关键点。本文将讲述如何正确的监测数据库。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +我们没有对数据库讨论过多少。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现的很好,以至于我们单纯地信任它能把任务完成的很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos) + +### 为什么监测你的数据库? + +监测数据库的原因有很多,其中大多数原因与监测系统的任何其他部分的原因相同:了解应用程序的各个组件中发生的什么,会让你成为更了解情况的,能够做出明智决策的开发人员。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4) + +更具体地说,数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。 + +### 问题 + +最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是史高治Scrooge McDuck(LCTT 译注:史高治,唐老鸭的舅舅,以一毛不拔著称),你不会放过任何一个可用的指标。如果这是摔角狂热Wrestlemania 比赛,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标? + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt) + +### 解决方案 + +开始监测数据库的最好方式是认识一些基础的数据库指标。这些指标为理解数据库的行为创造了良好的开端。 + +### 吞吐量:数据库做了多少? + +开始检测数据库的最好方法是跟踪它所接到请求的数量。我们对数据库有较高期望;期望它能稳定的存储数据,并处理我们抛给它的所有查询,这些查询可能是一天一次大规模查询,或者是来自用户一天到晚的数百万次查询。吞吐量可以告诉我们数据库是否如我们期望的那样工作。 + +你也可以将请求按照类型(读、写、服务器端、客户端等)分组,以开始分析流量。 + +### 执行时间:数据库完成工作需要多长时间? + +这个指标看起来很明显,但往往被忽视了。你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而,参考上下文来讨论执行时间非常重要:像 InfluxDB 这样的时间序列数据库中的慢与像 MySQL 这样的关系型数据库中的慢不一样。InfluxDB 中的慢可能意味着毫秒,而 MySQL 的 `SLOW_QUERY` 变量的默认值是 10 秒。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8) + +监测执行时间和提高执行时间不一样,所以如果你的应用程序中有其他问题需要修复,那么请注意在优化上花费时间的诱惑。 + +### 并发性:数据库同时做了多少工作? + +一旦你知道数据库正在处理多少请求以及每个请求需要多长时间,你就需要添加一层复杂性以开始从这些指标中获得实际值。 + +如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是忙碌了 100 秒、10 秒,还是介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。 + +并发性还能影响延迟,这不仅包括任务完成所需的时间(执行时间),还包括任务在处理之前需要等待的时间。 + +### 利用率:数据库繁忙的时间百分比是多少? 
+ +利用率是由吞吐量、执行时间和并发性的峰值所确定的数据库可用的频率,或者数据库太忙而不能响应请求的频率。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP) + +该指标对于确定数据库的整体健康和性能特别有用。如果只能在 80% 的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。 + +### 好消息 + +监测和分析似乎非常困难,特别是因为我们大多数人不是数据库专家,我们可能没有时间去理解这些指标。但好消息是,大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库(Postgres:`pg_stats`、CouchDB:`Runtime_Statistics`、InfluxDB:`_internal` 等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。 + +### 结论 + +数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。从吞吐量、执行时间、并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx) + +你在监视你的数据库吗?你发现哪些指标有用?告诉我吧! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/database-metrics-matter + +作者:[Katy Farmer][a] +选题:[lujun9972][b] +译者:[ChiZelin](https://github.com/ChiZelin) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thekatertot +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181025 Understanding Linux Links- Part 2.md b/published/201811/20181025 Understanding Linux Links- Part 2.md new file mode 100644 index 0000000000..97e551fed5 --- /dev/null +++ b/published/201811/20181025 Understanding Linux Links- Part 2.md @@ -0,0 +1,95 @@ +理解 Linux 链接(二) +====== +> 我们继续这个系列,来看一些你所不知道的微妙之处。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/links-fikri-rasyid-7853.jpg?itok=0jBT_1M2) + +在[本系列的第一篇文章中][1],我们认识了硬链接、软链接,知道在很多时候链接是非常有用的。链接看起来比较简单,但是也有一些不易察觉的奇怪的地方需要注意。这就是我们这篇文章中要讲的。例如,像一下我们在前一篇文章中创建的指向 `libblah` 的链接。请注意,我们是如何从目标文件夹中创建链接的。 + +``` +cd /usr/local/lib +ln -s /usr/lib/libblah +``` + +这样是可以工作的,但是下面的这个例子却是不行的。 + +``` +cd /usr/lib +ln -s libblah /usr/local/lib +``` + +也就是说,从原始文件夹内到目标文件夹之间的链接将不起作用。 + +出现这种情况的原因是 `ln` 会把它当作是你在 `/usr/local/lib` 中创建一个到 `/usr/local/lib` 的链接,并在 `/usr/local/lib` 中创建了从 `libblah` 到 `libblah` 的一个链接。这是因为所有链接文件获取的是文件的名称(`libblah),而不是文件的路径,最终的结果将会产生一个坏的链接。 + +然而,请看下面的这种情况。 + +``` +cd /usr/lib +ln -s /usr/lib/libblah /usr/local/lib +``` + +是可以工作的。奇怪的事情又来了,不管你在文件系统的任何位置执行这个指令,它都可以好好的工作。使用绝对路径,也就是说,指定整个完整的路径,从根目录(`/`)开始到需要的文件或者是文件夹,是最好的实现方式。 + +其它需要注意的事情是,只要 `/usr/lib` 和 `/usr/local/lib` 在一个分区上,做一个如下的硬链接: + +``` +cd /usr/lib +ln libblah /usr/local/lib +``` + +也是可以工作的,因为硬链接不依赖于指向文件系统内的文件来工作。 + +如果硬链接不起作用,那么可能是你想跨分区之间建立一个硬链接。就比如说,你有分区 A 上有文件 `fileA` ,并且把这个分区挂载到 `/path/to/partitionA/directory` 目录,而你又想从 `fileA` 链接到分区 B 上 `/path/to/partitionB/directory` 目录,这样是行不通的。 + +``` +ln /path/to/partitionA/directory/file /path/to/partitionB/directory +``` + +正如我们之前说的一样,硬链接是分区表中指向的是同一个分区的数据的条目,你不能把一个分区表的条目指向另一个分区上的数据,这种情况下,你只能选择创建一个软链接: + +``` +ln -s /path/to/partitionA/directory/file /path/to/partitionB/directory +``` + +另一个软链接能做到,而硬链接不能的是链接到一个目录。 + +``` +ln -s /path/to/some/directory /path/to/some/other/directory +``` + +这将在 `/path/to/some/other/directory` 中创建 `/path/to/some/directory` 的链接,没有任何问题。 + +当你使用硬链接做同样的事情的时候,会提示你一个错误,说不允许那么做。而不允许这么做的原因量会导致无休止的递归:如果你在目录 A 中有一个目录 B,然后你在目录 B 中链接 A,就会出现同样的情况,在目录 A 中,目录 A 包含了目录 B,而在目录 B 中又包含了 A,然后又包含了 B,等等无穷无尽。 + +当然你可以在递归中使用软链接,但你为什么要那样做呢? + +### 我应该使用硬链接还是软链接呢? 
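
在权衡之前,先把两种链接的创建命令放在一起回顾一下。下面只是一个示意性的小例子,其中的文件路径纯属假设:

```
# 硬链接:新的目录项直接指向磁盘上同一份数据,因此必须和原文件在同一个分区内
ln /path/to/fileA /path/to/hardlink-to-fileA

# 软链接:新建一个指向路径名的特殊文件,可以跨分区,也可以指向目录
ln -s /path/to/fileA /path/to/softlink-to-fileA
```
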
+ +通常,你可以在任何地方使用软链接做任何事情。实际上,在有些情况下你只能使用软链接。话说回来,硬链接的效率要稍高一些:它们占用的磁盘空间更少,访问速度更快。在大多数的机器上,你可以忽略这一点点的差异,因为:在磁盘空间越来越大,访问速度越来越快的今天,空间和速度的差异可以忽略不计。不过,如果你是在一个有小存储和低功耗的处理器上使用嵌入式系统上使用 Linux, 则可能需要考虑使用硬链接。 + +另一个使用硬链接的原因是硬链接不容易损坏。假设你有一个软链接,而你意外的移动或者删除了它指向的文件,那么你的软链接将会损坏,并指向了一个不存在的东西。这种情况是不会发生在硬链接中的,因为硬链接直接指向的是磁盘上的数据。实际上,磁盘上的空间不会被标记为空闲,除非最后一个指向它的硬链接把它从文件系统中擦除掉。 + +软链接,在另一方面比硬链接可以做更多的事情,而且可以指向任何东西,可以是文件或目录。它也可以指向不在同一个分区上的文件和目录。仅这两个不同,我们就可以做出唯一的选择了。 + +### 下期 + +现在我们已经介绍了文件和目录以及操作它们的工具,你是否已经准备好转到这些工具,可以浏览目录层次结构,可以查找文件中的数据,也可以检查目录。这就是我们下一期中要做的事情。下期见。 + +你可以通过 Linux 基金会和 edX “[Linux 简介][2]”了解更多关于 Linux 的免费课程。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/understanding-linux-links-part-2 + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[Jamkr](https://github.com/Jamkr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-10173-1.html +[2]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md new file mode 100644 index 0000000000..e3aa38e75a --- /dev/null +++ b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md @@ -0,0 +1,122 @@ +让系统崩溃的黑天鹅分类 +====== + +> 在严重的故障发生之前,找到引起问题的异常事件,并修复它。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/black-swan-pair_0.png?itok=MkshwqVg) + +黑天鹅Black swan用来比喻造成严重影响的小概率事件(比如 2008 年的金融危机)。在生产环境的系统中,黑天鹅是指这样的事情:它引发了你不知道的问题,造成了重大影响,不能快速修复或回滚,也不能用值班说明书上的其他标准响应来解决。它是事发几年后你还在给新人说起的事件。 + +从定义上看,黑天鹅是不可预测的,不过有时候我们能找到其中的一些模式,针对有关联的某一类问题准备防御措施。 + +例如,大部分故障的直接原因是变更(代码、环境或配置)。虽然这种方式触发的 bug 是独特的、不可预测的,但是常见的金丝雀发布对避免这类问题有一定的作用,而且自动回滚已经成了一种标准止损策略。 + +随着我们的专业性不断成熟,一些其他的问题也正逐渐变得容易理解,被归类到某种风险并有普适的预防策略。 + +### 公布出来的黑天鹅事件 + +所有科技公司都有生产环境的故障,只不过并不是所有公司都会分享他们的事故分析。那些公开讨论事故的公司帮了我们的忙。下列事故都描述了某一类问题,但它们绝对不是只一个孤例。我们的系统中都有黑天鹅在潜伏着,只是有些人还不知道而已。 + +#### 达到上限 + +达到任何类型的限制都会引发严重事故。这类问题的一个典型例子是 2017 年 2 月 [Instapaper 的一次服务中断][1]。我把这份事故报告给任何一个运维工作者看,他们读完都会脊背发凉。Instapaper 生产环境的数据库所在的文件系统有 2 TB 的大小限制,但是数据库服务团队并不知情。在没有任何报错的情况下,数据库不再接受任何写入了。完全恢复需要好几天,而且还得迁移数据库。 + +资源限制有各式各样的触发场景。Sentry 遇到了 [Postgres 的最大事务 ID 限制][2]。Platform.sh 遇到了[管道缓冲区大小限制][3]。SparkPost [触发了 AWS 的 DDoS 保护][4]。Foursquare 在他们的一个 [MongoDB 耗尽内存][5]时遭遇了性能骤降。 + +提前了解系统限制的一个办法是定期做测试。好的压力测试(在生产环境的副本上做)应该包含写入事务,并且应该把每一种数据存储都写到超过当前生产环境的容量。压力测试时很容易忽略的是次要存储(比如 Zookeeper)。如果你是在测试时遇到了资源限制,那么你还有时间去解决问题。鉴于这种资源限制问题的解决方案可能涉及重大的变更(比如数据存储拆分),所以时间是非常宝贵的。 + +说到云产品的使用,如果你的服务产生了异常的负载,或者你用的产品或功能还没有被广泛使用(比如老旧的或者新兴的),那么你遇到资源上限的风险很大。对这些云产品做一下压力测试是值得的。不过,做之前要提醒一下你的云服务提供商。 + +最后,知道了哪里有限制之后,要增加监控(和对应文档),这样你才能知道系统在什么时候接近了资源上限。不要寄希望于那些还在维护服务的人会记得。 + +#### 扩散的慢请求 + +> “这个世界的关联性远比我们想象中更大。所以我们看到了更多 Nassim Taleb 所说的‘黑天鹅事件’ —— 即罕见事件以更高的频率离谱地发生了,因为世界是相互关联的” +> —— [Richard Thaler][6] + +HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服务中断给搞垮了][7],他们关于这次事故原因的分析报告很好地诠释了分布式计算系统之间存在多么大的关联。在这个事件里,负载均衡器的连接池被来自 AWS 上的客户访问占满了,因为这些连接很耗时。同样的现象还会发生在应用的线程、锁、数据库连接上 —— 任何能被慢操作占满的资源。 + +这个 HostedGraphite 的例子中,慢速连接是外部系统施加的,不过慢速连接经常是由内部某个系统的饱和所引起的,饱和与慢操作的级联,拖慢了系统中的其他部分。[Spotify 的一个事故][8]就说明了这样的传播 —— 流媒体服务的前端被另一个微服务的饱和所影响,造成健康检查失败。强制给所有请求设置超时时间,以及限制请求队列的长度,可以预防这一类故障传播。这样即使有问题,至少你的服务还能承担一些流量,而且因为整体上你的系统里故障的部分更少了,恢复起来也会更快。 + +重试的间隔应该用指数退避来限制一下,并加入一些时间抖动。Square 
有一次服务中断是 [Redis 存储的过载][9],原因是有一段代码对失败的事务重试了 500 次,没有任何重试退避的方案,也说明了过度重试的潜在风险。另外,针对这种情况,[断路器][10]设计模式也是有用的。 + +应该设计出监控仪表盘来清晰地展示所有资源的[使用率、饱和度和报错][11],这样才能快速发现问题。 + +#### 突发的高负载 + +系统在异常高的负载下经常会发生故障。用户天然会引发高负载,不过也常常是由系统引发的。午夜突发的 cron 定时任务是老生常谈了。如果程序让移动客户端同时去获取更新,这些客户端也会造成突发的大流量(当然,给这种请求加入时间抖动会好很多)。 + +在预定时刻同时发生的事件并不是突发大流量的唯一原因。Slack 经历过一次短时间内的[多次服务中断][12],原因是非常多的客户端断开连接后立即重连,造成了突发的大负载。 CircleCI 也经历过一次[严重的服务中断][13],当时 Gitlab 从故障中恢复了,所以数据库里积累了大量的构建任务队列,服务变得饱和而且缓慢。 + +几乎所有的服务都会受突发的高负载所影响。所以对这类可能出现的事情做应急预案 —— 并测试一下预案能否正常工作 —— 是必须的。客户端退避和[减载][14]通常是这些方案的核心。 + +如果你的系统必须不间断地接收数据,并且数据不能被丢掉,关键是用可伸缩的方式把数据缓冲到队列中,后续再处理。 + +#### 自动化系统是复杂的系统 + +> “复杂的系统本身就是有风险的系统” +> —— [Richard Cook, MD][15] + +过去几年里软件的运维操作趋势是更加自动化。任何可能降低系统容量的自动化操作(比如擦除磁盘、退役设备、关闭服务)都应该谨慎操作。这类自动化操作的故障(由于系统有 bug 或者有不正确的调用)能很快地搞垮你的系统,而且可能很难恢复。 + +谷歌的 Christina Schulman 和 Etienne Perot 在[用安全规约协助保护你的数据中心][16]的演讲中给了一些例子。其中一次事故是将谷歌整个内部的内容分发网络(CDN)提交给了擦除磁盘的自动化系统。 + +Schulman 和 Perot 建议使用一个中心服务来管理规约,限制破坏性自动化操作的速度,并能感知到系统状态(比如避免在最近有告警的服务上执行破坏性的操作)。 + +自动化系统在与运维人员(或其他自动化系统)交互时,也可能造成严重事故。[Reddit][17] 遭遇过一次严重的服务中断,当时他们的自动化系统重启了一个服务,但是这个服务是运维人员停掉做维护的。一旦有了多个自动化系统,它们之间潜在的交互就变得异常复杂和不可预测。 + +所有的自动化系统都把日志输出到一个容易搜索的中心存储上,能帮助到对这类不可避免的意外情况的处理。自动化系统总是应该具备这样一种机制,即允许快速地关掉它们(完全关掉或者只关掉其中一部分操作或一部分目标)。 + +### 防止黑天鹅事件 + +可能在等着击垮系统的黑天鹅可不止上面这些。有很多其他的严重问题是能通过一些技术来避免的,像金丝雀发布、压力测试、混沌工程、灾难测试和模糊测试 —— 当然还有冗余性和弹性的设计。但是即使用了这些技术,有时候你的系统还是会有故障。 + +为了确保你的组织能有效地响应,在服务中断期间,请保证关键技术人员和领导层有办法沟通协调。例如,有一种你可能需要处理的烦人的事情,那就是网络完全中断。拥有故障时仍然可用的通信通道非常重要,这个通信通道要完全独立于你们自己的基础设施及对其的依赖。举个例子,假如你使用 AWS,那么把故障时可用的通信服务部署在 AWS 上就不明智了。在和你的主系统无关的地方,运行电话网桥或 IRC 服务器是比较好的方案。确保每个人都知道这个通信平台,并练习使用它。 + +另一个原则是,确保监控和运维工具对生产环境系统的依赖尽可能的少。将控制平面和数据平面分开,你才能在系统不健康的时候做变更。不要让数据处理和配置变更或监控使用同一个消息队列,比如,应该使用不同的消息队列实例。在 [SparkPost: DNS 挂掉的那一天][4] 这个演讲中,Jeremy Blosser 讲了一个这类例子,很关键的工具依赖了生产环境的 DNS 配置,但是生产环境的 DNS 出了问题。 + +### 对抗黑天鹅的心理学 + +处理生产环境的重大事故时会产生很大的压力。为这些场景制定结构化的事故管理流程确实是有帮助的。很多科技公司([包括谷歌][18])成功地使用了联邦应急管理局事故指挥系统的某个版本。对于每一个值班的人,遇到了他们无法独立解决的重大问题时,都应该有一个明确的寻求协助的方法。 + +对于那些持续很长时间的事故,有一点很重要,要确保工程师不会连续工作到不合理的时长,确保他们不会不吃不睡(没有报警打扰的睡觉)。疲惫不堪的工程师很容易犯错或者漏掉了可能更快解决故障的信息。 + +### 了解更多 + +关于黑天鹅(或者以前的黑天鹅)事件以及应对策略,还有很多其他的事情可以说。如果你想了解更多,我强烈推荐你去看这两本书,它们是关于生产环境中的弹性和稳定性的:Susan Fowler 写的《[生产微服务][19]》,还有 Michael T. 
Nygard 的 《[Release It!][20]》。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/taxonomy-black-swans + +作者:[Laura Nolan][a] +选题:[lujun9972][b] +译者:[BeliteX](https://github.com/belitex) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lauranolan +[b]: https://github.com/lujun9972 +[1]: https://medium.com/making-instapaper/instapaper-outage-cause-recovery-3c32a7e9cc5f +[2]: https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres.html +[3]: https://medium.com/@florian_7764/technical-post-mortem-of-the-august-incident-82ab4c3d6547 +[4]: https://www.usenix.org/conference/srecon18americas/presentation/blosser +[5]: https://groups.google.com/forum/#!topic/mongodb-user/UoqU8ofp134 +[6]: https://en.wikipedia.org/wiki/Richard_Thaler +[7]: https://blog.hostedgraphite.com/2018/03/01/spooky-action-at-a-distance-how-an-aws-outage-ate-our-load-balancer/ +[8]: https://labs.spotify.com/2013/06/04/incident-management-at-spotify/ +[9]: https://medium.com/square-corner-blog/incident-summary-2017-03-16-2f65be39297 +[10]: https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern +[11]: http://www.brendangregg.com/usemethod.html +[12]: https://slackhq.com/this-was-not-normal-really +[13]: https://circleci.statuspage.io/incidents/hr0mm9xmm3x6 +[14]: https://www.youtube.com/watch?v=XNEIkivvaV4 +[15]: https://web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf +[16]: https://www.usenix.org/conference/srecon18americas/presentation/schulman +[17]: https://www.reddit.com/r/announcements/comments/4y0m56/why_reddit_was_down_on_aug_11/ +[18]: https://landing.google.com/sre/book/chapters/managing-incidents.html +[19]: http://shop.oreilly.com/product/0636920053675.do +[20]: https://www.oreilly.com/library/view/release-it/9781680500264/ +[21]: https://www.usenix.org/conference/lisa18/presentation/nolan +[22]: https://www.usenix.org/conference/lisa18 diff --git a/published/201811/20181026 An Overview of Android Pie.md b/published/201811/20181026 An Overview of Android Pie.md new file mode 100644 index 0000000000..7aae6a1f0f --- /dev/null +++ b/published/201811/20181026 An Overview of Android Pie.md @@ -0,0 +1,118 @@ +Android 9.0 概览 +====== + +> 第九代 Android 带来了更令人满意的用户体验。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/android-pie.jpg?itok=Sx4rbOWY) + +我们来谈论一下 Android。尽管 Android 只是一款内核经过修改的 Linux,但经过多年的发展,Android 开发者们(或许包括正在阅读这篇文章的你)已经为这个平台的演变做出了很多值得称道的贡献。当然,可能很多人都已经知道,但我们还是要说,Android 并不完全开源,当你使用 Google 服务的时候,就已经接触到闭源的部分了。Google Play 商店就是其中之一,它不是一个开放的服务。不过无论 Android 开源与否,这就是一个美味、营养、高效、省电的馅饼(LCTT 译注:Android 9.0 代号为 Pie)。 + +我在我的 Essential PH-1 手机上运行了 Android 9.0(我真的很喜欢这款手机,也知道这家公司的境况并不好)。在我自己体验了一段时间之后,我认为它是会被大众接受的。那么 Android 9.0 到底好在哪里呢?下面我们就来深入探讨一下。我们的出发点是用户的角度,而不是开发人员的角度,因此我也不会深入探讨太底层的方面。 + +### 手势操作 + +Android 系统在新的手势操作方面投入了很多,但实际体验却不算太好。这个功能确实引起了我的兴趣。在这个功能发布之初,大家都对它了解甚少,纷纷猜测它会不会让用户使用多点触控的手势来浏览 Android 界面?又或者会不会是一个完全颠覆人们认知的东西? + +实际上,手势操作比大多数人设想的要更加微妙而简单,因为很多功能都浓缩到了 Home 键上。打开手势操作功能之后,Recent 键的功能就合并到 Home 键上了。因此,如果需要查看最近打开的应用程序,就不能简单地通过 Recent 键来查看,而应该从 Home 键向上轻扫一下。(图 1) + +![Android Pie][2] + +*图 1:Android 9.0 中的”最近的应用程序“界面。* + +另一个不同的地方是 App Drawer。类似于查看最近打开的应用,需要在 Home 键向上滑动才能打开 App Drawer。 + +而后退按钮则没有去掉。在应用程序需要用到后退功能时,它就会出现在主屏幕的左下方。有时候即使应用程序自己带有后退按钮,Android 的后退按钮也会出现。 + +当然,如果你不喜欢使用手势操作,也可以禁用这个功能。只需要按照下列步骤操作: + + 1. 打开”设置“ + 2. 向下滑动并进入“系统 > 手势” + 3. 
从 Home 键向上滑动 + 4. 将 On/Off 滑块(图 2)滑动至 Off 位置 + +![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_2.png?itok=cs2tqZut) + +*图 2:关闭手势操作。* + +### 电池寿命 + +人工智能已经在 Android 得到了充分的使用。现在,Android 使用人工智能大大提供了电池的续航时间,这样的新技术称为自适应电池。自适应电池可以根据用户的个人使用习惯来决定各种应用和服务的耗电优先级。通过使用人工智能技术,Android 可以分析用户对每一个应用或服务的使用情况,并适当地关闭未使用的应用程序,以免长期驻留在内存中白白消耗电池电量。 + +对于这个功能的唯一一个警告是,如果人工智能出现问题并导致电池电量过早耗尽,就只能通过恢复出厂设置来解决这个问题了。尽管有这样的缺陷,在电池续航时间方面,Android 9.0 也比 Android 8.0 有所改善。 + +### 分屏功能的变化 + +分屏对于 Android 来说不是一个新功能,但在 Android 9.0 上,它的使用方式和以往相比略有不同,而且只对于手势操作有影响,不使用手势操作的用户不受影响。要在 Android 9.0 上使用分屏功能,需要按照下列步骤操作: + + 1. 从 Home 键向上滑动,打开“最近的应用程序”。 + 2. 找到需要放置在屏幕顶部的应用程序。 + 3. 长按应用程序顶部的图标以显示新的弹出菜单。(图 3) + 4. 点击分屏,应用程序会在屏幕的上半部分打开。 + 5. 找到要打开的第二个应用程序,然后点击它添加到屏幕的下半部分。 + +![Adding an app][5] + +*图 3:在 Android 9.0 上将应用添加到分屏模式中。* + +使用分屏功能关闭应用程序的方法和原来保持一致。 + +### 应用操作 + +这个功能在早前已经引入了,但直到 Android 9.0 发布,人们才开始对它产生明显的关注。应用操作功能可以让用户直接从应用启动器来执行应用里的某些操作。 + +例如,长按 GMail 启动器,就可以执行回复最近的邮件、撰写新邮件等功能。在 Android 8.0 中,这个功能则以弹出动作列表的方式展现。在 Android 9.0 中,这个功能更契合 Google 的材料设计Material Design风格(图 4)。 + +![Actions][7] + +*图 4:Android 应用操作。* + +### 声音控制 + +在 Android 中,声音控制的方式经常发生变化。在 Android 8.0 对“请勿打扰”功能进行调整之后,声音控制已经做得相当不错了。而在 Android 9.0 当中,声音控制再次进行了优化。 + +Android 9.0 这次优化针对的是设备上快速控制声音的按钮。如果用户按下音量增大或减小按钮,就会看到一个新的弹出菜单,可以让用户控制设备的静音和震动情况。点击这个弹出菜单顶部的图标(图 5),可以在完全静音、静音和正常声音几种状态之间切换。 + +![Sound control][9] + +*图 5:Android 9.0 上的声音控制。* + +### 屏幕截图 + +由于我要撰写关于 Android 的文章,所以我会常常需要进行屏幕截图。而 Android 9.0 有一项我最喜欢的更新,就是分享屏幕截图。Android 9.0 可以在截取屏幕截图后,直接共享、编辑,或者删除不喜欢的截图,而不需要像以前一样打开 Google 相册、找到要共享的屏幕截图、打开图像然后共享图像。 + +如果你想分享屏幕截图,只需要在截图后等待弹出菜单,点击分享(图 6),从标准的 Android 分享菜单中分享即可。 + +![Sharing ][11] + +*图 6:共享屏幕截图变得更加容易。* + +### 更令人满意的 Android 体验 + +Android 9.0 带来了更令人满意的用户体验。当然,以上说到的内容只是它的冰山一角。如果需要更多信息,可以查阅 Google 的官方 [Android 9.0 网站][12]。如果你的设备还没有收到升级推送,请耐心等待,Android 9.0 值得等待。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/2018/10/overview-android-pie + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: /files/images/pie1png +[2]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_1.png?itok=BsSe8kqS "Android Pie" +[3]: /licenses/category/used-permission +[4]: /files/images/pie3png +[5]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_3.png?itok=F-NB1dqI "Adding an app" +[6]: /files/images/pie4png +[7]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_4.png?itok=Ex-NzYSo "Actions" +[8]: /files/images/pie5png +[9]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_5.png?itok=NMW2vIlL "Sound control" +[10]: /files/images/pie6png +[11]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_6.png?itok=7Ik8_4jC "Sharing " +[12]: https://www.android.com/versions/pie-9-0/ + diff --git a/published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md b/published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md new file mode 100644 index 0000000000..655d66dfbf --- /dev/null +++ b/published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md @@ -0,0 +1,84 @@ +使用 Ultimate Plumber 即时预览管道命令结果 +====== + 
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber-720x340.jpg) + +管道命令的作用是将一个命令/程序/进程的输出发送给另一个命令/程序/进程,以便将输出结果进行进一步的处理。我们可以通过使用管道命令把多个命令组合起来,使一个命令的标准输入或输出重定向到另一个命令。两个或多个 Linux 命令之间的竖线字符(`|`)表示在命令之间使用管道命令。管道命令的一般语法如下所示: + +``` +Command-1 | Command-2 | Command-3 | …| Command-N +``` + +Ultimate Plumber(简称 UP)是一个命令行工具,它可以用于即时预览管道命令结果。如果你在使用 Linux 时经常会用到管道命令,就可以通过它更好地运用管道命令了。它可以预先显示执行管道命令后的结果,而且是即时滚动地显示,让你可以轻松构建复杂的管道。 + +下文将会介绍如何安装 UP 并用它将复杂管道命令的编写变得简单。 + + +**重要警告:** + +在生产环境中请谨慎使用 UP!在使用它的过程中,有可能会在无意中删除重要数据,尤其是搭配 `rm` 或 `dd` 命令时需要更加小心。勿谓言之不预。 + +### 使用 Ultimate Plumber 即时预览管道命令 + +下面给出一个简单的例子介绍 `up` 的使用方法。如果需要将 `lshw` 命令的输出传递给 `up`,只需要在终端中输入以下命令,然后回车: + +``` +$ lshw |& up +``` + +你会在屏幕顶部看到一个输入框,如下图所示。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber.png) + +在输入命令的过程中,输入管道符号并回车,就可以立即执行已经输入了的命令。Ultimate Plumber 会在下方的可滚动窗口中即时显示管道命令的输出。在这种状态下,你可以通过 `PgUp`/`PgDn` 键或 `ctrl + ←`/`ctrl + →` 组合键来查看结果。 + +当你满意执行结果之后,可以使用 `ctrl + x` 组合键退出 `UP`。而退出前编写的管道命令则会保存在当前工作目录的文件中,并命名为 `up1.sh`。如果这个文件名已经被占用,就会命名为 `up2.sh`、`up3.sh` 等等以此类推,直到第 1000 个文件。如果你不需要将管道命令保存输出,只需要使用 `ctrl + c` 组合键退出即可。 + +通过 `cat` 命令可以查看 `upX.sh` 文件的内容。例如以下是我的 `up2.sh` 文件的输出内容: + +``` +$ cat up2.sh +#!/bin/bash +grep network -A5 | grep : | cut -d: -f2- | paste - - +``` + +如果通过管道发送到 `up` 的命令运行时间太长,终端窗口的左上角会显示一个波浪号(~)字符,这就表示 `up` 在等待前一个命令的输出结果作为输入。在这种情况下,你可能需要使用 `ctrl + s` 组合键暂时冻结 `up` 的输入缓冲区大小。在需要解冻的时候,使用 `ctrl + q` 组合键即可。Ultimate Plumber 的输入缓冲区大小一般为 40 MB,到达这个限制之后,屏幕的左上角会显示一个加号。 + +以下是 `up` 命令的一个简单演示: + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/up.gif) + +### 安装 Ultimate Plumber + +喜欢这个工具的话,你可以在你的 Linux 系统上安装使用。安装过程也相当简单,只需要在终端里执行以下两个命令就可以安装 `up` 了。 + +首先从 Ultimate Plumber 的[发布页面][1]下载最新的二进制文件,并将放在你系统的某个路径下,例如 `/usr/local/bin/`。 + +``` +$ sudo wget -O /usr/local/bin/up wget https://github.com/akavel/up/releases/download/v0.2.1/up +``` + +然后向 `up` 二进制文件赋予可执行权限: + +``` +$ sudo chmod a+x /usr/local/bin/up +``` + +至此,你已经完成了 `up` 的安装,可以开始编写你的管道命令了。 + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/ultimate-plumber-writing-linux-pipes-with-instant-live-preview/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/akavel/up/releases + diff --git a/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md b/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md new file mode 100644 index 0000000000..e0b157c37a --- /dev/null +++ b/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md @@ -0,0 +1,73 @@ +设计更快的网页(三):字体和 CSS 调整 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg) + +欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一部分][1]和[第二部分][2]讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS([层叠式样式表][3])和字体中减掉更多的脂肪。 + +### 调整 CSS + +首先,我们先来看看问题的源头。CSS 的出现曾是技术的一大进步。你可以用一个集中式的样式表来装饰多个网页。如今很多 Web 开发者都会使用 Bootstrap 这样的框架。 + +这些框架当然方便,可是很多人都会将整个框架直接复制粘贴走。Bootstrap 非常大:目前 Bootstrap 4.0 的“最小”版本也有 144.9 KB. 
在这个以 TB 来计数据的时代,它可能不算多。但就像所说的那样,一头小牛也能搞出大麻烦。 + +我们回头来看 [getfedora.org][4] 的例子。我们在[第一部分][1]中提过,第一个分析结果显示 CSS 文件占用的空间几乎比 HTML 本身还要大十倍。这里显示了所有用到的样式表: + +![][5] + +那是九个不同的样式表。其中的很多样式在这个页面中并没有用上。 + +#### 移除、合并、以及压缩/缩小化 + +Font-awesome CSS 代表了包含未使用样式的极端。这个页面中只用到了这个字体的三个字形。如果以 KB 为单位,getfedora.org 用到的 font-awesome CSS 最初有 25.2 KB. 在清理掉所有未使用的样式后,它只有 1.3 KB 了。这只有原来体积的 4% 左右!对于 Bootstrap CSS,原来它有 118.3 KB,清理掉无用的样式后只有 13.2 KB,这就是差异。 + +下一个问题是,我们必须要这样一个 `bootstrap.css` 和 `font-awesome.css` 吗?或者,它们能不能合起来呢?没错,它们可以。这样虽然不会节省更多的文件空间,但浏览器成功渲染页面所需要发起的请求更少了。 + +最后,在合并 CSS 文件后,尝试去除无用样式并缩小它们。这样,它们只有 4.3 KB 大小,而你省掉了 10.1 KB. + +不幸的是,在 Fedora 软件仓库中,还没有打包好的缩小工具。不过,有几百种在线服务可以帮到你。或者,你也可以使用 [CSS-HTML-JS Minify][6],它用 Python 编写,所以容易安装。现在没有一个可用的工具来净化 CSS,不过我们有 [UnCSS][7] 这样的 Web 服务。 + +### 字体改进 + +[CSS3][8] 带来了很多开发人员喜欢的东西。它可以定义一些渲染页面所用的字体,并让浏览器在后台下载。此后,很多 Web 设计师都很开心,尤其是在他们发现了 Web 设计中图标字体的用法之后。像 [Font Awesome][9] 这样的字体集现在非常流行,也被广泛使用。这是这个字体集的大小: + +``` +current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2 +``` + +所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来去除这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。 + +这种自定义字体集的缺点在于,你必须自己来托管字体文件。你也没法使用其它在线服务来提供更新。但与更快的性能相比,这可能算不上一个缺点。 + +### 总结 + +现在,你已经做完了所有对内容本身的操作,来最大限度地减少浏览器加载和解释的内容。从现在开始,只有服务器的管理技巧才才能帮到你了。 + +有一个很简单,但很多人都做错了的事情,就是使用一些智能缓存。比如,CSS 或者图片文件可以缓存一周。但无论如何,如果你用了 Cloudflare 这样的代理服务或者自己构建了代理,首先要做的都应该是缩小页面。用户喜欢可以快速加载的页面。他们会(默默地)感谢你,服务器的负载也会更小。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[StdioA](https://github.com/StdioA) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-10166-1.html +[2]: https://linux.cn/article-10217-1.html +[3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets +[4]: https://getfedora.org +[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png +[6]: https://github.com/juancarlospaco/css-html-js-minify +[7]: https://uncss-online.com/ +[8]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS3 +[9]: https://fontawesome.com/ +[10]: https://fontforge.github.io/en-US/ +[11]: http://fontello.com/ +[12]: https://github.com/fontello/fontello diff --git a/translated/tech/20181029 4 open source Android email clients.md b/published/201811/20181029 4 open source Android email clients.md similarity index 81% rename from translated/tech/20181029 4 open source Android email clients.md rename to published/201811/20181029 4 open source Android email clients.md index 285b472234..4c1b32ef65 100644 --- a/translated/tech/20181029 4 open source Android email clients.md +++ b/published/201811/20181029 4 open source Android email clients.md @@ -1,37 +1,39 @@ -四个开源的Android邮件客户端 +四个开源的 Android 邮件客户端 ====== -Email 现在还没有绝迹,而且现在大部分邮件都来自于移动设备。 + +> Email 现在还没有绝迹,而且现在大部分邮件都来自于移动设备。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6) -现在一些年轻人正将邮件称之为“老年人的交流方式”,然而事实却是邮件绝对还没有消亡。虽然[协作工具][1],社交媒体,和短信很常用,但是它们还没做好取代邮件这种必要的商业(和社交)通信工具。 +现在一些年轻人正将邮件称之为“老年人的交流方式”,然而事实却是邮件绝对还没有消亡。虽然[协作工具][1]、社交媒体,和短信很常用,但是它们还没做好取代邮件这种必要的商业(和社交)通信工具的准备。 
考虑到邮件还没有消失,并且(很多研究表明)人们都是在移动设备上阅读邮件,拥有一个好的移动邮件客户端就变得很关键。如果你是一个想使用开源的邮件客户端的 Android 用户,事情就变得有点棘手了。 我们提供了四个开源的 Andorid 邮件客户端供选择。其中两个可以通过 Andorid 官方应用商店 [Google Play][2] 下载。你也可以在 [Fossdroid][3] 或者 [F-Droid][4] 这些开源 Android 应用库中找到他们。(下方有每个应用的具体下载方式。) + ### K-9 Mail -[K-9 Mail][5] 拥有几乎和 Android 一样长的历史——它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多用户、附件、emojis 和其他经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等等的帮助。 +[K-9 Mail][5] 拥有几乎和 Android 一样长的历史——它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多用户、附件、emoji 和其它经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等等的帮助。 K-9 基于 [Apache 2.0][7] 协议开源,[源码][8]可以从 GitHub 上获得. 应用可以从 [Google Play][9]、[Amazon][10] 和 [F-Droid][11] 上下载。 ### p≡p -正如它的全称,”Pretty Easy Privacy”说的那样,[p≡p][12] 主要关注于隐私和安全通信。它提供自动的、端到端的邮件和附件加密(但要求你的收件人也要能够加密邮件——否则,p≡p会警告你的邮件将不加密发出)。 +正如它的全称,”Pretty Easy Privacy”说的那样,[p≡p][12] 主要关注于隐私和安全通信。它提供自动的、端到端的邮件和附件加密(但要求你的收件人也要能够加密邮件——否则,p≡p 会警告你的邮件将不加密发出)。 你可以从 GitLab 获得[源码][13](基于 [GPLv3][14] 协议),并且可以从应用的官网上找到相应的[文档][15]。应用可以在 [Fossdroid][16] 上免费下载或者在 [Google Play][17] 上支付一点儿象征性的费用下载。 ### InboxPager -[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件信息,这也表明如果你的邮件提供商(比如 Gmail )没有默认开启这个功能的话,你可能要做一些设置。(幸运的是, InboxPager 提供了 Gmail的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 机密。 +[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件信息,这也表明如果你的邮件提供商(比如 Gmail )没有默认开启这个功能的话,你可能要做一些设置。(幸运的是, InboxPager 提供了 Gmail 的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 加密。 InboxPager 基于 [GPLv3][20] 协议,其源码可从 GitHub 获得,并且应用可以从 [F-Droid][21] 下载。 ### FairEmail -[FairEmail][22] 是一个极简的邮件客户端,它的功能集中于读写信息,没有任何多余的可能拖慢客户端的功能。它支持多个帐号和用户,消息线程,加密等等。 +[FairEmail][22] 是一个极简的邮件客户端,它的功能集中于读写信息,没有任何多余的可能拖慢客户端的功能。它支持多个帐号和用户、消息线索、加密等等。 -它基于 [GPLv3][23] 协议开源,[源码][24]可以从GitHub上获得。你可以在 [Fossdroid][25] 上下载 FairEamil; 对 Google Play 版本感兴趣的人可以从 [testing the software][26] 获得应用。 +它基于 [GPLv3][23] 协议开源,[源码][24]可以从 GitHub 上获得。你可以在 [Fossdroid][25] 上下载 FairEamil;对 Google Play 版本感兴趣的人可以从 [testing the software][26] 获得应用。 肯定还有更多的开源 Android 客户端(或者上述软件的加强版本)——活跃的开发者们可以关注一下。如果你知道还有哪些优秀的应用,可以在评论里和我们分享。 @@ -42,7 +44,7 @@ via: https://opensource.com/article/18/10/open-source-android-email-clients 作者:[Opensource.com][a] 选题:[lujun9972][b] 译者:[zianglei][c] -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md b/published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md new file mode 100644 index 0000000000..34901c542d --- /dev/null +++ b/published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md @@ -0,0 +1,119 @@ +Python 机器学习的必备技巧 +====== + +> 尝试使用 Python 掌握机器学习、人工智能和深度学习。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S) + +想要入门机器学习并不难。除了大规模网络公开课Massive Open Online Courses(MOOC)之外,还有很多其它优秀的免费资源。下面我分享一些我觉得比较有用的方法。 + +1. 从一些 YouTube 上的好视频开始,阅览一些关于这方面的文章或者书籍,例如 《[主算法:终极学习机器的探索将如何重塑我们的世界][29]》,而且我觉得你肯定会喜欢这些[关于机器学习的很酷的互动页面][30]。 +2. 对于“机器学习machine learning”、“人工智能artificial intelligence”、“深度学习deep learning”、“数据科学data science”、“计算机视觉computer vision”和“机器人技术robotics”这一堆新名词,你需要知道它们之间的区别。你可以阅览或聆听这些领域的专家们的演讲,例如这位有影响力的[数据科学家 Brandon Rohrer 的精彩视频][1]。或者这个讲述了数据科学相关的[各种角色之间的区别][2]的视频。 +3. 明确你自己的学习目标,并选择合适的 [Coursera 课程][3],或者参加高校的网络公开课,例如[华盛顿大学的课程][4]就很不错。 +4. 关注优秀的博客:例如 [KDnuggets][32] 的博客、[Mark Meloon][33] 的博客、[Brandon Rohrer][34] 的博客、[Open AI][35] 的研究博客,这些都值得推荐。 +5. 
如果你热衷于在线课程,后文中会有如何[正确选择 MOOC 课程][31]的指导。 +6. 最重要的是,培养自己对这些技术的兴趣。加入一些优秀的社交论坛,不要被那些耸人听闻的头条和新闻所吸引,专注于阅读和了解,将这些技术的背景知识和发展方向理解透彻,并积极思考在日常生活和工作中如何应用机器学习或数据科学的原理。例如建立一个简单的回归模型来预测下一次午餐的成本,又或者是从电力公司的网站上下载历史电费数据,在 Excel 中进行简单的时序分析以发现某种规律。在你对这些技术产生了浓厚兴趣之后,可以观看以下这个视频。 + + + +### Python 是机器学习和人工智能方面的最佳语言吗? + +除非你是一名专业的研究一些复杂算法纯理论证明的研究人员,否则,对于一个机器学习的入门者来说,需要熟悉至少一种高级编程语言。因为大多数情况下都是需要考虑如何将现有的机器学习算法应用于解决实际问题,而这需要有一定的编程能力作为基础。 + +哪一种语言是数据科学的最佳语言?这个讨论一直没有停息过。对于这方面,你可以提起精神来看一下 FreeCodeCamp 上这一篇关于[数据科学语言][6]的文章,又或者是 KDnuggets 关于 [Python 和 R 之争][7]的深入探讨。 + +目前人们普遍认为 Python 在开发、部署、维护各方面的效率都是比较高的。与 Java、C 和 C++ 这些较为传统的语言相比,Python 的语法更为简单和高级。而且 Python 拥有活跃的社区群体、广泛的开源文化、数百个专用于机器学习的优质代码库,以及来自业界巨头(包括 Google、Dropbox、Airbnb 等)的强大技术支持。 + +### 基础 Python 库 + +如果你打算使用 Python 实施机器学习,你必须掌握一些 Python 包和库的使用方法。 + +#### NumPy + +NumPy 的完整名称是 [Numerical Python][8],它是 Python 生态里高性能科学计算和数据分析都需要用到的基础包,几乎所有高级工具(例如 [Pandas][9] 和 [scikit-learn][10])都依赖于它。[TensorFlow][11] 使用了 NumPy 数组作为基础构建块以支持 Tensor 对象和深度学习的图形流。很多 NumPy 操作的速度都非常快,因为它们都是通过 C 实现的。高性能对于数据科学和现代机器学习来说是一个非常宝贵的优势。 + +![](https://opensource.com/sites/default/files/uploads/machine-learning-python_numpy-cheat-sheet.jpeg) + +#### Pandas + +Pandas 是 Python 生态中用于进行通用数据分析的最受欢迎的库。Pandas 基于 NumPy 数组构建,在保证了可观的执行速度的同时,还提供了许多数据工程方面的功能,包括: + + * 对多种不同数据格式的读写操作 + * 选择数据子集 + * 跨行列计算 + * 查找并补充缺失的数据 + * 将操作应用于数据中的独立分组 + * 按照多种格式转换数据 + * 组合多个数据集 + * 高级时间序列功能 + * 通过 Matplotlib 和 Seaborn 进行可视化 + +![](https://opensource.com/sites/default/files/uploads/pandas_cheat_sheet_github.png) + +#### Matplotlib 和 Seaborn + +数据可视化和数据分析是数据科学家的必备技能,毕竟仅凭一堆枯燥的数据是无法有效地将背后蕴含的信息向受众传达的。这两项技能对于机器学习来说同样重要,因为首先要对数据集进行一个探索性分析,才能更准确地选择合适的机器学习算法。 + +[Matplotlib][12] 是应用最广泛的 2D Python 可视化库。它包含海量的命令和接口,可以让你根据数据生成高质量的图表。要学习使用 Matplotlib,可以参考这篇详尽的[文章][13]。 + +![](https://opensource.com/sites/default/files/uploads/matplotlib_gallery_-1.png) + +[Seaborn][14] 也是一个强大的用于统计和绘图的可视化库。它在 Matplotlib 的基础上提供样式灵活的 API、用于统计和绘图的常见高级函数,还可以和 Pandas 提供的功能相结合。要学习使用 Seaborn,可以参考这篇优秀的[教程][15]。 + +![](https://opensource.com/sites/default/files/uploads/machine-learning-python_seaborn.png) + +#### Scikit-learn + +Scikit-learn 是机器学习方面通用的重要 Python 包。它实现了多种[分类][16]、[回归][17]和[聚类][18]算法,包括[支持向量机][19]、[随机森林][20]、[梯度增强][21]、[k-means 算法][22]和 [DBSCAN 算法][23],可以与 Python 的数值库 NumPy 和科学计算库 [SciPy][24] 结合使用。它通过兼容的接口提供了有监督和无监督的学习算法。Scikit-learn 的强壮性让它可以稳定运行在生产环境中,同时它在易用性、代码质量、团队协作、文档和性能等各个方面都有良好的表现。可以参考[这篇基于 Scikit-learn 的机器学习入门][25],或者[这篇基于 Scikit-learn 的简单机器学习用例演示][26]。 + +本文使用 [CC BY-SA 4.0][28] 许可,在 [Heartbeat][27] 上首发。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/machine-learning-python-essential-hacks-and-tricks + +作者:[Tirthajyoti Sarkar][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tirthajyoti +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/watch?v=tKa0zDDDaQk +[2]: https://www.youtube.com/watch?v=Ura_ioOcpQI +[3]: https://www.coursera.org/learn/machine-learning +[4]: https://www.coursera.org/specializations/machine-learning +[5]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f +[6]: https://medium.freecodecamp.org/which-languages-should-you-learn-for-data-science-e806ba55a81f +[7]: https://www.kdnuggets.com/2017/09/python-vs-r-data-science-machine-learning.html +[8]: http://numpy.org/ +[9]: 
https://pandas.pydata.org/ +[10]: http://scikit-learn.org/ +[11]: https://www.tensorflow.org/ +[12]: https://matplotlib.org/ +[13]: https://realpython.com/python-matplotlib-guide/ +[14]: https://seaborn.pydata.org/ +[15]: https://www.datacamp.com/community/tutorials/seaborn-python-tutorial +[16]: https://en.wikipedia.org/wiki/Statistical_classification +[17]: https://en.wikipedia.org/wiki/Regression_analysis +[18]: https://en.wikipedia.org/wiki/Cluster_analysis +[19]: https://en.wikipedia.org/wiki/Support_vector_machine +[20]: https://en.wikipedia.org/wiki/Random_forests +[21]: https://en.wikipedia.org/wiki/Gradient_boosting +[22]: https://en.wikipedia.org/wiki/K-means_clustering +[23]: https://en.wikipedia.org/wiki/DBSCAN +[24]: https://en.wikipedia.org/wiki/SciPy +[25]: http://scikit-learn.org/stable/tutorial/basic/tutorial.html +[26]: https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49 +[27]: https://heartbeat.fritz.ai/some-essential-hacks-and-tricks-for-machine-learning-with-python-5478bc6593f2 +[28]: https://creativecommons.org/licenses/by-sa/4.0/ +[29]: https://www.goodreads.com/book/show/24612233-the-master-algorithm +[30]: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/ +[31]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f +[32]: https://www.kdnuggets.com/ +[33]: http://www.markmeloon.com/ +[34]: https://brohrer.github.io/blog.html +[35]: https://blog.openai.com/ + diff --git a/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md new file mode 100644 index 0000000000..f675342f6f --- /dev/null +++ b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md @@ -0,0 +1,367 @@ +我们如何得知安装的包来自哪个仓库? +========== + +有时候你可能想知道安装的软件包来自于哪个仓库。这将帮助你在遇到包冲突问题时进行故障排除。 + +因为[第三方仓库][1]拥有最新版本的软件包,所以有时候当你试图安装一些包的时候会出现兼容性的问题。 + +在 Linux 上一切都是可能的,因为你可以安装一个即使在你的发行版系统上不能使用的包。 + +你也可以安装一个最新版本的包,即使你的发行版系统仓库还没有这个版本,怎么做到的呢? + +这就是为什么出现了第三方仓库。它们允许用户从库中安装所有可用的包。 + +几乎所有的发行版系统都允许第三方软件库。一些发行版还会官方推荐一些不会取代基础仓库的第三方仓库,例如 CentOS 官方推荐安装 [EPEL 库][2]。 + +下面是常用的仓库列表和它们的详细信息。 + + * CentOS: [EPEL][2]、[ELRepo][3] 等是 [Centos 社区认证仓库](4)。 + * Fedora: [RPMfusion 仓库][5] 是经常被很多 [Fedora][6] 用户使用的仓库。 + * ArchLinux: ArchLinux 社区仓库包含了来自于 Arch 用户仓库的可信用户审核通过的软件包。 + * openSUSE: [Packman 仓库][7] 为 openSUSE 提供了各种附加的软件包,特别是但不限于那些在 openSUSE Build Service 应用黑名单上的与多媒体相关的应用和库。它是 openSUSE 软件包的最大外部软件库。 + * Ubuntu:个人软件包归档(PPA)是一种软件仓库。开发者们可以创建这种仓库来分发他们的软件。你可以在 PPA 导航页面找到相关信息。同时,你也可以启用 Cananical 合作伙伴软件仓库。 + +### 仓库是什么? + +软件仓库是存储特定的应用程序的软件包的集中场所。 + +所有的 Linux 发行版都在维护他们自己的仓库,并允许用户在他们的机器上获取和安装包。 + +每个厂商都提供了各自的包管理工具来管理它们的仓库,例如搜索、安装、更新、升级、删除等等。 + +除了 RHEL 和 SUSE 以外大部分 Linux 发行版都是自由软件。要访问付费的仓库,你需要购买其订阅服务。 + +### 为什么我们需要启用第三方仓库? + +在 Linux 里,并不建议从源代码安装包,因为这样做可能会在升级软件和系统的时候产生很多问题,这也是为什么我们建议从库中安装包而不是从源代码安装。 + +### 在 RHEL/CentOS 系统上我们如何得知安装的软件包来自哪个仓库? 
+ +这可以通过很多方法实现。我们会给你所有可能的选择,你可以选择一个对你来说最合适的。 + +#### 方法-1:使用 yum 命令 + +RHEL 和 CentOS 系统使用 RPM 包,因此我们能够使用 [Yum 包管理器][8] 来获得信息。 + +YUM 即 “Yellodog Updater, Modified” 是适用于基于 RPM 的系统例如 RHEL 和 CentOS 的一个开源命令行前端包管理工具。 + +`yum` 是从发行版仓库和其他第三方库中获取、安装、删除、查询和管理 RPM 包的一个主要工具。 + +``` +# yum info apachetop +Loaded plugins: fastestmirror +Loading mirror speeds from cached hostfile + * epel: epel.mirror.constant.com +Installed Packages +Name : apachetop +Arch : x86_64 +Version : 0.15.6 +Release : 1.el7 +Size : 65 k +Repo : installed +From repo : epel +Summary : A top-like display of Apache logs +URL : https://github.com/tessus/apachetop +License : BSD +Description : ApacheTop watches a logfile generated by Apache (in standard common or + : combined logformat, although it doesn't (yet) make use of any of the extra + : fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-2:使用 yumdb 命令 + +`yumdb info` 提供了类似于 `yum info` 的信息但是它又提供了包校验和数据、类型、用户信息(谁安装的软件包)。从 yum 3.2.26 开始,yum 已经开始在 rpmdatabase 之外存储额外的信息(user 表示软件是用户安装的,dep 表示它是作为依赖项引入的)。 + +``` +# yumdb info lighttpd +Loaded plugins: fastestmirror +lighttpd-1.4.50-1.el7.x86_64 + checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37 + checksum_type = sha256 + command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel + from_repo = epel + from_repo_revision = 1540756729 + from_repo_timestamp = 1540757483 + installed_by = 0 + origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm + reason = user + releasever = 7 + var_contentdir = centos + var_infra = stock + var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8 +``` + +`lighttpd` 包来自 EPEL 仓库。 + +#### 方法-3:使用 rpm 命令 + +[RPM 命令][9] 即 “Red Hat Package Manager” 是一个适用于基于 Red Hat 的系统(例如 RHEL、CentOS、Fedora、openSUSE & Mageia)的强大的命令行包管理工具。 + +这个工具允许你在你的 Linux 系统/服务器上安装、更新、移除、查询和验证软件。RPM 文件具有 .rpm 后缀名。RPM 包是用必需的库和依赖关系构建的,不会与系统上安装的其他包冲突。 + +``` +# rpm -qi apachetop +Name : apachetop +Version : 0.15.6 +Release : 1.el7 +Architecture: x86_64 +Install Date: Mon 29 Oct 2018 06:47:49 AM EDT +Group : Applications/Internet +Size : 67020 +License : BSD +Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5 +Source RPM : apachetop-0.15.6-1.el7.src.rpm +Build Date : Sat 20 Jun 2015 09:02:37 PM EDT +Build Host : buildvm-22.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : https://github.com/tessus/apachetop +Summary : A top-like display of Apache logs +Description : +ApacheTop watches a logfile generated by Apache (in standard common or +combined logformat, although it doesn't (yet) make use of any of the extra +fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-4:使用 repoquery 命令 + +`repoquery` 是一个从 YUM 库查询信息的程序,类似于 rpm 查询。 + +``` +# repoquery -i httpd + +Name : httpd +Version : 2.4.6 +Release : 80.el7.centos.1 +Architecture: x86_64 +Size : 9817285 +Packager : CentOS BuildSystem +Group : System Environment/Daemons +URL : http://httpd.apache.org/ +Repository : updates +Summary : Apache HTTP Server +Source : httpd-2.4.6-80.el7.centos.1.src.rpm +Description : +The Apache HTTP Server is a powerful, efficient, and extensible +web server. +``` + +`httpd` 包来自 CentOS updates 仓库。 + +### 在 Fedora 系统上我们如何得知安装的包来自哪个仓库? 
+ +DNF 是 “Dandified yum” 的缩写。DNF 是使用 hawkey/libsolv 库作为后端的下一代 yum 包管理器(yum 的分支)。从 Fedora 18 开始 Aleš Kozumplík 开始开发 DNF,并最终在 Fedora 22 上得以应用/启用。 + +[dnf 命令][10] 用于在 Fedora 22 以及之后的系统上安装、更新、搜索和删除包。它会自动解决依赖并使安装包的过程变得顺畅,不会出现任何问题。 + +``` +$ dnf info tilix +Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST. +Installed Packages +Name : tilix +Version : 1.6.4 +Release : 1.fc26 +Arch : x86_64 +Size : 3.6 M +Source : tilix-1.6.4-1.fc26.src.rpm +Repo : @System +From repo : updates +Summary : Tiling terminal emulator +URL : https://github.com/gnunn1/tilix +License : MPLv2.0 and GPLv3+ and CC-BY-SA +Description : Tilix is a tiling terminal emulator with the following features: + : + : - Layout terminals in any fashion by splitting them horizontally or vertically + : - Terminals can be re-arranged using drag and drop both within and between + : windows + : - Terminals can be detached into a new window via drag and drop + : - Input can be synchronized between terminals so commands typed in one + : terminal are replicated to the others + : - The grouping of terminals can be saved and loaded from disk + : - Terminals support custom titles + : - Color schemes are stored in files and custom color schemes can be created by + : simply creating a new file + : - Transparent background + : - Supports notifications when processes are completed out of view + : + : The application was written using GTK 3 and an effort was made to conform to + : GNOME Human Interface Guidelines (HIG). +``` + +`tilix` 包来自 Fedora updates 仓库。 + +### 在 openSUSE 系统上我们如何得知安装的包来自哪个仓库? + +Zypper 是一个使用 libzypp 的命令行包管理器。[Zypper 命令][11] 提供了存储库访问、依赖处理、包安装等功能。 + +``` +$ zypper info nano + +Loading repository data... +Reading installed packages... + + +Information for package nano: +----------------------------- +Repository : Main Repository (OSS) +Name : nano +Version : 2.4.2-5.3 +Arch : x86_64 +Vendor : openSUSE +Installed Size : 1017.8 KiB +Installed : No +Status : not installed +Source package : nano-2.4.2-5.3.src +Summary : Pico editor clone with enhancements +Description : + GNU nano is a small and friendly text editor. It aims to emulate + the Pico text editor while also offering a few enhancements. +``` + +`nano` 包来自于 openSUSE Main 仓库(OSS)。 + +### 在 ArchLinux 系统上我们如何得知安装的包来自哪个仓库? + +[Pacman 命令][12] 即包管理器工具(package manager utility ),是一个简单的用来安装、构建、删除和管理 Arch Linux 软件包的命令行工具。Pacman 使用 libalpm 作为后端来执行所有的操作。 + +``` +# pacman -Ss chromium +extra/chromium 48.0.2564.116-1 + The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +extra/qt5-webengine 5.5.1-9 (qt qt5) + Provides support for web applications using the Chromium browser project +community/chromium-bsu 0.9.15.1-2 + A fast paced top scrolling shooter +community/chromium-chromevox latest-1 + Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This + package does not contain the extension code. 
+community/fcitx-mozc 2.17.2313.102-1 + Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese + Input) +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +或者,我们可以使用以下选项获得关于包的详细信息。 + +``` +# pacman -Si chromium +Repository : extra +Name : chromium +Version : 48.0.2564.116-1 +Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +Architecture : x86_64 +URL : http://www.chromium.org/ +Licenses : BSD +Groups : None +Provides : None +Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus + flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir + desktop-file-utils hicolor-icon-theme +Optional Deps : kdebase-kdialog: needed for file dialogs in KDE + gnome-keyring: for storing passwords in GNOME keyring + kwallet: for storing passwords in KWallet +Conflicts With : None +Replaces : None +Download Size : 44.42 MiB +Installed Size : 172.44 MiB +Packager : Evangelos Foutras +Build Date : Fri 19 Feb 2016 04:17:12 AM IST +Validated By : MD5 Sum SHA-256 Sum Signature +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +### 在基于 Debian 的系统上我们如何得知安装的包来自哪个仓库? + +在基于 Debian 的系统例如 Ubuntu、LinuxMint 上可以使用两种方法实现。 + +#### 方法-1:使用 apt-cache 命令 + +[apt-cache 命令][13] 可以显示存储在 APT 内部数据库的很多信息。这些信息是一种缓存,因为它们是从列在 `source.list` 文件里的不同的源中获得的。这个过程发生在 apt 更新操作期间。 + +``` +$ apt-cache policy python3 +python3: + Installed: 3.6.3-0ubuntu2 + Candidate: 3.6.3-0ubuntu3 + Version table: + 3.6.3-0ubuntu3 500 + 500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages + *** 3.6.3-0ubuntu2 500 + 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages + 100 /var/lib/dpkg/status +``` + +`python3` 包来自 Ubuntu updates 仓库。 + +#### 方法-2:使用 apt 命令 + +[APT 命令][14] 即 “Advanced Packaging Tool”,是 `apt-get` 命令的替代品,就像 DNF 是如何取代 YUM 一样。它是具有丰富功能的命令行工具并将所有的功能例如 `apt-cache`、`apt-search`、`dpkg`、`apt-cdrom`、`apt-config`、`apt-ket` 等包含在一个命令(APT)中,并且还有几个独特的功能。例如我们可以通过 APT 轻松安装 .dpkg 包,但我们不能使用 `apt-get` 命令安装,更多类似的功能都被包含进了 APT 命令。`apt-get` 因缺失了很多未被解决的特性而被 `apt` 取代。 + +``` +$ apt -a show notepadqq +Package: notepadqq +Version: 1.3.2-1~artful1 +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Download-Size: 356 kB +APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. 
+ +Package: notepadqq +Version: 1.2.0-1~artful1 +Status: install ok installed +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Homepage: http://notepadqq.altervista.org +Download-Size: unknown +APT-Manual-Installed: yes +APT-Sources: /var/lib/dpkg/status +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. +``` + +`notepadqq` 包来自 Launchpad PPA。 + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[zianglei](https://github.com/zianglei) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/repository/ +[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ +[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ +[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ +[6]: https://fedoraproject.org/wiki/Third_party_repositories +[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ +[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[9]: https://www.2daygeek.com/rpm-command-examples/ +[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ diff --git a/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md b/published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md similarity index 59% rename from translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md rename to published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md index 8b0021bf26..932937b0f2 100644 --- a/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md +++ b/published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md @@ -1,33 +1,42 @@ 如何分析并探索 Docker 容器镜像的内容 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2018/10/dive-tool-720x340.png) -或许你已经了解到 Docker 容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像会经常被开发者用于构建和分发应用。假如你很好奇一个 Docker 镜像里面包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学会使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每层的内容。通过分析 Docker 镜像,我们可以发现在各个层之间可能重复的文件并通过移除它们来减小 Docker 镜像的大小。Dive 工具不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们来构建镜像。Dive 是一个用 Go 编程语言编写的免费开源工具。 +或许你已经了解到 Docker 
容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像会经常被开发者用于构建和分发应用。假如你很好奇一个 Docker 镜像里面包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学会使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每层的内容。 + +通过分析 Docker 镜像,我们可以发现在各个层之间可能重复的文件并通过移除它们来减小 Docker 镜像的大小。Dive 工具不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们来构建镜像。Dive 是一个用 Go 编程语言编写的自由开源工具。 ### 安装 Dive -首先从该项目的 [**发布页**][1] 下载最新版本,然后像下面展示的那样根据你所使用的发行版来安装它。 +首先从该项目的 [发布页][1] 下载最新版本,然后像下面展示的那样根据你所使用的发行版来安装它。 假如你正在使用 **Debian** 或者 **Ubuntu**,那么可以运行下面的命令来下载并安装它。 + ``` $ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.deb ``` + ``` $ sudo apt install ./dive_0.0.8_linux_amd64.deb ``` **在 RHEL 或 CentOS 系统中** + ``` $ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.rpm ``` + ``` $ sudo rpm -i dive_0.0.8_linux_amd64.rpm ``` -Dive 也可以使用 [**Linuxbrew**][2] 包管理器来安装。 +Dive 也可以使用 [Linuxbrew][2] 包管理器来安装。 + ``` $ brew tap wagoodman/dive ``` + ``` $ brew install dive ``` @@ -36,34 +45,37 @@ $ brew install dive ### 分析并探索 Docker 镜像的内容 -要分析一个 Docker 镜像,只需要运行加上 Docker 镜像 ID的 dive 命令就可以了。你可以使用 `sudo docker images` 来得到 Docker 镜像的 ID。 +要分析一个 Docker 镜像,只需要运行加上 Docker 镜像 ID 的 `dive` 命令就可以了。你可以使用 `sudo docker images` 来得到 Docker 镜像的 ID。 + ``` $ sudo dive ea4c82dcd15a ``` -上面命令中的 **ea4c82dcd15a** 是某个镜像的 id。 +上面命令中的 `ea4c82dcd15a` 是某个镜像的 ID。 -然后 Dive 命令将快速地分析给定 Docker 镜像的内容并将它在终端中展示出来。 +然后 `dive` 命令将快速地分析给定 Docker 镜像的内容并将它在终端中展示出来。 ![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dive-1.png) -正如你在上面的截图中看到的那样,在终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容,浪费的空间大小等信息。右边一栏则给出了给定 Docker 镜像每一层的内容。你可以使用 **Ctrl+SPACEBAR** 来在左右栏之间切换,使用 **UP/DOWN** 上下键来在目录树中进行浏览。 +正如你在上面的截图中看到的那样,在终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容,浪费的空间大小等信息。右边一栏则给出了给定 Docker 镜像每一层的内容。你可以使用 `Ctrl+空格` 来在左右栏之间切换,使用 `UP`/`DOWN` 光标键来在目录树中进行浏览。 -下面是 `Dive` 的快捷键列表: - * **Ctrl+Spacebar** – 在左右栏之间切换 - * **Spacebar** – 展开或收起目录树 - * **Ctrl+A** – 文件树视图:展示或隐藏增加的文件 - * **Ctrl+R** – 文件树视图:展示或隐藏被移除的文件 - * **Ctrl+M** – 文件树视图:展示或隐藏被修改的文件 - * **Ctrl+U** – 文件树视图:展示或隐藏未修改的文件 - * **Ctrl+L** – 层视图:展示当前层的变化 - * **Ctrl+A** – 层视图:展示总的变化 - * **Ctrl+/** – 筛选文件 - * **Ctrl+C** – 退出 +下面是 `dive` 的快捷键列表: -在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 **/var/lib/docker/** 目录中。假如你的镜像保存在你的家目录 `$HOME`或者在其他不属于 `root` 用户的目录,你就没有必要使用 `sudo` 命令。 + * `Ctrl+空格` —— 在左右栏之间切换 + * `空格` —— 展开或收起目录树 + * `Ctrl+A` —— 文件树视图:展示或隐藏增加的文件 + * `Ctrl+R` —— 文件树视图:展示或隐藏被移除的文件 + * `Ctrl+M` —— 文件树视图:展示或隐藏被修改的文件 + * `Ctrl+U` —— 文件树视图:展示或隐藏未修改的文件 + * `Ctrl+L` —— 层视图:展示当前层的变化 + * `Ctrl+A` —— 层视图:展示总的变化 + * `Ctrl+/` —— 筛选文件 + * `Ctrl+C` —— 退出 + +在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 `/var/lib/docker/` 目录中。假如你的镜像保存在你的家目录 (`$HOME`)或者在其他不属于 `root` 用户的目录,你就没有必要使用 `sudo` 命令。 你还可以使用下面的单个命令来构建一个 Docker 镜像并立刻分析该镜像: + ``` $ dive build -t ``` @@ -83,7 +95,7 @@ via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker 作者:[SK][a] 选题:[lujun9972][b] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -91,4 +103,4 @@ via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker [b]: https://github.com/lujun9972 [1]: https://github.com/wagoodman/dive/releases [2]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ -[3]: https://github.com/wagoodman/dive \ No newline at end of file +[3]: https://github.com/wagoodman/dive diff --git a/published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md b/published/201811/20181031 8 creepy 
commands that haunt the terminal - Opensource.com.md new file mode 100644 index 0000000000..8b21e7b55a --- /dev/null +++ b/published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md @@ -0,0 +1,58 @@ +8 个出没于终端中的吓人命令 +====== + +> 欢迎来到 Linux 令人毛骨悚然的一面。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25) + +又是一年中的这个时候:天气变冷了、树叶变色了,各处的孩子都化妆成了小鬼、妖精和僵尸。(LCTT 译注:本文原发表于万圣节)但你知道吗, Unix (和 Linux) 和它们的各个分支也充满了令人毛骨悚然的东西?让我们来看一下我们所熟悉和喜爱的操作系统的一些令人毛骨悚然的一面。 + +### 半神(守护进程) + +如果没有潜伏于系统中的各种守护进程daemon,那么 Unix 就没什么不同。守护进程是运行在后台的进程,并为用户和操作系统本身提供有用的服务,比如 SSH、FTP、HTTP 等等。 + +### 僵尸(僵尸进程) + +不时出现的僵尸进程是一种被杀死但是拒绝离开的进程。当它出现时,无疑你只能选择你有的工具来赶走它。僵尸进程通常表明产生它的进程出现了问题。 + +### 杀死(kill) + +你不仅可以使用 `kill` 来干掉一个僵尸进程,你还可以用它杀死任何对你系统产生负面影响的进程。有一个使用太多 RAM 或 CPU 周期的进程?使用 `kill` 命令杀死它。 + +### 猫(cat) + +`cat` 和猫科动物无关,但是与文件操作有关:`cat` 是 “concatenate” 的缩写。你甚至可以使用这个方便的命令来查看文件的内容。 + +### 尾巴(tail) + +当你想要查看文件中最后 n 行时,`tail` 命令很有用。当你想要监控一个文件时,它也很棒。 + +### 巫师(which) + +哦,不,它不是巫师(witch)的一种。而是打印传递给它的命令所在的文件位置的命令。例如,`which python` 将在你系统上打印每个版本的 Python 的位置。 + +### 地下室(crypt) + +`crypt` 命令,以前称为 `mcrypt`,当你想要加密(encrypt)文件的内容时,它是很方便的,这样除了你之外没有人可以读取它。像大多数 Unix 命令一样,你可以单独使用 `crypt` 或在系统脚本中调用它。 + +### 切碎(shred) + +当你不仅要删除文件还想要确保没有人能够恢复它时,`shred` 命令很方便。使用 `rm` 命令删除文件是不够的。你还需要覆盖该文件以前占用的空间。这就是 `shred` 的用武之地。 + +这些只是你会在 Unix 中发现的一部分令人毛骨悚然的东西。你还知道其他诡异的命令么?请随时告诉我。 + +万圣节快乐!(LCTT:可惜我们翻译完了,只能将恐怖的感觉延迟了 :D) + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/spookier-side-unix-linux + +作者:[Patrick H.Mullins][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md b/published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md new file mode 100644 index 0000000000..56b2bb1c40 --- /dev/null +++ b/published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md @@ -0,0 +1,73 @@ +KRS:一个收集 Kubernetes 资源统计数据的新工具 +====== + +> 零配置工具简化了信息收集,例如在某个命名空间中运行了多少个 pod。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl) + +最近我在纽约的 O'Reilly Velocity 就 [Kubernetes 应用故障排除][1]的主题发表了演讲,并且在积极的反馈和讨论的推动下,我决定重新审视这个领域的工具。结果,除了 [kubernetes-incubator/spartakus][2] 和 [kubernetes/kube-state-metrics][3] 之外,我们还没有太多的轻量级工具来收集资源统计数据(例如命名空间中的 pod 或服务的数量)。所以,我在回家的路上开始编写一个小工具 —— 创造性地命名为 `krs`,它是 Kubernetes Resource Stats 的简称 ,它允许你收集这些统计数据。 + +你可以通过两种方式使用 [mhausenblas/krs][5]: + +* 直接在命令行(有 Linux、Windows 和 MacOS 的二进制文件),以及 +* 在集群中使用 [launch.sh][4] 脚本部署,该脚本动态创建适当的基于角色的访问控制(RBAC) 权限。 + +提醒你,它还在早期,并且还在开发中。但是,`krs` 的 0.1 版本提供以下功能: + +* 在每个命名空间的基础上,它定期收集资源统计信息(支持 pod、部署和服务)。 +* 它以 [OpenMetrics 格式][6]公开这些统计。 +* 它可以直接通过二进制文件使用,也可以在包含所有依赖项的容器化设置中使用。 + +目前,你需要安装并配置 `kubectl`,因为 `krs` 依赖于执行 `kubectl get all` 命令来收集统计数据。(另一方面,谁会使用 Kubernetes 但没有安装 `kubectl` 呢?) 
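在开始使用之前,可以先手动执行一次 `krs` 所依赖的底层命令,确认你的 `kubectl` 配置无误、并且能够访问目标命名空间。下面是一个简单的示意(其中的命名空间名 `thenamespacetowatch` 只是沿用后文的示例,请替换成你自己的命名空间):

```
# 确认 kubectl 能在目标命名空间中列出 pod、部署和服务等资源;
# 如果这一步能正常返回,krs 通常也就能收集到它需要的统计数据
$ kubectl get all -n thenamespacetowatch
```
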
+ +使用 `krs` 很简单。[下载][7]适合你平台的二进制文件,并按如下方式执行: + +``` +$ krs thenamespacetowatch +# HELP pods Number of pods in any state, for example running +# TYPE pods gauge +pods{namespace="thenamespacetowatch"} 13 +# HELP deployments Number of deployments +# TYPE deployments gauge +deployments{namespace="thenamespacetowatch"} 6 +# HELP services Number of services +# TYPE services gauge +services{namespace="thenamespacetowatch"} 4 +``` + +这将在前台启动 `krs`,从名称空间 `thenamespacetowatch` 收集资源统计信息,并分别在标准输出中以 OpenMetrics 格式输出它们,以供你进一步处理。 + +![krs screenshot][9] + +*krs 实战截屏* + +也许你会问,Michael,为什么它不能做一些有用的事(例如将指标存储在 S3 中)?因为 [Unix 哲学][10]。 + +对于那些想知道他们是否可以直接使用 Prometheus 或 [kubernetes/kube-state-metrics][3] 来完成这项任务的人:是的,你可以,为什么不行呢? `krs` 的重点是作为已有工具的轻量级且易于使用的替代品 —— 甚至可能在某些方面略微互补。 + +本文最初发表在 [Medium 的 ITNext][11] 上,并获得授权转载。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/kubernetes-resource-statistics + +作者:[Michael Hausenblas][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mhausenblas +[b]: https://github.com/lujun9972 +[1]: http://troubleshooting.kubernetes.sh/ +[2]: https://github.com/kubernetes-incubator/spartakus +[3]: https://github.com/kubernetes/kube-state-metrics +[4]: https://github.com/mhausenblas/krs/blob/master/launch.sh +[5]: https://github.com/mhausenblas/krs +[6]: https://openmetrics.io/ +[7]: https://github.com/mhausenblas/krs/releases +[8]: /file/412706 +[9]: https://opensource.com/sites/default/files/uploads/krs_screenshot.png (krs screenshot) +[10]: http://harmful.cat-v.org/cat-v/ +[11]: https://itnext.io/kubernetes-resource-statistics-e8247f92b45c diff --git a/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md b/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md new file mode 100644 index 0000000000..ef04dc33dd --- /dev/null +++ b/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md @@ -0,0 +1,82 @@ +如何从 Windows 7、8 和 10 创建可启动的 Linux USB 盘? 
+====== + +如果你想了解 Linux,首先要做的是在你的系统上安装 Linux 系统。 + +它可以通过两种方式实现,使用 Virtualbox、VMWare 等虚拟化应用,或者在你的系统上安装 Linux。 + +如果你倾向于从 Windows 系统迁移到 Linux 系统或计划在备用机上安装 Linux 系统,那么你须为此创建可启动的 USB 盘。 + +我们已经写过许多[在 Linux 上创建可启动 USB 盘][1] 的文章,如 [BootISO][2]、[Etcher][3] 和 [dd 命令][4],但我们从来没有机会写一篇文章关于在 Windows 中创建 Linux 可启动 USB 盘的文章。不管怎样,我们今天有机会做这件事了。 + +在本文中,我们将向你展示如何从 Windows 10 创建可启动的 Ubuntu USB 盘。 + +这些步骤也适用于其他 Linux,但你必须从下拉列表中选择相应的操作系统而不是 Ubuntu。 + +### 步骤 1:下载 Ubuntu ISO + +访问 [Ubuntu 发布][5] 页面并下载最新版本。我想建议你下载最新的 LTS 版而不是普通的发布。 + +通过 MD5 或 SHA256 验证校验和,确保下载了正确的 ISO。输出值应与 Ubuntu 版本页面值匹配。 + +### 步骤 2:下载 Universal USB Installer + +有许多程序可供使用,但我的首选是 [Universal USB Installer][6],它使用起来非常简单。只需访问 Universal USB Installer 页面并下载该程序即可。 + +### 步骤3:创建可启动的 Ubuntu ISO + +这个程序在使用上不复杂。首先连接 USB 盘,然后点击下载的 Universal USB Installer。启动后,你可以看到类似于我们的界面。 + +![][8] + + * 步骤 1:选择 Ubuntu 系统。 + * 步骤 2:选择 Ubuntu ISO 下载位置。 + * 步骤 3:它默认选择的是 USB 盘,但是要验证一下,接着勾选格式化选项。 + +![][9] + +当你点击 “Create” 按钮时,它会弹出一个带有警告的窗口。不用担心,只需点击 “Yes” 继续进行此操作即可。 + +![][10] + +USB 盘分区正在进行中。 + +![][11] + +要等待一会儿才能完成。如你您想将它移至后台,你可以点击 “Background” 按钮。 + +![][12] + +好了,完成了。 + +![][13] + +现在你可以进行[安装 Ubuntu 系统][14]了。但是,它也提供了一个 live 模式,如果你想在安装之前尝试,那么可以使用它。 + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/create-a-bootable-live-usb-drive-from-windows-using-universal-usb-installer/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/bootable-usb/ +[2]: https://www.2daygeek.com/bootiso-a-simple-bash-script-to-securely-create-a-bootable-usb-device-in-linux-from-iso-file/ +[3]: https://www.2daygeek.com/etcher-easy-way-to-create-a-bootable-usb-drive-sd-card-from-an-iso-image-on-linux/ +[4]: https://www.2daygeek.com/create-a-bootable-usb-drive-from-an-iso-image-using-dd-command-on-linux/ +[5]: http://releases.ubuntu.com/ +[6]: https://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/ +[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[8]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-1.png +[9]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-2.png +[10]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-3.png +[11]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-4.png +[12]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-5.png +[13]: https://www.2daygeek.com/wp-content/uploads/2018/11/create-a-live-linux-os-usb-from-windows-using-universal-usb-installer-6.png +[14]: https://www.2daygeek.com/how-to-install-ubuntu-16-04/ diff --git a/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md new file mode 100644 index 0000000000..ea0dbe77e7 --- /dev/null +++ b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md @@ -0,0 +1,111 @@ +CPod:一个开源、跨平台播客应用 +====== + 
+播客是一个很好的娱乐和获取信息的方式。事实上,我会听十几个不同的播客,包括技术、神秘事件、历史和喜剧。当然,[Linux 播客][1]也在此列表中。 + +今天,我们将看一个简单的跨平台应用来收听你的播客。 + +![][2] + +*推荐的播客和播客搜索* + +### 应用程序 + +[CPod][3] 是 [Zack Guard(z-------------)][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在大多数操作系统(Linux、Windows、Mac OS)上运行。 + +> 一个小事:CPod 最初被命名为 Cumulonimbus。 + +应用的大部分被两个面板占用,来显示内容和选项。屏幕左侧的小条让你可以使用应用的不同功能。CPod 的不同栏目包括主页、队列、订阅、浏览和设置。 + +![cpod settings][6] + +*设置* + +### CPod 的功能 + +以下是 CPod 提供的功能列表: + + * 简洁,干净的设计 + * 可在主流计算机平台上使用 + * 有 Snap 包 + * 搜索 iTunes 的播客目录 + * 可下载也可无需下载就播放节目 + * 查看播客信息和节目 + * 搜索播客的个别节目 + * 深色模式 + * 改变播放速度 + * 键盘快捷键 + * 将你的播客订阅与 gpodder.net 同步 + * 导入和导出订阅 + * 根据长度、日期、下载状态和播放进度对订阅进行排序 + * 在应用启动时自动获取新节目 + * 多语言支持 + + +![search option in cpod application][7] + +*搜索 ZFS 节目* + +### 在 Linux 上体验 CPod + +我最后在两个系统上安装了 CPod:ArchLabs 和 Windows。[Arch 用户仓库​][8] 中有两个版本的 CPod。但是,它们都已过时,一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我的体验有所不同。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。 + +我马上能够找到我最喜欢的播客。我可以粘贴 RSS 源的 URL 来添加 iTunes 列表中没有的那些播客。 + +找到播客的特定节目也很容易。例如,我最近在寻找 [Late Night Linux][9] 中的一集,这集中他们在谈论 [ZFS][10]。我点击播客,在搜索框中输入 “ZFS” 然后找到了它。 + +我很快发现播放一堆播客节目的最简单方法是将它们添加到队列中。一旦它们进入队列,你可以流式传输或下载它们。你也可以通过拖放重新排序它们。每集在播放时,它会显示可视化的声波以及节目摘要。 + +### 安装 CPod + +在 [GitHub][11] 上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。 + +你可以使用 [Snap][12] 安装 CPod。你需要做的就是使用以下命令: + +``` +sudo snap install cpod +``` + +就像我之前说的那样,CPod 的 [Arch 用户仓库][8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch(或基于 Arch 的发行版),我建议你这样做。 + +![cpod for Linux pidcasts][13] + +*播放其中一个我最喜欢的播客* + +### 最后的想法 + +总的来说,我喜欢 CPod。它外观漂亮,使用简单。事实上,我更喜欢原来的名字(Cumulonimbus),但是它有点拗口。 + +我刚刚在程序中遇到两个问题。首先,我希望每个播客都有评分。其次,在打开黑暗模式后,根据长度、日期、下载状态和播放进度对剧集进行排序的菜单不起作用。 + +你有没有用过 CPod?如果没有,你最喜欢的播客应用是什么?你最喜欢的播客有哪些?请在下面的评论中告诉我们。 + +如果你发现这篇文章很有意思,请花一点时间在社交媒体、Hacker News 或 [Reddit][14] 上分享它。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/cpod-podcast-app/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/linux-podcasts/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod1.1.jpg?w=800&ssl=1 +[3]: https://github.com/z-------------/CPod +[4]: https://github.com/z------------- +[5]: https://electronjs.org/ +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod2.1.png?w=800&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod4.1.jpg?w=800&ssl=1 +[8]: https://aur.archlinux.org/packages/?O=0&K=cpod +[9]: https://latenightlinux.com/ +[10]: https://itsfoss.com/what-is-zfs/ +[11]: https://github.com/z-------------/CPod/releases +[12]: https://snapcraft.io/cumulonimbus +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod3.1.jpg?w=800&ssl=1 +[14]: http://reddit.com/r/linuxusersgroup diff --git a/published/201811/20181105 Commandline quick tips- How to locate a file.md b/published/201811/20181105 Commandline quick tips- How to locate a file.md new file mode 100644 index 0000000000..6b8d9a1109 --- /dev/null +++ b/published/201811/20181105 Commandline quick tips- How to locate a file.md @@ -0,0 +1,229 @@ +命令行快速技巧:如何定位一个文件 +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg) + +我们都会有文件存储在电脑里 —— 
目录、相片、源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。 + +好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:`ls`、`tree` 和 `find`。 + +### ls + +如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,`ls` 就是为此而生的。 + +只需运行 `ls` 就可以列出当下目录中所有可见的文件和目录: + +``` +$ ls +Documents Music Pictures Videos notes.txt +``` + +添加 `-l` 选项可以查看文件的相关信息。同时再加上 `-h` 选项,就可以用一种人们易读的格式查看文件的大小: + +``` +$ ls -lh +total 60K +drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Documents +drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Music +drwxr-xr-x 2 adam adam 4.0K Nov 2 13:13 Pictures +drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Videos +-rw-r--r-- 1 adam adam 43K Nov 2 13:12 notes.txt +``` + +`ls` 也可以搜索一个指定位置: + +``` +$ ls Pictures/ +trees.png wallpaper.png +``` + +或者一个指定文件 —— 即便只跟着名字的一部分: + +``` +$ ls *.txt +notes.txt +``` + +少了点什么?想要查看一个隐藏文件?没问题,使用 `-a` 选项: + +``` +$ ls -a +. .bash_logout .bashrc Documents Pictures notes.txt +.. .bash_profile .vimrc Music Videos +``` + +`ls` 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多: + +``` +$ man ls +``` + +### tree + +如果你想查看你的文件的树状结构,`tree` 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装: + +``` +$ sudo dnf install tree +``` + +如果不带任何选项或者参数地运行 `tree`,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件: + +``` +$ tree +. +|-- Documents +| |-- notes.txt +| |-- secret +| | `-- christmas-presents.txt +| `-- work +| |-- project-abc +| | |-- README.md +| | |-- do-things.sh +| | `-- project-notes.txt +| `-- status-reports.txt +|-- Music +|-- Pictures +| |-- trees.png +| `-- wallpaper.png +|-- Videos +`-- notes.txt +``` + +如果列出的太多了,使用 `-L` 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级: + +``` +$ tree -L 2 +. +|-- Documents +| |-- notes.txt +| |-- secret +| `-- work +|-- Music +|-- Pictures +| |-- trees.png +| `-- wallpaper.png +|-- Videos +`-- notes.txt +``` + +你也可以显示一个指定目录的树状图: + +``` +$ tree Documents/work/ +Documents/work/ +|-- project-abc +| |-- README.md +| |-- do-things.sh +| `-- project-notes.txt +`-- status-reports.txt +``` + +如果使用 `tree` 列出的是一个很大的树状图,你可以把它跟 `less` 组合使用: + +``` +$ tree | less +``` + +再一次,`tree` 有很多其他的选项可以使用,你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项: + +``` +$ man tree +``` + +### find + +那么如果不知道文件在哪里呢?就让我们来找到它们吧! + +要是你的系统中没有 `find`,你可以使用 DNF 安装它: + +``` +$ sudo dnf install findutils +``` + +运行 `find` 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。 + +``` +$ find +. 
+./Documents +./Documents/secret +./Documents/secret/christmas-presents.txt +./Documents/notes.txt +./Documents/work +./Documents/work/status-reports.txt +./Documents/work/project-abc +./Documents/work/project-abc/README.md +./Documents/work/project-abc/do-things.sh +./Documents/work/project-abc/project-notes.txt +./.bash_logout +./.bashrc +./Videos +./.bash_profile +./.vimrc +./Pictures +./Pictures/trees.png +./Pictures/wallpaper.png +./notes.txt +./Music +``` + +但是 `find` 真正强大的是你可以使用文件名进行搜索: + +``` +$ find -name do-things.sh +./Documents/work/project-abc/do-things.sh +``` + +或者仅仅是名字的一部分 —— 像是文件后缀。我们来找一下所有的 .txt 文件: + +``` +$ find -name "*.txt" +./Documents/secret/christmas-presents.txt +./Documents/notes.txt +./Documents/work/status-reports.txt +./Documents/work/project-abc/project-notes.txt +./notes.txt +``` + +你也可以根据大小寻找文件。如果你的空间不足的时候,这种方法也许特别有用。现在来列出所有大于 1 MB 的文件: + +``` +$ find -size +1M +./Pictures/trees.png +./Pictures/wallpaper.png +``` + +当然也可以搜索一个具体的目录。假如我想在我的 Documents 文件夹下找一个文件,而且我知道它的名字里有 “project” 这个词: + +``` +$ find Documents -name "*project*" +Documents/work/project-abc +Documents/work/project-abc/project-notes.txt +``` + +除了文件它还显示目录。你可以限制仅搜索查询文件: + +``` +$ find Documents -name "*project*" -type f +Documents/work/project-abc/project-notes.txt +``` + +最后再一次,`find` 还有很多供你使用的选项,要是你想使用它们,man 手册页绝对可以帮到你: + +``` +$ man find +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/commandline-quick-tips-locate-file/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md b/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md new file mode 100644 index 0000000000..27bb64d37e --- /dev/null +++ b/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md @@ -0,0 +1,171 @@ +pydbgen:一个数据库随机生成器 +====== + +> 用这个简单的工具生成带有多表的大型数据库,让你更好地用 SQL 研究数据科学。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK) + +在研究数据科学的过程中,最麻烦的往往不是算法或者技术,而是如何获取到一批原始数据。尽管网上有很多真实优质的数据集可以用于机器学习,然而在学习 SQL 时却不是如此。 + +对于数据科学来说,熟悉 SQL 的重要性不亚于了解 Python 或 R 编程。如果想收集诸如姓名、年龄、信用卡信息、地址这些信息用于机器学习任务,在 Kaggle 上查找专门的数据集比使用足够大的真实数据库要容易得多。 + +如果有一个简单的工具或库来帮助你生成一个大型数据库,表里还存放着大量你需要的数据,岂不美哉? 
+ +不仅仅是数据科学的入门者,即使是经验丰富的软件测试人员也会需要这样一个简单的工具,只需编写几行代码,就可以通过随机(但是是假随机)生成任意数量但有意义的数据集。 + +因此,我要推荐这个名为 [pydbgen][1] 的轻量级 Python 库。在后文中,我会简要说明这个库的相关内容,你也可以[阅读它的文档][2]详细了解更多信息。 + +### pydbgen 是什么 + +`pydbgen` 是一个轻量的纯 Python 库,它可以用于生成随机但有意义的数据记录(包括姓名、地址、信用卡号、日期、时间、公司名称、职位、车牌号等等),存放在 Pandas Dataframe 对象中,并保存到 SQLite 数据库或 Excel 文件。 + +### 如何安装 pydbgen + +目前 1.0.5 版本的 pydbgen 托管在 PyPI(Python 包索引存储库Python Package Index repository)上,并且对 [Faker][3] 有依赖关系。安装 pydbgen 只需要执行命令: + +``` +pip install pydbgen +``` + +已经在 Python 3.6 环境下测试安装成功,但在 Python 2 环境下无法正常安装。 + +### 如何使用 pydbgen + +在使用 `pydbgen` 之前,首先要初始化 `pydb` 对象。 + +``` +import pydbgen +from pydbgen import pydbgen +myDB=pydbgen.pydb() +``` + +随后就可以调用 `pydb` 对象公开的各种内部函数了。可以按照下面的例子,输出随机的美国城市和车牌号码: + +``` +myDB.city_real() +>> 'Otterville' +for _ in range(10): + print(myDB.license_plate()) +>> 8NVX937 + 6YZH485 + XBY-564 + SCG-2185 + XMR-158 + 6OZZ231 + CJN-850 + SBL-4272 + TPY-658 + SZL-0934 +``` + +另外,如果你输入的是 `city()` 而不是 `city_real()`,返回的将会是虚构的城市名。 + +``` +print(myDB.gen_data_series(num=8,data_type='city')) +>> +New Michelle +Robinborough +Leebury +Kaylatown +Hamiltonfort +Lake Christopher +Hannahstad +West Adamborough +``` + +### 生成随机的 Pandas Dataframe + +你可以指定生成数据的数量和种类,但需要注意的是,返回结果均为字符串或文本类型。 + +``` +testdf=myDB.gen_dataframe(5,['name','city','phone','date']) +testdf +``` + +最终产生的 Dataframe 类似下图所示。 + +![](https://opensource.com/sites/default/files/uploads/pydbgen_pandas-dataframe.png) + +### 生成数据库表 + +你也可以指定生成数据的数量和种类,而返回结果是数据库中的文本或者变长字符串类型。在生成过程中,你可以指定对应的数据库文件名和表名。 + +``` +myDB.gen_table(db_file='Testdb.DB',table_name='People', + +fields=['name','city','street_address','email']) +``` + +上面的例子种生成了一个能被 MySQL 和 SQLite 支持的 `.db` 文件。下图则显示了这个文件中的数据表在 SQLite 可视化客户端中打开的画面。 + +![](https://opensource.com/sites/default/files/uploads/pydbgen_db-browser-for-sqlite.png) + +### 生成 Excel 文件 + +和上面的其它示例类似,下面的代码可以生成一个具有随机数据的 Excel 文件。值得一提的是,通过将 `phone_simple` 参数设为 `False` ,可以生成较长较复杂的电话号码。如果你想要提高自己在数据提取方面的能力,不妨尝试一下这个功能。 + +``` +myDB.gen_excel(num=20,fields=['name','phone','time','country'], +phone_simple=False,filename='TestExcel.xlsx') +``` + +最终的结果类似下图所示: + +![](https://opensource.com/sites/default/files/uploads/pydbgen_excel.png) + +### 生成随机电子邮箱地址 + +`pydbgen` 内置了一个 `realistic_email` 方法,它基于种子来生成随机的电子邮箱地址。如果你不想在网络上使用真实的电子邮箱地址时,这个功能可以派上用场。 + +``` +for _ in range(10): + print(myDB.realistic_email('Tirtha Sarkar')) +>> +Tirtha_Sarkar@gmail.com +Sarkar.Tirtha@outlook.com +Tirtha_S48@verizon.com +Tirtha_Sarkar62@yahoo.com +Tirtha.S46@yandex.com +Tirtha.S@att.com +Sarkar.Tirtha60@gmail.com +TirthaSarkar@zoho.com +Sarkar.Tirtha@protonmail.com +Tirtha.S@comcast.net +``` + +### 未来的改进和用户贡献 + +目前的版本中并不完美。如果你发现了 pydbgen 的 bug 导致它在运行期间发生崩溃,请向我反馈。如果你打算对这个项目贡献代码,[也随时欢迎你][1]。当然现在也还有很多改进的方向: + + * pydbgen 作为随机数据生成器,可以集成一些机器学习或统计建模的功能吗? + * pydbgen 是否会添加可视化功能? + +一切皆有可能! 
+ +如果你有任何问题或想法想要分享,都可以通过 [tirthajyoti@gmail.com][4] 与我联系。如果你像我一样对机器学习和数据科学感兴趣,也可以添加我的 [LinkedIn][5] 或在 [Twitter][6] 上关注我。另外,还可以在我的 [GitHub][7] 上找到更多 Python、R 或 MATLAB 的有趣代码和机器学习资源。 + +本文以 [CC BY-SA 4.0][9] 许可在 [Towards Data Science][8] 首发。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/pydbgen-random-database-table-generator + +作者:[Tirthajyoti Sarkar][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tirthajyoti +[b]: https://github.com/lujun9972 +[1]: https://github.com/tirthajyoti/pydbgen +[2]: http://pydbgen.readthedocs.io/en/latest/ +[3]: https://faker.readthedocs.io/en/latest/index.html +[4]: mailto:tirthajyoti@gmail.com +[5]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/ +[6]: https://twitter.com/tirthajyotiS +[7]: https://github.com/tirthajyoti?tab=repositories +[8]: https://towardsdatascience.com/introducing-pydbgen-a-random-dataframe-database-table-generator-b5c7bdc84be5 +[9]: https://creativecommons.org/licenses/by-sa/4.0/ + diff --git a/published/201811/20181105 Revisiting the Unix philosophy in 2018.md b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md new file mode 100644 index 0000000000..7c9931e601 --- /dev/null +++ b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md @@ -0,0 +1,102 @@ +2018 重温 Unix 哲学 +====== +> 在现代微服务环境中,构建小型、单一的应用程序的旧策略又再一次流行了起来。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) + +1984 年,Rob Pike 和 Brian W. Kernighan 在 AT&T 贝尔实验室技术期刊上发表了名为 “[Unix 环境编程][1]” 的文章,其中他们使用 BSD 的 `cat -v` 例子来认证 Unix 哲学。简而言之,Unix 哲学是:构建小型、单一的应用程序 —— 不管用什么语言 —— 只做一件小而美的事情,用 `stdin` / `stdout` 进行通信,并通过管道进行连接。 + +听起来是不是有点耳熟? + +是的,我也这么认为。这就是 James Lewis 和 Martin Fowler 给出的 [微服务的定义][2] 。 + +> 简单来说,微服务架构的风格是将单个 应用程序开发为一套小型服务的方法,每个服务都运行在它的进程中,并用轻量级机制进行通信,通常是 HTTP 资源 API 。 + +虽然一个 *nix 程序或者是一个微服务本身可能非常局限甚至不是很有用,但是当这些独立工作的单元组合在一起的时候就显示出了它们真正的好处和强大。 + +### *nix程序 vs 微服务 + +下面的表格对比了 *nix 环境中的程序(例如 `cat` 或 `lsof`)与微服务环境中的程序。 + +| | *nix 程序 | 微服务 | +| ------------- | ------------------------- | ----------------------- | +| 执行单元 | 程序使用 `stdin`/`stdout` | 使用 HTTP 或 gRPC API | +| 数据流 | 管道 | ? 
| +| 可配置和参数化 | 命令行参数、环境变量和配置文件 | JSON/YAML 文档 | +| 发现 | 包管理器、man、make | DNS、环境变量、OpenAPI | + +让我们详细的看看每一行。 + +#### 执行单元 + +*nix 系统(如 Linux)中的执行单元是一个可执行的文件(二进制或者是脚本),理想情况下,它们从 `stdin` 读取输入并将输出写入 `stdout`。而微服务通过暴露一个或多个通信接口来提供服务,比如 HTTP 和 gRPC API。在这两种情况下,你都会发现无状态示例(本质上是纯函数行为)和有状态示例,除了输入之外,还有一些内部(持久)状态决定发生了什么。 + +#### 数据流 + +传统的,*nix 程序能够通过管道进行通信。换句话说,我们要感谢 [Doug McIlroy][3],你不需要创建临时文件来传递,而可以在每个进程之间处理无穷无尽的数据流。据我所知,除了我在 [2017 年做的基于 Apache Kafka 小实验][4],没有什么能比得上管道化的微服务了。 + +#### 可配置和参数化 + +你是如何配置程序或者服务的,无论是永久性的服务还是即时的服务?是的,在 *nix 系统上,你通常有三种方法:命令行参数、环境变量,或全面的配置文件。在微服务架构中,典型的做法是用 YAML(或者甚至是 JSON)文档,定制好一个服务的布局和配置以及依赖的组件和通信、存储和运行时配置。例如 [Kubernetes 资源定义][5]、[Nomad 工作规范][6] 或 [Docker 编排][7] 文档。这些可能参数化也可能不参数化;也就是说,除非你知道一些模板语言,像 Kubernetes 中的 [Helm][8],否则你会发现你使用了很多 `sed -i` 这样的命令。 + +#### 发现 + +你怎么知道有哪些程序和服务可用,以及如何使用它们?在 *nix 系统中通常都有一个包管理器和一个很好用的 man 页面;使用它们,应该能够回答你所有的问题。在微服务的设置中,在寻找一个服务的时候会相对更自动化一些。除了像 [Airbnb 的 SmartStack][9] 或 [Netflix 的 Eureka][10] 等可以定制以外,通常还有基于环境变量或基于 DNS 的[方法][11],允许您动态的发现服务。同样重要的是,事实上 [OpenAPI][12] 为 HTTP API 提供了一套标准文档和设计模式,[gRPC][13] 为一些耦合性强的高性能项目也做了同样的事情。最后非常重要的一点是,考虑到开发者经验(DX),应该从写一份好的 [Makefile][14] 开始,并以编写符合 [风格][15] 的文档结束。 + +### 优点和缺点 + +*nix 系统和微服务都提供了许多挑战和机遇。 + +#### 模块性 + +要设计一个简洁、有清晰的目的,并且能够很好地和其它模块配合的某个东西是很困难的。甚至是在不同版本中实现并引入相应的异常处理流程都很困难的。在微服务中,这意味着重试逻辑和超时机制,或者将这些功能外包到服务网格service mesh是不是一个更好的选择呢?这确实比较难,可如果你做好了,那它的可重用性是巨大的。 + +#### 可观测性 + +在一个独石monolith(2018 年)或是一个试图做任何事情的大型程序(1984 年),当情况恶化的时候,应当能够直接的找到问题的根源。但是在一个 + +``` +yes | tr \\n x | head -c 450m | grep n +``` + +或者在一个微服务设置中请求一个路径,例如,涉及 20 个服务,你怎么弄清楚是哪个服务的问题?幸运的是,我们有很多标准,特别是 [OpenCensus][16] 和 [OpenTracing][17]。如果您希望转向微服务,可预测性仍然可能是最大的问题。 + +#### 全局状态 + +对于 *nix 程序来说可能不是一个大问题,但在微服务中,全局状态仍然是一个需要讨论的问题。也就是说,如何确保有效的管理本地化(持久性)的状态以及尽可能在少做变更的情况下使全局保持一致。 + +### 总结一下 + +最后,问题仍然是:你是否在使用合适的工具来完成特定的工作?也就是说,以同样的方式实现一个特定的 *nix 程序在某些时候或者阶段会是一个更好的选择,它是可能在你的组织或工作过程中的一个[最好的选择][18]。无论如何,我希望这篇文章可以让你看到 Unix 哲学和微服务之间许多强有力的相似之处。也许我们可以从前者那里学到一些东西使后者受益。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/revisiting-unix-philosophy-2018 + +作者:[Michael Hausenblas][a] +选题:[lujun9972][b] +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mhausenblas +[b]: https://github.com/lujun9972 +[1]: http://harmful.cat-v.org/cat-v/ +[2]: https://martinfowler.com/articles/microservices.html +[3]: https://en.wikipedia.org/wiki/Douglas_McIlroy +[4]: https://speakerdeck.com/mhausenblas/distributed-named-pipes-and-other-inter-services-communication +[5]: http://kubernetesbyexample.com/ +[6]: https://www.nomadproject.io/docs/job-specification/index.html +[7]: https://docs.docker.com/compose/overview/ +[8]: https://helm.sh/ +[9]: https://github.com/airbnb/smartstack-cookbook +[10]: https://github.com/Netflix/eureka +[11]: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services +[12]: https://www.openapis.org/ +[13]: https://grpc.io/ +[14]: https://suva.sh/posts/well-documented-makefiles/ +[15]: https://www.linux.com/news/improve-your-writing-gnu-style-checkers +[16]: https://opencensus.io/ +[17]: https://opentracing.io/ +[18]: https://robertnorthard.com/devops-days-well-architected-monoliths-are-okay/ diff --git a/published/201811/20181105 Some Good Alternatives To ‘du- Command.md b/published/201811/20181105 Some Good Alternatives To ‘du- Command.md new file mode 100644 index 0000000000..cd08bac2a2 --- 
/dev/null +++ b/published/201811/20181105 Some Good Alternatives To ‘du- Command.md @@ -0,0 +1,305 @@ +几个用于替代 du 命令的更好选择 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/du-command-720x340.jpg) + +大家对 `du` 命令应该都不陌生,它可以在类 Unix 系统中对文件和目录的空间使用情况进行计算和汇总。如果你也经常需要使用 `du` 命令,你会对以下内容感兴趣的。我发现了五个可以替代原有的 `du` 命令的更好的工具。当然,如果后续有更多更好的选择,我会继续列出来。如果你有其它推荐,也欢迎在评论中留言。 + +### ncdu + +`ncdu` 作为普通 `du` 的替代品,这在 Linux 社区中已经很流行了。`ncdu` 正是基于开发者们对 `du` 的性能不满意而被开发出来的。`ncdu` 是一个使用 C 语言和 ncurses 接口开发的简易快速的磁盘用量分析器,可以用来查看目录或文件在本地或远程系统上占用磁盘空间的情况。如果你有兴趣查看关于 `ncdu` 的详细介绍,可以浏览《[如何在 Linux 上使用 ncdu 查看磁盘占用量][9]》这一篇文章。 + +### tin-summer + +tin-summer 是使用 Rust 语言编写的自由开源工具,它可以用于查找占用磁盘空间的文件,它也是 `du` 命令的另一个替代品。由于使用了多线程,因此 tin-summer 在计算大目录的大小时会比 `du` 命令快得多。tin-summer 与 `du` 命令之间的区别是前者读取文件的大小,而后者则读取磁盘使用情况。 + +tin-summer 的开发者认为它可以替代 `du`,因为它具有以下优势: + + * 在大目录的操作速度上比 `du` 更快; + * 在显示结果上默认采用易读格式; + * 可以使用正则表达式排除文件或目录; + * 可以对输出进行排序和着色处理; + * 可扩展,等等。 + +**安装 tin-summer** + +要安装 tin-summer,只需要在终端中执行以下命令: + +``` +$ curl -LSfs https://japaric.github.io/trust/install.sh | sh -s -- --git vmchale/tin-summer +``` + +你也可以使用 `cargo` 软件包管理器安装 tin-summer,但你需要在系统上先安装 Rust。在 Rust 已经安装好的情况下,执行以下命令: + +``` +$ cargo install tin-summer +``` + +如果上面提到的这两种方法都不能成功安装 tin-summer,还可以从它的[软件发布页][1]下载最新版本的二进制文件编译,进行手动安装。 + +**用法** + +(LCTT 译注:tin-summer 的命令名为 `sn`) + +如果需要查看当前工作目录的文件大小,可以执行以下命令: + +``` +$ sn f +749 MB ./.rustup/toolchains +749 MB ./.rustup +147 MB ./.cargo/bin +147 MB ./.cargo +900 MB . +``` + +不需要进行额外声明,它也是默认以易读的格式向用户展示数据。在使用 `du` 命令的时候,则必须加上额外的 `-h` 参数才能得到同样的效果。 + +只需要按以下的形式执行命令,就可以查看某个特定目录的文件大小。 + +``` +$ sn f +``` + +还可以对输出结果进行排序,例如下面的命令可以输出指定目录中最大的 5 个文件或目录: + +``` +$ sn sort /home/sk/ -n5 +749 MB /home/sk/.rustup +749 MB /home/sk/.rustup/toolchains +147 MB /home/sk/.cargo +147 MB /home/sk/.cargo/bin +2.6 MB /home/sk/mcelog +900 MB /home/sk/ +``` + +顺便一提,上面结果中的最后一行是指定目录 `/home/sk` 的总大小。所以不要惊讶为什么输入的是 5 而实际输出了 6 行结果。 + +在当前目录下查找带有构建工程的目录,可以使用以下命令: + +``` +$ sn ar +``` + +tin-summer 同样支持查找指定大小的带有构建工程的目录。例如执行以下命令可以查找到大小在 100 MB 以上的带有构建工程的目录: + +``` +$ sn ar -t100M +``` + +如上文所说,tin-summer 在操作大目录的时候速度比较快,因此在操作小目录的时候,速度会相对比较慢一些。不过它的开发者已经表示,将会在以后的版本中优化这个缺陷。 + +要获取相关的帮助,可以执行以下命令: + +``` +$ sn --help +``` + +如果想要更详尽的介绍,可以查看[这个项目的 GitHub 页面][10]。 + +### dust + +`dust` (含义是 `du` + `rust` = `dust`)使用 Rust 编写,是一个免费、开源的更直观的 `du` 工具。它可以在不需要 `head` 或`sort` 命令的情况下即时显示目录占用的磁盘空间。与 tin-summer 一样,它会默认情况以易读的格式显示每个目录的大小。 + +**安装 dust** + +由于 `dust` 也是使用 Rust 编写,因此它也可以通过 `cargo` 软件包管理器进行安装: + +``` +$ cargo install du-dust +``` + +也可以从它的[软件发布页][2]下载最新版本的二进制文件,并按照以下步骤安装。在写这篇文章的时候,最新的版本是 0.3.1。 + +``` +$ wget https://github.com/bootandy/dust/releases/download/v0.3.1/dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz +``` + +抽取文件: + +``` +$ tar -xvf dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz +``` + +最后将可执行文件复制到你的 `$PATH`(例如 `/usr/local/bin`)下: + +``` +$ sudo mv dust /usr/local/bin/ +``` + +**用法** + +需要查看当前目录及所有子目录下的文件大小,可以执行以下命令: + +``` +$ dust +``` + +输出示例: + +![](http://www.ostechnix.com/wp-content/uploads/2018/11/dust-1.png) + +带上 `-p` 参数可以按照从当前目录起始的完整目录显示。 + +``` +$ dust -p +``` + +![dust 2][4] + +如果需要查看多个目录的大小,只需要同时列出这些目录,并用空格分隔开即可: + +``` +$ dust +``` + +下面再多举几个例子,例如: + +显示文件的长度: + +``` +$ dust -s +``` + +只显示 10 个目录: + +``` +$ dust -n 10 +``` + +查看当前目录下最多 3 层子目录: + +``` +$ dust -d 3 +``` + +查看帮助: + +``` +$ dust -h +``` + +如果想要更详尽的介绍,可以查看[这个项目的 GitHub 页面][11]。 + +### diskus + +`diskus` 也是使用 Rust 编写的一个小型、快速的开源工具,它可以用于替代 `du -sh` 命令。`diskus` 将会计算当前目录下所有文件的总大小,它的效果相当于 `du -sh` 或 `du -sh --bytes`,但其开发者表示 `diskus` 的运行速度是 `du -sh` 的 9 倍。 + +**安装 diskus** + 
+`diskus` 已经存放于 Arch Linux 社区用户软件仓库Arch Linux User-community Repository([AUR][5])当中,可以通过任何一种 AUR 帮助工具(例如 [`yay`][6])把它安装在基于 Arch 的系统上: + +``` +$ yay -S diskus +``` + +对于 Ubuntu 及其衍生发行版,可以在 `diskus` 的[软件发布页][7]上下载最新版的软件包并安装: + +``` +$ wget "https://github.com/sharkdp/diskus/releases/download/v0.3.1/diskus_0.3.1_amd64.deb" + +$ sudo dpkg -i diskus_0.3.1_amd64.deb +``` + +还可以使用 `cargo` 软件包管理器安装 `diskus`,但必须在系统上先安装 Rust 1.29+。 + +安装好 Rust 之后,就可以使用以下命令安装 `diskus`: + +``` +$ cargo install diskus +``` + +**用法** + +在通常情况下,如果需要查看某个目录的大小,我会使用形如 `du -sh` 的命令。 + +``` +$ du -sh dir +``` + +这里的 `-s` 参数表示显示总大小。 + +如果使用 `diskus`,直接就可以显示当前目录的总大小。 + +``` +$ diskus +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/diskus-in-action.png) + +我使用 `diskus` 查看 Arch Linux 系统上各个目录的总大小,这个工具的速度确实比 `du -sh` 快得多。但是它目前只能显示当前目录的大小。 + +要获取相关的帮助,可以执行以下命令: + +``` +$ diskus -h +``` + +如果想要更详尽的介绍,可以查看[这个项目的 GitHub 页面][12]。 + +### duu + +`duu` 是 Directory Usage Utility 的缩写。它是使用 Python 编写的查看指定目录大小的工具。它具有跨平台的特性,因此在 Windows、Mac OS 和 Linux 系统上都能够使用。 + +**安装 duu** + +安装这个工具之前需要先安装 Python 3。不过目前很多 Linux 发行版的默认软件仓库中都带有 Python 3,所以这个依赖并不难解决。 + +Python 3 安装完成后,从 `duu` 的[软件发布页][8]下载其最新版本。 + +``` +$ wget https://github.com/jftuga/duu/releases/download/2.20/duu.py +``` + +**用法** + +要查看当前目录的大小,只需要执行以下命令: + +``` +$ python3 duu.py +``` + +输出示例: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/duu.png) + +从上图可以看出,`duu` 会显示当前目录下文件的数量情况,按照 Byte、KB、MB 单位显示这些文件的总大小,以及每个文件的大小。 + +如果需要查看某个目录的大小,只需要声明目录的绝对路径即可: + +``` +$ python3 duu.py /home/sk/Downloads/ +``` + +如果想要更详尽的介绍,可以查看[这个项目的 GitHub 页面][13]。 + +以上就是 `du` 命令的五种替代方案,希望这篇文章能够帮助到你。就我自己而言,我并不会在这五种工具之间交替使用,我更喜欢使用 `ncdu`。欢迎在下面的评论区发表你对这些工具的评论。 + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/some-good-alternatives-to-du-command/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/vmchale/tin-summer/releases +[2]: https://github.com/bootandy/dust/releases +[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[4]: http://www.ostechnix.com/wp-content/uploads/2018/11/dust-2.png +[5]: https://aur.archlinux.org/packages/diskus-bin/ +[6]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[7]: https://github.com/sharkdp/diskus/releases +[8]: https://github.com/jftuga/duu/releases +[9]: https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/ +[10]: https://github.com/vmchale/tin-summer +[11]: https://github.com/bootandy/dust +[12]: https://github.com/sharkdp/diskus +[13]: https://github.com/jftuga/duu + diff --git a/published/201811/20181107 Gitbase- Exploring git repos with SQL.md b/published/201811/20181107 Gitbase- Exploring git repos with SQL.md new file mode 100644 index 0000000000..994474d949 --- /dev/null +++ b/published/201811/20181107 Gitbase- Exploring git repos with SQL.md @@ -0,0 +1,92 @@ +gitbase:用 SQL 查询 Git 仓库 +====== + +> gitbase 是一个使用 go 开发的的开源项目,它实现了在 Git 仓库上执行 SQL 查询。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) + +Git 已经成为了代码版本控制的事实标准,但尽管 Git 相当普及,对代码仓库的深入分析的工作难度却没有因此而下降;而 SQL 在大型代码库的查询方面则已经是一种久经考验的语言,因此诸如 Spark 和 BigQuery 这样的项目都采用了它。 + +所以,source{d} 很顺理成章地将这两种技术结合起来,就产生了 gitbase(LCTT 
译注:source{d} 是一家开源公司,本文作者是该公司开发者关系副总裁)。gitbase 是一个代码即数据code-as-data的解决方案,可以使用 SQL 对 git 仓库进行大规模分析。 + +[gitbase][1] 是一个完全开源的项目。它站在了很多巨人的肩上,因此得到了足够的发展竞争力。下面就来介绍一下其中的一些“巨人”。 + +![](https://opensource.com/sites/default/files/uploads/gitbase.png) + +*[gitbase playground][2] 为 gitbase 提供了一个可视化的操作环境。* + +### 用 Vitess 解析 SQL + +gitbase 通过 SQL 与用户进行交互,因此需要能够遵循 MySQL 协议来对通过网络传入的 SQL 请求作出解析和理解,万幸由 YouTube 建立的 [Vitess][3] 项目已经在这一方面给出了解决方案。Vitess 是一个横向扩展的 MySQL 数据库集群系统。 + +我们只是使用了这个项目中的部分重要代码,并将其转化为一个可以让任何人在数分钟以内编写出一个 MySQL 服务器的[开源程序][4],就像我在 [justforfunc][5] 视频系列中展示的 [CSVQL][6] 一样,它可以使用 SQL 操作 CSV 文件。 + +### 用 go-git 读取 git 仓库 + +在成功解析 SQL 请求之后,还需要对数据集中的 git 仓库进行查询才能返回结果。因此,我们还结合使用了 source{d} 最成功的 [go-git][7] 仓库。go-git 是使用纯 go 语言编写的具有高度可扩展性的 git 实现。 + +借此我们就可以很方便地将存储在磁盘上的代码仓库保存为 [siva][8] 文件格式(这同样是 source{d} 的一个开源项目),也可以通过 `git clone` 来对代码仓库进行复制。 + +### 使用 enry 检测语言、使用 babelfish 解析文件 + +gitbase 集成了我们开源的语言检测项目 [enry][9] 以及代码解析项目 [babelfish][10],因此在分析 git 仓库历史代码的能力也相当强大。babelfish 是一个自托管服务,普适于各种源代码解析,并将代码文件转换为通用抽象语法树Universal Abstract Syntax Tree(UAST)。 + +这两个功能在 gitbase 中可以被用户以函数 `LANGUAGE` 和 `UAST` 调用,诸如“查找上个月最常被修改的函数的名称”这样的请求就需要通过这两个功能实现。 + +### 提高性能 + +gitbase 可以对非常大的数据集进行分析,例如来自 GitHub 高达 3 TB 源代码的 Public Git Archive([公告][11])。面临的工作量如此巨大,因此每一点性能都必须运用到极致。于是,我们也使用到了 Rubex 和 Pilosa 这两个项目。 + +#### 使用 Rubex 和 Oniguruma 优化正则表达式速度 + +[Rubex][12] 是 go 的正则表达式标准库包的一个准替代品。之所以说它是准替代品,是因为它没有在 `regexp.Regexp` 类中实现 `LiteralPrefix` 方法,直到现在都还没有。 + +Rubex 的高性能是由于使用 [cgo][14] 调用了 [Oniguruma][13],它是一个高度优化的 C 代码库。 + +#### 使用 Pilosa 索引优化查询速度 + +索引几乎是每个关系型数据库都拥有的特性,但 Vitess 由于不需要用到索引,因此并没有进行实现。 + +于是我们引入了 [Pilosa][15] 这个开源项目。Pilosa 是一个使用 go 实现的分布式位图索引,可以显著提升跨多个大型数据集的查询的速度。通过 Pilosa,gitbase 才得以在巨大的数据集中进行查询。 + +### 总结 + +我想用这一篇文章来对开源社区表达我衷心的感谢,让我们能够不负众望的在短时间内完成 gitbase 的开发。我们 source{d} 的每一位成员都是开源的拥护者,github.com/src-d 下的每一行代码都是见证。 + +你想使用 gitbase 吗?最简单快捷的方式是从 sourced.tech/engine 下载 source{d} 引擎,就可以通过单个命令运行 gitbase 了。 + +想要了解更多,可以听听我在 [Go SF 大会][16]上的演讲录音。 + +本文在 [Medium][17] 首发,并经许可在此发布。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/gitbase + +作者:[Francesc Campoy][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/francesc +[b]: https://github.com/lujun9972 +[1]: https://github.com/src-d/gitbase +[2]: https://github.com/src-d/gitbase-web +[3]: https://github.com/vitessio/vitess +[4]: https://github.com/src-d/go-mysql-server +[5]: http://justforfunc.com/ +[6]: https://youtu.be/bcRDXAraprk +[7]: https://github.com/src-d/go-git +[8]: https://github.com/src-d/siva +[9]: https://github.com/src-d/enry +[10]: https://github.com/bblfsh/bblfshd +[11]: https://blog.sourced.tech/post/announcing-pga/ +[12]: https://github.com/moovweb/rubex +[13]: https://github.com/kkos/oniguruma +[14]: https://golang.org/cmd/cgo/ +[15]: https://github.com/pilosa/pilosa +[16]: https://www.meetup.com/golangsf/events/251690574/ +[17]: https://medium.com/sourcedtech/gitbase-exploring-git-repos-with-sql-95ec0986386c + diff --git a/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md new file mode 100644 index 0000000000..4d7112d397 --- /dev/null +++ b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md @@ -0,0 +1,186 @@ +在 
Linux 中如何查找一个命令或进程的执行时间 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png) + +在类 Unix 系统中,你可能知道一个命令或进程开始执行的时间,以及[一个进程运行了多久][1]。 但是,你如何知道这个命令或进程何时结束或者它完成运行所花费的总时长呢? 在类 Unix 系统中,这是非常容易的! 有一个专门为此设计的程序名叫 **GNU time**。 使用 `time` 程序,我们可以轻松地测量 Linux 操作系统中命令或程序的总执行时间。 `time` 命令在大多数 Linux 发行版中都有预装,所以你不必去安装它。 + +### 在 Linux 中查找一个命令或进程的执行时间 + +要测量一个命令或程序的执行时间,运行: + +``` +$ /usr/bin/time -p ls +``` + +或者, + +``` +$ time ls +``` + +输出样例: + +``` +dir1 dir2 file1 file2 mcelog + +real 0m0.007s +user 0m0.001s +sys 0m0.004s +``` + +``` +$ time ls -a +. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful +.. .bashrc dir2 .gnupg .profile .wget-hsts +.bash_history .cache file1 .local .stack + +real 0m0.008s +user 0m0.001s +sys 0m0.005s +``` + +以上命令显示出了 `ls` 命令的总执行时间。 你可以将 `ls` 替换为任何命令或进程,以查找总的执行时间。 + +输出详解: + + 1. `real` —— 指的是命令或程序所花费的总时间 + 2. `user` —— 指的是在用户模式下程序所花费的时间 + 3. `sys` —— 指的是在内核模式下程序所花费的时间 + + + +我们也可以将命令限制为仅运行一段时间。参考如下教程了解更多细节: + +- [在 Linux 中如何让一个命令运行特定的时长](https://www.ostechnix.com/run-command-specific-time-linux/) + +### time 与 /usr/bin/time + +你可能注意到了, 我们在上面的例子中使用了两个命令 `time` 和 `/usr/bin/time` 。 所以,你可能会想知道他们的不同。 + +首先, 让我们使用 `type` 命令看看 `time` 命令到底是什么。对于那些我们不了解的 Linux 命令,`type` 命令用于查找相关命令的信息。 更多详细信息,[请参阅本指南][2]。 + +``` +$ type -a time +time is a shell keyword +time is /usr/bin/time +``` + +正如你在上面的输出中看到的一样,`time` 是两个东西: + + * 一个是 BASH shell 中内建的关键字 + * 一个是可执行文件,如 `/usr/bin/time` + +由于 shell 关键字的优先级高于可执行文件,当你没有给出完整路径只运行 `time` 命令时,你运行的是 shell 内建的命令。 但是,当你运行 `/usr/bin/time` 时,你运行的是真正的 **GNU time** 命令。 因此,为了执行真正的命令你可能需要给出完整路径。 + +在大多数 shell 中如 BASH、ZSH、CSH、KSH、TCSH 等,内建的关键字 `time` 是可用的。 `time` 关键字的选项少于该可执行文件,你可以使用的唯一选项是 `-p`。 + +你现在知道了如何使用 `time` 命令查找给定命令或进程的总执行时间。 想进一步了解 GNU time 工具吗? 继续阅读吧! + +### 关于 GNU time 程序的简要介绍 + +GNU time 程序运行带有给定参数的命令或程序,并在命令完成后将系统资源使用情况汇总到标准输出。 与 `time` 关键字不同,GNU time 程序不仅显示命令或进程的执行时间,还显示内存、I/O 和 IPC 调用等其他资源。 + +`time` 命令的语法是: + +``` +/usr/bin/time [options] command [arguments...] +``` + +上述语法中的 `options` 是指一组可以与 `time` 命令一起使用去执行特定功能的选项。 下面给出了可用的选项: + + * `-f, –format` —— 使用此选项可以根据需求指定输出格式。 + * `-p, –portability` —— 使用简要的输出格式。 + * `-o file, –output=FILE` —— 将输出写到指定文件中而不是到标准输出。 + * `-a, –append` —— 将输出追加到文件中而不是覆盖它。 + * `-v, –verbose` —— 此选项显示 `time` 命令输出的详细信息。 + * `–quiet` – 此选项可以防止 `time` 命令报告程序的状态. 
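这些选项里,`-v` 与 `-f` 在分析资源占用时最为常用。下面给出一个简单的示意(以 `ls` 为例,具体数值会因系统而异;其中 `%e` 和 `%M` 是 GNU time 支持的格式占位符,分别表示实际耗时的秒数和以 KB 计的峰值常驻内存):

```
# 以详细模式输出全部资源使用统计
$ /usr/bin/time -v ls

# 自定义输出格式,只显示耗时和内存峰值
$ /usr/bin/time -f "%e 秒,峰值内存 %M KB" ls
```

下文会先从不带任何选项的默认输出看起,再逐一演示 `-o`、`-a` 和 `-f` 选项的用法。
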
+ +当不带任何选项使用 GNU time 命令时,你将看到以下输出。 + +``` +$ /usr/bin/time wc /etc/hosts +9 28 273 /etc/hosts +0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k +0inputs+0outputs (0major+73minor)pagefaults 0swaps +``` + +如果你用 shell 关键字 `time` 运行相同的命令, 输出会有一点儿不同: + +``` +$ time wc /etc/hosts +9 28 273 /etc/hosts + +real 0m0.006s +user 0m0.001s +sys 0m0.004s +``` + +有时,你可能希望将系统资源使用情况输出到文件中而不是终端上。 为此, 你可以使用 `-o` 选项,如下所示。 + +``` +$ /usr/bin/time -o file.txt ls +dir1 dir2 file1 file2 file.txt mcelog +``` + +正如你看到的,`time` 命令不会显示到终端上。因为我们将输出写到了`file.txt` 的文件中。 让我们看一下这个文件的内容: + +``` +$ cat file.txt +0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k +0inputs+0outputs (0major+106minor)pagefaults 0swaps +``` + +当你使用 `-o` 选项时, 如果你没有一个名为 `file.txt` 的文件,它会创建一个并把输出写进去。如果文件存在,它会覆盖文件原来的内容。 + +你可以使用 `-a` 选项将输出追加到文件后面,而不是覆盖它的内容。 + +``` +$ /usr/bin/time -a file.txt ls +``` + +`-f` 选项允许用户根据自己的喜好控制输出格式。 比如说,以下命令的输出仅显示用户,系统和总时间。 + +``` +$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls +dir1 dir2 file1 file2 mcelog +0:00.00 real, 0.00 user, 0.00 sys +``` + +请注意 shell 中内建的 `time` 命令并不具有 GNU time 程序的所有功能。 + +有关 GNU time 程序的详细说明可以使用 `man` 命令来查看。 + +``` +$ man time +``` + +想要了解有关 Bash 内建 `time` 关键字的更多信息,请运行: + +``` +$ help time +``` + +就到这里吧。 希望对你有所帮助。 + +会有更多好东西分享哦。 请关注我们! + +加油哦! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[caixiangyue](https://github.com/caixiangyue) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/find-long-process-running-linux/ +[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/ diff --git a/published/201811/20181108 Choosing a printer for Linux.md b/published/201811/20181108 Choosing a printer for Linux.md new file mode 100644 index 0000000000..0d13ffd990 --- /dev/null +++ b/published/201811/20181108 Choosing a printer for Linux.md @@ -0,0 +1,79 @@ +为 Linux 选择打印机 +====== + +> Linux 为打印机提供了广泛的支持。学习如何利用它。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +我们在传闻已久的无纸化社会方面取得了重大进展,但我们仍需要不时打印文件。如果你是 Linux 用户,并有一台没有 Linux 安装盘的打印机,或者你正准备在市场上购买新设备,那么你很幸运。因为大多数 Linux 发行版(以及 MacOS)都使用通用 Unix 打印系统([CUPS][1]),它包含了当今大多数打印机的驱动程序。这意味着 Linux 为打印机提供了比 Windows 更广泛的支持。 + +### 选择打印机 + +如果你需要购买新打印机,了解它是否支持 Linux 的最佳方法是查看包装盒或制造商网站上的文档。你也可以搜索 [Open Printing][2] 数据库。它是检查各种打印机与 Linux 兼容性的绝佳资源。 + +以下是与 Linux 兼容的佳能打印机的一些 Open Printing 结果。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png) + +下面的截图是 Open Printing 的 Hewlett-Packard LaserJet 4050 的结果 —— 根据数据库,它应该可以“完美”工作。这里列出了建议驱动以及通用说明,让我了解它适用于 CUPS、行式打印守护程序(LPD)、LPRng 等。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png) + +在任何情况下,最好在购买打印机之前检查制造商的网站并询问其他 Linux 用户。 + +### 检查你的连接 + +有几种方法可以将打印机连接到计算机。如果你的打印机是通过 USB 连接的,那么可以在 Bash 提示符下输入 `lsusb` 来轻松检查连接。 + +``` +$ lsusb +``` + +该命令返回 “Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard” —— 这没有太多价值,但可以得知打印机已连接。我可以通过输入以下命令获得有关打印机的更多信息: + +``` +$ dmesg | grep -i usb +``` + +结果更加详细。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png) + +如果你尝试将打印机连接到并口(假设你的计算机有并口 —— 
如今很少见),你可以使用此命令检查连接: + +``` +$ dmesg | grep -i parport +``` + +返回的信息可以帮助我为我的打印机选择正确的驱动程序。我发现,如果我坚持使用流行的名牌打印机,大部分时间我都能获得良好的效果。 + +### 设置你的打印机软件 + +Fedora Linux 和 Ubuntu Linux 都包含简单的打印机设置工具。[Fedora][3] 为打印问题的答案维护了一个出色的 wiki。可以在 GUI 中的设置轻松启动这些工具,也可以在命令行上调用 `system-config-printer`。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png) + +HP 支持 Linux 打印的 [HP Linux 成像和打印][4] (HPLIP) 软件可能已安装在你的 Linux 系统上。如果没有,你可以为你的发行版[下载][5]最新版本。打印机制造商 [Epson][6] 和 [Brother][7] 也有带有 Linux 打印机驱动程序和信息的网页。 + +你最喜欢的 Linux 打印机是什么?请在评论中分享你的意见。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/choosing-printer-linux + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://www.cups.org/ +[2]: http://www.openprinting.org/printers +[3]: https://fedoraproject.org/wiki/Printing +[4]: https://developers.hp.com/hp-linux-imaging-and-printing +[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip +[6]: https://epson.com/Support/wa00821 +[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on diff --git a/published/201811/20181108 The Difference Between more, less And most Commands.md b/published/201811/20181108 The Difference Between more, less And most Commands.md new file mode 100644 index 0000000000..14e1fc87fd --- /dev/null +++ b/published/201811/20181108 The Difference Between more, less And most Commands.md @@ -0,0 +1,221 @@ +more、less 和 most 的区别 +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-less-and-most-commands-720x340.png) + +如果你是一个 Linux 方面的新手,你可能会在 `more`、`less`、`most` 这三个命令行工具之间产生疑惑。在本文当中,我会对这三个命令行工具进行对比,以及展示它们各自在 Linux 中的一些使用例子。总的来说,这几个命令行工具之间都有相通和差异,而且它们在大部分 Linux 发行版上都有自带。 + +我们首先来看看 `more` 命令。 + +### more 命令 + +`more` 是一个老式的、基础的终端分页阅读器,它可以用于打开指定的文件并进行交互式阅读。如果文件的内容太长,在一屏以内无法完整显示,就会逐页显示文件内容。使用回车键或者空格键可以滚动浏览文件的内容,但有一个限制,就是只能够单向滚动。也就是说只能按顺序往下翻页,而不能进行回看。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-command-demo.gif) + +**更正** + +有的 Linux 用户向我指出,在 `more` 当中是可以向上翻页的。不过,最原始版本的 `more` 确实只允许向下翻页,在后续出现的较新的版本中也允许了有限次数的向上翻页,只需要在浏览过程中按 `b` 键即可向上翻页。唯一的限制是 `more` 不能搭配管道使用(如 `ls | more`)。(LCTT 译注:此处原作者疑似有误,译者使用 `more` 是可以搭配管道使用的,或许与不同 `more` 版本有关) + +按 `q` 即可退出 `more`。 + +**更多示例** + +打开 `ostechnix.txt` 文件进行交互式阅读,可以执行以下命令: + +``` +$ more ostechnix.txt +``` + +在阅读过程中,如果需要查找某个字符串,只需要像下面这样输入斜杠(`/`)之后接着输入需要查找的内容: + +``` +/linux +``` + +按 `n` 键可以跳转到下一个匹配的字符串。 + +如果需要在文件的第 `10` 行开始阅读,只需要执行: + +``` +$ more +10 file +``` + +就可以从文件的第 `10` 行开始显示文件的内容了。 + +如果你需要让 `more` 提示你按空格键来翻页,可以加上 `-d` 参数: + +``` +$ more -d ostechnix.txt +``` + +![][2] + +如上图所示,`more` 会提示你可以按空格键翻页。 + +如果需要查看所有选项以及对应的按键,可以按 `h` 键。 + +要查看 `more` 的更多详细信息,可以参考手册: + +``` +$ man more +``` + +### less 命令 + +`less` 命令也是用于打开指定的文件并进行交互式阅读,它也支持翻页和搜索。如果文件的内容太长,也会对输出进行分页,因此也可以翻页阅读。比 `more` 命令更好的一点是,`less` 支持向上翻页和向下翻页,也就是可以在整个文件中任意阅读。 + +![][4] + +在使用功能方面,`less` 比 `more` 命令具有更多优点,以下列出其中几个: + + * 支持向上翻页和向下翻页 + * 支持向上搜索和向下搜索 + * 可以跳转到文件的末尾并立即从文件的开头开始阅读 + * 在编辑器中打开指定的文件 + +**更多示例** + +打开文件: + +``` +$ less ostechnix.txt +``` + +按空格键或回车键可以向下翻页,按 `b` 键可以向上翻页。 + +如果需要向下搜索,在输入斜杠(`/`)之后接着输入需要搜索的内容: + +``` +/linux +``` + +按 `n` 键可以跳转到下一个匹配的字符串,如果需要跳转到上一个匹配的字符串,可以按 `N` 键。 + +如果需要向上搜索,在输入问号(`?`)之后接着输入需要搜索的内容: + +``` +?linux +``` + 
+同样是按 `n` 键或 `N` 键跳转到下一个或上一个匹配的字符串。 + +只需要按 `v` 键,就会将正在阅读的文件在默认编辑器中打开,然后就可以对文件进行各种编辑操作了。 + +按 `h` 键可以查看 `less` 工具的选项和对应的按键。 + +按 `q` 键可以退出阅读。 + +要查看 `less` 的更多详细信息,可以参考手册: + +``` +$ man less +``` + +### most 命令 + +`most` 同样是一个终端阅读工具,而且比 `more` 和 `less` 的功能更为丰富。`most` 支持同时打开多个文件。你可以在打开的文件之间切换、编辑当前打开的文件、迅速跳转到文件中的某一行、分屏阅读、同时锁定或滚动多个屏幕等等功能。在默认情况下,对于较长的行,`most` 不会将其截断成多行显示,而是提供了左右滚动功能以在同一行内显示。 + +**更多示例** + +打开文件: + +``` +$ most ostechnix1.txt +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-command.png) + +按 `e` 键可以编辑当前文件。 + +如果需要向下搜索,在斜杠(`/`)或 `S` 或 `f` 之后输入需要搜索的内容,按 `n` 键就可以跳转到下一个匹配的字符串。 + +![][3] + +如果需要向上搜索,在问号(`?`)之后输入需要搜索的内容,也是通过按 `n` 键跳转到下一个匹配的字符串。 + +同时打开多个文件: + +``` +$ most ostechnix1.txt ostechnix2.txt ostechnix3.txt +``` + +在打开了多个文件的状态下,可以输入 `:n` 切换到下一个文件,使用 `↑` 或 `↓` 键选择需要切换到的文件,按回车键就可以查看对应的文件。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-2.gif) + +要打开文件并跳转到某个字符串首次出现的位置(例如 linux),可以执行以下命令: + +``` +$ most file +/linux +``` + +按 `h` 键可以查看帮助。 + +**按键操作列表** + +移动: + + * 空格键或 `D` 键 – 向下滚动一屏 + * `DELETE` 键或 `U` 键 – 向上滚动一屏 + * `↓` 键 – 向下移动一行 + * `↑` 键 – 向上移动一行 + * `T` 键 – 移动到文件开头 + * `B` 键 – 移动到文件末尾 + * `>` 键或 `TAB` 键 – 向右滚动屏幕 + * `<` 键 – 向左滚动屏幕 + * `→` 键 – 向右移动一列 + * `←` 键 – 向左移动一列 + * `J` 键或 `G` 键 – 移动到某一行,例如 `10j` 可以移动到第 10 行 + * `%` 键 – 移动到文件长度某个百分比的位置 + +窗口命令: + + * `Ctrl-X 2`、`Ctrl-W 2` – 分屏 + * `Ctrl-X 1`、`Ctrl-W 1` – 只显示一个窗口 + * `O` 键、`Ctrl-X O` – 切换到另一个窗口 + * `Ctrl-X 0` – 删除窗口 + +文件内搜索: + + * `S` 键或 `f` 键或 `/` 键 – 向下搜索 + * `?` 键 – 向上搜索 + * `n` 键 – 跳转到下一个匹配的字符串 + +退出: + + * `q` 键 – 退出 `most` ,且所有打开的文件都会被关闭 + * `:N`、`:n` – 退出当前文件并查看下一个文件(使用 `↑` 键、`↓` 键选择下一个文件) + +要查看 `most` 的更多详细信息,可以参考手册: + +``` +$ man most +``` + +### 总结 + +`more` – 传统且基础的分页阅读工具,仅支持向下翻页和有限次数的向上翻页。 + +`less` – 比 `more` 功能丰富,支持向下翻页和向上翻页,也支持文本搜索。在打开大文件的时候,比 `vi` 这类文本编辑器启动得更快。 + +`most` – 在上述两个工具功能的基础上,还加入了同时打开多个文件、同时锁定或滚动多个屏幕、分屏等等大量功能。 + +以上就是我的介绍,希望能让你通过我的文章对这三个工具有一定的认识。如果想了解这篇文章以外的关于这几个工具的详细功能,请参阅它们的 `man` 手册。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-difference-between-more-less-and-most-commands/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/more-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/most-1-1.gif +[4]: https://www.ostechnix.com/wp-content/uploads/2018/11/less-command-demo.gif diff --git a/published/201811/20181109 7 reasons I love open source.md b/published/201811/20181109 7 reasons I love open source.md new file mode 100644 index 0000000000..f45dfa2e86 --- /dev/null +++ b/published/201811/20181109 7 reasons I love open source.md @@ -0,0 +1,41 @@ +我爱开源的 7 个理由 +====== + +> 成为开源社区的一员绝对是一个明智之举,原因有很多。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lovework.png?itok=gmj9tqiG) + +这就是我为什么包括晚上和周末在内花费非常多的时间待在 [GitHub][1] 上,成为开源社区的一个活跃成员。 + +我参加过各种规模的项目,从个人项目到几个人的协作项目,乃至有数百位贡献者的项目,每一个项目都让我有新的受益。 + +![](https://opensource.com/sites/default/files/uploads/open_source_contributions.gif) + +也就是说,这里有七个原因让我为开源做出贡献: + + * **它让我的技能与时俱进。** 在咨询公司的管理职位工作,有时我觉得自己与创建软件的实际过程越来越远。参与开源项目使我可以重新回到我最热爱的编程之中。也使我能够体验新技术,学习新技术和语言,并且使我不被酷酷的孩子们落下。 + * **它教我如何与人打交道。** 
与一群素未谋面的人合作开源项目在与人交往方面能够教会你很多。你很快会发现每个人有他们自己的压力,他们自己的义务,以及不同的时间表。学习如何与一群陌生人合作是一种很好的生活技能。 + * **它使我成为一个更好的沟通者。** 开源项目的维护者的时间有限。你很快就知道,要成功地贡献,你必须能够清楚、简明地表达你所做的改变、添加或修复,最重要的是,你为什么要这么做。 + * **它使我成为一个更好的开发者。** 没有什么能像成百上千的其他开发者依赖你的代码一样 —— 它敦促你更加专注软件设计、测试和文档。 + * **它使我的造物变得更好。** 可能开源背后最强大的观念是它允许你驾驭一个由有创造力、有智慧、有知识的个人组成的全球网络。我知道我自己一个人的能力是有限的,我不可能什么都知道,但与开源社区的合作有助于我改进我的创作。 + * **它告诉我小事物的价值。** 如果一个项目的文档不清楚或不完整,我会毫不犹豫地把它做得更好。一个小小的更新或修复可能只节省开发人员几分钟的时间,但是随着用户数量的增加,您一个小小的更改可能产生巨大的价值。 + * **它使我更好的营销。** 好的,这是一个奇怪的例子。有这么多伟大的开源项目在那里,感觉像一场争夺关注的拼搏。从事于开源让我学到了很多营销的价值。这不是关于讲述或创建一个华丽的网站。而是关于如何清楚地传达你所创造的,它是如何使用的,以及它带来的好处。 + +我可以继续讨论开源是如何帮助你发展伙伴、关系和朋友的,不过你应该都知道了。有许多原因让我乐于成为开源社区的一员。 + +你可能想知道这些如何用于大型金融服务机构的 IT 战略。简单来说:谁不想要一个擅长与人交流和工作,具有尖端的技能,并能够推销他们的成果的开发团队呢? + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/reasons-love-open-source + +作者:[Colin Eberhardt][a] +选题:[lujun9972][b] +译者:[ChiZelin](https://github.com/ChiZelin) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/colineberhardt +[b]: https://github.com/lujun9972 +[1]: https://github.com/ColinEberhardt/ diff --git a/published/201811/20181113 4 tips for learning Golang.md b/published/201811/20181113 4 tips for learning Golang.md new file mode 100644 index 0000000000..ed80a40ded --- /dev/null +++ b/published/201811/20181113 4 tips for learning Golang.md @@ -0,0 +1,80 @@ +学习 Golang 的 4 个技巧 +====== + +> 到达 Golang 大陆:一位资深开发者之旅。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr) + +2014 年夏天…… + +> IBM:“我们需要你弄清楚这个 Docker。” + +> 我:“没问题。” + +> IBM:“那就开始吧。” + +> 我:“好的。”(内心声音):”Docker 是用 Go 编写的。是吗?“(Google 一下)“哦,一门编程语言。我在我的岗位上已经学习了很多了。这不会太难。” + +我的大学新生编程课是使用 VAX 汇编程序教授的。在数据结构课上,我们使用 Pascal —— 在图书馆计算机中心的旧电脑上使用软盘加载。在一门更高一级的课程中,我的教授教授喜欢用 ADA 去展示所有的例子。在我们的 Sun 工作站上,我通过各种 UNIX 的实用源代码学到了一点 C。在 IBM,OS/2 源代码中我们使用了 C 和一些 x86 汇编程序;在一个与 Apple 合作的项目中我们大量使用 C++ 的面向对象功能。不久后我学到了 shell 脚本,开始是 csh,但是在 90 年代中期发现 Linux 后就转到了 Bash。在 90 年代后期,我在将 IBM 的定制的 JVM 代码中的即时(JIT)编译器移植到 Linux 时,我不得不开始学习 m4(与其说是编程语言,不如说是一种宏处理器)。 + +一晃 20 年……我从未因为学习一门新的编程语言而焦灼。但是 [Go][1] 让我感觉有些不同。我打算公开贡献,上传到 GitHub,让任何有兴趣的人都可以看到!作为一个 40 多岁的资深开发者的 Go 新手,我不想成为一个笑话。我们都知道程序员的骄傲,不想丢人,不论你的经验水平如何。 + +我早期的调研显示,Go 似乎比某些语言更 “地道”。它不仅仅是让代码可以编译;也需要让代码可以 “Go Go Go”。 + +现在,我的个人的 Go 之旅四年间有了几百个拉取请求(PR),我不是致力于成为一个专家,但是现在我觉得贡献和编写代码比我在 2014 年的时候更舒服了。所以,你该怎么教一个老人新的技能或者一门编程语言呢?以下是我自己在前往 Golang 大陆之旅的四个步骤。 + +### 1、不要跳过基础 + +虽然你可以通过复制代码来进行你早期的学习(谁还有时间阅读手册!?),Go 有一个非常易读的 [语言规范][2],它写的很易于理解,即便你在语言或者编译理论方面没有取得硕士学位。鉴于 Go 的 **参数:类型** 顺序的特有习惯,以及一些有趣的语言功能,例如通道和 go 协程,搞定这些新概念是非常重要的是事情。阅读这个附属的文档 [高效 Go 编程][3],这是 Golang 创造者提供的另一个重要资源,它将为你提供有效和正确使用语言的准备。 + +### 2、从最好的中学习 + +有许多宝贵的资源可供挖掘,可以将你的 Go 知识提升到下一个等级。最近在 [GopherCon][4] 上的所有讲演都可以在网上找到,如这个 [GopherCon US 2018][5] 的详尽列表。这些讲演的专业知识和技术水平各不相同,但是你可以通过它们轻松地找到一些你所不了解的事情。[Francesc Campoy][6] 创建了一个名叫 [JustForFunc][7] 的 Go 编程视频系列,其不断增多的剧集可以用来拓宽你的 Go 知识和理解。直接搜索 “Golang" 可以为那些想要了解更多信息的人们展示许多其它视频和在线资源。 + +想要看代码?在 GitHub 上许多受欢迎的云原生项目都是用 Go 写的:[Docker/Moby][8]、[Kubernetes][9]、[Istio][10]、[containerd][11]、[CoreDNS][12],以及许多其它的。语言纯粹主义者可能会认为一些项目比另外一些更地道,但这些都是很好的起点,可以看到在高度活跃的项目的大型代码库中使用 Go 的程度。 + +### 3、使用优秀的语言工具 + +你会很快了解到 [gofmt][13] 的宝贵之处。Go 最漂亮的一个地方就在于没有关于每个项目代码格式的争论 —— **gofmt** 内置在语言的运行环境中,并且根据一系列可靠的、易于理解的语言规则对 Go 代码进行格式化。我不知道有哪个基于 Golang 的项目会在持续集成中不坚持使用 **gofmt** 检查拉取请求。 + +除了直接构建于运行环境和 SDK 中的一系列有价值的工具之外,我强烈建议使用一个对 
Golang 的特性有良好支持的编辑器或者 IDE。由于我经常在命令行中进行工作,我依赖于 Vim 加上强大的 [vim-go][14] 插件。我也喜欢微软提供的 [VS Code][15],特别是它的 [Go 语言][16] 插件。 + +想要一个调试器?[Delve][17] 项目在不断的改进和成熟,它是在 Go 二进制文件上进行 [gdb][18] 式调试的强有力的竞争者。 + +### 4、写一些代码 + +你要是不开始尝试使用 Go 写代码,你永远不知道它有什么好的地方。找一个有 “需要帮助” 问题标签的项目,然后开始贡献代码。如果你已经使用了一个用 Go 编写的开源项目,找出它是否有一些可以用初学者方式解决的 Bug,然后开始你的第一个拉取请求。与生活中的大多数事情一样,实践出真知,所以开始吧。 + +事实证明,你可以教会一个资深的老开发者一门新的技能甚至编程语言。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/learning-golang + +作者:[Phill Estes][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/estesp +[b]: https://github.com/lujun9972 +[1]: https://golang.org/ +[2]: https://golang.org/ref/spec +[3]: https://golang.org/doc/effective_go.html +[4]: https://www.gophercon.com/ +[5]: https://tqdev.com/2018-gophercon-2018-videos-online +[6]: https://twitter.com/francesc +[7]: https://www.youtube.com/channel/UC_BzFbxG2za3bp5NRRRXJSw +[8]: https://github.com/moby/moby +[9]: https://github.com/kubernetes/kubernetes +[10]: https://github.com/istio/istio +[11]: https://github.com/containerd/containerd +[12]: https://github.com/coredns/coredns +[13]: https://blog.golang.org/go-fmt-your-code +[14]: https://github.com/fatih/vim-go +[15]: https://code.visualstudio.com/ +[16]: https://code.visualstudio.com/docs/languages/go +[17]: https://github.com/derekparker/delve +[18]: https://www.gnu.org/software/gdb/ diff --git a/published/201811/20181113 The alias And unalias Commands Explained With Examples.md b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md new file mode 100644 index 0000000000..1448918a1e --- /dev/null +++ b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md @@ -0,0 +1,156 @@ +举例说明 alias 和 unalias 命令 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias-command-720x340.png) + +如果不是一个命令行重度用户的话,过了一段时间之后,你就可能已经忘记了这些复杂且冗长的 Linux 命令了。当然,有很多方法可以让你 [回想起遗忘的命令][1]。你可以简单的 [保存常用的命令][2] 然后按需使用。也可以在终端里 [标记重要的命令][3],然后在任何时候你想要的时间使用它们。而且,Linux 有一个内建命令 `history` 可以帮助你记忆这些命令。另外一个记住这些如此长的命令的简便方式就是为这些命令创建一个别名。你可以为任何经常重复调用的常用命令创建别名,而不仅仅是长命令。通过这种方法,你不必再过多地记忆这些命令。这篇文章中,我们将会在 Linux 环境下举例说明 `alias` 和 `unalias` 命令。 + +### alias 命令 + +`alias` 使用一个用户自定义的字符串来代替一个或者一串命令(包括多个选项、参数)。这个字符串可以是一个简单的名字或者缩写,不管这个命令原来多么复杂。`alias` 命令已经预装在 shell(包括 BASH、Csh、Ksh 和 Zsh 等) 当中。 + +`alias` 的通用语法是: + +``` +alias [alias-name[=string]...] 
+``` + +接下来看几个例子。 + +#### 列出别名 + +可能在你的系统中已经设置了一些别名。有些应用在你安装它们的时候可能已经自动创建了别名。要查看已经存在的别名,运行: + +``` +$ alias +``` + +或者, + +``` +$ alias -p +``` + +在我的 Arch Linux 系统中已经设置了下面这些别名。 + +``` +alias betty='/home/sk/betty/main.rb' +alias ls='ls --color=auto' +alias pbcopy='xclip -selection clipboard' +alias pbpaste='xclip -selection clipboard -o' +alias update='newsbeuter -r && sudo pacman -Syu' +``` + +#### 创建一个新的别名 + +像我之前说的,你不必去记忆这些又臭又长的命令。你甚至不必一遍一遍的运行长命令。只需要为这些命令创建一个简单易懂的别名,然后在任何你想使用的时候运行这些别名就可以了。这种方式会让你爱上命令行。 + +``` +$ du -h --max-depth=1 | sort -hr +``` + +这个命令将会查找当前工作目录下的各个子目录占用的磁盘大小,并按照从大到小的顺序进行排序。这个命令有点长。我们可以像下面这样轻易地为其创建一个 别名: + +``` +$ alias du='du -h --max-depth=1 | sort -hr' +``` + +这里的 `du` 就是这条命令的别名。这个别名可以被设置为任何名字,主要便于记忆和区别。 + +在创建一个别名的时候,使用单引号或者双引号都是可以的。这两种方法最后的结果没有任何区别。 + +现在你可以运行这个别名(例如我们这个例子中的 `du` )。它和上面的原命令将会产生相同的结果。 + +这个别名仅限于当前 shell 会话中。一旦你退出了当前 shell 会话,别名也就失效了。为了让这些别名长久有效,你需要把它们添加到你 shell 的配置文件当中。 + +BASH,编辑 `~/.bashrc` 文件: + +``` +$ nano ~/.bashrc +``` + +一行添加一个别名: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias.png) + +保存并退出这个文件。然后运行以下命令更新修改: + +``` +$ source ~/.bashrc +``` + +现在,这些别名在所有会话中都可以永久使用了。 + +ZSH,你需要添加这些别名到 `~/.zshrc`文件中。Fish,跟上面的类似,添加这些别名到 `~/.config/fish/config.fish` 文件中。 + +#### 查看某个特定的命令别名 + +像我上面提到的,你可以使用 `alias` 命令列出你系统中所有的别名。如果你想查看跟给定的别名有关的命令,例如 `du`,只需要运行: + +``` +$ alias du +alias du='du -h --max-depth=1 | sort -hr' +``` + +像你看到的那样,上面的命令可以显示与单词 `du` 有关的命令。 + +关于 `alias` 命令更多的细节,参阅 man 手册页: + +``` +$ man alias +``` + +### unalias 命令 + +跟它的名字说的一样,`unalias` 命令可以很轻松地从你的系统当中移除别名。`unalias` 命令的通用语法是: + +``` +unalias +``` + +要移除命令的别名,像我们之前创建的 `du`,只需要运行: + +``` +$ unalias du +``` + +`unalias` 命令不仅会从当前会话中移除别名,也会从你的 shell 配置文件中永久地移除别名。 + +还有一种移除别名的方法,是创建具有相同名称的新别名。 + +要从当前会话中移除所有的别名,使用 `-a` 选项: + +``` +$ unalias -a +``` + +更多细节,参阅 man 手册页。 + +``` +$ man unalias +``` + +如果你经常一遍又一遍的运行这些繁杂又冗长的命令,给它们创建别名可以节省你的时间。现在是你为常用命令创建别名的时候了。 + +这就是所有的内容了。希望可以帮到你。还有更多的干货即将到来,敬请期待! + +祝近祺! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/easily-recall-forgotten-linux-commands/ +[2]: https://www.ostechnix.com/save-commands-terminal-use-demand/ +[3]: https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ diff --git a/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md new file mode 100644 index 0000000000..2218dfcd2c --- /dev/null +++ b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md @@ -0,0 +1,55 @@ +GPL 合作承诺的发展历程 +====== + +> GPL 合作承诺GPL Cooperation Commitment消除了开发者对许可证失效的顾虑,从而达到促进技术创新的目的。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Law_balance_open_source.png?itok=5c4JhuEY) + +假如能免于顾虑,技术创新和发展将会让世界发生天翻地覆的改变。[GPL 合作承诺][1]GPL Cooperation Commitment就这样应运而生,只为通过公平、一致、可预测的许可证来让科技创新无后顾之忧。 + +去年,我曾经写过一篇文章,讨论了许可证对开源软件下游用户的影响。在进行研究的时候,我就发现许可证的约束力并不强,而且很多情况下是不可预测的。因此,我在文章中提出了一个能使开源许可证具有一致性和可预测性的潜在解决方案。但我只考虑到了诸如通过法律系统立法的“传统”方法。 + +2017 年 11 月,RedHat、IBM、Google 和 Facebook 提出了这种我从未考虑过的非传统的解决方案:GPL 合作承诺。GPL 合作承诺规定了 GPL 公平一致执行的方式。我认为,GPL 合作承诺之所以有这么深刻的意义,有以下两个原因:一是许可证的公平性和一致性对于开源社区的发展来说至关重要,二是法律对不可预测性并不容忍。 + +### 了解 GPL + +要了解 GPL 合作承诺,首先要了解什么是 GPL。GPL 是 [GNU 通用许可证][2]GNU General Public License的缩写,它是一个公共版权的开源许可证,这就意味着开源软件的分发者必须向下游用户公开源代码。GPL 还禁止对下游的使用作出限制,要求个人用户不得拒绝他人对开源软件的使用自由、研究自由、共享自由和改进自由。GPL 规定,只要下游用户满足了许可证的要求和条件,就可以使用该许可证。如果被许可人出现了不符合许可证的情况,则视为违规。 + +按照第二版 GPL(GPLv2)的描述,许可证会在任何违规的情况下自动终止,这就导致了部分开发者对 GPL 有所抗拒。而在第三版 GPL(GPLv3)中则引入了“[治愈条款][3]cure provision”,这一条款规定,被许可人可以在 30 天内对违反 GPL 的行为进行改正,如果在这个缓冲期内改正完成,许可证就不会被终止。 + +这一规定消除了许可证被无故终止的顾虑,从而让软件的开发者和用户专注于开发和创新。 + +### GPL 合作承诺做了什么 + +GPL 合作承诺将 GPLv3 的治愈条款应用于使用 GPLv2 的软件上,让使用 GPLv2 许可证的开发者避免许可证无故终止的窘境,并与 GPLv3 许可证保持一致。 + +很多软件开发者都希望正确合规地做好一件事情,但有时候却不了解具体的实施细节。因此,GPL 合作承诺的重要性就在于能够对软件开发者们做出一些引导,让他们避免因一些简单的错误导致许可证违规终止。 + +Linux 基金会技术顾问委员会在 2017 年宣布,Linux 内核项目将会[采用 GPLv3 的治愈条款][4]。在 GPL 合作承诺的推动下,很多大型科技公司和个人开发者都做出了相同的承诺,会将该条款扩展应用于他们采用 GPLv2(或 LGPLv2.1)许可证的所有软件,而不仅仅是对 Linux 内核的贡献。 + +GPL 合作承诺的广泛采用将会对开源社区产生非常积极的影响。如果更多的公司和个人开始采用 GPL 合作承诺,就能让大量正在使用 GPLv2 或 LGPLv2.1 许可证的软件以更公平和更可预测的形式履行许可证中的条款。 + +截至 2018 年 11 月,包括 IBM、Google、亚马逊、微软、腾讯、英特尔、RedHat 在内的 40 余家行业巨头公司都已经[签署了 GPL 合作承诺][5],以期为开源社区创立公平的标准以及提供可预测的执行力。GPL 合作承诺是开源社区齐心协力引领开源未来发展方向的一个成功例子。 + +GPL 合作承诺能够让下游用户了解到开发者对他们的尊重,同时也表示了开发者使用了 GPLv2 许可证的代码是安全的。如果你想查阅更多信息,包括如何将自己的名字添加到 GPL 合作承诺中,可以访问 [GPL 合作承诺的网站][6]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/gpl-cooperation-commitment + +作者:[Brooke Driver][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bdriver +[b]: https://github.com/lujun9972 +[1]: https://gplcc.github.io/gplcc/ +[2]: https://www.gnu.org/licenses/licenses.en.html +[3]: https://opensource.com/article/18/6/gplv3-anniversary +[4]: 
https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html +[5]: https://gplcc.github.io/gplcc/Company/Company-List.html +[6]: http://gplcc.github.io/gplcc + diff --git a/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md new file mode 100644 index 0000000000..99a92d917b --- /dev/null +++ b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md @@ -0,0 +1,79 @@ +ProtectedText:一个免费的在线加密笔记 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/protected-text-720x340.png) + +记录笔记是我们每个人必备的重要技能,它可以帮助我们把自己听到、读到、学到的内容长期地保留下来,也有很多的应用和工具都能让我们更好地记录笔记。下面我要介绍一个叫做 **ProtectedText** 的应用,这是一个可以将你的笔记在线上保存起来的免费的加密笔记。它是一个免费的 web 服务,在上面记录文本以后,它将会对文本进行加密,只需要一台支持连接到互联网并且拥有 web 浏览器的设备,就可以访问到记录的内容。 + +ProtectedText 不会向你询问任何个人信息,也不会保存任何密码,没有广告,没有 Cookies,更没有用户跟踪和注册流程。除了拥有密码能够解密文本的人,任何人都无法查看到笔记的内容。而且,使用前不需要在网站上注册账号,写完笔记之后,直接关闭浏览器,你的笔记也就保存好了。 + +### 在加密笔记本上记录笔记 + +访问 这个链接,就可以打开 ProtectedText 页面了(LCTT 译注:如果访问不了,你知道的)。这个时候你将进入网站主页,接下来需要在页面上的输入框输入一个你想用的名称,或者在地址栏后面直接加上想用的名称。这个名称是一个自定义的名称(例如 ),是你查看自己保存的笔记的专有入口。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-1.png) + +如果你选用的名称还没有被占用,你就会看到下图中的提示信息。点击 “Create” 键就可以创建你的个人笔记页了。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-2.png) + +至此你已经创建好了你自己的笔记页面,可以开始记录笔记了。目前每个笔记页的最大容量是每页 750000+ 个字符。 + +ProtectedText 使用 AES 算法对你的笔记内容进行加密和解密,而计算散列则使用了 SHA512 算法。 + +笔记记录完毕以后,点击顶部的 “Save” 键保存。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-3.png) + +按下保存键之后,ProtectedText 会提示你输入密码以加密你的笔记内容。按照它的要求输入两次密码,然后点击 “Save” 键。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-4.png) + +尽管 ProtectedText 对你使用的密码没有太多要求,但毕竟密码总是一寸长一寸强,所以还是最好使用长且复杂的密码(用到数字和特殊字符)以避免暴力破解。由于 ProtectedText 不会保存你的密码,一旦密码丢失,密码和笔记内容就都找不回来了。因此,请牢记你的密码,或者使用诸如 [Buttercup][3]、[KeeWeb][4] 这样的密码管理器来存储你的密码。 + +在使用其它设备时,可以通过访问之前创建的 URL 就可以访问你的笔记了。届时会出现如下的提示信息,只需要输入正确的密码,就可以查看和编辑你的笔记。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-5.png) + +一般情况下,只有知道密码的人才能正常访问笔记的内容。如果你希望将自己的笔记公开,只需要以 的形式访问就可以了,ProtectedText 将会自动使用 `yourPassword` 字符串解密你的笔记。 + +ProtectedText 还有配套的 [Android 应用][6] 可以让你在移动设备上进行同步笔记、离线工作、备份笔记、锁定/解锁笔记等等操作。 + +**优点** + + * 简单、易用、快速、免费 + * ProtectedText.com 的客户端代码可以在[这里][7]免费获取,如果你想了解它的底层实现,可以自行学习它的源代码 + * 存储的内容没有到期时间,只要你愿意,笔记内容可以一直保存在服务器上 + * 可以让你的数据限制为私有或公开开放 + +**缺点** + + * 尽管客户端代码是公开的,但服务端代码并没有公开,因此你无法自行搭建一个类似的服务。如果你不信任这个网站,请不要使用。 + * 由于网站不存储你的任何个人信息,包括你的密码,因此如果你丢失了密码,数据将永远无法恢复。网站方还声称他们并不清楚谁拥有了哪些数据,所以一定要牢记密码。 + + +如果你想通过一种简单的方式将笔记保存到线上,并且需要在不需要安装任何工具的情况下访问,那么 ProtectedText 会是一个好的选择。如果你还知道其它类似的应用程序,欢迎在评论区留言! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/protectedtext-a-free-encrypted-notepad-to-save-your-notes-online/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-4.png +[3]: https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/ +[4]: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ +[5]: http://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-5.png +[6]: https://play.google.com/store/apps/details?id=com.protectedtext.android +[7]: https://www.protectedtext.com/js/main.js + diff --git a/published/201811/20181115 How to install a device driver on Linux.md b/published/201811/20181115 How to install a device driver on Linux.md new file mode 100644 index 0000000000..bd1c3fd353 --- /dev/null +++ b/published/201811/20181115 How to install a device driver on Linux.md @@ -0,0 +1,144 @@ +如何在 Linux 上安装设备驱动程序 +====== + +> 学习 Linux 设备驱动如何工作,并知道如何使用它们。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc) + +对于一个熟悉 Windows 或者 MacOS 的人,想要切换到 Linux,它们都会面临一个艰巨的问题就是怎么安装和配置设备驱动。这是可以理解的,因为 Windows 和 MacOS 都有一套机制把这个过程做得非常的友好。比如说,当你插入一个新的硬件设备, Windows 能够自动检测并会弹出一个窗口询问你是否要继续驱动程序的安装。你也可以从网络上下载驱动程序,仅仅需要双击解压或者是通过设备管理器导入驱动程序即可。 + +而这在 Linux 操作系统上并非这么简单。第一个原因是, Linux 是一个开源的操作系统,所以有 [数百种 Linux 发行版的变体][1]。也就是说不可能做一个指南来适应所有的 Linux 发行版。因为每种 Linux 安装驱动程序的过程都有差异。 + +第二,大多数默认的 Linux 驱动程序也都是开源的,并被集成到了系统中,这使得安装一些并未包含的驱动程序变得非常复杂,即使已经可以检测大多数的硬件设备。第三,不同发行版的许可也有差异。例如,[Fedora 禁止事项][2] 禁止包含专有的、受法律保护,或者是违反美国法律的驱动程序。而 Ubuntu 则让用户[避免使用受法律保护或闭源的硬件设备][3]。 + +为了更好的学习 Linux 驱动程序是如何工作的,我建议阅读 《Linux 设备驱动程序》一书中的 [设备驱动程序简介][4]。 + +### 两种方式来寻找驱动程序 + +#### 1、 用户界面 + +如果是一个刚从 Windows 或 MacOS 转过来的 Linux 新手,那你会很高兴知道 Linux 也提供了一个通过向导式的程序来查看驱动程序是否可用的方法。 Ubuntu 提供了一个 [附加驱动程序][5] 选项。其它的 Linux 发行版也提供了帮助程序,像 [GNOME 的包管理器][6],你可以使用它来检查驱动程序是否可用。 + +#### 2、 命令行 + +如果你通过漂亮的用户界面没有找到驱动程序,那又该怎么办呢?或许你只能通过没有任何图形界面的 shell?甚至你可以使用控制台来展现你的技能。你有两个选择: + +1. **通过一个仓库** + + 这和 MacOS 中的 [homebrew][7] 命令行很像。通过使用 `yum`、 `dnf`、`apt-get` 等等。你基本可以通过添加仓库,并更新包缓存。 +2. 
**下载、编译,然后自己构建** + + 这通常包括直接从网络,或通过 `wget` 命令下载源码包,然后运行配置和编译、安装。这超出了本文的范围,但是你可以在网络上找到很多在线指南,如果你选择的是这条路的话。 + +### 检查是否已经安装了这个驱动程序 + +在进一步学习安装 Linux 驱动程序之前,让我们来学习几条命令,用来检测驱动程序是否已经在你的系统上可用。 + +[lspci][8] 命令显示了系统上所有 PCI 总线和设备驱动程序的详细信息。 + +``` +$ lscpci +``` + +或者使用 `grep`: + +``` +$ lscpci | grep SOME_DRIVER_KEYWORD +``` + +例如,你可以使用 `lspci | grep SAMSUNG` 命令,如果你想知道是否安装过三星的驱动。 + +[dmesg][9] 命令显示了所有内核识别的驱动程序。 + +``` +$ dmesg +``` + +或配合 `grep` 使用: + +``` +$ dmesg | grep SOME_DRIVER_KEYWORD +``` + +任何识别到的驱动程序都会显示在结果中。 + +如果通过 `dmesg` 或者 `lscpi` 命令没有识别到任何驱动程序,尝试下这两个命令,看看驱动程序至少是否加载到硬盘。 + +``` +$ /sbin/lsmod +``` + +和 + +``` +$ find /lib/modules +``` + +技巧:和 `lspci` 或 `dmesg` 一样,通过在上面的命令后面加上 `| grep` 来过滤结果。 + +如果一个驱动程序已经被识别到了,但是通过 `lscpi` 或 `dmesg` 并没有找到,这意味着驱动程序已经存在于硬盘上,但是并没有加载到内核中,这种情况,你可以通过 `modprobe` 命令来加载这个模块。 + +``` +$ sudo modprobe MODULE_NAME +``` + +使用 `sudo` 来运行这个命令,因为这个模块要使用 root 权限来安装。 + +### 添加仓库并安装 + +可以通过 `yum`、`dnf` 和 `apt-get` 几种不同的方式来添加一个仓库;一个个介绍完它们并不在本文的范围。简单一点来说,这个示例将会使用 `apt-get` ,但是这个命令和其它的几个都是很类似的。 + +#### 1、删除存在的仓库,如果它存在 + +``` +$ sudo apt-get purge NAME_OF_DRIVER* +``` + +其中 `NAME_OF_DRIVER` 是你的驱动程序的可能的名称。你还可以将模式匹配加到正则表达式中来进一步过滤。 + +#### 2、将仓库加入到仓库表中,这应该在驱动程序指南中有指定 + +``` +$ sudo add-apt-repository REPOLIST_OF_DRIVER +``` + +其中 `REPOLIST_OF_DRIVER` 应该从驱动文档中有指定(例如:`epel-list`)。 + +#### 3、更新仓库列表 + +``` +$ sudo apt-get update +``` + +#### 4、安装驱动程序 + +``` +$ sudo apt-get install NAME_OF_DRIVER +``` + +#### 5、检查安装状态 + +像上面说的一样,通过 `lscpi` 命令来检查驱动程序是否已经安装成功。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/how-install-device-driver-linux + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brson +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/List_of_Linux_distributions +[2]: https://fedoraproject.org/wiki/Forbidden_items?rd=ForbiddenItems +[3]: https://www.ubuntu.com/licensing +[4]: https://www.xml.com/ldd/chapter/book/ch01.html +[5]: https://askubuntu.com/questions/47506/how-do-i-install-additional-drivers +[6]: https://help.gnome.org/users/gnome-packagekit/stable/add-remove.html.en +[7]: https://brew.sh/ +[8]: https://en.wikipedia.org/wiki/Lspci +[9]: https://en.wikipedia.org/wiki/Dmesg diff --git a/published/201811/20181116 Akash Angle- How do you Fedora.md b/published/201811/20181116 Akash Angle- How do you Fedora.md new file mode 100644 index 0000000000..ccd764f8aa --- /dev/null +++ b/published/201811/20181116 Akash Angle- How do you Fedora.md @@ -0,0 +1,62 @@ +Akash Angle:你如何使用 Fedora? +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-816x345.jpg) + +我们最近采访了Akash Angle 来了解他如何使用 Fedora。这是 Fedora Magazine 上 Fedora [系列的一部分][1]。该系列介绍 Fedora 用户以及他们如何使用 Fedora 完成工作。请通过[反馈表单][2]与我们联系表达你对成为受访者的兴趣。 + +### Akash Angle 是谁? + +Akash 是一位不久前抛弃 Windows 的 Linux 用户。作为一名过去 9 年的狂热 Fedora 用户,他已经尝试了几乎所有的 Fedora 定制版和桌面环境来完成他的日常任务。是一位校友给他介绍了 Fedora。 + +### 使用什么硬件? + +Akash 在工作时使用联想 B490。它配备了英特尔酷睿 i3-3310 处理器和 240GB 金士顿 SSD。Akash 说:“这台笔记本电脑非常适合一些日常任务,如上网、写博客,以及一些照片编辑和视频编辑。虽然不是专业的笔记本电脑,而且规格并不是那么高端,但它完美地完成了工作。“ + +他使用一个入门级的罗技无线鼠标,并希望能有一个机械键盘。他的 PC 是一台定制桌面电脑,拥有最新的第 7 代 Intel i5 7400 处理器和 8GB Corsair Vengeance 内存。 + +![][3] + +### 使用什么软件? 
+ +Akash 是 GNOME 3 桌面环境的粉丝。他喜欢该操作系统为完成基本任务而加入的华丽功能。 + +出于实际原因,他更喜欢全新安来升级到最新 Fedora 版本。他认为 Fedora 29 可以说是最好的工作站。Akash 说这种说法得到了各种科技传播网站和开源新闻网站评论的支持。 + +为了播放视频,他的首选是打包为 [Flatpak][4] 的 VLC 视频播放器 ,它提供了最新的稳定版本。当 Akash 想截图时,他的终极工具是 [Shutter,Magazine 曾介绍过][5]。对于图形处理,GIMP 是他不能离开的工具。 + +Google Chrome 稳定版和开发版是他最常用的网络浏览器。他还使用 Chromium 和 Firefox 的默认版本,有时甚至会使用 Opera。 + +由于他是一名资深用户,所以 Akash 其余时候都使用终端。GNOME Terminal 是他使用的一个终端。 + +#### 最喜欢的壁纸 + +他最喜欢的壁纸之一是下面最初来自 Fedora 16 的壁纸: + +![][6] + +这是他目前在 Fedora 29 工作站上使用的壁纸之一: + +![][7] + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/akash-angle-how-do-you-fedora/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/tag/how-do-you-fedora/ +[2]: https://fedoramagazine.org/submit-an-idea-or-tip/ +[3]: https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-desktop-300x259.png +[4]: https://fedoramagazine.org/getting-started-flatpak/ +[5]: https://fedoramagazine.org/screenshot-everything-shutter-fedora/ +[6]: https://fedoramagazine.org/wp-content/uploads/2018/11/Fedora-16-300x188.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/11/wallpaper2you_72588-300x169.jpg diff --git a/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md new file mode 100644 index 0000000000..333beaad19 --- /dev/null +++ b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md @@ -0,0 +1,55 @@ +如何在 SUSE 12 Linux 中进入单用户模式? 
+====== + +> 一篇了解如何在 SUSE 12 Linux 服务器中进入单用户模式的简短文章。 + +![How to enter single user mode in SUSE 12 Linux][1] + +在这篇简短的文章中,我们将向你介绍在 SUSE 12 Linux 中进入单用户模式的步骤。在排除系统主要问题时,单用户模式始终是首选。单用户模式禁用网络并且没有其他用户登录,你可以排除许多多用户系统的情况,可以帮助你快速排除故障。单用户模式最常见的一种用处是[重置忘记的 root 密码][2]。 + +### 1、暂停启动过程 + +首先,你需要拥有机器的控制台才能进入单用户模式。如果它是虚拟机那就是虚拟机控制台,如果它是物理机那么你需要连接它的 iLO/串口控制台。重启系统并在 GRUB 启动菜单中按任意键停止内核的自动启动。 + +![Kernel selection menu at boot in SUSE 12][3] + +### 2、编辑内核的启动选项 + +进入上面的页面后,在所选内核(通常是你首选的最新内核)上按 `e` 更新其启动选项。你会看到下面的页面。 + +![grub2 edits in SUSE 12][4] + +现在,向下滚动到内核引导行,并在行尾添加 `init=/bin/bash`,如下所示。 + +![Edit to boot in single user shell][5] + +### 3、引导编辑后的内核 + +现在按 `Ctrl-x` 或 `F10` 来启动这个编辑过的内核。内核将以单用户模式启动,你将看到 `#` 号提示符,即有服务器的 root 访问权限。此时,根文件系统以只读模式挂载。因此,你对系统所做的任何更改都不会被保存。 + +运行以下命令以将根文件系统重新挂载为可重写入的。 + +``` +kerneltalks:/ # mount -o remount,rw / +``` + +这就完成了!继续在单用户模式中做你必要的事情吧。完成后不要忘了重启服务器引导到普通多用户模式。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/howto/how-to-enter-single-user-mode-in-suse-12-linux/ + +作者:[kerneltalks][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://kerneltalks.com +[b]: https://github.com/lujun9972 +[1]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/How-to-enter-single-user-mode-in-SUSE-12-Linux.png +[2]: https://kerneltalks.com/linux/recover-forgotten-root-password-rhel/ +[3]: https://a1.kerneltalks.com/wp-content/uploads/2018/11/Grub-menu-in-SUSE-12.png +[4]: https://a3.kerneltalks.com/wp-content/uploads/2018/11/grub2-editor.png +[5]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/Edit-to-boot-in-single-user-shell.png diff --git a/published/201811/20181119 How To Customize Bash Prompt In Linux.md b/published/201811/20181119 How To Customize Bash Prompt In Linux.md new file mode 100644 index 0000000000..190fdb914b --- /dev/null +++ b/published/201811/20181119 How To Customize Bash Prompt In Linux.md @@ -0,0 +1,313 @@ +在 Linux 上自定义 bash 命令提示符 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/BASH-720x340.jpg) + +众所周知,**bash**(the **B**ourne-**A**gain **Sh**ell)是目前绝大多数 Linux 发行版使用的默认 shell。本文将会介绍如何通过添加颜色和样式来自定义 bash 命令提示符的显示。尽管很多插件或工具都可以很轻易地满足这一需求,但我们也可以不使用插件和工具,自己手动自定义一些基本的显示方式,例如添加或者修改某些元素、更改前景色、更改背景色等等。 + +### 在 Linux 中自定义 bash 命令提示符 + +在 bash 中,我们可以通过更改 `$PS1` 环境变量的值来自定义 bash 命令提示符。 + +一般情况下,bash 命令提示符会是以下这样的形式: + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal.png) + +在上图这种默认显示形式当中,“sk” 是我的用户名,而 “ubuntuserver” 是我的主机名。 + +只要插入一些以反斜杠开头的特殊转义字符串,就可以按照你的喜好修改命令提示符了。下面我来举几个例子。 + +在开始之前,我强烈建议你预先备份 `~/.bashrc` 文件。 + +``` +$ cp ~/.bashrc ~/.bashrc.bak +``` + +#### 更改 bash 命令提示符中的 username@hostname 部分 + +如上所示,bash 命令提示符一般都带有 “username@hostname” 部分,这个部分是可以修改的。 + +只需要编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件的最后添加一行: + +``` +PS1="ostechnix> " +``` + +将上面的 “ostechnix” 替换为任意一个你想使用的单词,然后按 `ESC` 并输入 `:wq` 保存、退出文件。 + +执行以下命令使刚才的修改生效: + +``` +$ source ~/.bashrc +``` + +你就可以看见 bash 命令提示符中出现刚才添加的 “ostechnix” 了。 + +![][3] + +再来看看另一个例子,比如将 “username@hostname” 替换为 “Hello@welcome>”。 + +同样是像刚才那样修改 `~/.bashrc` 文件。 + +``` +export PS1="Hello@welcome> " +``` + +然后执行 `source ~/.bashrc` 让修改结果立即生效。 + +以下是我在 Ubuntu 18.04 LTS 上修改后的效果。 + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-1.png) + +#### 仅显示用户名 + +如果需要仅显示用户名,只需要在 `~/.bashrc` 文件中加入以下这一行。 + +``` +export PS1="\u " +``` + +这里的 `\u` 就是一个转义字符串。 + 
+下面提供了一些可以添加到 `$PS1` 环境变量中的用以改变 bash 命令提示符样式的转义字符串。每次修改之后,都需要执行 `source ~/.bashrc` 命令才能立即生效。 + +#### 显示用户名和主机名 + +``` +export PS1="\u\h " +``` + +命令提示符会这样显示: + +``` +skubuntuserver +``` + +#### 显示用户名和完全限定域名 + +``` +export PS1="\u\H " +``` + +#### 在用户名和主机名之间显示其它字符 + +如果你还需要在用户名和主机名之间显示其它字符(例如 `@`),可以使用以下格式: + +``` +export PS1="\u@\h " +``` + +命令提示符会这样显示: + +``` +sk@ubuntuserver +``` + +#### 显示用户名、主机名,并在末尾添加 $ 符号 + +``` +export PS1="\u@\h\\$ " +``` + +#### 综合以上两种显示方式 + +``` +export PS1="\u@\h> " +``` + +命令提示符最终会这样显示: + +``` +sk@ubuntuserver> +``` + +相似地,还可以添加其它特殊字符,例如冒号、分号、星号、下划线、空格等等。 + +#### 显示用户名、主机名、shell 名称 + +``` +export PS1="\u@\h>\s " +``` + +#### 显示用户名、主机名、shell 名称以及 shell 版本 + +``` +export PS1="\u@\h>\s\v " +``` + +bash 命令提示符显示样式: + +![][4] + +#### 显示用户名、主机名、当前目录 + +``` +export PS1="\u@\h\w " +``` + +如果当前目录是 `$HOME` ,会以一个波浪线(`~`)显示。 + +#### 在 bash 命令提示符中显示日期 + +除了用户名和主机名,如果还想在 bash 命令提示符中显示日期,可以在 `~/.bashrc` 文件中添加以下内容: + +``` +export PS1="\u@\h>\d " +``` + +![][5] + +#### 在 bash 命令提示符中显示日期及 12 小时制时间 + +``` +export PS1="\u@\h>\d\@ " +``` + +#### 显示日期及 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\T " +``` + +#### 显示日期及 24 小时制时间 + +``` +export PS1="\u@\h>\d\A " +``` + +#### 显示日期及 24 小时制 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\t " +``` + +以上是一些常见的可以改变 bash 命令提示符的转义字符串。除此以外的其它转义字符串,可以在 bash 的 man 手册 PROMPTING 章节中查阅。 + +你也可以随时执行以下命令查看当前的命令提示符样式。 + +``` +$ echo $PS1 +``` + +#### 在 bash 命令提示符中去掉 username@hostname 部分 + +如果我不想做任何调整,直接把 username@hostname 部分整个去掉可以吗?答案是肯定的。 + +如果你是一个技术方面的博主,你有可能会需要在网站或者博客中上传自己的 Linux 终端截图。或许你的用户名和主机名太拉风、太另类,不想让别人看到,在这种情况下,你就需要隐藏命令提示符中的 “username@hostname” 部分。 + +如果你不想暴露自己的用户名和主机名,只需要按照以下步骤操作。 + +编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件末尾添加这一行: + +``` +PS1="\W> " +``` + +输入 `:wq` 保存并关闭文件。 + +执行以下命令让修改立即生效。 + +``` +$ source ~/.bashrc +``` + +现在看一下你的终端,“username@hostname” 部分已经消失了,只保留了一个 `~>` 标记。 + +![][6] + +如果你想要尽可能简单的操作,又不想弄乱你的 `~/.bashrc` 文件,最好的办法就是在系统中创建另一个用户(例如 “user@example”、“admin@demo”)。用带有这样的命令提示符的用户去截图或者录屏,就不需要顾虑自己的用户名或主机名被别人看见了。 + +**警告:**在某些情况下,这种做法并不推荐。例如像 zsh 这种 shell 会继承当前 shell 的设置,这个时候可能会出现一些意想不到的问题。这个技巧只用于隐藏命令提示符中的 “username@hostname” 部分,仅此而已,如果把这个技巧挪作他用,也可能会出现异常。 + +### 为 bash 命令提示符着色 + +目前我们也只是变更了 bash 命令提示符中的内容,下面介绍一下如何对命令提示符进行着色。 + +通过向 `~/.bashrc` 文件写入一些配置,可以修改 bash 命令提示符的前景色(也就是文本的颜色)和背景色。 + +例如,下面这一行配置可以令某些文本的颜色变成红色: + +``` +export PS1="\u@\[\e[31m\]\h\[\e[m\] " +``` + +添加配置后,执行 `source ~/.bashrc` 立即生效。 + +你的 bash 命令提示符就会变成这样: + +![][7] + +类似地,可以用这样的配置来改变背景色: + +``` +export PS1="\u@\[\e[31;46m\]\h\[\e[m\] " +``` + +![][8] + +### 添加 emoji + +大家都喜欢 emoji。还可以按照以下配置把 emoji 插入到命令提示符中。 + +``` +PS1="\W 🔥 >" +``` + +需要注意的是,emoji 的显示取决于使用的字体,因此某些终端可能会无法正常显示 emoji,取而代之的是一些乱码或者单色表情符号。 + +### 自定义 bash 命令提示符有点难,有更简单的方法吗? + +如果你是一个新手,编辑 `$PS1` 环境变量的过程可能会有些困难,因为命令提示符中的大量转义字符串可能会让你有点晕头转向。但不要担心,有一个在线的 bash `$PS1` 生成器可以帮助你轻松生成各种 `$PS1` 环境变量值。 + +就是这个[网站][9]: + +[![EzPrompt](https://www.ostechnix.com/wp-content/uploads/2017/10/EzPrompt.png)][9] + +只需要直接选择你想要的 bash 命令提示符样式,添加颜色、设计排序,然后就完成了。你可以预览输出,并将配置代码复制粘贴到 `~/.bashrc` 文件中。就这么简单。顺便一提,本文中大部分的示例都是通过这个网站制作的。 + +### 我把我的 ~/.bashrc 文件弄乱了,该如何恢复? + +正如我在上面提到的,强烈建议在更改 `~/.bashrc` 文件前做好备份(在更改其它重要的配置文件之前也一定要记得备份)。这样一旦出现任何问题,你都可以很方便地恢复到更改之前的配置状态。当然,如果你忘记了备份,还可以按照下面这篇文章中介绍的方法恢复为默认配置。 + +- [如何将 `~/.bashrc` 文件恢复到默认配置][10] + +这篇文章是基于 ubuntu 的,但也适用于其它的 Linux 发行版。不过事先声明,这篇文章的方法会将 `~/.bashrc` 文件恢复到系统最初时的状态,你对这个文件做过的任何修改都将丢失。 + +感谢阅读! 
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/cdn-cgi/l/email-protection +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-2.png +[5]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-3.png +[6]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-1.png +[7]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-4/ +[8]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-5/ +[9]: http://ezprompt.net/ +[10]: https://www.ostechnix.com/restore-bashrc-file-default-settings-ubuntu/ + diff --git a/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md new file mode 100644 index 0000000000..9fbf743381 --- /dev/null +++ b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md @@ -0,0 +1,86 @@ +如何更换 Ubuntu 系统的 GDM 登录界面背景 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-3.png) + +Ubuntu 18.04 LTS 桌面系统在登录、锁屏和解锁状态下,我们会看到一个纯紫色的背景。它是 GDM(GNOME 显示管理器GNOME Display Manager)从 ubuntu 17.04 版本开始使用的默认背景。有一些人可能会不喜欢这个纯色的背景,想换一个酷一点、更吸睛的!如果是这样,你找对地方了。这篇短文将会告诉你如何更换 Ubuntu 18.04 LTS 的 GDM 登录界面的背景。 + +### 更换 Ubuntu 的登录界面背景 + +这是 Ubuntu 18.04 LTS 桌面系统默认的登录界面。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-1.png) + +不管你喜欢与否,你总是会不经意在登录、解屏/锁屏的时面对它。别担心!你可以随便更换一个你喜欢的图片。 + +在 Ubuntu 上更换桌面壁纸和用户的资料图像不难。我们可以点击鼠标就搞定了。但更换解屏/锁屏的背景则需要修改文件 `ubuntu.css`,它位于 `/usr/share/gnome-shell/theme`。 + +修改这个文件之前,最好备份一下它。这样我们可以避免出现问题时可以恢复它。 + +``` +$ sudo cp /usr/share/gnome-shell/theme/ubuntu.css /usr/share/gnome-shell/theme/ubuntu.css.bak +``` + +修改文件 `ubuntu.css`: + +``` +$ sudo nano /usr/share/gnome-shell/theme/ubuntu.css +``` + +在文件中找到关键字 `lockDialogGroup`,如下行: + +``` +#lockDialogGroup { + background: #2c001e url(resource:///org/gnome/shell/theme/noise-texture.png); + background-repeat: repeat; +} +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/ubuntu_css.png) + +可以看到,GDM 默认登录的背景图片是 `noise-texture.png`。 + +现在修改为你自己的图片路径。也可以选择 .jpg 或 .png 格式的文件,两种格式的图片文件都是支持的。修改完成后的文件内容如下: + +``` +#lockDialogGroup { + background: #2c001e url(file:///home/sk/image.png); + background-repeat: no-repeat; + background-size: cover; + background-position: center; +} +``` + +请注意 `ubuntu.css` 文件里这个关键字的修改,我把修改点加粗了。 + +你可能注意到,我把原来的 `... url(resource:///org/gnome/shell/theme/noise-texture.png);` 修改为 `... url(file:///home/sk/image.png);`。也就是说,你可以把 `... url(resource ...` 修改为 `.. url(file ...`。 + +同时,你可以把参数 `background-repeat:` 的值 `repeat` 修改为 `no-repeat`,并增加另外两行。你可以直接复制上面几行的修改到你的 `ubuntu.css` 文件,对应的修改为你的图片路径。 + +修改完成后,保存和关闭此文件。然后系统重启生效。 + +下面是 GDM 登录界面的最新背景图片: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-2.png) + +是不是很酷,你都看到了,更换 GDM 登录的默认背景很简单。你只需要修改 `ubuntu.css` 文件中图片的路径然后重启系统。是不是很简单也很有意思. 
+ +你可以修改 `/usr/share/gnome-shell/theme` 目录下的文件 `gdm3.css` ,具体修改内容和修改结果和上面一样。同时记得修改前备份要修改的文件。 + +就这些了。如果有好的东东再分享了,请大家关注! + +后会有期。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-change-gdm-login-screen-background-in-ubuntu/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[Guevaraya](https://github.com/guevaraya) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181126 How to use multiple programming languages without losing your mind.md b/published/201811/20181126 How to use multiple programming languages without losing your mind.md new file mode 100644 index 0000000000..bbb310fa4e --- /dev/null +++ b/published/201811/20181126 How to use multiple programming languages without losing your mind.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (How to use multiple programming languages without losing your mind) +[#]: via: (https://opensource.com/article/18/11/multiple-programming-languages) +[#]: author: (Bart Copeland https://opensource.com/users/bartcopeland) +[#]: url: (https://linux.cn/article-10291-1.html) + +如何使用多种编程语言而又不失理智 +====== + +> 多语言编程环境是一把双刃剑,既带来好处,也带来可能威胁组织的复杂性。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2) + +如今,随着各种不同的编程语言的出现,许多组织已经变成了数字多语种组织digital polyglots。开源打开了一个语言和技术堆栈的世界,开发人员可以使用这些语言和技术堆栈来完成他们的任务,包括开发、支持过时的和现代的软件应用。 + +与那些只说母语的人相比,通晓多种语言的人可以与数百万人交谈。在软件环境中,开发人员不会引入新的语言来达到特定的目的,也不会更好地交流。一些语言对于一项任务来说很棒,但是对于另一项任务来说却不行,因此使用多种编程语言可以让开发人员使用合适的工具来完成这项任务。这样,所有的开发都是多语种的;这只是野兽的本性。 + +多语种环境的创建通常是渐进的和情景化的。例如,当一家企业收购一家公司时,它就承担了该公司的技术堆栈 —— 包括其编程语言。或者,随着技术领导的改变,新的领导者可能会将不同的技术纳入其中。技术也有过时的时候,随着时间的推移,增加了组织必须维护的编程语言和技术的数量。 + +多语言环境对企业来说是一把双刃剑,既带来好处,也带来复杂性和挑战。最终,如果这种情况得不到控制,多语言将会扼杀你的企业。 + +### 棘手的技术绕口令 + +如果有多种不同的技术 —— 编程语言、过时的工具和新兴的技术堆栈 —— 就有复杂性。工程师团队花更多的时间努力改进编程语言,包括许可证、安全性和依赖性。与此同时,管理层缺乏对代码合规性的监督,无法衡量风险。 + +发生的情况是,企业具有不同程度的编程语言质量和工具支持的高度可变性。当你需要和十几个人一起工作时,很难成为一种语言的专家。一个能流利地说法语和意大利语的人和一个能用八种语言串成几个句子的人在技能水平上有很大差异。开发人员和编程语言也是如此。 + +随着更多编程语言的加入,困难只会增加,导致数字巴别塔的出现。 + +答案是不要拿走开发人员工作所需的工具。添加新的编程语言可以建立他们的技能基础,并为他们提供合适的设备来完成他们的工作。所以,你想对你的开发者说“是”,但是随着越来越多的编程语言被添加到企业中,它们会拖累你的软件开发生命周期(SDLC)。在规模上,所有这些语言和工具都可能扼杀企业。 + +企业应注意三个主要问题: + +1. **可见性:** 团队聚在一起执行项目,然后解散。应用程序已经发布,但从未更新 —— 为什么要修复那些没有被破坏的东西?因此,当发现一个关键漏洞时,企业可能无法了解哪些应用程序受到影响,这些应用程序包含哪些库,甚至无法了解它们是用什么语言构建的。这可能导致成本高昂的“勘探项目”,以确保漏洞得到适当解决。 + +2. **更新或编码:** 一些企业将更新和修复功能集中在一个团队中。其他人要求每个“比萨团队”管理自己的开发工具。无论是哪种情况,工程团队和管理层都要付出机会成本:这些团队没有编码新特性,而是不断更新和修复开源工具中的库,因为它们移动得如此之快。 + +3. **重新发明轮子:** 由于代码依赖性和库版本不断更新,当发现漏洞时,与应用程序原始版本相关联的工件可能不再可用。因此,许多开发周期都被浪费在试图重新创建一个可以修复漏洞的环境上。 + +将你组织中的每种编程语言乘以这三个问题,开始时被认为是分子一样小的东西突然看起来像珠穆朗玛峰。就像登山者一样,没有合适的设备和工具,你将无法生存。 + +### 找到你的罗塞塔石碑 + +一个全面的解决方案可以满足 SDLC 中企业及其个人利益相关者的需求。企业可以使用以下最佳实践创建解决方案: + + 1. 监控生产中运行的代码,并根据应用程序中使用的标记组件(例如,常见漏洞和暴露组件)的风险做出响应。 + 2. 定期接收更新以保持代码的最新和无错误。 + 3. 使用商业开源支持来获得编程语言版本和平台的帮助,这些版本和平台已经接近尾声,并且不受社区支持。 + 4. 标准化整个企业中的特定编程语言构建,以实现跨团队的一致环境,并最大限度地减少依赖性。 + 5. 根据相关性设置何时触发更新、警报或其他类型事件的阈值。 + 6. 为您的包管理创建一个单一的可信来源;这可能需要知识渊博的技术提供商的帮助。 + 7. 
根据您的特定标准,只使用您需要的软件包获得较小的构建版本。 + +使用这些最佳实践,开发人员可以最大限度地利用他们的时间为企业创造更多价值,而不是执行基本的工具或构建工程任务。这将在软件开发生命周期(SDLC)的所有环境中创建代码一致性。由于维护编程语言和软件包分发所需的资源更少,这也将提高效率和节约成本。这种新的操作方式将使技术人员和管理人员的生活更加轻松。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/multiple-programming-languages + +作者:[Bart Copeland][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bartcopeland +[b]: https://github.com/lujun9972 diff --git a/published/20181123 Three SSH GUI Tools for Linux.md b/published/20181123 Three SSH GUI Tools for Linux.md new file mode 100644 index 0000000000..d742be9ba8 --- /dev/null +++ b/published/20181123 Three SSH GUI Tools for Linux.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (Three SSH GUI Tools for Linux) +[#]: via: (https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) +[#]: url: (https://linux.cn/article-10559-1.html) + +3 个 Linux 上的 SSH 图形界面工具 +====== + +> 了解一下这三个用于 Linux 上的 SSH 图形界面工具。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh.jpg?itok=3UcXhJt7) + +在你担任 Linux 管理员的职业生涯中,你会使用 Secure Shell(SSH)远程连接到 Linux 服务器或桌面。可能你曾经在某些情况下,会同时 SSH 连接到多个 Linux 服务器。实际上,SSH 可能是 Linux 工具箱中最常用的工具之一。因此,你应该尽可能提高体验效率。对于许多管理员来说,没有什么比命令行更有效了。但是,有些用户更喜欢使用 GUI 工具,尤其是在从台式机连接到远程并在服务器上工作时。 + +如果你碰巧喜欢好的图形界面工具,你肯定很乐于了解一些 Linux 上优秀的 SSH 图形界面工具。让我们来看看这三个工具,看看它们中的一个(或多个)是否完全符合你的需求。 + +我将在 [Elementary OS][1] 上演示这些工具,但它们都可用于大多数主要发行版。 + +### PuTTY + +已经有一些经验的人都知道 [PuTTY][2]。实际上,从 Windows 环境通过 SSH 连接到 Linux 服务器时,PuTTY 是事实上的标准工具。但 PuTTY 不仅适用于 Windows。事实上,通过标准软件库,PuTTY 也可以安装在 Linux 上。 PuTTY 的功能列表包括: + + * 保存会话。 + * 通过 IP 或主机名连接。 + * 使用替代的 SSH 端口。 + * 定义连接类型。 + * 日志。 + * 设置键盘、响铃、外观、连接等等。 + * 配置本地和远程隧道。 + * 支持代理。 + * 支持 X11 隧道。 + +PuTTY 图形工具主要是一种保存 SSH 会话的方法,因此可以更轻松地管理所有需要不断远程进出的各种 Linux 服务器和桌面。一旦连接成功,PuTTY 就会建立一个到 Linux 服务器的连接窗口,你将可以在其中工作。此时,你可能会有疑问,为什么不在终端窗口工作呢?对于一些人来说,保存会话的便利确实使 PuTTY 值得使用。 + +在 Linux 上安装 PuTTY 很简单。例如,你可以在基于 Debian 的发行版上运行命令: + +``` +sudo apt-get install -y putty +``` + +安装后,你可以从桌面菜单运行 PuTTY 图形工具或运行命令 `putty`。在 PuTTY “Configuration” 窗口(图 1)中,在 “HostName (or IP address) ” 部分键入主机名或 IP 地址,配置 “Port”(如果不是默认值 22),从 “Connection type”中选择 SSH,然后单击“Open”。 + +![PuTTY Connection][4] + +*图 1:PuTTY 连接配置窗口* + +建立连接后,系统将提示你输入远程服务器上的用户凭据(图2)。 + +![log in][7] + +*图 2:使用 PuTTY 登录到远程服务器* + +要保存会话(以便你不必始终键入远程服务器信息),请填写主机名(或 IP 地址)、配置端口和连接类型,然后(在单击 “Open” 之前),在 “Saved Sessions” 部分的顶部文本区域中键入名称,然后单击 “Save”。这将保存会话的配置。若要连接到已保存的会话,请从 “Saved Sessions” 窗口中选择它,单击 “Load”,然后单击 “Open”。系统会提示你输入远程服务器上的远程凭据。 + +### EasySSH + +虽然 [EasySSH][8] 没有提供 PuTTY 中的那么多的配置选项,但它(顾名思义)非常容易使用。 EasySSH 的最佳功能之一是它提供了一个标签式界面,因此你可以打开多个 SSH 连接并在它们之间快速切换。EasySSH 的其他功能包括: + + * 分组(出于更好的体验效率,可以对标签进行分组)。 + * 保存用户名、密码。 + * 外观选项。 + * 支持本地和远程隧道。 + +在 Linux 桌面上安装 EasySSH 很简单,因为可以通过 Flatpak 安装应用程序(这意味着你必须在系统上安装 Flatpak)。安装 Flatpak 后,使用以下命令添加 EasySSH: + +``` +sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo + +sudo flatpak install flathub com.github.muriloventuroso.easyssh +``` + +用如下命令运行 EasySSH: + +``` +flatpak run com.github.muriloventuroso.easyssh +``` + +将会打开 EasySSH 应用程序,你可以单击左上角的 “+” 按钮。 在结果窗口(图 3)中,根据需要配置 SSH 连接。 + +![Adding a connection][10] + +*图 3:在 EasySSH 中添加连接很简单* + 
+添加连接后,它将显示在主窗口的左侧导航中(图 4)。 + +![EasySSH][12] + +*图 4:EasySSH 主窗口* + +要在 EasySSH 连接到远程服务器,请从左侧导航栏中选择它,然后单击 “Connect” 按钮(图 5)。 + +![Connecting][14] + +*图 5:用 EasySSH 连接到远程服务器* + +对于 EasySSH 的一个警告是你必须将用户名和密码保存在连接配置中(否则连接将失败)。这意味着任何有权访问运行 EasySSH 的桌面的人都可以在不知道密码的情况下远程访问你的服务器。因此,你必须始终记住在你离开时锁定桌面屏幕(并确保使用强密码)。否则服务器容易受到意外登录的影响。 + +### Terminator + +(LCTT 译注:这个选择不符合本文主题,本节删节) + +### termius + +(LCTT 译注:本节是根据网友推荐补充的) + +termius 是一个商业版的 SSH、Telnet 和 Mosh 客户端,不是开源软件。支持包括 [Linux](https://www.termius.com/linux)、Windows、Mac、iOS 和安卓在内的各种操作系统。对于单一设备是免费的,支持多设备的白金账号需要按月付费。 + +### 很少(但值得)的选择 + +Linux 上没有很多可用的 SSH 图形界面工具。为什么?因为大多数管理员更喜欢简单地打开终端窗口并使用标准命令行工具来远程访问其服务器。但是,如果你需要图形界面工具,则有两个可靠选项,可以更轻松地登录多台计算机。虽然对于那些寻找 SSH 图形界面工具的人来说只有不多的几个选择,但那些可用的工具当然值得你花时间。尝试其中一个,亲眼看看。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://elementary.io/ +[2]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html +[3]: https://www.linux.com/files/images/sshguis1jpg +[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_1.jpg?itok=DiNTz_wO (PuTTY Connection) +[5]: https://www.linux.com/licenses/category/used-permission +[6]: https://www.linux.com/files/images/sshguis2jpg +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_2.jpg?itok=4ORsJlz3 (log in) +[8]: https://github.com/muriloventuroso/easyssh +[9]: https://www.linux.com/files/images/sshguis3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_3.jpg?itok=bHC2zlda (Adding a connection) +[11]: https://www.linux.com/files/images/sshguis4jpg +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_4.jpg?itok=hhJzhRIg (EasySSH) +[13]: https://www.linux.com/files/images/sshguis5jpg +[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_5.jpg?itok=piFEFYTQ (Connecting) +[15]: https://www.linux.com/files/images/sshguis6jpg +[16]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_6.jpg?itok=-kYl6iSE (Terminator) diff --git a/published/20181124 14 Best ASCII Games for Linux That are Insanely Good.md b/published/20181124 14 Best ASCII Games for Linux That are Insanely Good.md new file mode 100644 index 0000000000..cac0934625 --- /dev/null +++ b/published/20181124 14 Best ASCII Games for Linux That are Insanely Good.md @@ -0,0 +1,332 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (14 Best ASCII Games for Linux That are Insanely Good) +[#]: via: (https://itsfoss.com/best-ascii-games/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) +[#]: url: (https://linux.cn/article-10546-1.html) + +14 个依然很棒的 Linux ASCII 游戏 +====== + +基于文本的(或者我应该说是[基于终端的][1])游戏在十年前非常流行 —— 当时还没有像战神God Of War荒野大镖客:救赎 2Red Dead Redemption 2蜘蛛侠Spiderman这样的视觉游戏大作。 + +当然,Linux 平台有很多好游戏 —— 虽然并不总是“最新和最好”。但是,有一些 ASCII 游戏,却是你永远不会玩腻的。 + +你或许不相信,有一些 ASCII 游戏被证明是非常容易上瘾的(所以,我可能需要一段时间才能继续写下一篇文章,或者我可能会被解雇? —— 帮帮我!) 
+ +哈哈,开个玩笑。让我们来看看最好的 ASCII 游戏吧。 + +**注意:**安装 ASCII 游戏可能要花费不少时间(有些可能会要求你安装其他依赖项或根本不起作用)。你甚至可能会遇到一些需要你从源代码构建的 ASCII 游戏。因此,我们只筛选出那些易于安装和运行的产品 —— 不用费劲。 + +### 在运行和安装 ASCII 游戏之前需要做的事情 + +如果你没有安装的话,某些 ASCII 游戏可能需要你安装 [Simple DirectMedia Layer][2]。因此,以防万一,你应该先尝试安装它,然后再尝试运行本文中提到的任何游戏。 + +要安装它,你需要键入如下命令: + +``` +sudo apt install libsdl2-2.0 +sudo apt install libsdl2_mixer-2.0 +``` + +### Linux 上最好的 ASCII 游戏 + +![Best Ascii games for Linux][3] + +如下列出的游戏排名不分先后。 + +#### 1、战争诅咒 + +![Curse of War ascii games][5] + +[战争诅咒][4]Curse of War是一个有趣的策略游戏。一开始你可能会发现它有点令人困惑,但一旦你掌握了,就会喜欢上它。在启动游戏之前,我建议你在其 [主页][4] 上查看该游戏规则。 + +你将建设基础设施、保护资源并指挥你的军队进行战斗。你所要做的就是把你的旗帜放在一个合适的位置,让你的军队来完成其余的任务。不仅仅是攻击敌人,你还需要管理和保护资源以帮助赢得战斗。 + +如果你之前从未玩过任何 ASCII 游戏,请耐心花一些时间来学习它、体验它的全部潜力。 + +##### 如何安装? + +你可以在官方软件库里找到它。键入如下命令来安装它: + +``` +sudo apt install curseofwar +``` + +#### 2、ASCII 领域 + +![ascii sector][6] + +讨厌策略游戏?不用担心,ASCII 领域ASCII Sector是一款具有空间环境的游戏,可让你进行大量探索。 + +此外,不仅仅局限于探索,你还想要采取一些行动吗?也是可以的。当然,虽然战斗体验不是最好的,但它也很有趣。当你看到各种基地、任务和探索时,会让你更加兴奋。你会在这个小小的游戏中遇到一个练级系统,你必须赚取足够的钱或进行交易才能升级你的宇宙飞船。 + +而这个游戏最好的地方是你可以创建自己的任务,也可以玩其他人的任务。 + +##### 如何安装? + +你需要先从其 [官方网站][7] 下载并解压缩归档包。完成后,打开终端并输入这些命令(将 “Downloads” 文件夹替换为你解压缩文件夹所在的位置,如果解压缩文件夹位于你的主目录中,则忽略它): + +``` +cd Downloads +cd asciisec +chmod +x asciisec +./asciisec +``` + +#### 3、DoomRL + +![doom ascii game][8] + +你肯定知道经典游戏“毁灭战士DOOM”,所以,如果你想把它像 Rogue 类游戏一样略微体验一下,DoomRL 就是适合你的游戏。它是一个基于 ASCII 的游戏,这或许让你想不到。 + +这是一个非常小的游戏,但是可以玩很久。 + +##### 如何安装? + +与你对 “ASCII 领域”所做的类似,你需要从其 [下载页面][9] 下载官方归档文件,然后将其解压缩到一个文件夹。 + +解压缩后,输入以下命令: + +``` +cd Downloads // navigating to the location where the unpacked folder exists +cd doomrl-linux-x64-0997 +chmod +x doomrl +./doomrl +``` + +#### 4、金字塔建造者 + +![Pyramid Builder ascii game for Linux][10] + +金字塔建造者Pyramid Builder 是一款创新的 ASCII 游戏,你可以通过帮助建造金字塔来提升你的文明。 + +你需要指导工人耕种、卸载货物、并移动巨大的石头,以成功建造金字塔。 + +这确实是一个值得下载的 ASCII 游戏。 + +##### 如何安装? + +只需前往其官方网站并下载包以解压缩。提取后,导航到该文件夹并运行可执行文件。 + +``` +cd Downloads +cd pyramid_builder_linux +chmod +x pyramid_builder_linux.x86_64 +./pyramid_builder_linux.x86_64 +``` + +#### 5、DiabloRL + +![Diablo ascii RPG game][11] + +如果你是一位狂热的游戏玩家,你一定听说过暴雪的暗黑破坏神Diablo 1 代,毫无疑问这是一个精彩的游戏。 + +现在你有机会玩一个该游戏的独特演绎版本 —— 一个 ASCII 游戏。DiabloRL 是一款非常棒的基于回合制的 Rogue 类的游戏。你可以从各种职业(战士、巫师或盗贼)中进行选择。每个职业都具有一套不同的属性,可以带来不同游戏体验。 + +当然,个人偏好会有所不同,但它是一个不错的暗黑破坏神“降级版”。你觉得怎么样? + +#### 6、Ninvaders + +![Ninvaders terminal game for Linux][12] + +Ninvaders 是最好的 ASCII 游戏之一,因为它是如此简单,且可以消磨时间的街机游戏。 + +你必须防御入侵者,需要在它们到达之前击败它们。这听起来很简单,但它极具挑战性。 + +##### 如何安装? + +与“战争诅咒”类似,你可以在官方软件库中找到它。所以,只需输入此命令即可安装它: + +``` +sudo apt install ninvaders  +``` + +#### 7、帝国 + +![Empire terminal game][13] + +帝国Empire这是一款即时战略游戏,你需要互联网连接。我个人不是实时战略游戏的粉丝,但如果你是这类游戏的粉丝,你可以看看他们的 [指南][14] 来玩这个游戏,因为学习起来非常具有挑战性。 + +游戏区域包含城市、土地和水。你需要用军队、船只、飞机和其他资源扩展你的城市。通过快速扩张,你可以通过在对方动作之前摧毁它们来捕获其他城市。 + +##### 如何安装? + +安装很简单,只需输入以下命令: + +``` +sudo apt install empire +``` + +#### 8、Nudoku + +![Nudoku is a terminal version game of Sudoku][15] + +喜欢数独游戏?好吧,你也有个 Nudoku 游戏,这是它的克隆。这是当你想放松时的一个完美的消磨时间的 ASCII 游戏。 + +它为你提供三个难度级别:简单、正常和困难。如果你想要挑战电脑,其难度会非常难!如果你只是想放松一下,那么就选择简单难度吧。 + +##### 如何安装? + +安装它很容易,只需在终端输入以下命令: + +``` +sudo apt install nudoku +``` + +#### 9、Nethack + +最好的地下城式 ASCII 游戏之一。如果你已经知道一些 Linux 的 ASCII 游戏,我相信这是你的最爱之一。 + +它具有许多不同的层(约 45 个),并且包含一堆武器、卷轴、药水、盔甲、戒指和宝石。你也可以选择“永久死亡”模式来玩试试。 + +在这里可不仅仅是杀戮,你还有很多需要探索的地方。 + +##### 如何安装? 
+ +只需按照以下命令安装它: + +``` +sudo apt install nethack +``` + +#### 10、ASCII 滑雪 + +![ascii jump game][16] + +ASCII 滑雪ASCII Jump 是一款简单易玩的游戏,你必须沿着各种轨道滑动,同时跳跃、改变位置,并尽可能长时间地移动以达到最大距离。 + +即使看起来很简单,但是看看这个 ASCII 游戏视觉上的表现也是很神奇的。你可以从训练模式开始,然后进入世界杯比赛。你还可以选择你的竞争对手以及你想要开始游戏的山丘。 + +##### 如何安装? + +只需按照以下命令安装它: + +``` +sudo apt install asciijump +``` + +#### 11、Bastet + +![Bastet is tetris game in ascii form][17] + +不要被这个名字误导,它实际上是俄罗斯方块游戏的一个有趣的克隆。 + +你不要觉得它只是另一个普通的俄罗斯方块游戏,它会为你丢下最糟糕的砖块。祝你玩得开心! + +##### 如何安装? + +打开终端并键入如下命令: + +``` +sudo apt install bastet +``` + +#### 12、Bombardier + +![Bomabrdier game in ascii form][18] + +Bombardier 是另一个简单的 ASCII 游戏,它会让你迷上它。 + +在这里,你有一架直升机(或许你想称之为飞机),每一圈它都会降低,你需要投掷炸弹才能摧毁你下面的街区/建筑物。当你摧毁一个街区时,游戏还会在它显示的消息里面添加一些幽默。很好玩。 + +##### 如何安装? + +Bombardier 可以在官方软件库中找到,所以只需在终端中键入以下内容即可安装它: + +``` +sudo apt install bombardier +``` + +#### 13、Angband + +![Angband ascii game][19] + +一个很酷的地下城探索游戏,界面整洁。在探索该游戏时,你可以在一个屏幕上看到所有重要信息。 + +它包含不同种类的种族可供选择角色。你可以是精灵、霍比特人、矮人或其他什么,有十几种可供选择。请记住,你需要在最后击败黑暗之王,所以尽可能升级你的武器并做好准备。 + +##### 如何安装? + +直接键入如下命令: + +``` +sudo apt install angband +``` + +#### 14、GNU 国际象棋 + +![GNU Chess is a chess game that you can play in Linux terminal][20] + +为什么不下盘棋呢?这是我最喜欢的策略游戏了! + +但是,除非你知道如何使用代表的符号来描述下一步行动,否则 GNU 国际象棋可能很难玩。当然,作为一个 ASCII 游戏,它不太好交互,所以它会要求你记录你的移动并显示输出(当它等待计算机思考它的下一步行动时)。 + +##### 如何安装? + +如果你了解国际象棋的代表符号,请输入以下命令从终端安装它: + +``` +sudo apt install gnuchess +``` + +#### 一些荣誉奖 + +正如我之前提到的,我们试图向你推荐最好的(也是最容易在 Linux 机器上安装的那些) ASCII 游戏。 + +然而,有一些标志性的 ASCII 游戏值得关注,它们需要更多的安装工作(你可以获得源代码,但需要构建它/安装它)。 + +其中一些游戏是: + ++ [Cataclysm: Dark Days Ahead][22] ++ [Brogue][23] ++ [Dwarf Fortress][24] + +你可以按照我们的 [从源代码安装软件的完全指南][21] 来进行。 + +### 总结 + +我们提到的哪些 ASCII 游戏适合你?我们错过了你最喜欢的吗? + +请在下面的评论中告诉我们你的想法。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-ascii-games/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-command-line-games-linux/ +[2]: https://www.libsdl.org/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/best-ascii-games-featured.png?resize=800%2C450&ssl=1 +[4]: http://a-nikolaev.github.io/curseofwar/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/curseofwar-ascii-game.jpg?fit=800%2C479&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-sector-game.jpg?fit=800%2C424&ssl=1 +[7]: http://www.asciisector.net/download/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/doom-rl-ascii-game.jpg?ssl=1 +[9]: https://drl.chaosforge.org/downloads +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/pyramid-builder-ascii-game.jpg?fit=800%2C509&ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/diablo-rl-ascii-game.jpg?ssl=1 +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ninvaders-ascii-game.jpg?fit=800%2C426&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/empire-ascii-game.jpg?fit=800%2C570&ssl=1 +[14]: http://www.wolfpackempire.com/infopages/Guide.html +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/nudoku-ascii-game.jpg?fit=800%2C434&ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-jump.jpg?fit=800%2C566&ssl=1 +[17]: 
https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/bastet-tetris-clone-ascii.jpg?fit=800%2C465&ssl=1 +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/bombardier.jpg?fit=800%2C571&ssl=1 +[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/angband-ascii-game.jpg?ssl=1 +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/gnuchess-ascii-game.jpg?ssl=1 +[21]: https://linux.cn/article-9172-1.html +[22]: https://github.com/CleverRaven/Cataclysm-DDA +[23]: https://sites.google.com/site/broguegame/ +[24]: http://www.bay12games.com/dwarves/index.html + diff --git a/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md new file mode 100644 index 0000000000..0f51e0e7a9 --- /dev/null +++ b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md @@ -0,0 +1,401 @@ +学习 Linux/*BSD/Unix 的 30 个最佳在线文档 +====== + +手册页(man)是由系统管理员和 IT 技术开发人员写的,更多的是为了作为参考而不是教你如何使用。手册页对于已经熟悉使用 Linux、Unix 和 BSD 操作系统的人来说是非常有用的。如果你仅仅需要知道某个命令或者某个配置文件的格式那么你可以使用手册页,但是手册页对于 Linux 新手来说并没有太大的帮助。想要通过使用手册页来学习一些新东西不是一个好的选择。这里有将提供 30 个学习 Linux 和 Unix 操作系统的最佳在线网页文档。 + +![Dennis Ritchie and Ken Thompson working with UNIX PDP11][1] + +值得一提的是,相对于 Linux,BSD 的手册页更好。 + +### #1:Red Hat Enterprise Linux(RHEL) + +![Red hat Enterprise Linux 文档][2] + +RHEL 是由红帽公司开发的面向商业市场的 Linux 发行版。红帽的文档是最好的文档之一,涵盖从 RHEL 的基础到一些高级主题比如安全、SELinux、虚拟化、目录服务器、服务器集群、JBOSS 应用程序服务器、高可用性集群(HPC)等。红帽的文档已经被翻译成 22 种语言,发布成多页面 HTML、单页面 HTML、PDF、EPUB 等文件格式。好消息同样的文档你可以用于 Centos 和 Scientific Linux(社区企业发行版)。这些文档随操作系统一起下载提供,也就是说当你没有网络的时候,你也可以使用它们。RHEL 的文档**涵盖从安装到配置器群的所有内容**。唯一的缺点是你需要成为付费用户。当然这对于企业公司来说是一件完美的事。 + +1. RHEL 文档:[HTML/PDF格式][3](LCTT 译注:**此链接**需要付费用户才可以访问) +2. 是否支持论坛:只能通过红帽公司的用户网站提交支持案例。 + +#### 关于 CentOS Wiki 和论坛的说明 + +![Centos Linux Wiki][4] + +CentOS(社区企业操作系统Community ENTerprise Operating System)是由 RHEL 提供的自由源码包免费重建的。它为个人电脑或其它用途提供了可靠的、免费的企业级 Linux。你可以不用付出任何支持和认证费用就可以获得 RHEL 的稳定性。CentOS的 wiki 分为 Howto、技巧等等部分,链接如下: + +1. 文档:[wiki 格式][87] +2. 是否支持论坛:[是][88] + +### #2:Arch 的 Wiki 和论坛 + +![Arch Linux wiki 和教程][5] + +Arch linux 是一个独立开发的 Linux 操作系统,它有基于 wiki 网站形式的非常不错的文档。它是由 Arch 社区的一些用户共同协作开发出来的,并且允许任何用户添加或修改内容。这些文档教程被分为几类比如说优化、软件包管理、系统管理、X window 系统还有获取安装 Arch Linux 等。它的[官方论坛][7]在解决许多问题的时候也非常有用。它有总共 4 万多个注册用户、超过 1 百万个帖子。 该 wiki 包含一些 **其它 Linux 发行版也适用的通用信息**。 + +1. Arch 社区文档:[Wiki 格式][8] +2. 是否支持论坛:[是][7] + +### #3:Gentoo Linux Wiki 和论坛 + +![Gentoo Linux 手册和 Wiki][9] + +Gentoo Linux 基于 Portage 包管理系统。Gentoo Linux 用户根据它们选择的配置在本地编译源代码。多数 Gentoo Linux 用户都会定制自己独有的程序集。 Gentoo Linux 的文档会给你一些有关 Gentoo Linux 操作系统的说明和一些有关安装、软件包、网络和其它等主要出现的问题的解决方法。Gentoo 有对你来说 **非常有用的论坛**,论坛中有超过 13 万 4 千的用户,总共发了有 5442416 个文章。 + +1. Gentoo 社区文档:[手册][10] 和 [Wiki 格式][11] +2. 是否支持论坛:[是][12] + +### #4:Ubuntu Wiki 和文档 + +![Ubuntu Linux Wiki 和论坛][14] + +Ubuntu 是领先的台式机和笔记本电脑发行版之一。其官方文档由 Ubuntu 文档工程开发维护。你可以在从官方文档中查看大量的信息,比如如何开始使用 Ubuntu 的教程。最好的是,此处包含的这些信息也可用于基于 Debian 的其它系统。你可能会找到由 Ubuntu 的用户们创建的社区文档,这是一份有关 Ubuntu 的使用教程和技巧等。Ubuntu Linux 有着网络上最大的 Linux 社区的操作系统,它对新用户和有经验的用户均有助益。 + +1. Ubuntu 社区文档:[wiki 格式][15] +2. Ubuntu 官方文档:[wiki 格式][16] +3. 是否支持论坛:[是][17] + +### #5:IBM Developer Works + +![IBM: Linux 程序员和系统管理员用到的技术][18] + +IBM Developer Works 为 Linux 程序员和系统管理员提供技术资源,其中包含数以百计的文章、教程和技巧来协助 Linux 程序员的编程工作和应用开发还有系统管理员的日常工作。 + +1. IBM 开发者项目文档:[HTML 格式][19] +2. 
是否支持论坛:[是][20] + +### #6:FreeBSD 文档和手册 + +![Freebsd Documentation][21] + +FreeBSD 的手册是由 FreeBSD 文档项目FreeBSD Documentation Project所创建的,它介绍了 FreeBSD 操作系统的安装、管理和一些日常使用技巧等内容。FreeBSD 的手册页通常比 GNU Linux 的手册页要好一点。FreeBSD **附带有全部最新手册页的文档**。 FreeBSD 手册涵盖任何你想要的内容。手册包含一些通用的 Unix 资料,这些资料同样适用于其它的 Linux 发行版。FreeBSD 官方论坛会在你遇到棘手问题时给予帮助。 + +1. FreeBSD 文档:[HTML/PDF 格式][90] +2. 是否支持论坛:[是][91] + +### #7:Bash Hackers Wiki + +![Bash Hackers wiki][22] + +这是一个对于 bash 使用者来说非常好的资源。Bash 使用者的 wiki 是为了归纳所有类型的 GNU Bash 文档。这个项目的动力是为了提供可阅读的文档和资料来避免用户被迫一点一点阅读 Bash 的手册,有时候这是非常麻烦的。Bash Hackers Wiki 分为各个类,比如说脚本和通用资料、如何使用、代码风格、bash 命令格式和其它。 + +1. Bash 用户教程:[wiki 格式][23] + +### #8:Bash 常见问题 + +![Bash 常见问题:一些有关 GNU/BASH 常见问题的解决方法][24] + +这是一个为 bash 新手设计的一个 wiki。它收集了 IRC 网络的 #bash 频道里常见问题的解决方法,这些解决方法是由该频道的普通成员提供。当你遇到问题的时候不要忘了在 [BashPitfalls][25] 部分检索查找答案。这些常见问题的解决方法可能会倾向于 Bash,或者偏向于最基本的 Bourne Shell,这决定于是谁给出的答案。大多数情况会尽力提供可移植的(Bourne)和高效的(Bash,在适当情况下)的两类答案。 + +1. Bash 常见问题:[wiki 格式][26] + +### #9: Howtoforge - Linux 教程 + +![Howtoforge][27] + +博客作者 Falko 在 Howtoforge 上有一些非常不错的东西。这个网站提供了 Linux 关于各种各样主题的教程,比如说其著名的“最佳服务器系列”,网站将主题分为几类,比如说 web 服务器、linux 发行版、DNS 服务器、虚拟化、高可用性、电子邮件和反垃圾邮件、FTP 服务器、编程主题还有一些其它的内容。这个网站也支持德语。 + +1. Howtoforge: [html 格式][28] +2. 是否支持论坛:是 + +### #10:OpenBSD 常见问题和文档 + +![OpenBSD 文档][29] + +OpenBSD 是另一个基于 BSD 的类 Unix 计算机操作系统。OpenBSD 是由 NetBSD 项目分支而来。OpenBSD 因高质量的代码和文档、对软件许可协议的坚定立场和强烈关注安全问题而闻名。OpenBSD 的文档分为多个主题类别,比如说安装、包管理、防火墙设置、用户管理、网络、磁盘和磁盘阵列管理等。 + +1. OpenBSD:[html 格式][30] +2. 是否支持论坛:否,但是可以通过 [邮件列表][31] 来咨询 + +### #11: Calomel - 开源研究和参考文档 + +![开源研究和参考文档][32] + +这个极好的网站是专门作为开源软件和那些特别专注于 OpenBSD 的软件的文档来使用的。这是最简洁的引导网站之一,专注于高质量的内容。网站内容分为多个类,比如说 DNS、OpenBSD、安全、web 服务器、Samba 文件服务器、各种工具等。 + +1. Calomel 官网:[html 格式][33] +2. 是否支持论坛:否 + +### #12:Slackware 书籍项目 + +![Slackware Linux 手册和文档][34] + +Slackware Linux 是我的第一个 Linux 发行版。Slackware 是基于 Linux 内核的最早的发行版之一,也是当前正在维护的最古老的 Linux 发行版。 这个发行版面向专注于稳定性的高级用户。 Slackware 也是很少有的的“类 Unix” 的 Linux 发行版之一。官方的 Slackware 手册是为了让用户快速开始了解 Slackware 操作系统的使用方法而设计的。 这不是说它将包含发行版的每一个方面,而是为了说明它的实用性和给使用者一些有关系统的基础工作使用方法。手册分为多个主题,比如说安装、网络和系统配置、系统管理、包管理等。 + +1. Slackware Linux 手册:[html 格式][35]、pdf 和其它格式 +2. 是否支持论坛:是 + +### #13:Linux 文档项目(TLDP) + +![Linux 学习网站和文档][36] + +Linux 文档项目Linux Documentation Project旨在给 Linux 操作系统提供自由、高质量文档。网站是由志愿者创建和维护的。网站分为具体主题的帮助、由浅入深的指南等。在此我想推荐一个非常好的[文档][37],这个文档既是一个教程也是一个 shell 脚本编程的参考文档,对于新用户来说这个 HOWTO 的[列表][38]也是一个不错的开始。 + +1. Linux [文档工程][39] 支持多种查阅格式 +2. 是否支持论坛:否 + +### #14:Linux Home Networking + +![Linux Home Networking][40] + +Linux Home Networking 是学习 linux 的另一个比较好的资源,这个网站包含了 Linux 软件认证考试的内容比如 RHCE,还有一些计算机培训课程。网站包含了许多主题,比如说网络、Samba 文件服务器、无线网络、web 服务器等。 + +1. Linux [home networking][41] 可通过 html 格式和 PDF(少量费用)格式查阅 +2. 是否支持论坛:是 + +### #15:Linux Action Show + +![Linux 播客][42] + +Linux Action Show(LAS) 是一个关于 Linux 的播客。这个网站是由 Bryan Lunduke、Allan Jude 和 Chris Fisher 共同管理的。它包含了 FOSS 的最新消息。网站内容主要是评论一些应用程序和 Linux 发行版。有时候也会发布一些和开源项目著名人物的采访视频。 + +1. Linux [action show][43] 支持音频和视频格式 +2. 是否支持论坛:是 + +### #16:Commandlinefu + +![Commandlinefu 的最优 Unix / Linux 命令][45] + +Commandlinefu 列出了各种有用或有趣的 shell 命令。这里所有命令都可以评论、讨论和投票(支持或反对)。对于所有 Unix 命令行用户来说是一个极好的资源。不要忘了查看[评选出来的最佳命令][44]。 + + 1. [Commandlinefu][46] 支持 html 格式 + 2. 是否支持论坛:否 + +### #17:Debian 管理技巧和资源 + +![Debian Linux 管理: 系统管理员技巧和教程][48] + +这个网站包含一些只和 Debian GNU/Linux 相关的主题、技巧和教程,特别是包含了关于系统管理的有趣和有用的信息。你可以在上面贡献文章、建议和问题。提交了之后不要忘记查看[最佳文章列表][47]里有没有你的文章。 + +1. Debian [系统管理][49] 支持 html 格式 +2. 
是否支持论坛:否 + +### #18: Catonmat - Sed、Awk、Perl 教程 + +![Sed 流编辑器、 Awk 文本处理工具、 Perl 语言教程][50] + +这个网站是由博客作者 Peteris Krumins 维护的。主要关注命令行和 Unix 编程主题,比如说 sed 流编辑器、perl 语言、AWK 文本处理工具等。不要忘了查看 [sed 介绍][51]、sed 含义解释,还有命令行历史的[权威介绍][53]。 + +1. [catonmat][55] 支持 html 格式 +2. 是否支持论坛:否 + +### #19:Debian GNU/Linux 文档和 Wiki + +![Debian Linux 教程和 Wiki][56] + +Debian 是另外一个 Linux 操作系统,其主要使用的软件以 GNU 许可证发布。Debian 因严格坚持 Unix 和自由软件的理念而闻名,它也是很受欢迎并且有一定影响力的 Linux 发行版本之一。 Ubuntu 等发行版本都是基于 Debian 的。Debian 项目以一种易于访问的形式提供给用户合适的文档。这个网站分为 Wiki、安装指导、常见问题、支持论坛几个模块。 + +1. Debian GNU/Linux [文档][57] 支持 html 和其它格式访问 +2. Debian GNU/Linux [wiki][58] +3. 是否支持论坛:[是][59] + +### #20:Linux Sea + +Linux Sea 这本书提供了比较通俗易懂但充满技术(从最终用户角度来看)的 Linux 操作系统的介绍,使用 Gentoo Linux 作为例子。它既没有谈论 Linux 内核或 Linux 发行版的历史,也没有谈到 Linux 用户不那么感兴趣的细节。 + +1. Linux [sea][60] 支持 html 格式访问 +2. 是否支持论坛: 否 + +### #21:O'reilly Commons + +![免费 Linux / Unix / Php / Javascript / Ubuntu 学习笔记][61] + +O'reilly 出版社发布了不少 wiki 格式的文章。这个网站主要是为了给那些喜欢创作、参考、使用、修改、更新和修订来自 O'Reilly 或者其它来源的素材的社区提供资料。这个网站包含关于 Ubuntu、PHP、Spamassassin、Linux 等的免费书籍。 + +1. Oreilly [commons][62] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #22:Ubuntu 袖珍指南 + +![Ubuntu 新手书籍][63] + +这本书的作者是 Keir Thomas。这本指南(或者说是书籍)对于所有 ubuntu 用户来说都值得一读。这本书旨在向用户介绍 Ubuntu 操作系统和其所依赖的理念。你可以从官网下载这本书的 PDF 版本,也可以在亚马逊买印刷版。 + +1. Ubuntu [pocket guide][64] 支持 PDF 和印刷版本. +2. 是否支持论坛:否 + +### #23: Linux: Rute User's Tutorial and Exposition + +![GNU/LINUX system administration book][65] + +这本书涵盖了 GNU/LINUX 系统管理,主要是对主流的发布版本比如红帽和 Debian 的说明,可以作为新用户的教程和高级管理员的参考。这本书旨在给出 Unix 系统的每个面的简明彻底的解释和实践性的例子。想要全面了解 Linux 的人都不需要再看了 —— 这里没有涉及的内容。 + +1. Linux: [Rute User's Tutorial and Exposition][66] 支持印刷版和 html 格式 +2. 是否支持论坛:否 + +### #24:高级 Linux 编程 + +![高级 Linux 编程][67] + +这本书是写给那些已经熟悉了 C 语言编程的程序员的。这本书采取一种教程式的方式来讲述大多数在 GNU/Linux 系统应用编程中重要的概念和功能特性。如果你是一个已经对 GNU/Linux 系统编程有一定经验的开发者,或者是对其它类 Unix 系统编程有一定经验的开发者,或者对 GNU/Linux 软件开发有兴趣,或者想要从非 Unix 系统环境转换到 Unix 平台并且已经熟悉了优秀软件的开发原则,那你很适合读这本书。另外,你会发现这本书同样适合于 C 和 C++ 编程。 + +1. [高级 Linux 编程][68] 支持印刷版和 PDF 格式 +2. 是否支持论坛:否 + +### #25: LPI 101 Course Notes + +![Linux 国际专业协会认证书籍][69] + +LPIC 1、2、3 级是用于 Linux 系统管理员认证的。这个网站提供了 LPI 101 和 LPI 102 的测试训练。这些是根据 GNU 自由文档协议GNU Free Documentation Licence(FDL)发布的。这些课程材料基于 Linux 国际专业协会的 LPI 101 和 102 考试的目标。这个课程是为了提供给你一些必备的 Linux 系统的操作和管理的技能。 + +1. LPI [训练手册][70] 支持 PDF 格式 +2. 是否支持论坛:否 + +### #26: FLOSS 手册 + +![FLOSS Manuals is a collection of manuals about free and open source software][72] + +FLOSS 手册是一系列关于自由和开源软件以及用于创建它们的工具和使用这些工具的社区的手册。社区的成员包含作者、编辑、设计师、软件开发者、积极分子等。这些手册中说明了怎样安装使用一些自由和开源软件,如何操作(比如设计和维持在线安全)开源软件,这其中也包含如何使用或支持自由软件和格式的自由文化服务手册。你也会发现关于一些像 VLC、 [Linux 视频编辑][71]、 Linux、 OLPC / SUGAR、 GRAPHICS 等软件的手册。 + +1. 你可以浏览 [FOSS 手册][73] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #27:Linux 入门包 + +![Linux 入门包][74] + +刚接触 Linux 这个美好世界?想找一个简单的入门方式?你可以下载一个 130 页的指南来入门。这个指南会向你展示如何在你的个人电脑上安装 Linux,如何浏览桌面,掌握最主流行的 Linux 程序和修复可能出现的问题的方法。 + +1. [Linux 入门包][75]支持 PDF 格式 +2. 是否支持论坛:否 + +### #28:Linux.com - Linux 信息来源 + +Linux.com 是 Linux 基金会的一个产品。这个网站上提供一些新闻、指南、教程和一些关于 Linux 的其它信息。利用全球 Linux 用户的力量来通知、写作、连接 Linux 的事务。 + +1. 在线访问 [Linux.com][76] +2. 是否支持论坛:是 + +### #29: LWN + +LWN 是一个注重自由软件及用于 Linux 和其它类 Unix 操作系统的软件的网站。这个网站有周刊、基本上每天发布的单独文章和文章的讨论对话。该网站提供有关 Linux 和 FOSS 相关的开发、法律、商业和安全问题的全面报道。 + +1. 在线访问 [lwn.net][77] +2. 
是否支持论坛:否 + +### #30:Mac OS X 相关网站 + +与 Mac OS X 相关网站的快速链接: + +* [Mac OS X 提示][78] —— 这个网站专用于苹果的 Mac OS X Unix 操作系统。网站有很多有关 Bash 和 Mac OS X 的使用建议、技巧和教程 +* [Mac OS 开发库][79] —— 苹果拥有大量和 OS X 开发相关的优秀系列内容。不要忘了看一看 [bash shell 脚本入门][80] +* [Apple 知识库][81] - 这个有点像 RHN 的知识库。这个网站提供了所有苹果产品包括 OS X 相关的指南和故障报修建议。 + +### #30: NetBSD + +(LCTT 译注:没错,又一个 30) + +NetBSD 是另一个基于 BSD Unix 操作系统的自由开源操作系统。NetBSD 项目专注于系统的高质量设计、稳定性和性能。由于 NetBSD 的可移植性和伯克利式的许可证,NetBSD 常用于嵌入式系统。这个网站提供了一些 NetBSD 官方文档和各种第三方文档的链接。 + +1. 在线访问 [netbsd][82] 文档,支持 html、PDF 格式 +2. 是否支持论坛:否 + +### 你要做的事 + +这是我的个人列表,这可能并不完全是权威的,因此如果你有你自己喜欢的独特 Unix/Linux 网站,可以在下方参与评论分享。 + +// 图片来源: [Flickr photo][83] PanelSwitchman。一些连接是用户在我们的 Facebook 粉丝页面上建议添加的。 + +### 关于作者 + +作者是 nixCraft 的创建者和经验丰富的系统管理员以及 Linux 操作系统 / Unix shell 脚本的培训师。它曾与全球客户及各行各业合作,包括 IT、教育,国防和空间研究以及一些非营利部门。可以关注作者的 [Twitter][84]、[Facebook][85]、[Google+][86]。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/linux-unix-bsd-documentations.html + +作者:[Vivek Gite][a] +译者:[ScarboroughCoral](https://github.com/ScarboroughCoral) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg "Dennis Ritchie and Ken Thompson working with UNIX PDP11" +[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs.png "Red hat Enterprise Linux Docs" +[3]:https://access.redhat.com/documentation/en-us/ +[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki.png "Centos Linux Wiki, Support, Documents" +[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki.png "Arch Linux wiki and tutorials " +[6]:https://wiki.archlinux.org/index.php/Category:Networking_%28English%29 +[7]:https://bbs.archlinux.org/ +[8]:https://wiki.archlinux.org/ +[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1.png "Gentoo Linux Handbook and Wiki" +[10]:http://www.gentoo.org/doc/en/handbook/ +[11]:https://wiki.gentoo.org +[12]:https://forums.gentoo.org/ +[13]:http://gentoo-wiki.com +[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png "Ubuntu Linux Wiki and Forums" +[15]:https://help.ubuntu.com/community +[16]:https://help.ubuntu.com/ +[17]:https://ubuntuforums.org/ +[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png "IBM: Technical for Linux programmers and system administrators" +[19]:https://www.ibm.com/developerworks/learn/linux/index.html +[20]:https://www.ibm.com/developerworks/community/forums/html/public?lang=en +[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png "Freebsd Documentation" +[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki.png "Bash hackers wiki for bash users" +[23]:http://wiki.bash-hackers.org/doku.php +[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq.png "Bash FAQ: Answers to frequently asked questions about GNU/BASH" +[25]:http://mywiki.wooledge.org/BashPitfalls +[26]:https://mywiki.wooledge.org/BashFAQ +[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge.png "Howtoforge tutorials" +[28]:https://howtoforge.com/ +[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq.png "OpenBSD Documenation" +[30]:https://www.openbsd.org/faq/index.html +[31]:https://www.openbsd.org/mail.html +[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png "Open Source Research and Reference 
Documentation" +[33]:https://calomel.org +[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book.png "Slackware Linux Book and Documentation " +[35]:http://www.slackbook.org/ +[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp.png "Linux Learning Site and Documentation " +[37]:http://tldp.org/LDP/abs/html/index.html +[38]:http://tldp.org/HOWTO/HOWTO-INDEX/howtos.html +[39]:http://tldp.org/ +[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking.png "Linux Home Networking " +[41]:http://www.linuxhomenetworking.com/ +[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show.png "Linux Podcast " +[43]:http://www.jupiterbroadcasting.com/show/linuxactionshow/ +[44]:https://www.commandlinefu.com/commands/browse/sort-by-votes +[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png "The best Unix / Linux Commands " +[46]:https://commandlinefu.com/ +[47]:https://www.debian-administration.org/hof +[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png "Debian Linux Adminstration: Tips and Tutorial For Sys Admin" +[49]:https://www.debian-administration.org/ +[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat.png "Sed, Awk, Perl Tutorials" +[51]:http://www.catonmat.net/blog/worlds-best-introduction-to-sed/ +[52]:https://www.catonmat.net/blog/sed-one-liners-explained-part-one/ +[53]:https://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/ +[54]:https://www.catonmat.net/blog/awk-one-liners-explained-part-one/ +[55]:https://catonmat.net/ +[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki.png "Debian Linux Tutorials and Wiki" +[57]:https://www.debian.org/doc/ +[58]:https://wiki.debian.org/ +[59]:https://www.debian.org/support +[60]:http://swift.siphos.be/linux_sea/ +[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly.png "Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books" +[62]:http://commons.oreilly.com/wiki/index.php/O%27Reilly_Commons +[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide.png "Ubuntu Book For New Users" +[64]:http://ubuntupocketguide.com/ +[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute.png "GNU/LINUX system administration free book" +[66]:https://web.archive.org/web/20160204213406/http://rute.2038bug.com/rute.html.gz +[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming.png "Download Advanced Linux Programming PDF version" +[68]:https://github.com/MentorEmbedded/advancedlinuxprogramming +[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic.png "Download Linux Professional Institute Certification PDF Book" +[70]:http://academy.delmar.edu/Courses/ITSC1358/eBooks/LPI-101.LinuxTrainingCourseNotes.pdf +[71]://www.cyberciti.biz/faq/top5-linux-video-editing-system-software/ +[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png "Download manuals about free and open source software" +[73]:https://flossmanuals.net/ +[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter.png "New to Linux? 
Start Linux starter book [ PDF version ]" +[75]:http://www.tuxradar.com/linuxstarterpack +[76]:https://linux.com +[77]:https://lwn.net/ +[78]:http://hints.macworld.com/ +[79]:https://developer.apple.com/library/mac/navigation/ +[80]:https://developer.apple.com/library/mac/#documentation/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html +[81]:https://support.apple.com/kb/index?page=search&locale=en_US&q= +[82]:https://www.netbsd.org/docs/ +[83]:https://www.flickr.com/photos/9479603@N02/3311745151/in/set-72157614479572582/ +[84]:https://twitter.com/nixcraft +[85]:https://facebook.com/nixcraft +[86]:https://plus.google.com/+CybercitiBiz +[87]:https://wiki.centos.org/ +[88]:https://www.centos.org/forums/ +[90]: https://www.freebsd.org/docs.html +[91]: https://forums.freebsd.org/ diff --git a/published/201812/20171012 7 Best eBook Readers for Linux.md b/published/201812/20171012 7 Best eBook Readers for Linux.md new file mode 100644 index 0000000000..346eed6bb6 --- /dev/null +++ b/published/201812/20171012 7 Best eBook Readers for Linux.md @@ -0,0 +1,188 @@ +7 个最佳 Linux 电子书阅读器 +====== + +**摘要:** 本文中我们涉及一些 Linux 最佳电子书阅读器。这些应用提供更佳的阅读体验甚至可以管理你的电子书。 + +![最佳 Linux 电子书阅读器][1] + +最近,随着人们发现在手持设备、Kindle 或者 PC 上阅读更加舒适,对电子图书的需求有所增加。至于 Linux 用户,也有各种电子书应用满足你阅读和整理电子书的需求。 + +在本文中,我们选出了七个最佳 Linux 电子书阅读器。这些电子书阅读器最适合 pdf、epub 和其他电子书格式。 + +我提供的是 Ubuntu 安装说明,因为我现在使用它。如果你使用的是[非 Ubuntu 发行版][2],你能在你的发行版软件仓库中找到大多数这些电子书应用。 + +### 1. Calibre + +[Calibre][3] 是 Linux 最受欢迎的电子书应用。老实说,这不仅仅是一个简单的电子书阅读器。它是一个完整的电子书解决方案。你甚至能[通过 Calibre 创建专业的电子书][4]。 + +通过强大的电子书管理和易用的界面,它提供了创建和编辑电子书的功能。Calibre 支持多种格式和与其它电子书阅读器同步。它也可以让你轻松转换一种电子书格式到另一种。 + +Calibre 最大的缺点是,资源消耗太多,因此作为一个独立的电子阅读器来说是一个艰难的选择。 + +![Calibre][5] + +#### 特性 + + * 管理电子书:Calibre 通过管理元数据来排序和分组电子书。你能从各种来源下载一本电子书的元数据或创建和编辑现有的字段。 + * 支持所有主流电子书格式:Calibre 支持所有主流电子书格式并兼容多种电子阅读器。 + * 文件转换:在转换时,你能通过改变电子书风格,创建内容表和调整边距的选项来转换任何一种电子书格式到另一种。你也能转换个人文档为电子书。 + * 从 web 下载杂志期刊:Calibre 能从各种新闻源或者通过 RSS 订阅源传递故事。 + * 分享和备份你的电子图书馆:它提供了一个选项,可以托管你电子书集合到它的服务端,从而你能与好友共享或用任何设备从任何地方访问。备份和导入/导出特性可以确保你的收藏安全和方便携带。 + +#### 安装 + +你能在主流 Linux 发行版的软件库中找到它。对于 Ubuntu,在软件中心搜索它或者使用下面的命令: + +``` +sudo apt-get install calibre +``` + +### 2. FBReader + +![FBReader: Linux 电子书阅读器][6] + +[FBReader][7] 是一个开源的轻量级多平台电子书阅读器,它支持多种格式,比如 ePub、fb2、mobi、rtf、html 等。它包括了一些可以访问的流行网络电子图书馆,那里你能免费或付费下载电子书。 + +#### 特性 + + * 支持多种文件格式和设备比如 Android、iOS、Windows、Mac 和更多。 + * 同步书集、阅读位置和书签。 + * 在线管理你图书馆,可以从你的 Linux 桌面添加任何书到所有设备。 + * 支持 Web 浏览器访问你的书集。 + * 支持将书籍存储在 Google Drive ,可以通过作者,系列或其他属性整理书籍。 + +#### 安装 + +你能从官方库或者在终端中输入以下命令安装 FBReader 电子阅读器。 + +``` +sudo apt-get install fbreader +``` + +或者你能从[这里][8]抓取一个以 .deb 包,并在你的基于 Debian 发行版的系统上安装它。 + +### 3. Okular + +[Okular][9] 是另一个开源的基于 KDE 开发的跨平台文档查看器,它已经作为 KDE 应用发布的一部分了。 + +![Okular][10] + +#### 特性 + + * Okular 支持多种文档格式像 PDF、Postscript、DjVu、CHM、XPS、ePub 和其他。 + * 支持在 PDF 文档中评论、高亮和绘制不同的形状等。 + * 无需修改原始 PDF 文件,分别保存上述这些更改。 + * 电子书中的文本能被提取到一个文本文件,并且有个名为 Jovie 的内置文本阅读服务。 + +备注:查看这个应用的时候,我发现这个应用在 Ubuntu 和它的衍生系统中不支持 ePub 文件格式。其他发行版用户仍然可以发挥它全部的潜力。 + +#### 安装 + +Ubuntu 用户可以在终端中键入下面的命令来安装它: + +``` +sudo apt-get install okular +``` + +### 4. Lucidor + +Lucidor 是一个易用的、支持 epub 文件格式和在 OPDS 格式中编目的电子阅读器。它也具有在本地书架里组织电子书集、从互联网搜索和下载,和将 Web 订阅和网页转换成电子书的功能。 + +Lucidor 是 XULRunner 应用程序,它向您展示了具有类似火狐的选项卡式布局,和存储数据和配置时的行为。它是这个列表中最简单的电子阅读器,包括诸如文本说明和滚动选项之类的配置。 + +![lucidor][11] + +你可以通过选择单词并右击“查找单词”来查找该单词在 Wiktionary.org 的定义。它也包含 web 订阅或 web 页面作为电子书的选项。 + +你能从[这里][12]下载和安装 deb 或者 RPM 包。 + +### 5. 
Bookworm + +![Bookworm Linux 电子阅读器][13] + +Bookworm 是另一个支持多种文件格式诸如 epub、pdf、mobi、cbr 和 cbz 的自由开源的电子阅读器。我写了一篇关于 Bookworm 应用程序的特性和安装的专题文章,到这里阅读:[Bookworm:一个简单而强大的 Linux 电子阅读器][14] + +#### 安装 + +``` +sudo apt-add-repository ppa:bookworm-team/bookworm +sudo apt-get update +sudo apt-get install bookworm +``` + +### 6. Easy Ebook Viewer + +[Easy Ebook Viewer][15] 是又一个用于读取 ePub 文件的很棒的 GTK Python 应用。具有基本章节导航、从上次阅读位置继续、从其他电子书文件格式导入、章节跳转等功能,Easy Ebook Viewer 是一个简单而简约的 ePub 阅读器. + +![Easy-Ebook-Viewer][16] + +这个应用仍然处于初始阶段,只支持 ePub 文件。 + +#### 安装 + +你可以从 [GitHub][17] 下载源代码,并自己编译它及依赖项来安装 Easy Ebook Viewer。或者,以下终端命令将执行完全相同的工作。 + +``` +sudo apt install git gir1.2-webkit-3.0 libwebkitgtk-3.0-0 gir1.2-gtk-3.0 python3-gi +git clone https://github.com/michaldaniel/Ebook-Viewer.git +cd Ebook-Viewer/ +sudo make install +``` + +成功完成上述步骤后,你可以从 Dash 启动它。 + +### 7. Buka + +Buka 主要是一个具有简单而清爽的用户界面的电子书管理器。它目前支持 PDF 格式,旨在帮助用户更加关注内容。拥有 PDF 阅读器的所有基本特性,Buka 允许你通过箭头键导航,具有缩放选项,并且能并排查看两页。 + +你可以创建单独的 PDF 文件列表并轻松地在它们之间切换。Buka 也提供了一个内置翻译工具,但是你需要有效的互联网连接来使用这个特性。 + +![Buka][19] + +#### 安装 + +你能从[官方下载页面][20]下载一个 AppImage。如果你不知道如何做,请阅读[如何在 Linux 下使用 AppImage][21]。或者,你可以通过命令行安装它: + +``` +sudo snap install buka +``` + +### 结束语 + +就我个人而言,我发现 Calibre 最适合我的需要。当然,Bookworm 看起来很有前途,这几天我经常使用它。不过,电子书应用的选择完全取决于你的喜好。 + +你使用哪个电子书应用呢?在下面的评论中让我们知道。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-ebook-readers-linux/ + +作者:[Ambarish Kumar][a] +译者:[zjon](https://github.com/zjon) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/ambarish/ +[1]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/best-ebook-readers-linux.png +[2]:https://itsfoss.com/non-ubuntu-beginner-linux/ +[3]:https://www.calibre-ebook.com +[4]:https://itsfoss.com/create-ebook-calibre-linux/ +[5]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Calibre-800x603.jpeg +[6]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/fbreader-800x624.jpeg +[7]:https://fbreader.org +[8]:https://fbreader.org/content/fbreader-beta-linux-desktop +[9]:https://okular.kde.org/ +[10]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Okular-800x435.jpg +[11]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/lucidor-2.png +[12]:http://lucidor.org/lucidor/download.php +[13]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/08/bookworm-ebook-reader-linux-800x450.jpeg +[14]:https://itsfoss.com/bookworm-ebook-reader-linux/ +[15]:https://github.com/michaldaniel/Ebook-Viewer +[16]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Easy-Ebook-Viewer.jpg +[17]:https://github.com/michaldaniel/Ebook-Viewer.git +[18]:https://github.com/oguzhaninan/Buka +[19]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Buka2-800x555.png +[20]:https://github.com/oguzhaninan/Buka/releases +[21]:https://itsfoss.com/use-appimage-linux/ diff --git a/published/201812/20171108 Continuous infrastructure- The other CI.md b/published/201812/20171108 Continuous infrastructure- The other CI.md new file mode 100644 index 0000000000..67a35c7c3d --- /dev/null +++ b/published/201812/20171108 Continuous infrastructure- The other CI.md @@ -0,0 +1,108 @@ +持续基础设施:另一个 CI +====== + +> 想要提升你的 DevOps 效率吗?将基础设施当成你的 CI 流程中的重要的一环。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q) + +持续交付(CD)和持续集成(CI)是 DevOps 的两个众所周知的方面。但在 CI 
大肆流行的今天却忽略了另一个关键性的 "I":基础设施infrastructure。 + +曾经有一段时间 “基础设施”就意味着无头headless的黑盒子、庞大的服务器,和高耸的机架 —— 更不用说漫长的采购流程和对盈余负载的错误估计。后来到了虚拟机时代,把基础设施处理得很好,虚拟化 —— 以前的世界从未有过这样。我们不再需要管理实体的服务器。仅仅是简单的点击,我们就可以创建和销毁、开始和停止、升级和降级我们的服务器。 + +有一个关于银行的流行故事:它们实现了数字化,并且引入了在线表格,用户需要手动填写表格、打印,然后邮寄回银行(LCTT 译注:我真的遇到过有人问我这样的需求怎么办)。这就是我们今天基础设施遇到的情况:使用新技术来做和以前一样的事情。 + +在这篇文章中,我们会看到在基础设施管理方面的进步,将基础设施视为一个版本化的组件并试着探索不可变服务器immutable server的概念。在后面的文章中,我们将了解如何使用开源工具来实现持续的基础设施。 + +![continuous infrastructure pipeline][2] + +*实践中的持续集成流程* + +这是我们熟悉的 CI,尽早发布、经常发布的循环流程。这个流程缺少一个关键的组件:基础设施。 + +突击小测试: + +* 你怎样创建和升级你的基础设施? +* 你怎样控制和追溯基础设施的改变? +* 你的基础设施是如何与你的业务进行匹配的? +* 你是如何确保在正确的基础设施配置上进行测试的? + +要回答这些问题,就要了解持续基础设施continuous infrastructure。把 CI 构建流程分为代码持续集成continuous integration code(CIc)和基础设施持续集成continuous integration infrastructure(CIi)来并行开发和构建代码和基础设施,再将两者融合到一起进行测试。把基础设施构建视为 CI 流程中的重要的一环。 + +![pipeline with infrastructure][4] + +*包含持续基础设施的 CI 流程* + +关于 CIi 定义的几个方面: + +1. 代码 + + 通过代码来创建基础设施架构,而不是通过安装。基础设施如代码Infrastructure as code(IaC)是使用配置脚本创建基础设施的现代最流行的方法。这些脚本遵循典型的编码和单元测试周期(请参阅下面关于 Terraform 脚本的示例)。 +2. 版本 + + IaC 组件在源码仓库中进行版本管理。这让基础设施的拥有了版本控制的所有好处:一致性,可追溯性,分支和标记。 +3. 管理 + + 通过编码和版本化的基础设施管理,你可以使用你所熟悉的测试和发布流程来管理基础设施的开发。 + +CIi 提供了下面的这些优势: + +1. 一致性Consistency + + 版本化和标记化的基础设施意味着你可以清楚的知道你的系统使用了哪些组件和配置。这建立了一个非常好的 DevOps 实践,用来鉴别和管理基础设施的一致性。 +2. 可重现性Reproducibility + + 通过基础设施的标记和基线,重建基础设施变得非常容易。想想你是否经常听到这个:“但是它在我的机器上可以运行!”现在,你可以在本地的测试平台中快速重现类似生产环境,从而将环境像变量一样在你的调试过程中删除。 +3. 可追溯性Traceability + + 你是否还记得曾经有过多少次寻找到底是谁更改了文件夹权限的经历,或者是谁升级了 `ssh` 包?代码化的、版本化的,发布的基础设施消除了临时性变更,为基础设施的管理带来了可追踪性和可预测性。 +4. 自动化Automation + + 借助脚本化的基础架构,自动化是下一个合乎逻辑的步骤。自动化允许你按需创建基础设施,并在使用完成后销毁它,所以你可以将更多宝贵的时间和精力用在更重要的任务上。 +5. 不变性Immutability + + CIi 带来了不可变基础设施等创新。你可以创建一个新的基础设施组件而不是通过升级(请参阅下面有关不可变设施的说明)。 + +持续基础设施是从运行基础环境到运行基础组件的进化。像处理代码一样,通过证实的 DevOps 流程来完成。对传统的 CI 的重新定义包含了缺少的那个 “i”,从而形成了连贯的 CD 。 + +**(CIc + CIi) = CI -> CD** + +### 基础设施如代码 (IaC) + +CIi 流程的一个关键推动因素是基础设施如代码infrastructure as code(IaC)。IaC 是一种使用配置文件进行基础设施创建和升级的机制。这些配置文件像其他的代码一样进行开发,并且使用版本管理系统进行管理。这些文件遵循一般的代码开发流程:单元测试、提交、构建和发布。IaC 流程拥有版本控制带给基础设施开发的所有好处,如标记、版本一致性,和修改可追溯。 + +这有一个简单的 Terraform 脚本用来在 AWS 上创建一个双层基础设施的简单示例,包括虚拟私有云(VPC)、弹性负载(ELB),安全组和一个 NGINX 服务器。[Terraform][5] 是一个通过脚本创建和更改基础设施架构和开源工具。 + +![terraform script][7] + +*Terraform 脚本创建双层架构设施的简单示例* + +完整的脚本请参见 [GitHub][8]。 + +### 不可变基础设施 + +你有几个正在运行的虚拟机,需要更新安全补丁。一个常见的做法是推送一个远程脚本单独更新每个系统。 + +要是不更新旧系统,如何才能直接丢弃它们并部署安装了新安全补丁的新系统呢?这就是不可变基础设施immutable infrastructure。因为之前的基础设施是版本化的、标签化的,所以安装补丁就只是更新该脚本并将其推送到发布流程而已。 + +现在你知道为什么要说基础设施在 CI 流程中特别重要了吗? 
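+
+(LCTT 译注:为帮助理解上文的“版本化的基础设施”与“不可变基础设施”,这里补充一个极简的 Terraform 示意片段,并非原文内容;其中的资源名、AMI ID、实例类型等均为假设值。思路是:把 AMI 版本等参数写进代码并纳入版本控制,升级时创建新实例替换旧实例,而不是在旧实例上打补丁。)
+
+```
+# 版本化:基础设施参数作为代码提交到源码仓库,修改可追溯
+variable "nginx_ami" {
+  # 假设的 AMI ID,升级时只改这里,然后走正常的发布流程
+  default = "ami-0123456789abcdef0"
+}
+
+resource "aws_instance" "web" {
+  ami           = var.nginx_ami
+  instance_type = "t2.micro"
+
+  # 不可变:先创建替换用的新实例,再销毁旧实例,而不是原地升级
+  lifecycle {
+    create_before_destroy = true
+  }
+
+  tags = {
+    Name = "web-v1.2.0" # 用标签标记基础设施版本,便于追溯
+  }
+}
+```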
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/continuous-infrastructure-other-ci + +作者:[Girish Managoli][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/gammay +[1]:/file/376916 +[2]:https://opensource.com/sites/default/files/images/life-uploads/figure1.jpg (continuous infrastructure pipeline in use) +[3]:/file/376921 +[4]:https://opensource.com/sites/default/files/images/life-uploads/figure2.jpg (CI pipeline with infrastructure) +[5]:https://github.com/hashicorp/terraform +[6]:/file/376926 +[7]:https://opensource.com/sites/default/files/images/life-uploads/figure3_0.png (sample terraform script) +[8]:https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/two-tier diff --git a/published/201812/20171111 A CEOs Guide to Emacs.md b/published/201812/20171111 A CEOs Guide to Emacs.md new file mode 100644 index 0000000000..4a92e5710b --- /dev/null +++ b/published/201812/20171111 A CEOs Guide to Emacs.md @@ -0,0 +1,286 @@ +CEO 的 Emacs 秘籍 +=========== + +几年前,不,是几十年前,我就在用 Emacs。不论是码代码、编写文档,还是管理邮件和日程,我都用这个编辑器,或者是说操作系统,而且我还乐此不疲。许多年过去了,我也转向了其他更新、更好的工具。结果,就连最基本的文件浏览,我都已经忘了在不用鼠标的情况下该怎么操作。大约三个月前,我意识到我在应用程序和计算机之间切换上耗费了大量的时间,于是我决定再次使用 Emacs。这是个很正确的决定,原因有以下几个。其中包括用 `.emacs` 和 Dropbox 来搭建一个良好的、可移植的环境的一些技巧。 + +对于那些还没用过 Emacs 的人来说,Emacs 会让你爱恨交加。它有点像一个房子大小的鲁布·戈德堡机械Rube Goldberg machine,乍一看,它具备烤面包机的所有功能。这听起来不像是一种认可,但关键词是“乍一看”。一旦你了解了 Emacs,你就会意识到它其实是一台可以当发动机用的热核烤面包机……好吧,只是指文本处理的所有事情。当考虑到你计算机的使用周期在很大程度上都是与文本有关时,这是一个相当大胆的声明。大胆,但却是真的。 + +也许对我来说更重要的是,Emacs 是我曾经使用过的一个应用,并让我觉得我真正的拥有它,而不是把我塑造成一个匿名的“用户”,就好像位于 [Soma][30](LCTT 译注:旧金山的一个街区)或雷蒙德(LCTT 译注:微软总部所在地)附近某个高档办公室的产品营销部门把钱作为明确的目标一样。现代生产力和创作应用程序(如 Pages 或 IDE)就像碳纤维赛车,它们装备得很好,也很齐全。而 Emacs 就像一盒经典的 [Campagnolo][31] (LCTT 译注:世界上最好的三个公路自行车套件系统品牌之一)零件和一个漂亮的自行车牵引式钢框架,但缺少曲柄臂和刹车杆,你必须在网上某个小众文化中找到它们。前者更快而且很完整,后者是无尽的快乐或烦恼的源泉,当然这取决于你自己,而且这种快乐或烦恼会伴随到你死。我就是那种在找到一堆老古董或用 `Emacs Lisp` 配置编辑器时会感到高兴的人,具体情况因人而异。 + +![1933 steel bicycle](https://www.fugue.co/hubfs/Imported_Blog_Media/bicycle-1.jpg) + +*一辆我还在骑的 1933 年产的钢制自行车。你可以看看框架管差别: [https://www.youtube.com/watch?v=khJQgRLKMU0][6]* + +这可能给人一种 Emacs 已经过气或过时的印象。然而并不是,Emacs 是强大和永恒的,只要你耐心地去理解它的一些规则。Emacs 的规则很另类,也很奇怪,但其中的逻辑却引人注目,且魅力十足。对于我来说, Emacs 更像是未来而不是过去。就像牵引式钢框架在未来几十年里将会变得好用和舒适,而神奇的碳纤维自行车将会被扔进垃圾场,在撞击中粉碎一样,Emacs 也将会作为一种在最新的流行应用早已被遗忘的时候的好用的工具继续存在这里。 + +使用 Lisp 代码来构建个人工作环境,并将这个恰到好处的环境移植到任何计算机,如果这种想法打动了你,那么你将会爱上 Emacs。如果你喜欢很潮、很炫的,又不想投入太多时间和精力的情况下就能直接工作的话,那么 Emacs 可能不适合你。我已经不再写代码了(除了 Ludwig 和 Emacs Lisp),但是 Fugue 公司的很多工程师都使用 Emacs 来提高码代码的效率。我公司有 30% 的工程师用 Emacs,40% 用 IDE 和 30% 的用 vim。但这篇文章是写给 CEO 和其他[精英][32]Pointy-Haired Bosses(PHB[^1] )(以及对 Emacs 感兴趣的人)的,所以我将解释或者说辩解我为什么喜欢它以及我如何使用它。同时我也希望我能介绍清楚,从而让你有个良好的体验,而不是花上几个小时去 Google。 + +### 恒久优势 + +使用 Emacs 带来的长期优势是让生活更轻松。与最后的收获相比,最开始的付出完全值得。想想这些: + +#### 简单高效 + +Org 模式本身就值得花时间,但如果你像我一样,你通常要处理十几份左右的文件 —— 从博客帖子到会议事务清单,再到员工评估。在现代计算世界中,这通常意味着要使用多个应用程序,所有这些程序都有不同的用户界面、保存方式、排序和搜索方式。结果就是你需要不断转换思维环境,记住各种细节。我讨厌在程序间切换,这是在强人所难,因为这是个不完整界面模型[^2] ,并且我讨厌记住本该由计算机记住的东西。在单个环境下,Emacs 对 PHB 甚至比对于程序员更高效,因为程序员更多时候只需要专注于一个程序。转换思维环境的成本比表面上的要更高。操作系统和应用程序厂商已经构建了各种界面,以分散我们对这一现实的注意力。如果你是技术人员,通过快捷键(`M-:`)来访问功能强大的[语言解释器][33]会方便的多[^3] 。 + +许多应用程序可以全天全屏地用于编辑文本。但Emacs 是唯一的,因为它既是编辑器也是 Emacs Lisp 解释器。从本质上来说,你工作时只要用电脑上的一两个键就能完成。如果你略懂编程的话,就会发现这代表着你可以在 Emacs 中做 _任何事情_。一旦你在内存中存储了这些指令,你的电脑就可以在工作时几乎实时地为你提供高效的运转。你不会想用 Emacs Lisp 
来重建 Excel,因为只要用简单的一两行代码就能实现 Excel 中大多数的功能。比如说我要处理数字,我更有可能转到 scratch 缓冲区,编写一些代码,而不是打开电子表格。即便是要写一封比较大的邮件时,我通常也会先在 Emacs 中写完,然后再复制粘贴到邮件客户端中。当你可以流畅的书写时,为什么要去切换呢?你可以先从一两个简单的算术开始,随着时间的推移,你可以很容易的在 Emacs 中添加你所需要处理的计算。这在应用程序中可能是独一无二的,同时还提供了让为其他的人创造的丰富特性。还记得艾萨克·阿西莫夫书中那些神奇的终端吗? Emacs 是我所遇到的最接近它们的东西[^4] 。我决定不再用什么应用程序来做这个或那个。相反,我只是工作。拥有一个伟大的工具并致力于此,这才是真正的动力和效率。 + +#### 静中造物 + +拥有所发现的最好的文本编辑功能的最终结果是什么?有一群人在做各种各样有用的补充吗?发挥了 Lisp 键盘的全部威力了吗?我用 Emacs 来完成所有的创作性工作,音乐和图片除外。 + +我办公桌上有两个显示器。其中一块竖屏是将 Emacs 全天全屏显示,另一个显示浏览器,用来搜索和阅读,我通常也会打开一个终端。我将日历、邮件等放在 OS X 的另一个桌面上,当我使用 Emacs 时,这个桌面会隐藏起来,同时我也会关掉所有通知。这样就能让我专注于我手头上在做的事了。我发现,越是先进的 UI 应用程序,消除干扰越是不可能,因为这些应用程序致力于提供帮助和易用性。我不需要经常被提醒该如何操作,我已经做了成千上万次了,我真正需要的是一张干净整洁的白纸用来思考。也许因为年龄和自己的“恶习”,我不太喜欢处在嘈杂的环境中,但我认为这值得一试。看看在你电脑环境中有一些真正的宁静是怎样的。当然,现在很多应用程序都有隐藏界面的模式,谢天谢地,苹果和微软现在都有了真正意义上的全屏模式。但是,没有并没有应用程序可以强大到足以“处理”大多数事务。除非你整天写代码,或者像出书一样,处理很长的文档,否则你仍然会面临其他应用程序的干扰。而且,大多数现代应用程序似乎同时显得自视甚高,缺乏功能和可用性[^5] 。比起 office 桌面版,我更讨厌它的在线版。 + +![](https://www.fugue.co/hubfs/Imported_Blog_Media/desktop-1.jpg) + +*我的桌面布局, Emacs 在左边* + +但是沟通呢?创造和沟通之间的差别很大。当我将这两件事在不同时间段处理时,我的效率会更高。我们 Fugue 公司使用 Slack,痛并快乐着。我把 Slack 和我的日历、电子邮件放在一个即时通讯的桌面上,这样,当我正在做事时,我就能够忽略所有的聊天信息了。虽然只要一个 Slackstorm 或一封风投或董事会董事的电子邮件,就能让我立刻丢掉手头工作。但是,大多数事情通常可以等上一两个小时。 + +#### 普适恒久 + +第三个原因是,我发现 Emacs 比其它的环境更有优势的是,你可以很容易地用它来处理事务。我的意思是,你所需要的只是通过类似于 Dropbox 的网站同步一两个目录,而不是让大量的应用程序以它们自己的方式进行交互和同步。然后,你可以在任何你已经精心打造了适合你的目的的套件的环境中工作了。我在 OS X、Windows,或有时在 Linux 都是这样做的。它非常简单可靠。这种功能很有用,以至于我害怕处理 Pages、Google Docs、Office 或其他类型的文件和应用程序,这些文件和应用程序会迫使我回到文件系统或云中的某个地方去寻找。 + +限制在计算机上永久存储的因素是文件格式。假设人类已经解决了存储问题[^6] ,随着时间的推移,我们面临的问题是我们能否够继续访问我们创建的信息。文本文件是保存时间最久的格式。你可以用 Emacs 轻松地打开 1970 年的文本文件。然而对于 Office 应用程序却并非如此。同时文本文件要比 Office 应用程序数据文件小得多,也要好的多。作为一个数码背包迷,作为一个在脑子里一闪而过就会做很多小笔记的人,拥有一个简单、轻便、永久、随时可用的东西对我来说很重要。 + +如果你准备尝试 Emacs,请继续读下去!下面的部分不是完整的教程,但是在读完后,就可以动手操作了。 + +### 驾驭之道 —— 专业定制 + +所有这些强大、精神上的平静和安宁的代价是,Emacs 有一个陡峭的学习曲线,它的一切都与你以前所习惯的不同。一开始,这会让你觉得你是在浪费时间在一个过时和奇怪的应用程序上,就好像穿越到过去。这有点像你只开过车,却要你去学骑自行车[^7] 。 + +#### 类型抉择 + +我用的是来自 GNU 的 OS X 和 Windows 的通用版本的 Emacs。你可以在 [http://emacsformacos.com/][35] 获取 OS X 版本,在 [http://www.gnu.org/software/emacs/][37] 获取 Windows 版本。市面上还有很多其他版本,尤其是 Mac 版本,但我发现,要做一些功能强大的东西(涉及到 Lisp 和许多模式),学习曲线要比实际操作低得多。下载,然后我们就可以开始了[^8] ! 
+ +#### 驾驭之始 + +在本文中,我将使用 Emacs 的按键和组合键约定。`C` 表示 `Control` 键,`M` 表示 `meta`(通常是 `Alt` 或 `Option` 键),以及用于组合键的连字符。因此,`C-h t` 表示同时按下 `Control` 和 `h` 键,然后释放,再按下 `t`。这个组合快捷键会指向一个教程,这是你首先要做的一件事。 + +不要使用方向键或鼠标。它们可以工作,但是你应该给自己一周的时间来使用 Emacs 教程中的原生的导航命令。一旦你这些命令变为了肌肉记忆,你可能就会乐在其中,无论到哪里,你都会非常想念它们。这个 Emacs 教程在介绍它们方面做得很好,但是我将进行总结,所以你不需要阅读全部内容。最无聊的是,不用方向键,用 `C-b` 向前移动,用 `C-f` 向后移动,上一行用 `C-p`,下一行用 `C-n`。你可能会想:“我用方向键就很好,为什么还要这样做?” 有几个原因。首先,你不需要从主键盘区将你的手移开。第二,使用 `Alt`(或用 Emacs 的说法 `Meta`)键来向前或向后在单词间移动。显而易见这样更方便。第三,如果想重复某个命令,可以在命令前面加上一个数字。在编辑文档时,我经常使用这种方法,通过估计向后移动多少个单词或向上或向下移动多少行,然后按下 `C-9 C-p` 或 `M-5 M-b` 之类的快捷键。其它真正重要的导航命令基于开头用 `a` 和结尾用 `e`。在行中使用 `C-a|e`,在句中使用 `M-a|e`。为了让句中的命令正常工作,需要在句号后增加两个空格,这同时提供了一个有用的特性,并消除了脑中一个过时的[观点][38]。如果需要将文档导出到单个空间[发布环境][39],可以编写一个宏来执行此操作。 + +Emacs 所附带的教程很值得去看。对于真正缺乏耐心的人,我将介绍一些重要的命令,但那个教程非常有用。记住:用 `C-h t` 进入教程。 + +#### 驾驭之复制粘贴 + +你可以把 Emacs 设为 CUA 模式,这将会以熟悉的方式工作来操作复制粘贴,但是原生的 Emacs 方法更好,而且你一旦学会了它,就很容易。你可以使用 `Shift` 和导航命令来标记区域(如同选择)。所以 `C-F` 是选中光标前的一个字符,等等。亦可以用 `M-w` 来复制,用 `C-w` 剪切,然后用 `C-y` 粘贴。这些实际上叫做删除killing召回yanking,但它非常类似于剪切和粘贴。在删除中还有一些小技巧,但是现在,你只需要关注剪切、复制和粘贴。如果你开始尝试了,那么 `C-x u` 是撤销。 + +#### 驾驭之 Ido 模式 + +相信我,Ido 会让文件的工作变得很简单。通常,你在 Emacs 中处理文件不需要使用一个单独的访达或文件资源管理器的窗口。相反,你可以用编辑器的命令来创建、打开和保存文件。如果没有 Ido 的话,这将有点麻烦,所以我建议你在学习其他之前安装好它。 Ido 是 Emacs 的 22 版时开始出现的,但是需要对你的 `.emacs` 文件做一些调整,来确保它一直开启着。这是个配置环境的好理由。 + +Emacs 中的大多数功能都表现在模式上。要安装指定的模式,需要做两件事。嗯,一开始你需要做一些额外的事情,但这些只需要做一次,然后再做这两件事。那么,这件额外的事情是你需要一个单独的位置来放置所有 Emacs Lisp 文件,并且你需要告诉 Emacs 这个位置在哪。我建议你在 Dropbox 上创建一个单独的目录,那是你 Emacs 主目录。在这里,你需要创建一个 `.emacs` 文件和 `.emacs.d` 目录。在 `.emacs.d` 目录下,创建一个 `lisp` 的目录。就像这样: + +``` +home +| ++.emacs +| +-.emacs.d + | + -lisp +``` + +你可以将 `.el` 文件,比如说模式文件,放到 `home/.emacs.d/lisp` 目录下,然后在你的 `.emacs` 文件中添加以下代码来指明该路径: + +``` +(add-to-list 'load-path "~/.emacs.d/lisp/") +``` + +Ido 模式是 Emacs 自带的,所以你不需要在你的 `lisp` 目录中放这个 `.el` 文件,但你仍然需要添加上面代码,因为下面的介绍会使用到它. 
+ +#### 驾驭之符号链接 + +等等,这里写的 `.emacs` 和 `.emacs.d` 都是存放在你的主目录下,但我们把它们放到了 Dropbox 的某些愚蠢的文件夹!对,这就让你的环境在任何地方都很容易使用。把所有东西都保存在 Dropbox 上,并做符号链接到 `~` 下的 `.emacs` 、`.emacs.d` 和你的主要存放文档的目录。在 OS X 上,使用 `ln -s` 命令非常简单,但在 Windows 上却很麻烦。幸运的是,Emacs 提供了一种简单的方法来替代 Windows 上的符号链接,Windows 的 `HOME` 环境变量。转到 Windows 的环境变量(Windows 10,你可以按 Windows 键然后输入 “环境变量” 来搜索,这是 Windows 10 最好的地方了),在你的帐户下创建一个指向你在 Dropbox 中 Emacs 的文件夹的 `HOME` 环境变量。如果你想方便地浏览 Dropbox 之外的本地文件,你可能想在你的实际主目录下建立一个到 Dropbox 下 Emacs 主目录的符号链接。 + +至此,你已经完成了在任意机器上指向你的 Emacs 配置和文件所需的技巧。如果你买了一台新电脑,或者用别人的电脑一小时或一天,你就可以得到你的整个工作环境。第一次操作起来似乎有点难,但是一旦你知道你在做什么,就(最多)只需要 10 分钟。 + +但我们现在是在配置 Ido …… + +按下 `C-x` `C-f` 然后输入 `~/.emacs` 和两次回车来创建 `.emacs` 文件,将下面几行添加进去: + +``` +;; set up ido mode +(require `ido) +(setq ido-enable-flex-matching t) +(setq ido-everywhere t) +(ido-mode 1) +``` + +在 `.emacs` 窗口开着的时候,执行 `M-x evaluate-buffer` 命令。如果某处弄错了的话,将得到一个错误,或者你将得到 Ido。Ido 改变了在 minibuffer 中操作文件操方式。关于这个有一篇比较好的文档,但是我也会指出一些技巧。有效地使用 `~/`;你可以在 minibuffer 的任何地方输入 `~/`,它就会跳转到主目录。这就意味着,你应该让你的大部分东西就近的放在主目录下。我用 `~/org` 目录来保存所有非代码的东西,用 `~/code` 保存代码。一旦你进入到正确的目录,通常会拥有一组具有不同扩展名的文件,特别是当你使用 Org 模式并从中发布的话。你可以输入 `.` 和想要的扩展名,无论你的在文件名的什么位置,Ido 都会将选择限制在具有该扩展名的文件中。例如,我在 Org 模式下写这篇博客,所以该文件是: + +``` +~/org/blog/emacs.org +``` + +我偶尔也会用 Org 模式发布成 HTML 格式,所以我将在同一目录下得到 `emacs.html` 文件。当我想打开该 Org 文件时,我会输入: + +``` +C-x C-f ~/o[RET]/bl[RET].or[RET] +``` + +其中 `[RET]` 是我使用 `Ido` 模式的自动补全而按下的回车键。所以,这只需要按 12 个键,如果你习惯了的话, 这将比打开访达或文件资源管理器再用鼠标点要节省 _很_ 多时间。 Ido 模式很有用,而这只是操作 Emacs 的一种实用模式而已。下面让我们去探索一些其它对完成工作很有帮助的模式吧。 + +#### 驾驭之字体风格 + +我推荐在 Emacs 中使用漂亮的字体族。它们可以使用不同的括号、0 和其他字符进行自定义。你可以在字体文件本身中构建额外的行间距。我推荐 1.5 倍的行间距,并在代码和数据中使用不等宽字体。写作中我用 `Serif` 字体,它有一种紧凑但时髦的感觉。你可以在 [http://input.fontbureau.com/][40] 上找到它们,在那里你可以根据自己的喜好进行定制。你可以使用 Emacs 中的菜单手动设置字体,但这会将代码保存到你的 `.emacs` 文件中,如果你使用多个设备,你可能需要一些不同的设置。我将我的 `.emacs` 设置为根据使用的机器的名称来相应配置屏幕。代码如下: + +``` +;; set up fonts for different OSes. OSX toggles to full screen. 
+(setq myfont "InputSerif") +(cond +((string-equal system-name "Sampo.local") + (set-face-attribute 'default nil :font myfont :height 144) + (toggle-frame-fullscreen)) +((string-equal system-name "Morpheus.local") + (set-face-attribute 'default nil :font myfont :height 144)) +((string-equal system-name "ILMARINEN") + (set-face-attribute 'default nil :font myfont :height 106)) +((string-equal system-name "UKKO") + (set-face-attribute 'default nil :font myfont :height 104))) +``` + +你应该将 Emacs 中的 `system-name` 的值替换成你通过 `(system-name)` 得到的值。注意,在 Sampo (我的 MacBook)上,我还将 Emacs 设置为全屏。我也想在 Windows 实现这个功能,但是 Windows 和 Emacs 好像互相嫌弃对方,当我尝试配置时,它总是不稳定。相反,我只能在启动后手动全屏。 + +我还建议去掉 Emacs 中的上世纪 90 年代出现的难看工具栏,当时比较流行在应用程序中使用工具栏。我还去掉了一些其它的“电镀层”,这样我就有了一个简单、高效的界面。把这些加到你的 `.emacs` 的文件中来去掉工具栏和滚动条,但要保留菜单(在 OS X 上,它将被隐藏,除非你将鼠标到屏幕顶部): + +``` +(if (fboundp 'scroll-bar-mode) (scroll-bar-mode -1)) +(if (fboundp 'tool-bar-mode) (tool-bar-mode -1)) +(if (fboundp 'menu-bar-mode) (menu-bar-mode 1)) +``` + +#### 驾驭之 Org 模式 + +我基本上是在 Org 模式下处理工作的。它是我创作文档、记笔记、列任务清单以及 90% 其他工作的首选环境。Org 模式是笔记和待办事项列表的组合工具,最初是由一个在会议中使用笔记本电脑的人构想出来的。我反对在会议中使用笔记本电脑,自己也不使用,所以我的用法与他的有些不同。对我来说,Org 模式主要是一种处理结构中内容的方式。在 Org 模式中有标题和副标题等,它们的作用就像一个大纲。Org 模式允许你展开或隐藏大纲树,还可以重新排列该树。这正合我意,并且我发现用这种方式使用它是一种乐趣。 + +Org 模式也有很多让生活愉快的小功能。例如,脚注处理非常好,LaTeX/PDF 输出也很好。Org 模式能够根据所有文档中的待办事项生成议程,并能很好地将它们与日期/时间联系起来。我不把它用在任何形式的外部任务上,这些任务都是在一个共享的日历上处理的,但是在创建事物和跟踪我未来需要创建的东西时,它是无价的。安装它,你只要将 `org-mode.el` 放到你的 `lisp` 目录下。如果你想要它基于文档的结构进行缩进并在打开时全部展开的话,在你的 `.emacs` 文件中添加如下代码: + +``` +;; set up org mode +(setq org-startup-indented t) +(setq org-startup-folded "showall") +(setq org-directory "~/org") +``` + +最后一行是让 Org 模式知道在哪里查找要包含在议程和其他事情中的文件。我把 Org 模式保存在我的主目录中,也就是说,像前面介绍的一样,它是 Dropbox 目录的一个符号链接。 + +我有一个总是在缓冲区中打开的 `stuff.org` 文件。我把它当作记事本。Org 模式使得提取待办事项和有期限的事情变得很容易。当你能够内联 Lisp 代码并在需要计算它时,它特别有用。拥有包含内容的代码非常方便。同样,你可以使用 Emacs 访问实际的计算机,这是一种解放。 + +##### 用 Org 模式进行发布 + +我关心的是文档的外观及格式。我刚开始工作时是个设计师,而且我认为信息可以,也应该表现得清晰和美丽。Org 模式对将 LaTeX 生成 PDF 支持的很好,LaTeX 虽然也有学习曲线,但是很容易处理一些简单的事务。 + +如果你想使用字体和样式,而不是典型的 LaTeX 字体和样式,你需要做些事。首先,你要用到 XeLaTeX,这样就可以使用普通的系统字体,而不是 LaTeX 的特殊字体。接下来,你需要将以下代码添加到 `.emacs` 中: + +``` +(setq org-latex-pdf-process + '("xelatex -interaction nonstopmode %f" + "xelatex -interaction nonstopmode %f")) +``` + +我把这个放在 `.emacs` 中 Org 模式配置部分的末尾,使文档变得更整洁。这让你在从 Org 模式发布时可以使用更多格式化选项。例如,我经常使用: + +``` +#+LaTeX_HEADER: \usepackage{fontspec} +#+LATEX_HEADER: \setmonofont[Scale=0.9]{Input Mono} +#+LATEX_HEADER: \setromanfont{Maison Neue} +#+LATEX_HEADER: \linespread{1.5} +#+LATEX_HEADER: \usepackage[margin=1.25in]{geometry} + +#+TITLE: Document Title Here +``` + +这些都可以在 `.org` 文件中找到。我们的公司规定的正文字体是 `Maison Neue`,但你也可以在这写上任何适当的东西。我很是抵制 `Maison Neue`,因为这是一种糟糕的字体,任何人都不应该使用它。 + +这个文件是一个使用该配置输出为 PDF 的实例。这就是开箱即用的 LaTeX 一样。在我看来这还不错,但是字体很平淡,而且有点奇怪。此外,如果你使用标准格式,人们会觉得他们正在阅读的东西是、或者假装是一篇学术论文。别怪我没提醒你。 + +#### 驾驭之 Ace Jump 模式 + +这只是一个辅助模式,而不是一个主模式,但是你也需要它。其工作原理有点像之前提到的 Jef Raskin 的 Leap 功能[^9] 。 按下 `C-c C-SPC`,然后输入要跳转到单词的第一个字母。它会高亮显示所有以该字母开头的单词,并将其替换为字母表中的字母。你只需键入所需位置的字母,光标就会跳转到该位置。我常将它作为导航键或是用来检索。将 `.el` 文件下到你的 `lisp` 目录下,并在 `.emacs` 文件添加如下代码: + +``` +;; set up ace-jump-mode +(add-to-list 'load-path "which-folder-ace-jump-mode-file-in/") +(require 'ace-jump-mode) +(define-key global-map (kbd "C-c C-SPC" ) 'ace-jump-mode) +``` + +### 待续 + +本文已经够详细了,你能在其中得到你所想要的。我很想知道除编程之外(或用于编程)Emacs 的使用情况,及其是否高效。在我使用 Emacs 的过程中,可能存在一些自作聪明的老板式想法,如果你能指出来,我将不胜感激。之后,我可能会写一些更新来介绍其它特性或模式。我很确定我将会向你展示如何在 Emacs 和 Ludwig 模式下使用 Fugue,因为我会将它发展成比代码高亮更有用的东西。更多想法,请在 Twitter 上 [@fugueHQ][41] 。 + +### 脚注 + +[^1]: 如果你是位精英,但从没涉及过技术方面,那么 Emacs 
并不适合你。对于少数的人来说,Emacs 可能会为他们开辟一条通往计算机技术方面的道路,但这只是极少数。如果你知道怎么使用 Unix 或 Windows 的终端,或者曾编辑过 dotfile,或者说你曾写过一点代码的话,这对使用 Emacs 有很大的帮助。 + +[^2]: 参考链接: http://archive.wired.com/wired/archive/2.08/tufte.html + +[^3]: 我主要是在写作时使用这个模式来进行一些运算。比如说,当我在给一个新雇员写一封入职信时,我想要算这封入职信中有多少个选项。由于我在我的 `.emacs` 为 outstanding-shares 定义了一个变量,所以我只要按下 `M-:` 然后输入 `(* .001 outstanding-shares)` 就能再无需打开计算器或电子表格的情况下得到精度为 0.001 的结果。我使用了 _大量_ 的变量来避免程序间切换。 + +[^4]: 缺少的部分是 web。有个名为 eww 的 Emacs 网页浏览器能够让你在 Emacs 中浏览网页。我用的就是这个,因为它既能拦截广告(LCTT 译注:实质上是无法显示,/laugh),同时也在可读性方面为 web 开发者消除了大多数差劲的选项。这个其实有点类似于 Safari 的阅读模式。不幸的是,大部分网站都有很多令人讨厌的繁琐的东西以及难以转换为文本的导航, + +[^5]: 易用性和易学性这两者经常容易被搞混。易学性是指学习使用工具的难易程度。而易用性是指工具高效的程度。通常来说,这是要差别的,就想鼠标和菜单栏的差别一样。菜单栏很容易学会,但是却不怎么高效,以致于早期会存在一些键盘的快捷键。除了在 GUI 方面上,Raskin 在很多方面上的观点都很正确。如今,操作系统正在将一些合适的搜索映射到键盘的快捷键上。比如说在 OS X 和 Windows 上,我默认的导航方式就是搜索。Ubuntu 的搜索做的很差劲,如同它的 GUI 一样差劲。 + +[^6]: 在有网的情况下,[AWS S3][42] 是解决文件存储问题的有效方案。数万亿个对象存在 S3 中,但是从来没有遗失过。大部分提供云存储的服务都是在 S3 上或是模拟 S3 构建的。没人能够拥有 S3 一样的规模,所以我将重要的文件通过 Dropbox 存储在上面。 + +[^7]: 目前,你可能会想:“这个人和自行车有什么关系?”……我在各个层面上都喜欢自行车。自行车是迄今为止发明的最具机械效率的交通工具。自行车可以是真正美丽的事物。而且,只要注意点的话,自行车可以用一辈子。早在 2001 年,我曾向 Rivendell Bicycle Works 订购了一辆自行车,现在我每次看到那辆自行车依然很高兴,自行车和 Unix 是我接触过的最好的两个发明。对了,还有 Emacs。 + +[^8]: 这个网站有一个很棒的 Emacs 教程,但不是这个。当我浏览这个页面时,我确实得到了一些对获取高效的 Emacs 配置很重要的知识,但无论怎么说,这都不是个替代品。 + +[^9]: 20 世纪 80 年代,Jef Raskin 与 Steve Jobs 在 Macintosh 项目上闹翻后, Jef Raskin 又设计了 [Canon Cat 计算机][43]。这台 Cat 计算机是以文档为中心的界面(所有的计算机都应如此),并以一种全新的方式使用键盘,你现在可以用 Emacs 来模仿这种键盘。如果现在有一台现代的,功能强大的 Cat 计算机并配有一个高分辨的显示器和 Unix 系统的话,我立马会用 Mac 来换。[https://youtu.be/o_TlE_U_X3c?t=19s][28] + +-------------------------------------------------------------------------------- + +via: https://blog.fugue.co/2015-11-11-guide-to-emacs.html + +作者:[Josh Stella][a] +译者:[oneforalone](https://github.com/oneforalone) +校对:[wxy](https://github.com/wxy), [oneforalone](https://github.com/oneforalone) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.fugue.co/authors/josh.html +[1]:https://blog.fugue.co/2013-10-16-vpc-on-aws-part3.html +[2]:https://blog.fugue.co/2013-10-02-vpc-on-aws-part2.html +[3]:http://ww2.fugue.co/2017-05-25_OS_AR_GartnerCoolVendor2017_01-LP-Registration.html +[4]:https://blog.fugue.co/authors/josh.html +[5]:https://twitter.com/joshstella +[6]:https://www.youtube.com/watch?v=khJQgRLKMU0 +[7]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phb +[8]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufte +[9]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreter +[10]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#eww +[11]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usability +[12]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3 +[13]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicycles +[14]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorial +[15]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncat 
+[16]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phbOrigin +[17]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufteOrigin +[18]:http://archive.wired.com/wired/archive/2.08/tufte.html +[19]:http://archive.wired.com/wired/archive/2.08/tufte.html +[20]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreterOrigin +[21]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#ewwOrigin +[22]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usabilityOrigin +[23]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3Origin +[24]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicyclesOrigin +[25]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorialOrigin +[26]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncatOrigin +[27]:https://youtu.be/o_TlE_U_X3c?t=19s +[28]:https://youtu.be/o_TlE_U_X3c?t=19s +[29]:https://blog.fugue.co/authors/josh.html +[30]:http://www.huffingtonpost.com/zachary-ehren/soma-isnt-a-drug-san-fran_b_987841.html +[31]:http://www.campagnolo.com/US/en +[32]:http://www.businessinsider.com/best-pointy-haired-boss-moments-from-dilbert-2013-10 +[33]:http://www.webopedia.com/TERM/I/interpreter.html +[34]:http://emacsformacosx.com/ +[35]:http://emacsformacosx.com/ +[36]:http://www.gnu.org/software/emacs/ +[37]:http://www.gnu.org/software/emacs/ +[38]:http://www.huffingtonpost.com/2015/05/29/two-spaces-after-period-debate_n_7455660.html +[39]:http://practicaltypography.com/one-space-between-sentences.html +[40]:http://input.fontbureau.com/ +[41]:https://twitter.com/fugueHQ +[42]:https://baike.baidu.com/item/amazon%20s3/10809744?fr=aladdin +[43]:https://en.wikipedia.org/wiki/Canon_Cat diff --git a/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md new file mode 100644 index 0000000000..bca48001bf --- /dev/null +++ b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md @@ -0,0 +1,106 @@ +TLDR 页:Linux 手册页的简化替代品 +============== + +[![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22] + +在终端上使用各种命令执行重要任务是 Linux 桌面体验中不可或缺的一部分。Linux 这个开源操作系统拥有[丰富的命令][23],任何用户都无法全部记住所有这些命令。而使事情变得更复杂的是,每个命令都有自己的一组带来丰富的功能的选项。 + +为了解决这个问题,人们创建了[手册页][12]man page,(手册 —— man 是 manual 的缩写)。首先,它是用英文写成的,包含了大量关于不同命令的深入信息。有时候,当你在寻找命令的基本信息时,它就会显得有点庞杂。为了解决这个问题,人们创建了[TLDR 页][13]。 + +### 什么是 TLDR 页? 
+ +TLDR 页的 GitHub 仓库将其描述为简化的、社区驱动的手册页集合。在实际示例的帮助下,努力让使用手册页的体验变得更简单。如果还不知道,TLDR 取自互联网的常见俚语:太长没读Too Long Didn’t Read。 + +如果你想比较一下,让我们以 `tar` 命令为例。 通常,手册页的篇幅会超过 1000 行。`tar` 是一个归档实用程序,经常与 `bzip` 或 `gzip` 等压缩方法结合使用。看一下它的手册页: + +[![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14] + +而另一方面,TLDR 页面让你只是浏览一下命令,看看它是如何工作的。 `tar` 的 TLDR 页面看起来像这样,并带有一些方便的例子 —— 你可以使用此实用程序完成的最常见任务: + +[![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15] + +让我们再举一个例子,向你展示 TLDR 页面为 `apt` 提供的内容: + +[![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16] + +如上,它向你展示了 TLDR 如何工作并使你的生活更轻松,下面让我们告诉你如何在基于 Linux 的操作系统上安装它。 + +### 如何在 Linux 上安装和使用 TLDR 页? + +最成熟的 TLDR 客户端是基于 Node.js 的,你可以使用 NPM 包管理器轻松安装它。如果你的系统上没有 Node 和 NPM,请运行以下命令: + +``` +sudo apt-get install nodejs +sudo apt-get install npm +``` + +如果你使用的是 Debian、Ubuntu 或 Ubuntu 衍生发行版以外的操作系统,你可以根据自己的情况使用`yum`、`dnf` 或 `pacman`包管理器。 + +现在,通过在终端中运行以下命令,在 Linux 机器上安装 TLDR 客户端: + +``` +sudo npm install -g tldr +``` + +一旦安装了此终端实用程序,最好在尝试之前更新其缓存。 为此,请运行以下命令: + +``` +tldr --update +``` + +执行此操作后,就可以阅读任何 Linux 命令的 TLDR 页面了。 为此,只需键入: + +``` +tldr +``` + +[![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17] + +你还可以运行其[帮助命令](https://github.com/tldr-pages/tldr-node-client),以查看可与 TLDR 一起使用的各种参数,以获取所需输出。 像往常一样,这个帮助页面也附有例子。 + +### TLDR 的 web、Android 和 iOS 版本 + +你会惊喜地发现 TLDR 页不仅限于你的 Linux 桌面。 相反,它也可以在你的 Web 浏览器中使用,可以从任何计算机访问。 + +要使用 TLDR Web 版本,请访问 [tldr.ostera.io][18] 并执行所需的搜索操作。 + +或者,你也可以下载 [iOS][19] 和 [Android][20] 应用程序,并随时随地学习新命令。 + +[![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21] + +你觉得这个很酷的 Linux 终端技巧很有意思吗? 请尝试一下,让我们知道您的反馈。 + +-------------------------------------------------------------------------------- + +via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ + +作者:[Adarsh Verma][a] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fossbytes.com/author/adarsh/ +[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/ +[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/ +[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/ +[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/ +[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/ +[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/ +[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/ +[8]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F +[9]:https://twitter.com/intent/tweet?text=TLDR+pages%3A+Simplified+Alternative+To+Linux+Man+Pages&url=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F&via=%40fossbytes14 +[10]:http://plus.google.com/share?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ +[11]:http://pinterest.com/pin/create/button/?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/&media=https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/ +[13]:https://github.com/tldr-pages/tldr +[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg +[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg 
+[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg +[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg +[18]:https://tldr.ostera.io/ +[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8 +[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid +[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg +[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/ diff --git a/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md new file mode 100644 index 0000000000..3aa2e6f3ea --- /dev/null +++ b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: (jlztan) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (Celebrate Christmas In Linux Way With These Wallpapers) +[#]: via: (https://itsfoss.com/christmas-linux-wallpaper/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) +[#]: url: (https://linux.cn/article-10381-1.html) + +以 Linux 的方式庆祝圣诞节 +====== + +当前正是假日季,很多人可能已经在庆祝圣诞节了。祝你圣诞快乐,新年快乐。 + +为了延续节日氛围,我将向你展示一些非常棒的圣诞主题的 [Linux 壁纸][1]。在呈现这些壁纸之前,先来看一棵 Linux 终端下的圣诞树。 + +### 让你的桌面飘雪(针对 GNOME 用户) + +- [Let it Snow on Your Linux Desktop](https://youtu.be/1QI1ludzZuA) + +如果您在 Ubuntu 18.04 或任何其他 Linux 发行版中使用 GNOME 桌面,您可以使用一个小的 [GNOME 扩展][55]并在桌面上飘雪。 + +您可以从软件中心或 GNOME 扩展网站获取此 gsnow 扩展。我建议您阅读一些关于[使用 GNOME 扩展][55]的内容。 + +安装此扩展程序后,您会在顶部面板上看到一个小雪花图标。 如果您单击一次,您会看到桌面屏幕上的小絮状物掉落。 + +![](https://itsfoss.com/wp-content/uploads/2018/12/snowfall-on-linux-desktop-1.webm) + +你可以再次点击该图标来禁止雪花落下。 + +### 在 Linux 终端下显示圣诞树 + +![Display Christmas Tree in Linux Terminal](https://i.giphy.com/xUNda6KphvbpYxL3tm.gif) + +如果你想要在终端里显示一个动画的圣诞树,你可以使用如下命令: + +``` +curl https://raw.githubusercontent.com/sergiolepore/ChristBASHTree/master/tree-EN.sh | bash +``` + +要是不想一直从互联网上获取这棵圣诞树,也可以从它的 [GitHub 仓库][2] 中获取对应的 shell 脚本,更改权限之后按照运行普通 shell 脚本的方式运行它。 + +### 使用 Perl 在 Linux 终端下显示圣诞树 + +[![Christmas Tree in Linux terminal by NixCraft][3]][4] + +这个技巧最初由 [NixCraft][5] 分享,你需要为此安装 Perl 模块。 + +说实话,我不喜欢使用 Perl 模块,因为卸载它们真的很痛苦。所以使用这个 Perl 模块时需谨记,你必须手动移除它。 + +``` +perl -MCPAN -e 'install Acme::POE::Tree' +``` + +你可以阅读 [原文][5] 来了解更多信息。 + +### 下载 Linux 圣诞主题壁纸 + +所有这些 Linux 圣诞主题壁纸都是由 Mark Riedesel 制作的,你可以在 [他的网站][6] 上找到很多其他艺术品。 + +自 2002 年以来,他几乎每年都在制作这样的壁纸。可以理解的是,最早的一些壁纸不具有现代的宽高比。我把它们按时间倒序排列。 + +注意一个小地方,这里显示的图片都是高度压缩的,因此你要通过图片下方提供的链接进行下载。 + +![Christmas Linux Wallpaper][56] + +*[下载此壁纸][57]* + +![Christmas Linux Wallpaper][7] + +*[下载此壁纸][8]* + +[![Christmas Linux Wallpapers][9]][10] + +*[下载此壁纸][11]* + +[![Christmas Linux Wallpapers][12]][13] + +*[下载此壁纸][14]* + +[![Christmas Linux Wallpapers][15]][16] + +*[下载此壁纸][17]* + +[![Christmas Linux Wallpapers][18]][19] + +*[下载此壁纸][20]* + +[![Christmas Linux Wallpapers][21]][22] + +*[下载此壁纸][23]* + +[![Christmas Linux Wallpapers][24]][25] + +*[下载此壁纸][26]* + +[![Christmas Linux Wallpapers][27]][28] + +*[下载此壁纸][29]* + +[![Christmas Linux Wallpapers][30]][31] + +*[下载此壁纸][32]* + +[![Christmas Linux Wallpapers][33]][34] + +*[下载此壁纸][35]* + +[![Christmas Linux Wallpapers][36]][37] + +*[下载此壁纸][38]* + +[![Christmas Linux Wallpapers][39]][40] + +*[下载此壁纸][41]* + +[![Christmas Linux Wallpapers][42]][43] + +*[下载此壁纸][44]* + +[![Christmas Linux Wallpapers][45]][46] + +*[下载此壁纸][47]* + +[![Christmas Linux 
Wallpapers][48]][49] + +*[下载此壁纸][50]* + +### 福利:Linux 圣诞颂歌 + +这是给你的一份福利,给像我们一样的 Linux 爱好者的关于 Linux 的圣诞颂歌。 + +在 [《计算机世界》的一篇文章][51] 中,[Sandra Henry-Stocker][52] 分享了这些圣诞颂歌。摘录片段如下: + +这一段用的 [Chestnuts Roasting on an Open Fire][53] 的曲调: + +> Running merrily on open source +> +> With users happy as can be +> +> We’re using Linux and getting lots done + +> And happy everything is free + +这一段用的 [The Twelve Days of Christmas][54] 的曲调: + +> On my first day with Linux, my admin gave to me a password and a login ID +> +> On my second day with Linux my admin gave to me two new commands and a password and a login ID + +在 [这里][51] 阅读完整的颂歌。 + +Linux 快乐! + +------ + +via: https://itsfoss.com/christmas-linux-wallpaper/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/beautiful-linux-wallpapers/ +[2]: https://github.com/sergiolepore/ChristBASHTree +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/perl-tree.gif?resize=600%2C622&ssl=1 +[4]: https://itsfoss.com/christmas-linux-wallpaper/perl-tree/ +[5]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-unix-desktop-fun-christmas-tree-for-your-terminal/ +[6]: http://www.klowner.com/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmas-linux-wallpaper-featured.jpeg?resize=800%2C450&ssl=1 +[8]: http://klowner.com/wallery/christmas_tux_2017/download/ChristmasTux2017_3840x2160.png +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2016_3840x2160_result.jpg?resize=800%2C450&ssl=1 +[10]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2016_3840x2160_result/ +[11]: http://www.klowner.com/wallpaper/christmas_tux_2016/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2015_2560x1920_result.jpg?resize=800%2C600&ssl=1 +[13]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2015_2560x1920_result/ +[14]: http://www.klowner.com/wallpaper/christmas_tux_2015/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2014_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[16]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2014_2560x1440_result/ +[17]: http://www.klowner.com/wallpaper/christmas_tux_2014/ +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2013_result.jpg?resize=800%2C450&ssl=1 +[19]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2013_result/ +[20]: http://www.klowner.com/wallpaper/christmas_tux_2013/ +[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2012_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[22]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2012_2560x1440_result/ +[23]: http://www.klowner.com/wallpaper/christmas_tux_2012/ +[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2011_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[25]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2011_2560x1440_result/ +[26]: http://www.klowner.com/wallpaper/christmas_tux_2011/ +[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2010_5120x2880_result.jpg?resize=800%2C450&ssl=1 +[28]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2010_5120x2880_result/ +[29]: http://www.klowner.com/wallpaper/christmas_tux_2010/ +[30]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2009_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[31]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2009_1600x1200_result/ +[32]: http://www.klowner.com/wallpaper/christmas_tux_2009/ +[33]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2008_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[34]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2008_2560x1600_result/ +[35]: http://www.klowner.com/wallpaper/christmas_tux_2008/ +[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2007_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[37]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2007_2560x1600_result/ +[38]: http://www.klowner.com/wallpaper/christmas_tux_2007/ +[39]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2006_1024x768_result.jpg?resize=800%2C600&ssl=1 +[40]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2006_1024x768_result/ +[41]: http://www.klowner.com/wallpaper/christmas_tux_2006/ +[42]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2005_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[43]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2005_1600x1200_result/ +[44]: http://www.klowner.com/wallpaper/christmas_tux_2005/ +[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2004_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[46]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2004_1600x1200_result/ +[47]: http://www.klowner.com/wallpaper/christmas_tux_2004/ +[48]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2002_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[49]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2002_1600x1200_result/ +[50]: http://www.klowner.com/wallpaper/christmas_tux_2002/ +[51]: http://www.computerworld.com/article/3151076/linux/merry-linux-to-you.html +[52]: https://twitter.com/bugfarm +[53]: https://www.youtube.com/watch?v=dhzxQCTCI3E +[54]: https://www.youtube.com/watch?v=oyEyMjdD2uk +[55]: https://itsfoss.com/gnome-shell-extensions/ +[56]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2018.jpeg?w=800&ssl=1 +[57]: http://www.klowner.com/wallery/christmas_tux_2018/download/ChristmasTux2018_4K_3840x2160.png diff --git a/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md new file mode 100644 index 0000000000..12f45713d4 --- /dev/null +++ b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md @@ -0,0 +1,240 @@ +我的个人电子邮件系统设置:notmuch、mbsync、Postfix 和 dovecot +====== + +我使用个人电子邮件系统已经相当长的时间了,但是一直没有记录过文档。最近我换了我的笔记本电脑(职业变更导致的变动),我在试图重新创建本地邮件系统时迷茫了。所以这篇文章是一个给自己看的文档,这样我就不用费劲就能再次搭建出来。 + +### 服务器端 + +我运行自己的邮件服务器,并使用 Postfix 作为 SMTP 服务器,用 Dovecot 实现 IMAP。我不打算详细介绍如何配置这些设置,因为我的设置主要是通过使用 Jonas 为 Redpill 基础架构创建的脚本完成的。什么是 Redpill?(用 Jonas 自己的话说): + +> \ Redpill 是一个概念:一种设置 Debian hosts 去跨组织协作的方式 +> +> \ 我发展了这个概念,并将其首次用于 Redpill 网中网:redpill.dk,其中涉及到了我自己的网络(jones.dk),我的主要客户的网络(homebase.dk),一个包括 Skolelinux Germany(free-owl.de)的在德国的网络,和 Vasudev 的网络(copyninja.info) + +除此之外, 我还有一个 dovecot sieve 过滤,根据邮件的来源,对邮件进行高级分类,将其放到各种文件夹中。所有的规则都存在于每个有邮件地址的账户下的 `~/dovecot.sieve` 文件中。 + +再次,我不会详细介绍如何设置这些东西,因为这不是我这个帖子的目标。 + +### 在我的笔记本电脑上 + +在我的笔记本电脑上,我已经按照 4 个部分设置 + + 1. 邮件同步:使用 `mbsync` 命令完成 + 2. 分类:使用 notmuch 完成 + 3. 
阅读:使用 notmuch-emacs 完成 + 4. 邮件发送:使用作为中继服务器和 SMTP 客户端运行的 Postfix 完成。 + +### 邮件同步 + +邮件同步是使用 `mbsync` 工具完成的, 我以前是 OfflineIMAP 的用户,最近切换到 `mbsync`,因为我觉得它比 OfflineIMAP 的配置更轻量、更简单。该命令是由 isync 包提供的。 + +配置文件是 `~/.mbsyncrc`。下面是我的例子与一些个人设置。 + +``` +IMAPAccount copyninja +Host imap.copyninja.info +User vasudev +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + + +IMAPAccount gmail-kamathvasudev +Host imap.gmail.com +User kamathvasudev@gmail.com +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + +IMAPStore copyninja-remote +Account copyninja + +IMAPStore gmail-kamathvasudev-remote +Account gmail-kamathvasudev + +MaildirStore copyninja-local +Path ~/Mail/vasudev-copyninja.info/ +Inbox ~/Mail/vasudev-copyninja.info/INBOX + +MaildirStore gmail-kamathvasudev-local +Path ~/Mail/Gmail-1/ +Inbox ~/Mail/Gmail-1/INBOX + +Channel copyninja +Master :copyninja-remote: +Slave :copyninja-local: +Patterns * +Create Both +SyncState * +Sync All + +Channel gmail-kamathvasudev +Master :gmail-kamathvasudev-remote: +Slave :gmail-kamathvasudev-local: +# Exclude everything under the internal [Gmail] folder, except the interesting folders +Patterns * ![Gmail]* +Create Both +SyncState * +Sync All +``` + +对上述配置中的一些有趣部分进行一下说明。一个是 PassCmd,它允许你提供 shell 命令来获取帐户的密码。这样可以避免在配置文件中填写密码。我使用 gpg 的对称加密,并在我的磁盘上存储密码。这当然是由 Unix ACL 保护安全的。 + +实际上,我想使用我的公钥来加密文件,但当脚本在后台或通过 systemd 运行时,解锁文件看起来很困难 (或者说几乎不可能)。如果你有更好的建议,我洗耳恭听:-)。 + +下一个指令部分是 Patterns。这使你可以有选择地同步来自邮件服务器的邮件。这对我来说真的很有帮助,可以排除所有的 “[Gmail]/ folders” 垃圾目录。 + +### 邮件分类 + +一旦邮件到达你的本地设备,我们需要一种方法来轻松地在邮件读取器中读取邮件。我最初的设置使用本地 dovecot 实例提供同步的 Maildir,并在 Gnus 中阅读。这种设置相比于设置所有的服务器软件是有点大题小作,但 Gnus 无法很好地应付 Maildir 格式,这是最好的方法。这个设置也有一个缺点,那就是在你快速搜索邮件时,要搜索大量邮件。而这就是 notmuch 的用武之地。 + +notmuch 允许我轻松索引上千兆字节的邮件档案而找到我需要的东西。我已经创建了一个小脚本,它结合了执行 `mbsync` 和 `notmuch`。我使用 dovecot sieve 来基于实际上创建在服务器端的 Maildirs 标记邮件。下面是我的完整 shell 脚本,它执行同步分类和删除垃圾邮件的任务。 + +``` +#!/bin/sh + +MBSYNC=$(pgrep mbsync) +NOTMUCH=$(pgrep notmuch) + +if [ -n "$MBSYNC" -o -n "$NOTMUCH" ]; then + echo "Already running one instance of mail-sync. Exiting..." 
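 # 已有 mbsync 或 notmuch 实例在运行,直接退出,避免两个同步进程同时操作同一邮箱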
+ exit 0 +fi + +echo "Deleting messages tagged as *deleted*" +notmuch search --format=text0 --output=files tag:deleted |xargs -0 --no-run-if-empty rm -v + +echo "Moving spam to Spam folder" +notmuch search --format=text0 --output=files tag:Spam and \ + to:vasudev@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur +notmuch search --format=text0 --output=files tag:Spam and + to:vasudev-debian@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur + + +MDIR="vasudev-copyninja.info vasudev-debian Gmail-1" +mbsync -Va +notmuch new + +for mdir in $MDIR; do + echo "Processing $mdir" + for fdir in $(ls -d /home/vasudev/Mail/$mdir/*); do + if [ $(basename $fdir) != "INBOX" ]; then + echo "Tagging for $(basename $fdir)" + notmuch tag +$(basename $fdir) -inbox -- folder:$mdir/$(basename $fdir) + fi + done +done +``` + +因此,在运行 `mbsync` 之前,我搜索所有标记为“deleted”的邮件,并将其从系统中删除。接下来,我在我的帐户上查找标记为“Spam”的邮件,并将其移动到“Spam”文件夹。你没看错,这些邮件逃脱了垃圾邮件过滤器进入到我的收件箱,并被我亲自标记为垃圾邮件。 + +运行 `mbsync` 后,我基于它们的文件夹标记邮件(搜索字符串 `folder:`)。这让我可以很容易地得到一个邮件列表的内容,而不需要记住列表地址。 + +### 阅读邮件 + +现在,我们已经实现同步和分类邮件,是时候来设置阅读部分。我使用 notmuch-emacs 界面来阅读邮件。我使用 emacs 的 Spacemacs 风格,所以我花了一些时间写了一个私有层,它将我所有的快捷键和分类集中在一个地方,而不会扰乱我的整个 `.spacemacs` 文件。你可以在 [notmuch-emacs-layer 仓库][1] 找到我的私有层的代码。 + +### 发送邮件 + +能阅读邮件这还不够,我们也需要能够回复邮件。而这是最近是我感到迷茫的一个略显棘手的部分,以至于不得不写这篇文章,这样我就不会再忘记了。(当然也不必在网络上参考一些过时的帖子。) + +我的系统发送邮件使用 Postfix 作为 SMTP 客户端,使用我自己的 SMTP 服务器作为它的中继主机。中继的问题是,它不能是具有动态 IP 的主机。有两种方法可以允许具有动态 IP 的主机使用中继服务器, 一种是将邮件来源的 IP 地址放入 `my_network` 或第二个使用 SASL 身份验证。 + +我的首选方法是使用 SASL 身份验证。为此,我首先要为每台机器创建一个单独的账户,它将把邮件中继到我的主服务器上。想法是不使用我的主帐户 SASL 进行身份验证。(最初我使用的是主账户,但 Jonas 给出了可行的按账户的想法) + +``` +adduser _relay +``` + +这里替换 `` 为你的笔记本电脑的名称或任何你正在使用的设备。现在我们需要调整 Postfix 作为中继服务器。因此,在 Postfix 配置中添加以下行: + +``` +# SASL authentication +smtp_sasl_auth_enable = yes +smtp_tls_security_level = encrypt +smtp_sasl_tls_security_options = noanonymous +relayhost = [smtp.copyninja.info]:submission +smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd +``` + +因此, 这里的 `relayhost` 是用于将邮件转发到互联网的 Postfix 实例的服务器名称。`submission` 的部分 Postfix 将邮件转发到端口 587(安全端口)。`smtp_sasl_tls_security_options` 设置为不允许匿名连接。这是必须的,以便中继服务器信任你的移动主机,并同意为你转发邮件。 + +`/etc/postfix/sasl_passwd` 是你需要存储用于服务器 SASL 身份验证的帐户密码的文件。将以下内容放入其中。 + +``` +[smtp.example.com]:submission user:password +``` + +用你已放入 `relayhost` 配置的 SMTP 服务器名称替换 `smtp.example.com`。用你创建的 `_relay` 用户及其密码替换 `user` 和 `passwd`。 + +若要保护 `sasl_passwd` 文件,并为 Postfix 创建它的哈希文件,使用以下命令。 + +``` +chown root:root /etc/postfix/sasl_passwd +chmod 0600 /etc/postfix/sasl_passwd +postmap /etc/postfix/sasl_passwd +``` + +最后一条命令将创建 `/etc/postfix/sasl_passwd.db` 文件,它是你的文件的 `/etc/postfix/sasl_passwd` 的哈希文件,具有相同的所有者和权限。现在重新加载 Postfix,并使用 `mail` 命令检查邮件是否从你的系统中发出。 + +### Bonus 的部分 + +好吧,因为我有一个脚本创建以上结合了邮件的同步和分类。我继续创建了一个 systemd 计时器,以定期同步后台的邮件。就我而言,每 10 分钟一次。下面是 `mailsync.timer` 文件。 + +``` +[Unit] +Description=Check Mail Every 10 minutes +RefuseManualStart=no +RefuseManualStop=no + +[Timer] +Persistent=false +OnBootSec=5min +OnUnitActiveSec=10min +Unit=mailsync.service + +[Install] +WantedBy=default.target +``` + +下面是 mailsync.service 服务,这是 mailsync.timer 执行我们的脚本所需要的。 + +``` +[Unit] +Description=Check Mail +RefuseManualStart=no +RefuseManualStop=yes + +[Service] +Type=oneshot +ExecStart=/usr/local/bin/mail-sync +StandardOutput=syslog +StandardError=syslog +``` + +将这些文件置于 `/etc/systemd/user` 目录下并运行以下代码去开启它们: + +``` +systemctl enable --user mailsync.timer +systemctl enable --user mailsync.service +systemctl 
start --user mailsync.timer +``` + +这就是我从系统同步和发送邮件的方式。我从 Jonas Smedegaard 那里了解到了 afew,他审阅了这篇帖子。因此, 下一步, 我将尝试使用 afew 改进我的 notmuch 配置,当然还会有一个后续的帖子:-)。 + +-------------------------------------------------------------------------------- + +via: https://copyninja.info/blog/email_setup.html + +作者:[copyninja][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://copyninja.info +[1]:https://source.copyninja.info/notmuch-emacs-layer.git/ diff --git a/published/201812/20180101 27 open solutions to everything in education.md b/published/201812/20180101 27 open solutions to everything in education.md new file mode 100644 index 0000000000..48a4f3fa3c --- /dev/null +++ b/published/201812/20180101 27 open solutions to everything in education.md @@ -0,0 +1,91 @@ +27 个全方位的开放式教育解决方案 +====== + +> 阅读这些 2017 年 Opensource.com 发布的开放如何改进教育和学习的好文章。 + +![27 open solutions to everything in education](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OpenEducationResources_520x292_cm.png?itok=9y4FGgRo) + +开放式理念 (从开源软件到开放硬件,再到开放原则) 正在改变教育的范式。因此,为了庆祝今年发生的一切,我收集了 2017 年(译注:本文原发布于 2018 年初)在 Opensource.com 上发表的 27 篇关于这个主题的最好的文章。我把它们分成明确的主题,而不是按人气来分类。而且,如果这 27 个故事不能满足你对教育方面开源信息的胃口,那就看看我们的合作文章吧 “[教育如何借助 Linux 和树莓派][30]”。 + +### 开放对每个人都有好处 + +1. [书评:《OPEN》探讨了开放性的广泛文化含义][1]:Scott Nesbitt 评价 David Price 的书 《OPEN》 ,该书探讨了 “开放” 不仅仅是技术转变的观点,而是 “我们未来将如何工作、生活和学习”。 +2. [通过开源技能快速开始您的职业生涯][2]: VM (Vicky) Brasseur 指出了如何借助学习开源在工作群体中脱颖而出。这个建议不仅仅是针对程序员的;设计师、作家、营销人员和其他创意专业人士也对开源的成功至关重要。 +3. [研究生学位可以让你跳槽到开源职位][3]:引用的研究表明会 Linux 技能会带来更高的薪水, Joshua Pearce 说对开源的熟练和研究生学位是无与伦比的职业技能组合。 +4. [彻底改变了宾夕法尼亚的学校文化的三种实践][4]:Charlie Reisinger 向我们展示了开放式实践是如何在宾夕法尼亚州的一个学区创造一种更具包容性、敏捷性和开放性的文化的。Charlie 说,这不仅仅是为了省钱;该区还受益于 “开放式领导原则,促进师生创新,帮助更好地吸引社区,创造一个更有活力和包容性的学习社区”。 +5. [使用开源工具促使学生进步的 15 种方法][5]:我写了开源是如何让学生自由探索、补拙和学习的,不管他们是在学习基本的数字化素养,还是通过有趣的项目来扩展这些技能。 +6. [开发人员有机会编写好的代码][6]:开源往往是对社会有益的项目的支柱。正如 Benetech Labs 副总裁 Ahn Bui 在这次采访中指出的那样:“建立开放数据标准是打破数据孤岛不可或缺的一步。这些开放标准将为互操作性提供基础,进而转化为更多的组织共同建设,往往更具成本效益。最终目标是以同样的成本甚至更低的成本为更多的人服务。” + +### 用于再融合和再利用的开放式教育资源 + +1. [学术教员可以和维基百科一起教学吗?][7]:Wiki Ed 的项目总监 LiAnna Davis 讨论开放式教育资源open educational resources (OER) ,如 Wiki Ed,是如何提供高质量且经济实惠的开源学习资源作为课堂教学工具。 +2. [书本内外?开放教育资源的状态][8]:Cable Green 是 Creative Common 开放教育主管,分享了高等教育中教育面貌是如何变化的,以及 Creative Common 正在采取哪些措施来促进教育。 +3. [急需符合标准的课程的学校系统找到了希望][9]:Karen Vaites 是 Open Up Resources 社区布道师和首席营销官,谈论了非营利组织努力为 K-12 学校提供开放的、标准一致的课程。 +4. [夏威夷大学如何解决当今高等教育的问题][10]:夏威夷大学 Manoa 分校的教育技术专家 Billy Meinke 表示,在大学课程中过渡到 ORE 将 “使教师能够控制他们教授的内容,我们预计这将为他们节省学生的费用。” +5. [开放式课程如何削减高等教育成本][11]:塞勒学院的教育总监 Devon Ritter 报告了塞勒学院是如何建立以公开许可内容为基础的大学学分课程,目的是使更多的人能够负担得起和获得高等教育。 +6. [开放教育资源运动在提速][12]:Alexis Clifton 是纽约州立大学的 OER 服务的执行董事,描述了纽约 800 万美元的投资如何刺激开放教育的增长,并使大学更实惠。 +7. [开放项目合作,从小学到大学教室][13]:来自杜克大学的 Aria F. Chernik 探索 OSPRI (开源教育学的研究与创新), 这是杜克大学和红帽的合作,旨在建立一个 21 世纪的,开放设计的 preK-12 学习生态系统。 +8. [Perma.cc 将阻止学术链接腐烂][14]::弗吉尼亚理工大学的 Phillip Young 写的关于 Perma.cc 的文章,这种一种“链接腐烂”的解决方案,在学术论文中的超链接随着时间的推移而消失或变化的概览很高。 +9. [开放教育:学生如何通过创建开放教科书来节省资金][15]:OER 先驱 Robin DeRosa 谈到 “引入公开许可教科书的自由,以及教育和学习应结合包容性生态系统,以增强公益的总体理念”。 + +### 课堂上的开源工具 + +1. [开源棋盘游戏如何拯救地球][16]:Joshua Pearce 写的关于拯救地球的一个棋盘游戏,这是一款让学生在玩乐和为创客社区做出贡献的同时解决环境问题的棋盘游戏。 +2. [一个教孩子们如何阅读的新 Android 应用程序][17]:Michael Hall 谈到了他在儿子被诊断为自闭症后为他开发的儿童识字应用 Phoenicia,以及良好编码的价值,和为什么用户测试比你想象的更重要。 +3. [8 个用于教育的开源 Android 应用程序][18]:Joshua Allen Holm 推荐了 8 个来自 F-Droid 软件库的开源应用,使我们可以将智能手机用作学习工具。 +4. 
[MATLA B的 3 种开源替代方案][19]:Jason Baker 更新了他 2016 年的开源数学计算软件调查报告,提供了 MATLAB 的替代方案,这是数学、物理科学、工程和经济学中几乎无处不在的昂贵的专用解决方案。 +5. [SVG 与教孩子编码有什么关系?][20]:退休工程师 Jay Nick 谈论他如何使用艺术作为一种创新的方式,向学生介绍编码。他在学校做志愿者,使用 SVG 来教授一种结合数学和艺术原理的编码方法。 +6. [5 个破灭的神话:在高等教育中使用开源][21]: 拥有德克萨斯理工大学美术博士学位的 Kyle Conway 分享他在一个由专有解决方案统治的世界中使用开源工具的经验。 Kyle 说有一种偏见,反对在计算机科学以外的学科中使用开源:“很多人认为非技术专业的学生不能使用 Linux,他们对在高级学位课程中使用 Linux 的人做出了很多假设……嗯,这是有可能的,我就是证明。” +7. [大学开源工具列表][22]:Aaron Cocker 概述了他在攻读计算机科学本科学位时使用的开源工具 (包括演示、备份和编程软件)。 +8. [5 个可帮助您学习优秀 KDE 应用程序][23]:Zsolt Szakács 提供五个 KDE 应用程序,可以帮助任何想要学习新技能或培养现有技能的人。 + +### 在教室编码 + +1. [如何尽早让下一代编码][24]:Bryson Payne 说我们需要在高中前教孩子们学会编码: 到了九年级,80% 的女孩和 60% 的男孩已经从 STEM 职业中自选。但他建议,这不仅仅是就业和缩小 IT 技能差距的问题。“教一个年轻人编写代码可能是你能给他们的最改变生活的技能。而且这不仅仅是一个职业提升者。编码是关于解决问题,它是关于创造力,更重要的是,它是提升能力”。 +2. [孩子们无法在没有计算机的情况下编码][25]:Patrick Masson 推出了 FLOSS 儿童桌面计划, 该计划教授服务不足学校的学生使用开源软件 (如 Linux、LibreOffice 和 GIMP) 重新利用较旧的计算机。该计划不仅为破旧或退役的硬件注入新的生命,还为学生提供了重要的技能,而且还为学生提供了可能转化为未来职业生涯的重要技能。 +3. [如今 Scratch 是否能像 80 年代的 LOGO 语言一样教孩子们编码?][26]:Anderson Silva 提出使用 [Scratch][27] 以激发孩子们对编程的兴趣,就像在 20 世纪 80 年代开始使用 LOGO 语言时一样。 +4. [通过这个拖放框架学习Android开发][28]:Eric Eslinger 介绍了 App Inventor,这是一个编程框架,用于构建 Android 应用程序使用可视块语言(类似 Scratch 或者 [Snap][29])。 + +在这一年里,我们了解到,教育领域的各个方面都有了开放的解决方案,我预计这一主题将在 2018 年及以后继续下去。在未来的一年里,你是否希望 Opensource.com 涵盖开放式的教育主题?如果是, 请在评论中分享你的想法。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/best-open-education + +作者:[Don Watkins][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://opensource.com/article/17/7/book-review-open +[2]:https://opensource.com/article/17/8/jump-start-your-career +[3]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab +[4]:https://opensource.com/article/17/7/open-school-leadership +[5]:https://opensource.com/article/17/7/empower-students-open-source-tools +[6]:https://opensource.com/article/17/3/interview-anh-bui-benetech-labs +[7]:https://opensource.com/article/17/1/Wiki-Education-Foundation +[8]:https://opensource.com/article/17/2/future-textbooks-cable-green-creative-commons +[9]:https://opensource.com/article/17/1/open-up-resources +[10]:https://opensource.com/article/17/2/interview-education-billy-meinke +[11]:https://opensource.com/article/17/7/college-alternatives +[12]:https://opensource.com/article/17/10/open-educational-resources-alexis-clifton +[13]:https://opensource.com/article/17/3/education-should-be-open-design +[14]:https://opensource.com/article/17/9/stop-link-rot-permacc +[15]:https://opensource.com/article/17/11/creating-open-textbooks +[16]:https://opensource.com/article/17/7/save-planet-board-game +[17]:https://opensource.com/article/17/4/phoenicia-education-software +[18]:https://opensource.com/article/17/8/8-open-source-android-apps-education +[19]:https://opensource.com/alternatives/matlab +[20]:https://opensource.com/article/17/5/coding-scalable-vector-graphics-make-steam +[21]:https://opensource.com/article/17/5/how-linux-higher-education +[22]:https://opensource.com/article/17/6/open-source-tools-university-student +[23]:https://opensource.com/article/17/6/kde-education-software +[24]:https://opensource.com/article/17/8/teach-kid-code-change-life +[25]:https://opensource.com/article/17/9/floss-desktops-kids +[26]:https://opensource.com/article/17/3/logo-scratch-teach-programming-kids 
+[27]:https://scratch.mit.edu/ +[28]:https://opensource.com/article/17/8/app-inventor-android-app-development +[29]:http://snap.berkeley.edu/ +[30]:https://opensource.com/article/17/12/best-opensourcecom-linux-and-raspberry-pi-education diff --git a/published/201812/20180104 How Creative Commons benefits artists and big business.md b/published/201812/20180104 How Creative Commons benefits artists and big business.md new file mode 100644 index 0000000000..aefc804479 --- /dev/null +++ b/published/201812/20180104 How Creative Commons benefits artists and big business.md @@ -0,0 +1,66 @@ +你所不知道的知识共享(CC) +====== + +> 知识共享为艺术家提供访问权限和原始素材。大公司也从中受益。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/CreativeCommons_ideas_520x292_1112JS.png?itok=otei0vKb) + +我毕业于电影学院,毕业后在一所电影学校教书,之后进入一家主流电影工作室,我一直在从事电影相关的工作。创意产业的方方面面面临着同一个问题:创作者需要原材料。有趣的是,自由文化运动提出了解决方案,具体来说是在自由文化运动中出现的知识共享Creative Commons组织。 + +### 知识共享能够为我们提供展示片段和小样 + +和其他事情一样,创造力也需要反复练习。幸运的是,在我刚开始接触电脑时,就在一本关于渲染工场的专业杂志中接触到了开源这个存在。当时我并不理解所谓的“开源”是什么,但我知道只有开源工具能帮助我在领域内稳定发展。对我来说,知识共享也是如此。知识共享可以为艺术家们提供充满丰富艺术资源的工作室。 + +我在电影学院任教时,经常需要给学生们准备练习编辑、录音、拟音、分级、评分的示例录像。在 Jim Munroe 的独立作品 [Infest Wisely][1] 中和 [Vimeo][2] 上的知识共享内容里我总能找到我想要的。这些逼真的镜头覆盖内容十分广泛,从独立制作到昂贵的高品质的升降镜头(一般都会用无人机代替)都有。 + +![](https://opensource.com/sites/default/files/u128651/bunny.png) + +对实验主义艺术来说,确有无尽可能。知识共享提供了丰富的素材,这些材料可以用来整合、混剪等等,可以满足一位视觉先锋能够想到的任何用途。 + +在接触知识共享之前,如果我想要使用写实镜头,如果在大学,只能用之前的学生和老师拍摄的或者直接使用版权库里的镜头,或者使用有受限的版权保护的镜头。 + +### 坚守版权的底线很重要 + +知识共享同样能够创造经济效益。在某大型计算机公司的渲染工场工作时,我负责在某些硬件设施上测试渲染的运行情况,而这个测试时刻面临着被搁置的风险。做这些测试时,我用的都是[大雄兔][3]的资源,因为这个电影和它的组件都是可以免费使用和分享的。如果没有这个小短片,在接触写实资源之前我都没法完成我的实验,因为对于一个计算机公司来说,雇佣一只 3D 艺术家来按需布景是不太现实的。 + +令我震惊的是,与开源类似,知识共享已经用我们难以想象的方式支撑起了大公司。知识共享的使用可能会也可能不会影响公司的日常流程,但它填补了不足,让工作流程顺利进行。我没见到谁在他们的书中将流畅工作归功于知识共享的应用,但它确实无处不在。 + +![](https://opensource.com/sites/default/files/u128651/sintel.png) + +我也见过一些开放版权的电影,比如[辛特尔][4],在最近的电视节目中播放了它的短片,电视的分辨率已经超过了标准媒体。 + +### 知识共享可以提供大量原材料 + +艺术家需要原材料。画家需要颜料、画笔和画布。雕塑家需要陶土和工具。数字内容编辑师需要数字内容,无论它是剪贴画还是音效或者是电子游戏里的现成的精灵。 + +数字媒介赋予了人们超能力,让一个人就能完成需要一组人员才能完成的工作。事实上,我们大部分都好高骛远。我们想做高大上的项目,想让我们的成果不论是视觉上还是听觉上都无与伦比。我们想塑造的是宏大的世界,紧张的情节,能引起共鸣的作品,但我们所拥有的时间精力和技能与之都不匹配,达不到想要的效果。 + +是知识共享再一次拯救了我们,在 [Freesound.org][5]、 [Openclipart.org][6]、 [OpenGameArt.org][7] 等等网站上都有大量的开放版权艺术材料。通过知识共享,艺术家可以使用各种他们自己没办法创造的原材料,来完成他们原本完不成的工作。 + +最神奇的是,不用自己投资,你放在网上给大家使用的原材料就能变成精美的作品,而这是你从没想过的。我在知识共享上面分享了很多音乐素材,它们现在用于无数的专辑和电子游戏里。有些人用了我的材料会通知我,有些是我自己发现的,所以这些材料的应用可能比我知道的还有多得多。有时我会偶然看到我亲手画的标志出现在我从没听说过的软件里。我见到过我为 [Opensource.com][8] 写的文章在别处发表,有的是论文的参考文献,白皮书或者参考资料中。 + +### 知识共享所代表的自由文化也是一种文化 + +“自由文化”这个说法过于累赘,文化,从概念上来说,是一个有机的整体。在这种文化中社会逐渐成长发展,从一个人到另一个。它是人与人之间的互动和思想交流。自由文化是自由缺失的现代世界里的特殊产物。 + +如果你也想对这样的局限进行反抗,想把你的思想、作品、你自己的文化分享给全世界的人,那么就来和我们一起,使用知识共享吧! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/creative-commons-real-world + +作者:[Seth Kenlon][a] +译者:[Valoniakim](https://github.com/Valoniakim) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:http://infestwisely.com +[2]:https://vimeo.com/creativecommons +[3]:https://peach.blender.org/ +[4]:https://durian.blender.org/ +[5]:http://freesound.org +[6]:http://openclipart.org +[7]:http://opengameart.org +[8]:https://opensource.com/ diff --git a/published/201812/20180128 Getting Linux Jobs.md b/published/201812/20180128 Getting Linux Jobs.md new file mode 100644 index 0000000000..9bfaf0e1e5 --- /dev/null +++ b/published/201812/20180128 Getting Linux Jobs.md @@ -0,0 +1,95 @@ +Linux 求职建议 +====== + +通过对招聘网站数据的仔细研究,我们发现,即使是非常有经验的 Linux 程序员,也会在面试中陷入困境。 + +这就导致了很多优秀并且有经验的人无缘无故地找不到合适的工作,因为如今的就业市场需要我们有一些手段来提高自己的竞争力。 + +我有两个同事和一个表哥,他们都有 RedHat 认证,管理过比较大的服务器机房,也都收到过前雇主的认真推荐。 + +可是,在他们应聘的时候,所有的这些证书、本身的能力、工作经验好像都没有起到任何作用,他们所面对的招聘广告是某人从技术词汇中临时挑选的一些“技能片段”所组成的。 + +现如今,礼貌变得过时了,**不回应**变成了发布招聘广告的公司的新沟通方式。 + +这同样也意味着大多公司的招聘或者人事可能会**错过**非常优秀的应聘者。 + +我之所以敢说的如此肯定,是因为现在招聘广告大多数看上去都非常的滑稽。 + +[Reallylinux.com][3] 另一位特约撰稿人 Walter ,发表过一篇关于 [招聘广告疯掉了][4] 的文章。 + +他说的也许是对的,可是我认为 Linux 工作应聘者可以通过注意招聘广告的**三个关键点**避免落入陷阱。 + +**首先**,很少会有 Linux 系统管理员的招聘广告只针对 Linux 有要求。 + +一定要注意很少有 Linux 系统管理员的职位是实际在服务器上跑 Linux的,反而,很多在搜索 “Linux 管理员” 得到的职位实际上是指大量的 *NX 操作系统的。 + +举个例子,有一则关于 **Linux 管理员** 的招聘广告: + +> 该职位需要为建立系统集成提供支持,尤其是 BSD 应用的系统安装... + +或者有一些其他的要求: + +> 有 Windows 系统管理经验的。 + +最为讽刺的是,如果你在应聘面试的时候表现出专注于 Linux 的话,你可能不会被聘用。 + +另外,如果你直接把 Linux 写在你的特长或者专业上,他们可能都不会仔细看你的简历,因为他们根本区分不了 UNIX、BSD、Linux。 + +最终的结果就是,如果你太老实,只在简历上写了 Linux,你可能会被直接过掉,但是如果你把 Linux 改成 UNIX/Linux 的话,可能会走得更远。 + +我有两个同事最后修改了他们的简历,然后获得了更好的面试机会,虽然依旧没有被聘用,因为大多数招聘广告其实已经内定人员了,这些招聘信息被放出来仅仅是为了表现出他们有招聘的想法。 + +**第二点**,公司里唯一在乎系统管理员职位的只有技术主管,其他人包括人事或管理层根本不关心这个。 + +我记得有一次开会旁听的时候,听见一个执行副总裁把服务器管理人员说成“一毛钱一打的人”,这种想法是多么的奇怪啊。 + +讽刺的是,等到邮件系统出故障,电话交换机连接时不时会断开,或者核心商业文件从企业内网中消失的时候,这些总裁又是最先打电话给系统管理员的。 + +或许如果他们不整天在电话留言中说那么多空话,或者不往邮件里塞满妻子的照片和旅行途中的照片的话,服务器可能就不会崩溃。 + +请注意,招聘 Linux 运维或者服务器管理员的广告被放出来是因为公司**技术层**认为有迫切的需求。你也不需要和人事或者公司高层聊什么,搞清楚谁是招聘的技术经理然后打电话给他们。 + +你需要直接联系他们因为“有些技术问题”是人事回答不了的,即使你只有 60 秒的时间可以和他们交流,你也必须抓住这个机会和真正有需求并且懂技术的人沟通。 + +那如果人事的漂亮 MM 不让你直接联系技术怎么办呢? 
+ +开始记得问人事一些技术性问题,比如说他们的 Linux 集群是如何建立的,它们运行在独立的虚拟机上吗?这些技术性的问题会让人事变得不耐烦,最后让你有机会问出“我能不能直接联系你们团队的技术人员”。 + +如果对方的回答是“应该可以”或者“稍后回复你”,那么他们可能已经在两周前就已经计划好了找一个人来填补这个空缺,比如说人事部员工的未婚夫。**他们只是不希望看起来太像裙带主义,而是带有一点利己主义的不确定主义。** + +所以一定要记得花点时间弄清楚到底谁是发布招聘广告的直接**技术**负责人,然后和他们聊一聊,这可能会让你少一番胡扯并且让你更有可能应聘成功。 + +**第三点**,现在的招聘广告很少有完全真实的内容了。 + +我以前见过一个招聘具有高级别专家也不会有的专门知识的初级系统管理员的广告,他们的计划是列出公司的发展计划蓝图,然后找到应聘者。 + +在这种情况下,你应聘 Linux 管理员职位应该提供几个关键性信息,例如工作经验和相关证书。 + +诀窍在于,用这些关键词尽量装点你的简历,以匹配他们的招聘信息,这样他们几乎不可能发现你缺失了哪个关键词。 + +这并不一定会让你成功找到一份工作,但它可以让你获得一次面试机会,这也算是一个巨大的进步。 + +通过理解和应用以上三点,或许可以让那些寻求 Linux 管理员工作的人能够比那些只有一线地狱机会的人领先一步。 + +即使这些建议不能让你马上得到面试机会,你也可以利用这些经验和意识去参加贸易展或公司主办的技术会议等活动。 + +我强烈建议你们也经常参加这种活动,尤其是当它们比较近的话,可以给你一个扩展人脉的机会。 + +请记住,如今的职业人脉已经失去了原来的意义了,现在只是可以用来获取“哪些公司实际上在招聘、哪些公司只是为了给股东带来增长的表象而在职位方面撒谎”的小道消息。 + + +-------------------------------------------------------------------------------- + +via: http://reallylinux.com/docs/gettinglinuxjobs.shtml + +作者:[Andrea W.Codingly][a] +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://reallylinux.com +[1]:http://www.reallylinux.com +[2]:http://reallylinux.com/docs/linuxrecessionproof.shtml +[3]:http://reallylinux.com +[4]:http://reallylinux.com/docs/wantadsmad.shtml diff --git a/published/201812/20180130 Graphics and music tools for game development.md b/published/201812/20180130 Graphics and music tools for game development.md new file mode 100644 index 0000000000..7e77e30d67 --- /dev/null +++ b/published/201812/20180130 Graphics and music tools for game development.md @@ -0,0 +1,179 @@ +用于游戏开发的图形和音乐工具 +====== +> 要在三天内打造一个可玩的游戏,你需要一些快速而稳定的好工具。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Life_opengame.png?itok=JPxruL3k) + +在十月初,我们的俱乐部马歇尔大学的 [Geeks and Gadgets][1] 参加了首次 [Open Jam][2],这是一个庆祝最佳开源工具的游戏 Jam。游戏 Jam 是一种活动,参与者以团队协作的方式来开发有趣的计算机游戏。Jam 一般都很短,仅有三天,并且非常累。Opensource.com 在八月下旬[发布了][3] Open Jam 活动,足有 [45 支游戏][4] 进入到了竞赛中。 + +我们的俱乐部希望在我们的项目中创建和使用开放源码软件,所以 Open Jam 自然是我们想要参与的 Jam 了。我们提交的游戏是一个实验性的游戏,名为 [Mark My Words][5]。我们使用了多种自由和开放源码 (FOSS) 工具来开发它;在这篇文章中,我们将讨论一些我们使用的工具和我们注意到可能有潜在阻碍的地方。 + +### 音频工具 + +#### MilkyTracker + +[MilkyTracker][6] 是一个可用于编曲老式视频游戏中的音乐的软件包。它是一种[音乐声道器][7]music tracker,是一个强大的 MOD 和 XM 文件创建器,带有基于特征网格的模式编辑器。在我们的游戏中,我们使用它来编曲大多数的音乐片段。这个程序最好的地方是,它比我们其它的大多数工具消耗更少的硬盘空间和内存。虽然如此,MilkyTracker 仍然非常强大。 + +![](https://opensource.com/sites/default/files/u128651/mtracker.png) + +其用户界面需要一会来习惯,这里有对一些想试用 MilkyTracker 的音乐家的一些提示: + + * 转到 “Config > Misc.” ,设置编辑模式的控制风格为 “MilkyTracker”,这将给你提供几乎全部现代键盘快捷方式。 + * 用 `Ctrl+Z` 撤销 + * 用 `Ctrl+Y` 重做 + * 用空格键切换模式编辑方式 + * 用退格键删除先前的音符 + * 用插入键来插入一行 + * 默认情况下,一个音符将持续作用,直到它在该频道上被替换。你可以明确地结束一个音符,通过使用一个反引号(`)键来插入一个 KeyOff 音符 + * 在你开始谱写乐曲前,你需要创建或查找采样。我们建议在诸如 [Freesound][9] 或 [ccMixter][10] 这样的网站上查找采用 [Creative Commons][8] 协议的采样, + +另外,把 [MilkyTracker 文档页面][11] 放在手边。它含有数不清的教程和手册的链接。一个好的起点是在该项目 wiki 上的 [MilkyTracker 指南][12]。 + +#### LMMS + +我们的两个音乐家使用多用途的现代音乐创建工具 [LMMS][13]。它带有一个绝妙的采样和效果库,以及多种多样的灵活的插件来生成独特的声音。LMMS 的学习曲线令人吃惊的低,在某种程度上是因为其好用的节拍/低音线编辑器。 + +![](https://opensource.com/sites/default/files/u128651/lmms_plugins.png) + +我们对于想试试 LMMS 的音乐家有一个建议:使用插件。对于 [chiptune][14]式音乐,我们推荐 [sfxr][15]、[BitInvader][16] 和 [FreeBoy][17]。对于其它风格,[ZynAddSubFX][18] 是一个好的选择。它配备了各种合成仪器,可以根据您的需要进行更改。 + +### 图形工具 + +#### Tiled + +在开放源码游戏开发中,[Tiled][19] 是一个流行的贴片地图编辑器。我们使用它为来为我们在游戏场景中组合连续的、复古式的背景。 + +![](https://opensource.com/sites/default/files/u128651/tiled.png) + +Tiled 可以导出地图为 
XML、JSON 或普通的图片。它是稳定的、跨平台的。 + +Tiled 的功能之一允许你在地图上定义和放置随意的游戏对象,例如硬币和提升道具,但在 jam 期间我们没有使用它。你需要做的全部是以贴片集的方式加载对象的图像,然后使用“插入平铺”来放置它们。 + +一般来说,对于需要一个地图编辑器的项目,Tiled 是我们所推荐的软件中一个不可或缺的部分。 + +#### Piskel + +[Piskel][20] 是一个像素艺术编辑器,它的源文件代码以 [Apache 2.0 协议][21] 发布。在这次 Jam 期间,们的大多数的图像资源都使用 Piskel 来处理,我们当然也将在未来的工程中使用它。 + +在这个 Jam 期间,Piskel 极大地帮助我们的两个功能是洋葱皮Onion skin精灵序列图spritesheet导出。 + +##### 洋葱皮 + +洋葱皮功能将使 Piskel 以虚影显示你编辑的动画的前一帧和后一帧的,像这样: + +![](https://opensource.com/sites/default/files/u128651/onionshow.gif) + +洋葱皮是很方便的,因为它适合作为一个绘制指引和帮助你在整个动画进程中保持角色的一致形状和体积。 要启用它,只需单击屏幕右上角预览窗口下方的洋葱形图标即可。 + +![](https://opensource.com/sites/default/files/u128651/onionenable.png) + +##### 精灵序列图导出 + +Piskel 将动画导出为精灵序列图的能力也非常有用。精灵序列图是一个包含动画所有帧的光栅图像。例如,这是我们从 Piskel 导出的精灵序列图: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist.png) + +该精灵序列图包含两帧。一帧位于图像的上半部分,另一帧位于图像的下半部分。精灵序列图通过从单个文件加载整个动画,大大简化了游戏的代码。这是上面精灵序列图的动画版本: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist-anim.gif) + +##### Unpiskel.py + +在 Jam 期间,我们很多次想批量转换 Piskel 文件到 PNG 文件。由于 Piskel 文件格式基于 JSON,我们写一个基于 GPLv3 协议的名为 [unpiskel.py][22] 的 Python 小脚本来做转换。 + +它像这样被调用的: + +``` +python unpiskel.py input.piskel +``` + +这个脚本将从一个 Piskel 文件(这里是 `input.piskel`)中提取 PNG 数据帧和图层,并将它们各自存储。这些文件采用模式 `NAME_XX_YY.png` 命名,在这里 `NAME` 是 Piskel 文件的缩减名称,`XX` 是帧的编号,`YY` 是层的编号。 + +因为脚本可以从一个 shell 中调用,它可以用在整个文件列表中。 + +``` +for f in *.piskel; do python unpiskel.py "$f"; done +``` + +### Python、Pygame 和 cx_Freeze + +#### Python 和 Pygame + +我们使用 [Python][23] 语言来制作我们的游戏。它是一个脚本语言,通常被用于文本处理和桌面应用程序开发。它也可以用于游戏开发,例如像 [Angry Drunken Dwarves][24] 和 [Ren'Py][25] 这样的项目所展示的。这两个项目都使用一个称为 [Pygame][26] 的 Python 库来显示图形和产生声音,所以我们也决定在 Open Jam 中使用这个库。 + +Pygame 被证明是既稳定又富有特色,并且它对我们创建的街机式游戏来说是很棒的。在低分辨率时,库的速度足够快的,但是在高分辨率时,它只用 CPU 的渲染开始变慢。这是因为 Pygame 不使用硬件加速渲染。然而,开发者可以充分利用 OpenGL 基础设施的优势。 + +如果你正在寻找一个好的 2D 游戏编程库,Pygame 是值得密切注意的一个。它的网站有 [一个好的教程][27] 可以作为起步。务必看看它! + +#### cx_Freeze + +准备发行我们的游戏是有趣的。我们知道,Windows 用户不喜欢装一套 Python,并且要求他们来安装它可能很过分。除此之外,他们也可能必须安装 Pygame,在 Windows 上,这不是一个简单的工作。 + +很显然:我们必须放置我们的游戏到一个更方便的格式中。很多其他的 Open Jam 参与者使用专有的游戏引擎 Unity,它能够使他们的游戏在网页浏览器中来玩。这使得它们非常方便地来玩。便利性是一个我们的游戏中根本不存在的东西。但是,感谢生机勃勃的 Python 生态系统,我们有选择。已有的工具可以帮助 Python 程序员将他们的游戏做成 Windows 上的发布版本。我们考虑过的两个工具是 [cx_Freeze][28] 和 [Pygame2exe][29](它使用 [py2exe][30])。我们最终决定用 cx_Freeze,因为它是跨平台的。 + +在 cx_Freeze 中,你可以把一个单脚本游戏打包成发布版本,只要在 shell 中运行一个命令,像这样: + +``` +cxfreeze main.py --target-dir dist +``` + +`cxfreeze` 的这个调用将把你的脚本(这里是 `main.py`)和在你系统上的 Python 解释器捆绑到到 `dist` 目录。一旦完成,你需要做的是手动复制你的游戏的数据文件到 `dist` 目录。你将看到,`dist` 目录包含一个可以运行来开始你的游戏的可执行文件。 + +这里有使用 cx_Freeze 的更复杂的方法,允许你自动地复制数据文件,但是我们发现简单的调用 `cxfreeze` 足够满足我们的需要。感谢这个工具,我们使我们的游戏玩起来更便利一些。 + +### 庆祝开源 + +Open Jam 是庆祝开源模式的软件开发的重要活动。这是一个分析开源工具的当前状态和我们在未来工作中需求的一个机会。对于游戏开发者探求其工具的使用极限,学习未来游戏开发所必须改进的地方,游戏 Jam 或许是最好的时机。 + +开源工具使人们能够在不损害自由的情况下探索自己的创造力,而无需预先投入资金。虽然我们可能不会成为专业的游戏开发者,但我们仍然能够通过我们的简短的实验性游戏 [Mark My Words][5] 获得一点点体验。它是一个以语言学为主题的游戏,描绘了虚构的书写系统在其历史中的演变。还有很多其他不错的作品提交给了 Open Jam,它们都值得一试。 真的,[去看看][31]! + +在本文结束前,我们想要感谢所有的 [参加俱乐部的成员][32],使得这次经历真正的有价值。我们也想要感谢 [Michael Clayton][33]、[Jared Sprague][34] 和 [Opensource.com][35] 主办 Open Jam。简直酷毙了。 + +现在,我们对读者提出了一些问题。你是一个 FOSS 游戏开发者吗?你选择的工具是什么?务必在下面留下一个评论! 
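
上文 cx_Freeze 一节只演示了最简单的 `cxfreeze` 调用,并提到还有能自动复制数据文件的更复杂用法。下面是一个极简的 `setup.py` 示意(其中 `assets/` 资源目录、包名与版本号均为假设),仅用来说明这种做法:

```
# setup.py —— 用 cx_Freeze 的 setup() 打包,并把数据文件一并复制到发布目录
# main.py 沿用正文中的脚本名,assets/ 为假设的游戏资源目录
from cx_Freeze import setup, Executable

setup(
    name="MarkMyWords",
    version="0.1",
    description="Open Jam game build",
    # include_files 会把列出的文件或目录复制进 build 输出
    options={"build_exe": {"include_files": ["assets/"]}},
    executables=[Executable("main.py")],
)
```

运行 `python setup.py build` 即可在 `build/` 目录下得到带有可执行文件和游戏资源的发布版本。
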
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/graphics-music-tools-game-dev + +作者:[Charlie Murphy][a] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/rsg167 +[1]:http://mugeeks.org/ +[2]:https://itch.io/jam/open-jam-1 +[3]:https://opensource.com/article/17/8/open-jam-announcement +[4]:https://opensource.com/article/17/11/open-jam +[5]:https://mugeeksalpha.itch.io/mark-omy-words +[6]:http://milkytracker.titandemo.org/ +[7]:https://en.wikipedia.org/wiki/Music_tracker +[8]:https://creativecommons.org/ +[9]:https://freesound.org/ +[10]:http://ccmixter.org/view/media/home +[11]:http://milkytracker.titandemo.org/documentation/ +[12]:https://github.com/milkytracker/MilkyTracker/wiki/MilkyTracker-Guide +[13]:https://lmms.io/ +[14]:https://en.wikipedia.org/wiki/Chiptune +[15]:https://github.com/grimfang4/sfxr +[16]:https://lmms.io/wiki/index.php?title=BitInvader +[17]:https://lmms.io/wiki/index.php?title=FreeBoy +[18]:http://zynaddsubfx.sourceforge.net/ +[19]:http://www.mapeditor.org/ +[20]:https://www.piskelapp.com/ +[21]:https://github.com/piskelapp/piskel/blob/master/LICENSE +[22]:https://raw.githubusercontent.com/MUGeeksandGadgets/MarkMyWords/master/tools/unpiskel.py +[23]:https://www.python.org/ +[24]:https://www.sacredchao.net/~piman/angrydd/ +[25]:https://renpy.org/ +[26]:https://www.Pygame.org/ +[27]:http://Pygame.org/docs/tut/PygameIntro.html +[28]:https://anthony-tuininga.github.io/cx_Freeze/ +[29]:https://Pygame.org/wiki/Pygame2exe +[30]:http://www.py2exe.org/ +[31]:https://itch.io/jam/open-jam-1/entries +[32]:https://github.com/MUGeeksandGadgets/MarkMyWords/blob/3e1e8aed12ebe13acccf0d87b06d4f3bd124b9db/README.md#credits +[33]:https://twitter.com/mwcz +[34]:https://twitter.com/caramelcode +[35]:https://opensource.com/ diff --git a/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md new file mode 100644 index 0000000000..4272904f5c --- /dev/null +++ b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md @@ -0,0 +1,869 @@ +编写你的第一行 HTML 代码,来帮助蝙蝠侠写一封情书 +====== + +![](https://cdn-images-1.medium.com/max/1000/1*kZxbQJTdb4jn_frfqpRg9g.jpeg) + +在一个美好的夜晚,你的肚子拒绝消化你在晚餐吃的大块披萨,所以你不得不在睡梦中冲进洗手间。 + +在浴室里,当你在思考为什么会发生这种情况时,你听到一个来自通风口的低沉声音:“嘿,我是蝙蝠侠。” + +这时,你会怎么做呢? 
+ +在你恐慌并处于关键时刻之前,蝙蝠侠说:“我需要你的帮助。我是一个超级极客,但我不懂 HTML。我需要用 HTML 写一封情书,你愿意帮助我吗?” + +谁会拒绝蝙蝠侠的请求呢,对吧?所以让我们用 HTML 来写一封蝙蝠侠的情书。 + +### 你的第一个 HTML 文件 + +HTML 网页与你电脑上的其它文件一样。就同一个 .doc 文件以 MS Word 打开,.jpg 文件在图像查看器中打开一样,一个 .html 文件在浏览器中打开。 + +那么,让我们来创建一个 .html 文件。你可以在 Notepad 或其它任何编辑器中完成此任务,但我建议使用 VS Code。[在这里下载并安装 VS Code][2]。它是免费的,也是我唯一喜欢的微软产品。 + +在系统中创建一个目录,将其命名为 “HTML Practice”(不带引号)。在这个目录中,再创建一个名为 “Batman's Love Letter”(不带引号)的目录,这将是我们的项目根目录。这意味着我们所有与这个项目相关的文件都会在这里。 + +打开 VS Code,按下 `ctrl+n` 创建一个新文件,按下 `ctrl+s` 保存文件。切换到 “Batman's Love Letter” 文件夹并将其命名为 “loveletter.html”,然后单击保存。 + +现在,如果你在文件资源管理器中双击它,它将在你的默认浏览器中打开。我建议使用 Firefox 来进行 web 开发,但 Chrome 也可以。 + +让我们将这个过程与我们已经熟悉的东西联系起来。还记得你第一次拿到电脑吗?我做的第一件事是打开 MS Paint 并绘制一些东西。你在 Paint 中绘制一些东西并将其另存为图像,然后你可以在图像查看器中查看该图像。之后,如果要再次编辑该图像,你在 Paint 中重新打开它,编辑并保存它。 + +我们目前的流程非常相似。正如我们使用 Paint 创建和编辑图像一样,我们使用 VS Code 来创建和编辑 HTML 文件。就像我们使用图像查看器查看图像一样,我们使用浏览器来查看我们的 HTML 页面。 + +### HTML 中的段落 + +我们有一个空的 HTML 文件,以下是蝙蝠侠想在他的情书中写的第一段。 + +“After all the battles we fought together, after all the difficult times we saw together, and after all the good and bad moments we’ve been through, I think it’s time I let you know how I feel about you.” + +复制这些到 VS Code 中的 loveletter.html。单击 “View -> Toggle Word Wrap (alt+z)” 自动换行。 + +保存并在浏览器中打开它。如果它已经打开,单击浏览器中的刷新按钮。 + +瞧!那是你的第一个网页! + +我们的第一段已准备就绪,但这不是在 HTML 中编写段落的推荐方法。我们有一种特定的方法让浏览器知道一个文本是一个段落。 + +如果你用 `
<p>` 和 `</p>` 来包裹文本,那么浏览器将识别 `<p>` 和 `</p>` 中的文本是一个段落。我们这样做:

```
<p>After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.</p>
```

通过在 `<p>` 和 `</p>` 中编写段落,你创建了一个 HTML 元素。一个网页就是 HTML 元素的集合。

让我们首先来认识一些术语:`<p>` 是开始标签,`</p>` 是结束标签,“p” 是标签名称。元素开始和结束标签之间的文本是元素的内容。

### “style” 属性

在上面,你将看到文本覆盖屏幕的整个宽度。

我们不希望这样。没有人想要阅读这么长的行。让我们设定段落宽度为 550px。

我们可以通过使用元素的 `style` 属性来实现。你可以在其 `style` 属性中定义元素的样式(例如,在我们的示例中为宽度)。以下行将在 `p` 元素上创建一个空样式属性:

```
<p style="">...</p>
```

你看到那个空的 `""` 了吗?这就是我们定义元素外观的地方。现在我们要将宽度设置为 550px。我们这样做:

```
<p style="width:550px;">
	After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

我们将 `width` 属性设置为 `550px`,用冒号 `:` 分隔,以分号 `;` 结束。

另外,注意我们如何将 `<p>` 和 `</p>` 放在单独的行中,文本内容用一个制表符缩进。像这样设置代码使其更具可读性。

### HTML 中的列表

接下来,蝙蝠侠希望列出他所钦佩的人的一些优点,例如:

```
You complete my darkness with your light. I love:
- the way you see good in the worst things
- the way you handle emotionally difficult situations
- the way you look at Justice
I have learned a lot from you. You have occupied a special place in my heart over time.
```

这看起来很简单。

让我们继续,在 `<p>` 下面复制所需的文本:

```
<p>
	After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>

<p>
	You complete my darkness with your light. I love:
	- the way you see good in the worse
	- the way you handle emotionally difficult situations
	- the way you look at Justice
	I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```

保存并刷新浏览器。

![](https://cdn-images-1.medium.com/max/1000/1*M0Ae5ZpRTucNyucfaaz4uw.jpeg)

哇!这里发生了什么,我们的列表在哪里?

如果你仔细观察,你会发现没有显示换行符。在代码中我们在新的一行中编写列表项,但这些项在浏览器中显示在一行中。

如果你想在 HTML(新行)中插入换行符,你必须使用 `<br>`。让我们来使用 `<br>`,看看它长什么样:

```
<p>
	After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>

<p>
	You complete my darkness with your light. I love:<br>
	- the way you see good in the worse<br>
	- the way you handle emotionally difficult situations<br>
	- the way you look at Justice<br>
	I have learned a lot from you. You have occupied a special place in my heart over the time.
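	<!-- 每行末尾的 <br> 会强制换行;去掉它们的话,这几行会被浏览器合并到同一行显示 -->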
</p>
```

保存并刷新:

![](https://cdn-images-1.medium.com/max/1000/1*Mj4Sr_jUliidxFpEtu0pXw.jpeg)

好的,现在它看起来就像我们想要的那样!

另外,注意我们没有写一个 `</br>`。有些标签不需要结束标签(它们被称为自闭合标签)。

还有一件事:我们没有在两个段落之间使用 `<br>`,但第二个段落仍然是从一个新行开始,这是因为 `<p>` 元素会自动插入换行符。

我们使用纯文本编写列表,但是有两个标签可以供我们使用来达到相同的目的:`<ul>` 和 `<li>`。

让我们解释一下名字的意思:ul 代表无序列表(Unordered List),li 代表列表项目(List Item)。让我们使用它们来展示我们的列表:

```

    + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

    +``` + +``` +

    + You complete my darkness with your light. I love: +

      +
    • the way you see good in the worse
    • +
    • the way you handle emotionally difficult situations
    • +
    • the way you look at Justice
    • +
    + I have learned a lot from you. You have occupied a special place in my heart over the time. +

    +``` + +在复制代码之前,注意差异部分: + +* 我们删除了所有的 `
    `,因为每个 `
  • ` 会自动显示在新行中 +* 我们将每个列表项包含在 `
  • ` 和 `
  • ` 之间 +* 我们将所有列表项的集合包裹在 `
      ` 和 `
    ` 之间 +* 我们没有像 `

    ` 元素那样定义 `

      ` 元素的宽度。这是因为 `
        ` 是 `

        ` 的子节点,`

        ` 已经被约束到 550px,所以 `

          ` 不会超出这个范围。 + +让我们保存文件并刷新浏览器以查看结果: + +![](https://cdn-images-1.medium.com/max/1000/1*aPlMpYVZESPwgUO3Iv-qCA.jpeg) + +你会立即注意到在每个列表项之前显示了重点标志。我们现在不需要在每个列表项之前写 “-”。 + +经过仔细检查,你会注意到最后一行超出 550px 宽度。这是为什么?因为 HTML 不允许 `
            ` 元素出现在 `

            ` 元素中。让我们将第一行和最后一行放在单独的 `

            ` 元素中: + +``` +

            + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

            +``` + +``` +

            + You complete my darkness with your light. I love: +

            +``` + +``` +
              +
            • the way you see good in the worse
            • +
            • the way you handle emotionally difficult situations
            • +
            • the way you look at Justice
            • +
            +``` + +``` +

            + I have learned a lot from you. You have occupied a special place in my heart over the time. +

            +``` + +保存并刷新。 + +注意,这次我们还定义了 `
              ` 元素的宽度。那是因为我们现在已经将 `
                ` 元素放在了 `

                ` 元素之外。 + +定义情书中所有元素的宽度会变得很麻烦。我们有一个特定的元素用于此目的:`

                ` 元素。一个 `
                ` 元素就是一个通用容器,用于对内容进行分组,以便轻松设置样式。 + +让我们用 `
                ` 元素包装整个情书,并为其赋予宽度:550px 。 + +``` +
                +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +棒极了,我们的代码现在看起来简洁多了。 + +### HTML 中的标题 + +到目前为止,蝙蝠侠对结果很高兴,他希望在情书上标题。他想写一个标题: “Bat Letter”。当然,你已经看到这个名字了,不是吗?:D + +你可以使用 `

                `、`

                `、`

                `、`

                `、`

                ` 和 `
                ` 标签来添加标题,`

                ` 是最大的标题和最主要的标题,`

                ` 是最小的标题。 + +![](https://cdn-images-1.medium.com/max/1000/1*Ud-NzfT-SrMgur1WX4LCkQ.jpeg) + +让我们在第二段之前使用 `

                ` 做主标题和一个副标题: + +``` +
                +

                Bat Letter

                +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +保存,刷新。 + +![](https://cdn-images-1.medium.com/max/1000/1*rzyIl-gHug3nQChqfscU3w.jpeg) + +### HTML 中的图像 + +我们的情书尚未完成,但在继续之前,缺少一件大事:蝙蝠侠标志。你见过是蝙蝠侠的东西但没有蝙蝠侠的标志吗? + +并没有。 + +所以,让我们在情书中添加一个蝙蝠侠标志。 + +在 HTML 中包含图像就像在一个 Word 文件中包含图像一样。在 MS Word 中,你到 “菜单 -> 插入 -> 图像 -> 然后导航到图像位置为止 -> 选择图像 -> 单击插入”。 + +在 HTML 中,我们使用 `` 标签让浏览器知道我们需要加载的图像,而不是单击菜单。我们在 `src` 属性中写入文件的位置和名称。如果图像在项目根目录中,我们可以简单地在 `src` 属性中写入图像文件的名称。 + +在我们深入编码之前,从[这里][3]下载蝙蝠侠标志。你可能希望裁剪图像中的额外空白区域。复制项目根目录中的图像并将其重命名为 “bat-logo.jpeg”。 + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +我们在第 3 行包含了 `` 标签。这个标签也是一个自闭合的标签,所以我们不需要写 ``。在 `src` 属性中,我们给出了图像文件的名称。这个名称应与图像名称完全相同,包括扩展名(.jpeg)及其大小写。 + +保存并刷新,查看结果。 + +![](https://cdn-images-1.medium.com/max/1000/1*uMNWAISOACJlzDOONcrGXw.jpeg) + +该死的!刚刚发生了什么? + +当使用 `` 标签包含图像时,默认情况下,图像将以其原始分辨率显示。在我们的例子中,图像比 550px 宽得多。让我们使用 `style` 属性定义它的宽度: + + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +你会注意到,这次我们定义宽度使用了 “%” 而不是 “px”。当我们在 “%” 中定义宽度时,它将占据父元素宽度的百分比。因此,100% 的 550px 将为我们提供 550px。 + +保存并刷新,查看结果。 + +![](https://cdn-images-1.medium.com/max/1000/1*5c0ngx3BFVlyyP6UNtfYyg.jpeg) + +太棒了!这让蝙蝠侠的脸露出了羞涩的微笑 :)。 + +### HTML 中的粗体和斜体 + +现在蝙蝠侠想在最后几段中承认他的爱。他有以下文本供你用 HTML 编写: + +“I have a confession to make + +It feels like my chest _does_ have a heart. You make my heart beat. Your smile brings a smile to my face, your pain brings pain to my heart. + +I don’t show my emotions, but I think this man behind the mask is falling for you.” + +当阅读到这里时,你会问蝙蝠侠:“等等,这是给谁的?”蝙蝠侠说: + +“这是给超人的。” + +![](https://cdn-images-1.medium.com/max/1000/1*UNDvfIZQJ1Q_goHc-F-IPA.jpeg) + +你说:哦!我还以为是给神奇女侠的呢。 + +蝙蝠侠说:不,这是给超人的,请在最后写上 “I love you Superman.”。 + +好的,我们来写: + + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +这封信差不多完成了,蝙蝠侠另外想再做两次改变。蝙蝠侠希望在最后段落的第一句中的 “does” 一词是斜体,而 “I love you Superman” 这句话是粗体的。 + +我们使用 `` 和 `` 以斜体和粗体显示文本。让我们来更新这些更改: + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +![](https://cdn-images-1.medium.com/max/1000/1*6hZdQJglbHUcEEHzouk2eA.jpeg) + +### HTML 中的样式 + +你可以通过三种方式设置样式或定义 HTML 元素的外观: + +* 内联样式:我们使用元素的 `style` 属性来编写样式。这是我们迄今为止使用的,但这不是一个好的实践。 +* 嵌入式样式:我们在由 `` 包裹的 “style” 元素中编写所有样式。 +* 链接样式表:我们在具有 .css 扩展名的单独文件中编写所有元素的样式。此文件称为样式表。 + +让我们来看看如何定义 `
                ` 的内联样式: + +``` +
                +``` + +我们可以在 `` 里面写同样的样式: + +``` +div{ + width:550px; +} +``` + +在嵌入式样式中,我们编写的样式是与元素分开的。所以我们需要一种方法来关联元素及其样式。第一个单词 “div” 就做了这样的活。它让浏览器知道花括号 `{...}` 里面的所有样式都属于 “div” 元素。由于这种语法确定要应用样式的元素,因此它称为一个选择器。 + +我们编写样式的方式保持不变:属性(`width`)和值(`550px`)用冒号(`:`)分隔,以分号(`;`)结束。 + +让我们从 `
                ` 和 `` 元素中删除内联样式,将其写入 ` +``` + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +保存并刷新,结果应保持不变。 + +但是有一个大问题,如果我们的 HTML 文件中有多个 `
                ` 和 `` 元素该怎么办?这样我们在 ` +``` + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +HTML 已经准备好了嵌入式样式。 + +但是,你可以看到,随着我们包含越来越多的样式,`` 将变得很大。这可能很快会混乱我们的主 HTML 文件。 + +因此,让我们更进一步,通过将 ``。 + +我们需要使用 HTML 文件中的 `` 标签来将新创建的 CSS 文件链接到 HTML 文件。以下是我们如何做到这一点: + +``` + +``` + +我们使用 `` 元素在 HTML 文档中包含外部资源,它主要用于链接样式表。我们使用的三个属性是: + +* `rel`:关系。链接文件与文档的关系。具有 .css 扩展名的文件称为样式表,因此我们保留 rel=“stylesheet”。 +* `type`:链接文件的类型;对于一个 CSS 文件来说它是 “text/css”。 +* `href`:超文本参考。链接文件的位置。 + +link 元素的结尾没有 ``。因此,`` 也是一个自闭合的标签。 + +``` + +``` + +如果只是得到一个女朋友,那么很容易:D + +可惜没有那么简单,让我们继续前进。 + +这是我们 “loveletter.html” 的内容: + +``` + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +“style.css” 内容: + +``` +#letter-container{ + width:550px; +} +#header-bat-logo{ + width:100%; +} +``` + +保存文件并刷新,浏览器中的输出应保持不变。 + +### 一些手续 + +我们的情书已经准备好给蝙蝠侠,但还有一些正式的片段。 + +与其他任何编程语言一样,HTML 自出生以来(1990 年)经历过许多版本,当前版本是 HTML5。 + +那么,浏览器如何知道你使用哪个版本的 HTML 来编写页面呢?要告诉浏览器你正在使用 HTML5,你需要在页面顶部包含 ``。对于旧版本的 HTML,这行不同,但你不需要了解它们,因为我们不再使用它们了。 + +此外,在之前的 HTML 版本中,我们曾经将整个文档封装在 `` 标签内。整个文件分为两个主要部分:头部在 `` 里面,主体在 `` 里面。这在 HTML5 中不是必须的,但由于兼容性原因,我们仍然这样做。让我们用 ``, ``、 `` 和 `` 更新我们的代码: + +``` + + + + + + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                + + +``` + +主要内容在 `` 里面,元信息在 `` 里面。所以我们把 `
                ` 保存在 `` 里面并加载 `` 里面的样式表。 + +保存并刷新,你的 HTML 页面应显示与之前相同的内容。 + +### HTML 的标题 + +我发誓,这是最后一次改变。 + +你可能已经注意到选项卡的标题正在显示 HTML 文件的路径: + +![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg) + +我们可以使用 `` 标签来定义 HTML 文件的标题。标题标签也像链接标签一样在 `<head>` 内部。让我们我们在标题中加上 “Bat Letter”: + +``` +<!DOCTYPE html> +<html> +<head> + <title>Bat Letter + + + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                + + +``` + +保存并刷新,你将看到在选项卡上显示的是 “Bat Letter” 而不是文件路径。 + +蝙蝠侠的情书现在已经完成。 + +恭喜!你用 HTML 制作了蝙蝠侠的情书。 + +![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg) + +### 我们学到了什么 + +我们学习了以下新概念: + + * 一个 HTML 文档的结构 + * 在 HTML 中如何写元素(`

                `) + * 如何使用 style 属性在元素内编写样式(这称为内联样式,尽可能避免这种情况) + * 如何在 `` 中编写元素的样式(这称为嵌入式样式) + * 在 HTML 中如何使用 `` 在单独的文件中编写样式并链接它(这称为链接样式表) + * 什么是标签名称,属性,开始标签和结束标签 + * 如何使用 id 属性为一个元素赋予 id + * CSS 中的标签选择器和 id 选择器 + +我们学习了以下 HTML 标签: + + * `

                `:用于段落 + * `
                `:用于换行 + * `

                  `、`
                • `:显示列表 + * `
                  `:用于分组我们信件的元素 + * `

                  `、`

                  `:用于标题和子标题 + * ``:用于插入图像 + * ``、``:用于粗体和斜体文字样式 + * `. - -* Linked stylesheet: We write styles of all the elements in a separate file with .css extension. This file is called Stylesheet. - -Let’s have a look at how we defined the inline style of the “div” until now: - -``` -
                  -``` - -We can write this same style inside `` like this: - -``` -div{ - width:550px; -} -``` - -In embedded styling, the styles we write are separate from the elements. So we need a way to relate the element and its style. The first word “div” does exactly that. It lets the browser know that whatever style is inside the curly braces `{…}` belongs to the “div” element. Since this phrase determines which element to apply the style to, it’s called a selector. - -The way we write style remains same: property(width) and value(550px) separated by a colon(:) and ended by a semicolon(;). - -Let’s remove inline style from our “div” and “img” element and write it inside the ` -``` - -``` -
                  -

                  Bat Letter

                  - -

                  - After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. -

                  -``` - -``` -

                  You are the light of my life

                  -

                  - You complete my darkness with your light. I love: -

                  -
                    -
                  • the way you see good in the worse
                  • -
                  • the way you handle emotionally difficult situations
                  • -
                  • the way you look at Justice
                  • -
                  -

                  - I have learned a lot from you. You have occupied a special place in my heart over the time. -

                  -

                  I have a confession to make

                  -

                  - It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. -

                  -

                  - I don't show my emotions, but I think this man behind the mask is falling for you. -

                  -

                  I love you Superman.

                  -

                  - Your not-so-secret-lover,
                  - Batman -

                  -
                  -``` - -Save and refresh, and the result should remain the same. - -There is one big problem though — what if there is more than one “div” and “img” element in our HTML file? The styles that we defined for div and img inside the “style” element will apply to every div and img on the page. - -If you add another div in your code in the future, then that div will also become 550px wide. We don’t want that. - -We want to apply our styles to the specific div and img that we are using right now. To do this, we need to give our div and img element unique ids. Here’s how you can give an id to an element using its “id” attribute: - -``` -
                  -``` - -and here’s how to use this id in our embedded style as a selector: - -``` -#letter-container{ - ... -} -``` - -Notice the “#” symbol. It indicates that it is an id, and the styles inside {…} should apply to the element with that specific id only. - -Let’s apply this to our code: - -``` - -``` - -``` -

Let's apply this to our code:

```
<style>
  #letter-container{
    width:550px;
  }
  #header-bat-logo{
    width:100%;
  }
</style>
```

```
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<h2>I love you Superman.</h2>
<p>
Your not-so-secret-lover,
<br>
Batman
</p>
</div>
```
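
Compared with the previous version, only the selectors and the two elements that now carry ids changed; the rest of the letter did not move. Pulled out on their own, the changed lines look like this:

```
<style>
  #letter-container{ width:550px; }
  #header-bat-logo{ width:100%; }
</style>

<div id="letter-container">
<img id="header-bat-logo" src="bat-logo.jpeg">
```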

Our HTML is ready with embedded styling.

However, you can see that as we include more styles, the `<style></style>` element will get bigger. This can quickly clutter our main HTML file. So let's go one step further and use linked styling by copying the content inside our style tag to a new file.

Create a new file in the project root directory and save it as style.css:

```
#letter-container{
  width:550px;
}
#header-bat-logo{
  width:100%;
}
```

We don't need to write `<style></style>` in our CSS file.

We need to link our newly created CSS file to our HTML file using the `<link>` tag in our HTML file. Here's how we can do that:

```
<link rel="stylesheet" type="text/css" href="style.css">
```

We use the link element to include external resources inside your HTML document. It is mostly used to link stylesheets. The three attributes that we are using are:

* rel: Relation. What relationship the linked file has to the document. The file with the .css extension is called a stylesheet, and so we keep rel="stylesheet".
* type: the Type of the linked file; it's "text/css" for a CSS file.
* href: Hypertext Reference. Location of the linked file.

There is no `</link>` at the end of the link element. So, `<link>` is also a self-closing tag.
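
One small aside, in case you see it elsewhere: in HTML5 the type attribute is optional for stylesheets, because text/css is the default, so the shorter form below is also valid. We will keep writing type="text/css" in the letter to stay explicit.

```
<link rel="stylesheet" href="style.css">
```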

If only getting a Girlfriend was so easy :D

Nah, that's not gonna happen, let's move on.

Here's the content of our loveletter.html:

```
<link rel="stylesheet" type="text/css" href="style.css">
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<h2>I love you Superman.</h2>
<p>
Your not-so-secret-lover,
<br>
Batman
</p>
</div>
```

and our style.css:

```
#letter-container{
  width:550px;
}
#header-bat-logo{
  width:100%;
}
```

Save both the files and refresh, and your output in the browser should remain the same.
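
At this point the project folder holds three files sitting next to each other; the image name below is simply whatever you saved the Batman logo as (we are assuming bat-logo.jpeg here):

```
loveletter.html
style.css
bat-logo.jpeg
```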

### A Few Formalities

Our love letter is almost ready to deliver to Batman, but there are a few formal pieces remaining.

Like any other programming language, HTML has also gone through many versions since its birth year (1990). The current version of HTML is HTML5.

So, how would the browser know which version of HTML you are using to code your page? To tell the browser that you are using HTML5, you need to include `<!DOCTYPE html>` at the top of the page. For older versions of HTML, this line used to be different, but you don't need to learn that because we don't use them anymore.

Also, in previous HTML versions, we used to encapsulate the entire document inside the `<html></html>` tag. The entire file was divided into two major sections: Head, inside `<head></head>`, and Body, inside `<body></body>`. This is not required in HTML5, but we still do this for compatibility reasons.
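
Before we touch the letter itself, here is the bare skeleton we are about to wrap it in. The comments are only notes for us; they are not required in the file.

```
<!DOCTYPE html>
<html>
<head>
    <!-- meta information about the page goes here -->
</head>
<body>
    <!-- everything the reader actually sees goes here -->
</body>
</html>
```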

Let's update our code with `<!DOCTYPE html>`, `<html>`, `<head>` and `<body>`:

```
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<h2>I love you Superman.</h2>
<p>
Your not-so-secret-lover,
<br>
Batman
</p>
</div>
</body>
</html>
```

The main content goes inside `<body>` and meta information goes inside `<head>`. So we keep the div inside `<body>` and load the stylesheets inside `<head>`.

Save and refresh, and your HTML page should display the same as earlier.

### Title in HTML

This is the last change. I promise.

You might have noticed that the title of the tab is displaying the path of the HTML file:

![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg)

We can use the `<title>` tag to define a title for our HTML file. The title tag also, like the link tag, goes inside the head. Let's put "Bat Letter" in our title:

```
<!DOCTYPE html>
<html>
<head>
    <title>Bat Letter</title>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
You complete my darkness with your light. I love:
</p>
<ul>
<li>the way you see good in the worse</li>
<li>the way you handle emotionally difficult situations</li>
<li>the way you look at Justice</li>
</ul>
<p>
I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<h2>I love you Superman.</h2>
<p>
Your not-so-secret-lover,
<br>
Batman
</p>
</div>
</body>
</html>
```

Save and refresh, and you will see that instead of the file path, "Bat Letter" is now displayed on the tab.

Batman's Love Letter is now complete.

Congratulations! You made Batman's Love Letter in HTML.

![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg)

### What we learned

We learned the following new concepts:

* The structure of an HTML document
* How to write elements in HTML (`<p></p>`)
* How to write styles inside an element using the style attribute (this is called inline styling, avoid this as much as you can)
* How to write styles of an element inside `<style></style>` (this is called embedded styling)
* How to write styles in a separate file and link to it in HTML using `<link>` (this is called a linked stylesheet)
* What is a tag name, attribute, opening tag, and closing tag
* How to give an id to an element using the id attribute
* Tag selectors and id selectors in CSS

We learned the following HTML tags:

* `<p>`: for paragraphs
* `<br>`: for line breaks
* `<ul>`, `<li>`: to display lists
* `<div>`: for grouping elements of our letter
* `<h1>`, `<h2>`: for heading and sub heading
* `<img>`: to insert an image
* `<b>`, `<i>`: for bold and italic text styling
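
For reference, an XSL stylesheet that turns an XML file into HTML has the general shape shown below. This is only a minimal illustrative sketch; the element and attribute names are invented for the example and it is not the actual scribus-manual.xsl.

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the document root and emit an HTML page -->
  <xsl:template match="/">
    <html>
      <body>
        <h2>Table of Contents</h2>
        <ul>
          <!-- Turn each (hypothetical) menuitem element into a list entry -->
          <xsl:for-each select="menu/menuitem">
            <li><xsl:value-of select="@text"/></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```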

This looks a lot more like HTML, and you can see it contains a number of HTML tags. After some preliminary tags and some particulars about displaying H2, H3, and H4 tags, you see a Table tag. This adds a graphical heading at the top of the page and uses some images already in the documentation files.

After this, you get into the process of dissecting the various **submenuitem** tags, trying to create the nested listing structure as it appears in Scribus when you view the manual. One feature I did not try to duplicate is the ability to collapse and expand **submenuitem** areas. As you can imagine, it takes some time to sort through the number of nested lists you need to create, but when I finished, here is how it looked:

![](https://opensource.com/sites/default/files/uploads/xml_scribusmenuinbrowser.png)

This minimal editing to **menu.xml** does not interfere with Scribus' ability to show the manual in its own browser. I put this modified **menu.xml** file and the **scribus-manual.xsl** in the English documentation folder for 1.5.x versions of Scribus, so anyone using these versions can simply point their browser to the **menu.xml** file and it should show up just like you see above.

A much bigger chore I took on a few years ago was to create a version of the ICD10 (International Classification of Diseases, version 10) when it came out. Many changes were made from the previous version (ICD9) to 10. These are important since these codes must be used for diagnostic purposes in medical practice. You can easily download XML files from the US [Centers for Medicare and Medicaid][2] website since it is public information, but—just as with the Scribus manual—these files are hard to use.

Here is the beginning of the tabular listing of diseases:

![](https://opensource.com/sites/default/files/uploads/xml_tabular_begin.png)

One of the features I created was the color coding used in the listing shown here:

![](https://opensource.com/sites/default/files/uploads/xml_tabular_body.png)

As with **menu.xml**, the only editing I did in this **Tabular.xml** file was to add **`<?xml-stylesheet type="text/xsl" href="tabular.xsl"?>`** as the second line of the file. I started this project with the 2014 version, and I was quite pleased to find that the original **tabular.xsl** stylesheet worked perfectly when the 2016 version came out, which is the last one I worked on. The **Tabular.xml** file is 8.4MB, quite large for a plaintext file. It takes a few seconds to load into a browser, but once it's loaded, navigation is fast.

While you may not often have to deal with an XML file in this way, if you do, I hope this article shows that your file can easily be turned into something much more usable.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/xml-browser + +作者:[Greg Pittman][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p +[b]: https://github.com/lujun9972 +[1]: https://www.scribus.net/ +[2]: https://www.cms.gov/ diff --git a/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md new file mode 100644 index 0000000000..4dd47e948a --- /dev/null +++ b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Screen Recorders for the Linux Desktop) +[#]: via: (https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Screen Recorders for the Linux Desktop +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screen-record.png?itok=tKWx29k8) + +There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select. + +And, without further ado, let’s get on with the list. + +### Simple Screen Recorder + +I’m starting out with my go-to screen recorder. I use [Simple Screen Recorder][1] on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs. + +![Simple Screen Recorder ][3] + +Figure 1: Simple Screen Recorder input profile window. 
+ +[Used with permission][4] + +Simple screen recorder also: + + * Records audio input + + * Allows you to pause and resume recording + + * Offers a preview during recording + + * Allows for the selection of video containers and codecs + + * Adds timestamp to file name (optional) + + * Includes hotkey recording and sound notifications + + * Works well on slower machines + + * And much more + + + + +Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. Simple Screen Recorder can be installed from the standard repositories on many desktops, or via easy to follow instructions on the [application download page][5]. + +### Gtk-recordmydesktop + +The next entry, [gtk-recordmydesktop][6], doesn’t give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn’t mean gtk-recordmydesktop isn’t without appeal. In fact, there are a few features that make this option in the genre fairly appealing. First and foremost, it’s very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2). + +![gtk-recordmydesktop][8] + +Figure 2: Some of the options for gtk-recordmydesktop. + +[Used with permission][4] + +Unfortunately, the follow the mouse feature doesn’t always work as expected, so chances are you’ll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand the GUI frontend isn’t nearly as reliable as is the command line version of the tool. From the command line, you could record a specific position of the screen like so: + +``` +recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv +``` + +where: + + * X_POS is the offset on the X axis + + * Y_POS is the offset on the Y axis + + * WIDTH is the width of the screen to be recorded + + * HEIGHT is the height of the screen to be recorded + + * FILENAME is the name of the file to be saved + + + + +To find out more about the command line options, issue the command man recordmydesktop and read through the manual page. + +### Kazam + +If you’re looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual—albeit limited amount of—bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3). + +![Kazam][10] + +Figure 3: Setting up YouTube Live broadcasting in Kazam. + +[Used with permission][4] + +Kazam falls in line with gtk-recordmydesktop, when it comes to features. In other words, it’s slightly limited in what it can do. However, that doesn’t mean you shouldn’t give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point and click all the way. But if you’re looking for serious bells and whistles, look away. 
+ +The version of Kazam, with broadcast goodness, can be found in the following repository: + +``` +ppa:sylvain-pineau/kazam +``` + +For Ubuntu (and Ubuntu-based distributions), install with the following commands: + +``` +sudo apt-add-repository ppa:sylvain-pineau/kazam + +sudo apt-get update + +sudo apt-get install kazam -y +``` + +### Vokoscreen + +The [Vokoscreen][11] recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4). + +![Vokoscreen][13] + +Figure 4: Configuring a web cam for a Vokoscreen screen recording. + +[Used with permission][4] + +As with most every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), window, or area. Vokoscreen also allows you to select a magnification level (200x200, 400x200, or 600x200). The magnification level makes for a great tool to highlight a specific section of the screen (the magnification window follows your mouse). + +Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its [GitHub repository][14]. + +### OBS Studio + +For many, [OBS Studio][15] will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, Mixer.com, DailyMotion, Facebook Live, Restream.io, LiveEdu.tv, Twitter, and more. In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop. + +Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you setup your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are this is exactly why, so you won’t want to skip out on configuring your default stream. + +![OBS Studio][17] + +Figure 5: Configuring your streaming service for OBS Studio. + +[Used with permission][4] + +I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally. + +### There’s More Where That Came From + +This is a short list of screen recording solutions for Linux. Although there are plenty more where this came from, you should be able to fill all your desktop recording needs with one of these five apps. + +Learn more about Linux through the free ["Introduction to Linux" ][18]course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: http://www.maartenbaert.be/simplescreenrecorder/ +[2]: /files/images/screenrecorder1jpg +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_1.jpg?itok=hZJ5xugI (Simple Screen Recorder ) +[4]: /licenses/category/used-permission +[5]: http://www.maartenbaert.be/simplescreenrecorder/#download +[6]: http://recordmydesktop.sourceforge.net/about.php +[7]: /files/images/screenrecorder2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_2.jpg?itok=TEGXaVYI (gtk-recordmydesktop) +[9]: /files/images/screenrecorder3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_3.jpg?itok=cvtFjxen (Kazam) +[11]: https://github.com/vkohaupt/vokoscreen +[12]: /files/images/screenrecorder4jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_4.jpg?itok=c3KVS954 (Vokoscreen) +[14]: https://github.com/vkohaupt/vokoscreen.git +[15]: https://obsproject.com/ +[16]: /files/images/desktoprecorder5jpg +[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/desktoprecorder_5.jpg?itok=xyM-dCa7 (OBS Studio) +[18]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md new file mode 100644 index 0000000000..c513f36017 --- /dev/null +++ b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automatic continuous development and delivery of a hybrid mobile app) +[#]: via: (https://opensource.com/article/18/12/hybrid-mobile-app-development) +[#]: author: (Angelo Manganiello https://opensource.com/users/amanganiello90) + +Automatic continuous development and delivery of a hybrid mobile app +====== +Hybrid apps are a good middle ground between native and web apps. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd) + +Offering a mobile app is essentially a business requirement for organizations today. One of the first steps in developing an app is to understand the different types—native, hybrid (or cross-platform), and web—so you can decide which one will best meet your needs. + +### Native is better, right? + +**Native apps** represent the vast majority of applications that people download every day. Native applications are developed specifically for an operating system. Thus, a native iOS application will not work on an Android system and vice versa. To develop a native app, you need to know two things: + + 1. How to develop in a specific programming language (e.g., Swift for Apple devices; Java for Android) + 2. 
The app will not work for other platforms + + + +Even though native apps will work only on the platform they're developed for, they have several notable advantages over hybrid and web apps: + + * Increased speed, reliability, and responsiveness and higher resolution, all of which provide a better user experience + * May work offline/without internet service + * Easier access to all phone features (e.g., accelerometer, camera, microphone) + + + +### But my business is still linked to the web… + +Most companies have focused their resources on web development and now want to enter the mobile market. But many don't have the right technical resources to develop a native app for each platform. For these companies, **hybrid** development is the right choice. In this model, developers can use their existing frontend skills to develop a single, cross-platform mobile app. + +![Hybrid mobile apps][2] + +Hybrid apps are a good middle ground: they're faster and less expensive to develop than native apps, and they offer more possibilities than web apps. The tradeoffs are they don't perform as well as native apps and developers can't maintain their existing tight focus on web development (as they could with web apps). + +If you already are a fan of the [Angular][3] cross-platform development framework, I recommend trying the [Ionic][4] framework, which "lets web developers build, test, and deploy cross-platform hybrid mobile apps." I see Ionic as an extension of the [Apache Cordova][5] framework, which enables a normal web app (JS, HTML, or CSS) to run as a mobile app in a container. Ionic uses the base Cordova features that support the Angular development for its user interface. + +The advantage of this approach is simple: the Angular paradigm is maintained, so developers can continue writing [TypeScript][6] files but target a build for Android, iOS, and Windows by properly configuring the development environment. It also provides two important tools: + + * An appealing design and widget that are very similar to a native app's, so your hybrid app will look less "web" + * Cordova Plugins allow the app to communicate with all phone features + + + +### What about the Node.js backend? + +The programming world likes to standardize, which is why hybrid apps are so popular. Frontend developers' common skills are useful in the mobile world. But if we have a technology stack for the user interface, why not focus on a single backend with the same programming paradigm? + +This makes [Node.js][7] an appealing option. Node.js is a JavaScript runtime built on the Chrome V8 JavaScript engine. It can make the API development backend very fast and easy, and it integrates fully with web technologies. You can develop a Cordova plugin, using your Node.js backend, internally in your hybrid app, as I did with the [nodejs-cordova-plugin][8]. This plugin, following the Cordova guidelines, integrates a mobile-compatible version of the Node.js platform to provide a full-stack mobile app. + +If you need a simple CRUD Node.js backend, you can use my [API][9] [node generator][9] that generates an app using a [MongoDB][10] embedded database. + +![Cordova Full Stack application][12] + +### Deploying your app + +Open source offers everything you need to deploy your app in the best way. You just need a GitHub repository and a good continuous integration tool. I recommend [Travis-ci][13], an excellent tool that allows you to build and deploy your product for every commit. + +Travis-ci is a fork of the better known [Jenkins][14]. 
Like with Jenkins, you have to configure your pipeline through a configuration file (in this case a **.travis.yml** file) in your GitHub repo. See the [.travis.yml file][15] in my repository as an example. + +![](https://opensource.com/sites/default/files/uploads/3-travis-ci-process.png) + +In addition, this pipeline automatically delivers and installs your app on [Appetize.io][16], a web-based iOS simulator and Android emulator, for testing. + +You can learn more in the [Cordova Android][17] section of my GitHub repository. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/hybrid-mobile-app-development + +作者:[Angelo Manganiello][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/amanganiello90 +[b]: https://github.com/lujun9972 +[1]: /file/416441 +[2]: https://opensource.com/sites/default/files/uploads/1-title.png (Hybrid mobile apps) +[3]: https://angular.io/ +[4]: https://ionicframework.com/ +[5]: https://cordova.apache.org/ +[6]: https://www.typescriptlang.org/ +[7]: https://nodejs.org/ +[8]: https://github.com/fullStackApp/nodejs-cordova-plugin +[9]: https://github.com/fullStackApp/generator-full-stack-api +[10]: https://www.mongodb.com/ +[11]: /file/416351 +[12]: https://opensource.com/sites/default/files/uploads/2-cordova-full-stack-app.png (Cordova Full Stack application) +[13]: https://travis-ci.org/ +[14]: https://jenkins.io/ +[15]: https://github.com/amanganiello90/java-angular-web-app/blob/master/.travis.yml +[16]: https://appetize.io/ +[17]: https://github.com/amanganiello90/java-angular-web-app#cordova-android diff --git a/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md new file mode 100644 index 0000000000..3962e361f3 --- /dev/null +++ b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md @@ -0,0 +1,265 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Benchmark Linux Commands And Programs From Commandline) +[#]: via: (https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +How To Benchmark Linux Commands And Programs From Commandline +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/benchmark-720x340.png) + +A while ago, I have written a guide about the [**alternatives to ‘top’, the command line utility**][1]. Some of the users asked me which one among those tools is best and on what basis (like features, contributors, years active, page requests etc.) I compared those tools. They also asked me to share the bench-marking results If I have any. Unfortunately, I didn’t even know how to benchmark programs at that time. While searching for some simple and easy to use bench-marking tools to compare the Linux programs, I stumbled upon two utilities named **‘Bench’** and **‘Hyperfine’**. These are simple and easy-to-use command line tools to benchmark Linux commands and programs on Unix-like systems. + +### 1\. 
Bench Tool + +The **‘Bench’** utility benchmarks one or more given commands/programs using **Haskell’s criterion** library and displays the output statistics in an easy-to-understandable format. This tool can be helpful where you need to compare similar programs based on the bench-marking result. We can also export the results to HTML format or CSV or templated output. + +#### Installing Bench Utility + +The bench utility can be installed in three methods. + +**1\. Using Linuxbrew** + +We can install Bench utility using Linuxbrew package manager. If you haven’t installed Linuxbrew yet, refer the following link. + +After installing Linuxbrew, run the following command to install Bench: + +``` +$ brew install bench +``` + +**2\. Using Haskell’s stack tool** + +First, install Haskell as described in the following link. + +And then, run the following commands to install Bench. + +``` +$ stack setup + +$ stack install bench +``` + +The ‘stack’ will install bench to **~/.local/bin** or something similar. Make sure that the installation directory is on your executable search path before using bench tool. You will be reminded to do this even if you forgot. + +**3\. Using Nix package manager** + +Another way to install Bench is using **Nix** package manager. Install Nix as shown in the below link. + +After installing Nix, install Bench tool using command: + +``` +$ nix-env -i bench +``` + +#### Benchmark Linux Commands And Programs Using Bench + +It is time to start benchmarking the programs. + +For instance, let me show you the benchmark result of ‘ls -al’ command. + +``` +$ bench 'ls -al' +``` + +**Sample output:** + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-1.png) + +You must quote the commands when you use flags/options with them. + +Similarly, you can benchmark any programs installed in your system. The following commands shows the benchmarking result of ‘htop’ and ‘ptop’ programs. + +``` +$ bench htop + +$ bench ptop +``` +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-2-1.png) +Bench tool can benchmark multiple programs at once as well. Here is the benchmarking result of ls, htop, ptop programs. + +``` +$ bench ls htop ptop +``` + +Sample output: +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-3.png) + +We can also export the benchmark result to a HTML like below. + +``` +$ bench htop --output example.html +``` + +To export the result to CSV, just run: + +``` +$ bench htop --csv FILE +``` + +View help section: + +``` +$ bench --help +``` + +### **2. Hyperfine Benchmark Tool + +** + +**Hyperfine** is yet another command line benchmarking tool inspired by the ‘Bench’ tool which we just discussed above. It is free, open source, cross-platform benchmarking program and written in **Rust** programming language. It has few additional features compared to the Bench tool as listed below. + + * Statistical analysis across multiple runs. + * Support for arbitrary shell commands. + * Constant feedback about the benchmark progress and current estimates. + * Perform warmup runs before the actual benchmark. + * Cache-clearing commands can be set up before each timing run. + * Statistical outlier detection. + * Export benchmark results to various formats, such as CSV, JSON, Markdown. + * Parameterized benchmarks. + + + +#### Installing Hyperfine + +We can install Hyperfine using any one of the following methods. + +**1\. Using Linuxbrew** + +``` +$ brew install hyperfine +``` + +**2\. 
Using Cargo** + +Make sure you have installed Rust as described in the following link. + +After installing Rust, run the following command to install Hyperfine via Cargo: + +``` +$ cargo install hyperfine +``` + +**3\. Using AUR helper programs** + +Hyperfine is available in [**AUR**][2]. So, you can install it on Arch-based systems using any helper programs, such as [**YaY**][3], like below. + +``` +$ yay -S hyperfine +``` + +**4\. Download and install the binaries** + +Hyperfine is available in binaries for Debian-based systems. Download the latest .deb binary file from the [**releases page**][4] and install it using ‘dpkg’ package manager. As of writing this guide, the latest version was **1.4.0**. + +``` +$ wget https://github.com/sharkdp/hyperfine/releases/download/v1.4.0/hyperfine_1.4.0_amd64.deb + +$ sudo dpkg -i hyperfine_1.4.0_amd64.deb + +$ sudo apt install -f +``` + +#### Benchmark Linux Commands And Programs Using Hyperfine + +To run a benchmark using Hyperfine, simply run it along with the program/command as shown below. + +``` +$ hyperfine 'ls -al' +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-1.png) + +Benchmark multiple commands/programs: + +``` +$ hyperfine htop ptop +``` + +Sample output: + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-2.png) + +As you can see at the end of the output, Hyperfine mentiones – **‘htop ran 1.96 times faster than ptop’** , so we can immediately conclude htop performs better than Ptop. This will help you to quickly find which program performs better when benchmarking multiple programs. We don’t get this detailed output in Bench utility though. + +Hyperfine will automatically determine the number of runs to perform for each command. By default, it will perform at least **10 benchmarking runs**. If you want to set the **minimum number of runs** (E.g 5 runs), use the `-m` **/`--min-runs`** option like below: + +``` +$ hyperfine --min-runs 5 htop ptop +``` + +Or, + +``` +$ hyperfine -m 5 htop ptop +``` + +Similarly, to perform **maximum number of runs** for each command, the command would be: + +``` +$ hyperfine --max-runs 5 htop ptop +``` + +Or, + +``` +$ hyperfine -M 5 htop ptop +``` + +We can even perform **exact number of runs** for each command using the following command: + +``` +$ hyperfine -r 5 htop ptop +``` + +As you may know, if the program execution time is limited by disk I/O, the benchmarking results can be heavily influenced by disk caches and whether they are cold or warm. Luckily, Hyperfine has the options to perform a certain number of program executions before performing the actual benchmark. + +To perform NUM warmup runs (E.g 3) before the actual benchmark, use the **`-w`/**`--warmup` option like below: + +``` +$ hyperfine --warmup 3 htop +``` + +Just like Bench utility, Hyperfine also allows us to export the benchmark results to a given file. We can export the results to CSV, JSON, and Markdown formats. + +For instance, to export the results in Markdown format, use the following command: + +``` +$ hyperfine htop ptop --export-markdown +``` + +For more options and usage details, refer the help secion: + +``` +$ hyperfine --help +``` + +And, that’s all for now. If you ever be in a situation where you need to benchmark similar and alternative programs, these tools might help to compare how they performs and share the details with your peers and colleagues. + +More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/ +[2]: https://aur.archlinux.org/packages/hyperfine +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://github.com/sharkdp/hyperfine/releases diff --git a/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md b/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md new file mode 100644 index 0000000000..e1e6a7f25e --- /dev/null +++ b/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md @@ -0,0 +1,745 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop) +[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop +====== + +Laptop battery is highly optimized for Windows OS, that i had realized when i was using Windows OS in my laptop but it’s not same for Linux. + +Over the years Linux has improved a lot for battery optimization but still we need make some necessary things to improve laptop battery life in Linux. + +When i think about battery life, i got few options for that but i felt TLP is a better solutions for me so, i’m going with it. + +In this tutorial we are going to discuss about TLP in details to improve battery life. + +We had written three articles previously in our site about **[laptop battery saving utilities][1]** for Linux **[PowerTOP][2]** and **[Battery Charging State][3]**. + +### What is TLP? + +[TLP][4] is a free opensource advanced power management tool that improve your battery life without making any configuration change. + +Since it comes with a default configuration already optimized for battery life, so you may just install and forget it. + +Also, it is highly customizable to fulfill your specific requirements. TLP is a pure command line tool with automated background tasks. It does not contain a GUI. + +TLP runs on every laptop brand. Setting the battery charge thresholds is available for IBM/Lenovo ThinkPads only. + +All TLP settings are stored in `/etc/default/tlp`. The default configuration provides optimized power saving out of the box. + +The following TLP settings is available for customization and you need to make the necessary changes accordingly if you want it. 
+ +### TLP Features + + * Kernel laptop mode and dirty buffer timeouts + * Processor frequency scaling including “turbo boost” / “turbo core” + * Limit max/min P-state to control power dissipation of the CPU + * HWP energy performance hints + * Power aware process scheduler for multi-core/hyper-threading + * Processor performance versus energy savings policy (x86_energy_perf_policy) + * Hard disk advanced power magement level (APM) and spin down timeout (per disk) + * AHCI link power management (ALPM) with device blacklist + * PCIe active state power management (PCIe ASPM) + * Runtime power management for PCI(e) bus devices + * Radeon graphics power management (KMS and DPM) + * Wifi power saving mode + * Power off optical drive in drive bay + * Audio power saving mode + * I/O scheduler (per disk) + * USB autosuspend with device blacklist/whitelist (input devices excluded automatically) + * Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown + * Restore radio device state on system startup (from previous shutdown). + * Radio device wizard: switch radios upon network connect/disconnect and dock/undock + * Disable Wake On LAN + * Integrated WWAN and bluetooth state is restored after suspend/hibernate + * Untervolting of Intel processors – requires kernel with PHC-Patch + * Battery charge thresholds – ThinkPads only + * Recalibrate battery – ThinkPads only + + + +### How to Install TLP in Linux + +TLP package is available in most of the distributions official repository so, use the distributions **[Package Manager][5]** to install it. + +For **`Fedora`** system, use **[DNF Command][6]** to install TLP. + +``` +$ sudo dnf install tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm +$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm +$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo dnf install smartmontools +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install TLP. + +``` +$ sudo apt install tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ sudo apt-get install tp-smapi-dkms acpi-call-dkms +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo apt-get install smartmontools +``` + +When the official package becomes outdated for Ubuntu based systems then use the following PPA repository which provides an up-to-date version. Run the following commands to install TLP using the PPA. + +``` +$ sudo apt-get install tlp tlp-rdw +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install TLP. + +``` +$ sudo pacman -S tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ pacman -S tp_smapi acpi_call +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo pacman -S smartmontools +``` + +Enable TLP & TLP-Sleep service on boot for Arch Linux based systems. + +``` +$ sudo systemctl enable tlp.service +$ sudo systemctl enable tlp-sleep.service +``` + +You should also mask the following services to avoid conflicts and assure proper operation of TLP’s radio device switching options for Arch Linux based systems. 
+ +``` +$ sudo systemctl mask systemd-rfkill.service +$ sudo systemctl mask systemd-rfkill.socket +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install TLP. + +``` +$ sudo yum install tlp tlp-rdw +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo yum install smartmontools +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][11]** to install TLP. + +``` +$ sudo zypper install TLP +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo zypper install smartmontools +``` + +After successfully TLP installed, use the following command to start the service. + +``` +$ systemctl start tlp.service +``` + +To show battery information. + +``` +$ sudo tlp-stat -b +or +$ sudo tlp-stat --battery + +--- TLP 1.1 -------------------------------------------- + ++++ Battery Status +/sys/class/power_supply/BAT0/manufacturer = SMP +/sys/class/power_supply/BAT0/model_name = L14M4P23 +/sys/class/power_supply/BAT0/cycle_count = (not supported) +/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh] +/sys/class/power_supply/BAT0/energy_full = 48850 [mWh] +/sys/class/power_supply/BAT0/energy_now = 48850 [mWh] +/sys/class/power_supply/BAT0/power_now = 0 [mW] +/sys/class/power_supply/BAT0/status = Full + +Charge = 100.0 [%] +Capacity = 81.4 [%] +``` + +To show disk information. + +``` +$ sudo tlp-stat -d +or +$ sudo tlp-stat --disk + +--- TLP 1.1 -------------------------------------------- + ++++ Storage Devices +/dev/sda: + Model = WDC WD10SPCX-24HWST1 + Firmware = 02.01A02 + APM Level = 128 + Status = active/idle + Scheduler = mq-deadline + + Runtime PM: control = on, autosuspend_delay = (not available) + + SMART info: + 4 Start_Stop_Count = 18787 + 5 Reallocated_Sector_Ct = 0 + 9 Power_On_Hours = 606 [h] + 12 Power_Cycle_Count = 1792 + 193 Load_Cycle_Count = 25775 + 194 Temperature_Celsius = 31 [°C] + + ++++ AHCI Link Power Management (ALPM) +/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm + ++++ AHCI Host Controller Runtime Power Management +/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on +``` + +To show PCI device information. 
+ +``` +$ sudo tlp-stat -e +or +$ sudo tlp-stat --pcie + +--- TLP 1.1 -------------------------------------------- + ++++ Runtime Power Management +Device blacklist = (not configured) +Driver blacklist = amdgpu nouveau nvidia radeon pcieport + +/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore) +/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915) +/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd) +/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me) +/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci) +/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver) +/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver) +/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel) +/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus) +/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau) +/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci) +/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi) +/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168) +/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme) +``` + +To show graphics card information. + +``` +$ sudo tlp-stat -g +or +$ sudo tlp-stat --graphics + +--- TLP 1.1 -------------------------------------------- + ++++ Intel Graphics +/sys/module/i915/parameters/enable_dc = -1 (use per-chip default) +/sys/module/i915/parameters/enable_fbc = 1 (enabled) +/sys/module/i915/parameters/enable_psr = 0 (disabled) +/sys/module/i915/parameters/modeset = -1 (use per-chip default) +``` + +To show Processor information. 
+ +``` +$ sudo tlp-stat -p +or +$ sudo tlp-stat --processor + +--- TLP 1.1 -------------------------------------------- + ++++ Processor +CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz + +/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz] 
+/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%] +/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%] +/sys/devices/system/cpu/intel_pstate/no_turbo = 0 +/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%] +/sys/devices/system/cpu/intel_pstate/num_pstates = 28 + +x86_energy_perf_policy: program not installed. + +/sys/module/workqueue/parameters/power_efficient = Y +/proc/sys/kernel/nmi_watchdog = 0 + ++++ Undervolting +PHC kernel not available. +``` + +To show system data information. + +``` +$ sudo tlp-stat -s +or +$ sudo tlp-stat --system + +--- TLP 1.1 -------------------------------------------- + ++++ System Info +System = LENOVO Lenovo ideapad Y700-15ISK 80NV +BIOS = CDCN35WW +Release = "Manjaro Linux" +Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64 +/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b +Init system = systemd +Boot mode = BIOS (CSM, Legacy) + ++++ TLP Status +State = enabled +Last run = 11:04:00 IST, 596 sec(s) ago +Mode = battery +Power source = battery +``` + +To show temperatures and fan speed information. + +``` +$ sudo tlp-stat -t +or +$ sudo tlp-stat --temp + +--- TLP 1.1 -------------------------------------------- + ++++ Temperatures +CPU temp = 36 [°C] +Fan speed = (not available) +``` + +To show USB device data information. + +``` +$ sudo tlp-stat -u +or +$ sudo tlp-stat --usb + +--- TLP 1.1 -------------------------------------------- + ++++ USB +Autosuspend = disabled +Device whitelist = (not configured) +Device blacklist = (not configured) +Bluetooth blacklist = disabled +Phone blacklist = disabled +WWAN blacklist = enabled + +Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub) +Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo) +Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid) +Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. 
(btusb) +Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub) +``` + +To show warnings. + +``` +$ sudo tlp-stat -w +or +$ sudo tlp-stat --warn + +--- TLP 1.1 -------------------------------------------- + +No warnings detected. +``` + +Status report with configuration and all active settings. + +``` +$ sudo tlp-stat + +--- TLP 1.1 -------------------------------------------- + ++++ Configured Settings: /etc/default/tlp +TLP_ENABLE=1 +TLP_DEFAULT_MODE=AC +TLP_PERSISTENT_DEFAULT=0 +DISK_IDLE_SECS_ON_AC=0 +DISK_IDLE_SECS_ON_BAT=2 +MAX_LOST_WORK_SECS_ON_AC=15 +MAX_LOST_WORK_SECS_ON_BAT=60 +CPU_HWP_ON_AC=balance_performance +CPU_HWP_ON_BAT=balance_power +SCHED_POWERSAVE_ON_AC=0 +SCHED_POWERSAVE_ON_BAT=1 +NMI_WATCHDOG=0 +ENERGY_PERF_POLICY_ON_AC=performance +ENERGY_PERF_POLICY_ON_BAT=power +DISK_DEVICES="sda sdb" +DISK_APM_LEVEL_ON_AC="254 254" +DISK_APM_LEVEL_ON_BAT="128 128" +SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance" +SATA_LINKPWR_ON_BAT="med_power_with_dipm max_performance" +AHCI_RUNTIME_PM_TIMEOUT=15 +PCIE_ASPM_ON_AC=performance +PCIE_ASPM_ON_BAT=powersave +RADEON_POWER_PROFILE_ON_AC=default +RADEON_POWER_PROFILE_ON_BAT=low +RADEON_DPM_STATE_ON_AC=performance +RADEON_DPM_STATE_ON_BAT=battery +RADEON_DPM_PERF_LEVEL_ON_AC=auto +RADEON_DPM_PERF_LEVEL_ON_BAT=auto +WIFI_PWR_ON_AC=off +WIFI_PWR_ON_BAT=on +WOL_DISABLE=Y +SOUND_POWER_SAVE_ON_AC=0 +SOUND_POWER_SAVE_ON_BAT=1 +SOUND_POWER_SAVE_CONTROLLER=Y +BAY_POWEROFF_ON_AC=0 +BAY_POWEROFF_ON_BAT=0 +BAY_DEVICE="sr0" +RUNTIME_PM_ON_AC=on +RUNTIME_PM_ON_BAT=auto +RUNTIME_PM_DRIVER_BLACKLIST="amdgpu nouveau nvidia radeon pcieport" +USB_AUTOSUSPEND=0 +USB_BLACKLIST_BTUSB=0 +USB_BLACKLIST_PHONE=0 +USB_BLACKLIST_PRINTER=1 +USB_BLACKLIST_WWAN=1 +RESTORE_DEVICE_STATE_ON_STARTUP=0 + ++++ System Info +System = LENOVO Lenovo ideapad Y700-15ISK 80NV +BIOS = CDCN35WW +Release = "Manjaro Linux" +Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64 +/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b +Init system = systemd +Boot mode = BIOS (CSM, Legacy) + ++++ TLP Status +State = enabled +Last run = 11:04:00 IST, 684 sec(s) ago +Mode = battery +Power source = battery + ++++ Processor +CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz + +/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance 
balance_performance balance_power power + +/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default 
performance balance_performance balance_power power + +/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%] +/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%] +/sys/devices/system/cpu/intel_pstate/no_turbo = 0 +/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%] +/sys/devices/system/cpu/intel_pstate/num_pstates = 28 + +x86_energy_perf_policy: program not installed. + +/sys/module/workqueue/parameters/power_efficient = Y +/proc/sys/kernel/nmi_watchdog = 0 + ++++ Undervolting +PHC kernel not available. + ++++ Temperatures +CPU temp = 42 [°C] +Fan speed = (not available) + ++++ File System +/proc/sys/vm/laptop_mode = 2 +/proc/sys/vm/dirty_writeback_centisecs = 6000 +/proc/sys/vm/dirty_expire_centisecs = 6000 +/proc/sys/vm/dirty_ratio = 20 +/proc/sys/vm/dirty_background_ratio = 10 + ++++ Storage Devices +/dev/sda: + Model = WDC WD10SPCX-24HWST1 + Firmware = 02.01A02 + APM Level = 128 + Status = active/idle + Scheduler = mq-deadline + + Runtime PM: control = on, autosuspend_delay = (not available) + + SMART info: + 4 Start_Stop_Count = 18787 + 5 Reallocated_Sector_Ct = 0 + 9 Power_On_Hours = 606 [h] + 12 Power_Cycle_Count = 1792 + 193 Load_Cycle_Count = 25777 + 194 Temperature_Celsius = 31 [°C] + + ++++ AHCI Link Power Management (ALPM) +/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm + ++++ AHCI Host Controller Runtime Power Management +/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on + ++++ PCIe Active State Power Management +/sys/module/pcie_aspm/parameters/policy = powersave + ++++ Intel Graphics +/sys/module/i915/parameters/enable_dc = -1 (use per-chip default) +/sys/module/i915/parameters/enable_fbc = 1 (enabled) +/sys/module/i915/parameters/enable_psr = 0 (disabled) +/sys/module/i915/parameters/modeset = -1 (use per-chip default) + ++++ Wireless +bluetooth = on +wifi = on +wwan = none (no device) + +hci0(btusb) : bluetooth, not connected +wlp8s0(iwlwifi) : wifi, connected, power management = on + ++++ Audio +/sys/module/snd_hda_intel/parameters/power_save = 1 +/sys/module/snd_hda_intel/parameters/power_save_controller = Y + ++++ Runtime Power Management +Device blacklist = (not configured) +Driver blacklist = amdgpu nouveau nvidia radeon pcieport + +/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore) +/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915) +/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd) +/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me) +/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci) +/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1d.0/power/control = auto 
(0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver) +/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver) +/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel) +/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus) +/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau) +/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci) +/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi) +/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168) +/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme) + ++++ USB +Autosuspend = disabled +Device whitelist = (not configured) +Device blacklist = (not configured) +Bluetooth blacklist = disabled +Phone blacklist = disabled +WWAN blacklist = enabled + +Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub) +Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo) +Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid) +Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb) +Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub) + ++++ Battery Status +/sys/class/power_supply/BAT0/manufacturer = SMP +/sys/class/power_supply/BAT0/model_name = L14M4P23 +/sys/class/power_supply/BAT0/cycle_count = (not supported) +/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh] +/sys/class/power_supply/BAT0/energy_full = 51690 [mWh] +/sys/class/power_supply/BAT0/energy_now = 50140 [mWh] +/sys/class/power_supply/BAT0/power_now = 12185 [mW] +/sys/class/power_supply/BAT0/status = Discharging + +Charge = 97.0 [%] +Capacity = 86.2 [%] +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/ +[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/ +[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/ +[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html +[5]: https://www.2daygeek.com/category/package-management/ +[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ diff --git a/sources/tech/20181213 
Podman and user namespaces- A marriage made in heaven.md b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md new file mode 100644 index 0000000000..adc14c6111 --- /dev/null +++ b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Podman and user namespaces: A marriage made in heaven) +[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces) +[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan) + +Podman and user namespaces: A marriage made in heaven +====== +Learn how to use Podman to run containers in separate user namespaces. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct) + +[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces. + +I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable). + +If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody. + +### An example + +Here's how that might work. First, I create a file in my system owned by root. + +``` +$ sudo bash -c "echo Test > /tmp/test" +$ sudo chmod 600 /tmp/test +$ sudo ls -l /tmp/test +-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test +``` + +Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000. + +``` +$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh +# id +uid=0(root) gid=0(root) groups=0(root) +# ls -l /tmp/test +-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test +# cat /tmp/test +cat: /tmp/test: Permission denied +``` + +The **\--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999) to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is 100001 on the host + +Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE** , it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even if the process was not owned by root or world readable or writable. + +User namespace capabilities are not the same as capabilities on the host. 
They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs. + +Note that SELinux and other technologies also limit what would happen if a container process broke out of the container. + +### Using `podman top` to show user namespaces + +We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host. + +Let's start by running a sleep container using our UID mapping. + +``` +$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000 +``` + +Now run **podman top** : + +``` +$ sudo podman top --latest user huser +USER   HUSER +root   100000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000. + +Now let's run a second container, but this time we will choose a separate UID map starting at 200000. + +``` +$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000 +$ sudo podman top --latest user huser +USER   HUSER +root   200000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +200000   23644 23632  1 08:08 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host. + +Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000. + +This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning. + +### Problems with user namespaces + +For several years, I've advocated user namespace as the security tool everyone wants but hardly anyone has used. The reason is there hasn't been any filesystem support or a shifting file system. + +In containers, you want to share the **base** image between lots of containers. The examples above use the Fedora base image in each example. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem. + + +Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. 
Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, container/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image. + +When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**. + +Note if you are doing a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use container/storage to reverse the shift and push the image with all files chowned back to real UID=0. + +This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container. + +Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out. + +This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container. + +This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system. + +### Going forward + +I want to add a new flag, like **\--userns=auto** , to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto** , you won't even need to set the flag. + +Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that. + +In my next article, I will explain how to run Podman as non-root in a user namespace. 
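+
+Until something like the proposed `--userns=auto` flag lands, one way to approximate it with nothing more than the `--uidmap` option demonstrated above is to script non-overlapping ranges yourself. The following is only a rough sketch for experimentation, not a built-in Podman feature; the offsets and the 5000-UID range size are arbitrary choices for illustration:
+
+```
+#!/bin/bash
+# Give each container its own 5000-UID slice of the host UID space by
+# varying the host-side offset passed to --uidmap (format 0:<offset>:5000).
+for n in 0 1 2; do
+    offset=$((100000 + n * 10000))
+    sudo podman run --uidmap 0:"${offset}":5000 -d fedora sleep 1000
+done
+
+# The three sleep processes should now show up under three different host UIDs.
+ps -ef | grep sleep
+```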
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/podman-and-user-namespaces + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://podman.io/ +[2]: https://github.com/containers/libpod +[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers +[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html +[5]: https://chmodcommand.com/chmod-660/ +[6]: https://en.wikipedia.org/wiki/Chown +[7]: https://github.com/containers/storage +[8]: https://en.wikipedia.org/wiki/OverlayFS +[9]: https://buildah.io/ +[10]: http://cri-o.io/ diff --git a/sources/tech/20181214 Tips for using Flood Element for performance testing.md b/sources/tech/20181214 Tips for using Flood Element for performance testing.md new file mode 100644 index 0000000000..90994b0724 --- /dev/null +++ b/sources/tech/20181214 Tips for using Flood Element for performance testing.md @@ -0,0 +1,180 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tips for using Flood Element for performance testing) +[#]: via: (https://opensource.com/article/18/12/tips-flood-element-testing) +[#]: author: (Nicole van der Hoeven https://opensource.com/users/nicolevanderhoeven) + +Tips for using Flood Element for performance testing +====== +Get started with this powerful, intuitive load testing tool. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +In case you missed it, there’s a new performance test tool on the block: [Flood Element][1]. It’s a scalable, browser-based tool that allows you to write scripts in JavaScript that interact with web pages like a real user would. + +Browser Level Users is a [newer approach to load testing][2] that overcomes many of the common challenges we hear about traditional methods of testing. It offers: + + * Scripting that is akin to common functional tools like Selenium and easier to learn + * More realistic results that are based on true browser performance rather than API response + * The ability to test against all components of your web app, including things like JavaScript that are rendered via the browser + + + +Given the above benefits, it’s a no-brainer to check out Flood Element for your web load testing, especially if you have struggled with existing tools like JMeter or HP LoadRunner. + +Pairing Element with [Flood][3] turns it into a pretty powerful load test tool. We have a [great guide here][4] that you can follow if you’d like to get started. I’ve been using and testing Element for several months now, and I’d like to share some tips I’ve learned along the way. + +### Initializing your script + +You can always start from scratch, but the quickest way to get started is to type `element init myfirstelementtest` from your terminal, filling in your preferred project name. + +You’ll then be asked to type the title of your test as well as the URL you’d like to script against. After a minute, you’ll see that a new directory has been created: + +![](https://opensource.com/sites/default/files/uploads/image_1_-_new_directory.png) + +Element will automatically create a file called **test.ts**. 
This file contains the skeleton of a script, along with some sample code to help you find a button and then click on it. But before you open it, let’s move on to… + +### Choosing the right text editor + +Scripting in Element is already pretty simple, but two things that help are syntax highlighting and code completion. Syntax highlighting will greatly improve the experience of learning a new test tool like Element, and code completion will make your scripting lightning-fast as you become more experienced. My text editor of choice is [Visual Studio Code][5], which has both of those features. It’s slick and clean, and it does the job. + +Syntax highlighting is when the text editor intelligently changes the font color of your code according to its role in the programming language you’re using. Here’s a screenshot of the **test.ts** file we generated earlier in VS Code to show you what I mean: + +![](https://opensource.com/sites/default/files/uploads/image_2_test.ts_.png) + +This makes it easier to make sense of the code at a glance: Comments are in green, values and labels are in orange, etc. + +Code completion is when you start to type something, and VS Code helpfully opens a context menu with suggestions for methods you can use. + +![][6] + +I love this because it means I don’t need to remember the exact name of the method. It also suggests names of variables you’ve already defined and highlights code that doesn’t make sense. This will help to make your tests more maintainable and readable for others, which is a great benefit as you look to scale your testing out in the future. + +![](https://opensource.com/sites/default/files/image-4-element-visible-copy.gif) + +### Taking screenshots + +One of the most powerful features of Element is its ability to take screenshots. I find it immensely useful when debugging because sometimes it’s just easier to see what’s going on visually. With protocol-based tools, debugging can be a much more involved and technical process. + +There are two ways to take screenshots in Element: + + 1. Add a setting to automatically take a screenshot when an error is encountered. You can do this by setting `screenshotOnFailure` to "true" in `TestSettings`: + + + +``` +export const settings: TestSettings = { +        device: Device.iPadLandscape, +        userAgent: 'flood-chrome-test', +        clearCache: true, +        disableCache: true, +        screenshotOnFailure: true, +} +``` + + 2. Explicitly take a screenshot at a particular point in the script. You can do this by adding + + + +``` +await browser.takeScreenshot() +``` + +to your code. + +### Viewing screenshots + +Once you’ve taken screenshots within your tests, you will probably want to view them and know that they will be stored for future safekeeping. Whether you are running your test locally on have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +Screenshots will be saved as .jpg files in a timestamped folder corresponding to your run. It should look something like this: **…myfirstelementtest/tmp/element-results/test/2018-11-20T135700.595Z/flood/screenshots/**. The screenshots will be uniquely named so that new screenshots, even for the same step, don’t overwrite older ones. + +However, I rarely need to look up the screenshots in that folder because I prefer to see them in iTerm2 for MacOS. iTerm is an alternative to the terminal that works particularly well with Element. 
When you take a screenshot, iTerm actually shows it in-line: + +![](https://opensource.com/sites/default/files/uploads/image_5_iterm_inline.png) + +**Tests run in Flood** + +Running an Element script on Flood is ideal when you need larger concurrency. Rather than accessing your screenshot locally, Flood will centralize the images into your account, so the images remain even after the cloud load injectors are destroyed. You can get to the screenshot files by downloading Archived Results: + +![](https://opensource.com/sites/default/files/image_6_archived_results.png) + +You can also click on a step on the dashboard to see a filmstrip of your test: + +![](https://opensource.com/sites/default/files/uploads/image_7_filmstrip_view.png) + +### Using logs + +You may need to check out the logs for more technical debugging, especially when the screenshots don’t tell the whole story. Again, whether you are running your test locally or have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +You can print to the console by typing, for example: `console.log('orderValues = ’ + orderValues)` + +This will print the value of the variable `orderValues` at that point in the script. You would see this in your terminal if you’re running Element locally. + +**Tests run in Flood** + +If you’re running the script on Flood, you can either download the log (in the same Archived Results zipped file mentioned earlier) or click on the Logs tab: + +![](https://opensource.com/sites/default/files/uploads/image_8_logs_tab.png) + +### Fun with flags + +Element comes with a few flags that give you more control over how the script is run locally. Here are a few of my favorites: + +**Headless flag** + +When in doubt, run Element in non-headless mode to see the script actually opening the web app on Chrome and interacting with the page. This is only possible locally, but there’s nothing like actually seeing for yourself what’s happening in real time instead of relying on screenshots and logs after the fact. To enable this mode, add the flag when running your test: + +``` +element run myfirstelementtest.ts --no-headless +``` + +**Watch flag** + +Element will automatically close the browser window when it encounters an error or finishes the iteration. Adding `--watch` will leave the browser window open and then monitor the script. As soon as the script is saved, it will automatically run it in the same window from the beginning. Simply add this flag like the above example: + +``` +--watch +``` + +**Dev tools flag** + +This opens a browser instance and runs the script with the Chrome Dev Tools open, allowing you to find locators for the next action you want to script. Simply add this flag as in the first example: + +``` +--dev-tools +``` + +For more flags, use `element run --help`. + +### Try Element + +You’ve just gotten a crash course on Flood Element and are ready to get started. [Download Element][1] to start writing functional test scripts and reusing them as load test scripts on Flood. If you don’t have a Flood account, you can easily sign up for a free trial [on the Flood website][7]. + +We’re proud to contribute to the open source community and can’t wait to have you try this new addition to the Flood line. 
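+
+To recap the local workflow covered above in one place, a typical session might look like the following. The flag combination and file name are just one reasonable choice, not the only one; run `element run --help` for the authoritative list of options:
+
+```
+# Scaffold a project; this creates a new directory containing a starter test.ts
+element init myfirstelementtest
+cd myfirstelementtest
+
+# Iterate interactively: keep the browser visible and re-run the script on save
+element run test.ts --no-headless --watch
+
+# Run with Chrome Dev Tools available when hunting for locators
+element run test.ts --dev-tools
+```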
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/tips-flood-element-testing + +作者:[Nicole van der Hoeven][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nicolevanderhoeven +[b]: https://github.com/lujun9972 +[1]: https://element.flood.io/ +[2]: https://flood.io/blog/why-you-should-load-test-with-browsers/ +[3]: https://flood.io/ +[4]: https://help.flood.io/getting-started-with-load-testing/step-by-step-guide-flood-element +[5]: https://code.visualstudio.com/ +[6]: https://flood.io/wp-content/uploads/2018/11/vscode-codecompletion2.gif +[7]: https://flood.io/load-performance-testing-tool/free-load-testing-trial/ diff --git a/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md new file mode 100644 index 0000000000..ad688a7820 --- /dev/null +++ b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 tips and tricks for using KeePassX to secure your passwords) +[#]: via: (https://opensource.com/article/18/12/keepassx-security-best-practices) +[#]: author: (Michael McCune https://opensource.com/users/elmiko) + +6 tips and tricks for using KeePassX to secure your passwords +====== +Get more out of your password manager by following these best practices. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum) + +Our increasingly interconnected digital world makes security an essential and common discussion topic. We hear about [data breaches][1] with alarming regularity and are often on our own to make informed decisions about how to use technology securely. Although security is a deep and nuanced topic, there are some easy daily habits you can keep to reduce your attack surface. + +Securing passwords and account information is something that affects anyone today. Technologies like [OAuth][2] help make our lives simpler by reducing the number of accounts we need to create, but we are still left with a staggering number of places where we need new, unique information to keep our records secure. An easy way to deal with the increased mental load of organizing all this sensitive information is to use a password manager like [KeePassX][3]. + +In this article, I will explain the importance of keeping your password information secure and offer suggestions for getting the most out of KeePassX. For an introduction to KeePassX and its features, I highly recommend Ricardo Frydman's article "[Managing passwords in Linux with KeePassX][4]." + +### Why are unique passwords important? + +Using a different password for each account is the first step in ensuring that your accounts are not vulnerable to shared information leaks. Generating new credentials for every account is time-consuming, and it is extremely common for people to fall into the trap of using the same password on several accounts. The main problem with reusing passwords is that you increase the number of accounts an attacker could access if one of them experiences a credential breach. 
+ +It may seem like a burden to create new credentials for each account, but the few minutes you spend creating and recording this information will pay for itself many times over in the event of a data breach. This is where password management tools like KeePassX are invaluable for providing convenience and reliability in securing your logins. + +### 3 tips for getting the most out of KeePassX + +I have been using KeePassX to manage my password information for many years, and it has become a primary resource in my digital toolbox. Overall, it's fairly simple to use, but there are a few best practices I've learned that I think are worth highlighting. + + 1. Add the direct login URL for each account entry. KeePassX has a very convenient shortcut to open the URL listed with an entry. (It's Control+Shift+U on Linux.) When creating a new account entry for a website, I spend some time to locate the site's direct login URL. Although most websites have a login widget in their navigation toolbars, they also usually have direct pages for login forms. By putting this URL into the URL field on the account entry setup form, I can use the shortcut to directly open the login page in my browser. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip1.png) + + 2. Use the Notes field to record extra security information. In addition to passwords, most websites will ask several questions to create additional authentication factors for an account. I use the Notes sections in my account entries to record these additional factors. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip2.png) + + 3. Turn on automatic database locking. In the **Application Settings** under the **Tools** menu, there is an option to lock the database after a period of inactivity. Enabling this option is a good common-sense measure, similar to enabling a password-protected screen lock, that will help ensure your password database is not left open and unprotected if someone else gains access to your computer. + +![](https://opensource.com/sites/default/files/uploads/keepassx_application-settings.png) + +### Food for thought + +Protecting your accounts with better password practices and daily habits is just the beginning. Once you start using a password manager, you need to consider issues like protecting the password database file and ensuring you don't forget or lose the master credentials. + +The cloud-native world of disconnected devices and edge computing makes having a central password store essential. The practices and methodologies you adopt will help minimize your risk while you explore and work in the digital world. + + 1. Be aware of retention policies when storing your database in the cloud. KeePassX's database has an open format used by several tools on multiple platforms. Sooner or later, you will want to transfer your database to another device. As you do this, consider the medium you will use to transfer the file. The best option is to use some sort of direct transfer between devices, but this is not always convenient. Always think about where the database file might be stored as it winds its way through the information superhighway; an email may get cached on a server, an object store may move old files to a trash folder. Learn about these interactions for the platforms you are using before deciding where and how you will share your database file. + + 2. Consider the source of truth for your database while you're making edits. 
After you share your database file between devices, you might need to create accounts for new services or change information for existing services while using a device. To ensure your information is always correct across all your devices, you need to make sure any edits you make on one device end up in all copies of the database file. There is no easy solution to this problem, but you might think about making all edits from a single device or storing the master copy in a location where all your devices can make edits. + + 3. Do you really need to know your passwords? This is more of a philosophical question that touches on the nature of memorable passwords, convenience, and secrecy. I hardly look at passwords as I create them for new accounts; in most cases, I don't even click the "Show Password" checkbox. There is an idea that you can be more secure by not knowing your passwords, as it would be impossible to compel you to provide them. This may seem like a worrisome idea at first, but consider that you can recover or reset passwords for most accounts through alternate verification methods. When you consider that you might want to change your passwords on a semi-regular basis, it almost makes more sense to treat them as ephemeral information that can be regenerated or replaced. + + + + +Here are a few more ideas to consider as you develop your best practices. + +I hope these tips and tricks have helped expand your knowledge of password management and KeePassX. You can find tools that support the KeePass database format on nearly every platform. If you are not currently using a password manager or have never tried KeePassX, I highly recommend doing so now! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/keepassx-security-best-practices + +作者:[Michael McCune][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/elmiko +[b]: https://github.com/lujun9972 +[1]: https://vigilante.pw/ +[2]: https://en.wikipedia.org/wiki/OAuth +[3]: https://www.keepassx.org/ +[4]: https://opensource.com/business/16/5/keepassx diff --git a/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md new file mode 100644 index 0000000000..c10e7ae4ed --- /dev/null +++ b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Insync: The Hassleless Way of Using Google Drive on Linux) +[#]: via: (https://itsfoss.com/insync-linux-review/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Insync: The Hassleless Way of Using Google Drive on Linux +====== + +Using Google Drive on Linux is a pain and you probably already know that. There is no official desktop client of Google Drive for Linux. It’s been [more than six years since Google promised Google Drive on Linux][1] but it doesn’t seem to be happening. + +In the absence of the official Google Drive client on Linux, you have no option other than trying the alternatives. I have already discussed a number of [tools that allow you to use Google Drive on Linux][2]. 
One of those to[ols is][3] Insync, and in my opinion, this is your best bet for a native Google Drive experience on desktop Linux. + +Note that Insync is not an open source software. Heck, it is not even free to use. + +But it has so many features that it becomes an essential tool for those Linux users who rely heavily on Google Drive. + +I briefly discussed Insync in the old article about [Google Drive and Linux][2]. In this article, I’ll discuss Insync features in detail. + +### Insync brings native Google Drive experience to Linux desktop + +![Use insync to access Google Drive in Linux][4] + +The core competency of Insync is syncing your Google Drive, but the app is much more than that. It has features to help you maximize and control your productivity, your Google Drive and your files such as: + + * Cross-platform access (supports Linux, Windows and macOS) + * Easy multiple Google Drive accounts access + * Choose your syncing location. Sync files to your hard drive, external drives and NAS! + * Support for features like file matching, symlink and ignore list + + + +Let me show you some of the main features in action: + +#### Cross-platform in true sense + +Insync claims to run the same app across all operating systems i.e., Linux, Windows, and macOS. That means that you can access the same UI across different OSes, making it easy for you to manage your files across multiple machines. + +![The UI of Insync and the default location of the Insync folder.][5]The UI of Insync and the default location of the Insync folder. + +#### Multiple Google account management + +Insync interface allows you to manage multiple Google Drive accounts seamlessly. You can easily switch between several accounts just by clicking your Google account. + +![Switching between multiple Google accounts in Insync][6]Switching between multiple Google accounts + +#### Custom sync folders + +Customize the way you sync your files and folders. You can easily set your syncing destination anywhere on your machine including external drive and network drives. + +![Customize sync location in Insync][7]Customize sync location + +The selective syncing mode also allows you to easily select a number of files and folders you’d want to sync (or unsync) in your local machine. This includes selectively syncing files within folders. + +![Selective synchronization in Insync][8]Selective synchronization + +It has features like file matching and ‘ignore list’ to help you filter files you don’t want to sync or files that you already have on your machine. + +![File matching feature in Insync][9]Avoids duplication of files + +The ‘ignore list’ allows you to set rules to exclude certain type of files from synchronization. + +![Selective syncing based on rules in Insync][10]Selective syncing based on rules + +If you prefer to work out of the desktop, you have an “Add to Insync” feature that will allow you to add any local file to your Drive. + +![Sync files right from your desktop][11]Sync files right from your desktop + +Insync also supports symlinks for those with workflows that use symbolic links. To learn more about Insync and symlinks, you can refer to [this article.][12] + +#### Exclusive features for Linux + +Insync supports the most commonly used 64-bit Linux distributions like **Ubuntu, Debian and Fedora**. You can check out the full list of distribution support [here][13]. + +Insync also has [headless][14] support for those looking to sync through the command line interface. 
This is perfect if you use a distro that is not fully supported by the GUI app or if you are working with servers or if you simply prefer the CLI. + +![Insync CLI][15]Command Line Interface + +You can learn more about installing and running Insync headless [here][16]. + +### Insync pricing and special discount + +Insync is a premium tool and it comes with a [price tag][17]. You have 2 licenses to choose from: + + * **Prime** is priced at $29.99 per Google account. You’ll get access to: cross-platform syncing, multiple accounts access and **support**. + * **Teams** is priced at $49.99 per Google account. You’ll be able to access all the Prime features + Team Drives syncing + + + +It’s a one-time fee which means once you buy it, you don’t have to pay it again. In a world where everything is paid monthly, it’s refreshing to pay for software that is still one-time! + +Each Google account has a 15-day free trial that will allow you to test the full suite of features, including [Team Drives][18] syncing. + +If you think it’s a bit expensive for your budget, I have good news for you. As an It’s FOSS reader, you get Insync at 25% discount. + +Just use the code ITSFOSS25 at checkout time and you will get 25% immediate discount on any license. Isn’t it cool? + +If you are not certain yet, you can try Insync free for 15 days. And if you think it’s worth the money, purchase the license with **ITSFOSS25** coupon code. + +You can download Insync from their website. + +I have used Insync from the time when it was available for free and I have always liked it. They have added more features over the time and improved its UI and performance. Overall, it’s a nice-to-have application if you use Google Drive a lot and do not mind paying for the efforts of the developers. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/insync-linux-review/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://abevoelker.github.io/how-long-since-google-said-a-google-drive-linux-client-is-coming/ +[2]: https://itsfoss.com/use-google-drive-linux/ +[3]: https://www.insynchq.com +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/google-drive-linux-insync.jpeg?resize=800%2C450&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_interface.jpeg?fit=800%2C501&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_multiple_google_account.jpeg?ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_folder_settings.png?ssl=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_selective_sync.png?ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_file_matching.jpeg?ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_ignore_list_1.png?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/add-to-insync-shortcut.jpeg?ssl=1 +[12]: https://help.insynchq.com/key-features-and-syncing-explained/syncing-superpowers/using-symlinks-on-google-drive-with-insync +[13]: https://www.insynchq.com/downloads +[14]: https://en.wikipedia.org/wiki/Headless_software +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_cli.jpeg?fit=800%2C478&ssl=1 +[16]: https://help.insynchq.com/installation-on-windows-linux-and-macos/advanced/linux-controlling-insync-via-command-line-cli +[17]: https://www.insynchq.com/pricing +[18]: https://gsuite.google.com/learning-center/products/drive/get-started-team-drive/#!/ diff --git a/sources/tech/20181220 Getting started with Prometheus.md b/sources/tech/20181220 Getting started with Prometheus.md new file mode 100644 index 0000000000..79704addb7 --- /dev/null +++ b/sources/tech/20181220 Getting started with Prometheus.md @@ -0,0 +1,166 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with Prometheus) +[#]: via: (https://opensource.com/article/18/12/introduction-prometheus) +[#]: author: (Michael Zamot https://opensource.com/users/mzamot) + +Getting started with Prometheus +====== +Learn to install and write queries for the Prometheus monitoring and alerting system. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter** —although this is not Promethius' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy. + +Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. 
This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
+
+Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.
+
+Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4].
+
+Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics** , which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
+
+In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.
+
+### Making sense of Prometheus' configuration file
+
+The number of seconds between when **/metrics** is scraped controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
+
+Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets.
+
+### Installing Prometheus
+
+In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6].
+
+Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows:
+
+```
+global:
+  scrape_interval:      15s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'prometheus'
+
+        static_configs:
+        - targets: ['localhost:9090']
+
+  - job_name: 'webservers'
+
+        static_configs:
+        - targets: ['<target-machine-IP>:9100']
+```
+
+Start Prometheus with Docker by running the following command:
+
+```
+$ sudo docker run -d -p 9090:9090 -v
+/path/to/prometheus.yml:/etc/prometheus/prometheus.yml
+prom/prometheus
+```
+
+By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **\--web.listen-address="<IP-address>:<port>"** at the end of the previous command.
+
+In the machine you want to monitor, download and run the **node_exporter** container by using the following command:
+
+```
+$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v
+"/:/rootfs" --net="host" prom/node-exporter --path.procfs
+/host/proc --path.sysfs /host/sys --collector.filesystem.ignored-
+mount-points "^/(sys|proc|dev|host|etc)($|/)"
+```
+
+For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only.
+
+To verify that **node_exporter** is running, open your browser and navigate to **http://<target-machine-IP>:9100/metrics**. 
All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.
+
+![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
+
+To verify the Prometheus server installation, open your browser and navigate to the Prometheus web interface, which listens on port 9090 by default.
+
+You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.
+
+![](https://opensource.com/sites/default/files/uploads/targets-up.png)
+
+### Using Prometheus queries
+
+It's time to get familiar with [PromQL][7], Prometheus' query syntax, and its graphing web interface. Go to the expression browser (the **Graph** page of the web interface) on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
+
+Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:
+
+```
+node_filesystem_avail_bytes{fstype="ext4"}
+```
+
+When filtering, you can match "exactly equal" ( **=** ), "not equal" ( **!=** ), "regex-match" ( **=~** ), and "do not regex-match" ( **!~** ). The following examples illustrate this:
+
+To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
+
+```
+node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
+```
+
+To exclude a match:
+
+```
+node_filesystem_avail_bytes{fstype!="xfs"}
+```
+
+You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
+
+For example, the following query returns the samples from the last five minutes:
+
+```
+node_memory_MemAvailable_bytes[5m]
+```
+
+Prometheus also includes functions to allow advanced queries, such as this:
+
+```
+100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
+```
+
+Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.
+
+![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
+
+### Learn more
+
+Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
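+
+One practical note before the further reading: the expressions shown above are not limited to the built-in web interface. As a minimal sketch (assuming the server is reachable at `localhost:9090` and that `curl` is available; adjust the host to your setup), the same PromQL can be sent to Prometheus' HTTP query API from a terminal:
+
+```
+# Ask the HTTP API for the current value of the "up" metric, which reports
+# which scrape targets Prometheus currently considers healthy.
+# -G sends the data as URL query parameters, and --data-urlencode takes
+# care of the special characters in the PromQL expression.
+$ curl -G 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up'
+
+# Any expression from this section works the same way, for example:
+$ curl -G 'http://localhost:9090/api/v1/query' \
+    --data-urlencode 'query=node_filesystem_avail_bytes{fstype="ext4"}'
+```
+
+The response comes back as JSON, which makes it straightforward to reuse the same queries in scripts or other tools.
+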
+ +To learn more about Prometheus and its functions, I recommend the following resources: + ++ About [PromQL][8] ++ What [node_exporters collects][9] ++ [Prometheus functions][10] ++ [4 open source monitoring tools][11] ++ [Now available: The open source guide to DevOps monitoring tools][12] + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/introduction-prometheus + +作者:[Michael Zamot][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mzamot +[b]: https://github.com/lujun9972 +[1]: https://prometheus.io/ +[2]: https://www.cncf.io/ +[3]: https://prometheus.io/docs/alerting/alertmanager/ +[4]: https://en.wikipedia.org/wiki/PagerDuty +[5]: https://grafana.com/ +[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus +[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[9]: https://github.com/prometheus/node_exporter#collectors +[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/ +[11]: https://opensource.com/article/18/8/open-source-monitoring-tools +[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools diff --git a/sources/tech/20181221 Large files with Git- LFS and git-annex.md b/sources/tech/20181221 Large files with Git- LFS and git-annex.md new file mode 100644 index 0000000000..29a76f810f --- /dev/null +++ b/sources/tech/20181221 Large files with Git- LFS and git-annex.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Large files with Git: LFS and git-annex) +[#]: via: (https://anarc.at/blog/2018-12-21-large-files-with-git/) +[#]: author: (Anarc.at https://anarc.at/) + +Large files with Git: LFS and git-annex +====== + +Git does not handle large files very well. While there is work underway to handle large repositories through the [commit graph work][2], Git's internal design has remained surprisingly constant throughout its history, which means that storing large files into Git comes with a significant and, ultimately, prohibitive performance cost. Thankfully, other projects are helping Git address this challenge. This article compares how Git LFS and git-annex address this problem and should help readers pick the right solution for their needs. + +### The problem with large files + +As readers probably know, Linus Torvalds wrote Git to manage the history of the kernel source code, which is a large collection of small files. Every file is a "blob" in Git's object store, addressed by its cryptographic hash. A new version of that file will store a new blob in Git's history, with no deduplication between the two versions. The pack file format can store binary deltas between similar objects, but if many objects of similar size change in a repository, that algorithm might fail to properly deduplicate. In practice, large binary files (say JPEG images) have an irritating tendency of changing completely when even the smallest change is made, which makes delta compression useless. + +There have been different attempts at fixing this in the past. 
In 2006, Torvalds worked on [improving the pack-file format][3] to reduce object duplication between the index and the pack files. Those changes were eventually reverted because, as Nicolas Pitre [put it][4]: "that extra loose object format doesn't appear to be worth it anymore". + +Then in 2009, [Caca Labs][5] worked on improving the `fast-import` and `pack-objects` Git commands to do special handling for big files, in an effort called [git-bigfiles][6]. Some of those changes eventually made it into Git: for example, since [1.7.6][7], Git will stream large files directly to a pack file instead of holding them all in memory. But files are still kept forever in the history. + +An example of trouble I had to deal with is for the Debian security tracker, which follows all security issues in the entire Debian history in a single file. That file is around 360,000 lines for a whopping 18MB. The resulting repository takes 1.6GB of disk space and a local clone takes 21 minutes to perform, mostly taken up by Git resolving deltas. Commit, push, and pull are noticeably slower than a regular repository, taking anywhere from a few seconds to a minute depending one how old the local copy is. And running annotate on that large file can take up to ten minutes. So even though that is a simple text file, it's grown large enough to cause significant problems for Git, which is otherwise known for stellar performance. + +Intuitively, the problem is that Git needs to copy files into its object store to track them. Third-party projects therefore typically solve the large-files problem by taking files out of Git. In 2009, Git evangelist Scott Chacon released [GitMedia][8], which is a Git filter that simply takes large files out of Git. Unfortunately, there hasn't been an official release since then and it's [unclear][9] if the project is still maintained. The next effort to come up was [git-fat][10], first released in 2012 and still maintained. But neither tool has seen massive adoption yet. If I would have to venture a guess, it might be because both require manual configuration. Both also require a custom server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits collaboration since users need access to another service. + +### Git LFS + +That was before GitHub [released][11] Git Large File Storage (LFS) in August 2015. Like all software taking files out of Git, LFS tracks file hashes instead of file contents. So instead of adding large files into Git directly, LFS adds a pointer file to the Git repository, which looks like this: + +``` +version https://git-lfs.github.com/spec/v1 +oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393 +size 12345 +``` + +LFS then uses Git's smudge and clean filters to show the real file on checkout. Git only stores that small text file and does so efficiently. The downside, of course, is that large files are not version controlled: only the latest version of a file is kept in the repository. + +Git LFS can be used in any repository by installing the right hooks with `git lfs install` then asking LFS to track any given file with `git lfs track`. This will add the file to the `.gitattributes` file which will make Git run the proper LFS filters. It's also possible to add patterns to the `.gitattributes` file, of course. 
For example, this will make sure Git LFS will track MP3 and ZIP files: + +``` +$ cat .gitattributes +*.mp3 filter=lfs -text +*.zip filter=lfs -text +``` + +After this configuration, we use Git normally: `git add`, `git commit`, and so on will talk to Git LFS transparently. + +The actual files tracked by LFS are copied to a path like `.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded file path of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the content's hash (currently SHA-256) of the file. This brings the extra feature that multiple copies of the same file in the same repository are automatically deduplicated, although in practice this rarely occurs. + +Git LFS will copy large files to that internal storage on `git add`. When a file is modified in the repository, Git notices, the new version is copied to the internal storage, and the pointer file is updated. The old version is left dangling until the repository is pruned. + +This process only works for new files you are importing into Git, however. If a Git repository already has large files in its history, LFS can fortunately "fix" repositories by retroactively rewriting history with [git lfs migrate][12]. This has all the normal downsides of rewriting history, however --- existing clones will have to be reset to benefit from the cleanup. + +LFS also supports [file locking][13], which allows users to claim a lock on a file, making it read-only everywhere except in the locking repository. This allows users to signal others that they are working on an LFS file. Those locks are purely advisory, however, as users can remove other user's locks by using the `--force` flag. LFS can also [prune][14] old or unreferenced files. + +The main [limitation][15] of LFS is that it's bound to a single upstream: large files are usually stored in the same location as the central Git repository. If it is hosted on GitHub, this means a default quota of 1GB storage and bandwidth, but you can purchase additional "packs" to expand both of those quotas. GitHub also limits the size of individual files to 2GB. This [upset][16] some users surprised by the bandwidth fees, which were previously hidden in GitHub's cost structure. + +While the actual server-side implementation used by GitHub is closed source, there is a [test server][17] provided as an example implementation. Other Git hosting platforms have also [implemented][18] support for the LFS [API][19], including GitLab, Gitea, and BitBucket; that level of adoption is something that git-fat and GitMedia never achieved. LFS does support hosting large files on a server other than the central one --- a project could run its own LFS server, for example --- but this will involve a different set of credentials, bringing back the difficult user onboarding that affected git-fat and GitMedia. + +Another limitation is that LFS only supports pushing and pulling files over HTTP(S) --- no SSH transfers. LFS uses some [tricks][20] to bypass HTTP basic authentication, fortunately. This also might change in the future as there are proposals to add [SSH support][21], resumable uploads through the [tus.io protocol][22], and other [custom transfer protocols][23]. + +Finally, LFS can be slow. Every file added to LFS takes up double the space on the local filesystem as it is copied to the `.git/lfs/objects` storage. The smudge/clean interface is also slow: it works as a pipe, but buffers the file contents in memory each time, which can be prohibitive with files larger than available memory. 
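+
+To tie the pieces above together, a typical LFS-enabled session looks something like this (a sketch only; the tracked pattern, file name, and remote are placeholders, and output is omitted):
+
+```
+# One-time setup of the LFS hooks in this repository
+$ git lfs install
+
+# Ask LFS to handle ISO images; git lfs track records the pattern in
+# .gitattributes, which itself needs to be committed
+$ git lfs track "*.iso"
+$ git add .gitattributes
+
+# From here on, large files are added and committed as usual;
+# only the small pointer file ends up in Git's history
+$ git add installer.iso
+$ git commit -m "Add installer image"
+$ git push origin master
+```
+
+Afterwards, `git lfs ls-files` lists which files in the working copy are under LFS control.
+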
+ +### git-annex + +The other main player in large file support for Git is git-annex. We [covered the project][24] back in 2010, shortly after its first release, but it's certainly worth discussing what has changed in the eight years since Joey Hess launched the project. + +Like Git LFS, git-annex takes large files out of Git's history. The way it handles this is by storing a symbolic link to the file in `.git/annex`. We should probably credit Hess for this innovation, since the Git LFS storage layout is obviously inspired by git-annex. The original design of git-annex introduced all sorts of problems however, especially on filesystems lacking symbolic-link support. So Hess has implemented different solutions to this problem. Originally, when git-annex detected such a "crippled" filesystem, it switched to [direct mode][25], which kept files directly in the work tree, while internally committing the symbolic links into the Git repository. This design turned out to be a little confusing to users, including myself; I have managed to shoot myself in the foot more than once using this system. + +Since then, git-annex has adopted a different v7 mode that is also based on smudge/clean filters, which it called "[unlocked files][26]". Like Git LFS, unlocked files will double disk space usage by default. However it is possible to reduce disk space usage by using "thin mode" which uses hard links between the internal git-annex disk storage and the work tree. The downside is, of course, that changes are immediately performed on files, which means previous file versions are automatically discarded. This can lead to data loss if users are not careful. + +Furthermore, git-annex in v7 mode suffers from some of the performance problems affecting Git LFS, because both use the smudge/clean filters. Hess actually has [ideas][27] on how the smudge/clean interface could be improved. He proposes changing Git so that it stops buffering entire files into memory, allows filters to access the work tree directly, and adds the hooks he found missing (for `stash`, `reset`, and `cherry-pick`). Git-annex already implements some tricks to work around those problems itself but it would be better for those to be implemented in Git natively. + +Being more distributed by design, git-annex does not have the same "locking" semantics as LFS. Locking a file in git-annex means protecting it from changes, so files need to actually be in the "unlocked" state to be editable, which might be counter-intuitive to new users. In general, git-annex has some of those unusual quirks and interfaces that often come with more powerful software. + +And git-annex is much more powerful: it not only addresses the "large-files problem" but goes much further. For example, it supports "partial checkouts" --- downloading only some of the large files. I find that especially useful to manage my video, music, and photo collections, as those are too large to fit on my mobile devices. Git-annex also has support for location tracking, where it knows how many copies of a file exist and where, which is useful for archival purposes. And while Git LFS is only starting to look at transfer protocols other than HTTP, git-annex already supports a [large number][28] through a [special remote protocol][29] that is fairly easy to implement. 
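+
+To give a feel for the commands involved, a bare-bones git-annex session might look something like this (a sketch; the file name is a placeholder and it assumes a remote called `backup` has already been configured):
+
+```
+# Turn an ordinary Git repository into an annex
+$ git init photos && cd photos
+$ git annex init "laptop"
+
+# Add a large file: the content goes into git-annex's internal storage
+# and only a small pointer is committed to Git
+$ git annex add vacation.mov
+$ git commit -m "Add vacation video"
+
+# Location tracking: see where the content lives and move it around
+$ git annex whereis vacation.mov
+$ git annex copy vacation.mov --to=backup
+$ git annex drop vacation.mov   # allowed because a copy still exists elsewhere
+$ git annex get vacation.mov    # fetch the content back when needed
+```
+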
+ +"Large files" is therefore only scratching the surface of what git-annex can do: I have used it to build an [archival system for remote native communities in northern Québec][30], while others have built a [similar system in Brazil][31]. It's also used by the scientific community in projects like [GIN][32] and [DataLad][33], which manage terabytes of data. Another example is the [Japanese American Legacy Project][34] which manages "upwards of 100 terabytes of collections, transporting them from small cultural heritage sites on USB drives". + +Unfortunately, git-annex is not well supported by hosting providers. GitLab [used to support it][35], but since it implemented Git LFS, it [dropped support for git-annex][36], saying it was a "burden to support". Fortunately, thanks to git-annex's flexibility, it may eventually be possible to treat [LFS servers as just another remote][37] which would make git-annex capable of storing files on those servers again. + +### Conclusion + +Git LFS and git-annex are both mature and well maintained programs that deal efficiently with large files in Git. LFS is easier to use and is well supported by major Git hosting providers, but it's less flexible than git-annex. + +Git-annex, in comparison, allows you to store your content anywhere and espouses Git's distributed nature more faithfully. It also uses all sorts of tricks to save disk space and improve performance, so it should generally be faster than Git LFS. Learning git-annex, however, feels like learning Git: you always feel you are not quite there and you can always learn more. It's a double-edged sword and can feel empowering for some users and terrifyingly hard for others. Where you stand on the "power-user" scale, along with project-specific requirements will ultimately determine which solution is the right one for you. + +Ironically, after thorough evaluation of large-file solutions for the Debian security tracker, I ended up proposing to rewrite history and [split the file by year][38] which improved all performance markers by at least an order of magnitude. As it turns out, keeping history is critical for the security team so any solution that moves large files outside of the Git repository is not acceptable to them. Therefore, before adding large files into Git, you might want to think about organizing your content correctly first. But if large files are unavoidable, the Git LFS and git-annex projects allow users to keep using most of their current workflow. + +> This article [first appeared][39] in the [Linux Weekly News][40]. 
+ +-------------------------------------------------------------------------------- + +via: https://anarc.at/blog/2018-12-21-large-files-with-git/ + +作者:[Anarc.at][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://anarc.at/ +[b]: https://github.com/lujun9972 +[1]: https://anarc.at/blog/ +[2]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt +[3]: https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/ +[4]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/ +[5]: http://caca.zoy.org/ +[6]: http://caca.zoy.org/wiki/git-bigfiles +[7]: https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/ +[8]: https://github.com/alebedev/git-media +[9]: https://github.com/alebedev/git-media/issues/15 +[10]: https://github.com/jedbrown/git-fat +[11]: https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/ +[12]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn +[13]: https://github.com/git-lfs/git-lfs/wiki/File-Locking +[14]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-prune.1.ronn +[15]: https://github.com/git-lfs/git-lfs/wiki/Limitations +[16]: https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91 +[17]: https://github.com/git-lfs/lfs-test-server +[18]: https://github.com/git-lfs/git-lfs/wiki/Implementations%0A +[19]: https://github.com/git-lfs/git-lfs/tree/master/docs/api +[20]: https://github.com/git-lfs/git-lfs/blob/master/docs/api/authentication.md +[21]: https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md +[22]: https://tus.io/ +[23]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md +[24]: https://lwn.net/Articles/419241/ +[25]: http://git-annex.branchable.com/direct_mode/ +[26]: https://git-annex.branchable.com/tips/unlocked_files/ +[27]: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/ +[28]: http://git-annex.branchable.com/special_remotes/ +[29]: http://git-annex.branchable.com/special_remotes/external/ +[30]: http://isuma-media-players.readthedocs.org/en/latest/index.html +[31]: https://github.com/RedeMocambos/baobaxia +[32]: https://web.gin.g-node.org/ +[33]: https://www.datalad.org/ +[34]: http://www.densho.org/ +[35]: https://docs.gitlab.com/ee/workflow/git_annex.html +[36]: https://gitlab.com/gitlab-org/gitlab-ee/issues/1648 +[37]: https://git-annex.branchable.com/todo/LFS_API_support/ +[38]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=908678#52 +[39]: https://lwn.net/Articles/774125/ +[40]: http://lwn.net/ diff --git a/sources/tech/20181222 How to detect automatically generated emails.md b/sources/tech/20181222 How to detect automatically generated emails.md new file mode 100644 index 0000000000..2ccaeddeee --- /dev/null +++ b/sources/tech/20181222 How to detect automatically generated emails.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: (wyxplus) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to detect automatically generated emails) +[#]: via: (https://arp242.net/weblog/autoreply.html) +[#]: author: (Martin Tournoij https://arp242.net/) + +How to detect automatically generated emails +====== + +### How to detect automatically generated emails + + +When you send out an auto-reply from 
an email system you want to take care to not send replies to automatically generated emails. At best, you will get a useless delivery failure. At worst, you will get an infinite email loop and a world of chaos.
+
+Turns out that reliably detecting automatically generated emails is not always easy. Here are my observations based on writing a detector for this and scanning about 100,000 emails with it (extensive personal archive and company archive).
+
+### Auto-Submitted header
+
+Defined in [RFC 3834][1].
+
+This is the ‘official’ standard way to indicate your message is an auto-reply. You should **not** send a reply if `Auto-Submitted` is present and has a value other than `no`.
+
+### X-Auto-Response-Suppress header
+
+Defined [by Microsoft][2].
+
+This header is used by Microsoft Exchange, Outlook, and perhaps some other products. Many newsletters and such also set this. You should **not** send a reply if `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`.
+
+### List-Id and List-Unsubscribe headers
+
+Defined in [RFC 2919][3].
+
+You usually don’t want to send auto-replies to mailing lists or newsletters. Pretty much all mailing lists and most newsletters set at least one of these headers. You should **not** send a reply if either of these headers is present. The value is unimportant.
+
+### Feedback-ID header
+
+Defined [by Google][4].
+
+Gmail uses this header to identify mail newsletters, and uses it to generate statistics/reports for owners of those newsletters. You should **not** send a reply if this header is present; the value is unimportant.
+
+### Non-standard ways
+
+The above methods are well-defined and clear (even though some are non-standard). Unfortunately, some email systems do not use any of them :-( Here are some additional measures.
+
+#### Precedence header
+
+Not really defined anywhere; it is mentioned in [RFC 2076][5], where its use is discouraged (but this header is commonly encountered).
+
+Note that checking for the existence of this field is not recommended, as some mails use `normal` and some other (obscure) values (this is not very common though).
+
+My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`.
+
+#### Other obscure headers
+
+A collection of other (somewhat obscure) headers I’ve encountered. I would recommend **not** sending an auto-reply if one of these is set. Most mails also set one of the above headers, but some don’t (but it’s not very common).
+
+ * `X-MSFBL`; can’t really find a definition (Microsoft header?), but I only have auto-generated mails with this header.
+
+ * `X-Loop`; not really defined anywhere, and somewhat rare, but sometimes it’s set. It’s most often set to the address that should not get emails, but `X-Loop: yes` is also encountered.
+
+ * `X-Autoreply`; fairly rare, and always seems to have a value of `yes`.
+
+#### Email address
+
+Check if the `From` or `Reply-To` headers contain `noreply`, `no-reply`, or `no_reply` (regex: `^no.?reply@`).
+
+#### HTML only
+
+If an email only has an HTML part, but no text part, it’s a good indication this is an auto-generated mail or newsletter. Pretty much all mail clients also set a text part.
+
+#### Delivery failures
+
+Many delivery failure messages don’t really indicate that they’re failures. 
Some ways to check this: + + * `From` contains `mailer-daemon` or `Mail Delivery Subsystem` + + + +Many mail libraries leave some sort of footprint, and most regular mail clients override this with their own data. Checking for this seems to work fairly well. + + * `X-Mailer: Microsoft CDO for Windows 2000` – Set by some MS software; I can only find it on autogenerated mails. Yes, it’s still used in 2015. + + * `Message-ID` header contains `.JavaMail.` – I’ve found a few (5 on 50k) regular messages with this, but not many; the vast majority (thousends) of messages are news-letters, order confirmations, etc. + + * `^X-Mailer` starts with `PHP`. This should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer blah blah`. The same as `JavaMail` applies. + + * `X-Library` presence; only [Indy][6] seems to set this. + + * `X-Mailer` starts with `wdcollect`. Set by some Plesk mails. + + * `X-Mailer` starts with `MIME-tools`. + + + + +### Final precaution: limit the number of replies + +Even when following all of the above advice, you may still encounter an email program that will slip through. This can very dangerous, as email systems that simply `IF email THEN send_email` have the potential to cause infinite email loops. + +For this reason, I recommend keeping track of which emails you’ve sent an autoreply to and rate limiting this to at most n emails in n minutes. This will break the back-and-forth chain. + +We use one email per five minutes, but something less strict will probably also work well. + +### What you need to set on your auto-response + +The specifics for this will vary depending on what sort of mails you’re sending. This is what we use for auto-reply mails: + +``` +Auto-Submitted: auto-replied +X-Auto-Response-Suppress: All +Precedence: auto_reply +``` + +### Feedback + +You can mail me at [martin@arp242.net][7] or [create a GitHub issue][8] for feedback, questions, etc. + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/autoreply.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: http://tools.ietf.org/html/rfc3834 +[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx +[3]: https://tools.ietf.org/html/rfc2919) +[4]: https://support.google.com/mail/answer/6254652?hl=en +[5]: http://www.faqs.org/rfcs/rfc2076.html +[6]: http://www.indyproject.org/index.en.aspx +[7]: mailto:martin@arp242.net +[8]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md new file mode 100644 index 0000000000..e49778eab7 --- /dev/null +++ b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md @@ -0,0 +1,288 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Turn GNOME to Heaven With These 23 GNOME Extensions) +[#]: via: (https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions) +[#]: author: (M.Hanny Sabbagh https://fosspost.org/author/mhsabbagh) + +Turn GNOME to Heaven With These 23 GNOME Extensions +====== + +GNOME Shell is one of the most used desktop interfaces on the Linux desktop. 
It’s part of the GNOME project and is considered to be the next generation of the old classic GNOME 2.x interface. GNOME Shell was first released in 2011 carrying a lot of features, including the GNOME Shell extensions feature.
+
+GNOME Extensions are simply extra functionality that you can add to your interface; they can be panel extensions, performance extensions, quick-access extensions, productivity extensions, or extensions for any other type of usage. They are all free and open source, of course; you can actually install them with a single click **from your web browser**.
+
+### How To Install GNOME Extensions?
+
+Your main way to install GNOME extensions will be via the extensions.gnome.org website. It’s an official platform belonging to GNOME where developers publish their extensions easily so that users can install them in a single click.
+
+In order for this to work, you’ll need two things:
+
+ 1. Browser Add-on: You’ll need to install a browser add-on that allows the website to communicate with your local GNOME desktop. You can install it from [here for Firefox][1], [here for Chrome][2], or [here for Opera][3].
+
+ 2. Native Connector: You still need another component to allow your system to accept installing files locally from your web browser. To install this component, you must install the `chrome-gnome-shell` package. Do not be deceived! Although the package name contains “chrome”, it works on Firefox too. To install it on Debian/Ubuntu/Mint, run the following command in a terminal:
+
+```
+sudo apt install chrome-gnome-shell
+```
+
+For Fedora:
+
+```
+sudo dnf install chrome-gnome-shell
+```
+
+For Arch:
+
+```
+sudo pacman -S chrome-gnome-shell
+```
+
+After you have installed the two components above, you can easily install extensions from the GNOME extensions website.
+
+### How to Configure GNOME Extensions Settings?
+
+Many of these extensions have a settings window that you can access to adjust the preferences of that extension. You should make sure that you have seen its options at least once so that you know what you can possibly do using that extension.
+
+To do this, you can head to the [installed extensions page on the GNOME website][4], and you’ll see a small options button near every extension that offers one:
+
+![Screenshot 2018 12 24 20 50 55 41][5]
+
+Clicking it will display a window for you, from which you can see the possible settings:
+
+![Screenshot 2018 12 24 20 51 29 43][6]
+
+Read on below for our list of recommended extensions!
+
+### General Extensions
+
+#### 1\. User Themes
+
+![Screenshot from 2018 12 23 12 30 20 45][7]
+
+This is the first must-install extension on the GNOME Shell interface. It simply allows you to change the desktop theme to another one using the tweak tool. After installation, run gnome-tweak-tool and you’ll be able to change your desktop theme.
+
+Installation link: 
+
+#### 2\. Dash to Panel
+
+![Screenshot from 2018 12 24 21 16 11 47][8]
+
+Converts the GNOME top bar into a taskbar with many added features, such as favorite icons, moving the clock to the right, adding currently opened windows to the panel, and many other features. (Make sure not to install this one together with other extensions below that provide the same functionality.)
+
+Installation link: 
+
+#### 3\. Desktop Icons
+
+![gnome shell screenshot SSP3UZ 49][9]
+
+Restores desktop icons to GNOME. Still in continuous development.
+
+Installation link: 
+
+#### 4\. 
Dash to Dock + +![Screenshot from 2018 12 24 21 50 07 51][10] + +If you are a fan of the Unity interface, then this extension may help you. It simply adds a dock to the left/right side of the screen, which is very similar to Unity. You can customize that dock however you like. + +Installation link: + +### Productivity Extensions + +#### 5\. Todo.txt + +![screenshot_570_5X5YkZb][11] + +For users who like to maintain productivity, you can use this extension to add a simple To-Do list functionality to your desktop, it will use the [syntax][12] from todotxt.com, you can add unlimited to-dos, mark them as complete or remove them, change their position beside modifying or taking a backup of the todo.txt file manually. + +Installation link: + +#### 6\. Screenshot Tool + +![Screenshot from 2018 12 24 21 04 14 54][13] + +Easily take a screenshot of your desktop or a specific area, with the possibility of also auto-uploading it to imgur.com and auto-saving the link into the clipboard! Very useful extension. + +Installation link: + +#### 7\. OpenWeather + +![screenshot_750][14] + +If you would like to know the weather forecast everyday then this extension will be the right one for you, this extension will simply add an applet to the top panel allowing you to fetch the weather data from openweathermap.org or forecast.io, it supports all the countries and cities around the world. It also shows the wind and humidity. + +Installation link: + +#### 8 & 9\. Search Providers Extensions + +![Screenshot from 2018 12 24 21 29 41 57][15] + +In GNOME, you can add what’s known as “search providers” to the shell, meaning that when you type something in the search box, you’ll be able to automatically search these websites (search providers) using the same text you entered, and see the results directly from your shell! + +YouTube Search Provider: + +Wikipedia Search Provider: + +### Workflow Extensions + +#### 10\. No Title Bar + +![Screenshot 20181224210737 59][16] + +This extension simply removes the title bar from all the maximized windows, and moves it into the top GNOME Panel. In this way, you’ll be able to save a complete horizontal line on your screen, more space for your work! + +Installation Link: + +#### 11\. Applications Menu + +![Screenshot 2018 12 23 13 58 07 61][17] + +This extension simply adds a classic menu to the “activities” menu on the corner. By using it, you will be able to browse the installed applications and categories without the need to use the dash or the search feature, which saves you time. (Check the “No hot corner” extension below to get a better usage). + +Installation link: + +#### 12\. Places Status Indicator + +![screenshot_8_1][18] + +This indicator will put itself near the left corner of the activities button, it allows you to access your home folder and sub-folders easily using a menu, you can also browse the available devices and networks using it. + +Installation link: + +#### 13\. Window List + +![Screenshot from 2016-08-12 08-05-48][19] + +Officially supported by GNOME team, this extension adds a bottom panel to the desktop which allows you to navigate between the open windows easily, it also include a workspace indicator to switch between them. + +Installation link: + +#### 14\. 
Frippery Panel Favorites + +![screenshot_4][20] + +This extensions adds your favorite applications and programs to the panel near the activities button, allowing you to access to it more quickly with just 1 click, you can add or remove applications from it just by modifying your applications in your favorites (the same applications in the left panel when you click the activities button will appear here). + +Installation link: + +#### 15\. TopIcons + +![Screenshot 20181224211009 66][21] + +Those extensions restore the system tray back into the top GNOME panel. Very needed in cases of where applications are very much dependent on the tray icon. + +For GNOME 3.28, installation link: + +For GNOME 3.30, installation link: + +#### 16\. Clipboard Indicator + +![Screenshot 20181224214626 68][22] + +A clipboard manager is simply an applications that manages all the copy & paste operations you do on your system and saves them into a history, so that you can access them later whenever you want. + +This extension does exactly this, plus many other cool features that you can check. + +Installation link: + +### Other Extensions + +#### 17\. Frippery Move Clock + +![screenshot_2][23] + +If you are from those people who like alignment a lot, and dividing the panels into 2 parts only, then you may like this extension, what it simply does is moving the clock from the middle of the GNOME Shell panel to the right near the other applets on the panel, which makes it more organized. + +Installation link: + +#### 18\. No Topleft Hot Corner + +If you don’t like opening the dash whenever you move the mouse to the left corner, you can disable it easily using this extension. You can for sure click the activities button if you want to open the dash view (or via the Super key on the keyboard), but the hot corner will be disabled only. + +Installation link: + +#### 19\. No Annoyance + +Simply removes the “window is ready” notification each time a new window a opened. + +Installation link: + +#### 20\. EasyScreenCast + +![Screenshot 20181224214219 71][24] + +If you would like to quickly take a screencast for your desktop, then this extension may help you. By simply just choosing the type of recording you want, you’ll be able to take screencasts any time. You can also configure advanced options for the extension, such as the pipeline and many other things. + +Installation link: + +#### 21\. Removable drive Menu + +![Screenshot 20181224214131 73][25] + +Adds an icon to the top bar which shows you a list of your currently removable drives. + +Installation link: + +#### 22\. BottomPanel + +![Screenshot 20181224214419 75][26] + +As its title says.. It simply moves the top GNOME bar into the bottom of the screen. + +Installation link: + +#### 23\. Unite + +If you would like one extension only to do most of the above tasks, then Unite extension can help you. It adds panel favorites, removes title bar, moves the clock, allows you to change the location of the panel.. And many other features. All using this extension alone! + +Installation link: + +### Conclusion + +This was our list for some great GNOME Shell extensions to try out. Of course, you don’t (and shouldn’t!) install all of these, but just what you need for your own usage. As you can see, you can convert GNOME into any form you would like, but be careful for RAM usage (because if you use more extensions, the shell will consume very much resources). + +What other GNOME Shell extensions do you use? What do you think of this list? 
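+
+As a final terminal-side tip tied to the installation steps earlier in this article: extensions installed from the website end up under `~/.local/share/gnome-shell/extensions/`, and the list of enabled extensions is just a gsettings key, so it can be inspected or adjusted from a shell as well. A rough sketch (the UUID shown is only an example; use the directory names from your own system):
+
+```
+# See which extensions are currently enabled
+$ gsettings get org.gnome.shell enabled-extensions
+
+# Per-user extensions live here; each directory name is an extension UUID
+$ ls ~/.local/share/gnome-shell/extensions/
+
+# Set the list of enabled extensions (note: this replaces the whole list,
+# so include every UUID you want to keep active)
+$ gsettings set org.gnome.shell enabled-extensions \
+    "['user-theme@gnome-shell-extensions.gcampax.github.com']"
+```
+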
+ + +-------------------------------------------------------------------------------- + +via: https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions + +作者:[M.Hanny Sabbagh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fosspost.org/author/mhsabbagh +[b]: https://github.com/lujun9972 +[1]: https://addons.mozilla.org/en/firefox/addon/gnome-shell-integration/ +[2]: https://chrome.google.com/webstore/detail/gnome-shell-integration/gphhapmejobijbbhgpjhcjognlahblep +[3]: https://addons.opera.com/en/extensions/details/gnome-shell-integration/ +[4]: https://extensions.gnome.org/local/ +[5]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-50-55.png?resize=850%2C359&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 42) +[6]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-51-29.png?resize=850%2C462&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 44) +[7]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-23-12-30-20.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 46) +[8]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-16-11.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 48) +[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/gnome-shell-screenshot-SSP3UZ.png?resize=850%2C492&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 50) +[10]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-50-07.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 52) +[11]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_570_5X5YkZb.png?resize=478%2C474&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 53) +[12]: https://github.com/ginatrapani/todo.txt-cli/wiki/The-Todo.txt-Format +[13]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-04-14.png?resize=715%2C245&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 55) +[14]: https://i2.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_750.jpg?resize=648%2C276&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 56) +[15]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-29-41.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 58) +[16]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224210737-380x95.png?resize=380%2C95&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 60) +[17]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-23_13-58-07.png?resize=524%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 62) +[18]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_8_1.png?resize=247%2C620&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 63) +[19]: https://i1.wp.com/fosspost.org/wp-content/uploads/2016/08/Screenshot-from-2016-08-12-08-05-48.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 64) +[20]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_4.png?resize=414%2C39&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 65) +[21]: 
https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224211009-631x133.png?resize=631%2C133&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 67) +[22]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214626-520x443.png?resize=520%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 69) +[23]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_2.png?resize=388%2C26&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 70) +[24]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214219-327x328.png?resize=327%2C328&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 72) +[25]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214131-366x199.png?resize=366%2C199&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 74) +[26]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214419-830x143.png?resize=830%2C143&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 76) diff --git a/sources/tech/20181226 -Review- Polo File Manager in Linux.md b/sources/tech/20181226 -Review- Polo File Manager in Linux.md new file mode 100644 index 0000000000..cf763850cf --- /dev/null +++ b/sources/tech/20181226 -Review- Polo File Manager in Linux.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: ([Review] Polo File Manager in Linux) +[#]: via: (https://itsfoss.com/polo-file-manager/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +[Review] Polo File Manager in Linux +====== + +We are all familiar with file managers. It’s that piece of software that allows you to access your directories, files in a GUI. + +Most of us use the default file manager included with our desktop of choice. The creator of [Polo][1] hopes to get you to use his file manager by adding extra features but hides the good ones behind a paywall. + +![][2]Polo file manager + +### What is Polo file manager? + +According to its [website][1], Polo is an “advanced file manager for Linux written in [Vala][3])”. Further down the page, Polo is referred to as a “modern, light-weight file manager for Linux with support for multiple panes and tabs; support for archives, and much more.” + +It is from the same developer (Tony George) that has given us some of the most popular applications for desktop Linux. [Timeshift backup][4] tool, [Conky Manager][5], [Aptik backup tool][6]s for applications etc. Polo is the latest offering from Tony. + +Note that Polo is still in the beta stage of development which means the first stable version of the software is not out yet. + +### Features of Polo file manager + +![Polo File Manager in Ubuntu Linux][7]Polo File Manager in Ubuntu Linux + +It’s true that Polo has a bunch of neat features that most file managers don’t have. However, the really neat features are only available if you donate more than $10 to the project or sign up for the creator’s Patreon. I will be separating the free features from the features that require the “donation plugin”. + +![Cloud storage support in Polo file manager][8]Support cloud storage + +#### Free Features + + * Multiple Panes – Single-pane, dual-pane (vertical or horizontal split) and quad-pane layouts. 
+ * Multiple Views – List view, Icon view, Tiled view, and Media view + * Device Manager – Devices popup displays the list of connected devices with options to mount and unmount + * Archive Support – Support for browsing archives as normal folders. Supports creation of archives in multiple formats with advanced compression settings. + * Checksum & Hashing – Generate and compare MD5, SHA1, SHA2-256 ad SHA2-512 checksums + * Built-in [Fish shell][9] + * Support for [cloud storage][10], such as Dropbox, Google Drive, Amazon Drive, Amazon S3, Backblaze B2, Hubi, Microsoft OneDrive, OpenStack Swift, and Yandex Disk + * Compare files + * Analyses disk usage + * KVM support + * Connect to FTP, SFTP, SSH and Samba servers + + + +![Dual pane view of Polo file manager][11]Polo in dual pane view + +#### Donation/Paywall Features + + * Write ISO to USB Device + * Image optimization and adjustment tools + * Optimize PNG + * Reduce JPEG Quality + * Remove Color + * Reduce Color + * Boost Color + * Set as Wallpaper + * Rotate + * Resize + * Convert to PNG, JPEG, TIFF, BMP, ICO and more + * PDF tools + * Split + * Merge + * Add and Remove Password + * Reduce File Size + * Uncompress + * Remove Colors + * Rotate + * Optimize + * Video Download via [youtube-dl][12] + + + +### Installing Polo + +Let’s see how to install Polo file manager on various Linux distributions. + +#### 1\. Ubuntu based distributions + +For all Ubuntu based systems (Ubuntu, Linux Mint, Elementary OS, etc), you can install Polo via the [official PPA][13]. Not sure what a PPA is? [Read about PPA here][14]. + +`sudo apt-add-repository -y ppa:teejee2008/ppa` +`sudo apt-get update` +`sudo apt-get install polo-file-manager` + +#### 2\. Arch based distributions + +For all Arch-based systems (Arch, Manjaro, ArchLabs, etc), you can install Polo from the [Arch User Repository][15]. + +#### 3\. Other Distros + +For all other distros, you can download and use the [.RUN installer][16] to setup Polo. + +### Thoughts on Polo + +I’ve installed tons of different distros and never had a problem with the default file manager. (I’ve probably used Thunar and Caja the most.) The free version of Polo doesn’t contain any features that would make me switch. As for the paid features, I already use a number of applications that accomplish the same things. + +One final note: the paid version of Polo is supposed to help fund development of the project. However, [according to GitHub][17], the last commit on Polo was three months ago. That’s quite a big interval of inactivity for a software that is still in the beta stages of development. + +Have you ever used [Polo][1]? If not, what is your favorite Linux file manager? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][18]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/polo-file-manager/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://teejee2008.github.io/polo/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo.jpg?fit=800%2C500&ssl=1 +[3]: https://en.wikipedia.org/wiki/Vala_(programming_language +[4]: https://itsfoss.com/backup-restore-linux-timeshift/ +[5]: https://itsfoss.com/conky-gui-ubuntu-1304/ +[6]: https://github.com/teejee2008/aptik +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-file-manager-in-ubuntu.jpeg?resize=800%2C450&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-coud-options.jpg?fit=800%2C795&ssl=1 +[9]: https://fishshell.com/ +[10]: https://itsfoss.com/cloud-services-linux/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-dual-pane.jpg?fit=800%2C520&ssl=1 +[12]: https://itsfoss.com/download-youtube-linux/ +[13]: https://launchpad.net/~teejee2008/+archive/ubuntu/ppa +[14]: https://itsfoss.com/ppa-guide/ +[15]: https://aur.archlinux.org/packages/polo +[16]: https://github.com/teejee2008/polo/releases +[17]: https://github.com/teejee2008/polo +[18]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20181227 Linux commands for measuring disk activity.md b/sources/tech/20181227 Linux commands for measuring disk activity.md new file mode 100644 index 0000000000..badda327dd --- /dev/null +++ b/sources/tech/20181227 Linux commands for measuring disk activity.md @@ -0,0 +1,252 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux commands for measuring disk activity) +[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Linux commands for measuring disk activity +====== +![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg) +Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we examine five very useful commands for looking into disk activity. Two of the commands (iostat and ioping) may have to be added to your system, and these same two commands require you to use sudo privileges, but all five commands provide useful ways to view disk activity. + +Probably one of the easiest and most obvious of these commands is **dstat**. + +### dtstat + +In spite of the fact that the **dstat** command begins with the letter "d", it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the **-d** option. As shown below, you’ll get a continuous list of disk read/write measurements until you stop the display with a ^c. Note that after the first report, each subsequent row in the display will report disk activity in the following time interval, and the default is only one second. + +``` +$ dstat -d +-dsk/total- + read writ + 949B 73k + 65k 0 <== first second + 0 24k <== second second + 0 16k + 0 0 ^C +``` + +Including a number after the -d option will set the interval to that number of seconds. 
+ +``` +$ dstat -d 10 +-dsk/total- + read writ + 949B 73k + 65k 81M <== first five seconds + 0 21k <== second five second + 0 9011B ^C +``` + +Notice that the reported data may be shown in a number of different units — e.g., M (megabytes), k (kilobytes), and B (bytes). + +Without options, the dstat command is going to show you a lot of other information as well — indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches. + +``` +$ dstat +You did not select any stats, using -cdngy by default. +--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system-- +usr sys idl wai stl| read writ| recv send| in out | int csw + 0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65 + 0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68 + 0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C +``` + +The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools, such as vmstat, netstat, iostat, and ifstat, with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the [dstat][1] command. + +### iostat + +The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It's sometimes used to evaluate the balance of activity between disks. + +``` +$ iostat +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +loop0 0.00 0.00 0.00 1048 0 +loop1 0.00 0.00 0.00 365 0 +loop2 0.00 0.00 0.00 1056 0 +loop3 0.00 0.01 0.00 16169 0 +loop4 0.00 0.00 0.00 413 0 +loop5 0.00 0.00 0.00 1184 0 +loop6 0.00 0.00 0.00 1062 0 +loop7 0.00 0.00 0.00 5261 0 +sda 1.06 0.89 72.66 2837453 232735080 +sdb 0.00 0.02 0.00 48669 40 +loop8 0.00 0.00 0.00 1053 0 +loop9 0.01 0.01 0.00 18949 0 +loop10 0.00 0.00 0.00 56 0 +loop11 0.00 0.00 0.00 7090 0 +loop12 0.00 0.00 0.00 1160 0 +loop13 0.00 0.00 0.00 108 0 +loop14 0.00 0.00 0.00 3572 0 +loop15 0.01 0.01 0.00 20026 0 +loop16 0.00 0.00 0.00 24 0 +``` + +Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the **-p** option, which allows you to just look at your disks — as shown in the commands below. + +``` +$ iostat -p sda +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.54 2843737 232815784 +sda1 1.04 0.88 72.54 2821733 232815784 +``` + +Note that **tps** refers to transfers per second. + +You can also get iostat to provide repeated reports. In the example below, we're getting measurements every five seconds by using the **-d** option. + +``` +$ iostat -p sda -d 5 +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.51 2843749 232834048 +sda1 1.04 0.88 72.51 2821745 232834048 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +If you prefer to omit the first (stats since boot) report, add a **-y** to your command. 
+ +``` +$ iostat -p sda -d 5 -y +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +Next, we look at our second disk drive. + +``` +$ iostat -p sdb +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sdb 0.00 0.02 0.00 48669 40 +sdb2 0.00 0.00 0.00 4861 40 +sdb1 0.00 0.01 0.00 35344 0 +``` + +### iotop + +The **iotop** command is top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea which processes are most demanding in terms in disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output. + +``` +$ sudo iotop -d 5 +Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s +Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s + TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND +32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient] + 208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8] + 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash + 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] + 3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp] + 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp] + 8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq] +``` + +### ioping + +The **ioping** command is an altogether different type of tool, but it can report disk latency — how long it takes a disk to respond to requests — and can be helpful in diagnosing disk problems. + +``` +$ sudo ioping /dev/sda1 +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup) +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms +^C +--- /dev/sda1 (block device 111.8 GiB) ioping statistics --- +3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s +generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s +min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us +``` + +### atop + +The **atop** command, like **top** provides a lot of information on system performance, including some stats on disk activity. 
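+
+Note that atop may not be installed by default on your distribution. It's generally packaged under its own name, so one of these should pull it in (pick the line that matches your package manager; the package name is assumed to be atop in both cases):
+
+```
+$ sudo apt install atop
+$ sudo dnf install atop
+```
+
+Once it's running, the full-screen display looks something like this: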
+ +``` +ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed +PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 | +CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% | +cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% | +CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 | +MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M | +SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G | +DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms | +NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 | +NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms | +NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms | + + PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 | + 3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop + 3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% + 3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% + 3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% +31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash + 3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep + 2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e + 3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% + 3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% + 3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% +``` + +If you want to look at _just_ the disk stats, you can easily manage that with a command like this: + +``` +$ atop | grep DSK +$ atop | grep DSK +DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms | +DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms | +DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms | +^C +``` + +### Being in the know with disk I/O + +Linux provides enough commands to give you good insights into how hard your disks are working and help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it's time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks will be obvious when you need to check them. + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md new file mode 100644 index 0000000000..58b072f2fc --- /dev/null +++ b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md @@ -0,0 +1,259 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Easily Upload Text Snippets To Pastebin-like Services From Commandline) +[#]: via: (https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Easily Upload Text Snippets To Pastebin-like Services From Commandline +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-720x340.png) + +Whenever there is need to share the code snippets online, the first one probably comes to our mind is Pastebin.com, the online text sharing site launched by **Paul Dixon** in 2002. Now, there are several alternative text sharing services available to upload and share text snippets, error logs, config files, a command’s output or any sort of text files. If you happen to share your code often using various Pastebin-like services, I do have a good news for you. Say hello to **Wgetpaste** , a command line BASH utility to easily upload text snippets to pastebin-like services. Using Wgetpaste script, anyone can quickly share text snippets to their friends, colleagues, or whoever wants to see/use/review the code from command line in Unix-like systems. + +### Installing Wgetpaste + +Wgetpaste is available in Arch Linux [Community] repository. To install it on Arch Linux and its variants like Antergos and Manjaro Linux, just run the following command: + +``` +$ sudo pacman -S wgetpaste +``` + +For other distributions, grab the source code from [**Wgetpaste website**][1] and install it manually as described below. + +First download the latest Wgetpaste tar file: + +``` +$ wget http://wgetpaste.zlin.dk/wgetpaste-2.28.tar.bz2 +``` + +Extract it: + +``` +$ tar -xvjf wgetpaste-2.28.tar.bz2 +``` + +It will extract the contents of the tar file in a folder named “wgetpaste-2.28”. + +Go to that directory: + +``` +$ cd wgetpaste-2.28/ +``` + +Copy the wgetpaste binary to your $PATH, for example **/usr/local/bin/**. + +``` +$ sudo cp wgetpaste /usr/local/bin/ +``` + +Finally, make it executable using command: + +``` +$ sudo chmod +x /usr/local/bin/wgetpaste +``` + +### Upload Text Snippets To Pastebin-like Services + +Uploading text snippets using Wgetpaste is trivial. Let me show you a few examples. + +**1\. 
Upload text files** + +To upload any text file using Wgetpaste, just run: + +``` +$ wgetpaste mytext.txt +``` + +This command will upload the contents of mytext.txt file. + +Sample output: + +``` +Your paste can be seen here: https://paste.pound-python.org/show/eO0aQjTgExP0wT5uWyX7/ +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-1.png) + +You can share the pastebin URL via any medium like mail, message, whatsapp or IRC etc. Whoever has this URL can visit it and view the contents of the text file in a web browser of their choice. + +Here is the contents of mytext.txt file in web browser: + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-2.png) + +You can also use **‘tee’** command to display what is being pasted, instead of uploading them blindly. + +To do so, use **-t** option like below. + +``` +$ wgetpaste -t mytext.txt +``` + +![][3] + +**2. Upload text snippets to different services +** + +By default, Wgetpaste will upload the text snippets to **poundpython** () service. + +To view the list of supported services, run: + +``` +$ wgetpaste -S +``` + +Sample output: + +``` +Services supported: (case sensitive): +Name: | Url: +=============|================= +bpaste | https://bpaste.net/ +codepad | http://codepad.org/ +dpaste | http://dpaste.com/ +gists | https://api.github.com/gists +*poundpython | https://paste.pound-python.org/ +``` + +Here, ***** indicates the default service. + +As you can see, Wgetpaste currently supports five text sharing services. I didn’t try all of them, but I believe all services will work. + +To upload the contents to other services, for example **bpaste.net** , use **-s** option like below. + +``` +$ wgetpaste -s bpaste mytext.txt +Your paste can be seen here: https://bpaste.net/show/5199e127e733 +``` + +**3\. Read input from stdin** + +Wgetpaste can also read the input from stdin. + +``` +$ uname -a | wgetpaste +``` + +This command will upload the output of ‘uname -a’ command. + +**4. Upload the COMMAND and the output of COMMAND together +** + +Sometimes, you may need to paste a COMMAND and its output. To do so, specify the contents of the command within quotes like below. + +``` +$ wgetpaste -c 'ls -l' +``` + +This will upload the command ‘ls -l’ along with its output to the pastebin service. + +This can be useful when you wanted to let others to clearly know what was the exact command you just ran and its output. + +![][4] + +As you can see in the output, I ran ‘ls -l’ command. + +**5. Upload system log files, config files +** + +Like I already said, we can upload any sort of text files, not just an ordinary text file, in your system such as log files, a specific command’s output etc. Say for example, you just updated your Arch Linux box and ended up with a broken system. You ask your colleague how to fix it and s/he wants to read the pacman.log file. Here is the command to upload the contents of the pacman.log file: + +``` +$ wgetpaste /var/log/pacman.log +``` + +Share the pastebin URL with your Colleague, so s/he will review the pacman.log and may help you to fix the problem by reviewing the log file. + +Usually, the contents of log files might be too long and you don’t want to share them all. In such cases, just use **cat** command to read the output and use **tail** command with the **-n** switch to define the number of lines to share and finally pipe the output to Wgetpaste as shown below. 
+ +``` +$ cat /var/log/pacman.log | tail -n 50 | wgetpaste +``` + +The above command will upload only the **last 50 lines** of pacman.log file. + +**6\. Convert input url to tinyurl** + +By default, Wgetpaste will display the full pastebin URL in the output. If you want to convert the input URL to a tinyurl, just use **-u** option. + +``` +$ wgetpaste -u mytext.txt +Your paste can be seen here: http://tinyurl.com/y85d8gtz +``` + +**7. Set language +** + +By default, Wgetpaste will upload text snippets in **plain text**. + +To list languages supported by the specified service, use **-L** option. + +``` +$ wgetpaste -L +``` + +This command will list all languages supported by default service i.e **poundpython** (). + +We can change this using **-l** option. + +``` +$ wgetpaste -l Bash mytext.txt +``` + +**8\. Disable syntax highlighting or html in the output** + +As I mentioned above, the text snippets will be displayed in a specific language format (plaintext, Bash etc.). + +You can, however, change this behaviour to display the raw text snippets using **-r** option. + +``` +$ wgetpaste -r mytext.txt +Your raw paste can be seen here: https://paste.pound-python.org/raw/CUJhQ3jEmr2UvfmD2xCL/ +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-5.png) + +As you can see in the above output, there is no syntax highlighting, no html formatting. Just a raw output. + +**9\. Change Wgetpaste defaults** + +All Defaults values (DEFAULT_{NICK,LANGUAGE,EXPIRATION}[_${SERVICE}] and DEFAULT_SERVICE) can be changed globally in **/etc/wgetpaste.conf** or per user in **~/.wgetpaste.conf** files. These files, however, are not available by default in my system. I guess we need to manually create them. The developer has given the sample contents for both files [**here**][5] and [**here**][6]. Just create these files manually with given sample contents and modify the parameters accordingly to change Wgetpaste defaults. + +**10\. Getting help** + +To display the help section, run: + +``` +$ wgetpaste -h +``` + +And, that’s all for now. Hope this was useful. We will publish more useful content in the days to come. Stay tuned! + +On behalf of **OSTechNix** , I wish you all a very **Happy New Year 2019**. I am grateful to all our readers, contributors, and mentors for supporting us from the beginning of our journey. We couldn’t come this far without your support and guidance. Thank you everyone! Have a great year ahead!! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: http://wgetpaste.zlin.dk/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-3.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-4.png +[5]: http://wgetpaste.zlin.dk/zlin.conf +[6]: http://wgetpaste.zlin.dk/wgetpaste.example diff --git a/sources/tech/20181231 Troubleshooting hardware problems in Linux.md b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md new file mode 100644 index 0000000000..dcc89034db --- /dev/null +++ b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Troubleshooting hardware problems in Linux) +[#]: via: (https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) + +Troubleshooting hardware problems in Linux +====== +Learn what's causing your Linux hardware to malfunction so you can get it back up and running quickly. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0) + +[Linux servers][1] run mission-critical business applications in many different types of infrastructures including physical machines, virtualization, private cloud, public cloud, and hybrid cloud. It's important for Linux sysadmins to understand how to manage Linux hardware infrastructure—including software-defined functionalities related to [networking][2], storage, Linux containers, and multiple tools on Linux servers. + +It can take some time to troubleshoot and solve hardware-related issues on Linux. Even highly experienced sysadmins sometimes spend hours working to solve mysterious hardware and software discrepancies. + +The following tips should make it quicker and easier to troubleshoot hardware in Linux. Many different things can cause problems with Linux hardware; before you start trying to diagnose them, it's smart to learn about the most common issues and where you're most likely to find them. + +### Quick-diagnosing devices, modules, and drivers + +The first step in troubleshooting usually is to display a list of the hardware installed on your Linux server. You can obtain detailed information on the hardware using **ls** commands such as **[lspci][3]** , **[lsblk][4]** , **[lscpu][5]** , and **[lsscsi][6]**. For example, here is output of the **lsblk** command: + +``` +# lsblk +NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +xvda    202:0    0  50G  0 disk +├─xvda1 202:1    0   1M  0 part +└─xvda2 202:2    0  50G  0 part / +xvdb    202:16   0  20G  0 disk +└─xvdb1 202:17   0  20G  0 part +``` + +If the **ls** commands don't reveal any errors, use init processes (e.g., **systemd** ) to see how the Linux server is working. **systemd** is the most popular init process for bootstrapping user spaces and controlling multiple system processes. 
For example, here is output of the **systemctl status** command: + +``` +# systemctl status +● bastion.f347.internal +    State: running +     Jobs: 0 queued +   Failed: 0 units +    Since: Wed 2018-11-28 01:29:05 UTC; 2 days ago +   CGroup: / +           ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21 +           ├─kubepods.slice +           │ ├─kubepods-pod3881728a_f2af_11e8_af77_06af52f87498.slice +           │ │ ├─docker-88b27385f4bae77bba834fbd60a61d19026bae13d18eb147783ae27819c34967.scope +           │ │ │ └─23860 /opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-c +           │ │ └─docker-a4433f0d523c7e5bc772ee4db1861e4fa56c4e63a2d48f6bc831458c2ce9fd2d.scope +           │ │   └─23639 /usr/bin/pod +.... +``` + +### Digging into multiple loggings + +**Dmesg** allows you to figure out errors and warnings in the kernel's latest messages. For example, here is output of the **dmesg | more** command: + +``` +# dmesg | more +.... +[ 1539.027419] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready +[ 1539.042726] IPv6: ADDRCONF(NETDEV_UP): veth61f37018: link is not ready +[ 1539.048706] IPv6: ADDRCONF(NETDEV_CHANGE): veth61f37018: link becomes ready +[ 1539.055034] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready +[ 1539.098550] device veth61f37018 entered promiscuous mode +[ 1541.450207] device veth61f37018 left promiscuous mode +[ 1542.493266] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue) +[ 9965.292788] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue) +[ 9965.449401] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready +[ 9965.462738] IPv6: ADDRCONF(NETDEV_UP): vetheacc333c: link is not ready +[ 9965.468942] IPv6: ADDRCONF(NETDEV_CHANGE): vetheacc333c: link becomes ready +.... +``` + +You can also look at all Linux system logs in the **/var/log/messages** file, which is where you'll find errors related to specific issues. It's worthwhile to monitor the messages via the **tail** command in real time when you make modifications to your hardware, such as mounting an extra disk or adding an Ethernet network interface. For example, here is output of the **tail -f /var/log/messages** command: + +``` +# tail -f /var/log/messages +Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +Dec  1 13:21:03 bastion dnsmasq[30201]: setting upstream servers from DBus +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53 +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +Dec  1 13:21:33 bastion dnsmasq[30201]: setting upstream servers from DBus +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53 +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +``` + +### Analyzing networking functions + +You may have hundreds of thousands of cloud-native applications to serve business services in a complex networking environment; these may include virtualization, multiple cloud, and hybrid cloud. 
This means you should analyze whether networking connectivity is working correctly as part of your troubleshooting. Useful commands to figure out networking functions in the Linux server include **ip addr** , **traceroute** , **nslookup** , **dig** , and **ping** , among others. For example, here is output of the **ip addr show** command: + +``` +# ip addr show +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 +    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 +    inet 127.0.0.1/8 scope host lo +       valid_lft forever preferred_lft forever +    inet6 ::1/128 scope host +       valid_lft forever preferred_lft forever +2: eth0: mtu 9001 qdisc mq state UP group default qlen 1000 +    link/ether 06:af:52:f8:74:98 brd ff:ff:ff:ff:ff:ff +    inet 192.199.0.169/24 brd 192.199.0.255 scope global noprefixroute dynamic eth0 +       valid_lft 3096sec preferred_lft 3096sec +    inet6 fe80::4af:52ff:fef8:7498/64 scope link +       valid_lft forever preferred_lft forever +3: docker0: mtu 1500 qdisc noqueue state DOWN group default +    link/ether 02:42:67:fb:1a:a2 brd ff:ff:ff:ff:ff:ff +    inet 172.17.0.1/16 scope global docker0 +       valid_lft forever preferred_lft forever +    inet6 fe80::42:67ff:fefb:1aa2/64 scope link +       valid_lft forever preferred_lft forever +.... +``` + +### In conclusion + +Troubleshooting Linux hardware requires considerable knowledge, including how to use powerful command-line tools and figure out system loggings. You should also know how to diagnose the kernel space, which is where you can find the root cause of many hardware problems. Keep in mind that hardware issues in Linux may come from many different sources, including devices, modules, drivers, BIOS, networking, and even plain old hardware malfunctions. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux + +作者:[Daniel Oh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/5/what-linux-server +[2]: https://opensource.com/article/18/11/intro-software-defined-networking +[3]: https://linux.die.net/man/8/lspci +[4]: https://linux.die.net/man/8/lsblk +[5]: https://linux.die.net/man/1/lscpu +[6]: https://linux.die.net/man/8/lsscsi diff --git a/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md new file mode 100644 index 0000000000..71555454f5 --- /dev/null +++ b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md @@ -0,0 +1,265 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Yarn on Ubuntu and Other Linux Distributions) +[#]: via: (https://itsfoss.com/install-yarn-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Using Yarn on Ubuntu and Other Linux Distributions +====== + +**This quick tutorial shows you the official way of installing Yarn package manager on Ubuntu and Debian Linux. You’ll also learn some basic Yarn commands and the steps to remove Yarn completely.** + +[Yarn][1] is an open source JavaScript package manager developed by Facebook. 
It is an alternative or should I say improvement to the popular npm package manager. [Facebook developers’ team][2] created Yarn to overcome the shortcomings of [npm][3]. Facebook claims that Yarn is faster, reliable and more secure than npm. + +Like npm, Yarn provides you a way to automate the process of installing, updating, configuring, and removing packages retrieved from a global registry. + +The advantage of Yarn is that it is faster as it caches every package it downloads so it doesn’t need to download it again. It also parallelizes operations to maximize resource utilization. Yarn also uses [checksums to verify the integrity][4] of every installed package before its code is executed. Yarn also guarantees that an install that worked on one system will work exactly the same way on any other system. + +If you are [using nodejs on Ubuntu][5], probably you already have npm installed on your system. In that case, you can use npm to install Yarn globally in the following manner: + +``` +sudo npm install yarn -g +``` + +However, I would recommend using the official way to install Yarn on Ubuntu/Debian. + +### Installing Yarn on Ubuntu and Debian [The Official Way] + +![Yarn JS][6] + +The instructions mentioned here should be applicable to all versions of Ubuntu such as Ubuntu 18.04, 16.04 etc. The same set of instructions are also valid for Debian and other Debian based distributions. + +Since the tutorial uses Curl to add the GPG key of Yarn project, it would be a good idea to verify whether you have Curl installed already or not. + +``` +sudo apt install curl +``` + +The above command will install Curl if it wasn’t installed already. Now that you have curl, you can use it to add the GPG key of Yarn project in the following fashion: + +``` +curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add - +``` + +After that, add the repository to your sources list so that you can easily upgrade the Yarn package in future with the rest of the system updates: + +``` +sudo sh -c 'echo "deb https://dl.yarnpkg.com/debian/ stable main" >> /etc/apt/sources.list.d/yarn.list' +``` + +You are set to go now. [Update Ubuntu][7] or Debian system to refresh the list of available packages and then install yarn: + +``` +sudo apt update +sudo apt install yarn +``` + +This will install Yarn along with nodejs. Once the process completes, verify that Yarn has been installed successfully. You can do that by checking the Yarn version. + +``` +yarn --version +``` + +For me, it showed an output like this: + +``` +yarn --version +1.12.3 +``` + +This means that I have Yarn version 1.12.3 installed on my system. + +### Using Yarn + +I presume that you have some basic understandings of JavaScript programming and how dependencies work. I am not going to go in details here. I’ll show you some of the basic Yarn commands that will help you getting started with it. + +#### Creating a new project with Yarn + +Like npm, Yarn also works with a package.json file. This is where you add your dependencies. All the packages of the dependencies are cached in the node_modules directory in the root directory of your project. + +In the root directory of your project, run the following command to generate a fresh package.json file: + +It will ask you a number of questions. You can skip the questions r go with the defaults by pressing enter. 
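+
+If you'd rather not answer the questions at all, yarn init also takes a --yes (or -y) flag that accepts the defaults in one go, which is handy in scripts. A quick sketch:
+
+```
+yarn init --yes
+```
+
+Run without the flag, the session is interactive and looks like this: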
+ +``` +yarn init +yarn init v1.12.3 +question name (test_yarn): test_yarn_proect +question version (1.0.0): 0.1 +question description: Test Yarn +question entry point (index.js): +question repository url: +question author: abhishek +question license (MIT): +question private: +success Saved package.json +Done in 82.42s. +``` + +With this, you get a package.json file of this sort: + +``` +{ + "name": "test_yarn_proect", + "version": "0.1", + "description": "Test Yarn", + "main": "index.js", + "author": "abhishek", + "license": "MIT" +} +``` + +Now that you have the package.json, you can either manually edit it to add or remove package dependencies or use Yarn commands (preferred). + +#### Adding dependencies with Yarn + +You can add a dependency on a certain package in the following fashion: + +``` +yarn add +``` + +For example, if you want to use [Lodash][8] in your project, you can add it using Yarn like this: + +``` +yarn add lodash +yarn add v1.12.3 +info No lockfile found. +[1/4] Resolving packages… +[2/4] Fetching packages… +[3/4] Linking dependencies… +[4/4] Building fresh packages… +success Saved lockfile. +success Saved 1 new dependency. +info Direct dependencies +└─ [email protected] +info All dependencies +└─ [email protected] +Done in 2.67s. +``` + +And you can see that this dependency has been added automatically in the package.json file: + +``` +{ + "name": "test_yarn_proect", + "version": "0.1", + "description": "Test Yarn", + "main": "index.js", + "author": "abhishek", + "license": "MIT", + "dependencies": { + "lodash": "^4.17.11" + } +} +``` + +By default, Yarn will add the latest version of a package in the dependency. If you want to use a specific version, you may specify it while adding. + +As always, you can also update the package.json file manually. + +#### Upgrading dependencies with Yarn + +You can upgrade a particular dependency to its latest version with the following command: + +``` +yarn upgrade +``` + +It will see if the package in question has a newer version and will update it accordingly. + +You can also change the version of an already added dependency in the following manner: + +You can also upgrade all the dependencies of your project to their latest version with one single command: + +``` +yarn upgrade +``` + +It will check the versions of all the dependencies and will update them if there are any newer versions. + +#### Removing dependencies with Yarn + +You can remove a package from the dependencies of your project in this way: + +``` +yarn remove +``` + +#### Install all project dependencies + +If you made any changes to the project.json file, you should run either + +``` +yarn +``` + +or + +``` +yarn install +``` + +to install all the dependencies at once. + +### How to remove Yarn from Ubuntu or Debian + +I’ll complete this tutorial by mentioning the steps to remove Yarn from your system if you used the above steps to install it. If you ever realized that you don’t need Yarn anymore, you will be able to remove it. + +Use the following command to remove Yarn and its dependencies. + +``` +sudo apt purge yarn +``` + +You should also remove the Yarn repository from the repository list: + +``` +sudo rm /etc/apt/sources.list.d/yarn.list +``` + +The optional next step is to remove the GPG key you had added to the trusted keys. But for that, you need to know the key. 
You can get that using the apt-key command: + +Warning: apt-key output should not be parsed (stdout is not a terminal) pub rsa4096 2016-10-05 [SC] 72EC F46A 56B4 AD39 C907 BBB7 1646 B01B 86E5 0310 uid [ unknown] Yarn Packaging + +Warning: apt-key output should not be parsed (stdout is not a terminal) pub rsa4096 2016-10-05 [SC] 72EC F46A 56B4 AD39 C907 BBB7 1646 B01B 86E5 0310 uid [ unknown] Yarn Packaging yarn@dan.cx sub rsa4096 2016-10-05 [E] sub rsa4096 2019-01-02 [S] [expires: 2020-02-02] + +The key here is the last 8 characters of the GPG key’s fingerprint in the line starting with pub. + +So, in my case, the key is 86E50310 and I’ll remove it using this command: + +``` +sudo apt-key del 86E50310 +``` + +You’ll see an OK in the output and the GPG key of Yarn package will be removed from the list of GPG keys your system trusts. + +I hope this tutorial helped you to install Yarn on Ubuntu, Debian, Linux Mint, elementary OS etc. I provided some basic Yarn commands to get you started along with complete steps to remove Yarn from your system. + +I hope you liked this tutorial and if you have any questions or suggestions, please feel free to leave a comment below. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-yarn-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://yarnpkg.com/lang/en/ +[2]: https://code.fb.com/ +[3]: https://www.npmjs.com/ +[4]: https://itsfoss.com/checksum-tools-guide-linux/ +[5]: https://itsfoss.com/install-nodejs-ubuntu/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/yarn-js-ubuntu-debian.jpeg?resize=800%2C450&ssl=1 +[7]: https://itsfoss.com/update-ubuntu/ +[8]: https://lodash.com/ diff --git a/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md b/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md new file mode 100644 index 0000000000..fa1bd9c2c2 --- /dev/null +++ b/sources/tech/20190104 Midori- A Lightweight Open Source Web Browser.md @@ -0,0 +1,110 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Midori: A Lightweight Open Source Web Browser) +[#]: via: (https://itsfoss.com/midori-browser) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Midori: A Lightweight Open Source Web Browser +====== + +**Here’s a quick review of the lightweight, fast, open source web browser Midori, which has returned from the dead.** + +If you are looking for a lightweight [alternative web browser][1], try Midori. + +[Midori][2] is an open source web browser that focuses more on being lightweight than on providing a ton of features. + +If you have never heard of Midori, you might think that it is a new application but Midori was first released in 2007. + +Because it focused on speed, Midori soon gathered a niche following and became the default browser in lightweight Linux distributions like Bodhi Linux, SilTaz etc. + +Other distributions like [elementary OS][3] also used Midori as its default browser. But the development of Midori stalled around 2016 and its fans started wondering if Midori was dead already. elementary OS dropped it from its latest release, I believe, for this reason. 
+ +The good news is that Midori is not dead. After almost two years of inactivity, the development resumed in the last quarter of 2018. A few extensions including an ad-blocker were added in the later releases. + +### Features of Midori web browser + +![Midori web browser][4] + +Here are some of the main features of the Midori browser + + * Written in Vala with GTK+3 and WebKit rendering engine. + * Tabs, windows and session management + * Speed dial + * Saves tab for the next session by default + * Uses DuckDuckGo as a default search engine. It can be changed to Google or Yahoo. + * Bookmark management + * Customizable and extensible interface + * Extension modules can be written in C and Vala + * Supports HTML5 + * An extremely limited set of extensions include an ad-blocker, colorful tabs etc. No third-party extensions. + * Form history + * Private browsing + * Available for Linux and Windows + + + +Trivia: Midori is a Japanese word that means green. The Midori developer is not Japanese if you were guessing something along that line. + +### Experiencing Midori + +![Midori web browser in Ubuntu 18.04][5] + +I have been using Midori for the past few days. The experience is mostly fine. It supports HTML5 and renders the websites quickly. The ad-blocker is okay. The browsing experience is more or less smooth as you would expect in any standard web browser. + +The lack of extensions has always been a weak point of Midori so I am not going to talk about that. + +What I did notice is that it doesn’t support international languages. I couldn’t find a way to add new language support. It could not render the Hindi fonts at all and I am guessing it’s the same with many other non-[Romance languages][6]. + +I also had my fair share of troubles with YouTube videos. Some videos would throw playback error while others would run just fine. + +Midori didn’t eat my RAM like Chrome so that’s a big plus here. + +If you want to try out Midori, let’s see how can you get your hands on it. + +### Install Midori on Linux + +Midori is no longer available in the Ubuntu 18.04 repository. However, the newer versions of Midori can be easily installed using the [Snap packages][7]. + +If you are using Ubuntu, you can find Midori (Snap version) in the Software Center and install it from there. + +![Midori browser is available in Ubuntu Software Center][8]Midori browser is available in Ubuntu Software Center + +For other Linux distributions, make sure that you have [Snap support enabled][9] and then you can install Midori using the command below: + +``` +sudo snap install midori +``` + +You always have the option to compile from the source code. You can download the source code of Midori from its website. + +If you like Midori and want to help this open source project, please donate to them or [buy Midori merchandise from their shop][10]. + +Do you use Midori or have you ever tried it? How’s your experience with it? What other web browser do you prefer to use? Please share your views in the comment section below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/midori-browser + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/open-source-browsers-linux/ +[2]: https://www.midori-browser.org/ +[3]: https://itsfoss.com/elementary-os-juno-features/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Midori-web-browser.jpeg?resize=800%2C450&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/midori-browser-linux.jpeg?resize=800%2C491&ssl=1 +[6]: https://en.wikipedia.org/wiki/Romance_languages +[7]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/midori-ubuntu-software-center.jpeg?ssl=1 +[9]: https://itsfoss.com/install-snap-linux/ +[10]: https://www.midori-browser.org/shop +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Midori-web-browser.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md new file mode 100644 index 0000000000..fa92d3450a --- /dev/null +++ b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md @@ -0,0 +1,223 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Search, Study And Practice Linux Commands On The Fly!) +[#]: via: (https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Search, Study And Practice Linux Commands On The Fly! +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-720x340.png) + +The title may look like sketchy and click bait. Allow me to explain what I am about to explain in this tutorial. Let us say you want to download an archive file, extract it and move the file from one location to another from command line. As per the above scenario, we may need at least three Linux commands, one for downloading the file, one for extracting the downloaded file and one for moving the file. If you’re intermediate or advanced Linux user, you could do this easily with an one-liner command or a script in few seconds/minutes. But, if you are a noob who don’t know much about Linux commands, you might need little help. + +Of course, a quick google search may yield many results. Or, you could use [**man pages**][1]. But some man pages are really long, comprehensive and lack in useful example. You might need to scroll down for quite a long time when you’re looking for a particular information on the specific flags/options. Thankfully, there are some [**good alternatives to man pages**][2], which are focused on mostly practical commands. One such good alternative is **TLDR pages**. Using TLDR pages, we can quickly and easily learn a Linux command with practical examples. To access the TLDR pages, we require a TLDR client. There are many clients available. Today, we are going to learn about one such client named **“Tldr++”**. + +Tldr++ is a fast and interactive tldr client written with **Go** programming language. Unlike the other Tldr clients, it is fully interactive. 
That means, you can pick a command, read all examples , and immediately run any command without having to retype or copy/paste each command in the Terminal. Still don’t get it? No problem. Read on to learn and practice Linux commands on the fly. + +### Install Tldr++ + +Installing Tldr++ is very simple. Download tldr++ latest version from the [**releases page**][3]. Extract it and move the tldr++ binary to your $PATH. + +``` +$ wget https://github.com/isacikgoz/tldr/releases/download/v0.5.0/tldr_0.5.0_linux_amd64.tar.gz + +$ tar xzf tldr_0.5.0_linux_amd64.tar.gz + +$ sudo mv tldr /usr/local/bin + +$ sudo chmod +x /usr/local/bin/tldr +``` + +Now, run ‘tldr’ binary to populate the tldr pages in your local system. + +``` +$ tldr +``` + +Sample output: + +``` +Enumerating objects: 6, done. +Counting objects: 100% (6/6), done. +Compressing objects: 100% (6/6), done. +Total 18157 (delta 0), reused 3 (delta 0), pack-reused 18151 +Successfully cloned into: /home/sk/.local/share/tldr +``` + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-2.png) + +Tldr++ is available in AUR. If you’re on Arch Linux, you can install it using any AUR helper, for example [**YaY**][4]. Make sure you have removed any existing tldr client from your system and run the following command to install tldr++. + +``` +$ yay -S tldr++ +``` + +Alternatively, you can build from source as described below. Since Tldr++ is written using Go language, make sure you have installed it on your Linux box. If it isn’t installed yet, refer the following guide. + ++ [How To Install Go Language In Linux](https://www.ostechnix.com/install-go-language-linux/) + +After installing Go, run the following command to install Tldr++. + +``` +$ go get -u github.com/isacikgoz/tldr +``` + +This command will download the contents of tldr repository in a folder named **‘go’** in the current working directory. + +Now run the ‘tldr’ binary to populate all tldr pages in your local system using command: + +``` +$ go/bin/tldr +``` + +Sample output: + +![][6] + +Finally, copy the tldr binary to your PATH. + +``` +$ sudo mv tldr /usr/local/bin +``` + +It is time to see some examples. + +### Tldr++ Usage + +Type ‘tldr’ command without any options to display all command examples in alphabetical order. + +![][7] + +Use the **UP/DOWN arrows** to navigate through the commands, type any letters to search or type a command name to view the examples of that respective command. Press **?** for more and **Ctrl+c** to return/exit. + +To display the example commands of a specific command, for example **apt** , simply do: + +``` +$ tldr apt +``` + +![][8] + +Choose any example command from the list and hit ENTER. You will see a *** symbol** before the selected command. For example, I choose the first command i.e ‘sudo apt update’. Now, it will ask you whether to continue or not. If the command is correct, just type ‘y’ to continue and type your sudo password to run the selected command. + +![][9] + +See? You don’t need to copy/paste or type the actual command in the Terminal. Just choose it from the list and run on the fly! + +There are hundreds of Linux command examples are available in Tldr pages. You can choose one or two commands per day and learn them thoroughly. And keep this practice everyday to learn as much as you can. + +### Learn And Practice Linux Commands On The Fly Using Tldr++ + +Now think of the scenario that I mentioned in the first paragraph. You want to download a file, extract it and move it to different location and make it executable. 
Let us see how to do it interactively using Tldr++ client. + +**Step 1 – Download a file from Internet** + +To download a file from command line, we mostly use **‘curl’** or **‘wget’** commands. Let me use ‘wget’ to download the file. To open tldr page of wget command, just run: + +``` +$ tldr wget +``` + +Here is the examples of wget command. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/wget-tldr.png) + +You can use **UP/DOWN** arrows to go through the list of commands. Once you choose the command of your choice, press ENTER. Here I chose the first command. + +Now, enter the path of the file to download. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-3.png) + +You will then be asked to confirm if it is the correct command or not. If the command is correct, simply type ‘yes’ or ‘y’ to start downloading the file. + +![][10] + +We have downloaded the file. Let us go ahead and extract this file. + +**Step 2 – Extract downloaded archive** + +We downloaded the **tar.gz** file. So I am going to open the ‘tar’ tldr page. + +``` +$ tldr tar +``` + +You will see the list of example commands. Go through the examples and find which command is suitable to extract tar.gz(gzipped archive) file and hit ENTER key. In our case, it is the third command. + +![][11] + +Now, you will be prompted to enter the path of the tar.gz file. Just type the path and hit ENTER key. Tldr++ supports smart file suggestions. That means it will suggest the file name automatically as you type. Just press TAB key for auto-completion. + +![][12] + +If you downloaded the file to some other location, just type the full path, for example **/home/sk/Downloads/tldr_0.5.0_linux_amd64.tar.gz.** + +Once you enter the path of the file to extract, press ENTER and then, type ‘y’ to confirm. + +![][13] + +**Step 3 – Move file from one location to another** + +We extracted the archive. Now we need to move the file to another location. To move the files from one location to another, we use ‘mv’ command. So, let me open the tldr page for mv command. + +``` +$ tldr mv +``` + +Choose the correct command to move the files from one location to another. In our case, the first command will work, so let me choose it. + +![][14] + +Type the path of the file that you want to move and enter the destination path and hit ENTER key. + +![][15] + +**Note:** Type **y!** or **yes!** to run command with **sudo** privileges. + +As you see in the above screenshot, I moved the file named **‘tldr’** to **‘/usr/local/bin/’** location. + +For more details, refer the project’s GitHub page given at the end. + + +### Conclusion + +Don’t get me wrong. **Man pages are great!** There is no doubt about it. But, as I already said, many man pages are comprehensive and doesn’t have useful examples. There is no way I could memorize all lengthy commands with tricky flags. Some times I spent much time on man pages and remained clueless. The Tldr pages helped me to find what I need within few minutes. Also, we use some commands once in a while and then we forget them completely. Tldr pages on the other hand actually helps when it comes to using commands we rarely use. Tldr++ client makes this task much easier with smart user interaction. Give it a go and let us know what you think about this tool in the comment section below. + +And, that’s all. More good stuffs to come. Stay tuned! + +Good luck! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/learn-use-man-pages-efficiently/ +[2]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[3]: https://github.com/isacikgoz/tldr/releases +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[6]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-1.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-11.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-12.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-13.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-4.png +[11]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-6.png +[12]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-7.png +[13]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-8.png +[14]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-9.png +[15]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-10.png diff --git a/sources/tech/20190104 Take to the virtual skies with FlightGear.md b/sources/tech/20190104 Take to the virtual skies with FlightGear.md new file mode 100644 index 0000000000..c3793e4128 --- /dev/null +++ b/sources/tech/20190104 Take to the virtual skies with FlightGear.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Take to the virtual skies with FlightGear) +[#]: via: (https://opensource.com/article/19/1/flightgear) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +Take to the virtual skies with FlightGear +====== +Dreaming of piloting a plane? Try open source flight simulator FlightGear. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS) + +If you've ever dreamed of piloting a plane, you'll love [FlightGear][1]. It's a full-featured, [open source][2] flight simulator that runs on Linux, MacOS, and Windows. + +The FlightGear project began in 1996 due to dissatisfaction with commercial flight simulation programs, which were not scalable. Its goal was to create a sophisticated, robust, extensible, and open flight simulator framework for use in academia and pilot training or by anyone who wants to play with a flight simulation scenario. + +### Getting started + +FlightGear's hardware requirements are fairly modest, including an accelerated 3D video card that supports OpenGL for smooth framerates. It runs well on my Linux laptop with an i5 processor and only 4GB of RAM. Its documentation includes an [online manual][3]; a [wiki][4] with portals for [users][5] and [developers][6]; and extensive tutorials (such as one for its default aircraft, the [Cessna 172p][7]) to teach you how to operate it. + +It's easy to install on both [Fedora][8] and [Ubuntu][9] Linux. Fedora users can consult the [Fedora installation page][10] to get FlightGear running. 
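+
+On Fedora, the package is available in the standard repositories under the name listed on that page (FlightGear), so a plain dnf install should be enough; double-check the exact package name for your release:
+
+```
+$ sudo dnf install FlightGear
+```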
+ +On Ubuntu 18.04, I had to install a repository: + +``` +$ sudo add-apt-repository ppa:saiarcot895/flightgear +$ sudo apt-get update +$ sudo apt-get install flightgear +``` + +Once the installation finished, I launched it from the GUI, but you can also launch the application from a terminal by entering: + +``` +$ fgfs +``` + +### Configuring FlightGear + +The menu on the left side of the application window provides configuration options. + +![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png) + +**Summary** returns you to the application's home screen. + +**Aircraft** shows the aircraft you have installed and offers the option to install up to 539 other aircraft available in FlightGear's default "hangar." I installed a Cessna 150L, a Piper J-3 Cub, and a Bombardier CRJ-700. Some of the aircraft (including the CRJ-700) have tutorials to teach you how to fly a commercial jet; I found the tutorials informative and accurate. + +![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png) + +To select an aircraft to pilot, highlight it and click on **Fly!** at the bottom of the menu. I chose the default Cessna 172p and found the cockpit depiction extremely accurate. + +![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png) + +The default airport is Honolulu, but you can change it in the **Location** menu by providing your favorite airport's [ICAO airport code][11] identifier. I found some small, local, non-towered airports like Olean and Dunkirk, New York, as well as larger airports including Buffalo, O'Hare, and Raleigh—and could even choose a specific runway. + +Under **Environment** , you can adjust the time of day, the season, and the weather. The simulation includes advance weather modeling and the ability to download current weather from [NOAA][12]. + +**Settings** provides an option to start the simulation in Paused mode by default. Also in Settings, you can select multi-player mode, which allows you to "fly" with other players on FlightGear supporters' global network of servers that allow for multiple users. You must have a moderately fast internet connection to support this functionality. + +The **Add-ons** menu allows you to download aircraft and additional scenery. + +### Take flight + +To "fly" my Cessna, I used a Logitech joystick that worked well. You can calibrate your joystick using an option in the **File** menu at the top. + +Overall, I found the simulation very accurate and think the graphics are great. Try FlightGear yourself — I think you will find it a very fun and complete simulation package. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/flightgear + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: http://home.flightgear.org/ +[2]: http://wiki.flightgear.org/GNU_General_Public_License +[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html +[4]: http://wiki.flightgear.org/FlightGear_Wiki +[5]: http://wiki.flightgear.org/Portal:User +[6]: http://wiki.flightgear.org/Portal:Developer +[7]: http://wiki.flightgear.org/Cessna_172P +[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear +[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear +[10]: https://apps.fedoraproject.org/packages/FlightGear/ +[11]: https://en.wikipedia.org/wiki/ICAO_airport_code +[12]: https://www.noaa.gov/ diff --git a/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md new file mode 100644 index 0000000000..6619cfe65a --- /dev/null +++ b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md @@ -0,0 +1,254 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems) +[#]: via: (https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/) +[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/) + +Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems +====== + +If you are forget to remember your root password for RHEL 7 and CentOS 7 systems and want to reset the forgotten root password? + +If so, don’t worry we are here to help you out on this. + +Navigate to the following link if you want to **[reset forgotten root password on RHEL 6/CentOS 6][1]**. + +This is generally happens when you use different password in vast environment or if you are not maintaining the proper inventory. + +Whatever it is. No issues, we will help you through this article. + +It can be done in many ways but we are going to show you the best three methods which we tried many times for our clients. + +In Linux servers there are three different users are available. These are, Normal User, System User and Super User. + +As everyone knows the Root user is known as super user in Linux and Administrator is in Windows. + +We can’t perform any major activity without root password so, make sure you should have the right root password when you perform any major tasks. + +If you don’t know or don’t have it, try to reset using one of the below method. + + * Reset Forgotten Root Password By Booting into Single User Mode using `rd.break` + * Reset Forgotten Root Password By Booting into Single User Mode using `init=/bin/bash` + * Reset Forgotten Root Password By Booting into Rescue Mode + + + +### Method-1: Reset Forgotten Root Password By Booting into Single User Mode + +Just follow the below procedure to reset the forgotten root password in RHEL 7/CentOS 7 systems. + +To do so, reboot your system and follow the instructions carefully. 
+ +**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting **`e`** key to modify the kernel arguments. +![][3] + +**`Step-2:`** In the GRUB options, find `linux16` word and add the `rd.break` word in the end of the file then press `Ctrl+x` or `F10` to boot into single user mode. +![][4] + +**`Step-3:`** At this point of time, your root filesystem will be mounted in Read only (RO) mode to /sysroot. Run the below command to confirm this. + +``` +# mount | grep root +``` + +![][5] + +**`Step-4:`** Based on the above output, i can say that i’m in single user mode and my root file system is mounted in read only mode. + +It won’t allow you to make any changes on your system until you mount the root filesystem with Read and write (RW) mode to /sysroot. To do so, use the following command. + +``` +# mount -o remount,rw /sysroot +``` + +![][6] + +**`Step-5:`** Currently your file systems are mounted as a temporary partition. Now, your command prompt shows **switch_root:/#**. + +Run the following command to get into a chroot jail so that /sysroot is used as the root of the file system. + +``` +# chroot /sysroot +``` + +![][7] + +**`Step-6:`** Now you can able to reset the root password with help of `passwd` command. + +``` +# echo "CentOS7$#123" | passwd --stdin root +``` + +![][8] + +**`Step-7:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. + +It allow us to fix the context of the **/etc/shadow** file. + +``` +# touch /.autorelabel +``` + +![][9] + +**`Step-8:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system. +![][10] + +**`Step-9:`** Now you can login to your system with your new password. +![][11] + +### Method-2: Reset Forgotten Root Password By Booting into Single User Mode + +Alternatively we can use the below procedure to reset the forgotten root password in RHEL 7/CentOS 7 systems. + +**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting **`e`** key to modify the kernel arguments. +![][3] + +**`Step-2:`** In the GRUB options, find `rhgb quiet` word and replace with the `init=/bin/bash` or `init=/bin/sh` word then press `Ctrl+x` or `F10` to boot into single user mode. + +Screenshot for **`init=/bin/bash`**. +![][12] + +Screenshot for **`init=/bin/sh`**. +![][13] + +**`Step-3:`** At this point of time, your root system will be mounted in Read only mode to /. Run the below command to confirm this. + +``` +# mount | grep root +``` + +![][14] + +**`Step-4:`** Based on the above ouput, i can say that i’m in single user mode and my root file system is mounted in read only (RO) mode. + +It won’t allow you to make any changes on your system until you mount the root file system with Read and write (RW) mode. To do so, use the following command. + +``` +# mount -o remount,rw / +``` + +![][15] + +**`Step-5:`** Now you can able to reset the root password with help of `passwd` command. + +``` +# echo "RHEL7$#123" | passwd --stdin root +``` + +![][16] + +**`Step-6:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. + +It allow us to fix the context of the **/etc/shadow** file. + +``` +# touch /.autorelabel +``` + +![][17] + +**`Step-7:`** Finally `Reboot` the system. + +``` +# exec /sbin/init 6 +``` + +![][18] + +**`Step-9:`** Now you can login to your system with your new password. 
+![][11] + +### Method-3: Reset Forgotten Root Password By Booting into Rescue Mode + +Alternatively, we can reset the forgotten Root password for RHEL 7 and CentOS 7 systems using Rescue mode. + +**`Step-1:`** Insert the bootable media through USB or DVD drive which is compatible for you and reboot your system. It will take to you to the below screen. + +Hit `Troubleshooting` to launch the `Rescue` mode. +![][19] + +**`Step-2:`** Choose `Rescue a CentOS system` and hit `Enter` button. +![][20] + +**`Step-3:`** Here choose `1` and the rescue environment will now attempt to find your Linux installation and mount it under the directory `/mnt/sysimage`. +![][21] + +**`Step-4:`** Simple hit `Enter` to get a shell. +![][22] + +**`Step-5:`** Run the following command to get into a chroot jail so that /mnt/sysimage is used as the root of the file system. + +``` +# chroot /mnt/sysimage +``` + +![][23] + +**`Step-6:`** Now you can able to reset the root password with help of **passwd** command. + +``` +# echo "RHEL7$#123" | passwd --stdin root +``` + +![][24] + +**`Step-7:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. +It allow us to fix the context of the /etc/shadow file. + +``` +# touch /.autorelabel +``` + +![][25] + +**`Step-8:`** Remove the bootable media then initiate the reboot. + +**`Step-9:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system. +![][26] + +**`Step-10:`** Now you can login to your system with your new password. +![][11] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-6-centos-6/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-2.png +[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-8.png +[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10.png +[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10a.png +[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-11.png +[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-12.png +[12]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1.png +[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1a.png +[14]: 
https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-4.png +[16]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-7.png +[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-1.png +[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-2.png +[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-4.png +[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-7.png +[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-8.png diff --git a/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md new file mode 100644 index 0000000000..c6c520e339 --- /dev/null +++ b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Setting up an email server, part 1: The Forwarder) +[#]: via: (https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/) +[#]: author: (Julian Andres Klode https://blog.jak-linux.org/) + +Setting up an email server, part 1: The Forwarder +====== + +This week, I’ve been working on rolling out mail services on my server. I started working on a mail server setup at the end of November, while the server was not yet in use, but only for about two days, and then let it rest. + +As my old shared hosting account expired on January 1, I had to move mail forwarding duties over to the new server. Yes forwarding - I do plan to move hosting the actual email too, but at the moment it’s “just” forwarding to gmail. + +### The Software + +As you might know from the web server story, my server runs on Ubuntu 18.04. I set up a mail server on this system using + + * [Postfix][1] for SMTP duties (warning, they oddly do not have an https page) + * [rspamd][2] for spam filtering, and signing DKIM / ARC + * [bind9][3] for DNS resolving + * [postsrsd][4] for SRS + + + +You might wonder why bind9 is in there. It turns out that DNS blacklists used by spam filters block the caching DNS servers you usually use, so you have to use your own recursive DNS server. Ubuntu offers you the choice between bind9 and dnsmasq in main, and it seems like bind9 is more appropriate here than dnsmasq. + +### Setting up postfix + +Most of the postfix configuration is fairly standard. 
So, let’s skip TLS configuration and outbound SMTP setups (this is email, and while they support TLS, it’s all optional, so let’s not bother that much here). + +The most important part is restrictions in `main.cf`. + +First of all, relay restrictions prevent us from relaying emails to weird domains: + +``` +# Relay Restrictions +smtpd_relay_restrictions = reject_non_fqdn_recipient reject_unknown_recipient_domain permit_mynetworks permit_sasl_authenticated defer_unauth_destination +``` + +We also only accept mails from hosts that know their own full qualified name: + +``` +# Helo restrictions (hosts not having a proper fqdn) +smtpd_helo_required = yes +smtpd_helo_restrictions = permit_mynetworks reject_invalid_helo_hostname reject_non_fqdn_helo_hostname reject_unknown_helo_hostname +``` + +We also don’t like clients (other servers) that send data too early, or have an unknown hostname: + +``` +smtpd_data_restrictions = reject_unauth_pipelining +smtpd_client_restrictions = permit_mynetworks reject_unknown_client_hostname +``` + +I also set up a custom apparmor profile that’s pretty lose, I plan to migrate to the one in the apparmor git eventually but it needs more testing and some cleanup. + +### Sender rewriting scheme + +For SRS using postsrsd, we define the `SRS_DOMAIN` in `/etc/default/postsrsd` and then configure postfix to talk to it: + +``` +# Handle SRS for forwarding +recipient_canonical_maps = tcp:localhost:10002 +recipient_canonical_classes= envelope_recipient,header_recipient + +sender_canonical_maps = tcp:localhost:10001 +sender_canonical_classes = envelope_sender +``` + +This has a minor issue that it also rewrites the `Return-Path` when it delivers emails locally, but as I am only forwarding, I’m worrying about that later. + +### rspamd basics + +rspamd is a great spam filtering system. It uses a small core written in C and a bunch of Lua plugins, such as: + + * IP score, which keeps track of how good a specific server was in the past + * Replies, which can check whether an email is a reply to another one + * DNS blacklisting + * DKIM and ARC validation and signing + * DMARC validation + * SPF validation + + + +It also has a nice web UI: + +![rspamd web ui status][5] + +rspamd web ui status + +![rspamd web ui investigating a spam message][6] + +rspamd web ui investigating a spam message + +Setting up rspamd is quite easy. You basically just drop a bunch of configuration overrides into `/etc/rspamd/local.d` and you’re done. Heck, it mostly works out of the box. There’s a fancy `rspamadm configwizard` too. + +What you do want for rspamd is a redis server. redis is needed in [many places][7], such as rate limiting, greylisting, dmarc, reply tracking, ip scoring, neural networks. + +I made a few changes to the defaults: + + * I enabled subject rewriting instead of adding headers, so spam mail subjects get `[SPAM]` prepended, in `local.d/actions.conf`: + +``` + reject = 15; +rewrite_subject = 6; +add_header = 6; +greylist = 4; +subject = "[SPAM] %s"; +``` + + * I set `autolearn = true;` in `local.d/classifier-bayes.conf` to make it learn that an email that has a score of at least 15 (those that are rejected) is spam, and emails with negative scores are ham. + + * I set `extended_spam_headers = true;` in `local.d/milter_headers.conf` to get a report from rspamd in the header seeing the score and how the score came to be. + + + + +### ARC setup + +[ARC][8] is the ‘Authenticated Received Chain’ and is currently a DMARC working group work item. 
It allows forwarders / mailing lists to authenticate their forwarding of the emails and the checks they have performed. + +rspamd is capable of validating and signing emails with ARC, but I’m not sure how much influence ARC has on gmail at the moment, for example. + +There are three parts to setting up ARC: + + 1. Generate a DKIM key pair (use `rspamadm dkim_keygen`) + 2. Setup rspamd to sign incoming emails using the private key + 3. Add a DKIM `TXT` record for the public key. `rspamadm` helpfully tells you how it looks like. + + + +For step two, what we need to do is configure `local.d/arc.conf`. You can basically use the example configuration from the [rspamd page][9], the key point for signing incoming email is to specifiy `sign_incoming = true;` and `use_domain_sign_inbound = "recipient";` (FWIW, none of these options are documented, they are fairly new, and nobody updated the documentation for them). + +My configuration looks like this at the moment: + +``` +# If false, messages with empty envelope from are not signed +allow_envfrom_empty = true; +# If true, envelope/header domain mismatch is ignored +allow_hdrfrom_mismatch = true; +# If true, multiple from headers are allowed (but only first is used) +allow_hdrfrom_multiple = false; +# If true, username does not need to contain matching domain +allow_username_mismatch = false; +# If false, messages from authenticated users are not selected for signing +auth_only = true; +# Default path to key, can include '$domain' and '$selector' variables +path = "${DBDIR}/arc/$domain.$selector.key"; +# Default selector to use +selector = "arc"; +# If false, messages from local networks are not selected for signing +sign_local = true; +# +sign_inbound = true; +# Symbol to add when message is signed +symbol_signed = "ARC_SIGNED"; +# Whether to fallback to global config +try_fallback = true; +# Domain to use for ARC signing: can be "header" or "envelope" +use_domain = "header"; +use_domain_sign_inbound = "recipient"; +# Whether to normalise domains to eSLD +use_esld = true; +# Whether to get keys from Redis +use_redis = false; +# Hash for ARC keys in Redis +key_prefix = "ARC_KEYS"; +``` + +This would also sign any outgoing email, but I’m not sure that’s necessary - my understanding is that we only care about ARC when forwarding/receiving incoming emails, not when sending them (at least that’s what gmail does). + +### Other Issues + +There are few other things to keep in mind when running your own mail server. I probably don’t know them all yet, but here we go: + + * You must have a fully qualified hostname resolving to a public IP address + + * Your public IP address must resolve back to the fully qualified host name + + * Again, you should run a recursive DNS resolver so your DNS blacklists work (thanks waldi for pointing that out) + + * Setup an SPF record. Mine looks like this: + +`jak-linux.org. 3600 IN TXT "v=spf1 +mx ~all"` + +this states that all my mail servers may send email, but others probably should not (a softfail). Not having an SPF record can punish you; for example, rspamd gives missing SPF and DKIM a score of 1. + + * All of that software is sandboxed using AppArmor. Makes you question its security a bit less! + + + + +### Source code, outlook + +As always, you can find the Ansible roles on [GitHub][10]. Feel free to point out bugs! 😉 + +In the next installment of this series, we will be looking at setting up Dovecot, and configuring DKIM. 
We probably also want to figure out how to run notmuch on the server, keep messages in matching maildirs, and have my laptop synchronize the maildir and notmuch state with the server. Ugh, sounds like a lot of work. + +-------------------------------------------------------------------------------- + +via: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/ + +作者:[Julian Andres Klode][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.jak-linux.org/ +[b]: https://github.com/lujun9972 +[1]: http://www.postfix.org/ +[2]: https://rspamd.com/ +[3]: https://www.isc.org/downloads/bind/ +[4]: https://github.com/roehling/postsrsd +[5]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-status.png +[6]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-spam.png +[7]: https://rspamd.com/doc/configuration/redis.html +[8]: http://arc-spec.org/ +[9]: https://rspamd.com/doc/modules/arc.html +[10]: https://github.com/julian-klode/ansible.jak-linux.org diff --git a/sources/tech/20190107 Aliases- To Protect and Serve.md b/sources/tech/20190107 Aliases- To Protect and Serve.md new file mode 100644 index 0000000000..783c59dc41 --- /dev/null +++ b/sources/tech/20190107 Aliases- To Protect and Serve.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Aliases: To Protect and Serve) +[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +Aliases: To Protect and Serve +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) + +Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy. + +### Aliases as Shortcuts + +One of the most beautiful things about Linux's shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. All right, maybe beauty is in the eye of the beholder, but let's agree that this feature published practical. + +The downside is that you often come up with recipes that are often hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year's cleaning. Your first step may be to look for stuff to get rid off in you home directory. One criteria you could apply is to look for stuff you don't use anymore. `ls` can help with that: + +``` +ls -lct +``` + +The instruction above shows the details of each file and directory (`-l`) and also shows when each item was last accessed (`-c`). It then orders the list from most recently accessed to least recently accessed (`-t`). + +Is this hard to remember? You probably don’t use the `-c` and `-t` options every day, so perhaps. In any case, defining an alias like + +``` +alias lt='ls -lct' +``` + +will make it easier. 
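
One practical note: an alias defined at the prompt only lasts for that shell session. If you want `lt` to be available every time you open a terminal, you can append it to your shell's startup file. This is a minimal sketch that assumes you are using Bash with the usual `~/.bashrc`:

```
# Make the alias permanent by adding it to ~/.bashrc
echo "alias lt='ls -lct'" >> ~/.bashrc

# Load it into the current session without opening a new terminal
source ~/.bashrc
```
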
Then again, you may want to have the list show the oldest files first:

```
alias lo='lt -F | tac'
```

![aliases][3]

Figure 1: The lt and lo aliases in action.

[Used with permission][4]

There are a few interesting things going on here. First, we are using an alias (`lt`) to create another alias -- which is perfectly okay. Second, we are passing a new parameter to `lt` (which, in turn, gets passed to `ls` through the definition of the `lt` alias).

The `-F` option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an `*`), files from directories (end in `/`), and all of the above from links, symbolic and otherwise (that end in an `@` symbol). The `-F` option is a throwback to the days when terminals were monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from `lt` through to `tac`, you lose the colors from `ls`.

The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (`|`).

In this case, you are piping the output from `lt -F` into `tac`. `tac`'s name is a bit of a joke. You may have heard of `cat`, the instruction that was nominally created to con _cat_ enate files together, but that in practice is used to print out the contents of a file to the terminal. `tac` does the same, but prints out the contents it receives in reverse order. Get it? `cat` and `tac`. Developers, you so funny!

The thing is, both `cat` and `tac` can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically.

So... after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness.

The final thing you have to bear in mind is that, while `lt` will work with the current directory and any other directory...

```
# This will work:
lt
# And so will this:
lt /some/other/directory
```

... `lo` will only work with the current directory:

```
# This will work:
lo
# But this won't:
lo /some/other/directory
```

This is because Bash expands aliases into their components. When you type this:

```
lt /some/other/directory
```

Bash REALLY runs this:

```
ls -lct /some/other/directory
```

which is a valid Bash command.

However, if you type this:

```
lo /some/other/directory
```

Bash tries to run this:

```
ls -lct -F | tac /some/other/directory
```

which is not a valid instruction, mainly because _/some/other/directory_ ends up being passed to `tac` instead of `ls`, it is a directory, and `cat` and `tac` don't do directories.

### More Alias Shortcuts

  * `alias lll='ls -R'` prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory.

  * `mkdir='mkdir -pv'` lets you make directories within directories all in one go.
With the base form of `mkdir`, to make a new directory containing a subdirectory you have to do this: + +``` + mkdir newdir +mkdir newdir/subdir +``` + +Or this: + +``` +mkdir -p newdir/subdir +``` + +while with the alias you would only have to do this: + +``` +mkdir newdir/subdir +``` + +Your new `mkdir` will also tell you what it is doing while is creating new directories. + + + + +### Aliases as Safeguards + +The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran: + +``` +rm -rf / +``` + +as root, and nuked the whole system. Then there's the user who decided that: + +``` +rm -rf /some/directory/ * +``` + +was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the `*`. + +Both things can be avoided with the `alias rm='rm -i'` alias. The `-i` option makes `rm` ask the user whether that is what they really want to do and gives you a second chance before wreaking havoc in your file system. + +The same goes for `cp`, which can overwrite a file without telling you anything. Create an alias like `alias cp='cp -i'` and stay safe! + +### Next Time + +We are moving more and more into scripting territory. Next time, we'll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems. + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve +[2]: https://www.linux.com/files/images/fig01png-0 +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases) +[4]: https://www.linux.com/licenses/category/used-permission diff --git a/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md new file mode 100644 index 0000000000..32a6a7dd3e --- /dev/null +++ b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md @@ -0,0 +1,232 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Different Ways To Update Linux Kernel For Ubuntu) +[#]: via: (https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Different Ways To Update Linux Kernel For Ubuntu +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/ubuntu-linux-kernel-720x340.png) + +In this guide, we have given 7 different ways to update Linux kernel for Ubuntu. Among the 7 methods, five methods requires system reboot to apply the new Kernel and two methods don’t. Before updating Linux Kernel, it is **highly recommended to backup your important data!** All methods mentioned here are tested on Ubuntu OS only. We are not sure if they will work on other Ubuntu flavors (Eg. Xubuntu) and Ubuntu derivatives (Eg. Linux Mint). 
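
Whichever method you choose, it helps to note the kernel version you are currently running so you can confirm that the upgrade actually took effect. A quick check before you start (and again after rebooting into the new kernel):

```
$ uname -r
```
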
+ +### Part A – Kernel Updates with reboot + +The following methods requires you to reboot your system to apply the new Linux Kernel. All of the following methods are recommended for personal or testing systems. Again, please backup your important data, configuration files and any other important stuff from your Ubuntu system. + +#### Method 1 – Update the Linux Kernel with dpkg (The manual way) + +This method helps you to manually download and install the latest available Linux kernel from **[kernel.ubuntu.com][1]** website. If you want to install most recent version (either stable or release candidate), this method will help. Download the Linux kernel version from the above link. As of writing this guide, the latest available version was **5.0-rc1** and latest stable version was **v4.20**. + +![][3] + +Click on the Linux Kernel version link of your choice and find the section for your architecture (‘Build for XXX’). In that section, download the two files with these patterns (where X.Y.Z is the highest version): + + 1. linux-image-*X.Y.Z*-generic-*.deb + 2. linux-modules-X.Y.Z*-generic-*.deb + + + +In a terminal, change directory to where the files are and run this command to manually install the kernel: + +``` +$ sudo dpkg --install *.deb +``` + +Reboot to use the new kernel: + +``` +$ sudo reboot +``` + +Check the kernel is as expected: + +``` +$ uname -r +``` + +For step by step instructions, please check the section titled under “Install Linux Kernel 4.15 LTS On DEB based systems” in the following guide. + ++ [Install Linux Kernel 4.15 In RPM And DEB Based Systems](https://www.ostechnix.com/install-linux-kernel-4-15-rpm-deb-based-systems/) + +The above guide is specifically written for 4.15 version. However, all the steps are same for installing latest versions too. + +**Pros:** No internet needed (You can download the Linux Kernel from any system). + +**Cons:** Manual update. Reboot necessary. + +#### Method 2 – Update the Linux Kernel with apt-get (The recommended method) + +This is the recommended way to install latest Linux kernel on Ubuntu-like systems. Unlike the previous method, this method will download and install latest Kernel version from Ubuntu official repositories instead of **kernel.ubuntu.com** website.. + +To update the whole system including the Kernel, just do: + +``` +$ sudo apt-get update + +$ sudo apt-get upgrade +``` + +If you want to update the Kernel only, run: + +``` +$ sudo apt-get upgrade linux-image-generic +``` + +**Pros:** Simple. Recommended method. + +**Cons:** Internet necessary. Reboot necessary. + +Updating Kernel from official repositories will mostly work out of the box without any problems. If it is the production system, this is the recommended way to update the Kernel. + +Method 1 and 2 requires user intervention to update Linux Kernels. The following methods (3, 4 & 5) are mostly automated. + +#### Method 3 – Update the Linux Kernel with Ukuu + +**Ukuu** is a Gtk GUI and command line tool that downloads the latest main line Linux kernel from **kernel.ubuntu.com** , and install it automatically in your Ubuntu desktop and server editions. Ukku is not only simplifies the process of manually downloading and installing new Kernels, but also helps you to safely remove the old and unnecessary Kernels. For more details, refer the following guide. 
+ ++ [Ukuu – An Easy Way To Install And Upgrade Linux Kernel In Ubuntu-based Systems](https://www.ostechnix.com/ukuu-an-easy-way-to-install-and-upgrade-linux-kernel-in-ubuntu-based-systems/) + +**Pros:** Easy to install and use. Automatically installs main line Kernel. + +**Cons:** Internet necessary. Reboot necessary. + +#### Method 4 – Update the Linux Kernel with UKTools + +Just like Ukuu, the **UKTools** also fetches the latest stable Kernel from from **kernel.ubuntu.com** site and installs it automatically on Ubuntu and its derivatives like Linux Mint. More details about UKTools can be found in the link given below. + ++ [UKTools – Upgrade Latest Linux Kernel In Ubuntu And Derivatives](https://www.ostechnix.com/uktools-upgrade-latest-linux-kernel-in-ubuntu-and-derivatives/) + +**Pros:** Simple. Automated. + +**Cons:** Internet necessary. Reboot necessary. + +#### Method 5 – Update the Linux Kernel with Linux Kernel Utilities + +**Linux Kernel Utilities** is yet another program that makes the process of updating Linux kernel easy in Ubuntu-like systems. It is actually a set of BASH shell scripts used to compile and/or update latest Linux kernels for Debian and derivatives. It consists of three utilities, one for manually compiling and installing Kernel from source from [**http://www.kernel.org**][4] website, another for downloading and installing pre-compiled Kernels from from **** website. and third script is for removing the old kernels. For more details, please have a look at the following link. + ++ [Linux Kernel Utilities – Scripts To Compile And Update Latest Linux Kernel For Debian And Derivatives](https://www.ostechnix.com/linux-kernel-utilities-scripts-compile-update-latest-linux-kernel-debian-derivatives/) + +**Pros:** Simple. Automated. + +**Cons:** Internet necessary. Reboot necessary. + + +### Part B – Kernel Updates without reboot + +As I already said, all of above methods need you to reboot the server before the new kernel is active. If they are personal systems or testing machines, you could simply reboot and start using the new Kernel. But, what if they are production systems that requires zero downtime? No problem. This is where **Livepatching** comes in handy! + +The **livepatching** (or hot patching) allows you to install Linux updates or patches without rebooting, keeping your server at the latest security level, without any downtime. This is attractive for ‘always-on’ servers, such as web hosts, gaming servers, in fact, any situation where the server needs to stay on all the time. Linux vendors maintain patches only for security fixes, so this approach is best when security is your main concern. + +The following two methods doesn’t require system reboot and useful for updating Linux Kernel on production and mission-critical Ubuntu servers. + +#### Method 6 – Update the Linux Kernel Canonical Livepatch Service + +![][5] + +[**Canonical Livepatch Service**][6] applies Kernel updates, patches and security hotfixes automatically without rebooting the Ubuntu systems. It reduces the Ubuntu systems downtime and keep them secure. Canonical Livepatch Service can be set up either during or after installation. If you are using desktop Ubuntu, the Software Updater will automatically check for kernel patches and notify you. In a console-based system, it is up to you to run apt-get update regularly. It will install kernel security patches only when you run the command “apt-get upgrade”, hence is semi-automatic. + +Livepatch is free for three systems. 
If you have more than three, you need to upgrade to enterprise support solution named **Ubuntu Advantage** suite. This suite includes **Kernel Livepatching** and other services such as, + + * Extended Security Maintenance – critical security updates after Ubuntu end-of-life. + * Landscape – the systems management tool for using Ubuntu at scale. + * Knowledge Base – A private collection of articles and tutorials written by Ubuntu experts. + * Phone and web-based support. + + + +**Cost** + +Ubuntu Advantage includes three paid plans namely, Essential, Standard and Advanced. The basic plan (Essential plan) starts from **225 USD per year for one physical node** and **75 USD per year for one VPS**. It seems there is no monthly subscription for Ubuntu servers and desktops. You can view detailed information on all plans [**here**][7]. + +**Pros:** Simple. Semi-automatic. No reboot necessary. Free for 3 systems. + +**Cons:** Expensive for 4 or more hosts. No patch rollback. + +**Enable Canonical Livepatch Service** + +If you want to setup Livepatch service after installation, just do the following steps. + +Get a key at [**https://auth.livepatch.canonical.com/**][8]. + +``` +$ sudo snap install canonical-livepatch + +$ sudo canonical-livepatch enable your-key +``` + +#### Method 7 – Update the Linux Kernel with KernelCare + +![][9] + +[**KernelCare**][10] is the newest of all the live patching solutions. It is the product of [CloudLinux][11]. KernelCare runs on Ubuntu and other flavors of Linux. It checks for patch releases every 4 hours and will install them without confirmation. Patches can be rolled back if there are problems. + +**Cost** + +Fees, per server: **4 USD per month** , **45 USD per year**. + +Compared to Ubuntu Livepatch, kernelCare seems very cheap and affordable. Good thing is **monthly subscriptions are also available**. Another notable feature is it supports other Linux distributions, such as Red Hat, CentOS, Debian, Oracle Linux, Amazon Linux and virtualization platforms like OpenVZ, Proxmox etc. + +You can read all the features and benefits of KernelCare [**here**][12] and check all available plan details [**here**][13]. + +**Pros:** Simple. Fully automated. Wide OS coverage. Patch rollback. No reboot necessary. Free license for non-profit organizations. Low cost. + +**Cons:** Not free (except for 30 day trial). + +**Enable KernelCare Service** + +Get a 30-day trial key at [**https://cloudlinux.com/kernelcare-free-trial5**][14]. + +Run the following commands to enable KernelCare and register the key. + +``` +$ sudo wget -qq -O - https://repo.cloudlinux.com/kernelcare/kernelcare_install.sh | bash + +$ sudo /usr/bin/kcarectl --register KEY +``` + +If you’re looking for an affordable and reliable commercial service to keep the Linux Kernel updated on your Linux servers, KernelCare is good to go. + +*with inputs from **Paul A. Jacobs** , a Technical Evangelist and Content Writer from Cloud Linux.* + +**Suggested read:** + +And, that’s all for now. Hope this was useful. If you believe any other tools/methods should include in this list, feel free to let us know in the comment section below. I will check and update this guide accordingly. + +More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2019/01/Ubuntu-mainline-kernel.png +[4]: http://www.kernel.org +[5]: http://www.ostechnix.com/wp-content/uploads/2019/01/Livepatch.png +[6]: https://www.ubuntu.com/livepatch +[7]: https://www.ubuntu.com/support/plans-and-pricing +[8]: https://auth.livepatch.canonical.com/ +[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/KernelCare.png +[10]: https://www.kernelcare.com/ +[11]: https://www.cloudlinux.com/ +[12]: https://www.kernelcare.com/update-kernel-linux/ +[13]: https://www.kernelcare.com/pricing/ +[14]: https://cloudlinux.com/kernelcare-free-trial5 diff --git a/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md new file mode 100644 index 0000000000..6552cc3905 --- /dev/null +++ b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md @@ -0,0 +1,239 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI) +[#]: via: (https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI +====== + +Google Drive is an one of the best cloud storage compared with other cloud storage. + +It’s one of the application which is used by millions of users in daily basics. + +It allow users to access the application anywhere irrespective of devices. + +We can upload, download & share documents, photo, files, docs, spreadsheet, etc to anyone with securely. + +We had already written few articles in 2daygeek website about google drive mapping with Linux. + +If you would like to check those, navigate to the following link. + +GNOME desktop offers easy way to **[Integrate Google Drive Using Gnome Nautilus File Manager in Linux][1]** without headache. + +Also, you can give a try with **[Google Drive Ocamlfuse Client][2]**. + +### What’s DriveSync? + +[DriveSync][3] is a command line utility that synchronizes your files between local system and Google Drive via command line. + +Downloads new remote files, uploads new local files to your Drive and deletes or updates files both locally and on Drive if they have changed in one place. + +Allows blacklisting or whitelisting of files and folders that should not / should be synced. + +It was written in Ruby scripting language so, make sure your system should have ruby installed. If it’s not installed then install it as a prerequisites for DriveSync. 
+ +### DriveSync Features + + * Downloads new remote files + * Uploads new local files + * Delete or Update files in both locally and Drive + * Allow blacklist to disable sync for files and folders + * Automate the sync using cronjob + * Allow us to set file upload/download size (Defautl 512MB) + * Allow us to modify Timeout threshold + + + +### How to Install Ruby Scripting Language in Linux? + +Ruby is an interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (like in Perl). It is simple, straight-forward, and extensible. + +It’s available in all the Linux distribution official repository. Hence we can easily install it with help of distribution official **[Package Manager][4]**. + +For **`Fedora`** system, use **[DNF Command][5]** to install Ruby. + +``` +$ sudo dnf install ruby rubygem-bundler +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][6]** or **[APT Command][7]** to install Ruby. + +``` +$ sudo apt install ruby ruby-bundler +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][8]** to install Ruby. + +``` +$ sudo pacman -S ruby ruby-bundler +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][9]** to install Ruby. + +``` +$ sudo yum install ruby ruby-bundler +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][10]** to install Ruby. + +``` +$ sudo zypper install ruby ruby-bundler +``` + +### How to Install DriveSync in Linux? + +DriveSync installation also easy to do it. Follow the below procedure to get it done. + +``` +$ git clone https://github.com/MStadlmeier/drivesync.git +$ cd drivesync/ +$ bundle install +``` + +### How to Set Up DriveSync in Linux? + +As of now, we had successfully installed DriveSync and still we need to perform few steps to use this. + +Run the following command to set up this and Sync the files. + +``` +$ ruby drivesync.rb +``` + +When you ran the above command you will be getting the below url. +![][12] + +Navigate to the given URL in your preferred Web Browser and follow the instruction. It will open a google sign-in page in default web browser. Enter your credentials then hit Sign in button. +![][13] + +Input your password. +![][14] + +Hit **`Allow`** button to allow DriveSync to access your Google Drive. +![][15] + +Finally, it will give you an authorization code. +![][16] + +Just copy and past it on the terminal and hit **`Enter`** button to start the sync. +![][17] + +Yes, it’s syncing the files from Google Drive to my local folder. + +``` +$ ruby drivesync.rb +Warning: Could not find config file at /home/daygeek/.drivesync/config.yml . Creating default config... +Open the following URL in the browser and enter the resulting code after authorization +https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com&include_granted_scopes=true&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=https://www.googleapis.com/auth/drive +4/ygAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx +Local folder is 1437 files behind and 0 files ahead of remote +Starting sync at 2019-01-06 19:48:49 +0530 +Downloading file 2018-07-31-17-48-54-635_1533039534635_XXXPM0534X_ITRV.zip ... +Downloading file 5459XXXXXXXXXX25_11-03-2018.PDF ... +Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format-icon-theme.svg ... +Downloading file 2g-image-design/new-design-28-Mar-2018/2g-banner-format.svg ... 
Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format.svg ...
Downloading file documents/Magesh_Resume_Updated_26_Mar_2018.doc ...
Downloading file documents/Magesh_Resume_updated-new.doc ...
Downloading file documents/Aadhaar-Thanu.PNG ...
Downloading file documents/Aadhaar-Magesh.PNG ...
Downloading file documents/Copy of PP-docs.pdf ...
Downloading file EAadhaar_2189821080299520170807121602_25082017123052_172991.pdf ...
Downloading file Tanisha/VID_20170223_113925.mp4 ...
Downloading file Tanisha/VID_20170224_073234.mp4 ...
Downloading file Tanisha/VID_20170304_170457.mp4 ...
Downloading file Tanisha/IMG_20170225_203243.jpg ...
Downloading file Tanisha/IMG_20170226_123949.jpg ...
Downloading file Tanisha/IMG_20170226_123953.jpg ...
Downloading file Tanisha/IMG_20170304_184227.jpg ...
.
.
.
Sync complete.
```

It creates a **`drive`** folder under **`/home/user/Documents/`** and syncs all the files into it.
![][18]

The DriveSync configuration files are located under **`/home/user/.drivesync/`** if you installed it in your **home** directory.

```
$ ls -lh ~/.drivesync/
total 176K
-rw-r--r-- 1 daygeek daygeek 1.9K Jan 6 19:42 config.yml
-rw-r--r-- 1 daygeek daygeek 170K Jan 6 21:31 manifest
```

You can adjust its behavior by modifying the **`config.yml`** file.

### How to Verify Whether Sync is Working Fine or Not?

To test this, we will create a new folder called **`2g-docs-2019`** and add an image file to it. Once that is done, run the **`drivesync.rb`** command again.

```
$ ruby drivesync.rb
Local folder is 0 files behind and 1 files ahead of remote
Starting sync at 2019-01-06 21:59:32 +0530
Uploading file 2g-docs-2019/Asciinema - Record And Share Your Terminal Activity On The Web.png ...
```

Yes, it has been synced to Google Drive. The same can be verified through the web browser.
![][19]

Create the following **cron job** to enable automatic syncing. The cron job below runs every minute.

```
$ vi crontab
*/1 * * * * ruby ~/drivesync/drivesync.rb
```

I added one more file to test this, and the sync succeeded.

```
Jan 07 09:36:01 daygeek-Y700 crond[590]: (daygeek) RELOAD (/var/spool/cron/daygeek)
Jan 07 09:36:01 daygeek-Y700 crond[20942]: pam_unix(crond:session): session opened for user daygeek by (uid=0)
Jan 07 09:36:01 daygeek-Y700 CROND[20943]: (daygeek) CMD (ruby ~/drivesync/drivesync.rb)
Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Local folder is 0 files behind and 1 files ahead of remote)
Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Starting sync at 2019-01-07 09:36:26 +0530)
Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Uploading file 2g-docs-2019/Check CPU And HDD Temperature In Linux.png ...)
Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT ()
Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Sync complete.)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: pam_unix(crond:session): session closed for user daygeek +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/ +[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/ +[3]: https://github.com/MStadlmeier/drivesync +[4]: https://www.2daygeek.com/category/package-management/ +[5]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[6]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[7]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[8]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[9]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[11]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-1.jpg +[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-2.png +[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-3.png +[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-4.png +[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-5.png +[17]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-6.jpg +[18]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-7.jpg +[19]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-8.png diff --git a/sources/tech/20190107 How to manage your media with Kodi.md b/sources/tech/20190107 How to manage your media with Kodi.md new file mode 100644 index 0000000000..cea446c5b0 --- /dev/null +++ b/sources/tech/20190107 How to manage your media with Kodi.md @@ -0,0 +1,303 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to manage your media with Kodi) +[#]: via: (https://opensource.com/article/19/1/manage-your-media-kodi) +[#]: author: (Steve Ovens https://opensource.com/users/stratusss) + +How to manage your media with Kodi +====== + +![](Get control over your home media content with Kodi media player software.) + +If you, like me, like to own your own data, chances are you also like to purchase movies and TV shows on Blu-Ray or DVD discs. 
And you may also like to make [ISOs][1] of the videos to keep exact digital copies, as I do. + +For a little while, it might be manageable to have a bunch of files stored in some sort of directory structure. However, as your collection grows, you may want features like resuming from a specific spot; keeping track of where you left off watching a video (i.e., its watched status); storing episode or movie summaries and movie trailers; buying media in multiple languages; or just having a sane way to play all those ISOs you ripped. + +This is where Kodi comes in. + +### What is Kodi? + +Modern [Kodi][2] is the successor to Xbox Media Player, which was discontinued way back in 2003. In June 2004, Xbox Media Center (XBMC) was born. For over three years, XBMC remained on the Xbox. Then in 2007, work began in earnest to port the media player over to Linux. + +![](https://opensource.com/sites/default/files/uploads/00_xbmc_500x300.png) + +Aside from some uninteresting technical history, things remained fairly stable, and XBMC grew in prominence. By 2014, XBMC had a thriving community, and its core functionality grew to include playing games, streaming content from the web, and connecting to mobile devices. This, combined with legal issues involving Xbox in the name, lead the team behind XBMC to rename it Kodi. Kodi is now branded as an "entertainment hub that brings all your digital media together into a beautiful and user-friendly package." + +Today, Kodi has an extensible interface that has allowed the open source community to build new functionality using plugins. Note that, as with all open source software, Kodi's developers are not responsible for the ecosystem's plugins. + +### How do I start? + +For Ubuntu-based distributions, Kodi is just a few short commands away: + +``` +sudo apt install software-properties-common +sudo add-apt-repository ppa:team-xbmc/ppa +sudo apt update +sudo apt install kodi +``` + +In Arch Linux, you can install the latest version from the community repo: + +``` +sudo pacman -S kodi +``` + +Packages were maintained for Fedora 26 by RPM Fusion (referenced in the [Kodi documentation][3]). I tried it on Fedora 29, and it was quite unstable. I'm sure that this will improve over time, but my experience is that Fedora 29 is not the ideal platform for Kodi. + +### OK, it's installed… now what? + +Before we proceed, note that I am making two assumptions about your media content: + + 1. You have your own local, legally attained content. + 2. You have already transferred this content from your DVDs, Blu-Rays, or another digital distribution source to your local directory or network. + + + +Kodi uses a scraping service to pull down TV and movie metadata. For Kodi to match things appropriately, I recommend adopting a directory and file-naming structure similar to this: + +``` +Utopia +├── Utopia.S01.dvd_rip.x264 +│   ├── Utopia.S01E01.dvd_rip.x264.mkv +│   ├── Utopia.S01E02.dvd_rip.x264.mkv +│   ├── Utopia.S01E03.dvd_rip.x264.mkv +│   ├── Utopia.S01E04.dvd_rip.x264.mkv +│   ├── Utopia.S01E05.dvd_rip.x264.mkv +│   ├── Utopia.S01E06.dvd_rip.x264.mkv +└── Utopia.S02.dvd_rip.x264 +    ├── Utopia.S02E01.dvd_rip.x264.mkv +    ├── Utopia.S02E02.dvd_rip.x264.mkv +    ├── Utopia.S02E03.dvd_rip.x264.mkv +    ├── Utopia.S02E04.dvd_rip.x264.mkv +    ├── Utopia.S02E05.dvd_rip.x264.mkv +    └── Utopia.S02E06.dvd_rip.x264.mkv +``` + +I put the source (my DVD) and the codec (x264) in the title, but these are optional. 
For a TV series, you can include the episode title in the filename if you like. The important part is **SxxExx** , which stands for Season and Episode. This is how Kodi (and by extension the scrapers) can identify your media. + +Assuming you have organized your media like this, let's do some basic Kodi configuration. + +### Add video sources + +Adding video sources is a simple, six-step process: + + 1. Enter the files section + 2. Select **Files** + 3. Click **Add source** + 4. Browse to your source + 5. Define the video content type + 6. Refresh the metadata + + + +If you're impatient, feel free to navigate these steps on your own. But if you want details, keep reading. + +When you first launch Kodi, you'll see the home screen below. Click **Enter files section**. It doesn't matter whether you do this under Movies (as shown here) or TV shows. + +![](https://opensource.com/sites/default/files/uploads/01_fresh_kodi_main_screen.png) + +Next, select the **Videos** folder, click **Files** , and choose **Add videos**. + +![](https://opensource.com/sites/default/files/uploads/02_videos_folder.png) + +![](https://opensource.com/sites/default/files/uploads/03_add_videos.png) + +Either click on **None** and start typing the path to your files or click **Browse** and use the file navigation. + +![](https://opensource.com/sites/default/files/uploads/04_browse_video_source.png) + +![](https://opensource.com/sites/default/files/uploads/05_add_video_source_name.png) + +As you can see in this screenshot, I added my local **Videos** directory. You can set some default options through **Browse** , such as specifying your home folder and any drives you have mounted—maybe on a network file system (NFS), universal plug and play (UPnP) device, Windows Network ([SMB/CIFS][4]), or [zeroconf][5]. I won't cover most of these, as they are outside the scope of this article, but we will use NFS later for one of Kodi's advanced features. + +After you select your path and click OK, identify the type of content you're working with. + +![](https://opensource.com/sites/default/files/uploads/06_define_video_content.png) + +Next, Kodi prompts you to refresh the metadata for the content in the selected directory. This is how Kodi knows what videos you have and their synopsis, cast information, thumbnails, fan art, etc. Select **Yes** , and you can watch the video-scanning progress in the top right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/07_refresh.png) + +![](https://opensource.com/sites/default/files/uploads/08_active_scan_in_progress.png) + +When the scan completes, you'll see lots of useful information, such as video overviews and season and episode descriptions for TV shows. + +![](https://opensource.com/sites/default/files/uploads/09_screen_after_scan.png) + +![](https://opensource.com/sites/default/files/uploads/10_show_description.png) + +You can use the same process for other types of content, such as music or music videos. + +### Increase functionality with add-ons + +One of the most interesting things about open source projects is that the community often extends them well beyond their initial scope. Kodi has a very robust add-on infrastructure. Most of them are produced by Kodi fans who want to extend its default functionality, and sometimes companies (such as the [Plex][6] content streaming service) release official plugins. Be very careful about adding plugins from untrusted sources. Just because you find an add-on on the internet does not mean it is safe! 
+ +**Be warned:** Add-ons are not supported by Kodi's core team! + +Having said that, there are many useful add-ons that are worth your consideration. In my house, we use Kodi for local playback and Plex when we want to access our content outside the house—with one exception. One of our rooms has a poor WiFi signal. I rip my Blu-Rays to very large MKV files (usually 20–40GB each), and the WiFi (and therefore Kodi) can't handle the files without stuttering. Although you can (and we have) dug into some of the advanced buffering options, even those tweaks have proved insufficient with very large files. Since we already have a Plex server that can transcode content, we solved our problem with a Kodi add-on. + +To show how to install an add-on, I'll use Plex as an example. First, click on **Add-ons** in the side panel and select **Enter add-on browser**. Either use the search function or scroll down until you find Plex. + +![](https://opensource.com/sites/default/files/uploads/11_addons.png) + +Select the Plex add-on and click the **Install** button in the lower right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/13_install_plex_addon.png) + +Once the download completes, you can access Plex on the main Kodi screen under **Add-ons**. + +![](https://opensource.com/sites/default/files/uploads/14_addons_finished_installing.png) + +There are several ways to configure an add-on. Some add-ons, such as NHL TV, are configured via a menu accessed by right-clicking the add-on and selecting Configure. Others, such as Plex, display a configuration walk-through when they launch. If an add-on doesn't seem to be configured when you first launch it, try right-clicking its menu and see if a settings option is available there. + +### Coordinating metadata across Kodi devices + +In our house, we have multiple machines that run Kodi. By default, Kodi tracks metadata, such as a video's watched status and show information, locally. Therefore, content updates on one machine won't appear on any other machine—unless you configure all your Kodi devices to store metadata inside an SQL database (which is a feature Kodi supports). This technique is not particularly difficult, but it is more advanced. If you're willing to put in the effort, here's how to do it. + +#### Before you begin + +There are a few things you need to know before configuring shared status for Kodi. + + 1. All content must be on a network share ([Samba][7], NFS, etc.). + 2. All content must be mounted via the network protocol, even if the disks are local to the machine. That means that no matter where the content is physically located, each client must be configured to use a network fileshare source. + 3. You need to be running an SQL-style database. Kodi's official guide walks through MySQL, but I chose MariaDB. + 4. All clients need to have the database port open (port 3306 in the case of MySQL/MariaDB) or the firewalls disabled. + 5. All clients must be running the same version of Kodi + + + +#### Install and configure the database + +If you're running Ubuntu, you can install MariaDB with the following commands: + +``` +sudo apt update +sudo apt install mariadb-server -y +``` + +I am running MariaDB on an Arch Linux machine. The [Arch Wiki][8] documents the initial setup process well, but I'll summarize it here. + +To install, issue the following command: + +``` +sudo pacman -S mariadb +``` + +Most distributions of MariaDB will have the same setup commands. 
I recommend that you understand what the commands do, but you can safely take the defaults if you're in a home environment.

```
sudo systemctl start mariadb
sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
sudo mysql_secure_installation
```

Next, edit the MariaDB config file. This file is different depending on your distribution. On Ubuntu, you want to edit **/etc/mysql/mariadb.conf.d/50-server.cnf**. On Arch, the file is either **/etc/my.cnf** or **/etc/mysql/my.cnf**. Locate the line that says **bind-address = 127.0.0.1** and change it to your desired Ethernet port's IP address, or to **bind-address = 0.0.0.0** if you want it to listen on all interfaces.

Restart the service so the change will take effect:

```
sudo systemctl restart mariadb
```

#### Configure Kodi and MariaDB/MySQL

To enable Kodi to write to the database, one of two things needs to happen: You can create the database yourself, or you can let Kodi do it for you. In this case, since the only database on this system is for Kodi, I'll create a user with the rights to create any databases that Kodi requires. Do NOT do this if the machine runs more than one database.

```
mysql -u root -p
CREATE USER 'kodi' IDENTIFIED BY 'kodi';
GRANT ALL ON *.* TO 'kodi';
flush privileges;
\q
```

This grants the user all rights—essentially enabling it to act as a root user. For my purposes, this is fine.

Next, on each Kodi device where you want to share metadata, create the following file: `/home/<username>/.kodi/userdata/advancedsettings.xml`. This file can contain a lot of very advanced, tweakable settings. My devices have these settings:

```
<advancedsettings>
    <videodatabase>
        <type>mysql</type>
        <host>mysql-arch.example.com</host>
        <port>3306</port>
        <user>kodi</user>
        <pass>kodi</pass>
    </videodatabase>
    <videolibrary>
        <importwatchedstate>true</importwatchedstate>
        <importresumepoint>true</importresumepoint>
    </videolibrary>
    <cache>
        <!-- network buffering settings -->
        <buffermode>1</buffermode>
        <memorysize>322122547</memorysize>
        <readfactor>20</readfactor>
    </cache>
</advancedsettings>
```

The `<cache>` section—which sets how much of a file Kodi will buffer over the network—is optional in this scenario. See the [Kodi wiki][9] for a full breakdown of this file and its options.

Once the configuration is complete, it's a good idea to close and reopen Kodi to make sure the settings are applied.

The final step is configuring all the Kodi clients to use the same network share for all their content. Only one client needs to scrape/refresh the metadata if everything is created successfully. When data is collected, you should see that Kodi creates a new database on your SQL server:

```
[kodi@kodi-mysql ~]$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 180
Server version: 10.1.37-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| MyVideos107        |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
```

### Wrapping up

This article walked through how to get up and running with the basic functionality of Kodi. You should be able to add content and pull down metadata to make browsing your media more convenient.

You also know how to search for, install, and potentially configure add-ons for additional features.
Be extra careful when downloading add-ons, as they are provided by the community at large and not the core developers. It's best to use add-ons only from organizations or companies you trust. + +And you know a bit about sharing metadata across multiple devices. You've been introduced to **advancedsettings.xml** ; hopefully it has piqued your interest. Kodi has a lot of dials and knobs to turn, and you can squeeze a lot of performance and functionality out of the platform with enough experimentation. + +Are you interested in doing more tweaking? What are some of your favorite add-ons or settings? Do you want to know how to change the user interface? What are some of your favorite skins? Let me know in the comments! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/manage-your-media-kodi + +作者:[Steve Ovens][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stratusss +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/ISO_image +[2]: https://kodi.tv/ +[3]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux#Fedora +[4]: https://en.wikipedia.org/wiki/Server_Message_Block +[5]: https://en.wikipedia.org/wiki/Zero-configuration_networking +[6]: https://www.plex.tv +[7]: https://www.samba.org/ +[8]: https://wiki.archlinux.org/index.php/MySQL +[9]: https://kodi.wiki/view/Advancedsettings.xml diff --git a/sources/tech/20190107 Testing isn-t everything.md b/sources/tech/20190107 Testing isn-t everything.md new file mode 100644 index 0000000000..b2a2daaaac --- /dev/null +++ b/sources/tech/20190107 Testing isn-t everything.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Testing isn't everything) +[#]: via: (https://arp242.net/weblog/testing.html) +[#]: author: (Martin Tournoij https://arp242.net/) + +Testing isn't everything +====== + +This is adopted from a discussion about [Want to write good unit tests in go? Don’t panic… or should you?][1] While this mainly talks about Go a lot of the points also apply to other languages. + +Some of the most difficult code I’ve worked with is code that is “easily testable”. Code that abstracts everything to the point where you have no idea what’s going on, just so that it can add a “unit test” to what would otherwise be a very straightforward function. DHH called this [Test-induced design damage][2]. + +Testing is just one tool to make sure that your program works, out of several. Another very important tool is writing code in such a way that it is easy to understand and reason about (“simplicity”). + +Books that advocate extensive testing – such as Robert C. Martin’s Clean Code – were written, in part, as a response to ever more complex programs, where you read 1,000 lines of code but still had no idea what’s going on. I recently had to port a simple Java “emoji replacer” (😂 ➙ 😂) to Go. To ensure compatibility I looked up the im­ple­men­ta­tion. It was a whole bunch of classes, factories, and whatnot which all just resulted in calling a regexp on a string. 
🤷 + +In dynamic languages like Ruby and Python tests are important for a different reason, as something like this will “work” just fine: + +``` +if condition: + print('w00t') +else: + nonexistent_function() +``` + +Except of course if that `else` branch is entered. It’s easy to typo stuff, or mix stuff up. + +In Go, both of these problems are less of a concern. It has a good static type system, and the focus is on simple straightforward code that is easy to comprehend. Even for a number of dynamic languages there are optional typing systems (function annotations in Python, TypeScript for JavaScript). + +Sometimes you can do a straightforward implementation that doesn’t sacrifice anything for testability; great! But sometimes you have to strike a balance. For some code, not adding a unit test is fine. + +Intensive focus on “unit tests” can be incredibly damaging to a code base. Some codebases have a gazillion unit tests, which makes any change excessively time-consuming as you’re fixing up a whole bunch of tests for even trivial changes. Often times a lot of these tests are just duplicates; adding tests to every layer of a simple CRUD HTTP endpoint is a common example. In many apps it’s fine to just rely on a single integration test. + +Stuff like SQL mocks is another great example. It makes code more complex, harder to change, all so we can say we added a “unit test” to `select * from foo where x=?`. The worst part is, it doesn’t even test anything other than verifying you didn’t typo an SQL query. As soon as the test starts doing anything useful, such as verifying that it actually returns the correct rows from the database, the Unit Test purists will start complaining that it’s not a True Unit Test™ and that You’re Doing It Wrong™. +For most queries, the integration tests and/or manual tests are fine, and extensive SQL mocks are entirely superfluous at best, and harmful at worst. + +There are exceptions, of course; if you’ve got a lot of `if cond { q += "more sql" }` then adding SQL mocks to verify the correctness of that logic might be a good idea. Even in those cases a “non-unit unit test” (e.g. one that just accesses the database) is still a viable option. Integration tests are also still an option. A lot of applications don’t have those kind of complex queries anyway. + +One important reason for the focus on unit tests is to ensure test code runs fast. This was a response to massive test harnesses that take a day to run. This, again, is not really a problem in Go. All integration tests I’ve written run in a reasonable amount of time (several seconds at most, usually faster). The test cache introduced in Go 1.10 makes it even less of a concern. + +Last year a coworker refactored our ETag-based caching library. The old code was very straightforward and easy to understand, and while I’m not claiming it was guaranteed bug-free, it did work very well for a long time. + +It should have been written with some tests in place, but it wasn’t (I didn’t write the original version). Note that the code was not completely untested, as we did have integration tests. + +The refactored version is much more complex. Aside from the two weeks lost on refactoring a working piece of code to … another working piece of code (topic for another post), I’m not so convinced it’s actually that much better. I consider myself a reasonably accomplished and experienced programmer, with a reasonable knowledge and experience in Go. 
I think that in general, based on feedback from peers and performance reviews, I am at least a programmer of “average” skill level, if not more. + +If an average programmer has trouble comprehending what is in essence a handful of simple functions because there are so many layers of abstractions, then something has gone wrong. The refactor traded one tool to verify correctness (simplicity) with another (testing). Simplicity is hardly a guarantee to ensure correctness, but neither are unit tests. Ideally, we should do both. + +Postscript: the refactor introduced a bug and removed a feature that was useful, but is now harder to add, not in the least because the code is much more complex. + +All units working correctly gives exactly zero guarantees that the program is working correctly. A lot of logic errors won’t be caught because the logic consists of several units working together. So you need integration tests, and if the integration tests duplicate half of your unit tests, then why bother with those unit tests? + +Test Driven Development (TDD) is also just one tool. It works well for some problems; not so much for others. In particular, I think that “forced to write code in tiny units” can be terribly harmful in some cases. Some code is just a serial script which says “do this, and then that, and then this”. Splitting that up in a whole bunch of “tiny units” can greatly reduce how easy the code is to understand, and thus harder to verify that it is correct. + +I’ve had to fix some Ruby code where everything was in tiny units – there is a strong culture of TDD in the Ruby community – and even though the units were easy to understand I found it incredibly hard to understand the application logic. If everything is split in “tiny units” then understanding how everything fits together to create an actual program that does something useful will be much harder. + +You see the same friction in the old microkernel vs. monolithic kernel debate, or the more recent microservices vs. monolithic app one. In principle splitting everything up in small parts sounds like a great idea, but in practice it turns out that making all the small parts work together is a very hard problem. A hybrid approach seems to work best for kernels and app design, balancing the ad­van­tages and downsides of both approaches. I think the same applies to code. + +To be clear, I am not against unit tests or TDD and claiming we should all gung-go cowboy code our way through life 🤠. I write unit tests and practice TDD, when it makes sense. My point is that unit tests and TDD are not the solution to every single last problem and should applied indiscriminately. This is why I use words such as “some” and “often” so frequently. + +This brings me to the topic of testing frameworks. I have never understood what problem libraries such as [goblin][3] are solving. How is this: + +``` +Expect(err).To(nil) +Expect(out).To(test.wantOut) +``` + +An improvement over this? + +``` +if err != nil { + t.Fatal(err) +} + +if out != tt.want { + t.Errorf("out: %q\nwant: %q", out, tt.want) +} +``` + +What’s wrong with `if` and `==`? Why do we need to abstract it? Note that with table-driven tests you’re only typing these checks once, so you’re saving just a few lines here. + +[Ginkgo][4] is even worse. It turns a very simple, straightforward, and understandable piece of code and doesn’t just abstract `if`, it also chops up the execution in several different functions (`BeforeEach()` and `DescribeTable()`). 
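For a sense of what that looks like in practice, here is a rough sketch of a Ginkgo/Gomega-style test. It is not code from any real project: the `replace` helper is a stand-in for whatever function is under test, and the import paths are the Ginkgo v1 / Gomega ones. Note how a check that would otherwise be three lines of `if` gets spread across registration, setup, and assertion callbacks:

```
package emoji_test

import (
	"strings"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// replace is a stand-in for whatever function is actually under test,
// defined inline so the sketch is self-contained.
func replace(s string) string {
	return strings.Replace(s, ":joy:", "😂", -1)
}

// Ginkgo needs a regular test function to hook into `go test`.
func TestEmoji(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Emoji Suite")
}

var _ = Describe("replace", func() {
	var out string

	// Runs before every It block below.
	BeforeEach(func() {
		out = replace("this sparks :joy:")
	})

	It("swaps the code for the emoji", func() {
		Expect(out).To(Equal("this sparks 😂"))
	})
})
```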
+ +This is known as Behaviour-driven development (BDD). I am not entirely sure what to think of BDD. I am skeptical, but I’ve never properly used it in a large project so I’m hesitant to just dismiss it. Note that I said “properly”: most projects don’t really use BDD, they just use a library with a BDD syntax and shoehorn their testing code in to that. That’s ad-hoc BDD, or faux-BDD. + +Whatever merits BDD may have, they are not present simply because your testing code vaguely resembles BDD-style syntax. This on its own demonstrates that BDD is perhaps not a great idea for many projects. + +I think there are real problems with these BDD(-ish) test tools, as they obfuscate what you’re actually doing. No matter what, testing remains a matter of getting the output of a function and checking if that matches what you expected. No testing methodology is going to change that fundamental. The more layers you add on top of that, the harder it will be to debug. + +When determining if something is “easy” then my prime concern is not how easy something is to write, but how easy something is to debug when things fail. I will gladly spend a bit more effort writing things if that makes things a lot easier to debug. + +All code – including testing code – can fail in confusing, surprising, and unexpected ways (a “bug”), and then you’re expected to debug that code. The more complex the code, the harder it is to debug. + +You should expect all code – including testing code – to go through several debugging cycles. Note that with debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”. + +In general, I already find testing code harder to debug than regular code, as the “code surface” tends to be larger. You have the testing code and the actual implementation code to think of. That’s a lot more than just thinking of the implementation code. + +Adding these abstractions means you will now also have to think about that, too! This might be okay if the abstractions would reduce the scope of what you have to think about, which is a common reason to add abstractions in regular code, but it doesn’t. It just adds more things to think about. + +So these are exactly the wrong kind of abstractions: they wrap and obfuscate, rather than separate concerns and reduce the scope. + +If you’re interested in soliciting contributions from other people in open source projects then making your tests understandable is a very important concern (it’s also important in business context, but a bit less so, as you’ve got actual time to train people). + +Seeing PRs with “here’s the code, it works, but I couldn’t figure out the tests, plz halp!” is not uncommon; and I’m fairly sure that at least a few people never even bothered to submit PRs just because they got stuck on the tests. I know I have. + +There is one open source project that I contributed to, and would like to contribute more to, but don’t because it’s just too hard to write and run tests. Every change is “write working code in 15 minutes, spend 45 minutes dealing with tests”. It’s … no fun at all. + +Writing good software is hard. I’ve got some ideas on how to do it, but don’t have a comprehensive view. I’m not sure if anyone really does. I do know that “always add unit tests” and “always practice TDD” isn’t the answer, in spite of them being useful concepts. 
To give an analogy: most people would agree that a free market is a good idea, but at the same time even most libertarians would agree it’s not the complete solution to every single problem (well, [some do][5], but those ideas are … rather misguided). + +You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc. + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/testing.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: https://medium.com/@jens.neuse/want-to-write-good-unit-tests-in-go-dont-panic-or-should-you-ba3eb5bf4f51 +[2]: http://david.heinemeierhansson.com/2014/test-induced-design-damage.html +[3]: https://github.com/franela/goblin +[4]: https://github.com/onsi/ginkgo +[5]: https://en.wikipedia.org/wiki/Murray_Rothbard#Children's_rights_and_parental_obligations +[6]: mailto:martin@arp242.net +[7]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20190108 Create your own video streaming server with Linux.md b/sources/tech/20190108 Create your own video streaming server with Linux.md new file mode 100644 index 0000000000..24dd44524d --- /dev/null +++ b/sources/tech/20190108 Create your own video streaming server with Linux.md @@ -0,0 +1,301 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create your own video streaming server with Linux) +[#]: via: (https://opensource.com/article/19/1/basic-live-video-streaming-server) +[#]: author: (Aaron J.Prisk https://opensource.com/users/ricepriskytreat) + +Create your own video streaming server with Linux +====== +Set up a basic live streaming server on a Linux or BSD operating system. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo) + +Live video streaming is incredibly popular—and it's still growing. Platforms like Amazon's Twitch and Google's YouTube boast millions of users that stream and consume countless hours of live and recorded media. These services are often free to use but require you to have an account and generally hold your content behind advertisements. Some people don't need their videos to be available to the masses or just want more control over their content. Thankfully, with the power of open source software, anyone can set up a live streaming server. + +### Getting started + +In this tutorial, I'll explain how to set up a basic live streaming server with a Linux or BSD operating system. + +This leads to the inevitable question of system requirements. These can vary, as there are a lot of variables involved with live streaming, such as: + + * **Stream quality:** Do you want to stream in high definition or will standard definition fit your needs? + * **Viewership:** How many viewers are you expecting for your videos? + * **Storage:** Do you plan on keeping saved copies of your video stream? + * **Access:** Will your stream be private or open to the world? + + + +There are no set rules when it comes to system requirements, so I recommend you experiment and find what works best for your needs. 
I installed my server on a virtual machine with 4GB RAM, a 20GB hard drive, and a single Intel i7 processor core. + +This project uses the Real-Time Messaging Protocol (RTMP) to handle audio and video streaming. There are other protocols available, but I chose RTMP because it has broad support. As open standards like WebRTC become more compatible, I would recommend that route. + +It's also very important to know that "live" doesn't always mean instant. A video stream must be encoded, transferred, buffered, and displayed, which often adds delays. The delay can be shortened or lengthened depending on the type of stream you're creating and its attributes. + +### Setting up a Linux server + +You can use many different distributions of Linux, but I prefer Ubuntu, so I downloaded the [Ubuntu Server][1] edition for my operating system. If you prefer your server to have a graphical user interface (GUI), feel free to use [Ubuntu Desktop][2] or one of its many flavors. Then, I fired up the Ubuntu installer on my computer or virtual machine and chose the settings that best matched my environment. Below are the steps I took. + +Note: Because this is a server, you'll probably want to set some static network settings. + +![](https://opensource.com/sites/default/files/uploads/stream-server_profilesetup.png) + +After the installer finishes and your system reboots, you'll be greeted with a lovely new Ubuntu system. As with any newly installed operating system, install any updates that are available: + +``` +sudo apt update +sudo apt upgrade +``` + +This streaming server will use the very powerful and versatile Nginx web server, so you'll need to install it: + +``` +sudo apt install nginx +``` + +Then you'll need to get the RTMP module so Nginx can handle your media stream: + +``` +sudo add-apt-repository universe +sudo apt install libnginx-mod-rtmp +``` + +Adjust your web server's configuration so it can accept and deliver your media stream. + +``` +sudo nano /etc/nginx/nginx.conf +``` + +Scroll to the bottom of the configuration file and add the following code: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_config.png) + +Save the config. Because I'm a heretic, I use [Nano][3] for editing configuration files. In Nano, you can save your config by pressing **Ctrl+X** , **Y** , and then **Enter.** + +This is a very minimal config that will create a working streaming server. You'll add to this config later, but this is a great starting point. + +However, before you can begin your first stream, you'll need to restart Nginx with its new configuration: + +``` +sudo systemctl restart nginx +``` + +### Setting up a BSD server + +If you're of the "beastie" persuasion, getting a streaming server up and running is also devilishly easy. + +Head on over to the [FreeBSD][4] website and download the latest release. Fire up the FreeBSD installer on your computer or virtual machine and go through the initial steps and choose settings that best match your environment. Since this is a server, you'll likely want to set some static network settings. + +After the installer finishes and your system reboots, you should have a shiny new FreeBSD system. 
Like any other freshly installed system, you'll likely want to get everything updated (from this step forward, make sure you're logged in as root): + +``` +pkg update +pkg upgrade +``` + +I install [Nano][3] for editing configuration files: + +``` +pkg install nano +``` + +This streaming server will use the very powerful and versatile Nginx web server. You can build Nginx using the excellent ports system that FreeBSD boasts. + +First, update your ports tree: + +``` +portsnap fetch +portsnap extract +``` + +Browse to the Nginx ports directory: + +``` +cd /usr/ports/www/nginx +``` + +And begin building Nginx by running: + +``` +make install +``` + +You'll see a screen asking what modules to include in your Nginx build. For this project, you'll need to add the RTMP module. Scroll down until the RTMP module is selected and press **Space**. Then Press **Enter** to proceed with the rest of the build and installation. + +Once Nginx has finished installing, it's time to configure it for streaming purposes. + +First, add an entry into **/etc/rc.conf** to ensure the Nginx server starts when your system boots: + +``` +nano /etc/rc.conf +``` + +Add this text to the file: + +``` +nginx_enable="YES" +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_streamingconfig.png) + +Next, create a webroot directory from where Nginx will serve its content. I call mine **stream** : + +``` +cd /usr/local/www/ +mkdir stream +chmod -R 755 stream/ +``` + +Now that you have created your stream directory, configure Nginx by editing its configuration file: + +``` +nano /usr/local/etc/nginx/nginx.conf +``` + +Load your streaming modules at the top of the file: + +``` +load_module /usr/local/libexec/nginx/ngx_stream_module.so; +load_module /usr/local/libexec/nginx/ngx_rtmp_module.so; +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_modules.png) + +Under the **Server** section, change the webroot location to match the one you created earlier: + +``` +Location / { +root /usr/local/www/stream +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_webroot.png) + +And finally, add your RTMP settings so Nginx will know how to handle your media streams: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +Save the config. In Nano, you can do this by pressing **Ctrl+X** , **Y** , and then **Enter.** + +As you can see, this is a very minimal config that will create a working streaming server. Later, you'll add to this config, but this will provide you with a great starting point. + +However, before you can begin your first stream, you'll need to restart Nginx with its new config: + +``` +service nginx restart +``` + +### Set up your streaming software + +#### Broadcasting with OBS + +Now that your server is ready to accept your video streams, it's time to set up your streaming software. This tutorial uses the powerful and open source Open Broadcast Studio (OBS). + +Head over to the [OBS website][5] and find the build for your operating system and install it. Once OBS launches, you should see a first-time-run wizard that will help you configure OBS with the settings that best fit your hardware. + +![](https://opensource.com/sites/default/files/uploads/stream-server_autoconfig.png) + +OBS isn't capturing anything because you haven't supplied it with a source. 
For this tutorial, you'll just capture your desktop for the stream. Simply click the **+** button under **Source** , choose **Screen Capture** , and select which desktop you want to capture. + +Click OK, and you should see OBS mirroring your desktop. + +Now it's time to send your newly configured video stream to your server. In OBS, click **File** > **Settings**. Click on the **Stream** section, and set **Stream Type** to **Custom Streaming Server**. + +In the URL box, enter the prefix **rtmp://** followed the IP address of your streaming server followed by **/live**. For example, **rtmp://IP-ADDRESS/live**. + +Next, you'll probably want to enter a Stream key—a special identifier required to view your stream. Enter whatever key you want (and can remember) in the **Stream key** box. + +![](https://opensource.com/sites/default/files/uploads/stream-server_streamkey.png) + +Click **Apply** and then **OK**. + +Now that OBS is configured to send your stream to your server, you can start your first stream. Click **Start Streaming**. + +If everything worked, you should see the button change to **Stop Streaming** and some bandwidth metrics will appear at the bottom of OBS. + +![](https://opensource.com/sites/default/files/uploads/stream-server_metrics.png) + +If you receive an error, double-check Stream Settings in OBS for misspellings. If everything looks good, there could be another issue preventing it from working. + +### Viewing your stream + +A live video isn't much good if no one is watching it, so be your first viewer! + +There are a multitude of open source media players that support RTMP, but the most well-known is probably [VLC media player][6]. + +After you install and launch VLC, open your stream by clicking on **Media** > **Open Network Stream**. Enter the path to your stream, adding the Stream Key you set up in OBS, then click **Play**. For example, **rtmp://IP-ADDRESS/live/SECRET-KEY**. + +You should now be viewing your very own live video stream! + +![](https://opensource.com/sites/default/files/uploads/stream-server_livevideo.png) + +### Where to go next? + +This is a very simple setup that will get you off the ground. Here are two other features you likely will want to use. + + * **Limit access:** The next step you might want to take is to limit access to your server, as the default setup allows anyone to stream to and from the server. There are a variety of ways to set this up, such as an operating system firewall, [.htaccess file][7], or even using the [built-in access controls in the STMP module][8]. + + * **Record streams:** This simple Nginx configuration will only stream and won't save your videos, but this is easy to add. In the Nginx config, under the RTMP section, set up the recording options and the location where you want to save your videos. Make sure the path you set exists and Nginx is able to write to it. + + + + +``` +application live { +             live on; +             record all; +             record_path /var/www/html/recordings; +             record_unique on; +} +``` + +The world of live streaming is constantly evolving, and if you're interested in more advanced uses, there are lots of other great resources you can find floating around the internet. Good luck and happy streaming! 
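To make the "Limit access" option above a bit more concrete: the RTMP module's built-in `allow` and `deny` directives can restrict who is permitted to publish a stream. A minimal sketch (the address below is only a placeholder for your encoder's IP, and playback is left open) would extend the existing application block like this:

```
application live {
        live on;
        record off;

        # Only the encoder at this address may publish a stream;
        # every other publish attempt is rejected.
        allow publish 192.168.1.100;
        deny publish all;
}
```

The same directives accept `play` instead of `publish` if you also want to restrict who can watch; see the access-control link in the list above for the full syntax.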
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/basic-live-video-streaming-server + +作者:[Aaron J.Prisk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ricepriskytreat +[b]: https://github.com/lujun9972 +[1]: https://www.ubuntu.com/download/server +[2]: https://www.ubuntu.com/download/desktop +[3]: https://www.nano-editor.org/ +[4]: https://www.freebsd.org/ +[5]: https://obsproject.com/ +[6]: https://www.videolan.org/vlc/index.html +[7]: https://httpd.apache.org/docs/current/howto/htaccess.html +[8]: https://github.com/arut/nginx-rtmp-module/wiki/Directives#access diff --git a/sources/tech/20190108 How To Understand And Identify File types in Linux.md b/sources/tech/20190108 How To Understand And Identify File types in Linux.md new file mode 100644 index 0000000000..c1c4ca4c0a --- /dev/null +++ b/sources/tech/20190108 How To Understand And Identify File types in Linux.md @@ -0,0 +1,359 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Understand And Identify File types in Linux) +[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Understand And Identify File types in Linux +====== + +We all are knows, that everything is a file in Linux which includes Hard Disk, Graphics Card, etc. + +When you are navigating the Linux filesystem most of the files are fall under regular files and directories. + +But it has other file types as well for different purpose which fall in five categories. + +So, it’s very important to understand the file types in Linux that helps you in many ways. + +If you can’t believe this, you just gone through the complete article then you come to know how important is. + +If you don’t understand the file types you can’t make any changes on that without fear. + +If you made the changes wrongly that damage your system very badly so be careful when you are doing that. + +Files are very important in Linux because all the devices and daemon’s were stored as a file in Linux system. + +### How Many Types of File is Available in Linux? + +As per my knowledge, totally 7 types of files are available in Linux with 3 Major categories. The details are below. + + * Regular File + * Directory File + * Special Files (This category having five type of files) + * Link File + * Character Device File + * Socket File + * Named Pipe File + * Block File + + + +Refer the below table for better understanding of file types in Linux. +| Symbol | Meaning | +| – | Regular File. It starts with underscore “_”. | +| d | Directory File. It starts with English alphabet letter “d”. | +| l | Link File. It starts with English alphabet letter “l”. | +| c | Character Device File. It starts with English alphabet letter “c”. | +| s | Socket File. It starts with English alphabet letter “s”. | +| p | Named Pipe File. It starts with English alphabet letter “p”. | +| b | Block File. It starts with English alphabet letter “b”. | + +### Method-1: Manual Way to Identify File types in Linux + +If you are having good knowledge in Linux then you can easily identify the files type with help of above table. + +#### How to view the Regular files in Linux? 
+ +Use the below command to view the Regular files in Linux. Regular files are available everywhere in Linux filesystem. +The Regular files color is `WHITE` + +``` +# ls -la | grep ^- +-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history +-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout +-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile +-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc +-rw-r--r--. 1 root root 26 Dec 27 17:55 liks +-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat +-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip +-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip +-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt +-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt +-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt +``` + +#### How to view the Directory files in Linux? + +Use the below command to view the Directory files in Linux. Directory files are available everywhere in Linux filesystem. The Directory files colour is `BLUE` + +``` +# ls -la | grep ^d +drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/ +drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/ +drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/ +drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/ +``` + +#### How to view the Link files in Linux? + +Use the below command to view the Link files in Linux. Link files are available everywhere in Linux filesystem. +Two type of link files are available, it’s Soft link and Hard link. The Link files color is `LIGHT TURQUOISE` + +``` +# ls -la | grep ^l +lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link +lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder +``` + +#### How to view the Character Device files in Linux? + +Use the below command to view the Character Device files in Linux. Character Device files are available only in specific location. + +It’s available under `/dev` directory. The Character Device files color is `YELLOW` + +``` +# ls -la | grep ^c +crw-------. 1 root root 5, 1 Jan 28 14:05 console +crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency +crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash +crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0 +crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full +crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse +``` + +#### How to view the Block files in Linux? + +Use the below command to view the Block files in Linux. The Block files are available only in specific location. +It’s available under `/dev` directory. The Block files color is `YELLOW` + +``` +# ls -la | grep ^b +brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0 +brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1 +brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2 +brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3 +brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4 +``` + +#### How to view the Socket files in Linux? + +Use the below command to view the Socket files in Linux. The Socket files are available only in specific location. +The Socket files color is `PINK` + +``` +# ls -la | grep ^s +srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket +``` + +#### How to view the Named Pipe files in Linux? + +Use the below command to view the Named Pipe files in Linux. The Named Pipe files are available only in specific location. The Named Pipe files color is `YELLOW` + +``` +# ls -la | grep ^p +prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo| +prw-------. 
1 root root 0 Jan 28 14:06 stats-mail| +``` + +### Method-2: How to Identify File types in Linux Using file Command? + +The file command allow us to determine various file types in Linux. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests to identify file types. + +#### How to view the Regular files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Regular file. The file command will read the given file contents and display exactly what kind of file it is. + +That’s why we are seeing different results for each Regular files. See the below various results for Regular files. + +``` +# file 2daygeek_access.log +2daygeek_access.log: ASCII text, with very long lines + +# file powertop.html +powertop.html: HTML document, ASCII text, with very long lines + +# file 2g-test +2g-test: JSON data + +# file powertop.txt +powertop.txt: HTML document, UTF-8 Unicode text, with very long lines + +# file 2g-test-05-01-2019.tar.gz +2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560 +``` + +#### How to view the Directory files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Directory file. See the results below. + +``` +# file Pictures/ +Pictures/: directory +``` + +#### How to view the Link files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Link file. See the results below. + +``` +# file log +log: symbolic link to /run/systemd/journal/dev-log +``` + +#### How to view the Character Device files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Character Device file. See the results below. + +``` +# file vcsu +vcsu: character special (7/64) +``` + +#### How to view the Block files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Block file. See the results below. + +``` +# file sda1 +sda1: block special (8/1) +``` + +#### How to view the Socket files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Socket file. See the results below. + +``` +# file system_bus_socket +system_bus_socket: socket +``` + +#### How to view the Named Pipe files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Named Pipe file. See the results below. + +``` +# file pipe-test +pipe-test: fifo (named pipe) +``` + +### Method-3: How to Identify File types in Linux Using stat Command? + +The stat command allow us to check file types or file system status. This utility giving more information than file command. It shows lot of information about the given file such as Size, Block Size, IO Block Size, Inode Value, Links, File permission, UID, GID, File Access, Modify and Change time details. + +#### How to view the Regular files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Regular file. + +``` +# stat 2daygeek_access.log + File: 2daygeek_access.log + Size: 14406929 Blocks: 28144 IO Block: 4096 regular file +Device: 10301h/66305d Inode: 1727555 Links: 1 +Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2019-01-03 14:05:26.430328867 +0530 +Modify: 2019-01-03 14:05:26.460328868 +0530 +Change: 2019-01-03 14:05:26.460328868 +0530 + Birth: - +``` + +#### How to view the Directory files in Linux Using stat Command? 
+ +Simple enter the stat command on your terminal and followed by Directory file. See the results below. + +``` +# stat Pictures/ + File: Pictures/ + Size: 4096 Blocks: 8 IO Block: 4096 directory +Device: 10301h/66305d Inode: 1703982 Links: 3 +Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2018-11-24 03:22:11.090000828 +0530 +Modify: 2019-01-05 18:27:01.546958817 +0530 +Change: 2019-01-05 18:27:01.546958817 +0530 + Birth: - +``` + +#### How to view the Link files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Link file. See the results below. + +``` +# stat /dev/log + File: /dev/log -> /run/systemd/journal/dev-log + Size: 28 Blocks: 0 IO Block: 4096 symbolic link +Device: 6h/6d Inode: 278 Links: 1 +Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) +Access: 2019-01-05 16:36:31.033333447 +0530 +Modify: 2019-01-05 16:36:30.766666768 +0530 +Change: 2019-01-05 16:36:30.766666768 +0530 + Birth: - +``` + +#### How to view the Character Device files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Character Device file. See the results below. + +``` +# stat /dev/vcsu + File: /dev/vcsu + Size: 0 Blocks: 0 IO Block: 4096 character special file +Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40 +Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty) +Access: 2019-01-05 16:36:31.056666781 +0530 +Modify: 2019-01-05 16:36:31.056666781 +0530 +Change: 2019-01-05 16:36:31.056666781 +0530 + Birth: - +``` + +#### How to view the Block files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Block file. See the results below. + +``` +# stat /dev/sda1 + File: /dev/sda1 + Size: 0 Blocks: 0 IO Block: 4096 block special file +Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1 +Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk) +Access: 2019-01-05 16:36:31.596666806 +0530 +Modify: 2019-01-05 16:36:31.596666806 +0530 +Change: 2019-01-05 16:36:31.596666806 +0530 + Birth: - +``` + +#### How to view the Socket files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Socket file. See the results below. + +``` +# stat /var/run/dbus/system_bus_socket + File: /var/run/dbus/system_bus_socket + Size: 0 Blocks: 0 IO Block: 4096 socket +Device: 15h/21d Inode: 576 Links: 1 +Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root) +Access: 2019-01-05 16:36:31.823333482 +0530 +Modify: 2019-01-05 16:36:31.810000149 +0530 +Change: 2019-01-05 16:36:31.810000149 +0530 + Birth: - +``` + +#### How to view the Named Pipe files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Named Pipe file. See the results below. 
+ +``` +# stat pipe-test + File: pipe-test + Size: 0 Blocks: 0 IO Block: 4096 fifo +Device: 10301h/66305d Inode: 1705583 Links: 1 +Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2019-01-06 02:00:03.040394731 +0530 +Modify: 2019-01-06 02:00:03.040394731 +0530 +Change: 2019-01-06 02:00:03.040394731 +0530 + Birth: - +``` +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190109 Automating deployment strategies with Ansible.md b/sources/tech/20190109 Automating deployment strategies with Ansible.md new file mode 100644 index 0000000000..175244e760 --- /dev/null +++ b/sources/tech/20190109 Automating deployment strategies with Ansible.md @@ -0,0 +1,152 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automating deployment strategies with Ansible) +[#]: via: (https://opensource.com/article/19/1/automating-deployment-strategies-ansible) +[#]: author: (Jario da Silva Junior https://opensource.com/users/jairojunior) + +Automating deployment strategies with Ansible +====== +Use automation to eliminate time sinkholes due to repetitive tasks and unplanned work. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) + +When you examine your technology stack from the bottom layer to the top—hardware, operating system (OS), middleware, and application—with their respective configurations, it's clear that changes are far more frequent as you go up in the stack. Your hardware will hardly change, your OS has a long lifecycle, and your middleware will keep up with the application's needs, but even if your release cycle is long (weeks or months), your applications will be the most volatile. + +![](https://opensource.com/sites/default/files/uploads/osdc-deployment-strategies.png) + +In [The Practice of System and Network Administration][1], the authors categorize the biggest "time sinkholes" in IT as manual/non-standard provisioning of OSes and application deployments. These time sinkholes will consume you with repetitive tasks or unplanned work. + +How so? Let's say you provision a new server without Network Time Protocol (NTP) properly configured, and a small percentage of your requests—in a cluster of dozens of servers—start to behave strangely because an application uses some sort of scheduler that relies on correct time. When you look at it like this, it is an easy problem to fix, but how long it would it take your team figure it out? Incidents or unplanned work consume a lot of your time and, even worse, your greatest talents. Should you really be wasting time investigating production systems like this? Wouldn't it be better to set this server aside and automatically provision a new one from scratch? + +What about manual deployment? Imagine 20 binaries deployed across a farm or nodes with their respective configuration files? How error-prone is this? Inevitably, it will eventually end up in unplanned work. 
+ +The [State of DevOps Report 2018][2] introduces the stages of DevOps adoption, and it's no surprise that Stage 0 includes deployment automation and reuse of deployment patterns, while Stage 1 and 2 focus on standardization of your infrastructure stack to reduce inconsistencies across your environment. + +Note that, more than once, I have seen an ops team using this "standardization" as an excuse to limit a development team's ability to deliver, forcing them to use a hammer on something that is definitely not a nail. Don't do it; the price is extremely high. + +The lesson to be learned here is that lack of automation not only increases your lead time but also the rate of problems in your process and the amount of unplanned work you face. If you've read [The Phoenix Project][3], you know this is the root of all evil in any value stream, and if you don't get rid of it, it will eventually kill your business. + +When trying to fill the biggest time sinkholes, why not start with automating operating system installation? We could, but the results would take longer to appear since new virtual machines are not created as frequently as applications are deployed. In other words, this may not free up the time we need to power our initiative, so it could die prematurely. + +Still not convinced? Smaller and more frequent releases are also extremely positive from the development side. Let's explain a little further… + +### Deploy ≠ Release + +The first thing to understand is that, although they're used interchangeably, deployment and release do **NOT** mean the same thing. Release refers to providing the user a new version, while deployment is the technical process of deploying the new version. Let's focus on the technical process of deployment. + +### Tasks, groups, and Ansible + +We need to understand the deployment process from the beginning to the end, including everything in the middle—the tasks, which servers are involved in the process, and which steps are executed—to avoid falling into the pitfalls described by Mattias Geniar in [Automating the unknown][4]. + +#### Tasks + +The steps commonly executed in a regular deployment process include: + + * Deploy application(s)/database(s) or database(s) change(s) + * Stop/start services and monitoring + * Add/remove the server from our load balancers + * Verify application state—is it ready to serve requests? + * Manual approval—is it necessary? + + + +For some people, automating the deployment process but leaving a manual approval step is like riding a bike with training wheels. As someone once told me: "It's better to ride with training wheels than not ride at all." + +What if a tool doesn't include an API or a command-line interface (CLI) to enable task automation? Well, maybe it's time to think about changing tools. There are many open source application servers, databases, monitoring systems, and load balancers that are easily automated—thanks in large part to the [Unix way][5]. When adopting a new technology, eliminate options that are not automated and use your creativity to support your legacy technologies. For example, I've seen people versioning network appliance configuration files and updating them using FTP. + +And guess what? It's a wonderful time to adopt open source tools. The recent [Accelerate: State of DevOps][6] report found that open source technologies are in predominant use in high-performance organizations. 
The logic is pretty simple: open source projects function in a "Darwinist" model, where those that do not adapt and evolve will die for lack of a user base or contributions. Feedback is paramount to software evolution. + +#### Groups + +To identify groups of servers to target for automation, think about the most tasks you want to automate, such as those that: + + * Deploy application(s)/database(s) or database change(s) + * Stop/start services and monitoring + * Add/remove server(s) from load balancer(s) + * Verify application state—is it ready to serve requests? + + + +#### The playbook + +A high-level deployment process could be: + + 1. Stop monitoring (to avoid false-positives) + 2. Remove server from the load balancer (to prevent the user from receiving an error code) + 3. Stop the service (to enable a graceful shutdown) + 4. Deploy the new version of the application + 5. Wait for the application to be ready to receive new requests + 6. Execute steps 3, 2, and 1. + 7. Do the same for the next N servers. + + + +Having documentation of your process is nice, but having an executable documenting your deployment is better! Here's what steps 1–5 would look like in Ansible for a fully open source stack: + +``` +- name: Disable alerts +  nagios: +    action: disable_alerts +    host: "{{ inventory_hostname }}" +    services: webserver +  delegate_to: "{{ item }}" +  loop: "{{ groups.monitoring }}" + +- name: Disable servers in the LB +  haproxy: +    host: "{{ inventory_hostname }}" +    state: disabled +    backend: app +  delegate_to: "{{ item }}" +  loop: " {{ groups.lbserver }}" + +- name: Stop the service +  service: name=httpd state=stopped + +- name: Deploy a new version +  unarchive: src=app.tar.gz dest=/var/www/app + +- name: Verify application state +  uri: +    url: "http://{{ inventory_hostname }}/app/healthz" +    status_code: 200 +  retries: 5 +``` + +### Why Ansible? + +There are other alternatives for application deployment, but the things that make Ansible an excellent choice include: + + * Multi-tier orchestration (i.e., **delegate_to** ) allowing you to orderly target different groups of servers: monitoring, load balancer, application server, database, etc. + * Rolling upgrade (i.e., serial) to control how changes are made (e.g., 1 by 1, N by N, X% at a time, etc.) + * Error control, **max_fail_percentage** and **any_errors_fatal** , is my process all-in or will it tolerate fails? + * A vast library of modules for: + * Monitoring (e.g., Nagios, Zabbix, etc.) + * Load balancers (e.g., HAProxy, F5, Netscaler, Cisco, etc.) 
+ * Services (e.g., service, command, file) + * Deployment (e.g., copy, unarchive) + * Programmatic verifications (e.g., command, Uniform Resource Identifier) + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/automating-deployment-strategies-ansible + +作者:[Jario da Silva Junior][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jairojunior +[b]: https://github.com/lujun9972 +[1]: https://www.amazon.com/Practice-System-Network-Administration-Enterprise/dp/0321919165/ref=dp_ob_title_bk +[2]: https://puppet.com/resources/whitepaper/state-of-devops-report +[3]: https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592 +[4]: https://ma.ttias.be/automating-unknown/ +[5]: https://en.wikipedia.org/wiki/Unix_philosophy +[6]: https://cloudplatformonline.com/2018-state-of-devops.html diff --git a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md new file mode 100644 index 0000000000..3bad5ba969 --- /dev/null +++ b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md @@ -0,0 +1,187 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer) +[#]: via: (https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer +====== + +Analyzing a log file is a big headache for Linux administrators as it’s capturing a lot of things. + +Most of the newbies and L1 administrators doesn’t know how to analyze this. + +If you have good knowledge to analyze a logs then you will be a legend for NIX system. + +There are many tools available in Linux to analyze the logs easily. + +GoAccess is one of the tool which allow users to analyze web server logs easily. + +We will be going to discuss in details about GoAccess tool in this article. + +### What is GoAccess? + +GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser. + +GoAccess has minimal requirements, it’s written in C and requires only ncurses. + +It will support Apache, Nginx and Lighttpd logs. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly. + +GoAccess parses the specified web log file and outputs the data to the X terminal and browser. + +GoAccess was designed to be a fast, terminal-based log analyzer. Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser. + +Terminal output is the default output, it has the capability to generate a complete, self-contained, real-time HTML report, as well as a JSON, and CSV report. + +GoAccess allows any custom log format and the following (Combined Log Format (XLF/ELF) Apache | Nginx & Common Log Format (CLF) Apache) predefined log format options are included, but not limited to. 
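For reference, here is roughly what a combined-format log line and a matching custom definition look like side by side. The exact format specifiers are documented in the GoAccess man page and the sample goaccess.conf, so treat this as a sketch of the syntax rather than a definition you must copy:

```
# A typical Apache/Nginx "combined" access log line:
# 192.0.2.10 - - [09/Jan/2019:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 5316 "https://example.com/" "Mozilla/5.0"

# A roughly equivalent custom definition in goaccess.conf
# (the predefined COMBINED option already covers this, shown here only for the syntax):
log-format %h %^[%d:%t %^] "%r" %s %b "%R" "%u"
date-format %d/%b/%Y
time-format %H:%M:%S
```

If your web server writes a non-standard format, adjusting these three directives is usually all that is needed.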
+ +### GoAccess Features + + * **`Completely Real Time:`** All the metrics are updated every 200 ms on the terminal and every second on the HTML output. + * **`Track Application Response Time:`** Track the time taken to serve the request. Extremely useful if you want to track pages that are slowing down your site. + * **`Visitors:`** Determine the amount of hits, visitors, bandwidth, and metrics for slowest running requests by the hour, or date. + * **`Metrics per Virtual Host:`** Have multiple Virtual Hosts (Server Blocks)? It features a panel that displays which virtual host is consuming most of the web server resources. + + + +### How to Install GoAccess? + +I would advise users to install GoAccess from distribution official repository with help of Package Manager. It is available in most of the distributions official repository. + +As we know, we will be getting bit outdated package for standard release distribution and rolling release distributions always include latest package. + +If you are running the OS with standard release distributions, i would suggest you to check the alternative options such as PPA or Official GoAccess maintainer repository, etc, to get a latest package. + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][1]** or **[APT Command][2]** to install GoAccess on your systems. + +``` +# apt install goaccess +``` + +To get a latest GoAccess package, use the below GoAccess official repository. + +``` +$ echo "deb https://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list +$ wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add - +$ sudo apt-get update +$ sudo apt-get install goaccess +``` + +For **`RHEL/CentOS`** systems, use **[YUM Package Manager][3]** to install GoAccess on your systems. + +``` +# yum install goaccess +``` + +For **`Fedora`** system, use **[DNF Package Manager][4]** to install GoAccess on your system. + +``` +# dnf install goaccess +``` + +For **`ArchLinux/Manjaro`** based systems, use **[Pacman Package Manager][5]** to install GoAccess on your systems. + +``` +# pacman -S goaccess +``` + +For **`openSUSE Leap`** system, use **[Zypper Package Manager][6]** to install GoAccess on your system. + +``` +# zypper install goaccess + +# zypper ar -f obs://server:http + +# zypper ref && zypper in goaccess +``` + +### How to Use GoAccess? + +After successful installation of GoAccess. Just enter the goaccess command and followed by the web server log location to view it. + +``` +# goaccess [options] /path/to/Web Server/access.log + +# goaccess /var/log/apache/2daygeek_access.log +``` + +When you execute the above command, it will ask you to select the **Log Format Configuration**. +![][8] + +I had tested this with Apache access log. The Apache log is splitted in fifteen section. The details are below. The main section shows the summary about the fifteen section. + +The below screenshots included four sessions such as Unique Visitors, Requested files, Static Requests, Not found URLs. +![][9] + +The below screenshots included four sessions such as Visitor Hostnames and IPs, Operating Systems, Browsers, Time Distribution. +![][10] + +The below screenshots included four sessions such as Referrers URLs, Referring Sites, Google’s search engine results, HTTP status codes. +![][11] + +If you would like to generate a html report, use the following format. + +Initially i got an error when i was trying to generate the html report. 
+ +``` +# goaccess 2daygeek_access.log -a > report.html + +GoAccess - version 1.3 - Nov 23 2018 11:28:19 +Config file: No config file used + +Fatal error has occurred +Error occurred at: src/parser.c - parse_log - 2764 +No time format was found on your conf file.Parsing... [0] [0/s] +``` + +It says “No time format was found on your conf file”. To overcome this issue, add the “COMBINED” log format option on it. + +``` +# goaccess -f 2daygeek_access.log --log-format=COMBINED -o 2daygeek.html +Parsing...[0,165] [50,165/s] +``` + +![][12] + +GoAccess allows you to access and analyze the real-time log filtering and parsing. + +``` +# tail -f /var/log/apache/2daygeek_access.log | goaccess - +``` + +For more details navigate to man or help page. + +``` +# man goaccess +or +# goaccess --help +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-1.png +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-2.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-3.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-4.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-5.png diff --git a/sources/tech/20190111 Build a retro gaming console with RetroPie.md b/sources/tech/20190111 Build a retro gaming console with RetroPie.md new file mode 100644 index 0000000000..eedac575c9 --- /dev/null +++ b/sources/tech/20190111 Build a retro gaming console with RetroPie.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Build a retro gaming console with RetroPie) +[#]: via: (https://opensource.com/article/19/1/retropie) +[#]: author: (Jay LaCroix https://opensource.com/users/jlacroix) + +Build a retro gaming console with RetroPie +====== +Play your favorite classic Nintendo, Sega, and Sony console games on Linux. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f) + +The most common question I get on [my YouTube channel][1] and in person is what my favorite Linux distribution is. If I limit the answer to what I run on my desktops and laptops, my answer will typically be some form of an Ubuntu-based Linux distro. My honest answer to this question may surprise many. My favorite Linux distribution is actually [RetroPie][2]. + +As passionate as I am about Linux and open source software, I'm equally passionate about classic gaming, specifically video games produced in the '90s and earlier. I spend most of my surplus income on older games, and I now have a collection of close to a thousand games for over 20 gaming consoles. In my spare time, I raid flea markets, yard sales, estate sales, and eBay buying games for various consoles, including almost every iteration made by Nintendo, Sega, and Sony. There's something about classic games that I adore, a charm that seems lost in games released nowadays. + +Unfortunately, collecting retro games has its fair share of challenges. Cartridges with memory for save files will lose their charge over time, requiring the battery to be replaced. While it's not hard to replace save batteries (if you know how), it's still time-consuming. Games on CD-ROMs are subject to disc rot, which means that even if you take good care of them, they'll still lose data over time and become unplayable. Also, sometimes it's difficult to find replacement parts for some consoles. This wouldn't be so much of an issue if the majority of classic games were available digitally, but the vast majority are never re-released on a digital platform. + +### Gaming on RetroPie + +RetroPie is a great project and an asset to retro gaming enthusiasts like me. RetroPie is a Raspbian-based distribution designed for use on the Raspberry Pi (though it is possible to get it working on other platforms, such as a PC). RetroPie boots into a graphical interface that is completely controllable via a gamepad or joystick and allows you to easily manage digital copies (ROMs) of your favorite games. You can scrape information from the internet to organize your collection better and manage lists of favorite games, and the entire interface is very user-friendly and efficient. From the interface, you can launch directly into a game, then exit the game by pressing a combination of buttons on your gamepad. You rarely need a keyboard, unless you have to enter your WiFi password or manually edit configuration files. + +I use RetroPie to host a digital copy of every physical game I own in my collection. When I purchase a game from a local store or eBay, I also download the ROM. As a collector, this is very convenient. If I don't have a particular physical console within arms reach, I can boot up RetroPie and enjoy a game quickly without having to connect cables or clean cartridge contacts. There's still something to be said about playing a game on the original hardware, but if I'm pressed for time, RetroPie is very convenient. I also don't have to worry about dead save batteries, dirty cartridge contacts, disc rot, or any of the other issues collectors like me have to regularly deal with. I simply play the game. + +Also, RetroPie allows me to be very clever and utilize my technical know-how to achieve additional functionality that's not normally available. 
For example, I have three RetroPies set up, each of them synchronizing their files between each other by leveraging [Syncthing][3], a popular open source file synchronization tool. The synchronization happens automatically, and it means I can start a game on one television and continue in the same place on another unit since the save files are included in the synchronization. To take it a step further, I also back up my save and configuration files to [Backblaze B2][4], so I'm protected if an SD card becomes defective. + +### Setting up RetroPie + +Setting up RetroPie is very easy, and if you've ever set up a Raspberry Pi Linux distribution before (such as Raspbian) the process is essentially the same—you simply download the IMG file and flash it to your SD card by utilizing another tool, such as [Etcher][5], and insert it into your RetroPie. Then plug in an AC adapter and gamepad and hook it up to your television via HDMI. Optionally, you can buy a case to protect your RetroPie from outside elements and add visual appeal. Here is a listing of things you'll need to get started: + + * Raspberry Pi board (Model 3B+ or higher recommended) + * SD card (16GB or larger recommended) + * A USB gamepad + * UL-listed micro USB power adapter, at least 2.5 amp + + + +If you choose to add the optional Raspberry Pi case, I recommend the Super NES and Super Famicom themed cases from [RetroFlag][6]. Not only do these cases look cool, but they also have fully functioning power and reset buttons. This means you can configure the reset and power buttons to directly trigger the operating system's halt process, rather than abruptly terminating power. This definitely makes for a more professional experience, but it does require the installation of a special script. The instructions are on [RetroFlag's GitHub page][7]. Be wary: there are many cases available on Amazon and eBay of varying quality. Some of them are cheap knock-offs of RetroFlag cases, and others are just a lower quality overall. In fact, even cases by RetroFlag vary in quality—I had some power-distribution issues with the NES-themed case that made for an unstable experience. If in doubt, I've found that RetroFlag's Super NES and Super Famicom themed cases work very well. + +### Adding games + +When you boot RetroPie for the first time, it will resize the filesystem to ensure you have full access to the available space on your SD card and allow you to set up your gamepad. I can't give you links for game ROMs, so I'll leave that part up to you to figure out. When you've found them, simply add them to the RetroPie SD card in the designated folder, which would be located under **/home/pi/RetroPie/roms/ **. You can use your favorite tool for transferring the ROMs to the Pi, such as [SCP][8] in a terminal, [WinSCP][9], [Samba][10], etc. Once you've added the games, you can rescan them by pressing start and choosing the option to restart EmulationStation. When it restarts, it should automatically add menu entries for the ROMs you've added. That's basically all there is to it. + +(The rescan updates EmulationStation’s game inventory. If you don’t do that, it won’t list any newly added games you copy over.) + +Regarding the games' performance, your mileage will vary depending on which consoles you're emulating. For example, I've noticed that Sega Dreamcast games barely run at all, and most Nintendo 64 games will run sluggishly with a bad framerate. Many PlayStation Portable (PSP) games also perform inconsistently. 
However, all of the 8-bit and 16-bit consoles emulate seemingly perfectly—I haven't run into a single 8-bit or 16-bit game that doesn't run well. Surprisingly, games designed for the original PlayStation run great for me, which is a great feat considering the lower-performance potential of the Raspberry Pi. + +Overall, RetroPie's performance is great, but the Raspberry Pi is not as powerful as a gaming PC, so adjust your expectations accordingly. + +### Conclusion + +RetroPie is a fantastic open source project dedicated to preserving classic games and an asset to game collectors everywhere. Having a digital copy of my physical game collection is extremely convenient. If I were to tell my childhood self that one day I could have an entire game collection on one device, I probably wouldn't believe it. But RetroPie has become a staple in my household and provides hours of fun and enjoyment. + +If you want to see the parts I mentioned as well as a quick installation overview, I have [a video][11] on [my YouTube channel][12] that goes over the process and shows off some gameplay at the end. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/retropie + +作者:[Jay LaCroix][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jlacroix +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/channel/UCxQKHvKbmSzGMvUrVtJYnUA +[2]: https://retropie.org.uk/ +[3]: https://syncthing.net/ +[4]: https://www.backblaze.com/b2/cloud-storage.html +[5]: https://www.balena.io/etcher/ +[6]: https://www.amazon.com/shop/learnlinux.tv?listId=1N9V89LEH5S8K +[7]: https://github.com/RetroFlag/retroflag-picase +[8]: https://en.wikipedia.org/wiki/Secure_copy +[9]: https://winscp.net/eng/index.php +[10]: https://www.samba.org/ +[11]: https://www.youtube.com/watch?v=D8V-KaQzsWM +[12]: http://www.youtube.com/c/LearnLinuxtv diff --git a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md new file mode 100644 index 0000000000..fbd8b9d120 --- /dev/null +++ b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md @@ -0,0 +1,170 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 5 Linux Distributions for Productivity) +[#]: via: (https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +Top 5 Linux Distributions for Productivity +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_main.jpg?itok=2IKyg_7_) + +I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated. + +That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting that underlying operating system into an efficient platform for getting work done. Quite the contrary. 
Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless. + +Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at: + + * General Productivity: For those who just need to work efficiently on multiple tasks. + + * Graphic Design: For those that work with the creation and manipulation of graphic images. + + * Development: For those who use their Linux desktops for programming. + + * Administration: For those who need a distribution to facilitate their system administration tasks. + + * Education: For those who need a desktop distribution to make them more productive in an educational environment. + + + + +Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs. + +### General Productivity + +For general productivity, you won’t get much more efficient than [Ubuntu][1]. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn’t choose Linux Mint for this category? Because Ubuntu now defaults to the GNOME desktop, it gains the added advantage of GNOME Extensions (Figure 1). + +![GNOME Clipboard][3] + +Figure 1: The GNOME Clipboard Indicator extension in action. + +[Used with permission][4] + +These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn’t just accept a vanilla GNOME desktop. Instead, they tweaked it to make it slightly more efficient and user-friendly, out of the box. And because Ubuntu contains just the right mixture of default, out-of-the-box, apps (that just work), it makes for a nearly perfect platform for productivity. + +Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn’t require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to it’s Debian base, Ubuntu makes installing third-party apps incredibly easy. + +Although Ubuntu tends to be the go-to for nearly every list of “top distributions for X,” it’s very hard to argue against this particular distribution topping the list of general productivity distributions. + +### Graphic Design + +If you’re looking to up your graphic design productivity, you can’t go wrong with [Fedora Design Suite][5]. This Fedora respin was created by the team responsible for all Fedora-related art work. Although the default selection of apps isn’t a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images. + +With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you’ll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn’t end there. 
This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, the tutorial entry in the GNOME Favorites is nothing more than a link to [this page][6]. + +![Fedora Design Suite Favorites][8] + +Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on. + +[Used with permission][4] + +Those that work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop. + +### Development + +Nearly all Linux distributions are great platforms for programmers. However, one particular distributions stands out, above the rest, as one of the most productive tools you’ll find for the task. That OS comes from [System76][9] and it’s called [Pop!_OS][10]. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making. If you need an environment that is not only perfected suited for your development work, but includes a desktop that’s sure to get out of your way, you won’t find a better option than Pop!_OS (Figure 3). + +What might surprise you (given how “young” this operating system is), is that Pop!_OS is also one of the single most stable GNOME-based platforms you’ll ever use. This means Pop!_OS isn’t just for creators and makers, but anyone looking for a solid operating system. One thing that many users will greatly appreciate with Pop!_OS, is that you can download an ISO specifically for your video hardware. If you have Intel hardware, [download][10] the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure go get a solid platform for which to create your masterpiece. + +![Pop!_OS][12] + +Figure 3: The Pop!_OS take on GNOME Overview. + +[Used with permission][4] + +Interestingly enough, with Pop!_OS, you won’t find much in the way of pre-installed development tools. You won’t find an included IDE, or many other dev tools. You can, however, find all the development tools you need in the Pop Shop. + +### Administration + +If you’re looking to find one of the most productive distributions for admin tasks, look no further than [Debian][13]. Why? Because Debian is not only incredibly reliable, it’s one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of which, because this is the distribution for which so many others are based, you can bet if there’s an admin tool you need for a task, it’s available for Debian. Of course, we’re talking about general admin tasks, which means most of the time you’ll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother making use of a desktop that’s going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, chose simplicity. + +![Debian][15] + +Figure 4: SSH’ing into a remote server on Debian. + +[Used with permission][4] + +And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, LXDE), you can be sure to have the interface that best matches your work habits. 
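If you want to try more than one of those desktops on the same Debian machine, pulling in an additional environment is a single command away. The task package names below are based on current Debian releases and may differ slightly on yours, so consider this a sketch:

```
# List the desktop-related tasks Debian knows about (names vary a little per release)
tasksel --list-tasks | grep desktop

# Add, for example, Xfce alongside whatever is already installed
sudo apt update
sudo apt install task-xfce-desktop
```

After that, the new desktop shows up as a session choice on the login screen.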
+ +### Education + +If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu. That distribution never failed to be listed in the top of education-related lists. However, that distro hasn’t been updated since it was based on Ubuntu 14.04. Fortunately, there’s a new education-based distribution ready to take that title, based on openSUSE. This spin is called [openSUSE:Education-Li-f-e][16] (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date). + +openSUSE:Education-Li-f-e includes tools like: + + * Brain Workshop - A dual n-back brain exercise + + * GCompris - An educational software suite for young children + + * gElemental - A periodic table viewer + + * iGNUit - A general purpose flash card program + + * Little Wizard - Development environment for children based on Pascal + + * Stellarium - An astronomical sky simulator + + * TuxMath - An math tutor game + + * TuxPaint - A drawing program for young children + + * TuxType - An educational typing tutor for children + + * wxMaxima - A cross platform GUI for the computer algebra system + + * Inkscape - Vector graphics program + + * GIMP - Graphic image manipulation program + + * Pencil - GUI prototyping tool + + * Hugin - Panorama photo stitching and HDR merging program + + +![Education][18] + +Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school. + +[Used with permission][4] + +Also included with openSUSE:Education-Li-f-e is the [KIWI-LTSP Server][19]. The KIWI-LTSP Server is a flexible, cost effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly aid the student to be more productive, it certainly enables educational institutions be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE [KIWI-LTSP quick start guide][20]. + +Learn more about Linux through the free ["Introduction to Linux" ][21]course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.ubuntu.com/ +[2]: /files/images/productivity1jpg +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w (GNOME Clipboard) +[4]: /licenses/category/used-permission +[5]: https://labs.fedoraproject.org/en/design-suite/ +[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials +[7]: /files/images/productivity2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH (Fedora Design Suite Favorites) +[9]: https://system76.com/ +[10]: https://system76.com/pop +[11]: /files/images/productivity3jpg-0 +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD (Pop!_OS) +[13]: https://www.debian.org/ +[14]: /files/images/productivity4jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 (Debian) +[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e +[17]: /files/images/productivity5jpg +[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT (Education) +[19]: https://en.opensuse.org/Portal:KIWI-LTSP +[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start +[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20190113 Editing Subtitles in Linux.md b/sources/tech/20190113 Editing Subtitles in Linux.md new file mode 100644 index 0000000000..1eaa6a68fd --- /dev/null +++ b/sources/tech/20190113 Editing Subtitles in Linux.md @@ -0,0 +1,168 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Editing Subtitles in Linux) +[#]: via: (https://itsfoss.com/editing-subtitles) +[#]: author: (Shirish https://itsfoss.com/author/shirish/) + +Editing Subtitles in Linux +====== + +I have been a world movie and regional movies lover for decades. Subtitles are the essential tool that have enabled me to enjoy the best movies in various languages and from various countries. + +If you enjoy watching movies with subtitles, you might have noticed that sometimes the subtitles are not synced or not correct. + +Did you know that you can edit subtitles and make them better? Let me show you some basic subtitle editing in Linux. + +![Editing subtitles in Linux][1] + +### Extracting subtitles from closed captions data + +Around 2012, 2013 I came to know of a tool called [CCEextractor.][2] As time passed, it has become one of the vital tools for me, especially if I come across a media file which has the subtitle embedded in it. + +CCExtractor analyzes video files and produces independent subtitle files from the closed captions data. + +CCExtractor is a cross-platform, free and open source tool. 
The tool has matured quite a bit from its formative years and has been part of [GSOC][3] and Google Code-in now and [then.][4] + +The tool, to put it simply, is more or less a set of scripts which work one after another in a serialized order to give you an extracted subtitle. + +You can follow the installation instructions for CCExtractor on [this page][5]. + +After installing when you want to extract subtitles from a media file, do the following: + +``` +ccextractor +``` + +The output of the command will be something like this: + +It basically scans the media file. In this case, it found that the media file is in malyalam and that the media container is an [.mkv][6] container. It extracted the subtitle file with the same name as the video file adding _eng to it. + +CCExtractor is a wonderful tool which can be used to enhance subtitles along with Subtitle Edit which I will share in the next section. + +``` +Interesting Read: There is an interesting synopsis of subtitles at [vicaps][7] which tells and shares why subtitles are important to us. It goes into quite a bit of detail of movie-making as well for those interested in such topics. +``` + +### Editing subtitles with SubtitleEditor Tool + +You probably are aware that most subtitles are in [.srt format][8] . The beautiful thing about this format is and was you could load it in your text editor and do little fixes in it. + +A srt file looks something like this when launched into a simple text-editor: + +The excerpt subtitle I have shared is from a pretty Old German Movie called [The Cabinet of Dr. Caligari (1920)][9] + +Subtitleeditor is a wonderful tool when it comes to editing subtitles. Subtitle Editor is and can be used to manipulate time duration, frame-rate of the subtitle file to be in sync with the media file, duration of breaks in-between and much more. I’ll share some of the basic subtitle editing here. + +![][10] + +First install subtitleeditor the same way you installed ccextractor, using your favorite installation method. In Debian, you can use this command: + +``` +sudo apt install subtitleeditor +``` + +When you have it installed, let’s see some of the common scenarios where you need to edit a subtitle. + +#### Manipulating Frame-rates to sync with Media file + +If you find that the subtitles are not synced with the video, one of the reasons could be the difference between the frame rates of the video file and the subtitle file. + +How do you know the frame rates of these files, then? + +To get the frame rate of a video file, you can use the mediainfo tool. You may need to install it first using your distribution’s package manager. + +Using mediainfo is simple: + +``` +$ mediainfo somefile.mkv | grep Frame + Format settings : CABAC / 4 Ref Frames + Format settings, ReFrames : 4 frames + Frame rate mode : Constant + Frame rate : 25.000 FPS + Bits/(Pixel*Frame) : 0.082 + Frame rate : 46.875 FPS (1024 SPF) +``` + +Now you can see that framerate of the video file is 25.000 FPS. The other Frame-rate we see is for the audio. While I can share why particular fps are used in Video-encoding, Audio-encoding etc. it would be a different subject matter. There is a lot of history associated with it. + +Next is to find out the frame rate of the subtitle file and this is a slightly complicated. + +Usually, most subtitles are in a zipped format. Unzipping the .zip archive along with the subtitle file which ends in something.srt. 
Along with it, there is usually also a .info file with the same name which sometime may have the frame rate of the subtitle. + +If not, then it usually is a good idea to go some site and download the subtitle from a site which has that frame rate information. For this specific German file, I will be using [Opensubtitle.org][11] + +As you can see in the link, the frame rate of the subtitle is 23.976 FPS. Quite obviously, it won’t play well with my video file with frame rate 25.000 FPS. + +In such cases, you can change the frame rate of the subtitle file using the Subtitle Editor tool: + +Select all the contents from the subtitle file by doing CTRL+A. Go to Timings -> Change Framerate and change frame rates from 23.976 fps to 25.000 fps or whatever it is that is desired. Save the changed file. + +![synchronize frame rates of subtitles in Linux][12] + +#### Changing the Starting position of a subtitle file + +Sometimes the above method may be enough, sometimes though it will not be enough. + +You might find some cases when the start of the subtitle file is different from that in the movie or a media file while the frame rate is the same. + +In such cases, do the following: + +Select all the contents from the subtitle file by doing CTRL+A. Go to Timings -> Select Move Subtitle. + +![Move subtitles using Subtitle Editor on Linux][13] + +Change the new Starting position of the subtitle file. Save the changed file. + +![Move subtitles using Subtitle Editor in Linux][14] + +If you wanna be more accurate, then use [mpv][15] to see the movie or media file and click on the timing, if you click on the timing bar which shows how much the movie or the media file has elapsed, clicking on it will also reveal the microsecond. + +I usually like to be accurate so I try to be as precise as possible. It is very difficult in MPV as human reaction time is imprecise. If I wanna be super accurate then I use something like [Audacity][16] but then that is another ball-game altogether as you can do so much more with it. That may be something to explore in a future blog post as well. + +#### Manipulating Duration + +Sometimes even doing both is not enough and you even have to shrink or add the duration to make it sync with the media file. This is one of the more tedious works as you have to individually fix the duration of each sentence. This can happen especially if you have variable frame rates in the media file (nowadays rare but you still get such files). + +In such a scenario, you may have to edit the duration manually and automation is not possible. The best way is either to fix the video file (not possible without degrading the video quality) or getting video from another source at a higher quality and then [transcode][17] it with the settings you prefer. This again, while a major undertaking I could shed some light on in some future blog post. + +### Conclusion + +What I have shared in above is more or less on improving on existing subtitle files. If you were to start a scratch you need loads of time. I haven’t shared that at all because a movie or any video material of say an hour can easily take anywhere from 4-6 hours or even more depending upon skills of the subtitler, patience, context, jargon, accents, native English speaker, translator etc. all of which makes a difference to the quality of the subtitle. + +I hope you find this interesting and from now onward, you’ll handle your subtitles slightly better. If you have any suggestions to add, please leave a comment below. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/editing-subtitles + +作者:[Shirish][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/shirish/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?resize=800%2C450&ssl=1 +[2]: https://www.ccextractor.org/ +[3]: https://itsfoss.com/best-open-source-internships/ +[4]: https://www.ccextractor.org/public:codein:google_code-in_2018 +[5]: https://github.com/CCExtractor/ccextractor/wiki/Installation +[6]: https://en.wikipedia.org/wiki/Matroska +[7]: https://www.vicaps.com/blog/history-of-silent-movies-and-subtitles/ +[8]: https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format +[9]: https://www.imdb.com/title/tt0010323/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/subtitleeditor.jpg?ssl=1 +[11]: https://www.opensubtitles.org/en/search/sublanguageid-eng/idmovie-4105 +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/subtitleeditor-frame-rate-sync.jpg?resize=800%2C450&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Move-subtitles-Caligiri.jpg?resize=800%2C450&ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/move-subtitles.jpg?ssl=1 +[15]: https://itsfoss.com/mpv-video-player/ +[16]: https://www.audacityteam.org/ +[17]: https://en.wikipedia.org/wiki/Transcoding +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md new file mode 100644 index 0000000000..29d5f63d2a --- /dev/null +++ b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md @@ -0,0 +1,514 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Desktop Setup · HookRace Blog) +[#]: via: (https://hookrace.net/blog/linux-desktop-setup/) +[#]: author: (Dennis Felsing http://felsin9.de/nnis/) + +Linux Desktop Setup +====== + + +My software setup has been surprisingly constant over the last decade, after a few years of experimentation since I initially switched to Linux in 2006. It might be interesting to look back in another 10 years and see what changed. A quick overview of what’s running as I’m writing this post: + +[![htop overview][1]][2] + +### Motivation + +My software priorities are, in no specific order: + + * Programs should run on my local system so that I’m in control of them, this excludes cloud solutions. + * Programs should run in the terminal, so that they can be used consistently from anywhere, including weak computers or a phone. + * Keyboard focused is nearly automatic by using terminal software. I prefer to use the mouse where it makes sense only, reaching for the mouse all the time during typing feels like a waste of time. Occasionally it took me an hour to notice that the mouse wasn’t even plugged in. + * Ideally use fast and efficient software, I don’t like hearing the fan and feeling the room heat up. I can also keep running older hardware for much longer, my 10 year old Thinkpad x200s is still fine for all the software I use. + * Be composable. 
I don’t want to do every step manually, instead automate more when it makes sense. This naturally favors the shell. + + + +### Operating Systems + +I had a hard start with Linux 12 years ago by removing Windows, armed with just the [Gentoo Linux][3] installation CD and a printed manual to get a functioning Linux system. It took me a few days of compiling and tinkering, but in the end I felt like I had learnt a lot. + +I haven’t looked back to Windows since then, but I switched to [Arch Linux][4] on my laptop after having the fan fail from the constant compilation stress. Later I also switched all my other computers and private servers to Arch Linux. As a rolling release distribution you get package upgrades all the time, but the most important breakages are nicely reported in the [Arch Linux News][5]. + +One annoyance though is that Arch Linux removes the old kernel modules once you upgrade it. I usually notice that once I try plugging in a USB flash drive and the kernel fails to load the relevant module. Instead you’re supposed to reboot after each kernel upgrade. There are a few [hacks][6] around to get around the problem, but I haven’t been bothered enough to actually use them. + +Similar problems happen with other programs, commonly Firefox, cron or Samba requiring a restart after an upgrade, but annoyingly not warning you that that’s the case. [SUSE][7], which I use at work, nicely warns about such cases. + +For the [DDNet][8] production servers I prefer [Debian][9] over Arch Linux, so that I have a lower chance of breakage on each upgrade. For my firewall and router I used [OpenBSD][10] for its clean system, documentation and great [pf firewall][11], but right now I don’t have a need for a separate router anymore. + +### Window Manager + +Since I started out with Gentoo I quickly noticed the huge compile time of KDE, which made it a no-go for me. I looked around for more minimal solutions, and used [Openbox][12] and [Fluxbox][13] initially. At some point I jumped on the tiling window manager train in order to be more keyboard-focused and picked up [dwm][14] and [awesome][15] close to their initial releases. + +In the end I settled on [xmonad][16] thanks to its flexibility, extendability and being written and configured in pure [Haskell][17], a great functional programming language. One example of this is that at home I run a single 40” 4K screen, but often split it up into four virtual screens, each displaying a workspace on which my windows are automatically arranged. Of course xmonad has a [module][18] for that. + +[dzen][19] and [conky][20] function as a simple enough status bar for me. My entire conky config looks like this: + +``` +out_to_console yes +update_interval 1 +total_run_times 0 + +TEXT +${downspeed eth0} ${upspeed eth0} | $cpu% ${loadavg 1} ${loadavg 2} ${loadavg 3} $mem/$memmax | ${time %F %T} +``` + +And gets piped straight into dzen2 with `conky | dzen2 -fn '-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*' -bg '#000000' -fg '#ffffff' -p -e '' -x 1000 -w 920 -xs 1 -ta r`. + +One important feature for me is to make the terminal emit a beep sound once a job is done. This is done simply by adding a `\a` character to the `PR_TITLEBAR` variable in zsh, which is shown whenever a job is done. Of course I disable the actual beep sound by blacklisting the `pcspkr` kernel module with `echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf`. Instead the sound gets turned into an urgency by urxvt’s `URxvt.urgentOnBell: true` setting. 
Then xmonad has an urgency hook to capture this and I can automatically focus the currently urgent window with a key combination. In dzen I get the urgent windowspaces displayed with a nice and bright `#ff0000`. + +The final result in all its glory on my Laptop: + +[![Laptop screenshot][21]][22] + +I hear that [i3][23] has become quite popular in the last years, but it requires more manual window alignment instead of specifying automated methods to do it. + +I realize that there are also terminal multiplexers like [tmux][24], but I still require a few graphical applications, so in the end I never used them productively. + +### Terminal Persistency + +In order to keep terminals alive I use [dtach][25], which is just the detach feature of screen. In order to make every terminal on my computer detachable I wrote a [small wrapper script][26]. This means that even if I had to restart my X server I could keep all my terminals running just fine, both local and remote. + +### Shell & Programming + +Instead of [bash][27] I use [zsh][28] as my shell for its huge number of features. + +As a terminal emulator I found [urxvt][29] to be simple enough, support Unicode and 256 colors and has great performance. Another great feature is being able to run the urxvt client and daemon separately, so that even a large number of terminals barely takes up any memory (except for the scrollback buffer). + +There is only one font that looks absolutely clean and perfect to me: [Terminus][30]. Since i’s a bitmap font everything is pixel perfect and renders extremely fast and at low CPU usage. In order to switch fonts on-demand in each terminal with `CTRL-WIN-[1-7]` my ~/.Xdefaults contains: + +``` +URxvt.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-* +dzen2.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-* + +URxvt.keysym.C-M-1: command:\033]50;-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-2: command:\033]50;-xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-3: command:\033]50;-xos4-terminus-medium-r-normal-*-18-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-4: command:\033]50;-xos4-terminus-medium-r-normal-*-22-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-5: command:\033]50;-xos4-terminus-medium-r-normal-*-24-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-6: command:\033]50;-xos4-terminus-medium-r-normal-*-28-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-7: command:\033]50;-xos4-terminus-medium-r-normal-*-32-*-*-*-*-*-*-*\007 + +URxvt.keysym.C-M-n: command:\033]10;#ffffff\007\033]11;#000000\007\033]12;#ffffff\007\033]706;#00ffff\007\033]707;#ffff00\007 +URxvt.keysym.C-M-b: command:\033]10;#000000\007\033]11;#ffffff\007\033]12;#000000\007\033]706;#0000ff\007\033]707;#ff0000\007 +``` + +For programming and writing I use [Vim][31] with syntax highlighting and [ctags][32] for indexing, as well as a few terminal windows with grep, sed and the other usual suspects for search and manipulation. This is probably not at the same level of comfort as an IDE, but allows me more automation. + +One problem with Vim is that you get so used to its key mappings that you’ll want to use them everywhere. + +[Python][33] and [Nim][34] do well as scripting languages where the shell is not powerful enough. + +### System Monitoring + +[htop][35] (look at the background of that site, it’s a live view of the server that’s hosting it) works great for getting a quick overview of what the software is currently doing. [lm_sensors][36] allows monitoring the hardware temperatures, fans and voltages. 
[powertop][37] is a great little tool by Intel to find power savings. [ncdu][38] lets you analyze disk usage interactively. + +[nmap][39], iptraf-ng, [tcpdump][40] and [Wireshark][41] are essential tools for analyzing network problems. + +There are of course many more great tools. + +### Mails & Synchronization + +On my home server I have a [fetchmail][42] daemon running for each email acccount that I have. Fetchmail just retrieves the incoming emails and invokes [procmail][43]: + +``` +#!/bin/sh +for i in /home/deen/.fetchmail/*; do + FETCHMAILHOME=$i /usr/bin/fetchmail -m 'procmail -d %T' -d 60 +done +``` + +The configuration is as simple as it could be and waits for the server to inform us of fresh emails: + +``` +poll imap.1und1.de protocol imap timeout 120 user "dennis@felsin9.de" password "XXX" folders INBOX keep ssl idle +``` + +My `.procmailrc` config contains a few rules to backup all mails and sort them into the correct directories, for example based on the mailing list id or from field in the mail header: + +``` +MAILDIR=/home/deen/shared/Maildir +LOGFILE=$HOME/.procmaillog +LOGABSTRACT=no +VERBOSE=off +FORMAIL=/usr/bin/formail +NL=" +" + +:0wc +* ! ? test -d /media/mailarchive/`date +%Y` +| mkdir -p /media/mailarchive/`date +%Y` + +# Make backups of all mail received in format YYYY/YYYY-MM +:0c +/media/mailarchive/`date +%Y`/`date +%Y-%m` + +:0 +* ^From: .*(.*@.*.kit.edu|.*@.*.uka.de|.*@.*.uni-karlsruhe.de) +$MAILDIR/.uni/ + +:0 +* ^list-Id:.*lists.kit.edu +$MAILDIR/.uni-ml/ + +[...] +``` + +To send emails I use [msmtp][44], which is also great to configure: + +``` +account default +host smtp.1und1.de +tls on +tls_trust_file /etc/ssl/certs/ca-certificates.crt +auth on +from dennis@felsin9.de +user dennis@felsin9.de +password XXX + +[...] +``` + +But so far the emails are still on the server. My documents are all stored in a directory that I synchronize between all computers using [Unison][45]. Think of Unison as a bidirectional interactive [rsync][46]. My emails are part of this documents directory and thus they end up on my desktop computers. + +This also means that while the emails reach my server immediately, I only fetch them on deman instead of getting instant notifications when an email comes in. + +From there I read the mails with [mutt][47], using the sidebar plugin to display my mail directories. The `/etc/mailcap` file is essential to display non-plaintext mails containing HTML, Word or PDF: + +``` +text/html;w3m -I %{charset} -T text/html; copiousoutput +application/msword; antiword %s; copiousoutput +application/pdf; pdftotext -layout /dev/stdin -; copiousoutput +``` + +### News & Communication + +[Newsboat][48] is a nice little RSS/Atom feed reader in the terminal. I have it running on the server in a `tach` session with about 150 feeds. Filtering feeds locally is also possible, for example: + +``` +ignore-article "https://forum.ddnet.tw/feed.php" "title =~ \"Map Testing •\" or title =~ \"Old maps •\" or title =~ \"Map Bugs •\" or title =~ \"Archive •\" or title =~ \"Waiting for mapper •\" or title =~ \"Other mods •\" or title =~ \"Fixes •\"" +``` + +I use [Irssi][49] the same way for communication via IRC. + +### Calendar + +[remind][50] is a calendar that can be used from the command line. 
Setting new reminders is done by editing the `rem` files: + +``` +# One time events +REM 2019-01-20 +90 Flight to China %b + +# Recurring Holidays +REM 1 May +90 Holiday "Tag der Arbeit" %b +REM [trigger(easterdate(year(today()))-2)] +90 Holiday "Karfreitag" %b + +# Time Change +REM Nov Sunday 1 --7 +90 Time Change (03:00 -> 02:00) %b +REM Apr Sunday 1 --7 +90 Time Change (02:00 -> 03:00) %b + +# Birthdays +FSET birthday(x) "'s " + ord(year(trigdate())-x) + " birthday is %b" +REM 16 Apr +90 MSG Andreas[birthday(1994)] + +# Sun +SET $LatDeg 49 +SET $LatMin 19 +SET $LatSec 49 +SET $LongDeg -8 +SET $LongMin -40 +SET $LongSec -24 + +MSG Sun from [sunrise(trigdate())] to [sunset(trigdate())] +[...] +``` + +Unfortunately there is no Chinese Lunar calendar function in remind yet, so Chinese holidays can’t be calculated easily. + +I use two aliases for remind: + +``` +rem -m -b1 -q -g +``` + +to see a list of the next events in chronological order and + +``` +rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less +``` + +to show a calendar fitting just the width of my terminal: + +![remcal][51] + +### Dictionary + +[rdictcc][52] is a little known dictionary tool that uses the excellent dictionary files from [dict.cc][53] and turns them into a local database: + +``` +$ rdictcc rasch +====================[ A => B ]==================== +rasch: + - apace + - brisk [speedy] + - cursory + - in a timely manner + - quick + - quickly + - rapid + - rapidly + - sharpish [Br.] [coll.] + - speedily + - speedy + - swift + - swiftly +rasch [gehen]: + - smartly [quickly] +Rasch {n} [Zittergras-Segge]: + - Alpine grass [Carex brizoides] + - quaking grass sedge [Carex brizoides] +Rasch {m} [regional] [Putzrasch]: + - scouring pad +====================[ B => A ]==================== +Rasch model: + - Rasch-Modell {n} +``` + +### Writing and Reading + +I have a simple todo file containing my tasks, that is basically always sitting open in a Vim session. For work I also use the todo file as a “done” file so that I can later check what tasks I finished on each day. + +For writing documents, letters and presentations I use [LaTeX][54] for its superior typesetting. A simple letter in German format can be set like this for example: + +``` +\documentclass[paper = a4, fromalign = right]{scrlttr2} +\usepackage{german} +\usepackage{eurosym} +\usepackage[utf8]{inputenc} +\setlength{\parskip}{6pt} +\setlength{\parindent}{0pt} + +\setkomavar{fromname}{Dennis Felsing} +\setkomavar{fromaddress}{Meine Str. 1\\69181 Leimen} +\setkomavar{subject}{Titel} + +\setkomavar*{enclseparator}{Anlagen} + +\makeatletter +\@setplength{refvpos}{89mm} +\makeatother + +\begin{document} +\begin{letter} {Herr Soundso\\Deine Str. 2\\69121 Heidelberg} +\opening{Sehr geehrter Herr Soundso,} + +Sie haben bei mir seit dem Bla Bla Bla. + +Ich fordere Sie hiermit zu Bla Bla Bla auf. + +\closing{Mit freundlichen Grüßen} + +\end{letter} +\end{document} +``` + +Further example documents and presentations can be found over at [my private site][55]. + +To read PDFs [Zathura][56] is fast, has Vim-like controls and even supports two different PDF backends: Poppler and MuPDF. [Evince][57] on the other hand is more full-featured for the cases where I encounter documents that Zathura doesn’t like. + +### Graphical Editing + +[GIMP][58] and [Inkscape][59] are easy choices for photo editing and interactive vector graphics respectively. 
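Inkscape also has a headless export mode, which is handy when an SVG needs to end up as a PNG from a script. The exact flags depend on the Inkscape version, so check `inkscape --help` on your system; with the 0.92 series it looks roughly like this:

```
# Export an SVG to a 512 pixel wide PNG without opening the GUI
inkscape -z -e logo.png -w 512 logo.svg
```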
+ +In some cases [Imagemagick][60] is good enough though and can be used straight from the command line and thus automated to edit images. Similarly [Graphviz][61] and [TikZ][62] can be used to draw graphs and other diagrams. + +### Web Browsing + +As a web browser I’ve always used [Firefox][63] for its extensibility and low resource usage compared to Chrome. + +Unfortunately the [Pentadactyl][64] extension development stopped after Firefox switched to Chrome-style extensions entirely, so I don’t have satisfying Vim-like controls in my browser anymore. + +### Media Players + +[mpv][65] with hardware decoding allows watching videos at 5% CPU load using the `vo=gpu` and `hwdec=vaapi` config settings. `audio-channels=2` in mpv seems to give me clearer downmixing to my stereo speakers / headphones than what PulseAudio does by default. A great little feature is exiting with `Shift-Q` instead of just `Q` to save the playback location. When watching with someone with another native tongue you can use `--secondary-sid=` to show two subtitles at once, the primary at the bottom, the secondary at the top of the screen + +My wirelss mouse can easily be made into a remote control with mpv with a small `~/.config/mpv/input.conf`: + +``` +MOUSE_BTN5 run "mixer" "pcm" "-2" +MOUSE_BTN6 run "mixer" "pcm" "+2" +MOUSE_BTN1 cycle sub-visibility +MOUSE_BTN7 add chapter -1 +MOUSE_BTN8 add chapter 1 +``` + +[youtube-dl][66] works great for watching videos hosted online, best quality can be achieved with `-f bestvideo+bestaudio/best --all-subs --embed-subs`. + +As a music player [MOC][67] hasn’t been actively developed for a while, but it’s still a simple player that plays every format conceivable, including the strangest Chiptune formats. In the AUR there is a [patch][68] adding PulseAudio support as well. Even with the CPU clocked down to 800 MHz MOC barely uses 1-2% of a single CPU core. + +![moc][69] + +My music collection sits on my home server so that I can access it from anywhere. It is mounted using [SSHFS][70] and automount in the `/etc/fstab/`: + +``` +root@server:/media/media /mnt/media fuse.sshfs noauto,x-systemd.automount,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0 +``` + +### Cross-Platform Building + +Linux is great to build packages for any major operating system except Linux itself! In the beginning I used [QEMU][71] to with an old Debian, Windows and Mac OS X VM to build for these platforms. + +Nowadays I switched to using chroot for the old Debian distribution (for maximum Linux compatibility), [MinGW][72] to cross-compile for Windows and [OSXCross][73] to cross-compile for Mac OS X. + +The script used to [build DDNet][74] as well as the [instructions for updating library builds][75] are based on this. + +### Backups + +As usual, we nearly forgot about backups. Even if this is the last chapter, it should not be an afterthought. + +I wrote [rrb][76] (reverse rsync backup) 10 years ago to wrap rsync so that I only need to give the backup server root SSH rights to the computers that it is backing up. Surprisingly rrb needed 0 changes in the last 10 years, even though I kept using it the entire time. + +The backups are stored straight on the filesystem. Incremental backups are implemented using hard links (`--link-dest`). 
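To make the hard-link mechanism concrete, an incremental snapshot of this kind can be expressed with plain rsync roughly as follows — the paths and snapshot names are purely illustrative and are not taken from rrb itself:

```
# Files unchanged since the previous snapshot are hard-linked against it
# instead of being copied again, so every snapshot appears as a full tree
# while only new or changed files consume additional space.
rsync -a --delete \
      --link-dest=/backups/client/2019-01-14 \
      client:/home/ \
      /backups/client/2019-01-15/
```
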
A simple [config][77] defines how long backups are kept, which defaults to: + +``` +KEEP_RULES=( \ + 7 7 \ # One backup a day for the last 7 days + 31 8 \ # 8 more backups for the last month + 365 11 \ # 11 more backups for the last year +1825 4 \ # 4 more backups for the last 5 years +) +``` + +Since some of my computers don’t have a static IP / DNS entry and I still want to back them up using rrb I use a reverse SSH tunnel (as a systemd service) for them: + +``` +[Unit] +Description=Reverse SSH Tunnel +After=network.target + +[Service] +ExecStart=/usr/bin/ssh -N -R 27276:localhost:22 -o "ExitOnForwardFailure yes" server +KillMode=process +Restart=always + +[Install] +WantedBy=multi-user.target +``` + +Now the server can reach the client through `ssh -p 27276 localhost` while the tunnel is running to perform the backup, or in `.ssh/config` format: + +``` +Host cr-remote + HostName localhost + Port 27276 +``` + +While talking about SSH hacks, sometimes a server is not easily reachable thanks to some bad routing. In that case you can route the SSH connection through another server to get better routing, in this case going through the USA to reach my Chinese server which had not been reliably reachable from Germany for a few weeks: + +``` +Host chn.ddnet.tw + ProxyCommand ssh -q usa.ddnet.tw nc -q0 chn.ddnet.tw 22 + Port 22 +``` + +### Final Remarks + +Thanks for reading my random collection of tools. I probably forgot many programs that I use so naturally every day that I don’t even think about them anymore. Let’s see how stable my software setup stays in the next years. If you have any questions, feel free to get in touch with me at [dennis@felsin9.de][78]. + +Comments on [Hacker News][79]. + +-------------------------------------------------------------------------------- + +via: https://hookrace.net/blog/linux-desktop-setup/ + +作者:[Dennis Felsing][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://felsin9.de/nnis/ +[b]: https://github.com/lujun9972 +[1]: https://hookrace.net/public/linux-desktop/htop_small.png +[2]: https://hookrace.net/public/linux-desktop/htop.png +[3]: https://gentoo.org/ +[4]: https://www.archlinux.org/ +[5]: https://www.archlinux.org/news/ +[6]: https://www.reddit.com/r/archlinux/comments/4zrsc3/keep_your_system_fully_functional_after_a_kernel/ +[7]: https://www.suse.com/ +[8]: https://ddnet.tw/ +[9]: https://www.debian.org/ +[10]: https://www.openbsd.org/ +[11]: https://www.openbsd.org/faq/pf/ +[12]: http://openbox.org/wiki/Main_Page +[13]: http://fluxbox.org/ +[14]: https://dwm.suckless.org/ +[15]: https://awesomewm.org/ +[16]: https://xmonad.org/ +[17]: https://www.haskell.org/ +[18]: http://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Layout-LayoutScreens.html +[19]: http://robm.github.io/dzen/ +[20]: https://github.com/brndnmtthws/conky +[21]: https://hookrace.net/public/linux-desktop/laptop_small.png +[22]: https://hookrace.net/public/linux-desktop/laptop.png +[23]: https://i3wm.org/ +[24]: https://github.com/tmux/tmux/wiki +[25]: http://dtach.sourceforge.net/ +[26]: https://github.com/def-/tach/blob/master/tach +[27]: https://www.gnu.org/software/bash/ +[28]: http://www.zsh.org/ +[29]: http://software.schmorp.de/pkg/rxvt-unicode.html +[30]: http://terminus-font.sourceforge.net/ +[31]: https://www.vim.org/ +[32]: http://ctags.sourceforge.net/ +[33]: https://www.python.org/ +[34]: 
https://nim-lang.org/ +[35]: https://hisham.hm/htop/ +[36]: http://lm-sensors.org/ +[37]: https://01.org/powertop/ +[38]: https://dev.yorhel.nl/ncdu +[39]: https://nmap.org/ +[40]: https://www.tcpdump.org/ +[41]: https://www.wireshark.org/ +[42]: http://www.fetchmail.info/ +[43]: http://www.procmail.org/ +[44]: https://marlam.de/msmtp/ +[45]: https://www.cis.upenn.edu/~bcpierce/unison/ +[46]: https://rsync.samba.org/ +[47]: http://www.mutt.org/ +[48]: https://newsboat.org/ +[49]: https://irssi.org/ +[50]: https://www.roaringpenguin.com/products/remind +[51]: https://hookrace.net/public/linux-desktop/remcal.png +[52]: https://github.com/tsdh/rdictcc +[53]: https://www.dict.cc/ +[54]: https://www.latex-project.org/ +[55]: http://felsin9.de/nnis/research/ +[56]: https://pwmt.org/projects/zathura/ +[57]: https://wiki.gnome.org/Apps/Evince +[58]: https://www.gimp.org/ +[59]: https://inkscape.org/ +[60]: https://imagemagick.org/Usage/ +[61]: https://www.graphviz.org/ +[62]: https://sourceforge.net/projects/pgf/ +[63]: https://www.mozilla.org/en-US/firefox/new/ +[64]: https://github.com/5digits/dactyl +[65]: https://mpv.io/ +[66]: https://rg3.github.io/youtube-dl/ +[67]: http://moc.daper.net/ +[68]: https://aur.archlinux.org/packages/moc-pulse/ +[69]: https://hookrace.net/public/linux-desktop/moc.png +[70]: https://github.com/libfuse/sshfs +[71]: https://www.qemu.org/ +[72]: http://www.mingw.org/ +[73]: https://github.com/tpoechtrager/osxcross +[74]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-release.sh +[75]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-lib-update.sh +[76]: https://github.com/def-/rrb/blob/master/rrb +[77]: https://github.com/def-/rrb/blob/master/config.example +[78]: mailto:dennis@felsin9.de +[79]: https://news.ycombinator.com/item?id=18979731 diff --git a/sources/tech/20190116 Best Audio Editors For Linux.md b/sources/tech/20190116 Best Audio Editors For Linux.md new file mode 100644 index 0000000000..d588c886e2 --- /dev/null +++ b/sources/tech/20190116 Best Audio Editors For Linux.md @@ -0,0 +1,156 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Best Audio Editors For Linux) +[#]: via: (https://itsfoss.com/best-audio-editors-linux) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Best Audio Editors For Linux +====== + +You’ve got a lot of choices when it comes to audio editors for Linux. No matter whether you are a professional music producer or just learning to create awesome music, the audio editors will always come in handy. + +Well, for professional-grade usage, a [DAW][1] (Digital Audio Workstation) is always recommended. However, not everyone needs all the functionalities, so you should know about some of the most simple audio editors as well. + +In this article, we will talk about a couple of DAWs and basic audio editors which are available as **free and open source** solutions for Linux and (probably) for other operating systems. + +### Top Audio Editors for Linux + +![Best audio editors and DAW for Linux][2] + +We will not be focusing on all the functionalities that DAWs offer – but the basic audio editing capabilities. You may still consider this as the list of best DAW for Linux. + +**Installation instruction:** You will find all the mentioned audio editors or DAWs in your AppCenter or Software center. In case, you do not find them listed, please head to their official website for more information. + +#### 1\. 
Audacity + +![audacity audio editor][3] + +Audacity is one of the most basic yet a capable audio editor available for Linux. It is a free and open-source cross-platform tool. A lot of you must be already knowing about it. + +It has improved a lot when compared to the time when it started trending. I do recall that I utilized it to “try” making karaokes by removing the voice from an audio file. Well, you can still do it – but it depends. + +**Features:** + +It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments. + + * Live audio recording through a microphone or a mixer + * Export/Import capability supporting multiple formats and multiple files at the same time + * Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins + * Easy editing with cut, paste, delete and copy functions. + * Spectogram view mode for analyzing frequencies + + + +#### 2\. LMMS + +![][4] + +LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features. + +You can mix sounds, arrange them, or create them using VST instruments. It does support them. Also, it comes baked in with some samples, presets, VST Instruments, and effects to get started. In addition, you also get a spectrum analyzer for some advanced audio editing. + +**Features:** + + * Note playback via MIDI + * VST Instrument support + * Native multi-sample support + * Built-in compressor, limiter, delay, reverb, distortion and bass enhancer + + + +#### 3\. Ardour + +![Ardour audio editor][5] + +Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks. The multichannel tracks can also be routed to different mixer tapes for the ease of editing and recording. + +You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well. + +**Features:** + + * Non-linear editing + * Vertical window stacking for easy navigation + * Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing + + + +#### 4\. Cecilia + +![cecilia audio editor][6] + +Cecilia is not an ordinary audio editor application. It is meant to be used by sound designers or if you are just in the process of becoming one. It is technically an audio signal processing environment. It lets you create ear-bending sound out of them. + +You get in-build modules and plugins for sound effects and synthesis. It is tailored for a specific use – if that is what you were looking for – look no further! + +**Features:** + + * Modules to achieve more (UltimateGrainer – A state-of-the-art granulation processing, RandomAccumulator – Variable speed recording accumulator, +UpDistoRes – Distortion with upsampling and resonant lowpass filter) + * Automatic Saving of modulations + + + +#### 5\. Mixxx + +![Mixxx audio DJ ][7] + +If you want to mix and record something while being able to have a virtual DJ tool, [Mixxx][8] would be a perfect tool. You get to know the BPM, key, and utilize the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux! + +It supports custom DJ equipment as well. So, if you have one or a MIDI – you can record your live mixes using this tool. 
+ +**Features** + + * Broadcast and record DJ Mixes of your song + * Ability to connect your equipment and perform live + * Key detection and BPM detection + + + +#### 6\. Rosegarden + +![rosegarden audio editor][9] + +Rosegarden is yet another impressive audio editor for Linux which is free and open source. It is neither a fully featured DAW nor a basic audio editing tool. It is a mixture of both with some scaled down functionalities. + +I wouldn’t recommend this for professionals but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed. + +**Features:** + + * Music notation editing + * Recording, Mixing, and samples + + + +### Wrapping Up + +These are some of the best audio editors you could find out there for Linux. No matter whether you need a DAW, a cut-paste editing tool, or a basic mixing/recording audio editor, the above-mentioned tools should help you out. + +Did we miss any of your favorite? Let us know about it in the comments below. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-audio-editors-linux + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Digital_audio_workstation +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/linux-audio-editors-800x450.jpeg?resize=800%2C450&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/audacity-audio-editor.jpg?fit=800%2C591&ssl=1 +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/lmms-daw.jpg?fit=800%2C472&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/ardour-audio-editor.jpg?fit=800%2C639&ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/cecilia.jpg?fit=800%2C510&ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/mixxx.jpg?fit=800%2C486&ssl=1 +[8]: https://itsfoss.com/dj-mixxx-2/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/rosegarden.jpg?fit=800%2C391&ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/linux-audio-editors.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md b/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md new file mode 100644 index 0000000000..bdaae74b43 --- /dev/null +++ b/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GameHub – An Unified Library To Put All Games Under One Roof) +[#]: via: (https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +GameHub – An Unified Library To Put All Games Under One Roof +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub-720x340.png) + +**GameHub** is an unified gaming library that allows you to view, install, run and remove games on GNU/Linux operating system. It supports both native and non-native games from various sources including Steam, GOG, Humble Bundle, and Humble Trove etc. 
The non-native games are supported by [Wine][1], Proton, [DOSBox][2], ScummVM and RetroArch. It also allows you to add custom emulators and download bonus content and DLCs for GOG games. Simply put, Gamehub is a frontend for Steam/GoG/Humblebundle/Retroarch. It can use steam technologies like Proton to run windows gog games. GameHub is free, open source gaming platform written in **Vala** using **GTK+3**. If you’re looking for a way to manage all games under one roof, GameHub might be a good choice. + +### Installing GameHub + +The author of GameHub has designed it specifically for elementary OS. So, you can install it on Debian, Ubuntu, elementary OS and other Ubuntu-derivatives using GameHub PPA. + +``` +$ sudo apt install --no-install-recommends software-properties-common +$ sudo add-apt-repository ppa:tkashkin/gamehub +$ sudo apt update +$ sudo apt install com.github.tkashkin.gamehub +``` + +GameHub is available in [**AUR**][3], so just install it on Arch Linux and its variants using any AUR helpers, for example [**YaY**][4]. + +``` +$ yay -S gamehub-git +``` + +It is also available as **AppImage** and **Flatpak** packages in [**releases page**][5]. + +If you prefer AppImage package, do the following: + +``` +$ wget https://github.com/tkashkin/GameHub/releases/download/0.12.1-91-dev/GameHub-bionic-0.12.1-91-dev-cd55bb5-x86_64.AppImage -O gamehub +``` + +Make it executable: + +``` +$ chmod +x gamehub +``` + +And, run GameHub using command: + +``` +$ ./gamehub +``` + +If you want to use Flatpak installer, run the following commands one by one. + +``` +$ git clone https://github.com/tkashkin/GameHub.git +$ cd GameHub +$ scripts/build.sh build_flatpak +``` + +### Put All Games Under One Roof + +Launch GameHub from menu or application launcher. At first launch, you will see the following welcome screen. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub1.png) + +As you can see in the above screenshot, you need to login to the given sources namely Steam, GoG or Humble Bundle. If you don’t have Steam client on your Linux system, you need to install it first to access your steam account. For GoG and Humble bundle sources, click on the icon to log in to the respective source. + +Once you logged in to your account(s), all games from the all sources can be visible on GameHub dashboard. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub2.png) + +You will see list of logged-in sources on the top left corner. To view the games from each source, just click on the respective icon. + +You can also switch between list view or grid view, sort the games by applying the filters and search games from the list in GameHub dashboard. + +#### Installing a game + +Click on the game of your choice from the list and click Install button. If the game is non-native, GameHub will automatically choose the compatibility layer (E.g Wine) that suits to run the game and install the selected game. As you see in the below screenshot, Indiana Jones game is not available for Linux platform. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub3-1.png) + +If it is a native game (i.e supports Linux), simply press the Install button. + +![][7] + +If you don’t want to install the game, just hit the **Download** button to save it in your games directory. It is also possible to add locally installed games to GameHub using the **Import** option. 
+ +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub5.png) + +#### GameHub Settings + +GameHub Settings window can be launched by clicking on the four straight lines on top right corner. + +From Settings section, we can enable, disable and set various settings such as, + + * Switch between light/dark themes. + * Use Symbolic icons instead of colored icons for games. + * Switch to compact list. + * Enable/disable merging games from different sources. + * Enable/disable compatibility layers. + * Set games collection directory. The default directory for storing the collection is **$HOME/Games/_Collection**. + * Set games directories for each source. + * Add/remove emulators, + * And many. + + + +For more details, refer the project links given at the end of this guide. + +**Related read:** + +And, that’s all for now. Hope this helps. I will be soon here with another guide. Until then, stay tuned with OSTechNix. + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/run-windows-games-softwares-ubuntu-16-04/ +[2]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ +[3]: https://aur.archlinux.org/packages/gamehub-git/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: https://github.com/tkashkin/GameHub/releases +[6]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[7]: http://www.ostechnix.com/wp-content/uploads/2019/01/gamehub4.png diff --git a/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md new file mode 100644 index 0000000000..fb98f78b06 --- /dev/null +++ b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md @@ -0,0 +1,324 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Zipping files on Linux: the many variations and how to use them) +[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Zipping files on Linux: the many variations and how to use them +====== +![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg) + +Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we’re going to look at standard zipping and unzipping as well as some other interesting zipping options. + +### The basic zip command + +First, let’s look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip** , but there are a couple important differences. For one thing, the gzip command is used only for compressing a single file where zip can both compress files and join them together into an archive. 
For another, the gzip command zips “in place”. In other words, it leaves a compressed file — not the original file alongside the compressed copy. Here's an example of gzip at work: + +``` +$ gzip onefile +$ ls -l +-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz +``` + +And here's zip. Notice how this command requires that a name be provided for the zipped archive where gzip simply uses the original file name and adds the .gz extension. + +``` +$ zip twofiles.zip file* + adding: file1 (deflated 82%) + adding: file2 (deflated 82%) +$ ls -l +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip +``` + +Notice also that the original files are still sitting there. + +The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable. + +``` +$ zip mybin.zip ~/bin/* + adding: bin/1 (deflated 26%) + adding: bin/append (deflated 64%) + adding: bin/BoD_meeting (deflated 18%) + adding: bin/cpuhog1 (deflated 14%) + adding: bin/cpuhog2 (stored 0%) + adding: bin/ff (deflated 32%) + adding: bin/file.0 (deflated 1%) + adding: bin/loop (deflated 14%) + adding: bin/notes (deflated 23%) + adding: bin/patterns (stored 0%) + adding: bin/runme (stored 0%) + adding: bin/tryme (deflated 13%) + adding: bin/tt (deflated 6%) +``` + +### The unzip command + +The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file. + +``` +$ unzip twofiles.zip +Archive: twofiles.zip + inflating: file1 + inflating: file2 +$ ls -l +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip +``` + +### The zipcloak command + +The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original. + +``` +$ zipcloak twofiles.zip +Enter password: +Verify password: +encrypting: file1 +encrypting: file2 +$ ls -l +total 204 +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21313 Jan 15 13:46 twofiles.zip <== slightly larger than + unencrypted version +``` + +Keep in mind that the original files are still sitting there unencrypted. + +### The zipdetails command + +The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files. 
+ +``` +$ zipdetails twofiles.zip + +0000 LOCAL HEADER #1 04034B50 +0004 Extract Zip Spec 14 '2.0' +0005 Extract OS 00 'MS-DOS' +0006 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +0008 Compression Method 0008 'Deflated' +000A Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +000E CRC F1B115BD +0012 Compressed Length 00002904 +0016 Uncompressed Length 0000E2A5 +001A Filename Length 0005 +001C Extra Length 001C +001E Filename 'file1' +0023 Extra ID #0001 5455 'UT: Extended Timestamp' +0025 Length 0009 +0027 Flags '03 mod access' +0028 Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +002C Access Time 5C3E27BB 'Tue Jan 15 13:34:35 2019' +0030 Extra ID #0002 7875 'ux: Unix Extra Type 3' +0032 Length 000B +0034 Version 01 +0035 UID Size 04 +0036 UID 000003E8 +003A GID Size 04 +003B GID 000003E8 +003F PAYLOAD + +2943 LOCAL HEADER #2 04034B50 +2947 Extract Zip Spec 14 '2.0' +2948 Extract OS 00 'MS-DOS' +2949 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +294B Compression Method 0008 'Deflated' +294D Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019' +2951 CRC EC214569 +2955 Compressed Length 00002913 +2959 Uncompressed Length 0000E635 +295D Filename Length 0005 +295F Extra Length 001C +2961 Filename 'file2' +2966 Extra ID #0001 5455 'UT: Extended Timestamp' +2968 Length 0009 +296A Flags '03 mod access' +296B Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019' +296F Access Time 5C3E27BD 'Tue Jan 15 13:34:37 2019' +2973 Extra ID #0002 7875 'ux: Unix Extra Type 3' +2975 Length 000B +2977 Version 01 +2978 UID Size 04 +2979 UID 000003E8 +297D GID Size 04 +297E GID 000003E8 +2982 PAYLOAD + +5295 CENTRAL HEADER #1 02014B50 +5299 Created Zip Spec 1E '3.0' +529A Created OS 03 'Unix' +529B Extract Zip Spec 14 '2.0' +529C Extract OS 00 'MS-DOS' +529D General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +529F Compression Method 0008 'Deflated' +52A1 Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +52A5 CRC F1B115BD +52A9 Compressed Length 00002904 +52AD Uncompressed Length 0000E2A5 +52B1 Filename Length 0005 +52B3 Extra Length 0018 +52B5 Comment Length 0000 +52B7 Disk Start 0000 +52B9 Int File Attributes 0001 + [Bit 0] 1 Text Data +52BB Ext File Attributes 81B40000 +52BF Local Header Offset 00000000 +52C3 Filename 'file1' +52C8 Extra ID #0001 5455 'UT: Extended Timestamp' +52CA Length 0005 +52CC Flags '03 mod access' +52CD Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +52D1 Extra ID #0002 7875 'ux: Unix Extra Type 3' +52D3 Length 000B +52D5 Version 01 +52D6 UID Size 04 +52D7 UID 000003E8 +52DB GID Size 04 +52DC GID 000003E8 + +52E0 CENTRAL HEADER #2 02014B50 +52E4 Created Zip Spec 1E '3.0' +52E5 Created OS 03 'Unix' +52E6 Extract Zip Spec 14 '2.0' +52E7 Extract OS 00 'MS-DOS' +52E8 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +52EA Compression Method 0008 'Deflated' +52EC Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019' +52F0 CRC EC214569 +52F4 Compressed Length 00002913 +52F8 Uncompressed Length 0000E635 +52FC Filename Length 0005 +52FE Extra Length 0018 +5300 Comment Length 0000 +5302 Disk Start 0000 +5304 Int File Attributes 0001 + [Bit 0] 1 Text Data +5306 Ext File Attributes 81B40000 +530A Local Header Offset 00002943 +530E Filename 'file2' +5313 Extra ID #0001 5455 'UT: Extended Timestamp' +5315 Length 0005 +5317 Flags '03 mod access' +5318 Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019' +531C Extra ID #0002 7875 'ux: Unix Extra Type 3' +531E Length 000B 
+5320 Version 01 +5321 UID Size 04 +5322 UID 000003E8 +5326 GID Size 04 +5327 GID 000003E8 + +532B END CENTRAL HEADER 06054B50 +532F Number of this disk 0000 +5331 Central Dir Disk no 0000 +5333 Entries in this disk 0002 +5335 Total Entries 0002 +5337 Size of Central Dir 00000096 +533B Offset to Central Dir 00005295 +533F Comment Length 0000 +Done +``` + +### The zipgrep command + +The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below. + +``` +$ zipgrep hazard twofiles.zip file1 +[twofiles.zip] file1 password: +Certain pesticides should be banned since they are hazardous to the environment. +``` + +### The zipinfo command + +The **zipinfo** command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions. + +``` +$ zipinfo twofiles.zip +Archive: twofiles.zip +Zip file size: 21313 bytes, number of entries: 2 +-rw-rw-r-- 3.0 unx 58021 Tx defN 19-Jan-15 13:25 file1 +-rw-rw-r-- 3.0 unx 58933 Tx defN 19-Jan-15 13:34 file2 +2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1% +``` + +### The zipnote command + +The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this: + +``` +$ zipnote twofiles.zip +@ file1 +@ (comment above this line) +@ file2 +@ (comment above this line) +@ (zip file comment below this line) +``` + +If you want to add comments, write the output from the zipnote command to a file: + +``` +$ zipnote twofiles.zip > comments +``` + +Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one: + +``` +$ zipnote -w twofiles.zip < comments +``` + +### The zipsplit command + +The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accomodate the largest included file. + +``` +$ zipsplit -n 12000 twofiles.zip +2 zip files will be made (100% efficiency) +creating: twofile1.zip +creating: twofile2.zip +$ ls twofile*.zip +-rw-rw-r-- 1 shs shs 10697 Jan 15 14:52 twofile1.zip +-rw-rw-r-- 1 shs shs 10702 Jan 15 14:52 twofile2.zip +-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip +``` + +Notice how the extracted files are sequentially named "twofile1" and "twofile2". + +### Wrap-up + +The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives. + +**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]** + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md b/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md new file mode 100644 index 0000000000..b0aa45d618 --- /dev/null +++ b/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md @@ -0,0 +1,239 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Pyvoc – A Command line Dictionary And Vocabulary Building Tool) +[#]: via: (https://www.ostechnix.com/pyvoc-a-command-line-dictionary-and-vocabulary-building-tool/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Pyvoc – A Command line Dictionary And Vocabulary Building Tool +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc-720x340.jpg) + +Howdy! I have a good news for non-native English speakers. Now, you can improve your English vocabulary and find the meaning of English words, right from your Terminal. Say hello to **Pyvoc** , a cross-platform, open source, command line dictionary and vocabulary building tool written in **Python** programming language. Using this tool, you can brush up some English words meanings, test or improve your vocabulary skill or simply use it as a CLI dictionary on Unix-like operating systems. + +### Installing Pyvoc + +Since Pyvoc is written using Python language, you can install it using [**Pip3**][1] package manager. + +``` +$ pip3 install pyvoc +``` + +Once installed, run the following command to automatically create necessary configuration files in your $HOME directory. + +``` +$ pyvoc word +``` + +Sample output: + +``` +|Creating necessary config files +/getting api keys. please handle with care! +| + +word +Noun: single meaningful element of speech or writing +example: I don't like the word ‘unofficial’ + +Verb: express something spoken or written +example: he words his request in a particularly ironic way + +Interjection: used to express agreement or affirmation +example: Word, that's a good record, man +``` + +Done! Let us go ahead and brush the English skills. + +### Use Pyvoc as a command line Dictionary tool + +Pyvoc fetches the word meaning from **Oxford Dictionary API**. + +Let us say, you want to find the meaning of a word **‘digression’**. To do so, run: + +``` +$ pyvoc digression +``` + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc1.png) + +See? Pyvoc not only displays the meaning of word **‘digression’** , but also an example sentence which shows how to use that word in practical. + +Let us see an another example. 
+ +``` +$ pyvoc subterfuge +| + +subterfuge +Noun: deceit used in order to achieve one's goal +example: he had to use subterfuge and bluff on many occasions +``` + +It also shows the word classes as well. As you already know, English has four major **word classes** : + + 1. Nouns, + + 2. Verbs, + + 3. Adjectives, + + 4. Adverbs. + + + + +Take a look at the following example. + +``` +$ pyvoc welcome + / + +welcome +Noun: instance or manner of greeting someone +example: you will receive a warm welcome + +Interjection: used to greet someone in polite or friendly way +example: welcome to the Wildlife Park + +Verb: greet someone arriving in polite or friendly way +example: hotels should welcome guests in their own language + +Adjective: gladly received +example: I'm pleased to see you, lad—you're welcome +``` + +As you see in the above output, the word ‘welcome’ can be used as a verb, noun, adjective and interjection. Pyvoc has given example for each class. + +If you misspell a word, it will inform you to check the spelling of the given word. + +``` +$ pyvoc wlecome +\ +No definition found. Please check the spelling!! +``` + +Useful, isn’t it? + +### Create vocabulary groups + +A vocabulary group is nothing but a collection words added by the user. You can later revise or take quiz from these groups. 100 groups of 60 words are **reserved** for the user. + +To add a word (E.g **sporadic** ) to a group, just run: + +``` +$ pyvoc sporadic -a +- + +sporadic +Adjective: occurring at irregular intervals or only in few places +example: sporadic fighting broke out + + +writing to vocabulary group... +word added to group number 51 +``` + +As you can see, I didn’t provide any group number and pyvoc displayed the meaning of given word and automatically added that word to group number **51**. If you don’t provide the group number, Pyvoc will **incrementally add words** to groups **51-100**. + +Pyvoc also allows you to specify a group number if you want to. You can specify a group from 1-50 using **-g** option. For example, I am going to add a word to Vocabulary group 20 using the following command. + +``` +$ pyvoc discrete -a -g 20 + / + +discrete +Adjective: individually separate and distinct +example: speech sounds are produced as a continuous sound signal rather + than discrete units + +creating group Number 20... +writing to vocabulary group... +word added to group number 20 +``` + +See? The above command displays the meaning of ‘discrete’ word and adds it to the vocabulary group 20. If the group doesn’t exists, Pyvoc will create it and add the word. + +By default, Pyvoc includes three predefined vocabulary groups (101, 102, and 103). These custom groups has 800 words of each. All words in these groups are taken from **GRE** and **SAT** preparation websites. + +To view the user-generated groups, simply run: + +``` +$ pyvoc word -l + - + +word +Noun: single meaningful element of speech or writing +example: I don't like the word ‘unofficial’ + +Verb: express something spoken or written +example: he words his request in a particularly ironic way + +Interjection: used to express agreement or affirmation +example: Word, that's a good record, man + + +USER GROUPS +Group no. No. of words +20 1 + +DEFAULT GROUP +Group no. No. of words +51 1 +``` +``` + +``` + +As you see, I have created one group (20) including the default group (51). + +### Test and improve English vocabulary + +As I already said, you can use the Vocabulary groups to revise or take quiz from them. + +For instance, to revise the group no. 
**101** , use **-r** option like below. + +``` +$ pyvoc 101 -r +``` + +You can now revise the meaning of all words in the Vocabulary group 101 in random order. Just hit ENTER to go through next questions. Once done, hit **CTRL+C** to exit. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc2-1.png) + +Also, you take quiz from the existing groups to brush up your vocabulary. To do so, use **-q** option like below. + +``` +$ pyvoc 103 -q 50 +``` + +This command allows you to take quiz of 50 questions from vocabulary group 103. Choose the correct answer from the list by entering the appropriate number. You will get 1 point for every correct answer. The more you score the more your vocabulary skill will be. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc3.png) + +Pyvoc is in the early-development stage. I hope the developer will improve it and add more features in the days to come. + +As a non-native English speaker, I personally find it useful to test and learn new word meanings in my free time. If you’re a heavy command line user and wanted to quickly check the meaning of a word, Pyvoc is the right tool. You can also test your English Vocabulary at your free time to memorize and improve your English language skill. Give it a try. You won’t be disappointed. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! + +Cheers! + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/pyvoc-a-command-line-dictionary-and-vocabulary-building-tool/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/manage-python-packages-using-pip/ diff --git a/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md b/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md new file mode 100644 index 0000000000..f56f1272f2 --- /dev/null +++ b/sources/tech/20190118 Secure Email Service Tutanota Has a Desktop App Now.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Secure Email Service Tutanota Has a Desktop App Now) +[#]: via: (https://itsfoss.com/tutanota-desktop) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Secure Email Service Tutanota Has a Desktop App Now +====== + +[Tutanota][1] recently [announced][2] the release of a desktop app for their email service. The beta is available for Linux, Windows, and macOS. + +### What is Tutanota? + +There are plenty of free, ad-supported email services available online. However, the majority of those email services are not exactly secure or privacy-minded. In this post-[Snowden][3] world, [Tutanota][4] offers a free, secure email service with a focus on privacy. 
+ +Tutanota has a number of eye-catching features, such as: + + * End-to-end encrypted mailbox + * End-to-end encrypted address book + * Automatic end-to-end encrypted emails between users + * End-to-end encrypted emails to any email address with a shared password + * Secure password reset that gives Tutanota absolutely no access + * Strips IP addresses from emails sent and received + * The code that runs Tutanota is [open source][5] + * Two-factor authentication + * Focus on privacy + * Passwords are salted and hashed locally with Bcrypt + * Secure servers located in Germany + * TLS with support for PFS, DMARC, DKIM, DNSSEC, and DANE + * Full-text search of encrypted data executed locally + + + +![][6] +Tutanota on the web + +You can [sign up for an account for free][7]. You can also upgrade your account to get extra features, such as custom domains, custom domain login, domain rules, extra storage, and aliases. They also have accounts available for businesses. + +Tutanota is also available on mobile devices. In fact, it’s [Android app is open source as well][8]. + +This German company is planning to expand beyond email. They hope to offer an encrypted calendar and cloud storage. You can help them reach their goals by [donating][9] via PayPal and cryptocurrency. + +### The New Desktop App from Tutanota + +Tutanota announced the [beta release][2] of the desktop app right before Christmas. They based this app on [Electron][10]. + +![][11] +Tutanota desktop app + +They went the Electron route: + + * to support all three major operating systems with minimum effort. + * to quickly adapt the new desktop clients so that they match new features added to the webmail client. + * to allocate development time to particular desktop features, e.g. offline availability, email import, that will simultaneously be available in all three desktop clients. + + + +Because this is a beta, there are several features missing from the app. The development team at Tutanota is working to add the following features: + + * Email import and synchronization with external mailboxes. This will “enable Tutanota to import emails from external mailboxes and encrypt the data locally on your device before storing it on the Tutanota servers.” + * Offline availability of emails + * Two-factor authentication + + + +### How to Install the Tutanota desktop client? + +![][12]Composing email in Tutanota + +You can [download][2] the beta app directly from Tutanota’s website. They have an [AppImage file for Linux][13], a .exe file for Windows, and a .app file for macOS. You can post any bugs that you encounter to the Tutanota [GitHub account][14]. + +To prove the security of the app, Tutanota signed each version. “The signatures make sure that the desktop clients as well as any updates come directly from us and have not been tampered with.” You can verify the signature using from Tutanota’s [GitHub page][15]. + +Remember, you will need to create a Tutanota account before you can use it. This is email client is designed to work solely with Tutanota. + +### Wrapping up + +I tested out the Tutanota email app on Linux Mint MATE. As to be expected, it was a mirror image of the web app. At this point in time, I don’t see any difference between the desktop app and the web app. The only use case that I can see to use the app now is to have Tutanota in its own window. + +Have you ever used [Tutanota][16]? If not, what is your favorite privacy conscience email service? Let us know in the comments below. 
+ +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][17]. + +![][18] + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/tutanota-desktop + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/tutanota-review/ +[2]: https://tutanota.com/blog/posts/desktop-clients/ +[3]: https://en.wikipedia.org/wiki/Edward_Snowden +[4]: https://tutanota.com/ +[5]: https://tutanota.com/blog/posts/open-source-email +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/tutanota2.jpg?resize=800%2C490&ssl=1 +[7]: https://tutanota.com/pricing +[8]: https://itsfoss.com/tutanota-fdroid-release/ +[9]: https://tutanota.com/community +[10]: https://electronjs.org/ +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/tutanota-app1.png?fit=800%2C486&ssl=1 +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/tutanota1.jpg?resize=800%2C405&ssl=1 +[13]: https://itsfoss.com/use-appimage-linux/ +[14]: https://github.com/tutao/tutanota +[15]: https://github.com/tutao/tutanota/blob/master/buildSrc/installerSigner.js +[16]: https://tutanota.com/polo/ +[17]: http://reddit.com/r/linuxusersgroup +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/02/tutanota-featured.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md b/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md new file mode 100644 index 0000000000..bd58eca5bf --- /dev/null +++ b/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md @@ -0,0 +1,92 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Akira: The Linux Design Tool We’ve Always Wanted?) +[#]: via: (https://itsfoss.com/akira-design-tool) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Akira: The Linux Design Tool We’ve Always Wanted? +====== + +Let’s make it clear, I am not a professional designer – but I’ve used certain tools on Windows (like Photoshop, Illustrator, etc.) and [Figma][1] (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows. + +Even on Linux, there is a limited number of dedicated [graphic design tools][2]. A few of these tools like [GIMP][3] and [Inkscape][4] are used by professionals as well. But most of them are not considered professional grade, unfortunately. + +Even if there are a couple more solutions – I’ve never come across a native Linux application that could replace [Sketch][5], Figma, or Adobe **** XD. Any professional designer would agree to that, isn’t it? + +### Is Akira going to replace Sketch, Figma, and Adobe XD on Linux? + +Well, in order to develop something that would replace those awesome proprietary tools – [Alessandro Castellani][6] – came up with a [Kickstarter campaign][7] by teaming up with a couple of experienced developers – +[Alberto Fanjul][8], [Bilal Elmoussaoui][9], and [Felipe Escoto][10]. + +So, yes, Akira is still pretty much just an idea- with a working prototype of its interface (as I observed in their [live stream session][11] via Kickstarter recently). 
+ +### If it does not exist, why the Kickstarter campaign? + +![][12] + +The aim of the Kickstarter campaign is to gather funds in order to hire the developers and take a few months off to dedicate their time in order to make Akira possible. + +Nonetheless, if you want to support the project, you should know some details, right? + +Fret not, we asked a couple of questions in their livestream session – let’s get into it… + +### Akira: A few more details + +![Akira prototype interface][13] +Image Credits: Kickstarter + +As the Kickstarter campaign describes: + +> The main purpose of Akira is to offer a fast and intuitive tool to **create Web and Mobile interfaces** , more like **Sketch** , **Figma** , or **Adobe XD** , with a completely native experience for Linux. + +They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, [Kickstarter][7] is the way to go. But, before that, let’s take a look at what they had to say when I asked some questions about Akira. + +Q: If you consider your project – similar to what Figma offers – why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools – offering a native Linux experience or is there something really interesting to encourage users to switch (except being an open source solution)? + +**Akira:** A native experience on Linux is always better and fast in comparison to a web-based electron app. Also, the hardware configuration matters if you choose to utilize Figma – but Akira will be light on system resource and you will still be able to do similar stuff without needing to go online. + +Q: Let’s assume that it becomes the open source solution that Linux users have been waiting for (with similar features offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans – or rely on donations? + +**Akira** : The project will mostly rely on Donations (something like [Krita Foundation][14] could be an idea). But, there will be no “pro” pricing plans – it will be available for free and it will be an open source project. + +So, with the response I got, it definitely seems to be something promising that we should probably support. + +### Wrapping Up + +What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action? + +Let us know your thoughts in the comments below. 
+ +![][15] + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/akira-design-tool + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.figma.com/ +[2]: https://itsfoss.com/best-linux-graphic-design-software/ +[3]: https://itsfoss.com/gimp-2-10-release/ +[4]: https://inkscape.org/ +[5]: https://www.sketchapp.com/ +[6]: https://github.com/Alecaddd +[7]: https://www.kickstarter.com/projects/alecaddd/akira-the-linux-design-tool/description +[8]: https://github.com/albfan +[9]: https://github.com/bilelmoussaoui +[10]: https://github.com/Philip-Scott +[11]: https://live.kickstarter.com/alessandro-castellani/live-stream/the-current-state-of-akira +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?resize=800%2C451&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-mockup.png?ssl=1 +[14]: https://krita.org/en/about/krita-foundation/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?fit=812%2C458&ssl=1 diff --git a/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md new file mode 100644 index 0000000000..e235cabdbf --- /dev/null +++ b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Resize OpenStack Instance (Virtual Machine) from Command line) +[#]: via: (https://www.linuxtechi.com/resize-openstack-instance-command-line/) +[#]: author: (Pradeep Kumar http://www.linuxtechi.com/author/pradeep/) + +How to Resize OpenStack Instance (Virtual Machine) from Command line +====== + +Being a Cloud administrator, resizing or changing resources of an instance or virtual machine is one of the most common tasks. + +![](https://www.linuxtechi.com/wp-content/uploads/2019/01/Resize-openstack-instance.jpg) + +In Openstack environment, there are some scenarios where cloud user has spin a vm using some flavor( like m1.smalll) where root partition disk size is 20 GB, but at some point of time user wants to extends the root partition size to 40 GB. So resizing of vm’s root partition can be accomplished by using the resize option in nova command. During the resize, we need to specify the new flavor that will include disk size as 40 GB. + +**Note:** Once you extend the instance resources like RAM, CPU and disk using resize option in openstack then you can’t reduce it. + +**Read More on** : [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1] + +In this tutorial I will demonstrate how to resize an openstack instance from command line. Let’s assume I have an existing instance named “ **test_resize_vm** ” and it’s associated flavor is “m1.small” and root partition disk size is 20 GB. 
+

Execute the below command from the controller node to check on which compute host our VM “test_resize_vm” is provisioned, and to see its flavor details:

```
:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor"
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-57    |
| flavor                               | m1.small (2)  |
:~#
```

Log in to the VM as well and check the root partition size:

```
[root@test_resize_vm ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        20G  885M   20G   5% /
devtmpfs       devtmpfs  900M     0  900M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M  8.4M  912M   1% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/1000
[root@test_resize_vm ~]# echo "test file for resize operation" > demofile
[root@test_resize_vm ~]# cat demofile
test file for resize operation
[root@test_resize_vm ~]#
```

Get the list of available flavors using the below command:

```
:~# openstack flavor list
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
| ID                                   | Name            |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
| 2                                    | m1.small        |  2048 |   20 |         0 |     1 | True      |
| 3                                    | m1.medium       |  4096 |   40 |         0 |     2 | True      |
| 4                                    | m1.large        |  8192 |   80 |         0 |     4 | True      |
| 5                                    | m1.xlarge       | 16384 |  160 |         0 |     8 | True      |
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
```

So we will be using the flavor “m1.medium” for the resize operation. Run the beneath nova command to resize “test_resize_vm”.

Syntax: # nova resize {VM_Name} {flavor_id} --poll

```
:~# nova resize test_resize_vm 3 --poll
Server resizing... 100% complete
Finished
:~#
```

Now we need to confirm the resize operation using the “ **openstack server resize --confirm** ” command. First, check the current status of the VM:

```
~# openstack server list | grep -i test_resize_vm
| 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0 | test_resize_vm | VERIFY_RESIZE |private-net=10.20.10.51                                  |
:~#
```

As we can see in the above command output, the current status of the VM is “ **VERIFY_RESIZE** ”; execute the below command to confirm the resize:

```
~# openstack server resize --confirm 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0
~#
```

After the resize confirmation, the status of the VM becomes active. Now re-verify the hypervisor and flavor details for the VM:

```
:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor"
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-58   |
| flavor                               | m1.medium (3)|
```

Log in to your VM now and verify the root partition size:

```
[root@test_resize_vm ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  887M   40G   3% /
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     1.9G  8.4M  1.9G   1% /run
tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs          tmpfs     380M     0  380M   0% /run/user/1000
[root@test_resize_vm ~]# cat demofile
test file for resize operation
[root@test_resize_vm ~]#
```

This confirms that the VM root partition has been resized successfully.

**Note:** If for some reason the resize operation was not successful and you want to revert the VM back to its previous state, then run the following command:

```
# openstack server resize --revert {instance_uuid}
```

If you have noticed the “ **openstack server show** ” command output, the VM was migrated from compute-57 to compute-58 after the resize. This is the default behavior of the “nova resize” command (i.e. nova resize will migrate the instance to another compute node and then resize it based on the flavor details).

In case you have only one compute node, nova resize will not work as-is, but we can make it work by changing the below parameter in the nova.conf file on the compute node.

Log in to the compute node and verify the parameter value. If “ **allow_resize_to_same_host** ” is set to False, then change it to True and restart the nova compute service (a hedged example is included just after the closing note of this tutorial).

**Read More on** [**OpenStack Deployment using Devstack on CentOS 7 / RHEL 7 System**][2]

That’s all from this tutorial; in case it helps you technically, then please do share your feedback and comments. 
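As referenced in the single compute node note above, here is a hedged sketch of checking and changing that parameter. The config path `/etc/nova/nova.conf` is the usual location, and the service name shown is the one used on RDO/CentOS style deployments; both can differ on your distribution (for example, the service is plain `nova-compute` on Ubuntu).

```
# On the compute node: check the current value (it defaults to False)
:~# grep -i allow_resize_to_same_host /etc/nova/nova.conf

# Edit /etc/nova/nova.conf and set, under the [DEFAULT] section:
#   allow_resize_to_same_host = True

# Then restart the compute service so the change takes effect
:~# systemctl restart openstack-nova-compute
```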
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/resize-openstack-instance-command-line/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/ +[2]: https://www.linuxtechi.com/openstack-deployment-devstack-centos-7-rhel-7/ diff --git a/sources/tech/20190123 Dockter- A container image builder for researchers.md b/sources/tech/20190123 Dockter- A container image builder for researchers.md new file mode 100644 index 0000000000..359d0c1d1e --- /dev/null +++ b/sources/tech/20190123 Dockter- A container image builder for researchers.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dockter: A container image builder for researchers) +[#]: via: (https://opensource.com/article/19/1/dockter-image-builder-researchers) +[#]: author: (Nokome Bentley https://opensource.com/users/nokome) + +Dockter: A container image builder for researchers +====== +Dockter supports the specific requirements of researchers doing data analysis, including those using R. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_skyscaper_organization.jpg?itok=Ir5epxm8) + +Dependency hell is ubiquitous in the world of software for research, and this affects research transparency and reproducibility. Containerization is one solution to this problem, but it creates new challenges for researchers. Docker is gaining popularity in the research community—but using it efficiently requires solid Dockerfile writing skills. + +As a part of the [Stencila][1] project, which is a platform for creating, collaborating on, and sharing data-driven content, we are developing [Dockter][2], an open source tool that makes it easier for researchers to create Docker images for their projects. Dockter scans a research project's source code, generates a Dockerfile, and builds a Docker image. It has a range of features that allow flexibility and can help researchers learn more about working with Docker. + +Dockter also generates a JSON file with information about the software environment (based on [CodeMeta][3] and [Schema.org][4]) to enable further processing and interoperability with other tools. + +Several other projects create Docker images from source code and/or requirements files, including: [alibaba/derrick][5], [jupyter/repo2docker][6], [Gueils/whales][7], [o2r-project/containerit][8]; [openshift/source-to-image][9], and [ViDA-NYU/reprozip][10]. Dockter is similar to repo2docker, containerit, and ReproZip in that it is aimed at researchers doing data analysis (and supports R), whereas most other tools are aimed at software developers (and don't support R). 
+ +Dockter differs from these projects principally in that it: + + * Performs static code analysis for multiple languages to determine package requirements + * Uses package databases to determine package system dependencies and generate linked metadata (containerit does this for R) + * Installs language package dependencies quicker (which can be useful during research projects where dependencies often change) + * By default but optionally, installs Stencila packages so that Stencila client interfaces can execute code in the container + + + +### Dockter's features + +Following are some of the ways researchers can use Dockter. + +#### Generating Docker images from code + +Dockter scans a research project folder and builds a Docker image for it. If the folder already has a Dockerfile, Dockter will build the image from that. If not, Dockter will scan the source code files in the folder and generate one. Dockter currently handles R, Python, and Node.js source code. The .dockerfile (with the dot at the beginning) it generates is fully editable so users can take over from Dockter and carry on with editing the file as they see fit. + +If the folder contains an R package [DESCRIPTION][11] file, Dockter will install the R packages listed under Imports into the image. If the folder does not contain a DESCRIPTION file, Dockter will scan all the R files in the folder for package import or usage statements and create a .DESCRIPTION file. + +If the folder contains a [requirements.txt][12] file for Python, Dockter will copy it into the Docker image and use [pip][13] to install the specified packages. If the folder does not contain either of those files, Dockter will scan all the folder's .py files for import statements and create a .requirements.txt file. + +If the folder contains a [package.json][14] file, Dockter will copy it into the Docker image and use npm to install the specified packages. If the folder does not contain a package.json file, Dockter will scan all the folder's .js files for require calls and create a .package.json file. + +#### Capturing system requirements automatically + +One of the headaches researchers face when hand-writing Dockerfiles is figuring out which system dependencies their project needs. Often this involves a lot of trial and error. Dockter automatically checks if any dependencies (or dependencies of dependencies, or dependencies of…) require system packages and installs those into the image. No more trial and error cycles of build, fail, add dependency, repeat… + +#### Reinstalling language packages faster + +If you have ever built a Docker image, you know it can be frustrating waiting for all your project's dependencies to reinstall when you add or remove just one. + +This happens because of Docker's layered filesystem: When you update a requirements file, Docker throws away all the subsequent layers—including the one where you previously installed your dependencies. That means all the packages have to be reinstalled. + +Dockter takes a different approach. It leaves the installation of language packages to the language package managers: Python's pip, Node.js's npm, and R's install.packages. These package managers are good at the job they were designed for: checking which packages need to be updated and updating only them. The result is much faster rebuilds, especially for R packages, which often involve compilation. + +Dockter does this by looking for a special **# dockter** comment in a Dockerfile. 
Instead of throwing away layers, it executes all instructions after this comment in the same layer—thereby reusing packages that were previously installed. + +#### Generating structured metadata for a project + +Dockter uses [JSON-LD][15] as its internal data structure. When it parses a project's source code, it generates a JSON-LD tree using vocabularies from schema.org and CodeMeta. + +Dockter also fetches metadata on a project's dependencies, which could be used to generate a complete software citation for the project. + +### Easy to pick up, easy to throw away + +Dockter is designed to make it easier to get started creating Docker images for your project. But it's also designed to not get in your way or restrict you from using bare Docker. You can easily and individually override any of the steps Dockter takes to build an image. + + * **Code analysis:** To stop Dockter from doing code analysis and specify your project's package dependencies, just remove the leading **.** (dot) from the .DESCRIPTION, .requirements.txt, or .package.json files. + + * **Dockerfile generation:** Dockter aims to generate readable Dockerfiles that conform to best practices. They include comments on what each section does and are a good way to start learning how to write your own Dockerfiles. To stop Dockter from generating a .Dockerfile and start editing it yourself, just rename it Dockerfile (without the leading dot). + + + + +### Install Dockter + +[Dockter is available][16] as pre-compiled, standalone command line tool or as a Node.js package. Click [here][17] for a demo. + +We welcome and encourage all [contributions][18]! + +A longer version of this article is available on the project's [GitHub page][19]. + +Aleksandra Pawlik will present [Building reproducible computing environments: a workshop for non-experts][20] at [linux.conf.au][21], January 21-25 in Christchurch, New Zealand. 
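As a small addendum, here is a schematic shell session that ties together the conventions described above for a Python project. The folder, file names, and the generated dotfile contents are illustrative assumptions for the sake of the example, not captured Dockter output; the point is simply that the scan of `import` statements drives what ends up in `.requirements.txt`, which you are then free to rename to `requirements.txt` and maintain yourself if you want to take over from Dockter.

```
$ ls my-analysis/
analysis.py  data.csv

$ head -n 2 my-analysis/analysis.py
import pandas
import matplotlib.pyplot as plt

# After running Dockter against the folder, you would expect generated dotfiles like:
$ ls -A my-analysis/
.Dockerfile  .requirements.txt  analysis.py  data.csv

$ cat my-analysis/.requirements.txt
matplotlib
pandas
```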
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/dockter-image-builder-researchers + +作者:[Nokome Bentley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nokome +[b]: https://github.com/lujun9972 +[1]: https://stenci.la/ +[2]: https://stencila.github.io/dockter/ +[3]: https://codemeta.github.io/index.html +[4]: http://Schema.org +[5]: https://github.com/alibaba/derrick +[6]: https://github.com/jupyter/repo2docker +[7]: https://github.com/Gueils/whales +[8]: https://github.com/o2r-project/containerit +[9]: https://github.com/openshift/source-to-image +[10]: https://github.com/ViDA-NYU/reprozip +[11]: http://r-pkgs.had.co.nz/description.html +[12]: https://pip.readthedocs.io/en/1.1/requirements.html +[13]: https://pypi.org/project/pip/ +[14]: https://docs.npmjs.com/files/package.json +[15]: https://json-ld.org/ +[16]: https://github.com/stencila/dockter/releases/ +[17]: https://asciinema.org/a/pOHpxUqIVkGdA1dqu7bENyxZk?size=medium&cols=120&autoplay=1 +[18]: https://github.com/stencila/dockter/blob/master/CONTRIBUTING.md +[19]: https://github.com/stencila/dockter +[20]: https://2019.linux.conf.au/schedule/presentation/185/ +[21]: https://linux.conf.au/ diff --git a/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md new file mode 100644 index 0000000000..bb7e129ff3 --- /dev/null +++ b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GStreamer WebRTC: A flexible solution to web-based media) +[#]: via: (https://opensource.com/article/19/1/gstreamer) +[#]: author: (Nirbheek Chauhan https://opensource.com/users/nirbheek) + +GStreamer WebRTC: A flexible solution to web-based media +====== +GStreamer's WebRTC implementation eliminates some of the shortcomings of using WebRTC in native apps, server applications, and IoT devices. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe) + +Currently, [WebRTC.org][1] is the most popular and feature-rich WebRTC implementation. It is used in Chrome and Firefox and works well for browsers, but the Native API and implementation have several shortcomings that make it a less-than-ideal choice for uses outside of browsers, including native apps, server applications, and internet of things (IoT) devices. + +Last year, our company ([Centricular][2]) made an independent implementation of a Native WebRTC API available in GStreamer 1.14. This implementation is much easier to use and more flexible than the WebRTC.org Native API, is transparently compatible with WebRTC.org, has been tested with all browsers, and is already in production use. + +### What are GStreamer and WebRTC? 
+ +[GStreamer][3] is an open source, cross-platform multimedia framework and one of the easiest and most flexible ways to implement any application that needs to play, record, or transform media-like data across a diverse scale of devices and products, including embedded (IoT, in-vehicle infotainment, phones, TVs, etc.), desktop (video/music players, video recording, non-linear editing, video conferencing, [VoIP][4] clients, browsers, etc.), servers (encode/transcode farms, video/voice conferencing servers, etc.), and [more][5]. + +The main feature that makes GStreamer the go-to multimedia framework for many people is its pipeline-based model, which solves one of the hardest problems in API design: catering to applications of varying complexity; from the simplest one-liners and quick solutions to those that need several hundreds of thousands of lines of code to implement their full feature set. If you want to learn how to use GStreamer, [Jan Schmidt's tutorial][6] from [LCA 2018][7] is a good place to start. + +[WebRTC][8] is a set of draft specifications that build upon existing [RTP][9], [RTCP][10], [SDP][11], [DTLS][12], [ICE][13], and other real-time communication (RTC) specifications and define an API for making them accessible using browser JavaScript (JS) APIs. + +People have been doing real-time communication over [IP][14] for [decades][15] with the protocols WebRTC builds upon. WebRTC's real innovation was creating a bridge between native applications and web apps by defining a standard yet flexible API that browsers can expose to untrusted JavaScript code. + +These specifications are [constantly being improved][16], which, combined with the ubiquitous nature of browsers, means WebRTC is fast becoming the standard choice for video conferencing on all platforms and for most applications. + +### **Everything is great, let's build amazing apps!** + +Not so fast, there's more to the story! For web apps, the [PeerConnection API][17] is [everywhere][18]. There are some browser-specific quirks, and the API keeps changing, but the [WebRTC JS adapter][19] handles most of that. Overall, the web app experience is mostly 👍. + +Unfortunately, for native code or applications that need more flexibility than a sandboxed JavaScript app can achieve, there haven't been a lot of great options. + +[Libwebrtc][20] (Google's implementation), [Janus][21], [Kurento][22], and [OpenWebRTC][23] have traditionally been the main contenders, but each implementation has its own inflexibilities, shortcomings, and constraints. + +Libwebrtc is still the most mature implementation, but it is also the most difficult to work with. Since it's embedded inside Chrome, it's a moving target and the project [is quite difficult to build and integrate][24]. These are all obstacles for native or server app developers trying to quickly prototype and experiment with things. + +Also, WebRTC was not built for multimedia, so the lower layers get in the way of non-browser use cases and applications. It is quite painful to do anything other than the default "set raw media, transmit" and "receive from remote, get raw media." This means if you want to use your own filters or hardware-specific codecs or sinks/sources, you end up having to fork libwebrtc. + +[**OpenWebRTC**][23] by Ericsson was the first attempt to rectify this situation. It was built on top of GStreamer. 
Its target audience was app developers, and it fit the bill quite well as a proof of concept—even though it used a custom API and some of the architectural decisions made it quite inflexible for most other uses. However, after an initial flurry of activity around the project, momentum petered out, the project failed to gather a community, and it is now effectively dead. Full disclosure: Centricular worked with Ericsson to polish some of the rough edges around the project immediately prior to its public release. + +### WebRTC in GStreamer + +GStreamer's WebRTC implementation gives you full control, as it does with any other [GStreamer pipeline][25]. + +As we said, the WebRTC standards build upon existing standards and protocols that serve similar purposes. GStreamer has supported almost all of them for a while now because they were being used for real-time communication, live streaming, and many other IP-based applications. This led Ericsson to choose GStreamer as the base for its OpenWebRTC project. + +Combined with the [SRTP][26] and DTLS plugins that were written during OpenWebRTC's development, it means that the implementation is built upon a solid and well-tested base, and implementing WebRTC features does not involve as much code-from-scratch work as one might presume. However, WebRTC is a large collection of standards, and reaching feature-parity with libwebrtc is an ongoing task. + +Due to decisions made while architecting WebRTCbin's internals, the API follows the PeerConnection specification quite closely. Therefore, almost all its missing features involve writing code that would plug into clearly defined sockets. For instance, since the GStreamer 1.14 release, the following features have been added to the WebRTC implementation and will be available in the next release of the GStreamer WebRTC: + + * Forward error correction + * RTP retransmission (RTX) + * RTP BUNDLE + * Data channels over SCTP + + + +We believe GStreamer's API is the most flexible, versatile, and easy to use WebRTC implementation out there, and it will only get better as time goes by. Bringing the power of pipeline-based multimedia manipulation to WebRTC opens new doors for interesting, unique, and highly efficient applications. If you'd like to demo the technology and play with the code, build and run [these demos][27], which include C, Rust, Python, and C# examples. + +Matthew Waters will present [GStreamer WebRTC—The flexible solution to web-based media][28] at [linux.conf.au][29], January 21-25 in Christchurch, New Zealand. 
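A small practical addendum: before building the demos, you can confirm that your local GStreamer installation already ships the `webrtcbin` element discussed above. It lives in the "bad" plugin set, so the package name below is a Debian/Ubuntu style assumption and will differ on other distributions.

```
# Install the plugin set that contains webrtcbin (package name varies by distro)
$ sudo apt install gstreamer1.0-plugins-bad

# If this prints the element's pad templates and properties, you are good to go
$ gst-inspect-1.0 webrtcbin | head -n 20
```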
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/gstreamer + +作者:[Nirbheek Chauhan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nirbheek +[b]: https://github.com/lujun9972 +[1]: http://webrtc.org/ +[2]: https://www.centricular.com/ +[3]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/gstreamer.html +[4]: https://en.wikipedia.org/wiki/Voice_over_IP +[5]: https://wiki.ligo.org/DASWG/GstLAL +[6]: https://www.youtube.com/watch?v=ZphadMGufY8 +[7]: http://lca2018.linux.org.au/ +[8]: https://en.wikipedia.org/wiki/WebRTC +[9]: https://en.wikipedia.org/wiki/Real-time_Transport_Protocol +[10]: https://en.wikipedia.org/wiki/RTP_Control_Protocol +[11]: https://en.wikipedia.org/wiki/Session_Description_Protocol +[12]: https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security +[13]: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment +[14]: https://en.wikipedia.org/wiki/Internet_Protocol +[15]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol +[16]: https://datatracker.ietf.org/wg/rtcweb/documents/ +[17]: https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection +[18]: https://caniuse.com/#feat=rtcpeerconnection +[19]: https://github.com/webrtc/adapter +[20]: https://github.com/aisouard/libwebrtc +[21]: https://janus.conf.meetecho.com/ +[22]: https://www.kurento.org/kurento-architecture +[23]: https://en.wikipedia.org/wiki/OpenWebRTC +[24]: https://webrtchacks.com/building-webrtc-from-source/ +[25]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/basics.html +[26]: https://en.wikipedia.org/wiki/Secure_Real-time_Transport_Protocol +[27]: https://github.com/centricular/gstwebrtc-demos/ +[28]: https://linux.conf.au/schedule/presentation/143/ +[29]: https://linux.conf.au/ diff --git a/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md b/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md new file mode 100644 index 0000000000..71a91ec3d8 --- /dev/null +++ b/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md @@ -0,0 +1,127 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (ODrive (Open Drive) – Google Drive GUI Client For Linux) +[#]: via: (https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +ODrive (Open Drive) – Google Drive GUI Client For Linux +====== + +This we had discussed in so many times. However, i will give a small introduction about it. + +As of now there is no official Google Drive Client for Linux and we need to use unofficial clients. + +There are many applications available in Linux for Google Drive integration. + +Each application has came out with set of features. + +We had written few articles about this in our website in the past. + +Those are **[DriveSync][1]** , **[Google Drive Ocamlfuse Client][2]** and **[Mount Google Drive in Linux Using Nautilus File Manager][3]**. + +Today also we are going to discuss about the same topic and the utility name is ODrive. + +### What’s ODrive? + +ODrive stands for Open Drive. 
It’s a GUI client for Google Drive, written with the Electron framework.

It has a simple GUI that allows users to integrate Google Drive in just a few steps.

### How To Install & Setup ODrive on Linux?

The developer offers an AppImage package, so there is no difficulty in installing ODrive on Linux.

Simply download the latest ODrive AppImage package from the developer’s GitHub page using the **wget command**.

```
$ wget https://github.com/liberodark/ODrive/releases/download/0.1.3/odrive-0.1.3-x86_64.AppImage
```

Set the executable permission on the ODrive AppImage file.

```
$ chmod +x odrive-0.1.3-x86_64.AppImage
```

Simply run the ODrive AppImage file to launch the ODrive GUI for further setup.

```
$ ./odrive-0.1.3-x86_64.AppImage
```

You should get a window like the one below when you run the above command. Just hit the **`Next`** button to continue the setup.
![][5]

Click the **`Connect`** link to add a Google Drive account.
![][6]

Enter the email id of the Google account you want to set up.
![][7]

Enter the password for the given email id.
![][8]

Allow ODrive (Open Drive) to access your Google account.
![][9]

By default, it chooses a folder location for the sync. You can change it if you want to use a specific one.
![][10]

Finally, hit the **`Synchronize`** button to start downloading the files from Google Drive to your local system.
![][11]

Synchronizing is in progress.
![][12]

Once synchronizing is completed, it shows that all the files have been downloaded.
![][13]

I can see that all the files were downloaded into the mentioned directory.
![][14]

If you want to sync any new files from the local system to Google Drive, just start `ODrive` from the application menu. It won’t actually open an application window, but it will keep running in the background, which we can see by using the ps command. (If you want it to start automatically at login, see the hedged autostart sketch at the end of this article.)

```
$ ps -df | grep odrive
```

![][15]

It will automatically sync once you add a new file into the Google Drive folder. The same can be checked through the notification menu. Yes, I can see that one file was synced to Google Drive.
![][16]

The GUI is not loading after the sync, and I’m not sure about this functionality. I will check with the developer and will add an update based on his input. 
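As promised above, here is a hedged sketch for starting ODrive automatically at login using the standard XDG autostart mechanism. The AppImage path is an assumption based on wherever you saved the downloaded file, so adjust it accordingly.

```
$ mkdir -p ~/.config/autostart
$ cat > ~/.config/autostart/odrive.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=ODrive
Comment=Start the ODrive sync client in the background at login
Exec=/home/user/odrive-0.1.3-x86_64.AppImage
EOF
```

After logging back in, `ps -df | grep odrive` should show the background process just like in the screenshot above.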
+ +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/ +[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/ +[3]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/ +[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[5]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-1.png +[6]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-2.png +[7]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-3.png +[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-4.png +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-5.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-6.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-7.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-8a.png +[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9.png +[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-11.png +[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9b.png +[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-10.png diff --git a/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md new file mode 100644 index 0000000000..10e666f625 --- /dev/null +++ b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Orpie: A command-line reverse Polish notation calculator) +[#]: via: (https://opensource.com/article/19/1/orpie) +[#]: author: (Peter Faller https://opensource.com/users/peterfaller) + +Orpie: A command-line reverse Polish notation calculator +====== +Orpie is a scientific calculator that functions much like early, well-loved HP calculators. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c) +Orpie is a text-mode [reverse Polish notation][1] (RPN) calculator for the Linux console. It works very much like the early, well-loved Hewlett-Packard calculators. 
+

### Installing Orpie

RPM and DEB packages are available for most distributions, so installation is just a matter of using either:

```
$ sudo apt install orpie
```

or

```
$ sudo yum install orpie
```

Orpie has a comprehensive man page; new users may want to have it open in another terminal window as they get started. Orpie can be customized for each user by editing the **~/.orpierc** configuration file. The [orpierc(5)][2] man page describes the contents of this file, and **/etc/orpierc** describes the default configuration.

### Starting up

Start Orpie by typing **orpie** at the command line. The main screen shows context-sensitive help on the left and the stack on the right. The cursor, where you enter numbers you want to calculate, is at the bottom-right corner.

![](https://opensource.com/sites/default/files/uploads/orpie_start.png)

### Example calculation

For a simple example, let's calculate the factorial of **5 (2 × 3 × 4 × 5)**. First the long way:

| Keys | Result |
| --------- | --------- |
| 2 | Push 2 onto the stack |
| 3 | Push 3 onto the stack |
| * | Multiply to get 6 |
| 4 | Push 4 onto the stack |
| * | Multiply to get 24 |
| 5 | Push 5 onto the stack |
| * | Multiply to get 120 |

Note that the multiplication happens as soon as you type `*`. If you hit **Enter** after `*`, Orpie will duplicate the value at position 1 on the stack. (If this happens, you can drop the duplicate with the **\** key.)

Equivalent sequences are:

| Keys | Result |
| ------------- | ------------- |
| 2 3 * 4 * 5 * | Faster! |
| 2 3 4 5 * * * | Same result |
| 5 ' fact | Fastest: Use the built-in function |

Observe that when you enter **'** , the left pane changes to show matching functions as you type. In the example above, typing **fa** is enough to get the **fact** function. Orpie offers many functions—experiment by typing **'** and a few letters to see what's available.

![](https://opensource.com/sites/default/files/uploads/orpie_functions.png)

Note that each operation replaces one or more values on the stack. If you want to store the value at position 1 in the stack, key in (for example) **@factot** and **S'**. To retrieve the value, key in (for example) **@factot** then **;** (if you want to see it; otherwise just leave **@factot** as the value for the next calculation).

### Constants and units

Orpie understands units and predefines many useful scientific constants. For example, to calculate the energy in a blue light photon at 400nm, calculate **E=hc/(400nm)**. The key sequences are:

| Keys | Result |
| -------------- | -------------- |
| C c | Get the speed of light in m/s |
| C h | Get Planck's constant in Js |
| * | Calculate h*c |
| 400 9 n _ m | Input 400 × 10^-9 m (400 nm) |
| / | Do the division and get the result: 4.966 × 10^-19 J |

Like choosing functions after typing **'** , typing **C** shows matching constants based on what you type.

![](https://opensource.com/sites/default/files/uploads/orpie_constants.png)

### Matrices

Orpie can also do operations with matrices. 
For example, to multiply two 2x2 matrices: + +| Keys | Result | +| -------- | -------- | +| [ 1 , 2 [ 3 , 4 | Stack contains the matrix [[ 1, 2 ][ 3, 4 ]] | +| [ 1 , 0 [ 1 , 1 | Push the multiplier matrix onto the stack | +| * | The result is: [[ 3, 2 ][ 7, 4 ]] | + +Note that the **]** characters are automatically inserted—entering **[** starts a new row. + +### Complex numbers + +Orpie can also calculate with complex numbers. They can be entered or displayed in either polar or rectangular form. You can toggle between the polar and rectangular display using the **p** key, and between degrees and radians using the **r** key. For example, to multiply **3 + 4i** by **4 + 4i** : + +| Keys | Result | +| -------- | -------- | +| ( 3 , 4 | The stack contains (3, 4) | +| ( 4 , 4 | Push (4, 4) | +| * | Get the result: (-4, 28) | + +Note that as you go, the results are kept on the stack so you can observe intermediate results in a lengthy calculation. + +![](https://opensource.com/sites/default/files/uploads/orpie_final.png) + +### Quitting Orpie + +You can exit from Orpie by typing **Q**. Your state is saved, so the next time you start Orpie, you'll find the stack as you left it. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/orpie + +作者:[Peter Faller][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/peterfaller +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Reverse_Polish_notation +[2]: https://github.com/pelzlpj/orpie/blob/master/doc/orpierc.5 diff --git a/sources/tech/20190124 What does DevOps mean to you.md b/sources/tech/20190124 What does DevOps mean to you.md new file mode 100644 index 0000000000..c62f0f83ba --- /dev/null +++ b/sources/tech/20190124 What does DevOps mean to you.md @@ -0,0 +1,143 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What does DevOps mean to you?) +[#]: via: (https://opensource.com/article/19/1/what-does-devops-mean-you) +[#]: author: (Girish Managoli https://opensource.com/users/gammay) + +What does DevOps mean to you? +====== +6 experts break down DevOps and the practices and philosophies key to making it work. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) + +It's said if you ask 10 people about DevOps, you will get 12 answers. This is a result of the diversity in opinions and expectations around DevOps—not to mention the disparity in its practices. + +To decipher the paradoxes around DevOps, we went to the people who know it the best—its top practitioners around the industry. These are people who have been around the horn, who know the ins and outs of technology, and who have practiced DevOps for years. Their viewpoints should encourage, stimulate, and provoke your thoughts around DevOps. + +### What does DevOps mean to you? + +Let's start with the fundamentals. We're not looking for textbook answers, rather we want to know what the experts say. + +In short, the experts say DevOps is about principles, practices, and tools. 
+ +[Ann Marie Fred][1], DevOps lead for IBM Digital Business Group's Commerce Platform, says, "to me, DevOps is a set of principles and practices designed to make teams more effective in designing, developing, delivering, and operating software." + +According to [Daniel Oh][2], senior DevOps evangelist at Red Hat, "in general, DevOps is compelling for enterprises to evolve current IT-based processes and tools related to app development, IT operations, and security protocol." + +[Brent Reed][3], founder of Tactec Strategic Solutions, talks about continuous improvement for the stakeholders. "DevOps means to me a way of working that includes a mindset that allows for continuous improvement for operational performance, maturing to organizational performance, resulting in delighted stakeholders." + +Many of the experts also emphasize culture. Ann Marie says, "it's also about continuous improvement and learning. It's about people and culture as much as it is about tools and technology." + +To [Dan Barker][4], chief architect and DevOps leader at the National Association of Insurance Commissioners (NAIC), "DevOps is primarily about culture. … It has brought several independent areas together like lean, [just culture][5], and continuous learning. And I see culture as being the most critical and the hardest to execute on." + +[Chris Baynham-Hughes][6], head of DevOps at Atos, says, "[DevOps] practice is adopted through the evolution of culture, process, and tooling within an organization. The key focus is culture change, and the key tenants of DevOps culture are collaboration, experimentation, fast-feedback, and continuous improvement." + +[Geoff Purdy][7], cloud architect, talks about agility and feedback "shortening and amplifying feedback loops. We want teams to get feedback in minutes rather than weeks." + +But in the end, Daniel nails it by explaining how open source and open culture allow him to achieve his goals "in easy and quick ways. In DevOps initiatives, the most important thing for me should be open culture rather than useful tools, multiple solutions." + +### What DevOps practices have you found effective? + +"Picking one, automated provisioning has been hugely effective for my team. " + +The most effective practices cited by the experts are pervasive yet disparate. + +According to Ann Marie, "some of the most powerful [practices] are agile project management; breaking down silos between cross-functional, autonomous squads; fully automated continuous delivery; green/blue deploys for zero downtime; developers setting up their own monitoring and alerting; blameless post-mortems; automating security and compliance." + +Chris says, "particular breakthroughs have been empathetic collaboration; continuous improvement; open leadership; reducing distance to the business; shifting from vertical silos to horizontal, cross-functional product teams; work visualization; impact mapping; Mobius loop; shortening of feedback loops; automation (from environments to CI/CD)." + +Brent supports "evolving a learning culture that includes TDD [test-driven development] and BDD [behavior-driven development] capturing of a story and automating the sequences of events that move from design, build, and test through implementation and production with continuous integration and delivery pipelines. A fail-first approach to testing, the ability to automate integration and delivery processes and include fast feedback throughout the lifecycle." + +Geoff highlights automated provisioning. 
"Picking one, automated provisioning has been hugely effective for my team. More specifically, automated provisioning from a versioned Infrastructure-as-Code codebase." + +Dan uses fun. "We do a lot of different things to create a DevOps culture. We hold 'lunch and learns' with free food to encourage everyone to come and learn together; we buy books and study in groups." + +### How do you motivate your team to achieve DevOps goals? + +``` +"Celebrate wins and visualize the progress made." +``` + +Daniel emphasizes "automation that matters. In order to minimize objection from multiple teams in a DevOps initiative, you should encourage your team to increase the automation capability of development, testing, and IT operations along with new processes and procedures. For example, a Linux container is the key tool to achieve the automation capability of DevOps." + +Geoff agrees, saying, "automate the toil. Are there tasks you hate doing? Great. Engineer them out of existence if possible. Otherwise, automate them. It keeps the job from becoming boring and routine because the job constantly evolves." + +Dan, Ann Marie, and Brent stress team motivation. + +Dan says, "at the NAIC, we have a great awards system for encouraging specific behaviors. We have multiple tiers of awards, and two of them can be given to anyone by anyone. We also give awards to teams after they complete something significant, but we often award individual contributors." + +According to Ann Marie, "the biggest motivator for teams in my area is seeing the success of others. We have a weekly playback for each other, and part of that is sharing what we've learned from trying out new tools or practices. When teams are enthusiastic about something they're doing and willing to help others get started, more teams will quickly get on board." + +Brent agrees. "Getting everyone educated and on the same baseline of knowledge is essential ... assessing what helps the team achieve [and] what it needs to deliver with the product owner and users is the first place I like to start." + +Chris recommends a two-pronged approach. "Run small, weekly goals that are achievable and agreed by the team as being important and [where] they can see progress outside of the feature work they are doing. Celebrate wins and visualize the progress made." + +### How do DevOps and agile work together? + +``` +"DevOps != Agile, second Agile != Scrum." +``` + +This is an important question because both DevOps and agile are cornerstones of modern software development. + +DevOps is a process of software development focusing on communication and collaboration to facilitate rapid application and product deployment, whereas agile is a development methodology involving continuous development, continuous iteration, and continuous testing to achieve predictable and quality deliverables. + +So, how do they relate? Let's ask the experts. + +In Brent's view, "DevOps != Agile, second Agile != Scrum. … Agile tools and ways of working—that support DevOps strategies and goals—are how they mesh together." + +Chris says, "agile is a fundamental component of DevOps for me. Sure, we could talk about how we adopt DevOps culture in a non-agile environment, but ultimately, improving agility in the way software is engineered is a key indicator as to the maturity of DevOps adoption within the organization." + +Dan relates DevOps to the larger [Agile Manifesto][8]. "I never talk about agile without referencing the Agile Manifesto in order to set the baseline. 
There are many implementations that don't focus on the Manifesto. When you read the Manifesto, they've really described DevOps from a development perspective. Therefore, it is very easy to fit agile into a DevOps culture, as agile is focused on communication, collaboration, flexibility to change, and getting to production quickly." + +Geoff sees "DevOps as one of many implementations of agile. Agile is essentially a set of principles, while DevOps is a culture, process, and toolchain that embodies those principles." + +Ann Marie keeps it succinct, saying "agile is a prerequisite for DevOps. DevOps makes agile more effective." + +### Has DevOps benefited from open source? + +``` +"Open source done well requires a DevOps culture." +``` + +This question receives a fervent "yes" from all participants followed by an explanation of the benefits they've seen. + +Ann Marie says, "we get to stand on the shoulders of giants and build upon what's already available. The open source model of maintaining software, with pull requests and code reviews, also works very well for DevOps teams." + +Chris agrees that DevOps has "undoubtedly" benefited from open source. "From the engineering and tooling side (e.g., Ansible), to the process and people side, through the sharing of stories within the industry and the open leadership community." + +A benefit Geoff cites is "grassroots adoption. Nobody had to sign purchase requisitions for free (as in beer) software. Teams found tooling that met their needs, were free (as in freedom) to modify, [then] built on top of it, and contributed enhancements back to the larger community. Rinse, repeat." + +Open source has shown DevOps "better ways you can adopt new changes and overcome challenges, just like open source software developers are doing it," says Daniel. + +Brent concurs. "DevOps has benefited in many ways from open source. One way is the ability to use the tools to understand how they can help accelerate DevOps goals and strategies. Educating the development and operations folks on crucial things like automation, virtualization and containerization, auto-scaling, and many of the qualities that are difficult to achieve without introducing technology enablers that make DevOps easier." + +Dan notes the two-way, symbiotic relationship between DevOps and open source. "Open source done well requires a DevOps culture. Most open source projects have very open communication structures with very little obscurity. This has actually been a great learning opportunity for DevOps practitioners around what they might bring into their own organizations. Also, being able to use tools from a community that is similar to that of your own organization only encourages your own culture growth. I like to use GitLab as an example of this symbiotic relationship. When I bring [GitLab] into a company, we get a great tool, but what I'm really buying is their unique culture. That brings substantial value through our interactions with them and our ability to contribute back. Their tool also has a lot to offer for a DevOps organization, but their culture has inspired awe in the companies where I've introduced it." + +Now that our DevOps experts have weighed in, please share your thoughts on what DevOps means—as well as the other questions we posed—in the comments. 
+

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/what-does-devops-mean-you

作者:[Girish Managoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/gammay
[b]: https://github.com/lujun9972
[1]: https://twitter.com/DukeAMO
[2]: https://twitter.com/danieloh30?lang=en
[3]: https://twitter.com/brentareed
[4]: https://twitter.com/barkerd427
[5]: https://psnet.ahrq.gov/resources/resource/1582
[6]: https://twitter.com/onlychrisbh?lang=en
[7]: https://twitter.com/geoff_purdy
[8]: https://agilemanifesto.org/
diff --git a/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md
new file mode 100644
index 0000000000..fcbdd3c5c7
--- /dev/null
+++ b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md
@@ -0,0 +1,330 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client)
[#]: via: (https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)

ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client
====== 

Linux users have long preferred scp or rsync for copying files or folders.

However, many new options keep coming to Linux because it is open source, and anyone can develop secure software for it.

We have written multiple articles about this topic on our site in the past. Those are **[OnionShare][1]**, **[Magic Wormhole][2]**, **[Transfer.sh][3]** and **[Dcp – Dat Copy][4]**.

Today we are going to discuss another tool of the same kind, called ffsend.

### What’s ffsend?

[ffsend][5] is a command line Firefox Send client that allows users to transfer and receive files and folders through the command line.

It allows us to easily and securely share files and directories from the command line through a safe, private and encrypted link, using a single simple command.

Files are shared using the Send service and the allowed file size is up to 2GB.

Others are able to download these files with this tool, or through their web browser.

All files are always encrypted on the client, and secrets are never shared with the remote host.

Additionally, you can add a password for the file upload.

The uploaded files will be removed after the download limit is reached (the default count is 1, configurable up to 10) or after 24 hours. This makes sure that your files do not remain online forever.

This tool is currently in the alpha phase. Use at your own risk. Also, only limited installation options are available right now. 
+ +### ffsend Features: + + * Fully featured and friendly command line tool + * Upload and download files and directories securely + * Always encrypted on the client + * Additional password protection, generation and configurable download limits + * Built-in file and directory archiving and extraction + * History tracking your files for easy management + * Ability to use your own Send host + * Inspect or delete shared files + * Accurate error reporting + * Low memory footprint, due to encryption and download/upload streaming + * Intended to be used in scripts without interaction + + + +### How To Install ffsend in Linux? + +There is no package for each distributions except Debian and Arch Linux systems. However, we can easily get this utility by downloading the prebuilt appropriate binaries file based on the operating system and architecture. + +Run the below command to download the latest available version for your operating system. + +``` +$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend-v0.1.2-linux-x64.tar.gz +``` + +Extract the tar archive using the following command. + +``` +$ tar -xvf ffsend-v0.1.2-linux-x64.tar.gz +``` + +Run the following command to identify your path variable. + +``` +$ echo $PATH +/home/daygeek/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl +``` + +As i told previously, just move the executable file to your path directory. + +``` +$ sudo mv ffsend /usr/local/sbin +``` + +Run the `ffsend` command alone to get the basic usage information. + +``` +$ ffsend +ffsend 0.1.2 +Usage: ffsend [FLAGS] ... + +Easily and securely share files from the command line. +A fully featured Firefox Send client. + +Missing subcommand. Here are the most used: + ffsend upload ... + ffsend download ... + +To show all subcommands, features and other help: + ffsend help [SUBCOMMAND] +``` + +For Arch Linux based users can easily install it with help of **[AUR Helper][6]** , as this package is available in AUR repository. + +``` +$ yay -S ffsend +``` + +For **`Debian/Ubuntu`** systems, use **[DPKG Command][7]** to install ffsend. + +``` +$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend_0.1.2_amd64.deb +$ sudo dpkg -i ffsend_0.1.2_amd64.deb +``` + +### How To Send A File Using ffsend? + +It’s not complicated. We can easily send a file using simple syntax. + +**Syntax:** + +``` +$ ffsend upload [/Path/to/the/file/name] +``` + +In the following example, we are going to upload a file called `passwd-up1.sh`. Once you upload the file then you will be getting the unique URL. + +``` +$ ffsend upload passwd-up1.sh --copy +Upload complete +Share link: https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ +``` + +![][9] + +Just download the above unique URL to get the file in any remote system. + +**Syntax:** + +``` +$ ffsend download [Generated URL] +``` + +Output for the above command. + +``` +$ ffsend download https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ +Download complete +``` + +![][10] + +Use the following syntax format for directory upload. + +``` +$ ffsend upload [/Path/to/the/Directory] --copy +``` + +In this example, we are going to upload `2g` directory. + +``` +$ ffsend upload /home/daygeek/2g --copy +You've selected a directory, only a single file may be uploaded. +Archive the directory into a single file? [Y/n]: y +Archiving... 
+Upload complete +Share link: https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg +``` + +Just download the above generated the unique URL to get a folder in any remote system. + +``` +$ ffsend download https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg +You're downloading an archive, extract it into the selected directory? [Y/n]: y +Extracting... +Download complete +``` + +As this already send files through a safe, private, and encrypted link. However, if you would like to add a additional security at your level. Yes, you can add a password for a file. + +``` +$ ffsend upload file-copy-rsync.sh --copy --password +Password: +Upload complete +Share link: https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA +``` + +It will prompt you to update a password when you are trying to download a file in the remote system. + +``` +$ ffsend download https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA +This file is protected with a password. +Password: +Download complete +``` + +Alternatively you can limit a download speed by providing the download speed while uploading a file. + +``` +$ ffsend upload file-copy-scp.sh --copy --downloads 10 +Upload complete +Share link: https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +``` + +Just download the above unique URL to get a file in any remote system. + +``` +ffsend download https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +Download complete +``` + +If you want to see more details about the file, use the following format. It will shows you the file name, file size, Download counts and when it will going to expire. + +**Syntax:** + +``` +$ ffsend info [Generated URL] + +$ ffsend info https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +ID: 23cb923c4e +Name: file-copy-scp.sh +Size: 115 B +MIME: application/x-sh +Downloads: 3 of 10 +Expiry: 23h58m (86280s) +``` + +You can view your transaction history using the following format. + +``` +$ ffsend history +# LINK EXPIRY +1 https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw 23h57m +2 https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA 23h55m +3 https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg 23h52m +4 https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ 23h46m +5 https://send.firefox.com/download/74ff30e43e/#NYfDOUp_Ai-RKg5g0fCZXw 23h44m +6 https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA 23h43m +``` + +If you don’t want the link anymore then we can delete it. + +**Syntax:** + +``` +$ ffsend delete [Generated URL] + +$ ffsend delete https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA +File deleted +``` + +Alternatively this can be done using firefox browser by opening the page . + +Just drag and drop a file to upload it. +![][11] + +Once the file is downloaded, it will show you that 100% download completed. +![][12] + +To check other possible options, navigate to man page or help page. + +``` +$ ffsend --help +ffsend 0.1.2 +Tim Visee +Easily and securely share files from the command line. +A fully featured Firefox Send client. 
+ +USAGE: + ffsend [FLAGS] [OPTIONS] [SUBCOMMAND] + +FLAGS: + -f, --force Force the action, ignore warnings + -h, --help Prints help information + -i, --incognito Don't update local history for actions + -I, --no-interact Not interactive, do not prompt + -q, --quiet Produce output suitable for logging and automation + -V, --version Prints version information + -v, --verbose Enable verbose information and logging + -y, --yes Assume yes for prompts + +OPTIONS: + -H, --history Use the specified history file [env: FFSEND_HISTORY] + -t, --timeout Request timeout (0 to disable) [env: FFSEND_TIMEOUT] + -T, --transfer-timeout Transfer timeout (0 to disable) [env: FFSEND_TRANSFER_TIMEOUT] + +SUBCOMMANDS: + upload Upload files [aliases: u, up] + download Download files [aliases: d, down] + debug View debug information [aliases: dbg] + delete Delete a shared file [aliases: del] + exists Check whether a remote file exists [aliases: e] + help Prints this message or the help of the given subcommand(s) + history View file history [aliases: h] + info Fetch info about a shared file [aliases: i] + parameters Change parameters of a shared file [aliases: params] + password Change the password of a shared file [aliases: pass, p] + +The public Send service that is used as default host is provided by Mozilla. +This application is not affiliated with Mozilla, Firefox or Firefox Send. +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/onionshare-secure-way-to-share-files-sharing-tool-linux/ +[2]: https://www.2daygeek.com/wormhole-securely-share-files-from-linux-command-line/ +[3]: https://www.2daygeek.com/transfer-sh-easy-fast-way-share-files-over-internet-from-command-line/ +[4]: https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/ +[5]: https://github.com/timvisee/ffsend +[6]: https://www.2daygeek.com/category/aur-helper/ +[7]: https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ +[8]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-1.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-2.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-3.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-4.png diff --git a/sources/tech/20190125 Using Antora for your open source documentation.md b/sources/tech/20190125 Using Antora for your open source documentation.md new file mode 100644 index 0000000000..3df2862ba1 --- /dev/null +++ b/sources/tech/20190125 Using Antora for your open source documentation.md @@ -0,0 +1,208 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: 
publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Antora for your open source documentation) +[#]: via: (https://fedoramagazine.org/using-antora-for-your-open-source-documentation/) +[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) + +Using Antora for your open source documentation +====== +![](https://fedoramagazine.org/wp-content/uploads/2019/01/antora-816x345.jpg) + +Are you looking for an easy way to write and publish technical documentation? Let me introduce [Antora][1] — an open source documentation site generator. Simple enough for a tiny project, but also complex enough to cover large documentation sites such as [Fedora Docs][2]. + +With sources stored in git, written in a simple yet powerful markup language AsciiDoc, and a static HTML as an output, Antora makes writing, collaborating on, and publishing your documentation a no-brainer. + +### The basic concepts + +Before we build a simple site, let’s have a look at some of the core concepts Antora uses to make the world a happier place. Or, at least, to build a documentation website. + +#### Organizing the content + +All sources that are used to build your documentation site are stored in a **git repository**. Or multiple ones — potentially owned by different people. For example, at the time of writing, the Fedora Docs had its sources stored in 24 different repositories owned by different groups having their own rules around contributions. + +The content in Antora is organized into **components** , usually representing different areas of your project, or, well, different components of the software you’re documenting — such as the backend, the UI, etc. Components can be independently versioned, and each component gets a separate space on the docs site with its own menu. + +Components can be optionally broken down into so-called **modules**. Modules are mostly invisible on the site, but they allow you to organize your sources into logical groups, and even store each in different git repository if that’s something you need to do. We use this in Fedora Docs to separate [the Release Notes, the Installation Guide, and the System Administrator Guide][3] into three different source repositories with their own rules, while preserving a single view in the UI. + +What’s great about this approach is that, to some extent, the way your sources are physically structured is not reflected on the site. + +#### Virtual catalog + +When assembling the site, Antora builds a **virtual catalog** of all pages, assigning a [unique ID][4] to each one based on its name and the component, the version, and module it belongs to. The page ID is then used to generate URLs for each page, and for internal links as well. So, to some extent, the source repository structure doesn’t really matter as far as the site is concerned. + +As an example, if we’d for some reason decided to merge all the 24 repositories of Fedora Docs into one, nothing on the site would change. Well, except the “Edit this page” link on every page that would suddenly point to this one repository. + +#### Independent UI + +We’ve covered the content, but how it’s going to look like? + +Documentation sites generated with Antora use a so-called [UI bundle][5] that defines the look and feel of your site. The UI bundle holds all graphical assets such as CSS, images, etc. to make your site look beautiful. + +It is expected that the UI will be developed independently of the documentation content, and that’s exactly what Antora supports. 
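To make the page IDs from the virtual catalog a little more concrete, here is a sketch of what cross-references look like in the AsciiDoc sources. The coordinates follow the `version@component:module:page.adoc` structure from the page ID documentation; the component and version names below are taken from the demo content used in the next section (the exact version string is whatever that component’s antora.yml declares, assumed here to be 2.0):

```
// fully qualified reference to a page in another component and version
xref:2.0@component-b:ROOT:index.adoc[Component B start page]

// inside the same component and module, the short form is enough
xref:special-characters.adoc[Special Characters & Symbols]
```

Because links are resolved through the virtual catalog, they keep working even if a source file later moves to a different repository.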

#### Putting it all together

Having sources distributed in multiple repositories might raise a question: How do you build the site? The answer is: [Antora Playbook][6].

The Antora Playbook is a file that points to all the source repositories and the UI bundle. It also defines additional metadata such as the name of your site.

The Playbook is the only file you need to have locally available in order to build the site. Everything else gets fetched automatically as a part of the build process.

### Building a site with Antora

Demo time! To build a minimal site, you need three things:

  1. At least one component holding your AsciiDoc sources.
  2. An Antora Playbook.
  3. A UI bundle.

The good news is that the nice people behind Antora provide [example Antora sources][7] we can try right away.

#### The Playbook

Let’s first have a look at [the Playbook][8]:

```
site:
  title: Antora Demo Site
# the 404 page and sitemap files only get generated when the url property is set
  url: https://example.org/docs
  start_page: component-b::index.adoc
content:
  sources:
    - url: https://gitlab.com/antora/demo/demo-component-a.git
      branches: master
    - url: https://gitlab.com/antora/demo/demo-component-b.git
      branches: [v2.0, v1.0]
      start_path: docs
ui:
  bundle:
    url: https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable
    snapshot: true
```

As we can see, the Playbook defines some information about the site, lists the content repositories, and points to the UI bundle.

There are two repositories: [demo-component-a][9] with a single branch, and [demo-component-b][10] with two branches, each representing a different version.

#### Components

The minimal source repository structure is nicely demonstrated in the [demo-component-a][9] repository:

```
antora.yml   <- component metadata
modules/
  ROOT/      <- the default module
    nav.adoc <- menu definition
    pages/   <- a directory with all the .adoc sources
      source1.adoc
      source2.adoc
      ...
```

The `antora.yml` file contains metadata for this component, such as the name and the version of the component and the starting page, and it also points to a menu definition file:

```
name: component-a
title: Component A
version: 1.5.6
start_page: ROOT:inline-text-formatting.adoc
nav:
  - modules/ROOT/nav.adoc
```

The menu definition file is a simple list that defines the structure of the menu and the content. It uses the [page ID][4] to identify each page.

```
* xref:inline-text-formatting.adoc[Basic Inline Text Formatting]
* xref:special-characters.adoc[Special Characters & Symbols]
* xref:admonition.adoc[Admonition]
* xref:sidebar.adoc[Sidebar]
* xref:ui-macros.adoc[UI Macros]
* Lists
** xref:lists/ordered-list.adoc[Ordered List]
** xref:lists/unordered-list.adoc[Unordered List]
```

And finally, there’s the actual content under `modules/ROOT/pages/` — you can see the repository for examples, or the AsciiDoc syntax reference.

#### The UI bundle

For the UI, we’ll be using the example UI provided by the project.

Going into the details of Antora UI would be beyond the scope of this article, but if you’re interested, please see the [Antora UI documentation][5] for more info.

#### Building the site

Note: We’ll be using Podman to run Antora in a container. You can [learn about Podman on the Fedora Magazine][11].

To build the site, we only need to call Antora on the Playbook file.
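If you prefer to run Antora directly instead of in a container, it can also be installed locally through npm. This is only a sketch; it assumes Node.js is available and uses the package names from the Antora 2.x installation docs (worth double-checking for the version you use):

```
$ npm i -g @antora/cli@2.0 @antora/site-generator-default@2.0
$ antora site.yml
```

The container route described next avoids the need for a local Node.js setup.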
The easiest way to get Antora at the moment is to use the container image provided by the project. You can get it by running:

```
$ podman pull antora/antora
```

Let’s get the playbook repository:

```
$ git clone https://gitlab.com/antora/demo/demo-site.git
$ cd demo-site
```

And run Antora using the following command:

```
$ podman run --rm -it -v $(pwd):/antora:z antora/antora site.yml
```

The site will be available in the `public` directory. You can either open it in your web browser directly, or start a local web server using:

```
$ cd public
$ python3 -m http.server 8080
```

Your site will then be available at http://localhost:8080.


--------------------------------------------------------------------------------

via: https://fedoramagazine.org/using-antora-for-your-open-source-documentation/

作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://antora.org/
[2]: http://docs.fedoraproject.org/
[3]: https://docs.fedoraproject.org/en-US/fedora/f29/
[4]: https://docs.antora.org/antora/2.0/page/page-id/#structure
[5]: https://docs.antora.org/antora-ui-default/
[6]: https://docs.antora.org/antora/2.0/playbook/
[7]: https://gitlab.com/antora/demo
[8]: https://gitlab.com/antora/demo/demo-site/blob/master/site.yml
[9]: https://gitlab.com/antora/demo/demo-component-a
[10]: https://gitlab.com/antora/demo/demo-component-b
[11]: https://fedoramagazine.org/running-containers-with-podman/
diff --git a/sources/tech/20190128 Top Hex Editors for Linux.md b/sources/tech/20190128 Top Hex Editors for Linux.md
new file mode 100644
index 0000000000..5cd47704b4
--- /dev/null
+++ b/sources/tech/20190128 Top Hex Editors for Linux.md
@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top Hex Editors for Linux)
[#]: via: (https://itsfoss.com/hex-editors-linux)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Top Hex Editors for Linux
======

A hex editor lets you view/edit the binary data of a file – which is in the form of “hexadecimal” values, hence the name “hex” editor. Let’s be frank, not everyone needs it. Only a specific group of users who have to deal with binary data use it.

If you have no idea what it is, let me give you an example. Suppose you have the configuration files of a game; you can open them using a hex editor and change certain values to have more ammo/score and so on. To know more about hex editors, you should start with the [Wikipedia page][1].

In case you already know what it’s used for – let us take a look at the best hex editors available for Linux.

### 5 Best Hex Editors Available

![Best Hex Editors for Linux][2]

**Note:** The hex editors mentioned are in no particular order of ranking.

#### 1\. Bless Hex Editor

![bless hex editor][3]

**Key Features:**

  * Raw disk editing
  * Multilevel undo/redo operations
  * Multiple tabs
  * Conversion table
  * Plugin support to extend the functionality

Bless is one of the most popular hex editors available for Linux. You can find it listed in your AppCenter or Software Center. If that is not the case, you can check out their [GitHub page][4] for the build and the associated instructions.
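Bless should also be available directly from the standard repositories of many distributions. For example, on Debian/Ubuntu based systems the following should work (the package name `bless` is an assumption based on the project name):

```
sudo apt install bless
```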
It can easily handle editing big files without slowing down – so it’s a fast hex editor.

#### 2\. GNOME Hex Editor

![gnome hex editor][5]

**Key Features:**

  * View/edit in either hex or ASCII
  * Edit large files

Yet another amazing hex editor – specifically tailored for GNOME. Well, I personally use Elementary OS, so I find it listed in the App Center. You should find it in the Software Center as well. If not, refer to the [GitHub page][6] for the source.

You can use this editor to view/edit in either hex or ASCII. The user interface is quite simple – as you can see in the image above.

#### 3\. Okteta

![okteta][7]

**Key Features:**

  * Customizable data views
  * Multiple tabs
  * Character encodings: all 8-bit encodings as supplied by Qt, EBCDIC
  * Decoding table listing common simple data types

Okteta is a simple hex editor with not-so-fancy features, although it can handle most of the tasks. There’s a separate module of it which you can use to embed it in other programs to view/edit files.

Similar to all the above-mentioned editors, you can find this one listed in your AppCenter and Software Center as well.

#### 4\. wxHexEditor

![wxhexeditor][8]

**Key Features:**

  * Easily handles big files
  * Has x86 disassembly support
  * **Sector Indication** on disk devices
  * Supports customizable hex panel formatting and colors

This is something interesting. It is primarily a hex editor, but you can also use it as a low-level disk editor. For example, if you have a problem with your HDD, you can use this editor to edit the sectors in raw hex and fix it.

You can find it listed in your App Center and Software Center. If not, [Sourceforge][9] is the way to go.

#### 5\. Hexedit (Command Line)

![hexedit][10]

**Key Features:**

  * Works via terminal
  * It’s fast and simple

If you want something that works in your terminal, you can go ahead and install Hexedit via the console. It’s my favorite command-line hex editor on Linux.

When you launch it, you will have to specify the location of the file, and it’ll then open it for you.

To install it, just type in:

```
sudo apt install hexedit
```

### Wrapping Up

Hex editors could come in handy to experiment and learn. If you are someone experienced, you should opt for one with more features – with a GUI. Although, it all comes down to personal preference.

What do you think about the usefulness of hex editors? Which one do you use? Did we miss listing your favorite? Let us know in the comments!
+ +![][11] + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/hex-editors-linux + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Hex_editor +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-hex-editors-800x450.jpeg?resize=800%2C450&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/bless-hex-editor.jpg?ssl=1 +[4]: https://github.com/bwrsandman/Bless +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/ghex-hex-editor.jpg?ssl=1 +[6]: https://github.com/GNOME/ghex +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/okteta-hex-editor-800x466.jpg?resize=800%2C466&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/wxhexeditor.jpg?ssl=1 +[9]: https://sourceforge.net/projects/wxhexeditor/ +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/hexedit-console.jpg?resize=800%2C566&ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-hex-editors.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md b/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md new file mode 100644 index 0000000000..366e75846d --- /dev/null +++ b/sources/tech/20190129 7 Methods To Identify Disk Partition-FileSystem UUID On Linux.md @@ -0,0 +1,159 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux) +[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +7 Methods To Identify Disk Partition/FileSystem UUID On Linux +====== + +As a Linux administrator you should aware of that how do you check partition UUID or filesystem UUID. + +Because most of the Linux systems are mount the partitions with UUID. The same has been verified in the `/etc/fstab` file. + +There are many utilities are available to check UUID. In this article we will show you how to check UUID in many ways and you can choose the one which is suitable for you. + +### What Is UUID? + +UUID stands for Universally Unique Identifier which helps Linux system to identify a hard drives partition instead of block device file. + +libuuid is part of the util-linux-ng package since kernel version 2.15.1 and it’s installed by default in Linux system. + +The UUIDs generated by this library can be reasonably expected to be unique within a system, and unique across all systems. + +It’s a 128 bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS) and later UUIDs are standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE). + +UUIDs are represented as 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 alphanumeric characters and four hyphens). + +For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa + +Sample of my /etc/fstab file. 
+ +``` +# cat /etc/fstab + +# /etc/fstab: static file system information. +# +# Use 'blkid' to print the universally unique identifier for a device; this may +# be used with UUID= as a more robust way to name devices that works even if +# disks are added and removed. See fstab(5). +# +# +UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1 +UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2 +``` + +We can check this using the following seven commands. + + * **`blkid Command:`** locate/print block device attributes. + * **`lsblk Command:`** lsblk lists information about all available or the specified block devices. + * **`hwinfo Command:`** hwinfo stands for hardware information tool is another great utility that used to probe for the hardware present in the system. + * **`udevadm Command:`** udev management tool. + * **`tune2fs Command:`** adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems. + * **`dumpe2fs Command:`** dump ext2/ext3/ext4 filesystem information. + * **`Using by-uuid Path:`** The directory contains UUID and real block device files, UUIDs were symlink with real block device files. + + + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing blkid Command? + +blkid is a command-line utility to locate/print block device attributes. It uses libblkid library to get disk partition UUID in Linux system. + +``` +# blkid +/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01" +/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01" +/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03" +/dev/sdc5: PARTUUID="8cc8f9e5-05" +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing lsblk Command? + +lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information. + +If the udev db is not available or lsblk is compiled without udev support than it tries to read LABELs, UUIDs and filesystem types from the block device. In this case root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default. + +``` +# lsblk -o name,mountpoint,size,uuid +NAME MOUNTPOINT SIZE UUID +sda 30G +└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa +sdb 10G +sdc 10G +├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7 +├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63 +├─sdc4 1K +└─sdc5 1G +sdd 10G +sde 10G +sr0 1024M +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing by-uuid path? + +The directory contains UUID and real block device files, UUIDs were symlink with real block device files. + +``` +# ls -lh /dev/disk/by-uuid/ +total 0 +lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1 +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing hwinfo Command? + +**[hwinfo][1]** stands for hardware information tool is another great utility that used to probe for the hardware present in the system and display detailed information about varies hardware components in human readable format. 
+ +``` +# hwinfo --block | grep by-uuid | awk '{print $3,$7}' +/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7 +/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63 +/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing udevadm Command? + +udevadm expects a command and command specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms. + +``` +udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1 +S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7 +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing tune2fs Command? + +tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems. The current values of these options can be displayed by using the -l option. + +``` +# tune2fs -l /dev/sdc1 | grep UUID +Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7 +``` + +### How To Check Disk Partition/FileSystem UUID In Linux Uusing dumpe2fs Command? + +dumpe2fs prints the super block and blocks group information for the filesystem present on device. + +``` +# dumpe2fs /dev/sdc1 | grep UUID +dumpe2fs 1.43.5 (04-Aug-2017) +Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ diff --git a/sources/tech/20190129 A small notebook for a system administrator.md b/sources/tech/20190129 A small notebook for a system administrator.md new file mode 100644 index 0000000000..45d6ba50eb --- /dev/null +++ b/sources/tech/20190129 A small notebook for a system administrator.md @@ -0,0 +1,552 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A small notebook for a system administrator) +[#]: via: (https://habr.com/en/post/437912/) +[#]: author: (sukhe https://habr.com/en/users/sukhe/) + +A small notebook for a system administrator +====== + +I am a system administrator, and I need a small, lightweight notebook for every day carrying. Of course, not just to carry it, but for use it to work. + +I already have a ThinkPad x200, but it’s heavier than I would like. And among the lightweight notebooks, I did not find anything suitable. All of them imitate the MacBook Air: thin, shiny, glamorous, and they all critically lack ports. Such notebook is suitable for posting photos on Instagram, but not for work. At least not for mine. + +After not finding anything suitable, I thought about how a notebook would turn out if it were developed not with design, but the needs of real users in mind. System administrators, for example. Or people serving telecommunications equipment in hard-to-reach places — on roofs, masts, in the woods, literally in the middle of nowhere. + +The results of my thoughts are presented in this article. 
+ + +[![Figure to attract attention][1]][2] + +Of course, your understanding of the admin notebook does not have to coincide with mine. But I hope you will find a couple of interesting thoughts here. + +Just keep in mind that «system administrator» is just the name of my position. And in fact, I have to work as a network engineer, and installer, and perform a significant part of other work related to hardware. Our company is tiny, we are far from large settlements, so all of us have to be universal specialist. + +In order not to constantly clarify «this notebook», later in the article I will call it the “adminbook”. Although it can be useful not only to administrators, but also to all who need a small, lightweight notebook with a lot of connectors. In fact, even large laptops don’t have as many connectors. + +So let's get started… + +### 1\. Dimensions and weight + +Of course, you want it smaller and more lightweight, but the keyboard with the screen should not be too small. And there has to be space for connectors, too. + +In my opinion, a suitable option is a notebook half the size of an x200. That is, approximately the size of a sheet of A5 paper (210x148mm). In addition, the side pockets of many bags and backpacks are designed for this size. This means that the adminbook doesn’t even have to be carried in the main compartment. + +Though I couldn’t fit everything I wanted into 210mm. To make a comfortable keyboard, the width had to be increased to 230mm. + +In the illustrations the adminbook may seem too thick. But that’s only an optical illusion. In fact, its thickness is 25mm (28mm taking the rubber feet into account). + +Its size is close to the usual hardcover book, 300-350 pages thick. + +It’s lightweight, too — about 800 grams (half the weight of the ThinkPad). + +The case of the adminbook is made of mithril aluminum. It’s a lightweight, durable metal with good thermal conductivity. + +### 2\. Keyboard and trackpoint + +A quality keyboard is very important for me. “Quality” there means the fastest possible typing and hotkey speed. It needs to be so “matter-of-fact” I don’t have to think about it at all, as if it types seemingly by force of thought. + +This is possible if the keys are normal size and in their typical positions. But the adminbook is too small for that. In width, it is even smaller than the main block of keys of a desktop keyboard. So, you have to work around that somehow. + +After a long search and numerous tests, I came up with what you see in the picture: + +![](https://habrastorage.org/webt/2-/mh/ag/2-mhagvoofl7vgqiadv3rcnclb0.jpeg) +Fig.2.1 — Adminbook keyboard + +This keyboard has the same vertical key distance as on a regular keyboard. A horizontal distance decreased just only 2mm (17 instead of 19). + +You can even type blindly on this keyboard! To do this, some keys have small bumps for tactile orientation. + +However, if you do not sit at a table, the main input method will be to press the keys “at a glance”. And here the muscle memory does not help — you have to look at the keys with your eyes. + +To hit the buttons faster, different key colors are used. + +For example, the numeric row is specifically colored gray to visually separate it from the QWERTY row, and NumLock is mapped to the “6” key, colored black to stand out. + +To the right of NumLock, gray indicates the area of the numeric keypad. These (and neighboring) buttons work like a numeric keypad in NumLock mode or when you press Fn. 
I must say, this is a useful feature for the admin computer — some users come up with passwords on the numpad in the form of a “cross”, “snake”, “spiral”, etc. I want to be able to type them that way too. + +As for the function keys. I don’t know about you, but it annoys me when, in a 15-inch laptop, this row is half-height and only accessible through pressing Fn. Given that there’s a lot free space around the keyboard! + +The adminbook doesn’t have free space at all. But the function keys can be pressed without Fn. These are separate keys that are even divided into groups of 4 using color coding and location. + +By the way, have you seen which key is to the right of AltGr on modern ThinkPads? I don’t know what they were thinking, but now they have PrintScreen there! + +Where? Where, I ask, is the context menu key that I use every day? It’s not there. + +So the adminbook has it. Two, even! You can put it up by pressing Fn + Alt. Sorry, I couldn’t map it to a separate key due to lack of space. Just in case, I added the “Right Win” key as Fn + CtrlR. Maybe some people use it for something. + +However, the adminbook allows you to customize the keyboard to your liking. The keyboard is fully reprogrammable. You can assign the scan codes you need to the keys. Setting the keyboard parameters is done via the “KEY” button (Fn + F3). + +Of course, the adminbook has a keyboard backlight. It is turned on with Fn + B (below the trackpoint, you can even find it in the dark). The backlight here is similar to the ThinkPad ThinkLight. That is, it’s an LED above the display, illuminating the keyboard from the top. In this case, it is better than a backlight from below, because it allows you to distinguish the color of the keys. In addition, keys have several characters printed on them, while only English letters are usually made translucent to the backlight. + +Since we’re on the topic of characters… Red letters are Ukrainian and Russian. I specifically drew them to show that keys have space for several alphabets: after all, English is not a native language for most of humanity. + +Since there isn’t enough space for a full touchpad, the trackpoint is used as the positioning device. If you have no experience working with it — don’t worry, it’s actually quite handy. The mouse cursor moves with slight inclines of the trackpoint, like an analog joystick, and its three buttons (under the spacebar) work the same as on the mouse. + +To the left of the trackpoint keys is a fingerprint scanner. That makes it possible to login by fingerprint. It’s very convenient in most cases. + +The space bar has an NFC antenna location mark. You can simply read data from devices equipped with NFC, and you can make it to lock the system while not in use. For example, if you wear an NFC-equipped ring, it looks like this: when you remove hands from the keyboard, the computer locks after a certain time, and unlocks when you put hands on the keyboard again. + +And now the unexpected part. The keyboard and the trackpoint can work as a USB keyboard and mouse for an external computer! For this, there are USB Type C and MicroUSB connectors on the back, labeled «OTG». You can connect to an external computer using a standard USB cable from a phone (which is usually always with you). 
+ +![](https://habrastorage.org/webt/e2/wa/m5/e2wam5d1bbckfdxpvqwl-i6aqle.jpeg) +Fig.2.2 — On the right: the power connector 5.5x2.5mm, the main LAN connector, POE indicator, USB 3.0 Type A, USB Type C (with alternate HDMI mode), microSD card reader and two «magic» buttons + +Switching to the external keyboard mode is done with the «K» button on the right side of the adminbook. And there are actually three modes, since the keyboard+trackpoint combo can also work as a Bluetooth keyboard/mouse! + +Moreover: to save energy, the keyboard and trackpoint can work autonomously from the rest of the adminbook. When the adminbook is turned off, pressing «K» can turn on only the keyboard and trackpoint to use them by connecting to another computer. + +Of course, the keyboard is water-resistant. Excess water is drained down through the drainage holes. + +### 3\. Video subsystem + +There are some devices that normally do not need a monitor and keyboard. For example, industrial computers, servers or DVRs. And since the monitor is «not needed», it is, in most cases, absent. + +And when there is a need to configure such a device from the console, it can be a big surprise that the entire office is working on laptops and there is not a single stationary monitor within reach. Therefore, in some cases you have to take a monitor with you. + +But you don’t need to worry about this if you have the adminbook. + +The fact is that the video outputs of the adminbook can switch «in the opposite direction» and work as video inputs by displaying the incoming image on the built-in screen. So, the adminbook can also replace the monitor (in addition to replace the mouse and keyboard). + +![](https://habrastorage.org/webt/4a/qr/f-/4aqrf-1sgstwwffhx-n4wr0p7ws.jpeg) +Fig.3.1 — On the left side of the adminbook, there are Mini DisplayPort, USB Type C (with alternate DisplayPort mode), SD card reader, USB 3.0 Type A connectors, HDMI, four audio connectors, VGA and power button + +Switching modes between input and output is done by pressing the «M» button on the right side of the adminbook. + +The video subsystem, as well as the keyboard, can work autonomously — that is, when used as a monitor, the other parts of the adminbook remain disabled. To turn on to this mode also uses the «M» button. + +Detailed screen adjustment (contrast, geometry, video input selection, etc.) is performed using the menu, brought up with the «SCR» button (Fn + F4). + +The adminbook has HDMI, MiniDP, VGA and USB Type C connectors (with DisplayPort and HDMI alternate mode) for video input / output. The integrated GPU can display the image simultaneously in three directions (including the integrated display). + +The adminbook display is FullHD (1920x1080), 9.5’’, matte screen. The brightness is sufficient for working outside during the day. And to do it better, the set includes folding blinds for protection from sunlight. + +![](https://habrastorage.org/webt/k-/nc/rh/k-ncrhphspvcoimfds1wurnzk3i.jpeg) +Fig.3.2 — Blinds to protect from sunlight + +In addition to video output via these connectors, the adminbook can use wireless transmission via WiDi or Miracast protocols. + +### 4\. Emulation of external drives + +One of the options for installing the operating system is to install it from a CD / DVD, but now very few computers have optical drives. USB connectors are everywhere, though. Therefore, the adminbook can pretend to be an external optical drive connected via USB. 
+ +That allows connecting it to any computer to install an operating system on it, while also running boot discs with test programs or antiviruses. + +To connect, it uses the same USB cable that’s used for connecting it to a desktop as an external keyboard/mouse. + +The “CD” button (Fn + F2) controls the drive emulation — select a disc image (in an .iso file) and mount / unmount it. + +If you need to copy data from a computer or to it, the adminbook can emulate an external hard drive connected via the same USB cable. HDD emulation is also enabled by the “CD” button. + +This button also turns on the emulation of bootable USB flash drives. They are now used to install operating systems almost more often than CDs. Therefore, the adminbook can pretend to be a bootable flash drive. + +The .iso files are located on a separate partition of the hard disk. This allows you to use them regardless of the operating system. Moreover, in the emulation menu you can connect a virtual drive to one of the USB interfaces of the adminbook. This makes it possible to install an operating system on the adminbook using itself as an installation disc drive. + +By the way, the adminbook is designed to work under Windows 10 and Debian / Kali / Ubuntu. The menu system called via function buttons with Fn works autonomously on a separate microcontroller. + +### 5\. Rear connectors + +First, a classic DB-9 connector for RS-232. Any admin notebook simply has to have it. We have it here, too, and galvanically isolated from the rest of the notebook. + +In addition to RS-232, RS-485 widely used in industrial automation is supported. It has a two-wire and four-wire version, with a terminating resistor and without, with the ability to enable a protective offset. It can also work in RS-422 and UART modes. + +All these protocols are configured in the on-screen menu, called by the «COM» button (Fn + F8). + +Since there are multiple protocols, it is possible to accidentally connect the equipment to a wrong connector and break it. + +To prevent this from happening, when you turn off the computer (or go into sleep mode, or close the display lid), the COM port switches to the default mode. This may be a “port disabled” state, or enabling one of the protocols. + +![](https://habrastorage.org/webt/uz/ii/ig/uziiig_yr86yzdcnivkbapkbbgi.jpeg) +Fig.5.1 — The rear connectors: DB-9, SATA + SATA Power, HD Mini SAS, the second wired LAN connector, two USB 3.0 Type A connectors, two USB 2.0 MicroB connectors, three USB Type C connectors, a USIM card tray, a PBD-12 pin connector (jack) + +The adminbook has one more serial port. But if the first one uses the hardware UART chipset, the second one is connected to the USB 2.0 line through the FT232H converter. + +Thanks to this, via COM2, you can exchange data via I2C, SMBus, SPI, JTAG, UART protocols or use it as 8 outputs for Bit-bang / GPIO. These protocols are used when working with microcontrollers, flashing firmware on routers and debugging any other electronics. For this purpose, pin connectors are usually used with a 2.54mm pitch. Therefore, COM2 is made to look like one of these connectors. + +![](https://habrastorage.org/webt/qd/rc/ln/qdrclnoljgnlohthok4hgjb0be4.jpeg) +Fig.5.2 — USB to UART adapter replaced by COM2 port + +There is also a secondary LAN interface at the back. Like the main one, it is gigabit-capable, with support for VLAN. 
Both interfaces are able to test the integrity of the cable (for pair length and short circuits), the presence of connected devices, available communication speeds, the presence of POE voltage. With the using a wiremap adapter on the other side (see chapter 17) it is possible to determine how the cable is connected to crimps. + +The network interface menu is called with the “LAN” button (Fn + F6). + +The adminbook has a combined SATA + SATA Power connector, connected directly to the chipset. That makes it possible to perform low-level tests of hard drives that do not work through USB-SATA adapters. Previously, you had to do it through ExpressCards-type adapters, but the adminbook can do without them because it has a true SATA output. + +![](https://habrastorage.org/webt/dr/si/in/drsiinbafiyz8ztzwrowtvi0lk8.jpeg) +Fig.5.3 — USB to SATA/IDE and ExpressCard to SATA adapters + +The adminbook also has a connector that no other laptops have — HD Mini SAS (SFF-8643). PCIe x4 is routed outside through this connector. Thus, it's possible to connect an external U.2 (directly) or M.2 type (through an adapter) drives. Or even a typical desktop PCIe expansion card (like a graphics card). + +![](https://habrastorage.org/webt/ud/ph/86/udph860bshazyd6lvuzvwgymwnk.jpeg) +Fig.5.4 — HD Mini SAS (SFF-8643) to U.2 cable + +![](https://habrastorage.org/webt/kx/dd/99/kxdd99krcllm5ooz67l_egcttym.jpeg) +Fig.5.5 — U.2 drive + +![](https://habrastorage.org/webt/xn/de/gx/xndegxy5i1g7h2lwefs2jt1scpq.jpeg) +Fig.5.6 — U.2 to M.2 adapter + +![](https://habrastorage.org/webt/z2/dd/hd/z2ddhdoioezdwov_nv9e3b0egsa.jpeg) +Fig.5.7 — Combined adapter from U.2 to M.2 and PCIe (sample M.2 22110 drive is installed) + +Unfortunately, the limitations of the chipset don’t allow arbitrary use of PCIe lanes. In addition, the processor uses the same data lanes for PCIe and SATA. Therefore, the rear connectors can only work in two ways: +— all four PCIe lanes go to the Mini SAS connector (the second network interface and SATA don’t work) +— two PCIe lanes go to the Mini SAS, and two lanes to the second network interface and SATA connector + +On the back there are also two USB connectors (usual and Type C), which are constantly powered. That allows you to charge other devices from your notebook, even when the notebook is turned off. + +### 6\. Power Supply + +The adminbook is designed to work in difficult and unpredictable conditions, therefore, it is able to receive power in various ways. + +**Method number one** is Power Delivery. The power supply cable can be connected to any USB Type C connector (except the one marked “OTG”). + +**The second option** is from a normal 5V phone charger with a microUSB or USB Type C connector. At the same time, if you connect to the ports labeled QC 3.0, the QuickCharge fast charging standard will be supported. + +**The third option** — from any source of 12-60V DC power. To connect, use a coaxial ( also known as “barrel”) 5.5x2.5mm power connector, often found in laptop power supplies. + +For greater safety, the 12-60V power supply is galvanically isolated from the rest of the notebook. In addition, there’s reverse polarity protection. In fact, the adminbook can receive energy even if positive and negative ends are mismatched. + +![](https://habrastorage.org/webt/ju/xo/c3/juxoc3lxi7urqwgegyd6ida5h_8.jpeg) +Fig.6.1 — The cable, connecting the power supply to the adminbook (terminated with 5.5x2.5mm connectors) + +Adapters for a car cigarette lighter and crocodile clips are included in the box. 
+ +![](https://habrastorage.org/webt/l6/-v/gv/l6-vgvqjrssirnvyi14czhi0mrc.jpeg) +Fig.6.2 — Adapter from 5.5x2.5mm coaxial connector to crocodile clips + +![](https://habrastorage.org/webt/zw/an/gs/zwangsvfdvoievatpbfxqvxrszg.png) +Fig.6.3 — Adapter to a car cigarette lighter + +**The fourth option** — Power Over Ethernet (POE) through the main network adapter. Supported options are 802.3af, 802.3at and Passive POE. Input voltage from 12 to 60V. This method is convenient if you have to work on the roof or on the tower, setting up Wi-Fi antennas. Power to them comes through Ethernet cables, and there is no other electricity on the tower. + +POE electricity can be used in three ways: + + * power the notebook only + * forward to a second network adapter and power the notebook from batteries + * power the notebook and the antenna at the same time + + + +To prevent equipment damage, if one of the Ethernet cables is disconnected, the power to the second network interface is terminated. The power can only be turned on manually through the corresponding menu item. + +When using the 802.3af / at protocols, you can set the power class that the adminbook will request from the power supply device. This and other POE properties are configured from the menu called with the “LAN” button (Fn + F6). + +By the way, you can remotely reset Ubiquity access points (which is done by closing certain wires in the cable) with the second network interface. + +The indicator next to the main network interface shows the presence and type of POE: green — 802.3af / at, red — Passive POE. + +**The last, fifth** power supply is the battery. Here it’s a LiPol, 42W/hour battery. + +In case the external power supply does not provide sufficient power, the missing power can be drawn from the battery. Thus, it can draw power from the battery and external sources at the same time. + +### 7\. Display unit + +The display can tilt 180 degrees, and it’s locked with latches while closed (opens with a button on the front side). When the display is closed, adminbook doesn’t react to pressing any external buttons. + +In addition to the screen, the notebook lid contains: + + * front and rear cameras with lights, microphones, activity LEDs and mechanical curtains + * LED of the upper backlight of the keyboard (similar to ThinkLight) + * LED indicators for Wi-Fi, Bluetooth, HDD and others + * wireless protocol antennas (in the blue plastic insert) + * photo sensors and LEDs for the infrared remote + * gyroscope, accelerometer, magnetometer + + + +The plastic insert for the antennas does not reach the corners of the display lid. This is done because in the «traveling» notebooks the corners are most affected by impacts, and it's desirable that they be made of metal. + +### 8\. Webcams + +The notebook has 2 webcams. The front-facing one is 8MP (4K / UltraHD), while the “selfie” one is 2MP (FullHD). Both cameras have a backlight controlled by separate buttons (Fn + G and Fn + H). Each camera has a mechanical curtain and an activity LED. The shifted mechanical curtain also turns off the microphones of the corresponding side (configurable). + +The external camera has two quick launch buttons — Fn + 1 takes an instant photo, Fn + 2 turns on video recording. The internal camera has a combination of Fn + Q and Fn + W. + +You can configure cameras and microphones from the menu called up by the “CAM” button (Fn + F10). + +### 9\. 
Indicator row + +It has the following indicators: Microphone, NumLock, ScrollLock, hard drive access, battery charge, external power connection, sleep mode, mobile connection, WiFi, Bluetooth. + +Three indicators are made to shine through the back side of the display lid, so that they can be seen while the lid is closed: external power connection, battery charge, sleep mode. + +Indicators are color-coded. + +Microphone — lights up red when all microphones are muted + +Battery charge: more than 60% is green, 30-60% is yellow, less than 30% is red, less than 10% is blinking red. + +External power: green — power is supplied, the battery is charged; yellow — power is supplied, the battery is charging; red — there is not enough external power to operate, the battery is drained + +Mobile: 4G (LTE) — green, 3G — yellow, EDGE / GPRS — red, blinking red — on, but no connection + +Wi-Fi: green — connected to 5 GHz, yellow — to 2.4 GHz, red — on, but not connected + +You can configure the indication with the “IND” button (Fn + F9) + +### 10\. Infrared remote control + +Near the indicators (on the front and back of the display lid) there are infrared photo sensors and LEDs to recording and playback commands from IR remotes. You can set it up, as well as emulate a remote control by pressing the “IR” button (Fn + F5). + +### 11\. Wireless interfaces + +WiFi — dual-band, 802.11a/b/g/n/ac with support for Wireless Direct, Intel WI-Di / Miracast, Wake On Wireless LAN. + +You ask, why is Miracast here? Because is already embedded in many WiFi chips, so its presence does not lead to additional costs. But you can transfer the image wirelessly to TVs, projectors and TV set-top boxes, that already have Miracast built in. + +Regarding Bluetooth, there’s nothing special. It’s version 4.2 or newest. By the way, the keyboard and trackpoint have a separate Bluetooth module. This is much easier than connect them to the system-wide module. + +Of course, the adminbook has a built-in cellular modem for 4G (LTE) / 3G / EDGE / GPRS, as well as a GPS / GLONASS / Galileo / Beidou receiver. This receiver also doesn’t cost much, because it’s already built into the 4G modem. + +There is also an NFC communication module, with the antenna under the spacebar. Antennas of all other wireless interfaces are in a plastic insert above the display. + +You can configure wireless interfaces with the «WRLS» button (Fn + F7). + +### 12\. USB connectors + +In total, four USB 3.0 Type A connectors and four USB 3.1 Type C connectors are built into the adminbook. Peripherals are connected to the adminbook through these. + +One more Type C and MicroUSB are allocated only for keyboard / mouse / drive emulation (denoted as “OTG”). + +«QC 3.0» labeled MicroUSB connector can not only be used for power, but it can switch to normal USB 2.0 port mode, except using MicroB instead of normal Type A. Why is it necessary? Because to flash some electronics you sometimes need non-standard USB A to USB A cables. + +In order to not make adapters outselves, you can use a regular phone charging cable by plugging it into this Micro B connector. Or use an USB A to USB Type C cable (if you have one). + +![](https://habrastorage.org/webt/0p/90/7e/0p907ezbunekqwobeogjgs5fgsa.jpeg) +Fig.12.1 — Homemade USB A to USB A cable + +Since USB Type C supports alternate modes, it makes sense to use it. Alternate modes are when the connector works as HDMI or DisplayPort video outputs. Though you’ll need adapters to connect it to a TV or monitor. 
Or appropriate cables that have Type C on one end and HDMI / DP on the other. However, USB Type C to USB Type C cables might soon become the most common video transfer cable. + +The Type C connector on the left side of the adminbook supports an alternate Display Port mode, and on the right side, HDMI. Like the other video outputs of the adminbook, they can work as both input and output. + +The one thing left to say is that Type C is bidirectional in regard to power delivery — it can both take in power as well as output it. + +### 13\. Other + +On the left side there are four audio connectors: Line In, Line Out, Microphone and the combo headset jack (headphones + microphone). Supports simple stereo, quad and 5.1 mode output. + +Audio outputs are specially placed next to the video connectors, so that when connected to any equipment, the wires are on one side. + +Built-in speakers are on the sides. Outside, they are covered with grills and acoustic fabric with water-repellent impregnation. + +There are also two slots for memory cards — full-size SD and MicroSD. If you think that the first slot is needed only for copying photos from the camera — you are mistaken. Now, both single-board computers like Raspberry Pi and even rack-mount servers are loaded from SD cards. MicroSD cards are also commonly found outside of phones. In general, you need both card slots. + +Sensors more familiar to phones — a gyroscope, an accelerometer and a magnetometer — are built into the lid of the notebook. Thanks to this, one can determine where the notebook cameras are directed and use this for augmented reality, as well as navigation. Sensors are controlled via the menu using the “SNSR” button (Fn + F11). + +Among the function buttons with Fn, F1 (“MAN”) and F12 (“ETC”) I haven’t described yet. The first is a built-in guide on connectors, modes and how to use the adminbook. The second is the settings of non-standard subsystems that have not separate buttons. + +### 14\. What's inside + +The adminbook is based on the Core i5-7Y57 CPU (Kaby Lake architecture). Although it’s less of a CPU, but more of a real SOC (System On a Chip). That is, almost the entire computer (without peripherals) fits in one chip the size of a thumb nail (2x1.6 cm). + +It emits from 3.5W to 7W of heat (depends on the frequency). So, a passive cooling system is adequate in this case. + +8GB of RAM are installed by default, expandable up to 16GB. + +A 256GB M.2 2280 SSD, connected with two PCIe lanes, is used as the hard drive. + +Wi-Fi + Bluetooth and WWAN + GNSS adapters are also designed as M.2 modules. + +RAM, the hard drive and wireless adapters are located on the top of the motherboard and can be replaced by the user — just unscrew and lift the keyboard. + +The battery is assembled from four LP545590 cells and can also be replaced. + +SOC and other irreplaceable hardware are located on the bottom of the motherboard. The heating components for cooling are pressed directly against the case. + +External connectors are located on daughter boards connected to the motherboard via ribbon cables. That allows to release different versions of the adminbook based on the same motherboard. 
+ +For example, one of the possible version: + +![](https://habrastorage.org/webt/j9/sw/vq/j9swvqfi1-ituc4u9nr6-ijv3nq.jpeg) +Fig.14.1 — Adminbook A4 (front view) + +![](https://habrastorage.org/webt/pw/fq/ag/pwfqagvrluf1dbnmcd0rt-0eyc0.jpeg) +Fig.14.2 — Adminbook A4 (back view) + +![](https://habrastorage.org/webt/mn/ir/8i/mnir8in1pssve0m2tymevz2sue4.jpeg) +Fig.14.3 — Adminbook A4 (keyboard) + +This is an adminbook with a 12.5” display, its overall dimensions are 210x297mm (A4 paper format). The keyboard is full-size, with a standard key size (only the top row is a bit narrower). All the standard keys are there, except for the numpad and the Right Win, available with Fn keys. And trackpad added. + +### 15\. The underside of the adminbook + +Not expecting anything interesting from the bottom? But there is! + +First I will say a few words about the rubber feet. On my ThinkPad, they sometimes fall away and lost. I don't know if it's a bad glue, or a backpack is not suitable for a notebook, but it happens. + +Therefore, in the adminbook, the rubber feet are screwed in (the screws are slightly buried in rubber, so as not to scratch the tables). The feet are sufficiently streamlined so that they cling less to other objects. + +On the bottom there are visible drainage holes marked with a water drop. + +And the four threaded holes for connecting the adminbook with fasteners. + +![](https://habrastorage.org/webt/3d/q9/ku/3dq9kus6t7ql3rh5mbpfo3_xqng.jpeg) +Fig.15.1 — The underside of the adminbook + +Large hole in the center has a tripod thread. + +![](https://habrastorage.org/webt/t5/e5/ps/t5e5ps3iasu2j-22uc2rgl_5x_y.jpeg) +Fig.15.2 — Camera clamp mount + +Why is it necessary? Because sometimes you have to hang on high, holding the mast with one hand, holding the notebook with the second, and typing something on the third… Unfortunately, I am not Shiva, so these tricks are not easy for me. And you can just screw the adminbook by a camera mount to any protruding part of the structure and free your hands! + +No protruding parts? No problem. A plate with neodymium magnets is screwed to three other holes and the adminbook is magnetised to any steel surface — even vertical! As you see, opening the display by 180° is pretty useful. + +![](https://habrastorage.org/webt/ua/28/ub/ua28ubhpyrmountubiqjegiibem.jpeg) +Fig.15.3 — Fastening with magnets and shaped holes for nails / screws + +And if there is no metal? For example, working on the roof, and next to only a wooden wall. Then you can screw 1-2 screws in the wall and hang the adminbook on them. To do this, there are special slots in the mount, plus an eyelet on the handle. + +For especially difficult cases, there’s an arm mount. This is not very convenient, but better than nothing. Besides, it allows you to navigate even with a working notebook. + +![](https://habrastorage.org/webt/tp/fo/0y/tpfo0y_8gku4bmlbeqwfux1j4me.jpeg) +Fig.15.4 — Arm mount + +In general, these three holes use a regular metric thread, specifically so that you can make some DIY fastening and fasten it with ordinary screws. + +Except fasteners, an additional radiator can be screwed to these holes, so that you can work for a long time under high load or at high ambient temperature. + +![](https://habrastorage.org/webt/k4/jo/eq/k4joeqhmaxgvzhnxno6z3alg5go.jpeg) +Fig.15.5 — Adminbook with additional radiator + +### 16\. Accessories + +The adminbook has some unique features, and some of them are implemented using equipment designed specifically for the adminbook. 
Therefore, these accessories are immediately included. However, non-unique accessories are also available immediately. + +Here is a complete list of both: + + * fasteners with magnets + * arm mount + * heatsink + * screen blinds covering it from sunlight + * HD Mini SAS to U.2 cable + * combined adapter from U.2 to M.2 and PCIe + * power cable, terminated by coaxial 5.5x2.5mm connectors + * adapter from power cable to cigarette lighter + * adapter from power cable to crocodile clips + * different adapters from the power cable to coaxial connectors + * universal power supply and power cord from it into the outlet + + + +### 17\. Power supply + +Since this is a power supply for a system administrator's notebook, it would be nice to make it universal, capable of powering various electronic devices. Fortunately, the vast majority of devices are connected via coaxial connectors or USB. I mean devices with external power supplies: routers, switches, notebooks, nettops, single-board computers, DVRs, IPTV set top boxes, satellite tuners and more. + +![](https://habrastorage.org/webt/jv/zs/ve/jvzsveqavvi2ihuoajjnsr1xlp0.jpeg) +Fig.17.1 — Adapters from 5.5x2.5mm coaxial connector to other types of connectors + +There aren’t many connector types, which allows to get by with an adjustable-voltage PSU and adapters for the necessary connectors. It also needs to support various power delivery standards. + +In our case, the power supply supports the following modes: + + * Power Delivery — displayed as **[pd]** + * Quick Charge **[qc]** + * 802.3af/at **[at]** + * voltage from 5 to 54 volts in 0.5V increments (displayed voltage) + + + +![](https://habrastorage.org/webt/fj/jm/qv/fjjmqvdhezywuyh9ew3umy9wgmg.jpeg) +Fig.17.2 — Mode display on the 7-segment indicator (1.9. = 19.5V) + +![](https://habrastorage.org/webt/h9/zg/u0/h9zgu0ngl01rvhgivlw7fb49gpq.jpeg) +Fig.17.3 — Front and top sides of power supply + +USB outputs on the power supply (5V 2A) are always on. On the other outputs the voltage is applied by pressing the ON/OFF button. + +The desired mode is selected with the MODE button and this selection is remembered even when the power is turned off. The modes are listed like this: pd, qc, at, then a series of voltages. + +Voltage increases by pressing and holding the MODE button, decreases by short pressing. Step to the right — 1 Volt, step to the left — 0.5 Volt. Half a volt is needed because some equipment requires, for example, 19.5 volts. These half volts are displayed on the display with decimal points (19V -> **[19]** , 19.5V -> **[1.9.]** ). + +When power is on, the green LED is on. When a short-circuit or overcurrent protection is triggered, **[SH]** is displayed, and the LED lights up red. + +In the Power Delivery and Quick Charge modes, voltage is applied to the USB outputs (Type A and Type C). Only one of them can be used at one time. + +In 802.3af/at modes, the power supply acts as an injector, combining the supply voltage with data from the LAN connector and supplying it to the POE connector. Power is supplied only if a device with 802.3af or 802.3at support is plugged into the POE connector. + +But in the simple voltage supply mode, electricity throu the POE connector is issued immediately, without any checks. This is the so-called Passive POE — positive charge goes to conductors 4 and 5, and negative charge to conductors 7 and 8. At the same time, the voltage is applied to the coaxial connector. Adapters for various types of connectors are used in this mode. 
+ +The power supply unit has a built-in button to remotely reset Ubiquity access points. This is a very useful feature that allows you to reset the antenna to factory settings without having to climb on the mast. I don’t know — is any other manufacturers support a feature like this? + +The power supply also has the passive wiremap adapter, which allows you to determine the correct Ethernet cable crimping. The active part is located in the Ethernet ports of the adminbook. + +![](https://habrastorage.org/webt/pp/bm/ws/ppbmws4g1o5j05eyqqulnwuuwge.jpeg) +Fig.17.4 — Back side and wiremap adapter + +Of course, the network cable tester built into the adminbook will not replace a professional OTDR, but for most tasks it will be enough. + +To prevent overheating, part of the PSU’s body acts as an aluminum heatsink. Power supply power — 65 watts, size 10x5x4cm. + +### 18\. Afterword + +“It won’t fit into such a small case!” — the sceptics will probably say. To be frank, I also sometimes think that way when re-reading what I wrote above. + +And then I open the 3D model and see, that all parts fits. Of course, I am not an electronic engineer, and for sure I miss some important things. But, I hope that if there are mistakes, they are “overcorrections”. That is, real engineers would fit all of that into a case even smaller. + +By and large, the adminbook can be divided into 5 functional parts: + + * the usual part, as in all notebooks — processor, memory, hard drive, etc. + * keyboard and trackpoint that can work separately + * autonomous video subsystem + * subsystem for managing non-standard features (enable / disable POE, infrared remote control, PCIe mode switching, LAN testing, etc.) + * power subsystem + + + +If we consider them separately, then everything looks quite feasible. + +The **SOC Kaby Lake** contains a CPU, a graphics accelerator, a memory controller, PCIe, SATA controller, USB controller for 6 USB3 and 10 USB2 outputs, Gigabit Ethernet controller, 4 lanes to connect webcams, integrated audio and etc. + +All that remains is to trace the lanes to connectors and supply power to it. + +**Keyboard and trackpoint** is a separate module that connects via USB to the adminbook or to an external connector. Nothing complicated here: USB and Bluetooth keyboards are very widespread. In our case, in addition, needs to make a rewritable table of scan codes and transfer non-standard keys over a separate interface other than USB. + +**The video subsystem** receives the video signal from the adminbook or from external connectors. In fact, this is a regular monitor with a video switchboard plus a couple of VGA converters. + +**Non-standard features** are managed independently of the operating system. The easiest way to do it with via a separate microcontroller which receives codes for pressing non-standard keys (those that are pressed with Fn) and performs the corresponding actions. + +Since you have to display a menu to change the settings, the microcontroller has a video output, connected to the adminbook for the duration of the setup. + +**The internal PSU** is galvanically isolated from the rest of the system. Why not? On habr.com there was an article about making a 100W, 9.6mm thickness planar transformer! And it only costs $0.5. + +So the electronic part of the adminbook is quite feasible. There is the programming part, and I don’t know which part will harder. + +This concludes my fairly long article. It long, even though I simplified, shortened and threw out minor details. 
+ +The ideal end of the article was a link to an online store where you can buy an adminbook. But it's not yet designed and released. Since this requires money. + +Unfortunately, I have no experience with Kickstarter or Indigogo. Maybe you have this experience? Let's do it together! + +### Update + +Many people asked for a simplified version. Ok. Done. Sorry — just a 3d model, without render. + +Deleted: second LAN adapter, micro SD card reader, one USB port Type C, second camera, camera lights and camera curtines, display latch, unnecessary audio connectors. + +Also in this version there will be no infrared remote control, a reprogrammable keyboard, QC 3.0 charging standard, and getting power by POE. + +![](https://habrastorage.org/webt/3l/lg/vm/3llgvmv4pebiruzgldqckab0uyc.jpeg) +![](https://habrastorage.org/webt/sp/x6/rv/spx6rvmn6zlumbwg46xwfmjnako.jpeg) +![](https://habrastorage.org/webt/sm/g0/xz/smg0xzdspfm3vr3gep__6bcqae8.jpeg) + + +-------------------------------------------------------------------------------- + +via: https://habr.com/en/post/437912/ + +作者:[sukhe][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://habr.com/en/users/sukhe/ +[b]: https://github.com/lujun9972 +[1]: https://habrastorage.org/webt/_1/mp/vl/_1mpvlyujldpnad0cvvzvbci50y.jpeg +[2]: https://habrastorage.org/webt/mr/m6/d3/mrm6d3szvghhpghfchsl_-lzgb4.jpeg diff --git a/sources/tech/20190129 Create an online store with this Java-based framework.md b/sources/tech/20190129 Create an online store with this Java-based framework.md new file mode 100644 index 0000000000..b72a8551de --- /dev/null +++ b/sources/tech/20190129 Create an online store with this Java-based framework.md @@ -0,0 +1,235 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create an online store with this Java-based framework) +[#]: via: (https://opensource.com/article/19/1/scipio-erp) +[#]: author: (Paul Piper https://opensource.com/users/madppiper) + +Create an online store with this Java-based framework +====== +Scipio ERP comes with a large range of applications and functionality. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0) + +So you want to sell products or services online, but either can't find a fitting software or think customization would be too costly? [Scipio ERP][1] may just be what you are looking for. + +Scipio ERP is a Java-based open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a clear focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in a multi-store setup, internationally, and with a wide range of product configurations, and it's also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart. + +The system makes it very easy to keep up with modern web standards, too. 
All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't—it all looks a lot like HTML, but you write a lot less of it. + +### Initial setup + +Before you get started, make sure you have Java 1.8 (or greater) SDK and a Git client installed. Got it? Great! Next, check out the master branch from GitHub: + +``` +git clone https://github.com/ilscipio/scipio-erp.git +cd scipio-erp +git checkout master +``` + +To set up the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick to an **installation for development** (Option 1), which will also install a range of demo data. For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured][4] with a wide range of relational databases such as PostgreSQL and MariaDB. + +![Setup wizard][6] + +Follow the setup wizard to complete your initial configuration, + +Start the system with **./start.sh** and head over to **** to complete the configuration. If you installed with demo data, you can log in with username **admin** and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Keep the website entries on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick to the defaults. + +Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality. + +### Shortcuts + +Before you jump into the customization, here are a few handy commands that will help you along the way: + + * Create a shop-override: **./ant create-component-shop-override** + * Create a new component: **./ant create-component** + * Create a new theme component: **./ant create-theme** + * Create admin user: **./ant create-admin-user-login** + * Various other utility functions: **./ant -p** + * Utility to install & update add-ons: **./git-addons help** + + + +Also, make a mental note of the following locations: + + * Scripts to run Scipio as a service: **/tools/scripts/** + * Log output directory: **/runtime/logs** + * Admin application: **** + * E-commerce application: **** + + + +Last, Scipio ERP structures all code in the following five major directories: + + * Framework: framework-related sources, the application server, generic screens, and configurations + * Applications: core applications + * Addons: third-party extensions + * Themes: modifies the look and feel + * Hot-deploy: your own components + + + +Aside from a few configurations, you will be working within the hot-deploy and themes directories. + +### Webstore customizations + +To really make the system your own, start thinking about [components][7]. Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entity][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. 
Thanks to components, you can add your own code while remaining compatible with the original sources. + +Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside of the hot-deploy directory, which extends and overrides the original e-commerce application. + +![component directory structure][13] + +A typical component directory structure. + +Your component will have the following directory structure: + + * config: configurations + * data: seed data + * entitydef: database table definitions + * script: Groovy script location + * servicedef: service definitions + * src: Java classes + * webapp: your web application + * widget: screen definitions + + + +Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process and the **ofbiz-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web apps' **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but stick to the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes. + +#### Adding custom screens + +Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. Here's an example: + +``` +<@section title="Title"> +    <@heading id="slider">Slider +    <@row> +        <@cell columns=6> +            <@slider id="" class="" controls=true indicator=true> +                <@slide link="#" image="https://placehold.it/800x300">Just some content… +                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"> +            +        +        <@cell columns=6>Second column +    + +``` + +Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development. + +Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also available at **** , which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first. + +Open the **[controller.xml][14]** file inside of your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test** : + +``` + + +      +      + +``` + +You can define multiple responses and, if you want, you could use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request-redirects, forwards, and alike. The system comes with various renderers and allows you to determine the output later; to do so, add the following: + +``` + + +``` + +Replace **my-component** with your own component name. Then you can define your very first screen by adding the following inside the tags within the **widget/CommonScreens.xml** file: + +``` + +       
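<!-- A minimal sketch of such a screen definition, assuming the standard
     OFBiz/Scipio "main-decorator" and the test.ftl template created below.
     The decorator name and location are assumptions; adjust them to whatever
     decorator your shop component actually uses. -->
<screen name="test">
    <section>
        <widgets>
            <decorator-screen name="main-decorator" location="${parameters.mainDecoratorLocation}">
                <decorator-section name="body">
                    <platform-specific>
                        <html>
                            <html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/>
                        </html>
                    </platform-specific>
                </decorator-section>
            </decorator-screen>
        </widgets>
    </section>
</screen>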
                    +``` + +Screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For the sake of simplicity, leave this as it is for now, and complete the new webpage by adding your very first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following: + +``` +<@alert type="info">Success! +``` + +![Custom screen][17] + +A custom screen. + +Open **** and marvel at your own accomplishments. + +#### Custom themes + +Modify the look and feel of the shop by creating your very own theme. All themes can be found as components inside of the themes folder. Run **./ant create-theme** to add your own. + +![theme component layout][19] + +A typical theme component layout. + +Here's a list of the most important directories and files: + + * Theme configuration: **data/*ThemeData.xml** + * Theme-specific wrapping HTML: **includes/*.ftl** + * Templating Toolkit HTML definition: **includes/themeTemplate.ftl** + * CSS class definition: **includes/themeStyles.ftl** + * CSS framework: **webapp/theme-title/*** + + + +Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes use of all the things above. Afterwards, set up your own theme inside your newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work. + +Voila! You have set up your own online store and are ready to customize! + +![Finished Scipio ERP shop][21] + +A finished shop based on Scipio ERP. + +### What's next? + +Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/scipio-erp + +作者:[Paul Piper][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/madppiper +[b]: https://github.com/lujun9972 +[1]: https://www.scipioerp.com +[2]: https://ofbiz.apache.org/ +[3]: https://www.scipioerp.com/community/developer/freemarker-macros/ +[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration +[5]: /file/419711 +[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard) +[7]: https://www.scipioerp.com/community/developer/architecture/components/ +[8]: https://www.scipioerp.com/community/developer/entities/ +[9]: https://www.scipioerp.com/community/developer/services/ +[10]: https://www.scipioerp.com/community/developer/views-requests/ +[11]: https://www.scipioerp.com/community/developer/events-actions/ +[12]: /file/419716 +[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure) +[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/ +[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/ +[16]: /file/419721 +[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen) +[18]: /file/419726 +[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout) +[20]: /file/419731 +[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop) +[22]: https://www.scipioerp.com/demo/ +[23]: https://forum.scipioerp.com/ diff --git a/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md b/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md new file mode 100644 index 0000000000..0848111d08 --- /dev/null +++ b/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md @@ -0,0 +1,309 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Configure System-wide Proxy Settings Easily And Quickly) +[#]: via: (https://www.ostechnix.com/how-to-configure-system-wide-proxy-settings-easily-and-quickly/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +How To Configure System-wide Proxy Settings Easily And Quickly +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/ProxyMan-720x340.png) + +Today, we will be discussing a simple, yet useful command line utility named **“ProxyMan”**. As the name says, it helps you to apply and manage proxy settings on our system easily and quickly. Using ProxyMan, we can set or unset proxy settings automatically at multiple points, without having to configure them manually one by one. It also allows you to save the settings for later use. In a nutshell, ProxyMan simplifies the task of configuring system-wide proxy settings with a single command. It is free, open source utility written in **Bash** and standard POSIX tools, no dependency required. ProxyMan can be helpful if you’re behind a proxy server and you want to apply the proxy settings system-wide in one go. 
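For a sense of what "one go" saves you, here is a rough sketch of the per-tool configuration you would otherwise maintain by hand. The proxy address reuses the example values from the walkthrough below; the exact files and option names vary by distribution and tool version:

```
# Shell environment (e.g. in ~/.bashrc)
export http_proxy="http://192.168.225.22:8080/"
export https_proxy="http://192.168.225.22:8080/"

# Git
git config --global http.proxy http://192.168.225.22:8080

# APT (e.g. in /etc/apt/apt.conf.d/80proxy)
Acquire::http::Proxy "http://192.168.225.22:8080/";
```
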
+ +### Installing ProxyMan + +Download the latest ProxyMan version from the [**releases page**][1]. It is available as zip and tar file. I am going to download zip file. + +``` +$ wget https://github.com/himanshub16/ProxyMan/archive/v3.1.1.zip +``` + +Extract the downloaded zip file: + +``` +$ unzip v3.1.1.zip +``` + +The above command will extract the contents in a folder named “ **ProxyMan-3.1.1** ” in your current working directory. Cd to that folder and install ProxyMan as shown below: + +``` +$ cd ProxyMan-3.1.1/ + +$ ./install +``` + +If you see **“Installed successfully”** message as output, congratulations! ProxyMan has been installed. + +Let us go ahead and see how to configure proxy settings. + +### Configure System-wide Proxy Settings + +ProxyMan usage is pretty simple and straight forward. Like I already said, It allows us to set/unset proxy settings, list current proxy settings, list available configs, save settings in a profile and load profile later. Proxyman currently manages proxy settings for **GNOME gsettings** , **bash** , **apt** , **dnf** , **git** , **npm** and **Dropbox**. + +**Set proxy settings** + +To set proxy settings system-wide, simply run: + +``` +$ proxyman set +``` + +You will asked to answer a series of simple questions such as, + + 1. HTTP Proxy host IP address, + 2. HTTP port, + 3. Use username/password authentication, + 4. Use same settings for HTTPS and FTP, + 5. Save profile for later use, + 6. Finally, choose the list of targets to apply the proxy settings. You can choose all at once or separate multiple choices with space. + + + +Sample output for the above command: + +``` +Enter details to set proxy +HTTP Proxy Host 192.168.225.22 +HTTP Proxy Port 8080 +Use auth - userid/password (y/n)? n +Use same for HTTPS and FTP (y/n)? y +No Proxy (default localhost,127.0.0.1,192.168.1.1,::1,*.local) +Save profile for later use (y/n)? y +Enter profile name : proxy1 +Saved to /home/sk/.config/proxyman/proxy1. + +Select targets to modify +| 1 | All of them ... Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Setting proxy... +To activate in current terminal window +run source ~/.bashrc +[sudo] password for sk: +Done +``` + +**List proxy settings** + +To view the current proxy settings, run: + +``` +$ proxyman list +``` + +Sample output: + +``` +Hmm... listing it all + +Shell proxy settings : /home/sk/.bashrc +export http_proxy="http://192.168.225.22:8080/" +export ftp_proxy="ftp://192.168.225.22:8080/" +export rsync_proxy="rsync://192.168.225.22:8080/" +export no_proxy="localhost,127.0.0.1,192.168.1.1,::1,*.local" +export HTTP_PROXY="http://192.168.225.22:8080/" +export FTP_PROXY="ftp://192.168.225.22:8080/" +export RSYNC_PROXY="rsync://192.168.225.22:8080/" +export NO_PROXY="localhost,127.0.0.1,192.168.1.1,::1,*.local" +export https_proxy="/" +export HTTPS_PROXY="/" + +git proxy settings : +http http://192.168.225.22:8080/ +https https://192.168.225.22:8080/ + +APT proxy settings : +3 +Done +``` + +**Unset proxy settings** + +To unset proxy settings, the command would be: + +``` +$ proxyman unset +``` + +You can unset proxy settings for all targets at once by entering number **1** or enter any given number to unset proxy settings for the respective target. + +``` +Select targets to modify +| 1 | All of them ... 
Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Unset all proxy settings +To activate in current terminal window +run source ~/.bashrc +Done +``` + +To apply the changes, simply run: + +``` +$ source ~/.bashrc +``` + +On ZSH, use this command instead: + +``` +$ source ~/.zshrc +``` + +To verify if the proxy settings have been removed, simply run “proxyman list” command: + +``` +$ proxyman list +Hmm... listing it all + +Shell proxy settings : /home/sk/.bashrc +None + +git proxy settings : +http +https + +APT proxy settings : +None +Done +``` + +As you can see, there is no proxy settings for all targets. + +**View list of configs (profiles)** + +Remember we saved proxy settings as a profile in the “Set proxy settings” section? You can view the list of available profiles with command: + +``` +$ proxyman configs +``` + +Sample output: + +``` +Here are available configs! +proxy1 +Done +``` + +As you can see, we have only one profile i.e **proxy1**. + +**Load profiles** + +The profiles will be available until you delete them permanently, so you can load a profile (E.g proxy1) at any time using command: + +``` +$ proxyman load proxy1 +``` + +This command will list the proxy settings for proxy1 profile. You can apply these settings to all or multiple targets by entering the respective number with space-separated. + +``` +Loading profile : proxy1 +HTTP > 192.168.225.22 8080 +HTTPS > 192.168.225.22 8080 +FTP > 192.168.225.22 8080 +no_proxy > localhost,127.0.0.1,192.168.1.1,::1,*.local +Use auth > n +Use same > y +Config > +Targets > +Select targets to modify +| 1 | All of them ... Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Setting proxy... +To activate in current terminal window +run source ~/.bashrc +Done +``` + +Finally, activate the changes using command: + +``` +$ source ~/.bashrc +``` + +For ZSH: + +``` +$ source ~/.zshrc +``` + +**Deleting profiles** + +To delete a profile, run: + +``` +$ proxyman delete proxy1 +``` + +Output: + +``` +Deleting profile : proxy1 +Done +``` + +To display help, run: + +``` +$ proxyman help +``` + + +### Conclusion + +Before I came to know about Proxyman, I used to apply proxy settings manually at multiple places, for example package manager, web browser etc. Not anymore! ProxyMan did this job automatically in couple seconds. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned. + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-configure-system-wide-proxy-settings-easily-and-quickly/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/himanshub16/ProxyMan/releases/ diff --git a/sources/tech/20190131 19 days of productivity in 2019- The fails.md b/sources/tech/20190131 19 days of productivity in 2019- The fails.md new file mode 100644 index 0000000000..e03a6f4ce0 --- /dev/null +++ b/sources/tech/20190131 19 days of productivity in 2019- The fails.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (19 days of productivity in 2019: The fails) +[#]: via: (https://opensource.com/article/19/1/productivity-tool-wish-list) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney)) + +19 days of productivity in 2019: The fails +====== +Here are some tools the open source world doesn't do as well as it could. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way. + +Part of being productive is accepting that failure happens. I am a big proponent of [Howard Tayler's][1] Maxim 70: "Failure is not an option—it is mandatory. The option is whether or not to let failure be the last thing you do." And there were many things I wanted to talk about in this series that I failed to find good answers for. + +So, for the final edition of my 19 new (or new-to-you) open source tools to help you be more productive in 2019, I present the tools I wanted but didn't find. I am hopeful that you, the reader, will be able to help me find some good solutions to the items below. If you do, please share them in the comments. + +### Calendaring + +![](https://opensource.com/sites/default/files/uploads/thunderbird-1.png) + +If there is one thing the open source world is weak on, it is calendaring. I've tried about as many calendar programs as I've tried email programs. There are basically three good options for shared calendaring: [Evolution][2], the [Lightning add-on to Thunderbird][3], or [KOrganizer][4]. All the other applications I've tried (including [Orage][5], [Osmo][6], and almost all of the [Org mode][7] add-ons) seem to reliably support only read-only access to remote calendars. If the shared calendar uses either [Google Calendar][8] or [Microsoft Exchange][9] as the server, the first three are the only easily configured options (and even then, additional add-ons are often required). + +### Linux on the inside + +![](https://opensource.com/sites/default/files/uploads/android-x86-2.png) + +I love [Chrome OS][10], with its simplicity and lightweight requirements. I have owned several Chromebooks, including the latest models from Google. 
I find it to be reasonably distraction-free, lightweight, and easy to use. With the addition of Android apps and a Linux container, it's easy to be productive almost anywhere. + +I'd like to carry that over to some of the older laptops I have hanging around, but unless I do a full compile of Chromium OS, it is hard to find that same experience. The desktop [Android][11] projects like [Bliss OS][12], [Phoenix OS][13], and [Android-x86][14] are getting close, and I'm keeping an eye on them for the future. + +### Help desks + +![](https://opensource.com/sites/default/files/uploads/opennms_jira_dashboard-3.png) + +Customer service is a big deal for companies big and small. And with the added focus on DevOps these days, it is important to have tools to help bridge the gap. Almost every company I've worked with uses either [Jira][15], [GitHub][16], or [GitLab][17] for code issues, but none of these tools are very good at customer support tickets (without a lot of work). While there are many applications designed around customer support tickets and issues, most (if not all) of them are silos that don't play nice with other systems, again without a lot of work. + +On my wishlist is an open source solution that allows customers, support, and developers to work together without an unwieldy pile of code to glue multiple systems together. + +### Your turn + +![](https://opensource.com/sites/default/files/uploads/asciiquarium-4.png) + +I'm sure there are a lot of options I missed during this series. I try new applications regularly, in the hopes that they will help me be more productive. I encourage everyone to do the same, because when it comes to being productive with open source tools, there is always something new to try. And, if you have favorite open source productivity apps that didn't make it into this series, please make sure to share them in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/productivity-tool-wish-list + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney (Kevin Sonney) +[b]: https://github.com/lujun9972 +[1]: https://www.schlockmercenary.com/ +[2]: https://wiki.gnome.org/Apps/Evolution +[3]: https://www.thunderbird.net/en-US/calendar/ +[4]: https://userbase.kde.org/KOrganizer +[5]: https://github.com/xfce-mirror/orage +[6]: http://clayo.org/osmo/ +[7]: https://orgmode.org/ +[8]: https://calendar.google.com +[9]: https://products.office.com/ +[10]: https://en.wikipedia.org/wiki/Chrome_OS +[11]: https://www.android.com/ +[12]: https://blissroms.com/ +[13]: http://www.phoenixos.com/ +[14]: http://www.android-x86.org/ +[15]: https://www.atlassian.com/software/jira +[16]: https://github.com +[17]: https://about.gitlab.com/ diff --git a/sources/tech/20190204 7 Best VPN Services For 2019.md b/sources/tech/20190204 7 Best VPN Services For 2019.md new file mode 100644 index 0000000000..e72d7de3df --- /dev/null +++ b/sources/tech/20190204 7 Best VPN Services For 2019.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 Best VPN Services For 2019) +[#]: via: (https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/) +[#]: author: (Editor https://www.ostechnix.com/author/editor/) + +7 Best VPN Services For 2019 +====== + +At least 67 percent of global businesses in the past three years have faced data breaching. The breaching has been reported to expose hundreds of millions of customers. Studies show that an estimated 93 percent of these breaches would have been avoided had data security fundamentals been considered beforehand. + +Understand that poor data security can be extremely costly, especially to a business and could quickly lead to widespread disruption and possible harm to your brand reputation. Although some businesses can pick up the pieces the hard way, there are still those that fail to recover. Today however, you are fortunate to have access to data and network security software. + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/vpn-1.jpeg) + +As you start 2019, keep off cyber-attacks by investing in a **V** irtual **P** rivate **N** etwork commonly known as **VPN**. When it comes to online privacy and security, there are many uncertainties. There are hundreds of different VPN providers, and picking the right one means striking just the right balance between pricing, services, and ease of use. + +If you are looking for a solid 100 percent tested and secure VPN, you might want to do your due diligence and identify the best match. Here are the top 7 Best tried and tested VPN services For 2019. + +### 1. Vpnunlimitedapp + +With VPN Unlimited, you have total security. This VPN allows you to use any WIFI without worrying that your personal data can be leaked. With AES-256, your data is encrypted and protected against prying third-parties and hackers. This VPN ensures you stay anonymous and untracked on all websites no matter the location. It offers a 7-day trial and a variety of protocol options: OpenVPN, IKEv2, and KeepSolid Wise. 
Demanding users are entitled to special extras such as a personal server, lifetime VPN subscription, and personal IP options. + +### 2. VPN Lite + +VPN Lite is an easy-to-use and **free VPN service** that allows you to browse the internet at no charges. You remain anonymous and your privacy is protected. It obscures your IP and encrypts your data meaning third parties are not able to track your activities on all online platforms. You also get to access all online content. With VPN Lite, you get to access blocked sites in your state. You can also gain access to public WIFI without the worry of having sensitive information tracked and hacked by spyware and hackers. + +### 3. HotSpot Shield + +Launched in 2005, this is a popular VPN embraced by the majority of users. The VPN protocol here is integrated by at least 70 percent of the largest security companies globally. It is also known to have thousands of servers across the globe. It comes with two free options. One is completely free but supported by online advertisements, and the second one is a 7-day trial which is the flagship product. It contains military grade data encryption and protects against malware. HotSpot Shield guaranteed secure browsing and offers lightning-fast speeds. + +### 4. TunnelBear + +This is the best way to start if you are new to VPNs. It comes to you with a user-friendly interface complete with animated bears. With the help of TunnelBear, users are able to connect to servers in at least 22 countries at great speeds. It uses **AES 256-bit encryption** guaranteeing no data logging meaning your data stays protected. You also get unlimited data for up to five devices. + +### 5. ProtonVPN + +This VPN offers you a strong premium service. You may suffer from reduced connection speeds, but you also get to enjoy its unlimited data. It features an intuitive interface easy to use, and comes with a multi-platform compatibility. Proton’s servers are said to be specifically optimized for torrenting and thus cannot give access to Netflix. You get strong security features such as protocols and encryptions meaning your browsing activities remain secure. + +### 6. ExpressVPN + +This is known as the best offshore VPN for unblocking and privacy. It has gained recognition for being the top VPN service globally resulting from solid customer support and fast speeds. It offers routers that come with browser extensions and custom firmware. ExpressVPN also has an admirable scope of quality apps, plenty of servers, and can only support up to three devices. + +It’s not entirely free, and happens to be one of the most expensive VPNs on the market today because it is fully packed with the most advanced features. With it comes a 30-day money-back guarantee, meaning you can freely test this VPN for a month. Good thing is; it is completely risk-free. If you need a VPN for a short duration to bypass online censorship for instance, this could, be your go-to solution. You don’t want to give trials to a spammy, slow, free program. + +It is also one of the best ways to enjoy online streaming as well as outdoor security. Should you need to continue using it, you only have to renew or cancel your free trial if need be. Express VPN has over 2000 servers across 90 countries, unblocks Netflix, gives lightning fast connections, and gives users total privacy. + +### 7. PureVPN + +While this VPN may not be completely free, it falls under the most budget-friendly services on this list. 
Users can sign up for a free seven days trial and later choose one of its paid plans. With this VPN, you get to access 750-plus servers in at least 140 countries. There is also access to easy installation on almost all devices. All its paid features can still be accessed within the free trial window. That includes unlimited data transfers, IP leakage protection, and ISP invisibility. The supproted operating systems are iOS, Android, Windows, Linux, and macOS. + +### Summary + +With the large variety of available freemium VPN services today, why not take that opportunity to protect yourself and your customers? Understand that there are some great VPN services. Even the most secure free service however, cannot be touted as risk free. You might want to upgrade to a premium one for increased protection. Premium VPN allows you to test freely offering risk-free money-back guarantee. Whether you plan to sign up for a paid VPN or commit to a free one, it is highly advisable to have a VPN. + +**About the author:** + +**Renetta K. Molina** is a tech enthusiast and fitness enthusiast. She writes about technology, apps, WordPress and a variety of other topics. In her free time, she likes to play golf and read books. She loves to learn and try new things. + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/ + +作者:[Editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190204 Top 5 open source network monitoring tools.md b/sources/tech/20190204 Top 5 open source network monitoring tools.md new file mode 100644 index 0000000000..afbcae9833 --- /dev/null +++ b/sources/tech/20190204 Top 5 open source network monitoring tools.md @@ -0,0 +1,125 @@ +[#]: collector: (lujun9972) +[#]: translator: (sugarfillet) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 5 open source network monitoring tools) +[#]: via: (https://opensource.com/article/19/2/network-monitoring-tools) +[#]: author: (Paul Bischoff https://opensource.com/users/paulbischoff) + +Top 5 open source network monitoring tools +====== +Keep an eye on your network to avoid downtime with these monitoring tools. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) + +Maintaining a live network is one of a system administrator's most essential tasks, and keeping a watchful eye over connected systems is essential to keeping a network functioning at its best. + +There are many different ways to keep tabs on a modern network. Network monitoring tools are designed for the specific purpose of monitoring network traffic and response times, while application performance management solutions use agents to pull performance data from the application stack. If you have a live network, you need network monitoring to make sure you aren't vulnerable to an attacker. Likewise, if you rely on lots of different applications to run your daily operations, you will need an [application performance management][1] solution as well. + +This article will focus on open source network monitoring tools. These tools help monitor individual nodes and applications for signs of poor performance. 
Through one window, you can view the performance of an entire network and even get alerts to keep you in the loop if you're away from your desk. + +Before we get into the top five network monitoring tools, let's look more closely at the reasons you need to use one. + +### Why do I need a network monitoring tool? + +Network monitoring tools are vital to maintaining networks because they allow you to keep an eye on devices connected to the network from a central location. These tools help flag devices with subpar performance so you can step in and run troubleshooting to get to the root of the problem. + +Running in-depth troubleshooting can minimize performance problems and prevent security breaches. In practical terms, this keeps the network online and eliminates the risk of falling victim to unnecessary downtime. Regular network maintenance can also help prevent outages that could take thousands of users offline. + +A network monitoring tool enables you to: + + * Autodiscover devices connected to your network + * View live and historic performance data for a range of devices and applications + * Configure alerts to notify you of unusual activity + * Generate graphs and reports to analyze network activity in greater depth + +### The top 5 open source network monitoring tools + +Now, that you know why you need a network monitoring tool, take a look at the top 5 open source tools to see which might best meet your needs. + +#### Cacti + +![](https://opensource.com/sites/default/files/uploads/cacti_network-monitoring-tools.png) + +If you know anything about open source network monitoring tools, you've probably heard of [Cacti][2]. It's a graphing solution that acts as an addition to [RRDTool][3] and is used by many network administrators to collect performance data in LANs. Cacti comes with Simple Network Management Protocol (SNMP) support on Windows and Linux to create graphs of traffic data. + +Cacti typically works by using data sourced from user-created scripts that ping hosts on a network. The values returned by the scripts are stored in a MySQL database, and this data is used to generate graphs. + +This sounds complicated, but Cacti has templates to help speed the process along. You can also create a graph or data source template that can be used for future monitoring activity. If you'd like to try it out, [download Cacti][4] for free on Linux and Windows. + +#### Nagios Core + +![](https://opensource.com/sites/default/files/uploads/nagioscore_network-monitoring-tools.png) + +[Nagios Core][5] is one of the most well-known open source monitoring tools. It provides a network monitoring experience that combines open source extensibility with a top-of-the-line user interface. With Nagios Core, you can auto-discover devices, monitor connected systems, and generate sophisticated performance graphs. + +Support for customization is one of the main reasons Nagios Core has become so popular. For example, [Nagios V-Shell][6] was added as a PHP web interface built in AngularJS, searchable tables and a RESTful API designed with CodeIgniter. + +If you need more versatility, you can check the Nagios Exchange, which features a range of add-ons that can incorporate additional features into your network monitoring. These range from the strictly cosmetic to monitoring enhancements like [nagiosgraph][7]. You can try it out by [downloading Nagios Core][8] for free. 
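To give a feel for how monitoring targets are declared, here is a hedged sketch of a Nagios Core host and service definition. It assumes the sample templates (`linux-server`, `generic-service`) and the `check_ping` command shipped with the default configuration; the host name and address are placeholders:

```
define host {
    use        linux-server        ; template from the stock templates.cfg
    host_name  web01
    alias      Example web server
    address    192.0.2.10          ; placeholder address, replace with your host
}

define service {
    use                  generic-service
    host_name            web01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
```
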
+ +#### Icinga 2 + +![](https://opensource.com/sites/default/files/uploads/icinga2_network-monitoring-tools.png) + +[Icinga 2][9] is another widely used open source network monitoring tool. It builds on the groundwork laid by Nagios Core. It has a flexible RESTful API that allows you to enter your own configurations and view live performance data through the dashboard. Dashboards are customizable, so you can choose exactly what information you want to monitor in your network. + +Visualization is an area where Icinga 2 performs particularly well. It has native support for Graphite and InfluxDB, which can turn performance data into full-featured graphs for deeper performance analysis. + +Icinga2 also allows you to monitor both live and historical performance data. It offers excellent alerts capabilities for live monitoring, and you can configure it to send notifications of performance problems by email or text. You can [download Icinga 2][10] for free for Windows, Debian, DHEL, SLES, Ubuntu, Fedora, and OpenSUSE. + +#### Zabbix + +![](https://opensource.com/sites/default/files/uploads/zabbix_network-monitoring-tools.png) + +[Zabbix][11] is another industry-leading open source network monitoring tool, used by companies from Dell to Salesforce on account of its malleable network monitoring experience. Zabbix does network, server, cloud, application, and services monitoring very well. + +You can track network information such as network bandwidth usage, network health, and configuration changes, and weed out problems that need to be addressed. Performance data in Zabbix is connected through SNMP, Intelligent Platform Management Interface (IPMI), and IPv6. + +Zabbix offers a high level of convenience compared to other open source monitoring tools. For instance, you can automatically detect devices connected to your network before using an out-of-the-box template to begin monitoring your network. You can [download Zabbix][12] for free for CentOS, Debian, Oracle Linux, Red Hat Enterprise Linux, Ubuntu, and Raspbian. + +#### Prometheus + +![](https://opensource.com/sites/default/files/uploads/promethius_network-monitoring-tools.png) + +[Prometheus][13] is an open source network monitoring tool with a large community following. It was built specifically for monitoring time-series data. You can identify time-series data by metric name or key-value pairs. Time-series data is stored on local disks so that it's easy to access in an emergency. + +Prometheus' [Alertmanager][14] allows you to view notifications every time it raises an event. Alertmanager can send notifications via email, PagerDuty, or OpsGenie, and you can silence alerts if necessary. + +Prometheus' visual elements are excellent and allow you to switch from the browser to the template language and Grafana integration. You can also integrate various third-party data sources into Prometheus from Docker, StatsD, and JMX to customize your Prometheus experience. + +As a network monitoring tool, Prometheus is suitable for organizations of all sizes. The onboard integrations and the easy-to-use Alertmanager make it capable of handling any workload, regardless of its size. You can [download Prometheus][15] for free. + +### Which are best? + +No matter what industry you're working in, if you rely on a network to do business, you need to implement some form of network monitoring. Network monitoring tools are an invaluable resource that help provide you with the visibility to keep your systems online. 
Monitoring your systems will give you the best chance to keep your equipment in working order. + +As the tools on this list show, you don't need to spend an exorbitant amount of money to reap the rewards of network monitoring. Of the five, I believe Icinga 2 and Zabbix are the best options for providing you with everything you need to start monitoring your network to keep it online. Staying vigilant will help to minimize the change of being caught off-guard by performance issues. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/network-monitoring-tools + +作者:[Paul Bischoff][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/paulbischoff +[b]: https://github.com/lujun9972 +[1]: https://www.comparitech.com/net-admin/application-performance-management/ +[2]: https://www.cacti.net/index.php +[3]: https://en.wikipedia.org/wiki/RRDtool +[4]: https://www.cacti.net/download_cacti.php +[5]: https://www.nagios.org/projects/nagios-core/ +[6]: https://exchange.nagios.org/directory/Addons/Frontends-%28GUIs-and-CLIs%29/Web-Interfaces/Nagios-V-2DShell/details +[7]: https://exchange.nagios.org/directory/Addons/Graphing-and-Trending/nagiosgraph/details#_ga=2.79847774.890594951.1545045715-2010747642.1545045715 +[8]: https://www.nagios.org/downloads/nagios-core/ +[9]: https://icinga.com/products/icinga-2/ +[10]: https://icinga.com/download/ +[11]: https://www.zabbix.com/ +[12]: https://www.zabbix.com/download +[13]: https://prometheus.io/ +[14]: https://prometheus.io/docs/alerting/alertmanager/ +[15]: https://prometheus.io/download/ diff --git a/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md new file mode 100644 index 0000000000..ef8c8dc460 --- /dev/null +++ b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md @@ -0,0 +1,435 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (12 Methods To Check The Hard Disk And Hard Drive Partition On Linux) +[#]: via: (https://www.2daygeek.com/linux-command-check-hard-disks-partitions/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +12 Methods To Check The Hard Disk And Hard Drive Partition On Linux +====== + +Usually Linux admins check the available hard disk and it’s partitions whenever they want to add a new disks or additional partition in the system. + +We used to check the partition table of our hard disk to view the disk partitions. + +This will help you to view how many partitions were already created on the disk. Also, it allow us to verify whether we have any free space or not. + +In general hard disks can be divided into one or more logical disks called partitions. + +Each partitions can be used as a separate disk with its own file system and partition information is stored in a partition table. + +It’s a 64-byte data structure. The partition table is part of the master boot record (MBR), which is a small program that is executed when a computer boots. + +The partition information are saved in the 0 the sector of the disk. 
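If you want to see that for yourself, a quick, read-only way to dump that sector is shown below; it assumes an MBR-partitioned disk at `/dev/sda` and that the `xxd` utility is installed:

```
# Dump the first 512-byte sector (the MBR) and show its tail end,
# where the 64-byte partition table and the 0x55AA boot signature live.
sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -6
```
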
Make a note, all the partitions must be formatted with an appropriate file system before files can be written to it. + +This can be verified using the following 12 methods. + + * **`fdisk:`** manipulate disk partition table + * **`sfdisk:`** display or manipulate a disk partition table + * **`cfdisk:`** display or manipulate a disk partition table + * **`parted:`** a partition manipulation program + * **`lsblk:`** lsblk lists information about all available or the specified block devices. + * **`blkid:`** locate/print block device attributes. + * **`hwinfo:`** hwinfo stands for hardware information tool is another great utility that used to probe for the hardware present in the system. + * **`lshw:`** lshw is a small tool to extract detailed information on the hardware configuration of the machine. + * **`inxi:`** inxi is a command line system information script built for for console and IRC. + * **`lsscsi:`** list SCSI devices (or hosts) and their attributes + * **`cat /proc/partitions:`** + * **`ls -lh /dev/disk/:`** The directory contains Disk manufacturer name, serial number, partition ID and real block device files, Those were symlink with real block device files. + + + +### How To Check Hard Disk And Hard Drive Partition In Linux Using fdisk Command? + +**[fdisk][1]** stands for fixed disk or format disk is a cli utility that allow users to perform following actions on disks. It allows us to view, create, resize, delete, move and copy the partitions. + +``` +# fdisk -l + +Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0xeab59449 + +Device Boot Start End Sectors Size Id Type +/dev/sda1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 20973568 62914559 41940992 20G 83 Linux + + +Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0x8cc8f9e5 + +Device Boot Start End Sectors Size Id Type +/dev/sdc1 2048 2099199 2097152 1G 83 Linux +/dev/sdc3 4196352 6293503 2097152 1G 83 Linux +/dev/sdc4 6293504 20971519 14678016 7G 5 Extended +/dev/sdc5 6295552 8392703 2097152 1G 83 Linux + + +Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 
bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using sfdisk Command? + +sfdisk is a script-oriented tool for partitioning any block device. sfdisk supports MBR (DOS), GPT, SUN and SGI disk labels, but no longer provides any functionality for CHS (Cylinder-Head-Sector) addressing. + +``` +# sfdisk -l + +Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0xeab59449 + +Device Boot Start End Sectors Size Id Type +/dev/sda1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 20973568 62914559 41940992 20G 83 Linux + + +Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0x8cc8f9e5 + +Device Boot Start End Sectors Size Id Type +/dev/sdc1 2048 2099199 2097152 1G 83 Linux +/dev/sdc3 4196352 6293503 2097152 1G 83 Linux +/dev/sdc4 6293504 20971519 14678016 7G 5 Extended +/dev/sdc5 6295552 8392703 2097152 1G 83 Linux + + +Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using cfdisk Command? + +cfdisk is a curses-based program for partitioning any block device. The default device is /dev/sda. It provides basic partitioning functionality with a user-friendly interface. 
+ +``` +# cfdisk /dev/sdc + Disk: /dev/sdc + Size: 10 GiB, 10737418240 bytes, 20971520 sectors + Label: dos, identifier: 0x8cc8f9e5 + + Device Boot Start End Sectors Size Id Type +>> /dev/sdc1 2048 2099199 2097152 1G 83 Linux + Free space 2099200 4196351 2097152 1G + /dev/sdc3 4196352 6293503 2097152 1G 83 Linux + /dev/sdc4 6293504 20971519 14678016 7G 5 Extended + ├─/dev/sdc5 6295552 8392703 2097152 1G 83 Linux + └─Free space 8394752 20971519 12576768 6G + + + + ┌───────────────────────────────────────────────────────────────────────────────┐ + │ Partition type: Linux (83) │ + │Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7 │ + │ Filesystem: ext2 │ + │ Mountpoint: /part1 (mounted) │ + └───────────────────────────────────────────────────────────────────────────────┘ + [Bootable] [ Delete ] [ Quit ] [ Type ] [ Help ] [ Write ] + [ Dump ] + + Quit program without writing changes +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using parted Command? + +**[parted][2]** is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT. It is useful for creating space for new operating systems, reorganising disk usage, and copying data to new hard disks. + +``` +# parted -l + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sda: 32.2GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Number Start End Size Type File system Flags + 1 10.7GB 32.2GB 21.5GB primary ext4 boot + + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdb: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdc: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Number Start End Size Type File system Flags + 1 1049kB 1075MB 1074MB primary ext2 + 3 2149MB 3222MB 1074MB primary ext4 + 4 3222MB 10.7GB 7515MB extended + 5 3223MB 4297MB 1074MB logical + + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdd: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sde: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lsblk Command? + +lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information. + +If the udev db is not available or lsblk is compiled without udev support than it tries to read LABELs, UUIDs and filesystem types from the block device. In this case root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default. + +``` +# lsblk +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +sda 8:0 0 30G 0 disk +└─sda1 8:1 0 20G 0 part / +sdb 8:16 0 10G 0 disk +sdc 8:32 0 10G 0 disk +├─sdc1 8:33 0 1G 0 part /part1 +├─sdc3 8:35 0 1G 0 part /part2 +├─sdc4 8:36 0 1K 0 part +└─sdc5 8:37 0 1G 0 part +sdd 8:48 0 10G 0 disk +sde 8:64 0 10G 0 disk +sr0 11:0 1 1024M 0 rom +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using blkid Command? + +blkid is a command-line utility to locate/print block device attributes. It uses libblkid library to get disk partition UUID in Linux system. 
+ +``` +# blkid +/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01" +/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01" +/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03" +/dev/sdc5: PARTUUID="8cc8f9e5-05" +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using hwinfo Command? + +**[hwinfo][3]** stands for hardware information tool is another great utility that used to probe for the hardware present in the system and display detailed information about varies hardware components in human readable format. + +``` +# hwinfo --block --short +disk: + /dev/sdd VBOX HARDDISK + /dev/sdb VBOX HARDDISK + /dev/sde VBOX HARDDISK + /dev/sdc VBOX HARDDISK + /dev/sda VBOX HARDDISK +partition: + /dev/sdc1 Partition + /dev/sdc3 Partition + /dev/sdc4 Partition + /dev/sdc5 Partition + /dev/sda1 Partition +cdrom: + /dev/sr0 VBOX CD-ROM +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lshw Command? + +**[lshw][4]** (stands for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, usb, network card, graphics cards, multimedia, printers, bus speed, etc. + +``` +# lshw -short -class disk -class volume +H/W path Device Class Description +=================================================== +/0/3/0.0.0 /dev/cdrom disk CD-ROM +/0/4/0.0.0 /dev/sda disk 32GB VBOX HARDDISK +/0/4/0.0.0/1 /dev/sda1 volume 19GiB EXT4 volume +/0/5/0.0.0 /dev/sdb disk 10GB VBOX HARDDISK +/0/6/0.0.0 /dev/sdc disk 10GB VBOX HARDDISK +/0/6/0.0.0/1 /dev/sdc1 volume 1GiB Linux filesystem partition +/0/6/0.0.0/3 /dev/sdc3 volume 1GiB EXT4 volume +/0/6/0.0.0/4 /dev/sdc4 volume 7167MiB Extended partition +/0/6/0.0.0/4/5 /dev/sdc5 volume 1GiB Linux filesystem partition +/0/7/0.0.0 /dev/sdd disk 10GB VBOX HARDDISK +/0/8/0.0.0 /dev/sde disk 10GB VBOX HARDDISK +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using inxi Command? + +**[inxi][5]** is a nifty tool to check hardware information on Linux and offers wide range of option to get all the hardware information on Linux system that i never found in any other utility which are available in Linux. It was forked from the ancient and mindbendingly perverse yet ingenius infobash, by locsmif. + +``` +# inxi -Dp +Drives: HDD Total Size: 75.2GB (22.3% used) + ID-1: /dev/sda model: VBOX_HARDDISK size: 32.2GB + ID-2: /dev/sdb model: VBOX_HARDDISK size: 10.7GB + ID-3: /dev/sdc model: VBOX_HARDDISK size: 10.7GB + ID-4: /dev/sdd model: VBOX_HARDDISK size: 10.7GB + ID-5: /dev/sde model: VBOX_HARDDISK size: 10.7GB +Partition: ID-1: / size: 20G used: 16G (85%) fs: ext4 dev: /dev/sda1 + ID-3: /part1 size: 1008M used: 1.3M (1%) fs: ext2 dev: /dev/sdc1 + ID-4: /part2 size: 976M used: 2.6M (1%) fs: ext4 dev: /dev/sdc3 +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lsscsi Command? + +Uses information in sysfs (Linux kernel series 2.6 and later) to list SCSI devices (or hosts) currently attached to the system. Options can be used to control the amount and form of information provided for each device. + +By default in this utility device node names (e.g. 
“/dev/sda” or “/dev/root_disk”) are obtained by noting the major and minor numbers for the listed device obtained from sysfs + +``` +# lsscsi +[0:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0 +[2:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda +[3:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdb +[4:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdc +[5:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdd +[6:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sde +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using ProcFS? + +The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information. + +It’s sometimes referred to as a process information pseudo-file system. It doesn’t contain ‘real’ files but runtime system information (e.g. system memory, devices mounted, hardware configuration, etc). + +``` +# cat /proc/partitions +major minor #blocks name + + 11 0 1048575 sr0 + 8 0 31457280 sda + 8 1 20970496 sda1 + 8 16 10485760 sdb + 8 32 10485760 sdc + 8 33 1048576 sdc1 + 8 35 1048576 sdc3 + 8 36 1 sdc4 + 8 37 1048576 sdc5 + 8 48 10485760 sdd + 8 64 10485760 sde +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using /dev/disk Path? + +This directory contains four directories, it’s by-id, by-uuid, by-path and by-partuuid. Each directory contains some useful information and it’s symlinked with real block device files. + +``` +# ls -lh /dev/disk/by-id +total 0 +lrwxrwxrwx 1 root root 9 Feb 2 23:08 ata-VBOX_CD-ROM_VB0-01f003f6 -> ../../sr0 +lrwxrwxrwx 1 root root 9 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4 -> ../../sda +lrwxrwxrwx 1 root root 10 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4-part1 -> ../../sda1 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VB3774c742-fb2b3e4e -> ../../sdd +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e -> ../../sdc +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part1 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part3 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part4 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part5 -> ../../sdc5 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBed1cf451-9f51c5f6 -> ../../sdb +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBf242dbdd-49a982eb -> ../../sde +``` + +Output of by-uuid + +``` +# ls -lh /dev/disk/by-uuid +total 0 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 3 00:14 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1 +``` + +Output of by-path + +``` +# ls -lh /dev/disk/by-path +total 0 +lrwxrwxrwx 1 root root 9 Feb 2 23:08 pci-0000:00:01.1-ata-1 -> ../../sr0 +lrwxrwxrwx 1 root root 9 Feb 3 00:14 pci-0000:00:0d.0-ata-1 -> ../../sda +lrwxrwxrwx 1 root root 10 Feb 3 00:14 pci-0000:00:0d.0-ata-1-part1 -> ../../sda1 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-2 -> ../../sdb +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-3 -> ../../sdc +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part1 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part3 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part4 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 
pci-0000:00:0d.0-ata-3-part5 -> ../../sdc5 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-4 -> ../../sdd +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-5 -> ../../sde +``` + +Output of by-partuuid + +``` +# ls -lh /dev/disk/by-partuuid +total 0 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-01 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-03 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-04 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-05 -> ../../sdc5 +lrwxrwxrwx 1 root root 10 Feb 3 00:14 eab59449-01 -> ../../sda1 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-command-check-hard-disks-partitions/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/ +[2]: https://www.2daygeek.com/how-to-manage-disk-partitions-using-parted-command/ +[3]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ +[4]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ +[5]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ diff --git a/sources/tech/20190205 5 Streaming Audio Players for Linux.md b/sources/tech/20190205 5 Streaming Audio Players for Linux.md new file mode 100644 index 0000000000..1ddd4552f5 --- /dev/null +++ b/sources/tech/20190205 5 Streaming Audio Players for Linux.md @@ -0,0 +1,172 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Streaming Audio Players for Linux) +[#]: via: (https://www.linux.com/blog/2019/2/5-streaming-audio-players-linux) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Streaming Audio Players for Linux +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/music-main.png?itok=bTxfvadR) + +As I work, throughout the day, music is always playing in the background. Most often, that music is in the form of vinyl spinning on a turntable. But when I’m not in purist mode, I’ll opt to listen to audio by way of a streaming app. Naturally, I’m on the Linux platform, so the only tools I have at my disposal are those that play well on my operating system of choice. Fortunately, plenty of options exist for those who want to stream audio to their Linux desktops. + +In fact, Linux offers a number of solid offerings for music streaming, and I’ll highlight five of my favorite tools for this task. A word of warning, not all of these players are open source. But if you’re okay running a proprietary app on your open source desktop, you have some really powerful options. Let’s take a look at what’s available. + +### Spotify + +Spotify for Linux isn’t some dumb-downed, half-baked app that crashes every other time you open it, and doesn’t offer the full-range of features found on the macOS and Windows equivalent. In fact, the Linux version of Spotify is exactly the same as you’ll find on other platforms. With the Spotify streaming client you can listen to music and podcasts, create playlists, discover new artists, and so much more. And the Spotify interface (Figure 1) is quite easy to navigate and use. 
+ +![Spotify][2] + +Figure 1: The Spotify interface makes it easy to find new music and old favorites. + +[Used with permission][3] + +You can install Spotify either using snap (with the command sudo snap install spotify), or from the official repository, with the following commands: + + * sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 931FF8E79F0876134EDDBDCCA87FF9DF48BF1C90 + + * sudo echo deb stable non-free | sudo tee /etc/apt/sources.list.d/spotify.list + + * sudo apt-get update + + * sudo apt-get install spotify-client + + + + +Once installed, you’ll want to log into your Spotify account, so you can start streaming all of the great music to help motivate you to get your work done. If you have Spotify installed on other devices (and logged into the same account), you can dictate to which device the music should stream (by clicking the Devices Available icon near the bottom right corner of the Spotify window). + +### Clementine + +Clementine one of the best music players available to the Linux platform. Clementine not only allows user to play locally stored music, but to connect to numerous streaming audio services, such as: + + * Amazon Cloud Drive + + * Box + + * Dropbox + + * Icecast + + * Jamendo + + * Magnatune + + * RockRadio.com + + * Radiotunes.com + + * SomaFM + + * SoundCloud + + * Spotify + + * Subsonic + + * Vk.com + + * Or internet radio streams + + + + +There are two caveats to using Clementine. The first is you must be using the most recent version (as the build available in some repositories is out of date and won’t install the necessary streaming plugins). Second, even with the most recent build, some streaming services won’t function as expected. For example, with Spotify, you’ll only have available to you the Top Tracks (and not your playlist … or the ability to search for songs). + +With Clementine Internet radio streaming, you’ll find musicians and bands you’ve never heard of (Figure 2), and plenty of them to tune into. + +![Clementine][5] + +Figure 2: Clementine Internet radio is a great way to find new music. + +[Used with permission][3] + +### Odio + +Odio is a cross-platform, proprietary app (available for Linux, MacOS, and Windows) that allows you to stream internet music stations of all genres. Radio stations are curated from [www.radio-browser.info][6] and the app itself does an incredible job of presenting the streams for you (Figure 3). + + +![Odio][8] + +Figure 3: The Odio interface is one of the best you’ll find. + +[Used with permission][3] + +Odio makes it very easy to find unique Internet radio stations and even add those you find and enjoy to your library. Currently, the only way to install Odio on Linux is via Snap. If your distribution supports snap packages, install this streaming app with the command: + +sudo snap install odio + +Once installed, you can open the app and start using it. There is no need to log into (or create) an account. Odio is very limited in its settings. In fact, it only offers the choice between a dark or light theme in the settings window. However, as limited as it might be, Odio is one of your best bets for playing Internet radio on Linux. + +Streamtuner2 is an outstanding Internet radio station GUI tool. 
With it you can stream music from the likes of: + + * Internet radio stations + + * Jameno + + * MyOggRadio + + * Shoutcast.com + + * SurfMusic + + * TuneIn + + * Xiph.org + + * YouTube + + +### StreamTuner2 + +Streamtuner2 offers a nice (if not slightly outdated) interface, that makes it quite easy to find and stream your favorite music. The one caveat with StreamTuner2 is that it’s really just a GUI for finding the streams you want to hear. When you find a station, double-click on it to open the app associated with the stream. That means you must have the necessary apps installed, in order for the streams to play. If you don’t have the proper apps, you can’t play the streams. Because of this, you’ll spend a good amount of time figuring out what apps to install for certain streams (Figure 4). + +![Streamtuner2][10] + +Figure 4: Configuring Streamtuner2 isn’t for the faint of heart. + +[Used with permission][3] + +### VLC + +VLC has been, for a very long time, dubbed the best media playback tool for Linux. That’s with good reason, as it can play just about anything you throw at it. Included in that list is streaming radio stations. Although you won’t find VLC connecting to the likes of Spotify, you can head over to Internet-Radio, click on a playlist and have VLC open it without a problem. And considering how many internet radio stations are available at the moment, you won’t have any problem finding music to suit your tastes. VLC also includes tools like visualizers, equalizers (Figure 5), and more. + +![VLC ][12] + +Figure 5: The VLC visualizer and equalizer features in action. + +[Used with permission][3] + +The only caveat to VLC is that you do have to have a URL for the Internet Radio you wish you hear, as the tool itself doesn’t curate. But with those links in hand, you won’t find a better media player than VLC. + +### Always More Where That Came From + +If one of these five tools doesn’t fit your needs, I suggest you open your distribution’s app store and search for one that will. There are plenty of tools to make streaming music, podcasts, and more not only possible on Linux, but easy. + +Learn more about Linux through the free ["Introduction to Linux" ][13] course from The Linux Foundation and edX. 
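One last practical note on the VLC option described above: if you already have a direct stream URL, you do not even need the graphical interface. VLC ships with a console front end called `cvlc`, so a sketch like the one below (the URL is only a placeholder) works over SSH or on a headless machine:

```
# Play an internet radio stream from the terminal with VLC's console front end.
# The URL below is a placeholder; substitute a real station or playlist address.
cvlc http://stream.example.com/station.m3u

# Press Ctrl+C to stop playback.
```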
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2019/2/5-streaming-audio-players-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/spotify_0.jpg?itok=8-Ym-R61 (Spotify) +[3]: https://www.linux.com/licenses/category/used-permission +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clementine_0.jpg?itok=5oODJO3b (Clementine) +[6]: http://www.radio-browser.info +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/odio.jpg?itok=sNPTSS3c (Odio) +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/streamtuner2.jpg?itok=1MSbafWj (Streamtuner2) +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/vlc_0.jpg?itok=QEOsq7Ii (VLC ) +[13]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md new file mode 100644 index 0000000000..be44e75fea --- /dev/null +++ b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md @@ -0,0 +1,122 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (CFS: Completely fair process scheduling in Linux) +[#]: via: (https://opensource.com/article/19/2/fair-scheduling-linux) +[#]: author: (Marty kalin https://opensource.com/users/mkalindepauledu) + +CFS: Completely fair process scheduling in Linux +====== +CFS gives every task a fair share of processor resources in a low-fuss but highly efficient way. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) + +Linux takes a modular approach to processor scheduling in that different algorithms can be used to schedule different process types. A scheduling class specifies which scheduling policy applies to which type of process. Completely fair scheduling (CFS), which became part of the Linux 2.6.23 kernel in 2007, is the scheduling class for normal (as opposed to real-time) processes and therefore is named **SCHED_NORMAL**. + +CFS is geared for the interactive applications typical in a desktop environment, but it can be configured as **SCHED_BATCH** to favor the batch workloads common, for example, on a high-volume web server. In any case, CFS breaks dramatically with what might be called "classic preemptive scheduling." Also, the "completely fair" claim has to be seen with a technical eye; otherwise, the claim might seem like an empty boast. + +Let's dig into the details of what sets CFS apart from—indeed, above—other process schedulers. Let's start with a quick review of some core technical terms. + +### Some core concepts + +Linux inherits the Unix view of a process as a program in execution. As such, a process must contend with other processes for shared system resources: memory to hold instructions and data, at least one processor to execute instructions, and I/O devices to interact with the external world. 
Process scheduling is how the operating system (OS) assigns tasks (e.g., crunching some numbers, copying a file) to processors—a running process then performs the task. A process has one or more threads of execution, which are sequences of machine-level instructions. To schedule a process is to schedule one of its threads on a processor. + +In a simplifying move, Linux turns process scheduling into thread scheduling by treating a scheduled process as if it were single-threaded. If a process is multi-threaded with N threads, then N scheduling actions would be required to cover the threads. Threads within a multi-threaded process remain related in that they share resources such as memory address space. Linux threads are sometimes described as lightweight processes, with the lightweight underscoring the sharing of resources among the threads within a process. + +Although a process can be in various states, two are of particular interest in scheduling. A blocked process is awaiting the completion of some event such as an I/O event. The process can resume execution only after the event completes. A runnable process is one that is not currently blocked. + +A process is processor-bound (aka compute-bound) if it consumes mostly processor as opposed to I/O resources, and I/O-bound in the opposite case; hence, a processor-bound process is mostly runnable, whereas an I/O-bound process is mostly blocked. As examples, crunching numbers is processor-bound, and accessing files is I/O-bound. Although an entire process might be characterized as either processor-bound or I/O-bound, a given process may be one or the other during different stages of its execution. Interactive desktop applications, such as browsers, tend to be I/O-bound. + +A good process scheduler has to balance the needs of processor-bound and I/O-bound tasks, especially in an operating system such as Linux that thrives on so many hardware platforms: desktop machines, embedded devices, mobile devices, server clusters, supercomputers, and more. + +### Classic preemptive scheduling versus CFS + +Unix popularized classic preemptive scheduling, which other operating systems including VAX/VMS, Windows NT, and Linux later adopted. At the center of this scheduling model is a fixed timeslice, the amount of time (e.g., 50ms) that a task is allowed to hold a processor until preempted in favor of some other task. If a preempted process has not finished its work, the process must be rescheduled. This model is powerful in that it supports multitasking (concurrency) through processor time-sharing, even on the single-CPU machines of yesteryear. + +The classic model typically includes multiple scheduling queues, one per process priority: Every process in a higher-priority queue gets scheduled before any process in a lower-priority queue. As an example, VAX/VMS uses 32 priority queues for scheduling. + +CFS dispenses with fixed timeslices and explicit priorities. The amount of time for a given task on a processor is computed dynamically as the scheduling context changes over the system's lifetime. Here is a sketch of the motivating ideas and technical details: + + * Imagine a processor, P, which is idealized in that it can execute multiple tasks simultaneously. For example, tasks T1 and T2 can execute on P at the same time, with each receiving 50% of P's magical processing power. This idealization describes perfect multitasking, which CFS strives to achieve on actual as opposed to idealized processors. CFS is designed to approximate perfect multitasking. 
+ + * The CFS scheduler has a target latency, which is the minimum amount of time—idealized to an infinitely small duration—required for every runnable task to get at least one turn on the processor. If such a duration could be infinitely small, then each runnable task would have had a turn on the processor during any given timespan, however small (e.g., 10ms, 5ns, etc.). Of course, an idealized infinitely small duration must be approximated in the real world, and the default approximation is 20ms. Each runnable task then gets a 1/N slice of the target latency, where N is the number of tasks. For example, if the target latency is 20ms and there are four contending tasks, then each task gets a timeslice of 5ms. By the way, if there is only a single task during a scheduling event, this lucky task gets the entire target latency as its slice. The fair in CFS comes to the fore in the 1/N slice given to each task contending for a processor. + + * The 1/N slice is, indeed, a timeslice—but not a fixed one because such a slice depends on N, the number of tasks currently contending for the processor. The system changes over time. Some processes terminate and new ones are spawned; runnable processes block and blocked processes become runnable. The value of N is dynamic and so, therefore, is the 1/N timeslice computed for each runnable task contending for a processor. The traditional **nice** value is used to weight the 1/N slice: a low-priority **nice** value means that only some fraction of the 1/N slice is given to a task, whereas a high-priority **nice** value means that a proportionately greater fraction of the 1/N slice is given to a task. In summary, **nice** values do not determine the slice, but only modify the 1/N slice that represents fairness among the contending tasks. + + * The operating system incurs overhead whenever a context switch occurs; that is, when one process is preempted in favor of another. To keep this overhead from becoming unduly large, there is a minimum amount of time (with a typical setting of 1ms to 4ms) that any scheduled process must run before being preempted. This minimum is known as the minimum granularity. If many tasks (e.g., 20) are contending for the processor, then the minimum granularity (assume 4ms) might be more than the 1/N slice (in this case, 1ms). If the minimum granularity turns out to be larger than the 1/N slice, the system is overloaded because there are too many tasks contending for the processor—and fairness goes out the window. + + * When does preemption occur? CFS tries to minimize context switches, given their overhead: time spent on a context switch is time unavailable for other tasks. Accordingly, once a task gets the processor, it runs for its entire weighted 1/N slice before being preempted in favor of some other task. Suppose task T1 has run for its weighted 1/N slice, and runnable task T2 currently has the lowest virtual runtime (vruntime) among the tasks contending for the processor. The vruntime records, in nanoseconds, how long a task has run on the processor. In this case, T1 would be preempted in favor of T2. + + * The scheduler tracks the vruntime for all tasks, runnable and blocked. The lower a task's vruntime, the more deserving the task is for time on the processor. CFS accordingly moves low-vruntime tasks towards the front of the scheduling line. Details are forthcoming because the line is implemented as a tree, not a list. + + * How often should the CFS scheduler reschedule? 
There is a simple way to determine the scheduling period. Suppose that the target latency (TL) is 20ms and the minimum granularity (MG) is 4ms: + +`TL / MG = (20 / 4) = 5 ## five or fewer tasks are ok` + +In this case, five or fewer tasks would allow each task a turn on the processor during the target latency. For example, if the task number is five, each runnable task has a 1/N slice of 4ms, which happens to equal the minimum granularity; if the task number is three, each task gets a 1/N slice of almost 7ms. In either case, the scheduler would reschedule in 20ms, the duration of the target latency. + +Trouble occurs if the number of tasks (e.g., 10) exceeds TL / MG because now each task must get the minimum time of 4ms instead of the computed 1/N slice, which is 2ms. In this case, the scheduler would reschedule in 40ms: + +`(number of tasks) core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated MG = (10 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 4) = 40ms ## period = 40ms` + + + + +Linux schedulers that predate CFS use heuristics to promote the fair treatment of interactive tasks with respect to scheduling. CFS takes a quite different approach by letting the vruntime facts speak mostly for themselves, which happens to support sleeper fairness. An interactive task, by its very nature, tends to sleep a lot in the sense that it awaits user inputs and so becomes I/O-bound; hence, such a task tends to have a relatively low vruntime, which tends to move the task towards the front of the scheduling line. + +### Special features + +CFS supports symmetrical multiprocessing (SMP) in which any process (whether kernel or user) can execute on any processor. Yet configurable scheduling domains can be used to group processors for load balancing or even segregation. If several processors share the same scheduling policy, then load balancing among them is an option; if a particular processor has a scheduling policy different from the others, then this processor would be segregated from the others with respect to scheduling. + +Configurable scheduling groups are another CFS feature. As an example, consider the Nginx web server that's running on my desktop machine. At startup, this server has a master process and four worker processes, which act as HTTP request handlers. For any HTTP request, the particular worker that handles the request is irrelevant; it matters only that the request is handled in a timely manner, and so the four workers together provide a pool from which to draw a task-handler as requests come in. It thus seems fair to treat the four Nginx workers as a group rather than as individuals for scheduling purposes, and a scheduling group can be used to do just that. The four Nginx workers could be configured to have a single vruntime among them rather than individual vruntimes. Configuration is done in the traditional Linux way, through files. For vruntime-sharing, a file named **cpu.shares** , with the details given through familiar shell commands, would be created. + +As noted earlier, Linux supports scheduling classes so that different scheduling policies, together with their implementing algorithms, can coexist on the same platform. A scheduling class is implemented as a code module in C. CFS, the scheduling class described so far, is **SCHED_NORMAL**. 
There are also scheduling classes specifically for real-time tasks, **SCHED_FIFO** (first in, first out) and **SCHED_RR** (round robin). Under **SCHED_FIFO** , tasks run to completion; under **SCHED_RR** , tasks run until they exhaust a fixed timeslice and are preempted. + +### CFS implementation + +CFS requires efficient data structures to track task information and high-performance code to generate the schedules. Let's begin with a central term in scheduling, the runqueue. This is a data structure that represents a timeline for scheduled tasks. Despite the name, the runqueue need not be implemented in the traditional way, as a FIFO list. CFS breaks with tradition by using a time-ordered red-black tree as a runqueue. The data structure is well-suited for the job because it is a self-balancing binary search tree, with efficient **insert** and **remove** operations that execute in **O(log N)** time, where N is the number of nodes in the tree. Also, a tree is an excellent data structure for organizing entities into a hierarchy based on a particular property, in this case a vruntime. + +In CFS, the tree's internal nodes represent tasks to be scheduled, and the tree as a whole, like any runqueue, represents a timeline for task execution. Red-black trees are in wide use beyond scheduling; for example, Java uses this data structure to implement its **TreeMap**. + +Under CFS, every processor has a specific runqueue of tasks, and no task occurs at the same time in more than one runqueue. Each runqueue is a red-black tree. The tree's internal nodes represent tasks or task groups, and these nodes are indexed by their vruntime values so that (in the tree as a whole or in any subtree) the internal nodes to the left have lower vruntime values than the ones to the right: + +``` +    25     ## 25 is a task vruntime +    /\ +  17  29   ## 17 roots the left subtree, 29 the right one +  /\  ... + 5  19     ## and so on +...  \ +     nil   ## leaf nodes are nil +``` + +In summary, tasks with the lowest vruntime—and, therefore, the greatest need for a processor—reside somewhere in the left subtree; tasks with relatively high vruntimes congregate in the right subtree. A preempted task would go into the right subtree, thus giving other tasks a chance to move leftwards in the tree. A task with the smallest vruntime winds up in the tree's leftmost (internal) node, which is thus the front of the runqueue. + +The CFS scheduler has an instance, the C **task_struct** , to track detailed information about each task to be scheduled. This structure embeds a **sched_entity** structure, which in turn has scheduling-specific information, in particular, the vruntime per task or task group: + +``` +struct task_struct {       /bin /boot /dev /etc /home /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var info on a task **/ +  ... +  struct sched_entity se;  /** vruntime, etc. **/ +  ... +}; +``` + +The red-black tree is implemented in familiar C fashion, with a premium on pointers for efficiency. A **cfs_rq** structure instance embeds a **rb_root** field named **tasks_timeline** , which points to the root of a red-black tree. Each of the tree's internal nodes has pointers to the parent and the two child nodes; the leaf nodes have nil as their value. + +CFS illustrates how a straightforward idea—give every task a fair share of processor resources—can be implemented in a low-fuss but highly efficient way. 
It's worth repeating that CFS achieves fair and efficient scheduling without traditional artifacts such as fixed timeslices and explicit task priorities. The pursuit of even better schedulers goes on, of course; for the moment, however, CFS is as good as it gets for general-purpose processor scheduling. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/fair-scheduling-linux + +作者:[Marty kalin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkalindepauledu +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md new file mode 100644 index 0000000000..7ce1201c4f --- /dev/null +++ b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md @@ -0,0 +1,443 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS) +[#]: via: (https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/lamp-720x340.jpg) + +**LAMP** stack is a popular, open source web development platform that can be used to run and deploy dynamic websites and web-based applications. Typically, LAMP stack consists of Apache webserver, MariaDB/MySQL databases, PHP/Python/Perl programming languages. LAMP is the acronym of **L** inux, **M** ariaDB/ **M** YSQL, **P** HP/ **P** ython/ **P** erl. This tutorial describes how to install Apache, MySQL, PHP (LAMP stack) in Ubuntu 18.04 LTS server. + +### Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS + +For the purpose of this tutorial, we will be using the following Ubuntu testbox. + + * **Operating System** : Ubuntu 18.04.1 LTS Server Edition + * **IP address** : 192.168.225.22/24 + + + +#### 1. Install Apache web server + +First of all, update Ubuntu server using commands: + +``` +$ sudo apt update + +$ sudo apt upgrade +``` + +Next, install Apache web server: + +``` +$ sudo apt install apache2 +``` + +Check if Apache web server is running or not: + +``` +$ sudo systemctl status apache2 +``` + +Sample output would be: + +``` +● apache2.service - The Apache HTTP Server + Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: en + Drop-In: /lib/systemd/system/apache2.service.d + └─apache2-systemd.conf + Active: active (running) since Tue 2019-02-05 10:48:03 UTC; 1min 5s ago + Main PID: 2025 (apache2) + Tasks: 55 (limit: 2320) + CGroup: /system.slice/apache2.service + ├─2025 /usr/sbin/apache2 -k start + ├─2027 /usr/sbin/apache2 -k start + └─2028 /usr/sbin/apache2 -k start + +Feb 05 10:48:02 ubuntuserver systemd[1]: Starting The Apache HTTP Server... +Feb 05 10:48:03 ubuntuserver apachectl[2003]: AH00558: apache2: Could not reliably +Feb 05 10:48:03 ubuntuserver systemd[1]: Started The Apache HTTP Server. +``` + +Congratulations! Apache service is up and running!! 
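If you would like a quick check from the terminal before opening a browser, you can ask Apache for its response headers on the server itself. This is only an optional sanity check, and it assumes `curl` is installed (add it with `sudo apt install curl` if needed). A healthy default installation should reply with `HTTP/1.1 200 OK` and a `Server: Apache/...` header:

```
# Fetch only the response headers from the local Apache instance (a HEAD request)
curl -I http://localhost
```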
+ +##### 1.1 Adjust firewall to allow Apache web server + +By default, the apache web browser can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https ports by following the below steps. + +First, list out the application profiles available on your Ubuntu system using command: + +``` +$ sudo ufw app list +``` + +Sample output: + +``` +Available applications: +Apache +Apache Full +Apache Secure +OpenSSH +``` + +As you can see, Apache and OpenSSH applications have installed UFW profiles. You can list out information about each profile and its included rules using “ **ufw app info “Profile Name”** command. + +Let us look into the **“Apache Full”** profile. To do so, run: + +``` +$ sudo ufw app info "Apache Full" +``` + +Sample output: + +``` +Profile: Apache Full +Title: Web Server (HTTP,HTTPS) +Description: Apache v2 is the next generation of the omnipresent Apache web +server. + +Ports: +80,443/tcp +``` + +As you see, “Apache Full” profile has included the rules to enable traffic to the ports **80** and **443** : + +Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile: + +``` +$ sudo ufw allow in "Apache Full" +Rules updated +Rules updated (v6) +``` + +If you don’t want to allow https traffic, but only http (80) traffic, run: + +``` +$ sudo ufw app info "Apache" +``` + +##### 1.2 Test Apache Web server + +Now, open your web browser and access Apache test page by navigating to **** or ****. + +![](https://www.ostechnix.com/wp-content/uploads/2016/06/apache-2.png) + +If you are see a screen something like above, you are good to go. Apache server is working! + +#### 2. Install MySQL + +To install MySQL On Ubuntu, run: + +``` +$ sudo apt install mysql-server +``` + +Verify if MySQL service is running or not using command: + +``` +$ sudo systemctl status mysql +``` + +**Sample output:** + +``` +● mysql.service - MySQL Community Server +Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enab +Active: active (running) since Tue 2019-02-05 11:07:50 UTC; 17s ago +Main PID: 3423 (mysqld) +Tasks: 27 (limit: 2320) +CGroup: /system.slice/mysql.service +└─3423 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid + +Feb 05 11:07:49 ubuntuserver systemd[1]: Starting MySQL Community Server... +Feb 05 11:07:50 ubuntuserver systemd[1]: Started MySQL Community Server. +``` + +Mysql is running! + +##### 2.1 Setup database administrative user (root) password + +By default, MySQL **root** user password is blank. You need to secure your MySQL server by running the following script: + +``` +$ sudo mysql_secure_installation +``` + +You will be asked whether you want to setup **VALIDATE PASSWORD plugin** or not. This plugin allows the users to configure strong password for database credentials. If enabled, It will automatically check the strength of the password and enforces the users to set only those passwords which are secure enough. **It is safe to leave this plugin disabled**. However, you must use a strong and unique password for database credentials. If don’t want to enable this plugin, just press any key to skip the password validation part and continue the rest of the steps. + +If your answer is **Yes** , you will be asked to choose the level of password validation. + +``` +Securing the MySQL server deployment. + +Connecting to MySQL using a blank password. + +VALIDATE PASSWORD PLUGIN can be used to test passwords +and improve security. 
It checks the strength of password +and allows the users to set only those passwords which are +secure enough. Would you like to setup VALIDATE PASSWORD plugin? + +Press y|Y for Yes, any other key for No y +``` + +The available password validations are **low** , **medium** and **strong**. Just enter the appropriate number (0 for low, 1 for medium and 2 for strong password) and hit ENTER key. + +``` +There are three levels of password validation policy: + +LOW Length >= 8 +MEDIUM Length >= 8, numeric, mixed case, and special characters +STRONG Length >= 8, numeric, mixed case, special characters and dictionary file + +Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: +``` + +Now, enter the password for MySQL root user. Please be mindful that you must use password for mysql root user depending upon the password policy you choose in the previous step. If you didn’t enable the plugin, just use any strong and unique password of your choice. + +``` +Please set the password for root here. + +New password: + +Re-enter new password: + +Estimated strength of the password: 50 +Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y +``` + +Once you entered the password twice, you will see the password strength (In our case it is **50** ). If it is OK for you, press Y to continue with the provided password. If not satisfied with password length, press any other key and set a strong password. I am OK with my current password, so I chose **y**. + +For the rest of questions, just type **y** and hit ENTER. This will remove anonymous user, disallow root user login remotely and remove test database. + +``` +Remove anonymous users? (Press y|Y for Yes, any other key for No) : y +Success. + +Normally, root should only be allowed to connect from +'localhost'. This ensures that someone cannot guess at +the root password from the network. + +Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y +Success. + +By default, MySQL comes with a database named 'test' that +anyone can access. This is also intended only for testing, +and should be removed before moving into a production +environment. + +Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y +- Dropping test database... +Success. + +- Removing privileges on test database... +Success. + +Reloading the privilege tables will ensure that all changes +made so far will take effect immediately. + +Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y +Success. + +All done! +``` + +That’s it. Password for MySQL root user has been set. + +##### 2.2 Change authentication method for MySQL root user + +By default, MySQL root user is set to authenticate using the **auth_socket** plugin in MySQL 5.7 and newer versions on Ubuntu. Even though it enhances the security, it will also complicate things when you access your database server using any external programs, for example phpMyAdmin. To fix this issue, you need to change authentication method from **auth_socket** to **mysql_native_password**. 
To do so, login to your MySQL prompt using command: + +``` +$ sudo mysql +``` + +Run the following command at the mysql prompt to find the current authentication method for all mysql user accounts: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +**Sample output:** + +``` ++------------------|-------------------------------------------|-----------------------|-----------+ +| user | authentication_string | plugin | host | ++------------------|-------------------------------------------|-----------------------|-----------+ +| root | | auth_socket | localhost | +| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| debian-sys-maint | *F126737722832701DD3979741508F05FA71E5BA0 | mysql_native_password | localhost | ++------------------|-------------------------------------------|-----------------------|-----------+ +4 rows in set (0.00 sec) +``` + +![][2] + +As you see, mysql root user uses `auth_socket` plugin for authentication. + +To change this authentication to **mysql_native_password** method, run the following command at mysql prompt. Don’t forget to replace **“password”** with a strong and unique password of your choice. If you have enabled VALIDATION plugin, make sure you have used a strong password based on the current policy requirements. + +``` +ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; +``` + +Update the changes using command: + +``` +FLUSH PRIVILEGES; +``` + +Now check again if the authentication method is changed or not using command: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +Sample output: + +![][3] + +Good! Now the myql root user can authenticate using password to access mysql shell. + +Exit from the mysql prompt: + +``` +exit +``` + +#### 3\. Install PHP + +To install PHP, run: + +``` +$ sudo apt install php libapache2-mod-php php-mysql +``` + +After installing PHP, create **info.php** file in the Apache root document folder. Usually, the apache root document folder will be **/var/www/html/** or **/var/www/** in most Debian based Linux distributions. In Ubuntu 18.04 LTS, it is **/var/www/html/**. + +Let us create **info.php** file in the apache root folder: + +``` +$ sudo vi /var/www/html/info.php +``` + +Add the following lines: + +``` + +``` + +Press ESC key and type **:wq** to save and quit the file. Restart apache service to take effect the changes. + +``` +$ sudo systemctl restart apache2 +``` + +##### 3.1 Test PHP + +Open up your web browser and navigate to **** URL. + +You will see the php test page now. + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/php-test-page.png) + +Usually, when a user requests a directory from the web server, Apache will first look for a file named **index.html**. If you want to change Apache to serve php files rather than others, move **index.php** to first position in the **dir.conf** file as shown below + +``` +$ sudo vi /etc/apache2/mods-enabled/dir.conf +``` + +Here is the contents of the above file. + +``` + +DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +Move the “index.php” file to first. Once you made the changes, your **dir.conf** file will look like below. 
+ +``` + +DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +Press **ESC** key and type **:wq** to save and close the file. Restart Apache service to take effect the changes. + +``` +$ sudo systemctl restart apache2 +``` + +##### 3.2 Install PHP modules + +To improve the functionality of PHP, you can install some additional PHP modules. + +To list the available PHP modules, run: + +``` +$ sudo apt-cache search php- | less +``` + +**Sample output:** + +![][4] + +Use the arrow keys to go through the result. To exit, type **q** and hit ENTER key. + +To find the details of any particular php module, for example **php-gd** , run: + +``` +$ sudo apt-cache show php-gd +``` + +To install a php module run: + +``` +$ sudo apt install php-gd +``` + +To install all modules (not necessary though), run: + +``` +$ sudo apt-get install php* +``` + +Do not forget to restart Apache service after installing any php module. To check if the module is loaded or not, open info.php file in your browser and check if it is present. + +Next, you might want to install any database management tools to easily manage databases via a web browser. If so, install phpMyAdmin as described in the following link. + +Congratulations! We have successfully setup LAMP stack in Ubuntu 18.04 LTS server. + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2016/06/php-modules.png diff --git a/sources/tech/20190206 And, Ampersand, and - in Linux.md b/sources/tech/20190206 And, Ampersand, and - in Linux.md new file mode 100644 index 0000000000..2febc0a2ef --- /dev/null +++ b/sources/tech/20190206 And, Ampersand, and - in Linux.md @@ -0,0 +1,211 @@ +[#]: collector: (lujun9972) +[#]: translator: (HankChow) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (And, Ampersand, and & in Linux) +[#]: via: (https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +And, Ampersand, and & in Linux +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ampersand.png?itok=7GdFO36Y) + +Take a look at the tools covered in the [three][1] [previous][2] [articles][3], and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple, and understanding what _mkdir_ , _touch_ , and _find_ do (make a new directory, update a file, and find a file in the directory tree, respectively) in isolation is easy. + +But understanding what + +``` +mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt & +``` + +does, and why we would write a command line like that is a whole different story. 
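Here is a rough, annotated reading of that one-liner to keep in mind as we go; treat it as a sketch rather than a full explanation, since the individual symbols get proper treatment in this series:

```
# A left-to-right reading of the example command line:
#
#   mkdir test_dir 2>/dev/null    try to create test_dir, sending any error message to /dev/null
#   || touch images.txt           only if mkdir failed, update (or create) images.txt instead
#   && find . -iname "*jpg"       if what came before succeeded, list files whose names end in "jpg"...
#   > backup/dir/images.txt       ...writing that list into backup/dir/images.txt
#   &                             and run the whole chain in the background, giving you the prompt back

mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &
```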
+ +It pays to look more closely at the sign and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently. + +In this article and the next, we'll be looking at the the ampersand (`&`) and its close friend, the pipe (`|`), and see how they can mean different things in different contexts. + +### Behind the Scenes + +Let's start simple and see how you can use `&` as a way of pushing a command to the background. The instruction: + +``` +cp -R original/dir/ backup/dir/ +``` + +Copies all the files and subdirectories in _original/dir/_ into _backup/dir/_. So far so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours. + +However, using: + +``` +cp -R original/dir/ backup/dir/ & +``` + +pushes the process to the background courtesy of the final `&`. This frees you to continue working on the same terminal or even to close the terminal and still let the process finish up. Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of `echo` or `ls`), it will continue to do so, even though it is being executed in the background. + +When you push a process into the background, Bash will print out a number. This number is the PID or the _Process' ID_. Every process running on your Linux system has a unique process ID and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later. + +In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them: + + * `jobs` shows you the processes running in your current terminal, whether be it in the background or foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process: + +``` + $ jobs +[1]- Running cp -i -R original/dir/* backup/dir/ & +[2]+ Running find . -iname "*jpg" > backup/dir/images.txt & +``` + + * `fg` brings a job from the background to the foreground so you can interact with it. You tell `fg` which process you want to bring to the foreground with a percentage symbol (`%`) followed by the number associated with the job that `jobs` gave you: + +``` + $ fg %1 # brings the cp job to the foreground +cp -i -R original/dir/* backup/dir/ +``` + +If the job was stopped (see below), `fg` will start it again. + + * You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn't abort the action, it pauses it. When you start it again with (`fg` or `bg`) it will continue from where it left off... + +...Except for [`sleep`][4]: the time a `sleep` job is paused still counts once `sleep` is resumed. This is because `sleep` takes note of the clock time when it was started, not how long it was running. This means that if you run `sleep 30` and pause it for more than 30 seconds, once you resume, `sleep` will exit immediately. + + * The `bg` command pushes a job to the background and resumes it again if it was paused: + +``` + $ bg %1 +[1]+ cp -i -R original/dir/* backup/dir/ & +``` + + + + +As mentioned above, you won't be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working. 
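Putting those pieces together, a typical round trip with a single background job looks something like this (the job numbers, PIDs, and commands will differ on your machine):

```
$ cp -R original/dir/ backup/dir/ &    # start the copy in the background
[1] 14444                              # Bash prints the job number and the PID
$ jobs                                 # list the jobs launched from this terminal
[1]+  Running    cp -R original/dir/ backup/dir/ &
$ fg %1                                # bring job 1 back to the foreground
cp -R original/dir/ backup/dir/
^Z                                     # [Ctrl] + [Z] pauses it without killing it
[1]+  Stopped    cp -R original/dir/ backup/dir/
$ bg %1                                # and bg resumes it in the background again
[1]+ cp -R original/dir/ backup/dir/ &
```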
+ +To manage background processes from another terminal you need another set of tools. For example, you can tell a process to stop from a a different terminal with the [`kill`][5] command: + +``` +kill -s STOP +``` + +And you know the PID because that is the number Bash gave you when you started the process with `&`, remember? Oh! You didn't write it down? No problem. You can get the PID of any running process with the `ps` (short for _processes_ ) command. So, using + +``` +ps | grep cp +``` + +will show you all the processes containing the string " _cp_ ", including the copying job we are using for our example. It will also show you the PID: + +``` +$ ps | grep cp +14444 pts/3 00:00:13 cp +``` + +In this case, the PID is _14444_. and it means you can stop the background copying with: + +``` +kill -s STOP 14444 +``` + +Note that `STOP` here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process. + +To start the paused process again, you can use the `CONT` signal: + +``` +kill -s CONT 14444 +``` + +There is a good list of many of [the main signals you can send a process here][6]. According to that, if you wanted to terminate the process, not just pause it, you could do this: + +``` +kill -s TERM 14444 +``` + +If the process refuses to exit, you can force it with: + +``` +kill -s KILL 14444 +``` + +This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources. + +In any case, if you are not sure you have the correct PID, add the `x` option to `ps`: + +``` +$ ps x| grep cp +14444 pts/3 D 0:14 cp -i -R original/dir/Hols_2014.mp4 +  original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4 +  original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/ +``` + +And you should be able to see what process you need. + +Finally, there is nifty tool that combines `ps` and `grep` all into one: + +``` +$ pgrep cp +8 +18 +19 +26 +33 +40 +47 +54 +61 +72 +88 +96 +136 +339 +6680 +13735 +14444 +``` + +Lists all the PIDs of processes that contain the string " _cp_ ". + +In this case, it isn't very helpful, but this... + +``` +$ pgrep -lx cp +14444 cp +``` + +... is much better. + +In this case, `-l` tells `pgrep` to show you the name of the process and `-x` tells `pgrep` you want an exact match for the name of the command. If you want even more details, try `pgrep -ax command`. + +### Next time + +Putting an `&` at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them. + +One last thing before we leave: processes running in the background are what are known as _daemons_ in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go. + +As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we'll be back next week with more on the matter. 
+ +Read more: + +[Linux Tools: The Meaning of Dot][1] + +[Understanding Angle Brackets in Bash][2] + +[More About Angle Brackets in Bash][3] + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot +[2]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash +[3]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash +[4]: https://ss64.com/bash/sleep.html +[5]: https://bash.cyberciti.biz/guide/Sending_signal_to_Processes +[6]: https://www.computerhope.com/unix/signals.htm diff --git a/sources/tech/20190206 Getting started with Vim visual mode.md b/sources/tech/20190206 Getting started with Vim visual mode.md new file mode 100644 index 0000000000..e6b9b1da9b --- /dev/null +++ b/sources/tech/20190206 Getting started with Vim visual mode.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with Vim visual mode) +[#]: via: (https://opensource.com/article/19/2/getting-started-vim-visual-mode) +[#]: author: (Susan Lauber https://opensource.com/users/susanlauber) + +Getting started with Vim visual mode +====== +Visual mode makes it easier to highlight and manipulate text in Vim. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y) + +Ansible playbook files are text files in a YAML format. People who work regularly with them have their favorite editors and plugin extensions to make the formatting easier. + +When I teach Ansible with the default editor available in most Linux distributions, I use Vim's visual mode a lot. It allows me to highlight my actions on the screen—what I am about to edit and the text manipulation task I'm doing—to make it easier for my students to learn. + +### Vim's visual mode + +When editing text with Vim, visual mode can be extremely useful for identifying chunks of text to be manipulated. + +Vim's visual mode has three versions: character, line, and block. The keystrokes to enter each mode are: + + * Character mode: **v** (lower-case) + * Line mode: **V** (upper-case) + * Block mode: **Ctrl+v** + + + +Here are some ways to use each mode to simplify your work. + +### Character mode + +Character mode can highlight a sentence in a paragraph or a phrase in a sentence. Then the visually identified text can be deleted, copied, changed, or modified with any other Vim editing command. + +#### Move a sentence + +To move a sentence from one place to another, start by opening the file and moving the cursor to the first character in the sentence you want to move. + +![](https://opensource.com/sites/default/files/uploads/vim-visual-char1.png) + + * Press the **v** key to enter visual character mode. The word **VISUAL** will appear at the bottom of the screen. + * Use the Arrow keys to highlight the desired text. You can use other navigation commands, such as **w** to highlight to the beginning of the next word or **$** to include the rest of the line. 
+ * Once the text is highlighted, press the **d** key to delete the text. + * If you deleted too much or not enough, press **u** to undo and start again. + * Move your cursor to the new location and press **p** to paste the text. + + + +#### Change a phrase + +You can also highlight a chunk of text that you want to replace. + +![](https://opensource.com/sites/default/files/uploads/vim-visual-char2.png) + + * Place the cursor at the first character you want to change. + * Press **v** to enter visual character mode. + * Use navigation commands, such as the Arrow keys, to highlight the phrase. + * Press **c** to change the highlighted text. + * The highlighted text will disappear, and you will be in Insert mode where you can add new text. + * After you finish typing the new text, press **Esc** to return to command mode and save your work. + +![](https://opensource.com/sites/default/files/uploads/vim-visual-char3.png) + +### Line mode + +When working with Ansible playbooks, the order of tasks can matter. Use visual line mode to move a task to a different location in the playbook. + +#### Manipulate multiple lines of text + +![](https://opensource.com/sites/default/files/uploads/vim-visual-line1.png) + + * Place your cursor anywhere on the first or last line of the text you want to manipulate. + * Press **Shift+V** to enter line mode. The words **VISUAL LINE** will appear at the bottom of the screen. + * Use navigation commands, such as the Arrow keys, to highlight multiple lines of text. + * Once the desired text is highlighted, use commands to manipulate it. Press **d** to delete, then move the cursor to the new location, and press **p** to paste the text. + * **y** (yank) can be used instead of **d** (delete) if you want to copy the task. + + + +#### Indent a set of lines + +When working with Ansible playbooks or YAML files, indentation matters. A highlighted block can be shifted right or left with the **>** and **<** keys. + +![]9https://opensource.com/sites/default/files/uploads/vim-visual-line2.png + + * Press **>** to increase the indentation of all the lines. + * Press **<** to decrease the indentation of all the lines. + + + +Try other Vim commands to apply them to the highlighted text. + +### Block mode + +The visual block mode is useful for manipulation of specific tabular data files, but it can also be extremely helpful as a tool to verify indentation of an Ansible playbook. + +Tasks are a list of items and in YAML each list item starts with a dash followed by a space. The dashes must line up in the same column to be at the same indentation level. This can be difficult to see with just the human eye. Indentation of other lines within the task is also important. + +#### Verify tasks lists are indented the same + +![](https://opensource.com/sites/default/files/uploads/vim-visual-block1.png) + + * Place your cursor on the first character of the list item. + * Press **Ctrl+v** to enter visual block mode. The words **VISUAL BLOCK** will appear at the bottom of the screen. + * Use the Arrow keys to highlight the single character column. You can verify that each task is indented the same amount. + * Use the Arrow keys to expand the block right or left to check whether the other indentation is correct. + +![](https://opensource.com/sites/default/files/uploads/vim-visual-block2.png) + +Even though I am comfortable with other Vim editing shortcuts, I still like to use visual mode to sort out what text I want to manipulate. 
When I demo other concepts during a presentation, my students see a tool to highlight text and hit delete in this "new to them" text only editor. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/getting-started-vim-visual-mode + +作者:[Susan Lauber][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/susanlauber +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md new file mode 100644 index 0000000000..3b84d85c62 --- /dev/null +++ b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md @@ -0,0 +1,185 @@ +[#]: collector: (lujun9972) +[#]: translator: (sndnvaps) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 Ways to Install Deb Files on Ubuntu Linux) +[#]: via: (https://itsfoss.com/install-deb-files-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +3 Ways to Install Deb Files on Ubuntu Linux +====== + +**This beginner article explains how to install deb packages in Ubuntu. It also shows you how to remove those deb packages afterwards.** + +This is another article in the Ubuntu beginner series. If you are absolutely new to Ubuntu, you might wonder about [how to install applications][1]. + +The easiest way is to use the Ubuntu Software Center. Search for an application by its name and install it from there. + +Life would be too simple if you could find all the applications in the Software Center. But that does not happen, unfortunately. + +Some software are available via DEB packages. These are archived files that end with .deb extension. + +You can think of .deb files as the .exe files in Windows. You double click on the .exe file and it starts the installation procedure in Windows. DEB packages are pretty much the same. + +You can find these DEB packages from the download section of the software provider’s website. For example, if you want to [install Google Chrome on Ubuntu][2], you can download the DEB package of Chrome from its website. + +Now the question arises, how do you install deb files? There are multiple ways of installing DEB packages in Ubuntu. I’ll show them to you one by one in this tutorial. + +![Install deb files in Ubuntu][3] + +### Installing .deb files in Ubuntu and Debian-based Linux Distributions + +You can choose a GUI tool or a command line tool for installing a deb package. The choice is yours. + +Let’s go on and see how to install deb files. + +#### Method 1: Use the default Software Center + +The simplest method is to use the default software center in Ubuntu. You have to do nothing special here. Simply go to the folder where you have downloaded the .deb file (it should be the Downloads folder) and double click on this file. + +![Google Chrome deb file on Ubuntu][4]Double click on the downloaded .deb file to start installation + +It will open the software center and you should see the option to install the software. All you have to do is to hit the install button and enter your login password. + +![Install Google Chrome in Ubuntu Software Center][5]The installation of deb file will be carried out via Software Center + +See, it’s even simple than installing from a .exe files on Windows, isn’t it? 
+ +#### Method 2: Use Gdebi application for installing deb packages with dependencies + +Again, life would be a lot simpler if things always go smooth. But that’s not life as we know it. + +Now that you know that .deb files can be easily installed via Software Center, let me tell you about the dependency error that you may encounter with some packages. + +What happens is that a program may be dependent on another piece of software (libraries). When the developer is preparing the DEB package for you, he/she may assume that your system already has that piece of software on your system. + +But if that’s not the case and your system doesn’t have those required pieces of software, you’ll encounter the infamous ‘dependency error’. + +The Software Center cannot handle such errors on its own so you have to use another tool called [gdebi][6]. + +gdebi is a lightweight GUI application that has the sole purpose of installing deb packages. + +It identifies the dependencies and tries to install these dependencies along with installing the .deb files. + +![gdebi handling dependency while installing deb package][7]Image Credit: [Xmodulo][8] + +Personally, I prefer gdebi over software center for installing deb files. It is a lightweight application so the installation seems quicker. You can read in detail about [using gDebi and making it the default for installing DEB packages][6]. + +You can install gdebi from the software center or using the command below: + +``` +sudo apt install gdebi +``` + +#### Method 3: Install .deb files in command line using dpkg + +If you want to install deb packages in command lime, you can use either apt command or dpkg command. Apt command actually uses [dpkg command][9] underneath it but apt is more popular and easy to use. + +If you want to use the apt command for deb files, use it like this: + +``` +sudo apt install path_to_deb_file +``` + +If you want to use dpkg command for installing deb packages, here’s how to do it: + +``` +sudo dpkg -i path_to_deb_file +``` + +In both commands, you should replace the path_to_deb_file with the path and name of the deb file you have downloaded. + +![Install deb files using dpkg command in Ubuntu][10]Installing deb files using dpkg command in Ubuntu + +If you get a dependency error while installing the deb packages, you may use the following command to fix the dependency issues: + +``` +sudo apt install -f +``` + +### How to remove deb packages + +Removing a deb package is not a big deal as well. And no, you don’t need the original deb file that you had used for installing the program. + +#### Method 1: Remove deb packages using apt commands + +All you need is the name of the program that you have installed and then you can use apt or dpkg to remove that program. + +``` +sudo apt remove program_name +``` + +Now the question comes, how do you find the exact program name that you need to use in the remove command? The apt command has a solution for that as well. + +You can find the list of all installed files with apt command but manually going through this will be a pain. So you can use the grep command to search for your package. + +For example, I installed AppGrid application in the previous section but if I want to know the exact program name, I can use something like this: + +``` +sudo apt list --installed | grep grid +``` + +This will give me all the packages that have grid in their name and from there, I can get the exact program name. + +``` +apt list --installed | grep grid +WARNING: apt does not have a stable CLI interface. 
Use with caution in scripts. +appgrid/now 0.298 all [installed,local] +``` + +As you can see, a program called appgrid has been installed. Now you can use this program name with the apt remove command. + +#### Method 2: Remove deb packages using dpkg commands + +You can use dpkg to find the installed program’s name: + +``` +dpkg -l | grep grid +``` + +The output will give all the packages installed that has grid in its name. + +``` +dpkg -l | grep grid + +ii appgrid 0.298 all Discover and install apps for Ubuntu +``` + +ii in the above command output means package has been correctly installed. + +Now that you have the program name, you can use dpkg command to remove it: + +``` +dpkg -r program_name +``` + +**Tip: Updating deb packages** +Some deb packages (like Chrome) provide updates through system updates but for most other programs, you’ll have to remove the existing program and install the newer version. + +I hope this beginner guide helped you to install deb packages on Ubuntu. I added the remove part so that you’ll have better control over the programs you installed. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-deb-files-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/remove-install-software-ubuntu/ +[2]: https://itsfoss.com/install-chrome-ubuntu/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?resize=800%2C450&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-4.jpeg?resize=800%2C347&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-5.jpeg?resize=800%2C516&ssl=1 +[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/gdebi-handling-dependency.jpg?ssl=1 +[8]: http://xmodulo.com +[9]: https://help.ubuntu.com/lts/serverguide/dpkg.html.en +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-deb-file-with-dpkg.png?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190208 7 steps for hunting down Python code bugs.md b/sources/tech/20190208 7 steps for hunting down Python code bugs.md new file mode 100644 index 0000000000..77b2c802a0 --- /dev/null +++ b/sources/tech/20190208 7 steps for hunting down Python code bugs.md @@ -0,0 +1,114 @@ +[#]: collector: (lujun9972) +[#]: translator: (LazyWolfLin) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 steps for hunting down Python code bugs) +[#]: via: (https://opensource.com/article/19/2/steps-hunting-code-python-bugs) +[#]: author: (Maria Mckinley https://opensource.com/users/parody) + +7 steps for hunting down Python code bugs +====== +Learn some tricks to minimize the time you spend tracking down the reasons your code fails. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug-insect-butterfly-diversity-inclusion-2.png?itok=TcC9eews) + +It is 3 pm on a Friday afternoon. Why? Because it is always 3 pm on a Friday when things go down. You get a notification that a customer has found a bug in your software. 
After you get over your initial disbelief, you contact DevOps to find out what is happening with the logs for your app, because you remember receiving a notification that they were being moved. + +Turns out they are somewhere you can't get to, but they are in the process of being moved to a web application—so you will have this nifty application for searching and reading them, but of course, it is not finished yet. It should be up in a couple of days. I know, totally unrealistic situation, right? Unfortunately not; it seems logs or log messages often come up missing at just the wrong time. Before we track down the bug, a public service announcement: Check your logs to make sure they are where you think they are and logging what you think they should log, regularly. Amazing how these things just change when you aren't looking. + +OK, so you found the logs or tried the call, and indeed, the customer has found a bug. Maybe you even think you know where the bug is. + +You immediately open the file you think might be the problem and start poking around. + +### 1. Don't touch your code yet + +Go ahead and look at it, maybe even come up with a hypothesis. But before you start mucking about in the code, take that call that creates the bug and turn it into a test. This will be an integration test because although you may have suspicions, you do not yet know exactly where the problem is. + +Make sure this test fails. This is important because sometimes the test you make doesn't mimic the broken call; this is especially true if you are using a web or other framework that can obfuscate the tests. Many things may be stored in variables, and it is unfortunately not always obvious, just by looking at the test, what call you are making in the test. I'm not going to say that I have created a test that passed when I was trying to imitate a broken call, but, well, I have, and I don't think that is particularly unusual. Learn from my mistakes. + +### 2. Write a failing test + +Now that you have a failing test or maybe a test with an error, it is time to troubleshoot. But before you do that, let's do a review of the stack, as this makes troubleshooting easier. + +The stack consists of all of the tasks you have started but not finished. So, if you are baking a cake and adding the flour to the batter, then your stack would be: + + * Make cake + * Make batter + * Add flour + + + +You have started making your cake, you have started making the batter, and you are adding the flour. Greasing the pan is not on the list since you already finished that, and making the frosting is not on the list because you have not started that. + +If you are fuzzy on the stack, I highly recommend playing around on [Python Tutor][1], where you can watch the stack as you execute lines of code. + +Now, if something goes wrong with your Python program, the interpreter helpfully prints out the stack for you. This means that whatever the program was doing at the moment it became apparent that something went wrong is on the bottom. + +### 3. Always check the bottom of the stack first + +Not only is the bottom of the stack where you can see which error occurred, but often the last line of the stack is where you can find the issue. If the bottom doesn't help, and your code has not been linted in a while, it is amazing how helpful it can be to run. I recommend pylint or flake8. More often than not, it points right to where there is an error that I have been overlooking. 
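If you have never run one of these linters, there is not much to it: install one and point it at your module. The file name and findings below are made up, but the output format and error codes are typical of flake8:

```
$ pip install flake8
$ flake8 my_module.py
my_module.py:12:5: F821 undefined name 'fliter'
my_module.py:30:1: E302 expected 2 blank lines, got 1
```

A typo like `fliter` for `filter` is exactly the kind of overlooked error that a linter flags long before you reach for the debugger.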
+ +If the error is something that seems obscure, your next move might just be to Google it. You will have better luck if you don't include information that is relevant only to your code, like the name of variables, files, etc. If you are using Python 3 (which you should be), it's helpful to include the 3 in the search; otherwise, Python 2 solutions tend to dominate the top. + +Once upon a time, developers had to troubleshoot without the benefit of a search engine. This was a dark time. Take advantage of all the tools available to you. + +Unfortunately, sometimes the problem occurred earlier and only became apparent during the line executed on the bottom of the stack. Think about how forgetting to add the baking powder becomes obvious when the cake doesn't rise. + +It is time to look up the stack. Chances are quite good that the problem is in your code, and not Python core or even third-party packages, so scan the stack looking for lines in your code first. Plus it is usually much easier to put a breakpoint in your own code. Stick the breakpoint in your code a little further up the stack and look around to see if things look like they should. + +"But Maria," I hear you say, "this is all helpful if I have a stack trace, but I just have a failing test. Where do I start?" + +Pdb, the Python Debugger. + +Find a place in your code where you know this call should hit. You should be able to find at least one place. Stick a pdb break in there. + +#### A digression + +Why not a print statement? I used to depend on print statements. They still come in handy sometimes. But once I started working with complicated code bases, and especially ones making network calls, print just became too slow. I ended up with print statements all over the place, I lost track of where they were and why, and it just got complicated. But there is a more important reason to mostly use pdb. Let's say you put a print statement in and discover that something is wrong—and must have gone wrong earlier. But looking at the function where you put the print statement, you have no idea how you got there. Looking at code is a great way to see where you are going, but it is terrible for learning where you've been. And yes, I have done a grep of my code base looking for where a function is called, but this can get tedious and doesn't narrow it down much with a popular function. Pdb can be very helpful. + +You follow my advice, and put in a pdb break and run your test. And it whooshes on by and fails again, with no break at all. Leave your breakpoint in, and run a test already in your test suite that does something very similar to the broken test. If you have a decent test suite, you should be able to find a test that is hitting the same code you think your failed test should hit. Run that test, and when it gets to your breakpoint, do a `w` and look at the stack. If you have no idea by looking at the stack how/where the other call may have gone haywire, then go about halfway up the stack, find some code that belongs to you, and put a breakpoint in that file, one line above the one in the stack trace. Try again with the new test. Keep going back and forth, moving up the stack to figure out where your call went off the rails. If you get all the way up to the top of the trace without hitting a breakpoint, then congratulations, you have found the issue: Your app was spelled wrong. No experience here, nope, none at all. + +### 4. Change things + +If you still feel lost, try making a new test where you vary something slightly. 
Can you get the new test to work? What is different? What is the same? Try changing something else. Once you have your test, and maybe additional tests in place, it is safe to start changing things in the code to see if you can narrow down the problem. Remember to start troubleshooting with a fresh commit so you can easily back out changes that do not help. (This is a reference to version control, if you aren't using version control, it will change your life. Well, maybe it will just make coding easier. See "[A Visual Guide to Version Control][2]" for a nice introduction.) + +### 5. Take a break + +In all seriousness, when it stops feeling like a fun challenge or game and starts becoming really frustrating, your best course of action is to walk away from the problem. Take a break. I highly recommend going for a walk and trying to think about something else. + +### 6. Write everything down + +When you come back, if you aren't suddenly inspired to try something, write down any information you have about the problem. This should include: + + * Exactly the call that is causing the problem + * Exactly what happened, including any error messages or related log messages + * Exactly what you were expecting to happen + * What you have done so far to find the problem and any clues that you have discovered while troubleshooting + + + +Sometimes this is a lot of information, but trust me, it is really annoying trying to pry information out of someone piecemeal. Try to be concise, but complete. + +### 7. Ask for help + +I often find that just writing down all the information triggers a thought about something I have not tried yet. Sometimes, of course, I realize what the problem is immediately after hitting the submit button. At any rate, if you still have not thought of anything after writing everything down, try sending an email to someone. First, try colleagues or other people involved in your project, then move on to project email lists. Don't be afraid to ask for help. Most people are kind and helpful, and I have found that to be especially true in the Python community. + +Maria McKinley will present [Hunting the Bugs][3] at [PyCascades 2019][4], February 23-24 in Seattle. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/steps-hunting-code-python-bugs + +作者:[Maria Mckinley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/parody +[b]: https://github.com/lujun9972 +[1]: http://www.pythontutor.com/ +[2]: https://betterexplained.com/articles/a-visual-guide-to-version-control/ +[3]: https://2019.pycascades.com/talks/hunting-the-bugs +[4]: https://2019.pycascades.com/ diff --git a/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md b/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md new file mode 100644 index 0000000000..b55cbcd811 --- /dev/null +++ b/sources/tech/20190211 How To Remove-Delete The Empty Lines In A File In Linux.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: ( pityonline ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Remove/Delete The Empty Lines In A File In Linux) +[#]: via: (https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Remove/Delete The Empty Lines In A File In Linux +====== + +Some times you may wants to remove or delete the empty lines in a file in Linux. + +If so, you can use the one of the below method to achieve it. + +It can be done in many ways but i have listed simple methods in the article. + +You may aware of that grep, awk and sed commands are specialized for textual data manipulation. + +Navigate to the following URL, if you would like to read more about these kind of topics. For **[creating a file in specific size in Linux][1]** multiple ways, for **[creating a file in Linux][2]** multiple ways and for **[removing a matching string from a file in Linux][3]**. + +These are fall in advanced commands category because these are used in most of the shell script to do required things. + +It can be done using the following 5 methods. + + * **`sed Command:`** Stream editor for filtering and transforming text. + * **`grep Command:`** Print lines that match patterns. + * **`cat Command:`** It concatenate files and print on the standard output. + * **`tr Command:`** Translate or delete characters. + * **`awk Command:`** The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. + * **`perl Command:`** Perl is a programming language specially designed for text editing. + + + +To test this, i had already created the file called `2daygeek.txt` with some texts and empty lines. The details are below. + +``` +$ cat 2daygeek.txt +2daygeek.com is a best Linux blog to learn Linux. + +It's FIVE years old blog. + +This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. + +He got two GIRL babys. + +Her names are Tanisha & Renusha. +``` + +Now everything is ready and i’m going to test this in multiple ways. + +### How To Remove/Delete The Empty Lines In A File In Linux Using sed Command? + +Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). + +``` +$ sed '/^$/d' 2daygeek.txt +2daygeek.com is a best Linux blog to learn Linux. +It's FIVE years old blog. 
+This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. +He got two GIRL babes. +Her names are Tanisha & Renusha. +``` + +Details are follow: + + * **`sed:`** It’s a command + * **`//:`** It holds the searching string. + * **`^:`** Matches start of string. + * **`$:`** Matches end of string. + * **`d:`** Delete the matched string. + * **`2daygeek.txt:`** Source file name. + + + +### How To Remove/Delete The Empty Lines In A File In Linux Using grep Command? + +grep searches for PATTERNS in each FILE. PATTERNS is one or patterns separated by newline characters, and grep prints each line that matches a pattern. + +``` +$ grep . 2daygeek.txt +or +$ grep -Ev "^$" 2daygeek.txt +or +$ grep -v -e '^$' 2daygeek.txt +2daygeek.com is a best Linux blog to learn Linux. +It's FIVE years old blog. +This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. +He got two GIRL babes. +Her names are Tanisha & Renusha. +``` + +Details are follow: + + * **`grep:`** It’s a command + * **`.:`** Replaces any character. + * **`^:`** matches start of string. + * **`$:`** matches end of string. + * **`E:`** For extended regular expressions pattern matching. + * **`e:`** For regular expressions pattern matching. + * **`v:`** To select non-matching lines from the file. + * **`2daygeek.txt:`** Source file name. + + + +### How To Remove/Delete The Empty Lines In A File In Linux Using awk Command? + +The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions. + +``` +$ awk NF 2daygeek.txt +or +$ awk '!/^$/' 2daygeek.txt +or +$ awk '/./' 2daygeek.txt +2daygeek.com is a best Linux blog to learn Linux. +It's FIVE years old blog. +This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. +He got two GIRL babes. +Her names are Tanisha & Renusha. +``` + +Details are follow: + + * **`awk:`** It’s a command + * **`//:`** It holds the searching string. + * **`^:`** matches start of string. + * **`$:`** matches end of string. + * **`.:`** Replaces any character. + * **`!:`** Delete the matched string. + * **`2daygeek.txt:`** Source file name. + + + +### How To Delete The Empty Lines In A File In Linux using Combination of cat And tr Command? + +cat stands for concatenate. It is very frequently used in Linux to reads data from a file. + +cat is one of the most frequently used commands on Unix-like operating systems. It’s offer three functions which is related to text file such as display content of a file, combine multiple files into the single output and create a new file. + +Translate, squeeze, and/or delete characters from standard input, writing to standard output. + +``` +$ cat 2daygeek.txt | tr -s '\n' +2daygeek.com is a best Linux blog to learn Linux. +It's FIVE years old blog. +This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. +He got two GIRL babes. +Her names are Tanisha & Renusha. +``` + +Details are follow: + + * **`cat:`** It’s a command + * **`tr:`** It’s a command + * **`|:`** Pipe symbol. It pass first command output as a input to another command. + * **`s:`** Replace each sequence of a repeated character that is listed in the last specified SET. + * **`\n:`** To add a new line. + * **`2daygeek.txt:`** Source file name. + + + +### How To Remove/Delete The Empty Lines In A File In Linux Using perl Command? + +Perl stands in for “Practical Extraction and Reporting Language”. 
Perl is a programming language specially designed for text editing. It is now widely used for a variety of purposes including Linux system administration, network programming, web development, etc. + +``` +$ perl -ne 'print if /\S/' 2daygeek.txt +2daygeek.com is a best Linux blog to learn Linux. +It's FIVE years old blog. +This website is maintained by Magesh M, it's licensed under CC BY-NC 4.0. +He got two GIRL babes. +Her names are Tanisha & Renusha. +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/remove-delete-empty-lines-in-a-file-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[pityonline](https://github.com/pityonline) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/create-a-file-in-specific-certain-size-linux/ +[2]: https://www.2daygeek.com/linux-command-to-create-a-file/ +[3]: https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/ diff --git a/sources/tech/20190211 How does rootless Podman work.md b/sources/tech/20190211 How does rootless Podman work.md new file mode 100644 index 0000000000..a085ae9014 --- /dev/null +++ b/sources/tech/20190211 How does rootless Podman work.md @@ -0,0 +1,107 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How does rootless Podman work?) +[#]: via: (https://opensource.com/article/19/2/how-does-rootless-podman-work) +[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan) + +How does rootless Podman work? +====== +Learn how Podman takes advantage of user namespaces to run in rootless mode. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82) + +In my [previous article][1] on user namespace and [Podman][2], I discussed how you can use Podman commands to launch different containers with different user namespaces giving you better separation between containers. Podman also takes advantage of user namespaces to be able to run in rootless mode. Basically, when a non-privileged user runs Podman, the tool sets up and joins a user namespace. After Podman becomes root inside of the user namespace, Podman is allowed to mount certain filesystems and set up the container. Note there is no privilege escalation here other then additional UIDs available to the user, explained below. + +### How does Podman create the user namespace? + +#### shadow-utils + +Most current Linux distributions include a version of shadow-utils that uses the **/etc/subuid** and **/etc/subgid** files to determine what UIDs and GIDs are available for a user in a user namespace. + +``` +$ cat /etc/subuid +dwalsh:100000:65536 +test:165536:65536 +$ cat /etc/subgid +dwalsh:100000:65536 +test:165536:65536 +``` + +The useradd program automatically allocates 65536 UIDs for each user added to the system. If you have existing users on a system, you would need to allocate the UIDs yourself. The format of these files is **username:STARTUID:TOTALUIDS**. Meaning in my case, dwalsh is allocated UIDs 100000 through 165535 along with my default UID, which happens to be 3265 defined in /etc/passwd. 
You need to be careful when allocating these UID ranges that they don't overlap with any **real** UID on the system. If you had a user listed as UID 100001, now I (dwalsh) would be able to become this UID and potentially read/write/execute files owned by the UID. + +Shadow-utils also adds two setuid programs (or setfilecap). On Fedora I have: + +``` +$ getcap /usr/bin/newuidmap +/usr/bin/newuidmap = cap_setuid+ep +$ getcap /usr/bin/newgidmap +/usr/bin/newgidmap = cap_setgid+ep +``` + +Podman executes these files to set up the user namespace. You can see the mappings by examining /proc/self/uid_map and /proc/self/gid_map from inside of the rootless container. + +``` +$ podman run alpine cat /proc/self/uid_map /proc/self/gid_map +        0       3267            1 +        1       100000          65536 +        0       3267            1 +        1       100000          65536 +``` + +As seen above, Podman defaults to mapping root in the container to your current UID (3267) and then maps ranges of allocated UIDs/GIDs in /etc/subuid and /etc/subgid starting at 1. Meaning in my example, UID=1 in the container is UID 100000, UID=2 is UID 100001, all the way up to 65536, which is 165535. + +Any item from outside of the user namespace that is owned by a UID or GID that is not mapped into the user namespace appears to belong to the user configured in the **kernel.overflowuid** sysctl, which by default is 35534, which my /etc/passwd file says has the name **nobody**. Since your process can't run as an ID that isn't mapped, the owner and group permissions don't apply, so you can only access these files based on their "other" permissions. This includes all files owned by **real** root on the system running the container, since root is not mapped into the user namespace. + +The [Buildah][3] command has a cool feature, [**buildah unshare**][4]. This puts you in the same user namespace that Podman runs in, but without entering the container's filesystem, so you can list the contents of your home directory. + +``` +$ ls -ild /home/dwalsh +8193 drwx--x--x. 290 dwalsh dwalsh 20480 Jan 29 07:58 /home/dwalsh +$ buildah unshare ls -ld /home/dwalsh +drwx--x--x. 290 root root 20480 Jan 29 07:58 /home/dwalsh +``` + +Notice that when listing the home dir attributes outside the user namespace, the kernel reports the ownership as dwalsh, while inside the user namespace it reports the directory as owned by root. This is because the home directory is owned by 3267, and inside the user namespace we are treating that UID as root. + +### What happens next in Podman after the user namespace is set up? + +Podman uses [containers/storage][5] to pull the container image, and containers/storage is smart enough to map all files owned by root in the image to the root of the user namespace, and any other files owned by different UIDs to their user namespace UIDs. By default, this content gets written to ~/.local/share/containers/storage. Container storage works in rootless mode with either the vfs mode or with Overlay. Note: Overlay is supported only if the [fuse-overlayfs][6] executable is installed. + +The kernel only allows user namespace root to mount certain types of filesystems; at this time it allows mounting of procfs, sysfs, tmpfs, fusefs, and bind mounts (as long as the source and destination are owned by the user running Podman. OverlayFS is not supported yet, although the kernel teams are working on allowing it). 
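Before moving on, it can be handy to confirm which storage driver your own rootless setup ended up with, since the next steps differ for fuse-overlayfs and vfs. The field names and output below are illustrative (they shift a little between Podman versions), which is why the grep is case-insensitive:

```
$ podman info | grep -i -e graphdrivername -e graphroot
  GraphDriverName: overlay
  GraphRoot: /home/dwalsh/.local/share/containers/storage
```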
+ +Podman then mounts the container's storage if it is using fuse-overlayfs; if the storage driver is using vfs, then no mounting is required. Podman on vfs requires a lot of space though, since each container copies the entire underlying filesystem. + +Podman then mounts /proc and /sys along with a few tmpfs and creates the devices in the container. + +In order to use networking other than the host networking, Podman uses the [slirp4netns][7] program to set up **User mode networking for unprivileged network namespace**. Slirp4netns allows Podman to expose ports within the container to the host. Note that the kernel still will not allow a non-privileged process to bind to ports less than 1024. Podman-1.1 or later is required for binding to ports. + +Rootless Podman can use user namespace for container separation, but you only have access to the UIDs defined in the /etc/subuid file. + +### Conclusion + +The Podman tool is enabling people to build and use containers without sacrificing the security of the system; you can give your developers the access they need without giving them root. + +And when you put your containers into production, you can take advantage of the extra security provided by the user namespace to keep the workloads isolated from each other. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/how-does-rootless-podman-work + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/12/podman-and-user-namespaces +[2]: https://podman.io/ +[3]: https://buildah.io/ +[4]: https://github.com/containers/buildah/blob/master/docs/buildah-unshare.md +[5]: https://github.com/containers/storage +[6]: https://github.com/containers/fuse-overlayfs +[7]: https://github.com/rootless-containers/slirp4netns diff --git a/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md b/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md new file mode 100644 index 0000000000..c04d47e5ca --- /dev/null +++ b/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md @@ -0,0 +1,68 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What's the right amount of swap space for a modern Linux system?) +[#]: via: (https://opensource.com/article/19/2/swap-space-poll) +[#]: author: (David Both https://opensource.com/users/dboth) + +What's the right amount of swap space for a modern Linux system? +====== +Complete our survey and voice your opinion on how much swap space to allocate. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0) + +Swap space is one of those things that everyone seems to have an idea about, and I am no exception. All my sysadmin friends have their opinions, and most distributions make recommendations too. + +Many years ago, the rule of thumb for the amount of swap space that should be allocated was 2X the amount of RAM installed in the computer. Of course that was when a typical computer's RAM was measured in KB or MB. 
So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. + +This took into account the fact that RAM memory sizes were typically quite small, and allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than performing useful work. + +RAM memory has become quite inexpensive and many computers now have RAM in the tens of gigabytes. Most of my newer computers have at least 4GB or 8GB of RAM, two have 32GB, and my main workstation has 64GB. When dealing with computers with huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. As a consequence, recommended swap space is considered a function of system memory workload, not system memory. + +Table 1 provides the Fedora Project's recommended size for a swap partition, depending on the amount of RAM in your system and whether you want enough memory for your system to hibernate. To allow for hibernation, you need to edit the swap space in the custom partitioning stage. The "recommended" swap partition size is established automatically during a default installation, but I usually find it's either too large or too small for my needs. + +The [Fedora 28 Installation Guide][1] defines current thinking about swap space allocation. Note that other versions of Fedora and other Linux distributions may differ slightly, but this is the same table Red Hat Enterprise Linux uses for its recommendations. These recommendations have not changed since Fedora 19. + +| Amount of RAM installed in system | Recommended swap space | Recommended swap space with hibernation | +| --------------------------------- | ---------------------- | --------------------------------------- | +| ≤ 2GB | 2X RAM | 3X RAM | +| 2GB – 8GB | = RAM | 2X RAM | +| 8GB – 64GB | 4G to 0.5X RAM | 1.5X RAM | +| >64GB | Minimum 4GB | Hibernation not recommended | + +Table 1: Recommended system swap space in Fedora 28's documentation. + +Table 2 contains my recommendations based on my experiences in multiple environments over the years. +| Amount of RAM installed in system | Recommended swap space | +| --------------------------------- | ---------------------- | +| ≤ 2GB | 2X RAM | +| 2GB – 8GB | = RAM | +| > 8GB | 8GB | + +Table 2: My recommended system swap space. + +It's possible that neither of these tables will work for your environment, but they will give you a place to start. The main consideration is that as the amount of RAM increases, adding more swap space simply leads to thrashing well before the swap space comes close to being filled. If you have too little virtual memory, you should add more RAM, if possible, rather than more swap space. + +In order to test the Fedora (and RHEL) swap space recommendations, I used its recommendation of **0.5*RAM** on my two largest systems (the ones with 32GB and 64GB of RAM). Even when running four or five VMs, multiple documents in LibreOffice, Thunderbird, the Chrome web browser, several terminal emulator sessions, the Xfe file manager, and a number of other background applications, the only time I see any use of swap is during backups I have scheduled for every morning at about 2am. Even then, swap usage is no more than 16MB—yes megabytes. These results are for my system with my loads and do not necessarily apply to your real-world environment. 
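If you want to gather the same kind of evidence on your own machines before settling on a number, a couple of quick checks go a long way (the sizes shown are just an example):

```
$ swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   8G   0B   -2
$ free -h | grep -i swap
Swap:          8.0G        0B        8.0G
$ vmstat 5 3    # watch the si/so columns for pages swapped in and out
```

If si and so stay at or near zero under your normal workload, you are in much the same position as the systems described above.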
+ +I recently had a conversation about swap space with some of the other Community Moderators here at [Opensource.com][2], and Chris Short, one of my friends in that illustrious and talented group, pointed me to an old [article][3] where he recommended using 1GB for swap space. This article was written in 2003, and he told me later that he now recommends zero swap space. + +So, we wondered, what you think? What do you recommend or use on your systems for swap space? + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/swap-space-poll + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ +[2]: http://Opensource.com +[3]: https://chrisshort.net/moving-to-linux-partitioning/ diff --git a/sources/tech/20190212 Ampersands and File Descriptors in Bash.md b/sources/tech/20190212 Ampersands and File Descriptors in Bash.md new file mode 100644 index 0000000000..ae0f2ce3f0 --- /dev/null +++ b/sources/tech/20190212 Ampersands and File Descriptors in Bash.md @@ -0,0 +1,162 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Ampersands and File Descriptors in Bash) +[#]: via: (https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +Ampersands and File Descriptors in Bash +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ampersand-coffee.png?itok=yChaT-47) + +In our quest to examine all the clutter (`&`, `|`, `;`, `>`, `<`, `{`, `[`, `(`, ), `]`, `}`, etc.) that is peppered throughout most chained Bash commands, [we have been taking a closer look at the ampersand symbol (`&`)][1]. + +[Last time, we saw how you can use `&` to push processes that may take a long time to complete into the background][1]. But, the &, in combination with angle brackets, can also be used to pipe output and input elsewhere. + +In the [previous tutorials on][2] [angle brackets][3], you saw how to use `>` like this: + +``` +ls > list.txt +``` + +to pipe the output from `ls` to the _list.txt_ file. + +Now we see that this is really shorthand for + +``` +ls 1> list.txt +``` + +And that `1`, in this context, is a file descriptor that points to the standard output (`stdout`). + +In a similar fashion `2` points to standard error (`stderr`), and in the following command: + +``` +ls 2> error.log +``` + +all error messages are piped to the _error.log_ file. + +To recap: `1>` is the standard output (`stdout`) and `2>` the standard error output (`stderr`). + +There is a third standard file descriptor, `0<`, the standard input (`stdin`). You can see it is an input because the arrow (`<`) is pointing into the `0`, while for `1` and `2`, the arrows (`>`) are pointing outwards. + +### What are the standard file descriptors good for? + +If you are following this series in order, you have already used the standard output (`1>`) several times in its shorthand form: `>`. + +Things like `stderr` (`2`) are also handy when, for example, you know that your command is going to throw an error, but what Bash informs you of is not useful and you don't need to see it. 
If you want to make a directory in your _home/_ directory, for example: + +``` +mkdir newdir +``` + +and if _newdir/_ already exists, `mkdir` will show an error. But why would you care? (Ok, there some circumstances in which you may care, but not always.) At the end of the day, _newdir_ will be there one way or another for you to fill up with stuff. You can supress the error message by pushing it into the void, which is _/dev/null_ : + +``` +mkdir newdir 2> /dev/null +``` + +This is not just a matter of " _let's not show ugly and irrelevant error messages because they are annoying,_ " as there may be circumstances in which an error message may cause a cascade of errors elsewhere. Say, for example, you want to find all the _.service_ files under _/etc_. You could do this: + +``` +find /etc -iname "*.service" +``` + +But it turns out that on most systems, many of the lines spat out by `find` show errors because a regular user does not have read access rights to some of the folders under _/etc_. It makes reading the correct output cumbersome and, if `find` is part of a larger script, it could cause the next command in line to bork. + +Instead, you can do this: + +``` +find /etc -iname "*.service" 2> /dev/null +``` + +And you get only the results you are looking for. + +### A Primer on File Descriptors + +There are some caveats to having separate file descriptors for `stdout` and `stderr`, though. If you want to store the output in a file, doing this: + +``` +find /etc -iname "*.service" 1> services.txt +``` + +would work fine because `1>` means " _send standard output, and only standard output (NOT standard error) somewhere_ ". + +But herein lies a problem: what if you *do* want to keep a record within the file of the errors along with the non-erroneous results? The instruction above won't do that because it ONLY writes the correct results from `find`, and + +``` +find /etc -iname "*.service" 2> services.txt +``` + +will ONLY write the errors. + +How do we get both? Try the following command: + +``` +find /etc -iname "*.service" &> services.txt +``` + +... and say hello to `&` again! + +We have been saying all along that `stdin` (`0`), `stdout` (`1`), and `stderr` (`2`) are _file descriptors_. A file descriptor is a special construct that points to a channel to a file, either for reading, or writing, or both. This comes from the old UNIX philosophy of treating everything as a file. Want to write to a device? Treat it as a file. Want to write to a socket and send data over a network? Treat it as a file. Want to read from and write to a file? Well, obviously, treat it as a file. + +So, when managing where the output and errors from a command goes, treat the destination as a file. Hence, when you open them to read and write to them, they all get file descriptors. + +This has interesting effects. You can, for example, pipe contents from one file descriptor to another: + +``` +find /etc -iname "*.service" 1> services.txt 2>&1 +``` + +This pipes `stderr` to `stdout` and `stdout` is piped to a file, _services.txt_. + +And there it is again: the `&`, signaling to Bash that `1` is the destination file descriptor. + +Another thing with the standard file descriptors is that, when you pipe from one to another, the order in which you do this is a bit counterintuitive. Take the command above, for example. It looks like it has been written the wrong way around. 
You may be reading it like this: " _pipe the output to a file and then pipe errors to the standard output._ " It would seem the error output comes to late and is sent when `1` is already done. + +But that is not how file descriptors work. A file descriptor is not a placeholder for the file, but for the _input and/or output channel_ to the file. In this case, when you do `1> services.txt`, you are saying " _open a write channel to services.txt and leave it open_ ". `1` is the name of the channel you are going to use, and it remains open until the end of the line. + +If you still think it is the wrong way around, try this: + +``` +find /etc -iname "*.service" 2>&1 1>services.txt +``` + +And notice how it doesn't work; notice how errors get piped to the terminal and only the non-erroneous output (that is `stdout`) gets pushed to `services.txt`. + +That is because Bash processes every result from `find` from left to right. Think about it like this: when Bash gets to `2>&1`, `stdout` (`1`) is still a channel that points to the terminal. If the result that `find` feeds Bash contains an error, it is popped into `2`, transferred to `1`, and, away it goes, off to the terminal! + +Then at the end of the command, Bash sees you want to open `stdout` as a channel to the _services.txt_ file. If no error has occurred, the result goes through `1` into the file. + +By contrast, in + +``` +find /etc -iname "*.service" 1>services.txt 2>&1 +``` + +`1` is pointing at `services.txt` right from the beginning, so anything that pops into `2` gets piped through `1`, which is already pointing to the final resting place in `services.txt`, and that is why it works. + +In any case, as mentioned above `&>` is shorthand for " _both standard output and standard error_ ", that is, `2>&1`. + +This is probably all a bit much, but don't worry about it. Re-routing file descriptors here and there is commonplace in Bash command lines and scripts. And, you'll be learning more about file descriptors as we progress through this series. See you next week! + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux +[2]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash +[3]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash diff --git a/sources/tech/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md b/sources/tech/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md new file mode 100644 index 0000000000..0fadc0908d --- /dev/null +++ b/sources/tech/20190212 How To Check CPU, Memory And Swap Utilization Percentage In Linux.md @@ -0,0 +1,226 @@ +[#]: collector: (lujun9972) +[#]: translator: (An-DJ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Check CPU, Memory And Swap Utilization Percentage In Linux?) +[#]: via: (https://www.2daygeek.com/linux-check-cpu-memory-swap-utilization-percentage/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +How To Check CPU, Memory And Swap Utilization Percentage In Linux? 
+====== + +There is a lot of commands and options are available in Linux to check memory utilization but i don’t see much information to check about memory utilization percentage. + +Most of the times we are checking memory utilization alone and we won’t think about how much percentage is used. + +If you want to know those information then you are in the right page. + +We are here to help you out on this in details. + +This tutorial will help you to identify the memory utilization when you are facing high memory utilization frequently in Linux server. + +But the same time, you won’t be getting the clear utilization if you are using `free -m` or `free -g`. + +These format commands fall under Linux advanced commands. It will be very useful for Linux Experts and Middle Level Linux Users. + +### Method-1: How To Check Memory Utilization Percentage In Linux? + +We can use the following combination of commands to get this done. In this method, we are using combination of free and awk command to get the memory utilization percentage. + +If you are looking for other articles which is related to memory then navigate to the following link. Those are **[free Command][1]** , **[smem Command][2]** , **[ps_mem Command][3]** , **[vmstat Command][4]** and **[Multiple ways to check size of physical memory][5]**. + +For `Memory` Utilization Percentage without Percent Symbol: + +``` +$ free -t | awk 'NR == 2 {print "Current Memory Utilization is : " $3/$2*100}' +or +$ free -t | awk 'FNR == 2 {print "Current Memory Utilization is : " $3/$2*100}' + +Current Memory Utilization is : 20.4194 +``` + +For `Swap` Utilization Percentage without Percent Symbol: + +``` +$ free -t | awk 'NR == 3 {print "Current Swap Utilization is : " $3/$2*100}' +or +$ free -t | awk 'FNR == 3 {print "Current Swap Utilization is : " $3/$2*100}' + +Current Swap Utilization is : 0 +``` + +For `Memory` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ free -t | awk 'NR == 2 {printf("Current Memory Utilization is : %.2f%"), $3/$2*100}' +or +$ free -t | awk 'FNR == 2 {printf("Current Memory Utilization is : %.2f%"), $3/$2*100}' + +Current Memory Utilization is : 20.42% +``` + +For `Swap` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ free -t | awk 'NR == 3 {printf("Current Swap Utilization is : %.2f%"), $3/$2*100}' +or +$ free -t | awk 'FNR == 3 {printf("Current Swap Utilization is : %.2f%"), $3/$2*100}' + +Current Swap Utilization is : 0.00% +``` + +If you are looking for other articles which is related to memory then navigate to the following link. Those are **[Create/Extend Swap Partition using LVM][6]** , **[Multiple Ways To Create Or Extend Swap Space][7]** and **[Shell Script to automatically Create/Remove and Mount Swap File][8]**. + +free command output for better clarification: + +``` +$ free + total used free shared buff/cache available +Mem: 15867 3730 9868 1189 2269 10640 +Swap: 17454 0 17454 +Total: 33322 3730 27322 +``` + +Details are follow: + + * **`free:`** free is a standard command to check memory utilization in Linux. + * **`awk:`** awk is a powerful command which is specialized for textual data manipulation. + * **`FNR == 2:`** It gives the total number of records for each input file. Basically it’s used to select the given line (Here, it chooses the line number 2). + * **`NR == 2:`** It gives the total number of records processed. Basically it’s used to filter the given line (Here, it chooses the line number 2).. 
+ * **`$3/$2*100:`** It divides column 2 with column 3 and it’s multiply the results with 100. + * **`printf:`** It used to format and print data. + * **`%.2f%:`** By default it prints floating point numbers with 6 decimal places. Use the following format to limit a decimal places. + + + +### Method-2: How To Check Memory Utilization Percentage In Linux? + +We can use the following combination of commands to get this done. In this method, we are using combination of free, grep and awk command to get the memory utilization percentage. + +For `Memory` Utilization Percentage without Percent Symbol: + +``` +$ free -t | grep Mem | awk '{print "Current Memory Utilization is : " $3/$2*100}' +Current Memory Utilization is : 20.4228 +``` + +For `Swap` Utilization Percentage without Percent Symbol: + +``` +$ free -t | grep Swap | awk '{print "Current Swap Utilization is : " $3/$2*100}' +Current Swap Utilization is : 0 +``` + +For `Memory` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ free -t | grep Mem | awk '{printf("Current Memory Utilization is : %.2f%"), $3/$2*100}' +Current Memory Utilization is : 20.43% +``` + +For `Swap` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ free -t | grep Swap | awk '{printf("Current Swap Utilization is : %.2f%"), $3/$2*100}' +Current Swap Utilization is : 0.00% +``` + +### Method-1: How To Check CPU Utilization Percentage In Linux? + +We can use the following combination of commands to get this done. In this method, we are using combination of top, print and awk command to get the CPU utilization percentage. + +If you are looking for other articles which is related to memory then navigate to the following link. Those are **[top Command][9]** , **[htop Command][10]** , **[atop Command][11]** and **[Glances Command][12]**. + +If it shows multiple CPU in the output then you need to use the following method. + +``` +$ top -b -n1 | grep ^%Cpu +%Cpu0 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 5.3 si, 0.0 st +%Cpu3 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu4 : 10.5 us, 15.8 sy, 0.0 ni, 73.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu5 : 0.0 us, 5.0 sy, 0.0 ni, 95.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu6 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu7 : 5.3 us, 0.0 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +``` + +For `CPU` Utilization Percentage without Percent Symbol: + +``` +$ top -b -n1 | grep ^%Cpu | awk '{cpu+=$9}END{print "Current CPU Utilization is : " 100-cpu/NR}' +Current CPU Utilization is : 21.05 +``` + +For `CPU` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ top -b -n1 | grep ^%Cpu | awk '{cpu+=$9}END{printf("Current CPU Utilization is : %.2f%"), 100-cpu/NR}' +Current CPU Utilization is : 14.81% +``` + +### Method-2: How To Check CPU Utilization Percentage In Linux? + +We can use the following combination of commands to get this done. In this method, we are using combination of top, print/printf and awk command to get the CPU utilization percentage. + +If it shows all together CPU(s) in the single output then you need to use the following method. 
+ +``` +$ top -b -n1 | grep ^%Cpu +%Cpu(s): 15.3 us, 7.2 sy, 0.8 ni, 69.0 id, 6.7 wa, 0.0 hi, 1.0 si, 0.0 st +``` + +For `CPU` Utilization Percentage without Percent Symbol: + +``` +$ top -b -n1 | grep ^%Cpu | awk '{print "Current CPU Utilization is : " 100-$8}' +Current CPU Utilization is : 5.6 +``` + +For `CPU` Utilization Percentage with Percent Symbol and two decimal places: + +``` +$ top -b -n1 | grep ^%Cpu | awk '{printf("Current CPU Utilization is : %.2f%"), 100-$8}' +Current CPU Utilization is : 5.40% +``` + +Details are follow: + + * **`top:`** top is one of the best command to check currently running process on Linux system. + * **`-b:`** -b option, allow the top command to switch in batch mode. It is useful when you run the top command from local system to remote system. + * **`-n1:`** Number-of-iterations + * **`^%Cpu:`** Filter the lines which starts with %Cpu + * **`awk:`** awk is a powerful command which is specialized for textual data manipulation. + * **`cpu+=$9:`** For each line, add column 9 to a variable ‘cpu’. + * **`printf:`** It used to format and print data. + * **`%.2f%:`** By default it prints floating point numbers with 6 decimal places. Use the following format to limit a decimal places. + * **`100-cpu/NR:`** Finally print the ‘CPU Average’ by subtracting 100, divided by the number of records. + + + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-check-cpu-memory-swap-utilization-percentage/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/ +[2]: https://www.2daygeek.com/smem-linux-memory-usage-statistics-reporting-tool/ +[3]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/ +[4]: https://www.2daygeek.com/linux-vmstat-command-examples-tool-report-virtual-memory-statistics/ +[5]: https://www.2daygeek.com/easy-ways-to-check-size-of-physical-memory-ram-in-linux/ +[6]: https://www.2daygeek.com/how-to-create-extend-swap-partition-in-linux-using-lvm/ +[7]: https://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/ +[8]: https://www.2daygeek.com/shell-script-create-add-extend-swap-space-linux/ +[9]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/ +[10]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/ +[11]: https://www.2daygeek.com/atop-system-process-performance-monitoring-tool/ +[12]: https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/ diff --git a/sources/tech/20190212 Top 10 Best Linux Media Server Software.md b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md new file mode 100644 index 0000000000..8fcea6343a --- /dev/null +++ b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 10 Best Linux Media Server Software) +[#]: via: (https://itsfoss.com/best-linux-media-server) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Top 10 Best Linux Media 
Server Software +====== + +Did someone tell you that Linux is just for programmers? That is so wrong! You have got a lot of great tools for [digital artists][1], [writers][2] and musicians. + +We have covered such tools in the past. Today it’s going to be slightly different. Instead of creating new digital content, let’s talk about consuming it. + +You have probably heard of media servers? Basically these software (and sometimes gadgets) allow you to view your local or cloud media (music, videos etc) in an intuitive interface. You can even use it to stream the content to other devices on your network. Sort of your personal Netflix. + +In this article, we will talk about the best media software available for Linux that you can use as a media player or as a media server software – as per your requirements. + +Some of these applications can also be used with Google’s Chromecast and Amazon’s Firestick. + +### Best Media Server Software for Linux + +![Best Media Server Software for Linux][3] + +The mentioned Linux media server software are in no particular order of ranking. + +I have tried to provide installation instructions for Ubuntu and Debian based distributions. It’s not possible to list installation steps for all Linux distributions for all the media servers mentioned here. Please take no offence for that. + +A couple of software in this list are not open source. If that’s the case, I have highlighted it appropriately. + +### 1\. Kodi + +![Kodi Media Server][4] + +Kod is one of the most popular media server software and player. Recently, Kodi 18.0 dropped in with a bunch of improvements that includes the support for Digital Rights Management (DRM) decryption, game emulators, ROMs, voice control, and more. + +It is a completely free and open source software. An active community for discussions and support exists as well. The user interface for Kodi is beautiful. I haven’t had the chance to use it in its early days – but I was amazed to see such a good UI for a Linux application. + +It has got great playback support – so you can add any supported 3rd party media service for the content or manually add the ripped video files to watch. + +#### How to install Kodi + +Type in the following commands in the terminal to install the latest version of Kodi via its [official PPA][5]. + +``` +sudo apt-get install software-properties-common +sudo add-apt-repository ppa:team-xbmc/ppa +sudo apt-get update +sudo apt-get install kodi +``` + +To know more about installing a development build or upgrading Kodi, refer to the [official installation guide][6]. + +### 2\. Plex + +![Plex Media Server][7] + +Plex is yet another impressive media player or could be used as a media server software. It is a great alternative to Kodi for the users who mostly utilize it to create an offline network of their media collection to sync and watch across multiple devices. + +Unlike Kodi, **Plex is not entirely open source**. It does offer a free account in order to use it. In addition, it offers premium pricing plans to unlock more features and have a greater control over your media while also being able to get a detailed insight on who/what/how Plex is being used. + +If you are an audiophile, you would love the integration of Plex with [TIDAL][8] music streaming service. You can also set up Live TV by adding it to your tuner. + +#### How to install Plex + +You can simply download the .deb file available on their official webpage and install it directly (or using [GDebi][9]) + +### 3\. 
Jellyfin + +![Emby media server][10] + +Yet another open source media server software with a bunch of features. [Jellyfin][11] is actually a fork of Emby media server. It may be one of the best out there available for ‘free’ but the multi-platform support still isn’t there yet. + +You can run it on a browser or utilize Chromecast – however – you will have to wait if you want the Android app or if you want it to support several devices. + +#### How to install Jellyfin + +Jellyfin provides a [detailed documentation][12] on how to install it from the binary packages/image available for Linux, Docker, and more. + +You will also find it easy to install it from the repository via the command line for Debian-based distribution. Check out their [installation guide][13] for more information. + +### 4\. LibreELEC + +![libreELEC][14] + +LibreELEC is an interesting media server software which is based on Kodi v18.0. They have recently released a new version (9.0.0) with a complete overhaul of the core OS support, hardware compatibility and user experience. + +Of course, being based on Kodi, it also has the DRM support. In addition, you can utilize its generic Linux builds or the special ones tailored for Raspberry Pi builds, WeTek devices, and more. + +#### How to install LibreELEC + +You can download the installer from their [official site][15]. For detailed instructions on how to use it, please refer to the [installation guide][16]. + +### 5\. OpenFLIXR Media Server + +![OpenFLIXR Media Server][17] + +Want something similar that compliments Plex media server but also compatible with VirtualBox or VMWare? You got it! + +OpenFLIXR is an automated media server software which integrates with Plex to provide all the features along with the ability to auto download TV shows and movies from Torrents. It even fetches the subtitles automatically giving you a seamless experience when coupled with Plex media software. + +You can also automate your home theater with this installed. In case you do not want to run it on a physical instance, it supports VMware, VirtualBox and Hyper-V as well. The best part is – it is an open source solution and based on Ubuntu Server. + +#### How to install OpenFLIXR + +The best way to do it is by installing VirtualBox – it will be easier. After you do that, just download it from the [official website][18] and import it. + +### 6\. MediaPortal + +![MediaPortal][19] + +MediaPortal is just another open source simple media server software with a decent user interface. It all depends on your personal preference – event though I would recommend Kodi over this. + +You can play DVDs, stream videos on your local network, and listen to music as well. It does not offer a fancy set of features but the ones you will mostly need. + +It gives you the option to choose from two different versions (one that is stable and the second which tries to incorporate new features – could be unstable). + +#### How to install MediaPotal + +Depending on what you want to setup (A TV-server only or a complete server setup), follow the [official setup guide][20] to install it properly. + +### 7\. Gerbera + +![Gerbera Media Center][21] + +A simple implementation for a media server to be able to stream using your local network. It does support transcoding which will convert the media in the format your device supports. + +If you have been following the options for media server form a very long time, then you might identify this as the rebranded (and improved) version of MediaTomb. 
Even though it is not a popular choice among the Linux users – it is still something usable when all fails or for someone who prefers a straightforward and a basic media server. + +#### How to install Gerbera + +Type in the following commands in the terminal to install it on any Ubuntu-based distro: + +``` +sudo apt install gerbera +``` + +For other Linux distributions, refer to the [documentation][22]. + +### 8\. OSMC (Open Source Media Center) + +![OSMC Open Source Media Center][23] + +It is an elegant-looking media server software originally based on Kodi media center. I was quite impressed with the user interface. It is simple and robust, being a free and open source solution. In a nutshell, all the essential features you would expect in a media server software. + +You can also opt in to purchase OSMC’s flagship device. It will play just about anything up to 4K standards with HD audio. In addition, it supports Raspberry Pi builds and 1st-gen Apple TV. + +#### How to install OSMC + +If your device is compatible, you can just select your operating system and download the device installer from the official [download page][24] and create a bootable image to install. + +### 9\. Universal Media Server + +![][25] + +Yet another simple addition to this list. Universal Media Server does not offer any fancy features but just helps you transcode / stream video and audio without needing much configuration. + +It supports Xbox 360, PS 3, and just about any other [DLNA][26]-capable devices. + +#### How to install Universal Media Center + +You can find all the packages listed on [FossHub][27] but you should follow the [official forum][28] to know more about how to install the package that you downloaded from the website. + +### 10\. Red5 Media Server + +![Red5 Media Server][29]Image Credit: [Red5 Server][30] + +A free and open source media server tailored for enterprise usage. You can use it for live streaming solutions – no matter if it is for entertainment or just video conferencing. + +They also offer paid licensing options for mobiles and high scalability. + +#### How to install Red5 + +Even though it is not the quickest installation method, follow the [installation guide on GitHub][31] to get started with the server without needing to tinker around. + +### Wrapping Up + +Every media server software listed here has its own advantages – you should pick one up and try the one which suits your requirement. + +Did we miss any of your favorite media server software? Let us know about it in the comments below! 
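
As a closing practical tip: whichever of these servers you end up with, it helps to confirm that its web interface is actually reachable from another machine on your LAN before you start troubleshooting apps on your TV or phone. The short Bash sketch below is only an illustration: the host address is made up, and the port list just covers a few common defaults (Plex uses 32400, Jellyfin/Emby 8096, Kodi's web UI 8080), so adjust both for your own setup and your server's documented port.

```
#!/usr/bin/env bash
# Probe a media server host for a few commonly used web UI ports.
# HOST is a placeholder -- replace it with your server's LAN address.
HOST="192.168.1.50"

for PORT in 32400 8096 8080; do
    # Bash's /dev/tcp redirection opens a TCP connection; timeout avoids long hangs.
    if timeout 2 bash -c "cat < /dev/null > /dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
        echo "Port ${PORT} is open on ${HOST}"
    else
        echo "Port ${PORT} did not respond on ${HOST}"
    fi
done
```

If every port reports closed, check the service status and firewall rules on the server itself before digging into client-side settings.
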
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-linux-media-server + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-linux-graphic-design-software/ +[2]: https://itsfoss.com/open-source-tools-writers/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?resize=800%2C450&ssl=1 +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kodi-18-media-server.jpg?fit=800%2C450&ssl=1 +[5]: https://itsfoss.com/ppa-guide/ +[6]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/plex.jpg?fit=800%2C368&ssl=1 +[8]: https://tidal.com/ +[9]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/emby-server.jpg?fit=800%2C373&ssl=1 +[11]: https://jellyfin.github.io/ +[12]: https://jellyfin.readthedocs.io/en/latest/ +[13]: https://jellyfin.readthedocs.io/en/latest/administrator-docs/installing/ +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreelec.jpg?resize=800%2C600&ssl=1 +[15]: https://libreelec.tv/downloads_new/ +[16]: https://libreelec.wiki/libreelec_usb-sd_creator +[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/openflixr-media-server.jpg?fit=800%2C449&ssl=1 +[18]: http://www.openflixr.com/#Download +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/mediaportal.jpg?ssl=1 +[20]: https://www.team-mediaportal.com/wiki/display/MediaPortal1/Quick+Setup +[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gerbera-server-softwarei.jpg?fit=800%2C583&ssl=1 +[22]: http://docs.gerbera.io/en/latest/install.html +[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/osmc-server.jpg?fit=800%2C450&ssl=1 +[24]: https://osmc.tv/download/ +[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/universal-media-server.jpg?ssl=1 +[26]: https://en.wikipedia.org/wiki/Digital_Living_Network_Alliance +[27]: https://www.fosshub.com/Universal-Media-Server.html?dwl=UMS-7.8.0.tgz +[28]: https://www.universalmediaserver.com/forum/viewtopic.php?t=10275 +[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/red5.jpg?resize=800%2C364&ssl=1 +[30]: https://www.red5server.com/ +[31]: https://github.com/Red5/red5-server/wiki/Installation-on-Linux +[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190213 How To Install, Configure And Use Fish Shell In Linux.md b/sources/tech/20190213 How To Install, Configure And Use Fish Shell In Linux.md new file mode 100644 index 0000000000..a03335c6b6 --- /dev/null +++ b/sources/tech/20190213 How To Install, Configure And Use Fish Shell In Linux.md @@ -0,0 +1,264 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Install, Configure And Use Fish Shell In Linux?) +[#]: via: (https://www.2daygeek.com/linux-fish-shell-friendly-interactive-shell/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Install, Configure And Use Fish Shell In Linux? 
+====== + +Every Linux administrator might heard the word called shell. + +Do you know what is shell? Do you know what is the role for shell in Linux? How many shell is available in Linux? + +A shell is a program that provides an interface between a user and kernel. + +kernel is a heart of the Linux operating system that manage everything between user and operating system (OS). + +Shell is available for all the users when they launch the terminal. + +Once the terminal launched then user can run any commands which is available for him. + +When shell completes the command execution then you will be getting the output on the terminal window. + +Bash stands for Bourne Again Shell is the default shell which is running on most of the Linux distribution on today’s. + +It’s very popular and has a lot of features. Today we are going to discuss about the fish shell. + +### What Is Fish Shell? + +[Fish][1] stands for friendly interactive shell, is a fully-equipped, smart and user-friendly command line shell for Linux which comes with some handy features that is not available in most of the shell. + +The features are Autosuggestion, Sane Scripting, Man Page Completions, Web Based configuration and Glorious VGA Color. Are you curious to test it? if so, go ahead and install it by following the below installation steps. + +### How To Install Fish Shell In Linux? + +It’s very simple to install but it doesn’t available in most of the distributions except few. However, it can be easily installed by using the following [fish repository][2]. + +For **`Arch Linux`** based systems, use **[Pacman Command][3]** to install fish shell. + +``` +$ sudo pacman -S fish +``` + +For **`Ubuntu 16.04/18.04`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install fish shell. + +``` +$ sudo apt-add-repository ppa:fish-shell/release-3 +$ sudo apt-get update +$ sudo apt-get install fish +``` + +For **`Fedora`** system, use **[DNF Command][6]** to install fish shell. + +For Fedora 29 System: + +``` +$ sudo dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:/fish:/release:/3/Fedora_29/shells:fish:release:3.repo +$ sudo dnf install fish +``` + +For Fedora 28 System: + +``` +$ sudo dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:/fish:/release:/3/Fedora_28/shells:fish:release:3.repo +$ sudo dnf install fish +``` + +For **`Debian`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install fish shell. + +For Debian 9 System: + +``` +$ sudo wget -nv https://download.opensuse.org/repositories/shells:fish:release:3/Debian_9.0/Release.key -O Release.key +$ sudo apt-key add - < Release.key +$ sudo echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/3/Debian_9.0/ /' > /etc/apt/sources.list.d/shells:fish:release:3.list +$ sudo apt-get update +$ sudo apt-get install fish +``` + +For Debian 8 System: + +``` +$ sudo wget -nv https://download.opensuse.org/repositories/shells:fish:release:3/Debian_8.0/Release.key -O Release.key +$ sudo apt-key add - < Release.key +$ sudo echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/3/Debian_8.0/ /' > /etc/apt/sources.list.d/shells:fish:release:3.list +$ sudo apt-get update +$ sudo apt-get install fish +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install fish shell. 
+ +For RHEL 7 System: + +``` +$ sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/shells:/fish:/release:/3/RHEL_7/shells:fish:release:3.repo +$ sudo yum install fish +``` + +For RHEL 6 System: + +``` +$ sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/shells:/fish:/release:/3/RedHat_RHEL-6/shells:fish:release:3.repo +$ sudo yum install fish +``` + +For CentOS 7 System: + +``` +$ sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo +$ sudo yum install fish +``` + +For CentOS 6 System: + +``` +$ sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo +$ sudo yum install fish +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install fish shell. + +``` +$ sudo zypper addrepo https://download.opensuse.org/repositories/shells:/fish:/release:/3/openSUSE_Leap_42.3/shells:fish:release:3.repo +$ suod zypper refresh +$ sudo zypper install fish +``` + +### How To Use Fish Shell? + +Once you have successfully installed the fish shell. Simply type `fish` on your terminal, which will automatically switch to the fish shell from your default bash shell. + +``` +$ fish +``` + +![][10] + +### Auto Suggestions + +When you type any commands in the fish shell, it will auto suggest a command in a light grey color after typing few letters. +![][11] + +Once you got a suggestion then simple hit the `Left Arrow Mark` to complete it instead of typing the full command. +![][12] + +Instantly you can access the previous history based on the command by pressing `Up Arrow Mark` after typing a few letters. It’s similar to bash shell `CTRL+r` option. + +### Tab Completions + +If you would like to see if there are any other possibilities for the given command then simple press the `Tab` button once after typing a few letters. +![][13] + +Press the `Tab` button one more time to see the full lists. +![][14] + +### Syntax highlighting + +fish performs syntax highlighting, that you can see when you are typing any commands in the terminal. Invalid commands are colored by `RED color`. +![][15] + +The same way valid commands are shown in a different color. Also, fish will underline valid file paths when you type and it doesn’t show the underline if the path is not valid. +![][16] + +### Web based configuration + +There is a cool feature is available in the fish shell, that allow us to set colors, prompt, functions, variables, history and bindings via web browser. + +Run the following command on your terminal to start the web configuration interface. Simply press `Ctrl+c` to exit it. + +``` +$ fish_config +Web config started at 'file:///home/daygeek/.cache/fish/web_config-86ZF5P.html'. Hit enter to stop. +qt5ct: using qt5ct plugin +^C +Shutting down. +``` + +![][17] + +### Man Page Completions + +Other shells support programmable completions, but only fish generates them automatically by parsing your installed man pages. + +To do so, run the below command. + +``` +$ fish_update_completions +Parsing man pages and writing completions to /home/daygeek/.local/share/fish/generated_completions/ + 3466 / 3466 : zramctl.8.gz +``` + +### How To Set Fish as default shell + +If you would like to test the fish shell for some times then you can set the fish shell as your default shell instead of switching it every time. + +If so, first get the fish shell location by using the below command. 
+ +``` +$ whereis fish +fish: /usr/bin/fish /etc/fish /usr/share/fish /usr/share/man/man1/fish.1.gz +``` + +Change your default shell as a fish shell by running the following command. + +``` +$ chsh -s /usr/bin/fish +``` + +![][18] + +`Make note:` Just verify whether the fish shell is added into `/etc/shells` directory or not. If no, then run the following command to append it. + +``` +$ echo /usr/bin/fish | sudo tee -a /etc/shells +``` + +Once you have done the testing and if you would like to come back to the bash shell permanently then use the following command. + +For temporary: + +``` +$ bash +``` + +For permanent: + +``` +$ chsh -s /bin/bash +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-fish-shell-friendly-interactive-shell/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://fishshell.com/ +[2]: https://download.opensuse.org/repositories/shells:/fish:/release:/ +[3]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[10]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-1.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-2.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-5.png +[13]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-3.png +[14]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-4.png +[15]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-6.png +[16]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-8.png +[17]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-9.png +[18]: https://www.2daygeek.com/wp-content/uploads/2019/02/linux-fish-shell-friendly-interactive-shell-7.png diff --git a/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md new file mode 100644 index 0000000000..615f7620ed --- /dev/null +++ b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to build a WiFi picture frame with a Raspberry Pi) +[#]: via: (https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi) +[#]: author: (Manuel Dewald https://opensource.com/users/ntlx) + +How to build a WiFi picture frame with a 
Raspberry Pi +====== +DIY a digital photo frame that streams photos from the cloud. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_board_vector_red.png?itok=yaqYjYqI) + +Digital picture frames are really nice because they let you enjoy your photos without having to print them out. Plus, adding and removing digital files is a lot easier than opening a traditional frame and swapping the picture inside when you want to display a new photo. Even so, it's still a bit of overhead to remove your SD card, USB stick, or other storage from a digital picture frame, plug it into your computer, and copy new pictures onto it. + +An easier option is a digital picture frame that gets its pictures over WiFi, for example from a cloud service. Here's how to make one. + +### Gather your materials + + * Old [TFT][1] LCD screen + * HDMI-to-DVI cable (as the TFT screen supports DVI) + * Raspberry Pi 3 + * Micro SD card + * Raspberry Pi power supply + * Keyboard + * Mouse (optional) + + + +Connect the Raspberry Pi to the display using the cable and attach the power supply. + +### Install Raspbian + +**sudo raspi-config**. There I change the hostname (e.g., to **picframe** ) in Network Options and enable SSH to work remotely on the Raspberry Pi in Interfacing Options. Connect to the Raspberry Pi using (for example) . + +### Build and install the cloud client + +Download and flash Raspbian to the Micro SD card by following these [directions][2] . Plug the Micro SD card into the Raspberry Pi, boot it up, and configure your WiFi. My first action after a new Raspbian installation is usually running. There I change the hostname (e.g., to) in Network Options and enable SSH to work remotely on the Raspberry Pi in Interfacing Options. Connect to the Raspberry Pi using (for example) + +I use [Nextcloud][3] to synchronize my pictures, but you could use NFS, [Dropbox][4], or whatever else fits your needs to upload pictures to the frame. + +If you use Nextcloud, get a client for Raspbian by following these [instructions][5]. This is handy for placing new pictures on your picture frame and will give you the client application you may be familiar with on a desktop PC. When connecting the client application to your Nextcloud server, make sure to select only the folder where you'll store the images you want to be displayed on the picture frame. + +### Set up the slideshow + +The easiest way I've found to set up the slideshow is with a [lightweight slideshow project][6] built for exactly this purpose. There are some alternatives, like configuring a screensaver, but this application appears to be the simplest to set up. + +On your Raspberry Pi, download the binaries from the latest release, unpack them, and move them to an executable folder: + +``` +wget https://github.com/NautiluX/slide/releases/download/v0.9.0/slide_pi_stretch_0.9.0.tar.gz +tar xf slide_pi_stretch_0.9.0.tar.gz +mv slide_0.9.0/slide /usr/local/bin/ +``` + +Install the dependencies: + +``` +sudo apt install libexif12 qt5-default +``` + +Run the slideshow by executing the command below (don't forget to modify the path to your images). If you access your Raspberry Pi via SSH, set the **DISPLAY** variable to start the slideshow on the display attached to the Raspberry Pi. 
+ +``` +DISPLAY=:0.0 slide -p /home/pi/nextcloud/picframe +``` + +### Autostart the slideshow + +To autostart the slideshow on Raspbian Stretch, create the following folder and add an **autostart** file to it: + +``` +mkdir -p /home/pi/.config/lxsession/LXDE/ +vi /home/pi/.config/lxsession/LXDE/autostart +``` + +Insert the following commands to autostart your slideshow. The **slide** command can be adjusted to your needs: + +``` +@xset s noblank +@xset s off +@xset -dpms +@slide -p -t 60 -o 200 -p /home/pi/nextcloud/picframe +``` + +Disable screen blanking, which the Raspberry Pi normally does after 10 minutes, by editing the following file: + +``` +vi /etc/lightdm/lightdm.conf +``` + +and adding these two lines to the end: + +``` +[SeatDefaults] +xserver-command=X -s 0 -dpms +``` + +### Configure a power-on schedule + +You can schedule your picture frame to turn on and off at specific times by using two simple cronjobs. For example, say you want it to turn on automatically at 7 am and turn off at 11 pm. Run **crontab -e** and insert the following two lines. + +``` +0 23 * * * /opt/vc/bin/tvservice -o + +0 7 * * * /opt/vc/bin/tvservice -p && sudo systemctl restart display-manager +``` + +Note that this won't turn the Raspberry Pi power's on and off; it will just turn off HDMI, which will turn the screen off. The first line will power off HDMI at 11 pm. The second line will bring the display back up and restart the display manager at 7 am. + +### Add a final touch + +By following these simple steps, you can create your own WiFi picture frame. If you want to give it a nicer look, build a wooden frame for the display. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi + +作者:[Manuel Dewald][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ntlx +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Thin-film-transistor_liquid-crystal_display +[2]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md +[3]: https://nextcloud.com/ +[4]: http://dropbox.com/ +[5]: https://github.com/nextcloud/client_theming#building-on-debian +[6]: https://github.com/NautiluX/slide/releases/tag/v0.9.0 diff --git a/sources/tech/20190214 Run Particular Commands Without Sudo Password In Linux.md b/sources/tech/20190214 Run Particular Commands Without Sudo Password In Linux.md new file mode 100644 index 0000000000..df5bfddb3a --- /dev/null +++ b/sources/tech/20190214 Run Particular Commands Without Sudo Password In Linux.md @@ -0,0 +1,157 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Run Particular Commands Without Sudo Password In Linux) +[#]: via: (https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Run Particular Commands Without Sudo Password In Linux +====== + +I had a script on my Ubuntu system deployed on AWS. The primary purpose of this script is to check if a specific service is running at regular interval (every one minute to be precise) and start that service automatically if it is stopped for any reason. But the problem is I need sudo privileges to start the service. 
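
For context, the watchdog can be as small as the sketch below. This is not my exact script, and the service name is only an example, but it shows where the sudo call comes in:

```
#!/usr/bin/env bash
# Minimal illustration of the watchdog described above.
# "apache2" is a placeholder -- use the name of the service you actually monitor.
SERVICE="apache2"

if ! systemctl is-active --quiet "$SERVICE"; then
    # This is the line that needs root privileges.
    sudo systemctl start "$SERVICE"
fi
```

The catch is that last line: run interactively, it asks for my password, and from a cron job it simply fails.
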
As you may know already, we should provide password when we run something as sudo user. But I don’t want to do that. What I actually want to do is to run the service as sudo without password. If you’re ever in a situation like this, I know a small work around, Today, in this brief guide, I will teach you how to run particular commands without sudo password in Unix-like operating systems. + +Have a look at the following example. + +``` +$ sudo mkdir /ostechnix +[sudo] password for sk: +``` + +![][2] + +As you can see in the above screenshot, I need to provide sudo password when creating a directory named ostechnix in root (/) folder. Whenever we try to execute a command with sudo privileges, we must enter the password. However, in my scenario, I don’t want to provide the sudo password. Here is what I did to run a sudo command without password on my Linux box. + +### Run Particular Commands Without Sudo Password In Linux + +For any reasons, if you want to allow a user to run a particular command without giving the sudo password, you need to add that command in **sudoers** file. + +I want the user named **sk** to execute **mkdir** command without giving the sudo password. Let us see how to do it. + +Edit sudoers file: + +``` +$ sudo visudo +``` + +Add the following line at the end of file. + +``` +sk ALL=NOPASSWD:/bin/mkdir +``` + +![][3] + +Here, **sk** is the username. As per the above line, the user **sk** can run ‘mkdir’ command from any terminal, without sudo password. + +You can add additional commands (for example **chmod** ) with comma-separated values as shown below. + +``` +sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod +``` + +Save and close the file. Log out (or reboot) your system. Now, log in as normal user ‘sk’ and try to run those commands with sudo and see what happens. + +``` +$ sudo mkdir /dir1 +``` + +![][4] + +See? Even though I ran ‘mkdir’ command with sudo privileges, there was no password prompt. From now on, the user **sk** need not to enter the sudo password while running ‘mkdir’ command. + +When running all other commands except those commands added in sudoers files, you will be prompted to enter the sudo password. + +Let us run another command with sudo. + +``` +$ sudo apt update +``` + +![][5] + +See? This command prompts me to enter the sudo password. + +If you don’t want this command to prompt you to ask sudo password, edit sudoers file: + +``` +$ sudo visudo +``` + +Add the ‘apt’ command in visudo file like below: + +``` +sk ALL=NOPASSWD: /bin/mkdir,/usr/bin/apt +``` + +Did you notice that the apt binary executable file path is different from mkdir? Yes, you must provide the correct executable file path. To find executable file path of any command, for example ‘apt’, use ‘whereis’ command like below. + +``` +$ whereis apt +apt: /usr/bin/apt /usr/lib/apt /etc/apt /usr/share/man/man8/apt.8.gz +``` + +As you see, the executable file for apt command is **/usr/bin/apt** , hence I added it in sudoers file. + +Like I already mentioned, you can add any number of commands with comma-separated values. Save and close your sudoers file once you’re done. Log out and log in again to your system. + +Now, check if you can be able to run the command with sudo prefix without using the password: + +``` +$ sudo apt update +``` + +![][6] + +See? The apt command didn’t ask me the password even though I ran it with sudo. + +Here is yet another example. If you want to run a specific service, for example apache2, add it as shown below. 
+ +``` +sk ALL=NOPASSWD:/bin/mkdir,/usr/bin/apt,/bin systemctl restart apache2 +``` + +Now, the user can run ‘sudo systemctl restart apache2’ command without sudo password. + +Can I re-authenticate to a particular command in the above case? Of course, yes! Just remove the added command. Log out and log in back. + +Alternatively, you can add **‘PASSWD:’** directive in-front of the command. Look at the following example. + +Add/modify the following line as shown below. + +``` +sk ALL=NOPASSWD:/bin/mkdir,/bin/chmod,PASSWD:/usr/bin/apt +``` + +In this case, the user **sk** can run ‘mkdir’ and ‘chmod’ commands without entering the sudo password. However, he must provide sudo password when running ‘apt’ command. + +**Disclaimer:** This is for educational-purpose only. You should be very careful while applying this method. This method might be both productive and destructive. Say for example, if you allow users to execute ‘rm’ command without sudo password, they could accidentally or intentionally delete important stuffs. You have been warned! + +**Suggested read:** + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/run-particular-commands-without-sudo-password-linux/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-7.png +[4]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-6.png +[5]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-4.png +[6]: http://www.ostechnix.com/wp-content/uploads/2017/05/sudo-password-5.png diff --git a/sources/tech/20190215 4 Methods To Change The HostName In Linux.md b/sources/tech/20190215 4 Methods To Change The HostName In Linux.md new file mode 100644 index 0000000000..ad95e05fae --- /dev/null +++ b/sources/tech/20190215 4 Methods To Change The HostName In Linux.md @@ -0,0 +1,227 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 Methods To Change The HostName In Linux) +[#]: via: (https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +4 Methods To Change The HostName In Linux +====== + +We had written an article yesterday in our website about **[changing hostname in Linux][1]**. + +Today we are going to show you that how to change the hostname using different methods. You can choose the best one for you. + +systemd systems comes with a handy tool called `hostnamectl` that allow us to manage the system hostname easily. + +It’s changing the hostname instantly and doesn’t required reboot when you use the native commands. + +But if you modify the hostname manually in any of the configuration file that requires reboot. + +In this article we will show you the four methods to change the hostname in systemd system. + +hostnamectl command allows to set three kind of hostname in Linux and the details are below. 
+ + * **`Static:`** It’s static hostname which is added by the system admin. + * **`Transient/Dynamic:`** It’s assigned by DHCP or DNS server at run time. + * **`Pretty:`** It can be assigned by the system admin. It is a free-form of the hostname that represent the server in the pretty way like, “JBOSS UAT Server”. + + + +It can be done in the following four methods. + + * **`hostnamectl Command:`** hostnamectl command is controling the system hostname. + * **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager. + * **`nmtui Command:`** nmtui is a text User Interface for controlling NetworkManager. + * **`/etc/hostname file:`** This file is containing the static system hostname. + + + +### Method-1: Change The HostName Using hostnamectl Command in Linux + +hostnamectl may be used to query and change the system hostname and related settings. + +Simple run the `hostnamectl` command to view the system hostname. + +``` +$ hostnamectl +or +$ hostnamectl status + + Static hostname: daygeek-Y700 + Icon name: computer-laptop + Chassis: laptop + Machine ID: 31bdeb7b83230a2025d43547368d75bc + Boot ID: 267f264c448f000ea5aed47263c6de7f + Operating System: Manjaro Linux + Kernel: Linux 4.19.20-1-MANJARO + Architecture: x86-64 +``` + +If you would like to change the hostname, use the following command format. + +**The general syntax:** + +``` +$ hostnamectl set-hostname [YOUR NEW HOSTNAME] +``` + +Use the following command to change the hostname using hostnamectl command. In this example, i’m going to change the hostname from `daygeek-Y700` to `magi-laptop`. + +``` +$ hostnamectl set-hostname magi-laptop +``` + +You can view the updated hostname by running the following command. + +``` +$ hostnamectl + Static hostname: magi-laptop + Icon name: computer-laptop + Chassis: laptop + Machine ID: 31bdeb7b83230a2025d43547368d75bc + Boot ID: 267f264c448f000ea5aed47263c6de7f + Operating System: Manjaro Linux + Kernel: Linux 4.19.20-1-MANJARO + Architecture: x86-64 +``` + +### Method-2: Change The HostName Using nmcli Command in Linux + +nmcli is a command-line tool for controlling NetworkManager and reporting network status. + +nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as control and display network device status. Also, it allow us to change the hostname. + +Use the following format to view the current hostname using nmcli. + +``` +$ nmcli general hostname +daygeek-Y700 +``` + +**The general syntax:** + +``` +$ nmcli general hostname [YOUR NEW HOSTNAME] +``` + +Use the following command to change the hostname using nmcli command. In this example, i’m going to change the hostname from `daygeek-Y700` to `magi-laptop`. + +``` +$ nmcli general hostname magi-laptop +``` + +It’s taking effect without bouncing the below service. However, for safety purpose just restart the systemd-hostnamed service for the changes to take effect. + +``` +$ sudo systemctl restart systemd-hostnamed +``` + +Again run the same nmcli command to check the changed hostname. + +``` +$ nmcli general hostname +magi-laptop +``` + +### Method-3: Change The HostName Using nmtui Command in Linux + +nmtui is a curses‐based TUI application for interacting with NetworkManager. When starting nmtui, the user is prompted to choose the activity to perform unless it was specified as the first argument. + +Run the following command on terminal to launch the terminal user interface. 
+ +``` +$ nmtui +``` + +Use the `Down Arrow Mark` to choose the `Set system hostname` option then hit the `Enter` button. +![][3] + +This is old hostname screenshot. +![][4] + +Just remove the olde one and update the new one then hit `OK` button. +![][5] + +It will show you the updated hostname in the screen and simple hit `OK` button to complete it. +![][6] + +Finally hit the `Quit` button to exit from the nmtui terminal. +![][7] + +It’s taking effect without bouncing the below service. However, for safety purpose just restart the systemd-hostnamed service for the changes to take effect. + +``` +$ sudo systemctl restart systemd-hostnamed +``` + +You can view the updated hostname by running the following command. + +``` +$ hostnamectl + Static hostname: daygeek-Y700 + Icon name: computer-laptop + Chassis: laptop + Machine ID: 31bdeb7b83230a2025d43547368d75bc + Boot ID: 267f264c448f000ea5aed47263c6de7f + Operating System: Manjaro Linux + Kernel: Linux 4.19.20-1-MANJARO + Architecture: x86-64 +``` + +### Method-4: Change The HostName Using /etc/hostname File in Linux + +Alternatively, we can change the hostname by modifying the `/etc/hostname` file. But this method +requires server reboot for changes to take effect. + +Check the current hostname using /etc/hostname file. + +``` +$ cat /etc/hostname +daygeek-Y700 +``` + +To change the hostname, simple overwrite the file because it’s contains only the hostname alone. + +``` +$ sudo echo "magi-daygeek" > /etc/hostname + +$ cat /etc/hostname +magi-daygeek +``` + +Reboot the system by running the following command. + +``` +$ sudo init 6 +``` + +Finally verify the updated hostname using /etc/hostname file. + +``` +$ cat /etc/hostname +magi-daygeek +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/four-methods-to-change-the-hostname-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-change-set-hostname/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-1.png +[4]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-2.png +[5]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-3.png +[6]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-4.png +[7]: https://www.2daygeek.com/wp-content/uploads/2019/02/four-methods-to-change-the-hostname-in-linux-5.png diff --git a/sources/tech/20190215 Make websites more readable with a shell script.md b/sources/tech/20190215 Make websites more readable with a shell script.md new file mode 100644 index 0000000000..06b748cfb5 --- /dev/null +++ b/sources/tech/20190215 Make websites more readable with a shell script.md @@ -0,0 +1,258 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Make websites more readable with a shell script) +[#]: via: (https://opensource.com/article/19/2/make-websites-more-readable-shell-script) +[#]: author: (Jim Hall https://opensource.com/users/jim-hall) + +Make websites more readable 
with a shell script +====== +Calculate the contrast ratio between your website's text and background to make sure your site is easy to read. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ) + +If you want people to find your website useful, they need to be able to read it. The colors you choose for your text can affect the readability of your site. Unfortunately, a popular trend in web design is to use low-contrast colors when printing text, such as gray text on a white background. Maybe that looks really cool to the web designer, but it is really hard for many of us to read. + +The W3C provides Web Content Accessibility Guidelines, which includes guidance to help web designers pick text and background colors that can be easily distinguished from each other. This is called the "contrast ratio." The W3C definition of the contrast ratio requires several calculations: given two colors, you first compute the relative luminance of each, then calculate the contrast ratio. The ratio will fall in the range 1 to 21 (typically written 1:1 to 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background is highly visible and has a contrast ratio of 21:1. And white text on a white background is unreadable at a contrast ratio of 1:1. + +The [W3C says body text][1] should have a contrast ratio of at least 4.5:1 with headings at least 3:1. But that seems to be the bare minimum. The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings. + +Calculating the contrast ratio can be a chore, so it's best to automate it. I've done that with this handy Bash script. In general, the script does these things: + + 1. Gets the text color and background color + 2. Computes the relative luminance of each + 3. Calculates the contrast ratio + + + +### Get the colors + +You may know that every color on your monitor can be represented by red, green, and blue (R, G, and B). To calculate the relative luminance of a color, my script will need to know the red, green, and blue components of the color. Ideally, my script would read this information as separate R, G, and B values. Web designers might know the specific RGB code for their favorite colors, but most humans don't know RGB values for the different colors. Instead, most people reference colors by names like "red" or "gold" or "maroon." + +Fortunately, the GNOME [Zenity][2] tool has a color-picker app that lets you use different methods to select a color, then returns the RGB values in a predictable format of "rgb( **R** , **G** , **B** )". Using Zenity makes it easy to get a color value: + +``` +color=$( zenity --title 'Set text color' --color-selection --color='black' ) +``` + +In case the user (accidentally) clicks the Cancel button, the script assumes a color: + +``` +if [ $? -ne 0 ] ; then +        echo '** color canceled .. assume black' +        color='rgb(0,0,0)' +fi +``` + +My script does something similar to set the background color value as **$background**. + +### Compute the relative luminance + +Once you have the foreground color in **$color** and the background color in **$background** , the next step is to compute the relative luminance for each. On its website, the [W3C provides an algorithm][3] to compute the relative luminance of a color. 
+ 

> For the sRGB colorspace, the relative luminance of a color is defined as
> **L = 0.2126 * R + 0.7152 * G + 0.0722 * B** where R, G and B are defined as:
>
> if RsRGB <= 0.03928 then R = RsRGB/12.92
> else R = ((RsRGB+0.055)/1.055) ^ 2.4
>
> if GsRGB <= 0.03928 then G = GsRGB/12.92
> else G = ((GsRGB+0.055)/1.055) ^ 2.4
>
> if BsRGB <= 0.03928 then B = BsRGB/12.92
> else B = ((BsRGB+0.055)/1.055) ^ 2.4
>
> and RsRGB, GsRGB, and BsRGB are defined as:
>
> RsRGB = R8bit/255
>
> GsRGB = G8bit/255
>
> BsRGB = B8bit/255

Since Zenity returns color values in the format "rgb( **R** , **G** , **B** )," the script can easily pull apart the R, B, and G values to compute the relative luminance. AWK makes this a simple task, using the comma as the field separator ( **-F,** ) and using AWK's **substr()** string function to pick just the text we want from the "rgb( **R** , **G** , **B** )" color value:

```
R=$( echo $color | awk -F, '{print substr($1,5)}' )
G=$( echo $color | awk -F, '{print $2}' )
B=$( echo $color | awk -F, '{n=length($3); print substr($3,1,n-1)}' )
```

**(For more on extracting and displaying data with AWK, [Get our AWK cheat sheet][4].)**

Calculating the final relative luminance is best done using the BC calculator. BC supports the simple if-then-else needed in the calculation, which makes this part simple. But since BC cannot directly calculate exponentiation using a non-integer exponent, we need to do some extra math using the natural logarithm instead:

```
echo "scale=4
rsrgb=$R/255
gsrgb=$G/255
bsrgb=$B/255
if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
```

This passes several instructions to BC, including the if-then-else statements that are part of the relative luminance formula. BC then prints the final value.

### Calculate the contrast ratio

With the relative luminance of the text color and the background color, now the script can calculate the contrast ratio. 
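
If you want to sanity-check the luminance math before wiring it into a larger script, the same formula is easy to try out in a few lines of Python. This little cross-check is mine, not part of the article's shell script, and the sample colors are arbitrary:

```python
def relative_luminance(r8, g8, b8):
    """Relative luminance (0.0-1.0) per the W3C formula quoted above."""
    def channel(c8):
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r8) + 0.7152 * channel(g8) + 0.0722 * channel(b8)

print(relative_luminance(255, 255, 255))  # white  -> 1.0
print(relative_luminance(0, 0, 0))        # black  -> 0.0
print(relative_luminance(128, 128, 128))  # gray   -> roughly 0.22
```

White should come out as 1.0 and black as 0.0; if they don't, one of the constants has been mistyped.
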
The [W3C determines the contrast ratio][5] with this formula:

> (L1 + 0.05) / (L2 + 0.05), where
> L1 is the relative luminance of the lighter of the colors, and
> L2 is the relative luminance of the darker of the colors

Given two relative luminance values **$r1** and **$r2** , it's easy to calculate the contrast ratio using the BC calculator:

```
echo "scale=2
if ( $r1 > $r2 ) { l1=$r1; l2=$r2 } else { l1=$r2; l2=$r1 }
(l1 + 0.05) / (l2 + 0.05)" | bc
```

This uses an if-then-else statement to determine which value ( **$r1** or **$r2** ) is the lighter or darker color. BC performs the resulting calculation and prints the result, which the script can store in a variable.

### The final script

With the above, we can pull everything together into a final script. I use Zenity to display the final result in a text box:

```
#!/bin/sh
# script to calculate contrast ratio of colors

# read color and background color:
# zenity returns values like 'rgb(255,140,0)' and 'rgb(255,255,255)'

color=$( zenity --title 'Set text color' --color-selection --color='black' )
if [ $? -ne 0 ] ; then
        echo '** color canceled .. assume black'
        color='rgb(0,0,0)'
fi

background=$( zenity --title 'Set background color' --color-selection --color='white' )
if [ $? -ne 0 ] ; then
        echo '** background canceled .. assume white'
        background='rgb(255,255,255)'
fi

# compute relative luminance:

function luminance()
{
        R=$( echo $1 | awk -F, '{print substr($1,5)}' )
        G=$( echo $1 | awk -F, '{print $2}' )
        B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' )

        echo "scale=4
rsrgb=$R/255
gsrgb=$G/255
bsrgb=$B/255
if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
}

lum1=$( luminance $color )
lum2=$( luminance $background )

# compute contrast

function contrast()
{
        echo "scale=2
if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 }
(l1 + 0.05) / (l2 + 0.05)" | bc
}

rel=$( contrast $lum1 $lum2 )

# print results

( cat< 

If you like Linux related videos, please [subscribe to our YouTube channel][7].

Once you have installed FinalCrypt, you’ll find it in your list of installed applications. Launch it from there.

Upon launch, you will observe two sections (split) for the items to encrypt/decrypt and the other to select the OTP file.

![Using FinalCrypt for encrypting files in Linux][8]

First, you will have to generate an OTP key. 
Here’s how to do that: + +![finalcrypt otp][9] + +Do note that your file name can be anything – but you need to make sure that the key file size is greater or equal to the file you want to encrypt. I find it absurd but that’s how it is. + +![][10] + +After you generate the file, select the key on the right-side of the window and then select the files that you want to encrypt on the left-side of the window. + +You will find the checksum value, key file size, and valid status highlighted after generating the OTP: + +![][11] + +After making the selection, you just need to click on “ **Encrypt** ” to encrypt those files and if already encrypted, then “ **Decrypt** ” to decrypt those. + +![][12] + +You can also use FinalCrypt in command line to automate your encryption job. + +#### How do you secure your OTP key? + +It is easy to encrypt/decrypt the files you want to protect. But, where should you keep your OTP key? + +It is literally useless if you fail to keep your OTP key in a safe storage location. + +Well, one of the best ways would be to use a USB stick specifically for the keys you want to store. Just plug it in when you want to decrypt files and its all good. + +In addition to that, you may save your key on a [cloud service][13], if you consider it secure enough. + +More information about FinalCrypt can be found on its website. + +[FinalCrypt](https://sites.google.com/site/ronuitholland/home/finalcrypt) + +**Wrapping Up** + +It might seem a little overwhelming at the beginning but it is actually a simple and user-friendly encryption program available for Linux. There are other programs to [password protect folders][14] as well if you are interested in some additional reading. + +What do you think about FinalCrypt? Do you happen to know about something similar which is potentially better? Let us know in the comments and we shall take a look at them! 
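
As a closing aside for the curious: the reason the key file must be at least as large as the file you encrypt is that a one-time pad combines every byte of the file with its own byte of random key material. The Python sketch below is only a toy illustration of that general idea using XOR; it is not FinalCrypt's implementation, the file names are made up, and its output is not compatible with FinalCrypt:

```python
import secrets
from pathlib import Path

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of data with the matching pad byte; the same call decrypts."""
    if len(pad) < len(data):
        raise ValueError("The pad must be at least as long as the data")
    return bytes(b ^ p for b, p in zip(data, pad))

plain = Path("notes.txt").read_bytes()         # made-up input file
pad = secrets.token_bytes(len(plain))          # random key, same size as the file
Path("notes.key").write_bytes(pad)             # keep this key somewhere safe
Path("notes.enc").write_bytes(xor_bytes(plain, pad))

# Decrypting is the same operation with the same key:
assert xor_bytes(Path("notes.enc").read_bytes(), pad) == plain
```

As with FinalCrypt itself, the whole scheme stands or falls on keeping that key file safe and never reusing it.
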
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/finalcrypt/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.gnupg.org/ +[2]: https://itsfoss.com/encryptpad-encrypted-text-editor-linux/ +[3]: https://github.com/ron-from-nl/FinalCrypt +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.png?resize=800%2C450&ssl=1 +[5]: https://en.wikipedia.org/wiki/One-time_pad +[6]: https://itsfoss.com/install-deb-files-ubuntu/ +[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.jpg?fit=800%2C439&ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-otp-key.jpg?resize=800%2C443&ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-otp-generate.jpg?ssl=1 +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-key.jpg?fit=800%2C420&ssl=1 +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt-encrypt.jpg?ssl=1 +[13]: https://itsfoss.com/cloud-services-linux/ +[14]: https://itsfoss.com/password-protect-folder-linux/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/finalcrypt.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190217 Install Android 8.1 Oreo on Linux To Run Apps - Games.md b/sources/tech/20190217 Install Android 8.1 Oreo on Linux To Run Apps - Games.md new file mode 100644 index 0000000000..88798037c5 --- /dev/null +++ b/sources/tech/20190217 Install Android 8.1 Oreo on Linux To Run Apps - Games.md @@ -0,0 +1,208 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Install Android 8.1 Oreo on Linux To Run Apps & Games) +[#]: via: (https://fosspost.org/tutorials/install-android-8-1-oreo-on-linux) +[#]: author: (Python Programmer;Open Source Software Enthusiast. Worked On Developing A Lot Of Free Software. The Founder Of Foss Post;Foss Project. Computer Science Major. ) + +Install Android 8.1 Oreo on Linux To Run Apps & Games +====== + +![](https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/android-8.1-oreo-x86-on-linux.png?resize=1237%2C527&ssl=1) + +[android x86][1] is a free and an open source project to port the android system made by Google from the ARM architecture to the x86 architecture, which allow users to run the android system on their desktop machines to enjoy all android functionalities + Apps & games. + +The android x86 project finished porting the android 8.1 Oreo system to the x86 architecture few weeks ago. In this post, we’ll explain how to install it on your Linux system so that you can use your android apps and games any time you want. + +### Installing Android x86 8.1 Oreo on Linux + +#### Preparing the Environment + +First, let’s download the android x86 8.1 Oreo system image. You can download it from [this page][2], just click on the “View” button under the android-x86_64-8.1-r1.iso file. + +We are going to use QEMU to run android x86 on our Linux system. QEMU is a very good emulator software, which is also free and open source, and is available in all the major Linux distributions repositories. 
+ +To install QEMU on Ubuntu/Linux Mint/Debian: + +``` +sudo apt-get install qemu qemu-kvm libvirt-bin +``` + +To install QEMU on Fedora: + +``` +sudo dnf install qemu qemu-kvm +``` + +For other distributions, just search for the qemu and qemu-kvm packages and install them. + +After you have installed QEMU, we’ll need to run the following command to create the android.img file, which will be like some sort of an allocated disk space just for the android system. All android files and system will be inside that image file: + +``` +qemu-img create -f qcow2 android.img 15G +``` + +Here we are saying that we want to allocate a maximum of 15GB for android, but you can change it to any size you want (make sure it’s at least bigger than 5GB). + +Now, to start running the android system for the first time, run: + +``` +sudo qemu-system-x86_64 -m 2048 -boot d -enable-kvm -smp 3 -net nic -net user -hda android.img -cdrom /home/mhsabbagh/android-x86_64-8.1-r1.iso +``` + +Replace /home/mhsabbagh/android-x86_64-8.1-r1.iso with the path of the file that you downloaded from the android x86 website. For explaination of other options we are using here, you may refer to [this article][3]. + +After you run the above command, the android system will start: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 39 android 8.1 oreo on linux][4] + +#### Installing the System + +From this window, choose “Advanced options”, which should lead to the following menu, from which you should choose “Auto_installation” as follows: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 41 android 8.1 oreo on linux][5] + +After that, the installer will just tell you about whether you want to continue or not, choose Yes: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 43 android 8.1 oreo on linux][6] + +And the installation will carry on without any further instructions from you: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 45 android 8.1 oreo on linux][7] + +Finally you’ll receive this message, which indicates that you have successfully installed android 8.1: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 47 android 8.1 oreo on linux][8] + +For now, just close the QEMU window completely. + +#### Booting and Using Android 8.1 Oreo + +Now that the android system is fully installed in your android.img file, you should use the following QEMU command to start it instead of the previous one: + +``` +sudo qemu-system-x86_64 -m 2048 -boot d -enable-kvm -smp 3 -net nic -net user -hda android.img +``` + +Notice that all we did was that we just removed the -cdrom option and its argument. This is to tell QEMU that we no longer want to boot from the ISO file that we downloaded, but from the installed android system. + +You should see the android booting menu now: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 49 android 8.1 oreo on linux][9] + +Then you’ll be taken to the first preparation wizard, choose your language and continue: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 51 android 8.1 oreo on linux][10] + +From here, choose the “Set up as new” option: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 53 android 8.1 oreo on linux][11] + +Then android will ask you about if you want to login to your current Google account. 
This step is optional, but important so that you can use the Play Store later: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 55 android 8.1 oreo on linux][12] + +Then you’ll need to accept the terms and conditions: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 57 android 8.1 oreo on linux][13] + +Now you can choose your current timezone: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 59 android 8.1 oreo on linux][14] + +The system will ask you now if you want to enable any data collection features. If I were you, I’d simply turn them all off like that: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 61 android 8.1 oreo on linux][15] + +Finally, you’ll have 2 launcher types to choose from, I recommend that you choose the Launcher3 option and make it the default: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 63 android 8.1 oreo on linux][16] + +Then you’ll see your fully-working android system home screen: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 65 android 8.1 oreo on linux][17] + +From here now, you can do all the tasks you want; You can use the built-in android apps, or you may browse the settings of your system to adjust it however you like. You may change look and feeling of your system, or you can run Chrome for example: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 67 android 8.1 oreo on linux][18] + +You may start installing some apps like WhatsApp and others from the Google Play store for your own use: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 69 android 8.1 oreo on linux][19] + +You can now do whatever you want with your system. Congratulations! + +### How to Easily Run Android 8.1 Oreo Later + +We don’t want to always have to open the terminal window and write that long QEMU command to run the android system, but we want to run it in just 1 click whenever we need that. + +To do this, we’ll create a new file under /usr/share/applications called android.desktop with the following command: + +``` +sudo nano /usr/share/applications/android.desktop +``` + +And paste the following contents inside it (Right click and then paste): + +``` +[Desktop Entry] +Name=Android 8.1 +Comment=Run Android 8.1 Oreo on Linux using QEMU +Icon=phone +Exec=bash -c 'pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY qemu-system-x86_64 -m 2048 -boot d -enable-kvm -smp 3 -net nic -net user -hda /home/mhsabbagh/android.img' +Terminal=false +Type=Application +StartupNotify=true +Categories=GTK; +``` + +Again, you have to replace /home/mhsabbagh/android.img with the path to the local image on your system. Then save the file (Ctrl + X, then press Y, then Enter). + +Notice that we needed to use “pkexec” to run QEMU with root privileges because starting from newer versions, accessing to the KVM technology via libvirt is not allowed for normal users; That’s why it will ask you for the root password each time. + +Now, you’ll see the android icon in the applications menu all the time, you can simply click it any time you want to use android and the QEMU program will start: + +![Install Android 8.1 Oreo on Linux To Run Apps & Games 71 android 8.1 oreo on linux][20] + +### Conclusion + +We showed you how install and run android 8.1 Oreo on your Linux system. From now on, it should be much easier on you to do your android-based tasks without some other software like Blutsticks and similar methods. 
Here, you have a fully-working and functional android system that you can manipulate however you like, and if anything goes wrong, you can simply nuke the image file and run the installation all over again any time you want. + +Have you tried android x86 before? How was your experience with it? + + +-------------------------------------------------------------------------------- + +via: https://fosspost.org/tutorials/install-android-8-1-oreo-on-linux + +作者:[Python Programmer;Open Source Software Enthusiast. Worked On Developing A Lot Of Free Software. The Founder Of Foss Post;Foss Project. Computer Science Major.][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: http://www.android-x86.org/ +[2]: http://www.android-x86.org/download +[3]: https://fosspost.org/tutorials/use-qemu-test-operating-systems-distributions +[4]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-16.png?resize=694%2C548&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 40 android 8.1 oreo on linux) +[5]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-15.png?resize=673%2C537&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 42 android 8.1 oreo on linux) +[6]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-14.png?resize=769%2C469&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 44 android 8.1 oreo on linux) +[7]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-13.png?resize=767%2C466&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 46 android 8.1 oreo on linux) +[8]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-12.png?resize=750%2C460&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 48 android 8.1 oreo on linux) +[9]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-11.png?resize=754%2C456&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 50 android 8.1 oreo on linux) +[10]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-10.png?resize=850%2C559&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 52 android 8.1 oreo on linux) +[11]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-09.png?resize=850%2C569&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 54 android 8.1 oreo on linux) +[12]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-08.png?resize=850%2C562&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 56 android 8.1 oreo on linux) +[13]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-07-1.png?resize=850%2C561&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 58 android 8.1 oreo on linux) +[14]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-06.png?resize=850%2C569&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 60 android 8.1 oreo on linux) +[15]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-05.png?resize=850%2C559&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 62 android 8.1 oreo on linux) +[16]: 
https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-04.png?resize=850%2C553&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 64 android 8.1 oreo on linux) +[17]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-03.png?resize=850%2C571&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 66 android 8.1 oreo on linux) +[18]: https://i1.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-02.png?resize=850%2C555&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 68 android 8.1 oreo on linux) +[19]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/Android-8.1-Oreo-on-Linux-01.png?resize=850%2C557&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 70 android 8.1 oreo on linux) +[20]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-17-1539.png?resize=850%2C557&ssl=1 (Install Android 8.1 Oreo on Linux To Run Apps & Games 72 android 8.1 oreo on linux) diff --git a/sources/tech/20190218 Emoji-Log- A new way to write Git commit messages.md b/sources/tech/20190218 Emoji-Log- A new way to write Git commit messages.md new file mode 100644 index 0000000000..e821337a60 --- /dev/null +++ b/sources/tech/20190218 Emoji-Log- A new way to write Git commit messages.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Emoji-Log: A new way to write Git commit messages) +[#]: via: (https://opensource.com/article/19/2/emoji-log-git-commit-messages) +[#]: author: (Ahmad Awais https://opensource.com/users/mrahmadawais) + +Emoji-Log: A new way to write Git commit messages +====== +Add context to your commits with Emoji-Log. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/emoji_tech_keyboard.jpg?itok=ncBNKZFl) + +I'm a full-time open source developer—or, as I like to call it, an 🎩 open sourcerer. I've been working with open source software for over a decade and [built hundreds][1] of open source software applications. + +I also am a big fan of the Don't Repeat Yourself (DRY) philosophy and believe writing better Git commit messages—ones that are contextual enough to serve as a changelog for your open source software—is an important component of DRY. One of the many workflows I've written is [Emoji-Log][2], a straightforward, open source Git commit log standard. It improves the developer experience (DX) by using emoji to create better Git commit messages. + +I've used Emoji-Log while building the [VSCode Tips & Tricks repo][3], my 🦄 [Shades of Purple VSCode theme repo][4], and even an [automatic changelog][5] that looks beautiful. + +### Emoji-Log's philosophy + +I like emoji (which is, in fact, the plural of emoji). I like 'em a lot. Programming, code, geeks/nerds, open source… all of that is inherently dull and sometimes boring. Emoji help me add colors and emotions to the mix. There's nothing wrong with wanting to attach feelings to the 2D, flat, text-based world of code. + +Instead of memorizing [hundreds of emoji][6], I've learned it's better to keep the categories small and general. Here's the philosophy that guides writing commit messages with Emoji-Log: + + 1. **Imperative** + * Make your Git commit messages imperative. + * Write commit message like you're giving an order. + * e.g., Use ✅ **Add** instead of ❌ **Added** + * e.g., Use ✅ **Create** instead of ❌ **Creating** + 2. 
**Rules** + * A small number of categories are easy to memorize. + * Nothing more, nothing less + * e.g. **📦 NEW** , **👌 IMPROVE** , **🐛 FIX** , **📖 DOC** , **🚀 RELEASE** , and **✅ TEST** + 3. **Actions** + * Make Git commits based on actions you take. + * Use a good editor like [VSCode][7] to commit the right files with commit messages. + + + +### Writing commit messages + +Use only the following Git commit messages. The simple and small footprint is the key to Emoji-Logo. + + 1. **📦 NEW: IMPERATIVE_MESSAGE** + * Use when you add something entirely new. + * e.g., **📦 NEW: Add Git ignore file** + 2. **👌 IMPROVE: IMPERATIVE_MESSAGE** + * Use when you improve/enhance piece of code like refactoring etc. + * e.g., **👌 IMPROVE: Remote IP API Function** + 3. **🐛 FIX: IMPERATIVE_MESSAGE** + * Use when you fix a bug. Need I say more? + * e.g., **🐛 FIX: Case converter** + 4. **📖 DOC: IMPERATIVE_MESSAGE** + * Use when you add documentation, like README.md or even inline docs. + * e.g., **📖 DOC: API Interface Tutorial** + 5. **🚀 RELEASE: IMPERATIVE_MESSAGE** + * Use when you release a new version. e.g., **🚀 RELEASE: Version 2.0.0** + 6. **✅ TEST: IMPERATIVE_MESSAGE** + * Use when you release a new version. + * e.g., **✅ TEST: Mock User Login/Logout** + + + +That's it for now. Nothing more, nothing less. + +### Emoji-Log functions + +For quick prototyping, I have made the following functions that you can add to your **.bashrc** / **.zshrc** files to use Emoji-Log quickly. + +``` +#.# Better Git Logs. + +### Using EMOJI-LOG (https://github.com/ahmadawais/Emoji-Log). + + + +# Git Commit, Add all and Push — in one step. + +function gcap() { +    git add . && git commit -m "$*" && git push +} + +# NEW. +function gnew() { +    gcap "📦 NEW: $@" +} + +# IMPROVE. +function gimp() { +    gcap "👌 IMPROVE: $@" +} + +# FIX. +function gfix() { +    gcap "🐛 FIX: $@" +} + +# RELEASE. +function grlz() { +    gcap "🚀 RELEASE: $@" +} + +# DOC. +function gdoc() { +    gcap "📖 DOC: $@" +} + +# TEST. +function gtst() { +    gcap "✅ TEST: $@" +} +``` + +To install these functions for the [fish shell][8], run the following commands: + +``` +function gcap; git add .; and git commit -m "$argv"; and git push; end; +function gnew; gcap "📦 NEW: $argv"; end +function gimp; gcap "👌 IMPROVE: $argv"; end; +function gfix; gcap "🐛 FIX: $argv"; end; +function grlz; gcap "🚀 RELEASE: $argv"; end; +function gdoc; gcap "📖 DOC: $argv"; end; +function gtst; gcap "✅ TEST: $argv"; end; +funcsave gcap +funcsave gnew +funcsave gimp +funcsave gfix +funcsave grlz +funcsave gdoc +funcsave gtst +``` + +If you prefer, you can paste these aliases directly in your **~/.gitconfig** file: + +``` +# Git Commit, Add all and Push — in one step. +cap = "!f() { git add .; git commit -m \"$@\"; git push; }; f" + +# NEW. +new = "!f() { git cap \"📦 NEW: $@\"; }; f" +# IMPROVE. +imp = "!f() { git cap \"👌 IMPROVE: $@\"; }; f" +# FIX. +fix = "!f() { git cap \"🐛 FIX: $@\"; }; f" +# RELEASE. +rlz = "!f() { git cap \"🚀 RELEASE: $@\"; }; f" +# DOC. +doc = "!f() { git cap \"📖 DOC: $@\"; }; f" +# TEST. 
+tst = "!f() { git cap \"✅ TEST: $@\"; }; f" +``` + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/emoji-log-git-commit-messages + +作者:[Ahmad Awais][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mrahmadawais +[b]: https://github.com/lujun9972 +[1]: https://github.com/ahmadawais +[2]: https://github.com/ahmadawais/Emoji-Log/ +[3]: https://github.com/ahmadawais/VSCode-Tips-Tricks +[4]: https://github.com/ahmadawais/shades-of-purple-vscode/commits/master +[5]: https://github.com/ahmadawais/shades-of-purple-vscode/blob/master/CHANGELOG.md +[6]: https://gitmoji.carloscuesta.me/ +[7]: https://VSCode.pro +[8]: https://en.wikipedia.org/wiki/Friendly_interactive_shell diff --git a/sources/tech/20190218 How To Restore Sudo Privileges To A User.md b/sources/tech/20190218 How To Restore Sudo Privileges To A User.md new file mode 100644 index 0000000000..8e6f6db66f --- /dev/null +++ b/sources/tech/20190218 How To Restore Sudo Privileges To A User.md @@ -0,0 +1,194 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Restore Sudo Privileges To A User) +[#]: via: (https://www.ostechnix.com/how-to-restore-sudo-privileges-to-a-user/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +How To Restore Sudo Privileges To A User +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/restore-sudo-privileges-720x340.png) + +The other day I was testing how to [**add a regular user to sudo group and remove the given privileges**][1] to make him as a normal user again on Ubuntu. While testing, I removed my administrative user from the **‘sudo’ group**. As you already know, a user should be in sudo group to do any administrative tasks. But, I had only one super user and I already took out his sudo privileges. Whenever I run a command with sudo prefix, I encountered an error – “ **sk is not in the sudoers file. This incident will be reported** “. I can’t do any administrative tasks. I couldn’t switch to root user using ‘sudo su’ command. As you know already, root user is disabled by default in Ubuntu, so I can’t log in as root user either. Have you ever been in a situation like this? No worries! This brief tutorial explains how to restore sudo privileges to a user on Linux. I tested this on Ubuntu 18.04 system, but it might work on other Linux distributions as well. + +### Restore Sudo Privileges + +Boot your Linux system into recovery mode. + +To do so, restart your system and press and hold the **SHIFT** key while booting. You will see the grub boot menu. Choose **“Advanced options for Ubuntu”** from the boot menu list. + +![][3] + +In the next screen, choose **“recovery mode”** option and hit ENTER: + +![][4] + +Next, choose **“Drop to root shell prompt”** option and hit ENTER key: + +![][5] + +You’re now in recovery mode as root user. + +![][6] + +Type the following command to mount root (/) file system in read/write mode. + +``` +mount -o remount,rw / +``` + +Now, add the user that you removed from the sudo group. + +In my case, I am adding the user called ‘sk’ to the sudo group using the following command: + +``` +adduser sk sudo +``` + +![][7] + +Then, type **exit** to return back to the recovery menu. Select **Resume** to start your Ubuntu system. 
+ +![][8] + +Press ENTER to continue to log-in normal mode: + +![][9] + +Now check if the sudo privileges have been restored. + +To do so, type the following command from the Terminal. + +``` +$ sudo -l -U sk +``` + +Sample output: + +``` +[sudo] password for sk: +Matching Defaults entries for sk on ubuntuserver: +env_reset, mail_badpass, +secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin + +User sk may run the following commands on ubuntuserver: +(ALL : ALL) ALL +``` + +As you see in the above message, the user sk can run all commands with sudo prefix. Congratulations! You have successfully restored the sudo privileges to the user. + +#### There are also other possibilities for causing broken sudo + +Please note that I actually did it on purpose. I removed myself from the sudo group and fixed the broken sudo privileges as described above. Don’t do this if you have only one sudo user. And, this method will work only on systems that you have physical access. If it is remote server or vps, it is very difficult to fix it. You might require your hosting provider’s help. + +Also, there are two other possibilities for causing broken sudo. + + * The /etc/sudoers file might have been altered. + * You or someone might have changed the permission of /etc/sudoers file. + + + +If you have done any one or all of the above mentioned things and ended up with broken sudo, try the following solutions. + +**Solution 1:** + +If you have altered the contents of /etc/sudoers file, go to the recovery mode as described earlier. + +Backup the existing /etc/sudoers file before making any changes. + +``` +cp /etc/sudoers /etc/sudoers.bak +``` + +Then, open /etc/sudoers file: + +``` +visudo +``` + +Make the changes in the file to look like this: + +``` +# +# This file MUST be edited with the 'visudo' command as root. +# +# Please consider adding local content in /etc/sudoers.d/ instead of +# directly modifying this file. +# +# See the man page for details on how to write a sudoers file. +# +Defaults env_reset +Defaults mail_badpass +Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin" + +# Host alias specification + +# User alias specification + +# Cmnd alias specification + +# User privilege specification +root ALL=(ALL:ALL) ALL + +# Members of the admin group may gain root privileges +%admin ALL=(ALL) ALL + +# Allow members of group sudo to execute any command +%sudo ALL=(ALL:ALL) ALL + +# See sudoers(5) for more information on "#include" directives: + +#includedir /etc/sudoers.d +``` + +Once you modified the contents to reflect like this, press **CTRL+X** and **y** save and close the file. + +Finally, type ‘exit’ and select **Resume** to start your Ubuntu system to exit from the recovery mode and continue booting as normal user. + +Now, try to use run any command with sudo prefix to verify if the sudo privileges are restored. + +**Solution 2:** + +If you changed the permission of the /etc/sudoers file, this method will fix the broken sudo issue. + +From the recovery mode, run the following command to set the correct permission to /etc/sudoers file: + +``` +chmod 0440 /etc/sudoers +``` + +Once you set the proper permission to the file, type ‘exit’ and select **Resume** to start your Ubuntu system in normal mode. Finally, verify if you can able to run any sudo command. + +**Suggested read:** + +And, that’s all for now. Hope this was useful . More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-restore-sudo-privileges-to-a-user/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/how-to-grant-and-remove-sudo-privileges-to-users-on-ubuntu/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-1.png +[4]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-2.png +[5]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-3.png +[6]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-4.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-5-1.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-6.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/02/fix-broken-sudo-7.png diff --git a/sources/tech/20190218 SPEED TEST- x86 vs. ARM for Web Crawling in Python.md b/sources/tech/20190218 SPEED TEST- x86 vs. ARM for Web Crawling in Python.md new file mode 100644 index 0000000000..86b5230d2d --- /dev/null +++ b/sources/tech/20190218 SPEED TEST- x86 vs. ARM for Web Crawling in Python.md @@ -0,0 +1,533 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (SPEED TEST: x86 vs. ARM for Web Crawling in Python) +[#]: via: (https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-in-python/) +[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/) + +SPEED TEST: x86 vs. ARM for Web Crawling in Python +====== + +![][1] + +Can you imagine if your job was to trawl competitor websites and jot prices down by hand, again and again and again? You’d burn your whole office down by lunchtime. + +So, little wonder web crawlers are huge these days. They can keep track of customer sentiment and trending topics, monitor job openings, real estate transactions, UFC results, all sorts of stuff. + +For those of a certain bent, this is fascinating stuff. Which is how I found myself playing around with [Scrapy][2], an open source web crawling framework written in Python. + +Being wary of the potential to do something catastrophic to my computer while poking with things I didn’t understand, I decided to install it on my main machine but a Raspberry Pi. + +And wouldn’t you know it? It actually didn’t run too shabby on the little tacker. Maybe this is a good use case for an ARM server? + +Google had no solid answer. The nearest thing I found was [this Drupal hosting drag race][3], which showed an ARM server outperforming a much more expensive x86 based account. + +That was definitely interesting. I mean, isn’t a web server kind of like a crawler in reverse? But with one operating on a LAMP stack and the other on a Python interpreter, it’s hardly the exact same thing. + +So what could I do? Only one thing. Get some VPS accounts and make them race each other. + +### What’s the Deal With ARM Processors? + +ARM is now the most popular CPU architecture in the world. + +But it’s generally seen as something you’d opt for to save money and battery life, rather than a serious workhorse. 
+ +It wasn’t always that way: this CPU was designed in Cambridge, England to power the fiendishly expensive [Acorn Archimedes][4]. This was the most powerful desktop computer in the world, and by a long way too: it was multiple times the speed of the fastest 386. + +Acorn, like Commodore and Atari, somewhat ignorantly believed that the making of a great computer company was in the making of great computers. Bill Gates had a better idea. He got DOS on as many x86 machines – of the most widely varying quality and expense – as he could. + +Having the best user base made you the obvious platform for third party developers to write software for; having all the software support made yours the most useful computer. + +Even Apple nearly bit the dust. All the $$$$ were in building a better x86 chip, this was the architecture that ended up being developed for serious computing. + +That wasn’t the end for ARM though. Their chips weren’t just fast, they could run well without drawing much power or emitting much heat. That made them a preferred technology in set top boxes, PDAs, digital cameras, MP3 players, and basically anything that either used a battery or where you’d just rather avoid the noise of a large fan. + +So it was that Acorn spun off ARM, who began an idiosyncratic business model that continues to today: ARM doesn’t actually manufacture any chips, they license their intellectual property to others who do. + +Which is more or less how they ended up in so many phones and tablets. When Linux was ported to the architecture, the door opened to other open source technologies, which is how we can run a web crawler on these chips today. + +#### ARM in the Server Room + +Some big names, like [Microsoft][5] and [Cloudflare][6], have placed heavy bets on the British Bulldog for their infrastructure. But for those of us with more modest budgets, the options are fairly sparse. + +In fact, when it comes to cheap and cheerful VPS accounts that you can stick on the credit card for a few bucks a month, for years the only option was [Scaleway][7]. + +This changed a few months ago when public cloud heavyweight [AWS][8] launched its own ARM processor: the [AWS Graviton][9]. + +I decided to grab one of each, and race them against the most similar Intel offering from the same provider. + +### Looking Under the Hood + +So what are we actually racing here? Let’s jump right in. + +#### Scaleway + +Scaleway positions itself as “designed for developers”. And you know what? I think that’s fair enough: it’s definitely been a good little sandbox for developing and prototyping. + +The dirt simple product offering and clean, easy dashboard guides you from home page to bash shell in minutes. That makes it a strong option for small businesses, freelancers and consultants who just want to get straight into a good VPS at a great price to run some crawls. + +The ARM account we will be using is their [ARM64-2GB][10], which costs 3 euros a month and has 4 Cavium ThunderX cores. This launched in 2014 as the first server-class ARMv8 processor, but is now looking a bit middle-aged, having been superseded by the younger, prettier ThunderX2. + +The x86 account we will be comparing it to is the [1-S][11], which costs a more princely 4 euros a month and has 2 Intel Atom C3995 cores. Intel’s Atom range is a low power single-threaded system on chip design, first built for laptops and then adapted for server use. 
+ +These accounts are otherwise fairly similar: they each have 2 gigabytes of memory, 50 gigabytes of SSD storage and 200 Mbit/s bandwidth. The disk drives possibly differ, but with the crawls we’re going to run here, this won’t come into play, we’re going to be doing everything in memory. + +When I can’t use a package manager I’m familiar with, I become angry and confused, a bit like an autistic toddler without his security blanket, entirely beyond reasoning or consolation, it’s quite horrendous really, so both of these accounts will use Debian Stretch. + +#### Amazon Web Services + +In the same length of time as it takes you to give Scaleway your credit card details, launch a VPS, add a sudo user and start installing dependencies, you won’t even have gotten as far as registering your AWS account. You’ll still be reading through the product pages trying to figure out what’s going on. + +There’s a serious breadth and depth here aimed at enterprises and others with complicated or specialised needs. + +The AWS Graviton we wanna drag race is part of AWS’s “Elastic Compute Cloud” or EC2 range. I’ll be running it as an on-demand instance, which is the most convenient and expensive way to use EC2. AWS also operates a [spot market][12], where you get the server much cheaper if you can be flexible about when it runs. There’s also a [mid-priced option][13] if you want to run it 24/7. + +Did I mention that AWS is complicated? Anyhoo.. + +The two accounts we’re comparing are [a1.medium][14] and [t2.small][15]. They both offer 2GB of RAM. Which begs the question: WTF is a vCPU? Confusingly, it’s a different thing on each account. + +On the a1.medium account, a vCPU is a single core of the new AWS Graviton chip. This was built by Annapurna Labs, an Israeli chip maker bought by Amazon in 2015. This is a single-threaded 64-bit ARMv8 core exclusive to AWS. This has an on-demand price of 0.0255 US dollars per hour. + +Our t2.small account runs on an Intel Xeon – though exactly which Xeon chip it is, I couldn’t really figure out. This has two threads per core – though we’re not really getting the whole core, or even the whole thread. + +Instead we’re getting a “baseline performance of 20%, with the ability to burst above that baseline using CPU credits”. Which makes sense in principle, though it’s completely unclear to me what to actually expect from this. The on-demand price for this account is 0.023 US dollars per hour. + +I couldn’t find Debian in the image library here, so both of these accounts will run Ubuntu 18.04. + +### Beavis and Butthead Do Moz’s Top 500 + +To test these VPS accounts, I need a crawler to run – one that will let the CPU stretch its legs a bit. One way to do this would be to just hammer a few websites with as many requests as fast as possible, but that’s not very polite. What we’ll do instead is a broad crawl of many websites at once. + +So it’s in tribute to my favourite physicist turned filmmaker, Mike Judge, that I wrote beavis.py. This crawls Moz’s Top 500 Websites to a depth of 3 pages to count how many times the words “wood” and “ass” occur anywhere within the HTML source. + +Not all 500 websites will actually get crawled here – some will be excluded by robots.txt and others will require javascript to follow links and so on. But it’s a wide enough crawl to keep the CPU busy. + +Python’s [global interpreter lock][16] means that beavis.py can only make use of a single CPU thread. To test multi-threaded we’re going to have to launch multiple spiders as seperate processes. 
This is why I wrote butthead.py. Any true fan of the show knows that, as crude as Butthead was, he was always slightly more sophisticated than Beavis.

Splitting the crawl into multiple lists of start pages and allowed domains might slightly impact what gets crawled – fewer external links to other websites in the top 500 will get followed. But every crawl will be different anyway, so we will count how many pages are scraped as well as how long they take.

### Installing Scrapy on an ARM Server

Installing Scrapy is basically the same on each architecture. You install pip and various other dependencies, then install Scrapy from pip.

Installing Scrapy from pip to an ARM device does take noticeably longer though. I’m guessing this is because it has to compile the binary parts from source.

Once Scrapy is installed, I ran it from the shell to check that it’s fetching pages.

On Scaleway’s ARM account, there seemed to be a hitch with the service_identity module: it was installed but not working. This issue had come up on the Raspberry Pi as well, but not the AWS Graviton.

Not to worry, this was easily fixed with the following command:

```
sudo pip3 install service_identity --force --upgrade
```

Then we were off and racing!

### Single Threaded Crawls

The Scrapy docs say to try to [keep your crawls running between 80-90% CPU usage][17]. In practice, it’s hard – at least it is with the script I’ve written. What tends to happen is that the CPU gets very busy early in the crawl, drops a little bit and then rallies again.

The last part of the crawl, where most of the domains have been finished, can go on for quite a few minutes, which is frustrating, because at that point it feels more like a measure of how big the last website is than anything to do with the processor.

So please take this for what it is: not a state of the art benchmarking tool, but a short and slightly balding Australian in his underpants running some scripts and watching what happens.

So let’s get down to brass tacks. We’ll start with the Scaleway crawls.

| VPS Account | Time | Pages Scraped | Pages/Hour | €/Million Pages |
| ------------------ | ----------- | ------------- | ---------- | --------------- |
| Scaleway ARM64-2GB | 108m 59.27s | 38,205 | 21,032.623 | 0.28527 |
| Scaleway 1-S | 97m 44.067s | 39,476 | 24,324.648 | 0.33011 |

I kept an eye on the CPU use of both of these crawls using [top][18]. Both crawls hit 100% CPU use at the beginning, but the ThunderX chip was definitely redlining a lot more. That means these figures understate how much faster the Atom core crawls than the ThunderX.

While I was watching CPU use in top, I could also see how much RAM was in use – this increased as the crawl continued. The ARM account used 14.7% at the end of the crawl, while the x86 was at 15%.

Watching the logs of these crawls, I also noticed a lot more pages timing out and going missing when the processor was maxed out. That makes sense – if the CPU’s too busy to respond to everything then something’s gonna go missing.

That’s not such a big deal when you’re just racing the things to see which is fastest. But in a real-world situation, with business outcomes at stake in the quality of your data, it’s probably worth having a little bit of headroom.

And what about AWS? 

| VPS Account | Time | Pages Scraped | Pages / Hour | $ / Million Pages |
| ----------- | ---- | ------------- | ------------ | ----------------- |
| a1.medium | 100m 39.900s | 41,294 | 24,612.725 | 1.03605 |
| t2.small | 78m 53.171s | 41,200 | 31,336.286 | 0.73397 |

I’ve included these results for the sake of comparison with the Scaleway crawls, but these crawls were kind of a bust. Monitoring the CPU use – this time through the AWS dashboard rather than through top – showed that the script wasn’t making good use of the available processing power on either account.

This was clearest with the a1.medium account – it hardly even got out of bed. It peaked at about 45% near the beginning and then bounced around between 20% and 30% for the rest.

What’s intriguing to me about this is that the exact same script ran much slower on the ARM processor – and that’s not because it hit a limit of the Graviton’s CPU power. It had oodles of headroom left. Even the Intel Atom core managed to finish, and that was maxing out for some of the crawl. The settings were the same in the code; they were just being handled differently on the different architecture.

It’s a bit of a black box to me whether that’s something inherent to the processor itself, the way the binaries were compiled, or some interaction between the two. I’m going to speculate that we might have seen the same thing on the Scaleway ARM VPS, if we hadn’t hit the limit of the CPU core’s processing power first.

It was harder to know how the t2.small account was doing. The crawl sat at about 20%, sometimes going as high as 35%. Was that what was meant by “baseline performance of 20%, with the ability to burst to a higher level”? I had no idea. But I could see on the dashboard I wasn’t burning through any CPU credits.

Just to make extra sure, I installed [stress][19] and ran it for a few minutes; sure enough, this thing could do 100% if you pushed it.

Clearly, I was going to need to crank the settings up on both these processors to make them sweat a bit, so I set CONCURRENT_REQUESTS to 5000 and REACTOR_THREADPOOL_MAXSIZE to 120 and ran some more crawls.

| VPS Account | Time | Pages Scraped | Pages/hr | $/10000 Pages |
| ----------- | ---- | ------------- | -------- | ------------- |
| a1.medium | 46m 13.619s | 40,283 | 52,285.047 | 0.48771 |
| t2.small | 41m 7.619s | 36,241 | 52,871.857 | 0.43501 |
| t2.small (No CPU credits) | 73m 8.133s | 34,298 | 28,137.8891 | 0.81740 |

The a1 instance hit 100% usage about 5 minutes into the crawl, before dropping back to 80% use for another 20 minutes, climbing up to 96% again and then dropping down again as it was wrapping things up. That was probably about as well-tuned as I was going to get it.

The t2 instance hit 50% early in the crawl and stayed there until it was nearly done. With 2 threads per core, 50% CPU use is one thread maxed out.

Here we see both accounts produce similar speeds. But the Xeon thread was redlining for most of the crawl, and the Graviton was not. I’m going to chalk this up as a slight win for the Graviton.

But what about once you’ve burnt through all your CPU credits? That’s probably the fairer comparison – to only use them as you earn them. I wanted to test that as well. So I ran stress until all the CPU credits were exhausted and ran the crawl again.

With no credits in the bank, the CPU usage maxed out at 27% and stayed there. So many pages ended up going missing that it actually performed worse than when on the lower settings. 
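
For reference, the "cranked up" runs described above amount to raising two values in the spider's custom_settings. Here is a minimal sketch of what that looks like; it is not the exact spider used for these runs (those listings are at the end of the article), just an illustration of the two settings being bumped, with a placeholder start page:

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class CrankedSpider(CrawlSpider):
    """Toy spider showing only the settings bump discussed above."""
    name = "cranked"
    start_urls = ["http://example.com"]  # placeholder, not the Moz top 500 list
    custom_settings = {
        "CONCURRENT_REQUESTS": 5000,        # raised from 1500 in beavis.py
        "REACTOR_THREADPOOL_MAXSIZE": 120,  # raised from 60 in beavis.py
        "LOG_LEVEL": "INFO",
    }
    rules = (Rule(LinkExtractor(), callback="parse_page"),)

    def parse_page(self, response):
        yield {"url": response.url}
```

The same two keys could also be set project-wide in settings.py rather than per spider.
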
+ +### Multi Threaded Crawls + +Dividing our crawl up between multiple spiders in separate processes offers a few more options to make use of the available cores. + +I first tried dividing everything up between 10 processes and launching them all at once. This turned out to be slower than just dividing them up into 1 process per core. + +I got the best result by combining these methods – dividing the crawl up into 10 processes and then launching 1 process per core at the start and then the rest as these crawls began to wind down. + +To make this even better, you could try to minimise the problem of the last lingering crawler by making sure the longest crawls start first. I actually attempted to do this. + +Figuring that the number of links on the home page might be a rough proxy for how large the crawl would be, I built a second spider to count them and then sort them in descending order of number of outgoing links. This preprocessing worked well and added a little over a minute. + +It turned out though that blew the crawling time out beyond two hours! Putting all the most link heavy websites together in the same process wasn’t a great idea after all. + +You might effectively deal with this by tweaking the number of domains per process as well – or by shuffling the list after it’s ordered. That’s a bit much for Beavis and Butthead though. + +So I went back to my earlier method that had worked somewhat well: + +| VPS Account | Time | Pages Scraped | Pages/hr | €/10,000 pages | +| ----------- | ---- | ------------- | -------- | -------------- | +| Scaleway ARM64-2GB | 62m 10.078s | 36,158 | 34,897.0719 | 0.17193 | +| Scaleway 1-S | 60m 56.902s | 36,725 | 36,153.5529 | 0.22128 | + +After all that, using more cores did speed up the crawl. But it’s hardly a matter of just halving or quartering the time taken. + +I’m certain that a more experienced coder could better optimise this to take advantage of all the cores. But, as far as “out of the box” Scrapy performance goes, it seems to be a lot easier to speed up a crawl by using faster threads rather than by throwing more cores at it. + +As it is, the Atom has scraped slightly more pages in slightly less time. On a value for money metric, you could possibly say that the ThunderX is ahead. Either way, there’s not a lot of difference here. + +### Everything You Always Wanted to Know About Ass and Wood (But Were Afraid to Ask) + +After scraping 38,205 pages, our crawler found 24,170,435 mentions of ass and 54,368 mentions of wood. + +![][20] + +Considered on its own, this is a respectable amount of wood. + +But when you set it against the sheer quantity of ass we’re dealing with here, the wood looks miniscule. + +### The Verdict + +From what’s visible to me at the moment, it looks like the CPU architecture you use is actually less important than how old the processor is. The AWS Graviton from 2018 was the winner here in single-threaded performance. + +You could of course argue that the Xeon still wins, core for core. But then you’re not really going dollar for dollar anymore, or even thread for thread. + +The Atom from 2017, on the other hand, comfortably bested the ThunderX from 2014. Though, on the value for money metric, the ThunderX might be the clear winner. Then again, if you can run your crawls on Amazon’s spot market, the Graviton is still ahead. + +All in all, I think this shows that, yes, you can crawl the web with an ARM device, and it can compete on both performance and price. 
+ +Whether the difference is significant enough for you to turn what you’re doing upside down is a whole other question of course. Certainly, if you’re already on the AWS cloud – and your code is portable enough – then it might be worthwhile testing out their a1 instances. + +Hopefully we will see more ARM options on the public cloud in near future. + +### The Scripts + +This is my first real go at doing anything in either Python or Scrapy. So this might not be great code to learn from. Some of what I’ve done here – such as using global variables – is definitely a bit kludgey. + +Still, I want to be transparent about my methods, so here are my scripts. + +To run them, you’ll need Scrapy installed and you will need the CSV file of [Moz’s top 500 domains][21]. To run butthead.py you will also need [psutil][22]. + +##### beavis.py + +``` +import scrapy +from scrapy.spiders import CrawlSpider, Rule +from scrapy.linkextractors import LinkExtractor +from scrapy.crawler import CrawlerProcess + +ass = 0 +wood = 0 +totalpages = 0 + +def getdomains(): + + moz500file = open('top500.domains.05.18.csv') + + domains = [] + moz500csv = moz500file.readlines() + + del moz500csv[0] + + for csvline in moz500csv: + leftquote = csvline.find('"') + rightquote = leftquote + csvline[leftquote + 1:].find('"') + domains.append(csvline[leftquote + 1:rightquote]) + + return domains + +def getstartpages(domains): + + startpages = [] + + for domain in domains: + startpages.append('http://' + domain) + + return startpages + +class AssWoodItem(scrapy.Item): + ass = scrapy.Field() + wood = scrapy.Field() + url = scrapy.Field() + +class AssWoodPipeline(object): + def __init__(self): + self.asswoodstats = [] + + def process_item(self, item, spider): + self.asswoodstats.append((item.get('url'), item.get('ass'), item.get('wood'))) + + def close_spider(self, spider): + asstally, woodtally = 0, 0 + + for asswoodcount in self.asswoodstats: + asstally += asswoodcount[1] + woodtally += asswoodcount[2] + + global ass, wood, totalpages + ass = asstally + wood = woodtally + totalpages = len(self.asswoodstats) + +class BeavisSpider(CrawlSpider): + name = "Beavis" + allowed_domains = getdomains() + start_urls = getstartpages(allowed_domains) + #start_urls = [ 'http://medium.com' ] + custom_settings = { + 'DEPTH_LIMIT': 3, + 'DOWNLOAD_DELAY': 3, + 'CONCURRENT_REQUESTS': 1500, + 'REACTOR_THREADPOOL_MAXSIZE': 60, + 'ITEM_PIPELINES': { '__main__.AssWoodPipeline': 10 }, + 'LOG_LEVEL': 'INFO', + 'RETRY_ENABLED': False, + 'DOWNLOAD_TIMEOUT': 30, + 'COOKIES_ENABLED': False, + 'AJAXCRAWL_ENABLED': True + } + + rules = ( Rule(LinkExtractor(), callback='parse_asswood'), ) + + def parse_asswood(self, response): + if isinstance(response, scrapy.http.TextResponse): + item = AssWoodItem() + item['ass'] = response.text.casefold().count('ass') + item['wood'] = response.text.casefold().count('wood') + item['url'] = response.url + yield item + + +if __name__ == '__main__': + + process = CrawlerProcess({ + 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' + }) + + process.crawl(BeavisSpider) + process.start() + + print('Uhh, that was, like, ' + str(totalpages) + ' pages crawled.') + print('Uh huhuhuhuh. It said ass ' + str(ass) + ' times.') + print('Uh huhuhuhuh. 
It said wood ' + str(wood) + ' times.') +``` + +##### butthead.py + +``` +import scrapy, time, psutil +from scrapy.spiders import CrawlSpider, Rule, Spider +from scrapy.linkextractors import LinkExtractor +from scrapy.crawler import CrawlerProcess +from multiprocessing import Process, Queue, cpu_count + +ass = 0 +wood = 0 +totalpages = 0 +linkcounttuples =[] + +def getdomains(): + + moz500file = open('top500.domains.05.18.csv') + + domains = [] + moz500csv = moz500file.readlines() + + del moz500csv[0] + + for csvline in moz500csv: + leftquote = csvline.find('"') + rightquote = leftquote + csvline[leftquote + 1:].find('"') + domains.append(csvline[leftquote + 1:rightquote]) + + return domains + +def getstartpages(domains): + + startpages = [] + + for domain in domains: + startpages.append('http://' + domain) + + return startpages + +class AssWoodItem(scrapy.Item): + ass = scrapy.Field() + wood = scrapy.Field() + url = scrapy.Field() + +class AssWoodPipeline(object): + def __init__(self): + self.asswoodstats = [] + + def process_item(self, item, spider): + self.asswoodstats.append((item.get('url'), item.get('ass'), item.get('wood'))) + + def close_spider(self, spider): + asstally, woodtally = 0, 0 + + for asswoodcount in self.asswoodstats: + asstally += asswoodcount[1] + woodtally += asswoodcount[2] + + global ass, wood, totalpages + ass = asstally + wood = woodtally + totalpages = len(self.asswoodstats) + + +class ButtheadSpider(CrawlSpider): + name = "Butthead" + custom_settings = { + 'DEPTH_LIMIT': 3, + 'DOWNLOAD_DELAY': 3, + 'CONCURRENT_REQUESTS': 250, + 'REACTOR_THREADPOOL_MAXSIZE': 30, + 'ITEM_PIPELINES': { '__main__.AssWoodPipeline': 10 }, + 'LOG_LEVEL': 'INFO', + 'RETRY_ENABLED': False, + 'DOWNLOAD_TIMEOUT': 30, + 'COOKIES_ENABLED': False, + 'AJAXCRAWL_ENABLED': True + } + + rules = ( Rule(LinkExtractor(), callback='parse_asswood'), ) + + + def parse_asswood(self, response): + if isinstance(response, scrapy.http.TextResponse): + item = AssWoodItem() + item['ass'] = response.text.casefold().count('ass') + item['wood'] = response.text.casefold().count('wood') + item['url'] = response.url + yield item + +def startButthead(domainslist, urlslist, asswoodqueue): + crawlprocess = CrawlerProcess({ + 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' + }) + + crawlprocess.crawl(ButtheadSpider, allowed_domains = domainslist, start_urls = urlslist) + crawlprocess.start() + asswoodqueue.put( (ass, wood, totalpages) ) + + +if __name__ == '__main__': + asswoodqueue = Queue() + domains=getdomains() + startpages=getstartpages(domains) + processlist =[] + cores = cpu_count() + + for i in range(10): + domainsublist = domains[i * 50:(i + 1) * 50] + pagesublist = startpages[i * 50:(i + 1) * 50] + p = Process(target = startButthead, args = (domainsublist, pagesublist, asswoodqueue)) + processlist.append(p) + + for i in range(cores): + processlist[i].start() + + time.sleep(180) + + i = cores + + while i != 10: + time.sleep(60) + if psutil.cpu_percent() < 66.7: + processlist[i].start() + i += 1 + + for i in range(10): + processlist[i].join() + + for i in range(10): + asswoodtuple = asswoodqueue.get() + ass += asswoodtuple[0] + wood += asswoodtuple[1] + totalpages += asswoodtuple[2] + + print('Uhh, that was, like, ' + str(totalpages) + ' pages crawled.') + print('Uh huhuhuhuh. It said ass ' + str(ass) + ' times.') + print('Uh huhuhuhuh. 
It said wood ' + str(wood) + ' times.') +``` + +-------------------------------------------------------------------------------- + +via: https://blog.dxmtechsupport.com.au/speed-test-x86-vs-arm-for-web-crawling-in-python/ + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/ +[b]: https://github.com/lujun9972 +[1]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2019/02/quadbike-1024x683.jpg +[2]: https://scrapy.org/ +[3]: https://www.info2007.net/blog/2018/review-scaleway-arm-based-cloud-server.html +[4]: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/ +[5]: https://www.computerworld.com/article/3178544/microsoft-windows/microsoft-and-arm-look-to-topple-intel-in-servers.html +[6]: https://www.datacenterknowledge.com/design/cloudflare-bets-arm-servers-it-expands-its-data-center-network +[7]: https://www.scaleway.com/ +[8]: https://aws.amazon.com/ +[9]: https://www.theregister.co.uk/2018/11/27/amazon_aws_graviton_specs/ +[10]: https://www.scaleway.com/virtual-cloud-servers/#anchor_arm +[11]: https://www.scaleway.com/virtual-cloud-servers/#anchor_starter +[12]: https://aws.amazon.com/ec2/spot/pricing/ +[13]: https://aws.amazon.com/ec2/pricing/reserved-instances/ +[14]: https://aws.amazon.com/ec2/instance-types/a1/ +[15]: https://aws.amazon.com/ec2/instance-types/t2/ +[16]: https://wiki.python.org/moin/GlobalInterpreterLock +[17]: https://docs.scrapy.org/en/latest/topics/broad-crawls.html +[18]: https://linux.die.net/man/1/top +[19]: https://linux.die.net/man/1/stress +[20]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2019/02/Screenshot-from-2019-02-16-17-01-08.png +[21]: https://moz.com/top500 +[22]: https://pypi.org/project/psutil/ diff --git a/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md b/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md new file mode 100644 index 0000000000..c7609f5022 --- /dev/null +++ b/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md @@ -0,0 +1,131 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems) +[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text) +[#]: author: (Simon James https://fosspost.org/author/simonjames) + +5 Good Open Source Speech Recognition/Speech-to-Text Systems +====== + +![](https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/open-source-speech-recognition-speech-to-text.png?resize=1237%2C527&ssl=1) + +A speech-to-text (STT) system is as its name implies; A way of transforming the spoken words via sound into textual files that can be used later for any purpose. + +Speech-to-text technology is extremely useful. It can be used for a lot of applications such as a automation of transcription, writing books/texts using your own sound only, enabling complicated analyses on information using the generated textual files and a lot of other things. + +In the past, the speech-to-text technology was dominated by proprietary software and libraries; Open source alternatives didn’t exist or existed with extreme limitations and no community around. 
This is changing, today there are a lot of open source speech-to-text tools and libraries that you can use right now. + +Here we list 5 of them. + +### Open Source Speech Recognition Libraries + +#### Project DeepSpeech + +![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1] + +This project is made by Mozilla; The organization behind the Firefox browser. It’s a 100% free and open source speech-to-text library that also implies the machine learning technology using TensorFlow framework to fulfill its mission. + +In other words, you can use it to build training models yourself to enhance the underlying speech-to-text technology and get better results, or even to bring it to other languages if you want. You can also easily integrate it to your other machine learning projects that you are having on TensorFlow. Sadly it sounds like the project is currently only supporting English by default. + +It’s also available in many languages such as Python (3.6); Which allows you to have it working in seconds: + +``` +pip3 install deepspeech +deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav +``` + +You can also install it using npm: + +``` +npm install deepspeech +``` + +For more information, refer to the [project’s homepage][2]. + +#### Kaldi + +![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3] + +Kaldi is an open source speech recognition software written in C++, and is released under the Apache public license. It works on Windows, macOS and Linux. Its development started back in 2009. + +Kaldi’s main features over some other speech recognition software is that it’s extendable and modular; The community is providing tons of 3rd-party modules that you can use for your tasks. Kaldi also supports deep neural networks, and offers an [excellent documentation on its website][4]. + +While the code is mainly written in C++, it’s “wrapped” by Bash and Python scripts. So if you are looking just for the basic usage of converting speech to text, then you’ll find it easy to accomplish that via either Python or Bash. + +[Project’s homepage][5]. + +#### Julius + +![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6] + +Probably one of the oldest speech recognition software ever; It’s development started in 1991 at the University of Kyoto, and then its ownership was transferred to an independent project team in 2005. + +Julius main features include its ability to perform real-time STT processes, low memory usage (Less than 64MB for 20000 words), ability to produce N-best/Word-graph output, ability to work as a server unit and a lot more. This software was mainly built for academic and research purposes. It is written in C, and works on Linux, Windows, macOS and even Android (on smartphones). + +Currently it supports both English and Japanese languages only. The software is probably availbale to install easily in your Linux distribution’s repository; Just search for julius package in your package manager. The latest version was [released][7] around one and half months ago. + +[Project’s homepage][8]. + +#### Wav2Letter++ + +![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9] + +If you are looking for something modern, then this one is for you. 
Wav2Letter++ is an open source speech recognition software that was released by Facebook’s AI Research Team just 2 months ago. The code is released under the BSD license. + +Facebook is [describing][10] its library as “the fastest state-of-the-art speech recognition system available”. The concepts on which this tool is built makes it optimized for performance by default; Facebook’s also-new machine learning library [FlashLight][11] is used as the underlying core of Wav2Letter++. + +Wav2Letter++ needs you first to build a training model for the language you desire by yourself in order to train the algorithms on it. No pre-built support of any language (including English) is available; It’s just a machine-learning-driven tool to convert speech to text. It was written in C++, hence the name (Wav2Letter++). + +[Project’s homepage][12]. + +#### DeepSpeech2 + +![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13] + +Researchers at the Chinese giant Baidu are also working on their own speech-to-text engine, called DeepSpeech2. It’s an end-to-end open source engine that uses the “PaddlePaddle” deep learning framework for converting both English & Mandarin Chinese languages speeches into text. The code is released under BSD license. + +The engine can be trained on any model and for any language you desire. The models are not released with the code; You’ll have to build them yourself, just like the other software. DeepSpeech2’s source code is written in Python; So it should be easy for you to get familiar with it if that’s the language you use. + +[Project’s homepage][14]. + +### Conclusion + +The speech recognition category is still mainly dominated by proprietary software giants like Google and IBM (which do provide their own closed-source commercial services for this), but the open source alternatives are promising. Those 5 open source speech recognition engines should get you going in building your application, all of them are still under heavy development by time. In few years, we expect open source to become the norm for those technologies just like in the other industries. + +If you have any other recommendations for this list, or comments in general, we’d love to hear them below! 
+ +** + +Shares + + +-------------------------------------------------------------------------------- + +via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text + +作者:[Simon James][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fosspost.org/author/simonjames +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/hero_speech-machine-learning2.png?resize=820%2C280&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 16 open source speech recognition) +[2]: https://github.com/mozilla/DeepSpeech +[3]: https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-19-1134.png?resize=591%2C138&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 18 open source speech recognition) +[4]: http://kaldi-asr.org/doc/index.html +[5]: http://kaldi-asr.org +[6]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/mic_web.png?resize=385%2C100&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 20 open source speech recognition) +[7]: https://github.com/julius-speech/julius/releases +[8]: https://github.com/julius-speech/julius +[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/fully_convolutional_ASR.png?resize=850%2C177&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 22 open source speech recognition) +[10]: https://code.fb.com/ai-research/wav2letter/ +[11]: https://github.com/facebookresearch/flashlight +[12]: https://github.com/facebookresearch/wav2letter +[13]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/ds2.png?resize=850%2C313&ssl=1 (5 Good Open Source Speech Recognition/Speech-to-Text Systems 24 open source speech recognition) +[14]: https://github.com/PaddlePaddle/DeepSpeech diff --git a/sources/tech/20190220 Automation evolution.md b/sources/tech/20190220 Automation evolution.md new file mode 100644 index 0000000000..09167521c6 --- /dev/null +++ b/sources/tech/20190220 Automation evolution.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automation evolution) +[#]: via: (https://leancrew.com/all-this/2019/02/automation-evolution/) +[#]: author: (Dr.Drang https://leancrew.com) + +Automation evolution +====== + +In my experience, scripts and macros almost never end up the way they start. This shouldn’t be a surprise. Just as spending time performing a particular task makes you realize it should be automated, spending time working with the automation makes you realize how it can be improved. Contra [XKCD][3], this doesn’t mean the decision to automate a task puts you on an endless treadmill of tweaking that’s never worth the time you invest. It means you’re continuing to think about how you do things and how your methods can be improved. I have an example that I’ve been working on for years. + +Two of the essential but dull parts of my job involve sending out invoices to clients and following up when those invoices aren’t paid on time. I’ve gradually built up a system to handle both of these interrelated duties. I’ve written about certain details before, but here I want to talk about how and why the system has evolved. + +It started with [TextExpander][4] snippets. 
One was for the text of the email that accompanied the invoice when it was first sent, and it looked like this (albeit less terse): + +``` +Attached is invoice A for $B on project C. Payment is due on D. +``` + +where the A, B, C, and D were [fill-in fields][5]. Similarly, there was a snippet for the followup emails. + +``` +The attached invoice, X for $Y on project Z, is still outstanding +and is now E days old. Pay up. +``` + +While these snippets was certainly better than typing this boilerplate out again and again, they weren’t using the computer for what it’s good at: looking things up and calculating. The invoices are PDFs that came out of my company’s accounting system and contain the information for X, Y, Z, and D. The age of the invoice, E, can be calculated from D and the current date. + +So after a month or two of using the snippets, I wrote an invoicing script in Python that read the invoice PDF and created an email message with all of the parts filled in. It also added a subject line and used a project database to look up the client’s email address to put in the To field. A similar script created a dunning email message. Both of these scripts could be run from the Terminal and took the invoice PDF as their argument, e.g., + +``` +invoice 12345.pdf +``` + +and + +``` +dun 12345.pdf +``` + +I should mention that these scripts created the email messages, but they didn’t send them. Sometimes I need to add an extra sentence or two to handle particular situations, and these scripts stopped short of sending so I could do that. + +It didn’t take very long for me to realize that opening a Terminal window just to run a single command was itself a waste of time. I used Automator to add Quick Action workflows that run the `invoice` and `dun` scripts to the Services menu. That allowed me to run the scripts by right-clicking on an invoice PDF file in the Finder. + +This system lasted quite a while. Eventually, though, I decided it was foolish to rely on my memory (or periodic checking of my outstanding invoices) to decide when to send out the followup emails on unpaid bills. I added a section to the `invoice` script that created a reminder along with the invoicing email. The reminder went in the Invoices list of the Reminders app and was given a due date of the first Tuesday at least 45 days after the invoice date. My invoices are net 30, so 45 days seemed like a good starting time for followups. And rather than having the reminder pop up on any day of the week, I set it to Tuesday—early in the week but unlikely to be on a holiday.1 + +Changing the `invoice` script changed the behavior of the Services menu item that called it; I didn’t have to make any changes in Automator. + +This system was the state of the art until it hit me that I could write a script that checked Reminders for every invoice that was past due and run the `dun` script on all of them, creating a series of followup emails in one fell swoop. I wrote this script as a combination of Python and AppleScript and embedded it in a [Keyboard Maestro][6] macro. With this macro in place, I no longer had to hunt for the invoices to right-click on. + +A couple of weeks ago, after reading Federico Viticci’s article on [using a Mac from iOS][7], I began thinking about the hole in my followup system: I have to be at my Mac to run Keyboard Maestro. What if I’m traveling on Tuesday and want to send out followup emails from my iPhone or iPad? 
OK, sure, I could use Screens to connect to the Mac and run the Keyboard Maestro macro that way, but that’s very slow and clumsy over a cellular network connection, especially when trying to manipulate windows on a 27″ iMac screen as viewed through an iPhone-sized keyhole. + +The obvious solution, which wasn’t obvious to me until I’d thought of and rejected a few other ideas, was to change the `dun` script to create and save the followup email. Saving the email puts it in the Drafts folder, which I can get at from all of my devices. I also changed the Keyboard Maestro macro that executes the `dun` script on every overdue invoice to run every Tuesday morning at 5:00 am. When the reminders pop up later in the day, the emails are already written and waiting for me in the Drafts folder. + +Yesterday was the first “live” test of the new system. I was in an airport restaurant—nothing but the best cuisine for me—when my watch buzzed with reminders for two overdue invoices. I pulled out my phone, opened Mail, and there were the emails, waiting to be sent. In this case, I didn’t have to edit the messages before sending, but it wouldn’t have been a big deal if I had—no more difficult than writing any other email from my phone. + +Am I done with this? History suggests I’m not, and I’m OK with that. By getting rid of more scutwork, I’ve made myself better at following up on old invoices, and my average time-to-collection has improved. Even XKCD would think that’s worth the effort. + +-------------------------------------------------------------------------------- + +via: https://leancrew.com/all-this/2019/02/automation-evolution/ + +作者:[Dr.Drang][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://leancrew.com +[b]: https://github.com/lujun9972 +[1]: https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/ +[2]: https://leancrew.com/all-this/2019/02/transparency/ +[3]: https://xkcd.com/1319/ +[4]: https://textexpander.com/ +[5]: https://textexpander.com/help/desktop/fillins.html +[6]: https://www.keyboardmaestro.com/main/ +[7]: https://www.macstories.net/ipad-diaries/ipad-diaries-using-a-mac-from-ios-part-1-finder-folders-siri-shortcuts-and-app-windows-with-keyboard-maestro/ diff --git a/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md new file mode 100644 index 0000000000..93549ac45b --- /dev/null +++ b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Q4OS Linux Revives Your Old Laptop with Windows’ Looks) +[#]: via: (https://itsfoss.com/q4os-linux-review) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Q4OS Linux Revives Your Old Laptop with Windows’ Looks +====== + +There are quite a few Linux distros available that seek to make new users feel at home by [imitating the look and feel of Windows][1]. Today, we’ll look at a distro that attempts to do this with limited success We’ll be looking at [Q4OS][2]. 
+ +### Q4OS Linux focuses on performance on low hardware + +![Q4OS Linux desktop after first boot][3]Q4OS after first boot + +> Q4OS is a fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing. +> +> Q4OS Website + +Q4OS currently has two different release branches: 2.# Scorpion and 3.# Centaurus. Scorpion is the Long-Term-Support (LTS) release and will be supported for five years. That support should last until 2022. The most recent version of Scorpion is 2.6, which is based on [Debian][4] 9 Stretch. Centaurus is considered the testing branch and is based on Debian Buster. Centaurus will become the LTS when Debian Buster becomes stable. + +Q4OS is one of the few Linux distros that still support both 32-bit and 64-bit. It has also been ported to ARM devices, specifically the Raspberry PI and the PineBook. + +The one major thing that separates Q4OS from the majority of Linux distros is their use of the Trinity Desktop Environment as the default desktop environment. + +#### The not-so-famous Trinity Desktop Environment + +![][5]Trinity Desktop Environment + +I’m sure that most people are unfamiliar with the [Trinity Desktop Environment (TDE)][6]. I didn’t know until I discovered Q4OS a couple of years ago. TDE is a fork of [KDE][7], specifically KDE 3.5. TDE was created by Timothy Pearson and the first release took place in April 2010. + +From what I read, it sounds like TDE was created for the same reason as [MATE][8]). Early versions of KDE 4 were prone to crash and users were unhappy with the direction the new release was taking, it was decided to fork the previous release. That is where the similarities end. MATE has taken on a life of its own and grew to become an equal among desktop environments. Development of TDE seems to have slowed. There were two years between the last two point releases. + +Quick side note: TDE uses its own fork of Qt 3, named TQt. + +#### System Requirements + +According to the [Q4OS download page][9], the system requirements differ based on the desktop environment you install. + +**TDE Version** + + * At least 300MHz CPU + * 128 MB of RAM + * 3 GB Storage + + + +**KDE Version** + + * At least 1GHz CPU + * 1 GB of RAM + * 5 GB Storage + + + +You can see from the system requirements that Q4OS is a [lightweight Linux distribution suitable for older computers][10]. + +#### Included apps by default + +The following applications are included in the full install of Q4OS: + + * Google Chrome + * Thunderbird + * LibreOffice + * VLC player + * Konqueror browser + * Dolphin file manager + * AisleRiot Solitaire + * Konsole + * Software Center + + + * KMines + * Ockular + * KBounce + * DigiKam + * Kooka + * KolourPaint + * KSnapshot + * Gwenview + * Ark + + + * KMail + * SMPlayer + * KRec + * Brasero + * Amarok player + * qpdfview + * KOrganizer + * KMag + * KNotes + + + +Of course, you can install additional applications through the software center. Since Q4OS is based on Debian, you can also [install applications from deb packages][11]. + +#### Q4OS can be installed from within Windows + +I was able to successfully install TrueOS on my Dell Latitude D630 without any issues. 
This laptop has an Intel Centrino Duo Core processor running at 2.00 GHz, NVIDIA Quadro NVS 135M graphics chip, and 4 GB of RAM. + +You have a couple of options to choose from when installing Q4OS. You can either install Q4OS with a CD (Live or install) or you can install it from inside Window. The Windows installer asks for the drive location you want to install to, how much space you want Q4OS to take up and what login information do you want to use. + +![][12]Q4OS Windows installer + +Compared to most distros, the Live ISOs are small. The KDE version weighs less than 1GB and the TDE version is just a little north of 500 MB. + +### Experiencing Q4OS: Feels like older Windows versions + +Please note that while there is a KDE installation ISO, I used the TDE installation ISO. The KDE Live CD is a recent addition, so TDE is more in line with the project’s long term goals. + +When you boot into Q4OS for the first time, it feels like you jumped through a time portal and are staring at Windows 2000. The initial app offerings are very slim, you have access to a file manager, a web browser and not much else. There isn’t even a screenshot tool installed. + +![][13]Konqueror film manager + +When you try to use the TDE browser (Konqueror), a dialog box pops up recommending using the Desktop Profiler to [install Google Chrome][14] or some other recent web browser. + +The Desktop Profiler allows you to choose between a bare-bones, basic or full desktop and which desktop environment you wish to use as default. You can also use the Desktop Profiler to install other desktop environments, such as MATE, Xfce, LXQT, LXDE, Cinnamon and GNOME. + +![Q4OS Welcome Screen][15]![Q4OS Welcome Screen][15]Q4OS Welcome Screen + +Q4OS comes with its own application center. However, the offerings are limited to less than 20 options, including Synaptic, Google Chrome, Chromium, Firefox, LibreOffice, Update Manager, VLC, Multimedia codecs, Thunderbird, LookSwitcher, NVIDIA drivers, Network Manager, Skype, GParted, Wine, Blueman, X2Go server, X2Go Client, and Virtualbox additions. + +![][16]Q4OS Software Centre + +If you want to install anything else, you need to either use the command line or the [synaptic package manager][17]. Synaptic is a very good package manager and has been very serviceable for many years, but it isn’t quite newbie friendly. + +If you install an application from the Software Centre, you are treated to an installer that looks a lot like a Windows installer. I can only imagine that this is for people converting to Linux from Windows. + +![][18]Firefox installer + +As I mentioned earlier, when you boot into Q4OS’ desktop for the first time it looks like something out of the 1990s. Thankfully, you can install a utility named LookSwitcher to install a different theme. Initially, you are only shown half a dozen themes. There are other themes that are considered works-in-progress. You can also enhance the default theme by picking a more vibrant background and making the bottom panel transparent. + +![][19]Q4OS using the Debonair theme + +### Final Thoughts on Q4OS + +I may have mentioned a few times in this review that Q4OS looks like a dated version of Windows. It is obviously a very conscious decision because great care was taken to make even the control panel and file manager look Windows-eque. The problem is that it reminds me more of [ReactOS][20] than something modern. The Q4OS website says that it is made using the latest technology. 
The look of the system disagrees and will probably put some new users off. + +The fact that the install ISOs are smaller than most means that they are very quick to download. Unfortunately, it also means that if you want to be productive, you’ll have to spend quite a bit of time downloading software, either manually or automatically. You’ll also need an active internet connection. There is a reason why most ISOs are several gigabytes. + +I made sure to test the Windows installer. I installed a test copy of Windows 10 and ran the Q4OS installer. The process took a few minutes because the installer, which is less than 10 MB had to download an ISO. When the process was done, I rebooted. I selected Q4OS from the menu, but it looked like I was booting into Windows 10 (got the big blue circle). I thought that the install failed, but I eventually got to Q4OS. + +One of the few things that I liked about Q4OS was how easy it was to install the NVIDIA drivers. After I logged in for the first time, a little pop-up told me that there were NVIDIA drivers available and asked me if I wanted to install them. + +Using Q4OS was definitely an interesting experience, especially using TDE for the first time and the Windows look and feel. However, the lack of apps in the Software Centre and some of the design choices stop me from recommending this distro. + +**Do you like Q4OS?** + +Have you ever used Q4OS? What is your favorite Debian-based distro? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21]. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/q4os-linux-review + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/windows-like-linux-distributions/ +[2]: https://q4os.org/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?resize=800%2C500&ssl=1 +[4]: https://www.debian.org/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os4.jpg?resize=800%2C412&ssl=1 +[6]: https://www.trinitydesktop.org/ +[7]: https://en.wikipedia.org/wiki/KDE +[8]: https://en.wikipedia.org/wiki/MATE_(software +[9]: https://q4os.org/downloads1.html +[10]: https://itsfoss.com/lightweight-linux-beginners/ +[11]: https://itsfoss.com/list-installed-packages-ubuntu/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os-windows-installer.jpg?resize=800%2C610&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os2.jpg?resize=800%2C606&ssl=1 +[14]: https://itsfoss.com/install-chrome-ubuntu/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os10.png?ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os3.jpg?resize=800%2C507&ssl=1 +[17]: https://www.nongnu.org/synaptic/ +[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os5.jpg?resize=800%2C616&ssl=1 +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os8Debonaire.jpg?resize=800%2C500&ssl=1 +[20]: https://www.reactos.org/ +[21]: http://reddit.com/r/linuxusersgroup +[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?fit=800%2C500&ssl=1 diff --git a/sources/tech/20190223 Regex groups and numerals.md 
b/sources/tech/20190223 Regex groups and numerals.md new file mode 100644 index 0000000000..c24505ee6b --- /dev/null +++ b/sources/tech/20190223 Regex groups and numerals.md @@ -0,0 +1,60 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Regex groups and numerals) +[#]: via: (https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/) +[#]: author: (Dr.Drang https://leancrew.com) + +Regex groups and numerals +====== + +A week or so ago, I was editing a program and decided I should change some variable names. I thought it would be a simple regex find/replace, and it was. Just not as simple as I thought. + +The variables were named `a10`, `v10`, and `x10`, and I wanted to change them to `a30`, `v30`, and `x30`, respectively. I brought up BBEdit’s Find window and entered this: + +![Mistaken BBEdit replacement pattern][2] + +I couldn’t just replace `10` with `30` because there were instances of `10` in the code that weren’t related to the variables. And because I think I’m clever, I didn’t want to do three non-regex replacements, one each for `a10`, `v10`, and `x10`. But I wasn’t clever enough to notice the blue coloring in the replacement pattern. Had I done so, I would have seen that BBEdit was interpreting my replacement pattern as “Captured group 13, followed by `0`” instead of “Captured group 1, followed by `30`,” which was what I intended. Since captured group 13 was blank, all my variable names were replaced with `0`. + +You see, BBEdit can capture up to 99 groups in the search pattern and, strictly speaking, we should use two-digit numbers when referring to them in the replacement pattern. But in most cases, we can use `\1` through `\9` instead of `\01` through `\09` because there’s no ambiguity. In other words, if I had been trying to change `a10`, `v10`, and `x10` to `az`, `vz`, and `xz`, a replacement pattern of `\1z` would have been just fine, because the trailing `z` means there’s no way to misinterpret the intent of the `\1` in that pattern. + +So after undoing the replacement, I changed the pattern to this, + +![Two-digit BBEdit replacement pattern][3] + +and all was right with the world. + +There was another option: a named group. Here’s how that would have looked, using `var` as the pattern name: + +![Named BBEdit replacement pattern][4] + +I don’t think I’ve ever used a named group in any situation, whether the regex was in a text editor or a script. My general feeling is that if the pattern is so complicated I have to use variables to keep track of all the groups, I should stop and break the problem down into smaller parts. + +By the way, you may have heard that BBEdit is celebrating its [25th anniversary][5] of not sucking. When a well-documented app has such a long history, the manual starts to accumulate delightful callbacks to the olden days. As I was looking up the notation for named groups in the BBEdit manual, I ran across this note: + +![BBEdit regex manual excerpt][6] + +BBEdit is currently on Version 12.5; Version 6.5 came out in 2001. But the manual wants to make sure that long-time customers (I believe it was on Version 4 when I first bought it) don’t get confused by changes in behavior, even when those changes occurred nearly two decades ago. 
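+
+This two-digit ambiguity isn't unique to BBEdit, by the way. Python's `re` module has the same problem in replacement strings, along with the same two cures: an explicit `\g<1>` reference or a named group. A quick sketch with the same a10/v10/x10 renaming (the one-line sample string is invented, but the `\g<1>` and `(?P<name>...)` syntax is standard `re` fare):
+
+```
+import re
+
+code = "a10 = v10 + x10   # other 10s should be left alone"
+
+# A plain backreference followed by digits is ambiguous here too: r'\130' is
+# not read as "group 1, then 30", so spell the group number out explicitly.
+print(re.sub(r'\b([avx])10\b', r'\g<1>30', code))
+# a30 = v30 + x30   # other 10s should be left alone
+
+# Or use a named group, much like the named pattern in the screenshot above.
+print(re.sub(r'\b(?P<var>[avx])10\b', r'\g<var>30', code))
+```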
+ + +-------------------------------------------------------------------------------- + +via: https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/ + +作者:[Dr.Drang][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://leancrew.com +[b]: https://github.com/lujun9972 +[1]: https://leancrew.com/all-this/2019/02/automation-evolution/ +[2]: https://leancrew.com/all-this/images2019/20190223-Mistaken%20BBEdit%20replacement%20pattern.png (Mistaken BBEdit replacement pattern) +[3]: https://leancrew.com/all-this/images2019/20190223-Two-digit%20BBEdit%20replacement%20pattern.png (Two-digit BBEdit replacement pattern) +[4]: https://leancrew.com/all-this/images2019/20190223-Named%20BBEdit%20replacement%20pattern.png (Named BBEdit replacement pattern) +[5]: https://merch.barebones.com/ +[6]: https://leancrew.com/all-this/images2019/20190223-BBEdit%20regex%20manual%20excerpt.png (BBEdit regex manual excerpt) diff --git a/translated/talk/20181220 7 CI-CD tools for sysadmins.md b/translated/talk/20181220 7 CI-CD tools for sysadmins.md new file mode 100644 index 0000000000..fe00691a9a --- /dev/null +++ b/translated/talk/20181220 7 CI-CD tools for sysadmins.md @@ -0,0 +1,134 @@ +[#]: collector: (lujun9972) +[#]: translator: (jdh8383) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 CI/CD tools for sysadmins) +[#]: via: (https://opensource.com/article/18/12/cicd-tools-sysadmins) +[#]: author: (Dan Barker https://opensource.com/users/barkerd427) + +系统管理员的 7 个 CI/CD 工具 +====== +本文是一篇简单指南:介绍一些常见的开源 CI/CD 工具。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc) + +虽然持续集成、持续交付和持续部署(CI/CD)在开发者社区里已经存在很多年,一些机构在运维部门也有实施经验,但大多数公司并没有做这样的尝试。对于很多机构来说,让运维团队能够像他们的开发同行一样熟练操作 CI/CD 工具,已经变得十分必要了。 + +无论是基础设施、第三方应用还是内部开发的应用,都可以开展 CI/CD 实践。尽管你会发现有很多不同的工具,但它们都有着相似的设计模型。而且可能最重要的一点是:通过带领你的公司进行这些实践,会让你在公司内部变得举足轻重,成为他人学习的榜样。 + +一些机构在自己的基础设施上已有多年的 CI/CD 实践经验,常用的工具包括 [Ansible][1]、[Chef][2] 或者 [Puppet][3]。另一些工具,比如 [Test Kitchen][4],允许在最终要部署应用的基础设施上运行测试。事实上,如果使用更高级的配置方法,你甚至可以将应用部署到有真实负载的仿真“生产环境”上,来运行应用级别的测试。然而,单单是能够测试基础设施就是一项了不起的成就了。配置管理工具 Terraform 可以通过 Test Kitchen 来快速创建[可复用][6]的基础设施配置,这比它的前辈要强不少。再加上 Linux 容器和 Kubernetes,在数小时内,你就可以创建一套类似于生产环境的配置参数和系统资源,来测试整个基础设施和其上部署的应用,这在以前可能需要花费几个月的时间。而且,删除和再次创建整个测试环境也非常容易。 + +当然,作为初学者,你也可以把网络配置和 DDL(数据定义语言)文件加入版本控制,然后开始尝试一些简单的 CI/CD 流程。虽然只能帮你检查一下语义语法,但实际上大多数用于开发的管道(pipeline)都是这样起步的。只要你把脚手架搭起来,建造就容易得多了。而一旦起步,你就会发现各种真实的使用场景。 + +举个例子,我经常会在公司内部写新闻简报,我使用 [MJML][7] 制作邮件模板,然后把它加入版本控制。我一般会维护一个 web 版本,但是一些同事喜欢 PDF 版,于是我创建了一个[管道][8]。每当我写好一篇新闻稿,就在 Gitlab 上提交一个合并请求。这样做会自动创建一个 index.html 文件,生成这篇新闻稿的 HTML 和 PDF 版链接。HTML 和 PDF 文件也会在管道里同时生成。除非有人来检查确认,这些文件不会被直接发布出去。使用 GitLab Pages 发布这个网站后,我就可以下载一份 HTML 版,用来发送新闻简报。未来,我会修改这个流程,当合并请求成功或者在某个审核步骤后,自动发出对应的新闻稿。这些处理逻辑并不复杂,但的确为我节省了不少时间。实际上这些工具最核心的用途就是替你节省时间。 + +关键是要在抽象层创建出工具,这样稍加修改就可以处理不同的问题。值得留意的是,我创建的这套流程几乎不需要任何代码,除了一些[轻量级的 HTML 模板][9],一些[把 HTML 文件转换成 PDF 的 nodejs 代码][10],还有一些[生成 index 页面的 nodejs 代码][11]。 + +这其中一些东西可能看起来有点复杂,但其中大部分都源自我使用的不同工具的教学文档。而且很多开发人员也会乐意跟你合作,因为他们在完工时会发现这些东西也挺有用。上面我提供的那些代码链接是给 [DevOps KC][12](一个地方性DevOps组织) 发送新闻简报用的,其中大部分用来创建网站的代码来自我在内部新闻简报项目上所作的工作。 + +下面列出的大多数工具都可以提供这种类型的交互,但是有些工具提供的模型略有不同。这一领域新兴的模型是用声明式的方法例如 YAML 来描述一个管道,其中的每个阶段都是短暂而幂等的。许多系统还会创建[有向无环图(DAG)][13],来确保管道上不同的阶段排序的正确性。 + +这些阶段一般运行在 Linux 容器里,和普通的容器并没有区别。有一些工具,比如 
[Spinnaker][14],只关注部署组件,而且提供一些其他工具没有的操作特性。[Jenkins][15] 则通常把管道配置存成 XML 格式,大部分交互都可以在图形界面里完成,但最新的方案是使用[领域专用语言(DSL)][16]如[Groovy][17]。并且,Jenkins 的任务(job)通常运行在各个节点里,这些节点上会装一个专门的 Java 程序还有一堆混杂的插件和预装组件。 + +Jenkins 在自己的工具里引入了管道的概念,但使用起来却并不轻松,甚至包含一些禁区。最近,Jenkins 的创始人决定带领社区向新的方向前进,希望能为这个项目注入新的活力,把 CI/CD 真正推广开(译者注:详见后面的 Jenkins 章节)。我认为其中最有意思的想法是构建一个云原生 Jenkins,能把 Kubernetes 集群转变成 Jenkins CI/CD 平台。 + +当你更多地了解这些工具并把实践带入你的公司和运维部门,你很快就会有追随者,因为你有办法提升自己和别人的工作效率。我们都有多年积累下来的技术债要解决,如果你能给同事们提供足够的时间来处理这些积压的工作,他们该会有多感激呢?不止如此,你的客户也会开始看到应用变得越来越稳定,管理层会把你看作得力干将,你也会在下次谈薪资待遇或参加面试时更有底气。 + +让我们开始深入了解这些工具吧,我们将对每个工具做简短的介绍,并分享一些有用的链接。 + +### GitLab CI + +GitLab 可以说是 CI/CD 领域里新登场的玩家,但它却在 [Forrester(一个权威调研机构) 的调查报告][20]中位列第一。在一个高水平、竞争充分的领域里,这是个了不起的成就。是什么让 GitLab CI 这么成功呢?它使用 YAML 文件来描述整个管道。另有一个功能叫做 Auto DevOps,可以为较简单的项目自动生成管道,并且包含多种内置的测试单元。这套系统使用 [Herokuish buildpacks][21]来判断语言的种类以及如何构建应用。它和 Kubernetes 紧密整合,可以根据不同的方案将你的应用自动部署到 Kubernetes 集群,比如灰度发布、蓝绿部署等。 + +除了它的持续集成功能,GitLab 还提供了许多补充特性,比如:将 Prometheus 和你的应用一同部署,以提供监控功能;通过 GitLab 提供的 Issues、Epics 和 Milestones 功能来实现项目评估和管理;管道中集成了安全检测功能,多个项目的检测结果会聚合显示;你可以通过 GitLab 提供的网页版 IDE 在线编辑代码,还可以快速查看管道的预览或执行状态。 + +### GoCD + +GoCD 是由老牌软件公司 Thoughtworks 出品,这已经足够证明它的能力和效率。对我而言,GoCD 最具亮点的特性是它的[价值流视图(VSM)][22]。实际上,一个管道的输出可以变成下一个管道的输入,从而把管道串联起来。这样做有助于提高不同开发团队在整个开发流程中的独立性。比如在引入 CI/CD 系统时,有些成立较久的机构希望保持他们各个团队相互隔离,这时候 VSM 就很有用了:让每个人都使用相同的工具就很容易在 VSM 中发现工作流程上的瓶颈,然后可以按图索骥调整团队或者想办法提高工作效率。 + +为公司的每个产品配置 VSM 是非常有价值的;GoCD 可以使用 [JSON 或 YAML 格式存储配置][23],还能以可视化的方式展示等待时间,这让一个机构能有效减少学习它的成本。刚开始使用 GoCD 创建你自己的流程时,建议使用人工审核的方式。让每个团队也采用人工审核,这样你就可以开始收集数据并且找到可能的瓶颈点。 + +### Travis CI + +我使用的第一个软件既服务(SaaS)类型的 CI 系统就是 Travis CI,体验很不错。管道配置以源码形式用 YAML 保存,它与 GitHub 等工具无缝整合。我印象中管道从来没有失效过,因为 Travis CI 的在线率很高。除了 SaaS 版之外,你也可以使用自行部署的版本。我还没有自行部署过,它的组件非常多,要全部安装的话,工作量就有点吓人了。我猜更简单的办法是把它部署到 Kubernetes 上,[Travis CI 提供了 Helm charts][26],这些 charts 目前不包含所有要部署的组件,但我相信以后会越来越丰富的。如果你不想处理这些细枝末节的问题,还有一个企业版可以试试。 + +假如你在开发一个开源项目,你就能免费使用 SaaS 版的 Travis CI,享受顶尖团队提供的优质服务!这样能省去很多麻烦,你可以在一个相对通用的平台上(如 GitHub)研发开源项目,而不用找服务器来运行任何东西。 + +### Jenkins + +Jenkins在 CI/CD 界绝对是元老级的存在,也是事实上的标准。我强烈建议你读一读这篇文章:"[Jenkins: Shifting Gears][27]",作者 Kohsuke 是 Jenkins 的创始人兼 CloudBees 公司 CTO。这篇文章契合了我在过去十年里对 Jenkins 及其社区的感受。他在文中阐述了一些这几年呼声很高的需求,我很乐意看到 CloudBees 引领这场变革。长期以来,Jenkins 对于非开发人员来说有点难以接受,并且一直是其管理员的重担。还好,这些问题正是他们想要着手解决的。 + +[Jenkins 配置既代码][28](JCasC)应该可以帮助管理员解决困扰了他们多年的配置复杂性问题。与其他 CI/CD 系统类似,只需要修改一个简单的 YAML 文件就可以完成 Jenkins 主节点的配置工作。[Jenkins Evergreen][29] 的出现让配置工作变得更加轻松,它提供了很多预设的使用场景,你只管套用就可以了。这些发行版会比官方的标准版本 Jenkins 更容易维护和升级。 + +Jenkins 2 引入了两种原生的管道(pipeline)功能,我在 LISA(一个系统架构和运维大会) 2017 年的研讨会上已经[讨论过了][30]。这两种功能都没有 YAML 简便,但在处理复杂任务时它们很好用。 + +[Jenkins X][31] 是 Jenkins 的一个全新变种,用来实现云端原生 Jenkins(至少在用户看来是这样)。它会使用 JCasC 及 Evergreen,并且和 Kubernetes 整合的更加紧密。对于 Jenkins 来说这是个令人激动的时刻,我很乐意看到它在这一领域的创新,并且继续发挥领袖作用。 + +### Concourse CI + +我第一次知道 Concourse 是通过 Pivotal Labs 的伙计们介绍的,当时它处于早期 beta 版本,而且那时候也很少有类似的工具。这套系统是基于微服务构建的,每个任务运行在一个容器里。它独有的一个优良特性是能够在你本地系统上运行任务,体现你本地的改动。这意味着你完全可以在本地开发(假设你已经连接到了 Concourse 的服务器),像在真实的管道构建流程一样从你本地构建项目。而且,你可以在修改过代码后从本地直接重新运行构建,来检验你的改动结果。 + +Concourse 还有一个简单的扩展系统,它依赖于资源这一基础概念。基本上,你想给管道添加的每个新功能都可以用一个 Docker 镜像实现,并作为一个新的资源类型包含在你的配置中。这样可以保证每个功能都被封装在一个不易改变的独立工件中,方便对其单独修改和升级,改变其中一个时不会影响其他构建。 + +### Spinnaker + +Spinnaker 出自 Netflix,它更关注持续部署而非持续集成。它可以与其他工具整合,比如Travis 和 Jenkins,来启动测试和部署流程。它也能与 Prometheus、Datadog 这样的监控工具集成,参考它们提供的指标来决定如何部署。例如,在一次金丝雀发布(canary deployment)里,我们可以根据收集到的相关监控指标来做出判断:最近的这次发布是否导致了服务降级,应该立刻回滚;还是说看起来一切OK,应该继续执行部署。 + +谈到持续部署,一些另类但却至关重要的问题往往被忽略掉了,说出来可能有点让人困惑:Spinnaker 
可以帮助持续部署不那么“持续”。在整个应用部署流程期间,如果发生了重大问题,它可以让流程停止执行,以阻止可能发生的部署错误。但它也可以在最关键的时刻让人工审核强制通过,发布新版本上线,使整体收益最大化。实际上,CI/CD 的主要目的就是在商业模式需要调整时,能够让待更新的代码立即得到部署。 + +### Screwdriver + +Screwdriver 是个简单而又强大的软件。它采用微服务架构,依赖像 Nomad、Kubernetes 和 Docker 这样的工具作为执行引擎。官方有一篇很不错的[部署教学文档][34],介绍了如何将它部署到 AWS 和 Kubernetes 上,但如果相应的 [Helm chart][35] 也完成的话,就更完美了。 + +Screwdriver 也使用 YAML 来描述它的管道,并且有很多合理的默认值,这样可以有效减少各个管道重复的配置项。用配置文件可以组织起高级的工作流,来描述各个 job 间复杂的依赖关系。例如,一项任务可以在另一个任务开始前或结束后运行;各个任务可以并行也可以串行执行;更赞的是你可以预先定义一项任务,只在特定的 pull request 请求时被触发,而且与之有依赖关系的任务并不会被执行,这能让你的管道具有一定的隔离性:什么时候被构造的工件应该被部署到生产环境,什么时候应该被审核。 + +以上只是我对这些 CI/CD 工具的简单介绍,它们还有许多很酷的特性等待你深入探索。而且它们都是开源软件,可以自由使用,去部署一下看看吧,究竟哪个才是最适合你的那个。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/cicd-tools-sysadmins + +作者:[Dan Barker][a] +选题:[lujun9972][b] +译者:[jdh8383](https://github.com/jdh8383) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://www.ansible.com/ +[2]: https://www.chef.io/ +[3]: https://puppet.com/ +[4]: https://github.com/test-kitchen/test-kitchen +[5]: https://www.merriam-webster.com/dictionary/ephemeral +[6]: https://en.wikipedia.org/wiki/Idempotence +[7]: https://mjml.io/ +[8]: https://gitlab.com/devopskc/newsletter/blob/master/.gitlab-ci.yml +[9]: https://gitlab.com/devopskc/newsletter/blob/master/index/index.html +[10]: https://gitlab.com/devopskc/newsletter/blob/master/html-to-pdf.js +[11]: https://gitlab.com/devopskc/newsletter/blob/master/populate-index.js +[12]: https://devopskc.com/ +[13]: https://en.wikipedia.org/wiki/Directed_acyclic_graph +[14]: https://www.spinnaker.io/ +[15]: https://jenkins.io/ +[16]: https://martinfowler.com/books/dsl.html +[17]: http://groovy-lang.org/ +[18]: https://about.gitlab.com/product/continuous-integration/ +[19]: https://gitlab.com/gitlab-org/gitlab-ce/ +[20]: https://about.gitlab.com/2017/09/27/gitlab-leader-continuous-integration-forrester-wave/ +[21]: https://github.com/gliderlabs/herokuish +[22]: https://www.gocd.org/getting-started/part-3/#value_stream_map +[23]: https://docs.gocd.org/current/advanced_usage/pipelines_as_code.html +[24]: https://docs.travis-ci.com/ +[25]: https://github.com/travis-ci/travis-ci +[26]: https://github.com/travis-ci/kubernetes-config +[27]: https://jenkins.io/blog/2018/08/31/shifting-gears/ +[28]: https://jenkins.io/projects/jcasc/ +[29]: https://github.com/jenkinsci/jep/blob/master/jep/300/README.adoc +[30]: https://danbarker.codes/talk/lisa17-becoming-plumber-building-deployment-pipelines/ +[31]: https://jenkins-x.io/ +[32]: https://concourse-ci.org/ +[33]: https://github.com/concourse/concourse +[34]: https://docs.screwdriver.cd/cluster-management/kubernetes +[35]: https://github.com/screwdriver-cd/screwdriver-chart diff --git a/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md b/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md deleted file mode 100644 index 39b20a33f0..0000000000 --- a/translated/tech/20140114 Caffeinated 6.828:Lab 2 Memory Management.md +++ /dev/null @@ -1,240 +0,0 @@ -# Caffeinated 6.828:实验 2:内存管理 - -### 简介 - -在本实验中,你将为你的操作系统写内存管理方面的代码。内存管理有两部分组成。 - -第一部分是内核的物理内存分配器,内核通过它来分配内存,以及在不需要时释放所分配的内存。分配器以页为单位分配内存,每个页的大小为 4096 字节。你的任务是去维护那个数据结构,它负责记录物理页的分配和释放,以及每个分配的页有多少进程共享它。本实验中你将要写出分配和释放内存页的全套代码。 - -第二个部分是虚拟内存的管理,它负责由内核和用户软件使用的虚拟内存地址到物理内存地址之间的映射。当使用内存时,x86 
架构的硬件是由内存管理单元(MMU)负责执行映射操作来查阅一组页表。接下来你将要修改 JOS,以根据我们提供的特定指令去设置 MMU 的页表。 - -### 预备知识 - -在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自实验 1 以来的改变(如有需要的话)、获取课程仓库的最新版本、以及在我们的实验 2 (origin/lab2)的基础上创建一个称为 lab2 的本地分支: - -```c -athena% cd ~/6.828/lab -athena% add git -athena% git pull -Already up-to-date. -athena% git checkout -b lab2 origin/lab2 -Branch lab2 set up to track remote branch refs/remotes/origin/lab2. -Switched to a new branch "lab2" -athena% -``` - -上面的 `git checkout -b` 命令其实做了两件事情:首先它创建了一个本地分支 `lab2`,它跟踪给我们提供课程内容的远程分支 `origin/lab2` ,第二件事情是,它更改的你的 `lab` 目录的内容反映到 `lab2` 分支上存储的文件中。Git 允许你在已存在的两个分支之间使用 `git checkout *branch-name*` 命令去切换,但是在你切换到另一个分支之前,你应该去提交那个分支上你做的任何出色的变更。 - -现在,你需要将你在 lab1 分支中的改变合并到 lab2 分支中,命令如下: - -```c -athena% git merge lab1 -Merge made by recursive. - kern/kdebug.c | 11 +++++++++-- - kern/monitor.c | 19 +++++++++++++++++++ - lib/printfmt.c | 7 +++---- - 3 files changed, 31 insertions(+), 6 deletions(-) -athena% -``` - -在一些案例中,Git 或许并不能找到如何将你的更改与新的实验任务合并(例如,你在第二个实验任务中变更了一些代码的修改)。在那种情况下,你使用 git 命令去合并,它会告诉你哪个文件发生了冲突,你必须首先去解决冲突(通过编辑冲突的文件),然后使用 `git commit -a` 去重新提交文件。 - -实验 2 包含如下的新源代码,后面你将遍历它们: - -- inc/memlayout.h -- kern/pmap.c -- kern/pmap.h -- kern/kclock.h -- kern/kclock.c - -`memlayout.h` 描述虚拟地址空间的布局,这个虚拟地址空间是通过修改 `pmap.c`、`memlayout.h` 和 `pmap.h` 所定义的 *PageInfo* 数据结构来实现的,这个数据结构用于跟踪物理内存页面是否被释放。`kclock.c` 和 `kclock.h` 维护 PC 基于电池的时钟和 CMOS RAM 硬件,在 BIOS 中记录了 PC 上安装的物理内存数量,以及其它的一些信息。在 `pmap.c` 中的代码需要去读取这个设备硬件信息,以算出在这个设备上安装了多少物理内存,这些只是由你来完成的一部分代码:你不需要知道 CMOS 硬件工作原理的细节。 - -特别需要注意的是 `memlayout.h` 和 `pmap.h`,因为本实验需要你去使用和理解的大部分内容都包含在这两个文件中。你或许还需要去复习 `inc/mmu.h` 这个文件,因为它也包含了本实验中用到的许多定义。 - -开始本实验之前,记得去添加 `exokernel` 以获取 QEMU 的 6.828 版本。 - -### 实验过程 - -在你准备进行实验和写代码之前,先添加你的 `answers-lab2.txt` 文件到 Git 仓库,提交你的改变然后去运行 `make handin`。 - -```c -athena% git add answers-lab2.txt -athena% git commit -am "my answer to lab2" -[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-) -athena% make handin -``` - -正如前面所说的,我们将使用一个评级程序来分级你的解决方案,你可以在 `lab` 目录下运行 `make grade`,使用评级程序来测试你的内核。为了完成你的实验,你可以改变任何你需要的内核源代码和头文件。但毫无疑问的是,你不能以任何形式去改变或破坏评级代码。 - -### 第 1 部分:物理页面管理 - -操作系统必须跟踪物理内存页是否使用的状态。JOS 以页为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。 - -现在,你将要写内存的物理页分配器的代码。它使用链接到 `PageInfo` 数据结构的一组列表来保持对物理页的状态跟踪,每个列表都对应到一个物理内存页。在你能够写出剩下的虚拟内存实现之前,你需要先写出物理内存页面分配器,因为你的页表管理代码将需要去分配物理内存来存储页表。 - -> 练习 1 -> -> 在文件 `kern/pmap.c` 中,你需要去实现以下函数的代码(或许要按给定的顺序来实现)。 -> -> boot_alloc() -> -> mem_init()(只要能够调用 check_page_free_list() 即可) -> -> page_init() -> -> page_alloc() -> -> page_free() -> -> `check_page_free_list()` 和 `check_page_alloc()` 可以测试你的物理内存页分配器。你将需要引导 JOS 然后去看一下 `check_page_alloc()` 是否报告成功即可。如果没有报告成功,修复你的代码直到成功为止。你可以添加你自己的 `assert()` 以帮助你去验证是否符合你的预期。 - -本实验以及所有的 6.828 实验中,将要求你做一些检测工作,以便于你搞清楚它们是否按你的预期来工作。这个任务不需要详细描述你添加到 JOS 中的代码的细节。查找 JOS 源代码中你需要去修改的那部分的注释;这些注释中经常包含有技术规范和提示信息。你也可能需要去查阅 JOS、和 Intel 的技术手册、以及你的 6.004 或 6.033 课程笔记的相关部分。 - -### 第 2 部分:虚拟内存 - -在你开始动手之前,需要先熟悉 x86 内存管理架构的保护模式:即分段和页面转换。 - -> 练习 2 -> -> 如果你对 x86 的保护模式还不熟悉,可以查看 Intel 80386 参考手册的第 5 章和第 6 章。阅读这些章节(5.2 和 6.4)中关于页面转换和基于页面的保护。我们建议你也去了解关于段的章节;在虚拟内存和保护模式中,JOS 使用了分页、段转换、以及在 x86 上不能禁用的基于段的保护,因此你需要去理解这些基础知识。 - -### 虚拟地址、线性地址和物理地址 - -在 x86 的专用术语中,一个虚拟地址是由一个段选择器和在段中的偏移量组成。一个线性地址是在页面转换之前、段转换之后得到的一个地址。一个物理地址是段和页面转换之后得到的最终地址,它最终将进入你的物理内存中的硬件总线。 - -![屏幕快照 2018-09-04 11.22.20](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg) - -一个 C 指针是虚拟地址的“偏移量”部分。在 `boot/boot.S` 中我们安装了一个全局描述符表(GDT),它通过设置所有的段基址为 0,并且限制为 `0xffffffff` 来有效地禁用段转换。因此“段选择器”并不会生效,而线性地址总是等于虚拟地址的偏移量。在实验 3 
中,为了设置权限级别,我们将与段有更多的交互。但是对于内存转换,我们将在整个 JOS 实验中忽略段,只专注于页转换。 - -回顾实验 1 中的第 3 部分,我们安装了一个简单的页表,这样内核就可以在 0xf0100000 链接的地址上运行,尽管它实际上是加载在 0x00100000 处的 ROM BIOS 的物理内存上。这个页表仅映射了 4MB 的内存。在实验中,你将要为 JOS 去设置虚拟内存布局,我们将从虚拟地址 0xf0000000 处开始扩展它,首先将物理内存扩展到 256MB,并映射许多其它区域的虚拟内存。 - -> 练习 3 -> -> 虽然 GDB 能够通过虚拟地址访问 QEMU 的内存,它经常用于在配置虚拟内存期间检查物理内存。在实验工具指南中复习 QEMU 的监视器命令,尤其是 `xp` 命令,它可以让你去检查物理内存。访问 QEMU 监视器,可以在终端中按 `Ctrl-a c`(相同的绑定返回到串行控制台)。 -> -> 使用 QEMU 监视器的 `xp` 命令和 GDB 的 `x` 命令去检查相应的物理内存和虚拟内存,以确保你看到的是相同的数据。 -> -> 我们的打过补丁的 QEMU 版本提供一个非常有用的 `info pg` 命令:它可以展示当前页表的一个简单描述,包括所有已映射的内存范围、权限、以及标志。Stock QEMU 也提供一个 `info mem` 命令用于去展示一个概要信息,这个信息包含了已映射的虚拟内存范围和使用了什么权限。 - -在 CPU 上运行的代码,一旦处于保护模式(这是在 boot/boot.S 中所做的第一件事情)中,是没有办法去直接使用一个线性地址或物理地址的。所有的内存引用都被解释为虚拟地址,然后由 MMU 来转换,这意味着在 C 语言中的指针都是虚拟地址。 - -例如在物理内存分配器中,JOS 内存经常需要在不反向引用的情况下,去维护作为地址的一个很难懂的值或一个整数。有时它们是虚拟地址,而有时是物理地址。为便于在代码中证明,JOS 源文件中将它们区分为两种:类型 `uintptr_t` 表示一个难懂的虚拟地址,而类型 `physaddr_trepresents` 表示物理地址。这些类型其实不过是 32 位整数(uint32_t)的同义词,因此编译器不会阻止你将一个类型的数据指派为另一个类型!因为它们都是整数(而不是指针)类型,如果你想去反向引用它们,编译器将报错。 - -JOS 内核能够通过将它转换为指针类型的方式来反向引用一个 `uintptr_t` 类型。相反,内核不能反向引用一个物理地址,因为这是由 MMU 来转换所有的内存引用。如果你转换一个 `physaddr_t` 为一个指针类型,并反向引用它,你或许能够加载和存储最终结果地址(硬件将它解释为一个虚拟地址),但你并不会取得你想要的内存位置。 - -总结如下: - -| C type | Address type | -| ------------ | ------------ | -| `T*` | Virtual | -| `uintptr_t` | Virtual | -| `physaddr_t` | Physical | - ->问题: -> ->假设下面的 JOS 内核代码是正确的,那么变量 `x` 应该是什么类型?uintptr_t 还是 physaddr_t ? -> ->![屏幕快照 2018-09-04 11.48.54](https://ws3.sinaimg.cn/large/0069RVTdly1fuxgrbkqd3j30m302bmxc.jpg) -> - -JOS 内核有时需要去读取或修改它知道物理地址的内存。例如,添加一个映射到页表,可以要求分配物理内存去存储一个页目录,然后去初始化它们。然而,内核也和其它的软件一样,并不能跳过虚拟地址转换,内核并不能直接加载和存储物理地址。一个原因是 JOS 将重映射从虚拟地址 0xf0000000 处物理地址 0 开始的所有的物理地址,以帮助内核去读取和写入它知道物理地址的内存。为转换一个物理地址为一个内核能够真正进行读写操作的虚拟地址,内核必须添加 0xf0000000 到物理地址以找到在重映射区域中相应的虚拟地址。你应该使用 KADDR(pa) 去做那个添加操作。 - -JOS 内核有时也需要能够通过给定的内核数据结构中存储的虚拟地址找到内存中的物理地址。内核全局变量和通过 `boot_alloc()` 分配的内存是加载到内核的这些区域中,从 0xf0000000 处开始,到全部物理内存映射的区域。因此,在这些区域中转变一个虚拟地址为物理地址时,内核能够只是简单地减去 0xf0000000 即可得到物理地址。你应该使用 PADDR(va) 去做那个减法操作。 - -### 引用计数 - -在以后的实验中,你将会经常遇到多个虚拟地址(或多个环境下的地址空间中)同时映射到相同的物理页面上。你将在 PageInfo 数据结构中用 pp_ref 字段来提供一个引用到每个物理页面的计数器。如果一个物理页面的这个计数器为 0,表示这个页面已经被释放,因为它不再被使用了。一般情况下,这个计数器应该等于相应的物理页面出现在所有页表下面的 UTOP 的次数(UTOP 上面的映射大都是在引导时由内核设置的,并且它从不会被释放,因此不需要引用计数器)。我们也使用它去跟踪到页目录的指针数量,反过来就是,页目录到页表的数量。 - -使用 `page_alloc` 时要注意。它返回的页面引用计数总是为 0,因此,一旦对返回页做了一些操作(比如将它插入到页表),`pp_ref` 就应该增加。有时这需要通过其它函数(比如,`page_instert`)来处理,而有时这个函数是直接调用 `page_alloc` 来做的。 - -### 页表管理 - -现在,你将写一套管理页表的代码:去插入和删除线性地址到物理地址的映射表,并且在需要的时候去创建页表。 - -> 练习 4 -> -> 在文件 `kern/pmap.c` 中,你必须去实现下列函数的代码。 -> -> pgdir_walk() -> -> boot_map_region() -> -> page_lookup() -> -> page_remove() -> -> page_insert() -> -> `check_page()`,调用 `mem_init()`,测试你的页表管理动作。在进行下一步流程之前你应该确保它成功运行。 - -### 第 3 部分:内核地址空间 - -JOS 分割处理器的 32 位线性地址空间为两部分:用户环境(进程),我们将在实验 3 中开始加载和运行,它将控制其上的布局和低位部分的内容,而内核总是维护对高位部分的完全控制。线性地址的定义是在 `inc/memlayout.h` 中通过符号 ULIM 来划分的,它为内核保留了大约 256MB 的虚拟地址空间。这就解释了为什么我们要在实验 1 中给内核这样的一个高位链接地址的原因:如是不这样做的话,内核的虚拟地址空间将没有足够的空间去同时映射到下面的用户空间中。 - -你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。 - -### 权限和故障隔离 - -由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。 - -对于 ULIM 以上部分的内存,用户环境没有任何权限,只有内核才可以读取和写入这部分内存。对于 [UTOP,ULIM] 地址范围,内核和用户都有相同的权限:它们可以读取但不能写入这个地址范围。这个地址范围是用于向用户环境暴露某些只读的内核数据结构。最后,低于 UTOP 的地址空间是为用户环境所使用的;用户环境将为访问这些内核设置权限。 - -### 初始化内核地址空间 - -现在,你将去配置 UTOP 以上的地址空间:内核部分的地址空间。`inc/memlayout.h` 中展示了你将要使用的布局。我将使用函数去写相关的线性地址到物理地址的映射配置。 - -> 练习 5 -> -> 完成调用 `check_page()` 之后在 `mem_init()` 中缺失的代码。 - 
-现在,你的代码应该通过了 `check_kern_pgdir()` 和 `check_page_installed_pgdir()` 的检查。 - -> 问题: -> -> ​ 1、在这个时刻,页目录中的条目(行)是什么?它们映射的址址是什么?以及它们映射到哪里了?换句话说就是,尽可能多地填写这个表: -> -> EntryBase Virtual AddressPoints to (logically): -> -> 1023 ? Page table for top 4MB of phys memory -> -> 1022 ? ? -> -> . ? ? -> -> . ? ? -> -> . ? ? -> -> 2 0x00800000 ? -> -> 1 0x00400000 ? -> -> 0 0x00000000 [see next question] -> -> ​ 2、(来自课程 3) 我们将内核和用户环境放在相同的地址空间中。为什么用户程序不能去读取和写入内核的内存?有什么特殊机制保护内核内存? -> -> ​ 3、这个操作系统能够支持的最大的物理内存数量是多少?为什么? -> -> ​ 4、我们真实地拥有最大数量的物理内存吗?管理内存的开销有多少?这个开销可以减少吗? -> -> ​ 5、复习在 `kern/entry.S` 和 `kern/entrypgdir.c` 中的页表设置。一旦我们打开分页,EIP 中是一个很小的数字(稍大于 1MB)。在什么情况下,我们转而去运行在 KERNBASE 之上的一个 EIP?当我们启用分页并开始在 KERNBASE 之上运行一个 EIP 时,是什么让我们能够持续运行一个很低的 EIP?为什么这种转变是必需的? - -### 地址空间布局的其它选择 - -在 JOS 中我们使用的地址空间布局并不是我们唯一的选择。一个操作系统可以在低位的线性地址上映射内核,而为用户进程保留线性地址的高位部分。然而,x86 内核一般并不采用这种方法,而 x86 向后兼容模式是不这样做的其中一个原因,这种模式被称为“虚拟 8086 模式”,处理器使用线性地址空间的最下面部分是“不可改变的”,所以,如果内核被映射到这里是根本无法使用的。 - -虽然很困难,但是设计这样的内核是有这种可能的,即:不为处理器自身保留任何固定的线性地址或虚拟地址空间,而有效地允许用户级进程不受限制地使用整个 4GB 的虚拟地址空间 —— 同时还要在这些进程之间充分保护内核以及不同的进程之间相互受保护! - -将内核的内存分配系统进行概括类推,以支持二次幂为单位的各种页大小,从 4KB 到一些你选择的合理的最大值。你务必要有一些方法,将较大的分配单位按需分割为一些较小的单位,以及在需要时,将多个较小的分配单位合并为一个较大的分配单位。想一想在这样的一个系统中可能会出现些什么样的问题。 - -这个实验做完了。确保你通过了所有的等级测试,并记得在 `answers-lab2.txt` 中写下你对上述问题的答案。提交你的改变(包括添加 `answers-lab2.txt` 文件),并在 `lab` 目录下输入 `make handin` 去提交你的实验。 - ------- - -via: - -作者:[Mit][] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/%E6%A0%A1%E5%AF%B9%E8%80%85ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file diff --git a/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md new file mode 100644 index 0000000000..6b5db8b104 --- /dev/null +++ b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 11 Input02.md @@ -0,0 +1,911 @@ +[#]: collector: (lujun9972) +[#]: translator: (guevaraya ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 11 Input02) +[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html) +[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk) + +计算机实验室 – 树莓派开发: 课程11 输入02 +====== + +课程输入02是以课程输入01基础讲解的,通过一个简单的命令行实现用户的命令输入和计算机的处理和显示。本文假设你已经具备 [课程11:输入01][1] 的操作系统代码基础。 + +### 1 终端 + +``` +早期的计算一般是在一栋楼里的一个巨型计算机系统,他有很多可以输命令的'终端'。计算机依次执行不同来源的命令。 +``` + +几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易,健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。 + +让我们分析下真正想要哪些信息: + +1. 计算机打开后,显示欢迎信息 +2. 计算机启动后可以接受输入标志 +3. 用户从键盘输入带参数的命令 +4. 用户输入回车键或提交按钮 +5. 计算机解析命令后执行可用的命令 +6. 计算机显示命令的执行结果,过程信息 +7. 
循环跳转到步骤2 + + +这样的终端被定义为标准的输入输出设备。用于输入的屏幕和输出打印的屏幕是同一个。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用方法 DrawCharacter 把其推送到屏幕上。一旦我们需要字符显示,就只需要在屏幕上画出一行字符串。 + +新建文件名为 terminal.s 如下: +``` +.section .data +.align 4 +terminalStart: +.int terminalBuffer +terminalStop: +.int terminalBuffer +terminalView: +.int terminalBuffer +terminalColour: +.byte 0xf +.align 8 +terminalBuffer: +.rept 128*128 +.byte 0x7f +.byte 0x0 +.endr +terminalScreen: +.rept 1024/8 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 768/16 +.byte 0x7f +.byte 0x0 +.endr +``` +这是文件终端的配置数据文件。我们有两个主要的存储变量:terminalBuffer 和 terminalScreen。terminalBuffer保存所有显示过的字符。它保存128行字符文本(1行包含128个字符)。每个字符有一个 ASCII 字符和颜色单元组成,初始值为0x7f(ASCII的删除键)和 0(前景色和背景色为黑)。terminalScreen 保存当前屏幕显示的字符。它保存128x48的字符,与 terminalBuffer 初始化值一样。你可能会想我仅需要terminalScreen就够了,为什么还要terminalBuffer,其实有两个好处: + + 1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。 + 2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制) + + +你总是需要尝试去设计一个高效的系统,如果很少变化的条件这个系统会运行的更快。 + +独特的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行这个操作。在这个系统里,我们可以任意改变terminalBuffer,然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符,这样可以节省一大段跨行文本的操作时间。 + +其他在 .data 段的值得含义如下: + + * terminalStart + 写入到 terminalBuffer 的第一个字符 + * terminalStop + 写入到 terminalBuffer 的最后一个字符 + * terminalView + 表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕 + * temrinalColour + 即将被描画的字符颜色 + + +``` +循环缓冲区是**数据结构**一个例子。这是一个组织数据的思路,有时我们通过软件实现这种思路。 +``` + +![显示 Hellow world 插入到大小为5的循环缓冲区的示意图。][2] +terminalStart 需要保存起来的原因是 termainlBuffer 是一个循环缓冲区。意思是当缓冲区变满时,末尾地方会回滚覆盖开始位置,这样最后一个字符变成了第一个字符。因此我们需要将 terminalStart 往前推进,这样我们知道我们已经占满它了。如何实现缓冲区检测:如果索引越界到缓冲区的末尾,就将索引指向缓冲区的开始位置。循环缓冲区是一个比较常见的高明的存储大量数据的方法,往往这些数据的最近部分比较重要。它允许无限制的写入,只保证最近一些特定数据有效。这个常常用于信号处理和数据压缩算法。这样的情况,可以允许我们存储128行终端记录,超过128行也不会有问题。如果不是这样,当超过第128行时,我们需要把127行分别向前拷贝一次,这样很浪费时间。 + +之前已经提到过 terminalColour 几次了。你可以根据你的想法实现终端颜色,但这个文本终端有16个前景色和16个背景色(这里相当于有16²=256种组合)。[CGA][3]终端的颜色定义如下: + +表格 1.1 - CGA 颜色编码 + +| 序号 | 颜色 (R, G, B) | +| ------ | ------------------------| +| 0 | 黑 (0, 0, 0) | +| 1 | 蓝 (0, 0, ⅔) | +| 2 | 绿 (0, ⅔, 0) | +| 3 | 青色 (0, ⅔, ⅔) | +| 4 | 红色 (⅔, 0, 0) | +| 5 | 品红 (⅔, 0, ⅔) | +| 6 | 棕色 (⅔, ⅓, 0) | +| 7 | 浅灰色 (⅔, ⅔, ⅔) | +| 8 | 灰色 (⅓, ⅓, ⅓) | +| 9 | 淡蓝色 (⅓, ⅓, 1) | +| 10 | 淡绿色 (⅓, 1, ⅓) | +| 11 | 淡青色 (⅓, 1, 1) | +| 12 | 淡红色 (1, ⅓, ⅓) | +| 13 | 浅品红 (1, ⅓, 1) | +| 14 | 黄色 (1, 1, ⅓) | +| 15 | 白色 (1, 1, 1) | + +``` +棕色作为替代色(黑黄色)既不吸引人也没有什么用处。 +``` +我们将前景色保存到颜色的低字节,背景色保存到颜色高字节。除过棕色,其他这些颜色遵循一种模式如二进制的高位比特代表增加 ⅓ 到每个组件,其他比特代表增加⅔到各自组件。这样很容易进行RGB颜色转换。 + +我们需要一个方法从TerminalColour读取颜色编码的四个比特,然后用16比特等效参数调用 SetForeColour。尝试实现你自己实现。如果你感觉麻烦或者还没有完成屏幕系列课程,我们的实现如下: + +``` +.section .text +TerminalColour: +teq r0,#6 +ldreq r0,=0x02B5 +beq SetForeColour + +tst r0,#0b1000 +ldrne r1,=0x52AA +moveq r1,#0 +tst r0,#0b0100 +addne r1,#0x15 +tst r0,#0b0010 +addne r1,#0x540 +tst r0,#0b0001 +addne r1,#0xA800 +mov r0,r1 +b SetForeColour +``` +### 2 文本显示 + +我们的终端第一个真正需要的方法是 TerminalDisplay,它用来把当前的数据从 terminalBuffe r拷贝到 terminalScreen 和实际的屏幕。如上所述,这个方法必须是最小开销的操作,因为我们需要频繁调用它。它主要比较 terminalBuffer 与 terminalDisplay的文本,然后只拷贝有差异的字节。请记住 terminalBuffer 是循环缓冲区运行的,这种情况,从 terminalView 到 terminalStop 或者 128*48 字符集,哪个来的最快。如果我们遇到 terminalStop,我们将会假定在这之后的所有字符是7f16 (ASCII delete),背景色为0(黑色前景色和背景色)。 + +让我们看看必须要做的事情: + + 1. 加载 terminalView ,terminalStop 和 terminalDisplay 的地址。 + 2. 执行每一行: + 1. 执行每一列: + 1. 如果 terminalView 不等于 terminalStop,根据 terminalView 加载当前字符和颜色 + 2. 否则加载 0x7f 和颜色 0 + 3. 从 terminalDisplay 加载当前的字符 + 4. 如果字符和颜色相同,直接跳转到10 + 5. 存储字符和颜色到 terminalDisplay + 6. 用 r0 作为背景色参数调用 TerminalColour + 7. 
用 r0 = 0x7f (ASCII 删除键, 一大块), r1 = x, r2 = y 调用 DrawCharacter + 8. 用 r0 作为前景色参数调用 TerminalColour + 9. 用 r0 = 字符, r1 = x, r2 = y 调用 DrawCharacter + 10. 对位置参数 terminalDisplay 累加2 + 11. 如果 terminalView 不等于 terminalStop不能相等 terminalView 位置参数累加2 + 12. 如果 terminalView 位置已经是文件缓冲器的末尾,将他设置为缓冲区的开始位置 + 13. x 坐标增加8 + 2. y 坐标增加16 + + +Try to implement this yourself. If you get stuck, my solution is given below: +尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了: + +1. +``` +.globl TerminalDisplay +TerminalDisplay: +push {r4,r5,r6,r7,r8,r9,r10,r11,lr} +x .req r4 +y .req r5 +char .req r6 +col .req r7 +screen .req r8 +taddr .req r9 +view .req r10 +stop .req r11 + +ldr taddr,=terminalStart +ldr view,[taddr,#terminalView - terminalStart] +ldr stop,[taddr,#terminalStop - terminalStart] +add taddr,#terminalBuffer - terminalStart +add taddr,#128*128*2 +mov screen,taddr +``` + +我这里的变量有点乱。为了方便起见,我用 taddr 存储 textBuffer 的末尾位置。 + +2. +``` +mov y,#0 +yLoop$: +``` +从yLoop开始运行。 + + 1. + ``` + mov x,#0 + xLoop$: + ``` + 从yLoop开始运行。 + + 1. + ``` + teq view,stop + ldrneh char,[view] + ``` + 为了方便起见,我把字符和颜色同时加载到 char 变量了 + + 2. + ``` + moveq char,#0x7f + ``` + 这行是对上面一行的补充说明:读取黑色的Delete键 + + 3. + ``` + ldrh col,[screen] + ``` + 为了简便我把字符和颜色同时加载到 col 里。 + + 4. + ``` + teq col,char + beq xLoopContinue$ + ``` + 现在我用teq指令检查是否有数据变化 + + 5. + ``` + strh char,[screen] + ``` + 我可以容易的保存当前值 + + 6. + ``` + lsr col,char,#8 + and char,#0x7f + lsr r0,col,#4 + bl TerminalColour + ``` + 我用 bitshift(比特偏移) 指令和 and 指令从 char 变量中分离出颜色到 col 变量和字符到 char变量,然后再用bitshift(比特偏移)指令后调用TerminalColour 获取背景色。 + + 7. + ``` + mov r0,#0x7f + mov r1,x + mov r2,y + bl DrawCharacter + ``` + 写入一个彩色的删除字符块 + + 8. + ``` + and r0,col,#0xf + bl TerminalColour + ``` + 用 and 指令获取 col 变量的最低字节,然后调用TerminalColour + + 9. + ``` + mov r0,char + mov r1,x + mov r2,y + bl DrawCharacter + ``` + 写入我们需要的字符 + + 10. + ``` + xLoopContinue$: + add screen,#2 + ``` + 自增屏幕指针 + + 11. + ``` + teq view,stop + addne view,#2 + ``` + 如果可能自增view指针 + + 12. + ``` + teq view,taddr + subeq view,#128*128*2 + ``` + 很容易检测 view指针是否越界到缓冲区的末尾,因为缓冲区的地址保存在 taddr 变量里 + + 13. + ``` + add x,#8 + teq x,#1024 + bne xLoop$ + ``` + 如果还有字符需要显示,我们就需要自增 x 变量然后循环到 xLoop 执行 + + 2. + ``` + add y,#16 + teq y,#768 + bne yLoop$ + ``` + 如果还有更多的字符显示我们就需要自增 y 变量,然后循环到 yLoop 执行 + +``` +pop {r4,r5,r6,r7,r8,r9,r10,r11,pc} +.unreq x +.unreq y +.unreq char +.unreq col +.unreq screen +.unreq taddr +.unreq view +.unreq stop +``` +不要忘记最后清除变量 + + +### 3 行打印 + +现在我有了自己 TerminalDisplay方法,它可以自动显示 terminalBuffer 到 terminalScreen,因此理论上我们可以画出文本。但是实际上我们没有任何基于字符显示的实例。 首先快速容易上手的方法便是 TerminalClear, 它可以彻底清除终端。这个方法没有循环很容易实现。可以尝试分析下面的方法应该不难: + +``` +.globl TerminalClear +TerminalClear: +ldr r0,=terminalStart +add r1,r0,#terminalBuffer-terminalStart +str r1,[r0] +str r1,[r0,#terminalStop-terminalStart] +str r1,[r0,#terminalView-terminalStart] +mov pc,lr +``` + +现在我们需要构造一个字符显示的基础方法:打印函数。它将保存在 r0 的字符串和 保存在 r1 字符串长度简易的写到屏幕上。有一些特定字符需要特别的注意,这些特定的操作是确保 terminalView 是最新的。我们来分析一下需要做啥: + + 1. 检查字符串的长度是否为0,如果是就直接返回 + 2. 加载 terminalStop 和 terminalView + 3. 计算出 terminalStop 的 x 坐标 + 4. 对每一个字符的操作: + 1. 检查字符是否为新起一行 + 2. 如果是的话,自增 bufferStop 到行末,同时写入黑色删除键 + 3. 否则拷贝当前 terminalColour 的字符 + 4. 加成是在行末 + 5. 如果是,检查从 terminalView 到 terminalStop 之间的字符数是否大于一屏 + 6. 如果是,terminalView 自增一行 + 7. 检查 terminalView 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置 + 8. 检查 terminalStop 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置 + 9. 检查 terminalStop 是否等于 terminalStart, 如果是的话 terminalStart 自增一行。 + 10. 检查 terminalStart 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置 + 5. 存取 terminalStop 和 terminalView + + +试一下自己去实现。我们的方案提供如下: + +1. 
+``` +.globl Print +Print: +teq r1,#0 +moveq pc,lr +``` +这个是打印函数开始快速检查字符串为0的代码 + +2. +``` +push {r4,r5,r6,r7,r8,r9,r10,r11,lr} +bufferStart .req r4 +taddr .req r5 +x .req r6 +string .req r7 +length .req r8 +char .req r9 +bufferStop .req r10 +view .req r11 + +mov string,r0 +mov length,r1 + +ldr taddr,=terminalStart +ldr bufferStop,[taddr,#terminalStop-terminalStart] +ldr view,[taddr,#terminalView-terminalStart] +ldr bufferStart,[taddr] +add taddr,#terminalBuffer-terminalStart +add taddr,#128*128*2 +``` + +这里我做了很多配置。 bufferStart 代表 terminalStart, bufferStop代表terminalStop, view 代表 terminalView,taddr 代表 terminalBuffer 的末尾地址。 + +3. +``` +and x,bufferStop,#0xfe +lsr x,#1 +``` +和通常一样,巧妙的对齐技巧让许多事情更容易。由于需要对齐 terminalBuffer,每个字符的 x 坐标需要8位要除以2。 + + 4. + 1. + ``` + charLoop$: + ldrb char,[string] + and char,#0x7f + teq char,#'\n' + bne charNormal$ + ``` + 我们需要检查新行 + + 2. + ``` + mov r0,#0x7f + clearLine$: + strh r0,[bufferStop] + add bufferStop,#2 + add x,#1 + teq x,#128 blt clearLine$ + + b charLoopContinue$ + ``` + 循环执行值到行末写入 0x7f;黑色删除键 + + 3. + ``` + charNormal$: + strb char,[bufferStop] + ldr r0,=terminalColour + ldrb r0,[r0] + strb r0,[bufferStop,#1] + add bufferStop,#2 + add x,#1 + ``` + 存储字符串的当前字符和 terminalBuffer 末尾的 terminalColour然后将它和 x 变量自增 + + 4. + ``` + charLoopContinue$: + cmp x,#128 + blt noScroll$ + ``` + 检查 x 是否为行末;128 + + 5. + ``` + mov x,#0 + subs r0,bufferStop,view + addlt r0,#128*128*2 + cmp r0,#128*(768/16)*2 + ``` + 这是 x 为 0 然后检查我们是否已经显示超过1屏。请记住,我们是用的循环缓冲区,因此如果 bufferStop 和 view 之前差是负值,我们实际使用是环绕缓冲区。 + + 6. + ``` + addge view,#128*2 + ``` + 增加一行字节到 view 的地址 + + 7. + ``` + teq view,taddr + subeq view,taddr,#128*128*2 + ``` + 如果 view 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。 + + 8. + ``` + noScroll$: + teq bufferStop,taddr + subeq bufferStop,taddr,#128*128*2 + ``` + 如果 stop 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。 + + 9. + ``` + teq bufferStop,bufferStart + addeq bufferStart,#128*2 + ``` + 检查 bufferStop 是否等于 bufferStart。 如果等于增加一行到 bufferStart。 + + 10. + ``` + teq bufferStart,taddr + subeq bufferStart,taddr,#128*128*2 + ``` + 如果 start 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。 + +``` +subs length,#1 +add string,#1 +bgt charLoop$ +``` +循环执行知道字符串结束 + +5. +``` +charLoopBreak$: +sub taddr,#128*128*2 +sub taddr,#terminalBuffer-terminalStart +str bufferStop,[taddr,#terminalStop-terminalStart] +str view,[taddr,#terminalView-terminalStart] +str bufferStart,[taddr] + +pop {r4,r5,r6,r7,r8,r9,r10,r11,pc} +.unreq bufferStart +.unreq taddr +.unreq x +.unreq string +.unreq length +.unreq char +.unreq bufferStop +.unreq view +``` +保存变量然后返回 + + +这个方法允许我们打印任意字符到屏幕。然而我们用了颜色变量,但实际上没有设置它。一般终端用特性的组合字符去行修改颜色。如ASCII转移(1b16)后面跟着一个0-f的16进制的书,就可以设置前景色为 CGA颜色。如果你自己想尝试实现;在下载页面有一个我的详细的例子。 + + +### 4 标志输入 + +``` +按照惯例,许多编程语言中,任意程序可以访问 stdin 和 stdin,他们可以连接到终端的输入和输出流。在图形程序其实也可以进行同样操作,但实际几乎不用。 +``` + +现在我们有一个可以打印和显示文本的输出终端。这仅仅是说了一半,我们需要输入。我们想实现一个方法:Readline,可以保存文件的一行文本,文本位置有 r0 给出,最大的长度由 r1 给出,返回 r0 里面的字符串长度。棘手的是用户输出字符的时候要回显功能,同时想要退格键的删除功能和命令回车执行功能。他们还想需要一个闪烁的下划线代表计算机需要输入。这些完全合理的要求让构造这个方法更具有挑战性。有一个方法完成这些需求就是存储用户输入的文本和文件大小到内存的某个地方。然后当调用 ReadLine 的时候,移动 terminalStop 的地址到它开始的地方然后调用 Print。也就是说我们只需要确保在内存维护一个字符串,然后构造一个我们自己的打印函数。 + +让我们看看 ReadLine做了哪些事情: + + 1. 如果字符串可保存的最大长度为0,直接返回 + 2. 检索 terminalStop 和 terminalStop 的当前值 + 3. 如果字符串的最大长度大约缓冲区的一半,就设置大小为缓冲区的一半 + 4. 从最大长度里面减去1来确保输入的闪烁字符或结束符 + 5. 向字符串写入一个下划线 + 6. 写入一个 terminalView 和 terminalStop 的地址到内存 + 7. 调用 Print 大约当前字符串 + 8. 调用 TerminalDisplay + 9. 调用 KeyboardUpdate + 10. 调用 KeyboardGetChar + 11. 
如果为一个新行直接跳转到16 + 12. 如果是一个退格键,将字符串长度减一(如果其大约0) + 13. 如果是一个普通字符,将他写入字符串(字符串大小确保小于最大值) + 14. 如果字符串是以下划线结束,写入一个空格,否则写入下划线 + 15. 跳转到6 + 16. 字符串的末尾写入一个新行 + 17. 调用 Print 和 TerminalDisplay + 18. 用结束符替换新行 + 19. 返回字符串的长度 + + + +为了方便读者理解,然后然后自己去实现,我们的实现提供如下: + +1. +``` +.globl ReadLine +ReadLine: +teq r1,#0 +moveq r0,#0 +moveq pc,lr +``` +快速处理长度为0的情况 + +2. +``` +string .req r4 +maxLength .req r5 +input .req r6 +taddr .req r7 +length .req r8 +view .req r9 + +push {r4,r5,r6,r7,r8,r9,lr} + +mov string,r0 +mov maxLength,r1 +ldr taddr,=terminalStart +ldr input,[taddr,#terminalStop-terminalStart] +ldr view,[taddr,#terminalView-terminalStart] +mov length,#0 +``` +考虑到常见的场景,我们初期做了很多初始化动作。input 代表 terminalStop 的值,view 代表 terminalView。Length 默认为 0. + +3. +``` +cmp maxLength,#128*64 +movhi maxLength,#128*64 +``` +我们必须检查异常大的读操作,我们不能处理超过 terminalBuffer 大小的输入(理论上可行,但是terminalStart 移动越过存储的terminalStop,会有很多问题)。 + +4. +``` +sub maxLength,#1 +``` +由于用户需要一个闪烁的光标,我们需要一个备用字符在理想状况在这个字符串后面放一个结束符。 + +5. +``` +mov r0,#'_' +strb r0,[string,length] +``` +写入一个下划线让用户知道我们可以输入了。 + +6. +``` +readLoop$: +str input,[taddr,#terminalStop-terminalStart] +str view,[taddr,#terminalView-terminalStart] +``` +保存 terminalStop 和 terminalView。这个对重置一个终端很重要,它会修改这些变量。严格讲也可以修改 terminalStart,但是不可逆。 + +7. +``` +mov r0,string +mov r1,length +add r1,#1 +bl Print +``` +写入当前的输入。由于下划线因此字符串长度加1 +8. +``` +bl TerminalDisplay +``` +拷贝下一个文本到屏幕 + +9. +``` +bl KeyboardUpdate +``` +获取最近一次键盘输入 + +10. +``` +bl KeyboardGetChar +``` +检索键盘输入键值 + +11. +``` +teq r0,#'\n' +beq readLoopBreak$ +teq r0,#0 +beq cursor$ +teq r0,#'\b' +bne standard$ +``` + +如果我们有一个回车键,循环中断。如果有结束符和一个退格键也会同样跳出选好。 + +12. +``` +delete$: +cmp length,#0 +subgt length,#1 +b cursor$ +``` +从 length 里面删除一个字符 + +13. +``` +standard$: +cmp length,maxLength +bge cursor$ +strb r0,[string,length] +add length,#1 +``` +写回一个普通字符 + +14. +``` +cursor$: +ldrb r0,[string,length] +teq r0,#'_' +moveq r0,#' ' +movne r0,#'_' +strb r0,[string,length] +``` +加载最近的一个字符,如果不是下换线则修改为下换线,如果是空格则修改为下划线 + +15. +``` +b readLoop$ +readLoopBreak$: +``` +循环执行值到用户输入按下 + +16. +``` +mov r0,#'\n' +strb r0,[string,length] +``` +在字符串的结尾处存入一新行 + +17. +``` +str input,[taddr,#terminalStop-terminalStart] +str view,[taddr,#terminalView-terminalStart] +mov r0,string +mov r1,length +add r1,#1 +bl Print +bl TerminalDisplay +``` +重置 terminalView 和 terminalStop 然后调用 Print 和 TerminalDisplay 输入回显 + + +18. +``` +mov r0,#0 +strb r0,[string,length] +``` +写入一个结束符 + +19. 
+``` +mov r0,length +pop {r4,r5,r6,r7,r8,r9,pc} +.unreq string +.unreq maxLength +.unreq input +.unreq taddr +.unreq length +.unreq view +``` +返回长度 + + + + +### 5 终端: 机器进化 + +现在我们理论用终端和用户可以交互了。最显而易见的事情就是拿去测试了!在 'main.s' 里UsbInitialise后面的删除代码如下 + +``` +reset$: + mov sp,#0x8000 + bl TerminalClear + + ldr r0,=welcome + mov r1,#welcomeEnd-welcome + bl Print + +loop$: + ldr r0,=prompt + mov r1,#promptEnd-prompt + bl Print + + ldr r0,=command + mov r1,#commandEnd-command + bl ReadLine + + teq r0,#0 + beq loopContinue$ + + mov r4,r0 + + ldr r5,=command + ldr r6,=commandTable + + ldr r7,[r6,#0] + ldr r9,[r6,#4] + commandLoop$: + ldr r8,[r6,#8] + sub r1,r8,r7 + + cmp r1,r4 + bgt commandLoopContinue$ + + mov r0,#0 + commandName$: + ldrb r2,[r5,r0] + ldrb r3,[r7,r0] + teq r2,r3 + bne commandLoopContinue$ + add r0,#1 + teq r0,r1 + bne commandName$ + + ldrb r2,[r5,r0] + teq r2,#0 + teqne r2,#' ' + bne commandLoopContinue$ + + mov r0,r5 + mov r1,r4 + mov lr,pc + mov pc,r9 + b loopContinue$ + + commandLoopContinue$: + add r6,#8 + mov r7,r8 + ldr r9,[r6,#4] + teq r9,#0 + bne commandLoop$ + + ldr r0,=commandUnknown + mov r1,#commandUnknownEnd-commandUnknown + ldr r2,=formatBuffer + ldr r3,=command + bl FormatString + + mov r1,r0 + ldr r0,=formatBuffer + bl Print + +loopContinue$: + bl TerminalDisplay + b loop$ + +echo: + cmp r1,#5 + movle pc,lr + + add r0,#5 + sub r1,#5 + b Print + +ok: + teq r1,#5 + beq okOn$ + teq r1,#6 + beq okOff$ + mov pc,lr + + okOn$: + ldrb r2,[r0,#3] + teq r2,#'o' + ldreqb r2,[r0,#4] + teqeq r2,#'n' + movne pc,lr + mov r1,#0 + b okAct$ + + okOff$: + ldrb r2,[r0,#3] + teq r2,#'o' + ldreqb r2,[r0,#4] + teqeq r2,#'f' + ldreqb r2,[r0,#5] + teqeq r2,#'f' + movne pc,lr + mov r1,#1 + + okAct$: + + mov r0,#16 + b SetGpio + +.section .data +.align 2 +welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS" +welcomeEnd: +.align 2 +prompt: .ascii "\n> " +promptEnd: +.align 2 +command: + .rept 128 + .byte 0 + .endr +commandEnd: +.byte 0 +.align 2 +commandUnknown: .ascii "Command `%s' was not recognised.\n" +commandUnknownEnd: +.align 2 +formatBuffer: + .rept 256 + .byte 0 + .endr +formatEnd: + +.align 2 +commandStringEcho: .ascii "echo" +commandStringReset: .ascii "reset" +commandStringOk: .ascii "ok" +commandStringCls: .ascii "cls" +commandStringEnd: + +.align 2 +commandTable: +.int commandStringEcho, echo +.int commandStringReset, reset$ +.int commandStringOk, ok +.int commandStringCls, TerminalClear +.int commandStringEnd, 0 +``` +这块代码集成了一个简易的命令行操作系统。支持命令:echo,reset,ok 和 cls。echo 拷贝任意文本到终端,reset命令会在系统出现问题的是复位操作系统,ok 有两个功能:设置 OK 灯亮灭,最后 cls 调用 TerminalClear 清空终端。 + +试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。 + +如果运行正常,祝贺你完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至awc32@cam.ac.uk。 + +你已经在建立了一个简易的终端操作系统。我们的代码在 commandTable 构造了一个可用的命令表格。每个表格的入口是一个整型数字,用来表示字符串的地址,和一个整型数字表格代码的执行入口。 最后一个入口是 为 0 的commandStringEnd。尝试实现你自己的命令,可以参照已有的函数,建立一个新的。函数的参数 r0 是用户输入的命令地址,r1是其长度。你可以用这个传递你输入值到你的命令。也许你有一个计算器程序,或许是一个绘图程序或国际象棋。不管你的什么电子,让它跑起来! 
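为了帮助理解 commandTable 的结构和命令分发的逻辑,下面用一段 C 代码做一个等价的示意(仅为说明,这里的类型名、函数名都是假设的,并且为了简化使用了空终止符字符串;真正的实现就是上面的汇编,它是通过相邻命令名字符串地址之差来得到命令名长度的):

```c
/* 仅为示意:用 C 描述 commandTable 的查表与分发逻辑,
 * 类型与名字均为假设,真实实现见上面的汇编代码。 */
typedef void (*CommandHandler)(const char *cmd, unsigned int length); /* 对应 r0、r1 */

struct CommandEntry {
    const char *name;       /* 命令名字符串的地址 */
    CommandHandler handler; /* 命令的执行入口;为 0 表示表结束 */
};

static void Dispatch(const struct CommandEntry *table,
                     const char *cmd, unsigned int length)
{
    for (; table->handler != 0; ++table) {
        unsigned int n = 0;
        /* 逐字符比较命令名 */
        while (table->name[n] != '\0' && n < length && table->name[n] == cmd[n])
            ++n;
        /* 命令名完全匹配,且其后要么是输入结束、要么是一个空格,才算命中 */
        if (table->name[n] == '\0' && (n == length || cmd[n] == ' ')) {
            table->handler(cmd, length); /* 相当于汇编里的 mov pc,r9 */
            return;
        }
    }
    /* 没有命中任何命令:对应打印 "Command ... was not recognised." 的那个分支 */
}
```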
+ + +-------------------------------------------------------------------------------- + +via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html + +作者:[Alex Chadwick][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/guevaraya) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.cl.cam.ac.uk +[b]: https://github.com/lujun9972 +[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html +[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png +[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter diff --git a/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md new file mode 100644 index 0000000000..7f58f5da24 --- /dev/null +++ b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 8 Screen03.md @@ -0,0 +1,469 @@ +[#]: collector: (lujun9972) +[#]: translator: (qhwdw) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 8 Screen03) +[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html) +[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk) + +计算机实验室 – 树莓派:课程 8 屏幕03 +====== + +屏幕03 课程基于屏幕02 课程来构建,它教你如何绘制文本,和一个操作系统命令行参数上的一个小特性。假设你已经有了[课程 7:屏幕02][1] 的操作系统代码,我们将以它为基础来构建。 + +### 1、字符串的理论知识 + +是的,我们的任务是为这个操作系统绘制文本。我们有几个问题需要去处理,最紧急的那个可能是如何去保存文本。令人难以置信的是,文本是迄今为止在计算机上最大的缺陷之一。原本应该是简单的数据类型却导致了操作系统的崩溃,破坏了完美的加密,并给使用不同字母表的用户带来了许多问题。尽管如此,它仍然是极其重要的数据类型,因为它将计算机和用户很好地连接起来。文本是计算机能够理解的非常好的结构,同时人类使用它时也有足够的可读性。 + +``` +可变数据类型,比如文本要求能够进行很复杂的处理。 +``` + +那么,文本是如何保存的呢?非常简单,我们使用一种方法,给每个字母分配一个唯一的编号,然后我们保存一系列的这种编号。看起来很容易吧。问题是,那个编号的数字是不固定的。一些文本片断可能比其它的长。与保存普通数字一样,我们有一些固有的限制,即:3 位,我们不能超过这个限制,我们添加方法去使用那种长数字等等。“文本”这个术语,我们经常也叫它“字符串”,我们希望能够写一个可用于变长字符串的函数,否则就需要写很多函数!对于一般的数字来说,这不是个问题,因为只有几种通用的数字格式(字节、字、半字节、双字节)。 + +``` +缓冲区溢出攻击祸害计算机由来已久。最近,Wii、Xbox 和 Playstation 2、以及大型系统如 Microsoft 的 Web 和数据库服务器,都遭受到缓冲区溢出攻击。 +``` + +因此,如何判断字符串长度?我想显而易见的答案是存储多长的字符串,然后去存储组成字符串的字符。这称为长度前缀,因为长度位于字符串的前面。不幸的是,计算机科学家的先驱们不同意这么做。他们认为使用一个称为空终止符(NULL)的特殊字符(用 \0表示)来表示字符串结束更有意义。这样确定简化了许多字符串算法,因为你只需要持续操作直到遇到空终止符为止。不幸的是,这成为了许多安全问题的根源。如果一个恶意用户给你一个特别长的字符串会发生什么状况?如果没有足够的空间去保存这个特别长的字符串会发生什么状况?你可以使用一个字符串复制函数来做复制,直到遇到空终止符为止,但是因为字符串特别长,而覆写了你的程序,怎么办?这看上去似乎有些较真,但尽管如此,缓冲区溢出攻击还是经常发生。长度前缀可以很容易地缓解这种问题,因为它可以很容易地推算出保存这个字符串所需要的缓冲区的长度。作为一个操作系统开发者,我留下这个问题,由你去决定如何才能更好地存储文本。 + +接下来的事情是,我们需要去维护一个很好的从字符到数字的映射。幸运的是,这是高度标准化的,我们有两个主要的选择,Unicode 和 ASCII。Unicode 几乎将每个单个的有用的符号都映射为数字,作为交换,我们得到的是很多很多的数字,和一个更复杂的编码方法。ASCII 为每个字符使用一个字节,因此它仅保存拉丁字母、数字、少数符号和少数特殊字符。因此,ASCII 是非常易于实现的,与 Unicode 相比,它的每个字符占用的空间并不相同,这使得字符串算法更棘手。一般操作系统上字符使用 ASCII,并不是为了显示给最终用户的(开发者和专家用户除外),给终端用户显示信息使用 Unicode,因为 Unicode 能够支持像日语字符这样的东西,并且因此可以实现本地化。 + +幸运的是,在这里我们不需要去做选择,因为它们的前 128 个字符是完全相同的,并且编码也是完全一样的。 + +表 1.1 ASCII/Unicode 符号 0-127 + +| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | d | e | f | | +|----| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ----| +| 00 | NUL | SOH | STX | ETX | EOT | ENQ | ACK | BEL | BS | HT | LF | VT | FF | CR | SO | SI | | +| 10 | DLE | DC1 | DC2 | DC3 | DC4 | NAK | SYN | ETB | CAN | EM | SUB | ESC | FS | GS | RS | US | | +| 20 | ! | " | # | $ | % | & | . | ( | ) | * | + | , | - | . | / | | | +| 30 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | : | ; | < | = | > | ? 
| | +| 40 | @ | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | | +| 50 | P | Q | R | S | T | U | V | W | X | Y | Z | [ | \ | ] | ^ | _ | | +| 60 | ` | a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | | +| 70 | p | q | r | s | t | u | v | w | x | y | z | { | | | } | ~ | DEL | + +这个表显示了前 128 个符号。一个符号的十六进制表示是行的值加上列的值,比如 A 是 41~16~。你可以惊奇地发现前两行和最后的值。这 33 个特殊字符是不可打印字符。事实上,许多人都忽略了它们。它们之所以存在是因为 ASCII 最初设计是基于计算机网络来传输数据的一种方法。因此它要发送的信息不仅仅是符号。你应该学习的重要的特殊字符是 `NUL`,它就是我们前面提到的空终止符。`HT` 水平制表符就是我们经常说的 `tab`,而 `LF` 换行符用于生成一个新行。你可能想研究和使用其它特殊字符在你的操行系统中的意义。 + +### 2、字符 + +到目前为止,我们已经知道了一些关于字符串的知识,我们可以开始想想它们是如何显示的。为了显示一个字符串,我们需要做的最基础的事情是能够显示一个字符。我们的第一个任务是编写一个 `DrawCharacter` 函数,给它一个要绘制的字符和一个位置,然后它将这个字符绘制出来。 + +```markdown +在许多操作系统中使用的 `truetype` 字体格式是很强大的,它内置有它自己的汇编语言,以确保在任何分辨率下字母看起来都是正确的。 +``` + +这就很自然地引出关于字体的讨论。我们已经知道有许多方式去按照选定的字体去显示任何给定的字母。那么字体又是如何工作的呢?在计算机科学的早期阶段,一种字体就是所有字母的一系列小图片而已,这种字体称为位图字体,而所有的字符绘制方法就是将图片复制到屏幕上。当人们想去调整字体大小时就出问题了。有时我们需要大的字母,而有时我们需要的是小的字母。尽管我们可以为每个字体、每种大小、每个字符都绘制新图片,但这种作法过于单调乏味。所以,发明了矢量字体。矢量字体不包含字体的图像,它包含的是如何去绘制字符的描述,即:一个 `o` 可能是最大字母高度的一半为半径绘制的圆。现代操作系统都几乎仅使用这种字体,因为这种字体在任何分辨率下都很完美。 + +不幸的是,虽然我很想包含一个矢量字体的格式的实现,但它的内容太多了,将占用这个站点的剩余部分。所以,我们将去实现一个位图字体,可是,如果你想去做一个正宗的图形化的操作系统,那么矢量字体将是很有用的。 + +在下载页面上的字体节中,我们提供了几个 `.bin` 文件。这些只是字体的原始二进制数据文件。为完成本教程,从等宽、单色、8x16 节中挑选你喜欢的字体。然后下载它并保存到 `source` 目录中并命名为 `font.bin` 文件。这些文件只是每个字母的单色图片,它们每个字母刚好是 8 x 16 个像素。所以,每个字母占用 16 字节,第一个字节是第一行,第二个字节是第二行,依此类推。 + +![bitmap](https://ws2.sinaimg.cn/large/006tNc79ly1fzzb2064agj305l0apt96.jpg) + +这个示意图展示了等宽、单色、8x16 的字符 A 的 `Bitstream Vera Sans Mono`。在这个文件中,我们可以找到,它从第 41~16~ × 10~16~ = 410~16~ 字节开始的十六进制序列: + +00, 00, 00, 10, 28, 28, 28, 44, 44, 7C, C6, 82, 00, 00, 00, 00 + +在这里我们将使用等宽字体,因为等宽字体的每个字符大小是相同的。不幸的是,大多数字体的复杂之处就是因为它的宽度不同,从而导致它的显示代码更复杂。在下载页面上还包含有几个其它的字体,并包含了这种字体的存储格式介绍。 + +我们回到正题。复制下列代码到 `drawing.s` 中的 `graphicsAddress` 的 `.int 0` 之后。 + +```assembly +.align 4 +font: +.incbin "font.bin" +``` + +```assembly +.incbin "file" 插入来自文件 “file” 中的二进制数据。 +``` + +这段代码复制文件中的字体数据到标签为 `font` 的地址。我们在这里使用了一个 `.align 4` 去确保每个字符都是从 16 字节的倍数开始,这是一个以后经常用到的用于加快访问速度的技巧。 + +现在我们去写绘制字符的方法。我在下面给出了伪代码,你可以尝试自己去实现它。按惯例 `>>` 的意思是逻辑右移。 + +```c +function drawCharacter(r0 is character, r1 is x, r2 is y) + if character > 127 then exit + set charAddress to font + character × 16 + for row = 0 to 15 + set bits to readByte(charAddress + row) + for bit = 0 to 7 + if test(bits >> bit, 0x1) + then setPixel(x + bit, y + row) + next + next + return r0 = 8, r1 = 16 +end function + +``` +如果直接去实现它,这显然不是个高效率的做法。像绘制字符这样的事情,效率是最重要的。因为我们要频繁使用它。我们来探索一些改善的方法,使其成为最优化的汇编代码。首先,因为我们有一个 `× 16`,你应该会马上想到它等价于逻辑左移 4 位。紧接着我们有一个变量 `row`,它只与 `charAddress` 和 `y` 相加。所以,我们可以通过增加替代变量来消除它。现在唯一的问题是如何判断我们何时完成。这时,一个很好用的 `.align 4` 上场了。我们知道,`charAddress` 将从包含 0 的低位半字节开始。这意味着我们可以通过检查低位半字节来看到进入字符数据的程度。 + +虽然我们可以消除对 `bit` 的需求,但我们必须要引入新的变量才能实现,因此最好还是保留它。剩下唯一的改进就是去除嵌套的 `bits >> bit`。 + +```c +function drawCharacter(r0 is character, r1 is x, r2 is y) + if character > 127 then exit + set charAddress to font + character << 4 + loop + set bits to readByte(charAddress) + set bit to 8 + loop + set bits to bits << 1 + set bit to bit - 1 + if test(bits, 0x100) + then setPixel(x + bit, y) + until bit = 0 + set y to y + 1 + set chadAddress to chadAddress + 1 + until charAddress AND 0b1111 = 0 + return r0 = 8, r1 = 16 +end function +``` + +现在,我们已经得到了非常接近汇编代码的代码了,并且代码也是经过优化的。下面就是上述代码用汇编写出来的代码。 + +```assembly +.globl DrawCharacter +DrawCharacter: +cmp r0,#127 +movhi r0,#0 +movhi r1,#0 +movhi pc,lr + +push {r4,r5,r6,r7,r8,lr} +x .req r4 +y .req r5 +charAddr .req r6 +mov x,r1 +mov y,r2 
+ldr charAddr,=font +add charAddr, r0,lsl #4 + +lineLoop$: + + bits .req r7 + bit .req r8 + ldrb bits,[charAddr] + mov bit,#8 + + charPixelLoop$: + + subs bit,#1 + blt charPixelLoopEnd$ + lsl bits,#1 + tst bits,#0x100 + beq charPixelLoop$ + + add r0,x,bit + mov r1,y + bl DrawPixel + + teq bit,#0 + bne charPixelLoop$ + + charPixelLoopEnd$: + .unreq bit + .unreq bits + add y,#1 + add charAddr,#1 + tst charAddr,#0b1111 + bne lineLoop$ + +.unreq x +.unreq y +.unreq charAddr + +width .req r0 +height .req r1 +mov width,#8 +mov height,#16 + +pop {r4,r5,r6,r7,r8,pc} +.unreq width +.unreq height +``` + +### 3、字符串 + +现在,我们可以绘制字符了,我们可以绘制文本了。我们需要去写一个方法,给它一个字符串为输入,它通过递增位置来绘制出每个字符。为了做的更好,我们应该去实现新的行和制表符。是时候决定关于空终止符的问题了,如果你想让你的操作系统使用它们,可以按需来修改下面的代码。为避免这个问题,我将给 `DrawString` 函数传递一个字符串长度,以及字符串的地址,和 x 和 y 的坐标作为参数。 + +```c +function drawString(r0 is string, r1 is length, r2 is x, r3 is y) + set x0 to x + for pos = 0 to length - 1 + set char to loadByte(string + pos) + set (cwidth, cheight) to DrawCharacter(char, x, y) + if char = '\n' then + set x to x0 + set y to y + cheight + otherwise if char = '\t' then + set x1 to x + until x1 > x0 + set x1 to x1 + 5 × cwidth + loop + set x to x1 + otherwise + set x to x + cwidth + end if + next +end function +``` + +同样,这个函数与汇编代码还有很大的差距。你可以随意去尝试实现它,即可以直接实现它,也可以简化它。我在下面给出了简化后的函数和汇编代码。 + +很明显,写这个函数的人并不很有效率(感到奇怪吗?它就是我写的)。再说一次,我们有一个 `pos` 变量,它用于递增和与其它东西相加,这是完全没有必要的。我们可以去掉它,而同时进行长度递减,直到减到 0 为止,这样就少用了一个寄存器。除了那个烦人的乘以 5 以外,函数的其余部分还不错。在这里要做的一个重要事情是,将乘法移到循环外面;即便使用位移运算,乘法仍然是很慢的,由于我们总是加一个乘以 5 的相同的常数,因此没有必要重新计算它。实际上,在汇编代码中它可以在一个操作数中通过参数移位来实现,因此我将代码改变为下面这样。 + +```c +function drawString(r0 is string, r1 is length, r2 is x, r3 is y) + set x0 to x + until length = 0 + set length to length - 1 + set char to loadByte(string) + set (cwidth, cheight) to DrawCharacter(char, x, y) + if char = '\n' then + set x to x0 + set y to y + cheight + otherwise if char = '\t' then + set x1 to x + set cwidth to cwidth + cwidth << 2 + until x1 > x0 + set x1 to x1 + cwidth + loop + set x to x1 + otherwise + set x to x + cwidth + end if + set string to string + 1 + loop +end function +``` + +以下是它的汇编代码: + +```assembly +.globl DrawString +DrawString: +x .req r4 +y .req r5 +x0 .req r6 +string .req r7 +length .req r8 +char .req r9 +push {r4,r5,r6,r7,r8,r9,lr} + +mov string,r0 +mov x,r2 +mov x0,x +mov y,r3 +mov length,r1 + +stringLoop$: + subs length,#1 + blt stringLoopEnd$ + + ldrb char,[string] + add string,#1 + + mov r0,char + mov r1,x + mov r2,y + bl DrawCharacter + cwidth .req r0 + cheight .req r1 + + teq char,#'\n' + moveq x,x0 + addeq y,cheight + beq stringLoop$ + + teq char,#'\t' + addne x,cwidth + bne stringLoop$ + + add cwidth, cwidth,lsl #2 + x1 .req r1 + mov x1,x0 + + stringLoopTab$: + add x1,cwidth + cmp x,x1 + bge stringLoopTab$ + mov x,x1 + .unreq x1 + b stringLoop$ +stringLoopEnd$: +.unreq cwidth +.unreq cheight + +pop {r4,r5,r6,r7,r8,r9,pc} +.unreq x +.unreq y +.unreq x0 +.unreq string +.unreq length +``` + +```assembly +subs reg,#val 从寄存器 reg 中减去 val,然后将结果与 0 进行比较。 +``` + +这个代码中非常聪明地使用了一个新运算,`subs` 是从一个操作数中减去另一个数,保存结果,然后将结果与 0 进行比较。实现上,所有的比较都可以实现为减法后的结果与 0 进行比较,但是结果通常会丢弃。这意味着这个操作与 `cmp` 一样快。 + +### 4、你的愿意是我的命令行 + +现在,我们可以输出字符串了,而挑战是找到一个有意思的字符串去绘制。一般在这样的教程中,人们都希望去绘制 “Hello World!”,但是到目前为止,虽然我们已经能做到了,我觉得这有点“君临天下”的感觉(如果喜欢这种感觉,请随意!)。因此,作为替代,我们去继续绘制我们的命令行。 + +有一个限制是我们所做的操作系统是用在 ARM 架构的计算机上。最关键的是,在它们引导时,给它一些信息告诉它有哪些可用资源。几乎所有的处理器都有某些方式来确定这些信息,而在 ARM 上,它是通过位于地址 10016 处的数据来确定的,这个数据的格式如下: + + 1. 数据是可分解的一系列的标签。 + 2. 
这里有九种类型的标签:`core`,`mem`,`videotext`,`ramdisk`,`initrd2`,`serial`,`revision`,`videolfb`,`cmdline`。 + 3. 每个标签只能出现一次,除了 'core’ 标签是必不可少的之外,其它的都是可有可无的。 + 4. 所有标签都依次放置在地址 0x100 处。 + 5. 标签列表的结束处总是有两个word,它们全为 0。 + 6. 每个标签的字节数都是 4 的倍数。 + 7. 每个标签都是以标签中(以字为单位)的标签大小开始(标签包含这个数字)。 + 8. 紧接着是包含标签编号的一个半字。编号是按上面列出的顺序,从 1 开始(`core` 是 1,`cmdline` 是 9)。 + 9. 紧接着是一个包含 544116 的半字。 + 10. 之后是标签的数据,它根据标签不同是可变的。数据大小(以字为单位)+ 2 的和总是与前面提到的长度相同。 + 11. 一个 `core` 标签的长度可以是 2 个字也可以是 5 个字。如果是 2 个字,表示没有数据,如果是 5 个字,表示它有 3 个字的数据。 + 12. 一个 `mem` 标签总是 4 个字的长度。数据是内存块的第一个地址,和内存块的长度。 + 13. 一个 `cmdline` 标签包含一个 `null` 终止符字符串,它是个内核参数。 + + +```markdown +几乎所有的操作系统都支持一个`命令行`的程序。它的想法是为选择一个程序所期望的行为而提供一个通用的机制。 +``` + +在目前的树莓派版本中,只提供了 `core`、`mem` 和 `cmdline` 标签。你可以在后面找到它们的用法,更全面的参考资料在树莓派的参考页面上。现在,我们感兴趣的是 `cmdline` 标签,因为它包含一个字符串。我们继续写一些搜索命令行标签的代码,如果找到了,以每个条目一个新行的形式输出它。命令行只是为了让操作系统理解图形处理器或用户认为的很好的事情的一个列表。在树莓派上,这包含了 MAC 地址,序列号和屏幕分辨率。字符串本身也是一个像 `key.subkey=value` 这样的由空格隔开的表达式列表。 + +我们从查找 `cmdline` 标签开始。将下列的代码复制到一个名为 `tags.s` 的新文件中。 + +```assembly +.section .data +tag_core: .int 0 +tag_mem: .int 0 +tag_videotext: .int 0 +tag_ramdisk: .int 0 +tag_initrd2: .int 0 +tag_serial: .int 0 +tag_revision: .int 0 +tag_videolfb: .int 0 +tag_cmdline: .int 0 +``` + +通过标签列表来查找是一个很慢的操作,因为这涉及到许多内存访问。因此,我们只是想实现它一次。代码创建一些数据,用于保存每个类型的第一个标签的内存地址。接下来,用下面的伪代码就可以找到一个标签了。 + +```c +function FindTag(r0 is tag) + if tag > 9 or tag = 0 then return 0 + set tagAddr to loadWord(tag_core + (tag - 1) × 4) + if not tagAddr = 0 then return tagAddr + if readWord(tag_core) = 0 then return 0 + set tagAddr to 0x100 + loop forever + set tagIndex to readHalfWord(tagAddr + 4) + if tagIndex = 0 then return FindTag(tag) + if readWord(tag_core+(tagIndex-1)×4) = 0 + then storeWord(tagAddr, tag_core+(tagIndex-1)×4) + set tagAddr to tagAddr + loadWord(tagAddr) × 4 + end loop +end function +``` +这段代码已经是优化过的,并且很接近汇编了。它尝试直接加载标签,第一次这样做是有些乐观的,但是除了第一次之外 的其它所有情况都是可以这样做的。如果失败了,它将去检查 `core` 标签是否有地址。因为 `core` 标签是必不可少的,如果它没有地址,唯一可能的原因就是它不存在。如果它有地址,那就是我们没有找到我们要找的标签。如果没有找到,那我们就需要查找所有标签的地址。这是通过读取标签编号来做的。如果标签编号为 0,意味着已经到了标签列表的结束位置。这意味着我们已经查找了目录中所有的标签。所以,如果我们再次运行我们的函数,现在它应该能够给出一个答案。如果标签编号不为 0,我们检查这个标签类型是否已经有一个地址。如果没有,我们在目录中保存这个标签的地址。然后增加这个标签的长度(以字节为单位)到标签地址中,然后去查找下一个标签。 + +尝试去用汇编实现这段代码。你将需要简化它。如果被卡住了,下面是我的答案。不要忘了 `.section .text`! 
+ +```assembly +.section .text +.globl FindTag +FindTag: +tag .req r0 +tagList .req r1 +tagAddr .req r2 + +sub tag,#1 +cmp tag,#8 +movhi tag,#0 +movhi pc,lr + +ldr tagList,=tag_core +tagReturn$: +add tagAddr,tagList, tag,lsl #2 +ldr tagAddr,[tagAddr] + +teq tagAddr,#0 +movne r0,tagAddr +movne pc,lr + +ldr tagAddr,[tagList] +teq tagAddr,#0 +movne r0,#0 +movne pc,lr + +mov tagAddr,#0x100 +push {r4} +tagIndex .req r3 +oldAddr .req r4 +tagLoop$: +ldrh tagIndex,[tagAddr,#4] +subs tagIndex,#1 +poplt {r4} +blt tagReturn$ + +add tagIndex,tagList, tagIndex,lsl #2 +ldr oldAddr,[tagIndex] +teq oldAddr,#0 +.unreq oldAddr +streq tagAddr,[tagIndex] + +ldr tagIndex,[tagAddr] +add tagAddr, tagIndex,lsl #2 +b tagLoop$ + +.unreq tag +.unreq tagList +.unreq tagAddr +.unreq tagIndex +``` + +### 5、Hello World + +现在,我们已经万事俱备了,我们可以去绘制我们的第一个字符串了。在 `main.s` 文件中删除 `bl SetGraphicsAddress` 之后的所有代码,然后将下面的代码放进去: + +```assembly +mov r0,#9 +bl FindTag +ldr r1,[r0] +lsl r1,#2 +sub r1,#8 +add r0,#8 +mov r2,#0 +mov r3,#0 +bl DrawString +loop$: +b loop$ +``` + +这段代码简单地使用了我们的 `FindTag` 方法去查找第 9 个标签(`cmdline`),然后计算它的长度,然后传递命令和长度给 `DrawString` 方法,告诉它在 `0,0` 处绘制字符串。现在可以在树莓派上测试它了。你应该会在屏幕上看到一行文本。如果没有,请查看我们的排错页面。 + +如果一切正常,恭喜你已经能够绘制文本了。但它还有很大的改进空间。如果想去写了一个数字,或内存的一部分,或操作我们的命令行,该怎么做呢?在 [课程 9:屏幕04][2] 中,我们将学习如何操作文本和显示有用的数字和信息。 + +-------------------------------------------------------------------------------- + +via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html + +作者:[Alex Chadwick][a] +选题:[lujun9972][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.cl.cam.ac.uk +[b]: https://github.com/lujun9972 +[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen02.html +[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html diff --git a/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md new file mode 100644 index 0000000000..76573c4bd8 --- /dev/null +++ b/translated/tech/20150616 Computer Laboratory - Raspberry Pi- Lesson 9 Screen04.md @@ -0,0 +1,538 @@ +[#]: collector: (lujun9972) +[#]: translator: (qhwdw) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Computer Laboratory – Raspberry Pi: Lesson 9 Screen04) +[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html) +[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk) + +计算机实验室 – 树莓派:课程 9 屏幕04 +====== + +屏幕04 课程基于屏幕03 课程来构建,它教你如何操作文本。假设你已经有了[课程 8:屏幕03][1] 的操作系统代码,我们将以它为基础。 + +### 1、操作字符串 + +``` +变长函数在汇编代码中看起来似乎不好理解,然而 ,它却是非常有用和很强大的概念。 +``` + +能够绘制文本是极好的,但不幸的是,现在你只能绘制预先准备好的字符串。如果能够像命令行那样显示任何东西才是完美的,而理想情况下应该是,我们能够显示任何我们期望的东西。一如既往地,如果我们付出努力而写出一个非常好的函数,它能够操作我们所希望的所有字符串,而作为回报,这将使我们以后写代码更容易。曾经如此复杂的函数,在 C 语言编程中只不过是一个 `sprintf` 而已。这个函数基于给定的另一个字符串和作为描述的额外的一个参数而生成一个字符串。我们对这个函数感兴趣的地方是,这个函数是个变长函数。这意味着它可以带可变数量的参数。参数的数量取决于具体的格式字符串,因此它的参数的数量不能预先确定。 + +完整的函数有许多选项,而我们在这里只列出了几个。在本教程中将要实现的选项我做了高亮处理,当然,你可以尝试去实现更多的选项。 + +函数通过读取格式字符串来工作,然后使用下表的意思去解释它。一旦一个参数已经使用了,就不会再次考虑它了。函数 的返回值是写入的字符数。如果方法失败,将返回一个负数。 + +表 1.1 sprintf 格式化规则 +| 选项 | 含义 | +| -------------------------- | ------------------------------------------------------------ | +| ==Any character except %== | 复制字符到输出。 | +| ==%%== | 写一个 % 字符到输出。 | +| ==%c== | 将下一个参数写成字符格式。 | +| ==%d or %i== | 将下一个参数写成十进制的有符号整数。 | +| %e | 将下一个参数写成科学记数法,使用 eN 意思是 ×10N。 | +| %E | 将下一个参数写成科学记数法,使用 EN 意思是 ×10N。 | 
+| %f | 将下一个参数写成十进制的 IEEE 754 浮点数。 | +| %g | 与 %e 和 %f 的指数表示形式相同。 | +| %G | 与 %E 和 %f 的指数表示形式相同。 | +| ==%o== | 将下一个参数写成八进制的无符号整数。 | +| ==%s== | 下一个参数如果是一个指针,将它写成空终止符字符串。 | +| ==%u== | 将下一个参数写成十进制无符号整数。 | +| ==%x== | 将下一个参数写成十六进制无符号整数(使用小写的 a、b、c、d、e 和 f)。 | +| %X | 将下一个参数写成十六进制的无符号整数(使用大写的 A、B、C、D、E 和 F)。 | +| %p | 将下一个参数写成指针地址。 | +| ==%n== | 什么也不输出。而是复制到目前为止被下一个参数在本地处理的字符个数。 | + +除此之外,对序列还有许多额外的处理,比如指定最小长度,符号等等。更多信息可以在 [sprintf - C++ 参考][2] 上找到。 + +下面是调用方法和返回的结果的示例。 + +表 1.2 sprintf 调用示例 +| 格式化字符串 | 参数 | 结果 | +| "%d" | 13 | "13" | +| "+%d degrees" | 12 | "+12 degrees" | +| "+%x degrees" | 24 | "+1c degrees" | +| "'%c' = 0%o" | 65, 65 | "'A' = 0101" | +| "%d * %d%% = %d" | 200, 40, 80 | "200 * 40% = 80" | +| "+%d degrees" | -5 | "+-5 degrees" | +| "+%u degrees" | -5 | "+4294967291 degrees" | + +希望你已经看到了这个函数是多么有用。实现它需要大量的编程工作,但给我们的回报却是一个非常有用的函数,可以用于各种用途。 + +### 2、除法 + +``` +除法是非常慢的,也是非常复杂的基础数学运算。它在 ARM 汇编代码中不能直接实现,因为如果直接实现的话,它得出答案需要花费很长的时间,因此它不是个“简单的”运算。 +``` + +虽然这个函数看起来很强大、也很复杂。但是,处理它的许多情况的最容易的方式可能是,编写一个函数去处理一些非常常见的任务。它是个非常有用的函数,可以为任何底的一个有符号或无符号的数字生成一个字符串。那么,我们如何去实现呢?在继续阅读之前,尝试快速地设计一个算法。 + +最简单的方法或许就是我在 [课程 1:OK01][3] 中提到的“除法余数法”。它的思路如下: + + 1. 用当前值除以你使用的底。 + 2. 保存余数。 + 3. 如果得到的新值不为 0,转到第 1 步。 + 4. 将余数反序连起来就是答案。 + + + +例如: + +表 2.1 以 2 为底的例子 +转换 + +| 值 | 新值 | 余数 | +| ---- | ---- | ---- | +| 137 | 68 | 1 | +| 68 | 34 | 0 | +| 34 | 17 | 0 | +| 17 | 8 | 1 | +| 8 | 4 | 0 | +| 4 | 2 | 0 | +| 2 | 1 | 0 | +| 1 | 0 | 1 | + +因此答案是 100010012 + +这个过程的不幸之外在于使用了除法。所以,我们必须首先要考虑二进制中的除法。 + +我们复习一下长除法 + +> 假如我们想把 4135 除以 17。 +> +> 0243 r 4 +> 17)4135 +> 0 0 × 17 = 0000 +> 4135 4135 - 0 = 4135 +> 34 200 × 17 = 3400 +> 735 4135 - 3400 = 735 +> 68 40 × 17 = 680 +> 55 735 - 680 = 55 +> 51 3 × 17 = 51 +> 4 55 - 51 = 4 +> 答案:243 余 4 +> +> 首先我们来看被除数的最高位。 我们看到它是小于或等于除数的最小倍数,因此它是 0。我们在结果中写一个 0。 +> +> 接下来我们看被除数倒数第二位和所有的高位。我们看到小于或等于那个数的除数的最小倍数是 34。我们在结果中写一个 2,和减去 3400。 +> +> 接下来我们看被除数的第三位和所有高位。我们看到小于或等于那个数的除数的最小倍数是 68。我们在结果中写一个 4,和减去 680。 +> +> 最后,我们看一下所有的余位。我们看到小于余数的除数的最小倍数是 51。我们在结果中写一个 3,减去 51。减法的结果就是我们的余数。 +> + +在汇编代码中做除法,我们将实现二进制的长除法。我们之所以实现它是因为,数字都是以二进制方式保存的,这让我们很容易地访问所有重要位的移位操作,并且因为在二进制中做除法比在其它高进制中做除法都要简单,因为它的数更少。 + +> 1011 r 1 +>1010)1101111 +> 1010 +> 11111 +> 1010 +> 1011 +> 1010 +> 1 +这个示例展示了如何做二进制的长除法。简单来说就是,在不超出被除数的情况下,尽可能将除数右移,根据位置输出一个 1,和减去这个数。剩下的就是余数。在这个例子中,我们展示了 11011112 ÷ 10102 = 10112 余数为 12。用十进制表示就是,111 ÷ 10 = 11 余 1。 + + +你自己尝试去实现这个长除法。你应该去写一个函数 `DivideU32` ,其中 `r0` 是被除数,而 `r1` 是除数,在 `r0` 中返回结果,在 `r1` 中返回余数。下面,我们将完成一个有效的实现。 + +```c +function DivideU32(r0 is dividend, r1 is divisor) + set shift to 31 + set result to 0 + while shift ≥ 0 + if dividend ≥ (divisor << shift) then + set dividend to dividend - (divisor << shift) + set result to result + 1 + end if + set result to result << 1 + set shift to shift - 1 + loop + return (result, dividend) +end function +``` + +这段代码实现了我们的目标,但却不能用于汇编代码。我们出现的问题是,我们的寄存器只能保存 32 位,而 `divisor << shift` 的结果可能在一个寄存器中装不下(我们称之为溢出)。这确实是个问题。你的解决方案是否有溢出的问题呢? 
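在继续之前,可以用一小段 C 代码直观地看一下这个溢出(仅为示意,与教程要写的汇编无关;这里用 `uint32_t` 来模拟 32 位寄存器):

```c
#include <stdio.h>
#include <stdint.h>

/* 仅为示意:演示伪代码中的 divisor << shift 在 32 位寄存器里为什么会溢出 */
int main(void)
{
    uint32_t divisor = 3;              /* 只要除数大于 1,问题就会出现 */
    uint32_t shifted = divisor << 31;  /* 伪代码的第一轮循环:shift = 31 */

    /* 3 << 31 的真实值是 0x180000000,需要 33 个比特;
     * 在 32 位寄存器里只剩下 0x80000000,最高位被丢掉了,
     * 于是 "dividend >= (divisor << shift)" 这个比较的结果就不可信了。 */
    printf("3 << 31 在 32 位下被截断为 0x%08x\n", (unsigned int)shifted);
    return 0;
}
```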
+ +幸运的是,有一个称为 `clz` 或 `计数前导零(count leading zeros)` 的指令,它能计算一个二进制表示的数字的前导零的个数。这样我们就可以在溢出发生之前,可以将寄存器中的值进行相应位数的左移。你可以找出的另一个优化就是,每个循环我们计算 `divisor << shift` 了两遍。我们可以通过将除数移到开始位置来改进它,然后在每个循环结束的时候将它移下去,这样可以避免将它移到别处。 + +我们来看一下进一步优化之后的汇编代码。 + +```assembly +.globl DivideU32 +DivideU32: +result .req r0 +remainder .req r1 +shift .req r2 +current .req r3 + +clz shift,r1 +lsl current,r1,shift +mov remainder,r0 +mov result,#0 + +divideU32Loop$: + cmp shift,#0 + blt divideU32Return$ + cmp remainder,current + + addge result,result,#1 + subge remainder,current + sub shift,#1 + lsr current,#1 + lsl result,#1 + b divideU32Loop$ +divideU32Return$: +.unreq current +mov pc,lr + +.unreq result +.unreq remainder +.unreq shift +``` + +```assembly +clz dest,src 将第一个寄存器 dest 中二进制表示的值的前导零的数量,保存到第二个寄存器 src 中。 +``` + +你可能毫无疑问的认为这是个非常高效的作法。它是很好,但是除法是个代价非常高的操作,并且我们的其中一个愿望就是不要经常做除法,因为如果能以任何方式提升速度就是件非常好的事情。当我们查看有循环的优化代码时,我们总是重点考虑一个问题,这个循环会运行多少次。在本案例中,在输入为 1 的情况下,这个循环最多运行 31 次。在不考虑特殊情况的时候,这很容易改进。例如,当 1 除以 1 时,不需要移位,我们将把除数移到它上面的每个位置。这可以通过简单地在被除数上使用新的 clz 命令并从中减去它来改进。在 `1 ÷ 1` 的案例中,这意味着移位将设置为 0,明确地表示它不需要移位。如果它设置移位为负数,表示除数大于被除数,因此我们就可以知道结果是 0,而余数是被除数。我们可以做的另一个快速检查就是,如果当前值为 0,那么它是一个整除的除法,我们就可以停止循环了。 + +```assembly +.globl DivideU32 +DivideU32: +result .req r0 +remainder .req r1 +shift .req r2 +current .req r3 + +clz shift,r1 +clz r3,r0 +subs shift,r3 +lsl current,r1,shift +mov remainder,r0 +mov result,#0 +blt divideU32Return$ + +divideU32Loop$: + cmp remainder,current + blt divideU32LoopContinue$ + + add result,result,#1 + subs remainder,current + lsleq result,shift + beq divideU32Return$ +divideU32LoopContinue$: + subs shift,#1 + lsrge current,#1 + lslge result,#1 + bge divideU32Loop$ + +divideU32Return$: +.unreq current +mov pc,lr + +.unreq result +.unreq remainder +.unreq shift +``` + +复制上面的代码到一个名为 `maths.s` 的文件中。 + +### 3、数字字符串 + +现在,我们已经可以做除法了,我们来看一下另外的一个将数字转换为字符串的实现。下列的伪代码将寄存器中的一个数字转换成以 36 为底的字符串。根据惯例,a % b 表示 a 被 b 相除之后的余数。 + +```c +function SignedString(r0 is value, r1 is dest, r2 is base) + if value ≥ 0 + then return UnsignedString(value, dest, base) + otherwise + if dest > 0 then + setByte(dest, '-') + set dest to dest + 1 + end if + return UnsignedString(-value, dest, base) + 1 + end if +end function + +function UnsignedString(r0 is value, r1 is dest, r2 is base) + set length to 0 + do + + set (value, rem) to DivideU32(value, base) + if rem > 10 + then set rem to rem + '0' + otherwise set rem to rem - 10 + 'a' + if dest > 0 + then setByte(dest + length, rem) + set length to length + 1 + + while value > 0 + if dest > 0 + then ReverseString(dest, length) + return length +end function + +function ReverseString(r0 is string, r1 is length) + set end to string + length - 1 + while end > start + set temp1 to readByte(start) + set temp2 to readByte(end) + setByte(start, temp2) + setByte(end, temp1) + set start to start + 1 + set end to end - 1 + end while +end function +``` + +上述代码实现在一个名为 `text.s` 的汇编文件中。记住,如果你遇到了困难,可以在下载页面找到完整的解决方案。 + +### 4、格式化字符串 + +我们继续回到我们的字符串格式化方法。因为我们正在编写我们自己的操作系统,我们根据我们自己的意愿来添加或修改格式化规则。我们可以发现,添加一个 `a %b` 操作去输出一个二进制的数字比较有用,而如果你不使用空终止符字符串,那么你应该去修改 `%s` 的行为,让它从另一个参数中得到字符串的长度,或者如果你愿意,可以从长度前缀中获取。我在下面的示例中使用了一个空终止符。 + +实现这个函数的一个主要的障碍是它的参数个数是可变的。根据 ABI 规定,额外的参数在调用方法之前以相反的顺序先推送到栈上。比如,我们使用 8 个参数 1、2、3、4、5、6、7 和 8 来调用我们的方法,我们将按下面的顺序来处理: + + 1. Set r0 = 5、r1 = 6、r2 = 7、r3 = 8 + 2. Push {r0,r1,r2,r3} + 3. Set r0 = 1、r1 = 2、r2 = 3、r3 = 4 + 4. 调用函数 + 5. 
Add sp,#4*4 + + + +现在,我们必须确定我们的函数确切需要的参数。在我的案例中,我将寄存器 `r0` 用来保存格式化字符串地址,格式化字符串长度则放在寄存器 `r1` 中,目标字符串地址放在寄存器 `r2` 中,紧接着是要求的参数列表,从寄存器 `r3` 开始和像上面描述的那样在栈上继续。如果你想去使用一个空终止符格式化字符串,在寄存器 r1 中的参数将被移除。如果你想有一个最大缓冲区长度,你可以将它保存在寄存器 `r3` 中。由于有额外的修改,我认为这样修改函数是很有用的,如果目标字符串地址为 0,意味着没有字符串被输出,但如果仍然返回一个精确的长度,意味着能够精确的判断格式化字符串的长度。 + +如果你希望尝试实现你自己的函数,现在就可以去做了。如果不去实现你自己的,下面我将首先构建方法的伪代码,然后给出实现的汇编代码。 + +```c +function StringFormat(r0 is format, r1 is formatLength, r2 is dest, ...) + set index to 0 + set length to 0 + while index < formatLength + if readByte(format + index) = '%' then + set index to index + 1 + if readByte(format + index) = '%' then + if dest > 0 + then setByte(dest + length, '%') + set length to length + 1 + otherwise if readByte(format + index) = 'c' then + if dest > 0 + then setByte(dest + length, nextArg) + set length to length + 1 + otherwise if readByte(format + index) = 'd' or 'i' then + set length to length + SignedString(nextArg, dest, 10) + otherwise if readByte(format + index) = 'o' then + set length to length + UnsignedString(nextArg, dest, 8) + otherwise if readByte(format + index) = 'u' then + set length to length + UnsignedString(nextArg, dest, 10) + otherwise if readByte(format + index) = 'b' then + set length to length + UnsignedString(nextArg, dest, 2) + otherwise if readByte(format + index) = 'x' then + set length to length + UnsignedString(nextArg, dest, 16) + otherwise if readByte(format + index) = 's' then + set str to nextArg + while getByte(str) != '\0' + if dest > 0 + then setByte(dest + length, getByte(str)) + set length to length + 1 + set str to str + 1 + loop + otherwise if readByte(format + index) = 'n' then + setWord(nextArg, length) + end if + otherwise + if dest > 0 + then setByte(dest + length, readByte(format + index)) + set length to length + 1 + end if + set index to index + 1 + loop + return length +end function +``` + +虽然这个函数很大,但它还是很简单的。大多数的代码都是在检查所有各种条件,每个代码都是很简单的。此外,所有的无符号整数的大小写都是相同的(除了底以外)。因此在汇编中可以将它们汇总。下面是它的汇编代码。 + +```assembly +.globl FormatString +FormatString: +format .req r4 +formatLength .req r5 +dest .req r6 +nextArg .req r7 +argList .req r8 +length .req r9 + +push {r4,r5,r6,r7,r8,r9,lr} +mov format,r0 +mov formatLength,r1 +mov dest,r2 +mov nextArg,r3 +add argList,sp,#7*4 +mov length,#0 + +formatLoop$: + subs formatLength,#1 + movlt r0,length + poplt {r4,r5,r6,r7,r8,r9,pc} + + ldrb r0,[format] + add format,#1 + teq r0,#'%' + beq formatArg$ + +formatChar$: + teq dest,#0 + strneb r0,[dest] + addne dest,#1 + add length,#1 + b formatLoop$ + +formatArg$: + subs formatLength,#1 + movlt r0,length + poplt {r4,r5,r6,r7,r8,r9,pc} + + ldrb r0,[format] + add format,#1 + teq r0,#'%' + beq formatChar$ + + teq r0,#'c' + moveq r0,nextArg + ldreq nextArg,[argList] + addeq argList,#4 + beq formatChar$ + + teq r0,#'s' + beq formatString$ + + teq r0,#'d' + beq formatSigned$ + + teq r0,#'u' + teqne r0,#'x' + teqne r0,#'b' + teqne r0,#'o' + beq formatUnsigned$ + + b formatLoop$ + +formatString$: + ldrb r0,[nextArg] + teq r0,#0x0 + ldreq nextArg,[argList] + addeq argList,#4 + beq formatLoop$ + add length,#1 + teq dest,#0 + strneb r0,[dest] + addne dest,#1 + add nextArg,#1 + b formatString$ + +formatSigned$: + mov r0,nextArg + ldr nextArg,[argList] + add argList,#4 + mov r1,dest + mov r2,#10 + bl SignedString + teq dest,#0 + addne dest,r0 + add length,r0 + b formatLoop$ + +formatUnsigned$: + teq r0,#'u' + moveq r2,#10 + teq r0,#'x' + moveq r2,#16 + teq r0,#'b' + moveq r2,#2 + teq r0,#'o' + moveq r2,#8 + + mov r0,nextArg + ldr nextArg,[argList] + add argList,#4 + mov r1,dest + 
bl UnsignedString + teq dest,#0 + addne dest,r0 + add length,r0 + b formatLoop$ +``` + +### 5、一个转换操作系统 + +你可以使用这个方法随意转换你希望的任何东西。比如,下面的代码将生成一个换算表,可以做从十进制到二进制到十六进制到八进制以及到 ASCII 的换算操作。 + +删除 `main.s` 文件中 `bl SetGraphicsAddress` 之后的所有代码,然后粘贴以下的代码进去。 + +```assembly +mov r4,#0 +loop$: +ldr r0,=format +mov r1,#formatEnd-format +ldr r2,=formatEnd +lsr r3,r4,#4 +push {r3} +push {r3} +push {r3} +push {r3} +bl FormatString +add sp,#16 + +mov r1,r0 +ldr r0,=formatEnd +mov r2,#0 +mov r3,r4 + +cmp r3,#768-16 +subhi r3,#768 +addhi r2,#256 +cmp r3,#768-16 +subhi r3,#768 +addhi r2,#256 +cmp r3,#768-16 +subhi r3,#768 +addhi r2,#256 + +bl DrawString + +add r4,#16 +b loop$ + +.section .data +format: +.ascii "%d=0b%b=0x%x=0%o='%c'" +formatEnd: +``` + +你能在测试之前推算出将发生什么吗?特别是对于 `r3 ≥ 128` 会发生什么?尝试在树莓派上运行它,看看你是否猜对了。如果不能正常运行,请查看我们的排错页面。 + +如果一切顺利,恭喜你!你已经完成了屏幕04 教程,屏幕系列的课程结束了!我们学习了像素和帧缓冲的知识,以及如何将它们应用到树莓派上。我们学习了如何绘制简单的线条,也学习如何绘制字符,以及将数字格式化为文本的宝贵技能。我们现在已经拥有了在一个操作系统上进行图形输出的全部知识。你可以写出更多的绘制方法吗?三维绘图是什么?你能实现一个 24 位帧缓冲吗?能够从命令行上读取帧缓冲的大小吗? + +接下来的课程是[输入][4]系列课程,它将教我们如何使用键盘和鼠标去实现一个传统的计算机控制台。 + +-------------------------------------------------------------------------------- + +via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html + +作者:[Alex Chadwick][a] +选题:[lujun9972][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.cl.cam.ac.uk +[b]: https://github.com/lujun9972 +[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen03.html +[2]: http://www.cplusplus.com/reference/clibrary/cstdio/sprintf/ +[3]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/ok01.html +[4]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html diff --git a/translated/tech/20180122 Ick- a continuous integration system.md b/translated/tech/20180122 Ick- a continuous integration system.md new file mode 100644 index 0000000000..eb1f3c6c45 --- /dev/null +++ b/translated/tech/20180122 Ick- a continuous integration system.md @@ -0,0 +1,74 @@ +Ick:一个连续集成系统 +====== + +**TL;DR:** Ick 是一个连续集成或者 CI 系统。访问 获取跟多信息。 + +更加详细的版本随后会出 + +### 首个公开版本发行 + +世界可能还不需要另一个连续集成系统(CI)但是我需要。我已对我尝试过或者看过的连续集成系统感到不满意了。更重要的是,几样我感兴趣的东西比我所听说过的连续集成系统要强大得多。因此我开始编写我自己的 CI 系统。 + +我的新个人业余项目叫做 ick。它是一个 CI 系统,这意味着他可以运行自动化的步骤来搭建、测试软件。它的主页是,[下载][1]页面有导向源码、.deb 包和用来安装的 Ansible 脚本的链接。 + +我现已发布了首个公开版本,绰号 ALPHA-1,版本号0.23。它现在是 alpha 品质,这意味着它并没拥有所有期望的特性,如果任何一个它已有的特性工作的话,你应该感到庆幸。 + +### 诚邀英才 + +Ick 目前是我的个人项目。我希望能让它不仅限于此,同时我也诚邀英才。访问[管理][2]页面查看章程,[开始][3]页面查看如何开始贡献的的小贴士,[联系][4]页面查看如何联络。 + +### 架构 + +Ick 拥有一个由几个通过 HTTPS 协议通信使用 RESTful API 和 JSON 处理结构化数据的部分组成的架构。访问[架构][5]页面查看细节。 + +### 宣言 + +连续集成(CI)是用于软件开发的强大工具。它不应枯燥、易溃或恼人。它搭建起来应简单快速,除非正在测试、搭建中的码有问题,不然它应在后台安静地工作。 + +一个连续集成系统应该简单、易用、清楚、干净、可扩展、快速、综合、透明、可靠并推动你的生产力。搭建它不应花大力气、不应需要专门为 CI 而造的硬件、不应需要频繁留意以使其保持工作、开发者永远不必思考为什么某样东西不工作。 + +一个连续集成系统应该足够灵活以适应你的搭建、测试需求。只要 CPU 架构和操作系统版本没问题,它应该支持各式操作者。 + +同时像所有软件一样,CI 应该彻彻底底的免费,你的 CI 应由你做主。 + +(目前的 Ick 仅稍具雏形,但是它会尝试着有朝一日变得完美,在最理想的情况下。) + +### 未来的梦想 + +长远来看,我希望 ick 拥有像下面所描述的特性。落实全部特性可能需要一些时间。 + +* 多种多样的事件都可以触发搭建。时间是一个明显的事件因为项目的源代码仓库改变了。更强大的是不管依赖是来自于 ick 搭建的另一个项目或则包比如说来自 Debian,任何用于搭建的依赖都会改变:ick 应当跟踪所有安装进一个项目搭建环境中的包,如果任何一个包的版本改变,都应再次触发项目搭建和测试。 + +* Ick 应该支持搭建任何合理的目标,包括任何 Linux 发行版,任何免费的操作系统,以及任何一息尚存的收费操作系统。 + +* Ick 应当不需要安装任何专门的代理,就能支持各种它能够通过 ssh 或者串口或者其它这种中性交流管道控制的操作者。Ick 不应默认它可以有比如说一个完整的 Java Runtime,如此一来,操作者就可以是一个微控制器了。 + +* Ick 应当能轻松掌控一大批项目。我觉得不管一个新的 Debian 源包何时上传,Ick 都应该要能够跟得上在 Debian 
中搭建所有东西的进度。(明显这可行与否取决于是否有足够的资源确实用在搭建上,但是 Ick 自己不应有瓶颈。) + +* 如果有需要的话 Ick 应当有选择性地补给操作者。如果所有特定种类的操作者处于忙碌中且 Ick 被设置成允许使用更多资源的话,它就应该这么做。这看起来用虚拟机、容器、云提供商等做可能会简单一些。 + +* Ick 应当灵活提醒感兴趣的团体特别是关于其失败的方面。它应允许感兴趣的团体通过 IRC,Matrix,Mastodon, Twitter, email, SMS 甚至电话和语音合成来接受通知。例如“您好,感兴趣的团体。现在是四点钟您想被通知 hello 包什么时候为 RISC-V 搭建好。” + + + + +### 请提供反馈 + +如果你尝试 ick 或者甚至你仅仅是读到这,请在上面分享你的想法。[联系][4]页面查看如何发送反馈。相比私下反馈我更偏爱公开反馈。但如果你偏爱私下反馈,那也行。 + +-------------------------------------------------------------------------------- + +via: https://blog.liw.fi/posts/2018/01/22/ick_a_continuous_integration_system/ + +作者:[Lars Wirzenius][a] +译者:[tomjlw](https://github.com/tomjlw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.liw.fi/ +[1]:http://ick.liw.fi/download/ +[2]:http://ick.liw.fi/governance/ +[3]:http://ick.liw.fi/getting-started/ +[4]:http://ick.liw.fi/contact/ +[5]:http://ick.liw.fi/architecture/ diff --git a/translated/tech/20180307 3 open source tools for scientific publishing.md b/translated/tech/20180307 3 open source tools for scientific publishing.md new file mode 100644 index 0000000000..697b8d50ea --- /dev/null +++ b/translated/tech/20180307 3 open source tools for scientific publishing.md @@ -0,0 +1,76 @@ +3款用于学术发表的开源工具 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV) +有一个行业在采用数字或者开源工具方面已落后其它行业,那就是竞争与利润并存的学术出版业。根据 Stephen Buranyi 去年在 [The Guardian][1] 发表的一份图表,这个估值超过190亿英镑(260亿美元)的行业至今在其选择、发表甚至分享最重要的科学研究的系统方面,仍受限于印刷媒介的诸多限制。全新的数字时代科技展现了一个能够加速探索、推动科学合作性而不是竞争性以及将投入重新从基础建设导向有益社会的研究的巨大的机遇。 + +非盈利性的 [eLife 倡议][2] 是由研究的资金赞助方建立,旨在通过使用数字或者开源技术来走出上述僵局。除了为生活中科学和生物医疗方面的重大成就出版开放式获取的杂志,eLife 已将自己变成了一个在研究交流方面的创新实验窗口——而大部分的实验都是基于开源精神的。 + +参与开源出版项目给予我们加速接触、采用科学技术,提升用户体验的机会。我们认为这种机会对于推动学术出版行业是重要的。大而化之地说,开源产品的用户体验经常是有待开发的,而有时候这种情况会阻止其他人去使用它。作为我们在 OSS 开发中投入的一部分,为了鼓励更多用户使用这些产品,我们十分注重用户体验。 + +我们所有的代码都是开源的并且我们也积极鼓励开源社区参与进我们的项目中。这对我们来说意味着更快的迭代、更多的实验、更大的透明度,同时也拓宽了我们工作的外延。 + +我们现在参与的项目,例如 Libero (之前称作 [eLife Continuum][3])和 [可复制文档栈][4] 的开发以及我们最近和 [Hypothesis][5] 的合作,展示了 OSS 是如何在校队、发表以及新发现的沟通方面带来正面影响的。 + +### Libero + +Libero 是面向发布者的服务及应用套餐,它包括一个后生产出版系统、整套前端用户界面、Libero 的棱镜阅读器、一个开放式的API以及一个搜索推荐引擎。 + +去年我们采取了用户导向的途径重新设计了 Libero 的前端,做出了一个使用户较少地分心并更多地集中注意在研究文章上的站点。我们和 eLife 社区成员测试并迭代了站点所有的核心功能以确保给所有人最好的阅读体验。网站的新 API 也给可供机器阅读的内容提供了更简单的访问途径,其中包括文字挖掘、机器学习以及在线应用开发。我们网站上的内容以及引领新设计的样式都是开源的,以鼓励 eLife 和其它想要使用它的发布者后续的开发。 + +### 可复制文档栈 + +在与 [Substance][6] 和 [Stencila][7] 的合作下,eLife 也参与了一个项目来创建可复制的文档栈(RDS)——一个开放式的创作、编纂以及在线出版可复制的计算型手稿的工具栈。 + +今天越来越多的研究员能够通过 [R、Markdown][8] 和 [Python][9] 等语言记录他们的计算型实验。这些可以作为实验记录的重要部分,但是尽管它们可以通过最终的研究文章独立地分享,传统出版流程经常将它们视为次级内容。为了发表论文,使用这些语言的研究员除了将他们的计算结果用图片的形式“扁平化”提交外别无他法。但是这导致了许多实验价值和代码和计算数据可重复利用性的流失。诸如 [Jupyter][10] 的电子笔记本解决方案确实可以使研究员以一种可重复利用、可执行的简单形式发布,但是这种方案仍然独立于整个手稿发布过程之外,而不是集成在其中。 + +[可复制文档栈][11] 项目着眼于通过开发、发布一个能够把代码和数据集成在文档自身的产品雏形来突出这些挑战并阐述一个端对端的从创作到出版的完整科技。它将最终允许用户以一种包含嵌入代码块和计算结果(统计结果、图表或图片)的形式提交他们的手稿并在出版过程中保留这些可视、可执行的部分。那时发布者就可以将这些作为发布的在线文章的整体所保存。 + +### 用 Hypothesis 进行开放式注解 + +最近,我们与 [Hypothesis][12] 合作引进了开放式注解,使得我们网站的用户们可以写评语、高亮文章重要部分以及与在线阅读的群体互动。 + +通过这样的合作,开源的 Hypothesis 软件被定制得更具有现代化的特性如单次登录验证、用户界面定制选项,给予了发布者在他们自己网站上更多的控制。这些提升正引导着关于发表的学术内容高质量的讨论。 + +这个工具可以无缝集成进发布者的网站,学术发表平台 [PubFactory][13] 和内容解决方案供应商 [Ingenta][14] 已经利用了它优化后的特性集。[HighWire][15] 和 [Silverchair][16] 也为他们的发布者提供了实施这套方案的机会。 + + ### 其它产业和开源软件 + +过段时间,我们希望看到更多的发布者采用 Hypothesis、Libero 以及其它开源软件去帮助他们促进重要科学研究的发现以及循环利用。但是 eLife 所能利用的因这些软件和其它 OSS 科技带来的创新机会在其他产业也很普遍。 + 
+数据科学的世界离不开高质量、强支持的开源软件和围绕它们形成的社区;[TensorFlow][17] 就是这样一个好例子。感谢 OSS 以及其社区,AI 的所有领域和机器学习相比于计算机的其它领域有了迅速的提升和发展。与之类似的是 Linux 云端网页主机、Docker 容器、Github上最流行的开源项目之一的 Kubernetes 使用的爆炸性增长。 + +所有的这些科技使得不同团体能够四两拨千斤并集中在创新而不是造轮子上。最后,那才是 OSS 真正的好处:它使得我们从互相的失败中学习,在互相的成功中成长。 + +我们总是在寻找与研究和科技界面方面最好的人才和想法交流的机会。你可以在 [eLife Labs][18] 上或者联系 [innovation@elifesciences.org][19] 找到更多这种交流的信息。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/3/scientific-publishing-software + +作者:[Paul Shanno][a] +译者:[tomjlw](https://github.com/tomjlw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/pshannon +[1]:https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science +[2]:https://elifesciences.org/about +[3]:https://elifesciences.org/inside-elife/33e4127f/elife-introduces-continuum-a-new-open-source-tool-for-publishing +[4]:https://elifesciences.org/for-the-press/e6038800/elife-supports-development-of-open-technology-stack-for-publishing-reproducible-manuscripts-online +[5]:https://elifesciences.org/for-the-press/81d42f7d/elife-enhances-open-annotation-with-hypothesis-to-promote-scientific-discussion-online +[6]:https://github.com/substance +[7]:https://github.com/stencila/stencila +[8]:https://rmarkdown.rstudio.com/ +[9]:https://www.python.org/ +[10]:http://jupyter.org/ +[11]:https://elifesciences.org/labs/7dbeb390/reproducible-document-stack-supporting-the-next-generation-research-article +[12]:https://github.com/hypothesis +[13]:http://www.pubfactory.com/ +[14]:http://www.ingenta.com/ +[15]:https://github.com/highwire +[16]:https://www.silverchair.com/community/silverchair-universe/hypothesis/ +[17]:https://www.tensorflow.org/ +[18]:https://elifesciences.org/labs +[19]:mailto:innovation@elifesciences.org diff --git a/translated/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md b/translated/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md deleted file mode 100644 index 558663de0d..0000000000 --- a/translated/tech/20180615 Complete Sed Command Guide [Explained with Practical Examples].md +++ /dev/null @@ -1,1029 +0,0 @@ -Sed 命令完全指南 -====== -在前面的文章中,我展示了 [Sed 命令的基本用法][1],它是一个功能强大的流编辑器。今天,我们准备去了解关于 Sed 更多的知识,深入了解 Sed 的运行模式。这将是你全面了解 Sed 命令的一个机会,深入挖掘它的运行细节和精妙之处。因此,如果你已经做好了准备,那就打开终端吧,[下载测试文件][2] 然后坐在电脑前:开始我们的探索之旅吧! - -### 关于 Sed 的一点点理论知识 - -![complete reference guide to sed commands][4] - -#### 首先我们看一下 sed 的运行模式 - -要准确理解 Sed 命令,你必须先了解工具的运行模式。 - -当处理数据时,Sed 从输入源一次读入一行,并将它保存到所谓的 `pattern` 空间中。所有 Sed 的变动都发生在 `pattern` 空间。变动都是由命令行上或外部 Sed 脚本文件提供的单字母命令来描述的。大多数 Sed 命令都可以由一个地址或一个地址范围作为前导来限制它们的作用范围。 - -默认情况下,Sed 在结束每个处理循环后输出 `pattern` 空间中的内容,也就是说,输出发生在输入的下一个行覆盖 `pattern` 空间之前。我们可以将这种运行模式总结如下: - - 1. 尝试将下一个行读入到 `pattern` 空间中 - - 2. 如果读取成功: - - 1. 按脚本中的顺序将所有命令应用到与那个地址匹配的当前输入行上 - - 2. 如果 sed 没有以静默(`-n`)模式运行,那么将输出 `pattern` 空间中的所有内容(可能会是修改过的)。 - - 3. 
重新回到 1。 - - - - -因此,在每个行被处理完毕之后, `pattern` 空间中的内容将被丢弃,它并不适合长时间保存内容。基于这种目的,Sed 有第二个缓冲区:`hold` 空间。除非你显式地要求它将数据置入到 `hold` 空间、或从`hode` 空间中取得数据,否则 Sed 从不清除 `hold` 空间的内容。在我们后面学习到 `exchange`、`get`、`hold` 命令时将深入研究它。 - -#### Sed 的抽象机制 - -你将在许多的 Sed 教程中都会看到上面解释的模式。的确,这是充分正确理解大多数基本 Sed 程序所必需的。但是当你深入研究更多的高级命令时,你将会发现,仅这些知识还是不够的。因此,我们现在尝试去了解更深入的一些知识。 - -的确,Sed 可以被视为是[抽象机制][5]的实现,它的[状态][6]由三个[缓冲区][7] 、两个[寄存器][8]和两个[标志][9]来定义的: - - * **三个缓冲区**用于去保存任意长度的文本。是的,是三个!在前面的基本运行模式中我们谈到了两个: `pattern` 空间和 `hold` 空间,但是 Sed 还有第三个缓冲区:追加队列。从 Sed 脚本的角度来看,它是一个只写缓冲区,Sed 将在它运行时的预定义阶段来自动刷新它(一般是在从输入源读入一个新行之前,或仅在它退出运行之前)。 - - * Sed 也维护**两个寄存器**:行计数器(LC)用于保存从输入源读取的行数,而程序计数器(PC)总是用来保存下一个将要运行的命令的索引(就是脚本中的位置),Sed 将它作为它的主循环的一部分来自动增加 PC。但在使用特定的命令时,脚本也会直接修改 PC 去跳过或重复程序的一部分。这就像使用 Sed 实现的一个循环或条件语句。更多内容将在下面的专用分支一节中描述。 - - * 最后,**两个标志**可以被某些 Sed 命令的行为所修改:自动输出(AP)标志和替换标志(SF)。当自动输出标志 AP 被设置时,Sed 将在 `pattern` 空间的内容被覆盖前自动输出(尤其是(包括但不限于)在从输入源读入一个新行之前)。当自动输出标准被清除时(即:没有设置),Sed 在脚本中没有显式命令的情况下,将不会输出 `pattern` 空间中的内容。你可以通过在“静默模式”(使用命令行选项 `-n` 或者在第一行或脚本中使用特殊注释 `#n`)运行 Sed 命令来清除自动输出标志。当它的地址和查找模式与 `pattern` 空间中的内容都匹配时,“替换标志”将被替换命令(`s` 命令)设置。替换标志在每个新的循环开始时、或当从输入源读入一个新行时、或获得条件分支之后将被清除。我们将在分支一节中详细研究这一话题。 - - - - -另外,Sed 维护一个进入到它的地址范围(关于地址范围的更多知识将在地址范围一节详细描述)的命令列表,以及用于读取和写入数据的两个文件句柄(你将在读取和写入命令的描述中获得更多有关文件句柄的内容)。 - -#### 一个更精确的 Sed 运行模式 - -由于一张图胜过千言万语,所以我画了一个流程图去描述 Sed 的运行模式。我将两个东西放在了旁边,像处理多个输入文件或错误处理,但是我认为这足够你去理解任何 Sed 程序的行为了,并且可以避免你在编写你自己的 Sed 脚本时浪费在摸索上的时间。 - -![The Sed execution model][10] - -你可能已经注意到,在上面的流程图上我并没有描述特定的命令动作。对于命令,我们将逐个详细讲解。因此,不用着急,我们马上开始! - -### print 命令 - -print 命令(`p`)是用于输出在它运行时 `pattern` 空间中的内容。它并不会以任何方式改变 Sed 抽象机制中的状态。 - -![The Sed `print` command][11] - -示例: -``` -sed -e 'p' inputfile - -``` - -上面的命令将输出输入文件中每一行的内容两次,因为你一旦显式地要求使用 `print` 命令时,将会在每个处理循环结束时再隐式地输出一次(因为在这里我们不是在“静默模式”中运行 Sed)。 - -如果我们不想每个行看到两次,我们可以用两种方式去解决它: -``` -sed -n -e 'p' inputfile # 在静默模式中显式输出 -sed -e '' inputfile # 空的"什么都不做的"程序,隐式输出 - -``` - -注意:`-e` 选项是引入一个 Sed 命令。它被用于区分命令和文件名。由于一个 Sed 表达式必须包含至少一个命令,所以对于第一个命令,`-e` 标志不是必需的。但是,由于我个人使用习惯问题,为了与在这里的大多数的一个命令行上给出多个 Sed 表达式的更复杂的案例保持一致性。你自己去判断这是一个好习惯还是坏习惯,并且在本文的后面部分还将延用这一习惯。 - -### 地址 - -显而易见,`print` 命令本身并没有太多的用处。但是,如果你在它之前添加一个地址,这样它就只输出输入文件的一些行,这样它就突然变得能够从一个输入文件中过滤一些不希望的行。那么 Sed 的地址又是什么呢?它是如何来辨别输入文件的“行”呢? - -#### 行号 - -一个 Sed 的地址既可以是一个行号(`$` 表示“最后一行”)也可以是一个正则表达式。在使用行号时,你需要记住 Sed 中的行数是从 1 开始的 — 并且需要注意的是,它不是从 0 行开始的。 -``` -sed -n -e '1p' inputfile # 仅输出文件的第一行 -sed -n -e '5p' inputfile # 仅输出第 5 行 -sed -n -e '$p' inputfile # 输出文件的最后一行 -sed -n -e '0p' inputfile # 结果将是报错,因为 0 不是有效的行号 - -``` - -根据 [POSIX 规范][12],如果你指定了几个输出文件,那么它的行号是累加的。换句话说,当 Sed 打开一个新输入文件时,它的行计数器是不会被重置的。因此,以下的两个命令所做的事情是一样的。仅输出一行文本: -``` -sed -n -e '1p' inputfile1 inputfile2 inputfile3 -cat inputfile1 inputfile2 inputfile3 | sed -n -e '1p' - -``` - -实际上,确实在 POSIX 中规定了多个文件是如何处理的: - -> 如果指定了多个文件,将按指定的文件命名顺序进行读取并被串联编辑。 - -但是,一些 Sed 的实现提供了命令行选项去改变这种行为,比如, GNU Sed 的 `-s` 标志(在使用 GNU Sed `-i` 标志时,它也被隐式地应用): -``` -sed -sn -e '1p' inputfile1 inputfile2 inputfile3 - -``` - -如果你的 Sed 实现支持这种非标准选项,那么关于它的具体细节请查看 `man` 手册页。 - -#### 正则表达式 - -我前面说过,Sed 地址既可以是行号也可以是正则表达式。那么正则表达式是什么呢? 
- -正如它的名字,一个[正则表达式][13]是描述一个字符串集合的方法。如果一个指定的字符串符合一个正则表达式所描述的集合,那么我们就认为这个字符串与正则表达式匹配。 - -一个正则表达式也可以包含必须完全匹配的文本字符。例如,所有的字母和数字,以及大部分可以打印的字符。但是,一些符号有特定意义: - - * 它们可能相当于锚,像 `^` 和 `$` 它们分别表示一个行的开始和结束; - - * 对于整个字符集,另外的符号可能做为占位符(比如圆点 `.` 可以匹配任意单个字符,或者方括号用于定义一个自定义的字符集); - - * 另外的是表示重复出现的数量(像 [Kleene 星号][14] 表示前面的模式出现 0、1 或多次); - - - - -这篇文章的目的不是给大家讲正则表达式。因此,我只粘几个示例。但是,你可以在网络上随便找到很多关于正则表达式的教程,正则表达式的功能非常强大,它可用于许多标准的 Unix 命令和编程语言中,并且是每个 Unix 用户应该掌握的技能。 - -下面是使用 Sed 地址的几个示例: -``` -sed -n -e '/systemd/p' inputfile # 仅输出包含字符串"systemd"的行 -sed -n -e '/nologin$/p' inputfile # 仅输出以"nologin"结尾的行 -sed -n -e '/^bin/p' inputfile # 仅输出以"bin"开头的行 -sed -n -e '/^$/p' inputfile # 仅输出空行(即:开始和结束之间什么都没有的行) -sed -n -e '/./p' inputfile # 仅输出包含一个字符的行(即:非空行) -sed -n -e '/^.$/p' inputfile # 仅输出确实只包含一个字符的行 -sed -n -e '/admin.*false/p' inputfile # 仅输出包含字符串"admin"后面有字符串"false"的行(在它们之间有任意数量的任意字符) -sed -n -e '/1[0,3]/p' inputfile # 仅输出包含一个"1"并且后面是一个"0"或"3"的行 -sed -n -e '/1[0-2]/p' inputfile # 仅输出包含一个"1"并且后面是一个"0"、"1"、"2"或"3"的行 -sed -n -e '/1.*2/p' inputfile # 仅输出包含字符"1"后面是一个"2"(在它们之间有任意数量的字符)的行 -sed -n -e '/1[0-9]*2/p' inputfile # 仅输出包含字符"1"后面跟着0、1、或更多数字,最后面是一个"2"的行 - -``` - -如果你想在正则表达式(包括正则表达式分隔符)中去除字符的特殊意义,你可以在它前面使用一个斜杠: -``` -# 输出所有包含字符串"/usr/sbin/nologin"的行 -sed -ne '/\/usr\/sbin\/nologin/p' inputfile - -``` - -并不是限制你只能使用反斜杠作为地址中正则表达式的分隔符。你可以通过在第一个分隔符前面加上斜杠的方式,来使用任何你认为适合你需要和偏好的其它字符作为正则表达式的分隔符。当你用地址与带文件路径的字符一起来匹配的时,是非常有用的: -``` -# 以下两个命令是完全相同的 -sed -ne '/\/usr\/sbin\/nologin/p' inputfile -sed -ne '\=/usr/sbin/nologin=p' inputfile - -``` - -#### 扩展的正则表达式 - -默认情况下,Sed 的正则表达式引擎仅理解 [POSIX 基本正则表达式][15] 的语法。如果你需要用到 [扩展的正则表达式][16],你必须在 Sed 命令上添加 `-E` 标志。扩展的正则表达式在基本的正则表达式基础上增加了一组额外的特性,并且很多都是很重要的,他们所要求的斜杠要少很多。我们来比较一下: -``` -sed -n -e '/\(www\)\|\(mail\)/p' inputfile -sed -En -e '/(www)|(mail)/p' inputfile - -``` - -#### 括号量词 - -正则表达式之所以强大的一个原因是[范围量词][17]`{,}`。事实上,当你写一个不太精确匹配的正则表达式时,量词 `*` 就是一个非常完美的符号。但是,你需要显式在它边上添加一个下限和上限,这样就有了很好的灵活性。当量词范围的下限省略时,下限被假定为 0。当上限被省略时,上限被假定为无限大: - -|括号| 速记词 |解释| - -| {,} | * | 前面的规则出现 0、1、或许多遍 | -| {,1} | ? 
| 前面的规则出现 0 或 1 遍 | -| {1,} | + | 前面的规则出现 1 或许多遍 | -| {n,n} | {n} | 前面的规则精确地出现 n 遍 | - -括号在基本的正则表达式中也是可以使用的,但是它要求使用反斜杠。根据 POSIX 规范,在基本的正则表达式中可以使用的量词仅有星号(`*`)和括号(使用反斜杠 `\{m,n\}`)。许多正则表达式引擎都扩展支持 `\?` 和 `\+`。但是,为什么魔鬼如此有诱惑力呢?因为,如果你需要这些量词,使用扩展的正则表达式将不但易于写而且可移植性更好。 - -为什么我要花点时间去讨论关于正则表达式的括号量词,这是因为在 Sed 脚本中经常用这个特性去计数字符。 -``` -sed -En -e '/^.{35}$/p' inputfile # 输出精确包含 35 个字符的行 -sed -En -e '/^.{0,35}$/p' inputfile # 输出包含 35 个字符或更少字符的行 -sed -En -e '/^.{,35}$/p' inputfile # 输出包含 35 个字符或更少字符的行 -sed -En -e '/^.{35,}$/p' inputfile # 输出包含 35 个字符或更多字符的行 -sed -En -e '/.{35}/p' inputfile # 你自己指出它的输出内容(这是留给你的测试题) - -``` - -#### 地址范围 - -到目前为止,我们使用的所有地址都是唯一地址。在我们使用一个唯一地址时,命令是应用在与那个地址匹配的行上。但是,Sed 也支持地址范围。Sed 命令可以应用到那个地址范围中从开始到结束的所有地址中的所有行上: -``` -sed -n -e '1,5p' inputfile # 仅输出 1 到 5 行 -sed -n -e '5,$p' inputfile # 从第 5 行输出到文件结尾 - -sed -n -e '/www/,/systemd/p' inputfile # 输出与正则表达式 /www/ 匹配的第一行到与正则表达式 /systemd/ 匹配的接下来的行 - -``` - -如果在开始和结束地址上使用了同一个行号,那么范围就缩小为那个行。事实上,如果第二个地址的数字小于或等于地址范围中选定的第一个行的数字,那么仅有一个行被选定: -``` -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,4p' - 4 bd -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,3p' - 4 bd - -``` - -这就有点难了,但是在前面的段落中给出的规则也适用于起始地址是正则表达式的情况。在那种情况下,Sed 将对正则表达式匹配的第一个行的行号和给定的作为结束地址的显式的行号进行比较。再强调一次,如果结束行号小于或等于起始行号,那么这个范围将缩小为一行: -``` -# 这个 /b/,4 地址将匹配三个单行 -# 因为每个匹配的行有一个行号 >= 4 -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/b/,4p' - 4 bd - 5 be - 6 bf - -# 你自己指出匹配的范围是多少 -# 第二个例子: -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/d/,4p' - 1 ad - 2 ae - 3 af - 4 bd - 7 cd - -``` - -但是,当结束地址是一个正则表达式时,Sed 的行为将不一样。在那种情况下,地址范围的第一行将不会与结束地址进行检查,因此地址范围将至少包含两行(当然,如果输入数据不足的情况除外): -``` -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '/b/,/d/p' - 4 bd - 5 be - 6 bf - 7 cd - -printf "%s\n" {a,b,c}{d,e,f} | cat -n | sed -ne '4,/d/p' - 4 bd - 5 be - 6 bf - 7 cd - -``` - -#### 互补 - -在一个地址选择行后面添加一个感叹号(`!`)表示不匹配那个地址。例如: -``` -sed -n -e '5!p' inputfile # 输出除了第 5 行外的所有行 -sed -n -e '5,10!p' inputfile # 输出除了第 5 到 10 之间的所有行 -sed -n -e '/sys/!p' inputfile # 输出除了包含字符串"sys"的所有行 - -``` - -#### 连接 - -Sed 允许在一个块中使用括号 (`{…}`) 组合命令。你可以利用这个特性去组合几个地址。例如,我们来比较下面两个命令的输出: -``` -sed -n -e '/usb/{ -/daemon/p -}' inputfile - -sed -n -e '/usb.*daemon/p' inputfile - -``` - -通过在一个块中嵌套命令,我们将在任意顺序中选择包含字符串 “usb” 和 “daemon” 的行。而正则表达式 “usb.*daemon” 将仅匹配在字符串 “daemon” 前面包含 “usb” 字符串的行。 - -离题太长时间后,我们现在重新回去学习各种 Sed 命令。 - -### quit 命令 - -quit 命令(`q`)是指在当前的迭代循环处理结束之后停止 Sed。 - -![The Sed `quit` command][18] - -quit 命令是在到达输入文件的尾部之前停止处理输入的方法。为什么会有人想去那样做呢? 
- -很好的问题,如果你还记得,我们可以使用下面的命令来输出文件中第 1 到第 5 的行: -``` -sed -n -e '1,5p' inputfile - -``` - -对于 大多数 Sed 的实现,工具将循环读取输入文件的所有行,那怕是你只处理结果中的前 5 行。如果你的输入文件包含了几百万行(或者更糟糕的情况是,你从一个无限的数据流(比如像 `/dev/urandom` )中读取)。 - -使用 quit 命令,相同的程序可以被修改的更高效: -``` -sed -e '5q' inputfile - -``` - -由于我在这里并不使用 `-n` 选项,Sed 将在每个循环结束后隐式输出 `pattern` 空间的内容。但是在你处理完第 5 行后,它将退出,并且因此不会去读取更多的数据。 - -我们能够使用一个类似的技巧只输出文件中一个特定的行。那将是一个好机会,你将看到从命令行中提供多个 Sed 表达式的几种方法。下面的三个变体都可以从 Sed 中接受命令,要么是不同的 `-e` 选项,要么是在相同的表达式中新起一行或用分号(`;`)隔开: -``` -sed -n -e '5p' -e '5q' inputfile - -sed -n -e ' - 5p - 5q -' inputfile - -sed -n -e '5p;5q' inputfile - -``` - -如果你还记得,我们在前面看到过能够使用括号将命令组合起来,在这里我们使用它来防止相同的地址重复两次: -``` -# 组合命令 -sed -e '5{ - p - q -}' inputfile - -# Which can be shortened as: -sed '5{p;q;}' inputfile - -# As a POSIX extension, some implementations makes the semi-colon before the closing bracket optional: -sed '5{p;q}' inputfile - -``` - -### substitution 命令 - -你可以将替换命令想像为 Sed 的“查找替换”功能,这个功能在大多数的“所见即所得”的编辑器上都能找到。Sed 的替换命令与之类似,但比它们更强大。替换命令是 Sed 中最著名的命令之一,在网上有大量的关于这个命令的文档。 - -![The Sed `substitution` command][19] - -[在前一篇文章][20]中我们已经讲过它了,因此,在这里就不再重复了。但是,如果你对它的使用不是很熟悉,那么你需要记住下面的这些关键点: - - * 替换命令有两个参数:查找模式和替换字符串:`sed s/:/-----/ inputfile` - - * 命令和它的参数是用任意一个字符来分隔的。这主要看你的习惯,在 99% 的时间中我都使用斜杠,但也会用其它的字符:`sed s%:%-----% inputfile`、`sed sX:X-----X inputfile` 或者甚至是 `sed 's : ----- ' inputfile` - - * 默认情况下,替换命令仅被应用到 `pattern` 空间中匹配到的第一个字符串上。你可以通过在命令之后指定一个匹配指数作为标志来改变这种情况:`sed 's/:/-----/1' inputfile`、`sed 's/:/-----/2' inputfile`、`sed 's/:/-----/3' inputfile`、… - - * 如果你想执行一个全面的替换(即:在 `pattern` 空间上的每个非重叠匹配),你需要增加 `g` 标志:`sed 's/:/-----/g' inputfile` - - * 在字符串替换中,出现的任何一个 `&` 符号都将被与查找模式匹配的子字符串替换:`sed 's/:/-&&&-/g' inputfile`、`sed 's/…./& /g' inputfile` - - * 圆括号(在扩展的正则表达式中的 `(…)` 或者基本的正则表达式中的 `\(…\)`)被引用为捕获组。那是匹配字符串的一部分,可以在替换字符串中被引用。`\1` 是第一个捕获组的内容,`\2` 是第二个捕获组的内容,依次类推:`sed -E 's/(.)(.)/\2\1/g' inputfile`、`sed -E 's/(.):x:(.):(.*)/\1:\3/' inputfile`(后者之所能正常工作是因为 [正则表达式中的量词星号表示重复匹配下去,直到不匹配为止][21],并且它可以匹配许多个字符) - - * 在查找模式或替换字符串时,你可以通过使用一个反斜杠来去除任何字符的特殊意义:`sed 's/:/--\&--/g' inputfile`,`sed 's/\//\\/g' inputfile` - - - - -所有的这些看起来有点抽象,下面是一些示例。首先,我想去显示我的测试输入文件的第一个字段并给它在右侧附加 20 个空格字符,我可以这样写: -``` -sed < inputfile -E -e ' - s/:/ / # 用 20 个空格替换第一个字段的分隔符 - s/(.{20}).*/\1/ # 只保留一行的前 20 个字符 - s/.*/| & |/ # 为了输出好看添加竖条 -' - -``` - -第二个示例是,如果我想将用户 sonia 的 UID/GID 修改为 1100,我可以这样写: -``` -sed -En -e ' - /sonia/{ - s/[0-9]+/1100/g - p - }' inputfile - -``` - -注意在替换命令结束部分的 `g` 选项。这个选项改变了它的行为,因此它将查找全部的 `pattern` 空间并替换,如果没有那个选项,它只替换查找到的第一个。 - -顺便说一下,这也是使用前面讲过的输出(`p`)命令的好机会,可以在命令运行时输出修改前后时刻 `pattern` 空间的内容。因此,为了获得替换前后的内容,我可以这样写: -``` -sed -En -e ' - /sonia/{ - p - s/[0-9]+/1100/g - p - }' inputfile - -``` - -事实上,替换后输出一个行是很常见的用法,因此,替换命令也接受 `p` 选项: -``` -sed -En -e '/sonia/s/[0-9]+/1100/gp' inputfile - -``` - -最后,我就不详细讲替换命令的 `w` 选项了,我们将在稍后的学习中详细介绍。 - -#### delete 命令 - -删除命令(`d`)用于清除 `pattern` 空间的内容,然后立即开始下一个处理循环。这样它将会跳过隐式输出 `pattern` 空间内容的行为,即便是你设置了自动输出标志(AP)也不会输出。 - -![The Sed `delete` command][22] - -只输出一个文件前五行的一个很低效率的方法将是: -``` -sed -e '6,$d' inputfile - -``` - -你猜猜看,我为什么说它很低效率?如果你猜不到,建议你再次去阅读前面的关于 quit 命令的章节,答案就在那里! 
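(提示:回顾前面 quit 命令一节就能找到答案:`6,$d` 虽然只输出前五行,但 Sed 仍会把第 6 行直到文件末尾的每一行都读入、再丢弃;而借助 quit 命令可以在处理完第 5 行之后立刻停止读取。下面把那一节给出的高效写法放在这里,方便对比:)

```
# 输出结果与 sed -e '6,$d' 相同,但在第 5 行之后立即退出,不再读取剩余输入
sed -e '5q' inputfile
```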
- -当你组合使用正则表达式和地址,从输出中删除匹配的行时,delete 命令将非常有用: -``` -sed -e '/systemd/d' inputfile - -``` - -#### next 命令 - -如果 Sed 命令不是在静默模式中运行,这个命令将输出当前 `pattern` 空间的内容,然后,在任何情况下它将读取下一个输入行到 `pattern` 空间中,并使用新的 `pattern` 空间中的内容来运行当前循环中剩余的命令。 - -![The Sed `next` command][23] - -常见的用 next 命令去跳过行的一个示例: -``` -cat -n inputfile | sed -n -e 'n;n;p' - -``` - -在上面的例子中,Sed 将隐式地读取输入文件的第一行。但是 `next` 命令将丢弃对 `pattern` 空间中的内容的输出(不输出是因为使用了 `-n` 选项),并从输入文件中读取下一行来替换 `pattern` 空间中的内容。而第二个 `next` 命令做的事情和前一个是一模一样的,这就实现了跳过输入文件 2 行的目的。最后,这个脚本显式地输出包含在 `pattern ` 空间中的输入文件的第三行的内容。然后,Sed 将启动一个新的循环,由于 `next` 命令,它会隐式地读取第 4 行的内容,然后跳过它,同样地也跳过第 5 行,并输出第 6 行。如此循环,直到文件结束。总体来看,这个脚本就是读取输入文件然后每三行输出一行。 - -使用 next 命令,我们也可以找到一些显示输入文件的前五行的几种方法: -``` -cat -n inputfile | sed -n -e '1{p;n;p;n;p;n;p;n;p}' -cat -n inputfile | sed -n -e 'p;n;p;n;p;n;p;n;p;q' -cat -n inputfile | sed -e 'n;n;n;n;q' - -``` - -更有趣的是,如果你需要根据一些地址来处理行时,next 命令也非常有用: -``` -cat -n inputfile | sed -n '/pulse/p' # 输出包含 "pulse" 的行 -cat -n inputfile | sed -n '/pulse/{n;p}' # 输出包含 "pulse" 之后的行 -cat -n inputfile | sed -n '/pulse/{n;n;p}' # 输出下面的行 - # 下一行 - # 包含 "pulse" 的行 - -``` - -### 使用 `hold` 空间 - -到目前为止,我们所看到的命令都是仅使用了 `pattern` 空间。但是,我们在文章的开始部分已经提到过,还有第二个缓冲区:`hold` 空间,它完全由用户管理。它就是我们在第二节中描述的目标。 - -#### exchange 命令 - -正如它的名字所表示的,exchange 命令(`x`)将交换 `hold` 空间和 `pattern` 空间的内容。记住,你只要没有把任何东西放入到 `hold` 空间中,那么 `hold` 空间就是空的。 - -![The Sed `exchange` command][24] - -作为第一个示例,我们可使用 exchange 命令去反序输出一个输入文件的前两行: -``` -cat -n inputfile | sed -n -e 'x;n;p;x;p;q' - -``` - -当然,在你设置 `hold` 之后你并没有立即使用它的内容,因为只要你没有显式地去修改它, `hold` 空间中的内容就保持不变。在下面的例子中,我在输入一个文件的前五行后,使用它去删除第一行: -``` -cat -n inputfile | sed -n -e ' - 1{x;n} # 交换 hold 和 pattern 空间 - # 保存第 1 行到 hold 空间中 - # 然后读取第 2 行 - 5{ - p # 输出第 5 行 - x # 交换 hold 和 pattern 空间 - # 去取得第 1 行的内容放回到 - # pattern 空间 - } - - 1,5p # 输出第 2 到第 5 行 - # (不要输错了!尝试找出这个规则 - # 没有在第 1 行上运行的原因;) -' - -``` - -#### hold 命令 - -hold 命令(`h`)是用于将 `pattern` 空间中的内容保存到 `hold` 空间中。但是,与 exchange 命令不同的是,`pattern` 空间中的内容不会被改变。hold 命令有两种用法: - - * `h` -将复制 `pattern` 空间中的内容到 `hold` 空间中,将覆盖 `hold` 空间中任何已经存在的内容。 - - * `H` -使用一个独立的新行,追加 `pattern` 空间中的内容到 `hold` 空间中。 - - - - -![The Sed `hold` command][25] - -上面使用 exchange 命令的例子可以使用 hold 命令重写如下: -``` -cat -n inputfile | sed -n -e ' - 1{h;n} # 保存第 1 行的内容到 hold 缓冲区并继续 - 5{ # 到第 5 行 - x # 交换 pattern 和 hold 空间 - # (现在 pattern 空间包含了第 1 行) - H # 在 hold 空间的第 5 行后追回第 1 行 - x # 再次交换取回第 5 行并将第 1 行插入 - # 到 pattern 空间 - } - - 1,5p # 输出第 2 行到第 5 行 - # (不要输错!尝试去打到为什么这个规则 - # 不在第 1 行上运行;) -' - -``` - -#### get 命令 - -get 命令(`g`)与 hold 命令恰好相反:它从 `hold` 空间中取得内容并将它置入到 `pattern` 空间中。同样它也有两种方式: - - * `g` -它将复制 `hold` 空间中的内容并将其放入到 `pattern` 空间,覆盖 `pattern`空间中已存在的任何内容 - - * `G` -使用一个单独的新行,追加 `hold` 空间中的内容到 `pattern` 空间中 - - - - -![The Sed `get` command][26] - -将 hold 命令和 get 命令一起使用,可以允许你去存储并调回数据。作为一个小挑战,我让你重写前一节中的示例,将输入文件的第 1 行放置在第 5 行之后,但是这次必须使用 get 和 hold 命令(注意大小写)而不能使用 exchange 命令。只要运气好,它将使那个方式更简单! 
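(建议先自己动手试一试。如果想对照参考,下面是一种可能的解法草稿:它只用到 hold(`h`)和 get(`G`)命令,没有用 exchange。思路仍然沿用前面的示例:先把第 1 行寄存到 `hold` 空间,照常输出第 2 到第 4 行,到第 5 行时用大写的 `G` 把第 1 行追加在它后面一起输出。这只是多种写法中的一种,并非作者给出的标准答案:)

```
cat -n inputfile | sed -n -e '
    1{ h; d; }   # 把第 1 行存入 hold 空间,然后直接开始下一个循环(不输出)
    2,4p         # 照常输出第 2 到第 4 行
    5{ G; p; }   # 用 G 把 hold 空间中的第 1 行追加到第 5 行之后,一起输出
'
```

运行后会依次输出第 2 行到第 5 行,最后是第 1 行,达到题目要求的效果。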
- -在这期间,我可以给你展示另一个示例,它能给你一些灵感。目标是将拥有登录 shell 权限的用户与其它用户分开: -``` -cat -n inputfile | sed -En -e ' - \=(/usr/sbin/nologin|/bin/false)$= { H;d; } - # 追回匹配的行到 hold 空间 - # 然后继续下一个循环 - p # 输出其它行 - $ { g;p } # 在最后一行上 - # 取得并输出 hold 空间中的内容 -' - -``` - -### 复习 print、delete 和 next - -现在你已经更熟悉使用 hold 空间了,我们回到 print、delete 和 next 命令。我们已经讨论了小写的 `p`、`d` 和 `n` 命令了。而它们也有大写的版本。因为每个命令都有大小写版本,似乎是 Sed 的习惯,这些命令的大写版本将与多行缓冲区有关: - - * `P` -将 `pattern` 空间中第一个新行之前的内容输出 - - * `D` -删除 `pattern` 空间中的内容并且包含新行,然后不读取任何新的输入而是使用剩余的文本去重启一个循环 - - * `N` -使用一个换行符作为新旧数据的分隔符,然后读取并追加一个输入的新行到 `pattern` 空间。继续运行当前的循环。 - - - - -![The Sed uppercase `Delete` command][27] -![The Sed uppercase `Next` command][28] - -这些命令的使用场景主要用于实现队列([FIFO 列表][29])。从一个输入文件中删除最后 5 行就是一个很权威的例子: -``` -cat -n inputfile | sed -En -e ' - 1 { N;N;N;N } # 确保 pattern 空间中包含 5 行 - - N # 追加第 6 行到队列中 - P # 输出队列的第 1 行 - D # 删除队列的第 1 行 -' - -``` - -作为第二个示例,我们可以在两个列上显示输入数据: -``` -# 输出两列 -sed < inputfile -En -e ' - $!N # 追加一个新行到 pattern 空间 - # 除了输入文件的最后一行 - # 当在输入文件的最后一行使用 N 命令时 - # GNU Sed 和 POSIX Sed 的行为是有差异的 - # 需要使用一个技巧去处理这种情况 - # https://www.gnu.org/software/sed/manual/sed.html#N_005fcommand_005flast_005fline - - # 用空间填充第 1 行的第 1 个字段 - # 并丢弃其余行 - s/:.*\n/ \n/ - s/:.*// # 除了第 2 行上的第 1 个字段外,丢弃其余的行 - s/(.{20}).*\n/\1/ # 修剪并连接行 - p # 输出结果 -' - -``` - -### 分支 - -我们刚才已经看到,Sed 因为有 `hold` 空间所以有了缓存的功能。其实它还有测试和分支的指令。因为有这些特性使得 Sed 是一个[图灵完备][30]的语言。虽然它可能看起来很傻,但意味着你可以使用 Sed 写任何程序。你可以实现任何你的目的,但并不意味着实现起来会很容易,而且结果也不一定会很高效。 - -但是,不用担心。在本文中,我们将使用能够展示测试和分支功能的最简单的例子。虽然这些功能乍一看似乎很有限,但请记住,有些人用 Sed 写了 [calculators]、 [Tetris] 或许多其它类型的应用程序! - -#### 标签和分支 - -从某些方面,你可以将 Sed 看到是一个功能有限的汇编语言。因此,你不会找到在高级语言中常见的 “for” 或 “while” 循环,或者 “if … else” 语句,但是你可以使用分支来实现同样的功能。 - -![The Sed `branch` command][31] - -如果你在本文开始部分看到了用流程图描述的 Sed 运行模型,那么你应该知道 Sed 会自动增加程序计数器的值,命令是按程序的指令顺序来运行的。但是,使用分支指令,你可以通过选择程序中的任意命令来改变顺序运行的程序。跳转目的地是使用一个标签来显式定义的。 - -![The Sed `label` command][32] - -这是一个这样的示例: -``` -echo hello | sed -ne ' - :start # 在程序的那个行上放置一个 "start" 标签 - p # 输出 pattern 空间内容 - b start # 继续在 :start 标签上运行 -' | less - -``` - -那个 Sed 程序的行为非常类似于 `yes` 命令:它获取一个流并产生一个包含那个字符串的无限流。 - -切换到一个标签就像我们旁通了 Sed 的自动化特性一样:它既不读取任何输入,也不输出任何内容,更不更新任何缓冲区。它只是跳转到一个不同于源程序指令顺序的另一个指令。 - -值得一提的是,如果在分支命令(`b`)上没有指定一个标签作为它的参数,那么分支将直接切换到程序结束的地方。因此,Sed 将启动一个新的循环。这个特性可以用于去旁通一些指令并且因此可以用于作为块的替代者: -``` -cat -n inputfile | sed -ne ' -/usb/!b -/daemon/!b -p -' - -``` - -#### 条件分支 - -到目前为止,我们已经看到了无条件分支,这个术语可能有点误导嫌疑,因为 Sed 命令总是基于它们的可选地址来作为条件的。 - -但是,在传统意义上,一个无条件分支也是一个分支,当它运行时,将跳转到特定的目的地,而条件分支既有可能也或许不可能跳转到特定的指令,这取决于系统的当前状态。 - -Sed 只有一个条件指令,就是 test(`t`) 命令。只有在当前循环的开始或因为前一个条件分支运行了替换,它才跳转到不同的指令。更多的情况是,只有替换标志被设置时,test 命令才会切换。 - -![The Sed `test` command][3]![The Sed `test` command][33] - -使用 test 指令,你可以在一个 Sed 程序中很轻松地执行一个循环。作为一个特定的示例,你可以用它将一个行填充到某个长度(这是使用正则表达式无法实现的): -``` -# Center text -cut -d: -f1 inputfile | sed -Ee ' - :start - s/^(.{,19})$/ \1 / # 用空格在开始处填充少于 20 个字符的行 - # 并在结束处 - # 添加一个空格 - t start # 如果我们已经添加了一个空格,则返回到 :start 标签 - s/(.{20}).*/| \1 |/ # 保留一个行的前 20 个字符 - # 以修复由于奇数行引起的 - # 差一错误 -' - -``` - -如果你仔细读前面的示例,你可能注意到,在将要把数据“喂”给 Sed 之前,我会通过使用 cut 命令创建一个比特去预处理数据。 - -然后,我们可以只使用 Sed 对程序做一些小的修改来执行相同的任务: -``` -cat inputfile | sed -Ee ' - s/:.*// # 除第 1 个字段外删除剩余字段 - t start - :start - s/^(.{,19})$/ \1 / # 在开始处使用空格去填充 - # 并在结束处填充一个空格 - # 使行的长度不短于 20 个字符 - t start # 如果添加了一个空格,则返回到 :start - s/(.{20}).*/| \1 |/ # 仅保留一个行的前 20 个字符 - # 以修复由于奇数行引起的 - # 差一错误 -' - -``` - -在上面的示例中,你或许对下列的结构感到惊奇: -``` -t start -:start - -``` - -乍一看,在这里的分支并没有用,因为它只是跳转到将要运行的指令处。但是,如果你仔细阅读了 `test` 命令的定义,你将会看到,如果在当前循环的开始或者前一个 test 命令运行后发生了一个替换,分支才会起作用。换句话说就是,test 
指令有清除替换标志的副作用。这也正是上面的代码片段的真实目的。这是一个在包含条件分支的 Sed 程序中经常看到的技巧,用于在使用多个替换命令时避免出现 false 的情况。 - -通过它并不能绝对强制地清除替换标志,我同意这一说法。因为我使用的特定的替换命令在将字符串填充到正确的长度时是幂等的。因此,一个多余的迭代并不会改变结果。不过,我们可以现在再次看一下第二个示例: -``` -# 基于它们的登录程序来分类用户帐户 -cat inputfile | sed -Ene ' - s/^/login=/ - /nologin/s/^/type=SERV / - /false/s/^/type=SERV / - t print - s/^/type=USER / - :print - s/:.*//p -' - -``` - -我希望在这里根据用户默认配置的登录程序,为用户帐户打上 “SERV” 或 “USER” 的标签。如果你运行它,预计你将看到 “SERV” 标签。然而,并没有在输出中跟踪到 “USER” 标签。为什么呢?因为 `t print` 指令不论行的内容是什么,它总是切换,替换标志总是由程序的第一个替换命令来设置。一旦替换标志设置完成后,在下一个行被读取或直到下一个 test 命令之前,这个标志将保持不变。下面我们给出修复这个程序的解决方案: -``` -# 基于用户登录程序来分类用户帐户 -cat inputfile | sed -Ene ' - s/^/login=/ - - t classify # clear the "substitution flag" - :classify - - /nologin/s/^/type=SERV / - /false/s/^/type=SERV / - t print - s/^/type=USER / - :print - s/:.*//p -' - -``` - -### 精确地处理文本 - -Sed 是一个非交互式文本编辑器。虽然是非交互式的,但仍然是文本编辑器。而如果没有在输出中插入一些东西的功能,那它就不算一个完整的文本编辑器。我不是很喜欢它的文本编辑的特性,因为我发现它的语法太难用了(即便是使用标准的 Sed),但有时你难免会用到它。 - -在严格的 POSIX 语法中,所有通过这三个命令:change(`c`)、insert(`i`)或 append(`a`)来处理一些到输出的文字文本,都遵循相同的特定语法:命令字母后面跟着一个反斜杠,并且文本从脚本的下一行上开始插入: -``` -head -5 inputfile | sed ' -1i\ -# List of user accounts -$a\ -# end -' - -``` - -插入多行文本,你必须每一行结束的位置使用一个反斜杠: -``` -head -5 inputfile | sed ' -1i\ -# List of user accounts\ -# (users 1 through 5) -$a\ -# end -' - -``` - -一些 Sed 实现,比如 GNU Sed,在初始的反斜杠后面有一个可选的换行符,即便是在 `--posix` 模式下仍然如此。我在标准中并没有找到任何关于替代该语法的授权(如果是因为我没有在标准中找到那个特性,请在评论区留言告诉我!)。因此,如果对可移植性要求很高,请注意使用它的风险: -``` -# 非 POSIX 语法: -head -5 inputfile | sed -e ' -1i \# List of user accounts -$a\# end -' - -``` - -也有一些 Sed 的实现,让初始的反斜杠完全是可选的。因此毫无疑问,它是一个厂商对 POSIX 标准进行扩展的特定版本,它是否支持那个语法,你需要去查看那个 Sed 版本的手册。 - -在简单概述之后,我们现在来回顾一下这些命令的更多细节,从我还没有介绍的 change 命令开始。 - -#### change 命令 - -change 命令(`c\`)就像 `d` 命令一样删除 `pattern` 空间的内容并开始一个新的循环。唯一的不同在于,当命令运行之后,用户提供的文本是写往输出的。 - -![The Sed `change` command][34] -``` -cat -n inputfile | sed -e ' -/systemd/c\ -# :REMOVED: -s/:.*// # This will NOT be applied to the "changed" text -' - -``` - -如果 change 命令与一个地址范围关联,当到达范围的最后一行时,这个文本将仅输出一次。这在某种程度上成为 Sed 命令将被重复应用在地址范围内所有行这一惯例的一个例外情况: -``` -cat -n inputfile | sed -e ' -19,22c\ -# :REMOVED: -s/:.*// # This will NOT be applied to the "changed" text -' - -``` - -因此,如果你希望将 change 命令重复应用到地址范围内的所有行上,除了将它封装到一个块中之外,你将没有其它的选择: -``` -cat -n inputfile | sed -e ' -19,22{c\ -# :REMOVED: -} -s/:.*// # This will NOT be applied to the "changed" text -' - -``` - -#### insert 命令 - -insert 命令(`i\`)将立即在输出中给出用户提供的文本。它并不以任何方式修改程序流或缓冲区的内容。 - -![The Sed `insert` command][35] -``` -# display the first five user names with a title on the first row -sed < inputfile -e ' -1i\ -USER NAME -s/:.*// -5q -' - -``` - -#### append 命令 - -当输入的下一行被读取时,append 命令(`a\`)将一些文本追加到显示队列。文本在当前循环的结束部分(包含程序结束的情况)或当使用 `n` 或 `N` 命令从输入中读取一个新行时被输出。 - -![The Sed `append` command][36] - -与上面相同的一个示例,但这次是插入到底部而是顶部: -``` -sed < inputfile -e ' -5a\ -USER NAME -s/:.*// -5q -' - -``` - -#### read 命令 - -这是插入一些文本内容到输出流的第四个命令:read 命令(`r`)。它的工作方式与 append 命令完全一样,但不同的,它不从 Sed 脚本中取得硬编码到脚本中的文本,而是在一个输出上写一个文件的内容。 - -read 命令只调度要读取的文件。当刷新 append 队列时,后者被高效地读取,而不是在 read 命令运行时。如果这时候对这个文件有并发的访问,或那个文件不是一个普通的文件(比如,它是一个字符设备或命名管道),或文件在读取期间被修改,这时可能会产生严重的后果。 - -作为一个例证,如果你使用我们将在下一次详细讲的 write 命令,它与 read 命令共同去写入并从一个临时文件中重新读取,你可能会获得一些创造性的结果(使用法语版的 [Shiritori][37] 游戏作为一个例证): -``` -printf "%s\n" "Trois p'tits chats" "Chapeau d' paille" "Paillasson" | -sed -ne ' - r temp - a\ - ---- - w temp -' - -``` - -现在,在流输出中专门用于插入一些文本的 Sed 命令的清单结束了。我的最后一个示例纯属好玩,但是由于我前面提到过有一个 write 命令,这个示例将我们完美地带到下一节,在下一节我们将看到在 Sed 中如何将数据写入到一个外部文件。 - -### 输出的替代 - -Sed 的设计思想是,所有的文本转换都将写入到进程的标准输出上。但是,Sed 
也有一些特性支持将数据发送到替代的目的地。你有两种方式去实现上述的输出目标替换:使用专门的 `write` 命令,或者在一个 `substitution` 命令上添加一个写入标志。 - -#### write 命令 - -write 命令(`w`)追加 `pattern` 空间的内容到给定的目标文件中。POSIX 要求在 Sed 处理任何数据之前,目标文件能够被 Sed 所创建。如果给定的目标文件已经存在,它将被覆写。 - -![The Sed `write` command][38] - -因此,即便是你从未真实地去写入到一个文件中,但文件仍然会被创建。例如,下列的 Sed 程序将创建/覆写这个 “output” 文件,那怕是这个写入命令从未被运行过: -``` -echo | sed -ne ' -q # 立刻退出 -w output # 这个命令从未被运行 -' - -``` - -你可以将几个写入命令指向到同一个目标文件。指向同一个目标文件的所有写入命令将追加那个文件的内容(工作方式几乎与 shell 的重定向符 `>>` 相同): -``` -sed < inputfile -ne ' -/:\/bin\/false$/w server -/:\/usr\/sbin\/nologin$/w server -w output -' -cat server - -``` - -#### 替换命令的写入标志 - -在前面,我们已经学习了替换命令,它有一个 `p` 选项用于在替换之后输出 `pattern` 空间的内容。同样它也提供一个类似功能的 `w` 选项,用于在替换之后将 `pattern` 空间的内容输出到一个文件中: -``` -sed < inputfile -ne ' -s/:.*\/nologin$//w server -s/:.*\/false$//w server -' -cat server - -``` - -我无数次使用过它们,但我从未花时间正式介绍过它们,因此,我决定现在来正式地介绍它们:就像大多数编程语言一样,注释是添加软件不去解析的自由格式文本的一种方法。Sed 的语法很晦涩,我不得不强调在脚本中需要的地方添加足够的注释。否则,除了作者外其他人将几乎无法理解它。 - -![The Sed `comment` command][39] - -不过,和 Sed 的其它部分一样,注释也有它自己的微妙之处。首先并且是最重要的,注释并不是语法结构,但它在 Sed 中很成熟。注释虽然是一个“什么也不做”的命令,但它仍然是一个命令。至少,它是在 POSIX 中定义了的。因此,严格地说,它们只允许使用在其它命令允许使用的地方。 - -大多数 Sed 实现都通过允许行内命令来放松了那种要求,就像在那个文章中我到处都使用的那样。 - -结束那个主题之前,需要说一下 `#n` 注释(`#` 后面紧跟一个`n`,中间没有空格)的特殊情况。如果在脚本的第一行找到这个精确注释,Sed 将切换到静默模式(即:清除自动输出标志),就像在命令行上指定了 `-n` 选项一样。 - -### 很少用得到的命令 - -现在,我们已经学习的命令能让你写出你所用到的 99.99% 的脚本。但是,如果我没有提到剩余的 Sed 命令,那么本教程就不能称为完全指南。我把它们留到最后是因为我们很少用到它。但或许你有实际使用案例,那么你就会发现它们很有用。如果是那样,请不要犹豫,在下面的评论区中把它分享给我们吧。 - -#### 行数命令 - -这个 `=` 命令将向标准输出上显示当前 Sed 正在读取的行数,这个行数就是行计数器的内容。没有任何方式从任何一个 Sed 缓冲区中捕获那个数字,也不能对它进行输出格式化。由于这两个限制使得这个命令的可用性大大降低。 - -![The Sed `line number` command][40] - -请记住,在严格的 POSIX 兼容模式中,当在命令行上给定几个输入文件时,Sed 并不重置那个计数器,而是连续地增长它,就像所有的输入文件是连接在一起的一样。一些 Sed 实现,像 GNU Sed,它就有一个选项可以在每个输入文件读取结束后去重置计数器。 - -#### 明确的 print 命令 - -这个 `l`(小写的字母 `l`)作用类似于 print 命令(`p`),但它是以精确的格式去输出 `pattern` 空间的内容。以下引用自 [POSIX 标准][12]: - -> 在 XBD 转义序列中列出的字符和相关的动作(‘\\\’、‘\a’、‘\b’、‘\f’、‘\r’、‘\t’、‘\v’)将被写为相应的转义序列;在那个表中的 ‘\n’ 是不适用的。不在那个表中的不可打印字符将被写为一个三位八进制数字(在前面使用一个 <反斜杠>),表示字符中的每个字节(最重要的字节在前面)。长行应该被换行,通过写一个 <反斜杠>后跟一个 <换行符> 来表示换行点;发生换行时的长度是不确定的,但应该适合输出设备的具体情况。每个行应该以一个 ‘$’ 标记结束。 - -![The Sed `unambiguous print` command][3]![The Sed `unambiguous print` command][41] - -我怀疑这个命令是在非 [8位规则化信道][42] 上交换数据的。就我本人而言,除了调试用途以外,也从未使用过它。 - -#### transliterate 命令 - -移译transliterate(`y`)命令允许映射 `pattern` 空间的字符从一个源集到一个目标集。它非常类似于 `tr` 命令,但是限制更多。 - -![The Sed `transliterate` command][43] -``` -# The `y` c0mm4nd 1s for h4x0rz only -sed < inputfile -e ' - s/:.*// - y/abcegio/48<3610/ -' - -``` - -虽然 transliterate 命令语法与 substitution 命令的语法有一些相似之处,但它在替换字符串之后不接受任何选项。这个移译总是全局的。 - -请注意,移译命令要求源集和目标集之间要一一对应地转换。这意味着下面的 Sed 程序可能所做的事情并不是你乍一看所想的那样: - -``` -# BEWARE: this doesn't do what you may think! -sed < inputfile -e ' - s/:.*// - y/[a-z]/[A-Z]/ -' - -``` - -### 写在最后的话 -``` -# 它要做什么? -# 提示:答案就在不远处... -sed -E ' - s/.*\W(.*)/\1/ - h - ${ x; p; } - d' < inputfile - -``` - -我们已经学习了所有的 Sed 命令,真不敢相信我们已经做到了!如果你也读到这里了,应该恭喜你,尤其是如果你花费了一些时间,在你的系统上尝试了所有的不同示例! - -正如你所见,Sed 是非常复杂的,不仅因为它的语法比较零乱,也因为许多极端案例或命令行为之间的细微差别。毫无疑问,我们可以将这些归结于历史的原因。尽管它有这么多缺点,但是 Sed 仍然是一个非常强大的工具,甚至到现在,它仍然是大量使用的、为数不多的 Unix 工具箱中的命令之一。是时候总结一下这篇文章了,如果你不先支持我,我将不去总结它:请节选你对喜欢的或最具创意的 Sed 脚本,并共享给我们。如果我收集到的你们共享出的脚本足够多了,我将会把这些 Sed 脚本结集发布! 
- --------------------------------------------------------------------------------- - -via: https://linuxhandbook.com/sed-reference-guide/ - -作者:[Sylvain Leroux][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxhandbook.com/author/sylvain/ -[1]:https://linuxhandbook.com/sed-command-basics/ -[2]:https://gist.github.com/s-leroux/5cb36435bac46c10cfced26e4bf5588c -[3]:https://linuxhandbook.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif -[4]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05/sed-reference-guide.jpeg?resize=702%2C395&ssl=1 -[5]:http://mathworld.wolfram.com/AbstractMachine.html -[6]:https://en.wikipedia.org/wiki/State_(computer_science) -[7]:https://en.wikipedia.org/wiki/Data_buffer -[8]:https://en.wikipedia.org/wiki/Processor_register#Categories_of_registers -[9]:https://www.computerhope.com/jargon/f/flag.htm -[10]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-flowchart.png?w=702&ssl=1 -[11]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-print-command.png?w=702&ssl=1 -[12]:http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sed.html -[13]:https://www.regular-expressions.info/ -[14]:https://chortle.ccsu.edu/FiniteAutomata/Section07/sect07_16.html -[15]:https://www.regular-expressions.info/posix.html#bre -[16]:https://www.regular-expressions.info/posix.html#ere -[17]:https://www.regular-expressions.info/repeat.html#limit -[18]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-quit-command.png?w=702&ssl=1 -[19]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-substitution-command.png?w=702&ssl=1 -[20]:https://linuxhandbook.com/?p=128 -[21]:https://www.regular-expressions.info/repeat.html#greedy -[22]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-delete-command.png?w=702&ssl=1 -[23]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-next-command.png?w=702&ssl=1 -[24]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-exchange-command.png?w=702&ssl=1 -[25]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-hold-command.png?w=702&ssl=1 -[26]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-get-command.png?w=702&ssl=1 -[27]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-delete-upper-command.png?w=702&ssl=1 -[28]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-next-upper-command.png?w=702&ssl=1 -[29]:https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics) -[30]:https://chortle.ccsu.edu/StructuredC/Chap01/struct01_5.html -[31]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-branch-command.png?w=702&ssl=1 -[32]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-label-command.png?w=702&ssl=1 -[33]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-test-command.png?w=702&ssl=1 -[34]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-change-command.png?w=702&ssl=1 -[35]:https://i0.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-insert-command.png?w=702&ssl=1 -[36]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-append-command.png?w=702&ssl=1 -[37]:https://en.wikipedia.org/wiki/Shiritori 
-[38]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-write-command.png?w=702&ssl=1 -[39]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-comment-command.png?w=702&ssl=1 -[40]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-current-line-command.png?w=702&ssl=1 -[41]:https://i2.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-unambiguous-print-command.png?w=702&ssl=1 -[42]:https://en.wikipedia.org/wiki/8-bit_clean -[43]:https://i1.wp.com/linuxhandbook.com/wp-content/uploads/2018/05//sed-transliterate-command.png?w=702&ssl=1 diff --git a/translated/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md b/translated/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md deleted file mode 100644 index 42bd3d3dd9..0000000000 --- a/translated/tech/20180618 What-s all the C Plus Fuss- Bjarne Stroustrup warns of dangerous future plans for his C.md +++ /dev/null @@ -1,163 +0,0 @@ -# 关于 C ++ 的所有争论?Bjarne Stroustrup 警告他的 C++ 未来的计划很危险 - -![](https://regmedia.co.uk/2018/06/15/shutterstock_38621860.jpg?x=442&y=293&crop=1) - -今年早些时候,我们**访谈**了 Bjarne Stroustrup,他是 C++ 语言的创始人,摩根士丹利技术部门的董事总经理,美国哥伦比亚大学计算机科学的客座教授,他写了[一封信][1]邀请那些关注编程语言演进的人去“想想瓦萨号!” - -毫无疑问,对于丹麦人来说,这句话很容易理解,而那些对于 17 世纪的斯堪的纳维亚历史了解不多的人,还需要展开说一下。瓦萨号是一艘瑞典军舰,由国王 Gustavus Adolphus 委托建造。它是在 1628 年 8 月 10 日首航时,当时波罗的海国家中最强大的军舰,但是它在首航几分钟之后就沉没了。 - -巨大的瓦萨号有一个难以解决的设计缺陷:头重脚轻,以至于它被[一阵狂风刮翻了][2]。通过这段翻船历史的回忆,Stroustrup 警示了 C++ 所面临的风险,因为现在越来越多的特性被添加到了 C++ 中。 - -现在已经提议了不少这样的特性。Stroustrup 在他的信中引用了 43 条提议。他认为那些参与 C++ 语言 ISO 标准演进的人(指众所周知的 [WG21][3]),正在努力地让语言更高级,但他们的努力方向却并不一致。 - -在他的信中,他写道: - -> 分开来看,许多提议都很有道理。但将它们综合到一起,这些提议是很愚蠢的,将危害 C++ 的未来。 - -他明确表示,不希望 C++ 重蹈瓦萨号的覆辙,这种渐近式的改进将敲响 C++ 的丧钟。相反,应该吸取瓦萨号的教训,构建一个坚实的基础,吸取经验教训,并做彻底的测试。 - -在瑞士拉普斯威尔(Rapperswill)召开的 C++ 标准化委员会会议之后,本月早些时候,Stroustrup 接受了_《The Register》_ 的采访,回答了有关 C++ 语言下一步发展方向方面的几个问题。(最新版是 C++17,它去年刚发布;下一个版本是 C++20,它正在开发中,预计于 2020 年发布。) - -**Register:在你的信件《想想瓦萨号!》中,你写道:** - -> 在 C++11 开始基础不再完整,而 C++17 中在使基础更加稳固、规范和完整方面几乎没有改善。相反地,却增加了重要接口的复杂度,让人们需要学习的特性数量越来越多。C++ 可能在这种提议的重压之下崩溃 —— 这些提议大多数都不成熟。我们不应该花费大量的时间为专家级用户们(比如我们自己)去创建越来越复杂的东西。~~(还要考虑普通用户的学习曲线,越复杂的东西越不易普及。)~~ - -**对新人来说,C++ 很难吗?如果是这样,你认为怎样的特性让新人更易理解?** - -**Stroustrup:**C++ 的有些东西对于新人来说确实很难。 - -换句话说,C++ 中有些东西对于新人来说,比起 C 或上世纪九十年代的 C++ 更容易理解了。而难点是让大型社区专注于这些部分,并且帮助新手和普通 C++ 用户去规避那些对高级库实现提供支持的部分。 - -我建议使用 [C++ 核心准则][4] 作为实现上述目标的一个辅助。 - -此外,我的 “C++ 教程” 也可以帮助人们在使用现代 C++ 时走上正确的方向,而不会迷失在自上世纪九十年代以来的复杂性中,或困惑于只有专家级的用户才能理解的东西中。第二版的 “C++ 教程” 涵盖了 C++17 和部分 C++20 的内容,这本书即将要出版了。 - -我和其他人给没有编程经验的大一新生教过 C++,只要你不去深挖编程语言的每个晦涩难懂的角落,把注意力集中到 C++ 中最主流的部分,在三个月内新可以学会 C++。 - -“让简单的东西保持简单” 是我长期追求的目标。比如 C++11 的 `range-for` 循环: - -``` -for (int& x : v) ++x; // increment each element of the container v - -``` - -`v` 的位置可以是任何容器。在 C 和 C 风格的 C++ 中,它可能看到的是这样: - -``` -for (int i=0; iC语言GNU C][2]写就的。“我始终认为 C 是一个伟大的语言,它有着非常简单的语法,对于很多方向的开发都很合适,但是我怀疑你会挫折重重,从你的第一个'Hello World'程序开始到你真正能开发出能用的东西当中有很大一步要走”。他认为,如果用现在的标准,如果作为现在的入门语言的话,从 C语言开始的代价太大。 - -在他那个时代,Torvalds 的唯一选择的书就只能是Brian W. Kernighan 和Dennis M. 
Ritchie 合著的[C 编程语言C Programming Language, 2nd Edition][3],在编程圈内也被尊称为K&R。“这本书简单精炼,但是你要先有编程的背景才能欣赏它”。Torvalds 说到。 - -Torvalds 并不是唯一一个推荐K&R 的开源开发者。以下几位也同样引用了这本他们认为值得推荐的书籍,他们有:Linux 和 Oracle 虚拟化开发副总裁,Wim Coekaerts;Linux 开发者Alan Cox; Google 云 CTO Brian Stevens; Canonical 技术运营部副总裁Pete Graner。 - - -如果你今日还想同 C 语言较量一番的话,Jeremy Allison,Samba 的共同发起人,推荐[21世纪的 C 语言21st Century C: C Tips from the New School][4]。他还建议,同时也去阅读一本比较旧但是写的更详细的[C专家编程Expert C Programming: Deep C Secrets][5]和有着20年历史的[UNIX POSIX多线程编程Programming with POSIX Threads][6]。 - - -### 如果不选C 语言, 那选什么? - - Linux 开发者推荐的书籍自然都是他们认为适合今时今日的开发项目的语言工具。这也折射了开发者自身的个人偏好。例如, Allison认为年轻的开发者应该在[Go 编程语言The Go Programming Language ][7]和[Rust 编程Rust with Programming Rust][8]的帮助下去学习 Go 语言和 Rust 语言。 - - -但是超越编程语言来考虑问题也不无道理(尽管这些书传授了你编程技巧)。今日要做些有意义的开发工作的话,"要从那些已经完成了99%显而易见工作的框架开始,然后你就能围绕着它开始写脚本了", Torvalds 推荐了这种做法。 - - -“坦率来说,语言本身远远没有围绕着它的基础架构重要”,他继续道,“可能你会从 Java 或者Kotlin 开始,但那是因为你想为自己的手机开发一个应用,因此安卓 SDK 成为了最佳的选择,又或者,你对游戏开发感兴趣,你选择了一个游戏开发引擎来开始,而通常它们有着自己的脚本语言”。 - - -这里提及的基础架构包括那些和操作系统本身相关的编程书籍。 -Garner 在读完了大名鼎鼎的 K&R后又拜读了W. Richard Steven 的[Unix 网络编程Unix: Network Programming][10]。特别的是,Steven 的[TCP/IP详解,卷1:协议TCP/IP Illustrated, Volume 1: The Protocols][11]在出版了30年之后仍然被认为是必读的。因为 Linux 开发很大程度上和[和网络基础架构有关][12],Garner 也推荐了很多 O’Reilly 的书,包括[Sendmail][13],[Bash][14],[DNS][15],以及[IMAP/POP][16]。 - -Coekaerts也是Maurice Bach的[UNIX操作系统设计The Design of the Unix Operation System][17]的书迷之一。James Bottomley 也是这本书的推崇者,作为一个 Linux 内核开发者,当 Linux 刚刚问世时James就用Bach 的这本书所传授的知识将它研究了个底朝天。 - -### 软件设计知识永不过时 - -尽管这样说有点太局限在技术领域。Stevens 还是说到,“所有的开发者都应该在开始钻研语法前先研究如何设计,[日常物品的设计The Design of Everyday Things][18]是我的最爱”。 - -Coekaerts 喜欢Kernighan 和 Rob Pike合著的[程序设计实践The Practic of Programming][19]。这本关于设计实践的书当 Coekaerts 还在学校念书的时候还未出版,他说道,“但是我把它推荐给每一个人”。 - - -不管何时,当你问一个长期认真对待开发工作的开发者他最喜欢的计算机书籍时,你迟早会听到一个名字和一本书: -Donald Knuth和他所著的[计算机程序设计艺术(1-4A)The Art of Computer Programming, Volumes 1-4A][20]。Dirk Hohndel,VMware 首席开源官,认为这本书尽管有永恒的价值,但他也承认,“今时今日并非及其有用”。(译注:不代表译者观点) - - -### 读代码。大量的读。 - -编程书籍能教会你很多,也请别错过另外一个在开源社区特有的学习机会:[如何阅读代码Code Reading: The Open Source Perspective][21]。那里有不可计数的代码例子阐述如何解决编程问题(以及如何让你陷入麻烦...)。Stevens 说,谈到磨炼编程技巧,在他的书单里排名第一的“书”是 Unix 的源代码。 - -"也请不要忽略从他人身上学习的各种机会。", Cox道,“我是在一个计算机俱乐部里和其他人一起学的 BASIC,在我看来,这仍然是一个学习的最好办法”,他从[精通 ZX81机器码Mastering machine code on your ZX81][22]这本书和 Honeywell L66 B 编译器手册里学习到了如何编写机器码,但是学习技术这点来说,单纯阅读和与其他开发者在工作中共同学习仍然有着很大的不同。 - - -Cox 说,“我始终认为最好的学习方法是和一群人一起试图去解决你们共同关心的一些问题并从中找到快乐,这和你是5岁还是55岁无关”。 - - -最让我吃惊的是这些顶级 Linux 开发者都是在非常底层级别开始他们的开发之旅的,甚至不是从汇编语言或 C 语言,而是从机器码开始开发。毫无疑问,这对帮助开发者理解计算机在非常微观的底层级别是怎么工作的起了非常大的作用。 - - -那么现在你准备好尝试一下硬核 Linux 开发了吗?Greg Kroah-Hartman,这位 Linux 内核过期分支的维护者,推荐了Steve Oualline 的[实用 C 语言编程Practical C Programming][23]和Samuel harbison 以及Guy Steels 合著的[C语言参考手册C: A Reference Manual][24]。接下来请阅读“[如何进行 Linux 内核开发HOWTO do Linux kernel development][25]”,到这时,就像Kroah-Hartman所说,你已经准备好启程了。 - -于此同时,还请你刻苦学习并大量编码,最后祝你在跟随顶级 Linux 开发者脚步的道路上好运相随。 - - --------------------------------------------------------------------------------- - -via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html - -作者:[Steven Vaughan-Nichols][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:DavidChenLiang(https://github.com/DavidChenLiang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html -[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/ 
-[2]:https://www.gnu.org/software/gnu-c-manual/ -[3]:https://amzn.to/2nhyjEO -[4]:https://amzn.to/2vsL8k9 -[5]:https://amzn.to/2KBbWn9 -[6]:https://amzn.to/2M0rfeR -[7]:https://amzn.to/2nhyrnMe -[8]:http://shop.oreilly.com/product/0636920040385.do -[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818 -[10]:https://amzn.to/2MfpbyC -[11]:https://amzn.to/2MpgrTn -[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html -[13]:http://shop.oreilly.com/product/9780596510299.do -[14]:http://shop.oreilly.com/product/9780596009656.do -[15]:http://shop.oreilly.com/product/9780596100575.do -[16]:http://shop.oreilly.com/product/9780596000127.do -[17]:https://amzn.to/2vsCJgF -[18]:https://amzn.to/2APzt3Z -[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US -[20]:https://amzn.to/2OknFsJ -[21]:https://amzn.to/2M4VVL3 -[22]:https://amzn.to/2OjccJA -[23]:http://shop.oreilly.com/product/9781565923065.do -[24]:https://amzn.to/2OjzgrT -[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html diff --git a/translated/tech/20180827 Top 10 Raspberry Pi blogs to follow.md b/translated/tech/20180827 Top 10 Raspberry Pi blogs to follow.md deleted file mode 100644 index 876ffc770d..0000000000 --- a/translated/tech/20180827 Top 10 Raspberry Pi blogs to follow.md +++ /dev/null @@ -1,94 +0,0 @@ -# 10个最值得关注的树莓派博客 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA) - -网上有很多很棒的树莓派爱好者网站,教程,代码仓库,YouTube 频道和其他资源。以下是我最喜欢的十大树莓派博客,排名不分先后。 - -### 1. Raspberry Pi Spy - -树莓派粉丝 Matt Hawkins 从很早开始就在他的网站 Raspberry Pi Spy 上撰写了大量全面且信息丰富的教程。我从这个网站上直接学到了很多东西,而且 Matt 似乎也总是第一个涵盖很多主题的人。在我学习使用树莓派的前三年里,多次在这个网站得到帮助。 - -让每个人感到幸运的是,这个不断采用新技术的网站仍然很强大。我希望看到它继续存在下去,让新社区成员在需要时得到帮助。 - -### 2. Adafruit - -Adafruit 是硬件黑客中最知名的品牌之一。该公司制作和销售漂亮的硬件,并提供由员工、社区成员,甚至 Lady Ada 女士自己编写的优秀教程。 - -除了网上商店,Adafruit 还经营一个博客,这个博客充满了来自世界各地的精彩内容。在博客上可以查看树莓派的类别,特别是在工作日的最后一天,会在 Adafruit Towers 举办名为 [Friday is Pi Day][1] 的活动。 - -### 3. Recantha's Raspberry Pi Pod - -Mike Horne(Recantha)是英国一位重要的树莓派社区成员,负责 [CamJam 和 Potton Pi&Pint][2](剑桥的两个树莓派社团)以及 [Pi Wars][3] (一年一度的树莓派机器人竞赛)。他为其他人建立树莓派社团提供建议,并且总是有时间帮助初学者。Horne和他的共同组织者 Tim Richardson 一起开发了 CamJam Edu Kit (一系列小巧且价格合理的套件,适合初学者使用 Python 学习物理计算)。 - -除此之外,他还运营着 Pi Pod,这是一个包含了世界各地树莓派相关内容的博客。它可能是这个列表中更新最频繁的树莓派博客,所以这是一个把握树莓派社区动向的极好方式。 - -### 4. Raspberry Pi blog - -必须提一下树莓派的官方博客:[Raspberry Pi Foundation][4],这个博客涵盖了基金会的硬件,软件,教育,社区,慈善和青年编码俱乐部的一系列内容。博客上的大型主题是家庭数字化,教育授权,以及硬件版本和软件更新的官方新闻。 - -该博客自 [2011 年][5] 运行至今,并提供了自那时以来所有 1800 多个帖子的 [存档][6] 。你也可以在Twitter上关注[@raspberrypi_otd][7],这是我用 [Python][8] 创建的机器人(教程在这里:[Opensource.com][9])。Twitter 机器人推送来自博客存档的过去几年同一天的树莓派帖子。 - -### 5. RasPi.tv - -另一位开创性的树莓派社区成员是 Alex Eames,通过他的博客和 YouTube 频道 RasPi.tv,他很早就加入了树莓派社区。他的网站为很多创客项目提供高质量、精心制作的视频教程和书面指南。 - -Alex 的网站 [RasP.iO][10] 制作了一系列树莓派附加板和配件,包括方便的 GPIO 端口引脚,电路板测量尺等等。他的博客也拓展到了 [Arduino][11],[WEMO][12] 以及其他小网站。 - -### 6. pyimagesearch - -虽然不是严格的树莓派博客(名称中的“py”是“Python”,而不是“树莓派”),但该网站有着大量的 [树莓派种类][13]。 Adrian Rosebrock 获得了计算机视觉和机器学习领域的博士学位,他的博客旨在分享他在学习和制作自己的计算机视觉项目时所学到的机器学习技巧。 - -如果你想使用树莓派的相机模块学习面部或物体识别,来这个网站就对了。Adrian 在图像识别领域的深度学习和人工智能知识和实际应用是首屈一指的,而且他编写了自己的项目,这样任何人都可以进行尝试。 - -### 7. 
Raspberry Pi Roundup - -这个博客由英国官方树莓派经销商之一 The Pi Hut 进行维护,会有每周的树莓派新闻。这是另一个很好的资源,可以紧跟树莓派社区的最新资讯,而且之前的文章也值得回顾。 - -### 8. Dave Akerman - -Dave Akerman 是研究高空热气球的一流专家,他分享使用树莓派以最低的成本进行热气球发射方面的知识和经验。他会在一张由热气球拍摄的平流层照片下面对本次发射进行评论,也会对个人发射树莓派热气球给出自己的建议。 - -查看 Dave 的博客,了解精彩的临近空间摄影作品。 - -### 9. Pimoroni - -Pimoroni 是一家世界知名的树莓派经销商,其总部位于英国谢菲尔德。这家经销商制作了著名的 [树莓派彩虹保护壳][14],并推出了许多极好的定制附加板和配件。 - -Pimoroni 的博客布局与其硬件设计和品牌推广一样精美,博文内容非常适合创客和业余爱好者在家进行创作,并且可以在有趣的 YouTube 频道 [Bilge Tank][15] 上找到。 - -### 10. Stuff About Code - -Martin O'Hanlon 以树莓派社区成员的身份转为了基金会的员工,他起初出于乐趣在树莓派上开发我的世界作弊器,最近作为内容编辑加入了基金会。幸运的是,马丁的新工作并没有阻止他更新博客并与世界分享有益的趣闻。 - -除了我的世界的很多内容,你还可以在 Python 库,[Blue Dot][16] 和 [guizero][17] 上找到 Martin O'Hanlon 的贡献,以及一些总结性的树莓派技巧。 - ------- - -via: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow - -作者:[Ben Nuttall][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[jlztan](https://github.com/jlztan) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bennuttall -[1]: https://blog.adafruit.com/category/raspberry-pi/ -[2]: https://camjam.me/?page_id=753 -[3]: https://piwars.org/ -[4]: https://www.raspberrypi-spy.co.uk/ -[5]: https://www.raspberrypi.org/blog/first-post/ -[6]: https://www.raspberrypi.org/blog/archive/ -[7]: https://twitter.com/raspberrypi_otd -[8]: https://github.com/bennuttall/rpi-otd-bot/blob/master/src/bot.py -[9]: https://opensource.com/article/17/8/raspberry-pi-twitter-bot -[10]: https://rasp.io/ -[11]: https://www.arduino.cc/ -[12]: http://community.wemo.com/ -[13]: https://www.pyimagesearch.com/category/raspberry-pi/ -[14]: https://shop.pimoroni.com/products/pibow-for-raspberry-pi-3-b-plus -[15]: https://www.youtube.com/channel/UCuiDNTaTdPTGZZzHm0iriGQ -[16]: https://bluedot.readthedocs.io/en/latest/# -[17]: https://lawsie.github.io/guizero/ - diff --git a/translated/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md b/translated/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md deleted file mode 100644 index 2ee61e0223..0000000000 --- a/translated/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md +++ /dev/null @@ -1,250 +0,0 @@ -在Linux中怎么运行Ms-Dos游戏和程序 -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/dosbox-720x340.png) - -你是否想过尝试一些经典的MS-DOS游戏和像Turbo C++这样的C++ 编译器?这篇教程将会介绍如何使用**DOSBox**在Linux环境下运行MS-DOS的游戏和程序。**DOSBox**是一个x86平台的DOS模拟器,可以用来运行经典的DOS游戏和程序。 DOSBox模拟带有声音,图形,鼠标,操纵杆和调制解调器等的因特尔 x86 电脑,它允许你运行许多旧的MS-DOS游戏和程序,这些游戏和程序根本无法在任何现代PC和操作系统上运行,例如Microsoft Windows XP及更高版本,Linux和FreeBSD。 DOSBox是免费的,使用C ++编程语言编写并在GPL下分发。 - -### 在Linux上安装DOSBox - -DOSBox在大多数Linux发行版的默认仓库中都能找的到 - -在Arch Linux及其衍生版如Antergos,Manjaro Linux上: -``` -$ sudo pacman -S dosbox - -``` - -在 Debian, Ubuntu, Linux Mint上: -``` -$ sudo apt-get install dosbox - -``` - -在 Fedora上: -``` -$ sudo dnf install dosbox - -``` - -### 配置DOSBox - -DOSBox是一个开箱即用的软件,它不需要进行初始化配置。 它的配置文件位于**`〜/ .dosbox` **文件夹中,名为`dosbox-x.xx.conf`。 在此配置文件中,你可以编辑/修改各种设置,例如以全屏模式启动DOSBox,全屏使用双缓冲,设置首选分辨率,鼠标灵敏度,启用或禁用声音,扬声器,操纵杆等等。 如前所述,默认设置即可正常工作。 你可以不用进行任何更改。 - -### 在Linux中运行MS-DOS上的游戏和程序 - -终端运行以下命令启动DOSBox: -``` -$ dosbox - -``` - -下图就是DOSBox的界面 - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt.png) - -正如你所看到的,DOSBox带有自己的类似DOS的命令提示符和一个虚拟的`Z:\`Drive,如果你熟悉MS-DOS的话,你会发现在DOSBox环境下工作不会有任何问题。 - -这是`dir`命令(在Linux中等同于`ls`命令)的输出: - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/dir-command-output.png) - 
-如果你是第一次使用DOSBox,你可以通过在DOSBox提示符中输入以下命令来查看关于DOSBox的简介: -``` -intro - -``` - -在介绍部分按ENTER进入下一页 - -要查看DOS中最常用命令的列表,请使用此命令: -``` -help - -``` - -要查看DOSBox中所有支持的命令的列表,请键入: -``` -help /all - -``` - -记好了这些命令应该在DOSBox提示符中使用,而不是在Linux终端中使用。 - -DOSBox还支持一些实用的键盘组合键。 下图是能有效使用DOSBox的默认键盘快捷键。 - -![](http://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-keyboard-shortcuts.png) - -要退出DOSBox,只需键入并按Enter: -``` -exit -``` - -默认情况下,DOSBox开始运行时的正常屏幕窗口大小如上所示 - -要直接在全屏启动dosbox,请编辑`dosbox-x.xx.conf`文件并将**fullscreen**变量的值设置为**enable**。 之后,DosBox将以全屏模式启动。 如果要返回正常屏幕,请按 **ALT+ENTER** - -希望你能掌握DOSBox的这些基本用法 - -让我们继续安装一些DOS程序和游戏。 - -首先,我们需要在Linux系统中创建目录来保存程序和游戏。 我将创建两个名为**`〜/ dosprograms` **和**`〜/ dosgames` **的目录,第一个用于存储程序,后者用于存储游戏。 -``` -$ mkdir ~/dosprograms ~/dosgames - -``` -出于本指南的目的,我将向你展示如何安装**Turbo C ++**程序和Mario游戏。我们首先将看到如何安装Turbo。 -下载最新的Turbo C ++编译器并将其解压到**`〜/ dosprograms` **目录中。 我已经将turbo c ++保存在在我的**〜/ dosprograms / TC /**目录中了。 -``` -$ ls dosprograms/tc/ - -BGI BIN CLASSLIB DOC EXAMPLES FILELIST.DOC INCLUDE LIB README README.COM - -``` - -运行 Dosbox: -``` -$ dosbox - -``` - -将**`〜/ dosprograms` **目录挂载为DOSBox中的虚拟驱动器 **C:\** -``` -Z:\>mount c ~/dosprograms - -``` - -你会看到类似下面的输出 -``` -Drive C is mounted as local directory /home/sk/dosprograms. - -``` - - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-1.png) - -现在,使用命令切换到C盘: -``` -Z:\>c: - -``` - -然后切换到**tc / bin**目录: -``` -Z:\>cd tc/bin - -``` - -最后,运行turbo c ++可执行文件: -``` -Z:\>tc.exe - -``` - -**备注:**只需输入前几个字母,然后按ENTER键自动填充文件名。 - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-4.png) - -你现在将进入Turbo C ++控制台。 - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-5.png) - -创建新文件(ATL + F)并开始编程: - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-6.png) - -你可以同样安装和运行其他经典DOS程序。 - -**故障排除:** - -运行turbo c ++或其他任何dos程序时,你可能会遇到以下错误: - -``` -DOSBox switched to max cycles, because of the setting: cycles=auto. If the game runs too fast try a fixed cycles amount in DOSBox's options. Exit to error: DRC64:Unhandled memory reference - -``` - -要解决此问题,编辑**〜/ .dosbox / dosbox-x.xx.conf **文件: -``` -$ nano ~/.dosbox/dosbox-0.74.conf - -``` - -找到以下变量: -``` -core=auto - -``` - -并更改其值为: -``` -core=normal -``` - -现在,让我们看看如何运行基于DOS的游戏,例如 **Mario Bros VGA** - -从 [**这里**][1]下载Mario游戏,并将其解压到Linux中的**〜/ dosgames **目录 - -运行 DOSBox: -``` -$ dosbox - -``` - -我们刚才使用了虚拟驱动器 **c:** 来运行dos程序。现在让我们使用 **d:** 作为虚拟驱动器来运行游戏。 - -在DOSBox提示符下,运行以下命令将 **~/dosgames** 目录挂载为虚拟驱动器 **d** -``` -Z:\>mount d ~/dosgames - -``` - -进入驱动器D: -``` -Z:\>d: - -``` - -然后进入mario游戏目录并运行 **mario.exe** 文件来启动游戏 -``` -Z:\>cd mario - -Z:\>mario.exe - -``` - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-7.png) - -开始玩游戏: - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/Mario-game-in-dosbox.png) - -你可以同样像上面所说的那样运行任何基于DOS的游戏。 [**点击这里**] [2]查看可以使用DOSBOX运行的游戏的完整列表。 - -### 总结 - -尽管DOSBOX并不能作为MS-DOS的完全替代品,并且还缺少MS-DOS中的许多功能,但它足以安装和运行大多数的DOS游戏和程序。 - -有关更多详细信息,请参阅官方[**DOSBox手册**][3] - -这就是全部内容。希望这对你有用。更多优秀指南即将到来。 敬请关注! - -干杯! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[way-ww](https://github.com/way-ww) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.dosgames.com/game/mario-bros-vga -[2]: https://www.dosbox.com/comp_list.php -[3]: https://www.dosbox.com/DOSBoxManual.html diff --git a/translated/tech/20180907 6 open source tools for writing a book.md b/translated/tech/20180907 6 open source tools for writing a book.md deleted file mode 100644 index ef1edd8cff..0000000000 --- a/translated/tech/20180907 6 open source tools for writing a book.md +++ /dev/null @@ -1,67 +0,0 @@ -6 个用于写书的开源工具 -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4) - -我在 1993 年首次使用并贡献了免费和开源软件,从那时起我一直是一名开源软件开发人员和传播者。尽管我一个被记住的项目是[ FreeDOS 项目][1], 一个 DOS 操作系统的开源实现,但我已经编写或者贡献了数十个开源软件项目。 - -我最近写了一本关于 FreeDOS 的书。 [_使用 FreeDOS_][2]是我庆祝 FreeDOS 出现 24 周年。它是关于安装和使用 FreeDOS、关于我最喜欢的 DOS 程序的文章,以及 DOS 命令行和 DOS 批处理编程的快速参考指南的集合。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。 - -_使用 FreeDOS_ 可在知识共享署名(cc-by)国际公共许可证下获得。你可以从[FreeDO S电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为那些喜欢纸质的人提供打印版本。) - -这本书几乎完全是用开源软件制作的。我想分享一下对用来创建、编辑和生成_使用 FreeDOS_的工具的看法。 - -### Google 文档 - -[Google 文档][3]是我使用的唯一不是开源软件的工具。我将我的第一份草稿上传到 Google 文档,这样我就能与编辑器进行协作。我确信有开源协作工具,但 Google 文档能够让两个人同时编辑同一个文档、发表评论、编辑建议和更改跟踪 - 更不用说它使用段落样式和能够下载完成的文档 - 这使其成为编辑过程中有价值的一部分。 - -### LibreOffice - -我开始使用 [LibreOffice][4] 6.0,但我最终使用 LibreOffice 6.1 完成了这本书。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联示例代码或用不同的样式代表文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式允许我轻松修改页面的布局和外观。 - -### GIMP - -我的书包括很多 DOS 程序截图,网站截图和 FreeDOS logo。我用 [GIMP][5] 修改了这本书的图像。通常,只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更易于打印布局的图像。 - -### Inkscape - -大多数 FreeDOS logo 和小鱼吉祥物都是 SVG 格式,我使用 [Inkscape][6]来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,角落里有 FreeDOS logo。实验后,我发现在 Inkscape 中创建一个我想要的横幅 SVG 图案更容易,然后我将其粘贴到页眉中。 - -### ImageMagick - -虽然使用 GIMP 来完成这项工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。 - -### Sigil - -LibreOffice 可以直接导出到 EPUB 格式,但它不是个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB,但 LibreOffice 6.0 没有包含我的图像。它还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB 并使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的样子。 - -### QEMU - -因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS,包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs。但我喜欢 [QEMU] [9] 的简单性。QEMU 控制台允许你以 PPM 转储屏幕,这非常适合抓取截图来包含在书中。 - -当然,我不得不提到在 [Linux][11] 上运行 [GNOME][10]。我使用 Linux 的 [Fedora][12] 发行版。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/writing-book-open-source-tools - -作者:[Jim Hall][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[1]: http://www.freedos.org/ -[2]: http://www.freedos.org/ebook/ -[3]: https://www.google.com/docs/about/ -[4]: https://www.libreoffice.org/ -[5]: https://www.gimp.org/ -[6]: https://inkscape.org/ -[7]: https://www.imagemagick.org/ -[8]: https://sigil-ebook.com/ -[9]: https://www.qemu.org/ -[10]: https://www.gnome.org/ -[11]: 
https://www.kernel.org/ -[12]: https://getfedora.org/ diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/translated/tech/20180928 What containers can teach us about DevOps.md deleted file mode 100644 index d514d8ba0b..0000000000 --- a/translated/tech/20180928 What containers can teach us about DevOps.md +++ /dev/null @@ -1,105 +0,0 @@ -容器技术对指导我们 DevOps 的一些启发 -====== - -容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) - -容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑DevOps的三个实践][1]. - - -### 工作流 - -**容器中的工作流** - -每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施,不管你在里面如何折腾,只要对外提供正常的功能就好。一般来说,容器内运行的应用,一般作为整个应用系统架构的一部分:比如 web API,数据库,任务执行,缓存系统,垃圾回收器等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 - -**现实中的工作流** - -那些跟“容器”一样独立工作的团队,也可以借鉴这种限制容器占用资源的策略。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造[Spacely’s Sprockets][2]等),还是技术中的工作流(开发、测试、试运行、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用我们这种线性的工作流有效降低了工作耦合性。 - -**DevOps 中的工作流** - -DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及忽视的点,并最终修复它们。 - - -> “践行这样的工作流后,可以避免传递一个已知的缺陷到工作流的下游,避免产生一个可能会导致全局性能退化的局部优化,持续优化工作流的性能,持续加深对于系统的理解” - -–Gene Kim, [支撑DevOps的三个实践][3], IT 革命, 2017.4.25 - -### 反馈 - -**容器中的反馈** - -除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的工作状态。比如 目前[流行的][5][Prometheus][4],可以用来从容器和容器集群中收集相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但也同时带来不透明的特性,这时从中快速的收集信息,从而解决发生在其内部出现的问题,就显得尤为重要了。 - -**现实中的反馈** - -在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速的定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队处理大量不相关的事件时,那些真正需要快速反馈的重要信息,很容易就被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快的意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不太有趣了)。 - -**DevOps and feedback** - -DevOps 中的第二条原则,就是快速收集所有的相关有用信息,这样在出现的问题影响到其他开发进程之前,就可以被识别出。DevOps 团队应该努力去“优化下游“,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时的响应。 - -> "快速的反馈对于提高技术的质量、可用性、安全性至关重要。" - -–Gene Kim, et al., DevOps 手册:如何在技​​术组织中创造世界级的敏捷性,可靠性和安全性, IT 革命, 2016 - -### 持续实验与学习 - -**容器中的持续实验与学习** - -如何让”持续的实验与学习“更具操作性是一个不小的挑战。容器让我们的开发工程师和运营团队,在不需要掌握太多边缘或难以理解的东西情况下,依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录、分享。 - -**现实中的持续实验与学习** - -举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被要求对一个运行某个大学核心IT部门网站的Apache虚拟主机进行更改。由于没有易于使用的测试环境,我直接在生产的站点上进行了配置修改,当时觉得配置没问题就发布了,几分钟后,我隔壁无意中听到了同事说: - -”等会,网站挂了?“ - -“没错,怎么回事?” - -很多人蒙圈了…… - -在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 - 我老板的老板的老板来到我的工位上,问发生了什么事。 -“别担心,”她告诉我。“我们不会生你的气,这是一个错误,现在你已经学会了。“ - -而在容器中,这种情形很容易的进行测试,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 - -**DevOps 中的持续实验与学习** - -做实验的初衷是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是经验,所要不要担心失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。 - -随着系统变得越来越琐碎,我们更需要将注意力发在特殊的点上:上面提到的两条原则主要关注的是流程的目前全貌,而持续的学习则是关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。 - -> "无风险的实验让我们能够不懈的改进我们的工作,但也要求我们使用之前没有用过的工作方式" - -–Gene Kim, et al., [凤凰计划:让你了解 IT、DevOps以及如何取得商业成功][7], IT 革命, 2013 - -### 容器技术给我们 DevOps 上的启迪 - -学习如何有效地使用容器可以学习DevOps的三条原则:工作流,反馈以及持续实验和学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破孤岛,并作为一个团队工作,以提高全局性能和深度 -了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈模式,以便在问题发生影响之前发现问题。 -最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。 - - - --------------------------------------------------------------------------------- - -via: 
https://opensource.com/article/18/9/containers-can-teach-us-devops - -作者:[Chris Hermansen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/littleji) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ -[2]: https://en.wikipedia.org/wiki/The_Jetsons -[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops -[4]: https://prometheus.io/ -[5]: https://opensource.com/article/18/9/prometheus-operational-advantage -[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI -[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git a/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md deleted file mode 100644 index 8184021df9..0000000000 --- a/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md +++ /dev/null @@ -1,485 +0,0 @@ -如何使用chkconfig和systemctl命令启用或禁用linux服务 -====== - -对于Linux管理员来说这是一个重要(美妙)的话题,所以每个人都必须知道并练习怎样才能更高效的使用它们。 - - - -在Linux中,无论何时当你安装任何带有服务和守护进程的包,系统默认会把这些进程添加到 “init & systemd” 脚本中,不过此时它们并没有被启动 。 - - - -我们需要手动的开启或者关闭那些服务。Linux中有三个著名的且一直在被使用的init系统。 - - - -### 什么是init系统? - - - -在以Linux/Unix 为基础的操作系统上,init (初始化的简称) 是内核引导系统启动过程中第一个启动的进程。 - - - -init的进程id(pid)是1,除非系统关机否则它将会一直在后台运行。 - - - -Init 首先根据 `/etc/inittab` 文件决定Linux运行的级别,然后根据运行级别在后台启动所有其他进程和应用程序。 - - - -BIOS, MBR, GRUB 和内核程序在启动init之前就作为linux的引导程序的一部分开始工作了。 - - - -下面是Linux中可以使用的运行级别(从0~6总共七个运行级别) - - - - * **`0:`** 关机 - - * **`1:`** 单用户模式 - - * **`2:`** 多用户模式(没有NFS) - - * **`3:`** 完全的多用户模式 - - * **`4:`** 系统未使用 - - * **`5:`** 图形界面模式 - - * **`:`** 重启 - - - - - -下面是Linux系统中最常用的三个init系统 - - - - * System V (Sys V) - - * Upstart - - * systemd - - - - - -### 什么是 System V (Sys V)? - - - -System V (Sys V)是类Unix系统第一个传统的init系统之一。init是内核引导系统启动过程中第一支启动的程序 ,它是所有程序的父进程。 - - - -大部分Linux发行版最开始使用的是叫作System V(Sys V)的传统的init系统。在过去的几年中,已经有好几个init系统被发布用来解决标准版本中的设计限制,例如:launchd, the Service Management Facility, systemd 和 Upstart。 - - - -与传统的 SysV init系统相比,systemd已经被几个主要的Linux发行版所采用。 - - - -### 什么是 Upstart? - - - -Upstart 是一个基于事件的/sbin/init守护进程的替代品,它在系统启动过程中处理任务和服务的启动,在系统运行期间监视它们,在系统关机的时候关闭它们。 - - - -它最初是为Ubuntu而设计,但是它也能够完美的部署在其他所有Linux系统中,用来代替古老的System-V。 - - - -Upstart被用于Ubuntu 从 9.10 到 Ubuntu 14.10和基于RHEL 6的系统,之后它被systemd取代。 - - - -### 什么是 systemd? - - - -Systemd是一个新的init系统和系统管理器, 和传统的SysV相比,它可以用于所有主要的Linux发行版。 - - - -systemd 兼容 SysV 和 LSB init脚本。 它可以直接替代Sys V init系统。systemd是被内核启动的第一支程序,它的PID 是1。 - - - -systemd是所有程序的父进程,Fedora 15 是第一个用systemd取代upstart的发行版。systemctl用于命令行,它是管理systemd的守护进程/服务的主要工具,例如:(开启,重启,关闭,启用,禁用,重载和状态) - - - -systemd 使用.service 文件而不是bash脚本 (SysVinit 使用的). systemd将所有守护进程添加到cgroups中排序,你可以通过浏览`/cgroup/systemd` 文件查看系统等级。 - - - -### 如何使用chkconfig命令启用或禁用引导服务? 
- - - -chkconfig实用程序是一个命令行工具,允许你在指定运行级别下启动所选服务,以及列出所有可用服务及其当前设置。 - - - -此外,它还允许我们从启动中启用或禁用服务。前提是你有超级管理员权限(root或者sudo)运行这个命令。 - - - -所有的服务脚本位于 `/etc/rd.d/init.d`文件中 - - - -### 如何列出运行级别中所有的服务 - - - - `--list` 参数会展示所有的服务及其当前状态 (启用或禁用服务的运行级别) - - - -``` - - # chkconfig --list - - NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off - - abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off - - acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off - - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - . - - . - -``` - - - -### 如何查看指定服务的状态 - - - -如果你想查看运行级别下某个服务的状态,你可以使用下面的格式匹配出需要的服务。 - - - -比如说我想查看运行级别中`auditd`服务的状态 - - - -``` - - # chkconfig --list| grep auditd - - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off - -``` - - - -### 如何在指定运行级别中启用服务 - - - -使用`--level`参数启用指定运行级别下的某个服务,下面展示如何在运行级别3和运行级别5下启用 `httpd` 服务。 - - - -``` - - # chkconfig --level 35 httpd on - -``` - - - -### 如何在指定运行级别下禁用服务 - - - -同样使用 `--level`参数禁用指定运行级别下的服务,下面展示的是在运行级别3和运行级别5中禁用`httpd`服务。 - - - -``` - - # chkconfig --level 35 httpd off - -``` - - - -### 如何将一个新服务添加到启动列表中 - - - -`-–add`参数允许我们添加任何信服务到启动列表中, 默认情况下,新添加的服务会在运行级别2,3,4,5下自动开启。 - - - -``` - - # chkconfig --add nagios - -``` - - - -### 如何从启动列表中删除服务 - - - -可以使用 `--del` 参数从启动列表中删除服务,下面展示的事如何从启动列表中删除Nagios服务。 - - - -``` - - # chkconfig --del nagios - -``` - - - -### 如何使用systemctl命令启用或禁用开机自启服务? - - - -systemctl用于命令行,它是一个基础工具用来管理systemd的守护进程/服务,例如:(开启,重启,关闭,启用,禁用,重载和状态) - - - -所有服务创建的unit文件位与`/etc/systemd/system/`. - - - -### 如何列出全部的服务 - - - -使用下面的命令列出全部的服务(包括启用的和禁用的) - - - -``` - - # systemctl list-unit-files --type=service - - UNIT FILE STATE - - arp-ethers.service disabled - - auditd.service enabled - - [email protected] enabled - - blk-availability.service disabled - - brandbot.service static - - [email protected] static - - chrony-wait.service disabled - - chronyd.service enabled - - cloud-config.service enabled - - cloud-final.service enabled - - cloud-init-local.service enabled - - cloud-init.service enabled - - console-getty.service disabled - - console-shell.service disabled - - [email protected] static - - cpupower.service disabled - - crond.service enabled - - . - - . - - 150 unit files listed. - -``` - - - -使用下面的格式通过正则表达式匹配出你想要查看的服务的当前状态。下面是使用systemctl命令查看`httpd` 服务的状态。 - - - -``` - - # systemctl list-unit-files --type=service | grep httpd - - httpd.service disabled - -``` - - - -### 如何让指定的服务开机自启 - - - -使用下面格式的systemctl命令启用一个指定的服务。启用服务将会创建一个符号链接,如下可见 - - - -``` - - # systemctl enable httpd - - Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. - -``` - - - -运行下列命令再次确认服务是否被启用。 - - - -``` - - # systemctl is-enabled httpd - - enabled - -``` - - - -### 如何禁用指定的服务 - - - -运行下面的命令禁用服务将会移除你启用服务时所创建的 - - - -``` - - # systemctl disable httpd - - Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. 
- -``` - - - -运行下面的命令再次确认服务是否被禁用 - - - -``` - - # systemctl is-enabled httpd - - disabled - -``` - - - -### 如何查看系统当前的运行级别 - - - -使用systemctl命令确认你系统当前的运行级别,'运行级'别仍然由systemd管理,不过,运行级别对于systemd来说是一个历史遗留的概念。所以我建议你全部使用systemctl命令。 - - - -我们当前处于`运行级别3`, 下面显示的是`multi-user.target`。 - - - -``` - - # systemctl list-units --type=target - - UNIT LOAD ACTIVE SUB DESCRIPTION - - basic.target loaded active active Basic System - - cloud-config.target loaded active active Cloud-config availability - - cryptsetup.target loaded active active Local Encrypted Volumes - - getty.target loaded active active Login Prompts - - local-fs-pre.target loaded active active Local File Systems (Pre) - - local-fs.target loaded active active Local File Systems - - multi-user.target loaded active active Multi-User System - - network-online.target loaded active active Network is Online - - network-pre.target loaded active active Network (Pre) - - network.target loaded active active Network - - paths.target loaded active active Paths - - remote-fs.target loaded active active Remote File Systems - - slices.target loaded active active Slices - - sockets.target loaded active active Sockets - - swap.target loaded active active Swap - - sysinit.target loaded active active System Initialization - - timers.target loaded active active Timers - -``` - --------------------------------------------------------------------------------- - - - -via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/ - - - -作者:[Prakash Subramanian][a] - -选题:[lujun9972][b] - -译者:[way-ww](https://github.com/way-ww) - -校对:[校对者ID](https://github.com/校对者ID) - - - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - - -[a]: https://www.2daygeek.com/author/prakash/ - -[b]: https://github.com/lujun9972 - diff --git a/translated/tech/20181216 Schedule a visit with the Emacs psychiatrist.md b/translated/tech/20181216 Schedule a visit with the Emacs psychiatrist.md new file mode 100644 index 0000000000..7e05a0e930 --- /dev/null +++ b/translated/tech/20181216 Schedule a visit with the Emacs psychiatrist.md @@ -0,0 +1,62 @@ +[#]:collector:(lujun9972) +[#]:translator:(lujun9972) +[#]:reviewer:( ) +[#]:publisher:( ) +[#]:url:( ) +[#]:subject:(Schedule a visit with the Emacs psychiatrist) +[#]:via:(https://opensource.com/article/18/12/linux-toy-eliza) +[#]:author:(Jason Baker https://opensource.com/users/jason-baker) + +预约 Emacs 心理医生 +====== +Eliza 是一个隐藏于某个 Linux 最流行文本编辑器中的自然语言处理聊天机器人。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-eliza.png?itok=3ioiBik_) + +欢迎你,今天时期 24 天的 Linux 命令行玩具的又一天。如果你是第一次访问本系列,你可能会问什么是命令行玩具呢。我们将会逐步确定这个概念,但一般来说,它可能是一个游戏,或任何能让你在终端玩的开心的其他东西。 + +可能你们已经见过了很多我们之前挑选的那些玩具,但我们依然希望对所有人来说都至少有一件新鲜事物。 + +今天的选择是 Emacs 中的一个彩蛋:Eliza,Rogerian 心理医生,一个准备好倾听你述说一切的终端玩具。 + +旁白:虽然这个玩具很好玩,但你的健康不是用来开玩笑的。请在假期期间照顾好你自己,无论时身体上还是精神上,若假期中的压力和焦虑对你的健康产生负面影响,请考虑找专业人士进行指导。真的有用。 + +要启动 [Eliza][1],首先,你需要启动 Emacs。很有可能 Emacs 已经安装在你的系统中了,但若没有,它基本上也肯定在你默认的软件仓库中。 + +由于我要求本系列的工具一定要时运行在终端内,因此使用 **-nw** 标志来启动 Emacs 让它在你的终端模拟器中运行。 + +``` +$ emacs -nw +``` + +在 Emacs 中,输入 M-x doctor 来启动 Eliza。对于像我这样有 Vim 背景的人可能不知道这是什么意思,只需要按下 escape,输入 x 然后输入 doctor。然后,向它倾述所有假日的烦恼吧。 + +Eliza 历史悠久,最早可以追溯到 1960 年代中期的 MIT 人工智能实验室。[维基百科 ][2] 上有它历史的详细说明。 + +Eliza 并不是 Emacs 中唯一的娱乐工具。查看 [手册 ][3] 可以看到一整列好玩的玩具。 + + +![Linux toy:eliza animated][5] + +你有什么喜欢的命令行玩具值得推荐吗?我们时间不多了,但我还是想听听你的建议。请在下面评论中告诉我,我会查看的。另外也欢迎告诉我你们对本次玩具的想法。 + +请一定要看看昨天的玩具,[带着这个复刻版吃豆人来到 Linux 终端游乐中心 
][6],然后明天再来看另一个玩具! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/linux-toy-eliza + +作者:[Jason Baker][a] +选题:[lujun9972][b] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[b]: https://github.com/lujun9972 +[1]: https://www.emacswiki.org/emacs/EmacsDoctor +[2]: https://en.wikipedia.org/wiki/ELIZA +[3]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Amusements.html +[4]: /file/417326 +[5]: https://opensource.com/sites/default/files/uploads/linux-toy-eliza-animated.gif (Linux toy: eliza animated) +[6]: https://opensource.com/article/18/12/linux-toy-myman diff --git a/translated/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md b/translated/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md new file mode 100644 index 0000000000..5d0c059f9b --- /dev/null +++ b/translated/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md @@ -0,0 +1,132 @@ +[#]: collector: (lujun9972) +[#]: translator: (leommxj) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How ASLR protects Linux systems from buffer overflow attacks) +[#]: via: (https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +ASLR是如何保护Linux系统免受缓冲区溢出攻击的 +====== + +![](https://images.idgesg.net/images/article/2019/01/shuffling-cards-100784640-large.jpg) + +地址空间随机化( ASLR )是一种操作系统用来抵御缓冲区溢出攻击的内存保护机制。这种技术使得系统上运行的进程的内存地址无法预测,使得与这些进程有关的漏洞变得更加难以利用。 + +ASLR目前在 Linux , Windows 以及 MacOS 系统上都有使用。其最早出现在 2005 的Linux系统上。2007 年,这项技术被 Windows 和 MacOS 部署使用。尽管 ASLR 在各个系统上都提供相同的功能,却有着不同的实现。 + +ASLR的有效性依赖于整个地址空间布局对于攻击者保持未知。此外,只有编译时作为位置无关可执行文件(PIE)的程序才能得到ASLR最大的保护,因为只有这样,可执行文件的所有代码节区才会被加载在随机地址。PIE 代码不管绝对地址是多少都可以正确执行。 + +**[ 参见:[用于排除Linux故障的宝贵提示和技巧][1] ]** + +### ASLR 的局限性 + +尽管 ASLR 使得对系统漏洞的利用更加困难了,但其保护系统的能力是有限的。理解关于 ASLR 的以下几点是很重要的: + + * 不能解决漏洞,而是增加利用漏洞的难度 + * 并不追踪或报告漏洞 + * 不能对编译时没有开启 ASLR 支持的二进制文件提供保护 + * 不能避免被绕过 + + + +### ASLR 是如何工作的 + + + +ASLR通过对攻击者在进行缓冲区溢出攻击时所要用到的内存布局中的偏移做随机化来加大攻击成功的难度,从而增强了系统的控制流完整性。 + + +通常认为 ASLR 在64位系统上效果更好,因为64位系统提供了更大的熵(可随机的地址范围)。 + +### ASLR 是否正在你的 Linux 系统上运行? 
+ +下面展示的两条命令都可以告诉你你的系统是否启用了 ASLR 功能 + +``` +$ cat /proc/sys/kernel/randomize_va_space +2 +$ sysctl -a --pattern randomize +kernel.randomize_va_space = 2 +``` + +上方指令结果中的数值 (2) 表示 ASLR 工作在全随机化模式。其可能为下面的几个数值之一: + +``` +0 = Disabled +1 = Conservative Randomization +2 = Full Randomization +``` + +如果你关闭了 ASLR 并且执行下面的指令,你将会注意到前后两条**ldd**的输出是完全一样的。**ldd**命令会加载共享对象并显示他们在内存中的地址。 + +``` +$ sudo sysctl -w kernel.randomize_va_space=0 <== disable +[sudo] password for shs: +kernel.randomize_va_space = 0 +$ ldd /bin/bash + linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses + libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000) + libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000) + libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000) + /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000) +$ ldd /bin/bash + linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses + libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000) + libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000) + libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000) + /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000) +``` + +如果将其重新设置为**2**来启用 ASLR,你将会看到每次运行**ldd**,得到的内存地址都不相同。 + +``` +$ sudo sysctl -w kernel.randomize_va_space=2 <== enable +[sudo] password for shs: +kernel.randomize_va_space = 2 +$ ldd /bin/bash + linux-vdso.so.1 (0x00007fff47d0e000) <== first set of addresses + libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f1cb7ce0000) + libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f1cb7cda000) + libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1cb7af0000) + /lib64/ld-linux-x86-64.so.2 (0x00007f1cb8045000) +$ ldd /bin/bash + linux-vdso.so.1 (0x00007ffe1cbd7000) <== second set of addresses + libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fed59742000) + libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fed5973c000) + libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fed59552000) + /lib64/ld-linux-x86-64.so.2 (0x00007fed59aa7000) +``` + +### 尝试绕过 ASLR + +尽管这项技术有很多优点,绕过ASLR的攻击并不罕见,主要有以下几类: + + * 利用地址泄露 + * 访问与特定地址关联的数据 + * 针对ASLR 实现的缺陷来猜测地址,常见于系统熵过低或 ASLR 实现不完善。 + * 利用侧信道攻击 + +### 总结 + +ASLR 有很大的价值,尤其是在64位系统上运行并被正确实现时。虽然不能避免被绕过,但这项技术的确使得利用系统漏洞变得更加困难了。这份参考资料可以提供更多有关细节 [on the Effectiveness of Full-ASLR on 64-bit Linux][2] ,这篇论文介绍了一种利用分支预测绕过ASLR的技术 [bypass ASLR][3]。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[leommxj](https://github.com/leommxj) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html +[2]: https://cybersecurity.upv.es/attacks/offset2lib/offset2lib-paper.pdf +[3]: http://www.cs.ucr.edu/~nael/pubs/micro16.pdf +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20190109 Configure Anaconda on Emacs - iD.md b/translated/tech/20190109 Configure Anaconda on Emacs - iD.md new file mode 100644 index 0000000000..09dcfd9a1c --- /dev/null +++ b/translated/tech/20190109 Configure Anaconda on Emacs - iD.md @@ -0,0 +1,117 @@ +[#]:collector:(lujun9972) 
+[#]:translator:(lujun9972) +[#]:reviewer:( ) +[#]:publisher:( ) +[#]:url:( ) +[#]:subject:(Configure Anaconda on Emacs – iD) +[#]:via:(https://idevji.com/configure-anaconda-on-emacs/) +[#]:author:(Devji Chhanga https://idevji.com/author/admin/) + +在 Emacs 上配置 Anaconda +====== + +也许我所最求的究极 IDE 就是 [Emacs][1] 了。我的目标是使 Emacs 成为一款全能的 Python IDE。本文描述了如何在 Emacs 上配置 Anaconda。 + +我的配置信息: + +``` +OS: Trisquel 8.0 +Emacs: GNU Emacs 25.3.2 +``` + +快捷键说明 [(参见完全指南 )][2]: + +``` +C-x = Ctrl + x +M-x = Alt + x +RET = ENTER +``` + +### 1。下载并安装 Anaconda + +#### 1.1 下载: +[从这儿 ][3] 下载 Anaconda。你应该下载 Python 3.x 版本因为 Python 2 在 2020 年就不再支持了。你无需预先安装 Python 3.x。安装脚本会自动进行安装。 + +#### 1.2 安装: + +``` + cd ~/Downloads +bash Anaconda3-2018.12-Linux-x86.sh +``` + + +### 2。将 Anaconda 添加到 Emacs + +#### 2.1 将 MELPA 添加到 Emacs +我们需要用到 _anaconda-mode_ 这个 Emacs 包。该包位于 MELPA 仓库中。Emacs25 需要手工添加该仓库。 + +[注意:点击本文查看如何将 MELPA 添加到 Emacs。][4] + +#### 2.2 为 Emacs 安装 anaconda-mode 包 + +``` +M-x package-install RET +anaconda-mode RET +``` + +#### 2.3 为 Emacs 配置 anaconda-mode + +``` +echo "(add-hook 'python-mode-hook 'anaconda-mode)" > ~/.emacs.d/init.el +``` + + +### 3。在 Emacs 上通过 Anaconda 运行你第一个脚本 + +#### 3.1 创建新 .py 文件 + +``` +C-x C-f +HelloWorld.py RET +``` + +#### 3.2 输入下面代码 + +``` +print ("Hello World from Emacs") +``` + +#### 3.3 运行之 + +``` +C-c C-p +C-c C-c +``` + +输出为 + +``` +Python 3.7.1 (default, Dec 14 2018, 19:46:24) +[GCC 7.3.0] :: Anaconda, Inc. on linux +Type "help", "copyright", "credits" or "license" for more information. +>>> python.el: native completion setup loaded +>>> Hello World from Emacs +>>> +``` + +我是受到 [Codingquark;][5] 的影响才开始使用 Emacs 的。 +有任何错误和遗漏请在评论中之处。干杯! + +-------------------------------------------------------------------------------- + +via: https://idevji.com/configure-anaconda-on-emacs/ + +作者:[Devji Chhanga][a] +选题:[lujun9972][b] +译者:[lujun9972](https://github.com/lujun9972) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://idevji.com/author/admin/ +[b]: https://github.com/lujun9972 +[1]: https://www.gnu.org/software/emacs/ +[2]: https://www.math.uh.edu/~bgb/emacs_keys.html +[3]: https://www.anaconda.com/download/#linux +[4]: https://melpa.org/#/getting-started +[5]: https://codingquark.com diff --git a/translated/tech/20190123 Mind map yourself using FreeMind and Fedora.md b/translated/tech/20190123 Mind map yourself using FreeMind and Fedora.md new file mode 100644 index 0000000000..2e2331e698 --- /dev/null +++ b/translated/tech/20190123 Mind map yourself using FreeMind and Fedora.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mind map yourself using FreeMind and Fedora) +[#]: via: (https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/) +[#]: author: (Paul W. 
Frields https://fedoramagazine.org/author/pfrields/) + +在 Fedora 中使用 FreeMind 制作自己的思维导图 +====== +![](https://fedoramagazine.org/wp-content/uploads/2019/01/freemind-816x345.jpg) + +你自己的思维导图一开始听起来有些牵强。它是关于神经通路么?还是心灵感应?完全不是。相反,自己的思维导图是一种在视觉上向他人描述自己的方式。它还展示了你拿来描述的特征之间的联系。这是一种以聪明的同时可控的与他人分享信息的有用方式。你可以使用任何思维导图应用来做到。本文向你展示如何使用 Fedora 中提供的 [FreeMind][1]。 + +### 获取应用 + +FreeMind 已经出现有一段时间了。虽然 UI 有点过时,你也可以使用新的,但它是一个功能强大的应用,提供了许多构建思维导图的选项。当然,它是 100% 开源的。还有其他思维导图应用可供 Fedora 和 Linux 用户使用。查看[此前一篇涵盖多个思维导图选择的文章][2]。 + +如果你运行的是 Fedora Workstation,请使用“软件”应用从 Fedora 仓库安装 FreeMind。或者在终端中使用这个 [sudo][3] 命令: + +``` +$ sudo dnf install freemind +``` + +你可以从 Fedora Workstation 中的 GNOME Shell Overview 启动应用。或者使用桌面环境提供的应用启动服务。默认情况下,FreeMind 会显示一个新的空白脑图: + +![][4] +FreeMind 初始(空白)思维导图 + +脑图由链接的项目或描述(节点)组成。当你想到与节点相关的内容时,只需创建一个与其连接的新节点即可。 + +### + +单击初始节点。编辑文本并点击**回车**将其替换为你的姓名。你就能开始你的思维导图。 + +如果你必须向某人充分描述自己,你会怎么想?可能会有很多东西。你平时做什么?你喜欢什么?你不喜欢什么?你有什么价值?你有家庭吗?所有这些都可以在节点中体现。 + +要添加节点连接,请选择现有节点,然后单击**插入**,或使用“灯泡”图标作为新的子节点。要在与新子级相同的层级添加另一个节点,请使用**回车**。 + +如果你弄错了,别担心。你可以使用 **Delete** 键删除不需要的节点。内容上没有规则。但是最好是短节点。它们能让你在创建导图时思维更快。简洁的节点还能让其他浏览者更轻松地查看和理解。 + +该示例使用节点规划了每个主要类别: + +![][5] +个人思维导图,第一级 + +你可以为这些区域中的每个区域另外迭代一次。让你的思想自由地连接想法以生成导图。不要担心“做得正确“。最好将所有内容从头脑中移到显示屏上。这是下一级导图的样子。 + +![][6] +个人思维导图,第二级 + +你可以以相同的方式扩展任何这些节点。请注意你在示例中可以了解多少有关 John Q. Public 的信息。 + +### 如何使用你的个人思维导图 + +这是让团队或项目成员互相介绍的好方法。你可以将各种格式和颜色应用于导图以赋予其个性。当然,这些在纸上做很有趣。但是在 Fedora 中安装一个就意味着你可以随时修复错误,甚至可以在你改变的时候做出修改。 + +祝你在探索个人思维导图上玩得开心! + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/ + +作者:[Paul W. Frields][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/pfrields/ +[b]: https://github.com/lujun9972 +[1]: http://freemind.sourceforge.net/wiki/index.php/Main_Page +[2]: https://fedoramagazine.org/three-mind-mapping-tools-fedora/ +[3]: https://fedoramagazine.org/howto-use-sudo/ +[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-17-04-1024x736.png +[5]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-32-38-1024x736.png +[6]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-38-00-1024x736.png diff --git a/translated/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md b/translated/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md new file mode 100644 index 0000000000..93f73664a6 --- /dev/null +++ b/translated/tech/20190130 Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Olive is a new Open Source Video Editor Aiming to Take On Biggies Like Final Cut Pro) +[#]: via: (https://itsfoss.com/olive-video-editor) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Olive 是一个新的开源视频编辑器,一款类似 Final Cut Pro 的工具 +====== + +[Olive][1] 是一个正在开发的新开源视频编辑器。这个非线性视频编辑器旨在提供高端专业视频编辑软件的免费替代品。目标高么?我认为是的。 + +如果你读过我们的 [Linux 中的最佳视频编辑器][2]这篇文章,你可能已经注意到大多数“专业级”视频编辑器(如 [Lightworks][3] 或 DaVinciResolve)既不免费也不开源。 + +[Kdenlive][4] 和 Shotcut 也出现在了文章中,但它通常无法达到专业视频编辑的标准(这是许多 Linux 用户说的)。 + 
+爱好者和专业视频编辑之间的这种差距促使 Olive 的开发人员启动了这个项目。 + +![Olive Video Editor][5]Olive Video Editor Interface + +Libre Graphics World 中有一篇详细的[关于 Olive 的评论][6]。实际上,这是我第一次知道 Olive 的地方。如果你有兴趣了解更多信息,请阅读该文章。 + +### 在 Linux 中安装 Olive 视频编辑器 + +提醒你一下。Olive 正处于发展的早期阶段。你会发现很多 bug 和缺失/不完整的功能。你不应该把它当作你的主要视频编辑器。 + +如果你想测试 Olive,有几种方法可以在 Linux 上安装它。 + +#### 通过 PPA 在基于 Ubuntu 的发行版中安装 Olive + +你可以在 Ubuntu、Mint 和其他基于 Ubuntu 的发行版使用官方 PPA 安装 Olive。 + +``` +sudo add-apt-repository ppa:olive-editor/olive-editor +sudo apt-get update +sudo apt-get install olive-editor +``` + +#### 通过 Snap 安装 Olive + +如果你的 Linux 发行版支持 Snap,则可以使用以下命令进行安装。 + +``` +sudo snap install --edge olive-editor +``` + +#### 通过 Flatpak 安装 Olive + +如果你的 [Linux 发行版支持 Flatpak][7],你可以通过 Flatpak 安装 Olive 视频编辑器。 + +#### 通过 AppImage 使用 Olive + +不想安装吗?下载 [AppImage][8] 文件,将其设置为可执行文件并运行它。 + +32 位和 64 位 AppImage 文件都有。你应该下载相应的文件。 + +Olive 也可用于 Windows 和 macOS。你可以从它的[下载页面][9]获得它。 + +### 想要支持 Olive 视频编辑器的开发吗? + +如果你喜欢 Olive 尝试实现的功能,并且想要支持它,那么你可以通过以下几种方式。 + +如果你在测试 Olive 时发现一些 bug,请到它们的 GitHub 仓库中报告。 + +如果你是程序员,请浏览 Olive 的源代码,看看你是否可以通过编码技巧帮助项目。 + +在经济上为项目做贡献是另一种可以帮助开发开源软件的方法。你可以通过成为赞助人来支持 Olive。 + +如果你没有支持 Olive 的金钱或编码技能,你仍然可以帮助它。在社交媒体或你经常访问的 Linux/软件相关论坛和群组中分享这篇文章或 Olive 的网站。一点微小的口碑都能间接地帮助它。 + +### 你如何看待 Olive? + +评判 Olive 还为时过早。我希望能够持续快速开发,并且在年底之前发布 Olive 的稳定版(如果我没有过于乐观的话)。 + +你如何看待 Olive?你是否认同开发人员针对专业用户的目标?你希望 Olive 拥有哪些功能? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/olive-video-editor + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.olivevideoeditor.org/ +[2]: https://itsfoss.com/best-video-editing-software-linux/ +[3]: https://www.lwks.com/ +[4]: https://kdenlive.org/en/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?resize=800%2C450&ssl=1 +[6]: http://libregraphicsworld.org/blog/entry/introducing-olive-new-non-linear-video-editor +[7]: https://itsfoss.com/flatpak-guide/ +[8]: https://itsfoss.com/use-appimage-linux/ +[9]: https://www.olivevideoeditor.org/download.php +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/olive-video-editor-interface.jpg?fit=800%2C450&ssl=1 diff --git a/translated/tech/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md b/translated/tech/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md new file mode 100644 index 0000000000..adcf6de0d3 --- /dev/null +++ b/translated/tech/20190212 Two graphical tools for manipulating PDFs on the Linux desktop.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Two graphical tools for manipulating PDFs on the Linux desktop) +[#]: via: (https://opensource.com/article/19/2/manipulating-pdfs-linux) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +两款 Linux 桌面中的图形化操作 PDF 的工具 +====== +PDF-Shuffler 和 PDF Chain 是在 Linux 中修改 PDF 的绝佳工具。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4) + +由于我谈论并且写了些工作中使用 PDF 及其工具的文章,有些人认为我喜欢这种格式。其实我并不是,由于各种原因,我不会深入它。 + +我不会说 PDF 是我个人和职业生活中的一个躲不开的坏事 - 相反,它们不是那么好。通常即使有更好的替代方案来交付文档,我也必须使用 PDF。 + +当我使用 PDF 时,通常是在白天工作时在其他的操作系统上使用,我使用 
Adobe Acrobat 进行操作。但是当我必须在 Linux 桌面上使用 PDF 时呢?我们来看看我用来操作 PDF 的两个图形工具。 + +### PDF-Shuffler + +顾名思义,你可以使用 [PDF-Shuffler][1] 在 PDF 文件中移动页面。它可以做得更多,但软件的功能是有限的。这并不意味着 PDF-Shuffler 没用。它有用,很有用。 + +你可以将 PDF-Shuffler 用来: + + * 从 PDF 文件中提取页面 + * 将页面添加到文件中 + * 重新排列文件中的页面 + + + +请注意,PDF-Shuffler 有一些依赖项,如 pyPDF 和 python-gtk。通常,通过包管理器安装它是最快且最不令人沮丧的途径。 + +假设你想从 PDF 中提取页面,也许是作为你书中的样本章节。选择**文件>添加**打开 PDF 文件。 + +![](https://opensource.com/sites/default/files/uploads/pdfshuffler-book.png) + +要提取第 7 页到第 9 页,请按住 Ctrl 并单击选择页面。然后,右键单击并选择**导出选择**。 + +![](https://opensource.com/sites/default/files/uploads/pdfshuffler-export.png) + +选择要保存文件的目录,为其命名,然后单击**保存**。 + +要添加文件 - 例如,要添加封面或重新插入已扫描的且已签名的合同或者应用 - 打开 PDF 文件,然后选择**文件>添加**并找到要添加的 PDF 文件。单击**打开**。 + +PDF-Shuffler 有个不好的东西就是在你正在处理的 PDF 文件末尾添加页面。单击并将添加的页面拖动到文件中的所需位置。你一次只能在文件中单击并拖动一个页面。 + +![](https://opensource.com/sites/default/files/uploads/pdfshuffler-move.png) + +### PDF Chain + +我是 [PDFtk][2] 的忠实粉丝,它是一个可以对 PDF 做一些有趣操作的命令行工具。由于我不经常使用它,我不记得所有 PDFtk 的命令和选项。 + +[PDF Chain][3] 是 PDFtk 命令行的一个很好的替代品。它可以让你一键使用 PDFtk 最常用的命令。无需使用菜单,你可以: + + * 合并 PDF(包括旋转一个或多个文件的页面) +  * 从 PDF 中提取页面并将其保存到单个文件中 +  * 为 PDF 添加背景或水印 +  * 将附件添加到文件 + +![](https://opensource.com/sites/default/files/uploads/pdfchain1.png) + +你也可以做得更多。点击**工具**菜单,你可以: + + * 从 PDF 中提取附件 +  * 压缩或解压缩文件 +  * 从文件中提取元数据 +  * 用外部[数据][4]填充 PDF 表格 +  * [扁平化][5] PDF +  * 从 PDF 表单中删除 [XML 表格结构][6](XFA)数据 + + + +老实说,我只使用 PDF Chain 或 PDFtk 提取附件、压缩或解压缩 PDF。其余的对我来说基本没听说。 + +### 总结 + +Linux 上用于处理 PDF 的工具数量一直让我感到吃惊。它们的特性和功能的广度和深度也是如此。我通常可以找到一个,无论是命令行还是图形,它都能做我需要的。在大多数情况下,PDF Mod 和 PDF Chain 对我来说效果很好。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/manipulating-pdfs-linux + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://savannah.nongnu.org/projects/pdfshuffler/ +[2]: https://en.wikipedia.org/wiki/PDFtk +[3]: http://pdfchain.sourceforge.net/ +[4]: http://www.verypdf.com/pdfform/fdf.htm +[5]: http://pdf-tips-tricks.blogspot.com/2009/03/flattening-pdf-layers.html +[6]: http://en.wikipedia.org/wiki/XFA diff --git a/translated/tech/20190213 How to use Linux Cockpit to manage system performance.md b/translated/tech/20190213 How to use Linux Cockpit to manage system performance.md new file mode 100644 index 0000000000..b2c5136494 --- /dev/null +++ b/translated/tech/20190213 How to use Linux Cockpit to manage system performance.md @@ -0,0 +1,87 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use Linux Cockpit to manage system performance) +[#]: via: (https://www.networkworld.com/article/3340038/linux/sitting-in-the-linux-cockpit.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +如何使用 Linux Cockpit 来管理系统性能 +====== + +Linux Cockpit 是一个基于 Web 界面的应用,它提供了对系统的图形化管理。看下它能够控制哪些。 + +![](https://images.idgesg.net/images/article/2019/02/cockpit_airline_airplane_control_pilot-by-southerlycourse-getty-100787904-large.jpg) + +如果你还没有尝试过相对较新的 Linux Cockpit,你可能会对它所能做的一切感到惊讶。它是一个用户友好的基于 Web 的控制台,提供了一些非常简单的方法来管理 Linux 系统 —_通过**web**_。你可以通过一个非常简单的 web 来监控系统资源、添加或删除帐户、监控系统使用情况、关闭系统以及执行其他一些其他任务。它的设置和使用也非常简单。 + +虽然许多 Linux 系统管理员将大部分时间花在命令行上,但使用 PuTTY 等工具访问远程系统并不总能提供最有用的命令输出。Linux 
Cockpit 提供了图形和易于使用的表单,来查看性能情况并对系统进行更改。 + +Linux Cockpit 能让你查看系统性能的许多方面并进行配置更改,但任务列表可能取决于你使用的特定 Linux。任务分类包括以下内容: + + * 监控系统活动(CPU、内存、磁盘 IO 和网络流量) — **系统** +  * 查看系统日志条目 — **日志** +  * 查看磁盘分区的容量 — **存储** +  * 查看网络活动(发送和接收) — **网络** +  * 查看用户帐户 — **帐户** +  * 检查系统服务的状态 — **服务** +  * 提取已安装应用的信息 — **应用** +  * 查看和安装可用更新(如果以 root 身份登录)并在需要时重新启动系统 — **软件更新** +  * 打开并使用终端窗口 — **终端** + + + +某些 Linux Cockpit 安装还允许你运行诊断报告、转储内核、检查 SELinux(安全)设置和列表订阅。 + +以下是 Linux Cockpit 显示的系统活动示例: + +![cockpit activity][1] Sandra Henry-Stocker + +Linux Cockpit 显示系统活动 + +### 如何设置 Linux Cockpit + +在某些 Linux 发行版(例如,最新的 RHEL)中,Linux Cockpit 可能已经安装并可以使用。在其他情况下,你可能需要采取一些简单的步骤来安装它并使其可使用。 + +例如,在 Ubuntu 上,这些命令应该可用: + +``` +$ sudo apt-get install cockpit +$ man cockpit <== just checking +$ sudo systemctl enable --now cockpit.socket +$ netstat -a | grep 9090 +tcp6 0 0 [::]:9090 [::]:* LISTEN +$ sudo systemctl enable --now cockpit.socket +$ sudo ufw allow 9090 +``` + +启用 Linux Cockpit 后,在浏览器中打开 **https:// :9090**。 + +可以在 [Cockpit Project]][2] 中找到可以使用 Cockpit 的发行版列表以及安装说明。 + +没有额外的配置,Linux Cockpit 将无法识别 **sudo** 权限。如果你被禁止使用 Cockpit 进行更改,你将会在你点击的按钮上看到一个红色的国际禁止标志。 + +要使 sudo 权限有效,你需要确保用户位于 **/etc/group** 文件中的 **wheel**(RHEL)或 **adm** (Debian)组中,即服务器当以 root 用户身份登录 Cockpit 并且用户在登录 Cockpit 时选择“重用我的密码”时,已勾选了 Server Administrator。 + +在你管理的系统在千里之外或者没有控制台时,能使用图形界面控制也不错。虽然我喜欢在控制台上工作,但我偶然也乐于见到图形或者按钮。Linux Cockpit 为日常管理任务提供了非常有用的界面。 + +在 [Facebook][3] 和 [LinkedIn][4] 中加入 Network World 社区,对你喜欢的文章评论。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3340038/linux/sitting-in-the-linux-cockpit.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/cockpit-activity-100787994-large.jpg +[2]: https://cockpit-project.org/running.html +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md b/translated/tech/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md new file mode 100644 index 0000000000..a2dfb77515 --- /dev/null +++ b/translated/tech/20190217 How to Change User Password in Ubuntu -Beginner-s Tutorial.md @@ -0,0 +1,129 @@ +[#]: collector: (lujun9972) +[#]: translator: (An-DJ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Change User Password in Ubuntu [Beginner’s Tutorial]) +[#]: via: (https://itsfoss.com/change-password-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Ubuntu下如何修改用户密码 [新手教程] +====== +**想要在Ubuntu下修改root用户的密码?那我们来学习下如何在Ubuntu Linux下修改任意用户的密码。我们会讨论在终端下修改和在图形界面(GUI)修改两种做法** + +那么,在Ubuntu下什么时候会需要修改密码呢?这里我给出如下两种场景。 + +当你刚安装[Ubuntu][1]系统时,你会创建一个用户并且为之设置一个密码。这个初始密码可能安全性较弱或者太过于复杂,你会想要对它做出修改。 + +如果你是系统管理员,你可能需要去修改在你管理的系统内其他用户的密码。 + +当然,你可能会有其他的一些原因做这样的一件事。不过现在问题来了,我们到底如何在Ubuntu或Linux系统下修改单个用户的密码呢? 
+ +在这个快速教程中,我将会展示给你在Ubuntu中如何使用命令行和图形界面(GUI)两种方式修改密码。 + +### 在Ubuntu中修改用户密码[通过命令行] + +![如何在Ubuntu Linux下修改用户密码][2] + +在Ubuntu下修改用户密码其实非常简单。事实上,在任何Linux发行版上修改的方式都是一样的,因为你要使用的是叫做 passwd 的普通Linux命令来达到此目的。 + +如果你想要修改你的当前密码,只需要简单地在终端执行此命令: + +``` +passwd +``` + +系统会要求你输入当前密码和两次新的密码。 + +在键入密码时,你不会从屏幕上看到任何东西。这在UNIX和Linux系统中是非常正常的表现。 + +``` +passwd + +Changing password for abhishek. + +(current) UNIX password: + +Enter new UNIX password: + +Retype new UNIX password: + +passwd: password updated successfully +``` + +由于这是你的管理员账户,你刚刚修改了Ubuntu下sudo的密码,但你甚至没有意识到这个操作。 + +![在Linux命令行中修改用户密码][3] + +如果你想要修改其他用户的密码,你也可以使用passwd命令来做。但是在这种情况下,你将不得不使用sudo。 + +``` +sudo passwd +``` + +如果你对密码已经做出了修改,不过之后忘记了,不要担心。你可以[很容易地在Ubuntu下重置密码][4]. + +### 修改Ubuntu下root用户密码 + +默认情况下,Ubuntu中root用户是没有密码的。不必惊讶,你并不是在Ubuntu下一直使用root用户。不太懂?让我快速地给你解释下。 + +当[安装Ubuntu][5]时,你会被强制创建一个用户。这个用户拥有管理员访问权限。这个管理员用户可以通过sudo命令获得root访问权限。但是,该用户使用的是自身的密码,而不是root账户的密码(因为就没有)。 + +你可以使用**passwd**命令来设置或修改root用户的密码。然而,在大多数情况下,你并不需要它,而且你不应该去做这样的事。 + +你将不得不使用sudo命令(对于拥有管理员权限的账户)。如果root用户的密码之前没有被设置,它会要求你设置。另外,你可以使用已有的root密码对它进行修改。 + +``` +sudo password root +``` + +### 在Ubuntu下使用图形界面(GUI)修改密码 + +我这里使用的是GNOME桌面环境,Ubuntu版本为18.04。这些步骤对于其他桌面环境和Ubuntu版本应该差别不大。 + +打开菜单(按下Windows/Super键)并搜索Settings。 + +在Settings中,向下滚动一段距离打开进入Details。 + +![在Ubuntu GNOME Settings中进入Details][6] + +在这里,点击Users获取系统下可见的所有用户。 + +![Ubuntu下用户设置][7] + +你可以选择任一你想要的用户,包括你的主要管理员账户。你需要先解锁用户并点击密码(password)区域。 + +![Ubuntu下修改用户密码][8] + +你会被要求设置密码。如果你正在修改的是你自己的密码,你将必须也输入当前使用的密码。 + +![Ubuntu下修改用户密码][9] + +做好这些后,点击上面的Change按钮,这样就完成了。你已经成功地在Ubuntu下修改了用户密码。 + +我希望这篇快速精简的小教程能够帮助你在Ubuntu下修改用户密码。如果你对此还有一些问题或建议,请在下方留下评论。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/change-password-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[An-DJ](https://github.com/An-DJ) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.ubuntu.com/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-password-ubuntu-linux.png?resize=800%2C450&ssl=1 +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-linux-1.jpg?resize=800%2C253&ssl=1 +[4]: https://itsfoss.com/how-to-hack-ubuntu-password/ +[5]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-2.jpg?resize=800%2C484&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-3.jpg?resize=800%2C488&ssl=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-4.jpg?resize=800%2C555&ssl=1 +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-user-password-ubuntu-gui-1.jpg?ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/change-password-ubuntu-linux.png?fit=800%2C450&ssl=1 diff --git a/translated/tech/20190219 Logical - in Bash.md b/translated/tech/20190219 Logical - in Bash.md new file mode 100644 index 0000000000..1b69e80e00 --- /dev/null +++ b/translated/tech/20190219 Logical - in Bash.md @@ -0,0 +1,229 @@ +[#]: collector: "lujun9972" +[#]: translator: "zero-mk" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Logical & in Bash" +[#]: via: "https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash" +[#]: author: "Paul 
Brown https://www.linux.com/users/bro66" + +Bash中的逻辑和(`&`) +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ampersand-brian-taylor-unsplash.jpg?itok=Iq6vxSNK) + +有人可能会认为两篇文章中的`&`意思差不多,但实际上并不是。虽然 [第一篇文章讨论了如何在命令末尾使用`&`来将命令转到后台运行][1] 之后分解为解释流程管理, 第二篇文章将 [`&` 看作引用文件描述符的方法][2], 这些文章让我们知道了,与 `<` 和 `>` 结合使用后,你可以将输入或输出引导到别的地方。 + +但我们还没接触过作为 AND 操作符使用的`&`。所以,让我们来看看。 + +### & 是一个按位运算符 + +如果您完全熟悉二进制数操作,您肯定听说过 AND 和 OR 。这些是按位操作,对二进制数的各个位进行操作。在 Bash 中,使用`&`作为AND运算符,使用`|`作为 OR 运算符: + +**AND** + +``` +0 & 0 = 0 + +0 & 1 = 0 + +1 & 0 = 0 + +1 & 1 = 1 +``` + +**OR** + +``` +0 | 0 = 0 + +0 | 1 = 1 + +1 | 0 = 1 + +1 | 1 = 1 + +``` + + +您可以通过对任何两个数字进行 AND 运算并使用`echo`输出结果: + +``` +$ echo $(( 2 & 3 )) # 00000010 AND 00000011 = 00000010 + +2 + +$ echo $(( 120 & 97 )) # 01111000 AND 01100001 = 01100000 + +96 +``` + +OR(`|`)也是如此: + +``` +$ echo $(( 2 | 3 )) # 00000010 OR 00000011 = 00000011 + +3 + +$ echo $(( 120 | 97 )) # 01111000 OR 01100001 = 01111001 + +121 +``` + + +关于这个不得不说的三件事: + +1. 使用`(( ... ))`告诉 Bash 双括号之间的内容是某种算术或逻辑运算。`(( 2 + 2 ))`, `(( 5 % 2 ))` (`%`是[求模][3]运算符)和`((( 5 % 2 ) + 1))`(等于3)一切都会奏效。 + + 2. [像变量一样][4], 使用`$`提取值,以便你可以使用它。 + 3. 空格并没有影响: `((2+3))` 将等价于 `(( 2+3 ))` 和 `(( 2 + 3 ))`。 + 4. Bash只能对整数进行操作. 试试这样做: `(( 5 / 2 ))` ,你会得到"2";或者这样 `(( 2.5 & 7 ))` ,但会得到一个错误。然后,在按位操作中使用除整数之外的任何东西(这就是我们现在所讨论的)通常是你不应该做的事情。 + + + +**提示:** 如果您想看看十进制数字在二进制下会是什么样子,你可以使用 _bc_ ,这是一个大多数 Linux 发行版都预装了的命令行计算器。比如: + +``` +bc <<< "obase=2; 97" +``` + +这个操作将会把 `97`转换成十二进制(`obase` 中的 _o_ 代表 _output_ ,也即,_输出_)。 + +``` +bc <<< "ibase=2; 11001011" +``` +这个操作将会把 `11001011`转换成十进制(`ibase` 中的 _i_ 代表 _input_ ,也即,_输入_)。 + +### &&是一个逻辑运算符 + +虽然它使用与其按位表达相同的逻辑原理,但Bash的`&&`运算符只能呈现两个结果:1(“true”)和0(“false”)。对于Bash来说,任何不是0的数字都是“true”,任何等于0的数字都是“false”。什么也是false也不是数字: + +``` +$ echo $(( 4 && 5 )) # 两个非零数字, 两个为true = true + +1 + +$ echo $(( 0 && 5 )) # 有一个为零, 一个为false = false + +0 + +$ echo $(( b && 5 )) # 其中一个不是数字, 一个为false = false + +0 +``` + +与 `&&` 类似, OR 对应着 `||` ,用法正如你想的那样。 + +以上这些都很简单... 直到进入命令的退出状态。 + +### &&是命令退出状态的逻辑运算符 + +[正如我们在之前的文章中看到的][2],当命令运行时,它会输出错误消息。更重要的是,对于今天的讨论,它在结束时也会输出一个数字。此数字称为_exit code_(即_返回码_),如果为0,则表示该命令在执行期间未遇到任何问题。如果是任何其他数字,即使命令完成,也意味着某些地方出错了。 +所以 0 意味着非常棒,任何其他数字都说明有问题发生,并且,在返回码的上下文中,0 意味着“真”,其他任何数字都意味着“假”。对!这 **与您所熟知的逻辑操作完全相反** ,但是你能用这个做什么? 不同的背景,不同的规则。这种用处很快就会显现出来。 + +让我们继续! + +返回码 _临时_ 储存在 [特殊变量][5] `?` 中— 是的,我知道:这又是一个令人迷惑的选择。但不管怎样, [别忘了我们在讨论变量的文章中说过][4], 那时我们说你要用 `$` 符号来读取变量中的值,在这里也一样。所以,如果你想知道一个命令是否顺利运行,你需要在命令结束后,在运行别的命令之前马上用 `$?` 来读取 `?` 的值。 + +试试下面的命令: + +``` +$ find /etc -iname "*.service" + +find: '/etc/audisp/plugins.d': Permission denied + +/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service + +/etc/systemd/system/dbus-org.freedesktop.ModemManager1.service + +[等等内容] +``` + +[正如你在上一篇文章中看到的一样][2],普通用户权限在 _/etc_ 下运行 `find` 通常将抛出错误,因为它试图读取你没有权限访问的子目录。 + +所以,如果你在执行 `find` 后立马执行... + +``` +echo $? +``` + +...,,它将打印 `1`,表明存在错误。 + +注意:当你在一行中运行两遍 `echo $?` ,你将得到一个 `0` 。这是因为 `$?` 将包含 `echo $?` 的返回码,而这条命令按理说一定会执行成功。所以学习如何使用 `$?` 的第一课就是: **单独执行 `$?`** 或者将它保存在别的安全的地方 —— 比如保存在一个变量里,不然你会很快丢失它。) + +一个直接使用 `?` 的用法是将它并入一串链式命令列表,这样 Bash 运行这串命令时若有任何操作失败,后面命令将终止。例如,您可能熟悉构建和编译应用程序源代码的过程。你可以像这样手动一个接一个地运行它们: + +``` +$ configure + +. + +. + +. + +$ make + +. + +. + +. + +$ make install + +. + +. + +. +``` + +你也可以把这三行合并成一行... + +``` +$ configure; make; make install +``` + +... 
但你要希望上天保佑。 + +为什么这样说呢?因为你这样做是有缺点的,比方说 `configure` 执行失败了, Bash 将仍会尝试执行 `make` 和 `sudo make install`——就算没东西可 make ,实际上,是没东西会安装。 + +聪明一点的做法是: + +``` +$ configure && make && make install +``` + +这将从每个命令中获取退出代码,并将其用作链式 `&&` 操作的操作数。 +但是,没什么好抱怨的,Bash 知道如果 `configure` 返回非零结果,整个过程都会失败。如果发生这种情况,不必运行 `make` 来检查它的退出代码,因为无论如何都会失败的。因此,它放弃运行 `make` ,只是将非零结果传递给下一步操作。并且,由于 `configure && make` 传递了错误,Bash 也不必运行`make install`。这意味着,在一长串命令中,您可以使用 `&&` 连接它们,并且一旦失败,您可以节省时间,因为其他命令会立即被取消运行。 + +你可以类似地使用 `||`,OR 逻辑操作符,这样就算只有一部分命令成功执行,Bash 也能运行接下来链接在一起的命令。 +鉴于所有这些(以及我们之前介绍过的内容),您现在应该更清楚地了解我们在 [本文开头][1] 开头设置的命令行: + +``` +mkdir test_dir 2>/dev/null || touch backup/dir/images.txt && find . -iname "*jpg" > backup/dir/images.txt & +``` + +因此,假设您从具有读写权限的目录运行上述内容,它做了什么以及如何做到这一点?它如何避免不合时宜且可能导致执行错误的错误?下周,除了给你这些答案的结果,我们将讨论 brackets: curly, curvy and straight. 不要错过了哟! + +因此,假设您在具有读写权限的目录运行上述内容,它会执行的操作以及如何执行此操作?它如何避免不合时宜且可能导致执行错误的错误?下周,除了给你解决方案,我们将处理包括:卷曲,曲线和直线。不要错过! + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[zero-MK](https://github.com/zero-mk) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux +[2]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash +[3]: https://en.wikipedia.org/wiki/Modulo_operation +[4]: https://www.linux.com/blog/learn/2018/12/bash-variables-environmental-and-otherwise +[5]: https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html diff --git a/中文排版指北.md b/中文排版指北.md deleted file mode 100644 index 9888b4dbc1..0000000000 --- a/中文排版指北.md +++ /dev/null @@ -1,294 +0,0 @@ -# 中文文案排版指北 -[![devDependency Status](https://david-dm.org/mzlogin/chinese-copywriting-guidelines/dev-status.svg)](https://david-dm.org/mzlogin/chinese-copywriting-guidelines#info=devDependencies) - -统一中文文案、排版的相关用法,降低团队成员之间的沟通成本,增强网站气质。 - -Other languages: - -- [English](https://github.com/mzlogin/chinese-copywriting-guidelines/blob/Simplified/README.en.md) -- [Chinese Traditional](https://github.com/sparanoid/chinese-copywriting-guidelines) -- [Chinese Simplified](README.md) - ------ - -## 目录 - -- [空格](#空格) - - [中英文之间需要增加空格](#中英文之间需要增加空格) - - [中文与数字之间需要增加空格](#中文与数字之间需要增加空格) - - [数字与单位之间需要增加空格](#数字与单位之间需要增加空格) - - [全角标点与其他字符之间不加空格](#全角标点与其他字符之间不加空格) - - [`-ms-text-autospace` to the rescue?](#-ms-text-autospace-to-the-rescue) -- [标点符号](#标点符号) - - [不重复使用标点符号](#不重复使用标点符号) -- [全角和半角](#全角和半角) - - [使用全角中文标点](#使用全角中文标点) - - [数字使用半角字符](#数字使用半角字符) - - [遇到完整的英文整句、特殊名词,其內容使用半角标点](#遇到完整的英文整句特殊名词其內容使用半角标点) -- [名词](#名词) - - [专有名词使用正确的大小写](#专有名词使用正确的大小写) - - [不要使用不地道的缩写](#不要使用不地道的缩写) -- [争议](#争议) - - [链接之间增加空格](#链接之间增加空格) -  - [简体中文不要使用直角引号](#简体中文不要使用直角引号) -- [工具](#工具) -- [谁在这样做?](#谁在这样做) -- [参考文献](#参考文献) - -## 空格 - -「有研究显示,打字的时候不喜欢在中文和英文之间加空格的人,感情路都走得很辛苦,有七成的比例会在 34 岁的时候跟自己不爱的人结婚,而其余三成的人最后只能把遗产留给自己的猫。毕竟爱情跟书写都需要适时地留白。 - -与大家共勉之。」——[vinta/paranoid-auto-spacing](https://github.com/vinta/pangu.js) - -### 中英文之间需要增加空格 - -正确: - -> 在 LeanCloud 上,数据存储是围绕 `AVObject` 进行的。 - -错误: - -> 在LeanCloud上,数据存储是围绕`AVObject`进行的。 - -> 在 LeanCloud上,数据存储是围绕`AVObject` 进行的。 - -完整的正确用法: - -> 在 LeanCloud 上,数据存储是围绕 `AVObject` 进行的。每个 `AVObject` 都包含了与 JSON 兼容的 key-value 对应的数据。数据是 schema-free 的,你不需要在每个 `AVObject` 上提前指定存在哪些键,只要直接设定对应的 key-value 即可。 - 
-例外:「豆瓣FM」等产品名词,按照官方所定义的格式书写。 - -### 中文与数字之间需要增加空格 - -正确: - -> 今天出去买菜花了 5000 元。 - -错误: - -> 今天出去买菜花了 5000元。 - -> 今天出去买菜花了5000元。 - -### 数字与单位之间需要增加空格 - -正确: - -> 我家的光纤入户宽带有 10 Gbps,SSD 一共有 20 TB。 - -错误: - -> 我家的光纤入户宽带有 10Gbps,SSD 一共有 10TB。 - -例外:度/百分比与数字之间不需要增加空格: - -正确: - -> 今天是 233° 的高温。 - -> 新 MacBook Pro 有 15% 的 CPU 性能提升。 - -错误: - -> 今天是 233 ° 的高温。 - -> 新 MacBook Pro 有 15 % 的 CPU 性能提升。 - -### 全角标点与其他字符之间不加空格 - -正确: - -> 刚刚买了一部 iPhone,好开心! - -错误: - -> 刚刚买了一部 iPhone ,好开心! - -### `-ms-text-autospace` to the rescue? - -Microsoft 有个 [`-ms-text-autospace`](http://msdn.microsoft.com/en-us/library/ie/ms531164(v=vs.85).aspx) 的 CSS 属性可以实现自动为中英文之间增加空白。不过目前并未普及,另外在其他应用场景,例如 OS X、iOS 的用户界面目前并不存在这个特性,所以请继续保持随手加空格的习惯。 - -## 标点符号 - -### 不重复使用标点符号 - -正确: - -> 德国队竟然战胜了巴西队! - -> 她竟然对你说“喵”?! - -错误: - -> 德国队竟然战胜了巴西队!! - -> 德国队竟然战胜了巴西队!!!!!!!! - -> 她竟然对你说「喵」??!! - -> 她竟然对你说「喵」?!?!??!! - -## 全角和半角 - -不明白什么是全角(全形)与半角(半形)符号?请查看维基百科词条『[全角和半角](http://zh.wikipedia.org/wiki/%E5%85%A8%E5%BD%A2%E5%92%8C%E5%8D%8A%E5%BD%A2)』。 - -### 使用全角中文标点 - -正确: - -> 嗨!你知道嘛?今天前台的小妹跟我说“喵”了哎! - -> 核磁共振成像(NMRI)是什么原理都不知道?JFGI! - -错误: - -> 嗨! 你知道嘛? 今天前台的小妹跟我说 "喵" 了哎! - -> 嗨!你知道嘛?今天前台的小妹跟我说"喵"了哎! - -> 核磁共振成像 (NMRI) 是什么原理都不知道? JFGI! - -> 核磁共振成像(NMRI)是什么原理都不知道?JFGI! - -### 数字使用半角字符 - -正确: - -> 这件蛋糕只卖 1000 元。 - -错误: - -> 这件蛋糕只卖 1000 元。 - -例外:在设计稿、宣传海报中如出现极少量数字的情形时,为方便文字对齐,是可以使用全角数字的。 - -### 遇到完整的英文整句、特殊名词,其內容使用半角标点 - -正确: - -> 乔布斯那句话是怎么说的?“Stay hungry, stay foolish.” - -> 推荐你阅读《Hackers & Painters: Big Ideas from the Computer Age》,非常的有趣。 - -错误: - -> 乔布斯那句话是怎么说的?「Stay hungry,stay foolish。」 - -> 推荐你阅读《Hackers&Painters:Big Ideas from the Computer Age》,非常的有趣。 - -## 名词 - -### 专有名词使用正确的大小写 - -大小写相关用法原属于英文书写范畴,不属于本 wiki 讨论內容,在这里只对部分易错用法进行简述。 - -正确: - -> 使用 GitHub 登录 - -> 我们的客户有 GitHub、Foursquare、Microsoft Corporation、Google、Facebook, Inc.。 - -错误: - -> 使用 github 登录 - -> 使用 GITHUB 登录 - -> 使用 Github 登录 - -> 使用 gitHub 登录 - -> 使用 gイんĤЦ8 登录 - -> 我们的客户有 github、foursquare、microsoft corporation、google、facebook, inc.。 - -> 我们的客户有 GITHUB、FOURSQUARE、MICROSOFT CORPORATION、GOOGLE、FACEBOOK, INC.。 - -> 我们的客户有 Github、FourSquare、MicroSoft Corporation、Google、FaceBook, Inc.。 - -> 我们的客户有 gitHub、fourSquare、microSoft Corporation、google、faceBook, Inc.。 - -> 我们的客户有 gイんĤЦ8、キouЯƧquムгє、๓เςг๏ร๏Ŧt ς๏гק๏гคtเ๏ภn、900913、ƒ4ᄃëв๏๏к, IПᄃ.。 - -注意:当网页中需要配合整体视觉风格而出现全部大写/小写的情形,HTML 中请使用标准的大小写规范进行书写;并通过 `text-transform: uppercase;`/`text-transform: lowercase;` 对表现形式进行定义。 - -### 不要使用不地道的缩写 - -正确: - -> 我们需要一位熟悉 JavaScript、HTML5,至少理解一种框架(如 Backbone.js、AngularJS、React 等)的前端开发者。 - -错误: - -> 我们需要一位熟悉 Js、h5,至少理解一种框架(如 backbone、angular、RJS 等)的 FED。 - -## 争议 - -以下用法略带有个人色彩,既:无论是否遵循下述规则,从语法的角度来讲都是**正确**的。 - -### 链接之间增加空格 - -用法: - -> 请 [提交一个 issue](#) 并分配给相关同事。 - -> 访问我们网站的最新动态,请 [点击这里](#) 进行订阅! - -对比用法: - -> 请[提交一个 issue](#) 并分配给相关同事。 - -> 访问我们网站的最新动态,请[点击这里](#)进行订阅! 
- -### 简体中文不要使用直角引号 - -不管中英文,如果没有特殊要求,**不要用直角引号**。 - -## 工具 - -仓库 | 语言 ---- | --- -[vinta/paranoid-auto-spacing](https://github.com/vinta/paranoid-auto-spacing) | JavaScript -[huei90/pangu.node](https://github.com/huei90/pangu.node) | Node.js -[huacnlee/auto-correct](https://github.com/huacnlee/auto-correct) | Ruby -[sparanoid/space-lover](https://github.com/sparanoid/space-lover) | PHP (WordPress) -[nauxliu/auto-correct](https://github.com/NauxLiu/auto-correct) | PHP -[hotoo/pangu.vim](https://github.com/hotoo/pangu.vim) | Vim -[sparanoid/grunt-auto-spacing](https://github.com/sparanoid/grunt-auto-spacing) | Node.js (Grunt) -[hjiang/scripts/add-space-between-latin-and-cjk](https://github.com/hjiang/scripts/blob/master/add-space-between-latin-and-cjk) | Python - -## 谁在这样做? - -网站 | 文案 | UGC ---- | --- | --- -[Apple 中国](http://www.apple.com/cn/) | Yes | N/A -[Apple 香港](http://www.apple.com/hk/) | Yes | N/A -[Apple 台湾](http://www.apple.com/tw/) | Yes | N/A -[Microsoft 中国](http://www.microsoft.com/zh-cn/) | Yes | N/A -[Microsoft 香港](http://www.microsoft.com/zh-hk/) | Yes | N/A -[Microsoft 台湾](http://www.microsoft.com/zh-tw/) | Yes | N/A -[LeanCloud](https://leancloud.cn/) | Yes | N/A -[知乎](https://www.zhihu.com/) | Yes | 部分用户达成 -[V2EX](https://www.v2ex.com/) | Yes | Yes -[SegmentFault](https://segmentfault.com/) | Yes | 部分用户达成 -[Apple4us](http://apple4us.com/) | Yes | N/A -[豌豆荚](https://www.wandoujia.com/) | Yes | N/A -[Ruby China](https://ruby-china.org/) | Yes | 标题达成 -[PHPHub](https://phphub.org/) | Yes | 标题达成 - -## 参考文献 - -- [Guidelines for Using Capital Letters](http://grammar.about.com/od/punctuationandmechanics/a/Guidelines-For-Using-Capital-Letters.htm) -- [Letter case - Wikipedia](http://en.wikipedia.org/wiki/Letter_case) -- [Punctuation - Oxford Dictionaries](http://www.oxforddictionaries.com/words/punctuation) -- [Punctuation - The Purdue OWL](https://owl.english.purdue.edu/owl/section/1/6/) -- [How to Use English Punctuation Corrently - wikiHow](http://www.wikihow.com/Use-English-Punctuation-Correctly) -- [格式 - openSUSE](https://zh.opensuse.org/index.php?title=Help:%E6%A0%BC%E5%BC%8F) -- [全角和半角 - 维基百科](http://zh.wikipedia.org/wiki/%E5%85%A8%E5%BD%A2%E5%92%8C%E5%8D%8A%E5%BD%A2) -- [引号 - 维基百科](http://zh.wikipedia.org/wiki/%E5%BC%95%E8%99%9F) -- [疑问惊叹号 - 维基百科](http://zh.wikipedia.org/wiki/%E7%96%91%E5%95%8F%E9%A9%9A%E5%98%86%E8%99%9F) - -## CopyRight - -[中文文案排版指北](https://github.com/sparanoid/chinese-copywriting-guidelines) diff --git a/选题模板.txt b/选题模板.txt deleted file mode 100644 index a7cd92e614..0000000000 --- a/选题模板.txt +++ /dev/null @@ -1,43 +0,0 @@ -选题标题格式: - - 原文日期 标题.md - -正文内容: - - 标题 - ======= - - ### 子一级标题 - - 正文 - - #### 子二级标题 - - 正文内容 - - ![](图片地址) - - ### 子一级标题 - - 正文内容 : I have a [dream][1]。 - - -------------------------------------------------------------------------------- - - via: 原文地址 - - 作者:[作者名][a] - 译者:[译者ID](https://github.com/译者ID) - 校对:[校对者ID](https://github.com/校对者ID) - - 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - [a]: 作者介绍地址 - [1]: 引文链接地址 - -说明: -1. 标题层级很多时从 “##” 开始 -2. 引文链接地址在下方集中写 -3. 因为 Windows 系统文件名有限制,所以文章名不要有特殊符号,如 `\/:*"<>|`,同时也不推荐全大写,或者其它不利阅读的格式 -4. 正文格式参照中文排版指北(https://github.com/LCTT/TranslateProject/blob/master/%E4%B8%AD%E6%96%87%E6%8E%92%E7%89%88%E6%8C%87%E5%8C%97.md) -5. 我们使用的 markdown 语法和 github 一致,具体语法可参见 https://github.com/guodongxiaren/README 。而实际中使用的都是基本语法,比如链接、包含图片、标题、列表、字体控制和代码高亮。 -6. 
选题的内容分为两类:干货和湿货。干货就是技术文章,比如针对某种技术、工具的介绍、讲解和讨论。湿货则是和技术、开发、计算机文化有关的文章。选题时主要就是根据这两条来选择文章,文章需要对大家有益处,篇幅不宜太短,可以是系列文章,也可以是长篇大论,但是文章要有内容,不能有严重的错误,最好不要选择已经有翻译的原文。