Merge pull request #1 from LCTT/master

update
ChenYi 2018-05-24 16:02:10 +08:00 committed by GitHub
commit 46add1ff13
69 changed files with 4815 additions and 2230 deletions


@@ -1,122 +1,116 @@
程序员的学习之路
============================================================
*2016 年 10 月,当我从微软离职时,我已经在微软工作了近 21 年,在业界也快 35 年了。我花了一些时间反思我这些年来学到的东西,这些文字是那篇帖子稍加修改后得到的。请见谅,文章有一点长。*
要成为一名专业的程序员,你需要知道的事情多得令人吃惊:语言的细节、API、算法、数据结构、系统和工具。这些东西一直在随着时间变化——新的语言和编程环境不断出现,似乎总有一些“每个人”都在使用的热门的新工具或新语言。紧跟潮流,保持专业,这很重要。木匠需要知道如何为工作选择合适的锤子和钉子,并且要有能力笔直精准地钉入钉子。
与此同时,我也发现有一些理论和方法有着广泛的应用场景,它们能使用几十年。底层设备的性能和容量在这几十年来增长了几个数量级,但系统设计的思考方式还是互相有关联的,这些思考方式比具体的实现更根本。理解这些重复出现的主题对分析与设计我们所负责的系统大有帮助。
### 谦卑和自我
这不仅仅局限于编程,但在编程这个持续发展的领域,一个人需要在谦卑和自我中保持平衡。总有新的东西需要学习,并且总有人能帮助你学习——如果你愿意学习的话。一个人既需要保持谦卑,认识到自己不懂并承认它,也要保持自我,相信自己能掌握一个新的领域,并且能运用你已经掌握的知识。我见过的最大的挑战就是一些人在某个领域深入钻研了很长时间,“忘记”了自己擅长学习新的东西。最好的学习来自放手去做,建造一些东西,即便只是一个原型或者 hack。我知道的最好的程序员对技术有广泛的认识但同时他们对某个技术深入研究成为了专家。而深入的学习来自努力解决真正困难的问题。
### 端到端观点
1981 年Jerry Saltzer、Dave Reed 和 Dave Clark 在做因特网和分布式系统的早期工作,他们提出了<ruby>端到端<rt>end to end</rt></ruby>观点,并作出了[经典的阐述][4]。网络上的文章有许多误传,所以更应该阅读论文本身。论文的作者很谦虚,没有声称这是他们自己的创造——从他们的角度看,这只是一个常见的工程策略,不只在通讯领域中,在其他领域中也有运用。他们只是将其写下来并收集了一些例子。下面是文章的一个小片段:
> 当我们设计系统的一个功能时,仅依靠端点的知识和端点的参与,就能正确地完整地实现这个功能。在一些情况下,系统的内部模块局部实现这个功能,可能会对性能有重要的提升。
论文称这是一个“观点”,虽然在维基百科和其他地方它已经被上升成“原则”。实际上,还是把它看作一个观点比较好,正如作者们所说,系统设计者面临的最难的问题之一就是如何在系统组件之间划分责任,这会引发不断的讨论:怎样在划分功能时权衡利弊,怎样隔离复杂性,怎样设计一个灵活的高性能系统来满足不断变化的需求。没有简单的原则可以直接遵循。
互联网上的大部分讨论集中在通信系统上,但端到端观点的适用范围其实更广泛。分布式系统中的“<ruby>最终一致性<rt>eventual consistency</rt></ruby>”就是一个例子。一个满足“最终一致性”的系统,可以让系统中的元素暂时进入不一致的状态,从而简化系统,优化性能,因为有一个更大的端到端过程来解决不一致的状态。我喜欢横向拓展的订购系统的例子(例如亚马逊),它不要求每个请求都通过中央库存的控制点。缺少中央控制点可能允许两个终端出售相同的最后一本书,所以系统需要用某种方法来解决这个问题,如通知客户该书会延期交货。不论怎样设计,想购买的最后一本书在订单完成前都有可能被仓库中的叉车运出库LCTT 译注:比如被其他人下单购买)。一旦你意识到你需要一个端到端的解决方案,并实现了这个方案,那系统内部的设计就可以被优化,以利用这个解决方案。
事实上,这种设计上的灵活性可以优化系统的性能,或者提供其他的系统功能,从而使得端到端的方法变得如此强大。端到端的思考往往允许内部进行灵活的操作,使整个系统更加健壮,并且能适应每个组件特性的变化。这些都让端到端的方法变得健壮,并能适应变化。
端到端方法意味着,添加会牺牲整体性能灵活性的抽象层和功能时要非常小心(也可能是其他的灵活性,但性能,特别是延迟,往往是特殊的)。如果你展示出底层的原始性能LCTT 译注performance也可能指操作下同端到端的方法可以根据这个性能操作来优化实现特定的需求。如果你破坏了底层性能操作即使你实现了重要的有附加价值的功能你也牺牲了设计灵活性。
如果系统足够庞大而且足够复杂,需要把整个开发团队分配给系统内部的组件,那么端到端观点可以和团队组织相结合。这些团队自然要扩展这些组件的功能,他们通常从牺牲设计上的灵活性开始,尝试在组件上实现端到端的功能。
应用端到端方法面临的挑战之一是确定端点在哪里。俗话说:“大跳蚤上有小跳蚤,小跳蚤上有更小的跳蚤……等等”。
### 关注复杂性
编程是一门精确的艺术,每一行代码都要确保程序的正确执行。但这是带有误导性的。编程的复杂性不在于各个独立的部分,而在于各个部分如何整合、如何相互交互。最健壮的程序将复杂性隔离开,让最重要的部分变得简单直接,通过简单的方式与其他部分交互。虽然隐藏复杂性和信息隐藏、数据抽象等其他设计方法一样,但我仍然觉得,如果你真的要定位出系统的复杂所在,并将其隔离开,那你需要对设计特别敏锐。
在我的[文章][5]中反复提到的例子是早期的终端编辑器 VI 和 Emacs 中使用的屏幕重绘算法。早期的视频终端实现了控制序列,来控制绘制字符的核心操作,也实现了附加的显示功能,来优化重新绘制屏幕,如向上向下滚动当前行,或者插入新行,或在当前行中移动字符。这些命令都具有不同的开销,并且这些开销在不同制造商的设备中也是不同的。(参见 [TERMCAP][6] 以获取代码和更完整的历史记录的链接。)像文本编辑器这样的全屏应用程序希望尽快更新屏幕,因此需要优化使用这些控制序列,来将屏幕从一个状态转换到另一个状态。
这些程序在设计上隐藏了底层的复杂性。系统中修改文本缓冲区的部分(功能上大多数创新都在这里)完全忽略了这些改变如何被转换成屏幕更新命令。这是可以接受的,因为针对*任何*内容的改变计算最佳命令所消耗的性能代价,远不及终端本身实际执行这些更新命令的性能代价。在确定如何隐藏复杂性,以及隐藏哪些复杂性时,性能分析扮演着重要的角色,这一点在系统设计中非常常见。屏幕的更新与底层文本缓冲区的更改是异步的,并且可以独立于缓冲区的实际历史变化顺序。缓冲区*怎样*改变的并不重要,重要的是改变了*什么*。异步耦合,在组件交互时消除组件对历史路径依赖的组合,以及用自然的交互方式有效地将组件组合在一起,是隐藏耦合复杂度的常见特征。
隐藏复杂性的成功不是由隐藏复杂性的组件决定的,而是由使用该模块的使用者决定的。这就是为什么组件的提供者至少要为组件的某些端到端过程负责。他们需要清晰地知道系统的其他部分如何与组件相互作用,复杂性是如何泄漏出来的(以及是否泄漏出来)。这常常表现为“这个组件很难使用”这样的反馈——这通常意味着它不能有效地隐藏内部复杂性,或者没有选择一个隐藏复杂性的功能边界。
### 分层与组件化
系统设计人员的一个基本工作是确定如何将系统分解成组件和层;决定自己要开发什么,以及从别的地方获取什么。开源项目在决定自己开发组件还是购买服务时,大多会选择自己开发,但组件之间交互的过程是一样的。在大规模工程中,理解这些决策将如何随着时间的推移而发挥作用是非常重要的。从根本上说,变化是程序员所做的一切的基础,所以这些设计决定不仅在当下评估,还要随着产品的不断发展而在未来几年得到评估。
以下是关于系统分解的一些事情,它们最终会占用大量的时间,因此往往需要更长的时间来学习和欣赏。
* **层泄漏。**层(或抽象)[基本上是泄漏的][1]。这些泄漏会立即产生后果,也会随着时间的推移而产生两方面的后果。其中一方面就是该抽象层的特性渗透到了系统的其他部分,渗透的程度比你意识到的更深入。这些渗透可能是关于具体的性能特征的假设,以及抽象层的文档中没有明确指出的行为发生的顺序。这意味着假如内部组件的行为发生变化,你的系统会比想象中更加脆弱。第二方面是你比表面上看起来更依赖组件内部的行为,所以如果你考虑改变这个抽象层,后果和挑战可能超出你的想象。
* **层具有太多功能。**您所采用的组件具有比实际需要更多的功能,这几乎是一个真理。在某些情况下,你决定采用这个组件是因为你想在将来使用那些尚未用到的功能。有时,你采用组件是想“上快车”,利用组件完成正在进行的工作。在功能强大的抽象层上开发会带来一些后果。
1. 组件往往会根据你并不需要的功能作出取舍。
2. 为了实现那些你并没有用到的功能,组件引入了复杂性和约束,这些约束将阻碍该组件未来的演变。
3. 层泄漏的范围更大。一些泄漏是由于真正的“抽象泄漏”另一些是由于明显的逐渐增加的对组件全部功能的依赖但这些依赖通常都没有处理好。Office 软件太大了,我们发现,对于我们建立的任何抽象层,我们最终都在系统的某个部分完全运用了它的功能。虽然这看起来是积极的(我们完全地利用了这个组件),但并不是所有的使用都有同样的价值。所以,我们最终要付出巨大的代价才能从一个抽象层往另一个抽象层迁移,这种“长尾”没什么价值,并且对使用场景认识不足。
4. 附加的功能会增加复杂性,并增加功能滥用的可能。如果将验证 XML 的 API 指定为 XML 树的一部分,那这个 API 可以选择动态下载 XML 的模式定义。这在我们的基本文件解析代码中被错误地执行,导致 w3c.org 服务器上的大量性能下降以及无意的分布式拒绝服务攻击。这些被通俗地称为“地雷”API
* **抽象层被更换。**需求在进化,系统在进化,组件被放弃。您最终需要更换该抽象层或组件。不管是对外部组件的依赖还是对内部组件的依赖都是如此。这意味着上述问题将变得重要起来。
* **自己构建还是购买的决定将会改变。**这是上面几方面的必然结果。这并不意味着自己构建还是购买的决定在当时是错误的。一开始时往往没有合适的组件,一段时间之后才有合适的组件出现。或者,也可能你使用了一个组件,但最终发现它不符合您不断变化的要求,而且你的要求非常窄、很好理解,或者对你的价值体系来说是非常重要的,以至于拥有自己的模块是有意义的。这意味着你要像关心自己构造的模块一样,关心购买的模块,关心它们是怎样泄漏并深入你的系统中的。
* **抽象层会变臃肿。**一旦你定义了一个抽象层,它就开始增加功能。层是对使用模式优化的自然分界点。臃肿的层的困难在于,它往往会降低您利用底层的不断创新的能力。从某种意义上说,这就是操作系统公司憎恨构建在其核心功能之上的臃肿的层的原因——采用创新的速度放缓了。避免这种情况的一种比较规矩的方法是禁止在适配器层中进行任何额外的状态存储。微软基础类在 Win32 上采用这个一般方法。在短期内,将功能集成到现有层(最终会导致上述所有问题)而不是重构和重新推导是不可避免的。理解这一点的系统设计人员会寻找分解和简化组件的方法,而不是在其中增加越来越多的功能。
### 爱因斯坦宇宙
几十年来我一直在设计异步分布式系统但是在微软内部的一次演讲中SQL 架构师 Pat Helland 的一句话震惊了我。“我们生活在爱因斯坦的宇宙中,没有同时性这种东西。”在构建分布式系统时(基本上我们构建的都是分布式系统),你无法隐藏系统的分布式特性。这是物理规律。我一直感到远程过程调用在根本上是错误的,这是一个原因,尤其是那些“透明的”远程过程调用,它们就是想隐藏分布式的交互本质。你需要拥抱系统的分布式特性,因为这些意义几乎总是需要通过系统设计和用户体验来完成。
拥抱分布式系统的本质则要遵循以下几个方面:
* 一开始就要思考设计对用户体验的影响,而不是试图在处理错误、取消请求和报告状态上打补丁。
* 使用异步技术来耦合组件。同步耦合是*不可能*的。如果某些行为看起来是同步的,是因为某些内部层尝试隐藏异步,这样做会遮蔽(但绝对不隐藏)系统运行时的基本行为特征。
* 认识到并且明确设计交互状态机,这些状态表示长期的、可靠的内部系统状态(而不是由深度调用堆栈中的变量值编码的临时、短暂和不可发现的状态)。
* 认识到失败是在所难免的。要保证能检测出分布式系统中的失败,唯一的办法就是直接看你的等待时间是否“太长”。这自然意味着[取消的等级最高][2]。系统的某一层(可能直接通向用户)需要决定等待时间是否过长,并取消操作。取消只是为了重建局部状态,回收局部的资源——没有办法在系统内广泛使用取消机制。有时用一种低成本、不可靠的方法广泛使用取消机制对优化性能可能有用。
* 认识到取消不是回滚,因为它只是回收本地资源和状态。如果回滚是必要的,它必须实现成一个端到端的功能。
* 承认永远不会真正知道分布式组件的状态。只要你发现一个状态,它可能就已经改变了。当你发送一个操作时,请求可能在传输过程中丢失,也可能被处理了但是返回的响应丢失了,或者请求需要一定的时间来处理,这样远程状态最终会在未来的某个任意的时间转换。这需要像幂等操作这样的方法,并且要能够稳健有效地重新发现远程状态,而不是期望可靠地跟踪分布式组件的状态。“[最终一致性][3]”的概念简洁地捕捉了这其中大多数想法。
我喜欢说你应该“陶醉在异步”。与其试图隐藏异步,不如接受异步,为异步而设计。当你看到像幂等性或不变性这样的技术时,你就认识到它们是拥抱宇宙本质的方法,而不仅仅是工具箱中的一个设计工具。
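下面用一小段 Python 做个极简示意(函数名与数据结构都是假设的,仅为说明概念):幂等的“设为某状态”操作可以被安全地重放,而“增量”操作则不行,这也正是分布式重试机制偏爱幂等接口的原因。
```
# 极简示意(假设的函数与数据结构):幂等操作可以安全地重放
def set_order_state(store, order_id, state):
    # “把状态设置为 X”执行一次和执行多次最终结果相同幂等
    store[order_id] = state

def add_items(store, order_id, n):
    # “在原值上加 n”请求被重放时会改变最终结果不幂等
    store[order_id] = store.get(order_id, 0) + n

store = {}
set_order_state(store, "A100", "shipped")
set_order_state(store, "A100", "shipped")  # 网络重试导致的重放:状态不变
assert store["A100"] == "shipped"
```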
### 性能
我确信 Don Knuth 会对人们怎样误解他的名言“过早的优化是一切罪恶的根源”而感到震惊。事实上,性能及其持续超过 60 年的指数增长(或超过 10 年,取决于您是否愿意将晶体管、真空管和机电继电器的发展算入其中),为所有行业内的惊人创新和影响经济的“软件吃掉全世界”的变化打下了基础。
要认识到这种指数变化的一个关键是,虽然系统的所有组件正在经历指数变化,但这些指数是不同的。硬盘容量的增长速度与内存容量的增长速度不同,与 CPU 的增长速度不同,与内存和 CPU 之间的延迟的性能改善速度也不同。即使性能发展的趋势是由相同的基础技术驱动的,增长的指数也会有分歧。[延迟的改进从根本上滞后于带宽的改进][7]。指数变化在近距离或者短期内看起来是线性的,但随着时间的推移可能是压倒性的。系统不同组件的性能的增长不同,会出现压倒性的变化,并迫使对设计决策定期进行重新评估。
这样做的结果是,几年后,一度有意义的设计决策就不再有意义了。或者在某些情况下,二十年前有意义的方法又开始变成一个好的决策。现代内存映射的特点看起来更像是早期分时的进程切换,而不像分页那样。(这样做有时会让我这样的老人说“这就是我们在 1975 年时用的方法”——忽略了这种方法在 40 年里都没有意义,但现在又重新成为好的方法,因为两个组件之间的关系——可能是闪存和 NAND 而不是磁盘和核心内存——已经变得像以前一样了)。
当这些指数超越人自身的限制时,重要的转变就发生了。你能从 2 的 16 次方个字符(一个人可以在几个小时打这么多字)过渡到 2 的 32 次方个字符(远超出了一个人打字的范围)。你可以捕捉比人眼能感知的分辨率更高的数字图像。或者你可以将整个音乐专辑存在小巧的磁盘上,放在口袋里。或者你可以将数字化视频录制存储在硬盘上。再通过实时流式传输的能力,可以在一个地方集中存储一次,而不需要在数千个本地硬盘上重复记录。
但有的东西仍然是根本的限制条件,那就是空间的三维和光速。我们又回到了爱因斯坦的宇宙。内存的分级结构将始终存在——它是物理定律的基础。稳定的存储和 IO、内存、计算和通信也都将一直存在。这些模块的相对容量、延迟和带宽将会改变但是系统始终要考虑这些元素如何组合在一起以及它们之间的平衡和折衷。Jim Gray 是这方面的大师。
空间和光速的根本限制造成的另一个后果是,性能分析主要是关于三件事:<ruby>局部化<rt>locality</rt></ruby><ruby>局部化<rt>locality</rt></ruby><ruby>局部化<rt>locality</rt></ruby>。无论是将数据打包在磁盘上,管理处理器缓存的层次结构,还是将数据合并到通信数据包中,数据如何打包在一起,如何在一段时间内从局部获取数据,数据如何在组件之间传输,是性能的基础。把重点放在减少管理数据的代码上,增加空间和时间上的局部性,是消除噪声的好办法。
Jon Devaan 曾经说过:“设计数据,而不是设计代码”。这也通常意味着当查看系统结构时,我不太关心代码如何交互——我想看看数据如何交互和流动。如果有人试图通过描述代码结构来解释一个系统,而不理解数据流的速率和数量,他们就不了解这个系统。
内存的层级结构也意味着缓存将会一直存在——即使某些系统层正在试图隐藏它。缓存是根本的,但也是危险的。缓存试图利用代码的运行时行为,来改变系统中不同组件之间的交互模式。它们需要对运行时行为进行建模,即使模型填充缓存并使缓存失效,并测试缓存命中。如果模型由于行为改变而变差或变得不佳,缓存将无法按预期运行。一个简单的指导方针是,缓存必须被检测——由于应用程序行为的改变,事物不断变化的性质和组件之间性能的平衡,缓存的行为将随着时间的推移而退化。每一个老程序员都有缓存变糟的经历。
我很幸运,我的早期职业生涯是在互联网的发源地之一 BBN 度过的。我们很自然地将异步组件之间的通信视为系统连接的自然方式。流量控制和队列理论是通信系统的基础,更是任何异步系统运行的方式。流量控制本质上是资源管理(管理通道的容量),但资源管理是更根本的关注点。流量控制本质上也应该由端到端的应用负责,所以用端到端的方式思考异步系统是自然的。[缓冲区膨胀][8]的故事在这种情况下值得研究,因为它展示了当对端到端行为的动态性以及技术“改进”(路由器中更大的缓冲区)缺乏理解时,在整个网络基础设施中导致的长久的问题。
我发现“<ruby>光速<rt>light speed</rt></ruby>”的概念在分析任何系统时都非常有用。光速分析并不是从当前的性能开始分析,而是问“这个设计理论上能达到的最佳性能是多少?”真正传递的信息是什么,以什么样的速度变化?组件之间的底层延迟和带宽是多少?光速分析迫使设计师深入思考他们的方法能否达到性能目标,或者是否需要重新考虑设计的基本方法。它也迫使人们更深入地了解性能在哪里损耗,以及损耗是固有的,还是由于一些不当行为产生的。从构建的角度来看,它迫使系统设计人员了解其构建的模块的真实性能特征,而不是关注其他功能特性。
我的职业生涯大多花费在构建图形应用程序上。用户坐在系统的一端,定义关键的常量和约束。人类的视觉和神经系统没有经历过指数性的变化。它们固有地受到限制,这意味着系统设计者可以利用(必须利用)这些限制,例如,通过虚拟化(限制底层数据模型需要映射到视图数据结构中的数量),或者通过将屏幕更新的速率限制到人类视觉系统的感知限制。
### 复杂性的本质
我的整个职业生涯都在与复杂性做斗争。为什么系统和应用变得复杂呢?为什么在一个应用领域内进行开发并没有随着时间变得简单,而基础设施却没有变得更复杂,反而变得更强大了?事实上,管理复杂性的一个关键方法就是“走开”然后重新开始。通常新的工具或语言迫使我们从头开始,这意味着开发人员将工具的优点与重新开始的优点结合起来。重新开始是重要的。这并不是说新工具、新平台或新语言可能不好,但我保证它们不能解决复杂性增长的问题。控制复杂性的最简单的方法就是用更少的程序员,建立一个更小的系统。
当然,很多情况下“走开”并不是一个选择——Office 软件建立在有巨大的价值的复杂的资源上。通过 OneNoteOffice 软件从 Word 的复杂性上“走开”从而在另一个维度上进行创新。Sway 是另一个例子Office 软件决定从限制中跳出来,利用关键的环境变化,抓住机会从底层上采取全新的设计方案。我们有 Word、Excel、PowerPoint 这些应用,它们的数据结构非常有价值,我们并不能完全放弃这些数据结构,它们成为了开发中持续的显著的限制条件。
我受到 Fred Brooks 讨论软件开发中的意外复杂度和本质复杂度的文章[《没有银弹》][9]的影响,他希望用两个趋势来尽可能地推动程序员的生产力:一是在选择自己开发还是购买时,更多地关注购买——这预示了开源社区和云架构的改变;二是从单纯的构建方法转型到更“有机”或者“生态”的增量开发方法。现代的读者可以认为是向敏捷开发和持续开发的转型。但那篇文章可是写于 1986 年!
我很欣赏 Stuart Kauffman 在复杂性的基本性质上的研究工作。Kauffman 从一个简单的布尔网络模型(“[NK 模型][10]”)开始,然后探索这个基本的数学结构在相互作用的分子、基因网络、生态系统、经济系统、计算机系统(以有限的方式)等系统中的应用,来理解涌现的有序行为的数学基础及其与混沌行为的关系。在一个高度连接的系统中,你固有地有一个相互冲突的约束系统,使得它(在数学上)很难向前发展(这被看作是在崎岖景观上的优化问题)。控制这种复杂性的基本方法是将系统分成独立元素并限制元素之间的相互连接(实质上减少 NK 模型中的“N”和“K”。当然对那些使用复杂性隐藏、信息隐藏和数据抽象并且使用松散异步耦合来限制组件之间的交互的技术的系统设计者来说这是很自然的。
我们一直面临的一个挑战是,我们想到的许多拓展系统的方法,都跨越了所有的方面。实时共同编辑是 Office 应用程序最近的一个非常具体的(也是最复杂的)例子。
我们的数据模型的复杂性往往等同于“能力”。设计用户体验的固有挑战是我们需要将有限的一组手势,映射到底层数据模型状态空间的转换。增加状态空间的维度不可避免地在用户手势中产生模糊性。这是“[纯数学][11]”,这意味着确保系统保持“易于使用”的最基本的方式常常是约束底层的数据模型。
### 管理
我从高中开始担任一些领导角色(学生会主席!),对承担更多的责任感到理所当然。同时,我一直为自己在每个管理阶段都坚持担任全职程序员而感到自豪。但 Office 软件的开发副总裁最终还是让我从事管理,离开了日常的编程工作。当我在去年离开那份工作时,我很享受重返编程——这是一个出奇地充满创造力的充实的活动(当修完“最后”的 bug 时,也许也会有一点令人沮丧)。
尽管在我加入微软前已经做了十多年的“主管”,但是到了 1996 年我加入微软才真正了解到管理。微软强调“工程领导是技术领导”。这与我的观点一致,帮助我接受并承担更大的管理责任。
@@ -124,21 +118,19 @@ Jon Devaan 曾经说过:“设计数据,而不是设计代码”。这也通
我过去说我的工作是设计反馈回路。独立工程师、经理、行政人员,每一个项目的参与者都能通过分析记录的项目数据,推进项目,产出结果,了解自己在整个项目中扮演的角色。最终,透明化成为增强能力的一个很好的工具——管理者可以将更多的局部控制权给予那些最接近问题的人,因为他们对所取得的进展有信心。这样的话,合作自然就会出现。
这里的关键是要恰当地确定目标框架(包括关键资源的约束,如发布的时间表)。如果决策需要在管理链上下不断流动,那通常说明管理层对目标和约束的框架设定得不好。
当我在 Beyond Software 工作时,我真正理解了一个项目拥有一个唯一领导的重要性。原来的项目经理离职了(后来从 FrontPage 团队雇佣了我)。我们四个主管在是否接任这个岗位上都有所犹豫,这不仅仅由于我们都不知道要在这家公司坚持多久。我们都技术高超,并且相处融洽,所以我们决定以同级的身份一起来领导这个项目。然而这糟糕透了。有一个显而易见的问题,我们没有相应的战略用来在原有的组织之间分配资源——这应当是管理者的首要职责之一!当你知道你是唯一的负责人时,你会有很深的责任感,但在这个例子中,这种责任感缺失了。我们没有真正的领导来负责统一目标和界定约束。
我清晰地记得,我第一次充分认识到*倾听*对一个领导者的重要性。那时我刚刚担任了 Word、OneNote、Publisher 和 Text Services 团队的开发经理。关于我们如何组织文本服务团队,我们有一个很大的争议,我走到每个关键参与者身边,听他们想说的话,然后整合起来,写下了我所听到的一切。当我向其中一位主要参与者展示我写下的东西时,他的反应是“哇,你真的听了我想说的话”!作为一名管理人员,我所经历的所有最大的问题(例如,跨平台和转型持续工程)都涉及到仔细倾听所有的参与者。倾听是一个积极的过程,它包括:尝试以别人的角度去理解,然后写出我学到的东西,并对其进行测试,以验证我的理解。当一个关键的艰难决定需要发生的时候,在最终决定前,每个人都知道他们的想法都已经被听到并理解(不论他们是否同意最后的决定)。
在 FrontPage 团队担任开发经理的工作,让我理解了在只有部分信息的情况下做决定的“操作困境”。你等待的时间越长,你就会有更多的信息做出决定。但是等待的时间越长,实际执行的灵活性就越低。在某个时候,你仅需要做出决定。
设计一个组织涉及类似的两难情形。您希望增加资源领域,以便可以在更大的一组资源上应用一致的优先级划分框架。但资源领域越大,越难获得作出决定所需要的所有信息。组织设计就是要平衡这两个因素。软件使之更加复杂,因为软件的特性可以从任意维度切入设计。Office 软件部门已经使用[共享团队][12]来解决这两个问题(优先次序和资源),让跨领域的团队能与需要产品的团队分享工作(增加资源)。
随着管理阶梯的提升,你会懂得一个小秘密:你和你的新同事不会因为你现在承担更多的责任,就突然变得更聪明。这强调了整个组织比顶层领导者更聪明。赋予每个级别在一致框架下拥有自己的决定是实现这一目标的关键方法。倾听并使自己对组织负责,阐明和解释决策背后的原因是另一个关键策略。令人惊讶的是,害怕做出一个愚蠢的决定可能是一个有用的激励因素,以确保你清楚地阐明你的推理,并确保你听取所有的信息。
### 结语
我离开大学寻找第一份工作时,面试官在最后一轮面试时问我对做“系统”和做“应用”哪一个更感兴趣。我当时并没有真正理解这个问题。在软件技术栈的每一个层面都有有趣的难题,我很高兴深入研究这些问题。保持学习。
@@ -148,7 +140,7 @@ via: https://hackernoon.com/education-of-a-programmer-aaecf2d35312
作者:[Terry Crowley][a]
译者:[explosic4](https://github.com/explosic4)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -1,46 +1,54 @@
HeRM's一个命令行食谱管理器
======
![配图](https://www.ostechnix.com/wp-content/uploads/2017/12/herms-720x340.jpg)
烹饪让爱变得可见,不是吗?确实!烹饪也许是你的热情、爱好或职业,我相信你会想维护一份烹饪日记。坚持写烹饪日记是改善烹饪习惯的一种方法。有很多方法可以记录食谱。你可以维护一份小日记/笔记,或将配方的笔记存储在智能手机中,或将它们保存在计算机的文档中。这有很多选择。今天,我介绍 **HeRM's**,这是一个基于 Haskell 的命令行食谱管理器,能为你的美食食谱做笔记。使用 Herm's你可以添加、查看、编辑和删除食物配方甚至可以制作购物清单。这些全部来自你的终端它是免费的是使用 Haskell 语言编写的开源程序。源代码在 GitHub 中免费提供,因此你可以复刻它,添加更多功能或改进它。
### HeRM's一个命令行食谱管理器
#### 安装 HeRM's
由于它是使用 Haskell 编写的,因此我们需要首先安装 Cabal。Cabal 是一个用于下载和编译用 Haskell 语言编写的软件的命令行程序。Cabal 存在于大多数 Linux 发行版的核心软件库中,因此你可以使用发行版的默认软件包管理器来安装它。
例如,你可以使用以下命令在 Arch Linux 及其变体(如 Antergos、Manjaro Linux中安装 cabal
```
sudo pacman -S cabal-install
```
在 Debian、Ubuntu 上:
```
sudo apt-get install cabal-install
```
安装 Cabal 后,确保你已经把它的 bin 目录加入了 `PATH`。为此,请编辑你的 `~/.bashrc`
```
vi ~/.bashrc
```
添加下面这行:
```
PATH=$PATH:~/.cabal/bin
```
键入 `:wq` 保存并退出文件。然后,运行以下命令更新所做的更改。
```
source ~/.bashrc
```
安装 cabal 后,运行以下命令安装 `herms`
```
cabal install herms
```
喝一杯咖啡!这将需要一段时间。几分钟后,你会看到一个输出,如下所示。
```
[...]
Linking dist/build/herms/herms ...
@@ -50,71 +58,77 @@ Installed herms-1.8.1.2
恭喜Herm's 已经安装完成。
#### 添加食谱
让我们添加一个食谱,例如 Dosa。对于不了解它的人来说Dosa 是一种受欢迎的南印度食物,配以 sambarLCTT 译注:扁豆和酸豆炖菜,像咖喱汤)和酸辣酱食用。这是一种健康的、可以说是最美味的食物。它不含添加的糖或饱和脂肪。制作起来也很容易。有几种不同的 Dosa在我们家中最常见的是 Plain Dosa。
要添加食谱,请输入:
```
herms add
```
你会看到一个如下所示的屏幕。开始输入食谱的详细信息。
![][2]
要在字段之间切换,请使用以下键盘快捷键:
* **Tab / Shift+Tab** - 下一个/前一个字段 * `Tab` / `Shift+Tab` - 下一个/前一个字段
* **Ctrl + <箭头键>** - 导航字段 * `Ctrl + <箭头键>` - 导航字段
* **[Meta 或者 Alt] + <h-j-k-l>** - 导航字段 * `[Meta 或者 Alt] + <h-j-k-l>` - 导航字段
* **Esc** - 保存或取消。 * `Esc` - 保存或取消。
添加完配方的详细信息后,按下 `ESC` 键并点击 `Y` 保存。同样,你可以根据需要添加尽可能多的食谱。
要列出添加的食谱,输入:
```
herms list
```
![][3]
要查看上面列出的任何食谱的详细信息,请像下面这样使用相应的编号。
```
herms view 1
```
![][4]
要编辑任何食谱,使用:
```
herms edit 1
```
完成更改后,按下 `ESC` 键。系统会询问你是否要保存。你只需选择适当的选项。
![][5]
要删除食谱,命令是:
```
herms remove 1
```
要为指定食谱生成购物清单,运行:
```
herms shopping 1
```
![][6]
要获得帮助,运行:
```
herms -h
```
当你下次听到你的同事、朋友或其他地方谈到好的食谱时,只需打开 Herm's快速记下然后将它们分享给你的配偶。她会很高兴
今天就是这些。还有更好的东西。敬请关注!
@@ -126,16 +140,16 @@ herms -h
via: https://www.ostechnix.com/herms-commandline-food-recipes-manager/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/Make-Dosa-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-1-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/herms-4.png


@@ -1,23 +1,25 @@
为什么建设一个社区值得额外的努力
======
> 建立 NethServer 社区是有风险的。但是我们从这些充满激情的人们所带来的力量当中学到了很多。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16)
当我们在 2003 年推出 [Nethesis][1] 时,我们还只是系统集成商。我们只使用现有的开源项目。我们的业务模式非常明确:为这些项目增加多种形式的价值:实践知识、针对意大利市场的文档、额外模块、专业支持和培训课程。我们还通过向上游贡献代码并参与其社区来回馈上游项目。
那时时代不同。我们不能太张扬地使用“开源”这个词。人们将它与诸如“书呆子”、“没有价值”以及最糟糕的“免费”这些词联系起来。这些不太适合生意。
在 2010 年的一个星期六Nethesis 的工作人员手中拿着馅饼和浓咖啡,正在讨论如何推进事情发展(嘿,我们喜欢在创新的同时吃喝东西!)。尽管势头对我们不利,但我们决定不改变方向。事实上,我们决定加大力度 —— 去做开源和开放的工作方式,这是一个成功运营企业的模式。
多年来,我们已经证明了该模型的潜力。有一件事是我们成功的关键:社区。
在这个由三部分组成的系列文章中,我将解释社区在开放组织的存在中扮演的重要角色。我将探讨为什么一个组织希望建立一个社区,并讨论如何建立一个社区 —— 因为我确实认为这是如今产生新创新的最佳方式。
### 这个疯狂的想法
与 Nethesis 的伙伴一起,我们决定构建自己的开源项目:我们自己的操作系统,它建立在 CentOS 之上(因为我们不想重新发明轮子)。我们假设我们拥有实现它的经验、实践知识和人力。我们感到很勇敢。
我们非常希望构建一个名为 [NethServer][2] 的操作系统,其使命是:通过开源使系统管理员的生活更轻松。我们知道我们可以为服务器创建一个 Linux 发行版,与当前已有的相比,它更容易使用、更易于部署,并且更易于理解。
不过,最重要的是,我们决定创建一个真正的、100% 开放的项目,其主要规则有三条:
@@ -25,13 +27,11 @@
* 开发公开
* 社区驱动
最后一个很重要。我们是一家公司。我们能够自己开发它。如果我们在内部完成这项工作,我们将会更有效(并且做出更快的决定)。与其他任何意大利公司一样,这将非常简单。
但是我们已经如此深入到开源文化中,所以我们选择了不同的路径。
我们确实希望有尽可能多的人围绕着我们、围绕着产品、围绕着公司周围。我们希望对工作有尽可能多的视角。我们意识到:独自一人,你可以走得快 —— 但是如果你想走很远,你需要一起走。
所以我们决定建立一个社区。
@@ -41,11 +41,11 @@
但是很快就出现了这样一个问题:我们如何建立一个社区?我们不知道如何实现这一点。我们参加了很多社区,但我们从未建立过一个社区。
我们擅长编码 —— 而不是人。我们是一家公司,是一个有非常具体优先事项的组织。那么我们如何建立一个社区,并在公司和社区之间建立良好的关系呢?
我们做了你必须做的第一件事:学习。我们从专家、博客和许多书中学到了知识。我们进行了实验。我们失败了多次,从结果中收集数据,并再次进行测试。
最终我们学到了社区管理的黄金法则:**没有社区管理的黄金法则。**
人们太复杂了,社区无法用一条规则来“统治他们”。
@@ -57,7 +57,7 @@ via: https://opensource.com/open-organization/18/1/why-build-community-1
作者:[Alessio Fattorini][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -3,11 +3,11 @@ Linux 局域网路由新手指南:第 2 部分
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dortmund-hbf-1259559_1920.jpg?itok=mdkNQRkS)
上周[我们学习了 IPv4 地址][1]和如何使用管理员不可或缺的工具 —— `ipcalc`,今天我们继续学习更精彩的内容:局域网路由器。
VirtualBox 和 KVM 是测试路由的好工具,在本文中的所有示例都是在 KVM 中执行的。如果你喜欢使用物理硬件去做测试,那么你需要三台计算机:一台用作路由器,另外两台用于表示两个不同的网络。你也需要两台以太网交换机和相应的线缆。
我们假设这个示例是一个有线以太局域网,为了更符合真实使用场景,我们将假设有一些桥接的无线接入点,当然我并不会使用这些无线接入点做任何事情。(我也不会去尝试所有的无线路由器,以及使用一个移动宽带设备连接到以太网的局域网口进行混合组网,因为它们需要进一步的安装和设置)
### 网段
@@ -22,12 +22,13 @@ VirtualBox 和 KVM 是测试路由的好工具,在本文中的所有示例都
一个广播域需要一台路由器才可以与其它广播域通讯。我们使用两台计算机和 `ip` 命令来解释这些。我们的两台计算机是 192.168.110.125 和 192.168.110.126,它们都插入到同一台以太网交换机上。在 VirtualBox 或 KVM 中,当你配置一个新网络的时候会自动创建一个虚拟交换机,因此,当你分配一个网络到虚拟机上时,就像是插入一个交换机一样。使用 `ip addr show` 去查看你的地址和网络接口名字。现在,这两台主机可以互 ping 成功。
现在,给其中一台主机添加一个不同网络的地址:
```
# ip addr add 192.168.120.125/24 dev ens3
```
你可以指定一个网络接口名字,在示例中它的名字是 `ens3`。这不需要去添加一个网络前缀,在本案例中,它是 `/24`,但是显式地添加它并没有什么坏处。你可以使用 `ip` 命令去检查你的配置。下面的示例输出为了清晰起见进行了删减:
```
$ ip addr show
ens3:
@@ -35,7 +36,6 @@ ens3:
    valid_lft 875sec preferred_lft 875sec
    inet 192.168.120.125/24 scope global ens3
    valid_lft forever preferred_lft forever
```
主机在 192.168.120.125 上可以 ping 它自己(`ping 192.168.120.125`),这是对你的配置是否正确的一个基本校验,这个时候第二台计算机就已经不能 ping 通那个地址了。
@@ -45,30 +45,27 @@ ens3:
* 第一个网络192.168.110.0/24
* 第二个网络192.168.120.0/24
接下来你的路由器必须配置去转发数据包。数据包转发默认是禁用的,你可以使用 `sysctl` 命令去检查它的配置: 接下来你的路由器必须配置去转发数据包。数据包转发默认是禁用的,你可以使用 `sysctl` 命令去检查它的配置:
``` ```
$ sysctl net.ipv4.ip_forward $ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0 net.ipv4.ip_forward = 0
``` ```
0 意味着禁用,使用如下的命令去启用它: `0` 意味着禁用,使用如下的命令去启用它:
``` ```
# echo 1 > /proc/sys/net/ipv4/ip_forward # echo 1 > /proc/sys/net/ipv4/ip_forward
``` ```
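顺带一提,上面的 `echo` 写法只在本次开机内有效。一个常见的等价做法(示意;配置文件的具体位置随发行版而异)是使用 `sysctl` 命令,并把设置写入配置文件,让它在重启后仍然生效:
```
# 立即启用转发,等价于向 /proc 写入
sysctl -w net.ipv4.ip_forward=1

# 写入 /etc/sysctl.conf或 /etc/sysctl.d/ 下的文件)使重启后依然生效:
# net.ipv4.ip_forward = 1
# 修改配置文件后重新加载:
sysctl -p
```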
接下来配置你的另一台主机做为第二个网络的一部分,你可以通过将原来在 192.168.110.0/24 的网络中的一台主机分配到 192.168.120.0/24 虚拟网络中,然后重新启动两个 “网络” 主机,注意不是路由器。(或者重启动网络;我年龄大了还有点懒,我记不住那些重启服务的奇怪命令,还不如重启网络来得干脆。)重启后各台机器的地址应该如下所示: 接下来配置你的另一台主机做为第二个网络的一部分,你可以通过将原来在 192.168.110.0/24 的网络中的一台主机分配到 192.168.120.0/24 虚拟网络中,然后重新启动两个 “连网的” 主机,注意不是路由器。(或者重启动主机上的网络服务;我年龄大了还有点懒,我记不住那些重启服务的奇怪命令,还不如重启主机来得干脆。)重启后各台机器的地址应该如下所示:
* 主机 1: 192.168.110.125
* 主机 2: 192.168.120.135
* 路由器: 192.168.110.126 and 192.168.120.136
* 主机 1 192.168.110.125
* 主机 2 192.168.120.135
* 路由器: 192.168.110.126 和 192.168.120.136
现在可以去随意 ping 它们,可以从任何一台计算机上 ping 到任何一台其它计算机上。使用虚拟机和各种 Linux 发行版做这些事时,可能会产生一些意想不到的问题,因此,有时候 ping 的通,有时候 ping 不通。不成功也是一件好事,这意味着你需要动手去创建一条静态路由。首先,查看已经存在的路由表。主机 1 和主机 2 的路由表如下所示: 现在可以去随意 ping 它们,可以从任何一台计算机上 ping 到任何一台其它计算机上。使用虚拟机和各种 Linux 发行版做这些事时,可能会产生一些意想不到的问题,因此,有时候 ping 的通,有时候 ping 不通。不成功也是一件好事,这意味着你需要动手去创建一条静态路由。首先,查看已经存在的路由表。主机 1 和主机 2 的路由表如下所示:
```
$ ip route show
default via 192.168.110.1 dev ens3 proto static metric 100
@@ -82,26 +79,25 @@ default via 192.168.120.1 dev ens3 proto static metric 101
    src 192.168.110.126 metric 100
192.168.120.0/24 dev ens9 proto kernel scope link
    src 192.168.120.136 metric 100
```
这显示了我们使用的由 KVM 分配的缺省路由。169.* 地址是自动链接的本地地址,我们不去管它。接下来我们看两条路由,这两条路由指向到我们的路由器。你可以有多条路由,在这个示例中我们将展示如何在主机 1 上添加一个非默认路由:
```
# ip route add 192.168.120.0/24 via 192.168.110.126 dev ens3
```
这意味着主机 1 可以通过路由器接口 192.168.110.126 去访问 192.168.120.0/24 网络。看一下它们是如何工作的?主机 1 和路由器需要连接到相同的地址空间,然后路由器转发到其它的网络。
以下的命令可以删除一条路由:
```
# ip route del 192.168.120.0/24
```
在真实的案例中,你不需要像这样手动配置一台路由器,而是使用一个路由器守护程序,并通过 DHCP 做路由器通告,但是理解基本原理很重要。接下来我们将学习如何去配置一个易于使用的路由器守护程序来为你做这些事情。
通过来自 Linux 基金会和 edX 的免费课程[“Linux 入门”][2]来学习更多 Linux 的知识。
--------------------------------------------------------------------------------
@@ -109,10 +105,10 @@ via: https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginne
作者:[CARLA SCHRODER][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://linux.cn/article-9657-1.html
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@@ -1,179 +1,116 @@
如何使用 Ansible 打补丁以及安装应用
======
> 使用 Ansible IT 自动化引擎,节省花在更新上的时间。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
你有没有想过,如何打补丁、重启系统,然后继续工作?
如果你的回答是肯定的,那就需要了解一下 [Ansible][1] 了。它是一个配置管理工具,对于一些复杂的、有时候需要几个小时才能完成的系统管理任务,又或者对安全性有比较高要求的时候,使用 Ansible 能够大大简化工作流程。
以我作为系统管理员的经验,打补丁是一项最有难度的工作。每次遇到<ruby>公共漏洞披露<rt>Common Vulnerabilities and Exposure</rt></ruby>CVE通知或者<ruby>信息保障漏洞预警<rt>Information Assurance Vulnerability Alert</rt></ruby>IAVA时都必须要高度关注安全漏洞否则安全部门将会严肃追究自己的责任。
使用 Ansible 可以通过运行[封装模块][2]以缩短打补丁的时间,下面以 [yum 模块][3]更新系统为例,使用 Ansible 可以执行安装、更新、删除、从其它地方安装(例如持续集成/持续开发中的 `rpmbuild`)。以下是系统更新的任务:
```
- name: update the system
  yum:
    name: "*"
    state: latest
```
在第一行,我们给这个任务命名,这样可以清楚 Ansible 的工作内容。第二行表示使用 `yum` 模块在 CentOS 虚拟机中执行更新操作。第三行 `name: "*"` 表示更新所有程序。最后一行 `state: latest` 表示更新到最新的 RPM。
系统更新结束之后,需要重新启动并重新连接:
```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

- name: wait for 10 seconds
  pause:
    seconds: 10

- name: wait for the system to reboot
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 60

- name: install epel-release
  yum:
    name: epel-release
    state: latest
```
`shell` 模块中的命令让系统在 5 秒休眠之后重新启动,我们使用 `sleep` 来保持连接不断开,使用 `async` 设定最大等待时长以避免发生超时,`poll` 设置为 0 表示直接执行、不需要等待执行结果。暂停 10 秒钟以等待虚拟机恢复,使用 `wait_for_connection` 在虚拟机恢复连接后尽快连接。随后由 `install epel-release` 任务检查 RPM 的安装情况。你可以对这个剧本执行多次来验证它的幂等性,唯一会显示造成影响的是重启操作,因为我们使用了 `shell` 模块。如果不想造成实际的影响,可以在使用 `shell` 模块的时候加上 `changed_when: False`
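比如,下面这个小改动(只是示意)就可以让重启任务不再被统计为“发生了变更”:
```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  changed_when: false
```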
现在我们已经知道如何对系统进行更新、重启虚拟机、重新连接、安装 RPM 包。下面我们通过 [Ansible Lightbulb][4] 来安装 NGINX
```
- name: Ensure nginx packages are present
  yum:
    name: nginx, python-pip, python-devel, devel
    state: present
  notify: restart-nginx-service

- name: Ensure uwsgi package is present
  pip:
    name: uwsgi
    state: present
  notify: restart-nginx-service

- name: Ensure latest default.conf is present
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: yes
  notify: restart-nginx-service

- name: Ensure latest index.html is present
  template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html

- name: Ensure nginx service is started and enabled
  service:
    name: nginx
    state: started
    enabled: yes

- name: Ensure proper response from localhost can be received
  uri:
    url: "http://localhost:80/"
    return_content: yes
  register: response
  until: 'nginx_test_message in response.content'
  retries: 10
  delay: 1
```
以及用来重启 nginx 服务的处理程序handler文件
```
# nginx 角色的处理程序handler文件
- name: restart-nginx-service
  service:
    name: nginx
    state: restarted
```
在这个角色里,我们使用 RPM 安装了 `nginx`、`python-pip`、`python-devel`、`devel`,用 PIP 安装了 `uwsgi`,接下来使用 `template` 模块复制 `nginx.conf``index.html` 以显示页面,并确保服务在系统启动时启动。然后就可以使用 `uri` 模块检查到页面的连接了。
这个是一个系统更新、系统重启、安装 RPM 包的剧本示例,后续可以继续安装 nginx当然这里可以替换成任何你想要的角色和应用程序。
```
- hosts: all
  roles:
    - centos-update
    - nginx-simple
```
观看演示视频了解这个过程。
@@ -182,6 +119,10 @@ And the handler that restarts the nginx service:
这只是关于如何更新系统、重启以及后续工作的示例。简单起见,我只添加了不带[变量][5]的包,当你在操作大量主机的时候,你就需要修改其中的一些设置了:
- [async & poll](https://docs.ansible.com/ansible/latest/playbooks_async.html)
- [serial](https://docs.ansible.com/ansible/latest/playbooks_delegation.html#rolling-update-batch-size)
- [forks](https://docs.ansible.com/ansible/latest/intro_configuration.html#forks)
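例如,上面列表中的 `serial` 可以像下面这样用在剧本里做滚动更新(数值只是举例;`forks` 则是在 `ansible.cfg` 中控制并发连接数的设置):
```
- hosts: all
  serial: 2          # 每批只处理 2 台主机,逐批滚动更新,避免所有主机同时重启
  roles:
    - centos-update
    - nginx-simple
```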
这是由于在生产环境中如果你想逐一更新每一台主机的系统,你需要花相当一段时间去等待主机重启才能够继续下去。
有关 Ansible 进行自动化工作的更多用法,请查阅[其它文章][6]。
@@ -192,7 +133,7 @@ via: https://opensource.com/article/18/3/ansible-patch-systems
作者:[Jonathan Lozada De La Matta][a]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -1,36 +1,38 @@
Jupyter Notebooks 入门
=====
> 通过 Jupyter 使用实时代码、方程式、可视化和文本创建交互式的共享笔记本。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
自从有了纸莎草纸以来,出版者们一直在努力以吸引读者的方式来格式化数据。尤其是在数学、科学和编程领域,设计良好的图表、插图和方程式可以成为帮助人们理解技术信息的关键。
[Jupyter Notebook][1] 通过重新构想我们如何制作教学文本来解决这个问题。Jupyter我在 2017 年 10 月的 [All Things Open][2] 上首次了解到)是一款开源应用程序,它使用户能够创建包含实时代码、方程式、可视化和文本的交互式共享笔记本。
Jupyter 从 [IPython 项目][3]发展而来,它具有交互式 shell 和基于浏览器的笔记本支持代码、文本和数学表达式。Jupyter 支持超过 40 种编程语言,包括 Python、R 和 Julia其代码可以导出为 HTML、LaTeX、PDF、图像和视频或者作为 [IPython][4] 笔记本与其他用户共享。
> 一个有趣的事实是“Jupyter” 是 “Julia、Python 和 R” 的缩写。
根据 Jupyter 项目网站介绍,它的一些用途包括“数据清理和转换、数值模拟、统计建模、数据可视化、机器学习等等”。科学机构正在使用 Jupyter Notebooks 来解释研究结果。代码可以来自实际数据可以调整和重新调整以可视化成不同的结果和情景。通过这种方式Jupyter Notebooks 变成了生动的文本和报告。
### 安装并开始 Jupyter
Jupyter 软件是开源的,其授权于[修改过的 BSD 许可证][5],它可以[安装在 Linux、MacOS 或 Windows 上][6]。有很多种方法可以安装 Jupyter我在 Linux 和 MacOS 上试过 PIP 和 [Anaconda][7] 安装方式。PIP 安装要求你的计算机上已经安装了 PythonJupyter 推荐 Python 3。
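如果你选择 Anaconda 的方式,安装大致如下(示意;完整的 Anaconda 发行版本身已经自带 Jupyter这条命令适用于较精简的 conda 环境):
```
conda install jupyter
```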
由于 Python 3 已经安装在我的电脑上,我通过在终端(在 Linux 或 Mac 上)运行以下命令来安装 Jupyter
```
$ python3 -m pip install --upgrade pip
$ python3 -m pip install jupyter
```
在终端提示符输入以下命令立即启动应用程序:
```
$ jupyter notebook
```
很快,我的浏览器打开并显示了我在 `http://localhost:8888` 的 Jupyter Notebook 服务器。(支持的浏览器有 Google Chrome、Firefox 和 Safari
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_1.png?itok=UyM1GuVG)
@@ -38,18 +40,19 @@ $ jupyter notebook
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_2.png?itok=alDI432q)
一个新的笔记本随即打开,它带有一些可以更改的默认值(包括笔记本的名字)。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_3.png?itok=9zjG-5JC)
笔记本有两种不同的模式:“命令模式”和“编辑模式”。命令模式允许你添加或删除单元格。你可以通过按下 `Escape` 键进入命令模式,按 `Enter` 键或单击单元格进入编辑模式。
单元格周围的绿色高亮显示你处于编辑模式,蓝色高亮显示你处于命令模式。以下笔记本处于命令模式,并准备好执行单元中的 Python 代码。注意,我已将笔记本的名称更改为 “First Notebook”。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_4.png?itok=-QPxcuFX)
### 使用 Jupyter
Jupyter Notebooks 的强大之处在于除了能够输入代码之外,你还可以用 Markdown 添加叙述性和解释性文本。我想添加一个标题,所以我在代码上面添加了一个单元格,并以 Markdown 输入了一个标题。当我按下 `Ctrl+Enter` 时,我的标题转换为 HTML。LCTT 译注:或者可以按下 Run 按钮。)
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_5.png?itok=-sr9A8-W)
@@ -57,24 +60,24 @@ Jupyter Notebooks 的强大之处在于除了能够输入代码之外,你还
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_6.png?itok=o_g38ECp)
我也可以利用 IPython 的 [line magic 和 cell magic][8] 命令。你可以在代码单元内以 `%`(行魔术)或 `%%`(单元魔术)作为前缀来调用魔术命令。例如,`%lsmagic` 将输出所有可用于 Jupyter notebooks 的魔术命令。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_7.png?itok=uit0PtND)
这些魔术命令的例子包括 `%pwd`——它输出当前工作目录(例如 `/Users/YourName`)和 `%ls`——它列出当前工作目录中的所有文件和子目录。另一个魔术命令可以内联显示笔记本中由 `matplotlib` 生成的图表。`%%html` 将该单元格中的任何内容呈现为 HTML这对嵌入视频和链接很有用还有 JavaScript 和 Bash 的单元魔术命令。
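举个例子,下面这个代码单元(示意)先用行魔术查看当前目录,再内联绘制一张 matplotlib 图表:
```
%pwd
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])  # 简单的折线图,会直接嵌入在笔记本中显示
plt.show()
```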
如果你需要更多关于使用 Jupyter Notebooks 和它的特性的信息,它的帮助部分是非常完整的。
人们用许多有趣的方式使用 Jupyter Notebooks你可以在这个[展示栏目][9]里找到一些很好的例子。你如何使用 Jupyter 笔记本?请在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -0,0 +1,134 @@
开始 Vagrant 之旅
=====
> 用管理虚拟机和容器的工具 Vagrant 清理你的开发环境和依赖。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
如果你和我一样,你可能在某一个地方有一个“沙盒”,你可以在那里进行你正在做的任何项目。随着时间的推移,沙盒会变得杂乱无章,充斥着各种想法、工具链元素、你不使用的代码模块,以及其他你不需要的东西。当你完成某件事情时,这会使你的部署变得复杂,因为你可能不确定项目的实际依赖关系 —— 随着时间推移你在沙盒中已经有了一些工具,但是你忘了必须安装它。你需要一个干净的环境,将所有的依赖关系放在一个地方,以便以后更方便。
或者你可能工作在 DevOps 中,你所服务的开发人员用模糊的依赖关系来编写代码,这使得测试变得更加困难。你需要一种方法来获得一个干净的盒子,将代码放入其中,并通过它运行代码,而且你希望这些环境是一次性的和可重复的。
那么,选择 [Vagrant][1] 吧。它由 HashiCorp 创建,以 [MIT 许可证][2]发布Vagrant 可充当 VirtualBox、Microsoft Hyper-V 或 Docker 容器的包装器和前端,并且可以通过[许多其他供应商][3]的插件进行扩展。你可以配置 Vagrant 以提供可重复的、干净的环境,并且已安装需要的基础架构。配置脚本是可移植的,因此,如果你的仓库和 Vagrant 配置脚本位于基于云的存储上,那么你只需要很少的限制就可以启动并在多台机器上工作。让我们来看一看。
### 安装
对于本次安装,我的环境是 Linux Mint 桌面,版本是 18.3 Cinnamon 64 位,在其他大多数 Debian 派生系统上安装非常类似。在大多数发行版中,对于基于 RPM 的系统也有类似的安装程序。Vagrant 的[安装页面][4]为 Debian、 Windows、 CentOS、 MacOS 和 Arch Linux 都提供下载,但是我在我的软件包管理器中找到了它,所以我在那进行了安装。
最简单的安装使用了 VirtualBox 作为虚拟化提供者,所以我需要安装它:
```
sudo apt-get install virtualbox vagrant
```
安装程序将会获取依赖项 —— 主要是 Ruby 的一些东西,安装它们。
### 建立一个项目
在设置你的项目之前,你需要了解一些你想要运行它的环境。你可以在 [Vagrant Boxes 仓库][5]中找到为许多虚拟化供应商提供的大量预配置的<ruby>系统<rt>box</rt></ruby>。许多会预先配置一些你可能需要的核心基础设置,比如 PHP、 MySQL 和 Apache但是对于本次测试我将安装一个 Debian 8 64 位 “Jessie” 裸机沙盒并手动安装一些东西,这样你就可以看到具体过程了。
```
mkdir ~/myproject
cd ~/myproject
vagrant init debian/contrib-jessie64
vagrant up
```
最后一条命令将根据需要从仓库中获取或更新 VirtualBox 镜像,然后运行启动器,你的系统上会出现一个运行的系统!下次启动这个项目时,除非镜像已经在仓库中更新,否则不会花费太长时间。
要访问该沙盒,只需要输入 `vagrant ssh`,你将进入虚拟机的全功能 SSH 会话中,你将会是 `vagrant` 用户,但也是 `sudo` 组的成员,所以你可以切换到 root并在这里做你想做的任何事情。
你会在沙盒中看到一个名为 `/vagrant` 的目录,对这个目录小心点,因为它与你主机上的 `~/myproject` 文件夹保持同步。在虚拟机的 `/vagrant` 下建立一个文件,它会立即复制到主机上,反之亦然。注意,有些沙盒并没有安装 VirtualBox 的附加功能,所以拷贝只能在启动时才起作用。有一些用于手动同步的命令行工具,这可能是测试环境中非常有用的特性。我倾向于坚持使用那些有附加功能的沙盒,这样这个目录就可以正常工作,而不必多想。
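一个快速的验证方法(示意;目录名沿用上文的 `~/myproject`,文件名是随意取的):
```
$ vagrant ssh                  # 进入虚拟机
$ touch /vagrant/hello.txt     # 在虚拟机内创建文件
$ exit
$ ls ~/myproject/hello.txt     # 回到主机:文件已经同步出现
```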
这个方案的好处很快显现出来了:如果你在主机上有一个代码编辑工具链,并出于某种原因不希望它出现在虚拟机上,那么这不是问题 —— 在主机上进行编辑,虚拟机会立刻同步更改。在虚拟机上快速做一个更改,它也会同步到主机上的“官方”副本。
让我们关闭这个系统,这样我们就可以在这个系统里提供一些我们需要的东西:
```
vagrant halt
```
### 在虚拟机上安装额外的软件
对于这个例子,我将使用 [Apache][6]、[PostgreSQL][7] 和 Perl 的 [Dancer][8] Web 框架进行项目开发。我将修改 Vagrant 配置脚本,以便我需要的东西已经安装。为了使之稍后更容易保持更新,我将在项目根目录下创建一个脚本 `~/myproject/Vagrantfile`
```
$provision_script = <<SCRIPT
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -y install \
apache2 \
postgresql-client-9.4 \
postgresql-9.4 \
libdbd-pg-perl \
libapache2-mod-fastcgi \
libdata-validate-email-perl \
libexception-class-perl \
libexception-class-trycatch-perl \
libtemplate-perl \
libtemplate-plugin-json-escape-perl \
libdbix-class-perl \
libyaml-tiny-perl \
libcrypt-saltedhash-perl \
libdancer2-perl \
libtemplate-plugin-gravatar-perl \
libtext-csv-perl \
libstring-tokenizer-perl \
cpanminus
cpanm -f -n \
Dancer2::Session::Cookie \
Dancer2::Plugin::DBIC \
Dancer2::Plugin::Auth::Extensible::Provider::DBIC \
Dancer2::Plugin::Locale \
Dancer2::Plugin::Growler
sudo a2enmod rewrite fastcgi
sudo apache2ctl restart
SCRIPT
```
在 Vagrantfile 结尾附近,你会发现一行 `config.vm.provision` 变量,正如你在示例中看到的那样,你可以在此处以内联方式进行操作,只需通过取消注释以下行:
```
# config.vm.provision "shell", inline: <<-SHELL
# sudo apt-get update
# sudo apt-get install -y apache2
# SHELL
```
相反,将那四行替换为使用你在文件顶部定义为变量的配置脚本:
```
config.vm.provision "shell", inline: $provision_script
```
你可能还希望将转发的端口设置为从主机访问虚拟机上的 Apache。寻找包含 `forwarded_port` 的行并取消注释它。如果你愿意,也可以将端口从 8080 更改为其他端口。我通常使用端口 5000在我的浏览器中访问 http://localhost:5000 就可以访问我虚拟机上的 Apache 服务器。
这里有一个设置提示:如果你的仓库位于云存储上,为了在多台机器上使用 Vagrant你可能希望将不同机器上的 `VAGRANT_HOME` 环境变量设置为不同的东西。以 VirtualBox 的工作方式,你需要分别为这些系统存储状态信息,确保你的版本控制系统忽略了用于此的目录 —— 我将 `.vagrant.d*` 添加到仓库的 `.gitignore` 文件中。不过,我确实让 Vagrantfile 成为仓库的一部分!
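具体做法大致如下(示意;`VAGRANT_HOME` 是 Vagrant 真实支持的环境变量,目录名可按自己的习惯取):
```
# 在每台机器的 shell 配置中,为 Vagrant 状态指定各自的目录
export VAGRANT_HOME=~/.vagrant.d-$(hostname)

# 让版本控制忽略状态目录Vagrantfile 本身仍然入库)
echo '.vagrant.d*' >> .gitignore
```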
### 好了!
我输入 `vagrant up`,我准备开始写代码了。一旦你做了一两次,你可能会想到你可以循环利用很多的 Vagrantfile 模板文件(就像我刚刚那样),这就是 Vagrant 的优势之一。你可以更快地完成实际的编码工作,并将很少的时间花在基础设施上!
你可以使用 Vagrant 做更多事情。配置工具存在于许多工具链中,因此,无论你需要复制什么环境,它都是快速而简单的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/getting-started-vagrant
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://vagrantup.com
[2]:https://opensource.org/licenses/MIT
[3]:https://github.com/hashicorp/vagrant/wiki/Available-Vagrant-Plugins#providers
[4]:https://www.vagrantup.com/downloads.html
[5]:https://app.vagrantup.com/boxes/search
[6]:https://httpd.apache.org/
[7]:https://postgresql.org
[8]:https://perldancer.org


@@ -1,17 +1,18 @@
Bootiso 让你安全地创建 USB 启动设备
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/USB-drive-720x340.png)
你好,新兵!你们有些人经常使用 `dd` 命令做各种各样的事,比如创建 USB 启动盘或者克隆硬盘分区。不过请牢记,`dd` 是一个危险且有毁灭性的命令。如果你是个 Linux 的新手,最好避免使用 `dd` 命令。如果你不知道你在做什么,你可能会在几分钟里把硬盘擦掉。从原理上说,`dd` 只是从 `if` 读取然后写到 `of` 上。它才不管往哪里写呢。它根本不关心那里是否有分区表、引导区、家目录或是其他重要的东西。你叫它做什么它就做什么。可以使用像 [Etcher][1] 这样的用户友好的应用来代替它,这样你就可以在创建 USB 引导设备之前知道你将要格式化的是哪块盘。
今天,我发现了另一个可以安全创建 USB 引导设备的工具 Bootiso。它实际上是一个 BASH 脚本,但真的很智能!它有很多额外的功能来帮我们安全创建 USB 引导盘。如果你想确保你的目标是 USB 设备(而不是内部驱动器),或者如果你想检测 USB 设备,你可以使用 Bootiso。下面是使用此脚本的显著优点
* 如果只有一个 USB 驱动器Bootiso 会自动选择它。 * 如果只有一个 USB 驱动器Bootiso 会自动选择它。
* 如果有一个以上的 USB 驱动器存在,它可以让你从列表中选择其中一个。 * 如果有一个以上的 USB 驱动器存在,它可以让你从列表中选择其中一个。
* 万一你错误地选择一个内部硬盘驱动器,它将退出而不做任何事情。 * 万一你错误地选择一个内部硬盘驱动器,它将退出而不做任何事情。
* 它检查选定的 ISO 是否具有正确的 MIME 类型。如果 MIME 类型不正确,它将退出。 * 它检查选定的 ISO 是否具有正确的 MIME 类型。如果 MIME 类型不正确,它将退出。
* 它判定所选的项目不是分区,如果判定失败则退出。 * 它判定所选的项目不是分区,如果判定失败则退出。
* 它将在擦除和分区 USB 驱动器之前提示用户确认。 * 它将在擦除和对 USB 驱动器分区之前提示用户确认。
* 列出可用的 USB 驱动器。 * 列出可用的 USB 驱动器。
* 安装 syslinux 引导系统 (可选)。 * 安装 syslinux 引导系统 (可选)。
* 自由且开源。 * 自由且开源。
@ -19,46 +20,48 @@ Bootiso 让你安全地创建 USB 启动设备
### 使用 Bootiso 安全地创建 USB 驱动器
安装 Bootiso 非常简单。用这个命令下载最新版本:
```
$ curl -L https://rawgit.com/jsamr/bootiso/latest/bootiso -O
```
把下载的文件加到 `$PATH` 目录中,比如 `/usr/local/bin/`
```
$ sudo cp bootiso /usr/local/bin/
```
最后,添加运行权限:
```
$ sudo chmod +x /usr/local/bin/bootiso
```
搞定!现在就可以创建 USB 引导设备了。首先,让我们用命令看看现在有哪些 USB 驱动器:
```
$ bootiso -l
```
输出:
```
Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdb 1 7.5G running disk
```
如你所见,我只有一个 USB 驱动器。让我们继续通过命令用 ISO 文件创建 USB 启动盘:
```
$ bootiso bionic-desktop-amd64.iso
```
这个命令会提示你输入 `sudo` 密码。输入密码并回车来安装缺失的组件(如果有的话),然后创建 USB 启动盘。
输出:
```
[...]
Listing USB drives available in your system:
@ -79,77 +82,78 @@ ISO succesfully unmounted.
USB device succesfully unmounted.
USB device succesfully ejected.
You can safely remove it !
```
如果你的 ISO 文件 MIME 类型不对,你会得到下列错误信息:
```
Provided file `bionic-desktop-amd64.iso' doesn't seem to be an iso file (wrong mime type: `application/octet-stream').
Exiting bootiso...
```
当然,你也能像下面那样使用 `--no-mime-check` 选项来跳过 MIME 类型检查。
```
$ bootiso --no-mime-check bionic-desktop-amd64.iso
```
就像我前面提到的,如果系统里只有 1 个 USB 设备Bootiso 将自动选中它,所以我们不需要告诉它 USB 设备路径。如果你连接了多个设备,你可以像下面这样使用 `-d` 来指明 USB 设备。
```
$ bootiso -d /dev/sdb bionic-desktop-amd64.iso
```
用你自己的设备路径来换掉 `/dev/sdb`
在多个设备情况下,如果你没有使用 `-d` 来指明要使用的设备Bootiso 会提示你选择可用的 USB 设备。
Bootiso 在擦除和改写 USB 盘分区前会要求用户确认。使用 `-y``--assume-yes` 选项可以跳过这一步。
```
$ bootiso -y bionic-desktop-amd64.iso
```
你也可以把自动选择 USB 设备与 `-y` 选项连用,如下所示。
```
$ bootiso -y -a bionic-desktop-amd64.iso
```
或者,
```
$ bootiso --assume-yes --autoselect bionic-desktop-amd64.iso
```
请记住,只有当你连接了一个 USB 驱动器时,它才会起作用。
Bootiso 会默认创建一个 FAT32 分区,挂载后用 `rsync` 程序把 ISO 的内容拷贝到 USB 盘里。如果你愿意,也可以使用 `dd` 代替 `rsync`
```
$ bootiso --dd -d /dev/sdb bionic-desktop-amd64.iso
```
如果你想增加 USB 引导的成功概率,请使用 `-b``--bootloader` 选项。
```
$ bootiso -b bionic-desktop-amd64.iso
```
上面这条命令会安装 `syslinux` 引导程序(安全模式)。注意,此时 `--dd` 选项不可用。
在创建引导设备后Bootiso 会自动弹出 USB 设备。如果不想自动弹出,请使用 `-J``--no-eject` 选项。
```
$ bootiso -J bionic-desktop-amd64.iso
```
现在USB 设备依然连接中。你可以使用 `umount` 命令随时卸载它。
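比如,假设这个 U 盘的分区仍挂载在 `/dev/sdb1`(具体设备路径因系统而异,这里只是示例),就可以这样卸载:
```
$ sudo umount /dev/sdb1
```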
需要完整帮助信息,请输入:
```
$ bootiso -h
```
好,今天就到这里。希望这个脚本对你有帮助。好货不断,不要走开哦!
@ -160,9 +164,9 @@ $ bootiso -h
via: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[kennethXia](https://github.com/kennethXia)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,17 +1,17 @@
有用的资源,献给那些想更多了解 Linux 的人
=====
Linux 是最流行和多功能的操作系统之一,它可以用在智能手机、电脑甚至汽车上。自 20 世纪 90 年代以来Linux 存在至今,并且仍然是最普遍的操作系统之一。
Linux 实际上用于运行大多数网络服务,因为与其他操作系统相比,它被认为是相当稳定的。这是[人们选择 Linux 多于 Windows 的原因][1]之一。此外Linux 为用户提供了隐私,根本不收集用户信息,而 Windows 10 及其 Cortana 语音控制系统总是需要更新你的个人信息。
Linux 有很多优点。然而,人们并没有听到太多关于它的消息,因为它已被 Windows 和 Mac 挤出了(桌面)市场。许多人开始使用 Linux 时会感到困惑,因为它与流行的操作系统有点不同。
为了帮助你,我们为那些想要了解更多关于 Linux 的人收集了 5 个有用的资源。
### 1、[Linux 纯新手][2]
如果你想尽可能多地学习 Linux你应该考虑 Eduonix 为[初学者提供的 Linux 完整教程][2]。这个课程将向你介绍 Linux 的所有功能,并为你提供所有必要的资料,以帮助你了解更多关于 Linux 工作原理的特性。
如果你是以下情况,你应该选择本课程:
@ -20,58 +20,48 @@ Linux 有很多优点。然而,人们并没有听到太多关于它的消息
* 你想了解 Linux 如何与硬件配合使用;
* 你想学习如何操作 Linux 命令行。
### 2、[PC World: Linux 初学者指南][3]
这是为想要在一个地方学习所有有关 Linux 的人提供的[免费资源][3]。PC World 专注于计算机操作系统的各个方面,并为订阅用户提供最准确和最新的信息。在这里,你还可以了解更多关于 [Linux 的好处][4] 和关于其操作系统的最新消息。
该资源为你提供以下信息:
* 如何安装 Linux
* 如何使用命令行;
* 如何安装软件;
* 如何操作 Linux 桌面环境。
### 3、[Linux.comLinux 培训][5]
很多使用计算机的人都需要学习如何操作 Linux以防 Windows 操作系统突然崩溃。还有什么比使用官方资源来开始你的 [Linux 培训][5]更好呢?
该资源可以让你在线注册 Linux 培训从官方来源获取最新信息。“一年前我们的 IT 部门在官方网站上为我们提供了 Linux 培训,” [Assignmenthelper.com.au][6] 的开发人员 Martin Gibson 说道。“我们选择了这门课,因为我们需要学习如何将我们的所有文件备份到另一个系统,为我们的客户提供最大的安全性,而且这个资源真的教会了我们所有的东西。”
如果符合以下情况,你肯定应该使用这个资源:
* 你想获得有关操作系统的第一手信息;
* 想要了解如何在你的计算机上运行 Linux 的特性;
* 想要与其他 Linux 用户联系并与他们分享你的经验。
### 4、[Linux 基金会:视频培训][7]
如果你在阅读大量资源时容易感到无聊那么该网站绝对适合你。Linux 基金会提供了由 IT 专家、软件开发技术人员和技术顾问举办的视频培训、讲座和在线研讨会。
所有培训视频分为以下类别:
* 开发人员:使用 Linux 内核来处理 Linux 设备驱动程序、Linux 虚拟化等;
* 系统管理员:在 Linux 上开发虚拟主机,构建防火墙,分析 Linux 性能等;
* 用户Linux 入门,介绍嵌入式 Linux 等。
### 5、[LinuxInsider][8]
你知道吗?微软对 Linux 的效率感到惊讶,它[允许用户在微软云计算设备上运行 Linux][9]。如果你想了解更多关于 Linux 操作系统的知识LinuxInsider 会向用户提供关于 Linux 操作系统的最新消息,以及最新更新和 Linux 特性的信息。
在此资源上,你将有机会:
* 参与 Linux 社区;
* 了解如何在各种设备上运行 Linux
* 看看评论;
* 参与博客讨论并阅读科技博客。
### 总结一下
@ -80,7 +70,7 @@ Linux 提供了很多好处,包括完全的隐私,稳定的操作甚至恶
### 关于作者
Lucy Benton 是一位数字营销专家、商业顾问帮助人们将他们的梦想变为有利润的业务。现在她正在撰写营销和商业相关的文章。Lucy 还有自己的博客 [_Prowritingpartner.com_][10],在那里你可以查看她最近发表的文章。
--------------------------------------------------------------------------------
@ -88,9 +78,9 @@ Lucy Benton 是一位数字营销专家,商业顾问,帮助人们将他们
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
作者:[Lucy Benton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,22 +1,24 @@
6 个 Python 日期时间库
=====
> 在 Python 中有许多库可以很容易地测试、转换和读取日期和时间信息。
![6 Python datetime libraries ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd "6 Python datetime libraries ")
图片由 [WOCinTech Chat][1] 提供,根据 Opensource.com 修改。[CC BY-SA 4.0][2]
_这篇文章是与 [Jeff Triplett][3] 一起合写的。_
曾几何时我们中的一个人Lacey盯着 [Python 文档][4]中描述日期和时间格式化字符串的表格看了一个多小时。当我试图编写将从 API 得到的日期时间字符串转换为 [Python datetime][5] 对象的代码时,我很难理解其中的特定部分,因此我决定请求帮助。
有人问道:“为什么你不使用 `dateutil` 呢?”
读者,如果你从这个月的 Python 专栏中只学到一件事:有比 datetime 的 `strptime` 更容易的将日期时间字符串转换为 datetime 对象的方法,那么我们就觉得已经成功了。
但是,除了将字符串转换为更有用的 Python 对象之外,还有许多库都有一些有用的方法和工具,可以让你更轻松地进行时间测试、将时间转换为不同的时区、以人类可读的格式传递时间信息,等等。如果这是你在 Python 中第一次接触日期和时间,请暂停并阅读 _[如何使用 Python 的日期和时间][6]_ 。要理解为什么在编程中处理日期和时间是困难的,请阅读[愚蠢的程序员相信时间][7]。
这篇文章将会向你介绍以下库:
* [Dateutil][8]
* [Arrow][9]
* [Moment][10]
@ -26,117 +28,97 @@ _这篇文章是与 [Jeff Triplett][3] 一起合写的。_
随意跳过那些你已经熟悉的库,专注于那些对你而言是新的库。
### 内建 datetime 模块
在跳转到其他库之前,让我们回顾一下如何使用 `datetime` 模块将日期字符串转换为 Python datetime 对象。
假设我们从 API 接收到一个日期字符串,并且需要它作为 Python datetime 对象存在:
```
2018-04-29T17:45:25Z
```
这个字符串包括:
* 日期是 `YYYY-MM-DD` 格式的
* 字母 `T` 表示时间即将到来
* 时间是 `HH:II:SS` 格式的
* 表示此时间的时区指示符 `Z` 采用 UTC详细了解[日期时间字符格式][19]
要使用 `datetime` 模块将此字符串转换为 Python datetime 对象,你应该从 `strptime` 开始。`datetime.strptime` 接受日期字符串和格式化字符并返回一个 Python datetime 对象。
我们必须手动将日期时间字符串的每个部分转换为 Python 的 `datetime.strptime` 可以理解的合适的格式化字符串。四位数年份由 `%Y` 表示,两位数月份是 `%m`,两位数的日期是 `%d`。在 24 小时制中,小时是 `%H`,分钟是 `%M`,秒是 `%S`
要弄清这些格式字符,需要仔细查看 [Python 文档][20]中的表格。
由于字符串中的 `Z` 表示此日期时间字符串采用 UTC所以我们可以在格式中忽略此项。现在我们先不用担心时区。
转换的代码是这样的:
```
$ from datetime import datetime
$ datetime.strptime('2018-04-29T17:45:25Z', '%Y-%m-%dT%H:%M:%SZ')
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
格式字符串很难阅读和理解。我必须手动确认原始字符串中字母 `T``Z` 的位置,以及标点符号和格式化字符串(如 `%S``%m`)。有些不太了解 datetime 的人阅读我的代码时可能会觉得很难理解,尽管其含义已有文档记载,但它仍然很难阅读。
让我们看看其他库是如何处理这种转换的。
### Dateutil
[dateutil 模块][21]对 `datetime` 模块做了一些扩展。
继续使用上面的解析示例,使用 `dateutil` 实现相同的结果要简单得多:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z')
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
如果字符串包含时区,那么 `dateutil` 解析器会自动返回字符串的时区。由于我们在 UTC 时区,你可以看到返回了一个带时区信息的 datetime 对象。如果你想在解析时完全忽略时区信息并返回原生的 datetime 对象,你可以传递 `ignoretz=True`,如下所示:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z', ignoretz=True)
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
`dateutil` 还可以解析其他人类可读的日期字符串:
```
$ parse('April 29th, 2018 at 5:45 pm')
datetime.datetime(2018, 4, 29, 17, 45)
```
`dateutil` 还提供了像 [relativedelta][22] 这样的工具,用于计算两个日期时间之间的时间差,或向日期时间添加或删除时间;[rrule][23] 用于创建重复的日期时间;[tz][24] 用于处理时区,此外还有其他工具。
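比如,用 `relativedelta` 做相对日期运算大致像这样(一个最小示例,数值是随意选的):
```
$ from dateutil.parser import parse
$ from dateutil.relativedelta import relativedelta
$ parse('2018-04-29T17:45:25Z', ignoretz=True) + relativedelta(months=+1, days=+10)
datetime.datetime(2018, 6, 8, 17, 45, 25)
```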
### Arrow
[Arrow][25] 是另一个库,其目标是操作、格式化,以及处理对人类更友好的日期和时间。它包含 `dateutil`,根据其[文档][26],它旨在“帮助你使用更少的包导入和更少的代码来处理日期和时间”。
回到我们的解析示例,下面介绍如何使用 Arrow 将日期字符串转换为 Arrow 的 datetime 类的实例:
```
$ import arrow
$ arrow.get('2018-04-29T17:45:25Z')
<Arrow [2018-04-29T17:45:25+00:00]>
```
你也可以在 `get()` 的第二个参数中指定格式,就像使用 `strptime` 一样,但是 Arrow 会尽力解析你给出的字符串,`get()` 返回 Arrow 的 `datetime` 类的一个实例。要使用 Arrow 来获取 Python datetime 对象,按照如下方式链式调用 `datetime`
```
$ arrow.get('2018-04-29T17:45:25Z').datetime
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
通过 Arrow datetime 类的实例,你可以访问 Arrow 的其他有用方法。例如,它的 `humanize()` 方法可以将日期时间翻译成人类可读的短语,就像这样:
```
$ import arrow
$ utc = arrow.utcnow()
$ utc.humanize()
'seconds ago'
```
@ -144,23 +126,20 @@ $ utc.humanize()
### Moment
[Moment][28] 的作者认为它处于“内部测试版”,但即使它还在早期阶段,它也非常受欢迎,所以我们想来讨论它。
用 Moment 的方法将字符串转换为其他更有用的东西很简单,类似于我们之前提到的库:
```
$ import moment
$ moment.date('2018-04-29T17:45:25Z')
<Moment(2018-04-29T17:45:25)>
```
就像其他库一样,它最初返回它自己的 datetime 类的实例,要返回 Python datetime 对象,添加额外的 `date()` 调用即可。
```
$ moment.date('2018-04-29T17:45:25Z').date
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<StaticTzInfo 'Z'>)
```
@ -170,71 +149,60 @@ Moment 还提供了使用人类可读的语言创建新日期的方法。例如
```
$ moment.date("tomorrow")
<Moment(2018-04-06T11:24:42)>
```
它的 `add()``subtract()` 命令使用关键字参数来简化日期的操作。为了获得后天Moment 会使用下面的代码:
```
$ moment.date("tomorrow").add(days=1)
<Moment(2018-04-07T11:26:48)>
```
### Maya
[Maya][29] 包含了 Python 中其他流行的日期时间处理库,包括 Humanize、pytz 和 pendulum 等等。这个项目旨在让人们更容易处理日期。
Maya 的 README 包含几个有用的示例。以下是如何使用 Maya 来重新处理以前的解析示例:
```
$ import maya
$ maya.parse('2018-04-29T17:45:25Z').datetime()
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<UTC>)
```
注意我们必须在 `maya.parse()` 之后调用 `datetime()`。如果我们跳过这一步Maya 将会返回一个 MayaDT 类的实例:`<MayaDT epoch=1525023925.0>`
由于 Maya 复刻了 datetime 库中很多有用的方法,因此它可以使用 MayaDT 类的实例来执行诸如使用 `slang_time()` 方法将时间偏移量转换为纯文本语言之类的操作,并将日期时间间隔保存在单个类的实例中。以下是如何使用 Maya 将日期时间表示为人类可读的短语:
```
$ import maya
$ maya.parse('2018-04-29T17:45:25Z').slang_time()
'23 days from now'
```
显然,`slang_time()` 的输出将根据 datetime 对象距离现在较近还是较远而变化。
### Delorean
[Delorean][30] 以《回到未来》电影中的时间旅行汽车命名,它对于操纵日期时间特别有用,包括将日期时间转换为其他时区,以及添加或减去时间。
Delorean 需要有效的 Python datetime 对象才能工作,所以如果你需要使用时间字符串,最好将其与上述库中的一个配合使用。例如,将 Maya 与 Delorean 一起使用:
```
$ import maya
$ d_t = maya.parse('2018-04-29T17:45:25Z').datetime()
```
现在,你有了一个 datetime 对象 `d_t`,就可以使用 Delorean 来做一些事情,例如将日期时间转换为美国东部时区:
```
$ from delorean import Delorean
$ d = Delorean(d_t)
$ d
Delorean(datetime=datetime.datetime(2018, 4, 29, 17, 45, 25), timezone='UTC')
$ d.shift('US/Eastern')
Delorean(datetime=datetime.datetime(2018, 4, 29, 13, 45, 25), timezone='US/Eastern')
```
@ -244,7 +212,6 @@ Delorean(datetime=datetime.datetime(2018, 4, 29, 13, 45, 25), timezone='US/Easte
```
$ d.next_friday()
Delorean(datetime=datetime.datetime(2018, 5, 4, 13, 45, 25), timezone='US/Eastern')
```
@ -252,7 +219,7 @@ Delorean(datetime=datetime.datetime(2018, 5, 4, 13, 45, 25), timezone='US/Easter
### Freezegun
[Freezegun][32] 是一个可以帮助你在 Python 代码中测试特定日期的库。使用 `@freeze_time` 装饰器,你可以为测试用例设置特定的日期和时间,并且所有对 `datetime.datetime.now()``datetime.datetime.utcnow()` 等的调用都将返回你指定的日期和时间。例如:
```
@ -264,7 +231,7 @@ def test(): 
    assert datetime.datetime.now() == datetime.datetime(2017, 4, 14)
```
要跨时区进行测试,你可以将 `tz_offset` 参数传递给装饰器。`freeze_time` 装饰器也接受更简单的口语化日期,例如 `@freeze_time('April 4, 2017')`
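把这两个特性放在一起的一个小测试大致像这样(示例日期沿用 freezegun 文档的写法,断言的值由冻结时间和偏移推出):
```
import datetime
from freezegun import freeze_time

# tz_offset 影响本地时间 now(),但不影响 utcnow()
@freeze_time('April 4, 2017', tz_offset=-4)
def test_with_timezone():
    assert datetime.datetime.utcnow() == datetime.datetime(2017, 4, 4)
    assert datetime.datetime.now() == datetime.datetime(2017, 4, 3, 20, 0)
```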
---
@ -276,8 +243,10 @@ def test(): 
via: [https://opensource.com/article/18/4/python-datetime-libraries][34]
作者: [Lacey Williams Hensche][35]
选题: [lujun9972](https://github.com/lujun9972)
译者: [MjSeven](https://github.com/MjSeven)
校对: [wxy](https://github.com/wxy)
本文由 [LCTT][39] 原创编译,[Linux中国][40] 荣誉推出
@ -316,8 +285,5 @@ via: [https://opensource.com/article/18/4/python-datetime-libraries][34]
[33]: https://github.com/kennethreitz/maya
[34]: https://opensource.com/article/18/4/python-datetime-libraries
[35]: https://opensource.com/users/laceynwilliams
[39]: https://github.com/LCTT/TranslateProject
[40]: https://linux.cn/
@ -0,0 +1,64 @@
一个可以更好地调试的 Perl 模块
======
> 这个简单优雅的模块可以让你包含调试或仅用于开发环境的代码,而在产品环境中隐藏它们。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
仅用于调试或开发调整的 Perl 代码块有时会很有用。这很好,但是这样的代码块可能会对性能产生很大的影响,尤其是在运行时才决定是否执行它的时候。
[Curtis "Ovid" Poe][1] 最近编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 `Keyword::Simple` 和 Perl 5.012 中引入的可插入关键字架构来创建了新的关键字:`DEVELOPMENT`。它使用 `PERL_KEYWORD_DEVELOPMENT` 环境变量的值来确定是否要执行一段代码。
它用起来再简单不过了:
```
use Keyword::DEVELOPMENT;
sub doing_my_big_loop {
my $self = shift;
DEVELOPMENT {
# insert expensive debugging code here!
}
}
```
当该环境变量为假时,`DEVELOPMENT` 块内的代码在编译阶段就会被优化掉,在生产环境中根本不存在。
你看到好处了么?在沙盒中将 `PERL_KEYWORD_DEVELOPMENT` 环境变量设置为 `true`,在生产环境设为 `false`,并且可以将有价值的调试工具提交到你的代码库中,在你需要的时候随时可用。
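比如,在开发环境里可以这样临时打开这个开关(这里的脚本名 `myapp.pl` 只是一个假设的例子):
```
$ PERL_KEYWORD_DEVELOPMENT=1 perl myapp.pl
```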
在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异:
```
sub connect_to_my_database {
my $dsn = "dbi:mysql:productiondb";
my $user = "db_user";
my $pass = "db_pass";
DEVELOPMENT {
# Override some of that config information
$dsn = "dbi:mysql:developmentdb";
}
my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
对这个代码片段的进一步增强,是从其他地方(比如 YAML 或 INI 文件)读取配置信息,但我希望你已经在这里看到了这个工具的用处。
我查看了 `Keyword::DEVELOPMENT` 的源码,花了大约半小时研究,然后感叹:“天哪,我为什么没有想到这个?”有了 `Keyword::Simple`Curtis 给我们的这个模块就非常简单了。这是我长期以来在自己的编码实践中所需要的一个优雅解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm
@ -1,118 +1,117 @@
如何编译 Linux 内核
======
> Jack 将带你在 Ubuntu 16.04 服务器上走过内核编译之旅。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chester-alvarez-644-unsplash.jpg?itok=aFxG9kUZ)
曾经有一段时间,升级 Linux 内核让很多用户打心里有所畏惧。在那个时候,升级内核包含了很多步骤,也需要很多时间。现在,内核的安装可以轻易地通过像 `apt` 这样的包管理器来处理。通过添加特定的仓库,你能很轻易地安装实验版本的或者指定版本的内核(比如针对音频制作的实时内核)。
考虑一下,既然升级内核如此容易,为什么你不愿意自行编译一个呢?这里列举一些可能的原因:
* 你想要简单了解编译内核的过程
* 你需要启用或者禁用内核中特定的选项,因为它们没有出现在标准选项里
* 你想要启用标准内核中可能没有添加的硬件支持
* 你使用的发行版需要你编译内核
* 你是一个学生,而编译内核是你的任务
不管出于什么原因,懂得如何编译内核是非常有用的,而且可以被视作一种“成人礼”。当我第一次编译一个新的 Linux 内核(那是很久以前了),然后尝试从它启动时,我从中(系统马上就崩溃了,然后不断地尝试和失败)感受到一种特别的兴奋。
既然这样,让我们来实验一下编译内核的过程。我将使用 Ubuntu 16.04 Server 来进行演示。在运行了一次常规的 `sudo apt upgrade` 之后,当前安装的内核版本是 `4.4.0-121`。我想要升级内核版本到 `4.17`。让我们小心地开始吧。
有一个警告:强烈建议你在虚拟机里实验这个过程。基于虚拟机,你总能创建一个快照,然后轻松地从任何问题中回退出来。不要用这种方式在生产机器上升级内核,除非你知道你在做什么。
### 下载内核
我们要做的第一件事是下载内核源码。在 [Kernel.org][1] 找到所需内核的下载 URL。找到 URL 之后使用如下命令我以 `4.17 RC2` 内核为例)来下载源码文件:
```
wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz
```
在下载期间,有一些事需要去考虑。
### 安装需要的环境
为了编译内核,我们首先得安装一些需要的环境。这可以通过一个命令来完成:
```
sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison
```
务必注意:你将需要至少 128GB 的本地可用磁盘空间来完成内核的编译过程,因此你必须确保有足够的空间。
### 解压源码
在新下载的内核所在的文件夹下,使用该命令来解压内核:
```
tar xvzf linux-4.17-rc2.tar.gz
```
使用命令 `cd linux-4.17-rc2` 进入新生成的文件夹。
### 配置内核
在正式编译内核之前,我们首先必须配置需要包含哪些模块。实际上,有一些非常简单的方式来配置。使用一个命令,你能拷贝当前内核的配置文件,然后使用可靠的 `menuconfig` 命令来做任何必要的更改。使用如下命令来完成:
```
cp /boot/config-$(uname -r) .config
```
现在你有一个配置文件了,输入命令 `make menuconfig`。该命令将打开一个配置工具(图 1它可以让你遍历每个可用模块然后启用或者禁用你需要或者不需要的模块。
![menuconfig][3]
*图 1运行中的 `make menuconfig`*
很有可能你会禁用掉内核中的一个重要部分,所以在 `menuconfig` 期间要小心地一步步进行。如果你对某个选项不确定,不要去动它。或者更好的方法是使用我们拷贝的当前运行的内核的配置文件(因为我们知道它可以工作)。一旦你遍历完了整个配置列表(它非常长),你就准备好开始编译了。
### 编译和安装
现在是时候去实际地编译内核了。第一步是使用 `make` 命令去编译。调用 `make` 命令,然后回答必要的问题(图 2。这些问题取决于你将升级的现有内核以及升级后的内核。相信我将会有非常多的问题要回答因此你得预留大量的时间。
![make][6]
*图 2回答 `make` 命令的问题*
回答了长篇累牍的问题之后,你就可以用如下的命令安装那些之前启用的模块:
```
make modules_install
```
这个命令同样将耗费一些时间,所以要么坐下来看着编译输出,要么去做些其他事(因为编译期间不需要你的输入)。很可能你会想去干点别的(除非你真的喜欢看着终端界面上飞舞而过的输出)。
现在我们使用这个命令来安装内核:
```
sudo make install
```
又一次,这是一个将要耗费大量可观时间的命令。事实上,`make install` 命令将比 `make modules_install` 命令花费更多的时间。去享用午餐,配置一个路由器,将 Linux 安装在一些服务器上,或者小睡一会儿吧。
### 启用内核作为引导
一旦 `make install` 命令完成了,就是时候将内核启用来作为引导。使用这个命令来实现:
```
sudo update-initramfs -c -k 4.17-rc2
```
当然,你需要将上述内核版本号替换成你编译完的版本。当命令执行完毕后,使用如下命令来更新 grub
```
sudo update-grub
```
现在你可以重启系统并且选择新安装的内核了。
### 恭喜!
你已经编译了一个 Linux 内核!这是一项要耗费一些时间的活动;但是,最终你的 Linux 发行版将拥有一个定制的内核,同时你也将拥有一项被许多 Linux 管理员所倾向忽视的重要技能。
可以通过 Linux 基金会和 edX 提供的免费课程 [“Introduction to Linux”][7] 来学习更多的 Linux 知识。
@ -121,7 +120,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/4/how-compile-linux-kernel-
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[icecoobe](https://github.com/icecoobe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -2,7 +2,8 @@
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/Font-Finder-720x340.png)
Font Finder 是旧的 [Typecatcher][1] 的 Rust 实现,用于从 [Google 的字体存档][2]中轻松搜索和安装 Google Web 字体。它可以帮助你在 Linux 桌面上安装数百种免费和开源字体。如果你正在为你的 Web 项目、应用以及其他任何地方寻找好看的字体Font Finder 可以轻松地为你提供。它是用 Rust 编程语言编写的自由、开源的 GTK3 应用程序。与使用 Python 编写的 Typecatcher 不同Font Finder 可以按类别过滤字体,没有 Python 运行时依赖,并且有更好的性能和更低的资源消耗。
在这个简短的教程中,我们将看到如何在 Linux 中安装和使用 Font Finder。
@ -11,25 +12,25 @@
由于 Font Finder 是使用 Rust 语言编写的,因此你需要像下面描述的那样先在系统中安装 Rust。
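如果系统里还没有 Rust一般可以用 Rust 官方提供的 rustup 脚本一步装好(下面是 Rust 官网给出的标准安装命令):
```
$ curl https://sh.rustup.rs -sSf | sh
```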
安装 Rust 后,运行以下命令安装 Font Finder
```
$ cargo install fontfinder
```
Font Finder 也可以从 [flatpak app][3] 安装。首先在你的系统中安装 Flatpak如下面的链接所述。
然后,使用命令安装 Font Finder
```
$ flatpak install flathub io.github.mmstick.FontFinder
```
### 在 Linux 中使用 Font Finder 搜索和安装 Google Web 字体
你可以从程序启动器启动 Font Finder也可以运行以下命令启动它。
```
$ flatpak run io.github.mmstick.FontFinder
```
这是 Font Finder 默认界面的样子。
@ -42,7 +43,7 @@ $ flatpak run io.github.mmstick.FontFinder
![][6]
要安装字体,只需选择它并点击顶部的 “Install” 按钮即可。
![][7]
@ -50,7 +51,7 @@ $ flatpak run io.github.mmstick.FontFinder
![][8]
同样,要删除字体,只需从 Font Finder 面板中选择它并单击 “Uninstall” 按钮。就这么简单!
左上角的设置按钮(齿轮按钮)提供了切换到暗色预览的选项。
@ -62,8 +63,6 @@ $ flatpak run io.github.mmstick.FontFinder
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/font-finder-easily-search-and-install-google-web-fonts-in-linux/
@ -71,7 +70,7 @@ via: https://www.ostechnix.com/font-finder-easily-search-and-install-google-web-
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -7,97 +7,98 @@ Project Atomic 通过他们在 Open Container InitiativeOCI上的努力创
Buildah 处理构建容器镜像时无需安装完整的容器运行时或守护进程。这对建立容器的持续集成和持续交付管道尤其有用。
Buildah 使容器的文件系统可以直接供构建主机使用。这意味着构建工具在主机上可用就行,而不需要在容器镜像中可用,从而使构建更快速、镜像更小、更安全。Buildah 有 CentOS、Fedora 和 Debian 的软件包。
### 安装 Buildah
从 Fedora 26 开始Buildah 可以使用 `dnf` 进行安装。
```
$ sudo dnf install buildah -y
```
`buildah` 的当前版本为 0.16,可以通过以下命令显示。
```
$ buildah --version
```
### 基本命令
构建容器镜像的第一步是获取基础镜像,这是通过 Dockerfile 中的 `FROM` 语句完成的。Buildah 以类似的方式处理这个。
```
$ sudo buildah from fedora
```
该命令将拉取 Fedora 的基础镜像并存储在主机上。通过执行以下操作可以检查主机上可用的镜像。
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 20:51 234.7 MB
```
在拉取基础镜像后,就有了一个该镜像的运行容器实例,这是一个“工作容器”。
以下命令显示正在运行的容器。
```
$ sudo buildah containers
CONTAINER ID BUILDER IMAGE ID IMAGE NAME
CONTAINER NAME
6112db586ab9 * 9110ae7f579f docker.io/library/fedora:latest fedora-working-container
```
Buildah 还提供了一个非常有用的命令来停止和删除当前正在运行的所有容器。
```
$ sudo buildah rm --all
```
完整的命令列表可以使用 `--help` 选项查看。
```
$ buildah --help
```
### 构建一个 Apache Web 服务器容器镜像
让我们看看如何使用 Buildah 在 Fedora 基础镜像上安装 Apache Web 服务器,然后复制一个可供服务的自定义 `index.html`
首先让我们创建自定义的 `index.html`
```
$ echo "Hello Fedora Magazine !!!" > index.html
```
然后在正在运行的容器中安装 httpd 包。
```
$ sudo buildah from fedora
$ sudo buildah run fedora-working-container dnf install httpd -y
```
让我们将 `index.html` 复制到 `/var/www/html/`
```
$ sudo buildah copy fedora-working-container index.html /var/www/html/index.html
```
然后配置容器入口点以启动 httpd。
```
$ sudo buildah config --entrypoint "/usr/sbin/httpd -DFOREGROUND" fedora-working-container
```
现在,为了使“工作容器”可用,`commit` 命令可以将容器保存为镜像。
```
$ sudo buildah commit fedora-working-container hello-fedora-magazine
```
hello-fedora-magazine 镜像现在可用了,并且可以推送到仓库以供使用(推送示例见下面的镜像列表之后)。
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED
@ -106,15 +107,13 @@ AT SIZE
Mar 7, 2018 22:51 234.7 MB
49bd5ec5be71 docker.io/library/hello-fedora-magazine:latest
Apr 27, 2018 11:01 427.7 MB
```
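比如,要把这个镜像推送到某个镜像仓库,大致可以这样做(`push` 是 Buildah 自带的子命令,但这里的仓库地址只是一个假设的示例):
```
$ sudo buildah push hello-fedora-magazine docker://registry.example.com/hello-fedora-magazine:latest
```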
通过运行以下步骤,还可以使用 Buildah 来测试此镜像。
```
$ sudo buildah from --name=hello-magazine docker.io/library/hello-fedora-magazine
$ sudo buildah run hello-magazine
```
访问 <http://localhost> 将显示 “Hello Fedora Magazine !!!”
@ -127,7 +126,7 @@ via: https://fedoramagazine.org/daemon-less-container-management-buildah/
作者:[Ashutosh Sudhakar Bhakare][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -2,72 +2,69 @@
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/Preload-720x340.png)
大多数 Linux 发行版在默认配置下已经足够快了。但是,我们仍然可以借助一些额外的应用程序和方法让它们启动更快一点。其中一个可用的这种应用程序就是 Preload。它监视用户使用频率比较高的应用程序并将它们添加到内存中这样就比一般的方式加载更快一点。因为正如你所知道的内存的读取速度远远快于硬盘。Preload 以守护进程的方式在后台中运行,并记录用户使用较为频繁的程序的文件使用相关的统计数据。然后,它将这些二进制文件及它们的依赖项加载进内存,以改善应用程序的加载时间。简而言之,一旦安装了 Preload你使用较为频繁的应用程序将可能加载得更快。
在这篇详细的教程中,我们将去了解如何安装和使用 Preload以改善应用程序在 Linux 中的启动时间。
### 在 Linux 中使用 Preload 改善应用程序启动时间
Preload 可以在 [AUR][1] 上找到。因此,你可以使用 AUR 助手程序在任何基于 Arch 的系统上安装它比如Antergos、Manjaro Linux。
使用 [Pacaur][2]
```
$ pacaur -S preload
```
使用 [Packer][3]
```
$ packer -S preload
```
使用 [Trizen][4]
```
$ trizen -S preload
```
使用 [Yay][5]
```
$ yay -S preload
```
使用 [Yaourt][6]
```
$ yaourt -S preload
```
在 Debian、Ubuntu、Linux Mint 上Preload 可以在默认仓库中找到。因此,你可以像下面一样,使用 APT 包管理器去安装它。
```
$ sudo apt-get install preload
```
Preload 安装完成后重新启动你的系统。从现在开始Preload 将监视频繁使用的应用程序,并将它们的二进制文件和库添加到内存中,以使它的启动速度更快。比如,如果你经常使用 Firefox、Chrome 以及 LibreOfficePreload 将添加这些二进制文件和库到内存中,因此,这些应用程序将启动得更快。而且更好的是,它不需要做任何配置,开箱即用。但是,如果你想对它进行微调,你可以通过编辑默认的配置文件 `/etc/preload.conf` 来实现。
### Preload 并不一定适合每个人!
以下是 Preload 的一些缺点,它并不是对每个人都有帮助,在这个[跟贴][7]中有讨论到。
1. 我使用的是一个有 8GB 内存的现代系统。因此我的系统总体上来说很快。我每天只打开狂吃内存的应用程序比如Firefox、Chrome、VirtualBox、Gimp 等等)一到两次,并且它们始终处于打开状态,因此,它们的二进制文件和库被预读到内存中,并始终整天在内存中。我一般很少去关闭和打开这些应用程序,因此,内存使用纯属浪费。
2. 如果你使用的是带有 SSD 的现代系统Preload 是绝对没用的。因为 SSD 的访问时间比起一般的硬盘来要快得多,因此,使用 Preload 是没有意义的。
3. Preload 显著影响启动时间。因为更多的应用程序要被预读到内存中,这将让你的系统启动运行时间更长。
你只有在每天都需要大量地重新加载应用程序时才能看到真正的差别。因此Preload 最适合开发人员和测试人员,他们每天都要打开和关闭应用程序好多次。
关于 Preload 更多的信息和它是如何工作的,请阅读它的作者写的完整版的 [Preload 论文][8]。
教程到此为止,希望能帮到你。后面还有更精彩的内容,请继续关注!
再见!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-improve-application-startup-time-in-linux/
@ -75,7 +72,7 @@ via: https://www.ostechnix.com/how-to-improve-application-startup-time-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,50 @@
LikeCoin一种给开放式许可的内容创作者的加密货币
======
> 在共创协议下授权作品和挣钱,这二者不再相互矛盾。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
传统观点认为,作家、摄影师、艺术家和其他创作者在<ruby>共创协议<rt>Creative Commons</rt></ruby>和其他开放许可下免费共享内容是得不到报酬的。这意味着大多数独立创作者无法通过在互联网上发布他们的作品来赚钱。而现在有了 [LikeCoin][1]:一个新的开源项目,旨在让创作者经常不得不在贡献和报酬之间妥协或牺牲的这种惯例成为过去。
LikeCoin 协议旨在通过创意内容获利,以便创作者可以专注于创造出色的内容而不是出售它。
该协议基于去中心化技术,它可以跟踪内容何时被使用,并使用 LikeCoin 这种[以太坊 ERC-20][2] 加密货币通证来奖励其创作者。它通过“<ruby>创造性证明<rt>Proof of Creativity</rt></ruby>”算法运作,该算法一部分根据作品收到多少个“喜欢”,一部分根据有多少作品衍生自它来分配 LikeCoin。由于开放式授权的内容有更多机会被重复使用并获得 LikeCoin 通证,因此该系统鼓励内容创作者在<ruby>共创协议<rt>Creative Commons</rt></ruby>许可下发布。
### 如何运作的
当通过 LikeCoin 协议上传创意片段时,内容创作者也将包括作品的元数据,包括作者信息及其 InterPlanetary 关联数据([IPLD][3])。这些数据构成了衍生作品的家族图谱;我们称作品与其衍生品之间的关系为“内容足迹”。这种结构使得内容的继承树可以很容易地追溯到原始作品。
LikeCoin 通证会使用作品的衍生历史记录的信息来将其分发给创作者。由于所有创意作品都包含作者钱包的元数据,因此相应的 LikeCoin 份额可以通过算法计算并分发。
LikeCoin 可以通过两种方式获得要么由想向内容创作者表示赞赏的个人直接支付给予要么通过创作者池Creators Pool收集观众的“赞”并根据内容的 LikeRank 分配 LikeCoin。基于 LikeCoin 协议中的内容追踪LikeRank 衡量作品的重要性或者我们在这个场景下定义的创造性。一般来说一部作品的衍生作品越多其创意内容的创新就越多内容就会有更高的 LikeRank。LikeRank 是内容创新性的量化指标。
### 如何参与?
LikeCoin 仍然非常新,我们期望在 2018 年晚些时候推出我们的第一个去中心化程序来奖励<ruby>共创协议<rt>Creative Commons</rt></ruby>的内容,并与更大的社区无缝连接。
LikeCoin 的大部分代码都可以在 [LikeCoin GitHub][4] 仓库中通过 [GPL 3.0 许可证][5]访问。由于它仍处于积极开发阶段,一些实验代码尚未公开,但我们会尽快完成。
我们欢迎功能请求、拉取请求、复刻和星标。请参与我们在 Github 上的开发,并加入我们在 [Telegram][6] 的讨论组。我们同样在 [Medium][7]、[Facebook][8]、[Twitter][9] 和我们的网站 [like.co][1] 发布关于我们进展的最新消息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/likecoin
作者:[Kin Ko][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ckxpress
[1]:https://like.co/
[2]:https://en.wikipedia.org/wiki/ERC20
[3]:https://ipld.io/
[4]:https://github.com/likecoin
[5]:https://www.gnu.org/licenses/gpl-3.0.en.html
[6]:https://t.me/likecoin
[7]:http://medium.com/likecoin
[8]:http://fb.com/likecoin.foundation
[9]:https://twitter.com/likecoin_fdn
@ -1,4 +1,4 @@
7 leadership rules for the DevOps age
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_DigitalAcumen_2.png?itok=TGeMQYs4)
@ -1,107 +0,0 @@
Translating by FelixYFZ Being open about data privacy
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)
Image by : opensource.com
Today is [Data Privacy Day][1], ("Data Protection Day" in Europe), and you might think that those of us in the open source world should think that all data should be free, [as information supposedly wants to be][2], but life's not that simple. That's for two main reasons:
1. Most of us (and not just in open source) believe there's at least some data about us that we might not feel happy sharing (I compiled an example list in [a post][3] I published a while ago).
2. Many of us working in open source actually work for commercial companies or other organisations subject to legal requirements around what they can share.
So actually, data privacy is something that's important for pretty much everybody.
It turns out that the starting point for what data people and governments believe should be available for organisations to use is somewhat different between the U.S. and Europe, with the former generally providing more latitude for entities--particularly, the more cynical might suggest, large commercial entities--to use data they've collected about us as they will. Europe, on the other hand, has historically taken a more restrictive view, and on the 25th of May, Europe's view arguably will have triumphed.
### The impact of GDPR
That's a rather sweeping statement, but the fact remains that this is the date on which a piece of legislation called the General Data Protection Regulation (GDPR), enacted by the European Union in 2016, becomes enforceable. The GDPR basically provides a stringent set of rules about how personal data can be stored, what it can be used for, who can see it, and how long it can be kept. It also describes what personal data is--and it's a pretty broad set of items, from your name and home address to your medical records and on through to your computer's IP address.
What is important about the GDPR, though, is that it doesn't apply just to European companies, but to any organisation processing data about EU citizens. If you're an Argentinian, Japanese, U.S., or Russian company and you're collecting data about an EU citizen, you're subject to it.
"Pah!" you may say,1 "I'm not based in the EU: what can they do to me?" The answer is simple: If you want to continue doing any business in the EU, you'd better comply, because if you breach GDPR rules, you could be liable for up to four percent of your global revenues. Yes, that's global revenues: not just revenues in a particular country in Europe or across the EU, not just profits, but global revenues. Those are the sorts of numbers that should lead you to talk to your legal team, who will direct you to your exec team, who will almost immediately direct you to your IT group to make sure you're compliant in pretty short order.
This may seem like it's not particularly relevant to non-EU citizens, but it is. For most companies, it's going to be simpler and more efficient to implement the same protection measures for data associated with all customers, partners, and employees they deal with, rather than just targeting specific measures at EU citizens. This has got to be a good thing.2
However, just because GDPR will soon be applied to organisations across the globe doesn't mean that everything's fine and dandy3: it's not. We give away information about ourselves all the time--and permission for companies to use it.
There's a telling (though disputed) saying: "If you're not paying, you're the product." What this suggests is that if you're not paying for a service, then somebody else is paying to use your data. Do you pay to use Facebook? Twitter? Gmail? How do you think they make their money? Well, partly through advertising, and some might argue that's a service they provide to you, but actually that's them using your data to get money from the advertisers. You're not really a customer of advertising--it's only once you buy something from the advertiser that you become their customer, but until you do, the relationship is between the the owner of the advertising platform and the advertiser.
Some of these services allow you to pay to reduce or remove advertising (Spotify is a good example), but on the other hand, advertising may be enabled even for services that you think you do pay for (Amazon is apparently working to allow adverts via Alexa, for instance). Unless we want to start paying to use all of these "free" services, we need to be aware of what we're giving up, and making some choices about what we expose and what we don't.
### Who's the customer?
There's another issue around data that should be exercising us, and it's a direct consequence of the amounts of data that are being generated. There are many organisations out there--including "public" ones like universities, hospitals, or government departments[4]--who generate enormous quantities of data all the time, and who just don't have the capacity to store it. It would be a different matter if this data didn't have long-term value, but it does, as the tools for handling Big Data are developing, and organisations are realising they can be mining this now and in the future.
The problem they face, though, as the amount of data increases and their capacity to store it fails to keep up, is what to do with it. Luckily--and I use this word with a very heavy dose of irony,[5] big corporations are stepping in to help them. "Give us your data," they say, "and we'll host it for free. We'll even let you use the data you collected when you want to!" Sounds like a great deal, yes? A fantastic example of big corporations[6] taking a philanthropic stance and helping out public organisations that have collected all of that lovely data about us.
Sadly, philanthropy isn't the only reason. These hosting deals come with a price: in exchange for agreeing to host the data, these corporations get to sell access to it to third parties. And do you think the public organisations, or those whose data is collected, will get a say in who these third parties are or how they will use it? I'll leave this as an exercise for the reader.[7]
### Open and positive
It's not all bad news, however. There's a growing "open data" movement among governments to encourage departments to make much of their data available to the public and other bodies for free. In some cases, this is being specifically legislated. Many voluntary organisations--particularly those receiving public funding--are starting to do the same. There are glimmerings of interest even from commercial organisations. What's more, there are techniques becoming available, such as those around differential privacy and multi-party computation, that are beginning to allow us to mine data across data sets without revealing too much about individuals--a computing problem that has historically been much less tractable than you might otherwise expect.
What does this all mean to us? Well, I've written before on Opensource.com about the [commonwealth of open source][4], and I'm increasingly convinced that we need to look beyond just software to other areas: hardware, organisations, and, relevant to this discussion, data. Let's imagine that you're a company (A) that provides a service to another company, a customer (B).[8] There are four different types of data in play:
1. Data that's fully open: visible to A, B, and the rest of the world
2. Data that's known, shared, and confidential: visible to A and B, but nobody else
3. Data that's company-confidential: visible to A, but not B
4. Data that's customer-confidential: visible to B, but not A
First of all, maybe we should be a bit more open about data and default to putting it into bucket 1. That data--on self-driving cars, voice recognition, mineral deposits, demographic statistics--could be enormously useful if it were available to everyone.[9] Also, wouldn't it be great if we could find ways to make the data in buckets 2, 3, and 4--or at least some of it--available in bucket 1, whilst still keeping the details confidential? That's the hope for some of these new techniques being researched. They're a way off, though, so don't get too excited, and in the meantime, start thinking about making more of your data open by default.
### Some concrete steps
So, what can we do around data privacy and being open? Here are a few concrete steps that occurred to me: please use the comments to contribute more.
* Check to see whether your organisation is taking GDPR seriously. If it isn't, push for it.
* Default to encrypting sensitive data (or hashing where appropriate), and deleting when it's no longer required--there's really no excuse for data to be in the clear to these days except for when it's actually being processed.
* Consider what information you disclose when you sign up to services, particularly social media.
* Discuss this with your non-technical friends.
* Educate your children, your friends' children, and their friends. Better yet, go and talk to their teachers about it and present something in their schools.
* Encourage the organisations you work for, volunteer for, or interact with to make data open by default. Rather than thinking, "why should I make this public?" start with "why shouldn't I make this public?"
* Try accessing some of the open data sources out there. Mine it, create apps that use it, perform statistical analyses, draw pretty graphs,[10] make interesting music, but consider doing something with it. Tell the organisations that sourced it, thank them, and encourage them to do more.
1. Though you probably won't, I admit.
2. Assuming that you believe that your personal data should be protected.
3. If you're wondering what "dandy" means, you're not alone at this point.
4. Exactly how public these institutions seem to you will probably depend on where you live: [YMMV][5].
5. And given that I'm British, that's a really very, very heavy dose.
6. And they're likely to be big corporations: nobody else can afford all of that storage and the infrastructure to keep it available.
7. No. The answer's "no."
8. Although the example works for people, too. Oh, look: A could be Alice, B could be Bob…
9. Not that we should be exposing personal data or data that actually needs to be confidential, of course--not that type of data.
10. A friend of mine decided that it always seemed to rain when she picked her children up from school, so to avoid confirmation bias, she accessed rainfall information across the school year and created graphs that she shared on social media.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/being-open-about-data-privacy
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036

View File

@ -0,0 +1,137 @@
A year as Android Engineer
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*tqshw1o4JZZlA1HW3Cki1Q.png)
>Awesome drawing by [Miquel Beltran][0]
Two years ago I started my career in tech. I started as a QA Tester and then transitioned into a developer role a year later. Not without a lot of effort and a lot of personal time invested.
You can find that part of the story in this post about [how I switched careers from Biology to tech][1] and how I [learned Android for a year][2]. Today I want to talk about how I started my first role as an Android developer, how I switched companies, and how my first year as an Android Engineer has been overall.
### My first role
My first role as Android developer started out just a year ago. The company I was working at provided me with the opportunity to transition from QA to Android developer by dedicating half of the time to each role.
This transition was thanks to the time I invested learning Android on evenings and weekends. I went through the [Android Basics Nanodegree][3], the [Android Developer Nanodegree][4] and as well I got the [Google Developers Certification][5]. [That part of the story is explained in more detail here.][6]
After two months I switched to full time when they hired another QA. Here's where all the challenges began!
Transitioning someone into a developer role is a lot harder than just providing them with a laptop and a git account. Here I explain some of the roadblocks I hit during that time:
#### Lack of expectations
The first problem I faced was not knowing the expectations the company had of me. My thought was that they expected me to deliver from the very first day, probably not like my experienced colleagues, but to deliver by doing small tasks. This feeling caused me a lot of pressure. Without clear goals, I was constantly thinking I wasn't good enough and that I was an impostor.
#### Lack of mentorship
There was no concept of mentorship in the company, and the environment didnt allow us to work together. We barely did pair-programming, because there was always a deadline and the company wanted us to keep delivering. Luckily, my colleagues were always willing to help! They sat with me whenever I got stuck or asked for help.
#### Lack of feedback
I never got any feedback during that time. What was I doing well or badly? What could I improve? I didn't know, since I didnt have anyone to report to.
#### Lack of learning culture
I think that in order to keep up to date we need to continue learning by reading blog posts, watching talks, attending conferences, trying new things, etc. The company didnt offer learning hours during working time, which is unfortunately quite common, as other devs told me. Without dedicated learning time, I felt I wasn't entitled to spend even 10 minutes reading a blog post I found interesting and relevant to my job.
The problem was not only the lack of an explicit learning time allowance, but also that when I requested it, it got denied.
An example of that occurred when I finished my tasks for the sprint and we were already at the end of it, so I asked if I could spend the rest of the day learning Kotlin. This request got denied.
Another case was when I requested to attend an Android conference, and then I was asked to take days from my paid time off.
#### Impostor syndrome
The lack of expectations, the lack of feedback, and the lack of a learning culture in the company made the first 9 months of my developer career even more challenging. I have the feeling that it all contributed to increasing my impostor syndrome.
One example of it was opening and reviewing pull requests. Sometimes I'd ask a colleague to check my code privately, rather than opening a pull request, to avoid showing my code to everyone.
Other times, when I was reviewing, I would spend minutes staring at the "approve" button, worried about approving something that another colleague would have considered wrong.
And when I didn't agree with something, I never spoke up loudly enough, worried about a backlash due to my lack of knowledge.
> Sometimes Id ask a colleague to check my code privately […] to avoid showing my code to everyone.
* * *
### New company, new challenges
Later on, a new opportunity came my way: I was invited to the hiring process for a Junior Android Engineer position at [Babbel][7], thanks to a friend who had worked with me in the past.
I met the team while volunteering at a local meet-up that was hosted at their offices. That helped me a lot when deciding to apply. I loved the company's motto: learning for all. Also, everyone was very friendly and looked happy working there! But I didn't apply straight away, because why would I apply if I thought I wasn't good enough?
Luckily, my friend and my partner pushed me to do it. They gave me the strength I needed to send my CV. Shortly after, I got into the interview process. It was fairly simple: I had to do a coding challenge in the form of a small app, followed by a technical interview with the team and a team-fit interview with the hiring manager.
#### Hiring process
I spent the weekend on the coding challenge and sent it in on Monday. Soon after, I got invited to an on-site interview.
The technical interview was about the coding challenge itself: we talked about good and bad practices on Android, why I implemented things a certain way, how it could be improved, etc. It was followed by a short team-fit interview with the hiring manager, where we talked about challenges I had faced, how I solved those problems, and so on.
They offered me the job and I accepted the offer!
In my first year working as an Android developer, I spent 9 months at one company and the next 3 with my current employer.
#### Learning environment
One big change for me is having 1:1 meetings with my Engineering Manager every two weeks. That way, it's clear to me what our expectations are.
We are getting constant feedback and ideas on what to improve and how to help and be helped. Besides the internal trainings they offer, I also have a weekly learning time allowance to learn anything I want. So far, I've been using it to improve my Kotlin and RxJava knowledge.
We also do pair-programming almost daily. There's always paper and pens near my desk to sketch ideas. And I keep a second chair by my side so my colleagues can sit with me :-)
However, I still struggle with the impostor syndrome.
#### Still the Impostor syndrome
I still struggle with it. For example, when doing pair-programming, if we reach a topic I don't quite understand, even when my colleagues have the patience to explain it to me many times, there are times I just can't get it.
After the second or third time, I start feeling a big pressure on my chest. How come I don't get it? Why is it so difficult for me to understand? This situation causes me anxiety.
I realized I need to accept that I might not understand a given topic but that getting the idea is the first step! Sometimes we just require more time and practice so it finally "compiles in our brains" :-)
For example, I used to struggle with Java interfaces vs. abstract classes; I just couldn't understand them completely, no matter how many examples I saw. But then I started using them, and even if I could not explain how they worked, I knew how and when to use them.
#### Confidence
The learning environment in my current company helps me build confidence. Even though I've been asking a lot of questions, there's always room for more!
Having less experience doesn't mean your opinions will be less valued. For example, if a proposed solution seems too complex, I will challenge the authors to write it in a clearer way. Also, I provide a different set of experiences and points of view, which has been helpful so far for polishing the app's user experience.
### To improve
The engineer role isn't just coding; it covers a broad range of skills. I am still at the beginning of the journey, and on my way to mastering it, I want to focus on the following ideas:
* Communication: as English isn't my first language, sometimes I struggle to convey an idea, which is essential for my job. I can work on that by writing, reading, and talking more.
* Give constructive feedback: I want to give meaningful feedback to my colleagues so they can grow with me as well.
* Be proud of my achievements: I need to create a list to track all kinds of achievements, small or big, and my overall progress, so I can look back and feel good when I struggle.
* Not obsessing over what I don't know: hard to do when there are so many new things coming up, so keeping focused on the essentials, and on what's required for the project at hand, is important.
* Share more knowledge with my colleagues! Being a junior doesn't mean I don't have anything to share! I need to keep sharing articles and talks I find interesting. I know my colleagues appreciate that.
* Be patient and constantly learn: keep learning as I am doing, but be more patient with myself.
* Self-care: take breaks whenever needed and don't be hard on myself. Relaxing is also productive.
--------------------------------------------------------------------------------
via: https://proandroiddev.com/a-year-as-android-engineer-55e2a428dfc8
作者:[Lara Martín][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://proandroiddev.com/@laramartin
[0]:https://medium.com/@Miqubel
[1]:https://medium.com/@laramartin/how-i-took-my-first-step-in-it-6e9233c4684d
[2]:https://medium.com/udacity/a-year-of-android-ffba9f3e40b6
[3]:https://de.udacity.com/course/android-basics-nanodegree-by-google--nd803
[4]:https://de.udacity.com/course/android-developer-nanodegree-by-google--nd801
[5]:https://developers.google.com/training/certification/
[6]:https://medium.com/udacity/a-year-of-android-ffba9f3e40b6
[7]:http://babbel.com/

View File

@ -0,0 +1,75 @@
Linux vs. Unix: What's the difference?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_twoforward.png?itok=exkV49ts)
If you are a software developer in your 20s or 30s, you've grown up in a world dominated by Linux. It has been a significant player in the data center for decades, and while it's hard to find definitive operating system market share reports, Linux's share of data center operating systems could be as high as 70%, with Windows variants carrying nearly all the remaining percentage. Developers using any major public cloud can expect the target system will run Linux. Evidence that Linux is everywhere has grown in recent years when you add in Android and Linux-based embedded systems in smartphones, TVs, automobiles, and many other devices.
Even so, most software developers, even those who have grown up during this venerable "Linux revolution," have at least heard of Unix. It sounds similar to Linux, and you've probably heard people use these terms interchangeably. Or maybe you've heard Linux called a "Unix-like" operating system.
So, what is this Unix? The caricatures speak of wizard-like "graybeards" sitting behind glowing green screens, writing C code and shell scripts, powered by old-fashioned, drip-brewed coffee. But Unix has a much richer history beyond those bearded C programmers from the 1970s. While articles detailing the history of Unix and "Unix vs. Linux" comparisons abound, this article will offer a high-level background and a list of major differences between these complementary worlds.
### Unix's beginnings
The history of Unix begins at AT&T Bell Labs in the late 1960s with a small team of programmers looking to write a multi-tasking, multi-user operating system for the PDP-7. Two of the most notable members of this team at the Bell Labs research facility were Ken Thompson and Dennis Ritchie. While many of Unix's concepts were derivative of its predecessor ([Multics][1]), the Unix team's decision early in the 1970s to rewrite this small operating system in the C language is what separated Unix from all others. At the time, operating systems were rarely, if ever, portable. Instead, by nature of their design and low-level source language, operating systems were tightly linked to the hardware platform for which they had been authored. By refactoring Unix on the C programming language, Unix could now be ported to many hardware architectures.
In addition to this new portability, which allowed Unix to quickly expand beyond Bell Labs to other research, academic, and even commercial uses, several of the operating system's key design tenets were attractive to users and programmers. For one, Ken Thompson's [Unix philosophy][2] became a powerful model of modular software design and computing. The Unix philosophy recommended utilizing small, purpose-built programs in combination to do complex overall tasks. Since Unix was designed around files and pipes, this model of "piping" inputs and outputs of programs together into a linear set of operations on the input is still in vogue today. In fact, the current cloud functions-as-a-service (FaaS)/serverless computing model owes much of its heritage to the Unix philosophy.
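As a small illustrative sketch of that piping model (the log path here is hypothetical), several single-purpose programs can be chained into one larger task:

```
# Each program does one job; pipes chain them together:
# extract the client IP (field 1), count unique occurrences,
# sort by frequency, and show the ten busiest clients.
awk '{print $1}' /var/log/access.log | sort | uniq -c | sort -rn | head -n 10
```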
### Rapid growth and competition
Through the late 1970s and 80s, Unix became the root of a family tree that expanded across research, academia, and a growing commercial Unix operating system business. Unix was not open source software, and the Unix source code was licensable via agreements with its owner, AT&T. The first known software license was sold to the University of Illinois in 1975.
Unix grew quickly in academia, with Berkeley becoming a significant center of activity, given Ken Thompson's sabbatical there in the '70s. With all the activity around Unix at Berkeley, a new delivery of Unix software was born: the Berkeley Software Distribution, or BSD. Initially, BSD was not an alternative to AT&T's Unix, but an add-on with additional software and capabilities. By the time 2BSD (the Second Berkeley Software Distribution) arrived in 1979, Bill Joy, a Berkeley grad student, had added now-famous programs such as `vi` and the C shell (/bin/csh).
In addition to BSD, which became one of the most popular branches of the Unix family, Unix's commercial offerings exploded through the 1980s and into the '90s with names like HP-UX, IBM's AIX, Sun's Solaris, Sequent, and Xenix. As the branches grew from the original root, the "[Unix wars][3]" began, and standardization became a new focus for the community. The POSIX standard was born in 1988, as well as other standardization follow-ons via The Open Group into the 1990s.
Around this time, AT&T and Sun released System V Release 4 (SVR4), which was adopted by many commercial vendors. Separately, the BSD family of operating systems had grown over the years, leading to some open source variations that were released under the now-familiar [BSD license][4]. This included FreeBSD, OpenBSD, and NetBSD, each with a slightly different target market in the Unix server industry. These Unix variants continue to have some usage today, although many have seen their server market share dwindle into the single digits (or lower). BSD may have the largest install base of any modern Unix system today. Also, every Apple Mac hardware unit shipped in recent history can be claimed by BSD, as its OS X (now macOS) operating system is a BSD derivative.
While the full history of Unix and its academic and commercial variants could take many more pages, for the sake of our article focus, let's move on to the rise of Linux.
### Enter Linux
What we call the Linux operating system today is really the combination of two efforts from the early 1990s. Richard Stallman was looking to create a truly free and open source alternative to the proprietary Unix system. He was working on the utilities and programs under the name GNU, a recursive acronym meaning "GNU's not Unix!" Although there was a kernel project underway, it turned out to be difficult going, and without a kernel, the free and open source operating system dream could not be realized. It was Linus Torvalds' work—producing a working and viable kernel that he called Linux—that brought the complete operating system to life. Given that Linus was using several GNU tools (e.g., the GNU Compiler Collection, or [GCC][5]), the marriage of the GNU tools and the Linux kernel was a perfect match.
Linux distributions came to life with the components of GNU, the Linux kernel, MIT's X Window System GUI, and other BSD components that could be used under the open source BSD license. The early popularity of distributions like Slackware and then Red Hat gave the "common PC user" of the 1990s access to the Linux operating system and, with it, many of the proprietary Unix system capabilities and utilities they used in their work or academic lives.
Because of the free and open source standing of all the Linux components, anyone could create a Linux distribution with a bit of effort, and soon the total number of distros reached into the hundreds. Today, [distrowatch.com][6] lists 312 unique Linux distributions available in some form. Of course, many developers utilize Linux either via cloud providers or by using popular free distributions like Fedora, Canonical's Ubuntu, Debian, Arch Linux, Gentoo, and many other variants. Commercial Linux offerings, which provide support on top of the free and open source components, became viable as many enterprises, including IBM, migrated from proprietary Unix to offering middleware and software solutions atop Linux. Red Hat built a model of commercial support around Red Hat Enterprise Linux, as did German provider SUSE with SUSE Linux Enterprise Server (SLES).
### Comparing Unix and Linux
So far, we've looked at the history of Unix and the rise of Linux and the GNU/Free Software Foundation underpinnings of a free and open source alternative to Unix. Let's examine the differences between these two operating systems that share much of the same heritage and many of the same goals.
From a user experience perspective, not very much is different! Much of the attraction of Linux was the operating system's availability across many hardware architectures (including the modern PC) and ability to use tools familiar to Unix system administrators and users.
Because of POSIX standards and compliance, software written on Unix could be compiled for a Linux operating system with a usually limited amount of porting effort. Shell scripts could be used directly on Linux in many cases. While some tools had slightly different flag/command-line options between Unix and Linux, many operated the same on both.
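For example (a quick sketch; exact behavior depends on the system and tool versions, and `config.txt` is just a placeholder file), POSIX-specified options tend to work everywhere, while GNU extensions common on Linux may be missing on a traditional Unix:

```
# POSIX-specified options: work on Linux and Unix alike
ps -ef | grep sshd

# GNU extension: in-place editing with sed -i is routine on Linux,
# but traditional Unix sed implementations may not support -i at all
sed -i 's/foo/bar/' config.txt
```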
One side note is that the popularity of the macOS hardware and operating system as a platform for development that mainly targets Linux may be attributed to the BSD-like macOS operating system. Many tools and scripts meant for a Linux system work easily within the macOS terminal. Many open source software components available on Linux are easily available through tools like [Homebrew][7].
The remaining differences between Linux and Unix are mainly related to the licensing model: open source vs. proprietary, licensed software. Also, the lack of a common kernel within Unix distributions has implications for software and hardware vendors. For Linux, a vendor can create a device driver for a specific hardware device and expect that, within reason, it will operate across most distributions. Because of the commercial and academic branches of the Unix tree, a vendor might have to write different drivers for variants of Unix and have licensing and other concerns related to access to an SDK or a distribution model for the software as a binary device driver across many Unix variants.
As both communities have matured over the past decade, many of the advancements in Linux have been adopted in the Unix world. Many GNU utilities were made available as add-ons for Unix systems where developers wanted features from GNU programs that aren't part of Unix. For example, IBM's AIX offered an AIX Toolbox for Linux Applications with hundreds of GNU software packages (like Bash, GCC, OpenLDAP, and many others) that could be added to an AIX installation to ease the transition between Linux and Unix-based AIX systems.
Proprietary Unix is still alive and well and, with many major vendors promising support for their current releases well into the 2020s, it goes without saying that Unix will be around for the foreseeable future. Also, the BSD branch of the Unix tree is open source, and NetBSD, OpenBSD, and FreeBSD all have strong user bases and open source communities that may not be as visible or active as Linux's, but are holding their own in recent server share reports, coming in well above the proprietary Unix numbers in areas like web serving.
Where Linux has shown a significant advantage over proprietary Unix is in its availability across a vast number of hardware platforms and devices. The Raspberry Pi, popular with hobbyists and enthusiasts, is Linux-driven and has opened the door for an entire spectrum of IoT devices running Linux. We've already mentioned Android devices, autos (with Automotive Grade Linux), and smart TVs, where Linux has large market share. Every cloud provider on the planet offers virtual servers running Linux, and many of today's most popular cloud-native stacks are Linux-based, whether you're talking about container runtimes or Kubernetes or many of the serverless platforms that are gaining popularity.
One of the most revealing representations of Linux's ascendancy is Microsoft's transformation in recent years. If you told software developers a decade ago that the Windows operating system would "run Linux" in 2016, most of them would have laughed hysterically. But the existence and popularity of the Windows Subsystem for Linux (WSL), as well as more recently announced capabilities like the Windows port of Docker, including LCOW (Linux containers on Windows) support, are evidence of the impact that Linux has had—and clearly will continue to have—across the software world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/differences-between-linux-and-unix
作者:[Phil Estes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/estesp
[1]:https://en.wikipedia.org/wiki/Multics
[2]:https://en.wikipedia.org/wiki/Unix_philosophy
[3]:https://en.wikipedia.org/wiki/Unix_wars
[4]:https://en.wikipedia.org/wiki/BSD_licenses
[5]:https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[6]:https://distrowatch.com/
[7]:https://brew.sh/

View File

@ -0,0 +1,175 @@
The Best Linux Tools for Teachers and Students
======
Linux is a platform ready for everyone. If you have a niche, Linux is ready to meet or exceed the needs of said niche. One such niche is education. If you are a teacher or a student, Linux is ready to help you navigate the waters of nearly any level of the educational system. From study aids, to writing papers, to managing classes, to running an entire institution, Linux has you covered.
If youre unsure how, let me introduce you to a few tools Linux has at the ready. Some of these tools require little to no learning curve, whereas others require a full-blown system administrator to install, set up, and manage. Well start with the simple and make our way to the complex.
### Study aids
Everyone studies a bit differently and every class requires a different type and level of studying. Fortunately, Linux has plenty of study aids. Lets take a look at a few examples:
Flash Cards ─ [KWordQuiz][1] (Figure 1) is one of the many flashcard applications available for the Linux platform. KWordQuiz uses the kvtml file format and you can download plenty of pre-made, contributed files to use [here][2]. KWordQuiz is part of the KDE desktop environment, but can be installed on other desktops (KDE dependencies will be installed alongside the flashcard app).
![](https://lcom.static.linuxfound.org/images/stories/41373/kwordquiz-sm.png)
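On a Debian- or Ubuntu-based distribution, installing it is typically a one-liner (a sketch; the package name and package manager will vary on other distributions):

```
# Pulls in the flashcard app plus the KDE libraries it depends on
sudo apt-get install kwordquiz
```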
### Language tools
Thanks to an ever-shrinking world, foreign language has become a crucial element of education. Youll find plenty of language tools, including [Kiten][3] (Figure 2), the kanji browser for the KDE desktop.
![](https://lcom.static.linuxfound.org/images/stories/41373/kiten.jpg)
If Japanese isnt your language, you could try [Jargon Informatique][4]. This dictionary is entirely in French, so if youre new to the language, you might want to stick with something like [Google Translate][5].
### Writing Aids / Note Taking
Linux has everything you need to keep notes on a subject and write those term papers. Lets start with taking notes. If you're familiar with Microsoft OneNote, you'll love [BasKet Note Pads][6]. With this app, you can create baskets for subjects and add just about anything ─ notes, links, images, cross references (to other baskets ─ Figure 3), app launchers, content loaded from files, and more.
![](https://lcom.static.linuxfound.org/images/stories/41373/basket.jpg)
You can create baskets that are free-form, so elements can be moved around to suit your needs. If you prefer a more ordered feel, create a columned basket to keep those notes walled in.
Of course, the mother of all writing aids for Linux would be [LibreOffice][7]. The default office suite on most Linux distributions, LibreOffice has your text documents, spreadsheets, presentations, databases, formulas, and drawings covered.
The one caveat to using LibreOffice in an educational environment is that you will most likely have to save your documents in the MS Office format.
### Education-specific distribution
With all of this said about Linux applications geared toward the student, it might behoove you to take a look at one of the distributions created specifically for education. The best in breed is [Edubuntu][8]. This grassroots Linux distribution aims at getting Linux into schools, homes, and communities. Edubuntu uses the default Ubuntu desktop (the Unity shell) and adds the following software:
+ KDE Education Suite
+ GCompris
+ Celestia
+ Tux4Kids
+ Epoptes
+ LTSP
+ GBrainy
+ and much more.
Edubuntu isnt the only game in town. If youd rather test other education-specific Linux distributions, heres the short list:
+ Debian-Edu
+ Fedora Education Spin
+ Guadalinux-Edu
+ OpenSuse-Edu
+ Qimo for Kids
+ Uberstudent.
### Classroom/institutional administration
This is where the Linux platform really shines. There are a number of tools geared specifically toward administration. Lets first look at tools specific to the classroom.
[iTalc][9] is a powerful didactic environment for the classroom. With this tool, teachers can view and control students' desktops (supporting Linux and Windows). The iTalc system allows teachers to view whats happening on a student's desktop, take control of their desktop, lock their desktop, show demonstrations on desktops, power desktops on/off, send text messages to students' desktops, and much more.
[aTutor][10] (Figure 4) is an open source Learning Management System (LMS) focused on developing online courses and e-learning content. Where aTutor really shines is the creation and management of online tests and quizzes. Of course, aTutor is not limited to testing purposes. With this powerful software, students and teachers can enjoy:
* Social networking
* Profiles
* Messaging
* Adaptive navigation
* Work groups
* File storage
* Group blogs
* and much more.
![](https://lcom.static.linuxfound.org/images/stories/41373/atutor.png)
Course material is easy to create and deploy (you can even assign tests/quizzes to specific study groups).
[Moodle][11] is one of the most widely used educational management software titles available. With Moodle you can manage, teach, learn, and even participate in your childs education. This powerhouse software offers collaborative tools for teachers and students, exams, calendars, forums, file management, course management (Figure 5), notifications, progress tracking, mass enrollment, bulk course creation, attendance, and much more.
![](https://lcom.static.linuxfound.org/images/stories/41373/moodle.png)
[OpenSIS][12] stands for Open Source Student Information System and does a great job of managing your educational institution. There is a free community edition, but even with the paid version you can look forward to reducing ownership costs for a school district by up to 75 percent (when compared to proprietary solutions).
OpenSIS includes the following features/modules:
* Attendance (Figure 6)
* Contact information
* Student demographics
* Gradebook
* Scheduling
* Health records
* Report cards.
![](https://lcom.static.linuxfound.org/images/stories/41373/opensis.png)
There are four editions of OpenSIS. Check out the feature comparison matrix [here][13].
[vufind][14] is an outstanding library management system that allows students and teachers to easily browse for library resources such as:
* Catalog Records
* Locally Cached Journals
* Digital Library Items
* Institutional Repository
* Institutional Bibliography
* Other Library Collections and Resources.
The vufind system supports user logins, and authenticated users can save resources for quick recall and enjoy “more like this” results.
This list just barely scratches the surface of what is available for Linux in the educational arena. And, as you might expect, each tool is highly customizable and open source ─ so if the software doesnt precisely meet your needs, you are free (in most cases) to modify the source and change it.
Linux and education go hand in hand. Whether you are a teacher, a student, or an administrator, youll find plenty of tools to help make the institution of education open, flexible, and powerful.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/best-linux-tools-teachers-and-students
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://edu.kde.org/kwordquiz/
[2]:http://kde-files.org/index.php?xcontentmode=694
[3]:https://edu.kde.org/kiten/
[4]:http://jargon.asher256.com/index.php
[5]:https://translate.google.com/
[6]:http://basket.kde.org/
[7]:http://www.libreoffice.com
[8]:http://www.edubuntu.org/
[9]:http://italc.sourceforge.net/
[10]:http://www.atutor.ca/
[11]:https://moodle.org/
[12]:http://www.opensis.com/
[13]:http://www.opensis.com/compare_edition.php
[14]:http://vufind-org.github.io/vufind/

View File

@ -1,198 +0,0 @@
translating by wyxplus
4 Tools for Network Snooping on Linux
======
Computer networking data has to be exposed, because packets can't travel blindfolded, so join us as we use `whois`, `dig`, `nmcli`, and `nmap` to snoop networks.
Do be polite and don't run `nmap` on any network but your own, because probing other people's networks can be interpreted as a hostile act.
### Thin and Thick whois
You may have noticed that our beloved old `whois` command doesn't seem to give the level of detail that it used to. Check out this example for Linux.com:
```
$ whois linux.com
Domain Name: LINUX.COM
Registry Domain ID: 4245540_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.namecheap.com
Registrar URL: http://www.namecheap.com
Updated Date: 2018-01-10T12:26:50Z
Creation Date: 1994-06-02T04:00:00Z
Registry Expiry Date: 2018-06-01T04:00:00Z
Registrar: NameCheap Inc.
Registrar IANA ID: 1068
Registrar Abuse Contact Email: abuse@namecheap.com
Registrar Abuse Contact Phone: +1.6613102107
Domain Status: ok https://icann.org/epp#ok
Name Server: NS5.DNSMADEEASY.COM
Name Server: NS6.DNSMADEEASY.COM
Name Server: NS7.DNSMADEEASY.COM
DNSSEC: unsigned
[...]
```
There is quite a bit more, mainly annoying legalese. But where is the contact information? It is sitting on whois.namecheap.com (see the third line of output above):
```
$ whois -h whois.namecheap.com linux.com
```
I won't print the output here, as it is very long, containing the Registrant, Admin, and Tech contact information. So what's the deal, Lucille? Some registries, such as .com and .net, are "thin" registries, storing a limited subset of domain data. To get complete information, use the `-h`, or `--host`, option to get the complete dump from the domain's `Registrar WHOIS Server`.
Most of the other top-level domains are thick registries, such as .info. Try `whois blockchain.info` to see an example.
Want to get rid of the obnoxious legalese? Use the `-H` option.
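For example (a quick sketch; support for the flag depends on your whois client version):

```
$ whois -H linux.com
```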
### Digging DNS
Use the `dig` command to compare the results from different name servers to check for stale entries. DNS records are cached all over the place, and different servers have different refresh intervals. This is the simplest usage:
```
$ dig linux.com
<<>> DiG 9.10.3-P4-Ubuntu <<>> linux.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13694
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1440
;; QUESTION SECTION:
;linux.com. IN A
;; ANSWER SECTION:
linux.com. 10800 IN A 151.101.129.5
linux.com. 10800 IN A 151.101.65.5
linux.com. 10800 IN A 151.101.1.5
linux.com. 10800 IN A 151.101.193.5
;; Query time: 92 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Tue Jan 16 15:17:04 PST 2018
;; MSG SIZE rcvd: 102
```
Take notice of the SERVER: 127.0.1.1#53(127.0.1.1) line near the end of the output. This is your default caching resolver. When the address is localhost, that means there is a DNS server installed on your machine. In my case that is Dnsmasq, which is being used by Network Manager:
```
$ ps ax|grep dnsmasq
2842 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground
--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid
--listen-address=127.0.1.1
```
The `dig` default is to return A records, which map the domain name to IPv4 addresses. IPv6 uses AAAA records:
```
$ dig linux.com AAAA
[...]
;; ANSWER SECTION:
linux.com. 60 IN AAAA 64:ff9b::9765:105
linux.com. 60 IN AAAA 64:ff9b::9765:4105
linux.com. 60 IN AAAA 64:ff9b::9765:8105
linux.com. 60 IN AAAA 64:ff9b::9765:c105
[...]
```
Check it out, Linux.com has IPv6 addresses. Very good! If your Internet service provider supports IPv6 then you can connect over IPv6. (Sadly, my overpriced mobile broadband does not.)
Suppose you make some DNS changes to your domain, or you're seeing `dig` results that don't look right. Try querying with a public DNS service, like OpenNIC:
```
$ dig @69.195.152.204 linux.com
[...]
;; Query time: 231 msec
;; SERVER: 69.195.152.204#53(69.195.152.204)
```
`dig` confirms that you're getting your lookup from 69.195.152.204. You can query all kinds of servers and compare results.
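For instance (a small sketch using two well-known public resolvers), adding `+short` trims the output down to just the answers, which makes side-by-side comparison easy:

```
$ dig @8.8.8.8 linux.com +short
$ dig @1.1.1.1 linux.com +short
```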
### Upstream Name Servers
I want to know what my upstream name servers are. To find this, I first look in `/etc/resolv.conf`:
```
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
```
Thanks, but I already knew that. Your Linux distribution may be configured differently, and you'll see your upstream servers. Let's try `nmcli`, the Network Manager command-line tool:
```
$ nmcli dev show | grep DNS
IP4.DNS[1]: 192.168.1.1
```
Now we're getting somewhere, as that is the address of my mobile hotspot, and I should have thought of that myself. I can log in to its weird little Web admin panel to see its upstream servers. A lot of consumer Internet gateways don't let you view or change these settings, so try an external service such as [What's my DNS server?][1]
### List IPv4 Addresses on your Network
Which IPv4 addresses are up and in use on your network?
```
$ nmap -sn 192.168.1.0/24
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 14:03 PST
Nmap scan report for Mobile.Hotspot (192.168.1.1)
Host is up (0.011s latency).
Nmap scan report for studio (192.168.1.2)
Host is up (0.000071s latency).
Nmap scan report for nellybly (192.168.1.3)
Host is up (0.015s latency)
Nmap done: 256 IP addresses (2 hosts up) scanned in 2.23 seconds
```
Everyone wants to scan their network for open ports. This example looks for services and their versions:
```
$ nmap -sV 192.168.1.1/24
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 16:46 PST
Nmap scan report for Mobile.Hotspot (192.168.1.1)
Host is up (0.0071s latency).
Not shown: 997 closed ports
PORT STATE SERVICE VERSION
22/tcp filtered ssh
53/tcp open domain dnsmasq 2.55
80/tcp open http GoAhead WebServer 2.5.0
Nmap scan report for studio (192.168.1.102)
Host is up (0.000087s latency).
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0)
631/tcp open ipp CUPS 2.1
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 256 IP addresses (2 hosts up) scanned in 11.65 seconds
```
These are interesting results. Let's try the same run from a different Internet account, to see if any of these services are exposed to the big bad Internet. You have a second network if you have a smartphone: there are probably apps you can download, or you can use your phone as a hotspot for your faithful Linux computer. Fetch the WAN IP address from the hotspot control panel and try again:
```
$ nmap -sV 12.34.56.78
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 17:05 PST
Nmap scan report for 12.34.56.78
Host is up (0.0061s latency).
All 1000 scanned ports on 12.34.56.78 are closed
```
That's what I like to see. Consult the fine man pages for these commands to learn more fun snooping techniques.
Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/1/4-tools-network-snooping-linux
作者:[Carla Schroder][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:http://www.whatsmydnsserver.com/
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,267 +0,0 @@
How to Install and Optimize Apache on Ubuntu
======
This is the beginning of our LAMP tutorial series: how to install the Apache web server on Ubuntu.
These instructions should work on any Ubuntu-based distro, including Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1], and even non-LTS Ubuntu releases like 17.10. They were tested and written for Ubuntu 16.04.
Apache (aka httpd) is the most popular and most widely used web server, so this should be useful for everyone.
### Before we begin installing Apache
Some requirements and notes before we begin:
* Apache may already be installed on your server, so check if it is first. You can do so with the "apachectl -V" command that outputs the Apache version you're using and some other information.
* You'll need an Ubuntu server. You can buy one from [Vultr][2], they're one of the [best and cheapest cloud hosting providers][3]. Their servers start from $2.5 per month.
* You'll need the root user or a user with sudo access. All commands below are executed by the root user, so we don't have to prepend 'sudo' to each command.
* You'll need [SSH enabled][4] if you use Ubuntu or an SSH client like [MobaXterm][5] if you use Windows.
That's most of it. Let's move onto the installation.
### Install Apache on Ubuntu
The first thing you always need to do is update Ubuntu before you do anything else. You can do so by running:
```
apt-get update && apt-get upgrade
```
Next, to install Apache, run the following command:
```
apt-get install apache2
```
If you want to, you can also install the Apache documentation and some Apache utilities. You'll need the Apache utilities for some of the modules we'll install later.
```
apt-get install apache2-doc apache2-utils
```
**And that 's it. You've successfully installed Apache.**
You'll still need to configure it.
### Configure and Optimize Apache on Ubuntu
There are various configs you can do on Apache, but the main and most common ones are explained below.
#### Check if Apache is running
By default, Apache is configured to start automatically on boot, so you don't have to enable it. You can check if it's running and other relevant information with the following command:
```
systemctl status apache2
```
[![check if apache is running][6]][6]
And you can check what version you're using with
```
apachectl -V
```
A simpler way of checking this is by visiting your server's IP address. If you get the default Apache page, then everything's working fine.
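If you're working on a headless server over SSH and don't have a browser handy, a quick sketch of the same check with curl (installed later for Apache2Buddy anyway, if you don't have it):

```
# Any HTTP response in the headers (e.g. "200 OK") means Apache is answering
curl -I http://localhost
```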
#### Update your firewall
If you use a firewall (which you should), you'll probably need to update your firewall rules and allow access to the default ports. The most common firewall used on Ubuntu is UFW, so the instructions below are for UFW.
To allow traffic through both the 80 (http) and 443 (https) ports, run the following command:
```
ufw allow 'Apache Full'
```
#### Install common Apache modules
Some modules are frequently recommended and you should install them. We'll include instructions for the most common ones:
##### Speed up your website with the PageSpeed module
The PageSpeed module will optimize and speed up your Apache server automatically.
First, go to the [PageSpeed download page][7] and choose the file you need. We're using a 64-bit Ubuntu server and we'll install the latest stable version. Download it using wget:
```
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
```
Then, install it with the following commands:
```
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install
```
Restart Apache for the changes to take effect:
```
systemctl restart apache2
```
##### Enable rewrites/redirects using the mod_rewrite module
This module is used for rewrites (redirects), as the name suggests. You'll need it if you use WordPress or any other CMS for that matter. To install it, just run:
```
a2enmod rewrite
```
And restart Apache again. You may need some extra configurations depending on what CMS you're using, if any. Google it for specific instructions for your setup.
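If you want to double-check that the module actually loaded after the restart, one quick sketch is to list Apache's loaded modules and filter for it:

```
# rewrite_module should appear in the list of loaded modules
apachectl -M | grep rewrite
```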
##### Secure your Apache with the ModSecurity module
ModSecurity is a module used for security, again, as the name suggests. It essentially acts as a web application firewall, monitoring your traffic. To install it, run the following command:
```
apt-get install libapache2-modsecurity
```
And restart Apache again:
```
systemctl restart apache2
```
ModSecurity comes with a default setup that's enough by itself, but if you want to extend it, you can use the [OWASP rule set][8].
##### Block DDoS attacks using the mod_evasive module
You can use the mod_evasive module to block and prevent DDoS attacks on your server, though it's debatable how useful it is in preventing attacks. To install it, use the following command:
```
apt-get install libapache2-mod-evasive
```
By default, mod_evasive is disabled. To enable it, edit the following file:
```
nano /etc/apache2/mods-enabled/evasive.conf
```
And uncomment all the lines (remove #) and configure it per your requirements. You can leave everything as-is if you don't know what to edit.
[![mod_evasive][9]][9]
And create a log file:
```
mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive
```
That's it. Now restart Apache for the changes to take effect:
```
systemctl restart apache2
```
There are [additional modules][10] you can install and configure, but it's all up to you and the software you're using. They're usually not required. Even the 4 modules we included are not required. If a module is required for a specific application, its documentation will probably note that.
#### Optimize Apache with the Apache2Buddy script
Apache2Buddy is a script that will automatically fine-tune your Apache configuration. The only thing you need to do is run the following command and the script does the rest automatically:
```
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
```
You may need to install curl if you don't have it already installed. Use the following command to install curl:
```
apt-get install curl
```
#### Additional configurations
There's some extra stuff you can do with Apache, but we'll leave it for another tutorial: stuff like enabling HTTP/2 support, turning KeepAlive off (or on), and tuning your Apache even more. You don't have to do any of this, but you can find tutorials online and do it if you can't wait for our tutorials.
### Create your first website with Apache
Now that we're done with all the tuning, let's move on to creating an actual website. Follow our instructions to create a simple HTML page and a virtual host that's going to run on Apache.
The first thing you need to do is create a new directory for your website. Run the following command to do so:
```
mkdir -p /var/www/example.com/public_html
```
Of course, replace example.com with your desired domain. You can get a cheap domain name from [Namecheap][11].
Don't forget to replace example.com in all of the commands below.
Next, create a simple, static web page. Create the HTML file:
```
nano /var/www/example.com/public_html/index.html
```
And paste this:
```
<html>
  <head>
    <title>Simple Page</title>
  </head>
  <body>
    <p>If you're seeing this in your browser then everything works.</p>
  </body>
</html>
```
Save and close the file.
Configure the permissions of the directory:
```
chown -R www-data:www-data /var/www/example.com
chmod -R og-r /var/www/example.com
```
Create a new virtual host for your site:
```
nano /etc/apache2/sites-available/example.com.conf
```
And paste the following:
```
<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com

    DocumentRoot /var/www/example.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```
This is a basic virtual host. You may need a more advanced .conf file depending on your setup.
Save and close the file after updating everything accordingly.
Now, enable the virtual host with the following command:
```
a2ensite example.com.conf
```
And finally, restart Apache for the changes to take effect:
```
systemctl restart apache2
```
That's it. You're done. Now you can visit example.com and view your page.
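If your domain's DNS doesn't point at the server yet, you can still test the virtual host locally with a small sketch using curl's Host header (replace example.com as before):

```
# Sends the request to the local Apache but asks for the example.com virtual host
curl -H "Host: example.com" http://127.0.0.1/
```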
--------------------------------------------------------------------------------
via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
作者:[ThisHosting][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://thishosting.rocks
[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
[2]:https://thishosting.rocks/go/vultr/
[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/
[5]:https://mobaxterm.mobatek.net/
[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg
[7]:https://www.modpagespeed.com/doc/download
[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg
[10]:https://httpd.apache.org/docs/2.4/mod/
[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names
[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png
[13]:https://www.patreon.com/thishostingrocks

View File

@ -1,3 +1,5 @@
pinewall translating
A history of low-level Linux container runtimes
======

View File

@ -0,0 +1,166 @@
Token ERC Comparison for Fungible Tokens Blockchainers
======
“The good thing about standards is that there are so many to choose from.” [_Andrew S. Tanenbaum_][1]
### Current State of Token Standards
The current state of Token standards on the Ethereum platform is surprisingly simple: the ERC-20 Token Standard is the only accepted and adopted standard (as [EIP-20][2]) for a Token interface.
Proposed in 2015, it was finally accepted at the end of 2017.
In the meantime, many Ethereum Requests for Comments (ERCs) have been proposed which address shortcomings of ERC-20, which were partly caused by changes in the Ethereum platform itself, e.g. the fix for the re-entrancy bug with [EIP-150][3]. Other ERCs propose enhancements to the ERC-20 Token model. These enhancements were identified through experience gathered from the broad adoption of the Ethereum blockchain and the ERC-20 Token standard. The actual usage of the ERC-20 Token interface resulted in new demands and requirements to address non-functional requirements like permissioning and operations.
This blog post should give a superficial, but complete, overview of all proposals for Token(-like) standards on the Ethereum platform. This comparison tries to be objective, but most certainly will fail in doing so.
### The Mother of all Token Standards: ERC-20
There are dozens of [very good][4] and detailed descriptions of the ERC-20, which will not be repeated here. Just the core concepts relevant for comparing the proposals are mentioned in this post.
#### The Withdraw Pattern
Users trying to understand the ERC-20 interface, and especially the usage pattern for _transferring_ Tokens _from_ one externally owned account (EOA), i.e. an end-user (“Alice”), to a smart contract, have a hard time getting the approve/transferFrom pattern right.
![][5]
From a software engineering perspective, this withdraw pattern is very similar to the [Hollywood principle][6] (“Dont call us, well call you!”). The idea is that the call chain is reversed: during the ERC-20 Token transfer, the Token doesnt call the contract; instead, the contract calls transferFrom on the Token.
While the Hollywood Principle is often used to implement Separation-of-Concerns (SoC), in Ethereum it is a security pattern to avoid having the Token contract to call an unknown function on an external contract. This behaviour was necessary due to the [Call Depth Attack][7] until [EIP-150][3] was activated. After this hard fork, the re-entrancy bug was not possible anymore and the withdraw pattern did not provide any more security than calling the Token directly.
But why should it be a problem now? The usage might be somewhat clumsy, but we can fix this in the DApp frontend, right?
So, lets see what happens if a user uses transfer to send Tokens to a smart contract. Alice calls transfer on the Token contract with the contract address…
**…aaaaand it's gone!**
Thats right, the Tokens are gone. Most likely, nobody will ever get the Tokens back. But Alice is not alone: as Dexaran, inventor of ERC-223, found out, about $400,000 in tokens (lets just say _a lot_, given the high volatility of ETH) have been irretrievably lost for all of us due to users accidentally sending Tokens to smart contracts.
Even if the contract developer was extremely user-friendly and altruistic, he couldnt create the contract so that it could react to getting Tokens transferred to it and, e.g., return them, as the contract will never be notified of this transfer and the event is only emitted on the Token contract.
From a software engineering perspective thats a severe shortcoming of ERC-20. If an event occurs (and for the sake of simplicity, we are now assuming Ethereum transactions are actually events), there should be a notification to the parties involved. However, there is an event, but its triggered in the Token smart contract, which the receiving contract cannot know about.
Currently, its not possible to prevent users from sending Tokens to smart contracts and losing them forever via the unintuitive transfer on the ERC-20 Token contract.
### The Empire Strikes Back: ERC-223
The first attempt at fixing the problems of ERC-20 was proposed by [Dexaran][8]. The main issue solved by this proposal is the different handling of EOA and smart contract accounts.
The compelling strategy is to reverse the calling chain (which is possible now that [EIP-150][3] is in place) and use a pre-defined callback (tokenFallback) on the receiving smart contract. If this callback is not implemented, the transfer fails (costing all gas for the sender, a common criticism of ERC-223).
![][9]
#### Pros:
* Establishes a new interface, intentionally not compliant with ERC-20 with respect to the deprecated functions
* Allows contract developers to handle incoming tokens (e.g. accept/reject) since the event pattern is followed
* Uses one transaction instead of two (transfer vs. approve/transferFrom) and thus saves gas and Blockchain storage
#### Cons:
* If tokenFallback doesnt exist, then the contract fallback function is executed; this might have unintended side-effects
* If contracts assume that transfer works with Tokens, e.g. for sending Tokens to specific contracts like multi-sig wallets, this would fail with ERC-223 Tokens, making it impossible to move them (i.e. they are lost)
### The Pragmatic Programmer: ERC-677
The [ERC-677 transferAndCall Token Standard][10] tries to marry ERC-20 and ERC-223. The idea is to introduce a transferAndCall function to the ERC-20, but keep the standard as is. ERC-223 is intentionally not completely backwards compatible, since the approve/allowance pattern is not needed anymore and was therefore removed.
The main goal of ERC-677 is backward compatibility, providing a safe way for new contracts to transfer tokens to external contracts.
![][11]
#### Pros:
* Easy to adapt for new Tokens
* Compatible to ERC-20
* Serves as an adapter for using ERC-20 Tokens safely
#### Cons:
* No real innovations. A compromise of ERC-20 and ERC-223
* Current implementation [is not finished][12]
### The Reunion: ERC-777
[ERC-777 A New Advanced Token Standard][13] was introduced to establish an evolved Token standard which learned from misconceptions like approve() with a value and the aforementioned send-tokens-to-contract issue.
Additionally, ERC-777 uses the new standard [ERC-820: Pseudo-introspection using a registry contract][14], which allows for registering meta-data for contracts to provide a simple type of introspection. This allows for backwards compatibility and other functionality extensions, depending on the ITokenRecipient returned by an EIP-820 lookup on the `to` address, and the functions implemented by the target contract.
ERC-777 adds a lot of learnings from ERC-20 usage, e.g. white-listed operators, providing Ether-compliant interfaces with send(…), and using ERC-820 to override and adapt functionality for backwards compatibility.
![][15]
#### Pros:
* Well-thought-out, evolved interface for tokens, incorporating learnings from ERC-20 usage
* Uses the new standard request ERC-820 for introspection, allowing for added functionality
* White-listed operators are very useful and are more necessary than approve/allowance, which was often left infinite
#### Cons:
* It is just getting started: a complex construction with dependent contract calls
* Dependencies raise the probability of security issues: the first security issues have been [identified (and solved)][16] not in ERC-777, but in the even newer ERC-820
### (Pure Subjective) Conclusion
For now, if you want to go with the “industry standard” you have to choose ERC-20. It is widely supported and well understood. However, it has its flaws, the biggest one being the risk of non-professional users actually losing money due to design and specification issues. ERC-223 is a very good and theoretically well-founded answer to the issues in ERC-20 and should be considered a good alternative standard. Implementing both interfaces in a new token is not complicated and allows for reduced gas usage.
A pragmatic solution to the event and money-loss problems is ERC-677; however, it doesnt offer enough innovation to establish itself as a standard. It could, however, be a good candidate for an ERC-20 2.0.
ERC-777 is an advanced token standard which should be the legitimate successor to ERC-20, it offers great concepts which are needed on the matured Ethereum platform, like white-listed operators, and allows for extension in an elegant way. Due to its complexity and dependency on other new standards, it will take time till the first ERC-777 tokens will be on the Mainnet.
### Links
[1] Security Issues with approve/transferFrom-Pattern in ERC-20: <https://drive.google.com/file/d/0ByMtMw2hul0EN3NCaVFHSFdxRzA/view>
[2] No Event Handling in ERC-20: <https://docs.google.com/document/d/1Feh5sP6oQL1-1NHi-X1dbgT3ch2WdhbXRevDN681Jv4>
[3] Statement for ERC-20 failures and history: <https://github.com/ethereum/EIPs/issues/223#issuecomment-317979258>
[4] List of differences ERC-20/223: <https://ethereum.stackexchange.com/questions/17054/erc20-vs-erc223-list-of-differences>
--------------------------------------------------------------------------------
via: http://blockchainers.org/index.php/2018/02/08/token-erc-comparison-for-fungible-tokens/
作者:[Alexander Culum][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://blockchainers.org/index.php/author/alex/
[1]:https://www.goodreads.com/quotes/589703-the-good-thing-about-standards-is-that-there-are-so
[2]:https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md
[3]:https://github.com/ethereum/EIPs/blob/master/EIPS/eip-150.md
[4]:https://medium.com/@jgm.orinoco/understanding-erc-20-token-contracts-a809a7310aa5
[5]:http://blockchainers.org/wp-content/uploads/2018/02/ERC-20-Token-Transfer-2.png
[6]:http://matthewtmead.com/blog/hollywood-principle-dont-call-us-well-call-you-4/
[7]:https://consensys.github.io/smart-contract-best-practices/known_attacks/
[8]:https://github.com/Dexaran
[9]:http://blockchainers.org/wp-content/uploads/2018/02/ERC-223-Token-Transfer-1.png
[10]:https://github.com/ethereum/EIPs/issues/677
[11]:http://blockchainers.org/wp-content/uploads/2018/02/ERC-677-Token-Transfer.png
[12]:https://github.com/ethereum/EIPs/issues/677#issuecomment-353871138
[13]:https://github.com/ethereum/EIPs/issues/777
[14]:https://github.com/ethereum/EIPs/issues/820
[15]:http://blockchainers.org/wp-content/uploads/2018/02/ERC-777-Token-Transfer.png
[16]:https://github.com/ethereum/EIPs/issues/820#issuecomment-362049573
View File
@ -0,0 +1,72 @@
Best Websites For Programmers
======
![][1]
As a programmer, you will often find yourself a permanent visitor of certain websites. These can be tutorial, reference, or forum websites. So in this article, let us have a look at the best websites for programmers.
### W3Schools
W3Schools is one of the best websites for beginners as well as experienced web developers to learn various programming languages. You can learn HTML5, CSS3, PHP, JavaScript, ASP, etc.
More importantly, the website holds a lot of resources and references for web developers.
[![w3schools logo][2]][3]
You can quickly see various keywords and what they do. The website is very interactive and it allows you to try and practice the code in an embedded editor on the website itself. The website is one of those few that you will frequently visit as a web developer.
### GeeksforGeeks
GeeksforGeeks is a website mostly focused on computer science. It has a huge collection of algorithms, solutions and programming questions.
[![geeksforgeeks programming support][4]][5]
The website also has a good stock of the most frequently asked interview questions. Since the website is more about computer science in general, you will find solutions to most programming problems in the most popular languages.
### TutorialsPoint
The de facto place for learning anything. TutorialsPoint has some of the finest and easiest tutorials that can teach you any programming language. What I really love about this website is that it is not just limited to generic programming languages.
![](http://www.theitstuff.com/wp-content/uploads/2017/12/tutorialspoint-programming-website.png)
You can find tutorials for almost all frameworks of all languages on the planet.
### StackOverflow
You probably already know that Stack Overflow is the place where programmers meet. If you ever get stuck solving some of your code, just ask a question on Stack Overflow and programmers from all over the internet will be there to help you.
[![stackoverflow linux programming website][6]][7]
The best part about Stack Overflow is that almost all questions get answered. You may well receive answers from several different programmers' points of view.
### HackerRank
HackerRank is a website where you can participate in various coding competitions and test your competitive abilities.
[![hackerrank programming forums][8]][9]There are various contests organized in various programming languages, and winning them increases your score. This score can get you into the top ranks and increase your chances of getting noticed by a software company.
### Codebeautify
Since we are programmers, beauty isn't something we usually look after. Many a time, our code can be difficult for someone else to read. Codebeautify can make your code easy to read.
![](http://www.theitstuff.com/wp-content/uploads/2017/12/code-beautify-programming-forums.png)
The website can beautify most languages. Alternatively, if you wish to make your code unreadable to others, you can do that too.
So these were some of my picks for the best websites for programmers. If you frequently visit a site that I haven't mentioned, do let me know in the comment section below.
--------------------------------------------------------------------------------
via: http://www.theitstuff.com/best-websites-programmers
作者:[Rishabh Kandari][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/12/best-websites-for-programmers.jpg
[2]:http://www.theitstuff.com/wp-content/uploads/2017/12/w3schools-logo-550x110.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/12/w3schools-logo.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/12/geeksforgeeks-programming-support-550x152.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/12/geeksforgeeks-programming-support.png
[6]:http://www.theitstuff.com/wp-content/uploads/2017/12/stackoverflow-linux-programming-website-550x178.png
[7]:http://www.theitstuff.com/wp-content/uploads/2017/12/stackoverflow-linux-programming-website.png
[8]:http://www.theitstuff.com/wp-content/uploads/2017/12/hackerrank-programming-forums-550x118.png
[9]:http://www.theitstuff.com/wp-content/uploads/2017/12/hackerrank-programming-forums.png
View File
@ -1,281 +0,0 @@
How To Archive Files And Directories In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Archive-Files-And-Directories-In-Linux-720x340.png)
In our previous tutorial, we discussed how to [**compress and decompress files using gzip and bzip2**][1] programs. In this tutorial, we are going to learn how to archive files in Linux. Aren't archiving and compressing the same? Some of you might think these terms mean the same thing, but both are completely different. Archiving is the process of combining multiple files and directories (of the same or different sizes) into one file. Compressing, on the other hand, is the process of reducing the size of a file or directory. Archiving is often used as part of system backups or when moving data from one system to another. Hope you understand the difference between archiving and compressing. Now, let us get into the topic.
### Archive files and directories
The most common programs to archive files and directories are:
1. tar
2. zip
This is a big topic. So, I am going to publish this article in two parts. In the first part, we will see how to archive files and directories using Tar command.
##### Archive files and directories using Tar command
**Tar** is a Unix command which stands for **T**ape **A**rchive. It is used to combine or store multiple files (of the same or different sizes) into a single file. There are 4 main operating modes in the tar utility:
1. **c** Create an archive from files or directories.
2. **x** Extract an archive.
3. **r** Append files to the end of an archive.
4. **t** List the contents of an archive.
For the complete list of modes, refer to the man pages.
**Create a new archive**
For the purpose of this guide, I will be using a folder named **ostechnix** that contains three different types of files.
```
$ ls ostechnix/
file.odt image.png song.mp3
```
Now, let us create a new tar archive of the directory ostechnix.
```
$ tar cf ostechnix.tar ostechnix/
```
Here, the **c** flag means create a new archive and **f** specifies the archive file name.
Similarly, to create an archive from a set of files in the current working directory, use this command:
```
$ tar cf archive.tar file1 file2 file3
```
**Extract archives**
To extract an archive in the current directory, simply do:
```
$ tar xf ostechnix.tar
```
We can also extract the archive into a different directory using the **-C** flag (capital C). For example, the following command extracts the given archive file into the **Downloads** directory.
```
$ tar xf ostechnix.tar -C Downloads/
```
Alternatively, go to the Downloads folder and extract the archive inside it like below.
```
$ cd Downloads/
$ tar xf ../ostechnix.tar
```
Sometimes you may want to extract files of a specific type. For example, the following command extracts the “.png” type files.
```
$ tar xf ostechnix.tar --wildcards "*.png"
```
**Create gzipped and bzipped archives**
By default, tar creates an archive file ending with **.tar**. The tar command can also be used in conjunction with the compression utilities **gzip** and **bzip2**. Files ending with the **.tar** extension are plain tar archives, files ending with **.tar.gz** or **.tgz** are **gzipped** archives, and tar files ending with **.tar.bz2** or **.tbz** are **bzipped** archives.
First, let us **create a gzipped** archive:
```
$ tar czf ostechnix.tar.gz ostechnix/
```
Or,
```
$ tar czf ostechnix.tgz ostechnix/
```
Here, we use the **z** flag to compress the archive using the gzip compression method.
You can use the **v** flag to view the progress while creating the archive.
```
$ tar czvf ostechnix.tar.gz ostechnix/
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
Here, **v** stands for verbose.
To create gzipped archive from a list of files:
```
$ tar czf archive.tgz file1 file2 file3
```
To extract the gzipped archive in the current directory, use:
```
$ tar xzf ostechnix.tgz
```
To extract the archive into a different folder, use the -C flag.
```
$ tar xzf ostechnix.tgz -C Downloads/
```
Now, let us create **bzipped archive**.
To do so, use **j** flag like below.
Create an archive of a directory:
```
$ tar cjf ostechnix.tar.bz2 ostechnix/
```
Or,
```
$ tar cjf ostechnix.tbz ostechnix/
```
Create archive from list of files:
```
$ tar cjf archive.tar.bz2 file1 file2 file3
```
Or,
```
$ tar cjf archive.tbz file1 file2 file3
```
To display the progress, use the **v** flag.
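For example, to watch the file names scroll by while creating a bzipped archive (the file names below are just placeholders):
```
$ tar cjvf archive.tar.bz2 file1 file2 file3
```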
Now, let us extract a bzipped archive into the current directory. To do so, we do:
```
$ tar xjf ostechnix.tar.bz2
```
Or, extract the archive to some other directory:
```
$ tar xjf ostechnix.tar.bz2 -C Downloads
```
**Create archive of multiple directories and/or files at a time**
This is another cool feature of the tar command. To create a gzipped archive of multiple directories or files at once, use this command:
```
$ tar czvf ostechnix.tgz Downloads/ Documents/ ostechnix/file.odt
```
The above command will create an archive containing the **Downloads** and **Documents** directories and the **file.odt** file from the **ostechnix** directory, and save the archive in the current working directory.
**Exclude directories and/or files while creating an archive**
This is quite useful when backing up your data. You can exclude non-important files or directories from your backup. This is where the **--exclude** switch comes in handy. For example, say you want to create an archive of your /home directory, but exclude the Downloads, Documents, Pictures, and Music directories.
This is how we do it:
```
$ tar czvf ostechnix.tgz /home/sk --exclude=/home/sk/Downloads --exclude=/home/sk/Documents --exclude=/home/sk/Pictures --exclude=/home/sk/Music
```
The above command will create a gzipped archive of my $HOME directory, excluding the Downloads, Documents, Pictures, and Music folders. To create a bzipped archive, replace **z** with **j** and use the extension .bz2, as shown in the sketch below.
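For instance, the bzipped equivalent of the backup command above would look like this (same example paths as before):
```
$ tar cjvf ostechnix.tar.bz2 /home/sk --exclude=/home/sk/Downloads --exclude=/home/sk/Documents --exclude=/home/sk/Pictures --exclude=/home/sk/Music
```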
**List contents of archive files without extracting them**
To list the contents of an archive file, we use the **t** flag.
```
$ tar tf ostechnix.tar
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
To view verbose output, use the **v** flag.
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
```
**Append files to existing archives**
Files or directories can be added to existing archives using the **r** flag. Take a look at the following command.
```
$ tar rf ostechnix.tar sk/ example.txt
```
The above command will add the directory named **sk** and the file named **example.txt** to the ostechnix.tar archive.
You can verify whether the files were added using the command:
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
drwxr-xr-x sk/users 0 2018-03-26 19:52 sk/
-rw-r--r-- sk/users 0 2018-03-26 19:39 sk/linux.txt
-rw-r--r-- sk/users 0 2018-03-26 19:56 example.txt
```
##### **TL;DR**
**Create tar archives:**
* **Plain tar archive:** tar -cf archive.tar file1 file2 file3
* **Gzipped tar archive:** tar -czf archive.tgz file1 file2 file3
* **Bzipped tar archive:** tar -cjf archive.tbz file1 file2 file3
**Extract tar archives:**
* **Plain tar archive:** tar -xf archive.tar
* **Gzipped tar archive:** tar -xzf archive.tgz
* **Bzipped tar archive:** tar -xjf archive.tbz
We have just covered the basic usage of the tar command; it is enough to get started. However, if you want to know more details, refer to the man pages.
```
$ man tar
```
And, that's all for now. In the next part, we will see how to archive files and directories using the zip utility.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-archive-files-and-directories-in-linux-part-1/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/
View File
@ -1,3 +1,5 @@
pinewall translating
Containerization, Atomic Distributions, and the Future of Linux
======
View File
@ -1,3 +1,4 @@
translating by wyxplus
Things You Should Know About Ubuntu 18.04
======
[Ubuntu 18.04 release][1] is just around the corner. I can see lots of questions from Ubuntu users in various Facebook groups and forums. I also organized Q&A sessions on Facebook and Instagram to know what Ubuntu users are wondering about Ubuntu 18.04.
View File
@ -1,3 +1,5 @@
translated by hopefully2333
Using machine learning to color cartoons
======
View File
@ -1,3 +1,4 @@
KevinSJ Translating
A Beginners Guide To Cron Jobs
======
View File
@ -1,3 +1,5 @@
pinewall translating
A reading list for Linux and open source fans
======
View File
@ -1,93 +0,0 @@
translating---geekpi
Orbital Apps A New Generation Of Linux applications
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
Today, we are going to learn about **Orbital Apps** or **ORB** (**O**pen **R**unnable **B**undle) **apps**, a collection of free, cross-platform, open source applications. All ORB apps are portable. You can either install them on your Linux system or on your USB drive, so that you can use the same app on any system. There is no need for root privileges, and there are no dependencies. All required dependencies are included in the apps. Just copy the ORB apps to your USB drive, plug it into any Linux system, and start using them in no time. All settings, configurations, and data of the apps will be stored on the USB drive. Since there is no need to install the apps on the local drive, we can run the apps on either online or offline computers. That means we don't need the Internet to download any dependencies.
ORB apps are compressed to be up to 60% smaller, so we can store and use them even on small-sized USB drives. All ORB apps are signed with PGP/RSA and distributed via TLS 1.2. All applications are packaged without any modifications; they are not even re-compiled. Here is the list of currently available portable ORB applications.
* abiword
* audacious
* audacity
* darktable
* deluge
* filezilla
* firefox
* gimp
* gnome-mplayer
* hexchat
* inkscape
* isomaster
* kodi
* libreoffice
* qbittorrent
* sound-juicer
* thunderbird
* tomahawk
* uget
* vlc
* And more yet to come.
ORB is open source, so if you're a developer, feel free to collaborate and add more applications.
### Download and use portable ORB apps
As I mentioned already, we don't need to install portable ORB apps. However, the ORB team strongly recommends using the **ORB launcher** for a better experience. The ORB launcher is a small installer file (less than 5 MB) that helps you launch ORB apps with a better and smoother experience.
Let us install the ORB launcher first. To do so, [**download the ORB launcher**][1]. You can manually download the ORB launcher ISO and mount it with your file manager. Or run any one of the following commands in the Terminal to install it:
```
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
```
If you don't have wget, run:
```
$ curl https://www.orbital-apps.com/orb.sh | bash
```
Enter the root user password when asked.
That's it. The ORB launcher is installed and ready to use.
Now, go to the [**ORB portable apps download page**][2] and download the apps of your choice. For the purpose of this tutorial, I am going to download the Firefox application.
Once you have downloaded the package, go to the download location and double click the ORB app to launch it. Click Yes to confirm.
![][4]
Firefox ORB application in action!
![][5]
Similarly, you can download and run any application instantly.
If you don't want to use the ORB launcher, make the downloaded .orb installer file executable and double click it to install (a quick sketch follows). However, the ORB launcher is recommended, as it gives you an easier and smoother experience while using ORB apps.
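From the Terminal, that could look like this; the file name here is hypothetical, so substitute whatever you actually downloaded:
```
$ chmod +x firefox.orb
$ ./firefox.orb
```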
As far as I have tested, ORB apps worked just fine out of the box. Hope this helps. And, that's all for now. Have a good day!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png
View File
@ -1,3 +1,5 @@
translating---geeekpi
Analyzing Ansible runs using ARA
======
View File
@ -1,3 +1,5 @@
pinewall translating
Creating small containers with Buildah
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)
View File
@ -1,260 +0,0 @@
Get more done at the Linux command line with GNU Parallel
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
Do you ever get the funny feeling that your computer isn't quite as fast as it should be? I used to feel that way, and then I found GNU Parallel.
GNU Parallel is a shell utility for executing jobs in parallel. It can parse multiple inputs, thereby running your script or command against sets of data at the same time. You can use all your CPU at last!
If you've ever used `xargs`, you already know how to use Parallel. If you don't, then this article teaches you, along with many other use cases.
### Installing GNU Parallel
GNU Parallel may not come pre-installed on your Linux or BSD computer. Install it from your repository or ports collection. For example, on Fedora:
```
$ sudo dnf install parallel
```
Or on NetBSD:
```
# pkg_add parallel
```
If all else fails, refer to the [project homepage][1].
### From serial to parallel
As its name suggests, Parallel's strength is that it runs jobs in parallel rather than, as many of us still do, sequentially.
When you run one command against many objects, you're inherently creating a queue. Some number of objects can be processed by the command, and all the other objects just stand around and wait their turn. It's inefficient. Given enough data, there's always going to be a queue, but instead of having just one queue, why not have lots of small queues?
Imagine you have a folder full of images you want to convert from JPEG to PNG. There are many ways to do this. There's the manual way of opening each image in GIMP and exporting it to the new format. That's usually the worst possible way. It's not only time-intensive, it's labor-intensive.
A pretty neat variation on this theme is the shell-based solution:
```
$ convert 001.jpeg 001.png
$ convert 002.jpeg 002.png
$ convert 003.jpeg 003.png
... and so on ...
```
It's a great trick when you first learn it, and at first it's a vast improvement. No need for a GUI and constant clicking. But it's still labor-intensive.
Better still:
```
$ for i in *jpeg; do convert $i $i.png ; done
```
This, at least, sets the job(s) in motion and frees you up to do more productive things. The problem is, it's still a serial process. One image gets converted, and then the next one in the queue steps up for conversion, and so on until the queue has been emptied.
With Parallel:
```
$ find . -name "*jpeg" | parallel -I% --max-args 1 convert % %.png
```
This is a combination of two commands: the `find` command, which gathers the objects you want to operate on, and the `parallel` command, which sorts through the objects and makes sure everything gets processed as required.
* `find . -name "*jpeg"` finds all files in the current directory that end in `jpeg`.
* `parallel` invokes GNU Parallel.
* `-I%` creates a placeholder, called `%`, to stand in for whatever `find` hands over to Parallel. You use this because otherwise you'd have to manually write a new command for each result of `find`, and that's exactly what you're trying to avoid.
* `--max-args 1` limits the rate at which Parallel requests a new object from the queue. Since the command Parallel is running requires only one file, you limit the rate to 1. Were you doing a more complex command that required two files (such as `cat 001.txt 002.txt > new.txt`), you would limit the rate to 2.
* `convert % %.png` is the command you want to run in Parallel.
The result of this command is that `find` gathers all relevant files and hands them over to `parallel`, which launches a job and immediately requests the next in line. Parallel continues to do this for as long as it is safe to launch new jobs without crippling your computer. As old jobs are completed, it replaces them with new ones, until all the data provided to it has been processed. What took 10 minutes before might take only 5 or 3 with Parallel.
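If you like visual feedback while Parallel works through a queue like this, it can also draw a progress bar. A minimal sketch reusing the conversion job above (`--bar` is documented in the man page, alongside related options such as `--eta`):
```
$ find . -name "*jpeg" | parallel --bar -I% --max-args 1 convert % %.png
```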
### Multiple inputs
The `find` command is an excellent gateway to Parallel as long as you're familiar with `find` and `xargs` (collectively called GNU Find Utilities, or `findutils`). It provides a flexible interface that many Linux users are already comfortable with and is pretty easy to learn if you're a newcomer.
The `find` command is fairly straightforward: you provide `find` with a path to a directory you want to search and some portion of the file name you want to search for. Use wildcard characters to cast your net wider; in this example, the asterisk indicates anything, so `find` locates all files that end with the string `searchterm`:
```
$ find /path/to/directory -name "*searchterm"
```
By default, `find` returns the results of its search one item at a time, with one item per line:
```
$ find ~/graphics -name "*jpg"
/home/seth/graphics/001.jpg
/home/seth/graphics/cat.jpg
/home/seth/graphics/penguin.jpg
/home/seth/graphics/IMG_0135.jpg
```
When you pipe the results of `find` to `parallel`, each item on each line is treated as one argument to the command that `parallel` is arbitrating. If, on the other hand, you need to process more than one argument in one command, you can split up the way the data in the queue is handed over to `parallel`.
Here's a simple, unrealistic example, which I'll later turn into something more useful. You can follow along with this example, as long as you have GNU Parallel installed.
Assume you have four files. List them, one per line, to see exactly what you have:
```
$ echo ada > ada ; echo lovelace > lovelace
$ echo richard > richard ; echo stallman > stallman
$ ls -1
ada
lovelace
richard
stallman
```
You want to combine two files into a third that contains the contents of both files. This requires that Parallel has access to two files, so the `-I%` variable won't work in this case.
Parallel's default behavior is basically invisible:
```
$ ls -1 | parallel echo
ada
lovelace
richard
stallman
```
Now tell Parallel you want to get two objects per job:
```
$ ls -1 | parallel --max-args=2 echo
ada lovelace
richard stallman
```
Now the lines have been combined. Specifically, two results from `ls -1` are passed to Parallel all at once. That's the right number of arguments for this task, but they're effectively one argument right now: "ada lovelace" and "richard stallman." What you actually want is two distinct arguments per job.
Luckily, that technicality is parsed by Parallel itself. If you set `--max-args` to `2`, you get two variables, `{1}` and `{2}`, representing the first and second parts of the argument:
```
$ ls -1 | parallel --max-args=2 cat {1} {2} ">" {1}_{2}.person
```
In this command, the variable `{1}` is ada or richard (depending on which job you look at) and `{2}` is either `lovelace` or `stallman`. The contents of the files are redirected with a redirect symbol in quotes (the quotes grab the redirect symbol from Bash so Parallel can use it) and placed into new files called `ada_lovelace.person` and `richard_stallman.person`.
```
$ ls -1
ada
ada_lovelace.person
lovelace
richard
richard_stallman.person
stallman
$ cat ada_*person
ada lovelace
$ cat ri*person
richard stallman
```
If you spend all day parsing log files that are hundreds of megabytes in size, you might see how parallelized text parsing could be useful to you; otherwise, this is mostly a demonstrative exercise.
However, this kind of processing is invaluable for more than just text parsing. Here's a real-life example from the film world. Consider a directory of video files and audio files that need to be joined together.
```
$ ls -1
12_LS_establishing-manor.avi
12_wildsound.flac
14_butler-dialogue-mixed.flac
14_MS_butler.avi
...and so on...
```
Using the same principles, a simple command can be created so that the files are combined in parallel:
```
$ ls -1 | parallel --max-args=2 ffmpeg -i {1} -i {2} -vcodec copy -acodec copy {1}.mkv
```
### Brute. Force.
All this fancy input and output parsing isn't to everyone's taste. If you prefer a more direct approach, you can throw commands at Parallel and walk away.
First, create a text file with one command on each line:
```
$ cat jobs2run
bzip2 oldstuff.tar
oggenc music.flac
opusenc ambiance.wav
convert bigfile.tiff small.jpeg
ffmpeg -i foo.avi -b:v 12000k foo.mp4
xsltproc --output build/tmp.fo style/dm.xsl src/tmp.xml
bzip2 archive.tar
```
Then hand the file over to Parallel:
```
$ parallel --jobs 6 < jobs2run
```
And now all the jobs in your file are run in Parallel. If there are more jobs than the number allowed to run at once, Parallel forms and maintains a queue until all jobs have run.
### Much, much more
GNU Parallel is a powerful and flexible tool, with far more use cases than can fit into this article. Its man page provides examples of really cool things you can do with it, from remote execution over SSH to incorporating Bash functions into your Parallel commands. There's even an extensive demonstration series on [YouTube][2], so you can learn from the GNU Parallel team directly. The GNU Parallel lead maintainer has also just released the command's official guide, available from [Lulu.com][3].
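As a small taste of the remote execution mentioned above, here is a minimal sketch; `server1` and `server2` are placeholder host names, and it assumes passwordless SSH access to both:
```
$ parallel -S server1,server2 'hostname; echo {}' ::: a b c
```
Each job runs on whichever server Parallel picks, and the `hostname` in the output tells you which one that was.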
GNU Parallel has the power to change the way you compute, and if it doesn't do that, it will at the very least change the time your computer spends computing. Try it today!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/gnu-parallel
作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[1]:https://www.gnu.org/software/parallel
[2]:https://www.youtube.com/watch?v=OpaiGYxkSuQ&list=PL284C9FF2488BC6D1
[3]:http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html
View File
@ -1,116 +1,108 @@
How To Display Images In The Terminal
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/fim-2-720x340.png)
There are plenty of GUI picture viewers available for Linux. But I haven't heard of or used any applications which display pictures in the Terminal itself. Luckily, I have just found a CLI image viewer named **FIM** that can be used to display images in the Terminal. The FIM utility drew my attention because it is very lightweight compared to most GUI picture viewer applications. Without further ado, let us go ahead and see what it is capable of.
### Display Images In the Terminal Using FIM
**FIM** stands for **F**bi **IM**proved. For those who don't know, **Fbi** is a Linux **f**rame**b**uffer **i**mage viewer. It uses the system's framebuffer to display images directly from the command line. By default, it displays bmp, gif, jpeg, PhotoCD, png, ppm, tiff, and xwd from the Terminal itself. For other formats, it will try to use ImageMagick's convert.
FIM is based on Fbi and it is a highly customizable and scriptable image viewer targeted at users who are comfortable with software like the Vim text editor or the Mutt mail user agent. It displays images in full screen, and the images can be controlled (resized, flipped, zoomed) using keyboard shortcuts. Unlike Fbi, the FIM utility is universal: it can open many file formats and it can display pictures in the following video modes:
* Graphically, with the Linux framebuffer device.
* Graphically, under X/Xorg, using the SDL library.
* Graphically, under X/Xorg, using the Imlib2 library.
* Rendered as ASCII Art in any textual console, using the AAlib library.
FIM is completely free and open source.
### Install FIM
The FIM image viewer is available in the default repositories of DEB-based systems such as Ubuntu and Linux Mint. So, you can install FIM using the command:
```
$ sudo apt-get install fim
```
If it is not available in the default repositories of your Linux distribution, you can download, compile, and install it from source as shown below.
```
wget http://download.savannah.nongnu.org/releases/fbi-improved/fim-0.6-trunk.tar.gz
wget http://download.savannah.nongnu.org/releases/fbi-improved/fim-0.6-trunk.tar.gz.sig
gpg --search 'dezperado autistici org'
# import the key from a trusted keyserver by following on screen instructions
gpg --verify fim-0.6-trunk.tar.gz.sig
tar xzf fim-0.6-trunk.tar.gz
cd fim-0.6-trunk
./configure --help=short
# read the ./configure --help=short output: you can give options to ./configure
./configure
make
su -c "make install"
```
### FIM Usage
Once installed, you can display an image with the “auto zoom” option using the command:
```
$ fim -a dog.jpg
```
Here is the sample output from my Ubuntu box.
![][1]
As you can see in the above screenshot, FIM didn't use any external GUI picture viewers. Instead, it uses our system's framebuffer to display the image.
If you have multiple .jpg files in the current directory, you could use a wildcard to open all of them, as shown below.
```
$ fim -a *.jpg
```
To open all images in a directory, for example **Pictures**, run:
```
$ fim Pictures/
```
We can also open images recursively in a folder and its sub-folders, and then sort the list, like below.
```
$ fim -R Pictures/ --sort
```
To render the image in ASCII format, you can use the **-t** flag.
```
$ fim -t dog.jpg
```
To quit FIM, press **ESC** or **q**.
**Keyboard shortcuts**
You can use various keyboard shortcuts to manage the images. For example, to load the next and previous images, press the PgUp/PgDown keys. To zoom in or out, use the +/- keys. Here are the common keys used to control images in FIM.
* **PageUp/Down** : Prev/Next image
* **+/-** : Zoom in/out
* **a** : Autoscale
* **w** : Fit to width
* **h** : Fit to height
* **j/k** : Pan down/up
* **f/m** : Flip/mirror
* **r/R** : Rotate (clockwise and anti-clockwise)
* **ESC/q** : Quit
For complete details, refer to the man pages.
```
$ man fim
```
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
@ -118,9 +110,9 @@ Cheers!
via: https://www.ostechnix.com/how-to-display-images-in-the-terminal/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,174 @@
Splicing the Cloud Native Stack, One Floor at a Time
======
At Packet, our value (automated infrastructure) is super fundamental. As such, we spend an enormous amount of time looking up at the players and trends in all the ecosystems above us - as well as the very few below!
It's easy to get confused, or simply lose track, when swimming deep in the oceans of any ecosystem. I know this for a fact because when I started at Packet last year, my English degree from Bryn Mawr didn't quite come with a Kubernetes certification. :)
Due to its super fast evolution and massive impact, the cloud native ecosystem defies precedent. It seems that every time you blink, entirely new technologies (not to mention all of the associated logos) have become relevant...or at least interesting. Like many others, I've relied on the CNCF's ubiquitous “[Cloud Native Landscape][1]” as a touchstone as I got to know the space. However, if there is one element that defines ecosystems, it is the people that contribute to and steer them.
That's why, when we were walking back to the office one cold December afternoon, we hit upon a creative way to explain “cloud native” to an investor, whose eyes were obviously glazing over as we talked about the nuances that distinguished Cilium from Aporeto, and why everything from CoreDNS and Spiffe to Digital Rebar and Fission were interesting in their own right.
Looking up at our narrow 13 story office building in the shadow of the new World Trade Center, we hit on an idea that took us down an artistic rabbit hole: why not draw it?
![][2]
And thus began our journey to splice the Cloud Native Stack, one floor at a time. Let's walk through it together and we can give you the “guaranteed to be outdated tomorrow” down low.
[[View a High Resolution JPG][3]] or email us to request a copy.
### Starting at the Very Bottom
As we started to put pen to paper, we knew we wanted to shine a light on parts of the stack that we interact with on a daily basis, but that is largely invisible to users further up: hardware. And like any good secret lab investing in the next great (usually proprietary) thing, we thought the basement was the perfect spot.
From the well established giants of the space like Intel, AMD and Huawei (rumor has it they employ nearly 80,000 engineers!), to more niche players like Mellanox, the hardware ecosystem is on fire. In fact, we may be entering a Golden Age of hardware, as billions of dollars are poured into upstarts hacking on new offloads, GPUs, custom co-processors.
The famous software trailblazer Alan Kay said over 25 years ago: “People who are really serious about software should make their own hardware.” Good call Alan!
### The Cloud is About Capital
As our CEO Zac Smith has told me many times: it's all about the money. And not just about making it, but spending it! In the cloud, it takes billions of dollars of capital to make computers show up in data centers so that developers can consume them with software. In other words:
![][4]
We thought the best place for “The Bank” (i.e. the lenders and investors that make this cloud fly) was the ground floor. So we transformed our lobby into the Banker's Cafe, complete with a wheel of fortune for all of us out there playing the startup game.
![][5]
### The Ping and Power
If the money is the grease, then the engine that consumes much of the fuel is the datacenter providers and the networks that connect them. We call them “power” and “ping”.
From top of mind names like Equinix and edge upstarts like Vapor.io, to the “pipes” that Verizon, Crown Castle and others literally put in the ground (or on the ocean floor), this is a part of the stack that we all rely upon but rarely see in person.
Since we spend a lot of time looking at datacenters and connectivity, one thing to note is that this space is changing quite rapidly, especially as 5G arrives in earnest and certain workloads start to depend on less centralized infrastructure.
The edge is coming, y'all! :-)
![][6]
### Hey, It's Infrastructure!
Sitting on top of “ping” and “power” is the floor we lovingly call “processors”. This is where our magic happens - we turn the innovation and physical investments from down below into something at the end of an API.
Since this is a NYC building, we kept the cloud providers here fairly NYC centric. That's why you see Sammy the Shark (of Digital Ocean lineage) and a nod to Google over in the “meet me” room.
As you'll see, this scene is pretty physical. Racking and stacking, as it were. While we love our facilities manager in EWR1 (Michael Pedrazzini), we are working hard to remove as much of this manual labor as possible. PhDs in cabling are hard to come by, after all.
![][7]
### Provisioning
One floor up, layered on top of infrastructure, is provisioning. This is one of our favorite spots, which years ago we might have called “config management.” But now it's all about immutable infrastructure and automation from the start: Terraform, Ansible, Quay.io and the like. You can tell that software is working its way down the stack, eh?
Kelsey Hightower noted recently “it's an exciting time to be in boring infrastructure.” I don't think he meant the physical part (although we think it's pretty dope), but as software continues to hack on all layers of the stack, you can guarantee a wild ride.
![][8]
### Operating Systems
With provisioning in place, we move to the operating system layer. This is where we get to start poking fun at some of our favorite folks as well: note Brian Redbeards above average yoga pose. :)
Packet offers eleven major operating systems for our clients to choose from, including some that you see in this illustration: Ubuntu, CoreOS, FreeBSD, Suse, and various Red Hat offerings. More and more, we see folks putting their opinion on this layer: from custom kernels and golden images of their favorite distros for immutable deploys, to projects like NixOS and LinuxKit.
![][9]
### Run Time
We had to have fun with this, so we placed the runtime in the gym, with a championship match between CoreOS-sponsored rkt and Docker's containerd. Either way the CNCF wins!
We felt the fast-evolving storage ecosystem deserved some lockers. What's fun about the storage aspect is the number of new players trying to conquer the challenging issue of persistence, as well as performance and flexibility. As they say: storage is just plain hard.
![][10]
### Orchestration
The orchestration layer has been all about Kubernetes this past year, so we took one of its most famous evangelists (Kelsey Hightower) and featured him in this rather odd meetup scene. We have some major Nomad fans on our team, and there is just no way to consider the cloud native space without the impact of Docker and its toolset.
While workload orchestration applications are fairly high up our stack, we see all kinds of evidence that these powerful tools are starting to look way down the stack to help users take advantage of GPUs and other specialty hardware. Stay tuned - we're in the early days of the container revolution!
![][11]
### Platforms
This is one of our favorite layers of the stack, because there is so much craft in how each platform helps users accomplish what they really want to do (which, by the way, isn't running containers but running applications!). From Rancher and Kontena, to Tectonic and Redshift, to totally different approaches like Cycle.io and Flynn.io - we're always thrilled to see how each of these projects serves users differently.
The main takeaway: these platforms are helping to translate all of the various, fast-moving parts of the cloud native ecosystem to users. It's great watching what they each come up with!
![][12]
### Security
When it comes to security, it's been a busy year! We tried to represent some of the more famous attacks and illustrate how various tools are trying to help protect us as workloads become highly distributed and portable (while at the same time, attackers become ever more resourceful).
We see a strong movement towards trustless environments (see Aporeto) and low level security (Cilium), as well as tried and true approaches at the network level like Tigera. No matter your approach, it's good to remember: This is definitely not fine. :0
![][13]
### Apps
How to represent the huge, vast, limitless ecosystem of applications? In this case, it was easy: stay close to NYC and pick our favorites. ;) From the Postgres “elephant in the room” and the Timescale clock, to the sneaky ScyllaDB trash and the chillin' Travis dude - we had fun putting this slice together.
One thing that surprised us: how few people noticed the guy taking a photocopy of his rear end. I guess it's just not that common to have a photocopy machine anymore?!?
![][14]
### Observability
As our workloads start moving all over the place, and the scale gets gigantic, there is nothing quite as comforting as a really good Grafana dashboard, or that handy Datadog agent. As complexity increases, the “SRE” generation are starting to rely ever more on alerting and other intelligence events to help us make sense of what's going on, and work towards increasingly self-healing infrastructure and applications.
It will be interesting to see what kind of logos make their way into this floor over the coming months and years...maybe some AI, blockchain, ML powered dashboards? :-)
![][15]
### Traffic Management
People tend to think that the internet “just works” but in reality, we're kind of surprised it works at all. I mean, a loose connection of disparate networks at massive scale - you have to be joking!?
One reason it all sticks together is traffic management, DNS and the like. More and more, these players are helping to make the internet both faster and safer, as well as more resilient. We're especially excited to see upstarts like Fly.io and NS1 competing against well established players, and watching the entire ecosystem improve as a result. Keep rockin' it, y'all!
![][16]
### Users
What good is a technology stack if you don't have fantastic users? Granted, they sit on top of a massive stack of innovation, but in the cloud native world they do more than just consume: they create and contribute. From massive contributions like Kubernetes to more incremental (but equally important) aspects, what we're all a part of is really quite special.
Many of the users lounging on our rooftop deck, like Ticketmaster and the New York Times, are not mere upstarts: these are organizations that have embraced a new way of deploying and managing their applications, and their own users are reaping the rewards.
![][17]
### Last but not Least, the Adult Supervision!
In previous ecosystems, foundations have played a more passive “behind the scenes” role. Not the CNCF! Their goal of building a robust cloud native ecosystem has been supercharged by the incredible popularity of the movement - and they've not only caught up but led the way.
From rock solid governance and a thoughtful group of projects, to outreach like the CNCF Landscape, CNCF Cross Cloud CI, Kubernetes Certification, and Speakers Bureau - the CNCF is way more than “just” the ever popular KubeCon + CloudNativeCon.
--------------------------------------------------------------------------------
via: https://www.packet.net/blog/splicing-the-cloud-native-stack/
作者:[Zoe Allen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.packet.net/about/zoe-allen/
[1]:https://landscape.cncf.io/landscape=cloud
[2]:https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
[3]:https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
[4]:https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
[5]:https://assets.packet.net/media/images/X0b9-the.bank.jpg
[6]:https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
[7]:https://assets.packet.net/media/images/C800-infrastructure.jpg
[8]:https://assets.packet.net/media/images/0V4O-provisioning.jpg
[9]:https://assets.packet.net/media/images/eMYp-operating.system.jpg
[10]:https://assets.packet.net/media/images/9BII-run.time.jpg
[11]:https://assets.packet.net/media/images/njak-orchestration.jpg
[12]:https://assets.packet.net/media/images/1QUS-platforms.jpg
[13]:https://assets.packet.net/media/images/TeS9-security.jpg
[14]:https://assets.packet.net/media/images/SFgF-apps.jpg
[15]:https://assets.packet.net/media/images/SXoj-observability.jpg
[16]:https://assets.packet.net/media/images/tKhf-traffic.management.jpg
[17]:https://assets.packet.net/media/images/7cpe-users.jpg
View File
@ -1,86 +0,0 @@
translating---geekpi
3 useful things you can do with the IP tool in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
It has been more than a decade since the `ifconfig` command has been deprecated on Linux in favor of the `iproute2` project, which contains the magical tool `ip`. Many online tutorial resources still refer to old command-line tools like `ifconfig`, `route`, and `netstat`. The goal of this tutorial is to share some of the simple networking-related things you can do easily using the `ip` tool instead.
### Find your IP address
```
[dneary@host]$ ip addr show
[snip]
44: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 5c:e0:c5:c7:f0:f1 brd ff:ff:ff:ff:ff:ff
        inet 10.16.196.113/23 brd 10.16.197.255 scope global dynamic wlp4s0
        valid_lft 74830sec preferred_lft 74830sec
        inet6 fe80::5ee0:c5ff:fec7:f0f1/64 scope link
        valid_lft forever preferred_lft forever
```
`ip addr show` will show you a lot of information about all of your network link devices. In this case, my wireless Ethernet card (wlp4s0) has the IPv4 address (the `inet` field) `10.16.196.113/23`. The `/23` means that 23 of the 32 bits in the IP address are shared by all of the IP addresses in this subnet. IP addresses in the subnet will range from `10.16.196.0` to `10.16.197.254`. The broadcast address for the subnet (the `brd` field after the IP address), `10.16.197.255`, is reserved for broadcast traffic to all hosts on the subnet.
We can show only the information about a single device using `ip addr show dev wlp4s0`, for example.
### Display your routing table
```
[dneary@host]$ ip route list
default via 10.16.197.254 dev wlp4s0 proto static metric 600
10.16.196.0/23 dev wlp4s0 proto kernel scope link src 10.16.196.113 metric 601
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
```
The routing table is the local host's way of helping network traffic figure out where to go. It contains a set of signposts, sending traffic to a specific interface, and a specific next waypoint on its journey.
If you run any virtual machines or containers, these will get their own IP addresses and subnets, which can make these routing tables quite complicated, but in a single host, there are typically two instructions. For local traffic, send it out onto the local Ethernet, and the network switches will figure out (using a protocol called ARP) which host owns the destination IP address, and thus where the traffic should be sent. For traffic to the internet, send it to the local gateway node, which will have a better idea how to get to the destination.
In the situation above, the first line represents the external gateway for external traffic, the second line is for local traffic, and the third is reserved for a virtual bridge for VMs running on the host, but this link is not currently active.
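If you ever wonder which of these signposts a given destination would match, `ip` can tell you directly. A quick sketch (the destination address is just an example):
```
$ ip route get 8.8.8.8
```
This prints the route the kernel would actually choose for that destination, including the outgoing device and the source address it would use.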
### Monitor your network configuration
```
[dneary@host]$ ip monitor all
[dneary@host]$ ip -s link list wlp4s0
```
The `ip monitor` command can be used to monitor changes in routing tables, network addressing on network interfaces, or changes in ARP tables on the local host. This command can be particularly useful in debugging network issues related to containers and networking, when two VMs should be able to communicate with each other but cannot.
When used with `all`, `ip monitor` will report all changes, prefixed with one of `[LINK]` (network interface changes), `[ROUTE]` (changes to a routing table), `[ADDR]` (IP address changes), or `[NEIGH]` (nothing to do with horses—changes related to ARP addresses of neighbors).
You can also monitor changes on specific objects (for example, a specific routing table or an IP address).
Another useful option that works with many commands is `ip -s`, which gives some statistics. Adding a second `-s` option adds even more statistics. `ip -s link list wlp4s0` above will give lots of information about packets received and transmitted, with the number of packets dropped, errors detected, and so on.
### Handy tip: Shorten your commands
In general, for the `ip` tool, you need to include only enough letters to uniquely identify what you want to do. Instead of `ip monitor`, you can use `ip mon`. Instead of `ip addr list`, you can use `ip a l`, and you can use `ip r` in place of `ip route`. `ip link list` can be shortened to `ip l ls`. To read about the many options you can use to change the behavior of a command, visit the [ip manpage][1].
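As a quick sketch of these abbreviations in action (reusing the interface name from the examples above):
```
$ ip a l wlp4s0    # same as: ip addr list wlp4s0
$ ip r             # same as: ip route
$ ip l ls wlp4s0   # same as: ip link list wlp4s0
```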
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/useful-things-you-can-do-with-IP-tool-Linux
作者:[Dave Neary][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dneary
[1]:https://www.systutorials.com/docs/linux/man/8-ip-route/
View File
@ -0,0 +1,133 @@
A CLI Game To Learn Vim Commands
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/PacVim-720x340.png)
Howdy, Vim users! Today, I stumbled upon a cool utility to sharpen your Vim usage skills. Vim is a great editor for writing and editing code. However, some of you (including me) are still struggling with its steep learning curve. Not anymore! Meet **PacVim**, a CLI game that helps you learn Vim commands. PacVim is inspired by the classic game [**PacMan**][1], and it gives you plenty of practice with Vim commands in a fun and interesting way. Simply put, PacVim is a fun, free way to learn the vim commands in depth. Please do not confuse PacMan with [**pacman**][2] (the Arch Linux package manager). PacMan is a classic, popular arcade game released in the 1980s.
In this brief guide, we will see how to install and use PacVim in Linux.
### Install PacVim
First, install the **Ncurses** library and **development tools** as described in the following links.
Please note that this game may not compile and install properly without gcc version 4.8.X or higher. I tested PacVim on Ubuntu 18.04 LTS and it worked perfectly.
Once Ncurses and gcc are installed, run the following commands to install PacVim.
```
$ git clone https://github.com/jmoon018/PacVim.git
$ cd PacVim
$ sudo make install
```
### Learn Vim Commands Using PacVim
### Start PacVim game
To play this game, just run:
```
$ pacvim [LEVEL_NUMBER] [MODE]
```
For example, the following command starts the game at the 5th level in normal mode.
```
$ pacvim 5 n
```
Here, **“5”** represents the level and **“n”** represents the mode. There are two modes:
* **n** normal mode.
* **h** hard mode.
The default mode is **h**, which is hard.
To start from the beginning (level 0), just run:
```
$ pacvim
```
Here is the sample output from my Ubuntu 18.04 LTS system.
![][4]
To begin the game, just press **ENTER**.
![][5]
Now start playing the game. Read the next section to learn how to play.
To quit, press **ESC** or **q**.
The following command starts the game at the 5th level in hard mode.
```
$ pacvim 5 h
```
Or,
```
$ pacvim 5
```
### How to play PacVim?
The usage of PacVim is very similar to PacMan.
You must run over all the characters on the screen while avoiding the ghosts (the red color characters).
PacVim has two special obstacles:
1. You cannot move into the walls (yellow color). You must use vim motions to jump over them.
2. If you step on a tilde character (cyan `~`), you lose!
You are given three lives. You gain a life each time you beat level 0, 3, 6, 9, and so on. There are 10 levels in total, numbered 0 to 9. After beating the 9th level, the game resets to level 0, but the ghosts move faster.
**Winning conditions**
Use vim commands to move the cursor over the letters and highlight them. After all letters are highlighted, you win and proceed to the next level.
**Losing conditions**
If you touch a ghost (indicated by a **red G**) or a **tilde** character, you lose a life. If you run out of lives, you lose the game.
Here is the list of implemented commands:

| Key | What it does |
| --- | --- |
| `q` | quit the game |
| `h` | move left |
| `j` | move down |
| `k` | move up |
| `l` | move right |
| `w` | move forward to next word beginning |
| `W` | move forward to next WORD beginning |
| `e` | move forward to next word ending |
| `E` | move forward to next WORD ending |
| `b` | move backward to next word beginning |
| `B` | move backward to next WORD beginning |
| `$` | move to the end of the line |
| `0` | move to the beginning of the line |
| `gg`/`1G` | move to the beginning of the first line |
| `<number>G` | move to the beginning of the line given by number |
| `G` | move to the beginning of the last line |
| `^` | move to the first word at the current line |
| `&` | 1337 cheatz (beat current level) |
After playing a couple of levels, you may notice a slight improvement in your Vim usage. Keep playing this game once in a while until you master Vim.
And, that's all for now. Hope this was useful. Playing PacVim is fun and interesting, and it keeps you occupied. At the same time, you should be able to learn enough Vim commands thoroughly. Give it a try, you won't be disappointed.
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://en.wikipedia.org/wiki/Pac-Man
[2]:https://www.ostechnix.com/getting-started-pacman/
[4]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/05/pacvim-2.png

View File

@ -1,49 +0,0 @@
translating----geekpi
LikeCoin, a cryptocurrency for creators of openly licensed content
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
Conventional wisdom indicates that writers, photographers, artists, and other creators who share their content for free, under Creative Commons and other open licenses, won't get paid. That means most independent creators don't make any money by publishing their work on the internet. Enter [LikeCoin][1]: a new, open source project that intends to make this convention, where artists often have to compromise or sacrifice in order to contribute, a thing of the past.
The LikeCoin protocol is designed to monetize creative content so creators can focus on creating great material rather than selling it.
The protocol is also based on decentralized technologies that track when content is used and reward its creators with LikeCoin, an [Ethereum ERC-20][2] cryptocurrency token. It operates through a "Proof of Creativity" algorithm which assigns LikeCoins based partially on how many "likes" a piece of content receives and how many derivative works are produced from it. Because openly licensed content has more opportunity to be reused and earn LikeCoin tokens, the system encourages content creators to publish under Creative Commons licenses.
### How it works
When a creative piece is uploaded via the LikeCoin protocol, the content creator includes the work's metadata, including author information and its InterPlanetary Linked Data ([IPLD][3]). This data forms a family graph of derivative works; we call the relationships between a work and its derivatives the "content footprint." This structure allows a content's inheritance tree to be easily traced all the way back to the original work.
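As a rough illustration (a hypothetical Python sketch; the class and field names here are invented for illustration and are not the LikeCoin API), tracing a work back to its original through parent links might look like this:
```
# Hypothetical sketch of walking a "content footprint" back to the
# original work, assuming each work's metadata records its parent link.
class Work:
    def __init__(self, ipld_id, author_wallet, parent=None):
        self.ipld_id = ipld_id              # content address of this work
        self.author_wallet = author_wallet  # used later for LikeCoin payouts
        self.parent = parent                # None for an original work

def trace_to_original(work):
    """Return the chain of works from a derivative back to the original."""
    chain = [work]
    while work.parent is not None:
        work = work.parent
        chain.append(work)
    return chain

original = Work("QmOriginal", "0xAlice")
remix = Work("QmRemix", "0xBob", parent=original)
print([w.ipld_id for w in trace_to_original(remix)])
# ['QmRemix', 'QmOriginal']
```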
LikeCoin tokens will be distributed to creators using information about a work's derivation history. Since all creative works contain the metadata of the author's wallet, the corresponding LikeCoin shares can be calculated through the algorithm and distributed accordingly.
LikeCoins are awarded in two ways: either directly by individuals who want to show their appreciation by paying a content creator, or through the Creators Pool, which collects viewers' "Likes" and distributes LikeCoin according to a content's LikeRank. Based on content-footprint tracing in the LikeCoin protocol, LikeRank measures the importance (or creativity, as we define it in this context) of a creative work. In general, the more derivative works a piece of content generates, the more creative it is considered, and thus the higher its LikeRank. LikeRank quantifies the creativity of content.
### Want to get involved?
LikeCoin is still very new, and we expect to launch our first decentralized application later in 2018 to reward Creative Commons content and connect seamlessly with a much larger and established community.
Most of LikeCoin's code can be accessed in the [LikeCoin GitHub][4] repository under a [GPL 3.0 license][5]. Since it's still under active development, some of the experimental code is not yet open to the public, but we will make it so as soon as possible.
We welcome feature requests, pull requests, forks, and stars. Please join our development on GitHub and our general discussions on [Telegram][6]. We also release updates about our progress on [Medium][7], [Facebook][8], [Twitter][9], and our website, [like.co][1].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/likecoin
作者:[Kin Ko][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ckxpress
[1]:https://like.co/
[2]:https://en.wikipedia.org/wiki/ERC20
[3]:https://ipld.io/
[4]:https://github.com/likecoin
[5]:https://www.gnu.org/licenses/gpl-3.0.en.html
[6]:https://t.me/likecoin
[7]:http://medium.com/likecoin
[8]:http://fb.com/likecoin.foundation
[9]:https://twitter.com/likecoin_fdn

View File

@ -1,72 +0,0 @@
Protect your Fedora system against this DHCP flaw
======
![](https://fedoramagazine.org/wp-content/uploads/2018/05/dhcp-cve-816x345.jpg)
A critical security vulnerability was discovered and disclosed earlier today in dhcp-client. This DHCP flaw carries a high risk to your system and data, especially if you use untrusted networks such as a WiFi access point you don't own. Read on to learn how to protect your Fedora system.
Dynamic Host Configuration Protocol (DHCP) allows your system to get configuration from a network it joins. Your system will make a request for DHCP data, and typically a server such as a router answers. The server provides the necessary data for your system to configure itself. This is how, for instance, your system configures itself properly for networking when it joins a wireless network.
However, an attacker on the local network may be able to exploit this vulnerability. Using a flaw in a dhcp-client script that runs under NetworkManager, the attacker may be able to run arbitrary commands with root privileges on your system. This DHCP flaw puts your system and your data at high risk. The flaw has been assigned CVE-2018-1111 and has a [Bugzilla tracking bug][1].
### Guarding against this DHCP flaw
New dhcp packages contain fixes for Fedora 26, 27, and 28, as well as Rawhide. The maintainers have submitted these updates to the updates-testing repositories. They should show up in stable repos within a day or so of this post for most users. The desired packages are:
* Fedora 26: dhcp-4.3.5-11.fc26
* Fedora 27: dhcp-4.3.6-10.fc27
* Fedora 28: dhcp-4.3.6-20.fc28
* Rawhide: dhcp-4.3.6-21.fc29
#### Updating a stable Fedora system
To update immediately on a stable Fedora release, use this command [with sudo][2]. Type your password at the prompt, if necessary:
```
sudo dnf --refresh --enablerepo=updates-testing update dhcp-client
```
Later, use the standard stable repos to update. To update your Fedora system from the stable repos, use this command:
```
sudo dnf --refresh update dhcp-client
```
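Either way, a quick sanity check (a standard RPM query, not specific to this advisory) shows which version you have installed; compare it to the versions listed above:
```
rpm -q dhcp-client
```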
#### Updating a Rawhide system
If your system is on Rawhide, use these commands to download and update the packages immediately:
```
mkdir dhcp && cd dhcp
koji download-build --arch={x86_64,noarch} dhcp-4.3.6-21.fc29
sudo dnf update ./dhcp-*.rpm
```
After the nightly Rawhide compose, simply run `sudo dnf update` to get the update.
### Fedora Atomic Host
The fixes for Fedora Atomic Host are in ostree version 28.20180515.1. To get the update, run this command:
```
atomic host upgrade -r
```
This command reboots your system to apply the upgrade.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/protect-fedora-system-dhcp-flaw/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://bugzilla.redhat.com/show_bug.cgi?id=1567974
[2]:https://fedoramagazine.org/howto-use-sudo/

View File

@ -1,3 +1,5 @@
pinewall translating
A guide to Git branching
======

View File

@ -1,3 +1,5 @@
translating---geekpi
How to find your IP address in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/satellite_radio_location.jpg?itok=KJUKSB6x)

View File

@ -0,0 +1,107 @@
What You Need to Know About Cryptocurrency Malware Found on Ubuntu's Snap Store
======
Recently, it was discovered that a couple of apps in the Ubuntu Snap Store contained cryptocurrency mining software. Canonical swiftly removed the offending apps, but several questions are left unanswered.
### Discovery of Crypto Miner on Snap Store
![Crypto Miner Malware on Ubuntu Snap Store][1]
On May 11, a user named [tarwirdur][2] opened a new issue on the [snapcraft.io repository][3]. In the issue, he noted that a snap entitled 2048buntu created by Nicolas Tomb contained a cryptocurrency miner. He asked how he could “complain about the application” for security reasons. tarwirdur later posted to say that all the other snaps created by Nicolas Tomb also contained cryptocurrency miners.
It appears that the snaps used systemd to automatically launch the code at boot and run it in the background with the user none the wiser.
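If you want to check your own machine for something similar, one crude but real way (a general systemd command, not specific to snaps) is to list the running services and look for anything you don't recognize:
```
systemctl list-units --type=service --state=running
```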
{For those unfamiliar with the terminology, a cryptocurrency miner is a piece of software that uses a computer's main processor or graphics processor to “mine” digital currency. “Mining” usually involves solving a mathematical equation. In this case, if you were running the 2048buntu game, the game used additional processing power for cryptocurrency mining.}
The Snapcraft team responded by quickly removing all apps created by the offender. They also started an investigation.
### The Man Behind the Mask Speaks
On May 13, a Disqus user named Nicolas Tomb [posted a comment][4] on OMG Ubuntu's coverage of the news. In this comment, he stated that he added the cryptocurrency miner to monetize the snaps. He apologized for his actions and promised to send any funds that had been mined to the Ubuntu Foundation.
We can't say for sure whether this comment was posted by the same Nicolas Tomb, since the Disqus account was created only recently and has just one comment associated with it. For now, we'll assume that it was.
### Canonical Makes a Statement
On May 15, Canonical issued a statement on the situation. Entitled [“Trust and security in the Snap Store”][5], the post starts out by restating the situation. They add that the snaps have been [reissued with the cryptocurrency mining code removed][6].
Canonical then attempts to examine the motives of Nicolas Tomb. They note that he told them he did it in an attempt to monetize the apps (as stated above) and stopped doing it when confronted. They also note that “mining cryptocurrency is not illegal or unethical by itself”. They are, however, unhappy about the fact that he did not disclose the cryptocurrency miner in the snap description.
From there Canonical moves to the subject of reviewing software. According to the post, the Snap Store uses a quality control system similar to iOS, Android, and Windows: “automated checkpoints that packages must go through before they are accepted, and manual reviews by a human when specific issues are flagged”.
However, Canonical says “it's impossible for a large scale repository to only accept software after every individual file has been reviewed in detail”. Therefore, they need to trust the source, not the content. After all, that is what the current Ubuntu repo system is based on.
Canonical follows this up by talking about the future of snaps. They acknowledge that the current system is not perfect. They are continually working to improve it. They have “very interesting security features in the works that will improve the safety of the system and also the experience of people handling software deployments in servers and desktops”.
One of the features they are working on is the ability to see if a publisher is verified. Other improvements include: “upstreaming of all the AppArmor kernel patches” and other under-the-hood fixes.
### Thoughts on the Snap store malware
Based on all that I've read, I've got a few thoughts and questions of my own.
#### How Long Was This Running?
First of all, how long were these mining snaps available on the Snap Store? Since they have all been removed, we don't have that data. I was able to grab an image of the 2048buntu page from the Google cache, but it doesn't show much of anything. Depending on how long it ran, how many systems it was installed on, and what cryptocurrency was being mined, we could be talking about either a little bit of money or a pile. A further question is: would Canonical be able to catch something like this in the future?
#### Was it Really a Malware?
A lot of news sites are reporting this as a malware infection. I think I might have even seen this incident referred to as Linux's first malware. I'm not sure that term is accurate. Dictionary.com defines [malware][7] as: “software intended to damage a computer, mobile device, computer system, or computer network, or to take partial control over its operation”.
The snaps in question did not damage or take control of the computers involved. They also did not infect other computers. They couldn't have, because all snaps are sandboxed. At most, they leeched processor power, and that's about it. So, I wouldn't call it malware.
#### Nothing Like a Loophole
The one defense that Nicolas Tomb uses is that the Snap Store didn't have any rules against cryptocurrency mining when he uploaded the snaps. {I can bet you that they are rectifying that problem right now.} They didn't have that rule for the simple reason that no one had done it before. If Tomb was trying to do things correctly, he should have asked whether this kind of behavior was allowed. The fact that he didn't suggests that he knew they would probably say no. At the very least, they would have told him to put it in the description.
![][8]
#### Something Looks Hinky
As I said before, I got a screenshot of the 2048buntu page from the Google cache. Just looking at it raises several red flags. First, there is almost no real description. This is all it says: “Game like 2048. This game is clone popular game 2048 with ubuntu colors.” Wow. {That'll bring in the suckers.} When I read something as empty as that, I get nervous.
Another thing to notice is the size of it. Version 1.0 of the 2048buntu snap weighs almost 140 MB. Why would a game this simple need that much space? There are browser versions written in JavaScript that probably use less than a quarter of that. There are other snaps of 2048 games on the Snap Store, and none of them is half the file size.
Then, you have the license. This is a clone of a popular game using Ubuntu colors. How can it be considered proprietary? I'm sure that legit devs in the audience would have uploaded it with a FOSS (Free and Open Source Software) license just because of the content.
These factors alone should have made this snap, in particular, stand out and call for a review.
#### Who is Nicolas Tomb?
After first reading about this, I decided to see what I could find out about the guy who started this mess. When I searched for Nicolas Tomb, I found nothing: zip, nada, zilch. All I found were a bunch of news articles about the cryptocurrency-mining snaps and information about taking a trip to the tomb of St. Nicolas. There is no sign of Nicolas Tomb on Twitter or GitHub, either. This seems like a name created just to upload these snaps.
This also leads to a point in the Canonical blog post about verifying publishers. The last time I looked, quite a few snaps were not published by the maintainers of the applications. This makes me nervous. I would be more willing to trust a snap of, say, Firefox if it were published by Mozilla instead of Leonard Borsch. If it's too much work for the application maintainer to also take care of the snap, there should be a way for the maintainer to put their stamp of approval on the snap for their program. Something like “Firefox snap published by Fredrick Ham, approved by Mozilla Foundation”. Just something to give the user more confidence in what they are downloading.
#### Snap Store Definitely has Room to Improve
It seems to me that one of the first features the Snap Store team should have implemented was a way to report suspicious snaps. tarwirdur had to find the site's GitHub page. The average user would not have thought of that. If the Snap Store can't review every line of code, enabling users to report problems is the next best thing. Even a rating system would not be a bad addition. I'm sure a couple of people would have given 2048buntu a low rating for using too many system resources.
#### Conclusion
From all that I have seen, I think that someone created a number of simple apps, embedded a cryptocurrency miner in each, and uploaded them to the Snap Store with the goal of raking in piles of money. Once they got caught, they claimed it was only to monetize the snaps. If that were true, they would have mentioned it in the snap description. Hidden crypto miners are nothing [new][9]. They are generally a method of computing-power theft.
I wish that Canonical already had features in place to combat this problem, and I hope they appear quickly.
What do you think of the Snap Store malware episode? What would you do to improve it? Let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media.
--------------------------------------------------------------------------------
via: https://itsfoss.com/snapstore-cryptocurrency-saga/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/ubuntu-snap-malware-800x450.jpeg
[2]:https://github.com/tarwirdur
[3]:https://github.com/canonical-websites/snapcraft.io/issues/651
[4]:https://disqus.com/home/discussion/omgubuntu/malware_found_on_the_ubuntu_snap_store/#comment-3899153046
[5]:https://blog.ubuntu.com/2018/05/15/trust-and-security-in-the-snap-store
[6]:https://forum.snapcraft.io/t/action-against-snap-store-malware/5417/8
[7]:http://www.dictionary.com/browse/malware?s=t
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/2048buntu.png
[9]:https://krebsonsecurity.com/2018/03/who-and-what-is-coinhive/

View File

@ -0,0 +1,92 @@
translating---geekpi
How To Install Ncurses Library In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/install-ncurses-720x340.png)
**GNU Ncurses** is a programming library that allows users to write text-based user interfaces (TUIs). Many text-based games are created using this library. One popular example is [**PacVim**][1], a CLI game to learn Vim commands. In this brief guide, I will explain how to install the Ncurses library in Unix-like operating systems.
### Install Ncurses Library In Linux
Ncurses is available in the default repositories of most Linux distributions. For instance, you can install it on Arch-based systems using the following command:
```
$ sudo pacman -S ncurses
```
On RHEL, CentOS:
```
$ sudo yum install ncurses-devel
```
On Fedora 22 and newer versions:
```
$ sudo dnf install ncurses-devel
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install libncurses5-dev libncursesw5-dev
```
The GNU Ncurses in the default repositories might be a bit old. If you want the most recent stable version, you can compile and install it from source as shown below.
Download the latest ncurses version from [**here**][2]. As of writing this guide, the latest version was 6.1.
```
$ wget https://ftp.gnu.org/pub/gnu/ncurses/ncurses-6.1.tar.gz
```
Extract the tar file:
```
$ tar xzf ncurses-6.1.tar.gz
```
This will create a folder named `ncurses-6.1` in the current directory. Change into the directory and configure the build:
```
$ cd ncurses-6.1
$ ./configure --prefix=/opt/ncurses
```
Finally, compile and install using the following commands:
```
$ make
$ sudo make install
```
Verify the installation using command:
```
$ ls -la /opt/ncurses
```
That's it. Ncurses has been installed on your Linux distribution. Go ahead and create your nice-looking TUIs using Ncurses.
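As a quick smoke test, here is a minimal sketch using Python's standard `curses` module (note it links against the system-wide ncurses, not necessarily the copy you installed under /opt/ncurses):
```
import curses

def main(stdscr):
    # Draw a message and wait for a keypress before restoring the terminal.
    stdscr.clear()
    stdscr.addstr(0, 0, "Hello from ncurses! Press any key to exit.")
    stdscr.refresh()
    stdscr.getkey()

# wrapper() handles terminal setup and cleanup, even if an exception occurs.
curses.wrapper(main)
```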
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-install-ncurses-library-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
[2]:https://ftp.gnu.org/pub/gnu/ncurses/

View File

@ -0,0 +1,145 @@
How to Manage Fonts in Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_main.jpg?itok=qcJks7-c)
Not only do I write technical documentation, I write novels. And because I'm comfortable with tools like GIMP, I also create my own book covers (and do graphic design for a few clients). That artistic endeavor depends upon a lot of pieces falling into place, including fonts.
Although font rendering has come a long way over the past few years, it continues to be an issue in Linux. If you compare the look of the same fonts on Linux vs. macOS, the difference is stark. This is especially true when you're staring at a screen all day. But even though the rendering of fonts has yet to find perfection in Linux, one thing that the open source platform does well is allow users to easily manage their fonts. From selecting and adding to scaling and adjusting, you can work with fonts fairly easily in Linux.
Here, I'll share some of the tips I've depended on over the years to help extend my “font-ability” in Linux. These tips will especially help those who undertake artistic endeavors on the open source platform. Because there are so many desktop interfaces available for Linux (each of which deals with fonts in a different way), I'll focus primarily on GNOME and KDE where a desktop environment becomes central to the management of fonts.
With that said, let's get to work.
### Adding new fonts
For the longest time, I have been a collector of fonts. Some might say I have a bit of an obsession. And since my early days of using Linux, I've always used the same process for adding fonts to my desktops. There are two ways to do this:
* Make the fonts available on a per-user basis.
* Make the fonts available system-wide.
Because my desktops never have other users (besides myself), I only ever work with fonts on a per-user basis. However, I will show you how to do both. First, let's see how to add fonts on a per-user basis. The first thing you must do is find fonts. Both TrueType fonts (TTF) and OpenType fonts (OTF) can be added. I add fonts manually. To do this, I create a new hidden directory in ~/ called ~/.fonts. This can be done with the command:
```
mkdir ~/.fonts
```
With that folder created, I then move all of my TTF and OTF files into the directory. That's it. Every font you add into that directory will now be available for use in your installed apps. But remember, those fonts will only be available to that one user.
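If a newly added font does not show up right away, rebuilding the font cache usually helps (this assumes `fontconfig`, which ships with nearly every desktop distribution):
```
fc-cache -fv ~/.fonts
```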
If you want to make that collection of fonts available to all users, here's what you do:
1. Open up a terminal window.
2. Change into the directory housing all of your fonts.
3. Copy all of those fonts with the commands `sudo cp *.ttf *.TTF /usr/share/fonts/truetype/` and `sudo cp *.otf *.OTF /usr/share/fonts/opentype/`
The next time a user logs in, they'll have access to all those glorious fonts.
### GUI Font Managers
There are a few ways to manage your fonts in Linux via a GUI. How it's done will depend on your desktop environment. Let's examine KDE first. With the KDE that ships with Kubuntu 18.04, you'll find a Font Management tool pre-installed. Open that tool, and you can easily add, remove, enable, and disable fonts (as well as get information about all of the installed fonts). This tool also makes it easy for you to add and remove fonts for personal and system-wide use. Let's say you want to add a particular font for personal use. To do this, download your font and then open up the Font Management tool. In this tool (Figure 1), click on Personal Fonts and then click the + Add button.
![adding fonts][2]
Figure 1: Adding personal fonts in KDE.
[Used with permission][3]
Navigate to the location of your fonts, select them, and click Open. Your fonts will then be added to the Personal section and are immediately available for you to use (Figure 2).
![KDE Font Manager][5]
Figure 2: Fonts added with the KDE Font Manager.
[Used with permission][3]
To do the same thing in GNOME requires the installation of an application. Open up either GNOME Software or Ubuntu Software (depending upon the distribution you're using) and search for Font Manager. Select Font Manager and then click the Install button. Once the software is installed, launch it from the desktop menu. With the tool open, let's install fonts on a per-user basis. Here's how:
1. Select User from the left pane (Figure 3).
2. Click the + button at the top of the window.
3. Navigate to and select the downloaded fonts.
4. Click Open.
![Adding fonts ][7]
Figure 3: Adding fonts in GNOME.
[Used with permission][3]
### Tweaking fonts
There are three concepts you must first understand:
* **Font Hinting:** The use of mathematical instructions to adjust the display of a font outline so that it lines up with a rasterized grid.
* **Anti-aliasing:** The technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals.
* **Scaling factor:** A scalable unit that allows you to multiply the point size of a font. So if your font is 12pt and you have a scaling factor of 1, the font size will be 12pt. If your scaling factor is 2, the font size will be 24pt.
Let's say you've installed your fonts, but they don't look quite as good as you'd like. How do you tweak the appearance of fonts? In both the KDE and GNOME desktops, you can make a few adjustments. One thing to consider when tweaking fonts is that taste is very subjective. You might find yourself having to tweak continually until you get the fonts looking exactly how you like (dictated by your needs and particular taste). Let's first look at KDE.
Open up the System Settings tool and click on Fonts. In this section, you can not only change various fonts, you can also enable and configure both anti-aliasing and the font scaling factor (Figure 4).
![Configuring fonts][9]
Figure 4: Configuring fonts in KDE.
[Used with permission][3]
To configure anti-aliasing, select Enabled from the drop-down and then click Configure. In the resulting window (Figure 5), you can configure an exclude range, sub-pixel rendering type, and hinting style.
Once you've made your changes, click Apply. Restart any running applications and the new settings will take effect.
To do this in GNOME, you must have either Font Manager or GNOME Tweaks installed. For this, GNOME Tweaks is the better tool. If you open the GNOME Dash and cannot find Tweaks installed, open GNOME Software (or Ubuntu Software) and install GNOME Tweaks. Once installed, open it and click on the Fonts section. Here you can configure hinting, anti-aliasing, and scaling factor (Figure 6).
![Tweaking fonts][11]
Figure 6: Tweaking fonts in GNOME.
[Used with permission][3]
### Make your fonts beautiful
And that's the gist of making your fonts look as beautiful as possible in Linux. You may not see a macOS-like rendering of fonts, but you can certainly improve the look. Finally, the fonts you choose will have a large impact on how things look. Make sure you're installing clean, well-designed fonts; otherwise, you're fighting a losing battle.
Learn more about Linux through the free ["Introduction to Linux" ][12] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/how-manage-fonts-linux
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[2]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_1.jpg?itok=7yTTe6o3 (adding fonts)
[3]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_2.jpg?itok=_g0dyVYq (KDE Font Manager)
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_3.jpg?itok=8o884QKs (Adding fonts )
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_4.jpg?itok=QJpPzFED (Configuring fonts)
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_6.jpg?itok=4cQeIW9C (Tweaking fonts)
[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,275 @@
What's a hero without a villain? How to add one to your Python game
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game-dogs-chess-play-lead.png?itok=NAuhav4Z)
In the previous articles in this series (see [part 1][1], [part 2][2], [part 3][3], and [part 4][4]), you learned how to use Pygame and Python to spawn a playable character in an as-yet empty video game world. But, what's a hero without a villain?
It would make for a pretty boring game if you had no enemies, so in this article, you'll add an enemy to your game and construct a framework for building levels.
It might seem strange to jump ahead to enemies when there's still more to be done to make the player sprite fully functional, but you've learned a lot already, and creating villains is very similar to creating a player sprite. So relax, use the knowledge you already have, and see what it takes to stir up some trouble.
For this exercise, you can download some pre-built assets from [Open Game Art][5]. Here are some of the assets I use:
+ Inca tileset
+ Some invaders
+ Sprites, characters, objects, and effects
### Creating the enemy sprite
Yes, whether you realize it or not, you basically already know how to implement enemies. The process is very similar to creating a player sprite:
1. Make a class so enemies can spawn.
2. Create an `update` function so enemies can detect collisions.
3. Create a `move` function so your enemy can roam around.
Start with the class. Conceptually, it's mostly the same as your Player class. You set an image or series of images, and you set the sprite's starting position.
Before continuing, make sure you have a graphic for your enemy, even if it's just a temporary one. Place the graphic in your game project's `images` directory (the same directory where you placed your player image).
A game looks a lot better if everything alive is animated. Animating an enemy sprite is done the same way as animating a player sprite. For now, though, keep it simple, and use a non-animated sprite.
At the top of the `objects` section of your code, create a class called Enemy with this code:
```
class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self,x,y,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
```
If you want to animate your enemy, do it the [same way][4] you animated your player.
### Spawning an enemy
You can make the class useful for spawning more than just one enemy by allowing yourself to tell the class which image to use for the sprite and where in the world the sprite should appear. This means you can use this same enemy class to generate any number of enemy sprites anywhere in the game world. All you have to do is make a call to the class, and tell it which image to use and the X and Y coordinates of your desired spawn point.
Again, this is similar in principle to spawning a player sprite. In the `setup` section of your script, add this code:
```
enemy = Enemy(20, 200, 'yeti.png')    # spawn enemy
enemy_list = pygame.sprite.Group()    # create enemy group
enemy_list.add(enemy)                 # add enemy to group
```
In that sample code, `20` is the X position and `200` is the Y position. You might need to adjust these numbers, depending on how big your enemy sprite is, but try to get it to spawn in a place where you can reach it with your player sprite. The file `yeti.png` is the image used for the enemy.
Next, draw all enemies in the enemy group to the screen. Right now, you have only one enemy, but you can add more later if you want. As long as you add an enemy to the enemies group, it will be drawn to the screen during the main loop. The middle line is the new line you need to add:
```
    player_list.draw(world)
    enemy_list.draw(world)  # refresh enemies
    pygame.display.flip()
```
Launch your game. Your enemy appears in the game world at whatever X and Y coordinate you chose.
### Level one
Your game is in its infancy, but you will probably want to add another level. It's important to plan ahead when you program so your game can grow as you learn more about programming. Even though you don't even have one complete level yet, you should code as if you plan on having many levels.
Think about what a "level" is. How do you know you are at a certain level in a game?
You can think of a level as a collection of items. In a platformer, such as the one you are building here, a level consists of a specific arrangement of platforms, placement of enemies and loot, and so on. You can build a class that builds a level around your player. Eventually, when you create more than one level, you can use this class to generate the next level when your player reaches a specific goal.
Move the code you wrote to create an enemy and its group into a new function that will be called along with each new level. It requires some modification so that each time you create a new level, you can create several enemies:
```
class Level():
    def bad(lvl,eloc):
        if lvl == 1:
            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
            enemy_list = pygame.sprite.Group() # create enemy group
            enemy_list.add(enemy)              # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl) )
        return enemy_list
```
The `return` statement ensures that when you use the `Level.bad` function, you're left with an `enemy_list` containing each enemy you defined.
Since you are creating enemies as part of each level now, your `setup` section needs to change, too. Instead of creating an enemy, you must define where the enemy will spawn and what level it belongs to.
```
eloc = [200, 20]
enemy_list = Level.bad( 1, eloc )
```
Run the game again to confirm your level is generating correctly. You should see your player, as usual, and the enemy you added in this chapter.
### Hitting the enemy
An enemy isn't much of an enemy if it has no effect on the player. It's common for enemies to cause damage when a player collides with them.
Since you probably want to track the player's health, the collision check happens in the Player class rather than in the Enemy class. You can track the enemy's health, too, if you want. The logic and code are pretty much the same, but, for now, just track the player's health.
To track player health, you must first establish a variable for the player's health. The first line in this code sample is for context, so add the second line to your Player class:
```
        self.frame  = 0
        self.health = 10
```
In the `update` function of your Player class, add this code block:
```
        hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        for enemy in hit_list:
            self.health -= 1
            print(self.health)
```
This code establishes a collision detector using the Pygame function `sprite.spritecollide` and stores the collisions it finds in `hit_list`. The detector reports a hit any time the hitbox of its parent sprite (the player sprite, where this detector has been created) touches the hitbox of any sprite in `enemy_list`. The `for` loop runs once for each such hit and deducts a point from the player's health.
Since this code appears in the `update` function of your player class and `update` is called in your main loop, Pygame checks for this collision once every clock tick.
### Moving the enemy
An enemy that stands still is useful if you want, for instance, spikes or traps that can harm your player, but the game is more of a challenge if the enemies move around a little.
Unlike a player sprite, the enemy sprite is not controlled by the user. Its movements must be automated.
Eventually, your game world will scroll, so how do you get an enemy to move back and forth within the game world when the game world itself is moving?
You tell your enemy sprite to take, for example, 10 paces to the right, then 10 paces to the left. An enemy sprite can't count, so you have to create a variable to keep track of how many paces your enemy has moved and program your enemy to move either right or left depending on the value of your counting variable.
First, create the counter variable in your Enemy class. Add the last line in this code sample:
```
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0 # counter variable
```
Next, create a `move` function in your Enemy class. Use an if-else statement to create what is effectively an infinite loop:
* Move right if the counter is on any number from 0 to 100.
* Move left if the counter is on any number from 100 to 200.
* Reset the counter back to 0 if the counter is greater than 200.
An infinite loop has no end; it repeats forever because its conditions never stop being met. The counter, in this case, is always either between 0 and 100 or between 100 and 200, so the enemy sprite walks right, then left, then right again, forever.
The actual numbers you use for how far the enemy will move in either direction depend on your screen size and, possibly, eventually, the size of the platform your enemy is walking on. Start small and work your way up as you get used to the results. Try this first:
```
    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8
        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1
```
You can adjust the distance and speed as needed.
Will this code work if you launch your game now?
Of course not, and you probably know why. You must call the `move` function in your main loop. The first line in this sample code is for context, so add the last two lines:
```
    enemy_list.draw(world) #refresh enemy
    for e in enemy_list:
        e.move()
```
Launch your game and see what happens when you hit your enemy. You might have to adjust where the sprites spawn so that your player and your enemy sprite can collide. When they do collide, look in the console of [IDLE][6] or [Ninja-IDE][7] to see the health points being deducted.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/yeti.png?itok=4_GsDGor)
You may notice that health is deducted for every moment your player and enemy are touching. That's a problem, but it's a problem you'll solve later, after you've had more practice with Python.
For now, try adding some more enemies. Remember to add each enemy to the `enemy_list`. As an exercise, see if you can think of how you can change how far different enemy sprites move.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/pygame-enemy
作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[1]:https://opensource.com/article/17/10/python-101
[2]:https://opensource.com/article/17/12/game-framework-python
[3]:https://opensource.com/article/17/12/game-python-add-a-player
[4]:https://opensource.com/article/17/12/game-python-moving-player
[5]:https://opengameart.org
[6]:https://docs.python.org/3/library/idle.html
[7]:http://ninja-ide.org/

View File

@ -0,0 +1,106 @@
An introduction to cryptography and public key infrastructure
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
Secure communication is quickly becoming the norm for today's web. In July 2018, Google Chrome plans to [start showing "not secure" notifications][1] for **all** sites transmitted over HTTP (instead of HTTPS). Mozilla has a [similar plan][2]. While cryptography is becoming more commonplace, it has not become easier to understand. [Let's Encrypt][3] designed and built a wonderful solution to provide and periodically renew free security certificates, but if you don't understand the underlying concepts and pitfalls, you're just another member of a large group of [cargo cult][4] programmers.
### Attributes of secure communication
The intuitively obvious purpose of cryptography is confidentiality: a message can be transmitted without prying eyes learning its contents. For confidentiality, we encrypt a message: given a message, we pair it with a key and produce a meaningless jumble that can only be made useful again by reversing the process using the same key (thereby decrypting it). Suppose we have two friends, [Alice and Bob][5], and their nosy neighbor, Eve. Alice can encrypt a message like "Eve is annoying", send it to Bob, and never have to worry about Eve snooping on her.
For truly secure communication, we need more than confidentiality. Suppose Eve gathered enough of Alice and Bob's messages to figure out that the word "Eve" is encrypted as "Xyzzy". Furthermore, Eve knows Alice and Bob are planning a party and Alice will be sending Bob the guest list. If Eve intercepts the message and adds "Xyzzy" to the end of the list, she's managed to crash the party. Therefore, Alice and Bob need their communication to provide integrity: a message should be immune to tampering.
We have another problem though. Suppose Eve watches Bob open an envelope marked "From Alice" with a message inside from Alice reading "Buy another gallon of ice cream." Eve sees Bob go out and come back with ice cream, so she has a general idea of the message's contents even if the exact wording is unknown to her. Bob throws the message away, Eve recovers it, and then every day for the next week drops an envelope marked "From Alice" with a copy of the message in Bob's mailbox. Now the party has too much ice cream and Eve goes home with free ice cream when Bob gives it away at the end of the night. The extra messages are confidential, and their integrity is intact, but Bob has been misled as to the true identity of the sender. Authentication is the property of knowing that the person you are communicating with is in fact who they claim to be.
Information security has [other attributes][6], but confidentiality, integrity, and authentication are the three traits you must know.
### Encryption and ciphers
What are the components of encryption? We need a message which we'll call the plaintext. We may need to do some initial formatting to the message to make it suitable for the encryption process (padding it to a certain length if we're using a block cipher, for example). Then we take a secret sequence of bits called the key. A cipher then takes the key and transforms the plaintext into ciphertext. The ciphertext should look like random noise and only by using the same cipher and the same key (or as we will see later in the case of asymmetric ciphers, a mathematically related key) can the plaintext be restored.
The cipher transforms the plaintext's bits using the key's bits. Since we want to be able to decrypt the ciphertext, our cipher needs to be reversible too. We can use [XOR][7] as a simple example. It is reversible and is [its own inverse][8] (P ^ K = C; C ^ K = P), so it can both encrypt plaintext and decrypt ciphertext. A trivial XOR can be used for encryption in a one-time pad, but it is generally not [practical][9]. However, it is possible to combine XOR with a function that generates an arbitrary stream of random data from a single key. Modern ciphers like AES and ChaCha20 do exactly that.
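As a toy sketch in Python (illustrative only; a fixed key like this is not a secure cipher), the reversibility looks like this:
```
plaintext = b"attack at dawn"
key = bytes([0x42] * len(plaintext))   # keystream as long as the message

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))   # P ^ K = C
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))   # C ^ K = P

assert recovered == plaintext
```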
We call any cipher that uses the same key to both encrypt and decrypt a symmetric cipher. Symmetric ciphers are divided into stream ciphers and block ciphers. A stream cipher runs through the message one bit or byte at a time. Our XOR cipher is a stream cipher, for example. Stream ciphers are useful if the length of the plaintext is unknown (such as data coming in from a pipe or socket). [RC4][10] is the best-known stream cipher but it is vulnerable to several different attacks, and the newest version (1.3) of the TLS protocol (the "S" in "HTTPS") does not even support it. [Efforts][11] are underway to create new stream ciphers with some candidates like [ChaCha20][12] already supported in TLS.
A block cipher takes a fixed-size block and encrypts it with a fixed-size key. The current king of the hill in the block cipher world is the [Advanced Encryption Standard][13] (AES), and it has a block size of 128 bits. That's not very much data, so block ciphers have a [mode][14] that describes how to apply the cipher's block operation across a message of arbitrary size. The simplest mode is [Electronic Code Book][15] (ECB) which takes the message, splits it into blocks (padding the message's final block if necessary), and then encrypts each block with the key independently.
![](https://opensource.com/sites/default/files/uploads/ecb_encryption.png)
You may spot a problem here: if the same block appears multiple times in the message (a phrase like "GET / HTTP/1.1" in web traffic, for example) and we encrypt it using the same key, we'll get the same result. The appearance of a pattern in our encrypted communication makes it vulnerable to attack.
Thus there are more advanced modes such as [Cipher Block Chaining][16] (CBC) where the result of each block's encryption is XORed with the next block's plaintext. The very first block's plaintext is XORed with an initialization vector of random numbers. There are many other modes each with different advantages and disadvantages in security and speed. There are even modes, such as Counter (CTR), that can turn a block cipher into a stream cipher.
![](https://opensource.com/sites/default/files/uploads/cbc_encryption.png)
In contrast to symmetric ciphers, there are asymmetric ciphers (also called public-key cryptography). These ciphers use two keys: a public key and a private key. The keys are mathematically related but still distinct. Anything encrypted with the public key can only be decrypted with the private key and data encrypted with the private key can be decrypted with the public key. The public key is widely distributed while the private key is kept secret. If you want to communicate with a given person, you use their public key to encrypt your message and only their private key can decrypt it. [RSA][17] is the current heavyweight champion of asymmetric ciphers.
A major downside to asymmetric ciphers is that they are computationally expensive. Can we get authentication with symmetric ciphers to speed things up? If you only share a key with one other person, yes. But that breaks down quickly. Suppose a group of people want to communicate with one another using a symmetric cipher. The group members could establish keys for each unique pairing of members and encrypt messages based on the recipient, but a group of 20 people works out to 190 pairs of members total and 19 keys for each individual to manage and secure. By using an asymmetric cipher, each person only needs to guard their own private key and have access to a listing of public keys.
Asymmetric ciphers are also limited in the [amount of data][18] they can encrypt. Like block ciphers, you have to split a longer message into pieces. In practice then, asymmetric ciphers are often used to establish a confidential, authenticated channel which is then used to exchange a shared key for a symmetric cipher. The symmetric cipher is used for subsequent communications since it is much faster. TLS can operate in exactly this fashion.
### At the foundation
At the heart of secure communication are random numbers. Random numbers are used to generate keys and to provide unpredictability for otherwise deterministic processes. If the keys we use are predictable, then we're susceptible to attack right from the very start. Random numbers are difficult to generate on a computer which is meant to behave in a consistent manner. Computers can gather random data from things like mouse movement or keyboard timings. But gathering that randomness (called entropy) takes significant time and involves additional processing to ensure uniform distributions. It can even involve the use of dedicated hardware (such as [a wall of lava lamps][19]). Generally, once we have a truly random value, we use that as a seed to put into a [cryptographically secure pseudorandom number generator][20]. Beginning with the same seed will always lead to the same stream of numbers, but what's important is that the stream of numbers descended from the seed doesn't exhibit any pattern. In the Linux kernel, [/dev/random and /dev/urandom][21] operate in this fashion: they gather entropy from multiple sources, process it to remove biases, create a seed, and can then provide the random numbers used to generate an RSA key, for example.
### Other cryptographic building blocks
We've covered confidentiality, but I haven't mentioned integrity or authentication yet. For that, we'll need some new tools in our toolbox.
The first is the cryptographic hash function. A cryptographic hash function is meant to take an input of arbitrary size and produce a fixed-size output (often called a digest). If we can find any two messages that create the same digest, that's a collision and makes the hash function unsuitable for cryptography. Note the emphasis on "find"; if we have an infinite world of messages and a fixed-size output, there are bound to be collisions, but if we can find any two messages that collide without a monumental investment of computational resources, that's a deal-breaker. Worse still would be if we could take a specific message and could then find another message that results in a collision.
As well, the hash function should be one-way: given a digest, it should be computationally infeasible to determine what the message is. Respectively, these [requirements][22] are called collision resistance, second preimage resistance, and preimage resistance. If we meet these requirements, our digest acts as a kind of fingerprint for a message. No two people ([in theory][23]) have the same fingerprints, and you can't take a fingerprint and turn it back into a person.
If we send a message and a digest, the recipient can use the same hash function to generate an independent digest. If the two digests match, they know the message hasn't been altered. [SHA-256][24] is currently the most popular cryptographic hash function, since [SHA-1][25] is starting to [show its age][26].
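As a small sketch using Python's standard `hashlib` module, the sender and recipient can compute digests independently:
```
import hashlib

message = b"Buy another gallon of ice cream."
digest = hashlib.sha256(message).hexdigest()

# The recipient hashes the message they received and compares digests.
assert hashlib.sha256(message).hexdigest() == digest
```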
Hashes sound great, but what good is sending a digest with a message if someone can tamper with your message and then tamper with the digest too? We need to mix hashing in with the ciphers we have. For symmetric ciphers, we have message authentication codes (MACs). MACs come in different forms, but an HMAC is based on hashing. An [HMAC][27] takes the key K and the message M and blends them together using a hashing function H with the formula H(K + H(K + M)) where "+" is concatenation. Why this formula specifically? That's beyond this article, but it has to do with protecting the integrity of the HMAC itself. The MAC is sent along with an encrypted message. Eve could blindly manipulate the message, but as soon as Bob independently calculates the MAC and compares it to the MAC he received, he'll realize the message has been tampered with.
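Python's standard `hmac` module implements this construction; a minimal sketch:
```
import hashlib
import hmac

key = b"shared-secret"
message = b"Buy another gallon of ice cream."

mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# Bob recomputes the MAC; compare_digest avoids timing side channels.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(mac, expected)
```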
For asymmetric ciphers, we have digital signatures. In RSA, encryption with a public key makes something only the private key can decrypt, but the inverse is true as well and can create a type of signature. If only I have the private key and encrypt a document, then only my public key will decrypt the document, and others can implicitly trust that I wrote it: authentication. In fact, we don't even need to encrypt the entire document. If we create a digest of the document, we can then encrypt just the fingerprint. Signing the digest instead of the whole document is faster and solves some problems around the size of a message that can be encrypted using asymmetric encryption. Recipients decrypt the digest, independently calculate the digest for the message, and then compare the two to ensure integrity. The method for digital signatures varies for other asymmetric ciphers, but the concept of using the public key to verify a signature remains.
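As a sketch of sign-then-verify (assuming the third-party Python `cryptography` package; the PSS padding and SHA-256 choices here are illustrative, not a recommendation from this article):
```
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Alice wrote this."

# Sign a digest of the message with the private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key can verify; this raises InvalidSignature
# if the message or signature has been tampered with.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```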
### Putting it all together
Now that we have all the major pieces, we can implement a [system][28] that has all three of the attributes we're looking for. Alice picks a secret symmetric key and encrypts it with Bob's public key. Then she hashes the resulting ciphertext and uses her private key to sign the digest. Bob receives the ciphertext and the signature, computes the ciphertext's digest and compares it to the digest in the signature he verified using Alice's public key. If the two digests are identical, he knows the symmetric key has integrity and is authenticated. He decrypts the ciphertext with his private key and uses the symmetric key Alice sent him to communicate with her confidentially using HMACs with each message to ensure integrity. There's no protection here against a message being replayed (as seen in the ice cream disaster Eve caused). To handle that issue, we would need some sort of "handshake" that could be used to establish a random, short-lived session identifier.
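A rough command-line sketch of that hybrid exchange, with hypothetical key files for Alice and Bob:
```
# Alice: create a random symmetric key and encrypt it with Bob's public key
openssl rand -out sym.key 32
openssl pkeyutl -encrypt -pubin -inkey bob_pub.pem -in sym.key -out sym.key.enc
# Alice: sign a digest of the resulting ciphertext with her private key
openssl dgst -sha256 -sign alice_priv.pem -out sym.key.enc.sig sym.key.enc
# Bob: verify the signature with Alice's public key, then recover the key
openssl dgst -sha256 -verify alice_pub.pem -signature sym.key.enc.sig sym.key.enc
openssl pkeyutl -decrypt -inkey bob_priv.pem -in sym.key.enc -out sym.key
```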
The cryptographic world is vast and complex, but I hope this article gives you a basic mental model of the core goals and components it uses. With a solid foundation in the concepts, you'll be able to continue learning more.
Thank you to Hubert Kario, Florian Weimer, and Mike Bursell for their help with this article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/cryptography-pki
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
[2]:https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
[3]:https://letsencrypt.org/
[4]:https://en.wikipedia.org/wiki/Cargo_cult_programming
[5]:https://en.wikipedia.org/wiki/Alice_and_Bob
[6]:https://en.wikipedia.org/wiki/Information_security#Availability
[7]:https://en.wikipedia.org/wiki/XOR_cipher
[8]:https://en.wikipedia.org/wiki/Involution_(mathematics)#Computer_science
[9]:https://en.wikipedia.org/wiki/One-time_pad#Problems
[10]:https://en.wikipedia.org/wiki/RC4
[11]:https://en.wikipedia.org/wiki/ESTREAM
[12]:https://en.wikipedia.org/wiki/Salsa20
[13]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[14]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
[15]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:ECB_encryption.svg
[16]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:CBC_encryption.svg
[17]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
[18]:https://security.stackexchange.com/questions/33434/rsa-maximum-bytes-to-encrypt-comparison-to-aes-in-terms-of-security
[19]:https://www.youtube.com/watch?v=1cUUfMeOijg
[20]:https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
[21]:https://www.2uo.de/myths-about-urandom/
[22]:https://crypto.stackexchange.com/a/1174
[23]:https://www.telegraph.co.uk/science/2016/03/14/why-your-fingerprints-may-not-be-unique/
[24]:https://en.wikipedia.org/wiki/SHA-2
[25]:https://en.wikipedia.org/wiki/SHA-1
[26]:https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
[27]:https://en.wikipedia.org/wiki/HMAC
[28]:https://en.wikipedia.org/wiki/Hybrid_cryptosystem
@ -0,0 +1,53 @@
Audacity quick tip: quickly remove background noise
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/audacity-noise-816x345.png)
When recording sounds on a laptop — say for a simple first screencast — many users typically use the built-in microphone. However, these small microphones also capture a lot of background noise. In this quick tip, learn how to use [Audacity][1] in Fedora to quickly remove the background noise from audio files.
### Installing Audacity
Audacity is an application in Fedora for mixing, cutting, and editing audio files. It supports a wide range of formats out of the box on Fedora — including MP3 and OGG. Install Audacity from the Software application.
![][2]
If the terminal is more your speed, use the command:
```
sudo dnf install audacity
```
### Import your audio and sample the background noise
After installing Audacity, open the application, and import your sound using the **File > Import** menu item. This example uses a [sound bite from freesound.org][3] to which noise was added:
Next, take a sample of the background noise to be filtered out. With the tracks imported, select an area of the track that contains only the background noise. Then choose **Effect > Noise Reduction** from the menu, and press the **Get Noise Profile** button.
![][4]
### Filter the Noise
Next, select the area of the track you want to filter the noise from, either by selecting it with the mouse or by pressing **Ctrl + a** to select the entire track. Finally, open the **Effect > Noise Reduction** dialog again, and click OK to apply the filter.
![][5]
Play around with the settings until your tracks sound better. Here is the original file again, followed by the noise-reduced track (using the default settings) for comparison:
https://ryanlerch.fedorapeople.org/sidebyside.ogg?_=2
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/audacity-quick-tip-quickly-remove-background-noise/
作者:[Ryan Lerch][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[1]:https://www.audacityteam.org/
[2]:https://fedoramagazine.org/wp-content/uploads/2018/03/audacity-software.jpg
[3]:https://freesound.org/people/levinj/sounds/8323/
[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/select-noise-profile.gif
[5]:https://fedoramagazine.org/wp-content/uploads/2018/03/apply-filter.gif
@ -0,0 +1,209 @@
How to Install and Configure KVM on Ubuntu 18.04 LTS Server
======
**KVM** (Kernel-based Virtual Machine) is an open source, full virtualization solution for Linux systems. KVM provides virtualization functionality using virtualization extensions like **Intel VT** or **AMD-V**. Whenever we install KVM on a Linux box, it turns the box into a hypervisor by loading kernel modules like **kvm-intel.ko** (for Intel-based machines) and **kvm-amd.ko** (for AMD-based machines).
KVM allows us to install and run multiple virtual machines (Windows & Linux). We can create and manage KVM-based virtual machines either via the **virt-manager** graphical user interface or the **virt-install** and **virsh** CLI commands.
In this article we will discuss how to install and configure the **KVM hypervisor** on Ubuntu 18.04 LTS server. I am assuming you have already installed Ubuntu 18.04 LTS server on your system. Log in to your server and perform the following steps.
### Step 1: Verify whether your system supports hardware virtualization
Execute the below egrep command to verify whether your system supports hardware virtualization:
```
linuxtechi@kvm-ubuntu18-04:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
1
linuxtechi@kvm-ubuntu18-04:~$
```
If the output is greater than 0, your system supports virtualization. Otherwise, reboot your system, go into the BIOS settings, and enable VT technology.
Now install the **kvm-ok** utility using the below command; it is used to determine whether your server is capable of running hardware-accelerated KVM virtual machines:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo apt install cpu-checker
```
Run kvm-ok command and verify the output,
```
linuxtechi@kvm-ubuntu18-04:~$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
linuxtechi@kvm-ubuntu18-04:~$
```
### Step 2: Install KVM and its required packages
Run the below apt commands to install KVM and its dependencies:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo apt update
linuxtechi@kvm-ubuntu18-04:~$ sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager
```
Once the above packages are installed successfully, your local user (in my case, linuxtechi) will be added to the group libvirtd automatically.
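To confirm the group membership, you can check like this (replace linuxtechi with your own user):
```
linuxtechi@kvm-ubuntu18-04:~$ id linuxtechi | grep libvirt
```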
### Step 3: Start and enable the libvirtd service
Whenever we install the qemu and libvirt packages on Ubuntu 18.04 Server, the libvirtd service is started and enabled automatically. In case it is not, run the commands below:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo systemctl start libvirtd
linuxtechi@kvm-ubuntu18-04:~$ sudo systemctl enable libvirtd
```
Now verify the status of the libvirtd service using the below command:
```
linuxtechi@kvm-ubuntu18-04:~$ service libvirtd status
```
Output would be something like below:
[![libvirtd-command-ubuntu18-04][1]![libvirtd-command-ubuntu18-04][2]][3]
### Step 4: Configure a network bridge for KVM virtual machines
A network bridge is required to access KVM-based virtual machines from outside the KVM hypervisor or host. In Ubuntu 18.04, networking is managed by the netplan utility; whenever we freshly install Ubuntu 18.04 server, a file named **/etc/netplan/50-cloud-init.yaml** is created automatically, and to configure a static IP and a bridge, the netplan utility refers to this file.
As of now, I have already configured the static IP via this file, and its content is below:
```
network:
    ethernets:
        ens33:
            addresses: [192.168.0.51/24]
            gateway4: 192.168.0.1
            nameservers:
              addresses: [192.168.0.1]
            dhcp4: no
            optional: true
    version: 2
```
Let's add the network bridge definition to this file:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo vi /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [ens33]
      dhcp4: no
      addresses: [192.168.0.51/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
```
As you can see, we have removed the IP address from the interface (ens33), assigned the same IP to the bridge **br0**, and added the interface (ens33) to the bridge br0. Apply these changes using the below netplan command:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo netplan apply
linuxtechi@kvm-ubuntu18-04:~$
```
If you want to see the debug logs, use the below command:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo netplan --debug apply
```
Now verify the bridge status using the following methods:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo networkctl status -a
```
[![networkctl-command-output-ubuntu18-04][1]![networkctl-command-output-ubuntu18-04][4]][4]
```
linuxtechi@kvm-ubuntu18-04:~$ ifconfig
```
[![ifconfig-command-output-ubuntu18-04][1]![ifconfig-command-output-ubuntu18-04][5]][5]
### Step 5: Create virtual machines (virt-manager or virt-install command)
There are two ways to create virtual machines:
* virt-manager (GUI utility)
* virt-install command (cli utility)
**Creating a virtual machine using virt-manager:**
Start virt-manager by executing the command below:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo virt-manager
```
[![Start-Virt-Manager-Ubuntu18-04][1]![Start-Virt-Manager-Ubuntu18-04][6]][6]
Create a new virtual machine
[![ISO-file-Virt-Manager][1]![ISO-file-Virt-Manager][7]][7]
Click on Forward and select the ISO file; in my case I am using the RHEL 7.3 ISO file.
[![Select-ISO-file-virt-manager-Ubuntu18-04-Server][1]![Select-ISO-file-virt-manager-Ubuntu18-04-Server][8]][8]
Click on Forward
In the next couple of windows, you will be prompted to specify the RAM, CPU and disk for the VM.
Now specify the name of the virtual machine and the network:
[![VM-Name-Network-Virt-Manager-Ubuntu18-04][1]![VM-Name-Network-Virt-Manager-Ubuntu18-04][9]][9]
Click on Finish
[![RHEL7-3-Installation-Virt-Manager][1]![RHEL7-3-Installation-Virt-Manager][10]][10]
Now follow the on-screen instructions and complete the installation.
**Creating a virtual machine from the CLI using the virt-install command**
Use the below virt-install command to create a VM from the terminal; it will start the installation in the CLI. Replace the name of the VM, the description, the location of the ISO file, and the network bridge as per your setup:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo virt-install -n DB-Server --description "Test VM for Database" --os-type=Linux --os-variant=rhel7 --ram=1096 --vcpus=1 --disk path=/var/lib/libvirt/images/dbserver.img,bus=virtio,size=10 --network bridge:br0 --graphics none --location /home/linuxtechi/rhel-server-7.3-x86_64-dvd.iso --extra-args console=ttyS0
```
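Once a VM exists, you can manage it from the terminal as well; here are a few illustrative virsh commands, using the VM name DB-Server from above:
```
linuxtechi@kvm-ubuntu18-04:~$ sudo virsh list --all          # list all VMs and their state
linuxtechi@kvm-ubuntu18-04:~$ sudo virsh start DB-Server     # start the VM
linuxtechi@kvm-ubuntu18-04:~$ sudo virsh console DB-Server   # attach to its serial console
linuxtechi@kvm-ubuntu18-04:~$ sudo virsh shutdown DB-Server  # gracefully shut it down
```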
That concludes the article. I hope it helps you install KVM on your Ubuntu 18.04 server. As an aside, KVM is the default hypervisor for OpenStack.
Read more on: "[ **How to Create, Revert and Delete KVM Virtual machine (domain) snapshot with virsh command**][11]"
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-kvm-ubuntu-18-04-server/
作者:[Pradeep Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
[2]:https://www.linuxtechi.com/wp-content/uploads/2018/05/libvirtd-command-ubuntu18-04-1024x339.jpg
[3]:https://www.linuxtechi.com/wp-content/uploads/2018/05/libvirtd-command-ubuntu18-04.jpg
[4]:https://www.linuxtechi.com/wp-content/uploads/2018/05/networkctl-command-output-ubuntu18-04.jpg
[5]:https://www.linuxtechi.com/wp-content/uploads/2018/05/ifconfig-command-output-ubuntu18-04.jpg
[6]:https://www.linuxtechi.com/wp-content/uploads/2018/05/Start-Virt-Manager-Ubuntu18-04.jpg
[7]:https://www.linuxtechi.com/wp-content/uploads/2018/05/ISO-file-Virt-Manager.jpg
[8]:https://www.linuxtechi.com/wp-content/uploads/2018/05/Select-ISO-file-virt-manager-Ubuntu18-04-Server.jpg
[9]:https://www.linuxtechi.com/wp-content/uploads/2018/05/VM-Name-Network-Virt-Manager-Ubuntu18-04.jpg
[10]:https://www.linuxtechi.com/wp-content/uploads/2018/05/RHEL7-3-Installation-Virt-Manager.jpg
[11]:https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/
@ -0,0 +1,82 @@
translating---geekpi
Starting user software in X
======
There are currently many ways of starting software when a user session starts.
This is an attempt to collect a list of pointers to piece the big picture together. It's partial and some parts might be imprecise or incorrect, but it's a start, and I'm happy to keep it updated if I receive corrections.
### x11-common
`man xsession`
* Started by the display manager; for example, `/usr/share/lightdm/lightdm.conf.d/01_debian.conf` or `/etc/gdm3/Xsession`
* Debian specific
* Runs scripts in `/etc/X11/Xsession.d/`
* `/etc/X11/Xsession.d/40x11-common_xsessionrc` sources `~/.xsessionrc` which can do little more than set env vars, because it is run at the beginning of X session startup
* At the end, it starts the session manager (`gnome-session`, `xfce4-session`, and so on)
### systemd --user
* <https://wiki.archlinux.org/index.php/Systemd/User>
* Started by `pam_systemd`, so it might not have a DISPLAY variable set in the environment yet
* Manages units in:
* `/usr/lib/systemd/user/` where units provided by installed packages belong.
* `~/.local/share/systemd/user/` where units of packages that have been installed in the home directory belong.
* `/etc/systemd/user/` where system-wide user units are placed by the system administrator.
* `~/.config/systemd/user/` where the users put their own units.
* A trick to start a systemd user unit when the X session has been set up and the DISPLAY variable is available is to call `systemctl start` from a `.desktop` autostart file, as sketched below.
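A minimal sketch of that trick (the unit name `myapp.service` is a hypothetical placeholder for one of your own user units):
```
cat > ~/.config/autostart/start-myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Start myapp user unit
Exec=systemctl --user start myapp.service
EOF
```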
### dbus activation
* <https://dbus.freedesktop.org/doc/system-activation.txt>
* A user process making a dbus request can trigger starting a server program
* For systems debugging, is there a way of monitoring what services are getting dbus activated?
### X session manager
* <https://en.wikipedia.org/wiki/X_session_manager>
* Run by `x11-common`'s `Xsession.d`
* Runs freedesktop autostart .desktop files
* Runs Desktop Environment specific software
### xdg autostart
* <https://specifications.freedesktop.org/autostart-spec/autostart-spec-latest.html>
* Run by the session manager
* If `/etc/xdg/autostart/foo.desktop` and `~/.config/autostart/foo.desktop` exist then only the file `~/.config/autostart/foo.desktop` will be used because `~/.config/autostart/` is more important than `/etc/xdg/autostart/`
* Is there an ordering or is it all in parallel?
### Other startup notes
#### ~/.Xauthority
To connect to an X server, a client needs to send a token from `~/.Xauthority`, which proves that it can read the user's private data.
`~/.Xauthority` contains a token generated by the display manager and communicated to X at startup.
To view its contents, use `xauth -i -f ~/.Xauthority list`.
--------------------------------------------------------------------------------
via: http://www.enricozini.org/blog/2018/debian/starting-user-software/
作者:[Enrico Zini][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.enricozini.org/
@ -0,0 +1,117 @@
Advanced use of the less text file viewer in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals_0.png?itok=XwIRERsn)
I recently read Scott Nesbitt's article "[Using less to view text files at the Linux command line][1]" and was inspired to share additional tips and tricks I use with `less`.
### LESS env var
If you have an environment variable `LESS` defined (e.g., in your `.bashrc`), `less` treats it as a list of options, as if passed on the command line.
I use this:
```
LESS='-C -M -I -j 10 -# 4'
```
These mean:
* `-C` Make full-screen reprints faster by not scrolling from the bottom.
* `-M` Show more information from the last (status) line. You can customize the information shown with `-PM`, but I usually do not bother.
* `-I` Ignore letter case (upper/lower) in searches.
* `-j 10` Show search results in line 10 of the terminal, instead of the first line. This way you have 10 lines of context each time you press `n` (or `N`) to jump to the next (or previous) match.
* `-# 4` Jump four characters to the right or left when pressing the Right or Left arrow key. The default is to jump half of the screen, which I usually find to be too much. Generally speaking, `less` seems to be (at least partially) optimized to the environment it was initially developed in, with slow modems and low-bandwidth internet connections, when it made sense to jump half a screen.
### PAGER env var
Many programs show information using the command set in the `PAGER` environment variable (if it's set). So, you can set `PAGER=less` in your `.bashrc` and have your program run `less`. Check the man page environ(7) (`man 7 environ`) for other such variables.
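For example, a couple of lines you might add to your `.bashrc` (reusing the option string from above):
```
export PAGER=less
export LESS='-C -M -I -j 10 -# 4'
```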
### -S
`-S` tells `less` to chop long lines instead of wrapping them. I rarely find a need for this unless (and until) I've started viewing a file. Fortunately, you can type all command-line options inside `less` as if they were keyboard commands. So, if I want to chop long lines while I'm already in a file, I can simply type `-S`.
Here's an example I use a lot:
```
    su - postgres
    export PAGER=less  # Because I didn't bother editing postgres' .bashrc on all the machines I use it on
    psql
```
Sometimes when I later view the output of a `SELECT` command with a very wide output, I type `-S` so it will be formatted nicely. If it jumps too far when I press the Right arrow to see more (because I didn't set `-#`), I can type `-#8`, then each Right arrow press will move eight characters to the right.
Sometimes after typing `-S` too many times, I exit psql and run it again after entering:
```
export LESS=-S
```
### F
The command `F` makes `less` work like `tail -f`—waiting until more data is added to the file before showing it. One advantage this has over `tail -f` is that highlighting search matches still works. So you can enter `less /var/log/logfile`, search for something—which will highlight all occurrences of it (unless you used `-g`)—and then press `F`. When more data is written to the log, `less` will show it and highlight the new matches.
After you press `F`, you can press `Ctrl+C` to stop it from looking for new data (this will not kill it); go back into the file to see older stuff, search for other things, etc.; and then press `F` again to look at more new data.
### Searching
Searches use the system's regexp library, and this usually means you can use extended regular expressions. In particular, searching for `one|two|three` will find and highlight all occurrences of one, two, or three.
Another pattern I use a lot, especially with wide log lines (e.g., ones that span more than one terminal line), is `.*something.*`, which highlights the entire line. This pattern makes it much easier to see where a line starts and finishes. I also combine these, such as: `.*one thing.*|.*another thing.*`, or `key: .*|.*marker.*` to see the contents of `key` (e.g., in a log file with a dump of some dictionary/hash) and highlight relevant marker lines (so I have a context), or even, if I know the value is surrounded by quotes:
```
key: '[^']*'|.*marker.*
```
`less` maintains a history of your search items and saves them to disk for future invocations. When you press `/` (or `?`), you can go through this history with the Up or Down arrow (as well as do basic line editing).
I stumbled upon what seems to be a very useful feature when skimming through the `less` man page while writing this article: skipping uninteresting lines with `&!pattern`. For example, while looking for something in `/var/log/messages`, I used to iterate through this list of commands:
```
    cat /var/log/messages | egrep -v 'systemd: Started Session' | less
    cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session' | less
    cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice' | less
    cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus' | less
    cat /var/log/messages | egrep -v 'systemd: Started Session|systemd: Starting Session|User Slice|dbus|PackageKit Daemon' | less
```
But now I know how to do the same thing within `less`. For example, I can type `&!systemd: Started Session`, then decide I want to get rid of `systemd: Starting Session`, so I add it by typing `&!` and use the Up arrow to get the previous search from the history. Then I type `|systemd: Starting Session` and press `Enter`, continuing to add more items the same way until I filter out enough to see the more interesting stuff.
### =
The command `=` shows more information about the file and location, even more than `-M`. If the file is very long, and calculating `=` takes too long, you can press `Ctrl+C` and it will stop trying.
If the content you're viewing is from a pipe rather than a file, `=` (and `-M`) will not show what it does not know, including the number of lines and bytes in the file. To see that data, if you know that `command` will finish quickly, you can jump to the end with `G`, and then `less` will start showing that information.
If you press `G` and the command writing to the pipe takes longer than expected, you can press `Ctrl+C`, and the command will be killed. Pressing `Ctrl+C` will kill it even if you didn't press `G`, so be careful not to press `Ctrl+C` accidentally if you don't intend to kill it. For this reason, if the command does something (that is, it's not only showing information), it's usually safer to write its output to a file and view the file in a separate terminal, instead of using a pipe.
### Why you need less
`less` is a very powerful program, and contrary to newer contenders in this space, such as `most` and `moar`, you are likely to find it on almost all the systems you use, just like `vi`. So, even if you use GUI viewers or editors, it's worth investing some time going through the `less` man page, at least to get a feeling of what's available. This way, when you need to do something that might be covered by existing functionality, you'll know to search the manual page or the internet to find what you need.
For more information, visit the [less home page][2]. The site has a nice FAQ with more tips and tricks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/advanced-use-less-text-file-viewer
作者:[Yedidyah Bar David][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/didib
[1]:http://opensource.com/article/18/4/using-less-view-text-files-command-line
[2]:http://www.greenwoodsoftware.com/less/
@ -0,0 +1,83 @@
Free Resources for Securing Your Open Source Code
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-security.jpg?itok=R3M5LDrb)
While the widespread adoption of open source continues at a healthy rate, the recent [2018 Open Source Security and Risk Analysis Report][1] from Black Duck and Synopsys reveals some common concerns and highlights the need for sound security practices. The report examines findings from the anonymized data of over 1,100 commercial codebases, with industries represented including automotive, Big Data, enterprise software, financial services, healthcare, IoT, manufacturing, and more.
The report highlights a massive uptick in open source adoption, with 96 percent of the applications scanned containing open source components. However, the report also includes warnings about existing vulnerabilities. Among the [findings][2]:
* “What is worrisome is that 78 percent of the codebases examined contained at least one open source vulnerability, with an average 64 vulnerabilities per codebase.”
* “Over 54 percent of the vulnerabilities found in audited codebases are considered high-risk vulnerabilities.”
* Seventeen percent of the codebases contained a highly publicized vulnerability such as Heartbleed, Logjam, Freak, Drown, or Poodle.
"The report clearly demonstrates that with the growth in open source use, organizations need to ensure they have the tools to detect vulnerabilities in open source components and manage whatever license compliance their use of open source may require," said Tim Mackey, technical evangelist at Black Duck by Synopsys.
Indeed, with ever more impactful security threats emerging, the need for fluency with security tools and practices has never been more pronounced. Most organizations are aware that network administrators and sysadmins need to have strong security skills and, in many cases, security certifications. [In this article,][3] we explored some of the tools, certifications, and practices that many of them wisely embrace.
The Linux Foundation has also made available many informational and educational resources on security. Likewise, the Linux community offers many free resources for specific platforms and tools. For example, The Linux Foundation has published a [Linux workstation security checklist][4] that covers a lot of good ground. Online publications ranging from the [Fedora security guide][5] to the[Securing Debian Manual][6] can also help users protect against vulnerabilities within specific platforms.
The widespread use of cloud platforms such as OpenStack is also stepping up the need for cloud-centric security smarts. According to The Linux Foundation's [Guide to the Open Cloud][7]: "Security is still a top concern among companies considering moving workloads to the public cloud, according to Gartner, despite a strong track record of security and increased transparency from cloud providers. Rather, security is still an issue largely due to companies' inexperience and improper use of cloud services."
For both organizations and individuals, the smallest holes in implementation of routers, firewalls, VPNs, and virtual machines can leave room for big security problems. Here is a collection of free tools that can plug these kinds of holes:
* [Wireshark][8], a packet analyzer
* [KeePass Password Safe][9], a free open source password manager
* [Malwarebytes][10], a free anti-malware and antivirus tool
* [NMAP][11], a powerful security scanner
* [NIKTO][12], an open source web server scanner
* [Ansible][13], a tool for automating secure IT provisioning
* [Metasploit][14], a tool for understanding attack vectors and doing penetration testing
Instructional videos abound for these tools. You'll find a whole [tutorial series][15] for Metasploit, and [video tutorials][16] for Wireshark. Quite a few free ebooks provide good guidance on security as well. For example, one of the common ways for security threats to invade open source platforms occurs in M&A scenarios, where technology platforms are merged, often without proper open source audits. In an ebook titled [Open Source Audits in Merger and Acquisition Transactions][17], from Ibrahim Haddad and The Linux Foundation, you'll find an overview of the open source audit process and important considerations for code compliance, preparation, and documentation.
Meanwhile, we've [previously covered][18] a free ebook from the editors at [The New Stack][19] called Networking, Security & Storage with Docker & Containers. It covers the latest approaches to secure container networking, as well as native efforts by Docker to create efficient and secure networking practices. The ebook is loaded with best practices for locking down security at scale.
All of these tools and resources, and many more, can go a long way toward preventing security problems, and an ounce of prevention is, as they say, worth a pound of cure. With security breaches continuing, now is an excellent time to look into the many security and compliance resources for open source tools and platforms available. Learn more about security, compliance, and open source project health [here][20].
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source-code
作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.blackducksoftware.com/open-source-security-risk-analysis-2018
[2]:https://www.prnewswire.com/news-releases/synopsys-report-finds-majority-of-software-plagued-by-known-vulnerabilities-and-license-conflicts-as-open-source-adoption-soars-300648367.html
[3]:https://www.linux.com/blog/sysadmin-ebook/2017/8/future-proof-your-sysadmin-career-locking-down-security
[4]:http://go.linuxfoundation.org/ebook_workstation_security
[5]:https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html
[6]:https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html
[7]:https://www.linux.com/publications/2016-guide-open-cloud
[8]:https://www.wireshark.org/
[9]:http://keepass.info/
[10]:https://www.malwarebytes.com/
[11]:http://searchsecurity.techtarget.co.uk/tip/Nmap-tutorial-Nmap-scan-examples-for-vulnerability-discovery
[12]:https://cirt.net/Nikto2
[13]:https://www.ansible.com/
[14]:https://www.metasploit.com/
[15]:http://www.computerweekly.com/tutorial/The-Metasploit-Framework-Tutorial-PDF-compendium-Your-ready-reckoner
[16]:https://www.youtube.com/watch?v=TkCSr30UojM
[17]:https://www.linuxfoundation.org/resources/open-source-audits-merger-acquisition-transactions/
[18]:https://www.linux.com/news/networking-security-storage-docker-containers-free-ebook-covers-essentials
[19]:http://thenewstack.io/ebookseries/
[20]:https://www.linuxfoundation.org/projects/security-compliance/
@ -0,0 +1,232 @@
How to Run Your Own Git Server
======
**Learn how to set up your own Git server in this tutorial from our archives.**
[Git][1] is a versioning system [developed by Linus Torvalds][2] that is used by millions of users around the globe. Companies like GitHub offer code hosting services based on Git. [According to reports, GitHub, a code hosting site, is the world's largest code hosting service.][3] The company claims that there are 9.2M people collaborating right now across 21.8M repositories on GitHub. Big companies are now moving to GitHub. [Even Google, the search engine giant, is shutting its own Google Code and moving to GitHub.][4]
### Run your own Git server
GitHub is a great service; however, there are some limitations and restrictions, especially if you are an individual or a small player. One of the limitations of GitHub is that the free service doesn't allow private hosting of code. [You have to pay a monthly fee of $7 to host 5 private repositories][5], and the expenses go up with more repos.
In cases like these, or when you want more control, the best path is to run Git on your own server. Not only do you save costs, you also have more control over your server. In most cases a majority of advanced Linux users already have their own servers, and pushing Git onto those servers is essentially free, as in beer.
In this tutorial we are going to talk about two methods of managing your code on your own server. One is running a bare, basic Git server, and the second is via a GUI tool called [GitLab][6]. For this tutorial I used a fully patched Ubuntu 14.04 LTS server running on a VPS.
### Install Git on your server
In this tutorial we are considering a use-case where we have a remote server and a local server and we will work between these machines. For the sake of simplicity we will call them remote-server and local-server.
First, install Git on both machines. You can install Git from the packages already available via your distro's repos, or you can do it manually. In this article we will use the simpler method:
```
sudo apt-get install git-core
```
Then add a user for Git.
```
sudo useradd git
sudo passwd git
```
In order to ease access to the server, let's set up a passwordless SSH login. First create SSH keys on your local machine:
```
ssh-keygen -t rsa
```
It will ask you to provide the location for storing the key; just hit Enter to use the default location. The second question will be to provide a passphrase, which will be needed to access the remote server. It generates two keys: a public key and a private key. Note down the location of the public key, which you will need in the next step.
Now you have to copy these keys to the server so that the two machines can talk to each other. Run the following command on your local machine:
```
cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```
Now SSH into the server and create a project directory for Git. You can use whatever path you want for the repo.
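For example, to match the path used in the rest of this tutorial:
```
mkdir /home/swapnil/project-1.git
```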
Then change to this directory:
```
cd /home/swapnil/project-1.git
```
Then create an empty repo:
```
git init --bare
Initialized empty Git repository in /home/swapnil/project-1.git
```
We now need to create a Git repo on the local machine.
```
mkdir -p /home/swapnil/git/project
```
And change to this directory:
```
cd /home/swapnil/git/project
```
Now create the files that you need for the project in this directory. Stay in this directory and initiate git:
```
git init
Initialized empty Git repository in /home/swapnil/git/project
```
Now add files to the repo:
```
git add .
```
Now every time you add a file or make changes you have to run the add command above. You also need to write a commit message with every change in a file. The commit message basically tells what changes were made.
```
git commit -m "message" -a
[master (root-commit) 57331ee] message
2 files changed, 2 insertions(+)
create mode 100644 GoT.txt
create mode 100644 writing.txt
```
In this case I had a file called GoT (a Game of Thrones review) and I made some changes, so when I ran the command it specified that changes were made to the file. In the above command, the '-a' option means to commit all files in the repo. If you made changes to only one file, you can specify the name of that file instead of using '-a'.
An example:
```
git commit -m "message" GoT.txt
[master e517b10] message
1 file changed, 1 insertion(+)
```
Until now we have been working on the local server. Now we have to push these changes to the server so the work is accessible over the Internet and you can collaborate with other team members.
```
git remote add origin ssh://git@remote-server/repo-path-on-server.git
```
Now you can push or pull changes between the server and local machine using the 'push' or 'pull' option:
```
git push origin master
```
If there are other team members who want to work on the project, they need to clone the repo on the server to their local machine:
```
git clone git@remote-server:/home/swapnil/project.git
```
Here /home/swapnil/project.git is the project path on the remote server, exchange the values for your own server.
Then change directory on the local machine (exchange project with the name of project on your server):
```
cd /project
```
Now they can edit files, write commit change messages and then push them to the server:
```
git commit -m 'corrections in GoT.txt story' -a
# and then push the changes:
git push origin master
```
I assume this is enough for a new user to get started with Git on their own servers. If you are looking for GUI tools to manage changes on local machines, you can use tools such as QGit or GitK for Linux.
### Using GitLab
This was a pure command-line solution for the project owner and collaborators. It's certainly not as easy as using GitHub. Unfortunately, while GitHub is the world's largest code hosting service, its own software is not available for others to use. It's not open source, so you can't grab the source code and compile your own GitHub. Unlike WordPress or Drupal, you can't download GitHub and run it on your own servers.
As usual in the open source world there is no end to the options. GitLab is a nifty project which does exactly that. It's an open source project which allows users to run a project management system similar to GitHub on their own servers.
You can use GitLab to run a service similar to GitHub for your team members or your company. You can use GitLab to work on private projects before releasing them for public contributions.
GitLab employs the traditional Open Source business model. They have two products: free of cost open source software, which users can install on their own servers, and a hosted service similar to GitHub.
The downloadable version has two editions: the free-of-cost community edition and the paid enterprise edition. The enterprise edition is based on the community edition but comes with additional features targeted at enterprise customers. It's more or less similar to what WordPress.org and WordPress.com offer.
The community edition is highly scalable and can support 25,000 users on a single server or cluster. Some of the features of GitLab include: Git repository management, code reviews, issue tracking, activity feeds, and wikis. It comes with GitLab CI for continuous integration and delivery.
Many VPS providers such as Digital Ocean offer GitLab droplets for users. If you want to run it on your own server, you can install it manually. GitLab offers an Omnibus package for different operating systems. Before we install GitLab, you may want to configure an SMTP email server so that GitLab can push emails as and when needed. They recommend Postfix. So, install Postfix on your server:
```
sudo apt-get install postfix
```
During installation of Postfix it will ask you some questions; don't skip them. If you did miss it you can always re-configure it using this command:
```
sudo dpkg-reconfigure postfix
```
When you run this command, choose "Internet Site" and provide the email ID for the domain which will be used by GitLab.
In my case, I provided it with an email address for the domain.
Use Tab to create a username for Postfix. The next page will ask you to provide a destination for mail.
In the rest of the steps, use the default options. Once Postfix is installed and configured, let's move on to install GitLab.
Download the packages using wget (replace the download link with the [latest packages from here][7]):
```
wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.9.4-omnibus.1-1_amd64.deb
```
Then install the package:
```
sudo dpkg -i gitlab_7.9.4-omnibus.1-1_amd64.deb
```
Now it's time to configure and start GitLab.
```
sudo gitlab-ctl reconfigure
```
You now need to configure the domain name in the configuration file so you can access GitLab. Open the file:
```
nano /etc/gitlab/gitlab.rb
```
In this file edit the 'external_url' and give the server domain. Save the file and then open the newly created GitLab site from a web browser.
By default it creates 'root' as the system admin and uses '5iveL!fe' as the password. Log into the GitLab site and then change the password.
Once the password is changed, log into the site and start managing your project.
GitLab is overflowing with features and options. I will borrow a popular line from the movie The Matrix: "Unfortunately, no one can be told what all GitLab can do. You have to try it for yourself."
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/how-run-your-own-git-server
作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://github.com/git/git
[2]:https://www.linuxfoundation.org/blog/10-years-of-git-an-interview-with-git-creator-linus-torvalds/
[3]:https://github.com/about/press
[4]:http://google-opensource.blogspot.com/2015/03/farewell-to-google-code.html
[5]:https://github.com/pricing
[6]:https://about.gitlab.com/
[7]:https://about.gitlab.com/downloads/
@ -0,0 +1,134 @@
Using Stratis to manage Linux storage from the command line
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
As discussed in [Part 1][1] and [Part 2][2] of this series, Stratis is a volume-managing filesystem with functionality similar to that of [ZFS][3] and [Btrfs][4]. In this article, we'll walk through how to use Stratis on the command line.
### Getting Stratis
For non-developers, the easiest way to try Stratis now is in [Fedora 28][5].
Once you're running this, you can install the Stratis daemon and the Stratis command-line tool with:
```
# dnf install stratis-cli stratisd
```
### Creating a pool
Stratis has three concepts: blockdevs, pools, and filesystems. Blockdevs are the block devices, such as a disk or a disk partition, that make up a pool. Once a pool is created, filesystems can be created from it.
Assuming you have a block device called `vdg` on your system that is not currently in use or mounted, you can create a Stratis pool on it with:
```
# stratis pool create mypool /dev/vdg
```
This assumes `vdg` is completely zeroed and empty. If it is not in use but has old data on it, it may be necessary to use `pool create`'s `--force` option. If it is in use, don't use it for Stratis.
If you want to create a pool from more than one block device, just list them all on the `pool create` command line. You can also add more blockdevs later using the `blockdev add-data` command. Note that Stratis requires blockdevs to be at least 1 GiB in size.
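For example, to grow the pool later (a sketch; `/dev/vdh` stands in for a spare, empty device of at least 1 GiB, and the subcommand name follows the one given above):
```
# stratis blockdev add-data mypool /dev/vdh
```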
### Creating filesystems
Once you've created a pool called `mypool`, you can create filesystems from it:
```
# stratis fs create mypool myfs1
```
After creating a filesystem called `myfs1` from pool `mypool`, you can mount and use it, using the entries Stratis has created within /dev/stratis:
```
# mkdir myfs1
# mount /dev/stratis/mypool/myfs1 myfs1
```
The filesystem is now mounted on `myfs1` and ready to use.
### Snapshots
In addition to creating empty filesystems, you can also create a filesystem as a snapshot of an existing filesystem:
```
# stratis fs snapshot mypool myfs1 myfs1-experiment
```
After doing so, you could mount the new `myfs1-experiment`, which will initially contain the same file contents as `myfs1`, but could change as the filesystem is modified. Whatever changes you made to `myfs1-experiment` would not be reflected in `myfs1` unless you unmounted `myfs1` and destroyed it with:
```
# umount myfs1
# stratis fs destroy mypool myfs1
```
and then snapshotted the snapshot to recreate it and remounted it:
```
# stratis fs snapshot mypool myfs1-experiment myfs1
# mount /dev/stratis/mypool/myfs1 myfs1
```
### Getting information
Stratis can list pools on the system:
```
# stratis pool list
```
As filesystems have more data written to them, you will see the "Total Physical Used" value increase. Be careful when this approaches "Total Physical Size"; we're still working on handling this correctly.
To list filesystems within a pool:
```
# stratis fs list mypool
```
To list the blockdevs that make up a pool:
```
# stratis blockdev list mypool
```
These give only minimal information currently, but they will provide more in the future.
### Destroying a pool
Once you have an idea of what Stratis can do, to destroy the pool, first make sure all filesystems created from it are unmounted and destroyed, then use the `pool destroy` command:
```
# umount myfs1
# umount myfs1-experiment (if you created it)
# stratis fs destroy mypool myfs1
# stratis fs destroy mypool myfs1-experiment
# stratis pool destroy mypool
```
`stratis pool list` should now show no pools.
That's it! For more information, please see the manpage: `man stratis`.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/stratis-storage-linux-command-line
作者:[Andy Grover][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/agrover
[1]:https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux
[2]:https://opensource.com/article/18/4/stratis-lessons-learned
[3]:https://en.wikipedia.org/wiki/ZFS
[4]:https://en.wikipedia.org/wiki/Btrfs
[5]:https://fedoraproject.org/wiki/Releases/28/Schedule
@ -0,0 +1,95 @@
Being open about data privacy
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)
Image by : opensource.com
Today is [Data Privacy Day][1] (called "Data Protection Day" in Europe), and you might think that, as we now live in an open source world, all data should be free, [as information supposedly wants to be][2], but life is not that simple. (1) That's for two main reasons:
1. Most of us (and not just those of us in open source) believe that at least some data about ourselves should not be shared (I gave some examples in [a previous article][3]).
2. Many of us, although we work in open source, in fact work for commercial companies or other organizations that have legitimate requirements around how data is shared.
So, in fact, data privacy is important for everyone.
It turns out that the positions people and governments take on what data organizations should be allowed to do with it are somewhat different in the U.S. and Europe. The former typically gives entities (and, more cynically, the large commercial entities that exploit the data they collect about us) more freedom. In Europe, the prevailing view has always been a more constrained one, and on May 25 the European view can be said to have won out.
### The impact of the GDPR
That is a fairly sweeping statement, but what is actually happening is that the General Data Protection Regulation, a piece of legislation passed by the EU in 2016, becomes enforceable. The GDPR sets strict rules about how personal data can be stored, how it can be used, who can use it, and how long it can be kept. It describes what counts as personal data, and the scope is very broad, from your name and home address to your medical records and the IP address your computer connects from.
What is important about the GDPR is that it doesn't apply only to European companies. If you are an Argentinian, Japanese, U.S., or Russian company and you are collecting data about EU residents, you fall under its jurisdiction.
"So what!" you may say. "My business isn't in Europe: what can they do to me?" The answer is simple: if you ever want to do any business in the EU, you had better comply, because if you breach the GDPR's rules, you can be fined up to 4 percent of your global turnover. Yes, you read that right: global turnover. Not just your revenue in one EU country, and not just your profits, but your global turnover. That alone will make you have a word with your legal team, who will tell the rest of your organization, and it will also send you straight to your IT team to make sure your practices are compliant within fairly short order.
This may look as if it has no relevance to organizations outside the EU, but it does: for most companies, it will be simpler and more effective to apply the same data protection measures to all of their customers, partners, and employees, rather than only to those in the EU, and that is likely to be a good thing. (2)
However, the GDPR soon becoming enforceable around the world does not mean that everything is rosy: far from it. We are constantly giving away information about ourselves, and allowing companies to use it.
There is a saying (although a contested one): "If you're not paying, you're the product." What it means is that if you are not paying for a service, then somebody else is paying to use your data.
Do you pay to use Facebook, Twitter, or Gmail? How do you think they make money? Mostly through advertising. Some will argue that is a service they provide to you, but in fact they are taking your data and extracting revenue from advertisers with it. You are not really the customer of the advertising: only once you have seen an ad and bought the product do you become the advertiser's customer, but until that happens, the relationship is between the advertising platform and the advertiser.
Some services let you pay to remove ads (the streaming music service Spotify is a good example), but on the other hand, even services you would think of as paid-for can introduce advertising: Amazon, for example, is apparently set to allow advertising via Alexa. Unless we want to start paying for all of these "free" services, we need to be clear about what we are giving away, and make some choices about what we expose and what we don't.
### Who is the customer?
There is another issue around data that keeps bugging us, and it is a direct result of the sheer amount of data being generated. Many organizations, including public bodies such as universities, hospitals, or government departments, (4) are generating enormous amounts of data, and they have no capacity to store it all. This would not matter if the data had no long-term value, but the opposite is true: as tools for processing Big Data are being developed, these organizations are realizing that they can, now and in the near future, mine that data.
The problem they face, however, is that as the data grows and their storage does not, what do they do with it? Luckily (and I use that word with heavy irony), (5) big corporations (6) are stepping in to help them. "Give us your data," they say, "and we'll store it for free. We'll even let you access the data you've collected whenever you want!" This sounds great, doesn't it? A classic example of big corporations standing on the side of charity, helping public organizations manage the data they have collected about us.
Unfortunately, charity is not the only motive. These offers come with strings attached: in exchange for agreeing to store the data, the companies get the right to sell access to it to third parties. Do you think the public organizations, or the people the data was collected about, will have any say in how those third parties use it? I will leave that question as an exercise for the reader. (7)
### Open and positive
It is not all bad news, however. There is a growing "open data" movement within governments that encourages departments to make their data freely available to the public and to other bodies, and this is now being enshrined in legislation. Many voluntary organizations, particularly those in receipt of public funds, are starting to push the same way. Even commercial organizations are showing glimmers of interest. What's more, it is becoming technically feasible, with techniques such as those being developed around differential privacy and multi-party computation, to mine data sets without revealing too much about the individuals within them, a computational problem that is historically more tractable than you might think.
What does all this mean for us? Well, I have written before on Opensource.com about [the commonwealth of open source][4], and I am increasingly convinced that we need to extend our view beyond software to other areas: hardware, organizations, and, relevant to this discussion, data. Imagine that you are company A, providing a service to another company, customer B. (8) There are four different types of data in play:
1. Data that is fully open: visible to A and B, and to anyone else in the world
2. Data that is known, shared, and confidential: visible to A and B, but to no one else
3. Data that is company-confidential: visible to A, but not to customer B
4. Data that is customer-confidential: visible to B, but not to company A
First of all, maybe we should be more open about data, and default to putting it in bucket 1. If that data were open to everyone, think how useful it could be for things like driverless cars, speech recognition, mineral prospecting, and demographic statistics. (9)
And wouldn't it be nice if we could find ways to make the data in buckets 2, 3, and 4, or at least some of it, available in bucket 1 while still keeping the details confidential? That is the promise of the new techniques mentioned above.
There is a long way to go, though, so don't get too excited. In the meantime, start considering making some of your data open by default.
### Some concrete steps
So what can we do about data privacy and openness? Here are a few concrete steps that occurred to me: further suggestions are welcome in the comments.
* Check whether your organization is taking the GDPR seriously. If it isn't, push for it.
* Make encrypting sensitive data (or hashing it, where appropriate) the default, and delete it promptly when it is no longer needed. There is no excuse for data to be in the clear unless it is actually being processed.
* Consider what you expose about yourself when you sign up for a service, particularly social media.
* Talk to your less technical friends about this topic.
* Educate your children, your friends' children, and their friends. Better yet, go to their schools, talk to their teachers, and give a presentation there.
* Encourage the organizations you work or volunteer for, or interact with, to make openness the default for their data. Rather than asking "why should I make this open?", start with "why shouldn't I make this open?"
* Try accessing some of the open data that is out there. Mine it, write apps that use it, perform data analysis, draw pretty graphs, (10) make interesting music, and consider what else you can do with it. Tell the organizations that published it, thank them, and encourage them to do more.
1. I accept that you may not, though.
2. Assuming that you believe your personal data should be protected.
3. If you are wondering about the irony of the word "[great][5]" there, you are not alone.
4. How open these bodies can actually be often depends on where you live.
5. Given that I am British, that is a very, very large dose of irony.
6. They are likely to be huge corporations: no one else can afford the storage and infrastructure to keep the data available.
7. No, the answer is "no."
8. Although the example works for individuals, too. Consider that A could be Alice and B could be Bob...
9. This is not to say that we should expose personal data, or data that should be confidential, of course: not that kind of data.
10. A friend of mine was convinced that it always rained when she picked her kids up from school, so to avoid confirmation bias, she collected weather information over the entire school year, made graphs, and shared them on social media.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/being-open-about-data-privacy
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者FelixYFZ](https://github.com/FelixYFZ)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
@ -1,72 +1,71 @@
用 Ansible 实现网络自动化 用 Ansible 实现网络自动化
================ ================
> 了解 Ansible 的功能,这是一个无代理的、可扩展的配置管理系统。
### 网络自动化 ### 网络自动化
随着 IT 行业的技术变化从服务器虚拟化到公有和私有云以及自服务能力、容器化应用、平台即服务Paas交付一直以来落后的一个领域是网络。 随着 IT 行业的技术变化,从服务器虚拟化到公有和私有云以及自服务能力、容器化应用、平台即服务PaaS交付有一直以来落后的一个领域就是网络。
在过去的五年多网络行业似乎有很多新的趋势出现它们中的很多被归入到软件定义网络SDN 在过去的五年多网络行业似乎有很多新的趋势出现它们中的很多被归入到软件定义网络SDN
>注意 >注意
> SDN 是新出现的一种构建、管理、操作和部署网络的方法。SDN 最初的定义是需要将控制层和数据层(包转发)物理分离,并且,解耦合的控制层必须管理好各自的设备。 > SDN 是新出现的一种构建、管理、操作和部署网络的方法。SDN 最初的定义是出于将控制层和数据层(包转发)物理分离的需要,并且,解耦合的控制层必须管理好各自的设备。
> 如今,_在 SDN_ 旗下已经有许多技术,包括基于<ruby>控制器的网络<rt>controller-based networks</rt></ruby>、网络设备上的 API、网络自动化、白盒交换机、策略网络化、网络功能虚拟化NFV等等。 > 如今,在 SDN 旗下已经有许多技术,包括<ruby>基于控制器的网络<rt>controller-based networks</rt></ruby>、网络设备上的 API、网络自动化、白盒交换机、策略网络化、网络功能虚拟化NFV等等。
> 由于这篇报告的目的,我们参考 SDN 的解决方案作为我们的解决方案,其中包括一个网络控制器作为解决方案的一部分,并且提升了该网络的可管理性,但并不需要从数据层解耦控制层。 > 由于这篇报告的目的,我们参考 SDN 的解决方案作为我们的解决方案,其中包括一个网络控制器作为解决方案的一部分,并且提升了该网络的可管理性,但并不需要从数据层解耦控制层。
这些趋势的之一是,网络设备的 API 作为管理和操作这些设备的一种方法而出现,真正地提供了机器对机器的通讯。当需要自动化和构建网络应用时 API 简化了开发过程,在数据如何建模时提供了更多结构。例如,当启用 API 的设备在 JSON/XML 中返回数据时,它是结构化的,并且比返回原生文本信息、需要手工去解析的仅支持命令行的设备更易于使用。 这些趋势的之一是,网络设备的 API 作为管理和操作这些设备的一种方法而出现,真正地提供了机器对机器的通讯。当需要自动化和构建网络应用时 API 简化了开发过程,在数据如何建模时提供了更多结构。例如,当启用 API 的设备以 JSON/XML 返回数据时,它是结构化的,并且比返回原生文本信息、需要手工去解析的仅支持命令行的设备更易于使用。
在 API 之前用于配置和管理网络设备的两个主要机制是命令行接口CLI和简单网络管理协议SNMP。让我们来了解一下它们CLI 是一个设备的人机界面,而 SNMP 并不是为设备提供的实时编程接口。 在 API 之前用于配置和管理网络设备的两个主要机制是命令行接口CLI和简单网络管理协议SNMP。让我们来了解一下它们CLI 是一个设备的人机界面,而 SNMP 并不是为设备提供的实时编程接口。
幸运的是,因为很多供应商争相为设备增加 API有时候 _是因为_ 它被放到需求建议书RFP就带来了一个非常好的副作用 —— 支持网络自动化。当真正的 API 发布时访问设备内数据的过程以及管理配置就会被极大简化因此我们将在本报告中对此进行评估。虽然使用许多传统方法也可以实现自动化比如CLI/SNMP。 幸运的是,因为很多供应商争相为设备增加 API有时候 _是因为_ 它被放到需求建议书RFP就带来了一个非常好的副作用 —— 支持网络自动化。当真正的 API 发布时访问设备内数据的过程以及管理配置就会被极大简化因此我们将在本报告中对此进行评估。虽然使用许多传统方法也可以实现自动化比如CLI/SNMP。
> 注意 > 注意
> 随着未来几个月或几年的网络设备更新,供应商的 API 无疑应该被测试,并且要做为采购网络设备(虚拟和物理)的关键决策标准。如果供应商提供一些库或集成到自动化工具中,或者如果被用于一个开放的标准/协议用户应该知道数据是如何通过设备建模的API 使用的传输类型是什么。 > 随着未来几个月或几年的网络设备更新,供应商的 API 无疑应该被做为采购网络设备(虚拟和物理)的关键决策标准而测试和使用。如果供应商提供一些库或集成到自动化工具中,或者如果被用于一个开放的标准协议用户应该知道数据是如何通过设备建模的API 使用的传输类型是什么。
总而言之,网络自动化,像大多数的自动化类型一样,是为了更快地工作。工作的更快是好事,降低部署和配置改变的时间并不总是许多 IT 组织需要去解决的问题。 总而言之,网络自动化,像大多数类型的自动化一样,是为了更快地工作。工作的更快是好事,降低部署和配置改变的时间并不总是许多 IT 组织需要去解决的问题。
包括速度,我们现在看看这些各种类型的 IT 组织逐渐采用网络自动化的几种原因。你应该注意到,同样的原则也适用于其它类型的自动化。 包括速度在内,我们现在看看这些各种类型的 IT 组织逐渐采用网络自动化的几种原因。你应该注意到,同样的原则也适用于其它类型的自动化。
### 简化架构 ### 简化架构
今天,每个网络都是一个独特的“雪花”型,并且,网络工程师们为能够解决传输和网络应用问题而感到自豪,这些问题最终使网络不仅难以维护和管理,而且也很难去实现自动化。 今天,每个网络都是一片独特的“雪花”,并且,网络工程师们为能够通过一次性的改变来解决传输和网络应用问题而感到自豪,而这最终导致网络不仅难以维护和管理,而且也很难去实现自动化。
需要从一开始就包含到新的架构和设计中去部署,而不是去考虑网络自动化和管理作为一个二级或三级项目。哪个特性可以跨不同的供应商工作哪个扩展可以跨不同的平台工作当使用特别的网络设备平台时API 类型或者自动化工程是什么?当这些问题在设计程之前得到答案,最终的架构将变成简单的、可重复的、并且易于维护 _和_ 自动化的,在整个网络中将很少启用供应商专用的扩展。 网络自动化和管理需要从一开始就包含到新的架构和设计中去部署而不是作为一个二级或三级项目。哪个特性可以跨不同的供应商工作哪个扩展可以跨不同的平台工作当使用特别的网络设备平台时API 类型或者自动化工程是什么?当这些问题在设计程之前得到答案,最终的架构将变成简单的、可重复的、并且易于维护 _和_ 自动化的,在整个网络中将很少启用供应商专用的扩展。
### 确定的结果 ### 确定的结果
在一个企业组织中改变审查会议change review meeting去评估即将到来的网络上的变化、它们对外部系统的影响、以及回滚计划。在这个世界上人们为这些即 _将到来的变化_ 去接触 CLI输入错误的命令造成的影响是灾难性的。想像一下一个有三位、四位、五位、或者 50 位工程师的团队。每位工程师应对 _即将到来的变化_ 都有他们自己的独特的方法。并且,在管理这些变化的期间,使用一个 CLI 或者 GUI 的能力并不会消除和减少出现错误的机率。 在一个企业组织中,<ruby>改变审查会议<rt>change review meeting</rt></ruby>会评估面临的网络变化、它们对外部系统的影响、以及回滚计划。在人们通过 CLI 来执行这些 _面临的变化_ 的世界上,输入错误的命令造成的影响是灾难性的。想像一下,一个有 3 位、4 位、5位或者 50 位工程师的团队。每位工程师应对 _面临的变化_ 都有他们自己的独特的方法。并且,在管理这些变化的期间,一个人使用 CLI 或者 GUI 的能力并不会消除和减少出现错误的机率。
使用经过验证和测试过的网络自动化可以帮助实现更多的可预测行为,并且使执行团队有更好的机会实现确实性结果,在保证任务没有人为错误的情况下首次正确完成的道路上更进一步。
使用经过验证和测试过的网络自动化可以帮助实现更多的可预测行为,并且使执行团队更有可能实现确实性结果,在保证任务没有人为错误的情况下首次正确完成的道路上更进一步。
### 业务灵活性 ### 业务灵活性
不用说,网络自动化不仅为部署变化提供速度和灵活性,而且使得根据业务需要去从网络设备中检索数据的速度变得更快。自从服务器虚拟化实现以后,服务器和虚拟化使得管理员有能力在瞬间去部署一个新的应用程序。而且,更多的快速部署应用程序的问题出现在,配置一个 VLAN虚拟局域网、路由器、FW ACL防火墙的访问控制列表、或者负载均衡策略需要多长时间 不用说,网络自动化不仅为部署变化提供速度和灵活性,而且使得根据业务需要去从网络设备中检索数据的速度变得更快。自从服务器虚拟化到来以后,服务器和虚拟化使得管理员有能力在瞬间去部署一个新的应用程序。而且,随着应用程序可以更快地部署,随之浮现的问题是为什么还需要花费如此长的时间配置一个 VLAN虚拟局域网、路由器、FW ACL防火墙的访问控制列表或者负载均衡策略呢
在一个组织内通过去熟悉大多数的通用工作流和 _为什么_ 网络改变是真实的需求?新的部署过程自动化工具,如 Ansible 将使这些变得非常简单。 通过了解在一个组织内最常见的工作流和 _为什么_ 真正需要改变网络,部署如 Ansible 这样的现代的自动化工具将使这些变得非常简单。
这一章将介绍一些关于为什么应该去考虑网络自动化的高级知识点。在下一节,我们将带你去了解 Ansible 是什么,并且继续深入了解各种不同规模的 IT 组织的网络自动化的不同类型。
这一章介绍了一些关于为什么应该去考虑网络自动化的高级知识点。在下一节,我们将带你去了解 Ansible 是什么,并且继续深入了解各种不同规模的 IT 组织的网络自动化的不同类型。
### 什么是 Ansible ### 什么是 Ansible
Ansible 是存在于开源世界里的一种最新的 IT 自动化和配置管理平台。它经常被拿来与其它工具如 Puppet、Chef和 SaltStack 去比较。Ansible 作为一个由 Michael DeHaan 创建的开源项目出现于 2012 年Michael DeHaan 也创建了 Cobbler 和 cocreated Func它们在开源社区都非常流行。在 Ansible 开源项目创建之后不足 18 个月时间, Ansilbe 公司成立,并收到了 $6 million 的一系列资金。它成为并一直保持着第一的贡献者和 Ansible 开源项目的支持者。在 2015 年 10 月Red Hat 获得了 Ansible 公司。 Ansible 是存在于开源世界里的一种最新的 IT 自动化和配置管理平台。它经常被拿来与其它工具如 Puppet、Chef 和 SaltStack 去比较。Ansible 作为一个由 Michael DeHaan 创建的开源项目出现于 2012 年Michael DeHaan 也创建了 Cobbler 和 cocreated Func它们在开源社区都非常流行。在 Ansible 开源项目创建之后不足 18 个月时间, Ansilbe 公司成立,并收到了六百万美金 A 轮投资。该公司成为 Ansible 开源项目排名第一的贡献者和支持者,并一直保持着。在 2015 年 10 月Red Hat 收购了 Ansible 公司。
但是Ansible 到底是什么? 但是Ansible 到底是什么?
_Ansible 是一个无需代理和可扩展的超级简单的自动化平台。_ _Ansible 是一个无需代理和可扩展的超级简单的自动化平台。_
让我们更深入地了解它的细节,并且看一看 Ansible 的属性,它帮助 Ansible 在行业内获得大量的吸引力traction 让我们更深入地了解它的细节,并且看一看那些使 Ansible 在行业内获得广泛认可的属性。
### 简单
Ansible 的其中一个吸引人的属性是,使用它你 _不_ 需要特定的编程技能。所有的指令,或者说任务,都以一种标准的、任何人都可以理解的人类可读的数据格式记录在文档中。在 30 分钟之内完成安装并自动化任务的情况并不罕见!
例如,下列来自一个 Ansible <ruby>剧本<rt>playbook</rt></ruby>的任务用于确保一个 VLAN 存在于一个 Cisco Nexus 交换机中:
```
- nxos_vlan: vlan_id=100 name=web_vlan
```
@ -74,9 +73,9 @@ Ansible 的其中一个吸引人的属性是,去使用它你 _不_ 需要特
你无需熟悉或编写任何代码就可以明确地看出它将要做什么!
> 注意
> 这个报告的下半部分会涉及 Ansible 术语(<ruby>剧本<rt>playbook</rt></ruby>、<ruby>行动<rt>play</rt></ruby>、<ruby>任务<rt>task</rt></ruby>、<ruby>模块<rt>module</rt></ruby>等等)的细节。在我们使用 Ansible 进行网络自动化时,提及这些关键概念时我们会给出一些简短的示例。
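为了更直观地理解这些术语之间的关系,下面给出一个极简的剧本草图(仅为示意,其中被管理的主机组名 `switches` 是假设的,任务沿用上文的 VLAN 示例):
```
---
# 一个剧本playbook由一个或多个行动play组成
- name: 配置接入交换机            # 这是一个行动play
  hosts: switches                 # 假设的主机组名
  tasks:
    - name: 确保 VLAN 100 存在    # 这是一个任务task调用的是 nxos_vlan 模块module
      nxos_vlan: vlan_id=100 name=web_vlan
```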
### 无代理

View File

@ -0,0 +1,206 @@
四个 Linux 网络嗅探工具
======
在计算机网络中,数据是暴露的,因为数据包的传输是无法隐藏的。让我们来使用 `whois`、`dig`、`nmcli` 和 `nmap` 这四个工具来嗅探网络吧。
请注意,不要在不属于自己的网络上运行 `nmap`,因为这有可能会被其他人解读为恶意攻击。
### 精简和详细域名信息查询
您可能已经注意到,我们心爱的老 `whois` 命令如今提供的信息似乎不如过去详细了。我们用该命令查询 Linux.com 的域名描述信息:
```
$ whois linux.com
Domain Name: LINUX.COM
Registry Domain ID: 4245540_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.namecheap.com
Registrar URL: http://www.namecheap.com
Updated Date: 2018-01-10T12:26:50Z
Creation Date: 1994-06-02T04:00:00Z
Registry Expiry Date: 2018-06-01T04:00:00Z
Registrar: NameCheap Inc.
Registrar IANA ID: 1068
Registrar Abuse Contact Email: abuse@namecheap.com
Registrar Abuse Contact Phone: +1.6613102107
Domain Status: ok https://icann.org/epp#ok
Name Server: NS5.DNSMADEEASY.COM
Name Server: NS6.DNSMADEEASY.COM
Name Server: NS7.DNSMADEEASY.COM
DNSSEC: unsigned
[...]
```
有很多令人讨厌的法律声明。但联系信息在哪里呢?它们在 whois.namecheap.com 上(见上面输出的第三行):
```
$ whois -h whois.namecheap.com linux.com
```
我就不把结果复制出来了,因为这实在太长了,包含了注册人、管理员和技术人员的联系信息。怎么回事啊,露西尔LCTT 译注:《行尸走肉》中尼根的棒子)?有一些注册表,比如 .com 和 .net是“精简”注册表只保存有限的域名信息。要获取完整信息请使用 `-h` 或 `--host` 参数,它会直接向该域名的注册服务机构的 WHOIS 服务器查询。
大部分其他顶级域名则是“详细”注册表,如 .info。试着使用 `whois blockchain.info` 命令来查看。
想要摆脱这些烦人的法律声明?使用 `-H` 参数。
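例如,可以把它与前面的用法结合起来(一个假设的组合用法,`-H` 用于隐藏法律声明):
```
$ whois -H linux.com
$ whois -H -h whois.namecheap.com linux.com
```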
### DNS 解析
使用 `dig` 命令可以比较不同域名服务器返回的查询结果,以排除过时的信息。域名服务器会缓存各地的解析记录,并且不同的域名服务器有不同的刷新间隔。以下是一个简单的用法:
```
$ dig linux.com
<<>> DiG 9.10.3-P4-Ubuntu <<>> linux.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13694
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1440
;; QUESTION SECTION:
;linux.com. IN A
;; ANSWER SECTION:
linux.com. 10800 IN A 151.101.129.5
linux.com. 10800 IN A 151.101.65.5
linux.com. 10800 IN A 151.101.1.5
linux.com. 10800 IN A 151.101.193.5
;; Query time: 92 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Tue Jan 16 15:17:04 PST 2018
;; MSG SIZE rcvd: 102
```
注意靠近末尾的这行信息SERVER: 127.0.1.1#53(127.0.1.1),这是您默认的缓存解析器。当这个地址是本地地址时,说明您的电脑上运行着 DNS 服务。在我的机器上,它是被 NetworkManager 用来管理网络的 DnsmasqLCTT 译注Dnsmasq 是一个小巧且方便地用于配置 DNS 和 DHCP 的工具):
```
$ ps ax|grep dnsmasq
2842 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground
--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid
--listen-address=127.0.1.1
```
`dig` 命令默认返回 A 记录即域名对应的 IPv4 地址。IPv6 对应的则是 AAAA 记录:
```
$ dig linux.com AAAA
[...]
;; ANSWER SECTION:
linux.com. 60 IN AAAA 64:ff9b::9765:105
linux.com. 60 IN AAAA 64:ff9b::9765:4105
linux.com. 60 IN AAAA 64:ff9b::9765:8105
linux.com. 60 IN AAAA 64:ff9b::9765:c105
[...]
```
仔细检查一下会发现 Linux.com 是有 IPv6 地址的。很好,如果您的网络服务商支持 IPv6那么您就可以用 IPv6 连接。(令人难过的是,我的移动宽带就没有提供 IPv6。
假设您修改了自己域名的 DNS 记录,又或是怀疑 `dig` 返回的查询结果过时了,可以试着指定一个公共 DNS例如 OpenNIC 的解析器):
```
$ dig @69.195.152.204 linux.com
[...]
;; Query time: 231 msec
;; SERVER: 69.195.152.204#53(69.195.152.204)
```
`dig` 的回应表明这次查询的结果来自 69.195.152.204。您可以查询各种公共 DNS 服务并比较结果。
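下面是一种假设性的快速对比方式(`+short` 是 `dig` 的精简输出选项,这里再以另一个公共解析器 9.9.9.9 为例):
```
$ dig +short linux.com
$ dig @9.9.9.9 +short linux.com
```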
### 上游域名服务器
我想知道我的上游域名服务器是谁。为了查清楚,我首先查看 `/etc/resolv.conf` 的配置信息:
```
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
```
很幸运不过我是已经知道。您的Linux发行版可能配置不同您会看到您的上游服务器。接下来我们来试试网络管理器命令行工具 `nmcli`
```
$ nmcli dev show | grep DNS
IP4.DNS[1]: 192.168.1.1
```
很好,现在有结果了,那是我的移动热点。我可以登录它的简易管理面板来查询上游服务器。然而,许多消费级互联网网关不会让您查看或更改这些设置,这时只能尝试其他的方法,如 [我的域名服务器是什么?][1]。
### 查找您的网络中的 IPv4 地址
您的网络上有哪些 IPv4 地址已启用并正在使用中?
```
$ nmap -sn 192.168.1.0/24
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 14:03 PST
Nmap scan report for Mobile.Hotspot (192.168.1.1)
Host is up (0.011s latency).
Nmap scan report for studio (192.168.1.2)
Host is up (0.000071s latency).
Nmap scan report for nellybly (192.168.1.3)
Host is up (0.015s latency)
Nmap done: 256 IP addresses (2 hosts up) scanned in 2.23 seconds
```
每个人都想扫描自己的局域网中开放的端口。下面的例子用于查找服务及其版本号:
```
$ nmap -sV 192.168.1.1/24
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 16:46 PST
Nmap scan report for Mobile.Hotspot (192.168.1.1)
Host is up (0.0071s latency).
Not shown: 997 closed ports
PORT STATE SERVICE VERSION
22/tcp filtered ssh
53/tcp open domain dnsmasq 2.55
80/tcp open http GoAhead WebServer 2.5.0
Nmap scan report for studio (192.168.1.102)
Host is up (0.000087s latency).
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0)
631/tcp open ipp CUPS 2.1
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 256 IP addresses (2 hosts up) scanned in 11.65 seconds
```
这些是有趣的结果。让我们换一个网络接入点尝试相同的操作,以查看这些服务是否暴露于互联网中。如果您有智能手机,您就有了第二个网络:下载相应的应用程序,让手机为您的 Linux 电脑提供热点,再从热点控制面板获取广域网 IP 地址,然后重试:
```
$ nmap -sV 12.34.56.78
Starting Nmap 7.01 ( https://nmap.org ) at 2018-01-14 17:05 PST
Nmap scan report for 12.34.56.78
Host is up (0.0061s latency).
All 1000 scanned ports on 12.34.56.78 are closed
```
果然不出所料,结果和我想象的一样。请查阅这些命令的手册页,以便了解更多有趣的嗅探技术。
想了解更多 Linux 的相关知识,可以从 Linux 基金会和 edXLCTT 译注edX 是麻省理工和哈佛大学于 2012 年 4 月联手创建的大规模开放在线课堂平台)获取免费的 [“介绍 Linux”][2] 课程。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/1/4-tools-network-snooping-linux
作者:[Carla Schroder][a]
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:http://www.whatsmydnsserver.com/
[2]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,262 @@
如何在 Ubuntu 上安装和优化 Apache
=====
这是我们的 LAMP 系列教程的开始:如何在 Ubuntu 上安装 Apache web 服务器。
这些说明适用于任何基于 Ubuntu 的发行版,包括 Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1],甚至非 LTS 的 Ubuntu 发行版,例如 Ubuntu 17.10。这些说明经过测试并为 Ubuntu 16.04 编写。
Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所以这应该对每个人都有用。
### 开始安装 Apache 之前
在我们开始之前,这里有一些要求和说明:
* Apache 可能已经在你的服务器上安装了,所以开始之前首先检查一下。你可以使用 "apachectl -V" 命令来显示你正在使用的 Apache 的版本和一些其他信息。
* 你需要一个 Ubuntu 服务器。你可以从 [Vultr][2] 购买一个,它们是[最便宜的云托管服务商][3]之一。它们的服务器价格每月 2.5 美元起。
* 你需要有 root 用户或具有 sudo 访问权限的用户。下面的所有命令都由 root 用户执行,所以我们不必为每个命令都添加 'sudo'。
* 如果你使用 Ubuntu则需要[启用 SSH][4],如果你使用 Windows则应该使用类似 [MobaXterm][5] 的 SSH 客户端。
这就是全部要求和注释了,让我们进入安装过程。
### 在 Ubuntu 上安装 Apache
你需要做的第一件事就是更新 Ubuntu这是在你做任何事情之前都应该做的。你可以运行
```
apt-get update && apt-get upgrade
```
接下来,安装 Apache运行以下命令
```
apt-get install apache2
```
如果你愿意,你也可以安装 Apache 文档和一些 Apache 实用程序。对于我们稍后将要安装的一些模块,你将需要一些 Apache 实用程序。
```
apt-get install apache2-doc apache2-utils
```
**就是这样,你已经成功安装了 Apache**
你仍然需要配置它。
### 在 Ubuntu 上配置和优化 Apache
你可以在 Apache 上做各种各样的配置,但是主要的和最常见的配置将在下面做出解释。
#### 检查 Apache 是否正在运行
默认情况下Apache 设置为在机器启动时自动启动,因此你不必手动启用它。你可以使用以下命令检查它是否正在运行以及其他相关信息:
```
systemctl status apache2
```
[![check if apache is running][6]][6]
并且你可以检查你正在使用的版本:
```
apachectl -V
```
一种更简单的检查方法是访问服务器的 IP 地址,如果你看到默认的 Apache 页面,那么一切都正常。
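如果服务器上没有图形界面,也可以用 curl 粗略验证一下(假设已安装 curl能收到 HTTP 响应头即说明 Apache 在工作):
```
curl -I http://localhost
```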
#### 更新你的防火墙
如果你使用防火墙你应该使用它则可能需要更新防火墙规则并允许访问默认端口。Ubuntu 上最常用的防火墙是 UFW因此以下说明适用于 UFW。
要允许通过 80http和 443https端口的流量运行以下命令
```
ufw allow 'Apache Full'
```
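你可以顺便确认规则已经生效(假设 UFW 已启用):
```
ufw status
```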
#### 安装常见的 Apache 模块
一些模块经常被建议使用,所以你应该安装它们。我们将包含最常见模块的说明:
##### 使用 PageSpeed 加速你的网站
PageSpeed 模块将自动优化并加速你的 Apache 服务器。
首先,进入 [PageSpeed 下载页][7]并选择你需要的文件。我们使用的是 64 位 Ubuntu 服务器,所以我们安装最新的稳定版本。使用 wget 下载它:
```
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
```
然后,使用以下命令安装它:
```
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install
```
重启 Apache 以使更改生效:
```
systemctl restart apache2
```
##### 使用 mod_rewrite 模块启动重写/重定向
顾名思义,该模块用于重写(重定向)。如果你使用 WordPress 或任何其他 CMS 来处理此问题,你就需要它。要安装它,只需运行:
```
a2enmod rewrite
```
然后再次重新启动 Apache。你可能还需要一些额外的配置具体取决于你使用的 CMS如果有的话请搜索适用于你的环境的具体说明。
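作为参考,下面是一个常见的“前端控制器”重写规则草图(仅为假设性示例,前提是虚拟主机中允许覆盖,即 `AllowOverride All`;文件放在站点根目录的 .htaccess 中):
```
# .htaccess假设性示例
RewriteEngine On
# 当请求的不是真实存在的文件或目录时,交给 index.php 处理
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]
```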
##### 使用 ModSecurity 模块保护你的 Apache
顾名思义ModSecurity 是一个用于安全性的模块,它基本上起着防火墙的作用,它可以监控你的流量。要安装它,运行以下命令:
```
apt-get install libapache2-modsecurity
```
再次重启 Apache
```
systemctl restart apache2
```
ModSecurity 自带了一个默认的设置,但如果你想扩展它,你可以使用 [OWASP 规则集][8]。
##### 使用 mod_evasive 模块抵御 DDoS 攻击
尽管 mod_evasive 在防止攻击方面有多大用处值得商榷,但是你可以使用它来阻止和防止服务器上的 DDoS 攻击。要安装它,使用以下命令:
```
apt-get install libapache2-mod-evasive
```
默认情况下mod_evasive 是禁用的,要启用它,编辑以下文件:
```
nano /etc/apache2/mods-enabled/evasive.conf
```
取消注释所有行(即删除 #),根据你的要求进行配置。如果你不知道要编辑什么,你可以保持原样。
[![mod_evasive][9]][9]
创建一个日志文件:
```
mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive
```
就是这样。现在重启 Apache 以使更改生效。
```
systemctl restart apache2
```
你可以安装和配置[附加模块][10],但这完全取决于你和你使用的软件。它们通常不是必需的,甚至我们上面介绍的 4 个模块也不是必需的。如果某个应用需要特定的模块,其文档通常会加以说明。
#### 用 Apache2Buddy 脚本优化 Apache
Apache2Buddy 是一个可以自动调整 Apache 配置的脚本。你唯一需要做的就是运行下面的命令,脚本会自动完成剩下的工作:
```
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
```
如果你没有安装 curl那么你可能需要安装它。使用以下命令来安装 curl
```
apt-get install curl
```
#### 额外配置
用 Apache 还可以做一些额外的事情,但我们会把它们留到另外的教程,比如启用 HTTP/2 支持、关闭(或打开)KeepAlive、进一步调优你的 Apache 等等。这些事情你现在不是必须做,但是如果你在网上找到了相关教程,等不及我们的教程发布,那就去做吧。
### 使用 Apache 创建你的第一个网站
现在我们已经完成了所有的调优工作,让我们开始创建一个实际的网站。按照我们的指示创建一个简单的 HTML 页面和一个在 Apache 上运行的虚拟主机。
你需要做的第一件事是为你的网站创建一个新的目录。运行以下命令来执行此操作:
```
mkdir -p /var/www/example.com/public_html
```
当然,将 example.com 替换为你所需的域名。你可以从 [Namecheap][11] 获得一个便宜的域名。
不要忘记在下面的所有命令中替换 example.com。
接下来,创建一个简单的静态网页。创建 HTML 文件:
```
nano /var/www/example.com/public_html/index.html
```
粘贴这些:
```
<html>
     <head>
       <title>Simple Page</title>
     </head>
     <body>
       <p>If you're seeing this in your browser then everything works.</p>
     </body>
</html>
```
保存并关闭文件。
配置目录的权限:
```
chown -R www-data:www-data /var/www/example.com
chmod -R og-r /var/www/example.com
```
为你的网站创建一个新的虚拟主机:
```
nano /etc/apache2/sites-available/example.com.conf
```
粘贴以下内容:
```
<VirtualHost *:80>
     ServerAdmin admin@example.com
     ServerName example.com
     ServerAlias www.example.com
   
     DocumentRoot /var/www/example.com/public_html
    
     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```
这是一个基础的虚拟主机。根据你的设置,你可能需要更高级的 .conf 文件。
在更新所有内容后保存并关闭文件。
现在,使用以下命令启用虚拟主机:
```
a2ensite example.com.conf
```
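在重启之前,可以先用 Apache 自带的语法检查确认配置无误:
```
apachectl configtest
```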
最后,重启 Apache 以使更改生效:
```
systemctl restart apache2
```
这就是全部了,你做完了。现在你可以访问 example.com 并查看你的页面。
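如果你的域名还没有解析到这台服务器,可以在本地测试机上临时把它指向服务器 IP一个假设性的做法请把 203.0.113.10 换成你服务器的真实 IP
```
echo "203.0.113.10 example.com" | sudo tee -a /etc/hosts
```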
--------------------------------------------------------------------------------
via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
作者:[ThisHosting][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://thishosting.rocks
[1]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
[2]:https://thishosting.rocks/go/vultr/
[3]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
[4]:https://thishosting.rocks/how-to-enable-ssh-on-ubuntu/
[5]:https://mobaxterm.mobatek.net/
[6]:https://thishosting.rocks/wp-content/uploads/2018/01/apache-running.jpg
[7]:https://www.modpagespeed.com/doc/download
[8]:https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
[9]:https://thishosting.rocks/wp-content/uploads/2018/01/mod_evasive.jpg
[10]:https://httpd.apache.org/docs/2.4/mod/
[11]:https://thishosting.rocks/neamcheap-review-cheap-domains-cool-names
[12]:https://thishosting.rocks/wp-content/plugins/patron-button-and-widgets-by-codebard/images/become_a_patron_button.png
[13]:https://www.patreon.com/thishostingrocks

View File

@ -0,0 +1,240 @@
在 Linux 中如何归档文件和目录
=====
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Archive-Files-And-Directories-In-Linux-720x340.png)
在我们之前的教程中,我们讨论了如何[使用 gzip 和 bzip2 压缩和解压缩文件][1]。在本教程中,我们将学习如何在 Linux 中归档文件。归档和压缩有什么不同吗?你们中的一些人可能经常认为这些术语有相同的含义。但是,这两者完全不同。归档是将多个文件和目录(相同或不同大小)组合成一个文件的过程。另一方面,压缩是减小文件或目录大小的过程。归档通常用作系统备份的一部分,或者将数据从一个系统移至另一个系统时。希望你了解归档和压缩之间的区别。现在,让我们进入主题。
### 归档文件和目录
归档文件和目录最常见的程序是:
1. tar
2. zip
这是一个很大的话题,所以,我将分两部分发表这篇文章。在第一部分中,我们将看到如何使用 tar 命令来归档文件和目录。
##### 使用 tar 命令归档文件和目录
**Tar** 是一个 Unix 命令,代表 **T**ape **A**rchive磁带归档。它用于将多个文件相同或不同大小组合或存储到一个文件中。tar 实用程序中有 4 种主要的操作模式。
1. **c** 从文件或目录中建立归档
2. **x** 提取归档
3. **r** 将文件追加到归档
4. **t** 列出归档的内容
有关完整的模式列表,参阅 man 手册页。
**创建一个新的归档**
为了本指南,我将使用名为 **ostechnix** 的文件夹,其中包含三种不同类型的文件。
```
$ ls ostechnix/
file.odt image.png song.mp3
```
现在,让我们为 ostechnix 目录创建一个新的 tar 归档。
```
$ tar cf ostechnix.tar ostechnix/
```
这里,**c** 标志指的是创建新的归档,**f** 用于指定归档文件名。
同样,对当前工作目录中的一组文件创建归档文件,使用以下命令:
```
$ tar cf archive.tar file1 file2 file3
```
**提取归档**
要在当前目录中提取归档文件,只需执行以下操作:
```
$ tar xf ostechnix.tar
```
我们还可以使用 **C** 标志(大写字母 C将归档提取到不同的目录中。例如以下命令在 **Downloads** 目录中提取给定的归档文件。
```
$ tar xf ostechnix.tar -C Downloads/
```
或者,转到 Downloads 文件夹并像下面一样提取其中的归档。
```
$ cd Downloads/
$ tar xf ../ostechnix.tar
```
有时,你可能想要提取特定类型的文件。例如,以下命令提取 “.png” 类型的文件。
```
$ tar xf ostechnix.tar --wildcards "*.png"
```
**创建 gzip 和 bzip 格式的压缩归档**
默认情况下tar 创建的归档文件以 **.tar** 结尾。此外tar 命令还可以与压缩实用程序 **gzip** 和 **bzip** 结合使用:以 **.tar** 为扩展名的是普通 tar 归档文件,以 **.tar.gz** 或 **.tgz** 结尾的是用 **gzip** 压缩过的归档,以 **.tar.bz2** 或 **.tbz** 结尾的则是用 **bzip** 压缩过的归档。
首先,让我们来**创建一个 gzip 归档**
```
$ tar czf ostechnix.tar.gz ostechnix/
```
或者
```
$ tar czf ostechnix.tgz ostechnix/
```
这里,我们使用 **z** 标志来使用 gzip 压缩方法压缩归档文件。
你可以使用 **v** 标志在创建归档时查看进度。
```
$ tar czvf ostechnix.tar.gz ostechnix/
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
这里,**v** 指显示进度。
从一个文件列表创建 gzip 归档文件:
```
$ tar czf archive.tgz file1 file2 file3
```
要提取当前目录中的 gzip 归档文件,使用:
```
$ tar xzf ostechnix.tgz
```
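顺带一提,较新版本的 GNU tar 在提取时能自动识别压缩格式,因此(假设你的 tar 版本足够新)通常可以省略 **z** 标志:
```
$ tar xf ostechnix.tgz
```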
要提取其他文件夹中的归档,使用 -C 标志:
```
$ tar xzf ostechnix.tgz -C Downloads/
```
现在,让我们创建 **bzip 归档**。为此,请使用下面的 **j** 标志。
创建一个目录的归档:
```
$ tar cjf ostechnix.tar.bz2 ostechnix/
```
或者:
```
$ tar cjf ostechnix.tbz ostechnix/
```
从一个列表文件中创建归档:
```
$ tar cjf archive.tar.bz2 file1 file2 file3
```
或者:
```
$ tar cjf archive.tbz file1 file2 file3
```
为了显示进度,使用 **v** 标志。
现在,在当前目录下,让我们提取一个 bzip 归档。这样做:
```
$ tar xjf ostechnix.tar.bz2
```
或者,提取归档文件到其他目录:
```
$ tar xjf ostechnix.tar.bz2 -C Downloads
```
**一次创建多个目录和/或文件的归档**
这是 tar 命令的另一个很酷的功能。要一次为多个目录或文件创建 gzip 归档文件,使用以下命令:
```
$ tar czvf ostechnix.tgz Downloads/ Documents/ ostechnix/file.odt
```
上述命令创建 **Downloads**, **Documents** 目录和 **ostechnix** 目录下的 **file.odt** 文件的归档,并将归档保存在当前工作目录中。
**在创建归档时跳过目录和/或文件**
这在备份数据时非常有用。你可以在备份中排除不重要的文件或目录,这正是 `--exclude` 选项的用武之地。例如,你想要创建 /home 目录的归档,但不希望包括 Downloads、Documents、Pictures、Music 这些目录。
这是我们的做法:
```
$ tar czvf ostechnix.tgz /home/sk --exclude=/home/sk/Downloads --exclude=/home/sk/Documents --exclude=/home/sk/Pictures --exclude=/home/sk/Music
```
上述命令将对我的 $HOME 目录创建一个 gzip 归档,其中不包括 Downloads, Documents, Pictures 和 Music 目录。要创建 bzip 归档,将 **z** 替换为 **j**,并在上例中使用扩展名 .bz2。
**列出归档文件但不提取它们**
要列出归档文件的内容,我们使用 **t** 标志。
```
$ tar tf ostechnix.tar
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
要查看详细输出,使用 **v** 标志。
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
```
**追加文件到归档**
文件或目录可以使用 **r** 标志添加/更新到现有的归档。看看下面的命令:
```
$ tar rf ostechnix.tar ostechnix/ sk/ example.txt
```
上面的命令会将名为 **sk** 的目录和名为 **example.txt** 的文件添加到 ostechnix.tar 归档中。
你可以使用以下命令验证文件是否已添加:
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
drwxr-xr-x sk/users 0 2018-03-26 19:52 sk/
-rw-r--r-- sk/users 0 2018-03-26 19:39 sk/linux.txt
-rw-r--r-- sk/users 0 2018-03-26 19:56 example.txt
```
##### **TL;DR**
**创建 tar 归档:**
* **普通 tar 归档:** tar -cf archive.tar file1 file2 file3
* **Gzip tar 归档:** tar -czf archive.tgz file1 file2 file3
* **Bzip tar 归档:** tar -cjf archive.tbz file1 file2 file3
**提取 tar 归档:**
* **普通 tar 归档:** tar -xf archive.tar
* **Gzip tar 归档:** tar -xzf archive.tgz
* **Bzip tar 归档:** tar -xjf archive.tbz
我们只介绍了 tar 命令的基本用法,这些对于开始使用 tar 命令足够了。但是,如果你想了解更多详细信息,参阅 man 手册页。
```
$ man tar
```
好吧,这就是全部了。在下一部分中,我们将看到如何使用 Zip 实用程序来归档文件和目录。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-archive-files-and-directories-in-linux-part-1/
作者:[SK][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/

View File

@ -1,166 +0,0 @@
开始 Vagrant 之旅
=====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
如果你和我一样,你可能在某个地方有一个“沙盒”,可以在那里进行你手头的任何项目。随着时间的推移,沙盒会变得杂乱无章,充斥着各种想法、工具链组件、你不再使用的代码模块,以及其他你不需要的东西。当你完成某件事情时,这会使你的部署变得复杂,因为你可能不确定项目的实际依赖关系:随着时间推移,你在沙盒中装了一些工具,但是你忘了它是必须安装的。你需要一个干净的环境,将所有的依赖关系放在一个地方,以便以后更方便。
或者你可能工作在 DevOps 中,你所服务的开发人员用模糊的依赖关系来编写代码,这使得测试变得更加困难。你需要一种方法来获得一个干净的盒子,将代码放入其中,并通过它运行代码,而且你希望这些环境是一次性的和可重复的。
那么选择 [Vagrant][1] 吧。它由 HashiCorp 创建,以 [MIT License][2] 发布,可充当 VirtualBox、Microsoft Hyper-V 或 Docker 容器的包装器和前端,并且可以通过[许多其他供应商][3]的插件进行扩展。你可以配置 Vagrant 以提供可重复的干净环境,并且预装好所需的基础设施。配置脚本是可移植的,因此,如果你的仓库和 Vagrant 配置脚本位于基于云的存储上,那么你几乎不受限制,可以在多台机器上启动并开展工作。让我们来看一看。
### 安装
对于本次安装,我的环境是 Linux Mint 桌面,版本是 18.3 Cinnamon 64 位,在其他大多数 Debian 派生系统上安装非常类似。在大多数发行版中,对于基于 RPM 的系统也有类似的安装程序。Vagrant 的[安装页面][4]为 Debian, Windows, CentOS, MacOS 和 Arch Linux 都提供下载,但是我在我的软件包管理器中找到了它,所以我在那进行了安装。
最简单的安装使用了 VirtualBox 作为虚拟化提供者,所以我需要安装它:
```
sudo apt-get install virtualbox vagrant
```
安装程序将会获取依赖项(主要是一些 Ruby 相关的东西)并安装它们。
### 建立一个项目
在设置你的项目之前,你需要了解你想要在哪些环境运行它。你可以在 [Vagrant Boxes 仓库][5]中为许多虚拟化提供者找到大量预配置的盒子box。许多盒子预先配置了一些你可能需要的核心基础设施比如 PHP、MySQL 和 Apache但是对于本次测试我将使用一个基础的 Debian 8 64 位 "Jessie" 盒子,然后手动安装一些东西,这样你就可以看到具体过程了。
```
mkdir ~/myproject
cd ~/myproject
vagrant init debian/contrib-jessie64
vagrant up
```
最后一条命令将根据需要从仓库中获取或更新 VirtualBox 镜像,然后拉起启动器,你的系统上会出现一个运行框!下次启动这个项目时,除非镜像已经在仓库中更新,否则不会花费太长时间。
要访问沙盒,只需要输入 `vagrant ssh`,你将被放到虚拟机的全功能 SSH 会话中,你将会是 `vagrant` 用户,但你是 `sudo` 组的成员,所以你可以切换到 root并在这里做你想做的任何事情。
你会在沙盒中看到一个名为 `/vagrant` 的目录,对这个目录要当心,因为它与你主机上的 `~/myproject` 文件夹保持同步。在虚拟机的 `/vagrant` 下建立一个文件,它会立即复制到主机上,反之亦然。注意,有些沙盒并没有安装 VirtualBox 的附加功能,所以同步只在启动时起作用。有一些用于手动同步的命令行工具,这可能是测试环境中非常有用的特性。我倾向于坚持使用那些有附加功能的盒子,这样这个目录就能正常工作,不必再操心它。
这个方案的好处很快显现出来了:如果你在主机上有一套代码编辑工具链,而出于某种原因不希望它出现在虚拟机上,那么这不是问题:在主机上进行编辑,虚拟机立刻就能看到更改。反过来,在虚拟机上快速做出的更改,也会同步到主机上的“官方”副本。
让我们关闭这个盒子,这样我们就可以在这个盒子里提供一些我们需要的东西:`vagrant halt`。
### 在 VM 上安装额外的软件
对于这个例子,我将使用 [Apache][6]、[PostgreSQL][7] 和 Perl 的 [Dancer][8] web 框架进行项目开发。我将修改 Vagrant 配置脚本,以便我需要的东西预先装好。为了让以后的更新更容易,我将把配置脚本放在项目根目录下 `~/myproject/Vagrantfile` 文件的顶部:
```
$provision_script = <<SCRIPT
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -y install \
  apache2 \
  postgresql-client-9.4 \
  postgresql-9.4 \
  libdbd-pg-perl \
  libapache2-mod-fastcgi \
  libdata-validate-email-perl  \
  libexception-class-perl \
  libexception-class-trycatch-perl \
  libtemplate-perl \
  libtemplate-plugin-json-escape-perl \
  libdbix-class-perl \
  libyaml-tiny-perl \
  libcrypt-saltedhash-perl \
  libdancer2-perl \
  libtemplate-plugin-gravatar-perl  \
  libtext-csv-perl \
  libstring-tokenizer-perl \
  cpanminus
cpanm -f -n \
  Dancer2::Session::Cookie \
  Dancer2::Plugin::DBIC \
  Dancer2::Plugin::Auth::Extensible::Provider::DBIC \
  Dancer2::Plugin::Locale \
  Dancer2::Plugin::Growler
sudo a2enmod rewrite fastcgi
sudo apache2ctl restart
SCRIPT
```
在 Vagrantfile 的结尾附近,你会发现一个 `config.vm.provision` 配置项。正如你在示例中看到的那样,你可以在此处以内联方式进行配置,只需取消注释以下几行:
```
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get update
  #   sudo apt-get install -y apache2
  # SHELL
```
相反,将那四行替换为使用你在文件顶部定义为变量的配置脚本:
```
config.vm.provision "shell", inline: $provision_script
```
你可能还希望设置端口转发,以便从主机访问虚拟机上的 Apache。找到包含 `forwarded_port` 的行并取消注释。如果你愿意,可以将端口从 8080 更改为其它值。我通常使用 5000 端口,在浏览器中输入 `http://localhost:5000` 即可访问虚拟机上的 Apache 服务器(对应的配置行见下面的示意)。
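一个假设性的示意(具体端口号按你的喜好修改):
```
config.vm.network "forwarded_port", guest: 80, host: 5000
```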
这里有一个设置提示:如果要在多台机器上使用位于云存储上的同一个仓库,你可能希望在不同机器上将 `VAGRANT_HOME` 环境变量设置为不同的值。由于 VirtualBox 的工作方式,你需要为这些系统分别存储状态信息。请确保你的版本控制系统忽略了用于存放状态的目录:我将 `.vagrant.d*` 添加到了仓库的 `.gitignore` 文件中。不过,我确实让 Vagrantfile 成为仓库的一部分!
### 好了!
我输入 `vagrant up`,就可以开始写代码了。一旦你做过一两次,你可能会总结出一些可以反复复用的 Vagrantfile 模板(就像我刚刚那样),这正是 Vagrant 的优势之一。你可以更快地完成实际的编码工作,而把很少的时间花在基础设施上!
你还可以使用 Vagrant 做更多事情。许多工具链都有相应的配置器,因此,无论你需要复制什么环境,都可以快速简单地完成。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/getting-started-vagrant
作者:[Ruth Holloway][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://vagrantup.com
[2]:https://opensource.org/licenses/MIT
[3]:https://github.com/hashicorp/vagrant/wiki/Available-Vagrant-Plugins#providers
[4]:https://www.vagrantup.com/downloads.html
[5]:https://app.vagrantup.com/boxes/search
[6]:https://httpd.apache.org/
[7]:https://postgresql.org
[8]:https://perldancer.org

View File

@ -1,80 +0,0 @@
一个更好的调试 Perl 模块
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
有时,你会希望某段 Perl 代码只在调试或开发调优时才执行。这很好,但这样的代码块可能会对性能产生很大的影响,尤其是在运行时才决定是否执行它的时候。
[Curtis“Ovid”Poe][1] 编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 Keyword::Simple 和 Perl 5.012 中引入的可插入关键字架构来创建新的关键字DEVELOPMENT。它使用 PERL_KEYWORD_DEVELOPMENT 环境变量的值来确定是否要执行一段代码。
使用它再简单不过了:
```
use Keyword::DEVELOPMENT;
       
sub doing_my_big_loop {
    my $self = shift;
    DEVELOPMENT {
        # insert expensive debugging code here!
    }
}
```
在编译时,如果 PERL_KEYWORD_DEVELOPMENT 的值为假DEVELOPMENT 块内的代码会被直接优化掉,就像根本不存在一样。
你看到好处了么?在沙盒中将 PERL_KEYWORD_DEVELOPMENT 环境变量设置为 true在生产环境设为 false并且可以将有价值的调试工具提交到你的代码库中在你需要的时候随时可用。
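例如,可以像下面这样通过环境变量来切换(`myscript.pl` 是一个假设的脚本名):
```
# 开发环境:执行 DEVELOPMENT 块
PERL_KEYWORD_DEVELOPMENT=1 perl myscript.pl
# 生产环境DEVELOPMENT 块在编译时被优化掉
PERL_KEYWORD_DEVELOPMENT=0 perl myscript.pl
```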
在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异:
```
sub connect_to_my_database {
       
    my $dsn = "dbi:mysql:productiondb";
    my $user = "db_user";
    my $pass = "db_pass";
   
    DEVELOPMENT {
        # Override some of that config information
        $dsn = "dbi:mysql:developmentdb";
    }
   
    my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
对这段代码的后续改进,可以把配置信息放到 YAML 或 INI 等外部文件中读取,但我希望你已经从中看到了这个工具的用法。
我查看了 Keyword::DEVELOPMENT 的源码,花了大约半个小时研究,心想:“天哪,我为什么没有想到这个?”有了 Keyword::Simple 之后Curtis 给我们的这个模块实现起来就非常简单了。这是我长期以来在自己的编码实践中需要的一个优雅解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm

View File

@ -0,0 +1,93 @@
Orbital Apps - 新一代 Linux 程序
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
今天,我们要了解 **Orbital Apps**(即 **ORB****O**pen **R**unnable **B**undle开放可运行程序包一个免费、跨平台的开源程序集合。所有 ORB 程序都是可移动的。你可以将它们安装在你的 Linux 系统或 USB 驱动器上,以便在任何系统上使用相同的程序。它们不需要 root 权限,并且没有依赖关系:所有必需的依赖都包含在程序中。只需将 ORB 程序复制到 USB 驱动器并将其插入到任何 Linux 系统中,就可以立即开始使用它们。所有设置、配置以及程序的数据都将存储在 USB 驱动器上。由于不需要在本地驱动器上安装程序,我们可以在联机或脱机的计算机上运行应用程序,这意味着我们不需要互联网来下载任何依赖。
ORB 程序经过压缩,体积最多可减小 60%,因此我们甚至可以在小容量的 USB 驱动器上存储和使用它们。所有 ORB 程序都使用 PGP/RSA 签名,并通过 TLS 1.2 分发。所有程序打包时都不做任何修改,甚至不会重新编译。以下是当前可用的可移动 ORB 程序列表。
* abiword
* audacious
* audacity
* darktable
* deluge
* filezilla
* firefox
* gimp
* gnome-mplayer
* hexchat
* inkscape
* isomaster
* kodi
* libreoffice
* qbittorrent
* sound-juicer
* thunderbird
* tomahawk
* uget
* vlc
* 未来还有更多。
Orb 是开源的,所以如果你是开发人员,欢迎协作并添加更多程序。
### 下载并使用可移动 ORB 程序
正如我已经提到的,我们不需要安装可移动 ORB 程序。但是ORB 团队强烈建议你使用 **ORB 启动器** 来获得更好的体验。 ORB 启动器是一个小的安装程序(小于 5MB它可帮助你启动 ORB 程序,并获得更好,更流畅的体验。
让我们先安装 ORB 启动器。为此,[**下载 ORB 启动器**][1]。你可以手动下载 ORB 启动器的 ISO 并将其挂载到文件管理器上。或者在终端中运行以下任一命令来安装它:
```
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
```
如果你没有 wget请运行
```
$ curl https://www.orbital-apps.com/orb.sh | bash
```
询问时输入 root 用户和密码。
就是这样。ORB 启动器已安装完毕,可以使用了。
现在,进入 [**ORB 可移动程序下载页面**][2],并下载你选择的程序。在本教程中,我会下载 Firefox。
下载完后,进入下载位置并双击 ORB 程序来启动它。点击 Yes 确认。
![][4]
Firefox ORB 程序能用了!
![][5]
同样,你可以立即下载并运行任何程序。
如果你不想使用 ORB 启动器,请将下载的 .orb 安装程序设置为可执行文件,然后双击它进行安装。不过,建议使用 ORB 启动器,它可让你在使用 ORB 程序时更轻松、更顺畅。
就我测试的 ORB 程序而言,它们打开即可使用。希望这篇文章有帮助。今天就是这些。祝你有美好的一天!
干杯!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png

View File

@ -1,38 +1,36 @@
在 CentOS 6 系统上安装最新版 Python3 软件包的 3 种方法
======
CentOS 克隆自 RHEL无需付费即可使用。CentOS 是一个企业级标准的、前沿的操作系统,被超过 90% 的网络主机托管商采用,因为它提供了技术领先的服务器控制面板 cPanel/WHM。
该控制面板使得用户无需进入命令行即可通过其管理一切。
众所周知RHEL 提供长期支持,出于稳定性考虑,不提供最新版本的软件包。
如果你想安装的最新版本软件包不在默认源中,你需要手动编译源码安装。
但手动编译安装的方式有不小的风险,即如果出现新版本,无法升级手动安装的软件包;你不得不重新手动安装。
那么在这种情况下,安装最新版软件包的推荐方法和方案是什么呢?是的,可以通过为系统添加所需的第三方源来达到目的。
可供企业级 Linux 使用的第三方源有很多,但只有几个是 CentOS 社区推荐使用的,它们在很大程度上不修改基础软件包。
这几个推荐的源维护得很好,为 CentOS 提供大量补充软件包。
在本教程中,我们将向你展示,如何在 CentOS 6 操作系统上安装最新版本的 Python 3 软件包。
### 方法 1使用 Software Collections 源SCL
SCL 源目前由 CentOS SIG 维护,除了重新编译构建 Red Hat 的 Software Collections 外,还额外提供一些它们自己的软件包。
该源中包含不少程序的更高版本,可以在不改变原有旧版本程序包的情况下安装,使用时需要通过 scl 命令调用。
运行如下命令可以在 CentOS 上安装 SCL 源:
```
# yum install centos-release-scl
```
检查可用的 Python 3 版本:
```
# yum info rh-python35
Loaded plugins: fastestmirror, security
@ -53,49 +51,49 @@ Description : This is the main package for rh-python35 Software Collection.
```
运行如下命令从 scl 源安装可用的最新版 python 3
```
# yum install rh-python35
```
运行如下特殊的 scl 命令,在当前 shell 中启用安装的软件包:
```
# scl enable rh-python35 bash
```
运行如下命令检查安装的 python3 版本:
```
# python --version
Python 3.5.1
```
运行如下命令获取系统已安装的 SCL 软件包列表:
```
# scl -l
rh-python35
```
### 方法 2使用 EPEL 源Extra Packages for Enterprise Linux
EPEL 是 Extra Packages for Enterprise Linux 的缩写,该源由 Fedora SIGSpecial Interest Group维护。
该 SIG 为企业级 Linux 创建、维护并管理一系列高品质补充软件包,受益的企业级 Linux 发行版包括但不限于红帽企业级 LinuxRHEL、CentOS、Scientific LinuxSL和 Oracle LinuxOL等。
EPEL 通常基于 Fedora 对应代码提供软件包,不会与企业级 Linux 发行版中的基础软件包冲突或替换其中的软件包。
**推荐阅读:** [在 RHEL、CentOS、Oracle Linux 或 Scientific Linux 上安装启用 EPEL 源][1]
EPEL 软件包位于 CentOS 的 Extras 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
检查可用的 python 3 版本:
```
# yum --disablerepo="*" --enablerepo="epel" info python34
Loaded plugins: fastestmirror, security
@ -119,13 +117,13 @@ Description : Python 3 is a new version of the language that is incompatible wit
```
运行如下命令从 EPEL 源安装可用的最新版 python 3 软件包:
```
# yum --disablerepo="*" --enablerepo="epel" install python34
```
默认情况下并不会安装对应的 pip 和 setuptools我们需要运行如下命令手动安装
```
# curl -O https://bootstrap.pypa.io/get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
@ -146,28 +144,28 @@ Successfully installed pip-10.0.1 setuptools-39.1.0 wheel-0.31.0
```
运行如下命令检查已安装的 python3 版本:
```
# python3 --version
Python 3.4.5
```
### 方法 3使用 IUS 社区源
IUS 社区是 CentOS 社区批准的第三方 RPM 源,为企业级 LinuxRHEL 和 CentOS5、6 和 7 版本提供最新上游版本的 PHP、Python、MySQL 等软件包。
IUS 社区源依赖于 EPEL 源,故我们需要先安装 EPEL 源,然后再安装 IUS 社区源。按照下面的步骤安装启用 EPEL 源和 IUS 社区源,并通过它们安装软件包。
**推荐阅读:** [在 RHEL 或 CentOS 上安装启用 IUS 社区源][2]
EPEL 软件包位于 CentOS 的 Extras 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
下载 IUS 社区源安装脚本:
```
# curl 'https://setup.ius.io/' -o setup-ius.sh
% Total % Received % Xferd Average Speed Time Time Time Current
@ -176,13 +174,13 @@ Download IUS Community Repository Shell script
```
安装启用 IUS 社区源:
```
# sh setup-ius.sh
```
检查可用的 python 3 版本:
```
# yum --enablerepo=ius info python36u
Loaded plugins: fastestmirror, security
@ -220,13 +218,13 @@ Description : Python is an accessible, high-level, dynamically typed, interprete
```
运行如下命令从 IUS 源安装最新可用版本的 python 3 软件包:
```
# yum --enablerepo=ius install python36u
```
运行如下命令检查已安装的 python3 版本:
```
# python3.6 --version
Python 3.6.5
@ -239,7 +237,7 @@ via: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-cen
作者:[PRAKASH SUBRAMANIAN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,216 @@
使用 GNU Parallel 提高 Linux 命令行执行效率
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
你是否有过这种感觉,你的主机运行速度没有预期的那么快?我也曾经有过这种感觉,直到我发现了 GNU Parallel。
GNU Parallel 是一个 shell 工具,可以并行执行任务。它可以解析多种输入,让你可以同时在多份数据上运行脚本或命令。你终于可以使用全部的 CPU 了!
如果你用过 `xargs`,上手 Parallel 几乎没有难度。如果没有用过,这篇教程会告诉你如何使用,同时给出一些其它的用例。
### 安装 GNU Parallel
GNU Parallel 很可能没有预装在你的 Linux 或 BSD 主机上你可以从软件源Linux 对应 repositoryBSD 对应 ports collection中安装。以 Fedora 为例:
```
$ sudo dnf install parallel
```
对于 NetBSD
```
# pkg_add parallel
```
如果各种方式都不成功,请参考[项目主页][1]。
### 从串行到并行
正如其名称所示Parallel 的强大之处是以并行方式执行任务;而我们中不少人平时仍然以串行方式运行任务。
当你对多个对象执行某个命令时,你实际上创建了一个任务队列。一部分对象可以被命令处理,剩余的对象需要等待,直到命令处理它们。这种方式是低效的。只要数据够多,总会形成任务队列;但与其只使用一个任务队列,为何不使用多个更小规模的任务队列呢?
假设你有一个图片目录,你希望将目录中的图片从 JPEG 格式转换为 PNG 格式。有多种方法可以完成这个任务。可以手动用 GIMP 打开每个图片,输出成新格式,但这基本是最差的选择,费时费力。
上述方法有一个漂亮且简洁的变种,即基于 shell 的方案:
```
$ convert 001.jpeg 001.png
$ convert 002.jpeg 002.png
$ convert 003.jpeg 003.png
... 略 ...
```
对于初学者而言,这是一个不小的转变,而且看起来是个不小的改进:不再需要图形界面和不断的鼠标点击,但仍然是费力的。
进一步改进:
```
$ for i in *jpeg; do convert $i $i.png ; done
```
至少,这一步设置好任务执行,让你节省时间去做更有价值的事情。但问题来了,这仍然是串行操作;一张图片转换完成后,队列中的下一张进行转换,依此类推直到全部完成。
使用 Parallel:
```
$ find . -name "*jpeg" | parallel -I% --max-args 1 convert % %.png
```
这是两条命令的组合:`find` 命令,用于收集需要操作的对象;`parallel` 命令,用于对象排序并确保每个对象按需处理。
* `find . -name "*jpeg"` 查找当前目录下以 `jpeg` 结尾的所有文件。
* `parallel` 调用 GNU Parallel。
* `-I%` 创建了一个占位符 `%`,代表 `find` 传递给 Parallel 的内容。如果不使用占位符,你需要对 `find` 命令的每一个结果手动编写一个命令,而这恰恰是你想要避免的。
* `--max-args 1` 给出 Parallel 从队列获取新对象的速率限制。考虑到 Parallel 运行的命令只需要一个文件输入,这里将速率限制设置为 1。假如你需要执行更复杂的命令需要两个文件输入例如 `cat 001.txt 002.txt > new.txt`),你需要将速率限制设置为 2。
* `convert % %.png` 是你希望 Parallel 执行的命令。
组合命令的执行效果如下:`find` 命令收集所有相关的文件信息并传递给 `parallel`,后者(使用当前参数)启动一个任务,(无需等待任务完成)立即获取参数行中的下一个参数(注:管道输出的每一行对应 `parallel` 的一个参数所有参数构成参数行只要你的主机没有瘫痪Parallel 会不断做这样的操作。旧任务完成后Parallel 会分配新任务,直到所有数据都处理完成。不使用 Parallel 完成任务大约需要 10 分钟,使用后仅需 3 至 5 分钟。
### 多个输入
只要你熟悉 `find` 和 `xargs`(整体被称为 GNU 查找工具,或 `findutils`,就会发现 `find` 命令是一个完美的 Parallel 数据提供者。它提供了灵活的接口,大多数 Linux 用户已经很习惯使用,即使对于初学者也很容易学习。
`find` 命令十分直截了当:你向 `find` 提供搜索路径和待查找文件的一部分信息。可以使用通配符完成模糊搜索;在下面的例子中,星号匹配任何字符,故 `find` 定位(文件名)以字符 `searchterm` 结尾的全部文件:
```
$ find /path/to/directory -name "*searchterm"
```
默认情况下,`find` 逐行返回搜索结果,每个结果对应 1 行:
```
$ find ~/graphics -name "*jpg"
/home/seth/graphics/001.jpg
/home/seth/graphics/cat.jpg
/home/seth/graphics/penguin.jpg
/home/seth/graphics/IMG_0135.jpg
```
当使用管道将 `find` 的结果传递给 `parallel` 时,每一行中的文件路径被视为 `parallel` 命令的一个参数。另一方面,如果你需要使用命令处理多个参数,你可以改变队列数据传递给 `parallel` 的方式。
下面先给出一个不那么真实的例子,后续会做一些修改使其更加有意义。如果你安装了 GNU Parallel你可以跟着这个例子操作。
假设你有 4 个文件,按照每行一个文件的方式列出,具体如下:
```
$ echo ada > ada ; echo lovelace > lovelace
$ echo richard > richard ; echo stallman > stallman
$ ls -1
ada
lovelace
richard
stallman
```
你需要将两个文件合并成第三个文件后者同时包含前两个文件的内容。这种情况下Parallel 需要访问两个文件,使用 `-I%` 变量的方式不符合本例的预期。
Parallel 默认情况下读取 1 个队列对象:
```
$ ls -1 | parallel echo
ada
lovelace
richard
stallman
```
现在让 Parallel 每个任务使用 2 个队列对象:
```
$ ls -1 | parallel --max-args=2 echo
ada lovelace
richard stallman
```
现在,我们看到行已经被合并;具体而言,`ls -1` 的两个查询结果会被同时传送给 Parallel。传送给 Parallel 的参数涉及了任务所需的 2 个文件,但目前还只是 1 个有效参数:(对于两个任务分别为)"ada lovelace" 和 "richard stallman"。你真正需要的是每个任务对应 2 个独立的参数。
值得庆幸的是Parallel 本身提供了上述所需的解析功能。如果你将 `--max-args` 设置为 `2`,那么 `{1}``{2}` 这两个变量分别代表传入参数的第一和第二部分:
```
$ ls -1 | parallel --max-args=2 cat {1} {2} ">" {1}_{2}.person
```
在上面的命令中,变量 `{1}` 值为 `ada``richard` (取决于你选取的任务),变量 `{2}` 值为 `lovelace``stallman`。通过使用重定向符号(放到引号中,防止被 Bash 识别,以便 Parallel 使用),(两个)文件的内容被分别重定向至新文件 `ada_lovelace.person``richard_stallman.person`
```
$ ls -1
ada
ada_lovelace.person
lovelace
richard
richard_stallman.person
stallman
$ cat ada_*person
ada lovelace
$ cat ri*person
richard stallman
```
如果你整天处理大量几百 MB 大小的日志文件,那么(上述)并行处理文本的方法对你帮忙很大;否则,上述例子只是个用于上手的示例。
然而,这种处理方法对于很多文本处理之外的操作也有很大帮助。下面是来自电影产业的真实案例,其中需要将一个目录中的视频文件和(对应的)音频文件进行合并。
```
$ ls -1
12_LS_establishing-manor.avi
12_wildsound.flac
14_butler-dialogue-mixed.flac
14_MS_butler.avi
...略...
```
使用同样的方法,使用下面这个简单命令即可并行地合并文件:
```
$ ls -1 | parallel --max-args=2 ffmpeg -i {1} -i {2} -vcodec copy -acodec copy {1}.mkv
```
### 简单粗暴的方式
上述花哨的输入输出处理不一定对所有人的口味。如果你希望更直接一些,可以将一堆命令甩给 Parallel然后去干些其它事情。
首先,需要创建一个文本文件,每行包含一个命令:
```
$ cat jobs2run
bzip2 oldstuff.tar
oggenc music.flac
opusenc ambiance.wav
convert bigfile.tiff small.jpeg
ffmepg -i foo.avi -v:b 12000k foo.mp4
xsltproc --output build/tmp.fo style/dm.xsl src/tmp.xml
bzip2 archive.tar
```
接着,将文件传递给 Parallel
```
$ parallel --jobs 6 < jobs2run
```
现在文件中对应的全部任务都在被 Parallel 执行。如果任务数量超过允许的数目(译者注:应该是 --jobs 指定的数目或默认值Parallel 会创建并维护一个队列,直到任务全部完成。
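顺带一提GNU Parallel 还支持用 `:::` 直接在命令行上提供参数,下面是一个快速验证其行为的最小示例:
```
$ parallel echo ::: ada lovelace richard stallman
```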
### 更多内容
GNU Parallel 是个强大而灵活的工具,还有很多很多用例无法在本文中讲述。工具的 man 页面提供很多非常酷的例子可供你参考,包括通过 SSH 远程执行和在 Parallel 命令中使用 Bash 函数等。[YouTube][2] 上甚至有一个系列,包含大量操作演示,让你可以直接从 GNU Parallel 团队学习。GNU Paralle 的主要维护者还发布了官方使用指导手册,可以从 [Lulu.com][3] 获取。
GNU Parallel 有可能改变你完成计算的方式;即使没有,也会至少改变你主机花在计算上的时间。马上上手试试吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/gnu-parallel
作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[1]:https://www.gnu.org/software/parallel
[2]:https://www.youtube.com/watch?v=OpaiGYxkSuQ&list=PL284C9FF2488BC6D1
[3]:http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

View File

@ -0,0 +1,82 @@
你可以用 Linux 中的 IP 工具做 3 件有用的事情
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
`ifconfig` 命令在 Linux 上被弃用已有十多年的时间了,而 `iproute2` 项目包含了神奇的工具 `ip`。许多在线教程资源仍然采用旧的命令行工具,如 `ifconfig`、`route` 和 `netstat`。本教程的目标是分享一些可以使用 `ip` 工具轻松完成的网络相关的事情。
### 找出你的 IP 地址
```
[dneary@host]$ ip addr show
[snip]
44: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 5c:e0:c5:c7:f0:f1 brd ff:ff:ff:ff:ff:ff
        inet 10.16.196.113/23 brd 10.16.197.255 scope global dynamic wlp4s0
        valid_lft 74830sec preferred_lft 74830sec
        inet6 fe80::5ee0:c5ff:fec7:f0f1/64 scope link
        valid_lft forever preferred_lft forever
```
`ip addr show` 会告诉你很多关于你的所有网络链接设备的信息。在这里我的无线以太网卡wlp4s0的 IPv4 地址(`inet` 字段)是 `10.16.196.113/23`。`/23` 表示 32 位 IP 地址中的 23 位将被该子网中的所有 IP 地址共享。子网中的 IP 地址范围从 `10.16.196.0` 到 `10.16.197.254`。子网的广播地址IP 地址后面的 `brd` 字段)`10.16.197.255` 保留给子网上所有主机的广播流量。
例如,我们可以使用 `ip addr show dev wlp4s0` 来只显示单个设备的信息。
### 显示你的路由表
```
[dneary@host]$ ip route list
default via 10.16.197.254 dev wlp4s0 proto static metric 600
10.16.196.0/23 dev wlp4s0 proto kernel scope link src 10.16.196.113 metric 601
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
```
路由表是本地主机确定网络流量去向的方式。它包含一组路标,指明流量应发往哪个接口,以及路途中的下一站是哪里。
如果你运行任何虚拟机或容器,它们将获得自己的 IP 地址和子网,这可能会使这些路由表非常复杂,但在单个主机中,通常有两条指令。对于本地流量,将其发送到本地以太网上,并且网络交换机将找出(使用称为 ARP 的协议)哪个主机拥有目标 IP 地址,并且要将流量发送到哪里。对于到互联网的流量,将其发送到本地网关节点,它将更好地了解如何到达目的地。
在上面的情况中,第一行代表外部流量的外部网关,第二行代表本地流量,第三行代表主机上运行的虚拟机的虚拟网桥,但该链接当前未激活。
### 监视你的网络配置
```
[dneary@host]$ ip monitor all
[dneary@host]$ ip -s link list wlp4s0
```
`ip monitor` 命令可用于监视路由表中的更改,网络接口上的网络寻址或本地主机上 ARP 表的更改。此命令在调试与容器和网络相关的网络问题时特别有用,如当两个虚拟机应该能彼此通信,但实际不能。
在使用 `all` 时,`ip monitor` 会报告带有 [LINK](网络接口更改)、[ROUTE](路由表更改)、[ADDR]IP 地址更改)或 [NEIGH](与马无关,是与邻居的 ARP 地址相关的变化)前缀的变更。
你还可以监视特定对象上的更改(例如,特定的路由表或 IP 地址)。
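例如,只监视路由表的变更:
```
$ ip monitor route
```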
另一个适用于许多命令的有用选项是 `ip -s`,它提供了一些统计信息。添加第二个 `-s` 选项可以添加更多统计信息。上面的 `ip -s link list wlp4s0` 会给出很多关于接收和发送的数据包的信息、丢弃的数据包数量、检测到的错误等等。
### 提示:缩短你的命令
一般来说,对于 `ip` 工具,你只需要输入足以唯一标识你想要做的事情的字母。你可以使用 `ip mon` 来代替 `ip monitor`;可以使用 `ip a l` 来代替 `ip addr list`;可以使用 `ip r` 来代替 `ip route``ip link list` 则可以缩写为 `ip l ls`。要了解可用于更改命令行为的许多选项,请浏览 [ip 手册页][1]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/useful-things-you-can-do-with-IP-tool-Linux
作者:[Dave Neary][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dneary
[1]:https://www.systutorials.com/docs/linux/man/8-ip-route/

View File

@ -0,0 +1,72 @@
保护你的 Fedora 系统免受这个 DHCP 漏洞
======
![](https://fedoramagazine.org/wp-content/uploads/2018/05/dhcp-cve-816x345.jpg)
今天早些时候dhcp-client 中的一个严重安全漏洞被发现并披露。此 DHCP 漏洞会对你的系统和数据造成高风险,尤其是在使用不受信任的网络(如不属于你的 WiFi 接入点)时。请继续阅读,了解如何保护你的 Fedora 系统。
动态主机配置协议DHCP能让你的系统从其加入的网络获取配置。你的系统会请求 DHCP 数据,通常由路由器等服务器应答。服务器为你的系统提供必要的数据以完成自我配置,例如你的系统在加入无线网络时就是这样正确完成网络配置的。
但是,本地网络上的攻击者可能会利用此漏洞。通过利用在 NetworkManager 下运行的 dhcp-client 脚本中的漏洞,攻击者可能能够在系统上以 root 权限运行任意命令。这个 DHCP 漏洞使你的系统和数据处于高风险状态。该漏洞已被分配编号 CVE-2018-1111并且有 [Bugzilla 记录来跟踪此 bug][1]。
### 防范这个 DHCP 漏洞
新的 dhcp 软件包包含针对 Fedora 26、27、28 以及 Rawhide 的修复程序。维护人员已将这些更新提交到 updates-testing 仓库。对于大多数用户而言,它们应该会在这篇文章发布后一天左右出现在稳定仓库中。所需的软件包是:
* Fedora 26: dhcp-4.3.5-11.fc26
* Fedora 27: dhcp-4.3.6-10.fc27
* Fedora 28: dhcp-4.3.6-20.fc28
* Rawhide: dhcp-4.3.6-21.fc29
#### 更新稳定的 Fedora 系统
要在稳定的 Fedora 版本上立即更新,请[使用 sudo ][2]运行此命令。如有必要,请在提示时输入你的密码:
```
sudo dnf --refresh --enablerepo=updates-testing update dhcp-client
```
之后,使用标准稳定仓库进行更新。要从稳定的仓库更新 Fedora 系统,请使用以下命令:
```
sudo dnf --refresh update dhcp-client
```
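更新完成后,可以确认已安装的版本不低于上面列出的修复版本(以 Fedora 28 为例):
```
rpm -q dhcp-client
```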
#### 更新 Rawhide 系统
如果你的系统是 Rawhide请使用以下命令立即下载和更新软件包
```
mkdir dhcp && cd dhcp
koji download-build --arch={x86_64,noarch} dhcp-4.3.6-21.fc29
sudo dnf update ./dhcp-*.rpm
```
在每日的 Rawhide compose 后,只需运行 sudo dnf update 即可获取更新。
### Fedora Atomic Host
针对 Fedora Atomic Host 的修复程序版本为 28.20180515.1。要获得更新,请运行以下命令:
```
atomic host upgrade -r
```
此命令将重启系统以应用升级。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/protect-fedora-system-dhcp-flaw/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://bugzilla.redhat.com/show_bug.cgi?id=1567974
[2]:https://fedoramagazine.org/howto-use-sudo/

View File

@ -0,0 +1,256 @@
You-Get - 支持 80+ 网站的命令行多媒体下载器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/you-get-1-720x340.jpg)
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **"You-Get"**。这是一个 Python 编写的命令行下载器,可以让你从 YoutubeFacebookTwitter 等很多热门网站下载图片,音频和视频。目前该下载器支持 80+ 站点,点击[**这里**][1]查看所有支持的网站。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 就会使用 Google 搜索并下载相关度最高的视频。另外值得一提的特性是,它允许你暂停和恢复下载过程。它是一个完全自由、开源及跨平台的应用,适用于 Linux、MacOS 及 Windows。
### 安装 You-Get
确保你已经安装如下依赖项:
+ Python 3
+ FFmpeg (强烈推荐) 或 Libav
+ (可选) RTMPDump
有多种方式安装 You-Get其中官方推荐采用 pip 包管理器安装。如果你还没有安装 pip可以参考如下链接
[如何使用 pip 管理 Python 软件包][2]
需要注意的是,你需要安装 Python 3 版本的 pip。
接下来,运行如下命令安装 You-Get
```
$ pip3 install you-get
```
可以使用命令升级 You-Get 至最新版本:
```
$ pip3 install --upgrade you-get
```
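随后可以确认安装是否成功(假设 pip 的安装目录已在 PATH 中;若 `--version` 选项在你的版本中不可用,也可以直接运行 `you-get` 查看帮助):
```
$ you-get --version
```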
### 开始使用 You-Get
使用方式与 Youtube-dl 工具基本一致。
**下载视频**
下载视频,只需运行:
```
$ you-get https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
stream:
- itag: 22
container: mp4
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
Downloading The Last of The Mohicans by Alexandro Querevalú.mp4 ...
100% ( 56.9/ 56.9MB) ├███████████████████████████████████████████████████████┤[1/1] 752 kB/s
```
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 **“--info”** 或 **“-i”** 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
```
$ you-get -i https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --info https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例如下:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
streams: # Available quality and codecs
[ DASH ] ____________________________________
- itag: 137
container: mp4
quality: 1920x1080
size: 101.9 MiB (106816582 bytes)
# download-with: you-get --itag=137 [URL]
- itag: 248
container: webm
quality: 1920x1080
size: 90.3 MiB (94640185 bytes)
# download-with: you-get --itag=248 [URL]
- itag: 136
container: mp4
quality: 1280x720
size: 56.9 MiB (59672392 bytes)
# download-with: you-get --itag=136 [URL]
- itag: 247
container: webm
quality: 1280x720
size: 52.6 MiB (55170859 bytes)
# download-with: you-get --itag=247 [URL]
- itag: 135
container: mp4
quality: 854x480
size: 32.2 MiB (33757856 bytes)
# download-with: you-get --itag=135 [URL]
- itag: 244
container: webm
quality: 854x480
size: 28.0 MiB (29369484 bytes)
# download-with: you-get --itag=244 [URL]
[ DEFAULT ] _________________________________
- itag: 22
container: mp4
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
```
默认情况下You-Get 会下载标记为 **DEFAULT** 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
```
$ you-get --itag=244 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**下载音频**
执行下面的命令,可以从 soundcloud 网站下载音频:
```
$ you-get 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
Site: SoundCloud.com
Title: ALL GIRLS ARE THE SAME (PROD. NICK MIRA)
Type: MP3 (audio/mpeg)
Size: 2.58 MiB (2710046 Bytes)
Downloading ALL GIRLS ARE THE SAME (PROD. NICK MIRA).mp3 ...
100% ( 2.6/ 2.6MB) ├███████████████████████████████████████████████████████┤[1/1] 983 kB/s
```
查看音频文件细节,使用 **-i** 参数:
```
$ you-get -i 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
```
**下载图片**
运行如下命令下载图片:
```
$ you-get https://pixabay.com/en/mountain-crumpled-cyanus-montanus-3393209/
```
You-Get 也可以下载网页中的全部图片:
```
$ you-get https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
```
**搜索视频**
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URLYou-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(译者注Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
```
$ you-get 'Micheal Jackson'
Google Videos search:
Best matched result:
site: YouTube
title: Michael Jackson - Beat It (Official Video)
stream:
- itag: 43
container: webm
quality: medium
size: 29.4 MiB (30792050 bytes)
# download-with: you-get --itag=43 [URL]
Downloading Michael Jackson - Beat It (Official Video).webm ...
100% ( 29.4/ 29.4MB) ├███████████████████████████████████████████████████████┤[1/1] 2 MB/s
```
**观看视频**
You-Get 可以将在线视频导流至你的视频播放器或浏览器,跳过广告和评论部分。(译者注:使用 -p 参数需要对应的 vlc/chrominum 命令可以调用,一般适用于具有图形化界面的操作系统)。
以 VLC 视频播放器为例,使用如下命令在其中观看视频:
```
$ you-get -p vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --player vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
类似地,将视频导流至以 chromium 为例的浏览器中,使用如下命令:
```
$ you-get -p chromium https://www.youtube.com/watch?v=HXaglTFJLMc
```
![][3]
在上述屏幕截图中,可以看到并没有广告和评论部分,只是一个包含视频的简单页面。
**设置下载视频的路径及文件名**
默认情况下,视频使用其标题作为文件名,下载至当前工作目录。当然,你可以按照你的喜好进行更改:使用 `--output-dir`/`-o` 参数可以指定路径,使用 `--output-filename`/`-O` 参数可以指定下载文件的文件名。
```
$ you-get -o ~/Videos -O output.mp4 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**暂停和恢复下载**
**CTRL+C** 可以取消下载。一个以 **.download** 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
当文件下载完成后,以 .download 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 `--force`/`-f` 参数。
查看命令的帮助部分可以获取更多细节,命令如下:
```
$ you-get --help
```
这次的分享到此结束,后续还会介绍更多的优秀工具,敬请期待!
感谢各位阅读!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/you-get-a-cli-downloader-to-download-media-from-80-websites/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://you-get.org/#supported-sites
[2]:https://www.ostechnix.com/manage-python-packages-using-pip/
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/you-get.jpg