怎样通过示弱增强领导力
======

![](https://img.linux.net.cn/data/attachment/album/201908/25/052430b01hc00qrcz99p9w.jpg)

传统观念中的领导者总是强壮、大胆、果决的。我也确实见过一些拥有这些特点的领导者。但更多时候,领导者也许看起来比传统印象中的领导者要更脆弱些,他们内心有很多这样的疑问:我的决策正确吗?我真的适合这个职位吗?我有没有在做最该做的事情?

解决这些问题的方法是把问题说出来。把问题憋在心里只会助长它们,一名开明的领导者更倾向于把自己的脆弱之处暴露出来,这样我们才能从有过相同经验的人那里得到慰藉。

为了证明这个观点,我来讲一个故事。

### 一个扰人的想法

假如你在教育领域工作,你会发现大家更倾向于创造[一个包容性的环境][1] —— 一个鼓励多样性繁荣发展的环境。长话短说,我一直担心自己是出于营造包容性环境的考量而进行的“多样性雇佣”,也就是说,人事雇佣我时看重的是我的性别而非能力,这个想法一直困扰着我。随之而来的是自我怀疑:我真的是这个岗位的最佳人选吗?还是只是因为我是个女人?许多年来,我都认为公司雇佣我是因为我的能力最好。但如今却发现,对那些雇主们来说,与我的能力相比,他们似乎更关注我的性别。

我开解自己:我到底是因为什么被雇佣并不重要,我知道我是这个职位的最佳人选,而且我会用实际行动去证明。我工作很努力,达到过预期,也犯过错,也收获了很多,我做了一个老板想要自己雇员做的一切事情。

但那个“多样性雇佣”问题的阴影并未因此散去。我无法摆脱它,甚至像躲避蛇蝎一样回避一切与之相关的话题,最终我意识到,拒绝谈论它意味着我能做的只有直面它。如果我继续回避这个问题,早晚会影响到我的工作,这是我最不希望看到的。

### 倾诉心中的困扰

直接谈论多样性和包容性这个话题有点尴尬,在进行自我剖析之前有几个问题需要考虑:

  * 我们能够相信我们的同事,能够在他们面前表露脆弱吗?
  * 一个团队的领导者在同事面前表露脆弱合适吗?
  * 如果我玩脱了呢?会不会影响我的工作?

于是我和一位主管在午餐时间进行了一场小型的 Q&A 会议,这位主管负责着集团很多领域,并且以正直坦率著称。一位女同事问他:“我是因为多样性才被招进来的吗?”他停下手头的工作,花了很长时间向一屋子女性员工解释了这件事。我不想复述他讲话的全部内容,只说对我触动最大的几句:如果你知道自己能够胜任这个职位,并且面试很顺利,那么不必质疑招聘的结果。每个怀疑自己是因为多样性雇佣进公司的人,私下都有自己的问题,你不必重蹈他们的覆辙。

完毕。

我很希望我能由衷地说我放下这个问题了,但事实上我没有。这问题挥之不去:万一我就是被破格录取的那个呢?万一我就是多样性雇佣的那个呢?我认识到,自己会不可避免地反复思考这些问题。

几周后我和这位主管进行了一次一对一谈话。在谈话的末尾,我提到作为一位女性,自己很欣赏他那番对于多样性和包容性的坦率发言。当得知领导很有交流的意愿时,谈论这种话题变得轻松许多。我也向他提出了最初的问题:“我是因为多样性才被雇佣的吗?”他回答得很干脆:“我们谈论过这个问题。”谈话后我意识到,我急切地想找人谈论这些需要勇气的问题,其实只是因为我需要有一个人的关心、倾听和好言劝说。

但正因为我有展露脆弱的勇气——去和那位主管谈论我的问题——我应对内心秘密困扰的能力提高了。我觉得身轻如燕,我开始组织各种对话,主要围绕着内隐偏见及其引起的一系列问题、怎样增强自身的包容性和多样性等。通过这些经历,我发现每个人对于多样性都有不同的认识,如果我只囿于自己的秘密,就不会有机会组织和参与这些精彩的对话。

我有谈论内心脆弱的勇气,我希望你也有。

我们可以谈谈那些影响我们领导力的秘密,这样从任何意义上来说,我们距离成为一位开明的领导就近了一些。那么,适当示弱有帮助你成为更好的领导者吗?

### 作者简介

Angela Robertson 是微软的一名高管。她和她的团队对社群互助有着极大热情,并参与开源工作。在加入微软之前,Angela 就职于红帽公司。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader

作者:[Angela Robertson][a]
译者:[Valoniakim](https://github.com/Valoniakim)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/arobertson98
[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11272-1.html)
[#]: subject: (A brief history of text-based games and open source)
[#]: via: (https://opensource.com/article/18/7/interactive-fiction-tools)
[#]: author: (Jason Mclntosh https://opensource.com/users/jmac)

互动小说及其开源简史
======

> 了解开源如何促进互动小说的成长和发展。

![](https://img.linux.net.cn/data/attachment/album/201908/27/142657ryf4pa2lym6fe6f1.jpg)

<ruby>[互动小说技术基金会][1]<rt>Interactive Fiction Technology Foundation</rt></ruby>(IFTF)是一个非营利组织,致力于保护和改进那些用来生成我们称之为<ruby>互动小说<rt>interactive fiction</rt></ruby>的数字艺术形式的技术。当 Opensource.com 的一位社区版主提出一篇关于 IFTF、它支持的技术与服务,以及它如何与开源相交织的文章时,我发现这对于我讲了几十年的开源故事来说是个新颖的视角。互动小说的历史比<ruby>自由及开源软件<rt>Free and Open Source Software</rt></ruby>(FOSS)运动的历史还要长,但同时也与之密切相关。希望你们能喜欢我在这里的分享。

### 定义和历史

对于我来说,互动小说这个术语涵盖了读者主要通过文本与之交互的任何视频游戏或数字化艺术作品。这个术语起源于 20 世纪 80 年代,当时由语法解析器驱动的文本冒险游戏确立了什么是家用电脑娱乐,在美国主要以 [魔域][2]、[银河系漫游指南][3] 和 [Infocom][4] 公司的其它佳作为代表。在 20 世纪 90 年代,它的主流商业价值被挖掘殆尽,但在线爱好者社区接过了该传统,继续发布这类游戏和游戏创建工具。

在四分之一个世纪之后的今天,互动小说包括了品种繁多并且妙趣橫生的作品,从充满谜题的文字冒险游戏到衍生改良的超文本类型。定期举办的在线竞赛和节日为品鉴和试玩新作品提供了一个好地方。英语互动小说世界每年都会举办一些活动,包括 [Spring Thing][5] 和 [IFComp][6]。后者是自 1995 年以来现代互动小说的核心活动,这也使它成为在同类型活动中持续举办时间最长的游戏展示活动。[IFComp 从 2017 年开始的评选和排名记录][7] 显示了如今基于文本的游戏在形式、风格和主题方面的惊人多样性。

(作者注:以上我特指英语,因为可能出于写作方面的技术原因,互动小说社区倾向于按语言进行区分。例如也有 [法语][8] 或 [意大利语][9] 的互动小说年度活动,我就听说过至少一届的中文互动小说节。幸运的是,这些边界易于打破。在我管理 IFComp 的四年中,我们很欢迎来自国际社区的所有英语翻译工作。)

![“假冒的猴子”游戏截图][11]

*在解释器 Lectrote 上启动 Emily Short 的“假冒的猴子”新游戏(二者皆为开源软件)。*

此外由于互动小说专注于文本,它为玩家和作者都提供了最方便的平台。几乎所有能阅读数字化文本的人(包括能通过文字转语音软件等辅助技术阅读的用户)都能玩大部分的互动小说作品。同样,互动小说的创作对所有愿意学习和使用其工具和技术的作家开放。

这使我们看到了互动小说与开源的长期关系,以及自它的全盛时期以来,开源对这种艺术形式的可用性所产生的积极影响。接下来我将概述当代开源互动小说创建工具,并讨论共享源代码的互动小说作品这一古老且有点稀奇的传统。

### 开源互动小说工具的世界

一些开发平台(其中大部分是开源的)可用于创建传统的语法解析器驱动互动小说,其中用户可通过输入命令(如 `向北走`、`拾取提灯`、`收养小猫` 或 `向 Zoe 询问量子机械学`)来与游戏世界交互。20 世纪 90 年代初期出现了几个<ruby>适于魔改<rt>hacker-friendly</rt></ruby>的语法解析器游戏开发工具,其中目前还在使用的有 [TADS][12]、[Alan][13] 和 [Quest][14],它们都是开源的,并且后两者兼容 FOSS 许可证。

其中最出名的是 [Inform][15],1993 年 Graham Nelson 发布了第一个版本,目前由 Nelson 领导的一个团队进行维护。Inform 的源代码是不太寻常的半开源:Inform 6 是前一个主要版本,[它通过 Artistic 许可证开放源码][16]。它与开源的关系比表面看起来更直接,因为专有的 Inform 7 将 Inform 6 作为其核心,可以让它在将作品编译为机器码之前,将其 [杰出的自然语言语法][17](LCTT 译注:此链接已遗失)翻译为其前一代的那种更类似 C 的代码。

![inform 7 集成式开发环境截图][19]

*Inform 7 集成式开发环境,打开了文档以及一个示例项目*

Inform 游戏运行在虚拟机上,这是 Infocom 时代的遗留产物。当时的发行者为了让同一个游戏可以运行在 Apple II、Commodore 64、Atari 800 以及其它种类的“[家用计算机][20]”上,将虚拟机作为解决方案。这些原本流行的操作系统中只有少数至今仍存在,但 Inform 的虚拟机使得它创建的作品能够通过 Inform 解释器运行在任何的计算机上。这些虚拟机包括相对现代的 [Glulx][21],或者通过对 Infocom 过去的虚拟机进行逆向工程克隆得到的可爱的古董 [Z-machine][22]。现在,流行的跨平台解释器包括如 [Lectrote][23] 和 [Gargoyle][24] 等桌面程序,以及如 [Quixe][25] 和 [Parchment][26] 等基于浏览器的程序。以上所有均为开源软件。

如其它的流行开源项目一样,即使 Inform 的发展进程随着它的成熟而逐渐变缓,它为我们留下的最重要的财富仍是其活跃透明的生态环境。对于 Inform 来说,这些财富包括前面提到的解释器、[一套语言扩展][27](通常混合使用 Inform 6 和 Inform 7 写成),当然也包括所有用它们写成并分享于全世界的作品,有的时候也包括那些源代码。(在这篇文章的后半部分我会回到这个话题。)

发明于 21 世纪的互动小说创建工具,力求在传统的语法解析器之外探索一种新的玩家交互方式,即创建任何现代 Web 浏览器都能加载的超文本驱动作品。其中的领头羊是 [Twine][28],原本由 Chris Klimas 在 2009 年开发,目前是 [GNU 许可证开源项目][29],有许多贡献者正在积极开发。(事实上,[Twine][30] 的开源软件血统可追溯到 [TiddlyWiki][31],Klimas 的项目最初是从该项目衍生而来的。)

对于互动小说开发来说,Twine 代表着一系列最开放及最可用的方法。由于它天生的 FOSS 属性,它将其输出渲染为一个自包含的网站,不依赖于需要进一步特殊解析的机器码,而是使用开放并且成熟的 HTML、CSS 和 JavaScript 标准。作为一个创建工具,Twine 能够根据创建者的技能等级,展示出与之相匹配的复杂度。拥有很少或没有编程知识的用户能够创建简单但是可玩的互动小说作品,而那些拥有更多编码和设计技能(包括通过开发 Twine 游戏获得的技能提升)的用户能够创建更复杂的项目。这也难怪近几年 Twine 在教育领域的曝光率和流行度有不小的提升。

另一些值得注意的开源互动小说开发项目包括由 Ian Millington 开发的以 MIT 许可证发布的 [Undum][32],以及由 Dan Fabulich 和 [Choice of Games][34] 团队开发的 [ChoiceScript][33],两者也专注于将 Web 浏览器作为游戏平台。除了以上专用的开发系统以外,基于 Web 的互动小说还呈现给我们一个由开源作品构成的丰富、变幻的生态,比如 Furkle 的 [Twine 扩展工具集][35],以及 Liza Daly 为自己的互动小说游戏创建的名为 [Windrift][36] 的 JavaScript 框架。

### 程序、游戏,以及游戏程序

Twine 受益于 [一个致力于提供支持的长期 IFTF 计划][37],公众可以为其维护和发展提供资助。IFTF 还直接支持两个长期公共服务:IFComp 和<ruby>互动小说归档<rt>IF Archive</rt></ruby>,这两个服务都依赖并回馈开源软件和技术。

![Harmonia 开场截图][39]

*由 Liza Daly 开发的“Harmonia”的开场画面,该游戏使用 Windrift 开源互动小说创建框架创建。*

自 2014 年以来,用于运行 IFComp 网站的基于 Perl 和 JavaScript 的应用程序一直是 [一个共享源代码项目][40],它反映了 [互动小说特有子组件使用的 FOSS 许可证是个大杂烩][41],其中包括那些可以让以语法解析器驱动的参赛作品在 Web 浏览器中运行的各式各样的代码库。在 1992 年上线并 [在 2017 年成为一个 IFTF 项目][43] 的 <ruby>[互动小说归档][42]<rt>IF Archive</rt></ruby>,是一套完全基于古老且稳定的互联网标准的镜像仓库,只使用了 [一点开源 Python 脚本][44] 用来处理索引。

### 最后,也是最有趣的部分,让我们聊聊开源文字游戏

互动小说归档的主体 [由游戏组成][45],当然,是那些历经岁月的游戏。它们反映了数十年来游戏设计趋势和互动小说工具的不断发展。

许多互动小说作品都共享其源代码,要快速找到它们很简单 —— [在 IFDB 中搜索标签 “source available”][46]。IFDB 是另一个长期运行的互动小说社区服务,由 TADS 的创立者 Mike Roberts 私人运营。对更加简单的界面感到舒适的用户,也可以浏览互动小说归档的 [games/source 目录][47],该目录按开发平台和编写语言对内容进行分组(也有很多作品,由于太繁杂或太古老而无法分类,只能浮于顶级目录)。

对这些代码共享游戏随机抽取几个样本,便能揭示一个有趣的窘境:与更广阔的开源软件世界不同,互动小说社区缺少一种普遍认同的方式来授权它生成的所有代码。与软件工具(包括我们用来创建互动小说的所有工具)不同的是,从字面意思上讲,互动小说游戏是一件艺术作品。这意味着,将面向软件的开源许可证用于互动小说游戏,并不会比将其用于其它像散文或诗歌作品更适合。但同样,互动小说游戏也是一个软件,它展示了创建者希望合法地与世界分享的源代码模式和技术。一个拥有开源意识的互动小说创建者会怎么做呢?

有些游戏通过将其代码放到公共领域来解决这一问题,或者通过明确的许可证,亦或者如 [42 年前由 Crowther 和 Woods 开发的“<ruby>冒险之旅<rt>Adventure</rt></ruby>”][48] 一样通过社区发布。一些人试图将其中的不同部分分开,应用他们自己的许可证,允许免费复用游戏公开的业务逻辑,但禁止针对其散文内容的再创作。这是我在开源自己的游戏 <ruby>[莺巢][49]<rt>The Warbler’s Nest</rt></ruby> 时采取的策略。天知道这是否能在法律上站得住脚,但我当时没有更好的主意。

当然,你也会发现一些作品对所有部分使用单一的许可证,而不在意反对意见。一个突出的例子就是 [Emily Short 的史诗作品“<ruby>假冒的猴子<rt>Counterfeit Monkey</rt></ruby>”][50],其全部采用 Creative Commons 4.0 许可证发布。[CC 对其应用于代码感到不满][51],但你可以认为 [Inform 7 源码这种不寻常的散文风格特性][52] 至少比传统的软件项目更适合 CC 许可证。

### 接下来要做什么呢,冒险者?

如果你希望开始探索互动小说的世界,这里有几个链接可供你参考:

+ 如上所述,[IFDB][53] 和[互动小说归档][54]都提供了可浏览的界面,用于浏览跨越 40 多年的互动小说作品。其中大部分可以在 Web 浏览器中玩,但有些需要额外的解释器程序,IFDB 能帮助你找到并安装它们。[IFComp 的年度结果页面][55]提供了另一个视角,帮助你了解最佳的免费和归档可用作品。
+ [互动小说技术基金会][56]是一个非营利组织,主要帮助并支持 Twine、IFComp 和互动小说归档的发展,以及提升互动小说的无障碍功能、探索互动小说在教育领域中的应用等等。加入其[邮件列表][57],可以接收 IFTF 的每月资讯,浏览其[博客][58],亦或浏览[一些主题商品][59]。
+ 在今年的早些时候,John Paul Wohlscheid 写了这篇[关于开源互动小说工具][60]的文章。它涵盖了一些这里没有提及的平台,所以如果你还想了解更多,请看一看这篇文章。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/interactive-fiction-tools

作者:[Jason Mclntosh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[cycoe](https://github.com/cycoe)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jmac
[1]:http://iftechfoundation.org/
[2]:https://en.wikipedia.org/wiki/Zork
[3]:https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)
[4]:https://en.wikipedia.org/wiki/Infocom
[5]:http://www.springthing.net/
[6]:http://ifcomp.org/
[7]:https://ifcomp.org/comp/2017
[8]:http://www.fiction-interactive.fr/
[9]:http://www.oldgamesitalia.net/content/marmellata-davventura-2018
[10]:/file/403396
[11]:https://opensource.com/sites/default/files/uploads/monkey.png (counterfeit monkey game screenshot)
[12]:http://tads.org/
[13]:https://www.alanif.se/
[14]:http://textadventures.co.uk/quest/
[15]:http://inform7.com/
[16]:https://github.com/DavidKinder/Inform6
[17]:http://inform7.com/learn/man/RB_4_1.html#e307
[18]:/file/403386
[19]:https://opensource.com/sites/default/files/uploads/inform.png (inform 7 IDE screenshot)
[20]:https://www.youtube.com/watch?v=bu55q_3YtOY
[21]:http://ifwiki.org/index.php/Glulx
[22]:http://ifwiki.org/index.php/Z-machine
[23]:https://github.com/erkyrath/lectrote
[24]:https://github.com/garglk/garglk/
[25]:http://eblong.com/zarf/glulx/quixe/
[26]:https://github.com/curiousdannii/parchment
[27]:https://github.com/i7/extensions
[28]:http://twinery.org/
[29]:https://github.com/klembot/twinejs
[30]:/article/18/7/twine-vs-renpy-interactive-fiction
[31]:https://tiddlywiki.com/
[32]:https://github.com/idmillington/undum
[33]:https://github.com/dfabulich/choicescript
[34]:https://www.choiceofgames.com/
[35]:https://github.com/furkle
[36]:https://github.com/lizadaly/windrift
[37]:http://iftechfoundation.org/committees/twine/
[38]:/file/403391
[39]:https://opensource.com/sites/default/files/uploads/harmonia.png (Harmonia opening screen shot)
[40]:https://github.com/iftechfoundation/ifcomp
[41]:https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md
[42]:https://www.ifarchive.org/
[43]:http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html
[44]:https://github.com/iftechfoundation/ifarchive-ifmap-py
[45]:https://www.ifarchive.org/indexes/if-archiveXgames
[46]:http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22
[47]:https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html
[48]:http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv
[49]:https://github.com/jmacdotorg/warblers-nest/
[50]:https://github.com/i7/counterfeit-monkey
[51]:https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software
[52]:https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x
[53]:http://ifdb.tads.org/
[54]:https://ifarchive.org/
[55]:https://ifcomp.org/comp/last_comp
[56]:http://iftechfoundation.org/
[57]:http://iftechfoundation.org/cgi-bin/mailman/listinfo/friends
[58]:http://blog.iftechfoundation.org/
[59]:http://blog.iftechfoundation.org/2017-12-20-the-iftf-gift-shop-is-now-open.html
[60]:https://itsfoss.com/create-interactive-fiction/
4 种自定义 Xfce 的方式,给它一个现代化外观
======

> Xfce 是一个非常轻量的桌面环境,但它有一个缺点:看起来有点老旧。不过你没有必要沿用默认外观。让我们看看自定义 Xfce 的各种方法,给它一个现代化的、漂亮的外观。

![Customize Xfce desktop environment][1]

首先,Xfce 是最[受欢迎的桌面环境][2]之一。作为一个轻量级桌面环境,你可以在配置非常低的机器上运行 Xfce,并且它仍然能很好地工作。这是很多[轻量级 Linux 发行版][3]默认使用 Xfce 的原因之一。

一些人甚至喜欢在高端设备上使用它,理由主要是它简单、易用且占用资源少。

[Xfce][4] 自身很小,而且只提供你需要的东西。令人遗憾的一点是它的外观和感觉显得有些过时。然而,你可以简单地自定义 Xfce,使其看起来现代化和漂亮,但又不会像 Unity/GNOME 会话那样占用系统资源。

### 4 种自定义 Xfce 桌面的方式

让我们看看一些方法,通过它们改善你的 Xfce 桌面环境的外观和感觉。

默认的 Xfce 桌面环境看起来有些像这样:

![Xfce default screen][5]

如你所见,默认的 Xfce 桌面有点乏味。我们将使用主题、图标包以及更换默认 dock,使它焕然一新、令人眼前一亮。

#### 1. 在 Xfce 中更改主题

我们要做的第一件事是从 [xfce-look.org][6] 中找到一款主题。我最喜欢的 Xfce 主题是 [XFCE-D-PRO][7]。

你可以从[这里][8]下载主题,并提取到某处。

你可以把提取出的这些主题文件复制到家目录中的 `.themes` 文件夹;如果该文件夹尚不存在,可以创建一个。同样,图标主题需要放在家目录中的 `.icons` 文件夹里。
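下面是一个操作草图(假设压缩包名为 `XFCE-D-PRO-1.6.tar.xz` 且已下载到 `~/Downloads`,实际文件名以你下载到的为准):

```
# 创建存放主题和图标的目录(如果尚不存在)
mkdir -p ~/.themes ~/.icons

# 将主题压缩包解压到 ~/.themes(文件名仅为示例)
tar -xf ~/Downloads/XFCE-D-PRO-1.6.tar.xz -C ~/.themes

# 图标主题同理,解压到 ~/.icons
# tar -xf ~/Downloads/some-icon-theme.tar.xz -C ~/.icons
```
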
打开 **设置 > 外观 > 样式** 来选择该主题,注销并重新登录以查看更改。默认的 Adwaita-dark 也是一个不错的主题。

![Appearance Xfce][9]

你可以在 Xfce 上使用各种[好的 GTK 主题][10]。

#### 2. 在 Xfce 中更改图标

Xfce-look.org 也提供可供下载的图标主题。下载后提取,并放到家目录中的 `.icons` 目录。添加完成后,转到 **设置 > 外观 > 图标** 来选择这个图标主题。

![Moka icon theme][11]

我安装了 [Moka 图标集][12],它看起来令人惊艳。

![Moka theme][13]

你也可以参考我们的[令人惊艳的图标主题][14]列表。

##### 可选:通过 Synaptic 安装主题

如果你想避免手工搜索和复制文件,可以在系统中安装 Synaptic 软件包管理器,然后用它搜索并安装软件仓库中提供的主题和图标集。

```
sudo apt-get install synaptic
```

**通过 Synaptic 搜索和安装主题/图标**

打开 Synaptic,单击**搜索**,输入你想要的主题名,它将显示匹配主题的列表。勾选所需的附加依赖,并单击**应用**。这将下载并安装主题。

![Arc Theme][15]

安装后,你可以打开**外观**选项来选择想要的主题。

在我看来,这不是在 Xfce 中安装主题的最佳方法。

#### 3. 在 Xfce 中更改桌面背景

再强调一次,默认的 Xfce 桌面背景也不错。但是你可以把桌面背景更改成与你的图标和主题相匹配的东西。

要在 Xfce 中更改桌面背景,请在桌面上右击,并单击**桌面设置**。在**背景**选项卡中,选择任意一个默认壁纸或自定义壁纸。

![Changing desktop wallpapers][16]

#### 4. 在 Xfce 中更改 dock

默认 dock 也不错,恰如其分。但是,再强调一次,它看起来有点平淡。

![Docky][17]

不过,如果你想让你的 dock 更好,并带有更多一点的自定义选项,你可以安装另一个 dock。

Plank 是一个简单而轻量级的、高度可配置的 dock。

要安装 Plank,使用下面的命令:

```
sudo apt-get install plank
```

如果默认软件仓库中没有 Plank,你可以从这个 PPA 中安装它。

```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```

在使用 Plank 之前,你应该先移除默认的 dock:右键单击它,并在**面板设置**下单击**删除**。

完成后,转到 **附件 > Plank** 来启动 Plank dock。

![Plank][18]

Plank 会从你正在使用的图标主题中选取图标。因此,如果你更改图标主题,也将在 dock 中看到相应的变化。

### 总结

Xfce 是一个轻量级、快速和高度可自定义的桌面环境。如果你的系统资源有限,它能很好地胜任,并且你可以通过简单的自定义让它看起来更好。这是应用这些步骤后我的屏幕外观。

![XFCE desktop][19]

这只是半个小时的努力成果,你可以使用不同的主题/图标组合让它看起来更好。请随意在评论区分享你自定义的 Xfce 桌面截图,以及你正在使用的主题和图标组合。

--------------------------------------------------------------------------------

via: https://itsfoss.com/customize-xfce/

作者:[Ambarish Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/ambarish/
[1]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/xfce-customization.jpeg
[2]:https://itsfoss.com/best-linux-desktop-environments/
[3]:https://itsfoss.com/lightweight-linux-beginners/
[4]:https://xfce.org/
[5]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/06/1-1-800x410.jpg
[6]:http://xfce-look.org
[7]:https://www.xfce-look.org/p/1207818/XFCE-D-PRO
[8]:https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz
[9]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/4.jpg
[10]:https://itsfoss.com/best-gtk-themes/
[11]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/6.jpg
[12]:https://snwh.org/moka
[13]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/11-800x547.jpg
[14]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[15]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/5-800x531.jpg
[16]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/7-800x546.jpg
[17]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/8.jpg
[18]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/9.jpg
[19]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/07/10-800x447.jpg
Podman:一个更安全的运行容器的方式
======

> Podman 使用传统的 fork/exec 模型(相对于客户端/服务器模型)来运行容器。

![](https://img.linux.net.cn/data/attachment/album/201908/23/234924m55sn8zt3b5q8815.jpg)

在进入本文的主要主题 [Podman][1] 和容器之前,我需要先稍微深入一点 Linux 审计功能的技术细节。

### 什么是审计?

Linux 内核有一个有趣的安全功能,叫做**审计**(audit)。它允许管理员在系统上监视安全事件,并将它们记录到 `audit.log` 中,该文件可以本地存储,也可以远程存储在另一台机器上,以防止黑客试图掩盖他的踪迹。

`/etc/shadow` 文件是一个经常要监控的安全文件,因为向其添加记录可能允许攻击者获得对系统的访问权限。管理员想知道是否有任何进程修改了该文件,可以通过执行以下命令来做到这一点:

```
# auditctl -w /etc/shadow
```
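顺便一提,`auditctl` 设置的监视规则在重启后不会保留;若要持久化,通常把规则写入 `/etc/audit/rules.d/` 下的规则文件(下面的文件名为假设;`-k` 给规则加上便于检索的关键字,正文中的规则没有加,所以后面的输出里是 `key=(null)`):

```
# /etc/audit/rules.d/shadow.rules(假设的文件名)
# -w 监视文件,-p wa 只记录写和属性变更,-k 指定检索关键字
-w /etc/shadow -p wa -k shadow-watch
```
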
现在让我们看看当我修改了 `/etc/shadow` 文件会发生什么:

```
# touch /etc/shadow
# ausearch -f /etc/shadow -i -ts recent

type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow
type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
```

审计记录中有很多信息,但我重点关注的是:它记录了 root 修改了 `/etc/shadow` 文件,并且该进程的审计 UID(`auid`)的所有者是 `dwalsh`。
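审计日志也可以直接按登录 UID 过滤。下面是一个示意(假设 `dwalsh` 对应的数字 UID 是 3267,与后文一致):

```
# 按登录 UID(auid)检索审计事件
# -ul 指定 loginuid,-i 把数字解释为可读的名称
ausearch -ul 3267 -i
```
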
那么,内核是怎么知道这是 `dwalsh` 做的呢?

#### 跟踪登录 UID

登录 UID(`loginuid`)存储在 `/proc/self/loginuid` 中,它是系统上每个进程的 proc 结构的一部分。该字段只能设置一次;设置后,内核将不允许任何进程重置它。

当我登录系统时,登录程序会为我的登录过程设置 `loginuid` 字段。

我(`dwalsh`)的 UID 是 3267。

```
$ cat /proc/self/loginuid
3267
```

现在,即使我变成了 root,我的登录 UID 仍将保持不变。

```
$ sudo cat /proc/self/loginuid
3267
```

请注意,从初始登录进程 fork 并 exec 出来的每个进程都会自动继承 `loginuid`。这就是内核知道登录的人是 `dwalsh` 的方式。

### 容器

现在让我们来看看容器。

```
sudo podman run fedora cat /proc/self/loginuid
3267
```

甚至容器进程也保留了我的 `loginuid`。现在让我们用 Docker 试试。

```
sudo docker run fedora cat /proc/self/loginuid
4294967295
```

### 为什么不一样?

Podman 对于容器使用传统的 fork/exec 模型,因此容器进程是 Podman 进程的后代。Docker 使用的则是客户端/服务器模型:我执行的 `docker` 命令是 Docker 客户端工具,它通过客户端/服务器操作与 Docker 守护进程通信。然后 Docker 守护进程创建容器,并处理 stdin/stdout 与 Docker 客户端工具之间的通信。
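这种父子关系可以直接观察到。下面是一个示意(`<PID>` 为某个正在运行的容器进程的进程号,此处为假设值;输出因系统而异):

```
# -s 显示指定进程的祖先链,-p 显示 PID
# Podman 启动的容器进程的祖先链里是 podman/conmon,
# 而 Docker 的容器进程则挂在守护进程之下
pstree -s -p <PID>
```
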
进程的默认 `loginuid`(在设置 `loginuid` 之前)是 `4294967295`(LCTT 译注:2^32 - 1)。由于容器是 Docker 守护进程的后代,而 Docker 守护进程是 init 系统的子代,所以,我们看到 systemd、Docker 守护进程和容器进程全部具有相同的 `loginuid`:`4294967295`,审计系统视其为未设置审计 UID。

```
cat /proc/1/loginuid
4294967295
```

### 怎么会被滥用?

让我们来看看如果 Docker 启动的容器进程修改 `/etc/shadow` 文件会发生什么。

```
$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null)
```

在 Docker 的情形中,`auid` 是未设置的(`4294967295`);这意味着安全人员可能知道有进程修改了 `/etc/shadow` 文件,但其身份丢失了。

如果该攻击者随后删除了 Docker 容器,那么系统上就不会留下任何关于谁修改了 `/etc/shadow` 文件的跟踪信息。

现在让我们看看相同的场景在 Podman 下的情况。

```
$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null)
```

由于它使用传统的 fork/exec 方式,因此 Podman 正确记录了所有内容。

这只是观察 `/etc/shadow` 文件的一个简单示例,但审计系统对于观察系统上的进程非常有用。使用 fork/exec 的容器运行时(而不是客户端/服务器的容器运行时)来启动容器,可以让你通过审计日志保持更好的安全性。

### 最后的想法

在启动容器时,与客户端/服务器模型相比,fork/exec 模型还有许多其他不错的功能。例如,与 systemd 相关的功能包括:

  * `SD_NOTIFY`:如果将 Podman 命令放入 systemd 单元文件中,容器进程可以通过 Podman 返回通知,表明服务已准备好接收任务(见下面的单元文件草图)。这是在客户端/服务器模式下无法完成的事情。
  * 套接字激活:你可以将连接的套接字从 systemd 传递到 Podman,并传递到容器进程以便使用它们。这在客户端/服务器模型中是不可能的。
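下面是一个假设的 systemd 单元文件草图,示意 `SD_NOTIFY` 的用法(单元名、镜像和路径均为示例,具体选项请以你所用的 Podman 版本文档为准):

```
# /etc/systemd/system/mycontainer.service(假设的文件名)
[Unit]
Description=Example container managed by Podman

[Service]
# Type=notify:由 Podman 侧向 systemd 发送就绪通知
Type=notify
ExecStart=/usr/bin/podman run --rm --name mycontainer fedora sleep infinity
ExecStop=/usr/bin/podman stop mycontainer

[Install]
WantedBy=multi-user.target
```
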
在我看来,其最好的功能是**可以作为非 root 用户运行 Podman 和容器**。这意味着你永远不用为了启动容器而在宿主机上授予用户 root 权限;而在客户端/服务器模型中(如 Docker 采用的),你必须开放一个以 root 身份运行的特权守护进程的套接字来启动容器。在那里,你的安全将取决于守护进程中实现的安全机制,而不是宿主机操作系统中实现的安全机制,这是一个危险的主张。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/podman-more-secure-way-run-containers

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11268-1.html)
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)

Podman 和用户命名空间:天作之合
======

> 了解如何使用 Podman 在单独的用户命名空间中运行容器。

![](https://img.linux.net.cn/data/attachment/album/201908/25/220204khh9psjo1phllkok.jpg)

[Podman][1] 是 [libpod][2] 库的一部分,使用户能够管理 pod、容器和容器镜像。在我的上一篇文章中,我写过将 [Podman 作为一种更安全的运行容器的方式][3]。在这里,我将解释如何使用 Podman 在单独的用户命名空间中运行容器。

我一直认为<ruby>[用户命名空间][4]<rt>user namespace</rt></ruby>是隔离容器的一个很棒的功能,它主要是由 Red Hat 的 Eric Biederman 开发的。用户命名空间允许你指定用于运行容器的用户标识符(UID)和组标识符(GID)映射。这意味着你可以在容器内以 UID 0 运行,而在容器外以 UID 100000 运行。如果容器进程逃逸出了容器,内核会将它们视为以 UID 100000 运行。不仅如此,任何属主 UID 未映射到该用户命名空间的文件对象,都将被视为 `nobody`(UID 为 `65534`,由 `kernel.overflowuid` 指定)所拥有,并且不允许容器进程访问,除非该对象可由“其他人”访问(即全局可读/可写)。

如果有一个属主为“真实” `root`、权限为 [660][5] 的文件,当用户命名空间中的容器进程尝试读取它时,会被阻止访问,并且该文件会被视为 `nobody` 所拥有。

### 示例

下面演示它是如何工作的。首先,我在系统中创建一个属主为 `root` 的文件。

```
$ sudo bash -c "echo Test > /tmp/test"
$ sudo chmod 600 /tmp/test
$ sudo ls -l /tmp/test
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
```

接下来,我将该文件卷挂载到一个使用用户命名空间映射 `0:100000:5000` 运行的容器中。

```
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
# id
uid=0(root) gid=0(root) groups=0(root)
# ls -l /tmp/test
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
# cat /tmp/test
cat: /tmp/test: Permission denied
```

上面的 `--uidmap` 设置告诉 Podman 在容器内映射一系列的 5000 个 UID:从容器外的 UID 100000 开始的范围(100000-104999)映射到容器内从 UID 0 开始的范围(0-4999)。在容器内部,如果我的进程以 UID 1 运行,则它在宿主机上是 100001。
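可以在容器内读取 `/proc/self/uid_map` 来验证这一映射。下面是一个示意(三列分别为:容器内起始 UID、宿主机起始 UID、映射长度;输出的列宽因系统而略有差异):

```
$ sudo podman run --uidmap 0:100000:5000 fedora cat /proc/self/uid_map
         0     100000       5000
```
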
由于实际的 `UID=0` 并未映射到容器中,因此 `root` 拥有的任何文件都将被视为 `nobody` 所拥有。即使容器内的进程具有 `CAP_DAC_OVERRIDE` 能力,也无法越过这种保护。`DAC_OVERRIDE` 能力使得 root 的进程能够读写系统上的任何文件,即使文件不属于该进程的属主,也不是全局可读或可写的。

用户命名空间中的能力(capability)与宿主机上的能力并不相同,它们是命名空间化的能力。这意味着我的容器的 root 只在容器内部,实际上只在映射到该用户命名空间的 UID 范围内,才具有这些能力。如果容器进程逃逸出了容器,它对未映射到该用户命名空间的 UID(这包括 `UID=0`)将不具备任何能力。即使进程能以某种方式进入另一个容器,只要那个容器使用的是不同范围的 UID,它也同样不具备这些能力。

请注意,SELinux 和其他技术还限制了容器进程破开容器时会发生的情况。

### 使用 podman top 来显示用户命名空间

我们在 `podman top` 中添加了一些功能,允许你检查容器内运行的进程的用户名,并标识它们在宿主机上的真实 UID。

让我们首先使用我们的 UID 映射运行一个 `sleep` 容器。

```
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
```

现在运行 `podman top`:

```
$ sudo podman top --latest user huser
USER   HUSER
root   100000

$ ps -ef | grep sleep
100000   21821 21809  0 08:04 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

注意,`podman top` 报告用户进程在容器内以 `root` 身份运行,但在宿主机(`HUSER`)上以 UID 100000 运行。此外,`ps` 命令也确认 `sleep` 进程以 UID 100000 运行。

现在让我们运行第二个容器,但这次我们选择一个单独的 UID 映射,从 200000 开始。

```
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
$ sudo podman top --latest user huser
USER   HUSER
root   200000

$ ps -ef | grep sleep
100000   21821 21809  0 08:04 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
200000   23644 23632  1 08:08 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

请注意,`podman top` 报告第二个容器在容器内以 `root` 身份运行,但在宿主机上是 UID=200000。

另请参阅 `ps` 命令,它显示两个 `sleep` 进程都在运行:一个为 UID 100000,另一个为 UID 200000。

这意味着在单独的用户命名空间内运行容器,可以在进程之间实现传统的 UID 分离,而这从一开始就是 Linux/Unix 的标准安全工具。

### 用户命名空间的问题

几年来,我一直主张用户命名空间应该作为每个人都能用上的安全工具,但几乎没有人使用过。原因是缺少文件系统层面的支持,没有一个能够平移 UID 的<ruby>偏移文件系统<rt>shifting file system</rt></ruby>。

在容器中,你希望在许多容器之间共享**基本**镜像。上面的每个示例中都使用了 Fedora 基本镜像,而 Fedora 镜像中的大多数文件都由真实的 `UID=0` 拥有。如果我在此镜像上使用用户命名空间 `0:100000:5000` 运行容器,默认情况下它会将所有这些文件视为 `nobody` 所拥有,因此我们需要平移所有这些 UID 以匹配用户命名空间。多年来,我一直想要一个挂载选项来告诉内核重新映射这些文件的 UID 以匹配用户命名空间。上游内核存储开发人员还在继续研究,在此功能上已经取得一些进展,但这是一个难题。

由于 Nalin Dahyabhai 领导的团队开发的自动 [chown][6] 功能内置于[容器/存储][7]中,Podman 可以在同一镜像上使用不同的用户命名空间。当 Podman 使用容器/存储,并且在新的用户命名空间中首次使用一个容器镜像时,容器/存储会 “chown”(即更改所有权)镜像中的所有文件到该用户命名空间中映射的 UID,并创建一个新镜像。可以把它想象成一个 `fedora:0:100000:5000` 镜像。

当 Podman 在具有相同 UID 映射的镜像上运行另一个容器时,它会使用这个“预先 chown”过的镜像。当我在 `0:200000:5000` 映射上运行第二个容器时,容器/存储会创建第二个镜像,我们称之为 `fedora:0:200000:5000`。

请注意,如果你正在执行 `podman build` 或 `podman commit` 并将新创建的镜像推送到容器注册库,Podman 会使用容器/存储来反转这种平移,推送一个所有文件属主都变回真实 UID=0 的镜像。

这可能会导致在新的 UID 映射中创建容器时出现明显的减速,因为 `chown` 可能会很慢,具体取决于镜像中的文件数。此外,在普通的 [OverlayFS][8] 上,镜像中的每个文件都会被复制。普通的 Fedora 镜像最多可能需要 30 秒才能完成 `chown` 并启动容器。

幸运的是,Red Hat 内核存储团队(主要是 Vivek Goyal 和 Miklos Szeredi)在内核 4.19 中为 OverlayFS 添加了一项新功能,称为“仅复制元数据”。如果使用 `metacopy=on` 选项来挂载层叠文件系统,则在更改文件属性时,它不会复制较低层的内容;内核会创建新的 inode,其中包含这些属性,并引用指向较低层的数据。如果内容发生变化,它仍会复制内容。如果你想试用它,可以在 Red Hat Enterprise Linux 8 Beta 中使用此功能。
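下面是一个启用 `metacopy` 的 OverlayFS 挂载草图(需要 4.19 及以上的内核,各目录路径均为假设):

```
# lowerdir 为只读的镜像层,upperdir/workdir 为可写层
# metacopy=on:chown/chmod 等元数据修改只复制 inode 属性,不复制文件内容
mount -t overlay overlay \
  -o lowerdir=/var/lib/img/lower,upperdir=/var/lib/img/upper,workdir=/var/lib/img/work,metacopy=on \
  /var/lib/img/merged
```
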
这意味着容器的 `chown` 可能在两秒钟内就能完成,并且你不会让每个容器的存储空间翻倍。

这使得像 Podman 这样的工具在不同的用户命名空间中运行容器变得可行,大大提高了系统的安全性。

### 前瞻

我想向 Podman 添加一个新选项,比如 `--userns=auto`,它会为你运行的每个容器自动选择一个唯一的用户命名空间。这类似于 SELinux 使用单独的多类别安全(MCS)标签的方式。如果设置环境变量 `PODMAN_USERNS=auto`,则甚至不需要设置该选项。
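按照这一设想,用法大致如下(写作本文时这还只是一个提议,选项名和行为以最终实现为准):

```
# 为每个容器自动挑选一个唯一的用户命名空间(设想中的用法)
$ sudo podman run --userns=auto fedora cat /proc/self/uid_map

# 或者通过环境变量默认启用
$ export PODMAN_USERNS=auto
$ sudo podman run fedora cat /proc/self/uid_map
```
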
Podman 最终让用户可以在不同的用户命名空间中运行容器。像 [Buildah][9] 和 [CRI-O][10] 这样的工具也可以利用用户命名空间。但是,对于 CRI-O 来说,Kubernetes 需要了解哪个用户命名空间将运行容器引擎,这一功能正在上游开发中。

在我的下一篇文章中,我将解释如何在用户命名空间中将 Podman 作为非 root 用户运行。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/podman-and-user-namespaces

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io/
[2]: https://github.com/containers/libpod
[3]: https://linux.cn/article-11261-1.html
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
[5]: https://chmodcommand.com/chmod-660/
[6]: https://en.wikipedia.org/wiki/Chown
[7]: https://github.com/containers/storage
[8]: https://en.wikipedia.org/wiki/OverlayFS
[9]: https://buildah.io/
[10]: http://cri-o.io/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11270-1.html)
[#]: subject: (Find The Linux Distribution Name, Version And Kernel Details)
[#]: via: (https://www.ostechnix.com/find-out-the-linux-distribution-name-version-and-kernel-details/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

查找 Linux 发行版名称、版本和内核详细信息
======

![Find The Linux Distribution Name, Version And Kernel Details][1]

本指南介绍了如何查找 Linux 发行版名称、版本和内核详细信息。如果你的 Linux 系统有 GUI 界面,那么你可以从系统设置中轻松找到这些信息。但在命令行模式下,初学者很难找到这些详情。没有问题!我这里给出了一些命令行方法来查找 Linux 系统信息。方法可能有很多,但以下这些适用于大多数 Linux 发行版。

### 1、查找 Linux 发行版名称、版本

有很多方法可以找出 VPS 中运行的操作系统。

#### 方法 1:

打开终端并运行以下命令:

```
$ cat /etc/*-release
```

CentOS 7 上的示例输出:

```
CentOS Linux release 7.0.1406 (Core)
CentOS Linux release 7.0.1406 (Core)
```

Ubuntu 18.04 上的示例输出:

```
DISTRIB_ID=Ubuntu
UBUNTU_CODENAME=bionic
```

#### 方法 2:

以下命令也能获取你发行版的详细信息。

```
$ cat /etc/issue
```

Ubuntu 18.04 上的示例输出:

```
Ubuntu 18.04.2 LTS \n \l
```

#### 方法 3:

以下命令能在 Debian 及其衍生版(如 Ubuntu、Linux Mint 等)上获取发行版详细信息。

```
$ lsb_release -a
```

示例输出:

```
No LSB modules are available.
Codename: bionic
```

### 2、查找 Linux 内核详细信息

#### 方法 1:

要查找 Linux 内核详细信息,请在终端运行以下命令。

```
$ uname -a
```

CentOS 7 上的示例输出:

```
Linux server.ostechnix.lan 3.10.0-123.9.3.el7.x86_64 #1 SMP Thu Nov 6 15:06:03 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
```

Ubuntu 18.04 上的示例输出:

```
Linux ostechnix 4.18.0-25-generic #26~18.04.1-Ubuntu SMP Thu Jun 27 07:28:31 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

或者,

```
$ uname -mrs
```

示例输出:

```
Linux 4.18.0-25-generic x86_64
```

这里,

  * `Linux` – 内核名
  * `4.18.0-25-generic` – 内核版本
  * `x86_64` – 系统硬件架构(即 64 位系统)

有关 `uname` 命令的更多详细信息,请参考手册页。

```
$ man uname
```

#### 方法 2:

在终端中,运行以下命令:

```
$ cat /proc/version
```

CentOS 7 上的示例输出:

```
Linux version 3.10.0-123.9.3.el7.x86_64 ([email protected]) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #1 SMP Thu Nov 6 15:06:03 UTC 2014
```

Ubuntu 18.04 上的示例输出:

```
Linux version 4.18.0-25-generic ([email protected]) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #26~18.04.1-Ubuntu SMP Thu Jun 27 07:28:31 UTC 2019
```

这些是查找 Linux 发行版的名称、版本和内核详细信息的几种方法。希望你觉得它有用。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/find-out-the-linux-distribution-name-version-and-kernel-details/

作者:[sk][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2015/08/Linux-Distribution-Name-Version-Kernel-720x340.png
[2]: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/
[3]: https://www.ostechnix.com/neofetch-display-linux-systems-information/
[4]: https://www.ostechnix.com/getting-hardwaresoftware-specifications-in-linux-mint-ubuntu/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11262-1.html)
[#]: subject: (How to Reinstall Ubuntu in Dual Boot or Single Boot Mode)
[#]: via: (https://itsfoss.com/reinstall-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

据我所知,Ubuntu 中没有像 Windows 那样的系统恢复分区。那么,问题出现了:如何重新安装 Ubuntu?让我告诉你如何重新安装 Ubuntu。

**警告!**

> **磁盘分区始终是一项危险的任务。我强烈建议你在外部磁盘上备份数据。**

### 如何重新安装 Ubuntu Linux

首先,在网站上下载 Ubuntu。你可以下载[任何需要的 Ubuntu 版本][2]。

- [下载 Ubuntu][3]

获得 ISO 镜像后,就可以创建 live USB 了。如果 Ubuntu 系统仍然可以使用,那么可以使用 Ubuntu 提供的启动盘创建工具创建它。

如果无法使用你的 Ubuntu,那么你可以使用其他系统。你可以参考这篇文章来学习[如何在 Windows 中创建 Ubuntu 的 live USB][4]。

#### 步骤 2:重新安装 Ubuntu

有了 Ubuntu 的 live USB 之后将其插入 USB 端口。重新启动系统。在启动时,按下 `F2`/`F10`/`F12` 之类的键进入 BIOS 设置,并确保已设置 “Boot from Removable Devices/USB”。保存并退出 BIOS。这将启动进入 live USB。

进入 live USB 后,选择安装 Ubuntu。你将看到选择语言和键盘布局这些常用选项。你还可以选择下载更新等。

![Go ahead with regular installation option][5]

现在是重要的步骤。你应该看到一个“<ruby>安装类型<rt>Installation Type</rt></ruby>”页面。你在屏幕上看到的内容在很大程度上取决于 Ubuntu 如何处理系统上的磁盘分区和安装的操作系统。

在此步骤中仔细阅读选项及它的细节。注意每个选项的说明。屏幕上的选项可能在不同的系统中看上去不同。

第一个选项是擦除 Ubuntu 18.04.2 并重新安装它。它告诉我它将删除我的个人数据,但它没有说删除所有操作系统(即 Windows)。

如果你非常幸运或处于单一启动模式,你可能会看到一个“<ruby>重新安装 Ubuntu<rt>Reinstall Ubuntu</rt></ruby>”的选项。此选项将保留现有数据,甚至尝试保留已安装的软件。如果你看到这个选项,那么就用它吧。

**双启动系统注意**

> **如果你是双启动 Ubuntu 和 Windows,并且在重新安装中,你的 Ubuntu 系统看不到 Windows,你必须选择 “Something else” 选项并从那里安装 Ubuntu。我已经在[在双启动下安装 Linux 的过程][8]这篇文章中说明了。**

对我来说,没有重新安装并保留数据的选项,因此我选择了“<ruby>擦除 Ubuntu 并重新安装<rt>Erase Ubuntu and reinstall</rt></ruby>”。该选项即使在 Windows 的双启动模式下,也将重新安装 Ubuntu。

我建议为 `/` 和 `/home` 使用单独分区就是为了重新安装。这样,即使重新安装 Linux,也可以保证 `/home` 分区中的数据安全。我已在此视频中演示过:

选择重新安装 Ubuntu 后,剩下就是单击下一步。选择你的位置、创建用户账户。

--------------------------------------------------------------------------------

via: https://itsfoss.com/reinstall-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (scvoet)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11266-1.html)
[#]: subject: (LiVES Video Editor 3.0 is Here With Significant Improvements)
[#]: via: (https://itsfoss.com/lives-video-editor/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

LiVES 视频编辑器 3.0 有了显著的改善
======

我们最近列出了一个[最佳开源视频编辑器][1]的清单。LiVES 是这些开源视频编辑器之一,可以免费使用。

即使许多用户还在等待 Windows 版本的发行,LiVES 视频编辑器的 Linux 版本刚刚发布了一个重大更新(最新版本 v3.0.1),包括了一些新的功能和改进。

在这篇文章里,我将会列出新版本中的重要改进,并说明在 Linux 上安装它的步骤。

### LiVES 视频编辑器 3.0:新的改进

![Zorin OS 中正在加载的 LiVES 视频编辑器][2]

总的来说,在这次重大更新中,LiVES 视频编辑器旨在提供更加平滑的回放、防止意外崩溃、优化视频录制,以及让在线视频下载器更加实用。

下面列出了变化:

  * 如果需要渲染的话,可以静默渲染直到视频播放完毕。
  * 将回放插件改进为 openGL,提供更加平滑的回放。
  * 重新启用了 openGL 回放插件的高级选项。
  * 在所有帧的 VJ/预解码中允许“Enough”。
  * 重构了回放时的时基计算代码(a/v 同步更好)。
  * 彻底修复了外部音频和音频,提高了准确性并减少了 CPU 占用。
  * 进入多音轨模式时自动切换至内部音频。
  * 重新显示效果映射器窗口时,将会正常展示效果状态(on/off)。
  * 解决了音频和视频线程之间的冲突。
  * 现在可以在在线视频下载器中修改剪辑大小和格式,并添加了更新选项。
  * 对实时效果实例实现了引用计数。
  * 大量重写了主界面,清理代码并改进许多视觉效果。
  * 优化了视频播放器运行时的录制功能。
  * 改进了 projectM 过滤器封装,包括对 SDL2 的支持。
  * 添加了一个选项来逆转多轨合成器中的 Z-order(后层现在可以覆盖上层了)。
  * 增加了对 musl libc 的支持。
  * 更新了乌克兰语的翻译。

如果你不是一位高级视频编辑师,也许会对上面列出的重要更新提不起太大的兴趣。但正是因为这些更新,才使得 “LiVES 视频编辑器”成为了最好的开源视频编辑软件之一。

### 在 Linux 上安装 LiVES 视频编辑器

LiVES 几乎可以在所有主要的 Linux 发行版中使用。但是,你可能并不能在软件中心找到它的最新版本。所以,如果你想通过这种方式安装,那你就不得不耐心等待了。

如果你想要手动安装,可以从它的下载页面获取 Fedora/openSUSE 的 RPM 安装包。它也适用于其他 Linux 发行版。

- [下载 LiVES 视频编辑器][4]

如果你使用的是 Ubuntu(或其他基于 Ubuntu 的发行版),可以安装由 [Ubuntuhandbook][6] 维护的[非官方 PPA][5]。

下面由我来告诉你,你该做些什么:

1、启动终端后输入以下命令:

```
sudo add-apt-repository ppa:ubuntuhandbook1/lives
```

系统将提示你输入密码用于确认添加 PPA。

2、完成后,你现在可以轻松地更新软件包列表并安装 LiVES 视频编辑器。以下是需要你输入的命令:

```
sudo apt update
sudo apt install lives lives-plugins
```

3、现在,它开始下载并安装这个视频编辑器,等待大约一分钟即可完成。

### 总结

Linux 上有许多[视频编辑器][7]。但它们通常被认为不能用于专业编辑。而我并不是一名专业人士,所以像 LiVES 这样免费的视频编辑器就足以进行简单的编辑了。

你认为怎么样呢?你在 Linux 上使用 LiVES 或其他视频编辑器的体验还好吗?在下面的评论中告诉我们你的感觉吧。

--------------------------------------------------------------------------------

via: https://itsfoss.com/lives-video-editor/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Scvoet](https://github.com/scvoet)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-video-editors/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/lives-video-editor-loading.jpg?ssl=1
[3]: https://itsfoss.com/vidcutter-video-editor-linux/
[4]: http://lives-video.com/index.php?do=downloads#binaries
[5]: https://itsfoss.com/ppa-guide/
[6]: http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/
[7]: https://itsfoss.com/best-video-editing-software-linux/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11271-1.html)
[#]: subject: (Happy birthday to the Linux kernel: What's your favorite release?)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-favorite-release)
[#]: author: (Lauren Pritchett https://opensource.com/users/lauren-pritchett)

Linux 内核生日快乐 —— 那么你喜欢哪个版本?
======

> 自从第一个 Linux 内核发布已经过去 28 年了。自 1991 年以来发布了几十个 Linux 内核版本,你喜欢的是哪个?投个票吧!

![][1]

让我们回到 1991 年 8 月,那个创造历史的时间。科技界经历过许多关键时刻,这些时刻仍在影响着我们。在那个 8 月,Tim Berners-Lee 宣布了一个名为<ruby>万维网<rt>World Wide Web</rt></ruby>的有趣项目,并推出了第一个网站;超级任天堂在美国发布,为所有年龄段的孩子们开启了新的游戏篇章;在赫尔辛基大学,一位名叫 Linus Torvalds 的学生(于 1991 年 8 月 25 日)向同好们征求对他作为[业余爱好][3]开发的新的免费操作系统的反馈。那时 Linux 内核诞生了。

如今,我们可以浏览超过 15 亿个网站,在我们的电视机上玩另外五种任天堂游戏机,并维护着六个长期支持的 Linux 内核。以下是我们的一些作者对他们最喜欢的 Linux 内核版本所说的话。

“引入模块的那个版本(1.2 吧?)。这是 Linux 迈向成功的未来的重要一步。” - Milan Zamazal

“2.6.9,因为它是我 2006 年加入 Red Hat 时的版本(在 RHEL4 中)。但我也更钟爱 2.6.18(RHEL5)一点,因为它在大规模部署的、我们最大客户(Telco、FSI)的关键任务工作负载中使用。它还带来了我们最大的技术变革之一:虚拟化(Xen 然后是 KVM)。” - Herve Lemaitre

“4.10。(虽然我不知道如何衡量这一点)。” - Ivan Bazulic

“Fedora 30 附带的新内核修复了我的 Thinkpad Yoga 笔记本电脑的挂起问题;挂起功能现在可以完美运行。我是一个笨人,只是忍受这个问题而从未试着提交错误报告,所以我特别感谢这项工作,我知道一定会解决这个问题。” - Máirín Duffy

“2.6.16 版本将永远在我的心中占有特殊的位置。这是我负责将其转换为在 Hertz NeverLost GPS 系统上运行的第一个内核。我负责这项为那个设备构建内核和根文件系统的工作,对我来说这真的是一个奇迹时刻。我们在初次发布后多次更新了内核,但我想我必须还是推选那个最初版本,不过,我对于它的热爱没有任何技术原因,这纯属感性选择 =)” - Michael McCune

“我最喜欢的 Linux 内核版本是 2.4.0 系列,它集成了对 USB、LVM 和 ext3 的支持。ext3 是第一个具有日志支持的主流 Linux 文件系统,从 2.4.15 内核开始可用。我使用的第一个内核版本是 2.2.13。” - Sean Nelson

“也许是 2.2.14,因为它是在我安装的第一个 Linux 上运行的版本(Mandrake Linux 7.0,2000 年,如果没记错的话)。它也是我第一个需要重新编译以让我的视频卡或调制解调器(记不清了)工作的版本。” - Germán Pulido

“我认为是最新的一个!但我有段时间使用实时内核扩展来进行音频制作。” - Mario Torre

*在 Linux 内核超过 [52 个的版本][2]当中,你最喜欢哪一个?参加我们的调查并在评论中告诉我们原因。*

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/linux-kernel-favorite-release

作者:[Lauren Pritchett][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/lauren-pritchett
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_anniversary_celebreate_tux.jpg?itok=JOE-yXus
[2]: http://phb-crystal-ball.org/
[3]: http://lkml.iu.edu/hypermail/linux/kernel/1908.3/00457.html
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Semiconductor startup Cerebras Systems launches massive AI chip)
[#]: via: (https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Semiconductor startup Cerebras Systems launches massive AI chip
======

![Cerebras][1]

There are a host of different AI-related solutions for the data center, ranging from add-in cards to dedicated servers, like the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel.

Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46x8.46 inches, making it almost the size of an iPad and more than 50 times larger than a CPU or GPU. A typical CPU or GPU is about the size of a postage stamp.

[Now see how AI can boost data-center availability and efficiency.][2]

Cerebras won’t sell the chips to ODMs due to the challenges of building and cooling such a massive chip. Instead, it will come as part of a complete server to be installed in data centers, which it says will start shipping in October.

The logic behind the design is that AI requires huge amounts of data just to run a test, and current technology, even GPUs, is not fast or powerful enough. So Cerebras supersized the chip.

The numbers are just incredible. The company’s WSE chip has 1.2 trillion transistors, 400,000 computing cores and 18 gigabytes of memory. A typical PC processor has about 2 billion transistors, four to six cores and a few megabytes of cache memory. Even a high-end GPU has 21 billion transistors and a few thousand cores.

The 400,000 cores on the WSE are connected via the Swarm communication fabric in a 2D mesh with 100 Pb/s of bandwidth. The WSE has 18 GB of on-chip memory, all accessible within a single clock cycle, and provides 9 PB/s memory bandwidth. This is 3,000x more capacity and 10,000x greater bandwidth than the best Nvidia has to offer. More to the point, it eliminates the need to move data in and out of memory to and from the CPU.

“A vast array of programmable cores provides cluster-scale compute on a single chip. High-speed memory close to each core ensures that cores are always occupied doing calculations. And by connecting everything on-die, communication is many thousands of times faster than what is possible with off-chip technologies like InfiniBand,” the company said in a [blog post][3] announcing the processor.

The cores are called Sparse Linear Algebra Cores, or SLA. They are optimized for the sparse linear algebra that is fundamental to neural network calculation. These cores are designed specifically for AI work. They are small and fast, contain no caches, and have eliminated other features and overheads that are needed in general-purpose cores but play no useful role in a deep learning processor.

The chip is the brainchild of Andrew Feldman, who created the SeaMicro high-density Atom-based server a decade ago as an alternative to overpowered Xeons for doing simple tasks like file and print or serving LAMP stacks. Feldman is a character, one of the more interesting people [I’ve interviewed][4]. He definitely thinks outside the box.

Feldman sold SeaMicro to AMD for $334 million in 2012, which turned out to be a colossal waste of money on AMD’s part, as the product shortly disappeared from the market. Since then he’s raised $100 million in VC money.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433617/semiconductor-startup-cerebras-systems-launches-massive-ai-chip.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/cerebras-wafer-scale-engine-100809084-large.jpg
[2]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[3]: https://www.cerebras.net/hello-world/
[4]: https://www.serverwatch.com/news/article.php/3887471/SeaMicro-Launches-an-AtomPowered-Cloud-Computing-Server.htm
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world)
|
||||
[#]: via: (https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
VMware spends $4.8B to grab Pivotal, Carbon Black to secure, develop integrated cloud world
|
||||
======
|
||||
VMware will spend $2.7 billion on cloud-application developer Pivotal and $2.1 billion for security vendor Carbon Black - details at next week's VMworld user conference
|
||||
![Bigstock][1]
|
||||
|
||||
All things cloud are major topics of conversation at the VMworld user conference next week, ratcheted up a notch by VMware's $4.8 billion plans to acquire cloud development firm Pivotal and security provider Carbon Black.

VMware said during its quarterly financial call this week it would spend about $2.7 billion on Pivotal and its Cloud Foundry hybrid cloud development technology, and about $2.1 billion for the security technology of Carbon Black, which includes its Predictive Security Cloud and other endpoint-security software. Both amounts represent the [enterprise value][2] of the deals; the actual purchase prices will vary, experts said.

**[ Check out [What is hybrid cloud computing][3] and learn [what you need to know about multi-cloud][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**

VMware has deep relationships with both companies. Carbon Black technology is part of [VMware’s AppDefense][6] endpoint security. Pivotal has a deeper relationship in that VMware and Dell, VMware’s parent company, [spun out Pivotal][7] in 2013.

“These acquisitions address two critical technology priorities of all businesses today – building modern, enterprise-grade applications and protecting enterprise workloads and clients. With these actions we meaningfully accelerate our subscription and SaaS offerings and expand our ability to enable our customers’ digital transformation,” said VMware CEO Pat Gelsinger, on the call.

With regard to the Pivotal acquisition, Gelsinger said the time was right to own the whole compute stack. “We will now be uniquely positioned to help customers build, run and manage their cloud environment, and customers can go one place to get all of this technology,” Gelsinger said. “We embed the technology in our core VMware platform, and we will explain more about that at VMworld next week.”

On the Carbon Black buy, Gelsinger said he expects the technology to be integrated across VMware’s product families such as NSX networking software and vSphere, VMware's flagship virtualization platform.

“Security is broken and fundamentally customers want a different answer in the security space. We think this move will be an opportunity for major disruption.”

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**

Patrick Morley, president and CEO of Carbon Black, [wrote of the deal][9]: “VMware has a vision to create a modern security platform for any app, running on any cloud, delivered to any device – essentially, to build security into the fabric of the compute stack. Carbon Black’s cloud-native platform, our ability to see and stop attackers by leveraging the power of our rich data and behavioral analytics, and our deep cybersecurity expertise are all truly differentiating.”

Both transactions are expected to close in the second half of VMware’s fiscal year, which ends Jan. 31.

VMware has been on a massive buying spree this year that has included:

  * Avi Networks for multi-cloud application delivery services.
  * Bitfusion for hardware virtualization.
  * Uhana, a company that is employing deep learning and real-time AI in carrier networks and applications, to automate network operations and optimize application experience.
  * Veriflow, for network verification, assurance, and troubleshooting.
  * Heptio for its Kubernetes technology.

Kubernetes integration will be a big topic at VMworld, Gelsinger hinted. “You will hear very specific announcements about how Heptio will be used. [And] we will be announcing major expansions of our Kubernetes and modern apps portfolio and help Pivotal complete that strategy. Together with Heptio and Pivotal, VMware will offer a comprehensive Kubernetes-based portfolio to build, run and manage modern applications on any cloud,” Gelsinger said.

“VMware has increased its Kubernetes-related investments over the past year with the acquisition of Heptio to become a top-three contributor to Kubernetes, and at VMworld we will describe a major R&D effort to evolve VMware vSphere into a native Kubernetes platform for VMs and containers.”

Other updates about where VMware vSphere and NSX-T are headed will also be hot topics.

Introduced in 2017, NSX-T Data Center software is targeted at organizations looking to support multivendor cloud-native applications, [bare-metal][10] workloads, [hypervisor][11] environments and the growing hybrid and multi-cloud worlds. In February the [company anointed NSX-T][12] as its go-to platform for future software-defined cloud developments.

VMware is battling Cisco's Application Centric Infrastructure, Juniper's Contrail system and other platforms from vendors including Pluribus, Arista and Big Switch. How NSX-T evolves will be key to how well VMware competes.

The most recent news around vSphere was that new features of its Hybrid Cloud Extension application-mobility software enable non-vSphere workloads, as well as more on-premises application workloads, to migrate to a variety of specific cloud services. Introduced in 2017, [VMware HCX][13] lets vSphere customers tie on-premises systems and applications to cloud services.

The HCX announcement was part of VMware’s continued evolution into cloud technologies. In July the company teamed with [Google][14] to natively support VMware workloads in its Google Cloud service, giving customers more options for deploying enterprise applications.

Further news about that relationship is likely at VMworld as well.

VMware also has a hybrid cloud partnership with [Microsoft’s Azure cloud service][15]. That package, called Azure VMware Solutions, is built on VMware Cloud Foundation, which is a package of vSphere with NSX network-virtualization and vSAN software-defined storage-area-network platform. The company is expected to update developments with that platform as well.

Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433916/vmware-spends-48b-to-grab-pivotal-carbon-black-to-secure-develop-integrated-cloud-world.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/hybridcloud-100808516-large.jpg
[2]: http://valuationacademy.com/what-is-the-enterprise-value-ev/
[3]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[4]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3359242/vmware-firewall-takes-aim-at-defending-apps-in-data-center-cloud.html
[7]: https://www.networkworld.com/article/2225739/what-is-pivotal--emc-and-vmware-want-it-to-be-your-platform-for-building-big-data-apps.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://www.carbonblack.com/2019/08/22/the-next-chapter-in-our-story-vmware-carbon-black/
[10]: https://www.networkworld.com/article/3261113/why-a-bare-metal-cloud-provider-might-be-just-what-you-need.html?nsdr=true
[11]: https://www.networkworld.com/article/3243262/what-is-a-hypervisor.html?nsdr=true
[12]: https://www.networkworld.com/article/3346017/vmware-preps-milestone-nsx-release-for-enterprise-cloud-push.html
[13]: https://docs.vmware.com/en/VMware-HCX/services/rn/VMware-HCX-Release-Notes.html
[14]: https://www.networkworld.com/article/3428497/google-cloud-to-offer-vmware-data-center-tools-natively.html
[15]: https://www.networkworld.com/article/3113394/vmware-cloud-foundation-integrates-virtual-compute-network-and-storage-systems.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world
@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Implementing edge computing, DevOps like car racing, and more industry trends)
[#]: via: (https://opensource.com/article/19/8/implementing-edge-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Implementing edge computing, DevOps like car racing, and more industry trends
======

A weekly look at open source community and industry trends.

![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [How to implement edge computing][2]

> "When you have hundreds or thousands of locations, it's a challenge to manage all of that compute as you continue to scale it out at the edge," said Coufal. "For organizations heavily involved with IoT, there are cases where these enterprises can find themselves with millions of different endpoints to manage. This is where you need to automate as much as you can operationally so there is less need for humans to manage the day-to-day activities."

**The impact:** We may think that there is a lot of stuff hooked up to the internet already, but edge-connected Internet of Things (IoT) devices are already proving we ain't seen nothing yet. A heuristic that breaks the potential billions of endpoints into three categories (at least in a business context) helps us think about what this IoT might actually do for us, and who should be responsible for what.

## [Can a composable hypervisor re-imagine virtualization?][3]

> Van de Ven explained that in talking with customers he has seen five areas emerge as needing re-imagining in order to support evolving virtualization plans. These include a platform that is lightweight; one that is fast; something that can support high density workloads; that has quick start up; and one that is secure. However, the degrees of those needs remains in flux.
>
> Van de Ven explained that a [composable][4] hypervisor was one way to deal with these varying needs, pointing to Intel’s work with the [recently launched][5] rust-vmm hypervisor.
>
> That [open source project][6] provides a set of common hypervisor components developed by contributing vendors that can provide a more secure, higher performance container technology designed for [cloud native][7] environments.

**The impact**: The container boom has been perhaps unprecedented in both the rapidness of its onset and the breadth of its impact. You'd be forgiven for thinking that all the innovation has moved on from virtualization; not so! For one thing, most of those containers are running in virtual machines, and there are still places where virtual machines outshine containers (particularly where security is concerned). Thankfully there are projects pushing the state of hypervisors and virtualization forward.

## [How DevOps is like auto racing][8]

> To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome.

**The impact**: Sometimes the best way to understand the impact of an idea is to re-imagine the stakes. Here we recontextualize the moving and configuration of bits as the direction of explosive power and get a better understanding of why process, roles, and responsibilities are important contributors to success.

## [CNCF archives the rkt project][9]

> All open source projects are subject to a lifecycle and can become less active for a number of reasons. In rkt’s case, despite its initial popularity following its creation in December 2014, and contribution to CNCF in March 2017, end user adoption has severely declined. The CNCF is also [home][10] to other container runtime projects: [containerd][11] and [CRI-O][12], and while the rkt project played an important part in the early days of cloud native adoption, in recent times user adoption has trended away from rkt towards these other projects. Furthermore, [project activity][13] and the number of contributors has also steadily declined over time, along with unpatched CVEs.

**The impact**: Betamax and laser discs pushed cassettes and DVDs to be better, and so it is with rkt. The project showed there is more than one way to run containers at a time when it looked like there was only one way to run containers. rkt galvanized a push towards standard interfaces in the container space, and for that, we are eternally grateful.

_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/implementing-edge-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.techrepublic.com/article/how-to-implement-edge-computing/
[3]: https://www.sdxcentral.com/articles/news/can-a-composable-hypervisor-re-imagine-virtualization/2019/08/
[4]: https://www.sdxcentral.com/data-center/composable/definitions/what-is-composable-infrastructure-definition/ (What is Composable Infrastructure? Definition)
[5]: https://www.sdxcentral.com/articles/news/intel-pushes-open-source-hypervisor-with-cloud-giants/2019/05/
[6]: https://github.com/rust-vmm
[7]: https://www.sdxcentral.com/cloud-native/ (Cloud Native)
[8]: https://developers.redhat.com/blog/2019/08/22/how-devops-is-like-auto-racing/
[9]: https://www.cncf.io/blog/2019/08/16/cncf-archives-the-rkt-project/
[10]: https://landscape.cncf.io/category=container-runtime&format=card-mode
[11]: https://containerd.io/
[12]: https://cri-o.io/
[13]: https://rkt.devstats.cncf.io
@ -1,66 +0,0 @@
translating by valoniakim

How allowing myself to be vulnerable made me a better leader
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)

Conventional wisdom suggests that leadership is strong, bold, decisive. In my experience, leadership does feel like that some days.

Some days leadership feels more vulnerable. Doubts creep in: Am I making good decisions? Am I the right person for this job? Am I focusing on the most important things?

The trick with these moments is to talk about them. When we keep them secret, our insecurity only grows. Being an open leader means pushing our vulnerability into the spotlight. Only then can we seek comfort from others who have experienced similar moments.

To demonstrate how this works, I'll share a story.

### A nagging question

If you work in the tech industry, you'll note an obvious focus on creating [an organization that's inclusive][1]--a place for diversity to flourish. Long story short: I thought I was a "diversity hire," someone hired because of my gender, not my ability. Even after more than 15 years in the industry, with all of the focus on diversity in hiring, that possibility got under my skin. Along came the doubts: Was I hired because I was the best person for the job--or because I was a woman? After years of knowing I was hired because I was the best person, the fact that I was female suddenly seemed like it was more interesting to potential employers.

I rationalized that it didn't matter why I was hired; I knew I was the best person for the job and would prove it. I worked hard, delivered results, made mistakes, learned, and did everything an employer would want from an employee.

And yet the "diversity hire" question nagged. I couldn't shake it. I avoided the subject like the plague and realized that not talking about it was a signal that I had no choice but to deal with it. If I continued to avoid the subject, it was going to affect my work. And that's the last thing I wanted.

### Speaking up

Talking about diversity and inclusion can be awkward. So many factors enter into the decision to open up:

  * Can we trust our co-workers with a vulnerable moment?
  * Can a leader of a team be too vulnerable?
  * What if I overstep? Do I damage my career?

In my case, I ended up at a lunch Q&A session with an executive who's a leader in many areas of the organization--especially candid conversations. A coworker asked the "Was I a diversity hire?" question. He stopped and spent a significant amount of time talking about this question to a room full of women. I'm not going to recount the entire discussion here; I will share the most salient point: If you know you're qualified for the job and you know the interview went well, don't doubt the outcome. Anyone who questions whether you're a diversity hire has their own questions to answer. You don't have to go on their journey.

Mic drop.

I wish I could say that I stopped thinking about this topic. I didn't. The question lingered: What if I am the exception to the rule? What if I was the one diversity hire? I realized that I couldn't avoid the nagging question.

A few weeks later I had a one-on-one with the executive. At the end of the conversation, I mentioned that, as a woman, I appreciate his candid conversations about diversity and inclusion. It's easier to talk about these topics when a recognized leader is willing to have the conversation. I also returned to the "Was I a diversity hire?" question. He didn't hesitate: We talked. At the end of the conversation, I realized that I was hungry to talk about these things that require bravery; I only needed a nudge and someone who cared enough to talk and listen.


Because I had the courage to be vulnerable--to go there with my question--I had the burden of my secret question lifted. Feeling physically lighter, I started to have constructive conversations around the questions of implicit bias, what we can do to be inclusive, and what diversity looks like. As I've learned, every person has a different answer when I ask the diversity question. I wouldn't have gotten to have all of these amazing conversations if I'd stayed stuck with my secret.

I had the courage to talk, and I hope you will too.

Let's talk about these things that hold us back in terms of our ability to lead so we can be more open leaders in every sense of the phrase. Has allowing yourself to be vulnerable made you a better leader?

### About The Author

Angela Robertson works as a senior manager at Microsoft. She works with an amazing team of people passionate about community contributions and engaged in open organizations. Before joining Microsoft, Angela worked at Red Hat.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader

作者:[Angela Robertson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/arobertson98
[1]:https://opensource.com/open-organization/17/9/building-for-inclusivity
@ -1,162 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
[#]: author: (redhat https://www.redhat.com)

Command Line Heroes: Season 1: OS Wars (Part 2: Rise of Linux)
======

Saron Yitbarek: Is this thing on? Cue the epic Star Wars crawl, and, action.

Voice Actor: [00:00:30] Episode Two: Rise of Linux®. The empire of Microsoft controls 90% of desktop users. Complete standardization of operating systems seems assured. However, the advent of the internet swerves the focus of the war from the desktop toward enterprise, where all businesses scramble to claim a server of their own. Meanwhile, an unlikely hero arises from amongst the band of open source rebels. Linus Torvalds, headstrong, bespectacled, releases his Linux system free of charge. Microsoft reels — and regroups.

Saron Yitbarek: [00:01:00] Oh, the nerd in me just loves that. So, where were we? Last time, Apple and Microsoft were trading blows, trying to dominate in a war over desktop users. By the end of episode one, we saw Microsoft claiming most of the prize. Soon, the entire landscape went through a seismic upheaval. That's all because of the rise of the internet and the army of developers that rose with it. The internet moves the battlefield from PC users in their home offices to giant business clients with hundreds of servers.

[00:01:30] This is a huge resource shift. Not only does every company out there that wants to remain relevant suddenly have to pay for server space and get a website built — they also have to integrate software to track resources, monitor databases, et cetera, et cetera. You're going to need a lot of developers to help you with that. At least, back then you did.

In part two of the OS wars, we'll see how that enormous shift in priorities, and the work of a few open source rebels like Linus Torvalds and Richard Stallman, managed to strike fear in the heart of Microsoft, and an entire software industry.

[00:02:00] I'm Saron Yitbarek and you're listening to Command Line Heroes, an original podcast from Red Hat. In each episode, we're bringing you stories about the people who transform technology from the command line up.

[00:02:30] Okay. Imagine for a second that you're Microsoft in 1991. You're feeling pretty good, right? Pretty confident. Assured global domination feels nice. You've mastered the art of partnering with other businesses, but you're still pretty much cutting out the developers, programmers, and sys admins that are the real foot soldiers out there. There is this Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to put out versions of Linux, this OS kernel that they're duct taping together.

[00:03:00] If you're Microsoft, frankly, you're not too concerned about Linux or even about open source in general, but eventually, the sheer size of Linux gets so big that it becomes impossible for Microsoft not to notice. The first version comes out in 1991 and it's got maybe 10,000 lines of code. A decade later, there will be three million lines of code. In case you're wondering, today it's at 20 million.

[00:03:30] For a moment, let's stay in the early 90s. Linux hasn't yet become the behemoth we know now. It's just this strangely viral OS that's creeping across the planet, and the geeks and hackers of the world are falling in love with it. I was too young in those early days, but I sort of wish I'd been there. At that time, discovering Linux was like gaining access to a secret society. Programmers would share the Linux CD set with friends the same way other people would share mixtapes of underground music.

Developer Tristram Oaten [00:03:40] tells the story of how he first encountered Linux when he was 16 years old.

Tristram Oaten: [00:04:00] We went on a scuba diving holiday, my family and I, to Hurghada, which is on the Red Sea. Beautiful place, highly recommend it. The first day, I drank the tap water. Probably, my mom told me not to. I was really sick the whole week — didn't leave the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, this thing that I'd heard about and was giving it a try. There were no extra apps, just what came on the eight CDs. By necessity, all I had to do this whole week was to get to grips with this alien system. I read man pages, played around with the terminal. I remember not knowing the difference between a single dot, meaning the current directory, and two dots, meaning the parent directory.

[00:04:30] I had no clue. I must have made so many mistakes, but slowly, over the course of this forcible solitude, I broke through this barrier and started to understand and figure out what this command line thing was all about. By the end of the holiday, I hadn't seen the pyramids, the Nile, or any Egyptian sites, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest is history.

Saron Yitbarek: You can hear some variation on that story from a lot of people. Getting access to that Linux command line was a transformative experience.

David Cantrell: This thing gave me the source code. I was like, "That's amazing."

Saron Yitbarek: We're at a 2017 Linux developers conference called Flock to Fedora.

David Cantrell: ... very appealing. I felt like I had more control over the system and it just drew me in more and more. From there, I guess, after my first Linux kernel compile in 1995, I was hooked, so, yeah.

Saron Yitbarek: Developers David Cantrell and Joe Brockmeier.

Joe Brockmeier: I was going through the cheap software and found a four-CD set of Slackware Linux. It sounded really exciting and interesting so I took it home, installed it on a second computer, started playing with it, and really got excited about two things. One was, I was excited not to be running Windows, and I was excited by the open source nature of Linux.

Saron Yitbarek: [00:06:00] That access to the command line was, in some ways, always there. Decades before open source really took off, there was always a desire to have complete control, at least among developers. Go way back to a time before the OS wars, before Apple and Microsoft were fighting over their GUIs. There were command line heroes then, too. Professor Paul Jones is the director of the online library ibiblio.org. He worked as a developer during those early days.

Paul Jones: [00:07:00] The internet, by its nature, at that time, was less client server, totally, and more peer to peer. We're talking about, really, some sort of VAX to VAX, some sort of scientific workstation to scientific workstation. That doesn't mean that client and server relationships and applications weren't there, but it does mean that the original design was to think of how to do peer-to-peer things, the opposite of what IBM had been doing, in which they had dumb terminals that had only enough intelligence to manage the user interface, but not enough intelligence to actually let you do anything in the terminal that would expose anything to it.

Saron Yitbarek: As popular as GUI was becoming among casual users, there was always a pull in the opposite direction for the engineers and developers. Before Linux, in the 1970s and 80s, that resistance was there, with Emacs and GNU. With Stallman's Free Software Foundation, certain folks were always begging for access to the command line, but it was Linux in the 1990s that delivered like no other.

[00:07:30] The early lovers of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.

You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.

Steven Vaughan-Nichols: By 1998, things have changed.

Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing for decades about the business side of technology. He describes how Linux slowly became more and more popular until the number of volunteer contributors was way larger than the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, and maybe that's why Microsoft ignored them at first. Where Linux did shine was in the server room. When businesses went online, each one required a unique programming solution for their needs.

[00:08:30] Windows NT comes out in 1993 and it's competing with other server operating systems, but lots of developers are thinking, "Why am I going to buy an AIX box or a large Windows box when I could set up a cheap Linux-based system with Apache?" Point is, Linux code started seeping into just about everything online.

Steven Vaughan-Nichols: [00:09:00] Microsoft realizes that Linux, quite to their surprise, is actually beginning to get some of the business, not so much on the desktop, but on business servers. As a result of that, they start a campaign, what we like to call FUD — fear, uncertainty and doubt — saying, "Oh this Linux stuff, it's really not that good. It's not very reliable. You can't trust it with anything."

Saron Yitbarek: [00:09:30] That soft propaganda style attack goes on for a while. Microsoft wasn't the only one getting nervous about Linux, either. It was really a whole industry versus that weird new guy. For example, anyone with a stake in UNIX was likely to see Linux as a usurper. Famously, the SCO Group, which had produced a version of UNIX, waged lawsuits for over a decade to try and stop the spread of Linux. SCO ultimately failed and went bankrupt. Meanwhile, Microsoft kept searching for their opening. They were a company that needed to make a move. It just wasn't clear what that move was going to be.

Steven Vaughan-Nichols: [00:10:30] What will make Microsoft really concerned about it is the next year, in 2000, IBM will announce that they will invest a billion dollars in Linux in 2001. Now, IBM is not really in the PC business anymore. They're not out yet, but they're going in that direction, but what they are doing is they see Linux as being the future of servers and mainframe computers, which, spoiler alert, IBM was correct. Linux is going to dominate the server world.

Saron Yitbarek: This was no longer just about a bunch of hackers loving their Jedi-like control of the command line. This was about the money side working in Linux's favor in a major way. John "Mad Dog" Hall, the executive director of Linux International, has a story that explains why that was. We reached him by phone.

John Hall: [00:11:30] A friend of mine named Dirk Holden was a German systems administrator at Deutsche Bank in Germany, and he also worked in the graphics projects for the early days of the X Windows system for PCs. I visited him one day at the bank, and I said, "Dirk, you have 3,000 servers here at the bank and you use Linux. Why don't you use Microsoft NT?" He looked at me and he said, "Yes, I have 3,000 servers, and if I used Microsoft Windows NT, I would need 2,999 systems administrators." He says, "With Linux, I only need four." That was the perfect answer.

Saron Yitbarek: [00:12:00] The thing programmers are getting obsessed with also happens to be deeply attractive to big business. Some businesses were wary. The FUD was having an effect. They heard open source and thought, "Open. That doesn't sound solid. It's going to be chaotic, full of bugs," but as that bank manager pointed out, money has a funny way of convincing people to get over their hangups. Even little businesses, all of which needed websites, were coming on board. The cost of working with a cheap Linux system over some expensive proprietary option, there was really no comparison. If you were a shop hiring a pro to build your website, you wanted them to use Linux.

[00:12:30] Fast forward a few years. Linux runs everybody's website. Linux has conquered the server world, and then, along comes the smartphone. Apple and their iPhones take a sizeable share of the market, of course, and Microsoft hoped to get in on that, except, surprise, Linux was there, too, ready and raring to go.

Author and journalist James Allworth.

James Allworth: [00:13:00] There was certainly room for a second player, and that could well have been Microsoft, but for the fact of Android, which was fundamentally based on Linux. Android, famously acquired by Google and now running a majority of the world's smartphones, was built by Google on top of that. They were able to start with a very sophisticated operating system and a cost basis of zero. They managed to pull it off, and it ended up locking Microsoft out of the next generation of devices, by and large, at least from an operating system perspective.

Saron Yitbarek: [00:13:30] The ground was breaking up, big time, and Microsoft was in danger of falling into the cracks. John Gossman is the chief architect on the Azure team at Microsoft. He remembers the confusion that gripped the company at that time.

John Gossman: [00:14:00] Like a lot of companies, Microsoft was very concerned about IP pollution. They thought that if you let developers use open source they would likely just copy and paste bits of code into some product and then some sort of a viral license might take effect that ... They were also very confused, I think, it was just culturally, a lot of companies, Microsoft included, were confused on the difference between what open source development meant and what the business model was. There was this idea that open source meant that all your software was free and people were never going to pay anything.

Saron Yitbarek: [00:14:30] Anybody invested in the old, proprietary model of software is going to feel threatened by what's happening here. When you threaten an enormous company like Microsoft, yeah, you can bet they're going to react. It makes sense they were pushing all that FUD — fear, uncertainty and doubt. At the time, an "us versus them" attitude was pretty much how business worked. If they'd been any other company, though, they might have kept that old grudge, that old thinking, but then, in 2013, everything changes.

[00:15:00] Microsoft's cloud computing service, Azure, goes online and, shockingly, it offers Linux virtual machines from day one. Steve Ballmer, the CEO who called Linux a cancer, he's out, and a new forward-thinking CEO, Satya Nadella, has been brought in.

John Gossman: Satya has a different attitude. He's another generation. He's a generation younger than Paul and Bill and Steve were, and had a different perspective on open source.

Saron Yitbarek: John Gossman, again, from Microsoft's Azure team.

John Gossman: [00:16:00] We added Linux support into Azure about four years ago, and that was for very pragmatic reasons. If you go to any enterprise customer, you will find that they are not trying to decide whether to use Windows or to use Linux or to use .net or to use Java™. They made all those decisions a long time ago — about 15 years or so ago, there was some of this argument. Now, every company that I have ever seen has a mix of Linux and Java and Windows and .net and SQL Server and Oracle and MySQL — proprietary source code-based products and open source code products.

If you're going to operate a cloud and you're going to allow and enable those companies to run their businesses on the cloud, you simply cannot tell them, "You can use this software but you can't use this software."

Saron Yitbarek: [00:16:30] That's exactly the philosophy that Satya Nadella adopted. In the fall of 2014, he gets up on stage and he wants to get across one big, fat point. Microsoft loves Linux. He goes on to say that 20% of Azure is already Linux and that Microsoft will always have first-class support for Linux distros. There's not even a whiff of that old antagonism toward open source.

To drive the point home, there's literally a giant sign behind him that reads, "Microsoft hearts Linux." Aww. For some of us, that turnaround was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, a tech journalist and author.

Steven Levy: [00:17:30] When you're playing a football game and the turf becomes really slick, maybe you switch to a different kind of footwear in order to play on that turf. That's what they were doing. They can't deny reality and there are smart people there so they had to realize that this is the way the world is and put aside what they said earlier, even though they might be a little embarrassed at their earlier statements, but it would be crazy to let their statements about how horrible open source was earlier, affect their smart decisions now.

Saron Yitbarek: [00:18:00] Microsoft swallowed its pride in a big way. You might remember that Apple, after years of splendid isolation, finally shifted toward a partnership with Microsoft. Now it was Microsoft's turn to do a 180. After years of battling the open source approach, they were reinventing themselves. It was change or perish. Steven Vaughan-Nichols.

Steven Vaughan-Nichols: [00:18:30] Even a company the size of Microsoft simply can't compete with the thousands of open source developers working on all these other major projects, including Linux. They were very loath to do so for a long time. The former Microsoft CEO, Steve Ballmer, hated Linux with a passion. Because of its GPL license, it was a cancer, but once Ballmer was finally shown the door, the new Microsoft leadership said, "This is like trying to order the tide to stop coming in. The tide is going to keep coming in. We should work with Linux, not against it."

Saron Yitbarek: [00:19:00] Really, one of the big wins in the history of online tech is the way Microsoft was able to make this pivot, when they finally decided to. Of course, older, hardcore Linux supporters were pretty skeptical when Microsoft showed up at the open source table. They weren't sure if they could embrace these guys, but, as Vaughan-Nichols points out, today's Microsoft simply is not your mom and dad's Microsoft.

Steven Vaughan-Nichols: [00:19:30] Microsoft 2017 is not Steve Ballmer's Microsoft, nor is it Bill Gates' Microsoft. It's an entirely different company with a very different approach and, again, once you start using open source, it's not like you can really pull back. Open source has devoured the entire technology world. People who have never heard of Linux as such, don't know it, but every time they're on Facebook, they're running Linux. Every time you do a Google search, you're running Linux.

[00:20:00] Every time you do anything with your Android phone, you're running Linux again. It literally is everywhere, and Microsoft can't stop that, and thinking that Microsoft can somehow take it all over, I think is naïve.

Saron Yitbarek: [00:20:30] Open source supporters might have been worrying about Microsoft coming in like a wolf in the flock, but the truth is, the very nature of open source software protects it from total domination. No single company can own Linux and control it in any specific way. Greg Kroah-Hartman is a fellow at the Linux Foundation.

Greg Kroah-Hartman: Every company and every individual contributes to Linux in a selfish manner. They're doing so because they want to solve a problem that they have, be it hardware that isn't working, or they want to add a new feature to do something else, or want to take it in a direction that they'll build that they can use for their product. That's great, because then everybody benefits from that because they're releasing the code back, so that everybody can use it. It's because of that selfishness that all companies and all people have, everybody benefits.

Saron Yitbarek: [00:21:30] Microsoft has realized that in the coming cloud wars, fighting Linux would be like going to war with, well, a cloud. Linux and open source aren't the enemy, they're the atmosphere. Today, Microsoft has joined the Linux Foundation as a platinum member. They became the number one contributor to open source on GitHub. In September 2017, they even joined the Open Source Initiative. These days, Microsoft releases a lot of its code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .net. At first, they didn't really think they'd get much back.

John Gossman: [00:22:00] We didn't count on contributions from the community, and yet, three years in, over 50 percent of the contributions to the .net framework libraries now are coming from outside of Microsoft. This includes big pieces of code. Samsung has contributed ARM support to .net. Intel and ARM and a couple other chip people have contributed code generation specific for their processors to the .net framework, as well as a surprising number of fixes, performance improvements, and stuff — from just individual contributors to the community.

Saron Yitbarek: Up until a few years ago, the Microsoft we have today, this open Microsoft, would have been unthinkable.

[00:23:00] I'm Saron Yitbarek, and this is Command Line Heroes. Okay, we've seen titanic battles for the love of millions of desktop users. We've seen open source software creep up behind the proprietary titans, and nab huge market share. We've seen fleets of command line heroes transform the programming landscape into the one handed down to people like me and you. Today, big business is absorbing open source software, and through it all, everybody is still borrowing from everybody.

[00:23:30] In the tech wild west, it's always been that way. Apple gets inspired by Xerox, Microsoft gets inspired by Apple, Linux gets inspired by UNIX. Evolve, borrow, constantly grow. In David and Goliath terms, open source software is no longer a David, but, you know what? It's not even Goliath, either. Open source has transcended. It's become the battlefield that others fight on. As the open source approach becomes inevitable, new wars, wars that are fought in the cloud, wars that are fought on the open source battlefield, are ramping up.

Here's author Steven Levy.

Steven Levy: [00:24:00] Basically, right now, we have four or five companies, if you count Microsoft, that in various ways are fighting to be the platform for all we do, for artificial intelligence, say. You see wars between intelligent assistants, and guess what? Apple has an intelligent assistant, Siri. Microsoft has one, Cortana. Google has the Google Assistant. Samsung has an intelligent assistant. Amazon has one, Alexa. We see these battles shifting to different areas, there. Maybe, you could say, the hottest one would be whose AI platform is going to control all the stuff in our lives, and those five companies are all competing for that.

Saron Yitbarek: If you're looking for another rebel that's going to sneak up behind Facebook or Google or Amazon and blindside them the way Linux blindsided Microsoft, you might be looking a long time, because as author James Allworth points out, being a true rebel is only getting harder and harder.

James Allworth: [00:25:30] Scale's always been an advantage but the nature of scale advantages are almost ... Whereas, I think previously they were more linear in nature, now it's more exponential in nature, and so, once you start to get out in front with something like this, it becomes harder and harder for a new player to come in and catch up. I think this is true of the internet era in general, whether it's scale like that or the importance and advantages that data bestow on an organization in terms of its ability to compete. Once you get out in front, you attract more customers, and then that gives you more data and that enables you to do an even better job, and then, why on earth would you want to go with the number two player, because they're so far behind? I think it's going to be no different in cloud.

Saron Yitbarek: [00:26:00] This story began with singular heroes like Steve Jobs and Bill Gates, but the progress of technology has taken on a crowdsourced, organic feel. I think it's telling that our open source hero, Linus Torvalds, didn't even have a real plan when he first invented the Linux kernel. He was a brilliant, young developer for sure, but he was also like a single drop of water at the very front of a tidal wave. The revolution was inevitable. It's been estimated that for a proprietary company to create a Linux distribution in their old-fashioned, proprietary way, it would cost them well over $10 billion. That points to the power of open source.

[00:26:30] In the end, it's not something that a proprietary model is going to compete with. Successful companies have to remain open. That's the big, ultimate lesson in all this. Something else to keep in mind: When we're wired together, our capacity to grow and build on what we've already accomplished becomes limitless. As big as these companies get, we don't have to sit around waiting for them to give us something better. Think about the new developer who learns to code for the sheer joy of creating, the mom who decides that if nobody's going to build what she needs, then she'll build it herself.

Wherever tomorrow's great programmers come from, they're always going to have the capacity to build the next big thing, so long as there's access to the command line.

[00:27:30] That's it for our two-part tale on the OS wars that shaped our digital lives. The struggle for dominance moved from the desktop to the server room, and ultimately into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open. Listen, I know, there are a hundred other heroes we didn't have space for in this history trip, so drop us a line. Share your story. Redhat.com/commandlineheroes. I'm listening.

We're spending the rest of the season learning what today's heroes are creating, and what battles they're going through to bring their creations to life. Come back for more tales — from the epic front lines of programming. We drop a new episode every two weeks. In a couple weeks' time, we bring you episode three: the Agile Revolution.

[00:28:00] Command Line Heroes is an original podcast from Red Hat. To get new episodes delivered automatically for free, make sure to subscribe to the show. Just search for “Command Line Heroes” in Apple Podcasts, Spotify, Google Play, and pretty much everywhere else you can find podcasts. Then, hit “subscribe” so you will be the first to know when new episodes are available.

I'm Saron Yitbarek. Thanks for listening. Keep on coding.

--------------------------------------------------------------------------------

via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux

作者:[redhat][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.redhat.com
[b]: https://github.com/lujun9972
@ -1,151 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A brief history of text-based games and open source)
[#]: via: (https://opensource.com/article/18/7/interactive-fiction-tools)
[#]: author: (Jason Mclntosh https://opensource.com/users/jmac)

A brief history of text-based games and open source
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)

The [Interactive Fiction Technology Foundation][1] (IFTF) is a non-profit organization dedicated to the preservation and improvement of technologies enabling the digital art form we call interactive fiction. When a Community Moderator for Opensource.com suggested an article about IFTF, the technologies and services it supports, and how it all intersects with open source, I found it a novel angle to the decades-long story I’ve so often told. The history of IF is longer than—but quite enmeshed with—the modern FOSS movement. I hope you’ll enjoy my sharing it here.

### Definitions and history

To me, the term interactive fiction includes any video game or digital artwork whose audience interacts with it primarily through text. The term originated in the 1980s when parser-driven text adventure games—epitomized in the United States by [Zork][2], [The Hitchhiker’s Guide to the Galaxy][3], and the rest of [Infocom][4]’s canon—defined home-computer entertainment. Its mainstream commercial viability had guttered by the 1990s, but online hobbyist communities carried on the tradition, releasing both games and game-creation tools.

After a quarter century, interactive fiction now comprises a broad and sparkling variety of work, from puzzle-laden text adventures to sprawling and introspective hypertexts. Regular online competitions and festivals provide a great place to peruse and play new work: The English-language IF world enjoys annual events including [Spring Thing][5] and [IFComp][6], the latter a centerpiece of modern IF since 1995—which also makes it the longest-lived continually running game showcase event of its kind in any genre. [IFComp’s crop of judged-and-ranked entries from 2017][7] shows the amazing diversity in form, style, and subject matter that text-based games boast today.

(I specify "English-language" above because IF communities tend to self-segregate by language, perhaps due to the technology's focus on writing. There are also annual IF events in [French][8] and [Italian][9], for example, and I've heard of at least one Chinese IF festival. Happily, these borders are porous; during the four years I've managed IFComp, it has welcomed English-translated work from all international communities.)

![counterfeit monkey game screenshot][11]

Starting a new game of Emily Short's "Counterfeit Monkey," running on the interpreter Lectrote (both open source software).

Also due to its focus on text, IF presents some of the most accessible platforms for both play and authorship. Almost anyone who can read digital text—including users of assistive technology such as text-to-speech software—can play most IF works. Likewise, IF creation is open to all writers willing to learn and work with its tools and techniques.

This brings us to IF’s long relationship with open source, which has helped enable the art form’s availability since its commercial heyday. I'll provide an overview of contemporary open-source IF creation tools, and then discuss the ancient and sometimes curious tradition of IF works that share their source code.

### The world of open source IF tools

A number of development platforms, most of which are open source, are available to create traditional parser-driven IF in which the user types commands—for example, `go north`, `get lamp`, `pet the cat`, or `ask Zoe about quantum mechanics`—to interact with the game’s world. The early 1990s saw the emergence of several hacker-friendly parser-game development kits; those still in use today include [TADS][12], [Alan][13], and [Quest][14]—all open, with the latter two bearing FOSS licenses.
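
To make "parser-driven" concrete, here is a toy sketch in Python of the read-parse-respond loop that these kits elaborate on with far richer grammars and world models. The two-word grammar and tiny world below are invented for illustration and come from none of the tools named above:

```python
# A toy two-word parser in the spirit of classic text adventures.
world = {"lamp": "A brass lamp, slightly dented."}
inventory = []

def respond(command: str) -> str:
    words = command.lower().split()
    if not words:
        return "Beg pardon?"
    verb, noun = words[0], words[-1]
    if verb == "look":
        return "You are in a small room. You see: " + ", ".join(world) + "."
    if verb in ("get", "take") and noun in world:
        inventory.append(noun)
        return f"Taken. ({world.pop(noun)})"
    return f"I don't know how to '{command}'."

for cmd in ["look", "get lamp", "pet the cat"]:
    print(">", cmd)
    print(respond(cmd))
```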

But by far the most prominent of these is [Inform][15], first released by Graham Nelson in 1993 and now maintained by a team Nelson still leads. Inform source is semi-open, in an unusual fashion: Inform 6, the previous major version, [makes its source available through the Artistic License][16]. This has more immediate relevance than may be obvious, since the otherwise proprietary Inform 7 holds Inform 6 at its core, translating its [remarkable natural-language syntax][17] into its predecessor’s more C-like code before letting it compile the work down into machine code.

![inform 7 IDE screenshot][19]

The Inform 7 IDE, loaded up with documentation and a sample project.

Inform games run on a virtual machine, a relic of the Infocom era when that publisher targeted a VM so that it could write a single game that would run on Apple II, Commodore 64, Atari 800, and other flavors of the "[home computer][20]." Fewer popular operating systems exist today, but Inform’s virtual machines—the relatively modern [Glulx][21] or the charmingly antique [Z-machine][22], a reverse-engineered clone of Infocom’s historical VM—let Inform-created work run on any computer with an Inform interpreter. Currently, popular cross-platform interpreters include desktop programs like [Lectrote][23] and [Gargoyle][24] or browser-based ones like [Quixe][25] and [Parchment][26]. All are open source.
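
The arrangement is easy to picture with a toy model. The sketch below, in Python, has the same shape as a Z-machine- or Glulx-style interpreter: a loop that fetches portable bytecodes and dispatches on them. The three-opcode instruction set here is invented for illustration and is not the real Z-machine or Glulx format:

```python
# A toy stack-based virtual machine: one compiled "story file"
# (the bytecode list) runs anywhere this interpreter runs.
PUSH, ADD, PRINT = 0, 1, 2

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:      # push the following byte as a value
            stack.append(bytecode[pc + 1])
            pc += 2
        elif op == ADD:     # pop two values, push their sum
            stack.append(stack.pop() + stack.pop())
            pc += 1
        elif op == PRINT:   # pop and print the top of the stack
            print(stack.pop())
            pc += 1

run([PUSH, 2, PUSH, 40, ADD, PRINT])  # prints 42
```

Port the dozen lines of `run()` to a new platform and every story file written for it comes along for free, which is exactly the economics that led Infocom to target a VM in the first place.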

If the pace of Inform’s development has slowed in its maturity, it remains vital through an active and transparent ecosystem—just like any other popular open source project. In Inform’s case, this includes the aforementioned interpreters, [a collection of language extensions][27] (usually written in a mix of Inform 6 and 7), and of course, all the work created with it and shared with the world, sometimes with source included (I’ll return to that topic later in this article).

IF creation tools invented in the 21st century tend to explore player interactions outside of the traditional parser, generating hypertext-driven work that any modern web browser can load. Chief among these is [Twine][28], originally developed by Chris Klimas in 2009 and under active development by many contributors today as [a GNU-licensed open source project][29]. (In fact, [Twine][30] can trace its OSS lineage back to [TiddlyWiki][31], the project from which Klimas initially derived it.)

Twine represents a sort of maximally [open and accessible approach][30] to IF development: Beyond its own FOSS nature, it renders its output as self-contained websites, relying not on machine code requiring further specialized interpretation but the open and well-exercised standards of HTML, CSS, and JavaScript. As a creative tool, Twine can match its own exposed complexity to the creator’s skill level. Users with little or no programming knowledge can create simple but playable IF work, while those with more coding and design skills—including those developing these skills by making Twine games—can develop more sophisticated projects. Little wonder that Twine’s visibility and popularity in educational contexts has grown quite a bit in recent years.
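
The data model underneath such hypertext work is small enough to sketch. A Twine-like story is essentially a graph of passages joined by links; the hypothetical Python sketch below walks such a graph, whereas real Twine compiles a structure like this into a single self-contained HTML file:

```python
# A hypothetical passage graph in the spirit of a Twine story:
# each passage holds some prose plus the names of linked passages.
story = {
    "Start": ("You stand at a fork in the road.", ["Go left", "Go right"]),
    "Go left": ("A quiet forest path. The end.", []),
    "Go right": ("A busy market street. The end.", []),
}

def play(passage="Start"):
    while True:
        text, links = story[passage]
        print(text)
        if not links:
            break
        for i, name in enumerate(links, 1):
            print(f"  {i}. {name}")
        passage = links[int(input("Choice? ")) - 1]

# play()  # uncomment to run an interactive session
```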
|
||||
|
||||
Other noteworthy open source IF development projects include the MIT-licensed [Undum][32] by Ian Millington, and [ChoiceScript][33] by Dan Fabulich and the [Choice of Games][34] team—both of which also target the web browser as the gameplay platform. Looking beyond strict development systems like these, web-based IF gives us a rich and ever-churning ecosystem of open source work, such as furkle’s [collection of Twine-extending tools][35] and Liza Daly’s [Windrift][36], a JavaScript framework purpose-built for her own IF games.
|
||||
|
||||
### Programs, games, and game-programs

Twine benefits from [a standing IFTF program dedicated to its support][37], allowing the public to help fund its maintenance and development. IFTF also directly supports two long-time public services, IFComp and the IF Archive, both of which depend upon and contribute back into open software and technologies.

![Harmonia opening screen shot][39]

The opening of Liza Daly's "Harmonia," created with the Windrift open source IF-creation framework.

The Perl- and JavaScript-based application that runs the IFComp’s website has been [a shared-source project][40] since 2014, and it reflects [the stew of FOSS licenses used by its IF-specific sub-components][41], including the various code libraries that allow parser-driven competition entries to run in a web browser. [The IF Archive][42]—online since 1992 and [an IFTF project since 2017][43]—is a set of mirrored repositories based entirely on ancient and stable internet standards, with [a little open source Python script][44] to handle indexing.
### At last, the fun part: Let's talk about open source text games

The bulk of the archive [comprises games][45], of course—years and years of games, reflecting decades of evolving game-design trends and IF tool development.

Lots of IF work shares its source code, and the community’s quick-start solution for finding it is simple: [Search the IFDB for the tag "source available"][46]. (The IFDB is yet another long-running IF community service, run privately by TADS creator Mike Roberts.) Users who are comfortable with a more bare-bones interface may also wish to browse [the `/games/source` directory][47] of the IF Archive, which groups content by development platform and written language (there's also a lot of work either too miscellaneous or too ancient to categorize floating at the top).

A little bit of random sampling of these code-sharing games reveals an interesting dilemma: Unlike the wider world of open source software, the IF community lacks a generally agreed-upon way of licensing all the code that it generates. Unlike a software tool—including all the tools we use to build IF—an interactive fiction game is a work of art in the most literal sense, meaning that an open source license intended for software would fit it no better than it would any other work of prose or poetry. But then again, an IF game is also a piece of software, and it exhibits source-code patterns and techniques that its creator may legitimately wish to share with the world. What is an open source-aware IF creator to do?

Some games address this by passing their code into the public domain, either through explicit license or—as in the case of [the original 42-year-old Adventure by Crowther and Woods][48]—through community fiat. Some try to split the difference, rolling their own license that allows for free re-use of a game’s exposed business logic but prohibits the creation of work derived specifically from its prose. This is the tack I took when I opened up the source of my own game, [The Warbler’s Nest][49]. Lord knows how well that’d stand up in court, but I didn’t have any better ideas at the time.

Naturally, you can find work that simply puts everything under a single common license and never mind the naysayers. A prominent example is [Emily Short’s epic Counterfeit Monkey][50], released in its entirety under a Creative Commons 4.0 license. [CC frowns at its application to code][51], but you could argue that [the strangely prose-like nature of Inform 7 source][52] makes it at least a little more compatible with a CC license than a more traditional software project would be.
### What now, adventurer?

If you are eager to start exploring the world of interactive fiction, here are a few links to check out:

+ As mentioned above, IFDB and the IF Archive both present browsable interfaces to more than 40 years’ worth of collected interactive fiction work. Much of this is playable in a web browser, but some require additional interpreter programs. IFDB can help you find and install these. IFComp’s annual results pages provide another view into the best of this free and archive-available work.
+ The Interactive Fiction Technology Foundation is a charitable non-profit organization that helps support Twine, IFComp, and the IF Archive, as well as improve the accessibility of IF, explore IF’s use in education, and more. Join its mailing list to receive IFTF’s monthly newsletter, peruse its blog, and browse some thematic merchandise.
+ John Paul Wohlscheid wrote this article about open-source IF tools earlier this year. It covers some platforms not mentioned here, so if you’re still hungry for more, have a look.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/interactive-fiction-tools

Author: [Jason McIntosh][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/jmac
[1]:http://iftechfoundation.org/
[2]:https://en.wikipedia.org/wiki/Zork
[3]:https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)
[4]:https://en.wikipedia.org/wiki/Infocom
[5]:http://www.springthing.net/
[6]:http://ifcomp.org/
[7]:https://ifcomp.org/comp/2017
[8]:http://www.fiction-interactive.fr/
[9]:http://www.oldgamesitalia.net/content/marmellata-davventura-2018
[10]:/file/403396
[11]:https://opensource.com/sites/default/files/uploads/monkey.png (counterfeit monkey game screenshot)
[12]:http://tads.org/
[13]:https://www.alanif.se/
[14]:http://textadventures.co.uk/quest/
[15]:http://inform7.com/
[16]:https://github.com/DavidKinder/Inform6
[17]:http://inform7.com/learn/man/RB_4_1.html#e307
[18]:/file/403386
[19]:https://opensource.com/sites/default/files/uploads/inform.png (inform 7 IDE screenshot)
[20]:https://www.youtube.com/watch?v=bu55q_3YtOY
[21]:http://ifwiki.org/index.php/Glulx
[22]:http://ifwiki.org/index.php/Z-machine
[23]:https://github.com/erkyrath/lectrote
[24]:https://github.com/garglk/garglk/
[25]:http://eblong.com/zarf/glulx/quixe/
[26]:https://github.com/curiousdannii/parchment
[27]:https://github.com/i7/extensions
[28]:http://twinery.org/
[29]:https://github.com/klembot/twinejs
[30]:/article/18/7/twine-vs-renpy-interactive-fiction
[31]:https://tiddlywiki.com/
[32]:https://github.com/idmillington/undum
[33]:https://github.com/dfabulich/choicescript
[34]:https://www.choiceofgames.com/
[35]:https://github.com/furkle
[36]:https://github.com/lizadaly/windrift
[37]:http://iftechfoundation.org/committees/twine/
[38]:/file/403391
[39]:https://opensource.com/sites/default/files/uploads/harmonia.png (Harmonia opening screen shot)
[40]:https://github.com/iftechfoundation/ifcomp
[41]:https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md
[42]:https://www.ifarchive.org/
[43]:http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html
[44]:https://github.com/iftechfoundation/ifarchive-ifmap-py
[45]:https://www.ifarchive.org/indexes/if-archiveXgames
[46]:http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22
[47]:https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html
[48]:http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv
[49]:https://github.com/jmacdotorg/warblers-nest/
[50]:https://github.com/i7/counterfeit-monkey
[51]:https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software
[52]:https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x
@@ -0,0 +1,538 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (I Used The Web For A Day On A 50 MB Budget — Smashing Magazine)
[#]: via: (https://www.smashingmagazine.com/2019/07/web-on-50mb-budget/)
[#]: author: (Chris Ashton https://www.smashingmagazine.com/author/chrisbashton)

I Used The Web For A Day On A 50 MB Budget
======
Data can be prohibitively expensive, especially in developing countries. Chris Ashton puts himself in the shoes of someone on a tight data budget and offers practical tips for reducing our websites’ data footprint.

This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs.

Last time, I [navigated the web for a day using Internet Explorer 8][7]. This time, I browsed the web for a day on a 50 MB budget.
### Why 50 MB?

Many of us are lucky enough to be on mobile plans which allow several gigabytes of data transfer per month. Failing that, we are usually able to connect to home or public WiFi networks that are on fast broadband connections and have effectively unlimited data.

But there are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.

> People often buy data packages of just tens of megabytes at a time, making a gigabyte a relatively large and therefore expensive amount of data to buy.
> — Dan Howdle, consumer telecoms analyst at Cable.co.uk

Just how expensive are we talking?
#### The Cost Of Mobile Data

A 2018 [study by cable.co.uk][8] found that Zimbabwe was the most expensive country in the world for mobile data, where 1 GB cost an average of $75.20, ranging from $12.50 to $138.46. The enormous range in price is due to smaller amounts of data being very expensive, getting proportionally cheaper the bigger the data plan you commit to. You can read the [study methodology][9] for more information.

Zimbabwe is by no means a one-off. Equatorial Guinea, Saint Helena and the Falkland Islands are next in line, with 1 GB of data costing $65.83, $55.47 and $47.39 respectively. These countries generally have a combination of poor technical infrastructure and low adoption, meaning data is both costly to deliver and doesn’t have the economy of scale to drive costs down.

Data is expensive in parts of Europe too. A gigabyte of data in Greece will set you back $32.71; in Switzerland, $20.22. For comparison, the same amount of data costs $6.66 in the UK, or $12.37 in the USA. On the other end of the scale, India is the cheapest place in the world for data, at an average cost of $0.26. Kyrgyzstan, Kazakhstan and Ukraine follow at $0.27, $0.49 and $0.51 per GB respectively.

The speed of mobile networks, too, varies considerably between countries. Perhaps surprisingly, [users experience faster speeds over a mobile network than WiFi][10] in at least 30 countries worldwide, including Australia and France. South Korea has the [fastest mobile download speed][11], averaging 52.4 Mbps, but Iraq has the slowest, averaging 1.6 Mbps download and 0.7 Mbps upload. The USA ranks 40th in the world for mobile download speeds, at around 34 Mbps, and is [at risk of falling further behind][12] as the world moves towards 5G.

As for mobile network connection type, 84.7% of user connections in the UK are on 4G, compared to 93% in the USA, and 97.5% in South Korea. This compares with less than 50% in Uzbekistan and less than 60% in Algeria, Ecuador, Nepal and Iraq.
#### The Cost Of Broadband Data

Meanwhile, a [study of the cost of broadband in 2018][13] shows that a broadband connection in Niger costs $263 ‘per megabit per month’. This metric is a little difficult to comprehend, so here’s an example: if the average cost of broadband packages in a country is $22, and the average download speed offered by the packages is 10 Mbps, then the cost ‘per megabit per month’ would be $2.20.

It’s an interesting metric, and one that acknowledges that broadband speed is as important a factor as the data cap. A cost of $263 suggests a combination of extremely slow and extremely expensive broadband. For reference, the metric is $1.19 in the UK and $1.26 in the USA.

What’s perhaps easier to comprehend is the average cost of a broadband package. Note that this study was looking for the cheapest broadband packages on offer, ignoring whether or not these packages had a data cap, so provides a useful ballpark figure rather than the cost of data per se.

On package cost alone, Mauritania has the most expensive broadband in the world, at an average of $768.16 (a range of $307.26 to $1,368.72). This enormous cost includes building physical lines to the property, since few already exist in Mauritania. At 0.7 Mbps, Mauritania also has one of the slowest broadband networks in the world.

[Taiwan has the fastest broadband in the world][14], at a mean speed of 85 Mbps. Yemen has the slowest, at 0.38 Mbps. But even countries with good established broadband infrastructure have so-called ‘not-spots’. The United Kingdom is ranked 34th out of 207 countries for broadband speed, but in July 2019 there was [still a school in the UK without broadband][15].

The average cost of a broadband package in the UK is $39.58, and in the USA is $67.69. The cheapest average in the world is Ukraine’s, at just $5, although the cheapest broadband deal of them all was found in Kyrgyzstan ($1.27 — against the country average of $108.22).

Zimbabwe was the most costly country for mobile data, and the statistics aren’t much better for its broadband, with an average cost of $128.71 and a ‘per megabit per month’ cost of $6.89.
#### Absolute Cost vs Cost In Real Terms

All of the costs outlined so far are the absolute costs in USD, based on the exchange rates at the time of the study. These costs have [not been adjusted for cost of living][16], meaning that for many countries the cost is actually far higher in real terms.

I’m going to limit my browsing today to 50 MB, which in Zimbabwe would cost around $3.67 on a mobile data tariff. That may not sound like much, but teachers in Zimbabwe were striking this year because their [salaries had fallen to just $2.50 a day][17].

For comparison, $3.67 is around half the [$7.25 minimum wage in the USA][18]. As a Zimbabwean, I’d have to work for around a day and a half to earn the money to buy this 50 MB of data, compared to just half an hour in the USA. It’s not easy to compare cost of living between countries, but on wages alone the $3.67 cost of 50 MB of data in Zimbabwe would feel like $52 to an American on minimum wage.
### Setting Up The Experiment

I launched Chrome and opened the dev tools, where I throttled the network to a slow 3G connection. I wanted to simulate a slow connection like those experienced by users in Uzbekistan, to see what kind of experience websites would give me. I also throttled my CPU to simulate being on a lower-end device.

[![][19]][20]I opted to throttle my network to Slow 3G and my CPU to 6x slowdown. ([Large preview][20])

I installed [ModHeader][21] and set the [‘Save-Data’ header][22] to let websites know I want to minimise my data usage. This is also the header set by Chrome for Android’s ‘Lite mode’, which I’ll cover in more detail later.

I downloaded [TripMode][23], an application for Mac which gives you control over which apps on your Mac can access the internet. Any other application’s internet access is automatically blocked.

![Screenshot of TripMode settings](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/6964df33-b6ca-4fe0-bbc3-8b9f3eb525cf/trip-mode.png)

You can enable/disable individual apps from connecting to the internet with TripMode. I enabled Chrome. ([Large preview][24])
How far do I predict my 50 MB budget will take me? With the [average weight of a web page being almost 1.7 MB][25], that suggests I’ve got around 29 pages in my budget, although probably a few more than that if I’m able to stay on the same sites and leverage browser caching.

Throughout the experiment I will suggest performance tips to speed up the [first contentful paint][26] and perceived loading time of the page. Some of these tips may not affect the amount of data transferred directly, but do generally involve deferring the download of less important resources, which on slow connections may mean the resources are never downloaded and data is saved.
### The Experiment

Without any further ado, I loaded google.com, using 402 KB of my budget and spending $0.03 (around 1% of my Zimbabwe budget).

[![402 KB transferred, 1.1 MB resources, 24 requests][27]][28]402 KB transferred, 1.1 MB resources, 24 requests. ([Large preview][28])

All in all, not a bad page size, but I wondered where those 24 network requests were coming from and whether or not the page could be made any lighter.
#### Google Homepage — DOM

[![][29]][30]Chrome devtools screenshot of the DOM, where I’ve expanded one inline `style` tag. ([Large preview][30])

Looking at the page markup, there are no external stylesheets — all of the CSS is inline.
##### Performance Tip #1: Inline Critical CSS

This is good for performance as it saves the browser having to make an additional network request in order to fetch an external stylesheet, so the styles can be parsed and applied immediately for the first contentful paint. There’s a trade-off to be made here, as external stylesheets can be cached but inline ones cannot (unless you [get clever with JavaScript][31]).

The general advice is for your [critical styles][32] (anything [above-the-fold][33]) to be inline, and for the rest of your styling to be external and loaded asynchronously. Asynchronous loading of CSS can be achieved in [one remarkably clever line of HTML][34]:
```
<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">
```
The devtools show a prettified version of the DOM. If you want to see what was actually downloaded to the browser, switch to the Sources tab and find the document.

[![A wall of minified code.][35]][36]Switching to Sources and finding the index shows the ‘raw’ HTML that was delivered to the browser. What a mess! ([Large preview][36])

You can see there is a LOT of inline JavaScript here. It’s worth noting that it has been uglified rather than merely minified.
##### Performance Tip #2: Minify And Uglify Your Assets

Minification removes unnecessary spaces and characters, but uglification actually ‘mangles’ the code to be shorter. The tell-tale sign is that the code contains short, machine-generated variable names rather than untouched source code. This is good as it means the script is smaller and quicker to download.
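To make the distinction concrete, here is the same made-up function in source, minified, and uglified form (any real build tool, such as terser, produces output along these lines):

```
// Original source: readable names, comments and whitespace intact.
function calculateTotalPrice(itemPrice, taxRate) {
  const tax = itemPrice * taxRate;
  return itemPrice + tax;
}

// Minified: whitespace and comments stripped, names left alone.
function calculateTotalPrice(itemPrice,taxRate){const tax=itemPrice*taxRate;return itemPrice+tax}

// Uglified: the same logic with machine-generated, mangled names.
function f(a,b){return a+a*b}
```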
Even so, inline scripts look to be roughly 120 KB of the 210 KB page resource (about half the 60 KB gzipped size). In addition, there are five external JavaScript files amounting to 291 KB of the 402 KB downloaded:

[![Network tab of DevTools showing the external javascript files][37]][38]Five external JavaScript files in the Network tab of the devtools. ([Large preview][38])

This means that JavaScript accounts for about 80 percent of the overall page weight.

This isn’t useless JavaScript; Google has to have some in order to display suggestions as you type. But I suspect a lot of it is tracking code and advertising setup.
For comparison, I disabled JavaScript and reloaded the page:

[![DevTools showing only 5 network requests][39]][40]The disabled JS version of Google search was only 102 KB and had just 5 network requests. ([Large preview][40])

The JS-disabled version of Google search is just 102 KB, as opposed to 402 KB. Although Google can’t provide autosuggestions under these conditions, the site is still functional, and I’ve just cut my data usage down to a quarter of what it was. If I really did have to limit my data usage in the long term, one of the first things I’d do is disable JavaScript. [It’s not as bad as it sounds][41].
##### Performance Tip #3: Less Is More

Inlining, uglifying and minifying assets is all well and good, but the best performance comes from not sending down the assets in the first place.

* Before adding any new features, do you have a [performance budget][42] in place?
* Before adding JavaScript to your site, can your feature be accomplished using plain HTML? (For example, [HTML5 form validation][43]; see the sketch after this list.)
* Before pulling a large JavaScript or CSS library into your application, use something like [bundlephobia.com][44] to measure how big it is. Is the convenience worth the weight? Can you accomplish the same thing using vanilla code at a much smaller data size?
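Here is a minimal sketch of that native validation, with illustrative field names; the browser blocks submission and shows its own error messages, with no JavaScript at all:

```
<form action="/subscribe" method="post">
  <!-- 'required' and 'type=email' are enforced by the browser itself. -->
  <input type="email" name="email" required placeholder="you@example.com">
  <!-- Numeric range validation, again with zero scripting. -->
  <input type="number" name="copies" min="1" max="10" value="1">
  <button type="submit">Sign up</button>
</form>
```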
#### Analysing The Resource Info

There’s a lot to unpack here, so let’s get cracking. I’ve only got 50 MB to play with, so I’m going to milk every bit of this page load. Settle in for a short Chrome Devtools tutorial.

402 KB transferred, but 1.1 MB of resources: what does that actually mean?

It means 402 KB of content was actually downloaded, but in its compressed form (using a compression algorithm such as [gzip or brotli][45]). The browser then had to do some work to unpack it into something meaningful. The total size of the unpacked data is 1.1 MB.

This unpacking isn’t free — [there are a few milliseconds of overhead in decompressing the resources][46]. But that’s a negligible overhead compared to sending 1.1 MB down the wire.
##### Performance Tip #4: Compress Text-based Assets

As a general rule, always compress your assets, using something like gzip. But don’t use compression on your images and other binary files — you should optimize these in advance at source. Compression could actually end up [making them bigger][47].

And, if you can, [avoid compressing files that are 1500 bytes or smaller][47]. The smallest TCP packet size is 1500 bytes, so by compressing to, say, 800 bytes, you save nothing, as the file is still transmitted in the same single packet. Again, the cost is negligible, but wastes some compression CPU time on the server and decompression CPU time on the client.
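Putting both rules into practice is usually a couple of lines of server configuration. Here is a sketch using Node with Express and the `compression` middleware, where the `threshold` option implements the 1500-byte floor (the middleware's default filter also skips content types that aren't text-based):

```
const express = require('express');
const compression = require('compression');

const app = express();

// Gzip text-based responses, but leave anything under one
// TCP packet's worth of bytes uncompressed.
app.use(compression({ threshold: 1500 }));

// Images and other binaries are served as-is; optimize those at source.
app.use(express.static('public'));

app.listen(3000);
```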
Now back to the Network tab in Chrome: let’s dig into those priorities. Notice that resources have priority “Highest” to “Lowest” — these are the browser’s best guess as to what are the more important resources to download. The higher the priority, the sooner the browser will try to download the asset.
##### Performance Tip #5: Give Resource Hints To The Browser

The browser will guess at what the highest priority assets are, but you can [provide a resource hint][48] using the `<link rel="preload">` tag, instructing the browser to download the asset as soon as possible. It’s a good idea to preload fonts, logos and anything else that appears above the fold.
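For example (the file paths here are placeholders):

```
<!-- Fetch above-the-fold assets early, before the parser discovers them. -->
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/img/logo.svg" as="image">
```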
Let’s talk about caching. I’m going to hold ALT and right-click to change my column headers to unlock some more juicy information. We’re going to check out Cache-Control.

![Screenshot showing how to display cache-control information](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88384090-3ed6-482c-a2b4-7aeb057c3b19/cache-control.png)

There are lots of interesting fields tucked away behind ALT. ([Large preview][49])

Cache-Control denotes whether or not a resource can be cached, how long it can be cached for, and what rules it should follow around [revalidating][50]. Setting proper cache values is crucial to keeping the data cost of repeat visits down.
##### Performance Tip #6: Set cache-control Headers On All Cacheable Assets

Note that the cache-control value begins with a directive of `public` or `private`, followed by an expiration value (e.g. `max-age=31536000`). What does the directive mean, and why the oddly specific `max-age` value?

[![Screenshot of Google network tab with cache-control column visible][51]][52]A mixture of max-age values and public/private. ([Large preview][52])

The value `31536000` is the number of seconds there are in a year, and is the theoretical maximum value allowed by the cache-control specification. It is common to see this value applied to all static assets and effectively means “this resource isn’t going to change”. In practice, [no browser is going to cache for an entire year][53], but it will cache the asset for as long as makes sense.
To explain the public/private directive, we must explain the two main caches that exist off the server. First, there is the traditional browser cache, where the resource is stored on the user’s machine (the ‘client’). And then there is the CDN cache, which sits between the client and the server; resources are cached at the CDN level to prevent the CDN from requesting the resource from the origin server over and over again.

A `Cache-Control` directive of `public` allows the resource to be cached in both the client and the CDN. A value of `private` means only the client can cache it; the CDN is not supposed to. This latter value is typically used for pages or assets that exist behind authentication, where it is fine to be cached on the client but we wouldn’t want to leak private information by caching it in the CDN and delivering it to other users.

[![Screenshot of Google logo cache-control setting: private, max-age=31536000][54]][55]The Google logo is served with `private, max-age=31536000`. ([Large preview][55])

One thing that got my attention was that the Google logo has a cache control of “private”. Other images on the page do have a public cache, and I don’t know why the logo would be treated any differently. If you have any ideas, let me know in the comments!
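Setting these headers is typically a one-liner in whatever serves your assets. Here is a sketch in Express, reserving the year-long `max-age` for fingerprinted static files:

```
const express = require('express');
const app = express();

// Static, fingerprinted assets: cache anywhere, for up to a year.
app.use('/static', express.static('static', {
  setHeaders: (res) => res.set('Cache-Control', 'public, max-age=31536000, immutable'),
}));

// Dynamic pages: the client must revalidate on every request.
app.get('/', (req, res) => {
  res.set('Cache-Control', 'private, max-age=0');
  res.send('<h1>Hello</h1>');
});

app.listen(3000);
```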
I refreshed the page and most of the resources were served from cache, apart from the page itself, which as you’ve seen already is `private, max-age=0`, meaning it cannot be cached. This is normal for dynamic web pages where it is important that the user always gets the very latest page when they refresh.

It was at this point I accidentally clicked on an ‘Explanation’ URL in the devtools, which took me to the [network analysis reference][56], costing me about 5 MB of my budget. Oops.
### Google Dev Docs

4.2 MB of this new 5 MB page was down to images; specifically SVGs. The weightiest of these was 186 KB, which isn’t particularly big — there were just so many of them, and they all downloaded at once.

![Scrolling down the very long dev docs page](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/39870718-c891-4d34-bd1b-74a9d28986a0/gif-scrolling-down-the-very-long-dev-docs-page.gif)

This is a loooong page. All the images downloaded on page load. ([Large preview][57])

That 5 MB page load was 10% of my budget for today. So far I’ve used 5.5 MB, including the no-JavaScript reload of the Google homepage, and spent $0.40. I didn’t even mean to open this page.

What would have been a better user experience here?
##### Performance Tip #7: Lazy-load Your Images

Ordinarily, if I accidentally clicked on a link, I would hit the back button in my browser. I’d have received no benefit whatsoever from downloading those images — what a waste of 4.2 MB!

Apart from video, where you generally know what you’re getting yourself into, images are by far the biggest culprit to data usage on the web. A [study of the world’s top 500 websites][58] found that images take up to 53% of the average page weight. “This means they have a big impact on page-loading times and subsequently overall performance”.

Instead of downloading all of the images on page load, it is good practice to lazy-load the images so that only users who are engaged with the page pay the cost of downloading them. Users who choose not to scroll below the fold therefore don’t waste any unnecessary bandwidth downloading images they’ll never see.

There’s a great [css-tricks.com guide to rolling out lazy-loading for images][59] which offers a good balance between those on good connections, those on poor connections, and those with JavaScript disabled.

If this page had implemented lazy loading as per the guide above, each of the 38 SVGs would have been represented by a 1 KB placeholder image by default, and only loaded into view on scroll.
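The linked guide covers placeholders and no-JS fallbacks in depth, but the core mechanism fits in a few lines. Here is a minimal sketch using `IntersectionObserver`; the class name and `data-src` attribute are my own conventions:

```
<img class="lazy" src="placeholder.svg" data-src="heavy-diagram.svg" alt="Architecture diagram">

<script>
  // Swap in the real image only when it approaches the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      const img = entry.target;
      img.src = img.dataset.src; // trigger the real download
      obs.unobserve(img);
    });
  }, { rootMargin: '200px' }); // start loading slightly before it scrolls into view

  document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
</script>
```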
##### Performance Tip #8: Use The Right Format For Your Images

I thought that Google had missed a trick by not using [WebP][60], which is an image format that is 26% smaller in size compared to PNGs (with no loss in quality) and 25-34% smaller in size compared to JPEGs (and of a comparable quality). I thought I’d have a go at converting SVG to WebP.

Converting to WebP did bring one of the SVGs down from 186 KB to just 65 KB, but actually, looking at the images side by side, the WebP came out grainy:

[![Comparison of the two images][61]][62]The SVG (left) is noticeably crisper than the WebP (right). ([Large preview][62])

I then tried converting one of the PNGs to WebP, which is supposed to be lossless and should come out smaller. However, the WebP output was *heavier* (127 KB, from 109 KB)!

[![Comparison of the two images][63]][64]The PNG (left) is a similar quality to the WebP (right) but is smaller at 109 KB compared to 127 KB. ([Large preview][64])

This surprised me. WebP isn’t necessarily the silver bullet we think it is, and even Google have neglected to use it on this page.

So my advice would be: where possible, experiment with different image formats on a per-image basis. The format that keeps the best quality for the smallest size may not be the one you expect.
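Where WebP does win for a given image, the `<picture>` element lets you serve it without cutting off browsers that lack support (file names are illustrative):

```
<picture>
  <!-- Served to browsers that understand WebP. -->
  <source type="image/webp" srcset="hero.webp">
  <!-- Fallback for everyone else. -->
  <img src="hero.png" alt="Product hero image">
</picture>
```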
Now back to the DOM. I came across this:
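The element in question was an analytics include along these lines (a reconstruction; the exact script URL is an assumption):

```
<script async src="https://www.google-analytics.com/analytics.js"></script>
```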
Notice the `async` keyword on the Google analytics script?

[![Screenshot of performance analysis output of devtools][65]][66]Google analytics has ‘low’ priority. ([Large preview][66])

Despite being one of the first things in the head of the document, this was given a low priority, as we’ve explicitly opted out of being a blocking request by using the `async` keyword.

A blocking request is one that stops the rendering of the page. A `<script>` call is blocking by default, stopping the parsing of the HTML until the script has downloaded, compiled and executed. This is why we traditionally put `<script>` calls at the end of the document.
##### Performance Tip #9: Avoid Writing Blocking Script Calls

By adding the `async` attribute to our `<script>` tag, we’re telling the browser not to stop rendering the page but to download the script in the background. If the HTML is still being parsed by the time the script is downloaded, the parsing is paused while the script is executed, and then resumed. This is significantly better than blocking the rendering as soon as `<script>` is encountered.

There is also a `defer` attribute, which is subtly different. `<script defer>` tells the browser to render the page while the script loads in the background, and even if the HTML is still being parsed by the time the script is downloaded, the script must wait until the page is rendered before it can be executed. This makes the script completely non-blocking. Read “[Efficiently load JavaScript with defer and async][67]” for more information.
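Here are the three loading behaviours side by side (script names are illustrative):

```
<!-- Blocking: parsing halts until this downloads and executes. -->
<script src="analytics.js"></script>

<!-- Async: downloads in parallel; executes as soon as it arrives. -->
<script async src="analytics.js"></script>

<!-- Defer: downloads in parallel; executes only after parsing finishes. -->
<script defer src="app.js"></script>
```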
Anyway, enough Google dissecting. It’s time to try out another site. I’ve still got almost 45 MB of my budget left!
### Amazon

The Amazon homepage loaded with a total weight of about 6 MB. One of these requests was a 587 KB image that I couldn’t even find on the page. This was a PNG, presumably to have crisp text, but on a photographic background — a classic combination that’s terrible for performance.

[![image of spanners with overlaid text: Hands-on time. Discover our tool selection for your car][68]][69]This grainy image used over 1% of my budget. ([Large preview][69])

In fact, there were a few several-hundred-kilobyte images in my network tab that I couldn’t actually see on the page. I suspect a misconfiguration somewhere on Amazon, but these invisible images combined chewed through at least 1 MB of my data.

How about the hero image? It’s the main image on the page, and it’s only 94 KB transferred — but it could be reduced in size by about 15% if it were cropped directly around the text and footballers. We could then apply the same background color in CSS as is in the image. This has the additional advantage of being resizable down to smaller screens whilst retaining legibility of text.
I’ve said it once, and I’ll say it again: **optimising and lazy-loading your images is the single biggest improvement you can make to the page weight of your site**.

> Optimizing images provided, by far, the most significant data reduction. You can make the case JavaScript is a bigger deal for overall performance, but not data reduction. Optimizing or removing images is the safest way of ensuring a much lighter experience and that’s the primary optimization Data Saver relies on.
> — Tim Kadlec, [Making Sense of Chrome Lite Pages][70]

To be fair to Amazon, if I resize the browser to a mobile size and refresh the page, the site is optimized for mobile and the total page weight is only 2.1 MB.

But this brings me onto my next point…
##### Performance Tip #10: Don’t Make Assumptions About Data Connections

It’s difficult to detect if someone on a desktop is on a broadband connection or is tethering through a data-limited dongle or mobile. Many people work on the train like that, or live in an area where broadband infrastructure is poor but mobile signal is strong. In Amazon’s case, there is room to make some big data savings on the desktop site and we shouldn’t get complacent just because the screen size suggests I’m not on a mobile device.

Yes, we should expect a larger page load if our viewport is ‘desktop sized’ as the images will be larger and better optimized for the screen than a grainier mobile one. But the page shouldn’t be orders of magnitude bigger.

Moreover, I was sending the `Save-Data` header with my request. This header [explicitly indicates a preference for reduced data usage][71], and I hope more websites start to take notice of it in the future.
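One practical step is to honour that preference in your own code. Here is a sketch of a client-side check via the Network Information API, which mirrors the header in Chromium-based browsers (the image URLs are placeholders):

```
// The Save-Data preference is mirrored client-side on the
// Network Information API in supporting browsers.
const saveData = navigator.connection && navigator.connection.saveData;

// Assumes an <img id="hero"> is present in the page.
const hero = document.querySelector('#hero');
if (hero) {
  // Users who asked for reduced data get the lightweight variant.
  hero.src = saveData ? '/img/hero-small.jpg' : '/img/hero-full.jpg';
}
```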
The initial ‘desktop’ load may have been 6 MB, but after sitting and watching it for a minute it had climbed to 8.6 MB as the lower-priority resources and event tracking kicked into action. This page weight includes almost 1.7 MB of minified JavaScript. I don’t even want to begin to look at that.
##### Performance Tip #11: Use Web Workers For Your JavaScript

Which would be worse — 1.7 MB of JavaScript or 1.7 MB of images? The answer is JavaScript: the two assets are not equivalent when it comes to performance.

> A JPEG image needs to be decoded, rasterized, and painted on the screen. A JavaScript bundle needs to be downloaded and then parsed, compiled, executed — and there are a number of other steps that an engine needs to complete. Be aware that these costs are not quite equivalent.
> — Addy Osmani, The Cost of JavaScript in 2018

If you must ship this much JavaScript, try [putting it in a web worker][72]. This keeps the bulk of JavaScript off the main thread, which is now freed up for repainting the UI, helping your web page to stay responsive on low-powered devices.
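Here is a minimal sketch of the pattern; the summing function stands in for whatever heavy computation you would otherwise run on the main thread:

```
// main.js: hand the heavy work to a background thread.
const worker = new Worker('worker.js');
worker.postMessage({ items: [1, 2, 3, 4] });
worker.onmessage = (event) => {
  console.log('Result from worker:', event.data);
};
```

```
// worker.js: runs off the main thread and cannot touch the DOM.
self.onmessage = (event) => {
  const result = event.data.items.reduce((sum, n) => sum + n, 0); // stand-in for real work
  self.postMessage(result);
};
```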
I’m now about 15.5 MB into my budget, and have spent $1.14 of my Zimbabwe data budget. I’d have had to have worked for half a day as a teacher to earn the money to get this far.
### Pinterest

I’ve heard good things about Pinterest’s performance, so I decided to put it to the test.

[![A staggering 327 requests, making 6.1 MB of data.][73]][74]A staggering 327 requests, making 6.1 MB of data. ([Large preview][74])

Perhaps this isn’t the fairest of tests; I was taken to the sign-in page, upon which an asynchronous process found I was logged into Facebook and logged me in automatically. The page loaded relatively quickly, but the requests crept up as more and more content was preloaded.

However, I saw that on subsequent page loads, the service worker surfaced much of the content — saving about half of the page weight:

[![8.2 / 15.6 MB resources, and 39 / 180 requests handled by the service worker cache.][75]][76]8.2 / 15.6 MB resources, and 39 / 180 requests handled by the service worker cache. ([Large preview][76])

The Pinterest site is a progressive web app; it installed a service worker to manually handle caching of CSS and JS. I could now turn off my WiFi and continue to use the site (albeit not very usefully):
![Loading spinner and message saying you're not connected to the internet](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_2000/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/188acdb0-1ec0-4404-bcdb-b6cb653c5dcc/loading-spinner-and-message-saying-you-re-not-connected-to-the-internet.png)

You can’t do much when you’re offline. ([Large preview][77])
##### Performance Tip #12: Use Service Workers To Provide Offline Support

Wouldn’t it be great if I only had to load a website once over network, and now get all the information I need even if I’m offline?

A great example would be a website that shows the weather forecast for the week. I should only need to download that page once. If I turn off my mobile data and subsequently go back to the page at some point, it should be able to serve the last known content to me. If I connect to the internet again and load the page, I would get a more up to date forecast, but static assets such as CSS and images should still be served locally from the service worker.
This is possible by setting up a [service worker with a good caching strategy][78] so that cached pages can be re-accessed offline. The [lodash documentation website][79] is a nice example of a service worker in the wild:

[![Screenshot of devtools showing 'ServiceWorker' next to each request][80]][81]The Lodash docs work offline. ([Large preview][81])
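Here is a minimal cache-first worker in that spirit (the cache name and asset list are illustrative; register it from the page with `navigator.serviceWorker.register('/sw.js')`):

```
// sw.js: cache a small app shell at install time, then serve
// from cache first and fall back to the network.
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/styles.css', '/app.js'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```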
Content that rarely updates and is likely to be used quite regularly is a perfect candidate for service worker treatment. Dynamic sites with ever-changing news feeds aren’t quite so well suited for offline experiences, but can still benefit.

[![Screenshot of Chris Ashton profile on Pinterest][82]][83]The second Pinterest page load was 443 KB. ([Large preview][83])

Service workers can truly save the day when you’re on a tight data budget. I’m not convinced the Pinterest experience was the most optimal in terms of data usage – subsequent pages were around the 0.5 MB mark even on pages with few images — but letting your JavaScript handle page requests for you and keeping the same navigational elements in place can be very performant. The BBC manages a [transfer size of just 3.1 KB][84] for return-visits to articles that are renderable via the single page application.
So far, Pinterest alone has chewed through 14 MB, which means I’ve blown around 30 MB of my budget, or $2.20 (almost a day’s wages) of my Zimbabwe budget.

I’d better be careful with my final 20 MB… but where’s the fun in that?
### Gamespot

I picked this one because it felt noticeably sluggish on my mobile in the past and I wanted to dig into the reasons why. Sure enough, loading the homepage consumes 8.5 MB of data.

[![Screenshot of devtools alongside homepage][85]][86]The Gamespot homepage trickled up to 8.5 MB, and a whopping 347 requests. ([Large preview][86])

6.5 MB of this was down to an autoplaying video halfway down the page, which — to be fair — didn’t appear to download until I began scrolling. Nevertheless…

![The video is clipped off-screen](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/000d9d1e-05ea-4de0-b2d2-f5228ad75653/the-video-is-clipped-off-screen.gif)

The video is clipped off-screen. ([Large preview][87])

I could only see half the video in my viewport — the right hand side was clipped. It was also 30 seconds long, and I would wager that most people won’t sit and watch the whole thing. This single asset more than tripled the size of the page.
##### Performance Tip #13: Don’t Preload Video

As a rule, unless your site’s primary mode of communication is video, don’t preload it.

If you’re YouTube or Netflix, it’s reasonable to assume that someone coming to your page will want the video to auto-load and auto-play. There is an expectation that the video will chew through some data, but that it’s a fair exchange for the content. But if you’re a site whose primary medium is text and image — and you just happen to offer additional video content — then don’t preload the video.

Think of news articles with embedded videos. Many users only want to skim the article headline before moving on to their next thing. Others will read the article but ignore any embeds. And others will diligently click and watch each embedded video. We shouldn’t hog the bandwidth of every user on the assumption that they’re going to want to watch these videos.

To reiterate: [users don’t like autoplaying video][88]. As developers we only do it because our managers tell us to, and they only tell us to do it because all the coolest apps are doing it, and the coolest apps are only doing it because video ads generate 20 to 50 times more revenue than traditional ads. Google Chrome has started [blocking autoplay videos for some sites][89], based on personal preferences, so even if you develop your site to autoplay video, there’s no guarantee that’s the experience your users are getting.

If we agree that it’s a good idea to make video an opt-in experience (click to play), we can take it a step further and make it click to load too. That means mocking a video placeholder image with a play button over it, and only downloading the video when you click the play button. People on fast connections should notice no difference in buffer speed, and people on slow connections will appreciate how fast the rest of your site loaded because it didn’t have to preload a large video file.
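Here is a sketch of that click-to-load pattern; the poster image and video URL are placeholders:

```
<!-- Show a lightweight poster with a play button; fetch nothing else. -->
<button class="video-placeholder" data-video="/media/gameplay.mp4">
  <img src="/media/gameplay-poster.jpg" alt="Gameplay footage (click to play)">
</button>

<script>
  document.querySelectorAll('.video-placeholder').forEach((button) => {
    button.addEventListener('click', () => {
      // Only now do we pay the data cost of the video itself.
      const video = document.createElement('video');
      video.src = button.dataset.video;
      video.controls = true;
      video.autoplay = true;
      button.replaceWith(video);
    });
  });
</script>
```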
Anyway, back to Gamespot, where I was indeed forced to preload a large video file I ended up not watching. I then clicked through to a [game review page][90] that weighed another 8.5 MB, this time with 5.4 MB of video, before I even started scrolling down the page.

What was really galling was when I looked at what the video actually was. It was an [advert for a Samsung TV][91]! This advert cost me $0.40 of my Zimbabwe wages. Not only was it pre-loaded, but it also didn’t end up playing anywhere as far as I’m aware, so I never actually saw it.

[![Screenshot of the offending request][92]][93]This advert wasted 5.4 MB of my precious data. ([Large preview][93])

The ‘real’ video — the gameplay footage (in other words, the content) — wasn’t actually loaded until I clicked on it. And that ploughed through my remaining data in seconds.

That’s it. That’s my 50 MB gone. I’ll need to work another 1.5 days as a Zimbabwean schoolteacher to repeat the experience.
##### Performance Tip #14: Optimize For First Page Load

What’s striking is that I used 50 MB of data and in most cases, I only visited one or two pages on any given site. If you think about it, this is true of most user journeys today.

Think about the last time you Googled something. You no doubt clicked on the first search result. If you got your answer, you closed the tab, or else you hit the back button and moved onto the next search result.

With the exception of a few so-called ‘destination sites’ such as Facebook or YouTube, where users habitually go as a starting point for other activities, the majority of user journeys are ephemeral. We stumble across random sites to get the answers to our questions, never to return to those sites again.

Web development practices are heavily skewed towards optimising for repeat visitors. “Cache these assets — they’ll come in handy later”. “Pre-load this onward journey, in case the user clicks to read more”. “Subscribe to our mailing list”.

Instead, I believe we should optimize heavily for one-off visitors. Call it a controversial opinion, but maybe caching isn’t really all that important. How important can a cached resource that never gets surfaced again be? And perhaps users aren’t actually going to subscribe to your mailing list after reading just the one article, so downloading the JavaScript and CSS for the mail subscription modal is both a waste of data and an [annoying user experience][94].
### The Decline Of Proxy Browsers

I had hoped to try out Opera Mini as part of this experiment. Opera Mini is a mobile web browser which proxies web pages through Opera’s compression servers. It accounts for 1.42% of global traffic as of June 2019, according to caniuse.com.

Opera Mini claims to save up to 90% of data by doing some pretty intensive transcoding. HTML is parsed, images are compressed, styling is applied, and a certain amount of JavaScript is executed on Opera’s servers. The server doesn’t respond with HTML as you might expect — it actually transcodes the data into Opera Binary Markup Language (OBML), which is progressively loaded by Opera Mini on the device. It renders what is essentially an interactive ‘snapshot’ of the web page — think of it as a PDF with hyperlinks inside it. Read Tiffany Brown’s excellent article, “[Opera Mini and JavaScript][95]” for a technical deep-dive.

It would have been a perfect way to eke out my 50 MB budget as far as possible. Unfortunately, Opera Mini is no longer available on iOS in the UK; attempting to visit it in the [app store][96] throws an error.

It’s still available “[in some markets][97]” but reading between the lines, Opera will be phasing out Opera Mini for its new app — Opera Touch — which [doesn’t have any data-saving functionality][98] apart from the ability to natively block ads.
Opera desktop used to have a ‘Turbo mode’, acting as a traditional proxy server (returning a HTML document instead of OBML), applying data-saving techniques but less intensively than Opera Mini. According to Opera, JavaScript continues to work and “you get all the videos, photos and text that you normally would, but you eat up less data and load pages faster”. However, [Opera quietly removed Turbo mode in v60][99] earlier this year, and Opera Touch [doesn’t have a Turbo mode][100] either. Turbo mode is currently only available on Opera for Android.

Android is where all the action is in terms of data-saving technology. Chrome offers a ‘Lite mode’ on its mobile browser for Android, which is not available for iPhones or iPads because of “[platform constraints][101]“. Outside of mobile, Google used to provide a ‘Data Saver’ extension for Chrome desktop, but [this was canned in April][102].

Lite mode for Chrome Android can be forcibly enabled, or automatically kicks in when the network’s effective connection type is 2G or worse, or when Chrome estimates the page will take more than 5 seconds to reach first contentful paint. Under these conditions, [Chrome will request the lite version of the HTTPS URL as cached by Google’s servers][103], and display this stripped-down version inside the user’s browser, alongside a “Lite” marker in the address bar.

[![Screenshot showing button in toolbar denoting 'Lite' mode][104]][105]‘Lite mode’ on Chrome for Android. Image: Google. ([Large preview][105])

I’d love to try it out — apparently it [disables scripts][106], [replaces images with placeholders][107], [prevents loading of non-critical resources][108] and [shows offline copies of pages][109] if one is available on the device. This [saves up to 60% of data][110]. However, [it isn’t available in private (Incognito) mode][101], which hints at some of the privacy concerns surrounding proxy browsers.

Lite mode shares the HTTPS URL with Google, therefore it makes sense that this mode isn’t available in Incognito. However other information such as cookies, login information, and personalised page content is not shared with Google — [according to ghacks.net][110] — and “never breaks secure connections between Chrome and a website”. One wonders why seemingly none of these data-saving services are allowed on iOS (and there is [no news as to whether Lite mode will ever become available on iOS][111]).

Data saver proxies require a great deal of trust; your browsing activity, cookies and other sensitive information are entrusted to some server, often in another country. Many proxies simply won’t work anymore because a lot of sites have moved to HTTPS, meaning initiatives such as Turbo mode have become a largely “[useless feature][112]“. HTTPS prevents this kind of man-in-the-middle behaviour, which is a good thing, although it has meant the demise of some of these proxy services and has made sites [less accessible to those on poor connections][113].

I was unable to find any OSX or iOS compatible data-saving tool except for [Bandwidth Hero][114] for Firefox (which requires setting up your own data compression service — far beyond the technical capabilities of most users!) and [skyZIP Proxy][115] (which, last updated in 2017 and riddled with typos, I just couldn’t bring myself to trust).
### Conclusion

Reducing the data footprint of your website goes hand in hand with improving frontend performance. It is the single most reliable thing you can do to speed up your site.

In addition to the cost of data, there are lots of good reasons to focus on performance, as described in a [GOV.UK blog post on the subject][116]:

* [53% of users will abandon a mobile site][117] if it takes more than 3 seconds to load.
* [People have to concentrate 50% more][118] when trying to complete a simple task on a website using a slow connection.
* More performant web pages are better for the battery life of the user’s device, and typically require less power on the server to deliver. A performant site is good for the environment.

We don’t have the power to change the global cost of data inequality. But we do have the power to lessen its impact, improving the experience for everyone in the process.
--------------------------------------------------------------------------------

via: https://www.smashingmagazine.com/2019/07/web-on-50mb-budget/

Author: [Chris Ashton][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.smashingmagazine.com/author/chrisbashton
[b]: https://github.com/lujun9972
[1]: https://www.smashingmagazine.com/author/chrisbashton
[2]: https://d33wubrfki0l68.cloudfront.net/1dbc465f56f3a812f09666f522fa226efd947cfa/a4d9f/images/smashing-cat/newsletter-fish-cat.svg
[3]: https://www.smashingmagazine.com/the-smashing-newsletter/
[4]: https://www.smashingmagazine.com/the-smashing-newsletter/
[5]: https://d33wubrfki0l68.cloudfront.net/a2b586e0ae8a08879457882013f0015fa9c31f7c/9e355/images/drop-caps/t.svg
[6]: https://d33wubrfki0l68.cloudfront.net/b5449482a65c611116580c9dfbf75c686b132629/e2b2f/images/drop-caps/character-7.svg
[7]: https://www.smashingmagazine.com/2019/03/web-on-internet-explorer-ie8/
[8]: https://www.cable.co.uk/mobiles/worldwide-data-pricing/
[9]: https://s3-eu-west-1.amazonaws.com/assets.cable.co.uk/mobile-data-cost/cost-of-a-gigabyte-research-method.pdf
[10]: https://www.opensignal.com/sites/opensignal-com/files/data/reports/global/data-2018-11/state_of_wifi_vs_mobile_opensignal_201811.pdf
[11]: https://www.opensignal.com/sites/opensignal-com/files/data/reports/global/data-2019-05/the_state_of_mobile_experience_may_2019_0.pdf
[12]: https://www.vox.com/recode/2019/7/12/20681214/mobile-speeds-slow-ookla-5g
[13]: https://www.cable.co.uk/broadband/pricing/worldwide-comparison/
[14]: https://www.cable.co.uk/broadband/speed/worldwide-speed-league/
[15]: https://www.bbc.co.uk/news/uk-wales-48982460
[16]: https://twitter.com/ChrisBAshton/status/1138726856872607744
[17]: https://www.timeslive.co.za/news/africa/2019-02-06-striking-zimbabwean-teachers-earn-equivalent-of-just-r700-a-month/
[18]: https://www.dol.gov/general/topic/wages/minimumwage
[19]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/491b7575-818f-4517-acf1-fd209a44aa74/01-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[20]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/491b7575-818f-4517-acf1-fd209a44aa74/01-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[21]: https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj
[22]: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/save-data/
[23]: https://www.tripmode.ch/
[24]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/58e4e027-f544-4068-9c0f-47a717885bc9/screenshot-of-tripmode-settings-chrome-is-enabled-mail-is-disabled.png
[25]: https://httparchive.org/reports/page-weight
[26]: https://web.dev/first-contentful-paint
[27]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/71c8184e-85af-41d6-9253-1a2d74cdb5ec/02-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[28]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/71c8184e-85af-41d6-9253-1a2d74cdb5ec/02-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[29]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69e02a99-4481-46bd-b2a9-e5411991a865/03-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[30]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69e02a99-4481-46bd-b2a9-e5411991a865/03-i-used-the-web-for-a-day-on-a-50-mb-budget.png
[31]: https://github.com/ChrisBAshton/inline-cacher
[32]: https://web.dev/extract-critical-css
[33]: https://www.abtasty.com/blog/above-the-fold/
[34]: https://www.filamentgroup.com/lab/load-css-simpler/
[35]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8cf45731-08f8-4e31-b0c7-5dec6b235e41/a-wall-of-minified-code.png
[36]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8cf45731-08f8-4e31-b0c7-5dec6b235e41/a-wall-of-minified-code.png
[37]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/dbc81fd5-f81e-4f44-8f2b-c1065ad26ed3/five-external-javascript-files.png
[38]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/dbc81fd5-f81e-4f44-8f2b-c1065ad26ed3/five-external-javascript-files.png
[39]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8fdcaa1a-0691-4ba2-a9e3-b2b236af5d88/disabled-js-version-of-google-search.png
[40]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8fdcaa1a-0691-4ba2-a9e3-b2b236af5d88/disabled-js-version-of-google-search.png
[41]: https://www.smashingmagazine.com/2018/05/using-the-web-with-javascript-turned-off/
[42]: https://web.dev/performance-budgets-101
[43]: https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Form_validation#Using_built-in_form_validation
[44]: https://bundlephobia.com/
[45]: https://medium.com/oyotech/how-brotli-compression-gave-us-37-latency-improvement-14d41e50fee4
[46]: https://stackoverflow.com/questions/16803876/browser-gzip-decompression-overhead-speed/16816099
[47]: https://www.itworld.com/article/2693941/why-it-doesn-t-make-sense-to-gzip-all-content-from-your-web-server.html
[48]: https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content
[49]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8831ed71-56a8-4431-a39f-298dd4bc072c/screenshot-showing-how-to-display-cache-control-information.png
[50]: https://traffic-control-cdn.readthedocs.io/en/latest/basics/cache_revalidation.html
[51]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/06e8d48f-193e-4370-b41e-7d68083bd0fe/screenshot-of-google-network-tab-with-cache-control-column-visible.png
[52]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/06e8d48f-193e-4370-b41e-7d68083bd0fe/screenshot-of-google-network-tab-with-cache-control-column-visible.png
[53]: https://ashton.codes/set-cache-control-max-age-1-year/
[54]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1010e6d4-6f0c-4d95-a088-67bf4e4b1b2c/screenshot-of-google-logo-cache-control-setting.png
||||
[54]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1010e6d4-6f0c-4d95-a088-67bf4e4b1b2c/screenshot-of-google-logo-cache-control-setting.png
|
||||
[55]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1010e6d4-6f0c-4d95-a088-67bf4e4b1b2c/screenshot-of-google-logo-cache-control-setting.png
|
||||
[56]: https://developers.google.com/web/tools/chrome-devtools/network/reference
|
||||
[57]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/39870718-c891-4d34-bd1b-74a9d28986a0/gif-scrolling-down-the-very-long-dev-docs-page.gif
|
||||
[58]: https://blog.uploadcare.com/image-optimization-and-performance-score-23516ebdd31d
|
||||
[59]: https://css-tricks.com/tips-for-rolling-your-own-lazy-loading/
|
||||
[60]: https://developers.google.com/speed/webp/
|
||||
[61]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/64eeb0ac-96ac-44ca-8164-59749f3b850f/the-svg-left-is-noticeably-crisper-than-the-webp.png
|
||||
[62]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/64eeb0ac-96ac-44ca-8164-59749f3b850f/the-svg-left-is-noticeably-crisper-than-the-webp.png
|
||||
[63]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/09abae72-2997-4230-a99e-52eb120006c5/the-png-left-is-a-similar-quality-to-the-webp.png
|
||||
[64]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/09abae72-2997-4230-a99e-52eb120006c5/the-png-left-is-a-similar-quality-to-the-webp.png
|
||||
[65]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/0b2a8a71-6573-4a7b-8008-d73a1c54f318/screenshot-of-performance-analysis-output-of-devtools.png
|
||||
[66]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/0b2a8a71-6573-4a7b-8008-d73a1c54f318/screenshot-of-performance-analysis-output-of-devtools.png
|
||||
[67]: https://flaviocopes.com/javascript-async-defer/
|
||||
[68]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/fe47c512-b890-4e8a-bebd-0e0529cc565b/image-of-spanners-with-overlaid-text.png
|
||||
[69]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/fe47c512-b890-4e8a-bebd-0e0529cc565b/image-of-spanners-with-overlaid-text.png
|
||||
[70]: https://timkadlec.com/remembers/2019-03-14-making-sense-of-chrome-lite-pages/
|
||||
[71]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Save-Data
|
||||
[72]: https://dassur.ma/things/when-workers/
|
||||
[73]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/e32acbef-01c4-4cbe-b5c1-6e840519f1c9/06-i-used-the-web-for-a-day-on-a-50-mb-budget.png
|
||||
[74]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/e32acbef-01c4-4cbe-b5c1-6e840519f1c9/06-i-used-the-web-for-a-day-on-a-50-mb-budget.png
|
||||
[75]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88853603-7b89-4ae1-8969-db19fac4a95d/07-i-used-the-web-for-a-day-on-a-50-mb-budget.png
|
||||
[76]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/88853603-7b89-4ae1-8969-db19fac4a95d/07-i-used-the-web-for-a-day-on-a-50-mb-budget.png
|
||||
[77]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/188acdb0-1ec0-4404-bcdb-b6cb653c5dcc/loading-spinner-and-message-saying-you-re-not-connected-to-the-internet.png
|
||||
[78]: https://serviceworke.rs/caching-strategies.html
|
||||
[79]: https://lodash.com/docs/
|
||||
[80]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3ae779a9-740f-4715-9e48-fd67a9a3a8ea/screenshot-of-devtools-showing-serviceworker-next-to-each-request.png
|
||||
[81]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3ae779a9-740f-4715-9e48-fd67a9a3a8ea/screenshot-of-devtools-showing-serviceworker-next-to-each-request.png
|
||||
[82]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/bef23c7c-9ad0-4ac9-b646-4352bf3b34d6/screenshot-of-chris-ashton-profile-on-pinterest.png
|
||||
[83]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/bef23c7c-9ad0-4ac9-b646-4352bf3b34d6/screenshot-of-chris-ashton-profile-on-pinterest.png
|
||||
[84]: https://www.bbc.co.uk/news/articles/c5ll353v7y9o
|
||||
[85]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8df6aa4b-4199-4fd0-b0fe-15f7c993796a/screenshot-of-devtools-alongside-homepage.png
|
||||
[86]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/8df6aa4b-4199-4fd0-b0fe-15f7c993796a/screenshot-of-devtools-alongside-homepage.png
|
||||
[87]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/000d9d1e-05ea-4de0-b2d2-f5228ad75653/the-video-is-clipped-off-screen.gif
|
||||
[88]: https://www.nytimes.com/2018/08/01/technology/personaltech/autoplay-video-fight-them.html
|
||||
[89]: https://www.theverge.com/2018/5/3/17251104/google-chrome-66-autoplay-sound-videos-mute
|
||||
[90]: https://www.gamespot.com/reviews/final-fantasy-xiv-shadowbringers-review-dancer-in-/1900-6417212/
|
||||
[91]: https://static.sharethrough.com/sfp/hosted_video/DS843eHqTGfEfxMBvLMh1n6uyq/video.mp4
|
||||
[92]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1d1b0ac0-1909-4542-b5e4-2096419ba635/screenshot-of-the-offending-request.png
|
||||
[93]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1d1b0ac0-1909-4542-b5e4-2096419ba635/screenshot-of-the-offending-request.png
|
||||
[94]: https://www.smashingmagazine.com/2019/06/web-designers-speed-mobile-websites/#2-stop-using-cumbersome-design-elements
|
||||
[95]: https://dev.opera.com/articles/opera-mini-and-javascript/
|
||||
[96]: https://apps.apple.com/app/id363729560
|
||||
[97]: https://twitter.com/opera/status/1084736938312110080
|
||||
[98]: https://www.guidingtech.com/opera-mini-vs-opera-touch-comparison-differences/
|
||||
[99]: https://techdows.com/2019/06/opera-quietly-removed-turbo-mode-from-their-browser.html
|
||||
[100]: https://forums.opera.com/topic/26886/no-turbo-mode-for-opera-touch
|
||||
[101]: https://support.google.com/chrome/answer/2392284
|
||||
[102]: https://venturebeat.com/2019/04/23/google-kills-chrome-data-saver-extension/
|
||||
[103]: https://www.zdnet.com/article/google-announces-chrome-lite-pages-a-way-to-speed-up-https-sites-on-slow-connections/
|
||||
[104]: https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69282232-f726-4760-9743-73373c5ee43f/chrome-lite-pages.png
|
||||
[105]: https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/69282232-f726-4760-9743-73373c5ee43f/chrome-lite-pages.png
|
||||
[106]: https://www.chromestatus.com/feature/4775088607985664
|
||||
[107]: https://www.chromestatus.com/feature/6072546726248448
|
||||
[108]: https://www.chromestatus.com/feature/4510564810227712
|
||||
[109]: https://www.chromestatus.com/feature/5076871637106688
|
||||
[110]: https://www.ghacks.net/2019/04/24/google-deprecates-chrome-data-saver-extension-for-the-desktop/
|
||||
[111]: https://www.phonearena.com/news/Google-Chrome-update-Data-Saver-Lite-mode-Android_id115558
|
||||
[112]: https://forums.opera.com/topic/32749/turbo-mode-disappear-in-opera-60-0/3
|
||||
[113]: https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/
|
||||
[114]: https://addons.mozilla.org/en-US/firefox/addon/bandwidth-hero/
|
||||
[115]: https://chrome.google.com/webstore/detail/skyzip-proxy/hbgknjagaclofapkgkeapamhmglnbphi
|
||||
[116]: https://technology.blog.gov.uk/2019/04/18/why-we-focus-on-frontend-performance/
|
||||
[117]: https://www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/
|
||||
[118]: http://www.tecnostress.it/wp-content/uploads/2010/02/final_webstress_survey_report_229296.pdf
|
||||
[119]: https://www.smashingmagazine.com/images/logo/logo--red.png
|
@ -0,0 +1,102 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Back to School: Your SD-WAN Reading List)
[#]: via: (https://www.networkworld.com/article/3433618/back-to-school-your-sd-wan-reading-list.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)

Back to School: Your SD-WAN Reading List
======

Summer is about to end. Time to head back to work, understand next year’s projects and plans, and set your IT infrastructure objectives accordingly.

![baona][1]

Summer is about to end. Time to head back to work, understand next year’s projects and plans, and set your IT infrastructure objectives accordingly. SD-WAN and MPLS transformation are huge trends that cannot be overlooked. How will they impact your existing IT?

MPLS transformations can send chills down any IT pro’s spine, but these suggested readings will help calm the nerves. They’re a series of clear, concise, and practical manuals divided into lessons on how to migrate from the prehistoric world of MPLS into the light of [SD-WAN][2]. Finish all of them and you’ll have an A+ in WAN Transformation 101.

**Lesson 1 – SD-WAN as an MPLS Alternative**

Are you still running MPLS to your branch offices? If so, you’re probably undermining cloud performance, wasting expensive bandwidth backhauling branch office Internet and cloud traffic, and taking far too long to open new locations. [**How to Re-evaluate your MPLS Provider**][3] delves into all the issues to consider when replacing or supplementing your MPLS. If nothing else, the table comparing MPLS, basic SD-WAN, and cloud-based SD-WAN is a gem you shouldn’t miss.

**Lesson 2 – Why the Internet Is Broken**

Public Internet connections work — sort of — but everyone knows that they’re not always as fast and reliable as they could be. If you’ve ever wondered why, and what to do about it, [The Secrets Behind Internet Routing][4] dives into the gory details of Internet routing and the BGP protocol, including why their fragmented, opaque workings are not ideal for today’s business uses. It does the same for MPLS, for those of you with hybrid networks, clarifying two complicated subjects and outlining exactly why you might want to consider alternatives.

**Lesson 3 – How to Transition from MPLS to SD-WAN**

You’re probably aware of the cost, flexibility, and cloud benefits of SD-WAN, but you may not have a clear idea of what it takes to get from here to there. [**How to Migrate from MPLS to SD-WAN**][5] is a high-level overview — based on insights from SD-WAN adopters, industry best practices, and years of network transition experience — of the steps you need to take to get from MPLS to SD-WAN. It’s a good start if you’re looking for a quick read to get you thinking about how to do your own migration.

**Lesson 4 – How to Migrate Your Sites to SD-WAN**

If you’ve read the previous high-level view and are ready for more specifics on site migration to SD-WAN, check out [**How to Migrate Sites to SD-WAN**][6]. It takes step one from the first e-book and delves deeply into the issues and sub-steps involved in transforming each site. The steps it covers in depth include:

  * Categorize Your Locations
  * Select the Right Last Mile
  * Decide on the Middle Mile
  * Engineer Your End-To-End Network Architecture
  * Procure Your Last Mile Services

**Lesson 5 – SD-WAN Security, Cloud, and Mobility Challenges**

Once you have a solid grasp of the steps for migrating your sites to SD-WAN, it’s time to start thinking about how to handle security, mobility, and the cloud. [**The Security, Cloud and Mobility Challenges Facing SD-WAN and How to Address Them**][7] addresses these topics, including:

  * **Transform Security**, which compares security appliances and cloud security services.
  * **Connect the Cloud to SD-WAN**, including agent-based vs. agentless cloud deployments, routing optimization, and security.
  * **Optimize the Mobile Experience**, including SD-WAN mobile access considerations.
  * **Determine the Right SD-WAN Management Model**, which compares SD-WAN appliances and services and discusses four SD-WAN management models.

**Lesson 6 – SD-WAN Migration Realities**

You’ve done your homework on the MPLS to SD-WAN transition, but really, wouldn’t it be a big help to hear from someone who has lived it? Definitely. Which is why you should listen to this exciting “reality show,” [Nick Dell Webinar: How I Migrated from MPLS to SD-WAN][8].

Nick Dell, an IT manager at a leading automotive manufacturer with 2,000 employees and nine locations, describes his experience transitioning to SD-WAN for just-in-time manufacturing, including connecting to his company’s critical cloud-based ERP and VoIP applications. He covers the problems and heartbreaks of his MPLS deployment, the alternatives he considered, the migration process, the questions he asked, and how he got 5X to 20X the bandwidth and better availability and security for less.

**Lesson 7 – MPLS or SLA-Backed Affordable Backbone**

Enterprises that depend on global network backbone connections may think that their only choice is between reliable but expensive MPLS and flexible but unreliable SD-WAN Internet routing. Think again. [MPLS or SLA-backed Affordable Backbone: Which is Right for Your Global Network?][9] digs into the challenges of MPLS and traditional SD-WAN global backbones and presents a potentially superior alternative: the SLA-backed Affordable Backbone. This new architecture combines global IP transit services from Tier-1 providers, an SD-WAN overlay, and commercial off-the-shelf hardware to deliver a much less expensive, more reliable, and agile alternative to MPLS. Get the details of how you can take advantage of this new global WAN alternative.

**Lesson 8 – How to Migrate from MPLS to Cato**

If you yearn for a less complex, more transformational alternative to traditional SD-WAN edge appliances and services, check out [**How to Migrate from MPLS to the Cato Cloud**][10]. This e-book tells you how moving to the Cato Cloud can achieve a simpler, more agile infrastructure that connects any user to any resource quickly and securely. It covers each step necessary for transitioning your network to Cato and how Cato slashes WAN costs, doubles throughput, and decreases deployment times from months to as little as 30 minutes.

**Lesson 9 – SD-WAN Use Cases and Success Stories**

In this on-demand webinar, [5 MPLS Migration Challenges SD-WAN Solves][11], learn how SD-WAN can help you slash remote office latency without paying MPLS prices, secure Internet access without appliance sprawl, optimize voice and unified communications, enhance branch office and mobile user cloud performance, and manage last-mile communications intelligently. It includes six case studies of how real companies solved these issues using SD-WAN.

**Bonus – SD-WAN Cheat Sheet**

Are you ready to make the leap to SD-WAN, but feel a little lost about how to choose an SD-WAN vendor and what questions to ask? Believe it or not, the answers can be found on a single page. Based on Nick Dell’s own experience and adventures migrating his company to SD-WAN, [Nick Dell’s Questions to Ask Yourself about SD-WAN][12] spells out the questions you need to ask your vendor and yourself before embarking on an SD-WAN journey, such as: What do you want to replace? How important is high availability? Does your SD-WAN vendor fulfill your future needs? And who provides support?

Finish one of these e-books and you’ll feel enlightened. Finish all of them and you’ll get a solid SD-WAN education in surprisingly little time. But don’t get so immersed in SD-WAN that you forget to enjoy the rest of your time off.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433618/back-to-school-your-sd-wan-reading-list.html

作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/istock-1065824638-100809025-large.jpg
[2]: https://www.catonetworks.com/glossary-use-cases/sd-wan?utm_source=IDG&utm_campaign=IDG
[3]: https://go.catonetworks.com/What-to-consider-before-renewing-your-MPLS-contract?utm_source=IDG&utm_campaign=IDG
[4]: https://go.catonetworks.com/The_Internet_is_Broken?utm_source=IDG&utm_campaign=IDG
[5]: https://go.catonetworks.com/How-to-Migrate-from-MPLS-to-SD-WAN?utm_source=IDG&utm_campaign=IDG
[6]: https://go.catonetworks.com/How-to-Migrate-Sits-to-SD-WAN?utm_source=IDG&utm_campaign=IDG
[7]: https://go.catonetworks.com/Challenges-Facing-SD-WAN-and-How-to-Address-Them?utm_source=IDG&utm_campaign=IDG
[8]: https://go.catonetworks.com/VOD-How-I-Migrated-From-MPLS-to-SD-WAN?utm_source=IDG&utm_campaign=IDG
[9]: https://go.catonetworks.com/MPLS-or-SLA-backed-Affordable-Backbone?utm_source=IDG&utm_campaign=IDG
[10]: https://go.catonetworks.com/How-to-Migrate-from-MPLS-to-Cato-Cloud?utm_source=IDG&utm_campaign=IDG
[11]: https://go.catonetworks.com/VOD-5-MPLS-migration-challenges?utm_source=IDG&utm_campaign=IDG
[12]: https://go.catonetworks.com/LP-Questions-to-ask-yourself-before-purchasing-SD-WAN?utm_source=IDG&utm_campaign=IDG
@ -0,0 +1,82 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Don’t worry about shadow IT. Shadow IoT is much worse.)
[#]: via: (https://www.networkworld.com/article/3433496/dont-worry-about-shadow-it-shadow-iot-is-much-worse.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Don’t worry about shadow IT. Shadow IoT is much worse.
======

Shadow IoT – the use of unauthorized internet-of-things devices and networks – poses a new level of threats for enterprises.

![Air Force photo illustration by Margo Wright][1]

For years, IT departments have been railing about the dangers of shadow IT and bring-your-own-device. The worry is that these unauthorized practices bring risks to corporate systems, introducing new vulnerabilities and increasing the attack surface.

That may be true, but it’s not the whole story. As I’ve long argued, shadow IT may increase risks, but it can also cut costs, boost productivity, and speed innovation. That’s why users are often [so eager to circumvent what they see as slow and conservative IT departments][2] by adopting increasingly powerful and affordable consumer and cloud-based alternatives, with or without the blessing of the powers that be. Just as important, there’s plenty of evidence that [enlightened IT departments should work to leverage those new approaches][3] to serve their internal customers in a more agile manner.

**Also on Network World:** [**5 key enterprise IoT security recommendations**][4]

### Shadow IoT takes shadow IT to the next level

So far so good. But this reasoning emphatically does not carry over to the [emerging practice of shadow IoT][5], which has become a growing concern in the last year or so. Basically, we are talking about people in your organization connecting internet-connected devices (or worse, entire [IoT][6] networks!) without IT’s knowledge.

Those renegades are likely seeking the same speed and flexibility that drove shadow IT, but they are taking a far bigger risk for a much smaller reward. Shadow IoT takes shadow IT to another level, with the potential for many more devices as well as new types of devices and use cases, not to mention the addition of wholly new networks and technologies.

### Why shadow IoT is worse than shadow IT

According to a 2018 report from [802 Secure][7], “IoT introduces [new operating systems, protocols and wireless frequencies][8]. Companies that rely on legacy security technologies are blind to this rampant IoT threat. Organizations need to broaden their view into these invisible devices and networks to identify rogue IoT devices on the network, visibility into shadow-IoT networks, and detection of nearby threats such as drones and spy cameras.”

The report noted that _all_ of the organizations surveyed had rogue consumer IoT wireless devices on their enterprise networks, and nine out of 10 had shadow IoT/[IIoT][9] wireless networks, defined as “undetected company-deployed wireless networks separate from the enterprise infrastructure.”

Similarly, a 2018 [Infoblox report][11] found that a third of companies have more than 1,000 shadow-IoT devices connected to their networks on a typical day, including fitness trackers, digital assistants, smart TVs, smart appliances, and gaming consoles. (And yes, Infoblox is talking about _enterprise_ networks.)

It gets worse. Many of these consumer IoT devices don’t even _try_ to be secure. And, per Microsoft, criminal and [state-sponsored actors][12] are already weaponizing these devices and networks (both shadow and IT-approved), as shown by the [Mirai botnet][13] and many others.

One more thing: Unlike cloud and consumer shadow IT, shadow IoT implementations often don’t provide additional levels of speed, agility, or usability, meaning that organizations are not getting much benefit in exchange for the heightened risks. But that doesn’t seem to be stopping people from using them on corporate networks.

### Security basics

Fortunately, protecting your organization from shadow IoT isn’t so different from following security best practices for other threats, including shadow IT.

**Education:** Make sure your team is aware of the threat, and try to get their buy-in on key IoT policies and security measures. According to that 802 Secure report, “88 percent of IT leaders in the US and UK believed they had an effective policy in place for mitigating security risks from connected devices. But a full 24 percent of employees represented in the survey said they did not even know such policies existed, while a bare 20 percent of the people who professed knowledge of these policies actually abided by them.” Sure, you’ll never get 100 percent participation, but people can’t follow a policy they don’t know exists.

**Assimilation:** Create policies that let team members easily connect their IoT devices and networks to the enterprise network with the IT department’s approval and support. It’s extra work, and some folks will inevitably go rogue anyway, but the more devices you know about, the better.

**Isolation:** Set up separate networks so you can support approved and shadow IoT devices while protecting your core corporate networks as much as possible.

**Monitoring:** Make regular checks of connected devices and networks, and proactively search for unknown devices on all your networks.
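
That last point lends itself to simple automation. Here is a minimal sketch of such a sweep, assuming `nmap` is installed, that `192.168.1.0/24` stands in for one of your subnets, and that `known_devices.txt` is a hypothetical inventory of IP addresses you have already approved:

```
#!/bin/bash
# Sketch only: ping-sweep a subnet and flag addresses missing from an approved list.
# 192.168.1.0/24 and known_devices.txt are placeholders for your own subnet and inventory.
nmap -sn 192.168.1.0/24 -oG - | awk '/Up$/ {print $2}' | sort > /tmp/seen_devices.txt

# comm -13 prints lines that appear only in the second file: devices nobody has approved.
comm -13 <(sort known_devices.txt) /tmp/seen_devices.txt
```

Anything the script prints is a candidate shadow device worth a closer look.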

Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433496/dont-worry-about-shadow-it-shadow-iot-is-much-worse.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/fight-shadow-100787429-large.jpg
[2]: http://www.networkworld.com/cms/article/6%20ways%20'shadow%20IT'%20can%20actually%20help%20IT
[3]: https://www.networkworld.com/article/2885758/6-ways-shadow-it-can-actually-help-it.html
[4]: https://www.networkworld.com/article/3269247/5-key-enterprise-iot-security-recommendations.html
[5]: https://www.bbntimes.com/en/technology/shadow-iot-is-a-threat-to-your-business-here-s-how-to-deal-with-it
[6]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[7]: https://www.prnewswire.com/news/802-secure%2C-inc
[8]: https://www.networkworld.com/article/3235124/internet-of-things-definitions-a-handy-guide-to-essential-iot-terms.html
[9]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[10]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[11]: https://www.infoblox.com/company/news-events/press-releases/infoblox-research-finds-explosion-of-personal-and-iot-devices-on-enterprise-networks-introduces-immense-security-risk/
[12]: https://diginomica.com/state-sponsored-cyber-spies-targeting-iot-warning-microsoft
[13]: https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/antonakakis
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world
@ -0,0 +1,201 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How the Linux desktop has grown)
[#]: via: (https://opensource.com/article/19/8/how-linux-desktop-grown)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

How the Linux desktop has grown
======

Since the early 1990s, the Linux desktop has matured from a simple window manager to a full desktop. Join us on a journey through the history of the Linux desktop.

![Person typing on a 1980's computer][1]

I first installed Linux in 1993. At that time, you really didn't have many options for installing the operating system. In those early days, many people simply copied a running image from someone else. Then someone had the neat idea to create a "distribution" of Linux that let you customize what software you wanted to install. That was the Softlanding Linux System (SLS) and my first introduction to Linux.

My '386 PC didn't have much memory, but it was enough. SLS 1.03 required 2MB of memory to run, or 4MB if you wanted to compile programs. If you wanted to run the X Window System, you needed a whopping 8MB of memory. And my PC had just enough memory to run X.

As I'd grown up with the command line, a graphical user interface wasn't essential to me. But it sure was convenient. I could run applications in different windows and easily switch between tasks.

From my first experiment with Linux, I was hooked. I've stuck with Linux on my desktop ever since. Like many people, I ran Linux in a dual-boot configuration for a while so I could jump back to MS-DOS and Windows to run certain programs. That lasted until 1998, when I finally took the plunge and went all-in with Linux.

Over the last 26 years, I have watched the Linux desktop mature. I've also tried an interesting combination of desktop environments over that time, which I'll share by taking a journey through the history of the Linux desktop.

### X and window managers

The first "desktops" on Linux weren't yet desktops. Instead, they were _window managers_ running on the X Window System. X provided the basic building blocks for a graphical user interface, such as creating windows on the screen and providing keyboard and mouse input. By itself, X didn't do much. To make the X graphical environment useful, you needed a way to manage all the windows in your session. That's where the _window manager_ came in. Running an X program like xterm or xclock opens that program in a window. The window manager keeps track of windows and does basic housekeeping, such as letting you move windows around and minimize them. The rest is up to you. You could launch programs when X started by listing them in the **~/.xinitrc** file, but usually, you'd run new programs from an xterm.
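
For the curious, a minimal **~/.xinitrc** from that era looked something like this (the specific clients are illustrative; any X programs would do):

```
# ~/.xinitrc: clients started along with X; all but the last line run in the background
xclock -geometry 100x100-0+0 &
xterm &
# exec the window manager last; when it exits, the X session ends
exec twm
```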

The most common window manager in 1993 was TWM, which dates back to 1988. TWM was quite simple and provided only basic window management.

![TWM on SLS 1.05][2]

TWM on SLS 1.05 showing xterm, xclock, and the Emacs editor

Yet another early window manager was the OpenLook Virtual Window Manager (OLVWM). OpenLook was a graphical user interface developed by Sun Microsystems in the 1980s and later ported to other Unix platforms. As a _virtual_ window manager, OLVWM supported multiple workspaces.

![OLVWM on SLS 1.05][3]

OLVWM on SLS 1.05 showing xterm and the Virtual Workspaces selector

When Linux began to grow in popularity, it didn't take long for others to create new window managers with smoother performance and improved interfaces. The first of these new window managers was FVWM, a virtual window manager. FVWM sported a more modern look than TWM or OLVWM. But we didn't yet have a desktop.

![FVWM on SLS 1.05][4]

FVWM on SLS 1.05 showing xterm and a file manager

To modern eyes, TWM and FVWM may look pretty plain. But it's important to remember what other graphical environments looked like at the time. The then-current version of Windows looked rather simple. Windows versions 1 through 3 used a plain launcher called the Program Manager.

![Windows 3.11][5]

Windows 3.11 showing the Program Manager and the Notepad editor

In August 1995, Microsoft released Windows 95 and changed the modern PC desktop landscape. Certainly, I was impressed. I thought Windows 3.x was ungainly and ugly, but Windows 95 was smooth and pretty. More importantly, Windows 95 was what we now consider a _desktop_. The new desktop metaphor was a huge step forward. You could put icons on the desktop—and in fact, Windows 95 presented two default desktop icons, for My Computer (to open a file manager) and the Recycle Bin (where you put files to be deleted later).

But more importantly, the Windows 95 desktop meant _integration_. The Program Manager was gone, replaced by a Taskbar at the bottom of the screen that let you launch new programs using a simpler Start menu. The Taskbar was multifunctional and also showed your running programs via a series of buttons and a dock showing the time, speaker volume, and other simple controls. You could right-click on any object on the new desktop, and Windows 95 would present you with a context-sensitive menu with actions you could perform.

![Windows 95][6]

Windows 95 showing the Notepad editor

The Windows 95 interface was slick and much easier to use than previous versions of Windows—and even the Linux window managers of the day. Not to be outdone, Linux developers created a new version of FVWM that mimicked the Windows 95 interface. Called FVWM95, the new window manager still wasn't a desktop, but it looked very nice. The new taskbar let you start new X programs using the Start menu. The taskbar also showed your running programs using buttons similar to Windows 95's.

![FVWM95 on Red Hat Linux 5.2][7]

FVWM95 on Red Hat Linux 5.2 showing xterm and a quick-access program launcher with icons for xterm, the file manager, and other programs

While FVWM95 and other window managers were improving, the core problem remained: Linux didn't really have a desktop. It had a collection of window managers, and that was about it. Linux applications that used a graphical user interface (GUI, pretty much meaning they were X applications) all looked different and worked differently. You couldn't copy and paste from one application to another, except for the simple text-only copy/paste provided by the X Window System. What Linux really needed was a complete redo of its GUI to create the first desktop.

### The first Linux desktop

In 1996, Matthias Ettrich was troubled by the inconsistency of Linux applications under X. He wanted to make the graphical environment easy to use. And more importantly, he wanted to make everything _integrated_—like an actual desktop.

Matthias started work on the K Desktop Environment. That's K for "Kool." But the name KDE was also meant to be a play on the Common Desktop Environment (CDE) that was the standard in the "Big Unix" world—although by 1996, CDE was looking pretty dated. CDE was based on the Motif widget set, which is the same design that FVWM mimicked. Finalized in July 1998, KDE 1.0 was a definite improvement over plain window managers like FVWM95.

![KDE 1.0][8]

K Desktop Environment (KDE) version 1.0

Image credit: Paul Brown / KDE

KDE was a big step forward for Linux. Finally, Linux had a true desktop with application integration and more modern desktop icons. KDE's design was not dissimilar from Windows 95's. You had a kind-of taskbar along the bottom of the screen that provided the equivalent of Windows 95's Start menu as well as several application shortcuts. KDE also supported virtual desktops, which were cleverly labeled One, Two, Three, and Four. Running applications were represented via buttons in a separate taskbar at the top of the screen.

But not everyone was happy with KDE. To abstract the GUI from the system, KDE used Trolltech's Qt toolkit library. Unfortunately, Qt was not distributed under a free software license. Trolltech allowed Qt to be used at no charge in free software applications but charged a fee to use it in commercial or proprietary applications. And that dichotomy is not aligned with free software. This caused problems for Linux distributions: Should they include KDE? Or default to an older but free software graphical user interface like FVWM?

In response, Miguel de Icaza and Federico Mena started work in 1997 on a new Linux desktop. The new project was dubbed GNOME, for GNU Network Object Model Environment. GNOME aimed to be completely free software and used a different toolkit, GTK, which came from the GIMP image editor. GTK literally stood for GIMP Tool Kit. When GNOME 1.0 was finally released in 1999, Linux had another modern desktop environment.

![GNOME 1.0][9]

GNOME version 1.0

Image credit: GNOME Documentation Project

While it was great to have two desktop environments for Linux, the "KDE versus GNOME" rivalry continued for some time. By 1999, Trolltech re-released the Qt library under a new public license, the Q Public License (QPL). But the new license carried its own baggage—the QPL only applied to Qt's use in open source software projects, not commercial projects. Thus the Free Software Foundation deemed the QPL [not compatible][10] with the GNU General Public License (GNU GPL). This licensing issue would remain until Trolltech re-re-released the Qt library under the GNU GPL version 2 in 2000.

### Development over time

The Linux desktop continued to mature. KDE and GNOME settled into a friendly competition that pushed both to add new features and to exchange ideas and concepts. By 2004, both GNOME and KDE had made significant strides, yet brought only incremental changes to the user interface.

KDE 2 and 3 continued to rely on a taskbar concept at the bottom of the screen but incorporated the buttons for running applications. One of KDE's most visible changes was the addition of the Konqueror browser, which first appeared in KDE 2.

![KDE 2.2.2 \(2001\) showing the Konqueror browser][11]

KDE 2.2.2 (2001) showing the Konqueror browser

Image credit: Paul Brown / KDE

![KDE 3.2.2][12]

KDE 3.2.2 (2004) on Fedora Core 2 showing the Konqueror file manager (using a Fedora Core 2 theme)

GNOME 2 also used a taskbar concept but split the bar into two: a taskbar at the top of the screen to launch applications and respond to desktop alerts, and a taskbar at the bottom of the screen to show running applications. Privately, I referred to the two taskbars as "things you can do" (top) and "things you are doing" (bottom). In addition to the streamlined user interface, GNOME also added an updated file manager called Nautilus, developed by Eazel.

![GNOME 2.6.0][13]

GNOME 2.6.0 (2004) on Fedora Core 2 showing the Nautilus file manager (using a Fedora Core 2 theme)

Over time, KDE and GNOME have taken different paths. Both provide a feature-rich, robust, and modern desktop environment—but with different user interface goals. In 2011, there was a major deviation between how GNOME and KDE approached the desktop interface. KDE 4.6 (January 2011) and KDE 4.7 (July 2011) provided a more traditional desktop metaphor while continuing to rely on the taskbar concept familiar to many users. Of course, KDE saw lots of changes under the hood, but the familiar look and feel remained.

![KDE 4.6][14]

KDE 4.6 showing the Gwenview image viewer

Image credit: KDE

In 2011, GNOME completely changed gears with a new desktop concept. GNOME 3 aimed to create a simpler, more streamlined desktop experience, allowing users to focus on what they were working on. The taskbar disappeared, replaced by a black status bar at the top of the screen that included volume and network controls, displayed the time and battery status, and allowed users to launch new programs via a redesigned menu.

The menu was the most dramatic change. Clicking the Activities menu or moving the mouse into the Activities "hot corner" showed all open applications as separate windows. Users could also click an Applications tab from the Overview to start a new program. The Overview also provided an integrated search function.

![GNOME 3.0][15]

GNOME 3.0 showing the GNOME Pictures application

Image credit: GNOME

![GNOME 3.0][16]

GNOME 3.0 showing the Activities Overview

Image credit: GNOME

### Your choice of desktop

Having two desktops for Linux means users have great choice. Some prefer KDE and others like GNOME. That's fine. Pick the desktop that best suits you.

To be sure, both KDE and GNOME have fans and detractors. For example, GNOME received a fair bit of criticism for dropping the taskbar in favor of the Activities Overview. Perhaps the most well-known critic was Linus Torvalds, who [loudly denounced and abandoned][17] the new GNOME as an "unholy mess" in 2011—before [moving back][18] to GNOME two years later.

Others have made similar criticisms of GNOME 3, to the point that some developers forked the GNOME 2 source code to create the MATE desktop. MATE (which stands for MATE Advanced Traditional Environment) continues the traditional taskbar interface from GNOME 2.

Regardless, there's no doubt that the two most popular Linux desktops today are KDE and GNOME. Their current versions are both very mature and packed with features. Both KDE 5.16 (2019) and GNOME 3.32 (2019) try to simplify and streamline the Linux desktop experience—but in different ways. GNOME 3.32 continues to aim for a minimal appearance, removing all distracting user interface elements so users can focus on their applications and work. KDE 5.16 takes a more familiar approach with the taskbar but has added other visual improvements and flair, especially around improved widget handling and icons.

![KDE 5.16 Plasma][19]

KDE 5.16 Plasma

Image credit: KDE

![GNOME 3.32][20]

GNOME 3.32

Image credit: GNOME

At the same time, you don't completely lose out on compatibility. Every major Linux distribution provides compatibility libraries, so you can run applications from, say, KDE while running GNOME. This is immensely useful when an application you really want to use is written for the other desktop environment—not a problem; you can run KDE applications on GNOME and vice versa.

I don't see this changing anytime soon. And I think that's a good thing. Healthy competition between KDE and GNOME has allowed developers in both camps to push the envelope. Whether you use KDE or GNOME, you have a modern desktop with great integration. And above all, this means Linux has the best feature in free software: choice.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-linux-desktop-grown

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/1980s-computer-yearbook.png?itok=eGOYEKK- (Person typing on a 1980's computer)
[2]: https://opensource.com/sites/default/files/uploads/twm-sls105.png (TWM on SLS 1.05)
[3]: https://opensource.com/sites/default/files/uploads/olvwm-sls105.png (OLVWM on SLS 1.05)
[4]: https://opensource.com/sites/default/files/uploads/fvwm-sls105.png (FVWM on SLS 1.05)
[5]: https://opensource.com/sites/default/files/uploads/win311.png (Windows 3.11)
[6]: https://opensource.com/sites/default/files/uploads/win95.png (Windows 95)
[7]: https://opensource.com/sites/default/files/uploads/fvwm95-rh52.png (FVWM95 on Red Hat Linux 5.2)
[8]: https://opensource.com/sites/default/files/uploads/kde1.png (KDE 1.0)
[9]: https://opensource.com/sites/default/files/uploads/gnome10.png (GNOME 1.0)
[10]: https://www.linuxtoday.com/developer/2000090500121OPLFKE
[11]: https://opensource.com/sites/default/files/uploads/kde_2.2.2.png (KDE 2.2.2 (2001) showing the Konqueror browser)
[12]: https://opensource.com/sites/default/files/uploads/kde322-fc2.png (KDE 3.2.2)
[13]: https://opensource.com/sites/default/files/uploads/gnome26-fc2.png (GNOME 2.6.0)
[14]: https://opensource.com/sites/default/files/uploads/kde46.png (KDE 4.6)
[15]: https://opensource.com/sites/default/files/uploads/gnome30.png (GNOME 3.0)
[16]: https://opensource.com/sites/default/files/uploads/gnome30-overview.png (GNOME 3.0)
[17]: https://www.theregister.co.uk/2011/08/05/linus_slams_gnome_three/
[18]: https://www.phoronix.com/scan.php?page=news_item&px=MTMxNjc
[19]: https://opensource.com/sites/default/files/uploads/kde516.png (KDE 5.16 Plasma)
[20]: https://opensource.com/sites/default/files/uploads/gnome332.png (GNOME 3.32)
@ -0,0 +1,136 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Things You Didn't Know About GNU Readline)
[#]: via: (https://twobithistory.org/2019/08/22/readline.html)
[#]: author: (Two-Bit History https://twobithistory.org)

Things You Didn't Know About GNU Readline
======

I sometimes think of my computer as a very large house. I visit this house every day and know most of the rooms on the ground floor, but there are bedrooms I’ve never been in, closets I haven’t opened, nooks and crannies that I’ve never explored. I feel compelled to learn more about my computer the same way anyone would feel compelled to see a room they had never visited in their own home.

GNU Readline is an unassuming little software library that I relied on for years without realizing that it was there. Tens of thousands of people probably use it every day without thinking about it. If you use the Bash shell, every time you auto-complete a filename, or move the cursor around within a single line of input text, or search through the history of your previous commands, you are using GNU Readline. When you do those same things while using the command-line interface to Postgres (`psql`), say, or the Ruby REPL (`irb`), you are again using GNU Readline. Lots of software depends on the GNU Readline library to implement functionality that users expect, but the functionality is so auxiliary and unobtrusive that I imagine few people stop to wonder where it comes from.

GNU Readline was originally created in the 1980s by the Free Software Foundation. Today, it is an important if invisible part of everyone’s computing infrastructure, maintained by a single volunteer.

### Feature Replete

The GNU Readline library exists primarily to augment any command-line interface with a common set of keystrokes that allow you to move around within and edit a single line of input. If you press `Ctrl-A` at a Bash prompt, for example, that will jump your cursor to the very beginning of the line, while pressing `Ctrl-E` will jump it to the end. Another useful command is `Ctrl-U`, which will delete everything in the line before the cursor.

For an embarrassingly long time, I moved around on the command line by repeatedly tapping arrow keys. For some reason, I never imagined that there was a faster way to do it. Of course, no programmer familiar with a text editor like Vim or Emacs would deign to punch arrow keys for long, so something like Readline was bound to be created. Using Readline, you can do much more than just jump around—you can edit your single line of text as if you were using a text editor. There are commands to delete words, transpose words, upcase words, copy and paste characters, etc. In fact, most of Readline’s keystrokes/shortcuts are based on Emacs. Readline is essentially Emacs for a single line of text. You can even record and replay macros.

I have never used Emacs, so I find it hard to remember what all the different Readline commands are. But one thing about Readline that is really neat is that you can switch to using a Vim-based mode instead. To do this for Bash, you can use the `set` builtin. The following will tell Readline to use Vim-style commands for the current shell:

```
$ set -o vi
```

With this option enabled, you can delete words using `dw` and so on. The equivalent to `Ctrl-U` in the Emacs mode would be `d0`.
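
If you decide you like vi mode, you don’t have to re-run `set -o vi` in every shell. A one-line sketch: the equivalent Readline variable, set in `~/.inputrc`, turns on vi-style editing in every program that uses Readline, not just Bash:

```
set editing-mode vi
```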

I was excited to try this when I first learned about it, but I’ve found that it doesn’t work so well for me. I’m happy that this concession to Vim users exists, and you might have more luck with it than me, particularly if you haven’t already used Readline’s default command keystrokes. My problem is that, by the time I heard about the Vim-based interface, I had already learned several Readline keystrokes. Even with the Vim option enabled, I keep using the default keystrokes by mistake. Also, without some sort of indicator, Vim’s modal design is awkward here—it’s very easy to forget which mode you’re in. So I’m stuck at a local maximum, using Vim as my text editor but Emacs-style Readline commands. I suspect a lot of other people are in the same position.
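
Recent versions of Readline can at least mitigate the which-mode-am-I problem. If your Readline is new enough, the `show-mode-in-prompt` variable adds a small marker to the prompt that changes with the editing mode. A sketch for `~/.inputrc`:

```
set show-mode-in-prompt on
```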

If you feel, not unreasonably, that both Vim’s and Emacs’ keyboard command systems are bizarre and arcane, you can customize Readline’s key bindings and make them whatever you like. This is not hard to do. Readline reads a `~/.inputrc` file on startup that can be used to configure various options and key bindings. One thing I’ve done is reconfigure `Ctrl-K`. Normally it deletes from the cursor to the end of the line, but I rarely do that. So I’ve instead bound it so that pressing `Ctrl-K` deletes the whole line, regardless of where the cursor is. I’ve done that by adding the following to `~/.inputrc`:

```
Control-k: kill-whole-line
```

Each Readline command (the documentation refers to them as _functions_) has a name that you can associate with a key sequence this way. If you edit `~/.inputrc` in Vim, it turns out that Vim knows the filetype and will help you by highlighting valid function names but not invalid ones!
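
You don’t have to dig through the manual to discover what names exist, either. In Bash, the `bind` builtin will dump Readline’s current state for you:

```
$ bind -l            # list the names of all Readline functions
$ bind -p | less     # show every function with its current binding, in inputrc syntax
```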
|
||||
|
||||
Another thing you can do with `~/.inputrc` is create canned macros by mapping key sequences to input strings. [The Readline manual][1] gives one example that I think is especially useful. I often find myself wanting to save the output of a program to a file, which means that I often append something like `> output.txt` to Bash commands. To save some time, you could make this a Readline macro:
|
||||
|
||||
```
|
||||
Control-o: "> output.txt"
|
||||
```
|
||||
|
||||
Now, whenever you press `Ctrl-O`, you’ll see that `> output.txt` gets added after your cursor on the command line. Neat!
|
||||
|
||||
But with macros you can do more than just create shortcuts for strings of text. The following entry in `~/.inputrc` means that, every time I press `Ctrl-J`, any text I already have on the line is surrounded by `$(` and `)`. The macro moves to the beginning of the line with `Ctrl-A`, adds `$(`, then moves to the end of the line with `Ctrl-E` and adds `)`:
|
||||
|
||||
```
|
||||
Control-j: "\C-a$(\C-e)"
|
||||
```
|
||||
|
||||
This might be useful if you often need the output of one command to use for another, such as in:
|
||||
|
||||
```
|
||||
$ cd $(brew --prefix)
|
||||
```
|
||||
|
||||
The `~/.inputrc` file also allows you to set different values for what the Readline manual calls _variables_. These enable or disable certain Readline behaviors. You can use these variables to change, for example, how Readline auto-completion works or how the Readline history search works. One variable I’d recommend turning on is the `revert-all-at-newline` variable, which by default is off. When the variable is off, if you pull a line from your command history using the reverse search feature, edit it, but then decide to search instead for another line, the edit you made is preserved in the history. I find this confusing because it leads to lines showing up in your Bash command history that you never actually ran. So add this to your `~/.inputrc`:

```
set revert-all-at-newline on
```
When you set options or key bindings using `~/.inputrc`, they apply wherever the Readline library is used. This includes Bash most obviously, but you’ll also get the benefit of your changes in other programs, like `irb` and `psql`, too! A Readline macro that inserts `SELECT * FROM` could be useful if you often use command-line interfaces to relational databases.
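As a sketch of that idea, here’s a hypothetical `~/.inputrc` macro; the choice of `Ctrl-T` is mine and shadows Readline’s default transpose-chars binding, so pick whatever key you can spare:

```
# Hypothetical: make Ctrl-T type a common SQL prefix in psql and friends
"\C-t": "SELECT * FROM "
```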
### Chet Ramey

GNU Readline is today maintained by Chet Ramey, a Senior Technology Architect at Case Western Reserve University. Ramey also maintains the Bash shell. Both projects were first authored by a Free Software Foundation employee named Brian Fox beginning in 1988. But Ramey has been the sole maintainer since around 1994.

Ramey told me via email that Readline, far from being an original idea, was created to implement functionality prescribed by the POSIX specification, which in the late 1980s had just been created. Many earlier shells, including the Korn shell and at least one version of the Unix System V shell, included line editing functionality. The 1988 version of the Korn shell (`ksh88`) provided both Emacs-style and Vi/Vim-style editing modes. As far as I can tell from [the manual page][2], the Korn shell would decide which mode you wanted to use by looking at the `VISUAL` and `EDITOR` environment variables, which is pretty neat. The parts of POSIX that specified shell functionality were closely modeled on `ksh88`, so GNU Bash was going to have to implement a similarly flexible line-editing system to stay compliant. Hence Readline.
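Bash still honors the same general idea, though through explicit shell options rather than by sniffing your editor variables. If you want to try Vi-style editing in Bash today, the standard toggles are:

```
$ set -o vi      # switch Readline to Vi-style editing
$ set -o emacs   # switch back to the default Emacs-style editing
```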
When Ramey first got involved in Bash development, Readline was a single source file in the Bash project directory. It was really just a part of Bash. Over time, the Readline file slowly moved toward becoming an independent project, though it was not until 1994 (with the 2.0 release of Readline) that Readline became a separate library entirely.

Readline is closely associated with Bash, and Ramey usually pairs Readline releases with Bash releases. But as I mentioned above, Readline is a library that can be used by any software implementing a command-line interface. And it’s really easy to use. This is a simple example, but here’s how you would use Readline in your own C program. The string argument to the `readline()` function is the prompt that you want Readline to display to the user:
```
#include <stdio.h>
#include <stdlib.h>
#include "readline/readline.h"

int main(int argc, char** argv)
{
    /* Readline handles all of the line editing and returns the
       finished line once the user submits it with Enter. */
    char* line = readline("my-rl-example> ");

    /* readline() returns NULL on EOF (e.g., if the user presses Ctrl-D). */
    if (line == NULL) {
        return 0;
    }

    printf("You entered: \"%s\"\n", line);

    /* The returned buffer is allocated by Readline; the caller frees it. */
    free(line);

    return 0;
}
```
Your program hands off control to Readline, which is responsible for getting a line of input from the user (in such a way that allows the user to do all the fancy line-editing things). Once the user has actually submitted the line, Readline returns it to you. I was able to compile the above by linking against the Readline library, which I apparently have somewhere in my library search path, by invoking the following:

```
$ gcc main.c -lreadline
```
The Readline API is much more extensive than that single function of course, and anyone using it can tweak all sorts of things about the library’s behavior. Library users can even add new functions that end users can configure via `~/.inputrc`, meaning that Readline is very easy to extend. But, as far as I can tell, even Bash ultimately calls the simple `readline()` function to get input just as in the example above, though there is a lot of configuration beforehand. (See [this line][3] in the source for GNU Bash, which seems to be where Bash hands off responsibility for getting input to Readline.)
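To give a flavor of that extensibility, here’s a small sketch of my own (not from Bash) that pre-configures Readline from C before prompting. `rl_bind_key()` and the `rl_insert` command are part of the documented Readline API; binding TAB to `rl_insert` is the manual’s own example of making TAB insert a literal tab instead of completing filenames:

```
#include <stdio.h>
#include <stdlib.h>
#include "readline/readline.h"

int main(void)
{
    /* Rebind TAB so it inserts itself rather than triggering completion. */
    rl_bind_key('\t', rl_insert);

    char* line = readline("no-completion> ");
    if (line != NULL) {
        printf("You entered: \"%s\"\n", line);
        free(line);
    }
    return 0;
}
```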
Ramey has now worked on Bash and Readline for well over a decade. He has never once been compensated for his work—he is and has always been a volunteer. Bash and Readline continue to be actively developed, though Ramey said that Readline changes much more slowly than Bash does. I asked Ramey what it was like being the sole maintainer of software that so many people use. He said that millions of people probably use Bash without realizing it (because every Apple device runs Bash), which makes him worry about how much disruption a breaking change might cause. But he’s slowly gotten used to the idea of all those people out there. He said that he continues to work on Bash and Readline because at this point he is deeply invested and because he simply likes to make useful software available to the world.

_You can find more information about Chet Ramey at [his website][4]._

_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][5] on Twitter or subscribe to the [RSS feed][6] to make sure you know when a new post is out._

_Previously on TwoBitHistory…_
> Please enjoy my long overdue new post, in which I use the story of the BBC Micro and the Computer Literacy Project as a springboard to complain about Codecademy.<https://t.co/PiWlKljDjK>
>
> — TwoBitHistory (@TwoBitHistory) [March 31, 2019][7]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2019/08/22/readline.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://tiswww.case.edu/php/chet/readline/readline.html
[2]: https://web.archive.org/web/20151105130220/http://www2.research.att.com/sw/download/man/man1/ksh88.html
[3]: https://github.com/bminor/bash/blob/9f597fd10993313262cab400bf3c46ffb3f6fd1e/parse.y#L1487
[4]: https://tiswww.case.edu/php/chet/
[5]: https://twitter.com/TwoBitHistory
[6]: https://twobithistory.org/feed.xml
[7]: https://twitter.com/TwoBitHistory/status/1112492084383092738?ref_src=twsrc%5Etfw
@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What piece of advice had the greatest impact on your career?)
[#]: via: (https://opensource.com/article/19/8/what-devops-principle-changed-your-career)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
What piece of advice had the greatest impact on your career?
======
See what practices, principles, and patterns have influenced DevOps leaders' careers, and share your own wisdom.

![Question and answer.][1]
I love learning the what, why, and how of new open source projects, especially when they gain popularity in the [DevOps][2] space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to discover, install, spin up, and explore.

That said, you don't have DevOps without principles. Some of these concepts are intuitive truths we have known from the start but needed a movement to help us adopt them. Others are quite different and help us acknowledge and grow beyond our [cognitive biases][3].

While not strictly DevOps, one principle that changed everything for me is [kanban][4]. The simple idea of work being visible and optimized for flow was radical for a chronic multi-tasker like me. To this day, I keep work in progress visible, and it's been a huge relief to not worry about losing a task along the way. Not only that, I no longer celebrate work in progress: I celebrate completed tasks.

To find out what things have influenced my colleagues, I asked members of the [Open Source DevOps Team][5] to share their thoughts on this question:

> **What is one DevOps concept (practice, principle, pattern) that changed your career?**

Here's what they had to say.

#### [Alex Bunardzic][6]
**Fail fast, fail early, fail as frequently as you possibly can.** Before I clued into this amazing concept, I was toiling miserably in vain under the traditional waterfall model. My career was a series of botched projects, all of them commencing with the "failure is not an option!" cheer. It is an extremely tiresome way to work, one that always results in inefficiency and in lurching from one frustration to the next.

Embracing the fast and furious flurries of failure was the best thing that happened to my career. Frustration got replaced by the feeling of soaring. That led to the wholesale adoption of [TDD][7] [test-driven development] practices and to the realization that TDD is NOT about testing, it is about DRIVING!
#### [Catherine Louis][8]

**Culture hacking.** I had no idea there was a name for the method I had (subversively) used to change a culture, but then I saw [Seb Paquet's "Ignite Montreal" video][9] and rejoiced that there were others out there.

#### [Clement Verna][10]

**Continuous improvement.** Until I was introduced to continuous improvement, I was not really looking at ways to improve in my job or in my career. Continuous improvement made me realize that it was up to me to challenge myself with learning new things and getting out of my comfort zone. That led me to start contributing to an open source project (Fedora) and then led me to work for Red Hat. So that definitely changed my career.

#### [Jason Hibbets][11]

It started with _**The Lean Startup**_ at [my first Code for America Summit][12]. In 2012, I distinctly remember a career-changing moment. Eric Ries, author of _The Lean Startup_ and Code for America board member, was on stage with Tim O'Reilly. They were talking about hacking on code and culture, and about failure as validated learning. My biggest takeaway was discovering _The Lean Startup_. I downloaded the book and read most of it on the plane ride home. It changed how I approach my work and how I lead my team.
The biggest change I made was to **incorporate feedback loops**. This was a critical difference in how I transformed my work style and my team. I shifted my team habits to making data-driven decisions and sharing information and insights to create those feedback loops. We hold weekly health-check meetings and constantly examine our processes and assumptions. In addition to that, we experiment with new ideas and evaluate how those experiments went. We'll conduct start, stop, and continue sessions to help us understand what to tackle next or what didn't work so we can move on.

#### [Willy-Peter Schaub][13]

During a two-month sabbatical in 2018, it dawned on me that the fear of failure had paralyzed my energy and passion for software engineering, a career I used to love. **Realizing that failure is not bad, but an enabler for innovation, collaboration, and continuous learning that fuels DevOps, was a key moment in my career.** Transparent collaboration, progressive exposure, hypothesis-driven development, test-driven development, and continuous delivery of value are some of the core practices that generate frequent opportunities to fail fast, inspect, and adapt the solution (and the career) we are working on.

### Your turn

There are so many ways DevOps can teach us without ever opening a terminal or user interface. So, I ask you the same question: **What DevOps concept made the most impact on your career?** Please share your thoughts in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/what-devops-principle-changed-your-career
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_HowToFish_520x292.png?itok=DHbdxv6H (Question and answer.)
[2]: https://opensource.com/resources/devops
[3]: https://commons.wikimedia.org/wiki/File:Cognitive_Bias_Codex_-_180%2B_biases,_designed_by_John_Manoogian_III_(jm3).jpg
[4]: https://en.wikipedia.org/wiki/Kanban
[5]: https://opensource.com/devops-team
[6]: https://opensource.com/users/alex-bunardzic
[7]: https://en.wikipedia.org/wiki/Test-driven_development
[8]: https://opensource.com/users/catherinelouis
[9]: https://www.youtube.com/watch?v=ojQT6U-gRAM
[10]: https://opensource.com/users/cverna
[11]: https://opensource.com/users/jhibbets
[12]: https://medium.com/@jhibbets/where-civic-tech-gets-inspired-rejuvenated-c77ae75af24b
[13]: https://opensource.com/users/wpschaub
@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 5 IoT networking security mistakes)
[#]: via: (https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Top 5 IoT networking security mistakes
======
IT supplier Brother International shares five of the most common internet-of-things security errors it sees among buyers of its printers and multi-function devices.

![Getty Images][1]
Even though [Brother International][2] is a supplier of many IT products, from [machine tools][3] to [head-mounted displays][4] to [industrial sewing machines][5], it’s best known for printers. And in today’s world, those printers are no longer stand-alone devices, but components of the internet of things.

That’s why I was interested in this list from Robert Burnett, Brother’s director, B2B product & solution – basically, the company’s point man for large customer implementations. Not surprisingly, Burnett focuses on IoT security mistakes related to printers and also shares Brother’s recommendations for dealing with the top five.

## #5: Not controlling access and authorization

“In the past,” Burnett says, “cost control was the driving force behind managing who can use a machine and when their jobs are released.” That’s still important, of course, but Burnett says security is quickly becoming the key reason to put management controls on print and scan devices. That’s true not just for large enterprises, he notes, but for businesses of all sizes.

[INSIDER: 5 ways to prepare for Internet of Things security threats][6]

## #4: Failure to update firmware regularly

Let’s face it, most IT professionals stay plenty busy keeping servers and other network infrastructure devices up to date and ensuring their infrastructure is as secure and efficient as possible. “In this day-to-day process,” Burnett says, “devices like printers are very often overlooked.” But out-of-date firmware could expose the infrastructure to new threats.

## #3: Inadequate device awareness

It’s critical, Burnett says, to properly understand who is using what, and the capabilities of all the connected devices in the fleet. Reviewing these devices using port scanning, protocol analysis and other detection techniques should be part of the overall security reviews of your network infrastructure. Too often, he warns, “the approach to print devices is ‘if it’s not broke, don’t fix it!’” But even devices that have been running reliably for years should be part of security reviews. That’s because older devices may not have the capability to offer stronger security settings or you may need to update their configuration to meet today’s greater security demands. This includes the monitoring/reporting capabilities of a device.

## #2: Inadequate user training
“Training your team on best practices for managing documents within the workflow must be part of a strong security plan,” Burnett says. The fact is, no matter how hard you work to secure IoT devices, “the human factor is often the weakest link in securing important and sensitive information within a business. Items as simple as leaving important documents on the printer for anyone to see, or scanning documents to the wrong destination by accident, can have a huge, negative impact on a business not just financially, but also to its IP, reputation, and cause compliance/regulation issues.”
## #1: Using default passwords
“Just because it’s easy doesn’t mean it’s not important!” Burnett says. Securing printer and multi-function devices from unauthorized admin access not only helps protect sensitive machine-configuration settings and report information, Burnett says, but also prevents access to personal information, such as user names that could be used in phishing attacks, for example.
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_security_tablet_conference_digital-100787102-large.jpg
[2]: https://www.brother-usa.com/business
[3]: https://www.brother-usa.com/machinetool/default?src=default
[4]: https://www.brother-usa.com/business/hmd#sort=%40productcatalogsku%20ascending
[5]: https://www.brother-usa.com/business/industrial-sewing
[6]: https://www.networkworld.com/article/2855207/internet-of-things/5-ways-to-prepare-for-internet-of-things-security-threats.html#tk.nww-infsb
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Software-defined perimeter – the essence of trust)
[#]: via: (https://www.networkworld.com/article/3433922/software-defined-perimeter-the-essence-of-trust.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Software-defined perimeter – the essence of trust
======
Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources.

![Thinkstock][1]
Actions speak louder than words. Reliable actions build lasting trust, in contrast to unreliable words. Imagine that you had a house with a guarded wall. You would feel safe in the house, correct? Now, what if that wall were dismantled? You might start to feel your security is under threat. Anyone could have easy access to your house.

It is the same with traditional security products: it is as if anyone is allowed to leave their house, knock at your door and pick your locks. Wouldn’t it be more secure if only certain individuals whom you fully trust could even see your house? This is the essence of zero-trust networking and is a core concept discussed in my recent course on [SDP (software-defined perimeter)][2].

Within a zero-trust environment, there is no implicit trust. Thus, trust must be sourced from somewhere else in order to gain access to protected resources. It is only after sufficient trust has been established and the necessary controls are passed that access is granted, providing a path to the requested resource. The access path to the resource is designed differently, depending on whether it’s a client-initiated or service-initiated software-defined perimeter solution.

**[ Don’t miss [customer reviews of top remote access tools][3] and see [the most powerful IoT companies][4] . | Get daily insights by [signing up for Network World newsletters][5]. ]**

A per-application tunnel is created through the one-to-one mapping process. This effectively establishes a “network segment of one” that is unique for each user and application, in contrast to the broad-level access that traditional architectures exhibit. Typically, broad network access results in overly permissive access with an attack surface that is simply too large.

One of the key elements of the software-defined perimeter and zero trust is that both user and device validation, along with application access validation, are based on the user’s identity, not the IP address. Nowadays, anchoring anything to IP addresses is shaky, as they give us nothing solid to hang policy from.
### The traditional chain of events

Traditionally, a device would look at the IP address of the requesting entity and ask for a password. This opens the door to bad actors, who can communicate from any IP address and insert themselves between you and the trusted remote device if they please.

Today, the IP address is no longer sufficient to define the level of trust for a user. IP addresses lack the user knowledge needed to assign and validate trust. No contextual information is taken into consideration. This is often referred to as the IP address conundrum. Therefore, as an anchor for network location and policy, we need to look beyond ports and IP addresses.

Network policies have traditionally focused on what systems can communicate with each other. Permit or deny is a very binary framework to use in today's dynamic environment. It has resulted in policy that is either too rigidly defined or too loosely defined. This is where the software-defined perimeter finds the middle ground.

Essentially, with SDP, we examine user validation, device validation, and application access at every stage, for every request in a zero-trust network. Trust in a network is constantly evolving, based on the previous and current actions of the user. This is what’s known as a variable trust level, and it ensures a strong trust chain.
### What is Software-Defined Perimeter (SDP)?

According to a report from IDG and Pulse Secure, [SDP is gaining momentum][6]. It controls access to resources based on identity; this is often referred to as the “black out”, meaning that the applications and sensitive data cannot be detected by unauthorized users. As mentioned earlier, it is designed around the user identity, not the IP address.

The whole idea of SDP is to isolate the user from the application. This can be done in a number of ways, client-initiated and service-initiated.

### SDP client-initiated

One such way is to put the requesting client on the network while leaving the network infrastructure and applications dark. This can be done with a lightweight security protocol, such as single packet authorization (SPA). This allows the SDP to receive a valid SPA packet before turning the lights on. In this case, the client has to initiate the connection to trigger the authentication.

### What is a SPA?

A [SPA packet][7] _[Disclaimer: the author is employed by Network Insight]_ is a UDP packet that is encrypted, cryptographically signed and cannot be faked. No two SPA packets are ever the same; therefore, replay attacks are impossible. SPA is intentionally designed to have multiple layers of security that are built on each other. These layers of security sit on top of whatever port SPA is protecting.
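To make that concrete, here’s a hedged sketch using fwknop, a well-known open source SPA implementation that the article itself doesn’t mention; the flags are from its documentation as I remember them, so verify against the man page before relying on this:

```
# Hypothetical fwknop invocation: ask the SPA gateway to briefly open tcp/22.
# -A: the access requested, -R: resolve this client's external IP, -D: the SPA server
$ fwknop -A tcp/22 -R -D sdp-gateway.example.com
```

Until a packet like this validates, the protected port stays invisible to scans.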
SPA plays a significant role by hiding both the applications and the SDP infrastructure components (including the SDP controller and gateway) until the client sends a valid SPA packet. Until this happens, everything remains dark, hidden from unauthorized users and devices.

### SDP service-initiated

We also have a service-initiated approach that uses an SDP broker. The SDP broker may come in the form of an appliance or a cloud service. One example could be to have a mutual TLS tunnel from an SDP device (located wherever the application is) to the SDP cloud, then another mutual TLS tunnel from the initiating client to the same SDP cloud. This enables both the client and the SDP device to connect outbound on TCP port 443 to the SDP cloud.

With the service-initiated model, the client does not connect to the location of the application. Basically, the location of the application is irrelevant to the access path. This is completely different to how application access is built today. Instead, with SDP, a trust chain is built between the two sides of the service.
### Removing the network

This model of connectivity inverts our existing approach. Fundamentally, the user is never given access to the network until fully authenticated. Applications can literally exist anywhere, as you don’t need an inbound path. This offers additional protection against application DDoS attacks, plus the ability to have overlapping IP addresses, which is useful for mergers and acquisitions.

Under the traditional security setup, in order to get access to an application, you had to connect the user to a network. Further, if the users were external and not on the local network, you needed to create a virtual network. An inbound port was needed to expose the application to the public internet. However, this open port was visible to anyone. As you can imagine, from a security standpoint, requiring network connectivity to access an application is not a good idea.

The traditional chain of events is to connect first and then authenticate. This allows users, good or bad, to gain access to your network and services. Only then do we hand them off to the service to determine whether the user is permitted access.

Access before authentication allows both good and bad users to gain access to all the services. Evidently, “first connect, then authenticate” has many drawbacks, which is why we must secure user access with stronger levels of authentication and authorization.
### A robust way to validate

The first stage is authentication, which ensures that a user is who he or she claims to be. The second stage is authorization, which allows access to resources based on the identity. Here, the SDP policies can define a set of network services (such as network geolocation and encryption for communications) that a given user or group of users is authorized to access, and under what circumstances.

### The authentication

The policies governing authentication may require single or multiple factors. SDP uses a combination of user authentication and device authentication for each connection between the initiating and accepting hosts.

The main part of the authentication stage is the “authenticators.” An authenticator could be as simple as something you know, such as a password, or potentially more complex, such as something you are: a fingerprint or other biometric data.

The authentication process covers two things: user authentication and device authentication. Device authentication can be done via protocols and techniques such as single packet authorization (SPA), host-specific firewalls, mutual transport layer security, device fingerprinting, software verification, and geolocation.

User authentication can be provided by a trusted browser, SAML, or authentication against an identity provider (IDP).
### Authenticating the user

Firstly, the user needs to be authenticated. The SDP policy plane has a context that needs to be validated before access is granted. This will most likely be not just authentication but full authorization.

The users will have a set of credentials on their device as a result of pre-authentication and pre-authorization. What is recommended here is to use SAML-based services. With SAML, you get attributes back, which let you ensure that if a user is part of a given group or access path, the user is allowed access to the application.

The identity can also be validated against an identity provider. Traditional VPNs would use an authorization database that held user credentials, but this was separate from the centralized trust. The centralized trust would be provided by some kind of LDAP environment, such as Active Directory (AD).

The considerable benefit of using an identity provider is that it acts as a gateway for users to authenticate against the same centralized trust. VPNs or other gateway services, by contrast, require a different database with a different management process. This creates overhead when adding or deleting users across different databases.

Having everything controlled by one central database provider is the key to managing a single set of trust controls. Essentially, in SDP, a user validates against an externally facing IDP, and then the user is authenticated against the identity store. The identity store can be certificate-based and may even include multi-factor authentication (MFA).

With services such as MS Azure AD and Okta, you can go deeper and include context-based authentication at the IDP. The IDP can then take into consideration geolocation details and other parameters that are not usually available with standard authentication.
### Geo-location

Geo-location is a useful feature. It is the identification or estimation of a user’s real-world geographic location, which can serve as a source of information to help make access decisions in an SDP.

Here, an ideal approach for the SDP is to compare the user’s geolocation with the connection attempts to detect credential theft. If the user does not meet the criteria in the IDP, the trust level is lowered, and the user will be given a subset of access instead of the full access path.

### Authenticating the device

Largely, the end goal of device validation is to make sure that the device is trusted. This stage occurs only after the user is validated. In some SDP solutions, device validation is carried out by client software running on the device.

The process is that users authorize themselves against the IDP, and all the tests are run against the IDP. Once the user passes these validation steps and submits an application request to the SDP, the SDP goes to another layer of validation and tests things like corporate certification, disk encryption, etc. The SDP will also verify whether there is a local firewall running. These tests are done under the enforcement of the policy.

The device validation should be done per application and per policy rule. The device test is not a one-time test; it tests the device in relation to the access request at that particular time. SAML, on the other hand, occurs only once, i.e., when a user logs in.
**This article is published as part of the IDG Contributor Network. [Want to Join?][8]**

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3433922/software-defined-perimeter-the-essence-of-trust.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/millennials_trust-100625376-large.jpg
[2]: https://www.pluralsight.com/courses/sdp-leveraging-zero-trust-create-network-security-architecture
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://www.pulsesecure.net/resource/2019-state-of-enterprise-secure-access/
[7]: https://network-insight.net/2019/06/zero-trust-single-packet-authorization-passive-authorization/
[8]: https://www.networkworld.com/contributor-network/signup.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,145 +0,0 @@
4 Ways to Customize Xfce and Give it a Modern Look
======
**Brief: Xfce is a great lightweight desktop environment with one drawback. It looks sort of old. But you don’t have to stick with the default looks. Let’s see various ways you can customize Xfce to give it a modern and beautiful look.**

![Customize Xfce desktop environment][1]
To start with, Xfce is one of the most [popular desktop environments][2]. Being a lightweight DE, you can run Xfce on very low resources, and it still works great. This is one of the reasons why many [lightweight Linux distributions][3] use Xfce by default.

Some people prefer it even on high-end devices, citing its simplicity, ease of use and non-resource-hungry nature as the main reasons.

[Xfce][4] is in itself minimal and provides just what you need. The one thing that can bother you is its look and feel, which feels old. However, you can easily customize Xfce to look modern and beautiful without reaching the point where a Unity/GNOME session eats up system resources.
### 4 ways to Customize Xfce desktop

Let’s see some of the ways by which we can improve the look and feel of your Xfce desktop environment.

The default Xfce desktop environment looks something like this:

![Xfce default screen][5]

As you can see, the default Xfce desktop is kinda boring. We will use some themes, icon packs and a different dock to make it look fresh and more appealing.
#### 1. Change themes in Xfce

The first thing we will do is pick up a theme from [xfce-look.org][6]. My favorite Xfce theme is [XFCE-D-PRO][7].

You can download the theme from [here][8] and extract it somewhere.
You can copy this extracted folder to the **~/.themes** directory in your home directory. If the directory is not present by default, you can create it; the same goes for icons, which need a **~/.icons** directory in the home directory.
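A quick sketch of those steps on the command line, assuming the theme archive mentioned above (adjust the file name to whatever you downloaded):

```
# Create the directories where GTK looks for user themes and icons
mkdir -p ~/.themes ~/.icons

# Extract a downloaded theme archive into ~/.themes
tar xf XFCE-D-PRO-1.6.tar.xz -C ~/.themes
```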
Open **Settings > Appearance > Style** to select the theme, then log out and log back in to see the change. Adwaita-dark, one of the defaults, is also a nice one.

![Appearance Xfce][9]

You can use any [good GTK theme][10] on Xfce.

#### 2. Change icons in Xfce
Xfce-look.org also provides icon themes which you can download, extract and put in your home directory under the **.icons** directory. Once you have added the icon theme in the .icons directory, go to **Settings > Appearance > Icons** to select that icon theme.

![Moka icon theme][11]

I have installed the [Moka icon set][12], which looks awesome.

![Moka theme][13]

You can also refer to our list of [awesome icon themes][14].
##### **Optional: Installing themes through Synaptic**

If you want to avoid manually searching for and copying the files, install the Synaptic package manager on your system. You can look up good themes and icon sets on the web, then search for and install them using Synaptic.

```
sudo apt-get install synaptic
```
**Searching and installing theme/icons through Synaptic**

Open Synaptic and click on **Search**. Enter your desired theme, and it will display the list of matching items. Mark all the additional required changes and click on **Apply**. This will download the theme and then install it.

![Arc Theme][15]

Once done, you can open the **Appearance** option to select the desired theme.

In my opinion, this is not the best way to install themes in Xfce.
#### 3. Change wallpapers in Xfce

Again, the default Xfce wallpaper is not bad at all. But you can change the wallpaper to something that matches your icons and themes.

To change the wallpaper in Xfce, right click on the desktop and click on **Desktop Settings**. Choose **Background** from the folder option, and pick any one of the default backgrounds or a custom one from your own collection.

![Changing desktop wallpapers][16]
#### 4. Change the dock in Xfce

The default dock is nice and pretty much does what it is meant for. But again, it looks a bit boring.

![Docky][17]

However, if you want your dock to be better and to have a few more customization options, you can install another dock.

Plank is one of the simplest and most lightweight docks, and it is highly configurable.

To install Plank use the command below:

`sudo apt-get install plank`
If Plank is not available in the default repository, you can install it from this PPA:

```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```

Before you use Plank, you should remove the default dock by right-clicking on it and, under Panel Settings, clicking on Delete.
Once done, go to **Accessories > Plank** to launch the Plank dock.

![Plank][18]

Plank picks up icons from the icon theme you are using. So if you change the icon theme, you’ll see the change reflected in the dock as well.

### Wrapping Up

XFCE is lightweight, fast and highly customizable. If you are limited on system resources, it serves you well, and you can easily customize it to look better. Here’s how my screen looks after applying these steps.

![XFCE desktop][19]

This is just with half an hour of effort. You can make it look much better with different theme/icon customizations. Feel free to share your customized XFCE desktop screen in the comments, along with the combination of themes and icons you are using.
--------------------------------------------------------------------------------
via: https://itsfoss.com/customize-xfce/

作者:[Ambarish Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/ambarish/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/xfce-customization.jpeg
[2]:https://itsfoss.com/best-linux-desktop-environments/
[3]:https://itsfoss.com/lightweight-linux-beginners/
[4]:https://xfce.org/
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/1-1-800x410.jpg
[6]:http://xfce-look.org
[7]:https://www.xfce-look.org/p/1207818/XFCE-D-PRO
[8]:https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/4.jpg
[10]:https://itsfoss.com/best-gtk-themes/
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/6.jpg
[12]:https://snwh.org/moka
[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/11-800x547.jpg
[14]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/5-800x531.jpg
[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/7-800x546.jpg
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/8.jpg
[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/9.jpg
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/10-800x447.jpg
@ -1,130 +0,0 @@
Podman: A more secure way to run containers
======
Podman uses a traditional fork/exec model (vs. a client/server model) for running containers.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq)

Before I get into the main topic of this article, [Podman][1] and containers, I need to get a little technical about the Linux audit feature.

### What is audit?

The Linux kernel has an interesting security feature called **audit**. It allows administrators to watch for security events on a system and have them logged to the audit.log, which can be stored locally or remotely on another machine to prevent a hacker from trying to cover his tracks.
The **/etc/shadow** file is a common security file to watch, since adding a record to it could allow an attacker to get return access to the system. Administrators want to know if any process modified the file. You can do this by executing the command:

```
# auditctl -w /etc/shadow
```
Now let's see what happens if I modify the /etc/shadow file:

```
# touch /etc/shadow
# ausearch -f /etc/shadow -i -ts recent

type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow
type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
```
There's a lot of information in the audit record, but notice that it recorded that root modified the /etc/shadow file and that the audit UID (**auid**) of the owner of the process was **dwalsh**.

How did the kernel know that?

#### Tracking the login UID

There is a field called **loginuid**, stored in **/proc/self/loginuid**, that is part of the proc struct of every process on the system. This field can be set only once; after it is set, the kernel will not allow any process to reset it.
When I log into the system, the login program sets the loginuid field for my login process.

My UID, dwalsh, is 3267.

```
$ cat /proc/self/loginuid
3267
```

Now, even if I become root, my login UID stays the same.

```
$ sudo cat /proc/self/loginuid
3267
```

Note that every process that's forked and executed from the initial login process automatically inherits the loginuid. This is how the kernel knew that the person who logged in was dwalsh.
### Containers

Now let's look at containers.

```
sudo podman run fedora cat /proc/self/loginuid
3267
```

Even the container process retains my loginuid. Now let's try with Docker.

```
sudo docker run fedora cat /proc/self/loginuid
4294967295
```
### Why the difference?

Podman uses a traditional fork/exec model for the container, so the container process is an offspring of the Podman process. Docker uses a client/server model. The **docker** command I executed is the Docker client tool, and it communicates with the Docker daemon via a client/server operation. Then the Docker daemon creates the container and handles communications of stdin/stdout back to the Docker client tool.

The default loginuid of processes (before their loginuid is set) is 4294967295. Since the container is an offspring of the Docker daemon and the Docker daemon is a child of the init system, we see that systemd, the Docker daemon, and the container processes all have the same loginuid, 4294967295, which audit refers to as the unset audit UID.

```
cat /proc/1/loginuid
4294967295
```
### How can this be abused?

Let's look at what would happen if a container process launched by Docker modifies the /etc/shadow file.

```
$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i

type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null)
```

In the Docker case, the auid is unset (4294967295); this means the security officer might know that a process modified the /etc/shadow file but the identity was lost.

If that attacker then removed the Docker container, there would be no trace on the system of who modified the /etc/shadow file.

Now let's look at the exact same scenario with Podman.

```
$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i

type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null)
```

Everything is recorded correctly with Podman since it uses traditional fork/exec.
This was just a simple example of watching the /etc/shadow file, but the auditing system is very powerful for watching what processes do on a system. Using a fork/exec container runtime for launching containers (instead of a client/server container runtime) allows you to maintain better security through audit logging.
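As a hedged aside (my example, not from the article), audit watch rules can also carry a key, which makes later searches easier than matching on file paths; `-p wa` watches writes and attribute changes, and `-k` tags matching records:

```
# auditctl -w /etc/shadow -p wa -k shadow-change
# ausearch -k shadow-change -i
```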
### Final thoughts

There are many other nice features about the fork/exec model versus the client/server model when launching containers. For example, systemd features include:

  * **SD_NOTIFY:** If you put a Podman command into a systemd unit file, the container process can return notice up the stack through Podman that the service is ready to receive tasks. This is something that can't be done in client/server mode.
  * **Socket activation:** You can pass down connected sockets from systemd to Podman and onto the container process to use them. This is impossible in the client/server model.
The nicest feature, in my opinion, is **running Podman and containers as a non-root user**. This means you never have to give a user root privileges on the host, while in the client/server model (like Docker employs), you must open a socket to a privileged daemon running as root to launch the containers. There you are at the mercy of the security mechanisms implemented in the daemon versus the security mechanisms implemented in the host operating system—a dangerous proposition.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/podman-more-secure-way-run-containers

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guided tour of Linux file system types)
[#]: via: (https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
A guided tour of Linux file system types
======
Linux file systems have evolved over the years, and here's a look at file system types

![Andreas Lehner / Flickr \(CC BY 2.0\)][1]
While it may not be obvious to the casual user, Linux file systems have evolved significantly over the last decade or so to make them more resistant to corruption and performance problems.

Most Linux systems today use a file system type called **ext4**. The “ext” part stands for “extended” and the 4 indicates that this is the 4th generation of this file system type. Features added over time include the ability to provide increasingly larger file systems (currently as large as 1,000,000 TiB) and much larger files (up to 16 TiB), more resistance to system crashes, and less fragmentation (scattering single files as chunks in multiple locations), which improves performance.

The **ext4** file system type also came with other improvements to performance, scalability and capacity. Metadata and journal checksums were implemented for reliability. Timestamps now track changes down to nanoseconds for better file time-stamping (e.g., file creation and last updates). And, with two additional bits in the timestamp field, the year 2038 problem (when the digitally stored date/time fields will roll over from maximum to zero) has been put off for more than 400 years (to 2446).
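If you're curious which of these features a given ext4 file system actually has enabled, `tune2fs` can report them. This is a hedged example; the device path needs to match one of your own file systems:

```
$ sudo tune2fs -l /dev/sda1 | grep -i features
```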
### File system types

To determine the type of file system on a Linux system, use the **df** command. The **T** option in the command shown below provides the file system type. The **h** makes the disk sizes “human-readable”; in other words, adjusting the reported units (such as M and G) in a way that makes the most sense to the people reading them.

```
$ df -hT | head -10
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  2.9G     0  2.9G   0% /dev
tmpfs          tmpfs     596M  1.5M  595M   1% /run
/dev/sda1      ext4      110G   50G   55G  48% /
/dev/sdb2      ext4      457G  642M  434G   1% /apps
tmpfs          tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs          tmpfs     5.0M  4.0K  5.0M   1% /run/lock
tmpfs          tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/loop0     squashfs   89M   89M     0 100% /snap/core/7270
/dev/loop2     squashfs  142M  142M     0 100% /snap/hexchat/42
```
Notice that the **/** (root) and **/apps** file systems are both **ext4** file systems, while **/dev** is a **devtmpfs** file system – one with automated device nodes populated by the kernel. Some of the other file systems shown are **tmpfs** – temporary file systems that reside in memory and/or swap partitions – and **squashfs** – read-only compressed file systems that are used for snap packages.
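One hedged way to list every mount of a particular type is the `findmnt` command from util-linux. For example, to see all of the tmpfs mounts shown above in one shot:

```
$ findmnt -t tmpfs
```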
There's also the **proc** file system, which stores information on running processes.

```
$ df -T /proc
Filesystem     Type 1K-blocks  Used Available Use% Mounted on
proc           proc         0     0         0    - /proc
```
There are a number of other file system types that you might encounter as you're moving around the overall file system. When you've moved into a directory, for example, and want to ask about the related file system, you can run a command like this:

```
$ cd /dev/mqueue; df -T .
Filesystem     Type   1K-blocks  Used Available Use% Mounted on
mqueue         mqueue         0     0         0    - /dev/mqueue
$ cd /sys; df -T .
Filesystem     Type  1K-blocks  Used Available Use% Mounted on
sysfs          sysfs         0     0         0    - /sys
$ cd /sys/kernel/security; df -T .
Filesystem     Type       1K-blocks  Used Available Use% Mounted on
securityfs     securityfs         0     0         0    - /sys/kernel/security
```
As with other Linux commands, the . in these commands refers to the current location in the overall file system.

These and other unique file-system types provide some special functions. For example, securityfs provides file system support for security modules.

Linux file systems need to be resistant to corruption, have the ability to survive system crashes and provide fast and reliable performance. The improvements provided by the generations of **ext** file systems and the new generation on purpose-specific file system types have made Linux systems easier to manage and more reliable.

Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.

--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/08/guided-tour-on-the-flaker_people-in-horse-drawn-carriage_germany-by-andreas-lehner-flickr-100808681-large.jpg
|
||||
[2]: https://www.facebook.com/NetworkWorld/
|
||||
[3]: https://www.linkedin.com/company/network-world
|
286
sources/tech/20190822 How to move a file in Linux.md
Normal file
@ -0,0 +1,286 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to move a file in Linux)
|
||||
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-depth)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/doni08521059)
|
||||
|
||||
How to move a file in Linux
|
||||
======
|
||||
Whether you're new to moving files in Linux or experienced, you'll learn
|
||||
something in this in-depth writeup.
|
||||
![Files in a folder][1]
|
||||
|
||||
Moving files in Linux can seem relatively straightforward, but there are more options available than most realize. This article teaches beginners how to move files in the GUI and on the command line, but also explains what’s actually happening under the hood and addresses command-line options that many experienced users have rarely explored.
|
||||
|
||||
### Moving what?
|
||||
|
||||
Before delving into moving files, it’s worth taking a closer look at what actually happens when _moving_ file system objects. When a file is created, it is assigned to an _inode_, which is a fixed point in a file system that’s used for data storage. You can see which inode maps to a file with the [ls][2] command:
|
||||
|
||||
|
||||
```
|
||||
$ ls --inode example.txt
|
||||
7344977 example.txt
|
||||
```
|
||||
|
||||
When you move a file, you don’t actually move the data from one inode to another; you only assign the file object a new name or file path. In fact, a file retains its permissions when it’s moved, because moving a file doesn’t change or re-create it.
|
||||
|
||||
File and directory inodes never imply inheritance and are dictated by the filesystem itself. Inode assignment is sequential based on when the file was created and is entirely independent of how you organize your computer. A file "inside" a directory may have a lower inode number than its parent directory, or a higher one. For example:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir foo
|
||||
$ mv example.txt foo
|
||||
$ ls --inode
|
||||
7476865 foo
|
||||
$ ls --inode foo
|
||||
7344977 example.txt
|
||||
```
|
||||
|
||||
When moving a file from one hard drive to another, however, the inode is very likely to change. This happens because the new data has to be written onto a new filesystem. For this reason, in Linux the act of moving and renaming files is literally the same action. Whether you move a file to another directory or to the same directory with a new name, both actions are performed by the same underlying program.
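You can watch the inode change by moving a file onto a different file system. The following is a minimal sketch that assumes **/tmp** is mounted as its own (tmpfs) file system on your machine; the inode numbers are illustrative:

```
$ ls --inode example.txt
7344977 example.txt
$ mv example.txt /tmp
$ ls --inode /tmp/example.txt
42 /tmp/example.txt
```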
|
||||
|
||||
This article focuses on moving files from one directory to another.
|
||||
|
||||
### Moving with a mouse
|
||||
|
||||
The GUI is a friendly and, to most people, familiar layer of abstraction on top of a complex collection of binary data. It’s also the first and most intuitive way to move files on Linux. If you’re used to the desktop experience, in a generic sense, then you probably already know how to move files around your hard drive. In the GNOME desktop, for instance, the default action when dragging and dropping a file from one window to another is to move the file rather than to copy it, so it’s probably one of the most intuitive actions on the desktop:
|
||||
|
||||
![Moving a file in GNOME.][3]
|
||||
|
||||
The Dolphin file manager in the KDE Plasma desktop defaults to prompting the user for an action. Holding the **Shift** key while dragging a file forces a move action:
|
||||
|
||||
![Moving a file in KDE.][4]
|
||||
|
||||
### Moving on the command line
|
||||
|
||||
The shell command intended for moving files on Linux, BSD, Illumos, Solaris, and MacOS is **mv**. A simple command with a predictable syntax, **mv <source> <destination>** moves a source file to the specified destination, each defined by either an [absolute][5] or [relative][6] file path. As mentioned before, **mv** is such a common command for [POSIX][7] users that many of its additional modifiers are generally unknown, so this article brings a few useful modifiers to your attention whether you are new or experienced.
|
||||
|
||||
Not all **mv** commands were written by the same people, though, so you may have GNU **mv**, BSD **mv**, or Sun **mv**, depending on your operating system. Command options differ from implementation to implementation (BSD **mv** has no long options at all) so refer to your **mv** man page to see what’s supported, or install your preferred version instead (that’s the luxury of open source).
|
||||
|
||||
#### Moving a file
|
||||
|
||||
To move a file from one folder to another with **mv**, remember the syntax **mv <source> <destination>**. For instance, to move the file **example.txt** into your **Documents** directory:
|
||||
|
||||
|
||||
```
|
||||
$ touch example.txt
|
||||
$ mv example.txt ~/Documents
|
||||
$ ls ~/Documents
|
||||
example.txt
|
||||
```
|
||||
|
||||
Just like when you move a file by dragging and dropping it onto a folder icon, this command doesn’t replace **Documents** with **example.txt**. Instead, **mv** detects that **Documents** is a folder, and places the **example.txt** file into it.
|
||||
|
||||
You can also, conveniently, rename the file as you move it:
|
||||
|
||||
|
||||
```
|
||||
$ touch example.txt
|
||||
$ mv example.txt ~/Documents/foo.txt
|
||||
$ ls ~/Documents
|
||||
foo.txt
|
||||
```
|
||||
|
||||
That’s important because it enables you to rename a file even when you don’t want to move it to another location, like so:
|
||||
|
||||
|
||||
```
|
||||
$ touch example.txt
$ mv example.txt foo2.txt
$ ls
foo2.txt
|
||||
```
|
||||
|
||||
#### Moving a directory
|
||||
|
||||
The **mv** command doesn’t differentiate a file from a directory the way [**cp**][8] does. You can move a directory or a file with the same syntax:
|
||||
|
||||
|
||||
```
|
||||
$ touch file.txt
|
||||
$ mkdir foo_directory
|
||||
$ mv file.txt foo_directory
|
||||
$ mv foo_directory ~/Documents
|
||||
```
|
||||
|
||||
#### Moving a file safely
|
||||
|
||||
If you move a file to a directory where a file of the same name already exists, the **mv** command replaces the destination file with the one you are moving, by default. This behavior is called _clobbering_, and sometimes it’s exactly what you intend. Other times, it is not.
|
||||
|
||||
Some distributions _alias_ (or you might [write your own][9]) **mv** to **mv --interactive**, which prompts you for confirmation. Some do not. Either way, you can use the **\--interactive** or **-i** option to ensure that **mv** asks for confirmation in the event that two files of the same name are in conflict:
|
||||
|
||||
|
||||
```
|
||||
$ mv --interactive example.txt ~/Documents
|
||||
mv: overwrite '~/Documents/example.txt'?
|
||||
```
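If your distribution doesn't set up that alias for you, you can add it yourself. Here's a minimal sketch for Bash:

```
$ echo "alias mv='mv --interactive'" >> ~/.bashrc
$ source ~/.bashrc
```

With GNU **mv**, the last of **-i**, **-f**, or **-n** on the command line wins, so **mv -f** still forces a move even with the alias in place.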
|
||||
|
||||
If you do not want to manually intervene, use **\--no-clobber** or **-n** instead. This flag silently rejects the move action in the event of conflict. In this example, a file named **example.txt** already exists in **~/Documents**, so it doesn't get moved from the current directory as instructed:
|
||||
|
||||
|
||||
```
|
||||
$ mv --no-clobber example.txt ~/Documents
|
||||
$ ls
|
||||
example.txt
|
||||
```
|
||||
|
||||
#### Moving with backups
|
||||
|
||||
If you’re using GNU **mv**, there are backup options offering another means of safe moving. To create a backup of any conflicting destination file, use the **-b** option:
|
||||
|
||||
|
||||
```
|
||||
$ mv -b example.txt ~/Documents
|
||||
$ ls ~/Documents
|
||||
example.txt example.txt~
|
||||
```
|
||||
|
||||
This flag ensures that **mv** completes the move action, but also protects any pre-existing file in the destination location.
|
||||
|
||||
Another GNU backup option is **\--backup**, which takes an argument defining how the backup file is named:
|
||||
|
||||
* **existing**: If numbered backups already exist in the destination, then a numbered backup is created. Otherwise, the **simple** scheme is used.
|
||||
* **none**: Does not create a backup even if **\--backup** is set. This option is useful to override a **mv** alias that sets the backup option.
|
||||
* **numbered**: Appends the destination file with a number.
|
||||
* **simple**: Appends the destination file with a **~**, which can conveniently be hidden from your daily view with the **\--ignore-backups** option for **[ls][2]**.
|
||||
|
||||
|
||||
|
||||
For example:
|
||||
|
||||
|
||||
```
|
||||
$ mv --backup=numbered example.txt ~/Documents
|
||||
$ ls ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
|
||||
```
|
||||
|
||||
A default backup scheme can be set with the environment variable VERSION_CONTROL. You can set environment variables in your **~/.bashrc** file or dynamically before your command:
|
||||
|
||||
|
||||
```
|
||||
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
|
||||
$ ls ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
|
||||
```
|
||||
|
||||
The **\--backup** option still respects the **\--interactive** or **-i** option, so it still prompts you to overwrite the destination file, even though it creates a backup before doing so:
|
||||
|
||||
|
||||
```
|
||||
$ mv --backup=numbered example.txt ~/Documents
|
||||
mv: overwrite '~/Documents/example.txt'? y
|
||||
$ ls ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:23 example.txt.~3~
|
||||
```
|
||||
|
||||
You can override **-i** with the **\--force** or **-f** option.
|
||||
|
||||
|
||||
```
|
||||
$ mv --backup=numbered --force example.txt ~/Documents
|
||||
$ ls ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:26 example.txt
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:20 example.txt.~1~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:22 example.txt.~2~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:24 example.txt.~3~
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:25 example.txt.~4~
|
||||
```
|
||||
|
||||
The **\--backup** option is not available in BSD **mv**.
|
||||
|
||||
#### Moving many files at once
|
||||
|
||||
When moving multiple files, **mv** treats the final directory named as the destination:
|
||||
|
||||
|
||||
```
|
||||
$ mv foo bar baz ~/Documents
|
||||
$ ls ~/Documents
|
||||
foo bar baz
|
||||
```
|
||||
|
||||
If the final item is not a directory, **mv** returns an error:
|
||||
|
||||
|
||||
```
|
||||
$ mv foo bar baz
|
||||
mv: target 'baz' is not a directory
|
||||
```
|
||||
|
||||
The syntax of GNU **mv** is fairly flexible. If you are unable to provide the **mv** command with the destination as the final argument, use the **\--target-directory** or **-t** option:
|
||||
|
||||
|
||||
```
|
||||
$ mv --target-directory=~/Documents foo bar baz
|
||||
$ ls ~/Documents
|
||||
foo bar baz
|
||||
```
|
||||
|
||||
This is especially useful when constructing **mv** commands from the output of some other command, such as the **find** command, **xargs**, or [GNU Parallel][10].
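For example, here is a hedged sketch that gathers files with **find** and hands them to **mv** through **xargs**; the **\*.bak** pattern and the destination are just illustrations:

```
$ find . -name '*.bak' -print0 | xargs -0 mv -t ~/Documents
```

The **-print0** and **-0** options keep file names containing spaces or newlines safe.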
|
||||
|
||||
#### Moving based on mtime
|
||||
|
||||
With GNU **mv**, you can define a move action based on whether the file being moved is newer than the destination file it would replace. This is possible with the **\--update** or **-u** option, and is not available in BSD **mv**:
|
||||
|
||||
|
||||
```
|
||||
$ ls -l ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:32 example.txt
|
||||
$ ls -l
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
|
||||
$ mv --update example.txt ~/Documents
|
||||
$ ls -l ~/Documents
|
||||
-rw-rw-r--. 1 seth users 128 Aug 1 17:42 example.txt
|
||||
$ ls -l
|
||||
```
|
||||
|
||||
This result is exclusively based on the files’ modification time, not on a diff of the two files, so use it with care. It’s easy to fool **mv** with a mere **touch** command:
|
||||
|
||||
|
||||
```
|
||||
$ cat example.txt
|
||||
one
|
||||
$ cat ~/Documents/example.txt
|
||||
one
|
||||
two
|
||||
$ touch example.txt
|
||||
$ mv --update example.txt ~/Documents
|
||||
$ cat ~/Documents/example.txt
|
||||
one
|
||||
```
|
||||
|
||||
Obviously, this isn’t the most intelligent update function available, but it offers basic protection against overwriting recent data.
|
||||
|
||||
### Moving
|
||||
|
||||
There are more ways to move data than just the **mv** command, but as the default program for the job, **mv** is a good universal option. Now that you know what options you have available, you can use **mv** smarter than ever before.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/moving-files-linux-depth
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
|
||||
[2]: https://opensource.com/article/19/7/master-ls-command
|
||||
[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.)
|
||||
[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
|
||||
[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths
|
||||
[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[8]: https://opensource.com/article/19/7/copying-files-linux
|
||||
[9]: https://opensource.com/article/19/7/bash-aliases
|
||||
[10]: https://opensource.com/article/18/5/gnu-parallel
|
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Dive into the life and legacy of Alan Turing: 5 books and more)
|
||||
[#]: via: (https://opensource.com/article/19/8/who-was-alan-turing)
|
||||
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
|
||||
|
||||
Dive into the life and legacy of Alan Turing: 5 books and more
|
||||
======
|
||||
Turing's theories had a huge impact on the development of the field of
|
||||
computer science.
|
||||
![Fire fist breaking glass][1]
|
||||
|
||||
Recently, Bank of England Governor Mark Carney [announced][2] that Alan Turing would be the new face on the UK £50 note. The name _Alan Turing_ should be familiar to anyone in open source communities: His theories had a huge impact on the development of the field of computer science, and his code-breaking work at [Bletchley Park][3] during World War II was the focus of the 2014 film [_The Imitation Game_][4], which starred Benedict Cumberbatch as Alan Turing.
|
||||
|
||||
Another well-known fact about Turing was his conviction for "gross indecency" because of his homosexuality, and the posthumous [apology][5] and [pardon][6] issued more than half a century after Turing’s death.
|
||||
|
||||
But beyond all of this, who was Alan Turing?
|
||||
|
||||
Here are five books and archival material that delve deeply into the life and legacy of Alan Turing. Collectively, these resources cover his life, both professional and personal, and the work others have done to build upon Turing’s ideas. Individually or collectively, these works allow the reader to learn who Alan Turing was beyond just a few well-known, broad-stroke themes.
|
||||
|
||||
### Alan Turing: The Enigma
|
||||
|
||||
![Alan Turing: The Enigma][7]
|
||||
|
||||
One of the most expansive biographies of Alan Turing, [_Alan Turing: The Enigma_][8], by Andrew Hodges, states on its cover that it is the inspiration for the film _The Imitation Game_. Weighing in at over 750 pages, this is no quick read, but it covers much of Turing’s life. The only drawback is that the first edition was published in 1983. Even the updated edition does not make use of information that has been declassified in the past few years.
|
||||
|
||||
Despite that, if you only read one book from this list, _Alan Turing: The Enigma_ is still an excellent choice. Hodges’s work is the gold standard when it comes to Alan Turing biographies.
|
||||
|
||||
### The Imitation Game: Alan Turing Decoded
|
||||
|
||||
![The Imitation Game: Alan Turing Decoded][9]
|
||||
|
||||
_[The Imitation Game: Alan Turing Decoded][10]_, by Jim Ottaviani and illustrated by Leland Purvis, presents the life of Alan Turing as a graphic novel. Well told and partnered with lovely artwork, this book covers all the major facets of Alan Turing’s life but lacks the depth of a biography like Hodges’.
|
||||
|
||||
That is not to say that there is anything wrong or deficient with Ottaviani’s writing, just that the graphic novel form requires a more streamlined narrative. For anyone wanting a quick introduction to Turing, this graphic novel is the quickest way to read an overview of Turing’s life and works.
|
||||
|
||||
### Prof: Alan Turing Decoded
|
||||
|
||||
![Prof: Alan Turing Decoded][11]
|
||||
|
||||
Written by Alan Turing’s nephew, Dermot Turing, _[Prof: Alan Turing Decoded][12]_ draws upon material from the family, plus declassified material that was not available when Hodges researched his book. This shorter biography provides a more personal look at Alan Turing’s life while still being scholarly.
|
||||
|
||||
Dermot Turing does an excellent job of telling the story of Alan Turing the man, not the myth born from public perceptions based on various dramatic interpretations. _Prof: Alan Turing Decoded_ is an interesting biography owing to its use of letters from members of the Turing family, including Alan Turing himself.
|
||||
|
||||
### The Turing Digital Archive
|
||||
|
||||
Nothing beats archival materials for really learning about a subject. Biographers have done masterful jobs at turning primary sources about Alan Turing’s life into compelling biographies, but reading Turing’s own writings and exploring other material in [The Turing Digital Archive][13]—maintained by King’s College, Cambridge—provides a more intimate look at Turing’s life and works. This archive contains Turing’s scholarly papers, personal correspondence, photographs, and more. The collection is well-organized and the site is easy to use, making it simple for anyone to conduct their own archival research about the life of Alan Turing.
|
||||
|
||||
### Turing’s Cathedral
|
||||
|
||||
![Turing’s Cathedral][14]
|
||||
|
||||
In [_Turing’s Cathedral_][15], George Dyson explores the efforts by John von Neumann and his collaborators to construct a computer based on Alan Turing’s theory of a Universal Machine. John von Neumann made many, many contributions to computer science, which are also covered in this book, but the transition of Alan Turing’s Universal Machine from theory to practice is the facet that concerns readers wishing to learn more about Alan Turing’s legacy.
|
||||
|
||||
_Turing’s Cathedral_ is the story of von Neumann constructing one of the earliest modern computers, but it is, like all modern computing, the story of Alan Turing’s influence on everything that developed from his theories.
|
||||
|
||||
### Turing’s Vision: The Birth of Computer Science
|
||||
|
||||
![Turing’s Vision: The Birth of Computer Science][16]
|
||||
|
||||
[_Turing’s Vision: The Birth of Computer Science_][17], like its title states, explores the birth of the field of computer science. Full of diagrams and complex examples, this book might not be for everyone, but it does a masterful job of explaining computer science concepts and Turing’s place in the birth of the discipline. Chris Bernhardt does an excellent job of weaving together the biographical aspects with the technical, but the technical material can be very, very technical. There are mathematical proofs and other things that make this book a poor choice for the non-technical reader, but an excellent choice for someone with a background in computer science.
|
||||
|
||||
For a very technical book, it is an enjoyable read. The biographical aspects are not as broad or as deep as pure biographies, but it is the synthesis of the biographical and the technical that make this book so interesting.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/who-was-alan-turing
|
||||
|
||||
作者:[Joshua Allen Holm][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/holmja
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fire_fist_break_glass_smash_fail.jpg?itok=S6hQNLtB (Fire fist breaking glass)
|
||||
[2]: https://www.bankofengland.co.uk/news/2019/july/50-pound-banknote-character-announcement
|
||||
[3]: https://www.bletchleypark.org.uk/
|
||||
[4]: https://www.imdb.com/title/tt2084970/
|
||||
[5]: https://www.telegraph.co.uk/news/politics/gordon-brown/6170112/Gordon-Brown-Im-proud-to-say-sorry-to-a-real-war-hero.html
|
||||
[6]: https://www.bbc.com/news/technology-25495315
|
||||
[7]: https://opensource.com/sites/default/files/uploads/alan_turing-_the_enigma_125.jpeg (Alan Turing: The Enigma)
|
||||
[8]: https://press.princeton.edu/titles/10413.html
|
||||
[9]: https://opensource.com/sites/default/files/uploads/the_imitation_game-_alan_turing_decoded_125.jpg (The Imitation Game: Alan Turing Decoded)
|
||||
[10]: https://www.abramsbooks.com/product/imitation-game_9781419718939/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/prof-_alan_turing_decoded_125.jpg (Prof: Alan Turing Decoded)
|
||||
[12]: https://dermotturing.com/my-recent-books/alan-turing/
|
||||
[13]: http://www.turingarchive.org/
|
||||
[14]: https://opensource.com/sites/default/files/uploads/turing_s_cathedral_125.jpg (Turing’s Cathedral)
|
||||
[15]: https://www.penguinrandomhouse.com/books/44425/turings-cathedral-by-george-dyson/9781400075997/
|
||||
[16]: https://opensource.com/sites/default/files/uploads/turing_s_vision-_the_birth_of_computer_science_125.jpg (Turing’s Vision: The Birth of Computer Science)
|
||||
[17]: https://mitpress.mit.edu/books/turings-vision
|
@ -0,0 +1,117 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Check Your IP Address in Ubuntu [Beginner’s Tip])
|
||||
[#]: via: (https://itsfoss.com/check-ip-address-ubuntu/)
|
||||
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
|
||||
|
||||
How To Check Your IP Address in Ubuntu [Beginner’s Tip]
|
||||
======
|
||||
|
||||
Wonder what’s your IP address? Here are several ways to check IP address in Ubuntu and other Linux distributions.
|
||||
|
||||
![][1]
|
||||
|
||||
### What is an IP Address?
|
||||
|
||||
An **Internet Protocol address** (commonly referred to as an **IP address**) is a numerical label assigned to each device connected to a computer network (using the Internet Protocol). An IP address serves two purposes: identifying a machine and locating it on the network.
|
||||
|
||||
The **IP address** is _unique_ within the network, allowing the communication between all connected devices.
|
||||
|
||||
You should also know that there are two **types of IP addresses**: **public** and **private**. The **public IP address** is the address used to communicate over the Internet, the same way your physical address is used for postal mail. However, in the context of a local network (such as a home where a router is used), each device is assigned a **private IP address** that is unique within this sub-network. This address is used inside the local network, without directly exposing the public IP (which the router uses to communicate with the Internet).
|
||||
|
||||
Another distinction can be made between the **IPv4** and **IPv6** protocols. **IPv4** is the classic IP format, consisting of a basic four-part structure, with four bytes separated by dots (e.g. 127.0.0.1). However, with the growing number of devices, IPv4 will soon be unable to offer enough addresses. This is why **IPv6** was invented, a format using **128-bit addresses** (compared to the **32-bit addresses** used by **IPv4**).
|
||||
|
||||
## Checking your IP Address in Ubuntu [Terminal Method]
|
||||
|
||||
The fastest and the simplest way to check your IP address is by using the ip command. You can use this command in the following fashion:
|
||||
|
||||
```
|
||||
ip addr show
|
||||
```
|
||||
|
||||
It will show you both IPv4 and IPv6 addresses:
|
||||
|
||||
![Display IP Address in Ubuntu Linux][2]
|
||||
|
||||
Actually, you can further shorten this command to just `ip a`. It will give you the exact same result.
|
||||
|
||||
```
|
||||
ip a
|
||||
```
|
||||
|
||||
If you prefer to get minimal details, you can also use **hostname**:
|
||||
|
||||
```
|
||||
hostname -I
|
||||
```
|
||||
|
||||
There are some other [ways to check IP address in Linux][3] but these two commands are more than enough to serve the purpose.
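One more trick worth knowing: you can ask the kernel which source address it would use to reach an outside host. In this sketch, 8.8.8.8 is just a well-known public DNS server used as a probe target:

```
ip route get 8.8.8.8
```

The **src** field in the output is the IP address your system would use for that connection.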
|
||||
|
||||
|
||||
|
||||
**What about ifconfig?**
|
||||
|
||||
Long-time users might be tempted to use ifconfig (part of net-tools), but that program is deprecated. Some newer Linux distributions don’t include the package anymore, and if you try running it, you’ll see an “ifconfig: command not found” error.
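If you really do need the old tool, it still lives in the **net-tools** package, which you can install manually (a sketch for Ubuntu):

```
sudo apt install net-tools
ifconfig
```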
|
||||
|
||||
## Checking IP address in Ubuntu [GUI Method]
|
||||
|
||||
If you are not comfortable with the command line, you can also check IP address graphically.
|
||||
|
||||
Open up the Ubuntu Applications Menu (**Show Applications** in the bottom-left corner of the screen) and search for **Settings** and click on the icon:
|
||||
|
||||
![Applications Menu Settings][5]
|
||||
|
||||
This should open up the **Settings Menu**. Go to **Network**:
|
||||
|
||||
![Network Settings Ubuntu][6]
|
||||
|
||||
Pressing on the **gear icon** next to your connection should open up a window with more settings and information about your link to the network, including your IP address:
|
||||
|
||||
![IP Address GUI Ubuntu][7]
|
||||
|
||||
## Bonus Tip: Checking your Public IP Address (for desktop computers)
|
||||
|
||||
First of all, to check your **public IP address** (used for communicating with servers etc.) you can [use curl command][8]. Open up a terminal and enter the following command:
|
||||
|
||||
```
|
||||
curl ifconfig.me
|
||||
```
|
||||
|
||||
This should simply return your IP address with no additional bulk information. I would recommend being careful when sharing this address, since it is the equivalent of giving out your personal address.
|
||||
|
||||
**Note:** _If **curl** isn’t installed on your system, simply use **sudo apt install curl -y** to solve the problem, then try again._
|
||||
|
||||
Another simple way you can see your public IP address is by searching for **ip address** on Google.
|
||||
|
||||
**Summary**
|
||||
|
||||
In this article I went through the different ways you can find your IP address in Ubuntu Linux, as well as giving you a basic overview of what IP addresses are used for and why they are so important to us.
|
||||
|
||||
I hope you enjoyed this quick guide. Let us know if you found this explanation helpful in the comments section!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/check-ip-address-ubuntu/
|
||||
|
||||
作者:[Sergiu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/sergiu/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/checking-ip-address-ubuntu.png?resize=800%2C450&ssl=1
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/ip_addr_show.png?fit=800%2C493&ssl=1
|
||||
[3]: https://linuxhandbook.com/find-ip-address/
|
||||
[4]: https://itsfoss.com/disable-ipv6-ubuntu-linux/
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/applications_menu_settings.jpg?fit=800%2C309&ssl=1
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/network_settings_ubuntu.jpg?fit=800%2C591&ssl=1
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/ip_address_gui_ubuntu.png?fit=800%2C510&ssl=1
|
||||
[8]: https://linuxhandbook.com/curl-command-examples/
|
@ -0,0 +1,407 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hello-wn)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Delete Lines from a File Using the sed Command)
|
||||
[#]: via: (https://www.2daygeek.com/linux-remove-delete-lines-in-file-sed-command/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Delete Lines from a File Using the sed Command
|
||||
======
|
||||
|
||||
The sed command, short for stream editor, is used to perform basic text transformations in Linux.
|
||||
|
||||
sed is an important command that plays a major role in file manipulation. It can be used to delete or remove specific lines that match a given pattern.
|
||||
|
||||
It can also remove a particular line in a file by its line number.
|
||||
|
||||
It can even delete expressions from a file, identified by a specified delimiter (such as a comma, tab, or space).
|
||||
|
||||
Fifteen examples are listed in this article to help you become a master of the sed command.
|
||||
|
||||
If you understand and remember all of these commands, they can be useful in many ways and will save you a lot of time whenever you need sed.
|
||||
|
||||
**`Note:`**` ` For demonstration purposes, I use the sed command without the `-i` option, which prints the contents of the file to the Linux terminal with the lines removed.
|
||||
|
||||
If you want to remove lines from the source file in a real environment, add the `-i` option to the sed command.
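As a middle ground, GNU sed also lets you attach a backup suffix to the `-i` option, so the file is edited in place while the original is kept with that suffix (here, sed-demo.txt.bak). A quick sketch using the demo file below:

```
# sed -i.bak '1d' sed-demo.txt
```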
|
||||
|
||||
To test this, I created the sed-demo.txt file and added the following contents, with line numbers for better understanding.
|
||||
|
||||
```
|
||||
# cat sed-demo.txt
|
||||
|
||||
1 Linux Operating System
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 1) How to Delete First Line from a File?
|
||||
|
||||
To delete the first line from a file, use the following syntax.
|
||||
|
||||
**`N`**` ` denotes the Nth line in a file, and the d command in sed is used to delete a line.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
```
|
||||
sed 'Nd' file
|
||||
```
|
||||
|
||||
The below sed command removes the first line in sed-demo.txt file.
|
||||
|
||||
```
|
||||
# sed '1d' sed-demo.txt
|
||||
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 2) How to Delete Last Line from a File?
|
||||
|
||||
To delete the last line from a file, use the following syntax.
|
||||
|
||||
The **`$`**` ` denotes the last line of a file.
|
||||
|
||||
The below sed command removes the last line in sed-demo.txt file.
|
||||
|
||||
```
|
||||
# sed '$d' sed-demo.txt
|
||||
|
||||
1 Linux Operating System
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
```
|
||||
|
||||
### 3) How to Delete Particular Line from a File?
|
||||
|
||||
The below sed command removes the third line in sed-demo.txt file.
|
||||
|
||||
```
|
||||
# sed '3d' sed-demo.txt
|
||||
|
||||
1 Linux Operating System
|
||||
2 Unix Operating System
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 4) How to Delete Range of Lines from a File?
|
||||
|
||||
The below sed command removes the lines ranging from 5 to 7.
|
||||
|
||||
```
|
||||
# sed '5,7d' sed-demo.txt
|
||||
|
||||
1 Linux Operating System
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 5) How to Delete Multiple Lines from a File?
|
||||
|
||||
The sed command can remove a given set of lines.
|
||||
|
||||
In this example, the following sed command removes the 1st, 5th, and 9th lines, as well as the last line.
|
||||
|
||||
```
|
||||
# sed '1d;5d;9d;$d' sed-demo.txt
|
||||
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
```
|
||||
|
||||
### 5a) How to Delete Lines Other Than the Specified Range from a File?
|
||||
|
||||
Use the following sed command to remove all lines from the file except those in the specified range.
|
||||
|
||||
```
|
||||
# sed '3,6!d' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
```
|
||||
|
||||
### 6) How to Delete Empty or Blank Lines from a File?
|
||||
|
||||
The following sed command removes the empty or blank lines from the sed-demo.txt file.
|
||||
|
||||
```
|
||||
# sed '/^$/d' sed-demo.txt
|
||||
|
||||
1 Linux Operating System
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 7) How to Delete Lines That Contain a Pattern from a File?
|
||||
|
||||
The following sed command removes the lines in the sed-demo.txt file that match the **`System`**` ` pattern.
|
||||
|
||||
```
|
||||
# sed '/System/d' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 8) How to Delete Lines That Contain One of Multiple Strings from a File?
|
||||
|
||||
The following sed command removes the lines in the sed-demo.txt file that match either the **`System`**` ` or **`Linux`**` ` pattern.
|
||||
|
||||
```
|
||||
# sed '/System\|Linux/d' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 9) How to Delete Lines That Begin with Specified Character from a File?
|
||||
|
||||
The following sed command removes all the lines that start with a given character.
|
||||
|
||||
To test this, I created another file called sed-demo-1.txt with the following contents.
|
||||
|
||||
```
|
||||
# cat sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
RHEL
|
||||
Red Hat
|
||||
Fedora
|
||||
debian
|
||||
ubuntu
|
||||
Arch Linux - 1
|
||||
2 - Manjaro
|
||||
3 4 5 6
|
||||
```
|
||||
|
||||
The following sed command removes all the lines that start with the character **`R`**` `.
|
||||
|
||||
```
|
||||
# sed '/^R/d' sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
Fedora
|
||||
debian
|
||||
ubuntu
|
||||
Arch Linux - 1
|
||||
2 - Manjaro
|
||||
3 4 5 6
|
||||
```
|
||||
|
||||
The following sed command removes all the lines that start with either the character **`R`**` ` or **`F`**` `.
|
||||
|
||||
```
|
||||
# sed '/^[RF]/d' sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
debian
|
||||
ubuntu
|
||||
Arch Linux - 1
|
||||
2 - Manjaro
|
||||
3 4 5 6
|
||||
```
|
||||
|
||||
### 10) How to Delete Lines That End with Specified Character from a File?
|
||||
|
||||
The following sed command removes all the lines that end with the character **`m`**` `.
|
||||
|
||||
```
|
||||
# sed '/m$/d' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
The following sed command removes all the lines that end with either the character **`x`**` ` or **`m`**` `.
|
||||
|
||||
```
|
||||
# sed '/[xm]$/d' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 11) How to Delete All Lines That Start with Capital Letters
|
||||
|
||||
Use the following sed command to remove all the lines that start with a capital letter.
|
||||
|
||||
```
|
||||
# sed '/^[A-Z]/d' sed-demo-1.txt
|
||||
|
||||
debian
|
||||
ubuntu
|
||||
2 - Manjaro
|
||||
3 4 5 6
|
||||
```
|
||||
|
||||
### 12) How to Delete Lines Matching a Pattern Within a Specified Range in a File?
|
||||
|
||||
The below sed command removes lines containing the pattern **`Linux`**` `, but only if they fall within lines 1 to 6.
|
||||
|
||||
```
|
||||
# sed '1,6{/Linux/d;}' sed-demo.txt
|
||||
|
||||
2 Unix Operating System
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 13) How to Delete a Line Matching a Pattern, Along with the Next Line?
|
||||
|
||||
Use the following sed command to delete the line containing the pattern ‘System’, as well as the line after it.
|
||||
|
||||
```
|
||||
# sed '/System/{N;d;}' sed-demo.txt
|
||||
|
||||
3 RHEL
|
||||
4 Red Hat
|
||||
5 Fedora
|
||||
6 Arch Linux
|
||||
7 CentOS
|
||||
8 Debian
|
||||
9 Ubuntu
|
||||
10 openSUSE
|
||||
```
|
||||
|
||||
### 14) How to Delete Lines That Contain Digits from a File?
|
||||
|
||||
The below sed command removes all the lines that contain digits.
|
||||
|
||||
```
|
||||
# sed '/[0-9]/d' sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
RHEL
|
||||
Red Hat
|
||||
Fedora
|
||||
debian
|
||||
ubuntu
|
||||
```
|
||||
|
||||
The below sed command removes all the lines that begin with digits.
|
||||
|
||||
```
|
||||
# sed '/^[0-9]/d' sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
RHEL
|
||||
Red Hat
|
||||
Fedora
|
||||
debian
|
||||
ubuntu
|
||||
Arch Linux - 1
|
||||
```
|
||||
|
||||
The below sed command removes all the lines that end with digits.
|
||||
|
||||
```
|
||||
# sed '/[0-9]$/d' sed-demo-1.txt
|
||||
|
||||
Linux Operating System
|
||||
Unix Operating System
|
||||
RHEL
|
||||
Red Hat
|
||||
Fedora
|
||||
debian
|
||||
ubuntu
|
||||
2 - Manjaro
|
||||
```
|
||||
|
||||
### 15) How to Delete Lines That Contain Alphabetic Characters from a File?
|
||||
|
||||
The below sed command removes all the lines that contain alphabetic characters.
|
||||
|
||||
```
|
||||
# sed '/[A-Za-z]/d' sed-demo-1.txt
|
||||
|
||||
3 4 5 6
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-remove-delete-lines-in-file-sed-command/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
122
sources/tech/20190823 Managing credentials with KeePassXC.md
Normal file
@ -0,0 +1,122 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Managing credentials with KeePassXC)
|
||||
[#]: via: (https://fedoramagazine.org/managing-credentials-with-keepassxc/)
|
||||
[#]: author: (Marco Sarti https://fedoramagazine.org/author/msarti/)
|
||||
|
||||
Managing credentials with KeePassXC
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
A [previous article][2] discussed password management tools that use server-side technology. These tools are very interesting and suitable for a cloud installation.
|
||||
In this article, we will talk about KeePassXC, a simple, multi-platform, open source application that uses a local file as its database.
|
||||
The main advantage of this type of password management is simplicity. No server-side technology expertise is required, so it can be used by any type of user.
|
||||
|
||||
### Introducing KeePassXC
|
||||
|
||||
KeePassXC is an open source, cross-platform password manager: its development started as a fork of KeePassX, a good product whose development was not very active. It saves secrets in a database encrypted with the AES algorithm using a 256-bit key, which makes it reasonably safe to store the database in cloud drive storage such as pCloud or Dropbox.
|
||||
|
||||
In addition to passwords, KeePassXC allows you to save various information and attachments in the encrypted wallet. It also has a good password generator that helps the user manage their credentials correctly.
|
||||
|
||||
### Installation
|
||||
|
||||
The program is available both in the standard Fedora repository and in the Flathub repository. Unfortunately, the browser integration does not work with the application running in the sandbox, so I suggest installing the program via dnf:
|
||||
```
sudo dnf install keepassxc
```
|
||||
|
||||
### Creating your wallet
|
||||
|
||||
To create a new database there are two important steps:
|
||||
|
||||
  * Choose the encryption settings: the default settings are reasonably safe; increasing the transform rounds also increases the decryption time.
|
||||
  * Choose the master key and additional protections: the master key must be easy to remember (if you lose it, your wallet is lost!) but strong enough; a passphrase with at least 4 random words can be a good choice. As additional protection, you can choose a key file (remember: you must always have it available, otherwise you cannot open the wallet) and/or a YubiKey hardware key.
|
||||
|
||||
|
||||
|
||||
![][3]
|
||||
|
||||
![][4]
|
||||
|
||||
The database file will be saved to the file system. If you want to share it with other computers/devices, you can save it on a USB key or in a cloud storage service like pCloud or Dropbox. Of course, if you choose cloud storage, a particularly strong master password is recommended, better if accompanied by additional protection.
|
||||
|
||||
### Creating your first entry
|
||||
|
||||
Once the database has been created, you can start creating your first entry. For a web login, specify a username, password, and URL in the Entry tab. Optionally, you can specify an expiration date for the credentials based on your personal policy. By pressing the button on the right, the site’s favicon is downloaded and associated with the entry as its icon, which is a nice feature.
|
||||
|
||||
![][5]
|
||||
|
||||
![][6]
|
||||
|
||||
KeePassXC also offers a good password/passphrase generator; you can choose the length and complexity and check the degree of resistance to a brute-force attack:
|
||||
|
||||
![][7]
|
||||
|
||||
### Browser integration
|
||||
|
||||
KeePassXC has an extension available for all major browsers. The extension allows you to fill in the login information for all the entries whose URL is specified.
|
||||
|
||||
Browser integration must be enabled in KeePassXC (Tools menu -> Settings), specifying which browsers you intend to use:
|
||||
|
||||
![][8]
|
||||
|
||||
Once the extension is installed, it is necessary to create a connection with the database. To do this, press the extension button and then the Connect button: if the database is open and unlocked, the extension will create an association key and save it in the database. The key is unique to the browser, so I suggest naming it appropriately:
|
||||
|
||||
![][9]
|
||||
|
||||
When you reach the login page specified in the Url field and the database is unlocked, the extension will offer you all the credentials you have associated with that page:
|
||||
|
||||
![][10]
|
||||
|
||||
In this way, while browsing with KeePassXC running, you will have your internet credentials available without necessarily saving them in the browser.
|
||||
|
||||
### SSH agent integration
|
||||
|
||||
Another interesting feature of KeePassXC is its integration with SSH. If you have ssh-agent running, KeePassXC is able to interact with it and add the SSH keys that you have uploaded as attachments to your entries.
|
||||
|
||||
First of all, in the general settings (Tools menu -> Settings) you have to enable the SSH agent integration and restart the program:
|
||||
|
||||
![][11]
|
||||
|
||||
At this point, you need to upload your SSH key pair as an attachment to your entry. Then, in the “SSH agent” tab, select the private key in the attachment drop-down list; the public key will be populated automatically. Don’t forget to select the two checkboxes above to allow the key to be added to the agent when the database is opened/unlocked and removed when the database is closed/locked:
|
||||
|
||||
![][12]
|
||||
|
||||
Now with the database open and unlocked you can log in ssh using the keys saved in your wallet.
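You can verify that the key actually reached the agent with ssh-add; the fingerprint and host in this sketch are placeholders:

```
ssh-add -l
256 SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx user@host (ED25519)
ssh user@myserver.example.com
```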
|
||||
|
||||
The only limitation is the maximum number of keys that can be added to the agent: SSH servers do not accept more than 5 login attempts by default, and for security reasons it is not recommended to increase this value.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/managing-credentials-with-keepassxc/
|
||||
|
||||
作者:[Marco Sarti][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/msarti/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/keepassxc-816x345.png
|
||||
[2]: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-07-33-27.png
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-07-48-21.png
|
||||
[5]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-30-07.png
|
||||
[6]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-43-11.png
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-08-49-22.png
|
||||
[8]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-48-09.png
|
||||
[9]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-05-57.png
|
||||
[10]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-13-29.png
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-47-21.png
|
||||
[12]: https://fedoramagazine.org/wp-content/uploads/2019/08/Screenshot-from-2019-08-17-09-46-35.png
|
105
sources/tech/20190823 The Linux kernel- Top 5 innovations.md
Normal file
@ -0,0 +1,105 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Linux kernel: Top 5 innovations)
|
||||
[#]: via: (https://opensource.com/article/19/8/linux-kernel-top-5-innovations)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/mhaydenhttps://opensource.com/users/mralexjuarez)
|
||||
|
||||
The Linux kernel: Top 5 innovations
|
||||
======
|
||||
Want to know what the actual (not buzzword) innovations are when it
|
||||
comes to the Linux kernel? Read on.
|
||||
![Penguin with green background][1]
|
||||
|
||||
The word _innovation_ gets bandied about in the tech industry almost as much as _revolution_, so it can be difficult to differentiate hyperbole from something that’s actually exciting. The Linux kernel has been called innovative, but then again it’s also been called the biggest hack in modern computing, a monolith in a micro world.
|
||||
|
||||
Setting aside marketing and modeling, Linux is arguably the most popular kernel of the open source world, and it’s introduced some real game-changers over its nearly 30-year life span.
|
||||
|
||||
### Cgroups (2.6.24)
|
||||
|
||||
Back in 2007, Paul Menage and Rohit Seth got the esoteric [_control groups_ (cgroups)][2] feature added to the kernel (the current implementation of cgroups is a rewrite by Tejun Heo). This new technology was initially used as a way to ensure, essentially, quality of service for a specific set of tasks.
|
||||
|
||||
For example, you could create a control group definition (cgroup) for all tasks associated with your web server, another cgroup for routine backups, and yet another for general operating system requirements. You could then control a percentage of resources for each cgroup, such that your OS and web server gets the bulk of system resources while your backup processes have access to whatever is left.
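To make that concrete, here is a minimal sketch using today's cgroup v2 interface. It assumes cgroup2 is mounted at /sys/fs/cgroup (the default on recent distributions), and the group name is invented:

```
$ sudo mkdir /sys/fs/cgroup/backups                               # create a cgroup
$ echo "50000 100000" | sudo tee /sys/fs/cgroup/backups/cpu.max   # cap it at 50% of one CPU
$ echo $$ | sudo tee /sys/fs/cgroup/backups/cgroup.procs          # move this shell into the group
```

Depending on your system, you may first need to enable the cpu controller by writing **+cpu** to /sys/fs/cgroup/cgroup.subtree_control.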
|
||||
|
||||
What cgroups has become most famous for, though, is its role as the technology driving the cloud today: containers. In fact, cgroups were originally named [process containers][3]. It was no great surprise when they were adopted by projects like [LXC][4], [CoreOS][5], and Docker.
|
||||
|
||||
The floodgates being opened, the term _containers_ justly became synonymous with Linux, and the concept of microservice-style cloud-based “apps” quickly became the norm. These days, it’s hard to get away from cgroups, they’re so prevalent. Every large-scale infrastructure (and probably your laptop, if you run Linux) takes advantage of cgroups in a meaningful way, making your computing experience more manageable and more flexible than ever.
|
||||
|
||||
For example, you might already have installed [Flathub][6] or [Flatpak][7] on your computer, or maybe you’ve started using [Kubernetes][8] and/or [OpenShift][9] at work. Regardless, if the term “containers” is still hazy for you, you can gain a hands-on understanding of containers from [Behind the scenes with Linux containers][10].
|
||||
|
||||
### LKMM (4.17)
|
||||
|
||||
In 2018, the hard work of Jade Alglave, Alan Stern, Andrea Parri, Luc Maranget, Paul McKenney, and several others, got merged into the mainline Linux kernel to provide formal memory models. The Linux Kernel Memory [Consistency] Model (LKMM) subsystem is a set of tools describing the Linux memory coherency model, as well as producing _litmus tests_ (**klitmus**, specifically) for testing.
|
||||
|
||||
As systems become more complex in physical design (more CPU cores added, cache and RAM grow, and so on), the harder it is for them to know which address space is required by which CPU, and when. For example, if CPU0 needs to write data to a shared variable in memory, and CPU1 needs to read that value, then CPU0 must write before CPU1 attempts to read. Similarly, if values are written in one order to memory, then there’s an expectation that they are also read in that same order, regardless of which CPU or CPUs are doing the reading.
|
||||
|
||||
Even on a single CPU, memory management requires a specific task order. A simple action such as **x = y** requires a CPU to load the value of **y** from memory, and then store that value in **x**. Placing the value stored in **y** into the **x** variable cannot occur _before_ the CPU has read the value from memory. There are also address dependencies: **x[n] = 6** requires that **n** is loaded before the CPU can store the value of six.
|
||||
|
||||
LKMM helps identify and trace these memory patterns in code. It does this in part with a tool called **herd**, which defines the constraints imposed by a memory model (in the form of logical axioms), and then enumerates all possible outcomes consistent with these constraints.
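If you want to experiment, the model ships with the kernel source under **tools/memory-model**, along with a library of litmus tests. Here's a hedged sketch; the exact litmus file names vary between kernel versions:

```
$ cd linux/tools/memory-model
$ herd7 -conf linux-kernel.cfg litmus-tests/MP+poonceonces.litmus
```

The output enumerates every outcome the memory model allows for that test.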
|
||||
|
||||
### Low-latency patch (2.6.38)
|
||||
|
||||
Long ago, in the days before 2011, if you wanted to do "serious" [multimedia work on Linux][11], you had to obtain a low-latency kernel. This mostly applied to [audio recording][12] while adding lots of real-time effects (such as singing into a microphone and adding reverb, and hearing your voice in your headset with no noticeable delay). There were distributions, such as [Ubuntu Studio][13], that reliably provided such a kernel, so in practice it wasn't much of a hurdle, just a significant caveat when choosing your distribution as an artist.
|
||||
|
||||
However, if you weren’t using Ubuntu Studio, or you had some need to update your kernel before your distribution got around to it, you had to go to the rt-patches web page, download the kernel patches, apply them to your kernel source code, compile, and install manually.
|
||||
|
||||
And then, with the release of kernel version 2.6.38, this process was all over. The Linux kernel suddenly, as if by magic, had low-latency code (according to benchmarks, latency decreased by a factor of 10, at least) built-in by default. No more downloading patches, no more compiling. Everything just worked, and all because of a small 200-line patch implemented by Mike Galbraith.
|
||||
|
||||
For open source multimedia artists the world over, it was a game-changer. Things got so good from 2011 on that in 2016, I challenged myself to [build a Digital Audio Workstation (DAW) on a Raspberry Pi v1 (model B)][14] and found that it worked surprisingly well.
|
||||
|
||||
### RCU (2.5)
|
||||
|
||||
RCU, or read-copy-update, is a synchronization mechanism that allows multiple processor threads to read from shared memory. It does this by deferring updates, but also marking them as updated, to ensure that the data’s consumers read the latest version. Effectively, this means that reads happen concurrently with updates.
|
||||
|
||||
The typical RCU cycle is a little like this:
|
||||
|
||||
1. Remove pointers to data to prevent other readers from referencing it.
|
||||
2. Wait for readers to complete their critical processes.
|
||||
3. Reclaim the memory space.
|
||||
|
||||
|
||||
|
||||
Dividing the update stage into removal and reclamation phases means the updater performs the removal immediately while deferring reclamation until all active readers are complete (either by blocking them or by registering a callback to be invoked upon completion).
|
||||
|
||||
While the concept of read-copy-update was not invented for the Linux kernel, its implementation in Linux is a defining example of the technology.
|
||||
|
||||
### Collaboration (0.01)
|
||||
|
||||
The final answer to the question of what the Linux kernel innovated will always be, above all else, collaboration. Call it good timing, call it technical superiority, call it hackability, or just call it open source, but the Linux kernel and the many projects that it enabled are a glowing example of collaboration and cooperation.
|
||||
|
||||
And it goes well beyond just the kernel. People from all walks of life have contributed to open source, arguably _because_ of the Linux kernel. Linux was, and remains to this day, a major force of [Free Software][15], inspiring users to bring their code, art, ideas, or just themselves, to a global, productive, and diverse community of humans.
|
||||
|
||||
### What’s your favorite innovation?
|
||||
|
||||
This list is biased toward my own interests: containers, non-uniform memory access (NUMA), and multimedia. I’ve surely left your favorite kernel innovation off the list. Tell me about it in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/linux-kernel-top-5-innovations
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
|
||||
[2]: https://en.wikipedia.org/wiki/Cgroups
|
||||
[3]: https://lkml.org/lkml/2006/10/20/251
|
||||
[4]: https://linuxcontainers.org
|
||||
[5]: https://coreos.com/
|
||||
[6]: http://flathub.org
|
||||
[7]: http://flatpak.org
|
||||
[8]: http://kubernetes.io
|
||||
[9]: https://www.redhat.com/sysadmin/learn-openshift-minishift
|
||||
[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
|
||||
[11]: http://slackermedia.info
|
||||
[12]: https://opensource.com/article/17/6/qtractor-audio
|
||||
[13]: http://ubuntustudio.org
|
||||
[14]: https://opensource.com/life/16/3/make-music-raspberry-pi-milkytracker
|
||||
[15]: http://fsf.org
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The lifecycle of Linux kernel testing)
|
||||
[#]: via: (https://opensource.com/article/19/8/linux-kernel-testing)
|
||||
[#]: author: (Major Hayden https://opensource.com/users/mhayden)
|
||||
|
||||
The lifecycle of Linux kernel testing
|
||||
======
|
||||
The Continuous Kernel Integration (CKI) project aims to prevent bugs
|
||||
from entering the Linux kernel.
|
||||
![arrows cycle symbol for failing faster][1]
|
||||
|
||||
In _[Continuous integration testing for the Linux kernel][2]_, I wrote about the [Continuous Kernel Integration][3] (CKI) project and its mission to change how kernel developers and maintainers work. This article is a deep dive into some of the more technical aspects of the project and how all the pieces fit together.
|
||||
|
||||
### It all starts with a change
|
||||
|
||||
Every exciting feature, improvement, and bug in the kernel starts with a change proposed by a developer. These changes appear on myriad mailing lists for different kernel repositories. Some repositories focus on certain subsystems in the kernel, such as storage or networking, while others focus on broad aspects of the kernel. The CKI project springs into action when developers propose a change, or patchset, to the kernel or when a maintainer makes changes in the repository itself.
|
||||
|
||||
The CKI project maintains triggers that monitor these patchsets and take action. Software projects such as [Patchwork][4] make this process much easier by collating multi-patch contributions into a single patch series. This series travels as a unit through the CKI system and allows for publishing a single report on the series.
|
||||
|
||||
Other triggers watch the repository for changes. This occurs when kernel maintainers merge patchsets, revert patches, or create new tags. Testing these critical changes ensures that developers always have a solid baseline to use as a foundation for writing new patches.
|
||||
|
||||
All of these changes make their way into a GitLab pipeline and pass through multiple stages and multiple systems.
|
||||
|
||||
### Prepare the build
|
||||
|
||||
Everything starts with getting the source ready for compile time. This requires cloning the repository, applying the patchset proposed by the developer, and generating a kernel config file. These config files have thousands of options that turn features on or off, and config files differ considerably between system architectures. For example, a fairly standard x86_64 system may have a ton of options available in its config file, but an s390x system (IBM zSeries mainframes) likely has far fewer options. Some options might make sense on that mainframe but have no purpose on a consumer laptop.
|
||||
|
||||
The kernel moves forward and transforms into a source artifact. The artifact contains the entire repository, with patches applied, and all kernel configuration files required for compiling. Upstream kernels move on as a tarball, while Red Hat kernels become a source RPM for the next step.
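The pipeline’s own scripts aren’t shown here, but a rough sketch of this preparation stage (the repository URL is real; the patchset file and artifact names are hypothetical) might look like:

```
# Clone the kernel, apply the proposed patch series, and generate a config.
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
git am ../proposed-patchset.mbox   # apply the developer's patches
make defconfig                     # generate a baseline config for this architecture
tar -czf ../linux-source-artifact.tar.gz .   # bundle source plus config for the next stage
```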
|
||||
|
||||
### Piles of compiles
|
||||
|
||||
Compiling the kernel turns the source code into something that a computer can boot up and use. The config file describes what to build, scripts in the kernel describe how to build it, and tools on the system (like GCC and glibc) do the building. This process takes a while to complete, but the CKI project needs it done quickly for four architectures: aarch64 (64-bit ARM), ppc64le (POWER), s390x (IBM zSeries), and x86_64. It's important that we compile kernels quickly so that we keep our backlog manageable and developers receive prompt feedback.
|
||||
|
||||
Adding more CPUs provides plenty of speed improvements, but every system has its limits. The CKI project compiles kernels within containers in an OpenShift deployment; although OpenShift allows for tons of scalability, the deployment still has a finite number of CPUs available. The CKI team allocates 20 virtual CPUs for compiling each kernel. With four architectures involved, this balloons to 80 CPUs!
|
||||
|
||||
Another speed increase comes from a tool called [ccache][5]. Kernel development moves quickly, but a large amount of the kernel remains unchanged even between multiple releases. The ccache tool caches the built objects (small pieces of the overall kernel) during the compile on a disk. When another kernel compile comes along later, ccache looks for unchanged pieces of the kernel that it saw before. Ccache pulls the cached object from the disk and reuses it. This allows for faster compiles and lower overall CPU usage. Kernels that took 20 minutes to compile now race to the finish line in just a few minutes.
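As a minimal sketch of how ccache can wrap a kernel compile (assuming ccache and the usual build tools are installed; the CKI pipeline’s actual invocation differs):

```
# Route the compiler through ccache; repeated builds reuse cached objects.
make CC="ccache gcc" -j"$(nproc)" bzImage modules
ccache -s   # print cache statistics: hits, misses, cache size
```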
|
||||
|
||||
### Testing time
|
||||
|
||||
The kernel moves onto its last step: testing on real hardware. Each kernel boots up on its native architecture using Beaker, and myriad tests begin poking it to find problems. Some tests look for simple problems, such as issues with containers or error messages on boot-up. Other tests dive deep into various kernel subsystems to find regressions in system calls, memory allocation, and threading.
|
||||
|
||||
Large testing frameworks, such as the [Linux Test Project][6] (LTP), contain tons of tests that look for troublesome regressions in the kernel. Some of these regressions could roll back critical security fixes, and there are tests to ensure those improvements remain in the kernel.
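You can try LTP outside the CKI pipeline, too. A minimal sketch (LTP installs to /opt/ltp by default; see the project’s README for current build instructions):

```
# Build and install the Linux Test Project, then run the syscalls suite.
git clone https://github.com/linux-test-project/ltp.git
cd ltp
make autotools && ./configure && make -j"$(nproc)"
sudo make install
sudo /opt/ltp/runltp -f syscalls   # run the system-call regression tests
```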
|
||||
|
||||
One critical step remains when tests finish: reporting. Kernel developers and maintainers need a concise report that tells them exactly what worked, what did not work, and how to get more information. Each CKI report contains details about the source code used, the compile parameters, and the testing output. That information helps developers know where to begin looking to fix an issue. Also, it helps maintainers know when a patchset needs to be held for another look before a bug makes its way into their kernel repository.
|
||||
|
||||
### Summary
|
||||
|
||||
The CKI project team strives to prevent bugs from entering the Linux kernel by providing timely, automated feedback to kernel developers and maintainers. This work makes their job easier by finding the low-hanging fruit that leads to kernel bugs, security issues, and performance problems.
|
||||
|
||||
* * *
|
||||
|
||||
_To learn more, you can attend the [CKI Hackfest][7] on September 12-13 following the [Linux Plumbers Conference][8] September 9-11 in Lisbon, Portugal._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/linux-kernel-testing
|
||||
|
||||
作者:[Major Hayden][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mhayden
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
|
||||
[2]: https://opensource.com/article/19/6/continuous-kernel-integration-linux
|
||||
[3]: https://cki-project.org/
|
||||
[4]: https://github.com/getpatchwork/patchwork
|
||||
[5]: https://ccache.dev/
|
||||
[6]: https://linux-test-project.github.io
|
||||
[7]: https://cki-project.org/posts/hackfest-agenda/
|
||||
[8]: https://www.linuxplumbersconf.org/
|
@ -0,0 +1,225 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to compile a Linux kernel in the 21st century)
|
||||
[#]: via: (https://opensource.com/article/19/8/linux-kernel-21st-century)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
How to compile a Linux kernel in the 21st century
|
||||
======
|
||||
You don't have to compile the Linux kernel but you can with this quick
|
||||
tutorial.
|
||||
![and old computer and a new computer, representing migration to new software or hardware][1]
|
||||
|
||||
In computing, a kernel is the low-level software that handles communication with hardware and general system coordination. Aside from some initial firmware built into your computer's motherboard, when you start your computer, the kernel is what provides awareness that it has a hard drive and a screen and a keyboard and a network card. It's also the kernel's job to ensure equal time (more or less) is given to each component so that your graphics and audio and filesystem and network all run smoothly, even though they're running concurrently.
|
||||
|
||||
The quest for hardware support, however, is ongoing, because the more hardware that gets released, the more stuff a kernel must adopt into its code to make the hardware work as expected. It's difficult to get accurate numbers, but the Linux kernel is certainly among the top kernels for hardware compatibility. Linux operates innumerable computers and mobile phones, embedded system on a chip (SoC) boards for hobbyist and industrial uses, RAID cards, sewing machines, and much more.
|
||||
|
||||
Back in the 20th century (and even in the early years of the 21st), it was not unreasonable for a Linux user to expect that when they purchased a very new piece of hardware, they would need to download the very latest kernel source code, compile it, and install it so that they could get support for the device. Lately, though, you'd be hard-pressed to find a Linux user who compiles their own kernel except for fun or profit by way of highly specialized custom hardware. It generally isn't required these days to compile the Linux kernel yourself.
|
||||
|
||||
Here are the reasons why, plus a quick tutorial on how to compile a kernel when you need to.
|
||||
|
||||
### Update your existing kernel
|
||||
|
||||
Whether you've got a brand new laptop featuring a fancy new graphics card or WiFi chipset or you've just brought home a new printer, your operating system (called either GNU+Linux or just Linux, which is also the name of the kernel) needs a driver to open communication channels to that new component (graphics card, WiFi chip, printer, or whatever). It can be deceptive, sometimes, when you plug in a new device and your computer _appears_ to acknowledge it. But don't let that fool you. Sometimes that _is_ all you need, but other times your OS is just using generic protocols to probe a device that's attached.
|
||||
|
||||
For instance, your computer may be able to identify your new network printer, but sometimes that's only because the network card in the printer is programmed to identify itself to a network so it can gain a DHCP address. It doesn't necessarily mean that your computer knows what instructions to send to the printer to produce a page of printed text. In fact, you might argue that the computer doesn't even really "know" that the device is a printer; it may only display that there's a device on the network at a specific address and the device identifies itself with the series of characters _p-r-i-n-t-e-r_. The conventions of human language are meaningless to a computer; what it needs is a driver.
|
||||
|
||||
Kernel developers, hardware manufacturers, support technicians, and hobbyists all know that new hardware is constantly being released. Many of them contribute drivers, submitted straight to the kernel development team for inclusion in Linux. For example, Nvidia graphic card drivers are often written into the [Nouveau][2] kernel module and, because Nvidia cards are common, the code is usually included in any kernel distributed for general use (such as the kernel you get when you download [Fedora][3] or [Ubuntu][4]). Where Nvidia is less common, for instance in embedded systems, the Nouveau module is usually excluded. Similar modules exist for many other devices: printers benefit from [Foomatic][5] and [CUPS][6], wireless cards have [b43, ath9k, wl][7] modules, and so on.
|
||||
|
||||
Distributions tend to include as much as they reasonably can in their Linux kernel builds because they want you to be able to attach a device and start using it immediately, with no driver installation required. For the most part, that's what happens, especially now that many device vendors are funding Linux driver development for the hardware they sell and submitting those drivers directly to the kernel team for general distribution.
|
||||
|
||||
Sometimes, however, you're running a kernel you installed six months ago with an exciting new device that just hit the stores a week ago. In that case, your kernel may not have a driver for that device. The good news is that very often, a driver for that device may exist in a very recent edition of the kernel, meaning that all you have to do is update what you're running.
|
||||
|
||||
Generally, this is done through a package manager. For instance, on RHEL, CentOS, and Fedora:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo dnf update kernel`
|
||||
```
|
||||
|
||||
On Debian and Ubuntu, first get your current kernel version:
|
||||
|
||||
|
||||
```
|
||||
$ uname -r
|
||||
4.4.186
|
||||
```
|
||||
|
||||
Search for newer versions:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
$ sudo apt search linux-image
|
||||
```
|
||||
|
||||
Install the latest version you find. In this example, the latest available is 5.2.4:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install linux-image-5.2.4`
|
||||
```
|
||||
|
||||
After a kernel upgrade, you must [reboot][8] (unless you're using kpatch or kgraft). Then, if the device driver you need is in the latest kernel, your hardware will work as expected.
|
||||
|
||||
### Install a kernel module
|
||||
|
||||
Sometimes a distribution doesn't expect that its users often use a device (or at least not enough that the device driver needs to be in the Linux kernel). Linux takes a modular approach to drivers, so distributions can ship separate driver packages that can be loaded by the kernel even though the driver isn't compiled into the kernel itself. This is useful, although it can get complicated when a driver isn't included in a kernel but is needed during boot, or when the kernel gets updated out from under the modular driver. The first problem is solved with an **initrd** (initial RAM disk) and is out of scope for this article, and the second is solved by a system called **kmod**.
|
||||
|
||||
The kmod system ensures that when a kernel is updated, all modular drivers installed alongside it are also updated. If you install a driver manually, you miss out on the automation that kmod provides, so you should opt for a kmod package whenever it is available. For instance, while Nvidia drivers are built into the kernel as the Nouveau driver, the official Nvidia drivers are distributed only by Nvidia. You can install Nvidia-branded drivers manually by going to the website, downloading the **.run** file, and running the shell script it provides, but you must repeat that same process after you install a new kernel, because nothing tells your package manager that you manually installed a kernel driver. Because Nvidia drives your graphics, updating the Nvidia driver manually usually means you have to perform the update from a terminal, because you have no graphics without a functional graphics driver.
|
||||
|
||||
![Nvidia configuration application][9]
|
||||
|
||||
However, if you install the Nvidia drivers as a kmod package, updating your kernel also updates your Nvidia driver. On Fedora and related:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo dnf install kmod-nvidia`
|
||||
```
|
||||
|
||||
On Debian and related:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
$ sudo apt install nvidia-kernel-common nvidia-kernel-dkms nvidia-glx nvidia-xconfig nvidia-settings nvidia-vdpau-driver vdpau-va-driver
|
||||
```
|
||||
|
||||
This is only an example, but if you're installing Nvidia drivers in real life, you must also blacklist the Nouveau driver. See your distribution's documentation for the best steps.
|
||||
|
||||
### Download and install a driver
|
||||
|
||||
Not everything is included in the kernel, and not everything _else_ is available as a kernel module. In some cases, you have to download a special driver written and bundled by the hardware vendor, and other times, you have the driver but not the frontend to configure driver options.
|
||||
|
||||
Two common examples are HP printers and [Wacom][10] illustration tablets. If you get an HP printer, you probably have generic drivers that can communicate with your printer. You might even be able to print. But the generic driver may not be able to provide specialized options specific to your model, such as double-sided printing, collation, paper tray choices, and so on. [HPLIP][11] (the HP Linux Imaging and Printing system) provides options to manage jobs, adjust printing options, select paper trays where applicable, and so on.
|
||||
|
||||
HPLIP is usually bundled in package managers; just search for "hplip."
|
||||
|
||||
![HPLIP in action][12]
|
||||
|
||||
Similarly, drivers for Wacom tablets, the leading illustration tablet for digital artists, are usually included in your kernel, but options to fine-tune settings, such as pressure sensitivity and button functionality, are only accessible through the graphical control panel included by default with GNOME but installable as the extra package **kde-config-tablet** on KDE.
|
||||
|
||||
There are likely some edge cases that don't have drivers in the kernel but offer kmod versions of driver modules as an RPM or DEB file that you can download and install through your package manager.
|
||||
|
||||
### Patching and compiling your own kernel
|
||||
|
||||
Even in the futuristic utopia that is the 21st century, there are vendors that don't understand open source enough to provide installable drivers. Sometimes, such companies provide source code for a driver but expect you to download the code, patch a kernel, compile, and install manually.
|
||||
|
||||
This kind of distribution model has the same disadvantages as installing packaged drivers outside of the kmod system: an update to your kernel breaks the driver because it must be re-integrated into your kernel manually each time the kernel is swapped out for a new one.
|
||||
|
||||
This has become rare, happily, because the Linux kernel team has done an excellent job of pleading loudly for companies to communicate with them, and because companies are finally accepting that open source isn't going away any time soon. But there are still novelty or hyper-specialized devices out there that provide only kernel patches.
|
||||
|
||||
Officially, there are distribution-specific preferences for how you should compile a kernel to keep your package manager involved in upgrading such a vital part of your system. There are too many package managers to cover each; as an example, here is what happens behind the scenes when you use tools like **rpmdev** on Fedora or **build-essential** and **devscripts** on Debian.
|
||||
|
||||
First, as usual, find out which kernel version you're running:
|
||||
|
||||
|
||||
```
|
||||
`$ uname -r`
|
||||
```
|
||||
|
||||
In most cases, it's safe to upgrade your kernel if you haven't already. After all, it's possible that your problem will be solved in the latest release. If you tried that and it didn't work, then you should download the source code of the kernel you are running. Most distributions provide a special command for that, but to do it manually, you can find the source code on [kernel.org][13].
|
||||
|
||||
You also must download whatever patch you need for your kernel. Sometimes, these patches are specific to the kernel release, so choose carefully.
|
||||
|
||||
It's traditional, or at least it was back when people regularly compiled their own kernels, to place the source code and patches in **/usr/src/linux**.
|
||||
|
||||
Unarchive the kernel source and the patch files as needed:
|
||||
|
||||
|
||||
```
|
||||
$ cd /usr/src/linux
$ bzip2 --decompress linux-5.2.4.tar.bz2
$ tar -xf linux-5.2.4.tar
$ cd linux-5.2.4
$ bzip2 -d ../patch*bz2
|
||||
```
|
||||
|
||||
The patch file may have instructions on how to do the patch, but often they're designed to be executed from the top level of your tree:
|
||||
|
||||
|
||||
```
|
||||
`$ patch -p1 < patch*example.patch`
|
||||
```
|
||||
|
||||
Once the kernel code is patched, you can use your old configuration to prepare the patched kernel config:
|
||||
|
||||
|
||||
```
|
||||
`$ make oldconfig`
|
||||
```
|
||||
|
||||
The **make oldconfig** command serves two purposes: it inherits your current kernel's configuration, and it allows you to configure new options introduced by the patch.
|
||||
|
||||
You may need to run the **make menuconfig** command, which launches an ncurses-based, menu-driven list of possible options for your new kernel. The menu can be overwhelming, but since it starts with your old config as a foundation, you can look through the menu and disable modules for hardware that you know you do not have and do not anticipate needing. Alternately, if you know that you have some piece of hardware and see it is not included in your current configuration, you may choose to build it, either as a module or directly into the kernel. In theory, this isn't necessary because presumably, your current kernel was treating you well but for the missing patch, and probably the patch you applied has activated all the necessary options required by whatever device prompted you to patch your kernel in the first place.
|
||||
|
||||
Next, compile the kernel and its modules:
|
||||
|
||||
|
||||
```
|
||||
$ make bzImage
|
||||
$ make modules
|
||||
```
|
||||
|
||||
This leaves you with a file named **vmlinuz**, which is a compressed version of your bootable kernel. Save your old version and place the new one in your **/boot** directory:
|
||||
|
||||
|
||||
```
|
||||
$ sudo mv /boot/vmlinuz /boot/vmlinuz.nopatch
|
||||
$ sudo cat arch/x86/boot/bzImage > /boot/vmlinuz
|
||||
$ sudo mv /boot/System.map /boot/System.map.stock
|
||||
$ sudo cp System.map /boot/System.map
|
||||
```
|
||||
|
||||
So far, you've patched and built a kernel and its modules, you've installed the kernel, but you haven't installed any modules. That's the final build step:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo make modules_install`
|
||||
```
|
||||
|
||||
The new kernel is in place, and its modules are installed.
|
||||
|
||||
The final step is to update your bootloader so that the part of your computer that loads before the kernel knows where to find Linux. The GRUB bootloader makes this process relatively simple:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg`
|
||||
```
|
||||
|
||||
### Real-world compiling
|
||||
|
||||
Of course, nobody runs those manual commands now. Instead, refer to your distribution for instructions on modifying a kernel using the developer toolset that your distribution's maintainers use. This toolset will probably create a new installable package with all the patches incorporated, alert the package manager of the upgrade, and update your bootloader for you.
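The kernel's own build system also ships packaging targets that handle much of this for you. As a sketch (assuming a configured source tree and the relevant packaging tools installed; target availability varies by kernel version):

```
# Build an installable distribution package instead of copying files by hand.
make olddefconfig
make -j"$(nproc)" deb-pkg       # Debian/Ubuntu: produces .deb packages
# make -j"$(nproc)" binrpm-pkg  # Fedora/RHEL: produces .rpm packages
```

The resulting package installs through apt or dnf, which keeps the package manager aware of your custom kernel.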
|
||||
|
||||
### Kernels
|
||||
|
||||
Operating systems and kernels are mysterious things, but it doesn't take much to understand what components they're built upon. The next time you get a piece of tech that appears to not work on Linux, take a deep breath, investigate driver availability, and go with the path of least resistance. Linux is easier than ever—and that includes the kernel.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/linux-kernel-21st-century
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
|
||||
[2]: https://nouveau.freedesktop.org/wiki/
|
||||
[3]: http://fedoraproject.org
|
||||
[4]: http://ubuntu.com
|
||||
[5]: https://wiki.linuxfoundation.org/openprinting/database/foomatic
|
||||
[6]: https://www.cups.org/
|
||||
[7]: https://wireless.wiki.kernel.org/en/users/drivers
|
||||
[8]: https://opensource.com/article/19/7/reboot-linux
|
||||
[9]: https://opensource.com/sites/default/files/uploads/nvidia.jpg (Nvidia configuration application)
|
||||
[10]: https://linuxwacom.github.io
|
||||
[11]: https://developers.hp.com/hp-linux-imaging-and-printing
|
||||
[12]: https://opensource.com/sites/default/files/uploads/hplip.jpg (HPLIP in action)
|
||||
[13]: https://www.kernel.org/
|
@ -0,0 +1,196 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Install Ansible (Automation Tool) on Debian 10 (Buster))
|
||||
[#]: via: (https://www.linuxtechi.com/install-ansible-automation-tool-debian10/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Install Ansible (Automation Tool) on Debian 10 (Buster)
|
||||
======
|
||||
|
||||
Nowadays, automation is a hot topic in IT, and organizations are adopting automation tools such as **Puppet**, **Ansible**, **Chef**, **CFEngine**, **Foreman** and **Katello**. Among these tools, Ansible is the first choice of many IT organizations for managing UNIX-like and Linux systems. In this article we will demonstrate how to install and use the Ansible tool on a Debian 10 server.
|
||||
|
||||
[![Ansible-Install-Debian10][1]][2]
|
||||
|
||||
My Lab details:
|
||||
|
||||
* Debian 10 – Ansible Server/ controller Node – 192.168.1.14
|
||||
* CentOS 7 – Ansible Host (Web Server) – 192.168.1.15
|
||||
* CentOS 7 – Ansible Host (DB Server) – 192.168.1.17
|
||||
|
||||
|
||||
|
||||
We will also demonstrate how Linux servers can be managed from the Ansible server.
|
||||
|
||||
### Ansible Installation on Debian 10 Server
|
||||
|
||||
I am assuming your Debian 10 system has a user with either root privileges or sudo rights. In my setup, I have a local user named “pkumar” with sudo rights.
|
||||
|
||||
Ansible 2.7 packages are available in the default Debian 10 repositories. Run the following commands from the command line to install Ansible:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
root@linuxtechi:~$ sudo apt install ansible -y
|
||||
```
|
||||
|
||||
Run the command below to verify the Ansible version:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo ansible --version
|
||||
```
|
||||
|
||||
![ansible-version][1]
|
||||
|
||||
To install the latest version, Ansible 2.8, we must first configure the Ansible repository.
|
||||
|
||||
Execute the following commands one after another:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ echo "deb http://ppa.launchpad.net/ansible/ansible/ubuntu bionic main" | sudo tee -a /etc/apt/sources.list
|
||||
root@linuxtechi:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
|
||||
root@linuxtechi:~$ sudo apt update
|
||||
root@linuxtechi:~$ sudo apt install ansible -y
|
||||
root@linuxtechi:~$ sudo ansible --version
|
||||
```
|
||||
|
||||
![latest-ansible-version][1]
|
||||
|
||||
### Managing Linux Servers using Ansible
|
||||
|
||||
Follow the steps below to manage Linux servers from the Ansible controller node.
|
||||
|
||||
### Step:1) Exchange the SSH keys between Ansible Server and its hosts
|
||||
|
||||
Generate an SSH key pair on the Ansible server and copy the public key to the Ansible hosts:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo -i
|
||||
root@linuxtechi:~# ssh-keygen
|
||||
root@linuxtechi:~# ssh-copy-id root@192.168.1.15
root@linuxtechi:~# ssh-copy-id root@192.168.1.17
|
||||
```
|
||||
|
||||
### Step:2) Create Ansible Hosts inventory file
|
||||
|
||||
When Ansible is installed, its default inventory file (/etc/ansible/hosts) is created automatically; in this file we can list the Ansible hosts (clients). We can also create our own Ansible host inventory file in our home directory:
|
||||
|
||||
Read More on : [**How to Manage Ansible Static and Dynamic Host Inventory**][3]
|
||||
|
||||
Run the command below to create an Ansible hosts inventory in our home directory:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ vi $HOME/hosts
|
||||
[Web]
|
||||
192.168.1.15
|
||||
|
||||
[DB]
|
||||
192.168.1.17
|
||||
```
|
||||
|
||||
Save and exit the file.
|
||||
|
||||
**Note:** In the hosts file above, we could also use hostnames or FQDNs, but we would have to make sure the Ansible hosts are reachable and accessible by hostname or FQDN.
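For example, a name-based inventory (the hostnames below are hypothetical) would look like this:

```
[Web]
web01.example.com

[DB]
db01.example.com
```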
|
||||
|
||||
### Step:3) Test and Use default ansible modules
|
||||
|
||||
Ansible comes with a lot of built-in modules that can be used with the ansible command; examples are shown below.
|
||||
|
||||
Syntax:
|
||||
|
||||
# ansible -i <host_file> -m <module> <host>
|
||||
|
||||
Where:
|
||||
|
||||
* **-i ~/hosts**: the file containing the list of Ansible hosts
* **-m**: the Ansible module to run, such as ping or shell
* **<host>**: the Ansible host (or group) on which to run the module
|
||||
|
||||
|
||||
|
||||
Verify connectivity to the hosts using the Ansible ping module:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping all
|
||||
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping Web
|
||||
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping DB
|
||||
```
|
||||
|
||||
The output of the above commands would look something like this:
|
||||
|
||||
![Ansible-ping-module-examples][1]
|
||||
|
||||
Run shell commands on Ansible hosts using the shell module.
|
||||
|
||||
**Syntax:** # ansible -i <hosts_file> -m shell -a <shell_commands> <host>
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo ansible -i ~/hosts -m shell -a "uptime" all
|
||||
192.168.1.17 | CHANGED | rc=0 >>
|
||||
01:48:34 up 1:07, 3 users, load average: 0.00, 0.01, 0.05
|
||||
|
||||
192.168.1.15 | CHANGED | rc=0 >>
|
||||
01:48:39 up 1:07, 3 users, load average: 0.00, 0.01, 0.04
|
||||
|
||||
root@linuxtechi:~$
|
||||
root@linuxtechi:~$ sudo ansible -i ~/hosts -m shell -a "uptime ; df -Th / ; uname -r" Web
|
||||
192.168.1.15 | CHANGED | rc=0 >>
|
||||
01:52:03 up 1:11, 3 users, load average: 0.12, 0.07, 0.06
|
||||
Filesystem Type Size Used Avail Use% Mounted on
|
||||
/dev/mapper/centos-root xfs 13G 1017M 12G 8% /
|
||||
3.10.0-327.el7.x86_64
|
||||
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
The output of the above commands confirms that we have successfully set up the Ansible controller node.
|
||||
|
||||
Let’s create a sample NGINX installation playbook. The playbook below will install NGINX on all servers in the Web host group; in my case, this group contains a single CentOS 7 machine.
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ vi nginx.yaml
|
||||
---
|
||||
- hosts: Web
|
||||
tasks:
|
||||
- name: Install latest version of nginx on CentOS 7 Server
|
||||
yum: name=nginx state=latest
|
||||
- name: start nginx
|
||||
service:
|
||||
name: nginx
|
||||
state: started
|
||||
```
|
||||
|
||||
Now execute the playbook using the following command:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo ansible-playbook -i ~/hosts nginx.yaml
|
||||
```
|
||||
|
||||
The output of the above command would look something like this:
|
||||
|
||||
![nginx-installation-playbook-debian10][1]
|
||||
|
||||
This confirms that the Ansible playbook executed successfully. That’s all for this article; please share your feedback and comments.
|
||||
|
||||
Read Also: [**How to Download and Use Ansible Galaxy Roles in Ansible Playbook**][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/install-ansible-automation-tool-debian10/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/08/Ansible-Install-Debian10.jpg
|
||||
[3]: https://www.linuxtechi.com/manage-ansible-static-and-dynamic-host-inventory/
|
||||
[4]: https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/
|
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 ops tasks to do with Ansible)
|
||||
[#]: via: (https://opensource.com/article/19/8/ops-tasks-ansible)
|
||||
[#]: author: (Mark Phillips https://opensource.com/users/markp)
|
||||
|
||||
5 ops tasks to do with Ansible
|
||||
======
|
||||
Less DevOps, more OpsDev.
|
||||
![gears and lightbulb to represent innovation][1]
|
||||
|
||||
In this DevOps world, it sometimes appears the Dev half gets all the limelight, with Ops the forgotten half in the relationship. It's almost as if the leading Dev tells the trailing Ops what to do, with almost everything "Ops" being whatever Dev says it should be. Ops, therefore, gets left behind, punted to the back, relegated to the bench.
|
||||
|
||||
I'd like to see more OpsDev happening. So let's look at a handful of things Ansible can help you do with your day-to-day Ops life.
|
||||
|
||||
![Job templates][2]
|
||||
|
||||
I've chosen to present these solutions within [Ansible Tower][3] because I think a user interface (UI) adds value to most of these tasks. If you want to emulate this, you can test it out in [AWX][4], the upstream open source version of Tower.
|
||||
|
||||
### Manage users
|
||||
|
||||
In a large-scale environment, your users would be centralised in a system like Active Directory or LDAP. But I bet there are still a whole load of environments with lots of static users in them, too. Ansible can help you centralise that decentralised problem. And _the community_ has already solved it for us. Meet the [Ansible Galaxy][5] role **[users][6]**.
|
||||
|
||||
What's clever about this role is it allows us to manage users via _data_; no changes to play logic are required.
|
||||
|
||||
![User data][7]
|
||||
|
||||
With simple data structures, we can add, remove and modify static users on a system. Very useful.
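If you want to experiment outside Tower, the same idea can be tried ad hoc with Ansible's built-in **user** module. A sketch, where the user name and group are made up:

```
# Ensure a (hypothetical) user exists on every host in the inventory.
ansible all -b -m user -a "name=jdoe groups=wheel state=present"

# Remove that user again, including the home directory.
ansible all -b -m user -a "name=jdoe state=absent remove=yes"
```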
|
||||
|
||||
### Manage sudo
|
||||
|
||||
Privilege escalation comes [in many forms][8], but one of the most popular is [sudo][9]. It's relatively easy to manage sudo through discrete files per user, group, etc. But some folk get nervous about giving privilege escalation willy-nilly and prefer it to be time-bound. So [here's a take on that][10], using the simple **at** command to put a time limit on the granted access.
|
||||
|
||||
![Managing sudo][11]
|
||||
|
||||
### Manage services
|
||||
|
||||
Wouldn't it be great to give a [menu][12] to an entry-level ops team so they could just restart certain services? Voila!
|
||||
|
||||
![Managing services][13]
|
||||
|
||||
### Manage disk space
|
||||
|
||||
Here's [a simple role][14] that can be used to look for files larger than size _N_ in a particular directory. Doing this in Tower, we have the bonus of enabling [callbacks][15]. Imagine your monitoring solution spotting a filesystem going over X% full and triggering a job in Tower to go find out what files are the cause.
|
||||
|
||||
![Managing disk space][16]
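An ad-hoc approximation of what such a role does (the path and size threshold are arbitrary examples, and this assumes your inventory is already configured):

```
# Report files over 100MB under /var on every host.
ansible all -b -m shell -a "find /var -xdev -type f -size +100M -exec ls -lh {} +"
```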
|
||||
|
||||
### Debug a system performance problem
|
||||
|
||||
[This role][17] is fairly simple: it runs some commands and prints the output. The details are printed at the end of the run for you, sysadmin, to cast your skilled eyes over. Bonus homework: use [regexes][18] to find certain conditions in the output (a CPU hog over 80%, say).
|
||||
|
||||
![Debugging system performance][19]
|
||||
|
||||
### Summary
|
||||
|
||||
I've recorded a short video of these five tasks in action. You can find all [the code on GitHub][20] too!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/ops-tasks-ansible
|
||||
|
||||
作者:[Mark Phillips][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/markp
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/00_templates.png (Job templates)
|
||||
[3]: https://www.ansible.com/products/tower
|
||||
[4]: https://github.com/ansible/awx
|
||||
[5]: https://galaxy.ansible.com
|
||||
[6]: https://galaxy.ansible.com/singleplatform-eng/users
|
||||
[7]: https://opensource.com/sites/default/files/uploads/01_users_data.png (User data)
|
||||
[8]: https://docs.ansible.com/ansible/latest/plugins/become.html
|
||||
[9]: https://www.sudo.ws/intro.html
|
||||
[10]: https://github.com/phips/ansible-demos/tree/master/roles/sudo
|
||||
[11]: https://opensource.com/sites/default/files/uploads/02_sudo.png (Managing sudo)
|
||||
[12]: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#surveys
|
||||
[13]: https://opensource.com/sites/default/files/uploads/03_services.png (Managing services)
|
||||
[14]: https://github.com/phips/ansible-demos/tree/master/roles/disk
|
||||
[15]: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks
|
||||
[16]: https://opensource.com/sites/default/files/uploads/04_diskspace.png (Managing disk space)
|
||||
[17]: https://github.com/phips/ansible-demos/tree/master/roles/gather_debug
|
||||
[18]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#regular-expression-filters
|
||||
[19]: https://opensource.com/sites/default/files/uploads/05_debug.png (Debugging system performance)
|
||||
[20]: https://github.com/phips/ansible-demos
|
@ -0,0 +1,238 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How RPM packages are made: the source RPM)
|
||||
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/)
|
||||
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)
|
||||
|
||||
How RPM packages are made: the source RPM
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
In a [previous post, we looked at what RPM packages are][2]. They are archives that contain files and metadata. This metadata tells RPM where to create or remove files from when an RPM is installed or uninstalled. The metadata also contains information on “dependencies”, which you will remember from the previous post, can either be “runtime” or “build time”.
|
||||
|
||||
As an example, we will look at _fpaste_. You can download the RPM using _dnf_. This will download the latest version of _fpaste_ that is available in the Fedora repositories. On Fedora 30, this is currently 0.3.9.2:
|
||||
|
||||
```
|
||||
$ dnf download fpaste
|
||||
|
||||
...
|
||||
fpaste-0.3.9.2-2.fc30.noarch.rpm
|
||||
```
|
||||
|
||||
Since this is the built RPM, it contains only files needed to use _fpaste_:
|
||||
|
||||
```
|
||||
$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.noarch.rpm
|
||||
/usr/bin/fpaste
|
||||
/usr/share/doc/fpaste
|
||||
/usr/share/doc/fpaste/README.rst
|
||||
/usr/share/doc/fpaste/TODO
|
||||
/usr/share/licenses/fpaste
|
||||
/usr/share/licenses/fpaste/COPYING
|
||||
/usr/share/man/man1/fpaste.1.gz
|
||||
```
|
||||
|
||||
### Source RPMs
|
||||
|
||||
The next link in the chain is the source RPM. All software in Fedora must be built from its source code. We do not include pre-built binaries. So, for an RPM file to be made, RPM (the tool) needs to be:
|
||||
|
||||
* given the files that have to be installed,
|
||||
* told how to generate these files, if they are to be compiled, for example,
|
||||
* told where these files must be installed,
|
||||
* told what other dependencies this particular software needs to work properly.
|
||||
|
||||
|
||||
|
||||
The source RPM holds all of this information. Source RPMs are similar archives to RPM, but as the name suggests, instead of holding the built binary files, they contain the source files for a piece of software. Let’s download the source RPM for _fpaste_:
|
||||
|
||||
```
|
||||
$ dnf download fpaste --source
|
||||
...
|
||||
fpaste-0.3.9.2-2.fc30.src.rpm
|
||||
```
|
||||
|
||||
Notice how the file ends with “src.rpm”. All RPMs are built from source RPMs. You can easily check what source RPM a “binary” RPM comes from using dnf too:
|
||||
|
||||
```
|
||||
$ dnf repoquery --qf "%{SOURCERPM}" fpaste
|
||||
fpaste-0.3.9.2-2.fc30.src.rpm
|
||||
```
|
||||
|
||||
Also, since this is the source RPM, it does not contain built files. Instead, it contains the sources and instructions on how to build the RPM from them:
|
||||
|
||||
```
|
||||
$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.src.rpm
|
||||
fpaste-0.3.9.2.tar.gz
|
||||
fpaste.spec
|
||||
```
|
||||
|
||||
Here, the first file is simply the source code for _fpaste_. The second is the “spec” file. The spec file is the recipe that tells RPM (the tool) how to create the RPM (the archive) using the sources contained in the source RPM—all the information that RPM (the tool) needs to build RPMs (the archives) is contained in spec files. When we package maintainers add software to Fedora, most of our time is spent writing and perfecting the individual spec files. When a software package needs an update, we go back and tweak the spec file. You can see the spec files for ALL packages in Fedora at our source repository at <https://src.fedoraproject.org/browse/projects/>.
|
||||
|
||||
Note that one source RPM may contain the instructions to build multiple RPMs. _fpaste_ is a very simple piece of software, where one source RPM generates one “binary” RPM. Python, on the other hand, is more complex. While there is only one source RPM, it generates multiple binary RPMs:
|
||||
|
||||
```
|
||||
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3
|
||||
python3-3.7.3-1.fc30.src.rpm
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
|
||||
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-devel
|
||||
python3-3.7.3-1.fc30.src.rpm
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
|
||||
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-libs
|
||||
python3-3.7.3-1.fc30.src.rpm
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
|
||||
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-idle
|
||||
python3-3.7.3-1.fc30.src.rpm
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
|
||||
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-tkinter
|
||||
python3-3.7.3-1.fc30.src.rpm
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
```
|
||||
|
||||
In RPM jargon, “python3” is the “main package”, and so the spec file will be called “python3.spec”. All the other packages are “sub-packages”. You can download the source RPM for python3 and see what’s in it too. (Hint: patches are also part of the source code):
|
||||
|
||||
```
|
||||
$ dnf download --source python3
|
||||
python3-3.7.4-1.fc30.src.rpm
|
||||
|
||||
$ rpm -qpl ./python3-3.7.4-1.fc30.src.rpm
|
||||
00001-rpath.patch
|
||||
00102-lib64.patch
|
||||
00111-no-static-lib.patch
|
||||
00155-avoid-ctypes-thunks.patch
|
||||
00170-gc-assertions.patch
|
||||
00178-dont-duplicate-flags-in-sysconfig.patch
|
||||
00189-use-rpm-wheels.patch
|
||||
00205-make-libpl-respect-lib64.patch
|
||||
00251-change-user-install-location.patch
|
||||
00274-fix-arch-names.patch
|
||||
00316-mark-bdist_wininst-unsupported.patch
|
||||
Python-3.7.4.tar.xz
|
||||
check-pyc-timestamps.py
|
||||
idle3.appdata.xml
|
||||
idle3.desktop
|
||||
python3.spec
|
||||
```
|
||||
|
||||
### Building an RPM from a source RPM
|
||||
|
||||
Now that we have the source RPM, and know what’s in it, we can rebuild our RPM from it. Before we do so, though, we should set our system up to build RPMs. First, we install the required tools:
|
||||
|
||||
```
|
||||
$ sudo dnf install fedora-packager
|
||||
```
|
||||
|
||||
This will install the rpmbuild tool. rpmbuild requires a default layout so that it knows where each required component of the source rpm is. Let’s see what they are:
|
||||
|
||||
```
|
||||
# Where should the spec file go?
|
||||
$ rpm -E %{_specdir}
|
||||
/home/asinha/rpmbuild/SPECS
|
||||
|
||||
# Where should the sources go?
|
||||
$ rpm -E %{_sourcedir}
|
||||
/home/asinha/rpmbuild/SOURCES
|
||||
|
||||
# Where is temporary build directory?
|
||||
$ rpm -E %{_builddir}
|
||||
/home/asinha/rpmbuild/BUILD
|
||||
|
||||
# Where is the buildroot?
|
||||
$ rpm -E %{_buildrootdir}
|
||||
/home/asinha/rpmbuild/BUILDROOT
|
||||
|
||||
# Where will the source rpms be?
|
||||
$ rpm -E %{_srcrpmdir}
|
||||
/home/asinha/rpmbuild/SRPMS
|
||||
|
||||
# Where will the built rpms be?
|
||||
$ rpm -E %{_rpmdir}
|
||||
/home/asinha/rpmbuild/RPMS
|
||||
```
|
||||
|
||||
I have all of this set up on my system already:
|
||||
|
||||
```
|
||||
$ cd
|
||||
$ tree -L 1 rpmbuild/
|
||||
rpmbuild/
|
||||
├── BUILD
|
||||
├── BUILDROOT
|
||||
├── RPMS
|
||||
├── SOURCES
|
||||
├── SPECS
|
||||
└── SRPMS
|
||||
|
||||
6 directories, 0 files
|
||||
```
|
||||
|
||||
RPM provides a tool that sets it all up for you too:
|
||||
|
||||
```
|
||||
$ rpmdev-setuptree
|
||||
```
|
||||
|
||||
Then we ensure that we have all the build dependencies for _fpaste_ installed:
|
||||
|
||||
```
|
||||
$ sudo dnf builddep fpaste-0.3.9.2-3.fc30.src.rpm
|
||||
```
|
||||
|
||||
For _fpaste_ you only need Python, and that must already be installed on your system (dnf uses Python too). The builddep command can also be given a spec file instead of a source RPM. Read more in the man page:
|
||||
|
||||
```
|
||||
$ man dnf.plugin.builddep
|
||||
```
|
||||
|
||||
Now that we have all that we need, building an RPM from a source RPM is as simple as:
|
||||
|
||||
```
|
||||
$ rpmbuild --rebuild fpaste-0.3.9.2-3.fc30.src.rpm
|
||||
..
|
||||
..
|
||||
|
||||
$ tree ~/rpmbuild/RPMS/noarch/
|
||||
/home/asinha/rpmbuild/RPMS/noarch/
|
||||
└── fpaste-0.3.9.2-3.fc30.noarch.rpm
|
||||
|
||||
0 directories, 1 file
|
||||
```
|
||||
|
||||
rpmbuild will install the source RPM and build your RPM from it. You can now install the RPM and use it as you normally would, using dnf. Of course, as said before, if you want to change anything in the RPM, you must modify the spec file—we’ll cover spec files in the next post.
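As a sketch of that edit-and-rebuild workflow (file names match the fpaste example above; the edit itself is whatever change you need):

```
$ rpm -i fpaste-0.3.9.2-3.fc30.src.rpm       # unpack sources and spec into ~/rpmbuild
$ vi ~/rpmbuild/SPECS/fpaste.spec            # make your changes to the recipe
$ rpmbuild -ba ~/rpmbuild/SPECS/fpaste.spec  # -ba builds both the binary and source RPMs
```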
|
||||
|
||||
### Summary
|
||||
|
||||
To summarise this post in two short points:
|
||||
|
||||
* the RPMs we generally install to use software are “binary” RPMs that contain built versions of the software
|
||||
* these are built from source RPMs that include the source code and the spec file that are needed to generate the binary RPMs.
|
||||
|
||||
|
||||
|
||||
If you’d like to get started with building RPMs, and help the Fedora community maintain the massive amount of software we provide, you can start here: <https://fedoraproject.org/wiki/Join_the_package_collection_maintainers>
|
||||
|
||||
For any queries, post to the [Fedora developers mailing list][3]—we’re always happy to help!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/
|
||||
|
||||
作者:[Ankur Sinha "FranciscoD"][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/ankursinha/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/rpm-packages-explained/
|
||||
[3]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
|
@ -0,0 +1,138 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Introduction to the Linux chown command)
|
||||
[#]: via: (https://opensource.com/article/19/8/linux-chown-command)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
|
||||
Introduction to the Linux chown command
|
||||
======
|
||||
Learn how to change a file or directory's ownership with chown.
|
||||
![Hand putting a Linux file folder into a drawer][1]
|
||||
|
||||
Every file and directory on a Linux system is owned by someone, and the owner has complete control to change or delete the files they own. In addition to having an owning _user_, a file has an owning _group_.
|
||||
|
||||
You can view the ownership of a file using the **ls -l** command:
|
||||
|
||||
|
||||
```
|
||||
[pablo@workstation Downloads]$ ls -l
|
||||
total 2454732
|
||||
-rw-r--r--. 1 pablo pablo 1934753792 Jul 25 18:49 Fedora-Workstation-Live-x86_64-30-1.2.iso
|
||||
```
|
||||
|
||||
The third and fourth columns of the output are the owning user and group, which together are referred to as _ownership_. Both are **pablo** for the ISO file above.
|
||||
|
||||
The permission settings, set by the [**chmod** command][2], control who is allowed to perform read, write, or execute actions on a file. You can change the ownership (user, group, or both) with the **chown** command.
|
||||
|
||||
It is often necessary to change ownership. Files and directories can live a long time on a system, but users can come and go. Ownership may also need to change when files and directories are moved around the system or from one system to another.
|
||||
|
||||
The ownership of the files and directories in my home directory are my user and my primary group, represented in the form **user:group**. Suppose Susan is managing the Delta group, which needs to edit a file called **mynotes**. You can use the **chown** command to change the user to **susan** and the group to **delta**:
|
||||
|
||||
|
||||
```
|
||||
$ chown susan:delta mynotes
|
||||
ls -l
|
||||
-rw-rw-r--. 1 susan delta 0 Aug 1 12:04 mynotes
|
||||
```
|
||||
|
||||
Once the Delta group is finished with the file, it can be assigned back to me:
|
||||
|
||||
|
||||
```
|
||||
$ chown alan mynotes
|
||||
$ ls -l mynotes
|
||||
-rw-rw-r--. 1 alan delta 0 Aug 1 12:04 mynotes
|
||||
```
|
||||
|
||||
Both the user and group can be assigned back to me by appending a colon (**:**) to the user:
|
||||
|
||||
|
||||
```
|
||||
$ chown alan: mynotes
|
||||
$ ls -l mynotes
|
||||
-rw-rw-r--. 1 alan alan 0 Aug 1 12:04 mynotes
|
||||
```
|
||||
|
||||
By prepending the group with a colon, you can change just the group. Now members of the **gamma** group can edit the file:
|
||||
|
||||
|
||||
```
|
||||
$ chown :gamma mynotes
|
||||
$ ls -l
|
||||
-rw-rw-r--. 1 alan gamma 0 Aug 1 12:04 mynotes
|
||||
```
|
||||
|
||||
A few additional arguments to chown can be useful at both the command line and in a script. Just like many other Linux commands, chown has a recursive argument (**-R**) which tells the command to descend into the directory to operate on all files inside. Without the **-R** flag, you change the ownership of the folder only, leaving the files inside it unchanged. In this example, assume that the intent is to change the ownership of a directory and all its contents. Here I have added the **-v** (verbose) argument so that chown reports what it is doing:
|
||||
|
||||
|
||||
```
|
||||
$ ls -l . conf
|
||||
.:
|
||||
drwxrwxr-x 2 alan alan 4096 Aug 5 15:33 conf
|
||||
|
||||
conf:
|
||||
-rw-rw-r-- 1 alan alan 0 Aug 5 15:33 conf.xml
|
||||
|
||||
$ chown -vR susan:delta conf
|
||||
changed ownership of 'conf/conf.xml' from alan:alan to susan:delta
|
||||
changed ownership of 'conf' from alan:alan to susan:delta
|
||||
```
|
||||
|
||||
Depending on your role, you may need to use **sudo** to change ownership of a file.

You can use a reference file (**--reference=RFILE**) when changing the ownership of files to match a certain configuration, or when you don't know the ownership (as might be the case when running a script). You can duplicate the user and group of another file (**RFILE**, known as a reference file), for example, to undo the changes made above. Recall that a dot (**.**) refers to the present working directory.

```
$ chown -vR --reference=. conf
```

### Report Changes

Most commands have arguments for controlling their output. The most common is **-v** (**--verbose**) to enable verbose output, but chown also has a **-c** (**--changes**) argument to instruct chown to report only when a change is made. Chown still reports other things, such as when an operation is not permitted.

The argument **-f** (**--silent**, **--quiet**) is used to suppress most error messages. I will use **-f** and **-c** in the next section so that only actual changes are shown.

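
To see the difference, here is a hypothetical double run (the file starts out owned by **susan**); with **-c**, the second, no-op invocation prints nothing:

```
$ sudo chown -c alan mynotes
changed ownership of 'mynotes' from susan to alan
$ sudo chown -c alan mynotes
$
```
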
### Preserve Root

The root (**/**) of the Linux filesystem should be treated with great respect. If a mistake is made at this level, the consequences could leave a system completely useless. This is especially true when you are running a recursive command that makes any kind of change, or worse, deletes files. The chown command has an argument that can be used to protect and preserve the root. The argument is **--preserve-root**. If this argument is used with a recursive chown command on the root, nothing is done and a message appears instead.

```
$ chown -cfR --preserve-root alan /
chown: it is dangerous to operate recursively on '/'
chown: use --no-preserve-root to override this failsafe
```

The option has no effect when not used in conjunction with **--recursive** (**-R**). However, if the command is run by the root user without **-R**, the ownership of **/** itself will be changed, but not that of the files or directories within it.

```
$ chown -c --preserve-root alan /
chown: changing ownership of '/': Operation not permitted
[root@localhost /]# chown -c --preserve-root alan /
changed ownership of '/' from root to alan
```

### Ownership is security

File and directory ownership is part of good information security, so it's important to occasionally check and maintain file ownership to prevent unwanted access. The chown command is one of the most common and important in the set of Linux security commands.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/linux-chown-command

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/article/19/8/introduction-linux-chmod-command
@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using variables in Bash)
[#]: via: (https://opensource.com/article/19/8/using-variables-bash)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Using variables in Bash
======

Take advantage of Bash variables with our miniseries: Variables in shells.

![bash logo on green background][1]

In computer science (and casual computing), a variable is a location in memory that holds arbitrary information for later use. In other words, it’s a temporary storage container for you to put data into and get data out of. In the Bash shell, that data can be a word (a _string_, in computer lingo) or a number (an _integer_).

You may have never (knowingly) used a variable before on your computer, but you probably have used a variable in other areas of your life. When you say things like "give me that" or "look at this," you're using grammatical variables (you think of them as _pronouns_), because the meaning of "this" and "that" depends on whatever you're picturing in your mind, or whatever you're pointing to so your audience knows what you're referring to. When you do math, you use variables to stand in for an unknown value, even though you probably don't call it a variable.

Here's a quick and easy demonstration of a Bash variable you may not realize you use every day. The **PS1** variable holds information about how you want your terminal prompt to appear. For instance, you can set it to something very simple, like a percent sign (**%**), by redefining the **PS1** variable:

```
$ PS1="% "
%
```

This article addresses variables in a Bash shell running on Linux, BSD, Mac, or Cygwin. Users of Microsoft’s open source [PowerShell][2] should refer to my article about [variables in PowerShell][3].

### What are variables for?

Whether you need variables in Bash depends on what you do in a terminal. For some users, variables are an essential means of managing data, while for others they're minor and temporary conveniences, and for still others, they may as well not exist.

Ultimately, variables are a tool. You can use them when you find a use for them, or leave them alone in the comfort of knowing they're managed by your OS. Knowledge is power, though, and understanding how variables work in Bash can lead you to all kinds of unexpected creative problem-solving.

### How to set a variable

You don't need special permissions to create a variable. They're free to create, free to use, and generally harmless. In a Bash shell (on Linux and Mac), you can set them by defining a variable name, and then setting its value. The following example creates a new variable called **FOO** and sets the value to the string **/home/seth/Documents**:

```
$ declare FOO="/home/seth/Documents"
```

Success is eerily silent, so you may not feel confident that your variable got set. You can see the results for yourself with the **echo** command, recalling your variable by prepending it with a dollar sign (**$**). To ensure that the variable is read exactly as you defined it, you can also wrap it in braces and quotes. Doing this preserves any special characters that might appear in the variable; in this example, that doesn't apply, but it's still a good habit to form:

```
$ echo "${FOO}"
/home/seth/Documents
```

Setting variables can be a common thing for people who use the shell often, so the process has become somewhat informal. When a string is followed by an equal sign (**=**) and a value, Bash quietly assumes that you're setting a variable, making the **declare** keyword unnecessary:

```
$ FOO="/home/seth/Documents"
```

Variables usually are meant to convey information from one system to another. In this simple example, your variable is not very useful, but it can still communicate information. For instance, because the content of the **FOO** variable is a [file path][4], you can use the variable as a shortcut to the **~/Documents** directory:

```
$ pwd
/home/seth
$ cd "${FOO}"
$ pwd
/home/seth/Documents
```

Variable names can be any non-reserved string of letters, integers, and underscores (although a name cannot begin with an integer). They don't have to be capitalized, but they often are so that they're easy to identify as variables.

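
Here is a quick sketch of what the shell accepts and rejects as a name (the exact error text may vary between Bash versions):

```
$ my_var_2="ok"
$ echo $my_var_2
ok
$ 2var="nope"
bash: 2var=nope: command not found
```
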
### How to clear a variable

You can clear a variable with the **unset** command:

```
$ unset FOO
$ echo $FOO
```

In practice, this action is not usually necessary. Variables are relatively "cheap," so you can create them and then forget them when you don't need them anymore. However, there may be times you want to ensure a variable is empty to avoid conveying incorrect information to another process that might read the variable.

### Create a new variable with collision protection

Sometimes, you may have reason to believe a variable was already set by you or a process. If you would rather not override it, there's a special syntax to set a variable to its existing value unless its existing value is empty.

For this example, assume that **FOO** is set to **/home/seth/Documents**:

```
$ FOO=${FOO:-"bar"}
$ echo $FOO
/home/seth/Documents
```

The colon-dash (**:-**) notation tells Bash to use the variable's existing value when it is set and non-empty, and to fall back to the default you provide (here, **bar**) otherwise. To see this process work the other way, clear the variable, and try again:

```
$ unset FOO
$ echo $FOO

$ FOO=${FOO:-"bar"}
$ echo $FOO
bar
```

### Pass variables to a child process

When you create a variable, you are creating what is called a _local variable_. This means that the variable is known to your current shell and only your current shell.

This setup is an intentional limitation of a variable's _scope_. Variables, by design, tend to default to being locally available only, in an attempt to keep information sharing on a need-to-know basis. If you foolishly create a variable containing an important password in clear text, for example, it's nice to know that your system won't allow a remote shell or rogue daemon (or anything else outside the one session that created the variable) to access that password.

Use the example from the beginning of this article to change your prompt, but then launch a new shell within your current one:

```
$ PS1="% "
% bash
$
```

When you launch the new shell, it displays the default prompt instead of your custom one, because a child process does not inherit variables set in its parent. If you kill the child process, you return to the parent shell, and you see your custom **PS1** prompt again:

```
$ exit
%
```

If you want to pass a variable to a child process, you can _prepend_ a command with variable definitions, or you can _export_ the variable to the child process.

### Prepending variables

You can prepend any number of variables before running a command. Whether the variables are used by the child process is up to the process, but you can pass the variables to it no matter what:

```
$ FOO=123 bash
$ echo $FOO
123
$
```

Prepending can be a useful trick when you’re running an application requiring a special location for certain libraries (using the **LD_LIBRARY_PATH** variable), or when you’re compiling software with a non-standard compiler (using the **CC** variable), and so on.
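
Two common real-world shapes of this trick (the paths and names here are hypothetical, purely illustrative):

```
$ LD_LIBRARY_PATH=/opt/mylibs ./myapp
$ CC=clang make
```
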
### Exporting variables

Another way to make variables available to a child process is the **export** keyword, a command built into Bash. The **export** command broadens the scope of whatever variable or variables you specify:

```
$ PS1="% "
% FOO=123
% export PS1
% export FOO
% bash
% echo $PS1
%
% echo $FOO
123
```

In both cases, it’s not just a child shell that has access to a local variable that’s been passed to it or exported, it’s any child process of that shell. You can launch an application from the same shell, and that variable is available as an environment variable from within the application.
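
One quick way to convince yourself of this is to ask a non-shell child process, such as **printenv**, for the variable (a hypothetical check continuing the session above):

```
% printenv FOO
123
```

Because printenv is a separate program launched by the shell, it can only see **FOO** here because the variable was exported.
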
Variables exported for everything on your system to use are called _environment variables_, which is a topic for a future article. In the meantime, try using some variables for everyday tasks to see what they bring to your workflow.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/using-variables-bash

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://github.com/PowerShell/PowerShell
[3]: https://opensource.com/article/19/8/variables-powershell
[4]: https://opensource.com/article/19/8/understanding-file-paths-linux
@ -0,0 +1,232 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Best Email Services For Privacy Concerned People)
[#]: via: (https://itsfoss.com/secure-private-email-services/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Best Email Services For Privacy Concerned People
======

Email services like Gmail, Outlook, and a couple of other email service providers are quite popular. Well, they're definitely secure in a way – but not necessarily private (i.e., they do not respect your privacy with utmost care).

Maybe you want to share something confidential and you want it to be well-protected. Or, maybe, you just want to talk about Area 51? (_shh, the CIA wants to know your location!_) Or you just don't want the service providers to read your emails to serve you ads.

No matter what: if you are concerned about the privacy and security of your email conversations and want them to be as private as possible – this article shall help you find the best email services for the job.

**Note**: _Judging by the privacy policies and given the work of these services (advocating for privacy & protecting users' data) – we have hand-picked the services as recommendations for you. But, we advise you to remember that nothing is 100% foolproof. So, always be cautious – no matter what service it is._

### Email Services to Secure Your Privacy

![][1]

Our list includes free/paid services which may offer standalone applications for multiple platforms (here, Linux will be the priority) or could be web-based email service providers as usual.

#### 1\. ProtonMail

![][2]

Key Highlights:

  * Open Source
  * End-to-end encrypted
  * Swiss-based (protected by Swiss privacy laws)
  * Free & Paid options available
  * Custom Domain supported (requires premium subscription)
  * Self-destruct message functionality
  * 2FA Available

ProtonMail is a quite popular Swiss-based email service which follows an ad-free model to protect your privacy. It lets you specify an expiration time for an email so that it self-destructs. In addition to all the security features, it is open source in nature. So, you can review the open-source encryption libraries and other components to be sure.

To add a custom domain, you need to have a premium subscription. You can use it for free with limited features or choose to upgrade to a premium subscription (and support the company behind it).

[ProtonMail][3]

#### 2\. Tutanota

![][4]

Key Highlights:

  * Open Source
  * End-to-end encrypted
  * 2FA Available
  * Free & Paid options available
  * Custom Domains supported (requires premium subscription)
  * Whitelabel for business available

Tutanota is yet another privacy-friendly email service provider fit for personal and business use. In contrast to ProtonMail, it provides 1 GB of storage (instead of 500 MB) for free users. And, you can add more storage to your account as well.

You would need a premium subscription in order to add a custom domain. If you want, you can also opt for the ability to whitelabel the service for your business.

[Tutanota][6]

#### 3\. Librem Mail

![][7]

Key Highlights:

  * Decentralized
  * End-to-end encrypted

Librem Mail is a part of the [Librem One][8] suite of services by Purism. Unlike the others, it isn't free. You need to opt for the premium subscription of Librem One in order to get access to Librem Mail.

Personally, I haven't used it. But, it looks like a decent end-to-end encrypted, ad-free email service, given Purism's history of protecting users' privacy.

[Librem One Suite][8]

#### 4\. Criptext

![Criptext][9]

Key Highlights:

  * Decentralized
  * End-to-end encrypted
  * Open Source Signal Protocol Encryption
  * Standalone Desktop Applications (Linux, Windows, & Mac)
  * Real-time email tracking
  * Unsend option

Criptext is a completely free email service available across desktops and mobile phones. It does not support custom domains (or whitelabel). However, it utilizes open source encryption (the Signal Protocol) – which you might have heard of if you have used the "Signal" messenger.

They plan to introduce premium subscriptions with added benefits – but for now, as it stands, it is free for all.

[Criptext][10]

#### 5\. Mailfence

![][11]

Key Highlights:

  * End-to-end encrypted
  * Free & Paid options
  * Custom domain support available
  * 2FA Available
  * Browser-based only (no mobile apps)

Mailfence is a decent privacy-focused email service which enforces OpenPGP end-to-end encryption. You can start using it for free with limited storage (500 MB) and features. You can also upgrade your subscription to increase the storage space, unlock the ability to use a custom domain, and so on.

The only downside here is the lack of mobile apps. So, you need to launch a browser and sign in to use the service across multiple devices.

[Mailfence][12]

#### 6\. TorGuard’s Private-Mail

![][13]

Key Highlights:

  * OpenPGP end-to-end encryption
  * Standalone Desktop app (Windows for now)
  * Free and paid options
  * Custom domain option available with premium subscription

Private-Mail ticks all the points that you normally look for in a privacy-focused email service. If you want it for free, you will only get 100 MB of storage, with encryption and webmail access only.

If you want to access the service across multiple devices (including your smartphone) – you may want to upgrade your subscription. For now, a desktop client for Windows is available. According to their download page – it is coming to Linux soon enough.

[Private-Mail][15]

#### 7\. Hushmail

![][16]

Key Highlights:

  * 14-day trial for personal use
  * Separate plans & pricing for business users (small businesses, healthcare, law, non-profits, & enterprise)
  * OpenPGP end-to-end encryption
  * Ability to create 2 web forms to let people reach out to you

Hushmail is an interesting email service provider for privacy-concerned people. For personal use, it offers a 14-day trial period.

But, for businesses – it categorizes them and offers different pricing. For example, if you want to utilize a secure email service for your healthcare company – it offers you a HIPAA-compliant service. Similarly, there are different plans for law firms, non-profits, enterprises, and small businesses.

In addition to the ability to add a custom domain for businesses, it also lets everyone create web forms (both personal and business users).

[Hushmail][17]

#### 8\. CounterMail

![][18]

Key Highlights:

  * OpenPGP end-to-end encryption
  * Custom domain support
  * Web form support
  * Windows, Linux, & MacOS X support

CounterMail is yet another secure email alternative to the services mentioned above. It lets you try the service for one week absolutely free. In addition to the encryption, it lets you have your own domain and create web forms – no matter what level of subscription you have.

The more you spend, the more storage you get. But the features remain the same – which is a good thing, in a way.

[CounterMail][19]

**Wrapping Up**

You may have to face some inconvenience when using a privacy-focused email service, such as a mediocre UI or fewer personalization features. However, that's a trade-off you have to accept if you want to prioritize your privacy over everything else.

Did we miss any of your favorites? Do you hate one of the services mentioned above? Feel free to let us know what you think in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/secure-private-email-services/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/secure-email-services.png?resize=800%2C450&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/protonmail.jpg?fit=800%2C424&ssl=1
[3]: http://proton.go2cloud.org/SH1V
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/tutanota.jpg?fit=800%2C441&ssl=1
[5]: https://itsfoss.com/raw-image-tools-linux/
[6]: https://tutanota.com/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/librem-mail.jpg?fit=800%2C470&ssl=1
[8]: https://librem.one/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/criptext.jpg?fit=800%2C477&ssl=1
[10]: https://criptext.com
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/mailfence.png?fit=800%2C295&ssl=1
[12]: https://mailfence.com/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/private-mail.jpg?fit=800%2C304&ssl=1
[14]: https://itsfoss.com/open-source-tools-writers/
[15]: https://privatemail.com/
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/hushmail.png?fit=800%2C357&ssl=1
[17]: https://www.hushmail.com
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/countermail-1.jpg?fit=800%2C454&ssl=1
[19]: https://countermail.com/?p=start
@ -0,0 +1,162 @@

[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
[#]: author: (redhat https://www.redhat.com)

代码英雄:第一季:操作系统大战(第二部分 Linux 崛起)
======

Saron Yitbarek: 这玩意开着的吗?让我们来一段史诗般的星球大战式的开场吧,开始了。

配音:[00:00:30] 第二集:Linux® 的崛起。微软帝国控制着 90% 的桌面用户。操作系统的全面标准化似乎是板上钉钉的事了。然而,互联网的出现将战争的焦点从桌面转向了企业,在该领域,所有商业组织都争相构建自己的服务器。与此同时,一个不太可能的英雄出现在开源反叛组织中。固执、戴着眼镜的 Linus Torvalds 免费发布了他的 Linux 系统。微软打了个趔趄,并且开始重组。

Saron Yitbarek: [00:01:00] 哦,我们书呆子就是喜欢那样。上一次我们讲到哪了?苹果和微软互相攻伐,试图在一场争夺桌面用户的战争中占据主导地位。在第一集的结尾,我们看到微软获得了大部分的市场份额。很快,由于互联网的兴起以及随之而来的开发者大军,整个市场都经历了一场地震。互联网将战场从家庭和办公室中的个人电脑用户,转移到了拥有数百台服务器的大型商业客户中。

[00:01:30] 这意味着巨量资源的迁移。突然间,所有相关企业不仅被迫为服务器空间和网站建设付费,而且还必须集成软件来进行资源跟踪和数据库监控等工作。你需要很多开发人员来帮助你。至少那时候大家都是这么做的。

在操作系统之战的第二部分,我们将看到优先级的巨大转变,以及像 Linus Torvalds 和 Richard Stallman 这样的开源叛逆者是如何成功地在微软和整个软件行业的核心地带引起恐惧的。

[00:02:00] 我是 Saron Yitbarek,您现在收听的是代码英雄,一款红帽公司原创的播客节目。每一集,我们都会给您带来"从码开始"改变技术的人的故事。

[00:02:30] 好。假设你是 1991 年的微软。你自我感觉良好,对吧?满怀信心。确定全球主导的地位感觉不错。你已经掌握了与其他企业合作的艺术,但是仍然将大部分开发人员、程序员和系统管理员排除在联盟之外,而他们才是真正的步兵。这时出现了个叫 Linus Torvalds 的芬兰极客。他和他的开源程序员团队正在开始发布 Linux,这个操作系统的内核是由他们一起编写出来的。

[00:03:00] 坦白地说,如果你是微软公司,你并不会太在意 Linux,甚至是一般意义上的开源运动,但是最终,Linux 的规模变得如此之大,以至于微软不可能不注意到。Linux 第一个版本出现在 1991 年,当时大概有 10000 行代码。十年后,变成了 300 万行代码。如果你想知道,今天则是 2000 万行代码。

[00:03:30] 让我们在 90 年代初停留一会儿。那时 Linux 还没有成为我们现在所知道的庞然大物,只是这个奇怪的、病毒式传播的操作系统正在这个星球上蔓延,全世界的极客和黑客都爱上了它。那时候我还太年轻,但依然希望加入他们。在那个时候,发现 Linux 就如同进入了一个秘密社会一样。程序员与朋友分享 Linux CD 集,就像其他人分享地下音乐混音带一样。

开发者 Tristram Oaten [00:03:40] 讲述了他 16 岁时第一次接触 Linux 的故事。

Tristram Oaten: [00:04:00] 我和我的家人去了 Red Sea 边的 Hurghada 潜水度假。那是一个美丽的地方,强烈推荐。第一天,我喝了自来水。也许,我妈妈跟我说过不要这么做。我整个星期都病得很厉害,没有离开旅馆房间。当时我只带了一台新安装了 Slackware Linux 的笔记本电脑,我听说过这玩意并且正在尝试使用它。这台笔记本上没有额外的应用程序,只有 8 张 CD。出于必要,整个星期我所做的就是去了解这个外星一般的系统。我阅读手册,摆弄着终端。我记得当时我甚至不知道一个点(表示当前目录)和两个点(表示上一级目录)之间的区别。

[00:04:30] 我一点头绪都没有。犯过很多错误,但慢慢地,在这种强迫的孤独中,我突破了障碍,开始理解并明白命令行到底是怎么回事。假期结束时,我没有看过金字塔、尼罗河等任何埃及遗址,但我解锁了现代世界的一个奇迹。我解锁了 Linux,接下来的事大家都知道了。

Saron Yitbarek: 你可以从很多人那里听到关于这个故事的不同说法。访问 Linux 命令行是一种革命性的体验。

David Cantrell: 它给了我源代码。我当时的感觉是,"太神奇了。"

Saron Yitbarek: 我们正在参加一个名为 Flock to Fedora 的 2017 年 Linux 开发者大会。

David Cantrell: ……非常有吸引力。我觉得我对这个系统有了更多的控制力,它越来越吸引我。我想,从 1995 年我第一次编译 Linux 内核那时起,我就迷上了它。

Saron Yitbarek: 开发者 David Cantrell 与 Joe Brockmeier。

Joe Brockmeier: 我寻遍了便宜软件,最终找到一套四张 CD 的 Slackware Linux。它看起来非常令人兴奋而且很有趣,所以我把它带回家,安装在第二台电脑上,开始摆弄它,并为两件事情感到兴奋。一个是,我运行的不是 Windows;另一个是,Linux 的开源特性。

Saron Yitbarek: [00:06:00] 某种程度上来说,对命令行的访问总是存在的。在开源真正开始流行的几十年前,人们(至少在开发人员中是这样)总是希望能够做到完全控制。让我们回到操作系统大战之前的那个时代,在苹果和微软为他们的 GUI 而战之前。那时也有代码英雄。保罗·琼斯(Paul Jones)教授(在线图书馆 ibiblio.org 负责人)在那个古老的时代,就是一名开发人员。

Paul Jones: [00:07:00] 从本质上讲,互联网在那个时候比较少是客户端-服务器架构的,而更多是点对点架构的。讲真,当时我们说的是从 VAX 到 VAX、从一台科学工作站到另一台科学工作站。这并不意味着没有客户端与服务端的关系以及没有应用程序,但这的确意味着,最初的设计是思考如何实现点对点,它与 IBM 一直在做的东西相对立。IBM 给你的只有哑终端,这种终端只能让你管理用户界面,却无法让你像真正的终端一样为所欲为。

Saron Yitbarek: 图形用户界面在普通用户中普及的同时,在工程师和开发人员中总是存在一股相反的力量。早在 Linux 出现之前的 20 世纪 70 年代和 80 年代,这股力量就存在于 Emacs 和 GNU 中。有了斯托尔曼的自由软件基金会后,总有某些人想要使用命令行,但上世纪 90 年代的 Linux 的交付方式是独一无二的。

[00:07:30] Linux 和其他开源软件的早期爱好者都是先驱。我正站在他们的肩膀上。我们都是。

您现在收听的是代码英雄,一款由红帽公司原创的播客。这是操作系统大战的第二部分:Linux 崛起。

Steven Vaughan-Nichols: 1998 年的时候,情况发生了变化。

Saron Yitbarek: Steven Vaughan-Nichols 是 zdnet.com 的特约编辑,他撰写技术商业方面的文章已经有几十年了。他将向我们讲述 Linux 是如何慢慢变得越来越流行,直到自愿贡献者的数量远远超过了在 Windows 上工作的微软开发人员的数量的。不过,Linux 从来没有真正关注过微软的桌面客户,这也许就是微软最开始时忽略了 Linux 及其开发者的原因。Linux 真正大放光彩的地方是在服务器机房。当企业开始线上业务时,每个企业都需要一个独特的编程解决方案来满足其需求。

[00:08:30] Windows NT 于 1993 年问世,当时它已经在与其他的服务器操作系统展开竞争了,但是许多开发人员都在想,"既然我可以通过 Apache 构建出基于 Linux 的廉价系统,那我为什么要购买 AIX 设备或大型 Windows 设备呢?"关键点在于,Linux 代码已经开始渗透到几乎所有在线的东西中。

Steven Vaughan-Nichols: [00:09:00] 令微软感到惊讶的是,它开始意识到,Linux 实际上已经开始有了一些商业应用,不是在桌面环境,而是在商业服务器上。因此,他们发起了一场运动,我们称之为 FUD,即恐惧、不确定和怀疑(fear、uncertainty 和 doubt)。他们说,"哦,Linux 这玩意,真的没有那么好。它不太可靠。你一点都不能相信它。"

Saron Yitbarek: [00:09:30] 这种软宣传式的攻击持续了一段时间。微软也不是唯一一个对 Linux 感到紧张的公司。这其实是整个行业在对抗这个奇怪新人的挑战。例如,任何与 UNIX 有利害关系的人都可能将 Linux 视为篡夺者。有一个案例很著名,那就是 SCO 组织(它发行过一种版本的 UNIX)在过去 10 多年里发起的一系列诉讼,试图阻止 Linux 的传播。SCO 最终失败而且破产了。与此同时,微软一直在寻找机会。他们势在必行,只不过当时还不清楚具体要怎么做。

Steven Vaughan-Nichols: [00:10:30] 让微软真正担心的是第二年,也就是 2000 年,IBM 宣布将于 2001 年在 Linux 上投资 10 亿美元。当时,IBM 已经不再涉足个人电脑业务。他们还没有完全退出,但正朝着这个方向前进,他们将 Linux 视为服务器和大型计算机的未来。在这一点上,剧透警告,IBM 是正确的。Linux 将主宰服务器世界。

Saron Yitbarek: 这已经不再仅仅是一群黑客所喜欢的、Jedi 式的命令行控制了。金钱的投入对 Linux 助力极大。Linux 国际的执行董事 John "Mad Dog" Hall 有一个故事可以解释为什么会这样。我们通过电话与他取得了联系。

John Hall: [00:11:30] 我有一个朋友名叫 Dirk Holden,他是德国德意志银行的一名系统管理员,他也参与了个人电脑上早期 X Window 系统图形项目的工作。有一天我去银行拜访他,我说:"Dirk,你银行里有 3000 台服务器,用的都是 Linux。为什么不用 Microsoft NT 呢?"他看着我说:"是的,我有 3000 台服务器,如果使用微软的 Windows NT 系统,我需要 2999 名系统管理员。"他继续说道:"而使用 Linux,我只需要四个。"这真是完美的答案。

Saron Yitbarek: [00:12:00] 程序员们着迷的这些东西恰好对大公司也极具吸引力。但由于 FUD 的作用,一些企业对此持谨慎态度。他们听到开源,就想:"开源。这看起来不太可靠,很混乱,充满了 BUG。"但正如那位银行经理所指出的,金钱有一种有趣的方式,可以说服人们克服疑虑。甚至那些只是需要一个网站的小公司也加入了 Linux 阵营。与一些昂贵的专有选择相比,使用廉价的 Linux 系统在成本上是无可比拟的。如果您是一家雇佣专业人员来构建网站的商店,那么您肯定想让他们使用 Linux。

[00:12:30] 让我们快进几年。Linux 运行着每个人的网站。Linux 已经征服了服务器世界,然后智能手机也随之诞生。当然,苹果和他们的 iPhone 占据了相当大的市场份额,而且微软也希望能进入这个市场,但令人惊讶的是,Linux 也在那里,已经做好准备了,迫不及待要大展拳脚。

作家兼记者 James Allworth。

James Allworth: [00:13:00] 当然还有容纳第二个竞争者的空间,那本可以是微软,但是实际上却是 Android,而 Android 基本上是基于 Linux 的。众所周知,Android 被谷歌所收购,现在运行在世界上大部分的智能手机上。谷歌在 Linux 的基础上创建了 Android,Linux 使他们能够以零成本从一个非常复杂的操作系统起步。他们成功地实现了这一目标,最终将微软挡在了下一代设备之外,至少从操作系统的角度来看是这样。

Saron Yitbarek: [00:13:30] 天崩地裂了,微软很大程度上有被埋没的风险。John Gossman 是微软 Azure 团队的首席架构师。他还记得当时困扰公司的困惑。

John Gossman: [00:14:00] 像许多公司一样,微软也非常担心知识产权污染。他们认为,如果允许开发人员使用开源代码,那么他们很可能只是将一些代码复制粘贴到某些产品中,就会让某种病毒式的许可证生效,从而引发未知的风险……我认为,这跟公司文化有关。很多公司,包括微软,都对开源开发的意义和商业模式之间的分歧感到困惑。有一种观点认为,开源意味着你所有的软件都是免费的,人们永远不会付钱。

Saron Yitbarek: [00:14:30] 任何投资于旧的、专有软件模式的人都会觉得这里发生的一切对他们构成了威胁。当你威胁到像微软这样的大公司时,是的,他们一定会做出反应。他们推动所有这些 FUD(恐惧、不确定性和怀疑)是有道理的。当时,商业运作的方式基本上就是相互竞争。不过,如果他们是任何其他公司的话,他们可能还会怀恨在心,抱着旧有的想法不放,但到了 2013 年,一切都变了。

[00:15:00] 微软的云计算服务 Azure 上线了,令人震惊的是,它从第一天开始就提供了 Linux 虚拟机。Steve Ballmer,这位曾把 Linux 称为癌症的首席执行官,已经离开了,代替他的是一位新的、有远见的首席执行官 Satya Nadella。

John Gossman: Satya 有不同的看法。他属于另一个世代,比 Paul、Bill 和 Steve 更年轻的世代,他对开源有不同的看法。

Saron Yitbarek: John Gossman,再说一次,来自微软的 Azure 团队。

John Gossman: [00:16:00] 大约四年前,出于实际需要,我们在 Azure 中添加了 Linux 支持。如果你访问任何一家企业客户,都会发现他们并没有试图决定是使用 Windows 还是使用 Linux、使用 .net 还是使用 Java。他们在很久以前就做出了决定——大约 15 年前才有这样的一些争论。现在,我见过的每一家公司都混合使用着 Linux 和 Java、Windows 和 .net、SQL Server、Oracle 和 MySQL——既有基于专有源代码的产品,也有开放源代码的产品。

如果你正在运维着一个云,允许这些公司在云上运行他们的业务,那么你就不能简单地告诉他们,"你可以使用这个软件,但你不能使用那个软件。"

Saron Yitbarek: [00:16:30] 这正是 Satya Nadella 采纳的哲学思想。2014 年秋季,他站在舞台上,希望传递一个重要信息:微软爱 Linux。他接着说,Azure 20% 的业务量已经是 Linux 了,微软将始终对 Linux 发行版提供一流的支持。没有哪怕一丝对开源的宿怨。

为了说明这一点,在他们的背后有一个巨大的标志,上面写着:"Microsoft hearts Linux"。哇哇哇。对我们中的一些人来说,这种转变有点令人震惊,但实际上,无需如此震惊。下面是 Steven Levy,一名科技记者兼作家。

Steven Levy: [00:17:30] 当你在踢足球的时候,如果草坪变滑了,那么你也许会换一双不同的鞋子。他们当初就是这么做的。他们不能否认现实,而且他们中也有聪明人,所以他们必须意识到,这就是世界的运行方式。不管他们早些时候说了什么,即使他们对之前的言论感到尴尬,但是让他们之前关于开源多么可怕的言论影响到现在明智的决策,那才真的是疯了。

Saron Yitbarek: [00:18:00] 微软低下了它高傲的头。你可能还记得,苹果公司经过多年的孤立无援,最终转向与微软构建合作伙伴关系。现在轮到微软进行 180 度转变了。经过多年与开源方法的战斗后,他们正在重塑自己。要么改变,要么死亡。Steven Vaughan-Nichols。

Steven Vaughan-Nichols: [00:18:30] 即使是像微软这样规模的公司,也无法与成千上万的开源开发者竞争,这些开发者开发着包括 Linux 在内的各个大项目。很长时间以来他们都不愿意这么做。前微软首席执行官史蒂夫·鲍尔默(Steve Ballmer)对 Linux 深恶痛绝,由于它的 GPL 许可证,他称 Linux 是一种癌症。但一旦鲍尔默被扫地出门,新的微软领导层说,"这就好像试图命令潮水不要涌进来,但潮水依然会不断涌进来。我们应该与 Linux 合作,而不是与之对抗。"

Saron Yitbarek: [00:19:00] 事实上,互联网技术历史上最大的胜利之一,就是微软最终能够做出这样的转变。当然,当微软出现在开源的谈判桌上时,老一代的、铁杆的 Linux 支持者是相当怀疑的。他们不确定自己是否能接受这些家伙,但正如 Vaughan-Nichols 所指出的,今天的微软已经不是你父母那一代时的微软了。

Steven Vaughan-Nichols: [00:19:30] 2017 年的微软既不是史蒂夫·鲍尔默(Steve Ballmer)的微软,也不是比尔·盖茨(Bill Gates)的微软。这是一家完全不同的公司,有着完全不同的方法。而且,开源软件一旦被放出,就无法被收回。开源已经吞噬了整个技术世界。从未听说过 Linux 的人可能对它并不了解,但是每次他们访问 Facebook,他们都在运行 Linux。每次执行谷歌搜索时,你都在运行 Linux。

[00:20:00] 每次你用 Android 手机,你都在运行 Linux。它确实无处不在,微软无法阻止它。我认为那种以为微软可以以某种方式接管它的想法,太天真了。

Saron Yitbarek: [00:20:30] 开源支持者可能一直担心微软会像混入羊群中的狼一样,但事实是,开源软件的本质保护了它,使它无法被完全控制。没有一家公司能够拥有 Linux 并以某种特定的方式控制它。Greg Kroah-Hartman 是 Linux 基金会的一名成员。

Greg Kroah-Hartman: 每个公司和个人都以自私的方式为 Linux 做出贡献。他们之所以这样做,是因为他们想要解决自己所面临的问题,可能是硬件无法工作,或者是他们想要添加一个新功能来做其他事情,又或者想在他们的产品中使用它。这很棒,因为他们会把代码贡献回去,此后每个人都会从中受益,这样每个人都可以用到这份代码。正是因为这种自私,所有的公司和所有的人都能从中受益。

Saron Yitbarek: [00:21:30] 微软已经意识到,在即将到来的云战争中,与 Linux 作战就像与空气作战一样。Linux 和开源不是敌人,它们是一种氛围。今天,微软以白金会员的身份加入了 Linux 基金会。他们成为 GitHub 开源项目的头号贡献者。2017 年 9 月,他们甚至加入了 Open Source Initiative 组织。现在,微软在开放许可证下发布了很多代码。微软的 John Gossman 描述了他们开源 .net 时所发生的事情。起初,他们并不认为自己能得到什么回报。

John Gossman: [00:22:00] 我们本没有指望来自社区的贡献。然而,三年后,超过 50% 的对 .net 框架库的贡献来自于微软之外。这包括大量的代码。三星为 .net 提供了 ARM 支持。Intel 和 ARM 以及其他一些芯片厂商已经为 .net 框架贡献了特定于他们处理器的代码生成,以及数量惊人的修复、性能改进等等——既有个人贡献者,也有社区。

Saron Yitbarek: 直到几年前,我们今天拥有的这个微软,这个开放的微软,还是不可想象的。

[00:23:00] 我是 Saron Yitbarek,这里是代码英雄。好吧,我们已经看到了为了赢得数百万桌面用户的青睐而战的激烈场面。我们已经看到开源软件在专有软件巨头的背后悄然崛起,并攫取了巨大的市场份额。我们已经看到了一批批的代码英雄将编程领域变成了你我现在看到的这个样子。今天,大企业正在吸收开源软件,通过这一切,每个人都从他人那里受益。

[00:23:30] 在狂野西部般的科技界,一贯如此。苹果受到施乐的启发,微软受到苹果的启发,Linux 受到 UNIX 的启发。进化,借鉴,不断成长。如果比喻成大卫和歌利亚(西方经典的以弱胜强故事中的两个主角)的话,开源软件不再是大卫,但是,你知道吗?它也不是歌利亚。开源已经超越了传统。它已经成为其他人战斗的战场。随着开源变得不可避免,新的战争,那些在云计算中进行的战争,那些在开源战场上进行的战争,都在增加。

这是 Steven Levy,他是一名作家。

Steven Levy: [00:24:00] 基本上,到目前为止,包括微软在内,我们有四到五家公司,正以各种方式努力把自己打造成为我们的工作平台,比如人工智能领域。你能看到智能助手之间的战争,你猜怎么着?苹果有一个智能助手,叫 Siri。微软有一个,叫 Cortana。谷歌有谷歌助手。三星也有一个智能助手。亚马逊也有一个,叫 Alexa。我们看到这些战斗遍布各地。也许,你可以说,最热门的人工智能平台将控制我们生活中所有的东西,而这五家公司就是在为此而争斗。

Saron Yitbarek: 现在很难再找到另一个反叛者,能像 Linux 奇袭微软那样,偷偷潜入 Facebook、谷歌或亚马逊,攻其不备。因为正如作家 James Allworth 所指出的,成为一个真正的反叛者只会变得越来越难。

James Allworth: [00:25:30] 规模一直以来都是一种优势,但规模优势的本质……怎么说呢,我认为以前它们在本质上是线性的,而现在它们在本质上是指数型的了。所以,一旦你开始以某种方式走在前面,另一个新玩家要想赶上来就变得越来越难了。我认为在互联网时代这大体来说是正确的,无论是因为规模,还是因为数据赋予组织的重要性和竞争优势。一旦你走在前面,你就会吸引更多的客户,这就给了你更多的数据,让你能做得更好。这之后,客户还有什么理由选择一家落后了这么远、排名第二的公司呢?我认为在云的时代,这个逻辑也不会有什么不同。

Saron Yitbarek: [00:26:00] 这个故事始于史蒂夫·乔布斯(Steve Jobs)和比尔·盖茨(Bill Gates)这样的非凡英雄,但科技的进步已经呈现出一种众包的、有机的感觉。据说,我们的开源英雄 Linus Torvalds 在第一次发明 Linux 内核时甚至没有一个真正的计划。他无疑是一位才华横溢的年轻开发者,但他也像潮汐前的一滴水一样。变革是不可避免的。据估计,对于一家专有软件公司来说,用他们老式的、专有的方式创建一个 Linux 发行版,将花费超过 100 亿美元。这说明了开源的力量。

[00:26:30] 最后,这并不是一个专有模式所能与之竞争的东西。成功的公司必须保持开放。这是最大、最终极的教训。还有一点要记住:当我们连接在一起的时候,我们在已有基础上成长和建设的能力是无限的。不管这些公司有多大,我们都不必坐等他们给我们更好的东西。想想那些为了纯粹的创造乐趣而学习编程的新开发者,那些自己动手丰衣足食的人。

未来的优秀程序员无论来自何方,只要能够访问代码,他们就能构建下一个大项目。

[00:27:30] 以上就是我们关于操作系统战争的两个故事。这场战争塑造了我们的数字生活。争夺主导地位的斗争从桌面转移到了服务器机房,最终进入了云计算领域。过去的敌人难以置信地变成了盟友,众包的未来让一切都变得开放。听着,我知道,在这段历史之旅中,还有很多英雄我们没有提到,所以给我们写信吧,分享你的故事。Redhat.com/commandlineheroes。我恭候佳音。

在本季剩下的时间里,我们将了解今天的英雄们在创造什么,以及他们要经历什么样的战斗才能将他们的创造变为现实。让我们从艰苦卓绝的编程一线回来,听听更多的传奇故事吧。我们每两周发布一集新的播客。几周后,我们将为您带来第三集:敏捷革命。

[00:28:00] 命令行英雄是一款红帽公司原创的播客。要想免费自动获得新一集的代码英雄,请订阅我们的节目。只要在苹果播客、Spotify、谷歌 Play 或其他应用中搜索"代码英雄",然后点击"订阅",这样你就会第一时间知道什么时候有新剧集了。

我是 Saron Yitbarek。感谢收听,继续编码。

--------------------------------------------------------------------------------

via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux

作者:[redhat][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.redhat.com
[b]: https://github.com/lujun9972
@ -1,146 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)

Podman 和用户命名空间:天作之合
======

> 了解如何使用 Podman 在单独的用户命名空间中运行容器。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)

[Podman][1] 是 [libpod][2] 库的一部分,使用户能够管理 pod、容器和容器镜像。在我的上一篇文章中,我写过 [Podman 作为一种更安全的运行容器的方式][3]。在这里,我将解释如何使用 Podman 在单独的用户命名空间中运行容器。

我一直认为<ruby>[用户命名空间][4]<rt>user namespace</rt></ruby>(主要由 Red Hat 的 Eric Biederman 开发)是隔离容器的一个很棒的功能。用户命名空间允许你为运行容器指定用户标识符(UID)和组标识符(GID)的映射。这意味着你的进程可以在容器内以 UID 0 运行,而在容器外实际是 UID 100000。如果容器进程逃逸出了容器,内核会将它们视为 UID 100000。不仅如此,任何由未映射到该用户命名空间的 UID 所拥有的文件对象,都将被视为 `nobody`(65534,由 `kernel.overflowuid` 指定)所拥有,并且容器进程将无法访问,除非该对象可由"其他人"访问(即全局可读/可写)。

如果你有一个权限为 [660][5]、属主为"真实" `root` 的文件,当用户命名空间中的容器进程尝试读取它时,进程会被阻止访问,并且该文件会被视为 `nobody` 所拥有。

### 示例

以下是它的工作原理。首先,我在系统中创建一个由 root 拥有的文件。

```
$ sudo bash -c "echo Test > /tmp/test"
$ sudo chmod 600 /tmp/test
$ sudo ls -l /tmp/test
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
```

接下来,我将该文件卷挂载到一个使用用户命名空间映射 0:100000:5000 运行的容器中。

```
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
# id
uid=0(root) gid=0(root) groups=0(root)
# ls -l /tmp/test
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
# cat /tmp/test
cat: /tmp/test: Permission denied
```

上面的 `--uidmap` 设置告诉 Podman 将容器内从 UID 0 开始的 5000 个 UID(范围是 0-4999),映射到容器外从 UID 100000 开始的一段范围(即 100000-104999)。在容器内部,如果我的进程以 UID 1 运行,则它在主机上就是 100001。

由于实际的 UID=0 未映射到容器中,因此 `root` 拥有的任何文件都将被视为 `nobody` 所拥有。即使容器内的进程具有 `CAP_DAC_OVERRIDE` 权能,也无法绕过这种保护。`DAC_OVERRIDE` 使 root 进程能够读/写系统上的任何文件,即使文件不属于该进程的 UID,也不是全局可读或可写的。

用户命名空间中的权能与主机上的权能不同,它们是被命名空间化了的权能。这意味着我的容器 root 只在容器内部、并且实际上只在映射到该用户命名空间的 UID 范围内拥有权能。如果容器进程逃逸出了容器,则它对未映射到该用户命名空间的 UID(包括 UID=0)不具备任何权能;即使进程能以某种方式进入另一个容器,如果那个容器使用的是不同范围的 UID,它们也不具备相应的权能。

请注意,SELinux 和其他技术还会限制容器进程逃逸出容器时能做的事情。

### 使用 podman top 来显示用户命名空间

我们在 `podman top` 中添加了一些功能,允许你检查容器内运行进程的用户名,并识别它们在主机上的真实 UID。

让我们首先使用我们的 UID 映射运行一个 `sleep` 容器。

```
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
```

现在运行 `podman top`:

```
$ sudo podman top --latest user huser
USER   HUSER
root   100000

$ ps -ef | grep sleep
100000  21821 21809  0 08:04 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

注意 `podman top` 报告用户进程在容器内以 `root` 身份运行,但在主机上(`HUSER` 列)以 UID 100000 运行。此外,`ps` 命令也确认 `sleep` 进程以 UID 100000 运行。

现在让我们运行第二个容器,但这次我们选择一个单独的 UID 映射,从 200000 开始。

```
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
$ sudo podman top --latest user huser
USER   HUSER
root   200000

$ ps -ef | grep sleep
100000  21821 21809  0 08:04 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
200000  23644 23632  1 08:08 ?  00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

请注意,`podman top` 报告第二个容器在容器内以 `root` 身份运行,但在主机上是 UID=200000。

另请查看 `ps` 命令的输出,它显示两个 `sleep` 进程都在运行:一个为 UID 100000,另一个为 UID 200000。

这意味着在单独的用户命名空间内运行容器,可以在进程之间实现传统的 UID 分离,而这从一开始就是 Linux/Unix 的标准安全工具。

### 用户命名空间的问题

几年来,我一直主张用户命名空间应该成为每个人都使用的安全工具,但几乎没有人用。原因是一直缺乏文件系统层面的支持,也没有能做所有权偏移的文件系统。

在容器中,你希望在许多容器之间共享**基础**镜像。上面的每个示例都使用了 Fedora 基础镜像。Fedora 镜像中的大多数文件都由实际的 UID=0 拥有。如果我使用用户命名空间 0:100000:5000 在此镜像上运行容器,默认情况下它会将所有这些文件视为 `nobody` 所拥有,因此我们需要偏移所有这些 UID 以匹配用户命名空间。多年来,我一直想要一个挂载选项,来告诉内核重新映射这些文件的 UID 以匹配用户命名空间。上游内核存储开发人员还在继续调查,并在此功能上取得进展,但这是一个难题。

Podman 之所以能在同一镜像上使用不同的用户命名空间,靠的是由 Nalin Dahyabhai 领导的团队开发的[容器/存储][7]中内置的自动 [chown][6] 功能。Podman 使用容器/存储,当 Podman 第一次在一个新的用户命名空间中使用某个容器镜像时,容器/存储会将镜像中的所有文件 "chown"(即更改所有权)为该用户命名空间中映射的 UID,并创建一个新镜像。可以把它想象成一个 `fedora:0:100000:5000` 镜像。

当 Podman 在具有相同 UID 映射的镜像上运行另一个容器时,它会使用这个"预先设置好所有权"的镜像。当我在 0:200000:5000 映射上运行第二个容器时,容器/存储会创建第二个镜像,我们称之为 `fedora:0:200000:5000`。

请注意,如果你执行 `podman build` 或 `podman commit`,并将新创建的镜像推送到容器注册表,Podman 会使用容器/存储来反转这个偏移,并推送所有文件都归实际 UID=0 所有的镜像。

这可能会导致在新的 UID 映射中创建容器时出现明显的减速,因为 `chown` 可能会很慢,具体取决于镜像中的文件数量。此外,在普通的 [OverlayFS][8] 上,镜像中的每个文件都会被复制一份。普通的 Fedora 镜像最多可能需要 30 秒才能完成 `chown` 并启动容器。

幸运的是,Red Hat 内核存储团队(主要是 Vivek Goyal 和 Miklos Szeredi)在内核 4.19 中为 OverlayFS 添加了一项新功能,称为"仅元数据复制"。如果以 `metacopy=on` 作为挂载选项挂载覆盖文件系统,那么在更改文件属性时,它不会复制较低层的文件内容;内核会创建新的 inode,其中包含修改后的属性,并引用指向较低层数据的指针。如果文件内容发生变化,它仍会复制内容。如果你想试用,可以在 Red Hat Enterprise Linux 8 Beta 中使用此功能。

这意味着对容器镜像的 chown 可以在几秒钟内完成,并且你不会让每个容器的存储空间翻倍。

这使得像 Podman 这样的工具在不同的用户命名空间中运行容器变得可行,大大提高了系统的安全性。

### 前瞻

我想向 Podman 添加一个新标志,比如 `--userns=auto`,它会告诉 Podman 为你运行的每个容器自动选择一个唯一的用户命名空间。这类似于 SELinux 使用单独的多类别安全(MCS)标签的方式。如果设置了环境变量 `PODMAN_USERNS=auto`,则连这个标志都不需要设置。

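
按照这个设想,用法大概会像下面这样(纯属示意的草图:`--userns=auto` 在本文写作时尚未实现,命令和输出均为假设):

```
$ export PODMAN_USERNS=auto
$ sudo -E podman run -d fedora sleep 1000
$ sudo podman top --latest user huser
USER   HUSER
root   427680
```
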
Podman 最终允许用户在不同的用户命名空间中运行容器。像 [Buildah][9] 和 [CRI-O][10] 这样的工具也可以利用用户命名空间。但是,对于 CRI-O,Kubernetes 需要了解哪个用户命名空间将运行容器引擎,上游正在处理这个问题。

在我的下一篇文章中,我将解释如何在用户命名空间中以非 root 用户运行 Podman。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/podman-and-user-namespaces

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io/
[2]: https://github.com/containers/libpod
[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
[5]: https://chmodcommand.com/chmod-660/
[6]: https://en.wikipedia.org/wiki/Chown
[7]: https://github.com/containers/storage
[8]: https://en.wikipedia.org/wiki/OverlayFS
[9]: https://buildah.io/
[10]: http://cri-o.io/
@ -0,0 +1,86 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guided tour of Linux file system types)
[#]: via: (https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Linux 文件系统类型的导览
======

Linux 文件系统多年来在不断演进,下面来看一下都有哪些文件系统类型。

![Andreas Lehner / Flickr \(CC BY 2.0\)][1]

虽然对于普通用户来说可能并不明显,但在过去十年左右的时间里,Linux 文件系统已经发生了显著的变化,这使它们更能抵抗损坏,性能问题也更少。

如今大多数 Linux 系统使用名为 **ext4** 的文件系统。"ext" 代表"扩展"(extended),4 表示这是此文件系统的第 4 代。随着时间的推移而添加的功能包括:能够提供越来越大的文件系统(目前大到 1,000,000 TiB)和更大的文件(高达 16 TiB),更能抵抗系统崩溃,以及更少的碎片(碎片指单个文件的数据被分散为多个位置的块),这些都提高了性能。

**ext4** 文件系统还带来了对性能、可伸缩性和容量的其他改进。为了提高可靠性,实现了元数据和日志校验和。时间戳现在可以精确到纳秒,以便更准确地记录文件时间(例如,文件创建时间和最后更新时间)。并且,由于在时间戳字段中增加了两个位,2038 年问题(存储日期/时间的字段将从最大值翻转到零的时刻)已被推迟了超过 400 年(到 2446 年)。

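
顺便一提,可以用 `stat` 命令直观地看到纳秒级的时间戳(下面的输出只是一个假设的示例):

```
$ stat -c '%y' /etc/hostname
2019-08-20 10:11:12.123456789 +0200
```
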
### 文件系统类型

要确定 Linux 系统上文件系统的类型,请使用 **df** 命令。下面命令中的 **T** 选项会显示文件系统类型,**h** 则使磁盘大小以"人类可读"的方式显示;换句话说,将报告的单位调整为 M、G 之类,使人更容易理解。

```
$ df -hT | head -10
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  2.9G     0  2.9G   0% /dev
tmpfs          tmpfs     596M  1.5M  595M   1% /run
/dev/sda1      ext4      110G   50G   55G  48% /
/dev/sdb2      ext4      457G  642M  434G   1% /apps
tmpfs          tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs          tmpfs     5.0M  4.0K  5.0M   1% /run/lock
tmpfs          tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/loop0     squashfs   89M   89M     0 100% /snap/core/7270
/dev/loop2     squashfs  142M  142M     0 100% /snap/hexchat/42
```

请注意,**/**(根)和 **/apps** 的文件系统都是 **ext4**,而 **/dev** 是 **devtmpfs** 文件系统(一个由内核自动填充设备节点的文件系统)。输出中其他的文件系统类型还有 **tmpfs**(驻留在内存和/或交换分区中的临时文件系统)以及 **squashfs**(只读的压缩文件系统,用于 snap 软件包)。

还有 proc 文件系统,用于存储正在运行的进程的信息。

```
$ df -T /proc
Filesystem     Type 1K-blocks  Used Available Use% Mounted on
proc           proc         0     0         0    - /proc
```

当你在文件系统中各处移动时,可能会遇到许多其他文件系统类型。例如,当你进入某个目录并想了解它所在的文件系统时,可以运行以下命令:

```
$ cd /dev/mqueue; df -T .
Filesystem     Type   1K-blocks  Used Available Use% Mounted on
mqueue         mqueue         0     0         0    - /dev/mqueue
$ cd /sys; df -T .
Filesystem     Type  1K-blocks  Used Available Use% Mounted on
sysfs          sysfs         0     0         0    - /sys
$ cd /sys/kernel/security; df -T .
Filesystem     Type       1K-blocks  Used Available Use% Mounted on
securityfs     securityfs         0     0         0    - /sys/kernel/security
```

与其他 Linux 命令一样,这里的 .(点)代表文件系统中的当前位置。

这些和其他独特的文件系统提供了一些特殊功能。例如,securityfs 为安全模块提供文件系统支持。

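
如果想一次性概览系统中已挂载的所有文件系统类型,可以对 `mount` 的输出做个简单统计(输出因系统而异,以下数字仅为示例):

```
$ mount | awk '{print $5}' | sort | uniq -c | sort -rn | head -5
     17 tmpfs
      9 cgroup
      5 squashfs
      3 ext4
      1 sysfs
```
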
Linux 文件系统需要能够抵抗损坏,能够承受系统崩溃,并提供快速、可靠的性能。由几代 **ext** 文件系统和新一代专用文件系统带来的改进,使 Linux 系统更易于管理、也更可靠。

在 [Facebook][2] 和 [LinkedIn][3] 上加入 Network World 社区,就你关心的热门主题发表评论。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/guided-tour-on-the-flaker_people-in-horse-drawn-carriage_germany-by-andreas-lehner-flickr-100808681-large.jpg
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world