mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-01-13 22:30:37 +08:00

commit 5584c9f641: Merge remote-tracking branch 'LCTT/master' (merge upstream)

published/20180116 Command Line Heroes- Season 1- OS Wars.md (new file)
@@ -0,0 +1,135 @@

[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11251-1.html)
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1)
[#]: author: (redhat https://www.redhat.com)

《代码英雄》第一季(1):操作系统战争(上)
======

> 代码英雄讲述了开发人员、程序员、黑客、极客和开源反叛者如何彻底改变技术前景的真实史诗故事。

![](https://www.redhat.com/files/webux/img/bandbg/bkgd-clh-ep1-2000x950.png)

本文是《[代码英雄](https://www.redhat.com/en/command-line-heroes)》系列播客[第一季(1):操作系统战争(上)](https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1) 的[音频](https://dts.podtrac.com/redirect.mp3/audio.simplecast.com/f7670e99.mp3)脚本。

**Saron Yitbarek:**有些故事如史诗般宏大,惊险万分,在我脑海中似乎出现了星球大战电影开头的爬行文字。你知道的,就像——

**配音:**“第一集,操作系统大战”

**Saron Yitbarek:**是的,就像那样子。

**配音:**这是一个局势紧张加剧的时期。<ruby>比尔·盖茨<rt>Bill Gates</rt></ruby>与<ruby>史蒂夫·乔布斯<rt>Steve Jobs</rt></ruby>的帝国发起了一场无可避免的专有软件之战。*[00:00:30]* 盖茨与 IBM 结成了强大的联盟,而乔布斯则拒绝了对它的硬件和操作系统开放授权。他们争夺统治地位的争斗在一场操作系统战争中席卷了整个银河系。与此同时,这些帝王们所不知道的偏远之地,开源的反叛者们开始集聚。

**Saron Yitbarek:**好吧。这也许有点戏剧性,但当我们谈论上世纪八九十年代和 2000 年代的操作系统之争时,这也不算言过其实。*[00:01:00]* 确实曾经发生过一场史诗级的统治之战。史蒂夫·乔布斯和比尔·盖茨确实掌握着数十亿人的命运。掌控了操作系统,你就掌握了绝大多数人使用计算机的方式、互相通讯的方式、获取信息的方式。我可以一直罗列下去,不过你知道我的意思。掌握了操作系统,你就是帝王。

我是 Saron Yitbarek,你现在收听的是代码英雄,一款红帽公司原创的播客节目。*[00:01:30]* 你问,什么是<ruby>代码英雄<rt>Command Line Hero</rt></ruby>?嗯,如果你愿意创造而不仅仅是使用,如果你相信开发者拥有构建美好未来的能力,如果你希望拥有一个大家都有权利表达科技如何塑造生活的世界,那么你,我的朋友,就是一位代码英雄。在本系列节目中,我们将为你带来那些“白码起家”(LCTT 译注:原文是 “from the command line up”,应该是演绎自 “from the ground up”——白手起家)改变技术的程序员故事。*[00:02:00]* 那么我是谁,凭什么指导你踏上这段艰苦的旅程?Saron Yitbarek 是哪根葱?嗯,事实上我觉得我跟你差不多。我是一名面向初学者的开发人员,我做的任何事都依赖于开源软件,我的世界就是如此。通过在播客中讲故事,我可以跳出无聊的日常工作,鸟瞰全景,希望这对你也一样有用。

我迫不及待地想知道,开源技术从何而来?我的意思是,我对<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>和 Linux^® 的荣耀有一些了解,*[00:02:30]* 我相信你也一样,但是说真的,开源并不是一开始就有的,对吗?如果我想发自内心的感激这些最新、最棒的技术,比如 DevOps 和容器之类的,我感觉我对那些早期的开发者缺乏了解,我有必要了解这些东西来自何处。所以,让我们暂时先不用担心内存泄露和缓冲溢出。我们的旅程将从操作系统之战开始,这是一场波澜壮阔的桌面控制之战。*[00:03:00]* 这场战争亘古未有,因为:首先,在计算机时代,大公司拥有指数级的规模优势;其次,从未有过这么一场控制争夺战是如此变化多端。比尔·盖茨和史蒂夫·乔布斯? 他们也不知道结果会如何,但是到目前为止,这个故事进行到一半的时候,他们所争夺的所有东西都将发生改变、进化,最终上升到云端。

*[00:03:30]* 好的,让我们回到 1983 年的秋季。还有六年我才出生。那时候的总统还是<ruby>罗纳德·里根<rt>Ronald Reagan</rt></ruby>,美国和苏联扬言要把地球拖入核战争之中。在檀香山(火奴鲁鲁)的市政中心正在举办一年一度的苹果公司销售会议。一群苹果公司的员工正在等待史蒂夫·乔布斯上台。他 28 岁,热情洋溢,看起来非常自信。乔布斯很严肃地对着麦克风说他邀请了三个行业专家来就软件进行了一次小组讨论。*[00:04:00]* 然而随后发生的事情你肯定想不到。超级俗气的 80 年代音乐响彻整个房间。一堆多彩灯管照亮了舞台,然后一个播音员的声音响起-

**配音:**女士们,先生们,现在是麦金塔软件的约会游戏时间。

**Saron Yitbarek:**乔布斯的脸上露出一个大大的笑容,台上有三个 CEO 都需要轮流向他示好。这基本上就是 80 年代钻石王老五,不过是科技界的。*[00:04:30]* 两个软件大佬讲完话后,然后就轮到第三个人讲话了。仅此而已不是吗?是的。新面孔比尔·盖茨带着一个大大的遮住了半张脸的方框眼镜。他宣称在 1984 年,微软的一半收入将来自于麦金塔软件。他的这番话引来了观众热情的掌声。*[00:05:00]* 但是他们不知道的是,在一个月后,比尔·盖茨将会宣布发布 Windows 1.0 的计划。你永远也猜不到乔布斯正在跟苹果未来最大的敌人打情骂俏。但微软和苹果即将经历科技史上最糟糕的婚礼。他们会彼此背叛、相互毁灭,但又深深地、痛苦地捆绑在一起。

**James Allworth:***[00:05:30]* 我猜从哲学上来讲,苹果更为理想化、注重用户体验高于一切,是一个一体化的组织,而微软则更务实,更模块化 ——

**Saron Yitbarek:**这位是 James Allworth。他是一位多产的科技作家,曾在苹果零售的企业团队工作。注意他给出的苹果的定义,一个一体化的组织,那种只对自己负责的公司,一个不想依赖别人的公司,这是关键。

**James Allworth:***[00:06:00]* 苹果是一家一体化的公司,它希望专注于令人愉悦的用户体验,这意味着它希望控制整个技术栈以及交付的一切内容:从硬件到操作系统,甚至运行在操作系统上的应用程序。当新的创新、重要的创新刚刚进入市场,而你需要横跨软硬件,并且能够根据自己意愿和软件的革新来改变硬件时,这是一个优势。例如 ——,

**Saron Yitbarek:***[00:06:30]* 很多人喜欢这种一体化的模式,并因此成为了苹果的铁杆粉丝。还有很多人则选择了微软。让我们回到檀香山的销售会议上,在同一场活动中,乔布斯向观众展示了他即将发布的超级碗广告。你可能已经亲眼见过这则广告了。想想<ruby>乔治·奥威尔<rt>George Orwell</rt></ruby>的 《一九八四》。在这个冰冷、灰暗的世界里,无意识的机器人正在独裁者的投射凝视下徘徊。*[00:07:00]* 这些机器人就像是 IBM 的用户们。然后,代表苹果公司的漂亮而健美的<ruby>安娅·梅杰<rt>Anya Major</rt></ruby>穿着鲜艳的衣服跑过大厅。她向着老大哥的屏幕猛地投出大锤,将它砸成了碎片。老大哥的咒语解除了,一个低沉的声音响起,苹果公司要开始介绍麦金塔了。

**配音:**这就是为什么现实中的 1984 跟小说《一九八四》不一样了。

**Saron Yitbarek:**是的,现在回顾那则广告,认为苹果是一个致力于解放大众的自由斗士的想法有点过分了。但这件事触动了我的神经。*[00:07:30]* Ken Segal 曾在为苹果制作这则广告的广告公司工作过。在早期,他为史蒂夫·乔布斯做了十多年的广告。

**Ken Segal:**1984 这则广告的风险很大。事实上,它的风险太大,以至于苹果公司在看到它的时候都不想播出它。你可能听说了史蒂夫喜欢它,但苹果公司董事会的人并不喜欢它。事实上,他们很愤怒,这么多钱被花在这么一件事情上,以至于他们想解雇广告代理商。*[00:08:00]* 史蒂夫则为我们公司辩护。

**Saron Yitbarek:**乔布斯,一如既往地,慧眼识英雄。

**Ken Segal:**这则广告在公司内、在业界内都引起了共鸣,成为了苹果产品的代表。无论人们那天是否有在购买电脑,它都带来了一种持续了一年又一年的影响,并有助于定义这家公司的品质:我们是叛军,我们是拿着大锤的人。

**Saron Yitbarek:***[00:08:30]* 因此,在争夺数十亿潜在消费者心智的过程中,苹果公司和微软公司的帝王们正在学着把自己塑造成救世主、非凡的英雄、一种对生活方式的选择。但比尔·盖茨知道一些苹果难以理解的事情。那就是在一个相互连接的世界里,没有人,即便是帝王,也不能独自完成任务。

*[00:09:00]* 1985 年 6 月 25 日。盖茨给当时的苹果 CEO John Scully 发了一份备忘录。那是一个迷失的年代。乔布斯刚刚被逐出公司,直到 1996 年才回到苹果。也许正是因为乔布斯离开了,盖茨才敢写这份东西。在备忘录中,他鼓励苹果授权制造商分发他们的操作系统。我想读一下备忘录的最后部分,让你们知道这份备忘录是多么的有洞察力。*[00:09:30]* 盖茨写道:“如果没有其他个人电脑制造商的支持,苹果现在不可能让他们的创新技术成为标准。苹果必须开放麦金塔的架构,以获得快速发展和建立标准所需的支持。”换句话说,你们不要再自己玩自己的了。你们必须有与他人合作的意愿。你们必须与开发者合作。

*[00:10:00]* 多年后你依然可以看到这种哲学思想,当微软首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>上台做主题演讲时,他开始大喊:“开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。”你懂我的意思了吧。微软喜欢开发人员。虽然目前(LCTT 译注:本播客发布于 2018 年初)他们不打算与这些开发人员共享源代码,但是他们确实想建立起整个合作伙伴生态系统。*[00:10:30]* 而当比尔·盖茨建议苹果公司也这么做时,如你可能已经猜到的,这个想法就被苹果公司抛到了九霄云外。他们的关系产生了间隙,五个月后,微软发布了 Windows 1.0。战争开始了。

> 开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。

*[00:11:00]* 你正在收听的是来自红帽公司的原创播客《代码英雄》。本集是第一集,我们将回到过去,重温操作系统战争的史诗故事,我们将会发现,科技巨头之间的战争是如何为我们今天所生活的开源世界扫清道路的。

好的,让我们先来个背景故事吧。如果你已经听过了,那么请原谅我,但它很经典。当时是 1979 年,史蒂夫·乔布斯开车去<ruby>帕洛阿尔托<rt>Palo Alto</rt></ruby>的<ruby>施乐帕克研究中心<rt>Xerox PARC research center</rt></ruby>。*[00:11:30]* 那里的工程师一直在为他们所谓的图形用户界面开发一系列的元素。也许你听说过。它们有菜单、滚动条、按钮、文件夹和重叠的窗口。这是对计算机界面的一个美丽的新设想。这是前所未有的。作家兼记者 Steven Levy 会谈到它的潜力。

**Steven Levy:***[00:12:00]* 对于这个新界面来说,有很多令人激动的地方,它比以前的交互界面更友好,以前用的所谓的命令行 —— 你和电脑之间的交互方式跟现实生活中的交互方式完全不同。鼠标和电脑上的图像让你可以做到像现实生活中的交互一样,你可以像指向现实生活中的东西一样指向电脑上的东西。这让事情变得简单多了。你无需记住所有那些命令。

**Saron Yitbarek:***[00:12:30]* 不过,施乐的高管们并没有意识到他们正坐在金矿上。一如既往地,工程师比主管们更清楚它的价值。因此那些工程师,当被要求向乔布斯展示所有这些东西是如何工作时,有点紧张。然而这毕竟是高管的命令。乔布斯觉得,用他的话来说,“这个产品天才本来能够让施乐公司垄断整个行业,可是它最终会被公司的经营者毁掉,因为他们对产品的好坏没有概念。”*[00:13:00]* 这话有些苛刻,但是,乔布斯带着一卡车施乐高管错过的想法离开了会议。这几乎包含了他需要革新桌面计算体验的所有东西。1983 年,苹果发布了 Lisa 电脑,1984 年又发布了 Mac 电脑。这些设备的创意是抄袭自施乐公司的。

让我感兴趣的是,乔布斯对控诉他偷了图形用户界面的反应。他对此很冷静。他引用毕加索的话:“好的艺术家抄袭,伟大的艺术家偷窃。”他告诉一位记者,“我们总是无耻地窃取伟大的创意。”*[00:13:30]* 伟大的艺术家偷窃,好吧,我的意思是,我们说的并不是严格意义上的“偷窃”。没人拿到了专有的源代码并公然将其集成到他们自己的操作系统中去。这要更温和些,更像是创意的借用。这就难控制得多了,就像乔布斯自己即将学到的那样。传奇的软件奇才、真正的代码英雄 Andy Hertzfeld 就是麦金塔开发团队的最初成员。

**Andy Hertzfeld:***[00:14:00]* 是的,微软是我们的第一个麦金塔电脑软件合作伙伴。当时,我们并没有把他们当成是竞争对手。他们是苹果之外,我们第一家交付麦金塔电脑原型的公司。我通常每周都会和微软的技术主管聊一次。他们是第一个尝试我们所编写软件的外部团队。*[00:14:30]* 他们给了我们非常重要的反馈,总的来说,我认为我们的关系非常好。但我也注意到,在我与技术主管的交谈中,他开始问一些系统实现方面的问题,而他本无需知道这些,我觉得,他们想要复制麦金塔电脑。我很早以前就向史蒂夫·乔布斯反馈过这件事,但在 1983 年秋天,这件事达到了高潮。*[00:15:00]* 我们发现,他们在 1983 年 11 月的 COMDEX 上发布了 Windows,但却没有提前告诉我们。对此史蒂夫·乔布斯勃然大怒。他认为那是一种背叛。

**Saron Yitbarek:**随着新版 Windows 的发布,很明显,微软从苹果那里学到了苹果从施乐那里学来的所有想法。乔布斯怒不可遏。他的关于伟大艺术家如何偷窃的毕加索名言被别人学去了,而且恐怕盖茨也正是这么做的。*[00:15:30]* 据报道,当乔布斯怒斥盖茨偷了他们的东西时,盖茨回应道:“史蒂夫,我觉得这更像是我们都有一个叫施乐的富有邻居,我闯进他家偷电视机,却发现你已经偷过了”。苹果最终以窃取 GUI 的外观和风格为名起诉了微软。这个案子持续了好几年,但是在 1993 年,第 9 巡回上诉法院的一名法官最终站在了微软一边。*[00:16:00]* Vaughn Walker 法官宣布外观和风格不受版权保护。这是非常重要的。这一决定让苹果无法垄断桌面计算的界面。很快,苹果短暂的领先优势消失了。以下是 Steven Levy 的观点。

**Steven Levy:**他们之所以失去领先地位,不是因为微软方面窃取了知识产权,而是因为他们无法巩固自己在上世纪 80 年代拥有的更好的操作系统的优势。坦率地说,他们的电脑索价过高。*[00:16:30]* 因此微软从 20 世纪 80 年代中期开始开发 Windows 系统,但直到 1990 年开发出的 Windows 3,我想,它才真正算是一个为黄金时期做好准备的版本,才真正可供大众使用。*[00:17:00]* 从此以后,微软能够将数以亿计的用户迁移到图形界面,而这是苹果无法做到的。虽然苹果公司有一个非常好的操作系统,但是那已经是 1984 年的产品了。

**Saron Yitbarek:**现在微软主导着操作系统的战场。他们占据了 90% 的市场份额,并且针对各种各样的个人电脑进行了标准化。操作系统的未来看起来会由微软掌控。此后发生了什么?*[00:17:30]* 1997 年,波士顿 Macworld 博览会上,你看到了一个几近破产的苹果,一个谦逊的多的史蒂夫·乔布斯走上舞台,开始谈论伙伴关系的重要性,特别是他们与微软的新型合作伙伴关系。史蒂夫·乔布斯呼吁双方缓和关系,停止火拼。微软将拥有巨大的市场份额。从表面看,我们可能会认为世界和平了。但当利益如此巨大时,事情就没那么简单了。*[00:18:00]* 就在苹果和微软在数十年的争斗中伤痕累累、最终败退到死角之际,一名 21 岁的芬兰计算机科学专业学生出现了。几乎是偶然地,他彻底改变了一切。

我是 Saron Yitbarek,这里是代码英雄。

正当某些科技巨头正忙着就专有软件相互攻击时,自由软件和开源软件的新领军者如雨后春笋般涌现。*[00:18:30]* 其中一位优胜者就是<ruby>理查德·斯托尔曼<rt>Richard Stallman</rt></ruby>。你也许对他的工作很熟悉。他想要有自由软件和自由社会。这就像言论自由一样的<ruby>自由<rt>free</rt></ruby>,而不是像免费啤酒一样的<ruby>免费<rt>free</rt></ruby>。早在 80 年代,斯托尔曼就发现,除了昂贵的专有操作系统(如 UNIX)外,就没有其他可行的替代品。因此他决定自己做一个。斯托尔曼的<ruby>自由软件基金会<rt>Free Software Foundation</rt></ruby>开发了 GNU,当然,它的意思是 “GNU's not UNIX”。它将是一个像 UNIX 一样的操作系统,但不包含所有的 UNIX 代码,而且用户可以自由共享。

*[00:19:00]* 为了让你体会到上世纪 80 年代自由软件概念的重要性,从不同角度来说拥有 UNIX 代码的两家公司,<ruby>AT&T 贝尔实验室<rt>AT&T Bell Laboratories</rt></ruby>以及<ruby>UNIX 系统实验室<rt>UNIX System Laboratories</rt></ruby>威胁将会起诉任何看过 UNIX 源代码后又创建自己操作系统的人,也就是那些二级许可证持有者。*[00:19:30]* 用这两家公司的话来说,所有这些程序员都在“精神上受到了污染”,因为他们都见过 UNIX 代码。在 UNIX 系统实验室和<ruby>伯克利软件设计公司<rt>Berkeley Software Design</rt></ruby>之间的一个著名的法庭案例中,有人认为任何功能类似的系统,即使它本身没有使用 UNIX 代码,也侵犯版权。Paul Jones 当时是一名开发人员。他现在是数字图书馆 ibiblio.org 的主管。

**Paul Jones:***[00:20:00]* 任何看过代码的人都受到了精神污染是他们的观点。因此几乎所有在安装有与 UNIX 相关操作系统的电脑上工作过的人以及任何在计算机科学部门工作的人都受到精神上的污染。因此,在 USENIX 的一年里,我们都得到了一些带有红色字母的白色小别针,上面写着“精神受到了污染”。我们很喜欢带着这些别针到处走,以表达我们跟着贝尔实验室混,因为我们的精神受到了污染。

**Saron Yitbarek:***[00:20:30]* 整个世界都被精神污染了。想要保持纯粹、保持事物的美好和专有的旧哲学正变得越来越不现实。正是在这被污染的现实中,历史上最伟大的代码英雄之一诞生了,他是一个芬兰男孩,名叫<ruby>林纳斯·托瓦兹<rt>Linus Torvalds</rt></ruby>。如果这是《星球大战》,那么林纳斯·托瓦兹就是我们的<ruby>卢克·天行者<rt>Luke Skywalker</rt></ruby>。他是赫尔辛基大学一名温文尔雅的研究生。*[00:21:00]* 有才华,但缺乏大志。典型的被逼上梁山的英雄。和其他年轻的英雄一样,他也感到沮丧。他想把 386 处理器整合到他的新电脑中。他对自己的 IBM 兼容电脑上运行的 MS-DOS 操作系统并不感冒,也负担不起 UNIX 软件 5000 美元的价格,而只有 UNIX 才能让他自由地编程。解决方案是托瓦兹在 1991 年春天基于 MINIX 开发了一个名为 Linux 的操作系统内核。他自己的操作系统内核。

**Steven Vaughan-Nichols:***[00:21:30]* 林纳斯·托瓦兹真的只是想找点乐子而已。

**Saron Yitbarek:**Steven Vaughan-Nichols 是 ZDNet.com 的特约编辑,而且他从科技行业出现以来就一直在写科技行业相关的内容。

**Steven Vaughan-Nichols:**当时有几个类似的操作系统。他最关注的是一个名叫 MINIX 的操作系统,MINIX 旨在让学生学习如何构建操作系统。林纳斯看到这些,觉得很有趣,但他想建立自己的操作系统。*[00:22:00]* 所以,它实际上始于赫尔辛基的一个 DIY 项目。一切就这样开始了,基本上就是一个大孩子在玩耍,学习如何做些什么。*[00:22:30]* 但不同之处在于,他足够聪明、足够执着,也足够友好,让所有其他人都参与进来,然后他开始把这个项目进行到底。27 年后,这个项目变得比他想象的要大得多。

**Saron Yitbarek:**到 1991 年秋季,托瓦兹发布了 10000 行代码,世界各地的人们开始评头论足,然后进行优化、添加和修改代码。*[00:23:00]* 对于今天的开发人员来说,这似乎很正常,但请记住,在那个时候,像这样的开放协作是对微软、苹果和 IBM 已经做得很好的整个专有系统的道德侮辱。随后这种开放性被奉若神明。托瓦兹将 Linux 置于 GNU 通用公共许可证(GPL)之下。曾经保障斯托尔曼的 GNU 系统自由的许可证现在也将保障 Linux 的自由。Vaughan-Nichols 解释道,将 Linux 置于 GPL 之下的重要性怎么强调都不过分,它基本上能永远保证软件的自由和开放性。

**Steven Vaughan-Nichols:***[00:23:30]* 事实上,根据 Linux 所遵循的许可协议,即 GPL 第 2 版,如果你想贩卖 Linux 或者向全世界展示它,你必须与他人共享代码,所以如果你对其做了一些改进,仅仅给别人使用是不够的。事实上你必须和他们分享所有这些变化的具体细节。然后,如果这些改进足够好,就会被 Linux 所吸收。

**Saron Yitbarek:***[00:24:00]* 事实证明,这种公开的方式极具吸引力。<ruby>埃里克·雷蒙德<rt>Eric Raymond</rt></ruby> 是这场运动的早期传道者之一,他在他那篇著名的文章中写道:“微软和苹果这样的公司一直在试图建造软件大教堂,而 Linux 及类似的软件则提供了一个由不同议程和方法组成的巨大集市,集市比大教堂有趣多了。”

**Stormy Peters:**我认为在那个时候,真正吸引人的是人们终于可以把控自己的世界了。

**Saron Yitbarek:**Stormy Peters 是一位行业分析师,也是自由和开源软件的倡导者。

**Stormy Peters:***[00:24:30]* 当开源软件第一次出现的时候,所有的操作系统都是专有的。如果不使用专有软件,你甚至不能添加打印机,你不能添加耳机,你不能自己开发一个小型硬件设备,然后让它在你的笔记本电脑上运行。你甚至不能放入 DVD 并复制它,因为你不能改变软件,即使你拥有这张 DVD,你也无法复制它。*[00:25:00]* 你无法控制你购买的硬件/软件系统。你不能从中创造出任何新的、更大的、更好的东西。这就是为什么开源操作系统在一开始就如此重要的原因。我们需要一个开源协作环境,在那里我们可以构建更大更好的东西。

**Saron Yitbarek:**请注意,Linux 并不是一个纯粹的平等主义乌托邦。林纳斯·托瓦兹不会批准对内核的所有修改,而是主导了内核的变更。他安排了十几个人来管理内核的不同部分。*[00:25:30]* 这些人也会信任自己下面的人,以此类推,形成信任金字塔。变化可能来自任何地方,但它们都是经过判断和策划的。

然而,考虑到林纳斯的 DIY 项目一开始是多么的简陋和随意,这项成就令人十分惊讶。他完全不知道自己就是这一切中的卢克·天行者。当时他只有 21 岁,一半的时间都在编程。但是当魔盒第一次被打开,人们开始给他反馈。*[00:26:00]* 几十个,然后几百个,成千上万的贡献者。有了这样的众包基础,Linux 很快就开始成长。真的成长得很快。甚至最终引起了微软的注意。他们的首席执行官<ruby>史蒂夫·鲍尔默<rt>Steve Ballmer</rt></ruby>将 Linux 称为是“一种癌症,从知识产权的角度来看,它传染了接触到的任何东西”。Steven Levy 解释了鲍尔默此言的由来。

**Steven Levy:***[00:26:30]* 一旦微软真正巩固了它的垄断地位,而且它也确实被联邦法院判定为垄断,他们将会对任何可能对其构成威胁的事情做出强烈反应。因此,既然他们对软件收费,很自然地,他们将自由软件的出现看成是一种癌症。他们试图提出一个知识产权理论,来解释为什么这对消费者不利。

**Saron Yitbarek:***[00:27:00]* Linux 在不断传播,微软也开始担心起来。到了 2006 年,Linux 成为仅次于 Windows 的第二大常用操作系统,全球约有 5000 名开发人员在参与开发它。5000 名开发者。还记得比尔·盖茨给苹果公司的备忘录吗?在那份备忘录中,他向苹果公司的员工们论述了与他人合作的重要性。事实证明,开源将把伙伴关系的概念提升到一个全新的水平,这是比尔·盖茨从未预见到的。

*[00:27:30]* 我们一直在谈论操作系统之间的大战,但是到目前为止,并没有怎么提到无名英雄和开发者们。在下次的代码英雄中,情况就不同了。第二集讲的是操作系统大战的第二部分,是关于 Linux 崛起的。业界醒悟过来,认识到了开发人员的重要性。这些开源反叛者变得越来越强大,战场从桌面转移到了服务器领域。*[00:28:00]* 这里有商业间谍活动、新的英雄人物,还有科技史上最不可思议的改变。这一切都在操作系统大战的后半集内达到了高潮。

要想免费自动获得新一集的代码英雄,请点击订阅苹果播客、Spotify、谷歌 Play,或其他应用获取该播客。在这一季剩下的时间里,我们将参观最新的战场,相互争斗的版图,这里是下一代的代码英雄留下印记的地方。*[00:28:30]* 更多信息,请访问 https://redhat.com/commandlineheroes 。我是 Saron Yitbarek。下次之前,继续编码。

--------------------------------------------------------------------------------

via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1

作者:[redhat][a]
选题:[lujun9972][b]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.redhat.com
[b]: https://github.com/lujun9972

published/20180622 Use LVM to Upgrade Fedora.md (new file)
@@ -0,0 +1,214 @@

使用 LVM 升级 Fedora
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)

大多数用户发现使用标准流程[从一个 Fedora 版本升级到下一个][1]很简单。但是,Fedora 升级也不可避免地会遇到许多特殊情况。本文介绍了使用 DNF 和逻辑卷管理(LVM)进行升级的一种方法,以便在出现问题时保留可引导备份。这个例子是将 Fedora 26 系统升级到 Fedora 28。

此处展示的过程比标准升级过程更复杂。在使用此过程之前,你应该充分掌握 LVM 的工作原理。如果没有适当的技能和细心,你可能会丢失数据和/或被迫重新安装系统!如果你不知道自己在做什么,那么**强烈建议**你坚持只使用得到支持的升级方法。

### 准备系统

在开始之前,请确保你的现有系统已完全更新。

```
$ sudo dnf update
$ sudo systemctl reboot # 或采用 GUI 方式
```

检查你的根文件系统是否是通过 LVM 挂载的。

```
$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% /

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22 vg_sdg -wi-ao---- 15.00g
f24_64 vg_sdg -wi-ao---- 20.00g
f26 vg_sdg -wi-ao---- 20.00g
home vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao---- 10.00g
swap vg_sdg -wi-ao---- 4.00g
test vg_sdg -wi-a----- 1.00g
vg_vm vg_sdg -wi-ao---- 20.00g
```

如果你在安装 Fedora 时使用了默认值,你可能会发现根文件系统挂载在名为 `root` 的逻辑卷(LV)上。卷组(VG)的名称可能会有所不同。看看根卷的总大小。在该示例中,根文件系统名为 `f26`,大小为 `20G`。

接下来,确保 LVM 中有足够的可用空间。

```
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg_sdg 1 8 0 wz--n- 232.39g 42.39g
```

该系统有足够的可用空间,可以为升级后的 Fedora 28 的根卷分配 20G 的逻辑卷。如果你使用的是默认安装,则你的 LVM 中将没有可用空间。对 LVM 的一般性管理超出了本文的范围,但这里有一些情形下可能采取的方法:

1、`/home` 在自己的逻辑卷,而且 `/home` 中有大量空闲空间。

你可以从图形界面中注销并切换到文本控制台,以 `root` 用户身份登录。然后你可以卸载 `/home`,并使用 `lvreduce -r` 调整大小并重新分配 `/home` 逻辑卷。你也可以从<ruby>现场镜像<rt>Live image</rt></ruby>启动(以便不使用 `/home`)并使用 gparted GUI 实用程序进行分区调整。

2、大多数 LVM 空间被分配给根卷,该文件系统中有大量可用空间。

你可以从现场镜像启动并使用 gparted GUI 实用程序来减少根卷的大小。此时也可以考虑将 `/home` 移动到另外的文件系统,但这超出了本文的范围。

3、大多数文件系统已满,但你有个已经不再需要的逻辑卷。

你可以删除不需要的逻辑卷,释放卷组中的空间以进行此操作。
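
以第 1 种和第 3 种情形为例,下面是一个可能的命令示意(其中的卷名沿用本文示例中的 `vg_sdg`、`home` 和 `test`,请替换为你系统上的实际名称;这些命令会修改卷和文件系统,操作前务必先做好备份):

```
# 情形 1:先卸载 /home,再缩小其逻辑卷(-r 会同时缩小其中的文件系统)
$ sudo umount /home
$ sudo lvreduce -r -L -20G vg_sdg/home

# 情形 3:删除不再需要的逻辑卷,释放卷组空间(此处以示例卷 test 为例)
$ sudo lvremove vg_sdg/test
```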

### 创建备份

首先,为升级后的系统分配新的逻辑卷。确保为系统的卷组(VG)使用正确的名称。在这个例子中它是 `vg_sdg`。

```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```

接下来,创建当前根文件系统的快照。此示例创建名为 `f26_s` 的快照卷。

```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```

现在可以将快照复制到新逻辑卷。当你替换自己的卷名时,**请确保目标正确**。如果不小心,就会不可撤销地删除了数据。此外,请确保你从根卷的快照复制,**而不是**从你的现在的根卷。

```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```

给新文件系统一个唯一的 UUID。这不是绝对必要的,但 UUID 应该是唯一的,因此这避免了未来的混淆。以下是在 ext4 根文件系统上的方法:

```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```

然后删除不再需要的快照卷:

```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```

如果你单独挂载了 `/home`,你可能希望在此处制作 `/home` 的快照。有时,升级的应用程序会进行与旧版 Fedora 版本不兼容的更改。如果需要,编辑**旧**根文件系统上的 `/etc/fstab` 文件以在 `/home` 上挂载快照。请记住,当快照已满时,它将消失!另外,你可能还希望给 `/home` 做个正常备份。
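
一个可能的做法如下(假设 `/home` 位于 `vg_sdg/home`,快照命名为 `home_s`;这两个名称均为示例,快照大小也需按 `/home` 的变更量自行估计):

```
$ sync
$ sudo lvcreate -s -L4G -n home_s vg_sdg/home
```

然后,如果需要在旧系统中改用该快照,可以在**旧**根文件系统的 `/etc/fstab` 中把 `/home` 的挂载条目指向快照设备:

```
/dev/vg_sdg/home_s /home ext4 defaults 1 2
```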

### 配置以使用新的根

首先,安装新的逻辑卷并备份现有的 GRUB 设置:

```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```

编辑 `grub.cfg` 并在第一个菜单项 `menuentry` 之前添加这些,除非你已经有了:

```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```

编辑 `grub.cfg` 并更改默认菜单项以激活并挂载新的根文件系统。改变这一行:

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

将其改为下面这样。请记住使用你系统上正确的卷组和逻辑卷名称!

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

编辑 `/mnt/f28/etc/default/grub` 并改变在启动时激活的默认的根卷:

```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```

编辑 `/mnt/f28/etc/fstab`,将根文件系统的挂载条目从旧的逻辑卷:

```
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
```

改为新的:

```
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
```

然后,出于参考的用途,只读挂载旧的根卷:

```
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
```

如果你的根文件系统是通过 UUID 挂载的,你需要改变这个方式。如果你的根文件系统是 ext4,你可以这样做:

```
$ sudo e2label /dev/vg_sdg/f28 F28
```

现在编辑 `/mnt/f28/etc/fstab` 使用该卷标。改变该根文件系统的挂载行,像这样:

```
LABEL=F28 / ext4 defaults 1 1
```

### 重启与升级

重新启动,你的系统将使用新的根文件系统。它仍然是 Fedora 26,但是是带有新的逻辑卷名称的副本,并可以进行 `dnf` 系统升级!如果出现任何问题,请使用旧引导菜单引导回到你的工作系统,此过程可避免触及旧系统。

```
$ sudo systemctl reboot # 或采用 GUI 方式
...
$ df / /f26
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% /
/dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26
```

你可能希望验证一下:使用旧的引导菜单,确实可以让你回到挂载在旧的根文件系统上的系统。

现在按照[此维基页面][2]中的说明进行操作。如果系统升级出现任何问题,你还会有一个可以重启回去的工作系统。

### 进一步的考虑

创建新的逻辑卷并将根卷的快照复制到其中的步骤可以使用通用脚本自动完成。它只需要新的逻辑卷的名称,因为现有根的大小和设备很容易确定。例如,可以输入以下命令:

```
$ sudo copyfs / f28
```

提供挂载点以进行复制可以更清楚地了解发生了什么,并且复制其他挂载点(例如 `/home`)可能很有用。
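
这个 `copyfs` 脚本在系统中并不存在,需要自己编写。下面是一个把前文各步骤串起来的极简示意(仅为草稿,假设源文件系统位于 LVM 上且为 ext4,未做充分的错误检查,使用前请逐行核对):

```
#!/bin/bash
# 用法:copyfs <挂载点> <新逻辑卷名>,例如:copyfs / f28
set -e
mnt=$1
newlv=$2
# 找出挂载点对应的设备,例如 /dev/mapper/vg_sdg-f26
dev=$(df --output=source "$mnt" | tail -n1)
vg=$(lvs --noheadings -o vg_name "$dev" | tr -d ' ')
size=$(lvs --noheadings -o lv_size --units g "$dev" | tr -d ' ')
# 创建同样大小的新逻辑卷,对源卷做快照后复制,最后清理快照
lvcreate -L "$size" -n "$newlv" "$vg"
sync
lvcreate -s -L1G -n "${newlv}_snap" "$dev"
dd if="/dev/$vg/${newlv}_snap" of="/dev/$vg/$newlv" bs=256k
lvremove -y "$vg/${newlv}_snap"
# 检查新文件系统并赋予其唯一的 UUID
e2fsck -f "/dev/$vg/$newlv"
tune2fs -U random "/dev/$vg/$newlv"
```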

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-lvm-upgrade-fedora/

作者:[Stuart D Gathman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/sdgathman/
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade

@@ -0,0 +1,69 @@

使用 MacSVG 创建 SVG 动画
======

> 开源 SVG:墙上的魔法字。

![](https://img.linux.net.cn/data/attachment/album/201908/18/000809mzl1wb1ww754z455.jpg)

新巴比伦的摄政王[伯沙撒][1]没有注意到他在盛宴期间神奇地[书写在墙上的文字][2]。但是,如果他在公元前 539 年有一台笔记本电脑和良好的互联网连接,他可能会通过在浏览器上阅读 SVG 来避开那些讨厌的波斯人。

出现在网页上的动画文本和对象是建立用户兴趣和参与度的好方法。有几种方法可以实现这一点,例如视频嵌入、动画 GIF 或幻灯片 —— 但你也可以使用[可缩放矢量图形(SVG)][3]。

SVG 图像与 JPG 不同,因为它可以缩放而不会丢失其分辨率。矢量图像是由点而不是像素创建的,所以无论它放大到多大,它都不会失去分辨率或像素化。充分利用可缩放的静态图像的一个例子是网站的徽标。

### 动起来,动起来

你可以使用多种绘图程序创建 SVG 图像,包括开源的 [Inkscape][4] 和 Adobe Illustrator。让你的图像“能动起来”需要更多的努力。幸运的是,有一些开源解决方案甚至可以引起伯沙撒的注意。

[MacSVG][5] 是一款可以让你的图像动起来的工具。你可以在 [GitHub][6] 上找到源代码。

根据其[官网][5]说,MacSVG 由阿肯色州康威的 Douglas Ward 开发,是一个“用于设计 HTML5 SVG 艺术和动画的开源 Mac OS 应用程序”。

我想使用 MacSVG 来创建一个动画签名。我承认我发现这个过程有点令人困惑,并且在我第一次尝试创建一个实际的动画 SVG 图像时失败了。

![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)

重要的是,首先要了解你要展示的书法内容实际写的是什么。

动画文字背后的属性是 [stroke-dasharray][7]。将该术语分成三个单词有助于解释正在发生的事情:“stroke” 是指用笔(无论是物理的笔还是数字化笔)制作的线条或笔画。“dash” 意味着将笔划分解为一系列折线。“array” 意味着将整个东西生成为数组。这是一个简单的概述,但它可以帮助我理解应该发生什么以及为什么。
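
在看完整的签名之前,可以先看一个极简的示意(下面的路径坐标、长度和时长都是随意取的示例值):把 `stroke-dasharray` 的值从“虚线间隔为 0、空白为整条路径长度”动画到“虚线为整条路径长度、空白为 0”,线条看起来就像被一笔一笔写出来一样。

```
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <path d="M10,50 C60,10 140,90 190,50" fill="none"
        stroke="black" stroke-width="3">
    <!-- 路径总长约 200:从“全部隐藏”动画到“全部画出”,模拟书写 -->
    <animate attributeName="stroke-dasharray"
             values="0,200;200,0" dur="2s" fill="freeze"></animate>
  </path>
</svg>
```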

使用 MacSVG,你可以导入图形(.PNG)并使用钢笔工具描绘书写路径。我使用了草书来表示我的名字。然后,只需应用该属性来让书法动画起来、增加和减少笔划的粗细、改变其颜色等等。完成后,动画的书法将导出为 .SVG 文件,并可以在网络上使用。除书写外,MacSVG 还可用于许多不同类型的 SVG 动画。

### 在 WordPress 中书写

我准备在我的 [WordPress][8] 网站上传和分享我的 SVG 示例,但我发现 WordPress 不允许进行 SVG 媒体导入。幸运的是,我找到了一个方便的插件:Benbodhi 的 [SVG 支持][9]插件允许快速、轻松地导入我的 SVG,就像我将 JPG 导入媒体库一样。我能够在世界各地向巴比伦人展示我[写在墙上的魔法字][10]。

我在 [Brackets][11] 中打开了这个 SVG 的源代码,结果如下:

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```

你会使用 MacSVG 做什么?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation

作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rikki-endsley
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Belshazzar
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[4]: https://inkscape.org/
[5]: https://macsvg.org/
[6]: https://github.com/dsward2/macSVG
[7]: https://gist.github.com/mbostock/5649592
[8]: https://macharyas.com/
[9]: https://wordpress.org/plugins/svg-support/
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
[11]: http://brackets.io/

published/20181202 How To Customize The GNOME 3 Desktop.md (new file)
@@ -0,0 +1,285 @@

[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: subject: (How To Customize The GNOME 3 Desktop?)
[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
[#]: url: (https://linux.cn/article-11256-1.html)

如何自定义 GNOME 3 桌面?
======

我们收到很多来自用户的电子邮件,要我们写一篇关于 GNOME 3 桌面自定义的文章,但是,我们一直没有时间来写这个主题。

在很长时间内,我一直在我的主要笔记本电脑上使用 Ubuntu 操作系统,并且,渐感无聊,我想测试一些与 Arch Linux 相关的其它发行版。

我比较喜欢 Manjaro,我在我的笔记本电脑中安装了使用 GNOME 3 桌面的 Manjaro 18.0。

我按照自己想要的样子自定义了我的桌面。所以,我想抓住这个机会详细撰写这篇文章,以帮助其他人轻松地自定义他们的桌面。

我不打算把我所有的自定义都包括进来,只会添加那些对 Linux 桌面用户来说必要且有用的内容。

如果你觉得这篇文章中缺少一些调整,请你在评论区提到缺少的东西。这对其它用户来说是非常有用的。

### 1) 如何在 GNOME 3 桌面中启动活动概述?

按下 `Super` 键,或在左上角单击“活动”按钮,即可启动活动概述,它将显示所有正在运行的应用程序和窗口。

它允许你启动一个新的应用程序、切换窗口,以及在工作空间之间移动窗口。

你可以通过选择一个窗口、应用程序或工作空间,或者按 `Super` 键或 `Esc` 键,退出活动概述。

![][2]

*活动概述屏幕截图*

### 2) 如何在 GNOME 3 桌面中重新调整窗口大小?

可以通过下面的组合键将已打开的窗口最大化、取消最大化,或吸附到屏幕的一侧(左侧或右侧)。

* `Super Key+下箭头`:取消最大化窗口。
* `Super Key+上箭头`:最大化窗口。
* `Super Key+右箭头`:使窗口填充屏幕的右半边。
* `Super Key+左箭头`:使窗口填充屏幕的左半边。

![][3]

*使用 `Super Key+下箭头` 来取消最大化窗口。*

![][4]

*使用 `Super Key+上箭头` 来最大化窗口。*

![][5]

*使用 `Super Key+右箭头` 使窗口填充屏幕的右半边。*

![][6]

*使用 `Super Key+左箭头` 使窗口填充屏幕的左半边。*

这个功能可以让你一次查看两个应用程序,也就是“分屏”。

![][7]

### 3) 如何在 GNOME 3 桌面中显示应用程序?

在 Dash 中,单击“显示应用程序”网格按钮,来显示你的系统上所有已安装的应用程序。

![][8]

### 4) 如何在 GNOME 3 桌面中的 Dash 中添加应用程序?

为加速你的日常操作,你可能想要把频繁使用的应用程序添加到 Dash 中,只需将应用程序启动器拖拽到 Dash 上即可。

它将允许你直接启动你收藏的应用程序,而不用先去搜索。为此,只需在应用程序上右击,并选择“添加到收藏夹”。

![][9]

要从 Dash 中移除一个应用程序启动器(即收藏的程序),可以将它从 Dash 拖拽到应用程序网格中,或者在它上面右击,并选择“从收藏夹中移除”。

![][10]

### 5) 如何在 GNOME 3 桌面中的工作空间之间切换?

工作空间允许你将窗口组合在一起。它将帮助你恰当地划分你的工作。如果你同时进行多项工作,并且想把每项工作及其相关内容分开分组,那么它将非常便利,对你来说是一个非常方便和完美的选项。

你可以用两种方法切换工作空间:打开活动概述,并从右手边选择一个工作空间,或者使用下面的组合键。

* 使用 `Ctrl+Alt+Up` 切换到上一个工作空间。
* 使用 `Ctrl+Alt+Down` 切换到下一个工作空间。

![][11]

### 6) 如何在 GNOME 3 桌面中的应用程序之间切换(应用程序切换器)?

为启动应用程序切换器并在应用程序之间切换,使用 `Alt+Tab` 或 `Super+Tab`。

启动后,只需按住 `Alt` 或 `Super` 键,再按 `Tab` 键,即可从左到右依次切换到下一个应用程序。

### 7) 如何在 GNOME 3 桌面中添加用户姓名到顶部面板?

如果你想添加你的用户姓名到顶部面板,那么安装下面的[添加用户姓名到顶部面板][12] GNOME 扩展。

![][13]

### 8) 如何在 GNOME 3 桌面中添加微软 Bing 的桌面背景?

安装下面的 [Bing 桌面背景更换器][14] GNOME shell 扩展,来每天更改你的桌面背景为微软 Bing 的桌面背景。

![][15]

### 9) 如何在 GNOME 3 桌面中启用夜光?

夜光是著名的应用程序之一,它通过在日落后把你的屏幕从蓝光调成暗黄色,来减轻眼睛疲劳。

它在智能手机上也可用。目标相同的其它知名应用程序是 f.lux 和 [redshift][16]。

为启用这个特性,导航到 **系统设置** >> **设备** >> **显示**,并打开夜光。

![][17]

在它启用后,状态图标将被放置到顶部面板上。

![][18]
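
如果你偏好命令行,也可以用 `gsettings` 打开夜光(假设你的 GNOME 3 版本提供下列设置键;其中 3700 这个色温值仅为示例):

```
$ gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true
$ gsettings set org.gnome.settings-daemon.plugins.color night-light-temperature 3700
```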

### 10) 如何在 GNOME 3 桌面中显示电池百分比?

电池百分比将向你精确地显示电池使用情况。为启用这个功能,遵循下面的步骤。

启动 GNOME Tweaks >> **顶部栏** >> **电池百分比**,并打开它。

![][19]

在修改后,你能够在顶部面板上看到电池百分比图标。

![][20]
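
与使用 GNOME Tweaks 等价,也可以用一条 `gsettings` 命令打开电池百分比显示:

```
$ gsettings set org.gnome.desktop.interface show-battery-percentage true
```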

### 11) 如何在 GNOME 3 桌面中启用鼠标右键单击?

在 GNOME 3 桌面环境中,右键单击默认是禁用的。为启用这个特性,遵循下面的步骤。

启动 GNOME Tweaks >> **键盘和鼠标** >> 鼠标点击硬件仿真,并选择“区域”选项。

![][21]

### 12) 如何在 GNOME 3 桌面中启用单击最小化?

启用“单击最小化”功能后,单击 Dock 上应用程序的图标即可最小化已打开的窗口,而不必使用窗口上的最小化按钮。

```
$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
```

### 13) 如何在 GNOME 3 桌面中自定义 Dock?

如果你想更改你的 Dock,使其类似于 Deepin 桌面或 Mac 桌面,那么使用下面的一组命令。

```
$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED
$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50
```

![][22]

### 14) 如何在 GNOME 3 桌面中显示桌面?

默认情况下,`Super 键 + D` 快捷键不能显示你的桌面。为配置它,遵循下面的步骤。

打开 设置 >> **设备** >> **键盘**,单击“导航”下的 **隐藏所有普通窗口**,然后按 `Super 键 + D`,最后按“设置”按钮来启用它。

![][23]
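
上面的设置也可以在命令行中完成,直接把“隐藏所有普通窗口”绑定到 `Super + D`(假设该组合键未被其它快捷键占用):

```
$ gsettings set org.gnome.desktop.wm.keybindings show-desktop "['<Super>d']"
```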

### 15) 如何自定义日期和时间格式?

GNOME 3 默认用 `Sun 04:48` 的格式来显示日期和时间,不够清晰易懂。如果你想获得 `Sun Dec 2 4:49 AM` 这样格式的输出,遵循下面的步骤。

**对于日期修改:** 打开 GNOME Tweaks >> **顶部栏**,并在“时钟”下启用“星期”选项。

![][24]

**对于时间修改:** 设置 >> **详细信息** >> **日期和时间**,然后在时间格式中选择 `AM/PM` 选项。

![][25]

在修改后,你能够看到与下面相同的日期和时间格式。

![][26]

### 16) 如何在启动程序中永久地禁用不使用的服务?

就我来说,我不使用 **蓝牙** 和 **cups(打印机服务)**,因此,在我的笔记本电脑上禁用了这些服务。在基于 Arch 的系统上禁用服务时,软件包可使用 [Pacman 软件包管理器][27] 管理。

对于蓝牙:

```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
$ sudo systemctl mask bluetooth.service
$ systemctl status bluetooth.service
```

对于 cups:

```
$ sudo systemctl stop org.cups.cupsd.service
$ sudo systemctl disable org.cups.cupsd.service
$ sudo systemctl mask org.cups.cupsd.service
$ systemctl status org.cups.cupsd.service
```

最后,使用以下命令验证这些服务是否在启动项中被禁用。如果你想再次确认这一点,可以重新启动一次,并检查同样的东西。导航到以下链接来了解更多关于 [systemctl][28] 的用法。

```
$ systemctl list-unit-files --type=service | grep enabled
[email protected] enabled
dbus-org.freedesktop.ModemManager1.service enabled
dbus-org.freedesktop.NetworkManager.service enabled
dbus-org.freedesktop.nm-dispatcher.service enabled
display-manager.service enabled
gdm.service enabled
[email protected] enabled
linux-module-cleanup.service enabled
ModemManager.service enabled
NetworkManager-dispatcher.service enabled
NetworkManager-wait-online.service enabled
NetworkManager.service enabled
systemd-fsck-root.service enabled-runtime
tlp-sleep.service enabled
tlp.service enabled
```

### 17) 如何在 GNOME 3 桌面中安装图标和主题?

有大量的图标和主题可供 GNOME 桌面使用,因此,选择吸引你的 [GTK 主题][29] 和 [图标主题][30]。

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg
[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg
[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg
[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg
[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg
[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg
[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg
[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg
[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg
[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg
[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/
[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg
|
||||
[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg
|
||||
[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/
|
||||
[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg
|
||||
[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg
|
||||
[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg
|
||||
[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg
|
||||
[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg
|
||||
[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg
|
||||
[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg
|
||||
[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg
|
||||
[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg
|
||||
[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg
|
||||
[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/
|
||||
[29]: https://www.2daygeek.com/category/gtk-theme/
|
||||
[30]: https://www.2daygeek.com/category/icon-theme/
@@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11245-1.html)
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

如何打开和关闭树莓派(绝对新手)
======

> 这篇短文教你如何打开树莓派,以及如何在之后正确关闭它。

![](https://img.linux.net.cn/data/attachment/album/201908/19/192825rlrjy3sj77j7j79y.jpg)

[树莓派][1]是[最流行的 SBC(单板计算机)][2]之一。如果你对这个话题感兴趣,我相信你已经有了一个树莓派。我还建议你使用[其他树莓派配件][3]来开始使用你的设备。

你已经准备好打开它并开始使用了。与台式机和笔记本电脑等传统电脑相比,它既有相似之处,也有不同之处。

今天,让我们继续学习如何打开和关闭树莓派,因为它并没有真正的“电源按钮”。

在本文中,我使用的是树莓派 3B+,但这些内容适用于所有树莓派型号。

### 如何打开树莓派

![Micro USB port for Power][7]

micro USB 口为树莓派供电,打开它的方式是将电源线插入 micro USB 口。但是开始之前,你应该确保做了以下事情。

  * 根据官方[指南][8]准备带有 Raspbian 的 micro SD 卡并插入 micro SD 卡插槽。
  * 插入 HDMI 线、USB 键盘和鼠标。
  * 插入以太网线(可选)。

完成上述操作后,请插入电源线。这会打开树莓派,显示屏将亮起并加载操作系统。

如果你将其关闭并且想要再次打开它,则必须从电源插座(首选)或从电路板的电源端口拔下电源线,然后再插上。它没有电源按钮。

### 关闭树莓派

关闭树莓派非常简单,单击菜单按钮并选择关闭。

![Turn off Raspberry Pi graphically][9]

或者你可以在终端使用 [shutdown 命令][10]:

```
sudo shutdown now
```

`shutdown` 执行后**等待**它完成,接着你可以关闭电源。

再说一次,树莓派关闭后,没有真正的办法可以在不断电再通电的情况下打开树莓派。你可以使用 GPIO 打开树莓派,但这需要额外的改装。

*注意:Micro USB 口往往比较脆弱,因此请通过开关来关闭/打开电源,而不要经常拔插 micro USB 线。*

好吧,这就是关于打开和关闭树莓派的所有内容,你打算用它做什么?请在评论中告诉我。
--------------------------------------------------------------------------------

via: https://itsfoss.com/turn-on-raspberry-pi/

作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://linux.cn/article-10823-1.html
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
[10]: https://linuxhandbook.com/linux-shutdown-command/
@@ -1,38 +1,38 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11243-1.html)
[#]: subject: (How To Check Linux Package Version Before Installing It)
[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Check Linux Package Version Before Installing It
如何在安装之前检查 Linux 软件包的版本?
======

![Check Linux Package Version][1]

Most of you will know how to [**find the version of an installed package**][2] in Linux. But, what would you do to find the packages’ version which are not installed in the first place? No problem! This guide describes how to check Linux package version before installing it in Debian and its derivatives like Ubuntu. This small tip might be helpful for those wondering what version they would get before installing a package.
大多数人都知道如何在 Linux 中[查找已安装软件包的版本][2],但是,你会如何查找那些还没有安装的软件包的版本呢?很简单!本文将介绍在 Debian 及其衍生品(如 Ubuntu)中,如何在软件包安装之前检查它的版本。对于那些想在安装之前知道软件包版本的人来说,这个小技巧可能会有所帮助。

### Check Linux Package Version Before Installing It
### 在安装之前检查 Linux 软件包版本

There are many ways to find a package’s version even if it is not installed already in DEB-based systems. Here I have given a few methods.
在基于 DEB 的系统中,即使软件包还没有安装,也有很多方法可以查看它的版本。接下来,我将一一介绍。

##### Method 1 – Using Apt
#### 方法 1 – 使用 Apt

The quick and dirty way to check a package version, simply run:
检查软件包的版本的懒人方法:

```
$ apt show <package-name>
```

**Example:**
**示例:**

```
$ apt show vim
```

**Sample output:**
**示例输出:**

```
Package: vim
@@ -67,23 +67,21 @@ Description: Vi IMproved - enhanced vi editor
N: There is 1 additional record. Please use the '-a' switch to see it
```

As you can see in the above output, “apt show” command displays, many important details of the package such as,
正如你在上面的输出中看到的,`apt show` 命令显示了软件包许多重要的细节,例如:

1. package name,
2. version,
3. origin (from where the vim comes from),
4. maintainer,
5. home page of the package,
6. dependencies,
7. download size,
8. description,
9. and many.
1. 包名称,
2. 版本,
3. 来源(vim 来自哪里),
4. 维护者,
5. 包的主页,
6. 依赖,
7. 下载大小,
8. 简介,
9. 其他。

因此,Ubuntu 仓库中可用的 Vim 版本是 **8.0.1453**。如果我把它安装到我的 Ubuntu 系统上,就会得到这个版本。

So, the available version of Vim package in the Ubuntu repositories is **8.0.1453**. This is the version I get if I install it on my Ubuntu system.

Alternatively, use **“apt policy”** command if you prefer short output:
或者,如果你不想看那么多的内容,那么可以使用 `apt policy` 这个命令:

```
$ apt policy vim
@@ -98,7 +96,7 @@ vim:
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

Or even shorter:
甚至更短:

```
$ apt list vim
@@ -107,17 +105,17 @@ vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
N: There is 1 additional version. Please use the '-a' switch to see it
```

**Apt** is the default package manager in recent Ubuntu versions. So, this command is just enough to find the detailed information of a package. It doesn’t matter whether given package is installed or not. This command will simply list the given package’s version along with all other details.
`apt` 是 Ubuntu 最新版本的默认包管理器。因此,这个命令足以找到一个软件包的详细信息,给定的软件包是否安装并不重要。这个命令将简单地列出给定包的版本以及其他详细信息。

##### Method 2 – Using Apt-get
#### 方法 2 – 使用 Apt-get

To find a package version without installing it, we can use **apt-get** command with **-s** option.
要查看软件包的版本而不安装它,我们可以使用 `apt-get` 命令和 `-s` 选项。

```
$ apt-get -s install vim
```

**Sample output:**
**示例输出:**

```
NOTE: This is only a simulation!
@@ -136,19 +134,19 @@ Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
```

Here, -s option indicates **simulation**. As you can see in the output, It performs no action. Instead, It simply performs a simulation to let you know what is going to happen when you install the Vim package.
这里,`-s` 选项代表 **模拟**。正如你在输出中看到的,它不执行任何操作。相反,它只是模拟执行,好让你知道在安装 Vim 时会发生什么。
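在脚本里,也可以直接从这种模拟输出中提取将要安装的版本号。下面是一个简化示例(`sim_output` 是本文假设的函数名,用文中的示例输出代替真实的 `apt-get -s install vim` 调用):

```shell
#!/bin/sh
# 示例数据:模拟 apt-get -s install vim 的部分输出
sim_output() {
  cat <<'EOF'
NOTE: This is only a simulation!
Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
EOF
}

# 取 "Inst" 行的第三个字段并去掉括号,即为将要安装的版本号
version=$(sim_output | awk '/^Inst / { gsub(/[()]/, "", $3); print $3; exit }')
echo "$version"
```

在真实系统上,把 `sim_output` 换成对 `apt-get -s install` 的调用即可。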
You can substitute “install” option with “upgrade” option to see what will happen when you upgrade a package.
你可以将 `install` 选项替换为 `upgrade`,以查看升级包时会发生什么。

```
$ apt-get -s upgrade vim
```

##### Method 3 – Using Aptitude
#### 方法 3 – 使用 Aptitude

**Aptitude** is an ncurses and commandline-based front-end to APT package manger in Debian and its derivatives.
在 Debian 及其衍生品中,`aptitude` 是一个基于 ncurses(LCTT 译注:ncurses 是一个终端环境下基于文本的字符处理库)和命令行的 APT 包管理器前端。

To find the package version with Aptitude, simply run:
使用 aptitude 来查看软件包的版本,只需运行:

```
$ aptitude versions vim
@@ -156,7 +154,7 @@ p 2:8.0.1453-1ubuntu1
p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500
```

You can also use simulation option ( **-s** ) to see what would happen if you install or upgrade package.
你还可以使用模拟选项(`-s`)来查看安装或升级包时会发生什么。

```
$ aptitude -V -s install vim
@@ -167,33 +165,29 @@ Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
Would download/install/remove packages.
```

Here, **-V** flag is used to display detailed information of the package version.

Similarly, just substitute “install” with “upgrade” option to see what would happen if you upgrade a package.
这里,`-V` 标志用于显示软件包的详细信息。

```
$ aptitude -V -s upgrade vim
```

Another way to find the non-installed package’s version using Aptitude command is:
类似的,只需将 `install` 替换为 `upgrade` 选项,即可查看升级包会发生什么。

```
$ aptitude search vim -F "%c %p %d %V"
```

Here,
这里,

* **-F** is used to specify which format should be used to display the output,
* **%c** – status of the given package (installed or not installed),
* **%p** – name of the package,
* **%d** – description of the package,
* **%V** – version of the package.
* `-F` 用于指定应使用哪种格式来显示输出,
* `%c` – 包的状态(已安装或未安装),
* `%p` – 包的名称,
* `%d` – 包的简介,
* `%V` – 包的版本。

当你不知道完整的软件包名称时,这非常有用。这个命令将列出包含给定字符串(即 vim)的所有软件包。

This is helpful when you don’t know the full package name. This command will list all packages that contains the given string (i.e vim).

Here is the sample output of the above command:
以下是上述命令的示例输出:

```
[...]
@@ -207,17 +201,17 @@ p vim-voom Vim two-pane out
p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git
```

##### Method 4 – Using Apt-cache
#### 方法 4 – 使用 Apt-cache

**Apt-cache** command is used to query APT cache in Debian-based systems. It is useful for performing many operations on APT’s package cache. One fine example is we can [**list installed applications from a certain repository/ppa**][3].
`apt-cache` 命令用于查询基于 Debian 的系统中的 APT 缓存。在需要对 APT 的包缓存执行很多操作时,它很有用。一个很好的例子是我们可以从[某个仓库或 PPA 中列出已安装的应用程序][3]。

Not just installed applications, we can also find the version of a package even if it is not installed. For instance, the following command will find the version of Vim package:
不仅是已安装的应用程序,我们还可以找到软件包的版本,即使它没有被安装。例如,以下命令将找到 Vim 的版本:

```
$ apt-cache policy vim
```

Sample output:
示例输出:

```
vim:
@@ -231,19 +225,19 @@ vim:
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```

As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. It also displays from which repository the vim package is coming from.
正如你在上面的输出中所看到的,Vim 并没有安装。如果你想安装它,你会知道它的版本是 **8.0.1453**。它还显示 vim 包来自哪个仓库。
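顺带一提,`apt-cache policy` 的输出也很容易在脚本里解析出候选版本(`Candidate`)。下面是一个简化示例(`policy_output` 是本文假设的函数名,用示例输出代替真实的 `apt-cache policy vim` 调用):

```shell
#!/bin/sh
# 示例数据:模拟 apt-cache policy vim 的部分输出
policy_output() {
  cat <<'EOF'
vim:
  Installed: (none)
  Candidate: 2:8.0.1453-1ubuntu1.1
  Version table:
EOF
}

# "Candidate:" 行的第二个字段即候选版本号
candidate=$(policy_output | awk '/Candidate:/ { print $2 }')
echo "$candidate"
```

在真实系统上,把 `policy_output` 换成对 `apt-cache policy` 的调用即可。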
##### Method 5 – Using Apt-show-versions
#### 方法 5 – 使用 Apt-show-versions

**Apt-show-versions** command is used to list installed and available package versions in Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment. For instance, if you have enabled both stable and testing repositories, you can easily find the list of applications from testing and also you can upgrade all packages in testing.
在 Debian 和基于 Debian 的系统中,`apt-show-versions` 命令用于列出已安装和可用软件包的版本。它还会显示所有可升级软件包的列表。如果你使用混合的稳定版/测试版环境,这会非常方便。例如,如果你同时启用了稳定版和测试版仓库,那么你可以轻松地找到来自测试版的应用程序列表,还可以升级测试版中的所有软件包。

Apt-show-versions is not installed by default. You need to install it using command:
默认情况下系统没有安装 `apt-show-versions`,你需要使用以下命令来安装它:

```
$ sudo apt-get install apt-show-versions
```

Once installed, run the following command to find the version of a package,for example Vim:
安装后,运行以下命令查找软件包的版本,例如 Vim:

```
$ apt-show-versions -a vim
@@ -253,15 +247,15 @@ vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
vim:amd64 not installed
```

Here, **-a** switch prints all available versions of the given package.
这里,`-a` 选项打印给定软件包的所有可用版本。

If the given package is already installed, you need not to use **-a** option. In that case, simply run:
如果已经安装了给定的软件包,那么就不需要使用 `-a` 选项。在这种情况下,只需运行:

```
$ apt-show-versions vim
```

And, that’s all. If you know any other methods, please share them in the comment section below. I will check and update this guide.
差不多就是这些了。如果你还了解其他方法,请在下面的评论中分享,我会检查并更新本指南。

--------------------------------------------------------------------------------

@@ -269,8 +263,8 @@ via: https://www.ostechnix.com/how-to-check-linux-package-version-before-install

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -69,17 +69,17 @@
|
||||
|
||||
public class OpensourceSingleton {
|
||||
|
||||
private static OpensourceSingleton uniqueInstance;
|
||||
private static OpensourceSingleton uniqueInstance;
|
||||
|
||||
private OpensourceSingleton() {
|
||||
}
|
||||
private OpensourceSingleton() {
|
||||
}
|
||||
|
||||
public static OpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new OpensourceSingleton();
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
public static OpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new OpensourceSingleton();
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
@@ -102,20 +102,20 @@
|
||||
|
||||
public class ImprovedOpensourceSingleton {
|
||||
|
||||
private volatile static ImprovedOpensourceSingleton uniqueInstance;
|
||||
private volatile static ImprovedOpensourceSingleton uniqueInstance;
|
||||
|
||||
private ImprovedOpensourceSingleton() {}
|
||||
private ImprovedOpensourceSingleton() {}
|
||||
|
||||
public static ImprovedOpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
synchronized (ImprovedOpensourceSingleton.class) {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new ImprovedOpensourceSingleton();
|
||||
}
|
||||
}
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
public static ImprovedOpensourceSingleton getInstance() {
|
||||
if (uniqueInstance == null) {
|
||||
synchronized (ImprovedOpensourceSingleton.class) {
|
||||
if (uniqueInstance == null) {
|
||||
uniqueInstance = new ImprovedOpensourceSingleton();
|
||||
}
|
||||
}
|
||||
}
|
||||
return uniqueInstance;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
@@ -141,20 +141,20 @@
|
||||
|
||||
public class OpensourceFactory {
|
||||
|
||||
public OpensourceJVMServers getServerByVendor([String][18] name) {
|
||||
if(name.equals("Apache")) {
|
||||
return new Tomcat();
|
||||
}
|
||||
else if(name.equals("Eclipse")) {
|
||||
return new Jetty();
|
||||
}
|
||||
else if (name.equals("RedHat")) {
|
||||
return new WildFly();
|
||||
}
|
||||
else {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
public OpensourceJVMServers getServerByVendor(String name) {
|
||||
if(name.equals("Apache")) {
|
||||
return new Tomcat();
|
||||
}
|
||||
else if(name.equals("Eclipse")) {
|
||||
return new Jetty();
|
||||
}
|
||||
else if (name.equals("RedHat")) {
|
||||
return new WildFly();
|
||||
}
|
||||
else {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@@ -164,9 +164,9 @@
|
||||
package org.opensource.demo.factory;
|
||||
|
||||
public interface OpensourceJVMServers {
|
||||
public void startServer();
|
||||
public void stopServer();
|
||||
public [String][18] getName();
|
||||
public void startServer();
|
||||
public void stopServer();
|
||||
public String getName();
|
||||
}
|
||||
```
|
||||
|
||||
@@ -176,17 +176,17 @@
|
||||
package org.opensource.demo.factory;
|
||||
|
||||
public class WildFly implements OpensourceJVMServers {
|
||||
public void startServer() {
|
||||
[System][19].out.println("Starting WildFly Server...");
|
||||
}
|
||||
public void startServer() {
|
||||
System.out.println("Starting WildFly Server...");
|
||||
}
|
||||
|
||||
public void stopServer() {
|
||||
[System][19].out.println("Shutting Down WildFly Server...");
|
||||
}
|
||||
public void stopServer() {
|
||||
System.out.println("Shutting Down WildFly Server...");
|
||||
}
|
||||
|
||||
public [String][18] getName() {
|
||||
return "WildFly";
|
||||
}
|
||||
public String getName() {
|
||||
return "WildFly";
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@@ -209,9 +209,9 @@
|
||||
|
||||
public interface Topic {
|
||||
|
||||
public void addObserver([Observer][22] observer);
|
||||
public void deleteObserver([Observer][22] observer);
|
||||
public void notifyObservers();
|
||||
public void addObserver(Observer observer);
|
||||
public void deleteObserver(Observer observer);
|
||||
public void notifyObservers();
|
||||
}
|
||||
```
|
||||
|
||||
@@ -226,39 +226,39 @@
|
||||
import java.util.ArrayList;
|
||||
|
||||
public class Conference implements Topic {
|
||||
private List<Observer> listObservers;
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private [String][18] nameEvent;
|
||||
private List<Observer> listObservers;
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private String nameEvent;
|
||||
|
||||
public Conference() {
|
||||
listObservers = new ArrayList<Observer>();
|
||||
}
|
||||
public Conference() {
|
||||
listObservers = new ArrayList<Observer>();
|
||||
}
|
||||
|
||||
public void addObserver([Observer][22] observer) {
|
||||
listObservers.add(observer);
|
||||
}
|
||||
public void addObserver(Observer observer) {
|
||||
listObservers.add(observer);
|
||||
}
|
||||
|
||||
public void deleteObserver([Observer][22] observer) {
|
||||
int i = listObservers.indexOf(observer);
|
||||
if (i >= 0) {
|
||||
listObservers.remove(i);
|
||||
}
|
||||
}
|
||||
public void deleteObserver(Observer observer) {
|
||||
int i = listObservers.indexOf(observer);
|
||||
if (i >= 0) {
|
||||
listObservers.remove(i);
|
||||
}
|
||||
}
|
||||
|
||||
public void notifyObservers() {
|
||||
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
|
||||
[Observer][22] observer = listObservers.get(i);
|
||||
observer.update(totalAttendees,totalSpeakers,nameEvent);
|
||||
}
|
||||
}
|
||||
public void notifyObservers() {
|
||||
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
|
||||
Observer observer = listObservers.get(i);
|
||||
observer.update(totalAttendees,totalSpeakers,nameEvent);
|
||||
}
|
||||
}
|
||||
|
||||
public void setConferenceDetails(int totalAttendees, int totalSpeakers, [String][18] nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
notifyObservers();
|
||||
}
|
||||
public void setConferenceDetails(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
notifyObservers();
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@@ -269,8 +269,8 @@
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public interface [Observer][22] {
|
||||
public void update(int totalAttendees, int totalSpeakers, [String][18] nameEvent);
|
||||
public interface Observer {
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent);
|
||||
}
|
||||
```
|
||||
|
||||
@@ -281,27 +281,27 @@
|
||||
```
|
||||
package org.opensource.demo.observer;
|
||||
|
||||
public class MonitorConferenceAttendees implements [Observer][22] {
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private [String][18] nameEvent;
|
||||
private Topic topic;
|
||||
public class MonitorConferenceAttendees implements Observer {
|
||||
private int totalAttendees;
|
||||
private int totalSpeakers;
|
||||
private String nameEvent;
|
||||
private Topic topic;
|
||||
|
||||
public MonitorConferenceAttendees(Topic topic) {
|
||||
this.topic = topic;
|
||||
topic.addObserver(this);
|
||||
}
|
||||
public MonitorConferenceAttendees(Topic topic) {
|
||||
this.topic = topic;
|
||||
topic.addObserver(this);
|
||||
}
|
||||
|
||||
public void update(int totalAttendees, int totalSpeakers, [String][18] nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
printConferenceInfo();
|
||||
}
|
||||
public void update(int totalAttendees, int totalSpeakers, String nameEvent) {
|
||||
this.totalAttendees = totalAttendees;
|
||||
this.totalSpeakers = totalSpeakers;
|
||||
this.nameEvent = nameEvent;
|
||||
printConferenceInfo();
|
||||
}
|
||||
|
||||
public void printConferenceInfo() {
|
||||
[System][19].out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
|
||||
}
|
||||
public void printConferenceInfo() {
|
||||
System.out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
99
published/20190806 Unboxing the Raspberry Pi 4.md
Normal file
@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11248-1.html)
[#]: subject: (Unboxing the Raspberry Pi 4)
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
[#]: author: (Anderson Silva https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall)

树莓派 4 开箱记
======

> 树莓派 4 与其前代产品相比具有令人印象深刻的性能提升,而入门套件使其易于快速启动和运行。

![](https://img.linux.net.cn/data/attachment/album/201908/20/091730rl99q2ahycd4sz9h.jpg)

当树莓派 4 [在 6 月底宣布发布][2]时,我没有迟疑,在发布的同一天就从 [CanaKit][3] 订购了两套树莓派 4 入门套件。1GB RAM 版本有现货,但 4GB 版本要在 7 月 19 日才能发货。由于我两个都想试试,便让它们一起发货。

![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]

下面是我开箱树莓派 4 后所看到的。

### 电源

树莓派 4 使用 USB-C 连接器供电。虽然 USB-C 电缆现在非常普遍,但你的树莓派 4 [可能不喜欢你的 USB-C 线][5](至少对于树莓派 4 的第一版如此)。因此,除非你确切知道自己在做什么,否则我建议你订购含有官方树莓派充电器的入门套件。如果你想尝试手头的充电设备,那么该设备的输入是 100-240V ~ 50/60Hz 0.5A,输出为 5.1V - 3.0A。

![Raspberry Pi USB-C charger][6]

### 键盘和鼠标

官方的键盘和鼠标是和入门套件[分开出售][7]的,总价 25 美元,并不算便宜,因为你的这台树莓派电脑也才只有 35 到 55 美元。但树莓派徽标印在这个键盘上(而不是 Windows 徽标),并且外观相宜。键盘也是 USB 集线器,因此它允许你插入更多设备。我插入了我的 [YubiKey][8] 安全密钥,它运行得非常好。我会把键盘和鼠标归类为“值得拥有”而不是“必须拥有”。你的常规键盘和鼠标应该也可以正常工作。

![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]

![Raspberry Pi logo on the keyboard][10]

### Micro-HDMI 电缆

可能让一些人惊讶的是,与带有 Mini-HDMI 端口的树莓派 Zero 不同,树莓派 4 配备了 Micro-HDMI。它们不是同一个东西!因此,即使你手头有合适的 USB-C 线缆/电源适配器、鼠标和键盘,也很有可能需要使用 Micro-HDMI 转 HDMI 的线缆(或适配器)来将你的新树莓派接到显示器上。

### 外壳

树莓派的外壳已经出现很多年了,这可能是树莓派基金会销售的第一批“官方”外围设备之一。有些人喜欢它们,而有些人不喜欢。我认为将树莓派放在一个外壳里可以更容易携带它,也可以避免静电和针脚弯曲。

另一方面,把你的树莓派装在外壳里会使电路板过热。这款 CanaKit 入门套件还配备了处理器散热器,这可能有所帮助,因为较新的树莓派已经[以运行相当热而闻名][11]了。

![Raspberry Pi 4 case][12]

### Raspbian 和 NOOBS

入门套件附带的另一个东西是 microSD 卡,其中预装了树莓派 4 的 [NOOBS][13] 操作系统的正确版本。(我拿到的是 3.19 版,发布于 2019 年 6 月 24 日)。如果你是第一次使用树莓派并且不确定从哪里开始,这可以为你节省大量时间。入门套件中的 microSD 卡容量为 32GB。

插入 microSD 卡并连接所有电缆后,只需启动树莓派,引导进入 NOOBS,选择 Raspbian 发行版,然后等待安装。

![Raspberry Pi 4 with 4GB of RAM][14]

我注意到在安装最新的 Raspbian 时有一些改进。(如果它们已经出现了一段时间,请原谅我 —— 自从树莓派 3 出现以来我没有对树莓派进行过全新安装。)其中一个是 Raspbian 会要求你在安装后的首次启动为你的帐户设置一个密码,另一个是它将运行软件更新(假设你有网络连接)。这些都是很大的改进,有助于保持你的树莓派更安全。我很希望能有一天在安装时看到加密 microSD 卡的选项。

![Running Raspbian updates at first boot][15]

![Raspberry Pi 4 setup][16]

运行非常顺畅!

### 结语

虽然 CanaKit 不是美国唯一授权的树莓派零售商,但我发现它的入门套件的价格物超所值。

到目前为止,我对树莓派 4 的性能提升印象深刻。我打算尝试用一整个工作日将它作为我唯一的计算机,我很快就会写一篇关于我探索了多远的后续文章。敬请关注!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4

作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)
@ -1,28 +1,28 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11238-1.html)
|
||||
[#]: subject: (Find Out How Long Does it Take To Boot Your Linux System)
|
||||
[#]: via: (https://itsfoss.com/check-boot-time-linux/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
了解你的 Linux 系统的启动时间
|
||||
你的 Linux 系统开机时间已经击败了 99% 的电脑
|
||||
======
|
||||
|
||||
当你打开系统电源时,你会等待制造商的 logo 出现,屏幕上可能会显示一些消息(以非安全模式启动),[Grub][1] 页面,操作系统加载页面以及最后的登录页面。
|
||||
当你打开系统电源时,你会等待制造商的徽标出现,屏幕上可能会显示一些消息(以非安全模式启动),然后是 [Grub][1] 屏幕、操作系统加载屏幕以及最后的登录屏。
|
||||
|
||||
你检查过这花费了多长时间么?也许没有。除非你真的需要知道,否则你不会在意开机时间。
|
||||
|
||||
但是如果你很想知道你的 Linux 系统需要很长时间才能启动呢?使用秒表是一种方法,但在 Linux 中,你有一种更好、更轻松地了解系统启动时间的方法。
|
||||
但是如果你很想知道你的 Linux 系统需要多长时间才能启动完成呢?使用秒表是一种方法,但在 Linux 中,你有一种更好、更轻松的了解系统启动时间的方法。
|
||||
|
||||
### 在 Linux 中使用 systemd-analyze 检查启动时间
|
||||
|
||||
![][2]
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/17/104358s1ho8ug868hso1y8.jpg)
|
||||
|
||||
无论你是否喜欢,[systemd][3] 运行在大多数流行的 Linux 发行版中。systemd 有许多管理 Linux 系统的工具。其中一个就是 systemd-analyze。
|
||||
无论你是否喜欢,[systemd][3] 运行在大多数流行的 Linux 发行版中。systemd 有许多管理 Linux 系统的工具。其中一个就是 `systemd-analyze`。
|
||||
|
||||
systemd-analyze 命令为你提供上次启动时运行的服务数量以及消耗时间的详细信息。
|
||||
`systemd-analyze` 命令为你提供最近一次启动时运行的服务数量以及消耗时间的详细信息。
|
||||
|
||||
如果在终端中运行以下命令:
|
||||
|
||||
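下面是一个假设性的示意,展示 `systemd-analyze` 的两个常用子命令:`time` 给出总启动耗时,`blame` 按耗时排序列出各个服务(在没有运行 systemd 的环境中会优雅退回):

```shell
# 假设性示意:查看启动耗时(容器等环境中 systemd 可能未运行,因此做了退回处理)
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze time  || true   # 总启动耗时:内核 + 用户空间
    systemd-analyze blame || true   # 按耗时排序列出各个服务
else
    echo "systemd-analyze not available"
fi
```

具体数值因系统而异;`systemd-analyze blame` 的输出常用于找出拖慢启动的服务。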
@ -90,9 +90,9 @@ sudo systemctl disable NetworkManager-wait-online.service
|
||||
sudo systemctl enable NetworkManager-wait-online.service
|
||||
```
|
||||
|
||||
现在,请不要在不知道用途的情况下自行禁用各种服务。这可能会产生危险的后果。
|
||||
请不要在不知道用途的情况下自行禁用各种服务。这可能会产生危险的后果。
|
||||
|
||||
_ **现在你知道了如何检查 Linux 系统的启动时间,为什么不在评论栏分享你的系统的启动时间?** _
|
||||
现在你知道了如何检查 Linux 系统的启动时间,为什么不在评论栏分享你的系统的启动时间?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -101,7 +101,7 @@ via: https://itsfoss.com/check-boot-time-linux/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
210
published/20190809 Copying files in Linux.md
Normal file
210
published/20190809 Copying files in Linux.md
Normal file
@ -0,0 +1,210 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11259-1.html)
|
||||
[#]: subject: (Copying files in Linux)
|
||||
[#]: via: (https://opensource.com/article/19/8/copying-files-linux)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/greg-p)
|
||||
|
||||
在 Linux 中复制文档
|
||||
======
|
||||
|
||||
> 了解在 Linux 中多种复制文档的方式以及各自的优点。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/23/053859f1stcjezllmj28e8.jpg)
|
||||
|
||||
在办公室里复印文档过去需要专门的员工与机器。如今,复制是电脑用户无需多加思考的任务。在电脑里复制数据是如此微不足道的事,以致于你还没有意识到复制就发生了,例如当拖动文档到外部硬盘的时候。
|
||||
|
||||
数字实体复制起来十分简单已是一个不争的事实,以致于大部分现代电脑用户从未考虑过其它复制工作成果的方式。尽管如此,在 Linux 中复制文档仍有几种不同的方式,每种方式因目的不同而各有独到之处。
|
||||
|
||||
以下是一系列在 Linux、BSD 及 Mac 上复制文件的方式。
|
||||
|
||||
### 在 GUI 中复制
|
||||
|
||||
如大多数操作系统一样,如果你想的话,你可以完全用 GUI 来管理文件。
|
||||
|
||||
#### 拖拽放下
|
||||
|
||||
最浅显的复制文件的方式可能就是你以前在电脑中复制文件的方式:拖拽并放下。在大多数 Linux 桌面上,从一个本地文件夹拖拽放下到另一个本地文件夹是*移动*文件的默认方式,你可以通过在拖拽文件开始后按住 `Ctrl` 来改变这个行为。
|
||||
|
||||
你的鼠标指针可能会有一个指示,例如一个加号以显示你在复制模式。
|
||||
|
||||
![复制一个文件][2]
|
||||
|
||||
注意如果文件是放在远程系统上的,不管它是一个 Web 服务器还是在你自己网络里用文件共享协议访问的另一台电脑,默认动作经常是复制而不是移动文件。
|
||||
|
||||
#### 右击
|
||||
|
||||
如果你觉得在你的桌面拖拽文档不够精准或者有点笨拙,或者这么做会让你的手离开键盘太久,你可以经常使用右键菜单来复制文件。这取决于你所用的文件管理器,但通常来说,右键弹出的关联菜单会包括常见的操作。
|
||||
|
||||
关联菜单的“复制”动作将你的[文件路径][3](即文件在系统的位置)保存在你的剪切板中,这样你可以将你的文件*粘贴*到别处:(LCTT 译注:此处及下面的描述不确切,这里并非复制的文件路径的“字符串”,而是复制了代表文件实体的对象/指针)
|
||||
|
||||
![从右键菜单复制文件][4]
|
||||
|
||||
在这种情况下,你并没有将文件的内容复制到你的剪切版上。取而代之的是你复制了[文件路径][3]。当你粘贴时,你的文件管理器会查看剪贴板上的路径并执行复制命令,将相应路径上的文件粘贴到你准备复制到的路径。
|
||||
|
||||
### 用命令行复制
|
||||
|
||||
虽然 GUI 通常是相对熟悉的复制文件方式,用终端复制却更有效率。
|
||||
|
||||
#### cp
|
||||
|
||||
在终端上等同于在桌面上复制和粘贴文件的最显而易见的方式就是 `cp` 命令。这个命令可以复制文件和目录,也相对直接。它使用熟悉的*来源*和*目的*(必须以这样的顺序)句法,因此复制一个名为 `example.txt` 的文件到你的 `Documents` 目录就像这样:
|
||||
|
||||
```
|
||||
$ cp example.txt ~/Documents
|
||||
```
|
||||
|
||||
就像当你拖拽文件放在文件夹里一样,这个动作并不会将 `Documents` 替换为 `example.txt`。取而代之的是,`cp` 察觉到 `Documents` 是一个文件夹,就将 `example.txt` 的副本放进去。
|
||||
|
||||
你同样可以便捷有效地重命名你复制的文档:
|
||||
|
||||
```
|
||||
$ cp example.txt ~/Documents/example_copy.txt
|
||||
```
|
||||
|
||||
重要的是,它使得你可以在与原文件相同的目录中生成一个副本:
|
||||
|
||||
```
|
||||
$ cp example.txt example.txt
|
||||
cp: 'example.txt' and 'example.txt' are the same file.
|
||||
$ cp example.txt example_copy.txt
|
||||
```
|
||||
|
||||
要复制一个目录,你必须使用 `-r` 选项(代表 `--recursive`,递归)。以这个选项对目录 `nodes` 运行 `cp` 命令,然后会作用到该目录下的所有文件。没有 `-r` 选项,`cp` 不会将目录当成一个可复制的对象:
|
||||
|
||||
```
|
||||
$ cp notes/ notes-backup
|
||||
cp: -r not specified; omitting directory 'notes/'
|
||||
$ cp -r notes/ notes-backup
|
||||
```
|
||||
|
||||
#### cat
|
||||
|
||||
`cat` 命令是最易被误解的命令,但这只是因为它表现了 [POSIX][5] 系统的极致灵活性。在 `cat` 可以做到的所有事情中(包括其原意的连接文件的用途),它也能复制。例如说使用 `cat` 你可以仅用一个命令就[从一个文件创建两个副本][6]。你用 `cp` 无法做到这一点。
|
||||
|
||||
使用 `cat` 复制文档要注意的是系统解释该行为的方式。当你使用 `cp` 复制文件时,该文件的属性跟着文件一起被复制,这意味着副本的权限和原件一样。
|
||||
|
||||
```
|
||||
$ ls -l -G -g
|
||||
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
|
||||
$ cp foo.jpg bar.jpg
|
||||
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
|
||||
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
|
||||
```
|
||||
|
||||
然而,用 `cat` 将一个文件的内容读取至另一个文件,是让系统创建了一个新文件,这些新文件的权限取决于你的默认 umask 设置。要了解 umask 的更多知识,请阅读 Alex Juarez 讲述 [umask][7] 以及权限概览的文章。
|
||||
|
||||
运行 `umask` 获取当前设置:
|
||||
|
||||
```
|
||||
$ umask
|
||||
0002
|
||||
```
|
||||
|
||||
这个设置代表在此处新创建的文件被赋予 `664`(`rw-rw-r--`)权限:该 umask 值的前几位没有屏蔽任何权限(而且执行位本来就不是新建文件的默认位),而其他人的写入权限被最后一位屏蔽了。
|
||||
|
||||
当你使用 `cat` 复制时,实际上你并没有真正复制文件。你使用 `cat` 读取文件内容并将输出重定向到了一个新文件:
|
||||
|
||||
```
|
||||
$ cat foo.jpg > baz.jpg
|
||||
$ ls -l -G -g
|
||||
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
|
||||
-rw-rw-r--. 1 57368 Jul 29 13:42 baz.jpg
|
||||
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
|
||||
```
|
||||
|
||||
如你所见,`cat` 应用系统默认的 umask 设置创建了一个全新的文件。
|
||||
|
||||
最后,当你只是想复制一个文件时,这些差别无关紧要。但如果你想复制文件并让副本使用默认权限,那么 `cat` 用一条命令就能完成。
|
||||
|
||||
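上面关于 `cp` 沿用原件权限、`cat` 重定向按 umask 新建文件的说法,可以用下面这个假设性的小实验在临时目录中验证(其中的文件名均为示意):

```shell
# 假设性演示:对比 cp 与 cat 重定向产生的文件权限
tmp=$(mktemp -d) && cd "$tmp"
umask 0002
touch foo.txt && chmod 600 foo.txt   # 原件权限 rw-------
cp foo.txt bar.txt                   # cp:副本权限仍为 600
cat foo.txt > baz.txt                # cat:新文件按 umask 得到 664
stat -c '%a %n' foo.txt bar.txt baz.txt
cd / && rm -rf "$tmp"
```

在 umask 为 `0002` 的情况下,这个例子中 `bar.txt` 沿用了原件的 `600`,而 `baz.txt` 作为全新文件得到 `664`。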
#### rsync
|
||||
|
||||
有着著名的同步源和目的文件的能力,`rsync` 命令是一个复制文件的多才多艺的工具。最为简单的,`rsync` 可以类似于 `cp` 命令一样使用。
|
||||
|
||||
```
|
||||
$ rsync example.txt example_copy.txt
|
||||
$ ls
|
||||
example.txt example_copy.txt
|
||||
```
|
||||
|
||||
这个命令真正的威力藏在其能够*不做*不必要的复制的能力里。如果你使用 `rsync` 来将文件复制进目录里,且该文件已经存在于该目录里,那么 `rsync` 不会做复制操作。在本地这个差别不是很大,但如果你要将海量数据复制到远程服务器,这个特性的意义就完全不一样了。
|
||||
|
||||
即使在本地,真正的差别也在于它可以分辨名字相同但数据不同的文件。如果你曾面对过同一个目录的两个近似副本,`rsync` 可以将它们同步成一个包含各自最新修改的目录。这种配置在尚未领会版本控制威力的行业里十分常见,也常用作需要从单一可信来源复制的备份方案。
|
||||
|
||||
你可以通过创建两个文件夹有意识地模拟这种情况,一个叫做 `example` 另一个叫做 `example_dupe`:
|
||||
|
||||
```
|
||||
$ mkdir example example_dupe
|
||||
```
|
||||
|
||||
在第一个文件夹里创建文件:
|
||||
|
||||
```
|
||||
$ echo "one" > example/foo.txt
|
||||
```
|
||||
|
||||
用 `rsync` 同步两个目录。这种做法最常见的选项是 `-a`(代表 “archive”,可以保证符号链接和其它特殊文件保留下来)和 `-v`(代表 “verbose”,向你提供当前命令的进度反馈):
|
||||
|
||||
```
|
||||
$ rsync -av example/ example_dupe/
|
||||
```
|
||||
|
||||
两个目录现在包含同样的信息:
|
||||
|
||||
```
|
||||
$ cat example/foo.txt
|
||||
one
|
||||
$ cat example_dupe/foo.txt
|
||||
one
|
||||
```
|
||||
|
||||
如果作为源的文件发生改变,目的文件也会随之更新:
|
||||
|
||||
```
|
||||
$ echo "two" >> example/foo.txt
|
||||
$ rsync -av example/ example_dupe/
|
||||
$ cat example_dupe/foo.txt
|
||||
one
|
||||
two
|
||||
```
|
||||
|
||||
注意 `rsync` 命令是用来复制数据的,而不是充当版本管理系统。例如,假设某个目的文件比源文件新,该文件仍将被覆盖,因为 `rsync` 在比较文件差异时假设目的端始终应该是源端的镜像:
|
||||
|
||||
```
|
||||
$ echo "You will never see this note again" > example_dupe/foo.txt
|
||||
$ rsync -av example/ example_dupe/
|
||||
$ cat example_dupe/foo.txt
|
||||
one
|
||||
two
|
||||
```
|
||||
|
||||
如果没有改变,那么就不会有复制动作发生。
|
||||
|
||||
`rsync` 命令有许多 `cp` 没有的选项,例如设置目标权限、排除文件、删除没有在两个目录中出现的过时文件,以及更多。可以使用 `rsync` 作为 `cp` 的强力替代或者有效补充。
|
||||
|
||||
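作为上面这段的一个假设性示意,下面演示其中两个 `cp` 所没有的选项:`--dry-run`(演练)和 `--delete`(删除目的端多余文件),目录名均为示意:

```shell
# 假设性示例:rsync 相对 cp 的几个常用选项
mkdir -p demo demo_backup
echo "one" > demo/foo.txt
touch demo_backup/stale.txt                  # 目的端的过时文件
rsync -av --dry-run demo/ demo_backup/       # 演练:只显示将会执行的操作
rsync -av --delete demo/ demo_backup/        # 同步,并删除目的端多余的文件
ls demo_backup                               # stale.txt 已被删除
rm -rf demo demo_backup
```

在对重要数据使用 `--delete` 之前,先用 `--dry-run` 演练一遍是个稳妥的习惯。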
### 许多复制的方式
|
||||
|
||||
在 POSIX 系统中有许多能够达成同样目的的方式,因此开源的灵活性名副其实。我忘了哪个复制数据的有效方式吗?在评论区分享你的复制神技。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/copying-files-linux
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/sethhttps://opensource.com/users/scottnesbitthttps://opensource.com/users/greg-p
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/copy-nautilus.jpg (Copying a file.)
|
||||
[3]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
|
||||
[4]: https://opensource.com/sites/default/files/uploads/copy-files-menu.jpg (Copying a file from the context menu.)
|
||||
[5]: https://linux.cn/article-11222-1.html
|
||||
[6]: https://opensource.com/article/19/2/getting-started-cat-command
|
||||
[7]: https://opensource.com/article/19/7/linux-permissions-101
|
@ -0,0 +1,164 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11244-1.html)
|
||||
[#]: subject: (How to measure the health of an open source community)
|
||||
[#]: via: (https://opensource.com/article/19/8/measure-project)
|
||||
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)
|
||||
|
||||
如何衡量一个开源社区的健康度
|
||||
======
|
||||
|
||||
> 这比较复杂。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/19/184719nz3xuazppzu3vwcg.jpg)
|
||||
|
||||
作为一个经常管理软件开发团队的人,多年来我一直关注度量指标。一次次,我发现自己领导团队使用一个又一个的项目平台(例如 Jira、GitLab 和 Rally)生成了大量可测量的数据。从那时起,我已经及时投入了大量时间从记录平台中提取了有用的指标,并采用了一种我们可以理解的格式,然后使用这些指标对开发的许多方面做出更好的选择。
|
||||
|
||||
今年早些时候,我有幸在 [Linux 基金会][2]遇到了一个名为<ruby>[开源软件社区健康分析][3]<rt>Community Health Analytics for Open Source Software</rt></ruby>(CHAOSS)的项目。该项目侧重于从各种来源收集和丰富指标,以便开源社区的利益相关者可以衡量他们项目的健康状况。
|
||||
|
||||
### CHAOSS 介绍
|
||||
|
||||
随着我对该项目的基本指标和目标越来越熟悉,一个问题在我的脑海中不断翻滚。什么是“健康”的开源项目,由谁来定义?
|
||||
|
||||
某个角色认为健康的东西,另一个角色未必这样认为。似乎可以用 CHAOSS 收集的细粒度数据进行市场细分实验,重点关注对特定角色可能最有意义的背景问题,以及 CHAOSS 收集的哪些指标可能有助于回答这些问题。
|
||||
|
||||
CHAOSS 项目创建并维护了一套开源应用程序和度量标准定义,使得这个实验具有可能性,这包括:
|
||||
|
||||
* 许多基于服务器的应用程序,用于收集、聚合和丰富度量标准(例如 Augur 和 GrimoireLab)。
|
||||
* ElasticSearch、Kibana 和 Logstash(ELK)的开源版本。
|
||||
* 身份服务、数据分析服务和各种集成库。
|
||||
|
||||
在我过去负责的一个项目群中,有六个团队从事不同复杂程度的项目。我们找到了一个简洁的工具,它允许我们用简单(或复杂)的 JQL 语句创建任何想要的指标,再针对这些指标开发计算。不知不觉间,我们仅从 Jira 中就提取了 400 多个指标,还有更多指标来自人工收集的来源。
|
||||
|
||||
在项目结束时,我们认定这 400 个指标中,大多数指标在*以我们的角色*做出决策时并不重要。最终,只有三个对我们非常重要:“缺陷去除效率”、“已完成的条目与承诺的条目”,以及“每个开发人员的工作进度”。这三个指标最重要,因为它们是我们对自己、客户和团队成员所做出的承诺,因此是最有意义的。
|
||||
|
||||
带着这些通过经验得到的教训和对什么是健康的开源项目的问题,我跳进了 CHAOSS 社区,开始建立一套角色,以提供一种建设性的方法,从基于角色的角度回答这个问题。
|
||||
|
||||
CHAOSS 是一个开源项目,我们尝试使用民主共识来运作。因此,我决定使用<ruby>组成分子<rt>constituent</rt></ruby>这个词而不是利益相关者,因为它更符合我们作为开源贡献者的责任,以创建更具共生性的价值链。
|
||||
|
||||
虽然创建此组成模型的过程采用了特定的“目标-问题-度量”方法,但有许多方法可以进行细分。CHAOSS 贡献者已经开发了很好的模型,可以按照矢量进行细分,例如项目属性(例如,个人、公司或联盟)和“失败容忍度”。在为 CHAOSS 开发度量定义时,每个模型都会提供建设性的影响。
|
||||
|
||||
基于这一切,我开始构建一个谁可能关心 CHAOSS 指标的模型,以及每个组成分子在 CHAOSS 的四个重点领域中最关心的问题:
|
||||
|
||||
* [多样性和包容性][4]
|
||||
* [演化][5]
|
||||
* [风险][6]
|
||||
* [价值][7]
|
||||
|
||||
在我们深入研究之前,重要的是要注意 CHAOSS 项目明确地将背景判断留给了实施指标的团队。什么是“有意义的”和“什么是健康的?”的答案预计会因团队和项目而异。CHAOSS 软件的现成仪表板尽可能地关注客观指标。在本文中,我们关注项目创始人、项目维护者和贡献者。
|
||||
|
||||
### 项目组成分子
|
||||
|
||||
虽然这绝不是这些组成分子可能认为重要的问题的详尽清单,但这些选择感觉是一个好的起点。以下每个“目标-问题-度量”标准部分与 CHAOSS 项目正在收集和汇总的指标直接相关。
|
||||
|
||||
现在,进入分析的第 1 部分!
|
||||
|
||||
#### 项目创始人
|
||||
|
||||
作为**项目创始人**,我**最**关心:
|
||||
|
||||
* 我的项目**对其他人有用吗?**通过以下测量:
|
||||
* 随着时间推移有多少复刻?
|
||||
* **指标:**存储库复刻数。
|
||||
* 随着时间的推移有多少贡献者?
|
||||
* **指标:**贡献者数量。
|
||||
* 贡献净质量。
|
||||
* **指标:**随着时间的推移提交的错误。
|
||||
* **指标:**随着时间推移的回归数量。
|
||||
* 项目的财务状况。
|
||||
* **指标:**随着时间的推移的捐赠/收入。
|
||||
* **指标:**随着时间的推移的费用。
|
||||
* 我的项目对其它人的**可见**程度?
|
||||
* 有谁知道我的项目?别人认为它很整洁吗?
|
||||
* **指标:**社交媒体上的提及、分享、喜欢和订阅的数量。
|
||||
* 有影响力的人是否了解我的项目?
|
||||
* **指标:**贡献者的社会影响力。
|
||||
* 人们在公共场所对项目有何评价?是正面还是负面?
|
||||
* **指标:**跨社交媒体渠道的情感(关键字或 NLP)分析。
|
||||
* 我的项目**可行性**程度?
|
||||
* 我们有足够的维护者吗?该数字是随着时间的推移而上升还是下降?
|
||||
* **指标:**维护者数量。
|
||||
* 改变速度如何随时间变化?
|
||||
* **指标:**代码随时间的变化百分比。
|
||||
* **指标:**拉取请求、代码审查和合并之间的时间。
|
||||
* 我的项目的[多样化 & 包容性][4]如何?
|
||||
* 我们是否拥有有效的公开行为准则(CoC)?
|
||||
* **度量标准:** 检查存储库中的 CoC 文件。
|
||||
* 与我的项目相关的活动是否积极包容?
|
||||
* **指标:**关于活动的票务政策和活动包容性行为的手动报告。
|
||||
* 我们的项目在可访问性上做的好不好?
|
||||
* **指标:**验证发布的文字会议纪要。
|
||||
* **指标:**验证会议期间使用的隐藏式字幕。
|
||||
* **指标:**验证在演示文稿和项目前端设计中色盲可访问的素材。
|
||||
* 我的项目代表了多少[价值][7]?
|
||||
* 我如何帮助组织了解使用我们的项目将节省多少时间和金钱(劳动力投资)
|
||||
* **指标:**仓库的议题、提交、拉取请求的数量和估计人工费率。
|
||||
* 我如何理解项目创建的下游价值的数量,以及维护我的项目对更广泛的社区的重要性(或不重要)?
|
||||
* **指标:**依赖我的项目的其他项目数。
|
||||
* 为我的项目做出贡献的人有多少机会使用他们学到的东西来找到合适的工作岗位,以及在哪些组织(即生活工资)?
|
||||
* **指标:**使用或贡献此库的组织数量。
|
||||
* **指标:**使用此类项目的开发人员的平均工资。
|
||||
* **指标:**与该项目匹配的关键字的职位发布计数。
|
||||
|
||||
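上面列出的许多仓库类指标(贡献者数量、提交数等)可以直接从 git 历史中粗略取得。下面是一个假设性的小示例,需在某个 git 仓库目录中运行:

```shell
# 假设性示例:从本地 git 仓库历史统计两个简单的仓库指标
git shortlog -sn --all | wc -l    # 按作者去重的贡献者数量
git rev-list --count HEAD         # 当前分支上的提交总数
```

CHAOSS 的 Augur、GrimoireLab 等工具做的正是这类数据采集,只是规模更大、来源更丰富。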
### 项目维护者
|
||||
|
||||
作为**项目维护者**,我**最**关心:
|
||||
|
||||
* 我是**高效的**维护者吗?
|
||||
* **指标:**拉取请求在代码审查之前等待的时间。
|
||||
* **指标:**代码审查和后续拉取请求之间的时间。
|
||||
* **指标:**我的代码审核中有多少被批准?
|
||||
* **指标:**我的代码评论中有多少被拒绝或返工?
|
||||
* **指标:**代码审查的评论的情感分析。
|
||||
* 我如何让**更多人**帮助我维护这件事?
|
||||
* **指标:**项目贡献者的社交覆盖面数量。
|
||||
* 我们的**代码质量**随着时间的推移变得越来越好吗?
|
||||
* **指标:**计算随着时间的推移引入的回归数量。
|
||||
* **指标:**计算随着时间推移引入的错误数量。
|
||||
* **指标:**错误归档、拉取请求、代码审查、合并和发布之间的时间。
|
||||
|
||||
### 项目开发者和贡献者
|
||||
|
||||
作为**项目开发者或贡献者**,我**最**关心:
|
||||
|
||||
* 我可以从为这个项目做出贡献中获得哪些有价值的东西,以及实现这个价值需要多长时间?
|
||||
* **指标:**下游价值。
|
||||
* **指标:**提交、代码审查和合并之间的时间。
|
||||
* 通过使用我在贡献中学到的东西来增加工作机会,是否有良好的前景?
|
||||
* **指标:**生活工资。
|
||||
* 这个项目有多受欢迎?
|
||||
* **指标:**社交媒体帖子、分享和收藏的数量。
|
||||
* 社区有影响力的人知道我的项目吗?
|
||||
* **指标:**创始人、维护者和贡献者的社交范围。
|
||||
|
||||
通过创建这个列表,我们开始让 CHAOSS 更加丰满了,并且在今年夏天项目中首次发布该指标时,我迫不及待地想看看广泛的开源社区可能有什么其他伟大的想法,以及我们还可以从这些贡献中学到什么(并衡量!)。
|
||||
|
||||
### 其它角色
|
||||
|
||||
接下来,你需要了解有关其他角色(例如基金会、企业开源计划办公室、业务风险和法律团队、人力资源等)以及最终用户的目标问题度量集的更多信息。他们关心开源的不同事物。
|
||||
|
||||
如果你是开源贡献者或组成分子,我们邀请你[来看看这个项目][8]并参与社区活动!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/measure-project
|
||||
|
||||
作者:[Jon Lawrence][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/the3rdlaw
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
|
||||
[2]: https://www.linuxfoundation.org/
|
||||
[3]: https://chaoss.community/
|
||||
[4]: https://github.com/chaoss/wg-diversity-inclusion
|
||||
[5]: https://github.com/chaoss/wg-evolution
|
||||
[6]: https://github.com/chaoss/wg-risk
|
||||
[7]: https://github.com/chaoss/wg-value
|
||||
[8]: https://github.com/chaoss/
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11236-1.html)
|
||||
[#]: subject: (How to Get Linux Kernel 5.0 in Ubuntu 18.04 LTS)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-hwe-kernel/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
@ -10,31 +10,31 @@
|
||||
如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核
|
||||
======
|
||||
|
||||
_ **最近发布的 Ubuntu 18.04.3 包括 Linux 5.0 内核中的几个新功能和改进,但默认情况下没有安装。本教程演示了如何在 Ubuntu 18.04 LTS 中获取 Linux 5 内核。** _
|
||||
> 最近发布的 Ubuntu 18.04.3 包括 Linux 5.0 内核中的几个新功能和改进,但默认情况下没有安装。本教程演示了如何在 Ubuntu 18.04 LTS 中获取 Linux 5 内核。
|
||||
|
||||
[Subscribe to It’s FOSS YouTube Channel for More Videos][1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/17/101052xday1jyrszbddsfc.jpg)
|
||||
|
||||
[Ubuntu 18.04 的第三个“点发布版”在这里][2],它带来了新的稳定版本的 GNOME 组件、livepatch 桌面集成和内核 5.0。
|
||||
[Ubuntu 18.04 的第三个“点发布版”已经发布][2],它带来了新的稳定版本的 GNOME 组件、livepatch 桌面集成和内核 5.0。
|
||||
|
||||
可是等等!什么是“点发布版”(point release)?让我先解释一下。
|
||||
可是等等!什么是“<ruby>小数点版本<rt>point release</rt></ruby>”?让我先解释一下。
|
||||
|
||||
### Ubuntu LTS 点发布版
|
||||
### Ubuntu LTS 小数点版本
|
||||
|
||||
Ubuntu 18.04 于 2018 年 4 月发布,由于它是一个长期支持 (LTS) 版本,它将一直支持到 2023 年。从那时起,已经有许多 bug 修复,安全更新和软件升级。如果你今天下载 Ubuntu 18.04,你需要在[在安装 Ubuntu 后首先安装这些更新][3]。
|
||||
Ubuntu 18.04 于 2018 年 4 月发布,由于它是一个长期支持 (LTS) 版本,它将一直支持到 2023 年。从那时起,已经有许多 bug 修复、安全更新和软件升级。如果你今天下载 Ubuntu 18.04,你需要在[在安装 Ubuntu 后首先安装这些更新][3]。
|
||||
|
||||
当然,这不是一种理想情况。这就是 Ubuntu 提供这些“点发布版”的原因。点发布版包含所有功能和安全更新以及自 LTS 版本首次发布以来添加的 bug 修复。如果你今天下载 Ubuntu,你会得到 Ubuntu 18.04.3 而不是 Ubuntu 18.04。这节省了在新安装的 Ubuntu 系统上下载和安装数百个更新的麻烦。
|
||||
当然,这不是一种理想情况。这就是 Ubuntu 提供这些“小数点版本”的原因。点发布版包含所有功能和安全更新以及自 LTS 版本首次发布以来添加的 bug 修复。如果你今天下载 Ubuntu,你会得到 Ubuntu 18.04.3 而不是 Ubuntu 18.04。这节省了在新安装的 Ubuntu 系统上下载和安装数百个更新的麻烦。
|
||||
|
||||
好了!现在你知道“点发布版”的概念了。你如何升级到这些点发布版?答案很简单。只需要像平时一样[更新你的 Ubuntu 系统][4],这样你将在最新的点发布版上了。
|
||||
好了!现在你知道“小数点版本”的概念了。你如何升级到这些小数点版本?答案很简单。只需要像平时一样[更新你的 Ubuntu 系统][4],这样你将在最新的小数点版本上了。
|
||||
|
||||
你可以[查看 Ubuntu 版本][5]来了解正在使用的版本。我检查了一下,因为我用的是 Ubuntu 18.04.3,我以为我的内核会是 5。当我[查看 Linux 内核版本][6]时,它仍然是基本内核 4.15。
|
||||
|
||||
![Ubuntu Version And Linux Kernel Version Check][7]
|
||||
|
||||
这是为什么?如果 Ubuntu 18.04.3 有 Linux 5.0 内核,为什么它仍然使用 Linux 4.15 内核?这是因为你必须通过选择 LTS 支持栈(通常称为 HWE)手动请求在 Ubuntu LTS 中安装新内核。
|
||||
这是为什么?如果 Ubuntu 18.04.3 有 Linux 5.0 内核,为什么它仍然使用 Linux 4.15 内核?这是因为你必须通过选择 LTS <ruby>支持栈<rt>Enablement Stack</rt></ruby>(通常称为 HWE)手动请求在 Ubuntu LTS 中安装新内核。
|
||||
|
||||
### 使用 HWE 在Ubuntu 18.04 中获取 Linux 5.0 内核
|
||||
|
||||
默认情况下,Ubuntu LTS 将保持在最初发布的 Linux 内核上。 [硬件支持栈][9](HWE)为现有的 Ubuntu LTS 版本提供了更新的内核和 xorg 支持。
|
||||
默认情况下,Ubuntu LTS 将保持在最初发布的 Linux 内核上。<ruby>[硬件支持栈][9]<rt>hardware enablement stack</rt></ruby>(HWE)为现有的 Ubuntu LTS 版本提供了更新的内核和 xorg 支持。
|
||||
|
||||
最近发生了一些变化。如果你下载了 Ubuntu 18.04.2 或更新的桌面版本,那么就会为你启用 HWE,默认情况下你将获得新内核以及常规更新。
|
||||
|
||||
@ -54,7 +54,7 @@ sudo apt-get install --install-recommends linux-generic-hwe-18.04
|
||||
|
||||
完成 HWE 内核的安装后,重启系统。现在你应该拥有更新的 Linux 内核了。
|
||||
|
||||
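重启后,可以用下面这个假设性的小检查确认内核是否已经更新:

```shell
# 假设性示例:重启后验证正在运行的内核与发行版版本
uname -r                                            # 例如应显示 5.x 系列的 HWE 内核
grep -E '^(PRETTY_NAME|VERSION)=' /etc/os-release   # 确认发行版版本,例如 18.04.3
```

如果 `uname -r` 仍显示 4.15 系列,说明系统还在使用旧内核启动。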
**你在 Ubuntu 18.04 中获取 5.0 内核了么?**
|
||||
### 你在 Ubuntu 18.04 中获取 5.0 内核了么?
|
||||
|
||||
请注意,下载并安装了 Ubuntu 18.04.2 的用户已经启用了 HWE。所以这些用户将能轻松获取 5.0 内核。
|
||||
|
||||
@ -69,7 +69,7 @@ via: https://itsfoss.com/ubuntu-hwe-kernel/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11254-1.html)
|
||||
[#]: subject: (Fix ‘E: The package cache file is corrupted, it has the wrong hash’ Error In Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
修复 Ubuntu 中 “E: The package cache file is corrupted, it has the wrong hash” 错误
|
||||
======
|
||||
|
||||
今天,我尝试更新我的 Ubuntu 18.04 LTS 的仓库列表,但收到了一条错误消息:“**E: The package cache file is corrupted, it has the wrong hash**”。这是我在终端运行的命令以及输出:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
Hit:1 http://it-mirrors.evowise.com/ubuntu bionic InRelease
|
||||
Hit:2 http://it-mirrors.evowise.com/ubuntu bionic-updates InRelease
|
||||
Hit:3 http://it-mirrors.evowise.com/ubuntu bionic-backports InRelease
|
||||
Hit:4 http://it-mirrors.evowise.com/ubuntu bionic-security InRelease
|
||||
Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu bionic InRelease
|
||||
Hit:7 http://ppa.launchpad.net/leaeasy/dde/ubuntu bionic InRelease
|
||||
Hit:8 http://ppa.launchpad.net/rvm/smplayer/ubuntu bionic InRelease
|
||||
Ign:6 https://dl.bintray.com/etcher/debian stable InRelease
|
||||
Get:9 https://dl.bintray.com/etcher/debian stable Release [3,674 B]
|
||||
Fetched 3,674 B in 3s (1,196 B/s)
|
||||
Reading package lists... Done
|
||||
E: The package cache file is corrupted, it has the wrong hash
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
*Ubuntu 中的 “The package cache file is corrupted, it has the wrong hash” 错误*
|
||||
|
||||
经过一番谷歌搜索,我找到了解决此错误的方法。
|
||||
|
||||
如果你遇到过这个错误,不要惊慌。只需运行下面的命令修复。
|
||||
|
||||
在运行命令之前,**请再次确认你在最后加入了 `*`**。在命令最后加上 `*` 很重要。如果你没有添加,它会删除整个 `/var/lib/apt/lists/` 目录,而且无法恢复。我提醒过你了!
|
||||
|
||||
```
|
||||
$ sudo rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
现在我再次使用下面的命令更新系统:
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
现在好了!希望它有帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-package-cache-file-is-corrupted.png
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/apt-update-command-output-in-Ubuntu.png
|
@ -0,0 +1,135 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11258-1.html)
|
||||
[#]: subject: (How To Change Linux Console Font Type And Size)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何更改 Linux 控制台字体类型和大小
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/23/041741x6qiajjijupjyjsp.jpg)
|
||||
|
||||
如果你有图形桌面环境,那么就很容易更改文本的字体以及大小。但你如何在没有图形环境的 Ubuntu 无头服务器中做到?别担心!本指南介绍了如何更改 Linux 控制台的字体和大小。这对于那些不喜欢默认字体类型/大小或者喜欢不同字体的人来说非常有用。
|
||||
|
||||
### 更改 Linux 控制台字体类型和大小
|
||||
|
||||
如果你还不知道,这就是无头 Ubuntu Linux 服务器控制台的样子。
|
||||
|
||||
![][2]
|
||||
|
||||
*Ubuntu Linux 控制台*
|
||||
|
||||
据我所知,我们可以[列出已安装的字体][3],但是没有办法可以像在 Linux 桌面终端仿真器中那样更改 Linux 控制台字体类型或大小。
|
||||
|
||||
但这并不意味着我们无法改变它。我们仍然可以更改控制台字体。
|
||||
|
||||
如果你正在使用 Debian、Ubuntu 或其他基于 DEB 的系统,你可以通过 `console-setup` 配置文件来设置 `setupcon`,后者用于配置控制台的字体和键盘布局。该配置文件位于 `/etc/default/console-setup`。
|
||||
|
||||
现在,运行以下命令来设置 Linux 控制台的字体。
|
||||
|
||||
```
|
||||
$ sudo dpkg-reconfigure console-setup
|
||||
```
|
||||
|
||||
选择要在 Linux 控制台上使用的编码。只需保留默认值,选择 “OK” 并按回车继续。
|
||||
|
||||
![][4]
|
||||
|
||||
*选择要在 Ubuntu 控制台上设置的编码*
|
||||
|
||||
接下来,在列表中选择受支持的字符集。默认情况下,它是最后一个选项,即在我的系统中 **Guess optimal character set**(猜测最佳字符集)。只需保留默认值,然后按回车键。
|
||||
|
||||
![][5]
|
||||
|
||||
*在 Ubuntu 中选择字符集*
|
||||
|
||||
接下来选择控制台的字体,然后按回车键。我这里选择 “TerminusBold”。
|
||||
|
||||
![][6]
|
||||
|
||||
*选择 Linux 控制台的字体*
|
||||
|
||||
这里,我们为 Linux 控制台选择所需的字体大小。
|
||||
|
||||
![][7]
|
||||
|
||||
*选择 Linux 控制台的字体大小*
|
||||
|
||||
几秒钟后,所选的字体及大小将应用于你的 Linux 控制台。
|
||||
|
||||
这是在更改字体类型和大小之前,我的 Ubuntu 18.04 LTS 服务器中控制台字体的样子。
|
||||
|
||||
![][8]
|
||||
|
||||
这是更改之后。
|
||||
|
||||
![][9]
|
||||
|
||||
如你所见,文本更大、更好,字体类型也不同于默认。
|
||||
|
||||
你也可以直接编辑 `/etc/default/console-setup`,并根据需要设置字体类型和大小。根据以下示例,我的 Linux 控制台字体类型为 “Terminus Bold”,字体大小为 32。
|
||||
|
||||
```
|
||||
ACTIVE_CONSOLES="/dev/tty[1-6]"
|
||||
CHARMAP="UTF-8"
|
||||
CODESET="guess"
|
||||
FONTFACE="TerminusBold"
|
||||
FONTSIZE="16x32"
|
||||
```
|
||||
|
||||
### 附录:显示控制台字体
|
||||
|
||||
要显示你的控制台字体,只需输入:
|
||||
|
||||
```
|
||||
$ showconsolefont
|
||||
```
|
||||
|
||||
此命令将显示字体的字形或字母表。
|
||||
|
||||
![][11]
|
||||
|
||||
*显示控制台字体*
|
||||
|
||||
如果你的 Linux 发行版没有 `console-setup`,你可以从[这里][12]获取它。
|
||||
|
||||
在使用 Systemd 的 Linux 发行版上,你可以通过编辑 `/etc/vconsole.conf` 来更改控制台字体。
|
||||
|
||||
以下是德语键盘的示例配置。
|
||||
|
||||
```
|
||||
$ vi /etc/vconsole.conf
|
||||
|
||||
KEYMAP=de-latin1
|
||||
FONT=Lat2-Terminus16
|
||||
```
|
||||
|
||||
希望这篇文章对你有用!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-console.png
|
||||
[3]: https://www.ostechnix.com/find-installed-fonts-commandline-linux/
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-encoding-to-set-on-the-console.png
|
||||
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-character-set-in-Ubuntu.png
|
||||
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-for-Linux-console.png
|
||||
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-size-for-Linux-console.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Linux-console-tty-ubuntu-1.png
|
||||
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-TTY-console.png
|
||||
[10]: https://www.ostechnix.com/how-to-switch-between-ttys-without-using-function-keys-in-linux/
|
||||
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/show-console-fonts.png
|
||||
[12]: https://software.opensuse.org/package/console-setup
|
@ -0,0 +1,156 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11252-1.html)
|
||||
[#]: subject: (How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?)
|
||||
[#]: via: (https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
如何在 Debian/Ubuntu 上设置自动安全更新(无人值守更新)
|
||||
======
|
||||
|
||||
对于 Linux 管理员来说重要的任务之一是让系统保持最新状态,这可以使得你的系统更加稳健并且可以避免不想要的访问与攻击。
|
||||
|
||||
在 Linux 上安装软件包是小菜一碟,用相似的方法我们也可以更新安全补丁。
|
||||
|
||||
这是一个向你展示如何配置系统接收自动安全更新的简单教程。当你运行自动安全包更新而不经审查会给你带来一定风险,但是也有一些好处。
|
||||
|
||||
如果你不想错过安全补丁,且想要与最新的安全补丁保持同步,那你应该借助无人值守更新机制设置自动安全更新。
|
||||
|
||||
如果你不想要自动安全更新的话,你可以[在 Debian/Ubuntu 系统上手动安装安全更新][1]。
|
||||
|
||||
我们有许多可以自动化更新的办法,然而我们将先采用官方的方法之后我们会介绍其它方法。
|
||||
|
||||
### 如何在 Debian/Ubuntu 上安装无人值守更新包
|
||||
|
||||
无人值守更新包默认应该已经装在你的系统上。但万一它没被安装,就用下面的命令来安装。
|
||||
|
||||
使用 [APT-GET 命令][2]和 [APT 命令][3]来安装 `unattended-upgrades` 软件包。
|
||||
|
||||
```
|
||||
$ sudo apt-get install unattended-upgrades
|
||||
```
|
||||
|
||||
下方两个文件可以使你自定义该机制:
|
||||
|
||||
```
|
||||
/etc/apt/apt.conf.d/50unattended-upgrades
|
||||
/etc/apt/apt.conf.d/20auto-upgrades
|
||||
```
|
||||
|
||||
### 在 50unattended-upgrades 文件中做出必要修改
|
||||
|
||||
默认情况下只有安全更新需要的最必要的选项被启用。但并不限于此,你可以配置其中的许多选项以使得这个机制更加有用。
|
||||
|
||||
为方便阐述,我修改了该文件,仅保留已启用的行:
|
||||
|
||||
```
|
||||
# vi /etc/apt/apt.conf.d/50unattended-upgrades
|
||||
|
||||
Unattended-Upgrade::Allowed-Origins {
|
||||
"${distro_id}:${distro_codename}";
|
||||
"${distro_id}:${distro_codename}-security";
|
||||
"${distro_id}ESM:${distro_codename}";
|
||||
};
|
||||
Unattended-Upgrade::DevRelease "false";
|
||||
```
|
||||
|
||||
有三个源被启用,细节如下:
|
||||
|
||||
* `${distro_id}:${distro_codename}`:这是必须的,因为安全更新可能会从非安全来源拉取依赖。
|
||||
* `${distro_id}:${distro_codename}-security`:这用来从安全来源获取安全更新。
|
||||
* `${distro_id}ESM:${distro_codename}`:这是用来从 ESM(扩展安全维护)获得安全更新。
|
||||
|
||||
**启用邮件通知:** 如果你想要在每次安全更新后收到邮件通知,那么就修改以下行段(取消其注释并加上你的 email 账号)。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Mail "root";
|
||||
```
|
||||
|
||||
修改为:
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Mail "2daygeek@gmail.com";
|
||||
```
|
||||
|
||||
**自动移除不用的依赖:** 你可能需要在每次更新后运行 `sudo apt autoremove` 命令来从系统中移除不用的依赖。
|
||||
|
||||
我们可以通过修改以下行来自动化这项任务(取消注释并将 `false` 改成 `true`)。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Remove-Unused-Dependencies "false";
|
||||
```
|
||||
|
||||
修改为:
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Remove-Unused-Dependencies "true";
|
||||
```
|
||||
|
||||
**启用自动重启:** 你可能需要在安全更新安装至内核后重启你的系统。你可以在以下行做出修改:
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Automatic-Reboot "false";
|
||||
```
|
||||
|
||||
到:取消注释并将 `false` 改成 `true` 以启用自动重启。
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Automatic-Reboot "true";
|
||||
```
|
||||
|
||||
**启用特定时段的自动重启:** 如果自动重启已启用,且你想要在特定时段进行重启,那么做出以下修改。
|
||||
|
||||
从:
|
||||
|
||||
```
|
||||
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
|
||||
```
|
||||
|
||||
到:取消注释并将时间改成你需要的时间。我将重启设置在早上 5 点。
|
||||
|
||||
```
|
||||
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
|
||||
```
|
||||
|
||||
### 如何启用自动化安全更新?
|
||||
|
||||
现在我们已经配置好了必需的选项。接下来,打开以下文件,确认这两个值都已设置好,其值不应为 0(1 = 启用,0 = 禁用)。
|
||||
|
||||
```
|
||||
# vi /etc/apt/apt.conf.d/20auto-upgrades
|
||||
|
||||
APT::Periodic::Update-Package-Lists "1";
|
||||
APT::Periodic::Unattended-Upgrade "1";
|
||||
```
|
||||
|
||||
**详情:**
|
||||
|
||||
* 第一行使 `apt` 每天自动运行 `apt-get update`。
|
||||
* 第二行使 `apt` 每天自动安装安全更新。
|
||||
|
||||
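配置完成后,可以用下面这个假设性的检查查看 APT 实际读取到的周期性任务配置值(不需要 root 权限):

```shell
# 假设性示例:查看 APT 当前生效的周期性任务配置
apt-config dump | grep -i periodic || echo "未找到 Periodic 相关配置"
```

此外,还可以运行 `sudo unattended-upgrade --dry-run --debug`(注意命令名是单数)演练一次无人值守更新而不实际安装任何东西。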
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/manually-install-security-updates-ubuntu-debian/
|
||||
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
@ -0,0 +1,170 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11255-1.html)
|
||||
[#]: subject: (How To Setup Multilingual Input Method On Ubuntu)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Ubuntu 上设置多语言输入法
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201908/21/231916g3gxbhybq0zv0q1h.jpg)
|
||||
|
||||
或许你不知道,在印度有数以百计的语言被使用,其中 22 种被印度机构列为官方语言。我的母语不是英语,因此当我需要从英语输入或者翻译到我的母语泰米尔语时我经常使用**谷歌翻译**。嗯,我估计我不再需要依靠谷歌翻译了。我刚发现在 Ubuntu 上输入印度语的好办法。这篇教程解释了如何配置多语言输入法的方法。这个是为 Ubuntu 18.04 LTS 特别打造的,但是它可以在其它类 Ubuntu 系统例如 Linux mint、Elementary OS 上使用。
|
||||
|
||||
### 在 Ubuntu Linux 上设置多语言输入法
|
||||
|
||||
通过 **IBus** 的帮助,我们可以轻松在 Ubuntu 及其衍生版上配置多语言输入法。IBus,即 **I**ntelligent **I**nput **Bus**(智能输入总线),是一种针对类 Unix 操作系统的多语言输入法框架。它使得我们可以在大多数 GUI 应用(例如 LibreOffice)中用母语输入。
|
||||
|
||||
### 在 Ubuntu 上安装 IBus
|
||||
|
||||
要在 Ubuntu 上安装 IBus 包,请运行:
|
||||
|
||||
```
|
||||
$ sudo apt install ibus-m17n
|
||||
```
|
||||
|
||||
ibus-m17n 包提供了许多印度语言和其它国家语言的支持,包括阿姆哈拉语,阿拉伯语,阿美尼亚语,阿萨姆语,阿萨巴斯卡语,白俄罗斯语,孟加拉语,缅甸语,中高棉语,占文,**汉语**,克里语,克罗地亚语,捷克语,丹麦语,迪维希语,马尔代夫语,世界语,法语,格鲁吉亚语,古/现代希腊语,古吉拉特语,希伯来语,因纽特语,日语,卡纳达语,克什米尔语,哈萨克语,韩语,老挝语,马来语,马拉地语,尼泊尔语,欧吉布威语,欧瑞亚语,旁遮普语,波斯语,普什图语,俄语,梵语,塞尔维亚语,四川彝文,彝文,西格西卡语,信德语,僧伽罗语,斯洛伐克语,瑞典语,泰语,泰米尔语,泰卢固语,藏语,维吾尔语,乌都语,乌兹别克语,越语及意第绪语。
|
||||
|
||||
#### 添加输入语言
|
||||
|
||||
我们可以在系统的**设置**部分添加语言。点击 Ubuntu 桌面右上角的下拉箭头,然后点击其左下角的设置图标。
|
||||
|
||||
![][2]
|
||||
|
||||
*从顶部面板启动系统设置*
|
||||
|
||||
从设置部分,点击左侧面板的**区域及语言**选项。再点击右侧**输入来源**标签下的**+**(加号)按钮。
|
||||
|
||||
![][3]
|
||||
|
||||
*设置部分的区域及语言选项*
|
||||
|
||||
在下个窗口,点击**三个垂直的点**按钮。
|
||||
|
||||
![][4]
|
||||
|
||||
*在 Ubuntu 里添加输入来源*
|
||||
|
||||
搜寻并选择你想从列表中添加的输入语言。
|
||||
|
||||
![][5]
|
||||
|
||||
*添加输入语言*
|
||||
|
||||
在本篇教程中,我将加入**泰米尔**语。在选择语言后,点击**添加**按钮。
|
||||
|
||||
![][6]
|
||||
|
||||
*添加输入来源*
|
||||
|
||||
现在你会看到选中的输入来源已经被添加了。你会在输入来源标签下的区域及语言选项中看到它。
|
||||
|
||||
![][7]
|
||||
|
||||
*Ubuntu 里的输入来源选项*
|
||||
|
||||
点击输入来源标签下的“管理安装的语言”按钮。
|
||||
|
||||
![][8]
|
||||
|
||||
*在 Ubuntu 里管理安装的语言*
|
||||
|
||||
接下来你会被询问是否想要为所选语言安装翻译包。如果你想的话可以安装它们,或者选择“稍后提醒我”按钮,你下次打开的时候会收到通知。
|
||||
|
||||
![][9]
|
||||
|
||||
*语言支持没完全安装好*
|
||||
|
||||
一旦翻译包安装好,点击**安装/移除语言**按钮。同时确保 IBus 在键盘输入法系统中被选中。
|
||||
|
||||
![][10]
|
||||
|
||||
*在 Ubuntu 中安装/移除语言*
|
||||
|
||||
从列表中选择你想要的语言并点击“应用”按钮。
|
||||
|
||||
![][11]
|
||||
|
||||
*选择输入语言*
|
||||
|
||||
到此为止。我们已成功在 Ubuntu 18.04 桌面上配置好多语言输入法。同样地,你可以添加任意多种输入语言。
|
||||
|
||||
在添加完所有语言来源后,先登出再重新登录。
|
||||
|
||||
### 用印度语或者你喜欢的语言输入
|
||||
|
||||
一旦你添加完所有语言,你就可以在 Ubuntu 桌面顶部面板的下拉栏中看到它们。
|
||||
|
||||
![][12]
|
||||
|
||||
*从 Ubuntu 桌面的顶端栏选择输入语言。*
|
||||
|
||||
你也可以使用键盘上的**徽标键+空格键**在不同语言中切换。
|
||||
|
||||
![][13]
|
||||
|
||||
*在 Ubuntu 里用**徽标键+空格键**选择输入语言*
|
||||
|
||||
打开任何 GUI 文本编辑器/应用开始打字吧!
|
||||
|
||||
![][14]
|
||||
|
||||
*在 Ubuntu 中用印度语输入*
|
||||
|
||||
### 将 IBus 加入启动应用
|
||||
|
||||
我们需要让 IBus 在每次重启后自动打开,这样每次你想要用自己喜欢的语言输入的时候就无须手动打开。
|
||||
|
||||
为此,只须在程序面板中搜索“开机应用”,然后打开开机应用选项。
|
||||
|
||||
![][15]
|
||||
|
||||
在下个窗口,点击“添加”,在名字栏输入“IBus”,在命令栏输入“ibus-daemon”,然后点击“添加”按钮。
|
||||
|
||||
![][16]
|
||||
|
||||
*在 Ubuntu 中将 Ibus 添加进开机启动项*
|
||||
|
||||
从现在起 IBus 将在系统启动后自动开始。
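此外,在部分桌面环境下,你可能还需要让图形程序知道使用 IBus 作为输入法框架。常见做法是把下面几个环境变量加入 `~/.profile`(这是通用的补充设置示例,并非本文原有步骤,是否需要视具体发行版而定):

```shell
# 让 GTK/Qt 程序以及 X 应用使用 IBus 作为输入法框架
export GTK_IM_MODULE=ibus
export QT_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
```

设置完成后重新登录即可生效。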
|
||||
|
||||
现在轮到你了。你在哪些应用/工具中用本地的印度语言输入?请在下方评论区告诉我们。
|
||||
|
||||
参考:
|
||||
|
||||
* [IBus – Ubuntu 社区百科][20]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
|
||||
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/Region-language-in-Settings-ubuntu.png
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-source-in-Ubuntu.png
|
||||
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-language.png
|
||||
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Input-Source-Ubuntu.png
|
||||
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Input-sources-Ubuntu.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Manage-Installed-Languages.png
|
||||
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-language-support-is-not-installed-completely.png
|
||||
[10]: https://www.ostechnix.com/wp-content/uploads/2019/08/Install-Remove-languages.png
|
||||
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-language.png
|
||||
[12]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-from-top-bar-in-Ubuntu.png
|
||||
[13]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-using-SuperSpace-keys.png
|
||||
[14]: https://www.ostechnix.com/wp-content/uploads/2019/08/Setup-Multilingual-Input-Method-On-Ubuntu.png
|
||||
[15]: https://www.ostechnix.com/wp-content/uploads/2019/08/Launch-startup-applications-in-ubuntu.png
|
||||
[16]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Ibus-to-startup-applications-on-Ubuntu.png
|
||||
[17]: https://www.ostechnix.com/use-google-translate-commandline-linux/
|
||||
[18]: https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/
|
||||
[19]: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
|
||||
[20]: https://help.ubuntu.com/community/ibus
|
205
published/20190815 SSLH - Share A Same Port For HTTPS And SSH.md
Normal file
205
published/20190815 SSLH - Share A Same Port For HTTPS And SSH.md
Normal file
@ -0,0 +1,205 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11247-1.html)
|
||||
[#]: subject: (SSLH – Share A Same Port For HTTPS And SSH)
|
||||
[#]: via: (https://www.ostechnix.com/sslh-share-port-https-ssh/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
SSLH:让 HTTPS 和 SSH 共享同一个端口
|
||||
======
|
||||
|
||||
![SSLH - Share A Same Port For HTTPS And SSH][1]
|
||||
|
||||
一些 ISP 和公司为了加强安全性,可能封锁了大多数端口,只允许少数特定端口(如 80 和 443)访问。在这种情况下,我们别无选择,只能让同一个端口被多个程序共用,比如很少被封锁的 HTTPS 端口 443。在 SSL/SSH 多路复用器 SSLH 的帮助下,它可以侦听 443 端口上的传入连接,并按协议分流。更简单地说,SSLH 允许我们在 Linux 系统的 443 端口上运行多个程序/服务,因此你可以通过同一个端口同时使用 SSL 和 SSH。如果你遇到大多数端口被防火墙封锁的情况,就可以使用 SSLH 访问远程服务器。这篇简短的教程描述了如何在类 Unix 操作系统中使用 SSLH 让 HTTPS、SSH 共享相同的端口。
|
||||
|
||||
### SSLH:让 HTTPS、SSH 共享端口
|
||||
|
||||
#### 安装 SSLH
|
||||
|
||||
大多数 Linux 发行版上 SSLH 都有软件包,因此你可以使用默认包管理器进行安装。
|
||||
|
||||
在 Debian、Ubuntu 及其衍生品上运行:
|
||||
|
||||
```
|
||||
$ sudo apt-get install sslh
|
||||
```
|
||||
|
||||
安装 SSLH 时,将提示你是要将 sslh 作为从 inetd 运行的服务,还是作为独立服务器运行。每种选择都有其自身的优点。如果每天只有少量连接,最好从 inetd 运行 sslh 以节省资源。另一方面,如果有很多连接,sslh 应作为独立服务器运行,以避免为每个传入连接生成新进程。
|
||||
|
||||
![][2]
|
||||
|
||||
*安装 sslh*
|
||||
|
||||
在 Arch Linux 和 Antergos、Manjaro Linux 等衍生品上,使用 Pacman 进行安装,如下所示:
|
||||
|
||||
```
|
||||
$ sudo pacman -S sslh
|
||||
```
|
||||
|
||||
在 RHEL、CentOS 上,你需要添加 EPEL 存储库,然后安装 SSLH,如下所示:
|
||||
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
$ sudo yum install sslh
|
||||
```
|
||||
|
||||
在 Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install sslh
|
||||
```
|
||||
|
||||
如果它在默认存储库中不可用,你可以如[这里][3]所述手动编译和安装 SSLH。
|
||||
|
||||
#### 配置 Apache 或 Nginx Web 服务器
|
||||
|
||||
如你所知,Apache 和 Nginx Web 服务器默认会监听所有网络接口(即 `0.0.0.0:443`)。我们需要更改此设置以告知 Web 服务器仅侦听 `localhost` 接口(即 `127.0.0.1:443` 或 `localhost:443`)。
|
||||
|
||||
为此,请编辑 Web 服务器(nginx 或 apache)配置文件并找到以下行:
|
||||
|
||||
```
|
||||
listen 443 ssl;
|
||||
```
|
||||
|
||||
将其修改为:
|
||||
|
||||
```
|
||||
listen 127.0.0.1:443 ssl;
|
||||
```
|
||||
|
||||
如果你在 Apache 中使用虚拟主机,请确保你也修改了它。
|
||||
|
||||
```
|
||||
VirtualHost 127.0.0.1:443
|
||||
```
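作为参考,一个只监听本地回环地址的 Apache 虚拟主机大致如下(其中域名、证书路径均为假设的示例,请替换为你自己的配置):

```
<VirtualHost 127.0.0.1:443>
    ServerName example.com
    SSLEngine on
    # 示例证书路径,请替换为实际文件
    SSLCertificateFile /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
    DocumentRoot /var/www/html
</VirtualHost>
```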
|
||||
|
||||
保存并关闭配置文件。不要重新启动该服务。我们还没有完成。
|
||||
|
||||
#### 配置 SSLH
|
||||
|
||||
使 Web 服务器仅在本地接口上侦听后,编辑 SSLH 配置文件:
|
||||
|
||||
```
|
||||
$ sudo vi /etc/default/sslh
|
||||
```
|
||||
|
||||
找到下列行:
|
||||
|
||||
```
|
||||
Run=no
|
||||
```
|
||||
|
||||
将其修改为:
|
||||
|
||||
```
|
||||
Run=yes
|
||||
```
|
||||
|
||||
然后,向下滚动一点并修改以下行以允许 SSLH 在所有可用接口上侦听端口 443(例如 `0.0.0.0:443`)。
|
||||
|
||||
```
|
||||
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
|
||||
```
|
||||
|
||||
这里,
|
||||
|
||||
  * `--user sslh`:要求以这个特定的用户身份运行。
  * `--listen 0.0.0.0:443`:SSLH 监听所有可用接口的 443 端口。
  * `--ssh 127.0.0.1:22`:将 SSH 流量路由到本地的 22 端口。
  * `--ssl 127.0.0.1:443`:将 HTTPS/SSL 流量路由到本地的 443 端口。
|
||||
|
||||
保存并关闭文件。
|
||||
|
||||
最后,启用并启动 `sslh` 服务以更新更改。
|
||||
|
||||
```
|
||||
$ sudo systemctl enable sslh
|
||||
$ sudo systemctl start sslh
|
||||
```
|
||||
|
||||
#### 测试
|
||||
|
||||
检查 SSLH 守护进程是否正在监听 443 端口。
|
||||
|
||||
```
|
||||
$ ps -ef | grep sslh
|
||||
sslh 2746 1 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
|
||||
sslh 2747 2746 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
|
||||
sk 2754 1432 0 15:51 pts/0 00:00:00 grep --color=auto sslh
|
||||
```
|
||||
|
||||
现在,你可以使用端口 443 通过 SSH 访问远程服务器:
|
||||
|
||||
```
|
||||
$ ssh -p 443 [email protected]
|
||||
```
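如果不想每次都手动指定端口,也可以在客户端的 `~/.ssh/config` 中为该主机固定使用 443 端口(以下主机别名与地址均为示例):

```
Host myserver
    # 示例地址,请替换为你的服务器
    HostName 192.168.225.50
    Port 443
    User sk
```

之后只需运行 `ssh myserver` 即可。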
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
[email protected]'s password:
|
||||
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
|
||||
|
||||
* Documentation: https://help.ubuntu.com
|
||||
* Management: https://landscape.canonical.com
|
||||
* Support: https://ubuntu.com/advantage
|
||||
|
||||
System information as of Wed Aug 14 13:11:04 IST 2019
|
||||
|
||||
System load: 0.23 Processes: 101
|
||||
Usage of /: 53.5% of 19.56GB Users logged in: 0
|
||||
Memory usage: 9% IP address for enp0s3: 192.168.225.50
|
||||
Swap usage: 0% IP address for enp0s8: 192.168.225.51
|
||||
|
||||
* Keen to learn Istio? It's included in the single-package MicroK8s.
|
||||
|
||||
https://snapcraft.io/microk8s
|
||||
|
||||
61 packages can be updated.
|
||||
22 updates are security updates.
|
||||
|
||||
|
||||
Last login: Wed Aug 14 13:10:33 2019 from 127.0.0.1
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
*通过 SSH 使用 443 端口访问远程系统*
|
||||
|
||||
看见了吗?即使默认的 SSH 端口 22 被阻止,我现在也可以通过 SSH 访问远程服务器。正如你在上面的示例中所看到的,我使用 https 端口 443 进行 SSH 连接。
|
||||
|
||||
我在我的 Ubuntu 18.04 LTS 服务器上测试了 SSLH,它如上所述工作得很好。我在受保护的局域网中测试了 SSLH,所以我不知道是否有安全问题。如果你在生产环境中使用它,请在下面的评论部分中告诉我们使用 SSLH 的优缺点。
|
||||
|
||||
有关更多详细信息,请查看下面给出的官方 GitHub 页面。
|
||||
|
||||
资源:
|
||||
|
||||
* [SSLH GitHub 仓库][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/sslh-share-port-https-ssh/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2017/08/SSLH-Share-A-Same-Port-For-HTTPS-And-SSH-1-720x340.jpg
|
||||
[2]: https://www.ostechnix.com/wp-content/uploads/2017/08/install-sslh.png
|
||||
[3]: https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md
|
||||
[4]: https://www.ostechnix.com/wp-content/uploads/2017/08/Access-remote-systems-via-SSH-using-port-443.png
|
||||
[5]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
|
||||
[6]: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
|
||||
[7]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
[8]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
|
||||
[9]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
|
||||
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
|
||||
[11]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/
|
||||
[12]: https://github.com/yrutschle/sslh
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11241-1.html)
|
||||
[#]: subject: (GNOME and KDE team up on the Linux desktop, docs for Nvidia GPUs open up, a powerful new way to scan for firmware vulnerabilities, and more news)
|
||||
[#]: via: (https://opensource.com/article/19/8/news-august-17)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
开源新闻综述:GNOME 和 KDE 达成合作、Nvidia 开源 GPU 文档
|
||||
======
|
||||
|
||||
> 不要错过两周以来最大的开源头条新闻。
|
||||
|
||||
![Weekly news roundup with TV][1]
|
||||
|
||||
在本期开源新闻综述中,我们将介绍两种新的强大数据可视化工具、Nvidia 开源其 GPU 文档、激动人心的新工具、确保自动驾驶汽车的固件安全等等!
|
||||
|
||||
### GNOME 和 KDE 在 Linux 桌面上达成合作
|
||||
|
||||
Linux 在桌面计算机上一直处于分裂状态。最近的一篇[公告][2]称,“两个主要的 Linux 桌面竞争对手,[GNOME 基金会][3] 和 [KDE][4] 已经同意合作。”
|
||||
|
||||
这两个组织将成为今年 11 月在巴塞罗那举办的 [Linux App Summit(LAS)2019][5] 的赞助商。这一举措在某种程度上似乎是对桌面计算不再是争夺支配地位的最佳场所的回应。无论是什么原因,Linux 桌面的粉丝们都有新的理由希望未来出现一个标准化的 GUI 环境。
|
||||
|
||||
### 新的开源数据可视化工具
|
||||
|
||||
这个世界上很少有不是由数据驱动的。除非数据以人们可以互动的形式出现,否则它并不是很好使用。最近开源的两个数据可视化项目正在尝试使数据更有用。
|
||||
|
||||
第一个工具名为 **Neuroglancer**,由 [Google 的研究团队][6]创建。它“使神经科学家能够在交互式可视化中建立大脑神经通路的 3D 模型”。Neuroglancer 通过使用神经网络追踪大脑中的神经元路径并构建完整的可视化来实现这一点。科学家已经使用 Neuroglancer(你可以[从 GitHub 取得][7])扫描果蝇的大脑,建立了一个交互式地图。
|
||||
|
||||
第二个工具来自一个意想不到的来源:澳大利亚信号局。这是该国类似 NSA 的机构,它“开源了其[内部数据可视化和分析工具][8]之一”。这个名为 **[Constellation][9]** 的工具可以“识别复杂数据集中的趋势和模式,并且能够扩展到‘数十亿输入’”。该机构总干事迈克·伯吉斯表示,他希望“这一工具将有助于产生有利于所有澳大利亚人的科学和其他方面的突破”。鉴于它是开源的,它可以使整个世界受益。
|
||||
|
||||
### Nvidia 开始发布 GPU 文档
|
||||
|
||||
多年来,图形处理单元(GPU)制造商 Nvidia 并没有做出什么让开源项目轻松开发其产品的驱动程序的努力。现在,该公司通过[发布 GPU 硬件文档][10]向这些项目迈出了一大步。
|
||||
|
||||
该公司根据 MIT 许可证发布的文档[可在 GitHub 上获取][11]。它涵盖了几个关键领域,如设备初始化、内存时钟/调整和电源状态。据硬件新闻网站 Phoronix 称,开发了 Nvidia GPU 的开源驱动程序的 Nouveau 项目将是率先使用该文档来推动其开发工作的项目之一。
|
||||
|
||||
### 用于保护固件的新工具
|
||||
|
||||
似乎每周都有的消息称,移动设备或连接互联网的小设备中出现新漏洞。通常,这些漏洞存在于控制设备的固件中。自动驾驶汽车服务 Cruise [发布了一个开源工具][12],用于在这些漏洞成为问题之前捕获这些漏洞。
|
||||
|
||||
该工具被称为 [FwAnalyzer][13]。它检查固件代码中是否存在许多潜在问题,包括“识别潜在危险的可执行文件”,并查明“任何被错误遗留的调试代码”。曾帮助开发该工具的 Cruise 工程师 Collin Mulliner 说,通过在代码上运行 FwAnalyzer,固件开发人员“现在能够检测并防止各种安全问题”。
|
||||
|
||||
### 其它新闻
|
||||
|
||||
* [为什么洛杉矶决定将未来寄予开源][14]
|
||||
* [麻省理工学院出版社发布了关于开源出版软件的综合报告][15]
|
||||
* [华为推出鸿蒙操作系统,不会放弃 Android 智能手机][16]
|
||||
|
||||
*一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/news-august-17
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
|
||||
[2]: https://www.zdnet.com/article/gnome-and-kde-work-together-on-the-linux-desktop/
|
||||
[3]: https://www.gnome.org/
|
||||
[4]: https://kde.org/
|
||||
[5]: https://linuxappsummit.org/
|
||||
[6]: https://www.cbronline.com/news/brain-mapping-google-ai
|
||||
[7]: https://github.com/google/neuroglancer
|
||||
[8]: https://www.computerworld.com.au/article/665286/australian-signals-directorate-open-sources-data-analysis-tool/
|
||||
[9]: https://www.constellation-app.com/
|
||||
[10]: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Open-GPU-Docs
|
||||
[11]: https://github.com/nvidia/open-gpu-doc
|
||||
[12]: https://arstechnica.com/information-technology/2019/08/self-driving-car-service-open-sources-new-tool-for-securing-firmware/
|
||||
[13]: https://github.com/cruise-automation/fwanalyzer
|
||||
[14]: https://www.techrepublic.com/article/why-la-decided-to-open-source-its-future/
|
||||
[15]: https://news.mit.edu/2019/mit-press-report-open-source-publishing-software-0808
|
||||
[16]: https://www.itnews.com.au/news/huawei-unveils-harmony-operating-system-wont-ditch-android-for-smartphones-529432
|
@ -0,0 +1,107 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (LiVES Video Editor 3.0 is Here With Significant Improvements)
|
||||
[#]: via: (https://itsfoss.com/lives-video-editor/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
LiVES 视频编辑器 3.0 有了显著的改善
|
||||
======
|
||||
|
||||
我们最近列出了[最好的开源视频编辑器][1]清单。LiVES 就是其中一款免费的开源视频编辑器。
|
||||
|
||||
尽管许多用户还在等待 Windows 版本的发行,但刚刚发行的 Linux 版 LiVES 视频编辑器(最新版本 v3.0.1)带来了一次重大更新,其中包括一些新功能和改进。
|
||||
|
||||
在这篇文章里,我将会列出新版本中的重要改进,并介绍在 Linux 上安装它的步骤。
|
||||
|
||||
### LiVES 视频编辑器 3.0:新的改进
|
||||
|
||||
![Zorin OS 中正在加载的 LiVES 视频编辑器][2]
|
||||
|
||||
总的来说,这次重大更新旨在让 LiVES 视频编辑器提供更加流畅的回放、防止意外崩溃、优化视频录制,并让在线视频下载器更加实用。
|
||||
|
||||
下面列出了修改:
|
||||
|
||||
  * 如果需要的话,可以静默加载,直到视频播放完毕。
|
||||
* 改进回放插件为 openGL,提供更加丝滑的回放。
|
||||
* 重新启用了 openGL 回放插件的高级选项。
|
||||
  * 在 VJ/预解码中允许“充足”的所有帧。
|
||||
* 重构了在播放时基础计算的代码(有了更好的 a/v 同步)。
|
||||
* 彻底修复了外部音频和音频,提高了准确性并减少了 CPU 周期。
|
||||
* 进入多音轨模式时自动切换至内部音频。
|
||||
* 重新显示效果映射器窗口时,将会正常展示效果状态(on/off)。
|
||||
* 解决了音频和视频线程之间的冲突。
|
||||
* 现在可以对在线视频下载器,剪辑大小和格式进行修改并添加了更新选项。
|
||||
* 对实时效果实行了参考计数的记录。
|
||||
* 大范围重写了主界面,清理代码并改进多视觉。
|
||||
* 优化了视频播放器运行时的录制功能。
|
||||
* 改进了 projectM 过滤器,包括支持了 SDL2。
|
||||
* 添加了一个选项来逆转多轨合成器中的 Z-order(后层现在可以覆盖上层了)。
|
||||
* 增加了对 musl libc 的支持
|
||||
* 更新了乌克兰语的翻译
|
||||
|
||||
|
||||
如果您不是一位高级视频编辑师,也许会对上面列出的重要更新提不起太大的兴趣。但正是因为这些更新,才使得“LiVES 视频编辑器”成为了最好的开源视频编辑软件。
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 LiVES 视频编辑器
|
||||
|
||||
LiVES 几乎可以在所有主要 Linux 发行版中使用。但是,您可能并不能在软件中心找到它的最新版本。所以,如果你想通过这种方式安装,那你就不得不耐心等待了。
|
||||
|
||||
如果你想要手动安装,可以从它的下载页面获取 Fedora/openSUSE 的 RPM 安装包,它也适用于其他 Linux 发行版。
|
||||
|
||||
[下载 LiVES 视频编辑器][4]
|
||||
|
||||
如果您使用的是 Ubuntu(或其他基于 Ubuntu 的发行版),您可以安装由 [Ubuntuhandbook][6] 进行维护的[非官方 PPA][5]。
|
||||
|
||||
下面由我来告诉你,你该做些什么:
|
||||
|
||||
**1.** 启动终端后输入以下命令:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntuhandbook1/lives
|
||||
```
|
||||
|
||||
系统将提示您输入密码用于确认添加 PPA。
|
||||
|
||||
**2.** 完成后,您现在可以轻松地更新软件包列表并安装 LiVES 视频编辑器。以下是需要您输入的命令:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
sudo apt install lives lives-plugins
|
||||
```
|
||||
|
||||
**3.** 现在,它开始下载并安装视频编辑器,等待大约一分钟即可完成。
|
||||
|
||||
**总结**
|
||||
|
||||
Linux 上有许多[视频编辑器][7],但它们通常被认为无法胜任专业的编辑工作。我并不是专业人士,所以像 LiVES 这样免费的视频编辑器足以进行简单的编辑了。
|
||||
|
||||
您认为怎么样呢?您在 Linux 上使用 LiVES 或其他视频编辑器的体验还好吗?在下面的评论中告诉我们你的感觉吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/lives-video-editor/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Scvoet][c]
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[c]: https://github.com/scvoet
|
||||
[1]: https://itsfoss.com/open-source-video-editors/
|
||||
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/lives-video-editor-loading.jpg?ssl=1
|
||||
[3]: https://itsfoss.com/vidcutter-video-editor-linux/
|
||||
[4]: http://lives-video.com/index.php?do=downloads#binaries
|
||||
[5]: https://itsfoss.com/ppa-guide/
|
||||
[6]: http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/
|
||||
[7]: https://itsfoss.com/best-video-editing-software-linux/
|
@ -0,0 +1,85 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Serverless on Kubernetes, diverse automation, and more industry trends)
|
||||
[#]: via: (https://opensource.com/article/19/8/serverless-kubernetes-and-more)
|
||||
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
|
||||
|
||||
Serverless on Kubernetes, diverse automation, and more industry trends
|
||||
======
|
||||
A weekly look at open source community and industry trends.
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
|
||||
|
||||
## [10 tips for creating robust serverless components][2]
|
||||
|
||||
> There are some repeated patterns that we have seen after creating 20+ serverless components. We recommend that you browse through the [available component repos on GitHub][3] and check which one is close to what you’re building. Just open up the repo and check the code and see how everything fits together.
|
||||
>
|
||||
> All component code is open source, and we are striving to keep it clean, simple and easy to follow. After you look around you’ll be able to understand how our core API works, how we interact with external APIs, and how we are reusing other components.
|
||||
|
||||
**The impact**: Serverless Inc is striving to take probably the most hyped architecture early on in the hype cycle and make it usable and practical today. For serverless to truly go mainstream, producing something useful has to be as easy for a developer as "Hello world!," and these components are a step in that direction.
|
||||
|
||||
## [Kubernetes workloads in the serverless era: Architecture, platforms, and trends][4]
|
||||
|
||||
> There are many fascinating elements of the Kubernetes architecture: the containers providing common packaging, runtime and resource isolation model within its foundation; the simple control loop mechanism that monitors the actual state of components and reconciles this with the desired state; the custom resource definitions. But the true enabler for extending Kubernetes to support diverse workloads is the concept of the pod.
|
||||
>
|
||||
> A pod provides two sets of guarantees. The deployment guarantee ensures that the containers of a pod are always placed on the same node. This behavior has some useful properties such as allowing containers to communicate synchronously or asynchronously over localhost, over inter-process communication ([IPC][5]), or using the local file system.
|
||||
|
||||
**The impact**: If developer adoption of serverless architectures is largely driven by how easily they can be productive working that way, business adoption will be driven by the ability to place this trend in the operational and business context. IT decision-makers need to see a holistic picture of how serverless adds value alongside their existing investments, and operators and architects need to envision how they'll keep it all up and running.
|
||||
|
||||
## [How developers can survive the Last Mile with CodeReady Workspaces][6]
|
||||
|
||||
> Inside each cloud provider, a host of tools can address CI/CD, testing, monitoring, backing up and recovery problems. Outside of those providers, the cloud native community has been hard at work cranking out new tooling from [Prometheus][7], [Knative][8], [Envoy][9] and [Fluentd][10], to [Kubernetes][11] itself and the expanding ecosystem of Kubernetes Operators.
|
||||
>
|
||||
> Within all of those projects, cloud-based services and desktop utilities is one major gap, however: the last mile of software development is the IDE. And despite the wealth of development projects inside the community and Cloud Native Computing Foundation, it is indeed the Eclipse Foundation, as mentioned above, that has taken on this problem with a focus on the new cloud development landscape.
|
||||
|
||||
**The impact**: Increasingly complex development workflows and deployment patterns call for increasingly intelligent IDEs. While I'm sure it is possible to push a button and redeploy your microservices to a Kubernetes cluster from emacs (or vi, relax), Eclipse Che (and CodeReady Workspaces) are being built from the ground up with these types of cloud-native workflows in mind.
|
||||
|
||||
## [Automate security in increasingly complex hybrid environments][12]
|
||||
|
||||
> According to the [Information Security Forum][13]’s [Global Security Threat Outlook for 2019][14], one of the biggest IT trends to watch this year is the increasing sophistication of cybercrime and ransomware. And even as the volume of ransomware attacks is dropping, cybercriminals are finding new, more potent ways to be disruptive. An [article in TechRepublic][15] points to cryptojacking malware, which enables someone to hijack another's hardware without permission to mine cryptocurrency, as a growing threat for enterprise networks.
|
||||
>
|
||||
> To more effectively mitigate these risks, organizations could invest in automation as a component of their security plans. That’s because it takes time to investigate and resolve issues, in addition to applying controlled remediations across bare metal, virtualized systems, and cloud environments -- both private and public -- all while documenting changes.
|
||||
|
||||
**The impact**: This one is really about our ability to trust that the network service providers that we rely upon to keep our phones and smart TVs full of stutter-free streaming HD content have what they need to protect the infrastructure that makes it all possible. I for one am rooting for you!
|
||||
|
||||
## [AnsibleFest 2019 session catalog][16]
|
||||
|
||||
> 85 Ansible automation sessions over 3 days in Atlanta, Georgia
|
||||
|
||||
**The impact**: What struck me is the range of things that can be automated with Ansible. Windows? Check. Multicloud? Check. Security? Check. The real question after those three days are over will be: Is there anything in IT that can't be automated with Ansible? Seriously, I'm asking, let me know.
|
||||
|
||||
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/serverless-kubernetes-and-more
|
||||
|
||||
作者:[Tim Hildred][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/thildred
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://serverless.com/blog/10-tips-creating-robust-serverless-components/
|
||||
[3]: https://github.com/serverless-components/
|
||||
[4]: https://www.infoq.com/articles/kubernetes-workloads-serverless-era/
|
||||
[5]: https://opensource.com/article/19/4/interprocess-communication-linux-networking
|
||||
[6]: https://thenewstack.io/how-developers-can-survive-the-last-mile-with-codeready-workspaces/
|
||||
[7]: https://prometheus.io/
|
||||
[8]: https://knative.dev/
|
||||
[9]: https://www.envoyproxy.io/
|
||||
[10]: https://www.fluentd.org/
|
||||
[11]: https://kubernetes.io/
|
||||
[12]: https://www.redhat.com/en/blog/automate-security-increasingly-complex-hybrid-environments
|
||||
[13]: https://www.securityforum.org/
|
||||
[14]: https://www.prnewswire.com/news-releases/information-security-forum-forecasts-2019-global-security-threat-outlook-300757408.html
|
||||
[15]: https://www.techrepublic.com/article/top-4-security-threats-businesses-should-expect-in-2019/
|
||||
[16]: https://agenda.fest.ansible.com/sessions
|
@ -1,3 +1,5 @@
|
||||
translating by valoniakim
|
||||
|
||||
How allowing myself to be vulnerable made me a better leader
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm)
|
||||
|
@ -1,317 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
|
||||
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1)
|
||||
[#]: author: (redhat https://www.redhat.com)
|
||||
|
||||
Command Line Heroes: Season 1: OS Wars
|
||||
======
|
||||
Saron Yitbarek:
|
||||
|
||||
Some stories are so epic, with such high stakes, that in my head, it's like that crawling text at the start of a Star Wars movie. You know, like-
|
||||
|
||||
Voice Actor:
|
||||
|
||||
Episode One, The OS Wars.
|
||||
|
||||
Saron Yitbarek:
|
||||
|
||||
Yeah, like that.
|
||||
|
||||
Voice Actor:
|
||||
|
||||
[00:00:30]
|
||||
|
||||
It is a period of mounting tensions. The empires of Bill Gates and Steve Jobs careen toward an inevitable battle over proprietary software. Gates has formed a powerful alliance with IBM, while Jobs refuses to license his hardware or operating system. Their battle for dominance threatens to engulf the galaxy in an OS war. Meanwhile, in distant lands, and unbeknownst to the emperors, open source rebels have begun to gather.
|
||||
|
||||
Saron Yitbarek:
|
||||
|
||||
[00:01:00]
|
||||
|
||||
Okay. Maybe that's a bit dramatic, but when we're talking about the OS wars of the 1980s, '90s, and 2000s, it's hard to overstate things. There really was an epic battle for dominance. Steve Jobs and Bill Gates really did hold the fate of billions in their hands. Control the operating system, and you control how the vast majority of people use computers, how we communicate with each other, how we source information. I could go on, but you know all this. Control the OS, and you would be an emperor.
|
||||
|
||||
[00:01:30]
|
||||
|
||||
[00:02:00]
|
||||
|
||||
I'm Saron Yitbarek [00:01:24], and you're listening to Command Line Heroes, an original podcast from Red Hat. What is a Command Line Hero, you ask? Well, if you would rather make something than just use it, if you believe developers have the power to build a better future, if you want a world where we all get a say in how our technologies shape our lives, then you, my friend, are a command line hero. In this series, we bring you stories from the developers among us who are transforming tech from the command line up. And who am I to be guiding you on this trek? Who is Saron Yitbarek? Well, actually I'm guessing I'm a lot like you. I'm a developer for starters, and everything I do depends on open source software. It's my world. The stories we tell on this podcast are a way for me to get above the daily grind of my work, and see that big picture. I hope it does the same thing for you, too.
|
||||
|
||||
[00:02:30]
|
||||
|
||||
[00:03:00]
|
||||
|
||||
What I wanted to know right off the bat was, where did open source technology even come from? I mean, I know a fair bit about Linus Torvalds and the glories of Linux®, as I'm sure you do, too, but really, there was life before open source, right? And if I want to truly appreciate the latest and greatest of things like DevOps and containers, and on and on, well, I feel like I owe it to all those earlier developers to know where this stuff came from. So, let's take a short break from worrying about memory leaks and buffer overflows. Our journey begins with the OS wars, the epic battle for control of the desktop. It was like nothing the world had ever seen, and I'll tell you why. First, in the age of computing, you've got exponentially scaling advantages for the big fish; and second, there's never been such a battle for control on ground that's constantly shifting. Bill Gates and Steve Jobs? They don't know it yet, but by the time this story is halfway done, everything they're fighting for is going to change, evolve, and even ascend into the cloud.
[00:03:30] [00:04:00] Okay, it's the fall of 1983. I was negative six years old. Ronald Reagan was president, and the U.S. and the Soviet Union are threatening to drag the planet into nuclear war. Over at the Civic Center in Honolulu, it's the annual Apple sales conference. An exclusive bunch of Apple employees are waiting for Steve Jobs to get onstage. He's this super bright-eyed 28-year-old, and he's looking pretty confident. In a very serious voice, Jobs speaks into the mic and says that he's invited three industry experts to have a panel discussion on software. But the next thing that happens is not what you'd expect. Super cheesy '80s music fills the room. A bunch of multi-colored tube lights light up the stage, and then an announcer voice says-

Voice Actor: And now, ladies and gentlemen, the Macintosh software dating game.
Saron Yitbarek: [00:04:30] [00:05:00] Jobs has this big grin on his face as he reveals that the three CEOs on stage have to take turns wooing him. It's essentially an '80s version of The Bachelor, but for tech love. Two of the software bigwigs say their bit, and then it's over to contestant number three. Is that? Yup. A fresh-faced Bill Gates with large square glasses that cover half his face. He proclaims that during 1984, half of Microsoft's revenue is going to come from Macintosh software. The audience loves it, and gives him a big round of applause. What they don't know is that one month after this event, Bill Gates will announce his plans to release Windows 1.0. You'd never guess Jobs is flirting with someone who'd end up as Apple's biggest rival. But Microsoft and Apple are about to live through the worst marriage in tech history. They're going to betray each other, they're going to try and destroy each other, and they're going to be deeply, painfully bound to each other.

James Allworth: [00:05:30] I guess philosophically, one was more idealistic and focused on the user experience above all else, and was an integrated organization, whereas Microsoft was much more pragmatic, with a modular focus-
Saron Yitbarek: That's James Allworth. He's a prolific tech writer who worked inside the corporate team of Apple Retail. Notice that definition of Apple he gives. An integrated organization. That sense of a company beholden only to itself. A company that doesn't want to rely on others. That's key.

James Allworth: [00:06:00] Apple was the integrated player, and it wanted to focus on a delightful user experience, and that meant that it wanted to control the entire stack and everything that was delivered, from the hardware to the operating system, to even some of the applications that ran on top of the operating system. That always served it well in periods where new innovations, important innovations, were coming to market where you needed to be across both hardware and software, and where being able to change the hardware based on what you wanted to do and what was new in software was an advantage. For example-
Saron Yitbarek: [00:06:30] [00:07:00] A lot of people loved that integration, and became die-hard Apple fans. Plenty of others stuck with Microsoft. Back to that sales conference in Honolulu. At that very same event, Jobs gave his audience a sneak peek at the Super Bowl ad he was about to release. You might have seen it for yourself. Think George Orwell's 1984. In this cold and gray world, mindless automatons are shuffling along under a dictator's projected gaze. They represent IBM users. Then, beautiful, athletic Anya Major, representing Apple, comes running through the hall in full color. She hurls her sledgehammer at Big Brother's screen, smashing it to bits. Big Brother's spell is broken, and a booming voice tells us that Apple is about to introduce the Macintosh.

Voice Actor: And you'll see why 1984 will not be like 1984.
Saron Yitbarek: [00:07:30] And yeah, looking back at that commercial, the idea that Apple was a freedom fighter working to set the masses free is a bit much. But the thing hit a nerve. Ken Segal worked at the advertising firm that made the commercial for Apple. He was Steve Jobs' advertising guy for more than a decade in the early days.

Ken Segal: [00:08:00] Well, the 1984 commercial came with a lot of risk. In fact, it was so risky that Apple didn't want to run it when they saw it. You've probably heard stories that Steve liked it, but the Apple board did not like it. In fact, they were so outraged that so much money had been spent on such a thing that they wanted to fire the ad agency. Steve was the one sticking up for the agency.
Saron Yitbarek: Jobs, as usual, knew a good mythology when he saw one.

Ken Segal: That commercial struck such a chord within the company, within the industry, that it became this thing for Apple. Whether or not people were buying computers that day, it had a sort of an aura that stayed around for years and years and years, and helped define the character of the company. We're the rebels. We're the guys with the sledgehammer.
Saron Yitbarek: [00:08:30] So in their battle for the hearts and minds of literally billions of potential consumers, the emperors of Apple and Microsoft were learning to frame themselves as redeemers. As singular heroes. As lifestyle choices. But Bill Gates knew something that Apple had trouble understanding. This idea that in a wired world, nobody, not even an emperor, can really go it alone.
[00:09:00] [00:09:30] June 25th, 1985. Gates sends a memo to Apple's then-CEO John Sculley. This was during the wilderness years. Jobs had just been excommunicated, and wouldn't return to Apple until 1996. Maybe it was because Jobs was out that Gates felt confident enough to write what he wrote. In the memo, he encourages Apple to license their OS to clone makers. I want to read a bit from the end of the memo, just to give you a sense of how perceptive it was. Gates writes, "It is now impossible for Apple to create a standard out of their innovative technology without support from other personal computer manufacturers. Apple must open the Macintosh architecture to have the independent support required to gain momentum and establish a standard." In other words, no more operating in a silo, you guys. You've got to be willing to partner with others. You have to work with developers.

[00:10:00] [00:10:30] You see this philosophy years later, when Microsoft CEO Steve Ballmer gets up on stage to give a keynote and he starts shouting, "Developers, developers, developers, developers, developers, developers. Developers, developers, developers, developers, developers, developers, developers, developers." You get the idea. Microsoft likes developers. Now, they're not about to share source code with them, but they do want to build this whole ecosystem of partners. And when Bill Gates suggests that Apple do the same, as you might have guessed, the idea is tossed out the window. Apple had drawn a line in the sand, and five months after they trashed Gates' memo, Microsoft released Windows 1.0. The war was on.
Developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers.

[00:11:00] You're listening to Command Line Heroes, an original podcast from Red Hat. In this inaugural episode, we go back in time to relive the epic story of the OS wars, and we're going to find out, how did a war between tech giants clear the way for the open source world we all live in today?
[00:11:30] Okay, a little backstory. Forgive me if you've heard this one, but it's a classic. It's 1979, and Steve Jobs drives up to the Xerox PARC research center in Palo Alto. The engineers there have been developing this whole fleet of elements for what they call a graphical user interface. Maybe you've heard of it. They've got menus, they've got scroll bars, they've got buttons and folders and overlapping windows. It was a beautiful new vision of what a computer interface could look like. And nobody had any of this stuff. Author and journalist Steven Levy talks about its potential.

Steven Levy: [00:12:00] There was a lot of excitement about this new interface that was going to be much friendlier than what we had before, which used what was called the command line, where there was really no interaction between you and the computer in the way you'd interact with something in real life. The mouse and the graphics on the computer gave you a way to do that, to point to something just like you'd point to something in real life. It made it a lot easier. You didn't have to memorize all these codes.
Saron Yitbarek: [00:12:30] [00:13:00] Except, the Xerox executives did not get that they were sitting on top of a platinum mine. The engineers were more aware than the execs. Typical. So those engineers were, yeah, a little stressed out that they were instructed to show Jobs how everything worked. But the executives were calling the shots. Jobs felt, quote, "The product genius that brought them to that monopolistic position gets rotted out by people running these companies that have no conception of a good product versus a bad product." That's sort of harsh, but hey, Jobs walked out of that meeting with a truckload of ideas that Xerox executives had missed. Pretty much everything he needed to revolutionize the desktop computing experience. Apple releases the Lisa in 1983, and then the Mac in 1984. These devices were built on the ideas swiped from Xerox.

[00:13:30] What's interesting to me is Jobs' reaction to the claim that he stole the GUI. He's pretty philosophical about it. He quotes Picasso, saying, "Good artists copy, great artists steal." He tells one reporter, "We have always been shameless about stealing great ideas." Great artists steal. Okay. I mean, we're not talking about stealing in a hard sense. Nobody's obtaining proprietary source code and blatantly incorporating it into their operating system. This is softer, more like idea borrowing. And that's much more difficult to control, as Jobs himself was about to learn. Legendary software wizard, and true command line hero, Andy Hertzfeld, was an original member of the Macintosh development team.
Andy Hertzfeld: [00:14:00] [00:14:30] [00:15:00] Yeah, Microsoft was our first software partner with the Macintosh. At the time, we didn't really consider them a competitor. They were the very first company outside of Apple that we gave Macintosh prototypes to. I talked with the technical lead at Microsoft usually once a week. They were the first outside party trying out the software that we wrote. They gave us very important feedback, and in general I would say the relationship was pretty good. But I also noticed in my conversations with the technical lead, he started asking questions that he didn't really need to know about how the system was implemented, and I got the idea that they were trying to copy the Macintosh. I told Steve Jobs about it pretty early on, but it really came to a head in the fall of 1983. We discovered that they actually, without telling us ahead of time, they announced Windows at COMDEX in November 1983 and Steve Jobs hit the roof. He really considered that a betrayal.
Saron Yitbarek: [00:15:30] [00:16:00] As newer versions of Windows were released, it became pretty clear that Microsoft had lifted from Apple all the ideas that Apple had lifted from Xerox. Jobs was apoplectic. His Picasso line about how great artists steal? Yeah. That goes out the window. Though maybe Gates was using it now. Reportedly, when Jobs screamed at Gates that he'd stolen from them, Gates responded, "Well Steve, I think it's more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set, and found out that you'd already stolen it." Apple ends up suing Microsoft for stealing the look and feel of their GUI. The case goes on for years, but in 1993, a judge from the 9th Circuit Court of Appeals finally sides with Microsoft. Judge Vaughn Walker declares that look and feel are not covered by copyright. This is super important. That decision prevented Apple from creating a monopoly with the interface that would dominate desktop computing. Soon enough, Apple's brief lead had vanished. Here's Steven Levy's take.
Steven Levy: [00:16:30] [00:17:00] They lost the lead not because of intellectual property theft on Microsoft's part, but because they were unable to consolidate their advantage in having a better operating system during the 1980s. They overcharged for their computers, quite frankly. So Microsoft had been developing Windows, starting with the mid-1980s, but it wasn't until Windows 3 in 1990, I believe, where they really came across with a version that was ready for prime time. Ready for masses of people to use. At that point is where Microsoft was able to migrate huge numbers of people, hundreds of millions, over to the graphical interface in a way that Apple had not been able to do, even though they had a really good operating system that they'd been using since 1984.
Saron Yitbarek: [00:17:30] [00:18:00] Microsoft now dominated the OS battlefield. They held 90% of the market, and standardized their OS across a whole variety of PCs. The future of the OS looked like it'd be controlled by Microsoft. And then? Well, at the 1997 Macworld Expo in Boston, you have an almost bankrupt Apple. A more humble Steve Jobs gets on stage, and starts talking about the importance of partnerships, and one in particular, he says, has become very, very meaningful. Their new partnership with Microsoft. Steve Jobs is calling for a détente, a ceasefire. Microsoft could have their enormous market share. If we didn't know better, we might think we were entering a period of peace in the kingdom. But when stakes are this high, it's never that simple. Just as Apple and Microsoft were finally retreating to their corners, pretty bruised from decades of fighting, along came a 21-year-old Finnish computer science student who, almost by accident, changed absolutely everything.

I'm Saron Yitbarek, and this is Command Line Heroes.
[00:18:30] While certain tech giants were busy bashing each other over proprietary software, there were new champions of free and open source software popping up like mushrooms. One of these champions was Richard Stallman. You're probably familiar with his work. He wanted free software and a free society. That's free as in free speech, not free as in free beer. Back in the '80s, Stallman saw that there was no viable alternative to pricey, proprietary OSs like UNIX. So, he decided to make his own. Stallman's Free Software Foundation developed GNU, which stood for GNU's Not UNIX, of course. It'd be an OS like UNIX, but free of all UNIX code, and free for users to share.

[00:19:00] [00:19:30] Just to give you a sense of how important that idea of free software was in the 1980s, the companies that owned the UNIX code at different points, AT&T Bell Laboratories and then UNIX System Laboratories, threatened lawsuits against anyone making their own OS after looking at UNIX source code. These guys were next-level proprietary. All those programmers were, in the words of the two companies, "mentally contaminated," because they'd seen UNIX code. In a famous court case between UNIX System Laboratories and Berkeley Software Design, it was argued that any functionally similar system, even though it didn't use the UNIX code itself, was a breach of copyright. Paul Jones was a developer at that time. He's now the director of the digital library ibiblio.org.
Paul Jones: [00:20:00] "Anyone who has seen any of the code is mentally contaminated" was their argument. That would have made almost anyone who had worked on a computer operating system that involved UNIX, in any computer science department, mentally contaminated. So one year at USENIX, we all got little white bar pins with red letters that say mentally contaminated, and we all wear those around to our own great pleasure, to show that we were sticking it to Bell because we were mentally contaminated.
Saron Yitbarek: [00:20:30] [00:21:00] The whole world was getting mentally contaminated. Staying pure, keeping things nice and proprietary, that old philosophy was getting less and less realistic. It was into this contaminated reality that one of history's biggest command line heroes was born, a boy in Finland named Linus Torvalds. If this is Star Wars, then Linus Torvalds is our Luke Skywalker. He was a mild-mannered grad student at the University of Helsinki. Talented, but lacking in grand visions. The classic reluctant hero. And, like any young hero, he was also frustrated. He wanted to take advantage of the 386 processor in his new PC. He wasn't impressed by the MS-DOS running on his IBM clone, and he couldn't afford the $5,000 price tag on the UNIX software that would have given him some programming freedom. The solution, which Torvalds crafted on MINIX in the spring of 1991, was an OS kernel called Linux. The kernel of an OS of his very own.
Steven Vaughan-Nichols: [00:21:30] Linus Torvalds really just wanted to have something to play with.

Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at ZDNet.com, and he's been writing about the business of technology since there was a business of technology.
Steven Vaughan-Nichols: [00:22:00] [00:22:30] There were a couple of operating systems like it at the time. The main one that he was concerned about was called MINIX. That was an operating system that was meant for students to learn how to build operating systems. Linus looked at that, and thought that it was interesting, but he wanted to build his own. So it really started as a do-it-yourself project at Helsinki. That's how it all started: just basically a big kid playing around and learning how to do things. But what was different in his case is that he was both bright enough and persistent enough, and also friendly enough to get all these other people working on it, and then he started seeing the project through. 27 years later, it is much, much bigger than he ever dreamed it would be.
Saron Yitbarek: [00:23:00] By the fall of 1991, Torvalds releases 10,000 lines of code, and people around the world start offering comments, then tweaks, additions, edits. That might seem totally normal to you as a developer today, but remember, at that time, open collaboration like that was a moral affront to the whole proprietary system that Microsoft, Apple, and IBM had done so well by. Then that openness gets enshrined. Torvalds places Linux under the GNU General Public License. The license that had kept Stallman's GNU system free was now going to keep Linux free, too. The importance of that move to incorporate the GPL, basically preserving the freedom and openness of the software forever, cannot be overstated. Vaughan-Nichols explains.

Steven Vaughan-Nichols: [00:23:30] In fact, by the license that it's under, which is called GPL version 2, you have to share the code if you're going to try to sell it or present it to the world, so that if you make an improvement, it's not enough just to give someone the improvement. You actually have to share with them the nuts and bolts of all those changes. Then they are adapted into Linux if they're good enough.
Saron Yitbarek: [00:24:00] That public approach proved massively attractive. Eric Raymond, one of the early evangelists of the movement, wrote in his famous essay, "Corporations like Microsoft and Apple have been trying to build software cathedrals, while Linux and its kind were offering a great babbling bazaar of different agendas and approaches. The bazaar was a lot more fun than the cathedral."

Stormy Peters: I think at the time, what attracted people is that they were going to be in control of their own world.

Saron Yitbarek: Stormy Peters is an industry analyst, and an advocate for free and open source software.
Stormy Peters: [00:24:30] [00:25:00] When open source software first came out, the OS was all proprietary. You couldn't even add a printer without going through proprietary software. You couldn't add a headset. You couldn't develop a small hardware device of your own, and make it work with your laptop. You couldn't even put in a DVD and copy it, because you couldn't change the software. Even if you owned the DVD, you couldn't copy it. You had no control over this hardware/software system that you'd bought. You couldn't create anything new and bigger and better out of it. That's why an open source operating system was so important at the beginning. We needed an open source collaborative environment where we could build bigger and better things.
Saron Yitbarek: [00:25:30] Mind you, Linux isn't a purely egalitarian utopia. Linus Torvalds doesn't approve everything that goes into the kernel, but he does preside over its changes. He's installed a dozen or so people below him to manage different parts of the kernel. They, in turn, trust people under themselves, and so on, in a pyramid of trust. Changes might come from anywhere, but they're all judged and curated.

[00:26:00] It is amazing, though, to think how humble, and kind of random, Linus' DIY project was to begin with. He didn't have a clue he was the Luke Skywalker figure in all this. He was just 21, and had been programming half his life. But this was the first time the silo opened up, and people started giving him feedback. Dozens, then hundreds, and thousands of contributors. With crowdsourcing like that, it doesn't take long before Linux starts growing. Really growing. It even finally gets noticed by Microsoft. Their CEO, Steve Ballmer, called Linux, and I quote, "A cancer that attaches itself in an intellectual property sense to everything it touches." Steven Levy describes where Ballmer was coming from.
Steven Levy: [00:26:30] Once Microsoft really solidified its monopoly, and indeed it was judged in federal court as a monopoly, anything that could be a threat to that, they reacted very strongly to. So of course, the idea that free software would be emerging, when they were charging for software, they saw as a cancer. They tried to come up with an intellectual property theory about why this was going to be bad for consumers.
Saron Yitbarek: [00:27:00] Linux was spreading, and Microsoft was worried. By 2006, Linux would become the second most widely used operating system after Windows, with about 5,000 developers working on it worldwide. Five thousand. Remember that memo that Bill Gates sent to Apple, the one where he's lecturing them about the importance of partnering with other people? Turns out, open source would take that idea of partnerships to a whole new level, in a way Bill Gates would have never foreseen.

[00:27:30] [00:28:00] We've been talking about these huge battles for the OS, but so far, the unsung heroes, the developers, haven't fully made it onto the battlefield. That changes next time, on Command Line Heroes. In episode two, part two of the OS wars, it's the rise of Linux. Businesses wake up, and realize the importance of developers. These open source rebels grow stronger, and the battlefield shifts from the desktop to the server room. There's corporate espionage, new heroes, and the unlikeliest change of heart in tech history. It all comes to a head in the concluding half of the OS wars.

[00:28:30] To get new episodes of Command Line Heroes delivered automatically for free, make sure you hit subscribe on Apple podcasts, Spotify, Google Play, or however you get your podcasts. Over the rest of the season, we're visiting the latest battlefields, the up-for-grabs territories where the next generation of Command Line Heroes are making their mark. For more info, check us out at redhat.com/commandlineheroes. I'm Saron Yitbarek. Until next time, keep on coding.
--------------------------------------------------------------------------------

via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1
作者:[redhat][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.redhat.com
[b]: https://github.com/lujun9972
@ -0,0 +1,162 @@
[#]: collector: (lujun9972)
[#]: translator: (lujun9972)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command Line Heroes: Season 1: OS Wars)
[#]: via: (https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux)
[#]: author: (redhat https://www.redhat.com)

Command Line Heroes: Season 1: OS Wars (Part 2: Rise of Linux)
======
Saron Yitbarek: Is this thing on? Cue the epic Star Wars crawl, and, action.

Voice Actor: [00:00:30] Episode Two: Rise of Linux®. The empire of Microsoft controls 90% of desktop users. Complete standardization of operating systems seems assured. However, the advent of the internet swerves the focus of the war from the desktop toward enterprise, where all businesses scramble to claim a server of their own. Meanwhile, an unlikely hero arises from amongst the band of open source rebels. Linus Torvalds, headstrong, bespectacled, releases his Linux system free of charge. Microsoft reels — and regroups.
Saron Yitbarek: [00:01:00] Oh, the nerd in me just loves that. So, where were we? Last time, Apple and Microsoft were trading blows, trying to dominate in a war over desktop users. By the end of episode one, we saw Microsoft claiming most of the prize. Soon, the entire landscape went through a seismic upheaval. That's all because of the rise of the internet and the army of developers that rose with it. The internet moves the battlefield from PC users in their home offices to giant business clients with hundreds of servers.

[00:01:30] This is a huge resource shift. Not only does every company out there wanting to remain relevant suddenly have to pay for server space and get a website built — they also have to integrate software to track resources, monitor databases, et cetera, et cetera. You're going to need a lot of developers to help you with that. At least, back then you did.

In part two of the OS wars, we'll see how that enormous shift in priorities, and the work of a few open source rebels like Linus Torvalds and Richard Stallman, managed to strike fear in the heart of Microsoft, and an entire software industry.
[00:02:00] I'm Saron Yitbarek and you're listening to Command Line Heroes, an original podcast from Red Hat. In each episode, we're bringing you stories about the people who transform technology from the command line up.

[00:02:30] Okay. Imagine for a second that you're Microsoft in 1991. You're feeling pretty good, right? Pretty confident. Assured global domination feels nice. You've mastered the art of partnering with other businesses, but you're still pretty much cutting out the developers, programmers, and sys admins that are the real foot soldiers out there. There is this Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to put out versions of Linux, this OS kernel that they're duct taping together.

[00:03:00] If you're Microsoft, frankly, you're not too concerned about Linux or even about open source in general, but eventually, the sheer size of Linux gets so big that it becomes impossible for Microsoft not to notice. The first version comes out in 1991 and it's got maybe 10,000 lines of code. A decade later, there will be three million lines of code. In case you're wondering, today it's at 20 million.
[00:03:30] For a moment, let's stay in the early '90s. Linux hasn't yet become the behemoth we know now. It's just this strangely viral OS that's creeping across the planet, and the geeks and hackers of the world are falling in love with it. I was too young in those early days, but I sort of wish I'd been there. At that time, discovering Linux was like gaining access to a secret society. Programmers would share the Linux CD set with friends the same way other people would share mixtapes of underground music.

Developer Tristram Oaten [00:03:40] tells the story of how he first encountered Linux when he was 16 years old.
Tristram Oaten: [00:04:00] We went on a scuba diving holiday, my family and I, to Hurghada, which is on the Red Sea. Beautiful place, highly recommend it. The first day, I drank the tap water. Probably, my mom told me not to. I was really sick the whole week — didn't leave the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, this thing that I'd heard about and was giving it a try. There were no extra apps, just what came on the eight CDs. By necessity, all I had to do this whole week was to get to grips with this alien system. I read man pages, played around with the terminal. I remember not knowing the difference between a single dot, meaning the current directory, and two dots, meaning the previous directory.

[00:04:30] I had no clue. I must have made so many mistakes, but slowly, over the course of this forcible solitude, I broke through this barrier and started to understand and figure out what this command line thing was all about. By the end of the holiday, I hadn't seen the pyramids, the Nile, or any Egyptian sites, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest is history.
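The dot and double-dot distinction Tristram describes is easy to try in any POSIX shell. A minimal sketch (the `/tmp/demo` directories are hypothetical scratch paths, not from the episode; strictly speaking, `..` is the parent directory rather than the previously visited one):

```shell
# '.' names the directory you are in; '..' names its parent.
mkdir -p /tmp/demo/child    # create hypothetical scratch directories
cd /tmp/demo/child
pwd                         # prints /tmp/demo/child
cd .                        # '.' is the current directory, so nothing moves
pwd                         # still /tmp/demo/child
cd ..                       # '..' climbs one level to the parent
pwd                         # prints /tmp/demo
```

The same names turn up inside paths, too, as in `./script.sh` or `../config`, which is part of why a week alone with a terminal makes the distinction stick.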
|
||||
|
||||
Saron Yitbarek: You can hear some variation on that story from a lot of people. Getting access to that Linux command line was a transformative experience.
|
||||
|
||||
David Cantrell: This thing gave me the source code. I was like, "That's amazing."
|
||||
|
||||
Saron Yitbarek: We're at a 2017 Linux developers conference called Flock to Fedora.
|
||||
|
||||
David Cantrell: ... very appealing. I felt like I had more control over the system and it just drew me in more and more. From there, I guess, after my first Linux kernel compile in 1995, I was hooked, so, yeah.
Saron Yitbarek: Developers David Cantrell and Joe Brockmeier.
Joe Brockmeier: I was going through the cheap software and found a four-CD set of Slackware Linux. It sounded really exciting and interesting so I took it home, installed it on a second computer, started playing with it, and really got excited about two things. One was, I was excited not to be running Windows, and I was excited by the open source nature of Linux.
Saron Yitbarek: [00:06:00] That access to the command line was, in some ways, always there. Decades before open source really took off, there was always a desire to have complete control, at least among developers. Go way back to a time before the OS wars, before Apple and Microsoft were fighting over their GUIs. There were command line heroes then, too. Professor Paul Jones is the director of the online library ibiblio.org. He worked as a developer during those early days.
Paul Jones: [00:07:00] The internet, by its nature, at that time, was less client server, totally, and more peer to peer. We're talking about, really, some sort of VAX to VAX, some sort of scientific workstation, the scientific workstation. That doesn't mean that client and server relationships and applications weren't there, but it does mean that the original design was to think of how to do peer-to-peer things, the opposite of what IBM had been doing, in which they had dumb terminals that had only enough intelligence to manage the user interface, but not enough intelligence to actually let you do anything in the terminal that would expose anything to it.
Saron Yitbarek: As popular as GUI was becoming among casual users, there was always a pull in the opposite direction for the engineers and developers. Before Linux, in the 1970s and 80s, that resistance was there, with Emacs and GNU. With Stallman's Free Software Foundation, certain folks were always begging for access to the command line, but it was Linux in the 1990s that delivered like no other.
[00:07:30] The early lovers of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.
You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.
Steven Vaughan-Nichols: By 1998, things have changed.
Saron Yitbarek: Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing for decades about the business side of technology. He describes how Linux slowly became more and more popular until the number of volunteer contributors was way larger than the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, and maybe that's why Microsoft ignored them at first. Where Linux did shine was in the server room. When businesses went online, each one required a unique programming solution for their needs.
[00:08:30] Windows NT comes out in 1993 and it's competing with other server operating systems, but lots of developers are thinking, "Why am I going to buy an AIX box or a large Windows box when I could set up a cheap Linux-based system with Apache?" Point is, Linux code started seeping into just about everything online.
Steven Vaughan-Nichols: [00:09:00] Microsoft realizes that Linux, quite to their surprise, is actually beginning to get some of the business, not so much on the desktop, but on business servers. As a result of that, they start a campaign, what we like to call FUD — fear, uncertainty and doubt — saying, "Oh this Linux stuff, it's really not that good. It's not very reliable. You can't trust it with anything."
Saron Yitbarek: [00:09:30] That soft propaganda style attack goes on for a while. Microsoft wasn't the only one getting nervous about Linux, either. It was really a whole industry versus that weird new guy. For example, anyone with a stake in UNIX was likely to see Linux as a usurper. Famously, the SCO Group, which had produced a version of UNIX, waged lawsuits for over a decade to try and stop the spread of Linux. SCO ultimately failed and went bankrupt. Meanwhile, Microsoft kept searching for their opening. They were a company that needed to make a move. It just wasn't clear what that move was going to be.
Steven Vaughan-Nichols: [00:10:30] What will make Microsoft really concerned about it is the next year, in 2000, IBM will announce that they will invest a billion dollars in Linux in 2001. Now, IBM is not really in the PC business anymore. They're not out yet, but they're going in that direction, but what they are doing is they see Linux as being the future of servers and mainframe computers, which, spoiler alert, IBM was correct. Linux is going to dominate the server world.
Saron Yitbarek: This was no longer just about a bunch of hackers loving their Jedi-like control of the command line. This was about the money side working in Linux's favor in a major way. John "Mad Dog" Hall, the executive director of Linux International, has a story that explains why that was. We reached him by phone.
John Hall: [00:11:30] A friend of mine named Dirk Holden [00:10:56] was a German systems administrator at Deutsche Bank in Germany, and he also worked in the graphics projects for the early days of the X Windows system for PCs. I visited him one day at the bank, and I said, "Dirk, you have 3,000 servers here at the bank and you use Linux. Why don't you use Microsoft NT?" He looked at me and he said, "Yes, I have 3,000 servers, and if I used Microsoft Windows NT, I would need 2,999 systems administrators." He says, "With Linux, I only need four." That was the perfect answer.
Saron Yitbarek: [00:12:00] The thing programmers are getting obsessed with also happens to be deeply attractive to big business. Some businesses were wary. The FUD was having an effect. They heard open source and thought, "Open. That doesn't sound solid. It's going to be chaotic, full of bugs," but as that bank manager pointed out, money has a funny way of convincing people to get over their hangups. Even little businesses, all of which needed websites, were coming on board. The cost of working with a cheap Linux system over some expensive proprietary option, there was really no comparison. If you were a shop hiring a pro to build your website, you wanted them to use Linux.
[00:12:30] Fast forward a few years. Linux runs everybody's website. Linux has conquered the server world, and then, along comes the smartphone. Apple and their iPhones take a sizeable share of the market, of course, and Microsoft hoped to get in on that, except, surprise, Linux was there, too, ready and raring to go.
Author and journalist James Allworth.
James Allworth: [00:13:00] There was certainly room for a second player, and that could well have been Microsoft, but for the fact of Android, which was fundamentally based on Linux, and because Android, famously acquired by Google, and now running a majority of the world's smartphones, Google built it on top of that. They were able to start with a very sophisticated operating system and a cost basis of zero. They managed to pull it off, and it ended up locking Microsoft out of the next generation of devices, by and large, at least from an operating system perspective.
Saron Yitbarek: [00:13:30] The ground was breaking up, big time, and Microsoft was in danger of falling into the cracks. John Gossman is the chief architect on the Azure team at Microsoft. He remembers the confusion that gripped the company at that time.
John Gossman: [00:14:00] Like a lot of companies, Microsoft was very concerned about IP pollution. They thought that if you let developers use open source they would likely just copy and paste bits of code into some product and then some sort of a viral license might take effect that ... They were also very confused, I think, it was just culturally, a lot of companies, Microsoft included, were confused on the difference between what open source development meant and what the business model was. There was this idea that open source meant that all your software was free and people were never going to pay anything.
Saron Yitbarek: [00:14:30] Anybody invested in the old, proprietary model of software is going to feel threatened by what's happening here. When you threaten an enormous company like Microsoft, yeah, you can bet they're going to react. It makes sense they were pushing all that FUD — fear, uncertainty and doubt. At the time, an “us versus them” attitude was pretty much how business worked. If they'd been any other company, though, they might have kept that old grudge, that old thinking, but then, in 2013, everything changes.
[00:15:00] Microsoft's cloud computing service, Azure, goes online and, shockingly, it offers Linux virtual machines from day one. Steve Ballmer, the CEO who called Linux a cancer, he's out, and a new forward-thinking CEO, Satya Nadella, has been brought in.
John Gossman: Satya has a different attitude. He's another generation. He's a generation younger than Paul and Bill and Steve were, and had a different perspective on open source.
Saron Yitbarek: John Gossman, again, from Microsoft's Azure team.
John Gossman: [00:16:00] We added Linux support into Azure about four years ago, and that was for very pragmatic reasons. If you go to any enterprise customer, you will find that they are not trying to decide whether to use Windows or to use Linux or to use .net or to use Java™. They made all those decisions a long time ago — about 15 years or so ago, there was some of this argument. Now, every company that I have ever seen has a mix of Linux and Java and Windows and .net and SQL Server and Oracle and MySQL — proprietary source code-based products and open source code products.
If you're going to operate a cloud and you're going to allow and enable those companies to run their businesses on the cloud, you simply cannot tell them, "You can use this software but you can't use this software."
Saron Yitbarek: [00:16:30] That's exactly the philosophy that Satya Nadella adopted. In the fall of 2014, he gets up on stage and he wants to get across one big, fat point. Microsoft loves Linux. He goes on to say that 20% of Azure is already Linux and that Microsoft will always have first-class support for Linux distros. There's not even a whiff of that old antagonism toward open source.
To drive the point home, there's literally a giant sign behind him that reads, "Microsoft hearts Linux." Aww. For some of us, that turnaround was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, a tech journalist and author.
Steven Levy: [00:17:30] When you're playing a football game and the turf becomes really slick, maybe you switch to a different kind of footwear in order to play on that turf. That's what they were doing. They can't deny reality and there are smart people there so they had to realize that this is the way the world is and put aside what they said earlier, even though they might be a little embarrassed at their earlier statements, but it would be crazy to let their statements about how horrible open source was earlier, affect their smart decisions now.
Saron Yitbarek: [00:18:00] Microsoft swallowed its pride in a big way. You might remember that Apple, after years of splendid isolation, finally shifted toward a partnership with Microsoft. Now it was Microsoft's turn to do a 180. After years of battling the open source approach, they were reinventing themselves. It was change or perish. Steven Vaughan-Nichols.
Steven Vaughan-Nichols: [00:18:30] Even a company the size of Microsoft simply can't compete with the thousands of open source developers working on all these other major projects, including Linux. They were very loath to do so for a long time. The former Microsoft CEO, Steve Ballmer, hated Linux with a passion. Because of its GPL license, it was a cancer, but once Ballmer was finally shown the door, the new Microsoft leadership said, "This is like trying to order the tide to stop coming in. The tide is going to keep coming in. We should work with Linux, not against it."
Saron Yitbarek: [00:19:00] Really, one of the big wins in the history of online tech is the way Microsoft was able to make this pivot, when they finally decided to. Of course, older, hardcore Linux supporters were pretty skeptical when Microsoft showed up at the open source table. They weren't sure if they could embrace these guys, but, as Vaughan-Nichols points out, today's Microsoft simply is not your mom and dad's Microsoft.
Steven Vaughan-Nichols: [00:19:30] Microsoft 2017 is not Steve Ballmer's Microsoft, nor is it Bill Gates' Microsoft. It's an entirely different company with a very different approach and, again, once you start using open source, it's not like you can really pull back. Open source has devoured the entire technology world. People who have never heard of Linux as such, don't know it, but every time they're on Facebook, they're running Linux. Every time you do a Google search, you're running Linux.
[00:20:00] Every time you do anything with your Android phone, you're running Linux again. It literally is everywhere, and Microsoft can't stop that, and thinking that Microsoft can somehow take it all over, I think is naïve.
Saron Yitbarek: [00:20:30] Open source supporters might have been worrying about Microsoft coming in like a wolf in the flock, but the truth is, the very nature of open source software protects it from total domination. No single company can own Linux and control it in any specific way. Greg Kroah-Hartman is a fellow at the Linux Foundation.
Greg Kroah-Hartman: Every company and every individual contributes to Linux in a selfish manner. They're doing so because they want to solve a problem that they have, be it hardware isn't working, or they want to add a new feature to do something else, or want to take it in a direction that they'll build that they can use for their product. That's great, because then everybody benefits from that because they're releasing the code back, so that everybody can use it. It's because of that selfishness that all companies and all people have, everybody benefits.
Saron Yitbarek: [00:21:30] Microsoft has realized that in the coming cloud wars, fighting Linux would be like going to war with, well, a cloud. Linux and open source aren't the enemy, they're the atmosphere. Today, Microsoft has joined the Linux Foundation as a platinum member. They became the number one contributor to open source on GitHub. In September, 2017, they even joined the Open Source Initiative. These days, Microsoft releases a lot of its code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .net. At first, they didn't really think they'd get much back.
John Gossman: [00:22:00] We didn't count on contributions from the community, and yet, three years in, over 50 percent of the contributions to the .net framework libraries, now, are coming from outside of Microsoft. This includes big pieces of code. Samsung has contributed ARM support to .net. Intel and ARM and a couple other chip people have contributed code generation specific for their processors to the .net framework, as well as a surprising number of fixes, performance improvements, and stuff — from just individual contributors to the community.
Saron Yitbarek: Up until a few years ago, the Microsoft we have today, this open Microsoft, would have been unthinkable.
[00:23:00] I'm Saron Yitbarek, and this is Command Line Heroes. Okay, we've seen titanic battles for the love of millions of desktop users. We've seen open source software creep up behind the proprietary titans, and nab huge market share. We've seen fleets of command line heroes transform the programming landscape into the one handed down to people like me and you. Today, big business is absorbing open source software, and through it all, everybody is still borrowing from everybody.
[00:23:30] In the tech wild west, it's always been that way. Apple gets inspired by Xerox, Microsoft gets inspired by Apple, Linux gets inspired by UNIX. Evolve, borrow, constantly grow. In David and Goliath terms, open source software is no longer a David, but, you know what? It's not even Goliath, either. Open source has transcended. It's become the battlefield that others fight on. As the open source approach becomes inevitable, new wars, wars that are fought in the cloud, wars that are fought on the open source battlefield, are ramping up.
Here's author Steven Levy.
Steven Levy: [00:24:00] Basically, right now, we have four or five companies, if you count Microsoft, that in various ways are fighting to be the platform for all we do, for artificial intelligence, say. You see wars between intelligent assistants, and guess what? Apple has an intelligent assistant, Siri. Microsoft has one, Cortana. Google has the Google Assistant. Samsung has an intelligent assistant. Amazon has one, Alexa. We see these battles shifting to different areas, there. Maybe, you could say, the hottest one would be whose AI platform is going to control all the stuff in our lives, and those five companies are all competing for that.
Saron Yitbarek: If you're looking for another rebel that's going to sneak up behind Facebook or Google or Amazon and blindside them the way Linux blindsided Microsoft, you might be looking a long time, because as author James Allworth points out, being a true rebel is only getting harder and harder.
James Allworth: [00:25:30] Scale's always been an advantage but the nature of scale advantages are almost ... Whereas, I think previously they were more linear in nature, now it's more exponential in nature, and so, once you start to get out in front with something like this, it becomes harder and harder for a new player to come in and catch up. I think this is true of the internet era in general, whether it's scale like that or the importance and advantages that data bestow on an organization in terms of its ability to compete. Once you get out in front, you attract more customers, and then that gives you more data and that enables you to do an even better job, and then, why on earth would you want to go with the number two player, because they're so far behind? I think it's going to be no different in cloud.
Saron Yitbarek: [00:26:00] This story began with singular heroes like Steve Jobs and Bill Gates, but the progress of technology has taken on a crowdsourced, organic feel. I think it's telling that our open source hero, Linus Torvalds, didn't even have a real plan when he first invented the Linux kernel. He was a brilliant young developer for sure, but he was also like a single drop of water at the very front of a tidal wave. The revolution was inevitable. It's been estimated that for a proprietary company to create a Linux distribution in their old-fashioned, proprietary way, it would cost them well over $10 billion. That points to the power of open source.
[00:26:30] In the end, it's not something that a proprietary model is going to compete with. Successful companies have to remain open. That's the big, ultimate lesson in all this. Something else to keep in mind: When we're wired together, our capacity to grow and build on what we've already accomplished becomes limitless. As big as these companies get, we don't have to sit around waiting for them to give us something better. Think about the new developer who learns to code for the sheer joy of creating, the mom who decides that if nobody's going to build what she needs, then she'll build it herself.
Wherever tomorrow's great programmers come from, they're always going to have the capacity to build the next big thing, so long as there's access to the command line.
[00:27:30] That's it for our two-part tale on the OS wars that shaped our digital lives. The struggle for dominance moved from the desktop to the server room, and ultimately into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open. Listen, I know, there are a hundred other heroes we didn't have space for in this history trip, so drop us a line. Share your story. Redhat.com/commandlineheroes. I'm listening.
We're spending the rest of the season learning what today's heroes are creating, and what battles they're going through to bring their creations to life. Come back for more tales from the epic front lines of programming. We drop a new episode every two weeks. In a couple weeks' time, we bring you episode three: the Agile Revolution.
[00:28:00] Command Line Heroes is an original podcast from Red Hat. To get new episodes delivered automatically for free, make sure to subscribe to the show. Just search for “Command Line Heroes” in Apple Podcasts, Spotify, Google Play, and pretty much everywhere else you can find podcasts. Then, hit “subscribe” so you will be the first to know when new episodes are available.
I'm Saron Yitbarek. Thanks for listening. Keep on coding.
--------------------------------------------------------------------------------
via: https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux
作者:[redhat][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.redhat.com
[b]: https://github.com/lujun9972
@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to transition into a career as a DevOps engineer)
[#]: via: (https://opensource.com/article/19/7/how-transition-career-devops-engineer)
[#]: author: (Conor Delanbanque https://opensource.com/users/cdelanbanque)
How to transition into a career as a DevOps engineer
======
Whether you're a recent college graduate or a seasoned IT pro looking to advance your career, these tips can help you get hired as a DevOps engineer.
![technical resume for hiring new talent][1]
DevOps engineering is a hot career with many rewards. Whether you're looking for your first job after graduating or seeking an opportunity to reskill while leveraging your prior industry experience, this guide should help you take the right steps to become a [DevOps engineer][2].
### Immerse yourself
Begin by learning the fundamentals, practices, and methodologies of [DevOps][3]. Understand the "why" behind DevOps before jumping into the tools. A DevOps engineer's main goal is to increase speed and maintain or improve quality across the entire software development lifecycle (SDLC) to provide maximum business value. Read articles, watch YouTube videos, and go to local Meetup groups or conferences—become part of the welcoming DevOps community, where you'll learn from the mistakes and successes of those who came before you.
### Consider your background
If you have prior experience working in technology, such as a software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insights and useful experience for your future role as a DevOps engineer. If you're just starting your career after finishing your degree in computer science or any other STEM field, you have some of the basic stepping-stones you'll need in this transition.
The DevOps engineer role covers a broad spectrum of responsibilities. Following are the three ways enterprises are most likely to use them:
* **DevOps engineers with a dev bias** work in a software development role building applications. They leverage continuous integration/continuous delivery (CI/CD), shared repositories, cloud, and containers as part of their everyday work, but they are not necessarily responsible for building or implementing tooling. They understand infrastructure and, in a mature environment, will be able to push their own code into production.
* **DevOps engineers with an ops bias** could be compared to systems engineers or systems administrators. They understand software development but do not spend the core of their day building applications. Instead, they are more likely to be supporting software development teams to automate manual processes and increase efficiencies across human and technology systems. This could mean breaking down legacy code and using less cumbersome automation scripts to run the same commands, or it could mean installing, configuring, or maintaining infrastructure and tooling. They ensure the right tools are installed and available for any teams that need them. They also help to enable teams by teaching them how to leverage CI/CD and other DevOps practices.
* **Site reliability engineers (SRE)** are like software engineers that solve operations and infrastructure problems. SREs focus on creating scalable, highly available, and reliable software systems.
In the ideal world, DevOps engineers will understand all of these areas; this is typical at mature technology companies. However, DevOps roles at top-tier banks and many Fortune 500 companies usually have biases towards dev or ops.
### Technologies to learn
DevOps engineers need to know a wide spectrum of technologies to do their jobs effectively. Whatever your background, start with the fundamental technologies you'll need to use and understand as a DevOps engineer.
#### Operating systems
The operating system is where everything runs, and having fundamental knowledge is important. [Linux is the operating system][4] you'll most likely use daily, although some organizations use Windows. To get started, you can install Linux at home, where you'll be able to break as much as you want and learn along the way.
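A few read-only commands are a gentle way to start exploring that home install before you deliberately break anything. This is just one possible first session, not a prescribed checklist:

```shell
# Safe, read-only commands for getting to know a Linux system:
uname -sr             # kernel name and release
id                    # your user, your groups, and their numeric IDs
df -h /               # how full the root filesystem is
ls /etc | head -n 5   # a first glance at system configuration files
```

Reading the man page for each of these (`man df`, and so on) is a good way to build the habit of consulting documentation.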
#### Scripting
Next, pick a language to learn for scripting purposes. There are many to choose from, including Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. I suggest [starting with Python][5]; it's one of the most popular for a reason, as it's relatively easy to learn and interpret. Python is often written following the fundamentals of object-oriented programming (OOP) and can be used for web development, software development, and creating desktop GUI and business applications.
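To make the scripting advice concrete, here is a hypothetical first automation script in Bash (the file name and the task are invented for illustration, not taken from the article): it reports the largest entries under a directory, the kind of small, practical tool DevOps engineers write constantly.

```shell
#!/usr/bin/env bash
# biggest.sh - report the five largest entries under a directory.
# Usage: ./biggest.sh [directory]   (defaults to the current directory)
set -euo pipefail

target="${1:-.}"
echo "Largest entries under: $target"
# du sizes each entry, sort -rh orders human-readable sizes descending,
# head keeps the top five.
du -sh "$target"/* 2>/dev/null | sort -rh | head -n 5
```

Rewriting the same tool later in Python is a good exercise for comparing the two languages.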
#### Cloud
After [Linux][4] and [Python][5], I think the next thing to study is cloud computing. Infrastructure is no longer left to the "operations guys," so you'll need some exposure to a cloud platform such as Amazon Web Services, Azure, or Google Cloud Platform. I'd start with AWS, as it has an extensive collection of free learning tools that can take you down any track from using AWS as a developer, to operations, and even business-facing components. In fact, you might become overwhelmed by how much is on offer. Consider starting with EC2, S3, and VPC, and see where you want to go from there.
#### Programming languages
If you come to DevOps with a passion for software development, keep on improving your programming skills. Some good and commonly used languages in DevOps are the same as those you would use for scripting: Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. You should also become familiar with Jenkins and Git/GitHub, which you'll use frequently within the CI/CD process.
#### Containers
Finally, start learning about [containerizing code][6] using tools such as Docker and orchestration platforms such as Kubernetes. There are extensive learning resources available for free online, and most cities will have local Meetup groups where you can learn from experienced people in a friendly environment (with pizza and beer!).
#### What else?
If you have less experience in development, you can still [get involved in DevOps][3] by applying your passion for automation, increasing efficiency, collaborating with others, and improving your work. I would still suggest learning the tooling described above, but with less emphasis on the coding/scripting languages. It will be useful to learn about Infrastructure-as-a-Service, Platform-as-a-Service, cloud platforms, and Linux. You'll likely be setting up the tools and learning how to build systems that are resilient and fault-tolerant, leveraging them while writing code.
### Finding a DevOps job
The job search process will differ depending on whether you've been working in tech and are moving into DevOps or you're a recent graduate beginning your career.
#### If you're already working in technology
If you're transitioning from one tech field into a DevOps role, start by exploring opportunities within your current company. Can you reskill by working with another team? Try to shadow other team members, ask for advice, and acquire new skills without leaving your current job. If this isn't possible, you may need to move to another company. If you can learn some of the practices, tools, and technologies listed above, you'll be in a good position to demonstrate relevant knowledge during interviews. The key is to be honest and not set yourself up for failure. Most hiring managers understand that you don't know all the answers; if you can show what you've been learning and explain that you're open to learning more, you should have a good chance to land a DevOps job.
#### If you're starting your career
Apply to open opportunities at companies hiring junior DevOps engineers. Unfortunately, many companies say they're looking for more experience and recommend you re-apply when you've gained some. It's the typical, frustrating scenario of "we want more experience," but nobody seems willing to give you the first chance.
It's not all gloomy though; some companies focus on training and upskilling graduates directly out of the university. For example, [MThree][7], where I work, hires fresh graduates and trains them for eight weeks. When they complete training, participants have solid exposure to the entire SDLC and a good understanding of how it applies in a Fortune 500 environment. Graduates are hired as junior DevOps engineers with MThree's client companies—MThree pays their full-time salary and benefits for the first 18 to 24 months, after which they join the client as direct employees. This is a great way to bridge the gap from the university into a technology career.
### Summary
There are many ways to transition to become a DevOps engineer. It is a very rewarding career route that will likely keep you engaged and challenged—and increase your earning potential.
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/how-transition-career-devops-engineer
|
||||
|
||||
作者:[Conor Delanbanque][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hiring_talent_resume_job_career.png?itok=Ci_ulYAH (technical resume for hiring new talent)
|
||||
[2]: https://opensource.com/article/19/7/devops-vs-sysadmin
|
||||
[3]: https://opensource.com/resources/devops
|
||||
[4]: https://opensource.com/resources/linux
|
||||
[5]: https://opensource.com/resources/python
|
||||
[6]: https://opensource.com/article/18/8/sysadmins-guide-containers
|
||||
[7]: https://www.mthreealumni.com/
|
@ -0,0 +1,64 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others)
[#]: via: (https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Extreme's acquisitions have prepped it to better battle Cisco, Arista, HPE, others
======

Extreme has bought cloud, SD-WAN and data center technologies that make it more prepared to take on its toughest competitors.

Extreme Networks has in recent months restyled the company with data-center networking technology acquisitions and upgrades, but now comes the hard part – executing with enterprise customers and effectively competing with the likes of Cisco, VMware, Arista, Juniper, HPE and others.

The company’s latest and perhaps most significant long-term move was closing the [acquisition of wireless-networking vendor Aerohive][1] for about $210 million. The deal brings Extreme Aerohive’s wireless-networking technology – including its WiFi 6 gear, SD-WAN software and cloud-management services.

**More about edge networking**

* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]

With the Aerohive technology, Extreme says customers and partners will be able to mix and match a broader array of software, hardware, and services to create networks that support their unique needs and that can be managed and automated from the enterprise edge to the cloud.

The Aerohive buy is just the latest in a string of acquisitions that have reshaped the company. In the past few years, the company has acquired networking and data-center technology from Avaya and Brocade, and it bought wireless player Zebra Technologies in 2016 for $55 million.

While it has been a battle to integrate those acquisitions – particularly Brocade and Avaya – and get solid sales footing for them, the company says those challenges are behind it and that the Aerohive integration will be much smoother.

“After scaling Extreme’s business to $1B in revenue [for FY 2019, which ended in June] and expanding our portfolio to include end-to-end enterprise networking solutions, we are now taking the next step to transform our business to add sustainable, subscription-oriented cloud-based solutions that will enable us to drive recurring revenue and improved cash-flow generation,” said Extreme CEO Ed Meyercord at the firm’s [FY 19 financial analysts][5] call.

The strategy of moving toward software-oriented, cloud-based revenue generation and technology development is brand new for Extreme. The company says it expects to generate as much as 30 percent of revenue from recurring charges in the near future. The tactic was enabled in large part by the Aerohive buy, which doubled Extreme’s customer base to 60,000 and its sales partners to 11,000, and whose revenues are recurring and cloud-based. The acquisition also made Extreme the number-three enterprise wireless LAN company behind Cisco and HPE/Aruba.

“We are going to take this Aerohive system and expand across our entire portfolio and use it to deliver common, simplified software with feature packages for on-premises or in-cloud based on customers' use case,” added Norman Rice, Extreme’s Chief Marketing, Development and Product Operations Officer. “We have never really been in any cloud conversations before so for us this will be a major add.”

Indeed, the Aerohive move is key for the company’s future, analysts say.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432173/extremes-acquisitions-have-prepped-it-to-better-battle-cisco-arista-hpe-others.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3405440/extreme-targets-cloud-services-sd-wan-wifi-6-with-210m-aerohive-grab.html
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://seekingalpha.com/article/4279527-extreme-networks-inc-extr-ceo-ed-meyercord-q4-2019-results-earnings-call-transcript
@ -0,0 +1,71 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia rises to the need for natural language processing)
[#]: via: (https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Nvidia rises to the need for natural language processing
======

As the demand for natural language processing grows for chatbots and AI-powered interactions, more companies will need systems that can provide it. Nvidia says its platform can handle it.

![andy.brandon50 \(CC BY-SA 2.0\)][1]

Nvidia is boasting of a breakthrough in conversational natural language processing (NLP) training and inference, enabling more complex interchanges between customers and chatbots with immediate responses.

The need for such technology is expected to grow: the number of digital voice assistants alone is expected to climb from 2.5 billion to 8 billion within the next four years, according to Juniper Research, while Gartner predicts that by 2021, 15% of all customer service interactions will be handled completely by AI, an increase of 400% from 2017.

The company said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in a little over 2 milliseconds, making it possible “for developers to use state-of-the-art language understanding for large-scale applications.”

**[ Also read: [What is quantum computing (and why enterprises should care)][2] ]**

BERT, or Bidirectional Encoder Representations from Transformers, is a Google-developed AI language model that many developers say beats human accuracy in some performance evaluations. It’s all discussed [here][3].

### Nvidia sets natural language processing records

All told, Nvidia is claiming three NLP records:

**1\. Training:** Running the largest version of the BERT language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes. A single DGX-2 system, which is about the size of a tower PC, trained BERT-Large in 2.8 days.
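A quick back-of-envelope check of those training figures. This is only a sketch using the numbers quoted above; the 16-GPU count per DGX-2 system follows from the 92-system/1,472-GPU totals Nvidia gives:

```python
# Scaling check for the BERT-Large training figures quoted above.
single_node_hours = 2.8 * 24   # one DGX-2 (16 V100 GPUs): 2.8 days
superpod_hours = 53 / 60       # DGX SuperPOD (1,472 V100 GPUs): 53 minutes

speedup = single_node_hours / superpod_hours   # ~76x faster
gpu_ratio = 1472 / 16                          # 92x more GPUs

# Fraction of ideal linear scaling actually achieved
efficiency = speedup / gpu_ratio
print(f"{speedup:.0f}x speedup at {efficiency:.0%} scaling efficiency")
```

That works out to roughly 80-plus percent of perfect linear scaling, which is strong for a 92-node distributed training job.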

“The quicker we can train a model, the more models we can train, the more we learn about the problem, and the better the results get,” said Bryan Catanzaro, vice president of applied deep learning research, in a statement.

**2\. Inference:** Using Nvidia T4 GPUs on its TensorRT deep learning inference platform, Nvidia performed inference on the BERT-Base SQuAD dataset in 2.2 milliseconds, well under the 10-millisecond processing threshold for many real-time applications and far ahead of the 40 milliseconds measured with highly optimized CPU code.

**3\. Model:** Nvidia said its new custom model, called Megatron, has 8.3 billion parameters, making it 24 times larger than BERT-Large and the world's largest language model based on Transformers, the building block used for BERT and other natural language AI models.
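To put that parameter count in perspective, here is a rough sketch of what Megatron's weights alone occupy in memory. The two-bytes-per-parameter figure assumes 16-bit floating-point weights, a common training choice; it is my assumption, not an Nvidia number:

```python
# Rough memory footprint implied by the 8.3-billion-parameter figure above.
params = 8.3e9
bytes_per_param = 2                              # assumption: FP16 weights
weights_gib = params * bytes_per_param / 2**30   # weights alone, in GiB

# The "24 times larger than BERT-Large" claim implies BERT-Large's size
bert_large_params = params / 24                  # ~345 million parameters
print(f"~{weights_gib:.1f} GiB of weights; BERT-Large ~{bert_large_params/1e6:.0f}M params")
```

Gradients and optimizer state add several times that again during training, which is why a model of this size has to be split across many GPUs.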

In a move sure to make FOSS advocates happy, Nvidia is also making a ton of source code available via [GitHub][4]:

* NVIDIA GitHub BERT training code with PyTorch
* NGC model scripts and check-points for TensorFlow
* TensorRT optimized BERT Sample on GitHub
* Faster Transformer: C++ API, TensorRT plugin, and TensorFlow OP
* MXNet Gluon-NLP with AMP support for BERT (training and inference)
* TensorRT optimized BERT Jupyter notebook on AI Hub
* Megatron-LM: PyTorch code for training massive Transformer models

Not that any of this is easily consumed. We’re talking very advanced AI code that few people will be able to make heads or tails of. But the gesture is a positive one.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432203/nvidia-rises-to-the-need-for-natural-language-processing.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/alphabetic_letters_characters_language_by_andybrandon50_cc_by-sa_2-0_1500x1000-100794409-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://medium.com/ai-network/state-of-the-art-ai-solutions-1-google-bert-an-ai-model-that-understands-language-better-than-92c74bb64c
[4]: https://github.com/NVIDIA/TensorRT/tree/release/5.1/demo/BERT/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get ready for the convergence of IT and OT networking and security)
[#]: via: (https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Get ready for the convergence of IT and OT networking and security
======

Collecting telemetry data from operational networks and passing it to information networks for analysis has its benefits. But this convergence presents big cultural and technology challenges.

![Thinkstock][1]

Most IT networking professionals are so busy with their day-to-day responsibilities that they don’t have time to consider taking on more work. But for companies with an industrial component, there’s an elephant in the room that is clamoring for attention. I’m talking about the increasingly common convergence of IT and operational technology (OT) networking and security.

Traditionally, IT and OT have had very separate roles in an organization. IT is typically tasked with moving data between computers and humans, whereas OT is tasked with moving data between “things,” such as sensors, actuators, smart machines, and other devices to enhance manufacturing and industrial processes. Not only were the roles for IT and OT completely separate, but their technologies and networks were, too.

That’s changing, however, as companies want to collect telemetry data from the OT side to drive analytics and business processes on the IT side. The lines between the two sides are blurring, and this has big implications for IT networking and security teams.

“This convergence of IT and OT systems is absolutely on the increase, and it's especially affecting the industries that are in the business of producing things, whatever those things happen to be,” according to Jeff Hussey, CEO of [Tempered Networks][2], which is working to help bridge the gap between the two. “There are devices on the OT side that are increasingly networked but without any security to those networks. Their operators historically relied on an air gap between the networks of devices, but those gaps no longer exist. The complexity of the environment and the expansive attack surface that is created as a result of connecting all of these devices to corporate networks massively increases the tasks needed to secure even the traditional networks, much less the expanded converged networks.”

**[ Also read: [Is your enterprise software committing security malpractice?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

Hussey is well versed in the cultural and technology issues in this arena. When asked if IT and OT people are working together to integrate their networks, he says, “That would be ideal, but it’s not really what we see in the marketplace. Typically, we see some acrimony between these two groups.”

Hussey explains that the groups move at different paces.

“The OT groups think in terms of 10-plus-year cycles, whereas the IT groups think in terms of three-plus-year cycles,” he says. “There's a lot more change and iteration in IT environments than in OT environments, which are traditionally extremely static. But now companies want to bring telemetry data that is produced by OT devices back to some workload in a data center or in a cloud. That forces a requirement for secure connectivity because of corporate governance or regulatory requirements, and this is when we most often see the two groups clash.”

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**

Based on the situations Hussey has observed so far, the onus to connect and secure the disparate networks falls to the IT side of the house. This is a big challenge because the tools that have traditionally been used for security in IT environments aren’t necessarily appropriate or applicable in OT environments. IT and OT systems have very different protocols and operating systems. It’s not practical to try to create network segmentation using firewall rules, access control lists, VLANs, or VPNs because those things can’t scale to the workloads presented in OT environments.

### OT practices create IT security concerns

Steve Fey, CEO of [Totem Building Cybersecurity][6], concurs with Hussey and points out another significant issue in trying to integrate the networking and security aspects of IT and OT systems. In the OT world, it’s often the device vendors or their local contractors who manage and maintain all aspects of the device, typically through remote access. These vendors even install the remote access capabilities and set up the users. “This is completely opposite to how it should be done from a cybersecurity policy perspective,” says Fey. And yet, it’s common today in many industrial environments.

Fey’s company is in the building controls industry, which automates control of everything from elevators and HVAC systems to lighting and life safety systems in commercial buildings.

“The building controls industry, in particular, is one that's characterized by a completely different buying and decision-making culture than in enterprise IT. Everything from how the systems are engineered, purchased, installed, and supported is very different than the equivalent world of enterprise IT. Even the suppliers are largely different,” says Fey. “This is another aspect of the cultural challenge between IT and OT teams. They are two worlds that are having to figure each other out because of the cyber threats that pose a risk to these control systems.”

Fey says major corporate entities are just waking up to the reality of this massive threat surface, whether it’s in their buildings or their manufacturing processes.

“There’s a dire need to overcome decades of installed OT systems that have been incorrectly configured and incorrectly operated without the security policies and safeguards that are normal to enterprise IT. But the toolsets for these environments are incompatible, and the cultural differences are great,” he says.

Totem’s goal is to bridge this gap with a specific focus on cyber and to provide a toolset that is recognizable to the enterprise IT world.

Both Hussey and Fey say it’s likely that IT groups will be charged with leading the convergence of IT and OT networks, but they must include their OT counterparts in the efforts. There are big cultural and technical gaps to bridge to deliver the results that industrial companies are hoping to achieve.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432132/get-ready-for-the-convergence-of-it-and-ot-networking-and-security.html

作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/abstract_networks_thinkstock_881604844-100749945-large.jpg
[2]: https://www.temperednetworks.com/
[3]: https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://totembuildings.com/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Powering edge data centers: Blue energy might be the perfect solution)
[#]: via: (https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Powering edge data centers: Blue energy might be the perfect solution
======

Blue energy, created by mixing seawater and fresh water, could be the perfect solution for providing cheap and eco-friendly power to edge data centers.

![Benkrut / Getty Images][1]

About a cubic yard of freshwater mixed with seawater provides almost two-thirds of a kilowatt-hour of energy. And scientists say a revolutionary new battery chemistry based on that theme could power edge data centers.
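For a rough sense of scale, here is a back-of-envelope sketch of what that energy density could mean for one facility. The plant's daily flow is a hypothetical figure chosen for illustration, not a number from the research:

```python
# Back-of-envelope estimate based on the ~2/3 kWh per cubic yard figure above.
KWH_PER_CUBIC_YARD = 2 / 3
M3_PER_CUBIC_YARD = 0.7646           # one cubic yard in cubic meters

daily_flow_m3 = 50_000               # hypothetical mid-size coastal treatment plant
daily_flow_yd3 = daily_flow_m3 / M3_PER_CUBIC_YARD
daily_kwh = daily_flow_yd3 * KWH_PER_CUBIC_YARD
print(f"~{daily_kwh / 1000:.0f} MWh per day")
```

Tens of megawatt-hours a day is in the right range to offset a treatment plant's own consumption, which is the energy independence the researchers describe.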

The idea is to harness power from wastewater treatment plants located along coasts, which happen to be ideal locations for edge data centers and are heavy electricity users.

“Places where salty ocean water and freshwater mingle could provide a massive source of renewable power,” [writes Rob Jordan in a Stanford University article][2].

**[ Read also: [Data centers may soon recycle heat into electricity][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**

The chemical process harnesses a mix of sodium and chloride ions. They’re squirted from battery electrodes into a solution and cause current to flow. That initial infusion is then followed by seawater being exchanged with wastewater effluent, which reverses the current flow and creates the energy, the researchers explain.

“Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging,” the article says.

In other words, the battery is continually recharging and discharging with no added input—such as electricity from the grid. The Stanford researchers say the technology could be ideal for making coastal wastewater plants energy independent.

### Coastal edge data centers

Edge data centers, which also tend to be located on the coasts, could benefit as well. Those data centers are already exploring kinetic wave energy to harvest power, as well as using seawater for cooling.

I’ve written about [Ocean Energy’s offshore power platform using kinetic wave energy][5]. That 125-foot-long wave converter not only uses ocean water for power generation; its sea-environment implementation means the same body of water can be used for cooling, too.

“Ocean cooling and ocean energy in the one device” is a seductive solution, the head of that company said at the time.

[Microsoft, too, has an underwater data center][6] that proffers the same kinds of power benefits.

Locating data centers on coasts or in the sea rather than inland doesn’t just provide virtually free power and cooling, plus the associated eco-emissions advantages. The coasts tend to be where the populace is, and locating data center operations near where the actual calculations, data stores, and other activities need to take place fits neatly into low-latency edge computing, conceptually.

Other advantages to placing a data center in the ocean, although close to land, include the fact that there’s no rent in open waters. And in international waters, one could imagine regulatory advantages—there are no national officials hovering around.

However, by placing the installation on terra firma (as the seawater-freshwater mix power solution is designed for) but close to the water at a coast, one can use the necessary seawater while gaining ease of access to the real estate, connections, and so on.

The Stanford University engineers, in their seawater/wastewater mix tests, flushed a battery prototype 180 times with wastewater from the Palo Alto Regional Water Quality Control Plant and seawater from nearby Half Moon Bay. The group says it captured 97% of the salinity-gradient energy, or the blue energy, as it’s sometimes called.

“Surplus power production could even be diverted to nearby industrial operations,” the article continues.

“Tapping blue energy at the global scale: rivers running into the ocean” is yet to be solved. “But it is a good starting point,” says Stanford scholar Kristian Dubrawski in the article.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432116/powering-edge-data-centers-blue-energy-might-be-the-perfect-solution.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/uk_united_kingdom_northern_ireland_belfast_river_lagan_waterfront_architecture_by_benkrut_gettyimages-530205844_2400x1600-100807934-large.jpg
[2]: https://news.stanford.edu/2019/07/29/generating-energy-wastewater/
[3]: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3314597/wave-energy-to-power-undersea-data-centers.html
[6]: https://www.networkworld.com/article/3283332/microsoft-launches-undersea-free-cooling-data-center.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,64 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Breakthroughs bring a quantum Internet closer)
[#]: via: (https://www.networkworld.com/article/3432509/breakthroughs-bring-a-quantum-internet-closer.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Breakthroughs bring a quantum Internet closer
======

Universities around the world are making discoveries that advance technologies needed to underpin quantum computing.

![Getty Images][1]

Breakthroughs in the manipulation of light are making it more likely that we will, in due course, see a significantly faster and more secure Internet. Adoption of optical circuits in chips, driven by [quantum technologies][2], for example, could be just around the corner.

Physicists at the Technical University of Munich (TUM) have just announced a dramatic leap forward in the methods used to accurately place light sources in atom-thin layers. That fine positioning has been one obstacle in the movement towards quantum chips.

[See who's creating quantum computers][3]

“Previous circuits on chips rely on electrons as the information carriers,” [the school explains in a press release][4]. By using light instead, however, it's possible to send data at the speed of light, gain power efficiencies, and take advantage of quantum entanglement, where the data is positioned in multiple states in the circuit, all at the same time.

Roughly, quantum entanglement is highly secure because eavesdropping attempts can not only be spotted immediately anywhere along a circuit, due to the always-intertwined parts, but the keys can be shut down automatically at the same time, corrupting visibility for the hacker.
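The tamper-evidence idea can be illustrated with a classical toy model in the spirit of quantum key distribution (a BB84-style sketch, not the entanglement scheme TUM is working on): measuring a photon in the wrong basis randomizes it, so an eavesdropper inevitably leaves a detectable error rate behind.

```python
import random

def bb84_error_rate(n_bits, eavesdrop, seed=42):
    """Toy BB84-style run: fraction of sifted bits on which
    the sender (Alice) and receiver (Bob) disagree."""
    rng = random.Random(seed)
    errors = checked = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        a_basis = rng.randint(0, 1)      # Alice's encoding basis
        sent_bit, sent_basis = bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Measuring in the wrong basis randomizes the bit Eve re-sends
            e_bit = bit if e_basis == a_basis else rng.randint(0, 1)
            sent_bit, sent_basis = e_bit, e_basis
        b_basis = rng.randint(0, 1)
        # Bob's measurement: a wrong basis again yields a random result
        measured = sent_bit if b_basis == sent_basis else rng.randint(0, 1)
        if b_basis == a_basis:           # sifting: keep matching-basis bits
            checked += 1
            errors += measured != bit
    return errors / checked

print(bb84_error_rate(4000, eavesdrop=False))  # 0.0
print(bb84_error_rate(4000, eavesdrop=True))   # ~0.25
```

Without an eavesdropper the sifted bits agree perfectly; with one, roughly a quarter of them disagree, which is how the tap is spotted.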

The school says its light-source-positioning technique, which uses a three-atom-thick layer of the semiconductor molybdenum disulfide (MoS2) as the starting material and then irradiates it with a helium ion beam, controls the positioning of the light source in a chip better than has been achieved before.

They say that this precision opens the door to quantum sensor chips for smartphones, as well as “new encryption technologies for data transmission.” Any smartphone sensor also has applications in IoT.

The TUM quantum-electronics breakthrough is just one announced in the last few weeks. Scientists at Osaka University say they’ve figured out a way to translate information encoded in a laser beam into the spin state of an electron in a quantum dot. They explain, [in their release][5], that this addresses the fragility of entangled states, which can peter out before lasting for the required length of transmission. Roughly, their invention allows electron spins in distant, terminus computers to interact better with the quantum-data-carrying light signals.

“The achievement represents a major step towards a ‘quantum internet,’” the university says.

“There are those who think all computers, and other electronics, will eventually be run on light and forms of photons, and that we will see a shift to all-light,” [I wrote earlier this year][6].

That movement is not slowing. Unrelated to the aforementioned quantum-based developments, we’re also seeing a light-based thrust that can be used in regular electronics, too.

Engineers may soon be designing with small photon diodes (not traditional LEDs, which are also diodes) that would allow light to flow in one direction only, [says Stanford University in a press release][7]. Using materials science, they have figured out a way to trap light in nano-sized silicon. Diodes are basically valves that stop electrical circuits from running in reverse. Light-based diodes for direction haven’t been available in small footprints, such as would be needed in smartphone-sized form factors or IoT sensing, for example.

“One grand vision is to have an all-optical computer where electricity is replaced completely by light and photons drive all information processing,” Mark Lawrence of Stanford says. “The increased speed and bandwidth of light would enable faster solutions.”

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3432509/breakthroughs-bring-a-quantum-internet-closer.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/3_nodes-and-wires_servers_hardware-100769198-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
[4]: https://www.tum.de/nc/en/about-tum/news/press-releases/details/35627/
[5]: https://resou.osaka-u.ac.jp/en/research/2019/20190717_1
[6]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
[7]: https://news.stanford.edu/2019/07/24/developing-technologies-run-light/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The cloud isn't killing open source software)
[#]: via: (https://opensource.com/article/19/8/open-source-licensing)
[#]: author: (Peter Zaitsev https://opensource.com/users/peter-zaitsev)

The cloud isn't killing open source software
======

How the cloud motivates open source businesses to evolve quickly.

![Globe up in the clouds][1]

Over the last few months, I participated in two keynote panels where people asked questions about open source licensing:

  * Do we need to redefine what open source means in the age of the cloud?
  * Are cloud vendors abusing open source?
  * Will open source, as we know it, survive?

Last year was the most eventful in my memory for the usually very conservative open source licensing space:

  * [Elastic][2] and [Confluent][3] introduced their own licenses for a portion of their stack.
  * [Redis Labs][4] changed its license for some extensions by adding "Commons Clause," then changed the entire license a few months later.
  * [MongoDB][5] famously proposed a new license called the Server-Side Public License (SSPL) to the Open Source Initiative (OSI) for approval, only to [retract][6] the proposal before the OSI had an opportunity to reach a decision. Many in the open source community regarded the SSPL as failing to meet the standards of open source licenses. As a result, MongoDB is under a license that can be described as "[source-available][7]" but not open source, given that it has not been approved by the OSI.

### Competition in the cloud

The most common reason given for software vendors making these changes is "foul play" by cloud vendors. The argument is that cloud vendors unfairly offer open source software "as a service," capturing large portions of the revenue, while the original software vendor continues to carry most of the development costs. Market rumors claim Amazon Web Services (AWS) makes more revenue from MySQL than Oracle, which owns the product.

So, who is claiming foul play is destroying the open source ecosystem? Typically, the loudest voices are venture-funded open source software companies. These companies require a very high growth rate to justify their hefty valuations, so it makes sense that they would prefer not to worry about additional competition.

But I reject this argument. If you have an open source license for your software, you need to accept the benefits and drawbacks that go along with it. Besides, you are likely to have a much faster and larger adoption rate partly because other businesses, large and small, can make money from your software. You need to accept and even expect competition from these businesses.

In simple terms, there will be a larger cake, but you will only get a slice of it. If you want a bigger slice of that cake, you can choose a proprietary license for all or some of your software (the latter is often called "open core"). Or, you can choose more or less permissive open source licensing. Choosing the right mix and adapting it as time goes by is critical for the success of businesses that produce open source software.

### Open source communities

But what about software users and the open source communities that surround these projects? These groups generally love to see their software available from cloud vendors, for example, database-as-a-service (DBaaS), as it makes the software much easier to access and gives users more choices than ever. This can have a very positive impact on the community. For example, the adoption of PostgreSQL, which was not easy to use, was dramatically boosted by its availability on Heroku and then as DBaaS on major cloud vendors.

Another criticism leveled at cloud vendors is that they do not support open source communities. This is partly due to their reluctance to share software code. They do, however, contribute significantly to the community by pushing the boundaries of usability, and more and more, we see examples of cloud vendors contributing code. AWS, which gets most of the criticism, has multiple [open source projects][8] and contributes to other projects. Amazon [contributed Encryption in Transit to Redis][9] and recently released [Open Distro for Elasticsearch][10], which provides open source equivalents for many features not available in the open source version of the Elastic platform.

### Open source now and in the future

So, while open source companies impacted by cloud vendors continue to argue that such competition can kill their business—and consequently kill open source projects—this argument is misguided. Competition is not new. Weaker companies that fail to adjust to these new business realities may fail. Other companies will thrive or be acquired by stronger players. This process generally leads to better products and more choice.

This is especially true for open source software, which, unlike proprietary software, cannot be wiped out by a company's failure. Once released, open source code is _always_ open (you can only change the license for new releases), so everyone can exercise the right to fork and continue development if there is demand.

So, I believe open source software is working exactly as intended.

Some businesses attempt to balance open and proprietary software licenses and are now changing to restrictive licenses. Time will tell whether this will protect them or result in their users seeking a more open alternative.

But, what about "source-available" licenses? This is a new category and another option for software vendors and users. However, it can be confusing. The source-available category is not well defined. Some people even refer to this software as open source, as you can browse the source code on GitHub. When source-available code is mixed in with truly open source components in the same product, it can be problematic. If issues arise, they could damage the reputation of the open source software and even expose the user to potential litigation. I hope that standardized source-available licenses will be developed and adopted by software vendors, as was the case with open source licenses.

At [Percona][11], we find ourselves in a unique position. We have spent years using the freedom of open source to develop better versions of existing software, with enhanced features, at no cost to our users. Percona Server for MySQL is as open as MySQL Community Edition but has many of the enhanced features available in MySQL Enterprise as well as additional benefits. This also applies to Percona Server for MongoDB. So, we compete with MongoDB and Oracle, while also being thankful for the amazing engineering work they are doing.

We also compete with DBaaS on other cloud vendors. DBaaS is a great choice for smaller companies that aren't worried about vendor lock-in. It offers superb value without huge costs and is a good fit for some customers. This rivalry is sometimes unpleasant, but it is ultimately fair, and the competition pushes us to be a better company.

In summary, there is no need to panic! The cloud is not going to kill open source software, but it should motivate open source software businesses to adjust and evolve their operations. It is clear that agility will be key, and businesses that can take advantage of new developments and adapt to changing market conditions will be more successful. The final result is likely to be more open software and also more non-open source software, all operating under a variety of licenses.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/open-source-licensing

作者:[Peter Zaitsev][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/peter-zaitsev
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
[2]: https://www.elastic.co/guide/en/elastic-stack-overview/current/license-management.html
[3]: https://www.confluent.io/blog/license-changes-confluent-platform
[4]: https://redislabs.com/blog/redis-labs-modules-license-changes/
[5]: https://www.mongodb.com/licensing/server-side-public-license
[6]: http://lists.opensource.org/pipermail/license-review_lists.opensource.org/2019-March/003989.html
[7]: https://en.wikipedia.org/wiki/Source-available_software
[8]: https://aws.amazon.com/opensource/
[9]: https://aws.amazon.com/blogs/opensource/open-sourcing-encryption-in-transit-redis/
[10]: https://aws.amazon.com/blogs/opensource/keeping-open-source-open-open-distro-for-elasticsearch/
[11]: https://www.percona.com/
@ -1,13 +1,17 @@
Managing Digital Files (e.g., Photographs) in Files and Folders

qfzy1233 is translating

数码文件与文件夹收纳术(以照片为例)
======

Update 2014-05-14: added real world example
更新 2014-05-14:增加了一些具体实例

Update 2015-03-16: filtering photographs according to their GPS coordinates
更新 2015-03-16:根据照片的 GPS 坐标过滤图片

Update 2016-08-29: replaced outdated `show-sel.sh` method with new `filetags --filter` method
更新 2016-08-29:以新的 `filetags --filter`(LCTT 译注:文件标签过滤器)替换已经过时的 `show-sel.sh` 脚本

Update 2017-08-28: Email comment on geeqie video thumbnails
更新 2017-08-28:增加了关于 geeqie 视频缩略图的邮件评论

I am a passionate photographer when being on vacation or whenever I see something beautiful. This way, I collected many [JPEG][1] files over the past years. Here, I describe how I manage my digital photographs while avoiding any [vendor lock-in][2] which binds me to a temporary solution and leads to loss of data. Instead, I prefer solutions where I am able to **invest my time and effort for a long-term relationship**.

This (very long) entry is **not about image files only**: I am going to explain further things like my folder hierarchy, file name convention, and so forth. Therefore, this information applies to all kinds of files I process.
@ -1,4 +1,3 @@
leemeans translating
7 deadly sins of documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-cat-writing-king-typewriter-doc.png?itok=afaEoOqc)
@ -1,215 +0,0 @@
Use LVM to Upgrade Fedora
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/lvm-upgrade-816x345.jpg)

Most users find it simple to upgrade [from one Fedora release to the next][1] with the standard process. However, there are inevitably many special cases that Fedora can also handle. This article shows one way to upgrade using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. This example upgrades a Fedora 26 system to Fedora 28.

The process shown here is more complex than the standard upgrade process. You should have a strong grasp of how LVM works before you use this process. Without proper skill and care, you could **lose data and/or be forced to reinstall your system!** If you don’t know what you’re doing, it is **highly recommended** you stick to the supported upgrade methods only.

### Preparing the system

Before you start, ensure your existing system is fully updated.

```
$ sudo dnf update
$ sudo systemctl reboot # or GUI equivalent
```

Check that your root filesystem is mounted via LVM.

```
$ df /
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26  20511312 14879816   4566536  77% /

$ sudo lvs
  LV        VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  f22       vg_sdg -wi-ao----  15.00g
  f24_64    vg_sdg -wi-ao----  20.00g
  f26       vg_sdg -wi-ao----  20.00g
  home      vg_sdg -wi-ao---- 100.00g
  mockcache vg_sdg -wi-ao----  10.00g
  swap      vg_sdg -wi-ao----   4.00g
  test      vg_sdg -wi-a-----   1.00g
  vg_vm     vg_sdg -wi-ao----  20.00g
```

If you used the defaults when you installed Fedora, you may find the root filesystem is mounted on a LV named root. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named f26 and is 20G in size.

Next, ensure you have enough free space in LVM.

```
$ sudo vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  vg_sdg   1   8   0 wz--n- 232.39g 42.39g
```

This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. Managing LVM in general is beyond the scope of this article, but here are some possibilities:

1\. /home on its own LV, and lots of free space in /home

You can log out of the GUI and switch to a text console, logging in as root. Then you can unmount /home, and use lvreduce -r to resize and reallocate the /home LV. You might also boot from a Live image (so as not to use /home) and use the gparted GUI utility.

2\. Most of the LVM space allocated to a root LV, with lots of free space in the filesystem

You can boot from a Live image and use the gparted GUI utility to reduce the root LV. Consider moving /home to its own filesystem at this point, but that is beyond the scope of this article.

3\. Most of the file systems are full, but you have LVs you no longer need

You can delete the unneeded LVs, freeing space in the volume group for this operation.
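For case 1 above, the shrink step might look like the following (illustrative only; the volume name vg_sdg/home and the 25G amount are assumptions, so adjust them for your system and back up first). The `-r` option tells lvreduce to resize the filesystem together with the LV:

```
$ sudo umount /home
$ sudo lvreduce -r -L -25G vg_sdg/home
$ sudo mount /home
```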
### Creating a backup

First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system’s volume group (VG). In this example it’s vg_sdg.

```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```

Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named f26_s.

```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```

The snapshot can now be copied to the new LV. **Make sure you have the destination correct** when you substitute your own volume names. If you are not careful you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, **not** your live root.

```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```

Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:

```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```

Then remove the snapshot volume which is no longer needed:

```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```

You may wish to make a snapshot of /home at this point if you have it mounted separately. Sometimes, upgraded applications make changes that are incompatible with a much older Fedora version. If needed, edit the /etc/fstab file on the **old** root filesystem to mount the snapshot on /home. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of /home for good measure.

### Configuring to use the new root

First, mount the new LV and back up your existing GRUB settings:

```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```

Edit grub.cfg and add this before the first menuentry, unless you already have it:

```
menuentry 'Old boot menu' {
        configfile /grub2/grub.cfg.old
}
```

Edit grub.cfg and change the default menuentry to activate and mount the new root filesystem. Change this line:

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

So that it reads like this. Remember to use the correct names for your system’s VG and LV entries!

```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```

Edit /mnt/f28/etc/default/grub and change the default root LV activated at boot:

```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```

Edit /mnt/f28/etc/fstab to change the mounting of the root filesystem from the old volume:

```
/dev/mapper/vg_sdg-f26 /    ext4    defaults        1 1
```

to the new one:

```
/dev/mapper/vg_sdg-f28 /    ext4    defaults        1 1
```

Then, add a read-only mount of the old system for reference purposes:

```
/dev/mapper/vg_sdg-f26 /f26 ext4    ro,nodev,noexec 0 0
```

If your root filesystem is mounted by UUID, you will need to change this. Here is how if your root is an ext4 filesystem:

```
$ sudo e2label /dev/vg_sdg/f28 F28
```

Now edit /mnt/f28/etc/fstab to use the label. Change the mount line for the root file system so it reads like this:

```
LABEL=F28              /    ext4    defaults        1 1
```

### Rebooting and upgrading

Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy with a new LV name, and ready for dnf system-upgrade! If anything goes wrong, use the Old Boot Menu to boot back into your working system, which this procedure avoids touching.

```
$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem             1K-blocks     Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28  20511312 14903196   4543156  77% /
/dev/mapper/vg_sdg-f26  20511312 14866412   4579940  77% /f26
```

You may wish to verify that using the Old Boot Menu does indeed get you back to having root mounted on the old root filesystem.

Now follow the instructions at [this wiki page][2]. If anything goes wrong with the system upgrade, you have a working system to boot back into.

### Future ideas

The steps to create a new LV and copy a snapshot of root to it could be automated with a generic script. It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one would be able to type this command:

```
$ sudo copyfs / f28
```

Supplying the mount-point to copy makes it clearer what is happening, and copying other mount-points like /home could be useful.
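A minimal sketch of what such a script could look like (the `copyfs_plan` name, argument order, and print-only behavior are assumptions for illustration, not an existing Fedora tool). It only prints the command sequence from this article so you can review the plan before running anything by hand:

```shell
#!/bin/sh
# Hypothetical helper sketching the "copyfs" idea from the Future ideas section.
# It does NOT run any LVM commands; it only prints the plan for review.
copyfs_plan() {
    vg=$1; src=$2; dst=$3; size=$4
    cat <<EOF
sudo lvcreate -L${size} -n ${dst} ${vg}
sync
sudo lvcreate -s -L1G -n ${src}_s ${vg}/${src}
sudo dd if=/dev/${vg}/${src}_s of=/dev/${vg}/${dst} bs=256k
sudo e2fsck -f /dev/${vg}/${dst}
sudo tune2fs -U random /dev/${vg}/${dst}
sudo lvremove ${vg}/${src}_s
EOF
}

# Example: plan copying the f26 root to a new 20G LV named f28.
copyfs_plan vg_sdg f26 f28 20G
```

A real version would derive the VG, source LV, and size from `df` and `lvs` for the given mount point, and would execute the commands only after an explicit confirmation.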
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-lvm-upgrade-fedora/

作者:[Stuart D Gathman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/sdgathman/
[1]:https://fedoramagazine.org/upgrading-fedora-27-fedora-28/
[2]:https://fedoraproject.org/wiki/DNF_system_upgrade
@ -1,69 +0,0 @@
Create animated, scalable vector graphic images with MacSVG
======

Open source SVG: The writing is on the wall

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE)

The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG on the browser.

Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG).

An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is defined by mathematical paths rather than a grid of pixels, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.

### Move it, move it

You can create SVG images with several drawing programs, including open source [Inkscape][4] and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention.

[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6].

Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website][5].

I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.

![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png)

It is important to first learn what makes “the writing on the wall” actually write.

The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means producing the whole thing into an array. That’s a simple overview, but it helped me understand what was supposed to happen and why.
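Before tracing a whole signature, it may help to see the trick in isolation. Here is a minimal, self-contained sketch (my own example, not MacSVG output; the dash length of 100 is assumed to equal the path length):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="120" height="40" viewBox="0 0 120 40">
  <!-- The line is 100 units long. Making the dash (and the initial offset) that
       same length hides the stroke; animating the offset to 0 "draws" it. -->
  <line x1="10" y1="20" x2="110" y2="20" stroke="black" stroke-width="3"
        stroke-dasharray="100" stroke-dashoffset="100">
    <animate attributeName="stroke-dashoffset" from="100" to="0" dur="2s" fill="freeze"/>
  </line>
</svg>
```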
With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.

### The writing is on the WordPress

I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere.

I opened the source code of my SVG in [Brackets][11], and here are the results:

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```

What would you use MacSVG for?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation

作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rikki-endsley
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Belshazzar
[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast
[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[4]: https://inkscape.org/
[5]: https://macsvg.org/
[6]: https://github.com/dsward2/macSVG
[7]: https://gist.github.com/mbostock/5649592
[8]: https://macharyas.com/
[9]: https://wordpress.org/plugins/svg-support/
[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/
[11]: http://brackets.io/
@ -1,266 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: subject: (How To Customize The GNOME 3 Desktop?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
[#]: url: ( )
|
||||
|
||||
How To Customize The GNOME 3 Desktop?
|
||||
======
|
||||
|
||||
We have got many emails from user to write an article about GNOME 3 desktop customization but we don’t get a time to write this topic.
|
||||
|
||||
I was using Ubuntu operating system since long time in my primary laptop and i got bored so, i would like to test some other distro which is related to Arch Linux.
|
||||
|
||||
I prefer to go with Majaro so, i have installed Manjaro 18.0 with GNOME 3 desktop in my laptop.
|
||||
|
||||
I’m customizing my desktop, how i want it. So, i would like to take this opportunity to write up this article in detailed way to help others.
|
||||
|
||||
This article helps others to customize their desktop without headache.
|
||||
|
||||
I’m not going to include all my customization and i will be adding a necessary things which will be mandatory and useful for Linux desktop users.
|
||||
|
||||
If you feel some tweak is missing in this article, i would request you to mention that in comment sections. It will be very helpful for other users.
|
||||
|
||||
### 1) How to Launch Activities Overview in GNOME 3 Desktop?
|
||||
|
||||
The Activities Overview will display all the running applications or launched/opened windows by clicking `Super Key` or by clicking `Activities` button in the topmost left corner.
|
||||
|
||||
It allows you to launch a new applications, switch windows, and move windows between workspaces.
|
||||
|
||||
You can simply exit the Activities Overview by choosing the following any of the one actions like selecting a window, application or workspace, or by pressing the `Super Key` or `Esc Key`.
|
||||
|
||||
Activities Overview Screenshot.
|
||||
![][2]
|
||||
|
||||
### 2) How to Resize Windows in GNOME 3 Desktop?
|
||||
|
||||
The Launched windows can be maximized, unmaximized and snapped to one side of the screen (Left or Right) by using the following key combinations.
|
||||
|
||||
* `Super Key+Down Arrow:` To unmaximize the window.
|
||||
* `Super Key+Up Arrow:` To maximize the window.
|
||||
* `Super Key+Right Arrow:` To fill a window in the right side of the half screen.
|
||||
* `Super Key+Left Arrow:` To fill a window in the left side of the half screen
|
||||
|
||||
|
||||
|
||||
Use `Super Key+Down Arrow` to unmaximize the window.
|
||||
![][3]
|
||||
|
||||
Use `Super Key+Up Arrow` to maximize the window.
|
||||
![][4]
|
||||
|
||||
Use `Super Key+Right Arrow` to fill a window in the right side of the half screen.
|
||||
![][5]
|
||||
|
||||
Use `Super Key+Left Arrow` to fill a window in the left side of the half screen.
|
||||
![][6]
|
||||
|
||||
This feature will help you to view two applications at a time a.k.a splitting screen.
|
||||
![][7]
|
||||
|
||||
### 3) How to Display Applications in GNOME 3 Desktop?

Click the `Show Application Grid` button in the Dash to display all the applications installed on your system.

![][8]

### 4) How to Add Applications to the Dash in GNOME 3 Desktop?

To speed up your day-to-day activity, you may want to add frequently used applications to the Dash, or drag an application launcher onto it.

This lets you launch your favorite applications directly, without searching for them. To do so, simply right-click on the application and choose `Add to Favorites`.

![][9]

To remove an application launcher (a favorite) from the Dash, either drag it from the Dash to the grid button or right-click on it and choose `Remove from Favorites`.

![][10]
### 5) How to Switch Between Workspaces in GNOME 3 Desktop?

Workspaces allow you to group windows together, which helps you segregate your work properly. If you are working on multiple things and want to keep each task and its related windows separate, workspaces are a handy option.

You can switch workspaces in two ways: open the Activities Overview and select a workspace from the right-hand side, or use the following key combinations.

* Use `Ctrl+Alt+Up` to switch to the workspace above.
* Use `Ctrl+Alt+Down` to switch to the workspace below.

![][11]
### 6) How to Switch Between Applications (Application Switcher) in GNOME 3 Desktop?

Use either `Alt+Tab` or `Super+Tab` to launch the Application Switcher and move between applications.

Once launched, keep holding the Alt or Super key and hit the Tab key to move to the next application, in left-to-right order.
### 7) How to Add Your Username to the Top Panel in GNOME 3 Desktop?

If you would like to add your username to the top panel, install the [Add Username to Top Panel][12] GNOME extension.

![][13]

### 8) How to Add Microsoft Bing’s Wallpaper in GNOME 3 Desktop?

Install the [Bing Wallpaper Changer][14] GNOME shell extension to change your wallpaper every day to Microsoft Bing’s wallpaper of the day.

![][15]
### 9) How to Enable Night Light in GNOME 3 Desktop?

Night Light is a well-known feature that reduces eye strain by shifting your screen from blue light to a dim yellow after sunset.

It is also available on smartphones. Other known apps for the same purpose are f.lux and **[redshift][16]**.

To enable it, navigate to **System Settings** >> **Devices** >> **Displays** and turn Night Light on.

![][17]

Once it’s enabled, a status icon will be placed on the top panel.

![][18]
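If you prefer the terminal, the same toggle can likely be flipped with `gsettings`. This is a sketch; the schema keys below are my assumption based on GNOME's settings-daemon color plugin, and it only takes effect inside a running GNOME session:

```shell
# Enable Night Light from the terminal (equivalent to the Settings toggle)
gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true

# Optionally adjust the color temperature (lower = warmer)
gsettings set org.gnome.settings-daemon.plugins.color night-light-temperature 3500
```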
### 10) How to Show the Battery Percentage in GNOME 3 Desktop?

Showing the battery percentage gives you the exact battery level. To enable it, follow the steps below.

Start GNOME Tweaks >> **Top Bar** >> **Battery Percentage** and switch it on.

![][19]

After the modification you will see the battery percentage icon on the top panel.

![][20]
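The same switch is exposed as a `gsettings` key, if you'd rather script it than open GNOME Tweaks (a sketch; assumes the `org.gnome.desktop.interface` schema and a running GNOME session):

```shell
# Show the battery percentage next to the battery icon on the top panel
gsettings set org.gnome.desktop.interface show-battery-percentage true
```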
### 11) How to Enable Mouse Right Click in GNOME 3 Desktop?

By default, right click is disabled in the GNOME 3 desktop environment. To enable it, follow the steps below.

Start GNOME Tweaks >> **Keyboard & Mouse** >> Mouse Click Emulation and select the “Area” option.

![][21]
### 12) How to Enable Minimize On Click in GNOME 3 Desktop?

Enable the one-click minimize feature, which lets you minimize an open window by clicking its Dash icon instead of using the minimize button.

```
$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
```
### 13) How to Customize the Dock in GNOME 3 Desktop?

If you would like to make your Dock look similar to the Deepin desktop or macOS, use the following set of commands.

```
$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED
$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50
```

![][22]
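If you don't like the result, `gsettings reset` puts each key back to its schema default (a sketch; same dash-to-dock keys as above, and it assumes the Dash to Dock extension is installed):

```shell
# Revert the dock customizations to their defaults
gsettings reset org.gnome.shell.extensions.dash-to-dock dock-position
gsettings reset org.gnome.shell.extensions.dash-to-dock extend-height
gsettings reset org.gnome.shell.extensions.dash-to-dock transparency-mode
gsettings reset org.gnome.shell.extensions.dash-to-dock dash-max-icon-size
```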
### 14) How to Show the Desktop in GNOME 3 Desktop?

By default, the `Super Key+D` shortcut doesn’t show your desktop. To configure it, follow the steps below.

Settings >> **Devices** >> **Keyboard** >> click **Hide all normal windows** under Navigation, press `Super Key+D`, and finally hit the `Set` button to enable it.

![][23]
### 15) How to Customize the Date and Time Format?

By default, GNOME 3 shows the date and time as `Sun 04:48`, which is not very informative. If you want output in the format `Sun Dec 2 4:49 AM`, follow the steps below.

**For the date:** Start GNOME Tweaks >> **Top Bar** and enable the `Weekday` option under Clock.

![][24]

**For the time:** Settings >> **Details** >> **Date & Time**, then choose the `AM/PM` option for the time format.

![][25]

After the modification you will see the date and time format as below.

![][26]
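Both of these clock settings are also plain `gsettings` keys, so they can be set from the terminal (a sketch; assumes the `org.gnome.desktop.interface` schema, and `clock-show-weekday` requires a reasonably recent GNOME):

```shell
# Show the weekday in the top-bar clock
gsettings set org.gnome.desktop.interface clock-show-weekday true

# Switch the clock from 24-hour to 12-hour (AM/PM) format
gsettings set org.gnome.desktop.interface clock-format '12h'
```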
### 16) How to Permanently Disable Unused Services at Boot?

In my case, I’m not going to use the **Bluetooth** and **CUPS (printer)** services, so I’m disabling them on my laptop. I’m doing this on an Arch-based system; to manage packages on Arch, see the **[Pacman Package Manager][27]** guide.

For Bluetooth:

```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
$ sudo systemctl mask bluetooth.service
$ systemctl status bluetooth.service
```

For CUPS:

```
$ sudo systemctl stop org.cups.cupsd.service
$ sudo systemctl disable org.cups.cupsd.service
$ sudo systemctl mask org.cups.cupsd.service
$ systemctl status org.cups.cupsd.service
```

Finally, verify whether these services are disabled at boot using the following command. If you want to double-check, reboot once and run it again. Navigate to the following link to learn more about **[systemctl][28]** usage.

```
$ systemctl list-unit-files --type=service | grep enabled
[email protected] enabled
dbus-org.freedesktop.ModemManager1.service enabled
dbus-org.freedesktop.NetworkManager.service enabled
dbus-org.freedesktop.nm-dispatcher.service enabled
display-manager.service enabled
gdm.service enabled
[email protected] enabled
linux-module-cleanup.service enabled
ModemManager.service enabled
NetworkManager-dispatcher.service enabled
NetworkManager-wait-online.service enabled
NetworkManager.service enabled
systemd-fsck-root.service enabled-runtime
tlp-sleep.service enabled
tlp.service enabled
```
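As a quicker per-unit check, `systemctl is-enabled` reports a masked unit's state directly (a sketch; `bluetooth.service` is the unit from the example above):

```shell
# A masked unit cannot be started even manually;
# after the steps above this should report "masked"
systemctl is-enabled bluetooth.service
```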
### 17) How to Install Icons & Themes in GNOME 3 Desktop?

Plenty of icon and theme packs are available for the GNOME desktop, so choose the **[GTK Themes][29]** and **[Icon Themes][30]** you like. To configure them further, check the links below, which will make your desktop more elegant.
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg
|
||||
[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg
|
||||
[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg
|
||||
[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg
|
||||
[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg
|
||||
[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg
|
||||
[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/
|
||||
[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg
|
||||
[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg
|
||||
[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/
|
||||
[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg
|
||||
[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg
|
||||
[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg
|
||||
[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg
|
||||
[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg
|
||||
[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg
|
||||
[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg
|
||||
[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg
|
||||
[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg
|
||||
[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg
|
||||
[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/
|
||||
[29]: https://www.2daygeek.com/category/gtk-theme/
|
||||
[30]: https://www.2daygeek.com/category/icon-theme/
|
@ -1,145 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
|
||||
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
|
||||
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)
|
||||
|
||||
Podman and user namespaces: A marriage made in heaven
|
||||
======
|
||||
Learn how to use Podman to run containers in separate user namespaces.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)
|
||||
|
||||
[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces.
|
||||
|
||||
I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable).
|
||||
|
||||
If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody.
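You can check the overflow UID the kernel uses for such unmapped IDs on your own system; it's exposed under `/proc`:

```shell
# UIDs with no mapping in the user namespace appear as this "nobody" UID
# (typically 65534)
cat /proc/sys/kernel/overflowuid
```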
|
||||
|
||||
### An example
|
||||
|
||||
Here's how that might work. First, I create a file in my system owned by root.
|
||||
|
||||
```
|
||||
$ sudo bash -c "echo Test > /tmp/test"
|
||||
$ sudo chmod 600 /tmp/test
|
||||
$ sudo ls -l /tmp/test
|
||||
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
|
||||
```
|
||||
|
||||
Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000.
|
||||
|
||||
```
|
||||
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
|
||||
# id
|
||||
uid=0(root) gid=0(root) groups=0(root)
|
||||
# ls -l /tmp/test
|
||||
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
|
||||
# cat /tmp/test
|
||||
cat: /tmp/test: Permission denied
|
||||
```
|
||||
|
||||
The **\--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999) to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is UID 100001 on the host.
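The arithmetic behind `--uidmap` can be sketched in plain shell (illustrative only; the variable names are mine, and the values come from the example above):

```shell
# --uidmap CONTAINER_START:HOST_START:LENGTH
# A container UID maps to: host_start + (container_uid - container_start)
container_start=0
host_start=100000
length=5000

container_uid=1
host_uid=$(( host_start + (container_uid - container_start) ))
echo "UID $container_uid in the container is UID $host_uid on the host"
# Container UIDs at or beyond $length are simply not mapped at all
```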
|
||||
|
||||
Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE**, it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even if the file is not owned by root and not world readable or writable.
|
||||
|
||||
User namespace capabilities are not the same as capabilities on the host. They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs.
|
||||
|
||||
Note that SELinux and other technologies also limit what would happen if a container process broke out of the container.
|
||||
|
||||
### Using `podman top` to show user namespaces
|
||||
|
||||
We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host.
|
||||
|
||||
Let's start by running a sleep container using our UID mapping.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
|
||||
```
|
||||
|
||||
Now run **podman top** :
|
||||
|
||||
```
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 100000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000.
|
||||
|
||||
Now let's run a second container, but this time we will choose a separate UID map starting at 200000.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 200000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
200000 23644 23632 1 08:08 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host.
|
||||
|
||||
Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000.
|
||||
|
||||
This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning.
|
||||
|
||||
### Problems with user namespaces
|
||||
|
||||
For several years, I've advocated user namespace as the security tool everyone wants but hardly anyone has used. The reason is that there hasn't been any filesystem support for it, that is, a UID-shifting filesystem.
|
||||
|
||||
In containers, you want to share the **base** image between lots of containers. The examples above use the Fedora base image in each example. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem.
|
||||
|
||||
|
||||
Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, containers/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image.
|
||||
|
||||
When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**.
|
||||
|
||||
Note that if you do a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use containers/storage to reverse the shift and push the image with all files chowned back to real UID=0.
|
||||
|
||||
This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container.
|
||||
|
||||
Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out.
|
||||
|
||||
This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container.
|
||||
|
||||
This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system.
|
||||
|
||||
### Going forward
|
||||
|
||||
I want to add a new flag, like **\--userns=auto** , to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto** , you won't even need to set the flag.
|
||||
|
||||
Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that.
|
||||
|
||||
In my next article, I will explain how to run Podman as non-root in a user namespace.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/podman-and-user-namespaces
|
||||
|
||||
作者:[Daniel J Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rhatdan
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://podman.io/
|
||||
[2]: https://github.com/containers/libpod
|
||||
[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
|
||||
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
|
||||
[5]: https://chmodcommand.com/chmod-660/
|
||||
[6]: https://en.wikipedia.org/wiki/Chown
|
||||
[7]: https://github.com/containers/storage
|
||||
[8]: https://en.wikipedia.org/wiki/OverlayFS
|
||||
[9]: https://buildah.io/
|
||||
[10]: http://cri-o.io/
|
@ -1,111 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
|
||||
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
|
||||
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
|
||||
|
||||
How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
|
||||
======
|
||||
|
||||
_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_
|
||||
|
||||
The [Raspberry Pi][1] is one of the [most popular SBCs (Single-Board Computers)][2]. If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise getting all the [additional Raspberry Pi accessories][3] to get started with your device.
|
||||
|
||||
You’re ready to turn it on and start tinkering with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
|
||||
|
||||
Today, let’s go ahead and learn how to turn on and shutdown a Raspberry Pi as it doesn’t really feature a ‘power button’ of sorts.
|
||||
|
||||
For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.
### Turn on Raspberry Pi
|
||||
|
||||
![Micro USB port for Power][7]
|
||||
|
||||
The micro USB port powers the Raspberry Pi; you turn it on by plugging the power cable into that port. But before you do, make sure you have done the following things.
|
||||
|
||||
* Prepare the micro SD card with Raspbian according to the official [guide][8] and insert it into the micro SD card slot.
* Plug in the HDMI cable, a USB keyboard, and a mouse.
* Plug in the Ethernet cable (optional).
|
||||
|
||||
|
||||
|
||||
Once you have done the above, plug in the power cable. This turns on the Raspberry Pi and the display will light up and load the Operating System.
### Shutting Down the Pi
|
||||
|
||||
Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.
|
||||
|
||||
![Turn off Raspberry Pi graphically][9]
|
||||
|
||||
Alternatively, you can use the [shutdown command][10] in the terminal:
|
||||
|
||||
```
|
||||
sudo shutdown now
|
||||
```
|
||||
|
||||
Once the shutdown process has started, **wait** till it completely finishes, and then you can cut the power. Once the Pi shuts down, there is no real way to turn it back on without cycling the power. You could use the GPIOs to turn the Pi on from the shutdown state, but that requires additional modding.
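Beyond `shutdown now`, the same command can schedule or cancel a shutdown, which gives you time to close things first (standard `shutdown` flags, nothing Pi-specific):

```shell
# Halt in 5 minutes instead of immediately
sudo shutdown -h +5

# Changed your mind? Cancel the pending shutdown
sudo shutdown -c
```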
|
||||
|
||||
_Note: Micro USB ports tend to be fragile, hence turn the power off/on at the source instead of frequently unplugging and replugging the micro USB cable._
|
||||
|
||||
Well, that’s about all you should know about turning on and shutting down the Pi, what do you plan to use it for? Let me know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/turn-on-raspberry-pi/
|
||||
|
||||
作者:[Chinmay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/chinmay/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.raspberrypi.org/
|
||||
[2]: https://itsfoss.com/raspberry-pi-alternatives/
|
||||
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
|
||||
[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case))
|
||||
[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
|
||||
[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply)
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
|
||||
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
|
||||
[10]: https://linuxhandbook.com/linux-shutdown-command/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,99 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Unboxing the Raspberry Pi 4)
|
||||
[#]: via: (https://opensource.com/article/19/8/unboxing-raspberry-pi-4)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall)
|
||||
|
||||
Unboxing the Raspberry Pi 4
|
||||
======
|
||||
The Raspberry Pi 4 delivers impressive performance gains over its predecessors, and the Starter Kit makes it easy to get it up and running quickly.
|
||||
![Raspberry Pi 4 board, posterized filter][1]
|
||||
|
||||
When the Raspberry Pi 4 was [announced at the end of June][2], I wasted no time. I ordered two Raspberry Pi 4 Starter Kits the same day from [CanaKit][3]. The 1GB RAM version was available right away, but the 4GB version wouldn't ship until July 19th. Since I wanted to try both, I ordered them to be shipped together.
|
||||
|
||||
![CanaKit's Raspberry Pi 4 Starter Kit and official accessories][4]
|
||||
|
||||
Here's what I found when I unboxed my Raspberry Pi 4.
|
||||
|
||||
### Power supply
|
||||
|
||||
The Raspberry Pi 4 uses a USB-C connector for its power supply. Even though USB-C cables are very common now, your Pi 4 [may not like your USB-C cable][5] (at least with these first editions of the Raspberry Pi 4). So, unless you know exactly what you are doing, I recommend ordering the Starter Kit, which comes with an official Raspberry Pi charger. In case you would rather try whatever you have on hand, the device's input reads 100-240V ~ 50/60Hz 0.5A, and the output says 5.1V --- 3.0A.
|
||||
|
||||
![Raspberry Pi USB-C charger][6]
|
||||
|
||||
### Keyboard and mouse
|
||||
|
||||
The official keyboard and mouse are [sold separately][7] from the Starter Kit, and at $25 total, they aren't exactly cheap, given that you're paying only $35 to $55 for a proper computer. But the Raspberry Pi logo is printed on this keyboard (instead of the Windows logo), and there is something compelling about having a matching look. The keyboard is also a USB hub, so it allows you to plug in even more devices. I plugged in my [YubiKey][8] security key, and it works very nicely. I would classify the keyboard and mouse as a "nice to have" versus a "must-have." Your regular keyboard and mouse should work fine.

![Official Raspberry Pi keyboard \(with YubiKey plugged in\) and mouse.][9]

![Raspberry Pi logo on the keyboard][10]

### Micro-HDMI cable

Something that may have caught some folks by surprise is that, unlike the Raspberry Pi Zero, which comes with a Mini-HDMI port, the Raspberry Pi 4 comes with Micro-HDMI. They are not the same thing! So, even though you may have a suitable USB-C cable/power adapter, mouse, and keyboard on hand, there is a pretty good chance you will need a Micro-HDMI-to-HDMI cable (or an adapter) to plug your new Raspberry Pi into a display.

### The case

Cases for the Raspberry Pi have been around for years and are probably one of the very first "official" peripherals the Raspberry Pi Foundation sold. Some people like them; others don't. I think putting a Pi in a case makes it easier to carry around and helps avoid static electricity and bent pins.

On the other hand, keeping your Pi covered can overheat the board. This CanaKit Starter Kit also comes with a heatsink for the processor, which might help, as the newer Pis are already [known for running pretty hot][11].

![Raspberry Pi 4 case][12]

### Raspbian and NOOBS

The other item that comes with the Starter Kit is a microSD card with the correct version of the [NOOBS][13] operating system for the Raspberry Pi 4 pre-installed. (I got version 3.1.1, released June 24, 2019.) If you're using a Raspberry Pi for the first time and are not sure where to start, this could save you a lot of time. The microSD card in the Starter Kit is 32GB.

After you insert the microSD card and connect all the cables, just start up the Pi, boot into NOOBS, pick the Raspbian distribution, and wait while it installs.

![Raspberry Pi 4 with 4GB of RAM][14]

I noticed a couple of improvements while installing the latest Raspbian. (Forgive me if they've been around for a while; I haven't done a fresh install on a Pi since the 3 came out.) One is that Raspbian asks you to set up a password for your account at first boot after installation, and the other is that it runs a software update (assuming you have network connectivity). These are great improvements that help keep your Raspberry Pi a little more secure. I would love to see the option to encrypt the microSD card at installation ... maybe someday?

![Running Raspbian updates at first boot][15]

![Raspberry Pi 4 setup][16]

It runs very smoothly!
### Wrapping up

Although CanaKit isn't the only authorized Raspberry Pi retailer in the US, I found its Starter Kit to provide great value for the price.

So far, I am very impressed with the performance gains in the Raspberry Pi 4. I'm planning to try spending an entire workday using it as my only computer, and I'll write a follow-up article soon about how far I can go. Stay tuned!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/unboxing-raspberry-pi-4

Author: [Anderson Silva][a]

Topic selection: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/ansilva

[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi4_board_hardware.jpg?itok=KnFU7NvR (Raspberry Pi 4 board, posterized filter)
[2]: https://opensource.com/article/19/6/raspberry-pi-4
[3]: https://www.canakit.com/raspberry-pi-4-starter-kit.html
[4]: https://opensource.com/sites/default/files/uploads/raspberrypi4_canakit.jpg (CanaKit's Raspberry Pi 4 Starter Kit and official accessories)
[5]: https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/
[6]: https://opensource.com/sites/default/files/uploads/raspberrypi_usb-c_charger.jpg (Raspberry Pi USB-C charger)
[7]: https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476
[8]: https://www.yubico.com/products/yubikey-hardware/
[9]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardmouse.jpg (Official Raspberry Pi keyboard (with YubiKey plugged in) and mouse.)
[10]: https://opensource.com/sites/default/files/uploads/raspberrypi_keyboardlogo.jpg (Raspberry Pi logo on the keyboard)
[11]: https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/
[12]: https://opensource.com/sites/default/files/uploads/raspberrypi4_case.jpg (Raspberry Pi 4 case)
[13]: https://www.raspberrypi.org/downloads/noobs/
[14]: https://opensource.com/sites/default/files/uploads/raspberrypi4_ram.jpg (Raspberry Pi 4 with 4GB of RAM)
[15]: https://opensource.com/sites/default/files/uploads/raspberrypi4_rasbpianupdate.jpg (Running Raspbian updates at first boot)
[16]: https://opensource.com/sites/default/files/uploads/raspberrypi_setup.jpg (Raspberry Pi 4 setup)
@ -1,222 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Copying files in Linux)
[#]: via: (https://opensource.com/article/19/8/copying-files-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Copying files in Linux
======
Learn multiple ways to copy files on Linux, and the advantages of each.

![Filing papers and documents][1]

Copying documents used to require a dedicated staff member in offices, and then a dedicated machine. Today, copying is a task computer users do without a second thought. Copying data on a computer is so trivial that copies are made without you realizing it, such as when dragging a file to an external drive.

The concept that digital entities are trivial to reproduce is pervasive, so most modern computerists don't think about the options available for duplicating their work. And yet, there are several different ways to copy a file on Linux. Each method has nuanced features that might benefit you, depending on what you need to get done.

Here are a number of ways to copy files on Linux, BSD, and Mac.

### Copying in the GUI

As with most operating systems, you can do all of your file management in the GUI, if that's the way you prefer to work.

#### Drag and drop
The most obvious way to copy a file is the way you're probably used to copying files on computers: drag and drop. On most Linux desktops, dragging and dropping from one local folder to another local folder _moves_ a file by default. You can change this behavior to a copy operation by holding down the **Ctrl** key after you start dragging the file.

Your cursor may show an indicator, such as a plus sign, to show that you are in copy mode:

![Copying a file.][2]

Note that if the file exists on a remote system, whether it's a web server or another computer on your own network that you access through a file-sharing protocol, the default action is often to copy, not move, the file.

#### Right-click

If you find dragging and dropping files around your desktop imprecise or clumsy, or doing so takes your hands away from your keyboard too much, you can usually copy a file using the right-click menu. This possibility depends on the file manager you use, but generally, a right-click produces a contextual menu containing common actions.

The contextual menu's copy action stores the [file path][3] (where the file exists on your system) in your clipboard so you can then _paste_ the file somewhere else:

![Copying a file from the context menu.][4]

In this case, you're not actually copying the file's contents to your clipboard. Instead, you're copying the [file path][3]. When you paste, your file manager looks at the path in your clipboard and then runs a copy command, copying the file located at that path to the path you are pasting into.

### Copying on the command line

While the GUI is a generally familiar way to copy files, copying in a terminal can be more efficient.

#### cp

The obvious terminal-based equivalent to copying and pasting a file on the desktop is the **cp** command. This command copies files and directories and is relatively straightforward. It uses the familiar _source_ and _target_ (strictly in that order) syntax, so to copy a file called **example.txt** into your **Documents** directory:

```
$ cp example.txt ~/Documents
```

Just like when you drag and drop a file onto a folder, this action doesn't replace **Documents** with **example.txt**. Instead, **cp** detects that **Documents** is a folder, and places a copy of **example.txt** into it.

You can also, conveniently (and efficiently), rename the file as you copy it:

```
$ cp example.txt ~/Documents/example_copy.txt
```

That fact is important because it allows you to make a copy of a file in the same directory as the original:

```
$ cp example.txt example.txt
cp: 'example.txt' and 'example.txt' are the same file.
$ cp example.txt example_copy.txt
```

To copy a directory, you must use the **-r** option, which stands for **--recursive**. This option runs **cp** on the directory _inode_, and then on all files within the directory. Without the **-r** option, **cp** doesn't even recognize a directory as an object that can be copied:

```
$ cp notes/ notes-backup
cp: -r not specified; omitting directory 'notes/'
$ cp -r notes/ notes-backup
```
#### cat

The **cat** command is one of the most misunderstood commands, but only because it exemplifies the extreme flexibility of a [POSIX][5] system. Among everything else **cat** does (including its intended purpose of con_cat_enating files), it can also copy. For instance, with **cat** you can [create two copies from one file][6] with just a single command. You can't do that with **cp**.

The significance of using **cat** to copy a file is the way the system interprets the action. When you use **cp** to copy a file, the file's attributes are copied along with the file itself. That means that the file permissions of the duplicate are the same as the original:

```
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
$ cp foo.jpg bar.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```

Using **cat** to read the contents of a file into another file, however, invokes a system call to create a new file. These new files are subject to your default **umask** settings. To learn more about **umask**, read Alex Juarez's article covering [umask][7] and permissions in general.

Run **umask** to get the current settings:

```
$ umask
0002
```

This setting means that new files created in this location are granted **664** (**rw-rw-r--**) permission, because nothing is masked by the first digits of the **umask** setting (and the executable bit is not a default bit for file creation), while the write permission for others is blocked by the final digit.
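As a quick sketch of that arithmetic (assuming GNU coreutils for `stat -c`, so Linux rather than BSD or Mac), you can watch the umask shape a new file's mode:

```shell
# A new plain file starts from 666 (rw-rw-rw-); umask 0002 masks the
# write bit for "others", leaving 664 (rw-rw-r--).
old=$(umask)           # remember the current mask so we can restore it
umask 0002
dir=$(mktemp -d)
touch "$dir/demo.txt"  # plain file creation honors the umask
mode=$(stat -c '%a' "$dir/demo.txt")
umask "$old"
echo "$mode"           # prints 664
```

The directory name and file name here are throwaways; the point is only that `touch`, like any plain file creation, is subject to the mask.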
When you copy with **cat**, you don't actually copy the file. You use **cat** to read the contents of the file, and then redirect the output into a new file:

```
$ cat foo.jpg > baz.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-rw-r--. 1 57368 Jul 29 13:42 baz.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```

As you can see, **cat** created a brand new file with the system's default umask applied.

In the end, when all you want to do is copy a file, the technicalities often don't matter. But sometimes you want to copy a file and end up with a default set of permissions, and with **cat** you can do it all in one command.

#### rsync

The **rsync** command is a versatile tool for copying files, with the notable ability to synchronize your source and destination. At its simplest, **rsync** can be used similarly to the **cp** command:

```
$ rsync example.txt example_copy.txt
$ ls
example.txt example_copy.txt
```

The command's true power lies in its ability to _not_ copy when it's not necessary. If you use **rsync** to copy a file into a directory, but that file already exists in that directory, then **rsync** doesn't bother performing the copy operation. Locally, that fact doesn't necessarily mean much, but if you're copying gigabytes of data to a remote server, this feature makes a world of difference.

What does make a difference even locally, though, is the command's ability to differentiate files that share the same name but which contain different data. If you've ever found yourself faced with two copies of what is meant to be the same directory, then **rsync** can synchronize them into one directory containing the latest changes from each. This setup is a pretty common occurrence in industries that haven't yet discovered the magic of version control, and for backup solutions in which there is one source of truth to propagate.

You can emulate this situation intentionally by creating two folders, one called **example** and the other **example_dupe**:

```
$ mkdir example example_dupe
```

Create a file in the first folder:

```
$ echo "one" > example/foo.txt
```
Use **rsync** to synchronize the two directories. The most common options for this operation are **-a** (for _archive_, which ensures symlinks and other special files are preserved) and **-v** (for _verbose_, providing feedback to you on the command's progress):

```
$ rsync -av example/ example_dupe/
```

The directories now contain the same information:

```
$ cat example/foo.txt
one
$ cat example_dupe/foo.txt
one
```

If the file you are treating as the source diverges, then the target is updated to match:

```
$ echo "two" >> example/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```

Keep in mind that the **rsync** command is meant to copy data, not to act as a version control system. For instance, if a file in the destination somehow gets ahead of a file in the source, that file is still overwritten, because **rsync** compares files for divergence and assumes that the destination is always meant to mirror the source:

```
$ echo "You will never see this note again" > example_dupe/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```

If there is no change, then no copy occurs.

The **rsync** command has many options not available in **cp**, such as the ability to set target permissions, exclude files, delete outdated files that don't appear in both directories, and much more. Use **rsync** as a powerful replacement for **cp**, or just as a useful supplement.
### Many ways to copy

There are many ways to achieve essentially the same outcome on a POSIX system, so it seems that open source's reputation for flexibility is well earned. Have I missed a useful way to copy data? Share your copy hacks in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/copying-files-linux

Author: [Seth Kenlon][a]

Topic selection: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth

[b]: https://github.com/lujun9972

[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://opensource.com/sites/default/files/uploads/copy-nautilus.jpg (Copying a file.)
[3]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
[4]: https://opensource.com/sites/default/files/uploads/copy-files-menu.jpg (Copying a file from the context menu.)
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[6]: https://opensource.com/article/19/2/getting-started-cat-command
[7]: https://opensource.com/article/19/7/linux-permissions-101
@ -1,202 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to measure the health of an open source community)
[#]: via: (https://opensource.com/article/19/8/measure-project)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)

How to measure the health of an open source community
======
It's complicated.

![metrics and data shown on a computer screen][1]

As a person who normally manages software development teams, over the years I've come to care about metrics quite a bit. Time after time, I've found myself leading teams using one project platform or another (Jira, GitLab, and Rally, for example) that generated an awful lot of measurable data. From there, I've promptly invested significant amounts of time pulling useful metrics out of the platform-of-record into a format where we could make sense of them, and then using those metrics to make better choices about many aspects of development.

Earlier this year, I had the good fortune of coming across a project at the [Linux Foundation][2] called [Community Health Analytics for Open Source Software][3], or CHAOSS. This project focuses on collecting and enriching metrics from a wide range of sources so that stakeholders in open source communities can measure the health of their projects.

### What is CHAOSS?

As I grew familiar with the project's underlying metrics and objectives, one question kept turning over in my head. What is a "healthy" open source project, and by whose definition?

What's considered healthy by someone in a particular role may not be viewed that way by someone in another role. It seemed there was an opportunity to back out from the granular data that CHAOSS collects and do a market segmentation exercise, focusing on what might be the most meaningful contextual questions for a given role, and what metrics CHAOSS collects that might help answer those questions.

This exercise was made possible by the fact that the CHAOSS project creates and maintains a suite of open source applications and metric definitions, including:

  * A number of server-based applications for gathering, aggregating, and enriching metrics (such as Augur and GrimoireLab).
  * The open source versions of ElasticSearch, Kibana, and Logstash (ELK).
  * Identity services, data analysis services, and a wide range of integration libraries.

In one of my past programs, where half a dozen teams were working on projects of varying complexity, we found a neat tool that allowed us to create any kind of metric we wanted from a simple (or complex) JQL statement, and then develop calculations against and between those metrics. Before we knew it, we were pulling over 400 metrics from Jira alone, and more from manual sources.

By the end of the project, we decided that out of the 400-ish metrics, most of them didn't really matter when it came to making decisions _in our roles_. At the end of the day, there were only three that really mattered to us: "Defect Removal Efficiency," "Points completed vs. Points committed," and "Work-in-Progress per Developer." Those three metrics mattered most because they were promises we made to ourselves, to our clients, and to our team members, and were, therefore, the most meaningful.

Drawing from the lessons learned through that experience and from the question "What is a healthy open source project?", I jumped into the CHAOSS community and started building a set of personas to offer a constructive approach to answering that question from a role-based lens.

CHAOSS is an open source project and we try to operate using democratic consensus. So, I decided that instead of stakeholders, I'd use the word _constituent_, because it aligns better with the responsibility we have as open source contributors to create a more symbiotic value chain.

While the exercise of creating this constituent model takes a particular goal-question-metric approach, there are many ways to segment. CHAOSS contributors have developed great models that segment by vectors, like project profiles (for example, individual, corporate, or coalition) and "Tolerance to Failure." Every model provides constructive influence when developing metric definitions for CHAOSS.
Based on all of this, I set out to build a model of who might care about CHAOSS metrics, and what questions each constituent might care about most in each of CHAOSS' four focus areas:

  * [Diversity and Inclusion][4]
  * [Evolution][5]
  * [Risk][6]
  * [Value][7]

Before we dive in, it's important to note that the CHAOSS project expressly leaves contextual judgments to teams implementing the metrics. What's "meaningful" and the answer to "What is healthy?" is expected to vary by team and by project. The CHAOSS software's ready-made dashboards focus on objective metrics as much as possible. In this article, we focus on project founders, project maintainers, and contributors.

### Project constituents

While this is by no means an exhaustive list of questions these constituents might feel are important, these choices felt like a good place to start. Each of the Goal-Question-Metric segments below is directly tied to metrics that the CHAOSS project is collecting and aggregating.

Now, on to Part 1 of the analysis!

#### Project founders

As a **project founder**, I care **most** about:

  * Is my project **useful to others?** Measured as a function of:
    * How many forks over time?
      **Metric:** Repository forks.
    * How many contributors over time?
      **Metric:** Contributor count.
    * Net quality of contributions.
      **Metric:** Bugs filed over time.
      **Metric:** Regressions over time.
    * Financial health of my project.
      **Metric:** Donations/Revenue over time.
      **Metric:** Expenses over time.

  * How **visible** is my project to others?
    * Does anyone know about my project? Do others think it's neat?
      **Metric:** Social media mentions, shares, likes, and subscriptions.
    * Does anyone with influence know about my project?
      **Metric:** Social reach of contributors.
    * What are people saying about the project in public spaces? Is it positive or negative?
      **Metric:** Sentiment (keyword or NLP) analysis across social media channels.

  * How **viable** is my project?
    * Do we have enough maintainers? Is the number rising or falling over time?
      **Metric:** Number of maintainers.
    * Do we have enough contributors? Is the number rising or falling over time?
      **Metric:** Number of contributors.
    * How is velocity changing over time?
      **Metric:** Percent change of code over time.
      **Metric:** Time between pull request, code review, and merge.

  * How [**diverse & inclusive**][4] is my project?
    * Do we have a valid, public Code of Conduct (CoC)?
      **Metric:** CoC repository file check.
    * Are events associated with my project actively inclusive?
      **Metric:** Manual reporting on event ticketing policies and event inclusion activities.
    * Does our project do a good job of being accessible?
      **Metric:** Validation of typed meeting minutes being posted.
      **Metric:** Validation of closed captioning used during meetings.
      **Metric:** Validation of color-blind-accessible materials in presentations and in project front-end designs.

  * How much [**value**][7] does my project represent?
    * How can I help organizations understand how much time and money using our project would save them (labor investment)?
      **Metric:** Repo count of issues, commits, pull requests, and the estimated labor rate.
    * How can I understand the amount of downstream value my project creates and how vital (or not) it is to the wider community to maintain my project?
      **Metric:** Repo count of how many other projects rely on my project.
    * How much opportunity is there for those contributing to my project to use what they learn working on it to land good jobs, and at what organizations (aka living wage)?
      **Metric:** Count of organizations using or contributing to this library.
      **Metric:** Averages of salaries for developers working with this kind of project.
      **Metric:** Count of job postings with keywords that match this project.
### Project maintainers

As a **project maintainer,** I care **most** about:

  * Am I an **efficient** maintainer?
    **Metric:** Time PRs wait before a code review.
    **Metric:** Time between code review and subsequent PRs.
    **Metric:** How many of my code reviews are approvals?
    **Metric:** How many of my code reviews are rejections/rework requests?
    **Metric:** Sentiment analysis of code review comments.

  * How do I get **more people** to help me maintain this thing?
    **Metric:** Count of social reach of project contributors.

  * Is our **code quality** getting better or worse over time?
    **Metric:** Count of how many regressions are being introduced over time.
    **Metric:** Count of how many bugs are being introduced over time.
    **Metric:** Time between bug filing, pull request, review, merge, and release.

### Project developers and contributors

As a **project developer or contributor**, I care most about:
  * What things of value can I gain from contributing to this project, and how long might it take to realize that value?
    **Metric:** Downstream value.
    **Metric:** Time between commits, code reviews, and merges.

  * Are there good prospects for using what I learn by contributing to increase my job opportunities?
    **Metric:** Living wage.

  * How popular is this project?
    **Metric:** Counts of social media posts, shares, and favorites.

  * Do community influencers know about my project?
    **Metric:** Social reach of founders, maintainers, and contributors.

By creating this list, we've just begun to put meat on the contextual bones of CHAOSS, and with the first release of metrics in the project this summer, I can't wait to see what other great ideas the broader open source community may have to contribute and what else we can all learn (and measure!) from those contributions.

### Other roles

There is more to learn about goal-question-metric sets for other roles (such as foundations, corporate open source program offices, business risk and legal teams, human resources, and others), as well as end users, who have a distinctly different set of things they care about when it comes to open source.

If you're an open source contributor or constituent, we invite you to [come check out the project][8] and get engaged in the community!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/measure-project

Author: [Jon Lawrence][a]

Topic selection: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/the3rdlaw

[b]: https://github.com/lujun9972

[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://www.linuxfoundation.org/
[3]: https://chaoss.community/
[4]: https://github.com/chaoss/wg-diversity-inclusion
[5]: https://github.com/chaoss/wg-evolution
[6]: https://github.com/chaoss/wg-risk
[7]: https://github.com/chaoss/wg-value
[8]: https://github.com/chaoss/
@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Reinstall Ubuntu in Dual Boot or Single Boot Mode)
[#]: via: (https://itsfoss.com/reinstall-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Reinstall Ubuntu in Dual Boot or Single Boot Mode
======

If you have messed up your Ubuntu system and, after trying numerous ways to fix it, you finally give up, you take the easy way out: you reinstall Ubuntu.

We have all been in a situation where reinstalling Linux seems a better idea than trying to troubleshoot and fix the issue for good. Troubleshooting a Linux system teaches you a lot, but you cannot always afford to spend more time fixing a broken system.

There is no Windows-like recovery drive system in Ubuntu as far as I know. So, the question then arises: how do you reinstall Ubuntu? Let me show you how you can reinstall Ubuntu.

Warning!

Playing with disk partitions is always a risky task. I strongly recommend making a backup of your data on an external disk.

### How to reinstall Ubuntu Linux

![][1]

Here are the steps to follow for reinstalling Ubuntu.

#### Step 1: Create a live USB

First, download Ubuntu from its website. You can download [whichever Ubuntu version][2] you want to use.

[Download Ubuntu][3]

Once you have got the ISO image, it's time to create a live USB from it. If your Ubuntu system is still accessible, you can create a live disk using the startup disk creator tool provided by Ubuntu.

If you cannot access your Ubuntu system, you'll have to use another system. You can refer to this article to learn [how to create a live USB of Ubuntu in Windows][4].
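If the other machine you can reach is itself a Linux box, a common fallback is writing the ISO directly with `dd`. This is a hedged sketch: the ISO file name and `/dev/sdX` below are placeholders, and you must confirm the real device name with `lsblk` first, because `dd` overwrites its target irreversibly.

```shell
# Write an ISO image byte-for-byte onto a block device (or any file).
write_iso() {
    dd if="$1" of="$2" bs=4M conv=fsync status=none
    sync    # make sure everything is flushed before unplugging
}
# After double-checking the device name with lsblk, run as root:
# write_iso ubuntu-desktop-amd64.iso /dev/sdX
```

The same function works against an ordinary file, which is a safe way to try it out before pointing it at a USB stick.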
|
||||
|
||||
#### Step 2: Reinstall Ubuntu
|
||||
|
||||
Once you have got the live USB of Ubuntu, plugin the USB. Reboot your system. At boot time, press F2/10/F12 key to go into the BIOS settings and make sure that you have set Boot from Removable Devices/USB option at the top. Save and exit BIOS. This will allow you to boot into live USB.
|
||||
|
||||
Once you are in the live USB, choose to install Ubuntu. You’ll get the usual option for choosing your language and keyboard layout. You’ll also get the option to download updates etc.
|
||||
|
||||
![Go ahead with regular installation option][5]
|
||||
|
||||
The important steps comes now. You should see an “Installation Type” screen. What you see on your screen here depends heavily on how Ubuntu sees the disk partitioning and installed operating systems on your system.
|
||||
|
||||
[][6]
|
||||
|
||||
Suggested read How to Update Ubuntu Linux [Beginner's Tip]
|
||||
|
||||
Be very careful in reading the options and its details at this step. Pay attention to what each options says. The screen options may look different in different systems.
|
||||
|
||||
![Reinstall Ubuntu option in dual boot mode][7]
|
||||
|
||||
In my case, the installer finds that I have Ubuntu 18.04.2 and Windows installed on my system, and it gives me a few options.

The first option here is to erase Ubuntu 18.04.2 and reinstall it. It tells me that it will delete my personal data, but it says nothing about deleting the other operating system (i.e. Windows).

If you are super lucky, or in single boot mode, you may see a “Reinstall Ubuntu” option. This option keeps your existing data and even tries to keep the installed software. If you see this option, you should go for it.

Attention for Dual Boot Systems

If you are dual booting Ubuntu and Windows, and during reinstall your Ubuntu system doesn’t see Windows, you must go for the “Something else” option and install Ubuntu from there. I have described the [process of reinstalling Linux in dual boot in this tutorial][8].

For me, there was no “reinstall and keep the data” option, so I went for the “Erase Ubuntu and reinstall” option. This will install Ubuntu afresh even if it is in dual boot mode with Windows.

The possibility of reinstalling is why I recommend using separate partitions for root and home. With that, you can keep the data in your home partition safe even if you reinstall Linux. I have already demonstrated it in this video:
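Even with a separate home partition, archiving your data before a reinstall is cheap insurance. A minimal sketch, using a stand-in source directory and a stand-in destination (both paths are made-up; point them at your real home directory and an external drive):

```shell
# Stand-ins for this sketch; in real use SRC would be "$HOME" and
# DEST a mounted external drive such as /media/backup.
SRC=/tmp/demo-home
DEST=/tmp/backup-demo
mkdir -p "$SRC" "$DEST"
printf 'settings' > "$SRC/.bashrc-demo"   # hidden dot-files are included too

# tar preserves permissions and hidden configuration files alike.
tar czf "$DEST/home-backup.tar.gz" -C "$SRC" .

# List the archive contents to confirm the dot-file made it in.
tar tzf "$DEST/home-backup.tar.gz"
```

Restoring later is the reverse: `tar xzf home-backup.tar.gz -C "$HOME"`.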
Once you have chosen the reinstall Ubuntu option, the rest of the process is just clicking next. Select your location and when asked, create your user account.
![Just go on with the installation options][9]

Once the procedure finishes, you’ll have Ubuntu reinstalled afresh.

In this tutorial, I have assumed that you know your way around, because you already had Ubuntu installed before. If you need clarification at any step, please feel free to ask in the comment section.
--------------------------------------------------------------------------------

via: https://itsfoss.com/reinstall-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/Reinstall-Ubuntu.png?resize=800%2C450&ssl=1
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://ubuntu.com/download/desktop
[4]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-1.jpg?resize=800%2C473&ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-dual-boot.jpg?ssl=1
[8]: https://itsfoss.com/replace-linux-from-dual-boot/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-3.jpg?ssl=1
[10]: https://itsfoss.com/fix-no-wireless-network-ubuntu/
130
sources/tech/20190815 12 extensions for your GNOME desktop.md
Normal file
@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 extensions for your GNOME desktop)
[#]: via: (https://opensource.com/article/19/8/extensions-gnome-desktop)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
12 extensions for your GNOME desktop
======
Add functionality and features to your Linux desktop with these add-ons.

![A person working.][1]

The GNOME desktop is the default graphical user interface for most of the popular Linux distributions and some of the BSD and Solaris operating systems. Currently at version 3, GNOME provides a sleek user experience, and extensions are available for additional functionality.

We've covered [GNOME extensions][2] at Opensource.com before, but to celebrate GNOME's 22nd anniversary, I decided to revisit the topic. Some of these extensions may already be installed, depending on your Linux distribution; if not, check your package manager.

### How to add extensions from the package manager

To install extensions that aren't in your distro, open the package manager and click **Add-ons**. Then click **Shell Extensions** at the top-right of the Add-ons screen, and you will see a button for **Extension Settings** and a list of available extensions.

![Package Manager Add-ons Extensions view][3]

Use the Extension Settings button to enable, disable, or configure the extensions you have installed.

Now that you know how to add and enable extensions, here are some good ones to try.
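Under the hood, a GNOME Shell extension is just a directory under `~/.local/share/gnome-shell/extensions/` named after the extension's UUID, containing at least a `metadata.json` (a working extension also ships an `extension.js`). A minimal sketch of that layout, with a made-up UUID:

```shell
# Hypothetical UUID; real extensions use <name>@<author-domain>.
UUID="hello@example.com"
EXT_DIR="$HOME/.local/share/gnome-shell/extensions/$UUID"
mkdir -p "$EXT_DIR"

# metadata.json declares the UUID, a name, and supported Shell versions.
cat > "$EXT_DIR/metadata.json" <<EOF
{
  "uuid": "$UUID",
  "name": "Hello",
  "description": "Minimal example extension skeleton",
  "shell-version": ["3.32"]
}
EOF

# The skeleton now exists on disk.
ls "$EXT_DIR"
```

Knowing this layout is handy for debugging: if an extension misbehaves, you can inspect or remove its directory by hand.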
## 1\. GNOME Clocks

[GNOME Clocks][4] is an application that includes a world clock, alarm, stopwatch, and timer. You can configure clocks for different geographic locations. For example, if you regularly work with colleagues in another time zone, you can set up a clock for their location. You can access the World Clocks section in the top panel's drop-down menu by clicking the system clock. It shows your configured world clocks (not including your local time), so you can quickly check the time in other parts of the world.

## 2\. GNOME Weather

[GNOME Weather][5] displays the weather conditions and forecast for your current location. You can access local weather conditions from the top panel's drop-down menu. You can also check the weather in other geographic locations using Weather's Places menu.

GNOME Clocks and Weather are small applications that have extension-like functionality. Both are installed by default on Fedora 30 (which is what I'm using). If you're using another distribution and don't see them, check the package manager.

You can see both extensions in action in the image below.

![Clocks and Weather shown in the drop-down][6]
## 3\. Applications Menu

I think the GNOME 3 interface is perfectly enjoyable in its stock form, but you may prefer a traditional application menu. On Fedora 30, the [Applications Menu][7] extension was installed by default but not enabled. To enable it, click the Extensions Settings button in the Add-ons section of the package manager and enable the Applications Menu extension.

![Extension Settings][8]

Now you can see the Applications Menu in the top-left corner of the top panel.

![Applications Menu][9]

## 4\. More columns in applications view

The Applications view is set by default to six columns of icons, probably because GNOME needs to accommodate a wide array of displays. If you're using a wide-screen display, you can use the [More columns in applications menu][10] extension to increase the columns. I find that setting it to eight makes better use of my screen by eliminating the empty columns on either side of the icons when I launch the Applications view.
## Add system info to the top panel

The next three extensions provide basic system information to the top panel.

  * 5\. [Harddisk LED][11] shows a small hard drive icon with input/output (I/O) activity.
  * 6\. [Load Average][12] indicates Linux load averages taken over three time intervals.
  * 7\. [Uptime Indicator][13] shows system uptime; when it's clicked, it shows the date and time the system was started.

## 8\. Sound Input and Output Device Chooser

Your system may have more than one audio device for input and output. For example, my laptop has internal speakers and sometimes I use a wireless Bluetooth speaker. The [Sound Input and Output Device Chooser][14] extension adds a list of your sound devices to the System Menu so you can quickly select which one you want to use.

## 9\. Drop Down Terminal

Fellow Opensource.com writer [Scott Nesbitt][15] recommended the next two extensions. The first, [Drop Down Terminal][16], enables a terminal window to drop down from the top panel by pressing a certain key; the default is the key above Tab; on my keyboard, that's the tilde (~) character. Drop Down Terminal has a settings menu for customizing transparency, height, the activation keystroke, and other configurations.

## 10\. Todo.txt

[Todo.txt][17] adds a menu to the top panel for maintaining a file for Todo.txt task tracking. You can add or delete a task from the menu or mark it as completed.

![Drop-down menu for Todo.txt][18]

## 11\. Removable Drive Menu

Opensource.com editor [Seth Kenlon][19] suggested [Removable Drive Menu][20]. It provides a drop-down menu for managing removable media, such as USB thumb drives. From the extension's menu, you can access a drive's files and eject it. The menu only appears when removable media is inserted.

![Removable Drive Menu][21]

## 12\. GNOME Internet Radio

I enjoy listening to internet radio streams with the [GNOME Internet Radio][22] extension, which I wrote about in [How to Stream Music with GNOME Internet Radio][23].

* * *

What are your favorite GNOME extensions? Please share them in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/extensions-gnome-desktop

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://opensource.com/article/17/2/top-gnome-shell-extensions
[3]: https://opensource.com/sites/default/files/uploads/add-onsextensions_6.png (Package Manager Add-ons Extensions view)
[4]: https://wiki.gnome.org/Apps/Clocks
[5]: https://wiki.gnome.org/Apps/Weather
[6]: https://opensource.com/sites/default/files/uploads/clocksweatherdropdown_6.png (Clocks and Weather shown in the drop-down)
[7]: https://extensions.gnome.org/extension/6/applications-menu/
[8]: https://opensource.com/sites/default/files/uploads/add-onsextensionsettings_6.png (Extension Settings)
[9]: https://opensource.com/sites/default/files/uploads/applicationsmenuextension_5.png (Applications Menu)
[10]: https://extensions.gnome.org/extension/1305/more-columns-in-applications-view/
[11]: https://extensions.gnome.org/extension/988/harddisk-led/
[12]: https://extensions.gnome.org/extension/1381/load-average/
[13]: https://extensions.gnome.org/extension/508/uptime-indicator/
[14]: https://extensions.gnome.org/extension/906/sound-output-device-chooser/
[15]: https://opensource.com/users/scottnesbitt
[16]: https://extensions.gnome.org/extension/442/drop-down-terminal/
[17]: https://extensions.gnome.org/extension/570/todotxt/
[18]: https://opensource.com/sites/default/files/uploads/todo.txtmenu_3.png (Drop-down menu for Todo.txt)
[19]: https://opensource.com/users/seth
[20]: https://extensions.gnome.org/extension/7/removable-drive-menu/
[21]: https://opensource.com/sites/default/files/uploads/removabledrivemenu_3.png (Removable Drive Menu)
[22]: https://extensions.gnome.org/extension/836/internet-radio/
[23]: https://opensource.com/article/19/6/gnome-internet-radio
@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fix ‘E: The package cache file is corrupted, it has the wrong hash’ Error In Ubuntu)
[#]: via: (https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Fix ‘E: The package cache file is corrupted, it has the wrong hash’ Error In Ubuntu
======
Today, I tried to update the repository lists on my Ubuntu 18.04 LTS desktop and got an error that says: **E: The package cache file is corrupted, it has the wrong hash**. Here is what I ran from the Terminal and its output:

```
$ sudo apt update
```

Sample output:
```
Hit:1 http://it-mirrors.evowise.com/ubuntu bionic InRelease
Hit:2 http://it-mirrors.evowise.com/ubuntu bionic-updates InRelease
Hit:3 http://it-mirrors.evowise.com/ubuntu bionic-backports InRelease
Hit:4 http://it-mirrors.evowise.com/ubuntu bionic-security InRelease
Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/leaeasy/dde/ubuntu bionic InRelease
Hit:8 http://ppa.launchpad.net/rvm/smplayer/ubuntu bionic InRelease
Ign:6 https://dl.bintray.com/etcher/debian stable InRelease
Get:9 https://dl.bintray.com/etcher/debian stable Release [3,674 B]
Fetched 3,674 B in 3s (1,196 B/s)
Reading package lists... Done
E: The package cache file is corrupted, it has the wrong hash
```

![][2]

“The package cache file is corrupted, it has the wrong hash” Error In Ubuntu
After a couple of Google searches, I found a workaround to fix this error.

If you ever encounter this error, don’t panic. Just run the following command to fix it.

Before running the following command, **double check that you have added the “*” at the end**. It is very important to add the `*` at the end of this command. If you don’t add it, it will delete the **/var/lib/apt/lists/** directory itself and there is no way to bring it back. You have been warned!

```
$ sudo rm -rf /var/lib/apt/lists/*
```
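The trailing `*` matters because the shell expands it to the directory's contents, so the directory itself survives. You can convince yourself of this safely with a throwaway directory (the path below is a made-up stand-in for /var/lib/apt/lists):

```shell
# Recreate the situation in a scratch directory so nothing real is at risk.
mkdir -p /tmp/lists-demo/partial
touch /tmp/lists-demo/a /tmp/lists-demo/b

# With the star, the glob expands to the entries inside the directory,
# so only the contents are removed and the directory itself stays.
rm -rf /tmp/lists-demo/*

ls -A /tmp/lists-demo            # prints nothing; the directory is now empty
[ -d /tmp/lists-demo ] && echo "directory preserved"
```

Without the star, `rm -rf /tmp/lists-demo` would remove the directory itself, which is exactly the mistake the warning above is about.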

Now I tried again to update the system using the command:

```
$ sudo apt update
```

![][3]

This time it worked! Hope this helps.
* * *

**Suggested read:**

  * [**How To Fix Broken Ubuntu OS Without Reinstalling It**][4]
  * [**How to fix “Package operation failed” Error in Ubuntu**][5]
  * [**Fix “dpkg: error: parsing file ‘/var/lib/dpkg/updates/0014′” Error In Ubuntu**][6]
  * [**How To Fix “Kernel driver not installed (rc=-1908)” VirtualBox Error In Ubuntu**][7]
  * [**How to fix ‘Failed to install the Extension pack’ error in Ubuntu**][8]

* * *
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-package-cache-file-is-corrupted.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/apt-update-command-output-in-Ubuntu.png
[4]: https://www.ostechnix.com/how-to-fix-broken-ubuntu-os-without-reinstalling-it/
[5]: https://www.ostechnix.com/how-to-fix-package-operation-failed-error-in-ubuntu/
[6]: https://www.ostechnix.com/fix-dpkg-error-parsing-file-var-lib-dpkg-updates-0014-error-in-ubuntu/
[7]: https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/
[8]: https://www.ostechnix.com/some-softwares-are-not-working-after-uninstall-moksha-desktop-from-ubuntu-14-04-lts/
@ -1,143 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Change Linux Console Font Type And Size)
[#]: via: (https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Change Linux Console Font Type And Size
======
It is quite easy to change the text font type and its size if you have a graphical desktop environment. But how would you do that on a headless Ubuntu server that doesn’t have a graphical environment? No worries! This brief guide describes how to change the Linux console font and size. This can be useful for those who don’t like the default font type/size or who prefer different fonts in general.

### Change Linux Console Font Type And Size

Just in case you don’t know yet, this is what a headless Ubuntu Linux server console looks like.

![][2]

Ubuntu Linux console

As far as I know, we can [**list the installed fonts**][3], but there is no option to change the font type or its size from the Linux console as we do in terminal emulators on a GUI desktop.

But that doesn’t mean we can’t change it. We can still change the console fonts.

If you’re using Debian, Ubuntu or other DEB-based systems, you can use the **“console-setup”** configuration file for **setupcon**, which is used to configure the font and keyboard layout for the console. The standard location of the console-setup configuration file is **/etc/default/console-setup**.

Now, run the following command to set up the font for your Linux console.

```
$ sudo dpkg-reconfigure console-setup
```
Choose the encoding to use on your Linux console. Just leave the default value, choose OK and hit ENTER to continue.

![][4]

Choose encoding to set on the console in Ubuntu

Next, choose the character set that should be supported by the console font from the list. By default, it was the last option, i.e. **Guess optimal character set**, on my system. Just leave it as the default and hit the ENTER key.

![][5]

Choose character set in Ubuntu

Next, choose the font for your console and hit the ENTER key. Here, I am choosing “TerminusBold”.

![][6]

Choose font for your Linux console

In this step, we choose the desired font size for our Linux console.

![][7]

Choose font size for your Linux console

After a few seconds, the selected font and size will be applied to your Linux console.
This is how the console font looked on my Ubuntu 18.04 LTS server before changing the font type and size.

![][8]

This is after changing the font type and size.

![][9]

As you can see, the text size is much bigger and better, and the font type is different from the default one.

You can also directly edit the **/etc/default/console-setup** file and set the font type and size as you wish. As per the following example, my Linux console font type is “TerminusBold” and the font size is 16x32.

```
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="TerminusBold"
FONTSIZE="16x32"
```

* * *
**Suggested read:**

  * [**How To Switch Between TTYs Without Using Function Keys In Linux**][10]

* * *
##### Display Console Fonts

To show your console font, simply type:

```
$ showconsolefont
```

This command will show a table of the glyphs or letters of the font.

![][11]

Show console fonts

If your Linux distribution does not have “console-setup”, you can get it from [**here**][12].

On Linux distributions that use **systemd**, you can change the console font by editing the **“/etc/vconsole.conf”** file.

Here is an example configuration for a German keyboard.

```
$ vi /etc/vconsole.conf

KEYMAP=de-latin1
FONT=Lat2-Terminus16
```

Hope you find this useful.
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-console.png
[3]: https://www.ostechnix.com/find-installed-fonts-commandline-linux/
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-encoding-to-set-on-the-console.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-character-set-in-Ubuntu.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-for-Linux-console.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-font-size-for-Linux-console.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Linux-console-tty-ubuntu-1.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/Ubuntu-Linux-TTY-console.png
[10]: https://www.ostechnix.com/how-to-switch-between-ttys-without-using-function-keys-in-linux/
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/show-console-fonts.png
[12]: https://software.opensuse.org/package/console-setup
@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?)
[#]: via: (https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Set up Automatic Security Update (Unattended Upgrades) on Debian/Ubuntu?
======
One of the important tasks for Linux admins is to keep the system up-to-date.

It keeps your system more stable and helps you avoid unwanted access and attacks.

Installing a package in Linux is a piece of cake.

In a similar way, we can apply security patches as well.

This is a simple tutorial that will show you how to configure your system to receive automatic security updates.

There are some security risks involved when you run automatic security package upgrades without inspection, but there are also benefits.

If you don’t want to miss security patches and would like to stay up-to-date with the latest security patches, then you should set up automatic security updates with the help of the unattended-upgrades utility.

You can **[manually install security updates on Debian & Ubuntu systems][1]** if you don’t want to go for automatic security updates.

There are many ways that we can automate this. However, we are going with the official method, and later we will cover other ways too.
### How to Install the unattended-upgrades Package on Debian/Ubuntu?

By default, the unattended-upgrades package should be installed on your system. But in case it’s not installed, use the following command to install it.

Use the **[APT-GET Command][2]** or **[APT Command][3]** to install the unattended-upgrades package.

```
$ sudo apt-get install unattended-upgrades
```

The two files below allow you to customize this utility.

```
/etc/apt/apt.conf.d/50unattended-upgrades
/etc/apt/apt.conf.d/20auto-upgrades
```
### Make the Necessary Changes in the 50unattended-upgrades File

By default, only the minimal required options are enabled for security updates. It’s not limited to that, and you can configure many options in this file to make the utility more useful.

I have trimmed the file and included only the enabled lines for better clarity.

```
# vi /etc/apt/apt.conf.d/50unattended-upgrades

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
        "${distro_id}ESM:${distro_codename}";
};
Unattended-Upgrade::DevRelease "false";
```

Three origins are enabled, and the details are below.

  * **`${distro_id}:${distro_codename}:`** It is necessary because security updates may pull in new dependencies from non-security sources.
  * **`${distro_id}:${distro_codename}-security:`** It is used to get security updates from the security sources.
  * **`${distro_id}ESM:${distro_codename}:`** It is used to get security updates for ESM (Extended Security Maintenance) users.
**Enable Email Notifications:** If you would like to receive email notifications after every security update, then modify the following line (uncomment it and add your email ID).

From:

```
//Unattended-Upgrade::Mail "root";
```

To:

```
Unattended-Upgrade::Mail "[email protected]";
```

**Auto Remove Unused Dependencies:** You may need to run the “sudo apt autoremove” command after every update to remove unused dependencies from the system.

We can automate this task by changing the following line (uncomment it and change “false” to “true”).

From:

```
//Unattended-Upgrade::Remove-Unused-Dependencies "false";
```

To:

```
Unattended-Upgrade::Remove-Unused-Dependencies "true";
```

**Enable Automatic Reboot:** You may need to reboot your system when a security update is installed for the kernel. To do so, change the following line.

From:

```
//Unattended-Upgrade::Automatic-Reboot "false";
```

To: Uncomment it and change “false” to “true” to enable automatic reboot.

```
Unattended-Upgrade::Automatic-Reboot "true";
```

**Enable Automatic Reboot at a Specific Time:** If automatic reboot is enabled and you would like to perform the reboot at a specific time, then make the following changes.

From:

```
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```

To: Uncomment it and change the time as per your requirement. I set it to reboot at 5 AM.

```
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
```
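Putting the four changes above together, the customized lines of a 50unattended-upgrades file would look something like this (a sketch only; the email address is a placeholder you must replace with your own):

```
Unattended-Upgrade::Mail "admin@example.com";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
```

Each line is independent, so you can enable only the behaviors you want and leave the rest commented out.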
### How to Enable Automatic Security Updates?

Now we have configured the necessary options.

Open the following file and verify that both values are set correctly. They should not be zeros (1=enabled, 0=disabled).

```
# vi /etc/apt/apt.conf.d/20auto-upgrades

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

**Details:**

  * The first line makes apt perform “apt-get update” automatically every day.
  * The second line makes apt install security updates automatically every day.
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/manually-install-security-updates-ubuntu-debian/
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
@ -1,184 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Setup Multilingual Input Method On Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

How To Setup Multilingual Input Method On Ubuntu
======
For those who don’t know, there are hundreds of spoken languages in India, and 22 languages are listed as official languages in the Indian constitution. I am not a native English speaker, so I often use **Google Translate** if I ever need to type and/or translate something from English to my native language, which is Tamil. Well, I guess I don’t need to rely on Google Translate anymore. I just found a way to type in Indian languages on Ubuntu. This guide explains how to set up a multilingual input method. It has been written for Ubuntu 18.04 LTS, however it might work on other Ubuntu variants like Linux Mint and elementary OS.

### Setup Multilingual Input Method On Ubuntu Linux

With the help of **IBus**, we can easily set up a multilingual input method on Ubuntu and its derivatives. IBus, which stands for **I**ntelligent Input **Bus**, is an input method framework for multilingual input in Unix-like operating systems. It allows us to type in our native language in most GUI applications, for example LibreOffice.

##### Install IBus On Ubuntu

To install the IBus m17n package on Ubuntu, run:

```
$ sudo apt install ibus-m17n
```

The ibus-m17n package provides a lot of Indian and other countries’ languages, including Amharic, Arabic, Armenian, Assamese, Athapascan languages, Belarusian, Bengali, Burmese, Central Khmer, Chamic languages, Chinese, Cree, Croatian, Czech, Danish, Divehi (Dhivehi, Maldivian), Esperanto, French, Georgian, ancient and modern Greek, Gujarati, Hebrew, Hindi, Inuktitut, Japanese, Kannada, Kashmiri, Kazakh, Korean, Lao, Malayalam, Marathi, Nepali, Ojibwa, Oriya, Panjabi (Punjabi), Persian, Pushto (Pashto), Russian, Sanskrit, Serbian, Sichuan Yi (Nuosu), Siksika, Sindhi, Sinhala (Sinhalese), Slovak, Swedish, Tai languages, Tamil, Telugu, Thai, Tibetan, Uighur (Uyghur), Urdu, Uzbek, Vietnamese, as well as Yiddish.
##### Add input languages

We can add languages in the System **Settings** section. Click the drop-down arrow in the top right corner of your Ubuntu desktop and choose the Settings icon in the bottom left corner.

![][2]

Launch the System settings from the top panel

From the Settings section, click on the **Region & Language** option in the left pane. Then click the **+** (plus) sign button on the right side under the **Input Sources** tab.

![][3]

Region & Language section in Settings

In the next window, click on the **three vertical dots** button.

![][4]

Add input source in Ubuntu

Search for and choose the input language you’d like to add from the list.

![][5]

Add input language

For the purpose of this guide, I am going to add the **Tamil** language. After choosing the language, click the **Add** button.

![][6]

Add Input Source

Now you will see that the selected input source has been added. You will see it in the Region & Language section under the Input Sources tab.

![][7]

Input sources section in Ubuntu

Click the “Manage Installed Languages” button under the Input Sources tab.

![][8]

Manage Installed Languages In Ubuntu

Next, you will be asked whether you want to install translation packs for the chosen language. You can install them if you want, or simply choose the “Remind Me Later” button. You will be notified the next time you open this.

![][9]

The language support is not installed completely

Once the translation packs are installed, click the **Install / Remove Languages** button. Also make sure IBus is selected as the Keyboard input method system.

![][10]

Install / Remove Languages In Ubuntu

Choose your desired language from the list and click the Apply button.
|
||||
|
||||
![][11]
|
||||
|
||||
Choose input language
|
||||
|
||||
That’s it. That’s we have successfully setup multilingual input method on Ubuntu 18.04 desktop. Similarly, add as many as input languages you want.
After adding all language sources, log out and log back in.

##### Type In Indian languages and/or your preferred languages

Once you have added all languages, you can choose them from the drop-down on the top bar of your Ubuntu desktop.

![][12]

Choose input language from top bar in Ubuntu desktop

Alternatively, you can use the **SUPER+SPACE** keys on the keyboard to switch between input languages.

![][13]

Choose input language using Super+Space keys in Ubuntu

Open any GUI text editor/app and start typing!

![][14]

Type in Indian languages in Ubuntu
##### Add IBus to startup applications

We need IBus to start automatically on every reboot, so that you don't have to start it manually whenever you want to type in your preferred language.

To do so, simply type "startup applications" in the dash and click on the Startup Applications option.

![][15]

Launch startup applications in Ubuntu

In the next window, click Add, type "IBus" in the Name field and "ibus-daemon" in the Command field, and then click the Add button.

![][16]

Add Ibus to startup applications on Ubuntu

From now on, IBus will start automatically on system startup.
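The same result can be achieved from the terminal by dropping a desktop entry into the standard XDG autostart directory. This is a sketch; the `-drx` flags (daemonize, replace any running daemon, enable the XIM server) are a common ibus-daemon invocation, so check `ibus-daemon --help` on your system before relying on them:

```shell
# Create an XDG autostart entry for ibus-daemon,
# equivalent to the GUI "Startup Applications" steps above.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/ibus.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=IBus
Exec=ibus-daemon -drx
EOF
```

The desktop environment reads every `.desktop` file in `~/.config/autostart` at login, so IBus will be launched on your next session.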
* * *

**Suggested read:**

  * [**How To Use Google Translate From Commandline In Linux**][17]
  * [**How To Type Indian Rupee Sign (₹) In Linux**][18]
  * [**How To Setup Japanese Language Environment In Arch Linux**][19]

* * *

So, it is your turn now. Which application/tool are you using to type in local Indian languages? Let us know in the comment section below.

**Reference:**

  * [**IBus – Ubuntu Community Wiki**][20]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: https://www.ostechnix.com/wp-content/uploads/2019/07/Ubuntu-system-settings.png
[3]: https://www.ostechnix.com/wp-content/uploads/2019/08/Region-language-in-Settings-ubuntu.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-source-in-Ubuntu.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-input-language.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Input-Source-Ubuntu.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/08/Input-sources-Ubuntu.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/08/Manage-Installed-Languages.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/08/The-language-support-is-not-installed-completely.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/08/Install-Remove-languages.png
[11]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-language.png
[12]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-from-top-bar-in-Ubuntu.png
[13]: https://www.ostechnix.com/wp-content/uploads/2019/08/Choose-input-language-using-SuperSpace-keys.png
[14]: https://www.ostechnix.com/wp-content/uploads/2019/08/Setup-Multilingual-Input-Method-On-Ubuntu.png
[15]: https://www.ostechnix.com/wp-content/uploads/2019/08/Launch-startup-applications-in-ubuntu.png
[16]: https://www.ostechnix.com/wp-content/uploads/2019/08/Add-Ibus-to-startup-applications-on-Ubuntu.png
[17]: https://www.ostechnix.com/use-google-translate-commandline-linux/
[18]: https://www.ostechnix.com/type-indian-rupee-sign-%e2%82%b9-linux/
[19]: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
[20]: https://help.ubuntu.com/community/ibus
@@ -0,0 +1,282 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a vanity Tor .onion web address)
[#]: via: (https://opensource.com/article/19/8/how-create-vanity-tor-onion-address)
[#]: author: (Kc Nwaezuoke https://opensource.com/users/areahintshttps://opensource.com/users/sethhttps://opensource.com/users/bexelbiehttps://opensource.com/users/bcotton)

How to create a vanity Tor .onion web address
======
Generate a vanity .onion website to protect your anonymity—and your visitors' privacy, too.

![Password security with a mask][1]

[Tor][2] is a powerful, open source network that enables anonymous and non-trackable (or difficult-to-track) browsing of the internet. It's able to achieve this because of users running Tor nodes, which serve as intentional detours between two otherwise direct paths. For instance, if you are in New Zealand and visit python.nz, instead of being routed next door to the data center running python.nz, your traffic might be routed to Pittsburgh and then Berlin and then Vanuatu and finally to python.nz. The Tor network, being built upon opt-in participant nodes, has an ever-changing structure. Only within this dynamic network space can there exist an exciting, transient top-level domain identifier: the .onion address.
If you own or are looking to create a website, you can generate a vanity .onion site to protect your and your visitors' anonymity.

### What are onion addresses?

Because Tor is dynamic and intentionally re-routes traffic in unpredictable ways, an onion address makes both the information provider (you) and the person accessing the information (your traffic) difficult to trace by one another, by intermediate network hosts, or by an outsider. Generally, an onion address is unattractive, with 16-character names like 8zd335ae47dp89pd.onion. They are not memorable, and they are difficult to identify when spoofed, but a series of projects culminating in Shallot (forked as eschalot) provide "vanity" onion addresses to solve those issues.

Creating a vanity onion URL on your own is possible but computationally expensive. Getting the exact 16 characters you want could take a single computer billions of years to achieve.
Here's a rough example (courtesy of [Shallot][3]) of how much time it takes to generate certain lengths of characters on a 1.5GHz processor:

Characters | Time
---|---
1 | Less than 1 second
2 | Less than 1 second
3 | Less than 1 second
4 | 2 seconds
5 | 1 minute
6 | 30 minutes
7 | 1 day
8 | 25 days
9 | 2.5 years
10 | 40 years
11 | 640 years
12 | 10 millennia
13 | 160 millennia
14 | 2.6 million years

I love how this table goes from 25 days to 2.5 years. If you wanted to generate 56 characters, it would take 10^78 years.
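The table's explosive growth follows directly from the base32 alphabet: each extra character multiplies the search space by 32, so an n-character prefix needs roughly 32^n tries on average. Here is a back-of-the-envelope sketch; the 100,000 tries-per-second rate is purely an illustrative assumption, not a measured figure:

```shell
# Rough brute-force cost estimate for a v2 onion prefix of n base32 characters.
# 32^n expected tries, divided by an ASSUMED rate of 100,000 tries/second.
awk 'BEGIN {
  rate = 100000                                  # assumed tries per second
  for (n = 4; n <= 8; n++) {
    tries = 32 ^ n
    days  = tries / rate / 86400                 # 86400 seconds per day
    printf "%d chars: %.0f tries, ~%.3f days\n", n, tries, days
  }
}'
```

With these assumptions, a 7-character prefix lands around a few days of searching, which is the same order of magnitude as the table above.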
An onion address with 16 characters is referred to as a version 2 onion address, and one with 56 characters is a version 3 onion address. If you're using the Tor browser, you can check out this [v2 address][4] or this [v3 address][5].
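Since both versions use the base32 alphabet (letters a-z and digits 2-7) and differ only in label length, a quick shell check can tell them apart. This is a small sketch; the two sample addresses are the ones linked above:

```shell
# Classify an onion address as v2 (16-char label) or v3 (56-char label).
onion_version() {
  name="${1%.onion}"                                   # strip the .onion suffix
  if printf '%s\n' "$name" | grep -Eq '^[a-z2-7]{16}$'; then
    echo v2
  elif printf '%s\n' "$name" | grep -Eq '^[a-z2-7]{56}$'; then
    echo v3
  else
    echo unknown
  fi
}

onion_version 6zdgh5a5e6zpchdz.onion
onion_version vww6ybal4bd7szmgncyruucpgfkqahzddi37ktceo3ah7ngmcopnpyyd.onion
```

The first call reports v2 and the second v3, which matches the lengths of the sample addresses.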
A v3 address has several advantages over v2:

  * Better crypto (v3 replaced SHA1/DH/RSA1024 with SHA3/ed25519/curve25519)
  * Improved directory protocol that leaks much less information to directory servers
  * Improved directory protocol with a smaller surface for targeted attacks
  * Better onion address security against impersonation

However, the downside (supposedly) of v3 is the marketing effort you might need to get netizens to type that marathon-length URL in their browser.

You can [learn more about v3][6] in the Tor docs.
### Why you might need an onion address

A .onion domain has a few key advantages. Its key feature is that it can be accessed only with a Tor browser. Many people don't even know Tor exists, so you shouldn't expect massive traffic on your .onion site. However, the Tor browser provides numerous layers of anonymity not available on more popular browsers. If you want to ensure near-total anonymity for both you and your visitors, onion addresses are built for it.

With Tor, you do not need to register with ICANN to create your own domain. You don't need to hide your details from Whois searches, and your ICANN account won't be vulnerable to malicious takeovers. You are completely in control of your privacy and your domain.

An onion address is also an effective way to bypass censorship restrictions imposed by a government or regime. Its privacy helps protect you if your site may be viewed as a threat to the interests of the political class. Sites like Wikileaks are the best examples.

### What you need to generate a vanity URL

To configure a vanity onion address, you need to generate a new private key to match a custom hostname.

Two applications that you can use for generating .onion addresses are [eschalot][7] for v2 addresses and [mkp224o][8] for v3 addresses.

Eschalot is a Tor hidden service name generator. It allows you to produce a (partially) customized vanity .onion address using a brute-force method. Eschalot is distributed in source form under the BSD license and should compile on any Unix or Linux system.

mkp224o is a vanity address generator for ed25519 .onion services that's available on GitHub with the CC0 1.0 Universal license. It generates vanity 56-character onion addresses.

Here's a simple explanation of how these applications work. (This assumes you are comfortable with Git.)
#### Eschalot

Eschalot requires [OpenSSL][9] 0.9.7 or later libraries with source headers. Confirm your version with this command:

```
$ openssl version
OpenSSL 1.1.1c FIPS 28 May 2019
```

You also need a [Make][10] utility (either BSD or GNU Make will do) and a C compiler (GCC, PCC, or LLVM/Clang).

Clone the eschalot repo to your system, and then compile:

```
$ git clone <https://github.com/ReclaimYourPrivacy/eschalot.git>
$ cd eschalot-1.2.0
$ make
```

If you're not using GCC, you must set the **CC** environment variable. For example, to use PCC instead:

```
$ make clean
$ env CC=pcc make
```
##### Using eschalot
|
||||
|
||||
To see Echalot's Help pages, type **./eschalot** in the terminal:
|
||||
|
||||
|
||||
```
|
||||
$ ./eschalot
|
||||
Version: 1.2.0
|
||||
|
||||
usage:
|
||||
eschalot [-c] [-v] [-t count] ([-n] [-l min-max] -f filename) | (-r regex) | (-p prefix)
|
||||
-v : verbose mode - print extra information to STDERR
|
||||
-c : continue searching after the hash is found
|
||||
-t count : number of threads to spawn default is one)
|
||||
-l min-max : look for prefixes that are from 'min' to 'max' characters long
|
||||
-n : Allow digits to be part of the prefix (affects wordlist mode only)
|
||||
-f filename: name of the text file with a list of prefixes
|
||||
-p prefix : single prefix to look for (1-16 characters long)
|
||||
-r regex : search for a POSIX-style regular expression
|
||||
|
||||
Examples:
|
||||
eschalot -cvt4 -l8-12 -f wordlist.txt >> results.txt
|
||||
eschalot -v -r '^test|^exam'
|
||||
eschalot -ct5 -p test
|
||||
|
||||
base32 alphabet allows letters [a-z] and digits [2-7]
|
||||
Regex pattern examples:
|
||||
xxx must contain 'xxx'
|
||||
^foo must begin with 'foo'
|
||||
bar$ must end with 'bar'
|
||||
b[aoeiu]r must have a vowel between 'b' and 'r'
|
||||
'^ab|^cd' must begin with 'ab' or 'cd'
|
||||
[a-z]{16} must contain letters only, no digits
|
||||
^dusk.*dawn$ must begin with 'dusk' and end with 'dawn'
|
||||
[a-z2-7]{16} any name - will succeed after one iteration
|
||||
```
|
||||
|
||||
You can use eschalot to generate an address using the prefix **-p** for _privacy_. Assuming your system has multiple CPU cores, use _multi-threading_ (**-t**) to speed up the URL generation. To _get verbose output_, use the **-v** option. Write the results of your calculation to a file named **newonion.txt**:
```
./eschalot -v -t4 -p privacy >> newonion.txt
```

The script executes until it finds a suitable match:

```
$ ./eschalot -v -t4 -p privacy >> newonion.txt
Verbose, single result, no digits, 4 threads, prefixes 7-7 characters long.
Thread #1 started.
Thread #2 started.
Thread #3 started.
Thread #4 started.
Running, collecting performance data...
Found a key for privacy (7) - privacyzofgsihx2.onion
```

To access the public and private keys eschalot generates, locate **newonion.txt** in the eschalot folder.
#### mkp224o

mkp224o requires a C99 compatible compiler, Libsodium, GNU Make, GNU Autoconf, and a Unix-like platform. It has been tested on Linux and OpenBSD.

To get started, clone the mkp224o repo onto your system, generate the required [Autotools infrastructure][11], configure, and compile:

```
$ git clone <https://github.com/cathugger/mkp224o.git>
$ cd mkp224o
$ ./autogen.sh
$ ./configure
$ make
```
##### Using mkp224o

Type **./mkp224o -h** to view Help:

```
$ ./mkp224o -h
Usage: ./mkp224o filter [filter...] [options]
       ./mkp224o -f filterfile [options]
Options:
  -h              - print help to stdout and quit
  -f              - specify filter file which contains filters separated by newlines
  -D              - deduplicate filters
  -q              - do not print diagnostic output to stderr
  -x              - do not print onion names
  -v              - print more diagnostic data
  -o filename     - output onion names to specified file (append)
  -O filename     - output onion names to specified file (overwrite)
  -F              - include directory names in onion names output
  -d dirname      - output directory
  -t numthreads   - specify number of threads to utilise (default - CPU core count or 1)
  -j numthreads   - same as -t
  -n numkeys      - specify number of keys (default - 0 - unlimited)
  -N numwords     - specify number of words per key (default - 1)
  -z              - use faster key generation method; this is now default
  -Z              - use slower key generation method
  -B              - use batching key generation method (>10x faster than -z, experimental)
  -s              - print statistics each 10 seconds
  -S t            - print statistics every specified ammount of seconds
  -T              - do not reset statistics counters when printing
  -y              - output generated keys in YAML format instead of dumping them to filesystem
  -Y [filename [host.onion]] - parse YAML encoded input and extract key(s) to filesystem
```

One or more filters are required for mkp224o to work. When executed, mkp224o creates a directory with secret and public keys, plus a hostname for each discovered service. By default, **root** is the current directory, but that can be overridden with the **-d** switch.
Use the **-t numthreads** option to define how many threads you want to use during processing, and **-v** to see verbose output. Use the **fast** filter, and generate four keys by setting the **-n** option:

```
$ ./mkp224o filter fast -t 4 -v -n 4 -d ~/Extracts
set workdir: /home/areahints/Extracts/
sorting filters... done.
filters:
        fast
        filter
in total, 2 filters
using 4 threads
fastrcl5totos3vekjbqcmgpnias5qytxnaj7gpxtxhubdcnfrkapqad.onion
fastz7zvpzic6dp6pvwpmrlc43b45usm2itkn4bssrklcjj5ax74kaad.onion
fastqfj44b66mqffbdfsl46tg3c3xcccbg5lfuhr73k7odfmw44uhdqd.onion
fast4xwqdhuphvglwic5dfcxoysz2kvblluinr4ubak5pluunduy7qqd.onion
waiting for threads to finish... done.
```

In the directory path set with **-d**, mkp224o creates a folder with the v3 address name it has generated, and within it you see your hostname, secret, and public files.
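To actually serve a site under one of these generated names, the key directory is typically handed to Tor via the `HiddenServiceDir` and `HiddenServicePort` directives in `torrc`. The sketch below only writes the two directives to a temporary stand-in file; the directory path is a placeholder you would replace with the folder mkp224o generated, and in real use you would append the lines to `/etc/tor/torrc` (copying the generated `hostname` and `hs_ed25519_*` files into the service directory first) and restart Tor:

```shell
# Sketch: torrc directives that publish a generated key directory as a
# v3 hidden service. Paths are placeholders; adapt them to your system.
hsdir=/var/lib/tor/myonion          # would hold hostname + hs_ed25519_* keys
torrc=$(mktemp)                     # stand-in for /etc/tor/torrc in this sketch
cat >> "$torrc" <<EOF
HiddenServiceDir $hsdir
HiddenServicePort 80 127.0.0.1:80
EOF
cat "$torrc"
```

`HiddenServicePort 80 127.0.0.1:80` maps port 80 of the onion address to a web server listening locally, so the service is reachable only through Tor.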
Use the **-s** switch to enable printing statistics, which may be useful when benchmarking different ed25519 implementations on your machine. Also, read the **OPTIMISATION.txt** file in mkp224o for performance-related tips.

### Notes about security

If you're wondering about the security of v2 generated keys, [Shallot][3] provides an interesting take:

> It is sometimes claimed that private keys generated by Shallot are less secure than those generated by Tor. This is false. Although Shallot generates a keypair with an unusually large public exponent **e**, it performs all of the sanity checks specified by PKCS #1 v2.1 (directly in **sane_key**), and then performs all of the sanity checks that Tor does when it generates an RSA keypair (by calling the OpenSSL function **RSA_check_key**).

"[Zooko's Triangle][12]" (which is discussed in [Stiegler's Petname Systems][13]) argues that names cannot be global, secure, and memorable at the same time. This means that while .onion names are unique and secure, they have the disadvantage that they cannot be meaningful to humans.

Imagine that an attacker creates an .onion name that looks similar to the .onion of a different onion service and replaces its hyperlink on the onion wiki. How long would it take for someone to recognize it?

The onion address system has trade-offs, but vanity addresses may be a reasonable balance among them.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-create-vanity-tor-onion-address

作者:[Kc Nwaezuoke][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/areahintshttps://opensource.com/users/sethhttps://opensource.com/users/bexelbiehttps://opensource.com/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_mask_secret.png?itok=EjqwosxY (Password security with a mask)
[2]: https://www.torproject.org/
[3]: https://github.com/katmagic/Shallot
[4]: http://6zdgh5a5e6zpchdz.onion/
[5]: http://vww6ybal4bd7szmgncyruucpgfkqahzddi37ktceo3ah7ngmcopnpyyd.onion/
[6]: https://www.torproject.org/docs/tor-onion-service.html.en#four
[7]: https://github.com/ReclaimYourPrivacy/eschalot
[8]: https://github.com/cathugger/mkp224o
[9]: https://www.openssl.org/
[10]: https://en.wikipedia.org/wiki/Make_(software)
[11]: https://opensource.com/article/19/7/introduction-gnu-autotools
[12]: https://en.wikipedia.org/wiki/Zooko%27s_triangle
[13]: http://www.skyhunter.com/marcs/petnames/IntroPetNames.html
@@ -0,0 +1,195 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keeping track of Linux users: When do they log in and for how long?)
[#]: via: (https://www.networkworld.com/article/3431864/keeping-track-of-linux-users-when-do-they-log-in-and-for-how-long.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Keeping track of Linux users: When do they log in and for how long?
======
Getting an idea of how often your users are logging in and how much time they spend on a Linux server is pretty easy with a couple of commands and maybe a script or two.
![Adikos \(CC BY 2.0\)][1]

The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the **/var/log/wtmp** file that maintains details on user logins can be time-consuming, but with a couple of easy commands, you can extract a lot of useful information on user logins.
One of the commands that helps with this is the **last** command. It provides a list of user logins that can go quite far back. The output looks like this:

```
$ last | head -5 | tr -s " "
shs pts/0 192.168.0.14 Wed Aug 14 09:44 still logged in
shs pts/0 192.168.0.14 Wed Aug 14 09:41 - 09:41 (00:00)
shs pts/0 192.168.0.14 Wed Aug 14 09:40 - 09:41 (00:00)
nemo pts/1 192.168.0.18 Wed Aug 14 09:38 still logged in
shs pts/0 192.168.0.14 Tue Aug 13 06:15 - 18:18 (00:24)
```

Note that the **tr -s " "** portion of the command above reduces strings of blanks to single blanks, and in this case, it keeps the output shown from being so wide that it would be wrapped around on this web page. Without the **tr** command, that output would look like this:
```
$ last | head -5
shs      pts/0        192.168.0.14     Wed Aug 14 09:44   still logged in
shs      pts/0        192.168.0.14     Wed Aug 14 09:41 - 09:41  (00:00)
shs      pts/0        192.168.0.14     Wed Aug 14 09:40 - 09:41  (00:00)
nemo     pts/1        192.168.0.18     Wed Aug 14 09:38   still logged in
shs      pts/0        192.168.0.14     Wed Aug 14 09:15 - 09:40  (00:24)
```

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

While it's easy to generate and review login activity records like these for all users with the **last** command, or for some particular user with a **last username** command, without the pipe to **head** these commands will generally result in a _lot_ of data. In this case, a listing for all users would have 908 lines.

```
$ last | wc -l
908
```
### Counting logins with last

If you don't need all of the login detail, you can view user login sessions as a simple count of logins for all users on the system with a command like this:

```
$ for user in `ls /home`; do echo -ne "$user\t"; last $user | wc -l; done
dorothy 21
dory    13
eel     29
jadep   124
jdoe    27
jimp    42
nemo    9
shark   17
shs     423
test    2
waynek  201
```

The list above shows how many times each user has logged in since the current **/var/log/wtmp** file was initiated. Notice, however, that the command to generate it does depend on user accounts being set up in the default /home directory.
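If accounts don't all live under /home, one workaround is to derive the user list from the home-directory field of `/etc/passwd` instead of `ls /home`. This is a sketch; the `home_users` helper name is made up for illustration, and you would adjust the path pattern to match wherever your accounts are homed:

```shell
# List login accounts whose home directory sits under /home, reading the
# sixth colon-separated field of the passwd file (path may be overridden).
home_users() {
  awk -F: '$6 ~ /^\/home\// {print $1}' "${1:-/etc/passwd}"
}

home_users    # reads /etc/passwd by default
```

The loop above then becomes `for user in $(home_users); do ...`, which also survives stray files in /home that are not real accounts.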
Depending on how much data has accumulated in your current **wtmp** file, you may see a lot of logins or relatively few. To get a little more insight into how relevant the number of logins is, you could turn this command into a script, adding a command that shows when the first login in the current file occurred to provide a little perspective.

```
#!/bin/bash

echo -n "Logins since "
who /var/log/wtmp | head -1 | awk '{print $3}'
echo "======================="

for user in `ls /home`
do
    echo -ne "$user\t"
    last $user | wc -l
done
```

When you run the script, the "Logins since" line will let you know how to interpret the stats shown.
```
$ ./show_user_logins
Logins since 2018-10-05
=======================
dorothy 21
dory    13
eel     29
jadep   124
jdoe    27
jimp    42
nemo    9
shark   17
shs     423
test    2
waynek  201
```

### Looking at accumulated login time with **ac**
The **ac** command provides a report on user login time — hours spent logged in. As with the **last** command, **ac** reports on user logins since the last rollover of the **wtmp** file, since **ac**, like **last**, gets its details from **/var/log/wtmp**. The **ac** command, however, provides a much different view of user activity than the number of logins. For a single user, we might use a command like this one:

```
$ ac nemo
        total       31.61
```

This tells us that nemo has spent nearly 32 hours logged in. To use the command to generate a listing of the login times for all users, you might use a command like this:

```
$ for user in `ls /home`; do ac $user | sed "s/total/$user\t/" ; done
        dorothy          9.12
        dory             1.67
        eel              4.32
        …
```

In this command, we are replacing the word "total" in each line with the relevant username. And, as long as usernames are fewer than 8 characters, the output will line up nicely. To left-justify the output, you can modify that command to this:

```
$ for user in `ls /home`; do ac $user | sed "s/^\t//" | sed "s/total/$user\t/" ; done
dorothy          9.12
dory             1.67
eel              4.32
...
```

The first use of **sed** in that string of commands strips off the initial tabs.
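If `ac` isn't available (it ships in the acct/psacct package, which is not always installed), a rough substitute is to sum the `(HH:MM)` session durations that `last` prints, where sessions longer than a day appear as `(days+HH:MM)`. This sketch defines a hypothetical `sum_hours` helper that reads `last` output on stdin; it ignores sessions still in progress, so it is only an approximation:

```shell
# Rough fallback for `ac`: sum the "(days+HH:MM)" durations printed at the
# end of each `last` line, e.g. usage: last nemo | sum_hours
sum_hours() {
  awk '
    match($0, /\(([0-9]+\+)?[0-9]+:[0-9]+\)$/) {
      s = substr($0, RSTART + 1, RLENGTH - 2)   # e.g. "1+02:00" or "00:24"
      days = 0
      if (split(s, dp, "+") == 2) { days = dp[1]; s = dp[2] }
      split(s, t, ":")
      total += days * 24 + t[1] + t[2] / 60     # accumulate hours
    }
    END { printf "%.2f\n", total }'
}
```

Unlike `ac`, this skips any "still logged in" lines, so expect its totals to run slightly low on a busy system.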
To turn this command into a script and display the initial date for the **wtmp** file to add more relevance to the hour counts, you could use a script like this:

```
#!/bin/bash

echo -n "hours online since "
who /var/log/wtmp | head -1 | awk '{print $3}'
echo "============================="

for user in `ls /home`
do
    ac $user | sed "s/^\t//" | sed "s/total/$user\t/"
done
```

If you run the script, you'll see the hours spent by each user over the lifespan of the **wtmp** file:

```
$ ./show_user_hours
hours online since 2018-10-05
=============================
dorothy 70.34
dory    4.67
eel     17.05
jadep   186.04
jdoe    28.20
jimp    11.49
nemo    11.61
shark   13.04
shs     3563.60
test    1.00
waynek  312.00
```

The difference between the user activity levels in this example is pretty obvious, with one user spending only one hour on the system since October and another dominating the system.
### Wrap-up

Reviewing how often users log into a system and how many hours they spend online can give you an overview of how a system is being used and who are likely the heaviest users. Of course, login time does not necessarily correspond to how much work each user is getting done, but it's likely close, and commands such as **last** and **ac** can help you identify the most active users.

### More Linux advice: Sandra Henry-Stocker explains how to use the rev command in this 2-Minute Linux Tip video

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3431864/keeping-track-of-linux-users-when-do-they-log-in-and-for-how-long.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/08/keyboard-adikos-100808324-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@@ -1,227 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SSLH – Share A Same Port For HTTPS And SSH)
[#]: via: (https://www.ostechnix.com/sslh-share-port-https-ssh/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

SSLH – Share A Same Port For HTTPS And SSH
======

![SSLH - Share A Same Port For HTTPS And SSH][1]

Some Internet service providers and corporate companies might block most ports and allow only a few specific ones, such as ports 80 and 443, to tighten their security. In such cases, we have no choice but to use the same port for multiple programs, say the HTTPS port **443**, which is rarely blocked. This is where **SSLH**, an SSL/SSH multiplexer, comes in handy. It listens for incoming connections on port 443. To put it more simply, SSLH allows us to run several programs/services on port 443 on a Linux system, so you can use both SSL and SSH on the same port at the same time. If you have ever been in a situation where most ports are blocked by firewalls, you can use SSLH to access your remote server. This brief tutorial describes how to share the same port for HTTPS and SSH using SSLH in Unix-like operating systems.

### SSLH – Share A Same Port For HTTPS, SSH, And OpenVPN

##### Install SSLH

SSLH is packaged for most Linux distributions, so you can install it using the default package managers.

On **Debian**, **Ubuntu**, and derivatives, run:

```
$ sudo apt-get install sslh
```

While installing SSLH, you will be prompted whether you want to run sslh as a service from inetd, or as a standalone server. Each choice has its own benefits. With only a few connections per day, it is probably better to run sslh from inetd in order to save resources. On the other hand, with many connections, sslh should run as a standalone server to avoid spawning a new process for each incoming connection.

![][2]

Install sslh

On **Arch Linux** and derivatives such as Antergos and Manjaro Linux, install it using Pacman as shown below.

```
$ sudo pacman -S sslh
```

On **RHEL** and **CentOS**, you need to add the **EPEL** repository and then install SSLH as shown below.

```
$ sudo yum install epel-release
$ sudo yum install sslh
```

On **Fedora**:

```
$ sudo dnf install sslh
```

If it is not available in the default repositories, you can manually compile and install SSLH as described [**here**][3].

##### Configure Apache or Nginx web servers

As you already know, the Apache and Nginx web servers listen on all network interfaces (i.e. **0.0.0.0:443**) by default. We need to change this setting to tell the web server to listen on the localhost interface only (i.e. **127.0.0.1:443** or **localhost:443**).

To do so, edit the web server (nginx or apache) configuration file and find the following line:

```
listen 443 ssl;
```

And, change it to:

```
listen 127.0.0.1:443 ssl;
```

If you're using VirtualHosts in Apache, make sure you change them too.

```
VirtualHost 127.0.0.1:443
```

Save and close the config files. Do not restart the services. We haven't finished yet.

##### Configure SSLH

Once you have configured the web servers to listen on the local interface only, edit the SSLH config file:

```
$ sudo vi /etc/default/sslh
```

Find the following line:

```
Run=no
```

And, change it to:

```
Run=yes
```

Then, scroll down a little and modify the following line to allow SSLH to listen on port 443 on all available interfaces (e.g. 0.0.0.0:443).

```
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
```

Where,

  * –user sslh : Run under this specified username.
  * –listen 0.0.0.0:443 : SSLH listens on port 443 on all available interfaces.
  * –ssh 127.0.0.1:22 : Route SSH traffic to port 22 on the localhost.
  * –ssl 127.0.0.1:443 : Route HTTPS/SSL traffic to port 443 on the localhost.

Save and close the file.

Finally, enable and start the sslh service to apply the changes.

```
$ sudo systemctl enable sslh
$ sudo systemctl start sslh
```

##### Testing

Check if the SSLH daemon is listening on port 443.

```
$ ps -ef | grep sslh
sslh 2746 1 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sslh 2747 2746 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sk 2754 1432 0 15:51 pts/0 00:00:00 grep --color=auto sslh
```

Now, you can access your remote server via SSH using port 443:

```
$ ssh -p 443 sk@192.168.225.50
```

**Sample output:**

```
sk@192.168.225.50's password:
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Aug 14 13:11:04 IST 2019

  System load:  0.23               Processes:             101
  Usage of /:   53.5% of 19.56GB   Users logged in:       0
  Memory usage: 9%                 IP address for enp0s3: 192.168.225.50
  Swap usage:   0%                 IP address for enp0s8: 192.168.225.51

 * Keen to learn Istio? It's included in the single-package MicroK8s.

     https://snapcraft.io/microk8s

61 packages can be updated.
22 updates are security updates.


Last login: Wed Aug 14 13:10:33 2019 from 127.0.0.1
```

![][4]

Access remote systems via SSH using port 443
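
To avoid typing the port every time, you can also pin the non-standard port in your SSH client configuration (`~/.ssh/config`). The host alias and address below are examples based on the output above:

```
Host myserver
    HostName 192.168.225.50
    Port 443
```

After that, a plain `ssh myserver` connects on port 443.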

See? I can now access the remote server via SSH even though the default SSH port 22 is blocked. As you can see in the above example, I used the HTTPS port 443 for the SSH connection. We can also use the same port 443 for OpenVPN connections.
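
As a sketch of that OpenVPN setup (an assumption on my part — check `man sslh` for the exact option name on your version, and make sure the OpenVPN server itself listens on the localhost address), the sslh options would gain one more target on its standard port 1194:

```
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --openvpn 127.0.0.1:1194 --pidfile /var/run/sslh/sslh.pid"
```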

* * *

**Suggested read:**

  * [**How To SSH Into A Particular Directory On Linux**][5]
  * [**How To Create SSH Alias In Linux**][6]
  * [**How To Configure SSH Key-based Authentication In Linux**][7]
  * [**How To Stop SSH Session From Disconnecting In Linux**][8]
  * [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][9]
  * [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
  * [**ScanSSH – Fast SSH Server And Open Proxy Scanner**][11]

* * *

I tested SSLH on my Ubuntu 18.04 LTS server and it worked just fine as described above. I tested SSLH in a protected local area network, so I am not aware of any security issues. If you're using it in production, let us know the advantages and disadvantages of using SSLH in the comment section below.

For more details, check the official GitHub page given below.

**Resource:**

  * [**SSLH GitHub Repository**][12]

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/sslh-share-port-https-ssh/

作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2017/08/SSLH-Share-A-Same-Port-For-HTTPS-And-SSH-1-720x340.jpg
[2]: https://www.ostechnix.com/wp-content/uploads/2017/08/install-sslh.png
[3]: https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md
[4]: https://www.ostechnix.com/wp-content/uploads/2017/08/Access-remote-systems-via-SSH-using-port-443.png
[5]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
[6]: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
[7]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
[8]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
[9]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
[11]: https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/
[12]: https://github.com/yrutschle/sslh

@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cockpit and the evolution of the Web User Interface)
[#]: via: (https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/)
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)

Cockpit and the evolution of the Web User Interface
======

![][1]

Over 3 years ago the Fedora Magazine published an article entitled [Cockpit: an overview][2]. Since then, the interface has seen some eye-catching changes. Today's Cockpit is cleaner, and the larger fonts make better use of screen real estate.

This article will go over some of the changes made to the UI. It will also explore some of the general tools available in the web interface to simplify those monotonous sysadmin tasks.

### Cockpit installation

Cockpit can be installed using the **dnf install cockpit** command. This provides a minimal setup with the basic tools required to use the interface.

Another option is to install the Headless Management group. This will install additional packages used to extend the usability of Cockpit. It includes extensions for NetworkManager, software packages, disk, and SELinux management.
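
For example (the exact group id is my assumption and may differ between Fedora releases; `dnf group list hidden` shows what is available on your system):

```
$ sudo dnf group install headless-management
```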

Run the following commands to enable the web service on boot and open the firewall port:

```
$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket -> /usr/lib/systemd/system/cockpit.socket

$ sudo firewall-cmd --permanent --add-service cockpit
success
$ sudo firewall-cmd --reload
success
```

### Logging into the web interface

To access the web interface, open your favourite browser and enter the server's domain name or IP in the address bar, followed by the service port (9090). Because Cockpit uses HTTPS, the installation will create a self-signed certificate to encrypt passwords and other sensitive data. You can safely accept this certificate, or request a CA certificate from your sysadmin or a trusted source.

Once the certificate is accepted, the new and improved login screen will appear. Long-time users will notice the username and password fields have been moved to the top. In addition, the white background behind the credential fields immediately grabs the user's attention.

![][3]

A feature added to the login screen since the previous article is logging in with **sudo** privileges — if your account is a member of the wheel group. Check the box beside _Reuse my password for privileged tasks_ to elevate your rights.

Another addition to the login screen is the option to connect to remote servers also running the Cockpit web service. Click _Other Options_ and enter the host name or IP address of the remote machine to manage it from your local browser.

### Home view

Right off the bat we get a basic overview of common system information. This includes the make and model of the machine, the operating system, whether the system is up to date, and more.

![][4]

Clicking the make/model of the system displays hardware information such as the BIOS/firmware. It also includes details about the components as seen with **lspci**.

![][5]

Clicking on any of the options to the right will display the details of that device. For example, the _% of CPU cores_ option reveals details on how much is used by the user and the kernel. In addition, the _Memory & Swap_ graph displays how much of the system's memory is used, how much is cached, and how much of the swap partition is active. The _Disk I/O_ and _Network Traffic_ graphs are linked to the Storage and Networking sections of Cockpit. These topics will be revisited in an upcoming article that explores the system tools in detail.

#### Secure Shell Keys and authentication

Because security is a key factor for sysadmins, Cockpit now has the option to view the machine's MD5 and SHA256 key fingerprints. Clicking the **Show fingerprints** option reveals the server's ECDSA, ED25519, and RSA fingerprint keys.

![][6]

You can also add your own keys by clicking on your username in the top-right corner and selecting **Authentication**. Click on **Add keys** to validate the machine on other systems. You can also revoke your privileges in the Cockpit web service by clicking on the **X** button to the right.

![][7]

#### Changing the host name and joining a domain

Changing the host name is a one-click solution from the home page. Click the host name currently displayed, and enter the new name in the _Change Host Name_ box. One of the latest features is the option to provide a _Pretty name_.

Another feature added to Cockpit is the ability to connect to a directory server. Click _Join a domain_ and a pop-up will appear requesting the domain address or name, organization unit (optional), and the domain admin's credentials. The Domain Membership group provides all the packages required to join an LDAP server, including FreeIPA and the popular Active Directory.

To opt out, click on the domain name followed by _Leave Domain_. A warning will appear explaining the changes that will occur once the system is no longer on the domain. To confirm, click the red _Leave Domain_ button.

![][8]

#### Configuring NTP and system date and time

Using the command line and editing config files definitely takes the cake when it comes to maximum tweaking. However, there are times when something more straightforward will suffice. With Cockpit, you have the option to set the system's date and time manually or automatically using NTP. Once synchronized, the information icon on the right turns from red to blue. The icon will disappear if you manually set the date and time.

To change the timezone, type the continent and a list of cities will populate beneath.

![][9]

#### Shutting down and restarting

You can easily shut down and restart the server right from the home screen in Cockpit. You can also delay the shutdown/reboot and send a message to warn users.

![][10]

#### Configuring the performance profile

If the _tuned_ and _tuned-utils_ packages are installed, performance profiles can be changed from the main screen. By default it is set to a recommended profile. However, if the purpose of the server requires more performance, we can change the profile from Cockpit to suit those needs.

![][11]

### Terminal web console

A Linux sysadmin's toolbox would be useless without access to a terminal. This allows admins to fine-tune the server beyond what's available in Cockpit. With the addition of themes, admins can quickly adjust the text and background colours to suit their preference.

Also, if you type **exit** by mistake, click the _Reset_ button in the top-right corner. This will provide a fresh screen with a flashing cursor.

![][12]

### Adding a remote server and the Dashboard overlay

The Headless Management group includes the Dashboard module (**cockpit-dashboard**). This provides an overview of the CPU, memory, network, and disk performance in a real-time graph. Remote servers can also be added and managed through the same interface.

For example, to add a remote computer in Dashboard, click the **+** button. Enter the name or IP address of the server and select the colour of your choice. This helps to differentiate the stats of the servers in the graph. To switch between servers, click on the host name (as seen in the screen-cast below). To remove a server from the list, click the check-mark icon, then click the red trash icon. The example below demonstrates how Cockpit manages a remote machine named _server02.local.lan_.

![][13]

### Documentation and finding help

As always, the _man_ pages are a great place to find documentation. A simple search on the command line turns up pages pertaining to different aspects of using and configuring the web service.

```
$ man -k cockpit
cockpit (1)          - Cockpit
cockpit-bridge (1)   - Cockpit Host Bridge
cockpit-desktop (1)  - Cockpit Desktop integration
cockpit-ws (8)       - Cockpit web service
cockpit.conf (5)     - Cockpit configuration file
```

The Fedora repository also has a package called **cockpit-doc**. The package's description explains it best:

> The Cockpit Deployment and Developer Guide shows sysadmins how to deploy Cockpit on their machines as well as helps developers who want to embed or extend Cockpit.

For more documentation visit <https://cockpit-project.org/external/source/HACKING>

### Conclusion

This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the _cockpit-ostree_ module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).

What do you think about Cockpit? Share your experience and ideas in the comments below.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/

作者:[Shaun Assam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-816x345.jpg
[2]: https://fedoramagazine.org/cockpit-overview/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-login-screen.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-home-screen.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-system-info.gif
[6]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-ssh-key-fingerprints.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-authentication.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-hostname-domain.gif
[9]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-date-time.png
[10]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-power-options.gif
[11]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-tuned.gif
[12]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-terminal.gif
[13]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-add-remote-servers.gif

@ -0,0 +1,208 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Designing open audio hardware as DIY kits)
[#]: via: (https://opensource.com/article/19/8/open-audio-kit-developer)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansenhttps://opensource.com/users/alanfdosshttps://opensource.com/users/clhermansen)

Designing open audio hardware as DIY kits
======
Did you know you can build your own speaker systems? Muffsy creator shares how he got into making open audio hardware and why he started selling his designs to other DIYers.
![Colorful sound wave graph][1]

Previously in this series about people who are developing audio technology in the open, I interviewed [Juan Rios, developer and maintainer of Guayadeque][2] and [Sander Jansen, developer and maintainer of Goggles Music Manager][3]. These conversations have broadened my thinking and helped me enjoy their software even more than before.

For this article, I contacted Håvard Skrödahl, founder of [Muffsy][4]. His hobby is designing open source audio hardware, and he offers his designs as kits for those of us who can't wait to wind up the soldering iron for another adventure.

I've built two of Håvard's kits: a [moving coil (MC) cartridge preamp][5] and a [moving magnet (MM) cartridge phono preamp][6]. Both were a lot of fun to build and sound great. They were also a bit of a stroll down memory lane for me. In my 20s, I built some other audio kits, including a [Hafler DH-200 power amplifier][7] and a [DH-110 preamplifier][8]. Before that, I built a power amplifier using a Motorola circuit design; both the design and the amplifier were lost along the way, but they were a lot of fun!

### Meet Håvard Skrödahl, open audio hardware maker

**Q: How did you get started designing music playback hardware?**

**A:** I was a teenager in the mid-'80s, and records and cassettes were the only options we had for listening to music. Vinyl was, of course, the best quality, while cassettes were more portable. About five years ago, I was getting back into vinyl and found myself lacking the equipment I needed. So, I decided to make my own phono stage (also called a phono preamp). The first iteration was bulky and had a relatively bad [RIAA filter][9], but I improved it during the first few months.
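
For context, the [RIAA filter][9] a phono stage implements is the standard playback de-emphasis curve, defined by three time constants (3180 µs, 318 µs, and 75 µs). A quick sketch of the curve's magnitude, normalized to the 1 kHz reference (my own illustration, not Håvard's circuit):

```python
import math

def riaa_gain_db(freq_hz):
    """Magnitude of the standard RIAA playback curve in dB (unnormalized)."""
    w = 2 * math.pi * freq_hz
    mag = lambda tau: abs(complex(1, w * tau))
    # one zero at 318 us, poles at 3180 us and 75 us
    return 20 * math.log10(mag(318e-6) / (mag(3180e-6) * mag(75e-6)))

def riaa_relative_db(freq_hz):
    """RIAA playback gain relative to the 1 kHz reference point."""
    return riaa_gain_db(freq_hz) - riaa_gain_db(1000)

# Relative to 1 kHz, bass is boosted by roughly +19 dB at 20 Hz
# and treble is cut by roughly -19.6 dB at 20 kHz.
```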

The first version was completely homemade. I was messing about with toner transfer and chemicals and constantly breaking drill bits to create this board.

![Phono stage board top][10]

Top of the phono stage board

![Bottom of the phono stage board][11]

Bottom of the phono stage board

I was over the moon with this phono stage. It worked perfectly, even though the RIAA curve was out by quite a bit. It also had variable input impedance (greatly inspired by the [Audiokarma CNC phono stage][12]).

When I moved on to getting boards professionally made, I found that the impedance settings could be improved quite a bit. My setup needed adjustable gain, so I added it. The RIAA filter was completely redesigned, and it is (to my knowledge) the only accurate RIAA filter circuit that uses [standard E24 component values][13].

![Muffsy audio hardware boards][14]

Various iterations of boards in development.

**Q: How did you decide to offer your work as kits? And how did you go from kits to open source?**

**A:** The component values being E24 came from a lack of decent component providers in my area (or so I thought; as it turned out, I have a great provider nearby), so I had to go for standard values. This meant my circuit was well suited for DIY, and I started selling blank printed circuit boards on [Tindie][15].

What really made the phono stage suitable as a kit was a power supply that didn't require messing about with [mains electricity][16]. It's basically an AC adapter, a voltage doubler, and a couple of three-pin voltage regulators.

So there I was; I had a phono stage, a power supply, and the right (and relatively easy to source) components. The boards fit straight into the enclosure I'd selected, and I made a suitable back panel. I could now sell a DIY kit that turns into a working device once it is assembled. This is pretty unique; you won't see many kit suppliers provide everything that's needed to make a functional product.

![Phono stage kit with the power supply and back panel][17]

The assembled current phono stage kit with the power supply and back panel.

As a bonus, since this is just my hobby, I'm not even aiming for profit. This is also partly why my designs are open source. I won't reveal who is using the designs, but you'll find them in more than one professional vinyl mastering studio, in governmental digitization projects, and even at a vinyl player manufacturer that you will have heard of.

**Q: Tell us a bit about your educational background. Are you an electrical engineer? Where did your interest in circuits come from?**

**A:** I went to a military school of electrical engineering (EE). My career has been pretty devoid of EE though, apart from a few years as a telephony switch technician. The music interest has stayed with me, though, and I've been dreaming of making something all my life. I would rather avoid mains electricity though, so signal level and below is where I'm happy.

**Q: In your day job, do you do hardware stuff as well? Or software? What about open source—does it matter to your day job?**

**A:** My profession is IT management, system architecture, and security. So I'm not doing any hardware designs at work. I wouldn't be where I am today without open source, so that is the other part of the reason why my designs are openly available.

**Q: Can you tell us a bit about what it takes to go from a design, to a circuit board, to producing a kit?**

**A:** I am motivated by my own needs when it comes to starting a new project. Or if I get inspired, like I was when I made a constant current [LED][18] tester. The LED tester required a very specific sub-milliampere meter, and it was kind of hard to find an enclosure for it. So the LED tester wasn't suited for a kit.

![LED tester][19]

LED tester

I made a [notch filter][20] that requires matched precision capacitors, and the potentiometers are quite hard to fine-tune. Besides, I don't see people lining up to buy this product, so it's not suited to be a kit.

![Notch filter][21]

Notch filter

I made an inverse RIAA filter using only surface-mount device [(SMD) components][22]—not something I would offer as a kit. I built an SMD vacuum pickup tool for this project, so it wasn't all for nothing.

![SMD vacuum pickup tool][23]

SMD vacuum pickup tool

I've made various PSU/transformer breakout boards, which are not suitable as kits because they require mains electricity.

![PSU/transformer breakout board][24]

PSU/transformer breakout boards

I designed and built the [MC Head Amp][25] without even owning an [MC cartridge][26]. I even built the [O2 headphone amp][27] without owning a pair of headphones, although people much smarter than me suspect it was a clever excuse for buying a pair of Sennheisers.

Kits need to be something I think people need. They must be easy to assemble (or rather, difficult to assemble incorrectly), must not be too expensive or require exotic components, and can't weigh too much because of the very expensive postage from Sweden.

Most importantly, I need to have time and space for another kit. This picture shows pretty much all the space I have available for my kits, two boxes deep, IKEA all the way.

![A shelf filled with boxes of Muffsy kits][28]

**Q: Are you a musician or audiophile? What kind of equipment do you have?**

**A:** I'm not a musician, and I am not an audiophile in the way most people would define such a person. I do know, from education and experience, that good sound doesn't have to cost a whole lot. It can be quite cheap, actually. A lot of the cost is in transformers, enclosures, and licenses (the stickers on the front of your gear). Stay away from those, and you're back at signal-level audio that can be really affordable.

Don't get me wrong; there are a lot of gifted designers who spend an awful lot of time and creativity making the next great piece of equipment. They deserve to get paid for their work. What I mean is that the components that go into this equipment can be bought for not much money at all.

My equipment is a simple [op-amp][29]-based preamp with a rotational input switch and a sub-$25 class-D amp based on the TPA3116 chip (which I will be upgrading to an IcePower 125ASX2). I'm using both the Muffsy Phono Preamp and the Muffsy MC Head Amp. Then I've got some really nice Dynaco A25 loudspeakers that I managed to refurbish through nothing more than good old dumb luck. I went for the cheapest Pro-Ject turntable that's still a "real" turntable. That's it. No surround, no remote control (yet), unless you count the Chromecast Audio that's connected to the back of my amp.

![Håvard Skrödahl's A/V setup][30]

Håvard's A/V setup

I'll happily shell out for quality components, good connectors, and shielded signal cables. But, to be diplomatic, I'd rather use the correct component for the job instead of the most expensive one. I do get questions about specific resistors and expensive "boutique" components now and then. I keep my answer short and remind people that my own builds are identical to what I sell on Tindie.

My preamp uses my MC Head Amp as a building block.

![Preamp internals][31]

Preamp internals

**Q: What kind of software do you use for hardware design?**

**A:** I've been using [Eagle][32] for many years. Getting into a different workflow takes a lot of time and requires a whole lot of mistakes, so no [KiCad][33] yet.

**Q: Can you tell us a bit about where your kits are going? Is there a new head amplifier? A new phono amplifier? What about a line-level pre-amp or power amp?**

**A:** If I were to do a power amp, I wouldn't dream of selling it because of what I said about mains electricity. Chip amps and [Class-D][34] seem to have taken over the DIY segment anyway, and I'm really happy with Class-D.

My latest kit is an input selector. It's something of a cross between hardware and software, as it uses an [ESP32][35] system-on-a-chip microcontroller. And it's something that I want for myself.

The kit includes everything you need. It's got a rotational encoder and an infrared receiver, and I'm even adding a remote control to the kit. The software and hardware are available on GitHub, also under a permissive open source license, and will soon include Alexa voice support and [MQTT][36] for app or command-line remote control.

![Input selector][37]

Input selector kit

My lineup now consists of preamps for MC and MM cartridges, a power supply and a back panel for them, and the input selector. I'm even selling bare circuit boards for a tube preamp and an accompanying power supply.

These components make up pretty much all the internals of a complete preamplifier, which has become one of my main motivational factors.

I have nothing new or significantly better to provide in terms of an ordinary preamplifier, so I'm using a modified version of a well-known circuit. I cannot, and would not, sell this circuit, as it's proprietary.

Anyhow, here's my personal goal. It's still a work in progress, using an S3207 enclosure and a front panel made at Frontpanel Express.

![Muffsy preamp][38]

New Muffsy preamp prototype

* * *

Thanks, Håvard, that looks pretty great! I'd be happy to have something like that sitting on my Hi-Fi shelf.

I hope there are people out there just waiting to try their hand at audio kit building or even board layout from proven open source schematics, and they find Håvard's story motivating. As for me, I think my next project could be an [active crossover][39].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/open-audio-kit-developer

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://opensource.com/article/19/6/creator-guayadeque-music-player
[3]: https://opensource.com/article/19/6/gogglesmm-developer-sander-jansen
[4]: https://www.muffsy.com/
[5]: https://opensource.com/article/18/5/building-muffsy-phono-head-amplifier-kit
[6]: https://opensource.com/article/18/7/diy-amplifier-vinyl-records
[7]: https://kenrockwell.com/audio/hafler/dh-200.htm
[8]: https://www.hifiengine.com/manual_library/hafler/dh-110.shtml
[9]: https://en.wikipedia.org/wiki/RIAA_equalization
[10]: https://opensource.com/sites/default/files/uploads/phono-stagetop.png (Phono stage board top)
[11]: https://opensource.com/sites/default/files/uploads/phono-stagebottom.png (Bottom of the phono stage board)
[12]: https://forum.psaudio.com/t/the-cnc-phono-stage-diy/3613
[13]: https://en.wikipedia.org/wiki/E_series_of_preferred_numbers
[14]: https://opensource.com/sites/default/files/uploads/boards.png (Muffsy audio hardware boards)
[15]: https://www.tindie.com/stores/skrodahl/
[16]: https://en.wikipedia.org/wiki/Mains_electricity
[17]: https://opensource.com/sites/default/files/uploads/phonostage-kit.png (Phono stage kit with the power supply and back panel)
[18]: https://en.wikipedia.org/wiki/Light-emitting_diode
[19]: https://opensource.com/sites/default/files/uploads/led-tester.png (LED tester)
[20]: https://en.wikipedia.org/wiki/Band-stop_filter
[21]: https://opensource.com/sites/default/files/uploads/notch-filter.png (Notch filter)
[22]: https://en.wikipedia.org/wiki/Surface-mount_technology
[23]: https://opensource.com/sites/default/files/uploads/smd-vacuum-pick-tool.png (SMD vacuum pickup tool)
[24]: https://opensource.com/sites/default/files/uploads/psu-transformer-breakout-board.png (PSU/transformer breakout board)
[25]: https://leachlegacy.ece.gatech.edu/headamp/
[26]: https://blog.audio-technica.com/audio-solutions-question-week-differences-moving-magnet-moving-coil-phono-cartridges/
[27]: http://nwavguy.blogspot.com/2011/07/o2-headphone-amp.html
[28]: https://opensource.com/sites/default/files/uploads/kit-shelves.png (Muffsy kits on shelves)
[29]: https://en.wikipedia.org/wiki/Operational_amplifier
[30]: https://opensource.com/sites/default/files/uploads/av-setup.png (Håvard Skrödahl's A/V setup)
[31]: https://opensource.com/sites/default/files/uploads/preamp-internals.png (Preamp internals)
[32]: https://en.wikipedia.org/wiki/EAGLE_(program)
[33]: https://en.wikipedia.org/wiki/KiCad
[34]: https://en.wikipedia.org/wiki/Class-D_amplifier
[35]: https://en.wikipedia.org/wiki/ESP32
[36]: http://mqtt.org/
[37]: https://opensource.com/sites/default/files/uploads/input-selector.png (Input selector)
[38]: https://opensource.com/sites/default/files/uploads/muffsy-preamp.png (Muffsy preamp)
[39]: https://www.youtube.com/watch?v=7u9OKPL1ezA&feature=youtu.be
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to encrypt files with gocryptfs on Linux)
[#]: via: (https://opensource.com/article/19/8/how-encrypt-files-gocryptfs)
[#]: author: (Brian "bex" Exelbierd https://opensource.com/users/bexelbie)

How to encrypt files with gocryptfs on Linux
======
Gocryptfs encrypts at the file level, so synchronization operations can work efficiently on each file.

![Filing papers and documents][1]

[Gocryptfs][2] is a Filesystem in Userspace (FUSE)-mounted file-level encryption program. FUSE-mounted means that the encrypted files are stored in a single directory tree that is mounted, like a USB key, using the [FUSE][3] interface. This allows any user to do the mount—you don't need to be root. Because gocryptfs encrypts at the file level, synchronization operations that copy your files can work efficiently on each file. This contrasts with disk-level encryption, where the whole disk is encrypted as a single, large binary blob.
When you use gocryptfs in its normal mode, your files are stored on your disk in an encrypted format. However, when you mount the encrypted files, you get unencrypted access to your files, just like any other file on your computer. This means all your regular tools and programs can use your unencrypted files. Changes, new files, and deletions are reflected in real-time in the encrypted version of the files stored on your disk.

### Install gocryptfs

Installing gocryptfs is easy on [Fedora][4] because it is packaged for Fedora 30 and Rawhide. Therefore, **sudo dnf install gocryptfs** does all the required installation work. If you're not using Fedora, you can find details on installing from source, on Debian, or via Homebrew in the [Quickstart][5].

### Initialize your encrypted filesystem

To get started, you need to decide where you want to store your encrypted files. This example will keep the files in **~/.sekrit_files** so that they don't show up when doing a normal **ls**.

Start by initializing the filesystem. This will require you to choose a password. You are strongly encouraged to use a unique password you've never used anywhere else, as this is your key to unlocking your files. The project's authors recommend a password with between 64 and 128 bits of entropy. Assuming you use upper and lower case letters and numbers, this means your password should be between [11 and 22 characters long][6]. If you're using a password manager, this should be easy to accomplish with a generated password.
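As a rough back-of-the-envelope check, you can reproduce that 11-to-22-character range yourself. The 62-symbol alphabet and the entropy targets come from the paragraph above; the arithmetic here is just a sketch:

```shell
# Minimum password length for a target entropy, assuming a 62-symbol
# alphabet (A-Z, a-z, 0-9): each character contributes log2(62), about
# 5.95 bits, so length = ceil(target_bits / bits_per_char).
awk 'BEGIN {
    bits_per_char = log(62) / log(2)
    printf "64 bits  -> %d characters\n", int(64 / bits_per_char) + 1
    printf "128 bits -> %d characters\n", int(128 / bits_per_char) + 1
}'
```

With these inputs the script prints 11 and 22 characters, matching the recommendation above.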
When you initialize the filesystem, you will see a unique key. Store this key somewhere securely, as it will allow you to access your files if you need to recover your files but have forgotten your password. The key works without your password, so keep it private!

The initialization routine looks like this:

```
$ mkdir ~/.sekrit_files
$ gocryptfs -init ~/.sekrit_files
Choose a password for protecting your files.
Password:
Repeat:

Your master key is:

XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX-
XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX

If the gocryptfs.conf file becomes corrupted or you ever forget your password,
there is only one hope for recovery: The master key. Print it to a piece of
paper and store it in a drawer. This message is only printed once.
The gocryptfs filesystem has been created successfully.
You can now mount it using: gocryptfs .sekrit_files MOUNTPOINT
```
If you look in the **~/.sekrit_files** directory, you will see two files: a configuration file and a unique directory-level initialization vector. You will not need to edit these two files by hand. Make sure you do not delete these files.

### Use your encrypted filesystem

To use your encrypted filesystem, you need to mount it. This requires an empty directory where you can mount the filesystem. For example, use the **~/my_files** directory. As you can see from the initialization, mounting is easy:

```
$ gocryptfs ~/.sekrit_files ~/my_files
Password:
Decrypting master key
Filesystem mounted and ready.
```

If you check out the **~/my_files** directory, you'll see it is empty. The configuration and initialization vector files aren't data, so they don't show up. Let's put some data in the filesystem and see what happens:

```
$ cp /usr/share/dict/words ~/my_files/
$ ls -la ~/my_files/ ~/.sekrit_files/
~/my_files/:
.rw-r--r-- 5.0M bexelbie 19 Jul 17:48 words

~/.sekrit_files/:
.r--------@ 402 bexelbie 19 Jul 17:39 gocryptfs.conf
.r--------@ 16 bexelbie 19 Jul 17:39 gocryptfs.diriv
.rw-r--r--@ 5.0M bexelbie 19 Jul 17:48 xAQrtlyYSFeCN5w7O3-9zg
```

Notice that there is a new file in the **~/.sekrit_files** directory. This is the encrypted copy of the dictionary you copied in (the file name will vary). Feel free to use **cat** and other tools to examine these files and experiment with adding, deleting, and modifying files. Make sure to test with a few applications, such as LibreOffice.

Remember, this is a filesystem mount, so the contents of **~/my_files** aren't saved to disk. You can verify this by running **mount | grep my_files** and observing the output. Only the encrypted files are written to your disk. The FUSE interface is doing real-time encryption and decryption of the files and presenting them to your applications and shell as a filesystem.

### Unmount the filesystem

When you're done with your files, you can unmount them. This causes the unencrypted filesystem to no longer be available. The encrypted files in **~/.sekrit_files** are unaffected. Unmount the filesystem using the FUSE mounter program with **fusermount -u ~/my_files**.

### Back up your data

One of the cool benefits of gocryptfs using file-level encryption is that it makes backing up your encrypted data easier. The files are safe to store on a synchronizing system, such as OwnCloud or Dropbox. The standard disclaimer about not modifying the same file at the same time applies. However, the files can be backed up even if they are mounted. You can also save your data any other way you would typically back up files. You don't need anything special.

When you do backups, make sure to include the **gocryptfs.diriv** file. This file is not a secret and can be saved with the backup. However, your **gocryptfs.conf** is a secret. When you control the entirety of the backup chain, such as with tape, you can back it up with the rest of the files. However, when the files are backed up to the cloud or publicly, you may wish to omit this file. In theory, if someone gets this file, the only thing protecting your files is the strength of your password. If you have chosen a [strong password][6], that may be enough; however, you need to consider your situation carefully. More details are in this gocryptfs [upstream issue][7].
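For instance, a minimal sketch of a cloud-bound backup that keeps **gocryptfs.diriv** but omits the secret **gocryptfs.conf** could use tar's exclude option (this assumes GNU tar's `--exclude` semantics; the directory name is the one used in this article, and the archive name is arbitrary):

```shell
# Archive the encrypted tree for an untrusted destination, leaving out
# the secret gocryptfs.conf. Remember: without gocryptfs.conf you will
# need the master key (or a separately stored copy of the config) to
# ever decrypt this backup.
tar czf sekrit-backup.tar.gz \
    --exclude='gocryptfs.conf' \
    -C "$HOME" .sekrit_files
```

Verify the archive with `tar tzf sekrit-backup.tar.gz` before trusting it: **gocryptfs.diriv** and the encrypted blobs should be listed, **gocryptfs.conf** should not.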
### Bonus: Reverse mode

A neat feature of gocryptfs is the reverse mode function. In reverse mode, point gocryptfs at your unencrypted data, and it will create a mount point with an encrypted view of this data. This is useful for things such as creating encrypted backups. This is easy to do:

```
$ gocryptfs -reverse -init my_files
Choose a password for protecting your files.
Password:
Repeat:

Your master key is:

XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX-
XXXXXXXX-XXXXXXXX-XXXXXXXX-XXXXXXXX

If the gocryptfs.conf file becomes corrupted or you ever forget your password,
there is only one hope for recovery: The master key. Print it to a piece of
paper and store it in a drawer. This message is only printed once.
The gocryptfs-reverse filesystem has been created successfully.
You can now mount it using: gocryptfs -reverse my_files MOUNTPOINT

$ gocryptfs -reverse my_files sekrit_files
Password:
Decrypting master key
Filesystem mounted and ready.
```

Now **sekrit_files** contains an encrypted view of your unencrypted data from **my_files**. This can be backed up, shared, or handled as needed. The directory is read-only, as there is nothing useful you can do with those files except back them up.

A new file, **.gocryptfs.reverse.conf**, has been added to **my_files** to provide a stable encrypted view. This configuration file will ensure that each reverse mount will use the same encryption key. This way you could, for example, back up only changed files.

Gocryptfs is a flexible file encryption tool that allows you to store your data in an encrypted manner without changing your workflow or processes significantly. The design has undergone a security audit, and the developers have experience with other systems, such as **encfs**. I encourage you to add gocryptfs to your system today and start protecting your data.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-encrypt-files-gocryptfs

作者:[Brian "bex" Exelbierd][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bexelbie
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://nuetzlich.net/gocryptfs/
[3]: https://en.wikipedia.org/wiki/Filesystem_in_Userspace
[4]: https://getfedora.org
[5]: https://nuetzlich.net/gocryptfs/quickstart/
[6]: https://github.com/rfjakob/gocryptfs/wiki/Password-Strength
[7]: https://github.com/rfjakob/gocryptfs/issues/50
147 sources/tech/20190816 How to plan your next IT career move.md Normal file
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to plan your next IT career move)
[#]: via: (https://opensource.com/article/19/8/plan-next-IT-career-move)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)

How to plan your next IT career move
======
Ask yourself these essential career questions about cloud, DevOps, coding, and where you're going next in IT.

![Two hands holding a resume with computer, clock, and desk chair][1]
Being part of technology-oriented communities has been an essential part of my career development. The first community that made a difference for me was focused on virtualization. Less than a year into my first career-related job, I met a group of friends who were significant contributors to this "vCommunity," and I found their enthusiasm to be contagious. That began our daily "nerd herd," where a handful of us met nearly every day for coffee before our shifts in tech support. We often discussed the latest software releases or hardware specs of a related storage array, and other times we strategized about how we could help each other grow in our careers.

> Any community worth being a part of will lift you up as you lift others up in the process.

In those years, I learned a foundational truth that's as true for me today as it was then: Any community worth being a part of will lift you up as you lift others up in the process.

![me with friends at New England VTUG meeting][2]

Matthew with friends [Jonathan][3] and [Paul][4] (L-R) at a New England VTUG meeting.

We began going to conferences together, with the first major effort being the volunteer social team for a user group out of New England. We set up a Twitter account, sending play-by-plays as the event happened, but we also were there, in person, to welcome new members into our community of practice. While it wasn't my intention, finding that intersection of community and technology taught me the skills that led to my next job offer. And my story wasn't unique; many of us supported each other, and many of us advanced in our careers along the way.

While I remained connected to the vCommunity, I haven't kept up with the (mostly proprietary) technology stack we used to talk about.

My preferred technology shifted direction drastically when I fell in love with open source. It's been about five years since I knew virtualization deeply and two years since I spoke at an event centered on the topic. So it was a surprise and honor to be invited to give the opening keynote at the last edition of the [New England Virtualization Technology User Group][5]'s (VTUG) Summer Slam in July. Here's what I spoke about.

### Technology and, more importantly, employment

When I heard the user group was hosting its last-ever event, I said I'd love to be part of it. The challenge was that I didn't know how I would. While there is plenty of open source virtualization technology, I had shifted further up the stack toward applications and programming languages of late, so my technical angle wouldn't make for a good talk. The organizer said, "Good, that's what people need to hear."

Being further away from the vCommunity meant I had missed some of the context of the last few years. A noticeable amount of the community was facing unemployment. When they went to apply for a new job, there were new titles like [DevOps Engineer][6] and [SRE][7]. Not only that, I was told that a deep focus on a single vendor of proprietary virtualization technology is no longer enough. Virtualization and storage administration (my first area of expertise) appear to be the hardest hit by this shift. One story I heard at the event was that over 50% of a local user group's attendees were looking, and there was a gap in awareness of how to move forward.

So while I enjoy having lighthearted conversations with people learning to contribute to open source, this talk was different. It had more to do with people's lives than usual. The stakes were higher.
### 3 trends worth exploring

There are a thousand ways to summarize the huge waves of change that are taking place in the tech industry. In my talk, I offered the idea that cloud, DevOps, and coding are three distinct trends making some of those waves and worth considering as you plan the next steps in your IT-oriented career.

  * Cloud, including the new operational model of IT that is often Kubernetes-based
  * DevOps, which rejects the silos, ticket systems, and blame of legacy IT departments
  * Coding, including the practice of infrastructure as code (accompanied by a ton of YAML)

I imagine them as literal waves, crashing down upon the ships of the old way to make way for the new.

![Adaption of The Great Wave Off Kanagawa][8]

Adaption of [The Great Wave Off Kanagawa][9]

We have two mutually exclusive options as we consider how to respond to these shifts. We can paddle hard, feeling like we're going against the current, or we can stay put and be crushed by the wave. One is uncomfortable in the short term, while the other is more comfortable for now. Only one option is survivable. It sounds scary, and I'm okay with that. The stakes are real.

![Adaption of The Great Wave Off Kanagawa][10]

Adaption of [The Great Wave Off Kanagawa][9]
Cloud, DevOps, and coding are each massive topics with a ton of nuance to unpack. But if you want to retool your skills for the future, I'm confident that focusing on **any** of them will set you up for a successful next step.

### Finding the right adoption timeline

One of the most challenging aspects of this is the sheer onslaught of information. It's reasonable to ask what you should learn, specifically, and when. I'm reminded of the work of [Nassim Taleb][11] who, among his deeply thoughtful insights into risk, makes note of a powerful concept:

> "The longer a technology lives, the longer it can be expected to live."
> – Nassim Taleb, [_Antifragile_][12] (2012)

This sticking power of technology may offer insight into the right time to jump on a wave of newness. It doesn't have to be right away, given that early adopters may find their efforts don't have enough stick-to-it-ness to linger beyond a passing trend. That aligns well with my style: I'm rarely an early adopter, and I'm comfortable with that fact. I leave a lot of the early testing and debugging of new projects to those who are excited by the uncertainty of it all, and I'll be around for the phase when the brilliant idea needs to be refined (or, as [Simon Wardley][13] puts it, I prefer the [Settler phase over the Pioneer one][14]). That also aligns well with the perspective of most admin-centric professionals I know. They're wary of new things because they know saying yes to something in production is easier than supporting it after it gets there.

![One theory on when to adopt technology as its being displaced][15]

What I also love about Taleb's words is they offer a reasonable equation to make sure you're not the last to adopt a technology. Why not be last? Because you'll be so far behind that no one will want to hire you.

So what does that equation look like? I think it's taking Taleb's theory, more broadly called the [Lindy effect][16], and doing the math: you can expect that any technology will be in existence at least as long as it existed before a competitor came into play. So if technology X existed for 30 years before Y threatened its dominance, you can expect X to be in existence for another 30 years (even if Y is way more popular, powerful, and trendy). It takes a long time for technology to "die."

My observation is more of a half-life of that concept: you can expect broad adoption of technology Y halfway through the adoption curve. By that point, companies hiring will start to signal they want this knowledge on their team, and it's reasonable for even the most skeptical of sysadmins to learn said technology. In practice, that may look like this, where ETOA is the estimated time of mass adoption:

![IP4 to IP6 estimated time of mass adoption][17]

Many would love for IPv6 to be widely adopted sooner than 2027, and this theory offers a potential reason why it takes so long. Change is going somewhere, but the pace is more aligned with the Lindy effect than with those people's expectations.
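The half-life idea above boils down to simple arithmetic. Here is a minimal sketch of it; the years are illustrative assumptions of mine, not figures from the talk:

```shell
# Lindy-style estimate: an incumbent that ran unchallenged from year_in
# until a competitor appeared in year_comp can be expected to persist
# roughly (year_comp - year_in) more years; mass adoption of the
# competitor (the ETOA) lands about halfway through that window.
year_in=1983     # incumbent introduced (illustrative)
year_comp=1998   # competitor introduced (illustrative)
head_start=$((year_comp - year_in))
echo "incumbent head start: $head_start years"
echo "incumbent expected to persist until: $((year_comp + head_start))"
echo "estimated time of mass adoption:     $((year_comp + head_start / 2))"
```

With these sample dates the ETOA works out to 2005; plug in whatever introduction dates you believe for a given pair of technologies.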
To paraphrase statistician [George Box][18], "all models are wrong, but some are more useful than others." Taleb's adaptation of the Lindy effect helps me think about how I need to prioritize my learning relative to the larger waves of change happening in the industry.

### Asking the right questions

The thing I cannot say often enough is that _people who have IT admin skills have a ton of options when they look at the industry_.

Every person who has been an admin has had to learn new technology. They excel at it. And while mastery takes time, a decent amount of familiarity and a willingness to learn are very hirable skills. Learning a new technology and inspecting its operational model to anticipate its failures in production are hugely valuable to any project. I look at open source software projects on GitHub and GitLab regularly, and many are looking for feedback on how to get their projects ready for production environments. Admins are experts at operational improvement.

All that said, it can still be paralyzing to decide what to learn. When people are feeling stuck, I recommend asking yourself these questions to jumpstart your thinking:

  1. What technology do you want to know?
  2. What's your next career?

The first question is full of important reminders. First off, many of us in IT have the privilege of choosing to study the things that bring us joy to learn. It's a wonderful feeling to be excited by our work, and I love when I see that in someone I'm mentoring.

Another favorite takeaway is that no one is born knowing any specific technology. All technology skills are learned skills. So, your back-of-the-mind thought that "I don't understand that" is often masking the worry that "I can't learn it." Investigate further, and you'll find that your nagging sense of impossibility is easily disproven by everything you've accomplished until this point. I find a gentle and regular reminder that all technology is learned is a great place to start. You've done this before; you will do so again.

Here's one more piece of advice: stick with one thing and go deep. Skills—and the stories we tell about them to potential employers—are more interesting the deeper we can go. Therefore, when you're learning to code in your language of choice, find a way to build something that you can talk about at length in a job interview. Maybe it's an Ansible module in Python or a Go module for Terraform. Either one is a lot more powerful than saying you can code Hello World in seven languages.

There is also power in asking yourself what your next career will be. It's a reminder that you have one and, to survive and advance, you have to continue learning. What got you here will not get you where you're going next.

It's freeing to find that your next career can be an evolution of what you know now or a doubling-down on something much larger. I advocate for evolutionary, not revolutionary. There is a lot of foundational knowledge in everything we know, and it can be powerful to us and the story we tell others when we stay connected to our past.

### Community is key

All careers evolve and skills develop. Many of us are drawn to IT because it requires continual learning. Know that you can do it, and stick with your community to help you along the way.

If you're looking for a way to apply your background in IT administration and you have an open source story to tell, we would love to help you share it. Read our [information for writers][19] to learn how.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/plan-next-IT-career-move

作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mbbroberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
[2]: https://opensource.com/sites/default/files/uploads/vtug.png (Matthew with friends at New England VTUG meeting)
[3]: https://twitter.com/jfrappier
[4]: https://twitter.com/paulbraren
[5]: https://vtug.com/
[6]: https://opensource.com/article/19/7/how-transition-career-devops-engineer
[7]: https://opensource.com/article/19/7/sysadmins-vs-sres
[8]: https://opensource.com/sites/default/files/uploads/greatwave.png (Adaption of The Great Wave Off Kanagawa)
[9]: https://en.wikipedia.org/wiki/The_Great_Wave_off_Kanagawa
[10]: https://opensource.com/sites/default/files/uploads/greatwave2.png (Adaption of The Great Wave Off Kanagawa)
[11]: https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
[12]: https://en.wikipedia.org/wiki/Antifragile
[13]: https://twitter.com/swardley
[14]: https://blog.gardeviance.org/2015/03/on-pioneers-settlers-town-planners-and.html
[15]: https://opensource.com/sites/default/files/articles/displacing-technology-lindy-effect-opensource.com_.png (One theory on when to adopt technology as its being displaced)
[16]: https://en.wikipedia.org/wiki/Lindy_effect
[17]: https://opensource.com/sites/default/files/uploads/ip4-to-etoa.png (IP4 to IP6 estimated time of mass adoption)
[18]: https://en.wikipedia.org/wiki/George_E._P._Box
[19]: https://opensource.com/how-submit-article
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Share Your Keyboard and Mouse Between Linux and Raspberry Pi)
[#]: via: (https://itsfoss.com/keyboard-mouse-sharing-between-computers/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Share Your Keyboard and Mouse Between Linux and Raspberry Pi
======

_**This DIY tutorial teaches you to share a mouse and keyboard between multiple computers using the open source software Barrier.**_

I have a multi-monitor setup where my [Dell XPS running Ubuntu][1] is connected to two external monitors. I recently got a [Raspberry Pi 4][2] that has the capability to double up as a desktop. I bought a new screen so that I could set it up for monitoring the performance of my cloud servers.

Now the problem is that I have four screens and only one keyboard and mouse. I could use a new keyboard-mouse pair, but my desk doesn’t have enough free space, and it’s not very convenient to switch keyboards and mice all the time.

One way to tackle this problem would be to buy a KVM switch. This is a handy gadget that allows you to use the same display screen, keyboard, and mouse between several computers running various operating systems. You can easily find one for around $30 on Amazon.
[UGREEN USB Switch Selector 2 Computers Sharing 4 USB Devices USB 2.0 Peripheral Switcher Box Hub for Mouse, Keyboard, Scanner, Printer, PCs with One-Button Swapping and 2 Pack USB A to A Cable][3]
But I didn’t go for the hardware solution. I opted for a software based approach to share the keyboard and mouse between computers.
|
||||
|
||||
I used [Barrier][5], an open source fork of the now proprietary software [Synergy][6]. Synergy Core is still open source but you can’t get encryption option in its GUI. With all its limitation, Barrier works fine for me.
|
||||
|
||||
Let’s see how you can use Barrier to share mouse and keyboard with multiple computers. Did I mention that you can even share clipboard and thus copy paste text between the computers?
|
||||
|
||||
### Set up Barrier to share keyboard and mouse between Linux and Raspberry Pi or other devices
|
||||
|
||||
![][7]
|
||||
|
||||
I have prepared this tutorial with Ubuntu 18.04.3 and Raspbian 10. Some installation instructions may differ based on your distribution and version, but you’ll get the idea of what you need to do here.
|
||||
|
||||
#### Step 1: Install Barrier
|
||||
|
||||
The first step is obvious: you need to install Barrier on your computer.
|
||||
|
||||
Barrier is available in the universe repository starting with Ubuntu 19.04, so you can easily install it using the apt command.
|
||||
|
||||
You’ll have to use the Snap version of Barrier in Ubuntu 18.04. Open the Software Center and search for Barrier. I recommend using the barrier-maxiberta package.
|
||||
|
||||
![Install this Barrier version][8]
|
||||
|
||||
On other distributions, you should [enable Snap][9] first and then use this command:
|
||||
|
||||
```
|
||||
sudo snap install barrier-maxiberta
|
||||
```
|
||||
|
||||
Barrier is available in the Debian 10 repositories, so installing Barrier on Raspbian was easy with the [apt command][10]:
|
||||
|
||||
```
|
||||
sudo apt install barrier
|
||||
```
|
||||
|
||||
Once you have installed the software, it’s time to configure it.
|
||||
|
||||
#### Step 2: Configure Barrier server
|
||||
|
||||
Barrier works on a server-client model. You should configure your main computer as the server and the secondary computer as the client.
|
||||
|
||||
In my case, Ubuntu 18.04 runs on my main system, so I set it up as the server. Search for Barrier in the menu and start it.
|
||||
|
||||
![Setup Barrier as server][12]
|
||||
|
||||
You should see an IP address and an SSL fingerprint. You are not entirely done yet, because the server still needs a little configuration. Click on the Configure Server option.
|
||||
|
||||
![Configure the Barrier server][13]
|
||||
|
||||
Here, you should see your own system in the center. Now you have to drag and drop the computer icon from the top right to a suitable position. The position is important because that’s how your mouse pointer will move between screens.
|
||||
|
||||
![Setup Barrier server with client screens][14]
|
||||
|
||||
Do note that you should provide the [hostname][15] of the client computer. In my case, it was raspberrypi. It won’t work if the hostname is not correct. Don’t know the client’s hostname? Don’t worry, you can get it from the client system.
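If you are unsure of the client’s hostname, you can print it from a terminal on the client. A quick check using standard commands (nothing Barrier-specific):

```shell
# Print the client machine's hostname; this is the name you
# enter for the client screen in Barrier's server layout.
hostname

# On systemd-based distributions you can also query the static hostname:
hostnamectl --static 2>/dev/null || true
```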
|
||||
|
||||
#### Step 3: Set up Barrier client
|
||||
|
||||
On the second computer, start Barrier and choose to use it as a client.
|
||||
|
||||
![Setup Barrier Client on Raspberry Pi][16]
|
||||
|
||||
You need to provide the IP address of the Barrier server. You can find this IP address in the Barrier application running on the main system (see the screenshots in the previous section).
|
||||
|
||||
![Setup Barrier Client on Raspberry Pi][17]
|
||||
|
||||
If you see an option to accept a secure connection from another computer, accept it.
|
||||
|
||||
You should now be able to move your mouse pointer between the screens connected to two different computers running two different operating systems. How cool is that!
|
||||
|
||||
### Optional: Autostart Barrier [Intermediate to Advanced Users]
|
||||
|
||||
Now that you have set up Barrier and are enjoying the same mouse and keyboard across multiple computers, what happens when you reboot a system? You need to start Barrier on both systems again, right? This also means that you need to connect a keyboard and mouse to the second computer.
|
||||
|
||||
Since I use a wireless mouse and keyboard, this is still easy: all I need to do is take the adapter from my laptop and plug it into the Raspberry Pi. This works, but I don’t want this extra step. This is why I made Barrier run at startup on both systems, so that I can use the same mouse and keyboard without any additional step.
|
||||
|
||||
There is no autostart option in the Barrier application, but it’s easy to [add an application to autostart in Ubuntu][19]. Just open the Startup Applications program and add the command _**barrier-maxiberta.barrier**_ there.
|
||||
|
||||
![Adding Barrier To Startup applications in Ubuntu][20]
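If you prefer the terminal over the Startup Applications GUI, you can create the autostart entry by hand. A minimal sketch, assuming the Snap command is barrier-maxiberta.barrier as used above:

```shell
# Create a freedesktop autostart entry so Barrier launches at login.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/barrier.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Barrier
Exec=barrier-maxiberta.barrier
EOF
```

Desktop environments that follow the freedesktop autostart spec (GNOME, KDE, Xfce) will pick this file up at the next login.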
|
||||
|
||||
That was the easy part. It’s not the same on the Raspberry Pi, though. Since Raspbian uses systemd, you can use it to create a new service that will run at boot time.
|
||||
|
||||
Open a terminal and create a new file named barrier.service in the /etc/systemd/system directory. If this directory doesn’t exist, create it. You can use your favorite command line text editor for this task; I used Vim here.
|
||||
|
||||
```
|
||||
sudo vim /etc/systemd/system/barrier.service
|
||||
```
|
||||
|
||||
Now add lines like these to your file. _**You must replace 192.168.0.109 with your barrier server’s IP address.**_
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Barrier Client mouse/keyboard share
|
||||
Requires=display-manager.service
|
||||
After=display-manager.service
|
||||
StartLimitIntervalSec=0
|
||||
|
||||
[Service]
|
||||
Type=forking
|
||||
ExecStart=/usr/bin/barrierc --no-restart --name raspberrypi --enable-crypto 192.168.0.109
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
User=pi
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
Save your file. I would advise running the command mentioned in the ExecStart line manually to see whether it works. This will save you some headache later.
|
||||
|
||||
Reload the systemd daemon:
|
||||
|
||||
```
|
||||
sudo systemctl daemon-reload
|
||||
```
|
||||
|
||||
Now start the new service:
|
||||
|
||||
```
|
||||
sudo systemctl start barrier.service
|
||||
```
|
||||
|
||||
Check its status to see if it’s running fine:
|
||||
|
||||
```
|
||||
systemctl status barrier.service
|
||||
```
|
||||
|
||||
If it works, add it to startup services:
|
||||
|
||||
```
|
||||
sudo systemctl enable barrier.service
|
||||
```
|
||||
|
||||
This should take care of things for you. Now you should be able to control the Raspberry Pi (or any other second computer) with a single keyboard-mouse pair.
|
||||
|
||||
I know that this DIY stuff may not work straightforwardly for everyone, so if you face issues, let me know in the comments and I’ll try to help you out.
|
||||
|
||||
If it worked for you or if you use some other solution to share the mouse and keyboard between the computers, do mention it in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/keyboard-mouse-sharing-between-computers/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/dell-xps-13-ubuntu-review/
|
||||
[2]: https://itsfoss.com/raspberry-pi-4/
|
||||
[3]: https://www.amazon.com/UGREEN-Selector-Computers-Peripheral-One-Button/dp/B01MXXQKGM?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01MXXQKGM&keywords=kvm%20switch (UGREEN USB Switch Selector 2 Computers Sharing 4 USB Devices USB 2.0 Peripheral Switcher Box Hub for Mouse, Keyboard, Scanner, Printer, PCs with One-Button Swapping and 2 Pack USB A to A Cable)
|
||||
[4]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime)
|
||||
[5]: https://github.com/debauchee/barrier
|
||||
[6]: https://symless.com/synergy
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/Share-Keyboard-and-Mouse.jpg?resize=800%2C450&ssl=1
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-ubuntu-snap.jpg?ssl=1
|
||||
[9]: https://itsfoss.com/install-snap-linux/
|
||||
[10]: https://itsfoss.com/apt-command-guide/
|
||||
[11]: https://itsfoss.com/fix-application-installation-issues-elementary-os-loki/
|
||||
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-2.jpg?resize=800%2C512&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-server-ubuntu.png?resize=800%2C450&ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/barrier-server-configuration.png?ssl=1
|
||||
[15]: https://itsfoss.com/change-hostname-ubuntu/
|
||||
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/setup-barrier-client.jpg?resize=800%2C400&ssl=1
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/setup-barrier-client-2.jpg?resize=800%2C400&ssl=1
|
||||
[18]: https://itsfoss.com/fix-windows-updates-stuck-0/
|
||||
[19]: https://itsfoss.com/manage-startup-applications-ubuntu/
|
||||
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/adding-barrier-to-startup-apps-ubuntu.jpg?ssl=1
|
@ -0,0 +1,321 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Create Availability Zones in OpenStack from Command Line)
|
||||
[#]: via: (https://www.linuxtechi.com/create-availability-zones-openstack-command-line/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Create Availability Zones in OpenStack from Command Line
|
||||
======
|
||||
|
||||
In **OpenStack** terminology, an **Availability Zone (AZ)** is defined as a logical partition of the compute (Nova), block storage (Cinder) and network (Neutron) services. Availability zones are required to segregate the workloads of environments like production and non-production; let me elaborate on this statement.
|
||||
|
||||
[![Availability-Zones-OpenStack-Command-Line][1]][2]
|
||||
|
||||
Let’s suppose we have a tenant in OpenStack who wants to deploy their VMs in production and non-production. To create this type of setup in OpenStack, first we have to identify which compute nodes will be considered production and which non-production, then we have to create host aggregate groups, add the compute nodes to those host aggregate groups, and finally map each host aggregate group to an availability zone.
|
||||
|
||||
In this tutorial, we will demonstrate how to create and use compute availability zones in OpenStack via the command line.
|
||||
|
||||
### Creating compute availability zones
|
||||
|
||||
Whenever OpenStack is deployed, nova is the default availability zone (AZ), created automatically, and all compute nodes belong to the nova AZ. Run the below openstack command from the controller node to list availability zones:
|
||||
|
||||
```
|
||||
~# source openrc
|
||||
~# openstack availability zone list
|
||||
+-----------+-------------+
|
||||
| Zone Name | Zone Status |
|
||||
+-----------+-------------+
|
||||
| internal | available |
|
||||
| nova | available |
|
||||
| nova | available |
|
||||
| nova | available |
|
||||
+-----------+-------------+
|
||||
~#
|
||||
```
|
||||
|
||||
To list only the compute availability zones, run the beneath openstack command:
|
||||
|
||||
```
|
||||
~# openstack availability zone list --compute
|
||||
+-----------+-------------+
|
||||
| Zone Name | Zone Status |
|
||||
+-----------+-------------+
|
||||
| internal | available |
|
||||
| nova | available |
|
||||
+-----------+-------------+
|
||||
~#
|
||||
```
|
||||
|
||||
To list all compute hosts which are mapped to the nova availability zone, execute the below command:
|
||||
|
||||
```
|
||||
~# openstack host list | grep -E "Zone|nova"
|
||||
| Host Name | Service | Zone |
|
||||
| compute-0-1 | compute | nova |
|
||||
| compute-0-2 | compute | nova |
|
||||
| compute-0-4 | compute | nova |
|
||||
| compute-0-3 | compute | nova |
|
||||
| compute-0-8 | compute | nova |
|
||||
| compute-0-6 | compute | nova |
|
||||
| compute-0-9 | compute | nova |
|
||||
| compute-0-5 | compute | nova |
|
||||
| compute-0-7 | compute | nova |
|
||||
~#
|
||||
```
|
||||
|
||||
Let’s create two host aggregate groups named **production** and **non-production**, add compute 4, 5 & 6 to the production host aggregate group, and compute 7, 8 & 9 to the non-production host aggregate group.
|
||||
|
||||
Create the production and non-production host aggregates using the following OpenStack commands:
|
||||
|
||||
```
|
||||
~# openstack aggregate create production
|
||||
+-------------------+----------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------+
|
||||
| availability_zone | None |
|
||||
| created_at | 2019-08-17T03:02:41.561259 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| id | 7 |
|
||||
| name | production |
|
||||
| updated_at | None |
|
||||
+-------------------+----------------------------+
|
||||
|
||||
~# openstack aggregate create non-production
|
||||
+-------------------+----------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------+
|
||||
| availability_zone | None |
|
||||
| created_at | 2019-08-17T03:02:53.806713 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| id | 10 |
|
||||
| name | non-production |
|
||||
| updated_at | None |
|
||||
+-------------------+----------------------------+
|
||||
~#
|
||||
```
|
||||
|
||||
Now create the availability zones and associate them with their respective host aggregate groups.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
# openstack aggregate set --zone <az_name> <host_aggregate_name>
|
||||
|
||||
```
|
||||
~# openstack aggregate set --zone production-az production
|
||||
~# openstack aggregate set --zone non-production-az non-production
|
||||
```
|
||||
|
||||
Finally, add the compute hosts to their host aggregate groups.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
# openstack aggregate add host <host_aggregate_name> <compute_host>
|
||||
|
||||
```
|
||||
~# openstack aggregate add host production compute-0-4
|
||||
~# openstack aggregate add host production compute-0-5
|
||||
~# openstack aggregate add host production compute-0-6
|
||||
```
|
||||
|
||||
Similarly, add compute hosts to the non-production host aggregate group:
|
||||
|
||||
```
|
||||
~# openstack aggregate add host non-production compute-0-7
|
||||
~# openstack aggregate add host non-production compute-0-8
|
||||
~# openstack aggregate add host non-production compute-0-9
|
||||
```
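The repetitive add-host calls can also be scripted. A dry-run sketch that only echoes the commands (drop the echo to actually run them on a controller node with the openstack CLI configured; host names match the example environment above):

```shell
# Generate the aggregate-population commands, one per compute host.
for host in compute-0-7 compute-0-8 compute-0-9; do
    echo openstack aggregate add host non-production "$host"
done
```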
|
||||
|
||||
Execute the beneath openstack command to verify the host aggregate groups and their availability zones:
|
||||
|
||||
```
|
||||
~# openstack aggregate list
|
||||
+----+----------------+-------------------+
|
||||
| ID | Name | Availability Zone |
|
||||
+----+----------------+-------------------+
|
||||
| 7 | production | production-az |
|
||||
| 10 | non-production | non-production-az |
|
||||
+----+----------------+-------------------+
|
||||
~#
|
||||
```
|
||||
|
||||
Run the below commands to list the computes associated with each AZ and host aggregate group:
|
||||
|
||||
```
|
||||
~# openstack aggregate show production
|
||||
+-------------------+--------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------------+
|
||||
| availability_zone | production-az |
|
||||
| created_at | 2019-08-17T03:02:42.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [u'compute-0-4', u'compute-0-5', u'compute-0-6'] |
|
||||
| id | 7 |
|
||||
| name | production |
|
||||
| properties | |
|
||||
| updated_at | None |
|
||||
+-------------------+--------------------------------------------+
|
||||
|
||||
~# openstack aggregate show non-production
|
||||
+-------------------+---------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+---------------------------------------------+
|
||||
| availability_zone | non-production-az |
|
||||
| created_at | 2019-08-17T03:02:54.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [u'compute-0-7', u'compute-0-8', u'compute-0-9'] |
|
||||
| id | 10 |
|
||||
| name | non-production |
|
||||
| properties | |
|
||||
| updated_at | None |
|
||||
+-------------------+---------------------------------------------+
|
||||
~#
|
||||
```
|
||||
|
||||
The above commands’ output confirms that we have successfully created the host aggregate groups and availability zones.
|
||||
|
||||
### Launch Virtual Machines in Availability Zones
|
||||
|
||||
Now let’s create a couple of virtual machines in these two availability zones. To create a VM in a particular availability zone, use the below command.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
# openstack server create --flavor <flavor-name> --image <Image-Name-Or-Image-ID> --nic net-id=<Network-ID> --security-group <Security-Group-ID> --key-name <Keypair-Name> --availability-zone <AZ-Name> <VM-Name>
|
||||
|
||||
An example is shown below:
|
||||
|
||||
```
|
||||
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone production-az test-vm-prod-az
|
||||
```
|
||||
|
||||
Run the below command to verify the virtual machine details:
|
||||
|
||||
```
|
||||
~# openstack server show test-vm-prod-az
|
||||
```
|
||||
|
||||
![Openstack-Server-AZ-command][1]
|
||||
|
||||
To create a virtual machine on a specific compute node in an availability zone, use the below command.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
# openstack server create --flavor <flavor-name> --image <Image-Name-Or-Image-ID> --nic net-id=<Network-ID> --security-group <Security-Group-ID> --key-name <Keypair-Name> --availability-zone <AZ-Name>:<Compute-Host> <VM-Name>
|
||||
|
||||
Let’s suppose we want to spin up a VM in the production AZ on a specific compute node (compute-0-6). To accomplish this, run the beneath command:
|
||||
|
||||
```
|
||||
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone production-az:compute-0-6 test-vm-prod-az-host
|
||||
```
|
||||
|
||||
Execute following command to verify the VM details:
|
||||
|
||||
```
|
||||
~# openstack server show test-vm-prod-az-host
|
||||
```
|
||||
|
||||
The output of the above command would be something like below:
|
||||
|
||||
![OpenStack-VM-AZ-Specific-Host][1]
|
||||
|
||||
Similarly, we can create virtual machines in the non-production AZ; an example is shown below:
|
||||
|
||||
```
|
||||
~# openstack server create --flavor m1.small --image Cirros --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --security-group f8dda7c3-f7c3-423b-923a-2b21fe0bbf3c --key-name mykey --availability-zone non-production-az vm-nonprod-az
|
||||
```
|
||||
|
||||
Use the below command to verify the VM details:
|
||||
|
||||
```
|
||||
~# openstack server show vm-nonprod-az
|
||||
```
|
||||
|
||||
The output of the above command would be something like below:
|
||||
|
||||
![OpenStack-Non-Production-AZ-VM][1]
|
||||
|
||||
### Removing Host aggregate group and Availability Zones
|
||||
|
||||
Let’s suppose we want to remove/delete the above-created host aggregate groups and availability zones. For that, first we must remove the hosts from the host aggregate group. Use the below command:
|
||||
|
||||
```
|
||||
~# openstack aggregate show production
|
||||
```
|
||||
|
||||
The above command will give you the list of compute hosts which are added to the production host aggregate group.
|
||||
|
||||
Use the below command to remove a host from the host aggregate group.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
# openstack aggregate remove host <host-aggregate-name> <compute-name>
|
||||
|
||||
```
|
||||
~# openstack aggregate remove host production compute-0-4
|
||||
~# openstack aggregate remove host production compute-0-5
|
||||
~# openstack aggregate remove host production compute-0-6
|
||||
```
|
||||
|
||||
Once you have removed all hosts from the group, re-run the below command:
|
||||
|
||||
```
|
||||
~# openstack aggregate show production
|
||||
+-------------------+----------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------+
|
||||
| availability_zone | production-az |
|
||||
| created_at | 2019-08-17T03:02:42.000000 |
|
||||
| deleted | False |
|
||||
| deleted_at | None |
|
||||
| hosts | [] |
|
||||
| id | 7 |
|
||||
| name | production |
|
||||
| properties | |
|
||||
| updated_at | None |
|
||||
+-------------------+----------------------------+
|
||||
~#
|
||||
```
|
||||
|
||||
As we can see in the above output, there is no compute host associated with the production host aggregate group, so now we are good to remove the group.
|
||||
|
||||
Use the below command to delete the host aggregate group and its associated availability zone:
|
||||
|
||||
```
|
||||
~# openstack aggregate delete production
|
||||
```
|
||||
|
||||
Run the following command to check whether the availability zone has been removed:
|
||||
|
||||
```
|
||||
~# openstack availability zone list | grep -i production-az
|
||||
~#
|
||||
```
|
||||
|
||||
Similarly, you can refer to the above steps to remove or delete the non-production host aggregate and its availability zone.
|
||||
|
||||
That’s all for this tutorial. In case the above content helps you to understand OpenStack host aggregates and availability zones, please do share your feedback and comments.
|
||||
|
||||
**Read Also: [Top 30 OpenStack Interview Questions and Answers][3]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/create-availability-zones-openstack-command-line/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/08/Availability-Zones-OpenStack-Command-Line.jpg
|
||||
[3]: https://www.linuxtechi.com/openstack-interview-questions-answers/
|
353
sources/tech/20190819 An introduction to bpftrace for Linux.md
Normal file
353
sources/tech/20190819 An introduction to bpftrace for Linux.md
Normal file
@ -0,0 +1,353 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (An introduction to bpftrace for Linux)
|
||||
[#]: via: (https://opensource.com/article/19/8/introduction-bpftrace)
|
||||
[#]: author: (Brendan Gregg https://opensource.com/users/brendanghttps://opensource.com/users/marcobravo)
|
||||
|
||||
An introduction to bpftrace for Linux
|
||||
======
|
||||
New Linux tracer analyzes production performance problems and troubleshoots software.
|
||||
![Linux keys on the keyboard for a desktop computer][1]
|
||||
|
||||
Bpftrace is a new open source tracer for Linux for analyzing production performance problems and troubleshooting software. Its users and contributors include Netflix, Facebook, Red Hat, Shopify, and others, and it was created by [Alastair Robertson][2], a talented UK-based developer who has won various coding competitions.
|
||||
|
||||
Linux already has many performance tools, but they are often counter-based and have limited visibility. For example, [iostat(1)][3] or a monitoring agent may tell you your average disk latency, but not the distribution of this latency. Distributions can reveal multiple modes or outliers, either of which may be the real cause of your performance problems. [Bpftrace][4] is suited for this kind of analysis: decomposing metrics into distributions or per-event logs and creating new metrics for visibility into blind spots.
|
||||
|
||||
You can use bpftrace via one-liners or scripts, and it ships with many prewritten tools. Here is an example that traces the distribution of read latency for PID 30153 and shows it as a power-of-two histogram:
|
||||
|
||||
|
||||
```
|
||||
# bpftrace -e 'kprobe:vfs_read /pid == 30153/ { @start[tid] = nsecs; }
|
||||
kretprobe:vfs_read /@start[tid]/ { @ns = hist(nsecs - @start[tid]); delete(@start[tid]); }'
|
||||
Attaching 2 probes...
|
||||
^C
|
||||
|
||||
@ns:
|
||||
[256, 512) 10900 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
|
||||
[512, 1k) 18291 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
|
||||
[1k, 2k) 4998 |@@@@@@@@@@@@@@ |
|
||||
[2k, 4k) 57 | |
|
||||
[4k, 8k) 117 | |
|
||||
[8k, 16k) 48 | |
|
||||
[16k, 32k) 109 | |
|
||||
[32k, 64k) 3 | |
|
||||
```
|
||||
|
||||
This example instruments one event out of thousands available. If you have some weird performance problem, there's probably some bpftrace one-liner that can shed light on it. For large environments, this ability can help you save millions. For smaller environments, it can be of more use in helping to eliminate latency outliers.
|
||||
|
||||
I [previously][5] wrote about bpftrace vs. other tracers, including [BCC][6] (BPF Compiler Collection). BCC is great for canned complex tools and agents. Bpftrace is best for short scripts and ad hoc investigations. In this article, I'll summarize the bpftrace language, variable types, probes, and tools.
|
||||
|
||||
Bpftrace uses BPF (Berkeley Packet Filter), an in-kernel execution engine that processes a virtual instruction set. BPF has been extended (aka eBPF) in recent years to provide a safe way to extend kernel functionality. It has also become a hot topic in systems engineering, with at least 24 talks on BPF at the last [Linux Plumbers Conference][7]. BPF is in the Linux kernel, and bpftrace is the best way to get started using BPF for observability.
|
||||
|
||||
See the bpftrace [INSTALL][8] guide for how to install it, and get the latest version; [0.9.2][9] was just released. For Kubernetes clusters, there is also [kubectl-trace][10] for running it.
|
||||
|
||||
### Syntax
|
||||
|
||||
|
||||
```
|
||||
probe[,probe,...] /filter/ { action }
|
||||
```
|
||||
|
||||
The probe specifies what events to instrument. The filter is optional and can filter down the events based on a boolean expression, and the action is the mini-program that runs.
|
||||
|
||||
Here's hello world:
|
||||
|
||||
|
||||
```
|
||||
# bpftrace -e 'BEGIN { printf("Hello eBPF!\n"); }'
|
||||
```
|
||||
|
||||
The probe is **BEGIN**, a special probe that runs at the beginning of the program (like awk). There's no filter. The action is a **printf()** statement.
|
||||
|
||||
Now a real example:
|
||||
|
||||
|
||||
```
|
||||
# bpftrace -e 'kretprobe:sys_read /pid == 181/ { @bytes = hist(retval); }'
|
||||
```
|
||||
|
||||
This uses a **kretprobe** to instrument the return of the **sys_read()** kernel function. If the PID is 181, a special map variable **@bytes** is populated with a log2 histogram function with the return value **retval** of **sys_read()**. This produces a histogram of the returned read size for PID 181. Is your app doing lots of one byte reads? Maybe that can be optimized.
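The power-of-two bucketing that **hist()** performs can be reproduced with standard shell tools. An illustrative sketch (the sample byte counts are made up): each value lands in the [2^n, 2^(n+1)) bucket whose lower bound is the largest power of two not exceeding it.

```shell
# Bucket sample read sizes the way bpftrace's hist() does:
# find the largest power of two <= each value, then count per bucket.
printf '%s\n' 1 3 5 9 17 100 900 | awk '
  { b = 1; while (b * 2 <= $1) b *= 2; count[b]++ }
  END { for (b in count) printf "%d %d %d\n", b, 2 * b, count[b] }' | sort -n
# Each output line is: bucket_low bucket_high count, i.e. [low, high) count
```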
|
||||
|
||||
### Probe types
|
||||
|
||||
These are libraries of related probes. The currently supported types are (more will be added):
|
||||
|
||||
Type | Description
|
||||
---|---
|
||||
**tracepoint** | Kernel static instrumentation points
|
||||
**usdt** | User-level statically defined tracing
|
||||
**kprobe** | Kernel dynamic function instrumentation
|
||||
**kretprobe** | Kernel dynamic function return instrumentation
|
||||
**uprobe** | User-level dynamic function instrumentation
|
||||
**uretprobe** | User-level dynamic function return instrumentation
|
||||
**software** | Kernel software-based events
|
||||
**hardware** | Hardware counter-based instrumentation
|
||||
**watchpoint** | Memory watchpoint events (in development)
|
||||
**profile** | Timed sampling across all CPUs
|
||||
**interval** | Timed reporting (from one CPU)
|
||||
**BEGIN** | Start of bpftrace
|
||||
**END** | End of bpftrace
|
||||
|
||||
Dynamic instrumentation (aka dynamic tracing) is the superpower that lets you trace any software function in a running binary without restarting it. This lets you get to the bottom of just about any problem. However, the functions it exposes are not considered a stable API, as they can change from one software version to another. Hence static instrumentation, where event points are hard-coded and become a stable API. When you write bpftrace programs, try to use the static types first, before the dynamic ones, so your programs are more stable.
|
||||
|
||||
### Variable types
|
||||
|
||||
Variable | Description
|
||||
---|---
|
||||
**@name** | global
|
||||
**@name[key]** | hash
|
||||
**@name[tid]** | thread-local
|
||||
**$name** | scratch
|
||||
|
||||
Variables with an **@** prefix use BPF maps, which can behave like associative arrays. They can be populated in one of two ways:
|
||||
|
||||
* Variable assignment: **@name = x;**
|
||||
* Function assignment: **@name = hist(x);**
|
||||
|
||||
|
||||
|
||||
Various map-populating functions are built in to provide quick ways to summarize data.
|
||||
|
||||
### Built-in variables and functions
|
||||
|
||||
Here are some of the built-in variables and functions, but there are many more.
|
||||
|
||||
**Built-in variables:**
|
||||
|
||||
Variable | Description
|
||||
---|---
|
||||
**pid** | process ID
|
||||
**comm** | Process or command name
|
||||
**nsecs** | Current time in nanoseconds
|
||||
**kstack** | Kernel stack trace
|
||||
**ustack** | User-level stack trace
|
||||
**arg0...argN** | Function arguments
|
||||
**args** | Tracepoint arguments
|
||||
**retval** | Function return value
|
||||
**name** | Full probe name
|
||||
|
||||
**Built-in functions:**
|
||||
|
||||
Function | Description
|
||||
---|---
|
||||
**printf("...")** | Print formatted string
|
||||
**time("...")** | Print formatted time
|
||||
**system("...")** | Run shell command
|
||||
**@ = count()** | Count events
|
||||
**@ = hist(x)** | Power-of-2 histogram for x
|
||||
**@ = lhist(x, min, max, step)** | Linear histogram for x
|
||||
|
||||
See the [reference guide][11] for details.

### One-liners tutorial

A great way to learn bpftrace is via one-liners, which I turned into a [one-liners tutorial][12] that covers the following:

Listing probes | **bpftrace -l 'tracepoint:syscalls:sys_enter_*'**
---|---
Hello world | **bpftrace -e 'BEGIN { printf("hello world\n") }'**
File opens | **bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)) }'**
Syscall counts by process | **bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count() }'**
Distribution of read() bytes | **bpftrace -e 'tracepoint:syscalls:sys_exit_read /pid == 18644/ { @bytes = hist(args->ret) }'**
Kernel dynamic tracing of read() bytes | **bpftrace -e 'kretprobe:vfs_read { @bytes = lhist(retval, 0, 2000, 200) }'**
Timing read()s | **bpftrace -e 'kprobe:vfs_read { @start[tid] = nsecs } kretprobe:vfs_read /@start[tid]/ { @ns[comm] = hist(nsecs - @start[tid]); delete(@start[tid]) }'**
Count process-level events | **bpftrace -e 'tracepoint:sched:sched* { @[name] = count() } interval:s:5 { exit() }'**
Profile on-CPU kernel stacks | **bpftrace -e 'profile:hz:99 { @[stack] = count() }'**
Scheduler tracing | **bpftrace -e 'tracepoint:sched:sched_switch { @[stack] = count() }'**
Block I/O tracing | **bpftrace -e 'tracepoint:block:block_rq_issue { @ = hist(args->bytes); }'**
Kernel struct tracing (a script, not a one-liner) | Command: **bpftrace path.bt**, where the path.bt file is:

**#include <linux/path.h>
#include <linux/dcache.h>**

**kprobe:vfs_open { printf("open path: %s\n", str(((path *)arg0)->dentry->d_name.name)); }**

See the tutorial for an explanation of each.
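
Several of these one-liners share one idea: aggregate events into a map keyed by some string, as in `@[comm] = count()`. A tiny Python sketch of that pattern (illustrative only, not bpftrace code, with faked process names standing in for events):

```python
from collections import defaultdict

# The idea behind @[comm] = count(): every incoming event increments a
# counter keyed by a string (here, a fake process name per event).
counts = defaultdict(int)
for comm in ["bash", "sshd", "bash", "cron", "bash"]:
    counts[comm] += 1

# counts now maps each name to how often it appeared, like the map
# bpftrace prints when the program exits.
```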

### Provided tools

Apart from one-liners, bpftrace programs can be multi-line scripts. Bpftrace ships with 28 of them as tools:

![bpftrace/eBPF tools][13]

These can be found in the **[/tools][14]** directory:

```
tools# ls *.bt
bashreadline.bt    dcsnoop.bt         oomkill.bt    syncsnoop.bt   vfscount.bt
biolatency.bt      execsnoop.bt       opensnoop.bt  syscount.bt    vfsstat.bt
biosnoop.bt        gethostlatency.bt  pidpersec.bt  tcpaccept.bt   writeback.bt
bitesize.bt        killsnoop.bt       runqlat.bt    tcpconnect.bt  xfsdist.bt
capable.bt         loads.bt           runqlen.bt    tcpdrop.bt
cpuwalk.bt         mdflush.bt         statsnoop.bt  tcpretrans.bt
```

Apart from their use in diagnosing performance issues and general troubleshooting, they also provide another way to learn bpftrace. Here are some examples.

#### Source

Here's the code to **biolatency.bt**:

```
tools# cat biolatency.bt
/*
 * biolatency.bt    Block I/O latency as a histogram.
 *                  For Linux, uses bpftrace, eBPF.
 *
 * This is a bpftrace version of the bcc tool of the same name.
 *
 * Copyright 2018 Netflix, Inc.
 * Licensed under the Apache License, Version 2.0 (the "License")
 *
 * 13-Sep-2018  Brendan Gregg   Created this.
 */

BEGIN
{
        printf("Tracing block device I/O... Hit Ctrl-C to end.\n");
}

kprobe:blk_account_io_start
{
        @start[arg0] = nsecs;
}

kprobe:blk_account_io_done
/@start[arg0]/
{
        @usecs = hist((nsecs - @start[arg0]) / 1000);
        delete(@start[arg0]);
}

END
{
        clear(@start);
}
```

It's straightforward, easy to read, and short enough to include on a slide. This version uses kernel dynamic tracing to instrument the **blk_account_io_start()** and **blk_account_io_done()** functions, and it passes a timestamp between them keyed on **arg0**. On a **kprobe**, **arg0** is the first argument to the probed function, which here is the **struct request** pointer; its memory address is used as a unique identifier.
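
The timestamp-handoff pattern in this script is worth internalizing, since many latency tools use it. Here is a rough Python sketch of the same logic (for illustration only; the function names are mine, and `req_id` stands in for the **arg0** pointer value):

```python
import time
from collections import defaultdict

start = {}                # plays the role of @start[arg0]
usecs = defaultdict(int)  # plays the role of the @usecs histogram

def bucket(x):
    # power-of-2 bucket lower bound, like bpftrace's hist()
    return 0 if x <= 0 else 1 << (x.bit_length() - 1)

def io_start(req_id):
    # like kprobe:blk_account_io_start: stash a timestamp keyed by request
    start[req_id] = time.monotonic_ns()

def io_done(req_id):
    # like kprobe:blk_account_io_done with the /@start[arg0]/ filter:
    # ignore completions whose start event we never saw
    if req_id not in start:
        return
    delta_us = (time.monotonic_ns() - start.pop(req_id)) // 1000
    usecs[bucket(delta_us)] += 1
```

The filter matters: tracing may begin while I/O is already in flight, so completions without a recorded start must be skipped, and `delete()` (here `pop`) keeps the map from growing without bound.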

#### Example files

You can see screenshots and explanations of these tools in the [GitHub repo][14] as ***_example.txt** files. For [example][15]:

```
tools# more biolatency_example.txt
Demonstrations of biolatency, the Linux BPF/bpftrace version.

This traces block I/O, and shows latency as a power-of-2 histogram. For example:

# biolatency.bt
Attaching 3 probes...
Tracing block device I/O... Hit Ctrl-C to end.
^C

@usecs:
[256, 512)             2 |                                                    |
[512, 1K)             10 |@                                                   |
[1K, 2K)             426 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2K, 4K)             230 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@                        |
[4K, 8K)               9 |@                                                   |
[8K, 16K)            128 |@@@@@@@@@@@@@@@                                     |
[16K, 32K)            68 |@@@@@@@@                                            |
[32K, 64K)             0 |                                                    |
[64K, 128K)            0 |                                                    |
[128K, 256K)          10 |@                                                   |

While tracing, this shows that 426 block I/O had a latency of between 1K and 2K
usecs (1024 and 2048 microseconds), which is between 1 and 2 milliseconds.
There are also two modes visible, one between 1 and 2 milliseconds, and another
between 8 and 16 milliseconds: this sounds like cache hits and cache misses.
There were also 10 I/O with latency 128 to 256 ms: outliers. Other tools and
instrumentation, like biosnoop.bt, can shed more light on those outliers.
[...]
```

Sometimes it can be most effective to switch straight to the example file when trying to understand these tools, since the output may be self-evident (by design!).

#### Man pages

There are also man pages for every tool in the GitHub repo under [/man/man8][16]. They include sections on the output fields and the tool's expected overhead.

```
# nroff -man man/man8/biolatency.8
biolatency(8)             System Manager's Manual            biolatency(8)

NAME
       biolatency.bt - Block I/O latency as a histogram. Uses
       bpftrace/eBPF.

SYNOPSIS
       biolatency.bt

DESCRIPTION
       This tool summarizes time (latency) spent in block device I/O
       (disk I/O) as a power-of-2 histogram. This allows the distribution
       to be studied, including modes and outliers. There are often two
       modes, one for device cache hits and one for cache misses, which
       can be shown by this tool. Latency outliers will also be shown.
[...]
```

Writing all these man pages was the least fun part of developing these tools, and some took longer to write than the tool took to develop, but it's nice to see the final result.

### bpftrace vs. BCC

Since eBPF has been merging into the kernel, most effort has been placed on the [BCC][6] frontend, which provides a BPF library and Python, C++, and Lua interfaces for writing programs. I've developed a lot of [tools][17] in BCC/Python; it works great, although coding in BCC is verbose. If you're hacking away at a performance issue, bpftrace is better for your one-off custom queries. If you're writing a tool with many command-line options, or an agent that uses Python libraries, you'll want to consider using BCC.

On the Netflix performance team, we use both: BCC for developing canned tools that others can easily use and for developing agents, and bpftrace for ad hoc analysis. The network engineering team has been using BCC to develop an agent for its needs. The security team is most interested in bpftrace for quick ad hoc instrumentation to detect zero-day vulnerabilities. And I expect the developer teams will use both without knowing it, via the self-service GUIs we are building (Vector), though occasionally they may SSH into an instance and run a canned tool or an ad hoc bpftrace one-liner.

### Learn more

  * The [bpftrace][4] repository on GitHub
  * The bpftrace [one-liners tutorial][12]
  * The bpftrace [reference guide][11]
  * The [BCC][6] repository for more complex BPF-based tools

I also have a book coming out this year that covers bpftrace: _[BPF Performance Tools: Linux System and Application Observability][18]_, to be published by Addison-Wesley, which contains many new bpftrace tools.

* * *

_Thanks to Alastair Robertson for creating bpftrace, and to the bpftrace, BCC, and BPF communities for all the work over the past five years._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/introduction-bpftrace

作者:[Brendan Gregg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brendang
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://ajor.co.uk/bpftrace/
[3]: https://linux.die.net/man/1/iostat
[4]: https://github.com/iovisor/bpftrace
[5]: http://www.brendangregg.com/blog/2018-10-08/dtrace-for-linux-2018.html
[6]: https://github.com/iovisor/bcc
[7]: https://www.linuxplumbersconf.org/
[8]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md
[9]: https://github.com/iovisor/bpftrace/releases/tag/v0.9.2
[10]: https://github.com/iovisor/kubectl-trace
[11]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
[12]: https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
[13]: https://opensource.com/sites/default/files/uploads/bpftrace_tools_early2019.png (bpftrace/eBPF tools)
[14]: https://github.com/iovisor/bpftrace/tree/master/tools
[15]: https://github.com/iovisor/bpftrace/blob/master/tools/biolatency_example.txt
[16]: https://github.com/iovisor/bcc/tree/master/man/man8
[17]: https://github.com/iovisor/bcc#tools
[18]: http://www.brendangregg.com/blog/2019-07-15/bpf-performance-tools-book.html

189
sources/tech/20190819 Moving files on Linux without mv.md
Normal file
@ -0,0 +1,189 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving files on Linux without mv)
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-without-mv)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Moving files on Linux without mv
======
Sometimes the mv command isn't the best option when you need to move a file. So how else do you do it?
![Hand putting a Linux file folder into a drawer][1]

The humble **mv** command is one of those useful tools you find on every POSIX box you encounter. Its job is clearly defined, and it does it well: move a file from one place in a file system to another. But Linux is nothing if not flexible, and there are other options for moving files. Using different tools can provide small advantages that fit perfectly with a specific use case.

Before straying too far from **mv**, take a look at this command's default results. First, create a directory and generate some files with permissions set to 777:

```
$ mkdir example
$ touch example/{foo,bar,baz}
$ for i in example/*; do ls /bin > "${i}"; done
$ chmod 777 example/*
```

You probably don't think about it this way, but files exist as entries, called index nodes (commonly known as **inodes**), in a [filesystem][2]. You can see what inode a file occupies with the [ls command][3] and its **\--inode** option:

```
$ ls --inode example/foo
7476868 example/foo
```

As a test, move that file from the example directory to your current directory and then view the file's attributes:

```
$ mv example/foo .
$ ls -l -G -g --inode
7476868 -rwxrwxrwx. 1 29545 Aug  2 07:28 foo
```

As you can see, the original file, along with its existing permissions, has been "moved", but its inode has not changed.

That's the way the **mv** tool is programmed to move a file: leave the inode unchanged (unless the file is being moved to a different filesystem), and preserve its ownership and permissions.

Other tools provide different options.
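
The inode-preserving behavior is easy to verify programmatically. Here's a minimal Python sketch (not from the article) that performs a rename inside one temporary directory, which is what **mv** does within a single filesystem:

```python
import os
import tempfile

# Within a single filesystem, a rename (what mv does) only relinks the
# directory entry: the inode number does not change. Everything happens
# in a throwaway temp directory.
d = tempfile.mkdtemp()
src = os.path.join(d, "foo")
with open(src, "w") as f:
    f.write("data")

ino_before = os.stat(src).st_ino
dst = os.path.join(d, "foo-moved")
os.rename(src, dst)              # the "move"
ino_after = os.stat(dst).st_ino  # same inode as before
```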

### Copy and remove

On some systems, the move action is a true move action: bits are removed from one point in the file system and reassigned to another. This behavior has largely fallen out of favor. Move actions are now either attribute reassignments (an inode now points to a different location in your file organization) or amalgamations of a copy action followed by a remove action. The philosophical intent of this design is to ensure that, should a move fail, a file is not left in pieces.

The **cp** command, unlike **mv**, creates a brand new data object in your filesystem. It has a new inode location, and it is subject to your active umask. You can mimic a move using the **cp** and **rm** (or [trash][4] if you have it) commands:

```
$ cp example/foo .
$ ls -l -G -g --inode
7476869 -rwxrwxr-x. 29545 Aug  2 11:58 foo
$ trash example/foo
```

The new **foo** file in this example got 775 permissions because the location's umask specifically excludes write permissions:

```
$ umask
0002
```

For more information about umask, read Alex Juarez's article about [file permissions][5].
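
The arithmetic behind that 775 result is simple: a new file's mode is the requested creation mode with the umask bits masked off. A one-line Python sketch (the helper name is mine, for illustration):

```python
# A new file's mode is the requested mode with the umask bits stripped:
# e.g. 777 & ~002 = 775, and 666 & ~002 = 664.
def apply_umask(mode, umask):
    return mode & ~umask
```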

### Cat and remove

Similar to a copy and remove, using the [cat][6] (or **tac**, for that matter) command assigns different permissions when your "moved" file is created. Assuming a fresh test environment with no **foo** in the current directory:

```
$ cat example/foo > foo
$ ls -l -G -g --inode
7476869 -rw-rw-r--. 29545 Aug  8 12:21 foo
$ trash example/foo
```

This time, a new file was created with no prior permissions set. The result is entirely subject to the umask setting, which blocks no permission bit for the user and group (the executable bit is not granted for new files regardless of umask), but it blocks the write (value two) bit for others. The result is a file with 664 permissions.

### Rsync

The **rsync** command is a robust multipurpose tool for sending files between hosts and file system locations. This command has many options available to it, including the ability to make its destination mirror its source.

You can copy and then remove a file with **rsync** using the **\--remove-source-files** option, along with whatever other options you choose to perform the synchronization (a common, general-purpose one is **\--archive**):

```
$ rsync --archive --remove-source-files example/foo .
$ ls example
bar  baz
$ ls -lGgi
7476870 -rwxrwxrwx. 1 seth users 29545 Aug  8 12:23 foo
```

Here you can see that the file's permissions and ownership were retained, the timestamp was updated, and the source file was removed.

**A word of warning:** do not confuse this option with **\--delete**, which removes files from your _destination_ directory. Misusing **\--delete** can wipe out most of your data, and it's recommended that you avoid this option except in a test environment.

You can override some of these defaults, changing permission and modification settings:

```
$ rsync --chmod=666 --times \
    --remove-source-files example/foo .
$ ls example
bar  baz
$ ls -lGgi
7476871 -rw-rw-r--. 1 seth users 29545 Aug  8 12:55 foo
```

Here, the destination's umask is respected, so the **\--chmod=666** option results in a file with 664 permissions.

The benefits go beyond just permissions, though. The **rsync** command has [many][7] useful [options][8] (not the least of which is the **\--exclude** flag, which lets you exempt items from a large move operation) that make it a more robust tool than the simple **mv** command. For example, to exclude all backup files while moving a collection of files:

```
$ rsync --chmod=666 --times \
    --exclude '*~' \
    --remove-source-files example/foo .
```
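
For comparison, here is a small Python sketch (not from the article) of the same copy-then-remove strategy: **shutil.copy2** preserves timestamps and permission bits for a single file, roughly in the spirit of **rsync \--archive**, and removing the source completes the "move" into a brand-new inode:

```python
import os
import shutil
import tempfile

# Copy-then-remove "move": copy2 carries over timestamps and mode bits,
# then the source is deleted. The destination is a new file (new inode).
d = tempfile.mkdtemp()
src = os.path.join(d, "foo")
with open(src, "w") as f:
    f.write("data")
os.utime(src, (0, 0))   # give the source a recognizable mtime

dst = os.path.join(d, "foo-copy")
shutil.copy2(src, dst)  # copies data + metadata
os.remove(src)          # the "remove" half of the move
```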

### Set permissions with install

The **install** command is a copy command specifically geared toward developers and is mostly invoked as part of the install routine of software compiling. It's not well known among users (and I do often wonder why it got such an intuitive name, leaving mere acronyms and pet names for the package managers), but **install** is actually a useful way to put files where you want them.

There are many options for the **install** command, including **\--backup** and **\--compare** (to avoid "updating" a newer copy of a file).

Unlike **cp** and **cat**, but exactly like **mv**, the **install** command can copy a file while preserving its timestamp:

```
$ install --preserve-timestamp example/foo .
$ ls -l -G -g --inode
7476869 -rwxr-xr-x. 1 29545 Aug  2 07:28 foo
$ trash example/foo
```

Here, the file was copied to a new inode, but its **mtime** did not change. The permissions, however, were set to the **install** default of **755**.

You can use **install** to set the file's permissions, owner, and group:

```
$ install --preserve-timestamp \
    --owner=skenlon \
    --group=dialout \
    --mode=666 example/foo .
$ ls -li
7476869 -rw-rw-rw-. 1 skenlon dialout 29545 Aug  2 07:28 foo
$ trash example/foo
```

### Move, copy, and remove

Files contain data, and the really important files contain _your_ data. Learning to manage them wisely is important, and now you have the toolkit to ensure that your data is handled in exactly the way you want.

Do you have a different way of managing your data? Tell us your ideas in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/moving-files-linux-without-mv

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/article/18/11/partition-format-drive-linux#what-is-a-filesystem
[3]: https://opensource.com/article/19/7/master-ls-command
[4]: https://gitlab.com/trashy
[5]: https://opensource.com/article/19/8/linux-permissions-101#umask
[6]: https://opensource.com/article/19/2/getting-started-cat-command
[7]: https://opensource.com/article/19/5/advanced-rsync
[8]: https://opensource.com/article/17/1/rsync-backup-linux

@ -0,0 +1,102 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A brief introduction to learning agility)
[#]: via: (https://opensource.com/open-organization/19/8/introduction-learning-agility)
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)

A brief introduction to learning agility
======
The ability to learn and adapt quickly isn't something our hiring algorithms typically identify. But by ignoring it, we're overlooking insightful and innovative job candidates.
![Teacher or learner?][1]

I think everyone can agree that the workplace has changed dramatically in the last decade (or is in the process of changing, depending on where you're currently working). The landscape has evolved. Distributed leadership, project-based work models, and cross-functional solution building are commonplace. In essence, the world is going [open][2].

And yet our talent acquisition strategies, development models, and internal systems have shifted little (if at all) to meet the demands these shifts in our external work have created.

In this three-part series, let's take a look at what is perhaps the game-changing key to acquisition, retention, engagement, innovation, problem-solving, and leadership in this emerging future: learning agility. We'll discuss not only what learning agility _is_, but how your organization's leaders can create space for agile learning on teams and in departments.

### Algorithmed out of opportunities

For the last decade, I've freelanced as an independent consultant. Occasionally, when the stress of entrepreneurial, project-based work gets heavy, I search out full-time positions. As I'm sure you know, job searching requires hours of research and often concludes in dead ends. On a rare occasion, you find a great fit (the culture looks right, and you have every skill the role could need and more!), except for one small thing: a specific educational degree.

More times than I can count, I've gotten "algorithmed out" of even an initial conversation about a new position. What do I mean by that, exactly?

If your specific degree (or, in my case, lack thereof) doesn't meet the one listed, the algorithmically driven job portal spits you back out. I've received a "no thank you" email within thirty seconds of hitting submit.

So why is calling this out so important?

Hiring practices have changed very little in both closed _and_ open organizations. Sticking with these outdated practices puts us in danger of overlooking amazing candidates capable of accelerating innovation and becoming amazing leaders in our organizations.

Developing more inclusive and open hiring processes will require work. For starters, it'll require focus on a key competency so often overlooked as part of more traditional, "closed" processes: learning agility.

### Just another buzzword or key performance indicator?

While "learning agility" [is not a new term][3], it's one that organizations clearly still need help taking into account. Even in open organizations, we tend to overlook this element by focusing too rigidly on a candidate's degree history or _current role_ when we should be taking a more holistic view of the individual.

One crucial element of [adaptability][4] is learning agility. It is the capacity for adapting to situations and applying knowledge from prior experience, even when you don't know what to do. In short, it's a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations.

Every experience we encounter in life can teach us something if we pay attention to it. All of these experiences are educational and useful in organizational life. In fact, as Colin Willis notes in his recent article on [informal learning][5], 70%‒80% of all job-related knowledge isn't learned in formal training programs. And yet we're conditioned to think that _only what you were paid to do in a formal role_ or _the degree you once earned_ speaks to your potential value or fit for a particular role.

Likewise, in extensive research conducted over years, Korn Ferry has shown that learning agility is also a [predictor of long-term performance][6] and leadership potential. In an [article on leadership][7], Korn Ferry notes that "individuals exhibiting high levels of learning agility can adapt quickly in unfamiliar situations and even thrive amid chaos." [Chaos][8]: now there's a word I think we would all use to describe the world we live in today.

Organizations continue to overlook this critical skill ([too few U.S. companies consider candidates without college degrees][9]) even though it's a proven component of success in a volatile, complex, ambiguous world. Why?

And as adaptability and collaboration ([two key open principles][2]) sit at the top of the list of job [skills needed in 2019][10], perhaps talent acquisition conversations should stop focusing on _how to measure adaptability_ and shift to _sourcing learning-agile people_ so problems can get solved faster.

### Learning agility has dimensions

A key to unlocking our adaptability during rapid change is learning agility. Agile people are great at integrating information from their experiences and then using that information to navigate unfamiliar situations. This complex set of skills allows us to draw patterns from one context and apply them to another context.

So when you're looking for an agile person to join your team, what exactly are you looking for?

Start with getting to know someone _beyond_ a resume, because learning-agile people have more lessons, more tools, and more solutions in their history that can be valuable when your organization is facing new challenges.

Next, understand the [five dimensions of learning agility][11], according to Korn Ferry's research.

**Mental agility:** This looks like _thinking critically to decipher complex problems and expanding possibilities by seeing new connections_.

**People agility:** This looks like _understanding and relating to other people to empower collective performance_.

**Change agility:** This looks like _experimentation, being curious, and effectively dealing with uncertainty_.

**Results agility:** This looks like _delivering results in first-time situations by inspiring teams and exhibiting a presence that builds confidence in themselves and others_.

**Self-awareness:** This looks like _the ability to reflect on oneself, knowing oneself well, and understanding how one's behaviors impact others_.

While finding someone with all these traits may seem like sourcing a unicorn, you'll find learning agility is more common than you think. In fact, your organization is likely _already_ full of agile people, but your culture and systems don't support agile learning.

In the next part of this series, we'll explore how you can tap into this crucial skill and create space for agile learning every day. Until then, do what you can to become more aware of the lessons you encounter _today_ that will help you solve problems _tomorrow_.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/8/introduction-learning-agility

作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
[2]: https://opensource.com/open-organization/resources/open-org-definition
[3]: https://www.researchgate.net/publication/321438460_Learning_agility_Its_evolution_as_a_psychological_construct_and_its_empirical_relationship_to_leader_success
[4]: https://opensource.com/open-organization/resources/open-org-maturity-model
[5]: https://opensource.com/open-organization/19/7/informal-learning-adaptability
[6]: https://cmo.cm/2TDofV4
[7]: https://focus.kornferry.com/leadership-and-talent/learning-agility-a-highly-prized-quality-in-todays-marketplace/
[8]: https://opensource.com/open-organization/19/6/innovation-delusion
[9]: https://www.cnbc.com/2018/08/16/15-companies-that-no-longer-require-employees-to-have-a-college-degree.html
[10]: https://www.weforum.org/agenda/2019/01/the-hard-and-soft-skills-to-futureproof-your-career-according-to-linkedin/
[11]: https://www.forbes.com/sites/kevincashman/2013/04/03/the-five-dimensions-of-learning-agile-leaders/#7b003b737457

@ -0,0 +1,86 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guided tour of Linux file system types)
[#]: via: (https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

A guided tour of Linux file system types
======
Linux file systems have evolved over the years, and here's a look at file system types
![Andreas Lehner / Flickr \(CC BY 2.0\)][1]

While it may not be obvious to the casual user, Linux file systems have evolved significantly over the last decade or so to make them more resistant to corruption and performance problems.

Most Linux systems today use a file system type called **ext4**. The "ext" part stands for "extended" and the 4 indicates that this is the fourth generation of this file system type. Features added over time include the ability to provide increasingly larger file systems (currently as large as 1,000,000 TiB) and much larger files (up to 16 TiB), more resistance to system crashes, and less fragmentation (scattering single files as chunks in multiple locations), which improves performance.

The **ext4** file system type also came with other improvements to performance, scalability, and capacity. Metadata and journal checksums were implemented for reliability. Timestamps now track changes down to nanoseconds for better file time-stamping (e.g., file creation and last updates). And, with two additional bits in the timestamp field, the year 2038 problem (when the digitally stored date/time fields will roll over from maximum to zero) has been put off for more than 400 years (to 2446).

### File system types

To determine the type of file system on a Linux system, use the **df** command. The **T** option in the command shown below provides the file system type. The **h** makes the disk sizes "human-readable"; in other words, it adjusts the reported units (such as M and G) in a way that makes the most sense to the people reading them.

```
$ df -hT | head -10
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  2.9G     0  2.9G   0% /dev
tmpfs          tmpfs     596M  1.5M  595M   1% /run
/dev/sda1      ext4      110G   50G   55G  48% /
/dev/sdb2      ext4      457G  642M  434G   1% /apps
tmpfs          tmpfs     3.0G     0  3.0G   0% /dev/shm
tmpfs          tmpfs     5.0M  4.0K  5.0M   1% /run/lock
tmpfs          tmpfs     3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/loop0     squashfs   89M   89M     0 100% /snap/core/7270
/dev/loop2     squashfs  142M  142M     0 100% /snap/hexchat/42
```

Notice that the **/** (root) and **/apps** file systems are both **ext4** file systems, while **/dev** is a **devtmpfs** file system – one with automated device nodes populated by the kernel. Some of the other file systems shown are **tmpfs** – temporary file systems that reside in memory and/or swap partitions – and **squashfs** – read-only compressed file systems used for snap packages.
|
||||
|
||||
There's also the **proc** file system, which stores information on running processes.
|
||||
|
||||
```
|
||||
$ df -T /proc
|
||||
Filesystem Type 1K-blocks Used Available Use% Mounted on
|
||||
proc proc 0 0 0 - /proc
|
||||
```
|
||||
|
||||
There are a number of other file system types that you might encounter as you're moving around the overall file system. When you've moved into a directory, for example, and want to ask about the related file system, you can run a command like this:
|
||||
|
||||
```
|
||||
$ cd /dev/mqueue; df -T .
|
||||
Filesystem Type 1K-blocks Used Available Use% Mounted on
|
||||
mqueue mqueue 0 0 0 - /dev/mqueue
|
||||
$ cd /sys; df -T .
|
||||
Filesystem Type 1K-blocks Used Available Use% Mounted on
|
||||
sysfs sysfs 0 0 0 - /sys
|
||||
$ cd /sys/kernel/security; df -T .
|
||||
Filesystem Type 1K-blocks Used Available Use% Mounted on
|
||||
securityfs securityfs 0 0 0 - /sys/kernel/security
|
||||
```
|
||||
|
||||
As with other Linux commands, the `.` in these commands refers to the current location in the overall file system.
|
||||
|
||||
These and other unique file-system types provide some special functions. For example, securityfs provides file system support for security modules.
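Many of these special types only show up in **df** output if something has mounted them, but on Linux you can ask the kernel which file system types it currently supports by reading /proc/filesystems. Types flagged “nodev” are the memory-based ones that aren't backed by a block device:

```shell
# device-backed types (e.g., ext4) have an empty first column;
# "nodev" marks memory-based types such as proc and tmpfs
head /proc/filesystems
```

The exact list varies with the kernel build and with which file system modules are loaded.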
|
||||
|
||||
Linux file systems need to be resistant to corruption, have the ability to survive system crashes, and provide fast and reliable performance. The improvements provided by the generations of **ext** file systems and the new generation of purpose-specific file system types have made Linux systems easier to manage and more reliable.
|
||||
|
||||
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/08/guided-tour-on-the-flaker_people-in-horse-drawn-carriage_germany-by-andreas-lehner-flickr-100808681-large.jpg
|
||||
[2]: https://www.facebook.com/NetworkWorld/
|
||||
[3]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,72 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (A project manager's guide to Ansible)
|
||||
[#]: via: (https://opensource.com/article/19/8/project-managers-guide-ansible)
|
||||
[#]: author: (Rich Butkevic https://opensource.com/users/rich-butkevic)
|
||||
|
||||
A project manager's guide to Ansible
|
||||
======
|
||||
Ansible is best known for IT automation, but it can streamline
|
||||
operations across the entire organization.
|
||||
![Coding on a computer][1]
|
||||
|
||||
From application deployment to provisioning, [Ansible][2] is a powerful open source tool for automating routine IT tasks. It can help an organization's IT run smoothly, with core IT processes networked and maintained. Ansible is an advanced IT orchestration solution, and it can be deployed even over a large, complex network infrastructure.
|
||||
|
||||
### Project management applications for Ansible
|
||||
|
||||
The Ansible platform can improve an entire business' operations by streamlining the company's infrastructure. Apart from directly contributing to the efficiency of the IT team, Ansible also contributes to the productivity and efficiency of development teams.
|
||||
|
||||
As such, it can be used for a number of project management applications:
|
||||
|
||||
* Ansible Tower helps teams manage the entirety of their application lifecycle. It can take applications from development into production, giving teams more control over applications being deployed.
|
||||
  * Ansible playbooks enable teams to keep their applications deployed and properly configured. A playbook can be easily written in Ansible's simple, YAML-based syntax.
|
||||
* Defined automated security policies allow Ansible to detect and remediate security issues automatically. By automating security policies, the company can improve its security substantially and without increasing its administrative burden.
|
||||
* Automation and systemization of processes reduce project risk by improving the precision of [project and task estimation][3].
|
||||
* Ansible can update applications, reducing the time the team needs to manage and maintain its systems. Keeping applications updated can be a constant time sink, and failing to update applications reduces overall security and productivity.
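To give a sense of what a playbook looks like in practice, here is a minimal, hypothetical example written in plain YAML. It installs and starts a web server on a group of hosts; the `webservers` group and the `nginx` package name are illustrative assumptions, not part of any particular environment:

```yaml
---
- name: Ensure web servers are installed and running
  hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install nginx    # assumed package name
      package:
        name: nginx
        state: present

    - name: Start nginx and enable it at boot
      service:
        name: nginx
        state: started
        enabled: true
```

With an inventory that defines the `webservers` group, a playbook like this would be run with `ansible-playbook site.yml`.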
|
||||
|
||||
|
||||
|
||||
### Ansible's core benefits
|
||||
|
||||
There are many IT solutions for automating tasks or managing IT infrastructure. But Ansible is so popular because of its advantages over other IT automation solutions:
|
||||
|
||||
  1. Ansible is free. Because it is an open source solution, you don't have to pay for Ansible, while many commercial products require per-seat licensing or annual licensing subscriptions, which can add up.
|
||||
2. Ansible doesn't require an agent. It can be installed server-side only, requiring less interaction from end users. Other solutions require both server-side and endpoint installations, which takes a significant amount of time to manage. Not only do end users have to install these solutions on their own devices, but they also need to keep them updated and patched. Ansible doesn't require this type of maintenance.
|
||||
3. Ansible is easy to install and manage out of the box. It can be quickly installed, configured, and customized, so organizations can begin reaping its benefits in managing and monitoring IT solutions immediately.
|
||||
4. Ansible is flexible and can automate and control many types of IT tasks. The Ansible Playbook makes it easy to quickly code new tasks in a human-readable scripting language. Many other automation solutions require in-depth knowledge of programming languages, possibly even learning a proprietary programming language.
|
||||
5. Ansible has an active community with nearly 3,000 contributors contributing to the project. The robust open source community provides pre-programmed solutions and answers for more niche problems. Ansible's community ensures that it is stable, reliable, and constantly growing.
|
||||
6. Ansible is versatile and can be used in virtually any IT environment. Since it is both reliable and scalable, it is suitable for rapidly growing network environments.
|
||||
|
||||
|
||||
|
||||
### Ansible makes IT automation easier
|
||||
|
||||
Ansible is an out-of-the-box, open source automation solution that can schedule tasks and manage configurations over complex networks. Although it's intuitive and easy to use, it's also very robust; it has its own scripting language that can be used to program more complex functionality.
|
||||
|
||||
As an open source tool, Ansible is cost-effective and well-supported. The Ansible community is large and active, providing solutions for most common use cases and providing support as needed. Companies working towards IT automation can begin with an Ansible deployment and save a significant amount of money and time compared to commercial solutions.
|
||||
|
||||
For project managers, it's important to know that deploying Ansible will improve the effectiveness of a company's IT. Employees will spend less time trying to troubleshoot their own configuration, deployment, and provisioning. Ansible is designed to be a straightforward, reliable way to automate a network's IT tasks.
|
||||
|
||||
Further, development teams can use the Ansible Tower to track applications from development to production. Ansible Tower includes everything from role-based access to graphical inventory management and enables teams to remain on the same page even with complex tasks.
|
||||
|
||||
Ansible has a number of fantastic use cases and provides substantial productivity gains for both internal teams and the IT infrastructure as a whole. It's free, easy to use, and robust. By automating IT with Ansible, project managers will find that their teams can work more effectively without the burden of having to manage their own IT—and that IT works more smoothly overall.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/project-managers-guide-ansible
|
||||
|
||||
作者:[Rich Butkevic][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rich-butkevic
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
|
||||
[2]: https://www.ansible.com/
|
||||
[3]: https://www.projecttimes.com/articles/avoiding-the-planning-fallacy-improving-your-project-estimates.html
|
307
sources/tech/20190820 Go compiler intrinsics.md
Normal file
307
sources/tech/20190820 Go compiler intrinsics.md
Normal file
@ -0,0 +1,307 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Go compiler intrinsics)
|
||||
[#]: via: (https://dave.cheney.net/2019/08/20/go-compiler-intrinsics)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Go compiler intrinsics
|
||||
======
|
||||
|
||||
Go allows authors to write functions in assembly if required. This is called a _stub_ or _forward_ declaration.
|
||||
|
||||
```
|
||||
package asm
|
||||
|
||||
// Add returns the sum of a and b.
|
||||
func Add(a int64, b int64) int64
|
||||
```
|
||||
|
||||
Here we’re declaring `Add`, a function which takes two `int64` values and returns their sum. `Add` is a normal Go function declaration, except it is missing the function body.
|
||||
|
||||
If we were to try to compile this package, the compiler, justifiably, complains:
|
||||
|
||||
```
|
||||
% go build
|
||||
examples/asm
|
||||
./decl.go:4:6: missing function body
|
||||
```
|
||||
|
||||
To satisfy the compiler we must supply a body for `Add` via assembly, which we do by adding a `.s` file in the same package.
|
||||
|
||||
```
|
||||
TEXT ·Add(SB),$0-24
|
||||
MOVQ a+0(FP), AX
|
||||
ADDQ b+8(FP), AX
|
||||
MOVQ AX, ret+16(FP)
|
||||
RET
|
||||
```
|
||||
|
||||
Now we can build, test, and use our `Add` function just like normal Go code. But there’s a problem: assembly functions cannot be inlined.
|
||||
|
||||
This has long been a complaint of Go developers who want to use assembly either for performance or to access operations which are not exposed in the language. Some examples would be vector instructions, atomic instructions, and so on. Without the ability to inline assembly functions, calling them from Go carries a relatively large overhead.
|
||||
|
||||
```
|
||||
var Result int64
|
||||
|
||||
func BenchmarkAddNative(b *testing.B) {
|
||||
var r int64
|
||||
for i := 0; i < b.N; i++ {
|
||||
r = int64(i) + int64(i)
|
||||
}
|
||||
Result = r
|
||||
}
|
||||
|
||||
func BenchmarkAddAsm(b *testing.B) {
|
||||
var r int64
|
||||
for i := 0; i < b.N; i++ {
|
||||
r = Add(int64(i), int64(i))
|
||||
}
|
||||
Result = r
|
||||
}
|
||||
```
|
||||
|
||||
```
|
||||
BenchmarkAddNative-8 1000000000 0.300 ns/op
|
||||
BenchmarkAddAsm-8 606165915 1.93 ns/op
|
||||
```
|
||||
|
||||
Over the years there have been various proposals for an inline assembly syntax similar to gcc’s `asm(...)` directive. None have been accepted by the Go team. Instead, Go has added _intrinsic functions_[1][1].
|
||||
|
||||
An intrinsic function is Go code written in regular Go. These functions are known to the Go compiler, which contains replacements that it can substitute during compilation. As of Go 1.13 the packages which the compiler knows about are:
|
||||
|
||||
* `math/bits`
|
||||
* `sync/atomic`
|
||||
|
||||
|
||||
|
||||
The functions in these packages have baroque signatures, but if your architecture supports a more efficient way of performing the operation, the compiler can transparently replace the function call with comparable native instructions.
|
||||
|
||||
For the remainder of this post we’ll study two different ways the Go compiler produces more efficient code using intrinsics.
|
||||
|
||||
### Ones count
|
||||
|
||||
Population count, the number of `1` bits in a word, is an important cryptographic and compression primitive. Because this is an important operation most modern CPUs provide a native hardware implementation.
|
||||
|
||||
The `math/bits` package exposes support for this operation via the `OnesCount` series of functions. The various `OnesCount` functions are recognised by the compiler and, depending on the CPU architecture and the version of Go, will be replaced with the native hardware instruction.
|
||||
|
||||
To see how effective this can be, let’s compare three different ones count implementations. The first is Kernighan’s Algorithm[2][2].
|
||||
|
||||
```
|
||||
func kernighan(x uint64) int {
|
||||
var count int
|
||||
for ; x > 0; x &= (x - 1) {
|
||||
count++
|
||||
}
|
||||
return count
|
||||
}
|
||||
```
|
||||
|
||||
This algorithm has a maximum loop count of the number of bits set; the more bits set, the more loops it will take.
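To make that behaviour concrete, here is the same function in a minimal standalone program (the definition is repeated so the sketch is self-contained); `x &= (x - 1)` clears the lowest set bit each time around, so the loop body runs once per one bit:

```go
package main

import "fmt"

// kernighan counts set bits by repeatedly clearing the lowest one.
func kernighan(x uint64) int {
	var count int
	for ; x > 0; x &= (x - 1) {
		count++
	}
	return count
}

func main() {
	// 0b10110100 has four set bits, so the loop runs four times.
	fmt.Println(kernighan(0b10110100)) // prints 4
}
```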
|
||||
|
||||
The second algorithm is taken from Hacker’s Delight via [issue 14813][3].
|
||||
|
||||
```
|
||||
func hackersdelight(x uint64) int {
|
||||
const m1 = 0x5555555555555555
|
||||
const m2 = 0x3333333333333333
|
||||
const m4 = 0x0f0f0f0f0f0f0f0f
|
||||
const h01 = 0x0101010101010101
|
||||
|
||||
x -= (x >> 1) & m1
|
||||
x = (x & m2) + ((x >> 2) & m2)
|
||||
x = (x + (x >> 4)) & m4
|
||||
return int((x * h01) >> 56)
|
||||
}
|
||||
```
|
||||
|
||||
Lots of clever bit twiddling allows this version to run in constant time and optimises very well if the input is a constant (the whole thing optimises away if the compiler can figure out the answer at compile time).
|
||||
|
||||
Let’s benchmark these implementations against `math/bits.OnesCount64`.
|
||||
|
||||
```
|
||||
var Result int
|
||||
|
||||
func BenchmarkKernighan(b *testing.B) {
|
||||
var r int
|
||||
for i := 0; i < b.N; i++ {
|
||||
r = kernighan(uint64(i))
|
||||
}
|
||||
Result = r
|
||||
}
|
||||
|
||||
func BenchmarkPopcnt(b *testing.B) {
|
||||
var r int
|
||||
for i := 0; i < b.N; i++ {
|
||||
r = hackersdelight(uint64(i))
|
||||
}
|
||||
Result = r
|
||||
}
|
||||
|
||||
func BenchmarkMathBitsOnesCount64(b *testing.B) {
|
||||
var r int
|
||||
for i := 0; i < b.N; i++ {
|
||||
r = bits.OnesCount64(uint64(i))
|
||||
}
|
||||
Result = r
|
||||
}
|
||||
```
|
||||
|
||||
To keep it fair, we’re feeding each function under test the same input: a sequence of integers from zero to `b.N`. This is fairer to Kernighan’s method, as its runtime increases with the number of one bits in the input argument.[3][4]
|
||||
|
||||
```
|
||||
BenchmarkKernighan-8 100000000 11.2 ns/op
|
||||
BenchmarkPopcnt-8 618312062 2.02 ns/op
|
||||
BenchmarkMathBitsOnesCount64-8 1000000000 0.565 ns/op
|
||||
```
|
||||
|
||||
The winner by nearly 4x is `math/bits.OnesCount64`, but is this really using a hardware instruction, or is the compiler just doing a better job of optimising this code? Let’s check the assembly:
|
||||
|
||||
```
|
||||
% go test -c
|
||||
% go tool objdump -s MathBitsOnesCount popcnt-intrinsic.test
|
||||
TEXT examples/popcnt-intrinsic.BenchmarkMathBitsOnesCount64(SB) /examples/popcnt-intrinsic/popcnt_test.go
|
||||
popcnt_test.go:45 0x10f8610 65488b0c2530000000 MOVQ GS:0x30, CX
|
||||
popcnt_test.go:45 0x10f8619 483b6110 CMPQ 0x10(CX), SP
|
||||
popcnt_test.go:45 0x10f861d 7668 JBE 0x10f8687
|
||||
popcnt_test.go:45 0x10f861f 4883ec20 SUBQ $0x20, SP
|
||||
popcnt_test.go:45 0x10f8623 48896c2418 MOVQ BP, 0x18(SP)
|
||||
popcnt_test.go:45 0x10f8628 488d6c2418 LEAQ 0x18(SP), BP
|
||||
popcnt_test.go:47 0x10f862d 488b442428 MOVQ 0x28(SP), AX
|
||||
popcnt_test.go:47 0x10f8632 31c9 XORL CX, CX
|
||||
popcnt_test.go:47 0x10f8634 31d2 XORL DX, DX
|
||||
popcnt_test.go:47 0x10f8636 eb03 JMP 0x10f863b
|
||||
popcnt_test.go:47 0x10f8638 48ffc1 INCQ CX
|
||||
popcnt_test.go:47 0x10f863b 48398808010000 CMPQ CX, 0x108(AX)
|
||||
popcnt_test.go:47 0x10f8642 7e32 JLE 0x10f8676
|
||||
popcnt_test.go:48 0x10f8644 803d29d5150000 CMPB $0x0, runtime.x86HasPOPCNT(SB)
|
||||
popcnt_test.go:48 0x10f864b 740a JE 0x10f8657
|
||||
popcnt_test.go:48 0x10f864d 4831d2 XORQ DX, DX
|
||||
popcnt_test.go:48 0x10f8650 f3480fb8d1 POPCNT CX, DX // math/bits.OnesCount64
|
||||
popcnt_test.go:48 0x10f8655 ebe1 JMP 0x10f8638
|
||||
popcnt_test.go:47 0x10f8657 48894c2410 MOVQ CX, 0x10(SP)
|
||||
popcnt_test.go:48 0x10f865c 48890c24 MOVQ CX, 0(SP)
|
||||
popcnt_test.go:48 0x10f8660 e87b28f8ff CALL math/bits.OnesCount64(SB)
|
||||
popcnt_test.go:48 0x10f8665 488b542408 MOVQ 0x8(SP), DX
|
||||
popcnt_test.go:47 0x10f866a 488b442428 MOVQ 0x28(SP), AX
|
||||
popcnt_test.go:47 0x10f866f 488b4c2410 MOVQ 0x10(SP), CX
|
||||
popcnt_test.go:48 0x10f8674 ebc2 JMP 0x10f8638
|
||||
popcnt_test.go:50 0x10f8676 48891563d51500 MOVQ DX, examples/popcnt-intrinsic.Result(SB)
|
||||
popcnt_test.go:51 0x10f867d 488b6c2418 MOVQ 0x18(SP), BP
|
||||
popcnt_test.go:51 0x10f8682 4883c420 ADDQ $0x20, SP
|
||||
popcnt_test.go:51 0x10f8686 c3 RET
|
||||
popcnt_test.go:45 0x10f8687 e884eef5ff CALL runtime.morestack_noctxt(SB)
|
||||
popcnt_test.go:45 0x10f868c eb82 JMP examples/popcnt-intrinsic.BenchmarkMathBitsOnesCount64(SB)
|
||||
:-1 0x10f868e cc INT $0x3
|
||||
:-1 0x10f868f cc INT $0x3
|
||||
```
|
||||
|
||||
There’s quite a bit going on here, but the key take away is on line 48 (taken from the source code of the `_test.go` file) the program is using the x86 `POPCNT` instruction as we hoped. This turns out to be faster than bit twiddling.
|
||||
|
||||
Of interest is the comparison two instructions prior to the `POPCNT`,
|
||||
|
||||
```
|
||||
CMPB $0x0, runtime.x86HasPOPCNT(SB)
|
||||
```
|
||||
|
||||
As not all Intel CPUs support `POPCNT`, the Go runtime records at startup whether the CPU has the necessary support and stores the result in `runtime.x86HasPOPCNT`. Each time through the benchmark loop the program checks _does the CPU have POPCNT support_ before it issues the `POPCNT` request.
|
||||
|
||||
The value of `runtime.x86HasPOPCNT` isn’t expected to change during the life of the program’s execution so the result of the check should be highly predictable making the check relatively cheap.
|
||||
|
||||
### Atomic counter
|
||||
|
||||
As well as generating more efficient code, intrinsic functions are just regular Go code, so the rules of inlining (including mid-stack inlining) apply equally to them.
|
||||
|
||||
Here’s an example of an atomic counter type. It’s got methods on types, method calls several layers deep, multiple packages, etc.
|
||||
|
||||
```
import (
	"sync/atomic"
)

type counter uint64

func (c *counter) get() uint64 {
	return atomic.LoadUint64((*uint64)(c))
}

func (c *counter) inc() uint64 {
	return atomic.AddUint64((*uint64)(c), 1)
}

func (c *counter) reset() uint64 {
	return atomic.SwapUint64((*uint64)(c), 0)
}

var c counter

func f() uint64 {
	c.inc()
	c.get()
	return c.reset()
}
```
|
||||
|
||||
You’d be forgiven for thinking this would have a lot of overhead. However, because of the interaction between inlining and compiler intrinsics, this code collapses down to efficient native code on most platforms.
|
||||
|
||||
```
|
||||
TEXT main.f(SB) examples/counter/counter.go
|
||||
counter.go:23 0x10512e0 90 NOPL
|
||||
counter.go:29 0x10512e1 b801000000 MOVL $0x1, AX
|
||||
counter.go:13 0x10512e6 488d0d0bca0800 LEAQ main.c(SB), CX
|
||||
counter.go:13 0x10512ed f0480fc101 LOCK XADDQ AX, 0(CX) // c.inc
|
||||
counter.go:24 0x10512f2 90 NOPL
|
||||
counter.go:10 0x10512f3 488b05fec90800 MOVQ main.c(SB), AX // c.get
|
||||
counter.go:25 0x10512fa 90 NOPL
|
||||
counter.go:16 0x10512fb 31c0 XORL AX, AX
|
||||
counter.go:16 0x10512fd 488701 XCHGQ AX, 0(CX) // c.reset
|
||||
counter.go:16 0x1051300 c3 RET
|
||||
```
|
||||
|
||||
By way of explanation: the first operation, `counter.go:13`, is `c.inc`, a `LOCK`ed `XADDQ`, which on x86 is an atomic increment. The second, `counter.go:10`, is `c.get`, which on x86, due to its strong memory consistency model, is a regular load from memory. The final operation, `counter.go:16`, `c.reset`, is an atomic exchange of the address in `CX` with `AX`, which was zeroed on the previous line. This puts the value in `AX`, zero, into the address stored in `CX`. The value previously stored at `(CX)` is discarded.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Intrinsics are a neat solution that give Go programmers access to low level architectural operations without having to extend the specification of the language. If an architecture doesn’t have a specific `sync/atomic` primitive (like some ARM variants), or a `math/bits` operation, then the compiler transparently falls back to the operation written in pure Go.
|
||||
|
||||
1. This may not be their official name, however the word is in common use inside the compiler and its tests[][5]
|
||||
2. The C Programming Language 2nd Ed, 1998[][6]
|
||||
3. As extra credit homework, try passing `0xdeadbeefdeadbeef` to each function under test and observe the results.[][7]
|
||||
|
||||
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [Notes on exploring the compiler flags in the Go compiler suite][8]
|
||||
2. [Padding is hard][9]
|
||||
3. [Should methods be declared on T or *T][10]
|
||||
4. [Wednesday pop quiz: spot the race][11]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: tmp.OyARdRB2s8#easy-footnote-bottom-1-3803 (This may not be their official name, however the word is in common use inside the compiler and its tests)
|
||||
[2]: tmp.OyARdRB2s8#easy-footnote-bottom-2-3803 (The C Programming Language 2nd Ed, 1998)
|
||||
[3]: https://github.com/golang/go/issues/14813
|
||||
[4]: tmp.OyARdRB2s8#easy-footnote-bottom-3-3803 (As extra credit homework, try passing <code>0xdeadbeefdeadbeef</code> to each function under test and observe the results.)
|
||||
[5]: tmp.OyARdRB2s8#easy-footnote-1-3803
|
||||
[6]: tmp.OyARdRB2s8#easy-footnote-2-3803
|
||||
[7]: tmp.OyARdRB2s8#easy-footnote-3-3803
|
||||
[8]: https://dave.cheney.net/2012/10/07/notes-on-exploring-the-compiler-flags-in-the-go-compiler-suite (Notes on exploring the compiler flags in the Go compiler suite)
|
||||
[9]: https://dave.cheney.net/2015/10/09/padding-is-hard (Padding is hard)
|
||||
[10]: https://dave.cheney.net/2016/03/19/should-methods-be-declared-on-t-or-t (Should methods be declared on T or *T)
|
||||
[11]: https://dave.cheney.net/2015/11/18/wednesday-pop-quiz-spot-the-race (Wednesday pop quiz: spot the race)
|
@ -0,0 +1,84 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The infrastructure is code: A story of COBOL and Go)
|
||||
[#]: via: (https://opensource.com/article/19/8/command-line-heroes-cobol-golang)
|
||||
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
|
||||
|
||||
The infrastructure is code: A story of COBOL and Go
|
||||
======
|
||||
COBOL remains the dominant language of mainframes. What can Go learn
|
||||
from its history to dominate the cloud?
|
||||
![Listen to the Command Line Heroes Podcast][1]
|
||||
|
||||
Old challenges are new again. In [this week's Command Line Heroes podcast][2] (Season 3, Episode 5), that thought comes with a twist of programming languages and platforms.
|
||||
|
||||
### COBOL dominates the mainframe
|
||||
|
||||
One of the most brilliant minds in all of computer science is [Grace Murray Hopper][3]. Every time we don't have to write in binary to talk to computers, I recommend saying out loud: "Thank you, Rear Admiral Grace Murray Hopper." Try it next time, for she is the one who invented the first compiler (the software that translates programming code to machine language).
|
||||
|
||||
Hopper was essential to the invention and adoption of high-level programming languages, the first of which was COBOL. She helped create the **CO**mmon **B**usiness-**O**riented **L**anguage (COBOL for short) in 1959. As Ritika Trikha put it on [HackerRank][4]:
|
||||
|
||||
> "Grace Hopper, the mother of COBOL, helped champion the creation of this brand-new programming language that aimed to function across all business systems, saving an immense amount of time and money. Hopper was also the first to believe that programming languages should read just like English instead of computer jargon. Hence why COBOL's syntax is so wordy. But it helped humanize the computing process for businesses during an era when computing was intensive and prevalent only in research facilities."
|
||||
|
||||
In the early 1960s, mainframes were a wild new architecture for sharing powerful amounts of computation. And in the era of mainframe computing, COBOL dominated the landscape.
|
||||
|
||||
### COBOL in today's world
|
||||
|
||||
But what about today? With the decline of mainframes and the rise of newer and more innovative languages designed for the web and cloud, where does COBOL sit?
|
||||
|
||||
As last week's episode of Command Line Heroes mentioned, in the late 1990s, [Perl][5] (as well as JavaScript and C++) was outpacing COBOL. And, as Perl's creator, [Larry Wall stated then][6]: "COBOL is no big deal these days since demand for COBOL seems to be trailing off, for some strange reason."
|
||||
|
||||
Fast forward to 2019, and COBOL has far from "trailed off." As David Cassel wrote on [The New Stack][7] in 2017:
|
||||
|
||||
> "About 95% of ATM swipes use COBOL code, Reuters [reported in April][8], and the 58-year-old language even powers 80% of in-person transactions. In fact, Reuters calculates that there's still 220 billion lines of COBOL code currently being used in production today, and that every day, COBOL systems handle $3 trillion in commerce."
|
||||
|
||||
Given its continued significance in the business world, knowing COBOL can be a great career move. Top COBOL programmers can expect to [make six figures][9] due to the limited number of people who specialize in the language.
|
||||
|
||||
### Go dominates in the cloud, for now
|
||||
|
||||
That story of COBOL's early dominance rings a bell for me. If we survey the most influential projects of this cloud computing era, you'd be hard-pressed to miss Go sitting at the top of the pack. Kubernetes and much of its related technology—from Etcd to Prometheus—are written in Go. As [RedMonk explored][10] back in 2014:
|
||||
|
||||
> "Go's rapidly closing in on 1% of total commits and half a percent of projects and contributors. While the trend is obviously interesting, at first glance, numbers well under one percent look inconsequential relative to overall adoption. To provide some context, however, each of the most popular languages on Ohloh (C, C++, Java, JavaScript) only constitute ~10% of commits and ~5% of projects and contributors. **That means Go, a seemingly very minor player, is already used nearly one-tenth as much in FOSS as the most popular languages in existence**."
|
||||
|
||||
In two of my previous jobs, my team (re)wrote infrastructure software in Go to be part of this monumental wave. Influential projects continue to live in the space that Go can fill, as [Uday Hiwarale explained][11] well in 2018:
|
||||
|
||||
> "Things that make Go a great language [are] its simple concurrency model, its package-based code management, and its non-strict (type inference) typing system. Go does not support out-of-the box object-oriented programming experience, but [its] support structures (structs) …, with the help of methods and pointers, can help us achieve the same [outcomes]."
|
||||
|
||||
It looks to me like Go could be following in COBOL's footsteps, but questions remain about where it's going. In June 2019, [RedMonk ranked][12] Go in 16th place, with a future that could lead either direction.
|
||||
|
||||
### What can Go learn from COBOL?
|
||||
|
||||
If Go were to see into its future, would it look like COBOL's, with such staying power?
|
||||
|
||||
The stories told this season by Command Line Heroes illustrate how languages are born, how communities form around them, how they rise in popularity and standardize, and how some slowly decline. What can we learn about the lifespan of programming languages? Do they have a similar arc? Or do they differ?
|
||||
|
||||
I think this podcast is well worth [subscribing so that you don't miss a single one][2]. I would love to hear your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/8/command-line-heroes-cobol-golang
|
||||
|
||||
作者:[Matthew Broberg][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mbbroberg
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command-line-heroes-520x292.png?itok=s_F6YEoS (Listen to the Command Line Heroes Podcast)
|
||||
[2]: https://www.redhat.com/en/command-line-heroes
|
||||
[3]: https://www.biography.com/scientist/grace-hopper
|
||||
[4]: https://blog.hackerrank.com/the-inevitable-return-of-cobol/
|
||||
[5]: https://opensource.com/article/19/8/command-line-heroes-perl
|
||||
[6]: http://www.wall.org/~larry/onion3/talk.html
|
||||
[7]: https://thenewstack.io/cobol-everywhere-will-maintain/
|
||||
[8]: http://fingfx.thomsonreuters.com/gfx/rngs/USA-BANKS-COBOL/010040KH18J/index.html
|
||||
[9]: https://www.laserfiche.com/ecmblog/looking-job-hows-your-cobol/
|
||||
[10]: https://redmonk.com/dberkholz/2014/03/18/go-the-emerging-language-of-cloud-infrastructure/
|
||||
[11]: https://medium.com/rungo/introduction-to-go-programming-language-golang-89d16ca72bbf
|
||||
[12]: https://redmonk.com/sogrady/2019/07/18/language-rankings-6-19/
|
104
sources/tech/20190821 5 notable open source 3D printers.md
Normal file
104
sources/tech/20190821 5 notable open source 3D printers.md
Normal file
@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 notable open source 3D printers)
[#]: via: (https://opensource.com/article/19/8/3D-printers)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)

5 notable open source 3D printers
======

A roundup of the latest notable open source 3D printers.

![Open source 3D printed coin][1]

Open source hardware and 3D printers go together like, well, open source hardware and 3D printers. Not only are 3D printers used to create all sorts of open source hardware—there are also a huge number of 3D printers that have been [certified as open source][2] by the [Open Source Hardware Association][3]. That means they are freely available to improve and build upon.

There are plenty of open source 3D printers out there, with more being certified on a regular basis. Here’s a look at some of the latest.

### BigFDM

![The BigFDM 3D printer by Daniele Ingrassia.][4]

_The BigFDM 3D printer by Daniele Ingrassia._

German-designed and UAE-built, the [BigFDM][5] is both the second-newest and the biggest certified open source 3D printer. It was certified on July 14 and has a massive 800x800x900mm printing area, making it possibly big enough to print a full replica of many of the other printers on this list.

### Creatable 3D

![The Creatable 3D printer by Creatable Labs.][6]

_The Creatable 3D printer by Creatable Labs._

Certified on July 30, the [Creatable 3D][7] is the most recently certified printer on this list. It is the only [delta-style][8] 3D printer here, a design that makes it faster. It is also the first piece of certified open source hardware from South Korea, sporting the certification UID SK000001.

### Ender CR-10

![The Ender CR-10 3D printer by Creality3d.][9]

_The Ender CR-10 3D printer by Creality3d._

[Ender’s CR-10][10] is a well-known 3D printer that has been certified as open source. That means this Chinese 3D printer is fully documented and licensed to allow others to build upon it. Ender also certified its [Ender 3][11] printer as open source hardware.

### LulzBot TAZ Workhorse

![The LulzBot TAZ Workhorse by Aleph Objects.][12]

_The LulzBot TAZ Workhorse by Aleph Objects._

Colorado-based Aleph Objects—creators of the LulzBot line of 3D printers—is the most prolific certifier of open source 3D printers and printer components. Their [TAZ Workhorse][13] was just certified in June, making it the latest in a long line of printers and printer elements that LulzBot has certified as open source hardware. If you are in the market for a hot end, extruder, board, or pretty much any other 3D printer component, and want to make sure it is certified open source hardware, you will likely find something from Aleph Objects in their [certification directory][2].

### Nautilus

![The Nautilus 3D printer by Hydra Research.][14]

_The Nautilus 3D printer by Hydra Research._

Hydra Research’s [Nautilus][15] was just certified on July 10, making it the third-most recently certified printer of the bunch. It features removable build plates and a fully enclosed build area, and it hails from Oregon.

### IC3D open source filament

![IC3D open source 3D printer filament.][16]

_The IC3D Open Source Filament._

What will you put in your open source 3D printer? Open source 3D printing filament, of course. Ohio’s IC3D certified a full line of open source 3D printing filament for all of your open source 3D printing needs, including their:

  * [ABS 3D Printing Filament][17]
  * [PETG 3D Printing Filament][18]
  * [PLA 3D Printing Filament][19]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/3D-printers

Author: [Michael Weinberg][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_source_print_resize.jpg?itok=v6z2FtLS (Open source 3D printed coin)
[2]: https://certification.oshwa.org/list.html
[3]: https://www.oshwa.org/
[4]: https://opensource.com/sites/default/files/uploads/bigfdm.png (The BigFDM 3D printer by Daniele Ingrassia.)
[5]: https://certification.oshwa.org/de000013.html
[6]: https://opensource.com/sites/default/files/uploads/creatable_3d.png (The Creatable 3D printer by Creatable Labs.)
[7]: https://certification.oshwa.org/kr000001.html
[8]: https://www.youtube.com/watch?v=BTU6UGm15Zc
[9]: https://opensource.com/sites/default/files/uploads/ender_cr-10.png (The Ender CR-10 3D printer by Creality3d.)
[10]: https://certification.oshwa.org/cn000005.html
[11]: https://certification.oshwa.org/cn000003.html
[12]: https://opensource.com/sites/default/files/uploads/lulzbot_taz_workhorse.png (The LulzBot TAZ Workhorse by Aleph Objects.)
[13]: https://certification.oshwa.org/us000161.html
[14]: https://opensource.com/sites/default/files/uploads/hydra_research_nautilus.png (The Nautilus 3D printer by Hydra Research.)
[15]: https://certification.oshwa.org/us000166.html
[16]: https://opensource.com/sites/default/files/uploads/ic3d_open_source_filament.png (The IC3D Open Source Filament.)
[17]: https://certification.oshwa.org/us000066.html
[18]: https://certification.oshwa.org/us000131.html
[19]: https://certification.oshwa.org/us000130.html
@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a distributed NoSQL database with Apache Cassandra)
[#]: via: (https://opensource.com/article/19/8/how-set-apache-cassandra-cluster)
[#]: author: (James Farrell https://opensource.com/users/jamesf)

Build a distributed NoSQL database with Apache Cassandra
======

Set up a basic three-node Cassandra cluster from scratch, with some extra bits for replication and future expansion.

![Woman programming][1]

Recently, I got a rush request to get a three-node [Apache Cassandra][2] cluster with a replication factor of two working for a development job. I had little idea what that meant but needed to figure it out quickly—a typical day in a sysadmin's job.

Here's how to set up a basic three-node Cassandra cluster from scratch with some extra bits for replication and future node expansion.

### Basic nodes needed

To start, you need some basic Linux machines. For a production install, you would likely put physical machines into racks, data centers, and diverse locations. For development, you just need something suitably sized for the scale of your development. I used three CentOS 7 virtual machines on VMware that have 20GB thin-provisioned disks, two processors, and 4GB of RAM. These three machines are called CS1 (192.168.0.110), CS2 (192.168.0.120), and CS3 (192.168.0.130).

First, do a minimal install of CentOS 7 as the operating system on each machine. To run this in production with CentOS, consider [tweaking][3] your [firewalld][4] and [SELinux][5]. Since this cluster would be used just for initial development, I turned them off.

The only other requirement is an OpenJDK 1.8 installation, which is available from the CentOS repository.

### Installation

Create a **cass** user account on each machine. To ensure no variation between nodes, force the same UID on each install:

```
$ useradd --create-home \
    --uid 1099 cass
$ passwd cass
```

[Download][6] the current version of Apache Cassandra (3.11.4 as I'm writing this). Extract the Cassandra archive in the **cass** home directory like this:

```
$ tar zfvx apache-cassandra-3.11.4-bin.tar.gz
```

The complete software is contained in **~cass/apache-cassandra-3.11.4**. For a quick development trial, this is fine. The data files are there, and the **conf/** directory has the important bits needed to tune these nodes into a real cluster.

### Configuration

Out of the box, Cassandra runs as a localhost one-node cluster. That is convenient for a quick look, but the goal here is a real cluster that external clients can access and that provides the option to add additional nodes when development and tests need to broaden. The two configuration files to look at are **conf/cassandra.yaml** and **conf/cassandra-rackdc.properties**.

First, edit **conf/cassandra.yaml** to set the cluster name, network, and remote procedure call (RPC) interfaces; define peers; and change the strategy for routing requests and replication.

Edit **conf/cassandra.yaml** on each of the cluster nodes.

Change the cluster name to be the same on each node:

```
cluster_name: 'DevClust'
```

Change the following two entries to match the primary IP address of the node you are working on:

```
listen_address: 192.168.0.110
rpc_address: 192.168.0.110
```

Find the **seed_provider** entry and look for the **- seeds:** configuration line. Edit each node to include all your nodes:

```
    - seeds: "192.168.0.110, 192.168.0.120, 192.168.0.130"
```

This enables the local Cassandra instance to see all its peers (including itself).

Look for the **endpoint_snitch** setting and change it to:

```
endpoint_snitch: GossipingPropertyFileSnitch
```

The **endpoint_snitch** setting enables flexibility later on if new nodes need to be joined. The Cassandra documentation indicates that **GossipingPropertyFileSnitch** is the preferred setting for production use; it is also needed for the replication strategy presented below.

Save and close the **cassandra.yaml** file.

Open the **conf/cassandra-rackdc.properties** file and change the default values for **dc=** and **rack=**. They can be anything that is unique and does not conflict with other local installs. For production, you would put more thought into how to organize your racks and data centers. For this example, I used generic names like:

```
dc=NJDC
rack=rack001
```

### Start the cluster

On each node, log into the account where Cassandra is installed (**cass** in this example), enter **cd apache-cassandra-3.11.4/bin**, and run **./cassandra**. A long list of messages will print to the terminal, and the Java process will run in the background.

### Confirm the cluster

While logged into the Cassandra user account, go to the **bin** directory and run **$ ./nodetool status**. If everything went well, you should see something like:

```
$ ./nodetool status
INFO  [main] 2019-08-04 15:14:18,361 Gossiper.java:1715 - No gossip backlog; proceeding
Datacenter: NJDC
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load        Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.0.110  195.26 KiB  256     69.2%             0abc7ad5-6409-4fe3-a4e5-c0a31bd73349  rack001
UN  192.168.0.120  195.18 KiB  256     63.0%             b7ae87e5-1eab-4eb9-bcf7-4d07e4d5bd71  rack001
UN  192.168.0.130  117.96 KiB  256     67.8%             b36bb943-8ba1-4f2e-a5f9-de1a54f8d703  rack001
```

This means the cluster sees all the nodes and prints some interesting information.

Note that if **cassandra.yaml** uses the default **endpoint_snitch: SimpleSnitch**, the **nodetool** command above indicates the default locations as **Datacenter: datacenter1** and the racks as **rack1**. In the example output above, the **cassandra-rackdc.properties** values are evident.

### Run some CQL

This is where the replication factor setting comes in.

Create a keyspace with a replication factor of two. From any one of the cluster nodes, go to the **bin** directory and run **./cqlsh 192.168.0.130** (substitute the appropriate cluster node IP address). You can see the default administrative keyspaces with the following:

```
cqlsh> SELECT * FROM system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
 system_auth        | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
 system_schema      | True           | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_distributed | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
 system             | True           | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_traces      | True           | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
```

Create a new keyspace with replication factor two, insert some rows, then recall some data:

```
cqlsh> CREATE KEYSPACE TestSpace WITH replication = {'class': 'NetworkTopologyStrategy', 'NJDC' : 2};
cqlsh> select * from system_schema.keyspaces where keyspace_name='testspace';

 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------------------------------
 testspace     | True           | {'NJDC': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}

cqlsh> use testspace;
cqlsh:testspace> create table users ( userid int PRIMARY KEY, email text, name text );
cqlsh:testspace> insert into users (userid, email, name) VALUES (1, 'jd@somedomain.com', 'John Doe');
cqlsh:testspace> select * from users;

 userid | email             | name
--------+-------------------+----------
      1 | jd@somedomain.com | John Doe
```

Now you have a basic three-node Cassandra cluster running and ready for some development and testing work. The CQL syntax is similar to standard SQL, as you can see from the familiar commands to create a table, insert, and query data.

### Conclusion

Apache Cassandra seems like an interesting NoSQL clustered database, and I'm looking forward to diving deeper into its use. This simple setup only scratches the surface of the options available. I hope this three-node primer helps you get started with it, too.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster

Author: [James Farrell][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://cassandra.apache.org/
[3]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[4]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[5]: https://opensource.com/business/13/11/selinux-policy-guide
[6]: https://cassandra.apache.org/download/
117
sources/tech/20190821 Getting Started with Go on Fedora.md
Normal file
@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting Started with Go on Fedora)
[#]: via: (https://fedoramagazine.org/getting-started-with-go-on-fedora/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)

Getting Started with Go on Fedora
======

![][1]

The [Go][2] programming language was first publicly announced in 2009, and since then it has become widely adopted. In particular, Go has become a reference in the world of cloud infrastructure, with big projects like [Kubernetes][3], [OpenShift][4], and [Terraform][5].

Some of the main reasons for Go’s increasing popularity are its performance, the ease of writing fast concurrent applications, the simplicity of the language, and its fast compilation time. So let’s see how to get started with Go on Fedora.
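
That concurrency support is built into the language itself, via goroutines and channels. As a minimal illustrative sketch (not part of the original article):

```
package main

import "fmt"

func main() {
	// A channel carries the message from the goroutine back to main.
	ch := make(chan string)

	// Launch a lightweight concurrent function (a goroutine).
	go func() {
		ch <- "Hello from a goroutine!"
	}()

	// Receiving from the channel blocks until the goroutine has sent.
	fmt.Println(<-ch) // prints "Hello from a goroutine!"
}
```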

### Install Go in Fedora

Fedora provides an easy way to install the Go programming language via the official repository.

```
$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64
```

Now that Go is installed, let’s write a simple program, compile it, and execute it.

### First program in Go

Let’s write the traditional “Hello, World!” program in Go. First create a _main.go_ file and type or copy the following.

```
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}
```

Running this program is quite simple.

```
$ go run main.go
Hello, World!
```

This builds a binary from main.go in a temporary directory, executes the binary, then deletes the temporary directory. This command is really great for quickly running the program during development, and it also highlights the speed of Go compilation.

Building an executable of the program is as simple as running it.

```
$ go build main.go
$ ./main
Hello, World!
```

### Using Go modules

Go 1.11 and 1.12 introduce preliminary support for modules. Modules are a solution to manage application dependencies. This solution is based on two files, _go.mod_ and _go.sum_, used to explicitly define the versions of the dependencies.

To show how to use modules, let’s add a dependency to the hello world program.

Before changing the code, the module needs to be initialized.

```
$ go mod init helloworld
go: creating new go.mod: module helloworld
$ ls
go.mod main main.go
```

Next, modify the main.go file as follows.

```
package main

import "github.com/fatih/color"

func main () {
    color.Blue("Hello, World!")
}
```

Instead of using the standard library _fmt_ to print “Hello, World!”, the modified main.go uses an external library that makes it easy to print text in color.

Let’s run this version of the application.

```
$ go run main.go
Hello, World!
```

Now that the application depends on the _github.com/fatih/color_ library, all the dependencies need to be downloaded before compiling it. The list of dependencies is then added to _go.mod_, and the exact version and commit hash of these dependencies are recorded in _go.sum_.
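
For a sense of what these files contain, the _go.mod_ for this example would look something like the following (the module name comes from the `go mod init` step above; the dependency version shown is illustrative — the exact version Go records depends on what it downloads):

```
module helloworld

go 1.12

require github.com/fatih/color v1.7.0
```

The _go.sum_ file then pins cryptographic checksums for that module and its transitive dependencies, so future builds can verify they fetched identical code.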

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/getting-started-with-go-on-fedora/

Author: [Clément Verna][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/go-article-816x345.jpg
[2]: https://golang.org/
[3]: https://kubernetes.io/
[4]: https://www.openshift.com/
[5]: https://www.terraform.io/
@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Policy Agent: Cloud-native security and compliance)
[#]: via: (https://opensource.com/article/19/8/open-policy-agent)
[#]: author: (Tim Hinrichs https://opensource.com/users/thinrich)

Open Policy Agent: Cloud-native security and compliance
======

A look at three use cases where organizations used Open Policy Agent to reliably automate cloud-based access policy control.

![clouds in the sky with blue pattern][1]

Every product or service has a unique way of handling policy and authorization: who-can-do-what and what-can-do-what. In the cloud-native world, authorization and policy are more complex than ever before. As the cloud-native ecosystem evolves, there’s a growing need for DevOps and DevSecOps teams to identify and address security and compliance issues earlier in development and deployment cycles. Businesses need to release software on the order of minutes (instead of months). For this to happen, those security and compliance policies—which in the past were written in PDFs or email—need to be checked and enforced by machines. That way, every few minutes when software goes out the door, it’s obeying all of the necessary policies.

This problem was at the top of our minds when Teemu Koponen, Torin Sandall, and I founded the [Open Policy Agent project (OPA)][2] as a practical solution for the critical security and policy challenges of the cloud-native ecosystem. As the list of OPA’s successful integrations grows—thanks to active involvement by the open source community—the time is right to re-introduce OPA and offer a look at how it addresses business and policy pain points in varied contexts.

### What is OPA?

OPA is a general-purpose policy engine that makes policy a first-class citizen within the cloud-native ecosystem, putting it on par with servers, networks, and storage. Its uses range from authorization and admission control to data filtering. The community uses OPA for Kubernetes admission control across all major cloud providers, as well as in on-premises deployments, along with HTTP API authorization, remote access policy, and data filtering. Since OPA’s RESTful APIs use JSON over HTTP, OPA can be integrated with any programming language, making it extremely flexible across services.

OPA gives policy its own lifecycle and toolsets, so policy can be managed separately from the underlying systems that the policy applies to. Launched in 2016, OPA provides local enforcement for the sake of higher availability, better performance, greater flexibility, and more expressiveness than hard-coded service logic or ad-hoc domain-specific languages. With dedicated tooling for new users and experienced practitioners, combined with many integrations to third-party systems, OPA empowers administrators with unified, flexible, and granular policy control across their entire software stack. OPA also provides policy guardrails around Kubernetes admission control, HTTP API authorization, entitlement management, remote access, and data filtering. In 2018, we donated OPA to the [Cloud Native Computing Foundation][3], a vendor-neutral home, and since then it has graduated from the sandbox to the incubating stage.
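
To make this concrete, OPA policies are written in its declarative language, Rego. A minimal, purely illustrative rule (the package and user names here are invented for this sketch, not taken from any deployment described below) denies access by default and allows a specific user:

```
package httpapi.authz

# Deny everything unless a rule below says otherwise.
default allow = false

# Allow the request when the caller is alice.
allow {
    input.user == "alice"
}
```

An integrating service would POST a JSON `input` document to OPA and enforce the boolean `allow` decision it returns.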

### What can OPA do in the real world?

In short, OPA provides unified, context-aware policy controls for cloud-native environments. OPA policy is context-aware, meaning that the administrator can make policy decisions based on what is happening in the real world, such as:

  * Is there currently an outage?
  * Is there a new vulnerability that’s been released?
  * Who are the people on call right now?

Its policies are flexible enough to accommodate arbitrary context. OPA has been proven in production in some of the largest cloud-native deployments in the world—from global financial firms with trillions under management to technology giants and household names—but it is also in use at emerging startups and regional healthcare organizations.

Beyond our own direct experiences, and thanks to the open source community’s innovations, OPA continues to mature and solve varied and evolving customer authorization and policy problems, such as Kubernetes admission control, microservice authorization, and entitlements management for both end-user and employee-facing applications. We’re thrilled by both the depth and breadth of innovative use cases unfolding in front of our eyes. To better articulate some of the real-world problems OPA is solving, we looked across OPA’s business-critical deployments in the user community to provide the following examples.

#### Provide regulatory compliance that role-based access control (RBAC) can’t

This lesson came to us through a global bank with trillions in assets. Their problem: a breach that occurred because a third-party broker had too much access. The bank’s relationship with the public was under significant stress, and it was also penalized with nearly $100 million in fines.

How did such a breach happen? In short, due to the complexity of trying to map decades of role-based access control (RBAC) onto every sprawling monolithic app. With literally millions of roles across thousands of internal and external applications, the bank’s situation was—not unlike most large, established corporations—impossible to manage or troubleshoot. What started out as a best practice (RBAC) could no longer scale. Static roles, based on business logic, cannot be tested. They can’t be deployed inline. They can’t be validated like today’s modern code can. Simply put, RBAC alone cannot manage access at cloud scale.

OPA facilitated a solution: rearchitect and simplify application access with local, context-based authorization that’s automated, tested, audited, and scalable. There are both technology and business benefits to this approach. The main technology benefit is that the authorization policy (the rules that establish what a given user can do) is built, tested, and deployed as part of continuous integration and continuous delivery (CI/CD). Every decision is tied directly to microservices and apps for auditing and validation, and all access is based not on role, but on the current context.

Instead of creating thousands of roles to cover every permutation of what’s allowed, a simple policy can determine whether or not the user should have access, and to a very fine degree. This simplified policy greatly, since context drives access decisions. Versioning and backtesting aren’t required, since every time a new policy is needed the entire policy set is re-created, eliminating nested issues and legacy role sprawl. The local-only policy also eliminates the presence of conflicting rules/roles across repositories.

The major business benefit is that compliance became easier through the separation of duties (with security teams—not developers—writing policy) and by providing clear, testable visibility into access policy across applications. This process accelerated development, since AppDev teams were freed from having to code authorization or policy directly into applications, and central RBAC repositories no longer need to be updated, maintained, and made available.

#### Provide regulatory compliance and safety by default

Another large bank, with nearly 20,000 employees, was in the untenable scenario of managing policy with spreadsheets. This situation may sound comical, but it’s far more common than you might think. Access is often "managed" via best effort and tribal knowledge. Teams document access policy in PDFs, on wikis, or in spreadsheets. They then rely on well-intentioned developers to read, understand, and remember access rules and guidelines. The bank had business reasons to move from monolithic apps to Kubernetes (K8s)—primarily improving differentiation and time to market—but its legacy compliance solutions weren’t compatible with K8s.

The bank knew that while it was a financial institution, it was _really_ a software organization. Rather than relying on human memory and best effort, the staff started thinking of policy with a GitOps mindset (pull requests, comments, and peer review to get to consensus and commitment). OPA became the single source of truth behind what was (or wasn’t) allowed by policy, implementing a true policy-as-code solution where manual effort was removed from the equation entirely, thanks to automation.

The K8s platform that the bank created was compliant by default, as it executed company regulatory policies exactly, every time. With OPA, the bank could build, deploy, and version its regulatory policy through an agile process, ensuring that all users, teams, and services were always obeying policy. The infrastructure is now compliant because compliance is literally built into the infrastructure.

#### Streamline and strengthen institutional knowledge

A major telecommunications company had an education problem that was sapping time and money. Its pain points: it had created and maintained its own admission control (AC) service; it had a slow, costly, HR-heavy support model that couldn’t scale as its developer base grew; and it had a hammer-like enforcement model that wasn’t efficient, slowing time to market.

OPA was deployed to replace the custom AC service, thereby saving resources. The guardrails OPA provided allowed management to discover and deploy key policies that they developed from real-world events (and problems) that they wanted to eliminate going forward.

Management has now become accustomed to using policy-as-code and is able to home in on the specific policies that developers trip over most. The primary benefit for this company was in the person-hours saved by not having to talk to the same developers about the same problems over and over again, and by being able to educate about and enforce policies automatically. The insights from these efforts allow the company to target education (not enforcement) to the teams that need it, proactively focusing on providing help to struggling teams.

### Learn More about OPA

To learn how to use OPA to help with your authorization and policy, or to learn how to contribute, check out the [Open Policy Agent on GitHub][4] or the tutorials on different use cases at the [OPA homepage][2].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/open-policy-agent

Author: [Tim Hinrichs][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/thinrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://www.openpolicyagent.org/
[3]: https://www.cncf.io/
[4]: https://github.com/open-policy-agent/opa
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know)
[#]: via: (https://itsfoss.com/google-chrome-shortcuts/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know
======

_**Brief: Master these Google Chrome keyboard shortcuts for a better, smoother and more productive web browsing experience. A downloadable cheat sheet is also included.**_

There is no denying that Google Chrome is the [most popular web browser][1]. Its open source version, [Chromium][2], is also gaining popularity, and some Linux distributions now include it as the default web browser.

If you use it on the desktop a lot, you can improve your browsing experience by using Google Chrome keyboard shortcuts. No need to reach for your mouse and spend time finding your way around. Just master these shortcuts and you’ll save time and be more productive.

I am using the term Google Chrome, but these shortcuts are equally applicable to the Chromium browser.

### 11 cool Chrome keyboard shortcuts you should be using

If you are a pro, you might know a few of these Chrome shortcuts already, but chances are you’ll still find some hidden gems here. Let’s see.

**Keyboard Shortcut** | **Action**
---|---
Ctrl+T | Open a new tab
Ctrl+N | Open a new window
Ctrl+Shift+N | Open an incognito window
Ctrl+W | Close the current tab
Ctrl+Shift+T | Reopen the last closed tab
Ctrl+Shift+W | Close the window
Ctrl+Tab and Ctrl+Shift+Tab | Switch to the right or left tab
Ctrl+L | Go to the search/address bar
Ctrl+D | Bookmark the website
Ctrl+H | Access browsing history
Ctrl+J | Access downloads history
Shift+Esc | Open the Chrome task manager

You can [download this list of useful Chrome keyboard shortcuts for quick reference][3].

#### 1\. Open a new tab with Ctrl+T

Need to open a new tab? Just press the Ctrl and T keys together and a new tab opens.

#### 2\. Open a new window with Ctrl+N

Too many tabs open already? Time for a fresh new window. Use the Ctrl and N keys to open a new browser window.

#### 3\. Go incognito with Ctrl+Shift+N

Checking flight or hotel prices online? Going incognito might help. Open an incognito window in Chrome with Ctrl+Shift+N.

#### 4\. Close a tab with Ctrl+W

Close the current tab with the Ctrl and W keys. No need to take the mouse to the top and look for the x button.

#### 5\. Accidentally closed a tab? Reopen it with Ctrl+Shift+T

This is my favorite Google Chrome shortcut. No more ‘oh crap’ when you close a tab you didn’t mean to. Use Ctrl+Shift+T and it will reopen the last closed tab. Keep hitting this key combination and it will keep reopening previously closed tabs.

#### 6\. Close the entire browser window with Ctrl+Shift+W

Done with your work? Time to close the entire browser window with all its tabs. Use the keys Ctrl+Shift+W and the browser window will disappear like it never existed.

#### 7\. Switch between tabs with Ctrl+Tab

Too many tabs open? You can move to the tab on the right with Ctrl+Tab. Want to move left? Use Ctrl+Shift+Tab. Press these keys repeatedly to move between all the open tabs in the current browser window.

You can also use Ctrl+1 through Ctrl+8 to jump to one of the first eight tabs, and Ctrl+9 to jump to the last tab.

#### 8\. Go to the search/address bar with Ctrl+L

Want to type a new URL or quickly search for something? Use Ctrl+L and it will highlight the address bar at the top.

#### 9\. Bookmark the current website with Ctrl+D

Found something interesting? Save it to your bookmarks with the Ctrl+D key combination.

#### 10\. Go back in history with Ctrl+H

You can open your browser history with the Ctrl+H keys. Search through the history if you are looking for a page you visited some time ago, or delete something that you don’t want to be seen anymore.

#### 11\. See your downloads with Ctrl+J

Pressing the Ctrl+J keys in Chrome takes you to the Downloads page. This page shows all the downloads you have performed.

[][5]

#### Bonus shortcut: Open the Chrome task manager with Shift+Esc

Many people don’t even know that there is a task manager in the Chrome browser. Chrome is infamous for eating up your system’s RAM, and when you have plenty of tabs open, finding the culprit is not easy.

With the Chrome task manager, you can see all the open tabs and their system utilization stats. You can also see various hidden processes such as Chrome extensions and other services.

![Google Chrome Task Manager][6]

### Download the Chrome shortcut cheat sheet

I know that mastering keyboard shortcuts is a matter of habit, and you can make it a habit by using them again and again. To help you with this, I have created this Google Chrome keyboard shortcut cheat sheet.

You can download the image below in PDF form, print it, and put it on your desk. This way you can practice the shortcuts all the time.

![Google Chrome Keyboard Shortcuts Cheat Sheet][7]

[Download Chrome Shortcut Cheatsheet][8]

If you are interested in mastering shortcuts, you may also have a look at [Ubuntu keyboard shortcuts][9].

By the way, what’s your favorite Chrome shortcut?

--------------------------------------------------------------------------------

via: https://itsfoss.com/google-chrome-shortcuts/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[2]: https://www.chromium.org/Home
[3]: tmp.3qZNXSy2FC#download-cheatsheet
[4]: https://itsfoss.com/command-line-text-editors-linux/
[5]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-task-manager.png?resize=800%2C300&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-keyboard-shortcuts-cheat-sheet.png?ssl=1
[8]: https://drive.google.com/open?id=1lZ4JgRuFbXrnEXoDQqOt7PQH6femIe3t
[9]: https://itsfoss.com/ubuntu-shortcuts/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Raspberry Pi Based Open Source Tablet is in Making and it’s Called CutiePi)
[#]: via: (https://itsfoss.com/cutiepi-open-source-tab/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

A Raspberry Pi Based Open Source Tablet is in Making and it’s Called CutiePi
======

CutiePi is an 8-inch open source tablet built on top of the Raspberry Pi. For now, it is just a working prototype, which the makers announced on the [Raspberry Pi forums][1].

In this article, you’ll get to know more details about the specifications, price, and availability of the CutiePi.

They have built the tablet using a custom-designed Compute Module 3 (CM3) carrier board. The [official website][2] describes the purpose of the custom CM3 carrier board as:

> A custom CM3/CM3+ carrier board designed for portable use, with enhanced power management and Li-Po battery level monitoring features; works with selected HDMI or MIPI DSI displays.

So, this is what makes the tablet thin while remaining portable.

### CutiePi specifications

![CutiePi Board][3]

I was surprised to learn that it rocks an 8-inch IPS LCD display, which is a good thing for starters. However, you won’t be getting a true HD screen, because the resolution is 1280×800, as mentioned officially.

It is also planned to ship with a Li-Po 4800 mAh battery (the prototype had a 5000 mAh battery). Well, for a tablet, that isn’t bad at all.

Connectivity options include support for Wi-Fi and Bluetooth 4.0. In addition, a USB Type-A port, 6x GPIO pins, and a microSD card slot are present.

![CutiePi Specifications][4]

The hardware is officially compatible with [Raspbian OS][5], and the user interface is built with [Qt][6] for a fast and intuitive user experience. Also, along with the built-in apps, it is expected to support Raspbian PIXEL apps via XWayland.

### CutiePi source code

You can second-guess the pricing of this tablet by analyzing the bill of materials used. CutiePi follows a 100% open source hardware design for this project. So, if you are curious, you can check out its GitHub page for more information on the hardware design.

[CutiePi on GitHub][7]

### CutiePi pricing, release date, and availability

CutiePi plans to work on [DVT][8]-batch PCBs in August (this month), and the team is targeting a launch of the final product by the end of 2019.

Officially, they expect to launch it at around $150-$250. This is just an approximate range and should be taken with a pinch of salt.

Obviously, the price will be a major factor in making it a success, even though the product itself sounds promising.

**Wrapping Up**

CutiePi is not the first project to use a [single board computer like the Raspberry Pi][9] to make a tablet. We have the upcoming [PineTab][10], which is based on a Pine64 single board computer. Pine also has a laptop called the [Pinebook][11] based on the same.

Judging by the prototype, it is indeed a product that we can expect to work. However, the pre-installed apps and the apps it will support may turn the tide. Also, considering the price estimate, it sounds promising.

What do you think about it? Let us know your thoughts in the comments below or just play this interactive poll.

--------------------------------------------------------------------------------

via: https://itsfoss.com/cutiepi-open-source-tab/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=247380
[2]: https://cutiepi.io/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-board.png?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/cutiepi-specifications.jpg?ssl=1
[5]: https://itsfoss.com/raspberry-pi-os-desktop/
[6]: https://en.wikipedia.org/wiki/Qt_%28software%29
[7]: https://github.com/cutiepi-io/cutiepi-board
[8]: https://en.wikipedia.org/wiki/Engineering_validation_test#Design_verification_test
[9]: https://itsfoss.com/raspberry-pi-alternatives/
[10]: https://www.pine64.org/pinetab/
[11]: https://itsfoss.com/pinebook-pro/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Four Ways to Check How Long a Process Has Been Running in Linux)
[#]: via: (https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

Four Ways to Check How Long a Process Has Been Running in Linux
======

Sometimes you need to figure out how long a process has been running in Linux.

Yes, it is possible, and it can be done with the help of the ps command.

It can show the given process's elapsed time in the form [[DD-]hh:]mm:ss, in seconds, or as the exact start date and time.

Multiple options are available in the ps command to check this.

Each option produces different output, which can be used for a different purpose.

```
# top -b -s -n 1 | grep httpd

16337 root 20 0 228272 5160 3272 S 0.0 0.1 1:02.27 httpd
30442 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30443 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30444 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30445 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
30446 apache 20 0 240520 3132 1232 S 0.0 0.1 0:00.00 httpd
```

**`Make a note:`** You may think the same details can be found in the **[top command output][1]**. No: top shows you the total CPU time the task has used since it started, which does not include elapsed time. So, don't be confused by this.

### What’s the ps Command?

ps stands for process status; it displays information about the active/running processes on the system.

It provides a snapshot of the current processes along with detailed information such as username, user ID, CPU usage, memory usage, process start date and time, command name, etc.

  * **`etime:`** elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
  * **`etimes:`** elapsed time since the process was started, in seconds.

To do so, you first need to **[find out the PID of the process][2]**; we can easily identify it using the pidof command.

```
# pidof httpd

30446 30445 30444 30443 30442 16337
```

### Method-1: Using the etime Option

Use the ps command with the **`etime`** option to get the detailed elapsed time.

```
# ps -p 16337 -o etime

ELAPSED
13-13:13:26
```

As per the above output, the httpd process has been running on our server for 13 days, 13 hours, 13 minutes, and 26 seconds.

### Method-2: Using the Process Name Instead of the Process ID (PID)

If you want to use the process name instead of the PID, use the following:

```
# ps -eo pid,etime,cmd | grep httpd | grep -v grep

16337 13-13:13:39 /usr/sbin/httpd -DFOREGROUND
30442 1-02:59:50 /usr/sbin/httpd -DFOREGROUND
30443 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30444 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30445 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
30446 1-02:59:49 /usr/sbin/httpd -DFOREGROUND
```
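
On procps-based systems, the same selection can be done without the grep pipeline by letting ps match the command name itself with `-C`. A minimal sketch (it starts a throwaway `sleep` process so the example is self-contained, rather than assuming httpd is running):

```shell
# Start a throwaway process, then select it by command name with -C
# instead of piping ps output through grep.
sleep 60 &
ps -C sleep -o pid,etime,cmd
```

Unlike the grep pipeline, `-C` matches the executable name exactly, so it never picks up the grep process itself and needs no `grep -v grep`.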

### Method-3: Using the etimes Option

The following command will show you the elapsed time in seconds.

```
# ps -p 16337 -o etimes

ELAPSED
1170810
```

It shows the output in seconds, and you need to convert it as per your requirement.

```
+---------------------+-------------------------+
| Human-Readable time | Seconds                 |
+---------------------+-------------------------+
| 1 hour              | 3600 seconds            |
| 1 day               | 86400 seconds           |
+---------------------+-------------------------+
```

If you would like to know how many hours the process has been running, use a **[Linux command line calculator][3]**.

```
# bc -l

1170810/3600
325.22500000000000000000
```

If you would like to know how many days the process has been running, use the following format.

```
# bc -l

1170810/86400
13.55104166666666666666
```

The above commands don't show you the exact start date of the process. If you want to know that information, you can use the following command. As per the output below, the httpd process has been running since **`Aug 05`**.

```
# ps -ef | grep httpd

root 16337 1 0 Aug05 ? 00:01:02 /usr/sbin/httpd -DFOREGROUND
root 24999 24902 0 06:34 pts/0 00:00:00 grep --color=auto httpd
apache 30442 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30443 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30444 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30445 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
apache 30446 16337 0 Aug18 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
```

### Method-4: Using the proc Filesystem (procfs)

The above command still doesn't show you the exact start time of the process; use the following format to check that. As per the output below, the httpd process has been running since **`Aug 05 at 17:20`**.

The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information.

It's sometimes referred to as a process information pseudo-filesystem. It doesn't contain 'real' files but runtime system information (e.g., system memory, devices mounted, hardware configuration, etc.).

```
# ls -ld /proc/16337

dr-xr-xr-x. 9 root root 0 Aug 5 17:20 /proc/16337/
```
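
Since `etimes` is plain seconds, the exact start timestamp can also be derived with a little date arithmetic. A minimal sketch, assuming GNU date (it starts a fresh process instead of the article's httpd PID, so it runs anywhere):

```shell
# Derive a process's start time from its etimes value:
#   start = now - elapsed seconds
sleep 120 &
pid=$!

secs=$(ps -p "$pid" -o etimes=)        # elapsed time in seconds
date -d "@$(( $(date +%s) - secs ))"   # exact start date and time
```

This prints the same start time that `ls -ld /proc/<pid>` shows, but with second precision and without depending on how old the process is.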

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-how-long-a-process-has-been-running-in-linux/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[2]: https://www.2daygeek.com/9-methods-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/
[3]: https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/
[#]: collector: (lujun9972)
[#]: translator: (beamrolling)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to transition into a career as a DevOps engineer)
[#]: via: (https://opensource.com/article/19/7/how-transition-career-devops-engineer)
[#]: author: (Conor Delanbanque https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque)

How to transition into a career as a DevOps engineer
======

Whether you are a recent college graduate or a seasoned IT professional looking to advance your career, these tips can help you become a DevOps engineer.

![technical resume for hiring new talent][1]

DevOps engineering is a hot, highly praised career. Whether you just graduated and are looking for your first job, or you're leveraging prior industry experience while seeking an opportunity to learn new skills, this guide can help you take the right steps toward becoming a [DevOps engineer][2].

### Immerse yourself

Start by learning the fundamental principles, practices, and methodologies of [DevOps][3]. Understand the "why" behind DevOps before reaching for the tools. A DevOps engineer's main goal is to increase speed while maintaining or improving quality across the entire software development lifecycle (SDLC) in order to deliver maximum business value. Read articles, watch YouTube videos, and attend local meetups or conferences: become part of the welcoming DevOps community, where you'll learn from the mistakes and successes of those who came before you.

### Consider your background

If you have prior experience in a technical role such as software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insight and useful experience that will help you become a DevOps engineer. If you're just starting your career after completing a degree in computer science or any other STEM field, you have some of the fundamental stepping stones you'll need for this transition.

The DevOps engineer role covers a broad range of responsibilities. These are the three directions in which enterprises are most likely to employ them:

  * **DevOps engineers with a Dev bias** play a software development role, building applications. They leverage continuous integration/continuous delivery (CI/CD), shared repositories, cloud, and containers as part of their daily work, but they are not necessarily responsible for building or implementing the tooling. They understand the infrastructure and, in a mature environment, can push their own code into production.
  * **DevOps engineers with an Ops bias** are comparable to systems engineers or systems administrators. They understand software development but do not spend the core of their day building applications. Instead, they are more likely to support software development teams in automating manual processes and improving the efficiency of both human and technical systems. That could mean breaking down legacy code and replacing it with less cumbersome automation scripts that run the same commands, or it could mean installing, configuring, or maintaining infrastructure and tooling. They make sure tools are installed and available for any team that needs them. They also help teams by teaching them how to leverage CI/CD and other DevOps practices.
  * **Site reliability engineers (SREs)** are like software engineers who solve operations and infrastructure problems. SREs focus on creating software systems that are scalable, highly available, and reliable.

In an ideal world, a DevOps engineer would understand all of the above areas; that is common at mature tech companies. However, DevOps roles at top banks and many Fortune 500 companies usually lean either toward Dev or toward Ops.

### Technologies to learn

DevOps engineers need to understand a wide variety of technologies to do their jobs effectively. Whatever your background, start with the fundamental technologies you'll use and need to understand as a DevOps engineer.

#### Operating systems

The operating system is where everything runs, and having fundamental knowledge of it is important. [Linux][4] is the operating system you'll most likely use daily, although some organizations use Windows. To get started, you can install Linux at home, where you can break things and learn as much as you like.

#### Scripting

Next, pick a language to learn for scripting. There are plenty to choose from, including Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. I suggest [starting with Python][5], since it is relatively easy to learn and interpret and is one of the most popular languages. Python is commonly written following the fundamentals of object-oriented programming (OOP) and can be used for web development, software development, and creating desktop GUIs and business applications.

#### Cloud

After [Linux][4] and [Python][5], I think the next thing to learn is cloud computing. Infrastructure is no longer just "the ops guys'" concern, so you need exposure to a cloud platform such as AWS, Azure, or Google Cloud Platform. I would start with AWS, because it offers an abundance of free learning tools that can lower the barrier to entry, whether you approach it as a developer, from ops, or even from the business-facing side. In fact, you might be overwhelmed by how much it offers. Consider starting with EC2, S3, and VPC, and see where you want to go from there.

#### Programming languages

If you come to DevOps with a passion for software development, keep improving your programming skills. Some good and commonly used languages in DevOps are the same as for scripting: Python, Go, Java, Bash, PowerShell, Ruby, and C/C++. You should also become familiar with Jenkins and Git/GitHub, which you will use heavily in CI/CD processes.

#### Containers

Finally, start learning about [containerization][6] using tools such as Docker and orchestration platforms such as Kubernetes. There are plenty of free learning resources online, and most cities have local meetups where you can learn from experienced people in a friendly environment (with pizza and beer!).

#### What else?

If you have less development experience, you can still [get involved in DevOps][3] by bringing a passion for automation, improved efficiency, collaboration with others, and improving your own work. I would still suggest learning the tools listed above, but with less emphasis on programming/scripting languages. It is very useful to know about infrastructure-as-a-service, platform-as-a-service, cloud platforms, and Linux. You'll likely be setting up tooling and learning how to build systems that are resilient and fault-tolerant, and you can leverage those while writing code.

### Finding a DevOps job

The job search process will differ depending on whether you've been working in a technical role and are moving into DevOps, or you are a graduate just starting your career.

#### If you're already in a technical role

If you are transitioning from one technical field into a DevOps role, first try to find opportunities within your current company. Can you work alongside another team? Try to shadow other team members, ask for advice, and acquire new skills without leaving your current job. If that's not possible, you may need to move to another company. If you can learn some of the practices, tools, and technologies listed above, you'll be in a good position to demonstrate relevant knowledge in interviews. The key is to be honest and not set yourself up for failure. Most hiring managers understand that you don't know all the answers; if you can show what you've been learning and explain that you're willing to learn more, you should have a shot at a DevOps job.

#### If you're just starting your career

Apply for open opportunities at companies hiring junior DevOps engineers. Unfortunately, many companies say they want someone with more experience and suggest you re-apply once you've acquired it. It's the classic, frustrating "we need experienced people" scenario, where no one seems willing to give you your first chance.

However, not every job search is that demoralizing; some companies focus on training and upskilling students fresh out of college. For example, [MThree][7], where I work, hires recent graduates and puts them through eight weeks of training. Upon completing the training, participants have a solid understanding of the entire SDLC and how it applies in a Fortune 500 environment. Graduates are hired as junior DevOps engineers at MThree's client companies; MThree pays the full-time salary and benefits for the first 18 to 24 months, after which the graduates join the client as direct employees. This is a great way to bridge the gap from college into a technology career.

### Summary

There are many ways to transition into a DevOps engineer role. It is a very rewarding career path that will likely keep you busy and challenged, and it can increase your earning potential.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/how-transition-career-devops-engineer

作者:[Conor Delanbanque][a]
选题:[lujun9972][b]
译者:[beamrolling](https://github.com/beamrolling)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hiring_talent_resume_job_career.png?itok=Ci_ulYAH (technical resume for hiring new talent)
[2]: https://opensource.com/article/19/7/devops-vs-sysadmin
[3]: https://opensource.com/resources/devops
[4]: https://opensource.com/resources/linux
[5]: https://opensource.com/resources/python
[6]: https://opensource.com/article/18/8/sysadmins-guide-containers
[7]: https://www.mthreealumni.com/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)

Podman and user namespaces: A marriage made in heaven
======

> Learn how to use Podman to run containers in separate user namespaces.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)

[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces.

I have always thought of [user namespaces][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespaces allow you to specify a user identifier (UID) and group identifier (GID) mapping for running containers. This means you can run as UID 0 inside the container and as UID 100000 outside the container. If your container processes escape the container, the kernel treats them as UID 100000. Not only that, but any file object owned by a UID that is not mapped into the user namespace is treated as owned by `nobody` (65534, `kernel.overflowuid`), and container processes are not allowed to access it unless the object is accessible by "other" (world readable/writable).

If you have a file with permissions [660][5] owned by "real" `root`, and a container process in a user namespace attempts to read it, the process is prevented from accessing it, and the file is treated as owned by `nobody`.

### An example

Here is how this works. First, I create a file on my system, owned by `root`.

```
$ sudo bash -c "echo Test > /tmp/test"
$ sudo chmod 600 /tmp/test
$ sudo ls -l /tmp/test
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
```

Next, I volume-mount the file into a container running with the user namespace mapping `0:100000:5000`.

```
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
# id
uid=0(root) gid=0(root) groups=0(root)
# ls -l /tmp/test
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
# cat /tmp/test
cat: /tmp/test: Permission denied
```

The `--uidmap` setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999), to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is UID 100001 on the host.

Since the real `UID=0` is not mapped into the container, any file owned by `root` is treated as owned by `nobody`. Even if the process inside the container has `CAP_DAC_OVERRIDE`, it cannot override this protection. `DAC_OVERRIDE` enables root processes to read and write any file on the system, even if the process is not owned by the `root` user and the file is not world readable or writable.

User namespace capabilities are not the same as capabilities on the host. They are namespaced capabilities. This means my container root has capabilities only within the container, and really only within the range of UIDs mapped into the user namespace. If a container process escapes the container, it has no capabilities over UIDs that are not mapped into the user namespace, including UID=0. Even if the process could somehow enter another container, it would not have those capabilities if that container uses a different range of UIDs.

Note that SELinux and other technologies also limit what happens when a container process breaks out of the container.
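
Outside of Podman, you can verify any process's mapping directly, since the kernel exposes it in procfs. A minimal sketch, assuming a Linux host:

```shell
# Each line of /proc/<pid>/uid_map reads:
#   <first UID inside the namespace> <first UID outside> <range length>
# A container started with --uidmap 0:100000:5000 would show
# "0 100000 5000" here; an ordinary host process shows the
# identity mapping over the full UID range.
cat /proc/self/uid_map
```

The matching `gid_map` file in the same directory describes the GID mapping in the same format.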

### Using `podman top` to show user namespaces

We have added features to `podman top` that allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host.

Let's start by running a `sleep` container using our UID mapping.

```
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
```

Now run `podman top`:

```
$ sudo podman top --latest user huser
USER HUSER
root 100000

$ ps -ef | grep sleep
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

Notice that `podman top` reports that the user process is running as `root` inside the container but as UID 100000 on the host (`HUSER`). Also, the `ps` command confirms that the `sleep` process is running as UID 100000.

Now let's run a second container, but this time we'll pick a separate UID mapping starting at 200000.

```
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
$ sudo podman top --latest user huser
USER HUSER
root 200000

$ ps -ef | grep sleep
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
200000 23644 23632 1 08:08 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```

Notice that `podman top` reports that the second container is running as `root` inside the container but as UID=200000 on the host.

Also see in the `ps` output that both `sleep` processes are running: one as UID 100000 and the other as UID 200000.

This means that running containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning.

### Problems with user namespaces

For several years, I have advocated user namespaces as the security tool everyone should have, yet hardly anyone has used them. The reason is that there has been no filesystem support for them and no shifting filesystem.

In containers, you want to share the **base** image between many containers. The examples above use the Fedora base image in each case. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by `nobody`, so we need to shift all of these UIDs to match the user namespace. For years, I have wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem.

Podman can use different user namespaces on the same image thanks to automatic [chown][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, containers/storage "chowns" (i.e., changes the ownership of) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the `fedora:0:100000:5000` image.

When Podman runs another container on the image with the same UID mapping, it uses the "pre-chowned" image. When I run a second container on 0:200000:5000, containers/storage creates a second image, which we could call `fedora:0:200000:5000`.

Note that if you do a `podman build` or `podman commit` and push the newly created image to a container registry, Podman uses containers/storage to reverse the shift and pushes the image with all files owned back by real UID=0.

This can cause a real slowdown when creating containers in new UID mappings, since the `chown` can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. It can take up to 30 seconds for a normal Fedora image to finish the `chown` and start the container.

Luckily, the Red Hat kernel storage team (primarily Vivek Goyal and Miklos Szeredi) added a new feature to OverlayFS in kernel 4.19. The feature is called "metadata-only copy-up." If you mount an overlay filesystem with the `metacopy=on` mount option, it does not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes, with references pointing at the lower-level data. It still copies up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out.

This means a container `chown` can happen in a couple of seconds, and you won't double the storage space for each container.

This makes running containers in separate user namespaces viable with tools like Podman, greatly increasing the security of the system.

### Looking forward

I would like to add a new flag to Podman, such as `--userns=auto`, which would tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable `PODMAN_USERNS=auto`, you won't even need to set the flag.

Podman finally allows users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace the container engine will run containers in, and upstream is working on that.

In my next article, I will explain how to run Podman in a user namespace as a non-root user.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/podman-and-user-namespaces

作者:[Daniel J Walsh][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/rhatdan
[b]: https://github.com/lujun9972
[1]: https://podman.io/
[2]: https://github.com/containers/libpod
[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
[5]: https://chmodcommand.com/chmod-660/
[6]: https://en.wikipedia.org/wiki/Chown
[7]: https://github.com/containers/storage
[8]: https://en.wikipedia.org/wiki/OverlayFS
[9]: https://buildah.io/
[10]: http://cri-o.io/
@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Reinstall Ubuntu in Dual Boot or Single Boot Mode)
[#]: via: (https://itsfoss.com/reinstall-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Reinstall Ubuntu in Dual Boot or Single Boot Mode
======
If you have broken your Ubuntu system and tried numerous ways to fix it, you may eventually give up and take the easy way out: reinstall Ubuntu.

We have all been in a situation where reinstalling Linux seems a better idea than troubleshooting the problem and fixing it for good. Troubleshooting a Linux system teaches you a lot, but you cannot always afford to spend more time fixing a broken system.

As far as I know, there is no system-recovery partition in Ubuntu like there is in Windows. So the question arises: how do you reinstall Ubuntu? Let me show you how.
Warning!

Playing with disk partitions is always a risky task. I strongly advise you to back up your data on an external disk.
### How to reinstall Ubuntu Linux

![][1]

Here are the steps to reinstall Ubuntu.
#### Step 1: Create a live USB

First, download Ubuntu from its website. You can download [whichever Ubuntu version][2] you need.

[Download Ubuntu][3]
Once you have the ISO image, it is time to create a live USB. If your Ubuntu system is still usable, you can create one with the Startup Disk Creator tool that Ubuntu ships with.

If your Ubuntu machine is unusable, you can use another system. You can refer to this article to learn [how to create a live USB of Ubuntu in Windows][4].
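If you are on another Linux system instead, one common way to write the ISO is `dd`. This is only a sketch: the ISO filename and the device name `/dev/sdX` are placeholders, and the command is printed rather than executed here because running it overwrites the target disk:

```shell
ISO="ubuntu.iso"   # path to the downloaded image (placeholder name)
DEV="/dev/sdX"     # your USB stick -- double-check with lsblk first!
# Printed rather than executed; run it manually once DEV is confirmed:
echo "sudo dd if=$ISO of=$DEV bs=4M status=progress conv=fsync"
```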
#### Step 2: Reinstall Ubuntu

Once you have the live USB of Ubuntu, plug it in and restart your system. While booting, press the F2/F10/F12 key to access the BIOS settings and make sure "Boot from Removable Devices/USB" is set at the top. Save and exit the BIOS. This boots you into the live USB.

Once in the live USB, choose to install Ubuntu. You will get the usual options for choosing your language and keyboard layout, along with options such as downloading updates.

![Go ahead with regular installation option][5]
Now comes the important step. You should see an "Installation Type" screen. What you see here depends heavily on how Ubuntu sees the disk partitioning and the operating systems installed on your system.

Read the options and their details carefully at this step. Pay attention to what each option says. The options may look different on different systems.

![Reinstall Ubuntu option in dual boot mode][7]
In my case, it found that I have Ubuntu 18.04.2 and Windows installed on my system, and it gave me a few options.

The first option was to erase Ubuntu 18.04.2 and reinstall it. It told me that it would delete my personal data, but it said nothing about deleting all operating systems (i.e., Windows).

If you are super lucky, or in single-boot mode, you may see an option to "Reinstall Ubuntu". This option keeps your existing data and even tries to keep the installed software. If you see this option, go for it.
Note for dual-boot systems

If you are dual-booting Ubuntu and Windows and, during the reinstall, your Ubuntu system does not see Windows, you must choose the "Something else" option and install Ubuntu from there. I have described that in [the process of installing Linux in dual boot][8].

For me, there was no option to reinstall and keep the data, so I chose "Erase Ubuntu and reinstall". That option reinstalls Ubuntu even when it is in dual-boot mode with Windows.
Reinstallation is one reason I advise using separate partitions for root and home. With that, you can keep the data in your home partition safe even when you reinstall Linux. I have demonstrated it in this video:
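Before reinstalling, you can check whether your /home is already a separate partition. A quick sketch using `findmnt`, which succeeds only when /home is its own mount point:

```shell
# findmnt exits 0 only if /home is a mount point of its own.
if findmnt /home > /dev/null 2>&1; then
    echo "/home is a separate partition - its data can survive a reinstall"
else
    echo "/home lives on the root partition - back it up before reinstalling"
fi
```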
Once you have chosen to reinstall Ubuntu, the rest is just clicking Next. Select your location and create a user account.

![Just go on with the installation options][9]

Once that is done, you will have Ubuntu reinstalled.

In this tutorial, I have assumed that you know what I am talking about, since you have installed Ubuntu before. If any of the steps need clarification, feel free to ask in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/reinstall-ubuntu/
Author: [Abhishek Prakash][a]

Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/Reinstall-Ubuntu.png?resize=800%2C450&ssl=1
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://ubuntu.com/download/desktop
[4]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-1.jpg?resize=800%2C473&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-dual-boot.jpg?ssl=1
[8]: https://itsfoss.com/replace-linux-from-dual-boot/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/reinstall-ubuntu-3.jpg?ssl=1
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,10 +7,10 @@
[#]: via: (https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
How To Fix “Kernel driver not installed (rc=-1908)” VirtualBox Error In Ubuntu
======

I use Oracle VirtualBox to test various Linux and Unix distributions. I’ve tested hundreds of virtual machines in VirtualBox so far. Today, I started an Ubuntu 18.04 server VM on my Ubuntu 18.04 desktop and I got the following error.

```
Kernel driver not installed (rc=-1908)
@ -26,9 +26,9 @@ where: suplibOsInit what: 3 VERR_VM_DRIVER_NOT_INSTALLED (-1908) - The support d
![][2]

“Kernel driver not installed (rc=-1908)” Error in Ubuntu

I clicked OK to close the message box and saw another one in the background.

```
Failed to open a session for the virtual machine Ubuntu 18.04 LTS Server.
@ -45,45 +45,35 @@ IMachine {85cd948e-a71f-4289-281e-0ca7ad48cd89}
![][3]

The virtual machine has terminated unexpectedly during startup with exit code 1 (0x1)

I didn’t know what to do first. I ran the following command to check if it would help.

```
$ sudo modprobe vboxdrv
```
And, I got this error:

```
modprobe: FATAL: Module vboxdrv not found in directory /lib/modules/5.0.0-23-generic
```
After carefully reading both error messages, I realized that I should update the VirtualBox application.

If you ever run into this error in Ubuntu and its variants like Linux Mint, all you have to do is reinstall or update the **“virtualbox-dkms”** package using the command:

```
$ sudo apt install virtualbox-dkms
```

Or, even better, update the whole system:

```
$ sudo apt upgrade
```
Now the error has gone and I could start VMs from VirtualBox without any issues.
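To confirm the fix took effect, you can check whether the kernel module is actually loaded. A small sketch; `lsmod` only lists loaded modules, so this check is safe to run anywhere:

```shell
# vboxdrv must be loaded for VirtualBox VMs to start.
if lsmod 2>/dev/null | grep -q '^vboxdrv'; then
    echo "vboxdrv is loaded"
else
    echo "vboxdrv is not loaded - try: sudo modprobe vboxdrv"
fi
```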
--------------------------------------------------------------------------------
@ -91,7 +81,7 @@ via: https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-vi
Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)