Merge pull request #2 from LCTT/master

refresh #2
This commit is contained in:
0x996 2019-07-19 14:41:54 +10:00 committed by GitHub
commit 5cfd93a9f8
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
41 changed files with 3452 additions and 1270 deletions

View File

@ -0,0 +1,126 @@
MX Linux:一款专注于简洁性的中等体量发行版
======
> 这个发行版可以让任何人在 Linux 上感到宾至如归。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux.png?itok=OLjmCxT9)
Linux 有着如此多种的发行版。许多发行版为了使自己与众不同而做出了很多改变。另一方面,许多发行版之间的区别又是如此之小,你可能会问:为什么有人还愿意不厌其烦地重复别人已经做过的工作呢?也正是基于这一疑惑,让我好奇为什么 [antiX][1] 和 [MEPIS][2] 这两个社区要联合推出一个特殊的发行版,考虑到具体情况,应该会是一个搭载 Xfce 桌面并基于 antiX 的版本,由 MEPIS 社区承担开发。
这一开发中的使用 Xfce 桌面的 antiX 系统是否会基于它之前的发行版呢?毕竟,antiX 旨在提供一个“基于 Debian 稳定版的快速、轻量级、易于安装的非 systemd 的 live CD 发行版”。antiX 所搭载的桌面是 [LXDE][3],能够极好地满足关于轻量化系统的相关要求和特性。那究竟是什么原因使得 antiX 决定构建另一个轻量化发行版呢?仅仅是因为这次换成了 Xfce 吗?好吧,Linux 社区中的任何人都知道,增加一个风格不同的优秀轻量级发行版是值得一试的,特别是它可以使我们的旧硬件摆脱进入垃圾填埋场的宿命。当然,LXDE 和 Xfce 并不完全属于同一类别:LXDE 应该被认为是一个真正的轻量级桌面,而 Xfce 应该被认为是一个中等体量的桌面。朋友们,这就是 MX Linux 作为 antiX 的一个重要迭代的关键所在:一个基于 Debian 的中等体量的发行版,它包含你完成工作所需的所有工具。
但是在 MX Linux 中有一些直接从 antiX 借用来的非常有用的东西 —— 那就是安装工具。当我初次设置了 VirtualBox 虚拟机来安装 MX Linux 时,我以为安装过程将是我已经习惯的那种典型的、非常简单的 Linux 安装。令我非常惊讶的是,MX Linux 使用的 antiX 安装程序消除了以往的痛点,特别是对于那些对尝试 Linux 持观望态度的人来说。
因此,甚至在我开始尝试 MX Linux 之前,我就对它有了深刻的印象。让我们来看看是什么让这个发行版的安装如此特别,最后再来看看桌面。
你可以从[这里][4]下载 MX Linux 17.1。系统的最低要求是:
* CD/DVD 驱动器(以及能够从该驱动器引导的 BIOS),或 live USB(以及能够从 USB 引导的 BIOS)
* 英特尔 i486 或 AMD 处理器
* 512 MB 内存
* 5 GB 硬盘空间
* 扬声器,AC97 或 HDA 兼容声卡
* 作为一个 LiveUSB 使用,需要 4 GB 空间
### 安装
MX Linux 安装程序使安装 Linux 变得轻而易举。虽然它可能不是外观最现代化的安装工具,但也已经差不多了。安装的要点是从选择磁盘和选择安装类型开始的(图 1)。
![install][6]
*图 1:MX Linux 的安装程序截图之一*
下一个重要的界面(图 2)要求你设置计算机名称、域名,以及(如果需要的话)用于微软网络的工作组。
![network][8]
*图 2:设置网络名称*
配置工作组的能力是第一个真正值得称赞的地方。这是我记忆中第一款在安装期间提供此选项的发行版。它还应该提示你:MX Linux 提供了开箱即用的目录共享功能。它做到了,而且深藏功与名。它并不完美,但它可以在不需要安装任何额外包的情况下工作(稍后将详细介绍)。
最后一个重要的安装界面(需要用户交互的)是创建用户帐户和设置 root 用户的密码(图 3)。
![user][9]
*图 3:设置用户帐户详细信息和 root 用户密码*
最后一个界面设置完成后,安装将完成并要求重新启动。重启后,你将看到登录屏幕。登录并享受 MX Linux 带来的体验。
### 使用
Xfce 桌面是一个非常容易上手的界面。默认设置将面板置于屏幕的左边缘(图 4)。
![desktop][11]
*图 4:MX Linux 的默认桌面*
如果你想将面板移动到更传统的位置,右键单击面板上的空白处,然后单击“面板”>“面板首选项”。在显示的窗口中(图 5),单击“样式”下拉菜单,在桌面栏、垂直栏或水平栏之间选择你想要的模式。
![panel][13]
*图 5:配置 MX Linux 面板*
桌面栏和垂直栏选项的区别在于:在桌面栏模式下,面板和垂直模式一样垂直对齐,但是插件是水平放置的。这意味着你可以创建更宽的面板(用于宽屏布局)。如果选择水平布局,它将默认位于顶部,然后你必须取消锁定面板,单击“关闭”,然后(使用面板左侧边缘的拖动手柄)将其拖动到底部。之后你可以回到面板设置窗口并重新锁定面板。
除此之外,使用 Xfce 桌面对于任何级别的用户来说都是无需动脑筋的事情……就是这么简单。你会发现很多涵盖了生产力(LibreOffice、Orage Calendar、PDF-Shuffler)、图像(GIMP)、通信(Firefox、Thunderbird、HexChat)、多媒体(Clementine、guvcview、SMTube、VLC 媒体播放器)的软件,还有一些 MX Linux 专属的工具(称为 MX 工具,涵盖了 live-USB 驱动器制作工具、网络助手、包管理工具、仓库管理工具、live ISO 快照工具等等)。
### Samba
让我们讨论一下如何将文件夹共享到你的网络。正如我所提到的,你不需要安装任何额外的包就可以使其正常工作。只需打开文件管理器,右键单击任意位置,并选择“网络上的共享文件夹”。系统将提示你输入管理密码(已在安装期间设置)。验证成功之后,Samba 服务器配置工具将打开(图 6)。
![sharing][15]
*图 6:向网络共享一个目录*
单击“+”按钮配置你的共享。你将被要求指定一个目录,为共享提供一个名称/描述,然后决定该共享是否可写和可见(图 7)。
![sharing][17]
*图 7:在 MX Linux 上配置共享*
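这个图形工具最终修改的其实是 Samba 的配置文件(通常是 `/etc/samba/smb.conf`)。一个等价的共享定义大致如下(假设性示例,共享名、路径和用户名均为举例,并非本文截图中的实际配置):

```
[Documents]
   comment = Shared documents
   path = /home/alice/Documents
   read only = no
   browseable = yes
   valid users = alice
```

直接编辑配置文件后,通常需要重启 `smbd` 服务才能让改动生效。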
当你单击 Access 选项时,你可以选择是让每个人都访问共享,还是限于特定的用户。问题就出在这里。此时,没有用户可以共享。为什么?因为它们还没有被添加。有两种方法可以把它们添加到共享:从命令行或使用我们已经打开的工具。让我们用一种更为简单的方法。在 Samba 服务器配置工具的主窗口中,单击“首选项” > “Samba 用户”。在弹出的窗口中,单击“添加用户”。
将出现一个新窗口(图 8),你需要从下拉框中选择用户,输入 Windows 用户名,并为该用户键入/再次键入密码。
![Samba][19]
*图 8:向 Samba 添加用户*
一旦你单击“确定”,该用户就会被添加,而且你的网络中的这个用户就可以使用该共享了。创建 Samba 共享从未如此容易。
### 结论
MX Linux 可以让任何人从其他桌面操作系统转到 Linux 的过程变得非常简单。尽管有些人可能会觉得桌面界面不太现代,但该发行版的主要关注点不是美观,而是简洁。为此,MX Linux 以出色的方式取得了成功。Linux 的这个特色发行版可以让任何人在使用 Linux 的过程中感到宾至如归。尝试这一中等体量的发行版,看看它能否胜任你的日常系统。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/mx-linux-mid-weight-distro-focused-simplicity
作者:[JACK WALLEN][a]
译者:[qfzy1233](https://github.com/qfzy1233)
校对:[wxy](https://github.com/wxy)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://antixlinux.com/
[2]:https://en.wikipedia.org/wiki/MEPIS
[3]:https://lxde.org/
[4]:https://mxlinux.org/download-links
[5]:/files/images/mxlinux1jpg
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_1.jpg?itok=i9bNScjH (install)
[7]:/licenses/category/used-permission
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_2.jpg?itok=72nWxkGo
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_3.jpg?itok=ppf2l_bm (user)
[10]:/files/images/mxlinux4jpg
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_4.jpg?itok=mS1eBy9m (desktop)
[12]:/files/images/mxlinux5jpg
[13]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_5.jpg?itok=wsN1hviN (panel)
[14]:/files/images/mxlinux6jpg
[15]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_6.jpg?itok=vw8mIp9T (sharing)
[16]:/files/images/mxlinux7jpg
[17]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_7.jpg?itok=tRIWdcEk (sharing)
[18]:/files/images/mxlinux8jpg
[19]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_8.jpg?itok=ZS6lhZN2 (Samba)
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (WangYueScream)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11116-1.html)
[#]: subject: (Introducing kids to computational thinking with Python)
[#]: via: (https://opensource.com/article/19/2/break-down-stereotypes-python)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
@ -11,40 +11,37 @@
利用 Python 引导孩子的计算机思维
========================
> 编程可以给低收入家庭的学生提供足够的技能信心和知识,进而让他们摆脱因为家庭收入低带来的经济和社会地位上的劣势。
![](https://img.linux.net.cn/data/attachment/album/201907/17/231228k3t9skntnlst59h9.jpg)
尽管暑假期间底特律公共图书馆的[帕克曼分部][1]挤满了无聊的孩子并且占用了所有的电脑,图书馆工作人员并不觉得这会是个问题,反而更多是一个机会。他们成立一个名为 [Parkman Coders][2] 的编程社团,社团以 [Qumisha Goss][3] 为首,她是图书管理员,也负责利用 Python 的魔力引导弱势儿童的计算机思维。
四年前 [Qumisha Goss][3] 刚发起 Parkman Coders 计划的时候, “Q”(代表她)并不是太懂编程。之后她通过努力成为图书馆里教学和技术方面的专家和树莓派认证讲师。
社团最开始采用 [Scratch][4] 教学,但很快学生就对这种图形化的积木式编程感到乏味,他们觉得这就是个“婴儿玩具”。Q 坦言,“我们意识到是时候需要在课程内容这方面做些改变了,如果为了维持课程内容对初学者的友好性而继续选择 Scratch 教学,这无疑会影响孩子们后期继续保持对编程的关注。”因此,她开始教授孩子们 Python。
Q 是在 [Code.org][5] 平台玩地牢骷髅怪物这个关卡的时候第一次接触到 Python。她最开始是通过 [Python Programming: An Introduction to Computer Science][6][Python for Kids][7] 这两本书学习 Python。她也推荐 [Automate the Boring Stuff with Python][8][Lauren Ipsum: A Story about Computer Science and Other Improbable Things][9] 这两本书。
### 建立一个基于树莓派的创客空间
Q 决定使用[树莓派][10]电脑来避免学生可能会因为自己的不当操作对图书馆的电脑造成损害,而且这些电脑因为便携性等问题也不方便用来构建组成一个创客空间。[树莓派][10]的购买价格加上它的灵活性和便携性包括生态圈里面的一些适合教学的自由免费软件,让大家更能感受到她的决策的可行性和可靠性。
虽然图书馆发起 [Parkman Coders][2] 社区计划的本意是通过努力创造一个吸引孩子们的学习空间,进而维持图书馆的平和,但社区发展的很快,很受大家欢迎,以至于这座建立于 1921 的大楼的空间,电脑和插座都不够用了。他们最开始是 20 个孩子共享 10 台[树莓派][10]来进行授课,但后来图书馆陆续收到了来自个人和公司比如 Microsoft、4H和 Detroit Public Library Foundation 的资金援助从而能够购买更多设备以支撑社区的进一步壮大发展。
目前,每节课程大概有 40 个孩子参加,而且图书馆也有了足够的[树莓派][10]让参与者人手一台设备甚至还可以送出去一些。鉴于不少 [Parkman Coders][2] 的参与者来自于低收入家庭,图书馆也能提供别人捐赠的 Chromebooks 给他们使用
Q 说,“当孩子们的表现可以证明他们能够很好的使用[树莓派][10]或者 [Microbit][11] 而且定期来参加课程,我们也会提供设备允许他们可以带回家练习。但即便这样也还是会遇到很多问题,比如他们在家无法访问网络或者没有显示器、键盘、鼠标等外设。”
### 利用 Python 学习生存技能,打破束缚
Q 说,“我认为教授孩子们计算机科学的主要目的是让他们学会批判性思考和解决问题的能力。我希望随着孩子们长大成人,不管他们选择在哪个领域继续发展他们的未来,这些经验教训都会一直伴随他们成长。此外,我也希望这个课程能够激发孩子们对创造的自豪感。能够清楚的意识到‘这是我做的’是一种很强烈、很有用的感受。而且一旦孩子们越早能够有这种成功的体验,我相信未来的路上他们都会满怀热情迎接新的挑战而不是逃避。”
她继续分享道,“在学习编程的过程中,你不得不对单词的拼写和大小写高度警惕。受限于孩子年龄,有时候阅读认知会是个大问题。为了确保课程受众的包容性,我们会在授课过程中大声拼读,同样我们也会极力鼓励孩子们大声说出他们不知道的或者不能正确拼写的单词,以便我们纠正。”
Q 也会尝试尽力去给需要帮助的孩子们更多的关注。她解释道,“如果我确认有孩子遇到困难不能跟上我们的授课进度,我们会尝试在课下时间安排老师辅导帮助他,但还是会允许他们继续参加编程。我们想帮助到他们而不是让他们因为挫败而沮丧的不参与进来。”
最重要的是,[Parkman Coders][2] 计划所追求的是能够帮助每个孩子认识到每个人都会有独特的技能和能力。参与进来的大部分孩子都是非裔美国人一半是女孩。Q 直言,“我们所生活在的这个世界,我们成长的过程中,伴随着各种各种的社会偏见,这些都常常会限制我们对自己所能达到的成就的准确认知。”她坚信孩子们需要一个没有偏见的空间,“他们可以尝试很多新事物,不会因为担心犯错责骂而束手束脚,可以放心大胆的去求知探索。”
Q 和 [Parkman Coders][2] 计划所营造的环境氛围能够帮助到参与者摆脱低家庭收入带来的劣势。如果说社区能够发展壮大到今天的规模真有什么独特秘诀的话那大概就是Q 解释道,“确保你有一个令人舒适的空间,充满了理解与宽容,这样大家才会被吸引过来。让来的人不忘初心,做好传道受业解惑的准备;当大家参与进来并感觉到充实愉悦,自然而然会想要留下来。”
@ -56,7 +53,7 @@ via: https://opensource.com/article/19/2/break-down-stereotypes-python
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[WangYueScream](https://github.com/WangYueScream)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11115-1.html)
[#]: subject: (How To Delete A Repository And GPG Key In Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-delete-a-repository-and-gpg-key-in-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
如何在 Ubuntu 中删除仓库及其 GPG 密钥
======
![Delete A Repository And GPG Key In Ubuntu][1]
前几天我们讨论了如何在基于 RPM 和 DEB 的系统中[列出已安装的仓库][2]。今天,我们将学习如何在 Ubuntu 中删除仓库及其 GPG 密钥。对于不知道仓库的人,仓库(简称 repo是开发人员存储软件包的地方。仓库的软件包经过全面测试并由 Ubuntu 开发人员专门为每个版本构建。用户可以使用 Apt 包管理器在他们的 Ubuntu 系统上下载和安装这些包。Ubuntu 有四个官方仓库,即 Main、Universe、Restricted 和 Multiverse。
除了官方仓库外,还有许多由开发人员(或软件包维护人员)维护的非官方仓库。非官方仓库通常有官方仓库中不可用的包。所有包都由包维护者用一对密钥(公钥和私钥)签名。如你所知,公钥是发给用户的,私钥必须保密。每当你在源列表中添加新的仓库时,如果 Apt 包管理器想要信任新添加的仓库,你还应该添加仓库密钥(公钥)。使用仓库密钥,你可以确保从正确的人那里获得包。到这里希望你对软件仓库和仓库密钥有了一个基本的了解。现在让我们继续看看如果在 Ubuntu 系统中不再需要仓库及其密钥,那么该如何删除它。
### 在 Ubuntu 中删除仓库
每当使用 `add-apt-repository` 命令添加仓库时,它都将保存在 `/etc/apt/sources.list` 中。
要从 Ubuntu 及其衍生版中删除软件仓库,只需打开 `/etc/apt/sources.list` 文件并查找仓库名字并将其删除即可。
```
$ sudo nano /etc/apt/sources.list
```
正如你在下面的截图中看到的,我在我的 Ubuntu 系统中添加了 [Oracle Virtualbox][3] 仓库。
![][4]
*virtualbox 仓库*
要删除此仓库,只需删除该条目即可。保存并关闭文件。
如果你已添加 PPA 仓库,请查看 `/etc/apt/sources.list.d/` 目录并删除相应的条目。
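如果想用命令行完成这种删除,也可以用 `sed` 直接删掉匹配的仓库行。下面是一个在临时副本上演示的假设性示例(示例内容为虚构,实际修改 `/etc/apt/sources.list` 前请先备份):

```shell
# 构造一份示例 sources.list 副本
cat > /tmp/sources.list.demo <<'EOF'
deb http://archive.ubuntu.com/ubuntu bionic main
deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian bionic contrib
EOF

# 删除包含 virtualbox 的仓库行
sed -i '/virtualbox/d' /tmp/sources.list.demo

cat /tmp/sources.list.demo
```

对真实文件操作时,把路径换成 `/etc/apt/sources.list` 并加上 `sudo`,例如 `sudo sed -i.bak '/virtualbox/d' /etc/apt/sources.list`(`-i.bak` 会保留一份备份)。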
或者,你可以使用 `add-apt-repository` 命令删除仓库。例如,我要删除 [Systemback][5] 仓库,如下所示。
```
$ sudo add-apt-repository -r ppa:nemh/systemback
```
最后,使用以下命令更新软件源列表:
```
$ sudo apt update
```
### 删除仓库密钥
我们使用 `apt-key` 命令添加仓库密钥。首先,让我们使用命令列出添加的密钥:
```
$ sudo apt-key list
```
此命令将列出所有添加的仓库密钥。
```
/etc/apt/trusted.gpg
--------------------
pub rsa1024 2010-10-31 [SC]
3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B
uid [ unknown] Launchpad PPA for Kendek
pub rsa4096 2016-04-22 [SC]
B9F8 D658 297A F3EF C18D 5CDF A2F6 83C5 2980 AECF
uid [ unknown] Oracle Corporation (VirtualBox archive signing key) <[email protected]>
sub rsa4096 2016-04-22 [E]
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32
uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <[email protected]>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg
------------------------------------------------------
pub rsa4096 2018-09-17 [SC]
F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C
uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <[email protected]>
```
正如你在上面的输出中所看到的,那串长长的(40 个字符的)十六进制值就是仓库密钥。如果你希望 APT 包管理器停止信任该密钥,只需使用以下命令将其删除:
```
$ sudo apt-key del "3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B"
```
或者,仅指定最后 8 个字符:
```
$ sudo apt-key del 73C62A1B
```
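这里的“最后 8 个字符”指的是去掉空格后指纹的末尾 8 位。下面这个小片段演示了如何从指纹字符串中提取它(补充示例,指纹取自上文 `apt-key list` 的输出):

```shell
# 40 个字符的密钥指纹,来自上面 apt-key list 输出中的 Launchpad PPA 密钥
fingerprint="3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B"

# 去掉空格后取最后 8 个字符,即 apt-key del 可接受的短 ID
short_id=$(printf '%s' "$fingerprint" | tr -d ' ' | tail -c 8)
echo "$short_id"   # 输出:73C62A1B
```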
完成!仓库密钥已被删除。运行以下命令更新仓库列表:
```
$ sudo apt update
```
资源:
* [软件仓库 Ubuntu 社区 Wiki][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-delete-a-repository-and-gpg-key-in-ubuntu/
作者:[sk][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Delete-a-repository-in-ubuntu-720x340.png
[2]: https://www.ostechnix.com/find-list-installed-repositories-commandline-linux/
[3]: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/virtualbox-repository.png
[5]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/
[6]: https://help.ubuntu.com/community/Repositories/Ubuntu

View File

@ -1,26 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (vizv)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11113-1.html)
[#]: subject: (Sysadmin vs SRE: What's the difference?)
[#]: via: (https://opensource.com/article/19/7/sysadmins-vs-sres)
[#]: author: (Vince Power https://opensource.com/users/vincepower/users/craig5/users/dawnparzych/users/penglish)
系统管理员与网站可靠性工程师(SRE)对比:区别在哪儿?
======
> 系统管理员和网站可靠性工程师SRE下同对于任何组织来讲都很重要。本篇将介绍下两者的不同之处。
![](https://img.linux.net.cn/data/attachment/album/201907/17/214505qgk19kjuvzb2m1m4.jpg)
在 IT 行业成为多面手或是专家的争议一直存在。99% 的传统系统管理员都被归到了多面手这类。<ruby>[网站可靠性工程师][2]<rt>site reliability engineer</rt></ruby>SRE的角色则更加专精并且在如 Google 般有着一定规模的头部公司中对其的需求不断增加。但总的来说这两者对于跑着应用的基础设施有着同样的目标:为应用的消费者提供良好的体验。然而两者的出发点却截然不同。
### 系统管理员:中立善良的化身
系统管理员一般都是从基础的桌面或网络支持成长过来的,并一路习得大多数系统管理员都会掌握的广泛的技能。此时这些系统管理员会对他们所负责的系统和应用了如指掌。他们会知道一号服务器上的应用每隔一个星期二就需要重启一次,或是九号服务器周三会静默地崩溃。他们会对服务器的监视作出微调以忽略无关紧要的信息,尽管那个被标记为<ruby>致命<rt>fatal</rt></ruby>的错误信息每个月第三个周日都会显示。
总的来讲,系统管理员了解如何照料那些跑着你核心业务的服务器。这些系统管理员已经成长到开始使用自动化工具去处理所有归他们管的服务器上的例行任务。他们虽然喜欢使用模板、<ruby>黄金镜像<rt>golden images</rt></ruby>、以及标准,但同时也有着足够的灵活度去修改一个服务器上的参数以解决错误,并注释为什么那个服务器的配置与众不同。
尽管系统管理员很伟大,但他们也有着一些怪癖。其中一项就是没有他们神圣的授权你永远也获取不了系统的 root 访问权限,另一项则是任何不是出于他们的主意的变更都要在文档中被记录为应用提供方的要求,并仍然需要再次核对。
他们所管理的服务器是他们的地盘,没有人可以随意干涉。
@ -28,15 +30,15 @@
与成为系统管理员的道路相反,从开发背景和从系统管理员背景成长为 SRE 的可能性相近。SRE 的职位出现的时长与应用开发环境的生命周期相近。
随着一个组织的发展而引入的类似于[持续集成][4]和[持续发布][5](CI/CD)的 [DevOps][3] 概念,通常会出现技能空缺,以让这些<ruby>不可变<rt>immutable</rt></ruby>的应用部署到多个环境并随着业务需求进行扩展。这将是 SRE 的舞台。的确,一个系统管理员可以学习额外的工具,但大体上把它变成一个全职的职位更容易跟得上发展,一个专精的专家更有意义。
SRE 使用如<ruby>[代码即基础设施][6]<rt>infrastructure-as-code</rt></ruby>的概念去制作模板,然后调用它们来部署用以运行应用的环境,并以使用一键完整重现每个应用和它们的环境作为目标。因此会出现这样的情况:测试环境中一号服务器里的一号应用的二进制文件与生产环境中十五号服务器的完全一致,仅环境相关的变量如密码和数据库链接字串有所不同。
SRE 也会在配置发生改变时完全销毁一个环境并重新构建它。对于任何系统他们不带一点感情。每个系统只是个被打了标记和安排了生命周期的数字而已,甚至例行的服务器补丁也要重新部署整个<ruby>应用栈<rt>application stack</rt></ruby>
### 总结
对于一些情况,尤其是运维一些大型的基于 DevOps 的环境时,一个 SRE 所能提供的用于处理各种规模业务的专业技能当然更具优势。但每次他们在运气不好走入死胡同时,都会去寻求他的系统管理员友人或是[来自地狱的混蛋运维(BOFH)][7],得到他那身经百战的故障排除技能,和那些用于给组织提供价值的丰富经验的帮助。
--------------------------------------------------------------------------------
@ -45,7 +47,7 @@ via: https://opensource.com/article/19/7/sysadmins-vs-sres
作者:[Vince Power][a]
选题:[lujun9972][b]
译者:[vizv](https://github.com/vizv)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,178 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11120-1.html)
[#]: subject: (32-bit life support: Cross-compiling with GCC)
[#]: via: (https://opensource.com/article/19/7/cross-compiling-gcc)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
32 位支持:使用 GCC 交叉编译
======
> 使用 GCC 在单一的构建机器上来为不同的 CPU 架构交叉编译二进制文件。
![](https://img.linux.net.cn/data/attachment/album/201907/19/054242nwhludz9tm2lwd8t.jpg)
如果你是一个开发者,要创建二进制软件包,像 RPM、DEB、Flatpak 或 Snap 软件包,你不得不为各种不同的目标平台编译代码。典型的编译目标包括 32 位和 64 位的 x86 和 ARM。你可以在不同的物理或虚拟机器上完成你的构建,但这需要你维护好几个系统。作为代替,你可以使用 GNU 编译器集合([GCC][2])来交叉编译,在单一的构建机器上为几个不同的 CPU 架构产生二进制文件。
假设你有一个想要交叉编译的简单的掷骰子游戏。在大多数系统上,用 C 语言来写这个游戏相对简单;出于增加一点现实复杂性的目的,我用 C++ 语言写了这个示例,所以程序依赖了一些 C 语言中没有的东西(具体来说就是 `iostream`)。
```
#include <iostream>
#include <cstdlib>
using namespace std;
void lose (int c);
void win (int c);
void draw ();
int main() {
  int i;

  do {
    cout << "Pick a number between 1 and 20: \n";
    cin >> i;

    int c = rand() % 21;

    if (i > 20) lose(c);
    else if (i < c) lose(c);
    else if (i > c) win(c);
    else draw();
  } while (1 == 1);
}

void lose(int c)
{
  cout << "You lose! Computer rolled " << c << "\n";
}

void win(int c)
{
  cout << "You win!! Computer rolled " << c << "\n";
}

void draw()
{
  cout << "What are the chances. You tied. Try again, I dare you! \n";
}
```
在你的系统上使用 `g++` 命令编译它:
```
$ g++ dice.cpp -o dice
```
然后,运行它来确认其工作:
```
$ ./dice
Pick a number between 1 and 20:
[...]
```
你可以使用 `file` 命令来查看你刚刚生成的二进制文件的类型:
```
$ file ./dice
dice: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked (uses shared libs), for GNU/Linux 5.1.15, not stripped
```
同样重要的是,使用 `ldd` 命令来查看它链接了哪些库:
```
$ ldd dice
linux-vdso.so.1 => (0x00007ffe0d1dc000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fce8410e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fce83d4f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fce83a52000)
/lib64/ld-linux-x86-64.so.2 (0x00007fce84449000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fce8383c000)
```
从这些测试中,你已经确认了两件事:你刚刚运行的二进制文件是 64 位的,并且它链接的是 64 位库。
这意味着,为实现 32 位交叉编译,你必需告诉 `g++` 来:
1. 产生一个 32 位二进制文件
2. 链接 32 位库,而不是 64 位库
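在动手之前,可以先用 `getconf` 确认当前系统用户空间的原生位数(补充的小例子,非原文步骤):

```shell
# 输出 32 或 64,表示当前系统的原生字长
getconf LONG_BIT
```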
### 设置你的开发环境
为编译成 32 位二进制,你需要在你的系统上安装 32 位的库和头文件。如果你运行一个纯 64 位系统,那么,你没有 32 位的库或头文件,并且需要安装一个基础集合。最起码,你需要 C 和 C++ 库(`glibc` 和 `libstdc++`)以及 GCC 库(`libgcc`)的 32 位版本。这些软件包的名称可能在每个发行版中不同。在 Slackware 系统上,一个纯 64 位的带有 32 位兼容的发行版,可以从 [Alien BOB][3] 提供的 `multilib` 软件包中获得。在 Fedora、CentOS 和 RHEL 系统上:
```
$ yum install libstdc++-*.i686
$ yum install glibc-*.i686
$ yum install libgcc.i686
```
不管你正在使用什么系统,你同样必须安装一些你工程使用的 32 位库。例如,如果你在你的工程中包含 `yaml-cpp`,那么,在编译工程前,你必需安装 `yaml-cpp` 的 32 位版本,或者,在很多系统上,安装 `yaml-cpp` 的开发软件包(例如,在 Fedora 系统上的 `yaml-cpp-devel`)。
一旦这些处理好了,编译是相当简单的:
```
$ g++ -m32 dice.cpp -o dice32 -L /usr/lib -march=i686
```
`-m32` 标志告诉 GCC 以 32 位模式编译。`-march=i686` 选项进一步定义来使用哪种最优化类型(参考 `info gcc` 了解选项列表)。`-L` 标志设置你希望 GCC 来链接的库的路径。对于 32 位来说通常是 `/usr/lib`,不过,这依赖于你的系统是如何设置的,它可以是 `/usr/lib32`,甚至 `/opt/usr/lib`,或者任何你知道存放你的 32 位库的地方。
在代码编译后,查看你的构建的证据:
```
$ file ./dice32
dice: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs) [...]
```
接着,当然, `ldd ./dice32` 也会指向你的 32 位库。
### 不同的架构
在相同处理器家族的 64 位系统上编译 32 位软件,可以让 GCC 对如何编译代码做出很多假设。如果你需要为完全不同的处理器编译,你必须安装适当的交叉构建实用程序。安装哪种实用程序取决于你正在编译的东西。这个过程比为相同的 CPU 家族编译更复杂一点。
当你为相同处理器家族交叉编译时,你可以期待找到与 32 位库集的相同的 64 位库集,因为你的 Linux 发行版是同时维护这二者的。当为一个完全不同的架构编译时,你可能不得不穷追你的代码所需要的库。你需要的版本可能不在你的发行版的存储库中,因为你的发行版可能不为你的目标系统提供软件包,或者它不在容易到达的位置提供所有的软件包。如果你正在编译的代码是你写的,那么你可能非常清楚它的依赖关系是什么,并清楚在哪里找到它们。如果代码是你下载的,并需要编译,那么你可能不熟悉它的要求。在这种情况下,研究正确编译代码需要什么(它们通常被列在 `README``INSTALL` 文件中,当然也出现在源文件代码自身之中),然后收集需要的组件。
例如,如果你需要为 ARM 编译 C 代码,你必须首先在 Fedora 或 RHEL 上安装 `gcc-arm-linux-gnu`32 位)或 `gcc-aarch64-linux-gnu`64 位);或者,在 Ubuntu 上安装 `arm-linux-gnueabi-gcc``binutils-arm-linux-gnueabi`。这提供你需要用来构建(至少)一个简单的 C 程序的命令和库。此外,你需要你的代码使用的任何库。你可以在惯常的位置(大多数系统上在 `/usr/include`)放置头文件,或者,你可以放置它们在一个你选择的目录,并使用 `-I` 选项将 GCC 指向它。
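安装好之后,可以用 `command -v` 快速检查需要的交叉编译器是否已经在 `PATH` 中(假设性示例,编译器名称以上文提到的 Fedora/RHEL 和 Ubuntu 软件包为准):

```shell
# 逐个检查常见的交叉编译器是否可用
for cc in arm-linux-gnu-g++ aarch64-linux-gnu-g++ arm-linux-gnueabi-gcc; do
  if command -v "$cc" >/dev/null 2>&1; then
    echo "已安装:$cc"
  else
    echo "未安装:$cc"
  fi
done
```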
当编译时,不使用标准的 `gcc``g++` 命令。作为代替,使用你安装的 GCC 实用程序。例如:
```
$ arm-linux-gnu-g++ dice.cpp \
-I/home/seth/src/crossbuild/arm/cpp \
-o armdice.bin
```
验证你构建的内容:
```
$ file armdice.bin
armdice.bin: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV) [...]
```
### 库和可交付结果
这是一个如何使用交叉编译的简单示例。在现实中,你的源代码可能会产生不止一个二进制文件。虽然你可以手动管理这些,但这样做往往没有什么好的理由。在我接下来的文章中,我将介绍 GNU 自动工具,它完成了使你的代码可移植的大部分工作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/cross-compiling-gcc
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Wratchet set tools)
[2]: https://gcc.gnu.org/
[3]: http://www.slackware.com/~alien/multilib/

View File

@ -1,28 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (0x996)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11118-1.html)
[#]: subject: (Test 200+ Linux And Unix Operating Systems Online For Free)
[#]: via: (https://www.ostechnix.com/test-100-linux-and-unix-operating-systems-online-for-free/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
在线试用 200 多种 Linux Unix 操作系统
======
![DistroTest——在线试用200多种Linux和Unix操作系统][1]
不久前我们介绍过[OSBoxes][2],该网站提供了一系列免费且开箱即用的 Linux 和 Unix 虚拟机。你可以在你的 Linux 系统中下载这些虚拟机并用 VirtualBox 或 VMWare workstation 试用。今天,我偶然发现一个名叫 “DistroTest” 的类似服务。与 OSBoxes 不同之处在于 DistroTest 让你免费试用现场版 Linux 和 Unix 操作系统。你可以在线试用 200 多种 Linux 和 Unix 操作系统而无需在本地安装它们。只要打开该网站,选择你需要的 Linux/Unix 发行版,然后开始试用!
两个名为 Klemann Andy 和 Forster Tobias 的好心人用 Qemu 在 Debian 上运行了这项网络服务。这里列出的公开发行版在使用上没有任何限制。你可以象使用本地系统一样使用系统的所有功能。你可以安装和卸载软件。你可以测试安装的程序甚至删除或格式化硬盘删除系统文件。简而言之DistroTest让喜欢尝试不同发行版的的人自行决定
* 最适合他们的发行版
* 想要哪种图形界面
* 他们可以选择哪些配置
本文撰写之时DistroTest 提供了 227 种操作系统的 711 个版本。我已经使用 Linux 很多年,但我从未听说过这里列出的一些发行版。说实话我甚至不知道 Linux 操作系统有如此之多的版本。
DistroTest 网站提供的 Linux 发行版的列表如下。(LCTT 译注:其中也包括部分非 Linux 的操作系统如 FreeBSD FreeDOS或是分区工具如 Gparted
* 4mLinux
* AbsoluteLinux
@ -186,60 +186,50 @@ DistroTest网站提供的Linux发行版的列表如下。译者注其中
* Zevenet
* Zorin OS
### 如何使用?
要试用任何操作系统,点击下面的链接:<https://distrotest.net/>
在这个网站,你会看到可用的操作系统列表。单击你想了解的发行版名称即可。
![1][4]
DistroTest 试用 200 多种 Linux 和 Unix 操作系统
本文中我会试用 Arch Linux。
单击发行版链接后,在跳转到的页面单击 “System start” 按钮即可启动所选操作系统。
![1][5]
此现场版操作系统会在新浏览器窗口中启动。你可以通过内建的 noVNC viewer 访问它。请在浏览器中启用/允许 DistroTest 网站的弹出窗口,否则无法看到弹出的 noVNC 窗口。
按回车启动现场版系统。
![1][6]
这就是 Arch Linux 现场版系统:
![1][7]
你可以免费使用这个系统 1 小时。你可以试用该现场版操作系统、安装应用、卸载应用、删除或修改系统文件、测试配置或脚本。每次关机后,一切都会恢复成默认配置。
一旦试用结束,回到 DistroTest 页面并停止你试用的系统。如果你不想启用 DistroTest 页面的弹出窗口,用你本地系统安装的任意 VNC 客户端也可以。VNC 登录信息可在同一页面找到。
![1][8]
DistroTest 服务对两类用户比较实用:想在线试用 Linux/Unix 系统,或是没有喜欢的操作系统现场版 ISO 镜像文件的人。我在 4G 网络上测试的结果一切正常。
### 实际上,我没法在这个虚拟机里安装新软件
试用期间我注意到的一个问题是这个虚拟机没有联网。除了本地环回接口之外没有其他网络接口。没有联网也没有本地镜像源的情况下我没法下载和安装新软件。我不知道为何网站声称可以安装软件。也许在这点上我遗漏了什么。我在 DistroTest 上能做的只是看看现成的系统,试用现场版而不能安装任何软件。
* * *
推荐阅读:
* [免费在线学习和练习 Linux命令][9]
* [在浏览器中运行 Linux 和其他操作系统][10]
* * *
暂时就这样了。我不知道 DistroTest 团队如何设法托管了这么多操作系统。我肯定这会花不少时间。这的确是件值得称赞的工作。我非常感激项目成员的无私行为。荣誉归于你们。加油!
--------------------------------------------------------------------------------
@ -248,7 +238,7 @@ via: https://www.ostechnix.com/test-100-linux-and-unix-operating-systems-online-
作者:[sk][a]
选题:[lujun9972][b]
译者:[0x996](https://github.com/0x996)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11110-1.html)
[#]: subject: (Excellent! Ubuntu LTS Users Will Now Get the Latest Nvidia Driver Updates [No PPA Needed Anymore])
[#]: via: (https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
好消息Ubuntu LTS 用户不需要 PPA 也可以获得最新的 Nvidia 驱动更新
======
> 要在 Ubuntu LTS 上获得最新的 Nvidia 驱动程序,你不必再使用 PPA 了。最新的驱动程序现在将在 Ubuntu LTS 版本的存储库中提供。
![][1]
你可能已经注意到在 Ubuntu 上安装最新和最好的 Nvidia 二进制驱动程序更新的麻烦。
默认情况下Ubuntu 提供开源的 [Nvidia Nouveau 驱动程序][2],这有时会导致 Ubuntu 卡在启动屏幕上。
你也可以轻松地[在 Ubuntu 中安装专有的 Nvidia 驱动程序][3]。问题是默认 [Ubuntu 存储库][4]中的 Nvidia 驱动程序不是最新的。为此,几年前 [Ubuntu 引入了一个专门的 PPA][5]以解决这个问题。
[使用官方 PPA][6] 仍然是安装闭源图形驱动程序的一个不错的解决方法。但是,它绝对不是最方便的选择。
但是现在,Ubuntu 同意将最新的 Nvidia 驱动程序更新作为 SRU([Stable Release Updates][7],稳定版更新)的一部分提供。所以,你在使用 Ubuntu LTS 版本时也将拥有最新的 Nvidia 驱动程序了。
好吧,这意味着你不再需要在 Ubuntu LTS 版本上单独下载/安装 Nvidia 图形驱动程序。
就像你会获得浏览器或核心操作系统更新(或安全更新)的更新包一样,你将获得所需的 Nvidia 二进制驱动程序的更新包。
### 这个最新的 Nvidia 显卡驱动程序可靠吗?
SRU 字面上指的是 Ubuntu或基于 Ubuntu 的发行版)的稳定更新。因此,要获得最新的图形驱动程序,你应该等待它作为稳定更新释出,而不是选择预先发布的更新程序。
当然,没有人能保证它能在所有时间内都正常工作 —— 但安装起来比预先发布的更安全。
### 怎样获得最新的 Nvidia 驱动程序?
![Software Updates Nvidia][8]
你只需从软件更新选项中的其他驱动程序部分启用“使用 NVIDIA 驱动程序元数据包……”。
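如果你更喜欢命令行,通常也可以用 `ubuntu-drivers` 工具查看系统检测到的硬件和推荐驱动(假设性补充,非原文步骤;该工具由 `ubuntu-drivers-common` 包提供):

```shell
# 假设性示例:列出检测到的设备及推荐的驱动;没有该工具时给出提示
if command -v ubuntu-drivers >/dev/null 2>&1; then
  ubuntu-drivers devices
else
  echo "未找到 ubuntu-drivers(需要 ubuntu-drivers-common 包)"
fi
```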
最初,[The Linux Experiment][10] 通过视频分享了这个消息 —— 然后 Ubuntu 的官方 Twitter 作为公告重新推送了它。你可以观看下面的视频以获取更多详细信息:
- [https://youtu.be/NFdeWTQIpjo](https://youtu.be/NFdeWTQIpjo)
### 支持哪些 Ubuntu LTS 版本?
目前,该特性在 Ubuntu 18.04 LTS 上已经可用,它也将很快在 Ubuntu 16.04 LTS 上可用(随后的 LTS 版本也将陆续跟上)。
### 总结
现在你可以安装最新的 Nvidia 二进制驱动程序更新了,你认为这有何帮助?
如果你已经测试了预先发布的软件包,请在下面的评论中告诉我们你对此的看法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/nvidia-ubuntu-logo.png?resize=800%2C450&ssl=1
[2]: https://nouveau.freedesktop.org/wiki/
[3]: https://itsfoss.com/install-additional-drivers-ubuntu/
[4]: https://itsfoss.com/ubuntu-repositories/
[5]: https://itsfoss.com/ubuntu-official-ppa-graphics/
[6]: https://itsfoss.com/ppa-guide/
[7]: https://wiki.ubuntu.com/StableReleaseUpdates
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/software-updates-nvidia.jpg?fit=800%2C542&ssl=1
[9]: https://itsfoss.com/ubuntu-17-04-release-features/
[10]: https://twitter.com/thelinuxEXP

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11121-1.html)
[#]: subject: (Epic Games Backs Blender Foundation with $1.2m Epic MegaGrants)
[#]: via: (https://itsfoss.com/epic-games-blender-grant/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Epic Games 给予 Blender 基金会 120 万美元的拨款支持
======
[Epic MegaGrants][1] 是 [Epic Games][2] 的一个计划,用于支持游戏开发人员、企业专业人士、内容创建者和工具开发人员使用<ruby>虚幻引擎<rt>Unreal Engine</rt></ruby>UE做出神奇的作品或增强 3D 图形社区的开源功能。
作为该计划的一部分Epic Games 给予 [Blender 基金会][3] 120 万美元拨款以帮助改善他们的发展。如果你还不知道Blender 是[最好的开源视频编辑][4]之一,特别是以创建专业的 3D 计算机图形而闻名。
**Tim Sweeney**(Epic Games 的创始人兼首席执行官)这样评论这笔拨款:
> “开放式工具、库和平台对数字内容生态系统的未来至关重要。……Blender 是艺术社区持久的资源,我们的目标是确保其进步,造福所有创作者。”
即使这是个好消息,也有人对此不满意。在本文中,我们将看一下得到该拨款后的 Blender 基金会的计划,以及人们对此的看法。
### Blender 基金会的改进计划
![Image Credit : BlenderNation][5]
在[新闻稿][6]当中Blender 基金会提到了如何利用这笔资金以及用于何种目的:
> “Epic MegaGrant 将在未来三年内逐步交付,并将为 Blender 的<ruby>专业 Blender 发展计划<rt>Professionalizing Blender Development Initiative</rt></ruby>做出贡献。”
所以,没错,这笔财务帮助将以现金提供 —— 但是,它要分 3 年逐步交付。也就是说,要看到 Blender 基金会及其软件质量因此得到重大改进,我们还要等上很长一段时间。
以下是 **Ton Roosendaal**(Blender 基金会的创始人)关于这笔拨款将如何使用的说明:
> “Epic Games 的支持对 Blender 是一个重要里程碑”Blender 基金会的创始人兼董事长 Ton Roosendaal 说道。“由于这项拨款,我们将对我们的项目组织进行大量投入,以改善支持、协作和代码质量实践。因此,我们期望更多来自该行业的贡献者加入我们的项目。”
### 为什么人们对此不是很喜欢?
让我澄清一下,就我个人而言,我不喜欢用 Epic Game 的市场或客户端玩游戏。
由于各种原因(功能、隐私等),我更喜欢 [Steam][7] 而不是 Epic Games。
Epic Games 被称为游戏社区中的坏人,因为它最近几款游戏专属于其平台 —— 尽管很多人警告用户该平台上的隐私问题。
不仅如此Epic Games 的首席执行官在过去发过这样的推特:
> 安装 Linux 相当于人们不喜欢美国的政治趋势时就搬到加拿大。
>
> 不,我们必须为今天的自由而战,如今我们拥有自由。
>
> - Tim Sweeney(@TimSweeneyEpic)[2018年2月15日][8]
嗯,这并不直接暗示他讨厌 Linux 或者没有积极推动 Linux 的游戏开发 —— 但是只是因为很多历史情况,人们并不真正信任 Epic Games 的决策。所以,他们并不欣赏与 Blender 基金会的联系(即使这个财务帮助是积极的)。
这与财务帮助无关。但是Epic Games 缺乏良好的声誉(当然是主观的),因此,人们对此的看法是消极的。看看拨款公告后的一些推文:
> 希望不要走向排它……这可能会破坏你的声誉。
>
> - Ray(@Epicshadow1994)[2019年7月15日][9]
> 我对将来会变成什么样感到怀疑。EPIC 最近一直在采取敌对战术。
>
> - acrid Heartwood(@acrid_heartwood)[2019年7月15日][10]
### 总而言之
你仍然可以[通过 Lutris 在 Linux 上运行 Epic Games][11]但这是很单薄的非官方尝试。Epic Games 没有表示有兴趣正式支持该项目。
所以,很明显不是每个人都信任 Epic Games。因此这个消息带来了各种消极反应。
但是,这笔拨款肯定会帮助 Blender 基金会改善其组织和软件质量。
你怎么看待这件事?请在下面的评论中告诉我们你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/epic-games-blender-grant/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.unrealengine.com/en-US/megagrants
[2]: https://www.epicgames.com/store/en-US/
[3]: https://www.blender.org/
[4]: https://itsfoss.com/open-source-video-editors/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/epic-games-blender-megagrant.jpg?resize=800%2C450&ssl=1
[6]: https://www.blender.org/press/epic-games-supports-blender-foundation-with-1-2-million-epic-megagrant/
[7]: https://itsfoss.com/install-steam-ubuntu-linux/
[8]: https://twitter.com/TimSweeneyEpic/status/964284402741149698?ref_src=twsrc%5Etfw
[9]: https://twitter.com/Epicshadow1994/status/1150787326626263042?ref_src=twsrc%5Etfw
[10]: https://twitter.com/acrid_heartwood/status/1150789691979030528?ref_src=twsrc%5Etfw
[11]: https://linux.cn/article-10968-1.html


@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Excellent! Ubuntu LTS Users Will Now Get the Latest Nvidia Driver Updates [No PPA Needed Anymore])
[#]: via: (https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Excellent! Ubuntu LTS Users Will Now Get the Latest Nvidia Driver Updates [No PPA Needed Anymore]
======
_**Brief: To get the latest Nvidia drivers in Ubuntu LTS versions, you don't have to use a PPA anymore. The latest drivers will now be available in the repositories of the Ubuntu LTS versions.**_
![][1]
You might be aware of the troubles to install the latest and greatest Nvidia binary driver updates on Ubuntu.
By default, Ubuntu provides the open source [Nvidia Nouveau drivers][2], which sometimes result in Ubuntu being stuck at the boot screen.
You can also [install the proprietary Nvidia driver in Ubuntu][3] easily. The problem is that the Nvidia drivers in the default [Ubuntu repositories][4] are not the latest ones. To solve this problem, [Ubuntu introduced a dedicated PPA][5] a few years back.
[Using the official PPA][6] is still a decent workaround for installing the closed source graphics driver. However, it is definitely not the most convenient option.
But now, Ubuntu has agreed to include the latest Nvidia driver updates as part of SRU ([StableReleaseUpdates][7]). So, you will have Nvidia drivers baked into Ubuntu LTS versions.
Well, this means that you no longer have to separately download/install the Nvidia graphics drivers on Ubuntu LTS versions.
Just as you get updates for your browser or the core OS (or security updates), you will get the required Nvidia binary driver update packages.
### Can We Rely on the Latest Nvidia Graphics Driver?
SRU literally refers to stable updates for Ubuntu (or Ubuntu-based distros). So, instead of opting for the pre-released updates in order to get the latest graphics driver, you should wait for it to drop as a stable update.
Of course, no one can guarantee that it will work 100% of the time, but it will be way safer to install than the pre-released ones.
### How Can I Get the latest Nvidia drivers?
![Software Updates Nvidia][8]
You just have to enable “Using NVIDIA driver meta package….” from the additional drivers section in the software update option.
Originally, [The Linux Experiment][10] shared this news through a video, which Ubuntu's official Twitter handle then re-tweeted as an announcement. You can watch the video below to get more details on it:
### Which Ubuntu LTS Versions are Supported?
For now, Ubuntu 18.04 LTS supports this out of the box. It will soon be available for Ubuntu 16.04 LTS (and later LTS versions will follow).
**Wrapping Up**
Now that you can install the latest Nvidia binary driver updates, how do you think it will help you?
If you have tested a pre-released package, let us know your thoughts on that in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/nvidia-ubuntu-logo.png?resize=800%2C450&ssl=1
[2]: https://nouveau.freedesktop.org/wiki/
[3]: https://itsfoss.com/install-additional-drivers-ubuntu/
[4]: https://itsfoss.com/ubuntu-repositories/
[5]: https://itsfoss.com/ubuntu-official-ppa-graphics/
[6]: https://itsfoss.com/ppa-guide/
[7]: https://wiki.ubuntu.com/StableReleaseUpdates
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/software-updates-nvidia.jpg?fit=800%2C542&ssl=1
[9]: https://itsfoss.com/ubuntu-17-04-release-features/
[10]: https://twitter.com/thelinuxEXP


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,11 +1,11 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lessons in Vendor Lock-in: Google and Huawei)
[#]: via: (https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei)
[#]: author: (Kyle Rankin https://www.linuxjournal.com/users/kyle-rankin)
[#]: collector: "lujun9972"
[#]: translator: "acyanbird "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Lessons in Vendor Lock-in: Google and Huawei"
[#]: via: "https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei"
[#]: author: "Kyle Rankin https://www.linuxjournal.com/users/kyle-rankin"
Lessons in Vendor Lock-in: Google and Huawei
======
@ -43,7 +43,6 @@ What's more, the Google Apps suite isn't just a convenient way to load Gmail or
Without access to these OS updates, Huawei now will have to decide whether to create its own LineageOS-style Android fork or a whole new phone OS of its own. In either case, it will have to abandon the Google Play Store ecosystem and use F-Droid-style app repositories, or if it goes 100% alone, it will need to create a completely new app ecosystem. If its engineers planned for this situation, then they likely are working on this plan right now; otherwise, they are all presumably scrambling to address an event that "should never happen". Here's hoping that if you find yourself in a similar case of vendor lock-in with an overseas company that's too big to fail, you never get caught in the middle of a trade war.
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/lessons-vendor-lock-google-and-huawei


@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Server hardware makers shift production out of China)
[#]: via: (https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Server hardware makers shift production out of China
======
Tariffs on Chinese products and unstable U.S./China relations cause server makers to speed up their move out of China.
![Etereuti \(CC0\)][1]
The supply chain of vendors that build servers and network communication devices is accelerating its shift of production out of China to Taiwan and North America, along with other nations not subject to the trade war between the U.S. and China.
Last May, the Trump Administration levied tariffs on a number of imported Chinese goods, computer components among them. The tariffs ranged from 10-25%. Consumers were hit hardest, since they are more price sensitive than IT buyers. PC World said the [average laptop price could rise by $120][2] just for the tariffs.
But since the tariff was based on the value of the product, that means server hardware prices could skyrocket, since servers cost much more than PCs.
**[ Read also: [HPE's CEO lays out his technology vision][3] ]**
### Companies that are moving production out of China
The Taiwanese tech publication DigiTimes reported (article now locked behind a paywall) that Mitac Computing Technology, a server ODM, reactivated an old production line at Hsinchu Science Park (HSP) in Taiwan at the end of 2018 and restarted another for motherboard SMT process in March 2019. The company plans to establish one more SMT production line prior to the end of 2019.
It went on to say Mitac plans to produce all of its high-end U.S.-bound servers in Taiwan and is looking to move 30% of its overall server production lines back to Taiwan in the next three years.
Wiwynn, a cloud computing server subsidiary of Wistron, is primarily assembling its U.S.-bound servers in Mexico and has also recently established a production site in southern Taiwan per clients' requests.
Taiwan-based server chassis and assembly player AIC recently expanded the number of its factories in Taiwan to four and has been aggressively forming cooperation with its partners to expand its capacity. Many Taiwan-based component suppliers are also expanding their capacity in Taiwan.
**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]**
Several ODMs, such as Inventec, Wiwynn, Wistron, and Foxconn, all have plants in Mexico, while Quanta Computer has production lines in the U.S. Wiwynn also plans to open manufacturing facilities in eastern U.S.
“This is not something that just happened overnight, it's a process that started a few years ago. The tariffs just accelerated the desire of ODMs to do it,” said Ashish Nadkarni, group vice president for infrastructure systems, platforms and technologies at IDC. “Since [President] Trump has come into office there has been saber rattling about China and a trade war. There has also been a focus on margins.”
He added that component makers are definitely moving out of China to other parts of Asia, like Korea, the Philippines, and Vietnam.
### HPE, Dell and Lenovo should remain unaffected
The big three branded server makers are all largely immunized against the tariffs. HP Enterprise, Dell, and Lenovo all have U.S.-based assemblies and their contract manufacturers are in Taiwan, said Nadkarni. So, their costs should remain unaffected by tariffs.
The tariffs are not affecting sales as much as revenue for hyperscale whitebox vendors is being stressed. Hyperscale companies such as Amazon Web Services (AWS), Microsoft, Google, etc. have contracts with vendors such as Inspur and Super Micro, and if prices fluctuate, that's not their problem. The hardware vendor is expected to deliver at the agreed cost.
So margins, already paper thin, can't be passed on to the customer, unlike the aforementioned laptop example.
“It's not the end customers who are affected by it, it's the vendors who are affected by it. Certain things they can pass on, like component prices. But if the build value goes up, that's not the customer's problem, that's the vendor's problem,” said Nadkarni.
So while it may cost you more to buy a laptop as this trade fracas goes on, it shouldn't cost more to buy a server.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409784/server-hardware-makers-shift-production-out-of-china.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/asia_china_flag_grunge-stars_pixabay_etereuti-100763424-large.jpg
[2]: https://www.pcworld.com/article/3403405/trump-tariffs-on-chinese-goods-could-cost-you-120-more-for-notebook-pcs-say-dell-hp-and-cta.html
[3]: https://www.networkworld.com/article/3394879/hpe-s-ceo-lays-out-his-technology-vision.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How edge computing is driving a new era of CDN)
[#]: via: (https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
How edge computing is driving a new era of CDN
======
A CDN is an edge application and an edge application is a superset of what your CDN is doing.
![geralt \(CC0\)][1]
We are living in a hyperconnected world where anything can now be pushed to the cloud. The idea of having content located in one place, which could be useful from the management's perspective, is now redundant. Today, the users and data are omnipresent.
Customers' expectations have surged because of this evolution. There is now an increased expectation of high-quality service and a decrease in customers' patience. In the past, one could patiently wait 10 hours to download the content. But this is certainly not the scenario at the present time. Nowadays we have high expectations and high-performance requirements, but on the other hand, there are concerns as well. The internet is a weird place, with unpredictable asymmetric patterns, buffer bloat and a list of other [performance-related problems][2] that I wrote about on Network Insight. _[Disclaimer: the author is employed by Network Insight.]_
Also, the internet is growing at an accelerated rate. By the year 2020, the internet is expected to reach 1.5 gigabytes of traffic per day per person. In the coming times, the world of the Internet of Things (IoT) driven by objects will far supersede these data figures as well. For example, a connected airplane will generate around 5 terabytes of data per day. This spiraling level of volume requires a new approach to data management and forces us to re-think how we deliver applications.
[RELATED: How Notre Dame is going all in with Amazon's cloud][3]
Why? Because all this information cannot be processed by a single cloud or an on-premise location. Latency will always be a problem. For example, in virtual reality (VR) anything over 7 milliseconds will cause motion sickness. When decisions are required to be taken in real-time, you cannot send data to the cloud. You can, however, make use of edge computing and a multi-CDN design.
### Introducing edge computing and multi-CDN
The rate of cloud adoption, all-things-video, IoT and edge computing are bringing life back to CDNs and multi-CDN designs. Typically, a multi-CDN is an implementation pattern that includes more than one CDN vendor. The traffic direction is performed by using different metrics, whereby traffic can either be load balanced or failed across the different vendors.
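The steering decision described above can be sketched in a few lines; the vendor names and probe metrics below are illustrative, not real endpoints or any particular vendor's API:

```python
# Hypothetical sketch of multi-CDN traffic steering: direct each request
# to the healthy vendor with the best recent latency, failing over
# automatically when a vendor's health probe reports it down.

def pick_cdn(vendors):
    """Return the name of the healthy vendor with the lowest latency."""
    healthy = [v for v in vendors if v["healthy"]]
    if not healthy:
        raise RuntimeError("all CDN vendors failing")
    return min(healthy, key=lambda v: v["latency_ms"])["name"]

vendors = [
    {"name": "cdn-a", "latency_ms": 42, "healthy": True},
    {"name": "cdn-b", "latency_ms": 31, "healthy": True},
    {"name": "cdn-c", "latency_ms": 18, "healthy": False},  # failed probe
]
print(pick_cdn(vendors))  # cdn-b: fastest among the healthy vendors
```

A real deployment would feed this decision from DNS-level health checks or RUM data rather than a static list, but the load-balance-or-fail-over logic is the same.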
Edge computing moves actions as close as possible to the source. It is the point where the physical world interacts with the digital world. Logically, the decentralized approach of edge computing will not take over the centralized approach. They will be complementary to each other, so that the application can run at its peak level, depending on its position in the network.
For example, in IoT, saving battery life is crucial. Let's assume an IoT device can conduct the transaction in 10ms round trip time (RTT), instead of 100ms RTT. As a result, it can use 10 times less battery.
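The battery arithmetic above can be made concrete under the simplifying assumption that the radio's energy cost scales with how long it stays awake per transaction; the power figure here is purely illustrative:

```python
# Back-of-the-envelope version of the RTT/battery example: if radio
# energy scales with awake time per transaction, cutting RTT from
# 100 ms to 10 ms cuts energy by the same factor of ten.
def radio_energy(rtt_ms, awake_power_mw=100):  # power figure is illustrative
    return awake_power_mw * rtt_ms / 1000.0    # millijoules per transaction

cloud_cost = radio_energy(100)  # round trip to a distant cloud
edge_cost = radio_energy(10)    # round trip to a nearby edge node
print(cloud_cost / edge_cost)   # 10.0 -> ten times less battery per transaction
```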
### The internet, a performance bottleneck
The internet is designed on the principle that everyone can talk to everyone, thereby providing universal connectivity whether required or not. There has been a number of design changes with network address translation (NAT) being the biggest. However, essentially the role of the internet has remained the same in terms of connectivity, regardless of location.
With this type of connectivity model, distance is an important determinant of an application's performance. Users on the other side of the planet will suffer regardless of buffer sizes or other device optimizations. Long RTT is experienced as packets go back and forth before the actual data transmission. Although caching and traffic redirection are being used, only limited success has been achieved so far.
### The principles of application delivery
When transmission control protocol (TCP) starts, it thinks it is back in the late 1970s. It assumes that all services are on a local area network (LAN) and there is no packet loss. It then starts to work backward from there. Back when it was designed, we didn't have real-time traffic, such as voice and video that is latency and jitter sensitive.
Ideally, TCP was designed for ease of use and reliability, not to boost performance. You actually need to optimize the TCP stack. And this is why CDNs are very good at performing such tasks. For example, if a connection is received from a mobile phone, a CDN will start with the assumption that there is going to be high jitter and packet loss. This allows them to size the TCP window correctly so that it accurately matches the network conditions.
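One way to picture "sizing the TCP window to match network conditions" is the bandwidth-delay product (BDP): the amount of data in flight needed to keep a link full. The link figures below are illustrative:

```python
# Sketch of the window-sizing idea: the bandwidth-delay product is how
# much data "fits" in the pipe, so a TCP window smaller than the BDP
# leaves throughput on the table.
def bdp_bytes(bandwidth_mbps, rtt_ms):
    return int(bandwidth_mbps * 1e6 / 8 * rtt_ms / 1000)

# A mobile client on a high-RTT path needs a much larger window than a
# nearby wired client to keep the same link full.
print(bdp_bytes(10, 200))  # 250000 bytes for 10 Mbps at 200 ms RTT
print(bdp_bytes(10, 20))   #  25000 bytes for the same link at 20 ms RTT
```

This is why a CDN that assumes high RTT and loss for mobile clients tunes its initial window and retransmission behavior differently per connection.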
How do you magnify performance, and what options do you have? In a generic sense, many look to lowering the latency. However, with applications such as video streaming, latency does not tell you if the video is going to buffer. One can only assume that lower latency will lead to less buffering. In such a scenario, a measurement based on throughput is a far better performance metric, since it will tell you how fast an object will load.
We also have to consider the page load times. At the network level, it's the time to first byte (TTFB) and ping. However, these mechanisms don't tell you much about the user experience, as everything fits into one packet. Using ping will not inform you about the bandwidth problems.
And if a web page goes slower by 25% once packet loss exceeds 5%, and you are measuring time to the first byte, which is the 4th packet, what exactly can you learn? TTFB is comparable to an internet control message protocol (ICMP) request just one layer up the stack. It's good if something is broken, but not if there is an underperformance issue.
When you examine the history of TTFB measuring, you will find that it was deployed due to the lack of Real User Monitoring (RUM) measurements. Previously, TTFB was good at approximating how fast something was going to load, but we don't have to approximate anymore, as we can measure it with RUM. RUM consists of measurements from the end users. An example could be the metrics generated from a webpage that is being served to an actual user.
In conclusion, TTFB, ping and page load times are not sophisticated measurements. We should prefer RUM time measurements as much as we can. They provide a more accurate picture of the user experience. This is something that has become critical over the last decade.
Now we are living in a world of RUM, which lets us build our network based on what matters to the business users. All CDNs should aim for RUM measurements. For this, they may need to integrate with traffic management systems that intelligently measure what the end user really sees.
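A toy aggregation of RUM samples, with made-up timings, shows what a single synthetic probe hides: the tail of the distribution that real users actually experience.

```python
# Minimal sketch of why RUM beats a lone TTFB probe: aggregate what
# real users measured and look at the tail, not just one first byte.
def percentile(samples, p):
    """Nearest-rank percentile over a list of timings."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(p / 100 * (len(ordered) - 1))))
    return ordered[idx]

# Made-up page-load timings (ms) reported by real user sessions.
rum_load_ms = [180, 210, 240, 260, 300, 340, 420, 600, 900, 1500]
print(percentile(rum_load_ms, 50))  # the median experience
print(percentile(rum_load_ms, 95))  # the tail a single probe never shows
```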
### The need for multi-CDN
Primarily, the reasons one would opt for a multi-CDN environment are availability and performance. No single CDN can be the fastest to everyone and everywhere in the world. It is impossible due to the internet's connectivity model. However, combining the best of two or even more CDN providers will increase the performance.
A multi-CDN will give faster performance and higher availability than what can be achieved with a single CDN. A good design runs two availability zones. A better design runs two availability zones with a single CDN provider. A superior design, however, runs two availability zones in a multi-CDN environment.
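Under the simplifying assumption of independent failures, the availability argument can be worked through numerically; the 99.9% figures are illustrative, not vendor SLAs:

```python
# Worked version of the availability claim: with independent failures,
# the combined system is down only when every provider is down at once,
# so combined availability = 1 - product of each provider's downtime.
def combined_availability(*avail):
    down = 1.0
    for a in avail:
        down *= (1.0 - a)
    return 1.0 - down

single = 0.999                                  # one CDN at 99.9%
multi = combined_availability(0.999, 0.999)     # two independent CDNs
print(f"{single:.4%} vs {multi:.4%}")
```

Correlated failures (shared DNS, shared transit) erode this in practice, which is part of why the article favors spreading across both zones and vendors.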
### Edge applications will be the new norm
It's not that long ago that there was a transition from the heavy physical monolithic architecture to the agile cloud. But all that really happened was the transition from the physical appliance to a virtual cloud-based appliance. Maybe now is the time that we should ask, is this the future that we really want?
One of the main issues in introducing edge applications is the mindset. It is challenging to convince yourself or your peers that the infrastructure you have spent all your time working on and investing in is not the best way forward for your business. 
Although the cloud has created a big buzz, just because you migrate to the cloud does not mean that your applications will run faster. In fact, all you are really doing is abstracting the physical pieces of the architecture and paying someone else to manage it. The cloud has, however, opened the door for the edge application conversation. We have already taken the first step to the cloud and now it's time to make the second move.
Basically, when you think about an edge application, at its simplest it is a programmable CDN. A CDN is an edge application, and an edge application is a superset of what your CDN is doing. Edge applications denote cloud computing at the edge. It is a paradigm to distribute the application closer to the source for lower latency, additional resilience, and simplified infrastructure, where you still have control and privacy.
From an architectural point of view, an edge application provides more resilience than deploying centralized applications. In today's world of high expectations, resilience is a necessity for the continuity of business. Edge applications allow you to collapse the infrastructure into an architecture that is cheaper, simpler and more attentive to the application. The less you expend on infrastructure, the more time you can focus on what really matters to your business: the customer.
### An example of an edge architecture
In one example of an edge architecture, within each PoP every application has its own isolated JavaScript (JS) environment. JavaScript is great for security isolation, and the performance guarantees scale. The JavaScript is a dedicated isolated instance that executes the code at the edge.
Most likely, each JavaScript instance has its own virtual machine (VM). The sole operation that the VM is performing is the JavaScript runtime engine, and the only thing it is running is the customer's code. One could use Google's V8, the open-source high-performance JavaScript and WebAssembly engine.
Let's face it: if you continue building more PoPs, you will hit the law of diminishing returns. When it comes to applications such as mobile, you really are maxed out when throwing PoPs at the problem. So we need to find another solution.
In the coming times, we are going to witness a trend where most applications will become global, which means edge applications. It certainly makes little sense to place the whole application in one location when your users are everywhere else.
**This article is published as part of the IDG Contributor Network. [Want to Join?][4]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409027/how-edge-computing-is-driving-a-new-era-of-cdn.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/02/network-traffic-100707086-large.jpg
[2]: https://network-insight.net/2016/12/buffers-packet-drops/
[3]: https://www.networkworld.com/article/3014599/cloud-computing/how-notre-dame-is-going-all-in-with-amazon-s-cloud.html#tk.nww-fsb
[4]: https://www.networkworld.com/contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MPLS is hanging on in this SD-WAN world)
[#]: via: (https://www.networkworld.com/article/3409070/mpls-is-hanging-on-in-this-sd-wan-world.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
MPLS is hanging on in this SD-WAN world
======
The legacy networking protocol is still viable and there is no need to replace it in certain use cases, argues one cloud provider.
![jamesteohart][1]
The [SD-WAN networking market is booming and is expected to grow][2] to $17 billion by 2025, and no wonder. Software-defined wide-area networking eliminates the need for expensive routers and does all the network connectivity in the cloud.
Among its advantages is the support for secure cloud connectivity, one area where multiprotocol label switching (MPLS) falls short. MPLS is a data protocol from before the internet took off and while ideal for communications within the corporate firewall, it doesn't lend itself to cloud and outside communications well.
You would think that would seal MPLS's fate, but just like IPv6 is ever so slowly replacing IPv4, MPLS is hanging on and some IT pros are even increasing their investment.
**[ Related: [MPLS explained: What you need to know about multi-protocol label switching][3] ]**
Avant Communications, a cloud services provider that specializes in SD-WAN, recently issued a report entitled [State of Disruption][4] that found that 83% of enterprises that use or are familiar with MPLS plan to increase their MPLS network infrastructure this year, and 40% say they will “significantly increase” their use of it.
The report did not find one protocol winning at the expense of another. Just as 83% plan to use MPLS, 78% acknowledged plans to use SD-WAN in their corporate networks by the end of the year. Although SD-WAN is on the rise, MPLS is clearly not going away anytime soon. Both SD-WAN and MPLS can live together in harmony, adding value to each other.
“SD-WAN is the most disruptive technology in our study. It's not surprising that adoption of new technologies is slowest among the largest companies. The wave of SD-WAN disruption has not fully hit larger companies yet, but our belief is that it is moving quickly upmarket,” the report stated.
While SD-WAN is much better suited for the job of cloud connectivity, 50% of network traffic is still staying within the corporate firewall. So while SD-WAN can solve the connection issues, so can MPLS. And if you have it deployed, rip and replace makes no sense.
“MPLS continues to have a strong role in modern networks, and we expect that to continue,” the report stated. “This is especially true among larger enterprises that have larger networks depending on MPLS. While you'll find MPLS at the core for a long time to come, we expect to see a shared environment with SD-WAN at the edge, enabled by broadband Internet and other lower cost networks.”
And MPLS isn't without its advantages, most notably it can [guarantee performance][5] while SD-WAN, at the mercy of the public internet, cannot.
As broadband networks continue to improve in performance, SD-WAN will allow companies to reduce their reliance on MPLS, especially as equipment ages and is replaced. Avant expects that, for the foreseeable future, there will continue to be a very viable role for both.
**More about SD-WAN:**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][6]
* [How to pick an off-site data-backup method][7]
* [SD-Branch: What it is and why you'll need it][8]
* [What are the options for security SD-WAN?][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409070/mpls-is-hanging-on-in-this-sd-wan-world.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/the-latest-in-innovation-in-the-sd-wan-managed-services-market1400-100801684-large.jpg
[2]: https://www.prnewswire.com/news-releases/software-defined-wide-area-network-sd-wan-market-to-hit-17bn-by-2025-global-market-insights-inc-300795304.html
[3]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[4]: https://www.goavant.net/Disruption
[5]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html
[6]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[8]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[9]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world


@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Public internet should be all software-defined)
[#]: via: (https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Public internet should be all software-defined
======
Having a programmable public internet will correct inefficiencies in the current system, engineers at NOIA say.
![Thinkstock][1]
The public internet should migrate to a programmable backbone-as-a-service architecture, says a team of network engineers behind NOIA, a startup promising to revolutionize global traffic. They say the internet will be more efficient if internet protocols and routing technologies are re-worked and then combined with a traffic-trading blockchain.
It's “impossible to use internet for modern applications,” the company says on its website. “Almost all global internet companies struggle to ensure uptime and reliable user experience.”
That's because modern techniques aren't being introduced fully, NOIA says. The engineers say algorithms should be implemented to route traffic and that segment routing technology should be adopted. Plus, blockchain should be instigated to trade internet transit capacity. A “programmable internet solves the web's inefficiencies,” a representative from NOIA told me.
**[ Read also: [What is IPv6, and why aren't we there yet?][2] ]**
### Deprecate the public internet
NOIA has started introducing a caching, distributed content delivery application to improve website loading times, but it wants to ultimately deprecate the existing internet completely.
The company currently has 353 active cache nodes around the world, with a total of 27 terabytes of storage for that caching system—NOIA clients contribute spare bandwidth and storage. It's also testing a network backbone using four providers with European and American locations that it says will be the [development environment for its envisaged software-defined and radical internet replacement][3].
### The problem with today's internet
The “internet is a mesh of tangled up cables,” [NOIA says][4]. “Thousands of physically connected networks” are involved. Any configuration alterations in any of the jumble of networks cause issues with the protocols, it explains. The company is referring to Border Gateway Protocol (BGP), which lets routers discover paths to IP addresses through the disparate network. Because BGP only forwards to a neighboring router, it doesn't manage the entire route. That introduces “severe variability” or unreliability.
“It is impossible to guarantee service reliability without using overlay networks. Low-latency, performance-critical applications, and games cannot operate on public Internet,” the company says.
### How a software-defined internet works
NOIA's idea is to use [IPv6][5], the latest internet protocol. IPv6 features an expanded packet size and allows custom headers. The company then adds segment routing to create Segment Routing over IPv6 (SRv6). That SRv6 combo adds routing information to each data packet sent—a packet-level programmable network, in other words.
Segment routing, roughly, is an updated internet protocol that lets routers comprehend routing information in packet headers and then perform the routing. Cisco has been using it, too.
NOIA's network then adds the SRv6 amalgamation to distributed ledger technology (blockchain) in order to let ISPs and data centers buy and sell the routes—buyers can choose their routes in the exchange, too.
In addition to trade, blockchain introduces security. It's worth noting that routings aren't the only internet technologies that could be disrupted due to blockchain. In April I wrote about [organizations that propose moving data storage transactions over to distributed ledgers][6]. They say that will be more secure than anything seen before. [Ethernet's lack of inherent security could be corrected by smart contract, trackable verifiable transactions][7], say some. And, of course, supply chain, the automotive vertical, and the selling of sensor data overall may emerge as [use-contenders for secure, blockchain in the internet of things][8].
In NOIA's case, with SRv6 blended with distributed ledgers, the encrypted ledger holds the IP addresses, but it is architecturally decentralized—no one controls it. That's one element of added security, along with the aforementioned trading, provided by the ledger.
That trading could handle the question of who's paying for all this. However, NOIA says current internet hardware will be able to understand the segment routings, so no new equipment investments are needed.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409783/public-internet-should-be-all-software-defined.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/05/dns_browser_http_web_internet_thinkstock-100758191-large.jpg
[2]: https://www.networkworld.com/article/3254575/lan-wan/what-is-ipv6-and-why-aren-t-we-there-yet.html
[3]: https://medium.com/noia/development-update-06-20-07-04-2879f9fce3cb
[4]: https://noia.network/
[5]: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html
[6]: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html
[7]: https://www.networkworld.com/article/3356496/how-blockchain-will-manage-networks.html
[8]: https://www.networkworld.com/article/3330937/how-blockchain-will-transform-the-iot.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Smart cities offer window into the evolution of enterprise IoT technology)
[#]: via: (https://www.networkworld.com/article/3409787/smart-cities-offer-window-into-the-evolution-of-enterprise-iot-technology.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Smart cities offer window into the evolution of enterprise IoT technology
======
Smart-city technologies such as 0G networking hold clues for successful large-scale implementations of the internet of things in enterprise settings.
![Benjamin Hung modified by IDG Comm. \(CC0\)][1]
Powering smart cities is one of the most ambitious use cases for the internet of things (IoT), combining a wide variety of IoT technologies to create coherent systems that span not just individual buildings or campuses but entire metropolises. As such, smart cities offer a window into the evolution of enterprise IoT technologies and implementations on the largest scale.
And that's why I connected with [Christophe Fourtet][2], CSO and co-founder of [Sigfox][3], a French global network operator, to learn more about using wireless networks to connect large numbers of low-power objects, ranging from smartwatches to electricity meters. (And I have to admit I was intrigued by the 0G network moniker, which conjured visions of weightless IoT devices floating in space, or maybe [OG-][4]style old-school authenticity. That's not at all what it's about, of course.)
**[ Learn more: [Download a PDF bundle of five essential articles about IoT in the enterprise][5] ]**
According to Fourtet, “Sigfox's global 0G network specializes in inexpensively conveying small amounts of data over long ranges—without sacrificing quality. Whereas other networks aim to collect and transmit as much data as possible, as quickly as possible, we deliver small packets of information at regular intervals, giving customers only the critical information they need.”
The software-based wireless 0G network listens to devices without the need to establish and maintain network connection, eliminating signaling overhead. With network and computing complexity managed in the cloud, energy consumption and costs of connected devices are dramatically reduced, [the company says][6]. Just as important, the low power requirements can also dramatically cut battery requirements for IoT devices.
Around the world, customers like Michelin, General Motors, and Airbus use the 0G networks to connect IoT devices, and the network is supported by more than 660 partner organizations, including device makers and service providers such as Urbansense and Bosch. Sigfox cited [0G-connected IoT devices enabling Danish cities][7] to monitor quality of life data, from detecting defects in buildings to tracking garbage collection.
### 0G applications beyond smart cities
In addition to smart cities applications, Sigfox serves several industry verticals, including manufacturing, agriculture, and retail. Common use cases include supply-chain management and asset tracking, both within factory/warehouse environments and between locations as containers/shipments move through the supply chain around the globe. The network is uniquely equipped for supply chain use cases due to its cost-efficiency, long-lasting batteries with totally predictable autonomy, and wide-range reach.
In facilities management, the 0G network can connect IoT devices that track ambient factors such as temperature, humidity, and occupancy. Doing so helps managers leverage occupancy data to adjust the amount of space a company needs to rent, reducing overhead costs. It can also help farmers optimize the planting, care, and harvesting of crops.
Operating as a backup solution to ensure connectivity during a broadband network outage, 0G networking built into a cable box or router could allow service providers to access hardware even when the primary network is down, Fourtet said.
“The 0G network does not promise a continuation of these services,” Fourtet noted, “but it can provide access to the necessary information to solve challenges associated with outages.”
In a more dire example in the home and commercial building security market, sophisticated burglars could use cellular and Wi-Fi jammers to block a security system's access to a network so even though alarms were issued, the service might never receive them, Fourtet said. But the 0G network can send an alert to the alarm system provider even if it has been jammed or blocked, he said.
### How 0G networks are used today
Current 0G implementations include helping [Louis Vuitton track luggage][8] for its traveling customers. Using a luggage tracker powered by [Sigfox's Monarch service][9], a suitcase can stay connected to the 0G network throughout a trip, automatically recognizing and adapting to local radio frequency standards. The idea is for travelers to track the location of their bags at major airports in multiple countries, Fourtet said, while low energy consumption promises a six-month battery life with a very small battery.
At the Special Olympics World Games Abu Dhabi 2019, [iWire, LITE-ON and Sigfox worked together][10] to create a tracking solution designed to help safeguard 10,000 athletes and delegates. Sensors connected to the Sigfox 0G network and outfitted with Wi-Fi capabilities were equipped with tiny batteries designed to provide uninterrupted service throughout the weeklong event. The devices “periodically transmitted messages that helped to identify the location of athletes and delegates in case they went off course,” Fourtet said, while LITE-ON incorporated a panic button for use in case of emergencies. In fact, during the event, the system was used to locate a lost athlete and return them to the Games without incident, he said.
French car manufacturer [Groupe PSA][11] uses the 0G network to optimize shipping container routes between suppliers and assembly plants. [Track&Trace][11] works with IBM's cloud-based IoT technologies to track container locations and alert Groupe PSA when issues crop up, Fourtet said.
### 0G is still growing
“It takes time to build a new network,” Fourtet said. So while Sigfox has delivered 0G network coverage in 60 countries across five continents, covering 1 billion people  (including 51 U.S. metropolitan areas covering 30% of the population), Fourtet acknowledged, “[We] still have a ways to go to build our global network.” In the meantime, the company is expanding its Connectivity-as-a-Service (CaaS) solutions to enable coverage in areas where the 0G network does not yet exist.
**More on IoT:**
* [What is the IoT? How the internet of things works][12]
* [What is edge computing and how its changing the network][13]
* [Most powerful Internet of Things companies][14]
* [10 Hot IoT startups to watch][15]
* [The 6 ways to make money in IoT][16]
* [What is digital twin technology? [and why it matters]][17]
* [Blockchain, service-centric networking key to IoT success][18]
* [Getting grounded in IoT networking and security][5]
* [Building IoT-ready networks must become a priority][19]
* [What is the Industrial IoT? [And why the stakes are so high]][20]
Join the Network World communities on [Facebook][21] and [LinkedIn][22] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409787/smart-cities-offer-window-into-the-evolution-of-enterprise-iot-technology.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/tokyo_asia_smart-city_iot_networking_by-benjamin-hung-unsplash-100764249-large.jpg
[2]: https://www.sigfox.com/en/sigfox-story
[3]: https://www.sigfox.com/en
[4]: https://www.dictionary.com/e/slang/og/
[5]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[6]: https://www.sigfox.com/en/sigfox-iot-technology-overview
[7]: https://www.youtube.com/watch?v=WXc722WGjnE&t=1s
[8]: https://www.sigfox.com/en/news/sigfox-and-louis-vuitton-partner-innovative-luggage-tracker
[9]: https://www.sigfox.com/en/solutions/sigfox-services
[10]: https://www.sigfox.com/en/news/case-study-special-olympics-2019
[11]: https://www.sigfox.com/en/news/ibm-revolutionizes-container-tracking-groupe-psa-sigfox
[12]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[13]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[14]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[15]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[16]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[17]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[18]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[19]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[20]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[21]: https://www.facebook.com/NetworkWorld/
[22]: https://www.linkedin.com/company/network-world

View File

@ -1,3 +1,4 @@
leemeans translating
7 deadly sins of documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-cat-writing-king-typewriter-doc.png?itok=afaEoOqc)

View File

@ -1,243 +0,0 @@
ZhiW5217 is translating
Top 10 Microsoft Visio Alternatives for Linux
======
**Brief: If you are looking for a good Visio viewer in Linux, here are some alternatives to Microsoft Visio that you can use in Linux.**
[Microsoft Visio][1] is a great tool for creating or generating mission-critical diagrams and vector representations. While it may be a good tool for making floor plans or other kinds of diagrams - it is neither free nor open source.
Moreover, Microsoft Visio is not a standalone product. It comes bundled with Microsoft Office. We have already seen [open source alternatives to MS Office][2] in the past. Today we'll see what tools you can use in place of Visio on Linux.
## Best Microsoft Visio alternatives for Linux
![Microsoft Visio Alternatives for Linux][4]
Mandatory disclaimer here. The list is not a ranking. The product at number three is not better than the one at number six on the list.
I have also mentioned a couple of non open source Visio software that you can use from the web interface.
| Software | Type | License Type |
| --- | --- | --- |
| LibreOffice Draw | Desktop Software | Free and Open Source |
| OpenOffice Draw | Desktop Software | Free and Open Source |
| Dia | Desktop Software | Free and Open Source |
| yEd Graph Editor | Desktop and web-based | Freemium |
| Inkscape | Desktop Software | Free and Open Source |
| Pencil | Desktop and web-based | Free and Open Source |
| Graphviz | Desktop Software | Free and Open Source |
| draw.io | Desktop and web-based | Free and Open Source |
| Lucidchart | Web-based | Freemium |
| Calligra Flow | Desktop Software | Free and Open Source |
### 1. LibreOffice Draw
![][5]
The LibreOffice Draw module is one of the best open source alternatives to Microsoft Visio. With it, you can produce anything from a quick sketch of an idea to a complex, professional floor plan for a presentation. Flowcharts, organization charts, network diagrams, brochures, posters, and more! All without spending a penny.
The good thing is that it comes bundled with LibreOffice, which is installed by default in most Linux distributions.
#### Overview of Key Features:
* Style & Formatting tools to make Brochures/Posters
* Calc Data Visualization
* PDF-File editing capability
* Create Photo Albums by manipulating the pictures from Gallery
* Flexible Diagramming tools similar to the ones with Microsoft Visio (Smart Connectors, Dimension lines, etc.,)
* Supports .VSD files (to open)
[LibreOffice Draw][6]
### 2. Apache OpenOffice Draw
![][7]
A lot of people know about OpenOffice (on which the LibreOffice project was initially based), but they don't often mention Apache OpenOffice Draw as an alternative to Microsoft Visio. It is, in fact, yet another amazing open source diagramming tool. Unlike LibreOffice Draw, it does not support editing PDF files, but it does offer drawing tools for any type of diagram creation.
Just a caveat here. Use this tool only if you have OpenOffice already on your system. This is because [installing OpenOffice][8] is a pain and it is [not properly developed anymore][9].
#### Overview of Key Features:
* 3D Controller to create shapes quickly
* Create (.swf) flash versions of your work
* Style & Formatting tools
* Flexible Diagramming tools similar to the ones with Microsoft Visio (Smart Connectors, Dimension lines, etc.,)
[Apache OpenOffice Draw][10]
### 3. Dia
![][11]
Dia is yet another interesting open source tool. It may not seem to be under active development like the others mentioned, but if you are looking for a free and open source alternative to Microsoft Visio for simple, decent diagrams, Dia could be your choice. Its only letdown could be the user interface. Apart from that, it does let you use powerful tools for complex diagrams (though the result may not look great, so we recommend it for simpler diagrams).
#### Overview of Key Features:
* It can be used via command-line
* Styling & Formatting tools
* Shape Repository for custom shapes
* Diagramming tools similar to the ones with Microsoft Visio (Special Objects, Grid Lines, Layers, etc.,)
* Cross-platform
[Dia][12]
### 4. yEd Graph Editor
yEd Graph Editor is one of the most-loved free Microsoft Visio alternatives. If it being freeware rather than an open source project worries you, you can still use [yEd's live editor][13] via your web browser for free. It is one of the best recommendations if you want to make diagrams quickly with a very easy-to-use interface.
#### Overview of Key Features:
* Drag and drop feature for easy diagram making
* Supports importing external data for linking
[yEd Graph Editor][14]
### 5. Inkscape
![][15]
Inkscape is a free and open source vector graphics editor. You get the basic functionalities of creating a flowchart or a data flow diagram. It does not offer advanced diagramming tools but the basic ones to create simpler diagrams. So, Inkscape could be your Visio alternative only if you are looking to generate basic diagrams with the help of diagram connector tool by utilizing the available symbols from the library.
#### Overview of Key Features:
* Connector Tool
* Flexible drawing tools
* Broad file format compatibility
[Inkscape][16]
### 6. Pencil Project
![][17]
Pencil Project is an impressive open source initiative available for Windows and Mac as well as Linux. It features an easy-to-use GUI that makes diagramming easy and convenient, and a good collection of built-in shapes and symbols to make your diagrams look great. It also ships with Android and iOS UI stencils to let you start prototyping apps when needed.
You can also have it installed as a Firefox extension - but the extension does not utilize the latest build of the project.
#### Overview of Key Features:
* Browse cliparts easily (utilizing openclipart.org)
* Export as an ODT file / PDF file
* Diagram connector tool
* Cross-platform
[Pencil Project][18]
### 7. Graphviz
![][19]
Graphviz is slightly different. It is not a drawing tool but a dedicated graph visualization tool. You should definitely utilize this tool if you are into network diagrams which require several designs to represent a node. Well, of course, you can't make a floor plan with this tool (it won't be easy at least). So, it is best-suited for network diagrams, bioinformatics, database connections, and similar stuff.
#### Overview of Key Features:
* Supports command-line usage
* Supports custom shapes & tabular node layouts
  * Basic styling and formatting tools
[Graphviz][20]
### 8. Draw.io
Draw.io is primarily a free web-based diagramming tool with powerful features to make almost any type of diagram. You just drag and drop shapes and then connect them to create a flowchart, an E-R diagram, or anything similar. Also, if you like the tool, you can try the [offline desktop version][21].
#### Overview of Key Features:
* Direct uploads to a cloud storage service
* Custom Shapes
* Styling & Formatting tools
* Cross-platform
[Draw.io][22]
### 9. Lucidchart
![][23]
Lucidchart is a premium web-based diagramming tool that offers a free subscription with limited features. You can use the free subscription to create several types of diagrams and export them as an image or a PDF. However, the free version does not support data linking or the Visio import/export functionality. If you do not need data linking, Lucidchart could prove to be a very good tool for generating beautiful diagrams.
#### Overview of Key Features:
* Integrations to Slack, Jira Core, Confluence
* Ability to make product mockups
* Import Visio files
[Lucidchart][24]
### 10. Calligra Flow
![calligra flow][25]
Calligra Flow is a part of [Calligra Project][26] which aims to provide free and open source software tools. With Calligra flow, you can easily create network diagrams, entity-relation diagrams, flowcharts, and more.
#### Overview of Key Features:
* Wide range of stencil boxes
* Styling and formatting tools
[Calligra Flow][27]
### Wrapping Up
Now that you know about the best free and open source Visio alternatives, what do you think about them?
Are they better than Microsoft Visio for any of your requirements? Also, let us know in the comments below if we missed any of your favorite diagramming tools as a Linux alternative to Microsoft Visio.
--------------------------------------------------------------------------------
via: https://itsfoss.com/visio-alternatives-linux/
作者:[Ankush Das][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ankush/
[1]:https://products.office.com/en/visio/flowchart-software
[2]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/visio-alternatives-linux-featured.png
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/libreoffice-draw-microsoft-visio-alternatives.jpg
[6]:https://www.libreoffice.org/discover/draw/
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/apache-open-office-draw.jpg
[8]:https://itsfoss.com/install-openoffice-ubuntu-linux/
[9]:https://itsfoss.com/openoffice-shutdown/
[10]:https://www.openoffice.org/product/draw.html
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/dia-screenshot.jpg
[12]:http://dia-installer.de/
[13]:https://www.yworks.com/products/yed-live
[14]:https://www.yworks.com/products/yed
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/inkscape-screenshot.jpg
[16]:https://inkscape.org/en/
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/pencil-project.jpg
[18]:http://pencil.evolus.vn/Downloads.html
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/graphviz.jpg
[20]:http://graphviz.org/
[21]:https://about.draw.io/integrations/#integrations_offline
[22]:https://about.draw.io/
[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/lucidchart-visio-alternative.jpg
[24]:https://www.lucidchart.com/
[25]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/calligra-flow.jpg
[26]:https://www.calligra.org/
[27]:https://www.calligra.org/flow/

View File

@ -1,134 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Delete A Repository And GPG Key In Ubuntu)
[#]: via: (https://www.ostechnix.com/how-to-delete-a-repository-and-gpg-key-in-ubuntu/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
How To Delete A Repository And GPG Key In Ubuntu
======
![Delete A Repository And GPG Key In Ubuntu][1]
The other day we discussed how to [**list the installed repositories**][2] in RPM and DEB-based systems. Today, we are going to learn how to delete a repository along with its GPG key in Ubuntu. For those wondering, a repository (**repo** for short) is a central place where developers keep the software packages. The packages in the repositories are thoroughly tested and built specifically for each version by Ubuntu developers. Users can download and install these packages on their Ubuntu system using the **APT package manager**. Ubuntu has four official repositories, namely **Main**, **Universe**, **Restricted**, and **Multiverse**.
Apart from the official repositories, there are many unofficial repositories maintained by developers (or package maintainers). The unofficial repositories usually have packages that are not available in the official repositories. All packages are signed with a pair of keys, a public and a private key, by the package maintainer. As you already know, the public key is given out to the users and the private key must be kept secret. Whenever you add a new repository to the sources list, you should also add its repository key so that the APT package manager trusts the newly added repository. Using the repository keys, you can ensure that you're getting the packages from the right person. Hopefully you now have a basic idea about software repositories and repository keys. Now let us go ahead and see how to delete a repository and its key when it is no longer needed on Ubuntu systems.
### Delete A Repository In Ubuntu
Whenever you add a repository using “add-apt-repository” command, it will be stored in **/etc/apt/sources.list** file.
To delete a software repository from Ubuntu and its derivatives, just open the /etc/apt/sources.list file, look for the repository entry, and delete it.
```
$ sudo nano /etc/apt/sources.list
```
As you can see in the below screenshot, I have added [**Oracle Virtualbox**][3] repository in my Ubuntu system.
![][4]
virtualbox repository
To delete this repository, simply remove the entry. Save and close the file.
If you have added PPA repositories, look into **/etc/apt/sources.list.d/** directory and delete the respective entry.
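If you would rather not hunt through the file in an editor, sed can drop a matching line in one step. The sketch below runs on a scratch copy with made-up repository lines so nothing on your system is touched; on a real system you would point it (as root) at /etc/apt/sources.list instead:

```shell
# demo on a scratch file; the two repository lines are made-up examples
cat > /tmp/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu bionic main restricted
deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian bionic contrib
EOF

# delete every line mentioning virtualbox, then show what is left
sed -i '/virtualbox/d' /tmp/sources.list
cat /tmp/sources.list
# prints: deb http://archive.ubuntu.com/ubuntu bionic main restricted
```

The same pattern works for the files under /etc/apt/sources.list.d/.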
Alternatively, you can delete the repository using “add-apt-repository” command. For example, I am deleting the [**Systemback**][5] repository like below.
```
$ sudo add-apt-repository -r ppa:nemh/systemback
```
Finally, update the software sources list using command:
```
$ sudo apt update
```
### Delete Repository keys
We use the “apt-key” command to manage repository keys. First, let us list the added keys using the command:
```
$ sudo apt-key list
```
This command will list all added repository keys.
```
/etc/apt/trusted.gpg
--------------------
pub rsa1024 2010-10-31 [SC]
3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B
uid [ unknown] Launchpad PPA for Kendek
pub rsa4096 2016-04-22 [SC]
B9F8 D658 297A F3EF C18D 5CDF A2F6 83C5 2980 AECF
uid [ unknown] Oracle Corporation (VirtualBox archive signing key) <[email protected]>
sub rsa4096 2016-04-22 [E]
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
790B C727 7767 219C 42C8 6F93 3B4F E6AC C0B2 1F32
uid [ unknown] Ubuntu Archive Automatic Signing Key (2012) <[email protected]>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-cdimage.gpg
------------------------------------------------------
pub rsa4096 2012-05-11 [SC]
8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
uid [ unknown] Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>
/etc/apt/trusted.gpg.d/ubuntu-keyring-2018-archive.gpg
------------------------------------------------------
pub rsa4096 2018-09-17 [SC]
F6EC B376 2474 EDA9 D21B 7022 8719 20D1 991B C93C
uid [ unknown] Ubuntu Archive Automatic Signing Key (2018) <[email protected]>
```
As you can see in the above output, the long (40-character) hex value is the repository key. If you want the APT package manager to stop trusting the key, simply delete it using the command:
```
$ sudo apt-key del "3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B"
```
Or, specify the last 8 characters only:
```
$ sudo apt-key del 73C62A1B
```
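The short ID used above is just the last 8 hex characters of the fingerprint printed by apt-key list. If you do not want to count characters by hand, a small pipeline can derive it (shown with the Launchpad PPA fingerprint from the listing above):

```shell
# strip the spaces from the fingerprint and keep the last 8 hex characters
fpr="3820 03C2 C8B7 B4AB 813E 915B 14E4 9429 73C6 2A1B"
echo "$fpr" | tr -d ' ' | tail -c 9   # prints 73C62A1B
```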
Done! The repository key has been deleted. Run the following command to update the repository lists:
```
$ sudo apt update
```
**Resource:**
* [**Software repositories Ubuntu Community Wiki**][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-delete-a-repository-and-gpg-key-in-ubuntu/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/07/Delete-a-repository-in-ubuntu-720x340.png
[2]: https://www.ostechnix.com/find-list-installed-repositories-commandline-linux/
[3]: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
[4]: https://www.ostechnix.com/wp-content/uploads/2019/07/virtualbox-repository.png
[5]: https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/
[6]: https://help.ubuntu.com/community/Repositories/Ubuntu

View File

@ -1,133 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command line quick tips: Permissions)
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-permissions/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Command line quick tips: Permissions
======
![][1]
Fedora, like all Linux based systems, comes with a powerful set of security features. One of the basic features is _permissions_ on files and folders. These permissions allow files and folders to be secured from unauthorized access. This article explains a bit about these permissions, and shows you how to share access to a folder using them.
### Permission basics
Fedora is by nature a multi-user operating system. It also has _groups_, which users can be members of. But imagine for a moment a multi-user system with no concept of permissions. Different logged-in users could read each other's content at will. This isn't very good for privacy or security, as you can imagine.
Any file or folder on Fedora has three sets of permissions assigned. The first set is for the _user_ who owns the file or folder. The second is for the _group_ that owns it. The third set is for everyone else who's not the user who owns the file, or in the group that owns the file. Sometimes this is called the _world_.
### What permissions mean
Each set of permissions comes in three flavors — _read_, _write_, and _execute_. Each of these has an initial that stands for the permission, thus _r_, _w_, and _x_.
#### File permissions
For _files_, here's what these permissions mean:
* Read (r): the file content can be read
* Write (w): the file content can be changed
* Execute (x): the file can be executed — this is used primarily for programs or scripts that are meant to be run directly
You can see the three sets of these permissions when you do a long listing of any file. Try this with the _/etc/services_ file on your system:
```
$ ls -l /etc/services
-rw-r--r--. 1 root root 692241 Apr 9 03:47 /etc/services
```
Notice the groups of permissions at the left side of the listing. These are provided in three sets, as mentioned above — for the user who owns the file, for the group that owns the file, and for everyone else. The user owner is _root_ and the group owner is the _root_ group. The user owner has read and write access to the file. Anyone in the group _root_ can only read the file. And finally, anyone else can also only read the file. (The dash at the far left shows this is a regular file.)
By the way, you'll commonly find this set of permissions on many (but not all) system configuration files. They are only meant to be changed by the system administrator, not regular users. Often regular users need to read the content as well.
#### Folder (directory) permissions
For folders, the permissions have a slightly different meaning:
* Read (r): the folder contents can be read (such as the _ls_ command)
* Write (w): the folder contents can be changed (files can be created or erased in this folder)
* Execute (x): the folder can be searched, although its contents cannot be read. (This may sound strange, but the explanation requires more complex details of file systems outside the scope of this article. So just roll with it for now.)
Take a look at the _/etc/grub.d_ folder for example:
```
$ ls -ld /etc/grub.d
drwx------. 2 root root 4096 May 23 16:28 /etc/grub.d
```
Note the _d_ at the far left. It shows this is a directory, or folder. The permissions show the user owner (_root_) can read, change, and _cd_ into this folder. However, no one else can do so — whether they're a member of the _root_ group or not. Notice you can't _cd_ into the folder, either:
```
$ cd /etc/grub.d
bash: cd: /etc/grub.d: Permission denied
```
Notice how your own home directory is set up:
```
$ ls -ld $HOME
drwx------. 221 paul paul 28672 Jul 3 14:03 /home/paul
```
Now, notice how no one, other than you as the owner, can access anything in this folder. This is intentional! You wouldn't want others to be able to read your private content on a shared system.
### Making a shared folder
You can exploit this permissions capability to easily make a folder to share within a group. Imagine you have a group called _finance_ with several members who need to share documents. Because these are user documents, it's a good idea to store them within the _/home_ folder hierarchy.
To get started, [use][2] _[sudo][2]_ to make a folder for sharing, and set it to be owned by the _finance_ group:
```
$ sudo mkdir -p /home/shared/finance
$ sudo chgrp finance /home/shared/finance
```
By default the new folder has these permissions. Notice how it can be read or searched by anyone, even if they can't create or erase files in it:
```
drwxr-xr-x. 2 root root 4096 Jul 6 15:35 finance
```
That doesn't seem like a good idea for financial data. Next, use the _chmod_ command to change the mode (permissions) of the shared folder. Note the use of _g_ to change the owning group's permissions, and _o_ to change other users' permissions. Similarly, _u_ would change the user owner's permissions:
```
$ sudo chmod g+w,o-rx /home/shared/finance
```
The resulting permissions look better. Now, anyone in the _finance_ group (or the user owner _root_) has total access to the folder and its contents:
```
drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance
```
If any other user tries to access the shared folder, they won't be able to do so. Great! Now our finance group can put documents in a shared place.
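As a quick sanity check (a sketch, assuming the `/home/shared/finance` folder and `finance` group from above), note that applying `g+w,o-rx` to a default 755 folder is equivalent to setting the octal mode 770 directly:

```shell
# Starting from the default 755, g+w,o-rx yields octal mode 770,
# so this single command produces the same result:
sudo chmod 770 /home/shared/finance

# Confirm the mode, user owner, and group owner
stat -c '%a %U %G' /home/shared/finance
```

Octal modes are convenient when you want to set all three permission sets at once rather than adjust them incrementally.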
### Other notes
There are additional ways to manipulate these permissions. For example, you may want any files in this folder to be set as owned by the group _finance_. This requires additional settings not covered in this article, but stay tuned to the Magazine for more on that topic soon.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/command-line-quick-tips-permissions/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
[2]: https://fedoramagazine.org/howto-use-sudo/


@ -1,189 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (32-bit life support: Cross-compiling with GCC)
[#]: via: (https://opensource.com/article/19/7/cross-compiling-gcc)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
32-bit life support: Cross-compiling with GCC
======
Use GCC to cross-compile binaries for different architectures from a
single build machine.
![Ratchet set tools][1]
If you're a developer creating binary packages, like an RPM, DEB, Flatpak, or Snap, you have to compile code for a variety of different target platforms. Typical targets include 32-bit and 64-bit x86 and ARM. You could do your builds on different physical or virtual machines, but that means maintaining several systems. Instead, you can use the GNU Compiler Collection ([GCC][2]) to cross-compile, producing binaries for several different architectures from a single build machine.
Assume you have a simple dice-rolling game that you want to cross-compile. Something written in C is relatively easy on most systems, so to add complexity for the sake of realism, I wrote this example in C++, so the program depends on something not present in C (**iostream**, specifically).
```
#include <iostream>
#include <cstdlib>

using namespace std;

void lose (int c);
void win (int c);
void draw ();

int main() {
  int i;

  do {
    cout << "Pick a number between 1 and 20: \n";
    cin >> i;

    int c = rand() % 21;

    if (i > 20) lose(c);
    else if (i < c) lose(c);
    else if (i > c) win(c);
    else draw();
  } while (1 == 1);
}

void lose(int c) {
  cout << "You lose! Computer rolled " << c << "\n";
}

void win(int c) {
  cout << "You win!! Computer rolled " << c << "\n";
}

void draw() {
  cout << "What are the chances. You tied. Try again, I dare you! \n";
}
```
Compile it on your system using the **g++** command:
```
$ g++ dice.cpp -o dice
```
Then run it to confirm that it works:
```
$ ./dice
Pick a number between 1 and 20:
[...]
```
You can see what kind of binary you just produced with the **file** command:
```
$ file ./dice
dice: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked (uses shared libs), for GNU/Linux 5.1.15, not stripped
```
And just as important, what libraries it links to with **ldd**:
```
$ ldd dice
    linux-vdso.so.1 => (0x00007ffe0d1dc000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fce8410e000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fce83d4f000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fce83a52000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fce84449000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fce8383c000)
```
You have confirmed two things from these tests: The binary you just ran is 64-bit, and it is linked to 64-bit libraries.
That means that, in order to cross-compile for 32-bit, you must tell **g++** to:
1. Produce a 32-bit binary
2. Link to 32-bit libraries instead of the default 64-bit libraries
### Setting up your dev environment
To compile to 32-bit, you need 32-bit libraries and headers installed on your system. If you run a pure 64-bit system, then you have no 32-bit libraries or headers and need to install a base set. At the very least, you need the C and C++ libraries (**glibc** and **libstdc++**) along with the 32-bit version of the GCC libraries (**libgcc**). The names of these packages may vary from distribution to distribution. On Slackware, a pure 64-bit distribution with 32-bit compatibility is available from the **multilib** packages provided by [Alien BOB][3]. On Fedora, CentOS, and RHEL:
```
$ yum install libstdc++-*.i686
$ yum install glibc-*.i686
$ yum install libgcc.i686
```
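On Debian and Ubuntu, the equivalent 32-bit compiler support usually comes from the multilib packages; exact package names can vary by release, so treat this as a sketch:

```shell
# Debian/Ubuntu: install 32-bit (i386) support for GCC and G++
sudo apt install gcc-multilib g++-multilib
```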
Regardless of the system you're using, you also must install any 32-bit libraries your project uses. For instance, if you include **yaml-cpp** in your project, then you must install the 32-bit version of **yaml-cpp** or, on many systems, the development package for **yaml-cpp** (for instance, **yaml-cpp-devel** on Fedora) before compiling it.
Once that's taken care of, the compilation is fairly simple:
```
$ g++ -m32 dice.cpp -o dice32 -L /usr/lib -march=i686
```
The **-m32** flag tells GCC to compile in 32-bit mode. The **-march=i686** option further defines what kind of optimizations to use (refer to **info gcc** for a list of options). The **-L** flag sets the path to the libraries you want GCC to link to. This is usually **/usr/lib** for 32-bit, although, depending on how your system is set up, it could be **/usr/lib32** or even **/opt/usr/lib** or any place you know you keep your 32-bit libraries.
After the code compiles, see proof of your build:
```
$ file ./dice32
dice32: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs) [...]
```
And, of course, **ldd ./dice32** points to your 32-bit libraries.
### Different architectures
Compiling 32-bit on 64-bit for the same processor family allows GCC to make many assumptions about how to compile the code. If you need to compile for an entirely different processor, you must install the appropriate cross-build GCC utilities. Which utility you install depends on what you are compiling. This process is a little more complex than compiling for the same CPU family.
When you're cross-compiling for the same family, you can expect to find the same set of 32-bit libraries as 64-bit libraries, because your Linux distribution is maintaining both. When compiling for an entirely different architecture, you may have to hunt down libraries required by your code. The versions you need may not be in your distribution's repositories because your distribution may not provide packages for your target system, or it may not mirror all packages in a convenient location. If the code you're compiling is yours, then you probably have a good idea of what its dependencies are and possibly where to find them. If the code is something you have downloaded and need to compile, then you probably aren't as familiar with its requirements. In that case, investigate what the code requires to build correctly (they're usually listed in the README or INSTALL files, and certainly in the source code itself), then go gather the components.
For example, if you need to compile C code for ARM, you must first install **gcc-arm-linux-gnu** (32-bit) or **gcc-aarch64-linux-gnu** (64-bit) on Fedora or RHEL, or **arm-linux-gnueabi-gcc** and **binutils-arm-linux-gnueabi** on Ubuntu. This provides the commands and libraries you need to build (at least) a simple C program. Additionally, you need whatever libraries your code uses. You can place header files in the usual location (**/usr/include** on most systems), or you can place them in a directory of your choice and point GCC to it with the **-I** option.
When compiling, don't use the standard **gcc** or **g++** command. Instead, use the GCC utility you installed. For example:
```
$ arm-linux-gnu-g++ dice.cpp \
  -I/home/seth/src/crossbuild/arm/cpp \
  -o armdice.bin
```
Verify what you've built:
```
$ file armdice.bin
armdice.bin: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV) [...]
```
### Libraries and deliverables
This was a simple example of how to use cross-compiling. In real life, your source code may produce more than just a single binary. While you can manage this manually, there's probably no good reason to do that. In my next article, I'll demonstrate GNU Autotools, which does most of the work required to make your code portable.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/cross-compiling-gcc
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Ratchet set tools)
[2]: https://gcc.gnu.org/
[3]: http://www.slackware.com/~alien/multilib/


@ -1,154 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install Elasticsearch on MacOS)
[#]: via: (https://opensource.com/article/19/7/installing-elasticsearch-macos)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo/users/don-watkins)
How to install Elasticsearch on MacOS
======
Installing Elasticsearch is complex! Here's how to do it on a Mac.
![magnifying glass on computer screen][1]
[Elasticsearch][2] is an open source, full-text search engine developed in Java. Users upload datasets as JSON files. Then, Elasticsearch stores the original document before adding a searchable reference to the document in the cluster's index.
Less than nine years after its creation, Elasticsearch is the most popular enterprise search engine. Elastic released its latest update, version 7.2.0, on June 25, 2019.
[Kibana][3] is an open source data visualizer for Elasticsearch. This tool helps users create visualizations on top of content indexed in an Elasticsearch cluster.
[Sunbursts][4], [geospatial data maps][5], [relationship analyses][6], and dashboards with live data are just a few options. And thanks to Elasticsearch's machine learning prowess, you can learn which properties might influence your data (like servers or IP addresses) and find abnormal patterns.
At [DevFest DC][7] last month, [Dr. Summer Rankin][8]—lead data scientist at Booz Allen Hamilton—uploaded a dataset of content from TED Talks to Elasticsearch, then used Kibana to quickly build a dashboard. Intrigued, I went to an Elasticsearch meetup days later.
Since this course was for newbies, we started at Square One: Installing Elastic and Kibana on our laptops. Without both packages installed, we couldn't create our own visualizations from the dataset of Shakespeare texts we were using as a dummy JSON file.
Next, I will share step-by-step instructions for downloading, installing, and running Elasticsearch Version 7.1.1 on MacOS. This was the latest version when I attended the Elasticsearch meetup in mid-June 2019.
### Downloading Elasticsearch for MacOS
1. Go to <https://www.elastic.co/downloads/elasticsearch>, which takes you to the webpage below:
![The Elasticsearch download page.][9]
2. In the **Downloads** section, click **MacOS**, which downloads the Elasticsearch TAR file (for example, **elasticsearch-7.1.1-darwin-x86_64.tar**) into your **Downloads** folder.
3. Double-click this file to unpack it into its own folder (for example, **elasticsearch-7.1.1**), which contains all of the files that were in the TAR.
**Tip**: If you want Elasticsearch to live in another folder, now is the time to move this folder.
### Running Elasticsearch from the MacOS command line
If you prefer, you can run Elasticsearch using only the command line. Just follow this process:
1. [Open a **Terminal** window][10].
2. In the terminal window, enter your Elasticsearch folder. For example (if you moved the program, change **Downloads** to the correct path):
**$ cd ~/Downloads/elasticsearch-7.1.1**
3. Change to the Elasticsearch **bin** subfolder, and start the program. For example:
**$ cd bin**
**$ ./elasticsearch**
Here's some of the output that my command line terminal displayed when I launched Elasticsearch 7.1.1:
![Terminal output when running Elasticsearch.][11]
**NOTE**: Elasticsearch runs in the foreground by default, which can cause your computer to slow down. Press **Ctrl-C** to stop Elasticsearch from running.
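While the node is running, one quick way to confirm it is up (assuming the default HTTP port, 9200, has not been changed in your configuration) is to query it from a second terminal window:

```shell
# Ask the local node for its status; Elasticsearch serves JSON on port 9200 by default
curl http://localhost:9200
```

A healthy node answers with a JSON document that includes the cluster name and the Elasticsearch version.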
### Running Elasticsearch using the GUI
If you prefer your point-and-click environment, you can run Elasticsearch like so:
1. Open a new **Finder** window.
2. Select **Downloads** in the left Finder sidebar (or, if you moved Elasticsearch to another folder, navigate to there).
3. Open the folder called (for the sake of this example) **elasticsearch-7.1.1**. A selection of eight subfolders appears.
![The elasticsearch/bin menu.][12]
4. Open the **bin** subfolder. As the screenshot above shows, this subfolder yields 20 assets.
5. Click the first option, which is **elasticsearch**.
Note that you may get a security warning, as shown below:
![The security warning dialog box.][13]
 
In order to open the program in this case:
1. Click **OK** in the warning dialog box.
2. Open **System Preferences**.
3. Click **Security &amp; Privacy**, which opens the window shown below:
![Where you can allow your computer to open the downloaded file.][14]
4. Click **Open Anyway**, which opens the confirmation dialog box shown below:
![Security confirmation dialog box.][15]
5. Click **Open**. A terminal window opens and launches Elasticsearch.
The launch process can take a while, so let it run. Eventually, it will finish, and you will see output similar to this at the end:
![Launching Elasticsearch in MacOS.][16]
### Learning more
Once you've installed Elasticsearch, it's time to start exploring!
The tool's [Elasticsearch: Getting Started][17] guide directs you based on your goals. Its introductory video walks through steps to launch a hosted cluster on [Elasticsearch Service][18], perform basic search queries, play with data through create, read, update, and delete (CRUD) REST APIs, and more.
This guide also offers links to documentation, dev console commands, training subscriptions, and a free trial of Elasticsearch Service. This trial lets you deploy Elastic and Kibana on AWS and GCP to support your Elastic clusters in the cloud.
In the follow-up to this article, we'll walk through the steps you'll take to install Kibana on MacOS. This process will take your Elasticsearch queries to the next level via diverse data visualizations. Stay tuned!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/installing-elasticsearch-macos
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://www.getapp.com/it-management-software/a/qbox-dot-io-hosted-elasticsearch/
[3]: https://www.elastic.co/products/kibana
[4]: https://en.wikipedia.org/wiki/Pie_chart#Ring
[5]: https://en.wikipedia.org/wiki/Spatial_analysis
[6]: https://en.wikipedia.org/wiki/Correlation_and_dependence
[7]: https://www.devfestdc.org/
[8]: https://www.summerrankin.com/about
[9]: https://opensource.com/sites/default/files/uploads/wwa1f3_600px_0.png (The Elasticsearch download page.)
[10]: https://support.apple.com/en-ca/guide/terminal/welcome/mac
[11]: https://opensource.com/sites/default/files/uploads/io6t1a_600px.png (Terminal output when running Elasticsearch.)
[12]: https://opensource.com/sites/default/files/uploads/o43yku_600px.png (The elasticsearch/bin menu.)
[13]: https://opensource.com/sites/default/files/uploads/elasticsearch_security_warning_500px.jpg (The security warning dialog box.)
[14]: https://opensource.com/sites/default/files/uploads/the_general_tab_of_the_system_preferences_security_and_privacy_window.jpg (Where you can allow your computer to open the downloaded file.)
[15]: https://opensource.com/sites/default/files/uploads/confirmation_dialog_box.jpg (Security confirmation dialog box.)
[16]: https://opensource.com/sites/default/files/uploads/y5dvtu_600px.png (Launching Elasticsearch in MacOS.)
[17]: https://www.elastic.co/webinars/getting-started-elasticsearch?ultron=%5BB%5D-Elastic-US+CA-Exact&blade=adwords-s&Device=c&thor=elasticsearch&gclid=EAIaIQobChMImdbvlqOP4wIVjI-zCh3P_Q9mEAAYASABEgJuAvD_BwE
[18]: https://info.elastic.co/elasticsearch-service-gaw-v10-nav.html?ultron=%5BB%5D-Elastic-US+CA-Exact&blade=adwords-s&Device=c&thor=elasticsearch%20service&gclid=EAIaIQobChMI_MXHt-SZ4wIVJBh9Ch3wsQfPEAAYASAAEgJo9fD_BwE


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ElectronMail a Desktop Client for ProtonMail and Tutanota)
[#]: via: (https://itsfoss.com/electronmail/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
ElectronMail a Desktop Client for ProtonMail and Tutanota
======
The majority of people on the internet have email accounts from big companies, such as Google, that do not respect your privacy. Thankfully, there are privacy-conscious alternatives like [Tutanota][1] and [ProtonMail][2]. The problem is that not all of them have a desktop client. Today, we will look at a project that seeks to solve that problem for you. Let's take a look at ElectronMail.
Electron-ic warning!
The following app is built with Electron (the name is ElectronMail for a reason). If the use of Electron upsets you, please consider this a trigger warning.
### ElectronMail: Desktop Client for Tutanota and ProtonMail
![Electron Mail About][3]
[ElectronMail][4] is, simply put, an email client for ProtonMail and Tutanota. It is built using three big technologies: [Electron][5], [TypeScript][6] and [Angular][7]. It includes the following features:
* Multiple account support for each email provider
* Encrypted local storage
* Available for Linux, Windows, macOS, and FreeBSD
* Native notifications
* System tray icon with a total number of unread messages
* Master password to protect account information
* Switchable view layouts
* Offline access to the emails
* Encrypted local storage for emails
* Batch emails export to EML files
* Full-text search
* Built-in/prepackaged web clients
* Configuring proxy per account
* Spell Checking
* Support for two-factor authentication for extra security
Currently, ElectronMail only supports Tutanota and ProtonMail. I get the feeling that they will be adding more in the future. According to the [GitHub page][4]: “Multi email providers support. ProtonMail and Tutanota at the moment.”
ElectronMail is licensed under the MIT license.
#### How to install ElectronMail
Currently, there are several options to install ElectronMail on Linux. For Arch and Arch-based distros, you can install it from the [Arch User Repository][8]. There is also a Snap available for ElectronMail. To install it, just enter `sudo snap install electron-mail`.
For all other Linux distros, you can [download][9] a `.deb` or `.rpm` file.
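Put together, the two install routes look like this (the `.deb` filename below is hypothetical; substitute the file you actually downloaded from the releases page):

```shell
# Install from the Snap store
sudo snap install electron-mail

# Or install a downloaded .deb package on Debian/Ubuntu (hypothetical filename)
sudo dpkg -i electron-mail-4.2.0-linux-amd64.deb
```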
![Electron Mail Inbox][10]
You can also [download][9] an `.exe` installer for Windows or a `.dmg` file for macOS. There is even a file for FreeBSD.
#### Removing ElectronMail
If you install ElectronMail and decide that it is not for you, there are a couple of steps that the [developer][12] recommends. **Be sure to follow these steps before you uninstall the application.**
If you are using the “Keep Me Signed In” feature, click “Log out” on the menu. This will delete the locally stored master password. It is possible to delete the master password after uninstalling ElectronMail, but that would involve editing the system keychain.
You will also need to delete the settings folder manually. You can find it by clicking “Open setting folder” after selecting the application's icon in the system tray.
![Electron Mail Setting][13]
### My Thoughts on ElectronMail
I don't usually use email clients. In fact, I mostly depend on web clients. So, I don't have much use for this application.
That being said, ElectronMail has a nice feel to it and is easy to set up. It has a good number of features activated out of the box and the advanced features aren't that hard to activate.
The one question I have relates to search. According to the features list, ElectronMail supports full-text search. However, the free version of Tutanota only supports a limited search. I wonder how ElectronMail handles that.
At the end of the day, ElectronMail is just an Electron wrapper for a couple of web-based emails. I would rather just have them open in my browser than dedicate separate system resources to running Electron. If you only [use Tutanota email, they have their own official Electron-based desktop client][14]. You may try that.
My biggest issue is with security. This is an unofficial app for two very secure email apps. What if there is a way to capture your login info or read through your emails? Someone who is smarter than I would have to go through the source code to know for sure. That is always the issue with unofficial apps for a security project.
Have you ever used ElectronMail? Do you think it would be worthwhile to install ElectronMail? What is your favorite email client? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][15].
--------------------------------------------------------------------------------
via: https://itsfoss.com/electronmail/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tutanota-review/
[2]: https://itsfoss.com/protonmail/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/electron-mail-about.jpg?resize=800%2C500&ssl=1
[4]: https://github.com/vladimiry/ElectronMail
[5]: https://electronjs.org/
[6]: http://www.typescriptlang.org/
[7]: https://angular.io/
[8]: https://aur.archlinux.org/packages/electronmail-bin
[9]: https://github.com/vladimiry/ElectronMail/releases
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/electron-mail-inbox.jpg?ssl=1
[11]: https://itsfoss.com/zettlr-markdown-editor/
[12]: https://github.com/vladimiry
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/electron-mail-setting.jpg?ssl=1
[14]: https://itsfoss.com/tutanota-desktop/
[15]: http://reddit.com/r/linuxusersgroup


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Save and load Python data with JSON)
[#]: via: (https://opensource.com/article/19/7/save-and-load-data-python-json)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Save and load Python data with JSON
======
The JSON format saves you from creating your own data formats, and is
particularly easy to learn if you already know Python. Here's how to use
it with Python.
![Cloud and database icons][1]
[JSON][2] stands for JavaScript Object Notation. This format is a popular method of storing data in key-value arrangements so it can be parsed easily later. Don't let the name fool you, though: You can use JSON in Python—not just JavaScript—as an easy way to store data, and this article demonstrates how to get started.
First, take a look at this simple JSON snippet:
```
{
        "name":"tux",
        "health":"23",
        "level":"4"
}
```
That's pure JSON and has not been altered for Python or any other language. Yet if you're familiar with Python, you might notice that this example JSON code looks an awful lot like a Python dictionary. In fact, the two are very similar: If you are comfortable with Python lists and dictionaries, then JSON is a natural fit for you.
### Storing data in JSON format
You might consider using JSON if your application needs to store somewhat complex data. While you may have previously resorted to custom text configuration files or data formats, JSON offers you structured, recursive storage, and Python's JSON module offers all of the parsing libraries necessary for getting this data in and out of your application. So, you don't have to write parsing code yourself, and other programmers don't have to decode a new data format when interacting with your application. For this reason, JSON is easy to use, and ubiquitous.
Here is some sample Python code using a dictionary within a dictionary:
```
#!/usr/bin/env python3
import json
# instantiate an empty dict
team = {}
# add a team member
team['tux'] = {'health': 23, 'level': 4}
team['beastie'] = {'health': 13, 'level': 6}
team['konqi'] = {'health': 18, 'level': 7}
```
This code creates a Python dictionary called **team**. It's empty initially (you can create one that's already populated, but that's impossible if you don't have the data to put into the dictionary yet).
To add to the **dict** object, you create a key, such as **tux**, **beastie**, or **konqi** in the example code, and then provide a value. In this case, the value is _another_ dictionary full of player statistics.
Dictionaries are mutable. You can add, remove, and update the data they contain as often as you please. This format is ideal storage for data that your application frequently uses.
### Saving data in JSON format 
If the data you're storing in your dictionary is user data that needs to persist after the application quits, then you must write the data to a file on disk. This is where the JSON Python module comes in:
```
with open('mydata.json', 'w') as f:
    json.dump(team, f)
```
This code block creates a file called **mydata.json** and opens it in write mode. The file is represented with the variable **f** (a completely arbitrary designation; you can use whatever variable name you like, such as **file**, **FILE**, **output**, or practically anything). Meanwhile, the JSON module's **dump** function is used to dump the data from the **dict** into the data file.
Saving data from your application is as simple as that, and the best part about this is that the data is structured and predictable. To see for yourself, take a look at the resulting file:
```
$ cat mydata.json
{"tux": {"health": 23, "level": 4}, "beastie": {"health": 13, "level": 6}, "konqi": {"health": 18, "level": 7}}
```
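That single-line output is compact, but **dump** and its string-returning sibling **dumps** also accept an **indent** argument when you want human-readable files. A small sketch:

```python
import json

team = {"tux": {"health": 23, "level": 4}}

# indent=2 pretty-prints the nested structure, one key per line
print(json.dumps(team, indent=2))
```

The indented form parses back to exactly the same data, so the choice is purely about readability versus file size.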
### Reading data from a JSON file
If you are saving data to JSON format, you probably want to read the data back into Python eventually. To do this, use the Python JSON module's **json.load** function:
```
#!/usr/bin/env python3
import json
f = open('mydata.json')
team = json.load(f)
print(team['tux'])
print(team['tux']['health'])
print(team['tux']['level'])
print(team['beastie'])
print(team['beastie']['health'])
print(team['beastie']['level'])
# when finished, close the file
f.close()
```
This function implements the inverse, more or less, of saving the file: an arbitrary variable (**f**) represents the data file, and then the JSON module's **load** function reads the data from the file into the arbitrary **team** variable.
The **print** statements in the code sample demonstrate how to use the data. It can be confusing to compound **dict** key upon **dict** key, but as long as you are familiar with your own dataset, or else can read the JSON source to get a mental map of it, the logic makes sense.
Of course, the **print** statements don't have to be hard-coded. You could rewrite the sample application using a **for** loop:
```
for i in team.values():
    print(i)
```
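If you need the player names as well as their statistics, iterate over **team.items()** instead, which yields each key together with its value:

```
team = {
    'tux': {'health': 23, 'level': 4},
    'beastie': {'health': 13, 'level': 6},
}

# items() yields (key, value) pairs in insertion order
for name, stats in team.items():
    print(name, 'is at level', stats['level'])
```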
### Using JSON
As you can see, JSON integrates surprisingly well with Python, so it's a great format when your data fits in with its model. JSON is flexible and simple to use, and learning one basically means you're learning the other, so consider it for data storage the next time you're working on a Python application.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/save-and-load-data-python-json
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg (Cloud and database icons)
[2]: https://json.org

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Security scanning your DevOps pipeline)
[#]: via: (https://opensource.com/article/19/7/security-scanning-your-devops-pipeline)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
Security scanning your DevOps pipeline
======
A hands-on introduction to container security using Anchore with Jenkins
on Kubernetes.
![Target practice][1]
Security is one of the most important considerations for running in any environment, and using open source software is a great way to handle security without going over budget in your corporate environment or for your home setup. It is easy to talk about the concepts of security, but it's another thing to understand the tools that will get you there. This tutorial explains how to set up security using [Jenkins][2] with [Anchore][3].
There are many ways to run [Kubernetes][4]. Using [Minikube][5], a prepackaged virtual machine (VM) environment designed for local testing, reduces the complexity of running an environment.
Technology | What is it?
---|---
[Jenkins][2] | An open source automation server
[Anchore][3] | A centralized service for inspection, analysis, and certification of container images
[Minikube][5] | A single-node Kubernetes cluster inside a VM
In this tutorial, you'll learn how to add Jenkins and Anchore to Kubernetes and configure a scanning pipeline for new container images and registries.
_Note: For best performance in this tutorial, Minikube requires at least four CPUs._
### Basic requirements
#### Knowledge
* Docker (including a [Docker Hub][6] account)
* Minikube
* Jenkins
* Helm
* Kubectl
#### Software
* Minikube
* Helm
* Kubectl client
* Anchore CLI installed locally
### Set up the environment
[Install Minikube][7] in whatever way that makes sense for your environment. If you have enough resources, I recommend giving a bit more than the default memory and CPU power to your VM:
```
$ minikube config set memory 8192
⚠️  These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 4
⚠️  These changes will take effect upon a minikube delete and then a minikube start
```
If you are already running a Minikube instance, you must delete it using **minikube delete** before continuing.
Next, [install Helm][8], the standard Kubernetes package manager, in whatever way makes sense for your operating system.
Now you're ready to install the applications.
### Install and configure Anchore and Jenkins
To begin, start Minikube and its dashboard.
```
$ minikube start
😄  minikube v1.1.0 on darwin (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
 Waiting for SSH access ...
🐳  Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
🔄  Relaunching Kubernetes v1.14.2 using kubeadm ...
 Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
$ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:52646/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...
```
As long as you stay connected to this terminal session, you will have access to a visual dashboard for Minikube at **127.0.0.1:52646**.
![Minikube dashboard][9]
 
### Create namespace and install Jenkins
The next step is to get the Jenkins build environment up and running. To start, ensure your storage is configured for persistence so you can reuse it later. Set the storage class for **Persistent Volumes** before you install Helm so the Jenkins installation will persist across reboots.
Either exit the dashboard using CTRL+C or open a new terminal to run:
```
$ minikube addons enable default-storageclass
 default-storageclass was successfully enabled
```
**Using namespaces**
I test quite a few different applications, and I find it incredibly helpful to use [namespaces][10] in Kubernetes. Leaving everything in the default namespace can overcrowd it and make it challenging to uninstall a Helm-installed application. If you stick to this for Jenkins, you can remove it by running **helm del --purge jenkins --namespace jenkins** then **kubectl delete ns jenkins**. This is much easier than manually hunting and pecking through a long list of containers.
### Install Helm
To use Helm, Kubernetes' default package manager, initialize an environment and install Jenkins.
```
$ kubectl create ns jenkins
namespace "jenkins" created
$ helm init
Creating /Users/alleycat/.helm
Creating /Users/alleycat/.helm/repository
Creating /Users/alleycat/.helm/repository/cache
Creating /Users/alleycat/.helm/repository/local
Creating /Users/alleycat/.helm/plugins
Creating /Users/alleycat/.helm/starters
Creating /Users/alleycat/.helm/cache/archive
Creating /Users/alleycat/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/alleycat/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm install --name jenkins stable/jenkins --namespace jenkins
NAME:   jenkins
LAST DEPLOYED: Tue May 28 11:12:39 2019
NAMESPACE: jenkins
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
jenkins        5     0s
jenkins-tests  1     0s
==> v1/Deployment
NAME     READY  UP-TO-DATE  AVAILABLE  AGE
jenkins  0/1    1           0          0s
==> v1/PersistentVolumeClaim
NAME     STATUS   VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
jenkins  Pending  standard  0s
==> v1/Pod(related)
NAME                      READY  STATUS   RESTARTS  AGE
jenkins-7565554b8f-cvhbd  0/1    Pending  0         0s
==> v1/Role
NAME                     AGE
jenkins-schedule-agents  0s
==> v1/RoleBinding
NAME                     AGE
jenkins-schedule-agents  0s
==> v1/Secret
NAME     TYPE    DATA  AGE
jenkins  Opaque  2     0s
==> v1/Service
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
jenkins        LoadBalancer  10.96.90.0    <pending>    8080:32015/TCP  0s
jenkins-agent  ClusterIP     10.103.85.49  <none>       50000/TCP       0s
==> v1/ServiceAccount
NAME     SECRETS  AGE
jenkins  1        0s
NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of by running 'kubectl get svc --namespace jenkins -w jenkins'
  export SERVICE_IP=$(kubectl get svc --namespace jenkins jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
```
Note the Bash one-liner above that begins with **printf**; it lets you query for the [default Jenkins password][11], which can be challenging to find otherwise. Take note of the password and save it for later.
### Set up port forwarding to log into the UI
Now that you've installed Minikube and Jenkins, log in to configure Jenkins. You'll need the Pod name for port forwarding:
```
$ kubectl get pods --namespace jenkins
NAME                       READY     STATUS    RESTARTS   AGE
jenkins-7565554b8f-cvhbd   1/1       Running   0          9m
```
Run the following to set up port forwarding (using your Jenkins pod name, which will be different from mine below):
```
# verify your pod name from the namespace named jenkins
kubectl get pods --namespace jenkins
NAME                       READY     STATUS    RESTARTS   AGE
jenkins-7565554b8f-cvhbd   1/1       Running   0          37m
# then forward it
$ kubectl port-forward jenkins-7565554b8f-cvhbd 8088:8080 -n jenkins
Forwarding from 127.0.0.1:8088 -> 8080
Forwarding from [::1]:8088 -> 8080
```
Note that the port-forwarding command blocks, so you will need additional terminal tabs for later commands. Leave this tab open to maintain your port-forwarding session.
Navigate to Jenkins in your preferred browser by going to **localhost:8088**. The default username is **admin** and the password is stored in Kubernetes Secrets. Use the command at the end of the **helm install jenkins** step:
```
$ printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
Jfstacz2vy
```
After logging in, the UI will display **Welcome to Jenkins!**
![Jenkins UI][12]
From here we'll have to install some plugins to Jenkins for our pipeline to work properly. From the main page, choose **Manage Jenkins** on the left-hand side.
![][13]
Then choose **Manage Plugins**.
![][14]
Then choose the **Available** tab.
![][15]
Then check the boxes beside the plugins shown below.
![][16]
![][17]
Once you have checked the boxes, scroll to the bottom of the page and choose **Install without Restart**.
![][18]
#### Deploy Anchore
[Anchore Engine][19] "is an open source project that provides a centralized service for inspection, analysis, and certification of container images." Deploy it within Minikube to do some security inspection on your Jenkins pipeline. Add a security namespace for the Helm install, then run an installation:
```
$ kubectl create ns security
namespace "security" created
$ helm install --name anchore-engine stable/anchore-engine --namespace security
NAME:   anchore-engine
LAST DEPLOYED: Wed May 29 12:22:25 2019
NAMESPACE: security
STATUS: DEPLOYED
## And a lot more output
```
Confirm that the service is up and running with this command:
```
kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=${ANCHORE_CLI_PASS} --env ANCHORE_CLI_URL=http://anchore-engine-anchore-engine-api.security.svc.cluster.local:8228/v1/
If you don't see a command prompt, try pressing enter.
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$
```
If you are logged into an Anchore container (similar to above), then the system is online. The default credentials for Anchore are **admin/foobar**. Type **exit** to leave the terminal.
Use port forwarding again to access the Anchore Engine API from your host system:
```
$ kubectl get pods --namespace security
NAME                                                         READY     STATUS    RESTARTS   AGE
anchore-engine-anchore-engine-analyzer-7cf5958795-wtw69      1/1       Running   0          3m
anchore-engine-anchore-engine-api-5c4cdb5587-mxkd7           1/1       Running   0          3m
anchore-engine-anchore-engine-catalog-648fcf54fd-b8thl       1/1       Running   0          3m
anchore-engine-anchore-engine-policy-7b78dd57f4-5dwsx        1/1       Running   0          3m
anchore-engine-anchore-engine-simplequeue-859c989f99-5dwgf   1/1       Running   0          3m
anchore-engine-postgresql-844dfcc468-s92c5                   1/1       Running   0          3m
# Find the API pod name above and add it to the command below
$ kubectl port-forward anchore-engine-anchore-engine-api-5c4cdb5587-mxkd7 8228:8228 --namespace security
```
### Join Anchore and Jenkins
Go back to the Jenkins UI at **<http://127.0.0.1:8088/>**. On the main menu, click **Manage Jenkins &gt; Manage Plugins**. Choose the **Available** tab, then scroll down or search for the **Anchore Container Image Scanner Plugin**. Check the box next to the plugin and choose **Install without restart**.
![Jenkins plugin manager][20]
Once the installation completes, go back to the main menu in Jenkins and choose **Manage Jenkins**, then **Configure System**. Scroll down to **Anchore Configuration**. Confirm **Engine Mode** is selected and a URL is entered, which is output from the Helm installation. Add the username and password (default **admin/foobar**). For debugging purposes, check **Enable DEBUG logging**.
![Anchore plugin mode][21]
Now that the plugin is configured, you can set up a Jenkins pipeline to scan your container builds.
### Jenkins pipeline and Anchore scanning
The purpose of this setup is to be able to inspect container images on the fly to ensure they meet security requirements. To do so, use Anchore Engine and give it permission to access your images. In this example, they are on Docker Hub, but they could also be on Quay or any other [container registry supported by Anchore][22]. 
In order to run the necessary commands on the command line, we need to find our Anchore pod name, then open a shell in it using **kubectl exec**:
```
$ kubectl get all
NAME                               READY     STATUS    RESTARTS   AGE
pod/anchore-cli-86d7fd9568-rmknw   1/1       Running   2          2d
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/anchore-cli   1         1         1            1           2d
NAME                                     DESIRED   CURRENT   READY     AGE
replicaset.apps/anchore-cli-86d7fd9568   1         1         1         2d
# Let's connect to our anchore-cli pod
$ kubectl exec -it anchore-cli-86d7fd9568-rmknw -- bash
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar registry add index.docker.io <username> <password>
Registry: index.docker.io
User: jrepka
Type: docker_v2
Verify TLS: True
Created: 2019-05-14T22:37:59Z
Updated: 2019-05-14T22:37:59Z
```
Anchore Engine is now ready to work with your registry. There are [several ways][23] it can do so, including:
* Analyzing images
* Inspecting image content
* Scanning repositories
* Viewing security vulnerabilities
Point Anchore Engine toward an image to analyze it against your policy. For our testing, we'll use the publicly available [Cassandra][24] image:
```
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar image add docker.io/library/cassandra:latest
Image Digest: sha256:7f7afff84384e36593b085d62e087674029de9aced4482c7780f155d8ee55fad
Parent Digest: sha256:800084987d58c2a62daeea4662ecdd79fd4928d449279bd410ef7690ef482469
Analysis Status: not_analyzed
Image Type: docker
Analyzed At: None
Image ID: a34c036183d18527684cdb613fbb1c806c7e1bc26f6911dcc25e918aa7b093fc
Dockerfile Mode: None
Distro: None
Distro Version: None
Size: None
Architecture: None
Layer Count: None
Full Tag: docker.io/library/cassandra:latest
Tag Detected At: 2019-07-09T17:44:45Z
```
You will also need to grab a default policy ID to test against for your pipeline. (In a future article, I will go into customizing policy and whitelist rules.)
Run the following command to get the policy ID:
```
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin  --p foobar policy list
Policy ID                                   Active        Created                     Updated                    
2c53a13c-1765-11e8-82ef-23527761d060        True          2019-05-14T22:12:05Z        2019-05-14T22:12:05Z
```
Now that you have added a registry and the image you want, you can build a pipeline to scan it continuously.
Scanning works in this order: **Build, push, scan.** To prevent images that do not meet security requirements from making it into production, I recommend a tiered approach to security scanning: promote a container image to a separate development environment and promote it to production only once it passes the Anchore Engine's scan.
We can't do anything too exciting until we configure our custom policy, so we will make sure a scan completes successfully by running a Hello World version of it. Below is an example workflow written in Groovy:
```
node {
   echo 'Hello World'
}
```
To run this code, log back into the Jenkins UI at **localhost:8088**, choose **New Item**, select **Pipeline**, then place this code block into the **Pipeline Script** area.
![The "Hello World" of Jenkins][25]
It will take some time to complete since we're building the entire Cassandra image added above. You will see a blinking red icon in the meantime. 
![Jenkins building][26]
And it will eventually finish and pass. That means we have set everything up correctly.
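As a preview of where this is headed, here is a hedged sketch of what a scan stage could look like once a real image is being built. The parameters follow the Anchore Container Image Scanner plugin's documented **anchore** step, which reads image names (one per line) from a file in the workspace; the image name below is a placeholder, not something from this tutorial's setup:

```
node {
    // Placeholder image name; replace with an image you have pushed
    def imageLine = 'docker.io/<username>/<image>:latest'
    // The Anchore plugin reads images to scan from this file
    writeFile file: 'anchore_images', text: imageLine
    // Run the scan against the images listed in that file
    anchore name: 'anchore_images'
}
```

The build fails or passes according to the policy evaluation, which is exactly the gate the tiered build-push-scan approach above relies on.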
### That's a wrap
If you made it this far, you have a running Minikube configuration with Jenkins and Anchore Engine. You also have one or more images hosted on a container registry service and a way for Jenkins to show errors when images don't meet the default policy. In the next article, we will build a custom pipeline that verifies security policies set by Anchore Engine.
Anchore can also be used to scan large-scale Amazon Elastic Container Registries (ECRs), as long as the credentials are configured properly in Jenkins.
### Other resources
This is a lot of information for one article. If you'd like more details, the following links (which include my GitHub for all the examples in this tutorial) may help:
* [Anchore scan example][27]
* [Anchore Engine][28]
* [Running Kubernetes locally via Minikube][5]
* [Jenkins Helm Chart][29]
Are there any specific pipelines you want me to build in the next tutorial? Let me know in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/target-security.png?itok=Ca5-F6GW (Target practice)
[2]: https://jenkins.io/
[3]: https://anchore.com/
[4]: https://opensource.com/resources/what-is-kubernetes
[5]: https://kubernetes.io/docs/setup/minikube/
[6]: https://hub.docker.com/
[7]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[8]: https://helm.sh/docs/using_helm/#installing-helm
[9]: https://opensource.com/sites/default/files/uploads/minikube-dashboard.png (Minikube dashboard)
[10]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[11]: https://opensource.com/article/19/6/jenkins-admin-password-helm-kubernetes
[12]: https://opensource.com/sites/default/files/uploads/welcometojenkins.png (Jenkins UI)
[13]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.52.06_pm.png
[14]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.52.30_pm.png
[15]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.59.20_pm.png
[16]: https://opensource.com/sites/default/files/resize/lead-images/screen_shot_2019-06-14_at_8.26.55_am-500x288.png
[17]: https://opensource.com/sites/default/files/resize/lead-images/screen_shot_2019-06-14_at_8.26.25_am-500x451.png
[18]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_5.05.10_pm.png
[19]: https://github.com/anchore/anchore-engine
[20]: https://opensource.com/sites/default/files/uploads/jenkins-install-without-restart.png (Jenkins plugin manager)
[21]: https://opensource.com/sites/default/files/uploads/anchore-configuration.png (Anchore plugin mode)
[22]: https://github.com/anchore/enterprise-docs/blob/master/content/docs/using/ui_usage/registries/_index.md
[23]: https://docs.anchore.com/current/docs/using/cli_usage/
[24]: http://cassandra.apache.org/
[25]: https://opensource.com/sites/default/files/articles/jenkins_hello_world_pipeline_opensourcecom.png (The "Hello World" of Jenkins)
[26]: https://opensource.com/sites/default/files/jenkins_build_opensourcecom.png (Jenkins building)
[27]: https://github.com/Alynder/anchore_example
[28]: https://github.com/anchore/anchore-engine/wiki
[29]: https://github.com/helm/charts/tree/master/stable/jenkins

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bond WiFi and Ethernet for easier networking mobility)
[#]: via: (https://fedoramagazine.org/bond-wifi-and-ethernet-for-easier-networking-mobility/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Bond WiFi and Ethernet for easier networking mobility
======
![][1]
Sometimes one network interface isn't enough. Network bonding allows multiple network connections to act together with a single logical interface. You might do this because you want more bandwidth than a single connection can handle. Or maybe you want to switch back and forth between your wired and wireless networks without losing your network connection.
The latter applies to me. One of the benefits of working from home is that when the weather is nice, it's enjoyable to work from a sunny deck instead of inside. But every time I did that, I lost my network connections: IRC, SSH, VPN, everything went away, at least for a moment while some clients reconnected. This article describes how I set up network bonding on my Fedora 30 laptop to seamlessly move from the wired connection on my laptop dock to a WiFi connection.
In Linux, interface bonding is handled by the bonding kernel module. Fedora does not ship with this enabled by default, but it is included in the kernel-core package. This means that enabling interface bonding is only a command away:
```
sudo modprobe bonding
```
Note that this only takes effect until you reboot. To enable interface bonding permanently, create a file called _bonding.conf_ in the _/etc/modules-load.d_ directory that contains only the word “bonding”.
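Creating that file can be done in one line. This is a sketch of the permanent setup (it needs root privileges, and any file name ending in _.conf_ under _/etc/modules-load.d_ works):

```
echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf
```

On the next boot, systemd reads this file and loads the bonding module automatically.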
Now that you have bonding enabled, it's time to create the bonded interface. First, you must get the names of the interfaces you want to bond. To list the available interfaces, run:
```
sudo nmcli device status
```
You will see output that looks like this:
```
DEVICE TYPE STATE CONNECTION
enp12s0u1 ethernet connected Wired connection 1
tun0 tun connected tun0
virbr0 bridge connected virbr0
wlp2s0 wifi disconnected --
p2p-dev-wlp2s0 wifi-p2p disconnected --
enp0s31f6 ethernet unavailable --
lo loopback unmanaged --
virbr0-nic tun unmanaged --
```
In this case, there are two (wired) Ethernet interfaces available. _enp12s0u1_ is on a laptop docking station, and you can tell that it's connected from the _STATE_ column. The other, _enp0s31f6_, is the built-in port in the laptop. There is also a WiFi connection called _wlp2s0_. _enp12s0u1_ and _wlp2s0_ are the two interfaces we're interested in here. (Note that it's not necessary for this exercise to understand how network devices are named, but if you're interested you can see the [systemd.net-naming-scheme man page][2].)
The first step is to create the bonded interface:
```
sudo nmcli connection add type bond ifname bond0 con-name bond0
```
In this example, the bonded interface is named _bond0_. The “_con-name bond0_” sets the connection name to _bond0_; leaving this off would result in a connection named _bond-bond0_. You can also set the connection name to something more human-friendly, like “Docking station bond” or “Ben”.
The next step is to add the interfaces to the bonded interface:
```
sudo nmcli connection add type ethernet ifname enp12s0u1 master bond0 con-name bond-ethernet
sudo nmcli connection add type wifi ifname wlp2s0 master bond0 ssid Cotton con-name bond-wifi
```
As above, the connection name is specified to be [more descriptive][3]. Be sure to replace _enp12s0u1_ and _wlp2s0_ with the appropriate interface names on your system. For the WiFi interface, use your own network name (SSID) where I use “Cotton”. If your WiFi connection has a password (and of course it does!), you'll need to add that to the configuration, too. The following assumes you're using [WPA2-PSK][4] authentication:
```
sudo nmcli connection modify bond-wifi wifi-sec.key-mgmt wpa-psk
sudo nmcli connection edit bond-wifi
```
The second command will bring you into the interactive editor where you can enter your password without it being logged in your shell history. Enter the following, replacing _password_ with your actual password:
```
set wifi-sec.psk password
save
quit
```
Now you're ready to start your bonded interface and the secondary interfaces you created:
```
sudo nmcli connection up bond0
sudo nmcli connection up bond-ethernet
sudo nmcli connection up bond-wifi
```
You should now be able to disconnect your wired or wireless connections without losing your network connections.
### A caveat: using other WiFi networks
This configuration works well when moving around on the specified WiFi network, but when away from this network, the SSID used in the bond is not available. Theoretically, one could add an interface to the bond for every WiFi connection used, but that doesn't seem reasonable. Instead, you can disable the bonded interface:
```
sudo nmcli connection down bond0
```
When back on the defined WiFi network, simply start the bonded interface as above.
### Fine-tuning your bond
By default, the bonded interface uses the “load balancing (round-robin)” mode. This spreads the load equally across the interfaces. But if you have a wired and a wireless connection, you may want to prefer the wired connection. The “active-backup” mode enables this. You can specify the mode and primary interface when you are creating the interface, or afterward using this command (the bonded interface should be down):
```
sudo nmcli connection modify bond0 +bond.options "mode=active-backup,primary=enp12s0u1"
```
The [kernel documentation][5] has much more information about bonding options.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/bond-wifi-and-ethernet-for-easier-networking-mobility/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/networkingmobility-816x345.jpg
[2]: https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html
[3]: https://en.wikipedia.org/wiki/Master/slave_(technology)#Terminology_concerns
[4]: https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#Target_users_(authentication_key_distribution)
[5]: https://www.kernel.org/doc/Documentation/networking/bonding.txt

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get going with EtherCalc, a web-based alternative to Google Sheets)
[#]: via: (https://opensource.com/article/19/7/get-going-ethercalc)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Get going with EtherCalc, a web-based alternative to Google Sheets
======
EtherCalc is an open source spreadsheet that makes it easy to work
remotely and collaborate with others.
![Open data brain][1]
Spreadsheets can be very useful—and not just for [managing your finances][2]. That said, desktop spreadsheets have their limitations. The biggest is that you need to be at your computer to use one. On top of that, collaborating on a spreadsheet can quickly become a messy affair.
Enter [EtherCalc][3], an open source, web-based spreadsheet. While not as fully featured as a desktop spreadsheet, EtherCalc packs enough features for most people.
Let's take a look at how to get started using it.
### Getting EtherCalc
If you're self-hosting, you can [download the code][4], get it through [Sandstorm.io][5], or use npm (the Node.js package manager) to install it on a server.
But what if you don't have a server? You can use one of the many hosted instances of EtherCalc—for example, at [EtherCalc.org][6], the [instance hosted][7] by the folks at [Framasoft][8], or use it through [Sandstorm Oasis][9].
### What can you use EtherCalc for?
Just about everything you'd use a desktop spreadsheet for. That could be to balance your budget, track your savings, record your income, schedule meetings, or take an inventory of your possessions.
I've used EtherCalc to track time on freelance projects, to create invoices for those projects, and even to share article ideas with my fellow [Opensource.com community moderators][10]. How you use EtherCalc is up to your needs and your imagination.
### Working with EtherCalc
The first step is to create a spreadsheet.
![Empty EtherCalc spreadsheet][11]
If you've used a desktop or web-based spreadsheet before, EtherCalc will look somewhat familiar. As with any spreadsheet, you type what you need in the cells on the sheet. That includes column headings, labels, and functions (more on those in a moment).
Before you do anything else, bookmark the URL to your spreadsheet. EtherCalc uses randomly generated URLs—for example, <https://ethercalc.org/9krfqj2en6cke>—which aren't easy to remember.
### Formatting your spreadsheet
To add formatting to your spreadsheet, highlight the cell or cells that you want to format and click the **Format** menu.
![EtherCalc's Format menu][12]
You can add borders and padding, change fonts and their attributes, align text, and change the format of numbers, for example to dates or currency formats. When you're done, click the **Save to:** button to apply the formatting.
### Adding functions
_Functions_ enable you to add data, manipulate data, and make calculations in a spreadsheet. They can do a lot more, too.
To add a function to your spreadsheet, click a cell. Then, click the **Function** button on the toolbar.
![EtherCalc Function button][13]
That opens a list of all the functions EtherCalc supports, along with a short description of what each function does.
![EtherCalc Functions list][14]
Select the function you want to use, then click **Paste**. EtherCalc adds the function, along with an opening parenthesis, to the cell. Type what you need to after the parenthesis, then type a closing parenthesis. For example, if you want to total up all the numbers in column B in the spreadsheet using the _=SUM()_ function, type _B1:B21_ and close the parenthesis.
![Entering a function in EtherCalc ][15]
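Written out, the completed cell formula from that example would look like this (the cell range is illustrative):

```
=SUM(B1:B21)
```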
You can also add functions by double-clicking in a cell and typing them. EtherCalc's documentation doesn't include a function reference. However, it does support [OpenFormula][16] (a standard for math formulas that spreadsheets support). If you're not familiar with spreadsheet functions, you can look up what you need in the [OpenFormula specification][17] or this handy dandy [reference to LibreOffice Calc's functions][18].
### Collaborating with others
Earlier this year, I worked with two friends on a content strategy project. I'm in New Zealand, one friend is in British Columbia, and the other is in Toronto. Since we were working across time zones, each of us needed access to the spreadsheet we were using to track and coordinate our work. Emailing a LibreOffice Calc file wasn't an option. Instead, we turned to EtherCalc, and it worked very well.
Collaborating with EtherCalc starts with sharing your spreadsheet's URL with your collaborators. You can tell when someone else is working on the spreadsheet by the blue border that appears around one or more cells.
![Collaborating in EtherCalc][19]
You and your collaborators can enter information into the spreadsheet simultaneously. All you need to remember is to respect the sanctity of those blue borders.
The **Comment** tab comes in handy when you need to ask a question, include additional information, or make a note to follow up on something. To add a comment, click the tab, and type what you need to type. When you're finished, click **Save**.
![Adding a comment in EtherCalc][20]
You can tell when a cell has a comment by the small red triangle in the top-right corner of the cell. Hold your mouse pointer over it to view the comment.
![Viewing a comment in EtherCalc][21]
### Final thoughts
EtherCalc doesn't do everything that, say, [LibreOffice Calc][22] or [Gnumeric][23] can do. And there's nothing wrong with that. In this case, the [80/20 rule][24] applies.
If you need a simple spreadsheet and one that you can work on with others, EtherCalc is a great choice. It takes a bit of getting used to, but once you do, you'll have no problems using EtherCalc.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/get-going-ethercalc
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitthttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opendata_0613mm.png?itok=UIjD_jhK (Open data brain)
[2]: https://opensource.com/article/17/8/budget-libreoffice-calc
[3]: https://ethercalc.net/
[4]: https://github.com/audreyt/ethercalc
[5]: https://sandstorm.io
[6]: https://ethercalc.org
[7]: https://accueil.framacalc.org/en/
[8]: https://opensource.com/article/18/8/framasoft
[9]: https://oasis.sandstorm.io/
[10]: https://opensource.com/community-moderator-program
[11]: https://opensource.com/sites/default/files/uploads/ethercalc-empty-spreadsheet.png (Empty EtherCalc spreadsheet)
[12]: https://opensource.com/sites/default/files/uploads/ethercalc-formatting.png (EtherCalc's Format menu)
[13]: https://opensource.com/sites/default/files/uploads/ethercalc-function.png (EtherCalc Function button)
[14]: https://opensource.com/sites/default/files/uploads/ethercalc-function-list.png (EtherCalc Functions list)
[15]: https://opensource.com/sites/default/files/uploads/ethercalc-function-example.png (Entering a function in EtherCalc )
[16]: https://en.wikipedia.org/wiki/OpenFormula
[17]: https://docs.oasis-open.org/office/v1.2/os/OpenDocument-v1.2-os-part2.html
[18]: https://help.libreoffice.org/Calc/Functions_by_Category
[19]: https://opensource.com/sites/default/files/uploads/ethercalc-collaborators.png (Collaborating in EtherCalc)
[20]: https://opensource.com/sites/default/files/uploads/ethercalc-add-comment.png (Adding a comment in EtherCalc)
[21]: https://opensource.com/sites/default/files/uploads/ethercalc-view-comment.png (Viewing a comment in EtherCalc)
[22]: https://www.libreoffice.org/discover/calc/
[23]: http://www.gnumeric.org/
[24]: https://en.wikipedia.org/wiki/Pareto_principle
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install Kibana on MacOS)
[#]: via: (https://opensource.com/article/19/7/installing-kibana-macos)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
How to install Kibana on MacOS
======
Once you have Elasticsearch installed, the Kibana plugin adds
visualization to this powerful search tool.
![Analytics: Charts and Graphs][1]
In my previous post, I walked Mac users through the steps they'll take to [install Elasticsearch][2], the world's most popular enterprise search engine. (Here's a [separate article][3] for Linux users.) Its natural language processing power makes Elasticsearch excel at finding details within datasets. Once you've discovered the data you need, you can take it to the next level if you've installed [Kibana][4] as well.
Kibana is an open source data visualization plugin for Elasticsearch. Once you've found data in Elasticsearch, Kibana helps you put it into line charts, [time series queries][5], geospatial maps, and more. This tool is ideal for data scientists who must present their research results, especially those working with open source data.
### Installing Kibana
You'll need to install Kibana separately from Elasticsearch. Since I installed Elasticsearch 7.1.1, I'll install Kibana 7.1.1. It's important to match versions so Kibana runs against an Elasticsearch node of the same version. (Kibana runs on **node.js**.)
Here are the steps I followed to install Kibana 7.1.1 for MacOS:
1. Make sure Elasticsearch is downloaded and running. See the previous article for instructions if needed.
**Note**: At minimum, you'll need to install Elasticsearch 1.4.4 or later before you can use Kibana. This is because you'll need to give Kibana the URL for the Elasticsearch instance to connect to, along with the Elasticsearch indices you want to search. In general, it's best to install the latest versions of both.
2. Click [here][6] to download Kibana. You'll see the webpage below, which prompts you to download Kibana for Mac in the top right-hand corner of the **Downloads** section:
![Download Kibana here.][7]
3. In your **Downloads** folder, open the **.tar** file to expand it. This action creates a folder with the same name (for example, **kibana-7.1.1-darwin-x86_64**).
4. If you would prefer Kibana to live in another folder, move it now.
Double-check that Elasticsearch is running, and if not, launch it before continuing. (See the previous article if you need instructions.) 
### Opening the Kibana plugin
With Elasticsearch running, you can now launch Kibana. The process is similar to launching Elasticsearch:
1. From your Mac's **Downloads** folder (or the new folder if you moved Kibana), open the Kibana folder (i.e., **~/Downloads/kibana-7.1.1-darwin-x86_64**).
2. Open the **bin** subfolder.
![The Kibana bin folder.][8]
3. Run **kibana-plugin**. You may run into the same security warning that came up in the previous article:
![Security warning][9]
In general, if you get this warning, follow the instructions in that article to clear the warning and open Kibana. Note that if I try to open the plugin without Elasticsearch running in the terminal, I get this same security warning. To fix this, I open Elasticsearch and run it in the terminal as described in the previous article. Launching Elasticsearch with the GUI should open the terminal as well.
Then, I right-clicked on **kibana-plugin** and selected **Open**. This solution worked for me, but you might need to try a few times. Several people in my Elasticsearch meetup had some trouble opening Kibana on their devices.
### Changing Kibana's host and port numbers
Kibana's default settings configure it to run on **localhost:5601**. You'll need to update the file (in this example) **~/Downloads/kibana-7.1.1-darwin-x86_64/config/kibana.yml** to change the host or port number before you run Kibana.
![The Kibana config directory.][10]
Here's what the terminal looked like when my Elasticsearch meetup group configured Kibana so it would default to **<http://localhost:9200>**, which is the URL to use when querying your Elasticsearch instances:
![Configuring Kibana's host and port connections.][11]
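For reference, here is a hedged sketch of what those settings look like inside **kibana.yml**. The key names match Kibana 7.x defaults, but the values below are examples only:

```
# kibana.yml -- example values; adjust to your own setup
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
```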
### Running Kibana from the command line
Once you've opened the plugin, you can run Kibana from the command line or from the GUI. Here's what the terminal looked like once it connected to Elasticsearch:
![Kibana running once it's connected to Elasticsearch.][12]
Like Elasticsearch, Kibana runs in the foreground by default. You can stop it by pressing **Ctrl-C**.
### Wrapping up
Elasticsearch and Kibana are large packages that take up a fair amount of storage. With so many people downloading both packages at once, my fellow Elasticsearch meetup members and I had to wait an average of several minutes for both of them to download. This might have been due to poor WiFi and/or too many users at once, but keep this possibility in mind if the same thing happens to you.
After that, I couldn't upload the JSON file we were using due to low storage on my laptop. I was able to follow the instructor's visualizations, but couldn't use Kibana myself in real time. So, before you download Elasticsearch and Kibana, make sure there's enough room (at least a few gigabytes) on your device to upload and search files with these tools.
To learn more about Kibana, the [Introduction][13] in its user guide is ideal. (You can switch the guide to match the version of Kibana you're using.) The demo also shows you how to [build a dashboard in minutes][14] and then make your first deployment.
Have fun!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/installing-kibana-macos
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
[2]: https://opensource.com/article/19/7/installing-elasticsearch-macos
[3]: https://opensource.com/article/19/7/installing-elasticsearch-and-kibana-linux
[4]: https://www.elastic.co/products/kibana
[5]: https://en.wikipedia.org/wiki/Time_series
[6]: https://www.elastic.co/downloads/kibana
[7]: https://opensource.com/sites/default/files/uploads/download_kibana.png (Download Kibana here.)
[8]: https://opensource.com/sites/default/files/uploads/kibana_bin_folder.png (The Kibana bin folder.)
[9]: https://opensource.com/sites/default/files/uploads/security_warning.png (Security warning)
[10]: https://opensource.com/sites/default/files/uploads/kibana_config_directory.png (The Kibana config directory.)
[11]: https://opensource.com/sites/default/files/uploads/kibana_host_port_config.png (Configuring Kibana's host and port connections.)
[12]: https://opensource.com/sites/default/files/uploads/kibana_running.png (Kibana running once it's connected to Elasticsearch.)
[13]: https://www.elastic.co/guide/en/kibana/7.2/introduction.html
[14]: https://www.elastic.co/webinars/getting-started-kibana?baymax=rtp&elektra=docs&storm=top-video&iesrc=ctr
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mastering user groups on Linux)
[#]: via: (https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Mastering user groups on Linux
======
Managing user groups on Linux systems is easy, but the commands can be more flexible than you might be aware.
![Scott 97006 \(CC BY 2.0\)][1]
User groups play an important role on Linux systems. They provide an easy way for select groups of users to share files with each other. They also allow sysadmins to more effectively manage user privileges, since they can assign privileges to groups rather than individual users.
While a user group is generally created whenever a user account is added to a system, there's still a lot to know about how they work and how to work with them.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
### One user, one group?
Most user accounts on Linux systems are set up with the user and group names the same. The user "jdoe" will be set up with a group named "jdoe" and will be the only member of that newly created group. The user's login name, user id, and group id will be added to the **/etc/passwd** and **/etc/group** files when the account is added, as shown in this example:
```
$ sudo useradd jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:Jane Doe:/home/jdoe:/bin/sh
$ grep jdoe /etc/group
jdoe:x:1066:
```
The values in these files allow the system to translate between the text (jdoe) and numeric (1066) versions of the user id — jdoe is 1066 and 1066 is jdoe.
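The translation works because each line in these files is a simple colon-separated record. As a quick sketch, you can pull the fields apart with **awk**; this uses the sample entry above rather than a live account:

```shell
# Hedged sketch: extract the name, UID, and GID fields from a
# passwd-format line (fields are colon-separated).
line='jdoe:x:1066:1066:Jane Doe:/home/jdoe:/bin/sh'
printf '%s\n' "$line" | awk -F: '{print $1 " -> uid " $3 ", gid " $4}'
# jdoe -> uid 1066, gid 1066
```

The same field positions apply to every entry, which is why tools like **grep** and **awk** are all you need to inspect these files.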
The assigned UID (user id) and GID (group id) for each user are generally the same and configured sequentially. If Jane Doe in the above example were the most recently added user, the next new user would likely be assigned 1067 as their user and group IDs.
### GID = UID?
UIDs and GIDs can get out of sync. For example, if you add a group using the **groupadd** command without specifying a group id, your system will assign the next available group id (in this case, 1067). The next user to be added to the system would then get 1067 as a UID but 1068 as a GID.
You can avoid this issue by specifying a smaller group id when you add a group rather than going with the default. In this command, we add a new group and provide a GID that is smaller than the range used for user accounts.
```
$ sudo groupadd -g 500 devops
```
If it works better for you, you can specify a shared group when you create accounts. For example, you might want to assign new development staff members to a shared group like **staff** instead of putting each one in their own group.
```
$ sudo useradd -g staff bennyg
$ grep bennyg /etc/passwd
bennyg:x:1064:50::/home/bennyg:/bin/sh
```
### Primary and secondary groups
There are actually two types of groups — primary and secondary.
The **primary group** is the one that's recorded in the **/etc/passwd** file, configured when an account is set up. When a user creates a file, it's their primary group that is associated with it.
```
$ whoami
jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:Jane Doe:/home/jdoe:/bin/bash
^
|
+-------- primary group
$ touch newfile
$ ls -l newfile
-rw-rw-r-- 1 jdoe jdoe 0 Jul 16 15:22 newfile
^
|
+-------- primary group
```
**Secondary groups** are those that users might be added to once they already have accounts. Secondary group memberships show up in the /etc/group file.
```
$ grep devops /etc/group
devops:x:500:shs,jadep
^
|
+-------- secondary group for shs and jadep
```
The **/etc/group** file assigns names to user groups (e.g., 500 = devops) and records secondary group members.
### Preferred convention
The convention of having each user a member of their own group and optionally a member of any number of secondary groups allows users to more easily separate files that are personal from those they need to share with co-workers. When a user creates a file, members of the various user groups they belong to don't necessarily have access. A user will have to use the **chgrp** command to associate a file with a secondary group.
### There's no place like /home
One important detail when adding a new account is that the **useradd** command does not necessarily add a home directory for a new user. If you want this step to be taken only some of the time, you can add **-m** (think of this as the “make home” option) with your useradd commands.
```
$ sudo useradd -m -g devops -c "John Doe" jdoe2
```
The options in this command:
* **-m** creates the home directory and populates it with start-up files
* **-g** specifies the group to assign the user to
  * **-c** adds a descriptor for the account (usually the person's name)
If you want a home directory to be created _all_ of the time, you can change the default behavior by editing the **/etc/login.defs** file. Change or add a setting for the CREATE_HOME variable and set it to “yes”:
```
$ grep CREATE_HOME /etc/login.defs
CREATE_HOME yes
```
Another option is to set yourself up with an alias so that **useradd** always uses the -m option.
```
$ alias useradd='useradd -m'
```
Make sure you add the alias to your ~/.bashrc or similar start-up file to make it permanent.
### Looking into /etc/login.defs
Here's a command to list all the settings in the **/etc/login.defs** file. The **grep** commands are hiding comments and blank lines.
```
$ cat /etc/login.defs | grep -v "^#" | grep -v "^$"
MAIL_DIR /var/mail
FAILLOG_ENAB yes
LOG_UNKFAIL_ENAB no
LOG_OK_LOGINS no
SYSLOG_SU_ENAB yes
SYSLOG_SG_ENAB yes
FTMP_FILE /var/log/btmp
SU_NAME su
HUSHLOGIN_FILE .hushlogin
ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV_PATH PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
TTYGROUP tty
TTYPERM 0600
ERASECHAR 0177
KILLCHAR 025
UMASK 022
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_WARN_AGE 7
UID_MIN 1000
UID_MAX 60000
GID_MIN 1000
GID_MAX 60000
LOGIN_RETRIES 5
LOGIN_TIMEOUT 60
CHFN_RESTRICT rwh
DEFAULT_HOME yes
CREATE_HOME yes <===
USERGROUPS_ENAB yes
ENCRYPT_METHOD SHA512
```
Notice that the various settings in this file determine the range of user IDs to be used, along with password aging and other settings (e.g., umask).
### How to display a user's groups
Users can be members of multiple groups for various reasons. Group membership gives a user access to group-owned files and directories, and sometimes this behavior is critical. To generate a list of the groups that some user belongs to, use the **groups** command.
```
$ groups jdoe
jdoe : jdoe adm admin cdrom sudo dip plugdev lpadmin staff sambashare
```
You can list your own groups by typing “groups” without an argument.
### How to add users to groups
If you want to add an existing user to another group, you can do that with a command like this:
```
$ sudo usermod -a -G devops jdoe
```
You can also add a user to multiple groups by specifying the groups in a comma-separated list:
```
$ sudo usermod -a -G devops,mgrs jdoe
```
The **-a** argument means “add” while **-G** lists the groups.
You can remove a user from a group by editing the **/etc/group** file and removing the username from the list. The **gpasswd** command can also do this without an editor (e.g., **sudo gpasswd -d jdoe devops** removes jdoe from the devops group).
```
fish:x:16:nemo,dory,shark
|
V
fish:x:16:nemo,dory
```
### Wrap-up
Adding and managing user groups isn't particularly difficult, but consistency in how you configure accounts can make it easier in the long run.
**[ Now see: [Must-know Linux Commands][3] ]**
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/carrots-100801917-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Start tinkering with the Circuit Playground Express)
[#]: via: (https://opensource.com/article/19/7/circuit-playground-express)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Start tinkering with the Circuit Playground Express
======
Learn what you can do with these tiny gadgets and a bit of Python code.
![Tools in a cloud][1]
I've been a gadget person as long as I can remember, so I was delighted when I discovered an Adafruit [Circuit Playground Express][2] (CPX) in the swag bag I got at [PyConUS][3] in May. I became fascinated with these little devices last year, when Nina Zakharenko highlighted them in her All Things Open presentation, [Five Things You Didn't Know Python Could Do][4], with Python-powered earrings.
After finding one in my PyCon bag, I set out to learn more about these mesmerizing little devices. First, I attended a "how-to" session at one of the Open Spaces meetups at PyCon. But learning always requires hands-on practice, and that's what I did when I got home. I connected the CPX device to my Linux laptop with a USB-to-MicroUSB cable. The unit mounts just like any standard USB drive, listed as CIRCUITPY.
![Circuit Playground Express mounted as USB drive][5]
The CPX works on MacOS, [Windows][6], and Linux (including [Chromebooks][7]). The device comes pre-loaded with code and some sound files. [Adafruit][8]'s extremely well-written documentation answered most of my questions. I discovered the unit can be programmed on Linux three different ways: [MakeCode][9], the [Arduino IDE][10], and the Python-based [CircuitPython][11], which I chose.
Adafruit provides excellent documentation for [creating and editing CircuitPython code][12], which I found helpful. You can use a variety of editors (e.g., Emacs, Visual Studio Code, gedit), but Adafruit recommends the [Mu Python editor][13], which I [wrote about][14] last year. I [installed Mu][15] on my system with **pip3 install --user mu-editor**. Then I opened a terminal and entered **mu-editor**. It asked me how to run Mu, and I chose Adafruit Circuit Python. Then I was able to look at the code that powers the CPX.
![Selecting CircuitPython mode to run Mu][16]
To open a connection between Mu and the CPX connected to your computer, press the Serial button in Mu. Then you can see any serial data from the CPX and edit it using Python's REPL shell.
Adafruit's programmers have written a library called **adafruit_circuitplayground.express** that enables CircuitPython to interact with the CPX board. To use it, add **import adafruit_circuitplayground.express** to your code. Or, to make it simpler, you can use the acronym **cpx**, shortening the code (as shown below) to **from adafruit_circuitplayground.express import cpx**.
![Importing Adafruit's CPX library][17]
The way you name your file is essential. The four options are code.txt, code.py, main.txt, and main.py. CircuitPython looks for the code files in that order and runs the first one it finds. Save the code to your CIRCUITPY drive each time you change it.
The main.py code included with a new CPX offers an example of the device's capabilities.
![CPX's default main.py][18]
When you execute this code, the CPX displays beautiful, brightly colored LEDs whirling in a rainbow of colors. With my rudimentary knowledge, I could tweak a few settings, like increasing the brightness and turning on the TOUCH_PIANO capability, but other modifications were beyond my coding ability at this point.
Eager to do more, I wanted to find code snippets I could use as building blocks to learn. First, I reached out to [Nina Zakharenko][19] on Twitter and asked for some help. She recommended I contact [Kattni Rembor][20], who pointed me to her GitHub repo and some [code examples][21] she wrote for the Chicago Linux User Group.
Each of these simple building blocks left me more confident in my Python journey. In addition to making lights blink, the CPX can also function as a sensor, and I wanted to try that. Here is code for a simple light sensor:
![CPX code for a blinking LED][22]
And here is the CPX with the D13 LED blinking:
![CPX with a blinking LED][23]
I also discovered a way to create some fun for my grandson by making the CPX "come to life." I recorded a couple of .wav files with Audacity and saved them to the device. Then I wrote some simple code that utilized the A and B buttons on the device to make the CPX "talk" to him:
![Code to play a sound when a button is pressed on CPX][24]
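The idea behind that snippet can be sketched in plain Python. The sound file names and the button-to-file mapping here are hypothetical, and the **adafruit_circuitplayground** library is only available on the board itself, so the hardware loop appears as a comment:

```python
# Hedged sketch of the "talking CPX" idea. The .wav file names are
# hypothetical; record your own and save them to the CIRCUITPY drive.

SOUNDS = {"a": "hello.wav", "b": "goodbye.wav"}

def sound_for(button):
    """Return the .wav file mapped to a pressed button, or None."""
    return SOUNDS.get(button)

# On the device itself (CircuitPython), the main loop would look like:
#
#   from adafruit_circuitplayground.express import cpx
#   while True:
#       if cpx.button_a:
#           cpx.play_file(sound_for("a"))
#       elif cpx.button_b:
#           cpx.play_file(sound_for("b"))

print(sound_for("a"))
```

Keeping the mapping in a dictionary makes it easy to add more buttons or capacitive touch pads later without touching the loop.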
I've really enjoyed tinkering with the code to explore the CPX's capabilities. I am always looking for ways to make Python code come alive for students I teach. The CPX is a great way to help new users learn and enjoy coding and digital making. Another excellent resource for new users is Mike Barela's book _[Getting Started with Adafruit Circuit Playground Express][25]_. I found its information and examples very helpful as I was learning.
Get a [Circuit Playground Express][2] and start writing your own code. And then please share how you are using it in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/circuit-playground-express
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)
[2]: https://www.adafruit.com/product/3333
[3]: https://us.pycon.org/2019/
[4]: https://youtu.be/WlGkBqBRsik
[5]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_mounted.png (Circuit Playground Express mounted as USB drive)
[6]: https://learn.adafruit.com/adafruit-circuit-playground-express/adafruit2-windows-driver-installation
[7]: https://learn.adafruit.com/using-circuit-playground-express-makecode-circuitpython-on-a-chromebook/overview
[8]: https://learn.adafruit.com/adafruit-circuit-playground-express
[9]: https://makecode.adafruit.com/
[10]: https://learn.adafruit.com/adafruit-circuit-playground-express/arduino
[11]: https://circuitpython.org/
[12]: https://learn.adafruit.com/adafruit-circuit-playground-express/creating-and-editing-code
[13]: https://codewith.mu/en/
[14]: https://opensource.com/article/18/8/getting-started-mu-python-editor-beginners
[15]: https://codewith.mu/en/howto/1.0/install_with_python
[16]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_mu.png (Selecting CircuitPython mode to run Mu)
[17]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_import-cpx.png (Importing Adafruit's CPX library)
[18]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_main-py.png (CPX's default main.py)
[19]: https://twitter.com/nnja
[20]: https://learn.adafruit.com/users/kattni
[21]: https://github.com/kattni/ChiPy_2018
[22]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_simpleblinkingled.png (CPX code for a blinking LED)
[23]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_d13blinking.jpg (CPX with a blinking LED)
[24]: https://opensource.com/sites/default/files/uploads/circuitplaygroundexpress_talking.png (Code to play a sound when a button is pressed on CPX)
[25]: https://www.adafruit.com/product/3944
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to apply 'release early, release often' to build a better brand)
[#]: via: (https://opensource.com/article/19/7/build-better-brand)
[#]: author: (Alex Kimball https://opensource.com/users/alex-kimballhttps://opensource.com/users/marcobravo)
How to apply 'release early, release often' to build a better brand
======
Try this faster, more collaborative process to promote your project.
![][1]
The importance of open source—and specifically the maxim "release early, release often" (RERO)—can hardly be overstated. 
This approach born at the command line has impacted the world as organizations of every shape and size discover what open, collaborative processes can do. Look around. The evidence is everywhere: on our phones, in our cars, in schools and hospitals.
If we still built software the way we used to, innovations across these and countless other areas may never have seen the light of day.
The worlds of marketing and brand development are no different. In today's fast-moving tech industry, marketers and strategists are taking a page out of the dev team's playbook, applying more agile methods to creating brand messaging and visual identities.
Why? For the same reasons cathedral-style development is no longer the best way to build apps: it's too isolated, too slow, and too disconnected from the reality of how the rest of the world works.
Fans of the TV series _Mad Men_ will be familiar with how marketing used to be done, especially within an agency. Marketing messages—taglines, slogans, jingles, and the like—were developed by creative powerhouses with a unique gift they (and only they) seemed to truly understand.
A typical project would go something like this:
* The client hires the agency. Much excitement as the journey sets off; after a very brief introductory meeting, the client sends along a few existing materials.
* The agency retreats to its creative confines to wait for divine inspiration to strike.
* Many weeks and months later, the agency returns with—_**eureka!**_—the answer!
Why these antiquated processes don't work:
* They allow little room for the voice of the customer.
* They allow little room for the voice of those within the company who know it best and care about it most.
* They hold precious the final product until the very end, increasing the likelihood that bugs in the creative code will survive until it's too late. All destination. No journey.
That's not how today's most agile, most innovative companies build their software. It shouldn't be how they build their brands.
### 3 simple steps to begin your RERO journey
Applying RERO to your brand projects is simple. Start with these three steps:
#### 1\. Set clear expectations
As the well-known African proverb cautions: "If you want to go fast, go alone; if you want to go far, go together." When it comes to projects guided by RERO principles, setting clear expectations early can be the key to going farther, faster.
At any project's outset, gather your working team, partners, clients, or anyone else expected to contribute, and make sure everyone is prepared to adopt an agile mindset. Progress will be measured in speedy steps, not big leaps. In a room packed with perfectionists, some will likely bristle at this new approach—don't worry.
Don't forget the logistics, either. Share the project schedule, clearly outlining the milestones where all are expected to participate, as well as windows of working time where individual progress should be made.
#### 2\. Share silly first drafts
Ideas, especially rough ones, should be welcomed with open arms. We call these early explorations Silly First Drafts. The SFD is a crucial piece of our creative puzzle, and naming it makes it less scary: it's an open invitation to share even the smallest seed of an idea with the team, without fear of ridicule or rejection.
Easier said than done, for sure. As a writer, I struggle to take this advice more than I'd care to admit. Putting something you've created out there before it's ready for primetime feels like a mortal sin. Not so in the RERO world. The goal of the SFD is to create more, more quickly, and to get what you create in front of users and customers who can help you make it even better. Dive in, get messy, and create in the open, where more eyes can see your work and help improve it.
Some content creators will bristle at this approach initially. Encourage them to embrace this discomfort with a more curious, growth-oriented mindset. Apply what the improv world calls a "Yes, and…" mindset, which emphasizes additive contributions not subtractive naysaying.
Emphasize the journey _and_ the destination. Not a simple binary of success or failure, but rather an opportunity for continuous development. At the project level, this approach helps combat progress-killing factors like the dreaded "paralysis by analysis." How much time have we all lost to hand-wringing indecision and over-thinking?
Gather input on anything you've created—whether lines of code or lines of copy—and your final product will thank you. Just make sure these "fresh eyes" know what to look for. Equip your team to understand how they can be most helpful. Highlight specific areas where feedback is most needed (and by contrast, where things are feeling pretty good).
#### 3\. Embrace technology
This last step is probably the easiest but worth mentioning, as it's overlooked more often than you might expect. Collaboration software has made life a lot easier for all of us lately and can be a crucial strategy to implement a RERO approach for your next project effectively.
##### Team communication
Today's crop of online messaging platforms is well-suited to match the pace of a RERO-guided process. Provide all project contributors—especially those who work remotely—with a single, always-available venue for productive discussions. For our money, the best open source option of the bunch is [Mattermost][2]. It does just about everything Slack can do—file sharing, real-time chat, robust integrations—while also letting you access the source code.
Set a few parameters for the team around best practices to keep the channel positive and productive. Without at least a few rules in place, channels and threads can quickly become side-tracked and GIF-overloaded. Describe the purpose of each channel when it's created so that everyone knows why it exists and what you're there to do.
##### Content creation
For content-related tasks that require word processing, spreadsheets, and presentations, open source alternatives to the Microsoft Office suite of products have been available for decades—chief among them was the now-defunct OpenOffice. Since its final release in 2011, a handful of other providers have picked up the torch.
LibreOffice is a good option, but by the development team's own admission, it may not be the best choice for enterprise deployments. For business-critical applications, check out [Collabora Online][3]. This hosted solution offers a full suite of office applications with powerful team tools for live collaborative editing, support for all major file formats, and data security protections.
With project files accessible to anyone with a modern browser, you won't have to worry about "I can't open it" issues that arise when your PC-loving executive goes to review work from the Mac-loving design team. And when revision histories are updated automatically, it's easy to revisit previous drafts and chronicle the journey you've taken.
Just be sure to set some ground rules as all your cooks enter the content kitchen at once. Allow one content creator to retain editing permissions while granting other team members comment-only or view-only permissions. This helps keep revisions to the core piece of content firmly within the realm of the person who created it and helps ensure all reviewers are reacting to the same version.
### How the future of business is built
Release early, release often is more relevant to our world than ever. This faster, more collaborative process is the way businesses of all kinds must create their futures.
Follow the steps outlined above, and you and your team will be well on your way to applying the principles of agile development, rapid iteration, and continuous innovation to your next brand development project.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/build-better-brand
作者:[Alex Kimball][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-kimball
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/branding_opensource_intersection.png?itok=4lf-f5NB
[2]: https://mattermost.com/
[3]: https://www.collaboraoffice.com/code/
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Redirect a GitHub Pages site with this HTTP hack)
[#]: via: (https://opensource.com/article/19/7/permanently-redirect-github-pages)
[#]: author: (Oleksii Tsvietnov https://opensource.com/users/oleksii-tsvietnov)
Redirect a GitHub Pages site with this HTTP hack
======
Learn how to configure two repositories to serve as static websites with
custom domain names.
![computer servers processing data][1]
I run a few static websites for my private projects on [GitHub Pages][2]. I'm absolutely happy with the service, as it supports custom domains, automatically redirects to HTTPS, and transparently installs SSL certificates (with automatic issuing via [Let's Encrypt][3]). It is very fast (thanks to [Fastly's][4] content delivery network) and is extremely reliable (I haven't had any issues for years). Taking into account the fact that I get all of this for free, it perfectly matches my needs at the moment.
It has, however, one important limitation: because it serves static websites only, this means no query parameters, no dynamic content generated on the server side, no options for injecting any server-side configuration (e.g., .htaccess), and the only things I can push to the website's root directory are static assets (e.g., HTML, CSS, JS, JPEG, etc.). In general, this is not a big issue. There are a lot of open source [static site generators][5] available, such as [Jekyll][6], which is available by default from the dashboard, and [Pelican][7], which I prefer in most cases. Nevertheless, when you need to implement something that is traditionally solved on the server side, a whole new level of challenge begins. 
For example, I recently had to change a custom domain name for one of my websites. Keeping the old one was ridiculously expensive, and I wasn't willing to continue wasting money. I found a cheaper alternative and immediately faced a bigger problem: all the search engines have the old name in their indexes. Updating indexes takes time, and until that happens, I would have to redirect all requests to the new location. Ideally, I would redirect each indexed resource to the equivalent on the new site, but at minimum, I needed to redirect requests to the new start page. I had access to the old domain name for enough time, and therefore, I could run the site separately on both domain names at the same time.
There is one proper solution to this situation that should be used whenever possible: a permanent redirect, or the [301 Moved Permanently][8] status code, is the way redirects are implemented in the HTTP protocol. The only issue is that it's supposed to happen on the server side, within the HTTP header of a server response. But the only solution I could implement resides on the client side; that is, either HTML code or JavaScript. I didn't consider the JS variant because I didn't want to rely on script support in web browsers. Once I defined the task, I recalled a solution: the [HTML `<meta>` tag][9] `<meta http-equiv>` with the [**refresh**][10] [HTTP header][11]. Although it can be used to ask browsers to reload a page or jump to another URL after a specified number of seconds, after some research, I learned it is more complicated than I thought, with some interesting facts and details.
### The solution
**TL;DR** (for anyone who isn't interested in all the details): In brief, this solution configures two repositories to serve as static websites with custom domain names.
On the site with the old domain, I reconstructed the website's entire directory structure and put the following index.html in each directory (including the root):
```
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta http-equiv="refresh" content="0;url={{THE_NEW_URL}}" />
        <link rel="canonical" href="{{THE_NEW_URL}}" />
    </head>
    <body>
        <h1>
            The page has been moved to <a href="{{THE_NEW_URL}}">{{THE_NEW_URL}}</a>
        </h1>
    </body>
</html>
```
When someone opens a resource on the old domain, most web browsers promptly redirect to the same resource on the new website (thanks to **http-equiv="refresh"**). For any resources that were missed or nonexistent, it's helpful to create a **404.html** file in the old website's root directory with similar content, but without **rel="canonical"** (because there is no canonical page in this case).
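Populating the old site's tree with these redirect pages lends itself to scripting. Here is a rough sketch of how that could be done; `NEW_BASE` and the `old-site` directory layout are placeholders for this example, not part of the original setup:

```shell
#!/bin/sh
# Sketch: write a redirect index.html into every directory of the old site.
# NEW_BASE and the old-site tree below are illustrative placeholders.
NEW_BASE="https://example.com"

mkdir -p old-site/blog old-site/about   # stand-in structure for the demo

find old-site -type d | while read -r dir; do
    rel="${dir#old-site}"               # path relative to the site root
    cat > "$dir/index.html" <<EOF
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta http-equiv="refresh" content="0;url=${NEW_BASE}${rel}/">
        <link rel="canonical" href="${NEW_BASE}${rel}/">
    </head>
    <body>
        <h1>The page has been moved to <a href="${NEW_BASE}${rel}/">${NEW_BASE}${rel}/</a></h1>
    </body>
</html>
EOF
done
```

Every directory, including the root, ends up with its own page pointing at the equivalent location under the new domain.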
The last piece of the puzzle is the [canonical link relation][12] (**rel="canonical"**), which prevents duplicating content as long as the implemented redirect _is not permanent._ From the HTTP response's perspective, it happens when [the request has succeeded][13] and there is an indication for search engines that a resource has moved and should be associated with a new (preferred) location.
I have learned a few interesting facts related to **http-equiv="refresh"** and **rel="canonical"**. The HTML meta tag **http-equiv** is used to simulate the presence of an HTTP header in a server response. That is, web developers without access to the web server's configuration can get a similar result by "injecting" HTTP headers from an HTML document (the "body" of an HTTP response). It seems the **refresh** header, which has been used by all popular web browsers for many years, _doesn't really exist_. At least not as a standardized HTTP header. There was a plan to add it in the HTTP/1.1 specification, but it was [deferred to HTTP/1.2][14] (or later), and it never happened.
### Summary
The task of finding the real source URL for a resource is far from trivial. There are different scheme names (HTTP, HTTPS), multiple query parameters (page.html, page.html?a=1), various hostnames that resolve to the same IP address, etc. All of these options make a webpage look different to search engines, but the page is still the same. It gets even worse when the same content is published on independent web services. In 2009, Google, Yahoo, and Microsoft announced [support for a canonical link element][15] to clean up duplicate URLs on sites by allowing webmasters to choose a canonical (preferred) URL for a group of possible URLs for the same page. This helps search engines pick up the correct URL to associate with the content and can also improve [SEO for a site][16].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/permanently-redirect-github-pages
作者:[Oleksii Tsvietnov][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/oleksii-tsvietnov
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
[2]: https://pages.github.com/
[3]: https://letsencrypt.org/
[4]: https://www.fastly.com/
[5]: https://www.staticgen.com/
[6]: https://jekyllrb.com/
[7]: https://github.com/getpelican/pelican
[8]: https://tools.ietf.org/html/rfc2616#section-10.3.2
[9]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/meta
[10]: http://www.otsukare.info/2015/03/26/refresh-http-header
[11]: https://tools.ietf.org/html/rfc2616#section-14
[12]: https://tools.ietf.org/html/rfc6596
[13]: https://tools.ietf.org/html/rfc2616#section-10.2.1
[14]: https://lists.w3.org/Archives/Public/ietf-http-wg-old/1996MayAug/0594.html
[15]: https://www.mattcutts.com/blog/canonical-link-tag/
[16]: https://yoast.com/rel-canonical/
@ -0,0 +1,121 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know to be a sysadmin)
[#]: via: (https://opensource.com/article/19/7/be-a-sysadmin)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
What you need to know to be a sysadmin
======
Kickstart your sysadmin career by gaining these minimum competencies.
![People work on a computer server with devices][1]
The system administrator of yesteryear jockeyed users and wrangled servers all day, in between mornings and evenings spent running hundreds of meters of hundreds of cables. This is still true today, with the added complexity of cloud computing, containers, and virtual machines.
Looking in from the outside, it can be difficult to pinpoint what exactly a sysadmin does, because they play at least a small role in so many places. Nobody goes into a career already knowing everything they need for a job, but everyone needs a strong foundation. If you're looking to start down the path of system administration, here's what you should be concentrating on in your personal or formal training.
### Bash
When you learn the Bash shell, you don't just learn the Bash shell. You learn a common interface to Linux systems, BSD, MacOS, and even Windows (under the right conditions). You learn the importance of syntax, so you can quickly adapt to systems like Cisco routers' command line or Microsoft's PowerShell, and eventually, you can even learn more powerful languages like Python or Go. And you also begin to think procedurally so you can analyze complex problems and break them down into individual components, which is key because _that's_ how systems, like the internet, or an organization's intranet, or a web server, or a backup solution, are designed.
But wait. There's more.
Knowing the Bash shell has become particularly important because of the recent trend toward DevOps and [containers][2]. Your career as a sysadmin may lead you into a world where infrastructure is treated like code, which usually means you'll have to know the basics of scripting, the structure of [YAML-based][3] configuration, and how to [interact][4] with [containers][5] (tiny Linux systems running inside a [sandboxed file][6]). Knowing Bash is the gateway to efficient management of the most exciting open source technology, so go get [Bourne Again][7].
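As a small taste of that procedural thinking, here is a hypothetical one-task script (the `demo` directory and its files exist only for the example): it breaks the question "what file types live here?" into small steps, each handled by one tool in a pipeline.

```shell
#!/bin/sh
# Break one question into small steps, one tool per step:
# list files -> keep only the extension -> sort -> count -> rank.
mkdir -p demo && touch demo/a.log demo/b.log demo/c.conf   # sample data

count_by_ext() {
    find "$1" -type f -name '*.*' |
        sed 's/.*\.//' |
        sort |
        uniq -c |
        sort -rn
}

count_by_ext demo | tee report.txt
```

None of the individual commands is complicated; the habit worth building is decomposing a problem until each piece is trivial.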
#### Resources
There are many ways to get practice in the Bash shell.
Try a [portable Linux distribution][8]. You don't have to install Linux to use Linux, so grab a spare thumb drive and spend your evenings or weekends getting comfortable with a text-based interface.
There are several excellent [Bash articles][9] available here on opensource.com as well as [on Enable SysAdmin][10].
The problem with telling someone to practice with Bash is that to practice, you must have something to do. And until you know how to use Bash, you probably won't be able to think of anything to do. If that's your situation, go to Over The Wire and play [Bandit][11]. It's a game aimed at absolute beginners, with 34 levels of interactive basic hacking to get you comfortable with the Linux shell.
### Web server setup
Once you're comfortable with Bash, you should try setting up a web server. Not all sysadmins go around setting up web servers or even maintain web servers, but the skills you acquire while installing and starting the HTTP daemon, configuring Apache or Nginx, setting up the [correct permissions][12], and [configuring a firewall][13] are the same skills you need on a daily basis. After a little bit of effort, you may start to notice certain patterns in your labor. There are concepts you probably took for granted before trying to administer production-ready software and hardware, and you're no longer shielded from them in your fledgling role as an administrator. It might be frustrating at first because everyone likes to be good at everything they do, but that's actually a good thing. Let yourself be bad at new skills. That's how you learn.
And besides, the more you struggle through your first steps, the sweeter it is when you finally see that triumphant "it works!" default index.html.
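The "correct permissions" piece can be rehearsed without a real server. Here is a minimal sketch using a throwaway directory in place of `/var/www` (the actual package names and service commands vary by distribution, e.g., `apache2` on Debian vs. `httpd` on Fedora, so those steps are left out): directories get 755 so the web server can traverse them, files get 644 so they are world-readable but not writable.

```shell
#!/bin/sh
# Sketch: the permission pattern a web root typically needs, applied
# to a throwaway directory rather than a real document root.
mkdir -p webroot
printf '<h1>it works!</h1>\n' > webroot/index.html

find webroot -type d -exec chmod 755 {} +   # directories: rwxr-xr-x
find webroot -type f -exec chmod 644 {} +   # files: rw-r--r--

stat -c '%a %n' webroot webroot/index.html
```

Using `find -exec` instead of a recursive `chmod -R 755` is deliberate: it lets directories and files receive different modes, so files never end up executable.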
#### Resources
David Both wrote an excellent article on [Apache web server][14] configuration. For extra credit, step through his follow-up article on how to [host multiple sites][15] on one machine.
### DHCP
The Dynamic Host Configuration Protocol (DHCP) is the system that assigns IP addresses to devices on a network. At home, the modem or router your ISP (internet service provider) provides probably has an embedded DHCP server in it, so it's likely out of your purview. If you've ever logged into your home router to adjust the IP address range or set up a static address for some of your network devices, then you're at least somewhat familiar with the concept. You may understand that devices on a network are assigned the equivalent of phone numbers in the form of IP addresses, and you may realize that computers communicate with one another by broadcasting messages addressed to a specific IP address. Message headers are read by routers along the path, each of which works to direct the message to the next most logical router along the path toward its ultimate goal.
Even if you understand these concepts, the inevitable escalation of basic familiarity with DHCP is to set up a DHCP server. Installing and configuring your own DHCP server provides you the opportunity to introduce DHCP collisions on your home network (try to avoid that, if you can, as it will definitely kill your network until it's resolved), control the distribution of addresses, create subnets, and monitor connections and lease times.
More importantly, setting up DHCP and experimenting with different configurations helps you understand inter-networking. You understand how networks represent "partitions" in data transference and what steps you have to take to pass information from one to the other. That's vital for a sysadmin to know because the network is easily one of the most important aspects of the job.
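As a concrete starting point, here is a sketch of a minimal DHCP configuration for dnsmasq (one popular DHCP server; ISC's dhcpd is another). The interface name, address range, and MAC address are illustrative placeholders; adjust them to your own subnet, and remember to disable your router's DHCP server while this one runs.

```shell
#!/bin/sh
# Sketch: write a minimal dnsmasq DHCP configuration to a local file
# for inspection. Values are placeholders for your own network.
cat > dnsmasq-dhcp.conf <<'EOF'
# serve leases on this NIC only
interface=eth0
# address pool and lease time
dhcp-range=192.168.1.100,192.168.1.200,12h
# a static reservation for one known MAC address
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.10
EOF
echo "wrote $(wc -l < dnsmasq-dhcp.conf) lines"
```

Three lines already exercise the core ideas: scoping the server to one interface, defining a pool with a lease time, and pinning a static address to a MAC.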
#### Resources
Before running your own DHCP server, ensure that the DHCP server in your home router (if you have one) is inactive. Once you have it up and running, read Archit Modi's [guide to network commands][16] for tips on how to explore your network.
### Network cables
It might sound mundane, but getting familiar with how network cables work not only makes for a really fun weekend but also gives you a whole new understanding of how data gets across the wires. The best way to learn is to go to your local hobby shop and purchase a Cat 5 cutter and crimper and a few Cat 5 terminators. Then head home, grab a spare Ethernet cable, and cut the terminators off. Spend whatever amount of time it takes to get that cable back in commission.
Once you have solved that puzzle, do it again, this time creating a working [crossover cable][17].
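For reference while you crimp, the two standard TIA/EIA-568 wiring orders can be written down as data. A straight-through cable uses the same order on both ends; a crossover cable pairs T568A with T568B, which swaps the orange and green pairs.

```shell
#!/bin/sh
# TIA/EIA-568 wire orders, pin 1 through pin 8.
t568a="white-green green white-orange blue white-blue orange white-brown brown"
t568b="white-orange orange white-green blue white-blue green white-brown brown"

print_pinout() {   # $1 = label, $2 = space-separated wire list
    echo "$1:"
    i=1
    for wire in $2; do
        echo "  pin $i: $wire"
        i=$((i + 1))
    done
}

print_pinout "T568A" "$t568a" > pinout.txt
print_pinout "T568B" "$t568b" >> pinout.txt
cat pinout.txt
```

Comparing the two lists makes the crossover trick obvious: only pins 1, 2, 3, and 6 differ between the standards.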
You should also start obsessing _now_ about cable management. If you're not naturally inclined to run cables neatly along the floor molding or the edges of a desk or to bind cables together to keep them orderly, then make it a goal to permanently condition yourself with a phobia of messy cables. You won't understand why this is necessary at first, but the first time you walk into a server room, you will immediately know.
### Ansible
[Ansible][18] is configuration management software, and it's a bit of a bridge between sysadmin and DevOps. Sysadmins use Ansible to configure fresh installs of an operating system and to maintain specific states on machines. DevOps uses Ansible to reduce time and effort spent on tooling so that more time and effort gets spent on developing. You should learn Ansible as part of your sysadmin training, with an eye toward the practices of DevOps, because most of what DevOps is pioneering now will end up as part of your workflow in the system administration of the future.
The good thing about Ansible is that you can start using it now. It's cross-platform, and it scales both up and down. Ansible may be overkill for a single-user computer, but then again, Ansible could change the way you spin up virtual machines, or it could help you synchronize the states of all the computers in your home or [home lab][19].
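To make "maintain specific states on machines" concrete, here is a sketch that writes out a minimal Ansible playbook pinning one bit of desired state (that a `~/bin` directory exists). The playbook contents are an illustrative assumption; this script only generates the file, since actually applying it requires Ansible to be installed.

```shell
#!/bin/sh
# Sketch: generate a minimal Ansible playbook describing desired state.
# Applying it requires Ansible; this script only writes the file.
cat > site.yml <<'EOF'
---
- name: keep my workstation in a known state
  hosts: localhost
  connection: local
  tasks:
    - name: ensure ~/bin exists
      file:
        path: ~/bin
        state: directory
        mode: "0755"
EOF
echo "playbook written"
```

With Ansible installed, `ansible-playbook site.yml` would apply it, and re-running it is safe: the `file` module is idempotent, which is exactly the property that makes state maintenance practical.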
#### Resources
Read "[How to manage your workstation configuration with Ansible][20]" by Jay LaCroix for the quintessential introduction to get started with Ansible on a casual basis.
### Break stuff
Problems arise on computers because of user error, buggy software, administrator (that's you!) error, and any number of other factors. There's no way to predict what's going to fail or why, so part of your personal sysadmin training regime should be to poke at the systems you set up until they fail. The worse you are to your own lab infrastructure, the more likely you are to find weak points. And the more often you repair those weak spots, the more confident you become in your problem-solving skills.
Aside from the rigors of setting up all the usual software and hardware, your primary job as a sysadmin is to find solutions. There will be times when you encounter a problem outside your job description, and it may not even be possible for you to fix it, but it'll be up to you to find a workaround.
The more you break stuff now and work to fix it, the better prepared you will be to work as a sysadmin.
* * *
Are you a working sysadmin? Are there tasks you wish you'd prepared better for? Add them in the comments below!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/be-a-sysadmin
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
[3]: https://www.redhat.com/sysadmin/yaml-tips
[4]: https://opensource.com/article/19/6/how-ssh-running-container
[5]: https://opensource.com/resources/what-are-linux-containers
[6]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[7]: https://opensource.com/article/18/7/admin-guide-bash
[8]: https://opensource.com/article/19/6/linux-distros-to-try
[9]: https://opensource.com/tags/bash
[10]: https://www.redhat.com/sysadmin/managing-files-linux-terminal
[11]: http://overthewire.org/wargames/bandit
[12]: https://opensource.com/article/19/6/understanding-linux-permissions
[13]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[14]: https://opensource.com/article/18/2/how-configure-apache-web-server
[15]: https://opensource.com/article/18/3/configuring-multiple-web-sites-apache
[16]: https://opensource.com/article/18/7/sysadmin-guide-networking-commands
[17]: https://en.wikipedia.org/wiki/Ethernet_crossover_cable
[18]: https://opensource.com/sitewide-search?search_api_views_fulltext=ansible
[19]: https://opensource.com/article/19/6/create-centos-homelab-hour
[20]: https://opensource.com/article/18/3/manage-workstation-ansible
@ -0,0 +1,240 @@
The 10 Best Alternatives to Microsoft Visio for Linux
======
**Brief: If you are looking for a good Visio viewer on Linux, here are some alternatives to Microsoft Visio that you can use on Linux.**
Microsoft Visio is a great tool for creating or generating mission-critical diagrams and vector representations. While it may be a good tool for making floor plans or other kinds of diagrams, it is neither free nor open source.
Moreover, Microsoft Visio is not a standalone product; it comes bundled with Microsoft Office. We have looked at open source alternatives to MS Office in the past. Today we will see which tools you can use in place of Visio on Linux.
## The best Microsoft Visio alternatives for Linux
![Microsoft Visio alternatives for Linux][4]
An obligatory disclaimer here: this list is not a ranking. A product at number three is not better than one at number six.
I have also mentioned a couple of non-open-source Visio alternatives that you can use from a web interface.
| Software | Type | License |
| --- | --- | --- |
| LibreOffice Draw | Desktop software | Free and open source |
| OpenOffice Draw | Desktop software | Free and open source |
| Dia | Desktop software | Free and open source |
| yEd Graph Editor | Desktop and web-based | Freemium |
| Inkscape | Desktop software | Free and open source |
| Pencil | Desktop and web-based | Free and open source |
| Graphviz | Desktop software | Free and open source |
| draw.io | Desktop and web-based | Free and open source |
| Lucidchart | Web-based | Freemium |
| Calligra Flow | Desktop software | Free and open source |
### 1. LibreOffice Draw
![][5]
The LibreOffice Draw module is one of the best open source alternatives to Microsoft Visio. With its help, you can choose to make a quick sketch of an idea or a complex professional floor plan for presentation. Flowcharts, organization charts, network diagrams, brochures, posters, and more! All without spending a dime.
The good thing is that it comes bundled with LibreOffice, which is installed by default on most Linux distributions.
#### Overview of key features:
* Style and formatting tools for making brochures/posters
* Calc data visualization
* PDF file editing capability
* Create photo albums by manipulating images from the gallery
* Flexible drawing tools similar to those in Microsoft Visio (smart connectors, dimension lines, etc.)
* Supports opening .VSD files
[LibreOffice Draw][6]
### 2. Apache OpenOffice Draw
![][7]
A lot of people know about OpenOffice (on which the LibreOffice project was originally based), but they don't often mention Apache OpenOffice Draw as an alternative to Microsoft Visio. Yet, in fact, it is another amazing open source diagramming tool. Unlike LibreOffice Draw, it does not support editing PDF files, but it offers drawing tools for creating any type of diagram.
Just a word of warning: use this tool only if you already have OpenOffice on your system, because [installing OpenOffice][8] is a pain and it is [no longer actively developed][9].
#### Overview of key features:
* 3D controller to quickly create shapes
* Create flash (.swf) versions of your work
* Style and formatting tools
* Flexible drawing tools similar to those in Microsoft Visio (smart connectors, dimension lines, etc.)
[Apache OpenOffice Draw][10]
### 3. Dia
![][11]
Dia is another interesting open source tool. It may not be under active development like the tools mentioned above, but if you are looking for a simple yet decent free and open source alternative to Microsoft Visio, Dia could be your choice. The only place this tool might let you down is its user interface. Apart from that, it gives you powerful tools for complex diagrams (though they may not look great, so we recommend it for simpler diagrams).
#### Overview of key features:
* It can be used via the command line
* Style and formatting tools
* A shape repository for custom shapes
* Drawing tools similar to those in Microsoft Visio (special objects, grid lines, layers, etc.)
* Cross-platform
[Dia][12]
### 4. yEd Graph Editor
yEd is one of the most popular free alternatives to Microsoft Visio. If you are concerned about it being freeware rather than an open source project, you can still use yEd's live editor for free in a web browser. It is one of the best recommendations if you want to draw diagrams quickly with an extremely easy-to-use interface.
#### Overview of key features:
* Drag-and-drop functionality for easy diagram making
* Supports importing external data for linking
[yEd Graph Editor][14]
### 5. Inkscape
![][15]
Inkscape is a free and open source vector graphics editor. It gives you the basic capabilities to create flowcharts or data flow diagrams. It does not offer advanced diagramming tools, only basic ones for creating simpler diagrams. So Inkscape may be your Visio alternative only if you want to produce basic diagrams with the help of the gallery's connector tool, using the symbols available in the gallery.
#### Overview of key features:
* Connector tool
* Flexible drawing tools
* Broad file format compatibility
[Inkscape][16]
### 6. Pencil Project
![][17]
Pencil Project is an impressive open source project available for Windows and Mac as well as Linux. Its easy-to-use GUI makes drawing easier and more convenient, and a nice collection of built-in shapes and symbols makes your diagrams look great. It also ships with Android and iOS UI templates, so you can prototype an app when you need to.
You can also install it as a Firefox extension, but the extension does not use the latest version of the project.
#### Overview of key features:
* Browse clip art easily (using openclipart.org)
* Export to ODT/PDF files
* Diagram connector tool
* Cross-platform
[Pencil Project][18]
### 7. Graphviz
![][19]
Graphviz is slightly different. It is not a drawing tool but a dedicated graph visualization tool. If you need several designs to represent a node in a network diagram, this is definitely the tool to use. Of course, you can't make a floor plan with it (at least, not easily), so it is best suited for network diagrams, bioinformatics, database connections, and similar things.
#### Overview of key features:
* Supports command-line usage
* Supports custom shapes and tabular node layouts
* Basic styling and formatting tools
[Graphviz][20]
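To illustrate the command-line workflow, here is a rough sketch of how a Graphviz diagram is described in the DOT language. The node names are made up for the example, and rendering the file requires the graphviz package (e.g., `dot -Tpng network.dot -o network.png`); this snippet only writes the graph description.

```shell
#!/bin/sh
# Sketch: describe a tiny network diagram in Graphviz's DOT language.
# Node names are placeholders; rendering requires the graphviz package.
cat > network.dot <<'EOF'
digraph network {
    rankdir=LR;
    router [shape=diamond];
    switch [shape=box];
    router -> switch;
    switch -> "pc-1";
    switch -> "pc-2";
}
EOF
echo "wrote network.dot"
```

Because the diagram is plain text, it can live in version control and be regenerated on demand, which is exactly what makes Graphviz a different kind of tool from the drag-and-drop editors above.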
### 8. Draw.io
Draw.io is primarily a free web-based diagramming tool with powerful tools to make almost any type of diagram. You just drag and drop shapes, then connect them to create a flowchart, an ER diagram, or anything related. Also, if you like the tool, you can try the [offline desktop version][21].
**Overview of key features:**
* Direct uploads to cloud storage services
* Custom shapes
* Style and formatting tools
* Cross-platform
[Draw.io][22]
### 9. Lucidchart
![][23]
Lucidchart is a premium web-based diagramming tool that offers a free subscription with limited features. You can create several types of diagrams with the free subscription and export them as images or PDFs. However, the free version does not support data linking or the Visio import/export feature. If you do not need data linking, Lucidchart is a pretty good tool that produces beautiful diagrams.
#### Overview of key features:
* Integrations with Slack, Jira Core, and Confluence
* Ability to make product mockups
* Import Visio files
[Lucidchart][24]
### 10. Calligra Flow
![calligra flow][25]
Calligra Flow is part of the [Calligra Project][26], which aims to provide free and open source software tools. With Calligra Flow you can easily create network diagrams, entity-relationship diagrams, flowcharts, and more.
#### Overview of key features:
* Various stencil boxes
* Style and formatting tools
[Calligra Flow][27]
### Wrapping up
Now that you know about the best free and open source alternatives to Visio, what do you think of them?
Are they better than Microsoft Visio for any of your needs? Also, if we missed your favorite Linux drawing tool as an alternative to Microsoft Visio, let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/visio-alternatives-linux/
作者:[Ankush Das][a]
译者:[ZhiW5217](https://github.com/ZhiW5217)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ankush/
[1]:https://products.office.com/en/visio/flowchart-software
[2]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[3]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/visio-alternatives-linux-featured.png
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/libreoffice-draw-microsoft-visio-alternatives.jpg
[6]:https://www.libreoffice.org/discover/draw/
[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/apache-open-office-draw.jpg
[8]:https://itsfoss.com/install-openoffice-ubuntu-linux/
[9]:https://itsfoss.com/openoffice-shutdown/
[10]:https://www.openoffice.org/product/draw.html
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/dia-screenshot.jpg
[12]:http://dia-installer.de/
[13]:https://www.yworks.com/products/yed-live
[14]:https://www.yworks.com/products/yed
[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/inkscape-screenshot.jpg
[16]:https://inkscape.org/en/
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/pencil-project.jpg
[18]:http://pencil.evolus.vn/Downloads.html
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/graphviz.jpg
[20]:http://graphviz.org/
[21]:https://about.draw.io/integrations/#integrations_offline
[22]:https://about.draw.io/
[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/lucidchart-visio-alternative.jpg
[24]:https://www.lucidchart.com/
[25]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/12/calligra-flow.jpg
[26]:https://www.calligra.org/
[27]:https://www.calligra.org/flow/
@ -1,139 +0,0 @@
MX Linux: 一款专注于简洁性的发行版
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux.png?itok=OLjmCxT9)
Linux 有着如此多种的发型版。许多发行版为了使自己与众不同而做出了很多改变。另一方面,许多发行版之间的区别又是如此之小,你可能会问为什么有人还愿意不厌其烦的重复别人已经做过的工作呢?也正是基于这一疑惑,让我好奇为什么 [antiX][1] 和 [MEPIS][2]这两个社区要联合推出一个特殊的发行版,考虑到具体情况应该会是一个搭载 Xfce 桌面并基于 antiX 的版本,由 MEPIS 社区承担开发。
这一开发中的使用 Xfce 桌面的 antiX 系统是否会基于它之前的发行版呢?毕竟,antiX 旨在提供一个“基于 Debian 稳定版的快速、轻量级、易于安装的非 systemd 的 live CD 发行版”。antiX 所搭载的桌面是 [LXDE][3],能够极好的满足关于轻量化系统的相关要求和特性。那究竟是什么原因使得 antiX 决定构建另一个轻量化发行版呢,仅仅是因为这次换成了 Xfce 吗?好吧,Linux 社区中的任何人都知道,增加了不同风格的好的轻量级发行版是值得一试的(特别是可以使得我们的旧硬件摆脱进入垃圾填埋场的宿命)。当然,LXDE 和 Xfce 并不完全属于同一类别。LXDE 应该被认为是一个真正的轻量级桌面,而 Xfce 应该被认为是一个中等体量的桌面。朋友们,这就是为什么 MX Linux 是 antiX 的一个重要迭代的关键。一个基于 Debian 的中等体量的发行版,它包含你完成工作所需的所有工具。
但是在 MX Linux 中有一些直接从 antiX 借用来的非常有用的东西 —— 那就是安装工具。当我初次设置了 VirtualBox 虚拟机来安装 MX Linux 时,我认为安装的系统将是我已经习惯的典型的、非常简单的 Linux 系统。令我非常惊讶的是,MX Linux 使用的 antiX 安装程序打破了以往的痛点,特别是对于那些对尝试 Linux 持观望态度的人来说。
因此,甚至在我开始尝试 MX Linux 之前,我就对它有了深刻的印象。让我们来看看是什么让这个发行版的安装如此特别,最后再来看看桌面。
你可以从[这里][4]下载 MX Linux 17.1。系统的最低要求是:
* CD/DVD 驱动器(以及能够从该驱动器引导的 BIOS),或 live USB(以及能够从 USB 引导的 BIOS)
* 英特尔 i486 或 AMD 处理器
* 512 MB 内存
* 5 GB 硬盘空间
* 扬声器AC97 或 HDA-compatible 声卡
* 作为一个 LiveUSB 使用,需要 4 GB 空间
### 安装
MX Linux 安装程序使安装 Linux 变得轻而易举。虽然它可能不是外观最现代化的安装工具,但也已经差不多了。安装的要点是从选择磁盘和选择安装类型开始的(图 1)。
![install][6]
*图 1:MX Linux 的安装程序截图之一*
下一个重要的界面(图 2)要求你设置一个计算机名称、域名和(如果需要的话)为微软网络设置工作组。
配置工作组的能力是第一个真正值得称赞的。这是我记忆中第一款在安装期间提供此选项的发行版。它还应该提示你,MX Linux 提供了开箱即用的共享目录功能。它做到了,而且深藏功与名。它并不完美,但它可以在不需要安装任何额外包的情况下工作(稍后将详细介绍)。
最后一个重要的安装界面(需要用户交互)是创建用户帐户和 root 权限的密码(图 3)。
![user][9]
*图 3:设置用户帐户详细信息和 root 用户密码*
最后一个界面设置完成后,安装将完成并要求重新启动。重启后,你将看到登录屏幕。登录并享受 MX Linux 带来的体验。
### 使用
Xfce 桌面是一个非常容易上手的界面。默认设置将面板置于屏幕的左边缘(图 4)。
![desktop ][11]
*图 4:MX Linux 的默认桌面*
如果你想将面板移动到更传统的位置,右键单击面板上的空白点,然后单击面板 > 面板首选项。在打开的窗口中(图 5),单击样式下拉菜单,在桌面栏、垂直栏或水平栏之间选择你想要的模式。
![panel][13]
*图 5:配置 MX Linux 面板*
桌面栏和垂直栏选项的区别在于,在桌面栏模式下,面板垂直对齐,就像在垂直模式下一样,但是插件是水平放置的。这意味着你可以创建更宽的面板(用于宽屏布局)。如果选择水平布局,它将默认位于顶部,然后你必须取消锁定面板,单击关闭,然后(使用面板左侧边缘的拖动手柄)将其拖动到底部。之后你可以回到面板设置窗口并重新锁定面板。
除此之外,使用 Xfce 桌面对于任何级别的用户来说都是无需动脑筋的事情……就是这么简单。你会发现很多有代表性的生产力软件(LibreOffice、Orage 日历、PDF-Shuffler)、图像软件(GIMP)、通信软件(Firefox、Thunderbird、HexChat)、多媒体软件(Clementine、guvcview、SMTube、VLC 媒体播放器),以及一些 MX Linux 专属的工具(称为 MX 工具,涵盖了 live-USB 驱动器制作工具、包管理工具、仓库管理工具、live ISO 快照工具等等)。
![sharing][15]
*图 6:向网络共享一个目录*
### Samba
让我们讨论一下如何将文件夹共享到你的网络。正如我所提到的,你不需要安装任何额外的包就可以使其正常工作。只需打开文件管理器,右键单击任意位置,并选择网络上的共享文件夹。系统将提示你输入管理密码(已在安装期间设置)。验证成功之后,Samba 服务器配置工具将打开(图 6)。
![sharing][17]
图7在MX Linux上配置共享。
单击 + 按钮并配置你的共享。你将被要求指定一个目录,为共享提供一个名称/描述,然后决定该共享是否可写且可见(图 7)。
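作为参考,GUI 中的这些选项最终大致对应 Samba 配置文件 `/etc/samba/smb.conf` 中的一段共享定义(仅为示意,共享名、路径和注释均为假设的示例):

```
[finance]
    path = /home/shared/finance
    comment = 财务部门共享
    writable = yes        ; 对应“可写”选项
    browseable = yes      ; 对应“可见”选项
```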
当你单击 Access 选项时,你可以选择是让每个人都访问共享,还是限于特定的用户。问题就出在这里。此时,没有用户可以共享。为什么?因为它们还没有被添加。有两种方法可以把它们添加到共享:从命令行或使用我们已经打开的工具。让我们用一种更为简单的方法。在 Samba 服务器配置工具的主窗口中,单击 Preferences > Samba Users。在弹出的窗口中,单击 Add user。
将出现一个新窗口(图 8),你需要从下拉框中选择用户,输入 Windows 用户名,并为用户键入/重新键入密码。
![Samba][19]
*图 8:向 Samba 添加用户*
一旦你单击确定,该用户将被添加,其基于你的网络的共享功能也随之启用。创建 Samba 共享从未如此容易。
### 结论
MX Linux 使任何人从其他桌面操作系统转向 Linux 都变得非常简单。尽管有些人可能会觉得桌面界面不太现代,但发行版的主要关注点不是美观,而是简洁。为此,MX Linux 以出色的方式取得了成功。Linux 的这种特性可以让任何人在使用 Linux 的过程中感到宾至如归。尝试这一中等体量的发行版,看看它能否作为你的日常系统。
从 Linux 基金会和 edX 的[“Linux 入门”][20]课程了解更多关于 Linux 的知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/mx-linux-mid-weight-distro-focused-simplicity
作者:[JACK WALLEN][a]
译者:[qfzy1233](https://github.com/qfzy1233)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://antixlinux.com/
[2]:https://en.wikipedia.org/wiki/MEPIS
[3]:https://lxde.org/
[4]:https://mxlinux.org/download-links
[5]:/files/images/mxlinux1jpg
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_1.jpg?itok=i9bNScjH (install)
[7]:/licenses/category/used-permission
[8]:/files/images/mxlinux3jpg
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_3.jpg?itok=ppf2l_bm (user)
[10]:/files/images/mxlinux4jpg
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mxlinux_4.jpg?itok=mS1eBy9m (desktop)
[12]:/files/images/mxlinux5jpg
[13]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_5.jpg?itok=wsN1hviN (panel)
[14]:/files/images/mxlinux6jpg
[15]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_6.jpg?itok=vw8mIp9T (sharing)
[16]:/files/images/mxlinux7jpg
[17]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_7.jpg?itok=tRIWdcEk (sharing)
[18]:/files/images/mxlinux8jpg
[19]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/mxlinux_8.jpg?itok=ZS6lhZN2 (Samba)
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command line quick tips: Permissions)
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-permissions/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
命令行快速提示:权限
======
![][1]
Fedora 与所有基于 Linux 的系统一样,提供了一组强大的安全特性。其中一个基本特性是文件和文件夹上的 _权限_。这些权限保护文件和文件夹免受未经授权的访问。本文将简要介绍这些权限,并向你展示如何使用它们共享对文件夹的访问。
### 权限基础
Fedora 本质上是一个多用户操作系统,它也有 _组_,用户可以是其成员。但是,想象一下一个没有权限概念的多用户系统,不同的登录用户可以随意阅读彼此的内容。你可以想象到这对隐私或安全性并不是很好。
Fedora 上的任何文件或文件夹都分配了三组权限。第一组用于拥有文件或文件夹的 _用户_,第二组用于拥有它的 _组_,第三组用于其他人,即既不是该文件所有者、也不属于该文件所属组的用户。有时这组权限也被称为 _world_(即所有人)。
### 权限意味着什么
每组权限都有三种形式:_读_、_写_ 和 _执行_。每种权限都可以用其首字母 _r_、_w_、_x_ 来表示。
#### 文件权限
对于 _文件_,权限的含义如下所示:
* 读r可以读取文件内容
* 写w可以更改文件内容
* 执行x可以执行文件 -- 这主要用于打算直接运行的程序或脚本
当你对任何文件进行详细信息列表查看时,可以看到这三组权限。尝试查看系统上的 _/etc/services_ 文件:
```
$ ls -l /etc/services
-rw-r--r--. 1 root root 692241 Apr 9 03:47 /etc/services
```
注意列表左侧的权限组。如上所述,这些表明三种用户的权限:拥有该文件的用户,拥有该文件的组以及其他人。用户所有者是 _root_,组所有者是 _root_ 组。用户所有者具有对文件的读写权限_root_ 组中的任何人都只能读取该文件。最后,其他任何人也只能读取该文件。(最左边的破折号显示这是一个常规文件。)
顺便说一下,你通常会在许多(但不是所有)系统配置文件上发现这组权限,它们只由系统管理员而不是普通用户更改。通常,普通用户需要读取其内容。
#### 文件夹(目录)权限
对于文件夹,权限的含义略有不同:
* 读r可以读取文件夹内容例如 _ls_ 命令)
* 写w可以更改文件夹内容可以在此文件夹中创建或删除文件
* 执行x可以搜索文件夹但无法读取其内容。这听起来可能很奇怪但解释起来需要更复杂的文件系统细节这超出了本文的范围所以现在就这样吧。
看一下 _/etc/grub.d_ 文件夹的例子:
```
$ ls -ld /etc/grub.d
drwx------. 2 root root 4096 May 23 16:28 /etc/grub.d
```
注意最左边的 _d_,它显示这是一个目录(或文件夹)。权限显示用户所有者(_root_)可以读取、更改和 _cd_ 进入此文件夹。但是,没有其他人可以这样做,无论他们是否是 _root_ 组的成员。注意,作为普通用户,你不能 _cd_ 进入该文件夹:
```
$ cd /etc/grub.d
bash: cd: /etc/grub.d: Permission denied
```
注意你自己的主目录是如何设置的:
```
$ ls -ld $HOME
drwx------. 221 paul paul 28672 Jul 3 14:03 /home/paul
```
现在,注意除了作为所有者之外,没有人可以访问此文件夹中的任何内容。这是特意的!你不希望其他人能够在共享系统上读取你的私人内容。
### 创建共享文件夹
你可以利用此权限功能轻松创建一个文件夹以在组内共享。假设你有一个名为 _finance_ 的小组,其中有几个成员需要共享文档。因为这些是用户文档,所以将它们存储在 _/home_ 文件夹层次结构中是个好主意。
首先,[使用][2] _[sudo][2]_ 创建一个共享文件夹,并将其设置为 _finance_ 组所有:
```
$ sudo mkdir -p /home/shared/finance
$ sudo chgrp finance /home/shared/finance
```
默认情况下,新文件夹具有这些权限。注意任何人都可以读取或搜索它,即使他们无法创建或删除其中的文件:
```
drwxr-xr-x. 2 root root 4096 Jul 6 15:35 finance
```
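顺带补充一点(正文未展开的背景):新文件夹的默认权限来自进程的 umask,新目录的权限等于 777 减去掩码值。下面是一个小演示(这里显式设置常见的默认掩码 022,以便结果确定):

```
base=$(mktemp -d)
umask 022                # 显式设置为常见的默认掩码
mkdir "$base/newdir"     # 新目录权限 = 777 - 022 = 755
ls -ld "$base/newdir"    # 显示 drwxr-xr-x
```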
对于金融数据来说,这似乎不是一个好主意。接下来,使用 _chmod_ 命令更改共享文件夹的模式(权限)。注意,使用 _g_ 更改所属组的权限,使用 _o_ 更改其他用户的权限。同样_u_ 会更改用户所有者的权限:
```
$ sudo chmod g+w,o-rx /home/shared/finance
```
生成的权限看起来更好。现在_finance_ 组中的任何人(或用户所有者 _root_)都可以完全访问该文件夹及其内容:
```
drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance
```
如果其他用户尝试访问共享文件夹,他们将无法执行此操作。太棒了!现在,我们的金融部门可以将文档放在一个共享的地方。
### 其他说明
还有其他方法可以操作这些权限。例如,你可能希望将此文件夹中的任何文件设置为 _finance_ 组所拥有。这需要本文未涉及的其他设置,但请继续关注杂志,以了解关于该主题的更多信息。
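文中提到的“其他设置”之一是目录上的 setgid 位(`chmod g+s`):设置之后,目录中新建的文件会自动继承目录的属组。下面在临时目录中做一个沙盒演示(仅为示意;实际场景中应像正文那样在共享文件夹上以 sudo 执行,目录名仅为示例):

```
base=$(mktemp -d)
mkdir "$base/finance"
chmod 770 "$base/finance"         # 组可读写、其他人无权访问,与正文效果相同
chmod g+s "$base/finance"         # setgid 位:之后新建的文件继承目录属组

touch "$base/finance/report.txt"  # 该文件的属组将与 finance 目录一致
ls -ld "$base/finance"            # 权限显示为 drwxrws---
```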
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/command-line-quick-tips-permissions/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
[2]: https://fedoramagazine.org/howto-use-sudo/


@ -0,0 +1,154 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install Elasticsearch on MacOS)
[#]: via: (https://opensource.com/article/19/7/installing-elasticsearch-macos)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo/users/don-watkins)
如何在 MacOS 上安装 Elasticsearch
======
安装 Elasticsearch 很复杂!以下是如何在 Mac 上安装。
![magnifying glass on computer screen][1]
[Elasticsearch][2] 是一个用 Java 开发的开源全文搜索引擎。用户上传 JSON 格式的数据集。然后,Elasticsearch 会先保存原始文档,再向集群索引中的文档添加可搜索的引用。
Elasticsearch 创建还不到九年但它是最受欢迎的企业搜索引擎。Elastic 在 2019 年 6 月 25 日发布了最新的更新版本 7.2.0。
[Kibana][3] 是 Elasticsearch 的开源数据可视化工具。此工具可帮助用户在 Elasticsearch 集群的内容索引之上创建可视化。
[Sunbursts][4]、[地理空间数据地图][5]、[关系分析][6]和实时数据面板只是其中几个功能。并且由于 Elasticsearch 的机器学习能力,你可以了解哪些属性可能会影响你的数据(如服务器或 IP 地址)并查找异常模式。
在上个月的 [DevFest DC][7] 中,[Summer Rankin 博士][8]Booz Allen Hamilton 的首席数据科学家将 TED Talk 的内容数据集上传到了 Elasticsearch然后使用 Kibana 快速构建了面板。出于好奇,几天后我去了 Elasticsearch 聚会。
由于本课程针对的是新手,因此我们从第一步开始:在我们的笔记本上安装 Elastic 和 Kibana。如果不安装这两个包,我们就无法用莎士比亚文本数据集这个测试 JSON 文件来创建可视化。
接下来,我将分享在 MacOS 上下载、安装和运行 Elasticsearch V7.1.1 的分步说明。这是我在 2019 年 6 月中旬参加 Elasticsearch 聚会时的最新版本。
### 下载适合 MacOS 的 Elasticsearch
1. 进入 <https://www.elastic.co/downloads/elasticsearch>,你会看到下面的页面:
![The Elasticsearch download page.][9]
2. 在**下载**栏,单击 **MacOS**,将 Elasticsearch TAR 文件(例如,**elasticsearch-7.1.1-darwin-x86_64.tar**)下载到 **Downloads** 文件夹。
  3. 双击此文件并解压到自己的文件夹中(例如,**elasticsearch-7.1.1**),这其中包含 TAR 中的所有文件。
**提示**:如果你希望 Elasticsearch 放在另一个文件夹中,现在可以移动。
### 在 MacOS 命令行中运行 Elasticsearch
如果你愿意,你可以只用命令行运行 Elasticsearch。只需遵循以下流程
1. [打开**终端**窗口][10]。
  2. 在终端窗口中,输入你的 Elasticsearch 文件夹。例如(如果你移动了程序,请将 **Downloads** 更改为正确的路径):
**$ cd ~/Downloads/elasticsearch-7.1.1**
3. 切换到 Elasticsearch **bin** 子文件夹,然后启动该程序。例如:
**$ cd bin**
**$ ./elasticsearch**
这是我启动 Elasticsearch 7.1.1 时命令行终端显示的一些输出:
![Terminal output when running Elasticsearch.][11]
**注意**默认情况下Elasticsearch 在前台运行,这可能会导致计算机速度变慢。按 **Ctrl-C** 可以阻止 Elasticsearch 运行。
### 使用 GUI 运行 Elasticsearch
如果你更喜欢点击,你可以像这样运行 Elasticsearch
1. 打开一个新的 **Finder** 窗口。
  2. 在左侧 Finder 栏中选择 **Downloads**(如果你将 Elasticsearch 移动到了另一个文件夹,请进入它)。
  3. 打开名为 **elasticsearch-7.1.1** 的文件夹(对于此例)。出现了八个子文件夹。
![The elasticsearch/bin menu.][12]
4. 打开 **bin** 子文件夹。如上面的截图所示,此子文件夹中有 20 个文件。
  5. 单击第一个文件,即 **elasticsearch**
请注意,你可能会收到安全警告,如下所示:
![The security warning dialog box.][13]
 
这时候要打开程序:
1. 在警告对话框中单击 **OK**
  2. 打开**系统偏好**。
  3. 单击**安全和隐私**,打开如下窗口:
![Where you can allow your computer to open the downloaded file.][14]
4. 单击**永远打开**,打开如下所示的确认对话框:
![Security confirmation dialog box.][15]
5. 单击**打开**。会打开一个终端窗口并启动 Elasticsearch。
启动过程可能需要一段时间,所以让它继续运行。最终,它将完成,你最后将看到类似这样的输出:
![Launching Elasticsearch in MacOS.][16]
### 了解更多
安装 Elasticsearch 之后,就可以开始探索了!
该工具的 [Elasticsearch:开始使用][17]指南会根据你的目标指导你。它的介绍视频介绍了在 [Elasticsearch Service][18] 上启动托管集群、执行基本搜索查询,以及通过 CRUD(创建、读取、更新和删除)REST API 操作数据的步骤。
本指南还提供文档链接,开发控制台命令,培训订阅以及 Elasticsearch Service 的免费试用版。此试用版允许你在 AWS 和 GCP 上部署 Elastic 和 Kibana 以支持云中的 Elastic 集群。
在本文的后续内容中,我们将介绍在 MacOS 上安装 Kibana 所需的步骤。此过程将通过不同的数据可视化将你的 Elasticsearch 查询带到一个新的水平。 敬请关注!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/installing-elasticsearch-macos
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://www.getapp.com/it-management-software/a/qbox-dot-io-hosted-elasticsearch/
[3]: https://www.elastic.co/products/kibana
[4]: https://en.wikipedia.org/wiki/Pie_chart#Ring
[5]: https://en.wikipedia.org/wiki/Spatial_analysis
[6]: https://en.wikipedia.org/wiki/Correlation_and_dependence
[7]: https://www.devfestdc.org/
[8]: https://www.summerrankin.com/about
[9]: https://opensource.com/sites/default/files/uploads/wwa1f3_600px_0.png (The Elasticsearch download page.)
[10]: https://support.apple.com/en-ca/guide/terminal/welcome/mac
[11]: https://opensource.com/sites/default/files/uploads/io6t1a_600px.png (Terminal output when running Elasticsearch.)
[12]: https://opensource.com/sites/default/files/uploads/o43yku_600px.png (The elasticsearch/bin menu.)
[13]: https://opensource.com/sites/default/files/uploads/elasticsearch_security_warning_500px.jpg (The security warning dialog box.)
[14]: https://opensource.com/sites/default/files/uploads/the_general_tab_of_the_system_preferences_security_and_privacy_window.jpg (Where you can allow your computer to open the downloaded file.)
[15]: https://opensource.com/sites/default/files/uploads/confirmation_dialog_box.jpg (Security confirmation dialog box.)
[16]: https://opensource.com/sites/default/files/uploads/y5dvtu_600px.png (Launching Elasticsearch in MacOS.)
[17]: https://www.elastic.co/webinars/getting-started-elasticsearch?ultron=%5BB%5D-Elastic-US+CA-Exact&blade=adwords-s&Device=c&thor=elasticsearch&gclid=EAIaIQobChMImdbvlqOP4wIVjI-zCh3P_Q9mEAAYASABEgJuAvD_BwE
[18]: https://info.elastic.co/elasticsearch-service-gaw-v10-nav.html?ultron=%5BB%5D-Elastic-US+CA-Exact&blade=adwords-s&Device=c&thor=elasticsearch%20service&gclid=EAIaIQobChMI_MXHt-SZ4wIVJBh9Ch3wsQfPEAAYASAAEgJo9fD_BwE


@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ElectronMail a Desktop Client for ProtonMail and Tutanota)
[#]: via: (https://itsfoss.com/electronmail/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
ElectronMail - ProtonMail 和 Tutanota 的桌面客户端
======
互联网上的大多数人都拥有来自 Google 等大公司的电子邮件帐户,但这些帐户并不尊重你的隐私。值得庆幸的是,目前有 [Tutanota][1] 和 [ProtonMail][2] 等注重隐私的替代品。问题是并非所有这些服务都有桌面客户端。今天,我们将研究一个为你解决该问题的项目。我们来看看 ElectronMail 吧。
Electron-ic 警告!
以下应用是使用 Electron 构建的(也就是名为 ElectronMail 的原因之一)。如果使用 Electron 让你感到不安,请将此视为触发警告。
### ElectronMailTutanota 和 ProtonMail 的桌面客户端
![Electron Mail About][3]
[ElectronMail][4] 简单地为 ProtonMail 和 Tutanota 设置了一个电子邮件客户端。它使用三大技术构建:[Electron][5]、[TypeScript][6] 和 [Angular][7]。它包括以下功能:
* 每个电子邮件提供商提供多帐户支持
  * 加密本地存储
  * 适用于 Linux、Windows、macOS 和 FreeBSD
  * 原生通知
  * 系统托盘图标,包含未读消息总数
  * 主密码保护帐户信息
  * 可切换的视图布局
  * 离线访问电子邮件
  * 电子邮件本地加密存储
  * 批量导出电子邮件为 EML 文件
  * 全文搜索
  * 内置/预打包的 Web 客户端
  * 为每个帐户配置代理
  * 拼写检查
  * 支持双因素身份验证,以提高安全性
目前ElectronMail 仅支持 Tutanota 和 ProtonMail。我觉得他们将来会增加更多。根据 [GitHub 页面][4]:“多电子邮件提供商支持。 目前支持 ProtonMail 和 Tutanota 目前。“
ElectronMail 目前以 MIT 许可证发布。
#### 如何安装 ElectronMail
目前,有几种方法可以在 Linux 上安装 ElectronMail。对于 Arch 和基于 Arch 的发行版,你可以从 [Arch 用户仓库][8]安装它。ElectronMail 还有一个 Snap 包。要安装它,只需输入 `sudo snap install electron-mail` 即可。
对于所有其他 Linux 发行版,你可以[下载][9] `.deb``.rpm` 文件。
![Electron Mail Inbox][10]
你也可以[下载][9]用于 Windows 的 `.exe` 安装程序或用于 macOS 的 `.dmg` 文件,甚至还有用于 FreeBSD 的文件。
#### 删除 ElectronMail
如果你安装了 ElectronMail 并确定它不适合你,那么[开发者][12]建议采用几个步骤。 **在卸载应用之前,请务必遵循以下步骤。**
如果你使用了“保持登录”功能,请单击菜单上的“注销”。这将删除本地保存的主密码。卸载 ElectronMail 后可以删除主密码,但这涉及编辑系统密钥链。
你还需要手动删除设置文件夹。在系统托盘中选择应用图标后,单击“打开设置文件夹”可以找到它。
![Electron Mail Setting][13]
### 我对 ElectronMail 的看法
我通常不使用电子邮件客户端。事实上,我主要依赖于 Web 客户端。所以,这个应用对我没太大用处。
话虽这么说,但是 ElectronMail 看着不错,而且很容易设置。它有大量开箱即用的功能,并且高级功能并不难激活。
我遇到的一个问题与搜索有关。根据功能列表ElectronMail 支持全文搜索。但是Tutanota 的免费版本仅支持有限的搜索。我想知道 ElectronMail 如何处理这个问题。
最后ElectronMail 只是一个基于 Web Enail 的一个 Electron 封装。我宁愿在浏览器中打开它们,而不是将单独的系统资源用于运行 Electron。如果你只[使用 Tutanota他们有自己官方的 Electron 桌面客户端][14]。你可以尝试一下。
我最大的问题是安全性。这是两个非常安全的电子邮件服务的非官方应用。如果它有办法捕获你的登录信息或阅读你的电子邮件怎么办?这需要比我聪明的人审读源代码才能确定。这始终是安全项目的非官方应用所面临的问题。
你有没有使用过 ElectronMail你认为是否值得安装 ElectronMail你最喜欢的邮件客户端是什么请在下面的评论中告诉我们。
如果你发现这篇文章很有趣请花一点时间在社交媒体、Hacker News 或 [Reddit][15] 上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/electronmail/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tutanota-review/
[2]: https://itsfoss.com/protonmail/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/electron-mail-about.jpg?resize=800%2C500&ssl=1
[4]: https://github.com/vladimiry/ElectronMail
[5]: https://electronjs.org/
[6]: http://www.typescriptlang.org/
[7]: https://angular.io/
[8]: https://aur.archlinux.org/packages/electronmail-bin
[9]: https://github.com/vladimiry/ElectronMail/releases
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/electron-mail-inbox.jpg?ssl=1
[12]: https://github.com/vladimiry
[14]: https://itsfoss.com/tutanota-desktop/
[15]: http://reddit.com/r/linuxusersgroup


@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Become a lifelong learner and succeed at work)
[#]: via: (https://opensource.com/open-organization/19/7/informal-learning-adaptability)
[#]: author: (Colin Willis https://opensource.com/users/colinwillishttps://opensource.com/users/marcobravo)
成为终身学习者,并在工作中取得成功
======
在具有适应性文化的开放组织中,学习应该一直持续——但它并不总是发生在正式场合。我们真的明白它是如何工作的吗?
![Writing in a notebook][1]
持续学习是指人们为发展自己而进行的持续的、职业驱动的、有意识的学习过程。对于那些认为自己是持续学习者的人来说,学习从未停止——这些人从日常经历中看到学习机会。与同事进行辩论、反思反馈、在互联网上寻找问题的解决方案、尝试新事物或冒险都是一个人在工作中可以进行的非正式学习活动的例子。
持续学习是开放组织中任何人的核心能力。毕竟,开放的组织是建立在同行相互思考、争论和行动的基础上的。在开放组织的模棱两可、话语驱动的世界中茁壮成长,每天都需要员工具备这些技能。
不幸的是,科学文献在传播“我们如何在工作中学习”的知识、帮助个人认识和发展自身学习能力方面,做得很差。因此,在本文系列中,我将向您介绍非正式学习,并帮助您理解:将学习视为一种技能,如何帮助您在任何组织中茁壮成长,尤其是在开放组织中。
### 为什么这么正式?
迄今为止,对组织中学习的科学研究主要集中在正式学习而不是非正式学习的设计、实施和评估上。
投资于员工知识、技能和能力的发展,是组织保持其相对于竞争对手优势的重要方式。组织通过创建或购买课程、在线课程、研讨会等来提供正式的学习机会,旨在指导个人学习与工作相关的内容,就像学校里的一节课。对于一个组织来说,提供一门课程是一种简单(尽管昂贵)的方法,可以确保其员工的技能或知识保持最新。同样,教室环境是研究人员的天然实验室,使得基于培训的研究和工作不仅可能,而且强大。
最近的评估表明70%到80%的工作相关知识不是在培训中学到的,而是非正式的在职学习中得到的。
当然,人们不需要训练来学习一些东西;通常,人们通过研究答案、与同事交谈、思考、实验或适应变化来学习。事实上,[最近的评估表明][2]70%到80%的与工作相关的知识不是在培训中学到的,而是在工作中非正式学到的。这并不是说正式培训无效;培训可能非常有效,但它是一种精确的干预方式。在工作的大部分方面正式培训一个人是不现实的,尤其是当这些工作变得更加复杂的时候。
因此,非正式学习,或者任何发生在结构化学习环境之外的学习,对工作场所来说是极其重要的。事实上,[最近的科学证据][3]表明,非正式学习比正式培训更能预测工作表现。
那么,为什么组织和科学界如此关注培训呢?
### 循环过程
除了我前面提到的原因,研究非正式学习可能非常困难。与正式学习不同,非正式学习发生在非结构化环境中,高度依赖于个人,很难或不可能观察并研究。
直到最近,大多数关于非正式学习的研究都集中在定义非正式学习的合格特征和确定非正式学习在理论上是如何与工作经验联系在一起的。研究人员描述了一个[动态的周期性过程][4],通过这个过程,个人可以在组织中非正式地学习。
与正式学习不同,非正式学习发生在非结构化环境中,高度依赖于个人,很难或不可能被观察。
在这个过程中,个人和组织都拥有创造学习机会的主动性。例如,一个人可能对学习某样东西感兴趣,并为此做出学习行为。组织可能以反馈的形式向个人传递学习的需要:这可能是一次糟糕的绩效评估、一条针对项目的评论,或者组织环境中并非针对个人的更广泛的变化。这些力量或者作用于组织环境(例如,有人尝试了一个新想法,其同事认可并奖励了这种行为),或者通过个人头脑中的反思起作用(例如,有人反思了关于其表现的反馈,并决定在学习上投入更多的努力)。与培训不同,非正式学习不遵循正式的线性过程。一个人可以在任何时候体验过程的任何部分,也可以同时体验过程的多个部分。
### 开放组织中的非正式学习
具体而言,在开放组织中,对等级制度的重视程度降低,对参与式文化的重视程度提高,这两者都推动了这种非正式的学习。简而言之,开放的组织只是为个人和组织环境提供了更多互动和激发学习的机会。此外,想法和变革需要开放组织中员工更广泛的认同——而认同需要对他人的适应性和洞察力的欣赏。
也就是说,仅仅增加学习机会并不能保证学习会发生或成功。有人甚至可能会说,开放组织中常见的模糊性和开放式讨论,可能会让不擅长持续学习的人(持续学习是一种随时间养成的习惯,也是开放组织的一项核心能力)无法尽可能有效地为组织做出贡献。
解决这些问题需要一种以一致的方式跟踪非正式学习的方法。最近,科学界呼吁创造衡量非正式学习的方法,以便通过系统的研究来回答非正式学习的前因后果的问题。我自己的研究正是回应了这一呼吁:我花了几年时间发展和完善我们对非正式学习行为的理解,以便对它们进行测量。
在本文系列的第二部分,我将重点介绍我最近在一个开放组织中进行的一项研究的结果,在该研究中,我测试了我对非正式学习行为的研究,并将它们与更广泛的工作环境和个人工作成果联系起来。
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/7/informal-learning-adaptability
作者:[Colin Willis][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/colinwillishttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_ (Writing in a notebook)
[2]: https://www.groupoe.com/images/Accelerating_On-the-Job-Learning_-_White_Paper.pdf
[3]: https://www.researchgate.net/publication/316490244_Antecedents_and_Outcomes_of_Informal_Learning_Behaviors_a_Meta-Analysis
[4]: https://psycnet.apa.org/record/2008-13469-009