diff --git a/translated/tech/20190826 5 ops tasks to do with Ansible.md b/published/20190826 5 ops tasks to do with Ansible.md similarity index 64% rename from translated/tech/20190826 5 ops tasks to do with Ansible.md rename to published/20190826 5 ops tasks to do with Ansible.md index 36b0d955cb..de7916b81d 100644 --- a/translated/tech/20190826 5 ops tasks to do with Ansible.md +++ b/published/20190826 5 ops tasks to do with Ansible.md @@ -1,30 +1,32 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11312-1.html) [#]: subject: (5 ops tasks to do with Ansible) [#]: via: (https://opensource.com/article/19/8/ops-tasks-ansible) [#]: author: (Mark Phillips https://opensource.com/users/markphttps://opensource.com/users/adminhttps://opensource.com/users/alsweigarthttps://opensource.com/users/belljennifer43) -5 个使用 Ansible 的运维任务 +5 个 Ansible 运维任务 ====== -更少的 DevOps、更多的 OpsDev + +> 让 DevOps 少一点,OpsDev 多一点。 + ![gears and lightbulb to represent innovation][1] -在这个 DevOps 世界中,有时看起来开发 (Dev) 开始成为关注的焦点,而运维 (Ops) 则是关系中被遗忘的一半。这几乎就好像领先的开发告诉尾随的运维做什么,几乎所有的“运维”都是开发说要做的。因此,运维被抛到后面,降级到了替补席上。 +在这个 DevOps 世界中,看起来开发(Dev)这一半成为了关注的焦点,而运维(Ops)则是这个关系中被遗忘的另一半。这几乎就好像是领头的开发告诉尾随的运维做什么,几乎所有的“运维”都是开发说要做的。因此,运维被抛到后面,降级到了替补席上。 我想看到更多的 OpsDev。因此,让我们来看看 Ansible 在日常的运维中可以帮助你什么。 ![Job templates][2] -我选择在 [Ansible Tower][3] 中展示这些方案,因为我认为用户界面 (UI) 为大多数任务增加了价值。如果你想模仿,你可以在 Tower 的上游开源版本 [AWX][4] 中测试它。 +我选择在 [Ansible Tower][3] 中展示这些方案,因为我认为用户界面 (UI) 可以增色大多数的任务。如果你想模拟测试,你可以在 Tower 的上游开源版本 [AWX][4] 中测试它。 ### 管理用户 -在大规模环境中,你的用户将集中在 Active Directory 或 LDAP 等系统中。但我敢打赌,仍然存在许多环境,其中包含大量的静态用户。Ansible 可以帮助你集中分散的问题。 _社区_ 已为我们解决了这个问题。看看 [Ansible Galaxy][5] 角色**[用户][6]**。 +在大规模环境中,你的用户将集中在活动目录或 LDAP 等系统中。但我敢打赌,仍然存在许多包含大量的静态用户的全负荷环境。Ansible 可以帮助你将这些分散的环境集中到一起。*社区*已为我们解决了这个问题。看看 [Ansible Galaxy][5] 中的 [users][6] 角色。 -这个角色的聪明之处在于它允许我们通过 *data* 管理用户,无需更改运行逻辑。 
+这个角色的聪明之处在于它允许我们通过*数据*管理用户,而无需更改运行逻辑。 ![User data][7] @@ -32,7 +34,7 @@ ### 管理 sudo -有多种形式][8]可以升级特权,但最受欢迎的是 [sudo][9]。通过每个用户、组等的离散文件来管理 sudo 相对容易。但一些人对给予特权升级感到紧张,并倾向于有时限地给予特权升级。因此[下面是一种方案] [10],使用简单的 **at** 命令对授权访问设置时间限制。 +提权有[多种形式][8],但最流行的是 [sudo][9]。通过每个 `user`、`group` 等离散文件来管理 sudo 相对容易。但一些人对给予特权感到紧张,并倾向于有时限地给予提权。因此[下面是一种方案][10],它使用简单的 `at` 命令对授权访问设置时间限制。 ![Managing sudo][11] @@ -44,13 +46,13 @@ ### 管理磁盘空间 -这有[一个简单的角色][14],可在特定目录中查找大小大于 _N_ 的文件。在 Tower 中这么做时,启用 [callbacks][15] 有额外的好处。想象一下,你的监控方案发现文件系统已超过 X% 并触发 Tower 中的任务以找出是什么文件导致的。 +这有[一个简单的角色][14],可在特定目录中查找字节大于某个大小的文件。在 Tower 中这么做时,启用[回调][15]有额外的好处。想象一下,你的监控方案发现文件系统已超过 X% 并触发 Tower 中的任务以找出是什么文件导致的。 ![Managing disk space][16] ### 调试系统性能问题 -[这个角色][17]相当简单:它运行一些命令并打印输出。细节在最后输出,让你、系统管理员快速浏览一眼。另外可以使用 [regexs][18] 在输出中找到某些条件(比如说 CPU 占用率超过 80%)。 +[这个角色][17]相当简单:它运行一些命令并打印输出。细节在最后输出,让你 —— 系统管理员快速浏览一眼。另外可以使用 [正则表达式][18] 在输出中找到某些条件(比如说 CPU 占用率超过 80%)。 ![Debugging system performance][19] @@ -65,7 +67,7 @@ via: https://opensource.com/article/19/8/ops-tasks-ansible 作者:[Mark Phillips][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20190830 Change your Linux terminal color theme.md b/published/20190830 Change your Linux terminal color theme.md similarity index 53% rename from translated/tech/20190830 Change your Linux terminal color theme.md rename to published/20190830 Change your Linux terminal color theme.md index 6b1c4d9ea8..321dc40997 100644 --- a/translated/tech/20190830 Change your Linux terminal color theme.md +++ b/published/20190830 Change your Linux terminal color theme.md @@ -1,32 +1,34 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11310-1.html) [#]: subject: 
(Change your Linux terminal color theme) [#]: via: (https://opensource.com/article/19/8/add-color-linux-terminal) [#]: author: (Seth Kenlon https://opensource.com/users/seth) -更改 Linux 终端颜色主题 +如何更改 Linux 终端颜色主题 ====== -你的终端有丰富的选项来来定义你看到的主题。 -![Terminal command prompt on orange background][1] -如果你大部分时间都盯着一个终端,那么你很自然地希望它看起来赏心悦目。美与不美,全在观者,终端自 CRT 串口控制台以来已经走了很长一段路。因此,你的软件终端窗口有丰富的选项来定义你看到的主题,不管你如何定义美,这是件好事。 +> 你可以用丰富的选项来定义你的终端主题。 + +![](https://img.linux.net.cn/data/attachment/album/201909/06/070600ztd434ppd99df99d.jpg) + +如果你大部分时间都盯着终端,那么你很自然地希望它看起来能赏心悦目。美与不美,全在观者,自 CRT 串口控制台以来,终端已经经历了很多变迁。因此,你的软件终端窗口有丰富的选项,可以用来定义你看到的主题,不管你如何定义美,这总是件好事。 ### 设置 -最受欢迎的软件终端应用,包括GNOME、KDE 和 Xfce,它们都提供了更改其颜色主题的选项。调整主题就像调整应用首选项一样简单。Fedora、RHEL 和 Ubuntu 默认使用 GNOME,因此本文使用该终端作为示例,但 Konsole、Xfce 终端和许多其他终端的流程类似。 +包括 GNOME、KDE 和 Xfce 在内的流行的软件终端应用,它们都提供了更改其颜色主题的选项。调整主题就像调整应用首选项一样简单。Fedora、RHEL 和 Ubuntu 默认使用 GNOME,因此本文使用该终端作为示例,但对 Konsole、Xfce 终端和许多其他终端的设置流程类似。 首先,进入到应用的“首选项”或“设置”面板。在 GNOME 终端中,你可以通过屏幕顶部或窗口右上角的“应用”菜单访问它。 -在“首选项”中,单击“配置文件” 旁边的加号 (+) 来创建新的主题配置文件。在新配置文件中,单击“颜色”选项卡。 +在“首选项”中,单击“配置文件” 旁边的加号(“+”)来创建新的主题配置文件。在新配置文件中,单击“颜色”选项卡。 ![GNOME Terminal preferences][2] -在“颜色”选项卡中,取消选择“使用系统主题中的颜色”选项,以使窗口的其余部分变为可选状态。最开始,你可以选择内置的颜色方案。这些包括浅色主题,它有明亮的背景和深色的前景文字,还有深色主题,它有深色背景和浅色前景文字。 +在“颜色”选项卡中,取消选择“使用系统主题中的颜色”选项,以使窗口的其余部分变为可选状态。最开始,你可以选择内置的颜色方案。这些包括浅色主题,它有明亮的背景和深色的前景文字;还有深色主题,它有深色背景和浅色前景文字。 -当没有其他设置(例如 dircolors 命令的设置)覆盖它们时,“默认颜色”色板将同时定义前景色和背景色。“调色板”设置 dircolors 命令定义的颜色。这些颜色由终端以 LS_COLORS 环境变量的形式使用,以在 [ls][3] 命令的输出中添加颜色。如果它们都不吸引你,请在此更改它们。 +当没有其他设置(例如 `dircolors` 命令的设置)覆盖它们时,“默认颜色”色板将同时定义前景色和背景色。“调色板”设置 `dircolors` 命令定义的颜色。这些颜色由终端以 `LS_COLORS` 环境变量的形式使用,以在 [ls][3] 命令的输出中添加颜色。如果这些颜色不吸引你,请在此更改它们。 如果对主题感到满意,请关闭“首选项”窗口。 @@ -36,28 +38,27 @@ ### 命令选项 -如果你的终端没有不错的设置窗口,它仍然可以在启动命令中提供颜色选项。xterm 和 rxvt 终端(旧的和启用 Unicode 的变体,有时称为 urxvt 或 rxvt-unicode)都提供了这样的选项,因此即使没有桌面环境和大型 GUI 框架,你仍然可以设置终端模拟器的主题。 +如果你的终端没有合适的设置窗口,它仍然可以在启动命令中提供颜色选项。xterm 和 rxvt 终端(旧的和启用 Unicode 的变体,有时称为 urxvt 或 rxvt-unicode)都提供了这样的选项,因此即使没有桌面环境和大型 GUI 
框架,你仍然可以设置终端模拟器的主题。 -两个明显的选项是前景色和背景色,分别用 **-fg** 和 **-bg** 定义。每个选项的参数是颜色_名_而不是它的 ANSI 编号。例如: +两个明显的选项是前景色和背景色,分别用 `-fg` 和 `-bg` 定义。每个选项的参数是*颜色名*而不是它的 ANSI 编号。例如: ``` -`$ urxvt -bg black -fg green` +$ urxvt -bg black -fg green ``` -这些设置默认的前景和背景。如果任何其他规则控制特定文件或设备类型的颜色,那么就使用这些颜色。有关如何设置它们的信息,请参阅 [dircolors][5] 命令。 - -你还可以使用 **-cr** 设置文本光标(而不是鼠标光标)的颜色: +这些会设置默认的前景和背景。如果有任何其他规则会控制特定文件或设备类型的颜色,那么就使用这些颜色。有关如何设置它们的信息,请参阅 [dircolors][5] 命令。 +你还可以使用 `-cr` 设置文本光标(而不是鼠标光标)的颜色: ``` -`$ urxvt -bg black -fg green -cr teal` +$ urxvt -bg black -fg green -cr teal ``` ![Setting color in urxvt][6] -你的终端模拟器可能有更多选项,如边框颜色(rxvt 中的 **-bd**)、光标闪烁(urxvt 中的 **-bc** 和 **+bc**),甚至背景透明度。请参阅终端的手册页,了解更多的功能。 +你的终端模拟器可能还有更多选项,如边框颜色(rxvt 中的 `-bd`)、光标闪烁(urxvt 中的 `-bc` 和 `+bc`),甚至背景透明度。请参阅终端的手册页,了解更多的功能。 -要使用你选择的颜色启动终端,你可以将选项添加到用于启动终端的命令或菜单中(例如,在你的 Fluxbox 菜单文件、**$HOME/.local/share/applications** 中的 **.desktop** 或者类似的)。或者,你可以使用 [xrdb][7] 工具来管理与 X 相关的资源(但这超出了本文的范围)。 +要使用你选择的颜色启动终端,你可以将选项添加到用于启动终端的命令或菜单中(例如,在你的 Fluxbox 菜单文件、`$HOME/.local/share/applications` 目录中的 `.desktop` 或者类似的)。或者,你可以使用 [xrdb][7] 工具来管理与 X 相关的资源(但这超出了本文的范围)。 ### 家是可定制的地方 @@ -70,7 +71,7 @@ via: https://opensource.com/article/19/8/add-color-linux-terminal 作者:[Seth Kenlon][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20190903 The birth of the Bash shell.md b/published/20190903 The birth of the Bash shell.md new file mode 100644 index 0000000000..8102b01097 --- /dev/null +++ b/published/20190903 The birth of the Bash shell.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-11314-1.html) +[#]: subject: (The birth of the Bash shell) +[#]: via: (https://opensource.com/19/9/command-line-heroes-bash) +[#]: author: (Matthew Broberg 
https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg) + +Bash shell 的诞生 +====== + +> 本周的《代码英雄》播客深入研究了最广泛使用的、已经成为事实标准的脚本语言,它来自于自由软件基金会及其作者的早期灵感。 + +![Listen to the Command Line Heroes Podcast][1] + +对于任何从事于系统管理员方面的人来说,Shell 脚本编程是一门必不可少的技能,而如今人们编写脚本的主要 shell 是 Bash。Bash 是几乎所有的 Linux 发行版和现代 MacOS 版本的默认配置,也很快就会成为 [Windows 终端][2]的原生部分。你可以说 Bash 无处不在。 + +那么它是如何做到这一点的呢?本周的《[代码英雄][3]》播客将通过询问编写那些代码的人来深入研究这个问题。 + +### 肇始于 Unix + +像所有编程方面的东西一样,我们必须追溯到 Unix。shell 的简短历史是这样的:1971 年,Ken Thompson 发布了第一个 Unix shell:Thompson shell。但是,脚本用户所能做的存在严重限制,这意味着严重制约了自动化以及整个 IT 运营领域。 + +这个[奇妙的研究][4]概述了早期尝试脚本的挑战: + +> 类似于它在 Multics 中的前身,这个 shell(`/bin/sh`)是一个在内核外执行的独立用户程序。诸如通配(参数扩展的模式匹配,例如 `*.txt`)之类的概念是在一个名为 `glob` 的单独的实用程序中实现的,就像用于计算条件表达式的 `if` 命令一样。这种分离使 shell 变得更小,才不到 900 行的 C 源代码。 +> +> shell 引入了紧凑的重定向(`<`、`>` 和 `>>`)和管道(`|` 或 `^`)语法,它们已经存在于现代 shell 中。你还可以找到对调用顺序命令(`;`)和异步命令(`&`)的支持。 +> +> Thompson shell 缺少的是编写脚本的能力。它的唯一目的是作为一个交互式 shell(命令解释器)来调用命令和查看结果。 + +随着对终端使用的增长,对自动化的兴趣随之增长。 + +### Bourne shell 前进一步 + +在 Thompson 发布 shell 六年后,1977 年,Stephen Bourne 发布了 Bourne shell,旨在解决Thompson shell 中的脚本限制。(Chet Ramey 是自 1990 年以来 Bash 语言的主要维护者,在这一集的《代码英雄》中讨论了它)。作为 Unix 系统的一部分,这是这个来自贝尔实验室的技术的自然演变。 + +Bourne 打算做什么不同的事情?[研究员 M. 
Jones][4] 很好地概述了它: + +> Bourne shell 有两个主要目标:作为命令解释器以交互方式执行操作系统的命令,和用于脚本编程(编写可通过 shell 调用的可重用脚本)。除了替换 Thompson shell,Bourne shell 还提供了几个优于其前辈的优势。Bourne 将控制流、循环和变量引入脚本,提供了更具功能性的语言来(以交互式和非交互式)与操作系统交互。该 shell 还允许你使用 shell 脚本作为过滤器,为处理信号提供集成支持,但它缺乏定义函数的能力。最后,它结合了我们今天使用的许多功能,包括命令替换(使用反引号)和 HERE 文档(以在脚本中嵌入保留的字符串字面量)。 + +Bourne 在[之前的一篇采访中][5]这样描述它: + +> 最初的 shell(编程语言)不是一种真正的语言;它是一种记录 —— 一种从文件中线性执行命令序列的方法,唯一的控制流原语是 `GOTO` 到一个标签。Ken Thompson 所编写的这个最初的 shell 的这些限制非常重要。例如,你无法简单地将命令脚本用作过滤器,因为命令文件本身是标准输入。而在过滤器中,标准输入是你从父进程继承的,不是命令文件。 +> +> 最初的 shell 很简单,但随着人们开始使用 Unix 进行应用程序开发和脚本编写,它就太有限了。它没有变量、没有控制流,而且它的引用能力非常不足。 + +对于脚本编写者来说,这个新 shell 是一个巨大的进步,但前提是你可以使用它。 + +### 以自由软件来重新构思 Bourne Shell + +在此之前,这个占主导地位的 shell 是由贝尔实验室拥有和管理的专有软件。幸运的话,你的大学可能有权访问 Unix shell。但这种限制性访问远非自由软件基金会(FSF)想要实现的世界。 + +Richard Stallman 和一群志同道合的开发人员那时正在编写 Unix 的所有功能,并以可自由获取的 GNU 许可证发布。其中一个开发人员的任务是制作一个 shell,那位开发人员是 Brian Fox。他对这项任务的讲述十分吸引我。正如他在播客上所说: + +> 它之所以如此具有挑战性,是因为我们必须忠实地模仿 Bourne shell 的所有行为,同时允许扩展它,以使其成为一个供人们使用的更好工具。 + +而那时也恰逢人们在讨论 shell 标准是什么的时候。在这一历史背景和将来的竞争前景下,流行的 Bourne shell 被重新构想,并再次重生。 + +### 重新打造 Bourne Shell + +自由软件的使命和竞争这两个催化剂使重制的 Bourne shell(Bash)具有了生命。和之前不同的是,Fox 并没有用自己的名字来命名这个 shell,他专注的是从 Unix 到自由软件的演变。(虽然 Fox Shell 这个名字看起来要比 Fish shell 更适合作为 fsh 命令 #missedopportunity)。这个命名选择似乎符合他的个性。正如 Fox 在剧集中所说,他甚至对个人的荣耀也不感兴趣;他只是试图帮助编程文化发展。不过,他也并不排斥一个好的双关语。 + +而 Bourne 也并没有因为这个拿他的名字开玩笑的 shell 命名而感到被冒犯。Bourne 讲述了一个故事:在一次会议上,有人走到他面前,给了他一件 Bash T 恤,而那个人正是 Brian Fox。 + +Shell | 发布于 | 创造者 +---|---|--- +Thompson Shell | 1971 | Ken Thompson +Bourne Shell | 1977 | Stephen Bourne +Bourne-Again Shell | 1989 | Brian Fox + +随着时间的推移,Bash 逐渐成长。其他工程师开始使用它,并对其设计进行改进。事实上,多年后,Fox 坚定地认为,学会放弃对 Bash 的控制是他一生中最重要的事情之一。随着 Unix 让位于 Linux 和开源软件运动,Bash 成为了开源世界中至关重要的脚本语言。这个伟大的项目似乎超出了任何一个人的愿景范围。 + +### 我们能从 shell 中学到什么?
+ +shell 是一项技术,它是笔记本电脑日常使用中的一个组成部分,你很容易忘记它也需要发明出来。从 Thompson 到 Bourne 再到 Bash,shell 的故事为我们描绘了一些熟悉的结论: + +* 有动力的人可以在正确的使命中取得重大进展。 +* 我们今天所依赖的大部分内容都建立在我们行业中仍然活着的那些传奇人物打下的基础之上。 +* 能够生存下来的软件超越了其原始创作者的愿景。 +   +代码英雄在全部的第三季中讲述了编程语言,并且正在接近它的尾声。[请务必订阅,来了解你想知道的有关编程语言起源的各种内容][3],我很乐意在下面的评论中听到你的 shell 故事。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/19/9/command-line-heroes-bash + +作者:[Matthew Broberg][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/commnad_line_hereoes_ep6_blog-header-292x521.png?itok=Bs1RlwoW (Listen to the Command Line Heroes Podcast) +[2]: https://devblogs.microsoft.com/commandline/introducing-windows-terminal/ +[3]: https://www.redhat.com/en/command-line-heroes +[4]: https://developer.ibm.com/tutorials/l-linux-shells/ +[5]: https://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh diff --git a/sources/news/20190905 Exploit found in Supermicro motherboards could allow for remote hijacking.md b/sources/news/20190905 Exploit found in Supermicro motherboards could allow for remote hijacking.md new file mode 100644 index 0000000000..6d2b48755b --- /dev/null +++ b/sources/news/20190905 Exploit found in Supermicro motherboards could allow for remote hijacking.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Exploit found in Supermicro motherboards could allow for remote hijacking) +[#]: via: 
(https://www.networkworld.com/article/3435123/exploit-found-in-supermicro-motherboards-could-allow-for-remote-hijacking.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Exploit found in Supermicro motherboards could allow for remote hijacking +====== +The vulnerability impacts three models of Supermicro motherboards. Fortunately, a fix is already available. +IDG / Thinkstock + +A security group discovered a vulnerability in three models of Supermicro motherboards that could allow an attacker to remotely commandeer the server. Fortunately, a fix is already available. + +Eclypsium, which specializes in firmware security, announced in its blog that it had found a set of flaws in the baseboard management controller (BMC) for three different models of Supermicro server boards: the X9, X10, and X11. + +**[ Also see: [What to consider when deploying a next-generation firewall][1] | Get regularly scheduled insights: [Sign up for Network World newsletters][2] ]** + +BMCs are designed to permit administrators remote access to the computer so they can do maintenance and other updates, such as firmware and operating system patches. It’s meant to be a secure port into the computer while at the same time walled off from the rest of the server. + +Normally BMCs are locked down within the network in order to prevent this kind of malicious access in the first place. In some cases, BMCs are left open to the internet so they can be accessed from a web browser, and those interfaces are not terribly secure. That’s what Eclypsium found. + +For its BMC management console, Supermicro uses an app called virtual media application. This application allows admins to remotely mount images from USB devices and CD or DVD-ROM drives. 
+When accessed remotely, the virtual media service allows for plaintext authentication, sends most of the traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass, [according to Eclypsium][3]. + +Eclypsium was more diplomatic than I, so I’ll say it: Supermicro was sloppy. + +**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]** + +These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, by using default credentials, or, in some cases, by supplying no credentials at all. + +"This means attackers can attack the server in the same way as if they had physical access to a USB port, such as loading a new operating system image or using a keyboard and mouse to modify the server, implant malware, or even disable the device entirely," Eclypsium wrote in its blog post. + +All told, the team found four different flaws within the virtual media service of the BMC's web control interface. + +### How an attacker could exploit the Supermicro flaws + +According to Eclypsium, the easiest way to attack the virtual media flaws is to find a server with the default login or brute force an easily guessed login (root or admin). In other cases, the flaws would have to be targeted. + +Normally, access to the virtual media service is conducted by a small Java application served on the BMC’s web interface. This application then connects to the virtual media service listening on TCP port 623 on the BMC. A scan by Eclypsium on port 623 turned up 47,339 exposed BMCs around the world. + +Eclypsium did the right thing: it contacted Supermicro and waited for the vendor to release [an update to fix the vulnerabilities][5] before going public. Supermicro thanked Eclypsium for not only bringing this issue to its attention but also helping validate the fixes.
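For administrators wondering whether their own BMCs are among the exposed, a quick reachability check of TCP port 623 is a sensible first step. The following bash sketch is illustrative only (the `port_open` helper and the loopback address are assumptions for this example, not part of Eclypsium's tooling), and mere reachability does not by itself prove a BMC is vulnerable:

```
#!/usr/bin/env bash
# Illustrative check: is anything answering on the port the BMC
# virtual media service listens on (TCP 623)?
port_open() {
    # bash's /dev/tcp pseudo-device attempts a TCP connection;
    # timeout guards against hosts that silently drop packets.
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

host=127.0.0.1   # substitute your BMC's management address here
if port_open "$host" 623; then
    echo "WARNING: something is listening on $host:623"
else
    echo "no service reachable on $host:623"
fi
```

A real audit would use a dedicated scanner, and an exposed BMC should be moved onto an isolated management network regardless of its patch level.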
+ +Eclypsium is on quite the roll. In July it disclosed BMC [vulnerabilities in motherboards from Lenovo, Gigabyte][6] and other vendors, and last month it [disclosed flaws in 40 device drivers][7] from 20 vendors that could be exploited to deploy malware. + +Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3435123/exploit-found-in-supermicro-motherboards-could-allow-for-remote-hijacking.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html +[2]: https://www.networkworld.com/newsletters/signup.html +[3]: https://eclypsium.com/2019/09/03/usbanywhere-bmc-vulnerability-opens-servers-to-remote-attack/ +[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[5]: https://www.supermicro.com/support/security_BMC_virtual_media.cfm +[6]: https://eclypsium.com/2019/07/16/vulnerable-firmware-in-the-supply-chain-of-enterprise-servers/ +[7]: https://eclypsium.com/2019/08/10/screwed-drivers-signed-sealed-delivered/ +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world diff --git a/sources/news/20190905 USB4 gets final approval, offers Ethernet-like speed.md b/sources/news/20190905 USB4 gets final approval, offers Ethernet-like speed.md new file mode 100644 index 0000000000..5c17746eb5 --- /dev/null +++ b/sources/news/20190905 USB4 gets final approval, offers Ethernet-like 
speed.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (USB4 gets final approval, offers Ethernet-like speed) +[#]: via: (https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +USB4 gets final approval, offers Ethernet-like speed +====== +USB4 could be a unifying interface that eliminates bulky cables and oversized plugs and provides throughput that satisfies everyone from laptop users to server administrators. +Intel + +The USB Implementers Forum (USB-IF), the industry consortium behind the development of the Universal Serial Bus (USB) specification, announced this week it has finalized the technical specifications for USB4, the next generation of the spec. + +One of the most important aspects of USB4 (they have dispensed with the space between the acronym and the version number with this release) is that it merges USB with Thunderbolt 3, an Intel-designed interface that hasn’t really caught on outside of laptops despite its potential. For that reason, Intel gave the Thunderbolt spec to the USB consortium. + +Unfortunately, Thunderbolt 3 is listed as an option for USB4 devices, so some will have it and some won’t. This will undoubtedly cause headaches, and hopefully all device makers will include Thunderbolt 3. + +**[ Also read: [Your hardware order is ready. Do you want cables with that?][1] ]** + +USB4 will use the same form factor as USB type-C, the small plug used in all modern Android phones and by Thunderbolt 3. It will be backwards compatible with USB 3.2, USB 2.0, as well as Thunderbolt. So, just about any existing USB type-C device can connect to a machine featuring a USB4 bus but will run at the connecting cable’s rated speed.
+ +### USB4: Less bulk, more speed + +Because it supports Thunderbolt 3, the new connection will support both data and display protocols, so this could mean the small USB-C port replacing the big, bulky DVI port on monitors, and monitors coming with multiple USB4 ports to act as a hub. + +Which gets to the main point of the new standard: It offers dual-lane 40Gbps transfer speed, double the rate of USB 3.2, which is the current spec, and eight times that of USB 3. That’s Ethernet speed and should be more than enough to keep your high-definition monitor fed with plenty of bandwidth for other data movement. + +USB4 also has better resource allocation for video, so if you use a USB4 port to move video and data at the same time, the port will allocate bandwidth accordingly. This will allow a computer to use both an external GPU in a self-contained case, which have come to market only because of Thunderbolt 3, and an external SSD. + +**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][2] ]** + +This could open up all kinds of new server designs because large, bulky devices, such as GPUs or other cards that won’t go easily into a 1U or 2U case, can now be externally attached and run at speeds comparable to an internal device. + +Of course, it will be a while before we see PCs with USB4 ports, never mind servers. It took years to get USB 3 into PCs, and uptake for USB-C has been very slow. USB 2 thumb drives are still the bulk of the market for those devices, and motherboards are still shipping with USB 2 on them. + +Still, USB4 has the potential to be a unifying interface that gets rid of bulky cables that have oversized plugs and provides throughput that can satisfy everyone from a laptop user to a server administrator. + +Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3435113/usb4-gets-final-approval-offers-ethernet-like-speed.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3278052/your-hardware-order-is-ready-do-you-want-cables-with-that.html +[2]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world diff --git a/sources/news/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md b/sources/news/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md new file mode 100644 index 0000000000..1cb11a5e59 --- /dev/null +++ b/sources/news/20190906 Great News- Firefox 69 Blocks Third-Party Cookies, Autoplay Videos - Cryptominers by Default.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Great News! Firefox 69 Blocks Third-Party Cookies, Autoplay Videos & Cryptominers by Default) +[#]: via: (https://itsfoss.com/firefox-69/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Great News! Firefox 69 Blocks Third-Party Cookies, Autoplay Videos & Cryptominers by Default +====== + +If you’re using [Mozilla Firefox][1] and haven’t updated yet to the latest version, you are missing a lot of new and important features. 
+### Awesome new features in Firefox 69 release + +To start with, Mozilla Firefox 69 enforces stronger security and privacy options by default. Here are some of the major highlights of the new release. + +#### Firefox 69 blocks autoplay videos + +![][2] + +A lot of websites offer auto-play videos nowadays. No matter whether it is a pop-up video or a video embedded in an article set to autoplay, it is blocked by default (or you may be prompted about it). + +The [Block Autoplay][3] feature lets users block any video that plays automatically. + +#### No more third party tracking cookies + +By default, as part of the Enhanced Tracking Protection feature, it will now block third-party tracking cookies and crypto miners. This is a very useful change to enhance privacy protection while using Mozilla Firefox. + +There are two kinds of cookies: first party and third party. The first party cookies are owned by the website itself. These are the ‘good cookies’ that improve your browsing experience by keeping you logged in, remembering your password or entry fields, etc. The third party cookies are owned by domains other than the website you visit. Ad servers use these cookies to track you and serve you tracking ads on all the websites you visit. Firefox 69 aims to block these. + +You will observe the shield icon in the address bar when it’s active. You may choose to disable it for specific websites. + +![Firefox Blocking Tracking][4] + +#### No more cryptomining off your CPU + +![][5] + +The lust for cryptocurrency has plagued the world. The cost of GPUs has gone up because professional cryptominers use them for mining cryptocurrency. + +People are using computers at work to secretly mine cryptocurrency. And when I say work, I don’t necessarily mean an IT company. Only this year, [people got caught mining cryptocurrency at a nuclear plant in Ukraine][6]. + +That’s not it. If you visit some websites, they run scripts and use your computer’s CPU to mine cryptocurrency.
This is called [cryptojacking][7] in IT terms. + +The good thing is that Firefox 69 will automatically block cryptominers. So websites should not be able to exploit your system resources for cryptojacking. + +#### Stronger Privacy with Firefox 69 + +![][8] + +If you take it up a notch with a stricter setting, it will block fingerprinters as well. So, you won’t have to worry about sharing your computer’s configuration info via [fingerprinters][9] when you choose the strict privacy setting in Firefox 69. + +In the [official blog post about the release][10], Mozilla mentions that with this release, they expect to provide protection for 100% of our users by default. + +#### Performance Improvements + +Even though Linux isn’t mentioned in the changelog, it notes performance, UI, and battery life improvements for systems running on Windows 10/macOS. If you observe any performance improvements, do mention it in the comments. + +**Wrapping Up** + +In addition to all these, there are a lot of under-the-hood improvements as well. You can check out the details in the [release notes][11]. + +Firefox 69 is an impressive update for users concerned about their privacy. Similar to our recent recommendation on [secure email services][12], we recommend you update your browser to get the best out of it. The new update is already available in most Linux distributions. You just have to update your system. + +If you are interested in browsers that block ads and tracking cookies, try the [open source Brave browser][13]. They are even giving you their own cryptocurrency for using their web browser. You can use it to reward your favorite publishers. + +What do you think about this release? Let us know your thoughts in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/firefox-69/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/why-firefox/ +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/auto-block-firefox.png?ssl=1 +[3]: https://support.mozilla.org/en-US/kb/block-autoplay +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-blocking-tracking.png?ssl=1 +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-shield.png?ssl=1 +[6]: https://thenextweb.com/hardfork/2019/08/22/ukrainian-nuclear-powerplant-mine-cryptocurrency-state-secrets/ +[7]: https://hackernoon.com/cryptojacking-in-2019-is-not-dead-its-evolving-984b97346d16 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/firefox-secure.jpg?ssl=1 +[9]: https://clearcode.cc/blog/device-fingerprinting/ +[10]: https://blog.mozilla.org/blog/2019/09/03/todays-firefox-blocks-third-party-tracking-cookies-and-cryptomining-by-default/ +[11]: https://www.mozilla.org/en-US/firefox/69.0/releasenotes/ +[12]: https://itsfoss.com/secure-private-email-services/ +[13]: https://itsfoss.com/brave-web-browser/ diff --git a/sources/talk/20190903 The birth of the Bash shell.md b/sources/talk/20190903 The birth of the Bash shell.md deleted file mode 100644 index 4ed5588cdb..0000000000 --- a/sources/talk/20190903 The birth of the Bash shell.md +++ /dev/null @@ -1,104 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The birth of the Bash shell) -[#]: via: (https://opensource.com/19/9/command-line-heroes-bash) -[#]: author: (Matthew Broberg 
https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg) - -The birth of the Bash shell -====== -This week's Command Line Heroes podcast delves into the most widely used -and de facto standard scripting language, its early inspirations as part -of the Free Software Foundation, and its author. -![Listen to the Command Line Heroes Podcast][1] - -Shell scripting is an essential discipline for anyone in a sysadmin type of role, and the predominant shell in which people write scripts today is Bash. Bash comes as default on nearly all Linux distributions and modern MacOS versions and is slated to be a native part of [Windows Terminal][2] soon enough. Bash, you could say, is everywhere. - -So how did it get to this point? This week's [Command Line Heroes][3] podcast dives deeply into that question by asking the very people who wrote the code. - -### It started with Unix - -Like all programming things, we have to go back to Unix. A little shell history: In 1971, Ken Thompson released the first Unix shell—the Thompson shell. But there were severe limitations to the amount of scripting users could do. And that meant serious limitations for automation and, consequently, for the whole field of IT operations. - -This [fantastic piece of research][4] outlines the challenges to early attempts at scripting (bold added to highlight commands): - -> Similar to its predecessor in Multics, this shell (**/bin/sh**) was an independent user program that executed outside of the kernel. Concepts like globbing (pattern matching for parameter expansion, such as ***.txt**) were implemented in a separate utility called **glob**, as was the **if** command to evaluate conditional expressions. This separation kept the shell small, at under 900 lines of C source. 
-> -> The shell introduced a compact syntax for redirection (**< >** and **>>**) and piping (**|** or **^**) that has survived into modern shells. You can also find support for invoking sequential commands (with **;**) and asynchronous commands (with **&**). -> -> What the Thompson shell lacked was the ability to script. Its sole purpose was as an interactive shell (command interpreter) to invoke commands and view results. - -As the access to terminals grew, an interest in automation grew along with it. - -### Bourne shell is a step forward - -Six years after Thompson's release, in 1977, Stephen Bourne released the Bourne shell, which was meant to solve the scripting limitations of the Thompson shell. (Chet Ramey, the primary maintainer of the Bash language since 1990, discusses it on this episode of Command-Line Heroes). It was the natural evolution of technology coming out of Bell Labs as part of the Unix system. - -What did Bourne intend to do differently? [Researcher M. Jones][4] outlines it well:  - -> The Bourne shell had two primary goals: serve as a command interpreter to interactively execute commands for the operating system and for scripting (writing reusable scripts that could be invoked through the shell). In addition to replacing the Thompson shell, the Bourne shell offered several advantages over its predecessors. Bourne introduced control flows, loops, and variables into scripts, providing a more functional language to interact with the operating system (both interactively and noninteractively). The shell also permitted you to use shell scripts as filters, providing integrated support for handling signals, but lacked the ability to define functions. Finally, it incorporated a number of features we use today, including command substitution (using back quotes) and HERE documents to embed preserved string literals within a script. 
- -Bourne, in a [previous interview][5], described it this way: - -> The original shell wasn’t really a language; it was a recording—a way of executing a linear sequence of commands from a file, the only control flow primitive being GOTO a label. These limitations to the original shell that Ken Thompson wrote were significant. You couldn’t, for example, easily use a command script as a filter because the command file itself was the standard input. And in a filter, the standard input is what you inherit from your parent process, not the command file. -> -> The original shell was simple but, as people started to use Unix for application development and scripting, it was too limited. It didn’t have variables, it didn’t have control flow, and it had very inadequate quoting capabilities. - -This new shell was a huge step forward for scripters, but only if you had access to it. - -### Rethinking Bourne's shell as free software - -Until then, the dominant shells were proprietary software that was owned and operated at Bell Labs. If you were fortunate enough, your university might have access to a Unix shell. But that restricted access was far from the world that the Free Software Foundation (FSF) wanted to achieve.  - -Richard Stallman and a group of like-minded developers were writing all the features of Unix with a license that is freely available under the GNU license. One of those developers was tasked with making a shell. That developer was Brian Fox. And the way he talks about his task absolutely fascinates me. As he says on the podcast: - -> The reason it was so challenging was that we had to faithfully mimic all of the behaviors of the Bourne shell, while at the same time being allowed to extend it to make it a better tool for people to use. - -This was also at a time when people were discussing what it meant to be a shell standard. With this history as background and competition in the foreground, the popular Bourne shell was reimagined; born again. 
- -### The shell, Bourne-Again - -These two catalysts—the free software mission and competition—brought the Bourne-Again shell (Bash) to life. In an unusual move for the time, Fox didn't name his shell after himself, and he focused on the evolution from Unix to free software. (Although Fox Shell could have beaten Fish shell to the fsh command #missedopportunity). That naming choice seems aligned with his personality. As Fox says in the episode, he wasn't interested in even the perception of personal glory; he was trying to help the culture of programming evolve. He was not, however, above a good pun. - -It was nice to hear that Bourne didn't feel slighted by the play on words. Bourne tells a story about when someone walked up to him and gave him a Bash t-shirt at a conference. That person was Brian Fox. - -Shell | Released | Creator ----|---|--- -Thompson Shell | 1971 | Ken Thompson -Bourne Shell | 1977 | Stephen Bourne -Bourne-Again Shell | 1989 | Brian Fox - -With time, Bash grew in adoption. Other engineers started using it and submitting improvements to its design. Indeed, years later, Fox would insist that learning to give up control of Bash was one of the most important things he did in his life. As Unix gave way to Linux and the open source software movement, Bash became the key scripting force in an open source world. Great projects seem to grow beyond the scope of a single person's vision. - -### What can we learn from shells? - -A shell is a technology that is so integral to everyday laptop use that it's easy to forget it needed invention. The story of going from Thompson to Bourne to Bash shells draws some familiar takeaways: - - * Motivated individuals can make great strides with the right mission in mind. - * Much of what we rely on today is built on the work of still-living legends in our industry. - * The software that tends to survive are the ones that evolve beyond the vision of their original creators. 
- - - -Command Line Heroes has covered programming languages for all of Season 3 and is approaching its finale. [Be sure to subscribe to learn everything you want to know about the origin of programming languages][3], and I would love to hear your shell stories in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/19/9/command-line-heroes-bash - -作者:[Matthew Broberg][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberghttps://opensource.com/users/mbbroberg -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/commnad_line_hereoes_ep6_blog-header-292x521.png?itok=Bs1RlwoW (Listen to the Command Line Heroes Podcast) -[2]: https://devblogs.microsoft.com/commandline/introducing-windows-terminal/ -[3]: https://www.redhat.com/en/command-line-heroes -[4]: https://developer.ibm.com/tutorials/l-linux-shells/ -[5]: https://www.computerworld.com.au/article/279011/-z_programming_languages_bourne_shell_sh diff --git a/sources/talk/20190905 10 pitfalls to avoid when implementing DevOps.md b/sources/talk/20190905 10 pitfalls to avoid when implementing DevOps.md new file mode 100644 index 0000000000..2d66534959 --- /dev/null +++ b/sources/talk/20190905 10 pitfalls to avoid when implementing DevOps.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 pitfalls to avoid when implementing DevOps) +[#]: via: (https://opensource.com/article/19/9/pitfalls-avoid-devops) +[#]: author: (Mehul Rajput 
https://opensource.com/users/mehulrajputhttps://opensource.com/users/daniel-ohhttps://opensource.com/users/genekimhttps://opensource.com/users/ghaff) + +10 pitfalls to avoid when implementing DevOps +====== +Make your DevOps implementation journey smoother by avoiding the +mistakes others have made. +![old postcard highway ][1] + +In companies of every size, software is increasingly providing business value because of a shift in how technology teams define success. More than ever, they are defined by how the applications they build bring value to their customers. Tickets and stability at the cost of saying no are no longer the key value of IT. It's now about increasing developer velocity by partnering with the business. + +In order to keep up with this faster pace, leading technology professionals are building software with precision and embracing standards of continuous delivery, integration, and DevOps. According to [Shanhong Liu][2], "As of 2018, only nine percent of technology professionals responsible for the development and quality of web and mobile applications stated that they had not adopted DevOps and had no plans to do so." + +A significant value in a [DevOps culture][3] is to accept failure as a part of the journey toward value. For software, the journey comes in the form of [continuous delivery][4] with the expectation that we regularly release code. The fast pace ensures failure, but it also ensures that when you do fail, you learn from your mistakes and adapt quickly. This is how you grow as a business: you get more insights and let them guide you toward success. + +Since those who have already adopted DevOps have made mistakes, you can use their experience to learn and avoid repeating the same mistakes. In the spirit of DevOps and open source—rapid iteration, building upon the work (and mistakes) of those who have gone before—following are some of the most common mistakes businesses encounter on their DevOps journey and how to work through them. 
+
+### 1\. Out-of-order delivery
+
+Sometimes, developers will perform continuous delivery (CD) and continuous integration (CI) simultaneously to accelerate automated testing and feedback cycles. CI/CD as a practice has a lot of benefits when it comes to the pace of software delivery. The risk is that incorrect code configurations may be delivered to production environments without enough exploration of their impact, negating the value of automated testing before expansion.
+
+I believe manual confirmation is still essential before code goes all the way through the software delivery cycle. There must be a pre-production stage—a layer of deployment and testing before production—that allows developers to correct and rectify errors that users could face if the code were pushed directly to production.
+
+![Software delivery cycle][5]
+
+It is extremely important to put monitoring in place before code reaches the end user. For instance, structuring the CD pipelines so testing is done alongside development will ensure that changes are not deployed automatically.
+
+While DevOps standards declare that teams must expand beyond silos, deployment should always be validated by those familiar with the code that comes out at the end of the pipeline. This mandates a thorough examination before code reaches customers.
+
+### 2\. Misunderstanding the DevOps title
+
+Some organizations are bewildered about the DevOps title. They believe a DevOps engineer's objective is to solve all problems associated with DevOps—even though DevOps means collaboration between developers and operators.
+
+The way DevOps integrates development and operations roles can be a difficult career progression. Developers require more understanding of how their application runs in order to keep it running and potentially be on call for support if it goes down. Operations must become an expert on how to scale and understand the metrics that fit inside a larger [monitoring and observability strategy][6]. 
+ +DevOps, in practice, helps companies accelerate time-consuming tasks in IT operations. For example, automating testing provides developers with faster feedback, and automating integration incorporates developers' changes more quickly into the codebase. DevOps may also be asked to automate procedures around collecting, expanding, and running apps. + +Your organization's fundamental needs dictate whether your DevOps professionals' skill sets need to be stronger in operations or development, and this information must align with how you select or hire your DevOps team. For instance, it is important to prioritize past software development and scripting skills when automation is key (instead of requiring expertise around containerization). Hire for your unique DevOps experience needs, and let people learn the other skills on the job. If you hire people who are ready to learn, you will build the best possible team for your organization. + +### 3\. Inflexibility around DevOps procedures + +While DevOps principles provide a foundation, each organization must be ready to customize their journey for their desired outcomes. Companies need to make sure that, while the core DevOps pillars stay steady during implementation, they create internal modifications that are essential in measuring their predicted results. + +It is important to master the fundamentals of DevOps, especially the [CALMS][7] (Culture, Automation, Lean, Measurement, and Sharing) pillars, to build a foundation for technology advancement. But there is no one-size-fits-all DevOps implementation. By recognizing that, the DevOps team can build a plan to address the key reason for the initiative and build from past failed results. Teams should be ready to modify their plan while staying within the recommendations of the fundamental DevOps principles. + +### 4\. Selecting speed over quality + +Many companies concentrate on product delivery without paying enough attention to product quality. 
If the effort's key performance indicators (KPIs) center only on time to production, it is easy for quality to fall off in the process. Endpoints that could monitor performance are left for future versions, and software that is not production-ready is seen as a success because it was pushed rapidly forward. + +In a fast-paced market, teams can't afford to provide the best product quality with the time requirements dictated by either the customer or internal demand. Many companies are hurrying to get and finish as many DevOps projects as possible within a shorter time span to keep their position in a competitive market. That may sound like a good idea, but expecting DevOps to be a quick journey may result in more pain than gain. + +Achieving both speed and quality improvement is an essential DevOps value. It is not achieved easily and requires operators and developers to write testing in new and improved ways. When done well, quality and speed improve simultaneously. + +### 5\. Building a dedicated DevOps team + +Theoretically, it makes sense to build a dedicated team to concentrate on training the newest professionals in IT. The movement to complete a DevOps journey must be hassle-free and seamless, right? But two issues quickly arise: + + * Existing quality assurance (QA), operations, and development team members feel overlooked and may try to hinder the new team's efforts. + * This new team becomes another silo, providing new technology but not advancing the company's goals on a DevOps journey. + + + +It is better to have a mix of new team members and current employees from QA, ops, and dev who are excited to join the DevOps initiative. The latter group has a lot of institutional knowledge that is valuable as you roll out such a large initiative. + +### 6\. Overlooking databases + +The database is one of the most essential technology areas overlooked while building out DevOps. 
New, ephemeral applications can fly through a DevOps pipeline at a speed unlike any application before. But data-hungry applications don't see the same ease of deployment.
+
+Data snapshots in separate environments can and will drift toward inaccuracy without a concentrated effort to automate them effectively. Experts stress continuous integration and code handling but fail to automate the database. Database management should be done properly, particularly for data-centric apps. The database plays an important role in such apps and may require separate expertise to automate it alongside other applications.
+
+### 7\. Insufficient incident-handling procedures
+
+In case something goes wrong (and it will), DevOps teams should have incident-handling procedures in place. Incident handling should be a continuous and active procedure clearly outlined for consistency and to avoid error. This means that in order for an incident-handling process to be documented, you must capture and describe the incident-response requirements. There is a lot of research into runbook documentation and [blameless post-mortems][8] that is important to learn to be successful.
+
+### 8\. Insufficient knowledge of DevOps
+
+Although acceptance of DevOps has expanded rapidly in recent years, application experts may be working without precise quality-control procedures. The team's ability to do all the technical, cultural, and process changes needed to succeed in DevOps sometimes falls short.
+
+It's a wise move to adopt DevOps practices, but success requires ample experience and preparation. In some cases, getting the right expertise to meet your requirements means hiring outside experts (disclaimer: I manage a DevOps consultancy). These trained experts should have certifications in the required technologies, and companies should abstain from making rapid DevOps decisions without having a good handle on outcomes.
+
+### 9\. Neglecting security
+
+Security and DevOps should move side-by-side. 
Many organizations dismiss security guidelines because it's hard, and a DevOps journey can be hard enough. This leads to issues, such as initially maximizing the output of developers and later realizing that they neglected to secure those applications. Take security seriously, and look into [DevSecOps][9] to see if it makes sense to your organization.
+
+### 10\. Getting fatigued while implementing DevOps
+
+If you start a DevOps team with the goal to go from one product deployment a year to 10 pushes in a week, it will likely fail. The path to get to an arbitrary metric that looks good in a presentation will not motivate the team. DevOps is not a simple technology movement; it's a huge cultural upgrade.
+
+The larger the enterprise, the longer it may take to get DevOps practices to stick. You should apply your DevOps methodology in a phased and measured approach with realistic results as milestones to celebrate. Train your employees, and schedule ample breaks before starting the initial round of application deployments. The first DevOps pipeline can be slow to achieve. That's what continuous improvement looks like in real life.
+
+### The bottom line
+
+Companies are rapidly moving towards DevOps to keep pace with their competition but make common mistakes in their implementations. To avoid these pitfalls, plan precisely and apply the right strategies to achieve a more successful DevOps outcome. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/pitfalls-avoid-devops + +作者:[Mehul Rajput][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mehulrajputhttps://opensource.com/users/daniel-ohhttps://opensource.com/users/genekimhttps://opensource.com/users/ghaff +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/road2.jpeg?itok=chTVOSil (old postcard highway ) +[2]: https://www.statista.com/statistics/673505/worldwide-software-development-survey-devops-adoption/ +[3]: https://www.linkedin.com/pulse/10-facts-stats-every-devops-enthusiast-must-know-pavan-belagatti-/ +[4]: https://opensource.com/article/19/4/devops-pipeline +[5]: https://opensource.com/sites/default/files/uploads/devopsmistakes_pipeline.png (Software delivery cycle) +[6]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools +[7]: https://whatis.techtarget.com/definition/CALMS +[8]: https://opensource.com/article/19/4/psychology-behind-blameless-retrospective +[9]: https://opensource.com/article/19/1/what-devsecops diff --git a/sources/talk/20190905 6 years of tech evolution, revolution and radical change.md b/sources/talk/20190905 6 years of tech evolution, revolution and radical change.md new file mode 100644 index 0000000000..f08cb34071 --- /dev/null +++ b/sources/talk/20190905 6 years of tech evolution, revolution and radical change.md @@ -0,0 +1,106 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 years of tech evolution, revolution and radical change) +[#]: via: 
(https://www.networkworld.com/article/3435857/6-years-of-tech-evolution-revolution-and-radical-change.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+6 years of tech evolution, revolution and radical change
+======
+In his farewell TechWatch post, Fredric Paul looks back at how technology has changed in the six years he’s been writing for Network World—and what to expect over the next six years.
+Peshkov / Getty Images
+
+Exactly six years ago today—Sept. 5, 2013—Network World published my very first [TechWatch][1] blog post. It addressed the introduction of [Samsung's Galaxy Gear and the problem with smartwatches][2].
+
+Since then, I’ve written hundreds of blog posts on a dizzying array of technology topics, ranging from [net neutrality][3] to phablets to cloud computing to big data to the internet of things (IoT)—and many, many more. It’s been a great ride, and I will be forever grateful to my amazing editors at Network World and everyone who’s taken the time to read my work. But all good things must come to an end, and this will be my last TechWatch post for Network World.
+
+You see, writing for Network World is not my day job. For the last five and a half years, I have been editor in chief of [New Relic][4], a leader in the enterprise observability space. But this week, I’ve taken a new position as director of content for [Redis Labs][5], the home of the fast Redis database. I’m super excited about the opportunity, but here’s the thing: Redis Labs has a number of products in the IoT space, which could raise thorny conflict-of-interest questions for many blog posts I might write.
+
+**[ Find out how [5G wireless could change networking as we know it][6] and [how to deal with networking IoT][7]. | Get regularly scheduled insights by [signing up for Network World newsletters][8]. 
]** + +### Looking back: What technology has changed and what hasn’t + +So, for this good-bye post, I want to look back at some of the topics I’ve touched on over the years—and especially that first year—and see how far we’ve come, and maybe get a sense of what’s coming next. + +Obviously, there’s no time or space to revisit everything. But I do want to touch on six key themes. + +#### **1\. Wearable tech** + +Back when I wrote that first post on [Samsung's Galaxy Gear][2], it seemed like wearable technology was about to change everything. Smartwatches and fitness trackers were being introduced by everyone from tech companies to sporting goods manufacturers to fashion brands and [luxury watch companies][9]. The [epic failure of Google Glass][10] hadn’t yet set the category back a decade by permanently creeping out folks around the world. + +Today, things look very different. [The Apple Watch is thriving in a limited role][11] as a fitness and health tracking device, trailed by a variety of simpler, cheaper options. [Google Glass and its ilk are niche products looking for industrial applications.][12] And my grand vision of wearable tech fundamentally reshaping the technology landscape? Yeah, we’re still waiting for that. + +**[ [Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][13] ]** + +#### 2\. Phones & phablets & tablets + +Six years ago, I had a lot of phun making puns about the rise of “phablets,” those giant phones (or miniature tablets) threatening to take over the mobile world. Well, that happened. In fact, it happened so thoroughly that no one even talks about increasingly ginormous phones as phablets anymore. They’re just… phones. + +**[ Also on Network World: [Phablets are the phuture, no phoolin'][14] ]** + +#### 3\. 
BYOD and shadow IT
+
+Back in 2013, shadow IT was still mostly thought of as [Bring Your Own Device][15], but increasingly powerful online services have expanded the concept far beyond enterprise workers using their own phones on the corporate network. Now, shadow IT includes everything from computing power and storage in the cloud to virtually Everything-as-a-Service. And with the rise of Shadow IoT, the situation is only getting more complicated for IT teams. How do you maintain order and security while also empowering users with maximum productivity?
+
+**[ Also on Network World: [Don’t worry about shadow IT. Shadow IoT is much worse.][16] ]**
+
+#### 4\. Net neutrality
+
+Hoo-boy. After endless arguments rooted in deeply differing versions of what freedom really means, ideological conflicts, all-out business battles between communications companies and online services, net neutrality was finally settled as official U.S. policy. And then, suddenly, [all that was changed by a new administration and a new FCC leader][17]. At least for now. Probably. Ahhh, who are we kidding? We’re going to be arguing over net neutrality forever.
+
+**[ Also on Network World: [Why even ISPs will regret the end of net neutrality][18] ]**
+
+#### 5\. The cloud
+
+When I started writing TechWatch, the cloud was still a good idea looking to find its rightful place in a world still dominated by private data centers. Today, everything has flipped. [The cloud is now pretty much the default for new IT infrastructure workloads][19], and it is slowly but surely chipping away at all those legacy, mission-critical apps and systems. Sure, key questions around cost, security, compliance and reliability remain, but as 2019 begins to wind down, the cloud’s promise of radical improvements in development speed and agility can no longer be questioned. As [cloud providers keep growing like weeds on steroids][19], modern IT leaders increasingly have to justify _not_ doing things in the cloud. 
+ +**[ Also on Network World: [The week cloud computing took over the world][20] ]** + +#### 6\. The internet of things + +TechWatch has been interested in the IoT for a while now (see [How the Internet of Things will – and won't – change IT][21]), but for the last couple years it has been the dominant topic for TechWatch. Over that time, the IoT has evolved from a concept with tremendous promise but limited real-world applications to one of the most important technologies on the planet, on pace to disrupt everything from brushing your teeth to driving (or not driving) your car to [maintaining jet airplanes][22]. It’s been a wild ride, but serious barriers remain. IoT security concerns, especially on the consumer side, still threaten IoT adoption. Lack of interoperability and unclear ROI continue to slow IoT installations. + +But those are just speed bumps. Like it or not, the IoT is going to keep growing. And that growth won’t always come in nicely defined, easily understood and controlled ways. In many cases, IoT devices and networks are being deployed without established goals, metrics, controls, and contingency plans. It may be a recipe for trouble, but it’s also how just about every important technology rolls out. [The winners will be the organizations that figure out how to maximize the IoT’s value while avoiding its pitfalls.][23] I will be watching closely to see what happens! + +Join the Network World communities on [Facebook][24] and [LinkedIn][25] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3435857/6-years-of-tech-evolution-revolution-and-radical-change.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/blog/techwatch/ +[2]: https://www.networkworld.com/article/2225307/samsung-s-galaxy-gear-and-the-problem-with-smartwatches.html +[3]: https://www.networkworld.com/article/3238016/will-the-end-of-net-neutrality-crush-the-internet-of-things.html +[4]: https://newrelic.com/ +[5]: https://redislabs.com/ +[6]: https://www.networkworld.com/cms/article/https:/www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html +[7]: https://www.networkworld.com/cms/article/https:/www.networkworld.com/article/3258993/internet-of-things/how-to-deal-with-networking-iot-devices.html +[8]: https://www.networkworld.com/newsletters/signup.html +[9]: https://www.networkworld.com/article/2882057/would-you-buy-a-smartwatch-from-a-watch-company.html +[10]: https://www.networkworld.com/article/2364501/how-google-glass-set-wearable-computing-back-10-years.html +[11]: https://www.networkworld.com/article/3305812/the-new-apple-watch-4-represents-an-epic-fail-for-smartwatches.html +[12]: https://www.networkworld.com/article/2955708/google-glass-returning-enterprise-business-users.html +[13]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[14]: https://www.networkworld.com/article/2225569/phablets-are-the-phuture--no-phoolin-.html +[15]: 
https://www.networkworld.com/article/2225741/5-disturbing-byod-lessons-from-the-faa-s-in-flight-electronics-announcement.html +[16]: https://www.networkworld.com/article/3433496/dont-worry-about-shadow-it-shadow-iot-is-much-worse.html +[17]: https://www.networkworld.com/article/3166611/the-end-of-net-neutrality-is-nighheres-whats-likely-to-happen.html +[18]: https://www.networkworld.com/article/2226155/why-even-isps-will-regret-the-end-of-net-neutrality.html +[19]: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html +[20]: https://www.networkworld.com/article/2914880/the-week-cloud-computing-took-over-the-world-microsoft-amazon.html +[21]: https://www.networkworld.com/article/2454225/how-the-internet-of-things-will-and-wont-change-it.html +[22]: https://www.networkworld.com/article/3340132/why-predictive-maintenance-hasn-t-taken-off-as-expected.html +[23]: https://www.networkworld.com/article/3211438/is-iot-really-driving-enterprise-digital-transformation.html +[24]: https://www.facebook.com/NetworkWorld/ +[25]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190905 Data center cooling- Electricity-free system sends excess building heat into space.md b/sources/talk/20190905 Data center cooling- Electricity-free system sends excess building heat into space.md new file mode 100644 index 0000000000..e87065de21 --- /dev/null +++ b/sources/talk/20190905 Data center cooling- Electricity-free system sends excess building heat into space.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Data center cooling: Electricity-free system sends excess building heat into space) +[#]: via: (https://www.networkworld.com/article/3435769/data-center-cooling-electricity-free-system-sends-excess-building-heat-into-space.html) +[#]: author: (Patrick Nelson 
https://www.networkworld.com/author/Patrick-Nelson/) + +Data center cooling: Electricity-free system sends excess building heat into space +====== +A polymer and aluminum film solar shelter that transmits heat into space without using electricity could drastically cut data center cooling costs. +University at Buffalo + +We all know that blocking incoming sunlight helps cool buildings and that indoor thermal conditions can be improved with the added shade. More recently, though, scientists have been experimenting with ways to augment that passive cooling by capturing any superfluous, unwanted solar heat and expelling it, preferably into outer space, where it can’t add to global warming. + +Difficulties in getting that kind of radiative cooling to work are two-fold. First, directing the heat optimally is hard. + +“Normally, thermal emissions travel in all directions,” says Qiaoqiang Gan, an associate professor of electrical engineering at University at Buffalo, in a [news release][1]. The school is working on radiative concepts. That’s bad for heat spill-over and can send the thermal energy where it’s not wanted—like into other buildings. + +**[ Learn [how server disaggregation can boost data center efficiency][2] and [how Windows Server 2019 embraces hyperconverged data centers][3] . | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]** + +But the school says it has recently figured out how to “beam the emissions in a narrow direction.” + +Second, radiative cooling is a night-time effect. It can be best described in the analogy of a blacktop road surface, which absorbs the sun’s rays during the day and emits that captured heat overnight as the surrounding air cools. + +But University at Buffalo's system works during the day, the researchers say. It is made up of building-installed rooftop boxes that have a polymer combination aluminum film affixed to the bottom (pictured above). 
The film stops the area around the roof from getting hot through a form of heat absorption. The polymer absorbs warmth from the air in the box and transmits “that energy through the Earth’s atmosphere into outer space.” The box, when installed en masse, potentially shelters the entire roof from sunlight “while also beaming thermal radiation emitted from the film into the sky.” The polymer itself stays cool. + +That directionality also solves the problem of how to get the application to function within a city—the heat is beamed straight up in this case, rather than being allowed to disperse side to side and potentially infiltrate neighboring buildings. + +University at Buffalo’s box is about 18 inches by 10 inches. Multiple boxes would be affixed to cover a rooftop, augmenting the air conditioning. + +### Stanford University also has a cooling system + +I’ve written before about passive, radiative cooling systems that could be used in data center environments. A few years ago, [Stanford University suggested using the sky as one giant heatsink][5]. It reckons cost savings for cooling could be in the order of 21%. That system used mirror-like panels and, like the University at Buffalo solution, tries to solve the second major problem involved with radiative cooling: How to get it to work during the day when the sun is beating down on the surfaces and the ambient air is warm—you need cool air to absorb the hot air. + +Stanford’s solution is to reflect the sunlight away from the panels during the day so the stored heat can radiate, even during the day. Researchers at University at Buffalo, however, say their approach, which uses special materials, is better. + +“Daytime cooling is a challenge because the sun is shining. In this situation, you need to find strategies to prevent rooftops from heating up. You also need to find emissive materials that don’t absorb solar energy. 
Our system addresses these challenges,” says Haomin Song, Ph.D., UB assistant professor of research in electrical engineering, in the news release. + +University at Buffalo’s directional aspect is interesting, too. + +“If you look at the headlight of your car, it has a certain structure that allows it to direct the light in a certain direction,” Gan says. “We follow this kind of a design. The structure of our beam-shaping system increases our access to the sky.” + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3435769/data-center-cooling-electricity-free-system-sends-excess-building-heat-into-space.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: http://www.buffalo.edu/news/releases/2019/08/003.html +[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html +[3]: https://www.networkworld.com/article/3263718/software/windows-server-2019-embraces-hybrid-cloud-hyperconverged-data-centers-linux.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.networkworld.com/article/3222850/space-radiated-cooling-cuts-power-use-21.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190905 HPE-s vision for the intelligent edge.md b/sources/talk/20190905 HPE-s vision for the intelligent edge.md new file mode 100644 index 0000000000..44161337ba --- /dev/null +++ b/sources/talk/20190905 HPE-s vision for the intelligent edge.md @@ -0,0 +1,88 @@ +[#]: 
collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (HPE's vision for the intelligent edge) +[#]: via: (https://www.networkworld.com/article/3435790/hpes-vision-for-the-intelligent-edge.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +HPE's vision for the intelligent edge +====== +HPE plans to incorporate segmentation, artificial intelligence and automation into its wired and wireless gear in order to deal with the increased network challenges imposed by IoT and SD-WAN. +HPE + +It’s not just speeds and feeds anymore; it's intelligent software, integrated security and automation that will drive the networks of the future. + +That about sums up the networking areas that Keerti Melkote, HPE's President, Intelligent Edge, thinks are ripe for innovation in the next few years. He has a broad perspective because his role puts him in charge of the company's networking products, both wired and wireless. + +[Now see how AI can boost data-center availability and efficiency][1] + +“On the wired side, we are seeing an evolution in terms of manageability," said Melkote, who founded Aruba, now part of HPE. "I think the last couple of decades of wired networking have been about faster connectivity. How do you go from a 10G to 100G Ethernet inside data centers? That will continue, but the bigger picture that we’re beginning to see is really around automation.”  + +[For an edited version of Network World's wide-ranging interview with Melkote click here.][2] + +The challenge is how to inject automation into areas such as [data centers][3], [IoT][4] and granting network access to endpoints. In the past, automation and manageability were afterthoughts, he said. 
“The wired network world never really enabled native management monitoring and automation from the get-go.”  + +Melkote said HPE is changing that world with its next generation of switches and apps, starting with a switching line the company introduced a little over a year ago, the Core Switch 8400 series, which puts the ability to monitor, manage and automate right at the heart of the network itself. + +In addition to providing the network fabric, it also provides deep visibility, deep penetrability and deep automation capabilities. "That is where we see the wide network foundation evolving," he said. + +In the wireless world, speeds and capacity have also increased over time, but there remains the need to improve network efficiency for high-density deployments, Melkote said. Improvements with the latest generation of wireless, [Wi-Fi 6][5], address this by focusing on efficiency and reliability and high-density connectivity, which are necessary given the explosion of wireless devices, including IoT gear, he said.  + +**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]** + +Artificial intelligence will also play a major role in how networks are managed, he said. “Behind the scenes, across both wired and wireless, AI and AI operations are going to be at the heart of how the vision of manageability and automation is going to be realized,” Melkote said.   + +AI operations are fundamentally about collecting large amounts of data from network devices and gaining insights from the data to predict when and where the network is going to face capacity and congestion problems that could kill performance, and to discover security issues, he said.  
+ +“Any one of those insights being able to proactively give our customers a view into what’s happening so they can solve a problem before it really becomes a big issue is a huge area of research and development for us,” Melkote said. + +And that includes AI in wireless networks. “Even more than Wi-Fi 6, I see the evolution of AI behind the Wi-Fi 6 network or the next-generation wired network being really the enabler of the next evolution of efficiency, the next level of insights into the operations of the network,” he said. + +From a security perspective, IoT poses a particular challenge that can be addressed in part via network features. “The big risk with IoT is that these devices are not secured with traditional operating systems. They don’t run Windows; they don’t run [Linux][7]; they don’t run an OS,” Melkote said. As a result, they are susceptible to attacks, "and if a hacker is able to jump onto your video camera or your IoT sensor, it can then use that to attack the rest of the internal network.” + +That creates a need for access control and network segmentation that isolates these devices and provides a level of visibility and control that is integrated into the network architecture itself. HPE regards this as a massive shift from what enterprise networks have been used for historically – connecting users and taking them from Point A to Point B with high quality of service, Melkote said. + +"The segmentation is, I think, the next big evolution for all the new use cases that are emerging,” Melkote said. “The segmentation not only happens inside a LAN context with Wi-Fi and wired technology but in a WAN context, too. 
You need to be able to extend it across a wide area network, which itself is changing from a traditional [MPLS][8] network to a software-defined WAN, [SD-WAN][9].”  + +SD-WAN is one of the core technologies for enabling edge-to-cloud efficiency, an ever-more-important consideration given the migration of applications from private data centers to public cloud, Melkote said. SD-WAN also extends to branch offices that not only need to connect to data centers, but directly to the cloud using a combination of internet links and private circuits, he said. + +“What we are doing is basically integrating the security and the WAN functionality into the architecture so you don’t have to rely on technology from third parties to provide that additional level of security or additional segmentation on the network itself,” Melkote said.    + +The edge of the network – or the intelligent edge – also brings with it its own challenges. HPE says the intelligent edge entails analysis of data where it is generated to reduce latency, security risk and costs. It breaks intelligent edge types into three groups: operational technology, IT and IoT edges. + +Part of the intelligent edge will include micro data centers that will be deployed at the point where data gets created, he said. "That’s not to say that the on-prem data center goes away or the cloud data center goes away," Melkote said. "Those two will continue to be served, and we will continue to serve those through our switching/networking products as well as our traditional compute and storage products." + +The biggest challenge will be bringing these technologies to customers to deploy them quickly. "We are still in the early days of the intelligent-edge explosion. I think in a decade we’ll be talking about the edge in the same way we talk about mobility and cloud today, which is in the past tense – and they’re massive trends. 
The edge is going to be very similar, and I think we don’t say that yet simply because I don’t think we have enough critical mass and use cases yet.” + +But ultimately, individual industries will glean advantages from the intelligent edge, and it will spread, Melkote said. + +“A lot of the early work that we’re doing is taking these building blocks of connectivity, security, manageability and analytics and packaging them in a manner that is consumable for retail use cases, for energy use cases, for healthcare use cases, for education use cases and workplace use cases," he said. "Every vertical has its own unique way to derive value out of this package. We are in the early days figuring that out." + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3435790/hpes-vision-for-the-intelligent-edge.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html +[2]: https://www.networkworld.com/article/3435206/hpe-s-keerti-melkote-dissects-future-of-mobility-the-role-of-the-data-center-and-data-intelligence.html +[3]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html +[4]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[5]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html +[6]: 
https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[7]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html +[8]: https://www.networkworld.com/article/2297171/network-security-mpls-explained.html +[9]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190906 How to open source your academic work in 7 steps.md b/sources/talk/20190906 How to open source your academic work in 7 steps.md new file mode 100644 index 0000000000..a233d8131d --- /dev/null +++ b/sources/talk/20190906 How to open source your academic work in 7 steps.md @@ -0,0 +1,114 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to open source your academic work in 7 steps) +[#]: via: (https://opensource.com/article/19/9/how-open-source-academic-work) +[#]: author: (Joshua Pearce https://opensource.com/users/jmpearcehttps://opensource.com/users/ianwellerhttps://opensource.com/users/edunham) + +How to open source your academic work in 7 steps +====== +Open source technology and academia are the perfect match. Find out how +to meet tenure requirements while benefiting the whole community. +![Document sending][1] + +Academic work fits nicely into the open source ethos: The higher the value of what you give away, the greater your academic prestige and earnings. Professors accomplish this by sharing their best ideas for free in journal articles in peer-reviewed literature. This is our currency: without a strong publishing record, not only would our ability to progress in our careers degrade, but even our jobs could be lost (and the ability to get any other job). 
+ +This situation makes attribution or credit for developing new ideas and technologies critical to an academic, and it must be done in the peer-reviewed literature. Many young academics struggle with how to pull this off while working with an open source community and keeping their academic publishing record strong. There does not need to be a conflict. In fact, by fully embracing open source, there are distinct advantages (e.g., it is hard to get scooped by unethical reviewers when you have a time- and date-stamped open access preprint published for all the world to see). + +The following seven steps provide the best practices for making an academic’s work open source. Start by using best practices (e.g., [General Design Procedure for Free and Open-Source Hardware for Scientific Equipment][2]), then when your work is ready to share, do the first three steps simultaneously. + +**Note:** Academics should not be concerned about working in open source at all at this point, as open source is now mainstream in academia for software and has even been [embraced by the hardware community][3]. + +### Step 1: Select a relevant peer-reviewed journal + +Your work should first be published as a technology paper in a peer-reviewed journal with a good reputation (e.g., by [Impact Factor][4] or [CiteScore][5], which is a measure reflecting the yearly average number of citations to recent articles published in that journal). If yours is a hardware project, then choose journals such as: + + * _[HardwareX][6]_ (CiteScore: 4.42) + * _[Sensors][7]_ (CiteScore: 3.72) + * _[PLOS ONE][8]_ (CiteScore: 3.02) + * _[The Journal of Open Hardware][9]_ (new) + + + +You could also choose a discipline-specific journal that publishes hardware. 
+ +Or, if your project is software, then the following journals may be of interest: + + * _[SoftwareX][10]_ (CiteScore: 11.56) + * _[The Journal of Open Source Software][11]_ (new) + * _[The Journal of Open Research Software][12]_ (new) + + + +### Step 2: Post your source code + +When submitting your work to a peer-reviewed journal, you will need to post your source code and cite it in your paper. For software papers, you would post your actual code, but for hardware papers, you would post aspects like the bill of materials, CAD designs, build instructions, etc. + +Use common websites for sharing code like [GitLab][13], or websites meant specifically for academia like the [Open Science Framework][14]. + +### Step 3: Publish an open access pre-print + +When your paper is complete and you submit it to the journal, publish an open-access preprint as well. Doing so protects you against others scooping or patenting your ideas, while at the same time opening all of your work including the paper itself. + +Almost all major publishers allow preprints. There are a lot of pre-print servers for every discipline (e.g., [arXiv][15], [preprints.org][16]). + +### Step 4: Start (or select) a company + +The next step is not strictly mandatory, but it is useful to either commercialize your work or provide support for your open source software. Start a spin-off company (or have a student do it), or work with an existing open source company. This step is recommended because although some people will fabricate your device from open source plans or compile your code, the vast majority would rather buy a reasonably priced version that is both open source so they control it, but also assembled or compiled, ready to use, and supported. + +### Step 5: Certify your project + +As soon as the paper is accepted, send it in for certification. 
See: [The trials of certifying open source software][17], or the perhaps more straightforward [OSHWA open hardware certification][18], for more information. You could do this step earlier, but publishing outside of a preprint server runs risks of being auto-rejected due to the plagiarism checks run by some journals. + +### Step 6: Prepare a press release + +As soon as certification comes through, organize a press release between the company and the university—and embargo it to the date of the academic paper’s official publication. This action spreads information about your open source technology and its benefits as far as possible. + +### Step 7: Use your own technology + +Last, but not least—use your open source technology in future research. From here on out, an academic can publish normally (e.g., do a scientific study using the open hardware and publish in a discipline-specific journal). Of course, preference should be given to [open access journals][19]. + +### Step up + +These seven steps are reasonably straightforward for new open source technology projects. When it comes to academics working on _existing_ open source projects, it’s a bit different. They need to carve out an area such as an upgrade to existing open hardware, or a new feature for free and open source software (FOSS), that they can publish on independently. They can then follow the same steps as above while integrating their work back into the community’s code (e.g., back in Step 2). + +Following these steps enables academics to more than meet the academic requirements they need for tenure and promotion while developing open source technology for everyone’s benefit. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/how-open-source-academic-work + +作者:[Joshua Pearce][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jmpearcehttps://opensource.com/users/ianwellerhttps://opensource.com/users/edunham +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (Document sending) +[2]: http://www.mdpi.com/2411-9660/2/1/2/htm +[3]: https://opensource.com/article/18/4/mainstream-academia-embraces-open-source-hardware +[4]: https://en.wikipedia.org/wiki/Impact_factor +[5]: https://en.wikipedia.org/wiki/CiteScore +[6]: https://www.journals.elsevier.com/hardwarex +[7]: https://www.mdpi.com/journal/sensors +[8]: https://journals.plos.org/plosone/ +[9]: https://openhardware.metajnl.com/ +[10]: https://www.journals.elsevier.com/softwarex +[11]: https://joss.theoj.org/ +[12]: https://openresearchsoftware.metajnl.com/ +[13]: https://about.gitlab.com/ +[14]: https://osf.io/ +[15]: https://arxiv.org/ +[16]: https://www.preprints.org/ +[17]: https://opensource.com/business/16/2/certified-good-software +[18]: https://certification.oshwa.org/ +[19]: https://doaj.org/ diff --git a/sources/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md b/sources/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md deleted file mode 100644 index 7959967c3b..0000000000 --- a/sources/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md +++ /dev/null @@ -1,144 +0,0 @@ -Translating by robsean -Top 5 CAD Software Available for Linux in 2018 -====== -[Computer Aided Design (CAD)][1] is an essential part of many streams of engineering. 
CAD is professionally used is architecture, auto parts design, space shuttle research, aeronautics, bridge construction, interior design, and even clothing and jewelry. - -A number of professional grade CAD software like SolidWorks and Autodesk AutoCAD are not natively supported on the Linux platform. So today we will be having a look at the top CAD software available for Linux. Let’s dive right in. - -### Best CAD Software available for Linux - -![CAD Software for Linux][2] - -Before you see the list of CAD software for Linux, you should keep one thing in mind that not all the applications listed here are open source. We included some non-FOSS CAD software to help average Linux user. - -Installation instructions of Ubuntu-based Linux distributions have been provided. You may check the respective websites to learn the installation procedure for other distributions. - -The list is not any specific order. CAD application at number one should not be considered better than the one at number three and so on. - -#### 1\. FreeCAD - -For 3D Modelling, FreeCAD is an excellent option which is both free (beer and speech) and open source. FreeCAD is built with keeping mechanical engineering and product design as target purpose. FreeCAD is multiplatform and is available on Windows, Mac OS X+ along with Linux. - -![freecad][3] - -Although FreeCAD has been the choice of many Linux users, it should be noted that FreeCAD is still on version 0.17 and therefore, is not suitable for major deployment. But the development has picked up pace recently. - -[FreeCAD][4] - -FreeCAD does not focus on direct 2D drawings and animation of organic shapes but it’s great for design related to mechanical engineering. FreeCAD version 0.15 is available in the Ubuntu repositories. You can install it by running the below command. -``` -sudo apt install freecad - -``` - -To get newer daily builds (0.17 at the moment), open a terminal (ctrl+alt+t) and run the commands below one by one. 
-``` -sudo add-apt-repository ppa:freecad-maintainers/freecad-daily - -sudo apt update - -sudo apt install freecad-daily - -``` - -#### 2\. LibreCAD - -LibreCAD is a free, opensource, 2D CAD solution. Generally, CAD tends to be a resource-intensive task, and if you have a rather modest hardware, then I’d suggest you go for LibreCAD as it is really lightweight in terms of resource usage. LibreCAD is a great candidate for geometric constructions. - -![librecad][5] -As a 2D tool, LibreCAD is good but it cannot work on 3D models and renderings. It might be unstable at times but it has a dependable autosave which won’t let your work go wasted. - -[LibreCAD][6] - -You can install LibreCAD by running the following command -``` -sudo apt install librecad - -``` - -#### 3\. OpenSCAD - -OpenSCAD is a free 3D CAD software. OpenSCAD is very lightweight and flexible. OpenSCAD is not interactive. You need to ‘program’ the model and OpenSCAD interprets that code to render a visual model. It is a compiler in a sense. You cannot draw the model. You describe the model. - -![openscad][7] - -OpenSCAD is the most complicated tool on this list but once you get to know it, it provides an enjoyable work experience. - -[OpenSCAD][8] - -You can use the following commands to install OpenSCAD. -``` -sudo apt-get install openscad - -``` - -#### 4\. BRL-CAD - -BRL-CAD is one of the oldest CAD tools out there. It also has been loved by Linux/UNIX users as it aligns itself with *nix philosophies of modularity and freedom. - -![BRL-CAD rendering by Sean][9] - -BRL-CAD was started in 1979, and it is still developed actively. Now, BRL-CAD is not AutoCAD but it is still a great choice for transport studies such as thermal and ballistic penetration. BRL-CAD underlies CSG instead of boundary representation. You might need to keep that in mind while opting for BRL-CAD. You can download BRL-CAD from its official website. - -[BRL-CAD][10] - -#### 5\. 
DraftSight (not open source) - -If You’re used to working on AutoCAD, then DraftSight would be the perfect alternative for you. - -DraftSight is a great CAD tool available on Linux. It has a rather similar workflow to AutoCAD, which makes migrating easier. It even provides a similar look and feel. DrafSight is also compatible with the .dwg file format of AutoCAD. But DrafSight is a 2D CAD software. It does not support 3D CAD as of yet. - -![draftsight][11] - -Although DrafSight is a commercial software with a starting price of $149. A free version is also made available on the[DraftSight website][12]. You can download the .deb package and install it on Ubuntu based distributions. need to register your free copy using your email ID to start using DraftSight. - -[DraftSight][12] - -#### Honorary mentions - - * With a huge growth in cloud computing technologies, cloud CAD solutions like [OnShape][13] have been getting popular day by day. - * [SolveSpace][14] is another open-source project worth mentioning. It supports 3D modeling. - * Siemens NX is an industrial grade CAD solution available on Windows, Mac OS and Linux, but it is ridiculously expensive, so omitted in this list. - * Then you have [LeoCAD][15], which is a CAD software where you use LEGO blocks to build stuff. What you do with this information is up to you. - - - -#### CAD on Linux, in my opinion - -Although gaming on Linux has picked up, I always tell my hardcore gaming friends to stick to Windows. Similarly, if You are an engineering student with CAD in your curriculum, I’d recommend that you use the software that your college prescribes (AutoCAD, SolidEdge, Catia), which generally tend to run on Windows only. - -And for the advanced professionals, these tools are simply not up to the mark when we’re talking about industry standards. 
- -For those of you thinking about running AutoCAD in WINE, although some older versions of AutoCAD can be installed on WINE, they simply do not perform, with glitches and crashes ruining the experience. - -That being said, I highly respect the work that has been put by the developers of the above-listed software. They have enriched the FOSS world. And it’s great to see software like FreeCAD developing with an accelerated pace in the recent years. - -Well, that’s it for today. Do share your thoughts with us using the comments section below and don’t forget to share this article. Cheers. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/cad-software-linux/ - -作者:[Aquil Roshan][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/aquil/ -[1]:https://en.wikipedia.org/wiki/Computer-aided_design -[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cad-software-linux.jpeg -[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freecad.jpg -[4]:https://www.freecadweb.org/ -[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/librecad.jpg -[6]:https://librecad.org/ -[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/openscad.jpg -[8]:http://www.openscad.org/ -[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/brlcad.jpg -[10]:https://brlcad.org/ -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/draftsight.jpg -[12]:https://www.draftsight2018.com/ -[13]:https://www.onshape.com/ -[14]:http://solvespace.com/index.pl -[15]:https://www.leocad.org/ diff --git a/sources/tech/20190819 Moving files on Linux without mv.md b/sources/tech/20190819 Moving files on Linux without mv.md index 
6bf44ff584..263ebaf0ed 100644 --- a/sources/tech/20190819 Moving files on Linux without mv.md +++ b/sources/tech/20190819 Moving files on Linux without mv.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (MjSeven) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/tech/20190830 How to Create and Use Swap File on Linux.md b/sources/tech/20190830 How to Create and Use Swap File on Linux.md index c0a91a8c22..bfda3bcdbe 100644 --- a/sources/tech/20190830 How to Create and Use Swap File on Linux.md +++ b/sources/tech/20190830 How to Create and Use Swap File on Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (hello-wn) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) diff --git a/sources/tech/20190903 5 open source speed-reading applications.md b/sources/tech/20190903 5 open source speed-reading applications.md deleted file mode 100644 index 9d37f331dc..0000000000 --- a/sources/tech/20190903 5 open source speed-reading applications.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 open source speed-reading applications) -[#]: via: (https://opensource.com/article/19/8/speed-reading-open-source) -[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari) - -5 open source speed-reading applications -====== -Train yourself to read text faster with these five apps. -![stack of books][1] - -English essayist and politician [Joseph Addison][2] once said, "Reading is to the mind what exercise is to the body." Today, most (if not all) of us are training our brains by reading text on computer monitors, television screens, mobile devices, street signs, newspapers, magazines, and papers at work or school. 
- -Given the large amount of written information we take in each day, it seems advantageous to train our brains to read faster by doing specific exercises that challenge our classical reading habits and teach us to absorb more content and data. The goal of learning these skills is not just to skim text, because reading without comprehension is wasted effort. The goal is to increase your reading speed while still achieving high levels of comprehension. - -### Reading and processing input - -Before diving into the topic of speed reading, let's examine the reading process. According to French ophthalmologist Louis Emile Javal, reading is a three-step process: - - 1. Fixate - 2. Process - 3. [Saccade][3] - - - -In step one, we determine a fixation point in the text, called the optimal recognition point. In the second step, we bring in (process) new information while the eye is fixated. Finally, we change the location of our fixation point, an operation called saccade, a time when no new information is acquired. - -In practice, the main differences among faster readers are a shorter-than-average fixation period, a longer-distance saccade, and less re-reading. - -### Reading exercise - -Reading is not a natural process for human beings, as it is a fairly recent development in the span of human existence. The first writing system was created around 5,000 years ago, not long enough for people to develop into reading machines. Therefore, we have to exercise our reading skills to become more adept and efficient at this basic task of communication. - -The first exercise consists of reducing subvocalization, mainly known as silent speech, which is the habit of pronouncing words internally while reading them. It is a natural process that slows down reading, as reading speed is limited to the speed of speech. The key to reducing subvocalization is to say only some of the words that are read. 
One way to do this is to occupy the internal voice with another task, chewing gum, for example. - -A second exercise consists of reducing regression, or re-reading text. Regression is a mechanism of laziness because our brains can re-read any material at any time, thus reducing concentration. - -### 5 open source applications to train your brain - -There are several interesting open-source applications that you can use to exercise your reading speed. - -One is [Gritz][4], an open source file reader that makes words pop up, one at a time, to reduce regression. It works on Linux, Windows, and MacOS and is released under the GPL, so you can play with it however you want. - -Other options include [Spray Speed-Reader][5], an open source speed-reading application written in JavaScript, and [Sprits-it!][6], an open source web application that enables speed-reading of web pages. - -For Android users, [Comfort Reader][7] is an open source speed-reading app. It is available in the [F-droid][8] and [Google Play][9] app stores. - -My favorite application is [Speedread][10], a simple terminal program that shows text files word-by-word at the optimal reading point. To install it, clone the GitHub repository on your device and type in the appropriate command to read a document at your preferred word-per-minute (WPM) rate. The default rate is 250 WPM. For example, to read _your_text_file.txt_ at 400 WPM, you would enter: - - -``` -`cat your_text_file.txt | ./speedread -w 400` -``` - -Here is the program in action: - -![Speedread demo][11] - -Since you probably don't read just [plain text][12] files these days, you can use [Pandoc][13] to convert files from markup format to text format. You can also run Speedread on Android devices using [Termux][14], an Android terminal simulator. 
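The core of what Speedread does (flashing one word at a time at a fixed rate) can be sketched in a few lines of Python. This is a hypothetical toy for illustration, not the actual implementation of Speedread or any app listed above; the function name and the 40-column wipe width are arbitrary choices:

```python
import sys
import time

def flash_words(text, wpm=250):
    """Print one word at a time at the given words-per-minute rate."""
    delay = 60.0 / wpm  # seconds spent on each word
    for word in text.split():
        # Carriage return moves the cursor back to the start of the line,
        # so each word overwrites the previous one in place
        sys.stdout.write("\r" + " " * 40 + "\r" + word)
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

if __name__ == "__main__":
    flash_words("Reading is to the mind what exercise is to the body", wpm=600)
```

A real tool would add the optimal-recognition-point alignment and punctuation-aware pauses that Speedread implements, but the timing loop above is the essential mechanism.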
- -### Other solutions - -An interesting project for the open source community is to build a solution that is intended only for enhancing reading speed using specific exercises to improve things like subvocalization and regression reduction. I believe this project would be very beneficial, as increasing reading speed is very valuable in today's information-rich environment. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/8/speed-reading-open-source - -作者:[Jaouhari Youssef][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jaouhari -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_stack_library_reading.jpg?itok=uulcS8Sw (stack of books) -[2]: https://en.wikipedia.org/wiki/Joseph_Addison -[3]: https://en.wikipedia.org/wiki/Saccade -[4]: https://github.com/jeffkowalski/gritz -[5]: https://github.com/chaimpeck/spray -[6]: https://github.com/the-happy-hippo/sprits-it -[7]: https://github.com/mschlauch/comfortreader -[8]: https://f-droid.org/packages/com.mschlauch.comfortreader/ -[9]: https://play.google.com/store/apps/details?id=com.mschlauch.comfortreader -[10]: https://github.com/pasky/speedread -[11]: https://opensource.com/sites/default/files/uploads/speedread_demo.gif (Speedread demo) -[12]: https://plaintextproject.online/ -[13]: https://opensource.com/article/18/9/intro-pandoc -[14]: https://termux.com/ diff --git a/sources/tech/20190904 How to build Fedora container images.md b/sources/tech/20190904 How to build Fedora container images.md new file mode 100644 index 0000000000..fc443c8bf1 --- /dev/null +++ b/sources/tech/20190904 How to build Fedora container images.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: 
translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to build Fedora container images)
+[#]: via: (https://fedoramagazine.org/how-to-build-fedora-container-images/)
+[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
+
+How to build Fedora container images
+====== 
+
+![][1]
+
+With the rise of containers and container technology, all major Linux distributions nowadays provide a container base image. This article presents how the Fedora project builds its base image. It also shows you how to use it to create a layered image.
+
+### Base and layered images
+
+Before we look at how the Fedora container base image is built, let’s define a base image and a layered image. A simple way to define a base image is an image that has no parent layer. But what does that concretely mean? It means a base image usually contains only the root file system (_rootfs_) of an operating system. The base image generally provides the tools needed to install software in order to create layered images.
+
+A layered image adds a collection of layers on top of the base image in order to install, configure, and run an application. Layered images reference base images in a _Dockerfile_ using the _FROM_ instruction:
+
+```
+FROM fedora:latest
+```
+
+### How to build a base image
+
+Fedora has a full suite of tools available to build container images. [This includes][2] _[podman][2]_, which does not require running as the root user.
+
+#### Building a rootfs
+
+A base image comprises mainly a [tarball][3]. This tarball contains a rootfs. There are different ways to build this rootfs. The Fedora project uses the [kickstart][4] installation method coupled with [imagefactory][5] software to create these tarballs.
+
+The kickstart file used during the creation of the Fedora base image is available in Fedora’s build system [Koji][6]. The _[Fedora-Container-Base][7]_ package regroups all the base image builds.
If you select a build, it gives you access to all the related artifacts, including the kickstart files. Looking at an [example][8], the _%packages_ section at the end of the file defines all the packages to install. This is how you make software available in the base image. + +#### Using a rootfs to build a base image + +Building a base image is easy, once a rootfs is available. It requires only a Dockerfile with the following instructions: + +``` +FROM scratch +ADD layer.tar / +CMD ["/bin/bash"] +``` + +The important part here is the _FROM scratch_ instruction, which is creating an empty image. The following instructions then add the rootfs to the image, and set the default command to be executed when the image is run. + +Let’s build a base image using a Fedora rootfs built in Koji: + +``` +$ curl -o fedora-rootfs.tar.xz https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz +$ tar -xJvf fedora-rootfs.tar.xz 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar +$ mv 51c14619f9dfd8bf109ab021b3113ac598aec88870219ff457ba07bc29f5e6a2/layer.tar layer.tar +$ printf "FROM scratch\nADD layer.tar /\nCMD [\"/bin/bash\"]" > Dockerfile +$ podman build -t my-fedora . +$ podman run -it --rm my-fedora cat /etc/os-release +``` + +The _layer.tar_ file which contains the rootfs needs to be extracted from the downloaded archive. This is only needed because Fedora generates images that are ready to be consumed by a container run-time. + +So using Fedora’s generated image, it’s even easier to get a base image. 
Let’s see how that works: + +``` +$ curl -O https://kojipkgs.fedoraproject.org/packages/Fedora-Container-Base/Rawhide/20190902.n.0/images/Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz +$ podman load --input Fedora-Container-Base-Rawhide-20190902.n.0.x86_64.tar.xz +$ podman run -it --rm localhost/fedora-container-base-rawhide-20190902.n.0.x86_64:latest cat /etc/os-release +``` + +### Building a layered image + +To build a layered image that uses the Fedora base image, you only need to specify _fedora_ in the _FROM_ line instruction: + +``` +FROM fedora:latest +``` + +The _latest_ tag references the latest active Fedora release (Fedora 30 at the time of writing). But it is possible to get other versions using the image tag. For example, _FROM fedora:31_ will use the Fedora 31 base image. + +Fedora supports building and releasing software as containers. This means you can maintain a Dockerfile to make your software available to others. For more information about becoming a container image maintainer in Fedora, check out the [Fedora Containers Guidelines][9]. 
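To make the layered-image workflow concrete, here is a minimal sketch of a Dockerfile that layers a web server on top of the Fedora base image. The choice of _httpd_ and the _index.html_ file name are illustrative assumptions, not part of the Fedora guidelines:

```
FROM fedora:latest

# Layer: install software with dnf, then trim the package cache
RUN dnf -y install httpd && dnf clean all

# Layer: add site content (index.html is a placeholder name)
COPY index.html /var/www/html/

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Building and running it with podman follows the same pattern shown earlier: _podman build -t my-web ._ and then _podman run -p 8080:80 my-web_.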
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/how-to-build-fedora-container-images/ + +作者:[Clément Verna][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/cverna/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/fedoracontainers-816x345.jpg +[2]: https://fedoramagazine.org/running-containers-with-podman/ +[3]: https://en.wikipedia.org/wiki/Tar_(computing) +[4]: https://en.wikipedia.org/wiki/Kickstart_(Linux) +[5]: http://imgfac.org/ +[6]: https://koji.fedoraproject.org/koji/ +[7]: https://koji.fedoraproject.org/koji/packageinfo?packageID=26387 +[8]: https://kojipkgs.fedoraproject.org//packages/Fedora-Container-Base/30/20190902.0/images/koji-f30-build-37420478-base.ks +[9]: https://docs.fedoraproject.org/en-US/containers/guidelines/guidelines/ diff --git a/sources/tech/20190905 Building CI-CD pipelines with Jenkins.md b/sources/tech/20190905 Building CI-CD pipelines with Jenkins.md new file mode 100644 index 0000000000..44b4d6cd24 --- /dev/null +++ b/sources/tech/20190905 Building CI-CD pipelines with Jenkins.md @@ -0,0 +1,255 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Building CI/CD pipelines with Jenkins) +[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins) +[#]: author: (Bryant Son https://opensource.com/users/brson) + +Building CI/CD pipelines with Jenkins +====== +Build continuous integration and continuous delivery (CI/CD) pipelines +with this step-by-step Jenkins tutorial. +![pipelines][1] + +In my article [_A beginner's guide to building DevOps pipelines with open source tools_][2], I shared a story about building a DevOps pipeline from scratch. 
The core technology driving that initiative was [Jenkins][3], an open source tool to build continuous integration and continuous delivery (CI/CD) pipelines. + +At Citi, there was a separate team that provided dedicated Jenkins pipelines with a stable master-slave node setup, but the environment was only used for quality assurance (QA), staging, and production environments. The development environment was still very manual, and our team needed to automate it to gain as much flexibility as possible while accelerating the development effort. This is the reason we decided to build a CI/CD pipeline for DevOps. And the open source version of Jenkins was the obvious choice due to its flexibility, openness, powerful plugin-capabilities, and ease of use. + +In this article, I will share a step-by-step walkthrough on how you can build a CI/CD pipeline using Jenkins. + +### What is a pipeline? + +Before jumping into the tutorial, it's helpful to know something about CI/CD pipelines. + +To start, it is helpful to know that Jenkins itself is not a pipeline. Just creating a new Jenkins job does not construct a pipeline. Think about Jenkins like a remote control—it's the place you click a button. What happens when you do click a button depends on what the remote is built to control. Jenkins offers a way for other application APIs, software libraries, build tools, etc. to plug into Jenkins, and it executes and automates the tasks. On its own, Jenkins does not perform any functionality but gets more and more powerful as other tools are plugged into it. + +A pipeline is a separate concept that refers to the groups of events or jobs that are connected together in a sequence: + +> A **pipeline** is a sequence of events or jobs that can be executed. + +The easiest way to understand a pipeline is to visualize a sequence of stages, like this: + +![Pipeline example][4] + +Here, you should see two familiar concepts: _Stage_ and _Step_. 
+
+ * **Stage:** A block that contains a series of steps. A stage block can be named anything; it is used to visualize the pipeline process.
+ * **Step:** A task that says what to do. Steps are defined inside a stage block.
+
+
+
+In the example diagram above, Stage 1 can be named "Build," "Gather Information," or whatever, and a similar idea is applied for the other stage blocks. "Step" simply says what to execute, and this can be a simple print command (e.g., **echo "Hello, World"**), a program-execution command (e.g., **java HelloWorld**), a shell-execution command (e.g., **chmod 755 Hello**), or any other command—as long as it is recognized as an executable command through the Jenkins environment.
+
+The Jenkins pipeline is provided as a _codified script_ typically called a **Jenkinsfile**, although the file name can be different. Here is an example of a simple Jenkins pipeline file.
+
+
+```
+// Example of Jenkins pipeline script
+
+pipeline {
+  agent any
+  stages {
+    stage("Build") {
+      steps {
+        // Just print a Hello, Pipeline to the console
+        echo "Hello, Pipeline!"
+        // Compile a Java file. This requires JDK configuration from Jenkins
+        sh "javac HelloWorld.java"
+        // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
+        sh "java HelloWorld"
+        // Execute the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
+        sh "mvn clean package ./HelloPackage"
+        // List the files in current directory path by executing a default shell command
+        sh "ls -ltr"
+      }
+    }
+    // And next stages if you want to define further...
+  } // End of stages
+} // End of pipeline
+```
+
+It's easy to see the structure of a Jenkins pipeline from this sample script. Note that some commands, like **java**, **javac**, and **mvn**, are not available by default, and they need to be installed and configured through Jenkins.
Therefore:
+
+> A **Jenkins pipeline** is the way to execute a Jenkins job sequentially in a defined way by codifying it and structuring it inside multiple blocks that can include multiple steps containing tasks.
+
+OK. Now that you understand what a Jenkins pipeline is, I'll show you how to create and execute a Jenkins pipeline. At the end of the tutorial, you will have built a Jenkins pipeline like this:
+
+![Final Result][5]
+
+### How to build a Jenkins pipeline
+
+To make this tutorial easier to follow, I created a sample [GitHub repository][6] and a video tutorial.
+
+Before starting this tutorial, you'll need:
+
+ * **Java Development Kit:** If you don't already have it, install a JDK and add it to the environment path so a Java command (like **java -jar**) can be executed through a terminal. This is necessary to leverage the Java Web Archive (WAR) version of Jenkins that is used in this tutorial (although you can use any other distribution).
+ * **Basic computer operations:** You should know how to type some code, execute basic Linux commands through the shell, and open a browser.
+
+
+
+Let's get started.
+
+#### Step 1: Download Jenkins
+
+Navigate to the [Jenkins download page][7]. Scroll down to **Generic Java package (.war)** and click on it to download the file; save it someplace where you can locate it easily. (If you choose another Jenkins distribution, the rest of the tutorial steps should be pretty much the same, except for Step 2.) The reason to use the WAR file is that it is a one-time executable file that is easily executable and removable.
+
+![Download Jenkins as Java WAR file][8]
+
+#### Step 2: Execute Jenkins as a Java binary
+
+Open a terminal window and enter the directory where you downloaded Jenkins with **cd <your path>**. (Before you proceed, make sure JDK is installed and added to the environment path.)
Execute the following command, which will run the WAR file as an executable binary:
+
+
+```
+`java -jar ./jenkins.war`
+```
+
+If everything goes smoothly, Jenkins should be up and running at the default port 8080.
+
+![Execute as an executable JAR binary][9]
+
+#### Step 3: Create a new Jenkins job
+
+Open a web browser and navigate to **localhost:8080**. Unless you have a previous Jenkins installation, it should go straight to the Jenkins dashboard. Click **Create New Jobs**. You can also click **New Item** on the left.
+
+![Create New Job][10]
+
+#### Step 4: Create a pipeline job
+
+In this step, you can select and define what type of Jenkins job you want to create. Select **Pipeline** and give it a name (e.g., TestPipeline). Click **OK** to create a pipeline job.
+
+![Create New Pipeline Job][11]
+
+You will see a Jenkins job configuration page. Scroll down to find the **Pipeline** section. There are two ways to execute a Jenkins pipeline. One way is by _directly writing a pipeline script_ on Jenkins, and the other way is by retrieving the _Jenkins file from SCM_ (source control management). We will go through both ways in the next two steps.
+
+#### Step 5: Configure and execute a pipeline job through a direct script
+
+To execute the pipeline with a direct script, begin by copying the contents of the [sample Jenkinsfile][6] from GitHub. Choose **Pipeline script** as the **Destination** and paste the **Jenkinsfile** contents in **Script**. Spend a little time studying how the Jenkins file is structured. Notice that there are three Stages: Build, Test, and Deploy, which are arbitrary and can be anything. Inside each Stage, there are Steps; in this example, they just print some random messages.
+
+Click **Save** to keep the changes, and it should automatically take you back to the Job Overview.
+
+![Configure to Run as Jenkins Script][12]
+
+To start the process to build the pipeline, click **Build Now**.
If everything works, you will see your first pipeline (like the one below). + +![Click Build Now and See Result][13] + +To see the output from the pipeline script build, click any of the Stages and click **Log**. You will see a message like this. + +![Visit sample GitHub with Jenkins get clone link][14] + +#### Step 6: Configure and execute a pipeline job with SCM + +Now, switch gears: In this step, you will Deploy the same Jenkins job by copying the **Jenkinsfile** from a source-controlled GitHub. In the same [GitHub repository][6], pick up the repository URL by clicking **Clone or download** and copying its URL. + +![Checkout from GitHub][15] + +Click **Configure** to modify the existing job. Scroll to the **Advanced Project Options** setting, but this time, select the **Pipeline script from SCM** option in the **Destination** dropdown. Paste the GitHub repo's URL in the **Repository URL**, and type **Jenkinsfile** in the **Script Path**. Save by clicking the **Save** button. + +![Change to Pipeline script from SCM][16] + +To build the pipeline, once you are back to the Task Overview page, click **Build Now** to execute the job again. The result will be the same as before, except you have one additional stage called **Declaration: Checkout SCM**. + +![Build again and verify][17] + +To see the pipeline's output from the SCM build, click the Stage and view the **Log** to check how the source control cloning process went. + +![Verify Checkout Procedure][18] + +### Do more than print messages + +Congratulations! You've built your first Jenkins pipeline! + +"But wait," you say, "this is very limited. I cannot really do anything with it except print dummy messages." That is OK. So far, this tutorial provided just a glimpse of what a Jenkins pipeline can do, but you can extend its capabilities by integrating it with other tools. 
Here are a few ideas for your next project: + + * Build a multi-staged Java build pipeline that takes from the phases of pulling dependencies from JAR repositories like Nexus or Artifactory, compiling Java codes, running the unit tests, packaging into a JAR/WAR file, and deploying to a cloud server. + * Implement the advanced code testing dashboard that will report back the health of the project based on the unit test, load test, and automated user interface test with Selenium.  + * Construct a multi-pipeline or multi-user pipeline automating the tasks of executing Ansible playbooks while allowing for authorized users to respond to task in progress. + * Design a complete end-to-end DevOps pipeline that pulls the infrastructure resource files and configuration files stored in SCM like GitHub and executing the scripts through various runtime programs. + + + +Follow any of the tutorials at the end of this article to get into these more advanced cases. + +#### Manage Jenkins + +From the main Jenkins dashboard, click **Manage Jenkins**. + +![Manage Jenkins][19] + +#### Global tool configuration + +There are many options available, including managing plugins, viewing the system log, etc. Click **Global Tool Configuration**. + +![Global Tools Configuration][20] + +#### Add additional capabilities + +Here, you can add the JDK path, Git, Gradle, and so much more. After you configure a tool, it is just a matter of adding the command into your Jenkinsfile or executing it through your Jenkins script. + +![See Various Options for Plugin][21] + +### Where to go from here? + +This article put you on your way to creating a CI/CD pipeline using Jenkins, a cool open source tool. 
To find out about many of the other things you can do with Jenkins, check out these other articles on Opensource.com: + + * [Getting started with Jenkins X][22] + * [Install an OpenStack cloud with Jenkins][23] + * [Running Jenkins builds in containers][24] + * [Getting started with Jenkins pipelines][25] + * [How to run JMeter with Jenkins][26] + * [Integrating OpenStack into your Jenkins workflow][27] + + + +You may be interested in some of the other articles I've written to supplement your open source journey: + + * [9 open source tools for building a fault-tolerant system][28] + * [Understanding software design patterns][29] + * [A beginner's guide to building DevOps pipelines with open source tools][2] + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brson +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines) +[2]: https://opensource.com/article/19/4/devops-pipeline +[3]: https://jenkins.io/ +[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example) +[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result) +[6]: https://github.com/bryantson/CICDPractice +[7]: https://jenkins.io/download/ +[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file) +[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary) +[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job) +[11]: 
https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job) +[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script) +[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result) +[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link) +[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub) +[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM) +[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify) +[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure) +[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins) +[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration) +[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin) +[22]: https://opensource.com/article/18/11/getting-started-jenkins-x +[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins +[24]: https://opensource.com/article/18/4/running-jenkins-builds-containers +[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber +[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101 +[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco +[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system +[29]: https://opensource.com/article/19/7/understanding-software-design-patterns diff --git a/sources/tech/20190905 Don-t force allocations on the callers of your API.md b/sources/tech/20190905 Don-t force allocations on the 
callers of your API.md new file mode 100644 index 0000000000..eca6cc3732 --- /dev/null +++ b/sources/tech/20190905 Don-t force allocations on the callers of your API.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Don’t force allocations on the callers of your API) +[#]: via: (https://dave.cheney.net/2019/09/05/dont-force-allocations-on-the-callers-of-your-api) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Don’t force allocations on the callers of your API +====== + +This is a post about performance. Most of the time when worrying about the performance of a piece of code the overwhelming advice should be (with apologies to Brendan Gregg) _don’t worry about it, yet._ However there is one area where I counsel developers to think about the performance implications of a design, and that is API design. + +Because of the high cost of retrofitting a change to an API’s signature to address performance concerns, it’s worthwhile considering the performance implications of your API’s design on its caller. + +### A tale of two API designs + +Consider these two `Read` methods: + +``` +func (r *Reader) Read(buf []byte) (int, error) +func (r *Reader) Read() ([]byte, error) +``` + +The first method takes a `[]byte` buffer and returns the number of bytes read into that buffer and possibly an `error` that occurred while reading. The second takes no arguments and returns some data as a `[]byte` or an `error`. + +This first method should be familiar to any Go programmer, it’s `io.Reader.Read`. As ubiquitous as `io.Reader` is, it’s not the most convenient API to use. Consider for a moment that `io.Reader` is the only Go interface in widespread use that returns _both_ a result _and_ an error. Meditate on this for a moment. The standard Go idiom, checking the error and iff it is `nil` is it safe to consult the other return values, does not apply to `Read`. 
In fact the caller must do the opposite. First they must record the number of bytes read into the buffer, reslice the buffer, process that data, and only then, consult the error. This is an unusual API for such a common operation and one that frequently catches out newcomers. + +### A trap for young players? + +Why is it so? Why is one of the central APIs in Go’s standard library written like this? A superficial answer might be `io.Reader`‘s signature is a reflection of the underlying `read(2)` syscall, which is indeed true, but misses the point of this post. + +If we compare the API of `io.Reader` to our alternative, `func Read() ([]byte, error)`, this API seems easier to use. Each call to `Read()` will return the data that was read, no need to reslice buffers, no need to remember the special case to do this before checking the error. Yet this is not the signature of `io.Reader.Read`. Why would one of Go’s most pervasive interfaces choose such an awkward API? The answer, I believe, lies in the performance implications of the APIs signature on the _caller_. + +Consider again our alternative `Read` function, `func Read() ([]byte, error)`. On each call `Read` will read some data into a buffer[1][1] and return the buffer to the caller. Where does this buffer come from? Who allocates it? The answer is the buffer is allocated _inside_ `Read`. Therefore each call to `Read` is guaranteed to allocate a buffer which would escape to the heap. The more the program reads, the faster it reads data, the more streams of data it reads concurrently, the more pressure it places on the garbage collector. + +The standard libraries’ `io.Reader.Read` forces the caller to supply a buffer because if the caller is concerned with the number of allocations their program is making this is precisely the kind of thing they want to control. Passing a buffer into `Read` puts the control of the allocations into the caller’s hands. 
If they aren’t concerned about allocations they can use higher level helpers like `ioutil.ReadAll` to read the contents into a `[]byte`, or `bufio.Scanner` to stream the contents instead. + +The opposite, starting with a method like our alternative `func Read() ([]byte, error)` API, prevents callers from pooling or reusing allocations–no amount of helper methods can fix this. As an API author, if the API cannot be changed you’ll be forced to add a second form to your API taking a supplied buffer and reimplementing your original API in terms of the newer form. Consider, for example, `io.CopyBuffer`. Other examples of retrofitting APIs for performance reasons are the `fmt` [package][2] and the `net/http` [package][3] which drove the introduction of the `sync.Pool` type precisely because the Go 1 guarantee prevented the APIs of those packages from changing. + +* * * + +If you want to commit to an API for the long run, consider how its design will impact the size and frequency of allocations the caller will have to make to use it. + + 1. This API has other problems, such as, _how much data should be read?_ or _should it try to read as much as possible, or return promptly if the read would block?_[][4] + + + +#### Related posts: + + 1. [Friday pop quiz: the smallest buffer][5] + 2. [Constant errors][6] + 3. [Simple test coverage with Go 1.2][7] + 4. [Struct composition with Go][8] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/09/05/dont-force-allocations-on-the-callers-of-your-api + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: tmp.9E95iAQGkb#easy-footnote-bottom-1-3821 (This API has other problems, such as, how much data should be read? 
or should it try to read as much as possible, or return promptly if the read would block?) +[2]: https://golang.org/cl/43990043 +[3]: https://golang.org/cl/44080043 +[4]: tmp.9E95iAQGkb#easy-footnote-1-3821 +[5]: https://dave.cheney.net/2015/06/05/friday-pop-quiz-the-smallest-buffer (Friday pop quiz: the smallest buffer) +[6]: https://dave.cheney.net/2016/04/07/constant-errors (Constant errors) +[7]: https://dave.cheney.net/2013/10/07/simple-test-coverage-with-go-1-2 (Simple test coverage with Go 1.2) +[8]: https://dave.cheney.net/2015/05/22/struct-composition-with-go (Struct composition with Go) diff --git a/sources/tech/20190906 6 Open Source Paint Applications for Linux Users.md b/sources/tech/20190906 6 Open Source Paint Applications for Linux Users.md new file mode 100644 index 0000000000..d1523f33c3 --- /dev/null +++ b/sources/tech/20190906 6 Open Source Paint Applications for Linux Users.md @@ -0,0 +1,234 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 Open Source Paint Applications for Linux Users) +[#]: via: (https://itsfoss.com/open-source-paint-apps/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +6 Open Source Paint Applications for Linux Users +====== + +As a child, when I started using a computer (with Windows XP), my favorite application was Paint. I spent hours doodling on it. Surprisingly, children still love the paint apps. And not just children, the simple paint app comes in handy in a number of situations. + +You will find a bunch of applications that let you draw/paint or manipulate images. However, some of them are proprietary. Since you’re a Linux user, why not focus on open source paint applications? + +In this article, we are going to list some of the best open source paint applications which are worthy alternatives to proprietary painting software available on Linux.
+ +### Open Source paint & drawing applications + +![][1] + +**Note:** _The list is in no particular order of ranking._ + +#### 1\. Pinta + +![][2] + +Key Highlights: + + * Great alternative to Paint.NET / MS Paint + * Add-on support (WebP Image support available) + * Layer Support + + + +[Pinta][3] is an impressive open-source paint application which is perfect for drawing and basic image editing. In other words, it is a simple paint application with some fancy features. + +You may consider [Pinta][4] as an alternative to MS Paint on Linux – but with layer support and more. Not just MS Paint, but it acts as a Linux replacement for the Paint.NET software available for Windows. Even though Paint.NET is better – Pinta seems to be a decent alternative to it. + +A couple of add-ons can be utilized to enhance the functionality, like the [support for WebP images on Linux][5]. In addition to the layer support, you can easily resize images, add effects, make adjustments (brightness, contrast, etc.), and also adjust the quality when exporting an image. + +#### How to install Pinta? + +You should be able to easily find it in the Software Center / App Center / Package Manager. Just type in “**Pinta**” and get started installing it. Alternatively, try the [Flatpak][6] package. + +Or, you can enter the following command in the terminal (Ubuntu/Debian): + +``` +sudo apt install pinta +``` + +For more information on the download packages and installation instructions, refer to the [official download page][7]. + +#### 2\. Krita + +![][8] + +Key Highlights: + + * HDR Painting + * PSD Support + * Layer Support + * Brush stabilizers + * 2D Animation + + + +Krita is one of the most advanced open source paint applications for Linux. Of course, for this article, it helps you draw sketches and wreak havoc upon the canvas. But, in addition to that, it offers a whole lot of features.
+ +[][9] + +Suggested read  Things To Do After Installing Fedora 24 + +For instance, if you have a shaky hand, it can help you stabilize your brush strokes. You also get built-in vector tools to create comic panels and other interesting things. If you are looking for full-fledged color management support, drawing assistants, and layer management, Krita should be your preferred choice. + +#### How to install Krita? + +Similar to Pinta, you should be able to find it listed in the Software Center/App Center or the package manager. It’s also available in the [Flatpak repository][10]. + +Thinking of installing it via the terminal? Type in the following command: + +``` +sudo apt install krita +``` + +Alternatively, you can head to the [official download page][11] to get the **AppImage** file and run it. + +If you have no idea about AppImage files, check out our guide on [how to use AppImage][12]. + +#### 3\. Tux Paint + +![][13] + +Key Highlights: + + * A no-nonsense paint application for kids + + + +I’m not kidding, Tux Paint is one of the best open-source paint applications for kids between 3-12 years of age. Of course, you do not want options when you want to just scribble. So, Tux Paint seems to be the best option in that case (even for adults!). + +#### How to install Tux Paint? + +Tux Paint can be downloaded from the Software Center or package manager. Or, to install it on Ubuntu/Debian, type in the following command in the terminal: + +``` +sudo apt install tuxpaint +``` + +For more information on it, head to the [official site][14]. + +#### 4\. Drawpile + +![][15] + +Key Highlights: + + * Collaborative Drawing + * Built-in chat to interact with other users + * Layer support + * Record drawing sessions + + + +Drawpile is an interesting open-source paint application where you get to collaborate with other users in real-time. To be precise, you can simultaneously draw on a single canvas.
In addition to this unique feature, you have layer support, the ability to record your drawing session, and even a chat facility to interact with your collaborators. + +You can host/join a public session or start a private session with your friend, which requires a code. By default, the server will be your computer. But, if you want a remote server, you can select it as well. + +Do note that you will need to [sign up for a Drawpile account][16] in order to collaborate. + +#### How to install Drawpile? + +As far as I’m aware, you can only find it listed in the [Flatpak repository][17]. + +[][18] + +Suggested read  OCS Store: One Stop Shop All of Your Linux Software Customization Needs + +#### 5\. MyPaint + +![][19] + +Key Highlights: + + * Easy-to-use tool for digital painters + * Layer management support + * Lots of options to tweak your brush and drawing + + + +[MyPaint][20] is a simple yet powerful tool for digital painters. It features a lot of options to tweak in order to make the perfect digital brush stroke. I’m not much of a digital artist (but a scribbler), but I observed quite a few options to adjust the brush and the colors, and an option to add a scratchpad panel. + +It also supports layer management – in case you want that. The latest stable version hasn’t been updated for a few years now, but the recent alpha build (which I tested) works just fine. If you are looking for an open source paint application on Linux – do give this a try. + +#### How to install MyPaint? + +MyPaint is available in the official repository. However, that’s the old version. If you still want to proceed, you can search for it in the Software Center or type the following command in the terminal: + +``` +sudo apt install mypaint +``` + +You can head to its official [GitHub release page][21] for the latest alpha build and get the [AppImage file][12] (any version) to make it executable and launch the app. + +#### 6\.
KolourPaint + +![][22] + +Key Highlights: + + * A simple alternative to MS Paint on Linux + * No layer management support + + + +If you aren’t looking for layer management support and just want an open source paint application to draw stuff – this is it. + +[KolourPaint][23] was originally tailored for KDE desktop environments, but it works flawlessly on others too. + +#### How to install KolourPaint? + +You can install KolourPaint right from the Software Center or via the terminal using the following command: + +``` +sudo apt install kolourpaint4 +``` + +Alternatively, you can utilize [Flathub][24] as well. + +**Wrapping Up** + +If you are wondering about applications like GIMP/Inkscape, we have those listed in a separate article on the [best Linux tools for digital artists][25]. If you’re curious about more options, I recommend you check that out. + +Here, we have tried to compile a list of the best open source paint applications available for Linux. If you think we missed something, feel free to tell us about it in the comments section below!
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/open-source-paint-apps/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/open-source-paint-apps.png?resize=800%2C450&ssl=1 +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/pinta.png?ssl=1 +[3]: https://pinta-project.com/pintaproject/pinta/ +[4]: https://itsfoss.com/pinta-1-6-ubuntu-linux-mint/ +[5]: https://itsfoss.com/webp-ubuntu-linux/ +[6]: https://www.flathub.org/apps/details/com.github.PintaProject.Pinta +[7]: https://pinta-project.com/pintaproject/pinta/releases +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/krita-paint.png?ssl=1 +[9]: https://itsfoss.com/things-to-do-after-installing-fedora-24/ +[10]: https://www.flathub.org/apps/details/org.kde.krita +[11]: https://krita.org/en/download/krita-desktop/ +[12]: https://itsfoss.com/use-appimage-linux/ +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/tux-paint.jpg?ssl=1 +[14]: http://www.tuxpaint.org/ +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/drawpile.png?ssl=1 +[16]: https://drawpile.net/accounts/signup/ +[17]: https://flathub.org/apps/details/net.drawpile.drawpile +[18]: https://itsfoss.com/ocs-store/ +[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/mypaint.png?ssl=1 +[20]: https://mypaint.org/ +[21]: https://github.com/mypaint/mypaint/releases +[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/kolourpaint.png?ssl=1 +[23]: http://kolourpaint.org/ +[24]: https://flathub.org/apps/details/org.kde.kolourpaint +[25]: https://itsfoss.com/best-linux-graphic-design-software/ diff --git a/sources/tech/20190906 How to 
change the color of your Linux terminal.md b/sources/tech/20190906 How to change the color of your Linux terminal.md new file mode 100644 index 0000000000..bb418a6ded --- /dev/null +++ b/sources/tech/20190906 How to change the color of your Linux terminal.md @@ -0,0 +1,211 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to change the color of your Linux terminal) +[#]: via: (https://opensource.com/article/19/9/linux-terminal-colors) +[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/jason-bakerhttps://opensource.com/users/tlaihttps://opensource.com/users/amjithhttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravo) + +How to change the color of your Linux terminal +====== +Make Linux as colorful (or as monochromatic) as you want. +![4 different color terminal windows with code][1] + +You can add color to your Linux terminal using special ANSI encoding settings, either dynamically in a terminal command or in configuration files, or you can use ready-made themes in your terminal emulator. Either way, the nostalgic green or amber text on a black screen is wholly optional. This article demonstrates how you can make Linux as colorful (or as monochromatic) as you want. + +### Terminal capabilities + +Modern systems usually default to at least xterm-256color, but if you try to add color to your terminal without success, you should check your TERM setting. + +Historically, Unix terminals were literally that: physical points at the literal endpoint (termination) of a shared computer system where users could type in commands. They were unique from the teletype machines (which is why we still have /dev/tty devices in Linux today) that were often used to issue commands remotely. Terminals had CRT monitors built-in, so users could sit at a terminal in their office to interact directly with the mainframe. 
CRT monitors were expensive—both to manufacture and to control; it was easier to have a computer spit out crude ASCII text than to worry about anti-aliasing and other niceties that modern computerists take for granted. However, developments in technology happened fast even then, and it quickly became apparent that as new video display terminals were designed, they needed new capabilities to be available on an optional basis. + +For instance, the fancy new VT100 released in 1978 supported ANSI escape sequences for cursor movement and text attributes (color arrived in later models in the line), so if a user identified the terminal type as vt100, then a computer could deliver output that used those capabilities, while a basic serial device might not have such an option. The same principle applies today, and it's set by the TERM [environment variable][2]. You can check your TERM definition with **echo**: + + +``` +$ echo $TERM +xterm-256color +``` + +The obsolete (but still maintained on some systems in the interest of backward compatibility) /etc/termcap file defined the capabilities of terminals and printers. The modern version of that is terminfo, located in either /etc or /usr/share, depending on your distribution. These files list the features available in different kinds of terminals, many of which are defined by historical hardware: there are definitions for vt100 through vt220, as well as for modern software emulators like xterm and Xfce. Most software doesn't care what terminal type you're using; in rare instances, you might get a warning or error about an incorrect terminal type when logging into a server that checks for compatible features. If your terminal is set to a profile with very few features, but you know the emulator you use is capable of more, then you can change your setting by defining the TERM environment variable.
You can do this by exporting the TERM variable in your ~/.bashrc configuration file: + + +``` +export TERM=xterm-256color +``` + +Save the file, and reload your settings: + + +``` +$ source ~/.bashrc +``` + +### ANSI color codes + +Modern terminals have inherited ANSI escape sequences for "meta" features. These are special sequences of characters that a terminal interprets as actions instead of characters. For instance, this sequence clears the screen up to the next prompt: + + +``` +$ printf '\033[2J' +``` + +It doesn't clear your history; it just clears up the screen in your terminal emulator, so it's a safe and demonstrative ANSI escape sequence. + +ANSI also has sequences to set the color of your terminal. For example, typing this code changes the subsequent text to green: + + +``` +$ printf '\033[32m' +``` + +As long as you see color the same way your computer does, you could use color to help you remember what system you're logged into. For example, if you regularly SSH into your server, you can set your server prompt to green to help you differentiate it at a glance from your local prompt. For a green prompt, use the ANSI code for green before your prompt character and end it with the code representing your normal default color: + + +``` +export PS1=`printf "\033[32m$ \033[39m"` +``` + +### Foreground and background + +You're not limited to setting the color of your text. With ANSI codes, you can control the background color of your text as well as do some rudimentary styling. + +For instance, with **\033[4m**, you can cause text to be underlined, or with **\033[5m** you can set it to blink. That might seem silly at first—because you're probably not going to set your terminal to underline all text and blink all day—but it can be useful for select functions. For instance, you might set an urgent error produced by a shell script to blink (as an alert for your user), or you might underline a URL.
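As a quick sketch of how these codes compose: attribute and color codes join with semicolons inside one escape sequence, and a trailing `\033[0m` resets all attributes so the styling doesn't leak into later output (the URL and message below are just placeholders):

```shell
# Underline a URL, then print a blinking red alert; reset attributes afterward.
printf '\033[4m%s\033[0m\n' 'https://example.com'
printf '\033[5;31m%s\033[0m\n' 'ALERT: backup failed'
```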
+ +For your reference, here are the foreground and background color codes. Foreground colors are in the 30 range, while background colors are in the 40 range: + +Color | Foreground | Background +---|---|--- +Black | \033[30m | \033[40m +Red | \033[31m | \033[41m +Green | \033[32m | \033[42m +Orange | \033[33m | \033[43m +Blue | \033[34m | \033[44m +Magenta | \033[35m | \033[45m +Cyan | \033[36m | \033[46m +Light gray | \033[37m | \033[47m +Fallback to distro's default | \033[39m | \033[49m + +There are some additional colors available for the background: + +Color | Background +---|--- +Dark gray | \033[100m +Light red | \033[101m +Light green | \033[102m +Yellow | \033[103m +Light blue | \033[104m +Light purple | \033[105m +Teal | \033[106m +White | \033[107m + +### Permanency + +Setting colors in your terminal session is only temporary and relatively unconditional. Sometimes the effect lasts for a few lines; that's because this method of setting colors relies on a printf statement to set a mode that lasts only until something else overrides it. + +The way a terminal emulator typically gets instructions on what colors to use is from the settings of the LS_COLORS environment variable, which is in turn populated by the settings of dircolors. You can view your current settings with an echo statement: + + +``` +$ echo $LS_COLORS +rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40; +38;5;11:so=38;5;13:do=38;5;5:bd=48;5; +232;38;5;11:cd=48;5;232;38;5;3:or=48; +5;232;38;5;9:mi=01;05;37;41:su=48;5; +196;38;5;15:sg=48;5;11;38;5;16:ca=48;5; +196;38;5;226:tw=48;5;10;38;5;16:ow=48;5; +[...] +``` + +Or you can use dircolors directly: + + +``` +$ dircolors --print-database +[...] +# image formats +.jpg 01;35 +.jpeg 01;35 +.mjpg 01;35 +.mjpeg 01;35 +.gif 01;35 +.bmp 01;35 +.pbm 01;35 +.tif 01;35 +.tiff 01;35 +[...] +``` + +If that looks cryptic, it's because it is. 
The first digit after a file type is the attribute code, and it has six options: + + * 00 none + * 01 bold + * 04 underscore + * 05 blink + * 07 reverse + * 08 concealed + + + +The next digit is the color code in a simplified form. You can get the color code by taking the final digit of the ANSI code (32 for green foreground, 42 for green background; 31 or 41 for red, and so on). + +Your distribution probably sets LS_COLORS globally, so all users on your system inherit the same colors. If you want a customized set of colors, you can use dircolors for that. First, generate a local copy of your color settings: + + +``` +$ dircolors --print-database > ~/.dircolors +``` + +Edit your local list as desired. When you're happy with your choices, save the file. Your color settings are just a database and can't be used directly by [ls][3], but you can use dircolors to get shellcode you can use to set LS_COLORS: + + +``` +$ dircolors --bourne-shell ~/.dircolors +LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00: +pi=40;33:so=01;35:do=01;35:bd=40;33;01: +cd=40;33;01:or=40;31;01:mi=00:su=37;41: +sg=30;43:ca=30;41:tw=30;42:ow=34; +[...] +export LS_COLORS +``` + +Copy and paste that output into your ~/.bashrc file and reload. Alternatively, you can dump that output straight into your .bashrc file and reload. + + +``` +$ dircolors --bourne-shell ~/.dircolors >> ~/.bashrc +$ source ~/.bashrc +``` + +You can also make Bash resolve .dircolors upon launch instead of doing the conversion manually. Realistically, you're probably not going to change colors often, so this may be overly aggressive, but it's an option if you plan on changing your color scheme a lot. In your .bashrc file, add this rule: + + +``` +[[ -e $HOME/.dircolors ]] && eval "`dircolors --sh $HOME/.dircolors`" +``` + +Should you have a .dircolors file in your home directory, Bash evaluates it upon launch and sets LS_COLORS accordingly.
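Putting the whole cycle together — dump, edit, re-export — might look like the following sketch. The `sed` expression is purely illustrative (it recolors `.jpg` entries to bold green, `01;32`), and it assumes GNU `sed` and `dircolors` are available:

```shell
#!/bin/sh
# Generate a local database, recolor .jpg entries, and re-export LS_COLORS.
dircolors --print-database > "$HOME/.dircolors"
sed -i 's/^\(\.jpg[[:space:]]\{1,\}\)[0-9;]*/\101;32/' "$HOME/.dircolors"
eval "$(dircolors --bourne-shell "$HOME/.dircolors")"
# Confirm the new entry landed in LS_COLORS:
echo "$LS_COLORS" | tr ':' '\n' | grep '^\*\.jpg'
```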
+ +### Color + +Colors in your terminal are an easy way to give yourself a quick visual reference for specific information. However, you might not want to lean on them too heavily. After all, colors aren't universal, so if someone else uses your system, they may not see the colors the same way you do. Furthermore, if you use a variety of tools to interact with computers, you might also find that some terminals or remote connections don't provide the colors you expect (or colors at all). + +Those warnings aside, colors can be useful and fun in some workflows, so create a .dircolor database and customize it to your heart's content. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/linux-terminal-colors + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sethhttps://opensource.com/users/jason-bakerhttps://opensource.com/users/tlaihttps://opensource.com/users/amjithhttps://opensource.com/users/greg-phttps://opensource.com/users/marcobravo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code) +[2]: https://opensource.com/article/19/8/what-are-environment-variables +[3]: https://opensource.com/article/19/7/master-ls-command diff --git a/sources/tech/20190906 Introduction to monitoring with Pandora FMS.md b/sources/tech/20190906 Introduction to monitoring with Pandora FMS.md new file mode 100644 index 0000000000..2c88ffc6e5 --- /dev/null +++ b/sources/tech/20190906 Introduction to monitoring with Pandora FMS.md @@ -0,0 +1,221 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: 
(Introduction to monitoring with Pandora FMS) +[#]: via: (https://opensource.com/article/19/9/introduction-monitoring-pandora-fms) +[#]: author: (Sancho Lerena https://opensource.com/users/slerenahttps://opensource.com/users/jimmyolanohttps://opensource.com/users/alanfdoss) + +Introduction to monitoring with Pandora FMS +====== +Open source, all-purpose monitoring software monitors network equipment, +servers, virtual environments, applications, and much more. +![A network diagram][1] + +Pandora Flexible Monitoring Solution (FMS) is all-purpose monitoring software, which means it can control network equipment, servers (Linux and Windows), virtual environments, applications, databases, and a lot more. It can do both remote monitoring and monitoring based on agents installed on the servers. You can get the collected data in reports and graphs and raise alerts if something goes wrong. + +Pandora FMS is offered in two versions: the [open source community edition][2] is aimed at private users and organizations of any size and is fully functional and totally free, while the [enterprise version][3] is designed to facilitate the work of companies, as it has support services and special features for large environments. Both versions are updated every month and accessible directly from the console. + +### Installing Pandora FMS + +#### Getting started + +Linux is Pandora FMS's preferred operating system, but it also works perfectly under Windows. CentOS 7 is the recommended distribution, and there are installation packages for Debian/Ubuntu and SUSE Linux. If you feel brave, you can install it from source on other distros, FreeBSD, or Solaris, but professional support is available only on Linux. + +For a small test, you will need a server with at least 4GB of RAM and about 20GB of free disk space. With this environment, you can monitor 200 to 300 servers easily. Pandora FMS has different ways to scale, and it can monitor several thousand servers in a single instance.
By combining several instances, clients with even 100,000 devices can be monitored. + +#### ISO installation + +The easiest way to install Pandora FMS is to use the ISO image, which contains a CentOS 7 version with all the dependencies. The following steps will get Pandora FMS ready to use in just five minutes. + + 1. [Download][4] the ISO from Pandora FMS's website. + 2. Burn it onto a DVD or USB stick, or boot it from your virtual infrastructure manager (e.g., VMware, Xen, VirtualBox). + 3. Boot the image and proceed to the guided setup (a standard CentOS setup process). Set a unique password for the root user. + 4. Identify the IP address of your new system. + 5. Access the Pandora FMS console using the IP address of the system where you installed Pandora FMS. Open a web browser and enter **http://<pandora_ip_address>/pandora_console** and log in as **admin** using the default password **pandora**. + + + +Congratulations, you're in! You can skip the other installation methods and [jump ahead][5] to start monitoring something real. + +#### Docker installation + + 1. First, launch Pandora FMS with this command: `curl -sSL http://pandorafms.org/getpandora | sh`. You can also run Pandora FMS as a container by executing: ``` +docker run --rm -ti -p 80:80 -p 443:443 \ +  --name pandorafms pandorafms/pandorafms:latest +``` + 2. Once Pandora FMS is running, open your browser and enter  +**http://<ip address>/pandora_console**. Log in as **admin** with the default password **pandora**. + + + +The Docker container is at [hub.docker.com/r/pandorafms/pandorafms][6]. + +#### Yum installation + +You can install Pandora FMS for Red Hat Enterprise Linux or CentOS 7 in just five steps. + + 1. Activate CentOS Updates, CentOS Extras, and EPEL in [your repository's library][7]. + + 2. Add the official Pandora FMS repo to your system: ``` +[artica_pandorafms] + +name=CentOS7 - PandoraFMS official repo +baseurl= +gpgcheck=0 +enabled=1 +``` + + 3.
Install the packages from the repo and solve all dependencies: ``` +yum install pandorafms_console pandorafms_server mariadb-server +``` + 4. Reload services if you need to install Apache or MySQL/MariaDB: ``` +service httpd reload (or equivalent) + +service mysqld reload (or equivalent) +``` + + 5. Open your browser and enter **http://<ip address>/pandora_console**. Proceed with the setup process. After accepting the license and doing a few pre-checks, you should see something like this: + + + + +![Pandora FMS environment and database setup][8] + +This screen is only needed when you install using the RPM, DEB, or source code (Git, tarball, etc.). This step of the console configuration uses MySQL credentials (which you need to know) to create a database and a username and password for the Pandora FMS console and server. You need to set up the server password manually (yep! Vim or Nano?) by editing the **/etc/pandora/pandora_server.conf** file (follow the [instructions in the documentation][9]). + +Restart the Pandora FMS server, and everything should be ready. + +#### Other ways to install Pandora FMS + +If none of these installation methods work with your setup, other options include a Git checkout, a tarball with sources, a DEB package (with the .deb online repo), and a SUSE RPM. You can learn more about these installation methods in the [installing wiki][10]. + +Grabbing the code is pretty easy with Git: + + +``` +git clone https://github.com/pandorafms/pandorafms.git +``` + +### Monitoring with Pandora FMS + +When you log into the console, you will see a welcome screen. + +![Pandora FMS welcome screen][11] + +#### Monitoring something connected to the network + +Let's begin with the simplest thing to do: pinging a host. First, create an agent by selecting **Resources**, then **Manage Agents**, from the menu.
+ +![Locating the Manage Agents menu][12] + +Click on **Create** at the bottom of the page, and fill in the basic information (don't go crazy, just add your IP address and name). + +![Enter basic data in the Agent Manager][13] + +Go to the **Modules** tab and create a network module. + +![Create a network module][14] + +Use the Module component (which comes from an internal library pre-defined in Pandora FMS) to choose the ping by selecting **Network Management** and entering **Host Alive**. + +![Choosing Host Alive ping][15] + +Click on **Save** and go back to the "view" interface by clicking the "eye" icon on the right. + +![Menu bar with "eye" icon][16] + +Congratulations! Your ping is running (you know it because it's green). + +![Console showing ping is running][17] + +This is the manual way; you can also use the wizard to grab an entire Simple Network Management Protocol (SNMP) device to show its interfaces, or you can use a bulk operation to copy a configuration from one device to another, or you can use the command-line interface (CLI) API to do configurations automatically. Review the [online wiki][18], with over 1200 articles of documentation, to learn more. + +The following shows an old Sonicwall NSA 250M firewall monitored with the SNMP wizard. It shows data on interface status, active connections, CPU usage, active VPNs, and a lot more. + +![Console showing firewall monitoring][19] + +Remote monitoring supports SNMP v.1, 2, and 3; Windows Management Instrumentation (WMI); remote SSH calls; SNMP trap capturing; and NetFlow monitoring. + +#### Monitoring a server with an agent + +Installing a Linux agent in Red Hat/CentOS is simple. Enter: + + +``` +yum install pandorafms_agent_unix +``` + +Edit **/etc/pandora/pandora_agent.conf** and set the IP address of your Pandora FMS server: + + +``` +server_ip    +``` + +Restart the agent and wait a few seconds for the console to show the data.
![Console monitoring a Linux agent][20] + +In the main agent view, you can see events, data, and history; define the threshold for status change; and set up alerts to warn you when something is wrong. Months' worth of data is available for graphs, reports, and service-level agreement (SLA) compliance. + +Installing a Windows agent is even easier because the installer supports automation for unattended setups. Start by downloading the agent and going through the usual installer steps. At some point, it will ask you for your server IP and the name for the agent, but that's all. + +![Pandora FMS Windows setup screen][21] + +Windows agents support grabbing service status and processes, executing local commands to get information, getting Windows events, native WMI calls, obtaining performance counters directly from the system, and providing a lot more information than the basic CPU/RAM/disk stuff. It uses the same configuration file as the Linux version (pandora_agent.conf), which you can edit with a text editor like Notepad. Editing is very easy; you should be able to add your own checks in less than a minute. + +### Creating graphs, reports, and SLA checks + +Pandora FMS has lots of options for graphs and reports, including SLA compliance, in both the open source and enterprise versions. + +![Pandora FMS SLA compliance report][22] + +Pandora FMS's Visual Map feature allows you to create a map of information that combines status, data, graphs, icons, and more. You can edit it using an online editor. Pandora FMS is 100% operable from the console; no desktop application or Java is needed, nor do you need to execute commands from the console. + +Here are three examples. + +![Pandora FMS support ticket graph][23] + +![Pandora FMS network status graph][24] + +![Pandora FMS server status graph][25] + +If you would like to learn more about Pandora FMS, visit the [website][2] or ask questions in the [forum][26].
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/9/introduction-monitoring-pandora-fms + +作者:[Sancho Lerena][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/slerenahttps://opensource.com/users/jimmyolanohttps://opensource.com/users/alanfdoss +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_fedora_cla.png?itok=O927VLkU (A network diagram) +[2]: https://pandorafms.org/ +[3]: https://pandorafms.com/ +[4]: http://pandorafms.org/features/free-download-monitoring-software/ +[5]: tmp.jGrBq9KQnv#Monitoring +[6]: https://hub.docker.com/r/pandorafms/pandorafms +[7]: https://pandorafms.com/docs/index.php?title=Pandora:Documentation_en:Installing#Installation_in_Red_Hat_Enterprise_Linux_.2F_Fedora_.2F_CentOS +[8]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-1.png (Pandora FMS environment and database setup) +[9]: https://pandorafms.com/docs/index.php?title=Pandora:Documentation_en:Installing#Server_Initialization_and_Basic_Configuration +[10]: https://pandorafms.com/docs/index.php?title=Pandora:Documentation_en:Installing +[11]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-2.png (Pandora FMS welcome screen) +[12]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-3.png (Locating the Manage Agents menu) +[13]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-4.png (Enter basic data in the Agent Manager) +[14]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-5.png (Create a network module) +[15]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-6.png (Choosing Host Alive ping) +[16]: 
https://opensource.com/sites/default/files/uploads/installing-pandora-fms-7.png (Menu bar with "eye" icon) +[17]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-8.png (Console showing ping is running) +[18]: https://pandorafms.com/docs/index.php?title=Main_Page +[19]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-9.png (Console showing firewall monitoring) +[20]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-10.png (Console monitoring a Linux agent) +[21]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-11.png (Pandora FMS Windows setup screen) +[22]: https://opensource.com/sites/default/files/uploads/installing-pandora-fms-12.png (Pandora FMS SLA compliance report) +[23]: https://opensource.com/sites/default/files/uploads/pandora-fms-visual-console-1.jpg (Pandora FMS support ticket graph) +[24]: https://opensource.com/sites/default/files/uploads/pandora-fms-visual-console-2.jpg (Pandora FMS network status graph) +[25]: https://opensource.com/sites/default/files/uploads/pandora-fms-visual-console-3.png (Pandora FMS server status graph) +[26]: https://pandorafms.org/forum/ diff --git a/sources/tech/20190906 Performing storage management tasks in Cockpit.md b/sources/tech/20190906 Performing storage management tasks in Cockpit.md new file mode 100644 index 0000000000..133d1437a9 --- /dev/null +++ b/sources/tech/20190906 Performing storage management tasks in Cockpit.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Performing storage management tasks in Cockpit) +[#]: via: (https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/) +[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/) + +Performing storage management tasks in Cockpit +====== + +![][1] + +In the [previous article][2] we touched upon some of the new features introduced 
to Cockpit over the years. This article will look into some of the tools within the UI to perform everyday storage management tasks. To access these functionalities, install the _cockpit-storaged_ package:
+
+```
+sudo dnf install cockpit-storaged
+```
+
+From the main screen, click the **Storage** menu option in the left column. Everything needed to observe and manage disks is available on the main Storage screen. The top of the page displays two graphs showing the disk’s read and write performance, with the local filesystem’s information below. In addition, the options to add or modify RAID devices, volume groups, iSCSI devices, and drives are available. Scrolling down reveals a summary of recent logs, which allows admins to catch any errors that require immediate attention.
+
+![][3]
+
+### Filesystems
+
+This section lists the system’s mounted partitions. Clicking on a partition will display information and options for that mounted drive. Options to grow and shrink partitions are available in the **Volume** subsection. There’s also a filesystem subsection that allows you to change the label and configure the mount.
+
+If it’s part of a volume group, other logical volumes in that group will also be available. Each standard partition has the option to delete and format. Also, logical volumes have an added option to deactivate the partition.
+
+![][4]
+
+### RAID devices
+
+Cockpit makes it super-easy to manage RAID drives. With a few simple clicks, the RAID drive is created, formatted, encrypted, and mounted. For details, or a how-to on creating a RAID device from the CLI, check out the article [Managing RAID arrays with mdadm][5].
+
+To create a RAID device, start by clicking the add (**+**) button. Enter a name, select the RAID level and the available drives, then click **Create**. The RAID section will show the newly created device. Select it to create the partition table and format the drive(s). 
You can always remove the device by clicking the **Stop** and **Delete** buttons in the top-right corner.
+
+![][6]
+
+### Logical volumes
+
+By default, the Fedora installation uses LVM when creating the partition scheme. This allows users to create groups and add volumes from different disks to those groups. The article [Use LVM to Upgrade Fedora][7] has some great tips and explanations on how it works on the command line.
+
+Start by clicking the add (**+**) button next to “Volume Groups”. Give the group a name, select the disk(s) for the volume group, and click **Create**. The new group is available in the Volume Groups section. The example below demonstrates a new group named “vgraiddemo”.
+
+Now, click the newly made group, then select the option to **Create a New Logical Volume**. Give the LV a name and select the purpose: block device for filesystems, or pool for thinly provisioning volumes. Adjust the amount of storage, if necessary, and click the **Format** button to finalize the creation.
+
+![][8]
+
+Cockpit can also configure current volume groups. To add a drive to an existing group, click the name of the volume group, then click the add (**+**) button next to “Physical Volumes”. Select the disk from the list and click the **Add** button. In one shot, not only has a new PV been created, but it’s also added to the group. From here, we can add the available storage to a partition, or create a new LV. The example below demonstrates how the additional space is used to grow the root filesystem.
+
+![][9]
+
+### iSCSI targets
+
+Connecting to an iSCSI server is a quick process and requires two things: the initiator’s name, which is assigned to the client, and the name or IP of the server (the target). Therefore, we need to change the initiator’s name on the system to match the configuration on the target server.
+
+To change the initiator’s name, click the button with the pencil icon, enter the name, and click **Change**.
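+
+For reference, the same initiator-name change and target login can be sketched from the CLI (assuming the open-iscsi tools are installed; the IQN and target address below are illustrative examples):
+
+```
+# Set the initiator name (example IQN; match what the target server expects)
+sudo sh -c 'echo "InitiatorName=iqn.2019-09.com.example:client01" > /etc/iscsi/initiatorname.iscsi'
+
+# Discover the targets offered by the server, then log in to them
+sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10
+sudo iscsiadm -m node --login
+```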
+
+To add the iSCSI target, click the add (**+**) button, enter the server’s address, the username and password, if required, and click **Next**. Select the target — verify the name, address, and port — and click **Add** to finalize the process.
+
+To remove a target, click the “checkmark” button. A red trashcan will appear beside the target(s). Click it to remove the target from the setup list.
+
+![][10]
+
+### NFS mount
+
+Cockpit even allows sysadmins to configure NFS shares within the UI. To add NFS shares, click the add (**+**) button in the NFS mounts section. Enter the server’s address, the path of the share on the server, and a location on the local machine to mount the share. Adjust the mount options if needed and click **Add** to view information about the share. We also have the options to unmount, edit, and remove the share. The example below demonstrates how the NFS share on SERVER02 is mounted to the _/mnt_ directory.
+
+![][11]
+
+### Conclusion
+
+As we’ve seen in this article, a lot of the storage-related tasks that would otherwise require many lengthy commands can easily be done within the web UI with just a few clicks. Cockpit is continuously evolving, and every new feature makes the project better and better. In the next article, we’ll explore the features and components on the networking side of things. 
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/
+
+作者:[Shaun Assam][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-storage-816x345.png
[2]: https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-storage-main-screen.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-filesystem.png
[5]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
[6]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-raid.gif
[7]: https://fedoramagazine.org/use-lvm-upgrade-fedora/
[8]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-lvm-volgroup.gif.gif
[9]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-lvm-pv_lv.gif.gif
[10]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-iscsi-storage.gif
[11]: https://fedoramagazine.org/wp-content/uploads/2019/08/cockpit-nfs-storage.gif
diff --git a/translated/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md b/translated/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md
new file mode 100644
index 0000000000..f27407bc51
--- /dev/null
+++ b/translated/tech/20180802 Top 5 CAD Software Available for Linux in 2018.md
@@ -0,0 +1,143 @@
+2018 年 Linux 上的 5 个最佳 CAD 软件
+======
+[计算机辅助设计(CAD)][1] 是许多工程流程中必不可少的部分。CAD 在建筑、汽车零部件设计、航天飞机研究、航空、桥梁施工、室内设计,甚至服装和珠宝设计等领域都有专业应用。
+
+像 SolidWorks 和 Autodesk AutoCAD 这样的一些专业级 CAD 软件并没有原生支持 Linux。因此,今天我们将看看 Linux 上排名靠前的 CAD 软件。让我们马上开始吧。
+
+### 适用于 Linux 的最佳 CAD 软件
+
+![CAD Software for Linux][2]
+
+在我们看 Linux 的 CAD 
软件列表前,你应该记住一件事:这里并非所有的应用程序都是开源软件。我们也列入了一些非自由开源的 CAD 软件,以帮助普通的 Linux 用户。
+
+这里提供了基于 Ubuntu 的 Linux 发行版的安装操作指南。你可以查看各自的网站来了解其它发行版的安装步骤。
+
+这个列表排名不分先后。排在第一位的 CAD 应用程序并不一定比排在第三位的好,以此类推。
+
+#### 1\. FreeCAD
+
+对于 3D 建模来说,FreeCAD 是一个极好的选择,它是自由(免费且自由)的开源软件。FreeCAD 主要面向机械工程和产品设计。FreeCAD 是跨平台的,可用于 Windows、Mac OS X 以及 Linux。
+
+![freecad][3]
+
+尽管 FreeCAD 已经是很多 Linux 用户的选择,但应该注意到,FreeCAD 仍然处于 0.17 版本,因此不适用于重要的生产部署。不过,最近它的开发加快了。
+
+[FreeCAD][4]
+
+FreeCAD 并不专注于 2D 直接绘图和有机形状的动画制作,但是它非常适合机械工程相关的设计。FreeCAD 的 0.15 版本可以在 Ubuntu 软件仓库中找到。你可以通过运行下面的命令安装它。
+```
+sudo apt install freecad
+
+```
+
+要获取最新的每日构建版本(目前是 0.17),打开一个终端(`ctrl+alt+t`),并逐条运行下面的命令。
+```
+sudo add-apt-repository ppa:freecad-maintainers/freecad-daily
+
+sudo apt update
+
+sudo apt install freecad-daily
+
+```
+
+#### 2\. LibreCAD
+
+LibreCAD 是一个自由开源的 2D CAD 解决方案。一般来说,CAD 往往是一种资源密集型任务,如果你的硬件配置相当普通,那么我建议你使用 LibreCAD,因为它对资源的占用真的很少。在几何作图方面,LibreCAD 是一个极好的选择。
+
+![librecad][5]
+
+作为一个 2D 工具,LibreCAD 很好,但是它无法进行 3D 建模和渲染。它有时可能不太稳定,但是它有一个可靠的自动保存功能,不会让你的工作白费。
+
+[LibreCAD][6]
+
+你可以通过运行下面的命令安装 LibreCAD。
+```
+sudo apt install librecad
+
+```
+
+#### 3\. OpenSCAD
+
+OpenSCAD 是一个自由的 3D CAD 软件。OpenSCAD 非常轻量和灵活。OpenSCAD 不是交互式的,你需要“编程”出模型,OpenSCAD 会解释这些代码来渲染出一个可视化模型。从某种意义上说,它是一个编译器。你不能直接绘制模型,而是要描述模型。
+
+![openscad][7]
+
+OpenSCAD 是这个列表中最复杂的工具,但是一旦你了解了它,它将带来令人愉快的使用体验。
+
+[OpenSCAD][8]
+
+你可以使用下面的命令来安装 OpenSCAD。
+```
+sudo apt-get install openscad
+
+```
+
+#### 4\. BRL-CAD
+
+BRL-CAD 是最古老的 CAD 工具之一。它也深受 Linux/UNIX 用户的喜爱,因为它符合 *nix 哲学的模块化和自由。
+
+![BRL-CAD rendering by Sean][9]
+
+BRL-CAD 始于 1979 年,并且至今仍在积极开发。当然,BRL-CAD 不是 AutoCAD,但是对于诸如热穿透和弹道穿透之类的输运研究来说,它仍然是一个极好的选择。BRL-CAD 基于 CSG(构造实体几何)而不是边界表示,在选择 BRL-CAD 时,你可能需要记住这一点。你可以从它的官方网站下载 BRL-CAD。
+
+[BRL-CAD][10]
+
+#### 5\. 
DraftSight(非开源)
+
+如果你习惯于在 AutoCAD 上工作,那么 DraftSight 将是完美的替代品。
+
+DraftSight 是一个在 Linux 上可用的极好的 CAD 工具。它有与 AutoCAD 相当类似的工作流,这使得迁移更容易。它甚至提供类似的外观和感觉。DraftSight 也兼容 AutoCAD 的 .dwg 文件格式。但是,DraftSight 是一个 2D CAD 软件,截至目前,它还不支持 3D CAD。
+
+![draftsight][11]
+
+尽管 DraftSight 是一款起价 149 美元的商业软件,但在 [DraftSight 网站][12]上可以获得一个免费版本。你可以下载 .deb 软件包,并在基于 Ubuntu 的发行版上安装它。要开始使用 DraftSight,你需要使用你的电子邮件来注册你的免费版本。
+
+[DraftSight][12]
+
+#### 荣誉提名
+
+ * 随着云计算技术的巨大发展,像 [OnShape][13] 这样的云 CAD 解决方案已经日渐流行。
+ * [SolveSpace][14] 是另一个值得一提的开源软件项目,它支持 3D 建模。
+ * 西门子 NX 是一个可用于 Windows、Mac OS 及 Linux 的工业级 CAD 解决方案,但是它贵得离谱,所以在这个列表中被略去了。
+ * 接下来是 [LeoCAD][15],这是一个使用乐高积木来构建模型的 CAD 软件。怎么使用它就看你自己了。
+
+
+
+#### 我对 Linux 上的 CAD 的看法
+
+尽管 Linux 上的游戏日渐流行,但我总是告诉我的铁杆游戏玩家朋友坚持使用 Windows。类似地,如果你是一名需要在课程中使用 CAD 的工科学生,我建议你使用学校规定的软件(AutoCAD、SolidEdge、Catia),这些软件通常只能在 Windows 上运行。
+
+对于高级专业人士来说,当我们谈及行业标准时,这些工具根本达不到标准。
+
+对于那些想在 WINE 中运行 AutoCAD 的人来说,尽管一些较旧版本的 AutoCAD 可以安装在 WINE 上,但它们根本无法正常工作,小故障和崩溃会严重破坏使用体验。
+
+话虽如此,我高度尊重上述列表中软件的开发者的工作,他们丰富了 FOSS 世界。很高兴看到像 FreeCAD 这样的软件在近些年加快了开发速度。
+
+好了,今天就到此为止。请使用下面的评论区与我们分享你的想法,不要忘记分享这篇文章。谢谢。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/cad-software-linux/
+
+作者:[Aquil Roshan][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[robsean](https://github.com/robsean)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://itsfoss.com/author/aquil/
+[1]:https://en.wikipedia.org/wiki/Computer-aided_design
+[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cad-software-linux.jpeg
+[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freecad.jpg
+[4]:https://www.freecadweb.org/
+[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/librecad.jpg
+[6]:https://librecad.org/ 
+[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/openscad.jpg
+[8]:http://www.openscad.org/
+[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/brlcad.jpg
+[10]:https://brlcad.org/
+[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/draftsight.jpg
+[12]:https://www.draftsight2018.com/
+[13]:https://www.onshape.com/
+[14]:http://solvespace.com/index.pl
+[15]:https://www.leocad.org/
diff --git a/translated/tech/20190903 5 open source speed-reading applications.md b/translated/tech/20190903 5 open source speed-reading applications.md
new file mode 100644
index 0000000000..8aa2911166
--- /dev/null
+++ b/translated/tech/20190903 5 open source speed-reading applications.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 open source speed-reading applications)
+[#]: via: (https://opensource.com/article/19/8/speed-reading-open-source)
+[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
+
+5 个开源的速读应用
+======
+使用这五个应用训练自己更快地阅读文本。
+![stack of books][1]
+
+英国散文家和政治家 [Joseph Addison][2] 曾经说过,“读书益智,运动益体。”如今,我们大多数人(如果不是全部)都是通过阅读计算机显示器、电视屏幕、移动设备、街道标志、报纸和杂志上的内容,以及在工作和学校中阅读文档来训练我们的大脑。
+
+鉴于我们每天都会接收大量的书面信息,通过做一些特定的练习来训练我们的大脑以便更快地阅读,似乎是有好处的。这些练习可以挑战我们的经典阅读习惯,并教会我们吸收更多的内容和数据。学习这些技能的目的不仅仅是浏览文本,因为没有理解的阅读就是浪费精力。目标是提高你的阅读速度,同时仍然达到高水平的理解。
+
+### 阅读和处理输入
+
+在深入探讨速读之前,让我们来看看阅读过程。根据法国眼科医生 Louis Emile Javal 的说法,阅读分为三个步骤:
+
+ 1. 注视
+ 2. 处理
+ 3. 
[扫视][3]
+
+在第一步中,我们确定文本中的注视点,这个点称为最佳识别点。在第二步中,我们在眼睛注视不动的同时获取(处理)新信息。最后,我们改变注视点的位置,这是一种称为“扫视”的操作,此时不获取任何新信息。
+
+在实践中,阅读较快的读者的主要特点是:注视时间短于平均值、扫视距离更长、重读更少。
+
+### 阅读练习
+
+阅读并不是人类天生的能力,因为在人类的历史上,它是一个相当新的发展。第一个书写系统是在大约 5000 年前创建的,这段时间还不足以让人类进化成阅读机器。因此,我们必须锻炼我们的阅读技巧,才能在这项基本的沟通任务中变得更加娴熟和高效。
+
+第一项练习是减少默读,默读也被称为无声语音,是一种在阅读时在内心发音的习惯。它会自然地减慢阅读速度,因为阅读速度被限制在语速以内。减少默读的关键是只读出其中一部分阅读的单词。一种方法是用其他任务来占据内部声音,例如嚼口香糖。
+
+第二项练习是减少回归,即重读。回归是一种懒惰的机制,因为我们的大脑知道可以随时重读任何材料,这会降低注意力。
+
+### 5 个训练大脑的开源应用
+
+有几个有趣的开源应用可用于锻炼你的阅读速度。
+
+一个是 [Gritz][4],它是一个开源文件阅读器,可以一次弹出一个单词,以减少回归。它适用于 Linux、Windows 和 MacOS,并在 GPL 许可证下发布,因此你可以随意使用它。
+
+其他选择包括 [Spray Speed-Reader][5],一个用 JavaScript 编写的开源速读应用;以及 [Sprits-it!][6],一个可以快速阅读网页的开源 Web 应用。
+
+对于 Android 用户,[Comfort Reader][7] 是一个开源的速读应用。它可以在 [F-droid][8] 和 [Google Play][9] 应用商店中找到。
+
+我最喜欢的应用是 [Speedread][10],它是一个简单的终端程序,可以在最佳阅读点逐词显示文本。要安装它,请在你的设备上克隆 GitHub 仓库,然后输入相应的命令,选择以喜好的每分钟单词数(WPM)来阅读文档。默认速率为 250 WPM。例如,要以 400 WPM 阅读 _your_text_file.txt_,你应该输入:
+
+```
+cat your_text_file.txt | ./speedread -w 400
+```
+
+下面是程序的运行界面:
+
+![Speedread demo][11]
+
+由于你可能不会只阅读[纯文本][12],因此可以使用 [Pandoc][13] 将文件从标记格式转换为文本格式。你还可以使用 Android 终端模拟器 [Termux][14] 在 Android 设备上运行 Speedread。
+
+### 其他方案
+
+对于开源社区来说,构建一个仅以提高阅读速度为目标的解决方案会是一个有趣的项目,它可以通过特定的练习来改善默读和重读等习惯。我相信这样的项目会非常有益,因为在当今信息丰富的环境中,提高阅读速度非常有价值。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/8/speed-reading-open-source
+
+作者:[Jaouhari Youssef][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jaouhari
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_stack_library_reading.jpg?itok=uulcS8Sw (stack of books)
+[2]: https://en.wikipedia.org/wiki/Joseph_Addison
+[3]: https://en.wikipedia.org/wiki/Saccade
+[4]: 
https://github.com/jeffkowalski/gritz +[5]: https://github.com/chaimpeck/spray +[6]: https://github.com/the-happy-hippo/sprits-it +[7]: https://github.com/mschlauch/comfortreader +[8]: https://f-droid.org/packages/com.mschlauch.comfortreader/ +[9]: https://play.google.com/store/apps/details?id=com.mschlauch.comfortreader +[10]: https://github.com/pasky/speedread +[11]: https://opensource.com/sites/default/files/uploads/speedread_demo.gif (Speedread demo) +[12]: https://plaintextproject.online/ +[13]: https://opensource.com/article/18/9/intro-pandoc +[14]: https://termux.com/