Merge pull request #7 from LCTT/master

update
This commit is contained in:
Morisun029 2019-10-25 20:33:34 +08:00 committed by GitHub
commit 4595f594cc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
42 changed files with 5521 additions and 880 deletions

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11499-1.html)
[#]: subject: (How writers can get work done better with Git)
[#]: via: (https://opensource.com/article/19/4/write-git)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/noreplyhttps://opensource.com/users/seth)
@ -12,7 +12,7 @@
> 如果你是一名写作者,你也能从使用 Git 中受益。在我们的系列文章中了解有关 Git 鲜为人知的用法。
![Writing Hand][1]
![](https://img.linux.net.cn/data/attachment/album/201910/24/222747ltajik2ymzmmttha.png)
[Git][2] 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。
@ -20,7 +20,7 @@
### 写作者的 Git
有些人写小说,也有人撰写学术论文、诗歌、剧本、技术手册或有关开源的文章。许多人都在做着各种写作。相同的是,如果你是一名写作者,或许能从使用 Git 中受益。尽管 Git 是著名的计算机程序员所使用的高度技术性工具,但它也是现代写作者的理想之选,本文将向你演示如何改变你的书写方式,以及为什么要这么做。
但是,在谈论 Git 之前,重要的是先谈谈“副本”(或者叫“内容”,对于数字时代而言)到底是什么,以及为什么它与你的交付*媒介*不同。这是 21 世纪,大多数写作者选择的工具是计算机。尽管计算机看似擅长将副本的编辑和布局等过程结合在一起,但写作者还是(重新)发现将内容与样式分开是一个好主意。这意味着你应该在计算机上像在打字机上而不是在文字处理器中进行书写。以计算机术语而言,这意味着以*纯文本*形式写作。
@ -30,13 +30,13 @@
你只需要逐字写下你的内容,而将交付工作留给发布者。即使你是自己发布,将字词作为写作作品的一种源代码也是一种更聪明、更有效的工作方式,因为在发布时,你可以使用相同的源(你的纯文本)生成适合你的目标输出(用于打印的 PDF、用于电子书的 EPUB、用于网站的 HTML 等)。
用纯文本编写不仅意味着你不必担心布局或文本样式,而且也不再需要专门的工具。无论是手机或平板电脑上的基本记事本应用程序、计算机附带的文本编辑器,还是从互联网上下载的免费编辑器,任何能够产生文本内容的工具对你而言都是有效的“文字处理器”。无论你身在何处或在做什么,几乎可以在任何设备上书写,并且所生成的文本可以与你的项目完美集成,而无需进行任何修改。
而且Git 专门用来管理纯文本。
### Atom 编辑器
当你以纯文本形式书写时,文字处理程序会显得过于庞大。使用文本编辑器更容易,因为文本编辑器不会尝试“有效地”重组输入内容。它使你可以将脑海中的单词输入到屏幕中,而不会受到干扰。更好的是,文本编辑器通常是围绕插件体系结构设计的,这样应用程序本身很基础(它用来编辑文本),但是你可以围绕它构建一个环境来满足你的各种需求。
[Atom][4] 编辑器就是这种设计理念的一个很好的例子。这是一个具有内置 Git 集成的跨平台文本编辑器。如果你不熟悉纯文本格式,也不熟悉 Git那么 Atom 是最简单的入门方法。
@ -64,15 +64,15 @@ Atom 当前没有在 BSD 上构建。但是,有很好的替代方法,例如
#### 快速指导
如果要使用纯文本和 Git,则需要适应你的编辑器。Atom 的用户界面可能比你习惯的更加动态。实际上,你可以将它视为 Firefox 或 Chrome,而不是文字处理程序,因为它具有可以根据需要打开或关闭的选项卡和面板,甚至还可以安装和配置附件。尝试全部掌握 Atom 如此之多的功能是不切实际的,但是你至少可以知道有什么功能。
当 Atom 打开时,它将显示一个欢迎屏幕。如果不出意外,此屏幕很好地介绍了 Atom 的选项卡式界面。你可以通过单击 Atom 窗口顶部选项卡上的“关闭”图标来关闭欢迎屏幕,并使用“文件 > 新建文件”创建一个新文件。
打开 Atom 时,它将显示一个欢迎屏幕。如果不出意外,此屏幕很好地介绍了 Atom 的选项卡式界面。你可以通过单击 Atom 窗口顶部选项卡上的“关闭”图标来关闭欢迎屏幕,并使用“文件 > 新建文件”创建一个新文件。
使用纯文本格式与使用文字处理程序有点不同,因此这里有一些技巧,以人可以连接的方式编写内容,并且 Git 和计算机可以解析,跟踪和转换。
使用纯文本格式与使用文字处理程序有点不同,因此这里有一些技巧,以人可以理解的方式编写内容,并且 Git 和计算机可以解析,跟踪和转换。
#### 用 Markdown 书写
如今,当人们谈论纯文本时,大多是指 Markdown。Markdown 与其说是一种格式,不如说是一种样式,这意味着它旨在为文本提供可预测的结构,以便计算机可以检测自然的模式并智能地转换文本。Markdown 有很多定义,但是最好的技术定义和备忘单在 [CommonMark 的网站][8]上。
```
# Chapter 1
@ -85,9 +85,9 @@ And it can even reference an image.
从示例中可以看出Markdown 读起来感觉不像代码,但可以将其视为代码。如果你遵循 CommonMark 定义的 Markdown 规范,那么一键就可以可靠地将 Markdown 的文字转换为 .docx、.epub、.html、MediaWiki、.odt、.pdf、.rtf 和各种其他的格式,而*不会*失去格式。
你可以认为 Markdown 有点像文字处理程序的样式。如果你曾经为出版社撰写过一套样式来控制章节标题和章节标题的样式,那基本上就是一回事,除了不是从下拉菜单中选择样式以外,你要给你的文字添加一些小记号。对于任何习惯“以文字交谈”的现代阅读者来说,这些表示法都是很自然的,但是在呈现文本时,它们会被精美的文本样式替换掉。实际上,这是文字处理程序在后台秘密进行的操作。文字处理器显示粗体文本,但是如果你可以看到使文本变为粗体的生成代码,则它与 Markdown 很像(实际上,它是更复杂的 XML。使用 Markdown 可以消除这种代码和样式之间的阻隔,一方面看起来更可怕,但另一方面,你可以在几乎所有可以生成文本的东西上书写 Markdown 而不会丢失任何格式信息。
你可以认为 Markdown 有点像文字处理程序的样式。如果你曾经为出版社撰写过一套样式来控制章节标题及其样式,那基本上就是一回事,除了不是从下拉菜单中选择样式以外,你要给你的文字添加一些小记号。对于任何习惯“以文字交谈”的现代阅读者来说,这些表示法都是很自然的,但是在呈现文本时,它们会被精美的文本样式替换掉。实际上,这是文字处理程序在后台秘密进行的操作。文字处理器显示粗体文本,但是如果你可以看到使文本变为粗体的生成代码,则它与 Markdown 很像(实际上,它是更复杂的 XML。使用 Markdown 可以消除这种代码和样式之间的阻隔,一方面看起来更可怕一些,但另一方面,你可以在几乎所有可以生成文本的东西上书写 Markdown 而不会丢失任何格式信息。
Markdown 文件流行d 文件扩展名是 .md。如果你使用的平台不知道 .md 文件是什么,则可以手动将扩展名与 Atom 关联,或者仅使用通用的 .txt 扩展名。文件扩展名不会更改文件的性质。它只会改变你的计算机决定如何处理它的方式。Atom 和某些平台足够聪明,可以知道该文件是纯文本格式,无论你给它以什么扩展名。
Markdown 文件流行的文件扩展名是 .md。如果你使用的平台不知道 .md 文件是什么,则可以手动将扩展名与 Atom 关联,或者仅使用通用的 .txt 扩展名。文件扩展名不会更改文件的性质,它只会改变你的计算机决定如何处理它的方式。Atom 和某些平台足够聪明,可以知道该文件是纯文本格式,无论你给它以什么扩展名。
#### 实时预览
@ -97,25 +97,25 @@ Atom 具有 “Markdown 预览” 插件,该插件可以向你显示正在编
要激活此预览窗格,请选择“包 > Markdown 预览 > 切换预览” 或按 `Ctrl + Shift + M`
此视图为你提供了两全其美的方法。无需承担为你的文本添加样式的负担就可以写作,而你也可以看到一个通用的示例外观,至少是以典型的数字化格式显示文本的外观。当然,关键是你无法控制文本的最终呈现方式,因此不要试图调整 Markdown 来强制以某种方式显示呈现的预览。
#### 每行一句话
你的高中写作老师不会看你的 Markdown。
一开始它那么自然但是在数字世界中保持每行一个句子更有意义。Markdown 忽略单个换行符(当你按下 Return 或 Enter 键时),并且只在单个空行之后才会创建一个新段落。
一开始这样写可能感觉不太自然,但是在数字世界中保持每行一个句子更有意义。Markdown 忽略单个换行符(当你按下 `Return` 或 `Enter` 键时),并且只在单个空行之后才会创建一个新段落。
![Writing in Atom][10]
每行写一个句子的好处是你的工作更容易跟踪。也就是说,如你在段落的开头更改了一个单词,那么如果更改仅限于一行而不是一个长的段落中的一个单词,那么 Atom、Git 或任何应用程序很容易以有意义的方式突出显示该更改。换句话说,对一个句子的更改只会影响该句子,而不会影响整个段落。
每行写一个句子的好处是你的工作更容易跟踪。也就是说,如果你在段落的开头更改了一个单词,当更改仅限于一行而不是一个长段落时,Atom、Git 或任何应用程序都很容易以有意义的方式突出显示该更改。换句话说,对一个句子的更改只会影响该句子,而不会影响整个段落。
你可能会想:“许多文字处理器也可以跟踪更改,它们可以突出显示已更改的单个单词。”但是这些修订跟踪器被绑定在该文字处理器的界面上,这意味着你必须先打开该文字处理器才能浏览修订。在纯文本工作流程中,你可以以纯文本形式查看修订,这意味着无论手头有什么,只要该设备可以处理纯文本(大多数都可以),就可以进行编辑或批准编辑。
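顺带一提,当这些文件在后文中被纳入 Git 管理之后,你就可以在任何一台有终端的设备上以纯文本形式查看按词高亮的修订,而无需打开任何特定的程序。下面是一个示意(文件名只是假设的例子):

```
$ git diff --word-diff chapter-1.md
```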
诚然写作者通常不会考虑行号但它对于计算机有用并且通常是一个很好的参考点。默认情况下Atom 为文本文档的行进行编号。按下 Enter 键或 Return 键后,一*行*就是一行。
诚然写作者通常不会考虑行号但它对于计算机有用并且通常是一个很好的参考点。默认情况下Atom 为文本文档的行进行编号。按下 `Enter` 键或 `Return` 键后,一*行*就是一行。
![Writing in Atom][11]
如果一行中有一个点而不是一个数字,则表示它是上一行折叠的一部分,因为它超出了你的屏幕。
如果(在 Atom 的)一行的行号中有一个点而不是一个数字,则表示它是上一行折叠的一部分,因为它超出了你的屏幕。
#### 主题
@ -127,7 +127,7 @@ Atom 具有 “Markdown 预览” 插件,该插件可以向你显示正在编
![Atom's themes][13]
要使用已安装的主题或根据喜好自定义主题,请导航至“设置”标签页中的“主题”类别。从下拉菜单中选择要使用的主题。更改会立即生效,因此你可以准确了解主题如何影响你的环境。
你也可以在“设置”标签的“编辑器”类别中更改工作字体。Atom 默认采用等宽字体,程序员通常首选这种字体。但是你可以使用系统上的任何字体,无论是衬线字体、无衬线字体、哥特式字体还是草书字体。无论你想整天盯着什么字体都行。
@ -139,19 +139,19 @@ Atom 具有 “Markdown 预览” 插件,该插件可以向你显示正在编
创建长文档时,我发现每个文件写一个章节比在一个文件中写整本书更有意义。此外,我不会以明显的语法(如 `chapter-1.md` 或 `1.example.md`)来命名我的章节,而是以章节标题或关键词(例如 `example.md`)命名。为了将来为自己提供有关如何编写本书的指导,我维护了一个名为 `toc.md`(用于“目录”)的文件,其中列出了各章的(当前)顺序。
我这样做是因为,无论我多么相信第 6 章都不可能出现在第 1 章之前,但在我完成整本书之前,几乎不大可能出现我不会交换一两个章节的顺序。我发现从一开始就保持动态变化可以帮助我避免重命名混乱,也可以帮助我避免僵化的结构。
我这样做是因为,无论我多么相信第 6 章都不可能出现在第 1 章之前,但在我完成整本书之前,几乎难以避免我会交换一两个章节的顺序。我发现从一开始就保持动态变化可以帮助我避免重命名混乱,也可以帮助我避免僵化的结构。
### 在 Atom 中使用 Git
每位写作者的共同点是两件事:他们为流传而写作,而他们的写作是一段旅程。你无需坐下来写作就完成最终稿件。顾名思义,你有一个初稿。该草稿会经过修订,你会仔细地将每个修订保存一式两份或三份,以防万一你的文件损坏了。最终,你得到了所谓的最终草案,但很有可能你有一天还会回到这份最终草案,要么恢复好的部分要么修改坏的部分。
每位写作者的共同点是两件事:他们为流传而写作,而他们的写作是一段旅程。你不能一坐下来写作就完成了最终稿件。顾名思义,你有一个初稿。该草稿会经过修订,你会仔细地将每个修订保存一式两份或三份的备份,以防万一你的文件损坏了。最终,你得到了所谓的最终草稿,但很有可能你有一天还会回到这份最终草稿,要么恢复好的部分,要么修改坏的部分。
Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom你就可以与 Git 的所有主要功能进行交互,跟踪和更新项目、回滚你不喜欢的更改、集成来自协作者的更改等等。最好的学习方法就是逐步学习,因此这是从写作项目开始到结束在 Atom 界面中使用 Git 的方法。
Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom你就可以与 Git 的所有主要功能进行交互,跟踪和更新项目、回滚你不喜欢的更改、集成来自协作者的更改等等。最好的学习方法就是逐步学习,因此这是在一个写作项目中从始至终在 Atom 界面中使用 Git 的方法。
第一件事:通过选择 “视图 > 切换 Git 标签页” 来显示 Git 面板。这将在 Atom 界面的右侧打开一个新标签页。现在没什么可看的,所以暂时保持打开状态就行。
#### 建立一个 Git 项目
你可以将 Git 视为它被绑定到文件夹。Git 目录之外的任何文件夹都不知道 Git而 Git 也不知道外面。Git 目录中的文件夹和文件将被忽略,直到你授予 Git 权限来跟踪它们为止。
你可以认为 Git 被绑定到一个文件夹。Git 目录之外的任何文件夹都不知道 Git,而 Git 也不知道外面。Git 目录中的文件夹和文件将被忽略,直到你授予 Git 权限来跟踪它们为止。
你可以通过在 Atom 中创建新的项目文件夹来创建 Git 项目。选择 “文件 > 添加项目文件夹”,然后在系统上创建一个新文件夹。你创建的文件夹将出现在 Atom 窗口的左侧“项目面板”中。
@ -159,11 +159,11 @@ Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom
右键单击你的新项目文件夹,然后选择“新建文件”,以在项目文件夹中创建一个新文件。如果你要导入文件到新项目中,请右键单击该文件夹,然后选择“在文件管理器中显示”,以在系统的文件查看器中打开该文件夹(Linux 上为 Dolphin 或 Nautilus,Mac 上为 Finder,Windows 上为 Explorer),然后拖放文件到你的项目文件夹。
在 Atom 中打开一个项目文件(你创建的空文件或导入的文件)后,单击 Git 标签中的 “<ruby>创建存储库<rt>Create Repository</rt></ruby>” 按钮。在弹出的对话框中,单击 “<ruby>初始化<rt>Init</rt></ruby>” 以将你的项目目录初始化为本地 Git 存储库。Git 会将 `.git` 目录(在系统的文件管理器中不可见,但在 Atom 中可见)添加到项目文件夹中。不要被这个愚弄了:`.git` 目录是 Git 管理的,而不是由你管理的,因此一般不要动它。但是在 Atom 中看到它可以很好地提醒你正在由 Git 管理的项目中工作。换句话说,当你看到 `.git` 目录时,就有了修订历史记录。
在你的空文件中,写一些东西。你是写作者,所以输入一些单词就行。你可以随意输入任何一组单词,但要记住上面的写作技巧。
`Ctrl + S` 保存文件,该文件将显示在 Git 标签的 “<ruby>未暂存的改变<rt>Unstaged Changes</rt></ruby>” 部分中。这意味着该文件存在于你的项目文件夹中,但尚未提交给 Git 管理。通过单击 Git 选项卡右上方的 “<ruby>暂存全部<rt>Stage All</rt></ruby>” 按钮,允许 Git 跟踪这些文件。如果你使用过带有修订历史记录的文字处理器,则可以将此步骤视为允许 Git记录更改。
`Ctrl + S` 保存文件,该文件将显示在 Git 标签的 “<ruby>未暂存的改变<rt>Unstaged Changes</rt></ruby>” 部分中。这意味着该文件存在于你的项目文件夹中,但尚未提交给 Git 管理。通过单击 Git 选项卡右上方的 “<ruby>暂存全部<rt>Stage All</rt></ruby>” 按钮,允许 Git 跟踪这些文件。如果你使用过带有修订历史记录的文字处理器,则可以将此步骤视为允许 Git 记录更改。
#### Git 提交
@ -171,7 +171,7 @@ Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom
Git 的<ruby>提交<rt>commit</rt></ruby>会将你的文件发送到 Git 的内部和永久存档中。如果你习惯于文字处理程序,这就类似于给一个修订版命名。要创建一个提交,请在 Git 选项卡底部的“<ruby>提交<rt>Commit</rt></ruby>”消息框中输入一些描述性文本。你可能会感到含糊不清或随意写点什么,但如果你想在将来知道进行修订的原因,那么输入一些有用的信息会更有用。
第一次提交时,必须创建一个<ruby>分支<rt>branch</rt></ruby>。Git 分支有点像另外一个空间,它允许你从一个时间轴切换到另一个时间轴,以进行你可能想要、也可能不想要永久保留的更改。如果最终喜欢该更改,则可以将一个实验分支合并到另一个实验分支,从而统一项目的不同版本。这是一个高级过程,不需要马上学会,但是你仍然需要一个活动分支,因此你必须为首次提交创建一个分支。
单击 Git 选项卡最底部的“<ruby>分支<rt>Branch</rt></ruby>”图标,以创建新的分支。
@ -185,7 +185,7 @@ Git 的<ruby>提交<rt>commit</rt></ruby>会将你的文件发送到 Git 的内
#### 历史记录和 Git 差异
一个自然而然的问题是你应该多久做一次提交。这并没有正确的答案。使用 `Ctrl + S` 保存文件提交到 Git 是两个单独的过程,因此你会一直做这两个过程。每当你觉得自己已经做了重要的事情或打算尝试一个可能要被干掉的疯狂的新想法时,你可能都会想要做个提交。
一个自然而然的问题是你应该多久做一次提交。这并没有正确的答案。使用 `Ctrl + S` 保存文件和提交到 Git 是两个单独的过程,因此你会一直做这两个过程。每当你觉得自己已经做了重要的事情,或打算尝试一个可能会被干掉的疯狂的新想法时,你可能都会想要做次提交。
要了解提交对工作流程的影响,请从测试文档中删除一些文本,然后在顶部和底部添加一些文本。再次提交。 这样做几次,直到你在 Git 标签的底部有了一小段历史记录,然后单击其中一个提交以在 Atom 中查看它。
@ -199,15 +199,15 @@ Git 的<ruby>提交<rt>commit</rt></ruby>会将你的文件发送到 Git 的内
#### 远程备份
使用 Git 的优点之一是,按照设计它是分布式的,这意味着你可以将工作提交到本地存储库,并将所做的更改推送到任意数量的服务器上进行备份。你还可以从这些服务器中拉取更改,以便你碰巧正在使用的任何设备始终具有最新更改。
为此,你必须在 Git 服务器上拥有一个帐户。有几种免费的托管服务,其中包括 GitHub这个公司开发了 Atom但奇怪的是 GitHub 不是开源的;而 GitLab 是开源的。相比私有,我更喜欢开源,在本示例中,我将使用 GitLab。
为此,你必须在 Git 服务器上拥有一个帐户。有几种免费的托管服务,其中包括 GitHub,这个公司开发了 Atom,但奇怪的是 GitHub 不是开源的;而 GitLab 是开源的。相比私有软件,我更喜欢开源,在本示例中,我将使用 GitLab。
如果你还没有 GitLab 帐户,请注册一个帐户并开始一个新项目。项目名称不必与 Atom 中的项目文件夹匹配,但是如果匹配,则可能更有意义。你可以将项目保留为私有,在这种情况下,只有你和任何一个你给予了明确权限的人可以访问它,或者,如果你希望该项目可供任何互联网上偶然发现它的人使用,则可以将其公开。
不要将 README 文件添加到项目中。
创建项目后,这个文件将为你提供有关如何设置存储库的说明。如果你决定在终端中或通过单独的 GUI 使用 Git这是非常有用的信息但是 Atom 的工作流程则有所不同。
创建项目后,将为你提供有关如何设置存储库的说明。如果你决定在终端中或通过单独的 GUI 使用 Git这是非常有用的信息但是 Atom 的工作流程则有所不同。
单击 GitLab 界面右上方的 “<ruby>克隆<rt>Clone</rt></ruby>” 按钮。这显示了访问 Git 存储库必须使用的地址。复制 “SSH” 地址(而不是 “https” 地址)。
@ -224,7 +224,7 @@ Git 的<ruby>提交<rt>commit</rt></ruby>会将你的文件发送到 Git 的内
在 Git 标签的底部,出现了一个新按钮,标记为 “<ruby>提取<rt>Fetch</rt></ruby>”。由于你的服务器是全新的服务器,因此没有可供你提取的数据,因此请右键单击该按钮,然后选择“<ruby>推送<rt>Push</rt></ruby>”。这会将你的更改推送到你的 GitLab 帐户,现在你的项目已备份到 Git 服务器上。
你可以在每次提交后将更改推送到服务器。它提供了即的异地备份,并且由于数据量通常很少,因此它几乎与本地保存一样快。
你可以在每次提交后将更改推送到服务器。它提供了即时的异地备份,并且由于数据量通常很少,因此它几乎与本地保存一样快。
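顺带一提,如果你好奇 Atom 在这些按钮背后做了什么,下面是在终端中大致等价的一组 Git 命令(仅为示意,分支名、远程地址和提交信息都是假设的占位符):

```
$ git init                                            # 相当于“创建存储库”和“初始化”
$ git checkout -b draft                               # 创建并切换到一个新分支(示例名)
$ git add .                                           # 相当于“暂存全部”
$ git commit -m "描述本次修订的信息"                  # 提交
$ git remote add origin git@gitlab.com:<用户名>/<项目名>.git   # 添加远程仓库(占位符)
$ git push -u origin draft                            # 相当于“推送”
```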
### 撰写而 Git
@ -237,7 +237,7 @@ via: https://opensource.com/article/19/4/write-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11498-1.html)
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
DevOps 专业人员如何成为网络安全拥护者
======
打破信息孤岛,成为网络安全的拥护者,这对你、对你的职业、对你的公司都会有所帮助。
![A lock on the side of a building][1]
> 打破信息孤岛,成为网络安全的拥护者,这对你、对你的职业、对你的公司都会有所帮助。
![](https://img.linux.net.cn/data/attachment/album/201910/24/202520u09xw2vm4w2jm0mx.jpg)
安全是 DevOps 中一个被误解了的部分,一些人认为它不在 DevOps 的范围内,而另一些人认为它太过重要(并且被忽视),建议改为使用 DevSecOps。无论你同意哪一方的观点,网络安全都会影响到我们每一个人,这是很明显的事实。
每年, [黑客行为的统计数据][3] 都会更加令人震惊。例如, 每 39 秒就有一次黑客行为发生,这可能会导致你为公司写的记录、身份和专有项目被盗。你的安全团队可能需要花上几个月(也可能是永远找不到)才能发现这次黑客行为背后是谁,目的是什么,人在哪,什么时候黑进来的。
每年[黑客行为的统计数据][3] 都会更加令人震惊。例如,每 39 秒就有一次黑客行为发生,这可能会导致你为公司写的记录、身份和专有项目被盗。你的安全团队可能需要花上几个月(也可能是永远找不到)才能发现这次黑客行为背后是谁,目的是什么,人在哪,什么时候黑进来的。
专家面对这些棘手问题应该如何是好?呐,我说,现在是时候成为网络安全的拥护者,变为解决方案的一部分了。
### 孤岛势力范围的战争
@ -28,52 +30,44 @@ DevOps 专业人员如何成为网络安全拥护者
为了打破这些孤岛并结束势力战争,我在每个安全团队中都选了至少一个人来交谈,了解我们组织日常安全运营里的来龙去脉。我开始做这件事是出于好奇,但我持续做这件事是因为它总是能带给我一些有价值的、新的观点。例如,我了解到,对于每个因为失败的安全性而被停止的部署,安全团队都在疯狂地尝试修复 10 个他们看见的其他问题。他们反应的莽撞和尖锐是因为他们必须在有限的时间里修复这些问题,不然这些问题就会变成一个大问题。
考虑到发现、识别和撤销已完成操作所需的大量知识,或者指出 DevOps 团队正在做什么-没有背景信息-然后复制并测试它。所有的这些通常都要由人手配备非常不足的安全团队完成。
想一想,发现、识别和撤销已完成的操作,或者在没有背景信息的情况下弄清楚 DevOps 团队正在做什么,然后复制并测试它,这需要多少知识。而所有的这些通常都要由人手配备非常不足的安全团队完成。
这就是你的安全团队的日常生活,并且你的 DevOps 团队看不到这些。ITSEC 的日常工作意味着超时加班和过度劳累,以确保公司、公司的团队、团队里工作的所有人能够安全地工作。
### 成为安全拥护者的方法
这些是你成为你的安全团队的拥护者之后可以帮到它们的。这意味着-对于你做的所有操作-你必须仔细、认真地查看所有能够让其他人登录的方式,以及他们能够从中获得什么。
这些是你成为你的安全团队的拥护者之后可以帮到它们的。这意味着,对于你做的所有操作,你必须仔细、认真地查看所有能够让其他人登录的方式,以及他们能够从中获得什么。
帮助你的安全团队就是在帮助你自己。将工具添加到你的工作流程里以此将你知道的要干的活和他们知道的要干的活结合到一起。从小事入手例如阅读公共漏洞披露CVEs),并将扫描模块添加到你的 CI/CD 流程里。对于你写的所有代码,都会有一个开源扫描工具,添加小型开源工具(例如下面列出来的)在长远看来是可以让项目更好的。
帮助你的安全团队就是在帮助你自己。将工具添加到你的工作流程里以此将你知道的要干的活和他们知道的要干的活结合到一起。从小事入手例如阅读公共漏洞披露CVE并将扫描模块添加到你的 CI/CD 流程里。对于你写的所有代码,都会有一个开源扫描工具,添加小型开源工具(例如下面列出来的)在长远看来是可以让项目更好的。
**容器扫描工具:**
**容器扫描工具:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**代码扫描工具:**
**代码扫描工具:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes 安全工具:**
**Kubernetes 安全工具:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
### 保持你的 DevOps 态度
如果你的工作角色是和 DevOps 相关的,那么学习新技术和如何运用这项新技术创造新事物就是你工作的一部分。安全也是一样。下面列出了我自己在 DevOps 安全方面保持最新的方法。
* 每周阅读一篇与你的工作方向相关的安全文章。
* 每周查看 [CVE][15] 官方网站,了解出现了什么新漏洞.
* 每周查看 [CVE][15] 官方网站,了解出现了什么新漏洞。
* 尝试做一次黑客马拉松。一些公司每个月都要这样做一次;如果你觉得还不够、想了解更多,可以访问 Beginner Hack 1.0 网站。
* 每年至少一次和你的安全团队的成员一起参加安全会议,从他们的角度来看事情。
### 成为拥护者是为了变得更好
你应该成为你的安全团队的拥护者,下面是我们列出来的几个理由。首先是增长你的知识,帮助你的职业发展。第二是帮助其他的团队,培养新的关系,打破对你的组织有害的孤岛。在你的整个组织内建立关系有很多好处,包括为团队之间的沟通树立典范,并鼓励人们一起工作。你同样能促进在整个组织中分享知识,并给每个人提供一个在安全方面更好的内部合作的新契机。
@ -87,11 +81,11 @@ via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops

View File

@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11494-1.html)
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew)
变异测试:基于故障的试验
======
> 基于 .NET 的 xUnit.net 测试框架,开发一款自动猫门的逻辑,让门在白天开放,夜间锁定。
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
在本系列的[第一篇文章][2]中,我演示了如何使用设计的故障来确保代码中的预期结果。在第二篇文章中,我将继续开发示例项目:一款自动猫门,该门在白天开放,夜间锁定。
在此提醒一下,你可以按照[此处的说明][3]使用 .NET 的 xUnit.net 测试框架。
### 关于白天时间
回想一下,测试驱动开发(TDD)围绕着大量的单元测试。
第一篇文章中实现了满足 `Given7pmReturnNighttime` 单元测试期望的逻辑。但还没有完,现在,你需要描述当前时间大于 7 点时期望发生的结果。这是新的单元测试,称为 `Given7amReturnDaylight`
```
[Fact]
public void Given7amReturnDaylight()
{
    var expected = "Daylight";
    var actual = dayOrNightUtility.GetDayOrNight();
    Assert.Equal(expected, actual);
}
```
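保存后,在项目根目录重新运行测试套件(这里假设你按照文中提到的说明,用 .NET 的 xUnit.net 框架搭建了项目):

```
$ dotnet test
```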
现在,新的单元测试失败了(越早失败越好!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
期望接收到字符串值是 `Daylight`,但实际接收到的值是 `Nighttime`
### 分析失败的测试用例
经过仔细检查,代码本身似乎已经出现问题。 事实证明,`GetDayOrNight` 方法的实现是不可测试的!
看看我们面临的核心挑战:
1. `GetDayOrNight` 依赖隐藏输入。
`dayOrNight` 的值取决于隐藏输入(它从内置系统时钟中获取一天的时间值)。
2. `GetDayOrNight` 包含非确定性行为。
从系统时钟中获取到的时间值是不确定的,因为该时间取决于你运行代码的时间点,而这是不可预测的。
3. `GetDayOrNight` API 的质量差。
该 API 与具体的数据源(系统 `DateTime`)紧密耦合。
4. `GetDayOrNight` 违反了单一责任原则。
该方法的实现同时使用和处理信息。优良的做法是,一个方法只应负责执行一项职责。
5. `GetDayOrNight` 有多个更改原因。
可以想象内部时间源可能会更改的情况。同样,很容易想象处理逻辑也将改变。这些变化的不同原因必须相互隔离。
6. 当(我们)尝试了解 `GetDayOrNight` 行为时,会发现它的 API 签名不足。
最理想的做法就是通过简单的查看 API 的签名,就能了解 API 预期的行为类型。
7. `GetDayOrNight` 取决于全局共享可变状态。
要不惜一切代价避免共享的可变状态!
8. 即使在阅读源代码之后,也无法预测 `GetDayOrNight` 方法的行为。
这是一个严重的问题。通过阅读源代码,应该始终非常清晰,系统一旦开始运行,便可以预测出其行为。
### 失败背后的原则
每当你遇到工程问题时,建议使用久经考验的<ruby>分而治之<rt>divide and conquer</rt></ruby>策略。在这种情况下,遵循<ruby>关注点分离<rt>separation of concerns</rt></ruby>的原则是一种可行的方法。
> 关注点分离SoC是一种用于将计算机程序分为不同模块的设计原理以便每个模块都可以解决一个关注点。关注点是影响计算机程序代码的一组信息。关注点可以和要优化代码的硬件的细节一样概括也可以和要实例化的类的名称一样具体。完美体现 SoC 的程序称为模块化程序。
>
> [出处][4]
`GetDayOrNight` 方法应仅与确定日期和时间值表示白天还是夜晚有关。它不应该与寻找该值的来源有关。该问题应留给调用客户端。
必须将这个问题留给调用客户端,以获取当前时间。这种方法符合另一个有价值的工程原理——<ruby>控制反转<rt>inversion of control</rt></ruby>。Martin Fowler [在这里][5]详细探讨了这一概念。
> 框架的一个重要特征是用户定义的用于定制框架的方法通常来自于框架本身,而不是从用户的应用程序代码调用来的。该框架通常在协调和排序应用程序活动中扮演主程序的角色。控制权的这种反转使框架有能力充当可扩展的框架。用户提供的方法为框架中的特定应用程序量身制定泛化算法。
>
> -- [Ralph Johnson and Brian Foote][6]
### 重构测试用例
因此,代码需要重构。摆脱对内部时钟的依赖(`DateTime` 系统实用程序):
```
DateTime time = new DateTime();
```
删除上述代码(在你的文件中应该是第 7 行)。通过给 `GetDayOrNight` 方法添加一个 `DateTime` 类型的输入参数 `time`,进一步重构代码。
这是重构的类 `DayOrNightUtility.cs`
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight(DateTime time) {
            string dayOrNight = "Nighttime";
            if(time.Hour >= 7 && time.Hour < 19) {
                dayOrNight = "Daylight";
            }
            return dayOrNight;
        }
    }
}
```
重构代码需要更改单元测试。 需要准备 `nightHour``dayHour` 的测试数据,并将这些值传到`GetDayOrNight` 方法中。 以下是重构的单元测试:
```
using System;
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
        DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
        DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight(nightHour);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void Given7amReturnDaylight()
        {
            var expected = "Daylight";
            var actual = dayOrNightUtility.GetDayOrNight(dayHour);
            Assert.Equal(expected, actual);
        }
    }
}
```
### 经验教训
在继续开发这种简单的场景之前,请先回顾复习一下本次练习中所学到的东西。
运行无法测试的代码很容易在不经意间制造陷阱。从表面上看,这样的代码似乎可以正常工作。但是,遵循测试驱动开发(TDD)的实践(首先描述期望结果,然后才描述实现),暴露了代码中的严重问题。
这表明 TDD 是确保代码不会太凌乱的理想方法。TDD 指出了一些问题区域,例如缺乏单一责任和存在隐藏输入。此外,TDD 有助于删除不确定性代码,并用行为明确的完全可测试代码替换它。
最后TDD 帮助交付易于阅读、逻辑易于遵循的代码。
在本系列的下一篇文章中,我将演示如何使用在本练习中创建的逻辑来实现功能代码,以及如何进行进一步的测试使其变得更好。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://linux.cn/article-11483-1.html
[3]: https://linux.cn/article-11468-1.html
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
[7]: http://www.google.com/search?q=new+msdn.microsoft.com

View File

@ -1,34 +1,34 @@
[#]: collector: "lujun9972"
[#]: translator: "way-ww"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11491-1.html"
[#]: subject: "How to Run the Top Command in Batch Mode"
[#]: via: "https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
如何在批处理模式下运行 Top 命令
如何在批处理模式下运行 top 命令
======
**[Top 命令][1]** 是每个人都在使用的用于 **[监控 Linux 系统性能][2]** 的最好的命令。
![](https://img.linux.net.cn/data/attachment/album/201910/22/235420ylswdescv5ddffit.jpg)
除了很少的几个操作, 你可能已经知道 top 命令的绝大部分操作, 如果我没错的话, 批处理模式就是其中之一。
[top 命令][1] 是每个人都在使用的用于 [监控 Linux 系统性能][2] 的最好的命令。你可能已经知道 `top` 命令的绝大部分操作,除了很少的几个操作,如果我没错的话,批处理模式就是其中之一。
大部分的脚本编写者和开发人员都知道这个, 因为这个操作主要就是用来编写脚本。
大部分的脚本编写者和开发人员都知道这个,因为这个操作主要就是用来编写脚本。
如果你不了解这个, 不用担心,我们将在这里介绍它。
如果你不了解这个,不用担心,我们将在这里介绍它。
### 什么是 Top 命令的批处理模式
### 什么是 top 命令的批处理模式
批处理模式允许你将 top 命令的输出发送至其他程序或者文件中。
批处理模式允许你将 `top` 命令的输出发送至其他程序或者文件中。
在这个模式中, top 命令将不会接收输入并且持续运行直到迭代次数达到你用 “-n” 选项指定的次数为止。
在这个模式中,`top` 命令将不会接收输入并且持续运行,直到迭代次数达到你用 `-n` 选项指定的次数为止。
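例如,下面的命令(仅作示意)让 `top` 在批处理模式下每 5 秒采样一次,共采样 3 次后自动退出,并把结果写入文件,适合在脚本中捕获性能快照:

```
# top -b -n 3 -d 5 > top-snapshot.txt
```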
如果你想解决 Linux 服务器上的任何性能问题, 你需要正确的 **[理解 top 命令的输出][3]**
如果你想解决 Linux 服务器上的任何性能问题,你需要正确的 [理解 top 命令的输出][3]。
### 1) 如何在批处理模式下运行 top 命令
默认地, top 命令按照 CPU 的使用率来排序输出结果, 所以当你在批处理模式中运行以下命令时, 它会执行同样的操作并打印前 35 行。
默认地,`top` 命令按照 CPU 的使用率来排序输出结果,所以当你在批处理模式中运行以下命令时,它会执行同样的操作并打印前 35 行:
```
# top -bc | head -35
@ -72,7 +72,7 @@ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
### 2) 如何在批处理模式下运行 top 命令并按内存使用率排序结果
在批处理模式中运行以下命令按内存使用率对结果进行排序
在批处理模式中运行以下命令按内存使用率对结果进行排序
```
# top -bc -o +%MEM | head -n 20
@ -99,19 +99,17 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2475984 avail Mem
8632 nobody 20 0 256844 25744 2216 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start
```
**上面命令的详细信息:**
* **-b :** 批处理模式选项
* **-c :** 打印运行中的进程的绝对路径
* **-o :** 指定进行排序的字段
* **head :** 输出文件的第一部分
* **-n :** 打印前 n 行
上面命令的详细信息:
* `-b`:批处理模式选项
* `-c`:打印运行中的进程的绝对路径
* `-o`:指定进行排序的字段
* `head`:输出文件的第一部分
* `-n`:打印前 n 行
### 3) 如何在批处理模式下运行 top 命令并按照指定的用户进程对结果进行排序
如果你想要按照指定用户进程对结果进行排序请运行以下命令
如果你想要按照指定用户进程对结果进行排序请运行以下命令
```
# top -bc -u mysql | head -n 10
@ -128,13 +126,11 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2649412 avail Mem
### 4) 如何在批处理模式下运行 top 命令并按照处理时间进行排序
在批处理模式中使用以下 top 命令按照处理时间对结果进行排序。 这展示了任务从启动以来已使用的总 CPU 时间
但是如果你想要检查一个进程在 Linux 上运行了多长时间请看接下来的文章。
* **[检查 Linux 中进程运行时间的五种方法][4]**
在批处理模式中使用以下 `top` 命令按照处理时间对结果进行排序。这展示了任务从启动以来已使用的总 CPU 时间。
但是如果你想要检查一个进程在 Linux 上运行了多长时间请看接下来的文章:
* [检查 Linux 中进程运行时间的五种方法][4]
```
# top -bc -o TIME+ | head -n 20
@ -163,7 +159,7 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2440332 avail Mem
### 5) 如何在批处理模式下运行 top 命令并将结果保存到文件中
如果出于解决问题的目的, 你想要和别人分享 top 命令的输出, 请使用以下命令重定向输出到文件中
如果出于解决问题的目的,你想要和别人分享 `top` 命令的输出,请使用以下命令重定向输出到文件中
```
# top -bc | head -35 > top-report.txt
@ -209,9 +205,9 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2659084 avail Mem
### 如何按照指定字段对结果进行排序
top 命令的最新版本中, 按下 **“f”** 键进入字段管理界面。
`top` 命令的最新版本中, 按下 `f` 键进入字段管理界面。
要使用新字段进行排序, 请使用 **“up/down”** 箭头选择正确的选项, 然后再按下 **“s”** 键进行排序。 最后按 **“q”** 键退出此窗口。
要使用新字段进行排序, 请使用 `up`/`down` 箭头选择正确的选项,然后再按下 `s` 键进行排序。最后按 `q` 键退出此窗口。
```
Fields Management for window 1:Def, whose current sort field is %CPU
@ -269,9 +265,9 @@ Fields Management for window 1:Def, whose current sort field is %CPU
nsUSER = USER namespace Inode
```
top 命令的旧版本, 请按 **“shift+f”** 或 **“shift+o”** 键进入字段管理界面进行排序。
`top` 命令的旧版本,请按 `shift+f``shift+o` 键进入字段管理界面进行排序。
要使用新字段进行排序, 请选择相应的排序字段字母, 然后按下 **“Enter”** 排序。
要使用新字段进行排序,请选择相应的排序字段字母, 然后按下回车键排序。
```
Current Sort Field: N for window 1:Def
@ -323,7 +319,7 @@ via: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11492-1.html)
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)
你需要知道的 DevSecOps 流程及工具
======
> DevSecOps 对 DevOps 进行了改进,以确保安全性仍然是该过程的一个重要部分。
![](https://img.linux.net.cn/data/attachment/album/201910/23/002010fvzh282e8ghhdzpk.jpg)
到目前为止,DevOps 在 IT 世界中已广为人知,但其并非完美无缺。试想一下,你在一个项目的现代应用程序交付中实施了所有 DevOps 工程实践。你已经到达开发流程的末尾,但是渗透测试团队(内部或外部)检测到安全漏洞并提出了报告。现在,你必须重新启动所有流程,并要求开发人员修复该漏洞。
在基于 DevOps 的软件开发生命周期(SDLC)系统中,这并不繁琐,但它确实会浪费时间并影响交付进度。如果从 SDLC 初期就已经集成了安全性,那么你可能已经跟踪到了该故障,并在开发流程中就消除了它。但是,如上述情形那样,将安全性推到开发流程的最后将导致更长的开发生命周期。
这就是引入 DevSecOps 的原因,它以自动化的方式巩固了整个软件交付周期。
在现代 DevOps 方法中,组织广泛使用容器托管应用程序,我们看到 [Kubernetes][2] 和 [Istio][3] 使用的较多。但是这些工具都有其自身的漏洞。例如,云原生计算基金会(CNCF)最近完成了一项 [kubernetes 安全审计][4],发现了几个问题。DevOps 开发流程中使用的所有工具在流程运行时都需要进行安全检查,DevSecOps 会推动管理员去监视工具的存储库以获取升级和补丁。
### 什么是 DevSecOps?
与 DevOps 一样,DevSecOps 是开发人员和 IT 运营团队在开发和部署软件应用程序时所遵循的一种思维方式或文化。它将主动和自动化的安全审计以及渗透测试集成到敏捷应用程序开发中。
要使用 [DevSecOps][5],你需要:
* 从 SDLC 开始就引入安全性概念,以最大程度地减少软件代码中的漏洞。
* 确保每个人(包括开发人员和 IT 运营团队)共同承担在其任务中遵循安全实践的责任。
* 在 DevOps 工作流程开始时集成安全控件、工具和流程。这些将在软件交付的每个阶段启用自动安全检查。
DevOps 一直致力于在开发和发布过程中包括安全性以及质量保证(QA)、数据库管理和其他所有方面。然而,DevSecOps 是该过程的一个演进,以确保安全永远不会被遗忘,成为该过程的一个重要部分。
### 了解 DevSecOps 流程
典型的 DevOps 流程有不同的阶段;典型的 SDLC 流程包括计划、编码、构建、测试、发布和部署等阶段。在 DevSecOps 中,每个阶段都会应用特定的安全检查。
* **计划**:执行安全性分析并创建测试计划,以确定在何处、如何以及何时进行测试的方案。
* **编码**:部署整理工具和 Git 控件以保护密码和 API 密钥。
* **构建**:在构建执行代码时,请结合使用静态应用程序安全测试(SAST)工具来跟踪代码中的缺陷,然后再部署到生产环境中。这些工具针对特定的编程语言。
* **测试**:在运行时,使用动态应用程序安全测试(DAST)工具来测试你的应用程序。这些工具可以检测与用户身份验证、授权、SQL 注入以及 API 相关端点有关的错误。
* **发布**:在发布应用程序之前,请使用安全分析工具来进行全面的渗透测试和漏洞扫描。
* **部署**:在运行时完成上述测试后,将安全的版本发送到生产中以进行最终部署。
### DevSecOps 工具
SDLC 的每个阶段都有可用的工具。有些是商业产品,但大多数是开源的。在我的下一篇文章中,我将更多地讨论在流程的不同阶段使用的工具。
随着基于现代 IT 基础设施的企业安全威胁的复杂性增加,DevSecOps 将发挥更加关键的作用。然而,DevSecOps 流程将需要随着时间的推移而改进,而不是仅仅依靠同时实施所有安全更改即可。这将消除回溯或应用交付失败的可能性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools
作者:[Sagar Nangare][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11490-1.html)
[#]: subject: (Bash Script to Delete Files/Folders Older Than “X” Days in Linux)
[#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@ -10,29 +10,21 @@
在 Linux 中使用 Bash 脚本删除早于 “X” 天的文件/文件夹
======
**[磁盘使用率][1]**监控工具能够在达到给定阈值时提醒我们。
[磁盘使用率][1] 监控工具能够在达到给定阈值时提醒我们。但它们无法自行解决 [磁盘使用率][2] 问题。需要手动干预才能解决该问题。
但它们无法自行解决**[磁盘使用率][2]**问题
如果你想完全自动化此类操作,你会做什么?是的,可以使用 bash 脚本来完成。
需要手动干预才能解决该问题。
如果你想完全自动化此类操作,你会做什么。
是的,可以使用 bash 脚本来完成。
该脚本可防止来自**[监控工具][3]**的警报,因为我们会在填满磁盘空间之前删除旧的日志文件。
该脚本可防止来自 [监控工具][3] 的警报,因为我们会在填满磁盘空间之前删除旧的日志文件。
我们过去做了很多 shell 脚本。如果要查看,请进入下面的链接。
* **[如何使用 shell 脚本自动化日常活动?][4]**
* [如何使用 shell 脚本自动化日常活动?][4]
我在本文中添加了两个 bash 脚本,它们有助于清除旧日志。
### 1在 Linux 中删除早于 “X” 天的文件夹的 Bash 脚本
我们有一个名为 **“/var/log/app/”** 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。
我们有一个名为 `/var/log/app/` 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。
```
$ ls -lh /var/log/app/
@ -56,7 +48,7 @@ drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15
该脚本将删除早于 10 天的文件夹,并通过邮件发送文件夹列表。
你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
你可以根据需要修改 `-mtime X` 的值。另外,请替换你的电子邮箱,而不是用我们的。
```
# /opt/script/delete-old-folders.sh
@ -81,7 +73,7 @@ rm $MESSAGE /tmp/folder.out
fi
```
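在正式启用该脚本之前,你也可以先单独运行一条 find 命令,预览哪些目录会被删除(仅为示意,路径与天数请按需调整):

```
# find /var/log/app/ -maxdepth 1 -type d -mtime +10
```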
**“delete-old-folders.sh”** 设置可执行权限。
`delete-old-folders.sh` 设置可执行权限。
```
# chmod +x /opt/script/delete-old-folders.sh
@ -109,15 +101,13 @@ Oct 15 /var/log/app/app_log.15
### 2在 Linux 中删除早于 “X” 天的文件的 Bash 脚本
我们有一个名为 **“/var/log/apache/”** 的文件夹其中包含15天的日志我们将删除 10 天前的文件。
我们有一个名为 `/var/log/apache/` 的文件夹,其中包含 15 天的日志,我们将删除 10 天前的文件。
以下文章与该主题相关,因此你可能有兴趣阅读。
* **[如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]**
* **[如何在 Linux 中查找最近修改的文件/文件夹][7]**
* **[如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]**
* [如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]
* [如何在 Linux 中查找最近修改的文件/文件夹][7]
* [如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]
```
# ls -lh /var/log/apache/
@ -141,7 +131,7 @@ Oct 15 /var/log/app/app_log.15
该脚本将删除 10 天前的文件并通过邮件发送文件夹列表。
你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
你可以根据需要修改 `-mtime X` 的值。另外,请替换你的电子邮箱,而不是用我们的。
```
# /opt/script/delete-old-files.sh
@ -166,7 +156,7 @@ rm $MESSAGE /tmp/file.out
fi
```
**“delete-old-files.sh”** 设置可执行权限。
`delete-old-files.sh` 设置可执行权限。
```
# chmod +x /opt/script/delete-old-files.sh
@ -199,7 +189,7 @@ via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-d
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11495-1.html)
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux sudo 漏洞可能导致未经授权的特权访问
======
![](https://img.linux.net.cn/data/attachment/album/201910/23/173934huyi6siys2u33w9z.png)
> 在 Linux 中利用新发现的 sudo 漏洞可以使某些用户以 root 身份运行命令,尽管对此还有所限制。
[sudo][1] 命令中最近发现了一个严重漏洞,如果被利用,普通用户可以以 root 身份运行命令,即使在 `/etc/sudoers` 文件中明确禁止了该用户这样做。
`sudo` 更新到版本 1.8.28 应该可以解决该问题,因此建议 Linux 管理员尽快这样做。
如何利用此漏洞取决于 `/etc/sudoers` 中授予的特定权限。例如,一条规则允许用户以除了 root 用户之外的任何用户身份来编辑文件,这实际上将允许该用户也以 root 用户身份来编辑文件。在这种情况下,该漏洞可能会导致非常严重的问题。
用户要能够利用此漏洞,需要在 `/etc/sudoers` 中为**用户**分配特权,以使该用户可以以其他用户身份运行命令,并且该漏洞仅限于以这种方式分配的命令特权。
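要查看当前用户在 `/etc/sudoers` 中被授予了哪些规则,从而判断自己是否属于受影响的配置,可以运行(仅为示意):

```
$ sudo -l
```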
此问题影响 1.8.28 之前的版本。要检查你的 `sudo` 版本,请使用以下命令:
```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```
该漏洞已在 CVE 数据库中分配了编号 [CVE-2019-14287][4]。它的风险是,任何被指定能以任意用户身份运行某个命令的用户,即使被明确禁止以 root 身份运行,也都能逃脱该限制。
下面这些行让 `jdoe` 能够以除了 root 用户之外的其他身份使用 `vi` 编辑文件(`!root` 表示“非 root”),同时 `nemo` 有权以除了 root 以外的任何用户身份运行 `id` 命令:
```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```
但是,由于存在漏洞,这两个用户都能够绕过限制:前者能以 root 身份编辑文件,后者能以 root 身份运行 `id` 命令。
攻击者可以通过指定用户 ID 为 `-1``4294967295` 来以 root 身份运行命令。
```
sudo -u#-1 id -u
```
或者
```
sudo -u#4294967295 id -u
```
响应为 `1` 表明该命令以 root 身份运行(显示 root 的用户 ID
苹果信息安全团队的 Joe Vennix 找到并分析该问题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11497-1.html)
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
每周开源点评Kubernetes 网络、OpenStack Train 以及更多的行业趋势
======
> 开源社区和行业趋势的每周总览。
![Person standing in front of a giant computer screen with numbers, data][1]
作为我在具有开源开发模型的企业软件公司担任高级产品营销经理的角色的一部分,我定期为产品营销人员、经理和其他影响者发布有关开源社区、市场和行业趋势的更新。以下是该更新中我和他们最喜欢的五篇文章。
### OpenStack Train 中最令人兴奋的功能
- [文章地址][2]
> 考虑到 Train 版本必须提供的所有技术优势([你可以在此处查看版本亮点][3]),你可能会对 Red Hat 认为这些将使我们的电信和企业客户受益的顶级功能及其用例感到好奇。以下是我们对该版本最感兴奋的功能的概述。
**影响**:OpenStack 对我来说就像 Shia LaBeouf:它在几年前达到了炒作的顶峰,然后继续产出了好的作品。Train 版本看起来又是一次令人难以置信的创新发布。
### 以 Ansible 原生的方式构建 Kubernetes 操作器
- [文章地址][4]
> 操作器简化了 Kubernetes 上复杂应用程序的管理。它们通常是用 Go 语言编写的,并且需要懂得 Kubernetes 内部的专业知识。但是还有另一种进入门槛较低的选择。Ansible 是操作器 SDK 中的一等公民。使用 Ansible 可以释放应用程序工程师的精力,最大限度地利用时间来自动化和协调你的应用程序,并使用一种简单的语言在新的和现有的平台上进行操作。在这里我们可以看到如何做。
**影响**:这就像你发现可以用搅拌器和冷冻香蕉制作出不错的冰淇淋一样:Ansible(通常被认为很容易掌握)可以使你比你想象的更容易地做一些令人印象深刻的操作器魔术。
### Kubernetes 网络:幕后花絮
- [文章地址][5]
> 尽管围绕该主题有很多很好的资源(链接在[这里][6]),但我找不到一个示例,可以将所有的点与网络工程师喜欢和讨厌的命令输出连接起来,以显示背后实际发生的情况。因此,我决定从许多不同的来源收集这些信息,以期帮助你更好地了解事物之间的联系。
**影响**:这是一篇对复杂主题(带有图片)阐述得很好的作品。保证可以使 Kubernetes 网络的混乱程度降低 10%。
### 保护容器供应链
- [文章地址][7]
> 随着容器、软件即服务和函数即服务的出现,人们开始着眼于在使用现有服务、函数和容器映像的过程中寻求新的价值。[Red Hat][8] 的容器首席产品经理 Scott McCarty 表示关注这个重点既有优点也有缺点。“它使我们能够集中精力编写满足我们需求的新应用程序代码同时将对基础架构的关注转移给其他人身上”McCarty 说,“容器处于一个最佳位置,提供了足够的控制,而卸去了许多繁琐的基础架构工作。”但是,容器也会带来与安全性相关的劣势。
**影响**:我在一个由大约十位安全人员组成的小组中,可以肯定地说,整天要考虑软件安全性需要一定的倾向。当你长时间凝视深渊时,它也凝视着你。如果你不是如此倾向的软件开发人员,请听取 Scott 的建议并确保你的供应商考虑安全。
### 15 岁的 Fedora为何 Matthew Miller 看到 Linux 发行版的光明前景
- [文章链接][9]
> 在 TechRepublic 的一个大范围采访中,Fedora 项目负责人 Matthew Miller 讨论了过去的经验教训、软件容器的普遍采用和竞争性标准、Fedora 的潜在变化以及包括 systemd 在内的热门话题。
**影响**:我喜欢 Fedora 项目的原因是它的清晰度;该项目知道它代表什么。有像 Matt 这样的人,正是它前景光明的原因。
*我希望你喜欢这份上周让我印象深刻的列表,并在下周一回来了解更多的开放源码社区、市场和行业趋势。*
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.redhat.com/en/blog/look-most-exciting-features-openstack-train
[3]: https://releases.openstack.org/train/highlights.html
[4]: https://www.cncf.io/webinars/building-kubernetes-operators-in-an-ansible-native-way/
[5]: https://itnext.io/kubernetes-networking-behind-the-scenes-39a1ab1792bb
[6]: https://github.com/nleiva/kubernetes-networking-links
[7]: https://www.devprojournal.com/technology-trends/open-source/securing-the-container-supply-chain/
[8]: https://www.redhat.com/en
[9]: https://www.techrepublic.com/article/fedora-at-15-why-matthew-miller-sees-a-bright-future-for-the-linux-distribution/

View File

@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11502-1.html)
[#]: subject: (Pylint: Making your Python code consistent)
[#]: via: (https://opensource.com/article/19/10/python-pylint-introduction)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Pylint让你的 Python 代码保持一致
======
> 当你想要争论代码复杂性时,Pylint 是你的朋友。
![OpenStack source code \(Python\) in VIM][1]
Pylint 是更高层级的 Python 样式强制程序。而 [flake8][2] 和 [black][3] 检查的是“本地”样式:换行位置、注释的格式、发现注释掉的代码或日志格式中的错误做法之类的问题。
默认情况下,Pylint 非常激进。它将对每样东西都提供严厉的意见,从检查是否实际实现了声明的接口,到重构重复代码的可能性,这对新用户来说可能会显得过多。一种温和地将其引入项目或团队的方法是先关闭*所有*检查器,然后逐个启用检查器。如果你已经在使用 flake8、black 和 [mypy][4],这尤其有用:Pylint 有相当多的检查器在功能上与它们重叠。
但是,Pylint 独有之处之一是能够强制执行更高级别的问题:例如,函数的行数或者类中方法的数量。
这些数字可能因项目而异,并且可能取决于开发团队的偏好。但是,一旦团队就参数达成一致,使用自动工具*强制化*这些参数非常有用。这是 Pylint 闪耀的地方。
### 配置 Pylint
要以空配置开始,请将 `.pylintrc` 设置为
```
[MESSAGES CONTROL]
disable=all
```
这将禁用所有 Pylint 消息。由于其中许多是冗余的,这是有道理的。在 Pylint 中,`message` 是一种特定的警告。
你可以通过运行 `pylint` 来确认所有消息都已关闭:
```
$ pylint <my package>
```
通常,向 `pylint` 命令行添加参数并不是一个好主意:配置 `pylint` 的最佳位置是 `.pylintrc`。为了使它做*一些*有用的事,我们需要启用一些消息。
要启用消息,在 `.pylintrc` 中的 `[MESSAGES CONTROL]` 下添加
```
enable=<message>,
...
```
对于看起来有用的“消息”(Pylint 称之为不同类型的警告),我最喜欢的包括 `too-many-lines`、`too-many-arguments` 和 `too-many-branches`。所有这些会限制模块或函数的复杂性,并且无需进行人工操作即可客观地进行代码复杂度测量。
*检查器*是*消息*的来源:每条消息只属于一个检查器。许多最有用的消息都在[设计检查器][5]下。默认数字通常都不错,但要调整最大值也很简单:我们可以在 `.pylintrc` 中添加一个名为 `DESIGN` 的段。
```
[DESIGN]
max-args=7
max-locals=15
```
另一个有用的消息来源是“重构”检查器。我已启用的一些最喜欢的消息有 `consider-using-dict-comprehension`、`stop-iteration-return`(它会查找正确的停止迭代方式应是 `return` 却使用了 `raise StopIteration` 的迭代器)和 `chained-comparison`,它将建议使用如 `1 <= x < 5` 这样的语法,而不是不太明显的 `1 <= x && x < 5`。
最后是一个在性能方面消耗很大的检查器,但它非常有用,就是 `similarities`。它会查找不同部分代码之间的复制粘贴,来强制执行“不要重复自己”(DRY)原则。它只启用一条消息:`duplicate-code`。默认的“最小相似行数”设置为 4。可以使用 `.pylintrc` 将其设置为不同的值。
```
[SIMILARITIES]
min-similarity-lines=3
```
### Pylint 使代码评审变得简单
如果你厌倦了在代码评审中反复指出一个类太复杂,或者两个不同的函数基本相同,请将 Pylint 添加到你的[持续集成][6]配置中,这样关于项目复杂性准则的争论只需要进行一次就行了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/python-pylint-introduction
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_2.jpg?itok=4fza48WU (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/19/5/python-flake8
[3]: https://opensource.com/article/19/5/python-black
[4]: https://opensource.com/article/19/5/python-mypy
[5]: https://pylint.readthedocs.io/en/latest/technical_reference/features.html#design-checker
[6]: https://opensource.com/business/15/7/six-continuous-integration-tools

View File

@ -0,0 +1,183 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11503-1.html)
[#]: subject: (How to Get the Size of a Directory in Linux)
[#]: via: (https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何获取 Linux 中的目录大小
======
你应该已经注意到,在 Linux 中使用 [ls 命令][1] 列出的目录内容中,目录的大小仅显示 4KB。这个大小正确吗?如果不正确,那它代表什么?又该如何获取 Linux 中的目录或文件夹大小?这是一个默认的大小,是用来在磁盘上存储目录元数据所占的空间。
Linux 上有一些应用程序可以 [获取目录的实际大小][2]。其中,磁盘使用率(`du`)命令已被 Linux 管理员广泛使用。
我将向您展示如何使用各种选项获取文件夹大小。
### 什么是 du 命令?
[du 命令][3] 表示 <ruby>磁盘使用率<rt>Disk Usage</rt></ruby>。这是一个标准的 Unix 程序,用于估计当前工作目录中的文件空间使用情况。
它使用递归方式总结磁盘使用情况,以获取目录及其子目录的大小。
如同我说的那样, 使用 `ls` 命令时,目录大小仅显示 4KB。参见下面的输出。
```
$ ls -lh | grep ^d
drwxr-xr-x 3 daygeek daygeek 4.0K Aug 2 13:57 Bank_Details
drwxr-xr-x 2 daygeek daygeek 4.0K Mar 15 2019 daygeek
drwxr-xr-x 6 daygeek daygeek 4.0K Feb 16 2019 drive-2daygeek
drwxr-xr-x 13 daygeek daygeek 4.0K Jan 6 2019 drive-mageshm
drwxr-xr-x 15 daygeek daygeek 4.0K Sep 29 21:32 Thanu_Photos
```
### 1) 在 Linux 上如何只获取父目录的大小
使用以下 `du` 命令格式获取给定目录的总大小。在该示例中,我们将得到 `/home/daygeek/Documents` 目录的总大小。
```
$ du -hs /home/daygeek/Documents
$ du -h --max-depth=0 /home/daygeek/Documents/
20G /home/daygeek/Documents
```
详细说明:
* `du` 这是一个命令
* `-h` 以易读的格式显示大小 (例如 1K 234M 2G)
* `-s` 仅显示每个参数的总数
* `--max-depth=N` 目录的打印深度
### 2) 在 Linux 上如何获取每个目录的大小
使用以下 `du` 命令格式获取每个目录(包括子目录)的总大小。
在该示例中,我们将获得每个 `/home/daygeek/Documents` 目录及其子目录的总大小。
```
$ du -h /home/daygeek/Documents/ | sort -rh | head -20
20G /home/daygeek/Documents/
9.6G /home/daygeek/Documents/drive-2daygeek
6.3G /home/daygeek/Documents/Thanu_Photos
5.3G /home/daygeek/Documents/Thanu_Photos/Camera
5.3G /home/daygeek/Documents/drive-2daygeek/Thanu-videos
3.2G /home/daygeek/Documents/drive-mageshm
2.3G /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
2.2G /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month
916M /home/daygeek/Documents/drive-mageshm/Tanisha
454M /home/daygeek/Documents/drive-mageshm/2g-backup
415M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
300M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
288M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
226M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
219M /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
213M /home/daygeek/Documents/drive-mageshm/photos
163M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
161M /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
154M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
```
### 3) 在 Linux 上如何获取每个目录的摘要
使用如下 `du` 命令格式仅获取每个目录的摘要。
```
$ du -hs /home/daygeek/Documents/* | sort -rh | head -10
9.6G /home/daygeek/Documents/drive-2daygeek
6.3G /home/daygeek/Documents/Thanu_Photos
3.2G /home/daygeek/Documents/drive-mageshm
756K /home/daygeek/Documents/Bank_Details
272K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
172K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
164K /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
132K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
112K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
96K /home/daygeek/Documents/distro-info.xlsx
```
### 4) 在 Linux 上如何获取每个目录的不含子目录的大小
使用如下 `du` 命令格式来展示每个目录的总大小,不包括子目录。
```
$ du -hS /home/daygeek/Documents/ | sort -rh | head -20
5.3G /home/daygeek/Documents/Thanu_Photos/Camera
5.3G /home/daygeek/Documents/drive-2daygeek/Thanu-videos
2.3G /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
1.5G /home/daygeek/Documents/drive-mageshm
831M /home/daygeek/Documents/drive-mageshm/Tanisha
454M /home/daygeek/Documents/drive-mageshm/2g-backup
300M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
288M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
253M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
226M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
219M /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
213M /home/daygeek/Documents/drive-mageshm/photos
163M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
154M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
127M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2016
100M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2016
94M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2017
92M /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
90M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2017
```
### 5) 在 Linux 上如何仅获取一级子目录的大小
如果要获取 Linux 上给定目录的一级子目录(包括其子目录)的大小,请使用以下命令格式。
```
$ du -h --max-depth=1 /home/daygeek/Documents/
3.2G /home/daygeek/Documents/drive-mageshm
4.0K /home/daygeek/Documents/daygeek
756K /home/daygeek/Documents/Bank_Details
9.6G /home/daygeek/Documents/drive-2daygeek
6.3G /home/daygeek/Documents/Thanu_Photos
20G /home/daygeek/Documents/
```
### 6) 如何在 du 命令输出中获得总计
如果要在 `du` 命令输出中获得总计,请使用以下 `du` 命令格式。
```
$ du -hsc /home/daygeek/Documents/* | sort -rh | head -10
20G total
9.6G /home/daygeek/Documents/drive-2daygeek
6.3G /home/daygeek/Documents/Thanu_Photos
3.2G /home/daygeek/Documents/drive-mageshm
756K /home/daygeek/Documents/Bank_Details
272K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
172K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
164K /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
132K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
112K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-unix-ls-command-display-directory-contents/
[2]: https://www.2daygeek.com/how-to-get-find-size-of-directory-folder-linux/
[3]: https://www.2daygeek.com/linux-check-disk-usage-files-directories-size-du-command/

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MX Linux 19 Released With Debian 10.1 Buster & Other Improvements)
[#]: via: (https://itsfoss.com/mx-linux-19/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
MX Linux 19 Released With Debian 10.1 Buster & Other Improvements
======
MX Linux 18 has been one of my top recommendations for the [best Linux distributions][1], especially when considering distros other than Ubuntu.
It is based on Debian 9.6 Stretch, which offered an incredibly fast and smooth experience.
Now, as a major upgrade to that, MX Linux 19 brings a lot of major improvements and changes. Here, we shall take a look at the key highlights.
### New features in MX Linux 19
[Subscribe to our YouTube channel for more Linux videos][2]
#### Debian 10 Buster
This deserves a separate mention as Debian 10 is indeed a major upgrade from Debian 9.6 Stretch on which MX Linux 18 was based on.
In case you're curious about what has changed with Debian 10 Buster, we suggest checking out our article on the [new features of Debian 10 Buster][3].
#### Xfce Desktop 4.14
![MX Linux 19][4]
[Xfce 4.14][5] happens to be the latest offering from Xfce development team. Personally, Im not a fan of Xfce desktop environment but it screams fast performance when you get to use it on a Linux distro (especially on MX Linux 19).
Interestingly, we also have a quick guide to help you [customize Xfce][6] on your system.
#### Updated Packages & Latest Debian Kernel 4.19
Along with updated packages for [GIMP][7], MESA, Firefox, and so on, it also comes baked in with the latest kernel 4.19 available for Debian Buster.
#### Updated MX-Apps
If you've used MX Linux before, you might know that it comes pre-installed with useful MX-Apps that help you get more things done quickly.
Apps like MX-installer and MX-packageinstaller have been significantly improved.
In addition to these two, all other MX-tools have been updated here and there to fix bugs, add new translations (or simply to improve the user experience).
#### Other Improvements
Considering it is a major upgrade, there are obviously a lot more under-the-hood changes than highlighted (including the latest antiX live system updates).
You can check out more details on their [official announcement post][8]. You may also watch this video from the developers explaining all the new stuff in MX Linux 19:
### Getting MX Linux 19
Even if you are using MX Linux 18 versions right now, you [cannot upgrade][9] to MX Linux 19. You need to go for a clean install like everyone else.
You can download MX Linux 19 from this page:
[Download MX Linux 19][10]
**Wrapping Up**
With MX Linux 18, I had a problem using my WiFi adapter due to a driver issue, which I resolved through the [forum][11]. It seems that it still hasn't been fixed with MX Linux 19. So, you might want to take a look at my [forum post][11] if you face the same issue after installing MX Linux 19.
If youve been using MX Linux 18, this definitely seems to be an impressive upgrade.
Have you tried it yet? What are your thoughts on the new MX Linux 19 release? Let me know what you think in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mx-linux-19/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://itsfoss.com/debian-10-buster/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/mx-linux-19.jpg?ssl=1
[5]: https://xfce.org/about/news
[6]: https://itsfoss.com/customize-xfce/
[7]: https://itsfoss.com/gimp-2-10-release/
[8]: https://mxlinux.org/blog/mx-19-patito-feo-released/
[9]: https://mxlinux.org/migration/
[10]: https://mxlinux.org/download-links/
[11]: https://forum.mxlinux.org/viewtopic.php?t=52201

View File

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware on AWS gets an on-premises option)
[#]: via: (https://www.networkworld.com/article/3446796/vmware-on-aws-gets-an-on-premises-option.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
VMware on AWS gets an on-premises option
======
Amazon Relational Database Service on VMware automates database provisioning for customers running VMware vSphere 6.5 or later, and it supports Microsoft SQL Server, PostgreSQL, and MySQL.
VMware has taken another step to integrate its virtual kingdom with Amazon Web Services' world with an [on-premise service][1] that will let customers automate database provisioning and management. 
The package, [Amazon Relational Database Service][2] (RDS) on VMware is available now for customers running VMware vSphere 6.5 or later and supports Microsoft SQL Server, PostgreSQL, and MySQL. Other DBs will be supported in the future, the companies said.
The RDS lets customers run native RDS Database instances on a vSphere platform and manage those instances from the AWS Management Console in the cloud. It automates database provisioning, operating-system and database patching, backups, point-in-time restore and compute scaling, as well as database-instance health management, VMware said.
“With the service, customers such as software developers and database administrators get native access to the Amazon Relational Database Service using their familiar AWS Management Console, CLI, and RDS APIs,” Chris Wolf, vice president and CTO, global field and industry at VMware, wrote in a [blog][6] about the service. “Operations teams can quickly stand up an RDS instance anywhere they run vSphere, and manage it using all of their existing tools and processes.”
Wolf said the service should greatly simplify managing databases linked to its flagship vSphere system. 
“Managing databases on vSphere or natively has always been a tedious exercise that steals the focus of highly skilled database administrators,” Wolf stated. “VMware customers will now be able to expand the benefits of automation and standardization of their database workloads inside of vSphere and focus more of their time and energy on improving their applications for their customers.”
The RDS is just the part of the enterprise data center/cloud integration work VMware and AWS have been up to in the past year.
In August [VMware said it added VMware HCX][7] capabilities to enable push-button migration and interconnectivity between VMware Cloud on AWS Software-Defined Data Centers running in different AWS Regions. It has also added new Elastic vSAN support to bolster storage scaling.
Once applications are migrated to the cloud, customers can extend their capabilities  through the integration of native AWS services. In the future, through technology such as Bitfusion and partnerships with other vendors such as NVIDIA, customers will be able to enrich existing applications and power new enterprise applications.
VMware and NVIDIA also announced their intent to deliver accelerated GPU services for VMware Cloud on AWS.  These services will let customers migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video-processing applications, VMware said.
And last November [AWS tied in VMware][8] to its on-premises Outposts development, which comes in two versions. The first, VMware Cloud on AWS Outposts, lets customers  use the same VMware control plane and APIs they currently deploy. The other is an AWS-native variant that lets customers use the same APIs and control plane they use to run in the AWS cloud, but on premises, according to AWS.
Outposts can be upgraded with the latest hardware and next-generation instances to run all native AWS and VMware applications, [AWS stated][9]. A second version, VMware Cloud on AWS Outposts, lets customers use a VMware control plane and APIs to run the hybrid environment.
The idea with Outposts is that customers can use the same programming interface, same APIs, same console and CLI they use on the AWS cloud for on-premises applications, develop and maintain a single code base, and use the same deployment tools in the AWS cloud and on premises, AWS wrote.
VMware isnt the only vendor cozying up to AWS. Cisco has done a variety of integration work with the cloud service provider as well.  In [April Cisco released Cloud ACI for AWS][10] to let users configure inter-site connectivity, define policies and monitor the health of network infrastructure across hybrid environments, Cisco said. The AWS service utilizes the Cisco Cloud APIC [Application Policy Infrastructure Controller] to provide connectivity, policy translation and enhanced visibility of workloads in the public cloud, Cisco said.
“This solution brings a suite of capabilities to extend your on-premises data center into true multi-cloud architectures, helping to drive policy and operational consistency, independent of where your applications or data reside. [It] uses the native AWS constructs for policy translation and gives end-to-end visibility into the customer's multi-cloud workloads and connectivity,” Cisco said.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446796/vmware-on-aws-gets-an-on-premises-option.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://aws.amazon.com/blogs/aws/now-available-amazon-relational-database-service-rds-on-vmware/
[2]: https://blogs.vmware.com/vsphere/2019/10/how-amazon-rds-on-vmware-works.html
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE19718&utm_content=sidebar (HPE Synergy For Dummies)
[6]: https://cloud.vmware.com/community/2019/10/16/announcing-general-availability-amazon-rds-vmware/
[7]: https://www.networkworld.com/article/3434397/vmware-fortifies-its-hybrid-cloud-portfolio-with-management-automation-aws-and-dell-offerings.html
[8]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[9]: https://aws.amazon.com/outposts/
[10]: https://www.networkworld.com/article/3388679/cisco-taps-into-aws-for-data-center-cloud-applications.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enterprises find new uses for mainframes: blockchain and containerized apps)
[#]: via: (https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Enterprises find new uses for mainframes: blockchain and containerized apps
======
Blockchain and containerized microservices can benefit from the mainframe's integrated security and massive parallelization capabilities.
Thinkstock
News flash: Mainframes still aren't dead.
On the contrary, mainframe use is increasing, and not to run COBOL, either. Mainframes are being eyed for modern technologies including blockchain and containers.
A survey of 153 IT decision makers found that 50% of organizations will continue with the mainframe and increase its use over the next two years, while just 5% plan to decrease or remove mainframe activity. The survey was conducted by Forrester Research and commissioned by Ensono, a hybrid IT services provider, and Wipro Limited, a global IT consulting services company.
**READ MORE:** [Data center workloads become more complex despite promises to the contrary][2]
That kind of commitment to the mainframe is a bit of a surprise, given the trend to reduce or eliminate the on-premises data center footprint and move to the cloud. However, enterprises are now taking a hybrid approach to their infrastructure, migrating some applications to the cloud while keeping the most business-critical applications on-premises and on mainframes.
Forrester's research found mainframes continue to be considered a critical piece of infrastructure for the modern business, and not solely to run old technologies. Of course, traditional enterprise applications and workloads remain firmly on the mainframe, with 48% of ERP apps, 45% of finance and accounting apps, 44% of HR management apps, and 43% of ECM apps staying on mainframes.
But that's not all. Among survey respondents, 25% said that mobile sites and applications were being put into the mainframe, and 27% said they're running new blockchain initiatives and containerized applications. Blockchain and containerized applications benefit from the integrated security and massive parallelization inherent in a mainframe, Forrester said in its report.
"We believe this research challenges popular opinion that mainframe is for legacy," said Brian Klingbeil, executive vice president of technology and strategy at Ensono, in a statement. "Mainframe modernization is giving enterprises not only the ability to continue to run their legacy applications, but also allows them to embrace new technologies such as containerized microservices, blockchain and mobile applications."
Wipro's Kiran Desai, senior vice president and global head of cloud and infrastructure services, added that enterprises should adopt two strategies to take full advantage of mainframes. The first is to refactor applications to take advantage of cloud, while the second is to adopt DevOps to modernize mainframes.
**Learn more about mixing cloud and on-premises workloads**
* [5 times when cloud repatriation makes sense][3]
* [Network monitoring in the hybrid cloud/multi-cloud era][4]
* [Data center workloads become more complex][2]
* [The benefits of mixing private and public cloud services][5]
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
[3]: https://www.networkworld.com/article/3388032/5-times-when-cloud-repatriation-makes-sense.html
[4]: https://www.networkworld.com/article/3398482/network-monitoring-in-the-hybrid-cloudmulti-cloud-era.html
[5]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tokalabs Software Defined Labs automates configuration of lab test-beds)
[#]: via: (https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Tokalabs Software Defined Labs automates configuration of lab test-beds
======
The primary challenge of running a test lab is the amount of time it takes to provision the test beds within the lab. This software defined lab platform automates the setup and configuration process so that tests can be accelerated.
7Postman / Getty Images
Network environments have become so complex that companies such as systems integrators, equipment manufacturers and enterprise organizations feel compelled to test their configurations and equipment in lab environments before deployment. Performance test labs are used extensively for quality, proof of concept, customer support, and technical sales initiatives. Labs are the perfect place to see how well something performs before it's put into a production environment.
The primary challenge of running a test lab is the amount of time it takes to provision the test environments. A network lab infrastructure might include switches, routers, servers, [virtual machines][1] running on various server clusters, security services, cloud resources, software and so on. It takes considerable time to wire the configurations, physically build the desired test beds, log in to each individual device and load the proper software configurations. Quite often, lab staffers spend more time on setup than they do on conducting actual tests.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
This is a problem that the networking company Allied Telesis was having in building test beds for its own development engineers. The company developed an application for internal use that would ease the setup and reconfiguration problem. The equipment could be physically cabled once and then configured and controlled centrally through software. The application worked so well that Allied Telesis spun it off for others to use, and this is the origin of [Tokalabs Software Defined Labs][3] (SDL) technology.
Tokalabs provides a platform that enables engineers to manage a lab-network infrastructure and create sandboxes or topologies that can be used for R&D, product development and quality testing, customer support, sales demos, competitive benchmarking, driving proof of concept efforts, etc. There's an automation sequencer built into the platform that allows users to automate test cases, sales demos, troubleshooting methods, image upgrades and the like.
The Tokalabs SDL controller is a virtual appliance that can be imported into any virtualization environment. Once installed, the customer can access the controller's UI using a web browser. The controller has an auto-discovery mechanism that inventories everything within a specified range of IP addresses, including cloud resources.
Tokalabs probes the addresses to figure out what ports are open on them, what management types are supported, and the vendor information of the devices. This results in an inventory of hundreds of devices that are discovered by the SDL controller.
On the hardware side, lab engineers only need to cable and configure their lab devices once, which eliminates the cumbersome setup and tear down processes. These devices are abstracted and managed centrally through the SDL controller, which maintains a centralized networking fabric. Lab engineers have full visibility of every physical and virtual device and every public and [private cloud][5] instance within their domain.
Engineers can use the Tokalabs SDL controller to dynamically create and reserve test-bed resources and then save them as a template for future use. Engineers also can automate and schedule test executions, and the controller will release the resources once the tests are done. The controller's codeless automation feature means users don't need to know how to write scripts to orchestrate and automate a pretty comprehensive configuration and test scenario. They can use the controller to automate sequences without writing code or instruct the controller to execute external scripts developed by an engineer.
The automation is helpful to set up a specific configuration quickly. For example, a customer-support engineer might need to replicate a scenario that one of its customers has in order to troubleshoot an issue. Using the controllers automation feature, devices can be configured and loaded with specific firmware quickly to ease the setup process.
Tokalabs logs everything that transpires through its controller, so a lab administrator has oversight into how the equipment is being used or what types of tests are being created and executed. This helps with resource capacity planning, to ensure that there is enough equipment without having devices sit idle for too long.
One leader in cybersecurity became an early adopter of Tokalabs. This vendor has a test lab to conduct comparative benchmark numbers with competitors' products in order to close large deals and to confirm their product strengths and performance numbers for marketing materials.
Prior to using the Tokalabs SDL controller, engineering teams would physically cable the topologies, configure the devices and execute various benchmark tests. Then they would tear down that configuration and start all over again for every set of devices and firmware revisions.
Given that this is a multi-billion-dollar equipment manufacturer, there are a lot of new product releases and updates to existing products. That means there's a heck of a lot of work for the engineers in the lab to test each product and compare it to competitors' offerings. They can't really afford the time spent configuring rather than testing, so they turned to Tokalabs technology to manage the lab infrastructure and to automate the configurations and scheduling of test executions. They chose this solution largely for the ease of setup and use.
Now, each engineer can create hundreds of reusable templates, thus eliminating the repetitive work of creating test beds, and also automate test scripts using the Tokalabs automation sequencer. Additionally, all their existing test scripts are available to use through the SDL controller. This has helped the team reduce its backlog and keep up with the product release cycles.
Beyond this use case for comparative benchmark tests, some of the other uses for Tokalabs SDL include:
* Creating a portal for others to use lab resources; for example, for training purposes or for customers to test network environments prior to purchasing them
* Doing sales demonstrations and customer PoCs in order to showcase a feature, an application, or even an entire configuration
* Automating bringing up virtualized environments
Tokalabs claims to work closely with its customers to tailor the Software Defined Labs platform to specific use cases and customer needs.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://tokalabs.com/
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (“Making software liquid: a DevOps company founders journey from OSS community to billion-dollar darling”)
[#]: via: (https://opensourceforu.com/2019/10/making-software-liquid-a-devops-company-founders-journey-from-oss-community-to-billion-dollar-darling/)
[#]: author: (Editor Team https://opensourceforu.com/author/editor/)
“Making software liquid: a DevOps company founder's journey from OSS community to billion-dollar darling”
======
![][1]
_JFrog claims to make software development easier and faster, and enable firms to reduce their development costs. To understand the basis of this promise, **Rahul Chopra, editorial director, EFY Group**, spoke to **Fred Simon, co-founder and chief architect, JFrog** and here's what he discovered…_
**Q. How would you explain JFrog's solutions to a senior business decision maker?**
**A.** It's a fair question, as we have been and continue to be a developer-driven company that makes tools and solutions for developers. Originally, we were a team of engineers working on Java and had the task of solving some package management pain during the J2EE (now Java EE) transformation. So, it was all hyper-technical and hard to explain to non-engineers.
Today, it's a bit easier. As the world moves towards cloud-native applications as the default and every company is now a software company, the benefits of software management and delivery are now mission-critical. We see that as this industry maturity has taken place, software conversations are now management-level conversations. So, it's now a very simple proposition: You are getting business demands for faster, smarter, more secure software. And you must “release software fast, or you die,” as we like to say. Competition is fierce, so if you can provide value to the end-user faster and smarter without downtime (what we call “Liquid Software”), you have a competitive edge. JFrog helps you achieve these goals faster as a DevOps organization.
**Q. How does the explanation change, when explaining it to a senior techie like a CTO?**
**A.** At this level, it is even simpler. You have historically released software once a quarter or even once a year or more. You know that this demand has changed with cloud, microservices and agility movements. We give you the ability to get a new version rapidly, control where the version was validated and how it ends up in runtime as quickly as possible. Weve been doing this for more than 10 years at scale. When we started, our customers managed a gigabyte, then later a terabyte and today we have customers with petabytes of binary software creating dozens or hundreds of builds a day.
**Q. You mentioned the word control. But a lot of developers do not like control. So, how would you explain JFrog's promise to them? What would your pitch be?**
**A.** The word “control” to a developer's ear can sound like something being imposed on them. It's “someone else's” control. But the developer roots of JFrog demand that we provide speed and agility to developers, giving them as much control over their environments as possible without sacrificing their speed.
**Q. According to you, the drivers within the company are the developers, who then take it to DevOps, then to CTO and the CEO signs the cheque? Is that how it works?**
**A.** Yes. JFrog to date has only had an inside sales force with no outbound sales. The first time we ever talked to a company was because the developer said that they needed a professional version of JFrog tools. Then we started the discussion with the managers and so on up the chain. Developers, as some (like our friends at RedMonk) have said, are still the kingmakers.
**Q. Can you explain the term Liquid Software that's been mentioned quite a few times on your website?**
**A.** The concept of Liquid Software is to enable continuous, secure, seamless updates of every piece of software that is running, without any downtime. This is very different from the traditional model of building, packaging and distributing once a year. The old way doesn't scale. The new world of Liquid Software makes the update process nearly seamless from code to end device.
**Q. Has the shift to “everything-as-a-service” become the main driver for this concept of Liquid Software?**
**A.** Yes. People are not making software as a service; they are delivering services. They are using our Liquid Software pipeline delivery process for every kind of delivery. It's important to note that “as a service” isn't just for cloud, but is in fact the standard for every type of software delivery.
**Q. How's JFrog connected with open source and how does it shift to an enterprise paid version? What is the licensing model at JFrog?**
**A.** We have an open-source version licensed under AGPL. This open source version lets you do a lot of Java-related work, and is sometimes where developers start to “kick the tires.” There is also an edition specifically for C/C++ developers utilizing the Conan framework. Since most development shops do more than one type of development, our commercial versions, starting with a Pro subscription, universally support all package types. From there, there are other plans available that include HA, Security and Compliance tools, Distribution and more. We have also recently added JFrog Pipelines for full automation of your pipelines across the organization. So, you can choose what makes the most sense for you, and JFrog can grow alongside your needs as you mature your DevOps and DevSecOps infrastructure.
**Q. Do you have a different set of solutions for developers depending on whether they are developing for the web, mobile, IoT?**
**A.** No, we are universal. You don't need to re-install different things for different technologies. So, if you are a Ruby developer or a Python developer, or if you have Docker, Debian, Microsoft or NuGet, then you get everything in one single tool. Pipelines are so unique to each organization that we need to support all of it.
**Q. Are there any specific solutions or capabilities that you have developed for IoT?**
**A.** Yes. Quite early on we worked with customers on an IoT offering. We provided an IoT-specific solution, which is an extension of Debian for IoT, and we also have Conan and Yocto. Controlling and increasing the speed of delivery is something that is in the early stages of the IoT environment. So we are helping in this integration and providing tools that are enabling different technologies on your JFrog platform that are tailored to an IoT environment.
**Q. Overall, how important is India, both as development and tech-support centre for JFrog globally as well as a market for JFrog?**
**A.** JFrog opened its first office in India more than three years ago, with a development office working on JFrog Insight and JFrog Mission Control (which provide pipeline tooling and performance visibility). We purchased an organization called Shippable at the beginning of this year for their technology and their R&D team, who then created the JFrog Pipelines product. They are also located in India, so India has been and is increasingly important from both an R&D and support perspective. A lot of our senior support force is in India, so we need really good developers working at JFrog to handle the high-tech support volume. We are already at 60 employees in Bangalore and have recently appointed a General Manager. As you know, JFrog is now a company of more than 500 people. We are also growing our marketing and sales teams in India that will help drive the DevOps revolution for Indian customers.
**Q. Are these more of a global account that have shops in India or are these Indian companies?**
**A.** Both. We started with the global companies with R&D in India. Today, we have companies throughout India that are directly buying from us.
**Q. A little bit about your personal journey, when and how did you connect with the open-source world?**
**A.** I will date myself and say that in 1992, I used to play with Mosaic while I was in university, and I created a web server based on open source web stacks. Gloriously geeky stuff, but it put me in the OSS community right from the beginning. When I was a kid, I used to share code, and OSS was the way I learned how to code in the first place. It's clear to me that OSS is the future for creating innovative software, and I and JFrog continue to support and contribute to development communities globally. I look forward to seeing OSS and OSS communities drive the innovations of the future.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/making-software-liquid-a-devops-company-founders-journey-from-oss-community-to-billion-dollar-darling/
作者:[Editor Team][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Copy-of-IMG_0219a-_39_-new.jpg?resize=350%2C472&ssl=1

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gartner: 10 infrastructure trends you need to know)
[#]: via: (https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Gartner: 10 infrastructure trends you need to know
======
Gartner names the most important factors affecting infrastructure and operations
[Daniel Páscoa][1] [(CC0)][2]
ORLANDO – Corporate network infrastructure is only going to get more involved over the next two to three years as automation, network challenges and hybrid cloud become more integral to the enterprise.
Those were some of the main infrastructure trend themes espoused by Gartner vice president and distinguished analyst [David Cappuccio][3] at the research firm's IT Symposium/XPO here this week.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]
Cappuccio noted that Gartner's look at the top infrastructure and operational trends reflects offshoots of technologies such as cloud computing, automation and networking advances the company's [analysts have talked][5] about many times before.
Gartner's “Top Ten Trends Impacting Infrastructure and Operations” list is:
### Automation-strategy rethink
[Automation][7] has been going on at some level for years, Cappuccio said, but the level of complexity as it is developed and deployed further is what's becoming confusing. The amounts and types of automation need to be managed and require a shift to a team development approach led by an automation architect that can be standardized across business units, Cappuccio said. What would help? Gartner says by 2025, more than 90 percent of enterprises will have an automation architect, up from less than 20 percent today.
### Hybrid IT Impacts Disaster Recovery Confidence
Hybrid IT, which includes a mix of data center, SaaS, PaaS, branch offices, [edge computing][8] and security services, makes it hard to promise enterprise resources will be available or backed up, Cappuccio said. Overly simplistic IT disaster recovery plans may only deliver partial success. By 2021, the root cause of 90 percent of cloud-based availability issues will be the failure to fully use cloud service provider native redundancy capabilities, he said. Enterprises need to leverage their automation investments and other IT tools to refocus how systems are recovered.
### Scaling DevOps agility demands platform rethinking
IT's role in many companies has almost become that of a product manager for all its different DevOps teams. IT needs to build consistency across the enterprise because it doesn't want islands of DevOps teams across the company. By 2023, 90 percent of enterprises will fail to scale DevOps initiatives if shared self-service platform approaches are not adopted, Gartner stated.
### Infrastructure - and your data - are everywhere
By 2022, more than 50 percent of enterprise-generated data will be created and processed outside the [data center][9] or cloud, up from less than 10 percent in 2019. Infrastructure is everywhere, Cappuccio said, and every time data is moved it creates challenges. How does IT manage data-everywhere scenarios? Cappuccio advocated mandating data-driven infrastructure impact assessments at early stages of design, investing in infrastructure tools to manage data wherever it resides, and modernizing existing backup architectures to be able to protect data wherever it resides.
### Overwhelming Impact of IoT
The issue here is that most [IoT][10] implementations are not driven by IT, and they typically involve different protocols and vendors that don't usually deal with an IT organization. In the end, who controls and manages IoT becomes an issue and it creates security and operational risks. Cappuccio said companies need to engage with business leaders to shape IoT strategies and establish a center of excellence for IoT.
### Distributed cloud
The methods of putting cloud services or cloud-like services on-premises but letting a vendor manage that cloud are increasing. Google has Anthos and AWS will soon roll out Outposts, for example, so this environment is going to change a lot in the next two years, Cappuccio said. This is a nascent market, so customers should beware. Enterprises should also be prepared to set boundaries and determine who is responsible for software upgrades, patching and performance.
### Immersive Experience
Humans used to learn about and adapt to technology. Today, technology learns and adapts to humans, Cappuccio said. “We have created a world where customers have a serious expectation of perfection. We have designed applications where perfection is the norm.” Such systems are great for mindshare, market share and corporate reputation, but as soon as there's one glitch, that's all out the window.
### Democratization of IT
Application development is no longer the realm of specialists. There has been a rollout of simpler development tools like [low code][11] or [no code][12] packages and a focus on bringing new applications to market quickly. That may bring a quicker time-to-market for the business but could be riskier for IT, Cappuccio said. IT leaders perhaps can't control such rapid development, but they need to understand what's happening.
### What's next for networking?
There are tons of emerging trends around networking, such as mesh, secure-access service edge, network automation, network-on-demand services and firewalls as a service. “After decades of focusing on network performance and availability, future network innovation will target operational simplicity, automation, reliability and flexible business models,” Cappuccio said. Enterprises need to automate “everywhere” and balance what technologies are safe vs. what is agile, he said.
### Hybrid digital-infrastructure management
The general idea here is that CIOs face the challenge of selecting the right mixture of cloud and traditional IT for the organization. The mix of many different elements such as edge, [hybrid cloud][13], workflow and management creates complex infrastructures. Gartner recommends a focus on workflow visualization utilizing an integrated toolset and developing a center of excellence to work on the issues, Cappuccio said.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/tjiPN3e45WE
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.linkedin.com/in/davecappuccio/
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/2160904/gartner--10-critical-it-trends-for-the-next-five-years.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://www.networkworld.com/article/3223189/how-network-automation-can-speed-deployments-and-improve-security.html
[8]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[11]: https://www.mendix.com/low-code-guide/
[12]: https://kissflow.com/no-code/
[13]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Disneys Streaming Service is Having Troubles with Linux)
[#]: via: (https://itsfoss.com/disney-plus-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Disney's Streaming Service is Having Troubles with Linux
======
You might be already using Amazon Prime Video (comes free with [Amazon Prime membership][1]) or [Netflix on your Linux system][2]. Google Chrome supports these streaming services out of the box. You can also [watch Netflix on Firefox in Linux][3] but you have to explicitly enable DRM content.
However, we just learned that Disney's upcoming streaming service, Disney+, does not work in the same way.
A user, Hans de Goede, on [LiveJournal][4] revealed this from his experience with Disney+ in the testing period. In fact, the upcoming streaming service Disney+ does not support Linux at all, at least for now.
### The trouble with Disney+ and DRM
![][5]
As Hans explains in his [post][4], he subscribed to the streaming service in the testing period because of the availability of Disney+ in Netherlands.
Hans tested it on Fedora with mainstream browsers like Firefox and Chrome. However, every time, an error was encountered: “**Error Code 83**”.
So, he reached out to Disney support to solve the issue, but interestingly, they weren't even properly aware of it, as it took them a week to give him a response.
Here's how he describes his experience:
> So I mailed the Disney helpdesk about this, explaining how Linux works fine with Netflix, AmazonPrime video and even the web-app from my local cable provider. They promised to get back to me in 24 hours; they eventually got back to me in about a week. They wrote: “We are familiar with Error 83. This often happens if you want to play Disney + via the web browser or certain devices. Our IT department working hard to solve this. In the meantime, I want to advise you to watch Disney + via the app on a phone or tablet. If this error code still occurs in a few days, you can check the help center …” this was on September 23rd.
They just blatantly advised him to use his phone/tablet to access the streaming service instead. That's genius!
### Disney should reconsider their DRM implementation
What is DRM?
Digital Rights Management ([DRM][6]) technologies attempt to control what you can and can't do with the media and hardware you've purchased.
Even though they want to make sure that their content remains protected from pirates (which won't make a difference either), it creates a problem with the support for multiple platforms.
How on earth do you expect more people to subscribe to your streaming service when you do not even support platforms like Linux? So many media center devices run on Linux. This will be a big setback if Disney continues like this.
To shed some light on the issue, a user on [tweakers.net][7] found out that it is a [Widevine][8] error. Here, it generally means that your device is incompatible with the security level of DRM implemented.
It turns out that it isn't just limited to Linux, but a lot of users are encountering the same error on other platforms as well.
In addition to the wave of issues, the Widevine error also points to a fact that Disney+ may not even work on Chromebooks, some Android smartphones, and Linux desktops in general.
Seriously, Disney?
### Go easy, Disney!
A common DRM (low-level security) implementation with Disney+ should make it accessible on every platform including Linux systems.
Disney+ might want to rethink its DRM implementation if it wants to compete with other streaming platforms like Netflix and Amazon Prime Video.
Personally, I would prefer to stay with Netflix if Disney does not care about supporting multiple platforms.
It is not actually about supporting “Linux” but about conveniently making the streaming service available on more platforms, which could justify its subscription fee.
What do you think about this? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/disney-plus-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/tryprimefree?tag=chmod7mediate-20
[2]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
[3]: https://itsfoss.com/netflix-firefox-linux/
[4]: https://hansdegoede.livejournal.com/22338.html
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/disney-plus-linux.jpg?resize=800%2C450&ssl=1
[6]: https://www.eff.org/issues/drm
[7]: https://tweakers.net/nieuws/157224/disney+-start-met-gratis-proefperiode-van-twee-maanden-in-nederland.html?showReaction=13428408#r_13428408
[8]: https://www.widevine.com/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (PsiACE)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -339,7 +339,7 @@ via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[PsiACE](https://github.com/PsiACE)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,161 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Use GameHub to Manage All Your Linux Games in One Place
======
How do you [play games on Linux][1]? Let me guess. Either you install games from the software center, or from Steam, GOG, Humble Bundle, etc., right? But how do you plan to manage all your games from multiple launchers and clients? Well, that sounds like a hassle to me, which is why I was delighted when I came across [GameHub][2].
GameHub is a desktop application for Linux distributions that lets you manage “All your games in one place”. That sounds interesting, doesn't it? Let me share more details about it.
![][3]
### GameHub Features to manage Linux games from different sources in one place
Let's see all the features that make GameHub one of the [essential Linux applications][4], especially for gamers.
#### Steam, GOG & Humble Bundle Support
![][5]
It supports Steam, [GOG][6], and [Humble Bundle][7] account integration. You can sign in to your account to manage your library from within GameHub.
For my usage, I have a lot of games on Steam and a couple on Humble Bundle. I can't speak for everyone, but it is safe to assume that these are the major platforms one would want to have.
#### Native Game Support
![][8]
There are several [websites where you can find and download Linux games][9]. You can also add native Linux games by downloading their installers or add the executable file.
Unfortunately, there's no easy way of finding Linux games from within GameHub at the moment. So, you will have to download them separately and add them to GameHub as shown in the image above.
#### Emulator Support
With emulators, you can [play retro games on Linux][10]. As you can observe in the image above, you also get the ability to add emulators (and import emulated images).
You can see [RetroArch][11] listed already but you can also add custom emulators as per your requirements.
#### User Interface
![Gamehub Appearance Option][12]
Of course, the user experience matters. Hence, it is important to take a look at its user interface and what it offers.
To me, I felt it very easy to use and the presence of a dark theme is a bonus.
#### Controller Support
If you are comfortable using a controller with your Linux system to play games, you can easily add it and enable or disable it from the settings.
#### Multiple Data Providers
Since GameHub fetches the information (or metadata) of your games, it needs a source for that. You can see all the sources listed in the image below.
![Data Providers Gamehub][13]
You don't have to do anything here, but if you are using anything other than Steam as your platform, you can generate an [API key for IGDB][14].
I recommend doing that only if you see a prompt/notice within GameHub or if you have some games that do not have any description/pictures/stats in GameHub.
#### Compatibility Layer
![][15]
Do you have a game that does not support Linux?
You do not have to worry. GameHub offers multiple compatibility layers like Wine/Proton, which you can use to get the game installed and make it playable.
We can't really be sure what will work for you, so you will have to test it yourself. Nevertheless, it is an important feature that could come in handy for a lot of gamers.
### How Do You Manage Your Games in GameHub?
You get the option to add your Steam/GOG/Humble Bundle accounts right after you launch it.
For Steam, you need to have the Steam client installed on your Linux distro. Once you have it, you can easily link your games to GameHub.
![][16]
For GOG & Humble Bundle, you can directly sign in using your credentials to get your games organized in GameHub.
If you are adding an emulated image or a native installer, you can always do that by clicking on the “**+**” button that you observe in the top-right corner of the window.
### How Do You Install Games?
For Steam games, it automatically launches the Steam client to download/install them (I wish this were possible without launching Steam!).
![][17]
But, for GOG/Humble Bundle, you can directly start downloading to install the games after signing in. If necessary, you can utilize the compatibility layer for non-native Linux games.
In either case, if you want to install an emulated game or a native game, just add the installer or import the emulated image. There's nothing more to it.
### GameHub: How do you install it?
![][18]
To start with, you can just search for it in your software center or app center. It is available in the **Pop!_Shop**. So, it can be found in most of the official repositories.
If you don't find it there, you can always add the repository and install it via terminal by typing these commands:
```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```
In case you encounter the “**add-apt-repository command not found**” error, you can take a look at our article to help fix the [add-apt-repository not found error][19].
There are also AppImage and Flatpak versions available. You can find installation instructions for other Linux distros on its [official webpage][2].
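For example, assuming the Flatpak build is published on Flathub under the same application ID as the apt package (an assumption on my part; check the official page for the exact command), installing it would look something like this:

```
flatpak install flathub com.github.tkashkin.gamehub
```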
Also, you have the option to download pre-release packages from its [GitHub page][20].
[GameHub][2]
**Wrapping Up**
GameHub is a pretty neat application as a unified library for all your games. The user interface is intuitive and so are the options.
Have you had the chance to test it out before? If yes, let us know your experience in the comments down below.
Also, feel free to tell us about some of your favorite tools/applications similar to this which you would want us to try.
--------------------------------------------------------------------------------
via: https://itsfoss.com/gamehub/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases

View File

@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)
DevSecOps pipelines and tools: What you need to know
======
DevSecOps evolves DevOps to ensure security remains an essential part of
the process.
![An intersection of pipes.][1]
DevOps is well-understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.
This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.
This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.
In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of [Kubernetes][2] and [Istio][3]. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.
### What Is DevSecOps?
Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.
To utilize [DevSecOps][5], you need to:
* Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
* Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
* Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.
DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.
### Understanding the DevSecOps pipeline
There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase (a rough shell sketch of these checks follows the list below).
* **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.
* **Code:** Deploy linting tools and Git controls to secure passwords and API keys.
* **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.
* **Test:** Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.
* **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.
* **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment.
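As a rough illustration of how the per-phase checks above can be wired into a pipeline, here is a minimal shell sketch. It is not from the original article; the tools named (gitleaks, Bandit, OWASP ZAP's baseline scan, Trivy) and the target URLs are just common, interchangeable examples.

```
#!/bin/sh
# Hypothetical CI job: one automated security check per DevSecOps phase.
# Tool choices and targets are illustrative, not prescribed by the article.
set -e

# Code phase: scan the repository for committed secrets
gitleaks detect --source .

# Build phase: static application security testing (SAST) for a Python code base
bandit -r src/

# Test phase: dynamic application security testing (DAST) against a staging deployment
zap-baseline.py -t https://staging.example.com

# Release phase: scan the built container image for known vulnerabilities
trivy image registry.example.com/myapp:latest
```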
### DevSecOps tools
Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.
DevSecOps will play a more crucial role as we continue to see an increase in the complexity of enterprise security threats built on modern IT infrastructure. However, the DevSecOps pipeline will need to improve over time, rather than simply relying on implementing all security changes simultaneously. This will eliminate the possibility of backtracking or the failure of application delivery.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools
作者:[Sagar Nangare][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops

View File

@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux sudo flaw can lead to unauthorized privileges
======
Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root despite restrictions against it.
Thinkstock
A newly discovered and serious flaw in the [**sudo**][1] command can, if exploited, enable users to run commands as root in spite of the fact that the syntax of the  **/etc/sudoers** file specifically disallows them from doing so.
Updating **sudo** to version 1.8.28 should address the problem, and Linux admins are encouraged to do so as soon as possible. 
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
How the flaw might be exploited depends on specific privileges granted in the **/etc/sudoers** file. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In this case, the flaw could lead to very serious problems.
To exploit the flaw, a user needs to be assigned privileges in the **/etc/sudoers** file that allow that user to run commands as some other users; the flaw is limited to the command privileges that are assigned in this way.
This problem affects versions prior to 1.8.28. To check your sudo version, use this command:
```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```
The vulnerability has been assigned [CVE-2019-14287][4] in the **Common Vulnerabilities and Exposures** database. The risk is that any user who has been given the ability to run even a single command as an arbitrary user may be able to escape the restrictions and run that command as root even if the specified privilege is written to disallow running the command as root.
The lines below are meant to give the user "jdoe" the ability to edit files with **vi** as any user except root (**!root** means "not root") and the user "nemo" the right to run the **id** command as any user except root:
```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```
However, given the flaw, either of these users would be able to circumvent the restriction and edit files or run the **id** command as root as well.
The flaw can be exploited by an attacker to run commands as root by specifying the user ID "-1" or "4294967295."  
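The command listing that originally illustrated this did not survive the copy here. As a reconstruction, assuming the "nemo" rule shown above and an unpatched sudo, the exploit amounts to invocations such as:

```
$ sudo -u#-1 id -u
$ sudo -u#4294967295 id -u
```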
The response of "1" demonstrates that the command is being run as root (showing root's user ID).
Joe Vennix from Apple Information Security both found and analyzed the problem.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,210 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======
**Rsyslog** is a free and open source logging utility that exists by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. Centralizing logs is beneficial in two ways. First, it simplifies viewing of logs, as the systems administrator can view all the logs of remote servers from a central point without logging into every client system; this is especially helpful when several servers need to be monitored. Second, if a remote client suffers a crash, you need not worry about losing the logs, because all the logs are saved on the **central rsyslog server**. Rsyslog has replaced syslog, which only supported the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** protocols for transporting logs, augmented filtering abilities, and flexible configuration options. That said, let's explore how to configure the Rsyslog server on CentOS 8 / RHEL 8 systems.
[![configure-rsyslog-centos8-rhel8][1]][2]
### Prerequisites
We are going to have the following lab setup to test the centralized logging process:
* **Rsyslog server**       CentOS 8 Minimal    IP address: 10.128.0.47
* **Client system**         RHEL 8 Minimal      IP address: 10.128.0.48
From the setup above, we will demonstrate how you can set up the Rsyslog server and later configure the client system to ship logs to the Rsyslog server for monitoring.
Let's get started!
### Configuring the Rsyslog Server on CentOS 8
By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:
```
$ systemctl status rsyslog
```
Sample Output
![rsyslog-service-status-centos8][1]
If rsyslog is not present for whatever reason, you can install it using the command:
```
$ sudo yum install rsyslog
```
Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file.
```
$ sudo vim /etc/rsyslog.conf
```
Scroll and uncomment the lines shown below to allow reception of logs via the UDP protocol:
```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```
![rsyslog-conf-centos8-rhel8][1]
Similarly, if you prefer to enable TCP rsyslog reception, uncomment these lines:
```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```
![rsyslog-conf-tcp-centos8-rhel8][1]
Save and exit the configuration file.
To receive logs from the client system, we need to open Rsyslog's default port 514 on the firewall. (If you enabled UDP reception above, open 514/udp as well.) To achieve this, run:
```
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
# sudo firewall-cmd --reload
```
Sample Output
![firewall-ports-rsyslog-centos8][1]
Next, restart the Rsyslog service:
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the command below:
```
$ sudo systemctl enable rsyslog
```
To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:
```
$ sudo netstat -pnltu
```
Sample Output
![netstat-rsyslog-port-centos8][1]
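If the netstat utility is not available on your minimal installation (it is provided by the net-tools package), the ss command from the iproute package offers an equivalent check:
```
$ sudo ss -pnltu
```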
Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.
To view log messages in real-time run the command:
```
$ tail -f /var/log/messages
```
Let's now configure the client system.
### Configuring the client system on RHEL 8
As on the Rsyslog server, log in and check whether the rsyslog daemon is running by issuing the command:
```
$ sudo systemctl status rsyslog
```
Sample Output
![client-rsyslog-service-rhel8][1]
Next, proceed to open the rsyslog configuration file
```
$ sudo vim /etc/rsyslog.conf
```
At the end of the file, append one of the following lines, depending on the protocol you enabled on the server:
```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
$ sudo firewall-cmd --reload
```
Next,  restart the rsyslog service
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the following command:
```
$ sudo systemctl enable rsyslog
```
### Testing the logging operation
Having successfully set up and configured the Rsyslog server and the client system, it's time to verify that your configuration is working as intended.
On the client system issue the command:
```
# logger "Hello guys! This is our first log"
```
Now head over to the Rsyslog server and run the command below to check the log messages in real time:
```
# tail -f /var/log/messages
```
The output from the command run on the client system should appear in the Rsyslog server's log messages, confirming that the Rsyslog server is now receiving logs from the client system.
![centralize-logs-rsyslogs-centos8][1]
And that's it, guys! We have successfully set up the Rsyslog server to receive log messages from a client system.
Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/

View File

@ -0,0 +1,320 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to build a Flatpak)
[#]: via: (https://opensource.com/article/19/10/how-build-flatpak-packaging)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to build a Flatpak
======
A universal packaging format with a decentralized means of distribution.
Plus, portability and sandboxing.
![][1]
A long time ago, a Linux distribution shipped an operating system along with _all_ the software available for it. There was no concept of "third party" software because everything was a part of the distribution. Applications weren't so much installed as they were enabled from a great big software repository that you got on one of the many floppy disks or, later, CDs you purchased or downloaded.
This evolved into something even more convenient as the internet became ubiquitous, and the concept of what is now the "app store" was born. Of course, Linux distributions tend to call this a _software repository_ or just _repo_ for short, with some variations for "branding", such as _Ubuntu Software Center_ or, with typical GNOME minimalism, simply _Software_.
This model worked well back when open source software was still a novelty and the number of open source applications was an actual, countable number rather than a _theoretical_ one. In today's world of GitLab and GitHub and Bitbucket (and [many][2] [many][3] more), it's hardly possible to count the number of open source projects, much less package them up in a repository. No Linux distribution today, even [Debian][4] and its formidable group of package maintainers, can claim or hope to have a package for every installable open source project.
Of course, a Linux package doesn't have to be in a repository to be installable. Any programmer can package up their software and distribute it from their own website. However, because repositories are seen as an integral part of a distribution, there isn't a universal packaging format, meaning that a programmer must decide whether to release a `.deb` or `.rpm`, or an AUR build script, or a Nix or Guix package, or a Homebrew script, or just a mostly-generic `.tgz` archive for `/opt`. It's overwhelming even for a developer who lives and breathes Linux every day, let alone for a developer just trying to make a best-effort attempt at supporting a free and open source target.
### Why Flatpak?
The Flatpak project provides a universal packaging format along with a decentralized means of distribution, plus portability and sandboxing.
* **Universal**: Install the Flatpak system, and you can run Flatpaks, regardless of your distribution. No daemon or systemd required. The same Flatpak runs on Fedora, Ubuntu, Mageia, Pop OS, Arch, Slackware, and more.
* **Decentralized**: Developers can create and sign their own Flatpak packages and repositories. There's no repository to petition in order to get a package included.
* **Portability**: If you have a Flatpak on your system and want to hand it to a friend so they can run the same application, you can export the Flatpak to a USB thumbdrive.
* **Sandboxed**: Flatpaks use a container-based model, allowing multiple versions of libraries and applications to exist on one system. Yes, you can easily install the latest version of an app to test out while maintaining the old version you rely on.
### Building a Flatpak
To build a Flatpak, you must first install Flatpak (the subsystem that enables you to use Flatpak packages) and the Flatpak-builder application.
On Fedora, CentOS, RHEL, and similar:
```
$ sudo dnf install flatpak flatpak-builder
```
On Debian, Ubuntu, and similar:
```
$ sudo apt install flatpak flatpak-builder
```
You must also install the development tools required to build the application you are packaging. By nature of developing the application you're now packaging, you may already have a development environment installed, so you might not notice that these components are required, but should you start building Flatpaks with Jenkins or from inside containers, then you must ensure that your build tools are a part of your toolchain.
For the first example build, this article assumes that your application uses [GNU Autotools][5], but Flatpak itself supports other build systems, such as `cmake`, `cmake-ninja`, `meson`, `ant`, as well as custom commands (a `simple` build system, in Flatpak terminology, but by no means does this imply that the build itself is actually simple).
#### Project directory
Unlike the strict RPM build infrastructure, Flatpak doesn't impose a project directory structure. I prefer to create project directories based on the **dist** packages of software, but there's no technical reason you can't instead integrate your Flatpak build process with your source directory. It is technically easier to build a Flatpak from your **dist** package, though, and it's an easier demo too, so that's the model this article uses. Set up a project directory for GNU Hello, serving as your first Flatpak:
```
$ mkdir hello_flatpak
$ mkdir hello_flatpak/src
```
Download your distributable source. For this example, the source code is located at `https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz`.
```
$ cd hello_flatpak
$ wget -P src https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```
#### Manifest
A Flatpak is defined by a manifest, which describes how to build and install the application it is delivering. A manifest is atomic and reproducible. A Flatpak exists in a "sandbox" container, though, so the manifest is based on a mostly empty environment with a root directory called `/app`.
The first two attributes are the ID of the application you are packaging and the command provided by it. The application ID must be unique to the application you are packaging. The canonical way of formulating a unique ID is to use a triplet value consisting of the entity responsible for the code followed by the name of the application, such as `org.gnu.Hello`. The command provided by the application is whatever you type into a terminal to run the application. This does not imply that the application is intended to be run from a terminal instead of a `.desktop` file in the Activities or Applications menu.
In a file called `org.gnu.Hello.yaml`, enter this text:
```
id: org.gnu.Hello
command: hello
```
A manifest can be written in [YAML][6] or in JSON. This article uses YAML.
Next, you must define each “module” delivered by this Flatpak package. You can think of a module as a dependency or a component. For GNU Hello, there is only one module: GNU Hello. More complex applications may require a specific library or another application entirely.
```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```
The `buildsystem` value identifies how Flatpak must build the module. Each module can use its own build system, so one Flatpak can have several build systems defined.
The `no-autogen` value tells Flatpak not to run the setup commands for `autotools`, which aren't necessary because the GNU Hello source code is the product of `make dist`. If the code you're building isn't in an easily buildable form, then you may need to install `autogen` and `autoconf` to prepare the source for `autotools`. This option doesn't apply at all to projects that don't use `autotools`.
The `type` value tells Flatpak that the source code is in an archive, which triggers the requisite unarchival tasks before building. The `path` points to the source code. In this example, the source exists in the `src` directory on your local build machine, but you could instead define the source as a remote location:
```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        url: https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```
Finally, you must define the platform required for the application to run and build. The Flatpak maintainers supply runtimes and SDKs that include common libraries, including `freedesktop`, `gnome`, and `kde`. The basic requirement is the `freedesktop` runtime and SDK, although this may be superseded by GNOME or KDE, depending on what your code needs to run. For this GNU Hello example, only the basics are required.
```
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
```
The entire GNU Hello flatpak manifest:
```
id: org.gnu.Hello
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
command: hello
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```
#### Building a Flatpak
Now that the package is defined, you can build it. The build process prompts Flatpak-builder to parse the manifest and to resolve each requirement: it ensures that the necessary Platform and SDK are available (if they aren't, then you'll have to install them with the `flatpak` command), it unarchives the source code, and executes the `buildsystem` specified.
The command to start:
```
$ flatpak-builder build-dir org.gnu.Hello.yaml
```
The directory `build-dir` is created if it does not already exist. The name `build-dir` is arbitrary; you could call it `build` or `bld` or `penguin`, and you can have more than one build destination in the same project directory. However, the term `build-dir` is a frequent value used in documentation, so using it as the literal value can be helpful.
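If you rebuild into the same directory later, note that flatpak-builder normally refuses to reuse a non-empty build directory; the `--force-clean` option (which also appears in the installation step further down) empties it first. A typical repeat build might look like this:
```
$ flatpak-builder --force-clean build-dir org.gnu.Hello.yaml
```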
#### Testing your application
You can test your application before or after it has been built by running the build command along with the `--run` option, and ending the command with the command provided by the Flatpak:
```
$ flatpak-builder --run build-dir \
org.gnu.Hello.yaml hello
Hello, world!
```
### Packaging GUI apps with Flatpak
Packaging up a simple self-contained _hello world_ application is trivial, and fortunately packaging up a GUI application isn't much harder. The most difficult applications to package are those that don't rely on common libraries and frameworks (in the context of packaging, "common" means anything _not_ already packaged by someone else). The Flatpak community provides SDKs and SDK Extensions for many components you might otherwise have had to package yourself. For instance, when packaging the pure Java implementation of `pdftk`, I use the OpenJDK SDK extension I found in the Flatpak GitHub repository:
```
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
sdk-extensions:
 - org.freedesktop.Sdk.Extension.openjdk11
```
The Flatpak community does a lot of work on the foundations required for applications to run upon in order to make the packaging process easy for developers. For instance, the Kblocks game from the KDE community requires the KDE platform to run, and that's already available from Flatpak. The additional `libkdegames` library is not included, but it's as easy to add it to your list of `modules` as `kblocks` itself.
Here's a manifest for the Kblocks game:
```
id: org.kde.kblocks
command: kblocks
modules:
- buildsystem: cmake-ninja
  name: libkdegames
  sources:
    - type: archive
      path: src/libkdegames-19.08.2.tar.xz
- buildsystem: cmake-ninja
  name: kblocks
  sources:
    - type: archive
      path: src/kblocks-19.08.2.tar.xz
runtime: org.kde.Platform
runtime-version: '5.13'
sdk: org.kde.Sdk
```
As you can see, the manifest is still straightforward and relatively intuitive. The build system is different, and the runtime and SDK point to KDE instead of Freedesktop, but the structure and requirements are basically the same.
Because it's a GUI application, however, there are some new options required. First, it needs an icon so that when it's listed in the Activities or Application menu, it looks nice and recognizable. Kblocks includes an icon in its sources, but the names of files exported by a Flatpak must be prefixed using the application ID (such as `org.kde.Kblocks.desktop`). The easiest way to do this is to rename the file directly in the application source, which Flatpak can do for you as long as you include this directive in your manifest:
```
rename-icon: kblocks
```
Another unique trait of GUI applications is that they often require integration with common desktop services, like the graphics server (X11 or Wayland) itself, a sound server such as [Pulse Audio][7], and the Inter-Process Communication (IPC) subsystem.
In the case of Kblocks, the requirements are:
```
finish-args:
- --share=ipc
- --socket=x11
- --socket=wayland
- --socket=pulseaudio
- --device=dri
- --filesystem=xdg-config/kdeglobals:ro
```
Here's the final, complete manifest, using URLs for the sources so you can try this on your own system easily:
```
command: kblocks
finish-args:
- --share=ipc
- --socket=x11
- --socket=wayland
- --socket=pulseaudio
- --device=dri
- --filesystem=xdg-config/kdeglobals:ro
id: org.kde.kblocks
modules:
- buildsystem: cmake-ninja
  name: libkdegames
  sources:
  - sha256: 83456cec44502a1f79c0be00c983090e32fd8aea5fec1461fbfbd37b5f8866ac
    type: archive
    url: https://download.kde.org/stable/applications/19.08.2/src/libkdegames-19.08.2.tar.xz
- buildsystem: cmake-ninja
  name: kblocks
  sources:
  - sha256: 8b52c949e2d446a4ccf81b09818fc90234f2f55d8722c385491ee67e1f2abf93
    type: archive
    url: https://download.kde.org/stable/applications/19.08.2/src/kblocks-19.08.2.tar.xz
rename-icon: kblocks
runtime: org.kde.Platform
runtime-version: '5.13'
sdk: org.kde.Sdk
```
To build the application, you must have the KDE Platform and SDK Flatpaks (version 5.13 as of this writing) installed. Once the application has been built, you can run it using the `--run` method, but to see the application icon, you must install it.
#### Distributing and installing a Flatpak you have built
Distributing Flatpaks happens through repositories.
You can list your apps on [Flathub.org][8], a community website meant as a _technically_ decentralized (but central in spirit) location for Flatpaks. To submit your Flatpak, [place your manifest into a Git repository][9] and [submit a pull request on GitHub][10].
Alternately, you can create your own repository using the `flatpak build-export` command.
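A rough sketch of that workflow (the repository name `myrepo` here is arbitrary, and this assumes the `build-dir` produced by the earlier steps):
```
# Export the finished build into a local OSTree repository called "myrepo"
$ flatpak build-export myrepo build-dir

# Add that repository as a user remote (unsigned, so only for local testing)
$ flatpak --user remote-add --no-gpg-verify myrepo "$(pwd)/myrepo"

# Install the application from your own repository
$ flatpak --user install myrepo org.kde.kblocks
```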
You can also just install locally:
```
$ flatpak-builder --force-clean --install build-dir org.kde.Kblocks.yaml
```
Once installed, open your Activities or Applications menu and search for Kblocks.
![The Activities menu in GNOME][11]
### Learning more
The [Flatpak documentation site][12] has a good walkthrough on building your first Flatpak. It's worth reading even if you've followed along with this article. Besides that, the docs provide details on what Platforms and SDKs are available.
For those who enjoy learning from examples, there are manifests for _every application_ available on [Flathub][13].
The resources to build and use Flatpaks are plentiful, and Flatpak, along with containers and sandboxed apps, is arguably [the future][14], so get familiar with them, start integrating them with your Jenkins pipelines, and enjoy easy and universal Linux app packaging.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-build-flatpak-packaging
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flatpak-lead-image.png?itok=J93RG_fi
[2]: http://notabug.org
[3]: http://savannah.nongnu.org/
[4]: http://debian.org
[5]: https://opensource.com/article/19/7/introduction-gnu-autotools
[6]: https://www.redhat.com/sysadmin/yaml-tips
[7]: https://opensource.com/article/17/1/linux-plays-sound
[8]: http://flathub.org
[9]: https://opensource.com/resources/what-is-git
[10]: https://opensource.com/life/16/3/submit-github-pull-request
[11]: https://opensource.com/sites/default/files/gnome-activities-kblocks.jpg (The Activities menu in GNOME)
[12]: http://docs.flatpak.org/en/latest/introduction.html
[13]: https://github.com/flathub
[14]: https://silverblue.fedoraproject.org/

View File

@ -0,0 +1,272 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Syntax and tools)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-1)
[#]: author: (David Both https://opensource.com/users/dboth)
How to program with Bash: Syntax and tools
======
Learn basic Bash programming syntax and tools, as well as how to use
variables and control operators, in the first article in this three-part
series.
![bash logo on green background][1]
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it sends them to STDOUT which, by default, [displays them in the terminal][2]. All of the shells I am familiar with are also programming languages.
Features like tab completion, command-line recall and editing, and shortcuts like aliases all contribute to its value as a powerful shell. Its default command-line editing mode uses Emacs, but one of my favorite Bash features is that I can change it to Vi mode to use editing commands that are already part of my muscle memory.
However, if you think of Bash solely as a shell, you miss much of its true power. While researching my three-volume [Linux self-study course][3] (on which this series of articles is based), I learned things about Bash that I'd never known in over 20 years of working with Linux. Some of these new bits of knowledge relate to its use as a programming language. Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts.
This three-part series explores using Bash as a command-line interface (CLI) programming language. This first article looks at some simple command-line programming with Bash, variables, and control operators. The other articles explore types of Bash files; string, numeric, and miscellaneous logical operators that provide execution-flow control logic; different types of shell expansions; and the **for**, **while**, and **until** loops that enable repetitive operations. They will also look at some commands that simplify and support the use of these tools.
### The shell
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it displays them in the terminal. All of the shells I am familiar with are also programming languages.
Bash stands for Bourne Again Shell because the Bash shell is [based upon][4] the older Bourne shell that was written by Stephen Bourne in 1977. Many [other shells][5] are available, but these are the four I encounter most frequently:
* **csh:** The C shell for programmers who like the syntax of the C language
* **ksh:** The Korn shell, written by David Korn and popular with Unix users
* **tcsh:** A version of csh with more ease-of-use features
* **zsh:** The Z shell, which combines many features of other popular shells
All shells have built-in commands that supplement or replace the ones provided by the core utilities. Open the shell's man page and find the "BUILT-INS" section to see the commands it provides.
Each shell has its own personality and syntax. Some will work better for you than others. I have used the C shell, the Korn shell, and the Z shell. I still like the Bash shell more than any of them. Use the one that works best for you, although that might require you to try some of the others. Fortunately, it's quite easy to change shells.
All of these shells are programming languages, as well as command interpreters. Here's a quick tour of some programming constructs and tools that are integral parts of Bash.
### Bash as a programming language
Most sysadmins have used Bash to issue commands that are usually fairly simple and straightforward. But Bash can go beyond entering single commands, and many sysadmins create simple command-line programs to perform a series of tasks. These programs are common tools that can save time and effort.
My objective when writing CLI programs is to save time and effort (i.e., to be the lazy sysadmin). CLI programs support this by listing several commands in a specific sequence that execute one after another, so you do not need to watch the progress of one command and type in the next command when the first finishes. You can go do other things and not have to continually monitor the progress of each command.
### What is "a program"?
The Free On-line Dictionary of Computing ([FOLDOC][6]) defines a program as: "The instructions executed by a computer, as opposed to the physical device on which they run." Princeton University's [WordNet][7] defines a program as: "…a sequence of instructions that a computer can interpret and execute…" [Wikipedia][8] also has a good entry about computer programs.
Therefore, a program can consist of one or more instructions that perform a specific, related task. A computer program instruction is also called a program statement. For sysadmins, a program is usually a sequence of shell commands. All the shells available for Linux, at least the ones I am familiar with, have at least a basic form of programming capability, and Bash, the default shell for most Linux distributions, is no exception.
While this series uses Bash (because it is so ubiquitous), if you use a different shell, the general programming concepts will be the same, although the constructs and syntax may differ somewhat. Some shells may support some features that others do not, but they all provide some programming capability. Shell programs can be stored in a file for repeated use, or they may be created on the command line as needed.
### Simple CLI programs
The simplest command-line programs are one or two consecutive program statements, which may be related or not, that are entered on the command line before the **Enter** key is pressed. The second statement in a program, if there is one, might be dependent upon the actions of the first, but it does not need to be.
There is also one bit of syntactical punctuation that needs to be clearly stated. When entering a single command on the command line, pressing the **Enter** key terminates the command with an implicit semicolon (**;**). When used in a CLI shell program entered as a single line on the command line, the semicolon must be used to terminate each statement and separate it from the next one. The last statement in a CLI shell program can use an explicit or implicit semicolon.
### Some basic syntax
The following examples will clarify this syntax. This program consists of a single command with an explicit terminator:
```
[student@studentvm1 ~]$ echo "Hello world." ;
Hello world.
```
That may not seem like much of a program, but it is the first program I encounter with every new programming language I learn. The syntax may be a bit different for each language, but the result is the same.
Let's expand a little on this trivial but ubiquitous program. Your results will be different from mine because I have done other experiments, while you may have only the default directories and files that are created in the account home directory the first time you log into an account via the GUI desktop.
```
[student@studentvm1 ~]$ echo "My home directory." ; ls ;
My home directory.
chapter25   TestFile1.Linux  dmesg2.txt  Downloads  newfile.txt  softlink1  testdir6
chapter26   TestFile1.mac    dmesg3.txt  file005    Pictures     Templates  testdir
TestFile1      Desktop       dmesg.txt   link3      Public       testdir    Videos
TestFile1.dos  dmesg1.txt    Documents   Music      random.txt   testdir1
```
That makes a bit more sense. The results are related, but the individual program statements are independent of each other. Notice that I like to put spaces before and after the semicolon because it makes the code a bit easier to read. Try that little CLI program again without an explicit semicolon at the end:
```
[student@studentvm1 ~]$ echo "My home directory." ; ls
```
There is no difference in the output.
### Something about variables
Like all programming languages, the Bash shell can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable.
Bash does not type variables like C and related languages, defining them as integers, floating points, or string types. In Bash, all variables are strings. A string that is an integer can be used in integer arithmetic, which is the only type of math that Bash is capable of doing. If more complex math is required, the [**bc** command][9] can be used in CLI programs and scripts.
Variables are assigned values and can be used to refer to those values in CLI programs and scripts. The value of a variable is set using its name but not preceded by a **$** sign. The assignment **VAR=10** sets the value of the variable VAR to 10. To print the value of the variable, you can use the statement **echo $VAR**. Start with text (i.e., non-numeric) variables.
Bash variables become part of the shell environment until they are unset.
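As a quick illustration of that lifetime (the variable name here is arbitrary), a value survives from one statement to the next until it is explicitly unset:
```
[student@studentvm1 ~]$ TempVar="still here" ; echo $TempVar ; unset TempVar ; echo $TempVar
still here

[student@studentvm1 ~]$
```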
Check the initial value of a variable that has not been assigned; it should be null. Then assign a value to the variable and print it to verify its value. You can do all of this in a single CLI program:
```
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello World" ; echo $MyVar ;

Hello World
[student@studentvm1 ~]$
```
_Note: The syntax of variable assignment is very strict. There must be no spaces on either side of the equal (**=**) sign in the assignment statement._
The empty line indicates that the initial value of **MyVar** is null. Changing and setting the value of a variable are done the same way. This example shows both the original and the new value.
As mentioned, Bash can perform integer arithmetic calculations, which is useful for calculating a reference to the location of an element in an array or doing simple math problems. It is not suitable for scientific computing or anything that requires decimals, such as financial calculations. There are much better tools for those types of calculations.
Here's a simple calculation:
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1*Var2))"
Result = 63
```
What happens when you perform a math operation that results in a floating-point number?
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1/Var2))"
Result = 0
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var2/Var1))"
Result = 1
[student@studentvm1 ~]$
```
The result is truncated to its integer portion; Bash performs integer division and discards the remainder. Notice that the calculation was performed as part of the **echo** statement. The math is performed before the enclosing echo command due to the Bash order of precedence. For details see the Bash man page and search "precedence."
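If you do need the fractional result, the **bc** command mentioned above can be called from the same kind of CLI program; a minimal sketch:
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $(echo "scale=4; $Var1/$Var2" | bc)"
Result = .7777
```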
### Control operators
Shell control operators are one of the syntactical operators for easily creating some interesting command-line programs. The simplest form of CLI program is just stringing several commands together in a sequence on the command line:
```
command1 ; command2 ; command3 ; command4 ; . . . ; etc. ;
```
Those commands all run without a problem so long as no errors occur. But what happens when an error occurs? You can anticipate and allow for errors using the built-in **&&** and **||** Bash control operators. These two control operators provide some flow control and enable you to alter the sequence of code execution. The semicolon is also considered to be a Bash control operator, as is the newline character.
The **&&** operator simply says, "if command1 is successful, then run command2. If command1 fails for any reason, then command2 is skipped." That syntax looks like this:
```
command1 && command2
```
Now, look at some commands that will create a new directory and—if it's successful—make it the present working directory (PWD). Ensure that your home directory (**~**) is the PWD. Try this first in **/root**, a directory that you do not have access to:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir/ && cd $Dir
mkdir: cannot create directory '/root/testdir/': Permission denied
[student@studentvm1 ~]$
```
The error was emitted by the **mkdir** command, and the directory was not created. You did not receive a second error because the **&&** control operator sensed the non-zero return code, so the **cd** command was skipped. Using the **&&** control operator prevents the **cd** command from running because there was an error in creating the directory. This type of command-line program flow control can prevent errors from compounding and making a real mess of things. But it's time to get a little more complicated.
The **||** control operator allows you to add another program statement that executes when the initial program statement returns a code greater than zero. The basic syntax looks like this:
```
command1 || command2
```
This syntax reads, "If command1 fails, execute command2." That implies that if command1 succeeds, command2 is skipped. Try this by attempting to create a new directory:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
This is exactly what you would expect. Because the new directory could not be created, the first command failed, which resulted in the execution of the second command.
Combining these two operators provides the best of both. The control operator syntax using some flow control takes this general form when the **&&** and **||** control operators are used:
```
preceding commands ; command1 && command2 || command3 ; following commands
```
This syntax can be stated like so: "If command1 exits with a return code of 0, then execute command2, otherwise execute command3." Try it:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
Now try the last command again using your home directory instead of the **/root** directory. You will have permission to create this directory:
```
[student@studentvm1 ~]$ Dir=~/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
[student@studentvm1 testdir]$
```
The control operator syntax, like **command1 && command2**, works because every command sends a return code (RC) to the shell that indicates if it completed successfully or whether there was some type of failure during execution. By convention, an RC of zero (0) indicates success, and any positive number indicates some type of failure. Some of the tools sysadmins use just return a one (1) to indicate a failure, but many use other codes to indicate the type of failure that occurred.
The Bash shell variable **$?** contains the RC from the last command. This RC can be checked very easily by a script, the next command in a list of commands, or even the sysadmin directly. Start by running a simple command and immediately checking the RC. The RC will always be for the last command that ran before you looked at it.
```
[student@studentvm1 testdir]$ ll ; echo "RC = $?"
total 1264
drwxrwxr-x  2 student student   4096 Mar  2 08:21 chapter25
drwxrwxr-x  2 student student   4096 Mar 21 15:27 chapter26
-rwxr-xr-x  1 student student     92 Mar 20 15:53 TestFile1
<snip>
drwxrwxr-x. 2 student student 663552 Feb 21 14:12 testdir
drwxr-xr-x. 2 student student   4096 Dec 22 13:15 Videos
RC = 0
[student@studentvm1 testdir]$
```
The RC, in this case, is zero, which means the command completed successfully. Now try the same command on root's home directory, a directory you do not have permissions for:
```
[student@studentvm1 testdir]$ ll /root ; echo "RC = $?"
ls: cannot open directory '/root': Permission denied
RC = 2
[student@studentvm1 testdir]$
```
In this case, the RC is two; this means permission was denied for a non-root user to access a directory to which the user is not permitted access. The control operators use these RCs to enable you to alter the sequence of program execution.
### Summary
This article looked at Bash as a programming language and explored its basic syntax as well as some basic tools. It showed how to print data to STDOUT and how to use variables and control operators. The next article in this series looks at some of the many Bash logical operators that control the flow of instruction execution.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-1
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/18/10/linux-data-streams
[3]: http://www.both.org/?page_id=1183
[4]: https://opensource.com/19/9/command-line-heroes-bash
[5]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
[6]: http://foldoc.org/program
[7]: https://wordnet.princeton.edu/
[8]: https://en.wikipedia.org/wiki/Computer_program
[9]: https://www.gnu.org/software/bc/manual/html_mono/bc.html

View File

@ -0,0 +1,185 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Transition to Nftables)
[#]: via: (https://opensourceforu.com/2019/10/transition-to-nftables/)
[#]: author: (Vijay Marcel D https://opensourceforu.com/author/vijay-marcel/)
Transition to Nftables
======
[![][1]][2]
_Every major distribution in the open source world is moving towards nftables as the default firewall. In short, the venerable Iptables is now dead. This article is a tutorial on how to build nftables._
Currently, there is an iptables-nft backend that is compatible with nftables but soon, even this will not be available. Also, as noted by Red Hat developers, sometimes it may translate the rules incorrectly. Rather than rely on an iptables-to-nftables converter, we need to know how to build our own nftables rules. In nftables, all the address families can be handled in a single rule-set. Nftables runs in user space, unlike iptables, where every module is in the kernel. It also needs fewer kernel updates and comes with new features such as maps, families and dictionaries.
**Address families**
Address families determine the types of packets that are processed. There are six address families in nftables and they are:
* ip
* ipv6
* inet
* arp
* bridge
* netdev
In nftables, the ipv4 and ipv6 protocols are combined into one single family called inet. So we do not need to specify two rules one for ipv4 and another for ipv6. If no address family is specified, it will default to ip protocol, i.e., ipv4. Our area of interest lies in the inet family, since most home users will use either ipv4 or ipv6 protocols (see Figure 1).
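For example, a single hypothetical rule in the inet family (it assumes the inet filter table and input chain that are created later in this article) matches SSH traffic arriving over either IPv4 or IPv6:
```
# Illustrative only: one inet rule covers both IPv4 and IPv6 clients.
nft add rule inet filter input tcp dport 22 ct state new accept
```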
**Nftables**
A typical nftables rule contains three parts: table, chain and rules.
Tables are containers for chains and rules. They are identified by their address families and their names. Chains contain the rules needed for the _inet/arp/bridge/netdev_ protocols and are of three types — filter, NAT and route. Nftable rules can be loaded from a script or they can be typed into a terminal and then saved as a rule-set. For home users, the default chain will be filter. The inet family contains the following hooks:
* Input
* Output
* Forward
* Pre-routing
* Post-routing
**To script or not to script?**
One of the biggest questions is whether we can use a firewall script or not. The answer is: it's your choice. Here's some advice: if you have hundreds of rules in your firewall, then it is best to use a script, but if you are a typical home user, then you can type the commands in the terminal and then load your rule-set. Each option has its own advantages and disadvantages. In this article, we will type them in the terminal to build our firewall.
Nftables uses a program called nft to add, create, list, delete and load rules. Make sure nftables is installed along with conntrackd and netfilter-persistent, and remove iptables, using the following command:
```
apt-get install nftables conntrackd netfilter-persistent
apt-get purge iptables
```
_nft_ needs to be run as root or use sudo. Use the following commands to list, flush, delete ruleset and load the script respectively.
```
nft list ruleset
nft flush ruleset
nft delete table inet filter
/usr/sbin/nft -f /etc/nftables.conf
```
**Input policy**
The firewall will contain three parts: input, forward and output, just like in iptables. In the terminal, type the following commands for the input firewall. Make sure you have flushed your rule-set before you begin. Our default policy will be to drop everything. We will use the inet family in the firewall. Add the following rules as root or use sudo:
```
nft add table inet filter
nft add chain inet filter input { type filter hook input priority 0 \; counter \; policy drop \; }
```
You will have noticed something called _priority 0_. The priority controls the order in which chains attached to the same hook are traversed: lower values (including negative integers) are processed earlier. Every hook has its own conventional priority, and the filter chain uses priority 0. You can check the nftables wiki page to see the priority of each hook.
To know the network interfaces in your computer, run the following command:
```
ip link show
```
It will show the installed network interfaces: the loopback interface and your Ethernet and/or wireless ports. Your Ethernet port's name looks something like _enpXsY_, where X and Y are numbers, and the same goes for your wireless port. We have to allow the local host and only allow established incoming connections from the Internet.
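The output looks roughly like this (the interface name shown here is hypothetical and will differ on your system):
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN ...
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP ...
```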
Nftables has a feature called verdict statements, which specify what is done with a packet once a rule matches it. The verdict statements are _accept, drop, queue, jump, goto, continue_ and _return_. Since the firewall is a simple one, we will use either _accept_ or _drop_ (Figure 2).
```
nft add rule inet filter input iifname lo accept
nft add rule inet filter input iifname enpXsY ct state new, established, related accept
```
Next, we have to add rules to protect us from stealth scans. Not all stealth scans are malicious, but most of them are, so we have to protect the network from such scans. In each of the rules below, the first set of flags is the mask of TCP flags to be examined, and the second set is the combination that, if matched, causes the packet to be dropped.
```
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|fin\) == \(syn\|fin\) drop
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|rst\) == \(syn\|rst\) drop
nft add rule inet filter input iifname enpXsY tcp flags \& \(fin\|rst\) == \(fin\|rst\) drop
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|fin\) == fin drop
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|psh\) == psh drop
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|urg\) == urg drop
```
Remember, we are typing these commands in the terminal, so we have to add a backslash before some special characters to make sure the terminal interprets them as it should. If you are using a script, then this isn't required.
**A word of caution regarding ICMP**
The Internet Control Message Protocol (ICMP) is a diagnostic tool and so should not be dropped outright. Any attempt to fully block ICMP is unwise as it will also stop giving error messages to us. Enable only the most important control messages such as echo-request, echo-reply, destination-unreachable and time-exceeded, and reject the rest. Echo-request and echo-reply are part of ping. In the input, we only allow echo reply and in the output, we only allow the echo-request.
```
nft add rule inet filter input iifname enpXsY icmp type { echo-reply, destination-unreachable, time-exceeded } limit rate 1/second accept
nft add rule inet filter input iifname enpXsY ip protocol icmp drop
```
Finally, we are logging and dropping all the invalid packets.
```
nft add rule inet filter input iifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Input: \"
nft add rule inet filter input iifname enpXsY ct state invalid drop
```
**Forward and output policy**
In both the forward and output policies, we will drop packets by default and only accept those that are established connections.
```
nft add chain inet filter forward { type filter hook forward priority 0 \; counter \; policy drop \; }
nft add rule inet filter forward ct state established, related accept
nft add rule inet filter forward ct state invalid drop
nft add chain inet filter output { type filter hook output priority 0 \; counter \; policy drop \; }
```
A typical desktop user needs only ports 80 and 443 to be allowed in order to access the Internet. Finally, allow the acceptable ICMP protocols and drop the invalid packets while logging them.
```
nft add rule inet filter output oifname enpXsY tcp dport { 80, 443 } ct state established accept
nft add rule inet filter output oifname enpXsY icmp type { echo-request, destination-unreachable, time-exceeded } limit rate 1/second accept
nft add rule inet filter output oifname enpXsY ip protocol icmp drop
nft add rule inet filter output oifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Output: \"
nft add rule inet filter output oifname enpXsY ct state invalid drop
```
Now we have to save our rule-set, otherwise it will be lost when we reboot. To do so, run the following command:
```
sudo sh -c 'nft list ruleset > /etc/nftables.conf'
```
We now have to load nftables at boot; for that, enable the nftables service in systemd:
```
sudo systemctl enable nftables
```
Next, edit the nftables unit file to remove the ExecStop option, so that the rule-set is not flushed every time the service is stopped or restarted. The file is usually located in /etc/systemd/system/sysinit.target.wants/nftables.service. Now restart nftables:
```
sudo systemctl restart nftables
```
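If you would rather not edit that unit file directly, a drop-in override is a cleaner way to achieve the same thing; this is only a sketch and assumes a systemd version that supports `systemctl edit`:
```
sudo systemctl edit nftables
# In the editor that opens, add the following two lines, then save and exit.
# An empty ExecStop= clears the ExecStop command inherited from the unit:
# [Service]
# ExecStop=
sudo systemctl restart nftables
```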
**Logging in rsyslog**
When you log the dropped packets, they go straight to _syslog_, which makes reading your log file quite difficult. It is better to redirect your firewall logs to a separate file. Create a directory called nftables in _/var/log_ and, in it, create two files called _input.log_ and _output.log_ to store the input and output logs, respectively. Make sure rsyslog is installed in your system. Now go to _/etc/rsyslog.d_ and create a file called _nftables.conf_ with the following contents:
```
:msg,regex,"Invalid-Input: " -/var/log/nftables/input.log
:msg,regex,"Invalid-Output: " -/var/log/nftables/output.log
& stop
```
Now we have to make sure the log is manageable. For that, create another file in _/etc/logrotate.d_ called nftables with the following code:
```
/var/log/nftables/* {
  rotate 5
  daily
  maxsize 50M
  missingok
  notifempty
  delaycompress
  compress
  postrotate
    invoke-rc.d rsyslog rotate > /dev/null
  endscript
}
```
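To check that logrotate accepts the new stanza, you can do a dry run in debug mode (assuming you saved it as _/etc/logrotate.d/nftables_); it reports what it would do without rotating anything:
```
sudo logrotate -d /etc/logrotate.d/nftables
```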
Restart rsyslog so the new logging configuration takes effect. You can now check your rule-set. If you feel typing each command in the terminal is bothersome, you can use a script to load the nftables firewall. I hope this article is useful in protecting your system.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/transition-to-nftables/
作者:[Vijay Marcel D][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/vijay-marcel/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?resize=696%2C481&ssl=1 (REHfirewall)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?fit=900%2C622&ssl=1

View File

@ -0,0 +1,261 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beginners Guide to Handle Various Update Related Errors in Ubuntu)
[#]: via: (https://itsfoss.com/ubuntu-update-error/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Beginners Guide to Handle Various Update Related Errors in Ubuntu
======
_**Who hasn't come across an error while doing an update in Ubuntu? Update errors are common and plenty in Ubuntu and other Linux distributions based on Ubuntu. Here are some common Ubuntu update errors and their fixes.**_
This article is part of the Ubuntu beginner series that explains the know-how of Ubuntu so that a new user can understand things better.
In an earlier article, I discussed [how to update Ubuntu][1]. In this tutorial, I'll discuss some common errors you may encounter while updating [Ubuntu][2]. It usually happens because you tried to add software or repositories on your own and that probably caused an issue.
There is no need to panic if you see errors while updating your system. The errors are common and the fix is easy. You'll learn how to fix those common update errors.
_**Before you begin, I highly advise reading these two articles to have a better understanding of the repository concept in Ubuntu.**_
![Understand Ubuntu repositories][3]
###### **Understand Ubuntu repositories**
Learn what the various repositories in Ubuntu are and how they enable you to install software on your system.
[Read More][4]
![Understanding PPA in Ubuntu][5]
###### **Understanding PPA in Ubuntu**
Further improve your concept of repositories and package handling in Ubuntu with this detailed guide on PPA.
[Read More][6]
### Error 0: Failed to download repository information
Many Ubuntu desktop users update their system through the graphical software updater tool. You are notified that updates are available for your system and you can click one button to start downloading and installing the updates.
Well, that's what usually happens. But sometimes you'll see an error like this:
![][7]
_**Failed to download repository information. Check your internet connection.**_
That's a weird error because your internet connection is most likely working just fine and it still says to check the internet connection.
Did you note that I called it error 0? It's because it's not an error in itself. I mean, most probably, it has nothing to do with the internet connection. But there is no useful information other than this misleading error message.
If you see this error message and your internet connection is working fine, it's time to put on your detective hat and [use your grey cells][8] (as [Hercule Poirot][9] would say).
You'll have to use the command line here. You can [use Ctrl+Alt+T keyboard shortcut to open the terminal in Ubuntu][10]. In the terminal, use this command:
```
sudo apt update
```
Let the command finish. Observe the last three or four lines of its output. That will give you the real reason why the update fails. Here's an example:
![][11]
The rest of the tutorial shows how to handle the errors that you just saw in the last few lines of the update command output.
### Error 1: Problem With MergeList
When you run update in terminal, you may see an error “[problem with MergeList][12]” like below:
```
E:Encountered a section with no Package: header,
E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages,
E:The package lists or status file could not be parsed or opened.
```
For some reason, the files in the /var/lib/apt/lists directory got corrupted. You can delete all the files in this directory and run the update again to regenerate everything afresh. Use the following commands one by one:
```
sudo rm -r /var/lib/apt/lists/*
sudo apt-get clean && sudo apt-get update
```
Your problem should be fixed.
### Error 2: Hash Sum mismatch
If you find an error that talks about [Hash Sum mismatch][13], the fix is the same as the one in the previous error.
```
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch,
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch,
E:Some index files failed to download. They have been ignored, or old ones used instead
```
The error possibly occurs because of a mismatched metadata cache between the server and your system. You can use the following commands to fix it:
```
sudo rm -rf /var/lib/apt/lists/*
sudo apt update
```
### Error 3: Failed to fetch with error 404 not found
If you try adding a PPA repository that is not available for your current [Ubuntu version][14], you'll see that it throws a 404 not found error.
```
W: Failed to fetch http://ppa.launchpad.net/venerix/pkg/ubuntu/dists/raring/main/binary-i386/Packages 404 Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.
```
You added a PPA hoping to install an application but it is not available for your Ubuntu version and you are now stuck with the update error. This is why you should check beforehand if a PPA is available for your Ubuntu version or not. I have discussed how to check the PPA availability in the detailed [PPA guide][6].
Anyway, the fix here is that you remove the troublesome PPA from your list of repositories. Note the PPA name from the error message. Go to the _Software & Updates_ tool:
![Open Software & Updates][15]
In here, move to _Other Software_ tab and look for that PPA. Uncheck the box to [remove the PPA][16] from your system.
![Remove PPA Using Software & Updates In Ubuntu][17]
Your software list will be updated when you do that. Now if you run the update again, you shouldnt see the error.
### Error 4: Failed to download package files error
A similar error is the **[failed to download package files error][18]**, which looks like this:
![][19]
In this case, a newer version of the software is available but it has not yet propagated to all the mirrors. The fix is easy: change the software source to the Main server instead of a mirror. Please read this article for more details on the [failed to download package error][18].
Go to _Software &amp; Updates_ and change the download server to the Main server there:
![][20]
### Error 5: GPG error: The following signatures couldnt be verified
Adding a PPA may also result in the following [GPG error: The following signatures couldnt be verified][21] when you try to run an update in terminal:
```
W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldnt be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8
```
All you need to do is fetch this public key and add it to your system. Get the key number from the message. In the above message, the key is 68980A0EA10B4DE8.
The key can be fetched and added in the following manner:
```
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 68980A0EA10B4DE8
```
Once the key has been added, run the update again and it should be fine.
### Error 6: BADSIG error
Another signature related Ubuntu update error is [BADSIG error][22] which looks something like this:
```
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
W: GPG error: http://ppa.launchpad.net precise Release:
The following signatures were invalid: BADSIG 4C1CBC1B69B0E2F4 Launchpad PPA for Jonathan French W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
```
All the repositories are signed with GPG keys and, for some reason, your system considers them invalid. Youll need to update the signature keys. The easiest way to do that is to regenerate the apt package lists (along with their signature keys), which should pick up the correct keys.
Use the following commands one by one in the terminal:
```
cd /var/lib/apt
sudo mv lists oldlist
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
```
### Error 7: Partial upgrade error
Running updates in terminal may throw this partial upgrade error:
![][23]
```
Not all updates can be installed
Run a partial upgrade, to install as many updates as possible
```
Run the following command in terminal to fix this error:
```
sudo apt-get install -f
```
### Error 8: Could not get lock /var/cache/apt/archives/lock
This error happens when another program is using APT. Suppose you are installing something in Ubuntu Software Center and, at the same time, trying to run apt in the terminal.
```
E: Could not get lock /var/cache/apt/archives/lock open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/cache/apt/archives/
```
Check if some other program might be using apt. It could be a command running in the terminal, Software Center, Software Updater, Software &amp; Updates or any other software that deals with installing and removing applications.
If you can close other such programs, close them. If there is a process in progress, wait for it to finish.
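If you want to see which process is actually holding the lock before killing anything, you can try something like this (a small sketch; the lock path is taken from the error message above):

```
sudo lsof /var/cache/apt/archives/lock
```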
If you cannot find any such programs, use the following [command to kill all such running processes][24]:
```
sudo killall apt apt-get
```
This is a tricky problem and if the problem still persists, please read this detailed tutorial on [fixing the unable to lock the administration directory error in Ubuntu][25].
_**Any other update error you encountered?**_
That concludes the list of frequent Ubuntu update errors you may encounter. I hope this helps you get rid of these errors.
Have you encountered any other update error in Ubuntu recently that hasnt been covered here? Do mention it in comments and Ill try to do a quick tutorial on it.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-update-error/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/update-ubuntu/
[2]: https://ubuntu.com/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu-repositories.png?ssl=1
[4]: https://itsfoss.com/ubuntu-repositories/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/what-is-ppa.png?ssl=1
[6]: https://itsfoss.com/ppa-guide/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/04/Failed-to-download-repository-information-Ubuntu-13.04.png?ssl=1
[8]: https://idioms.thefreedictionary.com/little+grey+cells
[9]: https://en.wikipedia.org/wiki/Hercule_Poirot
[10]: https://itsfoss.com/ubuntu-shortcuts/
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/11/Ubuntu-Update-error.jpeg?ssl=1
[12]: https://itsfoss.com/how-to-fix-problem-with-mergelist/
[13]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[14]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/05/software-updates-ubuntu-gnome.jpeg?ssl=1
[16]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/remove_ppa_using_software_updates_in_ubuntu.jpg?ssl=1
[18]: https://itsfoss.com/fix-failed-download-package-files-error-ubuntu/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Ubuntu_Update_error.jpeg?ssl=1
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Change_server_Ubuntu.jpeg?ssl=1
[21]: https://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
[22]: https://itsfoss.com/solve-badsig-error-quick-tip/
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/09/Partial_Upgrade_error_Elementary_OS_Luna.png?ssl=1
[24]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/
[25]: https://itsfoss.com/could-not-get-lock-error/

View File

@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How collaboration fueled a development breakthrough at Greenpeace)
[#]: via: (https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
How collaboration fueled a development breakthrough at Greenpeace
======
We're building an innovative platform to connect environmental
advocates—but system complexity threatened to slow us down. Opening up
was the answer.
![The Open Organization at Greenpeace][1]
Activists really don't like feeling stuck.
We thrive on forward momentum and the energy it creates. When that movement grinds to a halt, even for a moment, our ability to catalyze passion in others stalls too.
And my colleagues and I at Greenpeace International were feeling stuck.
We'd managed to launch a prototype of Planet 4, [Greenpeace's new, open engagement platform][2] for activists and communities. It's live in more than 38 countries (with many more sites). More than 1.75 million people are using it. We've topped more than 3.1 million pageviews.
To get here, we [spent more than 650 hours in meetings, drank 1,478 litres of coffee, and fixed more than 300 bugs][3]. But it fell short of our vision; it _still_ wasn't [the minimum lovable product][4] we wanted and we didn't know how to move it forward.
We were stuck.
Planet 4's complexity was daunting. We didn't always have the right people to address the numerous challenges the project raised. We didn't know if we'd ever realize our vision. Yet a commitment to openness had gotten us here, and I knew a commitment to openness would get us through this, too.
As [the story of Planet 4][5] continues, I'll explain how it did.
### An opportunity
By 2016, my work helping Greenpeace International become a more open organization—[which I described in the first part of this series][6]—was beginning to bear fruit. We were holding regular [community calls][7]. We were releasing project updates frequently and publicly. We were networking with global stakeholders across the organization to define what Planet 4 needed to be. We were [architecting the project with participation in mind][8].
Becoming open is an organic process. There's no standard "game plan" for implementing process and practices in an organization. Success depends on the people, the tools, the project, the very fabric of the culture you're working inside.
Inside Greenpeace, we were beginning to see that success.
A commitment to openness had gotten us here, and I knew a commitment to openness would get us through this, too.
For some, this open way of working was inspiring and engaging. For others it was terrifying. Some thought asking for everyone's input was ridiculous. Some thought only "experts" should be part of the conversations, a viewpoint that doesn't mesh well with [the principle of inclusivity][9]. I appreciate expertise—don't get me wrong—but the problem with only asking for "expert" opinions is that you exclude people who might have more interest, passion, and knowledge than someone with a formal title.
Planet 4 was a vision—not just of a new and open engagement platform, but of an organization that could make _use_ of this platform. And it raised problems on both those fronts:
* **Data and systems integration:** As a network of 28 independent offices all over the world, Greenpeace has a complex technical landscape. While Greenpeace International provides system _recommendations_ and _support_, individual National and Regional Offices are free to make their own systems choices, even if they aren't the supported ones. This is a good thing; different tools better address different needs for different offices. But it's challenging, too, because the absence of standardization means a lack of expertise in all those systems.
  * **Organizational culture and work styles:** Planet 4 devoured many of Greenpeace's internal strategies and visions, then spit them out in a way that promised to move us toward the type of organization we wanted to be. It was challenging the organizational status quo.
Our team was too small, our work too big, and the landscape of working in a global non-profit too complex. The team was struggling, and we needed help.
Then, in 2018, I saw an opportunity.
As an [Open Organization Ambassador][10], I'd been to Red Hat Summit to speak on a panel about open organizational principles. There I noticed a session exploring what [Red Hat had done to help UNICEF][11], another global non-profit, with its digital transformation efforts. Surely, I thought, Red Hat and Greenpeace could work together, too.
So I did something that shouldn't seem so revolutionary or audacious: I found the Red Hatter responsible for the company's collaboration with UNICEF, Alexandra Machado, and I _said hello_. I wasn't just introducing myself; I was approaching Alexandra on behalf of a global community of open-minded advocates.
And it worked.
### Accelerating
Together, Alexandra and I spent more than a year coordinating a collaboration that could help Greenpeace move forward. Earlier this year, we started to succeed.
Planet 4 was a vision—not just of a new and open engagement platform, but of an organization that could make use of this platform. And it raised problems on both those fronts.
In late May, members of the Planet 4 project and a team from Red Hat's App Dev Center of Excellence met in Amsterdam. The goal: Accelerate us.
We'd spend an entire week together in a design sprint aimed at helping us chart a speedy path toward making our vision for the Planet 4 engagement platform a reality, beginning with navigating its technical complexity. And in the process, we'd lean heavily on the open way of working we'd learned to embrace.
At the sprint, our teams got to know each other. We dumped everything on the table. In a radically open and honest way, the Greenpeace team helped the Red Hat team from Waterford understand the technical and cultural hurdles we faced. We explained our organization and our tech stack, our vision and our dreams. Red Hatters noticed our passion and worked alongside us to explore possible technologies that could make our vision a reality.
Through a series of exercises—including a particularly helpful session of [event storming][12]—we confirmed that our dream was not only the right one to have but also fully realizable. We talked through the dynamics of the systems we are addressing, and, in the end, the Red Hat team helped us envision a prototype for integrated systems that the Greenpeace team could take forward. We've already begun user testing.
_Listen to Patrick Carney of Red Hat Open Innovation Labs explain event storming._
On top of that, our new allies wrote a technical report that laid out the complexities we could _see_ but not _address_—and in a way that spurred internal conversations forward. We found ourselves, a few weeks after the event, moving forward at speed.
Finally, we were unstuck.
In the final chapter of Planet 4's story, I'll explain what the experience taught us about the power of openness.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-2-blog-thumbnail-520x292.png?itok=YNEKRAxS (The Open Organization at Greenpeace)
[2]: http://greenpeace.org/international
[3]: https://medium.com/planet4/p4-in-2018-3bec1cc12be8
[4]: https://medium.com/planet4/past-the-prototype-d3e0a4d3a171
[5]: https://opensource.com/tags/open-organization-greenpeace
[6]: https://opensource.com/open-organization/19/10/open-platform-greenpeace-1
[7]: https://opensource.com/open-organization/16/1/community-calls-will-increase-participation-your-open-organization
[8]: https://opensource.com/open-organization/16/8/best-results-design-participation
[9]: https://opensource.com/open-organization/resources/open-org-definition
[10]: https://opensource.com/open-organization/resources/meet-ambassadors
[11]: https://www.redhat.com/en/proof-of-concept-series
[12]: https://openpracticelibrary.com/practice/event-storming/

View File

@ -0,0 +1,227 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
How to Go About Linux Boot Time Optimisation
======
[![][1]][2]
_Booting an embedded device or a piece of telecommunication equipment quickly is crucial for time-critical applications and also plays a very major role in improving the user experience. This article gives some important tips on how to enhance the boot-up time of any device._
Fast booting or fast rebooting plays a crucial role in various situations. It is critical for an embedded system to boot up fast in order to maintain the high availability and better performance of all the services. Imagine a telecommunications device running a Linux operating system that does not have fast booting enabled. All the systems, services and the users dependent on that particular embedded device might be affected. It is really important that devices maintain high availability in their services, for which fast booting and rebooting play a crucial role.
A small failure or shutdown of a telecom device, even for a few seconds, can play havoc with countless users working on the Internet. Thus, it is really important for time-dependent and telecommunication devices to incorporate fast booting to help them get back to work quicker. Let us understand the Linux boot-up procedure from Figure 1.
![Figure 1: Boot-up procedure][3]
![Figure 2: Boot chart][4]
**Monitoring tools and the boot-up procedure**
A user should take note of a number of factors before making changes to a machine. These include the current booting speed of the machine and also the services, processes or applications that are taking up resources and increasing the boot-up time.
**Boot chart:** To monitor the boot-up speed and the various services that start while booting up, the user can install the boot chart using the following command:
```
sudo apt-get install pybootchartgui
```
Each time you boot up, the boot chart saves a _.png_ (portable network graphics) file in its log directory. Viewing these _.png_ files gives the user an understanding of the systems boot-up process and services. Use the following command to get there:
```
cd /var/log/bootchart
```
The user might need an application to view the _.png_ files. Feh is an X11 image viewer that targets console users. It doesnt have a fancy GUI, unlike most other image viewers, but it simply displays pictures. Feh can be used to view the _.png_ files. You can install it using the following command:
```
sudo apt-get install feh
```
You can view the _png_ files using _feh xxxx.png_.
Figure 2 shows the boot chart when a boot chart _png_ file is viewed.
However, a boot chart is not necessary for Ubuntu versions later than 15.10. To get very brief information regarding boot up speed, use the following command:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]
Figure 3 shows the output of the command _systemd-analyze_.
The command _systemd-analyze blame_ is used to print a list of all running units based on the time they took to initialise. This information is very helpful and can be used to optimise boot-up times. _systemd-analyze blame_ doesnt display results for services with _Type=simple_, because systemd considers such services to be started immediately; hence, no measurement of the initialisation delays can be done.
![Figure 4: Output of systemd-analyze blame][6]
Figure 4 shows the output of _systemd-analyze blame_.
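To try it yourself, run:

```
systemd-analyze blame
```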
The following command prints a tree of the time-critical chain of units:
```
systemd-analyze critical-chain
```
Figure 5 shows the output of the command _systemd-analyze critical-chain_.
![Figure 5: Output of systemd-analyze critical-chain][7]
**Steps to reduce the boot-up time**
Shown below are the various steps that can be taken to reduce boot-up time.
**BUM (Boot-Up-Manager):** BUM is a run level configuration editor that allows the configuration of _init_ services when the system boots up or reboots. It displays a list of every service that can be started at boot. The user can toggle individual services on and off. BUM has a very clean GUI and is very easy to use.
BUM can be installed in Ubuntu 14.04 using the following command:
```
sudo apt-get install bum
```
To install it in versions later than 15.10, download the packages from <http://apt.ubuntu.com/p/bum>.
Start with basic things and disable services related to the scanner and printer. You can also disable Bluetooth and all other unwanted devices and services if you are not using any of them. I strongly recommend that you study the basics about the services before disabling them, as it might affect the machine or operating system. Figure 6 shows the GUI of BUM.
![Figure 6: BUM][8]
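On Ubuntu versions that use systemd, you can get a similar effect from the terminal. The service names below are only examples; check what is actually enabled on your system and understand a service before disabling it:

```
systemctl list-unit-files --state=enabled
sudo systemctl disable cups.service
sudo systemctl disable bluetooth.service
```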
**Editing the rc file:** To edit the rc file, you need to go to the rc directory. This can be done using the following command:
```
cd /etc/init.d
```
However, root privileges are needed to access _init.d_, which basically contains start/stop scripts that are used to control (start, stop, reload, restart) the daemon while the system is running or during boot.
The _rc_ file in _init.d_ is called a run control script. During booting, init executes the _rc_ script and plays its role. To improve the booting speed, we make changes to the _rc_ file. Open the _rc_ file (once you are in the _init.d_ directory) using any file editor.
For example, by entering _vim rc_, you can change the value of _CONCURRENCY=none_ to _CONCURRENCY=shell_. The latter allows certain startup scripts to be executed simultaneously, rather than serially.
In the latest versions of the kernel, the value should be changed to _CONCURRENCY=makefile_.
Figures 7 and 8 show the comparison of boot-up times before and after editing the rc file. The improvement in the boot-up speed can be noticed. The time to boot before editing the rc file was 50.98 seconds, whereas the time to boot after making the changes to the rc file is 23.85 seconds.
However, the above-mentioned changes dont work on operating systems later than the Ubuntu version 15.10, since the operating systems with the latest kernel use the systemd file and not the _init.d_ file any more.
![Figure 7: Boot speed before making changes to the rc file][9]
![Figure 8: Boot speed after making changes to the rc file][10]
**E4rat:** E4rat stands for e4 reduced access time (ext4 file system only). It is a project developed by Andreas Rid and Gundolf Kiefer. E4rat is an application that helps in achieving a fast boot with the help of defragmentation. It also accelerates application startups. E4rat eliminates both seek times and rotational delays using physical file reallocation. This leads to a high disk transfer rate.
E4rat is available as a .deb package and you can download it from its official website _<http://e4rat.sourceforge.net/>_.
Ubuntus default ureadahead package conflicts with e4rat. So a few packages have to be installed using the following command:
```
sudo dpkg --purge ureadahead ubuntu-minimal
```
Now install the dependencies for e4rat using the following command:
```
sudo apt-get install libblkid1 e2fslibs
```
Open the downloaded _.deb_ file and install it. Boot data now needs to be gathered properly for e4rat to work.
Follow the steps given below to get e4rat running properly and to increase the boot-up speed.
* Access the Grub menu while booting. This can be done by holding the shift button when the system is booting.
* Choose the option (kernel version) that is normally used to boot and press e.
  * Look for the line starting with _linux /boot/vmlinuz_ and add one of the following at the end of that line (put a space after the last existing word first; the second variant keeps the usual quiet splash options):
```
init=/sbin/e4rat-collect
quiet splash vt.handoff=7 init=/sbin/e4rat-collect
```
* Now press _Ctrl+x_ to continue booting. This lets e4rat collect data after booting. Work on the machine, open and close applications for the next two minutes.
* Access the log file by going to the e4rat folder and using the following command:
```
cd /var/log/e4rat
```
* If you do not find any log file, repeat the above mentioned process. Once the log file is there, access the Grub menu again and press e as your option.
* Enter single at the end of the same line that you have edited before. This will help you access the command line. If a different menu appears asking for anything, choose Resume normal boot. If you dont get to the command prompt for some reason, hit Ctrl+Alt+F1.
* Enter your details once you see the login prompt.
* Now enter the following command:
```
sudo e4rat-realloc /var/lib/e4rat/startup.log
```
This process takes a while, depending on the machines disk speed.
* Now restart your machine using the following command:
```
sudo shutdown -r now
```
* Now, we need to configure Grub to run e4rat at every boot.
  * Access the grub file using any editor. For example, _gksu gedit /etc/default/grub_.
  * Look for the line starting with `GRUB_CMDLINE_LINUX_DEFAULT=`, and add the following in between the quotes and before whatever options are already there:
```
init=/sbin/e4rat-preload
```
* It should look like this:
```
GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"
```
* Save and close the Grub menu and update Grub using _sudo update-grub_.
* Reboot the system and you will find noticeable changes in boot speed.
Figures 9 and 10 show the differences between the boot-up time before and after installing e4rat. The improvement in the boot-up speed can be noticed. The time taken to boot before using e4rat was 22.32 seconds, whereas the time taken to boot after using e4rat is 9.065 seconds.
![Figure 9: Boot speed before using e4rat][11]
![Figure 10: Boot speed after using e4rat][12]
**A few simple tweaks**
A good boot-up speed can also be achieved using very small tweaks, two of which are listed below.
**SSD:** Using solid-state devices rather than normal hard disks or other storage devices will surely improve your booting speed. SSDs also help in achieving great speeds in transferring files and running applications.
**Disabling GUI:** The graphical user interface, desktop graphics and window animations take up a lot of resources. Disabling the GUI is another good way to achieve great boot-up speed.
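On a systemd-based Ubuntu installation, one way to try this (and to undo it later) is to switch the default boot target; a small sketch:

```
# boot to a text-only target from the next reboot onwards
sudo systemctl set-default multi-user.target
# restore the graphical session later
sudo systemctl set-default graphical.target
```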
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1

View File

@ -0,0 +1,498 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Logical operators and shell expansions)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-2)
[#]: author: (David Both https://opensource.com/users/dboth)
How to program with Bash: Logical operators and shell expansions
======
Learn about logical operators and shell expansions, in the second
article in this three-part series on programming with Bash.
![Women in computing and open source v5][1]
Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts. This three-part series (which is based on my [three-volume Linux self-study course][2]) explores using Bash as a programming language on the command-line interface (CLI).
The [first article][3] explored some simple command-line programming with Bash, including using variables and control operators. This second article looks into the types of file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and different types of shell expansions in Bash. The third and final article in the series will explore the **for**, **while**, and **until** loops that enable repetitive operations.
Logical operators are the basis for making decisions in a program and executing different sets of instructions based on those decisions. This is sometimes called flow control.
### Logical operators
Bash has a large set of logical operators that can be used in conditional expressions. The most basic form of the **if** control structure tests for a condition and then executes a list of program statements if the condition is true. There are three types of operators: file, numeric, and non-numeric operators. Each operator returns true (0) if the condition is met and false (1) if the condition is not met.
The functional syntax of these comparison operators is one or two arguments with an operator that are placed within square braces, followed by a list of program statements that are executed if the condition is true, and an optional list of program statements if the condition is false:
```
if [ arg1 operator arg2 ] ; then list
or
if [ arg1 operator arg2 ] ; then list ; else list ; fi
```
The spaces in the comparison are required as shown. The single square braces, **[** and **]**, are the traditional Bash symbols that are equivalent to the **test** command:
```
if test arg1 operator arg2 ; then list
```
There is also a more recent syntax that offers a few advantages and that some sysadmins prefer. This format is a bit less compatible with different versions of Bash and other shells, such as ksh (the Korn shell). It looks like:
```
if [[ arg1 operator arg2 ]] ; then list
```
#### File operators
File operators are a powerful set of logical operators within Bash. Figure 1 lists more than 20 different operators that Bash can perform on files. I use them quite frequently in my scripts.
Operator | Description
---|---
-a filename | True if the file exists; it can be empty or have some content but, so long as it exists, this will be true
-b filename | True if the file exists and is a block special file such as a hard drive like **/dev/sda** or **/dev/sda1**
-c filename | True if the file exists and is a character special file such as a TTY device like **/dev/TTY1**
-d filename | True if the file exists and is a directory
-e filename | True if the file exists; this is the same as **-a** above
-f filename | True if the file exists and is a regular file, as opposed to a directory, a device special file, or a link, among others
-g filename | True if the file exists and is **set-group-id**, **SETGID**
-h filename | True if the file exists and is a symbolic link
-k filename | True if the file exists and its "sticky" bit is set
-p filename | True if the file exists and is a named pipe (FIFO)
-r filename | True if the file exists and is readable, i.e., has its read bit set
-s filename | True if the file exists and has a size greater than zero; a file that exists but that has a size of zero will return false
-t fd | True if the file descriptor **fd** is open and refers to a terminal
-u filename | True if the file exists and its **set-user-id** bit is set
-w filename | True if the file exists and is writable
-x filename | True if the file exists and is executable
-G filename | True if the file exists and is owned by the effective group ID
-L filename | True if the file exists and is a symbolic link
-N filename | True if the file exists and has been modified since it was last read
-O filename | True if the file exists and is owned by the effective user ID
-S filename | True if the file exists and is a socket
file1 -ef file2 | True if file1 and file2 refer to the same device and iNode numbers
file1 -nt file2 | True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not
file1 -ot file2 | True if file1 is older than file2, or if file2 exists and file1 does not
_**Fig. 1: The Bash file operators**_
As an example, start by testing for the existence of a file:
```
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -e $File ] ; then echo "The file $File exists." ; else echo "The file $File does not exist." ; fi
The file TestFile1 does not exist.
[student@studentvm1 testdir]$
```
Next, create a file for testing named **TestFile1**. For now, it does not need to contain any data:
```
[student@studentvm1 testdir]$ touch TestFile1
```
Using the **$File** variable rather than a literal text string for the file name makes it easy to change the file name in multiple locations in this short CLI program:
```
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -e $File ] ; then echo "The file $File exists." ; else echo "The file $File does not exist." ; fi
The file TestFile1 exists.
[student@studentvm1 testdir]$
```
Now, run a test to determine whether a file exists and has a non-zero length, which means it contains data. You want to test for three conditions: 1. the file does not exist; 2. the file exists and is empty; and 3. the file exists and contains data. Therefore, you need a more complex set of tests—use the **elif** stanza in the **if-elif-else** construct to test for all of the conditions:
```
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -s $File ] ; then echo "$File exists and contains data." ; fi
[student@studentvm1 testdir]$
```
In this case, the file exists but does not contain any data. Add some data and try again:
```
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is file $File" &gt; $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; fi
TestFile1 exists and contains data.
[student@studentvm1 testdir]$
```
That works, but it is only truly accurate for one specific condition out of the three possible ones. Add an **else** stanza so you can be somewhat more accurate, and delete the file so you can fully test this new code:
```
[student@studentvm1 testdir]$ File="TestFile1" ; rm $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
TestFile1 does not exist or is empty.
```
Now create an empty file to test:
```
[student@studentvm1 testdir]$ File="TestFile1" ; touch $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
TestFile1 does not exist or is empty.
```
Add some content to the file and test again:
```
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is file $File" &gt; $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
TestFile1 exists and contains data.
```
Now, add the **elif** stanza to discriminate between a file that does not exist and one that is empty:
```
[student@studentvm1 testdir]$ File="TestFile1" ; touch $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; elif [ -e $File ] ; then echo "$File exists and is empty." ; else echo "$File does not exist." ; fi
TestFile1 exists and is empty.
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is $File" &gt; $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; elif [ -e $File ] ; then echo "$File exists and is empty." ; else echo "$File does not exist." ; fi
TestFile1 exists and contains data.
[student@studentvm1 testdir]$
```
Now you have a Bash CLI program that can test for these three different conditions… but the possibilities are endless.
It is easier to see the logic structure of the more complex compound commands if you arrange the program statements more like you would in a script that you can save in a file. Figure 2 shows how this would look. The indents of the program statements in each stanza of the **if-elif-else** structure help to clarify the logic.
```
File="TestFile1"
echo "This is $File" &gt; $File
if [ -s $File ]
   then
   echo "$File exists and contains data."
elif [ -e $File ]
   then
   echo "$File exists and is empty."
else
   echo "$File does not exist."
fi
```
_**Fig. 2: The command line program rewritten as it would appear in a script**_
Logic this complex is too lengthy for most CLI programs. Although any Linux or Bash built-in commands may be used in CLI programs, as the CLI programs get longer and more complex, it makes more sense to create a script that is stored in a file and can be executed at any time, now or in the future.
#### String comparison operators
String comparison operators enable the comparison of alphanumeric strings of characters. There are only a few of these operators, which are listed in Figure 3.
Operator | Description
---|---
-z string | True if the length of string is zero
-n string | True if the length of string is non-zero
string1 == string2
or
string1 = string2 | True if the strings are equal; a single **=** should be used with the test command for POSIX conformance. When used with the **[[** command, this performs pattern matching as described above (compound commands).
string1 != string2 | True if the strings are not equal
string1 &lt; string2 | True if string1 sorts before string2 lexicographically (refers to locale-specific sorting sequences for all alphanumeric and special characters)
string1 &gt; string2 | True if string1 sorts after string2 lexicographically
_**Fig. 3: Bash string logical operators**_
First, look at string length. The quotes around **$MyVar** in the comparison must be there for the comparison to work. (You should still be working in **~/testdir**.)
```
[student@studentvm1 testdir]$ MyVar="" ; if [ -z "" ] ; then echo "MyVar is zero length." ; else echo "MyVar contains data" ; fi
MyVar is zero length.
[student@studentvm1 testdir]$ MyVar="Random text" ; if [ -z "" ] ; then echo "MyVar is zero length." ; else echo "MyVar contains data" ; fi
MyVar is zero length.
```
You could also do it this way:
```
[student@studentvm1 testdir]$ MyVar="Random text" ; if [ -n "$MyVar" ] ; then echo "MyVar contains data." ; else echo "MyVar is zero length" ; fi
MyVar contains data.
[student@studentvm1 testdir]$ MyVar="" ; if [ -n "$MyVar" ] ; then echo "MyVar contains data." ; else echo "MyVar is zero length" ; fi
MyVar is zero length
```
Sometimes you may need to know a string's exact length. This is not a comparison, but it is related. Unfortunately, there is no simple way to determine the length of a string. There are a couple of ways to do it, but I think using the **expr** (evaluate expression) command is easiest. Read the man page for **expr** for more about what it can do. Note that quotes are required around the string or variable you're testing.
```
[student@studentvm1 testdir]$ MyVar="" ; expr length "$MyVar"
0
[student@studentvm1 testdir]$ MyVar="How long is this?" ; expr length "$MyVar"
17
[student@studentvm1 testdir]$ expr length "We can also find the length of a literal string as well as a variable."
70
```
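Another of those ways, not shown above, is Bash's built-in parameter expansion for string length, which avoids calling an external command; a quick sketch:

```
[student@studentvm1 testdir]$ MyVar="How long is this?" ; echo ${#MyVar}
17
```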
Regarding comparison operators, I use a lot of testing in my scripts to determine whether two strings are equal (i.e., identical). I use the non-POSIX version of this comparison operator:
```
[student@studentvm1 testdir]$ Var1="Hello World" ; Var2="Hello World" ; if [ "$Var1" == "$Var2" ] ; then echo "Var1 matches Var2" ; else echo "Var1 and Var2 do not match." ; fi
Var1 matches Var2
[student@studentvm1 testdir]$ Var1="Hello World" ; Var2="Hello world" ; if [ "$Var1" == "$Var2" ] ; then echo "Var1 matches Var2" ; else echo "Var1 and Var2 do not match." ; fi
Var1 and Var2 do not match.
```
Experiment some more on your own to try out these operators.
#### Numeric comparison operators
Numeric operators make comparisons between two numeric arguments. Like the other operator classes, most are easy to understand.
Operator | Description
---|---
arg1 -eq arg2 | True if arg1 equals arg2
arg1 -ne arg2 | True if arg1 is not equal to arg2
arg1 -lt arg2 | True if arg1 is less than arg2
arg1 -le arg2 | True if arg1 is less than or equal to arg2
arg1 -gt arg2 | True if arg1 is greater than arg2
arg1 -ge arg2 | True if arg1 is greater than or equal to arg2
_**Fig. 4: Bash numeric comparison logical operators**_
Here are some simple examples. The first instance sets the variable **$X** to 1, then tests to see if **$X** is equal to 1. In the second instance, **X** is set to 0, so the comparison is not true.
```
[student@studentvm1 testdir]$ X=1 ; if [ $X -eq 1 ] ; then echo "X equals 1" ; else echo "X does not equal 1" ; fi
X equals 1
[student@studentvm1 testdir]$ X=0 ; if [ $X -eq 1 ] ; then echo "X equals 1" ; else echo "X does not equal 1" ; fi
X does not equal 1
[student@studentvm1 testdir]$
```
Try some more experiments on your own.
#### Miscellaneous operators
These miscellaneous operators show whether a shell option is set or a shell variable has a value, but they do not discover the value of the variable, just whether it has one.
Operator | Description
---|---
-o optname | True if the shell option optname is enabled (see the list of options under the description of the **-o** option to the Bash set builtin in the Bash man page)
-v varname | True if the shell variable varname is set (has been assigned a value)
-R varname | True if the shell variable varname is set and is a name reference
_**Fig. 5: Miscellaneous Bash logical operators**_
Experiment on your own to try out these operators.
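As a small sketch of how you might experiment (the variable name and shell option here are arbitrary choices, not taken from the Bash man page):

```
[student@studentvm1 testdir]$ MyVar="some data" ; if [ -v MyVar ] ; then echo "MyVar is set" ; else echo "MyVar is not set" ; fi
MyVar is set
[student@studentvm1 testdir]$ set -o noclobber ; if [ -o noclobber ] ; then echo "noclobber is enabled" ; else echo "noclobber is disabled" ; fi
noclobber is enabled
```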
### Expansions
Bash supports a number of types of expansions and substitutions that can be quite useful. According to the Bash man page, Bash has seven forms of expansions. This article looks at five of them: tilde expansion, arithmetic expansion, pathname expansion, brace expansion, and command substitution.
#### Brace expansion
Brace expansion is a method for generating arbitrary strings. (This tool is used below to create a large number of files for experiments with special pattern characters.) Brace expansion can be used to generate lists of arbitrary strings and insert them into a specific location within an enclosing static string or at either end of a static string. This may be hard to visualize, so it's best to just do it.
First, here's what a brace expansion does:
```
[student@studentvm1 testdir]$ echo {string1,string2,string3}
string1 string2 string3
```
Well, that is not very helpful, is it? But look what happens when you use it just a bit differently:
```
[student@studentvm1 testdir]$ echo "Hello "{David,Jen,Rikki,Jason}.
Hello David. Hello Jen. Hello Rikki. Hello Jason.
```
That looks like something useful—it could save a good deal of typing. Now try this:
```
[student@studentvm1 testdir]$ echo b{ed,olt,ar}s
beds bolts bars
```
I could go on, but you get the idea.
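Brace expansion also accepts sequence expressions, which is handy for generating numbered names; here is a small sketch that is not part of the examples above:

```
[student@studentvm1 testdir]$ echo file{01..05}.txt
file01.txt file02.txt file03.txt file04.txt file05.txt
```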
#### Tilde expansion
Arguably, the most common expansion is the tilde (**~**) expansion. When you use this in a command like **cd ~/Documents**, the Bash shell expands it as a shortcut to the user's full home directory.
Use these Bash programs to observe the effects of the tilde expansion:
```
[student@studentvm1 testdir]$ echo ~
/home/student
[student@studentvm1 testdir]$ echo ~/Documents
/home/student/Documents
[student@studentvm1 testdir]$ Var1=~/Documents ; echo $Var1 ; cd $Var1
/home/student/Documents
[student@studentvm1 Documents]$
```
#### Pathname expansion
Pathname expansion is a fancy term for expanding file-globbing patterns, using the characters **?** and *****, into the full names of directories that match the pattern. File globbing refers to special pattern characters that enable significant flexibility in matching file names, directories, and other strings when performing various actions. These special pattern characters allow matching single, multiple, or specific characters in a string.
* **?** — Matches only one of any character in the specified location within the string
* ***** — Matches zero or more of any character in the specified location within the string
This expansion is applied to matching directory names. To see how this works, ensure that **testdir** is the present working directory (PWD) and start with a plain listing (the contents of my home directory will be different from yours):
```
[student@studentvm1 testdir]$ ls
chapter6  cpuHog.dos    dmesg1.txt  Documents  Music       softlink1  testdir6    Videos
chapter7  cpuHog.Linux  dmesg2.txt  Downloads  Pictures    Templates  testdir
testdir  cpuHog.mac    dmesg3.txt  file005    Public      testdir    tmp
cpuHog     Desktop       dmesg.txt   link3      random.txt  testdir1   umask.test
[student@studentvm1 testdir]$
```
Now list the directories that start with **Do**, **testdir/Documents**, and **testdir/Downloads**:
```
[student@studentvm1 testdir]$ ls Do*
Documents:
Directory01  file07  file15        test02  test10  test20      testfile13  TextFiles
Directory02  file08  file16        test03  test11  testfile01  testfile14
file01       file09  file17        test04  test12  testfile04  testfile15
file02       file10  file18        test05  test13  testfile05  testfile16
file03       file11  file19        test06  test14  testfile09  testfile17
file04       file12  file20        test07  test15  testfile10  testfile18
file05       file13  Student1.txt  test08  test16  testfile11  testfile19
file06       file14  test01        test09  test18  testfile12  testfile20
Downloads:
[student@studentvm1 testdir]$
```
Well, that did not do what you wanted. It listed the contents of the directories that begin with **Do**. To list only the directories and not their contents, use the **-d** option.
```
[student@studentvm1 testdir]$ ls -d Do*
Documents  Downloads
[student@studentvm1 testdir]$
```
In both cases, the Bash shell expands the **Do*** pattern into the names of the two directories that match the pattern. But what if there are also files that match the pattern?
```
[student@studentvm1 testdir]$ touch Downtown ; ls -d Do*
Documents  Downloads  Downtown
[student@studentvm1 testdir]$
```
This shows the file, too. So any files that match the pattern are also expanded to their full names.
#### Command substitution
Command substitution is a form of expansion that allows the STDOUT data stream of one command to be used as the argument of another command; for example, as a list of items to be processed in a loop. The Bash man page says: "Command substitution allows the output of a command to replace the command name." I find that to be accurate if a bit obtuse.
There are two forms of this substitution, **`command`** and **$(command)**. In the older form using backticks (**`**), using a backslash (**\**) in the command retains its literal meaning. However, when it's used in the newer parenthetical form, the backslash takes on its meaning as a special character. Note also that the parenthetical form uses only single parentheses to open and close the command statement.
I frequently use this capability in command-line programs and scripts where the results of one command can be used as an argument for another command.
Start with a very simple example that uses both forms of this expansion (again, ensure that **testdir** is the PWD):
```
[student@studentvm1 testdir]$ echo "Todays date is `date`"
Todays date is Sun Apr  7 14:42:46 EDT 2019
[student@studentvm1 testdir]$ echo "Todays date is $(date)"
Todays date is Sun Apr  7 14:42:59 EDT 2019
[student@studentvm1 testdir]$
```
The **seq** utility is used to generate a sequence of numbers:
```
[student@studentvm1 testdir]$ seq 5
1
2
3
4
5
[student@studentvm1 testdir]$ echo `seq 5`
1 2 3 4 5
[student@studentvm1 testdir]$
```
Now you can do something a bit more useful, like creating a large number of empty files for testing:
```
[student@studentvm1 testdir]$ for I in $(seq -w 5000) ; do touch file-$I ; done
```
In this usage, the statement **seq -w 5000** generates a list of numbers from one to 5,000, and command substitution makes that list available to the **for** statement, which uses it to generate the numerical part of the file names. The **-w** option to the **seq** utility adds leading zeros to the numbers generated so that they are all the same width, i.e., the same number of digits regardless of the value. This makes it easier to sort them in numeric sequence.
#### Arithmetic expansion
Bash can perform integer math, but it is rather cumbersome (as you will soon see). The syntax for arithmetic expansion is **$((arithmetic-expression))**, using double parentheses to open and close the expression.
Arithmetic expansion works like command substitution in a shell program or script; the value calculated from the expression replaces the expression for further evaluation by the shell.
Once again, start with something simple:
```
[student@studentvm1 testdir]$ echo $((1+1))
2
[student@studentvm1 testdir]$ Var1=5 ; Var2=7 ; Var3=$((Var1*Var2)) ; echo "Var 3 = $Var3"
Var 3 = 35
```
The following division results in zero because the result would be a decimal value of less than one:
```
[student@studentvm1 testdir]$ Var1=5 ; Var2=7 ; Var3=$((Var1/Var2)) ; echo "Var 3 = $Var3"
Var 3 = 0
```
Here is a simple calculation I often do in a script or CLI program that tells me how much total virtual memory I have in a Linux host. The **free** command does not provide that data:
```
[student@studentvm1 testdir]$ RAM=`free | grep ^Mem | awk '{print $2}'` ; Swap=`free | grep ^Swap | awk '{print $2}'` ; echo "RAM = $RAM and Swap = $Swap" ; echo "Total Virtual memory is $((RAM+Swap))" ;
RAM = 4037080 and Swap = 6291452
Total Virtual memory is 10328532
```
I used the **`** character to delimit the sections of code used for command substitution.
I use Bash arithmetic expansion mostly for checking system resource amounts in a script and then choose a program execution path based on the result.
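As a rough sketch of that pattern, reusing the free/awk pipeline from above (the 8000000 threshold is an arbitrary value chosen for illustration):

```
[student@studentvm1 testdir]$ RAM=$(free | grep ^Mem | awk '{print $2}') ; Swap=$(free | grep ^Swap | awk '{print $2}') ; if [ $((RAM+Swap)) -gt 8000000 ] ; then echo "Plenty of virtual memory: $((RAM+Swap))" ; else echo "Limited virtual memory: $((RAM+Swap))" ; fi
Plenty of virtual memory: 10328532
```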
### Summary
This article, the second in this series on Bash as a programming language, explored the Bash file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and the different types of shell expansions.
The third article in this series will explore the use of loops for performing various types of iterative operations.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-2
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_5.png?itok=YHpNs_ss (Women in computing and open source v5)
[2]: http://www.both.org/?page_id=1183
[3]: https://opensource.com/article/19/10/programming-bash-part-1

View File

@ -0,0 +1,389 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Initializing arrays in Java)
[#]: via: (https://opensource.com/article/19/10/initializing-arrays-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
Initializing arrays in Java
======
Arrays are a helpful data type for managing collections of elements best
modeled in contiguous memory locations. Here's how to use them
effectively.
![Coffee beans and a cup of coffee][1]
People who have experience programming in languages like C or FORTRAN are familiar with the concept of arrays. Theyre basically a contiguous block of memory where each location is a certain type: integers, floating-point numbers, or what-have-you.
The situation in Java is similar, but with a few extra wrinkles.
### An example array
Lets make an array of 10 integers in Java:
```
int[] ia = new int[10];
```
Whats going on in the above piece of code? From left to right:
1. The **int[]** to the extreme left declares the _type_ of the variable as an array (denoted by the **[]**) of **int**.
2. To the right is the _name_ of the variable, which in this case is **ia**.
3. Next, the **=** tells us that the variable defined on the left side is set to whats to the right side.
4. To the right of the **=** we see the word **new**, which in Java indicates that an object is being _initialized_, meaning that storage is allocated and its constructor is called ([see here for more information][2]).
5. Next, we see **int[10]**, which tells us that the specific object being initialized is an array of 10 integers.
Since Java is strongly-typed, the type of the variable **ia** must be compatible with the type of the expression on the right-hand side of the **=**.
### Initializing the example array
Lets put this simple array in a piece of code and try it out. Save the following in a file called **Test1.java**, use **javac** to compile it, and use **java** to run it (in the terminal of course):
```
import java.lang.*;
public class Test1 {
    public static void main(String[] args) {
        int[] ia = new int[10];                            // See note 1 below
        System.out.println("ia is " + ia.getClass());      // See note 2 below
        for (int i = 0; i < ia.length; i++)                // See note 3 below
            System.out.println("ia[" + i + "] = " + ia[i]);  // See note 4 below
    }
}
```
Lets work through the most important bits.
1. Our declaration and initialization of the array of 10 integers, **ia**, is easy to spot.
2. In the line just following, we see the expression **ia.getClass()**. Thats right, **ia** is an _object_ belonging to a _class_, and this code will let us know which class that is.
3. In the next line following that, we see the start of the loop **for (int i = 0; i &lt; ia.length; i++)**, which defines a loop index variable **i** that runs through a sequence from zero to one less than **ia.length**, which is an expression that tells us how many elements are defined in the array **ia**.
4. Next, the body of the loop prints out the values of each element of **ia**.
When this program is compiled and run, it produces the following results:
```
me@mydesktop:~/Java$ javac Test1.java
me@mydesktop:~/Java$ java Test1
ia is class [I
ia[0] = 0
ia[1] = 0
ia[2] = 0
ia[3] = 0
ia[4] = 0
ia[5] = 0
ia[6] = 0
ia[7] = 0
ia[8] = 0
ia[9] = 0
me@mydesktop:~/Java$
```
The string representation of the output of **ia.getClass()** is **[I**, which is shorthand for "array of integer." Similar to the C programming language, Java arrays begin with element zero and extend up to element **&lt;array size&gt; - 1**. We can see above that each of the elements of **ia** is set to zero (by the array constructor, it seems).
So, is that it? We declare the type, use the appropriate initializer, and were done?
Well, no. There are many other ways to initialize an array in Java. 
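For instance, two common alternatives, shown here as a quick sketch ahead of the fuller examples below, are an array literal and an explicit initializer list:

```
int[] primes = {2, 3, 5, 7, 11};             // array literal with initial values
int[] squares = new int[] {1, 4, 9, 16, 25}; // explicit new with an initializer list
```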
### Why do I want to initialize an array, anyway?
The answer to this question, like that of all good questions, is "it depends." In this case, the answer depends on what we expect to do with the array once it is initialized.
In some cases, arrays emerge naturally as a type of accumulator. For example, suppose we are writing code for counting the number of calls received and made by a set of telephone extensions in a small office. There are eight extensions, numbered one through eight, plus the operators extension, numbered zero. So we might declare two arrays:
```
int[] callsMade;
int[] callsReceived;
```
Then, whenever we start a new period of accumulating call statistics, we initialize each array as:
```
callsMade = new int[9];
callsReceived = new int[9];
```
At the end of each period of accumulating call statistics, we can print out the stats. In very rough terms, we might see:
```
import java.lang.*;
import java.io.*;
public class Test2 {
    public static void main(String[] args) {
        int[] callsMade;
        int[] callsReceived;
        // initialize call counters
        callsMade = new int[9];
        callsReceived = new int[9];
        // process calls...
        //   an extension makes a call: callsMade[ext]++
        //   an extension receives a call: callsReceived[ext]++
        // summarize call statistics
        System.out.printf("%3s%25s%25s\n","ext"," calls made",
            "calls received");
        for (int ext = 0; ext < callsMade.length; ext++)
            System.out.printf("%3d%25d%25d\n",ext,
                callsMade[ext],callsReceived[ext]);
    }
}
```
Which would produce output something like this:
```
me@mydesktop:~/Java$ javac Test2.java
me@mydesktop:~/Java$ java Test2
ext               calls made           calls received
  0                        0                        0
  1                        0                        0
  2                        0                        0
  3                        0                        0
  4                        0                        0
  5                        0                        0
  6                        0                        0
  7                        0                        0
  8                        0                        0
me@mydesktop:~/Java$
```
Not a very busy day in the call center.
In the above example of an accumulator, we see that the starting value of zero as set by the array initializer is satisfactory for our needs. But in other cases, this starting value may not be the right choice.
For example, in some kinds of geometric computations, we might need to initialize a two-dimensional array to the identity matrix (all zeros except for the ones along the main diagonal). We might choose to do this as:
```
 double[][] m = new double[3][3];
        for (int d = 0; d < 3; d++)
            m[d][d] = 1.0;
```
In this case, we rely on the array initializer **new double[3][3]** to set the array to zeros, and then use a loop to set the diagonal elements to ones. In this simple case, we might use a shortcut that Java provides:
```
 double[][] m = {
         {1.0, 0.0, 0.0},
         {0.0, 1.0, 0.0},
         {0.0, 0.0, 1.0}};
```
This type of visual structure is particularly appropriate in this sort of application, where it can be a useful double-check to see the actual layout of the array. But in the case where the number of rows and columns is only determined at run time, we might instead see something like this:
```
 int nrc;
 // some code determines the number of rows & columns = nrc
 double[][] m = new double[nrc][nrc];
 for (int d = 0; d < nrc; d++)
     m[d][d] = 1.0;
```
It's worth mentioning that a two-dimensional array in Java is actually an array of arrays, and there's nothing stopping the intrepid programmer from having each one of those second-level arrays be a different length. That is, something like this is completely legitimate:
```
int [][] differentLengthRows = {
     { 1, 2, 3, 4, 5},
     { 6, 7, 8, 9},
     {10,11,12},
     {13,14},
     {15}};
```
There are various linear algebra applications that involve irregularly-shaped matrices, where this type of structure could be applied (for more information see [this Wikipedia article][5] as a starting point). Beyond that, now that we understand that a two-dimensional array is actually an array of arrays, it shouldn't be too much of a surprise that:
```
differentLengthRows.length
```
tells us the number of rows in the two-dimensional array **differentLengthRows**, and:
```
differentLengthRows[i].length
```
tells us the number of columns in row **i** of **differentLengthRows**.
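For example, a quick sketch (not part of the original program) that prints the length of each row of **differentLengthRows** might look like this:
```
for (int i = 0; i < differentLengthRows.length; i++)
    System.out.println("row " + i + " has " +
        differentLengthRows[i].length + " columns");
```
Running this against the array above would report rows of 5, 4, 3, 2, and 1 columns.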
### Taking the array further
Considering this idea of array size that is determined at run time, we see that arrays still require us to know that size before instantiating them. But what if we don't know the size until we've processed all of the data? Does that mean we have to process it once to figure out the size of the array, and then process it again? That could be hard to do, especially if we only get one chance to consume the data.
The [Java Collections Framework][6] solves this problem in a nice way. One of the things provided there is the class **ArrayList**, which is like an array but dynamically extensible. To demonstrate the workings of **ArrayList**, let's create one and initialize it to the first 20 [Fibonacci numbers][7]:
```
import java.lang.*;
import java.util.*;
public class Test3 {
       
        public static void main(String[] args) {
                ArrayList<Integer> fibos = new ArrayList<Integer>();
                fibos.add(0);
                fibos.add(1);
                for (int i = 2; i < 20; i++)
                        fibos.add(fibos.get(i-1) + fibos.get(i-2));
                for (int i = 0; i < fibos.size(); i++)
                        System.out.println("fibonacci " + i +
                       " = " + fibos.get(i));
        }
}
```
Above, we see:
* The declaration and instantiation of an **ArrayList** that is used to store **Integer**s.
* The use of **add()** to append to the **ArrayList** instance.
* The use of **get()** to retrieve an element by index number.
* The use of **size()** to determine how many elements are already in the **ArrayList** instance.
Not shown is the **set()** method, which places a value at a given index number.
The output of this program is:
```
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci 3 = 2
fibonacci 4 = 3
fibonacci 5 = 5
fibonacci 6 = 8
fibonacci 7 = 13
fibonacci 8 = 21
fibonacci 9 = 34
fibonacci 10 = 55
fibonacci 11 = 89
fibonacci 12 = 144
fibonacci 13 = 233
fibonacci 14 = 377
fibonacci 15 = 610
fibonacci 16 = 987
fibonacci 17 = 1597
fibonacci 18 = 2584
fibonacci 19 = 4181
```
**ArrayList** instances can also be initialized by other techniques. For example, the **ArrayList** constructor can be given an existing collection (such as an array wrapped with **Arrays.asList()**), or the **List.of()** and **Arrays.asList()** methods can be used when the initial elements are known at compile time. I don't find myself using these options all that often, since my primary use case for an **ArrayList** is when I only want to read the data once.
Moreover, an **ArrayList** instance can be converted to an array using its **toArray()** method, for those who prefer to work with an array once the data is loaded; or, returning to the current topic, once the **ArrayList** instance is initialized.
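Here is a minimal sketch of those alternatives (the class name and values are invented for illustration; **List.of()** requires Java 9 or later):
```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Test4 {
    public static void main(String[] args) {
        // initialize from elements known at compile time
        List<Integer> immutableFibos = List.of(0, 1, 1, 2, 3, 5);   // Java 9+, immutable
        ArrayList<Integer> fibos = new ArrayList<Integer>(Arrays.asList(0, 1, 1, 2, 3, 5));

        // convert the ArrayList back to an array once it is loaded
        Integer[] fibosAsArray = fibos.toArray(new Integer[0]);

        System.out.println(immutableFibos);
        System.out.println(Arrays.toString(fibosAsArray));
    }
}
```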
The Java Collections Framework provides another kind of array-like data structure called a **Map**. What I mean by "array-like" is that a **Map** defines a collection of objects whose values can be set or retrieved by a key, but unlike an array (or an **ArrayList**), this key need not be an integer; it could be a **String** or any other complex object.
For example, we can create a **Map** whose keys are **String**s and whose values are **Integer**s as follows:
```
Map<String,Integer> stoi = new HashMap<String,Integer>();
```
Then we can initialize this **Map** as follows:
```
stoi.set("one",1);
stoi.set("two",2);
stoi.set("three",3);
```
And so on. Later, when we want to know the numeric value of **"three"**, we can retrieve it as:
```
stoi.get("three");
```
In my world, a **Map** is useful for converting strings occurring in third-party datasets into coherent code values in my datasets. As a part of a [data transformation pipeline][8], I will often build a small standalone program to clean the data before processing it; for this, I will almost always use one or more **Map**s.
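As a rough sketch of that kind of cleanup step (the raw strings and code values here are invented for illustration, not taken from any real dataset):
```
import java.util.HashMap;
import java.util.Map;

public class SpeciesCleaner {
    public static void main(String[] args) {
        // map the messy strings found in a third-party dataset to our own code values
        Map<String,String> speciesCode = new HashMap<String,String>();
        speciesCode.put("Douglas fir", "DF");
        speciesCode.put("doug fir", "DF");               // a variant spelling seen in the raw data
        speciesCode.put("western red cedar", "WRC");

        String raw = "doug fir";
        // getOrDefault() flags any value we have not mapped yet
        String code = speciesCode.getOrDefault(raw, "UNKNOWN");
        System.out.println(raw + " -> " + code);
    }
}
```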
Worth mentioning is that it's quite possible, and sometimes reasonable, to have **ArrayLists** of **ArrayLists** and **Map**s of **Map**s. For example, let's assume we're looking at trees, and we're interested in accumulating the count of the number of trees by tree species and age range. Assuming that the age range definition is a set of string values ("young," "mid," "mature," and "old") and that the species are string values like "Douglas fir," "western red cedar," and so forth, then we might define a **Map** of **Map**s as:
```
Map<String,Map<String,Integer>> counter =
        new HashMap<String,Map<String,Integer>>();
```
One thing to watch out for here is that the above only creates the outer **Map**; the inner **Map** for each species still has to be created when that species is first encountered. So, our accumulation code might look like:
```
// assume at this point we have figured out the species
// and age range
if (!counter.containsKey(species))
        counter.put(species,new HashMap<String,Integer>());
if (!counter.get(species).containsKey(ageRange))
        counter.get(species).put(ageRange,0);
```
At which point, we can start accumulating as:
```
counter.get(species).put(ageRange,
        counter.get(species).get(ageRange) + 1);
```
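As an aside, on Java 8 and later the check-then-create-then-increment pattern above can be written more compactly with **computeIfAbsent()** and **merge()**; this is just an alternative sketch of the same logic, not a change to it:
```
counter.computeIfAbsent(species, s -> new HashMap<String,Integer>())
        .merge(ageRange, 1, Integer::sum);
```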
Finally, it's worth mentioning that the (new in Java 8) Streams facility can also be used to initialize arrays, **ArrayList** instances, and **Map** instances. A nice discussion of this feature can be found [here][9] and [here][10].
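To give a flavor of that approach, here is a minimal sketch (values chosen only for illustration) that uses streams to build an array, a **List**, and a **Map** in one expression each; note that **Collectors.toList()** does not promise any particular **List** implementation, although current JDKs return an **ArrayList**:
```
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamInit {
    public static void main(String[] args) {
        // an int array of the squares of 1..10
        int[] squares = IntStream.rangeClosed(1, 10).map(i -> i * i).toArray();

        // the same values collected into a List
        List<Integer> squareList = IntStream.rangeClosed(1, 10)
                .map(i -> i * i).boxed().collect(Collectors.toList());

        // a Map from each value to its square
        Map<Integer,Integer> squareMap = IntStream.rangeClosed(1, 10).boxed()
                .collect(Collectors.toMap(Function.identity(), i -> i * i));

        System.out.println(Arrays.toString(squares));
        System.out.println(squareList);
        System.out.println(squareMap.get(7));
    }
}
```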
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/initializing-arrays-java
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
[2]: https://opensource.com/article/19/8/what-object-java
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: https://en.wikipedia.org/wiki/Irregular_matrix
[6]: https://en.wikipedia.org/wiki/Java_collections_framework
[7]: https://en.wikipedia.org/wiki/Fibonacci_number
[8]: https://towardsdatascience.com/data-science-for-startups-data-pipelines-786f6746a59a
[9]: https://stackoverflow.com/questions/36885371/lambda-expression-to-initialize-array
[10]: https://stackoverflow.com/questions/32868665/how-to-initialize-a-map-using-a-lambda


@ -0,0 +1,258 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NGT: A library for high-speed approximate nearest neighbor search)
[#]: via: (https://opensource.com/article/19/10/ngt-open-source-library)
[#]: author: (Masajiro Iwasaki https://opensource.com/users/masajiro-iwasaki)
NGT: A library for high-speed approximate nearest neighbor search
======
NGT is a high-performing, open source approximate nearest neighbor search library for
large-scale and high-dimensional vectors.
![Houses in a row][1]
Approximate nearest neighbor ([ANN][2]) search is used in deep learning to make a best guess at the point in a given set that is most similar to another point. This article explains the differences between ANN search and traditional search methods and introduces [NGT][3], a top-performing open source ANN library developed by [Yahoo! Japan Research][4].
### Nearest neighbor search for high-dimensional data
Different search methods are used for different data types. For example, full-text search is for text data, content-based image retrieval is for images, and relational databases are for data relationships. Deep learning models can easily generate vectors from various kinds of data so that the vector space has embedded relationships among source data. This means that if two source data items are similar, the vectors generated from them will be located near each other in the vector space. Therefore, all you have to do is search the vectors instead of the source data.
Moreover, the vectors not only represent the text and image characteristics of the source data, but they also represent products, human beings, organizations, and so forth. Therefore, you can search for similar documents and images as well as products with similar attributes, human beings with similar skills, clothing with similar features, and so on. For example, [Yahoo! Japan][5] provides a similarity-based fashion-item search using NGT.
![Nearest neighbour search][6]
Since the number of dimensions in deep learning models tends to increase, ANN search methods are indispensable when searching for more than several million high-dimensional vectors. ANN search methods allow you to search for neighbors to the specified query vector in high-dimensional space.
There are many nearest-neighbor search methods to choose from. [ANN Benchmarks][7] evaluates the best-known ANN search methods, including Faiss (Facebook), Flann, and Hnswlib. According to this benchmark, NGT achieves top-level performance.
### NGT algorithms
The NGT index combines a graph and a tree, which results in very good search performance. The graph's vertices represent searchable objects, and neighboring vertices are connected by edges.
This animation shows how a graph is constructed.
![NGT graph construction][8]
In the search procedure, vertices neighboring the specified query can be found by descending the graph. Densely connected vertices enable users to explore the graph effectively.
![NGT graph][9]
NGT provides a command-line tool, along with C, C++, and Python APIs. This article focuses on the command-line tool and the Python API.
### Using NGT with the command-line tool
#### Linux installation
Download the [latest version of NGT][10] as a ZIP file and install it on Linux with:
```
unzip NGT-x.x.x.zip
cd NGT-x.x.x
mkdir build
cd build
cmake ..
make
make install
```
Since NGT libraries are installed in **/usr/local/lib(64)** by default, add the directory to the search path:
```
export PATH="$PATH:/opt/local/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
```
#### Sample dataset generation
Before you can search a large-scale dataset, you must generate an NGT dataset. As an example, [download the fastText dataset][11] from the [fastText website][12], then convert it to the NGT registration format with:
```
curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.vec.zip
unzip wiki-news-300d-1M-subword.vec.zip
tail -n +2 wiki-news-300d-1M-subword.vec | cut -d " " -f 2- > objects.ssv
```
The **objects.ssv** file is a registration file that has 1 million objects. One object in the file is extracted as a query:
```
head -10000 objects.ssv | tail -1 > query.ssv
```
#### Index construction
An **ngt_index** can be constructed using the following command:
```
ngt create -d 300 -D c index objects.ssv
```
_-d_ specifies the number of dimensions of the vector. _-D c_ means using cosine similarity.
#### Approximate nearest neighbor search
The **ngt_index** can be searched with the queries using:
```
ngt search -n 10 index query.ssv
```
**-n** specifies the number of resulting objects.
The search results are:
```
Query No.1
Rank    ID      Distance
1       10000   0
2       21516   0.184495
3       201860  0.240375
4       71865   0.241284
5       339589  0.267265
6       485158  0.280977
7       7961    0.283865
8       924513  0.286571
9       28870   0.286654
10      395274  0.290466
Query Time= 0.000972628 (sec), 0.972628 (msec)
Average Query Time= 0.000972628 (sec), 0.972628 (msec), (0.000972628/1)
```
Please see the [NGT command-line README][13] for more information.
### Using NGT from Python
Although NGT has C and C++ APIs, the [ngtpy][14] Python binding for NGT is the simplest option for programming.
#### Installing ngtpy
Install the Python binding (ngtpy) through PyPI with:
```
pip3 install ngt
```
#### Sample dataset generation
Generate data files for Python sample programs from the sample data set you downloaded by using this code:
```
dataset_path = 'wiki-news-300d-1M-subword.vec'
with open(dataset_path, 'r') as fi, open('objects.tsv', 'w') as fov, \
     open('words.tsv', 'w') as fow:
    n, dim = map(int, fi.readline().split())
    fov.write('{0}\t{1}\n'.format(n, dim))
    for line in fi:
        tokens = line.rstrip().split(' ')
        fow.write(tokens[0] + '\n')
        fov.write('{0}\n'.format('\t'.join(tokens[1:])))
```
#### Index construction
Construct the NGT index with:
```
import ngtpy
index_path = 'index'
with open('objects.tsv', 'r') as fin:
    n, dim = map(int, fin.readline().split())
    ngtpy.create(index_path, dim, distance_type='Cosine') # create an index
    index = ngtpy.Index(index_path) # open the index
    print('inserting objects...')
    for line in fin:
        object = list(map(float, line.rstrip().split('\t')))
        index.insert(object) # insert objects
print('building objects...')
index.build_index()
print('saving the index...')
index.save()
```
#### Approximate nearest neighbor search
Here is an example ANN search program:
```
import ngtpy
print('loading words...')
with open('words.tsv', 'r') as fin:
    words = list(map(lambda x: x.rstrip('\n'), fin.readlines()))
index = ngtpy.Index('index', zero_based_numbering = False) # open index
query_id = 10000
query_object = index.get_object(query_id) # get the object for a query
result = index.search(query_object) # approximate nearest neighbor search
print('Query={}'.format(words[query_id - 1]))
print('Rank\tID\tDistance\tWord')
for rank, object in enumerate(result):
    print('{}\t{}\t{:.6f}\t{}'.format(rank + 1, object[0], object[1], words[object[0] - 1]))
```
And here are the search results, which are the same as the results from the NGT command-line tool:
```
loading words...
Query=Horse
Rank    ID      Distance        Word
1       10000   0.000000        Horse
2       21516   0.184495        Horses
3       201860  0.240375        Horseback
4       71865   0.241284        Horseman
5       339589  0.267265        Prancing
6       485158  0.280977        Horsefly
7       7961    0.283865        Dog
8       924513  0.286571        Horsing
9       28870   0.286654        Pony
10      395274  0.290466        Blood-Horse
```
For more information, please see [ngtpy README][14].
Approximate nearest neighbor (ANN) search is an important technique for analyzing data. Learning how to use it in your own projects, or to make sense of data that you're analyzing, is a powerful way to find correlations and interpret information. With NGT, you can use ANN in whatever way you require, or build upon it to add custom features.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/ngt-open-source-library
作者:[Masajiro Iwasaki][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/masajiro-iwasaki
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor
[3]: https://github.com/yahoojapan/NGT
[4]: https://research-lab.yahoo.co.jp/en/
[5]: https://www.yahoo.co.jp/
[6]: https://opensource.com/sites/default/files/browser-visual-search_new.jpg (Nearest neighbour search)
[7]: https://github.com/erikbern/ann-benchmarks
[8]: https://opensource.com/sites/default/files/uploads/ngt_movie2.gif (NGT graph construction)
[9]: https://opensource.com/sites/default/files/uploads/ngt_movie1.gif (NGT graph)
[10]: https://github.com/yahoojapan/NGT/releases/latest
[11]: https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.vec.zip
[12]: https://fasttext.cc/
[13]: https://github.com/yahoojapan/NGT/blob/master/bin/ngt/README.md
[14]: https://github.com/yahoojapan/NGT/blob/master/python/README-ngtpy.md


@ -0,0 +1,206 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Best practices in test-driven development)
[#]: via: (https://opensource.com/article/19/10/test-driven-development-best-practices)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Best practices in test-driven development
======
Ensure you're producing very high-quality code by following these TDD
best practices.
![magnifying glass on computer screen][1]
In my previous series on [test-driven development (TDD) and mutation testing][2], I demonstrated the benefits of relying on examples when building a solution. That raises the question: What does "relying on examples" mean?
In that series, I described one of my expectations when building a solution to determine whether it's daytime or nighttime. I provided an example of a specific hour of the day that I consider to fall in the daytime category. I created a **DateTime** variable named **dayHour** and gave it the specific value of **August 8, 2019, 7 hours, 0 minutes, 0 seconds**.
My logic (or way of reasoning) was: "When the system is notified that the time is exactly 7am on August 8, 2019, I expect that the system will perform the necessary calculations and return the value **Daylight**."
Armed with such a specific example, it was very easy to create a unit test (**Given7amReturnDaylight**). I then ran the tests and watched my unit test fail, which gave me the opportunity to work on fixing this early failure.
### Iteration is the solution
One very important aspect of TDD (and, by proxy, of agile) is the fact that it is impossible to arrive at an acceptable solution unless you are iterating. TDD is a professional discipline based on the process of relentless iterating. It is very important to note that it mandates that each iteration must begin with a micro-failure. That micro-failure has only one purpose: to solicit immediate feedback. And that immediate feedback ensures we can rapidly close the gap between _wanting_ a solution and _getting_ a solution.
Iteration provides an opportunity to solicit immediate feedback by failing as early as possible. Because that failure is fast (i.e., it is a micro-failure), it is not alarming; even when we fail, we can remain calm, knowing that it will be easy to fix the failure. And the feedback from that failure will guide us toward fixing the failure.
Rinse, repeat, until we completely close the gap and deliver the solution that fully meets the expectation (but keep in mind that the expectation must also be a micro-expectation).
### Why micro?
This approach often feels very unambitious. In TDD (and in agile), it's best to pick a tiny, almost trivial challenge, and then do the TDD song-and-dance by failing first, then iterating until we solve that trivial challenge. People who are used to more substantial, beefy engineering and problem solving tend to feel that such an exercise is beneath their level of competence.
One of the cornerstones of agile philosophy relies on reducing the problem space to multiple, smallest-possible surface areas. As Robert C. Martin puts it:
> _"Agile is a small idea about the small problems of small programming teams doing small things"_
But how can making an unimpressive series of such pedestrian, minuscule, and almost insignificant micro-victories ever enable us to reach the big-scale solution?
Here is where sophisticated and elaborate systems thinking comes into play. When building a system, there's always the risk of ending up with a dreaded "monolith." A monolith is a system built on the principle of tight coupling. Any part of the monolith is highly dependent on many other parts of the same monolith. That arrangement makes the monolith very brittle, unreliable, and difficult to operate, maintain, troubleshoot, and fix.
The only way to avoid this trap is to minimize or, better yet, completely remove coupling. Instead of investing heroic efforts into building elaborate parts that will be assembled into a system, it is much better to take humble, baby steps toward building tiny, micro parts. These micro parts have very little capability on their own, and will, by virtue of such arrangement, not be dependent on other components. This will minimize and even remove any coupling.
The desired end game in building a useful, elaborate system is to compose it from a collection of generic, completely independent components. The more generic each component is, the more robust, resilient, and flexible the resulting system will be. Also, having a collection of generic components enables them to be repurposed to build brand new systems by reconfiguring those components.
Consider a toy castle made out of Lego blocks. If we pick almost any block from that castle and examine it in isolation, we won't be able to find anything on that block that specifies it is a Lego block meant for building a castle. The block itself is sufficiently generic, which makes it suitable for building other contraptions, such as toy cars, toy airplanes, toy boats, etc. That's the power of having generic components.
TDD is a proven discipline for delivering generic, independent, and autonomous components that can be safely used to assemble large, sophisticated systems expediently. As in agile, TDD is focused on micro-activities. And because agile is based on the fundamental principle known as "the Whole Team," the humble approach illustrated here is also important when specifying business examples. If the example used for building a component is not modest, it will be difficult to meet the expectations. Therefore, the expectations must be humble, which makes the resulting examples equally humble.
For instance, if a member of the Whole Team (a requester) provides the developer with an expectation and an example that reads:
> _"When processing an order, make sure to apply appropriate discount for orders made by loyal customers, or for orders over certain monetary value, or both."_
The developer should recognize that this example is too ambitious. That's not a humble expectation. It is not sufficiently micro, if you will. The developer should always strive to guide a requester in being more specific and micro-level when crafting examples. Paradoxically, the more specific the example, the more generic the resulting solution will be.
A much better, more effective expectation and example would be:
> _"Discount made for an order greater than $100.00 is $18.00."_
Or:
> _"Discount made for an order greater than $100.00 that was made by a customer who already placed three orders is $25.00."_
Such micro-examples make it easy to turn them into automated micro-expectations (read: unit tests). Such expectations will make us fail, and then we will pick ourselves up and iterate until we deliver the solution—a robust, generic component that knows how to calculate discounts based on the micro-examples supplied by the Whole Team.
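For instance, the first micro-example above might turn into a unit test along these lines; this is only a sketch, with JUnit assumed as the test framework and the **DiscountCalculator** class invented here for illustration:
```
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTests {

    // A stand-in for the component the Whole Team's example describes;
    // in real TDD this class would start empty and grow just enough to pass.
    static class DiscountCalculator {
        double discountFor(double orderTotal) {
            return orderTotal > 100.00 ? 18.00 : 0.00;
        }
    }

    @Test
    public void GivenOrderOver100DollarsReturn18DollarDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        double discount = calculator.discountFor(101.00);
        assertEquals(18.00, discount, 0.001);   // one micro-expectation, one Assert statement
    }
}
```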
### Writing quality unit tests
Merely writing unit tests without any concern about their quality is a fool's errand. Shoddily written unit tests will result in bloated, tightly coupled code. Such code is brittle, difficult to reason about, and often nearly impossible to fix.
We need to lay down some ground rules for writing quality unit tests. These ground rules will help us make swift progress in building robust, reliable solutions. The easiest way to do that is to introduce a mnemonic in the form of an acronym: **FIRST**, which says unit tests must be:
* **F** = Fast
* **I** = Independent
* **R** = Repeatable
* **S** = Self-validating
* **T** = Thorough
#### Fast
Since a unit test describes a micro-example, it should expect very simple processing from the implemented code. This means that each unit test should be very fast to run.
#### Independent
Since a unit test describes a micro-example, it should describe a very simple process that does not depend on any other unit test.
#### Repeatable
Since a unit test does not depend on any other unit test, it must be fully repeatable. What that means is that each time a certain unit test runs, it produces the same results as the previous time it ran. Neither the number of times the unit tests run nor the order in which they run should ever affect the expected output.
#### Self-validating
When unit tests run, the outcome of the testing should be instantly visible. Developers should not be expected to reach for some other source(s) of information to find out whether their unit tests failed or passed.
#### Thorough
Unit tests should describe all the expectations as defined in the micro-examples.
### Well-structured unit tests
Unit tests are code. And the same as any other code, unit tests need to be well-structured. It is unacceptable to deliver sloppy, messy unit tests. All the principles that apply to the rules governing clean implementation code apply with equal force to unit tests.
A time-tested and proven methodology for writing reliable quality code is based on the clean code principle known as **SOLID**. This acronym helps us remember five very important principles:
* **S** = Single responsibility principle
* **O** = Open-closed principle
* **L** = Liskov substitution principle
* **I** = Interface segregation principle
* **D** = Dependency inversion principle
#### Single responsibility principle
Each component must be responsible for performing only one operation. This principle is illustrated in this meme:
![Sign illustrating single-responsibility principle][3]
Pumping septic tanks is an operation that must be kept separate from filling swimming pools.
Applied to unit tests, this principle ensures that each unit test verifies one—and only one—expectation. From a technical standpoint, this means each unit test must have one and only one **Assert** statement.
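A rough sketch of what that looks like in practice, reusing the daylight example from the introduction (JUnit is assumed, and the **checkDaylight()** helper and its 6am-to-6pm boundary are invented here): each expectation gets its own test, and each test has exactly one **Assert** statement:
```
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.time.LocalTime;

public class DaylightTests {

    // stand-in for the real component, invented for this sketch
    private String checkDaylight(LocalTime time) {
        return (time.getHour() >= 6 && time.getHour() < 18) ? "Daylight" : "Nighttime";
    }

    @Test
    public void Given7amReturnDaylight() {
        assertEquals("Daylight", checkDaylight(LocalTime.of(7, 0)));    // exactly one Assert
    }

    @Test
    public void Given11pmReturnNighttime() {
        assertEquals("Nighttime", checkDaylight(LocalTime.of(23, 0)));  // the second expectation gets its own test
    }
}
```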
#### Open-closed principle
This principle states that a component should be open for extensions, but closed for any modifications.
![Open-closed principle][4]
Applied to unit tests, this principle ensures that we will not implement a change to an existing unit test in that unit test. Instead, we must write a brand new unit test that will implement the changes.
#### Liskov substitution principle
This principle provides a guide for deciding which level of abstraction may be appropriate for the solution.
![Liskov substitution principle][5]
Applied to unit tests, this principle guides us to avoid tight coupling with dependencies that depend on the underlying computing environment (such as databases, disks, network, etc.).
#### Interface segregation principle
This principle reminds us not to bloat APIs. When subsystems need to collaborate to complete a task, they should communicate via interfaces. But those interfaces must not be bloated. If a new capability becomes necessary, don't add it to the already defined interface; instead, craft a brand new interface.
![Interface segregation principle][6]
Applied to unit tests, removing the bloat from interfaces helps us craft more specific unit tests, which, in turn, results in more generic components.
#### Dependency inversion principle
This principle states that we should control our dependencies, instead of dependencies controlling us. If there is a need to use another component's services, instead of being responsible for instantiating that component within the component we are building, it must instead be injected into our component.
![Dependency inversion principle][7]
Applied to the unit tests, this principle helps separate the intention from the implementation. We must strive to inject only those dependencies that have been sufficiently abstracted. That approach is important for ensuring unit tests are not mixed with integration tests.
### Testing the tests
Finally, even if we manage to produce well-structured unit tests that fulfill the FIRST principles, it does not guarantee that we have delivered a solid solution. TDD best practices rely on the proper sequence of events when building components/services; we are always and invariably expected to provide a description of our expectations (supplied in the micro-examples). Only after those expectations are described in the unit test can we move on to writing the implementation code. However, two unwanted side effects can, and often do, happen while writing implementation code:
1. Implemented code enables the unit tests to pass, but they are written in a convoluted way, using unnecessarily complex logic
2. Implemented code gets tacked on AFTER the unit tests have been written
In the first case, even if all unit tests pass, mutation testing uncovers that some mutants have survived. As I explained in _[Mutation testing by example: Evolving from fragile TDD][8]_, that is an extremely undesirable situation because it means that the solution is unnecessarily complex and, therefore, unmaintainable.
In the second case, all unit tests are guaranteed to pass, but a potentially large portion of the codebase consists of implemented code that hasn't been described anywhere. This means we are dealing with mysterious code. In the best-case scenario, we could treat that mysterious code as deadwood and safely remove it. But more likely than not, removing this not-described, implemented code will cause some serious breakages. And such breakages indicate that our solution is not well engineered.
### Conclusion
TDD best practices stem from the time-tested methodology called [extreme programming][9] (XP for short). One of the cornerstones of XP is based on the **three C's**:
1. **Card:** A small card briefly specifies the intent (e.g., "Review customer request").
2. **Conversation:** The card becomes a ticket to conversation. The whole team gets together and talks about "Review customer request." What does that mean? Do we have enough information/knowledge to ship the "review customer request" functionality in this increment? If not, how do we further slice this card?
3. **Concrete confirmation examples:** This includes all the specific values plugged in (e.g., concrete names, numeric values, specific dates, whatever else is pertinent to the use case) plus all values expected as an output of the processing.
Starting from such micro-examples, we write unit tests. We watch unit tests fail, then make them pass. And while doing that, we observe and respect the best software engineering practices: the **FIRST** principles, the **SOLID** principles, and the mutation testing discipline (i.e., kill all surviving mutants).
This ensures that our components and services are delivered with solid quality built in. And what is the measure of that quality? Simple—**the cost of change**. If the delivered code is costly to change, it is of shoddy quality. Very high-quality code is structured so well that it is simple and inexpensive to change and, at the same time, does not incur any change-management risks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/test-driven-development-best-practices
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://opensource.com/users/alex-bunardzic
[3]: https://opensource.com/sites/default/files/uploads/single-responsibility.png (Sign illustrating single-responsibility principle)
[4]: https://opensource.com/sites/default/files/uploads/openclosed_cc.jpg (Open-closed principle)
[5]: https://opensource.com/sites/default/files/uploads/liskov_substitution_cc.jpg (Liskov substitution principle)
[6]: https://opensource.com/sites/default/files/uploads/interface_segregation_cc.jpg (Interface segregation principle)
[7]: https://opensource.com/sites/default/files/uploads/dependency_inversion_cc.jpg (Dependency inversion principle)
[8]: https://opensource.com/article/19/9/mutation-testing-example-definition
[9]: https://en.wikipedia.org/wiki/Extreme_programming


@ -0,0 +1,154 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building container images with the ansible-bender tool)
[#]: via: (https://opensource.com/article/19/10/building-container-images-ansible)
[#]: author: (Tomas Tomecek https://opensource.com/users/tomastomecek)
Building container images with the ansible-bender tool
======
Learn how to use Ansible to execute commands in a container.
![Blocks for building][1]
Containers and [Ansible][2] blend together so nicely—from management and orchestration to provisioning and building. In this article, we'll focus on the building part.
If you are familiar with Ansible, you know that you can write a series of tasks, and the **ansible-playbook** command will execute them for you. Did you know that you can also execute such commands in a container environment and get the same result as if you'd written a Dockerfile and run **podman build**?
Here is an example:
```
- name: Serve our file using httpd
  hosts: all
  tasks:
  - name: Install httpd
    package:
      name: httpd
      state: installed
  - name: Copy our file to httpd's webroot
    copy:
      src: our-file.txt
      dest: /var/www/html/
```
You could execute this playbook locally on your web server or in a container, and it would work—as long as you remember to create the **our-file.txt** file first.
But something is missing. You need to start (and configure) httpd in order for your file to be served. This is a difference between container builds and infrastructure provisioning: When building an image, you just prepare the content; running the container is a different task. On the other hand, you can attach metadata to the container image that tells the command to run by default.
Here's where a tool would help. How about trying **ansible-bender**?
```
$ ansible-bender build the-playbook.yaml fedora:30 our-httpd
```
This command uses the ansible-bender tool to execute the playbook against a Fedora 30 container image and names the resulting container image **our-httpd**.
But when you run that container, it won't start httpd because it doesn't know how to do it. You can fix this by adding some metadata to the playbook:
```
- name: Serve our file using httpd
  hosts: all
  vars:
    ansible_bender:
      base_image: fedora:30
      target_image:
        name: our-httpd
        cmd: httpd -DFOREGROUND
  tasks:
  - name: Install httpd
    package:
      name: httpd
      state: installed
  - name: Listen on all network interfaces.
    lineinfile:    
      path: /etc/httpd/conf/httpd.conf  
      regexp: '^Listen '
      line: Listen 0.0.0.0:80  
  - name: Copy our file to httpd's webroot
    copy:
      src: our-file.txt
      dest: /var/www/html
```
Now you can build the image (from here on, please run all the commands as root—currently, Buildah and Podman won't create dedicated networks for rootless containers):
```
# ansible-bender build the-playbook.yaml
PLAY [Serve our file using httpd] ****************************************************
                                                                                                                                                                             
TASK [Gathering Facts] ***************************************************************    
ok: [our-httpd-20191004-131941266141-cont]
TASK [Install httpd] *****************************************************************
loaded from cache: 'f053578ed2d47581307e9ba3f64f4b4da945579a082c6f99bd797635e62befd0'
skipping: [our-httpd-20191004-131941266141-cont]
TASK [Listen on all network interfaces.] *********************************************
changed: [our-httpd-20191004-131941266141-cont]
TASK [Copy our file to httpd's webroot] **********************************************
changed: [our-httpd-20191004-131941266141-cont]
PLAY RECAP ***************************************************************************
our-httpd-20191004-131941266141-cont : ok=3    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
Getting image source signatures
Copying blob sha256:4650c04b851c62897e9c02c6041a0e3127f8253fafa3a09642552a8e77c044c8
Copying blob sha256:87b740bba596291af8e9d6d91e30a01d5eba9dd815b55895b8705a2acc3a825e
Copying blob sha256:82c21252bd87532e93e77498e3767ac2617aa9e578e32e4de09e87156b9189a0
Copying config sha256:44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Writing manifest to image destination
Storing signatures
44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Image 'our-httpd' was built successfully \o/
```
The image is built, and it's time to run the container:
```
# podman run our-httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.2.106. Set the 'ServerName' directive globally to suppress this message
```
Is your file being served? First, find out the IP of your container:
```
# podman inspect -f '{{ .NetworkSettings.IPAddress }}' 7418570ba5a0
10.88.2.106
```
And now you can check:
```
$ curl http://10.88.2.106/our-file.txt
Ansible is ❤
```
What were the contents of your file?
This was just an introduction to building container images with Ansible. If you want to learn more about what ansible-bender can do, please check it out on [GitHub][3]. Happy building!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/building-container-images-ansible
作者:[Tomas Tomecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tomastomecek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire (Blocks for building)
[2]: https://www.ansible.com/
[3]: https://github.com/ansible-community/ansible-bender


@ -0,0 +1,263 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to dual boot Windows 10 and Debian 10)
[#]: via: (https://www.linuxtechi.com/dual-boot-windows-10-debian-10/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to dual boot Windows 10 and Debian 10
======
So, you finally made the bold decision to try out **Linux** after much convincing. However, you do not want to let go of your Windows 10 operating system yet, as you will still need it while you learn the ropes on Linux. Thankfully, you can easily set up a dual boot that allows you to switch to either operating system when you boot your system. In this guide, you will learn how to **dual boot Windows 10 alongside Debian 10**.
[![How-to-dual-boot-Windows-and-Debian10][1]][2]
### Prerequisites
Before you get started, ensure you have the following:
* A bootable USB  or DVD of Debian 10
* A fast and stable internet connection (for installation updates & third-party applications)
Additionally, it is worth paying attention to how your system boots (UEFI or Legacy) and ensuring that both operating systems use the same boot mode.
### Step 1: Create a free partition on your hard drive
To start off, you need to create a free partition on your hard drive. This is the partition where Debian will be installed during the installation process. To achieve this, you will invoke the disk management utility as shown:
Press **Windows Key + R** to launch the Run dialogue. Next, type **diskmgmt.msc** and hit **ENTER**
[![Launch-Run-dialogue][1]][3]
This launches the **disk management** window displaying all the drives existing on your Windows system.
[![Disk-management][1]][4]
Next, you need to create a free space for Debian installation. To do this, you need to shrink a partition from one of the volumes and create a new unallocated partition. In this case, I will create a **30 GB** partition from Volume D.
To shrink a volume, right-click on it and select the **shrink** option
[![Shrink-volume][1]][5]
In the pop-up dialogue, define the amount of space you want to shrink. Remember, this will be the disk space on which Debian 10 will be installed. In my case, I selected **30000MB (approximately 30 GB)**. Once done, click on **Shrink**.
[![Shrink-space][1]][6]
After the shrinking operation completes, you should have an unallocated partition as shown:
[![Unallocated-partition][1]][7]
Perfect! We are now good to go and ready to begin the installation process.
### Step 2: Begin the installation of Debian 10
With the free partition already created, plug in your bootable USB drive or insert the DVD installation medium in your PC and reboot your system. Be sure to make changes to the **boot order** in the **BIOS** setup by pressing the function keys (usually **F9, F10 or F12**, depending on the vendor). This is crucial so that the PC boots into your installation medium. Save the BIOS settings and reboot.
A new grub menu will be displayed as shown below: Click on **Graphical install**
[![Graphical-Install-Debian10][1]][8]
In the next step, select your **preferred language** and click **Continue**
[![Select-Language-Debian10][1]][9]
Next, select your **location** and click **Continue**. Based on this location, the time will automatically be selected for you. If you cannot find your location, scroll down and click on **other**, then select your location.
[![Select-location-Debain10][1]][10]
Next, select your **keyboard** layout.
[![Configure-Keyboard-layout-Debain10][1]][11]
In the next step, specify your system's **hostname** and click **Continue**
[![Set-hostname-Debian10][1]][12]
Next, specify the **domain name**. If you are not in a domain environment, simply click on the **continue** button.
[![Set-domain-name-Debian10][1]][13]
In the next step, specify the **root password** as shown and click **continue**.
[![Set-root-Password-Debian10][1]][14]
In the next step, specify the full name of the user for the account and click **continue**
[![Specify-fullname-user-debain10][1]][15]
Then set the account name by specifying the **username** associated with the account
[![Specify-username-Debian10][1]][16]
Next, specify the user's password as shown and click **continue**
[![Specify-user-password-Debian10][1]][17]
Next, specify your **timezone**
[![Configure-timezone-Debian10][1]][18]
At this point, you need to create partitions for your Debian 10 installation. If you are an inexperienced user, click on **Use the largest continuous free space** and click **continue**.
[![Use-largest-continuous-free-space-debian10][1]][19]
However, if you are more knowledgeable about creating partitions, select the **Manual** option and click **continue**
[![Select-Manual-Debain10][1]][20]
Thereafter, select the partition labeled **FREE SPACE** and click **continue**. Next, click on **Create a new partition**.
[![Create-new-partition-Debain10][1]][21]
In the next window, first define the size of the swap space. In my case, I specified **2GB**. Click **Continue**.
[![Define-swap-space-debian10][1]][22]
Next, click on **Primary** on the next screen and click **continue**
[![Partition-Disks-Primary-Debain10][1]][23]
Select the partition to **start at the beginning** and click continue.
[![Start-at-the-beginning-Debain10][1]][24]
Next, click on **Ext 4 journaling file system** and click **continue**
[![Select-Ext4-Journaling-system-debain10][1]][25]
On the next window, select **swap** and click **continue**
[![Select-swap-debain10][1]][26]
Next, click on **Done setting up the partition** and click **continue**.
[![Done-setting-partition-debian10][1]][27]
Back on the **Partition disks** page, click on **FREE SPACE** and click **continue**
[![Click-Free-space-Debain10][1]][28]
To make your life easy select **Automatically partition the free space** and click **continue**.
[![Automatically-partition-free-space-Debain10][1]][29]
Next click on **All files in one partition (recommended for new users)**
[![All-files-in-one-partition-debian10][1]][30]
Finally, click on **Finish partitioning and write changes to disk** and click **continue**.
[![Finish-partitioning-write-changes-to-disk][1]][31]
Confirm that you want to write changes to disk and click **Yes**
[![Write-changes-to-disk-Yes-Debian10][1]][32]
Thereafter, the installer will begin installing all the requisite software packages.
When asked if you want to scan another CD, select **No** and click continue
[![Scan-another-CD-No-Debain10][1]][33]
Next, select the mirror of the Debian archive closest to you and click Continue
[![Debian-archive-mirror-country][1]][34]
Next, select the **Debian mirror** that is most preferable to you and click **Continue**
[![Select-Debian-archive-mirror][1]][35]
If you plan on using a proxy server, enter its details as shown below, otherwise leave it blank and click continue
[![Enter-proxy-details-debian10][1]][36]
As the installation proceeds, you will be asked if you would like to participate in a **package usage survey**. You can select either option and click **continue**. In my case, I selected **No**
[![Participate-in-survey-debain10][1]][37]
Next, select the packages you need in the **software selection** window and click **continue**.
[![Software-selection-debian10][1]][38]
The installation will continue installing the selected packages. At this point, you can take a coffee break as the installation goes on.
You will be prompted whether to install the grub **bootloader** on **Master Boot Record (MBR)**. Click **Yes** and click **Continue**.
[![Install-grub-bootloader-debian10][1]][39]
Next, select the hard drive on which you want to install **grub** and click **Continue**.
[![Select-hard-drive-install-grub-Debian10][1]][40]
Finally, the installation will complete. Go ahead and click on the **Continue** button
[![Installation-complete-reboot-debian10][1]][41]
You should now have a grub menu with both **Windows** and **Debian** listed. To boot to Debian, scroll and click on Debian. Thereafter, you will be prompted with a login screen. Enter your details and hit ENTER.
[![Debian10-log-in][1]][42]
And voila! There you have it: a fresh copy of Debian 10 in a dual boot setup with Windows 10.
[![Debian10-Buster-Details][1]][43]
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/dual-boot-windows-10-debian-10/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/How-to-dual-boot-Windows-and-Debian10.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Launch-Run-dialogue.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Disk-management.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-volume.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-space.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Unallocated-partition.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Graphical-Install-Debian10.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Language-Debian10.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-location-Debain10.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-Keyboard-layout-Debain10.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-hostname-Debian10.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-domain-name-Debian10.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-root-Password-Debian10.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-fullname-user-debain10.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-username-Debian10.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-user-password-Debian10.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-timezone-Debian10.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Use-largest-continuous-free-space-debian10.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Manual-Debain10.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Create-new-partition-Debain10.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Define-swap-space-debian10.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Partition-Disks-Primary-Debain10.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Start-at-the-beginning-Debain10.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Ext4-Journaling-system-debain10.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-swap-debain10.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Done-setting-partition-debian10.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Click-Free-space-Debain10.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Automatically-partition-free-space-Debain10.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/10/All-files-in-one-partition-debian10.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Finish-partitioning-write-changes-to-disk.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Write-changes-to-disk-Yes-Debian10.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Scan-another-CD-No-Debain10.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian-archive-mirror-country.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Debian-archive-mirror.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Enter-proxy-details-debian10.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Participate-in-survey-debain10.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Software-selection-debian10.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-grub-bootloader-debian10.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-hard-drive-install-grub-Debian10.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Installation-complete-reboot-debian10.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-log-in.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-Buster-Details.jpg


@ -0,0 +1,352 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Loops)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-3)
[#]: author: (David Both https://opensource.com/users/dboth)
How to program with Bash: Loops
======
Learn how to use loops for performing iterative operations, in the final
article in this three-part series on programming with Bash.
![arrows cycle symbol for failing faster][1]
Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts. This three-part series, based on my [three-volume Linux self-study course][2], explores using Bash as a programming language on the command-line interface (CLI).
The [first article][3] in this series explored some simple command-line programming with Bash, including using variables and control operators. The [second article][4] looked into the types of file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and different types of shell expansions in Bash. This third (and final) article examines the use of loops for performing various types of iterative operations and ways to control those loops.
### Loops
Every programming language I have ever used has at least a couple of types of loop structures that provide various capabilities to perform repetitive operations. I use the **for** loop quite often, but I also find the **while** and **until** loops useful.
#### for loops
Bash's implementation of the **for** command is, in my opinion, a bit more flexible than most because it can handle non-numeric values; in contrast, for example, the standard C language **for** loop can deal only with numeric values.
The basic structure of the Bash version of the **for** command is simple:
```
for Var in list1 ; do list2 ; done
```
This translates to: "For each value in list1, set the **$Var** to that value and then perform the program statements in list2 using that value; when all of the values in list1 have been used, it is finished, so exit the loop." The values in list1 can be a simple, explicit string of values, or they can be the result of a command substitution (described in the second article in the series). I use this construct frequently.
To try it, ensure that **~/testdir** is still the present working directory (PWD). Clean up the directory, then look at a trivial example of the **for** loop starting with an explicit list of values. This list is a mix of alphanumeric values—but do not forget that all variables are strings and can be treated as such.
```
[student@studentvm1 testdir]$ rm *
[student@studentvm1 testdir]$ for I in a b c d 1 2 3 4 ; do echo $I ; done
a
b
c
d
1
2
3
4
```
Here is a bit more useful version with a more meaningful variable name:
```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Department $Dept" ; done
Department Human Resources
Department Sales
Department Finance
Department Information Technology
Department Engineering
Department Administration
Department Research
```
Make some directories (and show some progress information while doing so):
```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept"  ; done
Working on Department Human Resources
Working on Department Sales
Working on Department Finance
Working on Department Information Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:45  Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:45  Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:45  Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Human Resources'
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Information Technology'
drwxrwxr-x 2 student student 4096 Apr  8 15:45  Research
drwxrwxr-x 2 student student 4096 Apr  8 15:45  Sales
```
The **$Dept** variable must be enclosed in quotes in the **mkdir** statement; otherwise, two-part department names (such as "Information Technology") will be treated as two separate departments. That highlights a best practice I like to follow: all file and directory names should be a single word. Although most modern operating systems can deal with spaces in names, it takes extra work for sysadmins to ensure that those special cases are considered in scripts and CLI programs. (They almost certainly should be considered, even if they're annoying because you never know what files you will have.)
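To see why the quotes matter, here is a minimal sketch (the directory name is just an example) contrasting unquoted and quoted expansion of a variable that contains a space:
```
Dept="Information Technology"
mkdir $Dept     # unquoted: word splitting creates two directories, "Information" and "Technology"
mkdir "$Dept"   # quoted: creates a single directory named "Information Technology"
```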
So, delete everything in **~/testdir**—again—and do this one more time:
```
[student@studentvm1 testdir]$ rm -rf * ; ll
total 0
[student@studentvm1 testdir]$ for Dept in Human-Resources Sales Finance Information-Technology Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept"  ; done
Working on Department Human-Resources
Working on Department Sales
Working on Department Finance
Working on Department Information-Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Human-Resources
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Information-Technology
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Research
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Sales
```
Suppose someone asks for a list of all RPMs on a particular Linux computer and a short description of each. This happened to me when I worked for the State of North Carolina. Since open source was not "approved" for use by state agencies at that time, and I only used Linux on my desktop computer, the pointy-haired bosses (PHBs) needed a list of each piece of software that was installed on my computer so that they could "approve" an exception.
How would you approach that? Here is one way, starting with the knowledge that the **rpm -qi** command provides a complete description of an RPM, including the two items the PHBs want: the software name and a brief summary.
Build up to the final result one step at a time. First, list all RPMs:
```
[student@studentvm1 testdir]$ rpm -qa
perl-HTTP-Message-6.18-3.fc29.noarch
perl-IO-1.39-427.fc29.x86_64
perl-Math-Complex-1.59-429.fc29.noarch
lua-5.3.5-2.fc29.x86_64
java-11-openjdk-headless-11.0.ea.28-2.fc29.x86_64
util-linux-2.32.1-1.fc29.x86_64
libreport-fedora-2.9.7-1.fc29.x86_64
rpcbind-1.2.5-0.fc29.x86_64
libsss_sudo-2.0.0-5.fc29.x86_64
libfontenc-1.1.3-9.fc29.x86_64
<snip>
```
Add the **sort** and **uniq** commands to sort the list and print the unique ones (since it's possible that some RPMs with identical names are installed):
```
[student@studentvm1 testdir]$ rpm -qa | sort | uniq
a2ps-4.14-39.fc29.x86_64
aajohan-comfortaa-fonts-3.001-3.fc29.noarch
abattis-cantarell-fonts-0.111-1.fc29.noarch
abiword-3.0.2-13.fc29.x86_64
abrt-2.11.0-1.fc29.x86_64
abrt-addon-ccpp-2.11.0-1.fc29.x86_64
abrt-addon-coredump-helper-2.11.0-1.fc29.x86_64
abrt-addon-kerneloops-2.11.0-1.fc29.x86_64
abrt-addon-pstoreoops-2.11.0-1.fc29.x86_64
abrt-addon-vmcore-2.11.0-1.fc29.x86_64
<snip>
```
Since this gives the correct list of RPMs you want to look at, you can use this as the input list to a loop that will print all the details of each RPM:
```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done
```
This code produces way more data than you want. Note that the loop is complete. The next step is to extract only the information the PHBs requested. So, add an **egrep** command, which is used to select **^Name** or **^Summary**. The caret (**^**) specifies the beginning of the line; thus, any line with Name or Summary at the beginning of the line is displayed.
```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary"
Name        : a2ps
Summary     : Converts text and other types of files to PostScript
Name        : aajohan-comfortaa-fonts
Summary     : Modern style true type font
Name        : abattis-cantarell-fonts
Summary     : Humanist sans serif font
Name        : abiword
Summary     : Word processing program
Name        : abrt
Summary     : Automatic bug detection and reporting tool
<snip>
```
You can try plain **grep** instead of **egrep** in the command above, but it will not work, because basic **grep** does not treat the unescaped **|** as alternation; that is exactly what **egrep** (or **grep -E**) adds. You could also pipe the output of this command through the **less** filter to explore the results. The final command sequence looks like this:
```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary" > RPM-summary.txt
```
This command-line program uses pipelines, redirection, and a **for** loop—all on a single line. It redirects the output of your little CLI program to a file that can be used in an email or as input for other purposes.
This process of building up the program one step at a time allows you to see the results of each step and ensure that it is working as you expect and provides the desired results.
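If you find yourself reusing a one-liner like this, it can also be saved as a short script. This is only a sketch of one possible layout (the output filename is arbitrary):
```
#!/usr/bin/env bash
# Print the name and summary of every installed RPM and save the result to a file.
for RPM in $(rpm -qa | sort | uniq)
do
    rpm -qi "$RPM"
done | egrep -i "^Name|^Summary" > RPM-summary.txt
```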
From this exercise, the PHBs received a list of over 1,900 separate RPM packages. I seriously doubt that anyone read that list. But I gave them exactly what they asked for, and I never heard another word from them about it.
### Other loops
There are two more types of loop structures available in Bash: the **while** and **until** structures, which are very similar to each other in both syntax and function. The basic syntax of these loop structures is simple:
```
while [ expression ] ; do list ; done
```
and
```
until [ expression ] ; do list ; done
```
The logic of the first reads: "While the expression evaluates as true, execute the list of program statements. When the expression evaluates as false, exit from the loop." And the second: "Until the expression evaluates as true, execute the list of program statements. When the expression evaluates as true, exit from the loop."
#### While loop
The **while** loop is used to execute a series of program statements while (so long as) the logical expression evaluates as true. Your PWD should still be **~/testdir**.
The simplest form of the **while** loop is one that runs forever. The following form uses the true statement to always generate a "true" return code. You could also use a simple "1"—and that would work just the same—but this illustrates the use of the true statement:
```
[student@studentvm1 testdir]$ X=0 ; while [ true ] ; do echo $X ; X=$((X+1)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 testdir]$
```
This CLI program should make more sense now that you have studied its parts. First, it sets **$X** to zero in case it has a value left over from a previous program or CLI command. Then, since the logical expression **[ true ]** always evaluates to 1, which is true, the list of program instructions between **do** and **done** is executed forever—or until you press **Ctrl+C** or otherwise send a signal 2 to the program. Those instructions are an arithmetic expansion that prints the current value of **$X** and then increments it by one.
One of the tenets of [_The Linux Philosophy for Sysadmins_][5] is to strive for elegance, and one way to achieve elegance is simplicity. You can simplify this program by using the variable increment operator, **++**. In the first instance, the current value of the variable is printed, and then the variable is incremented. This is indicated by placing the **++** operator after the variable:
```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
```
Now delete **| head** from the end of the program and run it again.
In this version, the variable is incremented before its value is printed. This is specified by placing the **++** operator before the variable. Can you see the difference?
```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((++X)) ; done | head
1
2
3
4
5
6
7
8
9
```
You have reduced two statements into a single one that prints the value of the variable and increments that value. There is also a decrement operator, **--**.
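As a quick sketch of the decrement operator at work, this counts down from 5 to 1 using the post-decrement form:
```
[student@studentvm1 ~]$ X=5 ; while [ $X -gt 0 ] ; do echo $((X--)) ; done
5
4
3
2
1
```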
You need a method for stopping the loop at a specific number. To accomplish that, change the true expression to an actual numeric evaluation expression. Have the program loop to 5 and stop. In the example code below, you can see that **-le** is the logical numeric operator for "less than or equal to." This means: "So long as **$X** is less than or equal to 5, the loop will continue. When **$X** increments to 6, the loop terminates."
```
[student@studentvm1 ~]$ X=0 ; while [ $X -le 5 ] ; do echo $((X++)) ; done
0
1
2
3
4
5
[student@studentvm1 ~]$
```
#### Until loop
The **until** command is very much like the **while** command. The difference is that it will continue to loop until the logical expression evaluates to "true." Look at the simplest form of this construct:
```
[student@studentvm1 ~]$ X=0 ; until false  ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 ~]$
```
It uses a logical comparison to count to a specific value:
```
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ]  ; do echo $((X++)) ; done
0
1
2
3
4
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ]  ; do echo $((++X)) ; done
1
2
3
4
5
[student@studentvm1 ~]$
```
### Summary
This series has explored many powerful tools for building Bash command-line programs and shell scripts. But it has barely scratched the surface of the many interesting things you can do with Bash; the rest is up to you.
I have discovered that the best way to learn Bash programming is to do it. Find a simple project that requires multiple Bash commands and make a CLI program out of them. Sysadmins do many tasks that lend themselves to CLI programming, so I am sure that you will easily find tasks to automate.
Many years ago, despite being familiar with other shell languages and Perl, I made the decision to use Bash for all of my sysadmin automation tasks. I have discovered that—sometimes with a bit of searching—I have been able to use Bash to accomplish everything I need.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-3
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: http://www.both.org/?page_id=1183
[3]: https://opensource.com/article/19/10/programming-bash-part-1
[4]: https://opensource.com/article/19/10/programming-bash-part-2
[5]: https://www.apress.com/us/book/9781484237298

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using SSH port forwarding on Fedora)
[#]: via: (https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Using SSH port forwarding on Fedora
======
![][1]
You may already be familiar with using the _[ssh][2]_ [command][2] to access a remote system. The protocol behind _ssh_ allows terminal input and output to flow through a [secure channel][3]. But did you know that you can also use _ssh_ to send and receive other data securely as well? One way is to use _port forwarding_, which allows you to connect network ports securely while conducting your _ssh_ session. This article shows you how it works.
### About ports
A standard Linux system has a set of network ports already assigned, from 0-65535. Your system reserves ports up to 1023 for system use. In many systems you can't elect to use one of these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system's _/etc/services_ file.
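If you are curious which service a well-known port is expected to carry, you can look it up in that file directly; for example (the exact entries vary slightly between distributions):
```
$ grep -E '^(http|https)[[:space:]]' /etc/services
```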
You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as _httpd_). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.
When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.
So why use port forwarding when you have so many ports available? Here are a couple of common cases in the life of a web developer.
### Local port forwarding
Imagine that you are doing web development on a remote system called _remote.example.com_. You usually reach this system via _ssh_ but it's behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it's helpful to be able to use your web browser to point to the remote system. But you can't reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.
Local forwarding allows you to tunnel a port available via the remote system through your _ssh_ connection. The port appears as a local port on your system (thus “local forwarding.”)
Let's say your web app is running on port 8000 on the _remote.example.com_ box. To locally forward that system's port 8000 to your system's port 8000, use the _-L_ option with _ssh_ when you start your session:
```
$ ssh -L 8000:localhost:8000 remote.example.com
```
Wait, why did we use _localhost_ as the target for forwarding? It's because from the perspective of _remote.example.com_, you're asking the host to use its own port 8000. (Recall that any host usually can refer to itself as _localhost_ to connect to itself via a network connection.) That port now connects to your system's port 8000. Once the _ssh_ session is ready, keep it open, and you can type _<http://localhost:8000>_ in your browser to see your web app. The traffic between systems now travels securely over an _ssh_ tunnel!
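If you prefer the command line to a browser, a quick sanity check of the tunnel might look like this (assuming _curl_ is installed and the app answers plain HTTP on that port):
```
$ curl -I http://localhost:8000
```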
If you have a sharp eye, you may have noticed something. What if we used a different hostname than _localhost_ for _remote.example.com_ to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the _db.example.com_ box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can't _ssh_ to the actual _db.example.com_ host:
```
$ ssh -L 3306:db.example.com:3306 remote.example.com
```
Now you can run MariaDB commands against your _localhost_ and you're actually using the _db.example.com_ box.
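For instance, with the tunnel from the previous command open, you could point a local MariaDB client at the forwarded port; the user name here is only a placeholder:
```
$ mysql -h 127.0.0.1 -P 3306 -u appuser -p   # 127.0.0.1 forces a TCP connection through the tunnel
```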
### Remote port forwarding
Remote forwarding lets you do things the opposite way. Imagine you're designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you're working in a coffee shop, and because of the network setup, they can't reach your laptop via a network connection. However, you both use the _remote.example.com_ system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.
Remote port forwarding lets you tunnel a port from your local system through your _ssh_ connection, and make it available on the remote system. Just use the _-R_ option when you start your _ssh_ session:
```
$ ssh -R 6000:localhost:5000 remote.example.com
```
Now when your friend inside the corporate firewall runs their browser, they can point it at _<http://remote.example.com:6000>_ and see your work. And as in the local port forwarding example, the communications travel securely over your _ssh_ session.
By default the _sshd_ daemon running on a host is set so that **only** that host can connect to its remote forwarded ports. Let's say your friend wanted to be able to let people on other _example.com_ corporate hosts see your work, and they weren't on _remote.example.com_ itself. You'd need the owner of the _remote.example.com_ host to add **one** of these options to _/etc/ssh/sshd_config_ on that box:
```
GatewayPorts yes # OR
GatewayPorts clientspecified
```
The first option means remote forwarded ports are available on all the network interfaces on _remote.example.com_. The second means that the client who sets up the tunnel gets to choose the address. This option is set to **no** by default.
With this option, you as the _ssh_ client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:
```
$ ssh -R *:6000:localhost:5000 # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 # single network
$ ssh -R remote.example.com:6000:localhost:5000 # single network
```
### Other notes
Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port in a default setup.
In addition, it's possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The _PermitOpen_ option for the _sshd_ daemon controls whether, and which, ports are available for TCP forwarding. The default setting is **any**, which allows all the examples above to work. To disallow any port forwarding, choose **none**, or choose only a specific **host:port** setting to permit. For more information, search for _PermitOpen_ in the manual page for _sshd_ daemon configuration:
```
$ man sshd_config
```
Finally, remember port forwarding only happens as long as the controlling _ssh_ session is open. If you need to keep the forwarding active for a long period, try running the session in the background using the _-N_ option. Make sure your console is locked to prevent tampering while you're away from it.
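One possible way to set up such a long-lived, command-less forward is to combine _-N_ with _-f_, which puts _ssh_ in the background after authentication. This is just a sketch reusing the earlier local forwarding example:
```
$ ssh -f -N -L 8000:localhost:8000 remote.example.com
```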
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/ssh-port-forwarding-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Secure_Shell
[3]: https://fedoramagazine.org/open-source-ssh-clients/

View File

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source CMS Ghost 3.0 Released with New features for Publishers)
[#]: via: (https://itsfoss.com/ghost-3-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Open Source CMS Ghost 3.0 Released with New features for Publishers
======
[Ghost][1] is a free and open source content management system (CMS). If you are not aware of the term, a CMS is software that allows you to build a website primarily focused on creating content, without knowledge of HTML and other web-related technologies.
Ghost is in fact one of the [best open source CMS][2] out there. Its main focus is on creating lightweight, fast loading and good looking blogs.
It has a modern intuitive editor with built-in SEO features. You also have native desktop (including Linux) and mobile apps. If you like the terminal, you can also use the CLI tools it provides.
Let's see what new features Ghost 3.0 brings.
### New Features in Ghost 3.0
![][3]
I'm usually intrigued by open source CMS solutions, so after reading the official announcement post, I went ahead and gave it a try by installing a new Ghost instance via a [Digital Ocean cloud server][4].
I was really impressed with the improvements they've made with the features and the UI compared to the previous version.
Here, I shall list out the key changes/additions worth mentioning.
#### Bookmark Cards
![][5]
In addition to all the subtle changes to the editor, it now lets you add a beautiful bookmark card by just entering the URL.
If you have used WordPress, you may have noticed that you need a plugin in order to add a card like that, so this is definitely a useful addition in Ghost 3.0.
#### Improved WordPress Migration Plugin
I haven't tested this in particular, but they have updated their WordPress migration plugin to let you easily clone the posts (with images) to Ghost CMS.
Basically, with the plugin, you will be able to create an archive (with images) and import it to Ghost CMS.
#### Responsive Image Galleries & Images
To make the user experience better, they have also updated the image galleries (which are now responsive) to present your picture collection comfortably across all devices.
In addition, the images in post/pages are now responsive as well.
#### Members & Subscriptions option
![Ghost Subscription Model][6]
Even though the feature is still in the beta phase, it lets you add members and a subscription model for your blog if you choose to make it a premium publication to sustain your business.
With this feature, you can make sure that your blog can only be accessed by the subscribed members or choose to make it available to the public in addition to the subscription.
#### Stripe: Payment Integration
It supports Stripe payment gateway by default to help you easily enable the subscription (or any type of payments) with no additional fee charged by Ghost.
#### New App Integrations
![][7]
You can now integrate a variety of popular applications/services with your blog on Ghost 3.0. It could come in handy to automate a lot of things.
#### Default Theme Improvement
The default theme (design) that comes baked in has improved and now offers a dark mode as well.
You can always choose to create a custom theme as well (if the pre-built themes do not fit your needs).
#### Other Minor Improvements
In addition to all the key highlights, the visual editor to create posts/pages has improved as well (with some drag and drop capabilities).
I'm sure there are a lot of technical changes as well, which you can check out in their [changelog][8] if you're interested.
### Ghost is gradually getting good traction
It's not easy to make your mark in a world dominated by WordPress. But Ghost has gradually formed a dedicated community of publishers around it.
Not only that, their managed hosting service [Ghost Pro][9] now has customers like NASA, Mozilla and DuckDuckGo.
In the last six years, Ghost has made $5 million in revenue from their Ghost Pro customers. Considering that they are a non-profit organization working on an open source solution, this is indeed an achievement.
This helps them remain independent by avoiding external funding from venture capitalists. The more customers for managed Ghost CMS hosting, the more funds go into the development of the free and open source CMS.
Overall, Ghost 3.0 is by far the best upgrade they've offered. I'm personally impressed with the features.
If you have websites of your own, what CMS do you use? Have you ever used Ghost? How's your experience with it? Do share your thoughts in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ghost-3-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/ghost/
[2]: https://itsfoss.com/open-source-cms/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-3.jpg?ssl=1
[4]: https://itsfoss.com/recommends/digital-ocean/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-editor-screenshot.png?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-subscription-model.jpg?resize=800%2C503&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-app-integration.jpg?ssl=1
[8]: https://ghost.org/faq/upgrades/
[9]: https://itsfoss.com/recommends/ghost-pro/

View File

@ -7,22 +7,22 @@
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)
Building a Messenger App: OAuth
构建一个即时消息应用(二):OAuth
======
[Previous part: Schema][1].
[上一篇:模式](https://linux.cn/article-11396-1.html)[原文][1]。
In this post we start the backend by adding social login.
在这篇帖子中,我们将会通过为应用添加社交登录功能进入后端开发。
This is how it works: the user click on a link that redirects him to the GitHub authorization page. The user grant access to his info and get redirected back logged in. The next time he tries to login, he wont be asked to grant permission, it is remembered so the login flow is as fast as a single click.
社交登录的工作方式十分简单:用户点击链接,然后被重定向到 GitHub 授权页面。当用户授予我们访问他的个人信息的权限之后,就会被重定向回来,并且已处于登录状态。下一次尝试登录时,系统将不会再次请求授权,也就是说,我们的应用已经记住了这个用户。这使得整个登录流程就像单击一下鼠标那样快。
Internally, the history is more complex tho. First we need the register a new [OAuth app on GitHub][2].
如果进一步考虑其内部实现的话,过程就会变得复杂起来。首先,我们需要注册一个新的 [GitHub OAuth 应用][2]。
The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. On development we are on localhost, so when you ship the app to production, register a new app with the correct callback URL.
这一步中,比较重要的是回调 URL。我们将它设置为 `http://localhost:3000/api/oauth/github/callback`。这是因为,在开发过程中,我们总是在本地主机上工作。一旦你要将应用交付生产,请使用正确的回调 URL 注册一个新的应用。
This will give you a client id and a secret key. Dont share them with anyone 👀
注册以后,你将会收到“客户端 ID”和“安全密钥”。安全起见请不要与任何人分享它们 👀
With that off of the way, lets start to write some code. Create a `main.go` file:
做好这些之后,让我们开始写一些代码吧。现在,创建一个 `main.go` 文件:
```
package main
@ -139,7 +139,7 @@ func intEnv(key string, fallbackValue int) int {
}
```
Install dependencies:
安装依赖项:
```
go get -u github.com/gorilla/securecookie
@ -151,28 +151,26 @@ go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```
We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
我们将会使用 `.env` 文件来保存密钥和其他配置。请创建这个文件,并保证里面至少包含以下内容:
```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```
The other enviroment variables we use are:
我们还要用到的其他环境变量有:
* `PORT`: The port in which the server runs. Defaults to `3000`.
* `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
* `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
* `HASH_KEY`: Key to sign cookies. Yeah, well use signed cookies for security.
* `JWT_KEY`: Key to sign JSON web tokens.
* `PORT`:服务器运行的端口,默认值是 `3000`
* `ORIGIN`:你的域名,默认值是 `http://localhost:3000/`。我们也可以在这里指定端口。
* `DATABASE_URL`Cockroach 数据库的地址。默认值是 `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`
* `HASH_KEY`:用于为 cookies 签名的密钥。没错,我们会使用已签名的 cookies 来确保安全。
* `JWT_KEY`:用于签署 JSON 网络令牌Json Web Token的密钥。
因为代码中已经设定了默认值,所以你也不用把它们写到 `.env` 文件中。
在读取配置并连接到数据库之后,我们会创建一个 OAuth 配置。我们会使用 `ORIGIN` 来构建回调 URL就和我们在 GitHub 页面上注册的一样)。我们的数据范围设置为 “read:user”。这会允许我们读取公开的用户信息这里我们只需要他的用户名和头像就够了。然后我们会初始化 cookie 和 JWT 签名器。定义一些端点并启动服务器。
Because they have default values, your dont need to write them on the `.env` file.
After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same we registered on the github page). And we set the scope to “read:user”. This will give us permission to read the public user info. Thats because we just need his username and avatar. Then we initialize the cookie and JWT signers. Define some endpoints and start the server.
Before implementing those HTTP handlers lets write a couple functions to send HTTP responses.
在实现 HTTP 处理程序之前,让我们编写一些函数来发送 HTTP 响应。
```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
@ -192,11 +190,11 @@ func respondError(w http.ResponseWriter, err error) {
}
```
The first one is to send JSON and the second one logs the error to the console and return a `500 Internal Server Error` error.
第一个函数用来发送 JSON而第二个将错误记录到控制台并返回一个 `500 Internal Server Error` 错误信息。
### OAuth Start
### OAuth 开始
So, the user clicks on a link that says “Access with GitHub”… That link points the this endpoint `/api/oauth/github` that will redirect the user to github.
所以,用户点击写着 “Access with GitHub” 的链接。该链接指向 `/api/oauth/github`,这将会把用户重定向到 github。
```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
@ -222,11 +220,11 @@ func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
}
```
OAuth2 uses a mechanism to prevent CSRF attacks so it requires a “state”. We use nanoid to create a random string and use that as state. We save it as a cookie too.
OAuth2 使用一种机制来防止 CSRF 攻击因此它需要一个“状态”state。我们使用 `Nanoid()` 来创建一个随机字符串,并用这个字符串作为状态。我们也把它保存为一个 cookie。
### OAuth Callback
### OAuth 回调
Once the user grant access to his info on the GitHub page, he will be redirected to this endpoint. The URL will come with the state and a code on the query string `/api/oauth/github/callback?state=&code=`
一旦用户授权我们访问他的个人信息,他将会被重定向到这个端点。这个 URL 的查询字符串上将会包含状态state和授权码code `/api/oauth/github/callback?state=&code=`
```
const jwtLifetime = time.Hour * 24 * 14
@ -341,19 +339,19 @@ func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
}
```
First we try to decode the cookie with the state we saved before. And compare it with the state that comes in the query string. In case they dont match, we return a `418 I'm teapot` error.
首先,我们会尝试使用之前保存的状态对 cookie 进行解码,并将其与查询字符串中的状态进行比较。如果它们不匹配,我们会返回一个 `418 I'm teapot` 错误。
Then we exchange the code for a token. This token is used to create an HTTP client to make requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint will give us the current authenticated user info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
接着,我们使用授权码生成一个令牌。这个令牌被用于创建 HTTP 客户端来向 GitHub API 发出请求。所以最终我们会向 `https://api.github.com/user` 发送一个 GET 请求。这个端点将会以 JSON 格式向我们提供当前经过身份验证的用户信息。我们将会解码这些内容,一并获取用户的 ID登录名用户名和头像 URL。
Then we try to find a user with that GitHub ID on the database. If none is found, we create one using that data.
然后我们将会尝试在数据库上找到具有该 GitHub ID 的用户。如果没有找到,就使用该数据创建一个新的。
Then, with the newly created user, we issue a JSON web token with the user ID as Subject and redirect to the frontend with the token, along side the expiration date in the query string.
之后对于新创建的用户我们会签发一个以用户 ID 为主题subject的 JSON 网络令牌并携带该令牌重定向到前端查询字符串中一并包含该令牌的到期日the expiration date
The web app will be for another post, but the URL you are being redirected is `/callback?token=&expires_at=`. There well have some JavaScript to extract the token and expiration date from the URL and do a GET request to `/api/auth_user` with the token in the `Authorization` header in the form of `Bearer token_here` to get the authenticated user and save it to localStorage.
这个 Web 应用会在后面的文章中介绍,但重定向的链接会是 `/callback?token=&expires_at=`。在那里,我们将会利用 JavaScript 从 URL 中获取令牌和到期日,并以 `Bearer token_here` 的形式把令牌放在 `Authorization` 标头中,对 `/api/auth_user` 发起 GET 请求,来获取已认证的用户并将其保存到 localStorage。
### Guard Middleware
### Guard 中间件
To get the current authenticated user we use a middleware. Thats because in future posts well have more endpoints that requires authentication, and a middleware allow us to share functionality.
为了获取当前已经过身份验证的用户,我们设计了 Guard 中间件。这是因为在接下来的文章中,我们会有很多需要进行身份认证的端点,而中间件将会允许我们共享这一功能。
```
type ContextKey struct {
@ -388,9 +386,9 @@ func guard(handler http.HandlerFunc) http.HandlerFunc {
}
```
First we try to read the token from the `Authorization` header or a `token` in the URL query string. If none found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
首先,我们尝试从 `Authorization` 标头或者 URL 查询字符串中的 `token` 字段中读取令牌。如果没有找到,我们需要返回 `401 Unauthorized`未授权错误。然后我们将会对令牌中的声明claims进行解码并使用其中的主题subject作为当前已经过身份验证的用户 ID。
Now, we can wrap any `http.handlerFunc` that needs authentication with this middleware and well have the authenticated user ID in the context.
现在,我们可以用这一中间件来封装任何需要授权的 `http.handlerFunc`,并且在处理函数的上下文中保有已经过身份验证的用户 ID。
```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
@ -398,7 +396,7 @@ var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
})
```
### Get Authenticated User
### 获取认证用户
```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
@ -422,13 +420,13 @@ func getAuthUser(w http.ResponseWriter, r *http.Request) {
}
```
We use the guard middleware to get the current authenticated user id and do a query to the database.
我们使用 Guard 中间件来获取当前经过身份认证的用户 ID 并查询数据库。
* * *
That will cover the OAuth process on the backend. In the next part well see how to start conversations with other users.
这一部分涵盖了后端的 OAuth 流程。在下一篇帖子中,我们将会看到如何开始与其他用户的对话。
[Souce Code][3]
[源代码][3]
--------------------------------------------------------------------------------

View File

@ -1,192 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew)
以变异测试为例:基于故障的试验
======
基于 .NET 的 xUnit.net 测试框架,开发一款自动猫门的逻辑,让门在白天开放,夜间锁定。
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
在本系列的[第一篇文章][2]中,我演示了如何使用设计的故障来确保代码中的预期结果。 在第二篇文章中,我将继续开发示例项目——一款自动猫门,该门在白天开放,夜间锁定。
在此提醒一下,您可以按照[此处的说明][3]使用 .NET 的 xUnit.net 测试框架。
### 关于白天时间
回想一下测试驱动开发TDD围绕着大量的单元测试。
第一篇文章中实现了满足 **Given7pmReturnNighttime** 单元测试期望的逻辑。但还没有完,现在您需要描述当前时间大于 7 点时期望发生的结果。这是新的单元测试,称为 **Given7amReturnDaylight**
```
[Fact]
public void Given7amReturnDaylight()
{
var expected = "Daylight";
var actual = dayOrNightUtility.GetDayOrNight();
Assert.Equal(expected, actual);
}
```
现在,新的单元测试失败了(越早失败越好!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
期望接收到的字符串值是 “Daylight”,但实际接收到的值是 “Nighttime”。
### 分析失败的测试用例
经过仔细检查,代码本身似乎已经出现问题。 事实证明,**GetDayOrNight** 方法的实现是不可测试的!
看看我们面临的核心挑战:
1. **GetDayOrNight 依赖隐藏输入。**
**dayOrNight** 的值取决于隐藏输入(它从内置系统时钟中获取一天的时间值)。
2. **GetDayOrNight 包含非确定性行为。**
从系统时钟中获取到的时间值是不确定的。 (因为)该时间取决于你运行代码的时间点,而这一点我们认为这是不可预测的。
3. **GetDayOrNight API 的质量差。**
该 API 与具体的数据源(系统 **DateTime**) 紧密耦合。
4. **GetDayOrNight 违反了单一责任原则。**
该方法实现同时使用和处理信息。优良作法是一种方法应负责执行一项职责。
5. **GetDayOrNight 有多个更改原因。**
可以想象内部时间源可能会更改的情况。同样,很容易想象处理逻辑也将改变。这些变化的不同原因必须相互隔离。
6. **当我们尝试了解 GetDayOrNight 的行为时,会发现它的 API 签名不足。**
最理想的做法就是,只需简单地查看 API 的签名,就能了解 API 预期的行为类型。
7. **GetDayOrNight 取决于全局共享可变状态。**
要不惜一切代价避免共享的可变状态!
8. **即使在阅读源代码之后,也无法预测 GetDayOrNight 方法的行为。**
这是一个严重的问题。 通过阅读源代码,应该始终非常清楚,系统一旦开始运行,便可以预测出其行为。
### 失败背后的原则
每当您遇到工程问题时,建议使用久经考验的分而治之策略。 在这种情况下,遵循关注点分离的原则是一种可行的方法。
> **separation of concerns** (**SoC**) 是一种用于将计算机程序分为不同模块的设计原理,以便每个模块都可以解决一个关注点。 关注点是影响计算机程序代码的一组信息。 关注点信息可能与要优化代码的硬件的细节一样概括,也可能与要实例化的类的名称一样具体。完美体现 SoC 的程序称为模块化程序。
>
> ([source][4])
**GetDayOrNight** 方法应仅与确定日期和时间值表示白天还是夜晚有关。 它不应该与寻找该值的来源有关。该问题应留给调用客户端。
必须将这个问题留给调用客户端,以获取当前时间。 这种方法符合另一个有价值的工程原理-控制反转。 Martin Fowler [在这里][5]详细探讨了这一概念。
> 框架的一个重要特征是用户定义的用于定制框架的方法通常来自于框架本身而不是从用户的应用程序代码调用来的。 该框架通常在协调和排序应用程序活动中扮演主程序的角色。 控制权的这种反转使框架有能力充当可扩展的框架。 用户提供的方法为框架中的特定应用程序量身制定泛化算法。
>
> \-- [Ralph Johnson and Brian Foote][6]
### 重构测试用例
因此,代码需要重构。 摆脱对内部时钟的依赖(**DateTime** 系统实用程序):
```
` DateTime time = new DateTime();`
```
删除上述代码在你的文件中应该是第7行。 通过将输入参数 **DateTime** 时间添加到 **GetDayOrNight** 方法,进一步重构代码。
这是重构类 **DayOrNightUtility.cs**:
```
using System;
namespace app {
public class DayOrNightUtility {
public string GetDayOrNight(DateTime time) {
string dayOrNight = "Nighttime";
if(time.Hour &gt;= 7 &amp;&amp; time.Hour &lt; 19) {
dayOrNight = "Daylight";
}
return dayOrNight;
}
}
}
```
重构代码需要更改单元测试。 需要准备 **nightHour****dayHour** 的测试数据,并将这些值传到**GetDayOrNight** 方法中。 以下是重构的单元测试:
```
using System;
using Xunit;
using app;
namespace unittest
{
public class UnitTest1
{
DayOrNightUtility dayOrNightUtility = [new][7] DayOrNightUtility();
DateTime nightHour = [new][7] DateTime(2019, 08, 03, 19, 00, 00);
DateTime dayHour = [new][7] DateTime(2019, 08, 03, 07, 00, 00);
[Fact]
public void Given7pmReturnNighttime()
{
var expected = "Nighttime";
var actual = dayOrNightUtility.GetDayOrNight(nightHour);
Assert.Equal(expected, actual);
}
[Fact]
public void Given7amReturnDaylight()
{
var expected = "Daylight";
var actual = dayOrNightUtility.GetDayOrNight(dayHour);
Assert.Equal(expected, actual);
}
}
}
```
### 经验教训
在继续开发这种简单的场景之前,请先回顾复习一下本次练习中所学到的东西。
运行无法测试的代码,很容易在不经意间制造陷阱。 从表面上看这样的代码似乎可以正常工作。但是遵循测试驱动开发TDD的实践首先描述期望结果---执行测试---暴露了代码中的严重问题。
这表明 TDD 是确保代码不会太凌乱的理想方法。 TDD 指出了一些问题区域,例如缺乏单一责任和存在隐藏输入。 此外TDD 有助于删除不确定性代码,并用行为明确的完全可测试代码替换它。
最后TDD 帮助交付易于阅读、逻辑易于遵循的代码。
在本系列的下一篇文章中,我将演示如何使用在本练习中创建的逻辑来实现功能代码,以及如何进行进一步的测试使其变得更好。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
[7]: http://www.google.com/search?q=new+msdn.microsoft.com

View File

@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
用 GameHub 集中管理你 Linux 上的所有游戏
======
你在 Linux 上打算怎么[玩游戏呢][1]?让我猜猜,要不就是从软件中心直接安装,要不就选 Steam、GOG、Humble Bundle 等平台,对吧?但是,如果你有多个游戏启动器和客户端,又要如何管理呢?好吧,对我来说这简直令人头疼 —— 这也是我发现 [GameHub][2] 这个应用之后感到非常高兴的原因。
GameHub 是为 Linux 发行版设计的一个桌面应用,它能让你“集中管理你的所有游戏”。这听起来很有趣,是不是?下面让我来具体说明一下。
![][3]
### 集中管理不同平台Linux游戏的GameHub功能
让我们看看对玩家来说让GameHub成为一个[不可或缺的Linux应用][4]的功能,都有哪些。
#### Steam、GOG 和 Humble Bundle 支持
![][5]
它支持 Steam、[GOG][6] 和 [Humble Bundle][7] 账户整合。你可以登录你的 GameHub 账号,从而在库管理器中管理所有游戏。
对我来说,我在 Steam 上有很多游戏Humble Bundle 上也有一些。我不能保证它支持所有平台,但可以确信的是,主流平台的游戏是没有问题的。
#### 本地游戏支持
![][8]
有很多网站专门推荐Linux游戏并[支持下载][9]。你可以通过下载安装包,或者添加可执行文件,从而管理本地游戏。
可惜的是在GameHub内无法在线搜索Linux游戏。如上图所示你需要将各平台游戏分开下载随后再添加到自己的GameHub账号中。
#### 模拟器支持
在模拟器方面,你可以玩[Linux上的retro game][10]。正如上图所示,你可以添加模拟器(或导入模拟器镜像)。
你可以在[RetroArch][11]查看可添加的模拟器,但也能根据需求,添加自定义模拟器。
#### 用户界面
![Gamehub 界面选项][12]
当然,用户体验很重要。因此,探究下用户界面都有些什么,也很有必要。
我个人觉得,这一应用很容易使用,并且黑色主题是一个加分项。
#### 手柄支持
如果你习惯在Linux系统上用手柄玩游戏 —— 你可以轻松在设置里添加,启用或禁用它。
#### 多个数据提供商
因为它需要获取你的游戏信息(或元数据),也意味着它需要一个数据源。你可以看到上图列出的所有数据源。
![Data Providers Gamehub][13]
这里你什么也不用做 —— 但如果你使用的是 Steam 之外的其他平台,你需要为 [IGDB 生成一个 API 密钥][14]。
我建议只有出现提示/通知或有些游戏在GameHub上没有任何描述/图片/状态时,再这么做。
#### 兼容性选项
![][15]
你有不支持在Linux上运行的游戏吗
不用担心GameHub上提供了多种兼容工具如 Wine/Proton,你可以利用它们让游戏得以运行。
我们无法确定具体哪个兼容工具适用于你 —— 所以你需要自己亲自测试。 然而,对许多游戏玩家来说,这的确是个很有用的功能。
### 如何在GameHub上管理你的游戏
在启动程序后你可以将自己的Steam/GOG/Humble Bundle 账号添加进来。
对于 Steam你需要在 Linux 发行版上安装 Steam 客户端。一旦安装完成,你就可以轻松地将账号中的游戏导入 GameHub。
![][16]
对于 GOG 和 Humble Bundle登录后就能直接在 GameHub 上管理游戏了。
如果你想添加模拟器或者本地安装文件,点击窗口右上角的 “**+**” 按钮进行添加。
### 如何安装游戏?
对于 Steam 游戏,它会自动启动 Steam 客户端来下载/安装游戏(我希望以后安装游戏可以不用启动 Steam
![][17]
但对于 GOG/Humble Bundle登录后就能直接下载并安装游戏。必要的话对于那些不支持在 Linux 上运行的游戏,你可以使用兼容工具。
无论是模拟器游戏,还是本地游戏,只需添加安装包或导入模拟器镜像就可以了。这里没什么其他步骤要做。
### GameHub: 如何安装它呢?
![][18]
首先,你可以直接在软件中心或者应用商店内搜索。 它在 **Pop!_Shop** 分类下可见。所以,它在绝大多数官方源中都能找到。
如果你在这些地方都没有找到,你可以手动添加源,并从终端上安装它,你需要输入以下命令:
```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```
如果你遇到了 “**add-apt-repository command not found**” 这个错误,可以看看 [add-apt-repository not found error][19] 这篇文章,它能帮你解决这一问题。
这里还提供 AppImage 和 Flatpak 版本。在[官网][2]上,你可以找到针对其他 Linux 发行版的安装手册。
同时,你还可以从它的 [GitHub 页面][20]下载之前版本的安装包。
[GameHub][2]
**注意**
GameHub 是相当灵活的一个集中游戏管理应用。 用户界面和选项设置也相当直观。
你之前是否使用过这一应用呢?如果有,请在评论里写下你的感受。
而且,如果你想尝试一些与此功能相似的工具/应用,请务必告诉我们。
--------------------------------------------------------------------------------
via: https://itsfoss.com/gamehub/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases

View File

@ -0,0 +1,207 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
如何在 CentOS 8 / RHEL 8 中配置 Rsyslog 服务器
======
**Rsyslog** 是一个免费的开源日志记录程序,默认安装在 **CentOS** 8 和 **RHEL** 8 系统上。它提供了一种从客户端节点到单个中央服务器的“集中日志”的简单有效的方法。日志集中化有两个好处。首先,它简化了日志查看,因为系统管理员可以在一个中心节点查看远程服务器的所有日志,而无需登录每个客户端系统来检查日志。如果需要监视多台服务器,这将非常有用。其次,如果远程客户端崩溃,你不用担心丢失日志,因为所有日志都将保存在**中央 rsyslog 服务器**上。Rsyslog 取代了仅支持 **UDP** 协议的 syslog。它以优异的功能扩展了基本的 syslog 协议例如在传输日志时支持 **UDP** 和 **TCP** 协议、增强的过滤功能以及灵活的配置选项。让我们来探讨如何在 CentOS 8 / RHEL 8 系统中配置 Rsyslog 服务器。
[![configure-rsyslog-centos8-rhel8][1]][2]
### 预先条件
我们将搭建以下实验环境来测试集中式日志记录过程:
* **Rsyslog 服务器**       CentOS 8 Minimal    IP 地址: 10.128.0.47
* **客户端系统**         RHEL 8 Minimal      IP 地址: 10.128.0.48
通过上面的设置,我们将演示如何设置 Rsyslog 服务器,然后配置客户端系统以将日志发送到 Rsyslog 服务器进行监视。
让我们开始!
### 在 CentOS 8 上配置 Rsyslog 服务器
默认情况下Rsyslog 已安装在 CentOS 8 / RHEL 8 服务器上。要验证 Rsyslog 的状态,请通过 SSH 登录并运行以下命令:
```
$ systemctl status rsyslog
```
示例输出
![rsyslog-service-status-centos8][1]
如果由于某种原因不存在 rsyslog那么可以使用以下命令进行安装
```
$ sudo yum install rsyslog
```
接下来,你需要修改 Rsyslog 配置文件中的一些设置。打开配置文件。
```
$ sudo vim /etc/rsyslog.conf
```
滚动并取消注释下面的行,以允许通过 UDP 协议接收日志
```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```
![rsyslog-conf-centos8-rhel8][1]
同样,如果你希望启用 TCP rsyslog 接收,请取消注释下面的行:
```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```
![rsyslog-conf-tcp-centos8-rhel8][1]
保存并退出配置文件。
要从客户端系统接收日志,我们需要在防火墙上打开 Rsyslog 默认端口 514。为此请运行
```
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
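上面的命令只放行了 TCP 的 514 端口。如果你在前面启用的是 UDP 接收imudp 模块),可能还需要放行对应的 UDP 端口,下面是一个示例:
```
# sudo firewall-cmd --add-port=514/udp --zone=public --permanent
```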
接下来,重新加载防火墙保存更改
```
# sudo firewall-cmd --reload
```
示例输出
![firewall-ports-rsyslog-centos8][1]
接下来,重启 Rsyslog 服务器
```
$ sudo systemctl restart rsyslog
```
要在启动时运行 Rsyslog运行以下命令
```
$ sudo systemctl enable rsyslog
```
要确认 Rsyslog 服务器正在监听 514 端口,请使用 netstat 命令,如下所示:
```
$ sudo netstat -pnltu
```
示例输出
![netstat-rsyslog-port-centos8][1]
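如果系统上没有安装 netstat最小化安装中常常如此也可以用 ss 命令得到类似的结果,例如:
```
$ sudo ss -pnltu | grep 514
```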
完美!我们已经成功配置了 Rsyslog 服务器来从客户端系统接收日志。
要实时查看日志消息,请运行以下命令:
```
$ tail -f /var/log/messages
```
现在开始配置客户端系统。
### 在 RHEL 8 上配置客户端系统
与 Rsyslog 服务器一样,登录并通过以下命令检查 rsyslog 守护进程是否正在运行:
```
$ sudo systemctl status rsyslog
```
示例输出
![client-rsyslog-service-rhel8][1]
接下来,打开 rsyslog 配置文件
```
$ sudo vim /etc/rsyslog.conf
```
在文件末尾,添加以下行
```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
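顺带一提,`*.*` 表示转发所有设施、所有级别的日志。如果只想转发某一类日志(例如认证相关日志),可以把它换成更具体的选择器,下面是一个示例(这里假设使用 TCP
```
authpriv.*  @@10.128.0.47:514
```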
保存并退出配置文件。就像 Rsyslog 服务器一样,打开 514 端口,这是防火墙上的默认 Rsyslog 端口。
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
接下来,重新加载防火墙以保存更改
```
$ sudo firewall-cmd --reload
```
接下来,重启 rsyslog 服务
```
$ sudo systemctl restart rsyslog
```
要在启动时运行 Rsyslog请运行以下命令
```
$ sudo systemctl enable rsyslog
```
### 测试日志记录操作
已经成功安装并配置 Rsyslog 服务器和客户端后,就该验证你的配置是否按预期运行了。
在客户端系统上,运行以下命令:
```
# logger "Hello guys! This is our first log"
```
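logger 还支持用 `-t` 选项为消息加上标签,便于之后在服务器端按程序名过滤(这里的标签名只是示例):
```
# logger -t webapp "Test message from the client"
```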
现在进入 Rsyslog 服务器并运行以下命令来实时查看日志消息
```
# tail -f /var/log/messages
```
客户端系统上命令运行的输出显示在了 Rsyslog 服务器的日志中,这意味着 Rsyslog 服务器正在接收来自客户端系统的日志。
![centralize-logs-rsyslogs-centos8][1]
就是这些了!我们成功设置了 Rsyslog 服务器来接收来自客户端系统的日志信息。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg