mirror of https://github.com/LCTT/TranslateProject.git
synced 2024-12-26 21:30:55 +08:00
commit a8522c16c3
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11499-1.html)
[#]: subject: (How writers can get work done better with Git)
[#]: via: (https://opensource.com/article/19/4/write-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
> 如果你是一名写作者,你也能从使用 Git 中受益。在我们的系列文章中了解有关 Git 鲜为人知的用法。

![](https://img.linux.net.cn/data/attachment/album/201910/24/222747ltajik2ymzmmttha.png)

[Git][2] 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。
### 写作者的 Git

有些人写小说,也有人撰写学术论文、诗歌、剧本、技术手册或有关开源的文章。许多人都在做一些各种写作。相同的是,如果你是一名写作者,或许能从使用 Git 中受益。尽管 Git 是著名的计算机程序员所使用的高度技术性工具,但它也是现代写作者的理想之选,本文将向你演示如何改变你的书写方式,以及为什么要这么做。

但是,在谈论 Git 之前,重要的是先谈谈“副本”(或者叫“内容”,对于数字时代而言)到底是什么,以及为什么它与你的交付*媒介*不同。这是 21 世纪,大多数写作者选择的工具是计算机。尽管计算机看似擅长将副本的编辑和布局等过程结合在一起,但写作者还是(重新)发现将内容与样式分开是一个好主意。这意味着你应该在计算机上像在打字机上而不是在文字处理器中进行书写。以计算机术语而言,这意味着以*纯文本*形式写作。

你只需要逐字写下你的内容,而将交付工作留给发布者。即使你是自己发布,将字词作为写作作品的一种源代码也是一种更聪明、更有效的工作方式,因为在发布时,你可以使用相同的源(你的纯文本)生成适合你的目标输出(用于打印的 PDF、用于电子书的 EPUB、用于网站的 HTML 等)。

用纯文本编写不仅意味着你不必担心布局或文本样式,而且也不再需要专门的工具。无论是手机或平板电脑上的基本的记事本应用程序、计算机附带的文本编辑器,还是从互联网上下载的免费编辑器,任何能够产生文本内容的工具对你而言都是有效的“文字处理器”。无论你身在何处或在做什么,几乎可以在任何设备上书写,并且所生成的文本可以与你的项目完美集成,而无需进行任何修改。

而且,Git 专门用来管理纯文本。

### Atom 编辑器

当你以纯文本形式书写时,文字处理程序会显得过于庞大。使用文本编辑器更容易,因为文本编辑器不会尝试“有效地”重组输入内容。它使你可以将脑海中的单词输入到屏幕中,而不会受到干扰。更好的是,文本编辑器通常是围绕插件体系结构设计的,这样应用程序本身很基础(它用来编辑文本),但是你可以围绕它构建一个环境来满足你的各种需求。

[Atom][4] 编辑器就是这种设计理念的一个很好的例子。这是一个具有内置 Git 集成的跨平台文本编辑器。如果你不熟悉纯文本格式,也不熟悉 Git,那么 Atom 是最简单的入门方法。
Atom 当前没有在 BSD 上构建。但是,有很好的替代方法,例如……

#### 快速指导

如果要使用纯文本和 Git,则需要适应你的编辑器。Atom 的用户界面可能比你习惯的更加动态。实际上,你可以将它视为 Firefox 或 Chrome,而不是文字处理程序,因为它具有可以根据需要打开或关闭的选项卡和面板,甚至还可以安装和配置附件。尝试全部掌握 Atom 如许之多的功能是不切实际的,但是你至少可以知道有什么功能。

当打开 Atom 时,它将显示一个欢迎屏幕。如果不出意外,此屏幕很好地介绍了 Atom 的选项卡式界面。你可以通过单击 Atom 窗口顶部选项卡上的“关闭”图标来关闭欢迎屏幕,并使用“文件 > 新建文件”创建一个新文件。

使用纯文本格式与使用文字处理程序有点不同,因此这里有一些技巧,以人可以理解的方式编写内容,并且 Git 和计算机可以解析、跟踪和转换。

#### 用 Markdown 书写

如今,当人们谈论纯文本时,大多是指 Markdown。Markdown 与其说是格式,不如说是样式,这意味着它旨在为文本提供可预测的结构,以便计算机可以检测自然的模式并智能地转换文本。Markdown 有很多定义,但是最好的技术定义和备忘清单在 [CommonMark 的网站][8]上。

```
# Chapter 1

And it can even reference an image.
```
从示例中可以看出,Markdown 读起来感觉不像代码,但可以将其视为代码。如果你遵循 CommonMark 定义的 Markdown 规范,那么一键就可以可靠地将 Markdown 的文字转换为 .docx、.epub、.html、MediaWiki、.odt、.pdf、.rtf 和各种其他的格式,而*不会*失去格式。

你可以认为 Markdown 有点像文字处理程序的样式。如果你曾经为出版社撰写过一套样式来控制章节标题及其样式,那基本上就是一回事,除了不是从下拉菜单中选择样式以外,你需要给你的文字添加一些小记号。对于任何习惯“以文字交谈”的现代阅读者来说,这些表示法都是很自然的,但是在呈现文本时,它们会被精美的文本样式替换掉。实际上,这就是文字处理程序在后台秘密进行的操作。文字处理器显示粗体文本,但是如果你可以看到使文本变为粗体的生成代码,则它与 Markdown 很像(实际上,它是更复杂的 XML)。使用 Markdown 可以消除这种代码和样式之间的阻隔,一方面看起来更可怕一些,但另一方面,你可以在几乎所有可以生成文本的东西上书写 Markdown 而不会丢失任何格式信息。

Markdown 文件流行的文件扩展名是 .md。如果你使用的平台不知道 .md 文件是什么,则可以手动将该扩展名与 Atom 关联,或者仅使用通用的 .txt 扩展名。文件扩展名不会更改文件的性质,它只会改变你的计算机决定如何处理它的方式。Atom 和某些平台足够聪明,可以知道该文件是纯文本格式,无论你给它以什么扩展名。
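前文提到可以从同一份 Markdown 源文件生成多种交付格式,这通常借助 pandoc 之类的转换工具完成(pandoc 并非本文指定的工具,此处仅作示意,文件名也是虚构的):

```shell
# 现场生成一份最小的 Markdown 源文件用于演示
printf '# Chapter 1\n\nHello, plain text.\n' > chapter-1.md

# 如果安装了 pandoc,同一份源文件可以转换为多种目标格式;
# 未安装时给出提示(也可以换用其他转换工具)
if command -v pandoc >/dev/null 2>&1; then
    pandoc chapter-1.md -o chapter-1.html   # 网站用 HTML
    pandoc chapter-1.md -o chapter-1.epub   # 电子书用 EPUB
    echo "converted chapter-1.md"
else
    echo "pandoc not installed, skipping conversion"
fi
```

无论选用哪种工具,思路都一样:纯文本是源,交付格式是由源生成的产物。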
#### 实时预览

Atom 具有 “Markdown 预览” 插件,该插件可以向你显示正在编辑内容的预览效果。

要激活此预览窗格,请选择“包 > Markdown 预览 > 切换预览” 或按 `Ctrl + Shift + M`。

此视图为你提供了两全其美的方法。无需承担为你的文本添加样式的负担就可以写作,而你也可以看到一个通用的示例外观,至少是以典型的数字化格式显示文本的外观。当然,关键是你无法控制文本的最终呈现方式,因此不要试图调整 Markdown 来强制以某种方式显示呈现的预览。
#### 每行一句话

你的高中写作老师不会看你的 Markdown。

一开始它不那么自然,但是在数字世界中,保持每行一个句子更有意义。Markdown 会忽略单个换行符(当你按下 `Return` 或 `Enter` 键时),并且只在单个空行之后才会创建一个新段落。

![Writing in Atom][10]

每行写一个句子的好处是你的工作更容易跟踪。也就是说,假如你在段落的开头更改了一个单词,如果更改仅限于一行而不是一个长的段落中的一个单词,那么 Atom、Git 或任何应用程序很容易以有意义的方式突出显示该更改。换句话说,对一个句子的更改只会影响该句子,而不会影响整个段落。

你可能会想:“许多文字处理器也可以跟踪更改,它们可以突出显示已更改的单个单词。”但是这些修订跟踪器绑定在该字处理器的界面上,这意味着你必须先打开该字处理器才能浏览修订。在纯文本工作流程中,你可以以纯文本形式查看修订,这意味着无论手头有什么,只要该设备可以处理纯文本(大多数都可以),就可以进行编辑或批准编辑。

诚然,写作者通常不会考虑行号,但它对于计算机有用,并且通常是一个很好的参考点。默认情况下,Atom 为文本文档的行进行编号。按下 `Enter` 键或 `Return` 键后,一*行*就是一行。

![Writing in Atom][11]

如果(在 Atom 的)一行的行号中有一个点而不是一个数字,则表示它是上一行折叠的一部分,因为它超出了你的屏幕。
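“每行一句话”带来的跟踪好处,可以用一个最小的 shell 演示来验证(这里借用标准的 `diff` 命令示意,文件名与内容均为虚构):

```shell
# 同一段落的两个版本:每行一句话,只改动了第一句中的一个词
printf '黎明时分我们出发了。\n路上安静得出奇。\n' > draft-v1.md
printf '清晨时分我们出发了。\n路上安静得出奇。\n' > draft-v2.md

# diff 以“行”为单位比较,所以只有被改动的那一句会被标出,
# 而不是整个段落;输出形如:
#   1c1
#   < 黎明时分我们出发了。
#   ---
#   > 清晨时分我们出发了。
diff draft-v1.md draft-v2.md || true   # diff 对有差异的文件返回非零退出码,属预期
```

Git 的差异视图也是按行工作的,因此同样的原则适用于 Atom 里看到的修订高亮。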
#### 主题

![Atom's themes][13]

要使用已安装的主题或根据喜好自定义主题,请导航至“设置”标签页中的“主题”类别中。从下拉菜单中选择要使用的主题。更改会立即进行,因此你可以准确了解主题如何影响你的环境。

你也可以在“设置”标签的“编辑器”类别中更改工作字体。Atom 默认采用等宽字体,程序员通常首选这种字体。但是你可以使用系统上的任何字体,无论是衬线字体、无衬线字体、哥特式字体还是草书字体。无论你想整天盯着什么字体都行。

创建长文档时,我发现每个文件写一个章节比在一个文件中写整本书更有意义。此外,我不会以明显的语法 `chapter-1.md` 或 `1.example.md` 来命名我的章节,而是以章节标题或关键词(例如 `example.md`)命名。为了将来为自己提供有关如何编写本书的指导,我维护了一个名为 `toc.md`(用于“目录”)的文件,其中列出了各章的(当前)顺序。

我这样做是因为,无论我多么相信第 6 章都不可能出现在第 1 章之前,但在我完成整本书之前,几乎难以避免我会交换一两个章节的顺序。我发现从一开始就保持动态变化可以帮助我避免重命名混乱,也可以帮助我避免僵化的结构。
### 在 Atom 中使用 Git

每位写作者的共同点是两件事:他们为流传而写作,而他们的写作是一段旅程。你不能一坐下来写作就完成了最终稿件。顾名思义,你有一个初稿。该草稿会经过修订,你会仔细地将每个修订保存一式两份或三份的备份,以防万一你的文件损坏了。最终,你得到了所谓的最终草稿,但很有可能你有一天还会回到这份最终草稿,要么恢复好的部分,要么修改坏的部分。

Atom 最令人兴奋的功能是其强大的 Git 集成。无需离开 Atom,你就可以与 Git 的所有主要功能进行交互,跟踪和更新项目、回滚你不喜欢的更改、集成来自协作者的更改等等。最好的学习方法就是逐步学习,因此这是在一个写作项目中从始至终在 Atom 界面中使用 Git 的方法。

第一件事:通过选择 “视图 > 切换 Git 标签页” 来显示 Git 面板。这将在 Atom 界面的右侧打开一个新标签页。现在没什么可看的,所以暂时保持打开状态就行。

#### 建立一个 Git 项目

你可以认为 Git 被绑定到一个文件夹。Git 目录之外的任何文件夹都不知道 Git,而 Git 也不知道外面。Git 目录中的文件夹和文件将被忽略,直到你授予 Git 权限来跟踪它们为止。

你可以通过在 Atom 中创建新的项目文件夹来创建 Git 项目。选择 “文件 > 添加项目文件夹”,然后在系统上创建一个新文件夹。你创建的文件夹将出现在 Atom 窗口左侧的“项目面板”中。

右键单击你的新项目文件夹,然后选择“新建文件”以在项目文件夹中创建一个新文件。如果你要导入文件到新项目中,请右键单击该文件夹,然后选择“在文件管理器中显示”,以在系统的文件查看器中打开该文件夹(Linux 上为 Dolphin 或 Nautilus,Mac 上为 Finder,Windows 上是 Explorer),然后拖放文件到你的项目文件夹。

在 Atom 中打开一个项目文件(你创建的空文件或导入的文件)后,单击 Git 标签中的 “<ruby>创建存储库<rt>Create Repository</rt></ruby>” 按钮。在弹出的对话框中,单击 “<ruby>初始化<rt>Init</rt></ruby>” 以将你的项目目录初始化为本地 Git 存储库。Git 会将 `.git` 目录(在系统的文件管理器中不可见,但在 Atom 中可见)添加到项目文件夹中。不要被这个愚弄了:`.git` 目录是 Git 管理的,而不是由你管理的,因此一般你不要动它。但是在 Atom 中看到它可以很好地提醒你正在由 Git 管理的项目中工作。换句话说,当你看到 `.git` 目录时,就有了修订历史记录。

在你的空文件中,写一些东西。你是写作者,所以输入一些单词就行。你可以随意输入任何一组单词,但要记住上面的写作技巧。

按 `Ctrl + S` 保存文件,该文件将显示在 Git 标签的 “<ruby>未暂存的改变<rt>Unstaged Changes</rt></ruby>” 部分中。这意味着该文件存在于你的项目文件夹中,但尚未提交给 Git 管理。通过单击 Git 选项卡右上方的 “<ruby>暂存全部<rt>Stage All</rt></ruby>” 按钮,以允许 Git 跟踪这些文件。如果你使用过带有修订历史记录的文字处理器,则可以将此步骤视为允许 Git 记录更改。
#### Git 提交

Git 的<ruby>提交<rt>commit</rt></ruby>会将你的文件发送到 Git 的内部和永久存档中。如果你习惯于文字处理程序,这就类似于给一个修订版命名。要创建一个提交,请在 Git 选项卡底部的“<ruby>提交<rt>Commit</rt></ruby>”消息框中输入一些描述性文本。你可能会感到含糊不清或随意写点什么,但如果你想在将来知道进行修订的原因,那么输入一些有用的信息会更有用。

第一次提交时,必须创建一个<ruby>分支<rt>branch</rt></ruby>。Git 分支有点像另外一个空间,它允许你从一个时间轴切换到另一个时间轴,以进行你可能想要也可能不想要永久保留的更改。如果最终喜欢该更改,则可以将一个实验分支合并到另一个实验分支,从而统一项目的不同版本。这是一个高级过程,不需要先学会,但是你仍然需要一个活动分支,因此你必须为首次提交创建一个分支。

单击 Git 选项卡最底部的“<ruby>分支<rt>Branch</rt></ruby>”图标,以创建新的分支。
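Atom 的这些按钮在后台执行的就是普通的 Git 命令。上面“初始化 → 暂存 → 建分支 → 提交”的流程,在命令行中大致如下(目录名、分支名和提交信息均为示意):

```shell
mkdir my-book && cd my-book           # 新建项目文件夹
git init                              # 相当于点击“创建存储库 > 初始化”
printf '# Chapter 1\n' > example.md   # 写一些东西并保存

git add example.md                    # 相当于“暂存全部”
git checkout -b draft                 # 首次提交前创建一个活动分支
git commit -m '初稿:第一章开头'      # 相当于填写提交信息后点击“提交”

git log --oneline                     # 查看修订历史
```

理解了这一层对应关系,以后无论换到哪个编辑器或设备,修订历史都还是同一份。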
#### 历史记录和 Git 差异

一个自然而然的问题是你应该多久做一次提交。这并没有正确的答案。使用 `Ctrl + S` 保存文件和提交到 Git 是两个单独的过程,因此你会一直做这两个过程。每当你觉得自己已经做了重要的事情或打算尝试一个可能会被干掉的疯狂的新想法时,你可能都会想要做次提交。

要了解提交对工作流程的影响,请从测试文档中删除一些文本,然后在顶部和底部添加一些文本。再次提交。这样做几次,直到你在 Git 标签的底部有了一小段历史记录,然后单击其中一个提交以在 Atom 中查看它。
#### 远程备份

使用 Git 的优点之一是,按照设计它是分布式的,这意味着你可以将工作提交到本地存储库,并将所做的更改推送到任意数量的服务器上进行备份。你还可以从这些服务器中拉取更改,以便你碰巧正在使用的任何设备始终具有最新更改。

为此,你必须在 Git 服务器上拥有一个帐户。有几种免费的托管服务,其中包括 GitHub,这个公司开发了 Atom,但奇怪的是 GitHub 不是开源的;而 GitLab 是开源的。相比私有软件,我更喜欢开源,在本示例中,我将使用 GitLab。

如果你还没有 GitLab 帐户,请注册一个帐户并开始一个新项目。项目名称不必与 Atom 中的项目文件夹匹配,但是如果匹配,则可能更有意义。你可以将项目保留为私有,在这种情况下,只有你和任何一个你给予了明确权限的人可以访问它;或者,如果你希望该项目可供任何互联网上偶然发现它的人使用,则可以将其公开。

不要将 README 文件添加到项目中。

创建项目后,它将为你提供有关如何设置存储库的说明。如果你决定在终端中或通过单独的 GUI 使用 Git,这是非常有用的信息,但是 Atom 的工作流程则有所不同。

单击 GitLab 界面右上方的 “<ruby>克隆<rt>Clone</rt></ruby>” 按钮。这显示了访问 Git 存储库必须使用的地址。复制 “SSH” 地址(而不是 “https” 地址)。

在 Git 标签的底部,出现了一个新按钮,标记为 “<ruby>提取<rt>Fetch</rt></ruby>”。由于你的服务器是全新的服务器,因此没有可供你提取的数据,因此请右键单击该按钮,然后选择“<ruby>推送<rt>Push</rt></ruby>”。这会将你的更改推送到你的 GitLab 帐户,现在你的项目已备份到 Git 服务器上。

你可以在每次提交后将更改推送到服务器。它提供了即刻的异地备份,并且由于数据量通常很少,因此它几乎与本地保存一样快。

### 撰写与 Git
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/write-git

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,22 +1,24 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11498-1.html)
|
||||
[#]: subject: (How DevOps professionals can become security champions)
|
||||
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
|
||||
[#]: author: (Jessica Repka https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo)
|
||||
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
|
||||
|
||||
DevOps 专业人员如何成为网络安全拥护者
======

> 打破信息孤岛,成为网络安全的拥护者,这对你、对你的职业、对你的公司都会有所帮助。

![](https://img.linux.net.cn/data/attachment/album/201910/24/202520u09xw2vm4w2jm0mx.jpg)

安全是 DevOps 中一个被误解了的部分,一些人认为它不在 DevOps 的范围内,而另一些人认为它太过重要(并且被忽视),建议改为使用 DevSecOps。无论你同意哪一方的观点,网络安全都会影响到我们每一个人,这是很明显的事实。

每年,[黑客行为的统计数据][3] 都会更加令人震惊。例如,每 39 秒就有一次黑客行为发生,这可能会导致你为公司写的记录、身份和专有项目被盗。你的安全团队可能需要花上几个月(也可能是永远找不到)才能发现这次黑客行为背后是谁,目的是什么,人在哪,什么时候黑进来的。

运维专家面对这些棘手问题应该如何是好?要我说,现在是时候成为网络安全的拥护者,变为解决方案的一部分了。
### 孤岛势力范围的战争

为了打破这些孤岛并结束势力战争,我在每个安全团队中都选了至少一个人来交谈,了解我们组织日常安全运营里的来龙去脉。我开始做这件事是出于好奇,但我持续做这件事是因为它总是能带给我一些有价值的、新的观点。例如,我了解到,对于每个因为失败的安全性而被停止的部署,安全团队都在疯狂地尝试修复 10 个他们看见的其他问题。他们反应的莽撞和尖锐是因为他们必须在有限的时间里修复这些问题,不然这些问题就会变成一个大问题。

考虑到发现、识别和撤销已完成操作所需的大量知识,或者指出 DevOps 团队正在做什么(没有背景信息)然后复制并测试它。所有的这些通常都要由人手配备非常不足的安全团队完成。

这就是你的安全团队的日常生活,并且你的 DevOps 团队看不到这些。ITSEC 的日常工作意味着超时加班和过度劳累,以确保公司、公司的团队、团队里工作的所有人能够安全地工作。
### 成为安全拥护者的方法

这些是你成为你的安全团队的拥护者之后可以帮到它们的。这意味着,对于你做的所有操作,你必须仔细、认真地查看所有能够让其他人登录的方式,以及他们能够从中获得什么。

帮助你的安全团队就是在帮助你自己。将工具添加到你的工作流程里,以此将你知道的要干的活和他们知道的要干的活结合到一起。从小事入手,例如阅读公共漏洞披露(CVE),并将扫描模块添加到你的 CI/CD 流程里。对于你写的所有代码,都会有一个开源扫描工具,添加小型开源工具(例如下面列出来的)在长远看来是可以让项目更好的。

**容器扫描工具:**

* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]

**代码扫描工具:**

* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]

**Kubernetes 安全工具:**

* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
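上文建议把扫描模块加入 CI/CD 流程。以 GitLab CI 为例,一个假设性的片段大致如下(阶段名、脚本名和镜像地址都是示意,并非某个扫描工具的官方用法,具体命令以所选工具的文档为准):

```yaml
# .gitlab-ci.yml 片段(示意):在构建之后增加一个安全扫描阶段
stages:
  - build
  - security-scan

container-scan:
  stage: security-scan
  script:
    # 这里替换为你选用的扫描工具的调用方式,
    # 例如上面列出的 Anchore Engine、Clair 等
    - ./run-container-scan.sh registry.example.com/my-app:latest
  allow_failure: false   # 扫描失败即阻断流水线
```

关键不在于具体工具,而在于让扫描像单元测试一样,成为每次交付都会自动经过的一道关卡。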
### 保持你的 DevOps 态度

如果你的工作角色是和 DevOps 相关的,那么学习新技术和如何运用这项新技术创造新事物就是你工作的一部分。安全也是一样。下面是我在 DevOps 安全方面保持最新的方法列表。

* 尝试做一次黑客马拉松。一些公司每个月都要这样做一次;如果你觉得还不够、想了解更多,可以访问 Beginner Hack 1.0 网站。
* 每年至少一次和你的安全团队的成员一起参加安全会议,从他们的角度来看事情。

### 成为拥护者是为了变得更好

你应该成为你的安全的拥护者,下面是我们列出来的几个理由。首先是增长你的知识,帮助你的职业发展。第二是帮助其他的团队,培养新的关系,打破对你的组织有害的孤岛。在你的整个组织内这样做有很多好处,包括设置沟通团队的典范,并鼓励人们一起工作。你同样能促进在整个组织中分享知识,并给每个人提供一个在安全方面更好的内部合作的新契机。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/devops-security-champions

作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11497-1.html)
|
||||
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
|
||||
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
|
||||
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
|
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11502-1.html)
[#]: subject: (Pylint: Making your Python code consistent)
[#]: via: (https://opensource.com/article/19/10/python-pylint-introduction)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Pylint:让你的 Python 代码保持一致
======

> 当你想要争论代码复杂性时,Pylint 是你的朋友。

![OpenStack source code \(Python\) in VIM][1]

Pylint 是更高层级的 Python 样式强制程序。而 [flake8][2] 和 [black][3] 检查的是“本地”样式:换行位置、注释的格式、发现注释掉的代码或日志格式中的错误做法之类的问题。

默认情况下,Pylint 非常激进。它将对每样东西都提供严厉的意见,从检查是否实际实现声明的接口到重构重复代码的可能性,这对新用户来说可能会很多。一种温和地将其引入项目或团队的方法是先关闭*所有*检查器,然后逐个启用检查器。如果你已经在使用 flake8、black 和 [mypy][4],这尤其有用:Pylint 有相当多的检查器在功能上与它们重叠。

但是,Pylint 独有之处之一是能够强制执行更高级别的问题:例如,函数的行数或者类中方法的数量。

这些数字可能因项目而异,并且可能取决于开发团队的偏好。但是,一旦团队就参数达成一致,使用自动工具*强制化*这些参数非常有用。这是 Pylint 闪耀的地方。

### 配置 Pylint

要以空配置开始,请将 `.pylintrc` 设置为:

```
[MESSAGES CONTROL]

disable=all
```

这将禁用所有 Pylint 消息。由于其中许多是冗余的,这是有道理的。在 Pylint 中,`message` 是一种特定的警告。

你可以通过运行 `pylint` 来确认所有消息都已关闭:

```
$ pylint <my package>
```

通常,向 `pylint` 命令行添加参数并不是一个好主意:配置 `pylint` 的最佳位置是 `.pylintrc`。为了使它做*一些*有用的事,我们需要启用一些消息。

要启用消息,在 `.pylintrc` 中的 `[MESSAGES CONTROL]` 下添加:

```
enable=<message>,
       ...
```

启用那些看起来有用的“消息”(Pylint 对不同类型警告的称呼)。我最喜欢的包括 `too-many-lines`、`too-many-arguments` 和 `too-many-branches`。所有这些会限制模块或函数的复杂性,并且无需进行人工操作即可客观地进行代码复杂度测量。

*检查器*是*消息*的来源:每条消息只属于一个检查器。许多最有用的消息都在[设计检查器][5]下。默认数字通常都不错,但要调整最大值也很简单:我们可以在 `.pylintrc` 中添加一个名为 `DESIGN` 的段。

```
[DESIGN]
max-args=7
max-locals=15
```

另一个有用的消息来源是“重构”检查器。我已启用的一些最喜欢的消息有 `consider-using-dict-comprehension`、`stop-iteration-return`(它会查找正确的停止迭代的方式是 `return` 而使用了 `raise StopIteration` 的迭代器)和 `chained-comparison`,它将建议使用如 `1 <= x < 5` 这样的链式比较,而不是不太明显的 `1 <= x and x < 5` 的语法。

最后是一个在性能方面消耗很大、但非常有用的检查器:`similarities`。它会查找不同部分代码之间的复制粘贴来强制执行“不要重复自己”(DRY 原则)。它只启用一条消息:`duplicate-code`。默认的“最小相似行数”设置为 4。可以使用 `.pylintrc` 将其设置为不同的值。

```
[SIMILARITIES]
min-similarity-lines=3
```

### Pylint 使代码评审变得简单

如果你厌倦了需要指出一个类太复杂,或者两个不同的函数基本相同的代码评审,请将 Pylint 添加到你的[持续集成][6]配置中,这样关于项目复杂性准则的争论只需要进行一次就行。
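把上文各段配置合并起来,一个最小的 `.pylintrc` 大致如下(启用的消息只是文中提到的那几个例子,实际取舍应由团队商定):

```ini
[MESSAGES CONTROL]
disable=all
enable=too-many-lines,
       too-many-arguments,
       too-many-branches,
       consider-using-dict-comprehension,
       stop-iteration-return,
       chained-comparison,
       duplicate-code

[DESIGN]
max-args=7
max-locals=15

[SIMILARITIES]
min-similarity-lines=3
```

从这样一份小配置开始,之后再逐个启用新的消息,比一上来就面对 Pylint 的全部意见要温和得多。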
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/python-pylint-introduction

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_2.jpg?itok=4fza48WU (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/19/5/python-flake8
[3]: https://opensource.com/article/19/5/python-black
[4]: https://opensource.com/article/19/5/python-mypy
[5]: https://pylint.readthedocs.io/en/latest/technical_reference/features.html#design-checker
[6]: https://opensource.com/business/15/7/six-continuous-integration-tools
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11503-1.html)
[#]: subject: (How to Get the Size of a Directory in Linux)
[#]: via: (https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

如何获取 Linux 中的目录大小
======

你应该已经注意到,在 Linux 中使用 [ls 命令][1] 列出的目录内容中,目录的大小仅显示 4KB。这个大小正确吗?如果不正确,那它代表什么,又该如何获取 Linux 中的目录或文件夹大小?这是一个默认的大小,是用来在磁盘上存储目录元数据所占的大小。

Linux 上有一些应用程序可以 [获取目录的实际大小][2]。其中,磁盘使用率(`du`)命令已被 Linux 管理员广泛使用。

我将向你展示如何使用各种选项获取文件夹大小。

### 什么是 du 命令?

[du 命令][3] 表示<ruby>磁盘使用率<rt>Disk Usage</rt></ruby>。这是一个标准的 Unix 程序,用于估计当前工作目录中的文件空间使用情况。

它使用递归方式总结磁盘使用情况,以获取目录及其子目录的大小。
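“递归总结”的含义可以用一个现场构造的目录树快速验证(目录名和文件内容均为虚构):

```shell
# 构造一个带子目录的演示目录树
mkdir -p demo/sub
printf 'hello\n' > demo/a.txt
printf 'world\n' > demo/sub/b.txt

# du 会递归地把子目录的用量汇总进父目录
du demo      # 分别列出 demo/sub 与 demo 两行
du -s demo   # 只汇总出 demo 的总大小一行
```

也就是说,父目录那一行的数字已经包含了其下所有子目录的用量。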
如同我说的那样,使用 `ls` 命令时,目录大小仅显示 4KB。参见下面的输出。

```
$ ls -lh | grep ^d
drwxr-xr-x 13 daygeek daygeek 4.0K Jan 6 2019 drive-mageshm
drwxr-xr-x 15 daygeek daygeek 4.0K Sep 29 21:32 Thanu_Photos
```

### 1) 在 Linux 上如何只获取父目录的大小

使用以下 `du` 命令格式获取给定目录的总大小。在该示例中,我们将得到 `/home/daygeek/Documents` 目录的总大小。

```
$ du -hs /home/daygeek/Documents
或
$ du -h --max-depth=0 /home/daygeek/Documents/

20G /home/daygeek/Documents
```

详细说明:

* `du` – 这是一个命令
* `-h` – 以易读的格式显示大小(例如 1K 234M 2G)
* `-s` – 仅显示每个参数的总数
* `--max-depth=N` – 目录的打印深度

### 2) 在 Linux 上如何获取每个目录的大小

使用以下 `du` 命令格式获取每个目录(包括子目录)的总大小。

在该示例中,我们将获得每个 `/home/daygeek/Documents` 目录及其子目录的总大小。

```
$ du -h /home/daygeek/Documents/ | sort -rh | head -20
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
```

### 3) 在 Linux 上如何获取每个目录的摘要

使用如下 `du` 命令格式仅获取每个目录的摘要。

```
$ du -hs /home/daygeek/Documents/* | sort -rh | head -10
96K /home/daygeek/Documents/distro-info.xlsx
```

### 4) 在 Linux 上如何获取每个目录的不含子目录的大小

使用如下 `du` 命令格式来展示每个目录的总大小,不包括子目录。

```
$ du -hS /home/daygeek/Documents/ | sort -rh | head -20
90M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2017
```

### 5) 在 Linux 上如何仅获取一级子目录的大小

如果要获取 Linux 上给定目录的一级子目录(包括其子目录)的大小,请使用以下命令格式。

```
$ du -h --max-depth=1 /home/daygeek/Documents/
20G /home/daygeek/Documents/
```

### 6) 如何在 du 命令输出中获得总计

如果要在 `du` 命令输出中获得总计,请使用以下 `du` 命令格式。

```
$ du -hsc /home/daygeek/Documents/* | sort -rh | head -10
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MX Linux 19 Released With Debian 10.1 ‘Buster’ & Other Improvements)
[#]: via: (https://itsfoss.com/mx-linux-19/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

MX Linux 19 Released With Debian 10.1 ‘Buster’ & Other Improvements
======

MX Linux 18 has been one of my top recommendations for the [best Linux distributions][1], especially when considering distros other than Ubuntu.

It is based on Debian 9.6 ‘Stretch’ – which offered an incredibly fast and smooth experience.

Now, as a major upgrade to that, MX Linux 19 brings a lot of major improvements and changes. Here, we shall take a look at the key highlights.

### New features in MX Linux 19

[Subscribe to our YouTube channel for more Linux videos][2]

#### Debian 10 ‘Buster’

This deserves a separate mention, as Debian 10 is indeed a major upgrade from the Debian 9.6 ‘Stretch’ on which MX Linux 18 was based.

In case you’re curious about what has changed with Debian 10 Buster, we suggest checking out our article on the [new features of Debian 10 Buster][3].

#### Xfce Desktop 4.14

![MX Linux 19][4]

[Xfce 4.14][5] happens to be the latest offering from the Xfce development team. Personally, I’m not a fan of the Xfce desktop environment, but it screams fast performance when you get to use it on a Linux distro (especially on MX Linux 19).

Interestingly, we also have a quick guide to help you [customize Xfce][6] on your system.

#### Updated Packages & Latest Debian Kernel 4.19

Along with updated packages for [GIMP][7], MESA, Firefox, and so on – it also comes baked in with the latest kernel 4.19 available for Debian Buster.

#### Updated MX-Apps

If you’ve used MX Linux before, you might know that it comes pre-installed with useful MX-Apps that help you get more things done quickly.

Apps like MX-installer and MX-packageinstaller have significantly improved.

In addition to these two, all other MX-tools have been updated here and there to fix bugs, add new translations (or simply to improve the user experience).

#### Other Improvements

Considering it a major upgrade, there are obviously a lot more under-the-hood changes than highlighted here (including the latest antiX live system updates).

You can check out more details in their [official announcement post][8]. You may also watch this video from the developers explaining all the new stuff in MX Linux 19:

### Getting MX Linux 19

Even if you are using an MX Linux 18 version right now, you [cannot upgrade][9] to MX Linux 19. You need to go for a clean install like everyone else.

You can download MX Linux 19 from this page:

[Download MX Linux 19][10]

**Wrapping Up**

With MX Linux 18, I had a problem using my WiFi adapter due to a driver issue which I resolved through the [forum][11]; it seems that it still hasn’t been fixed in MX Linux 19. So, you might want to take a look at my [forum post][11] if you face the same issue after installing MX Linux 19.

If you’ve been using MX Linux 18, this definitely seems to be an impressive upgrade.

Have you tried it yet? What are your thoughts on the new MX Linux 19 release? Let me know what you think in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/mx-linux-19/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://itsfoss.com/debian-10-buster/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/mx-linux-19.jpg?ssl=1
[5]: https://xfce.org/about/news
[6]: https://itsfoss.com/customize-xfce/
[7]: https://itsfoss.com/gimp-2-10-release/
[8]: https://mxlinux.org/blog/mx-19-patito-feo-released/
[9]: https://mxlinux.org/migration/
[10]: https://mxlinux.org/download-links/
[11]: https://forum.mxlinux.org/viewtopic.php?t=52201
@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IT-as-a-Service Simplifies Hybrid IT)
[#]: via: (https://www.networkworld.com/article/3447342/it-as-a-service-simplifies-hybrid-it.html)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

IT-as-a-Service Simplifies Hybrid IT
======

Consumption-based model reduces complexity, improves IT infrastructure.

The data center must change rapidly. Companies are increasingly moving toward hybrid IT models, with some workloads in the cloud and others staying on premises. The burden of ever-growing apps and data is placing pressure on infrastructure in both worlds, but especially the data center.

Organizations are struggling to reach the required speed and flexibility — with the same public-cloud economics — from their on-premises data centers. That’s likely because they’re dealing with legacy systems acquired over the years, possibly inherited as the result of mergers and acquisitions.

These complex environments create headaches when trying to accommodate IT capacity fluctuations. When extra storage is needed, for example, 67% of IT departments buy too much, according to [Futurum Research][1]. They have neither the visibility into resources nor the ability to scale up and down effectively.

Meanwhile, lines of business need solutions fast, and if IT can’t deliver, they’ll go out and buy their own cloud-based services or solutions. IT must think strategically about how all this technology strings together — efficiently, securely, and cost-effectively.

Enter IT-as-a-Service (ITaaS).

**1) How does ITaaS work?**

Unlike other as-a-service models, ITaaS is not cloud based, although the concept can be applied to cloud environments. Rather, the focus is about shifting IT operations toward managed services on an as-needed, pay-as-you-go basis.

For example, HPE GreenLake delivers infrastructure capacity based on actual metered usage, where companies only pay for what is used. There are no upfront costs, extended purchasing and implementation timeframes, or overprovisioning headaches. Infrastructure capacity can be scaled up or down as needed.

**2) What are the benefits of ITaaS?**

Some of the most significant advantages include: scalable infrastructure and resources, improved workload management, greater availability, and reduced burden on IT, including network admins.

  * _Infrastructure_. Resource needs are often in flux depending on business demands and market changes. Using ITaaS not only enhances infrastructure usage, it also helps network admins better plan for and manage bandwidth, switches, routers, and other network gear.
  * _Workloads_. ITaaS can immediately tackle cloud bursting to better manage application flow. Companies might also, for example, choose to use the consumption-based model for workloads that are unpredictable in their growth — such as big data, storage, and private cloud.
  * _Availability_. It’s critical to have zero network downtime. Using a consumption-based IT model, companies can opt to adopt services such as continuous network monitoring or expertise on-call with a 24/7 network help desk.
  * _Reduced burden on IT_. All of the above benefits affect day-to-day operations. By simplifying network management, ITaaS frees personnel to use their expertise where it is best served.

Furthermore, a consumption-based IT model helps organizations gain end-to-end visibility into storage resources, so that admins can ensure the highest levels of service, performance, and availability.

**HPE GreenLake: The Answer**

As hybrid IT takes hold, IT organizations must get a grip on their infrastructure resources to ensure agility and scalability for the business, while maintaining IT cost-effectiveness.

HPE GreenLake enables a simplified IT environment where companies pay only for the resources they actually use, while providing the business with the speed and agility it requires.

[Learn more at hpe.com/greenlake.][2]

Minimum commitment may apply.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447342/it-as-a-service-simplifies-hybrid-it.html

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://h20195.www2.hpe.com/v2/Getdocument.aspx?docname=a00079768enw
[2]: https://www.hpe.com/us/en/services/flexible-capacity.html
@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Protocols That Help Things to Communicate Over the Internet)
[#]: via: (https://opensourceforu.com/2019/10/the-protocols-that-help-things-to-communicate-over-the-internet-2/)
[#]: author: (Sapna Panchal https://opensourceforu.com/author/sapna-panchal/)

The Protocols That Help Things to Communicate Over the Internet
======

[![][1]][2]

_The Internet of Things is a system of connected, interrelated objects. These objects transmit data to servers for processing and, in turn, receive messages from the servers. These messages are sent and received using different protocols. This article discusses some of the protocols related to the IoT._

The Internet of Things (IoT) is beginning to pervade more and more aspects of our lives. Connected things use the Internet to collect information, send information back, or do both. The IoT is an architecture built from a combination of available technologies, and it helps to make our daily lives more pleasant and convenient.

![Figure 1: IoT architecture][3]

![Figure 2: Message Queuing Telemetry Transport protocol][4]

**IoT architecture**
Basically, the IoT architecture has four components. In this article, we will explore each component to understand the architecture better.

**Sensors:** These are present everywhere. They help to collect data from any location and then share it with the IoT gateway. For example, sensors measure the temperature at different locations, which helps to gauge the weather conditions, and this information is passed on to the IoT gateway. This is a basic example of how the IoT operates.

**IoT gateway:** Once the information is collected from the sensors, it is passed on to the gateway. The gateway is a mediator between the sensor nodes and the World Wide Web. It processes the data collected from the sensor nodes and then transmits this to the Internet infrastructure.
**Cloud server:** Once data is transmitted through the gateway, it is stored and processed in the cloud server.
**Mobile app:** Using a mobile application, the user can view and access the data processed in the cloud server.
This is the basic idea of the IoT and its architecture, along with the components. We now move on to the basic ideas behind different IoT protocols.

![Figure 3: Advanced Message Queuing Protocol][5]

![Figure 4: CoAP][6]

**IoT protocols**
As mentioned earlier, connected things are used to collect information, convey/send information back, or do both, using the Internet. This is the fundamental basis of the IoT. To convey/send information, we need a protocol, which is a set of rules used to transmit data between electronic devices.
Essentially, we have two types of IoT protocols — the IoT network protocols and the IoT data protocols. This article discusses the IoT data protocols.

![Figure 5: Constrained Application Protocol architecture][7]

**MQTT**
The Message Queuing Telemetry Transport (MQTT) protocol was primarily designed for low-bandwidth networks, but is very popular today as an IoT protocol. It is used to exchange data between clients and the server. It is a lightweight messaging protocol.

This protocol has many advantages:

  * It is small in size and has low power usage.
  * It is a lightweight protocol.
  * It requires little network bandwidth.
  * It works in real time.

Considering all the above reasons, MQTT emerges as an ideal IoT data protocol.

**How MQTT works:** MQTT is based on a client-server relationship. The server manages the requests that come from different clients and sends the required information to clients. MQTT is based on two operations.

i) _Publish:_ When the client sends data to the MQTT broker, this operation is known as ‘Publish’.
ii) _Subscribe:_ When the client receives data from the broker, this operation is known as ‘Subscribe’.

The MQTT broker is the mediator that handles these operations, primarily taking messages and delivering them to the application or client.

Let’s look at the example of a device temperature sensor, which sends readings to the MQTT broker, and then the information is delivered to desktop or mobile applications. As stated earlier, ‘Publish’ means sending readings to the MQTT broker and ‘Subscribe’ means delivering the information to the desktop/mobile application.
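To make the publish/subscribe flow concrete, here is a toy in-process sketch in Python. It does not speak the real MQTT protocol (a production client would use a library such as Eclipse Paho); the `ToyBroker` class and its method names are purely illustrative:

```python
# A toy in-process sketch of the publish/subscribe pattern MQTT is built on.
# This is NOT the MQTT wire protocol; the names here are illustrative only.
from collections import defaultdict


class ToyBroker:
    """Mediates between publishers and subscribers, like an MQTT broker."""

    def __init__(self):
        # topic -> list of callbacks registered by subscribers
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """'Subscribe': a client asks to receive data published on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """'Publish': a client sends data to the broker for delivery."""
        for callback in self._subscribers[topic]:
            callback(topic, payload)


broker = ToyBroker()
received = []

# A mobile app subscribes to temperature readings...
broker.subscribe("home/temperature", lambda t, p: received.append((t, p)))

# ...and a sensor publishes a reading through the broker.
broker.publish("home/temperature", "21.5")

print(received)  # [('home/temperature', '21.5')]
```

Note that publisher and subscriber never talk to each other directly; the broker decouples them, which is the property that makes MQTT suit intermittently connected devices.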
**AMQP**
Advanced Message Queuing Protocol (AMQP) is a peer-to-peer protocol, where one peer plays the role of the client application and the other peer plays the role of the delivery service or broker. It consists of components that route and store messages within the delivery service, or broker.
The benefits of AMQP are:

  * It ensures that messages are delivered without getting lost.
  * It guarantees ‘once-only’, secure delivery.
  * It provides a secure connection.
  * It supports acknowledgements of message delivery or failure.

**How AMQP works and its architecture:** The AMQP architecture is made up of the following parts.

_**Exchange**_ – Messages that come from the publisher are accepted by the exchange, which routes them to the message queue.
_**Message queue**_ – This is the combination of multiple queues, which store the messages for processing.
_**Binding**_ – This helps to maintain the connectivity between the exchange and the message queue.
The combination of the exchange and the message queues is known as the broker, or AMQP broker.
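The exchange/binding/queue model can be sketched as a toy in Python. This is not a real AMQP implementation (a production system would use a broker such as RabbitMQ); the class and method names are illustrative only:

```python
# A toy sketch of the AMQP broker model described above: an exchange routes
# messages, bindings connect it to queues, and queues hold the messages.
# Illustrative only; not part of any real AMQP library.
from collections import defaultdict, deque


class ToyExchange:
    def __init__(self):
        # routing key -> queues bound with that key (the "bindings")
        self._bindings = defaultdict(list)

    def bind(self, routing_key, queue):
        """Binding: maintain connectivity between the exchange and a queue."""
        self._bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        """Accept a message from a publisher and route it to bound queues."""
        for queue in self._bindings[routing_key]:
            queue.append(message)


# The exchange plus its queues together play the role of the AMQP broker.
exchange = ToyExchange()
orders = deque()  # a message queue
exchange.bind("order.created", orders)

exchange.publish("order.created", "order #42")
exchange.publish("payment.failed", "dropped: no queue bound to this key")

print(list(orders))  # ['order #42']
```

The second publish is silently dropped because no queue is bound to its routing key, which illustrates why bindings are a first-class part of the architecture.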
![Figure 6: Extensible Messaging and Presence Protocol][8]

**Constrained Application Protocol (CoAP)**
This was initially used as a machine-to-machine (M2M) protocol and later began to be used as an IoT protocol. It is a Web transfer protocol that is used with constrained nodes and constrained networks. CoAP uses the RESTful architecture, just like the HTTP protocol.
The advantages CoAP offers are:

  * It works as a REST model for small devices.
  * As it is like HTTP, it’s easy for developers to work with.
  * It is a one-to-one protocol for transferring information between the client and server, directly.
  * It is very simple to parse.

**How CoAP works and its architecture:** From Figure 4, we can understand that CoAP is the combination of ‘Request/Response’ and ‘Message’. We can also say it has two layers – ‘Request/Response’ and ‘Message’.
Figure 5 clearly shows that the CoAP architecture is based on the client-server relationship, where…

  * The client sends requests to the server.
  * The server receives requests from the client and responds to them.
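As a rough sketch of that client-server exchange, the toy Python code below models a CoAP-style request/response in-process. It is not the CoAP wire format (a real client would use a CoAP library); only the response codes `2.05 Content` and `4.04 Not Found` are taken from the actual protocol:

```python
# A toy sketch of CoAP's request/response layer. The classes below are
# illustrative stand-ins; real CoAP runs its message layer over UDP.
GET = "GET"


class ToyCoapServer:
    def __init__(self):
        # RESTful resources exposed by a constrained device.
        self._resources = {"/temperature": "21.5"}

    def handle(self, method, path):
        """Request/response layer: answer a request with a CoAP-style code."""
        if method == GET and path in self._resources:
            return ("2.05 Content", self._resources[path])
        return ("4.04 Not Found", None)


class ToyCoapClient:
    def __init__(self, server):
        self._server = server  # message-layer stand-in: direct delivery

    def get(self, path):
        return self._server.handle(GET, path)


server = ToyCoapServer()
client = ToyCoapClient(server)
print(client.get("/temperature"))  # ('2.05 Content', '21.5')
print(client.get("/humidity"))     # ('4.04 Not Found', None)
```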
**Extensible Messaging and Presence Protocol (XMPP)**

This protocol is used to exchange messages in real time. It is used not only to communicate with others, but also to get information on the status of a user (away, offline, active). This protocol is widely used in real-world applications, such as WhatsApp.

The Extensible Messaging and Presence Protocol should be used because:

  * It is free, open and easy to understand. Hence, it is very popular.
  * It supports secure authentication, and is extensible and flexible.

**How XMPP works and its architecture:** In the XMPP architecture, each client has a unique name associated with it and communicates with other clients via the XMPP server. An XMPP client may belong to the same domain as another client or to a different one.

In Figure 6, the XMPP clients belong to the same domain; one XMPP client sends information to the XMPP server, which translates it and conveys the information to another client.
Basically, this protocol is the backbone that provides universal connectivity between different endpoint protocols.

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/the-protocols-that-help-things-to-communicate-over-the-internet-2/

作者:[Sapna Panchal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/sapna-panchal/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Internet-of-things-illustration.jpg?resize=696%2C439&ssl=1 (Internet of things illustration)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Internet-of-things-illustration.jpg?fit=1125%2C710&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-IoT-architecture.jpg?resize=350%2C133&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-Messaging-Queuing-Telemetry-Transmit-protocol.jpg?resize=350%2C206&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-Advance-Message-Queuing-Protocol.jpg?resize=350%2C160&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-4-CoAP.jpg?resize=350%2C84&ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-5-Constrained-Application-Protocol-architecture.jpg?resize=350%2C224&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-6-Extensible-Messaging-and-Presence-Protocol.jpg?resize=350%2C46&ssl=1
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (PsiACE)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -339,7 +339,7 @@ via: https://nicolasparada.netlify.com/posts/go-messenger-conversations/
作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[PsiACE](https://github.com/PsiACE)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,210 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)

How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======

**Rsyslog** is a free and open source logging utility that exists by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. The centralization of logs is beneficial in two ways. First, it simplifies viewing of logs, as the systems administrator can view all the logs of remote servers from a central point without logging into every client system; this is greatly beneficial if there are several servers that need to be monitored. Secondly, in the event that a remote client suffers a crash, you need not worry about losing the logs, because all the logs will be saved on the **central rsyslog server**. Rsyslog has replaced syslog, which only supported the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** protocols in transporting logs, augmented filtering abilities, and flexible configuration options. That said, let’s explore how to configure the Rsyslog server in CentOS 8 / RHEL 8 systems.

[![configure-rsyslog-centos8-rhel8][1]][2]

### Prerequisites

We are going to have the following lab setup to test the centralized logging process:

  * **Rsyslog server** CentOS 8 Minimal IP address: 10.128.0.47
  * **Client system** RHEL 8 Minimal IP address: 10.128.0.48

From the setup above, we will demonstrate how you can set up the Rsyslog server and later configure the client system to ship logs to the Rsyslog server for monitoring.

Let’s get started!

### Configuring the Rsyslog Server on CentOS 8

By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:

```
$ systemctl status rsyslog
```

Sample Output

![rsyslog-service-status-centos8][1]

If rsyslog is not present for whatever reason, you can install it using the command:

```
$ sudo yum install rsyslog
```

Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

Scroll down and uncomment the lines shown below to allow reception of logs via the UDP protocol:

```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```

![rsyslog-conf-centos8-rhel8][1]

Similarly, if you prefer to enable TCP rsyslog reception, uncomment the lines:

```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```

![rsyslog-conf-tcp-centos8-rhel8][1]

Save and exit the configuration file.

To receive the logs from the client system, we need to open Rsyslog’s default port 514 on the firewall. To achieve this, run:

```
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

Note that this opens the TCP port; if you enabled UDP reception above, also run `sudo firewall-cmd --add-port=514/udp --zone=public --permanent`.

Next, reload the firewall to save the changes:

```
# sudo firewall-cmd --reload
```

Sample Output

![firewall-ports-rsyslog-centos8][1]

Next, restart the Rsyslog server:

```
$ sudo systemctl restart rsyslog
```

To enable Rsyslog on boot, run the command below:

```
$ sudo systemctl enable rsyslog
```

To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:

```
$ sudo netstat -pnltu
```

Sample Output

![netstat-rsyslog-port-centos8][1]

Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.

To view log messages in real time, run the command:

```
$ tail -f /var/log/messages
```

Let’s now configure the client system.

### Configuring the client system on RHEL 8

As on the Rsyslog server, log in and check if the rsyslog daemon is running by issuing the command:

```
$ sudo systemctl status rsyslog
```

Sample Output

![client-rsyslog-service-rhel8][1]

Next, proceed to open the rsyslog configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

At the end of the file, append one of the following lines:

```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
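Applications can also ship logs to a syslog endpoint programmatically, not only through the rsyslog forwarding rule above. As a rough illustration, the Python sketch below uses the standard library's `SysLogHandler` to send a message over UDP; in the article's setup the address would be `("10.128.0.47", 514)`, but here a throwaway local UDP socket stands in for the server so the sketch is self-contained:

```python
# Illustrative only: send a syslog-formatted message over UDP with Python's
# standard library. A local UDP socket stands in for the rsyslog server.
import logging
import logging.handlers
import socket

# Stand-in for the rsyslog server: a UDP socket on an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
host, port = server.getsockname()

# In the article's lab this address would be ("10.128.0.47", 514).
handler = logging.handlers.SysLogHandler(address=(host, port))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("Hello guys! This is our first log")

# Read back what the "server" received: a <PRI>-prefixed syslog datagram.
data, _ = server.recvfrom(1024)
print(data.decode())  # e.g. '<12>Hello guys! This is our first log'

handler.close()
server.close()
```

The `<12>` prefix encodes facility `user` (1) and severity `warning` (4) as `1 * 8 + 4`, the same priority scheme rsyslog selectors such as `*.*` filter on.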
Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:

```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

Next, reload the firewall to save the changes:

```
$ sudo firewall-cmd --reload
```

Next, restart the rsyslog service:

```
$ sudo systemctl restart rsyslog
```

To enable Rsyslog on boot, run the following command:

```
$ sudo systemctl enable rsyslog
```

### Testing the logging operation

Having successfully set up and configured the Rsyslog server and the client system, it’s time to verify whether your configuration is working as intended.

On the client system, issue the command:

```
# logger "Hello guys! This is our first log"
```

Now head over to the Rsyslog server and run the command below to check the log messages in real time:

```
# tail -f /var/log/messages
```

The output from the command run on the client system should appear in the Rsyslog server’s log messages, confirming that the Rsyslog server is now receiving logs from the client system.

![centralize-logs-rsyslogs-centos8][1]

And that’s it, guys! We have successfully set up the Rsyslog server to receive log messages from a client system.

Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/

作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/
@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pylint: Making your Python code consistent)
[#]: via: (https://opensource.com/article/19/10/python-pylint-introduction)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Pylint: Making your Python code consistent
======

Pylint is your friend when you want to avoid arguing about code complexity.

![OpenStack source code \(Python\) in VIM][1]

Pylint is a higher-level Python style enforcer. While [flake8][2] and [black][3] take care of "local" style (where the newlines occur, how comments are formatted) and find issues like commented-out code or bad practices in log formatting, Pylint enforces style at a higher level.

Pylint is extremely aggressive by default. It will offer strong opinions on everything from checking if declared interfaces are actually implemented to opportunities to refactor duplicate code, which can be a lot for a new user. One way of introducing it gently to a project, or a team, is to start by turning _all_ checkers off, and then enabling checkers one by one. This is especially useful if you already use flake8, black, and [mypy][4]: Pylint has quite a few checkers that overlap in functionality.

However, one of the things unique to Pylint is the ability to enforce higher-level issues: for example, the number of lines in a function, or the number of methods in a class.

These numbers might be different from project to project and can depend on the development team's preferences. However, once the team comes to an agreement about the parameters, it is useful to _enforce_ those parameters using an automated tool. This is where Pylint shines.

### Configuring Pylint

In order to start with an empty configuration, start your `.pylintrc` with

```
[MESSAGES CONTROL]

disable=all
```

This disables all Pylint messages. Since many of them are redundant, this makes sense. In Pylint, a `message` is a specific kind of warning.

You can check that all messages have been turned off by running `pylint`:

```
$ pylint <my package>
```

In general, it is not a great idea to add parameters to the `pylint` command line: the best place to configure your `pylint` is the `.pylintrc`. In order to have it do _something_ useful, we need to enable some messages.

In order to enable messages, add to your `.pylintrc`, under the `[MESSAGES CONTROL]` section:

```
enable=<message>,

...
```

for the "messages" (what Pylint calls different kinds of warnings) that look useful. Some of my favorites include `too-many-lines`, `too-many-arguments`, and `too-many-branches`. All of those limit the complexity of modules or functions, and serve as an objective check for code complexity measurement, without a human nitpicker needed.

A _checker_ is a source of _messages_: every message belongs to exactly one checker. Many of the most useful messages are under the [design checker][5]. The default numbers are usually good, but tweaking the maximums is straightforward: we can add a section called `DESIGN` in the `.pylintrc`.

```
[DESIGN]

max-args=7

max-locals=15
```
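To see the design checker in action, consider a hypothetical function (not from the article) that would trip the `max-args=7` limit above because it takes eight parameters; Pylint would report it as `too-many-arguments` (R0913):

```python
# A hypothetical function used only to illustrate the design checker:
# with max-args=7 in .pylintrc, Pylint reports R0913 (too-many-arguments)
# for this definition, since it takes eight parameters.
def build_report(title, author, date, body, footer, page_size, margin, font):
    """Assemble a report description from far too many parameters."""
    return {
        "title": title,
        "author": author,
        "date": date,
        "body": body,
        "footer": footer,
        "layout": (page_size, margin, font),
    }


# A common refactoring that satisfies the checker: group related
# parameters into a single object, dropping the count to six.
def build_report_refactored(title, author, date, body, footer, layout):
    """Same result, but layout details travel as one parameter."""
    return {"title": title, "author": author, "date": date,
            "body": body, "footer": footer, "layout": layout}
```

Both versions behave identically; the checker simply pushes you toward the shape that is easier to call correctly.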
Another good source of useful messages is the `refactoring` checker. Some of my favorite messages to enable there are `consider-using-dict-comprehension`, `stop-iteration-return` (which looks for generators that use `raise StopIteration` when `return` is the correct way to stop the iteration), and `chained-comparison`, which will suggest using syntax like `1 <= x < 5` rather than the less obvious `x >= 1 and x < 5`.

Finally, an expensive checker, in terms of performance, but highly useful, is `similarities`. It is designed to enforce "Don't Repeat Yourself" (the DRY principle) by explicitly looking for copy-paste between different parts of the code. It only has one message to enable: `duplicate-code`. The default "minimum similarity lines" is set to `4`. It is possible to set it to a different value using the `.pylintrc`.

```
[SIMILARITIES]

min-similarity-lines=3
```

### Pylint makes code reviews easy

If you are sick of code reviews where you point out that a class is too complicated, or that two different functions are basically the same, add Pylint to your [Continuous Integration][6] configuration, and only have the arguments about complexity guidelines for your project _once_.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/python-pylint-introduction

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_2.jpg?itok=4fza48WU (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/19/5/python-flake8
[3]: https://opensource.com/article/19/5/python-black
[4]: https://opensource.com/article/19/5/python-mypy
[5]: https://pylint.readthedocs.io/en/latest/technical_reference/features.html#design-checker
[6]: https://opensource.com/business/15/7/six-continuous-integration-tools
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -370,7 +370,7 @@ via: https://opensource.com/article/19/10/initializing-arrays-java

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,206 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Best practices in test-driven development)
|
||||
[#]: via: (https://opensource.com/article/19/10/test-driven-development-best-practices)
|
||||
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
|
||||
|
||||
Best practices in test-driven development
|
||||
======
|
||||
Ensure you're producing very high-quality code by following these TDD best practices.
|
||||
![magnifying glass on computer screen][1]
|
||||
|
||||
In my previous series on [test-driven development (TDD) and mutation testing][2], I demonstrated the benefits of relying on examples when building a solution. That raises the question: What does "relying on examples" mean?
|
||||
|
||||
In that series, I described one of my expectations when building a solution to determine whether it's daytime or nighttime. I provided an example of a specific hour of the day that I consider to fall in the daytime category. I created a **DateTime** variable named **dayHour** and gave it the specific value of **August 8, 2019, 7 hours, 0 minutes, 0 seconds**.
|
||||
|
||||
My logic (or way of reasoning) was: "When the system is notified that the time is exactly 7am on August 8, 2019, I expect that the system will perform the necessary calculations and return the value **Daylight**."
|
||||
|
||||
Armed with such a specific example, it was very easy to create a unit test (**Given7amReturnDaylight**). I then ran the tests and watched my unit test fail, which gave me the opportunity to work on fixing this early failure.
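As a sketch of that first failing-then-passing test, here is what **Given7amReturnDaylight** might look like in Python. The function name and the 7:00 to 19:00 daylight boundary are illustrative assumptions, not the article's actual implementation:

```python
import unittest
from datetime import datetime

def get_period(time):
    # Hypothetical implementation written to make the test pass;
    # the 7:00-19:00 daylight window is an assumption for illustration.
    return "Daylight" if 7 <= time.hour < 19 else "Nighttime"

class DaylightTests(unittest.TestCase):
    def test_given_7am_return_daylight(self):
        # The specific example: August 8, 2019, 7:00:00
        day_hour = datetime(2019, 8, 8, 7, 0, 0)
        self.assertEqual(get_period(day_hour), "Daylight")
```

Run with `python -m unittest`: against an empty stub the test fails first, which is exactly the micro-failure TDD asks for, and then the implementation makes it pass.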
|
||||
|
||||
### Iteration is the solution
|
||||
|
||||
One very important aspect of TDD (and, by proxy, of agile) is the fact that it is impossible to arrive at an acceptable solution unless you are iterating. TDD is a professional discipline based on the process of relentless iterating. It is very important to note that it mandates that each iteration must begin with a micro-failure. That micro-failure has only one purpose: to solicit immediate feedback. And that immediate feedback ensures we can rapidly close the gap between _wanting_ a solution and _getting_ a solution.
|
||||
|
||||
Iteration provides an opportunity to solicit immediate feedback by failing as early as possible. Because that failure is fast (i.e., it is a micro-failure), it is not alarming; even when we fail, we can remain calm, knowing that it will be easy to fix the failure. And the feedback from that failure will guide us toward fixing the failure.
|
||||
|
||||
Rinse, repeat, until we completely close the gap and deliver the solution that fully meets the expectation (but keep in mind that the expectation must also be a micro-expectation).
|
||||
|
||||
### Why micro?
|
||||
|
||||
This approach often feels very unambitious. In TDD (and in agile), it's best to pick a tiny, almost trivial challenge, and then do the TDD song-and-dance by failing first, then iterating until we solve that trivial challenge. People who are used to more substantial, beefy engineering and problem solving tend to feel that such an exercise is beneath their level of competence.
|
||||
|
||||
One of the cornerstones of agile philosophy relies on reducing the problem space to multiple, smallest-possible surface areas. As Robert C. Martin puts it:
|
||||
|
||||
> _"Agile is a small idea about the small problems of small programming teams doing small things"_
|
||||
|
||||
But how can making an unimpressive series of such pedestrian, minuscule, and almost insignificant micro-victories ever enable us to reach the big-scale solution?
|
||||
|
||||
Here is where sophisticated and elaborate systems thinking comes into play. When building a system, there's always the risk of ending up with a dreaded "monolith." A monolith is a system built on the principle of tight coupling. Any part of the monolith is highly dependent on many other parts of the same monolith. That arrangement makes the monolith very brittle, unreliable, and difficult to operate, maintain, troubleshoot, and fix.
|
||||
|
||||
The only way to avoid this trap is to minimize or, better yet, completely remove coupling. Instead of investing heroic efforts into building elaborate parts that will be assembled into a system, it is much better to take humble, baby steps toward building tiny, micro parts. These micro parts have very little capability on their own, and will, by virtue of such arrangement, not be dependent on other components. This will minimize and even remove any coupling.
|
||||
|
||||
The desired end game in building a useful, elaborate system is to compose it from a collection of generic, completely independent components. The more generic each component is, the more robust, resilient, and flexible the resulting system will be. Also, having a collection of generic components enables them to be repurposed to build brand new systems by reconfiguring those components.
|
||||
|
||||
Consider a toy castle made out of Lego blocks. If we pick almost any block from that castle and examine it in isolation, we won't be able to find anything on that block that specifies it is a Lego block meant for building a castle. The block itself is sufficiently generic, which makes it suitable for building other contraptions, such as toy cars, toy airplanes, toy boats, etc. That's the power of having generic components.
|
||||
|
||||
TDD is a proven discipline for delivering generic, independent, and autonomous components that can be safely used to assemble large, sophisticated systems expediently. As in agile, TDD is focused on micro-activities. And because agile is based on the fundamental principle known as "the Whole Team," the humble approach illustrated here is also important when specifying business examples. If the example used for building a component is not modest, it will be difficult to meet the expectations. Therefore, the expectations must be humble, which makes the resulting examples equally humble.
|
||||
|
||||
For instance, if a member of the Whole Team (a requester) provides the developer with an expectation and an example that reads:
|
||||
|
||||
> _"When processing an order, make sure to apply appropriate discount for orders made by loyal customers, or for orders over certain monetary value, or both."_
|
||||
|
||||
The developer should recognize that this example is too ambitious. That's not a humble expectation. It is not sufficiently micro, if you will. The developer should always strive to guide a requester in being more specific and micro-level when crafting examples. Paradoxically, the more specific the example, the more generic the resulting solution will be.
|
||||
|
||||
A much better, more effective expectation and example would be:
|
||||
|
||||
> _"Discount made for an order greater than $100.00 is $18.00."_
|
||||
|
||||
Or:
|
||||
|
||||
> _"Discount made for an order greater than $100.00 that was made by a customer who already placed three orders is $25.00."_
|
||||
|
||||
Such micro-examples make it easy to turn them into automated micro-expectations (read: unit tests). Such expectations will make us fail, and then we will pick ourselves up and iterate until we deliver the solution—a robust, generic component that knows how to calculate discounts based on the micro-examples supplied by the Whole Team.
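A minimal Python sketch of how those two micro-examples become micro-expectations. The function name and the inferred discount rules are assumptions derived only from the examples above:

```python
def calculate_discount(order_total, previous_orders=0):
    # Hypothetical rules inferred from the two micro-examples:
    # an order over $100.00 earns an $18.00 discount; an order over
    # $100.00 from a customer with three prior orders earns $25.00.
    # All other behavior is an assumption for illustration.
    if order_total > 100.00:
        return 25.00 if previous_orders >= 3 else 18.00
    return 0.00

# Each micro-example becomes one micro-expectation:
def test_order_over_100_gets_18_discount():
    assert calculate_discount(101.00) == 18.00

def test_loyal_customer_over_100_gets_25_discount():
    assert calculate_discount(101.00, previous_orders=3) == 25.00
```

Note that each test verifies exactly one expectation, so a failure points at exactly one broken rule.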
|
||||
|
||||
### Writing quality unit tests
|
||||
|
||||
Merely writing unit tests without any concern about their quality is a fool's errand. Shoddily written unit tests will result in bloated, tightly coupled code. Such code is brittle, difficult to reason about, and often nearly impossible to fix.
|
||||
|
||||
We need to lay down some ground rules for writing quality unit tests. These ground rules will help us make swift progress in building robust, reliable solutions. The easiest way to do that is to introduce a mnemonic in the form of an acronym: **FIRST**, which says unit tests must be:
|
||||
|
||||
* **F** = Fast
  * **I** = Independent
  * **R** = Repeatable
  * **S** = Self-validating
  * **T** = Thorough

#### Fast
|
||||
|
||||
Since a unit test describes a micro-example, it should expect very simple processing from the implemented code. This means that each unit test should be very fast to run.
|
||||
|
||||
#### Independent
|
||||
|
||||
Since a unit test describes a micro-example, it should describe a very simple process that does not depend on any other unit test.
|
||||
|
||||
#### Repeatable
|
||||
|
||||
Since a unit test does not depend on any other unit test, it must be fully repeatable. What that means is that each time a certain unit test runs, it produces the same results as the previous time it ran. Neither the number of times the unit tests run nor the order in which they run should ever affect the expected output.
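One common source of non-repeatable tests is hidden randomness or wall-clock time. A small Python sketch (with hypothetical names) showing how pinning the random source makes a test repeatable:

```python
import random

def pick_winner(entries, rng=None):
    # The random source is a parameter, so a test can supply a seeded
    # generator instead of depending on global, run-to-run state.
    rng = rng or random.Random()
    return rng.choice(entries)

def test_pick_winner_is_repeatable_with_seeded_rng():
    entries = ["ann", "bob", "cal"]
    # Two calls with the same seed must agree, no matter how many
    # times or in what order the test suite executes.
    first = pick_winner(entries, random.Random(42))
    second = pick_winner(entries, random.Random(42))
    assert first == second
```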
|
||||
|
||||
#### Self-validating
|
||||
|
||||
When unit tests run, the outcome of the testing should be instantly visible. Developers should not be expected to reach for some other source(s) of information to find out whether their unit tests failed or passed.
|
||||
|
||||
#### Thorough
|
||||
|
||||
Unit tests should describe all the expectations as defined in the micro-examples.
|
||||
|
||||
### Well-structured unit tests
|
||||
|
||||
Unit tests are code. And the same as any other code, unit tests need to be well-structured. It is unacceptable to deliver sloppy, messy unit tests. All the principles that apply to the rules governing clean implementation code apply with equal force to unit tests.
|
||||
|
||||
A time-tested and proven methodology for writing reliable, quality code is based on the clean code principle known as **SOLID**. This acronym helps us remember five very important principles:
|
||||
|
||||
* **S** = Single responsibility principle
  * **O** = Open–closed principle
  * **L** = Liskov substitution principle
  * **I** = Interface segregation principle
  * **D** = Dependency inversion principle

#### Single responsibility principle
|
||||
|
||||
Each component must be responsible for performing only one operation. This principle is illustrated in this meme:
|
||||
|
||||
![Sign illustrating single-responsibility principle][3]
|
||||
|
||||
Pumping septic tanks is an operation that must be kept separate from filling swimming pools.
|
||||
|
||||
Applied to unit tests, this principle ensures that each unit test verifies one—and only one—expectation. From a technical standpoint, this means each unit test must have one and only one **Assert** statement.
|
||||
|
||||
#### Open–closed principle
|
||||
|
||||
This principle states that a component should be open for extensions, but closed for any modifications.
|
||||
|
||||
![Open-closed principle][4]
|
||||
|
||||
Applied to unit tests, this principle ensures that we will not implement a change to an existing unit test in that unit test. Instead, we must write a brand new unit test that will implement the changes.
|
||||
|
||||
#### Liskov substitution principle
|
||||
|
||||
This principle provides a guide for deciding which level of abstraction may be appropriate for the solution.
|
||||
|
||||
![Liskov substitution principle][5]
|
||||
|
||||
Applied to unit tests, this principle guides us to avoid tight coupling with dependencies that depend on the underlying computing environment (such as databases, disks, network, etc.).
|
||||
|
||||
#### Interface segregation principle
|
||||
|
||||
This principle reminds us not to bloat APIs. When subsystems need to collaborate to complete a task, they should communicate via interfaces. But those interfaces must not be bloated. If a new capability becomes necessary, don't add it to the already defined interface; instead, craft a brand new interface.
|
||||
|
||||
![Interface segregation principle][6]
|
||||
|
||||
Applied to unit tests, removing the bloat from interfaces helps us craft more specific unit tests, which, in turn, results in more generic components.
|
||||
|
||||
#### Dependency inversion principle
|
||||
|
||||
This principle states that we should control our dependencies, instead of dependencies controlling us. If there is a need to use another component's services, instead of being responsible for instantiating that component within the component we are building, it must instead be injected into our component.
|
||||
|
||||
![Dependency inversion principle][7]
|
||||
|
||||
Applied to the unit tests, this principle helps separate the intention from the implementation. We must strive to inject only those dependencies that have been sufficiently abstracted. That approach is important for ensuring unit tests are not mixed with integration tests.
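A brief Python sketch of this idea, reusing the daytime example (class and method names are hypothetical): the clock is injected, so the unit test substitutes a fixed fake instead of coupling to the real system time:

```python
from datetime import datetime

class DaylightService:
    def __init__(self, clock):
        # The dependency is injected rather than instantiated here.
        self._clock = clock

    def current_period(self):
        # The 7:00-19:00 daylight window is an illustrative assumption.
        return "Daylight" if 7 <= self._clock.now().hour < 19 else "Nighttime"

class FixedClock:
    """Test double that returns a predetermined time."""
    def __init__(self, fixed_time):
        self._fixed_time = fixed_time

    def now(self):
        return self._fixed_time

# In production, inject a clock backed by datetime.now();
# in a unit test, inject a FixedClock so the result is deterministic.
service = DaylightService(FixedClock(datetime(2019, 8, 8, 7, 0, 0)))
```

Because nothing in the test touches the real environment, it stays a unit test rather than an integration test.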
|
||||
|
||||
### Testing the tests
|
||||
|
||||
Finally, even if we manage to produce well-structured unit tests that fulfill the FIRST principles, it does not guarantee that we have delivered a solid solution. TDD best practices rely on the proper sequence of events when building components/services; we are always and invariably expected to provide a description of our expectations (supplied in the micro-examples). Only after those expectations are described in the unit test can we move on to writing the implementation code. However, two unwanted side effects can, and often do, happen while writing implementation code:
|
||||
|
||||
1. Implemented code enables the unit tests to pass, but they are written in a convoluted way, using unnecessarily complex logic
  2. Implemented code gets tacked on AFTER the unit tests have been written

In the first case, even if all unit tests pass, mutation testing uncovers that some mutants have survived. As I explained in _[Mutation testing by example: Evolving from fragile TDD][8]_, that is an extremely undesirable situation because it means that the solution is unnecessarily complex and, therefore, unmaintainable.
|
||||
|
||||
In the second case, all unit tests are guaranteed to pass, but a potentially large portion of the codebase consists of implemented code that hasn't been described anywhere. This means we are dealing with mysterious code. In the best-case scenario, we could treat that mysterious code as deadwood and safely remove it. But more likely than not, removing this not-described, implemented code will cause some serious breakages. And such breakages indicate that our solution is not well engineered.
|
||||
|
||||
### Conclusion
|
||||
|
||||
TDD best practices stem from the time-tested methodology called [extreme programming][9] (XP for short). One of the cornerstones of XP is based on the **three C's**:
|
||||
|
||||
1. **Card:** A small card briefly specifies the intent (e.g., "Review customer request").
  2. **Conversation:** The card becomes a ticket to conversation. The whole team gets together and talks about "Review customer request." What does that mean? Do we have enough information/knowledge to ship the "review customer request" functionality in this increment? If not, how do we further slice this card?
  3. **Concrete confirmation examples:** This includes all the specific values plugged in (e.g., concrete names, numeric values, specific dates, whatever else is pertinent to the use case) plus all values expected as an output of the processing.

Starting from such micro-examples, we write unit tests. We watch unit tests fail, then make them pass. And while doing that, we observe and respect the best software engineering practices: the **FIRST** principles, the **SOLID** principles, and the mutation testing discipline (i.e., kill all surviving mutants).
|
||||
|
||||
This ensures that our components and services are delivered with solid quality built in. And what is the measure of that quality? Simple—**the cost of change**. If the delivered code is costly to change, it is of shoddy quality. Very high-quality code is structured so well that it is simple and inexpensive to change and, at the same time, does not incur any change-management risks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/test-driven-development-best-practices
|
||||
|
||||
作者:[Alex Bunardzic][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alex-bunardzic
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
|
||||
[2]: https://opensource.com/users/alex-bunardzic
|
||||
[3]: https://opensource.com/sites/default/files/uploads/single-responsibility.png (Sign illustrating single-responsibility principle)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/openclosed_cc.jpg (Open-closed principle)
|
||||
[5]: https://opensource.com/sites/default/files/uploads/liskov_substitution_cc.jpg (Liskov substitution principle)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/interface_segregation_cc.jpg (Interface segregation principle)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/dependency_inversion_cc.jpg (Dependency inversion principle)
|
||||
[8]: https://opensource.com/article/19/9/mutation-testing-example-definition
|
||||
[9]: https://en.wikipedia.org/wiki/Extreme_programming
|
@ -0,0 +1,154 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Building container images with the ansible-bender tool)
|
||||
[#]: via: (https://opensource.com/article/19/10/building-container-images-ansible)
|
||||
[#]: author: (Tomas Tomecek https://opensource.com/users/tomastomecek)
|
||||
|
||||
Building container images with the ansible-bender tool
|
||||
======
|
||||
Learn how to use Ansible to execute commands in a container.
|
||||
![Blocks for building][1]
|
||||
|
||||
Containers and [Ansible][2] blend together so nicely—from management and orchestration to provisioning and building. In this article, we'll focus on the building part.
|
||||
|
||||
If you are familiar with Ansible, you know that you can write a series of tasks, and the **ansible-playbook** command will execute them for you. Did you know that you can also execute such commands in a container environment and get the same result as if you'd written a Dockerfile and run **podman build**?
|
||||
|
||||
Here is an example:
|
||||
|
||||
|
||||
```
- name: Serve our file using httpd
  hosts: all
  tasks:
  - name: Install httpd
    package:
      name: httpd
      state: installed
  - name: Copy our file to httpd’s webroot
    copy:
      src: our-file.txt
      dest: /var/www/html/
```
|
||||
|
||||
You could execute this playbook locally on your web server or in a container, and it would work—as long as you remember to create the **our-file.txt** file first.
|
||||
|
||||
But something is missing. You need to start (and configure) httpd in order for your file to be served. This is a difference between container builds and infrastructure provisioning: When building an image, you just prepare the content; running the container is a different task. On the other hand, you can attach metadata to the container image that tells the command to run by default.
|
||||
|
||||
Here's where a tool would help. How about trying **ansible-bender**?
|
||||
|
||||
|
||||
```
$ ansible-bender build the-playbook.yaml fedora:30 our-httpd
```
|
||||
|
||||
This command uses ansible-bender to execute the playbook against a Fedora 30 container image and names the resulting container image **our-httpd**.
|
||||
|
||||
But when you run that container, it won't start httpd because it doesn't know how to do it. You can fix this by adding some metadata to the playbook:
|
||||
|
||||
|
||||
```
- name: Serve our file using httpd
  hosts: all
  vars:
    ansible_bender:
      base_image: fedora:30
      target_image:
        name: our-httpd
        cmd: httpd -DFOREGROUND
  tasks:
  - name: Install httpd
    package:
      name: httpd
      state: installed
  - name: Listen on all network interfaces.
    lineinfile:
      path: /etc/httpd/conf/httpd.conf
      regexp: '^Listen '
      line: Listen 0.0.0.0:80
  - name: Copy our file to httpd’s webroot
    copy:
      src: our-file.txt
      dest: /var/www/html
```
|
||||
|
||||
Now you can build the image (from here on, please run all the commands as root—currently, Buildah and Podman won't create dedicated networks for rootless containers):
|
||||
|
||||
|
||||
```
# ansible-bender build the-playbook.yaml
PLAY [Serve our file using httpd] ****************************************************

TASK [Gathering Facts] ***************************************************************
ok: [our-httpd-20191004-131941266141-cont]

TASK [Install httpd] *****************************************************************
loaded from cache: 'f053578ed2d47581307e9ba3f64f4b4da945579a082c6f99bd797635e62befd0'
skipping: [our-httpd-20191004-131941266141-cont]

TASK [Listen on all network interfaces.] *********************************************
changed: [our-httpd-20191004-131941266141-cont]

TASK [Copy our file to httpd’s webroot] **********************************************
changed: [our-httpd-20191004-131941266141-cont]

PLAY RECAP ***************************************************************************
our-httpd-20191004-131941266141-cont : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Getting image source signatures
Copying blob sha256:4650c04b851c62897e9c02c6041a0e3127f8253fafa3a09642552a8e77c044c8
Copying blob sha256:87b740bba596291af8e9d6d91e30a01d5eba9dd815b55895b8705a2acc3a825e
Copying blob sha256:82c21252bd87532e93e77498e3767ac2617aa9e578e32e4de09e87156b9189a0
Copying config sha256:44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Writing manifest to image destination
Storing signatures
44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
Image 'our-httpd' was built successfully \o/
```
|
||||
|
||||
The image is built, and it's time to run the container:
|
||||
|
||||
|
||||
```
# podman run our-httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.2.106. Set the 'ServerName' directive globally to suppress this message
```
|
||||
|
||||
Is your file being served? First, find out the IP of your container:
|
||||
|
||||
|
||||
```
# podman inspect -f '{{ .NetworkSettings.IPAddress }}' 7418570ba5a0
10.88.2.106
```
|
||||
|
||||
And now you can check:
|
||||
|
||||
|
||||
```
$ curl http://10.88.2.106/our-file.txt
Ansible is ❤
```
|
||||
|
||||
What were the contents of your file?
|
||||
|
||||
This was just an introduction to building container images with Ansible. If you want to learn more about what ansible-bender can do, please check it out on [GitHub][3]. Happy building!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/building-container-images-ansible
|
||||
|
||||
作者:[Tomas Tomecek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tomastomecek
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire (Blocks for building)
|
||||
[2]: https://www.ansible.com/
|
||||
[3]: https://github.com/ansible-community/ansible-bender
|
@ -0,0 +1,263 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to dual boot Windows 10 and Debian 10)
|
||||
[#]: via: (https://www.linuxtechi.com/dual-boot-windows-10-debian-10/)
|
||||
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
|
||||
|
||||
How to dual boot Windows 10 and Debian 10
|
||||
======
|
||||
|
||||
So, you finally made the bold decision to try out **Linux** after much convincing. However, you do not want to let go of your Windows 10 operating system yet as you will still be needing it before you learn the ropes on Linux. Thankfully, you can easily have a dual boot setup that allows you to switch to either of the operating systems upon booting your system. In this guide, you will learn how to **dual boot Windows 10 alongside Debian 10**.
|
||||
|
||||
[![How-to-dual-boot-Windows-and-Debian10][1]][2]
|
||||
|
||||
### Prerequisites
|
||||
|
||||
Before you get started, ensure you have the following:
|
||||
|
||||
* A bootable USB or DVD of Debian 10
  * A fast and stable internet connection (for installation updates and third-party applications)

Additionally, it is worth paying attention to how your system boots (UEFI or Legacy) and ensuring that both operating systems use the same boot mode.
|
||||
|
||||
### Step 1: Create a free partition on your hard drive
|
||||
|
||||
To start off, you need to create a free partition on your hard drive. This is the partition where Debian will be installed during the installation process. To achieve this, you will invoke the disk management utility as shown:
|
||||
|
||||
Press **Windows Key + R** to launch the Run dialogue. Next, type **diskmgmt.msc** and hit **ENTER**
|
||||
|
||||
[![Launch-Run-dialogue][1]][3]
|
||||
|
||||
This launches the **disk management** window displaying all the drives existing on your Windows system.
|
||||
|
||||
[![Disk-management][1]][4]
|
||||
|
||||
Next, you need to create free space for the Debian installation. To do this, shrink one of the existing volumes to leave a new unallocated partition. In this case, I will create a **30 GB** partition from Volume D.
|
||||
|
||||
To shrink a volume, right-click on it and select the ‘**shrink**’ option
|
||||
|
||||
[![Shrink-volume][1]][5]
|
||||
|
||||
In the pop-up dialogue, define the amount of space you want to shrink. Remember, this will be the disk space on which Debian 10 is installed. In my case, I selected **30000 MB** (approximately 30 GB). Once done, click '**Shrink**'.
|
||||
|
||||
[![Shrink-space][1]][6]
|
||||
|
||||
After the shrinking operation completes, you should have an unallocated partition as shown:
|
||||
|
||||
[![Unallocated-partition][1]][7]
|
||||
|
||||
Perfect! We are now good to go and ready to begin the installation process.
|
||||
|
||||
### Step 2: Begin the installation of Debian 10
|
||||
|
||||
With the free partition already created, plug in your bootable USB drive or insert the DVD installation medium in your PC and reboot your system. Be sure to change the **boot order** in the **BIOS** setup by pressing the function keys (usually **F9**, **F10** or **F12**, depending on the vendor). This is crucial so that the PC boots from your installation medium. Save the BIOS settings and reboot.
|
||||
|
||||
A new grub menu will be displayed as shown below: Click on ‘**Graphical install**’
|
||||
|
||||
[![Graphical-Install-Debian10][1]][8]
|
||||
|
||||
In the next step, select your **preferred language** and click ‘**Continue**’
|
||||
|
||||
[![Select-Language-Debian10][1]][9]
|
||||
|
||||
Next, select your **location** and click '**Continue**'. Based on this location, the time zone will be selected for you automatically. If you cannot find your location, scroll down and click '**other**', then select your location.
|
||||
|
||||
[![Select-location-Debain10][1]][10]
|
||||
|
||||
Next, select your **keyboard** layout.
|
||||
|
||||
[![Configure-Keyboard-layout-Debain10][1]][11]
|
||||
|
||||
In the next step, specify your system’s **hostname** and click ‘**Continue**’
|
||||
|
||||
[![Set-hostname-Debian10][1]][12]
|
||||
|
||||
Next, specify the **domain name**. If you are not in a domain environment, simply click on the ‘**continue**’ button.
|
||||
|
||||
[![Set-domain-name-Debian10][1]][13]
|
||||
|
||||
In the next step, specify the **root password** as shown and click ‘**continue**’.
|
||||
|
||||
[![Set-root-Password-Debian10][1]][14]
|
||||
|
||||
In the next step, specify the full name of the user for the account and click ‘**continue**’
|
||||
|
||||
[![Specify-fullname-user-debain10][1]][15]
|
||||
|
||||
Then set the account name by specifying the **username** associated with the account
|
||||
|
||||
[![Specify-username-Debian10][1]][16]
|
||||
|
||||
Next, specify the username’s password as shown and click ‘**continue**’
|
||||
|
||||
[![Specify-user-password-Debian10][1]][17]
|
||||
|
||||
Next, specify your **timezone**
|
||||
|
||||
[![Configure-timezone-Debian10][1]][18]
|
||||
|
||||
At this point, you need to create partitions for your Debian 10 installation. If you are an inexperienced user, click on '**Use the largest continuous free space**' and click '**continue**'.
|
||||
|
||||
[![Use-largest-continuous-free-space-debian10][1]][19]
|
||||
|
||||
However, if you are more knowledgeable about creating partitions, select the ‘**Manual**’ option and click ‘**continue**’
|
||||
|
||||
[![Select-Manual-Debain10][1]][20]
|
||||
|
||||
Thereafter, select the partition labeled '**FREE SPACE**' and click '**continue**'. Next, click on '**Create a new partition**'.
|
||||
|
||||
[![Create-new-partition-Debain10][1]][21]
|
||||
|
||||
In the next window, first define the size of the swap space. In my case, I specified **2 GB**. Click **Continue**.
|
||||
|
||||
[![Define-swap-space-debian10][1]][22]
|
||||
|
||||
Next, select '**Primary**' on the next screen and click '**continue**'
|
||||
|
||||
[![Partition-Disks-Primary-Debain10][1]][23]
|
||||
|
||||
Select the partition to **start at the beginning** and click continue.
|
||||
|
||||
[![Start-at-the-beginning-Debain10][1]][24]
|
||||
|
||||
Next, click on **Ext 4 journaling file system** and click ‘**continue**’
|
||||
|
||||
[![Select-Ext4-Journaling-system-debain10][1]][25]
|
||||
|
||||
On the next window, select **swap** and click '**continue**'
|
||||
|
||||
[![Select-swap-debain10][1]][26]
|
||||
|
||||
Next, click on '**Done setting up the partition**' and click '**continue**'.
|
||||
|
||||
[![Done-setting-partition-debian10][1]][27]
|
||||
|
||||
Back to the **Partition disks** page, click on **FREE SPACE** and click continue
|
||||
|
||||
[![Click-Free-space-Debain10][1]][28]
|
||||
|
||||
To make your life easy, select **Automatically partition the free space** and click **continue**.
|
||||
|
||||
[![Automatically-partition-free-space-Debain10][1]][29]
|
||||
|
||||
Next, click on **All files in one partition (recommended for new users)**.
|
||||
|
||||
[![All-files-in-one-partition-debian10][1]][30]
|
||||
|
||||
Finally, click on **Finish partitioning and write changes to disk** and click **continue**.
|
||||
|
||||
[![Finish-partitioning-write-changes-to-disk][1]][31]
|
||||
|
||||
Confirm that you want to write changes to disk and click ‘**Yes**’
|
||||
|
||||
[![Write-changes-to-disk-Yes-Debian10][1]][32]
|
||||
|
||||
Thereafter, the installer will begin installing all the requisite software packages.
|
||||
|
||||
When asked if you want to scan another CD, select **No** and click continue
|
||||
|
||||
[![Scan-another-CD-No-Debain10][1]][33]
|
||||
|
||||
Next, select the mirror of the Debian archive closest to you and click ‘Continue’
|
||||
|
||||
[![Debian-archive-mirror-country][1]][34]
|
||||
|
||||
Next, select the **Debian mirror** that is most preferable to you and click ‘**Continue**’
|
||||
|
||||
[![Select-Debian-archive-mirror][1]][35]
|
||||
|
||||
If you plan on using a proxy server, enter its details as shown below, otherwise leave it blank and click ‘continue’
|
||||
|
||||
[![Enter-proxy-details-debian10][1]][36]
|
||||
|
||||
As the installation proceeds, you will be asked if you would like to participate in a **package usage survey**. You can select either option and click ‘continue’ . In my case, I selected ‘**No**’
|
||||
|
||||
[![Participate-in-survey-debain10][1]][37]
|
||||
|
||||
Next, select the packages you need in the **software selection** window and click **continue**.
|
||||
|
||||
[![Software-selection-debian10][1]][38]
|
||||
|
||||
The installation will continue installing the selected packages. At this point, you can take a coffee break as the installation goes on.
|
||||
|
||||
You will be prompted whether to install the grub **bootloader** on **Master Boot Record (MBR)**. Click **Yes** and click **Continue**.
|
||||
|
||||
[![Install-grub-bootloader-debian10][1]][39]
|
||||
|
||||
Next, select the hard drive on which you want to install **grub** and click **Continue**.
|
||||
|
||||
[![Select-hard-drive-install-grub-Debian10][1]][40]
|
||||
|
||||
Finally, the installation will complete, Go ahead and click on the ‘**Continue**’ button
|
||||
|
||||
[![Installation-complete-reboot-debian10][1]][41]
|
||||
|
||||
You should now have a grub menu with both **Windows** and **Debian** listed. To boot to Debian, scroll and click on Debian. Thereafter, you will be prompted with a login screen. Enter your details and hit ENTER.
|
||||
|
||||
[![Debian10-log-in][1]][42]
|
||||
|
||||
And voila! There goes your fresh copy of Debian 10 in a dual boot setup with Windows 10.
|
||||
|
||||
[![Debian10-Buster-Details][1]][43]
|
||||
|
||||
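Once logged in, you can verify the installation from a terminal; a quick sketch (the exact output will vary with your hardware and partition sizes):

```shell
# Show which release you booted into and the disk layout of the dual-boot setup.
head -n 2 /etc/os-release
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```

On a Debian 10 system, the first command prints the `NAME` and `VERSION` fields for Buster, and `lsblk` should show your Windows and Debian partitions side by side on the same disk.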
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/dual-boot-windows-10-debian-10/

Author: [James Kiarie][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/How-to-dual-boot-Windows-and-Debian10.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Launch-Run-dialogue.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Disk-management.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-volume.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-space.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Unallocated-partition.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Graphical-Install-Debian10.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Language-Debian10.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-location-Debain10.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-Keyboard-layout-Debain10.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-hostname-Debian10.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-domain-name-Debian10.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-root-Password-Debian10.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-fullname-user-debain10.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-username-Debian10.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-user-password-Debian10.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-timezone-Debian10.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Use-largest-continuous-free-space-debian10.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Manual-Debain10.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Create-new-partition-Debain10.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Define-swap-space-debian10.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Partition-Disks-Primary-Debain10.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Start-at-the-beginning-Debain10.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Ext4-Journaling-system-debain10.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-swap-debain10.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Done-setting-partition-debian10.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Click-Free-space-Debain10.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Automatically-partition-free-space-Debain10.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/10/All-files-in-one-partition-debian10.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Finish-partitioning-write-changes-to-disk.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Write-changes-to-disk-Yes-Debian10.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Scan-another-CD-No-Debain10.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian-archive-mirror-country.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Debian-archive-mirror.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Enter-proxy-details-debian10.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Participate-in-survey-debain10.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Software-selection-debian10.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-grub-bootloader-debian10.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-hard-drive-install-grub-Debian10.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Installation-complete-reboot-debian10.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-log-in.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-Buster-Details.jpg
sources/tech/20191023 How to program with Bash- Loops.md (new file, 352 lines)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Loops)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-3)
[#]: author: (David Both https://opensource.com/users/dboth)

How to program with Bash: Loops
======

Learn how to use loops for performing iterative operations, in the final article in this three-part series on programming with Bash.

![arrows cycle symbol for failing faster][1]
Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts. This three-part series, based on my [three-volume Linux self-study course][2], explores using Bash as a programming language on the command-line interface (CLI).

The [first article][3] in this series explored some simple command-line programming with Bash, including using variables and control operators. The [second article][4] looked into the types of file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and different types of shell expansions in Bash. This third (and final) article examines the use of loops for performing various types of iterative operations and ways to control those loops.

### Loops

Every programming language I have ever used has at least a couple of types of loop structures that provide various capabilities to perform repetitive operations. I use the **for** loop quite often, but I also find the **while** and **until** loops useful.

#### for loops

Bash's implementation of the **for** command is, in my opinion, a bit more flexible than most because it can handle non-numeric values; in contrast, for example, the standard C-language **for** loop can deal only with numeric values.

The basic structure of the Bash version of the **for** command is simple:

```
for Var in list1 ; do list2 ; done
```

This translates to: "For each value in list1, set **$Var** to that value and then perform the program statements in list2 using that value; when all of the values in list1 have been used, it is finished, so exit the loop." The values in list1 can be a simple, explicit string of values, or they can be the result of a command substitution (described in the second article in the series). I use this construct frequently.

To try it, ensure that **~/testdir** is still the present working directory (PWD). Clean up the directory, then look at a trivial example of the **for** loop starting with an explicit list of values. This list is a mix of alphanumeric values, but do not forget that all variables are strings and can be treated as such.

```
[student@studentvm1 testdir]$ rm *
[student@studentvm1 testdir]$ for I in a b c d 1 2 3 4 ; do echo $I ; done
a
b
c
d
1
2
3
4
```
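As an aside not covered in this article, Bash also supports a C-style arithmetic **for** loop, which is convenient when you want purely numeric iteration without spelling out the list:

```shell
# C-style numeric for loop: initialize, test, and increment in one header.
for (( i = 0; i < 4; i++ )) ; do
    echo "count $i"
done
```

This prints `count 0` through `count 3`, one per line, and is equivalent to listing the numbers explicitly after `in`.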

Here is a slightly more useful version with a more meaningful variable name:

```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Department $Dept" ; done
Department Human Resources
Department Sales
Department Finance
Department Information Technology
Department Engineering
Department Administration
Department Research
```

Make some directories (and show some progress information while doing so):

```
[student@studentvm1 testdir]$ for Dept in "Human Resources" Sales Finance "Information Technology" Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept" ; done
Working on Department Human Resources
Working on Department Sales
Working on Department Finance
Working on Department Information Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Human Resources'
drwxrwxr-x 2 student student 4096 Apr  8 15:45 'Information Technology'
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Research
drwxrwxr-x 2 student student 4096 Apr  8 15:45 Sales
```

The **$Dept** variable must be enclosed in quotes in the **mkdir** statement; otherwise, two-part department names (such as "Information Technology") will be treated as two separate departments. That highlights a best practice I like to follow: all file and directory names should be a single word. Although most modern operating systems can deal with spaces in names, it takes extra work for sysadmins to ensure that those special cases are considered in scripts and CLI programs. (They almost certainly should be considered, even if they're annoying, because you never know what files you will have.)

So, delete everything in **~/testdir**, again, and do this one more time:

```
[student@studentvm1 testdir]$ rm -rf * ; ll
total 0
[student@studentvm1 testdir]$ for Dept in Human-Resources Sales Finance Information-Technology Engineering Administration Research ; do echo "Working on Department $Dept" ; mkdir "$Dept" ; done
Working on Department Human-Resources
Working on Department Sales
Working on Department Finance
Working on Department Information-Technology
Working on Department Engineering
Working on Department Administration
Working on Department Research
[student@studentvm1 testdir]$ ll
total 28
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Administration
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Engineering
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Finance
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Human-Resources
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Information-Technology
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Research
drwxrwxr-x 2 student student 4096 Apr  8 15:52 Sales
```

Suppose someone asks for a list of all RPMs on a particular Linux computer, along with a short description of each. This happened to me when I worked for the State of North Carolina. Since open source was not "approved" for use by state agencies at that time, and I only used Linux on my desktop computer, the pointy-haired bosses (PHBs) needed a list of each piece of software that was installed on my computer so that they could "approve" an exception.

How would you approach that? Here is one way, starting with the knowledge that the **rpm -qa** command provides a complete description of an RPM, including the two items the PHBs want: the software name and a brief summary.

Build up to the final result one step at a time. First, list all RPMs:

```
[student@studentvm1 testdir]$ rpm -qa
perl-HTTP-Message-6.18-3.fc29.noarch
perl-IO-1.39-427.fc29.x86_64
perl-Math-Complex-1.59-429.fc29.noarch
lua-5.3.5-2.fc29.x86_64
java-11-openjdk-headless-11.0.ea.28-2.fc29.x86_64
util-linux-2.32.1-1.fc29.x86_64
libreport-fedora-2.9.7-1.fc29.x86_64
rpcbind-1.2.5-0.fc29.x86_64
libsss_sudo-2.0.0-5.fc29.x86_64
libfontenc-1.1.3-9.fc29.x86_64
<snip>
```

Add the **sort** and **uniq** commands to sort the list and print the unique ones (since it's possible that some RPMs with identical names are installed):

```
[student@studentvm1 testdir]$ rpm -qa | sort | uniq
a2ps-4.14-39.fc29.x86_64
aajohan-comfortaa-fonts-3.001-3.fc29.noarch
abattis-cantarell-fonts-0.111-1.fc29.noarch
abiword-3.0.2-13.fc29.x86_64
abrt-2.11.0-1.fc29.x86_64
abrt-addon-ccpp-2.11.0-1.fc29.x86_64
abrt-addon-coredump-helper-2.11.0-1.fc29.x86_64
abrt-addon-kerneloops-2.11.0-1.fc29.x86_64
abrt-addon-pstoreoops-2.11.0-1.fc29.x86_64
abrt-addon-vmcore-2.11.0-1.fc29.x86_64
<snip>
```

Since this gives the correct list of RPMs you want to look at, you can use it as the input list to a loop that will print all the details of each RPM:

```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done
```
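As an aside, the backtick command substitution used above has a modern equivalent, `$( )`, which is easier to read and nests cleanly; here is a sketch of the same loop shape, with `printf` standing in for `rpm -qa` so it runs on any system:

```shell
# Same pattern as the RPM loop, using $() command substitution.
# printf emits a small sample list in place of rpm -qa.
for Item in $(printf '%s\n' beta alpha alpha | sort | uniq) ; do
    echo "Processing $Item"
done
```

This prints `Processing alpha` followed by `Processing beta`: the duplicate `alpha` is removed by `uniq` and the list is sorted before the loop consumes it.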

This code produces far more data than you want. Note that the loop is complete. The next step is to extract only the information the PHBs requested. So, add an **egrep** command, which is used to select lines matching **^Name** or **^Summary**. The caret (**^**) anchors the match to the beginning of the line; thus, any line with Name or Summary at the beginning of the line is displayed.

```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary"
Name        : a2ps
Summary     : Converts text and other types of files to PostScript
Name        : aajohan-comfortaa-fonts
Summary     : Modern style true type font
Name        : abattis-cantarell-fonts
Summary     : Humanist sans serif font
Name        : abiword
Summary     : Word processing program
Name        : abrt
Summary     : Automatic bug detection and reporting tool
<snip>
```

You can try **grep** instead of **egrep** in the command above, but it will not work. You could also pipe the output of this command through the **less** filter to explore the results. The final command sequence looks like this:

```
[student@studentvm1 testdir]$ for RPM in `rpm -qa | sort | uniq` ; do rpm -qi $RPM ; done | egrep -i "^Name|^Summary" > RPM-summary.txt
```

This command-line program uses pipelines, redirection, and a **for** loop, all on a single line. It redirects the output of your little CLI program to a file that can be used in an email or as input for other purposes.

This process of building up the program one step at a time lets you see the result of each step, ensure that it works as you expect, and verify that it provides the desired results.

From this exercise, the PHBs received a list of over 1,900 separate RPM packages. I seriously doubt that anyone read that list. But I gave them exactly what they asked for, and I never heard another word from them about it.

### Other loops

There are two more types of loop structures available in Bash: the **while** and **until** structures, which are very similar to each other in both syntax and function. The basic syntax of these loop structures is simple:

```
while [ expression ] ; do list ; done
```

and

```
until [ expression ] ; do list ; done
```

The logic of the first reads: "While the expression evaluates as true, execute the list of program statements. When the expression evaluates as false, exit from the loop." And the second: "Until the expression evaluates as true, execute the list of program statements. When the expression evaluates as true, exit from the loop."

#### While loop

The **while** loop is used to execute a series of program statements while (so long as) the logical expression evaluates as true. Your PWD should still be **~/testdir**.

The simplest form of the **while** loop is one that runs forever. The following form uses the **true** statement to always generate a "true" return code. You could also use a simple "1", and that would work just the same, but this illustrates the use of the **true** statement:

```
[student@studentvm1 testdir]$ X=0 ; while [ true ] ; do echo $X ; X=$((X+1)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 testdir]$
```

This CLI program should make more sense now that you have studied its parts. First, it sets **$X** to zero in case it has a value left over from a previous program or CLI command. Then, since the logical expression **[ true ]** always evaluates to 1, which is true, the list of program instructions between **do** and **done** is executed forever, or until you press **Ctrl+C** or otherwise send a signal 2 to the program. Those instructions are an arithmetic expansion that prints the current value of **$X** and then increments it by one.

One of the tenets of [_The Linux Philosophy for Sysadmins_][5] is to strive for elegance, and one way to achieve elegance is simplicity. You can simplify this program by using the variable increment operator, **++**. In the first instance, the current value of the variable is printed, and then the variable is incremented. This is indicated by placing the **++** operator after the variable:

```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
```

Now delete **| head** from the end of the program and run it again.

In this version, the variable is incremented before its value is printed. This is specified by placing the **++** operator before the variable. Can you see the difference?

```
[student@studentvm1 ~]$ X=0 ; while [ true ] ; do echo $((++X)) ; done | head
1
2
3
4
5
6
7
8
9
```

You have reduced two statements to a single one that prints the value of the variable and increments that value. There is also a decrement operator, **\--**.

You need a method for stopping the loop at a specific number. To accomplish that, change the true expression to an actual numeric evaluation expression. Have the program loop to 5 and stop. In the example code below, you can see that **-le** is the logical numeric operator for "less than or equal to." This means: "So long as **$X** is less than or equal to 5, the loop will continue. When **$X** increments to 6, the loop terminates."

```
[student@studentvm1 ~]$ X=0 ; while [ $X -le 5 ] ; do echo $((X++)) ; done
0
1
2
3
4
5
[student@studentvm1 ~]$
```
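The loops above run to completion on their own, but Bash also provides the **break** and **continue** statements to control a loop from inside its body; a small sketch:

```shell
# break exits the loop entirely; continue skips to the next iteration.
X=0
while true ; do
    X=$((X+1))
    if [ $((X % 2)) -eq 0 ] ; then
        continue        # skip even values
    fi
    if [ $X -gt 9 ] ; then
        break           # stop once past 9
    fi
    echo $X
done
```

This prints the odd numbers 1, 3, 5, 7, and 9, one per line, then exits when **$X** first exceeds 9 on an odd value.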

#### Until loop

The **until** command is very much like the **while** command. The difference is that it will continue to loop until the logical expression evaluates to "true." Look at the simplest form of this construct:

```
[student@studentvm1 ~]$ X=0 ; until false ; do echo $((X++)) ; done | head
0
1
2
3
4
5
6
7
8
9
[student@studentvm1 ~]$
```

It uses a logical comparison to count to a specific value:

```
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ] ; do echo $((X++)) ; done
0
1
2
3
4
[student@studentvm1 ~]$ X=0 ; until [ $X -eq 5 ] ; do echo $((++X)) ; done
1
2
3
4
5
[student@studentvm1 ~]$
```
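One common practical use of **until** is waiting for some external condition, such as a file appearing, before continuing; a minimal sketch (the flag-file path here is illustrative, not from the article):

```shell
# Poll once per second until a flag file shows up, then proceed.
until [ -e /tmp/ready.flag ] ; do
    sleep 1
done
echo "Flag file found, continuing"
```

The loop body runs only while the file is absent; as soon as the test succeeds, the loop exits and execution continues past **done**.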

### Summary

This series has explored many powerful tools for building Bash command-line programs and shell scripts. But it has barely scratched the surface of the many interesting things you can do with Bash; the rest is up to you.

I have discovered that the best way to learn Bash programming is to do it. Find a simple project that requires multiple Bash commands and make a CLI program out of them. Sysadmins do many tasks that lend themselves to CLI programming, so I am sure you will easily find tasks to automate.

Many years ago, despite being familiar with other shell languages and Perl, I decided to use Bash for all of my sysadmin automation tasks. I have discovered that, sometimes with a bit of searching, I have been able to use Bash to accomplish everything I need.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/programming-bash-part-3

Author: [David Both][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: http://www.both.org/?page_id=1183
[3]: https://opensource.com/article/19/10/programming-bash-part-1
[4]: https://opensource.com/article/19/10/programming-bash-part-2
[5]: https://www.apress.com/us/book/9781484237298
sources/tech/20191023 Using SSH port forwarding on Fedora.md (new file, 106 lines)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using SSH port forwarding on Fedora)
[#]: via: (https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)

Using SSH port forwarding on Fedora
======

![][1]
You may already be familiar with using the _[ssh][2]_ [command][2] to access a remote system. The protocol behind _ssh_ allows terminal input and output to flow through a [secure channel][3]. But did you know that you can also use _ssh_ to send and receive other data securely as well? One way is to use _port forwarding_, which allows you to connect network ports securely while conducting your _ssh_ session. This article shows you how it works.

### About ports

A standard Linux system has a set of network ports already assigned, from 0 to 65535. Your system reserves ports up to 1023 for system use. On many systems you can't elect to use one of these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system's _/etc/services_ file.
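As a quick way to browse those assignments, the `getent` tool can query the services database from the command line; a small sketch (this assumes the service names below are present in your _/etc/services_, as they are on a standard Linux install):

```shell
# Query the system services database for well-known port assignments.
getent services ssh
getent services http
getent services https
```

Each line of output shows the service name followed by its port and protocol, such as `ssh 22/tcp`.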
You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as _httpd_). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also "wiring" your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple of common cases in the life of a web developer.

### Local port forwarding

Imagine that you are doing web development on a remote system called _remote.example.com_. You usually reach this system via _ssh_, but it's behind a firewall that allows very little additional access and blocks most other ports. To try out your web app, it's helpful to be able to point your web browser at the remote system. But you can't reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your _ssh_ connection. The port appears as a local port on your system (thus "local forwarding").

Let's say your web app is running on port 8000 on the _remote.example.com_ box. To locally forward that system's port 8000 to your system's port 8000, use the _-L_ option with _ssh_ when you start your session:

```
$ ssh -L 8000:localhost:8000 remote.example.com
```

Wait, why did we use _localhost_ as the target for forwarding? It's because from the perspective of _remote.example.com_, you're asking the host to use its own port 8000. (Recall that any host usually can refer to itself as _localhost_ to connect to itself via a network connection.) That port now connects to your system's port 8000. Once the _ssh_ session is ready, keep it open, and you can type _<http://localhost:8000>_ in your browser to see your web app. The traffic between systems now travels securely over an _ssh_ tunnel!

If you have a sharp eye, you may have noticed something. What if we used a different hostname than _localhost_ for _remote.example.com_ to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the _db.example.com_ box, also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can't _ssh_ to the actual _db.example.com_ host:

```
$ ssh -L 3306:db.example.com:3306 remote.example.com
```

Now you can run MariaDB commands against your _localhost_ and you're actually using the _db.example.com_ box.

### Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you're designing a web app for a friend at the office and want to show them your work. Unfortunately, though, you're working in a coffee shop, and because of the network setup, they can't reach your laptop via a network connection. However, you both use the _remote.example.com_ system at the office, and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your _ssh_ connection and make it available on the remote system. Just use the _-R_ option when you start your _ssh_ session:

```
$ ssh -R 6000:localhost:5000 remote.example.com
```

Now when your friend inside the corporate firewall runs their browser, they can point it at _<http://remote.example.com:6000>_ and see your work. And as in the local port forwarding example, the communications travel securely over your _ssh_ session.

By default the _sshd_ daemon running on a host is set so that **only** that host can connect to its remote forwarded ports. Let's say your friend wanted to be able to let people on other _example.com_ corporate hosts see your work, and they weren't on _remote.example.com_ itself. You'd need the owner of the _remote.example.com_ host to add **one** of these options to _/etc/ssh/sshd_config_ on that box:

```
GatewayPorts yes # OR
GatewayPorts clientspecified
```

The first option means remote forwarded ports are available on all the network interfaces on _remote.example.com_. The second means that the client who sets up the tunnel gets to choose the address. This option is set to **no** by default.

With this option, you as the _ssh_ client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following (the `*` form is quoted so the shell does not expand it):

```
$ ssh -R '*:6000:localhost:5000' remote.example.com             # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 remote.example.com         # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 remote.example.com    # single network
$ ssh -R remote.example.com:6000:localhost:5000 remote.example.com  # single network
```
### Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system (privileged) port in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The _PermitOpen_ option for the _sshd_ daemon controls whether, and which, ports are available for TCP forwarding. The default setting is **any**, which allows all the examples above to work. To disallow any port forwarding, choose **none**, or permit only a specific **host:port** setting. For more information, search for _PermitOpen_ in the manual page for _sshd_ daemon configuration:

```
$ man sshd_config
```
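As a sketch of a tightened setup, a server's `/etc/ssh/sshd_config` might permit forwarding only to a single internal service — the address and port here are examples, not from the article:

```
# Allow TCP forwarding only to one internal web service
PermitOpen 192.168.1.10:8080
```

Any attempt to forward to another destination through that _sshd_ would then be refused.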
Finally, remember port forwarding only happens as long as the controlling _ssh_ session is open. If you need to keep the forwarding active for a long period, try running the session in the background using the _-f_ and _-N_ options (_-f_ sends _ssh_ to the background, and _-N_ skips running a remote command). Make sure your console is locked to prevent tampering while you’re away from it.
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/ssh-port-forwarding-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Secure_Shell
[3]: https://fedoramagazine.org/open-source-ssh-clients/
@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source CMS Ghost 3.0 Released with New features for Publishers)
[#]: via: (https://itsfoss.com/ghost-3-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Open Source CMS Ghost 3.0 Released with New features for Publishers
======
[Ghost][1] is a free and open source content management system (CMS). If you are not aware of the term, a CMS is software that allows you to build a website primarily focused on creating content, without requiring knowledge of HTML and other web-related technologies.

Ghost is in fact one of the [best open source CMS][2] options out there. Its main focus is on creating lightweight, fast-loading and good-looking blogs.

It has a modern, intuitive editor with built-in SEO features. You also have native desktop (including Linux) and mobile apps. If you like the terminal, you can also use the CLI tools it provides.

Let’s see what new features Ghost 3.0 brings.

### New Features in Ghost 3.0

![][3]

I’m usually intrigued by open source CMS solutions – so after reading the official announcement post, I went ahead and gave it a try by installing a new Ghost instance via a [Digital Ocean cloud server][4].

I was really impressed with the improvements they’ve made to the features and the UI compared to the previous version.

Here, I shall list the key changes/additions worth mentioning.
#### Bookmark Cards

![][5]

In addition to all the subtle changes to the editor, it now lets you add a beautiful bookmark card by just entering the URL.

If you have used WordPress – you may have noticed that you need a plugin in order to add a card like that – so it is definitely a useful addition in Ghost 3.0.

#### Improved WordPress Migration Plugin

I haven’t tested this in particular, but they have updated their WordPress migration plugin to let you easily clone your posts (with images) to Ghost CMS.

Basically, with the plugin, you will be able to create an archive (with images) and import it to Ghost CMS.
#### Responsive Image Galleries & Images

To make the user experience better, they have also updated the image galleries (which are now responsive) to present your picture collection comfortably across all devices.

In addition, the images in posts/pages are now responsive as well.

#### Members & Subscriptions option

![Ghost Subscription Model][6]

Even though the feature is still in the beta phase, it lets you add members and a subscription model for your blog if you choose to make it a premium publication to sustain your business.

With this feature, you can make sure that your blog can only be accessed by subscribed members, or choose to make it available to the public in addition to the subscription.
#### Stripe: Payment Integration

It supports the Stripe payment gateway by default to help you easily enable subscriptions (or any type of payment), with no additional fees charged by Ghost.

#### New App Integrations

![][7]

You can now integrate a variety of popular applications/services with your blog on Ghost 3.0. It could come in handy to automate a lot of things.

#### Default Theme Improvement

The default theme (design) that comes baked in has been improved and now offers a dark mode as well.

You can always choose to create a custom theme as well (if the pre-built themes don't suit you).

#### Other Minor Improvements

In addition to all the key highlights, the visual editor for creating posts/pages has been improved as well (with some drag and drop capabilities).

I’m sure there are a lot of technical changes as well – which you can check out in their [changelog][8] if you’re interested.
### Ghost is gradually getting good traction

It’s not easy to make your mark in a world dominated by WordPress. But Ghost has gradually formed a dedicated community of publishers around it.

Not only that, their managed hosting service [Ghost Pro][9] now has customers like NASA, Mozilla and DuckDuckGo.

Over the last six years, Ghost has made $5 million in revenue from their Ghost Pro customers. Considering that they are a non-profit organization working on an open source solution, this is indeed an achievement.

This helps them remain independent by avoiding external funding from venture capitalists. The more customers for managed Ghost CMS hosting, the more funds go into the development of the free and open source CMS.

Overall, Ghost 3.0 is by far the best upgrade they’ve offered. I’m personally impressed with the features.

If you have websites of your own, what CMS do you use? Have you ever used Ghost? How’s your experience with it? Do share your thoughts in the comment section.
--------------------------------------------------------------------------------

via: https://itsfoss.com/ghost-3-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/ghost/
[2]: https://itsfoss.com/open-source-cms/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-3.jpg?ssl=1
[4]: https://itsfoss.com/recommends/digital-ocean/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-editor-screenshot.png?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-subscription-model.jpg?resize=800%2C503&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-app-integration.jpg?ssl=1
[8]: https://ghost.org/faq/upgrades/
[9]: https://itsfoss.com/recommends/ghost-pro/
@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Five Most Popular Operating Systems for the Internet of Things)
[#]: via: (https://opensourceforu.com/2019/10/the-five-most-popular-operating-systems-for-the-internet-of-things/)
[#]: author: (K S Kuppusamy https://opensourceforu.com/author/ks-kuppusamy/)

The Five Most Popular Operating Systems for the Internet of Things
======

[![][1]][2]
_Connecting every ‘thing’ that we see around us to the Internet is the fundamental idea of the Internet of Things (IoT). There are many operating systems to get the best out of the things that are connected to the Internet. This article explores five popular operating systems for IoT — Ubuntu Core, RIOT, Contiki, TinyOS and Zephyr._

To say that life is running on the Internet these days is not an exaggeration, due to the number and variety of services that we consume on the Net. These services span multiple domains such as information, financial services, social networking and entertainment. As this list grows longer, it becomes imperative that we do not restrict the types of devices that can connect to the Internet. The Internet of Things (IoT) facilitates connecting various types of ‘things’ to the Internet infrastructure. By connecting a device or thing to the Internet, these things gain the ability not only to interact with the user but also with each other. This feature of a variety of things interacting among themselves to assist users in a pervasive manner constitutes an interesting phenomenon called ambient intelligence.

![Figure 1: IoT application domains][3]

IoT is becoming increasingly popular as the types of devices that can be connected to it become more diverse. The nature of applications is also evolving. Some of the popular domains in which IoT is increasingly being used are listed below (Figure 1):

  * Smart homes
  * Smart cities
  * Smart agriculture
  * Connected automobiles
  * Smart shopping
  * Connected health
![Figure 2: IoT operating system features][4]

As the application domains become diverse, the need to manage the IoT infrastructure efficiently also becomes more important. The operating systems on normal computers perform primary functions such as resource management, user interaction, etc. The requirements of IoT operating systems are specialised due to the nature and size of the devices involved in the process. Some of the important characteristics/requirements of IoT operating systems are listed below (Figure 2):

  * A tiny memory footprint
  * Energy efficiency
  * Connectivity features
  * Hardware-agnostic operations
  * Real-time processing requirements
  * Security requirements
  * Application development ecosystem

As of 2019, there is a spectrum of choices for selecting the operating system (OS) for the Internet of Things. Some of these OSs are shown in Figure 3.

![Figure 3: IoT operating systems][5]
**Ubuntu Core**
As Ubuntu is a popular Linux distribution, the Ubuntu Core IoT offering has also become popular. Ubuntu Core is a secure and lightweight OS for IoT, and is designed with a ‘security first’ philosophy. According to the official documentation, the entire system has been redesigned to focus on security from the first boot. There is a detailed white paper available on Ubuntu Core’s security features. It can be accessed at _<https://assets.ubuntu.com/v1/66fcd858-ubuntu-core-security-whitepaper.pdf?_ga=2.74563154.1977628533.1565098475-2022264852.1565098475>_.

Ubuntu Core has been made tamper-resistant. As the applications may come from diverse sources, they are given privileges for only their own data. This has been done so that one poorly designed app does not make the entire system vulnerable. Ubuntu Core is ‘built for business’, which means that developers can focus directly on the application at hand, while the other requirements are supported by the default operating system.

Another important feature of Ubuntu Core is the availability of a secure app store, which you can learn more about at _<https://ubuntu.com/internet-of-things/appstore>_. There is a ready-to-go software ecosystem that makes using Ubuntu Core simple.

The official documentation lists various case studies about how Ubuntu Core has been successfully used.
**RIOT**
RIOT is a user-friendly OS for the Internet of Things. This FOSS OS has been developed by a number of people from around the world.
RIOT supports many low-power IoT devices. It has support for various microcontroller architectures. The official documentation lists the following reasons for using the RIOT OS.

  * _**It is developer friendly:**_ It supports standard environments and tools, so developers need not go through a steep learning curve. Standard programming languages such as C or C++ are supported. The hardware-dependent code is very minimal. Developers can code once and then run their code on 8-bit, 16-bit and 32-bit platforms.
  * _**RIOT is resource friendly:**_ One of the important features of RIOT is its ability to support lightweight devices. It enables maximum energy efficiency. It supports multi-threading with very little threading overhead.
  * _**RIOT is IoT friendly:**_ The common system support provided by RIOT makes it a very important choice for IoT. It has support for CoAP, CBOR, and high-resolution, long-term timers.
**Contiki**
Contiki is an important OS for IoT. It facilitates connecting tiny, low-cost and low-energy devices to the Internet.
The prominent reasons for choosing the Contiki OS are as follows.

  * _**Internet standards:**_ The Contiki OS supports the IPv6 and IPv4 standards, in addition to the low-power 6lowpan, RPL and CoAP standards.
  * _**Support for a variety of hardware:**_ Contiki can be run on a variety of low-power devices, which are easily available online.
  * _**Large community support:**_ One of the important advantages of using Contiki is the availability of an active community of developers. So when you have technical issues to solve, the community makes the problem-solving process simple and effective.

The major features of Contiki are listed below.

  * _**Memory allocation:**_ Even tiny systems with only a few kilobytes of memory can use Contiki. Its memory efficiency is an important feature.
  * _**Full IP networking:**_ The Contiki OS offers a full IP network stack. This includes major standard protocols such as UDP, TCP, HTTP, 6lowpan, RPL, CoAP, etc.
  * _**Power awareness:**_ The ability to assess power requirements and to use power in an optimal, minimal manner is an important feature of Contiki.
  * The Cooja network simulator makes the process of developing and debugging software easier.
  * The availability of the Coffee flash file system and the Contiki shell makes file handling and command execution simpler and more effective.
**TinyOS**
TinyOS is an open source operating system designed for low-power wireless devices. It has a vibrant community of users spread across the world, from both academia and industry. The popularity of TinyOS can be understood from the fact that it gets downloaded more than 35,000 times a year.
TinyOS is very effectively used in various scenarios such as sensor networks, smart buildings, smart meters, etc. The main repository of TinyOS is available at <https://github.com/tinyos/tinyos-main>.
TinyOS is written in nesC, which is a dialect of C. A sample code snippet is shown below:

```
configuration Led {
    provides {
        interface LedControl;
    }
    uses {
        interface Gpio;
    }
}

implementation {
    command void LedControl.turnOn() {
        call Gpio.set();
    }

    command void LedControl.turnOff() {
        call Gpio.clear();
    }
}
```
**Zephyr**
Zephyr is a real-time OS that supports multiple architectures and is optimised for resource-constrained environments. Security is also given importance in the Zephyr design.

The prominent features of Zephyr are listed below:

  * Support for 150+ boards.
  * Complete flexibility and freedom of choice.
  * Can handle small-footprint IoT devices.
  * Can develop products with built-in security features.

This article has introduced readers to five OSs for the IoT, from which they can select the ideal one based on individual requirements.
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/the-five-most-popular-operating-systems-for-the-internet-of-things/

作者:[K S Kuppusamy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/ks-kuppusamy/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/OS-for-IoT.jpg?resize=696%2C647&ssl=1 (OS for IoT)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/OS-for-IoT.jpg?fit=800%2C744&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-IoT-application-domains.jpg?resize=350%2C107&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-IoT-operating-system-features.jpg?resize=350%2C93&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-IoT-operating-systems.jpg?resize=350%2C155&ssl=1
@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for October 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)

4 cool new projects to try in COPR for October 2019
======

![][1]
[COPR][2] is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the [COPR User Documentation][3] for how to get started.

### Nu

[Nu][4], or Nushell, is a shell inspired by PowerShell and modern CLI tools. Using a structured-data based approach, Nu makes it easy to work with commands that output data, piping it through other commands. The results are then displayed in tables that can be sorted or filtered easily and may serve as inputs for further commands. Finally, Nu provides several built-in commands, multiple shells and support for plugins.

#### Installation instructions

The [repo][5] currently provides Nu for Fedora 30, 31 and Rawhide. To install Nu, use these commands:

```
sudo dnf copr enable atim/nushell
sudo dnf install nushell
```
### NoteKit

[NoteKit][6] is a program for note-taking. It supports Markdown for formatting notes, and the ability to create hand-drawn notes using a mouse. In NoteKit, notes are sorted and organized in a tree structure.

#### Installation instructions

The [repo][7] currently provides NoteKit for Fedora 29, 30, 31 and Rawhide. To install NoteKit, use these commands:

```
sudo dnf copr enable lyessaadi/notekit
sudo dnf install notekit
```
### Crow Translate

[Crow Translate][8] is a program for translating. It can translate text as well as speak both the input and result, and offers a command-line interface as well. For translation, Crow Translate uses the Google, Yandex or Bing translation APIs.

#### Installation instructions

The [repo][9] currently provides Crow Translate for Fedora 30, 31 and Rawhide, and for Epel 8. To install Crow Translate, use these commands:

```
sudo dnf copr enable faezebax/crow-translate
sudo dnf install crow-translate
```
### dnsmeter

[dnsmeter][10] is a command-line tool for testing the performance of a nameserver and its infrastructure. For this, it sends DNS queries and counts the replies, measuring various statistics. Among other features, dnsmeter can use different load steps, use payload from PCAP files and spoof sender addresses.

#### Installation instructions

The repo currently provides dnsmeter for Fedora 29, 30, 31 and Rawhide, and EPEL 7. To install dnsmeter, use these commands:

```
sudo dnf copr enable @dnsoarc/dnsmeter
sudo dnf install dnsmeter
```
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/

作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://github.com/nushell/nushell
[5]: https://copr.fedorainfracloud.org/coprs/atim/nushell/
[6]: https://github.com/blackhole89/notekit
[7]: https://copr.fedorainfracloud.org/coprs/lyessaadi/notekit/
[8]: https://github.com/crow-translate/crow-translate
[9]: https://copr.fedorainfracloud.org/coprs/faezebax/crow-translate/
[10]: https://github.com/DNS-OARC/dnsmeter
@ -0,0 +1,550 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Backup Configuration Files on a Remote System Using the Bash Script)
[#]: via: (https://www.2daygeek.com/linux-bash-script-backup-configuration-files-remote-linux-system-server/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Backup Configuration Files on a Remote System Using the Bash Script
======
It is a good practice to back up configuration files before performing any activity on a Linux system.

This script is useful if you are restarting the server after several days of uptime.

If you are really concerned about keeping backups of your configuration files, it is advisable to run this script at least once a month.

If something goes wrong, you can restore the system to normal by comparing configuration files based on the error message.

Three **[bash scripts][1]** are included in this article, and each **[shell script][2]** is used for specific purposes.

You can choose one based on your requirements.

Everything in Linux is a file. If you make some wrong changes in a configuration file, it will cause the associated service to crash.

So it is a good idea to take a backup of configuration files, and you do not have to worry about disk usage as these backups do not consume much space.

### What does this script do?

This script backs up specific configuration files, moves them to another server, and finally deletes the backup on the remote machine.

This script has six parts, and the details are below.

  * **Part-1:** Back up general configuration files.
  * **Part-2:** Back up the WWN/WWPN numbers if the server is physical.
  * **Part-3:** Back up Oracle-related files if the system has an oracle user account.
  * **Part-4:** Create a tar archive of the backup configuration files.
  * **Part-5:** Copy the tar archive to another server.
  * **Part-6:** Remove the backup of configuration files on the remote system.
**System details are as follows:**

  * **Server-A:** Local System/ JUMP System (local.2daygeek.com)
  * **Server-B:** Remote System-1 (CentOS6.2daygeek.com)
  * **Server-C:** Remote System-2 (CentOS7.2daygeek.com)

### 1) Bash Script to Backup Configuration files on Remote Server

Two scripts are included in this example, which allow you to back up important configuration files from one server to another (that is, from a remote server to a local server).

For example, if you want to back up important configuration files from **“Server-B”** to **“Server-A”**, use the following script.

This is the bash script that takes a backup of configuration files on the remote server.
```
# vi /home/daygeek/shell-script/config-file.sh

#!/bin/bash
mkdir /tmp/conf-bk-$(date +%Y%m%d)
cd /tmp/conf-bk-$(date +%Y%m%d)

#For General Configuration Files
hostname > hostname.out
uname -a > uname.out
uptime > uptime.out
cat /etc/hosts > hosts.out
/bin/df -h>df-h.out
pvs > pvs.out
vgs > vgs.out
lvs > lvs.out
/bin/ls -ltr /dev/mapper>mapper.out
fdisk -l > fdisk.out
cat /etc/fstab > fstab.out
cat /etc/exports > exports.out
cat /etc/crontab > crontab.out
cat /etc/passwd > passwd.out
ip link show > ip.out
/bin/netstat -in>netstat-in.out
/bin/netstat -rn>netstat-rn.out
/sbin/ifconfig -a>ifconfig-a.out
cat /etc/sysctl.conf > sysctl.out
sleep 10s

#For Physical Server
vserver=$(lscpu | grep vendor | wc -l)
if [ $vserver -gt 0 ]
then
echo "$(hostname) is a VM"
else
systool -c fc_host -v | egrep "(Class Device path | port_name |port_state)" > systool.out
fi
sleep 10s

#For Oracle DB Servers
if id oracle >/dev/null 2>&1; then
/usr/sbin/oracleasm listdisks>asm.out
/sbin/multipath -ll > mpath.out
/bin/ps -ef|grep pmon > pmon.out
else
echo "oracle user does not exist on server"
fi
sleep 10s

#Create a tar archive
tar -cvf /tmp/$(hostname)-$(date +%Y%m%d).tar /tmp/conf-bk-$(date +%Y%m%d)
sleep 10s

#Copy a tar archive to other server
sshpass -p 'password' scp /tmp/$(hostname)-$(date +%Y%m%d).tar Server-A:/home/daygeek/backup/

#Remove the backup config folder
cd ..
rm -Rf conf-bk-$(date +%Y%m%d)
rm $(hostname)-$(date +%Y%m%d).tar
rm config-file.sh
exit
```
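The dated file names above rely on `$(date +%Y%m%d)` command substitution; writing `date +%Y%m%d` without the `$( )` wrapper would embed the literal text rather than the date. A minimal sketch of how the names are built (the variable names here are illustrative, not from the script):

```shell
#!/bin/bash
# Build the same dated names the backup script uses. $(date +%Y%m%d)
# expands to e.g. 20191024; without the $( ) wrapper the literal string
# "date +%Y%m%d" would end up in the file name instead.
stamp=$(date +%Y%m%d)
backup_dir="/tmp/conf-bk-${stamp}"
tar_file="/tmp/$(hostname)-${stamp}.tar"
echo "${backup_dir}"
echo "${tar_file}"
```

This is why the tar, scp and rm steps in the script must all use the `$( )` form, or they would each refer to a different (nonexistent) file name.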
|
||||
|
||||
This is a sub-script that pushes the above script to the target server and runs it there.

```
# vi /home/daygeek/shell-script/conf-remote.sh

#!/bin/bash
echo -e "Enter the Remote Server Name: \c"
read server
scp /home/daygeek/shell-script/config-file.sh $server:/tmp/
ssh root@${server} sh /tmp/config-file.sh
sleep 10s
exit
```
Finally, run the bash script to achieve this.

```
# sh /home/daygeek/shell-script/conf-remote.sh

Enter the Remote Server Name: CentOS6.2daygeek.com
config-file.sh 100% 1446 647.8KB/s 00:00
CentOS6.2daygeek.com is a VM
oracle user does not exist on server
tar: Removing leading `/' from member names
/tmp/conf-bk-20191024/
/tmp/conf-bk-20191024/pvs.out
/tmp/conf-bk-20191024/vgs.out
/tmp/conf-bk-20191024/ip.out
/tmp/conf-bk-20191024/netstat-in.out
/tmp/conf-bk-20191024/fstab.out
/tmp/conf-bk-20191024/ifconfig-a.out
/tmp/conf-bk-20191024/hostname.out
/tmp/conf-bk-20191024/crontab.out
/tmp/conf-bk-20191024/netstat-rn.out
/tmp/conf-bk-20191024/uptime.out
/tmp/conf-bk-20191024/uname.out
/tmp/conf-bk-20191024/mapper.out
/tmp/conf-bk-20191024/lvs.out
/tmp/conf-bk-20191024/exports.out
/tmp/conf-bk-20191024/df-h.out
/tmp/conf-bk-20191024/sysctl.out
/tmp/conf-bk-20191024/hosts.out
/tmp/conf-bk-20191024/passwd.out
/tmp/conf-bk-20191024/fdisk.out
```
Once you run the above script, use the ls command to check the copied tar archive file.
|
||||
|
||||
```
|
||||
# ls -ltrh /home/daygeek/backup/*.tar
|
||||
|
||||
-rw-r--r-- 1 daygeek daygeek 30K Oct 25 11:01 /home/daygeek/backup/CentOS6.2daygeek.com-20191024.tar
|
||||
```
|
||||
|
||||
If it is moved successfully, you can find the contents of it without extracting it using the following tar command.
|
||||
|
||||
```
# tar -tvf /home/daygeek/backup/CentOS6.2daygeek.com-20191024.tar

drwxr-xr-x root/root 0 2019-10-25 11:00 tmp/conf-bk-20191024/
-rw-r--r-- root/root 96 2019-10-25 11:00 tmp/conf-bk-20191024/pvs.out
-rw-r--r-- root/root 92 2019-10-25 11:00 tmp/conf-bk-20191024/vgs.out
-rw-r--r-- root/root 413 2019-10-25 11:00 tmp/conf-bk-20191024/ip.out
-rw-r--r-- root/root 361 2019-10-25 11:00 tmp/conf-bk-20191024/netstat-in.out
-rw-r--r-- root/root 785 2019-10-25 11:00 tmp/conf-bk-20191024/fstab.out
-rw-r--r-- root/root 1375 2019-10-25 11:00 tmp/conf-bk-20191024/ifconfig-a.out
-rw-r--r-- root/root 21 2019-10-25 11:00 tmp/conf-bk-20191024/hostname.out
-rw-r--r-- root/root 457 2019-10-25 11:00 tmp/conf-bk-20191024/crontab.out
-rw-r--r-- root/root 337 2019-10-25 11:00 tmp/conf-bk-20191024/netstat-rn.out
-rw-r--r-- root/root 62 2019-10-25 11:00 tmp/conf-bk-20191024/uptime.out
-rw-r--r-- root/root 116 2019-10-25 11:00 tmp/conf-bk-20191024/uname.out
-rw-r--r-- root/root 210 2019-10-25 11:00 tmp/conf-bk-20191024/mapper.out
-rw-r--r-- root/root 276 2019-10-25 11:00 tmp/conf-bk-20191024/lvs.out
-rw-r--r-- root/root 0 2019-10-25 11:00 tmp/conf-bk-20191024/exports.out
-rw-r--r-- root/root 236 2019-10-25 11:00 tmp/conf-bk-20191024/df-h.out
-rw-r--r-- root/root 1057 2019-10-25 11:00 tmp/conf-bk-20191024/sysctl.out
-rw-r--r-- root/root 115 2019-10-25 11:00 tmp/conf-bk-20191024/hosts.out
-rw-r--r-- root/root 2194 2019-10-25 11:00 tmp/conf-bk-20191024/passwd.out
-rw-r--r-- root/root 1089 2019-10-25 11:00 tmp/conf-bk-20191024/fdisk.out
```
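The same inspect-without-extract pattern can be tried on a throwaway archive; a minimal sketch using hypothetical paths under `/tmp`:

```shell
# Build a tiny archive and list its members without extracting it.
mkdir -p /tmp/demo-conf-bk
echo "127.0.0.1 localhost" > /tmp/demo-conf-bk/hosts.out
# -C changes directory first so the archive stores relative paths
tar -cf /tmp/demo-conf.tar -C /tmp demo-conf-bk
# -t lists members; add -v for permissions/owner/size as shown above
tar -tf /tmp/demo-conf.tar
```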
### 2) Bash Script to Backup Configuration files on Remote Server
This example adds two scripts that do the same job as the script above, but they are very useful if you have a JUMP server in your environment.

These scripts let you copy important configuration files from your client systems into the JUMP box.

For example, if you have already set up password-less login and have ten clients that can be reached from the JUMP server, use this script.
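The password-less login mentioned above is normally prepared once per client with `ssh-keygen`/`ssh-copy-id`. A hedged sketch — the host names are the article's examples, the `root` user is an assumption, and the real ssh commands are left as comments so the sketch runs anywhere:

```shell
# Hypothetical client list reachable from the JUMP box.
clients="CentOS6.2daygeek.com CentOS7.2daygeek.com"

for host in $clients; do
    # One-time setup (commented out in this sketch):
    #   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair once
    #   ssh-copy-id root@$host                     # install the public key on the client
    echo "key-based login prepared for root@$host"
done
```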
The following bash script takes a backup of configuration files on the remote server.
```
# vi /home/daygeek/shell-script/config-file-1.sh

#!/bin/bash
mkdir /tmp/conf-bk-$(date +%Y%m%d)
cd /tmp/conf-bk-$(date +%Y%m%d)

#For General Configuration Files
hostname > hostname.out
uname -a > uname.out
uptime > uptime.out
cat /etc/hosts > hosts.out
/bin/df -h > df-h.out
pvs > pvs.out
vgs > vgs.out
lvs > lvs.out
/bin/ls -ltr /dev/mapper > mapper.out
fdisk -l > fdisk.out
cat /etc/fstab > fstab.out
cat /etc/exports > exports.out
cat /etc/crontab > crontab.out
cat /etc/passwd > passwd.out
ip link show > ip.out
/bin/netstat -in > netstat-in.out
/bin/netstat -rn > netstat-rn.out
/sbin/ifconfig -a > ifconfig-a.out
cat /etc/sysctl.conf > sysctl.out
sleep 10s

#For Physical Server
vserver=$(lscpu | grep vendor | wc -l)
if [ $vserver -gt 0 ]
then
echo "$(hostname) is a VM"
else
systool -c fc_host -v | egrep "(Class Device path | port_name |port_state)" > systool.out
fi
sleep 10s

#For Oracle DB Servers
if id oracle >/dev/null 2>&1; then
/usr/sbin/oracleasm listdisks > asm.out
/sbin/multipath -ll > mpath.out
/bin/ps -ef | grep pmon > pmon.out
else
echo "oracle user does not exist on server"
fi
sleep 10s

#Create a tar archive
tar -cvf /tmp/$(hostname)-$(date +%Y%m%d).tar /tmp/conf-bk-$(date +%Y%m%d)
sleep 10s

#Remove the backup config folder
cd ..
rm -Rf conf-bk-$(date +%Y%m%d)
rm /tmp/config-file-1.sh
exit
```
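Note that the archive name is built with command substitution: `date +%Y%m%d` must be wrapped in `$(...)`, otherwise the literal text `date +%Y%m%d` ends up in the file name instead of the date stamp. A quick check of the pattern:

```shell
# $(...) runs the command and splices its stdout into the string.
stamp=$(date +%Y%m%d)
archive="/tmp/$(hostname)-$stamp.tar"
echo "$archive"

# The stamp is always eight digits: YYYYMMDD.
case $stamp in
    [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) echo "stamp ok" ;;
    *) echo "stamp malformed" ;;
esac
```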
This helper script pushes the above script to the target server and copies the resulting tar archive back.
```
# vi /home/daygeek/shell-script/conf-remote-1.sh

#!/bin/bash
echo -e "Enter the Remote Server Name: \c"
read server
scp /home/daygeek/shell-script/config-file-1.sh $server:/tmp/
ssh root@${server} sh /tmp/config-file-1.sh
sleep 10s
echo -e "Re-Enter the Remote Server Name: \c"
read server
scp $server:/tmp/$server-$(date +%Y%m%d).tar /home/daygeek/backup/
exit
```
Finally, run the bash script to achieve this.
```
# sh /home/daygeek/shell-script/conf-remote-1.sh

Enter the Remote Server Name: CentOS6.2daygeek.com
config-file.sh 100% 1446 647.8KB/s 00:00
CentOS6.2daygeek.com is a VM
oracle user does not exist on server
tar: Removing leading `/' from member names
/tmp/conf-bk-20191025/
/tmp/conf-bk-20191025/pvs.out
/tmp/conf-bk-20191025/vgs.out
/tmp/conf-bk-20191025/ip.out
/tmp/conf-bk-20191025/netstat-in.out
/tmp/conf-bk-20191025/fstab.out
/tmp/conf-bk-20191025/ifconfig-a.out
/tmp/conf-bk-20191025/hostname.out
/tmp/conf-bk-20191025/crontab.out
/tmp/conf-bk-20191025/netstat-rn.out
/tmp/conf-bk-20191025/uptime.out
/tmp/conf-bk-20191025/uname.out
/tmp/conf-bk-20191025/mapper.out
/tmp/conf-bk-20191025/lvs.out
/tmp/conf-bk-20191025/exports.out
/tmp/conf-bk-20191025/df-h.out
/tmp/conf-bk-20191025/sysctl.out
/tmp/conf-bk-20191025/hosts.out
/tmp/conf-bk-20191025/passwd.out
/tmp/conf-bk-20191025/fdisk.out
Enter the Server Name Once Again: CentOS6.2daygeek.com
CentOS6.2daygeek.com-20191025.tar
```
Once you run the above script, use the ls command to check the copied tar archive file.
```
# ls -ltrh /home/daygeek/backup/*.tar

-rw-r--r-- 1 daygeek daygeek 30K Oct 25 11:44 /home/daygeek/backup/CentOS6.2daygeek.com-20191025.tar
```
If it was moved successfully, you can list its contents without extracting it using the following tar command.
```
# tar -tvf /home/daygeek/backup/CentOS6.2daygeek.com-20191025.tar

drwxr-xr-x root/root 0 2019-10-25 11:43 tmp/conf-bk-20191025/
-rw-r--r-- root/root 96 2019-10-25 11:43 tmp/conf-bk-20191025/pvs.out
-rw-r--r-- root/root 92 2019-10-25 11:43 tmp/conf-bk-20191025/vgs.out
-rw-r--r-- root/root 413 2019-10-25 11:43 tmp/conf-bk-20191025/ip.out
-rw-r--r-- root/root 361 2019-10-25 11:43 tmp/conf-bk-20191025/netstat-in.out
-rw-r--r-- root/root 785 2019-10-25 11:43 tmp/conf-bk-20191025/fstab.out
-rw-r--r-- root/root 1375 2019-10-25 11:43 tmp/conf-bk-20191025/ifconfig-a.out
-rw-r--r-- root/root 21 2019-10-25 11:43 tmp/conf-bk-20191025/hostname.out
-rw-r--r-- root/root 457 2019-10-25 11:43 tmp/conf-bk-20191025/crontab.out
-rw-r--r-- root/root 337 2019-10-25 11:43 tmp/conf-bk-20191025/netstat-rn.out
-rw-r--r-- root/root 61 2019-10-25 11:43 tmp/conf-bk-20191025/uptime.out
-rw-r--r-- root/root 116 2019-10-25 11:43 tmp/conf-bk-20191025/uname.out
-rw-r--r-- root/root 210 2019-10-25 11:43 tmp/conf-bk-20191025/mapper.out
-rw-r--r-- root/root 276 2019-10-25 11:43 tmp/conf-bk-20191025/lvs.out
-rw-r--r-- root/root 0 2019-10-25 11:43 tmp/conf-bk-20191025/exports.out
-rw-r--r-- root/root 236 2019-10-25 11:43 tmp/conf-bk-20191025/df-h.out
-rw-r--r-- root/root 1057 2019-10-25 11:43 tmp/conf-bk-20191025/sysctl.out
-rw-r--r-- root/root 115 2019-10-25 11:43 tmp/conf-bk-20191025/hosts.out
-rw-r--r-- root/root 2194 2019-10-25 11:43 tmp/conf-bk-20191025/passwd.out
-rw-r--r-- root/root 1089 2019-10-25 11:43 tmp/conf-bk-20191025/fdisk.out
```
### 3) Bash Script to Backup Configuration files on Multiple Linux Remote Systems
This script allows you to copy important configuration files from multiple remote Linux systems into the JUMP box at the same time.

The following bash script takes a backup of configuration files on each remote server.
```
# vi /home/daygeek/shell-script/config-file-2.sh

#!/bin/bash
mkdir /tmp/conf-bk-$(date +%Y%m%d)
cd /tmp/conf-bk-$(date +%Y%m%d)

#For General Configuration Files
hostname > hostname.out
uname -a > uname.out
uptime > uptime.out
cat /etc/hosts > hosts.out
/bin/df -h > df-h.out
pvs > pvs.out
vgs > vgs.out
lvs > lvs.out
/bin/ls -ltr /dev/mapper > mapper.out
fdisk -l > fdisk.out
cat /etc/fstab > fstab.out
cat /etc/exports > exports.out
cat /etc/crontab > crontab.out
cat /etc/passwd > passwd.out
ip link show > ip.out
/bin/netstat -in > netstat-in.out
/bin/netstat -rn > netstat-rn.out
/sbin/ifconfig -a > ifconfig-a.out
cat /etc/sysctl.conf > sysctl.out
sleep 10s

#For Physical Server
vserver=$(lscpu | grep vendor | wc -l)
if [ $vserver -gt 0 ]
then
echo "$(hostname) is a VM"
else
systool -c fc_host -v | egrep "(Class Device path | port_name |port_state)" > systool.out
fi
sleep 10s

#For Oracle DB Servers
if id oracle >/dev/null 2>&1; then
/usr/sbin/oracleasm listdisks > asm.out
/sbin/multipath -ll > mpath.out
/bin/ps -ef | grep pmon > pmon.out
else
echo "oracle user does not exist on server"
fi
sleep 10s

#Create a tar archive
tar -cvf /tmp/$(hostname)-$(date +%Y%m%d).tar /tmp/conf-bk-$(date +%Y%m%d)
sleep 10s

#Remove the backup config folder
cd ..
rm -Rf conf-bk-$(date +%Y%m%d)
rm /tmp/config-file-2.sh
exit
```
This helper script pushes the above script to the target servers and copies the resulting tar archives back.
```
# vi /home/daygeek/shell-script/conf-remote-2.sh

#!/bin/bash
for server in CentOS6.2daygeek.com CentOS7.2daygeek.com
do
scp /home/daygeek/shell-script/config-file-2.sh $server:/tmp/
ssh root@${server} sh /tmp/config-file-2.sh
sleep 10s
scp $server:/tmp/$server-$(date +%Y%m%d).tar /home/daygeek/backup/
done
exit
```
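The loop above backs up each server sequentially. Since the servers are independent, one design variation is to launch each iteration as a background subshell and `wait` for all of them to finish. A hedged sketch — the server names are the article's examples, and the real scp/ssh calls are left as comments so the sketch runs anywhere:

```shell
servers="CentOS6.2daygeek.com CentOS7.2daygeek.com"

for server in $servers; do
    (
        # scp /home/daygeek/shell-script/config-file-2.sh $server:/tmp/
        # ssh root@$server sh /tmp/config-file-2.sh
        # scp $server:/tmp/$server-$(date +%Y%m%d).tar /home/daygeek/backup/
        echo "backup finished on $server"
    ) &   # each server's backup runs concurrently in a subshell
done
wait      # block until every background job has completed
echo "all backups done"
```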
Finally, run the bash script to achieve this.
```
# sh /home/daygeek/shell-script/conf-remote-2.sh

config-file-1.sh 100% 1444 416.5KB/s 00:00
CentOS6.2daygeek.com is a VM
oracle user does not exist on server
tar: Removing leading `/' from member names
/tmp/conf-bk-20191025/
/tmp/conf-bk-20191025/pvs.out
/tmp/conf-bk-20191025/vgs.out
/tmp/conf-bk-20191025/ip.out
/tmp/conf-bk-20191025/netstat-in.out
/tmp/conf-bk-20191025/fstab.out
/tmp/conf-bk-20191025/ifconfig-a.out
/tmp/conf-bk-20191025/hostname.out
/tmp/conf-bk-20191025/crontab.out
/tmp/conf-bk-20191025/netstat-rn.out
/tmp/conf-bk-20191025/uptime.out
/tmp/conf-bk-20191025/uname.out
/tmp/conf-bk-20191025/mapper.out
/tmp/conf-bk-20191025/lvs.out
/tmp/conf-bk-20191025/exports.out
/tmp/conf-bk-20191025/df-h.out
/tmp/conf-bk-20191025/sysctl.out
/tmp/conf-bk-20191025/hosts.out
/tmp/conf-bk-20191025/passwd.out
/tmp/conf-bk-20191025/fdisk.out
CentOS6.2daygeek.com-20191025.tar
config-file-1.sh 100% 1444 386.2KB/s 00:00
CentOS7.2daygeek.com is a VM
oracle user does not exist on server
/tmp/conf-bk-20191025/
/tmp/conf-bk-20191025/hostname.out
/tmp/conf-bk-20191025/uname.out
/tmp/conf-bk-20191025/uptime.out
/tmp/conf-bk-20191025/hosts.out
/tmp/conf-bk-20191025/df-h.out
/tmp/conf-bk-20191025/pvs.out
/tmp/conf-bk-20191025/vgs.out
/tmp/conf-bk-20191025/lvs.out
/tmp/conf-bk-20191025/mapper.out
/tmp/conf-bk-20191025/fdisk.out
/tmp/conf-bk-20191025/fstab.out
/tmp/conf-bk-20191025/exports.out
/tmp/conf-bk-20191025/crontab.out
/tmp/conf-bk-20191025/passwd.out
/tmp/conf-bk-20191025/ip.out
/tmp/conf-bk-20191025/netstat-in.out
/tmp/conf-bk-20191025/netstat-rn.out
/tmp/conf-bk-20191025/ifconfig-a.out
/tmp/conf-bk-20191025/sysctl.out
tar: Removing leading `/' from member names
CentOS7.2daygeek.com-20191025.tar
```
Once you run the above script, use the ls command to check the copied tar archive files.
```
# ls -ltrh /home/daygeek/backup/*.tar

-rw-r--r-- 1 daygeek daygeek 30K Oct 25 12:37 /home/daygeek/backup/CentOS6.2daygeek.com-20191025.tar
-rw-r--r-- 1 daygeek daygeek 30K Oct 25 12:38 /home/daygeek/backup/CentOS7.2daygeek.com-20191025.tar
```
If they were moved successfully, you can list an archive's contents without extracting it using the following tar command.
```
# tar -tvf /home/daygeek/backup/CentOS7.2daygeek.com-20191025.tar

drwxr-xr-x root/root 0 2019-10-25 12:23 tmp/conf-bk-20191025/
-rw-r--r-- root/root 21 2019-10-25 12:23 tmp/conf-bk-20191025/hostname.out
-rw-r--r-- root/root 115 2019-10-25 12:23 tmp/conf-bk-20191025/uname.out
-rw-r--r-- root/root 62 2019-10-25 12:23 tmp/conf-bk-20191025/uptime.out
-rw-r--r-- root/root 228 2019-10-25 12:23 tmp/conf-bk-20191025/hosts.out
-rw-r--r-- root/root 501 2019-10-25 12:23 tmp/conf-bk-20191025/df-h.out
-rw-r--r-- root/root 88 2019-10-25 12:23 tmp/conf-bk-20191025/pvs.out
-rw-r--r-- root/root 84 2019-10-25 12:23 tmp/conf-bk-20191025/vgs.out
-rw-r--r-- root/root 252 2019-10-25 12:23 tmp/conf-bk-20191025/lvs.out
-rw-r--r-- root/root 197 2019-10-25 12:23 tmp/conf-bk-20191025/mapper.out
-rw-r--r-- root/root 1088 2019-10-25 12:23 tmp/conf-bk-20191025/fdisk.out
-rw-r--r-- root/root 465 2019-10-25 12:23 tmp/conf-bk-20191025/fstab.out
-rw-r--r-- root/root 0 2019-10-25 12:23 tmp/conf-bk-20191025/exports.out
-rw-r--r-- root/root 451 2019-10-25 12:23 tmp/conf-bk-20191025/crontab.out
-rw-r--r-- root/root 2748 2019-10-25 12:23 tmp/conf-bk-20191025/passwd.out
-rw-r--r-- root/root 861 2019-10-25 12:23 tmp/conf-bk-20191025/ip.out
-rw-r--r-- root/root 455 2019-10-25 12:23 tmp/conf-bk-20191025/netstat-in.out
-rw-r--r-- root/root 505 2019-10-25 12:23 tmp/conf-bk-20191025/netstat-rn.out
-rw-r--r-- root/root 2072 2019-10-25 12:23 tmp/conf-bk-20191025/ifconfig-a.out
-rw-r--r-- root/root 449 2019-10-25 12:23 tmp/conf-bk-20191025/sysctl.out
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-bash-script-backup-configuration-files-remote-linux-system-server/

Author: [Magesh Maruthamuthu][a]
Selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/bash-script/
[2]: https://www.2daygeek.com/category/shell-script/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)

How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======
**Rsyslog** is a free and open-source logging program that is present by default on **CentOS** 8 and **RHEL** 8 systems. It provides a simple and effective way of centralizing logs from client nodes to a single central server. Centralizing logs has two benefits. First, it simplifies log viewing: instead of logging in to each client system to check its logs, a system administrator can view all remote servers' logs from one central node, which is very useful when many servers need monitoring. Second, if a remote client crashes, you need not worry about losing its logs, because all logs are saved on the **central rsyslog server**. Rsyslog replaces syslog, which only supported the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** when transporting logs, enhanced filtering capabilities, and flexible configuration options. Let's explore how to configure a Rsyslog server on CentOS 8 / RHEL 8 systems.

[![configure-rsyslog-centos8-rhel8][1]][2]

### Prerequisites

We will set up the following lab environment to test the centralized logging process:

  * **Rsyslog server**  CentOS 8 Minimal  IP address: 10.128.0.47
  * **Client system**  RHEL 8 Minimal  IP address: 10.128.0.48

With the setup above, we will demonstrate how to set up the Rsyslog server and then configure the client system to ship its logs to the Rsyslog server for monitoring.

Let's get started!
### Configure the Rsyslog Server on CentOS 8

Rsyslog comes installed by default on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and run the following command:

```
$ systemctl status rsyslog
```
Sample output

![rsyslog-service-status-centos8][1]

If rsyslog is not present for some reason, you can install it with the following command:

```
$ sudo yum install rsyslog
```
Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

Scroll down and uncomment the lines below to allow receiving logs over the UDP protocol:

```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```

![rsyslog-conf-centos8-rhel8][1]

Similarly, if you wish to enable TCP rsyslog reception, uncomment the lines below:

```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```

![rsyslog-conf-tcp-centos8-rhel8][1]
Save and exit the configuration file.

To receive logs from client systems, we need to open Rsyslog's default port 514 on the firewall. To do so, run:

```
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

Next, reload the firewall to save the changes:

```
# sudo firewall-cmd --reload
```

Sample output

![firewall-ports-rsyslog-centos8][1]
Next, restart the Rsyslog server:

```
$ sudo systemctl restart rsyslog
```

To have Rsyslog start at boot, run the following command:

```
$ sudo systemctl enable rsyslog
```

To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:

```
$ sudo netstat -pnltu
```

Sample output

![netstat-rsyslog-port-centos8][1]
Perfect! We have successfully configured the Rsyslog server to receive logs from client systems.

To view log messages in real time, run the following command:

```
$ tail -f /var/log/messages
```

Now let's configure the client system.
### Configure the Client System on RHEL 8

As with the Rsyslog server, log in and check whether the rsyslog daemon is running with the following command:

```
$ sudo systemctl status rsyslog
```

Sample output

![client-rsyslog-service-rhel8][1]
Next, open the rsyslog configuration file:

```
$ sudo vim /etc/rsyslog.conf
```

At the end of the file, add the following lines:

```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
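Each message matched by a `facility.severity` selector such as `*.*` carries a numeric PRI header, computed as `facility * 8 + severity` per the BSD syslog convention (RFC 3164). A small sketch with example values (facility `user` = 1, severity `notice` = 5 — assumed here purely for illustration):

```shell
facility=1   # user
severity=5   # notice
pri=$((facility * 8 + severity))
echo "<$pri>test message"   # prints: <13>test message

# With bash, a UDP rsyslog listener can be hand-tested via /dev/udp
# (commented out; 10.128.0.47 is the article's example server):
#   echo "<$pri>test message" > /dev/udp/10.128.0.47/514
```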
Save and exit the configuration file. Just as on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:

```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```

Next, reload the firewall to save the changes:

```
$ sudo firewall-cmd --reload
```
Next, restart the rsyslog service:

```
$ sudo systemctl restart rsyslog
```

To have Rsyslog start at boot, run the following command:

```
$ sudo systemctl enable rsyslog
```
### Test the Logging Operation

With the Rsyslog server and client successfully installed and configured, it is time to verify that your configuration works as expected.

On the client system, run the following command:

```
# logger "Hello guys! This is our first log"
```

Now go to the Rsyslog server and run the following command to view log messages in real time:

```
# tail -f /var/log/messages
```
The output of the command run on the client system appears in the Rsyslog server's log, which means the Rsyslog server is receiving logs from the client system:

![centralize-logs-rsyslogs-centos8][1]

That's it! We have successfully set up a Rsyslog server to receive log messages from client systems.
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/

Author: [James Kiarie][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg