Merge pull request #3 from LCTT/master

update from LCTT
This commit is contained in:
jdh8383 2019-11-07 22:02:11 +08:00 committed by GitHub
commit 58815e27d9
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
40 changed files with 4348 additions and 1590 deletions

View File

@ -0,0 +1,289 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11538-1.html)
[#]: subject: (How RPM packages are made: the spec file)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)
如何编写 RPM 的 spec 文件
======
![][1]
在[关于 RPM 软件包构建的上一篇文章][2]中,你了解到了源 RPM 包括软件的源代码以及 spec 文件。这篇文章深入研究了 spec 文件,该文件中包含了有关如何构建 RPM 的指令。同样,本文以 `fpaste` 为例。
### 了解源代码
在开始编写 spec 文件之前,你需要对要打包的软件有所了解。在这里,你正在研究 `fpaste`,这是一个非常简单的软件。它是用 Python 编写的,并且是一个单文件脚本。当它发布新版本时,可在 Pagure 上找到:<https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz>
如该档案文件所示,当前版本为 0.3.9.2。下载它,以便你查看该档案文件中的内容:
```
$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste
```
你要安装的文件是:
* `fpaste.py`:应该安装到 `/usr/bin/`
* `docs/man/en/fpaste.1`:手册,应放到 `/usr/share/man/man1/`
* `COPYING`:许可证文本,应放到 `/usr/share/license/fpaste/`
* `README.rst`、`TODO`:放到 `/usr/share/doc/fpaste/` 下的其它文档。
这些文件的安装位置取决于文件系统层次结构标准FHS。要了解更多信息可以在这里阅读<http://www.pathname.com/fhs/> 或查看 Fedora 系统的手册页:
```
$ man hier
```
#### 第一部分:要构建什么?
现在我们知道了源文件中有哪些文件,以及它们要存放的位置,让我们看一下 spec 文件。你可以在此处查看这个完整的文件:<https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec>
这是 spec 文件的第一部分:
```
Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
Requires: python3
%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin
```
`Name`、`Version` 等称为*标签*,它们定义在 RPM 中。这意味着你不能只是随意写点标签RPM 无法理解它们!需要注意的标签是:
* `Source0`:告诉 RPM 该软件的源代码档案文件所在的位置。
* `Requires`列出软件的运行时依赖项。RPM 可以自动检测很多依赖项,但是在某些情况下,必须手动指明它们。运行时依赖项是系统上必须具有的功能(通常是软件包),才能使该软件包起作用。这是 [dnf][3] 在安装此软件包时检测是否需要拉取其他软件包的方式。
* `BuildRequires`:列出了此软件的构建时依赖项。这些通常必须手动确定并添加到 spec 文件中。
* `BuildArch`:此软件为该计算机体系结构所构建。如果省略此标签,则将为所有受支持的体系结构构建该软件。值 `noarch` 表示该软件与体系结构无关(例如 `fpaste`,它完全是用 Python 编写的)。
本节提供有关 `fpaste` 的常规信息:它是什么、正在将什么版本制作为 RPM、其许可证等等。如果你已安装 `fpaste`,查看其元数据时就可以看到该 RPM 中包含的这些信息:
```
$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...
```
RPM 会自动添加一些其他标签,以代表它所知道的内容。
至此,我们掌握了要为其构建 RPM 的软件的一般信息。接下来,我们开始告诉 RPM 做什么。
#### 第二部分:准备构建
spec 文件的下一部分是准备部分,用 `%prep` 表示:
```
%prep
%autosetup
```
对于 `fpaste`,这里唯一的命令是 `%autosetup`。它会将 tar 档案文件提取到一个新文件夹中,并为下一部分的构建阶段做好准备。你可以在此处执行更多操作,例如应用补丁、出于不同目的修改文件等等。如果你查看过 Python 的源 RPM 的内容,那么你会在那里看到许多补丁,这些补丁都是在本节中应用的。
通常spec 文件中带有 `%` 前缀的所有内容都是 RPM 以特殊方式解释的宏或标签,它们通常会带有大括号,例如 `%{example}`。
#### 第三部分:构建软件
下一部分是构建软件的地方,用 `%build` 表示。由于 `fpaste` 是一个简单的纯 Python 脚本,无需构建,因此这一部分是空的:
```
%build
#nothing required
```
不过,通常来说,你会在此处使用构建命令,例如:
```
configure; make
```
构建部分通常是 spec 文件中最难的部分因为这是从源代码构建软件的地方。这要求你知道该工具使用的是哪个构建系统该系统可能是许多构建系统之一Autotools、CMake、Meson、Setuptools用于 Python等等。每个都有自己的命令和语法样式。你需要充分了解这些才能正确构建软件。
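例如,对于一个使用 Autotools 的软件包,`%build` 部分往往类似于下面这样(这只是一个通用示意,并不是 `fpaste` 所需要的;`%configure` 和 `%make_build` 是 Fedora 提供的标准宏):
```
%build
# 以发行版推荐的标准参数运行 ./configure
%configure
# 并行调用 make 进行编译
%make_build
```
而纯 Python 的项目则通常会改用 `%py3_build` 之类的宏。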
#### 第四部分:安装文件
软件构建后,需要在 `%install` 部分中安装它:
```
%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}
```
在构建 RPM 时RPM 不会修改你的系统文件。在一个正常运行的系统上添加、删除或修改文件的风险太大如果发生故障怎么办因此RPM 会创建一个专门打造的文件系统并在其中工作,这称为 `buildroot`。因此,在 `buildroot` 中,我们创建了由宏 `%{_bindir}` 代表的 `/usr/bin` 目录,然后使用提供的 `Makefile` 将文件安装到其中。
至此,我们已经在专门打造的 `buildroot` 中安装了 `fpaste` 的构建版本。
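顺带一提,如果上游没有提供 `Makefile`,也可以在 `%install` 部分直接用 `install` 命令把文件复制进 `buildroot`(下面只是基于前文文件列表的一种示意写法,并非该 spec 文件的实际内容):
```
%install
mkdir -p %{buildroot}%{_bindir} %{buildroot}%{_mandir}/man1
install -p -m 755 fpaste %{buildroot}%{_bindir}/fpaste
install -p -m 644 docs/man/en/fpaste.1 %{buildroot}%{_mandir}/man1/fpaste.1
```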
#### 第五部分:列出所有要包括在 RPM 中的文件
spec 文件接下来的一部分是文件部分:`%files`。在这里,我们告诉 RPM 用该 spec 文件构建出的 RPM 中要包含哪些文件。`fpaste` 的文件部分非常简单:
```
%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING
```
请注意,在这里我们没有指定 `buildroot`,因为这些路径都是相对于 `buildroot` 而言的。`%doc` 和 `%license` 命令做的稍微多一点,它们会创建所需的文件夹,并记住这些文件必须放在那里。
RPM 很聪明。例如,如果你在 `install` 部分中安装了文件,但未列出它们,它会提醒你。
#### 第六部分:在变更日志中记录所有变更
Fedora 是一个基于社区的项目,许多贡献者会维护或共同维护软件包。因此,清楚地知道软件包发生过哪些更改就显得至关重要。为了确保这一点spec 文件的最后一部分是变更日志 `%changelog`
```
%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild
* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
* Tue Jul 24 2018 Ankur Sinha - 0.3.9.2-1
- Update to 0.3.9.2
* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild
* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec
* Fri Sep 08 2017 Ankur Sinha - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....
```
spec 文件的*每项*变更都必须有一个变更日志条目。如你在此处看到的,虽然我以维护者身份更新了该 spec 文件,但其他人也做过更改。清楚地记录变更内容有助于所有人知道该 spec 文件的当前状态。对于系统上安装的所有软件包,都可以使用 `rpm` 来查看其更改日志:
```
$ rpm -q --changelog fpaste
```
### 构建 RPM
现在我们准备构建 RPM 包。如果要继续执行以下命令,请确保遵循[上一篇文章][2]中的步骤设置系统以构建 RPM。
我们将 `fpaste` 的 spec 文件放置在 `~/rpmbuild/SPECS` 中,将源代码档案文件存储在 `~/rpmbuild/SOURCES/` 中,现在可以创建源 RPM 了:
```
$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec
$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
```
让我们看一下结果:
```
$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```
我们看到源 RPM 已构建。让我们同时构建源 RPM 和二进制 RPM
```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..
```
RPM 将向你显示完整的构建输出,并在我们之前看到的每个部分中详细说明它的工作。此“构建日志”非常重要。当构建未按预期进行时,我们的打包人员将花费大量时间来遍历它们,以跟踪完整的构建路径来查看出了什么问题。
就是这样!准备安装的 RPM 应该位于以下位置:
```
$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
```
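你可以直接用 `dnf` 安装这个本地构建出来的 RPM 来验证它(路径沿用上面的输出):
```
$ sudo dnf install ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm
```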
### 概括
我们已经介绍了如何从 spec 文件构建 RPM 的基础知识。这绝不是一份详尽的文档。实际上,它根本不是文档。它只是试图解释幕后的运作方式。简短回顾一下:
* RPM 有两种类型:源 RPM 和 二进制 RPM。
* 二进制 RPM 包含要安装以使用该软件的文件。
* 源 RPM 包含构建二进制 RPM 所需的信息:完整的源代码,以及 spec 文件中的有关如何构建 RPM 的说明。
* spec 文件包含多个部分,每个部分都有其自己的用途。
  
在这里,我们已经在安装好的 Fedora 系统中本地构建了 RPM。虽然这是个基本的过程但我们从存储库中获得的 RPM 是建立在具有严格配置和方法的专用服务器上的,以确保正确性和安全性。这个 Fedora 打包流程将在以后的文章中讨论。
你想开始构建软件包,并帮助 Fedora 社区维护我们提供的大量软件吗?你可以[从这里开始加入软件包集合维护者][4]。
如有任何疑问,请发布到 [Fedora 开发人员邮件列表][5],我们随时乐意为你提供帮助!
### 参考
这里有一些构建 RPM 的有用参考:
* <https://fedoraproject.org/wiki/How_to_create_an_RPM_package>
* <https://docs.fedoraproject.org/en-US/quick-docs/create-hello-world-rpm/>
* <https://docs.fedoraproject.org/en-US/packaging-guidelines/>
* <https://rpm.org/documentation.html>
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/
作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://linux.cn/article-11527-1.html
[3]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[4]: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
[5]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/

View File

@ -0,0 +1,246 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11546-1.html)
[#]: subject: (Building CI/CD pipelines with Jenkins)
[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins)
[#]: author: (Bryant Son https://opensource.com/users/brson)
用 Jenkins 构建 CI/CD 流水线
======
> 通过这份 Jenkins 分步教程构建持续集成和持续交付CI/CD流水线。
![](https://img.linux.net.cn/data/attachment/album/201911/07/001349rbbbswpeqnnteeee.jpg)
在我的文章《[使用开源工具构建 DevOps 流水线的初学者指南][2]》中,我分享了一个从头开始构建 DevOps 流水线的故事。推动该计划的核心技术是 [Jenkins][3]这是一个用于建立持续集成和持续交付CI/CD流水线的开源工具。
在花旗,有一个单独的团队为专用的 Jenkins 流水线提供稳定的主从节点环境但是该环境仅用于质量保证QA、构建阶段和生产环境。开发环境仍然是非常手动的我们的团队需要对其进行自动化以在加快开发工作的同时获得尽可能多的灵活性。这就是我们决定为 DevOps 建立 CI/CD 流水线的原因。Jenkins 的开源版本由于其灵活性、开放性、强大的插件功能和易用性而成为显而易见的选择。
在本文中,我将分步演示如何使用 Jenkins 构建 CI/CD 流水线。
### 什么是流水线?
在进入本教程之前,了解有关 CI/CD <ruby>流水线<rt>pipeline</rt></ruby>的知识会很有帮助。
首先,了解 Jenkins 本身并不是流水线这一点很有帮助。只是创建一个新的 Jenkins 作业并不能构建一条流水线。可以把 Jenkins 看做一个遥控器在这里点击按钮即可。当你点击按钮时会发生什么取决于遥控器要控制的内容。Jenkins 为其他应用程序 API、软件库、构建工具等提供了一种插入 Jenkins 的方法它可以执行并自动化任务。Jenkins 本身不执行任何功能,但是随着其它工具的插入而变得越来越强大。
流水线是一个单独的概念,指的是按顺序连接在一起的事件或作业组:
> “<ruby>流水线<rt>pipeline</rt></ruby>”是可以执行的一系列事件或作业。
理解流水线的最简单方法是可视化一系列阶段,如下所示:
![Pipeline example][4]
在这里,你应该看到两个熟悉的概念:<ruby>阶段<rt>Stage</rt></ruby><ruby>步骤<rt>Step</rt></ruby>
* 阶段:一个包含一系列步骤的块。阶段块可以命名为任何名称;它用于可视化流水线过程。
* 步骤:表明要做什么的任务。步骤定义在阶段块内。
在上面的示例图中,阶段 1 可以命名为“构建”、“收集信息”或其它名称,其它阶段块也可以采用类似的思路。“步骤”则说明要执行的内容,它可以是简单的打印命令(例如 `echo "Hello, World"`)、程序执行命令(例如 `java HelloWorld`、shell 执行命令(例如 `chmod 755 Hello`)或任何其他命令,只要它能被 Jenkins 环境识别为可执行命令即可。
Jenkins 流水线以**编码脚本**的形式提供,通常称为 “Jenkinsfile”尽管可以用不同的文件名。下面这是一个简单的 Jenkins 流水线文件的示例:
```
// Example of Jenkins pipeline script
pipeline {
  // Run on any available agent
  agent any
  stages {
    stage("Build") {
      steps {
          // Just print a Hello, Pipeline to the console
          echo "Hello, Pipeline!"
          // Compile a Java file. This requires JDK configuration from Jenkins
          sh "javac HelloWorld.java"
          // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
          sh "java HelloWorld"
          // Executes the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
          sh "mvn clean package ./HelloPackage"
          // List the files in current directory path by executing a default shell command
          sh "ls -ltr"
      }
   }
   // And next stages if you want to define further...
  } // End of stages
} // End of pipeline
```
从此示例脚本很容易看到 Jenkins 流水线的结构。请注意,默认情况下某些命令(如 `java`、`javac` 和 `mvn`)不可用,需要通过 Jenkins 进行安装和配置。因此:
> Jenkins 流水线是一种以定义的方式依次执行 Jenkins 作业的方法,方法是将其编码并在多个块中进行结构化,这些块可以包含多个任务的步骤。
好。既然你已经了解了 Jenkins 流水线是什么,我将向你展示如何创建和执行 Jenkins 流水线。在本教程的最后,你将建立一个 Jenkins 流水线,如下所示:
![Final Result][5]
### 如何构建 Jenkins 流水线
为了便于遵循本教程的步骤,我创建了一个示例 [GitHub 存储库][6]和一个视频教程。
- [视频](https://img.linux.net.cn/static/video/_-jDPwYgDVKlg.mp4)
开始本教程之前,你需要:
* Java 开发工具包JDK如果尚未安装请安装 JDK 并将其添加到环境路径中,以便可以通过终端执行 Java 命令(如 `java -jar`)。这是运行本教程中使用的 Java Web ArchiveWAR版本的 Jenkins 所必需的(尽管你可以使用任何其他发行版)。
* 基本计算机操作能力:你应该知道如何键入一些代码、通过 shell 执行基本的 Linux 命令以及打开浏览器。
让我们开始吧。
#### 步骤一:下载 Jenkins
导航到 [Jenkins 下载页面][7]。向下滚动到 “Generic Java package (.war)”,然后单击下载文件;将其保存在易于找到的位置。(如果你选择其他 Jenkins 发行版,除了步骤二之外,本教程的其余步骤应该几乎相同。)使用 WAR 文件的原因是它是个一次性可执行文件,可以轻松地执行和删除。
![Download Jenkins as Java WAR file][8]
#### 步骤二:以 Java 二进制方式执行 Jenkins
打开一个终端窗口,并使用 `cd <your path>` 进入下载 Jenkins 的目录。(在继续之前,请确保已安装 JDK 并将其添加到环境路径。)执行以下命令,该命令将 WAR 文件作为可执行二进制文件运行:
```
java -jar ./jenkins.war
```
如果一切顺利Jenkins 应该在默认端口 8080 上启动并运行。
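顺带一提,如果 8080 端口已被其它程序占用,可以在启动时通过 `--httpPort` 参数改用其他端口(下面的 9090 只是一个示例):
```
$ java -jar ./jenkins.war --httpPort=9090
```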
![Execute as an executable JAR binary][9]
#### 步骤三:创建一个新的 Jenkins 作业
打开一个 Web 浏览器并导航到 `localhost:8080`。除非你有以前安装的 Jenkins否则应直接转到 Jenkins 仪表板。点击 “Create New Jobs”。你也可以点击左侧的 “New Item”。
![Create New Job][10]
#### 步骤四:创建一个流水线作业
在此步骤中,你可以选择并定义要创建的 Jenkins 作业类型。选择 “Pipeline” 并为其命名例如“TestPipeline”。单击 “OK” 创建流水线作业。
![Create New Pipeline Job][11]
你将看到一个 Jenkins 作业配置页面。向下滚动以找到 “Pipeline” 部分。有两种执行 Jenkins 流水线的方法。一种方法是在 Jenkins 上直接编写流水线脚本,另一种方法是从 SCM源代码管理中检索 Jenkins 文件。在接下来的两个步骤中,我们将体验这两种方式。
#### 步骤五:通过直接脚本配置并执行流水线作业
要使用直接脚本执行流水线,请首先从 GitHub 复制该 [Jenkinsfile 示例][6]的内容。选择 “Pipeline script” 作为 “Destination”然后将该 Jenkinsfile 的内容粘贴到 “Script” 中。花一些时间研究一下这个 Jenkinsfile 的结构。注意共有三个阶段Build、Test 和 Deploy这些名称是任意的可以随意命名。每个阶段中都有一些步骤在此示例中它们只是打印一些随机消息。
单击 “Save” 以保留更改,这将自动将你带回到 “Job Overview” 页面。
![Configure to Run as Jenkins Script][12]
要开始构建流水线的过程,请单击 “Build Now”。如果一切正常你将看到第一个流水线如下面的这个
![Click Build Now and See Result][13]
要查看流水线脚本构建的输出,请单击任何阶段,然后单击 “Log”。你会看到这样的消息。
![Visit sample GitHub with Jenkins get clone link][14]
#### 步骤六:通过 SCM 配置并执行流水线作业
现在,换个方式:在此步骤中,你将通过从源代码控制的 GitHub 中复制 Jenkinsfile 来部署相同的 Jenkins 作业。在同一个 [GitHub 存储库][6]中,通过单击 “Clone or download” 并复制其 URL 来找到其存储库 URL。
![Checkout from GitHub][15]
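如果你想先在本地看看这个仓库里的 Jenkinsfile也可以直接克隆示例仓库这只是一个可选步骤这里假设 Jenkinsfile 位于仓库根目录,与下文 “Script Path” 的设置一致):
```
$ git clone https://github.com/bryantson/CICDPractice.git
$ cat CICDPractice/Jenkinsfile
```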
单击 “Configure” 以修改现有作业。滚动到 “Advanced Project Options” 设置,但这一次,从 “Destination” 下拉列表中选择 “Pipeline script from SCM” 选项。将 GitHub 存储库的 URL 粘贴到 “Repository URL” 中,然后在 “Script Path” 中键入 “Jenkinsfile”。 单击 “Save” 按钮保存。
![Change to Pipeline script from SCM][16]
要构建流水线,回到 “Job Overview” 页面后,单击 “Build Now” 以再次执行作业。结果与之前相同,只是多了一个名为 “Declarative: Checkout SCM” 的阶段。
![Build again and verify][17]
要查看来自 SCM 构建的流水线的输出,请单击该阶段并查看 “Log” 以检查源代码控制克隆过程的进行情况。
![Verify Checkout Procedure][18]
### 除了打印消息,还能做更多
恭喜你!你已经建立了第一个 Jenkins 流水线!
“但是等等”,你说,“这太有限了。除了打印无用的消息外,我什么都做不了。”那没问题。到目前为止,本教程仅简要介绍了 Jenkins 流水线可以做什么,但是你可以通过将其与其他工具集成来扩展其功能。以下是给你的下一个项目的一些思路:
* 建立一个多阶段的 Java 构建流水线,从以下阶段开始:从 Nexus 或 Artifactory 之类的 JAR 存储库中拉取依赖项、编译 Java 代码、运行单元测试、打包为 JAR/WAR 文件,然后部署到云服务器。
* 实现一个高级代码测试仪表板,该仪表板将基于 Selenium 的单元测试、负载测试和自动用户界面测试,报告项目的运行状况。
* 构建多流水线或多用户流水线,以自动化执行 Ansible 剧本的任务,同时允许授权用户响应正在进行的任务。
* 设计完整的端到端 DevOps 流水线,该流水线可提取存储在 SCM 中的基础设施资源文件和配置文件(例如 GitHub并通过各种运行时程序执行该脚本。
学习本文结尾处的任何教程,以了解这些更高级的案例。
#### 管理 Jenkins
在 Jenkins 主面板,点击 “Manage Jenkins”。
![Manage Jenkins][19]
#### 全局工具配置
有许多可用工具,包括管理插件、查看系统日志等。单击 “Global Tool Configuration”。
![Global Tools Configuration][20]
#### 增加附加能力
在这里,你可以添加 JDK 路径、Git、Gradle 等。配置工具后,只需将该命令添加到 Jenkinsfile 中或通过 Jenkins 脚本执行即可。
![See Various Options for Plugin][21]
### 接下来
本文为你介绍了使用酷炫的开源工具 Jenkins 创建 CI/CD 流水线的方法。要了解你可以使用 Jenkins 完成的许多其他操作,请在 Opensource.com 上查看以下其他文章:
* [Jenkins X 入门][22]
* [使用 Jenkins 安装 OpenStack 云][23]
* [在容器中运行 Jenkins][24]
* [Jenkins 流水线入门][25]
* [如何与 Jenkins 一起运行 JMeter][26]
* [将 OpenStack 集成到你的 Jenkins 工作流中][27]
你可能对我为你的开源之旅而写的其他一些文章感兴趣:
* [9 个用于构建容错系统的开源工具][28]
* [了解软件设计模式][29]
* [使用开源工具构建 DevOps 流水线的初学者指南][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines)
[2]: https://linux.cn/article-11307-1.html
[3]: https://jenkins.io/
[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example)
[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result)
[6]: https://github.com/bryantson/CICDPractice
[7]: https://jenkins.io/download/
[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file)
[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary)
[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job)
[11]: https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job)
[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script)
[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result)
[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link)
[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub)
[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM)
[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify)
[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure)
[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins)
[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration)
[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin)
[22]: https://opensource.com/article/18/11/getting-started-jenkins-x
[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins
[24]: https://linux.cn/article-9741-1.html
[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101
[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco
[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system
[29]: https://opensource.com/article/19/7/understanding-software-design-patterns

View File

@ -0,0 +1,406 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11545-1.html)
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
在 Linux 上用 strace 来理解系统调用
======
> 使用 strace 跟踪用户进程和 Linux 内核之间的交互。
![Hand putting a Linux file folder into a drawer][1]
<ruby>系统调用<rt>system call</rt></ruby>是程序从内核请求服务的一种编程方式,而 `strace` 是一个功能强大的工具,可让你跟踪用户进程与 Linux 内核之间的交互。
要了解操作系统的工作原理,首先需要了解系统调用的工作原理。操作系统的主要功能之一是为用户程序提供抽象机制。
操作系统可以大致分为两种模式:
* 内核模式:操作系统内核使用的一种强大的特权模式
* 用户模式:大多数用户应用程序运行的地方
  
用户大多使用命令行实用程序和图形用户界面GUI来执行日常任务。系统调用在后台静默运行与内核交互以完成工作。
系统调用与函数调用非常相似,这意味着它们都接受并处理参数然后返回值。唯一的区别是系统调用进入内核,而函数调用不进入。从用户空间切换到内核空间是使用特殊的 [trap][2] 机制完成的。
通过使用系统库(在 Linux 系统上又称为 glibc大部分系统调用对用户隐藏了。尽管系统调用本质上是通用的但是发出系统调用的机制在很大程度上取决于机器的体系结构。
本文通过使用一些常规命令并使用 `strace` 分析每个命令进行的系统调用来探索一些实际示例。这些示例使用 Red Hat Enterprise Linux但是这些命令运行在其他 Linux 发行版上应该也是相同的:
```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```
首先,确保在系统上安装了必需的工具。你可以使用下面的 `rpm` 命令来验证是否安装了 `strace`。如果安装了,则可以使用 `-V` 选项检查 `strace` 实用程序的版本号:
```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```
如果没有安装,运行命令安装:
```
yum install strace
```
出于本示例的目的,在 `/tmp` 中创建一个测试目录,并使用 `touch` 命令创建两个文件:
```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```
(我使用 `/tmp` 目录是因为每个人都可以访问它,但是你可以根据需要选择另一个目录。)
`testdir` 目录下使用 `ls` 命令验证该文件已经创建:
```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```
你可能每天都在使用 `ls` 命令,而没有意识到系统调用在其下面发挥的作用。抽象地来说,该命令的工作方式如下:
> 命令行工具 -> 从系统库glibc调用函数 -> 调用系统调用
`ls` 命令内部从 Linux 上的系统库(即 glibc调用函数。这些库去调用完成大部分工作的系统调用。
如果你想知道从 glibc 库中调用了哪些函数,请使用 `ltrace` 命令,然后跟上常规的 `ls testdir/` 命令:
```
ltrace ls testdir/
```
如果没有安装 `ltrace`,键入如下命令安装:
```
yum install ltrace
```
大量的输出会被堆到屏幕上;不必担心,只需继续就行。`ltrace` 命令输出中与该示例有关的一些重要库函数包括:
```
opendir("testdir/") = { 3 }
readdir({ 3 }) = { 101879119, "." }
readdir({ 3 }) = { 134, ".." }
readdir({ 3 }) = { 101879120, "file1" }
strlen("file1") = 5
memcpy(0x1665be0, "file1\0", 6) = 0x1665be0
readdir({ 3 }) = { 101879122, "file2" }
strlen("file2") = 5
memcpy(0x166dcb0, "file2\0", 6) = 0x166dcb0
readdir({ 3 }) = nil
closedir({ 3 })                    
```
通过查看上面的输出,你或许可以了解正在发生的事情。`opendir` 库函数打开一个名为 `testdir` 的目录,然后调用 `readdir` 函数,该函数读取目录的内容。最后,有一个对 `closedir` 函数的调用,该函数将关闭先前打开的目录。现在请先忽略其他 `strlen``memcpy` 功能。
你可以看到正在调用哪些库函数,但是本文将重点介绍由系统库函数调用的系统调用。
与上述类似,要了解调用了哪些系统调用,只需将 `strace` 放在 `ls testdir` 命令之前,如下所示。同样,屏幕上会丢出一大堆输出,你可以按照以下方式继续:
```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL) = 0x1f12000
<<< truncated strace output >>>
write(1, "file1 file2\n", 13file1 file2
) = 13
close(1) = 0
munmap(0x7fd002c8d000, 4096) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```
运行 `strace` 命令后屏幕上的输出就是运行 `ls` 命令的系统调用。每个系统调用都为操作系统提供了特定的用途,可以将它们大致分为以下几个部分:
* 进程管理系统调用
* 文件管理系统调用
* 目录和文件系统管理系统调用
* 其他系统调用
分析显示到屏幕上的信息的一种更简单的方法是使用 `strace` 方便的 `-o` 标志将输出记录到文件中。在 `-o` 标志后添加一个合适的文件名,然后再次运行命令:
```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```
这次,没有任何输出干扰屏幕显示,`ls` 命令如预期般工作,显示了文件名并将所有输出记录到文件 `trace.log` 中。仅仅是一个简单的 `ls` 命令,该文件就有近 100 行内容:
```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```
让我们看一下这个示例的 `trace.log` 文件的第一行:
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
* 该行的第一个单词 `execve` 是正在执行的系统调用的名称。
* 括号内的文本是提供给该系统调用的参数。
* 符号 `=` 后的数字(在这种情况下为 `0`)是 `execve` 系统调用的返回值。
现在的输出似乎还不太吓人,对吧。你可以应用相同的逻辑来理解其他行。
现在,将关注点集中在你调用的单个命令上,即 `ls testdir`。你知道命令 `ls` 使用的目录名称,那么为什么不在 `trace.log` 文件中使用 `grep` 查找 `testdir` 并查看得到的结果呢?让我们详细查看一下结果的每一行:
```
[root@sandbox tmp]# grep testdir trace.log
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[root@sandbox tmp]#
```
回顾一下上面对 `execve` 的分析,你能说一下这个系统调用的作用吗?
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
你无需记住所有系统调用或它们所做的事情,因为你可以在需要时参考文档。手册页可以解救你!在运行 `man` 命令之前,请确保已安装以下软件包:
```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```
请记住,你需要在 `man` 命令和系统调用名称之间添加 `2`。如果使用 `man man` 阅读 `man` 命令的手册页,你会看到第 2 节是为系统调用保留的。同样,如果你需要有关库函数的信息,则需要在 `man` 和库函数名称之间添加一个 `3`
以下是手册的章节编号及其包含的页面类型:
* `1`:可执行的程序或 shell 命令
* `2`:系统调用(由内核提供的函数)
* `3`:库调用(在程序的库内的函数)
* `4`:特殊文件(通常出现在 `/dev`
使用系统调用名称运行以下 `man` 命令以查看该系统调用的文档:
```
man 2 execve
```
按照 `execve` 手册页,这将执行在参数中传递的程序(在本例中为 `ls`)。可以为 `ls` 提供其他参数,例如本例中的 `testdir`。因此,此系统调用仅以 `testdir` 作为参数运行 `ls`
```
execve - execute program
DESCRIPTION
execve() executes the program pointed to by filename
```
下一个系统调用,名为 `stat`,它使用 `testdir` 参数:
```
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
```
使用 `man 2 stat` 访问该文档。`stat` 是获取文件状态的系统调用请记住Linux 中的一切都是文件,包括目录。
接下来,`openat` 系统调用将打开 `testdir`。密切注意返回的 `3`。这是一个文件描述符,将在以后的系统调用中使用:
```
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
```
到现在为止一切都挺好。现在,打开 `trace.log` 文件,并转到 `openat` 系统调用之后的行。你会看到 `getdents` 系统调用被调用,该调用完成了执行 `ls testdir` 命令所需的大部分操作。现在,从 `trace.log` 文件中用 `grep` 获取 `getdents`
```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
[root@sandbox tmp]#
```
`getdents` 的手册页将其描述为 “获取目录项”,这就是你要执行的操作。注意,`getdents` 的参数是 `3`,这是来自上面 `openat` 系统调用的文件描述符。
现在有了目录列表,你需要一种在终端中显示它的方法。因此,在日志中用 `grep` 搜索另一个用于写入终端的系统调用 `write`
```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13)          = 13
[root@sandbox tmp]#
```
在这些参数中,你可以看到将要显示的文件名:`file1` 和 `file2`。关于第一个参数(`1`),请记住在 Linux 中,当运行任何进程时,默认情况下会为其打开三个文件描述符。以下是默认的文件描述符:
* `0`:标准输入
* `1`:标准输出
* `2`:标准错误
因此,`write` 系统调用将在标准显示(就是这个终端,由 `1` 所标识的)上显示 `file1``file2`
现在你知道哪个系统调用完成了 `ls testdir/` 命令的大部分工作。但是在 `trace.log` 文件中其它的 100 多个系统调用呢?操作系统必须做很多内务处理才能运行一个进程,因此,你在该日志文件中看到的很多内容都是进程初始化和清理。阅读整个 `trace.log` 文件,并尝试了解 `ls` 命令是怎么工作起来的。
既然你知道了如何分析给定命令的系统调用,那么就可以将该知识用于其他命令来了解正在执行哪些系统调用。`strace` 提供了许多有用的命令行标志,使你更容易使用,下面将对其中一些进行描述。
默认情况下,`strace` 的输出并不包含每个系统调用的全部信息。不过,它有一个方便的 `-v` 选项verbose详细模式可以为每个系统调用提供附加信息
```
strace -v ls testdir
```
在运行 `strace` 命令时始终使用 `-f` 选项是一种好的作法。它允许 `strace` 对当前正在跟踪的进程创建的任何子进程进行跟踪:
```
strace -f ls testdir
```
假设你只需要系统调用的名称、运行的次数以及每个系统调用花费的时间百分比。你可以使用 `-c` 标志来获取这些统计信息:
```
strace -c ls testdir/
```
假设你想专注于特定的系统调用,例如专注于 `open` 系统调用,而忽略其余部分。你可以使用 `-e` 标志跟上系统调用的名称:
```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```
如果你想关注多个系统调用怎么办?不用担心,你同样可以使用 `-e` 命令行标志,并用逗号分隔开两个系统调用的名称。例如,要查看 `write``getdents` 系统调用:
```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
write(1, "file1  file2\n", 13file1  file2
)          = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```
到目前为止,这些示例跟踪的都是明确运行起来的命令。但是,要跟踪已经在运行中的进程又该怎么办呢?例如,如果要跟踪一个长时间运行的守护进程,该怎么办?为此,`strace` 提供了一个特殊的 `-p` 标志,你可以向其提供进程 ID。
我们的示例不在守护程序上运行 `strace`,而是以 `cat` 命令为例,如果你将文件名作为参数,通常 `cat` 会显示文件的内容。如果没有给出参数,`cat` 命令会在终端上等待用户输入文本。输入文本后,它将重复给定的文本,直到用户按下 `Ctrl + C` 退出为止。
从一个终端运行 `cat` 命令;它会向你显示一个提示,并等待在那里(记住 `cat` 仍在运行且尚未退出):
```
[root@sandbox tmp]# cat
```
在另一个终端上,使用 `ps` 命令找到进程标识符PID
```
[root@sandbox ~]# ps -ef | grep cat
root      22443  20164  0 14:19 pts/0    00:00:00 cat
root      22482  20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```
现在,使用 `-p` 标志和 PID在上面使用 `ps` 找到)对运行中的进程运行 `strace`。运行 `strace` 之后,其输出说明了所接驳的进程的内容及其 PID。现在`strace` 正在跟踪 `cat` 命令进行的系统调用。看到的第一个系统调用是 `read`,它正在等待文件描述符 `0`(标准输入,这是运行 `cat` 命令的终端)的输入:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```
现在,返回到你运行 `cat` 命令的终端,并输入一些文本。我出于演示目的输入了 `x0x0`。注意 `cat` 是如何简单地重复我输入的内容的。因此,`x0x0` 出现了两次。我输入了第一个,第二个是 `cat` 命令重复的输出:
```
[root@sandbox tmp]# cat
x0x0
x0x0
```
返回到将 `strace` 接驳到 `cat` 进程的终端。现在你会看到两个额外的系统调用:较早的 `read` 系统调用,现在在终端中读取 `x0x0`,另一个为 `write`,它将 `x0x0` 写回到终端,然后是再一个新的 `read`,正在等待从终端读取。请注意,标准输入(`0`)和标准输出(`1`)都在同一终端中:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536)                = 5
write(1, "x0x0\n", 5)                   = 5
read(0,
```
想象一下,对守护进程运行 `strace` 以查看其在后台执行的所有操作时这有多大帮助。按下 `Ctrl + C` 杀死 `cat` 命令;由于该进程不再运行,因此这也会终止你的 `strace` 会话。
如果要查看所有的系统调用的时间戳,只需将 `-t` 选项与 `strace` 一起使用:
```
[root@sandbox ~]# strace -t ls testdir/
14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL)                      = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
如果你想知道两次系统调用之间所花费的时间怎么办?`strace` 有一个方便的 `-r` 选项,它会在每个系统调用前显示一个相对时间戳,即与上一个系统调用开始时刻之间的时间差。非常有用,不是吗?
```
[root@sandbox ~]# strace -r ls testdir/
0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL)                 = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
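作为一个综合性的示意,下面的命令把本文提到的几个选项组合在一起:跟踪子进程(`-f`)、把输出写入文件(`-o`),并且只关注少数几个系统调用(`-e`);其中 `-tt` 会打印精确到微秒的时间戳,这个选项未在上文出现,属于补充说明:
```
$ strace -f -tt -o trace.log -e openat,read,write ls testdir/
```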
### 总结
`strace` 实用程序非常有助于理解 Linux 上的系统调用。要了解它的其它命令行标志,请参考手册页和在线文档。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/strace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11541-1.html)
[#]: subject: (Upgrading Fedora 30 to Fedora 31)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
@ -12,25 +12,25 @@
![][1]
Fedora 31 [前发布了][2]。你也许想要升级系统来获得 Fedora 中的最新功能。Fedora 工作站有图形化的升级方式。另外Fedora 提供了一种命令行方式来将 Fedora 30 升级到 Fedora 31。
### 将 Fedora 30 工作站升级到 Fedora 31
发布不久之后,就会有通知告诉你有可用升级。你可以点击通知打开 GNOME “软件”。或者在 GNOME Shell 选择“软件”
在 GNOME 软件中选择*更新*,你应该会看到告诉你有 Fedora 31 更新的提示。
如果你在屏幕上看不到任何内容,请尝试使用左上方的重新加载按钮。在发布后,所有系统可能需要一段时间才能看到可用的升级。
选择*下载*以获取升级包。你可以继续工作,直到下载完成。然后使用 GNOME “软件”重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再返回系统。
### 使用命令行
如果你是从 Fedora 以前的版本升级的,那么你可能对 `dnf upgrade` 插件很熟悉。这是推荐且支持的从 Fedora 30 升级到 Fedora 31 的方法。使用此插件能让你轻松地升级到 Fedora 31。
#### 1更新软件并备份系统
在开始升级之前,请确保你安装了 Fedora 30 的最新软件。如果你安装了模块化软件,这点尤为重要。`dnf` 和 GNOME “软件”的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用 GNOME “软件” 或在终端中输入以下命令:
```
sudo dnf upgrade --refresh
@ -38,7 +38,7 @@ sudo dnf upgrade --refresh
此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列][3]。
#### 2安装 DNF 插件
接下来,打开终端并输入以下命令安装插件:
@ -46,7 +46,7 @@ sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
```
#### 3使用 DNF 开始更新
现在,你的系统是最新的,已经备份并且安装了 DNF 插件,你可以通过在终端中使用以下命令来开始升级:
@ -54,9 +54,9 @@ sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=31
```
该命令将开始在本地下载计算机的所有升级文件。如果由于缺乏更新包、损坏的依赖项或已淘汰的软件包而在升级时遇到问题,请在输入上面的命令时添加 `--allowerasing` 标志。这将使 DNF 删除可能阻止系统升级的软件包。
#### 4重启并升级
上面的命令下载更新完成后,你的系统就可以重启了。要将系统引导至升级过程,请在终端中输入以下命令:
@ -64,7 +64,7 @@ sudo dnf system-upgrade download --releasever=31
sudo dnf system-upgrade reboot
```
此后,你的系统将重启。在许多版本之前,`fedup` 工具会在内核选择/引导页面上创建一个新选项。使用 `dnf-plugin-system-upgrade` 软件包,你的系统将重新引导到当前 Fedora 30 使用的内核。这很正常。在内核选择页面之后不久,你的系统会开始升级过程。
现在也许可以喝杯咖啡休息下!升级完成后,系统将重启,你将能够登录到新升级的 Fedora 31 系统。
@ -83,14 +83,14 @@ via: https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/f30-f31-816x345.jpg
[2]: https://linux.cn/article-11522-1.html
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11543-1.html)
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
[#]: via: (https://opensource.com/article/19/10/intro-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
awk 入门 —— 强大的文本分析工具
======
> 让我们开始使用它。
![](https://img.linux.net.cn/data/attachment/album/201911/06/114421e006e9mbh0xxe8bb.jpg)
`awk` 是用于 Unix 和类 Unix 系统的强大文本解析工具,由于它内置了可用来执行常规解析任务的编程功能,因此也被视为一种编程语言。你可能不会使用 `awk` 开发下一个 GUI 应用,它也可能不会取代你默认的脚本语言,但对于特定任务而言,它是一个强大的工具。
这些任务或许是惊人的多样化。了解 `awk` 可以解决你的哪些问题的最好方法是学习 `awk`。你会惊讶于 `awk` 如何帮助你完成更多工作,却花费更少的精力。
`awk` 的基本语法是:
```
awk [options] 'pattern {action}' file
```
首先,创建此示例文件并将其保存为 `colours.txt`
```
name       color  amount
apple      red    4
banana     yellow 6
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
数据被一个或多个空格分隔为列。以某种方式组织要分析的数据是很常见的:分隔符不一定总是空格,甚至不一定是逗号或分号,但尤其是在日志文件或数据转储中,通常都有一个可预测的格式。你可以利用这种数据格式来帮助 `awk` 提取和处理你关注的数据。
### 打印列
`awk` 中,`print` 函数显示你指定的内容。你可以使用许多预定义的变量,但是最常见的是文本文件中以整数命名的列。试试看:
```
$ awk '{print $2;}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```
在这里,`awk` 显示第二列,用 `$2` 表示。这是相对直观的,因此你可能会猜测 `print $1` 显示第一列,而 `print $3` 显示第三列,依此类推。
要显示*全部*列,请使用 `$0`
美元符号(`$`)后的数字是*表达式*,因此 `$2``$(1+1)` 是同一意思。
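可以自己动手验证一下(示意):
```
$ awk '{print $(1+1)}' colours.txt    # 与 awk '{print $2}' 的输出完全相同
$ awk '{print $0}' colours.txt        # 打印每一行的全部内容
```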
### 有条件地选择列
你使用的示例文件非常结构化。它有一行充当标题,并且各列直接相互关联。通过定义*条件*,你可以限定 `awk` 在找到此数据时返回的内容。例如,要查看第二列中与 `yellow` 匹配的项并打印第一列的内容:
```
awk '$2=="yellow"{print $1}' colours.txt
banana
pineapple
```
正则表达式也可以工作。此表达式近似匹配 `$2` 中以 `p` 开头跟上任意数量(一个或多个)字符后继续跟上 `p` 的值:
```
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
grape   purple  10
plum    purple  2
```
数字能被 `awk` 自然解释。例如,要打印第三列包含大于 5 的整数的行:
```
awk '$3>5 {print $1, $2}' colours.txt
name    color
banana  yellow
grape   purple
apple   green
potato  brown
```
### 字段分隔符
默认情况下,`awk` 使用空格作为字段分隔符。但是,并非所有文本文件都使用空格来定义字段。例如,用以下内容创建一个名为 `colours.csv` 的文件:
```
name,color,amount
apple,red,4
banana,yellow,6
strawberry,red,3
grape,purple,10
apple,green,8
plum,purple,2
kiwi,brown,4
potato,brown,9
pineapple,yellow,5
```
只要你指定将哪个字符用作命令中的字段分隔符,`awk` 就能以完全相同的方式处理数据。使用 `--field-separator`(或简称为 `-F`)选项来定义分隔符:
```
$ awk -F"," '$2=="yellow" {print $1}' colours.csv
banana
pineapple
```
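顺带一提,同样的效果也可以通过在 `BEGIN` 块中设置 `awk` 的内置变量 `FS`(字段分隔符)来实现(示意):
```
$ awk 'BEGIN { FS="," } $2=="yellow" {print $1}' colours.csv
banana
pineapple
```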
### 保存输出
使用输出重定向,你可以将结果写入文件。例如:
```
$ awk -F, '$3>5 {print $1, $2}' colours.csv > output.txt
```
这将创建一个包含 `awk` 查询内容的文件。
你还可以将文件拆分为按列数据分组的多个文件。例如,如果要根据每行显示的颜色将 `colours.txt` 拆分为多个文件,你可以在 `awk` 中包含重定向语句来重定向*每条查询*
```
$ awk '{print > $2".txt"}' colours.txt
```
这将生成名为 `yellow.txt`、`red.txt` 等文件。
在下一篇文章中,你将了解有关字段、记录和一些强大的 awk 变量的更多信息。
本文改编自社区技术播客 [Hacker Public Radio][2]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/intro-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://hackerpublicradio.org/eps.php?id=2114

View File

@ -1,40 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11542-1.html)
[#]: subject: (How to Find Out Top Memory Consuming Processes in Linux)
[#]: via: (https://www.2daygeek.com/linux-find-top-memory-consuming-processes/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何在 Linux 中找出内存消耗最大的进程
======
![](https://img.linux.net.cn/data/attachment/album/201911/06/110149r81efjx12afjat7f.jpg)
很多次,你可能遇见过系统消耗了过多的内存。如果是这种情况,那么最好的办法是识别出 Linux 机器上消耗过多内存的进程。我相信,你可能已经运行了下文中的命令以进行检查。如果没有,那你尝试过哪些其他的命令?我希望你可以在评论中更新这篇文章,它可能会帮助其他用户。
使用 [top 命令][1] 和 [ps 命令][2] 可以轻松的识别这种情况。我过去经常同时使用这两个命令,两个命令得到的结果是相同的。所以我建议你从中选择一个喜欢的使用就可以。
### 1) 如何使用 ps 命令在 Linux 中查找内存消耗最大的进程
`ps` 命令用于报告当前进程的快照。`ps` 命令的意思是“进程状态”。这是一个标准的 Linux 应用程序,用于查找有关在 Linux 系统上运行进程的信息。
它用于列出当前正在运行的进程及其进程 IDPID、进程所有者名称、进程优先级PR以及正在运行的命令的绝对路径等。
下面的 `ps` 命令格式为你提供有关内存消耗最大进程的更多信息。
```
# ps aux --sort -rss | head
@ -51,7 +39,7 @@ root 1135 0.0 0.9 86708 37572 ? S 05:37 0:20 cwpsrv: worker
root 1133 0.0 0.9 86708 37544 ? S 05:37 0:05 cwpsrv: worker process
```
使用以下 `ps` 命令格式可在输出中仅展示有关内存消耗过程的特定信息。
```
# ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head
@ -68,7 +56,7 @@ Use the below ps command format to include only specific information about the p
1135 3034 0.9 0.0 cwpsrv: worker process
```
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `ps` 命令格式。
```
# ps -eo pid,ppid,%mem,%cpu,comm --sort=-%mem | head
@ -85,15 +73,11 @@ If you want to see only the command name instead of the absolute path of the com
1133 3034 0.9 0.0 cwpsrv
```
### 2) 如何使用 top 命令在 Linux 中查找内存消耗最大的进程
Linux 的 `top` 命令是用来监视 Linux 系统性能的最好和最知名的命令。它在交互界面上显示运行的系统进程的实时视图。但是,如果要查找内存消耗最大的进程,请 [在批处理模式下使用 top 命令][3]。
你应该正确地 [了解 top 命令输出][4] 以解决系统中的性能问题。
```
# top -c -b -o +%MEM | head -n 20 | tail -15
@ -114,7 +98,7 @@ You should properly **[understand the top command output][4]** to fix the perfor
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 /usr/local/apache/bin/httpd -k start
```
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `top` 命令格式。
```
# top -b -o +%MEM | head -n 20 | tail -15
@ -135,15 +119,11 @@ If you only want to see the command name instead of the absolute path of the com
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 httpd
```
### 3) 奖励技巧:如何使用 ps_mem 命令在 Linux 中查找内存消耗最大的进程
[ps_mem 程序][5] 用于显示每个程序(而不是每个进程)使用的核心内存。该程序允许你检查每个程序使用了多少内存。它根据程序计算私有和共享内存的数量,并以最合适的方式返回已使用的总内存。
它使用以下逻辑来计算内存使用量。总内存使用量 = sum(用于程序进程的专用内存使用量) + sum(用于程序进程的共享内存使用量)。
```
# ps_mem
@ -205,7 +185,7 @@ via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -213,6 +193,6 @@ via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[2]: https://www.2daygeek.com/linux-ps-command-find-running-process-monitoring/
[3]: https://linux.cn/article-11491-1.html
[4]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[5]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/

View File

@ -1,39 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11547-1.html)
[#]: subject: (Viewing network bandwidth usage with bmon)
[#]: via: (https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
用 bmon 查看网络带宽使用情况
======
> 介绍一下 bmon这是一个监视和调试工具可捕获网络统计信息并使它们易于理解。
![](https://img.linux.net.cn/data/attachment/album/201911/07/010237a8gb5oqddvl3bnd0.jpg)
`bmon` 是一种监视和调试工具,可在终端窗口中捕获网络统计信息,并提供了如何以易于理解的形式显示以及显示多少数据的选项。
要检查系统上是否安装了 `bmon`,请使用 `which` 命令:
```
$ which bmon
/usr/bin/bmon
```
### 获取 bmon
在 Debian 系统上,使用 `sudo apt-get install bmon` 安装该工具。
对于 Red Hat 和相关发行版,你可以使用 `yum install bmon``sudo dnf install bmon` 进行安装。或者,你可能必须使用更复杂的安装方式,例如使用以下命令,这些命令首先使用 root 帐户或 sudo 来设置所需的 `libconfuse`
```
# wget https://github.com/martinh/libconfuse/releases/download/v3.2.2/confuse-3.2.2.zip
@ -48,15 +42,13 @@ For Red Hat and related distributions, you might be able to install with **yum i
# sudo make install
```
前面五行会安装 `libconfuse`,而后面五行会获取并安装 `bmon` 本身。
### 使用 bmon
启动 `bmon` 的最简单方法是在命令行中键入 `bmon`。根据你正在使用的窗口的大小,你能够查看并显示各种数据。
显示区域的顶部将显示你的网络接口的统计信息环回接口lo和可通过网络访问的接口例如 eth0。如果你的终端窗口只有区区几行高下面这就是你可能会看到的所有内容它将看起来像这样
```
lo bmon 4.0
@ -73,7 +65,7 @@ q Press i to enable additional information qq
Wed Oct 23 14:36:27 2019 Press ? for help
```
在此示例中,网络接口是 enp0s25。请注意列出的接口下方的有用的 “Increase screen height” 提示。拉伸屏幕以增加足够的行(无需重新启动 bmon你将看到一些图形
```
Interfaces x RX bps pps %x TX bps pps %
@ -100,7 +92,7 @@ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqq
1 5 10 15 20 25 30 35 40 45 50 55 60
```
但是请注意,该图形未显示值。这是因为它正在显示环回 “>lo” 接口。按下箭头键指向公共网络接口,你将看到一些流量。
```
Interfaces x RX bps pps %x TX bps pps %
@ -132,11 +124,11 @@ q Press i to enable additional information qq
Wed Oct 23 16:42:06 2019 Press ? for help
```
通过更改接口,你可以查看显示了网络流量的图表。但是请注意,默认值是按每秒字节数显示的。要按每秒位数来显示,你可以使用 `bmon -b` 启动该工具。
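例如,下面这种启动方式把几个选项组合在一起:以比特为单位显示、每 5 秒刷新一次,并且只显示指定的接口(接口名沿用上文示例中的 enp0s25`-p` 选项按名称筛选接口,属于本文之外的补充说明):
```
$ bmon -b -R 5 -p enp0s25
```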
如果你的窗口足够大并按下 `d` 键,则可以显示有关网络流量的详细统计信息。你看到的统计信息示例如下所示。由于其宽度太宽,该显示分为左右两部分。
左侧:
```
RX TX │ RX TX │
@ -154,7 +146,7 @@ RX TX │ RX TX │
Window Error - 0 │ │
```
右侧:
```
│ RX TX │ RX TX
@ -171,9 +163,9 @@ RX TX │ RX TX │
│ No Handler 0 - │ Over Error 0 -
```
如果按下 `i` 键,将显示网络接口上的其他信息。
左侧:
```
MTU 1500 | Flags broadcast,multicast,up |
@ -181,7 +173,7 @@ Address 00:1d:09:77:9d:08 | Broadcast ff:ff:ff:ff:ff:ff |
Family unspec | Alias |
```
右侧:
```
| Operstate up | IfIndex 2 |
@ -189,19 +181,15 @@ Family unspec | Alias |
| Qdisc fq_codel |
```
如果你按下 `?` 键,将会出现一个帮助菜单,其中简要介绍了如何在屏幕上移动光标、选择要显示的数据以及控制图形如何显示。
要退出 `bmon`,输入 `q`,然后输入 `y` 以响应提示来确认退出。
需要注意的一些重要事项是:
* `bmon` 会将其显示调整为终端窗口的大小
* 显示区域底部显示的某些选项仅在窗口足够大可以容纳数据时才起作用
* 除非你使用 `-R`(例如 `bmon -R 5`)来减慢显示速度,否则每秒更新一次显示
--------------------------------------------------------------------------------
@ -209,8 +197,8 @@ via: https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usag
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11539-1.html)
[#]: subject: (Why you don't have to be afraid of Kubernetes)
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
为什么你不必害怕 Kubernetes
======
> Kubernetes 绝对是满足复杂 web 应用程序需求的最简单、最容易的方法。
![Digital creative of a browser on the internet][1]
在 90 年代末和 2000 年代初,在大型网站工作很有趣。我的经历让我想起了 American Greetings Interactive在情人节那天我们拥有了互联网上排名前 10 位之一的网站(以网络访问量衡量)。我们为 [AmericanGreetings.com][2]、[BlueMountain.com][3] 等公司提供了电子贺卡,并为 MSN 和 AOL 等合作伙伴提供了电子贺卡。该组织的老员工仍然深切地记得与 Hallmark 等其它电子贺卡网站进行大战的史诗般的故事。顺便说一句,我还为 Holly Hobbie、Care Bears 和 Strawberry Shortcake 运营过大型网站。
我记得那就像是昨天发生的一样,这是我们第一次遇到真正的问题。通常,我们的前门(路由器、防火墙和负载均衡器)有大约 200Mbps 的流量进入。但是突然之间Multi Router Traffic GrapherMRTG图示突然在几分钟内飙升至 2Gbps。我疯了似地东奔西跑。我了解了我们的整个技术堆栈从路由器、交换机、防火墙和负载平衡器到 Linux/Apache web 服务器,到我们的 Python 堆栈FastCGI 的元版本以及网络文件系统NFS服务器。我知道所有配置文件在哪里我可以访问所有管理界面并且我是一位经验丰富的打过硬仗的系统管理员具有多年解决复杂问题的经验。
但是,我无法弄清楚发生了什么……
当你在一千个 Linux 服务器上疯狂地键入命令时,五分钟的感觉就像是永恒。我知道站点可能会在任何时候崩溃,因为当它被划分成更小的集群时,压垮上千个节点的集群是那么的容易。
我迅速*跑到*老板的办公桌前,解释了情况。他几乎没有从电子邮件中抬起头来,这使我感到沮丧。他抬头看了看,笑了笑,说道:“是的,市场营销可能会开展广告活动。有时会发生这种情况。”他告诉我在应用程序中设置一个特殊标志,以减轻 Akamai 的访问量。我跑回我的办公桌,在上千台 web 服务器上设置了标志,几分钟后,站点恢复正常。灾难也就被避免了。
我可以再分享 50 个类似的故事,但你脑海中可能会有一点好奇:“这种运维方式将走向何方?”
关键是,我们遇到了业务问题。当技术问题使你无法开展业务时,它们就变成了业务问题。换句话说,如果你的网站无法访问,你就不能处理客户交易。
那么,所有这些与 Kubernetes 有什么关系?一切!世界已经改变。早在 90 年代末和 00 年代初,只有大型网站才出现大型的、<ruby>规模级<rt>web-scale</rt></ruby>的问题。现在,有了微服务和数字化转型,每个企业都面临着一个大型的、规模级的问题——可能是多个大型的、规模级的问题。
你的企业需要能够通过许多不同的人构建的许多不同的、通常是复杂的服务来管理复杂的规模级的网站。你的网站需要动态地处理流量,并且它们必须是安全的。这些属性需要在所有层(从基础结构到应用程序层)上由 API 驱动。
### 进入 Kubernetes
Kubernetes 并不复杂你的业务问题才复杂。当你想在生产环境中运行应用程序时要满足性能伸缩性、性能抖动等和安全性要求就需要最低程度的复杂性。诸如高可用性HA、容量要求N+1、N+2、N+100以及保证最终一致性的数据技术等就会成为必需。这些是每家进行数字化转型的公司的生产要求而不仅仅是 Google、Facebook 和 Twitter 这样的大型网站。
在过去,我还在 American Greetings 任职时,每次我们上线一个新的服务,流程看起来就像下面这样。这些工作全部由网站运营团队来处理,没有一项是作为工单转交给其他团队处理的。这是 DevOps 出现之前的 DevOps
1. 配置 DNS通常是内部服务层和面向公众的外部
2. 配置负载均衡器(通常是内部服务和面向公众的)
3. 配置对文件的共享访问(大型 NFS 服务器、群集文件系统等)
4. 配置集群软件(数据库、服务层等)
5. 配置 web 服务器群集(可以是 10 或 50 个服务器)
大多数配置是通过配置管理自动完成的,但是配置仍然很复杂,因为每个系统和服务都有不同的配置文件,而且格式完全不同。我们研究了像 [Augeas][4] 这样的工具来简化它,但是我们认为使用转换器来尝试和标准化一堆不同的配置文件是一种反模式。
如今,借助 Kubernetes启动一项新服务本质上看起来如下
1. 配置 Kubernetes YAML/JSON。
2. 提交给 Kubernetes API`kubectl create -f service.yaml`),如下面的示意所示。
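下面是一个极简的示意(其中的名称和镜像都是假设的,真实的服务通常会用完整的 YAML 清单来描述):
```
# 用一条命令创建一个 Deployment再把它暴露为 Service
$ kubectl create deployment hello --image=nginx
$ kubectl expose deployment hello --port=80
$ kubectl get svc hello
```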
Kubernetes 大大简化了服务的启动和管理。服务所有者(无论是系统管理员、开发人员还是架构师)都可以创建 Kubernetes 格式的 YAML/JSON 文件。使用 Kubernetes每个系统和每个用户都说相同的语言。所有用户都可以在同一 Git 存储库中提交这些文件,从而启用 GitOps。
而且,可以弃用和删除服务。从历史上看,删除 DNS 条目、负载平衡器条目和 Web 服务器的配置等是非常可怕的,因为你几乎肯定会破坏某些东西。使用 Kubernetes所有内容都处于命名空间下因此可以通过单个命令删除整个服务。尽管你仍然需要确保其它应用程序不使用它微服务和函数即服务 [FaaS] 的缺点),但你可以更加确信:删除服务不会破坏基础架构环境。
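例如,假设该服务的所有资源都放在一个名为 `my-old-service` 的命名空间下(名称是假设的),一条命令即可全部删除:
```
$ kubectl delete namespace my-old-service
```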
### 构建、管理和使用 Kubernetes
太多的人专注于构建和管理 Kubernetes 而不是使用它(详见 [Kubernetes 是一辆翻斗车][5])。
在单个节点上构建一个简单的 Kubernetes 环境并不比安装 LAMP 堆栈复杂多少,但是我们却无休止地争论着自建还是购买的问题。难的不是 Kubernetes 本身,难的是以高可用性大规模地运行应用程序。建立一个复杂的、高可用性的 Kubernetes 集群很困难,因为建立任何如此规模的集群都很困难,它需要规划和大量软件。建造一辆简单的翻斗车并不复杂,但是建造一辆可以运载 [10 吨垃圾并能以 200 迈的速度稳定行驶的卡车][6]则很复杂。
管理 Kubernetes 可能很复杂,因为管理大型的、规模级的集群可能很复杂。有时,管理此基础架构很有意义;而有时不是。由于 Kubernetes 是一个社区驱动的开源项目,它使行业能够以多种不同方式对其进行管理。供应商可以出售托管版本,而用户可以根据需要自行决定对其进行管理。(但是你应该质疑是否确实需要。)
使用 Kubernetes 是迄今为止运行大规模网站的最简单方法。Kubernetes 正在普及运行一组大型、复杂的 Web 服务的能力——就像当年 Linux 在 Web 1.0 中所做的那样。
由于时间和金钱是一个零和游戏,因此我建议将重点放在使用 Kubernetes 上。将你的时间和金钱花费在[掌握 Kubernetes 原语][7]或处理[活跃度和就绪性探针][8]的最佳方法上(表明大型、复杂的服务很难的另一个例子)。不要专注于构建和管理 Kubernetes。在构建和管理上许多供应商可以为你提供帮助。
### 结论
我记得对无数的问题进行了故障排除,比如我在这篇文章的开头所描述的问题——当时 Linux 内核中的 NFS、我们自产的 CFEngine、仅在某些 Web 服务器上出现的重定向问题等)。开发人员无法帮助我解决所有这些问题。实际上,除非开发人员具备高级系统管理员的技能,否则他们甚至不可能进入系统并作为第二双眼睛提供帮助。没有带有图形或“可观察性”的控制台——可观察性在我和其他系统管理员的大脑中。如今,有了 Kubernetes、Prometheus、Grafana 等,一切都改变了。
关键是:
1. 时代不一样了。现在,所有 Web 应用程序都是大型的分布式系统。就像 AmericanGreetings.com 过去一样复杂,现在每个网站都有扩展性和 HA 的要求。
2. 运行大型的分布式系统是很困难的。绝对是。这是业务的需求,不是 Kubernetes 的问题。使用更简单的编排系统并不是解决方案。
Kubernetes 绝对是满足复杂 Web 应用程序需求的最简单、最容易的方法。这是我们生活的时代,而 Kubernetes 擅长于此。你可以讨论是否应该自己构建或管理 Kubernetes。有很多供应商可以帮助你构建和管理它但是很难否认这是大规模运行复杂 Web 应用程序的最简单方法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: http://AmericanGreetings.com
[3]: http://BlueMountain.com
[4]: http://augeas.net/
[5]: https://linux.cn/article-11011-1.html
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
[7]: https://linux.cn/article-11036-1.html
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hypervisor comeback, Linus says no and reads email, and more industry trends)
[#]: via: (https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Hypervisor comeback, Linus says no and reads email, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Containers in 2019: They're calling it a [hypervisor] comeback][2]
> So what does all this mean as we continue with rapid adoption and hyper-ecosystem growth around Kubernetes and containers? Let's try and break that down into a few key areas and see what all the excitement is about.
**The impact**: I'm pretty sure that the title of the article is an LL Cool J reference, which I wholeheartedly approve of. Even more important though is a robust unpacking of developments in the hypervisor space over the last year and how they square up against the trend towards cloud-native and container-based development.
## [Linux kernel is getting more reliable, says Linus Torvalds. Plus: What do you need to do to be him?][3]
> "In the end my job is to say no. Somebody has to be able to say no, because other developers know that if they do something bad I will say no. They hopefully in turn are more careful. But in order to be able to say no, I have to know the background, because otherwise I can't do my job. I spend all my time basically reading email about what people are working on.
**The impact**: The rehabilitation of Linus as a much chiller guy continues; this one has some good advice for people leading distributed teams.
## [Automated infrastructure in the on-premise datacenter—OpenShift 4.2 on OpenStack 15 (Stein)][4]
> Up until now IPI (Installer Provision Infrastructure) has only supported public clouds: AWS, Azure, and Google. Now with OpenShift 4.2 it is supporting OpenStack. For the first time we can bring IPI into the on-premise datacenter where it is IMHO most needed. This single feature has the potential to revolutionize on-premise environments and bring them into the cloud-age with a single click and that promise is truly something to get excited about!
**The impact**: So much tech press has started with the assumption that every company should run their infrastructure like a hyperscaler. The technology is catching up to make the user experience of that feasible.
## [Kubernetes autoscaling 101: Cluster autoscaler, horizontal autoscaler, and vertical pod autoscaler][5]
> Im providing in this post a high-level overview of different scalability mechanisms inside Kubernetes and best ways to make them serve your needs. Remember, to truly master Kubernetes, you need to master different ways to manage the scale of cluster resources, thats [the core of promise of Kubernetes][6].
>
> _Configuring Kubernetes clusters to balance resources and performance can be challenging, and requires expert knowledge of the inner workings of Kubernetes. Just because your app or services workload isnt constant, it rather fluctuates throughout the day if not the hour. Think of it as a journey and ongoing process._
**The impact**: You can tell whether someone knows what they're talking about if they can represent it in a simple diagram. Thanks to the excellent diagrams in this post, I know more about the day-2 concerns of Kubernetes operators than I ever wanted to.
## [GitHub: All open source developers anywhere are welcome][7]
> Eighty percent of all open-source contributions today, come from outside of the US. The top two markets for open source development outside of the US are China and India. These markets, although we have millions of developers in them, are continuing to grow faster than any others at about 30% year-over-year average.
**The impact**: One of my open source friends likes to muse on the changing culture within the open source community. He posits that the old guard gatekeepers are already becoming irrelevant. I don't know if I completely agree, but I think you can look at the exponentially increasing contributions from places that haven't been on the open source map before and safely speculate that the open source culture of tomorrow will be radically different than that of today.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.infoq.com/articles/containers-hypervisors-2019/
[3]: https://www.theregister.co.uk/2019/10/30/linux_kernel_is_getting_more_reliable_says_linus_torvalds/
[4]: https://keithtenzer.com/2019/10/29/automated-infrastructure-in-the-on-premise-datacenter-openshift-4-2-on-openstack-15-stein/
[5]: https://www.cncf.io/blog/2019/10/29/kubernetes-autoscaling-101-cluster-autoscaler-horizontal-autoscaler-and-vertical-pod-autoscaler/
[6]: https://speakerdeck.com/thockin/everything-you-ever-wanted-to-know-about-resource-scheduling-dot-dot-dot-almost
[7]: https://www.zdnet.com/article/github-all-open-source-developers-anywhere-are-welcome/#ftag=RSSbaffb68

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat announces RHEL 8.1 with predictable release cadence)
[#]: via: (https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Red Hat announces RHEL 8.1 with predictable release cadence
======
[Clkr / Pixabay][1] [(CC0)][2]
[Red Hat][3] today announced the availability of Red Hat Enterprise Linux (RHEL) 8.1, promising improvements in manageability, security and performance.
RHEL 8.1 will enhance the companys open [hybrid-cloud][4] portfolio and continue to provide a consistent user experience between on-premises and public-cloud deployments.
RHEL 8.1 is also the first release that will follow what Red Hat is calling its "predictable release cadence". Announced at Red Hat Summit 2019, this means that minor releases will be available every six months. The expectation is that this rhythmic release cycle will make it easier both for customer organizations and other software providers to plan their upgrades.
Red Hat Enterprise Linux 8.1 provides product enhancements in many areas.
### Enhanced automation
All supported RHEL subscriptions now include access to Red Hat's proactive analytics, **Red Hat Insights**. With more than 1,000 rules for operating RHEL systems, whether deployed on-premises or in the cloud, Red Hat Insights helps IT administrators flag potential configuration, security, performance, availability and stability issues before they impact production.
### New system roles
RHEL 8.1 streamlines the process of setting up subsystems that handle specific functions such as storage, networking, time synchronization, kdump and SELinux. This expands the set of Ansible system roles available for these tasks.
### Live kernel patching
RHEL 8.1 adds full support for live kernel patching. This critically important feature allows IT operations teams to deal with ongoing threats without incurring excessive system downtime. Kernel updates can be applied to remediate common vulnerabilities and exposures (CVE) while reducing the need for a system reboot. Additional security enhancements include enhanced CVE remediation, kernel-level memory protection and application whitelisting.
### Container-centric SELinux profiles
These profiles allow the creation of more tailored security policies to control how containerized services access host-system resources, making it easier to harden systems against security threats.
### Enhanced hybrid-cloud application development
A reliably consistent set of supported development tools is included, among them the latest stable versions of popular open source languages and tools such as Go (golang) and .NET Core, as well as the ability to power modern data-processing workloads such as Microsoft SQL Server and SAP solutions.
Red Hat Enterprise Linux 8.1 is available now for RHEL subscribers via the [Red Hat Customer Portal][7]. Red Hat Developer program members may obtain the latest releases at no cost at the [Red Hat Developer][8] site.
#### Additional resources
Here are some links to additional information:
* More about [Red Hat Enterprise Linux][9]
* Get a [RHEL developer subscription][10]
* More about the latest features at [Red Hat Insights][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://pixabay.com/vectors/red-hat-fedora-fashion-style-26734/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3316960/ibm-closes-34b-red-hat-deal-vaults-into-multi-cloud.html
[4]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://access.redhat.com/
[8]: https://developer.redhat.com
[9]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[10]: https://developers.redhat.com/
[11]: https://www.redhat.com/en/blog/whats-new-red-hat-insights-november-2019
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (System76 introduces laptops with open source BIOS coreboot)
[#]: via: (https://opensource.com/article/19/11/coreboot-system76-laptops)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
System76 introduces laptops with open source BIOS coreboot
======
The company answers open hardware fans by revealing two laptops powered
with open source firmware coreboot.
![Guy on a laptop on a building][1]
In mid-October, [System76][2] made an exciting announcement for open source hardware fans: It would soon begin shipping two of its laptop models, [Galago Pro][3] and [Darter Pro][4], with the open source BIOS [coreboot][5].
The coreboot project [says][6] its open source firmware "is a replacement for your BIOS / UEFI with a strong focus on boot speed, security, and flexibility. It is designed to boot your operating system as fast as possible without any compromise to security, with no back doors, and without any cruft from the '80s." Coreboot was previously known as LinuxBIOS, and the engineers who work on coreboot have also contributed to the Linux kernel.
Most firmware on computers sold today is proprietary, which means even if you are running an open source operating system, you have no access to your machine's BIOS. This is not so with coreboot. Its developers share the improvements they make, rather than keeping them secret from other vendors. Coreboot's source code can be inspected, learned from, and modified, just like any other open source code.
[Joshua Woolery][7], marketing director at System76, says coreboot differs from a proprietary BIOS in several important ways. "Traditional firmware is closed source and impossible to review and inspect. It's bloated with unnecessary features and unnecessarily complex [ACPI][8] implementations that lead to PCs operating in unpredictable ways. System76 Open Firmware, on the other hand, is lightweight, fast, and cleanly written." This means your computer boots faster and is more secure, he says.
I asked Joshua about the impact of coreboot on open hardware overall. "The combination of open hardware and open firmware empowers users beyond what's possible when one or the other is proprietary," he says. "Imagine an open hardware controller like [System76's] [Thelio Io][9] without open source firmware. One could read the schematic and write software to control it, but why? With open firmware, the user starts from functioning hardware and software and can expand from there. Open hardware and firmware enable the community to learn from, adapt, and expand on our work, thus moving technology forward as a whole rather than requiring individuals to constantly re-implement what's already been accomplished."
Joshua says System76 is working to open source all aspects of the computer, and we will see coreboot on other System76 machines. The hardware and firmware in Thelio Io, the controller board in the company's Thelio desktops, are both open. Less than a year after System76 introduced Thelio, the company is now marketing two laptops with open firmware.
If you would like to see System76's firmware contributions to the coreboot project, visit the code repository on [GitHub][10]. You can also see the schematics for any supported System76 model by sending an [email][11] with the subject line: _Schematics for &lt;MODEL&gt;_. (Bear in mind that the only currently supported models are darp6 and galp4.) Using the coreboot firmware on other devices is not supported and may render them inoperable.
Coreboot is licensed under the GNU General Public License. You can view the [documentation][12] on the project's website and find out how to [contribute][13] to the project on GitHub.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/coreboot-system76-laptops
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Guy on a laptop on a building)
[2]: https://opensource.com/article/19/5/system76-secret-sauce
[3]: https://system76.com/laptops/galago
[4]: https://system76.com/laptops/darter
[5]: https://www.coreboot.org/
[6]: https://www.coreboot.org/users.html
[7]: https://www.linkedin.com/in/joshuawoolery
[8]: https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface
[9]: https://opensource.com/article/18/11/system76-thelio-desktop-computer
[10]: https://github.com/system76/firmware-open
[11]: mailto:productdev@system76.com
[12]: https://doc.coreboot.org/index.html
[13]: https://github.com/coreboot/coreboot

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -152,7 +152,7 @@ via: https://opensource.com/article/19/10/open-source-name-origins
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Big Data Solutions Support Digital Transformation)
[#]: via: (https://opensourceforu.com/2019/11/open-source-big-data-solutions-support-digital-transformation/)
[#]: author: (Vinayak Ramachandra Adkoli https://opensourceforu.com/author/vinayak-adkoli/)
Open Source Big Data Solutions Support Digital Transformation
======
[![][1]][2]
_The digital transformation (DT) of enterprises is enabled by the judicious use of Big Data. And it is open source technologies that are the driving force behind the power of Big Data and DT._
Digital Transformation (DT) and Big Data combine to offer several advantages. Big Data based, digitally transformed systems make life easier and smarter, whether in home automation or industrial automation. The digital world tracks the Big Data generated by IoT devices and similar sources and tries to make this data more productive; as the world progresses, DT can be taken as a given.
For example, NASA's rover Curiosity is sending Big Data from Mars to the Earth. Like the data sent by NASA's satellites revolving around Mars, this is digitally transformed Big Data, which works with DT to provide a unique platform for open source applications. Today, Curiosity has its own Twitter account with four million followers.
A Digital Transformation isnt complete unless a business adopts Big Data. The phrase “Data is the new crude oil,” is not new. However, crude oil itself has no value, unless it is refined into petrol, diesel, tar, wax, etc. Similarly, in our daily lives, we deal with tons of data. If this data is refined to a useful form, only then is it of some real use.
As an example, we can see the transformation televisions have undergone, in appearance. We once had picture tube based TVs. Today, we have LEDs, OLEDs, LCD based TVs, curved TVs, Internet enabled TVs, and so on. Such transformation is also quite evident in the digital world.
In a hospital, several patients may be diagnosed with cancer each year. The patient data generated is voluminous, including treatment methods, diverse drug therapies, patient responses, genetic histories, etc. But such vast pools of information, i.e., Big Data, would serve no useful purpose without proper analysis. So DT, coupled with Big Data and open source applications, can create a more patient-focused and effective treatment, one that might have higher recovery rates.
Big Data combines structured data with unstructured data to give us new business insights that weve never had before. Structured data may be traditional spreadsheets, your customer list, information about your products and business processes, etc. Unstructured data may include Google Trends data, feeds from IoT sensors, etc. When a layer of unstructured data is placed on top of structured data and analysed, thats where the magic happens.
Let's look at a typical business situation. Suppose a century-old car-making company asks its data team to use Big Data concepts to find an efficient way to make reliable sales forecasts. In the past, the team would look at the number of products it had sold in the previous month, as well as the number of cars it had sold a year ago, and use that data to make a safe forecast. But now the Big Data team uses sentiment analysis on Twitter to see what people are saying about the company's products and brand. It also looks at Google Trends to see which similar products and brands are being searched for the most. Then it correlates such data from the preceding few months with the actual current sales figures to check whether the former was predictive, i.e., had Google Trends over the past few months actually predicted the firm's current sales figures?
In the case of the car company, while making sales forecasts, the team used structured data (how many cars were sold last month, a year ago, etc) and layers of unstructured data (sentiment analysis from Twitter and Google Trends), which resulted in a smarter forecast. Thus, Big Data is today becoming more effective in business situations like sales planning, promotions, market campaigns, etc.
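To make this correlation check concrete, here is a minimal pandas sketch of the idea; the figures, column names and the one-month lag are invented for illustration and do not come from any real car maker.

```
# A minimal sketch: does last month's search interest predict this month's sales?
# The numbers and column names below are illustrative, not real data.
import pandas as pd

data = pd.DataFrame({
    # structured data: actual units sold per month
    "units_sold":      [950, 990, 1020, 980, 1100, 1180],
    # aggregated unstructured-data signal: search interest (0-100)
    "search_interest": [42,  45,   51,  48,   60,   66],
}, index=pd.period_range("2019-01", periods=6, freq="M"))

# Shift the trend signal forward one month so that January's interest
# is compared against February's sales, and so on.
lagged = data["search_interest"].shift(1)
correlation = lagged.corr(data["units_sold"])

print(f"Correlation between lagged search interest and sales: {correlation:.2f}")
```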
**Open source is the key to DT**
Open source, nowadays, clearly dominates domains like Big Data, mobile and cloud platforms. Once open source becomes a key component that delivers a good financial performance, the momentum is unstoppable. Open source (often coupled with the cloud) is giving Big Data based companies like Google, Facebook and other Web giants flexibility to innovate faster.
Big Data companies are using DT to understand their processes, so that they can employ technologies like IoT, Big Data analytics, AI, etc, better. The journey of enterprises migrating from old digital infrastructure to new platforms is an exciting trend in the open source environment.
Organisations are relying on data warehouses and business intelligence applications to help make important data-driven business decisions. Different types of data, such as audio, video and other unstructured data, are organised into formats that make them easier to identify and use for future decisions.
**Open source tools used in DT**
Several open source tools are becoming popular for dealing with Big Data and DT. Some of them are listed below.
  * **Hadoop** is known for its ability to process extremely large data volumes in both structured and unstructured formats, reliably distributing Big Data across the nodes of a cluster and making it available locally on the processing machine.
  * **MapReduce** is a crucial component of Hadoop. It processes vast amounts of data in parallel on large clusters of compute nodes and was originally developed at Google (a minimal single-machine sketch of the programming model follows this list).
  * **Storm** differs from the other tools with its distributed, real-time, fault-tolerant processing model, in contrast to Hadoop's batch processing. It is fast and highly scalable; it was open sourced by Twitter and is now an Apache project.
  * **Apache Cassandra** is used by many organisations with large, active data sets, including Netflix, Twitter, Urban Airship, Cisco and Digg. Originally developed by Facebook, it is now managed by the Apache Software Foundation.
  * **Kaggle** is the world's largest data science community. It helps organisations and researchers post their data sets and competitions, and gives practitioners a hosted environment in which to explore and analyse large data sets.
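To make the MapReduce model mentioned in the list above concrete, here is a toy, single-machine word count written in plain Python. It only illustrates the map, shuffle and reduce phases; a real Hadoop job distributes these phases across a cluster.

```
# A toy word count illustrating the MapReduce programming model on one machine.
# Real MapReduce frameworks run the map and reduce phases across many nodes.
from collections import defaultdict

documents = [
    "big data needs big infrastructure",
    "open source drives big data",
]

# Map phase: emit (word, 1) pairs for every word in every document.
mapped = []
for doc in documents:
    for word in doc.split():
        mapped.append((word, 1))

# Shuffle phase: group the emitted pairs by key (the word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word.
word_counts = {word: sum(counts) for word, counts in grouped.items()}

print(word_counts)  # e.g. {'big': 3, 'data': 2, ...}
```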
**DT: A new innovation**
DT is the result of IT innovation. It is driven by well-planned business strategies, with the goal of inventing new business models. Today, any organisation can undergo business transformation because of three main business-focused essentials — intelligence, the ability to decide more quickly and a customer-centric outlook.
DT, which includes establishing Big Data analytics capabilities, poses considerable challenges for traditional manufacturing organisations, such as car companies. The successful introduction of Big Data analytics often requires substantial organisational transformation including new organisational structures and business processes.
Retail is one of the most active sectors when it comes to DT. JLab is an innovative DT venture by retail giant John Lewis that offers plenty of creativity and entrepreneurial dynamism: it backs five startups each year and helps them bring their technologies to market. For example, Digital Bridge, a startup promoted by JLab, has developed a clever e-commerce website that allows shoppers to snap photos of their rooms and see what furniture and other products would look like in their own homes. It automatically detects walls and floors, and creates a photo-realistic virtual representation of the customer's room. Here, lighting and decoration can be changed and products can be placed, rotated and repositioned with a realistic perspective.
Companies across the globe are going through digital business transformation as it helps to improve their business processes and leads to new business opportunities. The importance of Big Data in the business world cant be ignored. Nowadays, it is a key factor for success. There is a huge amount of valuable data which companies can use to improve their results and strategies. Today, every important decision can and should be supported by the application of data analytics.
Big Data and open source help DT do more for businesses. DT helps companies become digitally mature and gain a solid presence on the Internet. It helps companies to identify any drawbacks that may exist in their e-commerce system.
**Big Data in DT**
Data is critical, but it cant be used as a replacement for creativity. In other words, DT is not all about creativity versus data, but its about creativity enhanced by data.
Companies gather data to analyse and improve the customer experience, and then to create targeted messages emphasising the brand promise. But emotion, story-telling and human connections remain as essential as ever. The DT world today is dominated by Big Data. This is inevitable, given that business organisations want DT based on Big Data, so that their offerings are innovative, appealing and useful enough to attract customers and hence increase sales.
Tesla cars today are equipped with sensors and IoT connections to gather a vast amount of data. Improvements based on this data are then fed back into the cars, creating a better driving experience.
**DT in India**
DT can transform businesses across every vertical in India. Data analytics has changed from being a good-to-have to a must-have technology.
According to a survey by Microsoft in partnership with International Data Corporation (IDC), by 2021, DT will add an estimated US$ 154 billion to Indias GDP and increase the growth rate by 1 per cent annually. Ninety per cent of Indian organisations are in the midst of their DT journey. India is the biggest user and contributor to open source technology. DT has created a new ripple across the whole of India and is one of the major drivers for the growth of open source. The government of India has encouraged the adoption of this new technology in the Digital India initiative, and this has further encouraged the CEOs of enterprises and other government organisations to make a move towards this technology.
The continuous DT in India is being driven faster with the adoption of emerging technologies like Big Data. Thats one of the reasons why organisations today are investing in these technological capabilities. Businesses in India are recognising the challenges of DT and embracing them. Overall, it may be said that the new DT concept is more investor and technology friendly, in tune with the Make in India programme of the present government.
From finding ways to increase business efficiency and trimming costs, to retaining high-value customers, determining new revenue opportunities and preventing fraud, advanced analytics is playing an important role in the DT of Big Data based companies.
**The way forward**
Access to Big Data has changed the game for small and large businesses alike. Big Data can help businesses to solve almost every problem. DT helps companies to embrace a culture of change and remain competitive in a global environment. Losing weight is a lifestyle change, and so is the incorporation of Big Data into business strategies.
Big Data is the currency of tomorrow, and today, it is the fuel running a business. DT can harness it to a greater level.
![Avatar][3]
[Vinayak Ramachandra Adkoli][4]
The author is a B.E. in industrial production, and has been a lecturer in the mechanical engineering department for ten years at three different polytechnics. He is also a freelance writer and cartoonist. He can be contacted at [karnatakastory@gmail.com][5] or [vradkoli@rediffmail.com][6].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/open-source-big-data-solutions-support-digital-transformation/
作者:[Vinayak Ramachandra Adkoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/vinayak-adkoli/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-Data-.jpg?resize=696%2C517&ssl=1 (Big Data)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-Data-.jpg?fit=800%2C594&ssl=1
[3]: https://secure.gravatar.com/avatar/7b4383616c8708e3417051b3afd64bbc?s=100&r=g
[4]: https://opensourceforu.com/author/vinayak-adkoli/
[5]: mailto:karnatakastory@gmail.com
[6]: mailto:vradkoli@rediffmail.com

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Birds Eye View of Big Data for Enterprises)
[#]: via: (https://opensourceforu.com/2019/11/a-birds-eye-view-of-big-data-for-enterprises-2/)
[#]: author: (Swapneel Mehta https://opensourceforu.com/author/swapneel-mehta/)
A Birds Eye View of Big Data for Enterprises
======
[![][1]][2]
_Entrepreneurial decisions are made using data and business acumen. Big Data is today a tool that helps to maximise revenue and customer engagement. Open source tools like Hadoop, Apache Spark and Apache Storm are the popular choices when it comes to analysing Big Data. As the volume and variety of data in the world grows by the day, there is great scope for the discovery of trends as well as for innovation in data analysis and storage._
In the past five years, the spate of research focused on machine learning has resulted in a boom in the nature and quality of heterogeneous data sources that are being tapped by providers for their customers. Cheaper compute and widespread storage makes it so much easier to apply bulk data processing techniques, and derive insights from existing and unexplored sources of rich user data including logs and traces of activity whilst using software products. Business decision making and strategy has been primarily dictated by data and is usually supported by business acumen. But in recent times it has not been uncommon to see data providing conclusions seemingly in contrast with conventional business logic.
One could take the simple example of the baseball movie Moneyball, in which the protagonist defies all notions of popular wisdom by looking solely at performance statistics to evaluate player viability, eventually building a winning team of players, a team that would otherwise never have come together. The advantage of Big Data for enterprises, then, becomes a no-brainer for most corporate entities looking to maximise revenue and engagement. At the back end, this is accomplished by popular combinations of existing tools specially designed for large-scale, multi-purpose data analysis. Apache Hadoop and Apache Spark are some of the most widespread open source tools used in this space in the industry. Concomitantly, it is easy to imagine that there are a number of software providers offering B2B services to corporate clients looking to outsource specific portions of their analytics. Therefore, there is a bustling market with customisable, proprietary technological solutions in this space as well.
![Figure 1: A crowded landscape to follow \(Source: Forbes\)][3]
Traditionally, Big Data refers to the large volumes of unstructured and heterogeneous data that is often subject to processing in order to provide insights and improve decision-making regarding critical business processes. The McKinsey Global Institute estimates that data volumes have been growing at 40 per cent per year and will grow 44x between the years 2009 and 2020. But there is more to Big Data than just its immense volume. The rate of data production is an important factor, given that smaller data streams generated at faster rates produce larger pools than their counterparts. Social media is a great example of how small networks can expand rapidly to become rich sources of information — up to massive, billion-node scales.
Structure in data is a highly variable attribute given that data is now extracted from across the entire spectrum of user activity. Conventional formats of storage, including relational databases, have been virtually replaced by massively unstructured data pools designed to be leveraged in manners unique to their respective use cases. In fact, there has been a huge body of work on data storage in order to leverage various write formats, compression algorithms, access methods and data structures to arrive at the best combination for improving productivity of the workflow reliant on that data. A variety of these combinations has emerged to set the industry standards in their respective verticals, with the benefits ranging from efficient storage to faster access.
Finally, we have the latent value in these data pools that remains to be exploited by the use of emerging trends in artificial intelligence and machine learning. Personalised advertising recommendations are a huge factor driving revenue for social media giants like Facebook and companies like Google that offer a suite of products and an ecosystem to use them. The well-known Silicon Valley giant started out as a search provider, but now controls a host of apps and most of the entry points for the data generated in the course of people using a variety of electronic devices across the world. Established financial institutions are now exploring the possibility of a portion of user data being put on an immutable public ledger to introduce a blockchain-like structure that can open the doors to innovation. The pace is picking up as product offerings improve in quality and expand in variety. Lets get a birds eye view of this subject to understand where the market stands.
The idea behind building better frameworks is increasingly turning into a race to provide more add-on features and simplify workflows for the end user to engage with. This means the categories have many blurred lines because most products and tools present themselves as end-to-end platforms to manage Big Data analytics. However, well attempt to divide this broadly into a few categories and examine some providers in each of these.
**Big Data storage and processing**
Infrastructure is the key to building a reliable workflow when it comes to enterprise use cases. Earlier, relational databases were worthwhile to invest in for small and mid-sized firms. However, when the data starts pouring in, it is usually the scalability that is put to the test first. Building a flexible infrastructure comes at the cost of complexity. It is likely to have more moving parts that can cause failure in the short term. However, if done right (something that will not be easy, because it has to be tailored exactly to your company), it can result in life-changing improvements for both users and the engineers working with the said infrastructure to build and deliver state-of-the-art products.
There are many alternatives to SQL, with the NoSQL paradigm being adopted and modified for building different types of systems. Cassandra, MongoDB and CouchDB are some well-known alternatives. Most emerging options can be distinguished based on their disruption, which is aimed at the fundamental ACID properties of databases. To recall, a transaction in a database system must maintain atomicity, consistency, isolation, and durability commonly known as ACID properties in order to ensure accuracy, completeness, and data integrity (from Tutorialspoint). For instance, CockroachDB, an open source offshoot of Googles Spanner database system, has gained traction due to its support for being distributed. Redis and HBase offer a sort of hybrid storage solution while Neo4j remains a flag bearer for graph structured databases. However, traditional areas aside, there are always new challenges on the horizon for building enterprise software.
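As a small, hedged illustration of the atomicity part of ACID described above, the sketch below uses Python's built-in sqlite3 module: either both account updates commit together or, on error, both roll back. The table and values are invented for the example and are not meant to model a production database.

```
# Illustrating atomicity: both updates succeed together or neither is applied.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    # Transfer 30 from alice to bob as a single transaction.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()          # both changes become visible at once
except sqlite3.Error:
    conn.rollback()        # on failure, neither change is applied

print(conn.execute("SELECT name, balance FROM accounts").fetchall())
# [('alice', 70), ('bob', 80)]
```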
Backups are one such area where startups have found viable disruption points to enter the market. Cloud backups for enterprise software are expensive, non-trivial procedures and offloading this work to proprietary software offers a lucrative business opportunity. Rubrik and Cohesity are two companies that originally started out in this space and evolved to offer added services atop their primary offerings. Clumio is a recent entrant, purportedly creating a data fabric that the promoters expect will serve as a foundational layer to run analytics on top of. It is interesting to follow recent developments in this burgeoning space as we see competitors enter the market and attempt to carve a niche for themselves with their product offerings.
**Big Data analytics in the cloud**
Apache Hadoop remains the popular choice for many organisations. However, many successors have emerged to offer a set of additional analytical capabilities: Apache Spark, commonly hailed as an improvement to the Hadoop ecosystem; Apache Storm that offers real-time data processing capabilities; and Googles BigQuery, which is supposedly a full-fledged platform for Big Data analytics.
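For readers who have not worked with these platforms, here is a minimal PySpark sketch of the kind of aggregation they are built for. The file name and column names are placeholders, and it assumes a working pyspark installation.

```
# A minimal PySpark aggregation: count events per user in a large CSV.
# "events.csv" and its column names are placeholders for this sketch.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("event-counts").getOrCreate()

events = spark.read.csv("events.csv", header=True, inferSchema=True)

counts = (events
          .groupBy("user_id")      # assumed column name
          .count()
          .orderBy("count", ascending=False))

counts.show(10)                    # top 10 most active users
spark.stop()
```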
Typically, cloud providers such as Amazon Web Services and Google Cloud Platform tend to build in-house products leveraging these capabilities, or replicate them entirely and offer them as hosted services to businesses. This helps them provide enterprise offerings that are closely integrated within their respective cloud computing ecosystem. There has been some discussion about the moral consequences of replicating open source products to profit off closed source versions of the same, but there has been no consensus on the topic, nor any severe consequences suffered on account of this questionable approach to boost revenue.
Another hosted service offering a plethora of Big Data analytics tools is Cloudera, which has an established track record in the market. It has been making waves since its merger with Hortonworks earlier this year, giving it added fuel to compete with the giants in its bid to become the leading enterprise cloud provider in the market.
Overall, weve seen interesting developments in the Big Data storage and analysis domain and as the volume and variety of data grows, so do the opportunities to innovate in the field.
![Avatar][4]
[Swapneel Mehta][5]
The author has worked at Microsoft Research, CERN and startups in AI and cyber security. He is an open source enthusiast who enjoys spending time organising software development workshops for school and college students. You can contact him at <https://www.linkedin.com/in/swapneelm>; <https://github.com/SwapneelM> or <http://www.ccdev.in>.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/a-birds-eye-view-of-big-data-for-enterprises-2/
作者:[Swapneel Mehta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/swapneel-mehta/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?resize=696%2C449&ssl=1 (Figure 1 Big Data analytics and processing for the enterprise)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?fit=900%2C580&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-A-crowded-landscape-to-follow.jpg?resize=350%2C254&ssl=1
[4]: https://secure.gravatar.com/avatar/2ba7abaf240a1f6166d506dccdcda00f?s=100&r=g
[5]: https://opensourceforu.com/author/swapneel-mehta/

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (AI and 5G: Entering a new world of data)
[#]: via: (https://www.networkworld.com/article/3451718/ai-and-5g-entering-a-new-world-of-data.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
AI and 5G: Entering a new world of data
======
The deployment model of vendor-centric equipment cannot sustain this exponential growth in traffic.
[Stinging Eyes][1] [(CC BY-SA 2.0)][2]
Today the telecom industry has identified the need for faster end-user data rates. Previously, users were happy to call and text each other; now, mobile communication has transformed our lives so dramatically that it is hard to imagine going back to that alone.
Nowadays, we are leaning more towards imaging and VR/AR video-based communication, and these applications demand a new type of network. Immersive experiences with 360° video applications require a lot of data and a zero-lag network.
To give you a quick idea, VR with a resolution equivalent to 4K TV would require a bandwidth of 1 Gbps for smooth play, or 2.5 Gbps for interactive use, with a round-trip latency of no more than 10ms. Soon these applications will target the smartphone, putting additional strain on networks. As AR/VR services grow in popularity, the proposed 5G networks will yield the speed and performance needed.
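As a rough back-of-the-envelope check on those figures (taking the 1 Gbps and 2.5 Gbps rates quoted above as given), a sustained stream at these rates adds up quickly:

```
# Back-of-the-envelope data volume for a sustained VR stream,
# using the bit rates quoted above (1 Gbps smooth, 2.5 Gbps interactive).
GBIT = 1e9  # bits

def gigabytes_per_hour(bitrate_bps):
    """Data volume in gigabytes for one hour at a constant bit rate."""
    return bitrate_bps * 3600 / 8 / 1e9

for label, rate in [("smooth 4K VR", 1 * GBIT), ("interactive 4K VR", 2.5 * GBIT)]:
    print(f"{label}: ~{gigabytes_per_hour(rate):.0f} GB per hour")

# smooth 4K VR: ~450 GB per hour
# interactive 4K VR: ~1125 GB per hour
```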
Every [IoT device][3] _[Disclaimer: The author works for Network Insight]_, no matter how dumb it is, will create data and this data is the fuel for the engine of AI. AI enables us to do more interesting things with the data. The ultimate goal of the massive amount of data we will witness is the ability to turn this data into value. The rise in data from the enablement of 5G represents the biggest opportunity for AI.
There will be unprecedented levels of data that will have to move across the network for processing and in some cases be cached locally to ensure low latency. For this, we primarily need to move the processing closer to the user to utilize ultra-low latency and ultra-high throughput.
### Some challenges with 5G
The introduction of 5G is not without challenges. It is expensive, and it must be deployed in a far more distributed way than previous generations of networks. There is an extensive cost involved in building this type of network, and location is central to effective planning, deployment and optimization of 5G networks.
Also, the 5G millimeter wave comes with its own challenges. There are techniques that allow you to take the signal and send it towards a specific customer instead of sending it in every direction. The old way would be similar to a light bulb that reaches all parts of the room, as opposed to a flashlight that targets specific areas.
[The time of 5G is almost here][4]
So, choosing the right location plays a key role in the development and deployment of 5G networks. Therefore, you must analyze if you are building in the right place, and are marketing to the right targets. How many new subscribers do you expect to sign up for the services if you choose one area over the other? You need to take into account the population that travels around that area, the building structures and how easy it is to get the signal.
Moreover, we must understand the potential of flooding and analyze real-time weather to predict changes in traffic. So, if there is a thunderstorm, we need to understand how such events influence the needs of the networks and then make predictive calculations. AI can certainly assist in predicting these events.
### AI, a doorway to opportunity
5G is introducing new challenges, but integrating AI techniques into networks is one way the industry is addressing these complexities. AI is a key component that needs to be adapted to the network to help manage and control this change. Another important use case for AI is network planning and operations.
With 5G, we will have hundreds of thousands of small cells, each connected to a fiber line. It has been predicted that we could have 10 million cells globally. Figuring out how to plan and design all these cells would be beyond human capability. This is where AI can do site evaluations and tell you what throughput you would get with certain designs.
AI can help build out the 5G infrastructure and map out the location of cell towers to pinpoint the best location for the 5G rollout. It can continuously monitor how the network is being used. If one of the cell towers is not functioning as expected, AI can signal to another cell tower to take over.
### Vendor-centric equipment cannot sustain 5G
With the enablement of 5G networks, we have a huge amount of data; in some cases this could reach the petabyte range per day, most of it driven by video-based applications. A deployment model of vendor-centric equipment cannot sustain this exponential growth in traffic.
We will witness a lot of open source in this area, with the movement of the processing and compute, storage and network functionality to the edge. Eventually, this will create a real-time network at the edge.
### More processing at the edge
Edge computing places compute, server and network resources at the very edge of the network, close to the user. It provides intelligence at the edge, thereby reducing the amount of traffic going to the backbone.
With edge computing, for example, AI object identification can complete target recognition in under 0.35 seconds. Essentially, the image-recognition deep learning algorithm sits at the edge of the network, which helps to reduce the traffic sent to the backbone.
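A hedged sketch of that pattern is shown below: inference runs at the edge, and only detections above a confidence threshold are forwarded towards the backbone. The `classify()` function and the endpoint are placeholders, not any real device API.

```
# Sketch of edge-side filtering: run inference locally, forward only hits.
# classify() and BACKBONE_URL are placeholders standing in for a real
# on-device model and a real upstream service.
import json
import random

BACKBONE_URL = "https://example.invalid/alerts"  # placeholder endpoint
THRESHOLD = 0.9

def classify(frame_id):
    """Placeholder for an on-device image-recognition model."""
    return {"frame": frame_id, "label": "vehicle", "confidence": random.random()}

def forward(result):
    """Send one detection upstream; this sketch just prints the payload."""
    print(f"would POST {json.dumps(result)} to {BACKBONE_URL}")

for frame_id in range(100):
    result = classify(frame_id)
    if result["confidence"] >= THRESHOLD:  # only high-confidence detections
        forward(result)                    # leave the edge; the rest stay local
```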
However, this also opens up a new attack surface, and luckily AI plays well with cybersecurity. A closed-loop system will collect data at the network edge, identify threats and take real-time action.
### Edge and open source
We have a few popular open source options available at our disposal. Some examples of open source edge computing projects are the Akraino Edge Stack, ONAP (Open Network Automation Platform) and the Airship Open Infrastructure Project.
The Akraino Edge Stack creates an open-source software stack that supports high-availability cloud services. These services are optimized for edge computing systems and applications.
The Akraino R1 release includes 10 “ready and proven” blueprints and delivers a fully functional edge stack for edge use cases. These include Industrial IoT, Telco 5G Core & vRAN, uCPE, SD-WAN, edge media processing and carrier edge media processing.
ONAP (Open Network Automation Platform) provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. It is an open source networking project hosted by the Linux Foundation.
Finally, the Airship Open Infrastructure Project is a collection of open-source tools for automating cloud provisioning and management. These tools include OpenStack for virtual machines, Kubernetes for container orchestration and MaaS for bare metal, with planned support for OpenStack Ironic.
**This article is published as part of the IDG Contributor Network. [Want to Join?][5]**
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451718/ai-and-5g-entering-a-new-world-of-data.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/martinlatter/4233363677
[2]: https://creativecommons.org/licenses/by-sa/2.0/legalcode
[3]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.networkworld.com/contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,155 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Conquering documentation challenges on a massive project)
[#]: via: (https://opensource.com/article/19/11/documentation-challenges-tom-caswell-matplotlib)
[#]: author: (Gina Helfrich, Ph.D. https://opensource.com/users/ginahelfrich)
Conquering documentation challenges on a massive project
======
Learn more about documentation at scale in this interview with Tom
Caswell, Matplotlib lead developer.
![Files in a folder][1]
Given the recent surge in popularity of open source data science projects like pandas, NumPy, and [Matplotlib][2], its probably no surprise that the increased level of interest is generating user complaints about documentation. To help shed light on whats at stake, we talked to someone who knows a lot about the subject: [Thomas Caswell][3], the lead developer of Matplotlib.
Matplotlib has been a flexible and customizable tool for producing static and interactive data visualizations since 2001 and is a foundational project in the scientific Python stack. Matplotlib became a [NumFOCUS-sponsored project][4] in 2015.
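For readers who have never touched the library, a minimal working example looks like the sketch below; the data is made up, and only the standard pyplot calls matter.

```
# A minimal Matplotlib example: one figure, one axes, one line.
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [0, 1, 4, 9, 16]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o", label="y = x squared")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
plt.show()
```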
Tom has been working on Matplotlib for the past five years and got his start answering questions about the project on Stack Overflow. Answering questions became submitting bug reports, which became writing patches, which became maintaining the project, which ultimately led to him becoming the lead developer.
**Fun fact:** Toms advancement through the open source community follows exactly the [path described by Brett Cannon][5], a core Python maintainer.
NumFOCUS Communications Director, Gina Helfrich, sat down with Tom to discuss the challenges of managing documentation on a project as massive and as fundamental as Matplotlib.
**Gina Helfrich:** Thanks so much for taking the time to talk with us about Matplotlib and open source documentation, Tom. To contextualize our conversation a bit, can you speak a little to your impression of the [back-and-forth][6] on Twitter with Wes McKinney about pandas and user complaints about the documentation?
**Thomas Caswell:** I only kind of saw the edges, but I see both sides. On one hand, I think something Mike Pope said was, "if its not documented, it doesnt exist." If you are writing open source tools,
part of that work is documenting them, and doing so clearly in a way that users can discover and actually use, short of going to the source [code]. Its not good enough to dump code on the internet—you have to do the whole thing.
On the other hand, if youre not paying [for the software], you dont get to make demands. The attitude I think Wes was reacting to, which you see a lot, is: "You built this tool that is useful to me, therefore I expect enterprise-grade paid support because its obviously critical to what Im doing."
But I think the part Eric O. Lebigot was responding to is the first part. Part of building a tool is the documentation, not just the code. But Wes is responding to the entitlement, the expectation of free work, so I see both sides.
**GH:** Looking at Matplotlib specifically, which is facing many of the same issues as pandas, I know you have some big challenges with your documentation. I get the impression that theres this notion out there from new users that getting started with Matplotlib is super frustrating and the docs dont really help. Can you tell me about the history there and how the project came to have this problem?
**TC:** So, Matplotlib is a humongous library. Ive been working on it for five years, and around once a month (or every other month), theres a bug report where my first reaction is, "Wait… we do _what_?"
A lot of the library is under-documented. This library survived at least two generations of partial conversion to standardized docstring formats. As I understand it (I wasnt around at the time), we were one of the first projects outside of core Python to adopt Sphinx to build our docs—possibly a little too early. We have a lot of weird customizations since Sphinx didnt have those features yet [at the time]. Other people have built better versions of those features since then, but because Matplotlib is so huge, migrating them is hard.
I think if you build the PDF version of our docs, its around 3,000 pages, and I would say that the library has maybe half the documentation it really needs.
We are woefully under-documented in the sense that not every feature has good docs. On the other hand, we are over-documented in that what we have is not well organized and theres no clear entry point. If I want to find out how to do something, even I have a hard time finding where something is documented. And if _I_ [the lead developer] have issues finding that information, theres no prayer of new users finding it. So in that sense, we are both drastically under-documented and drastically over-documented.
**[Read next: [Sysadmins: Poor documentation is not a job insurance strategy][7]]**
**GH:** Given that Matplotlib is over 15 years old, do you have a sense of who has been writing the documentation? How does your documentation actually get developed?
**TC:** Historically, much like the code, the documentation was organically developed. Weve had a lot of investment in examples and docstrings, and a few entries labeled as tutorials that teach you one specific skill. For example, weve got prose on the "rough theory of colormaps," and how to make a colormap.
A lot of Matplotlibs documentation is examples, and the examples overlap. Over the past few years, when I see interesting examples go by on the mailing list or on Stack Overflow, Ill say, "Can you put this example in the docs?" So, I guess Ive been actively contributing to the problem that theres too much stuff to wade through.
Part of the issue is that people will do a six-hour tutorial and then some of those examples end up in the docs. Then, someone _else_ will do a six-hour tutorial (you cant cover the whole library in six hours) and the basics are probably similar, but they may format the tutorial differently.
**GH:** Wow, that sounds pretty challenging to inherit and try to maintain. What kinds of improvements have you been working on for the documentation?
**TC:** Theres been an effort over the past couple of years to move to numpydoc format, away from the home-grown scheme we had previously. Also, [Nelle Varoquaux][8] recently did a tremendous amount of work and led the effort to move from how we were doing examples to using Sphinx-Gallery, which makes it much easier to put good prose into examples. This practice was picked up by [Chris Holdgraf][9] recently, as well. Sphinx-Gallery went live on our main docs with Matplotlib 2.1, which was a huge improvement for users. Nelle also organized a distributed [docathon][10].
Weve been trying to get better about new features. When theres a new feature, you must add an example to the docs for that feature, which helps make things discoverable. Weve been trying to get better about making sure docstrings exist, are accurate, and that they document all of the parameters.
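To make the numpydoc convention Tom mentions concrete, here is a minimal sketch of such a docstring. The function is invented purely for illustration and is not part of the Matplotlib API.

```
def scale(values, factor=1.0):
    """Multiply every element of *values* by *factor*.

    This function is only an illustration of the numpydoc docstring
    layout; it is not part of the Matplotlib API.

    Parameters
    ----------
    values : list of float
        The numbers to scale.
    factor : float, optional
        The multiplier applied to each element. Default is 1.0.

    Returns
    -------
    list of float
        A new list containing the scaled values.

    Examples
    --------
    >>> scale([1.0, 2.0], factor=3.0)
    [3.0, 6.0]
    """
    return [v * factor for v in values]
```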
**GH:** If you could wave a magic wand and have the Matplotlib docs that you want, what would they look like?
**TC:** Well, as I mentioned, the docs grew organically, and that means we have no consistent voice across them. It also means theres no single point of truth for various things. When you write an example, how far back down the basics do you go? So, its not clear what you need to know before you can understand the example. Either you explain just enough, all the way back (so weve got a random assortment of the basics smeared everywhere), or you have examples that, unless youre already a heavy user, make no sense.
So, to answer the question, having someone who can actually _write_ and has empathy for users go through and write a 200-page intro to Matplotlib book, and have that be the main entry to the docs. Thats my current vision of what I want.
**GH:** If you were introducing a new user to Matplotlib today, what would you have her read? Where would you point her in the docs?
**TC:** Well, there isnt a good, clear option for, "Youve been told you need to use Matplotlib. Go spend an afternoon and read this." Im not sure where Id point people to for that right now. [Nicolas Rougier][11] has written some [good][12] [stuff][13] on that front, such as a tutorial for beginners, and some of that has migrated into the docs.
Theres a lot out there, but its not collated centrally, or linked from our docs as "START HERE." I should also add that I might not have the best view of this issue anymore because I havent actively gone looking for this information, so maybe I just never found it because I dont need it. I dont know that it exists. (This topic actually [came up recently][14] on the mailing list.)
The place we do point people to is: Go look at the gallery and click on the thumbnail that looks closest to what you want to do.
Ben Root presented an [Anatomy of Matplotlib tutorial][15] at SciPy several times. Theres a number of Matplotlib books that exist. Its mixed whether the authors were contributors [to the project]. Ben Root recently wrote one about [interactive figures][16]. Ive been approached and have turned this task down a couple of times, just because I dont have time to write a book. So my thought for getting a technical writer was to get a technical writer to write the book, and instead of publishing the result as a book, put it in the online docs.
**GH:** Is there anyone in the Matplotlib contributor community who specializes in the documentation part of things, or takes a lot of ownership around documentation?
**TC:** Nelle was doing this for Matplotlib for a bit but has stepped back. Chris Holdgraf is taking the lead on some doc-related things now. Nicolas Rougier has written a number of [extremely good tutorials][17] outside of the project's documentation.
I mean, no one uses _just_ Matplotlib. You don't use us but not use SciPy, NumPy, or pandas. You have to be using something else to do the actual work that you now need to visualize. There are many "clean" introductions to Matplotlib in other places. For example, both Jake VanderPlas's [analysis book][18] and Katy Huff and Anthony Scopatz's [book][19] have introductions to Matplotlib that cover this topic to the degree they felt was needed for their purposes.
**GH:** Id love to hear your thoughts on the role of Stack Overflow in all of this.
**TC:** That actually is how I got into the project. My Stack Overflow number is large, and its almost all Matplotlib questions. And how I got started is that I answered questions. A lot of the questions on Stack Overflow are, "Please read the docs for me." Which, fine. But actually, a great way to learn the library is to answer questions on Stack Overflow, because people who have problems that you dont personally have will ask, "How do I do this?" and now you have to go figure out how to do it. Its kind of fun.
But sometimes people ask questions and they've actually found a bug. And in determining that they'd actually found a bug, I'd try to figure out how to fix it. So, I started filing reports, which led to, “Here's a pull request to fix the bug I found.” And then when I started entering a lot of PRs, they were like, “You need to start reviewing them now,” so they gave me commit rights and made me review things. And then they put me in charge.
I do like Stack Overflow. I think that to a large extent, what it replaced is the mailing list. If I have any criticism of Stack Overflow, it's that we need to convince people who are answering questions to upstream more of the results.
There are some good examples on Stack Overflow. Here's a complex one: you have to touch these seven different functions, each of which is relatively well documented, but you have to put them together in just the right way. Some of those answers should probably go in the gallery with our annotations about how they work. Basically, if you go through Joe Kington's top 50 answers, they should probably all go in the docs.
In other cases, the question is asked because the docstring is not clear. We need to convince people who are answering those questions to use those moments as a survey of where our documentation is not clear, instead of just answering [on Stack Overflow], and then move those answers back [to the docs].
**GH:** Whats it like managing PRs for documentation as opposed to patches and bug fixes?
**TC:** We've tried to streamline how we do documentation PRs. Writing documentation PRs is the most painful thing ever in open source because you get copyediting via pull request. You get picky proofreading and copyediting via GitHub comments. Like, “there's a missing comma,” or “two spaces!” And again, I keep using myself as a weird outlier benchmark, but _I_ get disheartened when I write doc pull requests and then get 50 comments about picky little things.
What Ive started trying to push as the threshold on docs is, "Did [the change] make it worse?" If it didnt make it worse, merge the change. Frequently, it takes more time to leave a GitHub comment than to fix the problem.
> "If you can use Matplotlib, you are qualified to contribute to it."
>      — Tom Caswell, Matplotlib lead developer
**GH:** Whats one action youd like members of the community who are reading this interview to take? What is one way they could make a difference on this issue?
**TC:** One thing Id like to see more of—and I acknowledge that how to contribute to open source is a big hurdle to get over—Ive said previously that if you can use Matplotlib, you are qualified to contribute to it. Thats a message I would like to get out more broadly.
If youre a user and you read the docstring to something and it doesnt make sense, and then you play around a bit and you understand that function well enough to use it—you could then start clarifying docstrings.
Because one of the things I have the hardest time with is that I personally am bad at putting myself in other peoples shoes when writing docs. I dont know from a users point of view—and this sounds obnoxious but Im deep enough in the code—what they know coming into the library as a new person. I dont know the right things to tell them in the docstring that will actually help them. I can try to guess and Ill probably write too much, or the wrong things. Or worse, Ill write a bunch of stuff that refers to things they dont know about, and now Ive just made the function more confusing.
Whereas a user who has just encountered this function for the first time, and sorted out how to make it do what they need it to do for their purposes, is in the right mindset to write what they wish the docs had said that would have saved them an hour.
**GH:** Thats a great message, I think. Thanks for talking with me, Tom!
**TC:** Youre welcome. Thank you.
_This article was originally published on the [NumFOCUS blog][20] in 2017 and is just as relevant today. Its republished with permission by the original interviewer and has been lightly edited for style, length, and clarity. If you want to support NumFOCUS in person, attend one of the local [PyData events][21] happening around the world. Learn more about NumFOCUS on our website: [numfocus.org][22]_
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/documentation-challenges-tom-caswell-matplotlib
作者:[Gina Helfrich, Ph.D.][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ginahelfrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://matplotlib.org
[3]: https://twitter.com/tacaswell
[4]: https://numfocus.org/sponsored-projects
[5]: https://snarky.ca/why-i-took-october-off-from-oss-volunteering/
[6]: https://twitter.com/wesmckinn/status/909772652532953088
[7]: https://www.redhat.com/sysadmin/poor-documentation
[8]: https://twitter.com/nvaroqua
[9]: https://twitter.com/choldgraf
[10]: https://www.numfocus.org/blog/numfocus-projects-participate-in-docathon-2017/
[11]: https://twitter.com/NPRougier
[12]: https://github.com/rougier/matplotlib-tutorial
[13]: http://www.labri.fr/perso/nrougier/teaching/matplotlib/matplotlib.html
[14]: https://mail.python.org/pipermail/matplotlib-users/2017-September/001031.html
[15]: https://github.com/matplotlib/AnatomyOfMatplotlib
[16]: https://www.amazon.com/Interactive-Applications-using-Matplotlib-Benjamin/dp/1783988843
[17]: http://www.labri.fr/perso/nrougier/teaching/
[18]: http://shop.oreilly.com/product/0636920034919.do
[19]: http://shop.oreilly.com/product/0636920033424.do
[20]: https://numfocus.org/blog/matplotlib-lead-developer-explains-why-he-cant-fix-the-docs-but-you-can
[21]: https://pydata.org/
[22]: https://numfocus.org

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Forrester: Edge computing is about to bloom)
[#]: via: (https://www.networkworld.com/article/3451532/forrester-edge-computing-is-about-to-bloom.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Forrester: Edge computing is about to bloom
======
2020 is set to be a “breakout year” for edge computing technology, according to the latest research from Forrester Research.
The next calendar year will be the one that propels [edge computing][1] into the enterprise technology limelight for good, according to a set of predictions from Forrester Research.
While edge computing is primarily an [IoT][2]-related phenomenon, Forrester said that addressing the need for on-demand compute and real-time app engagements will also play a role in driving the growth of edge computing in 2020.
What it all boils down to, in some ways, is that form factors will shift sharply away from traditional rack, blade or tower servers in the coming year, depending on where the edge technology is deployed. An autonomous car, for example, wont be able to run a traditionally constructed server.
Itll also mean that telecom companies will begin to feature a lot more heavily in the cloud and distributed-computing markets. Forrester said that CDNs and [colocation vendors][5] could become juicy acquisition targets for big telecom, which missed the boat on cloud computing to a certain extent, and is eager to be a bigger part of the edge. Theyre also investing in open-source projects like Akraino, an edge software stack designed to support carrier availability.
But the biggest carrier impact on edge computing in 2020 will undoubtedly be the growing availability of [5G][6] network coverage, Forrester says. While that availability will still mostly be confined to major cities, that should be enough to prompt reconsideration of edge strategies by businesses that want to take advantage of capabilities like smart, real-time video processing, 3D mapping for worker productivity and use cases involving autonomous robots or drones.
Beyond the carriers, there's a huge range of players in the edge computing market, all of which have their eyes firmly on the future. Operational-device makers in every field from medicine to utilities to heavy industry will need custom edge devices for connectivity and control, huge cloud vendors will look to consolidate their hold over that end of the market, and AI/ML startups will look to enable brand-new levels of insight and functionality.
Whats more, the average edge-computing implementation will often use many of them at the same time, according to Forrester, which noted that integrators who can pull products and services from many different vendors into a single system will be highly sought-after in the coming year. Multivendor solutions are likely to be much more popular than single-vendor, in large part because few individual companies have products that address all parts of the edge and IoT stacks.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451532/forrester-edge-computing-is-about-to-bloom.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html
[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open by nature: What building a platform for activists taught me about playful development)
[#]: via: (https://opensource.com/open-organization/19/11/open-by-nature)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
Open by nature: What building a platform for activists taught me about playful development
======
Building a global platform for environmental activists revealed a spirit
of openness that's central to human nature—and taught me how to design
for it.
![The Open Organization at Greenpeace][1]
"Open" isn't just a way we can build software. It's an attitude we can adopt toward anything we do.
And when we adopt it, we can move mountains.
Participating in a design sprint with colleagues at Greenpeace reminded me of that. As I explained in the first [two][2] [parts][3] of this [series][4], learning to think, plan, and work the open way is helping us build something truly great—a new, global platform for engaging activists who want to take action on behalf of our planet.
The sprint experience (part of a collaboration with Red Hat) reinforced several lessons about openness I've learned throughout my career as an advocate for open source, an architect of change, and a community organizer.
It also taught me a few new ones.
### An open nature
The design sprint experience reminded me just how central "openness" is to human nature. We all cook, sew, construct, write, play music, tinker, paint, tell stories—engage in the world through the creation of thousands of artifacts that allow others to understand our outlooks and worldviews. We express ourselves through our creations. We always have.
And throughout all of our expressive making, we reflect on and _share_ what we've created. We ask for feedback: _"Do you like my new recipe?" "What do you think of my painting?"_
We learn. Through trial and error (and ever-important failure), we learn what to do and what _not_ to do. Learning to make something work involves discovery and wonder in a spiral of [intrinsic motivation][5]; each new understanding unlocks new questions. We improve our skills as we create, and when we share.
I noticed something critically important while our teams were collaborating: learning to work openly can liberate a certain playfulness that often gets ignored (or buried) in many organizations today—and that playfulness can help us solve complex problems. When we're having fun learning, creating, and sharing, we're often in a flow, truly interested in our work, creating environments that others want to join. Openness can be a fount of innovation.
While our mission is a serious one, the more joy we find in it, the more people we'll attract to it. Discovery is a delightful process, and agency is empowering. The design sprint allowed us to finish with something that spurred reflection of our project—and do so with both humor and passion. The sprint left a lot of room for play, connection between participants, collaboration to solve problems, and decision-making.
### Positively open
Watching Red Hatters and Greenpeacers interact—many just having met one another for the first time—also crystallized for me some important impressions of open leadership.
Open leadership took many forms throughout the sprint. The Red Hat team showed open leadership when they adapted the agenda on the first day. Greenpeace was further ahead than other groups they'd planned for, so their plan wouldn't work. Greenpeacers were transparent about certain internal politics (because it's no use planning something that's impossible to build).
People left their baggage at the door. We showed up, all of us, and were present together.
Open leaders are beacons of positivity. They assume best intentions in others. They truly listen. They live open principles. They build people up. They remember to move as a collective, to ask for the insight of the collective, to thank the collective.
And in the spirit of positive, open leadership, I want to offer my own thanks.
Thanks to the Planet 4 team, a small group of people who kept pushing forward, despite the difficulties of a global project like this—a group that fought, made mistakes, and kept going despite them. They continue to pull together, and behind the scenes they're trying to be more open as they inspire the entire organization on an open journey with them (and build a piece of software at the same time!).
Thanks to the others at Greenpeace who have supported this work and those who have participated in it. Thanks to the leaders in other departments, who saw the potential of this work and helped us socialize it.
Thanks, too, to [the open organization community at Opensource.com][6] and [long-time colleagues][7] who modeled the behaviours and lent their open spirit to helping the Planet 4 team get started.
### Open returns
If openness is a way of being, then central to that way of being is [a spirit of reciprocity and exchange][8].
We belong to our communities and thus we contribute to them. We strive to be transparent so that our communities can grow and welcome new collaborators. When we infuse positivity into the world and into our projects, we create an atmosphere that invites innovation.
Both Red Hat and Greenpeace understand the importance of ecosystems—and that shared understanding powered our collaboration on Planet 4.
As an open source software company, Red Hat both benefits from and contributes to open source software communities across the world—communities forming a technological ecosystem of passionate contributors that must always be in delicate balance. Greenpeace is also focused on the importance of maintaining ecosystems—the natural ecosystems of which we are all, irrevocably, a part. Our success in open source means working to nurture those ecosystems of passionate contributors. Our success as a species demands the same kind of care for our natural ecosystems, too, and Planet 4 is a platform that helps everyone do exactly that. For both organizations, innovation is _social_ innovation; what we create _with_ others ultimately _benefits_ others, enhancing their lives.
_Listen to Alexandra Machado of Red Hat explain social innovation._
So, really, the end of this story is just the beginning of so many others that will spawn from Planet 4.
Yours can begin immediately. [Join the Planet 4 project][9] and advocate for a greener, more peaceful future—the open way.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/11/open-by-nature
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-3-blog-thumbnail-500x283.png?itok=aK5TOqSS
[2]: https://opensource.com/open-organization/19/10/open-platform-greenpeace
[3]: https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace
[4]: https://opensource.com/tags/open-organization-greenpeace
[5]: http://en.wikipedia.org/wiki/Motivation#Intrinsic_and_extrinsic_motivation
[6]: https://opensource.com/open-organization/resources/meet-ambassadors
[7]: https://medium.com/planet4/how-to-prepare-for-planet-4-user-interviews-a3a8cd627fe
[8]: https://opensource.com/open-organization/19/9/peanuts-community-reciprocity
[9]: https://planet4.greenpeace.org/create/contribute/

View File

@ -0,0 +1,152 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications)
[#]: via: (https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/)
[#]: author: (Dr Kumar Gaurav https://opensourceforu.com/author/dr-gaurav-kumar/)
A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications
======
[![][1]][2]
_Cloud platforms enable high performance computing without the need to purchase the required infrastructure. Cloud services are available on a pay per use basis which is very economical. This article takes a look at cloud platforms like Neptune, BigML, Deep Cognition and Google Colaboratory, all of which can be used for high performance applications._
Software applications, smart devices and gadgets face many performance challenges, including load balancing, turnaround time, latency, congestion, and the demands of Big Data and parallel computation. These workloads traditionally consume enormous computational resources, and low-configuration computers are not able to handle them. The laptops and desktop computers available in the market are designed for personal use, so they run into numerous performance issues when tasked with high performance jobs.
For example, a desktop computer or laptop with a 3GHz processor is able to perform approximately 3 billion computations per second. However, high performance computing (HPC) is focused on solving complex problems and working on trillions or even quadrillions of computations with high speed and maximum accuracy.
![Figure 1: The Neptune portal][3]
![Figure 2: Creating a new project on the Neptune platform][4]
**Application domains and use cases**
High performance computing applications are used in domains where speed and accuracy levels are quite high as compared to those in traditional scenarios, and the cost factor is also very high.
The following are the use cases where high performance implementations are required:
* Nuclear power plants
* Space research organisations
* Oil and gas exploration
* Artificial intelligence and knowledge discovery
* Machine learning and deep learning
* Financial services and digital forensics
* Geographical and satellite data analytics
* Bio-informatics and molecular sciences
**Working with cloud platforms for high performance applications**
There are a number of cloud platforms on which high performance computing applications can be launched without users having actual access to the supercomputer. The billing for these cloud services is done on a usage basis and costs less compared to purchasing the actual infrastructure required to work with high performance computing applications.
The following are a few of the prominent cloud based platforms that can be used for advanced implementations including data science, data exploration, machine learning, deep learning, artificial intelligence, etc.
**Neptune**
URL: _<https://neptune.ml/>_
Neptune is a lightweight cloud based service for high performance applications including data science, machine learning, predictive knowledge discovery, deep learning, modelling training curves and many others. Neptune can be integrated with Jupyter notebooks so that Python programs can be easily executed for multiple applications.
The Neptune dashboard is available at <https://ui.neptune.ml/>, where multiple experiments can be performed. Neptune works as a machine learning lab in which assorted algorithms can be programmed and their outcomes visualised. The platform is available as Software as a Service (SaaS), so deployment can be done on the cloud. Deployments can also be made on the user's own hardware and mapped to the Neptune cloud.
In addition to having a pre-built cloud based platform, Neptune can be integrated with Python and R programming so that high performance applications can be programmed. Python and R are prominent programming environments for data science, machine learning, deep learning, Big Data and many other applications.
For Python programming, Neptune provides the neptune-client package, which handles communication with the Neptune server so that advanced data analytics can be run on its cloud.
For integrating Neptune with R, there is the reticulate library, which bridges R to Python and thereby to neptune-client.
The detailed documentation for the integration of R and Python with Neptune is available at _<https://docs.neptune.ml/python-api.html> and <https://docs.neptune.ml/r-support.html>_.
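As a rough illustration, logging a run from Python with the classic neptune-client API of that era looked something like the sketch below; the project name, experiment name and metric values are hypothetical placeholders.

```
# Minimal sketch based on the classic neptune-client API; names and values are placeholders.
import neptune

# Connect to a project on the Neptune server; the API token is normally
# supplied through the NEPTUNE_API_TOKEN environment variable.
neptune.init(project_qualified_name='my-workspace/sandbox')

# Create an experiment and log a few training metrics to the dashboard.
experiment = neptune.create_experiment(name='quick-test', params={'lr': 0.01})
for loss in [0.9, 0.6, 0.4]:
    neptune.log_metric('loss', loss)
experiment.stop()
```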
![Figure 3: Integration of Neptune with Jupyter Notebook][5]
![Figure 4: Dashboard of BigML][6]
In addition, integration with MLflow and TensorBoard is also available. MLflow is an open source platform for managing the machine learning life cycle with reproducibility, advanced experiments and deployments. It has three key components — tracking, projects and models. These can be programmed and controlled using the Neptune MLflow integration.
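For context, plain MLflow tracking calls look like the following generic sketch, independent of the Neptune integration; the parameter and metric names are made up.

```
# Generic MLflow tracking sketch; parameter and metric names are made up.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)  # record a hyperparameter
    mlflow.log_metric("accuracy", 0.87)      # record a result
```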
TensorFlow can be associated with Neptune using Neptune-TensorBoard. TensorFlow is one of the most powerful frameworks for deep learning and advanced knowledge discovery.
With this assortment of features and integrations, the Neptune cloud can be used for high performance, research-oriented implementations.
**BigML**
URL: _<https://bigml.com/>_
BigML is a cloud based platform for the implementation of advanced algorithms with assorted data sets. This cloud based platform has a panel for implementing multiple machine learning algorithms with ease.
The BigML dashboard has access to different data sets and algorithms under supervised and unsupervised taxonomy, as shown in Figure 4. The researcher can use the algorithm from the menu according to the requirements of the research domain.
![Figure 5: Algorithms and techniques integrated with BigML][7]
A number of tools, libraries and repositories are integrated with BigML so that the programming, collaboration and reporting can be done with a higher degree of performance and minimum error levels.
Algorithms and techniques can be attached to specific data sets for evaluation and deep analytics, as shown in Figure 5. Using this methodology, the researcher can work with the code as well as the data set on easier platforms.
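As an illustration of the programmatic route, a minimal workflow with the BigML Python bindings might look like the sketch below; the credentials and the data file are placeholders.

```
# Minimal sketch with the BigML Python bindings; credentials and data file are placeholders.
from bigml.api import BigML

# Credentials can also be supplied via the BIGML_USERNAME / BIGML_API_KEY environment variables.
api = BigML('my-username', 'my-api-key')

# Upload data, derive a dataset, train a model, and request a prediction.
source = api.create_source('iris.csv')
dataset = api.create_dataset(source)
model = api.create_model(dataset)
prediction = api.create_prediction(model, {'petal length': 4.2, 'sepal width': 3.1})
api.pprint(prediction)
```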
The following are the tools and libraries associated with BigML for multiple applications of high performance computing:
* Node-Red for flow diagrams
* GitHub repos
* BigMLer as the command line tool
* Alexa Voice Service
* Zapier for machine learning workflows
* Google Sheets
* Amazon EC2 Image PredictServer
* BigMLX app for MacOS
![Figure 6: Enabling Google Colaboratory from Google Drive][8]
![Figure 7: Activation of the hardware accelerator with Google Colaboratory notebook][9]
**Google Colaboratory**
URL: _<https://colab.research.google.com>_
Google Colaboratory is one of the cloud platforms for the implementation of high performance computing tasks including artificial intelligence, machine learning, deep learning and many others. It is a cloud based service which integrates Jupyter Notebook so that Python code can be executed as per the application domain.
Google Colaboratory is available as a Google app in Google Cloud Services. It can be invoked from Google Drive as depicted in Figure 6 or directly at _<https://colab.research.google.com>_.
The Jupyter notebook in Google Colaboratory is associated with the CPU, by default. If a hardware accelerator is required, like the tensor processing unit (TPU) or the graphics processing unit (GPU), it can be activated from _Notebook Settings_, as shown in Figure 7.
Figure 8 presents a view of Python code that is imported in the Jupyter Notebook. The data set can be placed in Google Drive. The data set under analysis is mapped with the code so that the script can directly perform the operations as programmed in the code. The outputs and logs are presented on the Jupyter Notebook in the platform of Google Colaboratory.
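For example, mounting Google Drive and checking which accelerator is attached can be done directly from a notebook cell, roughly as follows:

```
# Mount Google Drive so that data sets stored there are visible to the notebook.
from google.colab import drive
drive.mount('/content/drive')

# Confirm which accelerator the notebook is using (CPU by default;
# a GPU or TPU can be enabled from Notebook Settings).
import tensorflow as tf
print(tf.test.gpu_device_name() or "No GPU attached")
```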
![Figure 8: Implementation of the Python code on the Google Colaboratory Jupyter Notebook][10]
**Deep Cognition**
URL: _<https://deepcognition.ai/>_
Deep Cognition provides the platform to implement advanced neural networks and deep learning models. AutoML with Deep Cognition provides an autonomous integrated development environment (IDE) so that the coding, testing and debugging of advanced models can be done.
It has a visual editor so that the multiple layers of different types can be programmed. The layers that can be imported are core layers, hidden layers, convolutional layers, recurrent layers, pooling layers and many others.
The platform provides the features to work with advanced frameworks and libraries of MXNet and TensorFlow for scientific computations and deep neural networks.
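The visual editor ultimately assembles the same kinds of layers you would otherwise stack in code; as an analogy only (not code generated by Deep Cognition), an equivalent TensorFlow/Keras model might look like this:

```
# A TensorFlow/Keras analogy for the layer types a visual editor assembles;
# this is an illustration, not output produced by Deep Cognition.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # convolutional layer
    layers.MaxPooling2D((2, 2)),                                            # pooling layer
    layers.Flatten(),                                                       # core layer
    layers.Dense(64, activation='relu'),                                    # hidden layer
    layers.Dense(10, activation='softmax'),                                 # output layer
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
```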
![Figure 9: Importing layers in neural network models on Deep Cognition][11]
**Scope for research and development**
Research scholars, academicians and practitioners can work on advanced algorithms and their implementations using cloud based platforms dedicated to high performance computing. With this type of implementation, there is no need to purchase the specific infrastructure or devices; rather, the supercomputing environment can be hired on the cloud.
![Avatar][12]
[Dr Kumar Gaurav][13]
The author is the managing director of Magma Research and Consultancy Pvt Ltd, Ambala Cantonment, Haryana. He has 16 years experience in teaching, in industry and in research. He is a projects contributor for the Web-based source code repository SourceForge.net. He is associated with various central, state and deemed universities in India as a research guide and consultant. He is also an author and consultant reviewer/member of advisory panels for various journals, magazines and periodicals. The author can be reached at [kumargaurav.in@gmail.com][14].
[![][15]][16]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/
作者:[Dr Kumar Gaurav][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?resize=696%2C384&ssl=1 (Big ML Colab and Deep cognition)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?fit=900%2C497&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-The-Neptune-portal.jpg?resize=350%2C122&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Creating-a-new-project-on-the-Neptune-platform.jpg?resize=350%2C161&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Integration-of-Neptune-with-Jupyter-Notebook.jpg?resize=350%2C200&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-Dashboard-of-BigML.jpg?resize=350%2C193&ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Algorithms-and-techniques-integrated-with-BigML.jpg?resize=350%2C200&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Enabling-Google-Colaboratory-from-Google-Drive.jpg?resize=350%2C253&ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Activation-of-the-hardware-accelerator-with-Google-Colaboratory-notebook.jpg?resize=350%2C264&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-8-Implementation-of-the-Python-code-on-the-Google-Colaboratory-Jupyter-Notebook.jpg?resize=350%2C253&ssl=1
[11]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-9-Importing-layers-in-neural-network-models-on-Deep-Cognition.jpg?resize=350%2C254&ssl=1
[12]: https://secure.gravatar.com/avatar/4a506881730a18516f8f839f49527105?s=100&r=g
[13]: https://opensourceforu.com/author/dr-gaurav-kumar/
[14]: mailto:kumargaurav.in@gmail.com
[15]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[16]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Pimcore: An open source alternative for product information management)
[#]: via: (https://opensource.com/article/19/11/pimcore-alternative-product-information-management)
[#]: author: (Dietmar Rietsch https://opensource.com/users/erinmcmahon)
Getting started with Pimcore: An open source alternative for product information management
======
PIM software enables sellers to centralize sales, marketing, and
technical product information to engage better with customers.
![Pair programming][1]
Product information management (PIM) software enables sellers to consolidate product data into a centralized repository that acts as a single source of truth, minimizing errors and redundancies in product data. This, in turn, makes it easier to share high-quality, clear, and accurate product information across customer touchpoints, paving the way for rich, consistent, readily accessible content that's optimized for all the channels customers use, including websites, social platforms, marketplaces, apps, IoT devices, conversational interfaces, and even print catalogs and physical stores. Being able to engage with customers on their favorite platform is essential for increasing sales and expanding into new markets. For years, there have been proprietary products that address some of these needs, like Salsify for data management, Adobe Experience Manager, and SAP Commerce Cloud for experience management, but now there's an open source alternative called Pimcore.
[Pimcore PIM][2] is an open source enterprise PIM, dual-[licensed][3] under GPLv3 and Pimcore Enterprise License (PEL) that enables sellers to centralize and harmonize sales, marketing, and technical product information. Pimcore can acquire, manage, and share any digital data and integrate easily into an existing IT system landscape. Its API-driven, service-oriented architecture enables fast and seamless connection to third-party software such as enterprise resource planning (ERP), customer relationship management (CRM), business intelligence (BI), and more.
### Open source vs. proprietary PIM software
There are at least four significant differences between open source and proprietary software that PIM users should consider.
* **Vendor lock-in:** It is more difficult to customize proprietary software. If you want to develop a new feature or modify an existing one, proprietary software lock-in makes you dependent on the vendor. On the other hand, open source provides unlimited access and flexibility to modify the source code and leverage it to your advantage, as well as the opportunity to freely access contributions made by the community behind it.
* **Interoperability:** Open source PIM software offers greater interoperability capabilities with APIs for integration with third-party business applications. Since the source code is open and available, users can customize or build connectors to meet their needs, which is not possible with proprietary software.
* **Community:** Open source solutions are supported by vibrant communities of contributors, implementers, developers, and other enthusiasts working towards enhancing the solution. Proprietary PIM software typically depends on commercial partnerships for implementation assistance and customizations.
* **Total cost of ownership:** Proprietary software carries a significant license fee for deployment, which includes implementation, customization, and system maintenance. In contrast, open source software development can be done in-house or through an IT vendor. This becomes a huge advantage for enterprises with tight budgets, as it slashes PIM operating costs.
### Pimcore features
Pimcore's platform is divided into two core offerings: data management and experience management. In addition to being open source and free to download and use, its features include the following.
#### Data modeling
Pimcore's web-based data modeling engine has over 40 high-performance data types that can help companies easily manage zillions of products or other master data with thousands of attributes. It also offers multilingual data management, object relations, data classification, digital asset management (DAM), and data modeling supported by data inheritance.
![Pimcore translations inheritance][4]
#### Data management
Pimcore enables efficient enterprise data management that focuses on ease of use; consistency in aggregation, organization, classification, and translation of product information; and sound data governance to enable optimization, flexibility, and scalability.
![PIM batch change][5]
#### Data quality
Data quality management is the basis for analytics and business intelligence (BI). Pimcore supports data quality, completeness, and validation, and includes rich auditing and versioning features to help organizations meet revenue goals, compliance requirements, and productivity objectives. Pimcore also offers a configurable dashboard, custom reports capabilities, filtering, and export functionalities.
![PIM data quality and completeness][6]
#### Workflow management
Pimcore's advanced workflow engine makes it easy to build and modify workflows to improve accuracy and productivity and reduce risks. Drop-downs enable enterprises to chalk out workflow paths to define business processes and editorial workflows with ease, and the customizable management and administration interface makes it easy to integrate workflows into an organization's application infrastructure.
![Pimcore workflow management][7]
#### Data consolidation
Pimcore eliminates data silos by consolidating data in a central place and creating a single master data record or a single point of truth. It does this by gathering data lying in disparate systems spread across geographic locations, departments, applications, hard drives, vendors, suppliers, and more. By consolidating data, enterprises can get improved accuracy, reliability, and efficacy of information, lower cost of compliance, and decreased time-to-market.
#### Synchronization across channels
Pimcore's tools for gathering and managing digital data enable sellers to deliver it across any channel or device to reach individual customers on their preferred platforms. This helps enterprises enrich the user experience, leverage a single point of control to optimize performance, improve data governance, streamline product data lifecycle management, and boost productivity to reduce time-to-market and meet customers' expectations.
### Installing, trying, and using Pimcore
The best way to start exploring Pimcore is with a guided tour or demo; before you begin, make sure that you have the [system requirements][8] in place.
#### Demo Pimcore
Navigate to the [Pimcore demo][9] page and either register for a guided tour or click on one of the products in the "Try By Yourself" column for a self-guided demo. Enter the username **admin** and password **demo** to begin the demo.
![Pimcore demo page][10]
#### Download and install Pimcore
If you want to take a deeper dive, you can [download Pimcore][11]; you can choose the data management or the experience management offering or both. You will need to enter your contact information and then immediately receive installation instructions.
![Pimcore download interface][12]
You can also choose from four installation packages: three are demo packages for beginners, and one is a skeleton for experienced developers. All contain:
* Complete Pimcore platform
* Latest open source version
* Quick-start guide
* Demo data for getting started
If you are installing Pimcore on a typical [LAMP][13] environment (which is recommended), see the [Pimcore installation guide][14]. If you're using another setup (e.g., Nginx), see the [installation, setup, and upgrade guide][15] for details.
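For reference, the Composer-based route described in the guide looked roughly like the sketch below at the time of writing; the project name is a placeholder, and the linked installation guide remains the authoritative reference for your Pimcore version.

```
# Rough sketch of the Composer-based setup; the project name is a placeholder.
composer create-project pimcore/skeleton my-pim-project
cd my-pim-project

# The bundled installer sets up the database and the admin user;
# it prompts for the MySQL/MariaDB connection details it needs.
./vendor/bin/pimcore-install
```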
![Pimcore installation documentation][16]
### Contribute to Pimcore
As open source software, users are encouraged to engage with, [contribute][17] to, and fork Pimcore. For tracking bugs and features, as well as for software management, Pimcore relies exclusively on [GitHub][18], where contributions are assessed and carefully curated to uphold Pimcore's quality standards.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/pimcore-alternative-product-information-management
作者:[Dietmar Rietsch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/erinmcmahon
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
[2]: https://pimcore.com/en
[3]: https://github.com/pimcore/pimcore/blob/master/LICENSE.md
[4]: https://opensource.com/sites/default/files/uploads/pimcoretranslationinheritance.png (Pimcore translations inheritance)
[5]: https://opensource.com/sites/default/files/uploads/pimcorebatchchange.png (PIM batch change)
[6]: https://opensource.com/sites/default/files/uploads/pimcoredataquality.png (PIM data quality and completeness)
[7]: https://opensource.com/sites/default/files/pimcore-workflow-management.jpg (Pimcore workflow management)
[8]: https://pimcore.com/docs/5.x/Development_Documentation/Installation_and_Upgrade/System_Requirements.html
[9]: https://pimcore.com/en/try
[10]: https://opensource.com/sites/default/files/uploads/pimcoredemopage.png (Pimcore demo page)
[11]: https://pimcore.com/en/download
[12]: https://opensource.com/sites/default/files/uploads/pimcoredownload.png (Pimcore download interface)
[13]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[14]: https://pimcore.com/docs/5.x/Development_Documentation/Getting_Started/Installation.html
[15]: https://pimcore.com/docs/5.x/Development_Documentation/Installation_and_Upgrade/index.html
[16]: https://opensource.com/sites/default/files/uploads/pimcoreinstall.png (Pimcore installation documentation)
[17]: https://github.com/pimcore/pimcore/blob/master/CONTRIBUTING.md
[18]: https://github.com/pimcore/pimcore

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it Takes to Be a Successful Network Engineer)
[#]: via: (https://opensourceforu.com/2019/11/what-it-takes-to-be-a-successful-network-engineer/)
[#]: author: (Christopher Nichols https://opensourceforu.com/author/christopher-nichols/)
What it Takes to Be a Successful Network Engineer
======
[![][1]][2]
_Network engineering is an excellent field filled with complex and fulfilling work, and many job opportunities. As companies end up with networks that continue to become more complex and connect more devices together, network engineers are in high-demand. Being successful in this role requires several characteristics and skill sets that serve employees well in this fast-paced and mission-critical environment._
**Deep Understanding of Networking Technologies**
Some people might think that this characteristic is assumed when it comes to network engineering. However, there's a distinct difference between knowing enough about networking to manage and monitor the system, and having a truly in-depth understanding of the subject matter. The best network engineers eat, breathe, and drink this type of technology. They stay on top of the latest trends in their free time and are thrilled to learn about new developments in the field.
**Detail Oriented**
Networking has a lot of moving parts and various types of software and hardware to work with. Paying close attention to all of the details ensures that the system is being monitored correctly and nothing gets lost in the shuffle. When data breaches are prevalent in the business world, stopping an intrusion could mean identifying a small red flag that popped up the day before. Without being alert to these details, the network ends up being vulnerable.
**Problem Solving**
One of the most used skills in network engineering is problem-solving. Everything from troubleshooting issues for users to looking for ways to improve the performance of the network requires it. When a worker in this field can quickly and efficiently solve issues through an analytical mindset, they free up a lot of time for strategic decision-making.
**Team Coordination**
Many organizations have teams collaborating together across departments. The network engineer role may be a small part of the team or put in a management position based on the resources required for the project. Working with multiple teams requires strong people management skills and understanding how to move towards a common goal.
**Ongoing Education**
Many continued education opportunities exist for network engineering. Many organizations offer certifications in specific networking technologies, whether the person is learning about a particular server operating system or branching out into subject areas that are related to networking. A drive for ongoing education means that the network engineer will always have their skills updated to adapt to the latest technology changes in the marketplace. Additionally, when these workers love to learn, they also seek out self-instruction opportunities. For example, they could [_read this guide_][3] to learn more about how VPN protocols work.
**Documentation**
Strong writing skills may not be the first characteristic that comes to mind when someone thinks about a network engineer. However, its essential when it comes to writing technical documentation. Well-structured and clear documentation allows the network engineer to share information about the network with other people in the organization. If that person ends up leaving the company, the networking protocols, procedures and configuration remain in place because all of the data is available and understandable.
**Jargon-free Communication**
Network engineers have frequent conversations with stakeholders and end users, who may not have a strong IT background. The jargon commonly used with other members of the IT team would leave this group confused about what you're saying. When the network engineer can explain technology in simple terms, it is easier to get the resources and budget needed to effectively support the company's networking needs.
**Proactive Approaches**
Some network engineers rely on reactive approaches to fix problems when they occur. If data breaches arent prevented before they impact the organization, then it ends up being an expensive endeavor. A reactive approach is sometimes compared to running around and putting out fires the entire day. A proactive approach is more strategic. Network engineers put systems, policies and procedures in place that prevent the intrusion in the first place. They pick up on small issues and tackle them as soon as they show up, rather than waiting for something to break. Its easier to improve network performance because many of the low-level problems are eliminated through the network design or other technology that was implemented.
**Independent**
Network engineers often have to work on tasks without a lot of oversight. Depending on the companys budget, they may be the only person in their role in the entire organization. Working independently requires the employee to be driven and a self-starter. They must be able to keep themselves on task and stick to the schedule thats laid out for that particular project. In the event of a disaster, the network engineer may need to step into a leadership role to guide the recovery process.
**Fast Learner**
Technology changes all the time, and the interactions between new hardware and software may not be expected. A fast learner can quickly pick up the most important details about a piece of technology so that they can effectively troubleshoot it or optimize it.
**On-Call**
Disasters can strike a network at any time, and unexpected downtime is one of the worst things that can happen to a modern business. The mission-critical systems have to come up as soon as possible, which means that network engineers may need to take on-call shifts. One of the keys to being on-call is to be ready to act at a moment's notice, even if it's the middle of the night.
**Reliability**
Few businesses can operate without their network being up and available. If critical software or hardware are not available, then the entire business may find itself at a standstill. Customers get upset that they cant access the website or reach anyone in the company, employees are frustrated because theyre falling behind on their projects, and management is running around trying to get everything back up and running. As a network engineer, reliability is the key. Being available makes a big difference in resolving these types of problems, and always showing up on time and on schedule goes a long way towards cementing someone as a great network engineer.
![Avatar][4]
[Christopher Nichols][5]
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/what-it-takes-to-be-a-successful-network-engineer/
作者:[Christopher Nichols][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/christopher-nichols/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2015/03/Network-cable-with-router.jpg?resize=696%2C372&ssl=1 (Network cable with router)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2015/03/Network-cable-with-router.jpg?fit=1329%2C710&ssl=1
[3]: https://surfshark.com/learn/vpn-protocols
[4]: https://secure.gravatar.com/avatar/92e286970e06818292d5ce792b67a662?s=100&r=g
[5]: https://opensourceforu.com/author/christopher-nichols/
[6]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -1,299 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How RPM packages are made: the spec file)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)
How RPM packages are made: the spec file
======
![][1]
In the [previous article on RPM package building][2], you saw that source RPMS include the source code of the software, along with a “spec” file. This post digs into the spec file, which contains instructions on how to build the RPM. Again, this article uses _fpaste_ as an example.
### Understanding the source code
Before you can start writing a spec file, you need to have some idea of the software that youre looking to package. Here, youre looking at fpaste, a very simple piece of software. It is written in Python, and is a one file script. When a new version is released, its provided here on Pagure: <https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz>
The current version, as the archive shows, is 0.3.9.2. Download it so you can see whats in the archive:
```
$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste
```
The files you want to install are:
* _fpaste.py_: which should be installed to /usr/bin/.
* _docs/man/en/fpaste.1_: the manual, which should go to /usr/share/man/man1/.
* _COPYING_: the license text, which should go to /usr/share/license/fpaste/.
* _README.rst, TODO_: miscellaneous documentation that goes to /usr/share/doc/fpaste.
Where these files are installed depends on the Filesystem Hierarchy Standard. To learn more about it, you can either read here: <http://www.pathname.com/fhs/> or look at the man page on your Fedora system:
```
$ man hier
```
#### Part 1: What are we building?
Now that we know what files we have in the source, and where they are to go, lets look at the spec file. You can see the full file here: <https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec>
Here is the first part of the spec file:
```
Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
Requires: python3
%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin
```
_Name_, _Version_, and so on are called _tags_, and are defined in RPM. This means you cant just make up tags. RPM wont understand them if you do! The tags to keep an eye out for are:
* _Source0_: tells RPM where the source archive for this software is located.
* _Requires_: lists run-time dependencies for the software. RPM can automatically detect quite a few of these, but in some cases they must be mentioned manually. A run-time dependency is a capability (often a package) that must be on the system for this package to function. This is how _[dnf][3]_ detects whether it needs to pull in other packages when you install this package.
* _BuildRequires_: lists the build-time dependencies for this software. These must generally be determined manually and added to the spec file.
* _BuildArch_: the computer architectures that this software is being built for. If this tag is left out, the software will be built for all supported architectures. The value _noarch_ means the software is architecture independent (like fpaste, which is written purely in Python).
This section provides general information about fpaste: what it is, which version is being made into an RPM, its license, and so on. If you have fpaste installed, and look at its metadata, you can see this information included in the RPM:
```
$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...
```
RPM adds a few extra tags automatically that represent things that it knows.
At this point, we have the general information about the software that were building an RPM for. Next, we start telling RPM what to do.
#### Part 2: Preparing for the build
The next part of the spec is the preparation section, denoted by _%prep_:
```
%prep
%autosetup
```
For fpaste, the only command here is %autosetup. This simply extracts the tar archive into a new folder and keeps it ready for the next section where we build it. You can do more here, like apply patches, modify files for different purposes, and so on. If you did look at the contents of the source rpm for Python, you would have seen lots of patches there. These are all applied in this section.
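For example, a spec that carries patches declares them with _PatchN_ tags in the first section and lets _%autosetup_ apply them; the patch name below is hypothetical:

```
Patch0: 0001-fix-default-paste-url.patch

%prep
%autosetup -p1
```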
Typically anything in a spec file with the **%** prefix is a macro or label that RPM interprets in a special way. Often these will appear with curly braces, such as _%{example}_.
#### Part 3: Building the software
The next section is where the software is built, denoted by “%build”. Now, since fpaste is a simple, pure Python script, it doesnt need to be built. So, here we get:
```
%build
#nothing required
```
Generally, though, youd have build commands here, like:
```
configure; make
```
The build section is often the hardest section of the spec, because this is where the software is being built from source. This requires you to know what build system the tool is using, which could be one of many: Autotools, CMake, Meson, Setuptools (for Python) and so on. Each has its own commands and style. You need to know these well enough to get the software to build correctly.
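For instance, an Autotools-based project and a Setuptools-based Python project would typically call the corresponding RPM helper macros here; the snippets below are generic sketches, not part of the fpaste spec:

```
# An Autotools-based project
%build
%configure
%make_build

# A Python project using Setuptools
%build
%py3_build
```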
#### Part 4: Installing the files
Once the software is built, it needs to be installed in the _%install_ section:
```
%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}
```
RPM doesnt tinker with your system files when building RPMs. Its far too risky to add, remove, or modify files to a working installation. What if something breaks? So, instead RPM creates an artificial file system and works there. This is referred to as the _buildroot_. So, here in the buildroot, we create _/usr/bin_, represented by the macro _%{_bindir}_, and then install the files to it using the provided Makefile.
At this point, we have a built version of fpaste installed in our artificial buildroot.
#### Part 5: Listing all files to be included in the RPM
The last section of the spec file is the files section, _%files_. This is where we tell RPM what files to include in the archive it creates from this spec file. The fpaste file section is quite simple:
```
%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING
```
Notice how, here, we do not specify the buildroot. All of these paths are relative to it. The _%doc_ and _%license_ commands simply do a little more—they create the required folders and remember that these files must go there.
RPM is quite smart. If youve installed files in the _%install_ section, but not listed them, itll tell you this, for example.
#### Part 6: Document all changes in the change log
Fedora is a community based project. Lots of contributors maintain and co-maintain packages. So it is imperative that theres no confusion about what changes have been made to a package. To ensure this, the spec file contains the last section, the Changelog, _%changelog_:
```
%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild
* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
* Tue Jul 24 2018 Ankur Sinha - 0.3.9.2-1
- Update to 0.3.9.2
* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild
* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec
* Fri Sep 08 2017 Ankur Sinha - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....
```
There must be a changelog entry for _every_ change to the spec file. As you see here, while Ive updated the spec as the maintainer, others have too. Having the changes documented clearly helps everyone know what the current status of the spec is. For all packages installed on your system, you can use rpm to see their changelogs:
```
$ rpm -q --changelog fpaste
```
### Building the RPM
Now we are ready to build the RPM. If you want to follow along and run the commands below, please ensure that you followed the steps [in the previous post][2] to set your system up for building RPMs.
We place the fpaste spec file in _~/rpmbuild/SPECS_, the source code archive in _~/rpmbuild/SOURCES/_ and can now create the source RPM:
```
$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec
$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
```
Lets have a look at the results:
```
$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```
There we are — the source rpm has been built. Lets build both the source and binary rpm together:
```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..
```
RPM will show you the complete build output, with details on what it is doing in each section that we saw before. This “build log” is extremely important. When builds do not go as expected, we packagers spend lots of time going through them, tracing the complete build path to see what went wrong.
Thats it really! Your ready-to-install RPMs are where they should be:
```
$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
```
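If you want to try the result right away, you can point dnf at the freshly built package; it installs a local file just like a package from a repository:

```
$ sudo dnf install ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm
```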
### Recap
Weve covered the basics of how RPMs are built from a spec file. This is by no means an exhaustive document. In fact, it isnt documentation at all, really. It only tries to explain how things work under the hood. Heres a short recap:
* RPMs are of two types: _source_ and _binary_.
* Binary RPMs contain the files to be installed to use the software.
* Source RPMs contain the information needed to build the binary RPMs: the complete source code, and the instructions on how to build the RPM in the spec file.
* The spec file has various sections, each with its own purpose.
Here, weve built RPMs locally, on our Fedora installations. While this is the basic process, the RPMs we get from repositories are built on dedicated servers with strict configurations and methods to ensure correctness and security. This Fedora packaging pipeline will be discussed in a future post.
Would you like to get started with building packages, and help the Fedora community maintain the massive amount of software we provide? You can [start here by joining the package collection maintainers][4].
For any queries, post to the [Fedora developers mailing list][5]—were always happy to help!
### References
Here are some useful references to building RPMs:
* <https://fedoraproject.org/wiki/How_to_create_an_RPM_package>
* <https://docs.fedoraproject.org/en-US/quick-docs/create-hello-world-rpm/>
* <https://docs.fedoraproject.org/en-US/packaging-guidelines/>
* <https://rpm.org/documentation.html>
* * *
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/
作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/
[3]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[4]: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
[5]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
View File
@ -1,255 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building CI/CD pipelines with Jenkins)
[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins)
[#]: author: (Bryant Son https://opensource.com/users/brson)
Building CI/CD pipelines with Jenkins
======
Build continuous integration and continuous delivery (CI/CD) pipelines
with this step-by-step Jenkins tutorial.
![pipelines][1]
In my article [_A beginner's guide to building DevOps pipelines with open source tools_][2], I shared a story about building a DevOps pipeline from scratch. The core technology driving that initiative was [Jenkins][3], an open source tool to build continuous integration and continuous delivery (CI/CD) pipelines.
At Citi, there was a separate team that provided dedicated Jenkins pipelines with a stable master-slave node setup, but the environment was only used for quality assurance (QA), staging, and production environments. The development environment was still very manual, and our team needed to automate it to gain as much flexibility as possible while accelerating the development effort. This is the reason we decided to build a CI/CD pipeline for DevOps. And the open source version of Jenkins was the obvious choice due to its flexibility, openness, powerful plugin-capabilities, and ease of use.
In this article, I will share a step-by-step walkthrough on how you can build a CI/CD pipeline using Jenkins.
### What is a pipeline?
Before jumping into the tutorial, it's helpful to know something about CI/CD pipelines.
To start, it is helpful to know that Jenkins itself is not a pipeline. Just creating a new Jenkins job does not construct a pipeline. Think about Jenkins like a remote control—it's the place you click a button. What happens when you do click a button depends on what the remote is built to control. Jenkins offers a way for other application APIs, software libraries, build tools, etc. to plug into Jenkins, and it executes and automates the tasks. On its own, Jenkins does not perform any functionality but gets more and more powerful as other tools are plugged into it.
A pipeline is a separate concept that refers to the groups of events or jobs that are connected together in a sequence:
> A **pipeline** is a sequence of events or jobs that can be executed.
The easiest way to understand a pipeline is to visualize a sequence of stages, like this:
![Pipeline example][4]
Here, you should see two familiar concepts: _Stage_ and _Step_.
* **Stage:** A block that contains a series of steps. A stage block can be named anything; it is used to visualize the pipeline process.
* **Step:** A task that says what to do. Steps are defined inside a stage block.
In the example diagram above, Stage 1 can be named "Build," "Gather Information," or whatever, and a similar idea is applied for the other stage blocks. "Step" simply says what to execute, and this can be a simple print command (e.g., **echo "Hello, World"**), a program-execution command (e.g., **java HelloWorld**), a shell-execution command (e.g., **chmod 755 Hello**), or any other command—as long as it is recognized as an executable command through the Jenkins environment.
The Jenkins pipeline is provided as a _codified script_ typically called a **Jenkinsfile**, although the file name can be different. Here is an example of a simple Jenkins pipeline file.
```
// Example of Jenkins pipeline script
pipeline {
  agent any
  stages {
    stage("Build") {
      steps {
        // Just print a Hello, Pipeline to the console
        echo "Hello, Pipeline!"
        // Compile a Java file. This requires JDK configuration from Jenkins
        sh "javac HelloWorld.java"
        // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
        sh "java HelloWorld"
        // Execute the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
        sh "mvn clean package ./HelloPackage"
        // List the files in the current directory path by executing a default shell command
        sh "ls -ltr"
      }
    }
    // And next stages if you want to define further...
  } // End of stages
} // End of pipeline
```
It's easy to see the structure of a Jenkins pipeline from this sample script. Note that some commands, like **java**, **javac**, and **mvn**, are not available by default, and they need to be installed and configured through Jenkins. Therefore:
> A **Jenkins pipeline** is the way to execute a Jenkins job sequentially in a defined way by codifying it and structuring it inside multiple blocks that can include multiple steps containing tasks.
OK. Now that you understand what a Jenkins pipeline is, I'll show you how to create and execute a Jenkins pipeline. At the end of the tutorial, you will have built a Jenkins pipeline like this:
![Final Result][5]
### How to build a Jenkins pipeline
To make this tutorial easier to follow, I created a sample [GitHub repository][6] and a video tutorial.
Before starting this tutorial, you'll need:
  * **Java Development Kit:** If you don't already have it, install a JDK and add it to the environment path so a Java command (like **java -jar**) can be executed through a terminal. This is necessary to leverage the Java Web Archive (WAR) version of Jenkins that is used in this tutorial (although you can use any other distribution).
* **Basic computer operations:** You should know how to type some code, execute basic Linux commands through the shell, and open a browser.
Let's get started.
#### Step 1: Download Jenkins
Navigate to the [Jenkins download page][7]. Scroll down to **Generic Java package (.war)** and click on it to download the file; save it someplace where you can locate it easily. (If you choose another Jenkins distribution, the rest of the tutorial steps should be pretty much the same, except for Step 2.) The reason to use the WAR file is that it is a single, self-contained executable that is easy to run and just as easy to remove.
![Download Jenkins as Java WAR file][8]
#### Step 2: Execute Jenkins as a Java binary
Open a terminal window and enter the directory where you downloaded Jenkins with **cd &lt;your path&gt;**. (Before you proceed, make sure JDK is installed and added to the environment path.) Execute the following command, which will run the WAR file as an executable binary:
```
`java -jar ./jenkins.war`
```
If everything goes smoothly, Jenkins should be up and running at the default port 8080.
![Execute as an executable JAR binary][9]
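If another service on your machine is already listening on port 8080, you can tell the WAR file to use a different port instead; for example:

```
$ java -jar ./jenkins.war --httpPort=9090
```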
#### Step 3: Create a new Jenkins job
Open a web browser and navigate to **localhost:8080**. Unless you have a previous Jenkins installation, it should go straight to the Jenkins dashboard. Click **Create New Jobs**. You can also click **New Item** on the left.
![Create New Job][10]
#### Step 4: Create a pipeline job
In this step, you can select and define what type of Jenkins job you want to create. Select **Pipeline** and give it a name (e.g., TestPipeline). Click **OK** to create a pipeline job.
![Create New Pipeline Job][11]
You will see a Jenkins job configuration page. Scroll down to find the **Pipeline** section. There are two ways to execute a Jenkins pipeline. One way is by _directly writing a pipeline script_ on Jenkins, and the other way is by retrieving the _Jenkins file from SCM_ (source control management). We will go through both ways in the next two steps.
#### Step 5: Configure and execute a pipeline job through a direct script
To execute the pipeline with a direct script, begin by copying the contents of the [sample Jenkinsfile][6] from GitHub. Choose **Pipeline script** in the **Definition** dropdown and paste the **Jenkinsfile** contents in **Script**. Spend a little time studying how the Jenkins file is structured. Notice that there are three Stages: Build, Test, and Deploy, which are arbitrary and can be anything. Inside each Stage, there are Steps; in this example, they just print some random messages.
Click **Save** to keep the changes, and it should automatically take you back to the Job Overview.
![Configure to Run as Jenkins Script][12]
To start the process to build the pipeline, click **Build Now**. If everything works, you will see your first pipeline (like the one below).
![Click Build Now and See Result][13]
To see the output from the pipeline script build, click any of the Stages and click **Log**. You will see a message like this.
![Visit sample GitHub with Jenkins get clone link][14]
#### Step 6: Configure and execute a pipeline job with SCM
Now, switch gears: In this step, you will Deploy the same Jenkins job by copying the **Jenkinsfile** from a source-controlled GitHub. In the same [GitHub repository][6], pick up the repository URL by clicking **Clone or download** and copying its URL.
![Checkout from GitHub][15]
Click **Configure** to modify the existing job. Scroll to the **Advanced Project Options** setting, but this time, select the **Pipeline script from SCM** option in the **Definition** dropdown. Paste the GitHub repo's URL in the **Repository URL** field, type **Jenkinsfile** in the **Script Path** field, and then click **Save**.
![Change to Pipeline script from SCM][16]
To build the pipeline, once you are back on the Job Overview page, click **Build Now** to execute the job again. The result will be the same as before, except you have one additional stage called **Declarative: Checkout SCM**.
![Build again and verify][17]
To see the pipeline's output from the SCM build, click the Stage and view the **Log** to check how the source control cloning process went.
![Verify Checkout Procedure][18]
### Do more than print messages
Congratulations! You've built your first Jenkins pipeline!
"But wait," you say, "this is very limited. I cannot really do anything with it except print dummy messages." That is OK. So far, this tutorial provided just a glimpse of what a Jenkins pipeline can do, but you can extend its capabilities by integrating it with other tools. Here are a few ideas for your next project:
  * Build a multi-stage Java pipeline that covers pulling dependencies from JAR repositories like Nexus or Artifactory, compiling Java code, running unit tests, packaging the result into a JAR/WAR file, and deploying it to a cloud server.
  * Implement an advanced code-testing dashboard that reports the health of the project based on unit tests, load tests, and automated user interface tests with Selenium.
  * Construct a multi-pipeline or multi-user pipeline that automates running Ansible playbooks while allowing authorized users to respond to tasks in progress.
  * Design a complete end-to-end DevOps pipeline that pulls infrastructure resource files and configuration files stored in SCM such as GitHub and executes the scripts through various runtime programs.
Follow any of the tutorials at the end of this article to get into these more advanced cases.
#### Manage Jenkins
From the main Jenkins dashboard, click **Manage Jenkins**.
![Manage Jenkins][19]
#### Global tool configuration
There are many options available, including managing plugins, viewing the system log, etc. Click **Global Tool Configuration**.
![Global Tools Configuration][20]
#### Add additional capabilities
Here, you can add the JDK path, Git, Gradle, and so much more. After you configure a tool, it is just a matter of adding the command into your Jenkinsfile or executing it through your Jenkins script.
![See Various Options for Plugin][21]
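As a quick illustration of how a configured tool is then referenced, a declarative pipeline can pick up tools by the names you gave them in Global Tool Configuration. This is only a sketch; the names `jdk11` and `maven-3` are placeholders that must match your own configuration:

```
pipeline {
  agent any
  tools {
    // These names must match the entries in Global Tool Configuration
    jdk 'jdk11'
    maven 'maven-3'
  }
  stages {
    stage('Verify tools') {
      steps {
        sh 'java -version'
        sh 'mvn -v'
      }
    }
  }
}
```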
### Where to go from here?
This article put you on your way to creating a CI/CD pipeline using Jenkins, a cool open source tool. To find out about many of the other things you can do with Jenkins, check out these other articles on Opensource.com:
* [Getting started with Jenkins X][22]
* [Install an OpenStack cloud with Jenkins][23]
* [Running Jenkins builds in containers][24]
* [Getting started with Jenkins pipelines][25]
* [How to run JMeter with Jenkins][26]
* [Integrating OpenStack into your Jenkins workflow][27]
You may be interested in some of the other articles I've written to supplement your open source journey:
* [9 open source tools for building a fault-tolerant system][28]
* [Understanding software design patterns][29]
* [A beginner's guide to building DevOps pipelines with open source tools][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines)
[2]: https://opensource.com/article/19/4/devops-pipeline
[3]: https://jenkins.io/
[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example)
[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result)
[6]: https://github.com/bryantson/CICDPractice
[7]: https://jenkins.io/download/
[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file)
[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary)
[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job)
[11]: https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job)
[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script)
[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result)
[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link)
[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub)
[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM)
[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify)
[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure)
[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins)
[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration)
[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin)
[22]: https://opensource.com/article/18/11/getting-started-jenkins-x
[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins
[24]: https://opensource.com/article/18/4/running-jenkins-builds-containers
[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101
[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco
[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system
[29]: https://opensource.com/article/19/7/understanding-software-design-patterns
View File
@ -1,452 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
Understanding system calls on Linux with strace
======
Trace the thin layer between user processes and the Linux kernel with
strace.
![Hand putting a Linux file folder into a drawer][1]
A system call is a programmatic way a program requests a service from the kernel, and **strace** is a powerful tool that allows you to trace the thin layer between user processes and the Linux kernel.
To understand how an operating system works, you first need to understand how system calls work. One of the main functions of an operating system is to provide abstractions to user programs.
An operating system can roughly be divided into two modes:
* **Kernel mode:** A privileged and powerful mode used by the operating system kernel
* **User mode:** Where most user applications run
Users mostly work with command-line utilities and graphical user interfaces (GUI) to do day-to-day tasks. System calls work silently in the background, interfacing with the kernel to get work done.
System calls are very similar to function calls, which means they accept and work on arguments and return values. The only difference is that system calls enter a kernel, while function calls do not. Switching from user space to kernel space is done using a special [trap][2] mechanism.
Most of this is hidden away from the user by using system libraries (aka **glibc** on Linux systems). Even though system calls are generic in nature, the mechanics of issuing a system call are very much machine-dependent.
This article explores some practical examples: it runs a few general commands and analyzes the system calls made by each of them using **strace**. These examples use Red Hat Enterprise Linux, but the commands should work the same on other Linux distros:
```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```
First, ensure that the required tools are installed on your system. You can verify whether **strace** is installed using the RPM command below; if it is, you can check the **strace** utility version number using the **-V** option:
```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```
If that doesn't work, install **strace** by running:
```
`yum install strace`
```
For the purpose of this example, create a test directory within **/tmp** and create two files in it using the **touch** command:
```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```
(I used the **/tmp** directory because everybody has access to it, but you can choose another directory if you prefer.)
Verify that the files were created using the **ls** command on the **testdir** directory:
```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```
You probably use the **ls** command every day without realizing system calls are at work underneath it. There is abstraction at play here; here's how this command works:
```
`Command-line utility -> Invokes functions from system libraries (glibc) -> Invokes system calls`
```
The **ls** command internally calls functions from system libraries (aka **glibc**) on Linux. These libraries invoke the system calls that do most of the work.
If you want to know which functions were called from the **glibc** library, use the **ltrace** command followed by the regular **ls testdir/** command:
```
`ltrace ls testdir/`
```
If **ltrace** is not installed, install it by entering:
```
`yum install ltrace`
```
A bunch of output will be dumped to the screen; don't worry about it—just follow along. Some of the important library functions from the output of the **ltrace** command that are relevant to this example include:
```
opendir("testdir/")                                  = { 3 }
readdir({ 3 })                                       = { 101879119, "." }
readdir({ 3 })                                       = { 134, ".." }
readdir({ 3 })                                       = { 101879120, "file1" }
strlen("file1")                                      = 5
memcpy(0x1665be0, "file1\0", 6)                      = 0x1665be0
readdir({ 3 })                                       = { 101879122, "file2" }
strlen("file2")                                      = 5
memcpy(0x166dcb0, "file2\0", 6)                      = 0x166dcb0
readdir({ 3 })                                       = nil
closedir({ 3 })                      
```
By looking at the output above, you probably can understand what is happening. A directory called **testdir** is being opened by the **opendir** library function, followed by calls to the **readdir** function, which is reading the contents of the directory. At the end, there is a call to the **closedir** function, which closes the directory that was opened earlier. Ignore the other **strlen** and **memcpy** functions for now.
You can see which library functions are being called, but this article will focus on system calls that are invoked by the system library functions.
Similar to the above, to understand what system calls are invoked, just put **strace** before the **ls testdir** command, as shown below. Once again, a bunch of gibberish will be dumped to your screen, which you can follow along with here:
```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL)                               = 0x1f12000
<<< truncated strace output >>>
write(1, "file1  file2\n", 13file1  file2
)          = 13
close(1)                                = 0
munmap(0x7fd002c8d000, 4096)            = 0
close(2)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```
The output on the screen after running the **strace** command was simply system calls made to run the **ls** command. Each system call serves a specific purpose for the operating system, and they can be broadly categorized into the following sections:
* Process management system calls
* File management system calls
* Directory and filesystem management system calls
* Other system calls
An easier way to analyze the information dumped onto your screen is to log the output to a file using **strace**'s handy **-o** flag. Add a suitable file name after the **-o** flag and run the command again:
```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```
This time, no output dumped to the screen—the **ls** command worked as expected by showing the file names and logging all the output to the file **trace.log**. The file has almost 100 lines of content just for a simple **ls** command:
```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```
Take a look at the first line in the example's trace.log:
```
`execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0`
```
* The first word of the line, **execve**, is the name of a system call being executed.
* The text within the parentheses is the arguments provided to the system call.
* The number after the **=** sign (which is **0** in this case) is a value returned by the **execve** system call.
The output doesn't seem too intimidating now, does it? And you can apply the same logic to understand other lines.
Now, narrow your focus to the single command that you invoked, i.e., **ls testdir**. You know the directory name used by the command **ls**, so why not **grep** for **testdir** within your **trace.log** file and see what you get? Look at each line of the results in detail:
```
[root@sandbox tmp]# grep testdir trace.log
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[root@sandbox tmp]#
```
Thinking back to the analysis of **execve** above, can you tell what this system call does?
```
`execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0`
```
You don't need to memorize all the system calls or what they do, because you can refer to documentation when you need to. Man pages to the rescue! Ensure the following package is installed before running the **man** command:
```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```
Remember that you need to add a **2** between the **man** command and the system call name. If you read **man**'s man page using **man man**, you can see that section 2 is reserved for system calls. Similarly, if you need information on library functions, you need to add a **3** between **man** and the library function name.
The following are the manual's section numbers and the types of pages they contain:
```
1. Executable programs or shell commands
2. System calls (functions provided by the kernel)
3. Library calls (functions within program libraries)
4. Special files (usually found in /dev)
```
Run the following **man** command with the system call name to see the documentation for that system call:
```
`man 2 execve`
```
As per the **execve** man page, this executes a program that is passed in the arguments (in this case, that is **ls**). There are additional arguments that can be provided to **ls**, such as **testdir** in this example. Therefore, this system call just runs **ls** with **testdir** as the argument:
```
'execve - execute program'
'DESCRIPTION
       execve()  executes  the  program  pointed to by filename'
```
The next system call, named **stat**, uses the **testdir** argument:
```
`stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0`
```
Use **man 2 stat** to access the documentation. **stat** is the system call that gets a file's status—remember that everything in Linux is a file, including a directory.
Next, the **openat** system call opens **testdir**. Keep an eye on the **3** that is returned. This is a file descriptor, which will be used by later system calls:
```
`openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3`
```
So far, so good. Now, open the **trace.log** file and go to the line following the **openat** system call. You will see the **getdents** system call being invoked, which does most of what is required to execute the **ls testdir** command. Now, **grep getdents** from the **trace.log** file:
```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
[root@sandbox tmp]#
```
The **getdents** man page describes it as **get directory entries**, which is what you want to do. Notice that the argument for **getdents** is **3**, which is the file descriptor from the **openat** system call above.
Now that you have the directory listing, you need a way to display it in your terminal. So, **grep** for another system call, **write**, which is used to write to the terminal, in the logs:
```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13)          = 13
[root@sandbox tmp]#
```
In these arguments, you can see the file names that will be displayed: **file1** and **file2**. Regarding the first argument (**1**), remember in Linux that, when any process is run, three file descriptors are opened for it by default. Following are the default file descriptors:
* 0 - Standard input
* 1 - Standard out
* 2 - Standard error
So, the **write** system call is displaying **file1** and **file2** on the standard display, which is the terminal, identified by **1**.
Now you know which system calls did most of the work for the **ls testdir/** command. But what about the other 100+ system calls in the **trace.log** file? The operating system has to do a lot of housekeeping to run a process, so a lot of what you see in the log file is process initialization and cleanup. Read the entire **trace.log** file and try to understand what is happening to make the **ls** command work.
Now that you know how to analyze system calls for a given command, you can use this knowledge for other commands to understand what system calls are being executed. **strace** provides a lot of useful command-line flags to make it easier for you, and some of them are described below.
By default, **strace** does not include all system call information. However, it has a handy **-v** (verbose) option that can provide additional information on each system call:
```
`strace -v ls testdir`
```
It is good practice to always use the **-f** option when running the **strace** command. It allows **strace** to trace any child processes created by the process currently being traced:
```
`strace -f ls testdir`
```
Say you just want the names of system calls, the number of times they ran, and the percentage of time spent in each system call. You can use the **-c** flag to get those statistics:
```
`strace -c ls testdir/`
```
Suppose you want to concentrate on a specific system call, such as focusing on **open** system calls and ignoring the rest. You can use the **-e** flag followed by the system call name:
```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```
What if you want to concentrate on more than one system call? No worries, you can use the same **-e** command-line flag with a comma between the two system calls. For example, to see the **write** and **getdents** systems calls:
```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
write(1, "file1  file2\n", 13file1  file2
)          = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```
The examples so far have traced explicitly run commands. But what about commands that have already been run and are in execution? What, for example, if you want to trace daemons that are just long-running processes? For this, **strace** provides a special **-p** flag to which you can provide a process ID.
Instead of running a **strace** on a daemon, take the example of a **cat** command, which usually displays the contents of a file if you give a file name as an argument. If no argument is given, the **cat** command simply waits at a terminal for the user to enter text. Once text is entered, it repeats the given text until a user presses Ctrl+C to exit.
Run the **cat** command from one terminal; it will show you a prompt and simply wait there (remember **cat** is still running and has not exited):
```
`[root@sandbox tmp]# cat`
```
From another terminal, find the process identifier (PID) using the **ps** command:
```
[root@sandbox ~]# ps -ef | grep cat
root      22443  20164  0 14:19 pts/0    00:00:00 cat
root      22482  20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```
Now, run **strace** on the running process with the **-p** flag and the PID (which you found above using **ps**). After running **strace**, the output states what the process was attached to along with the PID number. Now, **strace** is tracing the system calls made by the **cat** command. The first system call you see is **read**, which is waiting for input from 0, or standard input, which is the terminal where the **cat** command ran:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```
Now, move back to the terminal where you left the **cat** command running and enter some text. I entered **x0x0** for demo purposes. Notice how **cat** simply repeated what I entered; hence, **x0x0** appears twice. I input the first one, and the second one was the output repeated by the **cat** command:
```
[root@sandbox tmp]# cat
x0x0
x0x0
```
Move back to the terminal where **strace** was attached to the **cat** process. You now see two additional system calls: the earlier **read** system call, which now reads **x0x0** in the terminal, and another for **write**, which wrote **x0x0** back to the terminal, and again a new **read**, which is waiting to read from the terminal. Note that Standard input (**0**) and Standard out (**1**) are both in the same terminal:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536)                = 5
write(1, "x0x0\n", 5)                   = 5
read(0,
```
Imagine how helpful this is when running **strace** against daemons to see everything it does in the background. Kill the **cat** command by pressing Ctrl+C; this also kills your **strace** session since the process is no longer running.
If you want to see a timestamp against all your system calls, simply use the **-t** option with **strace**:
```
[root@sandbox ~]#strace -t ls testdir/
14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL)                      = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
What if you want to know the time spent between system calls? **strace** has a handy **-r** option that shows the time spent executing each system call. Pretty useful, isn't it?
```
[root@sandbox ~]#strace -r ls testdir/
0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL)                 = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
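These flags can also be combined. As a sketch (adjust the list of system calls to whatever you are investigating), the following invocation follows child processes, shows relative timestamps, filters a few system calls, and writes everything to a log file:

```
$ strace -f -r -o trace.log -e trace=openat,getdents,write ls testdir/
```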
### Conclusion
The **strace** utility is very handy for understanding system calls on Linux. To learn about its other command-line flags, please refer to the man pages and online documentation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/strace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)
View File
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
View File
@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
[#]: via: (https://opensource.com/article/19/10/intro-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with awk, a powerful text-parsing tool
======
Let's jump in and start using it.
![Woman programming][1]
Awk is a powerful text-parsing tool for Unix and Unix-like systems, but because it has programmed functions that you can use to perform common parsing tasks, it's also considered a programming language. You probably won't be developing your next GUI application with awk, and it likely won't take the place of your default scripting language, but it's a powerful utility for specific tasks.
What those tasks may be is surprisingly diverse. The best way to discover which of your problems might be best solved by awk is to learn awk; you'll be surprised at how awk can help you get more done but with a lot less effort.
Awk's basic syntax is:
```
`awk [options] 'pattern {action}' file`
```
To get started, create this sample file and save it as **colours.txt**
```
name       color  amount
apple      red    4
banana     yellow 6
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
This data is separated into columns by one or more spaces. It's common for data that you are analyzing to be organized in some way. It may not always be columns separated by whitespace, or even a comma or semicolon, but especially in log files or data dumps, there's generally a predictable pattern. You can use patterns of data to help awk extract and process the data that you want to focus on.
### Printing a column
In awk, the **print** function displays whatever you specify. There are many predefined variables you can use, but some of the most common are integers designating columns in a text file. Try it out:
```
$ awk '{print $2;}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```
In this case, awk displays the second column, denoted by **$2**. This is relatively intuitive, so you can probably guess that **print $1** displays the first column, and **print $3** displays the third, and so on.
To display _all_ columns, use **$0**.
The number after the dollar sign (**$**) is an _expression_, so **$2** and **$(1+1)** mean the same thing.
### Conditionally selecting columns
The example file you're using is very structured. It has a row that serves as a header, and the columns relate directly to one another. By defining _conditional_ requirements, you can qualify what you want awk to return when looking at this data. For instance, to view items in column 2 that match "yellow" and print the contents of column 1:
```
$ awk '$2=="yellow"{print $1}' colours.txt
banana
pineapple
```
Regular expressions work as well. This conditional looks at **$2** for approximate matches to the letter **p** followed by any number of (one or more) characters, which are in turn followed by the letter **p**:
```
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
grape   purple  10
plum    purple  2
```
Numbers are interpreted naturally by awk. For instance, to print any row with a third column containing an integer greater than 5:
```
$ awk '$3>5 {print $1, $2}' colours.txt
name    color
banana  yellow
grape   purple
apple   green
potato  brown
```
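Notice that the header row also shows up: its third field is the word "amount", which doesn't look like a number, so awk falls back to a string comparison, and that comparison happens to be true. If you want to force a numeric comparison (and so drop the header), one common trick is to add zero to the field:

```
$ awk '$3+0 > 5 {print $1, $2}' colours.txt
banana yellow
grape purple
apple green
potato brown
```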
### Field separator
By default, awk uses whitespace as the field separator. Not all text files use whitespace to define fields, though. For example, create a file called **colours.csv** with this content:
```
name,color,amount
apple,red,4
banana,yellow,6
strawberry,red,3
grape,purple,10
apple,green,8
plum,purple,2
kiwi,brown,4
potato,brown,9
pineapple,yellow,5
```
Awk can treat the data in exactly the same way, as long as you specify which character it should use as the field separator in your command. Use the **--field-separator** (or just **-F** for short) option to define the delimiter:
```
$ awk -F"," '$2=="yellow" {print $1}' colours.csv
banana
pineapple
```
### Saving output
Using output redirection, you can write your results to a file. For example:
```
`$ awk -F, '$3>5 {print $1, $2}' colours.csv > output.txt`
```
This creates a file with the contents of your awk query.
You can also split a file into multiple files grouped by column data. For example, if you want to split colours.txt into multiple files according to what color appears in each row, you can cause awk to redirect _per query_ by including the redirection in your awk statement:
```
`$ awk '{print > $2".txt"}' colours.txt`
```
This produces files named **yellow.txt**, **red.txt**, and so on.
In the next article, you'll learn more about fields, records, and some powerful awk variables.
* * *
This article is adapted from an episode of [Hacker Public Radio][2], a community technology podcast.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/intro-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://hackerpublicradio.org/eps.php?id=2114
View File
@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keyboard Shortcuts to Speed Up Your Work in Linux)
[#]: via: (https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/)
[#]: author: (S Sathyanarayanan https://opensourceforu.com/author/s-sathyanarayanan/)
Keyboard Shortcuts to Speed Up Your Work in Linux
======
[![Google Keyboard][1]][2]
_Manipulating the mouse, keyboard and menus takes up a lot of our time, which could be saved by using keyboard shortcuts. These not only save time, but also make the computer user more efficient._
Did you realise that switching from the keyboard to the mouse while typing takes up to two seconds each time? If a person works for eight hours every day, switching from the keyboard to the mouse once a minute, and there are around 240 working days in a year, the amount of time wasted (as per calculations done by Brainscape) is:
_[2 wasted seconds/min] x [480 minutes per day] x 240 working days per year = 64 wasted hours per year_
This is equal to eight working days lost and hence learning keyboard shortcuts will increase productivity by 3.3 per cent (_<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_).
Keyboard shortcuts provide a quicker way to do a task, which otherwise would have had to be done in multiple steps using the mouse and/or the menu. Figure 1 gives a list of a few most frequently used shortcuts in Ubuntu 18.04 Linux OS and the Web browsers. I am omitting the very well-known shortcuts like copy, paste, etc, and the ones which are not used frequently. The readers can refer to online resources for a comprehensive list of shortcuts. Note that the Windows key is renamed as Super key in Linux.
**General shortcuts**
A list of general shortcuts is given below.
[![][3]][4]
**Print Screen and video recording of the screen**
The following shortcuts can be used to print the screen or take a video recording of the screen.
[![][5]][6]
**Switching between applications**
The shortcut keys listed here can be used to switch between applications.
[![][7]][8]
**Tile windows**
The windows can be tiled in different ways using the shortcuts given below.
[![][9]][10]
**Browser shortcuts**
The most frequently used shortcuts for browsers are listed here. Most of the shortcuts are common to the Chrome/Firefox browsers.
**Key combination** | **Action**
---|---
Ctrl + T | Opens a new tab.
Ctrl + Shift + T | Opens the most recently closed tab.
Ctrl + D | Adds a new bookmark.
Ctrl + W | Closes the browser tab.
Alt + D | Positions the cursor in the browsers address bar.
F5 or Ctrl-R | Refreshes a page.
Ctrl + Shift + Del | Clears private data and history.
Ctrl + N | Opens a new window.
Home | Scrolls to the top of the page.
End | Scrolls to the bottom of the page.
Ctrl + J | Opens the Downloads folder (in Chrome)
F11 | Full-screen view (toggle effect)
**Terminal shortcuts**
Here is a list of terminal shortcuts.
[![][11]][12]
You can also configure your own custom shortcuts in Ubuntu, as follows (a command-line alternative is sketched after this list):
* Click on Settings in Ubuntu Dash.
* Select the Devices tab in the left menu of the Settings window.
* Select the Keyboard tab in the Devices menu.
* The + button is displayed at the bottom of the right panel. Click on the + sign to open the custom shortcut dialogue box and configure a new shortcut.
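If you prefer the terminal, GNOME-based Ubuntu releases also expose custom shortcuts through gsettings. The following is only a sketch under that assumption; the shortcut name, command, and key binding are placeholder values, and the first command replaces any existing list of custom keybindings:

```
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"

KB=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding
KB_PATH=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/

gsettings set $KB:$KB_PATH name 'Open terminal'
gsettings set $KB:$KB_PATH command 'gnome-terminal'
gsettings set $KB:$KB_PATH binding '<Super>t'
```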
Learning the shortcuts mentioned in this article can save a lot of time and make you more productive.
**Reference**
_Cohen, Andrew. How keyboard shortcuts could revive Americas economy; [www.brainscape.com][13]. [Online] Brainscape, 26 May 2017; <https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_
![Avatar][14]
[S Sathyanarayanan][15]
The author is currently working with Sri Sathya Sai University for Human Excellence, Gulbarga. He has more than 25 years of experience in systems management and in teaching IT courses. He is an enthusiastic promoter of FOSS and can be reached at [sathyanarayanan.brn@gmail.com][16].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/
作者:[S Sathyanarayanan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/s-sathyanarayanan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?resize=696%2C418&ssl=1 (Google Keyboard)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?fit=750%2C450&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?resize=350%2C319&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?resize=350%2C326&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?resize=350%2C264&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?resize=350%2C186&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?resize=350%2C250&ssl=1
[12]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?ssl=1
[13]: http://www.brainscape.com
[14]: https://secure.gravatar.com/avatar/736684a2707f2ed7ae72675edf7bb3ee?s=100&r=g
[15]: https://opensourceforu.com/author/s-sathyanarayanan/
[16]: mailto:sathyanarayanan.brn@gmail.com
View File
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloning a MAC address to bypass a captive portal)
[#]: via: (https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/)
[#]: author: (Esteban Wilson https://fedoramagazine.org/author/swilson/)
Cloning a MAC address to bypass a captive portal
======
![][1]
If you ever attach to a WiFi system outside your home or office, you often see a portal page. This page may ask you to accept terms of service or some other agreement to get access. But what happens when you cant connect through this kind of portal? This article shows you how to use NetworkManager on Fedora to deal with some failure cases so you can still access the internet.
### How captive portals work
Captive portals are web pages offered when a new device is connected to a network. When the user first accesses the Internet, the portal captures all web page requests and redirects them to a single portal page.
The page then asks the user to take some action, typically agreeing to a usage policy. Once the user agrees, they may authenticate to a RADIUS or other type of authentication system. In simple terms, the captive portal registers and authorizes a device based on the devices MAC address and end user acceptance of terms. (The MAC address is [a hardware-based value][2] attached to any network interface, like a WiFi chip or card.)
Sometimes a device doesnt load the captive portal to authenticate and authorize the device to use the locations WiFi access. Examples of this situation include mobile devices and gaming consoles (Switch, Playstation, etc.). They usually wont launch a captive portal page when connecting to the Internet. You may see this situation when connecting to hotel or public WiFi access points.
You can use NetworkManager on Fedora to resolve these issues, though. Fedora will let you temporarily clone the connecting devices MAC address and authenticate to the captive portal on the devices behalf. Youll need the MAC address of the device you want to connect. Typically this is printed somewhere on the device and labeled. Its a six-byte hexadecimal value, so it might look like _4A:1A:4C:B0:38:1F_. You can also usually find it through the devices built-in menus.
### Cloning with NetworkManager
First, open _**nm-connection-editor**_, or open the WiFi settings via the Settings applet. You can then use NetworkManager to clone as follows (an equivalent nmcli sketch appears after this list):
  * For Ethernet: Select the connected Ethernet connection. Then select the _Ethernet_ tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the _Cloned MAC address_ field.
  * For WiFi: Select the WiFi profile name. Then select the _WiFi_ tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the _Cloned MAC address_ field.
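The same property can be set from the command line with nmcli. This is a sketch with placeholder values: `hotel-wifi` stands for your own connection profile name, and the MAC address is the example value from above:

```
# Clone the device's MAC address onto the WiFi profile
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address 4A:1A:4C:B0:38:1F
nmcli connection up hotel-wifi

# Once the portal has authorized the address, restore the hardware default
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address permanent
nmcli connection up hotel-wifi
```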
### Bringing up the desired device
Once the Fedora system connects with the Ethernet or WiFi profile, the cloned MAC address is used to request an IP address, and the captive portal loads. Enter the credentials needed and/or select the user agreement. The MAC address will then get authorized.
Now, disconnect the WiFi or Ethernet profile, and change the Fedora systems MAC address back to its original value. Then boot up the console or other device. The device should now be able to access the Internet, because its network interface has been authorized via your Fedora system.
This isnt all that NetworkManager can do, though. For instance, check out this article on [randomizing your systems hardware address][3] for better privacy.
> [Randomize your MAC address using NetworkManager][3]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/
作者:[Esteban Wilson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/swilson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/clone-mac-nm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/MAC_address
[3]: https://fedoramagazine.org/randomize-mac-address-nm/

View File

@ -0,0 +1,252 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fields, records, and variables in awk)
[#]: via: (https://opensource.com/article/19/11/fields-records-variables-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Fields, records, and variables in awk
======
In the second article in this intro to awk series, learn about fields,
records, and some powerful awk variables.
![Man at laptop on a mountain][1]
Awk comes in several varieties: There is the original **awk**, written in 1977 at AT&T Bell Laboratories, and several reimplementations, such as **mawk**, **nawk**, and the one that ships with most Linux distributions, GNU awk, or **gawk**. On most Linux distributions, awk and gawk are synonyms referring to GNU awk, and typing either invokes the same awk command. See the [GNU awk user's guide][2] for the full history of awk and gawk.
The [first article][3] in this series showed that awk is invoked on the command line with this syntax:
```
`$ awk [options] 'pattern {action}' inputfile`
```
Awk is the command, and it can take options (such as **-F** to define the field separator). The action you want awk to perform is contained in single quotes, at least when it's issued in a terminal. To further emphasize which part of the awk command is the action you want it to take, you can precede your program with the **-e** option (but it's not required):
```
$ awk -F, -e '{print $2;}' colours.txt
yellow
blue
green
[...]
```
### Records and fields
Awk views its input data as a series of _records_, which are usually newline-delimited lines. In other words, awk generally sees each line in a text file as a new record. Each record contains a series of _fields_. A field is a component of a record delimited by a _field separator_.
By default, awk sees whitespace, such as spaces, tabs, and newlines, as indicators of a new field. Specifically, awk treats multiple _space_ separators as one, so this line contains two fields:
```
`raspberry red`
```
As does this one:
```
`tuxedo                  black`
```
Other separators are not treated this way. Assuming that the field separator is a comma, the following example record contains three fields, with one probably being zero characters long (assuming a non-printable character isn't hiding in that field):
```
`a,,b`
```
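You can confirm this behavior directly on the command line; with a comma as the field separator, the empty middle field shows up as nothing between the second pair of brackets:
```
$ echo 'a,,b' | awk -F, '{ print "[" $1 "] [" $2 "] [" $3 "]" }'
[a] [] [b]
```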
### The awk program
The _program_ part of an awk command consists of a series of rules. Normally, each rule begins on a new line in the program (although this is not mandatory). Each rule consists of a pattern and one or more actions:
```
`pattern { action }`
```
In a rule, you can define a pattern as a condition to control whether the action will run on a record. Patterns can be simple comparisons, regular expressions, combinations of the two, and more.
For instance, this will print a record _only_ if it contains the word "raspberry":
```
$ awk '/raspberry/ { print $0 }' colours.txt
raspberry red 99
```
If there is no qualifying pattern, the action is applied to every record.
Also, a rule can consist of only a pattern, in which case the entire record is written as if the action was **{ print }**.
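For example, a rule made of just a pattern prints every matching record in full (this assumes grape is the only purple entry in colours.txt):
```
$ awk '/purple/' colours.txt
grape      purple 10
```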
Awk programs are essentially _data-driven_ in that actions depend on the data, so they are quite a bit different from programs in many other programming languages.
### The NF variable
Each field has a variable as a designation, but there are special variables for fields and records, too. The variable **NF** stores the number of fields awk finds in the current record. This can be printed or used in tests. Here is an example using the [text file][3] from the previous article:
```
$ awk '{ print $0 " (" NF ")" }' colours.txt
name       color  amount (3)
apple      red    4 (3)
banana     yellow 6 (3)
[...]
```
Awk's **print** function takes a series of arguments (which may be variables or strings) and concatenates them together. This is why, at the end of each line in this example, awk prints the number of fields as an integer enclosed by parentheses.
### The NR variable
In addition to counting the fields in each record, awk also counts input records. The record number is held in the variable **NR**, and it can be used in the same way as any other variable. For example, to print the record number before each line:
```
$ awk '{ print NR ": " $0 }' colours.txt
1: name       color  amount
2: apple      red    4
3: banana     yellow 6
4: raspberry  red    3
5: grape      purple 10
[...]
```
Note that it's acceptable to write this command with no spaces other than the one after **print**, although it's more difficult for a human to parse:
```
`$ awk '{print NR": "$0}' colours.txt`
```
### The printf() function
For greater flexibility in how the output is formatted, you can use the awk **printf()** function. This is similar to **printf** in C, Lua, Bash, and other languages. It takes a _format_ argument followed by a comma-separated list of items. The argument list may be enclosed in parentheses.
```
printf format, item1, item2, ...
```
The format argument (or _format string_) defines how each of the other arguments will be output. It uses _format specifiers_ to do this, including **%s** to output a string and **%d** to output a decimal number. The following **printf** statement outputs the record followed by the number of fields in parentheses:
```
$ awk '{printf "%s (%d)\n",$0,NF}' colours.txt
name       color  amount (3)
apple      red    4 (3)
banana     yellow 6 (3)
[...]
```
In this example, **%s (%d)** provides the structure for each line, while **$0,NF** defines the data to be inserted into the **%s** and **%d** positions. Note that, unlike with the **print** function, no newline is generated without explicit instructions. The escape sequence **\n** does this.
### Awk scripting
All of the awk code in this article has been written and executed in an interactive Bash prompt. For more complex programs, it's often easier to place your commands into a file or _script_. The option **-f FILE** (not to be confused with **-F**, which denotes the field separator) may be used to invoke a file containing a program.
For example, here is a simple awk script. Create a file called **example1.awk** with this content:
```
/^a/ {print "A: " $0}
/^b/ {print "B: " $0}
```
It's conventional to give such files the extension **.awk** to make it clear that they hold an awk program. This naming is not mandatory, but it gives file managers and editors (and you) a useful clue about what the file is.
Run the script:
```
$ awk -f example1.awk colours.txt
A: raspberry  red    4
B: banana     yellow 6
A: apple      green  8
```
A file containing awk instructions can be made into a script by adding a **#!** line at the top and making it executable. Create a file called **example2.awk** with these contents:
```
#!/usr/bin/awk -f
#
# Print all but line 1 with the line number on the front
#
NR > 1 {
    printf "%d: %s\n",NR,$0
}
```
Arguably, there's no advantage to having just one line in a script, but sometimes it's easier to execute a script than to remember and type even a single line. A script file also provides a good opportunity to document what a command does. Lines starting with the **#** symbol are comments, which awk ignores.
Grant the file executable permission:
```
`$ chmod u+x example2.awk`
```
Run the script:
```
$ ./example2.awk colours.txt
2: apple      red    4
3: banana     yellow 6
4: raspberry  red    3
5: grape      purple 10
[...]
```
An advantage of placing your awk instructions in a script file is that it's easier to format and edit. While you can write awk on a single line in your terminal, it can get overwhelming when it spans several lines.
### Try it
You now know enough about how awk processes your instructions to be able to write a complex awk program. Try writing an awk script with more than one rule and at least one conditional pattern. If you want to try more functions than just **print** and **printf**, refer to [the gawk manual][4] online.
Here's an idea to get you started:
```
#!/usr/bin/awk -f
#
# Print each record EXCEPT
# IF the first field is "raspberry",
# THEN replace "red" with "pi"
$1 == "raspberry" {
        gsub(/red/,"pi")
}
{ print }
```
Try this script to see what it does, and then try to write your own.
The next article in this series will introduce more functions for even more complex (and useful!) scripts.
* * *
_This article is adapted from an episode of [Hacker Public Radio][5], a community technology podcast._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/fields-records-variables-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr (Man at laptop on a mountain)
[2]: https://www.gnu.org/software/gawk/manual/html_node/History.html#History
[3]: https://opensource.com/article/19/10/intro-awk
[4]: https://www.gnu.org/software/gawk/manual/
[5]: http://hackerpublicradio.org/eps.php?id=2129

View File

@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Update a Fedora Linux System [Beginners Tutorial])
[#]: via: (https://itsfoss.com/update-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
How To Update a Fedora Linux System [Beginners Tutorial]
======
_**This quick tutorial shows various ways to update a Fedora Linux install.**_
So, the other day, I installed the [newly released Fedora 31][1]. I'll be honest with you, it was my first time with a [non-Ubuntu distribution][2].
The first thing I did after installing Fedora was to try and install some software. I opened the software center and found that the software center was broken. I couldn't install any application from it.
I wasn't sure what went wrong with my installation. Discussing within the team, Abhishek advised me to update the system first. I did that and poof! everything was back to normal. After updating the [Fedora][3] system, the software center worked as it should.
Sometimes we just ignore updates and keep troubleshooting the issue we face. But no matter how big or small the issue is, you should keep your system up to date.
In this article, I'll show you various possible methods to update your Fedora Linux system.
* [Update Fedora using software center][4]
* [Update Fedora using command line][5]
* [Update Fedora from system settings][6]
Keep in mind that updating Fedora means installing the security patches, kernel updates and software updates. If you want to update from one version of Fedora to another, it is called version upgrade and you can [read about Fedora version upgrade procedure here][7].
### Updating Fedora From The Software Center
![Software Center][8]
You will most likely be notified that you have some system updates to look at; clicking on that notification launches the software center.
All you have to do is hit Update and enter the root password when prompted to start updating.
In case you did not get a notification for the available updates, you can simply launch the software center and head to the “Updates” tab. Now, you just need to proceed with the updates listed.
### Updating Fedora Using The Terminal
If you cannot load up the software center for some reason, you can always utilize the dnf package managing commands to easily update your system.
Simply launch the terminal and type in the following command to start updating (you should be prompted to verify the root password):
```
sudo dnf upgrade
```
**dnf update vs dnf upgrade**
You'll find that there are two dnf commands available: dnf update and dnf upgrade.
Both commands do the same job, which is to install all the updates provided by Fedora.
Then why are there both dnf update and dnf upgrade, and which one should you use?
Well, dnf update is basically an alias of dnf upgrade. While dnf update may still work, the good practice is to use dnf upgrade because that is the real command.
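If you just want to see which updates are available before installing anything, dnf can also list them first:
```
# List available updates without installing anything
sudo dnf check-update

# Then install them
sudo dnf upgrade
```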
### Updating Fedora From System Settings
![][9]
If nothing else works (or if you're already in the System settings for some reason), navigate your way to the “Details” option at the bottom of your settings.
This should show the details of your OS and hardware along with a “Check for Updates” button as shown in the image above. You just need to click on it and provide the root/admin password to proceed to install the available updates.
**Wrapping Up**
As explained above, it is quite easy to update your Fedora installation. You've got three methods to choose from, so you have nothing to worry about.
If you notice any issue in following the instructions mentioned above, feel free to let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-fedora/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/fedora-31-release/
[2]: https://itsfoss.com/non-ubuntu-beginner-linux/
[3]: https://getfedora.org/
[4]: tmp.Lqr0HBqAd9#software-center
[5]: tmp.Lqr0HBqAd9#command-line
[6]: tmp.Lqr0HBqAd9#system-settings
[7]: https://itsfoss.com/upgrade-fedora-version/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/software-center.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/system-settings-fedora-1.png?ssl=1

View File

@ -0,0 +1,308 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Add Windows and Linux host to Nagios Server for Monitoring)
[#]: via: (https://www.linuxtechi.com/add-windows-linux-host-to-nagios-server/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Add Windows and Linux host to Nagios Server for Monitoring
======
In the previous article, we demonstrated how to install [Nagios Core on CentOS 8 / RHEL 8][1] server. In this guide, we will dive deeper and add Linux and Windows hosts to the Nagios Core server for monitoring.
![Add-Linux-Windows-Host-Nagios-Server][2]
### Adding a Remote Windows Host to Nagios Server
In this section, you will learn how to add a **Windows host** system to the **Nagios server**. For this to be possible, you need to install the **NSClient++** agent on the Windows host system. In this guide, we are going to install NSClient++ on a Windows Server 2019 Datacenter edition.
On the Windows host system, head to the download link <https://sourceforge.net/projects/nscplus/> and download the NSClient++ agent.
Once downloaded, double click on the downloaded installation file to launch the installation wizard.
[![NSClient-installer-Windows][2]][3]
On the first step of the installation procedure, click **Next**
[![click-nex-to-install-NSClient][2]][4]
In the next section, check off the **I accept the terms in the license Agreement** checkbox and click **Next**
[![Accept-terms-conditions-NSClient][2]][5]
Next, click on the **Typical** option from the list of options and click **Next**
[![click-on-Typical-option-NSClient-Installation][2]][6]
In the next step, leave the default settings as they are and click **Next**.
[![Define-path-NSClient-Windows][2]][7]
On the next page, specify your Nagios Core server's IP address, tick all the modules, and click **Next** as shown below.
[![Specify-Nagios-Server-IP-address-NSClient-Windows][2]][8]
Next, click on the **Install** option to commence the installation process.
[![Click-install-to-being-the-installation-NSClient][2]][9]
The installation process will start and will take a couple of seconds to complete. On the last step, click **Finish** to complete the installation and exit the wizard.
[![Click-finish-NSClient-Windows][2]][10]
To start the NSClient service, click on the **Start** menu and click on the **Start NSClient ++** option.
[![Click-start-NSClient-service-windows][2]][11]
To confirm that indeed the service is running, press **Windows Key + R**, type services.msc and hit **ENTER**. Scroll and search for the **NSClient** service and ensure it's running.
[![NSClient-running-windows][2]][12]
At this point, we have successfully installed NSClient++ on the Windows Server 2019 host and verified that it's running.
### Configure Nagios Server to monitor Windows host
After the successful installation of the NSClient ++ on the Windows host PC, log in to the Nagios server Core system and configure it to monitor the Windows host system.
Open the windows.cfg file using your favorite text editor
```
# vim /usr/local/nagios/etc/objects/windows.cfg
```
In the configuration file, ensure that the host_name attribute matches the hostname of your Windows client system. In our case, the hostname for the Windows server PC is windows-server. This hostname should apply for all the host_name attributes.
For the address attribute, specify your Windows host's IP address. In our case, this was 10.128.0.52.
![Specify-hostname-IP-Windows][2]
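As a rough sketch, the edited host definition might end up looking like this (the hostname and IP address come from this example setup; the template name and layout follow the stock windows.cfg, and the alias text is just illustrative):
```
define host{
        use             windows-server          ; inherit values from the Windows host template
        host_name       windows-server          ; must match the Windows client's hostname
        alias           Windows Server 2019
        address         10.128.0.52             ; IP address of the Windows host
        }
```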
After you are done, save the changes and exit the text editor.
Next, open the Nagios configuration file.
```
# vim /usr/local/nagios/etc/nagios.cfg
```
Uncomment the line below and save the changes.
cfg_file=/usr/local/nagios/etc/objects/windows.cfg
![Uncomment-Windows-cfg-Nagios][2]
Finally, to verify that Nagios configuration is free from any errors, run the command:
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
Output
![Verify-configuration-for-errors-Nagios][2]
As you can see from the output, there are no warnings or errors.
Now browse to your Nagios server's IP address, log in, and click on Hosts. Your Windows hostname, in this case windows-server, will appear on the dashboard.
![Windows-Host-added-Nagios][2]
### Adding a remote Linux Host to Nagios Server
Having added a Windows host to the Nagios server, let's add a Linux host system. In our case, we are going to add an **Ubuntu 18.04 LTS** system to the Nagios monitoring server. To monitor a Linux host, we need to install an agent on the remote Linux system called **NRPE**. NRPE is short for **Nagios Remote Plugin Executor**. This is the plugin that will allow you to monitor Linux host systems. It allows you to monitor resources such as swap, memory usage, and CPU load, to mention a few, on remote Linux hosts. So the first step is to install NRPE on the remote Ubuntu 18.04 LTS system.
But first, update the Ubuntu system:
```
# sudo apt update
```
Next, install Nagios NRPE by running the command as shown:
```
# sudo apt install nagios-nrpe-server nagios-plugins
```
![Install-nrpe-server-nagios-plugins][2]
After the successful installation of NRPE and the Nagios plugins, configure NRPE by opening its configuration file, /etc/nagios/nrpe.cfg:
```
# vim /etc/nagios/nrpe.cfg
```
Set the **server_address** attribute to the Linux host's IP address. In this case, 10.128.0.53 is the IP address of the Ubuntu 18.04 LTS system.
![Specify-server-address-Nagios][2]
Next, add the Nagios server's IP address to the allowed_hosts attribute; in this case, 10.128.0.50.
![Allowed-hosts-Nagios][2]
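Taken together, the two edited settings in /etc/nagios/nrpe.cfg would look roughly like this (the IP addresses are the ones used in this example environment):
```
# IP address the NRPE daemon listens on (the Ubuntu host itself)
server_address=10.128.0.53

# Hosts allowed to talk to NRPE; append the Nagios server's IP to the existing list
allowed_hosts=127.0.0.1,10.128.0.50
```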
Save and exit the configuration file.
Next, restart the NRPE service, enable it at boot, and verify its status:
```
# systemctl restart nagios-nrpe-server
# systemctl enable nagios-nrpe-server
# systemctl status nagios-nrpe-server
```
![Restart-nrpe-check-status][2]
### Configure Nagios Server to monitor Linux host
Having successfully installed NRPE and the Nagios plugins on the remote Linux server, log in to the Nagios server and install the EPEL (Extra Packages for Enterprise Linux) repository package:
```
# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
Next, install the NRPE plugin on the server:
```
# dnf install nagios-plugins-nrpe -y
```
After the installation of the NRPE plugin, open the Nagios configuration file “/usr/local/nagios/etc/nagios.cfg”
```
# vim /usr/local/nagios/etc/nagios.cfg
```
Next, uncomment the line below in the configuration file
cfg_dir=/usr/local/nagios/etc/servers
![uncomment-servers-line-Nagios-Server-CentOS8][2]
Next, create a configuration directory
```
# mkdir /usr/local/nagios/etc/servers
```
Then create client configuration file
```
# vim /usr/local/nagios/etc/servers/ubuntu-host.cfg
```
Copy and paste the configuration below to the file. This configuration monitors swap space, system load, total processes, logged in users, and disk usage.
```
define host{
        use                     linux-server
        host_name               ubuntu-nagios-client
        alias                   ubuntu-nagios-client
        address                 10.128.0.53
}

define hostgroup{
        hostgroup_name          linux-server
        alias                   Linux Servers
        members                 ubuntu-nagios-client
}

define service{
        use                     local-service
        host_name               ubuntu-nagios-client
        service_description     SWAP Usage
        check_command           check_nrpe!check_swap
}

define service{
        use                     local-service
        host_name               ubuntu-nagios-client
        service_description     Root / Partition
        check_command           check_nrpe!check_root
}

define service{
        use                     local-service
        host_name               ubuntu-nagios-client
        service_description     Current Users
        check_command           check_nrpe!check_users
}

define service{
        use                     local-service
        host_name               ubuntu-nagios-client
        service_description     Total Processes
        check_command           check_nrpe!check_total_procs
}

define service{
        use                     local-service
        host_name               ubuntu-nagios-client
        service_description     Current Load
        check_command           check_nrpe!check_load
}
```
Save and exit the configuration file.
Next, verify that there are no errors in Nagios configuration
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
Now restart Nagios service and ensure that it is up and running.
```
# systemctl restart nagios
```
Remember to open port 5666, which is used by the NRPE plugin, on the Nagios server's firewall.
```
# firewall-cmd --permanent --add-port=5666/tcp
# firewall-cmd --reload
```
![Allow-firewall-Nagios-server][2]
Likewise, head over to your Linux host (Ubuntu 18.04 LTS) and allow the port through the UFW firewall:
```
# ufw allow 5666/tcp
# ufw reload
```
![Allow-NRPE-service][2]
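Before checking the dashboard, you can optionally confirm from the Nagios server that the NRPE agent is reachable. The plugin path below is where the EPEL nagios-plugins-nrpe package normally installs check_nrpe, and the reported version will vary with your installation:
```
# /usr/lib64/nagios/plugins/check_nrpe -H 10.128.0.53
NRPE v3.2.1
```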
Finally, head over to the Nagios server's URL and click on **Hosts**. Your Ubuntu system will be displayed on the dashboard alongside the Windows host machine we added earlier.
![Linux-host-added-monitored-Nagios][2]
And this wraps up our 2-part series on Nagios installation and adding remote hosts. Feel free to get back to us with your feedback.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/add-windows-linux-host-to-nagios-server/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/NSClient-installer-Windows.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/11/click-nex-to-install-NSClient.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Accept-terms-conditions-NSClient.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/11/click-on-Typical-option-NSClient-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Define-path-NSClient-Windows.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Specify-Nagios-Server-IP-address-NSClient-Windows.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-install-to-being-the-installation-NSClient.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-finish-NSClient-Windows.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-start-NSClient-service-windows.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/11/NSClient-running-windows.jpg

View File

@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Impostor Syndrome)
[#]: via: (https://opensource.com/article/19/11/my-first-open-source-contribution-impostor-syndrome)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Impostor Syndrome
======
A new open source contributor documents a series of five mistakes she
made starting out in open source.
![Dandelion held out over water][1]
The story of my first mistake goes back to the beginning of my learn-to-code journey. I taught myself the basics through online resources. I was working through tutorials and projects, making progress but also looking for the next way to level up. Pretty quickly, I came across a blog post that told me the best way for beginners _just like me_ to take their coding skills to the next level was to contribute to open source.
> "Anyone can do this," insisted the post, "and it is a crucial part of participating in the larger developer community."
My internal impostor (who, for the purpose of this post, is the personification of my impostor syndrome) latched onto this idea. "Look, Galen," she said. "The only way to be a real developer is to contribute to open source." "Alrighty," I replied, and started following the instructions in the blog post to make a [GitHub][2] account. It took me under ten minutes to get so thoroughly confused that I gave up on the idea entirely. It wasn't that I was unwilling to learn, but the resources that I was depending on expected me to have quite a bit of preexisting knowledge about [Git][3], GitHub, and how these tools allowed multiple developers to collaborate on a single project.
"Maybe I'm not ready for this yet," I thought, and went back to my tutorials. "But the blog post said that anyone can do it, even beginners," my internal impostor nagged. Thus began a multi-year internal battle between the idea that contributing to open source was easy and valuable and I should be doing it, and the impression I was not yet _ready_ to write code for open source projects.
Even once I became comfortable with Git, my internal impostor was always eager to remind me of why I was not yet ready to contribute to open source. When I was in coding Bootcamp, she whispered: "Sure, you know Git and you write code, but you've never written real code before, only fake Bootcamp code. You're not qualified to contribute to real projects that people use and depend on." When I was working my first year at work as a Software Engineer, she chided, "Okay maybe the code you write is 'real,' but you only work with one codebase! What makes you think you can write high-quality code somewhere else with different conventions, frameworks, or even languages?"
It took me about a year and a half of full-time work to finally feel confident enough to shut down my internal impostor's arguments and go for my first pull request (PR). The irony here is that my internal impostor was the one talking me both into and out of contributing to open source.
### Harmful myths
There are two harmful myths here that I want to debunk.
#### Myth 1: Contributing to open source is "easy"
Throughout this journey, I frequently ran across the message that contributing to open source was supposed to be easy. This made me question my own skills when I found myself unable to "easily" get started.
I understand why people might say that contributing to open source is easy, but I suspect what they actually mean is "its an attainable goal," "its accessible to beginners if they put in the work," or "it is possible to contribute to open source without writing a ton of really complex code."
All of these things are true, but it is equally important to note that contributing to open source is difficult. It requires you to take the time to understand a new codebase _and_ understand the tools that developers use.
I definitely don't want to discourage beginners from trying. It is just important to remember that running into challenges is an expected part of the process.
#### Myth 2: All "real" or "good" developers contribute to open source
My internal impostor was continually reminding me that my lack of open source contributions was a blight on my developer career. In fact, even as I write this post, I feel guilty that I have not contributed more to open source. But while working on open source is a great way to learn and participate in the broader community of developers, it is not the only way to do this. You can also blog, attend meetups, work on side projects, read, mentor, or go home at the end of a long day at work and have a lovely relaxing evening. Contributing to open source is a challenge that can be fun and rewarding if it is the challenge you choose.
Julia Evans wrote a blog post called [Don't feel guilty about not contributing to open source][4], which is a healthy reminder that there are many productive ways to use your time as a developer. I highly recommend bookmarking it for any time you feel that guilt creeping in.
### Mistake number one
Mistake number one was letting my internal impostor guide me. I let her talk me out of contributing to open source for years by telling me I was not ready. Instead, I just did not understand the amount of work I would need to put in to get to the level where I felt confident in my ability to write code for an unfamiliar project (I am still working toward this). I also let her talk me into it, with the idea that I had to contribute to open source to prove my worth as a developer. The end result was still my first merged pull request in a widely used project, but the insecurity made my entire experience less enjoyable.
### Don't let Git get you down
If you want to learn more about Git, or if you are a beginner and Git is a blocker toward making your first open-source contribution, don't panic. Git is very complicated, and you are not expected to know what it is already. Once you get the hang of it, you will find that Git is a handy tool that lets many different developers work on the same project at the same time, and then merge their individual changes together.
There are many resources to help you learn about Git and Github (a site that hosts code so that people can collaborate on it with Git). Here are some suggestions on where to start: [_Hello World_ intro to GitHub][5] and _[Resources to learn Git][6]_.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/my-first-open-source-contribution-impostor-syndrome
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
[2]: https://github.com
[3]: https://git-scm.com
[4]: https://jvns.ca/blog/2014/04/26/i-dont-feel-guilty-about-not-contributing-to-open-source/
[5]: https://guides.github.com/activities/hello-world/
[6]: https://try.github.io/

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Making a decision)
[#]: via: (https://opensource.com/article/19/11/my-first-open-source-contribution-mistake-decisions)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Making a decision
======
A new open source contributor documents a series of five mistakes she
made starting out in open source.
![Lightbulb][1]
Previously, I put a lot of [blame on impostor syndrome][2] for delaying my first open source contribution. But there was another factor that I can't ignore: I can't make a decision to save my life. And with [millions][3] of open source projects to choose from, choosing one to contribute to is overwhelming. So overwhelming that I would often end up closing my laptop, thinking, "Maybe I'll just do this another day."
Mistake number two was letting my fear of making a decision get in the way of making my first contribution. In an ideal world, perhaps I would have come into my open source journey with a specific project in mind that I genuinely cared about and wanted to work on, but all I had was a vague goal of contributing to open source somehow. For those of you in the same position, here are strategies that helped me pick out the right project (or at least a good one) for my contribution.
### Tools that I used frequently
At first, I did not think it would be necessary to limit myself to tools or projects with which I was already familiar. There were projects that I had never used before but seemed like appealing candidates because of their active community, or the interesting problems that they solved.
However, given that I had a limited amount of time to devote to this project, I decided to stick with a tool that I already knew. To understand what a tool needs, you need to be familiar with how it is supposed to work. If you want to contribute to a project that you are unfamiliar with, you need to complete an additional step of getting to know the functionality and goals of the code. This extra load can be fun and rewarding, but it can also double your work time. Since my goal was primarily to contribute, sticking to what I knew was a helpful way to narrow things down. It is also rewarding to give back to a project that you have found useful.
### An active and friendly community
When choosing my project, I wanted to feel confident that someone would be there to review the code that I wrote. And, of course, I wanted the person who reviewed my code to be a nice person. Putting your work out there for public scrutiny is scary, after all. While I was open to constructive feedback, there were toxic corners of the developer community that I hoped to avoid.
To evaluate the community that I would be joining, I checked out the _issues_ sections of the repos that I was considering. I looked to see if someone from the core team responded regularly. More importantly, I tried to make sure that no one was talking down to each other in the comments (which is surprisingly common in issues discussions). I also looked out for projects that had a code of conduct, outlining what was appropriate vs. inappropriate behavior for online interaction.
### Clear contribution guidelines
Because this was my first time contributing to open source, I had a lot of questions around the process. Some project communities are excellent about documenting the procedures for choosing an issue and making a pull request. Although I did not select them at the time because I had never worked with the product before, [Gatsby][4] is an exemplar of this practice.
This type of clear documentation helped ease some of my insecurity about not knowing what to do. It also gave me hope that the project was open to new contributors and would take the time to look at my work. In addition to contribution guidelines, I looked in the issues section to see if the project was making use of the "good first issue" flag. This is another indication that the project is open to beginners (and helps you discover what to work on).
### Conclusion
If you don't already have a project in mind, choosing the right place to make your first open source contribution can be overwhelming. Coming up with a list of standards helped me narrow down my choices and find a great project for my first pull request.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/my-first-open-source-contribution-mistake-decisions
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh (Lightbulb)
[2]: https://opensource.com/article/19/10/my-first-open-source-contribution-mistakes
[3]: https://github.blog/2018-02-08-open-source-project-trends-for-2018/
[4]: https://www.gatsbyjs.org/contributing/

View File

@ -0,0 +1,434 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An introduction to monitoring with Prometheus)
[#]: via: (https://opensource.com/article/19/11/introduction-monitoring-prometheus)
[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
An introduction to monitoring with Prometheus
======
Prometheus is a popular and powerful toolkit to monitor Kubernetes. This
is a tutorial on how to get started.
![Wheel of a ship][1]
[Metrics are the primary way][2] to represent both the overall health of your system and any other specific information you consider important for monitoring and alerting or observability. [Prometheus][3] is a leading open source metric instrumentation, collection, and storage toolkit [built at SoundCloud][4] beginning in 2012. Since then, it's [graduated][5] from the Cloud Native Computing Foundation and become the de facto standard for Kubernetes monitoring. It has been covered in some detail in:
* [Getting started with Prometheus][6]
* [5 examples of Prometheus monitoring success][7]
* [Achieve high-scale application monitoring with Prometheus][8]
* [Tracking the weather with Python and Prometheus][9]
However, none of these articles focus on how to use Prometheus on Kubernetes. This article:
* Describes the Prometheus architecture and data model to help you understand how it works and what it can do
* Provides a tutorial on setting Prometheus up in a Kubernetes cluster and using it to monitor clusters and applications
### Architecture
While knowing how Prometheus works may not be essential to using it effectively, it can be helpful, especially if you're considering using it for production. The [Prometheus documentation][10] provides this graphic and details about the essential elements of Prometheus and how the pieces connect together.
[![Prometheus architecture][11]][10]
For most use cases, you should understand three major components of Prometheus:
1. The Prometheus **server** scrapes and stores metrics. Note that it uses a **persistence** layer, which is part of the server and not expressly mentioned in the documentation. Each node of the server is autonomous and does not rely on distributed storage. I'll revisit this later when looking to use a dedicated time-series database to store Prometheus data, rather than relying on the server itself.
2. The web **UI** allows you to access, visualize, and chart the stored data. Prometheus provides its own UI, but you can also configure other visualization tools, like [Grafana][12], to access the Prometheus server using PromQL (the Prometheus Query Language).
3. **Alertmanager** sends alerts from client applications, especially the Prometheus server. It has advanced features for deduplicating, grouping, and routing alerts and can route through other services like PagerDuty and OpsGenie.
The key to understanding Prometheus is that it fundamentally relies on **scraping**, or pulling, metrics from defined endpoints. This means that your application needs to expose an endpoint where metrics are available and instruct the Prometheus server how to scrape it (this is covered in the tutorial below). There are [exporters][13] for many applications that do not have an easy way to add web endpoints, such as [Kafka][14] and [Cassandra][15] (using the JMX exporter).
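To make the scraping model concrete, here is a rough sketch of a scrape job as it might appear in a Prometheus configuration file (often prometheus.yml, or a ConfigMap in Kubernetes); the job name, target hostname, and port are purely illustrative:
```
scrape_configs:
  - job_name: 'demo-app'              # arbitrary label for this group of targets
    metrics_path: /metrics            # the HTTP path the application exposes
    static_configs:
      - targets: ['demo-app:8080']    # host:port where metrics are served
```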
### Data model
Now that you understand how Prometheus works to scrape and store metrics, the next thing to learn is the kinds of metrics Prometheus supports. Some of the following information (noted with quotation marks) comes from the [metric types][16] section of the Prometheus documentation.
#### Counters and gauges
The two simplest metric types are **counter** and **gauge**. When getting started with Prometheus (or with time-series monitoring more generally), these are the easiest types to understand because it's easy to connect them to values you can imagine monitoring, like how much system resources your application is using or how many events it has processed.
> "A **counter** is a cumulative metric that represents a single monotonically increasing counter whose value can only **increase** or be **reset** to zero on restart. For example, you can use a counter to represent the number of requests served, tasks completed, or errors."
Because you cannot decrease a counter, it can and should be used only to represent cumulative metrics.
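Because a counter only increases, its raw value is rarely interesting on its own; in PromQL you usually query its rate of change, as the tutorial below does with the node_requests counter:
```
# Per-second increase of the node_requests counter, averaged over the last minute
rate(node_requests[1m])
```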
> "A **gauge** is a metric that represents a single numerical value that can arbitrarily go up and down. Gauges are typically used for measured values like [CPU] or current memory usage, but also 'counts' that can go up and down, like the number of concurrent requests."
#### Histograms and summaries
Prometheus supports two more complex metric types: [**histograms** and **summaries**][17]. There is ample opportunity for confusion here, given that they both track the number of observations _and_ the sum of observed values. One of the reasons you might choose to use them is that you need to calculate an average of the observed values. Note that they create multiple time series in the database; for example, they each create a sum of the observed values with a **_sum** suffix.
> "A **histogram** samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values."
This makes it an excellent candidate to track things like latency that might have a service level objective (SLO) defined against it. From the [documentation][17]:
> You might have an SLO to serve 95% of requests within 300ms. In that case, configure a histogram to have a bucket with an upper limit of 0.3 seconds. You can then directly express the relative amount of requests served within 300ms and easily alert if the value drops below 0.95. The following expression calculates it by job for the requests served in the last 5 minutes. The request durations were collected with a histogram called **http_request_duration_seconds**.
>
>
> ```
> sum(rate(http_request_duration_seconds_bucket{le="0.3"}[5m])) by (job) / sum(rate(http_request_duration_seconds_count[5m])) by (job)
> ```
Returning to definitions:
> "Similar to a histogram, a **summary** samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window."
The essential difference between summaries and histograms is that summaries calculate streaming φ-quantiles on the client-side and expose them directly, while histograms expose bucketed observation counts, and the calculation of quantiles from the buckets of a histogram happens on the server-side using the **histogram_quantile()** function.
If you are still confused, I suggest taking the following approach:
* Use gauges most of the time for straightforward time-series metrics.
* Use counters for things you know to increase monotonically, e.g., if you are counting the number of times something happens.
* Use histograms for latency measurements with simple buckets, e.g., one bucket for "under SLO" and another for "over SLO."
This should be sufficient for the overwhelming majority of use cases, and you should rely on a statistical analysis expert to help you with more advanced scenarios.
Now that you have a basic understanding of what Prometheus is, how it works, and the kinds of data it can collect and store, you're ready to begin the tutorial.
## Prometheus and Kubernetes hands-on tutorial
This tutorial covers the following:
* Installing Prometheus in your cluster
* Downloading the sample application and reviewing the code
* Building and deploying the app and generating load against it
* Accessing the Prometheus UI and reviewing the basic metrics
This tutorial assumes:
* You already have a Kubernetes cluster deployed.
* You have configured the **kubectl** command-line utility for access.
* You have the **cluster-admin** role (or at least sufficient privileges to create namespaces and deploy applications).
* You are running a Bash-based command-line interface. Adjust this tutorial if you run other operating systems or shell environments.
If you don't have Kubernetes running yet, this [Minikube tutorial][18] is an easy way to set it up on your laptop.
If you're ready now, let's go.
### Install Prometheus
In this section, you will clone the sample repository and use Kubernetes' configuration files to deploy Prometheus to a dedicated namespace.
1. Clone the sample repository locally and use it as your working directory: [code] $ git clone <https://github.com/yuriatgoogle/prometheus-demo.git>
$ cd  prometheus-demo
$ WORKDIR=$(pwd)
```
2. Create a dedicated namespace for the Prometheus deployment: [code]`$ kubectl create namespace prometheus`
```
3. Give your namespace the cluster reader role: [code] $ kubectl apply -f $WORKDIR/kubernetes/clusterRole.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
```
4. Create a Kubernetes configmap with scraping and alerting rules: [code] $ kubectl apply -f $WORKDIR/kubernetes/configMap.yaml -n prometheus
configmap/prometheus-server-conf created
```
5. Deploy Prometheus: [code] $ kubectl create -f prometheus-deployment.yaml -n prometheus
deployment.extensions/prometheus-deployment created
```
6. Validate that Prometheus is running: [code] $ kubectl get pods -n prometheus
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-deployment-78fb5694b4-lmz4r   1/1     Running   0          15s
```
### Review basic metrics
In this section, you'll access the Prometheus UI and review the metrics being collected.
1. Use port forwarding to enable web access to the Prometheus UI locally:
**Note:** Your **prometheus-deployment** will have a different name than this example. Review and replace the name of the pod from the output of the previous command. [code] $ kubectl port-forward prometheus-deployment-7ddb99dcb-fkz4d 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
```
2. Go to <http://localhost:8080> in a browser:
![Prometheus console][19]
You are now ready to query Prometheus metrics!
3. Some basic machine metrics (like the number of CPU cores and memory) are available right away. For example, enter **machine_memory_bytes** in the expression field, switch to the Graph view, and click Execute to see the metric charted:
![Prometheus metric channel][20]
4. Containers running in the cluster are also automatically monitored. For example, enter **rate(container_cpu_usage_seconds_total{container_name="prometheus"}[1m])** as the expression and click Execute to see the rate of CPU usage by Prometheus:
![CPU usage metric][21]
Now that you know how to install Prometheus and use it to measure some out-of-the-box metrics, it's time for some real monitoring.
#### Golden signals
As described in the "[Monitoring Distributed Systems][22]" chapter of [Google's SRE][23] book:
> "The four golden signals of monitoring are latency, traffic, errors, and saturation. If you can only measure four metrics of your user-facing system, focus on these four."
The book offers thorough descriptions of all four, but this tutorial focuses on the three signals that most easily serve as proxies for user happiness:
* **Traffic:** How many requests you're receiving
* **Error rate:** How many of those requests you can successfully serve
* **Latency:** How quickly you can serve successful requests
As you probably realize by now, Prometheus does not measure any of these for you; you'll have to instrument any application you deploy to emit them. Following is an example implementation.
Open the **$WORKDIR/node/golden_signals/app.js** file, which is a sample application written in Node.js (recall we cloned **yuriatgoogle/prometheus-demo** and exported **$WORKDIR** earlier). Start by reviewing the first section, where the metrics to be recorded are defined:
```
// total requests - counter
const nodeRequestsCounter = new prometheus.Counter({
    name: 'node_requests',
    help: 'total requests'
});
```
The first metric is a counter that will be incremented for each request; this is how the total number of requests is counted:
```
// failed requests - counter
const nodeFailedRequestsCounter = new prometheus.Counter({
    name: 'node_failed_requests',
    help: 'failed requests'
});
```
The second metric is another counter that increments for each error to track the number of failed requests:
```
// latency - histogram
const nodeLatenciesHistogram = new prometheus.Histogram({
    name: 'node_request_latency',
    help: 'request latency by path',
    labelNames: ['route'],
    buckets: [100, 400]
});
```
The third metric is a histogram that tracks request latency. Working with a very basic assumption that the SLO for latency is 100ms, you will create two buckets: one with an upper bound of 100ms and the other with an upper bound of 400ms.
The next section handles incoming requests, increments the total requests metric for each one, increments failed requests when there is an (artificially induced) error, and records a latency histogram value for each successful request. I have chosen not to record latencies for errors; that implementation detail is up to you.
```
app.get('/', (req, res) => {
    // start latency timer
    const requestReceived = new Date().getTime();
    console.log('request made');
    // increment total requests counter
    nodeRequestsCounter.inc();
    // return an error 1% of the time
    if ((Math.floor(Math.random() * 100)) == 100) {
        // increment error counter
        nodeFailedRequestsCounter.inc();
        // return error code
        res.send("error!", 500);
    }
    else {
        // delay for a bit
        sleep.msleep((Math.floor(Math.random() * 1000)));
        // record response latency
        const responseLatency = new Date().getTime() - requestReceived;
        nodeLatenciesHistogram
            .labels(req.route.path)
            .observe(responseLatency);
        res.send("success in " + responseLatency + " ms");
    }
})
```
#### Test locally
Now that you've seen how to implement Prometheus metrics, see what happens when you run the application.
1. Install the required packages: [code] $ cd $WORKDIR/node/golden_signals
$ npm install --save
```
2. Launch the app: [code]`$ node app.js`
```
3. Open two browser tabs: one to <http://localhost:8080> and another to <http://localhost:8080/metrics>.
4. When you go to the **/metrics** page, you can see the Prometheus metrics being collected and updated every time you reload the home page:
![Prometheus metrics being collected][24]
You're now ready to deploy the sample application to your Kubernetes cluster and test your monitoring.
#### Deploy monitoring to Prometheus on Kubernetes
Now it's time to see how metrics are recorded and represented in the Prometheus instance deployed in your cluster by:
* Building the application image
* Deploying it to your cluster
* Generating load against the app
* Observing the metrics recorded
##### Build the application image
The sample application provides a Dockerfile you'll use to build the image. This section assumes that you have:
* Docker installed and configured locally
* A Docker Hub account
* Created a repository
If you're using Google Kubernetes Engine to run your cluster, you can use Cloud Build and the Google Container Registry instead.
1. Switch to the application directory: [code]`$ cd $WORKDIR/node/golden_signals`
```
2. Build the image with this command: [code]`$ docker build . --tag=<Docker username>/prometheus-demo-node:latest`
```
3. Make sure you're logged in to Docker Hub: [code]`$ docker login`
```
4. Push the image to Docker Hub using this command: [code]`$ docker push <username>/prometheus-demo-node:latest`
```
5. Verify that the image is available: [code]`$ docker images`
```
#### Deploy the application
Now that the application image is in the Docker Hub, you can deploy it to your cluster and run the application.
1. Modify the **$WORKDIR/node/golden_signals/prometheus-demo-node.yaml** file to pull the image from Docker Hub: [code] spec:
      containers:
      - image: docker.io/<Docker username>/prometheus-demo-node:latest
```
2. Deploy the image: [code] $ kubectl apply -f $WORKDIR/node/golden_signals/prometheus-demo-node.yaml
deployment.extensions/prometheus-demo-node created
```
3. Verify that the application is running: [code] $ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
prometheus-demo-node-69688456d4-krqqr   1/1     Running   0          65s
```
4. Expose the application using a load balancer: [code] $ kubectl expose deployment prometheus-demo-node --type=LoadBalancer --name=prometheus-demo-node --port=8080
service/prometheus-demo-node exposed
```
5. Confirm that your service has an external IP address: [code] $ kubectl get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
kubernetes             ClusterIP      10.39.240.1     <none>           443/TCP          23h
prometheus-demo-node   LoadBalancer   10.39.248.129   35.199.186.110   8080:31743/TCP   78m
```
##### Generate load to test monitoring
Now that your service is up and running, generate some load against it by using [Apache Bench][25].
1. Get the IP address of your service as a variable: [code]`$ export SERVICE_IP=$(kubectl get svc prometheus-demo-node -ojson | jq -r '.status.loadBalancer.ingress[].ip')`
```
2. Use **ab** to generate some load. You may want to run this in a separate terminal window. [code]`$ ab -c 3 -n 1000 http://${SERVICE_IP}:8080/`
```
##### Review metrics
While the load is running, access the Prometheus UI in the cluster again and confirm that the "golden signal" metrics are being collected.
1. Establish a connection to Prometheus: [code]
$ kubectl get pods -n prometheus
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-deployment-78fb5694b4-lmz4r   1/1     Running   0          15s
$ kubectl port-forward prometheus-deployment-78fb5694b4-lmz4r 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
```
**Note:** Make sure to replace the name of the pod in the second command with the output of the first.
2. Open <http://localhost:8080> in a browser:
![Prometheus console][26]
3. Use this expression to measure the request rate: [code]`rate(node_requests[1m])`
```
![Measuring the request rate][27]
4. Use this expression to measure your error rate: [code]`rate(node_failed_requests[1m])`
```
![Measuring the error rate][28]
5. Finally, use this expression to validate your latency SLO. Remember that you set up two buckets, 100ms and 400ms. This expression returns the percentage of requests that meet the SLO: [code]`sum(rate(node_request_latency_bucket{le="100"}[1h])) / sum(rate(node_request_latency_count[1h]))`
```
![SLO query graph][29]
About 10% of the requests are within SLO. This is what you should expect since the code sleeps for a random number of milliseconds between 0 and 1,000. As such, about 90% of the time, it takes more than 100ms to respond, and this graph shows that you can't meet the latency SLO as a result.
### Summary
Congratulations! You've completed the tutorial and hopefully have a much better understanding of how Prometheus works, how to instrument your application with custom metrics, and how to use it to measure your SLO compliance. The next article in this series will look at another metric instrumentation approach using OpenCensus.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/introduction-monitoring-prometheus
作者:[Yuri Grinshteyn][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yuri-grinshteyn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Wheel of a ship)
[2]: https://opensource.com/article/19/10/open-source-observability-kubernetes
[3]: https://prometheus.io/
[4]: https://en.wikipedia.org/wiki/Prometheus_(software)#History
[5]: https://www.cncf.io/announcement/2018/08/09/prometheus-graduates/
[6]: https://opensource.com/article/18/12/introduction-prometheus
[7]: https://opensource.com/article/18/9/prometheus-operational-advantage
[8]: https://opensource.com/article/19/10/application-monitoring-prometheus
[9]: https://opensource.com/article/19/4/weather-python-prometheus
[10]: https://prometheus.io/docs/introduction/overview/
[11]: https://opensource.com/sites/default/files/uploads/prometheus-architecture.png (Prometheus architecture)
[12]: https://grafana.com/
[13]: https://prometheus.io/docs/instrumenting/exporters/
[14]: https://github.com/danielqsj/kafka_exporter
[15]: https://github.com/prometheus/jmx_exporter
[16]: https://prometheus.io/docs/concepts/metric_types/
[17]: https://prometheus.io/docs/practices/histograms/
[18]: https://opensource.com/article/18/10/getting-started-minikube
[19]: https://opensource.com/sites/default/files/uploads/prometheus-console.png (Prometheus console)
[20]: https://opensource.com/sites/default/files/uploads/prometheus-machine_memory_bytes.png (Prometheus metric channel)
[21]: https://opensource.com/sites/default/files/uploads/prometheus-cpu-usage.png (CPU usage metric)
[22]: https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/
[23]: https://landing.google.com/sre/sre-book/toc/
[24]: https://opensource.com/sites/default/files/uploads/prometheus-metrics-collected.png (Prometheus metrics being collected)
[25]: https://httpd.apache.org/docs/2.4/programs/ab.html
[26]: https://opensource.com/sites/default/files/uploads/prometheus-enable-query-history.png (Prometheus console)
[27]: https://opensource.com/sites/default/files/uploads/prometheus-request-rate.png (Measuring the request rate)
[28]: https://opensource.com/sites/default/files/uploads/prometheus-error-rate.png (Measuring the error rate)
[29]: https://opensource.com/sites/default/files/uploads/prometheus-slo-query.png (SLO query graph)

View File

@ -0,0 +1,221 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Generate Patching Compliance Report on CentOS/RHEL Systems)
[#]: via: (https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Generate Patching Compliance Report on CentOS/RHEL Systems
======
If you are running a large Linux environment, you may have already integrated your Red Hat systems with Satellite.
If so, there is a way to export the patching compliance report from the Satellite server, so you don't have to worry about generating it yourself.
But if you are running a small Red Hat environment without Satellite integration, or if your systems run CentOS, this script will help you create the report.
The patching compliance report is usually created once a month or once every three months, depending on the company's needs.
Add a cronjob based on your needs to automate this.
This **[bash script][1]** is generally suited to environments with fewer than 50 systems, but there is no hard limit.
Keeping systems up to date is an important task for Linux administrators; it keeps your machines stable and secure.
The following articles may help you to learn more about installing security patches on Red Hat (RHEL) and CentOS systems.
* **[How to check available security updates on Red Hat (RHEL) and CentOS system][2]**
* **[Four ways to install security updates on Red Hat (RHEL) &amp; CentOS systems][3]**
* **[Two methods to check or list out installed security updates on Red Hat (RHEL) &amp; CentOS system][4]**
Four **[shell scripts][5]** are included in this tutorial; pick the one that suits your needs.
### Method-1: Bash Script to Generate Patching Compliance Report for Security Errata on CentOS/RHEL Systems
This script creates a compliance report for security errata only. It sends the output via email in plain text.
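Note that every script in this article reads its target hosts from **/opt/scripts/server.txt** and queries each of them over SSH, so the reporting host needs password-less (key-based) SSH access to those servers. A minimal sketch of that prerequisite, with example host names:
```
# Create the server list, one host name per line
cat > /opt/scripts/server.txt << EOF
server1
server2
EOF

# Generate an SSH key (if you don't already have one) and copy it to every host in the list
ssh-keygen -t rsa
for server in `cat /opt/scripts/server.txt`
do
 ssh-copy-id $server
done
```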
```
# vi /opt/scripts/small-scripts/sec-errata.sh
#!/bin/sh
> /tmp/sec-up.txt   # truncate (or create) the report file before each run
SUBJECT="Patching Reports on `date`"
MESSAGE="/tmp/sec-up.txt"
TO="[email protected]"
echo "+---------------+-----------------------------+" >> $MESSAGE
echo "| Server_Name | Security Errata |" >> $MESSAGE
echo "+---------------+-----------------------------+" >> $MESSAGE
for server in `more /opt/scripts/server.txt`
do
# grep -vE drops the per-severity breakdown lines so only the total Security count remains
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
echo "$server $sec" >> $MESSAGE
done
echo "+---------------------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata.sh
```
You get an output like the one below.
```
# cat /tmp/sec-up.txt
+---------------+-------------------+
| Server_Name | Security Errata |
+---------------+-------------------+
server1
server2
server3 21
server4
+-----------------------------------+
```
Add the following cronjob to get the patching compliance report once a month.
```
# crontab -e
@monthly /bin/bash /opt/scripts/small-scripts/sec-errata.sh
```
### Method-1a: Bash Script to Generate Patching Compliance Report for Security Errata on CentOS/RHEL Systems
This script generates a compliance report for security errata. It sends the output via email with a CSV file attached.
```
# vi /opt/scripts/small-scripts/sec-errata-1.sh
#!/bin/sh
echo "Server Name, Security Errata" > /tmp/sec-up.csv
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
echo "$server, $sec" >> /tmp/sec-up.csv
done
echo "Patching Report for `date +"%B %Y"`" | mailx -s "Patching Report on `date`" -a /tmp/sec-up.csv [email protected]
rm /tmp/sec-up.csv
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-1.sh
```
You get an output like the one below.
![][6]
### Method-2: Bash Script to Generate Patching Compliance Report for Security Errata, Bugfix, and Enhancement on CentOS/RHEL Systems
This script generates a patching compliance report covering security errata, bug fixes, and enhancements. It sends the output via email in plain text.
```
# vi /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
#!/bin/sh
> /tmp/sec-up.txt   # truncate (or create) the report file before each run
SUBJECT="Patching Reports on "`date`""
MESSAGE="/tmp/sec-up.txt"
TO="[email protected]"
echo "+---------------+-------------------+--------+---------------------+" >> $MESSAGE
echo "| Server_Name | Security Errata | Bugfix | Enhancement |" >> $MESSAGE
echo "+---------------+-------------------+--------+---------------------+" >> $MESSAGE
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
bug=`ssh $server yum updateinfo summary | grep 'Bugfix' | tail -1 | awk '{print $1}'`
enhance=`ssh $server yum updateinfo summary | grep 'Enhancement' | tail -1 | awk '{print $1}'`
echo "$server $sec $bug $enhance" >> $MESSAGE
done
echo "+------------------------------------------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
```
You get an output like the one below.
```
# cat /tmp/sec-up.txt
+---------------+-------------------+--------+---------------------+
| Server_Name | Security Errata | Bugfix | Enhancement |
+---------------+-------------------+--------+---------------------+
server01 16
server02 5 16
server03 21 266 20
server04 16
+------------------------------------------------------------------+
```
Add the following cronjob to get the patching compliance report once every three months. It is scheduled to run on the 1st of January, April, July, and October.
```
# crontab -e
0 0 01 */3 * /bin/bash /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
```
### Method-2a: Bash Script to Generate Patching Compliance Report for Security Errata, Bugfix, and Enhancement on CentOS/RHEL Systems
This script generates a patching compliance report covering security errata, bug fixes, and enhancements. It sends the output via email with a CSV file attached.
```
# vi /opt/scripts/small-scripts/sec-errata-bugfix-enhancement-1.sh
#!/bin/sh
echo "Server Name, Security Errata,Bugfix,Enhancement" > /tmp/sec-up.csv
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
bug=`ssh $server yum updateinfo summary | grep 'Bugfix' | tail -1 | awk '{print $1}'`
enhance=`ssh $server yum updateinfo summary | grep 'Enhancement' | tail -1 | awk '{print $1}'`
echo "$server,$sec,$bug,$enhance" >> /tmp/sec-up.csv
done
echo "Patching Report for `date +"%B %Y"`" | mailx -s "Patching Report on `date`" -a /tmp/sec-up.csv [email protected]
rm /tmp/sec-up.csv
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-bugfix-enhancement-1.sh
```
You get an output like the one below.
![][6]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/bash-script/
[2]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
[3]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
[4]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/
[5]: https://www.2daygeek.com/category/shell-script/
[6]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7

View File

@ -0,0 +1,241 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Schedule and Automate tasks in Linux using Cron Jobs)
[#]: via: (https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Schedule and Automate tasks in Linux using Cron Jobs
======
Sometimes, you may have tasks that need to be performed on a regular basis or at certain predefined intervals. Such tasks include backing up databases, updating the system, performing periodic reboots, and so on. Tasks scheduled this way are referred to as **cron jobs**. Cron jobs are used for the **automation of tasks**, and they come in handy for simplifying the execution of repetitive and sometimes mundane work. **Cron** is the daemon that allows you to schedule these jobs, which are then carried out at the specified intervals. In this tutorial, you will learn how to schedule jobs using cron.
[![Schedule -tasks-in-Linux-using cron][1]][2]
### The Crontab file
A crontab file, also known as a **cron table**, is a simple text file that contains rules or commands that specify the time interval of execution of a task. There are two categories of crontab files:
**1)  System-wide crontab file**
These are usually used by Linux services and critical applications requiring root privileges. The system crontab file is located at **/etc/crontab** and can only be accessed and edited by the root user. It's usually used for the configuration of system-wide daemons. The crontab file looks as shown:
[![etc-crontab-linux][1]][3]
**2) User-created crontab files**
Linux users can also create their own cron jobs with the help of the crontab command. The cron jobs created will run as the user who created them.
All cron jobs are stored in /var/spool/cron (for RHEL and CentOS distros) or /var/spool/cron/crontabs (for Debian and Ubuntu distros), and they are listed under the username of the user that created them.
The **cron daemon** runs silently in the background, checking the **/etc/crontab** file and the **/var/spool/cron** and **/etc/cron.d/** directories.
The **crontab** command is used for editing cron files. Let us take a look at the anatomy of a crontab file.
### The anatomy of a crontab file
Before we go further, it's important that we first explore what a crontab file looks like. The basic syntax for a crontab file comprises 5 columns, represented by asterisks, followed by the command to be carried out.
*    *    *    *    *    command
This format can also be represented as shown below:
m h d moy dow command
OR
m h d moy dow /path/to/script
Let's expound on each entry (a worked example follows this list):
* **m**: This represents minutes. It's specified from 0 to 59
* **h**: This denotes the hour, specified from 0 to 23
* **d**: This represents the day of the month, specified between 1 and 31
* **moy**: This is the month of the year. It's specified between 1 and 12
* **dow**: This is the day of the week. It's specified between 0 and 6, where 0 = Sunday
* **Command**: This is the command to be executed, e.g. a backup command, reboot, or copy
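Putting the five fields together: a hypothetical entry that runs a (made-up) backup script at 2:15 am from Monday to Friday would look like this:
```
# m   h   d   moy  dow  command
15    2   *   *    1-5  /opt/scripts/backup-home.sh
```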
### Managing cron jobs
Having looked at the architecture of a crontab file, lets see how you can create, edit and delete cron jobs
**Creating cron jobs**
To create or edit a cron job as the root user, run the command
# crontab -e
To create a cron job or schedule a task as another user, use the syntax
# crontab -u username -e
For instance, to run a cron job as user Pradeep, issue the command:
# crontab -u Pradeep -e
If there is no preexisting crontab file, you will get a blank text document. If a crontab file already exists, the -e option allows you to edit it.
**Listing crontab files**
To view the cron jobs that have been created, simply pass the -l option as shown
# crontab -l
**Deleting a crontab file**
To delete a specific cron job, simply run crontab -e, delete the line of the cron job that you no longer want, and save the file.
To remove all cron jobs, run the command:
# crontab -r
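If your cron implementation supports it (cronie on CentOS/RHEL and vixie-cron on Debian/Ubuntu do; check `man crontab`), you can add the -i option so that crontab asks for confirmation before removing everything:
```
# crontab -i -r
```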
That said, let's have a look at the different ways that you can schedule tasks.
### Crontab examples for scheduling tasks
The scripts run by your cron jobs usually begin with a shebang header, as shown:
#!/bin/bash
This indicates the interpreter that should run the script, which in this case is the bash shell.
Next, specify the interval at which you want to schedule the tasks using the cron job entries we specified earlier on.
To reboot a system daily at 12:30 pm, use the syntax:
30  12 *  *  * /sbin/reboot
To schedule the reboot at 4:00 am use the syntax:
0  4  *  *  *  /sbin/reboot
**NOTE:** The asterisk (*) is used to match all possible values for a field.
To run a script twice every day, for example, 4:00 am and 4:00 pm, use the syntax.
0  4,16  *  *  *  /path/to/script
To schedule a cron job to run every Friday at 5:00 pm  use the syntax:
0  17  *  *  Fri  /path/to/script
OR
0  17  *  *  5  /path/to/script
If you wish to run your cron job every 30 minutes then use:
*/30  *  *  *  * /path/to/script
To schedule a cron job to run every 5 hours, use:
0  */5  *  *  *  /path/to/script
To run a script on selected days, for example, Wednesday and Friday at 6.00 pm execute:
0  18  *  *  wed,fri  /path/to/script
To schedule multiple tasks to use a single cron job, separate the tasks using a semicolon for example:
*  *  *  *  *  /path/to/script1 ; /path/to/script2
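If the second task should only run when the first one succeeds, the same idea can be written with && instead of the semicolon (the paths are placeholders):
```
*  *  *  *  *  /path/to/script1 && /path/to/script2
```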
### Using special strings to save time on writing cron jobs
Some of the cron jobs can easily be configured using special strings that correspond to certain time intervals. For example,
1)  @hourly timestamp corresponds to  0 * * * *
It will execute a task in the first minute of every hour.
@hourly /path/to/script
2) @daily timestamp is equivalent to  0 0 * * *
It executes a task in the first minute of every day (midnight). It comes in handy when executing daily jobs.
  @daily /path/to/script
3) The @weekly timestamp is equivalent to 0 0 * * 0
It executes a cron job in the first minute of every week, where the week begins on Sunday.
 @weekly /path/to/script
4) @monthly is similar to the entry 0 0 1 * *
It carries out a task in the first minute of the first day of the month.
  @monthly /path/to/script
5) @yearly corresponds to 0 0 1 1 *
It executes a task in the first minute of every year and is useful for sending New Year greetings 🙂
@yearly /path/to/script
### Crontab Restrictions
As a Linux user, you can control who has the right to use the crontab command. This is possible using the **/etc/cron.deny** and **/etc/cron.allow** files. By default, only the /etc/cron.deny file exists, and it does not contain any entries. To restrict a user from using the crontab utility, simply add the user's username to this file. When a user listed there tries to run the crontab command, they will encounter the error below.
![restricted-cron-user][1]
To allow the user to continue using the crontab utility,  simply remove the username from the /etc/cron.deny file.
If /etc/cron.allow file is present, then only the users listed in the file can access and use the crontab utility.
If neither file exists, then only the root user will have privileges to use the crontab command.
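As a quick sketch, assuming root privileges and a hypothetical user named alice, blocking and later re-allowing crontab access looks like this:
```
# Block alice from using crontab
echo "alice" >> /etc/cron.deny

# Re-allow her later by removing the entry
sed -i '/^alice$/d' /etc/cron.deny
```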
### Backing up crontab entries
It's always advisable to back up your crontab entries. To do so, use the syntax:
# crontab -l > /path/to/file.txt
For example,
```
# crontab -l > /home/james/backup.txt
```
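To restore those entries later, feed the backup file back to crontab. Note that this replaces the current user's crontab with the contents of the file (the path reuses the example above):
```
# crontab /home/james/backup.txt
```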
**Checking cron logs**
Cron logs are stored in the /var/log/cron file (on RHEL and CentOS systems). To view the cron logs, run the command:
```
# cat /var/log/cron
```
![view-cron-log-files-linux][1]
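On Debian- and Ubuntu-based systems there is usually no /var/log/cron file; cron activity is logged via syslog instead, so a rough equivalent is:
```
# grep CRON /var/log/syslog
```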
To view live logs, use the tail command as shown:
```
# tail -f /var/log/cron
```
![view-live-cron-logs][1]
**Conclusion**
In this guide, you learned how to create cron jobs to automate repetitive tasks, how to back up your crontab entries, and how to view cron logs. We hope that this article provided useful insights with regard to cron jobs. Please don't hesitate to share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Schedule-tasks-in-Linux-using-cron.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/etc-crontab-linux.png

View File

@ -0,0 +1,50 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Make a fork of the repo)
[#]: via: (https://opensource.com/article/19/11/first-open-source-contribution-fork-clone)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Make a fork of the repo
======
Which comes first, to clone or fork a repo?
![User experience vs. design][1]
Previously, I explained [how I ultimately chose a project][2] for my contributions. Once I finally picked that project and a task to work on, I felt like the hard part was over, and I slid into cruise control. I knew what to do next, no question. Just clone the repository so that I have the code on my computer, make a new branch for my work, and get coding, right?
It turns out I made a crucial mistake at this step. Unfortunately, I didn't realize that I had made a mistake until several hours later, when I tried to push my completed code back up to GitHub and got a permission-denied error. My third mistake was trying to work directly from a clone of the repo.
When you want to contribute to someone else's repo, in most cases you should not clone the repo directly. Instead, you should make a fork of the repo and clone that. You do all of your work on a branch of your fork. Then, when you are ready to make a pull request, you can compare your branch on the fork against the master branch of the original repo.
Before this, I had only ever worked on repos that I either created or had collaborator permissions for, so I could work directly from a clone of the main repo. I did not realize that GitHub even offered the capability to make a pull request from a repo fork onto the original repo. Now that I've learned a bit about this option, it is a great feature that makes sense. Forking allows a project to open the ability to contribute to anyone with a GitHub account without having to add them all as "contributors." It also helps keep the main project clean by keeping most new branches on forks, so that they don't create clutter.
I would have preferred to know this before I started writing my code (or, in this case, finished writing my code, since I didn't attempt to push any of my changes to GitHub until the end). Moving my changes over from the main repo that I originally worked on into the fork was non-trivial.
For those of you getting started, here are the steps to make a PR on a repository that you do not own, or where you are not a collaborator. I highly recommend trying to push your code to GitHub and at least going through the steps of creating a PR before you get too deep into coding, just to make sure you have everything set up the right way (a command-line sketch of these steps follows the list):
1. Make a fork of the repo you've chosen for your contributions.
2. From the fork, click **Clone or download** to create a copy on your computer.
**Optional:** [Add the base repository as a remote "upstream,"][3] which is helpful if you want to pull down new changes from the base repository into your fork.
3. [Create a pull request from the branch on your fork into the master branch of the base repository.][4]
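For reference, here is a minimal command-line sketch of those steps; the user name, project name, and branch name are placeholders rather than values from a real project:
```
# Step 2: clone your fork, not the original repository
git clone https://github.com/<your-username>/<project>.git
cd <project>

# Optional: add the base repository as the "upstream" remote
git remote add upstream https://github.com/<original-owner>/<project>.git
git fetch upstream

# Do your work on a branch of the fork and push it back to the fork
git checkout -b my-contribution
# ...edit and commit...
git push -u origin my-contribution

# Step 3: on GitHub, open a pull request from <your-username>:my-contribution
# against the master branch of the base repository
```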
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/first-open-source-contribution-fork-clone
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_DesirePath.png?itok=N_zLVWlK (User experience vs. design)
[2]: https://opensource.com/article/19/10/first-open-source-contribution-mistake-two
[3]: https://help.github.com/en/articles/configuring-a-remote-for-a-fork
[4]: https://help.github.com/en/articles/creating-a-pull-request-from-a-fork

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why you don't have to be afraid of Kubernetes)
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
为什么你不必害怕 Kubernetes
======
Kubernetes 绝对是满足复杂 web 应用程序需求的最简单,最容易的方法。
![Digital creative of a browser on the internet][1]
在 90 年代末和 00 年代初,在大型网络媒体资源上工作很有趣。我的经历让我想起了 American Greetings Interactive在情人节那天我们拥有互联网上排名前 10 位之一的网站(以网络访问量衡量)。我们为 [AmericanGreetings.com][2]、[BlueMountain.com][3] 等公司提供了电子贺卡,并为 MSN 和 AOL 等合作伙伴提供了电子贺卡。该组织的老员工仍然深切地记得与 Hallmark 等其它电子贺卡网站进行大战的史诗般的故事。顺便说一句,我还为 Holly Hobbie、Care Bears 和 Strawberry Shortcake 经营大型网站。
我记得就像那是昨天发生的一样,这是我们第一次遇到真正的问题。通常,我们的前门(路由器、防火墙和负载均衡器)有大约 200Mbps 的流量进入。但是突然之间Multi Router Traffic GrapherMRTG图示在几分钟内飙升至 2Gbps。我疯了似地东奔西跑。我了解了我们的整个技术堆栈从路由器、交换机、防火墙和负载均衡器到 Linux/Apache web 服务器,到我们的 Python 堆栈FastCGI 的元版本以及网络文件系统NFS服务器。我知道所有配置文件在哪里我可以访问所有管理界面并且我是一位经验丰富的系统管理员具有多年解决复杂问题的经验。
但是,我无法弄清楚发生了什么……
当你在一千个 Linux 服务器上疯狂地键入命令时,五分钟的感觉就像是永恒。我知道站点可能会在任何时候崩溃,因为当它被划分成更小的集群时,压垮上千个节点的集群是那么的容易。
我迅速 _跑到_ 老板的办公桌前,解释了情况。他几乎没有从电子邮件中抬头,这使我感到沮丧。他抬头看了看,笑了笑,说道:“是的,市场营销可能会开展广告活动。有时会发生这种情况。”他告诉我在应用程序中设置一个特殊标志,以减轻 Akamai 的访问量。 我跑回我的办公桌,在上千台 web 服务器上设置了标志,几分钟后,该站点恢复正常。灾难也就被避免了。
我可以再分享 50 个类似的故事,但你脑海中可能会有一点好奇:“这种运维方式将走向何方?”
关键是,我们遇到了业务问题。当技术问题使你无法开展业务时,它们就变成了业务问题。换句话说,如果你的网站无法访问,你就不能处理客户交易。
那么,所有这些与 Kubernetes 有什么关系?一切。世界已经改变。早在 90 年代末和 00 年代初,只有大型网站才出现大型网络规模级的问题。现在,有了微服务和数字化转型,每个企业都面临着一个大型的网络规模级的问题——可能是多个大型的网络规模级的问题。
你的企业需要能够通过许多不同的人构建的许多不同的,通常是复杂的服务来管理复杂的网络规模的资产。你的网站需要动态地处理流量,并且它们必须是安全的。这些属性需要在所有层(从基础结构到应用程序层)上由 API 驱动。
### 进入 Kubernetes
Kubernetes 并不复杂你的业务问题才是。当你想在生产环境中运行应用程序时要满足性能伸缩性、抖动等和安全性要求就需要最低程度的复杂性。诸如高可用性HA、容量要求N+1、N+2、N+100以及保证最终一致性的数据技术等就会成为必需。这些是每家进行数字化转型的公司的生产要求而不仅仅是 Google、Facebook 和 Twitter 这样的大型网站。
在旧时代,我还在 American Greetings 任职时,每次我们加入一个新的服务,它看起来像这样:所有这些都是由网络运营团队来处理的,没有一个是通过标签系统转移给其他团队来处理的。这是在 DevOps 出现之前的 DevOps
1. 配置 DNS通常是内部服务层和面向外部的公众
2. 配置负载均衡器(通常是内部服务和面向公众的)
3. 配置对文件的共享访问(大型 NFS 服务器,群集文件系统等)
4. 配置集群软件(数据库,服务层等)
5. 配置 web 服务器群集(可以是 10 或 50 个服务器)
大多数配置是通过配置管理自动完成的,但是配置仍然很复杂,因为每个系统和服务都有不同的配置文件,而且格式完全不同。我们研究了像 [Augeas][4] 这样的工具来简化它,但是我们认为使用转换器来尝试和标准化一堆不同的配置文件是一种反模式。
如今,借助 Kubernetes启动一项新服务本质上看起来如下
1. 配置 Kubernetes YAML/JSON。
2. 提交给 Kubernetes API`kubectl create -f service.yaml`)。
Kubernetes 大大简化了服务的启动和管理。服务所有者(无论是系统管理员,开发人员还是架构师)都可以创建 Kubernetes 格式的 YAML/JSON 文件。使用 Kubernetes每个系统和每个用户都说相同的语言。所有用户都可以在同一 Git 存储库中提交这些文件,从而启用 GitOps。
而且,可以弃用和删除服务。从历史上看,删除 DNS 条目负载平衡器条目web 服务器配置等是非常可怕的,因为你几乎肯定会破坏某些东西。使用 Kubernetes所有内容都被命名为名称空间因此可以通过单个命令删除整个服务。尽管你仍然需要确保其它应用程序不使用它微服务和功能即服务FaaS的缺点但你可以更加确信删除服务不会破坏基础架构环境。
### 构建,管理和使用 Kubernetes
太多的人专注于构建和管理 Kubernetes 而不是使用它(详见 [_Kubernetes 是一辆翻斗车_][5])。
在单个节点上构建一个简单的 Kubernetes 环境并不比安装 LAMP 堆栈复杂多少,但是我们无休止地争论着构建与购买的问题。并不是 Kubernetes 很难,难的是以高可用性大规模运行应用程序。建立一个复杂的、高可用的 Kubernetes 集群很困难,因为要建立如此规模的任何集群都是很困难的。它需要规划和大量软件。建造一辆简单的翻斗车并不复杂,但是建造一辆可以运载 [10 吨灰尘并能以 200mph 的速度稳定行驶的卡车][6]则很复杂。
管理 Kubernetes 可能很复杂,因为管理大型网络规模的集群可能很复杂。有时,管理此基础架构很有意义;而有时不是。由于 Kubernetes 是一个社区驱动的开源项目,它使行业能够以多种不同方式对其进行管理。供应商可以出售托管版本,而用户可以根据需要自行决定对其进行管理。(但是你应该质疑是否确实需要。)
使用 Kubernetes 是迄今为止运行大规模网络资源的最简单方法。Kubernetes 正在普及运行一组大型、复杂的 Web 服务的能力——就像当年 Linux 在 Web 1.0 中所做的那样。
由于时间和金钱是一个零和游戏,因此我建议将重点放在使用 Kubernetes 上。将你的时间和金钱花费在[掌握 Kubernetes 原语][7]或处理[活跃度和就绪性探针][8]的最佳方法上(另一个例子表明大型、复杂的服务很难)。不要专注于构建和管理 Kubernetes。在构建和管理上许多供应商可以为你提供帮助。
### 结论
我记得对无数的问题进行了故障排除,比如我在这篇文章的开头所描述的问题(当时 Linux 内核中的 NFS、我们自产的 CFEngine、仅在某些 web 服务器上出现的重定向问题等)。开发人员无法帮助我解决所有这些问题。实际上,除非开发人员具备高级系统管理员的技能,否则他们甚至不可能进入系统并作为第二组眼睛提供帮助。没有带有图形或“可观察性”的控制台——可观察性在我和其他系统管理员的大脑中。如今,有了 Kubernetes、Prometheus、Grafana 等,一切都改变了。
关键是:
1. 时代不一样了。现在,所有 web 应用程序都是大型的分布式系统。就像 AmericanGreetings.com 过去一样复杂,现在每个网站都需要该站点的扩展性和 HA 要求。
2. 运行大型的分布式系统是很困难的,就是这样。这是业务需求使然,不是 Kubernetes 的问题。使用更简单的协调器并不是解决方案。
Kubernetes 绝对是满足复杂 web 应用程序需求的最简单、最容易的方法。这是我们生活的时代,而 Kubernetes 擅长于此。你可以讨论是否应该自己构建或管理 Kubernetes。有很多供应商可以帮助你构建和管理它但是很难否认这是大规模运行复杂 web 应用程序的最简单方法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: http://AmericanGreetings.com
[3]: http://BlueMountain.com
[4]: http://augeas.net/
[5]: https://linux.cn/article-11011-1.html
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
[7]: https://linux.cn/article-11036-1.html
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html

View File

@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keyboard Shortcuts to Speed Up Your Work in Linux)
[#]: via: (https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/)
[#]: author: (S Sathyanarayanan https://opensourceforu.com/author/s-sathyanarayanan/)
在 Linux 中加速工作的键盘快捷键
======
[![Google Keyboard][1]][2]
_操作鼠标、键盘和菜单会占用我们很多时间,而这些时间可以通过使用键盘快捷键来节省。这不仅节省时间,还可以使用户更高效。_
你是否意识到每次在打字时从键盘切换到鼠标最多需要两秒钟?如果一个人每天工作八小时,每分钟从键盘切换到鼠标一次,并且一年中大约有 240 个工作日,那么所浪费的时间(根据 Brainscape 的计算)为:
_[每分钟浪费 2 秒] x [每天 480 分钟] x [每年 240 个工作日] = 每年浪费 64 小时_
这相当于损失了八个工作日,因此学习键盘快捷键将使生产率提高 3.3%(_<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_)。
键盘快捷键提供了一种更快的方式来执行任务,不然就需要使用鼠标和/或菜单分多个步骤来完成。图 1 列出了 Ubuntu 18.04 Linux 和 Web 浏览器中一些最常用的快捷方式。我省略了非常有名的快捷方式例如复制、粘贴等以及不经常使用的快捷方式。读者可以参考在线资源以获得完整的快捷方式列表。请注意Windows 键在 Linux 中被重命名为 Super 键。
**常规快捷方式**
下面列出了常规快捷方式。
[![][3]][4]
**打印屏幕和屏幕录像**
以下快捷方式可用于打印屏幕或录制屏幕视频。
[![][5]][6]
**在应用之间切换**
此处列出的快捷键可用于在应用之间切换。
[![][7]][8]
**平铺窗口**
可以使用下面提供的快捷方式以不同方式将窗口平铺。
[![][9]][10]
**浏览器快捷方式**
此处列出了浏览器最常用的快捷方式。大多数快捷键对于 Chrome/Firefox 浏览器是通用的。
**组合键** | **行为**
---|---
Ctrl + T | 打开一个新标签。
Ctrl + Shift + T | 打开最近关闭的标签。
Ctrl + D | 添加一个新书签。
Ctrl + W | 关闭浏览器标签。
Alt + D | 将光标置于浏览器的地址栏中。
F5 或 Ctrl-R | 刷新页面。
Ctrl + Shift + Del | 清除私人数据和历史记录。
Ctrl + N | 打开一个新窗口。
Home| 滚动到页面顶部。
End | 滚动到页面底部。
Ctrl + J | 打开下载文件夹(在 Chrome 中)。
F11 | 全屏视图(切换效果)
**终端快捷方式**
这是终端快捷方式的列表。
[![][11]][12]
你还可以在 Ubuntu 中配置自己的自定义快捷方式,如下所示(列表之后附有一个等效的命令行示例):
* 在 Ubuntu Dash 中单击“设置”。
* 在“设置”窗口的左侧菜单中选择“设备”选项卡。
* 在“设备”菜单中选择“键盘”标签。
* 右面板的底部有个 “+” 按钮。点击 “+” 号打开自定义快捷方式对话框并配置新的快捷方式。
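如果你更喜欢命令行,在 GNOME 桌面(Ubuntu 18.04 默认即是)上也可以用 `gsettings` 来添加自定义快捷键。下面只是一个示意性的例子,快捷键的名称、命令和按键组合都是假设的,请按需替换:
```
# 注册一个自定义快捷键槽位(注意:这会覆盖已有的自定义快捷键列表)
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"

# 为该槽位设置名称、要执行的命令和按键组合
KEY=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KEY name '打开终端'
gsettings set $KEY command 'gnome-terminal'
gsettings set $KEY binding '<Super>Return'
```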
学习本文提到的三个快捷方式可以节省大量时间,并使你的工作效率更高。
**引用**
_Cohen, Andrew. How keyboard shortcuts could revive America's economy; [www.brainscape.com][13]. [Online] Brainscape, 26 May 2017; <https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_
![Avatar][14]
[S Sathyanarayanan][15]
作者目前在斯里萨蒂亚赛古尔巴加人类卓越大学工作。他在系统管理和 IT 课程教学方面拥有 25 年以上的经验。他是 FOSS 的积极推动者,可以通过 [sathyanarayanan.brn@gmail.com][16] 与他联系。
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/
作者:[S Sathyanarayanan][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/s-sathyanarayanan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?resize=696%2C418&ssl=1 (Google Keyboard)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?fit=750%2C450&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?resize=350%2C319&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?resize=350%2C326&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?resize=350%2C264&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?resize=350%2C186&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?resize=350%2C250&ssl=1
[12]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?ssl=1
[13]: http://www.brainscape.com
[14]: https://secure.gravatar.com/avatar/736684a2707f2ed7ae72675edf7bb3ee?s=100&r=g
[15]: https://opensourceforu.com/author/s-sathyanarayanan/
[16]: mailto:sathyanarayanan.brn@gmail.com

View File

@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Update a Fedora Linux System [Beginners Tutorial])
[#]: via: (https://itsfoss.com/update-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
如何更新 Fedora Linux 系统[入门教程]
======
_**本快速教程介绍了更新 Fedora Linux 安装的多种方法。**_
前几天,我安装了[新发布的 Fedora 31][1]。老实说,这是我第一次使用[非 Ubuntu 发行版][2]。
安装 Fedora 之后,我做的第一件事就是尝试安装一些软件。我打开软件中心,发现该软件中心已“损坏”,无法从中安装任何应用程序。
我不确定我的安装出了什么问题。在团队内部讨论时Abhishek 建议我先更新系统。我更新了,更新后一切恢复正常。更新 [Fedora][3] 系统后,软件中心也能正常工作了。
有时我们只是忽略了对系统的更新,而继续对我们所面临的问题进行故障排除。 不管问题有多大或多小,为了避免它们,你都应该保持系统更新。
在本文中,我将向你展示更新 Fedora Linux 系统的多种方法:
* [使用软件中心更新 Fedora][4]
* [使用命令行更新 Fedora][5]
* [从系统设置更新 Fedora][6]
请记住,更新 Fedora 意味着安装安全补丁,更新内核和软件。 如果要从 Fedora 的一个版本更新到另一个版本,这称为版本升级,你可以[在此处阅读有关 Fedora 版本升级过程的信息][7]。
### 从软件中心更新 Fedora
![软件中心][8]
你很可能会收到通知,提示有一些系统更新需要查看,单击该通知即可启动软件中心。
你所要做的就是点击“更新”,并验证 root 密码以开始更新。
如果你没有收到更新的通知,只需启动软件中心并转到“更新”选项卡即可。现在,你只需要继续更新。
### 使用终端更新 Fedora
如果由于某种原因无法加载软件中心,则可以使用 dnf 软件包管理命令轻松地更新系统。
只需启动终端并输入以下命令即可开始更新(系统将提示你确认 root 密码):
```
sudo dnf upgrade
```
**dnf update vs dnf upgrade**
你会发现有两个可用的 dnf 命令:`dnf update` 和 `dnf upgrade`。这两个命令执行相同的工作,即安装 Fedora 提供的所有更新。那么,为什么会有 `dnf update` 和 `dnf upgrade` 两个命令,你应该使用哪一个呢?`dnf update` 基本上是 `dnf upgrade` 的别名。尽管 `dnf update` 可能仍然有效,但最好使用 `dnf upgrade`,因为这才是真正的命令。
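如果你只想先查看有哪些可用更新而暂不安装,或者想强制刷新仓库元数据后再升级,可以使用下面的命令(这只是补充示例,`check-update` 和 `--refresh` 都是 dnf 自带的选项):
```
sudo dnf check-update      # 仅列出可用更新,不安装
sudo dnf upgrade --refresh # 先刷新仓库元数据,再安装所有更新
```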
### 从系统设置中更新 Fedora
![][9]
如果其它方法都不行(或者由于某种原因已经进入系统设置),请导航至设置底部的“详细信息”选项。
如上图所示,该选项中显示了操作系统和硬件的详细信息,以及一个“检查更新”按钮。你只需要单击它并提供 root/admin 密码,即可继续安装可用的更新。
**总结**
如上所述,更新 Fedora 非常容易。有三种方法供你选择,因此无需担心。
如果你按上述说明操作时发现任何问题,请随时在下面的评论部分告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-fedora/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/fedora-31-release/
[2]: https://itsfoss.com/non-ubuntu-beginner-linux/
[3]: https://getfedora.org/
[4]: #software-center
[5]: #command-line
[6]: #system-settings
[7]: https://itsfoss.com/upgrade-fedora-version/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/software-center.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/system-settings-fedora-1.png?ssl=1