Merge pull request #9 from LCTT/master

update
This commit is contained in:
Morisun029 2019-11-08 20:50:13 +08:00 committed by GitHub
commit a5a8f5f57a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
59 changed files with 6838 additions and 2926 deletions

View File

@ -0,0 +1,289 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11538-1.html)
[#]: subject: (How RPM packages are made: the spec file)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)
如何编写 RPM 的 spec 文件
======
![][1]
在[关于 RPM 软件包构建的上一篇文章][2]中,你了解到了源 RPM 包括软件的源代码以及 spec 文件。这篇文章深入研究了 spec 文件,该文件中包含了有关如何构建 RPM 的指令。同样,本文以 `fpaste` 为例。
### 了解源代码
在开始编写 spec 文件之前,你需要对要打包的软件有所了解。在这里,你正在研究 `fpaste`,这是一个非常简单的软件。它是用 Python 编写的,并且是一个单文件脚本。当它发布新版本时,可在 Pagure 上找到:<https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz>
如该档案文件所示,当前版本为 0.3.9.2。下载它,以便你查看该档案文件中的内容:
```
$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste
```
你要安装的文件是:
* `fpaste.py`:应该安装到 `/usr/bin/`
* `docs/man/en/fpaste.1`:手册,应放到 `/usr/share/man/man1/`
* `COPYING`:许可证文本,应放到 `/usr/share/license/fpaste/`
* `README.rst`、`TODO`:放到 `/usr/share/doc/fpaste/` 下的其它文档。
这些文件的安装位置取决于文件系统层次结构标准FHS。要了解更多信息可以在这里阅读<http://www.pathname.com/fhs/> 或查看 Fedora 系统的手册页:
```
$ man hier
```
#### 第一部分:要构建什么?
现在我们知道了源文件中有哪些文件,以及它们要存放的位置,让我们看一下 spec 文件。你可以在此处查看这个完整的文件:<https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec>
这是 spec 文件的第一部分:
```
Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
Requires: python3
%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin
```
`Name`、`Version` 等称为*标签*,它们是 RPM 中预先定义好的。这意味着你不能随意编造标签RPM 无法理解它们!需要注意的标签是:
* `Source0`:告诉 RPM 该软件的源代码档案文件所在的位置。
* `Requires`列出软件的运行时依赖项。RPM 可以自动检测很多依赖项,但是在某些情况下,必须手动指明它们。运行时依赖项是系统上必须具有的功能(通常是软件包),才能使该软件包起作用。这是 [dnf][3] 在安装此软件包时检测是否需要拉取其他软件包的方式。
* `BuildRequires`:列出了此软件的构建时依赖项。这些通常必须手动确定并添加到 spec 文件中。
* `BuildArch`:此软件为该计算机体系结构所构建。如果省略此标签,则将为所有受支持的体系结构构建该软件。值 `noarch` 表示该软件与体系结构无关(例如 `fpaste`,它完全是用 Python 编写的)。
本节提供有关 `fpaste` 的常规信息:它是什么,正在将什么版本制作为 RPM其许可证等等。如果你已安装 `fpaste`,并查看其元数据时,则可以看到该 RPM 中包含的以下信息:
```
$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...
```
RPM 会自动添加一些其他标签,以代表它所知道的内容。
至此,我们掌握了要为其构建 RPM 的软件的一般信息。接下来,我们开始告诉 RPM 做什么。
#### 第二部分:准备构建
spec 文件的下一部分是准备部分,用 `%prep` 表示:
```
%prep
%autosetup
```
对于 `fpaste`,这里唯一的命令是 `%autosetup`。这只是将 tar 档案文件提取到一个新文件夹中,并为下一部分的构建阶段做好了准备。你可以在此处执行更多操作,例如应用补丁程序,出于不同目的修改文件等等。如果你查看过 Python 的源 RPM 的内容,那么你会在那里看到许多补丁。这些都将在本节中应用。
通常spec 文件中带有 `%` 前缀的所有内容都是 RPM 以特殊方式解释的宏或标签。这些通常会带有大括号,例如 `%{example}`。
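如果想知道这些宏在你自己的系统上具体展开成什么,可以用 `rpm --eval` 查询一下。下面是一个简单的示例(输出是 Fedora 30 上的典型默认值,在你的系统上可能略有不同):
```
$ rpm --eval '%{_bindir}'
/usr/bin
$ rpm --eval '%{_mandir}'
/usr/share/man
$ rpm --eval '%{?dist}'
.fc30
```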
#### 第三部分:构建软件
下一部分是构建软件的位置,用 `%build` 表示。现在,由于 `fpaste` 是一个简单的纯 Python 脚本,因此无需构建。因此,这里是:
```
%build
#nothing required
```
不过,通常来说,你会在此处使用构建命令,例如:
```
configure; make
```
构建部分通常是 spec 文件中最难的部分因为这是从源代码构建软件的地方。这要求你知道该工具使用的是哪个构建系统该系统可能是许多构建系统之一Autotools、CMake、Meson、Setuptools用于 Python等等。每个都有自己的命令和语法样式。你需要充分了解这些才能正确构建软件。
#### 第四部分:安装文件
软件构建后,需要在 `%install` 部分中安装它:
```
%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}
```
在构建 RPM 时RPM 不会修改你的系统文件。在一个可以正常运行的系统上添加、删除或修改文件的风险太大。如果发生故障怎么办因此RPM 会创建一个专门打造的文件系统并在其中工作。这称为 `buildroot`。因此,在 `buildroot` 中,我们创建由宏 `%{_bindir}` 代表的 `/usr/bin` 目录,然后使用提供的 `Makefile` 将文件安装到其中。
至此,我们已经在专门打造的 `buildroot` 中安装了 `fpaste` 的构建版本。
#### 第五部分:列出所有要包括在 RPM 中的文件
spec 文件其后的一部分是文件部分:`%files`。在这里,我们告诉 RPM 从该 spec 文件创建的档案文件中包含哪些文件。`fpaste` 的文件部分非常简单:
```
%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING
```
请注意,在这里,我们没有指定 `buildroot`,所有这些路径都是相对于它的。`%doc` 和 `%license` 命令做的稍微多一点,它们会创建所需的文件夹,并记住这些文件必须放在那里。
RPM 很聪明。例如,如果你在 `%install` 部分中安装了文件,但未列出它们,它会提醒你。
#### 第六部分:在变更日志中记录所有变更
Fedora 是一个基于社区的项目。许多贡献者维护或共同维护软件包。因此当务之急是不要被软件包做了哪些更改所搞混。为了确保这一点spec 文件包含的最后一部分是变更日志 `%changelog`
```
%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild
* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
* Tue Jul 24 2018 Ankur Sinha - 0.3.9.2-1
- Update to 0.3.9.2
* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild
* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec
* Fri Sep 08 2017 Ankur Sinha - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....
```
spec 文件的*每项*变更都必须有一个变更日志条目。如你在此处看到的,虽然我以维护者身份更新了该 spec 文件,但其他人也做过更改。清楚地记录变更内容有助于所有人知道该 spec 文件的当前状态。对于系统上安装的所有软件包,都可以使用 `rpm` 来查看其更改日志:
```
$ rpm -q --changelog fpaste
```
### 构建 RPM
现在我们准备构建 RPM 包。如果要继续执行以下命令,请确保遵循[上一篇文章][2]中的步骤设置系统以构建 RPM。
我们将 `fpaste` 的 spec 文件放置在 `~/rpmbuild/SPECS` 中,将源代码档案文件存储在 `~/rpmbuild/SOURCES/` 中,现在可以创建源 RPM 了:
```
$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec
$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
```
让我们看一下结果:
```
$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```
我们看到源 RPM 已构建。让我们同时构建源 RPM 和二进制 RPM
```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..
```
RPM 将向你显示完整的构建输出,并在我们之前看到的每个部分中详细说明它的工作。此“构建日志”非常重要。当构建未按预期进行时,我们的打包人员将花费大量时间来遍历它们,以跟踪完整的构建路径来查看出了什么问题。
就是这样!准备安装的 RPM 应该位于以下位置:
```
$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
```
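如果想在本机上试装刚构建出来的这个 RPM可以直接把文件路径交给 `dnf`(下面只是一个示意,文件名以你实际构建出来的为准):
```
$ sudo dnf install ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm
```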
### 概括
我们已经介绍了如何从 spec 文件构建 RPM 的基础知识。这绝不是一份详尽的文档。实际上,它根本不是文档。它只是试图解释幕后的运作方式。简短回顾一下:
* RPM 有两种类型:源 RPM 和 二进制 RPM。
* 二进制 RPM 包含要安装以使用该软件的文件。
* 源 RPM 包含构建二进制 RPM 所需的信息:完整的源代码,以及 spec 文件中的有关如何构建 RPM 的说明。
* spec 文件包含多个部分,每个部分都有其自己的用途。
  
在这里,我们已经在安装好的 Fedora 系统中本地构建了 RPM。虽然这是个基本的过程但我们从存储库中获得的 RPM 是建立在具有严格配置和方法的专用服务器上的,以确保正确性和安全性。这个 Fedora 打包流程将在以后的文章中讨论。
你想开始构建软件包,并帮助 Fedora 社区维护我们提供的大量软件吗?你可以[从这里开始加入软件包集合维护者][4]。
如有任何疑问,请发布到 [Fedora 开发人员邮件列表][5],我们随时乐意为你提供帮助!
### 参考
这里有一些构建 RPM 的有用参考:
* <https://fedoraproject.org/wiki/How_to_create_an_RPM_package>
* <https://docs.fedoraproject.org/en-US/quick-docs/create-hello-world-rpm/>
* <https://docs.fedoraproject.org/en-US/packaging-guidelines/>
* <https://rpm.org/documentation.html>
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/
作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://linux.cn/article-11527-1.html
[3]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[4]: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
[5]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/

View File

@ -0,0 +1,246 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11546-1.html)
[#]: subject: (Building CI/CD pipelines with Jenkins)
[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins)
[#]: author: (Bryant Son https://opensource.com/users/brson)
用 Jenkins 构建 CI/CD 流水线
======
> 通过这份 Jenkins 分步教程构建持续集成和持续交付CI/CD流水线。
![](https://img.linux.net.cn/data/attachment/album/201911/07/001349rbbbswpeqnnteeee.jpg)
在我的文章《[使用开源工具构建 DevOps 流水线的初学者指南][2]》中,我分享了一个从头开始构建 DevOps 流水线的故事。推动该计划的核心技术是 [Jenkins][3]这是一个用于建立持续集成和持续交付CI/CD流水线的开源工具。
在花旗,有一个单独的团队为专用的 Jenkins 流水线提供稳定的主从节点环境但是该环境仅用于质量保证QA、构建阶段和生产环境。开发环境仍然是非常手动的我们的团队需要对其进行自动化以在加快开发工作的同时获得尽可能多的灵活性。这就是我们决定为 DevOps 建立 CI/CD 流水线的原因。Jenkins 的开源版本由于其灵活性、开放性、强大的插件功能和易用性而成为显而易见的选择。
在本文中,我将分步演示如何使用 Jenkins 构建 CI/CD 流水线。
### 什么是流水线?
在进入本教程之前,了解有关 CI/CD <ruby>流水线<rt>pipeline</rt></ruby>的知识会很有帮助。
首先,了解 Jenkins 本身并不是流水线这一点很有帮助。只是创建一个新的 Jenkins 作业并不能构建一条流水线。可以把 Jenkins 看做一个遥控器在这里点击按钮即可。当你点击按钮时会发生什么取决于遥控器要控制的内容。Jenkins 为其他应用程序 API、软件库、构建工具等提供了一种插入 Jenkins 的方法它可以执行并自动化任务。Jenkins 本身不执行任何功能,但是随着其它工具的插入而变得越来越强大。
流水线是一个单独的概念,指的是按顺序连接在一起的事件或作业组:
> “<ruby>流水线<rt>pipeline</rt></ruby>”是可以执行的一系列事件或作业。
理解流水线的最简单方法是可视化一系列阶段,如下所示:
![Pipeline example][4]
在这里,你应该看到两个熟悉的概念:<ruby>阶段<rt>Stage</rt></ruby><ruby>步骤<rt>Step</rt></ruby>
* 阶段:一个包含一系列步骤的块。阶段块可以命名为任何名称;它用于可视化流水线过程。
* 步骤:表明要做什么的任务。步骤定义在阶段块内。
在上面的示例图中,阶段 1 可以命名为 “构建”、“收集信息”或其它名称,其它阶段块也可以采用类似的思路。“步骤”则指出了要执行的内容,它可以是简单的打印命令(例如,`echo "Hello, World"`)、程序执行命令(例如,`java HelloWorld`、shell 执行命令(例如,`chmod 755 Hello`)或任何其他命令,只要通过 Jenkins 环境将其识别为可执行命令即可。
Jenkins 流水线以**编码脚本**的形式提供,通常称为 “Jenkinsfile”尽管可以用不同的文件名。下面这是一个简单的 Jenkins 流水线文件的示例:
```
// Example of Jenkins pipeline script
pipeline {
  stages {
    stage("Build") {
      steps {
          // Just print a Hello, Pipeline to the console
          echo "Hello, Pipeline!"
          // Compile a Java file. This requires JDK configuration from Jenkins
          javac HelloWorld.java
          // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
          java HelloWorld
          // Executes the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
          mvn clean package ./HelloPackage
          // List the files in current directory path by executing a default shell command
          sh "ls -ltr"
      }
   }
   // And next stages if you want to define further...
  } // End of stages
} // End of pipeline
```
从此示例脚本很容易看到 Jenkins 流水线的结构。请注意,默认情况下某些命令(如 `java`、`javac` 和 `mvn`)不可用,需要通过 Jenkins 进行安装和配置。因此:
> Jenkins 流水线是一种以定义的方式依次执行 Jenkins 作业的方法,方法是将其编码并在多个块中进行结构化,这些块可以包含多个任务的步骤。
好。既然你已经了解了 Jenkins 流水线是什么,我将向你展示如何创建和执行 Jenkins 流水线。在本教程的最后,你将建立一个 Jenkins 流水线,如下所示:
![Final Result][5]
### 如何构建 Jenkins 流水线
为了便于遵循本教程的步骤,我创建了一个示例 [GitHub 存储库][6]和一个视频教程。
- [视频](https://img.linux.net.cn/static/video/_-jDPwYgDVKlg.mp4)
开始本教程之前,你需要:
* Java 开发工具包JDK如果尚未安装请安装 JDK 并将其添加到环境路径中,以便可以通过终端执行 Java 命令(如 `java -jar`)。这是利用本教程中使用的 Java Web ArchiveWAR版本的 Jenkins 所必需的(尽管你可以使用任何其他发行版)。
* 基本计算机操作能力:你应该知道如何键入一些代码、通过 shell 执行基本的 Linux 命令以及打开浏览器。
让我们开始吧。
#### 步骤一:下载 Jenkins
导航到 [Jenkins 下载页面][7]。向下滚动到 “Generic Java package (.war)”,然后单击下载文件;将其保存在易于找到的位置。(如果你选择其他 Jenkins 发行版,除了步骤二之外,本教程的其余步骤应该几乎相同。)使用 WAR 文件的原因是它是个一次性可执行文件,可以轻松地执行和删除。
![Download Jenkins as Java WAR file][8]
#### 步骤二:以 Java 二进制方式执行 Jenkins
打开一个终端窗口,并使用 `cd <your path>` 进入下载 Jenkins 的目录。(在继续之前,请确保已安装 JDK 并将其添加到环境路径。)执行以下命令,该命令将 WAR 文件作为可执行二进制文件运行:
```
java -jar ./jenkins.war
```
如果一切顺利Jenkins 应该在默认端口 8080 上启动并运行。
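如果 8080 端口已经被其它程序占用,也可以在启动时换一个端口。`--httpPort` 是 Jenkins WAR 包支持的启动参数,下面以 9090 端口为例:
```
java -jar ./jenkins.war --httpPort=9090
```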
![Execute as an executable JAR binary][9]
#### 步骤三:创建一个新的 Jenkins 作业
打开一个 Web 浏览器并导航到 `localhost:8080`。除非你有以前安装的 Jenkins否则应直接转到 Jenkins 仪表板。点击 “Create New Jobs”。你也可以点击左侧的 “New Item”。
![Create New Job][10]
#### 步骤四:创建一个流水线作业
在此步骤中,你可以选择并定义要创建的 Jenkins 作业类型。选择 “Pipeline” 并为其命名例如“TestPipeline”。单击 “OK” 创建流水线作业。
![Create New Pipeline Job][11]
你将看到一个 Jenkins 作业配置页面。向下滚动以找到 “Pipeline” 部分。有两种执行 Jenkins 流水线的方法。一种方法是在 Jenkins 上直接编写流水线脚本,另一种方法是从 SCM源代码管理中检索 Jenkins 文件。在接下来的两个步骤中,我们将体验这两种方式。
#### 步骤五:通过直接脚本配置并执行流水线作业
要使用直接脚本执行流水线,请首先从 GitHub 复制该 [Jenkinsfile 示例][6]的内容。选择 “Pipeline script” 作为 “Destination”然后将该 Jenkinsfile 的内容粘贴到 “Script” 中。花一些时间研究一下 Jenkinsfile 的结构。注意共有三个阶段Build、Test 和 Deploy这些名字是随意取的可以是任何名称。每个阶段中都有一些步骤在此示例中它们只是打印一些随机消息。
单击 “Save” 以保留更改,这将自动将你带回到 “Job Overview” 页面。
![Configure to Run as Jenkins Script][12]
要开始构建流水线的过程,请单击 “Build Now”。如果一切正常你将看到第一个流水线如下面的这个
![Click Build Now and See Result][13]
要查看流水线脚本构建的输出,请单击任何阶段,然后单击 “Log”。你会看到这样的消息。
![Visit sample GitHub with Jenkins get clone link][14]
#### 步骤六:通过 SCM 配置并执行流水线作业
现在,换个方式:在此步骤中,你将通过从源代码控制的 GitHub 中复制 Jenkinsfile 来部署相同的 Jenkins 作业。在同一个 [GitHub 存储库][6]中,通过单击 “Clone or download” 并复制其 URL 来找到其存储库 URL。
![Checkout from GitHub][15]
单击 “Configure” 以修改现有作业。滚动到 “Advanced Project Options” 设置,但这一次,从 “Destination” 下拉列表中选择 “Pipeline script from SCM” 选项。将 GitHub 存储库的 URL 粘贴到 “Repository URL” 中,然后在 “Script Path” 中键入 “Jenkinsfile”。 单击 “Save” 按钮保存。
![Change to Pipeline script from SCM][16]
要构建流水线,回到 “Task Overview” 页面后,单击 “Build Now” 以再次执行作业。结果与之前相同,除了多了一个称为 “Declarative: Checkout SCM” 的阶段。
![Build again and verify][17]
要查看来自 SCM 构建的流水线的输出,请单击该阶段并查看 “Log” 以检查源代码控制克隆过程的进行情况。
![Verify Checkout Procedure][18]
### 除了打印消息,还能做更多
恭喜你!你已经建立了第一个 Jenkins 流水线!
“但是等等”,你说,“这太有限了。除了打印无用的消息外,我什么都做不了。”那没问题。到目前为止,本教程仅简要介绍了 Jenkins 流水线可以做什么,但是你可以通过将其与其他工具集成来扩展其功能。以下是给你的下一个项目的一些思路:
* 建立一个多阶段的 Java 构建流水线,从以下阶段开始:从 Nexus 或 Artifactory 之类的 JAR 存储库中拉取依赖项、编译 Java 代码、运行单元测试、打包为 JAR/WAR 文件,然后部署到云服务器。
* 实现一个高级代码测试仪表板,该仪表板将基于 Selenium 的单元测试、负载测试和自动用户界面测试,报告项目的运行状况。
* 构建多流水线或多用户流水线,以自动化执行 Ansible 剧本的任务,同时允许授权用户响应正在进行的任务。
* 设计完整的端到端 DevOps 流水线,该流水线可提取存储在 SCM 中的基础设施资源文件和配置文件(例如 GitHub并通过各种运行时程序执行该脚本。
学习本文结尾处的任何教程,以了解这些更高级的案例。
#### 管理 Jenkins
在 Jenkins 主面板,点击 “Manage Jenkins”。
![Manage Jenkins][19]
#### 全局工具配置
有许多可用工具,包括管理插件、查看系统日志等。单击 “Global Tool Configuration”。
![Global Tools Configuration][20]
#### 增加附加能力
在这里,你可以添加 JDK 路径、Git、Gradle 等。配置工具后,只需将该命令添加到 Jenkinsfile 中或通过 Jenkins 脚本执行即可。
![See Various Options for Plugin][21]
### 后继
本文为你介绍了使用酷炫的开源工具 Jenkins 创建 CI/CD 流水线的方法。要了解你可以使用 Jenkins 完成的许多其他操作,请在 Opensource.com 上查看以下其他文章:
* [Jenkins X 入门][22]
* [使用 Jenkins 安装 OpenStack 云][23]
* [在容器中运行 Jenkins][24]
* [Jenkins 流水线入门][25]
* [如何与 Jenkins 一起运行 JMeter][26]
* [将 OpenStack 集成到你的 Jenkins 工作流中][27]
你可能对我为你的开源之旅而写的其他一些文章感兴趣:
* [9 个用于构建容错系统的开源工具][28]
* [了解软件设计模式][29]
* [使用开源工具构建 DevOps 流水线的初学者指南][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines)
[2]: https://linux.cn/article-11307-1.html
[3]: https://jenkins.io/
[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example)
[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result)
[6]: https://github.com/bryantson/CICDPractice
[7]: https://jenkins.io/download/
[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file)
[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary)
[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job)
[11]: https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job)
[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script)
[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result)
[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link)
[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub)
[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM)
[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify)
[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure)
[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins)
[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration)
[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin)
[22]: https://opensource.com/article/18/11/getting-started-jenkins-x
[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins
[24]: https://linux.cn/article-9741-1.html
[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101
[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco
[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system
[29]: https://opensource.com/article/19/7/understanding-software-design-patterns

View File

@ -0,0 +1,251 @@
[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11552-1.html)
[#]: subject: (How to program with Bash: Syntax and tools)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-1)
[#]: author: (David Both https://opensource.com/users/dboth)
怎样用 Bash 编程:语法和工具
======
> 让我们通过本系列文章来学习基本的 Bash 编程语法和工具,以及如何使用变量和控制运算符,这是三篇中的第一篇。
![](https://img.linux.net.cn/data/attachment/album/201911/08/092559r5wdg0w97dtf350j.jpg)
Shell 是操作系统的命令解释器,其中 Bash 是我最喜欢的。每当用户或者系统管理员将命令输入系统的时候Linux 的 shell 解释器就会把这些命令转换成操作系统可以理解的形式。而执行结果返回 shell 程序后,它会将结果输出到 STDOUT标准输出默认情况下这些结果会[显示在你的终端][2]。所有我熟悉的 shell 同时也是一门编程语言。
Bash 是个功能强大的 shell包含众多便捷特性比如tab 补全、命令回溯和再编辑、别名等。它的命令行默认编辑模式是 Emacs但是我最喜欢的 Bash 特性之一是我可以将其更改为 Vi 模式,以使用那些储存在我肌肉记忆中的编辑命令。
然而,如果你把 Bash 当作单纯的 shell 来用,则无法体验它的真实能力。我在设计一套包含三卷的 [Linux 自学课程][3]时(这个系列的文章正是基于此课程),了解到许多 Bash 的知识,这些是我在过去 20 年的 Linux 工作经验中所没有掌握的,其中的一些知识就是关于 Bash 的编程用法。不得不说Bash 是一门强大的编程语言,是一个能够同时用于命令行和 shell 脚本的完美设计。
本系列文章将要探讨如何使用 Bash 作为命令行界面CLI编程语言。第一篇文章简单介绍 Bash 命令行编程、变量以及控制运算符。其他文章会讨论以下话题Bash 文件的类型;字符串、数字和一些逻辑运算符,它们能够提供代码执行流程中的逻辑控制;不同类型的 shell 扩展;以及通过 `for`、`while` 和 `until` 来控制循环操作。
### Shell
Bash 是 Bourne Again Shell 的缩写,因为 Bash shell 是 [基于][4] 更早的 Bourne shell后者是 Steven Bourne 在 1977 年开发的。另外还有很多[其他的 shell][5] 可以使用,但下面四个是我经常见到的:
* `csh`C shell 适合那些习惯了 C 语言语法的开发者。
* `ksh`Korn shell由 David Korn 开发,在 Unix 用户中更流行。
* `tcsh`:一个 csh 的变种,增加了一些易用性。
* `zsh`Z shell集成了许多其他流行 shell 的特性。
所有 shell 都有内置命令,用以补充或替代核心工具集。打开 shell 的 man 说明页,找到 “BUILT-INS” 那一段,可以查看都有哪些内置命令。
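想快速判断某个命令是内置命令还是外部程序,可以用 Bash 的内置命令 `type` 看一下(输出会因别名设置和系统语言而略有差异):
```
$ type cd echo ls
cd is a shell builtin
echo is a shell builtin
ls is aliased to `ls --color=auto'
```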
每种 shell 都有它自己的特性和语法风格。我用过 csh、ksh 和 zsh但我还是更喜欢 Bash。你可以多试几个寻找更适合你的 shell尽管这可能需要花些功夫。但幸运的是切换不同 shell 很简单。
所有这些 shell 既是编程语言又是命令解释器。下面我们来快速浏览一下 Bash 中集成的编程结构和工具。
### 作为编程语言的 Bash
大多数场景下,系统管理员都会使用 Bash 来发送简单明了的命令。但 Bash 不仅可以输入单条命令,很多系统管理员可以编写简单的命令行程序来执行一系列任务,这些程序可以作为通用工具,能节省时间和精力。
编写 CLI 程序的目的是要提高效率(做一个“懒惰的”系统管理员)。在 CLI 程序中,你可以用特定顺序列出若干命令,逐条执行。这样你就不用盯着显示屏,等待一条命令执行完,再输入另一条,省下来的时间就可以去做其他事情了。
### 什么是“程序”?
自由在线计算机词典([FOLDOC][6])对于程序的定义是:“由计算机执行的指令,而不是运行它们的物理硬件。”普林斯顿大学的 [WordNet][7] 将程序定义为:“……计算机可以理解并执行的一系列指令……”[维基百科][8]上也有一条不错的关于计算机程序的条目。
总结一下,程序由一条或多条指令组成,目的是完成一个具体的相关任务。对于系统管理员而言,一段程序通常由一系列的 shell 命令构成。Linux 下所有的 shell至少是我所熟知的都有基本的编程功能Bash 作为大多数 Linux 发行版的默认 shell也不例外。
本系列用 Bash 举例(因为它无处不在),假如你使用一个不同的 shell 也没关系,尽管结构和语法有所不同,但编程思想是相通的。有些 shell 支持某种特性而其他 shell 则不支持但它们都提供编程功能。Shell 程序可以被存在一个文件中被反复使用,或者在需要的时候才创建它们。
### 简单 CLI 程序
最简单的命令行程序只有一或两条语句,它们可能相关,也可能无关,在按回车键之前被输入到命令行。程序中的第二条语句(如果有的话)可能取决于第一条语句的操作,但也不是必须的。
这里需要特别讲解一个标点符号。当你在命令行输入一条命令,按下回车键的时候,其实在命令的末尾有一个隐含的分号(`;`)。当一段 CLI shell 程序在命令行中被串起来作为单行指令使用时,必须使用分号来终结每个语句并将其与下一条语句分开。但 CLI shell 程序中的最后一条语句可以使用显式或隐式的分号。
### 一些基本语法
下面的例子会阐明这一语法规则。这段程序由单条命令组成,还有一个显式的终止符:
```
[student@studentvm1 ~]$ echo "Hello world." ;
Hello world.
```
看起来不像一个程序,但它确是我学习每个新编程语言时写下的第一个程序。不同语言可能语法不同,但输出结果是一样的。
让我们扩展一下这段微不足道却又无所不在的代码。你的结果可能与我的有所不同,因为我的家目录有点乱,而你可能是在 GUI 桌面中第一次登录账号。
```
[student@studentvm1 ~]$ echo "My home directory." ; ls ;
My home directory.
chapter25 TestFile1.Linux dmesg2.txt Downloads newfile.txt softlink1 testdir6
chapter26 TestFile1.mac dmesg3.txt file005 Pictures Templates testdir
TestFile1 Desktop dmesg.txt link3 Public testdir Videos
TestFile1.dos dmesg1.txt Documents Music random.txt testdir1
```
现在是不是更明显了。结果是相关的,但是两条语句彼此独立。你可能注意到我喜欢在分号前后多输入一个空格,这样会让代码的可读性更好。让我们再运行一遍这段程序,这次不要带结尾的分号:
```
[student@studentvm1 ~]$ echo "My home directory." ; ls
```
输出结果没有区别。
### 关于变量
像所有其他编程语言一样Bash 支持变量。变量是个象征性的名字,它指向内存中的某个位置,那里存着对应的值。变量的值是可以改变的,所以它叫“变~量”。
Bash 不像 C 之类的语言,需要强制指定变量类型,比如:整型、浮点型或字符型。在 Bash 中,所有变量都是字符串。整数型的变量可以被用于整数运算,这是 Bash 唯一能够处理的数学类型。更复杂的运算则需要借助 [bc][9] 这样的命令,可以被用在命令行编程或者脚本中。
给变量赋值后,就可以在命令行程序或者脚本中引用这些值。赋值时直接使用变量名,名字前面不加 `$` 符号。比如,`VAR=10` 这样会把 `VAR` 的值设为 `10`。要打印变量的值,你可以使用语句 `echo $VAR`。变量名必须以字母(即非数字)开始。
Bash 会保存已经定义好的变量,直到它们被取消掉。
下面这个例子,在变量被赋值前,它的值是空(`null`)。然后给它赋值并打印出来,检验一下。你可以在同一行 CLI 程序里完成它:
```
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello World" ; echo $MyVar ;

Hello World
[student@studentvm1 ~]$
```
*注意:变量赋值的语法非常严格,等号(`=`)两边不能有空格。*
那个空行表明了 `MyVar` 的初始值为空。变量的赋值和改值方法都一样,这个例子展示了原始值和新的值。
正如之前说的Bash 支持整数运算,当你想计算一个数组中的某个元素的位置,或者做些简单的算术运算,这还是挺有帮助的。然而,这种方法并不适合科学计算,或是某些需要小数运算的场景,比如财务统计。这些场景有其它更好的工具可以应对。
下面是个简单的算术题:
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1*Var2))"
Result = 63
```
好像没啥问题,但如果运算结果是浮点数会发生什么呢?
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1/Var2))"
Result = 0
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var2/Var1))"
Result = 1
[student@studentvm1 ~]$
```
结果会被取整。请注意,运算被包含在 `echo` 语句之中,其实计算在 `echo` 命令结束前就已经完成了,原因是 Bash 的内部优先级。想要了解详情的话,可以在 Bash 的 man 页面中搜索 “precedence”。
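如果确实需要小数结果,可以把运算交给前面提到的 `bc`。下面是一个简单的示意(假设系统中已经安装了 `bc`
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "scale=3; $Var1/$Var2" | bc
.777
```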
### 控制运算符
Shell 的控制运算符是一种语法运算符,可以轻松地创建一些有趣的命令行程序。在命令行上按顺序将几个命令串在一起,就变成了最简单的 CLI 程序:
```
command1 ; command2 ; command3 ; command4 ; . . . ; etc. ;
```
只要不出错,这些命令都能顺利执行。但假如出错了怎么办?你可以预设好应对出错的办法,这就要用到 Bash 内置的控制运算符 `&&``||`。这两种运算符提供了流程控制功能,使你能改变代码执行的顺序。分号也可以被看做是一种 Bash 运算符,预示着新一行的开始。
`&&` 运算符提供了如下简单逻辑,“如果 command1 执行成功,那么接着执行 command2。如果 command1 失败,就跳过 command2。”语法如下
```
command1 && command2
```
现在,让我们用命令来创建一个新的目录,如果成功的话,就把它切换为当前目录。确保你的家目录(`~`)是当前目录,先尝试在 `/root` 目录下创建,你应该没有权限:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir/ && cd $Dir
mkdir: cannot create directory '/root/testdir/': Permission denied
[student@studentvm1 ~]$
```
上面的报错信息是由 `mkdir` 命令抛出的,因为创建目录失败了。`&&` 运算符收到了非零的返回码,于是 `cd` 命令就被跳过了:既然目录没有创建成功,自然也就无法进入。这种控制流程可以阻止错误不断累积,避免引发更严重的问题。是时候讲点更复杂的逻辑了。
当一段程序的返回码大于零时,使用 `||` 运算符可以让你在后面接着执行另一段程序。简单语法如下:
```
command1 || command2
```
解读一下,“假如 command1 失败,执行 command2”。隐藏的逻辑是如果 command1 成功,跳过 command2。下面实践一下仍然是创建新目录
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
正如预期,因为目录无法创建,第一条命令失败了,于是第二条命令被执行。
`&&``||` 两种运算符结合起来才能发挥它们的最大功效。请看下面例子中的流程控制方法:
```
前置 commands ; command1 && command2 || command3 ; 跟随 commands
```
语法解释:“假如 command1 退出时返回码为零,就执行 command2否则执行 command3。”用具体代码试试
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
现在我们再试一次,用你的家目录替换 `/root` 目录,你将会有权限创建这个目录了:
```
[student@studentvm1 ~]$ Dir=~/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
[student@studentvm1 testdir]$
```
`command1 && command2` 这样的控制语句能够运行的原因是,每条命令执行完毕时都会给 shell 发送一个返回码,用来表示它执行成功与否。默认情况下,返回码为 `0` 表示成功,其他任何正值表示失败。一些系统管理员使用的工具用值为 `1` 的返回码来表示失败,但其他很多程序使用别的数字来表示失败。
Bash 的内置变量 `$?` 可以显示上一条命令的返回码,可以在脚本或者命令行中非常方便地检查它。要查看返回码,让我们从运行一条简单的命令开始,返回码的结果总是上一条命令给出的。
```
[student@studentvm1 testdir]$ ll ; echo "RC = $?"
total 1264
drwxrwxr-x 2 student student 4096 Mar 2 08:21 chapter25
drwxrwxr-x 2 student student 4096 Mar 21 15:27 chapter26
-rwxr-xr-x 1 student student 92 Mar 20 15:53 TestFile1
drwxrwxr-x. 2 student student 663552 Feb 21 14:12 testdir
drwxr-xr-x. 2 student student 4096 Dec 22 13:15 Videos
RC = 0
[student@studentvm1 testdir]$
```
在这个例子中,返回码为零,意味着命令执行成功了。现在对 root 的家目录测试一下,你应该没有权限:
```
[student@studentvm1 testdir]$ ll /root ; echo "RC = $?"
ls: cannot open directory '/root': Permission denied
RC = 2
[student@studentvm1 testdir]$
```
本例中返回码是 `2`,表明非 root 用户没有权限进入这个目录。你可以利用这些返回码,用控制运算符来改变程序执行的顺序。
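把返回码和控制运算符结合起来,就可以写出类似下面这样的单行程序(这只是一个演示,目录路径可以换成任何你想检查的位置;以普通用户身份运行时输出大致如下):
```
[student@studentvm1 testdir]$ ls /root > /dev/null 2>&1 && echo "Can read /root" || echo "Cannot read /root, RC = $?"
Cannot read /root, RC = 2
```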
### 总结
本文将 Bash 看作一门编程语言,并从这个视角介绍了它的简单语法和基础工具。我们学习了如何将数据输出到 STDOUT怎样使用变量和控制运算符。在本系列的下一篇文章中将会重点介绍能够控制指令执行流程的逻辑运算符。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-1
作者:[David Both][a]
选题:[lujun9972][b]
译者:[jdh8383](https://github.com/jdh8383)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/18/10/linux-data-streams
[3]: http://www.both.org/?page_id=1183
[4]: https://opensource.com/19/9/command-line-heroes-bash
[5]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
[6]: http://foldoc.org/program
[7]: https://wordnet.princeton.edu/
[8]: https://en.wikipedia.org/wiki/Computer_program
[9]: https://www.gnu.org/software/bc/manual/html_mono/bc.html

View File

@ -0,0 +1,406 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11545-1.html)
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
在 Linux 上用 strace 来理解系统调用
======
> 使用 strace 跟踪用户进程和 Linux 内核之间的交互。
![Hand putting a Linux file folder into a drawer][1]
<ruby>系统调用<rt>system call</rt></ruby>是程序从内核请求服务的一种编程方式,而 `strace` 是一个功能强大的工具,可让你跟踪用户进程与 Linux 内核之间的交互。
要了解操作系统的工作原理,首先需要了解系统调用的工作原理。操作系统的主要功能之一是为用户程序提供抽象机制。
操作系统可以大致分为两种模式:
* 内核模式:操作系统内核使用的一种强大的特权模式
* 用户模式:大多数用户应用程序运行的地方
  
用户大多使用命令行实用程序和图形用户界面GUI来执行日常任务。系统调用在后台静默运行与内核交互以完成工作。
系统调用与函数调用非常相似,这意味着它们都接受并处理参数然后返回值。唯一的区别是系统调用进入内核,而函数调用不进入。从用户空间切换到内核空间是使用特殊的 [trap][2] 机制完成的。
通过使用系统库(在 Linux 系统上又称为 glibc大部分系统调用对用户隐藏了。尽管系统调用本质上是通用的但是发出系统调用的机制在很大程度上取决于机器架构。
本文通过使用一些常规命令并使用 `strace` 分析每个命令进行的系统调用来探索一些实际示例。这些示例使用 Red Hat Enterprise Linux但是这些命令运行在其他 Linux 发行版上应该也是相同的:
```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```
首先,确保在系统上安装了必需的工具。你可以使用下面的 `rpm` 命令来验证是否安装了 `strace`。如果安装了,则可以使用 `-V` 选项检查 `strace` 实用程序的版本号:
```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```
如果没有安装,运行命令安装:
```
yum install strace
```
出于本示例的目的,在 `/tmp` 中创建一个测试目录,并使用 `touch` 命令创建两个文件:
```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```
(我使用 `/tmp` 目录是因为每个人都可以访问它,但是你可以根据需要选择另一个目录。)
`testdir` 目录下使用 `ls` 命令验证该文件已经创建:
```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```
你可能每天都在使用 `ls` 命令,而没有意识到系统调用在其下面发挥的作用。抽象地来说,该命令的工作方式如下:
> 命令行工具 -> 从系统库glibc调用函数 -> 调用系统调用
`ls` 命令内部从 Linux 上的系统库(即 glibc调用函数。这些库去调用完成大部分工作的系统调用。
如果你想知道从 glibc 库中调用了哪些函数,请使用 `ltrace` 命令,然后跟上常规的 `ls testdir/`命令:
```
ltrace ls testdir/
```
如果没有安装 `ltrace`,键入如下命令安装:
```
yum install ltrace
```
大量的输出会被堆到屏幕上;不必担心,只需继续就行。`ltrace` 命令输出中与该示例有关的一些重要库函数包括:
```
opendir("testdir/") = { 3 }
readdir({ 3 }) = { 101879119, "." }
readdir({ 3 }) = { 134, ".." }
readdir({ 3 }) = { 101879120, "file1" }
strlen("file1") = 5
memcpy(0x1665be0, "file1\0", 6) = 0x1665be0
readdir({ 3 }) = { 101879122, "file2" }
strlen("file2") = 5
memcpy(0x166dcb0, "file2\0", 6) = 0x166dcb0
readdir({ 3 }) = nil
closedir({ 3 })                    
```
通过查看上面的输出,你或许可以了解正在发生的事情。`opendir` 库函数打开一个名为 `testdir` 的目录,然后调用 `readdir` 函数,该函数读取目录的内容。最后,有一个对 `closedir` 函数的调用,该函数将关闭先前打开的目录。现在请先忽略其他 `strlen``memcpy` 功能。
你可以看到正在调用哪些库函数,但是本文将重点介绍由系统库函数调用的系统调用。
与上述类似,要了解调用了哪些系统调用,只需将 `strace` 放在 `ls testdir` 命令之前,如下所示。屏幕上同样会输出一大堆难以阅读的内容,你可以照着下面的输出来理解:
```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL) = 0x1f12000
<<< truncated strace output >>>
write(1, "file1 file2\n", 13file1 file2
) = 13
close(1) = 0
munmap(0x7fd002c8d000, 4096) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```
运行 `strace` 命令后屏幕上的输出就是运行 `ls` 命令的系统调用。每个系统调用都为操作系统提供了特定的用途,可以将它们大致分为以下几个部分:
* 进程管理系统调用
* 文件管理系统调用
* 目录和文件系统管理系统调用
* 其他系统调用
分析显示到屏幕上的信息的一种更简单的方法是使用 `strace` 方便的 `-o` 标志将输出记录到文件中。在 `-o` 标志后添加一个合适的文件名,然后再次运行命令:
```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```
这次,没有任何输出干扰屏幕显示,`ls` 命令如预期般工作,显示了文件名并将所有输出记录到文件 `trace.log` 中。仅仅是一个简单的 `ls` 命令,该文件就有近 100 行内容:
```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```
让我们看一下这个示例的 `trace.log` 文件的第一行:
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
* 该行的第一个单词 `execve` 是正在执行的系统调用的名称。
* 括号内的文本是提供给该系统调用的参数。
* 符号 `=` 后的数字(在这种情况下为 `0`)是 `execve` 系统调用的返回值。
现在的输出似乎还不太吓人,对吧。你可以应用相同的逻辑来理解其他行。
现在,将关注点集中在你调用的单个命令上,即 `ls testdir`。你知道命令 `ls` 使用的目录名称,那么为什么不在 `trace.log` 文件中使用 `grep` 查找 `testdir` 并查看得到的结果呢?让我们详细查看一下结果的每一行:
```
[root@sandbox tmp]# grep testdir trace.log
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[root@sandbox tmp]#
```
回顾一下上面对 `execve` 的分析,你能说一下这个系统调用的作用吗?
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
你无需记住所有系统调用或它们所做的事情,因为你可以在需要时参考文档。手册页可以解救你!在运行 `man` 命令之前,请确保已安装以下软件包:
```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```
请记住,你需要在 `man` 命令和系统调用名称之间添加 `2`。如果使用 `man man` 阅读 `man` 命令的手册页,你会看到第 2 节是为系统调用保留的。同样,如果你需要有关库函数的信息,则需要在 `man` 和库函数名称之间添加一个 `3`
以下是手册的章节编号及其包含的页面类型:
* `1`:可执行的程序或 shell 命令
* `2`:系统调用(由内核提供的函数)
* `3`:库调用(在程序的库内的函数)
* `4`:特殊文件(通常出现在 `/dev`
使用系统调用名称运行以下 `man` 命令以查看该系统调用的文档:
```
man 2 execve
```
按照 `execve` 手册页,这将执行在参数中传递的程序(在本例中为 `ls`)。可以为 `ls` 提供其他参数,例如本例中的 `testdir`。因此,此系统调用仅以 `testdir` 作为参数运行 `ls`
```
execve - execute program
DESCRIPTION
execve() executes the program pointed to by filename
```
下一个系统调用,名为 `stat`,它使用 `testdir` 参数:
```
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
```
使用 `man 2 stat` 访问该文档。`stat` 是获取文件状态的系统调用请记住Linux 中的一切都是文件,包括目录。
接下来,`openat` 系统调用将打开 `testdir`。密切注意返回的 `3`。这是一个文件描述符,将在以后的系统调用中使用:
```
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
```
到现在为止一切都挺好。现在,打开 `trace.log` 文件,并转到 `openat` 系统调用之后的行。你会看到 `getdents` 系统调用被调用,该调用完成了执行 `ls testdir` 命令所需的大部分操作。现在,从 `trace.log` 文件中用 `grep` 获取 `getdents`
```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
[root@sandbox tmp]#
```
`getdents` 的手册页将其描述为 “获取目录项”,这就是你要执行的操作。注意,`getdents` 的参数是 `3`,这是来自上面 `openat` 系统调用的文件描述符。
现在有了目录列表,你需要一种在终端中显示它的方法。因此,在日志中用 `grep` 搜索另一个用于写入终端的系统调用 `write`
```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13)          = 13
[root@sandbox tmp]#
```
在这些参数中,你可以看到将要显示的文件名:`file1` 和 `file2`。关于第一个参数(`1`),请记住在 Linux 中,当运行任何进程时,默认情况下会为其打开三个文件描述符。以下是默认的文件描述符:
* `0`:标准输入
* `1`:标准输出
* `2`:标准错误
因此,`write` 系统调用将在标准显示(就是这个终端,由 `1` 所标识的)上显示 `file1``file2`
现在你知道哪个系统调用完成了 `ls testdir/` 命令的大部分工作。但是在 `trace.log` 文件中其它的 100 多个系统调用呢?操作系统必须做很多内务处理才能运行一个进程,因此,你在该日志文件中看到的很多内容都是进程初始化和清理。阅读整个 `trace.log` 文件,并尝试了解 `ls` 命令是怎么工作起来的。
既然你知道了如何分析给定命令的系统调用,那么就可以将该知识用于其他命令来了解正在执行哪些系统调用。`strace` 提供了许多有用的命令行标志,使你更容易使用,下面将对其中一些进行描述。
默认情况下,`strace` 并不包含所有系统调用信息。但是,它有一个方便的 `-v` 冗余选项,可以在每个系统调用中提供附加信息:
```
strace -v ls testdir
```
在运行 `strace` 命令时始终使用 `-f` 选项是一种好的作法。它允许 `strace` 对当前正在跟踪的进程创建的任何子进程进行跟踪:
```
strace -f ls testdir
```
假设你只需要系统调用的名称、运行的次数以及每个系统调用花费的时间百分比。你可以使用 `-c` 标志来获取这些统计信息:
```
strace -c ls testdir/
```
假设你想专注于特定的系统调用,例如专注于 `open` 系统调用,而忽略其余部分。你可以使用`-e` 标志跟上系统调用的名称:
```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```
如果你想关注多个系统调用怎么办?不用担心,你同样可以使用 `-e` 命令行标志,并用逗号分隔开两个系统调用的名称。例如,要查看 `write``getdents` 系统调用:
```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
write(1, "file1  file2\n", 13file1  file2
)          = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```
到目前为止,这些示例跟踪的都是显式运行起来的命令。但是,要跟踪一个已经在运行中的进程该怎么办呢?例如,一个作为长期运行进程的守护程序?为此,`strace` 提供了一个特殊的 `-p` 标志,你可以向其提供进程 ID。
我们的示例不在守护程序上运行 `strace`,而是以 `cat` 命令为例,如果你将文件名作为参数,通常 `cat` 会显示文件的内容。如果没有给出参数,`cat` 命令会在终端上等待用户输入文本。输入文本后,它将重复给定的文本,直到用户按下 `Ctrl + C` 退出为止。
从一个终端运行 `cat` 命令;它会向你显示一个提示,并等待在那里(记住 `cat` 仍在运行且尚未退出):
```
[root@sandbox tmp]# cat
```
在另一个终端上,使用 `ps` 命令找到进程标识符PID
```
[root@sandbox ~]# ps -ef | grep cat
root      22443  20164  0 14:19 pts/0    00:00:00 cat
root      22482  20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```
现在,使用 `-p` 标志和 PID在上面使用 `ps` 找到)对运行中的进程运行 `strace`。运行 `strace` 之后,其输出说明了所接驳的进程的内容及其 PID。现在`strace` 正在跟踪 `cat` 命令进行的系统调用。看到的第一个系统调用是 `read`,它正在等待文件描述符 `0`(标准输入,这是运行 `cat` 命令的终端)的输入:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```
现在,返回到你运行 `cat` 命令的终端,并输入一些文本。我出于演示目的输入了 `x0x0`。注意 `cat` 是如何简单地重复我输入的内容的。因此,`x0x0` 出现了两次。我输入了第一个,第二个是 `cat` 命令重复的输出:
```
[root@sandbox tmp]# cat
x0x0
x0x0
```
返回到将 `strace` 接驳到 `cat` 进程的终端。现在你会看到两个额外的系统调用:较早的 `read` 系统调用,现在在终端中读取 `x0x0`,另一个为 `write`,它将 `x0x0` 写回到终端,然后是再一个新的 `read`,正在等待从终端读取。请注意,标准输入(`0`)和标准输出(`1`)都在同一终端中:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536)                = 5
write(1, "x0x0\n", 5)                   = 5
read(0,
```
想象一下,对守护进程运行 `strace` 以查看其在后台执行的所有操作时这有多大帮助。按下 `Ctrl + C` 杀死 `cat` 命令;由于该进程不再运行,因此这也会终止你的 `strace` 会话。
如果要查看所有的系统调用的时间戳,只需将 `-t` 选项与 `strace` 一起使用:
```
[root@sandbox ~]#strace -t ls testdir/
14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL)                      = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
如果你想知道两次系统调用之间所花费的时间怎么办?`strace` 有一个方便的 `-r` 命令,该命令显示执行每个系统调用所花费的时间。非常有用,不是吗?
```
[root@sandbox ~]#strace -r ls testdir/
0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL)                 = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
### 总结
`strace` 实用程序非常有助于理解 Linux 上的系统调用。要了解它的其它命令行标志,请参考手册页和在线文档。
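这些标志还可以组合起来使用。下面是一个示意性的例子:跟踪子进程、打印时间戳、只关注几个与文件相关的系统调用,并把结果写入日志文件(可用的标志和系统调用名称请以你系统上的手册页为准):
```
strace -f -t -e trace=openat,read,write -o trace.log ls testdir/
```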
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/strace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11541-1.html)
[#]: subject: (Upgrading Fedora 30 to Fedora 31)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
将 Fedora 30 升级到 Fedora 31
======
![][1]
Fedora 31 [日前发布了][2]。你也许想要升级系统来获得 Fedora 中的最新功能。Fedora 工作站有图形化的升级方式。另外Fedora 提供了一种命令行方式来将 Fedora 30 升级到 Fedora 31。
### 将 Fedora 30 工作站升级到 Fedora 31
在新版本发布后不久,就会有通知告诉你有可用升级。你可以点击通知打开 GNOME “软件”,或者在 GNOME Shell 中选择“软件”。
在 GNOME 软件中选择*更新*,你应该会看到告诉你有 Fedora 31 更新的提示。
如果你在屏幕上看不到任何内容,请尝试使用左上方的重新加载按钮。在发布后,所有系统可能需要一段时间才能看到可用的升级。
选择*下载*以获取升级包。你可以继续工作,直到下载完成。然后使用 GNOME “软件”重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再返回系统。
### 使用命令行
如果你是从 Fedora 以前的版本升级的,那么你可能对 `dnf upgrade` 插件很熟悉。这是推荐且支持的从 Fedora 30 升级到 Fedora 31 的方法。使用此插件能让你轻松地升级到 Fedora 31。
#### 1、更新软件并备份系统
在开始升级之前,请确保你安装了 Fedora 30 的最新软件。如果你安装了模块化软件,这点尤为重要。`dnf` 和 GNOME “软件”的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用 GNOME “软件” 或在终端中输入以下命令:
```
sudo dnf upgrade --refresh
```
此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列][3]。
#### 2、安装 DNF 插件
接下来,打开终端并输入以下命令安装插件:
```
sudo dnf install dnf-plugin-system-upgrade
```
#### 3、使用 DNF 开始更新
现在,你的系统是最新的,已经备份并且安装了 DNF 插件,你可以通过在终端中使用以下命令来开始升级:
```
sudo dnf system-upgrade download --releasever=31
```
该命令将开始把升级所需的全部软件包下载到本机。如果由于缺少更新包、依赖项损坏或软件包已被淘汰而在升级时遇到问题,请在输入上面的命令时添加 `--allowerasing` 标志。这将使 DNF 删除可能阻止系统升级的软件包。
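也就是说,遇到这类问题时,完整的命令大致如下:
```
sudo dnf system-upgrade download --releasever=31 --allowerasing
```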
#### 4、重启并升级
上面的命令下载更新完成后,你的系统就可以重启了。要将系统引导至升级过程,请在终端中输入以下命令:
```
sudo dnf system-upgrade reboot
```
此后,你的系统将重启。在许多版本之前,`fedup` 工具会在内核选择/引导页面上创建一个新选项。使用 `dnf-plugin-system-upgrade` 软件包,你的系统将重新引导到当前 Fedora 30 使用的内核。这很正常。在内核选择页面之后不久,你的系统会开始升级过程。
现在也许可以喝杯咖啡休息下!升级完成后,系统将重启,你将能够登录到新升级的 Fedora 31 系统。
![][4]
### 解决升级问题
有时,升级系统时可能会出现意外问题。如果遇到任何问题,请访问 [DNF 系统升级文档][5],以获取有关故障排除的更多信息。
如果升级时遇到问题,并且系统上安装了第三方仓库,那么在升级时可能需要禁用这些仓库。对于 Fedora 不提供的仓库的支持,请联系仓库的提供者。
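如果需要临时禁用某个第三方仓库,可以使用 `dnf config-manager`(需要 `dnf-plugins-core`;下面的 `REPO_ID` 只是占位符,请先用 `dnf repolist` 查出实际的仓库 ID
```
# 列出已启用的仓库,找到第三方仓库的 ID
dnf repolist
# 禁用指定的仓库REPO_ID 为占位符)
sudo dnf config-manager --set-disabled REPO_ID
```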
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/f30-f31-816x345.jpg
[2]: https://linux.cn/article-11522-1.html
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11543-1.html)
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
[#]: via: (https://opensource.com/article/19/10/intro-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
awk 入门 —— 强大的文本分析工具
======
> 让我们开始使用它。
![](https://img.linux.net.cn/data/attachment/album/201911/06/114421e006e9mbh0xxe8bb.jpg)
`awk` 是用于 Unix 和类 Unix 系统的强大文本解析工具,而且由于它提供了可编程的函数,可以用来执行常规的解析任务,因此它也被视为一种编程语言。你可能不会使用 `awk` 开发下一个 GUI 应用,它也可能不会取代你默认的脚本语言,但它是用于特定任务的强大工具。
这些任务或许是惊人的多样化。了解 `awk` 可以解决你的哪些问题的最好方法是学习 `awk`。你会惊讶于 `awk` 如何帮助你完成更多工作,却花费更少的精力。
`awk` 的基本语法是:
```
awk [options] 'pattern {action}' file
```
首先,创建此示例文件并将其保存为 `colours.txt`
```
name       color  amount
apple      red    4
banana     yellow 6
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
数据被一个或多个空格分隔为列。以某种方式组织要分析的数据是很常见的。它不一定总是由空格分隔的列,甚至可以不是逗号或分号,但尤其是在日志文件或数据转储中,通常有一个可预测的格式。你可以使用数据格式来帮助 `awk` 提取和处理你关注的数据。
### 打印列
`awk` 中,`print` 函数显示你指定的内容。你可以使用许多预定义的变量,但是最常见的是文本文件中以整数命名的列。试试看:
```
$ awk '{print $2;}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```
在这里,`awk` 显示第二列,用 `$2` 表示。这是相对直观的,因此你可能会猜测 `print $1` 显示第一列,而 `print $3` 显示第三列,依此类推。
要显示*全部*列,请使用 `$0`
美元符号(`$`)后的数字是*表达式*,因此 `$2``$(1+1)` 是同一意思。
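可以自己验证一下:下面的命令用表达式 `$(1+1)` 代替 `$2`,输出应该和前面打印第二列的结果完全一样:
```
$ awk '{print $(1+1);}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```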
### 有条件地选择列
你使用的示例文件非常结构化。它有一行充当标题,并且各列直接相互关联。通过定义*条件*,你可以限定 `awk` 在找到此数据时返回的内容。例如,要查看第二列中与 `yellow` 匹配的项并打印第一列的内容:
```
awk '$2=="yellow"{print $1}' file1.txt
banana
pineapple
```
正则表达式也可以工作。此表达式近似匹配 `$2` 中以 `p` 开头跟上任意数量(一个或多个)字符后继续跟上 `p` 的值:
```
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
grape   purple  10
plum    purple  2
```
数字能被 `awk` 自然解释。例如,要打印第三列包含大于 5 的整数的行:
```
awk '$3&gt;5 {print $1, $2}' colours.txt
name    color
banana  yellow
grape   purple
apple   green
potato  brown
```
### 字段分隔符
默认情况下,`awk` 使用空格作为字段分隔符。但是,并非所有文本文件都使用空格来定义字段。例如,用以下内容创建一个名为 `colours.csv` 的文件:
```
name,color,amount
apple,red,4
banana,yellow,6
strawberry,red,3
grape,purple,10
apple,green,8
plum,purple,2
kiwi,brown,4
potato,brown,9
pineapple,yellow,5
```
只要你指定将哪个字符用作命令中的字段分隔符,`awk` 就能以完全相同的方式处理数据。使用 `--field-separator`(或简称为 `-F`)选项来定义分隔符:
```
$ awk -F"," '$2=="yellow" {print $1}' file1.csv
banana
pineapple
```
### 保存输出
使用输出重定向,你可以将结果写入文件。例如:
```
$ awk -F, '$3>5 {print $1, $2}' colours.csv > output.txt
```
这将创建一个包含 `awk` 查询内容的文件。
你还可以将文件拆分为按列数据分组的多个文件。例如,如果要根据每行显示的颜色将 `colours.txt` 拆分为多个文件,你可以在 `awk` 中包含重定向语句来重定向*每条查询*
```
$ awk '{print > $2".txt"}' colours.txt
```
这将生成名为 `yellow.txt`、`red.txt` 等文件。
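可以查看其中一个生成的文件来验证结果,比如 `yellow.txt`,它应该只包含第二列为 `yellow` 的那些行:
```
$ cat yellow.txt
banana     yellow 6
pineapple  yellow 5
```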
在下一篇文章中,你将了解有关字段、记录和一些强大的 awk 变量的更多信息。
本文改编自社区技术播客 [Hacker Public Radio][2]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/intro-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://hackerpublicradio.org/eps.php?id=2114

View File

@ -1,40 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11542-1.html)
[#]: subject: (How to Find Out Top Memory Consuming Processes in Linux)
[#]: via: (https://www.2daygeek.com/linux-find-top-memory-consuming-processes/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Find Out Top Memory Consuming Processes in Linux
如何在 Linux 中找出内存消耗最大的进程
======
You may have seen your system consumes too much of memory many times.
![](https://img.linux.net.cn/data/attachment/album/201911/06/110149r81efjx12afjat7f.jpg)
If thats the case, what would be the best thing you can do to identify processes that consume too much memory on a Linux machine.
很多次,你可能遇见过系统消耗了过多的内存。如果是这种情况,那么最好的办法是识别出 Linux 机器上消耗过多内存的进程。我相信,你可能已经运行了下文中的命令以进行检查。如果没有,那你尝试过哪些其他的命令?我希望你可以在评论中更新这篇文章,它可能会帮助其他用户。
I believe, you may have run one of the below commands to check it out.
使用 [top 命令][1] 和 [ps 命令][2] 可以轻松的识别这种情况。我过去经常同时使用这两个命令,两个命令得到的结果是相同的。所以我建议你从中选择一个喜欢的使用就可以。
If not, what is the other commands you tried?
### 1) 如何使用 ps 命令在 Linux 中查找内存消耗最大的进程
I would request you to update it in the comment section, it may help other users.
`ps` 命令用于报告当前进程的快照。`ps` 命令的意思是“进程状态”。这是一个标准的 Linux 应用程序,用于查找有关在 Linux 系统上运行进程的信息。
This can be easily identified using the **[top command][1]** and the **[ps command][2]**.
它用于列出当前正在运行的进程及其进程 IDPID、进程所有者名称、进程优先级PR以及正在运行的命令的绝对路径等。
I used to check both commands simultaneously, and both were given the same result.
So i suggest you to use one of the command that you like.
### 1) How to Find Top Memory Consuming Process in Linux Using the ps Command
The ps command is used to report a snapshot of the current processes. The ps command stands for process status.
This is a standard Linux application that looks for information about running processes on a Linux system.
It is used to list the currently running processes and their process ID (PID), process owner name, process priority (PR), and the absolute path of the running command, etc,.
The below ps command format provides you more information about top memory consumption process.
下面的 `ps` 命令格式为你提供有关内存消耗最大进程的更多信息。
```
# ps aux --sort -rss | head
@ -51,7 +39,7 @@ root 1135 0.0 0.9 86708 37572 ? S 05:37 0:20 cwpsrv: worker
root 1133 0.0 0.9 86708 37544 ? S 05:37 0:05 cwpsrv: worker process
```
Use the below ps command format to include only specific information about the process of memory consumption in the output.
使用以下 `ps` 命令格式可在输出中仅展示有关内存消耗过程的特定信息。
```
# ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head
@ -68,7 +56,7 @@ Use the below ps command format to include only specific information about the p
1135 3034 0.9 0.0 cwpsrv: worker process
```
If you want to see only the command name instead of the absolute path of the command, use the ps command format below.
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `ps` 命令格式。
```
# ps -eo pid,ppid,%mem,%cpu,comm --sort=-%mem | head
@ -85,15 +73,11 @@ If you want to see only the command name instead of the absolute path of the com
1133 3034 0.9 0.0 cwpsrv
```
### 2) How to Find Out Top Memory Consuming Process in Linux Using the top Command
### 2) 如何使用 top 命令在 Linux 中查找内存消耗最大的进程
The Linux top command is the best and most well known command that everyone uses to monitor Linux system performance.
Linux 的 `top` 命令是用来监视 Linux 系统性能的最好和最知名的命令。它在交互界面上显示运行的系统进程的实时视图。但是,如果要查找内存消耗最大的进程,请 [在批处理模式下使用 top 命令][3]。
It displays a real-time view of the system process running on the interactive interface.
But if you want to find top memory consuming process then **[use the top command in the batch mode][3]**.
You should properly **[understand the top command output][4]** to fix the performance issue in system.
你应该正确地 [了解 top 命令输出][4] 以解决系统中的性能问题。
```
# top -c -b -o +%MEM | head -n 20 | tail -15
@ -114,7 +98,7 @@ You should properly **[understand the top command output][4]** to fix the perfor
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 /usr/local/apache/bin/httpd -k start
```
If you only want to see the command name instead of the absolute path of the command, use the below top command format.
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `top` 命令格式。
```
# top -b -o +%MEM | head -n 20 | tail -15
@ -135,15 +119,11 @@ If you only want to see the command name instead of the absolute path of the com
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 httpd
```
### 3) Bonus Tips: How to Find Out Top Memory Consuming Process in Linux Using the ps_mem Command
### 3) 奖励技巧:如何使用 ps_mem 命令在 Linux 中查找内存消耗最大的进程
The **[ps_mem utility][5]** is used to display the core memory used per program (not per process).
[ps_mem 程序][5] 用于显示每个程序(而不是每个进程)使用的核心内存。该程序允许你检查每个程序使用了多少内存。它根据程序计算私有和共享内存的数量,并以最合适的方式返回已使用的总内存。
This utility allows you to check how much memory is used per program.
It calculates the amount of private and shared memory against a program and returns the total used memory in the most appropriate way.
It uses the following logic to calculate RAM usage. Total RAM = sum (private RAM for program processes) + sum (shared RAM for program processes)
它使用以下逻辑来计算内存使用量。总内存使用量 = sum(用于程序进程的专用内存使用量) + sum(用于程序进程的共享内存使用量)。
```
# ps_mem
@ -205,7 +185,7 @@ via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -213,6 +193,6 @@ via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[2]: https://www.2daygeek.com/linux-ps-command-find-running-process-monitoring/
[3]: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/
[3]: https://linux.cn/article-11491-1.html
[4]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
[5]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/

View File

@ -1,39 +1,33 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11547-1.html)
[#]: subject: (Viewing network bandwidth usage with bmon)
[#]: via: (https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Viewing network bandwidth usage with bmon
用 bmon 查看网络带宽使用情况
======
Introducing bmon, a monitoring and debugging tool that captures network statistics and makes them easily digestible.
Sandra Henry-Stocker
Bmon is a monitoring and debugging tool that runs in a terminal window and captures network statistics, offering options on how and how much data will be displayed and displayed in a form that is easy to understand.
> 介绍一下 bmon这是一个监视和调试工具可捕获网络统计信息并使它们易于理解。
To check if **bmon** is installed on your system, use the **which** command:
![](https://img.linux.net.cn/data/attachment/album/201911/07/010237a8gb5oqddvl3bnd0.jpg)
`bmon` 是一种监视和调试工具,可在终端窗口中捕获网络统计信息,并提供了如何以易于理解的形式显示以及显示多少数据的选项。
要检查系统上是否安装了 `bmon`,请使用 `which` 命令:
```
$ which bmon
/usr/bin/bmon
```
### Getting bmon
### 获取 bmon
On Debian systems, use **sudo apt-get install bmon** to install the tool.
在 Debian 系统上,使用 `sudo apt-get install bmon` 安装该工具。
For Red Hat and related distributions, you might be able to install with **yum install bmon** or **sudo dnf install bmon**. Alternately, you may have to resort to a more complex install with commands like these that first set up the required **libconfuse** using the root account or sudo:
对于 Red Hat 和相关发行版,你可以使用 `yum install bmon``sudo dnf install bmon` 进行安装。或者,你可能必须使用更复杂的安装方式,例如使用以下命令,这些命令首先使用 root 帐户或 sudo 来设置所需的 `libconfuse`
```
# wget https://github.com/martinh/libconfuse/releases/download/v3.2.2/confuse-3.2.2.zip
@ -48,15 +42,13 @@ For Red Hat and related distributions, you might be able to install with **yum i
# sudo make install
```
The first five lines will install **libconfuse** and the second five will grab and install **bmon** itself.
前面五行会安装 `libconfuse`,而后面五行会获取并安装 `bmon` 本身。
### Using bmon
### 使用 bmon
The simplest way to start **bmon** is simply to type **bmon** on the command line. Depending on the size of the window you are using, you will be able to see and bring up a variety of data.
启动 `bmon` 的最简单方法是在命令行中键入 `bmon`。根据你正在使用的窗口的大小,你能够查看并显示各种数据。
The top portion of your display will display stats on your network interfaces  the loopback (lo) and network-accessible (e.g., eth0). If you terminal window has few lines, this is all you may see, and it will look something like this:
显示区域的顶部将显示你的网络接口的统计信息环回接口lo和可通过网络访问的接口例如 eth0。如果你的终端窗口只有区区几行高下面这就是你可能会看到的所有内容它将看起来像这样
```
lo bmon 4.0
@ -73,7 +65,7 @@ q Press i to enable additional information qq
Wed Oct 23 14:36:27 2019 Press ? for help
```
In this example, the network interface is enp0s25. Notice the helpful "Increase screen height" hint below the listed interfaces. Stretch your screen to add sufficient lines (no need to restart bmon) and you will see some graphs:
在此示例中,网络接口是 enp0s25。请注意列出的接口下方的有用的 “Increase screen height” 提示。拉伸屏幕以增加足够的行(无需重新启动 bmon你将看到一些图形
```
Interfaces x RX bps pps %x TX bps pps %
@ -100,7 +92,7 @@ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqq
1 5 10 15 20 25 30 35 40 45 50 55 60
```
Notice, however, that the graphs are not showing values. This is because it is displaying the loopback **&gt;lo** interface. Arrow your way down to the public network interface and you will see some traffic.
但是请注意,该图形未显示值。这是因为它正在显示环回 “>lo” 接口。按下箭头键指向公共网络接口,你将看到一些流量。
```
Interfaces x RX bps pps %x TX bps pps %
@ -132,11 +124,11 @@ q Press i to enable additional information qq
Wed Oct 23 16:42:06 2019 Press ? for help
```
The change allows you to view a graph displaying network traffic. Note, however, that the default is to display bytes per second. To display bits per second instead, you would start the tool using **bmon -b**
通过更改接口,你可以查看显示了网络流量的图表。但是请注意,默认值是按每秒字节数显示的。要按每秒位数来显示,你可以使用 `bmon -b` 启动该工具。
Detailed statistics on network traffic can be displayed if your window is large enough and you press **d**. An example of the stats you will see is displayed below. This display was split into left and right portions because of its width.
如果你的窗口足够大并按下 `d` 键,则可以显示有关网络流量的详细统计信息。你看到的统计信息示例如下所示。由于其宽度太宽,该显示分为左右两部分。
##### left side:
左侧:
```
RX TX │ RX TX │
@ -154,7 +146,7 @@ RX TX │ RX TX │
Window Error - 0 │ │
```
##### right side
右侧:
```
│ RX TX │ RX TX
@ -171,9 +163,9 @@ RX TX │ RX TX │
│ No Handler 0 - │ Over Error 0 -
```
Additional information on the network interface will be displayed if you press **i**
如果按下 `i` 键,将显示网络接口上的其他信息。
##### left side:
左侧:
```
MTU 1500 | Flags broadcast,multicast,up |
@ -181,7 +173,7 @@ Address 00:1d:09:77:9d:08 | Broadcast ff:ff:ff:ff:ff:ff |
Family unspec | Alias |
```
##### right side:
右侧:
```
| Operstate up | IfIndex 2 |
@ -189,19 +181,15 @@ Family unspec | Alias |
| Qdisc fq_codel |
```
A help menu will appear if you press **?** with brief descriptions of how to move around the screen, select data to be displayed and control the graphs.
如果你按下 `?` 键,将会出现一个帮助菜单,其中简要介绍了如何在屏幕上移动光标、选择要显示的数据以及控制图形如何显示。
To quit **bmon**, you would type **q** and then **y** in response to the prompt to confirm your choice to exit.
要退出 `bmon`,输入 `q`,然后输入 `y` 以响应提示来确认退出。
Some of the important things to note are that:
需要注意的一些重要事项是:
* **bmon** adjusts its display to the size of the terminal window
* some of the choices shown at the bottom of the display will only function if the window is large enough to accomodate the data
* the display is updated every second unless you slow this down using the **-R** (e.g., **bmon -R 5)** option
* `bmon` 会将其显示调整为终端窗口的大小
* 显示区域底部显示的某些选项仅在窗口足够大可以容纳数据时才起作用
* 除非你使用 `-R`(例如 `bmon -R 5`)来减慢显示速度,否则每秒更新一次显示
--------------------------------------------------------------------------------
@ -209,8 +197,8 @@ via: https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usag
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11539-1.html)
[#]: subject: (Why you don't have to be afraid of Kubernetes)
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
为什么你不必害怕 Kubernetes
======
> Kubernetes 绝对是满足复杂 web 应用程序需求的最简单、最容易的方法。
![Digital creative of a browser on the internet][1]
在 90 年代末和 2000 年代初,在大型网站工作很有趣。我的经历让我想起了 American Greetings Interactive在情人节那天我们拥有了互联网上排名前 10 位之一的网站(以网络访问量衡量)。我们为 [AmericanGreetings.com][2]、[BlueMountain.com][3] 等公司提供了电子贺卡,并为 MSN 和 AOL 等合作伙伴提供了电子贺卡。该组织的老员工仍然深切地记得与 Hallmark 等其它电子贺卡网站进行大战的史诗般的故事。顺便说一句,我还为 Holly Hobbie、Care Bears 和 Strawberry Shortcake 运营过大型网站。
我记得那就像是昨天发生的一样,这是我们第一次遇到真正的问题。通常,我们的前门(路由器、防火墙和负载均衡器)有大约 200Mbps 的流量进入。但是突然之间Multi Router Traffic GrapherMRTG图示在几分钟内飙升至 2Gbps。我疯了似地东奔西跑。我了解了我们的整个技术堆栈从路由器、交换机、防火墙和负载平衡器到 Linux/Apache web 服务器,到我们的 Python 堆栈FastCGI 的元版本以及网络文件系统NFS服务器。我知道所有配置文件在哪里我可以访问所有管理界面并且我是一位经验丰富的打过硬仗的系统管理员具有多年解决复杂问题的经验。
但是,我无法弄清楚发生了什么……
当你在一千个 Linux 服务器上疯狂地键入命令时,五分钟的感觉就像是永恒。我知道站点可能会在任何时候崩溃,因为当它被划分成更小的集群时,压垮上千个节点的集群是那么的容易。
我迅速*跑到*老板的办公桌前,解释了情况。他几乎没有从电子邮件中抬起头来,这使我感到沮丧。他抬头看了看,笑了笑,说道:“是的,市场营销可能会开展广告活动。有时会发生这种情况。”他告诉我在应用程序中设置一个特殊标志,以减轻 Akamai 的访问量。我跑回我的办公桌,在上千台 web 服务器上设置了标志,几分钟后,站点恢复正常。灾难也就被避免了。
我可以再分享 50 个类似的故事,但你脑海中可能会有一点好奇:“这种运维方式将走向何方?”
关键是,我们遇到了业务问题。当技术问题使你无法开展业务时,它们就变成了业务问题。换句话说,如果你的网站无法访问,你就不能处理客户交易。
那么,所有这些与 Kubernetes 有什么关系?一切!世界已经改变。早在 90 年代末和 00 年代初,只有大型网站才出现大型的、<ruby>规模级<rt>web-scale</rt></ruby>的问题。现在,有了微服务和数字化转型,每个企业都面临着一个大型的、规模级的问题——可能是多个大型的、规模级的问题。
你的企业需要能够通过许多不同的人构建的许多不同的、通常是复杂的服务来管理复杂的规模级的网站。你的网站需要动态地处理流量,并且它们必须是安全的。这些属性需要在所有层(从基础结构到应用程序层)上由 API 驱动。
### 进入 Kubernetes
Kubernetes 并不复杂你的业务问题才复杂。当你想在生产环境中运行应用程序时要满足性能伸缩性、性能抖动等和安全性要求就需要最低程度的复杂性。诸如高可用性HA、容量要求N+1、N+2、N+100以及保证最终一致性的数据技术等就会成为必需。这些是每家进行数字化转型的公司的生产要求而不仅仅是 Google、Facebook 和 Twitter 这样的大型网站。
在旧时代,我还在 American Greetings 任职时,每次我们加入一个新的服务,它看起来像这样:所有这些都是由网站运营团队来处理的,没有一个是通过工单系统转移给其他团队来处理的。这是在 DevOps 出现之前的 DevOps
1. 配置 DNS通常是内部服务层和面向公众的外部
2. 配置负载均衡器(通常是内部服务和面向公众的)
3. 配置对文件的共享访问(大型 NFS 服务器、群集文件系统等)
4. 配置集群软件(数据库、服务层等)
5. 配置 web 服务器群集(可以是 10 或 50 个服务器)
大多数配置是通过配置管理自动完成的,但是配置仍然很复杂,因为每个系统和服务都有不同的配置文件,而且格式完全不同。我们研究了像 [Augeas][4] 这样的工具来简化它,但是我们认为使用转换器来尝试和标准化一堆不同的配置文件是一种反模式。
如今,借助 Kubernetes启动一项新服务本质上看起来如下
1. 配置 Kubernetes YAML/JSON。
2. 提交给 Kubernetes API`kubectl create -f service.yaml`)。
Kubernetes 大大简化了服务的启动和管理。服务所有者(无论是系统管理员、开发人员还是架构师)都可以创建 Kubernetes 格式的 YAML/JSON 文件。使用 Kubernetes每个系统和每个用户都说相同的语言。所有用户都可以在同一 Git 存储库中提交这些文件,从而启用 GitOps。
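下面是一个假设性的最小示例(服务名、镜像等均为演示用的假定值),大致展示这种“一个 YAML 文件加一条命令”的工作方式:
```
# 仅为示意:一个假设的 service.yaml定义一个 Deployment 和一个 Service
cat > service.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # 假设的服务名,仅作演示
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # 任意示例镜像
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 80
EOF

# 提交给 Kubernetes API
kubectl create -f service.yaml

# 弃用该服务时,同样只需一条命令(见下文)
# kubectl delete -f service.yaml
```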
而且,可以弃用和删除服务。从历史上看,删除 DNS 条目、负载平衡器条目和 Web 服务器的配置等是非常可怕的,因为你几乎肯定会破坏某些东西。使用 Kubernetes所有内容都处于命名空间下因此可以通过单个命令删除整个服务。尽管你仍然需要确保其它应用程序不使用它微服务和函数即服务 [FaaS] 的缺点),但你可以更加确信:删除服务不会破坏基础架构环境。
### 构建、管理和使用 Kubernetes
太多的人专注于构建和管理 Kubernetes 而不是使用它(详见 [Kubernetes 是一辆翻斗车][5])。
在单个节点上构建一个简单的 Kubernetes 环境并不比安装 LAMP 堆栈复杂得多,但是我们无休止地争论着构建与购买的问题。并不是 Kubernetes 本身很难;难的是以高可用性大规模地运行应用程序。建立一个复杂的、高可用性的 Kubernetes 集群很困难,因为要建立如此规模的任何集群都是很困难的。它需要规划和大量软件。建造一辆简单的翻斗车并不复杂,但是建造一辆可以运载 [10 吨垃圾并能以 200 迈的速度稳定行驶的卡车][6]则很复杂。
管理 Kubernetes 可能很复杂,因为管理大型的、规模级的集群可能很复杂。有时,管理此基础架构很有意义;而有时不是。由于 Kubernetes 是一个社区驱动的开源项目,它使行业能够以多种不同方式对其进行管理。供应商可以出售托管版本,而用户可以根据需要自行决定对其进行管理。(但是你应该质疑是否确实需要。)
使用 Kubernetes 是迄今为止运行大规模网站的最简单方法。Kubernetes 正在普及运行一组大型、复杂的 Web 服务的能力——就像当年 Linux 在 Web 1.0 中所做的那样。
由于时间和金钱是一个零和游戏,因此我建议将重点放在使用 Kubernetes 上。将你的时间和金钱花费在[掌握 Kubernetes 原语][7]或处理[活跃度和就绪性探针][8]的最佳方法上(表明大型、复杂的服务很难的另一个例子)。不要专注于构建和管理 Kubernetes。在构建和管理上许多供应商可以为你提供帮助。
### 结论
我记得对无数的问题进行了故障排除,比如我在这篇文章的开头所描述的问题(当时 Linux 内核中的 NFS、我们自产的 CFEngine、仅在某些 Web 服务器上出现的重定向问题等)。开发人员无法帮助我解决所有这些问题。实际上,除非开发人员具备高级系统管理员的技能,否则他们甚至不可能进入系统并作为第二双眼睛提供帮助。没有带有图形或“可观察性”的控制台——可观察性在我和其他系统管理员的大脑中。如今,有了 Kubernetes、Prometheus、Grafana 等,一切都改变了。
关键是:
1. 时代不一样了。现在,所有 Web 应用程序都是大型的分布式系统。就像 AmericanGreetings.com 过去一样复杂,现在每个网站都有扩展性和 HA 的要求。
2. 运行大型的分布式系统是很困难的。绝对是。这是业务的需求,不是 Kubernetes 的问题。使用更简单的编排系统并不是解决方案。
Kubernetes 绝对是满足复杂 Web 应用程序需求的最简单、最容易的方法。这是我们生活的时代,而 Kubernetes 擅长于此。你可以讨论是否应该自己构建或管理 Kubernetes。有很多供应商可以帮助你构建和管理它,但是很难否认这是大规模运行复杂 Web 应用程序的最简单方法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: http://AmericanGreetings.com
[3]: http://BlueMountain.com
[4]: http://augeas.net/
[5]: https://linux.cn/article-11011-1.html
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
[7]: https://linux.cn/article-11036-1.html
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11549-1.html)
[#]: subject: (Keyboard Shortcuts to Speed Up Your Work in Linux)
[#]: via: (https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/)
[#]: author: (S Sathyanarayanan https://opensourceforu.com/author/s-sathyanarayanan/)
在 Linux 中加速工作的键盘快捷键
======
![Google Keyboard][2]
> 操作鼠标、键盘和菜单会占用我们很多时间,这些可以使用键盘快捷键来节省时间。这不仅节省时间,还可以使用户更高效。
你是否意识到每次在打字时从键盘切换到鼠标需要多达两秒钟?如果一个人每天工作八小时,每分钟从键盘切换到鼠标一次,并且一年中大约有 240 个工作日,那么所浪费的时间(根据 Brainscape 的计算)为:
[每分钟浪费 2 秒] x [每天 480 分钟] x [每年 240 个工作日] = 每年浪费 64 小时
这相当于损失了八个工作日,因此学习键盘快捷键将使生产率提高 3.3%(<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>)。
键盘快捷键提供了一种更快的方式来执行任务,不然就需要使用鼠标和/或菜单分多个步骤来完成。图 1 列出了 Ubuntu 18.04 Linux 和 Web 浏览器中一些最常用的快捷方式。我省略了非常有名的快捷方式(例如复制、粘贴等)以及不经常使用的快捷方式。读者可以参考在线资源以获得完整的快捷方式列表。请注意Windows 键在 Linux 中被重命名为 Super 键。
### 常规快捷方式
下面列出了常规快捷方式。
![][4]
### 打印屏幕和屏幕录像
以下快捷方式可用于打印屏幕或录制屏幕视频。
![][6]
### 在应用之间切换
此处列出的快捷键可用于在应用之间切换。
![][8]
### 平铺窗口
可以使用下面提供的快捷方式以不同方式将窗口平铺。
![][10]
### 浏览器快捷方式
此处列出了浏览器最常用的快捷方式。大多数快捷键对于 Chrome/Firefox 浏览器是通用的。
**组合键** | **行为**
---|---
`Ctrl + T` | 打开一个新标签。
`Ctrl + Shift + T` | 打开最近关闭的标签。
`Ctrl + D` | 添加一个新书签。
`Ctrl + W` | 关闭浏览器标签。
`Alt + D` | 将光标置于浏览器的地址栏中。
`F5` 或 `Ctrl + R` | 刷新页面。
`Ctrl + Shift + Del` | 清除私人数据和历史记录。
`Ctrl + N` | 打开一个新窗口。
`Home` | 滚动到页面顶部。
`End` | 滚动到页面底部。
`Ctrl + J` | 打开下载文件夹(在 Chrome 中)
`F11` | 全屏视图(切换效果)
### 终端快捷方式
这是终端快捷方式的列表。
![][12]
你还可以在 Ubuntu 中配置自己的自定义快捷方式,图形界面的操作步骤如下所示(列表后面另附有一个命令行方式的示例):
* 在 Ubuntu Dash 中单击设置。
* 在“设置”窗口的左侧菜单中选择“设备”选项卡。
* 在设备菜单中选择键盘标签。
* 右面板的底部有个 “+” 按钮。点击 “+” 号打开自定义快捷方式对话框并配置新的快捷方式。
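如果你更习惯命令行,下面是一个假设性的示例(基于 GNOME 的 `gsettings`,具体的 schema 路径和键名可能因 GNOME 版本而略有差异),它把 `Super + T` 绑定为打开终端:
```
# 仅为示意:通过 gsettings 添加一个自定义快捷键(路径、名称均为假定值)
KEY="/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"

# 注册该自定义快捷键条目(注意:这一步会覆盖已有的自定义快捷键列表,仅作演示)
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings "['$KEY']"

# 设置名称、要执行的命令以及按键组合
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY name 'Open Terminal'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY command 'gnome-terminal'
gsettings set org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:$KEY binding '<Super>t'
```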
学习本文提到的三个快捷方式可以节省大量时间,并使你的工作效率更高。
### 引用
Cohen, Andrew. How keyboard shortcuts could revive America's economy; www.brainscape.com. [Online] Brainscape, 26 May 2017;
<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/
作者:[S Sathyanarayanan][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/s-sathyanarayanan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?resize=696%2C418&ssl=1 (Google Keyboard)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?fit=750%2C450&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?resize=350%2C319&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?resize=350%2C326&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?resize=350%2C264&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?resize=350%2C186&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?resize=350%2C250&ssl=1
[12]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?ssl=1
[13]: http://www.brainscape.com
[14]: https://secure.gravatar.com/avatar/736684a2707f2ed7ae72675edf7bb3ee?s=100&r=g
[15]: https://opensourceforu.com/author/s-sathyanarayanan/
[16]: mailto:sathyanarayanan.brn@gmail.com

View File

@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11550-1.html)
[#]: subject: (How To Update a Fedora Linux System [Beginners Tutorial])
[#]: via: (https://itsfoss.com/update-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
初级:如何更新 Fedora Linux 系统
======
> 本快速教程介绍了更新 Fedora Linux 安装的多种方法。
前几天,我安装了[新发布的 Fedora 31][1]。老实说,这是我第一次使用[非 Ubuntu 发行版][2]。
安装 Fedora 之后,我做的第一件事就是尝试安装一些软件。我打开软件中心,发现该软件中心已“损坏”。 我无法从中安装任何应用程序。
我不确定我的系统出了什么问题。在团队内部讨论时Abhishek 建议我先更新系统。我更新了,更新后一切恢复正常。更新 [Fedora][3] 系统后,软件中心也能正常工作了。
有时我们一直尝试解决我们所面临的问题,而忽略了对系统的更新。不管问题有多大或多小,为了避免它们,你都应该保持系统更新。
在本文中,我将向你展示更新 Fedora Linux 系统的多种方法。
* 使用软件中心更新 Fedora
* 使用命令行更新 Fedora
* 从系统设置更新 Fedora
请记住,更新 Fedora 意味着安装安全补丁、更新内核和软件。如果要从 Fedora 的一个版本更新到另一个版本,这称为版本升级,你可以[在此处阅读有关 Fedora 版本升级过程的信息][7]。
### 从软件中心更新 Fedora
![软件中心][8]
你很可能会收到通知,通知你有一些系统更新需要查看,你应该在单击该通知时启动软件中心。
你所要做的就是 —— 点击“更新”,并验证 root 密码开始更新。
如果你没有收到更新的通知,则只需启动软件中心并转到“更新”选项卡即可。现在,你只需要继续更新。
### 使用终端更新 Fedora
如果由于某种原因无法加载软件中心,则可以使用 `dnf` 软件包管理命令轻松地更新系统。
只需启动终端并输入以下命令即可开始更新(系统将提示你确认 root 密码):
```
sudo dnf upgrade
```
> **dnf 更新 vs dnf 升级**
> 你会发现有两个可用的 dnf 命令:`dnf update` 和 `dnf upgrade`。这两个命令执行相同的工作,即安装 Fedora 提供的所有更新。那么,为什么会有这两个呢?你应该使用哪一个?`dnf update` 基本上是 `dnf upgrade` 的别名。尽管 `dnf update` 可能仍然有效,但最好使用 `dnf upgrade`,因为这是真正的命令。
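作为参考,下面是一个简单的组合用法:先列出可用更新,再刷新仓库元数据并安装全部更新:
```
# 列出当前有哪些可用更新(有可用更新时退出码为 100
sudo dnf check-update

# 刷新仓库元数据并安装全部更新
sudo dnf upgrade --refresh
```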
### 从系统设置中更新 Fedora
![][9]
如果其它方法都不行(或者由于某种原因已经进入“系统设置”),请导航至“设置”底部的“详细信息”选项。
如上图所示,该选项中显示操作系统和硬件的详细信息以及一个“检查更新”按钮。你只需要单击它并提供 root 密码即可继续安装可用的更新。
### 总结
如上所述,更新 Fedora 系统非常容易。有三种方法供你选择,因此无需担心。
如果你按上述说明操作时发现任何问题,请随时在下面的评论部分告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-fedora/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/fedora-31-release/
[2]: https://itsfoss.com/non-ubuntu-beginner-linux/
[3]: https://getfedora.org/
[4]: tmp.Lqr0HBqAd9#software-center
[5]: tmp.Lqr0HBqAd9#command-line
[6]: tmp.Lqr0HBqAd9#system-settings
[7]: https://itsfoss.com/upgrade-fedora-version/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/software-center.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/system-settings-fedora-1.png?ssl=1

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hypervisor comeback, Linus says no and reads email, and more industry trends)
[#]: via: (https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Hypervisor comeback, Linus says no and reads email, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Containers in 2019: They're calling it a [hypervisor] comeback][2]
> So what does all this mean as we continue with rapid adoption and hyper-ecosystem growth around Kubernetes and containers? Let's try and break that down into a few key areas and see what all the excitement is about.
**The impact**: I'm pretty sure that the title of the article is an LL Cool J reference, which I wholeheartedly approve of. Even more important though is a robust unpacking of developments in the hypervisor space over the last year and how they square up against the trend towards cloud-native and container-based development.
## [Linux kernel is getting more reliable, says Linus Torvalds. Plus: What do you need to do to be him?][3]
> "In the end my job is to say no. Somebody has to be able to say no, because other developers know that if they do something bad I will say no. They hopefully in turn are more careful. But in order to be able to say no, I have to know the background, because otherwise I can't do my job. I spend all my time basically reading email about what people are working on.
**The impact**: The rehabilitation of Linus as a much chiller guy continues; this one has some good advice for people leading distributed teams.
## [Automated infrastructure in the on-premise datacenter—OpenShift 4.2 on OpenStack 15 (Stein)][4]
> Up until now IPI (Installer Provision Infrastructure) has only supported public clouds: AWS, Azure, and Google. Now with OpenShift 4.2 it is supporting OpenStack. For the first time we can bring IPI into the on-premise datacenter where it is IMHO most needed. This single feature has the potential to revolutionize on-premise environments and bring them into the cloud-age with a single click and that promise is truly something to get excited about!
**The impact**: So much tech press has started with the assumption that every company should run their infrastructure like a hyperscaler. The technology is catching up to make the user experience of that feasible.
## [Kubernetes autoscaling 101: Cluster autoscaler, horizontal autoscaler, and vertical pod autoscaler][5]
> I'm providing in this post a high-level overview of different scalability mechanisms inside Kubernetes and best ways to make them serve your needs. Remember, to truly master Kubernetes, you need to master different ways to manage the scale of cluster resources, that's [the core of promise of Kubernetes][6].
>
> _Configuring Kubernetes clusters to balance resources and performance can be challenging, and requires expert knowledge of the inner workings of Kubernetes. Just because your app or service's workload isn't constant, it rather fluctuates throughout the day if not the hour. Think of it as a journey and ongoing process._
**The impact**: You can tell whether someone knows what they're talking about if they can represent it in a simple diagram. Thanks to the excellent diagrams in this post, I know more day 2 concerns of Kubernetes operators than I ever wanted to.
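As a small illustration of the "horizontal" flavor discussed in that post, the stock `kubectl` client can create a Horizontal Pod Autoscaler for an existing deployment; the deployment name below is hypothetical:
```
# Keep the hypothetical "web" deployment between 2 and 10 replicas,
# targeting roughly 80% average CPU utilization
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa web
```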
## [GitHub: All open source developers anywhere are welcome][7]
> Eighty percent of all open-source contributions today, come from outside of the US. The top two markets for open source development outside of the US are China and India. These markets, although we have millions of developers in them, are continuing to grow faster than any others at about 30% year-over-year average.
**The impact**: One of my open source friends likes to muse on the changing culture within the open source community. He posits that the old guard gatekeepers are already becoming irrelevant. I don't know if I completely agree, but I think you can look at the exponentially increasing contributions from places that haven't been on the open source map before and safely speculate that the open source culture of tomorrow will be radically different than that of today.
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.infoq.com/articles/containers-hypervisors-2019/
[3]: https://www.theregister.co.uk/2019/10/30/linux_kernel_is_getting_more_reliable_says_linus_torvalds/
[4]: https://keithtenzer.com/2019/10/29/automated-infrastructure-in-the-on-premise-datacenter-openshift-4-2-on-openstack-15-stein/
[5]: https://www.cncf.io/blog/2019/10/29/kubernetes-autoscaling-101-cluster-autoscaler-horizontal-autoscaler-and-vertical-pod-autoscaler/
[6]: https://speakerdeck.com/thockin/everything-you-ever-wanted-to-know-about-resource-scheduling-dot-dot-dot-almost
[7]: https://www.zdnet.com/article/github-all-open-source-developers-anywhere-are-welcome/#ftag=RSSbaffb68

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat announces RHEL 8.1 with predictable release cadence)
[#]: via: (https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Red Hat announces RHEL 8.1 with predictable release cadence
======
[Clkr / Pixabay][1] [(CC0)][2]
[Red Hat][3] has just today announced the availability of Red Hat Enterprise Linux (RHEL) 8.1, promising improvements in manageability, security and performance.
RHEL 8.1 will enhance the company's open [hybrid-cloud][4] portfolio and continue to provide a consistent user experience between on-premises and public-cloud deployments.
RHEL 8.1 is also the first release that will follow what Red Hat is calling its "predictable release cadence". Announced at Red Hat Summit 2019, this means that minor releases will be available every six months. The expectation is that this rhythmic release cycle will make it easier both for customer organizations and other software providers to plan their upgrades.
Red Hat Enterprise Linux 8.1 provides product enhancements in many areas.
### Enhanced automation
All supported RHEL subscriptions now include access to Red Hat's proactive analytics, **Red Hat Insights**. With more than 1,000 rules for operating RHEL systems whether on-premises or cloud deployments, Red Hat Insights help IT administrators flag potential configuration, security, performance, availability and stability issues before they impact production.
### New system roles
RHEL 8.1 streamlines the process for setting up subsystems to handle specific functions such as storage, networking, time synchronization, kdump and SELinux. This expands on the variety of Ansible system roles.
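As a rough sketch of what using a system role looks like (assuming the `rhel-system-roles` package and an Ansible inventory of target hosts; the playbook below is illustrative, not taken from the release notes):
```
# Write a minimal playbook that applies the timesync system role
cat > timesync.yml <<'EOF'
- hosts: all
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org   # example NTP server
        iburst: yes
  roles:
    - rhel-system-roles.timesync
EOF

# Run it against your inventory (requires Ansible to be installed)
ansible-playbook -i inventory timesync.yml
```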
### Live kernel patching
RHEL 8.1 adds full support for live kernel patching. This critically important feature allows IT operations teams to deal with ongoing threats without incurring excessive system downtime. Kernel updates can be applied to remediate common vulnerabilities and exposures (CVE) while reducing the need for a system reboot. Additional security enhancements include enhanced CVE remediation, kernel-level memory protection and application whitelisting.
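In practice the workflow is driven by the `kpatch` tooling plus kernel-specific patch packages; a rough sketch (exact package names track the running kernel, and availability depends on your subscription) looks like this:
```
# Install the live-patch manager
sudo yum -y install kpatch

# Install the live-patch package matching the running kernel
sudo yum -y install "kpatch-patch = $(uname -r)"

# List installed and loaded live patches
sudo kpatch list
```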
### Container-centric SELinux profiles
These profiles allow the creation of more tailored security policies to control how containerized services access host-system resources, making it easier to harden systems against security threats.
### Enhanced hybrid-cloud application development
A reliably consistent set of supported development tools is included, among them the latest stable versions of popular open-source tools and languages like golang and .NET Core as well as the ability to power modern data-processing workloads such as Microsoft SQL Server and SAP solutions.
Red Hat Enterprise Linux 8.1 is available now for RHEL subscribers via the [Red Hat Customer Portal][7]. Red Hat Developer program members may obtain the latest releases at no cost at the [Red Hat Developer][8] site.
#### Additional resources
Here are some links to  additional information:
* More about [Red Hat Enterprise Linux][9]
* Get a [RHEL developer subscription][10]
* More about the latest features at [Red Hat Insights][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://pixabay.com/vectors/red-hat-fedora-fashion-style-26734/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3316960/ibm-closes-34b-red-hat-deal-vaults-into-multi-cloud.html
[4]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://access.redhat.com/
[8]: https://developer.redhat.com
[9]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[10]: https://developers.redhat.com/
[11]: https://www.redhat.com/en/blog/whats-new-red-hat-insights-november-2019
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (System76 introduces laptops with open source BIOS coreboot)
[#]: via: (https://opensource.com/article/19/11/coreboot-system76-laptops)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
System76 introduces laptops with open source BIOS coreboot
======
The company answers open hardware fans by revealing two laptops powered
with open source firmware coreboot.
![Guy on a laptop on a building][1]
In mid-October, [System76][2] made an exciting announcement for open source hardware fans: It would soon begin shipping two of its laptop models, [Galago Pro][3] and [Darter Pro][4], with the open source BIOS [coreboot][5].
The coreboot project [says][6] its open source firmware "is a replacement for your BIOS / UEFI with a strong focus on boot speed, security, and flexibility. It is designed to boot your operating system as fast as possible without any compromise to security, with no back doors, and without any cruft from the '80s." Coreboot was previously known as LinuxBIOS, and the engineers who work on coreboot have also contributed to the Linux kernel.
Most firmware on computers sold today is proprietary, which means even if you are running an open source operating system, you have no access to your machine's BIOS. This is not so with coreboot. Its developers share the improvements they make, rather than keeping them secret from other vendors. Coreboot's source code can be inspected, learned from, and modified, just like any other open source code.
[Joshua Woolery][7], marketing director at System76, says coreboot differs from a proprietary BIOS in several important ways. "Traditional firmware is closed source and impossible to review and inspect. It's bloated with unnecessary features and unnecessarily complex [ACPI][8] implementations that lead to PCs operating in unpredictable ways. System76 Open Firmware, on the other hand, is lightweight, fast, and cleanly written." This means your computer boots faster and is more secure, he says.
I asked Joshua about the impact of coreboot on open hardware overall. "The combination of open hardware and open firmware empowers users beyond what's possible when one or the other is proprietary," he says. "Imagine an open hardware controller like [System76's] [Thelio Io][9] without open source firmware. One could read the schematic and write software to control it, but why? With open firmware, the user starts from functioning hardware and software and can expand from there. Open hardware and firmware enable the community to learn from, adapt, and expand on our work, thus moving technology forward as a whole rather than requiring individuals to constantly re-implement what's already been accomplished."
Joshua says System76 is working to open source all aspects of the computer, and we will see coreboot on other System76 machines. The hardware and firmware in Thelio Io, the controller board in the company's Thelio desktops, are both open. Less than a year after System76 introduced Thelio, the company is now marketing two laptops with open firmware.
If you would like to see System76's firmware contributions to the coreboot project, visit the code repository on [GitHub][10]. You can also see the schematics for any supported System76 model by sending an [email][11] with the subject line: _Schematics for &lt;MODEL&gt;_. (Bear in mind that the only currently supported models are darp6 and galp4.) Using the coreboot firmware on other devices is not supported and may render them inoperable.
Coreboot is licensed under the GNU Public License. You can view the [documentation][12] on the project's website and find out how to [contribute][13] to the project on GitHub.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/coreboot-system76-laptops
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Guy on a laptop on a building)
[2]: https://opensource.com/article/19/5/system76-secret-sauce
[3]: https://system76.com/laptops/galago
[4]: https://system76.com/laptops/darter
[5]: https://www.coreboot.org/
[6]: https://www.coreboot.org/users.html
[7]: https://www.linkedin.com/in/joshuawoolery
[8]: https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface
[9]: https://opensource.com/article/18/11/system76-thelio-desktop-computer
[10]: https://github.com/system76/firmware-open
[11]: mailto:productdev@system76.com
[12]: https://doc.coreboot.org/index.html
[13]: https://github.com/coreboot/coreboot

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Budget-friendly Linux Smartphone PinePhone Will be Available to Pre-order Next Week)
[#]: via: (https://itsfoss.com/pinephone/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Budget-friendly Linux Smartphone PinePhone Will be Available to Pre-order Next Week
======
Do you remember when [It's FOSS first broke the story that Pine64 was working on a Linux-based smartphone][1] running KDE Plasma (among other distributions) in 2017? It's been some time since then but the good news is that PinePhone will be available for pre-order from 15th November.
Let me provide you more details on the PinePhone like its specification, pricing and release date.
### PinePhone: Linux-based budget smartphone
The PinePhone developer kit is already being tested by some devs and more such kits will be shipped by 15th November. You can check out some of these images by clicking the photo gallery below:
The developer kit is a combo kit of PINE A64 baseboard + SOPine module + 7″ Touch Screen Display + Camera + Wifi/BT + Playbox enclosure + Lithium-Ion battery case + LTE cat 4 USB dongle.
These combo kits allow developers to jump start PinePhone development. The PINE A64 platform already has mainline Linux OS build thanks to the PINE64 community and the support by [KDE neon][2].
#### Specifications of PinePhone
![PinePhone Prototype | Image by Martjin Braam][3]
* Allwinner A64 Quad Core SoC with Mali 400 MP2 GPU
* 2GB of LPDDR3 RAM
* 5.95″ LCD 1440×720, 18:9 aspect ratio (hardened glass)
* Bootable Micro SD
* 16GB eMMC
* HD Digital Video Out
* USB Type C (Power, Data and Video Out)
* Quectel EG-25G with worldwide bands
* WiFi: 802.11 b/g/n, single-band, hotspot capable
* Bluetooth: 4.0, A2DP
* GNSS: GPS, GPS-A, GLONASS
* Vibrator
* RGB status LED
* Selfie and Main camera (2/5Mpx respectively)
* Main Camera: Single OV6540, 5MP, 1/4″, LED Flash
* Selfie Camera: Single GC2035, 2MP, f/2.8, 1/5″
* Sensors: accelerator, gyro, proximity, compass, barometer, ambient light
* 3 External Switches: up down and power
* HW switches: LTE/GNSS, WiFi, Microphone, Speaker, USB
* Samsung J7 form-factor 3000mAh battery
* Case is matte black finished plastic
* Headphone Jack
#### Production, Price &amp; Availability
![Pinephone Brave Heart Pre Order][4]
PinePhone will cost about $150. The early adopter release has been named Brave Heart edition and it will go on sale from November 15, 2019. As you can see in the image above, [Pine64's homepage][5] has included a timer for the first pre-order batch of PinePhone.
You should expect the early adopter Brave Heart editions to be shipped and delivered by December 2019 or January 2020.
Mass production will begin only after the Chinese New Year, hinting at early Q2 of 2020 or March 2020 (at the earliest).
The phone hasn't yet been listed on Pine Store so make sure to check out [Pine64 online store][6] to pre-order the Brave Heart edition if you want to be one of the early adopters.
#### What do you think of PinePhone?
Pine64 has already created a budget laptop called [Pinebook][7] and a relatively powerful [Pinebook Pro][8] laptop. So, there is definitely hope for PinePhone to succeed, at least in the niche of DIY enthusiasts and hardcore Linux fans. The low pricing is definitely a huge plus here compared to the other [Linux smartphone Librem5][9] that costs over $600.
Another good thing about PinePhone is that you can experiment with the operating system by installing Ubuntu Touch, Plasma Mobile or Aurora OS/Sailfish OS.
These Linux-based smartphones don't have the features to replace Android or iOS, yet. If you are looking for a fully functional smartphone to replace your Android smartphone, PinePhone is certainly not for you. It's more for people who like to experiment and are not afraid to troubleshoot.
If you are looking to buy PinePhone, mark the date and set a reminder. There will be limited supply and what I have seen so far, Pine devices go out of stock pretty soon.
_Are you going to pre-order a PinePhone? Let us know of your views in the comment section._
--------------------------------------------------------------------------------
via: https://itsfoss.com/pinephone/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/pinebook-kde-smartphone/
[2]: https://neon.kde.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/pinephone-prototype.jpeg?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/pinephone-brave-heart-pre-order.jpg?ssl=1
[5]: https://www.pine64.org/
[6]: https://store.pine64.org/
[7]: https://itsfoss.com/pinebook-linux-notebook/
[8]: https://itsfoss.com/pinebook-pro/
[9]: https://itsfoss.com/librem-linux-phone/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -152,7 +152,7 @@ via: https://opensource.com/article/19/10/open-source-name-origins
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Impostor Syndrome)
[#]: via: (https://opensource.com/article/19/11/my-first-open-source-contribution-impostor-syndrome)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Impostor Syndrome
======
A new open source contributor documents a series of five mistakes she
made starting out in open source.
![Dandelion held out over water][1]
The story of my first mistake goes back to the beginning of my learn-to-code journey. I taught myself the basics through online resources. I was working through tutorials and projects, making progress but also looking for the next way to level up. Pretty quickly, I came across a blog post that told me the best way for beginners _just like me_ to take their coding skills to the next level was to contribute to open source.
> "Anyone can do this," insisted the post, "and it is a crucial part of participating in the larger developer community."
My internal impostor (who, for the purpose of this post, is the personification of my impostor syndrome) latched onto this idea. "Look, Galen," she said. "The only way to be a real developer is to contribute to open source." "Alrighty," I replied, and started following the instructions in the blog post to make a [GitHub][2] account. It took me under ten minutes to get so thoroughly confused that I gave up on the idea entirely. It wasn't that I was unwilling to learn, but the resources that I was depending on expected me to have quite a bit of preexisting knowledge about [Git][3], GitHub, and how these tools allowed multiple developers to collaborate on a single project.
"Maybe I'm not ready for this yet," I thought, and went back to my tutorials. "But the blog post said that anyone can do it, even beginners," my internal impostor nagged. Thus began a multi-year internal battle between the idea that contributing to open source was easy and valuable and I should be doing it, and the impression I was not yet _ready_ to write code for open source projects.
Even once I became comfortable with Git, my internal impostor was always eager to remind me of why I was not yet ready to contribute to open source. When I was in coding Bootcamp, she whispered: "Sure, you know Git and you write code, but you've never written real code before, only fake Bootcamp code. You're not qualified to contribute to real projects that people use and depend on." When I was working my first year at work as a Software Engineer, she chided, "Okay maybe the code you write is 'real,' but you only work with one codebase! What makes you think you can write high-quality code somewhere else with different conventions, frameworks, or even languages?"
It took me about a year and a half of full-time work to finally feel confident enough to shut down my internal impostor's arguments and go for my first pull request (PR). The irony here is that my internal impostor was the one talking me both into and out of contributing to open source.
### Harmful myths
There are two harmful myths here that I want to debunk.
#### Myth 1: Contributing to open source is "easy"
Throughout this journey, I frequently ran across the message that contributing to open source was supposed to be easy. This made me question my own skills when I found myself unable to "easily" get started.
I understand why people might say that contributing to open source is easy, but I suspect what they actually mean is "it's an attainable goal," "it's accessible to beginners if they put in the work," or "it is possible to contribute to open source without writing a ton of really complex code."
All of these things are true, but it is equally important to note that contributing to open source is difficult. It requires you to take the time to understand a new codebase _and_ understand the tools that developers use.
I definitely don't want to discourage beginners from trying. It is just important to remember that running into challenges is an expected part of the process.
#### Myth 2: All "real" or "good" developers contribute to open source
My internal impostor was continually reminding me that my lack of open source contributions was a blight on my developer career. In fact, even as I write this post, I feel guilty that I have not contributed more to open source. But while working on open source is a great way to learn and participate in the broader community of developers, it is not the only way to do this. You can also blog, attend meetups, work on side projects, read, mentor, or go home at the end of a long day at work and have a lovely relaxing evening. Contributing to open source is a challenge that can be fun and rewarding if it is the challenge you choose.
Julia Evans wrote a blog post called [Don't feel guilty about not contributing to open source][4], which is a healthy reminder that there are many productive ways to use your time as a developer. I highly recommend bookmarking it for any time you feel that guilt creeping in.
### Mistake number one
Mistake number one was letting my internal impostor guide me. I let her talk me out of contributing to open source for years by telling me I was not ready. Instead, I just did not understand the amount of work I would need to put in to get to the level where I felt confident in my ability to write code for an unfamiliar project (I am still working toward this). I also let her talk me into it, with the idea that I had to contribute to open source to prove my worth as a developer. The end result was still my first merged pull request in a widely used project, but the insecurity made my entire experience less enjoyable.
### Don't let Git get you down
If you want to learn more about Git, or if you are a beginner and Git is a blocker toward making your first open-source contribution, don't panic. Git is very complicated, and you are not expected to know what it is already. Once you get the hang of it, you will find that Git is a handy tool that lets many different developers work on the same project at the same time, and then merge their individual changes together.
There are many resources to help you learn about Git and Github (a site that hosts code so that people can collaborate on it with Git). Here are some suggestions on where to start: [_Hello World_ intro to GitHub][5] and _[Resources to learn Git][6]_.
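If it helps to see the overall shape of a first contribution, here is a minimal sketch (the repository URL and branch name are made up; in practice you usually push to your own fork of the project):
```
# Get a local copy of the project (hypothetical URL)
git clone https://github.com/example/project.git
cd project

# Do your work on a separate branch
git checkout -b fix-readme-typo
# ...edit files...
git add README.md
git commit -m "Fix typo in README"

# Publish the branch, then open a pull request on GitHub
git push origin fix-readme-typo
```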
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/my-first-open-source-contribution-impostor-syndrome
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
[2]: https://github.com
[3]: https://git-scm.com
[4]: https://jvns.ca/blog/2014/04/26/i-dont-feel-guilty-about-not-contributing-to-open-source/
[5]: https://guides.github.com/activities/hello-world/
[6]: https://try.github.io/

View File

@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Big Data Solutions Support Digital Transformation)
[#]: via: (https://opensourceforu.com/2019/11/open-source-big-data-solutions-support-digital-transformation/)
[#]: author: (Vinayak Ramachandra Adkoli https://opensourceforu.com/author/vinayak-adkoli/)
Open Source Big Data Solutions Support Digital Transformation
======
[![][1]][2]
_The digital transformation (DT) of enterprises is enabled by the judicious use of Big Data. And it's open source technologies that are the driving force behind the power of Big Data and DT._
Digital Transformation (DT) and Big Data combine to offer several advantages. Big Data based digitally transformed systems make life easier and smarter, whether in the field of home automation or industrial automation. The digital world tracks Big Data generated by IoT devices, etc. It tries to make this data more productive and hence, DT should be taken for granted as the world progresses.
For example, NASA's rover Curiosity is sending Big Data from Mars to the Earth. As compared to data sent by NASA's satellites that are revolving around Mars, this data is nothing but digitally transformed Big Data, which works with DT to provide a unique platform for open source applications. Today, Curiosity has its own Twitter account with four million followers.
A Digital Transformation isn't complete unless a business adopts Big Data. The phrase "Data is the new crude oil," is not new. However, crude oil itself has no value, unless it is refined into petrol, diesel, tar, wax, etc. Similarly, in our daily lives, we deal with tons of data. If this data is refined to a useful form, only then is it of some real use.
As an example, we can see the transformation televisions have undergone, in appearance. We once had picture tube based TVs. Today, we have LEDs, OLEDs, LCD based TVs, curved TVs, Internet enabled TVs, and so on. Such transformation is also quite evident in the digital world.
In a hospital, several patients may be diagnosed with cancer each year. The patient data generated is voluminous, including treatment methods, diverse drug therapies, patient responses, genetic histories, etc. But such vast pools of information, i.e., Big Data, would serve no useful purpose without proper analysis. So DT, coupled with Big Data and open source applications, can create a more patient-focused and effective treatment, one that might have higher recovery rates.
Big Data combines structured data with unstructured data to give us new business insights that we've never had before. Structured data may be traditional spreadsheets, your customer list, information about your products and business processes, etc. Unstructured data may include Google Trends data, feeds from IoT sensors, etc. When a layer of unstructured data is placed on top of structured data and analysed, that's where the magic happens.
Let's look into a typical business situation. Let's suppose a century old car-making company asks its data team to use Big Data concepts to find an efficient way to make safe sales forecasts. In the past, the team would look at the number of products it had sold in the previous month, as well as the number of cars it had sold a year ago and use that data to make a safe forecast. But now the Big Data teams use sentiment analysis on Twitter and look at what people are saying about its products and brand. They also look at Google Trends to see which similar products and brands are being searched the most. Then they correlate such data from the preceding few months with the actual current sales figures to check if the former was predictive, i.e., had Google Trends over the past few months actually predicted the firm's current sales figures?
In the case of the car company, while making sales forecasts, the team used structured data (how many cars sold last month, a year ago, etc) and layers of unstructured data (sentiment analysis from Twitter and Google Trends) and it resulted in a smart forecast. Thus, Big Data is today becoming more effective in business situations like sales planning, promotions, market campaigns, etc.
**Open source is the key to DT**
Open source, nowadays, clearly dominates domains like Big Data, mobile and cloud platforms. Once open source becomes a key component that delivers a good financial performance, the momentum is unstoppable. Open source (often coupled with the cloud) is giving Big Data based companies like Google, Facebook and other Web giants flexibility to innovate faster.
Big Data companies are using DT to understand their processes, so that they can employ technologies like IoT, Big Data analytics, AI, etc, better. The journey of enterprises migrating from old digital infrastructure to new platforms is an exciting trend in the open source environment.
Organisations are relying on data warehouses and business intelligence applications to help make important data driven business decisions. Different types of data, such as audio, video or unstructured data, is organised in formats to help identify it for making future decisions.
**Open source tools used in DT**
Several open source tools are becoming popular for dealing with Big Data and DT. Some of them are listed below.
* **Hadoop** is known for the ability to process extremely large data volumes in both structured and unstructured formats, reliably placing Big Data to nodes in the group and making it available locally on the processing machine.
* **MapReduce** happens to be a crucial component of Hadoop. It works rapidly to process vast amounts of data in parallel on large clusters of computer nodes. It was originally developed by Google (a small conceptual sketch of the map/shuffle/reduce idea follows this list).
* **Storm** is different from other tools with its distributed, real-time, fault-tolerant processing system, unlike the batch processing of Hadoop. It is fast and highly scalable. It is now owned by Twitter.
* **Apache Cassandra** is used by many organisations with large, active data sets, including Netflix, Twitter, Urban Airship, Cisco and Digg. Originally developed by Facebook, it is now managed by the Apache Foundation.
* **Kaggle** is the world's largest Big Data community. It helps organisations and researchers to post their data and statistics. It is an open source Big Data tool that allows programmers to analyse large data sets on Hadoop. It helps with querying and managing large data sets really fast.
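To make the map/shuffle/reduce idea mentioned above a little more concrete, here is a deliberately tiny, conceptual word-count sketch using ordinary Unix tools; it is not Hadoop code, but Hadoop Streaming accepts simple per-line programs arranged in exactly this shape (the input file name is hypothetical):
```
# "map": emit one word per line from the (hypothetical) input file
tr -s '[:space:]' '\n' < input.txt > mapped.txt

# "shuffle": bring identical keys next to each other
sort mapped.txt > shuffled.txt

# "reduce": count each key, then show the most frequent words
uniq -c shuffled.txt | sort -rn | head
```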
**DT: A new innovation**
DT is the result of IT innovation. It is driven by well-planned business strategies, with the goal of inventing new business models. Today, any organisation can undergo business transformation because of three main business-focused essentials — intelligence, the ability to decide more quickly and a customer-centric outlook.
DT, which includes establishing Big Data analytics capabilities, poses considerable challenges for traditional manufacturing organisations, such as car companies. The successful introduction of Big Data analytics often requires substantial organisational transformation including new organisational structures and business processes.
Retail is one of the most active sectors when it comes to DT. JLab is an innovative DT venture by retail giant John Lewis, which offers lots of creativity and entrepreneurial dynamism. It is even encouraging five startups each year and helps them to bring their technologies to market. For example, Digital Bridge, a startup promoted by JLab, has developed a clever e-commerce website that allows shoppers to snap photos of their rooms and see what furniture and other products would look like in their own homes. It automatically detects walls and floors, and creates a photo realistic virtual representation of the customer's room. Here, lighting and decoration can be changed and products can be placed, rotated and repositioned with a realistic perspective.
Companies across the globe are going through digital business transformation as it helps to improve their business processes and leads to new business opportunities. The importance of Big Data in the business world can't be ignored. Nowadays, it is a key factor for success. There is a huge amount of valuable data which companies can use to improve their results and strategies. Today, every important decision can and should be supported by the application of data analytics.
Big Data and open source help DT do more for businesses. DT helps companies become digitally mature and gain a solid presence on the Internet. It helps companies to identify any drawbacks that may exist in their e-commerce system.
**Big Data in DT**
Data is critical, but it can't be used as a replacement for creativity. In other words, DT is not all about creativity versus data, but it's about creativity enhanced by data.
Companies gather data to analyse and improve the customer experience, and then to create targeted messages emphasising the brand promise. But emotion, story-telling and human connections remain as essential as ever. The DT world today is dominated by Big Data. This is inevitable given the fact that business organisations always want DT based Big Data, so that data is innovative, appealing, useful to attract customers and hence to increase their sales.
Tesla cars today are equipped with sensors and IoT connections to gather a vast amount of data. Improvements based on this data are then fed back into the cars, creating a better driving experience.
**DT in India**
DT can transform businesses across every vertical in India. Data analytics has changed from being a good-to-have to a must-have technology.
According to a survey by Microsoft in partnership with International Data Corporation (IDC), by 2021, DT will add an estimated US$ 154 billion to India's GDP and increase the growth rate by 1 per cent annually. Ninety per cent of Indian organisations are in the midst of their DT journey. India is the biggest user and contributor to open source technology. DT has created a new ripple across the whole of India and is one of the major drivers for the growth of open source. The government of India has encouraged the adoption of this new technology in the Digital India initiative, and this has further encouraged the CEOs of enterprises and other government organisations to make a move towards this technology.
The continuous DT in India is being driven faster with the adoption of emerging technologies like Big Data. Thats one of the reasons why organisations today are investing in these technological capabilities. Businesses in India are recognising the challenges of DT and embracing them. Overall, it may be said that the new DT concept is more investor and technology friendly, in tune with the Make in India programme of the present government.
From finding ways to increase business efficiency and trimming costs, to retaining high-value customers, determining new revenue opportunities and preventing fraud, advanced analytics is playing an important role in the DT of Big Data based companies.
**The way forward**
Access to Big Data has changed the game for small and large businesses alike. Big Data can help businesses to solve almost every problem. DT helps companies to embrace a culture of change and remain competitive in a global environment. Losing weight is a life style change and so is the incorporation of Big Data into business strategies.
Big Data is the currency of tomorrow, and today, it is the fuel running a business. DT can harness it to a greater level.
![Avatar][3]
[Vinayak Ramachandra Adkoli][4]
The author is a B.E. in industrial production, and has been a lecturer in the mechanical engineering department for ten years at three different polytechnics. He is also a freelance writer and cartoonist. He can be contacted at [karnatakastory@gmail.com][5] or [vradkoli@rediffmail.com][6].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/open-source-big-data-solutions-support-digital-transformation/
作者:[Vinayak Ramachandra Adkoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/vinayak-adkoli/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-Data-.jpg?resize=696%2C517&ssl=1 (Big Data)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-Data-.jpg?fit=800%2C594&ssl=1
[3]: https://secure.gravatar.com/avatar/7b4383616c8708e3417051b3afd64bbc?s=100&r=g
[4]: https://opensourceforu.com/author/vinayak-adkoli/
[5]: mailto:karnatakastory@gmail.com
[6]: mailto:vradkoli@rediffmail.com

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Birds Eye View of Big Data for Enterprises)
[#]: via: (https://opensourceforu.com/2019/11/a-birds-eye-view-of-big-data-for-enterprises-2/)
[#]: author: (Swapneel Mehta https://opensourceforu.com/author/swapneel-mehta/)
A Birds Eye View of Big Data for Enterprises
======
[![][1]][2]
_Entrepreneurial decisions are made using data and business acumen. Big Data is today a tool that helps to maximise revenue and customer engagement. Open source tools like Hadoop, Apache Spark and Apache Storm are the popular choices when it comes to analysing Big Data. As the volume and variety of data in the world grows by the day, there is great scope for the discovery of trends as well as for innovation in data analysis and storage._
In the past five years, the spate of research focused on machine learning has resulted in a boom in the nature and quality of heterogeneous data sources that are being tapped by providers for their customers. Cheaper compute and widespread storage makes it so much easier to apply bulk data processing techniques, and derive insights from existing and unexplored sources of rich user data including logs and traces of activity whilst using software products. Business decision making and strategy has been primarily dictated by data and is usually supported by business acumen. But in recent times it has not been uncommon to see data providing conclusions seemingly in contrast with conventional business logic.
One could take the simple example of the baseball movie Moneyball, in which the protagonist defies all notions of popular wisdom in looking solely at performance statistics to evaluate player viability, eventually building a winning team of players a team that would otherwise never have come together. The advantage of Big Data for enterprises, then, becomes a no brainer for most corporate entities looking to maximise revenue and engagement. At the back-end, this is accomplished by popular combinations of existing tools specially designed for large scale, multi-purpose data analysis. Apache, Hadoop and Spark are some of the most widespread open source tools used in this space in the industry. Concomitantly, it is easy to imagine that there are a number of software providers offering B2B services to corporate clients looking to outsource specific portions of their analytics. Therefore, there is a bustling market with customisable, proprietary technological solutions in this space as well.
![Figure 1: A crowded landscape to follow \(Source: Forbes\)][3]
Traditionally, Big Data refers to the large volumes of unstructured and heterogeneous data that is often subject to processing in order to provide insights and improve decision-making regarding critical business processes. The McKinsey Global institute estimates that data volumes have been growing at 40 per cent per year and will grow 44x between the years 2009 and 2020. But there is more to Big Data than just its immense volume. The rate of data production is an important factor given that smaller data streams generated at faster rates produce larger pools than their counterparts. Social media is a great example of how small networks can expand rapidly to become rich sources of information — up to massive, billion-node scales.
Structure in data is a highly variable attribute given that data is now extracted from across the entire spectrum of user activity. Conventional formats of storage, including relational databases, have been virtually replaced by massively unstructured data pools designed to be leveraged in manners unique to their respective use cases. In fact, there has been a huge body of work on data storage in order to leverage various write formats, compression algorithms, access methods and data structures to arrive at the best combination for improving productivity of the workflow reliant on that data. A variety of these combinations has emerged to set the industry standards in their respective verticals, with the benefits ranging from efficient storage to faster access.
Finally, we have the latent value in these data pools that remains to be exploited by the use of emerging trends in artificial intelligence and machine learning. Personalised advertising recommendations are a huge factor driving revenue for social media giants like Facebook and companies like Google that offer a suite of products and an ecosystem to use them. The well-known Silicon Valley giant started out as a search provider, but now controls a host of apps and most of the entry points for the data generated in the course of people using a variety of electronic devices across the world. Established financial institutions are now exploring the possibility of a portion of user data being put on an immutable public ledger to introduce a blockchain-like structure that can open the doors to innovation. The pace is picking up as product offerings improve in quality and expand in variety. Let's get a bird's eye view of this subject to understand where the market stands.
The idea behind building better frameworks is increasingly turning into a race to provide more add-on features and simplify workflows for the end user to engage with. This means the categories have many blurred lines because most products and tools present themselves as end-to-end platforms to manage Big Data analytics. However, we'll attempt to divide this broadly into a few categories and examine some providers in each of these.
**Big Data storage and processing**
Infrastructure is the key to building a reliable workflow for enterprise use cases. Earlier, relational databases were a worthwhile investment for small and mid-sized firms. However, when the data starts pouring in, it is usually scalability that is put to the test first. Building a flexible infrastructure comes at the cost of complexity: it is likely to have more moving parts that can cause failure in the short term. However, if done right (and that will not be easy, because it has to be tailored exactly to your company), it can result in life-changing improvements both for users and for the engineers who work with that infrastructure to build and deliver state-of-the-art products.
There are many alternatives to SQL, with the NoSQL paradigm being adopted and modified for building different types of systems. Cassandra, MongoDB and CouchDB are some well-known alternatives. Most emerging options can be distinguished by how they relax or rework the fundamental ACID properties of databases. To recall, a transaction in a database system must maintain atomicity, consistency, isolation and durability (commonly known as the ACID properties) in order to ensure accuracy, completeness and data integrity (from Tutorialspoint). For instance, CockroachDB, an open source system inspired by Google's Spanner database, has gained traction due to its distributed design. Redis and HBase offer a sort of hybrid storage solution, while Neo4j remains a flag bearer for graph-structured databases. However, traditional areas aside, there are always new challenges on the horizon for building enterprise software.
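To make the ACID guarantees mentioned above concrete, here is a minimal, illustrative sketch using Python's built-in sqlite3 module (chosen only because it ships with Python, not because it is one of the systems discussed): a failed transfer is rolled back atomically, leaving the data consistent.

```
# Illustrative sketch of atomicity and consistency with Python's sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back the whole block on error
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")  # violates our consistency rule
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass  # the partial transfer was rolled back atomically

print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 0)] -- both rows unchanged
```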
Backups are one such area where startups have found viable disruption points to enter the market. Cloud backups for enterprise software are expensive, non-trivial procedures and offloading this work to proprietary software offers a lucrative business opportunity. Rubrik and Cohesity are two companies that originally started out in this space and evolved to offer added services atop their primary offerings. Clumio is a recent entrant, purportedly creating a data fabric that the promoters expect will serve as a foundational layer to run analytics on top of. It is interesting to follow recent developments in this burgeoning space as we see competitors enter the market and attempt to carve a niche for themselves with their product offerings.
**Big Data analytics in the cloud**
Apache Hadoop remains a popular choice for many organisations. However, many successors have emerged to offer additional analytical capabilities: Apache Spark, commonly hailed as an improvement to the Hadoop ecosystem; Apache Storm, which offers real-time data processing; and Google's BigQuery, positioned as a full-fledged platform for Big Data analytics.
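To make this concrete, a minimal PySpark job that counts word frequencies in a log file might look like the sketch below; the file name is a placeholder and a local Spark installation with the pyspark package is assumed.

```
# Minimal batch-analytics sketch with PySpark (file path is a placeholder).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-word-count").getOrCreate()

# Read raw log lines, split them into words and count occurrences.
logs = spark.read.text("access.log")
counts = (logs
          .select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
          .groupBy("word")
          .count()
          .orderBy(F.desc("count")))

counts.show(10)   # print the ten most frequent words
spark.stop()
```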
Typically, cloud providers such as Amazon Web Services and Google Cloud Platform tend to build in-house products leveraging these capabilities, or replicate them entirely and offer them as hosted services to businesses. This helps them provide enterprise offerings that are closely integrated within their respective cloud computing ecosystem. There has been some discussion about the moral consequences of replicating open source products to profit off closed source versions of the same, but there has been no consensus on the topic, nor any severe consequences suffered on account of this questionable approach to boost revenue.
Another hosted service offering a plethora of Big Data analytics tools is Cloudera, which has an established track record in the market. It has been making waves since its merger with Hortonworks earlier this year, giving it added fuel to compete with the giants in its bid to become the leading enterprise cloud provider.
Overall, we've seen interesting developments in the Big Data storage and analysis domain, and as the volume and variety of data grow, so do the opportunities to innovate in the field.
![Avatar][4]
[Swapneel Mehta][5]
The author has worked at Microsoft Research, CERN and startups in AI and cyber security. He is an open source enthusiast who enjoys spending time organising software development workshops for school and college students. You can contact him at <https://www.linkedin.com/in/swapneelm>; <https://github.com/SwapneelM> or <http://www.ccdev.in>.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/a-birds-eye-view-of-big-data-for-enterprises-2/
作者:[Swapneel Mehta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/swapneel-mehta/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?resize=696%2C449&ssl=1 (Figure 1 Big Data analytics and processing for the enterprise)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-Big-Data-analytics-and-processing-for-the-enterprise.jpg?fit=900%2C580&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-A-crowded-landscape-to-follow.jpg?resize=350%2C254&ssl=1
[4]: https://secure.gravatar.com/avatar/2ba7abaf240a1f6166d506dccdcda00f?s=100&r=g
[5]: https://opensourceforu.com/author/swapneel-mehta/

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (AI and 5G: Entering a new world of data)
[#]: via: (https://www.networkworld.com/article/3451718/ai-and-5g-entering-a-new-world-of-data.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
AI and 5G: Entering a new world of data
======
The deployment model of vendor-centric equipment cannot sustain this exponential growth in traffic.
[Stinging Eyes][1] [(CC BY-SA 2.0)][2]
Today the telecom industry has identified the need for faster end-user data rates. Previously, users were happy to call and text each other. Now, however, mobile communication has transformed our lives so dramatically that it is hard to imagine being limited to calls and texts anymore.
Nowadays, we are leaning more towards imaging and VR/AR video-based communication. These applications therefore call for a new type of network: immersive experiences with 360° video require a lot of data and a zero-lag network.
To give you a quick idea, VR with a resolution equivalent to 4K TV would require a bandwidth of 1Gbps for smooth playback, or 2.5Gbps for interactive use; both require a round-trip latency of no more than 10ms. Soon these applications will target the smartphone, putting additional strain on networks. As AR/VR services grow in popularity, the proposed 5G networks should deliver the speed and performance they need.
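The back-of-envelope arithmetic behind figures like these can be sketched as follows; the frame size, frame rate, compression ratio and VR multiplier are illustrative assumptions, not measured values.

```
# Rough bandwidth estimate for a 4K-class VR stream (all inputs are assumptions).
width, height = 3840, 2160        # 4K frame
bits_per_pixel = 24               # 8-bit RGB
fps = 60                          # frames per second

raw_bps = width * height * bits_per_pixel * fps
print(f"Raw 4K stream:  {raw_bps / 1e9:.1f} Gbps")        # ~11.9 Gbps

compression_ratio = 100           # assumed ratio for a modern video codec
compressed_bps = raw_bps / compression_ratio
print(f"Compressed:     {compressed_bps / 1e6:.0f} Mbps")  # ~119 Mbps

# A 360-degree scene renders far more pixels than the current field of view,
# which is why VR estimates sit an order of magnitude above a flat 4K stream.
vr_multiplier = 10                # assumed sphere-to-viewport ratio
print(f"VR estimate:    {compressed_bps * vr_multiplier / 1e9:.1f} Gbps")
```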
Every [IoT device][3] _[Disclaimer: The author works for Network Insight]_, no matter how dumb it is, will create data and this data is the fuel for the engine of AI. AI enables us to do more interesting things with the data. The ultimate goal of the massive amount of data we will witness is the ability to turn this data into value. The rise in data from the enablement of 5G represents the biggest opportunity for AI.
There will be unprecedented levels of data that will have to move across the network for processing and in some cases be cached locally to ensure low latency. For this, we primarily need to move the processing closer to the user to utilize ultra-low latency and ultra-high throughput.
### Some challenges with 5G
The introduction of 5G is not without challenges. It's expensive, and it is distributed in ways that previous networks have not been. There is an extensive cost involved in building this type of network, and location is central to effective planning, deployment and optimization of 5G networks.
Also, the 5G millimeter wave comes with its own challenges. There are beamforming techniques that allow you to direct the signal towards a specific customer instead of sending it in every direction. The old way is similar to a light bulb that reaches all parts of the room, as opposed to a flashlight that targets specific areas.
[The time of 5G is almost here][4]
So, choosing the right location plays a key role in the development and deployment of 5G networks. Therefore, you must analyze if you are building in the right place, and are marketing to the right targets. How many new subscribers do you expect to sign up for the services if you choose one area over the other? You need to take into account the population that travels around that area, the building structures and how easy it is to get the signal.
Moreover, we must understand the potential of flooding and analyze real-time weather to predict changes in traffic. So, if there is a thunderstorm, we need to understand how such events influence the needs of the networks and then make predictive calculations. AI can certainly assist in predicting these events.
### AI, a doorway to opportunity
5G is introducing new challenges, but integrating AI techniques into networks is one way the industry is addressing these complexities. AI is a key component that needs to be adapted to the network to help manage and control this change. Another important use case for AI is network planning and operations.
With 5G, we will have hundreds of thousands of small cells everywhere, each connected to a fiber line. It has been predicted that there could be 10 million cells globally. Figuring out how to plan and design all these cells would be beyond human capability. This is where AI can do site evaluations and tell you what throughput you would get with a given design.
AI can help build out the 5G infrastructure and map out the location of cell towers to pinpoint the best location for the 5G rollout. It can continuously monitor how the network is being used. If one of the cell towers is not functioning as expected, AI can signal to another cell tower to take over.
### Vendor-centric equipment cannot sustain 5G
With the enablement of 5G networks, we have a huge amount of data. In some cases, this could be in the petabyte range per day; the majority of this will be due to video-based applications. A deployment model of vendor-centric equipment cannot sustain this exponential growth in traffic.
We will witness a lot of open source in this area, with processing, compute, storage and network functionality moving to the edge. Eventually, this will create a real-time network at the edge.
### More processing at the edge
Edge computing involves placing compute, server and network resources at the very edge of the network, closer to the user. It provides intelligence at the edge, thereby reducing the amount of traffic going to the backbone.
Edge computing can, for example, allow AI object identification to complete target recognition in under 0.35 seconds. Essentially, the image-recognition deep learning algorithm sits at the edge of the network, and running it there helps to reduce the traffic sent to the backbone.
However, this also opens up a new attack surface; luckily, AI plays well with cybersecurity. A closed-loop system will collect data at the network edge, identify threats and take real-time action.
### Edge and open source
We have a few popular open-source options at our disposal. Examples of open source edge computing projects include the Akraino Edge Stack, ONAP (Open Network Automation Platform) and the Airship Open Infrastructure Project.
The Akraino Edge Stack creates an open-source software stack that supports high-availability cloud services. These services are optimized for edge computing systems and applications.
The Akraino R1 release includes 10 “ready and proven” blueprints and delivers a fully functional edge stack for edge use cases. These range from industrial IoT and telco 5G Core & vRAN to uCPE, SD-WAN, edge media processing and carrier edge media processing.
ONAP (Open Network Automation Platform) provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. It is an open-source networking project hosted by the Linux Foundation.
Finally, the Airship Open Infrastructure Project is a collection of open-source tools for automating cloud provisioning and management. These tools include OpenStack for virtual machines, Kubernetes for container orchestration and MaaS for bare metal, with planned support for OpenStack Ironic.
**This article is published as part of the IDG Contributor Network. [Want to Join?][5]**
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451718/ai-and-5g-entering-a-new-world-of-data.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/martinlatter/4233363677
[2]: https://creativecommons.org/licenses/by-sa/2.0/legalcode
[3]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.networkworld.com/contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,155 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Conquering documentation challenges on a massive project)
[#]: via: (https://opensource.com/article/19/11/documentation-challenges-tom-caswell-matplotlib)
[#]: author: (Gina Helfrich, Ph.D. https://opensource.com/users/ginahelfrich)
Conquering documentation challenges on a massive project
======
Learn more about documentation at scale in this interview with Tom
Caswell, Matplotlib lead developer.
![Files in a folder][1]
Given the recent surge in popularity of open source data science projects like pandas, NumPy, and [Matplotlib][2], it's probably no surprise that the increased level of interest is generating user complaints about documentation. To help shed light on what's at stake, we talked to someone who knows a lot about the subject: [Thomas Caswell][3], the lead developer of Matplotlib.
Matplotlib, a flexible and customizable tool for producing static and interactive data visualizations, has been around since 2001 and is a foundational project in the scientific Python stack. It became a [NumFOCUS-sponsored project][4] in 2015.
Tom has been working on Matplotlib for the past five years and got his start answering questions about the project on Stack Overflow. Answering questions became submitting bug reports, which became writing patches, which became maintaining the project, which ultimately led to him becoming the lead developer.
**Fun fact:** Tom's advancement through the open source community follows exactly the [path described by Brett Cannon][5], a core Python maintainer.
NumFOCUS Communications Director, Gina Helfrich, sat down with Tom to discuss the challenges of managing documentation on a project as massive and as fundamental as Matplotlib.
**Gina Helfrich:** Thanks so much for taking the time to talk with us about Matplotlib and open source documentation, Tom. To contextualize our conversation a bit, can you speak a little to your impression of the [back-and-forth][6] on Twitter with Wes McKinney about pandas and user complaints about the documentation?
**Thomas Caswell:** I only kind of saw the edges, but I see both sides. On one hand, I think something Mike Pope said was, "if it's not documented, it doesn't exist." If you are writing open source tools, part of that work is documenting them, and doing so clearly in a way that users can discover and actually use, short of going to the source [code]. It's not good enough to dump code on the internet—you have to do the whole thing.
On the other hand, if you're not paying [for the software], you don't get to make demands. The attitude I think Wes was reacting to, which you see a lot, is: "You built this tool that is useful to me, therefore I expect enterprise-grade paid support because it's obviously critical to what I'm doing."
But I think the part Eric O. Lebigot was responding to is the first part. Part of building a tool is the documentation, not just the code. But Wes is responding to the entitlement, the expectation of free work, so I see both sides.
**GH:** Looking at Matplotlib specifically, which is facing many of the same issues as pandas, I know you have some big challenges with your documentation. I get the impression that there's this notion out there from new users that getting started with Matplotlib is super frustrating and the docs don't really help. Can you tell me about the history there and how the project came to have this problem?
**TC:** So, Matplotlib is a humongous library. I've been working on it for five years, and around once a month (or every other month), there's a bug report where my first reaction is, "Wait… we do _what_?"
A lot of the library is under-documented. This library survived at least two generations of partial conversion to standardized docstring formats. As I understand it (I wasn't around at the time), we were one of the first projects outside of core Python to adopt Sphinx to build our docs—possibly a little too early. We have a lot of weird customizations since Sphinx didn't have those features yet [at the time]. Other people have built better versions of those features since then, but because Matplotlib is so huge, migrating them is hard.
I think if you build the PDF version of our docs, it's around 3,000 pages, and I would say that the library has maybe half the documentation it really needs.
We are woefully under-documented in the sense that not every feature has good docs. On the other hand, we are over-documented in that what we have is not well organized and there's no clear entry point. If I want to find out how to do something, even I have a hard time finding where something is documented. And if _I_ [the lead developer] have issues finding that information, there's no prayer of new users finding it. So in that sense, we are both drastically under-documented and drastically over-documented.
**[Read next: [Sysadmins: Poor documentation is not a job insurance strategy][7]]**
**GH:** Given that Matplotlib is over 15 years old, do you have a sense of who has been writing the documentation? How does your documentation actually get developed?
**TC:** Historically, much like the code, the documentation was organically developed. We've had a lot of investment in examples and docstrings, and a few entries labeled as tutorials that teach you one specific skill. For example, we've got prose on the "rough theory of colormaps," and how to make a colormap.
A lot of Matplotlib's documentation is examples, and the examples overlap. Over the past few years, when I see interesting examples go by on the mailing list or on Stack Overflow, I'll say, "Can you put this example in the docs?" So, I guess I've been actively contributing to the problem that there's too much stuff to wade through.
Part of the issue is that people will do a six-hour tutorial and then some of those examples end up in the docs. Then, someone _else_ will do a six-hour tutorial (you can't cover the whole library in six hours) and the basics are probably similar, but they may format the tutorial differently.
**GH:** Wow, that sounds pretty challenging to inherit and try to maintain. What kinds of improvements have you been working on for the documentation?
**TC:** There's been an effort over the past couple of years to move to numpydoc format, away from the home-grown scheme we had previously. Also, [Nelle Varoquaux][8] recently did a tremendous amount of work and led the effort to move from how we were doing examples to using Sphinx-Gallery, which makes it much easier to put good prose into examples. This practice was picked up by [Chris Holdgraf][9] recently, as well. Sphinx-Gallery went live on our main docs with Matplotlib 2.1, which was a huge improvement for users. Nelle also organized a distributed [docathon][10].
We've been trying to get better about new features. When there's a new feature, you must add an example to the docs for that feature, which helps make things discoverable. We've been trying to get better about making sure docstrings exist, are accurate, and that they document all of the parameters.
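_(For reference, a numpydoc-style docstring, the format mentioned above, is structured roughly like the generic sketch below; it is illustrative and not taken from Matplotlib itself.)_

```
def moving_average(values, window=3):
    """Compute a simple moving average.

    Parameters
    ----------
    values : sequence of float
        The input samples.
    window : int, optional
        Number of samples to average over (default is 3).

    Returns
    -------
    list of float
        The averaged samples; shorter than ``values`` by ``window - 1``.

    Examples
    --------
    >>> moving_average([1.0, 2.0, 3.0, 4.0], window=2)
    [1.5, 2.5, 3.5]
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```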
**GH:** If you could wave a magic wand and have the Matplotlib docs that you want, what would they look like?
**TC:** Well, as I mentioned, the docs grew organically, and that means we have no consistent voice across them. It also means there's no single point of truth for various things. When you write an example, how far back down the basics do you go? So, it's not clear what you need to know before you can understand the example. Either you explain just enough, all the way back (so we've got a random assortment of the basics smeared everywhere), or you have examples that, unless you're already a heavy user, make no sense.
So, to answer the question, having someone who can actually _write_ and has empathy for users go through and write a 200-page intro to Matplotlib book, and have that be the main entry to the docs. That's my current vision of what I want.
**GH:** If you were introducing a new user to Matplotlib today, what would you have her read? Where would you point her in the docs?
**TC:** Well, there isn't a good, clear option for, "You've been told you need to use Matplotlib. Go spend an afternoon and read this." I'm not sure where I'd point people to for that right now. [Nicolas Rougier][11] has written some [good][12] [stuff][13] on that front, such as a tutorial for beginners, and some of that has migrated into the docs.
There's a lot out there, but it's not collated centrally, or linked from our docs as "START HERE." I should also add that I might not have the best view of this issue anymore because I haven't actively gone looking for this information, so maybe I just never found it because I don't need it. I don't know that it exists. (This topic actually [came up recently][14] on the mailing list.)
The place we do point people to is: Go look at the gallery and click on the thumbnail that looks closest to what you want to do.
Ben Root presented an [Anatomy of Matplotlib tutorial][15] at SciPy several times. There's a number of Matplotlib books that exist. It's mixed whether the authors were contributors [to the project]. Ben Root recently wrote one about [interactive figures][16]. I've been approached and have turned this task down a couple of times, just because I don't have time to write a book. So my thought for getting a technical writer was to get a technical writer to write the book, and instead of publishing the result as a book, put it in the online docs.
**GH:** Is there anyone in the Matplotlib contributor community who specializes in the documentation part of things, or takes a lot of ownership around documentation?
**TC:** Nelle was doing this for Matplotlib for a bit but has stepped back. Chris Holdgraf is taking the lead on some doc-related things now. Nicolas Rougier has written a number of [extremely good tutorials][17] outside of the project's documentation.
I mean, no one uses _just_ Matplotlib. You don't use us without also using SciPy, NumPy, or pandas. You have to be using something else to do the actual work that you now need to visualize. There are many "clean" introductions to Matplotlib in other places. For example, both Jake VanderPlas's [analysis book][18] and Katy Huff and Anthony Scopatz's [book][19] have introductions to Matplotlib that cover this topic to the degree they felt was needed for their purposes.
**GH:** I'd love to hear your thoughts on the role of Stack Overflow in all of this.
**TC:** That actually is how I got into the project. My Stack Overflow number is large, and it's almost all Matplotlib questions. And how I got started is that I answered questions. A lot of the questions on Stack Overflow are, "Please read the docs for me." Which, fine. But actually, a great way to learn the library is to answer questions on Stack Overflow, because people who have problems that you don't personally have will ask, "How do I do this?" and now you have to go figure out how to do it. It's kind of fun.
But sometimes people ask questions and they've actually found a bug. And in determining that they've actually found a bug, I tried to figure out how to fix the bugs. So, I started some reports, which led to, "Here's a pull request to fix the bug I found." And then when I started entering a lot of PRs, they were like, "You need to start reviewing them now," so they gave me commit rights and made me review things. And then they put me in charge.
I do like Stack Overflow. I think that to a large extent, what it replaced is the mailing list. If I have any criticism of Stack Overflow, I think it's convincing people who are answering questions to upstream more of the results.
There are some good examples on Stack Overflow. Here's a complex one: You have to touch these seven different functions, each of which are relatively well documented, but you have to put them together in just the right way. Some of those answers should probably go in the gallery with our annotations about how they work. Basically, if you go through Joe Kington's top 50 answers, they should probably all go in the docs.
In other cases, the question is asked because the docstring is not clear. We need to convince people who are answering those questions to use those moments as a survey of where our documentation is not clear, instead of just answering [on Stack Overflow], and then move those answers back [to the docs].
**GH:** What's it like managing PRs for documentation as opposed to patches and bug fixes?
**TC:** We've tried to streamline how we do documentation PRs. Writing documentation PRs is the most painful thing ever in open source because you get copyediting via pull request. You get picky proofreading and copyediting via GitHub comments. Like, "there's a missing comma," or "two spaces!" And again, I keep using myself as a weird outlier benchmark, _I_ get disheartened when I write doc pull requests and then I get 50 comments regarding picky little things.
What I've started trying to push as the threshold on docs is, "Did [the change] make it worse?" If it didn't make it worse, merge the change. Frequently, it takes more time to leave a GitHub comment than to fix the problem.
> "If you can use Matplotlib, you are qualified to contribute to it."
>      — Tom Caswell, Matplotlib lead developer
**GH:** What's one action you'd like members of the community who are reading this interview to take? What is one way they could make a difference on this issue?
**TC:** One thing I'd like to see more of—and I acknowledge that how to contribute to open source is a big hurdle to get over—I've said previously that if you can use Matplotlib, you are qualified to contribute to it. That's a message I would like to get out more broadly.
If you're a user and you read the docstring to something and it doesn't make sense, and then you play around a bit and you understand that function well enough to use it—you could then start clarifying docstrings.
Because one of the things I have the hardest time with is that I personally am bad at putting myself in other people's shoes when writing docs. I don't know from a user's point of view—and this sounds obnoxious but I'm deep enough in the code—what they know coming into the library as a new person. I don't know the right things to tell them in the docstring that will actually help them. I can try to guess and I'll probably write too much, or the wrong things. Or worse, I'll write a bunch of stuff that refers to things they don't know about, and now I've just made the function more confusing.
Whereas a user who has just encountered this function for the first time, and sorted out how to make it do what they need it to do for their purposes, is in the right mindset to write what they wish the docs had said that would have saved them an hour.
**GH:** That's a great message, I think. Thanks for talking with me, Tom!
**TC:** You're welcome. Thank you.
_This article was originally published on the [NumFOCUS blog][20] in 2017 and is just as relevant today. It's republished with permission by the original interviewer and has been lightly edited for style, length, and clarity. If you want to support NumFOCUS in person, attend one of the local [PyData events][21] happening around the world. Learn more about NumFOCUS on our website: [numfocus.org][22]_
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/documentation-challenges-tom-caswell-matplotlib
作者:[Gina Helfrich, Ph.D.][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ginahelfrich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://matplotlib.org
[3]: https://twitter.com/tacaswell
[4]: https://numfocus.org/sponsored-projects
[5]: https://snarky.ca/why-i-took-october-off-from-oss-volunteering/
[6]: https://twitter.com/wesmckinn/status/909772652532953088
[7]: https://www.redhat.com/sysadmin/poor-documentation
[8]: https://twitter.com/nvaroqua
[9]: https://twitter.com/choldgraf
[10]: https://www.numfocus.org/blog/numfocus-projects-participate-in-docathon-2017/
[11]: https://twitter.com/NPRougier
[12]: https://github.com/rougier/matplotlib-tutorial
[13]: http://www.labri.fr/perso/nrougier/teaching/matplotlib/matplotlib.html
[14]: https://mail.python.org/pipermail/matplotlib-users/2017-September/001031.html
[15]: https://github.com/matplotlib/AnatomyOfMatplotlib
[16]: https://www.amazon.com/Interactive-Applications-using-Matplotlib-Benjamin/dp/1783988843
[17]: http://www.labri.fr/perso/nrougier/teaching/
[18]: http://shop.oreilly.com/product/0636920034919.do
[19]: http://shop.oreilly.com/product/0636920033424.do
[20]: https://numfocus.org/blog/matplotlib-lead-developer-explains-why-he-cant-fix-the-docs-but-you-can
[21]: https://pydata.org/
[22]: https://numfocus.org

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Forrester: Edge computing is about to bloom)
[#]: via: (https://www.networkworld.com/article/3451532/forrester-edge-computing-is-about-to-bloom.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Forrester: Edge computing is about to bloom
======
2020 is set to be a “breakout year” for edge computing technology, according to the latest research from Forrester Research
Getty Images
The next calendar year will be the one that propels [edge computing][1] into the enterprise technology limelight for good, according to a set of predictions from Forrester Research.
While edge computing is primarily an [IoT][2]-related phenomenon, Forrester said that addressing the need for on-demand compute and real-time app engagements will also play a role in driving the growth of edge computing in 2020.
What it all boils down to, in some ways, is that form factors will shift sharply away from traditional rack, blade or tower servers in the coming year, depending on where the edge technology is deployed. An autonomous car, for example, won't be able to run a traditionally constructed server.
It'll also mean that telecom companies will begin to feature a lot more heavily in the cloud and distributed-computing markets. Forrester said that CDNs and [colocation vendors][5] could become juicy acquisition targets for big telecom, which missed the boat on cloud computing to a certain extent, and is eager to be a bigger part of the edge. They're also investing in open-source projects like Akraino, an edge software stack designed to support carrier availability.
But the biggest carrier impact on edge computing in 2020 will undoubtedly be the growing availability of [5G][6] network coverage, Forrester says. While that availability will still mostly be confined to major cities, that should be enough to prompt reconsideration of edge strategies by businesses that want to take advantage of capabilities like smart, real-time video processing, 3D mapping for worker productivity and use cases involving autonomous robots or drones.
Beyond the carriers, there's a huge range of players in the edge computing space, all of which have their eyes firmly on the future. Operational-device makers in every field from medicine to utilities to heavy industry will need custom edge devices for connectivity and control, huge cloud vendors will look to consolidate their hold over that end of the market, and AI/ML startups will look to enable brand-new levels of insight and functionality.
What's more, the average edge-computing implementation will often use many of them at the same time, according to Forrester, which noted that integrators who can pull products and services from many different vendors into a single system will be highly sought-after in the coming year. Multivendor solutions are likely to be much more popular than single-vendor ones, in large part because few individual companies have products that address all parts of the edge and IoT stacks.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3451532/forrester-edge-computing-is-about-to-bloom.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3407756/colocation-facilities-buck-the-cloud-data-center-trend.html
[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Making a decision)
[#]: via: (https://opensource.com/article/19/11/my-first-open-source-contribution-mistake-decisions)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Making a decision
======
A new open source contributor documents a series of five mistakes she
made starting out in open source.
![Lightbulb][1]
Previously, I put a lot of [blame on impostor syndrome][2] for delaying my first open source contribution. But there was another factor that I can't ignore: I can't make a decision to save my life. And with [millions][3] of open source projects to choose from, choosing one to contribute to is overwhelming. So overwhelming that I would often end up closing my laptop, thinking, "Maybe I'll just do this another day."
Mistake number two was letting my fear of making a decision get in the way of making my first contribution. In an ideal world, perhaps I would have come into my open source journey with a specific project in mind that I genuinely cared about and wanted to work on, but all I had was a vague goal of contributing to open source somehow. For those of you in the same position, here are strategies that helped me pick out the right project (or at least a good one) for my contribution.
### Tools that I used frequently
At first, I did not think it would be necessary to limit myself to tools or projects with which I was already familiar. There were projects that I had never used before but seemed like appealing candidates because of their active community, or the interesting problems that they solved.
However, given that I had a limited amount of time to devote to this project, I decided to stick with a tool that I already knew. To understand what a tool needs, you need to be familiar with how it is supposed to work. If you want to contribute to a project that you are unfamiliar with, you need to complete an additional step of getting to know the functionality and goals of the code. This extra load can be fun and rewarding, but it can also double your work time. Since my goal was primarily to contribute, sticking to what I knew was a helpful way to narrow things down. It is also rewarding to give back to a project that you have found useful.
### An active and friendly community
When choosing my project, I wanted to feel confident that someone would be there to review the code that I wrote. And, of course, I wanted the person who reviewed my code to be a nice person. Putting your work out there for public scrutiny is scary, after all. While I was open to constructive feedback, there were toxic corners of the developer community that I hoped to avoid.
To evaluate the community that I would be joining, I checked out the _issues_ sections of the repos that I was considering. I looked to see if someone from the core team responded regularly. More importantly, I tried to make sure that no one was talking down to each other in the comments (which is surprisingly common in issues discussions). I also looked out for projects that had a code of conduct, outlining what was appropriate vs. inappropriate behavior for online interaction.
### Clear contribution guidelines
Because this was my first time contributing to open source, I had a lot of questions around the process. Some project communities are excellent about documenting the procedures for choosing an issue and making a pull request. Although I did not select them at the time because I had never worked with the product before, [Gatsby][4] is an exemplar of this practice.
This type of clear documentation helped ease some of my insecurity about not knowing what to do. It also gave me hope that the project was open to new contributors and would take the time to look at my work. In addition to contribution guidelines, I looked in the issues section to see if the project was making use of the "good first issue" flag. This is another indication that the project is open to beginners (and helps you discover what to work on).
### Conclusion
If you don't already have a project in mind, choosing the right place to make your first open source contribution can be overwhelming. Coming up with a list of standards helped me narrow down my choices and find a great project for my first pull request.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/my-first-open-source-contribution-mistake-decisions
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh (Lightbulb)
[2]: https://opensource.com/article/19/10/my-first-open-source-contribution-mistakes
[3]: https://github.blog/2018-02-08-open-source-project-trends-for-2018/
[4]: https://www.gatsbyjs.org/contributing/

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open by nature: What building a platform for activists taught me about playful development)
[#]: via: (https://opensource.com/open-organization/19/11/open-by-nature)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
Open by nature: What building a platform for activists taught me about playful development
======
Building a global platform for environmental activists revealed a spirit
of openness that's central to human nature—and taught me how to design
for it.
![The Open Organization at Greenpeace][1]
"Open" isn't just a way we can build software. It's an attitude we can adopt toward anything we do.
And when we adopt it, we can move mountains.
Participating in a design sprint with colleagues at Greenpeace reminded me of that. As I explained in the first [two][2] [parts][3] of this [series][4], learning to think, plan, and work the open way is helping us build something truly great—a new, global platform for engaging activists who want to take action on behalf of our planet.
The sprint experience (part of a collaboration with Red Hat) reinforced several lessons about openness I've learned throughout my career as an advocate for open source, an architect of change, and a community organizer.
It also taught me a few new ones.
### An open nature
The design sprint experience reminded me just how central "openness" is to human nature. We all cook, sew, construct, write, play music, tinker, paint, tell stories—engage in the world through the creation of thousands of artifacts that allow others to understand our outlooks and worldviews. We express ourselves through our creations. We always have.
We express ourselves through our creations. We always have.
And throughout all of our expressive making, we reflect on and _share_ what we've created. We ask for feedback: _"Do you like my new recipe?" "What do you think of my painting?"_
We learn. Through trial and error (and ever-important failure), we learn what to do and what _not_ to do. Learning to make something work involves discovery and wonder in a spiral of [intrinsic motivation][5]; each new understanding unlocks new questions. We improve our skills as we create, and when we share.
I noticed something critically important while our teams were collaborating: learning to work openly can liberate a certain playfulness that often gets ignored (or buried) in many organizations today—and that playfulness can help us solve complex problems. When we're having fun learning, creating, and sharing, we're often in a flow, truly interested in our work, creating environments that others want to join. Openness can be a fount of innovation.
While our mission is a serious one, the more joy we find in it, the more people we'll attract to it. Discovery is a delightful process, and agency is empowering. The design sprint allowed us to finish with something that spurred reflection of our project—and do so with both humor and passion. The sprint left a lot of room for play, connection between participants, collaboration to solve problems, and decision-making.
### Positively open
Watching Red Hatters and Greenpeacers interact—many just having met one another for the first time—also crystallized for me some important impressions of open leadership.
Open leadership took many forms throughout the sprint. The Red Hat team showed open leadership when they adapted the agenda on the first day. Greenpeace was further ahead than other groups they'd planned for, so their plan wouldn't work. Greenpeacers were transparent about certain internal politics (because it's no use planning something that's impossible to build).
Open leaders are beacons of positivity. They assume best intentions in others. They truly listen. They live open principles. They build people up.
People left their baggage at the door. We showed up, all of us, and were present together.
Open leaders are beacons of positivity. They assume best intentions in others. They truly listen. They live open principles. They build people up. They remember to move as a collective, to ask for the insight of the collective, to thank the collective.
And in the spirit of positive, open leadership, I want to offer my own thanks.
Thanks to the Planet 4 team, a small group of people who kept pushing forward, despite the difficulties of a global project like this—a group that fought, made mistakes, and kept going despite them. They continue to pull together, and behind the scenes they're trying to be more open as they inspire the entire organization on an open journey with them (and build a piece of software at the same time!).
Thanks to the others at Greenpeace who have supported this work and those who have participated in it. Thanks to the leaders in other departments, who saw the potential of this work and helped us socialize it.
Thanks, too, to [the open organization community at Opensource.com][6] and [long-time colleagues][7] who modeled the behaviours and lent their open spirit to helping the Planet 4 team get started.
### Open returns
If openness is a way of being, then central to that way of being is [a spirit of reciprocity and exchange][8].
We belong to our communities and thus we contribute to them. We strive to be transparent so that our communities can grow and welcome new collaborators. When we infuse positivity into the world and into our projects, we create an atmosphere that invites innovation.
Our success in open source means working to nurture those ecosystems of passionate contributors. Our success as a species demands the same kind of care for our natural ecosystems, too.
Both Red Hat and Greenpeace understand the importance of ecosystems—and that shared understanding powered our collaboration on Planet 4.
As an open source software company, Red Hat both benefits from and contributes to open source software communities across the world—communities forming a technological ecosystem of passionate contributors that must always be in delicate balance. Greenpeace is also focused on the importance of maintaining ecosystems—the natural ecosystems of which we are all, irrevocably, a part. Our success in open source means working to nurture those ecosystems of passionate contributors. Our success as a species demands the same kind of care for our natural ecosystems, too, and Planet 4 is a platform that helps everyone do exactly that. For both organizations, innovation is _social_ innovation; what we create _with_ others ultimately _benefits_ others, enhancing their lives.
_Listen to Alexandra Machado of Red Hat explain social innovation._
So, really, the end of this story is just the beginning of so many others that will spawn from Planet 4.
Yours can begin immediately. [Join the Planet 4 project][9] and advocate for a greener, more peaceful future—the open way.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/11/open-by-nature
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-3-blog-thumbnail-500x283.png?itok=aK5TOqSS
[2]: https://opensource.com/open-organization/19/10/open-platform-greenpeace
[3]: https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace
[4]: https://opensource.com/tags/open-organization-greenpeace
[5]: http://en.wikipedia.org/wiki/Motivation#Intrinsic_and_extrinsic_motivation
[6]: https://opensource.com/open-organization/resources/meet-ambassadors
[7]: https://medium.com/planet4/how-to-prepare-for-planet-4-user-interviews-a3a8cd627fe
[8]: https://opensource.com/open-organization/19/9/peanuts-community-reciprocity
[9]: https://planet4.greenpeace.org/create/contribute/

View File

@ -0,0 +1,152 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications)
[#]: via: (https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/)
[#]: author: (Dr Kumar Gaurav https://opensourceforu.com/author/dr-gaurav-kumar/)
A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications
======
[![][1]][2]
_Cloud platforms enable high performance computing without the need to purchase the required infrastructure. Cloud services are available on a pay per use basis which is very economical. This article takes a look at cloud platforms like Neptune, BigML, Deep Cognition and Google Colaboratory, all of which can be used for high performance applications._
Software applications, smart devices and gadgets face many performance issues, which include load balancing, turnaround time, delay, congestion, Big Data, parallel computations and others. These key issues traditionally consume enormous computational resources, and low-configuration computers are not able to handle high performance tasks. The laptops and desktops available in the market are designed for personal use, so these systems face numerous performance issues when they are tasked with high performance jobs.
For example, a desktop computer or laptop with a 3GHz processor can perform approximately 3 billion computations per second. High performance computing (HPC), by contrast, is focused on solving complex problems and working through trillions or even quadrillions of computations with high speed and maximum accuracy.
![Figure 1: The Neptune portal][3]
![Figure 2: Creating a new project on the Neptune platform][4]
**Application domains and use cases**
High performance computing applications are used in domains where speed and accuracy levels are quite high as compared to those in traditional scenarios, and the cost factor is also very high.
The following are the use cases where high performance implementations are required:
* Nuclear power plants
* Space research organisations
* Oil and gas exploration
* Artificial intelligence and knowledge discovery
* Machine learning and deep learning
* Financial services and digital forensics
* Geographical and satellite data analytics
* Bio-informatics and molecular sciences
**Working with cloud platforms for high performance applications**
There are a number of cloud platforms on which high performance computing applications can be launched without users having actual access to the supercomputer. The billing for these cloud services is done on a usage basis and costs less compared to purchasing the actual infrastructure required to work with high performance computing applications.
The following are a few of the prominent cloud based platforms that can be used for advanced implementations including data science, data exploration, machine learning, deep learning, artificial intelligence, etc.
**Neptune**
URL: _<https://neptune.ml/>_
Neptune is a lightweight cloud based service for high performance applications including data science, machine learning, predictive knowledge discovery, deep learning, modelling training curves and many others. Neptune can be integrated with Jupyter notebooks so that Python programs can be easily executed for multiple applications.
The Neptune dashboard is available at <https://ui.neptune.ml/>, where multiple experiments can be performed. Neptune works as a machine learning lab on which assorted algorithms can be programmed and their outcomes visualised. The platform is available as Software as a Service (SaaS), so deployment can be done on the cloud. Deployments can also be done on the user's own hardware and mapped to the Neptune cloud.
In addition to having a pre-built cloud based platform, Neptune can be integrated with Python and R programming so that high performance applications can be programmed. Python and R are prominent programming environments for data science, machine learning, deep learning, Big Data and many other applications.
For Python programming, Neptune provides neptune-client so that communication with the Neptune server can be achieved, and advanced data analytics can be implemented on its advanced cloud.
For integration of Neptune with R, there is the effective reticulate library, which wraps neptune-client so that it can be called from R code.
The detailed documentation for the integration of R and Python with Neptune is available at _<https://docs.neptune.ml/python-api.html> and <https://docs.neptune.ml/r-support.html>_.
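As a rough sketch of what this looks like in practice, the snippet below logs a dummy metric with the neptune-client package; the project name, API token and metric values are placeholders, and the call names follow the neptune-client workflow as documented at the time of writing, so they may differ between versions.

```
# Sketch of logging an experiment with neptune-client (placeholders throughout).
import neptune

neptune.init(project_qualified_name='my-workspace/sandbox',
             api_token='YOUR_API_TOKEN')            # placeholder credentials

neptune.create_experiment(name='baseline-model',
                          params={'learning_rate': 0.01, 'epochs': 5})

for epoch in range(5):
    # Replace with real training; here we just log a dummy loss curve.
    neptune.log_metric('train_loss', 1.0 / (epoch + 1))

neptune.stop()
```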
![Figure 3: Integration of Neptune with Jupyter Notebook][5]
![Figure 4: Dashboard of BigML][6]
In addition, integration with MLflow and TensorBoard is also available. MLflow is an open source platform for managing the machine learning life cycle with reproducibility, advanced experiments and deployments. It has three key components — tracking, projects and models. These can be programmed and controlled using the Neptune MLflow integration.
TensorFlow can be associated with Neptune using Neptune-TensorBoard. TensorFlow is one of the most powerful frameworks for deep learning and advanced knowledge discovery.
With the use of assorted features and dimensions, the Neptune cloud can be used for high performance research based implementations.
**BigML**
URL: _<https://bigml.com/>_
BigML is a cloud based platform for the implementation of advanced algorithms with assorted data sets. This cloud based platform has a panel for implementing multiple machine learning algorithms with ease.
The BigML dashboard has access to different data sets and algorithms under supervised and unsupervised taxonomy, as shown in Figure 4. The researcher can use the algorithm from the menu according to the requirements of the research domain.
![Figure 5: Algorithms and techniques integrated with BigML][7]
A number of tools, libraries and repositories are integrated with BigML so that the programming, collaboration and reporting can be done with a higher degree of performance and minimum error levels.
Algorithms and techniques can be attached to specific data sets for evaluation and deep analytics, as shown in Figure 5. Using this methodology, the researcher can work with the code as well as the data set on easier platforms.
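As an illustrative sketch (not an official BigML example), the typical source-to-prediction pipeline with the BigML Python bindings looks roughly like this; the credentials, file name and field values are placeholders.

```
# Sketch of the source -> dataset -> model -> prediction pipeline with BigML.
from bigml.api import BigML

api = BigML('my_username', 'my_api_key')   # placeholder credentials

source = api.create_source('iris.csv')     # upload a local CSV file
api.ok(source)                             # wait until the resource is ready
dataset = api.create_dataset(source)
api.ok(dataset)
model = api.create_model(dataset)
api.ok(model)

# Field name and output key follow the bindings' documented examples.
prediction = api.create_prediction(model, {'petal length': 4.2})
print(prediction['object']['output'])
```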
The following are the tools and libraries associated with BigML for multiple applications of high performance computing:
* Node-Red for flow diagrams
* GitHub repos
* BigMLer as the command line tool
* Alexa Voice Service
* Zapier for machine learning workflows
* Google Sheets
* Amazon EC2 Image PredictServer
* BigMLX app for MacOS
![Figure 6: Enabling Google Colaboratory from Google Drive][8]
![Figure 7: Activation of the hardware accelerator with Google Colaboratory notebook][9]
**Google Colaboratory**
URL: _<https://colab.research.google.com>_
Google Colaboratory is one of the cloud platforms for the implementation of high performance computing tasks including artificial intelligence, machine learning, deep learning and many others. It is a cloud based service which integrates Jupyter Notebook so that Python code can be executed as per the application domain.
Google Colaboratory is available as a Google app in Google Cloud Services. It can be invoked from Google Drive as depicted in Figure 6 or directly at _<https://colab.research.google.com>_.
The Jupyter notebook in Google Colaboratory runs on the CPU by default. If a hardware accelerator is required, such as a tensor processing unit (TPU) or a graphics processing unit (GPU), it can be activated from _Notebook Settings_, as shown in Figure 7.
Figure 8 presents a view of Python code that is imported in the Jupyter Notebook. The data set can be placed in Google Drive. The data set under analysis is mapped with the code so that the script can directly perform the operations as programmed in the code. The outputs and logs are presented on the Jupyter Notebook in the platform of Google Colaboratory.
![Figure 8: Implementation of the Python code on the Google Colaboratory Jupyter Notebook][10]
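The following is a minimal sketch of the two steps just described for a Colaboratory cell: confirming the attached accelerator and reading a data set from Google Drive. TensorFlow is assumed to be available in the runtime, and `dataset.csv` is a hypothetical file name placed in the top level of the users Drive.

```
# A minimal sketch for a Colaboratory cell: check the attached accelerator
# and read a data set from Google Drive. The file name dataset.csv is a
# hypothetical placeholder.
import tensorflow as tf
print(tf.test.gpu_device_name())    # prints a device string, or '' if no GPU is attached

from google.colab import drive
drive.mount('/content/drive')       # prompts for authorisation, then mounts Drive

import pandas as pd
df = pd.read_csv('/content/drive/My Drive/dataset.csv')
print(df.head())
```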
**Deep Cognition**
URL: _<https://deepcognition.ai/>_
Deep Cognition provides the platform to implement advanced neural networks and deep learning models. AutoML with Deep Cognition provides an autonomous integrated development environment (IDE) so that the coding, testing and debugging of advanced models can be done.
It has a visual editor so that the multiple layers of different types can be programmed. The layers that can be imported are core layers, hidden layers, convolutional layers, recurrent layers, pooling layers and many others.
The platform provides the features to work with advanced frameworks and libraries of MXNet and TensorFlow for scientific computations and deep neural networks.
![Figure 9: Importing layers in neural network models on Deep Cognition][11]
**Scope for research and development**
Research scholars, academicians and practitioners can work on advanced algorithms and their implementations using cloud-based platforms dedicated to high-performance computing. With this type of implementation, there is no need to purchase specific infrastructure or devices; rather, a supercomputing environment can be rented on the cloud.
![Avatar][12]
[Dr Kumar Gaurav][13]
The author is the managing director of Magma Research and Consultancy Pvt Ltd, Ambala Cantonment, Haryana. He has 16 years of experience in teaching, in industry and in research. He is a projects contributor for the Web-based source code repository SourceForge.net. He is associated with various central, state and deemed universities in India as a research guide and consultant. He is also an author and consultant reviewer/member of advisory panels for various journals, magazines and periodicals. The author can be reached at [kumargaurav.in@gmail.com][14].
[![][15]][16]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/
作者:[Dr Kumar Gaurav][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?resize=696%2C384&ssl=1 (Big ML Colab and Deep cognition)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?fit=900%2C497&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-The-Neptune-portal.jpg?resize=350%2C122&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Creating-a-new-project-on-the-Neptune-platform.jpg?resize=350%2C161&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Integration-of-Neptune-with-Jupyter-Notebook.jpg?resize=350%2C200&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-Dashboard-of-BigML.jpg?resize=350%2C193&ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Algorithms-and-techniques-integrated-with-BigML.jpg?resize=350%2C200&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Enabling-Google-Colaboratory-from-Google-Drive.jpg?resize=350%2C253&ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Activation-of-the-hardware-accelerator-with-Google-Colaboratory-notebook.jpg?resize=350%2C264&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-8-Implementation-of-the-Python-code-on-the-Google-Colaboratory-Jupyter-Notebook.jpg?resize=350%2C253&ssl=1
[11]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-9-Importing-layers-in-neural-network-models-on-Deep-Cognition.jpg?resize=350%2C254&ssl=1
[12]: https://secure.gravatar.com/avatar/4a506881730a18516f8f839f49527105?s=100&r=g
[13]: https://opensourceforu.com/author/dr-gaurav-kumar/
[14]: mailto:kumargaurav.in@gmail.com
[15]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[16]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Pimcore: An open source alternative for product information management)
[#]: via: (https://opensource.com/article/19/11/pimcore-alternative-product-information-management)
[#]: author: (Dietmar Rietsch https://opensource.com/users/erinmcmahon)
Getting started with Pimcore: An open source alternative for product information management
======
PIM software enables sellers to centralize sales, marketing, and
technical product information to engage better with customers.
![Pair programming][1]
Product information management (PIM) software enables sellers to consolidate product data into a centralized repository that acts as a single source of truth, minimizing errors and redundancies in product data. This, in turn, makes it easier to share high-quality, clear, and accurate product information across customer touchpoints, paving the way for rich, consistent, readily accessible content that's optimized for all the channels customers use, including websites, social platforms, marketplaces, apps, IoT devices, conversational interfaces, and even print catalogs and physical stores. Being able to engage with customers on their favorite platform is essential for increasing sales and expanding into new markets. For years, there have been proprietary products that address some of these needs, like Salsify for data management, Adobe Experience Manager, and SAP Commerce Cloud for experience management, but now there's an open source alternative called Pimcore.
[Pimcore PIM][2] is an open source enterprise PIM, dual-[licensed][3] under GPLv3 and Pimcore Enterprise License (PEL) that enables sellers to centralize and harmonize sales, marketing, and technical product information. Pimcore can acquire, manage, and share any digital data and integrate easily into an existing IT system landscape. Its API-driven, service-oriented architecture enables fast and seamless connection to third-party software such as enterprise resource planning (ERP), customer relationship management (CRM), business intelligence (BI), and more.
### Open source vs. proprietary PIM software
There are at least four significant differences between open source and proprietary software that PIM users should consider.
* **Vendor lock-in:** It is more difficult to customize proprietary software. If you want to develop a new feature or modify an existing one, proprietary software lock-in makes you dependent on the vendor. On the other hand, open source provides unlimited access and flexibility to modify the source code and leverage it to your advantage, as well as the opportunity to freely access contributions made by the community behind it.
* **Interoperability:** Open source PIM software offers greater interoperability capabilities with APIs for integration with third-party business applications. Since the source code is open and available, users can customize or build connectors to meet their needs, which is not possible with proprietary software.
* **Community:** Open source solutions are supported by vibrant communities of contributors, implementers, developers, and other enthusiasts working towards enhancing the solution. Proprietary PIM software typically depends on commercial partnerships for implementation assistance and customizations.
* **Total cost of ownership:** Proprietary software carries a significant license fee for deployment, which includes implementation, customization, and system maintenance. In contrast, open source software development can be done in-house or through an IT vendor. This becomes a huge advantage for enterprises with tight budgets, as it slashes PIM operating costs.
### Pimcore features
Pimcore's platform is divided into two core offerings: data management and experience management. In addition to being open source and free to download and use, its features include the following.
#### Data modeling
Pimcore's web-based data modeling engine has over 40 high-performance data types that can help companies easily manage zillions of products or other master data with thousands of attributes. It also offers multilingual data management, object relations, data classification, digital asset management (DAM), and data modeling supported by data inheritance.
![Pimcore translations inheritance][4]
#### Data management
Pimcore enables efficient enterprise data management that focuses on ease of use; consistency in aggregation, organization, classification, and translation of product information; and sound data governance to enable optimization, flexibility, and scalability.
![PIM batch change][5]
#### Data quality
Data quality management is the basis for analytics and business intelligence (BI). Pimcore supports data quality, completeness, and validation, and includes rich auditing and versioning features to help organizations meet revenue goals, compliance requirements, and productivity objectives. Pimcore also offers a configurable dashboard, custom reports capabilities, filtering, and export functionalities.
![PIM data quality and completeness][6]
#### Workflow management
Pimcore's advanced workflow engine makes it easy to build and modify workflows to improve accuracy and productivity and reduce risks. Drop-downs enable enterprises to chalk out workflow paths to define business processes and editorial workflows with ease, and the customizable management and administration interface makes it easy to integrate workflows into an organization's application infrastructure.
![Pimcore workflow management][7]
#### Data consolidation
Pimcore eliminates data silos by consolidating data in a central place and creating a single master data record or a single point of truth. It does this by gathering data lying in disparate systems spread across geographic locations, departments, applications, hard drives, vendors, suppliers, and more. By consolidating data, enterprises can get improved accuracy, reliability, and efficacy of information, lower cost of compliance, and decreased time-to-market.
#### Synchronization across channels
Pimcore's tools for gathering and managing digital data enable sellers to deliver it across any channel or device to reach individual customers on their preferred platforms. This helps enterprises enrich the user experience, leverage a single point of control to optimize performance, improve data governance, streamline product data lifecycle management, and boost productivity to reduce time-to-market and meet customers' expectations.
### Installing, trying, and using Pimcore
The best way to start exploring Pimcore is with a guided tour or demo; before you begin, make sure that you have the [system requirements][8] in place.
#### Demo Pimcore
Navigate to the [Pimcore demo][9] page and either register for a guided tour or click on one of the products in the "Try By Yourself" column for a self-guided demo. Enter the username **admin** and password **demo** to begin the demo.
![Pimcore demo page][10]
#### Download and install Pimcore
If you want to take a deeper dive, you can [download Pimcore][11]; you can choose the data management or the experience management offering or both. You will need to enter your contact information and then immediately receive installation instructions.
![Pimcore download interface][12]
You can also choose from four installation packages: three are demo packages for beginners, and one is a skeleton for experienced developers. All contain:
* Complete Pimcore platform
* Latest open source version
* Quick-start guide
* Demo data for getting started
If you are installing Pimcore on a typical [LAMP][13] environment (which is recommended), see the [Pimcore installation guide][14]. If you're using another setup (e.g., Nginx), see the [installation, setup, and upgrade guide][15] for details.
![Pimcore installation documentation][16]
### Contribute to Pimcore
Because Pimcore is open source software, users are encouraged to engage with, [contribute][17] to, and fork it. For tracking bugs and features, as well as for software management, Pimcore relies exclusively on [GitHub][18], where contributions are assessed and carefully curated to uphold Pimcores quality standards.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/pimcore-alternative-product-information-management
作者:[Dietmar Rietsch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/erinmcmahon
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
[2]: https://pimcore.com/en
[3]: https://github.com/pimcore/pimcore/blob/master/LICENSE.md
[4]: https://opensource.com/sites/default/files/uploads/pimcoretranslationinheritance.png (Pimcore translations inheritance)
[5]: https://opensource.com/sites/default/files/uploads/pimcorebatchchange.png (PIM batch change)
[6]: https://opensource.com/sites/default/files/uploads/pimcoredataquality.png (PIM data quality and completeness)
[7]: https://opensource.com/sites/default/files/pimcore-workflow-management.jpg (Pimcore workflow management)
[8]: https://pimcore.com/docs/5.x/Development_Documentation/Installation_and_Upgrade/System_Requirements.html
[9]: https://pimcore.com/en/try
[10]: https://opensource.com/sites/default/files/uploads/pimcoredemopage.png (Pimcore demo page)
[11]: https://pimcore.com/en/download
[12]: https://opensource.com/sites/default/files/uploads/pimcoredownload.png (Pimcore download interface)
[13]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[14]: https://pimcore.com/docs/5.x/Development_Documentation/Getting_Started/Installation.html
[15]: https://pimcore.com/docs/5.x/Development_Documentation/Installation_and_Upgrade/index.html
[16]: https://opensource.com/sites/default/files/uploads/pimcoreinstall.png (Pimcore installation documentation)
[17]: https://github.com/pimcore/pimcore/blob/master/CONTRIBUTING.md
[18]: https://github.com/pimcore/pimcore

View File

@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Much of a Genius-Level Move Was Using Binary Space Partitioning in Doom?)
[#]: via: (https://twobithistory.org/2019/11/06/doom-bsp.html)
[#]: author: (Two-Bit History https://twobithistory.org)
How Much of a Genius-Level Move Was Using Binary Space Partitioning in Doom?
======
In 1993, id Software released the first-person shooter _Doom_, which quickly became a phenomenon. The game is now considered one of the most influential games of all time.
A decade after _Doom_s release, in 2003, journalist David Kushner published a book about id Software called _Masters of Doom_, which has since become the canonical account of _Doom_s creation. I read _Masters of Doom_ a few years ago and dont remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of _Doom_, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because _Doom_ was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called “binary space partitioning,” never before used in a video game, that dramatically sped up the _Doom_ engine.
That story about Carmack applying cutting-edge academic research to video games has always impressed me. It is my explanation for why Carmack has become such a legendary figure. He deserves to be known as the archetypal genius video game programmer for all sorts of reasons, but this episode with the academic papers and the binary space partitioning is the justification I think of first.
Obviously, the story is impressive because “binary space partitioning” sounds like it would be a difficult thing to just read about and implement yourself. Ive long assumed that what Carmack did was a clever intellectual leap, but because Ive never understood what binary space partitioning is or how novel a technique it was when Carmack decided to use it, Ive never known for sure. On a spectrum from Homer Simpson to Albert Einstein, how much of a genius-level move was it really for Carmack to add binary space partitioning to _Doom_?
Ive also wondered where binary space partitioning first came from and how the idea found its way to Carmack. So this post is about John Carmack and _Doom_, but it is also about the history of a data structure: the binary space partitioning tree (or BSP tree). It turns out that the BSP tree, rather interestingly, and like so many things in computer science, has its origins in research conducted for the military.
Thats right: E1M1, the first level of _Doom_, was brought to you by the US Air Force.
### The VSD Problem
The BSP tree is a solution to one of the thorniest problems in computer graphics. In order to render a three-dimensional scene, a renderer has to figure out, given a particular viewpoint, what can be seen and what cannot be seen. This is not especially challenging if you have lots of time, but a respectable real-time game engine needs to figure out what can be seen and what cannot be seen at least 30 times a second.
This problem is sometimes called the problem of visible surface determination. Michael Abrash, a programmer who worked with Carmack on _Quake_ (id Softwares follow-up to _Doom_), wrote about the VSD problem in his famous _Graphics Programming Black Book_:
> I want to talk about what is, in my opinion, the toughest 3-D problem of all: visible surface determination (drawing the proper surface at each pixel), and its close relative, culling (discarding non-visible polygons as quickly as possible, a way of accelerating visible surface determination). In the interests of brevity, Ill use the abbreviation VSD to mean both visible surface determination and culling from now on.
> Why do I think VSD is the toughest 3-D challenge? Although rasterization issues such as texture mapping are fascinating and important, they are tasks of relatively finite scope, and are being moved into hardware as 3-D accelerators appear; also, they only scale with increases in screen resolution, which are relatively modest.
> In contrast, VSD is an open-ended problem, and there are dozens of approaches currently in use. Even more significantly, the performance of VSD, done in an unsophisticated fashion, scales directly with scene complexity, which tends to increase as a square or cube function, so this very rapidly becomes the limiting factor in rendering realistic worlds.[1][1]
Abrash was writing about the difficulty of the VSD problem in the late 90s, years after _Doom_ had proved that regular people wanted to be able to play graphically intensive games on their home computers. In the early 90s, when id Software first began publishing games, the games had to be programmed to run efficiently on computers not designed to run them, computers meant for word processing, spreadsheet applications, and little else. To make this work, especially for the few 3D games that id Software published before _Doom_, id Software had to be creative. In these games, the design of all the levels was constrained in such a way that the VSD problem was easier to solve.
For example, in _Wolfenstein 3D_, the game id Software released just prior to _Doom_, every level is made from walls that are axis-aligned. In other words, in the Wolfenstein universe, you can have north-south walls or west-east walls, but nothing else. Walls can also only be placed at fixed intervals on a grid—all hallways are either one grid square wide, or two grid squares wide, etc., but never 2.5 grid squares wide. Though this meant that the id Software team could only design levels that all looked somewhat the same, it made Carmacks job of writing a renderer for _Wolfenstein_ much simpler.
The _Wolfenstein_ renderer solved the VSD problem by “marching” rays into the virtual world from the screen. Usually a renderer that uses rays is a “raycasting” renderer—these renderers are often slow, because solving the VSD problem in a raycaster involves finding the first intersection between a ray and something in your world, which in the general case requires lots of number crunching. But in _Wolfenstein_, because all the walls are aligned with the grid, the only location a ray can possibly intersect a wall is at the grid lines. So all the renderer needs to do is check each of those intersection points. If the renderer starts by checking the intersection point nearest to the players viewpoint, then checks the next nearest, and so on, and stops when it encounters the first wall, the VSD problem has been solved in an almost trivial way. A ray is just marched forward from each pixel until it hits something, which works because the marching is so cheap in terms of CPU cycles. And actually, since all walls are the same height, it is only necessary to march a single ray for every _column_ of pixels.
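To make that concrete, here is a toy version of the grid-based approach in Python. It cheats by stepping each ray forward in small fixed increments instead of jumping straight to grid-line intersections the way the _Wolfenstein_ renderer does, and the map, player position and wall-height formula are all invented for the example.

```
# A toy, grid-aligned "ray march" in the spirit of the renderer described
# above. 1 = wall cell, 0 = empty space. One ray is marched per screen
# column; the distance to the first wall hit sets the column's wall height.
import math

MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def march(px, py, angle, step=0.05, max_dist=20.0):
    """Walk a ray forward in small steps until it enters a wall cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

def render_columns(px, py, facing, fov=math.pi / 3, columns=20):
    """One ray per screen column; nearer hits produce taller wall slices."""
    heights = []
    for col in range(columns):
        angle = facing - fov / 2 + fov * col / (columns - 1)
        heights.append(round(10 / (march(px, py, angle) + 0.1)))
    return heights

print(render_columns(1.5, 1.5, facing=0.0))
```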
This rendering shortcut made _Wolfenstein_ fast enough to run on underpowered home PCs in the era before dedicated graphics cards. But this approach would not work for _Doom_, since the id team had decided that their new game would feature novel things like diagonal walls, stairs, and ceilings of different heights. Ray marching was no longer viable, so Carmack wrote a different kind of renderer. Whereas the _Wolfenstein_ renderer, with its ray for every column of pixels, is an “image-first” renderer, the _Doom_ renderer is an “object-first” renderer. This means that rather than iterating through the pixels on screen and figuring out what color they should be, the _Doom_ renderer iterates through the objects in a scene and projects each onto the screen in turn.
In an object-first renderer, one easy way to solve the VSD problem is to use a z-buffer. Each time you project an object onto the screen, for each pixel you want to draw to, you do a check. If the part of the object you want to draw is closer to the player than what was already drawn to the pixel, then you can overwrite what is there. Otherwise you have to leave the pixel as is. This approach is simple, but a z-buffer requires a lot of memory, and the renderer may still expend a lot of CPU cycles projecting level geometry that is never going to be seen by the player.
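As a quick sketch of the z-buffer idea (not what _Doom_ ended up doing, as well see), here is a toy version where each “object” is just a handful of already-projected fragments; the sizes and depth values are made up:

```
# A minimal z-buffer sketch for the object-first approach described above.
# Each fragment is (x, y, depth, colour); a pixel is overwritten only when
# the new fragment is closer than whatever is already stored there.
WIDTH, HEIGHT = 4, 3
INF = float('inf')

frame = [['.' for _ in range(WIDTH)] for _ in range(HEIGHT)]
zbuf = [[INF for _ in range(WIDTH)] for _ in range(HEIGHT)]

def draw(fragments):
    for x, y, depth, colour in fragments:
        if depth < zbuf[y][x]:      # closer than anything drawn so far?
            zbuf[y][x] = depth
            frame[y][x] = colour

draw([(1, 1, 5.0, 'A'), (2, 1, 5.0, 'A')])   # far object
draw([(2, 1, 2.0, 'B')])                     # nearer object wins the shared pixel
print(frame)   # [['.', '.', '.', '.'], ['.', 'A', 'B', '.'], ['.', '.', '.', '.']]
```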
In the early 1990s, there was an additional drawback to the z-buffer approach: On IBM-compatible PCs, which used a video adapter system called VGA, writing to the output frame buffer was an expensive operation. So time spent drawing pixels that would only get overwritten later tanked the performance of your renderer.
Since writing to the frame buffer was so expensive, the ideal renderer was one that started by drawing the objects closest to the player, then the objects just beyond those objects, and so on, until every pixel on screen had been written to. At that point the renderer would know to stop, saving all the time it might have spent considering far-away objects that the player cannot see. But ordering the objects in a scene this way, from closest to farthest, is tantamount to solving the VSD problem. Once again, the question is: What can be seen by the player?
Initially, Carmack tried to solve this problem by relying on the layout of _Doom_s levels. His renderer started by drawing the walls of the room currently occupied by the player, then flooded out into neighboring rooms to draw the walls in those rooms that could be seen from the current room. Provided that every room was convex, this solved the VSD issue. Rooms that were not convex could be split into convex “sectors.” You can see how this rendering technique might have looked if run at extra-slow speed [in this video][2], where YouTuber Bisqwit demonstrates a renderer of his own that works according to the same general algorithm. This algorithm was successfully used in Duke Nukem 3D, released three years after _Doom_, when CPUs were more powerful. But, in 1993, running on the hardware then available, the _Doom_ renderer that used this algorithm struggled with complicated levels—particularly when sectors were nested inside of each other, which was the only way to create something like a circular pit of stairs. A circular pit of stairs led to lots of repeated recursive descents into a sector that had already been drawn, strangling the game engines speed.
Around the time that the id team realized that the _Doom_ game engine might be too slow, id Software was asked to port _Wolfenstein 3D_ to the Super Nintendo. The Super Nintendo was even less powerful than the IBM-compatible PCs of the day, and it turned out that the ray-marching _Wolfenstein_ renderer, simple as it was, didnt run fast enough on the Super Nintendo hardware. So Carmack began looking for a better algorithm. It was actually for the Super Nintendo port of _Wolfenstein_ that Carmack first researched and implemented binary space partitioning. In _Wolfenstein_, this was relatively straightforward because all the walls were axis-aligned; in _Doom_, it would be more complex. But Carmack realized that BSP trees would solve _Doom_s speed problems too.
### Binary Space Partitioning
Binary space partitioning makes the VSD problem easier to solve by splitting a 3D scene into parts ahead of time. For now, you just need to grasp why splitting a scene is useful: If you draw a line (really a plane in 3D) across your scene, and you know which side of the line the player or camera viewpoint is on, then you also know that nothing on the other side of the line can obstruct something on the viewpoints side of the line. If you repeat this process many times, you end up with a 3D scene split into many sections, which wouldnt be an improvement on the original scene except now you know more about how different parts of the scene can obstruct each other.
The first people to write about dividing a 3D scene like this were researchers trying to establish for the US Air Force whether computer graphics were sufficiently advanced to use in flight simulators. They released their findings in a 1969 report called “Study for Applying Computer-Generated Images to Visual Simulation.” The report concluded that computer graphics could be used to train pilots, but also warned that the implementation would be complicated by the VSD problem:
> One of the most significant problems that must be faced in the real-time computation of images is the priority, or hidden-line, problem. In our everyday visual perception of our surroundings, it is a problem that nature solves with trivial ease; a point of an opaque object obscures all other points that lie along the same line of sight and are more distant. In the computer, the task is formidable. The computations required to resolve priority in the general case grow exponentially with the complexity of the environment, and soon they surpass the computing load associated with finding the perspective images of the objects.[2][3]
One solution these researchers mention, which according to them was earlier used in a project for NASA, is based on creating what I am going to call an “occlusion matrix.” The researchers point out that a plane dividing a scene in two can be used to resolve “any priority conflict” between objects on opposite sides of the plane. In general you might have to add these planes explicitly to your scene, but with certain kinds of geometry you can just rely on the faces of the objects you already have. They give the example in the figure below, where (p_1), (p_2), and (p_3) are the separating planes. If the camera viewpoint is on the forward or “true” side of one of these planes, then (p_i) evaluates to 1. The matrix shows the relationships between the three objects based on the three dividing planes and the location of the camera viewpoint—if object (a_i) obscures object (a_j), then entry (a_{ij}) in the matrix will be a 1.
![][4]
The researchers propose that this matrix could be implemented in hardware and re-evaluated every frame. Basically the matrix would act as a big switch or a kind of pre-built z-buffer. When drawing a given object, no video would be output for the parts of the object when a 1 exists in the objects column and the corresponding row object is also being drawn.
The major drawback with this matrix approach is that to represent a scene with (n) objects you need a matrix of size (n^2). So the researchers go on to explore whether it would be feasible to represent the occlusion matrix as a “priority list” instead, which would only be of size (n) and would establish an order in which objects should be drawn. They immediately note that for certain scenes like the one in the figure above no ordering can be made (since there is an occlusion cycle), so they spend a lot of time laying out the mathematical distinction between “proper” and “improper” scenes. Eventually they conclude that, at least for “proper” scenes—and it should be easy enough for a scene designer to avoid “improper” cases—a priority list could be generated. But they leave the list generation as an exercise for the reader. It seems the primary contribution of this 1969 study was to point out that it should be possible to use partitioning planes to order objects in a scene for rendering, at least _in theory_.
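Restated in modern terms, turning an occlusion matrix into a priority list is a topological sort of the “obscures” relation, and it fails exactly when the scene is “improper” because the relation contains a cycle. Here is a small sketch of that idea (the matrix is invented for the example):

```
# Sketch of the 1969 report's "priority list" idea: m[i][j] == 1 means
# "object i obscures object j" from the current viewpoint. A back-to-front
# draw order exists only if the relation has no cycles ("proper" scenes).
def priority_list(m):
    remaining = set(range(len(m)))
    order = []
    while remaining:
        # pick an object that obscures nothing still undrawn: it is safely "farthest"
        farthest = [i for i in remaining
                    if not any(m[i][j] for j in remaining if j != i)]
        if not farthest:
            raise ValueError("improper scene: occlusion cycle, no priority list")
        i = farthest[0]
        order.append(i)
        remaining.remove(i)
    return order   # draw in this order, back to front

# object 0 obscures 1, object 1 obscures 2 -> draw 2, then 1, then 0
m = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(priority_list(m))   # [2, 1, 0]
```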
It was not until 1980 that a paper, titled “On Visible Surface Generation by A Priori Tree Structures,” demonstrated a concrete algorithm to accomplish this. The 1980 paper, written by Henry Fuchs, Zvi Kedem, and Bruce Naylor, introduced the BSP tree. The authors say that their novel data structure is “an alternative solution to an approach first utilized a decade ago but due to a few difficulties, not widely exploited”—here referring to the approach taken in the 1969 Air Force study.[3][5] A BSP tree, once constructed, can easily be used to provide a priority ordering for objects in the scene.
Fuchs, Kedem, and Naylor give a pretty readable explanation of how a BSP tree works, but let me see if I can provide a less formal but more concise one.
You begin by picking one polygon in your scene and making the plane in which the polygon lies your partitioning plane. That one polygon also ends up as the root node in your tree. The remaining polygons in your scene will be on one side or the other of your root partitioning plane. The polygons on the “forward” side or in the “forward” half-space of your plane end up in the left subtree of your root node, while the polygons on the “back” side or in the “back” half-space of your plane end up in the right subtree. You then repeat this process recursively, picking a polygon from your left and right subtrees to be the new partitioning planes for their respective half-spaces, which generates further half-spaces and further sub-trees. You stop when you run out of polygons.
Say you want to render the geometry in your scene from back-to-front. (This is known as the “painters algorithm,” since it means that polygons further from the camera will get drawn over by polygons closer to the camera, producing a correct rendering.) To achieve this, all you have to do is an in-order traversal of the BSP tree, where the decision to render the left or right subtree of any node first is determined by whether the camera viewpoint is in either the forward or back half-space relative to the partitioning plane associated with the node. So at each node in the tree, you render all the polygons on the “far” side of the plane first, then the polygon in the partitioning plane, then all the polygons on the “near” side of the plane—”far” and “near” being relative to the camera viewpoint. This solves the VSD problem because, as we learned several paragraphs back, the polygons on the far side of the partitioning plane cannot obstruct anything on the near side.
The following diagram shows the construction and traversal of a BSP tree representing a simple 2D scene. In 2D, the partitioning planes are instead partitioning lines, but the basic idea is the same in a more complicated 3D scene.
![][6] _Step One: The root partitioning line along wall D splits the remaining geometry into two sets._
![][7] _Step Two: The half-spaces on either side of D are split again. Wall C is the only wall in its half-space so no split is needed. Wall B forms the new partitioning line in its half-space. Wall A must be split into two walls since it crosses the partitioning line._
![][8] _A back-to-front ordering of the walls relative to the viewpoint in the top-right corner, useful for implementing the painters algorithm. This is just an in-order traversal of the tree._
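Here is the same procedure as a rough sketch in Python, using 2D wall segments like the diagram above. To keep it short, each wall is classified by its first endpoint and walls that straddle a partitioning line are assumed to be pre-split; the walls and camera position are made up for the example.

```
# A compact 2D BSP sketch of the build/traverse procedure described above.
# Walls are 2D line segments; the partitioning "plane" is the line through
# a chosen wall. Segments that straddle a partition line are assumed to be
# pre-split, so the splitting step the article describes is omitted.
class Node:
    def __init__(self, wall, front=None, back=None):
        self.wall, self.front, self.back = wall, front, back

def side(wall, point):
    """> 0 if point is in front of the wall's line, < 0 if behind it."""
    (x1, y1), (x2, y2) = wall
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def build(walls):
    if not walls:
        return None
    root, rest = walls[0], walls[1:]
    front = [w for w in rest if side(root, w[0]) >= 0]   # classify by first endpoint
    back = [w for w in rest if side(root, w[0]) < 0]
    return Node(root, build(front), build(back))

def painters_order(node, camera, out):
    """In-order traversal: far side first, then the node's wall, then the near side."""
    if node is None:
        return
    if side(node.wall, camera) >= 0:          # camera in front -> back subtree is far
        painters_order(node.back, camera, out)
        out.append(node.wall)
        painters_order(node.front, camera, out)
    else:
        painters_order(node.front, camera, out)
        out.append(node.wall)
        painters_order(node.back, camera, out)

walls = [((0, 0), (4, 0)), ((0, 0), (0, 4)), ((4, 0), (4, 4)), ((1, 2), (3, 2))]
tree = build(walls)
order = []
painters_order(tree, camera=(2, 1), out=order)
print(order)   # a valid back-to-front occlusion ordering for this viewpoint
```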
The really neat thing about a BSP tree, which Fuchs, Kedem, and Naylor stress several times, is that it only has to be constructed once. This is somewhat surprising, but the same BSP tree can be used to render a scene no matter where the camera viewpoint is. The BSP tree remains valid as long as the polygons in the scene dont move. This is why the BSP tree is so useful for real-time rendering—all the hard work that goes into constructing the tree can be done beforehand rather than during rendering.
One issue that Fuchs, Kedem, and Naylor say needs further exploration is the question of what makes a “good” BSP tree. The quality of your BSP tree will depend on which polygons you decide to use to establish your partitioning planes. I skipped over this earlier, but if you partition using a plane that intersects other polygons, then in order for the BSP algorithm to work, you have to split the intersected polygons in two, so that one part can go in one half-space and the other part in the other half-space. If this happens a lot, then building a BSP tree will dramatically increase the number of polygons in your scene.
Bruce Naylor, one of the authors of the 1980 paper, would later write about this problem in his 1993 paper, “Constructing Good Partitioning Trees.” According to John Romero, one of Carmacks fellow id Software co-founders, this paper was one of the papers that Carmack read when he was trying to implement BSP trees in _Doom_.[4][9]
### BSP Trees in Doom
Remember that, in his first draft of the _Doom_ renderer, Carmack had been trying to establish a rendering order for level geometry by “flooding” the renderer out from the players current room into neighboring rooms. BSP trees were a better way to establish this ordering because they avoided the issue where the renderer found itself visiting the same room (or sector) multiple times, wasting CPU cycles.
“Adding BSP trees to _Doom_” meant, in practice, adding a BSP tree generator to the _Doom_ level editor. When a level in _Doom_ was complete, a BSP tree was generated from the level geometry. According to Fabien Sanglard, the generation process could take as long as eight seconds for a single level and 11 minutes for all the levels in the original _Doom_.[5][10] The generation process was lengthy in part because Carmacks BSP generation algorithm tries to search for a “good” BSP tree using various heuristics. An eight-second delay would have been unforgivable at runtime, but it was not long to wait when done offline, especially considering the performance gains the BSP trees brought to the renderer. The generated BSP tree for a single level would have then ended up as part of the level data loaded into the game when it starts.
Carmack put a spin on the BSP tree algorithm outlined in the 1980 paper, because once _Doom_ is started and the BSP tree for the current level is read into memory, the renderer uses the BSP tree to draw objects front-to-back rather than back-to-front. In the 1980 paper, Fuchs, Kedem, and Naylor show how a BSP tree can be used to implement the back-to-front painters algorithm, but the painters algorithm involves a lot of over-drawing that would have been expensive on an IBM-compatible PC. So the _Doom_ renderer instead starts with the geometry closer to the player, draws that first, then draws the geometry farther away. This reverse ordering is easy to achieve using a BSP tree, since you can just make the opposite traversal decision at each node in the tree. To ensure that the farther-away geometry is not drawn over the closer geometry, the _Doom_ renderer uses a kind of implicit z-buffer that provides much of the benefit of a z-buffer with a much smaller memory footprint. There is one array that keeps track of occlusion in the horizontal dimension, and another two arrays that keep track of occlusion in the vertical dimension from the top and bottom of the screen. The _Doom_ renderer can get away with not using an actual z-buffer because _Doom_ is not technically a fully 3D game. The cheaper data structures work because certain things never appear in _Doom_: The horizontal occlusion array works because there are no sloping walls, and the vertical occlusion arrays work because no walls have, say, two windows, one above the other.
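A toy version of that front-to-back idea, with a single per-column “drawn” flag standing in for the horizontal occlusion array, might look like this (the wall spans and labels are invented):

```
# Sketch of front-to-back rendering with a 1D "implicit z-buffer": one flag
# per screen column. Walls arrive nearest-first (the order a BSP traversal
# can provide); each column is written at most once, so far geometry never
# overdraws near geometry, and drawing stops early once every column is full.
def render_front_to_back(walls, screen_width):
    # walls: list of (first_column, last_column, label), nearest first
    column_drawn = [False] * screen_width
    framebuffer = [None] * screen_width
    for first, last, label in walls:
        for col in range(first, last + 1):
            if not column_drawn[col]:
                framebuffer[col] = label
                column_drawn[col] = True
        if all(column_drawn):        # everything visible has been drawn
            break
    return framebuffer

# the near wall covers columns 2..5; the far wall only shows where the near one does not
print(render_front_to_back([(2, 5, 'near'), (0, 7, 'far')], 8))
# ['far', 'far', 'near', 'near', 'near', 'near', 'far', 'far']
```

The real renderer also tracks occlusion vertically, but the early exit once every column is filled is the heart of why front-to-back rendering avoids overdraw.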
The only other tricky issue left is how to incorporate _Doom_s moving characters into the static level geometry drawn with the aid of the BSP tree. The enemies in _Doom_ cannot be a part of the BSP tree because they move; the BSP tree only works for geometry that never moves. So the _Doom_ renderer draws the static level geometry first, keeping track of the segments of the screen that were drawn to (with yet another memory-efficient data structure). It then draws the enemies in back-to-front order, clipping them against the segments of the screen that occlude them. This process is not as optimal as rendering using the BSP tree, but because there are usually fewer enemies visible than there is level geometry in a level, speed isnt as much of an issue here.
Using BSP trees in _Doom_ was a major win. Obviously it is pretty neat that Carmack was able to figure out that BSP trees were the perfect solution to his problem. But was it a _genius_-level move?
In his excellent book about the _Doom_ game engine, Fabien Sanglard quotes John Romero saying that Bruce Naylors paper, “Constructing Good Partitioning Trees,” was mostly about using BSP trees to cull backfaces from 3D models.[6][11] According to Romero, Carmack thought the algorithm could still be useful for _Doom_, so he went ahead and implemented it. This description is quite flattering to Carmack—it implies he saw that BSP trees could be useful for real-time video games when other people were still using the technique to render static scenes. There is a similarly flattering story in _Masters of Doom_: Kushner suggests that Carmack read Naylors paper and asked himself, “what if you could use a BSP to create not just one 3D image but an entire virtual world?”[7][12]
This framing ignores the history of the BSP tree. When those US Air Force researchers first realized that partitioning a scene might help speed up rendering, they were interested in speeding up _real-time_ rendering, because they were, after all, trying to create a flight simulator. The flight simulator example comes up again in the 1980 BSP paper. Fuchs, Kedem, and Naylor talk about how a BSP tree would be useful in a flight simulator that pilots use to practice landing at the same airport over and over again. Since the airport geometry never changes, the BSP tree can be generated just once. Clearly what they have in mind is a real-time simulation. In the introduction to their paper, they even motivate their research by talking about how real-time graphics systems must be able to create an image in at least 1/30th of a second.
So Carmack was not the first person to think of using BSP trees in a real-time graphics simulation. Of course, its one thing to anticipate that BSP trees might be used this way and another thing to actually do it. But even in the implementation Carmack may have had more guidance than is commonly assumed. The [Wikipedia page about BSP trees][13], at least as of this writing, suggests that Carmack consulted a 1991 paper by Chen and Gordon as well as a 1990 textbook called _Computer Graphics: Principles and Practice_. Though no citation is provided for this claim, it is probably true. The 1991 Chen and Gordon paper outlines a front-to-back rendering approach using BSP trees that is basically the same approach taken by _Doom_, right down to what Ive called the “implicit z-buffer” data structure that prevents farther polygons being drawn over nearer polygons. The textbook provides a great overview of BSP trees and some pseudocode both for building a tree and for displaying one. (Ive been able to skim through the 1990 edition thanks to my wonderful university library.) _Computer Graphics: Principles and Practice_ is a classic text in computer graphics, so Carmack might well have owned it.
Still, Carmack found himself faced with a novel problem—”How can we make a first-person shooter run on a computer with a CPU that cant even do floating-point operations?”—did his research, and proved that BSP trees are a useful data structure for real-time video games. I still think that is an impressive feat, even if the BSP tree had first been invented a decade prior and was pretty well theorized by the time Carmack read about it. Perhaps the accomplishment that we should really celebrate is the _Doom_ game engine as a whole, which is a seriously nifty piece of work. Ive mentioned it once already, but Fabien Sanglards book about the _Doom_ game engine (_Game Engine Black Book: DOOM_) is an excellent overview of all the different clever components of the game engine and how they fit together. We shouldnt forget that the VSD problem was just one of many problems that Carmack had to solve to make the _Doom_ engine work. That he was able, on top of everything else, to read about and implement a complicated data structure unknown to most programmers speaks volumes about his technical expertise and his drive to perfect his craft.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I've wanted to learn more about GNU Readline for a while, so I thought I'd turn that into a new blog post. Includes a few fun facts from an email exchange with Chet Ramey, who maintains Readline (and Bash):<https://t.co/wnXeuyjgMx>
>
> — TwoBitHistory (@TwoBitHistory) [August 22, 2019][16]
1. Michael Abrash, “Michael Abrashs Graphics Programming Black Book,” James Gregory, accessed November 6, 2019, <http://www.jagregory.com/abrash-black-book/#chapter-64-quakes-visible-surface-determination>. [↩︎][17]
2. R. Schumacher, B. Brand, M. Gilliland, W. Sharp, “Study for Applying Computer-Generated Images to Visual Simulation,” Air Force Human Resources Laboratory, December 1969, accessed on November 6, 2019, <https://apps.dtic.mil/dtic/tr/fulltext/u2/700375.pdf>. [↩︎][18]
3. Henry Fuchs, Zvi Kedem, Bruce Naylor, “On Visible Surface Generation By A Priori Tree Structures,” ACM SIGGRAPH Computer Graphics, July 1980. [↩︎][19]
4. Fabien Sanglard, Game Engine Black Book: DOOM (CreateSpace Independent Publishing Platform, 2018), 200. [↩︎][20]
5. Sanglard, 206. [↩︎][21]
6. Sanglard, 200. [↩︎][22]
7. David Kushner, Masters of Doom (Random House Trade Paperbacks, 2004), 142. [↩︎][23]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2019/11/06/doom-bsp.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: #fn:1
[2]: https://youtu.be/HQYsFshbkYw?t=822
[3]: #fn:2
[4]: https://twobithistory.org/images/matrix_figure.png
[5]: #fn:3
[6]: https://twobithistory.org/images/bsp.svg
[7]: https://twobithistory.org/images/bsp1.svg
[8]: https://twobithistory.org/images/bsp2.svg
[9]: #fn:4
[10]: #fn:5
[11]: #fn:6
[12]: #fn:7
[13]: https://en.wikipedia.org/wiki/Binary_space_partitioning
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1164631020353859585?ref_src=twsrc%5Etfw
[17]: #fnref:1
[18]: #fnref:2
[19]: #fnref:3
[20]: #fnref:4
[21]: #fnref:5
[22]: #fnref:6
[23]: #fnref:7

View File

@ -0,0 +1,50 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first contribution to open source: Make a fork of the repo)
[#]: via: (https://opensource.com/article/19/11/first-open-source-contribution-fork-clone)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first contribution to open source: Make a fork of the repo
======
Which comes first, to clone or fork a repo?
![User experience vs. design][1]
Previously, I explained [how I ultimately chose a project][2] for my contributions. Once I finally picked that project and a task to work on, I felt like the hard part was over, and I slid into cruise control. I knew what to do next, no question. Just clone the repository so that I have the code on my computer, make a new branch for my work, and get coding, right?
It turns out I made a crucial mistake at this step. Unfortunately, I didnt realize that I had made a mistake until several hours later when I tried to push my completed code back up to GitHub and got a permission denied error. My third mistake was trying to work directly from a clone of the repo.
When you want to contribute to someone elses repo, in most cases, you should not clone the repo directly. Instead, you should make a fork of the repo and clone that. You do all of your work on a branch of your fork. Then, when you are ready to make a pull request, you can compare your branch on the fork against the master branch of the original repo.
Before this, I had only ever worked on repos that I either created or had collaborator permissions for, so I could work directly from a clone of the main repo. I did not realize that GitHub even offered the capability to make a pull request from a repo fork onto the original repo. Now that Ive learned a bit about this option, I can see that it is a great feature that makes sense. Forking allows a project to open up contributions to anyone with a GitHub account without having to add them all as "contributors." It also helps keep the main project clean by keeping most new branches on forks, so that they dont create clutter.
I would have preferred to know this before I started writing my code (or, in this case, finished writing my code, since I didnt attempt to push any of my changes to GitHub until the end). Moving my changes over from the main repo that I originally worked on into the fork was non-trivial.
For those of you getting started, here are the steps to make a PR on a repository that you do not own, or where you are not a collaborator. I highly recommend trying to push your code to GitHub and at least going through the steps of creating a PR before you get too deep into coding, just to make sure you have everything set up the right way:
1. Make a fork of the repo youve chosen for your contributions.
2. From the fork, click **Clone or download** to create a copy on your computer.
**Optional:** [Add the base repository as a remote "upstream,"][3] which is helpful if you want to pull down new changes from the base repository into your fork.
3. [Create a pull request from the branch on your fork into the master branch of the base repository.][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/first-open-source-contribution-fork-clone
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_DesirePath.png?itok=N_zLVWlK (User experience vs. design)
[2]: https://opensource.com/article/19/10/first-open-source-contribution-mistake-two
[3]: https://help.github.com/en/articles/configuring-a-remote-for-a-fork
[4]: https://help.github.com/en/articles/creating-a-pull-request-from-a-fork

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it Takes to Be a Successful Network Engineer)
[#]: via: (https://opensourceforu.com/2019/11/what-it-takes-to-be-a-successful-network-engineer/)
[#]: author: (Christopher Nichols https://opensourceforu.com/author/christopher-nichols/)
What it Takes to Be a Successful Network Engineer
======
[![][1]][2]
_Network engineering is an excellent field filled with complex and fulfilling work, and many job opportunities. As companies end up with networks that continue to become more complex and connect more devices together, network engineers are in high demand. Being successful in this role requires several characteristics and skill sets that serve employees well in this fast-paced and mission-critical environment._
**Deep Understanding of Networking Technologies**
Some people might think that this characteristic is assumed when it comes to network engineering. However, theres a distinct difference between knowing enough about networking to manage and monitor the system, and having a truly in-depth understanding of the subject matter. The best network engineers eat, breathe, and drink this type of technology. They keep on top of the latest trends in their free time and are thrilled to learn about new developments in the field.
**Detail Oriented**
Networking has a lot of moving parts and various types of software and hardware to work with. Paying close attention to all of the details ensures that the system is being monitored correctly and nothing gets lost in the shuffle. With data breaches prevalent in the business world, stopping an intrusion could mean identifying a small red flag that popped up the day before. Without being alert to these details, the network ends up being vulnerable.
**Problem Solving**
One of the most used skills in network engineering is problem-solving. Everything from troubleshooting issues for users to looking for ways to improve the performance of the network requires it. When a worker in this field can quickly and efficiently solve issues through an analytical mindset, they free up a lot of time for strategic decision-making.
**Team Coordination**
Many organizations have teams collaborating across departments. The network engineer may play a small part on the team or be put in a management position, based on the resources required for the project. Working with multiple teams requires strong people management skills and an understanding of how to move towards a common goal.
**Ongoing Education**
Many continuing education opportunities exist for network engineering. Many organizations offer certifications in specific networking technologies, whether the person is learning about a particular server operating system or branching out into subject areas that are related to networking. A drive for ongoing education means that the network engineer will always have their skills updated to adapt to the latest technology changes in the marketplace. Additionally, when these workers love to learn, they also seek out self-instruction opportunities. For example, they could [_read this guide_][3] to learn more about how VPN protocols work.
**Documentation**
Strong writing skills may not be the first thing that comes to mind when someone thinks about a network engineer. However, they are essential when it comes to writing technical documentation. Well-structured and clear documentation allows the network engineer to share information about the network with other people in the organization. If that person ends up leaving the company, the networking protocols, procedures and configuration remain in place because all of the data is available and understandable.
**Jargon-free Communication**
Network engineers have frequent conversations with stakeholders and end users, who may not have a strong IT background. The common jargon used for talking with other members of the IT teams would leave this group confused and not understanding what youre saying. When the network engineer can explain technology in simple terms, it makes it easier to get the resources and budget that they need to effectively support the companys networking needs.
**Proactive Approaches**
Some network engineers rely on reactive approaches to fix problems when they occur. If data breaches arent prevented before they impact the organization, then it ends up being an expensive endeavor. A reactive approach is sometimes compared to running around and putting out fires the entire day. A proactive approach is more strategic. Network engineers put systems, policies and procedures in place that prevent the intrusion in the first place. They pick up on small issues and tackle them as soon as they show up, rather than waiting for something to break. Its easier to improve network performance because many of the low-level problems are eliminated through the network design or other technology that was implemented.
**Independent**
Network engineers often have to work on tasks without a lot of oversight. Depending on the companys budget, they may be the only person in their role in the entire organization. Working independently requires the employee to be driven and a self-starter. They must be able to keep themselves on task and stick to the schedule thats laid out for that particular project. In the event of a disaster, the network engineer may need to step into a leadership role to guide the recovery process.
**Fast Learner**
Technology changes all the time, and the interactions between new hardware and software may not be expected. A fast learner can quickly pick up the most important details about a piece of technology so that they can effectively troubleshoot it or optimize it.
**On-Call**
Disasters can strike a network at any time, and unexpected downtime is one of the worst things that can happen to a modern business. The mission-critical systems have to come up as soon as possible, which means that network engineers may need to take on-call shifts. One of the keys to being on-call is to be ready to act at a moment's notice, even if it's the middle of the night.
**Reliability**
Few businesses can operate without their network being up and available. If critical software or hardware is not available, then the entire business may find itself at a standstill. Customers get upset that they can't access the website or reach anyone in the company, employees are frustrated because they're falling behind on their projects, and management is running around trying to get everything back up and running. For a network engineer, reliability is key. Being available makes a big difference in resolving these types of problems, and always showing up on time and on schedule goes a long way towards cementing someone as a great network engineer.
![Avatar][4]
[Christopher Nichols][5]
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/what-it-takes-to-be-a-successful-network-engineer/
作者:[Christopher Nichols][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/christopher-nichols/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2015/03/Network-cable-with-router.jpg?resize=696%2C372&ssl=1 (Network cable with router)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2015/03/Network-cable-with-router.jpg?fit=1329%2C710&ssl=1
[3]: https://surfshark.com/learn/vpn-protocols
[4]: https://secure.gravatar.com/avatar/92e286970e06818292d5ce792b67a662?s=100&r=g
[5]: https://opensourceforu.com/author/christopher-nichols/
[6]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -0,0 +1,51 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first open source contribution: Keep the code relevant)
[#]: via: (https://opensource.com/article/19/11/first-open-source-contribution-relevant-code)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first open source contribution: Keep the code relevant
======
Be aware of what development tools you have running in the background.
![Filing cabinet for organization][1]
Previously, I explained [the importance of forking repositories][2]. Once I finished the actual "writing the code" part of making my first open source pull request, I felt excellent. It seemed like the hard part was finally over. What's more, I felt great about the code that I wrote.
One thing that I decided to do (which turned out to be an excellent choice) was to use [test-driven development][3] (TDD) to write the code. Using TDD was helpful because it gave me a place to start, and a way to know if what I was doing actually worked. Because my background was in building web apps, I rarely ran into the problem of writing code that didn't have a tangible, visible output. The test-first approach helped me make the leap into working on a tool where you can't evaluate your progress manually. The fact that I had written a clear test also helped me ultimately get my pull request accepted. The reviewer highlighted the test in his comments on my code.
Another thing I felt great about was that I had accomplished the whole thing in around 20 lines of code. I know from experience that shorter pull requests are much easier to review. Such short pieces generally take less time, and the reviewer can concentrate on only the small number of lines that were changed. I hoped that this would increase my chances that one of the maintainers would look at my work and feel confident in it.
Much to my surprise, when I finally pushed my branch to GitHub, the diff was showing that I had changed multiple lines of code. I ran into trouble here because I had become too comfortable with my usual development setup. Because I typically work on a single project, I barely think about some of the tools I have working in the background to make my life easier. The culprit here was [`prettier`][4], a code formatter that automatically fixes all of my minor spacing and syntax discrepancies when I save an edited file. In my usual workflow, this tool is extremely helpful. Most of the developers I work with have `prettier` installed, so all of the code that we write obeys the same style rules.
In this new project, however, style rules had fallen by the wayside. The project did, in fact, contain an eslint config stating that single quotes should be used instead of double-quotes. However, the developers who were contributing to the project ignored this rule and used both single- and double-quotes. Unlike human beings, `prettier` never ignores the rules. While I was working, it took the initiative to turn every double quote in every file I changed to a single quote, causing hundreds of unintentional changes.
I tried for a few minutes to remove these changes, but because they had been continually happening as I worked, they were embedded in all of my commits. Then the type-B in me took over and decided to leave the changes in. "Maybe this is not a big deal," I thought. "They said they wanted single quotes, after all."
My mistake was including these unrelated changes in my PR. While I was technically right that this wasn't a "big deal," the maintainer who reviewed my code asked me to revert the changes. My initial instinct to keep my pull request small and to the point was correct.
The lesson here is that you should keep your changes as minimal and to-the-point as possible. Be mindful of any tools from your normal workflow that aren't as appropriate when you are working on a new project. A couple of quick checks, sketched below, can catch this kind of churn before you push.
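As a rough sketch of what I do now (nothing specific to that project, just plain `git`), a quick look at the diff statistics and a hunk-by-hunk review before pushing makes formatter churn easy to spot:

```
# See how many lines changed in each file; hundreds of changes in a file
# you barely touched is a red flag for formatter churn.
git diff --stat

# Review exactly what you are about to stage, hunk by hunk.
git add --patch
```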
**Free idea:** If you are looking for a way to get an open source PR in without writing any code, pick a project that doesn't adhere to its style guide, run `prettier` on it, and make the result your whole pull request. It's not guaranteed that every project community will appreciate this, but it's worth a shot.
There are lots of non-code ways to contribute to open source: Here are three alternatives.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/first-open-source-contribution-relevant-code
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization)
[2]: https://opensource.com/article/19/10/first-open-source-contribution-fork-clone
[3]: https://opensource.com/article/19/10/test-driven-development-best-practices
[4]: https://prettier.io/

View File

@ -1,346 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux permissions 101)
[#]: via: (https://opensource.com/article/19/8/linux-permissions-101)
[#]: author: (Alex Juarez https://opensource.com/users/mralexjuarezhttps://opensource.com/users/marcobravohttps://opensource.com/users/greg-p)
Linux permissions 101
======
Knowing how to control users' access to files is a fundamental system
administration skill.
![Penguins][1]
Understanding Linux permissions and how to control which users have access to files is a fundamental skill for systems administration.
This article will cover standard Linux file systems permissions, dig further into special permissions, and wrap up with an explanation of default permissions using **umask**.
### Understanding the ls command output
Before we can talk about how to modify permissions, we need to know how to view them. The **ls** command with the long listing argument (**-l**) gives us a lot of information about a file.
```
$ ls -lAh
total 20K
-rwxr-xr--+ 1 root root    0 Mar  4 19:39 file1
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file10
-rwxrwxr--+ 1 root root    0 Mar  4 19:39 file2
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file8
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file9
drwxrwxrwx. 2 root root 4.0K Mar  4 20:04 testdir
```
To understand what this means, let's break down the output regarding the permissions into individual sections. It will be easier to reference each section individually.
Take a look at each component of the final line in the output above:
```
`drwxrwxrwx. 2 root root 4.0K Mar  4 20:04 testdir`
```
Section 1 | Section 2 | Section 3 | Section 4 | Section 5 | Section 6 | Section 7
---|---|---|---|---|---|---
d | rwx | rwx | rwx | . | root | root
Section 1 (on the left) reveals what type of file it is.
d | Directory
---|---
- | Regular file
l | A soft link
The [info page][2] for **ls** has a full listing of the different file types.
Each file has three modes of access:
* the owner
* the group
* all others
Sections 2, 3, and 4 refer to the user, group, and "other users" permissions. And each section can include a combination of **r** (read), **w** (write), and **x** (executable) permissions.
Each of the permissions is also assigned a numerical value, which is important when talking about the octal representation of permissions.
Permission | Octal Value
---|---
Read | 4
Write | 2
Execute | 1
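As an aside, if you want to see the octal value of an existing file's permissions directly, GNU **stat** can print it; using **file1** from the listing above (rwx, r-x, r-- works out to 754):

```
$ stat -c "%a %A %n" file1
754 -rwxr-xr-- file1
```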
Section 5 details any alternative access methods, such as SELinux or File Access Control List (FACL).
Method | Character
---|---
No other method | -
SELinux | .
FACLs | +
Any combination of methods | +
Sections 6 and 7 are the names of the owner and the group, respectively.
### Using chown and chmod
#### The chown command
The **chown** (change ownership) command is used to change a file's user and group ownership.
To change both the user and group ownership of the file **foo** to **root**, we can use these commands:
```
$ chown root:root foo
$ chown root: foo
```
Running the command with the user followed by a colon (**:**) sets both the user and group ownership.
To set only the user ownership of the file **foo** to the **root** user, enter:
```
`$ chown root foo`
```
To change only the group ownership of the file **foo**, precede the group with a colon:
```
`$ chown :root foo`
```
#### The chmod command
The **chmod** (change mode) command controls file permissions for the owner, group, and all other users who are neither the owner nor part of the group associated with the file.
The **chmod** command can set permissions in both octal (e.g., 755, 644, etc.) and symbolic (e.g., u+rwx, g-rwx, o=rw) formatting.
Octal notation assigns 4 "points" to **read**, 2 to **write**, and 1 to **execute**. If we want to assign the user **read** permissions, we assign 4 to the first slot, but if we want to add **write** permissions, we must add 2. If we want to add **execute**, then we add 1. We do this for each permission type: owner, group, and others.
For example, if we want to assign **read**, **write**, and **execute** to the owner of the file, but only **read** and **execute** to group members and all other users, we would use 755 in octal formatting. That's all permission bits for the owner (4+2+1), but only a 4 and 1 for the group and others (4+1).
> The breakdown for that is: 4+2+1=7; 4+1=5; and 4+1=5.
If we wanted to assign **read** and **write** to the owner of the file but only **read** to members of the group and all other users, we could use **chmod** as follows:
```
`$ chmod 644 foo_file`
```
In the examples below, we use symbolic notation in different groupings. Note the letters **u**, **g**, and **o** represent **user**, **group**, and **other**. We use **u**, **g**, and **o** in conjunction with **+**, **-**, or **=** to add, remove, or set permission bits.
To add the **execute** bit to the ownership permission set:
```
`$ chmod u+x foo_file`
```
To remove **read**, **write**, and **execute** from members of the group:
```
`$ chmod g-rwx foo_file`
```
To set the permissions for all other users to **read** and **write**:
```
`$ chmod o=rw foo_file`
```
### The special bits: Set UID, set GID, and sticky bits
In addition to the standard permissions, there are a few special permission bits that have some useful benefits.
#### Set user ID (suid)
When **suid** is set on a file, an operation executes as the owner of the file, not the user running the file. A [good example][3] of this is the **passwd** command. It needs the **suid** bit to be set so that changing a password runs with root permissions.
```
$ ls -l /bin/passwd
-rwsr-xr-x. 1 root root 27832 Jun 10  2014 /bin/passwd
```
An example of setting the **suid** bit would be:
```
`$ chmod u+s /bin/foo_file_name`
```
#### Set group ID (sgid)
The **sgid** bit is similar to the **suid** bit in the sense that the operations are done under the group ownership of the directory instead of the user running the command.
An example of using **sgid** would be if multiple users are working out of the same directory, and every file created in the directory needs to have the same group permissions. The example below creates a directory called **collab_dir**, sets the **sgid** bit, and changes the group ownership to **webdev**.
```
$ mkdir collab_dir
$ chmod g+s collab_dir
$ chown :webdev collab_dir
```
Now any file created in the directory will have the group ownership of **webdev** instead of the user who created the file.
```
$ cd collab_dir
$ touch file-sgid
$ ls -lah file-sgid
-rw-r--r--. 1 root webdev 0 Jun 12 06:04 file-sgid
```
#### The "sticky" bit
The sticky bit denotes that only the owner of a file can delete the file, even if group permissions would otherwise allow it. This setting usually makes the most sense on a common or collaborative directory such as **/tmp**. In the example below, the **t** in the **execute** column of the **all others** permission set indicates that the sticky bit has been applied.
```
$ ls -ld /tmp
drwxrwxrwt. 8 root root 4096 Jun 12 06:07 /tmp/
```
Keep in mind this does not prevent somebody from editing the file; it just keeps them from deleting other users' files in the directory.
We set the sticky bit with:
```
`$ chmod o+t foo_dir`
```
On your own, try setting the sticky bit on a directory and give it full group permissions so that multiple users can read, write and execute on the directory because they are in the same group.
From there, create files as each user and then try to delete them as the other.
If everything is configured correctly, one user should not be able to delete files created by the other user. A sketch of this exercise follows.
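Here is a minimal sketch of that exercise, reusing the **webdev** group from the sgid example and assuming two ordinary users who are both members of it (the exact error text may vary slightly):

```
# As root: create a shared directory owned by the common group,
# give the group full permissions, and add the sticky bit.
$ mkdir /tmp/shared_dir
$ chown :webdev /tmp/shared_dir
$ chmod u=rwx,g=rwx,o= /tmp/shared_dir
$ chmod o+t /tmp/shared_dir

# As the first user: create a file in the shared directory.
$ touch /tmp/shared_dir/user1-file

# As the second user: try to delete it; the sticky bit should stop you.
$ rm /tmp/shared_dir/user1-file
rm: cannot remove '/tmp/shared_dir/user1-file': Operation not permitted
```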
Note that each of these bits can also be set in octal format with SUID=4, SGID=2, and Sticky=1.
```
$ chmod 4744 foo_file
$ chmod 2644 foo_file
$ chmod 1755 foo_dir
```
#### Uppercase or lowercase?
If you are setting the special bits and see an uppercase **S** or **T** instead of lowercase (as we've seen until this point), it is because the underlying execute bit is not present. To demonstrate, the following example creates a file with the sticky bit set. We can then add/remove the execute bit to demonstrate the case change.
```
$ touch cap-ST-demo
$ chmod 1755 cap-ST-demo
$ ls -l cap-ST-demo
-rwxr-xr-t. 1 root root 0 Jun 12 06:16 cap-ST-demo
$ chmod o-x cap-ST-demo
$ ls -l cap-ST-demo
-rwxr-xr-T. 1 root root 0 Jun 12 06:16 cap-ST-demo
```
#### Setting the execute bit conditionally
To this point, we've set the **execute** bit using a lowercase **x**, which sets it without asking any questions. We have another option: using an uppercase **X** instead of lowercase will set the **execute** bit only if it is already present somewhere in the permission group. This can be a difficult concept to explain, but the demo below will help illustrate it. Notice here that after trying to add the **execute** bit to the group privileges, it is not applied.
```
$ touch cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod g+X cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
```
In this similar example, we add the execute bit first to the group permissions using the lowercase **x** and then use the uppercase **X** to add permissions for all other users. This time, the uppercase **X** sets the permissions.
```
$ touch cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod g+x cap-X-file
$ ls -l cap-X-file
-rw-r-xr--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod o+X cap-X-file
$ ls -l cap-X-file
-rw-r-xr-x. 1 root root 0 Jun 12 06:31 cap-X-file
```
### Understanding umask
The **umask** masks (or "blocks off") bits from the default permission set in order to define permissions for a file or directory. For example, a 2 in the **umask** output indicates it is blocking the **write** bit from a file, at least by default.
Using the **umask** command without any arguments allows us to see the current **umask** setting. There are four columns: the first is reserved for the special suid, sgid, or sticky bit, and the remaining three represent the owner, group, and other permissions.
```
$ umask
0022
```
To understand what this means, we can execute **umask** with a **-S** (as shown below) to get the result of masking the bits. For instance, because of the **2** value in the third column, the **write** bit is masked off from the group and other sections; only **read** and **execute** can be assigned for those.
```
$ umask -S
u=rwx,g=rx,o=rx
```
To see what the default permission set is for files and directories, let's set our **umask** to all zeros. This means that we are not masking off any bits when we create a file.
```
$ umask 000
$ umask -S
u=rwx,g=rwx,o=rwx
$ touch file-umask-000
$ ls -l file-umask-000
-rw-rw-rw-. 1 root root 0 Jul 17 22:03 file-umask-000
```
Now when we create a file, we see the default permissions are **read** (4) and **write** (2) for all sections, which would equate to 666 in octal representation.
We can do the same for a directory and see its default permissions are 777. We need the **execute** bit on directories so we can traverse through them.
```
$ mkdir dir-umask-000
$ ls -ld dir-umask-000
drwxrwxrwx. 2 root root 4096 Jul 17 22:03 dir-umask-000/
```
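As one more experiment (027 is just an example value), tighten the **umask** and the defaults shrink accordingly: 666 masked by 027 leaves 640 for files, and 777 masked by 027 leaves 750 for directories.

```
$ umask 027
$ umask -S
u=rwx,g=rx,o=
$ touch file-umask-027
$ stat -c "%a %A %n" file-umask-027
640 -rw-r----- file-umask-027
```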
### Conclusion
There are many other ways an administrator can control access to files on a system. These permissions are basic to Linux, and we can build upon these fundamental aspects. If your work takes you into FACLs or SELinux, you will see that they also build upon these first rules of file access.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-permissions-101
作者:[Alex Juarez][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mralexjuarezhttps://opensource.com/users/marcobravohttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_ (Penguins)
[2]: https://www.gnu.org/software/texinfo/manual/info-stnd/info-stnd.html
[3]: https://www.theurbanpenguin.com/using-a-simple-c-program-to-explain-the-suid-permission/

View File

@ -1,141 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know)
[#]: via: (https://itsfoss.com/google-chrome-shortcuts/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know
======
_**Brief: Master these Google Chrome keyboard shortcuts for a better, smoother and more productive web browsing experience. Downloadable cheatsheet is also included.**_
Google Chrome is the [most popular web browser][1] and there is no denying it. Its open source version [Chromium][2] is also gaining popularity, and some Linux distributions now include it as the default web browser.
If you use it on the desktop a lot, you can improve your browsing experience with Google Chrome keyboard shortcuts. No need to reach for your mouse and spend time finding your way around. Just master these shortcuts and you'll save some time and be more productive.
I am using the term Google Chrome but these shortcuts are equally applicable to the Chromium browser.
### 11 Cool Chrome Keyboard shortcuts you should be using
If you are a pro, you might know a few of these Chrome shortcuts already, but chances are that you may still find some hidden gems here. Let's see.
**Keyboard Shortcuts** | **Action**
---|---
Ctrl+T | Open a new tab
Ctrl+N | Open a new window
Ctrl+Shift+N | Open incognito window
Ctrl+W | Close current tab
Ctrl+Shift+T | Reopen last closed tab
Ctrl+Shift+W | Close the window
Ctrl+Tab and Ctrl+Shift+Tab | Switch to right or left tab
Ctrl+L | Go to search/address bar
Ctrl+D | Bookmark the website
Ctrl+H | Access browsing history
Ctrl+J | Access downloads history
Shift+Esc | Open Chrome task manager
You can [download this list of useful Chrome keyboard shortcuts for quick reference][3].
#### 1\. Open a new tab with Ctrl+T
Need to open a new tab? Just press the Ctrl and T keys together and you'll have a new tab opened.
#### 2\. Open a new window with Ctrl+N
Too many tabs opened already? Time to open a fresh new window. Use Ctrl and N keys to open a new browser window.
#### 3\. Go incognito with Ctrl+Shift+N
Checking flight or hotel prices online? Going incognito might help. Open an incognito window in Chrome with Ctrl+Shift+N.
#### 4\. Close a tab with Ctrl+W
Close the current tab with the Ctrl and W keys. No need to take the mouse to the top and look for the x button.
#### 5\. Accidentally closed a tab? Reopen it with Ctrl+Shift+T
This is my favorite Google Chrome shortcut. No more "oh crap" moments when you close a tab you didn't mean to. Use Ctrl+Shift+T and it will reopen the last closed tab. Keep hitting this key combination and it will keep bringing back previously closed tabs.
#### 6\. Close the entire browser window with Ctrl+Shift+W
Done with your work? Time to close the entire browser window with all the tabs. Use the keys Ctrl+Shift+W and the browser window will disappear like it never existed.
#### 7\. Switch between tabs with Ctrl+Tab
Too many tabs open? You can move to the tab on the right with Ctrl+Tab. Want to move left? Use Ctrl+Shift+Tab. Press these keys repeatedly and you can move between all the open tabs in the current browser window.
You can also use Ctrl+1 through Ctrl+8 to jump to one of the first eight tabs, and Ctrl+9 to jump to the last tab. There is no direct shortcut for the tabs in between beyond the eighth, though.
#### 8\. Go to the search/address bar with Ctrl+L
Want to type a new URL or search for something quickly? Use Ctrl+L and it will highlight the address bar at the top.
#### 9\. Bookmark the current website with Ctrl+D
Found something interesting? Save it in your bookmarks with Ctrl+D keys combination.
#### 10\. Go back in history with Ctrl+H
You can open your browsing history with the Ctrl+H keys. Search through the history if you are looking for a page you visited some time ago, or delete entries that you don't want to be seen anymore.
#### 11\. See your downloads with Ctrl+J
Pressing the Ctrl+J keys in Chrome will take you to the Downloads page. This page will show you all the download actions you have performed.
#### Bonus shortcut: Open Chrome task manager with Shift+Esc
Many people don't even know that there is a task manager in the Chrome browser. Chrome is infamous for eating up your system's RAM, and when you have plenty of tabs open, finding the culprit is not easy.
With Chrome task manager, you can see all the open tabs and their system utilization stats. You can also see various hidden processes such as Chrome extensions and other services.
![Google Chrome Task Manager][6]
I am putting this table here for quick reference.
### Download Chrome shortcut cheatsheet
I know that mastering keyboard shortcuts depends on habit, and you can make it a habit by using them again and again. To help you in this task, I have created this Google Chrome keyboard shortcut cheatsheet.
You can download the below image in PDF form, print it and put it on your desk. This way you can practice the shortcuts all the time.
![Google Chrome Keyboard Shortcuts Cheat Sheet][7]
[Download Chrome Shortcut Cheatsheet][8]
If you are interested in mastering shortcuts, you may also have a look at [Ubuntu keyboard shortcuts][9].
By the way, what's your favorite Chrome shortcut?
--------------------------------------------------------------------------------
via: https://itsfoss.com/google-chrome-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[2]: https://www.chromium.org/Home
[3]: tmp.3qZNXSy2FC#download-cheatsheet
[4]: https://itsfoss.com/command-line-text-editors-linux/
[5]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-task-manager.png?resize=800%2C300&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-keyboard-shortcuts-cheat-sheet.png?ssl=1
[8]: https://drive.google.com/open?id=1lZ4JgRuFbXrnEXoDQqOt7PQH6femIe3t
[9]: https://itsfoss.com/ubuntu-shortcuts/

View File

@ -1,299 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How RPM packages are made: the spec file)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)
How RPM packages are made: the spec file
======
![][1]
In the [previous article on RPM package building][2], you saw that source RPMS include the source code of the software, along with a “spec” file. This post digs into the spec file, which contains instructions on how to build the RPM. Again, this article uses _fpaste_ as an example.
### Understanding the source code
Before you can start writing a spec file, you need to have some idea of the software that you're looking to package. Here, you're looking at fpaste, a very simple piece of software. It is written in Python, and is a one-file script. When a new version is released, it's provided here on Pagure: <https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz>
The current version, as the archive shows, is 0.3.9.2. Download it so you can see what's in the archive:
```
$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste
```
The files you want to install are:
  * _fpaste.py_: which should be installed to /usr/bin/.
* _docs/man/en/fpaste.1_: the manual, which should go to /usr/share/man/man1/.
* _COPYING_: the license text, which should go to /usr/share/license/fpaste/.
* _README.rst, TODO_: miscellaneous documentation that goes to /usr/share/doc/fpaste.
Where these files are installed depends on the Filesystem Hierarchy Standard. To learn more about it, you can either read here: <http://www.pathname.com/fhs/> or look at the man page on your Fedora system:
```
$ man hier
```
#### Part 1: What are we building?
Now that we know what files we have in the source, and where they are to go, let's look at the spec file. You can see the full file here: <https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec>
Here is the first part of the spec file:
```
Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
Requires: python3
%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin
```
_Name_, _Version_, and so on are called _tags_, and are defined in RPM. This means you can't just make up tags. RPM won't understand them if you do! The tags to keep an eye out for are:
* _Source0_: tells RPM where the source archive for this software is located.
* _Requires_: lists run-time dependencies for the software. RPM can automatically detect quite a few of these, but in some cases they must be mentioned manually. A run-time dependency is a capability (often a package) that must be on the system for this package to function. This is how _[dnf][3]_ detects whether it needs to pull in other packages when you install this package.
* _BuildRequires_: lists the build-time dependencies for this software. These must generally be determined manually and added to the spec file.
* _BuildArch_: the computer architectures that this software is being built for. If this tag is left out, the software will be built for all supported architectures. The value _noarch_ means the software is architecture independent (like fpaste, which is written purely in Python).
This section provides general information about fpaste: what it is, which version is being made into an RPM, its license, and so on. If you have fpaste installed, and look at its metadata, you can see this information included in the RPM:
```
$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...
```
RPM adds a few extra tags automatically that represent things that it knows.
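As an aside, once fpaste is installed you can also ask RPM which run-time dependencies it recorded; you should see _python3_ in the list, along with dependencies that RPM detected automatically:

```
$ rpm -q --requires fpaste
```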
At this point, we have the general information about the software that we're building an RPM for. Next, we start telling RPM what to do.
#### Part 2: Preparing for the build
The next part of the spec is the preparation section, denoted by _%prep_:
```
%prep
%autosetup
```
For fpaste, the only command here is %autosetup. This simply extracts the tar archive into a new folder and keeps it ready for the next section where we build it. You can do more here, like apply patches, modify files for different purposes, and so on. If you did look at the contents of the source rpm for Python, you would have seen lots of patches there. These are all applied in this section.
Typically anything in a spec file with the **%** prefix is a macro or label that RPM interprets in a special way. Often these will appear with curly braces, such as _%{example}_.
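If you are curious what a particular macro expands to on your machine, _rpm_ can evaluate it for you. The values below are typical Fedora defaults; _%{?dist}_ in particular depends on the release you run this on:

```
$ rpm --eval "%{_bindir}"
/usr/bin
$ rpm --eval "%{?dist}"
.fc30
```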
#### Part 3: Building the software
The next section is where the software is built, denoted by “%build”. Now, since fpaste is a simple, pure Python script, it doesn't need to be built. So, here we get:
```
%build
#nothing required
```
Generally, though, you'd have build commands here, like:
```
configure; make
```
The build section is often the hardest section of the spec, because this is where the software is being built from source. This requires you to know what build system the tool is using, which could be one of many: Autotools, CMake, Meson, Setuptools (for Python) and so on. Each has its own commands and style. You need to know these well enough to get the software to build correctly.
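As a rough illustration (a generic sketch, not taken from the fpaste spec, which needs no build step), the _%build_ section of an Autotools-based package often boils down to a couple of RPM helper macros:

```
%build
%configure
%make_build
```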
#### Part 4: Installing the files
Once the software is built, it needs to be installed in the _%install_ section:
```
%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}
```
RPM doesn't tinker with your system files when building RPMs. It's far too risky to add, remove, or modify files in a working installation. What if something breaks? So, instead RPM creates an artificial file system and works there. This is referred to as the _buildroot_. So, here in the buildroot, we create _/usr/bin_, represented by the macro _%{_bindir}_, and then install the files to it using the provided Makefile.
At this point, we have a built version of fpaste installed in our artificial buildroot.
#### Part 5: Listing all files to be included in the RPM
The last section of the spec file is the files section, _%files_. This is where we tell RPM what files to include in the archive it creates from this spec file. The fpaste file section is quite simple:
```
%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING
```
Notice how, here, we do not specify the buildroot. All of these paths are relative to it. The _%doc_ and _%license_ commands simply do a little more—they create the required folders and remember that these files must go there.
RPM is quite smart. If you've installed files in the _%install_ section but not listed them, it'll warn you, for example.
#### Part 6: Document all changes in the change log
Fedora is a community-based project. Lots of contributors maintain and co-maintain packages. So it is imperative that there's no confusion about what changes have been made to a package. To ensure this, the spec file contains the last section, the Changelog, _%changelog_:
```
%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild
* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild
* Tue Jul 24 2018 Ankur Sinha - 0.3.9.2-1
- Update to 0.3.9.2
* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild
* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild
* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec
* Fri Sep 08 2017 Ankur Sinha - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....
```
There must be a changelog entry for _every_ change to the spec file. As you see here, while I've updated the spec as the maintainer, others have too. Having the changes documented clearly helps everyone know what the current status of the spec is. For all packages installed on your system, you can use rpm to see their changelogs:
```
$ rpm -q --changelog fpaste
```
### Building the RPM
Now we are ready to build the RPM. If you want to follow along and run the commands below, please ensure that you followed the steps [in the previous post][2] to set your system up for building RPMs.
We place the fpaste spec file in _~/rpmbuild/SPECS_, the source code archive in _~/rpmbuild/SOURCES/_ and can now create the source RPM:
```
$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec
$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
```
Let's have a look at the results:
```
$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```
There we are — the source rpm has been built. Let's build both the source and binary rpm together:
```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..
```
RPM will show you the complete build output, with details on what it is doing in each section that we saw before. This “build log” is extremely important. When builds do not go as expected, we packagers spend lots of time going through them, tracing the complete build path to see what went wrong.
That's it, really! Your ready-to-install RPMs are where they should be:
```
$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
```
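As a final sanity check (analogous to the _-qpl_ query on the source RPM earlier), you can list the payload of the binary RPM; you should see _/usr/bin/fpaste_, the manual page, the documentation files, and the license, matching the _%files_ section:

```
$ rpm -qpl ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm
```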
### Recap
We've covered the basics of how RPMs are built from a spec file. This is by no means an exhaustive document. In fact, it isn't documentation at all, really. It only tries to explain how things work under the hood. Here's a short recap:
* RPMs are of two types: _source_ and _binary_.
* Binary RPMs contain the files to be installed to use the software.
* Source RPMs contain the information needed to build the binary RPMs: the complete source code, and the instructions on how to build the RPM in the spec file.
* The spec file has various sections, each with its own purpose.
Here, we've built RPMs locally, on our Fedora installations. While this is the basic process, the RPMs we get from repositories are built on dedicated servers with strict configurations and methods to ensure correctness and security. This Fedora packaging pipeline will be discussed in a future post.
Would you like to get started with building packages, and help the Fedora community maintain the massive amount of software we provide? You can [start here by joining the package collection maintainers][4].
For any queries, post to the [Fedora developers mailing list][5]—we're always happy to help!
### References
Here are some useful references to building RPMs:
* <https://fedoraproject.org/wiki/How_to_create_an_RPM_package>
* <https://docs.fedoraproject.org/en-US/quick-docs/create-hello-world-rpm/>
* <https://docs.fedoraproject.org/en-US/packaging-guidelines/>
* <https://rpm.org/documentation.html>
* * *
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/
作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/
[3]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[4]: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
[5]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/

View File

@ -1,255 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building CI/CD pipelines with Jenkins)
[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins)
[#]: author: (Bryant Son https://opensource.com/users/brson)
Building CI/CD pipelines with Jenkins
======
Build continuous integration and continuous delivery (CI/CD) pipelines
with this step-by-step Jenkins tutorial.
![pipelines][1]
In my article [_A beginner's guide to building DevOps pipelines with open source tools_][2], I shared a story about building a DevOps pipeline from scratch. The core technology driving that initiative was [Jenkins][3], an open source tool to build continuous integration and continuous delivery (CI/CD) pipelines.
At Citi, there was a separate team that provided dedicated Jenkins pipelines with a stable master-slave node setup, but the environment was only used for quality assurance (QA), staging, and production environments. The development environment was still very manual, and our team needed to automate it to gain as much flexibility as possible while accelerating the development effort. This is the reason we decided to build a CI/CD pipeline for DevOps. And the open source version of Jenkins was the obvious choice due to its flexibility, openness, powerful plugin-capabilities, and ease of use.
In this article, I will share a step-by-step walkthrough on how you can build a CI/CD pipeline using Jenkins.
### What is a pipeline?
Before jumping into the tutorial, it's helpful to know something about CI/CD pipelines.
To start, it is helpful to know that Jenkins itself is not a pipeline. Just creating a new Jenkins job does not construct a pipeline. Think about Jenkins like a remote control—it's the place you click a button. What happens when you do click a button depends on what the remote is built to control. Jenkins offers a way for other application APIs, software libraries, build tools, etc. to plug into Jenkins, and it executes and automates the tasks. On its own, Jenkins does not perform any functionality but gets more and more powerful as other tools are plugged into it.
A pipeline is a separate concept that refers to the groups of events or jobs that are connected together in a sequence:
> A **pipeline** is a sequence of events or jobs that can be executed.
The easiest way to understand a pipeline is to visualize a sequence of stages, like this:
![Pipeline example][4]
Here, you should see two familiar concepts: _Stage_ and _Step_.
* **Stage:** A block that contains a series of steps. A stage block can be named anything; it is used to visualize the pipeline process.
* **Step:** A task that says what to do. Steps are defined inside a stage block.
In the example diagram above, Stage 1 can be named "Build," "Gather Information," or whatever, and a similar idea is applied for the other stage blocks. "Step" simply says what to execute, and this can be a simple print command (e.g., **echo "Hello, World"**), a program-execution command (e.g., **java HelloWorld**), a shell-execution command (e.g., **chmod 755 Hello**), or any other command—as long as it is recognized as an executable command through the Jenkins environment.
The Jenkins pipeline is provided as a _codified script_ typically called a **Jenkinsfile**, although the file name can be different. Here is an example of a simple Jenkins pipeline file.
```
// Example of Jenkins pipeline script
pipeline {
  agent any
  stages {
    stage("Build") {
       steps {
          // Just print a Hello, Pipeline to the console
          echo "Hello, Pipeline!"
          // Compile a Java file. This requires JDK configuration from Jenkins
          sh "javac HelloWorld.java"
          // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
          sh "java HelloWorld"
          // Execute the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
          sh "mvn clean package ./HelloPackage"
          // List the files in current directory path by executing a default shell command
          sh "ls -ltr"
       }
    }
    // And next stages if you want to define further...
  } // End of stages
} // End of pipeline
```
It's easy to see the structure of a Jenkins pipeline from this sample script. Note that some commands, like **java**, **javac**, and **mvn**, are not available by default, and they need to be installed and configured through Jenkins. Therefore:
> A **Jenkins pipeline** is the way to execute a Jenkins job sequentially in a defined way by codifying it and structuring it inside multiple blocks that can include multiple steps containing tasks.
OK. Now that you understand what a Jenkins pipeline is, I'll show you how to create and execute a Jenkins pipeline. At the end of the tutorial, you will have built a Jenkins pipeline like this:
![Final Result][5]
### How to build a Jenkins pipeline
To make this tutorial easier to follow, I created a sample [GitHub repository][6] and a video tutorial.
Before starting this tutorial, you'll need:
  * **Java Development Kit:** If you don't already have it, install a JDK and add it to the environment path so a Java command (like **java -jar**) can be executed through a terminal. This is necessary to leverage the Java Web Archive (WAR) version of Jenkins that is used in this tutorial (although you can use any other distribution).
* **Basic computer operations:** You should know how to type some code, execute basic Linux commands through the shell, and open a browser.
Let's get started.
#### Step 1: Download Jenkins
Navigate to the [Jenkins download page][7]. Scroll down to **Generic Java package (.war)** and click on it to download the file; save it someplace where you can locate it easily. (If you choose another Jenkins distribution, the rest of tutorial steps should be pretty much the same, except for Step 2.) The reason to use the WAR file is that it is a one-time executable file that is easily executable and removable.
![Download Jenkins as Java WAR file][8]
#### Step 2: Execute Jenkins as a Java binary
Open a terminal window and enter the directory where you downloaded Jenkins with **cd <your path>**. (Before you proceed, make sure JDK is installed and added to the environment path.) Execute the following command, which will run the WAR file as an executable binary:
```
`java -jar ./jenkins.war`
```
If everything goes smoothly, Jenkins should be up and running at the default port 8080.
![Execute as an executable JAR binary][9]
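If something else on your machine is already using port 8080, you can pass a different port when launching the WAR file (9090 here is just an example):

```
java -jar ./jenkins.war --httpPort=9090
```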
#### Step 3: Create a new Jenkins job
Open a web browser and navigate to **localhost:8080**. Unless you have a previous Jenkins installation, it should go straight to the Jenkins dashboard. Click **Create New Jobs**. You can also click **New Item** on the left.
![Create New Job][10]
#### Step 4: Create a pipeline job
In this step, you can select and define what type of Jenkins job you want to create. Select **Pipeline** and give it a name (e.g., TestPipeline). Click **OK** to create a pipeline job.
![Create New Pipeline Job][11]
You will see a Jenkins job configuration page. Scroll down to find the **Pipeline** section. There are two ways to execute a Jenkins pipeline. One way is by _directly writing a pipeline script_ on Jenkins, and the other way is by retrieving the _Jenkinsfile from SCM_ (source control management). We will go through both ways in the next two steps.
#### Step 5: Configure and execute a pipeline job through a direct script
To execute the pipeline with a direct script, begin by copying the contents of the [sample Jenkinsfile][6] from GitHub. Choose **Pipeline script** under **Definition** and paste the **Jenkinsfile** contents in **Script**. Spend a little time studying how the Jenkinsfile is structured. Notice that there are three Stages: Build, Test, and Deploy, which are arbitrary and can be anything. Inside each Stage, there are Steps; in this example, they just print some random messages.
Click **Save** to keep the changes, and it should automatically take you back to the Job Overview.
![Configure to Run as Jenkins Script][12]
To start the process to build the pipeline, click **Build Now**. If everything works, you will see your first pipeline (like the one below).
![Click Build Now and See Result][13]
To see the output from the pipeline script build, click any of the Stages and click **Log**. You will see a message like this.
![Visit sample GitHub with Jenkins get clone link][14]
#### Step 6: Configure and execute a pipeline job with SCM
Now, switch gears: in this step, you will run the same Jenkins job by pulling the **Jenkinsfile** from a source-controlled GitHub repository. In the same [GitHub repository][6], pick up the repository URL by clicking **Clone or download** and copying its URL.
![Checkout from GitHub][15]
Click **Configure** to modify the existing job. Scroll to the **Advanced Project Options** setting, but this time, select the **Pipeline script from SCM** option in the **Definition** dropdown. Paste the GitHub repo's URL in the **Repository URL**, and type **Jenkinsfile** in the **Script Path**. Save by clicking the **Save** button.
![Change to Pipeline script from SCM][16]
To build the pipeline, once you are back on the Job Overview page, click **Build Now** to execute the job again. The result will be the same as before, except you have one additional stage called **Declarative: Checkout SCM**.
![Build again and verify][17]
To see the pipeline's output from the SCM build, click the Stage and view the **Log** to check how the source control cloning process went.
![Verify Checkout Procedure][18]
### Do more than print messages
Congratulations! You've built your first Jenkins pipeline!
"But wait," you say, "this is very limited. I cannot really do anything with it except print dummy messages." That is OK. So far, this tutorial provided just a glimpse of what a Jenkins pipeline can do, but you can extend its capabilities by integrating it with other tools. Here are a few ideas for your next project:
  * Build a multi-stage Java build pipeline that covers pulling dependencies from JAR repositories like Nexus or Artifactory, compiling Java code, running the unit tests, packaging into a JAR/WAR file, and deploying to a cloud server.
  * Implement an advanced code-testing dashboard that reports the health of the project based on unit tests, load tests, and automated user interface tests with Selenium.
  * Construct a multi-pipeline or multi-user pipeline that automates the tasks of executing Ansible playbooks while allowing authorized users to respond to tasks in progress.
  * Design a complete end-to-end DevOps pipeline that pulls the infrastructure resource files and configuration files stored in an SCM like GitHub and executes the scripts through various runtime programs.
Follow any of the tutorials at the end of this article to get into these more advanced cases.
#### Manage Jenkins
From the main Jenkins dashboard, click **Manage Jenkins**.
![Manage Jenkins][19]
#### Global tool configuration
There are many options available, including managing plugins, viewing the system log, etc. Click **Global Tool Configuration**.
![Global Tools Configuration][20]
#### Add additional capabilities
Here, you can add the JDK path, Git, Gradle, and so much more. After you configure a tool, it is just a matter of adding the command into your Jenkinsfile or executing it through your Jenkins script.
![See Various Options for Plugin][21]
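For instance, here is a minimal sketch of how a configured JDK could be referenced from a declarative pipeline. The tool name **my-jdk** is hypothetical; it must match whatever name you gave the installation in Global Tool Configuration:

```
pipeline {
  agent any
  tools {
    // "my-jdk" must match the name configured under Global Tool Configuration
    jdk 'my-jdk'
  }
  stages {
    stage("Show Java version") {
      steps {
        sh "java -version"
      }
    }
  }
}
```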
### Where to go from here?
This article put you on your way to creating a CI/CD pipeline using Jenkins, a cool open source tool. To find out about many of the other things you can do with Jenkins, check out these other articles on Opensource.com:
* [Getting started with Jenkins X][22]
* [Install an OpenStack cloud with Jenkins][23]
* [Running Jenkins builds in containers][24]
* [Getting started with Jenkins pipelines][25]
* [How to run JMeter with Jenkins][26]
* [Integrating OpenStack into your Jenkins workflow][27]
You may be interested in some of the other articles I've written to supplement your open source journey:
* [9 open source tools for building a fault-tolerant system][28]
* [Understanding software design patterns][29]
* [A beginner's guide to building DevOps pipelines with open source tools][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines)
[2]: https://opensource.com/article/19/4/devops-pipeline
[3]: https://jenkins.io/
[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example)
[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result)
[6]: https://github.com/bryantson/CICDPractice
[7]: https://jenkins.io/download/
[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file)
[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary)
[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job)
[11]: https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job)
[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script)
[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result)
[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link)
[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub)
[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM)
[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify)
[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure)
[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins)
[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration)
[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin)
[22]: https://opensource.com/article/18/11/getting-started-jenkins-x
[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins
[24]: https://opensource.com/article/18/4/running-jenkins-builds-containers
[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101
[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco
[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system
[29]: https://opensource.com/article/19/7/understanding-software-design-patterns

View File

@ -1,234 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 Open Source Paint Applications for Linux Users)
[#]: via: (https://itsfoss.com/open-source-paint-apps/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
6 Open Source Paint Applications for Linux Users
======
As a child, when I started using a computer (with Windows XP), my favorite application was Paint. I spent hours doodling in it. Surprisingly, children still love paint apps. And it's not just children; a simple paint app comes in handy in a number of situations.
You will find a bunch of applications that let you draw/paint or manipulate images. However, some of them are proprietary. Since you're a Linux user, why not focus on open source paint applications?
In this article, we are going to list some of the best open source paint applications which are worthy alternatives to proprietary painting software available on Linux.
### Open Source paint &amp; drawing applications
![][1]
**Note:** _The list is in no particular order of ranking._
#### 1\. Pinta
![][2]
Key Highlights:
* Great alternative to Paint.NET / MS Paint
* Add-on support (WebP Image support available)
* Layer Support
[Pinta][3] is an impressive open-source paint application which is perfect for drawing and basic image editing. In other words, it is a simple paint application with some fancy features.
You may consider [Pinta][4] an alternative to MS Paint on Linux, but with layer support and more. It also acts as a Linux replacement for the Paint.NET software available for Windows. Even though Paint.NET is better, Pinta seems to be a decent alternative to it.
A couple of add-ons can be utilized to enhance the functionality, like the [support for WebP images on Linux][5]. In addition to the layer support, you can easily resize the images, add effects, make adjustments (brightness, contrast, etc.), and also adjust the quality when exporting the image.
#### How to install Pinta?
You should be able to easily find it in the Software Center / App Center / Package Manager. Just type in “**Pinta**” and get started installing it. Alternatively, you can try the [Flatpak][6] package.
Or, you can enter the following command in the terminal (Ubuntu/Debian):
```
sudo apt install pinta
```
For more information on the download packages and installation instructions, refer to the [official download page][7].
#### 2\. Krita
![][8]
Key Highlights:
* HDR Painting
* PSD Support
* Layer Support
* Brush stabilizers
* 2D Animation
Krita is one of the most advanced open source paint applications for Linux. Of course, for this article, it helps you draw sketches and wreak havoc upon the canvas. But, in addition to that, it offers a whole lot of features.
For instance, if you have a shaky hand, it can help you stabilize the brush strokes. You also get built-in vector tools to create comic panels and other interesting things. If you are looking for full-fledged color management support, drawing assistants, and layer management, Krita should be your preferred choice.
#### How to install Krita?
Similar to Pinta, you should be able to find it listed in the Software Center/App Center or the package manager. Its also available in the [Flatpak repository][10].
Thinking of installing it via the terminal? Type in the following command:
```
sudo apt install krita
```
Alternatively, you can head to their [official download page][11] to get the **AppImage** file and run it.
If you have no idea about AppImage files, check out our guide on [how to use AppImage][12].
#### 3\. Tux Paint
![][13]
Key Highlights:
* A no-nonsense paint application for kids
Im not kidding: Tux Paint is one of the best open-source paint applications for kids between 3 and 12 years of age. Of course, you do not want a lot of options when you just want to scribble. So, Tux Paint seems to be the best option in that case (even for adults!).
#### How to install Tuxpaint?
Tux Paint can be downloaded from the Software Center or the package manager. Alternatively, to install it on Ubuntu/Debian, type the following command in the terminal:
```
sudo apt install tuxpaint
```
For more information on it, head to the [official site][14].
#### 4\. Drawpile
![][15]
Key Highlights:
* Collaborative Drawing
* Built-in chat to interact with other users
* Layer support
* Record drawing sessions
Drawpile is an interesting open-source paint application where you get to collaborate with other users in real time. To be precise, you can simultaneously draw on a single canvas. In addition to this unique feature, you have layer support, the ability to record your drawing session, and even a chat facility to interact with your collaborators.
You can host/join a public session or start a private session with a friend, which requires a code. By default, the server will be your computer. But, if you want a remote server, you can select that as well.
Do note that you will need to [sign up for a Drawpile account][16] in order to collaborate.
#### How to install Drawpile?
As far as Im aware, you can only find it listed in the [Flatpak repository][17], so you will need Flatpak set up, as sketched below.
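If Flatpak is available on your system, installing it should look roughly like this (a quick sketch; the app ID below is taken from the Flathub listing linked above, so double-check it there):
```
# Add the Flathub remote once, if it is not already configured
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flathubrepo

# Install and launch Drawpile from Flathub
flatpak install flathub net.drawpile.drawpile
flatpak run net.drawpile.drawpile
```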
#### 5\. MyPaint
![][19]
Key Highlights:
* Easy-to-use tool for digital painters
* Layer management support
* Lots of options to tweak your brush and drawing
[MyPaint][20] is a simple yet powerful tool for digital painters. It features a lot of options to tweak in order to make the perfect digital brush stroke. Im not much of a digital artist (more of a scribbler), but I noticed quite a few options to adjust the brush and the colors, plus an option to add a scratchpad panel.
It also supports layer management in case you want that. The latest stable version hasnt been updated for a few years now, but the recent alpha build (which I tested) works just fine. If you are looking for an open source paint application on Linux, do give this a try.
#### How to install MyPaint?
MyPaint is available in the official repository. However, thats the old version. If you still want to proceed, you can search for it in the Software Center or type the following command in the terminal:
```
sudo apt install mypaint
```
You can head to its official [GitHub release page][21] for the latest alpha build, get the [AppImage file][12], make it executable, and launch the app, roughly as shown below.
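Running an AppImage boils down to marking the download as executable and starting it. A minimal sketch (the file name is a placeholder; substitute the one you actually downloaded):
```
# Make the downloaded AppImage executable, then launch it
chmod +x MyPaint.AppImage    # placeholder name; use your downloaded file
./MyPaint.AppImage
```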
#### 6\. KolourPaint
![][22]
Key Highlights:
* A simple alternative to MS Paint on Linux
* No layer management support
If you arent looking for layer management support and just want an open source paint application to draw stuff, this is it.
[KolourPaint][23] was originally tailored for the KDE desktop environment, but it works flawlessly on others too.
#### How to install KolourPaint?
You can install KolourPaint right from the Software Center or via the terminal using the following command:
```
sudo apt install kolourpaint4
```
Alternatively, you can utilize [Flathub][24] as well.
**Wrapping Up**
If you are wondering about applications like GIMP/Inkscape, we have those listed in a separate article on the [best Linux tools for digital artists][25]. If youre curious about more options, I recommend you check that out.
Here, we have tried to compile a list of the best open source paint applications available for Linux. If you think we missed something, feel free to tell us about it in the comments section below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-paint-apps/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/open-source-paint-apps.png?resize=800%2C450&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/pinta.png?ssl=1
[3]: https://pinta-project.com/pintaproject/pinta/
[4]: https://itsfoss.com/pinta-1-6-ubuntu-linux-mint/
[5]: https://itsfoss.com/webp-ubuntu-linux/
[6]: https://www.flathub.org/apps/details/com.github.PintaProject.Pinta
[7]: https://pinta-project.com/pintaproject/pinta/releases
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/krita-paint.png?ssl=1
[9]: https://itsfoss.com/things-to-do-after-installing-fedora-24/
[10]: https://www.flathub.org/apps/details/org.kde.krita
[11]: https://krita.org/en/download/krita-desktop/
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/tux-paint.jpg?ssl=1
[14]: http://www.tuxpaint.org/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/drawpile.png?ssl=1
[16]: https://drawpile.net/accounts/signup/
[17]: https://flathub.org/apps/details/net.drawpile.drawpile
[18]: https://itsfoss.com/ocs-store/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/mypaint.png?ssl=1
[20]: https://mypaint.org/
[21]: https://github.com/mypaint/mypaint/releases
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/kolourpaint.png?ssl=1
[23]: http://kolourpaint.org/
[24]: https://flathub.org/apps/details/org.kde.kolourpaint
[25]: https://itsfoss.com/best-linux-graphic-design-software/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,272 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to program with Bash: Syntax and tools)
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-1)
[#]: author: (David Both https://opensource.com/users/dboth)
How to program with Bash: Syntax and tools
======
Learn basic Bash programming syntax and tools, as well as how to use
variables and control operators, in the first article in this three-part
series.
![bash logo on green background][1]
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it sends them to STDOUT which, by default, [displays them in the terminal][2]. All of the shells I am familiar with are also programming languages.
Features like tab completion, command-line recall and editing, and shortcuts like aliases all contribute to its value as a powerful shell. Its default command-line editing mode uses Emacs, but one of my favorite Bash features is that I can change it to Vi mode to use editing commands that are already part of my muscle memory.
However, if you think of Bash solely as a shell, you miss much of its true power. While researching my three-volume [Linux self-study course][3] (on which this series of articles is based), I learned things about Bash that I'd never known in over 20 years of working with Linux. Some of these new bits of knowledge relate to its use as a programming language. Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts.
This three-part series explores using Bash as a command-line interface (CLI) programming language. This first article looks at some simple command-line programming with Bash, variables, and control operators. The other articles explore types of Bash files; string, numeric, and miscellaneous logical operators that provide execution-flow control logic; different types of shell expansions; and the **for**, **while**, and **until** loops that enable repetitive operations. They will also look at some commands that simplify and support the use of these tools.
### The shell
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it displays them in the terminal. All of the shells I am familiar with are also programming languages.
Bash stands for Bourne Again Shell because the Bash shell is [based upon][4] the older Bourne shell that was written by Stephen Bourne in 1977. Many [other shells][5] are available, but these are the four I encounter most frequently:
* **csh:** The C shell for programmers who like the syntax of the C language
* **ksh:** The Korn shell, written by David Korn and popular with Unix users
* **tcsh:** A version of csh with more ease-of-use features
* **zsh:** The Z shell, which combines many features of other popular shells
All shells have built-in commands that supplement or replace the ones provided by the core utilities. Open the shell's man page and find the "BUILT-INS" section to see the commands it provides.
Each shell has its own personality and syntax. Some will work better for you than others. I have used the C shell, the Korn shell, and the Z shell. I still like the Bash shell more than any of them. Use the one that works best for you, although that might require you to try some of the others. Fortunately, it's quite easy to change shells.
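For example, assuming the Z shell is installed and listed in /etc/shells, a single command changes your login shell (just a sketch, not something you need to do for this series); log out and back in for it to take effect:
```
# Change the current user's login shell to zsh
chsh -s /usr/bin/zsh
```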
All of these shells are programming languages, as well as command interpreters. Here's a quick tour of some programming constructs and tools that are integral parts of Bash.
### Bash as a programming language
Most sysadmins have used Bash to issue commands that are usually fairly simple and straightforward. But Bash can go beyond entering single commands, and many sysadmins create simple command-line programs to perform a series of tasks. These programs are common tools that can save time and effort.
My objective when writing CLI programs is to save time and effort (i.e., to be the lazy sysadmin). CLI programs support this by listing several commands in a specific sequence that execute one after another, so you do not need to watch the progress of one command and type in the next command when the first finishes. You can go do other things and not have to continually monitor the progress of each command.
### What is "a program"?
The Free On-line Dictionary of Computing ([FOLDOC][6]) defines a program as: "The instructions executed by a computer, as opposed to the physical device on which they run." Princeton University's [WordNet][7] defines a program as: "…a sequence of instructions that a computer can interpret and execute…" [Wikipedia][8] also has a good entry about computer programs.
Therefore, a program can consist of one or more instructions that perform a specific, related task. A computer program instruction is also called a program statement. For sysadmins, a program is usually a sequence of shell commands. All the shells available for Linux, at least the ones I am familiar with, have at least a basic form of programming capability, and Bash, the default shell for most Linux distributions, is no exception.
While this series uses Bash (because it is so ubiquitous), if you use a different shell, the general programming concepts will be the same, although the constructs and syntax may differ somewhat. Some shells may support some features that others do not, but they all provide some programming capability. Shell programs can be stored in a file for repeated use, or they may be created on the command line as needed.
### Simple CLI programs
The simplest command-line programs are one or two consecutive program statements, which may be related or not, that are entered on the command line before the **Enter** key is pressed. The second statement in a program, if there is one, might be dependent upon the actions of the first, but it does not need to be.
There is also one bit of syntactical punctuation that needs to be clearly stated. When entering a single command on the command line, pressing the **Enter** key terminates the command with an implicit semicolon (**;**). When used in a CLI shell program entered as a single line on the command line, the semicolon must be used to terminate each statement and separate it from the next one. The last statement in a CLI shell program can use an explicit or implicit semicolon.
### Some basic syntax
The following examples will clarify this syntax. This program consists of a single command with an explicit terminator:
```
[student@studentvm1 ~]$ echo "Hello world." ;
Hello world.
```
That may not seem like much of a program, but it is the first program I encounter with every new programming language I learn. The syntax may be a bit different for each language, but the result is the same.
Let's expand a little on this trivial but ubiquitous program. Your results will be different from mine because I have done other experiments, while you may have only the default directories and files that are created in the account home directory the first time you log into an account via the GUI desktop.
```
[student@studentvm1 ~]$ echo "My home directory." ; ls ;
My home directory.
chapter25   TestFile1.Linux  dmesg2.txt  Downloads  newfile.txt  softlink1  testdir6
chapter26   TestFile1.mac    dmesg3.txt  file005    Pictures     Templates  testdir
TestFile1      Desktop       dmesg.txt   link3      Public       testdir    Videos
TestFile1.dos  dmesg1.txt    Documents   Music      random.txt   testdir1
```
That makes a bit more sense. The results are related, but the individual program statements are independent of each other. Notice that I like to put spaces before and after the semicolon because it makes the code a bit easier to read. Try that little CLI program again without an explicit semicolon at the end:
```
[student@studentvm1 ~]$ echo "My home directory." ; ls
```
There is no difference in the output.
### Something about variables
Like all programming languages, the Bash shell can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable.
Bash does not type variables like C and related languages, defining them as integers, floating points, or string types. In Bash, all variables are strings. A string that is an integer can be used in integer arithmetic, which is the only type of math that Bash is capable of doing. If more complex math is required, the [**bc** command][9] can be used in CLI programs and scripts.
Variables are assigned values and can be used to refer to those values in CLI programs and scripts. The value of a variable is set using its name but not preceded by a **$** sign. The assignment **VAR=10** sets the value of the variable VAR to 10. To print the value of the variable, you can use the statement **echo $VAR**. Start with text (i.e., non-numeric) variables.
Bash variables become part of the shell environment until they are unset.
Check the initial value of a variable that has not been assigned; it should be null. Then assign a value to the variable and print it to verify its value. You can do all of this in a single CLI program:
```
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello World" ; echo $MyVar ;

Hello World
[student@studentvm1 ~]$
```
_Note: The syntax of variable assignment is very strict. There must be no spaces on either side of the equal (**=**) sign in the assignment statement._
The empty line indicates that the initial value of **MyVar** is null. Changing the value of a variable is done the same way as setting it. The example below shows both the original and the new value.
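For example, the following one-liner (my own quick sketch, assuming you are still in the shell session where **MyVar** was set above) prints the original value, reassigns the variable, and then removes it with **unset**:
```
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello Universe" ; echo $MyVar ; unset MyVar ; echo $MyVar
Hello World
Hello Universe

[student@studentvm1 ~]$
```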
As mentioned, Bash can perform integer arithmetic calculations, which is useful for calculating a reference to the location of an element in an array or doing simple math problems. It is not suitable for scientific computing or anything that requires decimals, such as financial calculations. There are much better tools for those types of calculations.
Here's a simple calculation:
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1*Var2))"
Result = 63
```
What happens when you perform a math operation that results in a floating-point number?
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1/Var2))"
Result = 0
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var2/Var1))"
Result = 1
[student@studentvm1 ~]$
```
The result is truncated to its integer part; Bash simply discards the remainder rather than rounding. Notice that the calculation was performed as part of the **echo** statement. The math is performed before the enclosing **echo** command due to the Bash order of precedence. For details, see the Bash man page and search for "precedence."
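If you do need the decimals, the **bc** command mentioned earlier can do the division instead. A quick sketch (the **scale** setting controls how many decimal places **bc** prints):
```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "scale=3; $Var1/$Var2" | bc
.777
```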
### Control operators
Shell control operators are syntactical operators that make it easy to create some interesting command-line programs. The simplest form of CLI program is just stringing several commands together in a sequence on the command line:
```
command1 ; command2 ; command3 ; command4 ; . . . ; etc. ;
```
Those commands all run without a problem so long as no errors occur. But what happens when an error occurs? You can anticipate and allow for errors using the built-in **&&** and **||** Bash control operators. These two control operators provide some flow control and enable you to alter the sequence of code execution. The semicolon is also considered to be a Bash control operator, as is the newline character.
The **&&** operator simply says, "if command1 is successful, then run command2. If command1 fails for any reason, then command2 is skipped." That syntax looks like this:
```
command1 && command2
```
Now, look at some commands that will create a new directory and—if it's successful—make it the present working directory (PWD). Ensure that your home directory (**~**) is the PWD. Try this first in **/root**, a directory that you do not have access to:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir/ && cd $Dir
mkdir: cannot create directory '/root/testdir/': Permission denied
[student@studentvm1 ~]$
```
The error was emitted by the **mkdir** command. Because the creation of the directory failed, the **&&** control operator sensed the non-zero return code, so the **cd** command was skipped; you never changed into a directory that does not exist. Using the **&&** control operator prevents the **cd** command from running because there was an error in creating the directory. This type of command-line program flow control can prevent errors from compounding and making a real mess of things. But it's time to get a little more complicated.
The **||** control operator allows you to add another program statement that executes when the initial program statement returns a code greater than zero. The basic syntax looks like this:
```
command1 || command2
```
This syntax reads, "If command1 fails, execute command2." That implies that if command1 succeeds, command2 is skipped. Try this by attempting to create a new directory:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
This is exactly what you would expect. Because the new directory could not be created, the first command failed, which resulted in the execution of the second command.
Combining these two operators provides the best of both. The control operator syntax using some flow control takes this general form when the **&&** and **||** control operators are used:
```
preceding commands ; command1 && command2 || command3 ; following commands
```
This syntax can be stated like so: "If command1 exits with a return code of 0, then execute command2, otherwise execute command3." Try it:
```
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
mkdir: cannot create directory '/root/testdir': Permission denied
/root/testdir was not created.
[student@studentvm1 ~]$
```
Now try the last command again using your home directory instead of the **/root** directory. You will have permission to create this directory:
```
[student@studentvm1 ~]$ Dir=~/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
[student@studentvm1 testdir]$
```
The control operator syntax, like **command1 && command2**, works because every command sends a return code (RC) to the shell that indicates if it completed successfully or whether there was some type of failure during execution. By convention, an RC of zero (0) indicates success, and any positive number indicates some type of failure. Some of the tools sysadmins use just return a one (1) to indicate a failure, but many use other codes to indicate the type of failure that occurred.
The Bash shell variable **$?** contains the RC from the last command. This RC can be checked very easily by a script, the next command in a list of commands, or even the sysadmin directly. Start by running a simple command and immediately checking the RC. The RC will always be for the last command that ran before you looked at it.
```
[student@studentvm1 testdir]$ ll ; echo "RC = $?"
total 1264
drwxrwxr-x  2 student student   4096 Mar  2 08:21 chapter25
drwxrwxr-x  2 student student   4096 Mar 21 15:27 chapter26
-rwxr-xr-x  1 student student     92 Mar 20 15:53 TestFile1
<snip>
drwxrwxr-x. 2 student student 663552 Feb 21 14:12 testdir
drwxr-xr-x. 2 student student   4096 Dec 22 13:15 Videos
RC = 0
[student@studentvm1 testdir]$
```
The RC, in this case, is zero, which means the command completed successfully. Now try the same command on root's home directory, a directory you do not have permissions for:
```
[student@studentvm1 testdir]$ ll /root ; echo "RC = $?"
ls: cannot open directory '/root': Permission denied
RC = 2
[student@studentvm1 testdir]$
```
In this case, the RC is two; this means a non-root user was denied permission to access a directory that the user does not have rights to. The control operators use these RCs to enable you to alter the sequence of program execution, as one final sketch below shows.
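To tie this back to the control operators, here is one more small sketch. Because Bash expands the arguments of the **echo** command before running it, **$?** still holds the return code of the failed **ls**:
```
[student@studentvm1 testdir]$ ls /root > /dev/null 2>&1 || echo "ls failed with RC = $?"
ls failed with RC = 2
[student@studentvm1 testdir]$
```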
### Summary
This article looked at Bash as a programming language and explored its basic syntax as well as some basic tools. It showed how to print data to STDOUT and how to use variables and control operators. The next article in this series looks at some of the many Bash logical operators that control the flow of instruction execution.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/programming-bash-part-1
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://opensource.com/article/18/10/linux-data-streams
[3]: http://www.both.org/?page_id=1183
[4]: https://opensource.com/19/9/command-line-heroes-bash
[5]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
[6]: http://foldoc.org/program
[7]: https://wordnet.princeton.edu/
[8]: https://en.wikipedia.org/wiki/Computer_program
[9]: https://www.gnu.org/software/bc/manual/html_mono/bc.html

View File

@ -1,263 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to dual boot Windows 10 and Debian 10)
[#]: via: (https://www.linuxtechi.com/dual-boot-windows-10-debian-10/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to dual boot Windows 10 and Debian 10
======
So, you finally made the bold decision to try out **Linux** after much convincing. However, you do not want to let go of your Windows 10 operating system yet, as you will still need it while you learn the ropes on Linux. Thankfully, you can easily have a dual boot setup that allows you to switch to either of the operating systems upon booting your system. In this guide, you will learn how to **dual boot Windows 10 alongside Debian 10**.
[![How-to-dual-boot-Windows-and-Debian10][1]][2]
### Prerequisites
Before you get started, ensure you have the following:
* A bootable USB or DVD of Debian 10
* A fast and stable internet connection (for installation updates & third-party applications)
Additionally, it is worth paying attention to how your system boots (UEFI or Legacy) and ensuring that both operating systems boot using the same boot mode.
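One quick way to check the firmware mode from a Linux live or rescue shell (a rough sketch, not part of the original guide) is to test for the **/sys/firmware/efi** directory, which only exists on systems booted in UEFI mode:
```
# Prints "UEFI" if the running system was booted in UEFI mode, "Legacy BIOS" otherwise
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "Legacy BIOS"
```
On Windows, the **System Information** tool shows the same detail under "BIOS Mode".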
### Step 1: Create a free partition on your hard drive
To start off, you need to create a free partition on your hard drive. This is the partition where Debian will be installed during the installation process. To achieve this, you will invoke the disk management utility as shown:
Press **Windows Key + R** to launch the Run dialogue. Next, type **diskmgmt.msc** and hit **ENTER**
[![Launch-Run-dialogue][1]][3]
This launches the **disk management** window displaying all the drives existing on your Windows system.
[![Disk-management][1]][4]
Next, you need to create a free space for Debian installation. To do this, you need to shrink a partition from one of the volumes and create a new unallocated partition. In this case, I will create a **30 GB** partition from Volume D.
To shrink a volume, right-click on it and select the **shrink** option
[![Shrink-volume][1]][5]
In the pop-up dialogue, define the amount of space you want to shrink the volume by. Remember, this will be the disk space on which Debian 10 will be installed. In my case, I selected **30000 MB (approximately 30 GB)**. Once done, click on **Shrink**.
[![Shrink-space][1]][6]
After the shrinking operation completes, you should have an unallocated partition as shown:
[![Unallocated-partition][1]][7]
Perfect! We are now good to go and ready to begin the installation process.
### Step 2: Begin the installation of Debian 10
With the free partition already created, plug in your bootable USB drive or insert the DVD installation medium into your PC and reboot your system. Be sure to change the **boot order** in the **BIOS** setup by pressing the function keys (usually **F9**, **F10** or **F12**, depending on the vendor). This is crucial so that the PC boots into your installation medium. Save the BIOS settings and reboot.
A new grub menu will be displayed as shown below: Click on **Graphical install**
[![Graphical-Install-Debian10][1]][8]
In the next step, select your **preferred language** and click **Continue**
[![Select-Language-Debian10][1]][9]
Next, select your **location** and click **Continue**. Based on this location, the time zone will automatically be selected for you. If you cannot find your location, scroll down and click on **other**, then select your location.
[![Select-location-Debain10][1]][10]
Next, select your **keyboard** layout.
[![Configure-Keyboard-layout-Debain10][1]][11]
In the next step, specify your systems **hostname** and click **Continue**
[![Set-hostname-Debian10][1]][12]
Next, specify the **domain name**. If you are not in a domain environment, simply click on the **continue** button.
[![Set-domain-name-Debian10][1]][13]
In the next step, specify the **root password** as shown and click **continue**.
[![Set-root-Password-Debian10][1]][14]
In the next step, specify the full name of the user for the account and click **continue**
[![Specify-fullname-user-debain10][1]][15]
Then set the account name by specifying the **username** associated with the account
[![Specify-username-Debian10][1]][16]
Next, specify the usernames password as shown and click **continue**
[![Specify-user-password-Debian10][1]][17]
Next, specify your **timezone**
[![Configure-timezone-Debian10][1]][18]
At this point, you need to create partitions for your Debian 10 installation. If you are an inexperienced user, click on **Use the largest continuous free space** and click **continue**.
[![Use-largest-continuous-free-space-debian10][1]][19]
However, if you are more knowledgeable about creating partitions, select the **Manual** option and click **continue**
[![Select-Manual-Debain10][1]][20]
Thereafter, select the partition labeled **FREE SPACE** and click **continue**. Next, click on **Create a new partition**.
[![Create-new-partition-Debain10][1]][21]
In the next window, first define the size of the swap space. In my case, I specified **2 GB**. Click **Continue**.
[![Define-swap-space-debian10][1]][22]
Next, click on **Primary** on the next screen and click **continue**
[![Partition-Disks-Primary-Debain10][1]][23]
Select the partition to **start at the beginning** and click continue.
[![Start-at-the-beginning-Debain10][1]][24]
Next, click on **Ext 4 journaling file system** and click **continue**
[![Select-Ext4-Journaling-system-debain10][1]][25]
On the next window, select **swap** and click **continue**
[![Select-swap-debain10][1]][26]
Next, click on **Done setting up the partition** and click **continue**.
[![Done-setting-partition-debian10][1]][27]
Back to the **Partition disks** page, click on **FREE SPACE** and click continue
[![Click-Free-space-Debain10][1]][28]
To make your life easy select **Automatically partition the free space** and click **continue**.
[![Automatically-partition-free-space-Debain10][1]][29]
Next click on **All files in one partition (recommended for new users)**
[![All-files-in-one-partition-debian10][1]][30]
Finally, click on **Finish partitioning and write changes to disk** and click **continue**.
[![Finish-partitioning-write-changes-to-disk][1]][31]
Confirm that you want to write changes to disk and click **Yes**
[![Write-changes-to-disk-Yes-Debian10][1]][32]
Thereafter, the installer will begin installing all the requisite software packages.
When asked if you want to scan another CD, select **No** and click continue
[![Scan-another-CD-No-Debain10][1]][33]
Next, select the mirror of the Debian archive closest to you and click Continue
[![Debian-archive-mirror-country][1]][34]
Next, select the **Debian mirror** that is most preferable to you and click **Continue**
[![Select-Debian-archive-mirror][1]][35]
If you plan on using a proxy server, enter its details as shown below, otherwise leave it blank and click continue
[![Enter-proxy-details-debian10][1]][36]
As the installation proceeds, you will be asked if you would like to participate in a **package usage survey**. You can select either option and click **continue**. In my case, I selected **No**.
[![Participate-in-survey-debain10][1]][37]
Next, select the packages you need in the **software selection** window and click **continue**.
[![Software-selection-debian10][1]][38]
The installation will continue installing the selected packages. At this point, you can take a coffee break as the installation goes on.
You will be prompted whether to install the grub **bootloader** on **Master Boot Record (MBR)**. Click **Yes** and click **Continue**.
[![Install-grub-bootloader-debian10][1]][39]
Next, select the hard drive on which you want to install **grub** and click **Continue**.
[![Select-hard-drive-install-grub-Debian10][1]][40]
Finally, the installation will complete. Go ahead and click on the **Continue** button.
[![Installation-complete-reboot-debian10][1]][41]
You should now have a grub menu with both **Windows** and **Debian** listed. To boot into Debian, scroll to the Debian entry and select it. Thereafter, you will be prompted with a login screen. Enter your details and hit ENTER.
[![Debian10-log-in][1]][42]
And voila! There goes your fresh copy of Debian 10 in a dual boot setup with Windows 10.
[![Debian10-Buster-Details][1]][43]
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/dual-boot-windows-10-debian-10/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/How-to-dual-boot-Windows-and-Debian10.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Launch-Run-dialogue.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Disk-management.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-volume.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-space.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Unallocated-partition.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Graphical-Install-Debian10.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Language-Debian10.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-location-Debain10.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-Keyboard-layout-Debain10.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-hostname-Debian10.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-domain-name-Debian10.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-root-Password-Debian10.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-fullname-user-debain10.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-username-Debian10.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-user-password-Debian10.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-timezone-Debian10.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Use-largest-continuous-free-space-debian10.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Manual-Debain10.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Create-new-partition-Debain10.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Define-swap-space-debian10.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Partition-Disks-Primary-Debain10.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Start-at-the-beginning-Debain10.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Ext4-Journaling-system-debain10.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-swap-debain10.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Done-setting-partition-debian10.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Click-Free-space-Debain10.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Automatically-partition-free-space-Debain10.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/10/All-files-in-one-partition-debian10.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Finish-partitioning-write-changes-to-disk.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Write-changes-to-disk-Yes-Debian10.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Scan-another-CD-No-Debain10.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian-archive-mirror-country.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Debian-archive-mirror.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Enter-proxy-details-debian10.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Participate-in-survey-debain10.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Software-selection-debian10.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-grub-bootloader-debian10.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-hard-drive-install-grub-Debian10.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Installation-complete-reboot-debian10.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-log-in.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-Buster-Details.jpg

View File

@ -1,452 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
Understanding system calls on Linux with strace
======
Trace the thin layer between user processes and the Linux kernel with
strace.
![Hand putting a Linux file folder into a drawer][1]
A system call is a programmatic way a program requests a service from the kernel, and **strace** is a powerful tool that allows you to trace the thin layer between user processes and the Linux kernel.
To understand how an operating system works, you first need to understand how system calls work. One of the main functions of an operating system is to provide abstractions to user programs.
An operating system can roughly be divided into two modes:
* **Kernel mode:** A privileged and powerful mode used by the operating system kernel
* **User mode:** Where most user applications run
Users mostly work with command-line utilities and graphical user interfaces (GUI) to do day-to-day tasks. System calls work silently in the background, interfacing with the kernel to get work done.
System calls are very similar to function calls, which means they accept and work on arguments and return values. The only difference is that system calls enter a kernel, while function calls do not. Switching from user space to kernel space is done using a special [trap][2] mechanism.
Most of this is hidden away from the user by using system libraries (aka **glibc** on Linux systems). Even though system calls are generic in nature, the mechanics of issuing a system call are very much machine-dependent.
This article explores some practical examples by using some general commands and analyzing the system calls made by each command using **strace**. These examples use Red Hat Enterprise Linux, but the commands should work the same on other Linux distros:
```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```
First, ensure that the required tools are installed on your system. You can verify whether **strace** is installed using the RPM command below; if it is, you can check the **strace** utility version number using the **-V** option:
```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```
If that doesn't work, install **strace** by running:
```
yum install strace
```
For the purpose of this example, create a test directory within **/tmp** and create two files using the **touch** command:
```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```
(I used the **/tmp** directory because everybody has access to it, but you can choose another directory if you prefer.)
Verify that the files were created using the **ls** command on the **testdir** directory:
```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```
You probably use the **ls** command every day without realizing system calls are at work underneath it. There is abstraction at play here; here's how this command works:
```
Command-line utility -> Invokes functions from system libraries (glibc) -> Invokes system calls
```
The **ls** command internally calls functions from system libraries (aka **glibc**) on Linux. These libraries invoke the system calls that do most of the work.
If you want to know which functions were called from the **glibc** library, use the **ltrace** command followed by the regular **ls testdir/** command:
```
ltrace ls testdir/
```
If **ltrace** is not installed, install it by entering:
```
yum install ltrace
```
A bunch of output will be dumped to the screen; don't worry about it—just follow along. Some of the important library functions from the output of the **ltrace** command that are relevant to this example include:
```
opendir("testdir/")                                  = { 3 }
readdir({ 3 })                                       = { 101879119, "." }
readdir({ 3 })                                       = { 134, ".." }
readdir({ 3 })                                       = { 101879120, "file1" }
strlen("file1")                                      = 5
memcpy(0x1665be0, "file1\0", 6)                      = 0x1665be0
readdir({ 3 })                                       = { 101879122, "file2" }
strlen("file2")                                      = 5
memcpy(0x166dcb0, "file2\0", 6)                      = 0x166dcb0
readdir({ 3 })                                       = nil
closedir({ 3 })                      
```
By looking at the output above, you probably can understand what is happening. A directory called **testdir** is being opened by the **opendir** library function, followed by calls to the **readdir** function, which is reading the contents of the directory. At the end, there is a call to the **closedir** function, which closes the directory that was opened earlier. Ignore the other **strlen** and **memcpy** functions for now.
You can see which library functions are being called, but this article will focus on system calls that are invoked by the system library functions.
Similar to the above, to understand what system calls are invoked, just put **strace** before the **ls testdir** command, as shown below. Once again, a bunch of gibberish will be dumped to your screen, which you can follow along with here:
```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL)                               = 0x1f12000
<<< truncated strace output >>>
write(1, "file1  file2\n", 13file1  file2
)          = 13
close(1)                                = 0
munmap(0x7fd002c8d000, 4096)            = 0
close(2)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```
The output on the screen after running the **strace** command was simply system calls made to run the **ls** command. Each system call serves a specific purpose for the operating system, and they can be broadly categorized into the following sections:
* Process management system calls
* File management system calls
* Directory and filesystem management system calls
* Other system calls
An easier way to analyze the information dumped onto your screen is to log the output to a file using **strace**'s handy **-o** flag. Add a suitable file name after the **-o** flag and run the command again:
```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```
This time, no output dumped to the screen—the **ls** command worked as expected by showing the file names and logging all the output to the file **trace.log**. The file has almost 100 lines of content just for a simple **ls** command:
```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```
Take a look at the first line in the example's trace.log:
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
* The first word of the line, **execve**, is the name of a system call being executed.
* The text within the parentheses is the arguments provided to the system call.
* The number after the **=** sign (which is **0** in this case) is a value returned by the **execve** system call.
The output doesn't seem too intimidating now, does it? And you can apply the same logic to understand other lines.
Now, narrow your focus to the single command that you invoked, i.e., **ls testdir**. You know the directory name used by the command **ls**, so why not **grep** for **testdir** within your **trace.log** file and see what you get? Look at each line of the results in detail:
```
[root@sandbox tmp]# grep testdir trace.log
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[root@sandbox tmp]#
```
Thinking back to the analysis of **execve** above, can you tell what this system call does?
```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```
You don't need to memorize all the system calls or what they do, because you can refer to documentation when you need to. Man pages to the rescue! Ensure the following package is installed before running the **man** command:
```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```
Remember that you need to add a **2** between the **man** command and the system call name. If you read **man**'s man page using **man man**, you can see that section 2 is reserved for system calls. Similarly, if you need information on library functions, you need to add a **3** between **man** and the library function name.
The following are the manual's section numbers and the types of pages they contain:
```
1. Executable programs or shell commands
2. System calls (functions provided by the kernel)
3. Library calls (functions within program libraries)
4. Special files (usually found in /dev)
```
Run the following **man** command with the system call name to see the documentation for that system call:
```
man 2 execve
```
As per the **execve** man page, this executes a program that is passed in the arguments (in this case, that is **ls**). There are additional arguments that can be provided to **ls**, such as **testdir** in this example. Therefore, this system call just runs **ls** with **testdir** as the argument:
```
'execve - execute program'
'DESCRIPTION
       execve()  executes  the  program  pointed to by filename'
```
The next system call, named **stat**, uses the **testdir** argument:
```
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
```
Use **man 2 stat** to access the documentation. **stat** is the system call that gets a file's status—remember that everything in Linux is a file, including a directory.
Next, the **openat** system call opens **testdir**. Keep an eye on the **3** that is returned. This is a file descriptor, which will be used by later system calls:
```
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
```
So far, so good. Now, open the **trace.log** file and go to the line following the **openat** system call. You will see the **getdents** system call being invoked, which does most of what is required to execute the **ls testdir** command. Now, **grep getdents** from the **trace.log** file:
```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
[root@sandbox tmp]#
```
The **getdents** man page describes it as **get directory entries**, which is what you want to do. Notice that the argument for **getdents** is **3**, which is the file descriptor from the **openat** system call above.
Now that you have the directory listing, you need a way to display it in your terminal. So, **grep** for another system call, **write**, which is used to write to the terminal, in the logs:
```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13)          = 13
[root@sandbox tmp]#
```
In these arguments, you can see the file names that will be displayed: **file1** and **file2**. Regarding the first argument (**1**), remember that in Linux, when any process is run, three file descriptors are opened for it by default. Following are the default file descriptors:
* 0 - Standard input
* 1 - Standard out
* 2 - Standard error
So, the **write** system call is displaying **file1** and **file2** on the standard display, which is the terminal, identified by **1**.
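You can see this separation of descriptors directly by redirecting them to different files; the listing goes to descriptor **1** and any error messages go to descriptor **2**. A small sketch (the file names are arbitrary):
```
[root@sandbox tmp]# ls testdir/ 1>list.txt 2>errors.txt
[root@sandbox tmp]# cat list.txt
file1
file2
[root@sandbox tmp]# cat errors.txt    # empty, because ls produced no errors
[root@sandbox tmp]#
```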
Now you know which system calls did most of the work for the **ls testdir/** command. But what about the other 100+ system calls in the **trace.log** file? The operating system has to do a lot of housekeeping to run a process, so a lot of what you see in the log file is process initialization and cleanup. Read the entire **trace.log** file and try to understand what is happening to make the **ls** command work.
Now that you know how to analyze system calls for a given command, you can use this knowledge for other commands to understand what system calls are being executed. **strace** provides a lot of useful command-line flags to make it easier for you, and some of them are described below.
By default, **strace** does not include all system call information. However, it has a handy **-v** (verbose) option that can provide additional information on each system call:
```
strace -v ls testdir
```
It is good practice to always use the **-f** option when running the **strace** command. It allows **strace** to trace any child processes created by the process currently being traced:
```
strace -f ls testdir
```
Say you just want the names of system calls, the number of times they ran, and the percentage of time spent in each system call. You can use the **-c** flag to get those statistics:
```
strace -c ls testdir/
```
Suppose you want to concentrate on a specific system call, such as focusing on **open** system calls and ignoring the rest. You can use the **-e** flag followed by the system call name:
```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```
What if you want to concentrate on more than one system call? No worries, you can use the same **-e** command-line flag with a comma between the two system calls. For example, to see the **write** and **getdents** systems calls:
```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768)     = 112
getdents(3, /* 0 entries */, 32768)     = 0
write(1, "file1  file2\n", 13file1  file2
)          = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```
The examples so far have traced explicitly run commands. But what about commands that have already been run and are in execution? What, for example, if you want to trace daemons that are just long-running processes? For this, **strace** provides a special **-p** flag to which you can provide a process ID.
Instead of running a **strace** on a daemon, take the example of a **cat** command, which usually displays the contents of a file if you give a file name as an argument. If no argument is given, the **cat** command simply waits at a terminal for the user to enter text. Once text is entered, it repeats the given text until a user presses Ctrl+C to exit.
Run the **cat** command from one terminal; it will show you a prompt and simply wait there (remember **cat** is still running and has not exited):
```
[root@sandbox tmp]# cat
```
From another terminal, find the process identifier (PID) using the **ps** command:
```
[root@sandbox ~]# ps -ef | grep cat
root      22443  20164  0 14:19 pts/0    00:00:00 cat
root      22482  20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```
Now, run **strace** on the running process with the **-p** flag and the PID (which you found above using **ps**). After running **strace**, the output states what the process was attached to along with the PID number. Now, **strace** is tracing the system calls made by the **cat** command. The first system call you see is **read**, which is waiting for input from 0, or standard input, which is the terminal where the **cat** command ran:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```
Now, move back to the terminal where you left the **cat** command running and enter some text. I entered **x0x0** for demo purposes. Notice how **cat** simply repeated what I entered; hence, **x0x0** appears twice. I input the first one, and the second one was the output repeated by the **cat** command:
```
[root@sandbox tmp]# cat
x0x0
x0x0
```
Move back to the terminal where **strace** was attached to the **cat** process. You now see two additional system calls: the earlier **read** system call, which now reads **x0x0** in the terminal, and another for **write**, which wrote **x0x0** back to the terminal, and again a new **read**, which is waiting to read from the terminal. Note that Standard input (**0**) and Standard out (**1**) are both in the same terminal:
```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536)                = 5
write(1, "x0x0\n", 5)                   = 5
read(0,
```
Imagine how helpful this is when running **strace** against daemons to see everything they do in the background. Kill the **cat** command by pressing Ctrl+C; this also kills your **strace** session since the process is no longer running.
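For a real daemon, the pattern is exactly the same: find its PID and attach. A rough sketch, assuming an **sshd** daemon is running (the **-f** and **-o** flags were described above, and **pgrep -o** picks the oldest matching process):
```
[root@sandbox ~]# strace -f -o sshd-trace.log -p "$(pgrep -o sshd)"
```
Press Ctrl+C when you have captured enough; this detaches **strace** without killing the daemon, and you can then read through **sshd-trace.log** at your leisure.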
If you want to see a timestamp against all your system calls, simply use the **-t** option with **strace**:
```
[root@sandbox ~]#strace -t ls testdir/
14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL)                      = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
What if you want to know the time spent between system calls? **strace** has a handy **-r** option that shows the time spent executing each system call. Pretty useful, isn't it?
```
[root@sandbox ~]#strace -r ls testdir/
0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL)                 = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
### Conclusion
The **strace** utility is very handy for understanding system calls on Linux. To learn about its other command-line flags, please refer to the man pages and online documentation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/strace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,96 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrading Fedora 30 to Fedora 31)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Upgrading Fedora 30 to Fedora 31
======
![][1]
Fedora 31 [is available now][2]. You'll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 30 to Fedora 31.
### Upgrading Fedora 30 Workstation to Fedora 31
Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.
Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 31 is Now Available.
If you don't see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.
Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.
### Using the command line
If you've upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 30 to Fedora 31. Using this plugin will make your upgrade to Fedora 31 simple and easy.
#### 1\. Update software and back up your system
Before you start the upgrade process, make sure you have the latest software for Fedora 30. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use _GNOME Software_ or enter the following command in a terminal.
```
sudo dnf upgrade --refresh
```
Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.
#### 2\. Install the DNF plugin
Next, open a terminal and type the following command to install the plugin:
```
sudo dnf install dnf-plugin-system-upgrade
```
#### 3\. Start the update with DNF
Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:
```
sudo dnf system-upgrade download --releasever=31
```
This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
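For example, with that flag added, the download step would look like the following; only use it if the plain command fails, and review the list of packages DNF proposes to erase before accepting it:
```
sudo dnf system-upgrade download --releasever=31 --allowerasing
```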
#### 4\. Reboot and upgrade
Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:
```
sudo dnf system-upgrade reboot
```
Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 30; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.
Now might be a good time for a coffee break! Once it finishes, your system will restart and you'll be able to log in to your newly upgraded Fedora 31 system.
![][4]
### Resolving upgrade problems
On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade quick docs][5] for more information on troubleshooting.
If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
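As a rough, illustrative example, a third-party repository can usually be disabled for the duration of the upgrade with the _dnf config-manager_ plugin; the repository ID below is a placeholder, and you can list the real IDs on your system with _dnf repolist_:
```
sudo dnf repolist
sudo dnf config-manager --set-disabled some-third-party-repo
```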
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/f30-f31-816x345.jpg
[2]: https://fedoramagazine.org/announcing-fedora-31/
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues

View File

@@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
[#]: via: (https://opensource.com/article/19/10/intro-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with awk, a powerful text-parsing tool
======
Let's jump in and start using it.
![Woman programming][1]
Awk is a powerful text-parsing tool for Unix and Unix-like systems, but because it has programmed functions that you can use to perform common parsing tasks, it's also considered a programming language. You probably won't be developing your next GUI application with awk, and it likely won't take the place of your default scripting language, but it's a powerful utility for specific tasks.
What those tasks may be is surprisingly diverse. The best way to discover which of your problems might be best solved by awk is to learn awk; you'll be surprised at how awk can help you get more done but with a lot less effort.
Awk's basic syntax is:
```
`awk [options] 'pattern {action}' file`
```
To get started, create this sample file and save it as **colours.txt**:
```
name       color  amount
apple      red    4
banana     yellow 6
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
This data is separated into columns by one or more spaces. It's common for data that you are analyzing to be organized in some way. It may not always be columns separated by whitespace, or even a comma or semicolon, but especially in log files or data dumps, there's generally a predictable pattern. You can use patterns of data to help awk extract and process the data that you want to focus on.
### Printing a column
In awk, the **print** function displays whatever you specify. There are many predefined variables you can use, but some of the most common are integers designating columns in a text file. Try it out:
```
$ awk '{print $2;}' colours.txt
color
red
yellow
red
purple
green
purple
brown
brown
yellow
```
In this case, awk displays the second column, denoted by **$2**. This is relatively intuitive, so you can probably guess that **print $1** displays the first column, and **print $3** displays the third, and so on.
To display _all_ columns, use **$0**.
The number after the dollar sign (**$**) is an _expression_, so **$2** and **$(1+1)** mean the same thing.
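You can check this equivalence yourself; the following command prints the same column as the earlier **print $2** example:
```
$ awk '{print $(1+1);}' colours.txt
color
red
yellow
[...]
```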
### Conditionally selecting columns
The example file you're using is very structured. It has a row that serves as a header, and the columns relate directly to one another. By defining _conditional_ requirements, you can qualify what you want awk to return when looking at this data. For instance, to view items in column 2 that match "yellow" and print the contents of column 1:
```
awk '$2=="yellow"{print $1}' file1.txt
banana
pineapple
```
Regular expressions work as well. This conditional looks at **$2** for the letter **p** followed by one or more characters, which are in turn followed by another **p**:
```
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
grape   purple  10
plum    purple  2
```
Numbers are interpreted naturally by awk. For instance, to print any row with a third column containing an integer greater than 5:
```
$ awk '$3>5 {print $1, $2}' colours.txt
name    color
banana  yellow
grape   purple
apple   green
potato  brown
```
### Field separator
By default, awk uses whitespace as the field separator. Not all text files use whitespace to define fields, though. For example, create a file called **colours.csv** with this content:
```
name,color,amount
apple,red,4
banana,yellow,6
strawberry,red,3
grape,purple,10
apple,green,8
plum,purple,2
kiwi,brown,4
potato,brown,9
pineapple,yellow,5
```
Awk can treat the data in exactly the same way, as long as you specify which character it should use as the field separator in your command. Use the **--field-separator** (or just **-F** for short) option to define the delimiter:
```
$ awk -F"," '$2=="yellow" {print $1}' file1.csv
banana
pineapple
```
### Saving output
Using output redirection, you can write your results to a file. For example:
```
`$ awk -F, '$3>5 {print $1, $2}' colours.csv > output.txt`
```
This creates a file with the contents of your awk query.
You can also split a file into multiple files grouped by column data. For example, if you want to split colours.txt into multiple files according to what color appears in each row, you can cause awk to redirect _per query_ by including the redirection in your awk statement:
```
`$ awk '{print > $2".txt"}' colours.txt`
```
This produces files named **yellow.txt**, **red.txt**, and so on.
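For example, with the **colours.txt** data above, **yellow.txt** would contain only the rows whose second column is yellow:
```
$ cat yellow.txt
banana     yellow 6
pineapple  yellow 5
```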
In the next article, you'll learn more about fields, records, and some powerful awk variables.
* * *
This article is adapted from an episode of [Hacker Public Radio][2], a community technology podcast.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/intro-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: http://hackerpublicradio.org/eps.php?id=2114

View File

@@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Keyboard Shortcuts to Speed Up Your Work in Linux)
[#]: via: (https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/)
[#]: author: (S Sathyanarayanan https://opensourceforu.com/author/s-sathyanarayanan/)
Keyboard Shortcuts to Speed Up Your Work in Linux
======
[![Google Keyboard][1]][2]
_Manipulating the mouse, keyboard and menus takes up a lot of our time, which could be saved by using keyboard shortcuts. These not only save time, but also make the computer user more efficient._
Did you realise that switching from the keyboard to the mouse while typing takes up to two seconds each time? If a person works for eight hours every day, switching from the keyboard to the mouse once a minute, and there are around 240 working days in a year, the amount of time wasted (as per calculations done by Brainscape) is:
_[2 wasted seconds/min] x [480 minutes per day] x 240 working days per year = 64 wasted hours per year_
This is equal to eight working days lost and hence learning keyboard shortcuts will increase productivity by 3.3 per cent (_<https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_).
Keyboard shortcuts provide a quicker way to do a task that would otherwise take multiple steps using the mouse and/or the menus. Figure 1 lists a few of the most frequently used shortcuts in Ubuntu 18.04 and in Web browsers. I am omitting very well-known shortcuts like copy and paste, as well as those that are not used frequently. Readers can refer to online resources for a comprehensive list of shortcuts. Note that the Windows key is called the Super key in Linux.
**General shortcuts**
A list of general shortcuts is given below.
[![][3]][4]
**Print Screen and video recording of the screen**
The following shortcuts can be used to print the screen or take a video recording of the screen.
[![][5]][6]
**Switching between applications**
The shortcut keys listed here can be used to switch between applications.
[![][7]][8]
**Tile windows**
The windows can be tiled in different ways using the shortcuts given below.
[![][9]][10]
**Browser shortcuts**
The most frequently used shortcuts for browsers are listed here. Most of the shortcuts are common to the Chrome/Firefox browsers.
**Key combination** | **Action**
---|---
Ctrl + T | Opens a new tab.
Ctrl + Shift + T | Opens the most recently closed tab.
Ctrl + D | Adds a new bookmark.
Ctrl + W | Closes the browser tab.
Alt + D | Positions the cursor in the browser's address bar.
F5 or Ctrl-R | Refreshes a page.
Ctrl + Shift + Del | Clears private data and history.
Ctrl + N | Opens a new window.
Home | Scrolls to the top of the page.
End | Scrolls to the bottom of the page.
Ctrl + J | Opens the Downloads folder (in Chrome)
F11 | Full-screen view (toggle effect)
**Terminal shortcuts**
Here is a list of terminal shortcuts.
[![][11]][12]
You can also configure your own custom shortcuts in Ubuntu, as follows (a command-line sketch follows this list):
* Click on Settings in Ubuntu Dash.
* Select the Devices tab in the left menu of the Settings window.
* Select the Keyboard tab in the Devices menu.
* The + button is displayed at the bottom of the right panel. Click on the + sign to open the custom shortcut dialogue box and configure a new shortcut.
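If you prefer to script this rather than click through Settings, GNOME also exposes custom shortcuts through gsettings. The snippet below is only an illustrative sketch for a GNOME desktop; the shortcut name, command, and key binding are placeholders, and the first command overwrites any existing custom shortcut list:
```
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
KEYPATH="org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"
gsettings set "$KEYPATH" name 'Open terminal'
gsettings set "$KEYPATH" command 'gnome-terminal'
gsettings set "$KEYPATH" binding '<Super>Return'
```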
Learning the shortcuts mentioned in this article can save a lot of time and make you more productive.
**Reference**
_Cohen, Andrew. How keyboard shortcuts could revive America's economy; [www.brainscape.com][13]. [Online] Brainscape, 26 May 2017; <https://www.brainscape.com/blog/2011/08/keyboard-shortcuts-economy/>_
![Avatar][14]
[S Sathyanarayanan][15]
The author is currently working with Sri Sathya Sai University for Human Excellence, Gulbarga. He has more than 25 years of experience in systems management and in teaching IT courses. He is an enthusiastic promoter of FOSS and can be reached at [sathyanarayanan.brn@gmail.com][16].
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/keyboard-shortcuts-to-speed-up-your-work-in-linux/
作者:[S Sathyanarayanan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/s-sathyanarayanan/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?resize=696%2C418&ssl=1 (Google Keyboard)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/12/Google-Keyboard.jpg?fit=750%2C450&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?resize=350%2C319&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/1.png?ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?resize=350%2C326&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/NW.png?ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?resize=350%2C264&ssl=1
[8]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/2.png?ssl=1
[9]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?resize=350%2C186&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/3.png?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?resize=350%2C250&ssl=1
[12]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/7.png?ssl=1
[13]: http://www.brainscape.com
[14]: https://secure.gravatar.com/avatar/736684a2707f2ed7ae72675edf7bb3ee?s=100&r=g
[15]: https://opensourceforu.com/author/s-sathyanarayanan/
[16]: mailto:sathyanarayanan.brn@gmail.com

View File

@@ -0,0 +1,252 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fields, records, and variables in awk)
[#]: via: (https://opensource.com/article/19/11/fields-records-variables-awk)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Fields, records, and variables in awk
======
In the second article in this intro to awk series, learn about fields,
records, and some powerful awk variables.
![Man at laptop on a mountain][1]
Awk comes in several varieties: There is the original **awk**, written in 1977 at AT&T Bell Laboratories, and several reimplementations, such as **mawk**, **nawk**, and the one that ships with most Linux distributions, GNU awk, or **gawk**. On most Linux distributions, awk and gawk are synonyms referring to GNU awk, and typing either invokes the same awk command. See the [GNU awk user's guide][2] for the full history of awk and gawk.
The [first article][3] in this series showed that awk is invoked on the command line with this syntax:
```
`$ awk [options] 'pattern {action}' inputfile`
```
Awk is the command, and it can take options (such as **-F** to define the field separator). The action you want awk to perform is contained in single quotes, at least when it's issued in a terminal. To further emphasize which part of the awk command is the action you want it to take, you can precede your program with the **-e** option (but it's not required):
```
$ awk -F, -e '{print $2;}' colours.txt
yellow
blue
green
[...]
```
### Records and fields
Awk views its input data as a series of _records_, which are usually newline-delimited lines. In other words, awk generally sees each line in a text file as a new record. Each record contains a series of _fields_. A field is a component of a record delimited by a _field separator_.
By default, awk sees whitespace, such as spaces, tabs, and newlines, as indicators of a new field. Specifically, awk treats multiple _space_ separators as one, so this line contains two fields:
```
`raspberry red`
```
As does this one:
```
`tuxedo                  black`
```
Other separators are not treated this way. Assuming that the field separator is a comma, the following example record contains three fields, with one probably being zero characters long (assuming a non-printable character isn't hiding in that field):
```
`a,,b`
```
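You can confirm this by asking awk itself to count the fields, using the **NF** variable covered later in this article:
```
$ echo 'a,,b' | awk -F, '{print NF}'
3
```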
### The awk program
The _program_ part of an awk command consists of a series of rules. Normally, each rule begins on a new line in the program (although this is not mandatory). Each rule consists of a pattern and one or more actions:
```
`pattern { action }`
```
In a rule, you can define a pattern as a condition to control whether the action will run on a record. Patterns can be simple comparisons, regular expressions, combinations of the two, and more.
For instance, this will print a record _only_ if it contains the word "raspberry":
```
$ awk '/raspberry/ { print $0 }' colours.txt
raspberry red 99
```
If there is no qualifying pattern, the action is applied to every record.
Also, a rule can consist of only a pattern, in which case the entire record is written as if the action was **{ print }**.
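For example, this pattern-only rule prints the same matching record as the previous command:
```
$ awk '/raspberry/' colours.txt
raspberry red 99
```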
Awk programs are essentially _data-driven_ in that actions depend on the data, so they are quite a bit different from programs in many other programming languages.
### The NF variable
Each field has a variable as a designation, but there are special variables for fields and records, too. The variable **NF** stores the number of fields awk finds in the current record. This can be printed or used in tests. Here is an example using the [text file][3] from the previous article:
```
$ awk '{ print $0 " (" NF ")" }' colours.txt
name       color  amount (3)
apple      red    4 (3)
banana     yellow 6 (3)
[...]
```
Awk's **print** function takes a series of arguments (which may be variables or strings) and concatenates them together. This is why, at the end of each line in this example, awk prints the number of fields as an integer enclosed by parentheses.
### The NR variable
In addition to counting the fields in each record, awk also counts input records. The record number is held in the variable **NR**, and it can be used in the same way as any other variable. For example, to print the record number before each line:
```
$ awk '{ print NR ": " $0 }' colours.txt
1: name       color  amount
2: apple      red    4
3: banana     yellow 6
4: raspberry  red    3
5: grape      purple 10
[...]
```
Note that it's acceptable to write this command with no spaces other than the one after **print**, although it's more difficult for a human to parse:
```
`$ awk '{print NR": "$0}' colours.txt`
```
### The printf() function
For greater flexibility in how the output is formatted, you can use the awk **printf()** function. This is similar to **printf** in C, Lua, Bash, and other languages. It takes a _format_ argument followed by a comma-separated list of items. The argument list may be enclosed in parentheses.
```
`$ printf format, item1, item2, ...`
```
The format argument (or _format string_) defines how each of the other arguments will be output. It uses _format specifiers_ to do this, including **%s** to output a string and **%d** to output a decimal number. The following **printf** statement outputs the record followed by the number of fields in parentheses:
```
$ awk '{printf "%s (%d)\n",$0,NF}' colours.txt
name       color  amount (3)
raspberry  red    4 (3)
banana     yellow 6 (3)
[...]
```
In this example, **%s (%d)** provides the structure for each line, while **$0,NF** defines the data to be inserted into the **%s** and **%d** positions. Note that, unlike with the **print** function, no newline is generated without explicit instructions. The escape sequence **\n** does this.
### Awk scripting
All of the awk code in this article has been written and executed in an interactive Bash prompt. For more complex programs, it's often easier to place your commands into a file or _script_. The option **-f FILE** (not to be confused with **-F**, which denotes the field separator) may be used to invoke a file containing a program.
For example, here is a simple awk script. Create a file called **example1.awk** with this content:
```
/^a/ {print "A: " $0}
/^b/ {print "B: " $0}
```
It's conventional to give such files the extension **.awk** to make it clear that they hold an awk program. This naming is not mandatory, but it gives file managers and editors (and you) a useful clue about what the file is.
Run the script:
```
$ awk -f example1.awk colours.txt
A: raspberry  red    4
B: banana     yellow 6
A: apple      green  8
```
A file containing awk instructions can be made into a script by adding a **#!** line at the top and making it executable. Create a file called **example2.awk** with these contents:
```
#!/usr/bin/awk -f
#
# Print all but line 1 with the line number on the front
#
NR > 1 {
    printf "%d: %s\n",NR,$0
}
```
Arguably, there's no advantage to having just one line in a script, but sometimes it's easier to execute a script than to remember and type even a single line. A script file also provides a good opportunity to document what a command does. Lines starting with the **#** symbol are comments, which awk ignores.
Grant the file executable permission:
```
`$ chmod u+x example2.awk`
```
Run the script:
```
$ ./example2.awk colours.txt
2: apple      red    4
3: banana     yellow 6
4: raspberry red    3
5: grape      purple 10
[...]
```
An advantage of placing your awk instructions in a script file is that it's easier to format and edit. While you can write awk on a single line in your terminal, it can get overwhelming when it spans several lines.
### Try it
You now know enough about how awk processes your instructions to be able to write a complex awk program. Try writing an awk script with more than one rule and at least one conditional pattern. If you want to try more functions than just **print** and **printf**, refer to [the gawk manual][4] online.
Here's an idea to get you started:
```
#!/usr/bin/awk -f
#
# Print each record EXCEPT
# IF the first field is "raspberry",
# THEN replace "red" with "pi"
$1 == "raspberry" {
        gsub(/red/,"pi")
}
{ print }
```
Try this script to see what it does, and then try to write your own.
The next article in this series will introduce more functions for even more complex (and useful!) scripts.
* * *
_This article is adapted from an episode of [Hacker Public Radio][5], a community technology podcast._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/fields-records-variables-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr (Man at laptop on a mountain)
[2]: https://www.gnu.org/software/gawk/manual/html_node/History.html#History
[3]: https://opensource.com/article/19/10/intro-awk
[4]: https://www.gnu.org/software/gawk/manual/
[5]: http://hackerpublicradio.org/eps.php?id=2129

View File

@@ -0,0 +1,308 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Add Windows and Linux host to Nagios Server for Monitoring)
[#]: via: (https://www.linuxtechi.com/add-windows-linux-host-to-nagios-server/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Add Windows and Linux host to Nagios Server for Monitoring
======
In the previous article, we demonstrated how to install [Nagios Core on CentOS 8 / RHEL 8][1] server. In this guide, we will dive deeper and add Linux and Windows hosts to the Nagios Core server for monitoring.
![Add-Linux-Windows-Host-Nagios-Server][2]
### Adding a Remote Windows Host to Nagios Server
In this section, you will learn how to add a **Windows host** system to the **Nagios server**. For this to be possible, you need to install **NSClient++** agent on the Windows Host system. In this guide, we are going to install the NSClient++ on a Windows Server 2019 Datacenter edition.
On the Windows host system, head to the download link <https://sourceforge.net/projects/nscplus/> and download the NSClient++ agent.
Once downloaded, double click on the downloaded installation file to launch the installation wizard.
[![NSClient-installer-Windows][2]][3]
On the first step of the installation procedure, click **Next**
[![click-nex-to-install-NSClient][2]][4]
In the next section, select the **I accept the terms in the license Agreement** checkbox and click **Next**
[![Accept-terms-conditions-NSClient][2]][5]
Next, select the **Typical** option from the list and click **Next**
[![click-on-Typical-option-NSClient-Installation][2]][6]
In the next step, leave the default settings as they are and click **Next**.
[![Define-path-NSClient-Windows][2]][7]
On the next page, specify your Nagios Core server's IP address, tick all the modules, and click **Next** as shown below.
[![Specify-Nagios-Server-IP-address-NSClient-Windows][2]][8]
Next, click on the **Install** option to begin the installation process.
[![Click-install-to-being-the-installation-NSClient][2]][9]
The installation process will start and take a couple of seconds to complete. On the last step, click **Finish** to complete the installation and exit the wizard.
[![Click-finish-NSClient-Windows][2]][10]
To start the NSClient service, click on the **Start** menu and click on the **Start NSClient ++** option.
[![Click-start-NSClient-service-windows][2]][11]
To confirm that the service is indeed running, press **Windows Key + R**, type services.msc, and hit **ENTER**. Scroll to the **NSClient** service and ensure it's running
[![NSClient-running-windows][2]][12]
At this point, we have successfully installed NSClient++ on the Windows Server 2019 host and verified that it's running.
### Configure Nagios Server to monitor Windows host
After the successful installation of the NSClient ++ on the Windows host PC, log in to the Nagios server Core system and configure it to monitor the Windows host system.
Open the windows.cfg file using your favorite text editor
```
# vim /usr/local/nagios/etc/objects/windows.cfg
```
In the configuration file, make sure that the host_name attribute matches the hostname of your Windows client system. In our case, the hostname of the Windows server is windows-server; use it for every host_name attribute in the file.
For the address attribute, specify your Windows host's IP address. In our case, this was 10.128.0.52.
![Specify-hostname-IP-Windows][2]
After you are done, save the changes and exit the text editor.
Next, open the Nagios configuration file.
```
# vim /usr/local/nagios/etc/nagios.cfg
```
Uncomment the line below and save the changes.
cfg_file=/usr/local/nagios/etc/objects/windows.cfg
![Uncomment-Windows-cfg-Nagios][2]
Finally, to verify that Nagios configuration is free from any errors, run the command:
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
Output
![Verify-configuration-for-errors-Nagios][2]
As you can see from the output, there are no warnings or errors.
Now browse to your Nagios Server's IP address, log in, and click on Hosts. Your Windows host, in this case windows-server, will appear on the dashboard.
![Windows-Host-added-Nagios][2]
### Adding a remote Linux Host to Nagios Server
Having added a Windows host to the Nagios server, let's add a Linux host system. In our case, we are going to add an **Ubuntu 18.04 LTS** system to the Nagios monitoring server. To monitor a Linux host, we need to install an agent called **NRPE** on the remote Linux system. NRPE is short for **Nagios Remote Plugin Executor**, the plugin that allows you to monitor Linux hosts and resources such as swap, memory usage, and CPU load, to mention a few. So the first step is to install NRPE on the remote Ubuntu 18.04 LTS system.
But first, update Ubuntu system
```
# sudo apt update
```
Next,  install Nagios NRPE by running the command as shown:
```
# sudo apt install nagios-nrpe-server nagios-plugins
```
![Install-nrpe-server-nagios-plugins][2]
After the successful installation of  NRPE and Nagios plugins, configure NRPE by opening its configuration file in /etc/nagios/nrpe.cfg
```
# vim /etc/nagios/nrpe.cfg
```
Append the Linux host IP address to the **server_address** attribute. In this case, 10.128.0.53 is the IP address of the Ubuntu 18.04 LTS system.
![Specify-server-address-Nagios][2]
Next, add Nagios server IP address in the allowed_hosts attribute, in this case, 10.128.0.50
![Allowed-hosts-Nagios][2]
Save and exit the configuration file.
Next, restart NRPE service and verify its status
```
# systemctl restart nagios-nrpe-server
# systemctl enable nagios-nrpe-server
# systemctl status nagios-nrpe-server
```
![Restart-nrpe-check-status][2]
### Configure Nagios Server to monitor Linux host
Having successfully installed NRPE and the Nagios plugins on the remote Linux server, log in to the Nagios server and install the EPEL (Extra Packages for Enterprise Linux) package.
```
# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
Next, install NRPE plugin on the server
```
# dnf install nagios-plugins-nrpe -y
```
After the installation of the NRPE plugin, open the Nagios configuration file “/usr/local/nagios/etc/nagios.cfg”
```
# vim /usr/local/nagios/etc/nagios.cfg
```
Next, uncomment the line below in the configuration file
cfg_dir=/usr/local/nagios/etc/servers
![uncomment-servers-line-Nagios-Server-CentOS8][2]
Next, create a configuration directory
```
# mkdir /usr/local/nagios/etc/servers
```
Then create client configuration file
```
# vim /usr/local/nagios/etc/servers/ubuntu-host.cfg
```
Copy and paste the configuration below to the file. This configuration monitors swap space, system load, total processes, logged in users, and disk usage.
```
define host{
use linux-server
host_name ubuntu-nagios-client
alias ubuntu-nagios-client
address 10.128.0.53
}
define hostgroup{
hostgroup_name linux-server
alias Linux Servers
members ubuntu-nagios-client
}
define service{
use local-service
host_name ubuntu-nagios-client
service_description SWAP Usage
check_command check_nrpe!check_swap
}
define service{
use local-service
host_name ubuntu-nagios-client
service_description Root / Partition
check_command check_nrpe!check_root
}
define service{
use local-service
host_name ubuntu-nagios-client
service_description Current Users
check_command check_nrpe!check_users
}
define service{
use local-service
host_name ubuntu-nagios-client
service_description Total Processes
check_command check_nrpe!check_total_procs
}
define service{
use local-service
host_name ubuntu-nagios-client
service_description Current Load
check_command check_nrpe!check_load
}
```
Save and exit the configuration file.
Next, verify that there are no errors in Nagios configuration
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
Now restart Nagios service and ensure that it is up and running.
```
# systemctl restart nagios
```
Remember to open port 5666, which is used by the NRPE plugin, on the Nagios server's firewall.
```
# firewall-cmd --permanent --add-port=5666/tcp
# firewall-cmd --reload
```
![Allow-firewall-Nagios-server][2]
Likewise, head out to your Linux host (Ubuntu 18.04 LTS) and allow the port on UFW firewall
```
# ufw allow 5666/tcp
# ufw reload
```
![Allow-NRPE-service][2]
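Optionally, before checking the dashboard, you can confirm that the Nagios server can reach the NRPE agent on the Linux host. The plugin path below is an assumption (the EPEL nagios-plugins-nrpe package typically installs it under /usr/lib64/nagios/plugins), and the version string in the output is only an example:
```
# /usr/lib64/nagios/plugins/check_nrpe -H 10.128.0.53
NRPE v3.2.1
```
A successful connection simply prints the NRPE version reported by the agent; a timeout usually points to a firewall or allowed_hosts problem.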
Finally, head to the Nagios Server's URL and click on **Hosts**. Your Ubuntu system will be displayed on the dashboard alongside the Windows host machine we added earlier.
![Linux-host-added-monitored-Nagios][2]
And this wraps up our 2-part series on Nagios installation and adding remote hosts. Feel free to get back to us with your feedback.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/add-windows-linux-host-to-nagios-server/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/NSClient-installer-Windows.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/11/click-nex-to-install-NSClient.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Accept-terms-conditions-NSClient.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/11/click-on-Typical-option-NSClient-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Define-path-NSClient-Windows.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Specify-Nagios-Server-IP-address-NSClient-Windows.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-install-to-being-the-installation-NSClient.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-finish-NSClient-Windows.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Click-start-NSClient-service-windows.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/11/NSClient-running-windows.jpg

View File

@@ -0,0 +1,434 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An introduction to monitoring with Prometheus)
[#]: via: (https://opensource.com/article/19/11/introduction-monitoring-prometheus)
[#]: author: (Yuri Grinshteyn https://opensource.com/users/yuri-grinshteyn)
An introduction to monitoring with Prometheus
======
Prometheus is a popular and powerful toolkit to monitor Kubernetes. This
is a tutorial on how to get started.
![Wheel of a ship][1]
[Metrics are the primary way][2] to represent both the overall health of your system and any other specific information you consider important for monitoring and alerting or observability. [Prometheus][3] is a leading open source metric instrumentation, collection, and storage toolkit [built at SoundCloud][4] beginning in 2012. Since then, it's [graduated][5] from the Cloud Native Computing Foundation and become the de facto standard for Kubernetes monitoring. It has been covered in some detail in:
* [Getting started with Prometheus][6]
* [5 examples of Prometheus monitoring success][7]
* [Achieve high-scale application monitoring with Prometheus][8]
* [Tracking the weather with Python and Prometheus][9]
However, none of these articles focus on how to use Prometheus on Kubernetes. This article:
* Describes the Prometheus architecture and data model to help you understand how it works and what it can do
* Provides a tutorial on setting Prometheus up in a Kubernetes cluster and using it to monitor clusters and applications
### Architecture
While knowing how Prometheus works may not be essential to using it effectively, it can be helpful, especially if you're considering using it for production. The [Prometheus documentation][10] provides this graphic and details about the essential elements of Prometheus and how the pieces connect together.
[![Prometheus architecture][11]][10]
For most use cases, you should understand three major components of Prometheus:
1. The Prometheus **server** scrapes and stores metrics. Note that it uses a **persistence** layer, which is part of the server and not expressly mentioned in the documentation. Each node of the server is autonomous and does not rely on distributed storage. I'll revisit this later when looking to use a dedicated time-series database to store Prometheus data, rather than relying on the server itself.
2. The web **UI** allows you to access, visualize, and chart the stored data. Prometheus provides its own UI, but you can also configure other visualization tools, like [Grafana][12], to access the Prometheus server using PromQL (the Prometheus Query Language).
3. **Alertmanager** sends alerts from client applications, especially the Prometheus server. It has advanced features for deduplicating, grouping, and routing alerts and can route through other services like PagerDuty and OpsGenie.
The key to understanding Prometheus is that it fundamentally relies on **scraping**, or pulling, metrics from defined endpoints. This means that your application needs to expose an endpoint where metrics are available and instruct the Prometheus server how to scrape it (this is covered in the tutorial below). There are [exporters][13] for many applications that do not have an easy way to add web endpoints, such as [Kafka][14] and [Cassandra][15] (using the JMX exporter).
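To get a feel for what a scrape target looks like, you can curl any /metrics endpoint. The output below is a trimmed, illustrative example based on the sample application used later in this tutorial; the metric names come from that app, and the values are made up:
```
$ curl -s http://localhost:8080/metrics | head -n 6
# HELP node_requests total requests
# TYPE node_requests counter
node_requests 42
# HELP node_failed_requests failed requests
# TYPE node_failed_requests counter
node_failed_requests 1
```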
### Data model
Now that you understand how Prometheus works to scrape and store metrics, the next thing to learn is the kinds of metrics Prometheus supports. Some of the following information (noted with quotation marks) comes from the [metric types][16] section of the Prometheus documentation.
#### Counters and gauges
The two simplest metric types are **counter** and **gauge**. When getting started with Prometheus (or with time-series monitoring more generally), these are the easiest types to understand because it's easy to connect them to values you can imagine monitoring, like how much system resources your application is using or how many events it has processed.
> "A **counter** is a cumulative metric that represents a single monotonically increasing counter whose value can only **increase** or be **reset** to zero on restart. For example, you can use a counter to represent the number of requests served, tasks completed, or errors."
Because you cannot decrease a counter, it can and should be used only to represent cumulative metrics.
> "A **gauge** is a metric that represents a single numerical value that can arbitrarily go up and down. Gauges are typically used for measured values like [CPU] or current memory usage, but also 'counts' that can go up and down, like the number of concurrent requests."
#### Histograms and summaries
Prometheus supports two more complex metric types: [**histograms**][17] and [**summaries**][17]. There is ample opportunity for confusion here, given that they both track the number of observations _and_ the sum of observed values. One of the reasons you might choose to use them is that you need to calculate an average of the observed values. Note that they create multiple time series in the database; for example, they each create a sum of the observed values with a **_sum** suffix.
> "A **histogram** samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values."
This makes it an excellent candidate to track things like latency that might have a service level objective (SLO) defined against it. From the [documentation][17]:
> You might have an SLO to serve 95% of requests within 300ms. In that case, configure a histogram to have a bucket with an upper limit of 0.3 seconds. You can then directly express the relative amount of requests served within 300ms and easily alert if the value drops below 0.95. The following expression calculates it by job for the requests served in the last 5 minutes. The request durations were collected with a histogram called **http_request_duration_seconds**.
>
> `sum(rate(http_request_duration_seconds_bucket{le="0.3"}[5m])) by (job) / sum(rate(http_request_duration_seconds_count[5m])) by (job)`
Returning to definitions:
> "Similar to a histogram, a **summary** samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window."
The essential difference between summaries and histograms is that summaries calculate streaming φ-quantiles on the client-side and expose them directly, while histograms expose bucketed observation counts, and the calculation of quantiles from the buckets of a histogram happens on the server-side using the **histogram_quantile()** function.
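As a sketch of that server-side calculation, here is how a 95th-percentile latency could be queried through Prometheus' HTTP API. The histogram name comes from the sample application used later in this tutorial, and the port assumes the port-forward set up there:
```
$ curl -sG 'http://localhost:8080/api/v1/query' \
    --data-urlencode 'query=histogram_quantile(0.95, sum(rate(node_request_latency_bucket[5m])) by (le))'
```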
If you are still confused, I suggest taking the following approach:
* Use gauges most of the time for straightforward time-series metrics.
* Use counters for things you know to increase monotonically, e.g., if you are counting the number of times something happens.
* Use histograms for latency measurements with simple buckets, e.g., one bucket for "under SLO" and another for "over SLO."
This should be sufficient for the overwhelming majority of use cases, and you should rely on a statistical analysis expert to help you with more advanced scenarios.
Now that you have a basic understanding of what Prometheus is, how it works, and the kinds of data it can collect and store, you're ready to begin the tutorial.
## Prometheus and Kubernetes hands-on tutorial
This tutorial covers the following:
* Installing Prometheus in your cluster
* Downloading the sample application and reviewing the code
* Building and deploying the app and generating load against it
* Accessing the Prometheus UI and reviewing the basic metrics
This tutorial assumes:
* You already have a Kubernetes cluster deployed.
* You have configured the **kubectl** command-line utility for access.
* You have the **cluster-admin** role (or at least sufficient privileges to create namespaces and deploy applications).
* You are running a Bash-based command-line interface. Adjust this tutorial if you run other operating systems or shell environments.
If you don't have Kubernetes running yet, this [Minikube tutorial][18] is an easy way to set it up on your laptop.
If you're ready now, let's go.
### Install Prometheus
In this section, you will clone the sample repository and use Kubernetes' configuration files to deploy Prometheus to a dedicated namespace.
1. Clone the sample repository locally and use it as your working directory: [code] $ git clone <https://github.com/yuriatgoogle/prometheus-demo.git>
$ cd  prometheus-demo
$ WORKDIR=$(pwd)
```
2. Create a dedicated namespace for the Prometheus deployment: [code]`$ kubectl create namespace prometheus`
```
3. Give your namespace the cluster reader role: [code] $ kubectl apply -f $WORKDIR/kubernetes/clusterRole.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
```
4. Create a Kubernetes configmap with scraping and alerting rules: [code] $ kubectl apply -f $WORKDIR/kubernetes/configMap.yaml -n prometheus
configmap/prometheus-server-conf created
```
5. Deploy Prometheus: [code] $ kubectl create -f prometheus-deployment.yaml -n prometheus
deployment.extensions/prometheus-deployment created
```
6. Validate that Prometheus is running: [code] $ kubectl get pods -n prometheus
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-deployment-78fb5694b4-lmz4r   1/1     Running   0          15s
```
### Review basic metrics
In this section, you'll access the Prometheus UI and review the metrics being collected.
1. Use port forwarding to enable web access to the Prometheus UI locally:
**Note:** Your **prometheus-deployment** will have a different name than this example. Review and replace the name of the pod from the output of the previous command. [code] $ kubectl port-forward prometheus-deployment-7ddb99dcb-fkz4d 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
```
2. Go to <http://localhost:8080> in a browser:
![Prometheus console][19]
You are now ready to query Prometheus metrics!
3. Some basic machine metrics (like the number of CPU cores and memory) are available right away. For example, enter **machine_memory_bytes** in the expression field, switch to the Graph view, and click Execute to see the metric charted:
![Prometheus metric channel][20]
4. Containers running in the cluster are also automatically monitored. For example, enter **rate(container_cpu_usage_seconds_total{container_name="prometheus"}[1m])** as the expression and click Execute to see the rate of CPU usage by Prometheus:
![CPU usage metric][21]
Now that you know how to install Prometheus and use it to measure some out-of-the-box metrics, it's time for some real monitoring.
#### Golden signals
As described in the "[Monitoring Distributed Systems][22]" chapter of [Google's SRE][23] book:
> "The four golden signals of monitoring are latency, traffic, errors, and saturation. If you can only measure four metrics of your user-facing system, focus on these four."
The book offers thorough descriptions of all four, but this tutorial focuses on the three signals that most easily serve as proxies for user happiness:
* **Traffic:** How many requests you're receiving
* **Error rate:** How many of those requests you can successfully serve
* **Latency:** How quickly you can serve successful requests
As you probably realize by now, Prometheus does not measure any of these for you; you'll have to instrument any application you deploy to emit them. Following is an example implementation.
Open the **$WORKDIR/node/golden_signals/app.js** file, which is a sample application written in Node.js (recall we cloned **yuriatgoogle/prometheus-demo** and exported **$WORKDIR** earlier). Start by reviewing the first section, where the metrics to be recorded are defined:
```
// total requests - counter
const nodeRequestsCounter = new prometheus.Counter({
    name: 'node_requests',
    help: 'total requests'
});
```
The first metric is a counter that will be incremented for each request; this is how the total number of requests is counted:
```
// failed requests - counter
const nodeFailedRequestsCounter = new prometheus.Counter({
    name: 'node_failed_requests',
    help: 'failed requests'
});
```
The second metric is another counter that increments for each error to track the number of failed requests:
```
// latency - histogram
const nodeLatenciesHistogram = new prometheus.Histogram({
    name: 'node_request_latency',
    help: 'request latency by path',
    labelNames: ['route'],
    buckets: [100, 400]
});
```
The third metric is a histogram that tracks request latency. Working with a very basic assumption that the SLO for latency is 100ms, you will create two buckets: one for 100ms and the other 400ms latency.
The next section handles incoming requests, increments the total requests metric for each one, increments failed requests when there is an (artificially induced) error, and records a latency histogram value for each successful request. I have chosen not to record latencies for errors; that implementation detail is up to you.
```
app.get('/', (req, res) => {
    // start latency timer
    const requestReceived = new Date().getTime();
    console.log('request made');
    // increment total requests counter
    nodeRequestsCounter.inc();
    // return an error 1% of the time
    if ((Math.floor(Math.random() * 100)) == 100) {
        // increment error counter
        nodeFailedRequestsCounter.inc();
        // return error code
        res.send("error!", 500);
    }
    else {
        // delay for a bit
        sleep.msleep((Math.floor(Math.random() * 1000)));
        // record response latency
        const responseLatency = new Date().getTime() - requestReceived;
        nodeLatenciesHistogram
            .labels(req.route.path)
            .observe(responseLatency);
        res.send("success in " + responseLatency + " ms");
    }
})
```
#### Test locally
Now that you've seen how to implement Prometheus metrics, see what happens when you run the application.
1. Install the required packages: [code] $ cd $WORKDIR/node/golden_signals
$ npm install --save
```
2. Launch the app: [code]`$ node app.js`
```
3. Open two browser tabs: one to <http://localhost:8080> and another to <http://localhost:8080/metrics>.
4. When you go to the **/metrics** page, you can see the Prometheus metrics being collected and updated every time you reload the home page:
![Prometheus metrics being collected][24]
You're now ready to deploy the sample application to your Kubernetes cluster and test your monitoring.
#### Deploy monitoring to Prometheus on Kubernetes
Now it's time to see how metrics are recorded and represented in the Prometheus instance deployed in your cluster by:
* Building the application image
* Deploying it to your cluster
* Generating load against the app
* Observing the metrics recorded
##### Build the application image
The sample application provides a Dockerfile you'll use to build the image. This section assumes that you have:
* Docker installed and configured locally
* A Docker Hub account
* Created a repository
If you're using Google Kubernetes Engine to run your cluster, you can use Cloud Build and the Google Container Registry instead.
1. Switch to the application directory: [code]`$ cd $WORKDIR/node/golden_signals`
```
2. Build the image with this command: [code]`$ docker build . --tag=<Docker username>/prometheus-demo-node:latest`
```
3. Make sure you're logged in to Docker Hub: [code]`$ docker login`
```
4. Push the image to Docker Hub using this command: [code]`$ docker push <username>/prometheus-demo-node:latest`
```
5. Verify that the image is available: [code]`$ docker images`
```
#### Deploy the application
Now that the application image is in the Docker Hub, you can deploy it to your cluster and run the application.
1. Modify the **$WORKDIR/node/golden_signals/prometheus-demo-node.yaml** file to pull the image from Docker Hub: [code] spec:
      containers:
      - image: docker.io/<Docker username>/prometheus-demo-node:latest
```
2. Deploy the image: [code] $ kubectl apply -f $WORKDIR/node/golden_signals/prometheus-demo-node.yaml
deployment.extensions/prometheus-demo-node created
```
3. Verify that the application is running: [code] $ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
prometheus-demo-node-69688456d4-krqqr   1/1     Running   0          65s
```
4. Expose the application using a load balancer: [code] $ kubectl expose deployment prometheus-demo-node --type=LoadBalancer --name=prometheus-demo-node --port=8080
service/prometheus-demo-node exposed
```
5. Confirm that your service has an external IP address: [code] $ kubectl get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
kubernetes             ClusterIP      10.39.240.1     <none>           443/TCP          23h
prometheus-demo-node   LoadBalancer   10.39.248.129   35.199.186.110   8080:31743/TCP   78m
```
##### Generate load to test monitoring
Now that your service is up and running, generate some load against it by using [Apache Bench][25].
1. Get the IP address of your service as a variable: [code]`$ export SERVICE_IP=$(kubectl get svc prometheus-demo-node -ojson | jq -r '.status.loadBalancer.ingress[].ip')`
```
2. Use **ab** to generate some load. You may want to run this in a separate terminal window. [code]`$ ab -c 3 -n 1000 http://${SERVICE_IP}:8080/`
```
##### Review metrics
While the load is running, access the Prometheus UI in the cluster again and confirm that the "golden signal" metrics are being collected.
1. Establish a connection to Prometheus: [code]
$ kubectl get pods -n prometheus
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-deployment-78fb5694b4-lmz4r   1/1     Running   0          15s
$ kubectl port-forward prometheus-deployment-78fb5694b4-lmz4r 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
```
**Note:** Make sure to replace the name of the pod in the second command with the output of the first.
2. Open <http://localhost:8080> in a browser:
![Prometheus console][26]
3. Use this expression to measure the request rate: [code]`rate(node_requests[1m])`
```
![Measuring the request rate][27]
4. Use this expression to measure your error rate: [code]`rate(node_failed_requests[1m])`
```
![Measuring the error rate][28]
5. Finally, use this expression to validate your latency SLO. Remember that you set up two buckets, 100ms and 400ms. This expression returns the percentage of requests that meet the SLO: [code]`sum(rate(node_request_latency_bucket{le="100"}[1h])) / sum(rate(node_request_latency_count[1h]))`
```
![SLO query graph][29]
About 10% of the requests are within SLO. This is what you should expect since the code sleeps for a random number of milliseconds between 0 and 1,000. As such, about 90% of the time it takes more than 100ms to respond, and this graph shows that you can't meet the latency SLO as a result.
### Summary
Congratulations! You've completed the tutorial and hopefully have a much better understanding of how Prometheus works, how to instrument your application with custom metrics, and how to use it to measure your SLO compliance. The next article in this series will look at another metric instrumentation approach using OpenCensus.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/introduction-monitoring-prometheus
作者:[Yuri Grinshteyn][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yuri-grinshteyn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Wheel of a ship)
[2]: https://opensource.com/article/19/10/open-source-observability-kubernetes
[3]: https://prometheus.io/
[4]: https://en.wikipedia.org/wiki/Prometheus_(software)#History
[5]: https://www.cncf.io/announcement/2018/08/09/prometheus-graduates/
[6]: https://opensource.com/article/18/12/introduction-prometheus
[7]: https://opensource.com/article/18/9/prometheus-operational-advantage
[8]: https://opensource.com/article/19/10/application-monitoring-prometheus
[9]: https://opensource.com/article/19/4/weather-python-prometheus
[10]: https://prometheus.io/docs/introduction/overview/
[11]: https://opensource.com/sites/default/files/uploads/prometheus-architecture.png (Prometheus architecture)
[12]: https://grafana.com/
[13]: https://prometheus.io/docs/instrumenting/exporters/
[14]: https://github.com/danielqsj/kafka_exporter
[15]: https://github.com/prometheus/jmx_exporter
[16]: https://prometheus.io/docs/concepts/metric_types/
[17]: https://prometheus.io/docs/practices/histograms/
[18]: https://opensource.com/article/18/10/getting-started-minikube
[19]: https://opensource.com/sites/default/files/uploads/prometheus-console.png (Prometheus console)
[20]: https://opensource.com/sites/default/files/uploads/prometheus-machine_memory_bytes.png (Prometheus metric channel)
[21]: https://opensource.com/sites/default/files/uploads/prometheus-cpu-usage.png (CPU usage metric)
[22]: https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/
[23]: https://landing.google.com/sre/sre-book/toc/
[24]: https://opensource.com/sites/default/files/uploads/prometheus-metrics-collected.png (Prometheus metrics being collected)
[25]: https://httpd.apache.org/docs/2.4/programs/ab.html
[26]: https://opensource.com/sites/default/files/uploads/prometheus-enable-query-history.png (Prometheus console)
[27]: https://opensource.com/sites/default/files/uploads/prometheus-request-rate.png (Measuring the request rate)
[28]: https://opensource.com/sites/default/files/uploads/prometheus-error-rate.png (Measuring the error rate)
[29]: https://opensource.com/sites/default/files/uploads/prometheus-slo-query.png (SLO query graph)

View File

@ -0,0 +1,221 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Generate Patching Compliance Report on CentOS/RHEL Systems)
[#]: via: (https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Generate Patching Compliance Report on CentOS/RHEL Systems
======
If you are running a large Linux environment, you may have already integrated your Red Hat systems with Satellite.
If so, there is a way to export the report from the Satellite server, so you don't have to worry about patching compliance reports.
But if you are running a small Red Hat environment without Satellite integration, or if you are running CentOS systems, these scripts will help you create such a report.
The patching compliance report is usually created once a month or once every three months, depending on the company's needs.
Add a cron job based on your needs to automate this.
These **[bash scripts][1]** are generally good to run with fewer than 50 systems, but there is no hard limit.
Keeping systems up to date is an important task for Linux administrators; it keeps your machines stable and secure.
The following articles may help you to learn more about installing security patches on Red Hat (RHEL) and CentOS systems.
* **[How to check available security updates on Red Hat (RHEL) and CentOS system][2]**
* **[Four ways to install security updates on Red Hat (RHEL) &amp; CentOS systems][3]**
* **[Two methods to check or list out installed security updates on Red Hat (RHEL) &amp; CentOS system][4]**
Four **[shell scripts][5]** are included in this tutorial; pick the one that suits you.
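All four scripts read the list of target hosts from /opt/scripts/server.txt (one hostname per line) and reach each host over non-interactive, key-based SSH, so set that up first. A minimal, hypothetical inventory file looks like this:
```
# cat /opt/scripts/server.txt
server1
server2
server3
server4
```
It is also worth confirming that password-less SSH works for every host before scheduling any report; a quick, hypothetical check:
```
# for server in `cat /opt/scripts/server.txt`; do ssh -o BatchMode=yes $server true && echo "$server OK" || echo "$server FAILED"; done
```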
### Method-1: Bash Script to Generate Patching Compliance Report for Security Errata on CentOS/RHEL Systems
This script allows you to create a report for security errata only. It sends the output via email in plain text.
```
# vi /opt/scripts/small-scripts/sec-errata.sh
#!/bin/sh
# Start with an empty report file
> /tmp/sec-up.txt
SUBJECT="Patching Reports on `date`"
MESSAGE="/tmp/sec-up.txt"
TO="[email protected]"
echo "+---------------+-----------------------------+" >> $MESSAGE
echo "| Server_Name | Security Errata |" >> $MESSAGE
echo "+---------------+-----------------------------+" >> $MESSAGE
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
echo "$server $sec" >> $MESSAGE
done
echo "+---------------------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata.sh
```
You get an output like the one below.
```
# cat /tmp/sec-up.txt
+---------------+-------------------+
| Server_Name | Security Errata |
+---------------+-------------------+
server1
server2
server3 21
server4
+-----------------------------------+
```
Add the following cronjob to get the patching compliance report once a month.
```
# crontab -e
@monthly /bin/bash /opt/scripts/small-scripts/sec-errata.sh
```
### Method-1a: Bash Script to Generate Patching Compliance Report for Security Errata on CentOS/RHEL Systems
This script allows you to generate a security errata patch compliance report. It sends the output via email with a CSV attachment.
```
# vi /opt/scripts/small-scripts/sec-errata-1.sh
#!/bin/sh
echo "Server Name, Security Errata" > /tmp/sec-up.csv
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
echo "$server, $sec" >> /tmp/sec-up.csv
done
echo "Patching Report for `date +"%B %Y"`" | mailx -s "Patching Report on `date`" -a /tmp/sec-up.csv [email protected]
rm /tmp/sec-up.csv
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-1.sh
```
You get an output like the one below.
![][6]
### Method-2: Bash Script to Generate Patching Compliance Report for Security Errata, Bugfix, and Enhancement on CentOS/RHEL Systems
This script allows you to generate patching compliance reports for Security Errata, Bugfix, and Enhancement updates. It sends the output via email in plain text.
```
# vi /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
#!/bin/sh
# Start with an empty report file
> /tmp/sec-up.txt
SUBJECT="Patching Reports on `date`"
MESSAGE="/tmp/sec-up.txt"
TO="[email protected]"
echo "+---------------+-------------------+--------+---------------------+" >> $MESSAGE
echo "| Server_Name | Security Errata | Bugfix | Enhancement |" >> $MESSAGE
echo "+---------------+-------------------+--------+---------------------+" >> $MESSAGE
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
bug=`ssh $server yum updateinfo summary | grep 'Bugfix' | tail -1 | awk '{print $1}'`
enhance=`ssh $server yum updateinfo summary | grep 'Enhancement' | tail -1 | awk '{print $1}'`
echo "$server $sec $bug $enhance" >> $MESSAGE
done
echo "+------------------------------------------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
```
You get an output like the one below.
```
# cat /tmp/sec-up.txt
+---------------+-------------------+--------+---------------------+
| Server_Name | Security Errata | Bugfix | Enhancement |
+---------------+-------------------+--------+---------------------+
server01 16
server02 5 16
server03 21 266 20
server04 16
+------------------------------------------------------------------+
```
Add the following cron job to get the patching compliance report once every three months. This script is scheduled to run on the 1st of January, April, July, and October.
```
# crontab -e
0 0 01 */3 * /bin/bash /opt/scripts/small-scripts/sec-errata-bugfix-enhancement.sh
```
### Method-2a: Bash Script to Generate Patching Compliance Report for Security Errata, Bugfix, and Enhancement on CentOS/RHEL Systems
This script allows you to generate patching compliance reports for Security Errata, Bugfix, and Enhancement updates. It sends the output via email with a CSV attachment.
```
# vi /opt/scripts/small-scripts/sec-errata-bugfix-enhancement-1.sh
#!/bin/sh
echo "Server Name, Security Errata,Bugfix,Enhancement" > /tmp/sec-up.csv
for server in `more /opt/scripts/server.txt`
do
sec=`ssh $server yum updateinfo summary | grep 'Security' | grep -vE 'Important|Moderate' | tail -1 | awk '{print $1}'`
bug=`ssh $server yum updateinfo summary | grep 'Bugfix' | tail -1 | awk '{print $1}'`
enhance=`ssh $server yum updateinfo summary | grep 'Enhancement' | tail -1 | awk '{print $1}'`
echo "$server,$sec,$bug,$enhance" >> /tmp/sec-up.csv
done
echo "Patching Report for `date +"%B %Y"`" | mailx -s "Patching Report on `date`" -a /tmp/sec-up.csv [email protected]
rm /tmp/sec-up.csv
```
Run the script file once you have added the above script.
```
# sh /opt/scripts/small-scripts/sec-errata-bugfix-enhancement-1.sh
```
You get an output like the one below.
![][6]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bash-script-to-generate-patching-compliance-report-on-centos-rhel-systems/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/bash-script/
[2]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
[3]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
[4]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/
[5]: https://www.2daygeek.com/category/shell-script/
[6]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7

View File

@ -0,0 +1,241 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Schedule and Automate tasks in Linux using Cron Jobs)
[#]: via: (https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Schedule and Automate tasks in Linux using Cron Jobs
======
Sometimes, you may have tasks that need to be performed on a regular basis or at certain predefined intervals. Such tasks include backing up databases, updating the system, performing periodic reboots and so on. Such tasks are referred to as **cron jobs**. Cron jobs are used for **automation of tasks** that come in handy and help in simplifying the execution of repetitive and sometimes mundane tasks. **Cron** is a daemon that allows you to schedule these jobs which are then carried out at specified intervals. In this tutorial, you will learn how to schedule jobs using cron jobs.
[![Schedule -tasks-in-Linux-using cron][1]][2]
### The Crontab file
A crontab file, also known as a **cron table**, is a simple text file that contains rules or commands that specify the time interval of execution of a task. There are two categories of crontab files:
**1)  System-wide crontab file**
These are usually used by Linux services & critical applications requiring root privileges. The system crontab file is located at **/etc/crontab** and can only be accessed and edited by the root user. It's usually used for the configuration of system-wide daemons. The crontab file looks as shown:
[![etc-crontab-linux][1]][3]
**2) User-created crontab files**
Linux users can also create their own cron jobs with the help of the crontab command. The cron jobs created will run as the user who created them.
All cron jobs are stored in /var/spool/cron (for RHEL and CentOS distros) and /var/spool/cron/crontabs (for Debian and Ubuntu distros), and the cron jobs are listed under the username of the user that created them.
The **cron daemon** runs silently in the background, checking the **/etc/crontab** file and the **/var/spool/cron** and **/etc/cron.d/** directories.
The **crontab** command is used for editing cron files. Let us take a look at the anatomy of a crontab file.
### The anatomy of a crontab file
Before we go further, it's important that we first explore what a crontab file looks like. The basic syntax for a crontab file comprises 5 columns represented by asterisks, followed by the command to be carried out.
*    *    *    *    *    command
This format can also be represented as shown below:
m h d moy dow command
OR
m h d moy dow /path/to/script
Let's expound on each entry:
* **m**: This represents minutes. It's specified from 0 to 59
* **h**: This denotes the hour, specified from 0 to 23
* **d**: This represents the day of the month, specified between 1 and 31
* **moy**: This is the month of the year. It's specified between 1 and 12
* **dow**: This is the day of the week. It's specified between 0 and 6, where 0 = Sunday
* **Command**: This is the command to be executed, e.g. a backup command, reboot, or copy
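Putting these fields together, a short example entry (the script path is hypothetical) that runs a backup script at 2:30 am every Monday would look like this:
```
# m  h  d  moy  dow  command
30   2  *  *    1    /opt/scripts/backup.sh
```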
### Managing cron jobs
Having looked at the architecture of a crontab file, let's see how you can create, edit and delete cron jobs.
**Creating cron jobs**
To create or edit a cron job as the root user, run the command
# crontab -e
To create a cron job or schedule a task as another user, use the syntax
# crontab -u username -e
For instance, to run a cron job as user Pradeep, issue the command:
# crontab -u Pradeep -e
If there is no preexisting crontab file, then you will get a blank text document. If a crontab file already exists, the -e option allows you to edit it.
**Listing crontab files**
To view the cron jobs that have been created, simply pass the -l option as shown
# crontab -l
**Deleting a crontab file**
To delete a single cron job, simply run crontab -e, delete the line of the cron job that you want to remove, and save the file.
To remove all cron jobs, run the command:
# crontab -r
That said, let's have a look at different ways that you can schedule tasks.
### Crontab examples in scheduling tasks
Scripts run by cron jobs usually begin with a shebang header, as shown:
#!/bin/bash
This indicates the shell you are using, which, for this case, is bash shell.
Next, specify the interval at which you want to schedule the tasks using the cron job entries we specified earlier on.
To reboot a system daily at 12:30 pm, use the syntax:
30  12 *  *  * /sbin/reboot
To schedule the reboot at 4:00 am use the syntax:
0  4  *  *  *  /sbin/reboot
**NOTE:**  The asterisk * is used to match all records
To run a script twice every day, for example, 4:00 am and 4:00 pm, use the syntax.
0  4,16  *  *  *  /path/to/script
To schedule a cron job to run every Friday at 5:00 pm, use the syntax:
0  17  *  *  Fri  /path/to/script
OR
0  17  *  *  5  /path/to/script
If you wish to run your cron job every 30 minutes then use:
*/30  *  *  *  * /path/to/script
To schedule cron to run every 5 hours, run
0  */5  *  *  *  /path/to/script
To run a script on selected days, for example, Wednesday and Friday at 6.00 pm execute:
0  18  *  *  wed,fri  /path/to/script
To schedule multiple tasks to use a single cron job, separate the tasks using a semicolon for example:
*  *  *  *  *  /path/to/script1 ; /path/to/script2
### Using special strings to save time on writing cron jobs
Some of the cron jobs can easily be configured using special strings that correspond to certain time intervals. For example,
1)  @hourly timestamp corresponds to  0 * * * *
It will execute a task in the first minute of every hour.
@hourly /path/to/script
2) @daily timestamp is equivalent to  0 0 * * *
It executes a task in the first minute of every day (midnight). It comes in handy when executing daily jobs.
  @daily /path/to/script
3) @weekly timestamp is the equivalent of 0 0 * * 0
It executes a cron job in the first minute of every week, where the week starts on Sunday.
 @weekly /path/to/script
4) @monthly is similar to the entry 0 0 1 * *
It carries out a task in the first minute of the first day of the month.
  @monthly /path/to/script
5) @yearly corresponds to 0 0 1 1 *
It executes a task in the first minute of every year and is useful in sending New Year greetings 🙂
@yearly /path/to/script
### Crontab Restrictions
As a Linux user, you can control who has the right to use the crontab command. This is possible using the **/etc/cron.deny** and **/etc/cron.allow** files. By default, only the /etc/cron.deny file exists and does not contain any entries. To restrict a user from using the crontab utility, simply add the user's username to the file. When a user is added to this file and tries to run the crontab command, he or she will encounter the error below.
![restricted-cron-user][1]
To allow the user to continue using the crontab utility,  simply remove the username from the /etc/cron.deny file.
If /etc/cron.allow file is present, then only the users listed in the file can access and use the crontab utility.
If neither file exists, then only the root user will have privileges to use the crontab command.
### Backing up crontab entries
It's always advisable to back up your crontab entries. To do so, use the syntax:
# crontab -l > /path/to/file.txt
For example,
```
# crontab -l > /home/james/backup.txt
```
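To restore the saved entries later, you can feed the backup file back to the crontab command; note that this replaces the current crontab with the contents of the file:
```
# crontab /home/james/backup.txt
```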
**Checking cron logs**
Cron logs are stored in /var/log/cron file. To view the cron logs run the command:
```
# cat /var/log/cron
```
![view-cron-log-files-linux][1]
To view live logs, use the tail command as shown:
```
# tail -f /var/log/cron
```
![view-live-cron-logs][1]
**Conclusion**
In this guide, you learned how to create cron jobs to automate repetitive tasks, how to back them up, and how to view cron logs. We hope that this article provided useful insights with regard to cron jobs. Please don't hesitate to share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/schedule-automate-tasks-linux-cron-jobs/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/Schedule-tasks-in-Linux-using-cron.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/11/etc-crontab-linux.png

View File

@ -0,0 +1,309 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guide to open source for microservices)
[#]: via: (https://opensource.com/article/19/11/microservices-cheat-sheet)
[#]: author: (Girish Managoli https://opensource.com/users/gammay)
A guide to open source for microservices
======
Build and manage high-scale microservices networks and solve the
challenges of running services without fault that scale based on
business demand.
![Text editor on a browser, in blue][1]
Microservices—applications broken down into smaller, composable pieces that work together—are getting as much attention as the hottest new restaurant in town. (If you're not yet familiar, dive into [What Are Microservices][2] before continuing here.)
However, if you have moved on from "Hello, World" and running a simple handful of microservices, and are building hundreds of microservices and running thousands of instances, you know there is nothing "micro" about them. You want your instances to increase when users increase and decrease when users decrease. You want to distribute requests effectively between instances. You want to build and run your services intelligently. You need a clear view of the service instances that are running or going down. How can you manage all of this complexity?
This article looks at some of the key terminologies in the microservices ecosystem and some of the open source software available to build out a microservices architecture. The focus is on building and managing high-scale microservices networks and solving the challenges of running services without fault and that scale correctly based on business demand.
Here is a wholesome, lavish spread of open source cuisine that is sure to be gastronomically, "_microservically"_ appetizing. I'm sure I've overlooked some open source applications in this area; please let me know about them in the comments.
**[Download the PDF version of this cheat sheet [here][3]]**
### Containers
The right way to deploy applications is in [containers][4]. Briefly, a container is a miniature virtual server packed with the software required to run an application. The container pack is small, smart, and easy to deploy and maintain. And deploying your application in a container is clever. You can deploy as many instances as you need and scale up or down as needed to meet the current load.
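As a minimal sketch of that workflow (the image name, tag, and ports here are hypothetical), you build an image once and then run as many container instances of it as the current load requires:
```
$ docker build -t myapp:1.0 .
$ docker run -d --name myapp-1 -p 8081:8080 myapp:1.0
$ docker run -d --name myapp-2 -p 8082:8080 myapp:1.0
$ docker run -d --name myapp-3 -p 8083:8080 myapp:1.0
```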
**Open source containers**
**Software** | **Code** | **License**
---|---|---
[rkt][5] | [GitHub][6] | Apache License 2.0
[Docker][7] | [GitHub][8] | Apache License 2.0
[FreeBSD Jail][9] | [GitHub][10] | FreeBSD License
[LXC][11] | [GitHub][12] | GNU LGPL v.2.1
[OpenVZ][13] | [GitHub][14] | GNU General Public License v2.0
### Container orchestrators
If you have hundreds or thousands of service instances deployed on containers, you need a good way to manage them. Container orchestration is the right solution for deploying and managing all of these containers. Orchestrators can move across; scale up, down, or out; manage higher or lower loads; regulate added, removed, and dead containers; and much more.
**Open source container orchestrators**
**Software** | **Code** | **License**
---|---|---
[Kubernetes][15] | [GitHub][16] | Apache License 2.0
[OpenShift][17] | [GitHub][18] | Apache License 2.0
[Nomad][19] | [GitHub][20] | Mozilla Public License 2.0
[LXD][21] | [GitHub][22] | Apache License 2.0
### API gateways
An API gateway is a watchman that controls and monitors API calls to your application. An API gateway has three key roles:
1. **API data and management:** API listing, API subscription, API documentation, community support
2. **API viewpoint and billing:** Analytics, metrics, billing
3. **API control and security:** Subscription caller management, rate control, blocking, data conversion, production and sandbox support, key management
API gateways are usually multi-tenant solutions to deploy multiple applications on the same gateway.
**Open source API gateways**
Not all of the following API gateways support every function mentioned above, so pick and choose depending on your needs.
**Software** | **Code** | **License**
---|---|---
[3scale][23] | [GitHub][24] | Apache License 2.0
[API Umbrella][25] | [GitHub][26] | MIT License
[Apigee][27] | [GitHub][28] | Apache License 2.0
[Apiman][29] | [GitHub][30] | Apache License 2.0
[DreamFactory][31] | [GitHub][32] | Apache License 2.0
[Fusio][33] | [GitHub][34] | GNU Affero General Public License v3.0
[Gravitee][35] | [GitHub][36] | Apache License 2.0
[Kong][37] | [GitHub][38] | Apache License 2.0
[KrakenD][39] | [GitHub][40] | Apache License 2.0
[Tyk][41] | [GitHub][42] | Mozilla Public License 2.0
### CI/CD
Continuous integration (CI) and continuous deployment (CD; it may also stand for continuous delivery) are the net sum of processes to build and run your processes. [CI/CD][43] is a philosophy that ensures your microservices are built and run correctly to meet users' expectations. Automation is the critical CI/CD factor that makes the build and run process easy and structured. CI's primary processes are build and test, and CD's are deploy and monitor.
All of the CI/CD tools and platforms listed below are open source. I don't include SaaS platforms that are free for hosting open source. GitHub also isn't on the list because it is not open source and does not have built-in CI/CD; it uses third-party CI/CD product integrations instead. GitLab is open source and has a built-in CI/CD service, so it is on this list.
**Open source CI/CD tools**
**Software** | **Code** | **License**
---|---|---
[Jenkins][44] | [GitHub][45] | MIT License
[GitLab][46] | [GitLab][47] | MIT License
[Buildbot][48] | [GitHub][49] | GNU General Public License v2.0
[Concourse][50] | [GitHub][51] | Apache License 2.0
[GoCD][52] | [GitHub][53] | Apache License 2.0
[Hudson][54] | [GitHub][55] | MIT License
[Spinnaker][56] | [GitHub][57] | Apache License 2.0
### Load balancers
When your number of requests scale, you must deploy multiple instances of your application and share requests across those instances. The application that manages the requests between instances is called a load balancer. A load balancer can be configured to distribute requests based on round-robin scheduling, IP routing, or another algorithm. The load balancer automatically manages request distributions when new instances are added (to support higher load) or decommissioned (when load scales down). Session persistence is another load-balancing feature that redirects new requests to the previous instance when needed (for example, to maintain a session). There are hardware- and software-based load balancers.
**Open source load balancers**
**Software** | **Code** | **License**
---|---|---
[HAProxy][58] | [GitHub][59] | HAPROXY's license / GPL v2.0
[Apache modules][60] (mod_athena, mod_proxy_balancer) | [SourceForge][61], [Code.Google][62], or [GitHub][63] | Apache License 2.0
[Balance][64] | [SourceForge][65] | GNU General Public License v2.0
[Distributor][66] | [SourceForge][67] | GNU General Public License v2.0
[GitHub Load Balancer (GLB) Director][68] | [GitHub][69] | BSD 3-Clause License
[Neutrino][70] | [GitHub][71] | Apache License 2.0
[OpenLoBa][72] | [SourceForge][73] | Not known
[Pen][74] | [GitHub][75] | GNU General Public License, v2.0
[Seesaw][76] | [GitHub][77] | Apache License 2.0
[Synapse][78] | [GitHub][79] | Apache License 2.0
[Traefik][80] | [GitHub][81] | MIT License
### Service registry and service discovery
When several hundreds or thousands of service instances are deployed and talking to each other, how do requester services know how to connect the right responder services, given that deployment points are dynamic as services are scaled in and out? A service registry and service discovery service solves this problem. These systems are essentially key-value stores that maintain configuration information and naming and provide distributed synchronization.
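As a small illustration of the key-value model these systems share, registering and then discovering a service endpoint with etcd's command-line client might look like this (the key layout and address are hypothetical):
```
$ etcdctl put /services/payments/instance-1 "10.0.3.17:8443"
OK
$ etcdctl get --prefix /services/payments/
/services/payments/instance-1
10.0.3.17:8443
```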
**Open source service registry and discovery services**
**Software** | **Code** | **License**
---|---|---
[Baker Street][82] | [GitHub][83] | Apache License 2.0
[Consul][84] | [GitHub][85] | Mozilla Public License 2.0
[etcd][86] | [GitHub][87] | Apache License 2.0
[Registrator][88] | [GitHub][89] | MIT License
[Serf][90] | [GitHub][91] | Mozilla Public License 2.0
[ZooKeeper][92] | [GitHub][93] | Apache License 2.0
### Monitoring
When your microservices and their instances cater to users' needs, you need to maintain a good view of their performance. Monitoring tools to the rescue!
Open source monitoring tools and software come in numerous flavors, some barely better than [top][94]. Other options include OS-specific; enterprise-grade; tool collections that provide complete integration; do-one-thing tools that merely monitor or report or visualize and integrate with third-party tools; and tools that monitor specific or multiple components such as networks, log files, web requests, and databases. Monitoring tools can be web-based or standalone tools, and notification options range from passive reporting to active alerting.
Choose one or more of these tools to enjoy a chewy crunch of your microservices network.
**Open source monitoring software**
**Software** | **Code** | **License**
---|---|---
[OpenNMS][95] | [GitHub][96] | GNU Affero General Public License
[Grafana][97] | [GitHub][98] | Apache License 2.0
[Graphite][99] | [GitHub][100] | Apache License 2.0
[Icinga][101] | [GitHub][102] | GNU General Public License v2.0
[InfluxDB][103] | [GitHub][104] | MIT License
[LibreNMS][105] | [GitHub][106] | GNU General Public License v3.0
[Naemon][107] | [GitHub][108] | GNU General Public License v2.0
[Nagios][109] | [GitHub][110] | GNU General Public License v2.0
[ntop][111] | [GitHub][112] | GNU General Public License v3.0
[ELK][113] | [GitHub][114] | Apache License 2.0
[Prometheus][115] | [GitHub][116] | Apache License 2.0
[Sensu][117] | [GitHub][118] | MIT License
[Zabbix][119] | [Self-hosted repo][120] | GNU General Public License v2.0
[Zenoss][121] | [SourceForge][122] | GNU General Public License v2.0
### The right ingredients
Pure open source solutions can offer the right ingredients for deploying and running microservices at high scale. I hope you find them to be relishing, gratifying, satiating, and most of all, _microservicey_!
### Download the [Microservices cheat sheet][3]. 
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/microservices-cheat-sheet
作者:[Girish Managoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gammay
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png?itok=lcf-m6N7 (Text editor on a browser, in blue)
[2]: https://opensource.com/resources/what-are-microservices
[3]: https://opensource.com/content/microservices-cheat-sheet
[4]: https://opensource.com/resources/what-are-linux-containers
[5]: https://coreos.com/rkt/
[6]: https://github.com/rkt/rkt/
[7]: https://www.docker.com/
[8]: https://github.com/docker
[9]: https://www.freebsd.org/doc/handbook/jails-build.html
[10]: https://github.com/freebsd/freebsd
[11]: https://linuxcontainers.org/lxc/
[12]: https://github.com/lxc/lxc
[13]: https://openvz.org/
[14]: https://github.com/OpenVZ
[15]: https://kubernetes.io/
[16]: https://github.com/kubernetes/kubernetes
[17]: https://www.openshift.com/
[18]: https://github.com/openshift
[19]: https://www.nomadproject.io/
[20]: https://github.com/hashicorp/nomad
[21]: https://linuxcontainers.org/lxd/introduction/
[22]: https://github.com/lxc/lxd
[23]: https://www.redhat.com/en/technologies/jboss-middleware/3scale
[24]: https://github.com/3scale/APIcast
[25]: https://apiumbrella.io/
[26]: https://github.com/NREL/api-umbrella
[27]: https://cloud.google.com/apigee/
[28]: https://github.com/apigee/microgateway-core
[29]: http://www.apiman.io/
[30]: https://github.com/apiman/apiman
[31]: https://www.dreamfactory.com/
[32]: https://github.com/dreamfactorysoftware/dreamfactory
[33]: https://www.fusio-project.org/
[34]: https://github.com/apioo/fusio
[35]: https://gravitee.io/
[36]: https://github.com/gravitee-io/gravitee-gateway
[37]: https://konghq.com/kong/
[38]: https://github.com/Kong/
[39]: https://www.krakend.io/
[40]: https://github.com/devopsfaith/krakend
[41]: https://tyk.io/
[42]: https://github.com/TykTechnologies/tyk
[43]: https://opensource.com/article/18/8/what-cicd
[44]: https://jenkins.io/
[45]: https://github.com/jenkinsci/jenkins
[46]: https://gitlab.com/
[47]: https://gitlab.com/gitlab-org
[48]: https://buildbot.net/
[49]: https://github.com/buildbot/buildbot
[50]: https://concourse-ci.org/
[51]: https://github.com/concourse/concourse
[52]: https://www.gocd.org/
[53]: https://github.com/gocd/gocd
[54]: http://hudson-ci.org/
[55]: https://github.com/hudson
[56]: https://www.spinnaker.io/
[57]: https://github.com/spinnaker/spinnaker
[58]: http://www.haproxy.org/
[59]: https://github.com/haproxy/haproxy
[60]: https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html
[61]: http://ath.sourceforge.net/
[62]: https://code.google.com/archive/p/ath/
[63]: https://github.com/omnigroup/Apache/blob/master/httpd/modules/proxy/mod_proxy_balancer.c
[64]: https://www.inlab.net/balance/
[65]: https://sourceforge.net/projects/balance/
[66]: http://distributor.sourceforge.net/
[67]: https://sourceforge.net/projects/distributor/files/
[68]: https://github.blog/2016-09-22-introducing-glb/
[69]: https://github.com/github/glb-director
[70]: https://neutrinoslb.github.io/
[71]: https://github.com/eBay/Neutrino
[72]: http://openloba.sourceforge.net/
[73]: https://sourceforge.net/p/openloba/code/HEAD/tree/
[74]: http://siag.nu/pen/
[75]: https://github.com/UlricE/pen
[76]: https://opensource.google.com/projects/seesaw
[77]: https://github.com/google/seesaw
[78]: https://synapse.apache.org/
[79]: https://github.com/apache/synapse/tree/master
[80]: https://traefik.io/
[81]: https://github.com/containous/traefik
[82]: http://bakerstreet.io/
[83]: https://github.com/datawire/bakerstreet
[84]: https://www.consul.io/
[85]: https://github.com/hashicorp/consul
[86]: https://etcd.io/
[87]: https://github.com/etcd-io/etcd
[88]: https://gliderlabs.github.io/registrator/latest/
[89]: https://github.com/gliderlabs/registrator
[90]: https://www.serf.io/
[91]: https://github.com/hashicorp/serf
[92]: https://zookeeper.apache.org/
[93]: https://github.com/apache/zookeeper
[94]: https://en.wikipedia.org/wiki/Top_(software)
[95]: https://www.opennms.com/
[96]: https://github.com/OpenNMS/opennms
[97]: https://grafana.com
[98]: https://github.com/grafana/grafana
[99]: https://graphiteapp.org/
[100]: https://github.com/graphite-project
[101]: https://icinga.com/
[102]: https://github.com/icinga/
[103]: https://www.influxdata.com/
[104]: https://github.com/influxdata/influxdb
[105]: https://www.librenms.org/
[106]: https://github.com/librenms/librenms
[107]: http://www.naemon.org/
[108]: https://github.com/naemon
[109]: https://www.nagios.org/
[110]: https://github.com/NagiosEnterprises/nagioscore
[111]: https://www.ntop.org/
[112]: https://github.com/ntop/ntopng
[113]: https://www.elastic.co/
[114]: https://github.com/elastic
[115]: https://prometheus.io/
[116]: https://github.com/prometheus/prometheus
[117]: https://sensu.io/
[118]: https://github.com/sensu
[119]: https://www.zabbix.com/
[120]: https://git.zabbix.com/projects/ZBX/repos/zabbix/browse
[121]: https://www.zenoss.com/
[122]: https://sourceforge.net/projects/zenoss/

View File

@ -0,0 +1,236 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Demystifying Kubernetes)
[#]: via: (https://opensourceforu.com/2019/11/demystifying-kubernetes/)
[#]: author: (Abhinav Nath Gupta https://opensourceforu.com/author/abhinav-gupta/)
Demystifying Kubernetes
======
[![][1]][2]
_Kubernetes is a production grade open source system for automating deployment, scaling, and the management of containerised applications. This article is about managing containers with Kubernetes._
Containers have become one of the latest buzzwords. But what does the term imply? Often associated with Docker, a container is defined as a standardised unit of software. Containers encapsulate the software and the environment required to run the software into a single unit that is easily shippable.
A container is a standard unit of software that packages the code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. The container does this by creating something called an image, which is akin to an ISO image. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application — code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and, in the case of Docker containers, images become containers when they run on a Docker engine. Containers isolate software from the environment and ensure that it works uniformly despite differences in instances across environments.
**What is container management?**
Container management is the process of organising, adding or replacing large numbers of software containers. Container management uses software to automate the process of creating, deploying and scaling containers. This gives rise to the need for container orchestration—a tool that automates the deployment, management, scaling, networking and availability of container based applications.
**Kubernetes**
Kubernetes is a portable, extensible, open source platform for managing containerised workloads and services, and it facilitates both configuration and automation. It was originally developed by Google. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Google open sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google had with running production workloads at scale, combined with best-of-breed ideas and practices from the community, as well as the usage of declarative syntax.
Some of the common terminologies associated with the Kubernetes ecosystem are listed below.
_**Pods:**_ A pod is the basic execution unit of a Kubernetes application: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod represents processes running on a Kubernetes cluster.
A pod encapsulates the running container, storage, network IP (unique) and commands that govern how the container should run. It represents the single unit of deployment within the Kubernetes ecosystem, a single instance of an application which might consist of one or many containers running with tight coupling and shared resources.
Pods in a Kubernetes cluster can be used in two main ways. The first is pods that run a single container. The one-container-per-pod model is the most common Kubernetes use case. The second method involves pods that run multiple containers that need to work together.
A pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources.
_**ReplicaSet:**_ The purpose of a ReplicaSet is to maintain a stable set of replica pods running at any given time. A ReplicaSet contains information about how many copies of a particular pod should be running. To create multiple pods to match the ReplicaSet criteria, Kubernetes uses the pod template. The link a ReplicaSet has to its pods is via the latter's metadata.ownerReferences field, which specifies which resource owns the current object.
_**Services:**_ Services are an abstraction to expose the functionality of a set of pods. With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives pods their own IP addresses and a single DNS name for a set of pods, and can load-balance across them.
One major problem that services solve is the integration of the front-end and back-end of a Web application. Since Kubernetes provides IP addresses behind the scenes to pods, when the latter are killed and resurrected, the IP addresses are changed. This creates a big problem on the front-end side to connect a given back-end IP address to the corresponding front-end IP address. Services solve this problem by providing an abstraction over the pods — something akin to a load balancer.
_**Volumes:**_ A Kubernetes volume has an explicit lifetime — the same as the pod that encloses it. Consequently, a volume outlives any container that runs within the pod and the data is preserved across container restarts. Of course, when a pod ceases to exist, the volume will cease to exist, too. Perhaps more important than this is that Kubernetes supports many types of volumes, and a pod can use any number of them simultaneously.
At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it and its contents are determined by the particular volume type used.
**Why Kubernetes?**
Containers are a good way to bundle and run applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if one container goes down, another needs to start. Wouldn't it be nice if this could be automated by a system?
That's where Kubernetes comes to the rescue! It provides a framework to run distributed systems resiliently. It takes care of scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
Kubernetes provides users with:
1. Service discovery and load balancing
2. Storage orchestration
3. Automated roll-outs and roll-backs
4. Automatic bin packing
5. Self-healing
6. Secret and configuration management
**What can Kubernetes do?**
In this section we will look at some code examples of how to use Kubernetes when building a Web application from scratch. We will create a simple back-end server using Flask in Python.
There are a few prerequisites for those who want to build a Web app from scratch. These are:
1. Basic understanding of Docker, Docker containers and Docker images. A quick refresher can be found at _<https://www.docker.com/sites/default/files/Docker_CheatSheet_08.09.2016_0.pdf>_.
2. Docker should be installed in the system.
3. Kubernetes should be installed in the system. Instructions on how to do so on a local machine can be found at _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
Now, create a simple directory, as shown in the code snippet below:
```
mkdir flask-kubernetes/app && cd flask-kubernetes/app
```
Next, inside the _flask-kubernetes/app_ directory, create a file called main.py, as shown in the code snippet below:
```
touch main.py
```
In the newly created _main.py,_ paste the following code:
```
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello from Kubernetes!"
if __name__ == "__main__":
app.run(host='0.0.0.0')
```
Install Flask in your local using the command below:
```
pip install Flask==0.10.1
```
After installing Flask, run the following command:
```
python main.py
```
This should run the Flask server locally on port 5000, which is the default port for the Flask app, and you can see the output Hello from Kubernetes! at <http://localhost:5000>.
Once the server is running locally, we will create a Docker image to be used by Kubernetes.
Create a file with the name Dockerfile and paste the following code snippet in it:
```
FROM python:3.7
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
```
The instructions in _Dockerfile_ are explained below:
1. Docker will fetch the Python 3.7 image from the Docker hub.
2. It will create an app directory in the image.
3. It will set an app as the working directory.
4. Copy the contents from the app directory in the host to the image app directory.
5. Expose Port 5000.
6. Finally, it will run the command to start the Flask server.
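Note that the _Dockerfile_ installs dependencies from a requirements.txt file that this tutorial has not created yet. A minimal one, pinning the same Flask version installed earlier, can be created inside _flask-kubernetes/app_ like this:
```
echo "Flask==0.10.1" > requirements.txt
```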
In the next step, we will create the Docker image, using the command given below:
```
docker build -f Dockerfile -t flask-kubernetes:latest .
```
After creating the Docker image, we can test it by running it locally using the following command:
```
docker run -p 5001:5000 flask-kubernetes
```
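To quickly confirm that the container responds, you can hit the mapped port from another terminal; you should see the same Hello from Kubernetes! message:
```
curl http://localhost:5001
```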
Once we are done testing it locally by running a container, we need to deploy this in Kubernetes.
We will first verify that Kubernetes is running using the _kubectl_ command. If there are no errors, then it is working. If there are errors, do refer to _<https://kubernetes.io/docs/setup/learning-environment/minikube/>_.
Next, let's create a deployment file. This is a yaml file containing the instructions for Kubernetes about how to create pods and services in a very declarative fashion. Since we have a Flask Web application, we will create a _deployment.yaml_ file with both the pods and services declarations inside it.
Create a file named deployment.yaml and add the following contents to it, before saving it:
```
apiVersion: v1
kind: Service
metadata:
  name: flask-kubernetes-service
spec:
  selector:
    app: flask-kubernetes
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-kubernetes
spec:
  replicas: 4
  selector:            # required by apps/v1 Deployments
    matchLabels:
      app: flask-kubernetes
  template:
    metadata:
      labels:
        app: flask-kubernetes
    spec:
      containers:
      - name: flask-kubernetes
        image: flask-kubernetes:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
```
Use _kubectl_ to send the _yaml_ file to Kubernetes by running the following command:
```
kubectl apply -f deployment.yaml
```
You can see the pods are running if you execute the following command:
```
kubectl get pods
```
Now navigate to _<http://localhost:6000>_, and you should see the Hello from Kubernetes! message.
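If the LoadBalancer service stays pending on a local Minikube cluster (which is common, since Minikube provides no cloud load balancer), you can ask Minikube for a reachable URL for the service instead:
```
minikube service flask-kubernetes-service --url
```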
Thats it! The application is now running in Kubernetes!
**What Kubernetes cannot do**
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. Kubernetes provides the building blocks for developer platforms, but preserves user choice and flexibility where it is important.
* Kubernetes does not limit the types of applications supported. If an application can run in a container, it should run great on Kubernetes.
* It does not deploy and build source code.
* It does not dictate logging, monitoring, or alerting solutions.
* It does not provide or mandate a configuration language/system. It provides a declarative API for everyones use.
* It does not provide or adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
![Avatar][3]
[Abhinav Nath Gupta][4]
The author is a software development engineer at Cleo Software India Pvt Ltd, Bengaluru. He is interested in cryptography, data security, cryptocurrency and cloud computing. He can be reached at [abhi.aec89@gmail.com][5].
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/demystifying-kubernetes/
作者:[Abhinav Nath Gupta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/abhinav-gupta/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?resize=696%2C457&ssl=1 (Gear kubernetes)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Gear-kubernetes.jpg?fit=800%2C525&ssl=1
[3]: https://secure.gravatar.com/avatar/f65917facf5f28936663731fedf545c4?s=100&r=g
[4]: https://opensourceforu.com/author/abhinav-gupta/
[5]: mailto:abhi.aec89@gmail.com
[6]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to add a user to your Linux desktop)
[#]: via: (https://opensource.com/article/19/11/add-user-gui-linux)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
How to add a user to your Linux desktop
======
It's easy to manage users from a graphical interface, whether during
installation or on the desktop.
![Team of people around the world][1]
Adding a user is one of the first things you do on a new computer system. And you often have to manage users throughout the computer's lifespan.
My article on the [**useradd** command][2] provides a deeper understanding of user management on Linux. Useradd is a command-line tool, but you can also manage users graphically on Linux. That's the topic of this article.
### Add a user during Linux installation
Most Linux distributions provide a step for creating a user during installation. For example, the Fedora 30 installer, Anaconda, creates the standard _root_ user and one other local user account. When you reach the **Configuration** screen during installation, click **User Creation** under **User Settings**.
![Fedora Anaconda Installer - Add a user][3]
On the Create User screen, enter the user's details: **Full name**, **User name**, and **Password**. You can also choose whether to make the user an administrator.
![Create a user during installation][4]
The **Advanced** button opens the **Advanced User Configuration** screen. Here, you can specify the path to the home directory and the user and group IDs if you need something besides the default. You can also type a list of secondary groups that the user will be placed into.
![Advanced user configuration][5]
### Add a user on the Linux desktop
#### GNOME
Many Linux distributions use the GNOME desktop. The following screenshots are from Red Hat Enterprise Linux 8.0, but the process is similar in other distros like Fedora, Ubuntu, or Debian.
Start by opening **Settings**. Then go to **Details**, select **Users**, click **Unlock**, and enter your password (unless you are already logged in as root). This will replace the **Unlock** button with an **Add User** button.
![GNOME user settings][6]
Now, you can add a user by clicking **Add User**, then selecting the account **Type** and the details (**Name** and **Password**).
In the screenshot below, a user name has been entered, and settings are left as default. I did not have to enter the **Username**; it was created automatically as I typed in the **Full Name** field. You can still modify it though if the autocompletion is not to your liking.
![GNOME settings - add user][7]
This creates a standard account for a user named Sonny. Sonny will need to provide a password the first time he or she logs in.
Next, the users will be displayed. Each user can be selected and customized or removed from this screen. For instance, you might want to choose an avatar image or set the default language.
![GNOME new user][8]
#### KDE
KDE is another popular Linux desktop environment. Below is a screenshot of KDE Plasma on Fedora 30. You can see that adding a user in KDE is quite similar to doing it in GNOME.
![KDE settings - add user][9]
### Conclusion
Other desktop environments and window managers in addition to GNOME and KDE include graphical user management tools. Adding a user graphically in Linux is quick and simple, whether you do it at installation or afterward.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/add-user-gui-linux
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world)
[2]: https://opensource.com/article/19/10/linux-useradd-command
[3]: https://opensource.com/sites/default/files/uploads/screenshot_fedora30_anaconda2.png (Fedora Anaconda Installer - Add a user)
[4]: https://opensource.com/sites/default/files/uploads/screenshot_fedora30_anaconda3.png (Create a user during installation)
[5]: https://opensource.com/sites/default/files/uploads/screenshot_fedora30_anaconda4.png (Advanced user configuration)
[6]: https://opensource.com/sites/default/files/uploads/gnome_settings_user_unlock.png (GNOME user settings)
[7]: https://opensource.com/sites/default/files/uploads/gnome_settings_adding_user.png (GNOME settings - add user)
[8]: https://opensource.com/sites/default/files/uploads/gnome_settings_user_new.png (GNOME new user)
[9]: https://opensource.com/sites/default/files/uploads/kde_settings_adding_user.png (KDE settings - add user)

View File

@ -0,0 +1,260 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tuning your bash or zsh shell on Fedora Workstation and Silverblue)
[#]: via: (https://fedoramagazine.org/tuning-your-bash-or-zsh-shell-in-workstation-and-silverblue/)
[#]: author: (George Luiz Maluf https://fedoramagazine.org/author/georgelmaluf/)
Tuning your bash or zsh shell on Fedora Workstation and Silverblue
======
![][1]
This article shows you how to set up some powerful tools in your command line interpreter (CLI) shell on Fedora. If you use _bash_ (the default) or _zsh_, Fedora lets you easily set up these tools.
### Requirements
Some installed packages are required. On Workstation, run the following command:
```
sudo dnf install git wget curl ruby ruby-devel zsh util-linux-user redhat-rpm-config gcc gcc-c++ make
```
On Silverblue run:
```
sudo rpm-ostree install git wget curl ruby ruby-devel zsh util-linux-user redhat-rpm-config gcc gcc-c++ make
```
**Note**: On Silverblue you need to restart before proceeding.
### Fonts
You can give your terminal a new look by installing new fonts. Why not fonts that display characters and icons together?
##### Nerd-Fonts
Open a new terminal and type the following commands:
```
git clone https://github.com/ryanoasis/nerd-fonts ~/.nerd-fonts
cd ~/.nerd-fonts
sudo ./install.sh
```
##### Awesome-Fonts
On Workstation, install using the following command:
```
sudo dnf install fontawesome-fonts
```
On Silverblue, type:
```
sudo rpm-ostree install fontawesome-fonts
```
### Powerline
Powerline is a statusline plugin for vim, and provides statuslines and prompts for several other applications, including bash, zsh, tmux, i3, Awesome, IPython and Qtile.
Fedora Magazine previously posted an [article about powerline][2] that includes instructions on how to install it in the vim editor. You can also find more information on the official [documentation site][3].
#### Installation
To install powerline utility on Fedora Workstation, open a new terminal and run:
```
sudo dnf install powerline vim-powerline tmux-powerline powerline-fonts
```
On Silverblue, the command changes to:
```
sudo rpm-ostree install powerline vim-powerline tmux-powerline powerline-fonts
```
**Note**: On Silverblue, you need to restart before proceeding.
#### Activating powerline
To make the powerline active by default, place the code below at the end of your _~/.bashrc_ file
```
if [ -f `which powerline-daemon` ]; then
powerline-daemon -q
POWERLINE_BASH_CONTINUATION=1
POWERLINE_BASH_SELECT=1
. /usr/share/powerline/bash/powerline.sh
fi
```
Finally, close the terminal and open a new one. It will look like this:
![][4]
### Oh-My-Zsh
[Oh-My-Zsh][5] is a framework for managing your Zsh configuration. It comes bundled with helpful functions, plugins, and themes. To learn how to set Zsh as your default shell, see this [article][6].
#### Installation
Type this in the terminal:
```
sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
```
Alternatively, you can type this:
```
sh -c "$(wget https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)"
```
At the end, you see the terminal like this:
![][7]
Congratulations, Oh-my-zsh is installed.
#### Themes
Once installed, you can select your theme. I prefer to use Powerlevel10k. One advantage is that it is 100 times faster than the powerlevel9k theme. To install it, run this line:
```
git clone https://github.com/romkatv/powerlevel10k.git ~/.oh-my-zsh/themes/powerlevel10k
```
Then set ZSH_THEME in your _~/.zshrc_ file:
```
ZSH_THEME=powerlevel10k/powerlevel10k
```
Close the terminal. When you open the terminal again, the Powerlevel10k configuration wizard will ask you a few questions to configure your prompt properly.
![][8]
After finishing the Powerlevel10k configuration wizard, your prompt will look like this:
![][9]
If you don't like it, you can run the Powerlevel10k wizard again at any time with the command _p10k configure_.
#### Enable plug-ins
Plug-ins are stored in the _~/.oh-my-zsh/plugins_ folder. You can visit the Oh-My-Zsh site for more information. To activate a plug-in, you need to edit your _~/.zshrc_ file. Enabling a plug-in means you get a series of aliases, shortcuts, and functions for a specific tool.
For example, to enable the firewalld and git plugins, first edit ~/.zshrc:
```
plugins=(firewalld git)
```
**Note**: use blank spaces to separate the plug-in names in the list.
Then reload the configuration
```
source ~/.zshrc
```
To see the created aliases, use the command:
```
alias | grep firewall
```
![][10]
#### Additional configuration
I suggest installing the zsh-syntax-highlighting and zsh-autosuggestions plug-ins.
```
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
```
Add them to your plug-ins list in your file _~/.zshrc_
```
plugins=( [plugins...] zsh-syntax-highlighting zsh-autosuggestions)
```
Reload the configuration
```
source ~/.zshrc
```
See the results:
![][11]
### Colored folders and icons
Colorls is a Ruby gem that beautifies the terminal's ls command with colors and font-awesome icons. You can visit the official [site][12] for more information.
Because it's a Ruby gem, just follow this simple step:
```
sudo gem install colorls
```
To keep up to date, just do:
```
sudo gem update colorls
```
To avoid typing colorls every time, you can add aliases to your _~/.bashrc_ or _~/.zshrc_.
```
alias ll='colorls -lA --sd --gs --group-directories-first'
alias ls='colorls --group-directories-first'
```
You can also enable tab completion for colorls flags by entering the following line at the end of your shell configuration:
```
source $(dirname $(gem which colorls))/tab_complete.sh
```
Reload it and see what happens:
![][13]
![][14]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/tuning-your-bash-or-zsh-shell-in-workstation-and-silverblue/
作者:[George Luiz Maluf][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/georgelmaluf/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/tuning-shell-816x345.jpg
[2]: https://fedoramagazine.org/add-power-terminal-powerline/
[3]: https://powerline.readthedocs.io/en/latest/
[4]: https://fedoramagazine.org/wp-content/uploads/2019/10/terminal_bash_powerline.png
[5]: https://ohmyz.sh
[6]: https://fedoramagazine.org/set-zsh-fedora-system/
[7]: https://fedoramagazine.org/wp-content/uploads/2019/10/oh-my-zsh.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/10/powerlevel10k_config_wizard.png
[9]: https://fedoramagazine.org/wp-content/uploads/2019/10/powerlevel10k.png
[10]: https://fedoramagazine.org/wp-content/uploads/2019/10/aliases_plugin.png
[11]: https://fedoramagazine.org/wp-content/uploads/2019/10/sintax.png
[12]: https://github.com/athityakumar/colorls
[13]: https://fedoramagazine.org/wp-content/uploads/2019/10/ls-1024x495.png
[14]: https://fedoramagazine.org/wp-content/uploads/2019/10/ll-1024x495.png

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why you don't have to be afraid of Kubernetes)
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
为什么你不必害怕 Kubernetes
======
Kubernetes 绝对是满足复杂 web 应用程序需求的最简单,最容易的方法。
![Digital creative of a browser on the internet][1]
在 90 年代末和 00 年代初,在大型网络媒体资源上工作很有趣。我的经历让我想起了 American Greetings Interactive在情人节那天我们拥有互联网上排名前 10 位之一的网站(以网络访问量衡量)。我们为 [AmericanGreetings.com][2][BlueMountain.com][3] 等公司提供了电子贺卡,并为 MSN 和 AOL 等合作伙伴提供了电子贺卡。该组织的老员工仍然深切地记得与 Hallmark 等其它电子贺卡网站进行大战的史诗般的故事。 顺便说一句,我还为 Holly HobbieCare Bears 和 Strawberry Shortcake 经营大型网站。
我记得就像那是昨天发生的一样,这是我们第一次遇到真正的问题。通常,我们的前门(路由器,防火墙和负载均衡器)有大约 200Mbps 的流量进入。但是突然之间Multi Router Traffic GrapherMRTG的图示在几分钟内飙升至 2Gbps。我疯了似地东奔西跑。我了解了我们的整个技术堆栈从路由器交换机防火墙和负载平衡器到 Linux/Apache web 服务器,到我们的 Python 堆栈FastCGI 的元版本以及网络文件系统NFS服务器。我知道所有配置文件在哪里我可以访问所有管理界面并且我是一位经验丰富的系统管理员具有多年解决复杂问题的经验。
但是,我无法弄清楚发生了什么……
当你在一千个 Linux 服务器上疯狂地键入命令时,五分钟的感觉就像是永恒。我知道站点可能会在任何时候崩溃,因为当它被划分成更小的集群时,压垮上千个节点的集群是那么的容易。
我迅速 _跑到_ 老板的办公桌前,解释了情况。他几乎没有从电子邮件中抬头,这使我感到沮丧。他抬头看了看,笑了笑,说道:“是的,市场营销可能会开展广告活动。有时会发生这种情况。”他告诉我在应用程序中设置一个特殊标志,以减轻 Akamai 的访问量。 我跑回我的办公桌,在上千台 web 服务器上设置了标志,几分钟后,该站点恢复正常。灾难也就被避免了。
我可以再分享 50 个类似的故事,但你脑海中可能会有一点好奇:“这种运维方式将走向何方?”
关键是,我们遇到了业务问题。当技术问题使你无法开展业务时,它们就变成了业务问题。换句话说,如果你的网站无法访问,你就不能处理客户交易。
那么,所有这些与 Kubernetes 有什么关系?一切。世界已经改变。早在 90 年代末和 00 年代初,只有大型网站才出现大型网络规模级的问题。现在,有了微服务和数字化转型,每个企业都面临着一个大型的网络规模级的问题——可能是多个大型的网络规模级的问题。
你的企业需要能够通过许多不同的人构建的许多不同的,通常是复杂的服务来管理复杂的网络规模的资产。你的网站需要动态地处理流量,并且它们必须是安全的。这些属性需要在所有层(从基础结构到应用程序层)上由 API 驱动。
### 进入 Kubernetes
Kubernetes 并不复杂你的业务问题才是。当你想在生产环境中运行应用程序时要满足性能伸缩性抖动等和安全性要求就需要最低程度的复杂性。诸如高可用性HA容量要求N+1N+2N+100以及保证最终一致性的数据技术等就会成为必需。这些是每家进行数字化转型的公司的生产要求而不仅仅是 GoogleFacebook 和 Twitter 这样的大型网站。
在旧时代,我还在 American Greetings 任职时,每次我们加入一个新的服务,它看起来像这样:所有这些都是由网络运营团队来处理的,没有一个是通过标签系统转移给其他团队来处理的。这是在 DevOps 出现之前的 DevOps
1. 配置DNS通常是内部服务层和面向外部的公众
2. 配置负载均衡器(通常是内部服务和面向公众的)
3. 配置对文件的共享访问(大型 NFS 服务器,群集文件系统等)
4. 配置集群软件(数据库,服务层等)
5. 配置 web 服务器群集(可以是 10 或 50 个服务器)
大多数配置是通过配置管理自动完成的,但是配置仍然很复杂,因为每个系统和服务都有不同的配置文件,而且格式完全不同。我们研究了像 [Augeas][4] 这样的工具来简化它,但是我们认为使用转换器来尝试和标准化一堆不同的配置文件是一种反模式。
如今借助Kubernetes启动一项新服务本质上看起来如下
1. 配置 Kubernetes YAML/JSON。
2. 提交给 Kubernetes API`kubectl create -f service.yaml`),下面给出了一个示意片段。
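下面是一个极简的示意片段(其中的服务名 my-service、标签 app: my-app 和端口号都只是假设的示例),大致展示了上面两步的样子:
```
# 第一步:编写一个最简单的 Service 定义(各字段取值仅为示例)
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
# 第二步:提交给 Kubernetes API
kubectl create -f service.yaml
```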
Kubernetes 大大简化了服务的启动和管理。服务所有者(无论是系统管理员,开发人员还是架构师)都可以创建 Kubernetes 格式的 YAML/JSON 文件。使用 Kubernetes每个系统和每个用户都说相同的语言。所有用户都可以在同一 Git 存储库中提交这些文件,从而启用 GitOps。
而且,可以弃用和删除服务。从历史上看,删除 DNS 条目负载平衡器条目web 服务器配置等是非常可怕的,因为你几乎肯定会破坏某些东西。使用 Kubernetes所有内容都被命名为名称空间因此可以通过单个命令删除整个服务。尽管你仍然需要确保其它应用程序不使用它微服务和功能即服务FaaS的缺点但你可以更加确信删除服务不会破坏基础架构环境。
### 构建,管理和使用 Kubernetes
太多的人专注于构建和管理 Kubernetes 而不是使用它(详见 [_Kubernetes 是一辆翻斗车_][5])。
在单个节点上构建一个简单的 Kubernetes 环境并不比安装 LAMP 堆栈复杂得多,但是我们无休止地争论着构建与购买的问题。并不是 Kubernetes 很难,难的是以高可用性大规模地运行应用程序。建立一个复杂的、高可用性的 Kubernetes 集群很困难,因为要建立如此规模的任何集群都是很困难的。它需要规划和大量软件。建造一辆简单的翻斗车并不复杂,但是建造一辆可以运载 [10 吨灰尘并能以 200mph 的速度稳定行驶的卡车][6]则很复杂。
管理 Kubernetes 可能很复杂,因为管理大型网络规模的集群可能很复杂。有时,管理此基础架构很有意义;而有时不是。由于 Kubernetes 是一个社区驱动的开源项目,它使行业能够以多种不同方式对其进行管理。供应商可以出售托管版本,而用户可以根据需要自行决定对其进行管理。(但是你应该质疑是否确实需要。)
使用 Kubernetes 是迄今为止运行大规模网络资源的最简单方法。Kubernetes 正在普及运行一组大型、复杂的 Web 服务的能力——就像当年 Linux 在 Web 1.0 中所做的那样。
由于时间和金钱是一个零和游戏,因此我建议将重点放在使用 Kubernetes 上。将你的时间和金钱花费在[掌握 Kubernetes 原语][7]或处理[活跃度和就绪性探针][8]的最佳方法上(另一个例子表明大型、复杂的服务很难)。不要专注于构建和管理 Kubernetes。在构建和管理上许多供应商可以为你提供帮助。
### 结论
我记得对无数的问题进行了故障排除,比如我在这篇文章的开头所描述的问题(当时 Linux 内核中的 NFS、我们自产的 CFEngine、仅在某些 web 服务器上出现的重定向问题等)。开发人员无法帮助我解决所有这些问题。实际上,除非开发人员具备高级系统管理员的技能,否则他们甚至不可能进入系统并作为第二双眼睛提供帮助。没有带有图形或“可观察性”的控制台——可观察性在我和其他系统管理员的大脑中。如今,有了 KubernetesPrometheusGrafana 等,一切都改变了。
关键是:
1. 时代不一样了。现在,所有 web 应用程序都是大型的分布式系统。就像 AmericanGreetings.com 过去一样复杂,现在每个网站都需要该站点的扩展性和 HA 要求。
2. 运行大型的分布式系统是很困难的,就是这样。这是业务需求使然,而不是 Kubernetes 的问题。使用更简单的协调器并不是解决方案。
Kubernetes 绝对是满足复杂 Web 应用程序需求的最简单的方法。这是我们生活的时代,而 Kubernetes 擅长于此。你可以讨论是否应该自己构建或管理 Kubernetes。有很多供应商可以帮助你构建和管理它但是很难否认这是大规模运行复杂 web 应用程序的最简单方法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: http://AmericanGreetings.com
[3]: http://BlueMountain.com
[4]: http://augeas.net/
[5]: https://linux.cn/article-11011-1.html
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
[7]: https://linux.cn/article-11036-1.html
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html

View File

@ -0,0 +1,322 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux permissions 101)
[#]: via: (https://opensource.com/article/19/8/linux-permissions-101)
[#]: author: (Alex Juarez https://opensource.com/users/mralexjuarezhttps://opensource.com/users/marcobravohttps://opensource.com/users/greg-p)
全面介绍 Linux 权限
======
> 知道如何控制用户对文件的访问是一项基本的系统管理技能。
![Penguins][1]
了解 Linux 权限以及如何控制哪些用户可以访问文件是系统管理的一项基本技能。
本文将介绍标准 Linux 文件系统权限,并进一步研究特殊权限,以及使用 `umask` 解释默认权限的出处。
### 理解 ls 命令的输出
在讨论如何修改权限之前,我们需要知道如何查看权限。通过 `ls` 命令的长列表参数(`-l`)为我们提供了有关文件的许多信息。
```
$ ls -lAh
total 20K
-rwxr-xr--+ 1 root root    0 Mar  4 19:39 file1
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file10
-rwxrwxr--+ 1 root root    0 Mar  4 19:39 file2
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file8
-rw-rw-rw-. 1 root root    0 Mar  4 19:39 file9
drwxrwxrwx. 2 root root 4.0K Mar  4 20:04 testdir
```
为了理解这些是什么意思,让我们将关于权限的输出分解为各个部分。单独理解每个部分会更容易。
让我们看看在上面的输出中的最后一行的每个组件:
```
drwxrwxrwx. 2 root root 4.0K Mar  4 20:04 testdir
```
第 1 节 | 第 2 节 | 第 3 节 | 第 4 节 | 第 5 节 | 第 6 节 | 第 7 节
---|---|---|---|---|---|---
`d` | `rwx` | `rwx` | `rwx` | `.` | `root` | `root`
第 1 节(左侧)显示文件的类型。
符号 | 类型
---|---
`d` | 目录
`-` | 常规文件
`l` | 软链接
`ls` 的 [info 页面][2]完整列出了不同的文件类型。
每个文件都有三种访问方式:
* 属主
* 组
* 所有其他人
  
第 2、3 和 4 节涉及用户、组和“其他用户”权限。每个部分都可以包含 `r`(读)、`w`(写)和 `x`(可执行)权限的组合。
每个权限还分配了一个数值,这在以八进制表示形式讨论权限时很重要。
权限 | 八进制值
---|---
`r` | 4
`w` | 2
`x` | 1
第 5 节描述了其他访问方法,例如 SELinux 或文件访问控制列表FACL
访问方法 | 字符
---|---
没有其它访问方法 | `-`
SELinux | `.`
FACL | `+`
各种方法的组合 | `+`
第 6 节和第 7 节分别是属主和组的名称。
### 使用 chown 和 chmod
#### chown 命令
`chown`(更改所有权)命令用于更改文件的用户和组的所有权。
要将文件 `foo` 的用户和组的所有权更改为 `root`,我们可以使用以下命令:
```
$ chown root:root foo
$ chown root: foo
```
在用户后跟冒号(`:`)运行该命令将同时设置用户和组所有权。
要仅将文件 `foo` 的用户所有权设置为 `root` 用户,请输入:
```
$ chown root foo
```
要仅更改文件 `foo` 的组所有权,请在组之前加冒号:
```
$ chown :root foo
```
#### chmod 命令
`chmod`(更改模式)命令控制属主、组以及既不是属主也不属于与文件关联的组的所有其他用户的文件许可权。
`chmod` 命令可以以八进制(例如 `755`、`644` 等)和符号(例如 `u+rwx`、`g-rwx`、`o=rw`)格式设置权限。
八进制表示法将 4 个“点”分配给“读取”,将 2 个“点”分配给“写入”,将 1 个点分配给“执行”。如果要给用户(属主)分配“读”权限,则将 4 分配给第一个插槽,但是如果要添加“写”权限,则必须添加 2。如果要添加“执行”则要添加 1。我们对每种权限类型执行此操作属主、组和其他。
例如,如果我们想将 “读取”、“写入”和“执行”分配给文件的属主,但仅将“读取”和“执行”分配给组成员和所有其他用户,则我们应使用 `755`(八进制格式)。这是属主的所有权限位(`4 + 2 + 1`),但组和其他权限的所有权限位只有 `4``1``4 + 1`)。
> 细分一下4+2+1=7、4+1=5、4+1=5。
如果我们想将“读取”和“写入”分配给文件的属主,而只将“读取”分配给组的成员和所有其他用户,则可以如下使用 `chmod`
```
$ chmod 644 foo_file
```
在下面的示例中,我们在不同的分组中使用符号表示法。注意字母 `u`、`g` 和 `o` 分别代表“用户”(属主)、“组”和“其他”。我们将 `u`、`g` 和 `o``+`、`-` 或 `=` 结合使用来添加、删除或设置权限位。
要将“执行”位添加到所有权权限集中:
```
$ chmod u+x foo_file
```
要从组成员中删除“读取”、“写入”和“执行”:
```
$ chmod g-rwx foo_file
```
要将所有其他用户的所有权设置为“读取”和“写入”:
```
$ chmod o=rw foo_file
```
### 特殊位:设置 UID、设置 GID 和粘滞位
除了标准权限外,还有一些特殊的权限位,它们具有一些有用的好处。
#### 设置用户 IDsuid
当在文件上设置 `suid` 时,将以文件的属主的身份而不是运行该文件的用户身份执行操作。一个[好例子][3]是 `passwd` 命令。它需要设置 `suid` 位,以便更改密码的操作具有 root 权限。
```
$ ls -l /bin/passwd
-rwsr-xr-x. 1 root root 27832 Jun 10  2014 /bin/passwd
```
设置 `suid` 位的示例:
```
$ chmod u+s /bin/foo_file_name
```
#### 设置组 IDsgid
`sgid` 位与 `suid` 位类似,因为操作是在目录的组所有权下完成的,而不是以运行命令的用户身份。
一个使用 `sgid` 的例子是,如果多个用户正在同一个目录中工作,并且目录中创建的每个文件都需要具有相同的组权限。下面的示例创建一个名为 `collab_dir` 的目录,设置 `sgid` 位,并将组所有权更改为 `webdev`
```
$ mkdir collab_dir
$ chmod g+s collab_dir
$ chown :webdev collab_dir
```
现在,在该目录中创建的任何文件都将具有 `webdev` 的组所有权,而不是创建该文件的用户的组。
```
$ cd collab_dir
$ touch file-sgid
$ ls -lah file-sgid
-rw-r--r--. 1 root webdev 0 Jun 12 06:04 file-sgid
```
#### “粘滞”位
粘滞位表示只有文件所有者才能删除该文件,即使组权限也允许该文件可以删除。通常,在 `/tmp` 这样的通用或协作目录上,此设置最有意义。在下面的示例中,“所有其他人”权限集的“执行”列中的 `t` 表示已应用粘滞位。
```
$ ls -ld /tmp
drwxrwxrwt. 8 root root 4096 Jun 12 06:07 /tmp/
```
请记住,这不会阻止某个人编辑该文件,它只是阻止他们删除该目录的内容。
我们将粘滞位设置为:
```
$ chmod o+t foo_dir
```
你可以自己尝试在目录上设置粘滞位并赋予其完整的组权限,以便多个属于同一组的用户可以在目录上进行读取、写入和执行。
接着,以每个用户的身份创建文件,然后尝试以另一个用户的身份删除它们。
如果一切配置正确,则一个用户应该不能从另一用户那里删除文件。
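下面是一个示意性的操作过程(假设系统上已经存在 alice 和 bob 两个用户,并且他们同属 demo 组;用户名、组名和目录名都只是示例):
```
$ sudo mkdir /tmp/sticky-demo
$ sudo chgrp demo /tmp/sticky-demo
$ sudo chmod 1777 /tmp/sticky-demo        # 完整的读、写、执行权限,外加粘滞位
$ sudo -u alice touch /tmp/sticky-demo/alice-file
$ sudo -u bob rm -f /tmp/sticky-demo/alice-file
rm: cannot remove '/tmp/sticky-demo/alice-file': Operation not permitted
```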
请注意这些位中的每个位也可以用八进制格式设置SUID = 4、SGID = 2 和 粘滞位 = 1。LCTT 译注:这里是四位八进制数字)
```
$ chmod 4744
$ chmod 2644
$ chmod 1755
```
#### 大写还是小写?
如果要设置特殊位并看到大写的 `S``T` 而不是小写的字符(如我们之前所见),那是因为不存在(对应的)底层的执行位。为了说明这一点,下面的示例创建一个设置了粘滞位的文件。然后,我们可以添加和删除执行位以演示大小写更改。
```
$ touch cap-ST-demo
$ chmod 1755 cap-ST-demo
$ ls -l cap-ST-demo
-rwxr-xr-t. 1 root root 0 Jun 12 06:16 cap-ST-demo
$ chmod o-x cap-ST-demo
$ ls -l cap-ST-demo
-rwxr-xr-T. 1 root root 0 Jun 12 06:16 cap-ST-demo
```
#### 有条件地设置执行位
至此,我们使用小写的 `x` 设置了执行位,而无需询问任何问题即可对其进行设置。我们还有另一种选择:使用大写的 `X` 而不是小写的,它将仅在权限组中某个位置已经有执行位时才设置执行位。这可能是一个很难解释的概念,但是下面的演示将帮助说明它。请注意,在尝试将执行位添加到组特权之后,该位没有被设置上。
```
$ touch cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod g+X cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
```
在这个类似的例子中,我们首先使用小写的 `x` 将执行位添加到组权限,然后使用大写的 `X` 为所有其他用户添加权限。这次,大写的 `X`设置了该权限。
```
$ touch cap-X-file
$ ls -l cap-X-file
-rw-r--r--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod g+x cap-X-file
$ ls -l cap-X-file
-rw-r-xr--. 1 root root 0 Jun 12 06:31 cap-X-file
$ chmod o+X cap-X-file
ls -l cap-X-file
-rw-r-xr-x. 1 root root 0 Jun 12 06:31 cap-X-file
```
### 理解 umask
`umask` 会屏蔽(或“阻止”)默认权限集中的位,以定义文件或目录的权限。例如,`umask` 输出中的 `2` 表示它至少在默认情况下阻止了文件的写入位。
使用不带任何参数的 `umask` 命令可以使我们看到当前的 `umask` 设置。共有四列:第一列为特殊的`suid`、`sgid` 或粘滞位而保留,其余三列代表属主、组和其他人的权限。
```
$ umask
0022
```
为了理解这意味着什么,我们可以用 `-S` 标志来执行 `umask`(如下所示)以了解屏蔽位的结果。例如,由于第三列中的值为 `2`,因此将“写入”位从组和其他部分中屏蔽掉了;只能为它们分配“读取”和“执行”。
```
$ umask -S
u=rwx,g=rx,o=rx
```
要查看文件和目录的默认权限集是什么,让我们将 `umask` 设置为全零。这意味着我们在创建文件时不会掩盖任何位。
```
$ umask 000
$ umask -S
u=rwx,g=rwx,o=rwx
$ touch file-umask-000
$ ls -l file-umask-000
-rw-rw-rw-. 1 root root 0 Jul 17 22:03 file-umask-000
```
现在,当我们创建文件时,我们看到所有部分的默认权限分别为“读取”(`4`)和“写入”(`2`),相当于八进制表示 `666`
我们可以对目录执行相同的操作,并看到其默认权限为 `777`。我们需要在目录上使用“执行”位,以便可以遍历它们。
```
$ mkdir dir-umask-000
$ ls -ld dir-umask-000
drwxrwxrwx. 2 root root 4096 Jul 17 22:03 dir-umask-000/
```
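作为对照,下面把 `umask` 设置回常见的 `022` 再分别创建一个文件和目录(输出中的属主、日期等仅为示意)。可以看到文件权限变成 `644`、目录权限变成 `755`,正好是默认权限去掉被屏蔽的写入位:
```
$ umask 022
$ touch file-umask-022
$ mkdir dir-umask-022
$ ls -ld file-umask-022 dir-umask-022
-rw-r--r--. 1 root root    0 Jul 17 22:05 file-umask-022
drwxr-xr-x. 2 root root 4096 Jul 17 22:05 dir-umask-022
```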
### 总结
管理员还有许多其他方法可以控制对系统文件的访问。这些权限是 Linux 的基本权限,我们可以在这些基础上进行构建。如果你的工作将你带入 FACL 或 SELinux你会发现它们也建立在这些文件访问的首要规则之上。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/linux-permissions-101
作者:[Alex Juarez][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mralexjuarezhttps://opensource.com/users/marcobravohttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_ (Penguins)
[2]: https://www.gnu.org/software/texinfo/manual/info-stnd/info-stnd.html
[3]: https://www.theurbanpenguin.com/using-a-simple-c-program-to-explain-the-suid-permission/

View File

@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Essential Keyboard Shortcuts Google Chrome/Chromium Users Should Know)
[#]: via: (https://itsfoss.com/google-chrome-shortcuts/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Chrome/Chromium 用户必知必会的 11 个基本快捷键
======
> 掌握这些 Google Chrome 键盘快捷键,以获得更好、更流畅、更高效的 Web 浏览体验。还包括可下载的备忘单。
无可否认Google Chrome 是[最受欢迎的网络浏览器][1]。它的开源版本 [Chromium][2] 也越来越受欢迎,现在一些 Linux 发行版将其作为默认的网络浏览器。
如果你经常在台式机上使用它,则可以使用 Google Chrome 键盘快捷键来改善浏览体验。没有必要用你的鼠标移来移去、点来点去。只要掌握这些快捷方式,你可以节省一些时间并提高工作效率。
我这里使用的名称是 Google Chrome但是这些快捷方式同样适用于 Chromium 浏览器。
### 你应该使用的 11 个酷炫的 Chrome 键盘快捷键
如果你是专业人士,则可能已经知道其中一些 Chrome 快捷方式,但是有可能你仍然可以在这里找到一些隐藏的宝石。让我们来看看。
**键盘快捷键** | **动作**
---|---
`Ctrl+T` | 打开一个新标签页
`Ctrl+N` | 打开一个新窗口
`Ctrl+Shift+N` | 打开一个新无痕式窗口
`Ctrl+W` | 关闭当前标签页
`Ctrl+Shift+T` | 重新打开上一个关闭的标签页
`Ctrl+Shift+W` | 关闭窗口
`Ctrl+Tab``Ctrl+Shift+Tab` | 切换到右侧或左侧的标签页
`Ctrl+L` | 访问搜索/地址栏
`Ctrl+D` | 将网址放入书签
`Ctrl+H` | 访问浏览历史
`Ctrl+J` | 访问下载历史
`Shift+Esc` | 打开 Chrome 任务管理器
你可以[下载这份有用的 Chrome 键盘快捷键列表来作为快速参考][3]。
#### 1、用 `Ctrl+T` 打开一个新标签页
需要打开一个新标签页吗?只需同时按 `Ctrl` 和 `T` 键,你就会打开一个新标签页。
#### 2、使用 `Ctrl+N` 打开一个新窗口
已经打开太多标签页?是时候打开一个新的窗口了。使用 `Ctrl` 和 `N` 键打开一个新的浏览器窗口。
#### 3、使用 `Ctrl+Shift+N` 隐身
在线查询航班或酒店价格?隐身可能会有所帮助。使用 `Ctrl+Shift+N`在 Chrome 中打开一个隐身窗口。
#### 4、使用 `Ctrl+W` 关闭标签页
使用 `Ctrl` 和 `W` 键关闭当前标签页。无需将鼠标移到顶部去寻找 `x` 按钮。
#### 5、不小心关闭了标签页`Ctrl+Shift+T` 重新打开
这是我最喜欢的 Google Chrome 浏览器快捷键。当你关闭了原本不想关的标签页时,就不用再懊悔了。使用 `Ctrl+Shift+T`,它将重新打开最后一个关闭的标签页。继续按此组合键,它会把之前关闭的标签页依次重新打开。
#### 6、使用 `Ctrl+Shift+W` 关闭整个浏览器窗口
完成工作了吗?是时候关闭带有所有标签页的整个浏览器窗口了。使用 `Ctrl+Shift+W` 键,浏览器窗口将消失,就像以前不存在一样。
#### 7、使用 `Ctrl+Tab` 在标签之间切换
打开的标签页太多了吗?你可以使用 `Ctrl+Tab` 移至右侧标签页。想左移吗?使用 `Ctrl+Shift+Tab`。 重复按这些键,你可以在当前浏览器窗口的所有打开的标签页之间移动。
你也可以使用 `Ctrl+0` 直到 `Ctrl+9` 转到前 10 个标签页之一。但是此 Chrome 键盘快捷键不适用于第 11 个及更多标签页。
#### 8、使用 `Ctrl+L` 转到搜索/地址栏
想要输入新的 URL 或快速搜索一些内容?你可以使用 `Ctrl+L`,它将在顶部突出显示地址栏。
#### 9、用 `Ctrl+D` 收藏当前网站
找到了有趣的东西?使用 `Ctrl+D` 组合键将其保存在书签中。
#### 10、使用 `Ctrl+H` 返回历史记录
你可以使用 `Ctrl+H` 键打开浏览器历史记录。如果你正在寻找前一段时间访问过的页面,或者删除你不想再看到的页面,可以搜索历史记录。
#### 11、使用 `Ctrl+J` 查看下载
在 Chrome 中按 `Ctrl+J` 键将带你进入下载页面。此页面将显示你执行的所有下载操作。
#### 意外惊喜:使用 `Shift+Esc` 打开 Chrome 任务管理器
很多人甚至都不知道 Chrome 浏览器中有一个任务管理器。Chrome 以消耗系统内存而臭名昭著。而且,当你打开大量标签时,找到罪魁祸首并不容易。
使用 Chrome 任务管理器,你可以查看所有打开的标签页及其系统利用率统计信息。你还可以看到各种隐藏的进程,例如 Chrome 扩展程序和其他服务。
![Google Chrome 任务管理器][6]
### 下载 Chrome 快捷键备忘单
我知道掌握键盘快捷键取决于习惯,你可以通过反复使用使其习惯。为了帮助你完成此任务,我创建了此 Google Chrome 键盘快捷键备忘单。
![Google Chrome键盘快捷键备忘单][7]
你可以[下载以下 PDF 格式的图像][8],进行打印并将其放在办公桌上。这样,你可以一直练习快捷方式。
如果你对掌握快捷方式感兴趣,还可以查看 [Ubuntu 键盘快捷键][9]。
顺便问一下,你最喜欢的 Chrome 快捷方式是什么?
--------------------------------------------------------------------------------
via: https://itsfoss.com/google-chrome-shortcuts/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[2]: https://www.chromium.org/Home
[3]: tmp.3qZNXSy2FC#download-cheatsheet
[4]: https://itsfoss.com/command-line-text-editors-linux/
[5]: https://itsfoss.com/rid-google-chrome-icons-dock-elementary-os-freya/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-task-manager.png?w=800&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/google-chrome-keyboard-shortcuts-cheat-sheet.png?ssl=1
[8]: https://drive.google.com/open?id=1lZ4JgRuFbXrnEXoDQqOt7PQH6femIe3t
[9]: https://itsfoss.com/ubuntu-shortcuts/

View File

@ -0,0 +1,234 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 Open Source Paint Applications for Linux Users)
[#]: via: (https://itsfoss.com/open-source-paint-apps/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
面向 Linux 用户的6款开源绘图应用程序
======
小时候,当我开始使用计算机(用的是 Windows XP我最喜欢的应用程序是画图Paint我在它上面花费了数小时涂鸦。出乎意料的是孩子们如今仍然喜欢绘图应用程序。而且不仅仅是孩子们简单的绘图应用程序在很多情况下都能派上用场。
你将找到一堆可以让你绘制/绘图或操作图片的应用程序。然而,其中一些是专有的。既然你是一名 Linux 用户 - 为什么不聚焦在开源绘图应用程序上呢?
在这篇文章中,我们将列出一些最好的开源绘图应用程序,在 Linux 上,它们有替换专有绘图软件的价值。
### 开源绘图 & 绘制应用程序
![][1]
**注意:** _该列表没有特别的排名顺序_
#### 1\. Pinta
![][2]
主要亮点:
* Paint.NET / MS Paint 的极好替代品
* 附加组件支持 (也支持 WebP 图片)
* 图层支持
[Pinta][3] 是一款令人赞叹的开源绘图应用程序,非常适合绘图和简单的图片编辑。换句话说,它是一款简单的带有绚丽特色的绘图应用程序。
你可以把 [Pinta][4] 看作 Linux 上 MS Paint 的一个替代品,不过它还带有图层支持等特性。不仅是 MS Paint它也可以作为 Windows 上 Paint.NET 的一个 Linux 替代品。尽管 Paint.NET 功能更强一些,但 Pinta 似乎也是个不错的选择。
通过几个附加组件可以增强其功能,例如 [在 Linux 上支持 WebP 图片][5]。除了图层支持以外,你还可以轻松地调整图片大小、添加特效、进行调整(亮度、对比度等),以及在导出图片时调整其质量。
#### 如何安装 Pinta
你应该能够在软件中心/应用程序中心/软件包管理器中轻松地找到它。只需要输入 “**Pinta**”,就可以开始安装。你也可以选择 [Flatpak][6] 软件包。
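下面是一个通过 Flatpak 安装的示意命令(应用 ID 取自上文的 [Flathub][6] 链接,如有变动请以 Flathub 页面为准):
```
flatpak install flathub com.github.PintaProject.Pinta
```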
或者,你可以在终端中输入下面的命令 (Ubuntu/Debian):
```
sudo apt install pinta
```
关于下载软件包和安装指南的更多信息,请参考[官方下载页面][7]。
#### 2\. Krita
![][8]
主要亮点:
* HDR 绘图
* PSD 支持
* 图层支持
* 笔刷稳定器
* 二维动画
Krita 是 Linux 上最高级的开源绘图应用程序之一。当然,就本文而言,它可以帮助你绘制草图、在画布上随意挥洒。除此之外,它还提供很多特色功能。
例如如果你有一只颤抖的手它可以帮助你稳定笔刷的笔划。你可以使用内置的矢量工具来创建漫画面板和其它有趣的东西。如果你正在寻找一个成熟的颜色管理支持绘图助理和图层管理Krita 应该是你最好的选择。
#### 如何安装 Krita
类似于 Pinta你可以在软件中心/应用程序中心或软件包管理器的列表中找到它。它也可以在 [Flatpak 存储库][10]中找到。
考虑通过终端安装它?输入下面的命令:
```
sudo apt install krita
```
无论哪种情况,你可以前往它们的[官方下载页面][11]来获取 **AppImage** 文件并运行它。
如果你对 AppImage 文件一无所知,查看我们的指南 [如何使用 AppImage][12] 。
#### 3\. Tux Paint
![][13]
主要亮点:
* 给儿童用的一个简单直接的绘图应用程序
我不是儿童但就其目标受众3 至 12 岁的儿童而言Tux Paint 是最好的开源绘图应用程序之一。当然,当你只是想随手涂鸦时,也没什么好挑的,在这种情况下,即使对成年人来说Tux Paint 似乎也是最好的选择!
#### 如何安装 Tuxpaint
Tuxpaint 可以从软件中心或软件包管理器下载。无论哪种情况,在 Ubuntu/Debian 上安装它,在终端中输入下面的命令:
```
sudo apt install tuxpaint
```
关于它的更多信息,前往[官方站点][14]。
#### 4\. Drawpile
![][15]
主要亮点:
* 协同绘制
* 内置聊天功能,与其他用户互动
* 图层支持
* 记录绘制会话
Drawpile 是一个有趣的开源绘图应用程序,你可以在其中与其他用户实时协作。准确地说,你们可以在同一个画布上同时绘制。除了这个独特的功能,它还支持图层、记录绘制会话,甚至提供一个聊天工具来与其他协作用户交流。
你可以创建或加入一个公共会话,或者使用一个代码与你的朋友开始一个私有会话。默认情况下,服务器就是你自己的计算机;但如果你需要远程服务器,也可以选择它。
注意,你将需要[注册一个 Drawpile 账户][16] 以便于协作。
#### 如何安装 Drawpile
据我所知,你只能在 [Flatpak 存储库][17] 中找到它。
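下面是一个通过 Flatpak 安装的示意命令(应用 ID 取自上面的 [Flatpak 存储库][17] 链接,如有变动请以 Flathub 页面为准):
```
flatpak install flathub net.drawpile.drawpile
```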
#### 5\. MyPaint
![][19]
主要亮点:
* 给数码画家的易用工具
* 图层管理支持
* 很多选项来微调你的画笔和绘制
对于数码画家来说,[MyPaint][20] 是一个简单而强大的工具。它提供了许多可供微调的选项,让每一笔数字笔刷笔划都恰到好处。我不是数字艺术家(只是个涂鸦爱好者),但我注意到它有很多用于调整笔刷和颜色的选项,还有一个用于添加草稿暂存面板的选项。
它也支持图层管理,也许这正是你所需要的。它的最新稳定版本已经好几年没有更新了,但是当前的 alpha 构建版本(我测试过)运行得很好。如果你正在 Linux 上寻找一款开源绘图应用程序,不妨试试它。
#### 如何安装 MyPaint
MyPaint 在官方存储库中可获得。然而,这是老旧的版本。如果你仍然想继续,你可以在软件中心搜索它,或在终端中输入下面的命令:
```
sudo apt install mypaint
```
你可以前往它的官方 [GitHub 发布页面][21]获取最新的 alpha 构建版本,下载(任意版本的)[AppImage 文件][12],将其设置为可执行后即可启动应用程序。
#### 6\. KolourPaint
![][22]
主要亮点:
* 在 Linux 上的一个 MS Paint 简单替代
* 不支持图层管理
如果你只是想要一款不带图层管理、单纯用来画点东西的开源绘图应用程序,那就是它了。
[KolourPaint][23] 最初为 KDE 桌面环境定制,但是它在其它的桌面环境中也完美地工作。
#### 如何安装 KolourPaint
你可以从软件中心安装 KolourPaint ,或通过终端使用下面的命令:
```
sudo apt install kolourpaint4
```
无论哪种情况,你都可以使用 [Flathub][24] 。
**总结**
如果你在考虑像 GIMP/Inkscape 这样的应用程序,可以参阅我们另一篇关于[面向数码艺术家的最佳 Linux 工具][25]的文章。如果你对更多选择感兴趣,建议你去看看。
在这里,我们尝试编写一份 Linux 可用的最佳开源绘图应用程序列表。如果你认为我们错过一些东西,请在下面的评论区告诉我们!
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-paint-apps/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/open-source-paint-apps.png?resize=800%2C450&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/pinta.png?ssl=1
[3]: https://pinta-project.com/pintaproject/pinta/
[4]: https://itsfoss.com/pinta-1-6-ubuntu-linux-mint/
[5]: https://itsfoss.com/webp-ubuntu-linux/
[6]: https://www.flathub.org/apps/details/com.github.PintaProject.Pinta
[7]: https://pinta-project.com/pintaproject/pinta/releases
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/krita-paint.png?ssl=1
[9]: https://itsfoss.com/things-to-do-after-installing-fedora-24/
[10]: https://www.flathub.org/apps/details/org.kde.krita
[11]: https://krita.org/en/download/krita-desktop/
[12]: https://itsfoss.com/use-appimage-linux/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/tux-paint.jpg?ssl=1
[14]: http://www.tuxpaint.org/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/drawpile.png?ssl=1
[16]: https://drawpile.net/accounts/signup/
[17]: https://flathub.org/apps/details/net.drawpile.drawpile
[18]: https://itsfoss.com/ocs-store/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/mypaint.png?ssl=1
[20]: https://mypaint.org/
[21]: https://github.com/mypaint/mypaint/releases
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/kolourpaint.png?ssl=1
[23]: http://kolourpaint.org/
[24]: https://flathub.org/apps/details/org.kde.kolourpaint
[25]: https://itsfoss.com/best-linux-graphic-design-software/

View File

@ -0,0 +1,260 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to dual boot Windows 10 and Debian 10)
[#]: via: (https://www.linuxtechi.com/dual-boot-windows-10-debian-10/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
如何拥有一个Windows 10 和 Debian 10 的双系统
======
所以,在无数次劝说自己后,你终于做出了一个大胆的决定,试试 **Linux**。不过在完全熟悉 Linux 之前,你依旧需要使用 Windows 10 系统。幸运的是,通过双系统引导设置,能让你在启动时选择自己想要进入的系统。在这个指南中,你会看到**如何双重引导 Windows 10 和 Debian 10**。
[![如何拥有一个Windows 10 和 Debian 10 的双系统][1]][2]
### 前提条件
在开始之前,确保你满足下列条件:
* 一个Debian10 的可引导USB或DVD
* 一个快速且稳定的网络(用于安装更新以及第三方软件)
另外记得注意你系统的引导策略UEFI 或 Legacy需要确保两个系统使用同一种引导模式。
### 第一步:在硬盘上创建一个空余分区
第一步你需要在你的硬盘上创建一个空余分区。之后这将是我们安装Debian系统的地方。为了实现这一目的需要使用下图所示的磁盘管理器
同时按下 **Windows + R键**,启动运行程序。接下来,输入 **diskmgmt.msc** ,按 **回车键**
[![Launch-Run-dialogue][1]][3]
这会启动 **磁盘管理器**窗口它会显示你Windows 上所有已有磁盘。
[![Disk-management][1]][4]
接下来你需要为Debian安装创建空余空间。为此你需要压缩其中一个磁盘的空间从而创建一个未分配的新分区。在这个例子里我会从 D 盘中创建一个 **30 GB** 的新分区。
为了压缩一个卷,右键点击它,然后选中选项 **压缩**
[![压缩卷][1]][5]
在弹出窗口中定义你想压缩的空间大小。记住这是将来要安装Debian 10的磁盘空间。我选择了 **30000MB ( 大约 30 GB)** 。 压缩完成后,点击‘**压缩**.
[![Shrink-space][1]][6]
在压缩操作结束后,你会看到一个如下图所示的未分配分区:
[![未分配分区][1]][7]
完美! 现在可以准备开始安装了。
### 第二步开始安装Debian 10
空余分区已经创建好了将你的可引导USB或安装DVD插入电脑重新启动系统。 记得更改 **BIOS** 中的**引导顺序**,需要在启动时按住功能键(通常,根据品牌不同,是**F9, F10 或 F12** 中的某一个)。 这一步骤,对系统是否能进入安装媒体来说,至关重要。保存 BIOS 设置,并重启电脑。
如下图所示,界面会显示一个新的引导菜单:点击 **Graphical install**
[![图形化界面安装][1]][8]
下一步,选择你的 **偏好语言** ,然后点击 **继续**
[![设置语言-Debian10][1]][9]
接着,选择你的 **地区** ,点击‘**继续**’。 根据地区,系统会自动选择当地对应的时区。 如果你无法找到你所对应的地区,将界面往下拉, 点击‘**其他**’后,选择相对应位置。
[![选择地区-Debain10][1]][10]
而后,选择你的**键盘**布局。
[![设置键盘-Debain10][1]][11]
接下来,设置系统的 **主机名** ,点击 **继续**
[![Set-hostname-Debian10][1]][12]
下一步,确定**域名**。如果你的电脑不在域中,直接点击‘**继续**’按钮。
[![设置域名-Debian10][1]][13]
然后,如图所示,设置 **root 密码**,点击 **继续**
[![设置root 密码-Debian10][1]][14]
下一步骤,设置账户的用户全名,点击 **继续**
[![设置用户全名-debain10][1]][15]
接着,通过设置**用户名**来确定此账户显示时的用户名。
[![Specify-username-Debian10][1]][16]
下一步,设置用户密码,点击‘**继续**’。
[![设置用户密码-Debian10][1]][17]
然后,设置**时区**
[![设置时区-Debian10][1]][18]
这时你要为 Debian 10 的安装创建分区。如果你是新手用户,点击菜单中的第一个选项‘**使用最大的连续空余空间**’,然后点击‘**继续**’。
[![Use-largest-continuous-free-space-debian10][1]][19]
不过,如果你对创建分区有所了解的话,可以选择‘**手动**’选项,点击‘**继续**’。
[![选择手动-Debain10][1]][20]
接着,选择被标记为‘**空余空间**’的磁盘,点击‘**继续**’。接下来,点击‘**创建新分区**’。
[![创建新分区-Debain10][1]][21]
下一界面首先确定swap空间大小。我的swap大小为**2GB**,点击 **继续**
[![确定swap大小-debian10][1]][22]
在下一界面中选择 **Primary**(主分区),点击‘**继续**’。
[![磁盘主分区-Debian10][1]][23]
选择在磁盘**初始位置创建新分区**后,点击‘**继续**’。
[![在初始位置创建-Debain10][1]][24]
选择 **Ext4 日志文件系统**,点击‘**继续**’。
[![选择Ext4日志文件系统-debain10][1]][25]
下个界面选择 **swap** ,点击继续
[![选择swap-debian10][1]][26]
选中 **完成此分区设置** ,点击继续。
[![完成此分区设置-debian10][1]][27]
返回 **磁盘分区** 界面, 点击**空余空间** ,点击继续
[![点击空余空间-Debain10][1]][28]
为了让自己能轻松一点,选中‘**自动为空余空间分区**’后,点击‘**继续**’。
[![自动为空余空间分区-Debain10][1]][29]
接着点击 **将所有文件存储在同一分区 (新手用户推荐)**
[![将所有文件存储在同一分区-debian10][1]][30]
最后,点击‘**完成分区设置,并将改动写入磁盘**’,然后点击‘**继续**’。
[![完成分区设置,并将改动写入磁盘][1]][31]
确定你要将改动写入磁盘,点击‘**Yes**’。
[![将改动写入磁盘-Debian10][1]][32]
而后,安装程序会开始安装所有必要的软件包。
当系统询问是否要扫描其他CD时选择 **No** ,并点击继续
[![扫描其他CD-No-Debain10][1]][33]
接着,选择离你最近的镜像站点地区,点击 ‘继续’
[![Debian-镜像站点-国家][1]][34]
然后,选择最适合你的镜像站点,点击‘**继续**
[![选择镜像站点][1]][35]
如果你打算使用代理服务器,在下面输入具体信息,没有的话就留空,点击‘继续’
[![输入代理信息-debian10][1]][36]
随着安装进程的继续,你会被问到是否想参加一个**软件包用途调查**。你可以选择任意一个选项,之后点击‘继续’,我选择了‘**否**’。
[![参与调查-debain10][1]][37]
在**软件选择**窗口中,选中你想安装的软件包,点击‘**继续**’。
[![软件选择-debian10][1]][38]
安装程序会将选中的软件一一安装,在这期间,你可以去喝杯咖啡休息一下。
系统将会询问你,是否要将 grub **引导装载程序**安装到**主引导记录表MBR**上。点击 **Yes**,而后点击‘**继续**’。
[![安装-grub-bootloader-debian10][1]][39]
接着,选中你想安装**grub** 的硬盘,点击**继续**
[![选择硬盘-安装grub-Debian10][1]][40]
最后, 安装完成,直接点击 **继续** 按钮
[![安装完成-重新启动-debian10][1]][41]
你现在应该会有一个列出**Windows** 和**Debian** 的grub 菜单。 为了引导Debian系统往下选择Debian。之后你就能看见登录界面。输入密码之后点击回车键。
[![Debian10-登录][1]][42]
这就完成了!这样,你就拥有了一个全新的 Debian 10 和 Windows 10 双系统。
[![Debian10-Buster-Details][1]][43]
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/dual-boot-windows-10-debian-10/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/How-to-dual-boot-Windows-and-Debian10.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Launch-Run-dialogue.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Disk-management.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-volume.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Shrink-space.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Unallocated-partition.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Graphical-Install-Debian10.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Language-Debian10.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-location-Debain10.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-Keyboard-layout-Debain10.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-hostname-Debian10.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-domain-name-Debian10.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Set-root-Password-Debian10.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-fullname-user-debain10.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-username-Debian10.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Specify-user-password-Debian10.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Configure-timezone-Debian10.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Use-largest-continuous-free-space-debian10.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Manual-Debain10.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Create-new-partition-Debain10.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Define-swap-space-debian10.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Partition-Disks-Primary-Debain10.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Start-at-the-beginning-Debain10.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Ext4-Journaling-system-debain10.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-swap-debain10.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Done-setting-partition-debian10.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Click-Free-space-Debain10.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Automatically-partition-free-space-Debain10.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/10/All-files-in-one-partition-debian10.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Finish-partitioning-write-changes-to-disk.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Write-changes-to-disk-Yes-Debian10.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Scan-another-CD-No-Debain10.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian-archive-mirror-country.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-Debian-archive-mirror.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Enter-proxy-details-debian10.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Participate-in-survey-debain10.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Software-selection-debian10.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-grub-bootloader-debian10.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Select-hard-drive-install-grub-Debian10.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Installation-complete-reboot-debian10.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-log-in.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Debian10-Buster-Details.jpg

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloning a MAC address to bypass a captive portal)
[#]: via: (https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/)
[#]: author: (Esteban Wilson https://fedoramagazine.org/author/swilson/)
克隆 MAC 地址来绕过强制门户
======
![][1]
如果你曾经在家和办公室以外的地方连接过 WiFi那么通常会看到一个门户页面。它可能会要求你接受服务条款或其他协议才能访问。但是当你无法通过这类门户进行连接时会发生什么本文向你展示了如何在 Fedora 上使用 NetworkManager 处理某些故障情况,以便你仍然可以访问互联网。
### 强制门户如何工作
强制门户是新设备连接到网络时显示的网页。当用户首次访问互联网时,门户网站会捕获所有网页请求并将其重定向到单个门户页面。
然后,页面要求用户采取一些措施,通常是同意使用政策。用户同意后,他们可以向 RADIUS 或其他类型的身份验证系统进行身份验证。简而言之强制门户根据设备的 MAC 地址和终端用户接受的条款来注册和授权设备。MAC 地址是附加到任何网络接口(例如 WiFi 芯片或卡)上的[基于硬件的值][2]。)
有时,设备无法加载强制门户来进行身份验证和授权,以使用 WiFi 接入。这种情况的例子包括移动设备和游戏机SwitchPlaystation 等)。当连接到互联网时,它们通常不会自动打开强制门户页面。连接到酒店或公共 WiFi 接入点时,你可能会看到这种情况。
不过,你可以在 Fedora 上使用 NetworkManager 来解决这些问题。Fedora 使你可以临时克隆连接设备的 MAC 地址,并代表该设备通过强制门户进行身份验证。你需要得到连接设备的 MAC 地址。通常,它被打印在设备上的某个地方并贴上标签。它是一个六字节的十六进制值,因此看起来类似 _4A:1A:4C:B0:38:1F_。通常,你也可以通过设备的内置菜单找到它。
### 使用 NetworkManager 克隆
首先,打开 _**nm-connection-editor**_,或通过“设置”打开 WiFi 设置。然后,你可以使用 NetworkManager 进行克隆(图形界面操作如下,命令行方式可参考列表后面的示意):
* 对于以太网–选择已连接的以太网连接。然后选择 _Ethernet_ 选项卡。记录或复制当前的 MAC 地址。在 _Cloned MAC address_ 字段中输入游戏机或其他设备的 MAC 地址。
  * 对于 WiFi –选择 WiFi 配置名。然后选择 “WiFi” 选项卡。记录或复制当前的 MAC 地址。在 _Cloned MAC address_ 字段中输入游戏机或其他设备的 MAC 地址。
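如果你更习惯命令行,也可以使用 `nmcli` 完成同样的克隆操作。下面是一个示意(其中的连接名 hotel-wifi、wired-conn 是假设的占位值MAC 地址即上文示例中的值,请替换成你自己的):
```
# 查看已有连接的名称
nmcli connection show
# WiFi 连接:把克隆 MAC 地址设置为目标设备的 MAC 地址
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address 4A:1A:4C:B0:38:1F
# 以太网连接使用对应的属性
nmcli connection modify wired-conn 802-3-ethernet.cloned-mac-address 4A:1A:4C:B0:38:1F
# 重新激活连接使设置生效
nmcli connection up hotel-wifi
```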
### **启动所需的设备**
当 Fedora 系统与以太网或 WiFi 配置连接,克隆的 MAC 地址将用于请求 IP 地址,并加载强制门户。输入所需的凭据和/或选择用户协议。MAC 地址将获得授权。
现在,断开 WiFi 或以太网配置连接,然后将 Fedora 系统的 MAC 地址更改回其原始值。接着启动游戏机或其他设备,该设备现在应该可以访问互联网了,因为它的网络接口已通过你的 Fedora 系统进行了授权。
不过,这不是 NetworkManager 全部能做的。例如,请参阅[随机化系统硬件地址][3],来获得更好的隐私保护。
> [使用 NetworkManager 随机化你的 MAC 地址][3]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/cloning-a-mac-address-to-bypass-a-captive-portal/
作者:[Esteban Wilson][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/swilson/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/clone-mac-nm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/MAC_address
[3]: https://fedoramagazine.org/randomize-mac-address-nm/

View File

@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Update a Fedora Linux System [Beginners Tutorial])
[#]: via: (https://itsfoss.com/update-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
如何更新 Fedora Linux 系统[入门教程]
======
_**本快速教程介绍了更新 Fedora Linux 安装的多种方法。**_
前几天,我安装了[新发布的 Fedora 31][1]。老实说,这是我第一次使用[非 Ubuntu 发行版][2]。
安装 Fedora 之后,我做的第一件事就是尝试安装一些软件。 我打开软件中心,发现该软件中心已“损坏”。 我无法从中安装任何应用程序。
我不确定我的安装出了什么问题。 在团队内部讨论时Abhishek 建议我先更新系统。 我更新了, 更新后一切恢复正常。 更新[Fedora][3]系统后,软件中心也能正常工作了。
有时我们只是忽略了对系统的更新,而继续对我们所面临的问题进行故障排除。 不管问题有多大或多小,为了避免它们,你都应该保持系统更新。
在本文中我将向你展示更新Fedora Linux系统的多种方法。
* [使用软件中心更新 Fedora][4]
* [使用命令行更新 Fedora][5]
* [从系统设置更新 Fedora][6]
请记住,更新 Fedora 意味着安装安全补丁,更新内核和软件。 如果要从 Fedora 的一个版本更新到另一个版本,这称为版本升级,你可以[在此处阅读有关 Fedora 版本升级过程的信息][7]。
### 从软件中心更新 Fedora
![软件中心][8]
你很可能会收到有一些系统更新需要查看的通知,单击该通知就会启动软件中心。
你所要做的就是点击“更新”,并输入 root 密码来开始更新。
如果你没有收到更新通知,只需启动软件中心并转到“更新”选项卡,然后继续更新即可。
### 使用终端更新 Fedora
如果由于某种原因无法加载软件中心则可以使用dnf 软件包管理命令轻松地更新系统。
只需启动终端并输入以下命令即可开始更新(系统将提示你确认 root 密码):
```
sudo dnf upgrade
```
**dnf update 与 dnf upgrade**
你会发现有两个可用的 dnf 命令:`dnf update` 和 `dnf upgrade`。这两个命令执行相同的工作,即安装 Fedora 提供的所有更新。那么,为什么会有 `dnf update``dnf upgrade` 之分,你又应该使用哪一个呢?`dnf update` 基本上是 `dnf upgrade` 的别名。尽管 `dnf update` 可能仍然有效,但最好使用 `dnf upgrade`,因为这才是真正的命令。
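在真正升级之前,你也可以先列出有哪些可用更新。下面是一个简单示意(`dnf check-update` 只列出可用的更新,不会安装任何东西):
```
# 仅列出可用更新,不执行安装
dnf check-update
# 确认后再执行升级
sudo dnf upgrade
```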
### 从系统设置中更新 Fedora
![][9]
如果其它方法都不行(或者由于某种原因已经进入系统设置),请导航至设置底部的“详细信息”选项。
如上图所示,该选项中显示了操作系统和硬件的详细信息,以及一个“检查更新”按钮。你只需单击它并提供 root/admin 密码,即可继续安装可用的更新。
**总结**
如上所述更新Fedora安装非常容易。 有三种方法供你选择,因此无需担心。
如果你按上述说明操作时发现任何问题,请随时在下面的评论部分告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-fedora/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/fedora-31-release/
[2]: https://itsfoss.com/non-ubuntu-beginner-linux/
[3]: https://getfedora.org/
[4]: tmp.Lqr0HBqAd9#software-center
[5]: tmp.Lqr0HBqAd9#command-line
[6]: tmp.Lqr0HBqAd9#system-settings
[7]: https://itsfoss.com/upgrade-fedora-version/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/11/software-center.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/11/system-settings-fedora-1.png?ssl=1