published/20190826 How RPM packages are made- the source RPM.md

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11527-1.html)
[#]: subject: (How RPM packages are made: the source RPM)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)

RPM 包是如何从源 RPM 制作的
======

![][1]

在[上一篇文章中,我们研究了什么是 RPM 软件包][2]。它们是包含文件和元数据的档案文件。当安装或卸载 RPM 时,这些元数据会告诉 RPM 在哪里创建或删除文件。你可能还记得,上一篇文章中说过,元数据还包含有关“依赖项”的信息,依赖项可以是“运行时”的,也可以是“构建时”的。

例如,让我们来看看 `fpaste`。你可以使用 `dnf` 下载该 RPM。这将下载 Fedora 存储库中可用的 `fpaste` 最新版本。在 Fedora 30 上,当前版本为 0.3.9.2:

```
$ dnf download fpaste

...
fpaste-0.3.9.2-2.fc30.noarch.rpm
```

由于这是个已构建的(二进制)RPM,因此它仅包含使用 `fpaste` 所需的文件:

```
$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.noarch.rpm
/usr/bin/fpaste
/usr/share/doc/fpaste
/usr/share/doc/fpaste/README.rst
/usr/share/doc/fpaste/TODO
/usr/share/licenses/fpaste
/usr/share/licenses/fpaste/COPYING
/usr/share/man/man1/fpaste.1.gz
```
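除了用 `-qpl` 列出文件,还可以用 `rpm -qpi` 查看这个 RPM 文件自带的元数据。下面是一个小示意脚本(假设前面下载的 RPM 就在当前目录;若系统没有 `rpm` 或文件不存在,则只打印提示):

```shell
# 示意:查看一个尚未安装的 RPM 文件的元数据。
# -p 表示查询一个软件包文件(而非已安装的包),-i 显示其信息头。
pkg=./fpaste-0.3.9.2-2.fc30.noarch.rpm
if command -v rpm >/dev/null 2>&1 && [ -f "$pkg" ]; then
    rpm -qpi "$pkg"
else
    echo "跳过:rpm 或 $pkg 不可用"
fi
```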

### 源 RPM

在此链条中的下一个环节是源 RPM。Fedora 中的所有软件都必须从其源代码构建,我们不使用预构建的二进制文件。因此,要制作一个 RPM 文件,RPM(工具)需要被告知:

* 要安装哪些文件,
* 如果这些文件需要编译,那么如何从源代码生成它们,
* 这些文件必须安装到何处,
* 该软件正常工作还需要哪些其他依赖项。

源 RPM 拥有所有这些信息。源 RPM 与构建出的 RPM 相似,但顾名思义,它们不包含已构建的二进制文件,而是包含软件的源文件。让我们下载 `fpaste` 的源 RPM:

```
$ dnf download fpaste --source

...
fpaste-0.3.9.2-2.fc30.src.rpm
```

注意文件名以 `src.rpm` 结尾。所有的 RPM 都是从源 RPM 构建的。你也可以使用 `dnf` 轻松查询某个“二进制” RPM 对应的源 RPM:

```
$ dnf repoquery --qf "%{SOURCERPM}" fpaste
fpaste-0.3.9.2-2.fc30.src.rpm
```

另外,由于这是源 RPM,因此它不包含构建出的文件。相反,它包含源代码以及如何从中构建 RPM 的指令:

```
$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```

这里,第一个文件只是 `fpaste` 的源代码。第二个是 spec 文件。spec 文件是个“配方”,它告诉 RPM(工具)如何使用源 RPM 中包含的源代码创建 RPM(档案文件),其中包含了 RPM(工具)构建 RPM(档案文件)所需的所有信息。当我们软件包维护人员添加软件到 Fedora 中时,我们大部分时间都花在编写和完善 spec 文件上。当软件包需要更新时,我们会回过头来调整 spec 文件。你可以在 <https://src.fedoraproject.org/browse/projects/> 的源代码存储库中查看 Fedora 中所有软件包的 spec 文件。

请注意,一个源 RPM 可能包含构建多个 RPM 的说明。`fpaste` 是一款非常简单的软件,一个源 RPM 生成一个“二进制” RPM。而 Python 则更复杂,虽然只有一个源 RPM,但它会生成多个二进制 RPM:

```
$ sudo dnf repoquery --qf "%{SOURCERPM}" python3
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-devel
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-libs
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-idle
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-tkinter
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm
```

用 RPM 行话来讲,“python3” 是“主包”,因此该 spec 文件将称为 `python3.spec`。所有其他软件包均为“子软件包”。你可以下载 python3 的源 RPM,并查看其中的内容。(提示:补丁也是源代码的一部分):

```
$ dnf download --source python3
python3-3.7.4-1.fc30.src.rpm

$ rpm -qpl ./python3-3.7.4-1.fc30.src.rpm
00001-rpath.patch
00102-lib64.patch
00111-no-static-lib.patch
00155-avoid-ctypes-thunks.patch
00170-gc-assertions.patch
00178-dont-duplicate-flags-in-sysconfig.patch
00189-use-rpm-wheels.patch
00205-make-libpl-respect-lib64.patch
00251-change-user-install-location.patch
00274-fix-arch-names.patch
00316-mark-bdist_wininst-unsupported.patch
Python-3.7.4.tar.xz
check-pyc-timestamps.py
idle3.appdata.xml
idle3.desktop
python3.spec
```
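如果想在不安装源 RPM 的情况下查看或提取其中的文件,可以借助 `rpm2cpio` 和 `cpio`。下面是一个示意脚本,假设源 RPM 文件就在当前目录;工具或文件不存在时会跳过:

```shell
# 示意:把源 RPM 的内容解开到一个子目录中,而不安装它。
srpm=fpaste-0.3.9.2-2.fc30.src.rpm
if command -v rpm2cpio >/dev/null 2>&1 && [ -f "$srpm" ]; then
    mkdir -p srpm-contents
    # rpm2cpio 把 RPM 的有效载荷转成 cpio 流,cpio -idmv 将其展开
    ( cd srpm-contents && rpm2cpio "../$srpm" | cpio -idmv )
    ls srpm-contents
else
    echo "跳过:rpm2cpio 或 $srpm 不可用"
fi
```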

### 从源 RPM 构建 RPM

现在我们有了源 RPM,也知道了其中的内容,接下来可以从它重建 RPM。但是,在此之前,我们应该先把系统设置好以构建 RPM。首先,安装必需的工具:

```
$ sudo dnf install fedora-packager
```

这将安装 `rpmbuild` 工具。`rpmbuild` 需要一个默认布局,以便它知道源 RPM 中每个必需组件的位置。让我们看看它们是什么:

```
# spec 文件将放在哪里?
$ rpm -E %{_specdir}
/home/asinha/rpmbuild/SPECS

# 源代码将放在哪里?
$ rpm -E %{_sourcedir}
/home/asinha/rpmbuild/SOURCES

# 临时构建目录是哪里?
$ rpm -E %{_builddir}
/home/asinha/rpmbuild/BUILD

# 构建根目录是哪里?
$ rpm -E %{_buildrootdir}
/home/asinha/rpmbuild/BUILDROOT

# 源 RPM 将放在哪里?
$ rpm -E %{_srcrpmdir}
/home/asinha/rpmbuild/SRPMS

# 构建出的 RPM 将放在哪里?
$ rpm -E %{_rpmdir}
/home/asinha/rpmbuild/RPMS
```
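这些目录宏也可以用一个小循环一次性展开查看(`rpm -E` 等价于 `rpm --eval`;下面只是一个示意,系统上没有 `rpm` 时会给出提示):

```shell
# 示意:循环展开 rpmbuild 布局用到的各个目录宏。
for m in _specdir _sourcedir _builddir _buildrootdir _srcrpmdir _rpmdir; do
    if command -v rpm >/dev/null 2>&1; then
        # printf 里的 %% 输出一个字面的 % 字符
        printf '%%{%s} => %s\n' "$m" "$(rpm -E "%{$m}")"
    else
        printf '%%{%s} => (rpm 不可用)\n' "$m"
    fi
done
```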

我已经在系统上设置了所有这些目录:

```
$ cd
$ tree -L 1 rpmbuild/
rpmbuild/
├── BUILD
├── BUILDROOT
├── RPMS
├── SOURCES
├── SPECS
└── SRPMS

6 directories, 0 files
```

RPM 还提供了一个能一次性为你设置好这些目录的工具:

```
$ rpmdev-setuptree
```

然后,确保已安装 `fpaste` 的所有构建依赖项:

```
$ sudo dnf builddep fpaste-0.3.9.2-3.fc30.src.rpm
```

对于 `fpaste`,你只需要 Python,它肯定已经安装在你的系统上了(`dnf` 本身就使用 Python)。还可以给 `builddep` 命令一个 spec 文件,而不是源 RPM。在手册页中了解更多信息:

```
$ man dnf.plugin.builddep
```

现在我们有了所需的一切,从源 RPM 构建一个 RPM 就是这样简单:

```
$ rpmbuild --rebuild fpaste-0.3.9.2-3.fc30.src.rpm
..
..

$ tree ~/rpmbuild/RPMS/noarch/
/home/asinha/rpmbuild/RPMS/noarch/
└── fpaste-0.3.9.2-3.fc30.noarch.rpm

0 directories, 1 file
```

`rpmbuild` 将安装(解开)源 RPM 并从中构建出 RPM。现在,你可以使用 `dnf` 安装该 RPM 来使用它。当然,如前所述,如果你想对 RPM 做任何更改,则必须修改 spec 文件,我们将在下一篇文章中介绍 spec 文件。

### 总结

总结一下这篇文章的两个要点:

* 我们通常安装使用的 RPM 是包含软件已构建版本的“二进制” RPM;
* 二进制 RPM 构建自源 RPM,源 RPM 包含生成二进制 RPM 所需的源代码和 spec 文件。

如果你想开始构建 RPM,并帮助 Fedora 社区维护我们提供的大量软件,则可以从这里开始:<https://fedoraproject.org/wiki/Join_the_package_collection_maintainers>

如有任何疑问,请发邮件到 [Fedora 开发人员邮件列表][3],我们随时乐意为你提供帮助!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/

作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://linux.cn/article-11452-1.html
[3]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11529-1.html)
[#]: subject: (Someone Forked GIMP into Glimpse Because Gimp is an Offensive Word)
[#]: via: (https://itsfoss.com/gimp-fork-glimpse/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

由于 GIMP 是令人反感的字眼,有人将它复刻了
======

在开源应用程序世界中,当社区成员希望以与其他人不同的方向来开发应用程序时,<ruby>复刻<rt>fork</rt></ruby>是很常见的。最新的一个具有新闻价值的复刻称为 [Glimpse][1],旨在解决用户在使用 <ruby>[GNU 图像处理程序][2]<rt>GNU Image Manipulation Program</rt></ruby>(通常称为 GIMP)时遇到的某些问题。

### 为什么创建 GIMP 的复刻?

![][3]

当你访问 Glimpse 应用的[主页][1]时,它表示该项目的目标是“尝试其他设计方向并修复长期存在的错误”。这听起来并不奇怪。但是,如果你开始阅读该项目的博客文章,则会得到另外一种印象。

根据该项目的[第一篇博客文章][4],他们创建这个复刻是因为他们不喜欢 GIMP 这个名称。根据该帖子,“我们中的许多人不认为该软件的名称适用于所有用户,在该项目拒绝改名 13 年后,我们决定复刻!”

如果你想知道为什么这些人认为 GIMP 这个名称令人反感,他们在[关于页面][5]中回答了这个问题:

> “如果英语不是你的母语,那么你可能没有意识到 ‘gimp’ 一词有问题。在某些国家,这被视为针对残疾人的侮辱,以及针对不受欢迎儿童的操场辱骂。它也可能与成年人之间两相情愿的某些‘天黑后’活动联系起来。”

他们还指出,做出这一举动并非出于政治正确或过于敏感。“除了可能给边缘化社区带来的痛苦外,我们当中许多人都有过倡导自由软件的经历,比如 GNU 图像处理程序因为名称而没有被专业环境中的老板或同事视为可选项。”

他们似乎是在回应许多质疑:“不幸的是,我们不得不复刻整个项目才能更改其名称,我们认为有关此问题的讨论已陷入僵局,而这是最积极的前进方向。”

看起来 Glimpse 这个名称也并非板上钉钉。他们的 GitHub 页面上有个关于可能选择其他名称的[提案][7]。也许他们应该放弃 GNU 这个词,我认为 IMP 这个词没有不好的含义。(LCTT 译注:反讽)

### 复刻之路

![GIMP 2.10][8]

[GIMP][6] 已经存在了 20 多年,因此任何形式的复刻都是一项艰巨的任务。当前,[他们正在计划][9]首先在 2019 年 9 月发布 Glimpse 0.1。这将是一个软复刻,这意味着在迁移到新身份时的更改将主要是装饰性的。(LCTT 译注:事实上到本译文发布时,该项目仍然处于 0.1 beta,也许 11 月,也许 12 月,才能发布 0.1 的正式版本。)

Glimpse 1.0 将是一个硬复刻,他们将积极地更改和添加代码。他们想将 1.0 移植到 GTK3 并拥有自己的文档。他们估计,直到 2020 年 GIMP 3 发布之后才能做到。

除了 1.0,Glimpse 团队还计划打造自己的特色。他们计划进行“前端 UI 重写”。他们目前正在讨论[改用哪种语言][10],D 和 Rust 似乎有很多支持者。随着时间的流逝,他们也[希望][4]“添加新功能以解决普通用户的抱怨”。

### 最后的思考

我过去曾经使用过一点 GIMP,但从来没有对它的名称感到困扰。老实说,我很长一段时间都不知道这个词是什么意思。有趣的是,当我在 Wikipedia 上搜索 GIMP 时,看到了一个 [GIMP 项目][11]的条目,这是纽约的一个现代舞蹈项目,其成员包括残疾人。看来并非每个人都将 gimp 视为贬义词。

对我来说,更改名称似乎需要大量工作。重写 UI 的想法倒似乎会使这个项目更有价值一些。我想知道他们是否会调整它以带来更经典的 UI,例如[在 GIMP][12] / Glimpse 中用 Ctrl+S 保存。让我们拭目以待。

如果你对该项目感兴趣,可以在 [Twitter][14] 上关注他们,查看其 [GitHub 帐户][15],或查看其 [Patreon 页面][16]。

你觉得被 GIMP 这个名称冒犯了吗?你是否认为值得为了重命名而复刻一个应用程序?在下面的评论中告诉我们。

如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][17] 上分享。

--------------------------------------------------------------------------------

via: https://itsfoss.com/gimp-fork-glimpse/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://getglimpse.app/
[2]: https://www.gimp.org/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/gimp-fork-glimpse.png?resize=800%2C450&ssl=1
[4]: https://getglimpse.app/posts/so-it-begins/
[5]: https://getglimpse.app/about/
[6]: https://itsfoss.com/gimp-2-10-release/
[7]: https://github.com/glimpse-editor/Glimpse/issues/92
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/gimp-screenshot.jpg?resize=800%2C508&ssl=1
[9]: https://getglimpse.app/posts/six-week-checkpoint/
[10]: https://github.com/glimpse-editor/Glimpse/issues/70
[11]: https://en.wikipedia.org/wiki/The_Gimp_Project
[12]: https://itsfoss.com/how-to-solve-gimp-2-8-does-not-save-in-jpeg-or-png-format/
[13]: https://itsfoss.com/wps-office-2016-linux/
[14]: https://twitter.com/glimpse_editor
[15]: https://github.com/glimpse-editor/Glimpse
[16]: https://www.patreon.com/glimpse
[17]: https://reddit.com/r/linuxusersgroup
published/20190902 How RPM packages are made- the spec file.md

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11538-1.html)
[#]: subject: (How RPM packages are made: the spec file)
[#]: via: (https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/)
[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/)

如何编写 RPM 的 spec 文件
======

![][1]

在[关于 RPM 软件包构建的上一篇文章][2]中,你了解到源 RPM 包括软件的源代码以及 spec 文件。这篇文章将深入研究 spec 文件,其中包含了如何构建 RPM 的指令。同样,本文以 `fpaste` 为例。

### 了解源代码

在开始编写 spec 文件之前,你需要对要打包的软件有所了解。在这里,你研究的是 `fpaste`,一个非常简单的软件。它是用 Python 编写的单文件脚本。当它发布新版本时,可在 Pagure 上找到:<https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz>。

如该档案文件所示,当前版本为 0.3.9.2。下载它,以便你查看该档案文件中的内容:

```
$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste
```

你要安装的文件是:

* `fpaste.py`:应该安装到 `/usr/bin/`。
* `docs/man/en/fpaste.1`:手册页,应放到 `/usr/share/man/man1/`。
* `COPYING`:许可证文本,应放到 `/usr/share/license/fpaste/`。
* `README.rst`、`TODO`:放到 `/usr/share/doc/fpaste/` 下的其它文档。

这些文件的安装位置取决于文件系统层次结构标准(FHS)。要了解更多信息,可以在这里阅读:<http://www.pathname.com/fhs/>,或查看 Fedora 系统的手册页:

```
$ man hier
```

#### 第一部分:要构建什么?

现在我们知道了源代码档案中有哪些文件,以及它们要存放的位置,让我们来看一下 spec 文件。你可以在此处查看这个完整的文件:<https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec>。

这是 spec 文件的第一部分:

```
Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz

Requires: python3

%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin
```

`Name`、`Version` 等称为*标签*,它们是在 RPM 中定义好的。这意味着你不能随意编造标签,RPM 无法理解它们!需要注意的标签有:

* `Source0`:告诉 RPM 该软件的源代码档案文件所在的位置。
* `Requires`:列出软件的运行时依赖项。RPM 可以自动检测很多依赖项,但是在某些情况下必须手动指明。运行时依赖项是系统上必须具有的能力(通常是软件包),该软件包才能正常工作。这也是 [dnf][3] 在安装此软件包时检测是否需要拉取其他软件包的依据。
* `BuildRequires`:列出此软件的构建时依赖项。这些通常必须手动确定并添加到 spec 文件中。
* `BuildArch`:此软件所针对的计算机体系结构。如果省略此标签,则将为所有受支持的体系结构构建该软件。值 `noarch` 表示该软件与体系结构无关(例如 `fpaste`,它完全是用 Python 编写的)。

本节提供有关 `fpaste` 的常规信息:它是什么,哪个版本被制作成了 RPM,它的许可证等等。如果你已安装 `fpaste`,并查看其元数据,就可以看到 RPM 中包含的这些信息:

```
$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...
```

RPM 还会自动添加一些其他标签,以代表它所知道的内容。

至此,我们掌握了要为其构建 RPM 的软件的一般信息。接下来,我们开始告诉 RPM 要做什么。

#### 第二部分:准备构建

spec 文件的下一部分是准备部分,用 `%prep` 表示:

```
%prep
%autosetup
```

对于 `fpaste`,这里唯一的命令是 `%autosetup`。它将 tar 档案文件提取到一个新文件夹中,为下一步的构建阶段做好准备。你可以在此处执行更多操作,例如应用补丁程序、出于不同目的修改文件等等。如果你查看过 Python 的源 RPM 的内容,就会看到那里有许多补丁,它们都会在本节中被应用。

通常,spec 文件中带有 `%` 前缀的所有内容都是 RPM 以特殊方式解释的宏或标签。这些通常会带有大括号,例如 `%{example}`。

#### 第三部分:构建软件

下一部分是构建软件的地方,用 `%build` 表示。由于 `fpaste` 是一个简单的纯 Python 脚本,因此无需构建,这里是:

```
%build
#nothing required
```

不过,通常来说,你会在此处使用构建命令,例如:

```
configure; make
```

构建部分通常是 spec 文件中最难的部分,因为这是从源代码构建软件的地方。这要求你知道该工具使用的是哪个构建系统,可能是许多构建系统之一:Autotools、CMake、Meson、Setuptools(用于 Python)等等。每个构建系统都有自己的命令和语法风格,你需要充分了解它们才能正确构建软件。
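作为参考,下面粗略示意几种常见构建系统在 `%build` 部分的典型写法。这只是示意,并非 `fpaste` 所需;这些宏(`%configure`、`%make_build`、`%cmake` 等)是 Fedora 提供的打包宏,具体用法以 Fedora 打包指南为准:

```
%build
# Autotools 项目的常见写法:
%configure
%make_build

# CMake 项目则通常是:
#   %cmake
#   %cmake_build

# 使用 Setuptools 的 Python 项目(旧式宏)则是:
#   %py3_build
```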

#### 第四部分:安装文件

软件构建后,需要在 `%install` 部分中安装它:

```
%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}
```

在构建 RPM 时,RPM 不会修改你的系统文件。在一个正常运行的系统上添加、删除或修改文件的风险太大,如果出了故障怎么办?因此,RPM 会创建一个专门打造的文件系统并在其中工作,这称为 `buildroot`。因此,在 `buildroot` 中,我们创建由宏 `%{_bindir}` 代表的 `/usr/bin` 目录,然后使用提供的 `Makefile` 将文件安装到其中。

至此,我们已经在专门打造的 `buildroot` 中安装了 `fpaste` 的构建版本。

#### 第五部分:列出所有要包括在 RPM 中的文件

spec 文件其后的一部分是文件部分:`%files`。在这里,我们告诉 RPM 哪些文件要包含进从该 spec 文件创建的档案文件中。`fpaste` 的文件部分非常简单:

```
%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING
```

请注意,在这里,我们没有指定 `buildroot`,所有这些路径都是相对于它的。`%doc` 和 `%license` 命令做的稍微多一点:它们会创建所需的文件夹,并记住这些文件必须放在那里。

RPM 很聪明。例如,如果你在 `%install` 部分中安装了文件,但未在这里列出它们,它会提醒你。

#### 第六部分:在变更日志中记录所有变更

Fedora 是一个基于社区的项目,许多贡献者维护或共同维护软件包。因此,当务之急是避免对软件包做了哪些更改产生混乱。为了确保这一点,spec 文件的最后一部分是变更日志 `%changelog`:

```
%changelog
* Thu Jul 25 2019 Fedora Release Engineering < ...> - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild

* Thu Jan 31 2019 Fedora Release Engineering < ...> - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild

* Tue Jul 24 2018 Ankur Sinha - 0.3.9.2-1
- Update to 0.3.9.2

* Fri Jul 13 2018 Fedora Release Engineering < ...> - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild

* Wed Feb 07 2018 Fedora Release Engineering < ..> - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild

* Sun Sep 10 2017 Vasiliy N. Glazov < ...> - 0.3.9.1-2
- Cleanup spec

* Fri Sep 08 2017 Ankur Sinha - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605
...
....
```

spec 文件的*每项*变更都必须有一个变更日志条目。如你在此处所见,虽然我以维护者身份更新了该 spec 文件,但其他人也做过更改。清楚地记录变更内容有助于所有人了解该 spec 文件的当前状态。对于系统上安装的所有软件包,都可以使用 `rpm` 来查看其变更日志:

```
$ rpm -q --changelog fpaste
```

### 构建 RPM

现在我们准备构建 RPM 包。如果要跟着执行以下命令,请确保已遵循[上一篇文章][2]中的步骤设置好了用于构建 RPM 的系统。

我们将 `fpaste` 的 spec 文件放置在 `~/rpmbuild/SPECS` 中,将源代码档案文件放在 `~/rpmbuild/SOURCES/` 中,然后就可以创建源 RPM 了:

```
$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec

$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz

$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
```

让我们看一下结果:

```
$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm

$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec
```

源 RPM 已经构建好了。让我们同时构建源 RPM 和二进制 RPM:

```
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..
```

RPM 将向你显示完整的构建输出,详细展示我们前面看到的每个部分中它做了什么。这份“构建日志”非常重要:当构建未按预期进行时,打包人员会花费大量时间翻阅它,沿着完整的构建路径追踪,查看哪里出了问题。

就是这样!准备安装的 RPM 应该位于以下位置:

```
$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm
```
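在安装之前,还可以再检查一下构建产物:用 `rpm -qpl` 列出其中的文件;如果安装了 `rpmlint`(Fedora 打包中常用的静态检查工具),也可以用它检查常见的打包问题。下面是一个示意脚本,文件不存在时跳过:

```shell
# 示意:检查刚构建出来的二进制 RPM。
built_rpm="$HOME/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm"
if [ -f "$built_rpm" ]; then
    # 列出包内文件
    rpm -qpl "$built_rpm"
    # 可选:用 rpmlint 做静态检查(未安装则跳过)
    { command -v rpmlint >/dev/null 2>&1 && rpmlint "$built_rpm"; } || true
else
    echo "跳过:$built_rpm 不存在"
fi
```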

### 概括

我们已经介绍了如何从 spec 文件构建 RPM 的基础知识。这绝不是一份详尽的文档,实际上它根本不是文档,只是试图解释幕后的运作方式。简短回顾一下:

* RPM 有两种类型:源 RPM 和二进制 RPM。
* 二进制 RPM 包含要安装以使用该软件的文件。
* 源 RPM 包含构建二进制 RPM 所需的信息:完整的源代码,以及 spec 文件中有关如何构建 RPM 的说明。
* spec 文件包含多个部分,每个部分都有其自己的用途。

在这里,我们是在安装好的 Fedora 系统中本地构建 RPM 的。虽然这是基本的流程,但我们从存储库中获得的 RPM 是构建在具有严格配置和方法的专用服务器上的,以确保正确性和安全性。这个 Fedora 打包流程将在以后的文章中讨论。

你想开始构建软件包,并帮助 Fedora 社区维护我们提供的大量软件吗?你可以[从这里开始加入软件包集合维护者][4]。

如有任何疑问,请发布到 [Fedora 开发人员邮件列表][5],我们随时乐意为你提供帮助!

### 参考

这里有一些构建 RPM 的有用参考:

* <https://fedoraproject.org/wiki/How_to_create_an_RPM_package>
* <https://docs.fedoraproject.org/en-US/quick-docs/create-hello-world-rpm/>
* <https://docs.fedoraproject.org/en-US/packaging-guidelines/>
* <https://rpm.org/documentation.html>

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/

作者:[Ankur Sinha "FranciscoD"][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/ankursinha/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/rpm.png-816x345.jpg
[2]: https://linux.cn/article-11527-1.html
[3]: https://fedoramagazine.org/managing-packages-fedora-dnf/
[4]: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
[5]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
published/20190905 Building CI-CD pipelines with Jenkins.md

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11546-1.html)
[#]: subject: (Building CI/CD pipelines with Jenkins)
[#]: via: (https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins)
[#]: author: (Bryant Son https://opensource.com/users/brson)

用 Jenkins 构建 CI/CD 流水线
======

> 通过这份 Jenkins 分步教程,构建持续集成和持续交付(CI/CD)流水线。

![](https://img.linux.net.cn/data/attachment/album/201911/07/001349rbbbswpeqnnteeee.jpg)

在我的文章《[使用开源工具构建 DevOps 流水线的初学者指南][2]》中,我分享了一个从头开始构建 DevOps 流水线的故事。推动该计划的核心技术是 [Jenkins][3],这是一个用于建立持续集成和持续交付(CI/CD)流水线的开源工具。

在花旗,有一个单独的团队为专用的 Jenkins 流水线提供稳定的主从节点环境,但是该环境仅用于质量保证(QA)、构建阶段和生产环境。开发环境仍然是非常手动的,我们的团队需要对其进行自动化,以便在加快开发工作的同时获得尽可能多的灵活性。这就是我们决定为 DevOps 建立 CI/CD 流水线的原因。Jenkins 的开源版本由于其灵活性、开放性、强大的插件功能和易用性而成为显而易见的选择。

在本文中,我将分步演示如何使用 Jenkins 构建 CI/CD 流水线。

### 什么是流水线?

在进入本教程之前,了解有关 CI/CD <ruby>流水线<rt>pipeline</rt></ruby>的知识会很有帮助。

首先,了解 Jenkins 本身并不是流水线这一点很有帮助。仅仅创建一个新的 Jenkins 作业并不能构成一条流水线。可以把 Jenkins 看做一个遥控器,一个点击按钮的地方。当你点击按钮时会发生什么,取决于遥控器要控制的内容。Jenkins 为其他应用程序的 API、软件库、构建工具等提供了一种接入 Jenkins 的方法,它可以执行并自动化任务。Jenkins 本身不执行任何功能,但是随着其它工具的接入而变得越来越强大。

流水线是一个单独的概念,指的是按顺序连接在一起的一组事件或作业:

> “<ruby>流水线<rt>pipeline</rt></ruby>”是可以执行的一系列事件或作业。

理解流水线的最简单方法是可视化一系列阶段,如下所示:

![Pipeline example][4]

在这里,你应该看到两个熟悉的概念:<ruby>阶段<rt>Stage</rt></ruby>和<ruby>步骤<rt>Step</rt></ruby>。

* 阶段:一个包含一系列步骤的块。阶段块可以使用任意名称;它用于可视化流水线过程。
* 步骤:表明要做什么的任务。步骤定义在阶段块内。

在上面的示例图中,阶段 1 可以命名为“构建”、“收集信息”或其它名称,其它阶段块也可以采用类似的思路。“步骤”只是简单地说明要执行的内容,它可以是简单的打印命令(例如,`echo "Hello, World"`)、程序执行命令(例如,`java HelloWorld`)、shell 执行命令(例如,`chmod 755 Hello`)或任何其他命令,只要 Jenkins 环境将其识别为可执行命令即可。

Jenkins 流水线以**编码脚本**的形式提供,通常称为 “Jenkinsfile”,尽管也可以用别的文件名。下面是一个简单的 Jenkins 流水线文件的示例(LCTT 译注:为使脚本能通过声明式流水线的语法检查,这里将 `javac`、`java`、`mvn` 等外部命令放进了 `sh` 步骤中执行):

```
// Example of Jenkins pipeline script

pipeline {
  stages {
    stage("Build") {
      steps {
        // Just print a Hello, Pipeline to the console
        echo "Hello, Pipeline!"
        // Compile a Java file. This requires JDK configuration from Jenkins
        sh "javac HelloWorld.java"
        // Execute the compiled Java binary called HelloWorld. This requires JDK configuration from Jenkins
        sh "java HelloWorld"
        // Executes the Apache Maven commands, clean then package. This requires Apache Maven configuration from Jenkins
        sh "mvn clean package ./HelloPackage"
        // List the files in current directory path by executing a default shell command
        sh "ls -ltr"
      }
    }
    // And next stages if you want to define further...
  } // End of stages
} // End of pipeline
```

从此示例脚本很容易看出 Jenkins 流水线的结构。请注意,默认情况下某些命令(如 `java`、`javac` 和 `mvn`)不可用,需要通过 Jenkins 进行安装和配置。因此:

> Jenkins 流水线是一种以定义好的方式依次执行 Jenkins 作业的方法,做法是将其编写为代码,并组织成多个可以包含多个任务步骤的块。

好。既然你已经了解了 Jenkins 流水线是什么,我将向你展示如何创建和执行 Jenkins 流水线。在本教程的最后,你将建立一个如下所示的 Jenkins 流水线:

![Final Result][5]

### 如何构建 Jenkins 流水线

为了便于遵循本教程的步骤,我创建了一个示例 [GitHub 存储库][6]和一个视频教程。

- [视频](https://img.linux.net.cn/static/video/_-jDPwYgDVKlg.mp4)

开始本教程之前,你需要:

* Java 开发工具包(JDK):如果尚未安装,请安装 JDK 并将其添加到环境路径中,以便可以通过终端执行 Java 命令(如 `java -jar`)。这是使用本教程中的 Java Web Archive(WAR)版本的 Jenkins 所必需的(尽管你可以使用任何其他发行版)。
* 基本计算机操作能力:你应该知道如何键入一些代码、通过 shell 执行基本的 Linux 命令以及打开浏览器。

让我们开始吧。

#### 步骤一:下载 Jenkins

导航到 [Jenkins 下载页面][7]。向下滚动到 “Generic Java package (.war)”,然后单击下载文件;将其保存在易于找到的位置。(如果你选择其他 Jenkins 发行版,则除了步骤二之外,本教程的其余步骤应该几乎相同。)使用 WAR 文件的原因是它是个一次性可执行文件,可以轻松地执行和删除。

![Download Jenkins as Java WAR file][8]

#### 步骤二:以 Java 二进制方式执行 Jenkins

打开一个终端窗口,并使用 `cd <your path>` 进入下载 Jenkins 的目录。(在继续之前,请确保已安装 JDK 并将其添加到环境路径。)执行以下命令,该命令将 WAR 文件作为可执行二进制文件运行:

```
java -jar ./jenkins.war
```

如果一切顺利,Jenkins 应该在默认端口 8080 上启动并运行。
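顺带一提,如果 8080 端口已被占用,WAR 发行版支持用 `--httpPort` 参数换一个端口启动。下面是一个示意脚本,假设 `jenkins.war` 在当前目录,未找到时只打印提示:

```shell
# 示意:让 Jenkins 监听 9090 端口而不是默认的 8080。
port=9090
if [ -f ./jenkins.war ] && command -v java >/dev/null 2>&1; then
    java -jar ./jenkins.war --httpPort="$port"
else
    echo "跳过:未找到 jenkins.war 或 java;需要换端口时可运行:java -jar ./jenkins.war --httpPort=$port"
fi
```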

![Execute as an executable JAR binary][9]

#### 步骤三:创建一个新的 Jenkins 作业

打开一个 Web 浏览器并导航到 `localhost:8080`。除非你安装过以前的 Jenkins,否则应该会直接进入 Jenkins 仪表板。点击 “Create New Jobs”。你也可以点击左侧的 “New Item”。

![Create New Job][10]

#### 步骤四:创建一个流水线作业

在此步骤中,你可以选择并定义要创建的 Jenkins 作业类型。选择 “Pipeline” 并为其命名(例如,“TestPipeline”)。单击 “OK” 创建流水线作业。

![Create New Pipeline Job][11]

你将看到一个 Jenkins 作业配置页面。向下滚动以找到 “Pipeline” 部分。有两种执行 Jenkins 流水线的方法:一种是在 Jenkins 上直接编写流水线脚本,另一种是从 SCM(源代码管理)中检索 Jenkinsfile。在接下来的两个步骤中,我们将体验这两种方式。

#### 步骤五:通过直接脚本配置并执行流水线作业

要使用直接脚本执行流水线,请首先从 GitHub 复制该 [Jenkinsfile 示例][6]的内容。选择 “Pipeline script” 作为 “Destination”,然后将该 Jenkinsfile 的内容粘贴到 “Script” 中。花一些时间研究一下这个 Jenkinsfile 的结构。注意,共有三个阶段:Build、Test 和 Deploy,它们的名字是任意的,可以是任何名称。每个阶段中都有一些步骤;在此示例中,它们只是打印一些随机消息。

单击 “Save” 以保存更改,这将自动将你带回到 “Job Overview” 页面。

![Configure to Run as Jenkins Script][12]

要开始构建流水线,请单击 “Build Now”。如果一切正常,你将看到第一条流水线(如下面的这个)。

![Click Build Now and See Result][13]

要查看流水线脚本构建的输出,请单击任何阶段,然后单击 “Log”。你会看到这样的消息。

![Visit sample GitHub with Jenkins get clone link][14]

#### 步骤六:通过 SCM 配置并执行流水线作业

现在,换个方式:在此步骤中,你将通过从源代码管理的 GitHub 中获取 Jenkinsfile 来部署相同的 Jenkins 作业。在同一个 [GitHub 存储库][6]中,通过单击 “Clone or download” 并复制其 URL 来找到其存储库 URL。

![Checkout from GitHub][15]

单击 “Configure” 以修改现有作业。滚动到 “Advanced Project Options” 设置,但这一次,从 “Destination” 下拉列表中选择 “Pipeline script from SCM” 选项。将 GitHub 存储库的 URL 粘贴到 “Repository URL” 中,然后在 “Script Path” 中键入 “Jenkinsfile”。单击 “Save” 按钮保存。

![Change to Pipeline script from SCM][16]

要构建流水线,回到 “Task Overview” 页面后,单击 “Build Now” 以再次执行作业。结果与之前相同,只是多了一个称为 “Declarative: Checkout SCM” 的阶段。

![Build again and verify][17]

要查看来自 SCM 构建的流水线的输出,请单击该阶段并查看 “Log”,以检查源代码管理克隆过程的进行情况。

![Verify Checkout Procedure][18]

### 除了打印消息,还能做更多

恭喜你!你已经建立了第一条 Jenkins 流水线!

“但是等等”,你说,“这太有限了。除了打印无用的消息外,我什么都做不了。”那没问题。到目前为止,本教程仅简要介绍了 Jenkins 流水线可以做什么,但是你可以通过将其与其他工具集成来扩展其功能。以下是给你的下一个项目的一些思路:

* 建立一个多阶段的 Java 构建流水线,包含以下阶段:从 Nexus 或 Artifactory 之类的 JAR 存储库中拉取依赖项、编译 Java 代码、运行单元测试、打包为 JAR/WAR 文件,然后部署到云服务器。
* 实现一个高级代码测试仪表板,该仪表板将基于 Selenium 的单元测试、负载测试和自动用户界面测试,报告项目的运行状况。
* 构建多流水线或多用户流水线,以自动化执行 Ansible 剧本的任务,同时允许授权用户响应正在进行的任务。
* 设计完整的端到端 DevOps 流水线,该流水线可提取存储在 SCM 中的基础设施资源文件和配置文件(例如 GitHub),并通过各种运行时程序执行该脚本。

学习本文结尾处的任何教程,以了解这些更高级的案例。

#### 管理 Jenkins

在 Jenkins 主面板,点击 “Manage Jenkins”。

![Manage Jenkins][19]

#### 全局工具配置

这里有许多可用工具,包括管理插件、查看系统日志等。单击 “Global Tool Configuration”。

![Global Tools Configuration][20]

#### 增加附加能力

在这里,你可以添加 JDK 路径、Git、Gradle 等。配置工具后,只需将相应命令添加到 Jenkinsfile 中或通过 Jenkins 脚本执行即可。

![See Various Options for Plugin][21]

### 后继

本文为你介绍了使用酷炫的开源工具 Jenkins 创建 CI/CD 流水线的方法。要了解你可以使用 Jenkins 完成的许多其他操作,请在 Opensource.com 上查看以下其他文章:

* [Jenkins X 入门][22]
* [使用 Jenkins 安装 OpenStack 云][23]
* [在容器中运行 Jenkins][24]
* [Jenkins 流水线入门][25]
* [如何与 Jenkins 一起运行 JMeter][26]
* [将 OpenStack 集成到你的 Jenkins 工作流中][27]

你可能对我为你的开源之旅而写的其他一些文章感兴趣:

* [9 个用于构建容错系统的开源工具][28]
* [了解软件设计模式][29]
* [使用开源工具构建 DevOps 流水线的初学者指南][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/intro-building-cicd-pipelines-jenkins

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg (pipelines)
[2]: https://linux.cn/article-11307-1.html
[3]: https://jenkins.io/
[4]: https://opensource.com/sites/default/files/uploads/diagrampipeline.jpg (Pipeline example)
[5]: https://opensource.com/sites/default/files/uploads/0_endresultpreview_0.jpg (Final Result)
[6]: https://github.com/bryantson/CICDPractice
[7]: https://jenkins.io/download/
[8]: https://opensource.com/sites/default/files/uploads/2_downloadwar.jpg (Download Jenkins as Java WAR file)
[9]: https://opensource.com/sites/default/files/uploads/3_runasjar.jpg (Execute as an executable JAR binary)
[10]: https://opensource.com/sites/default/files/uploads/4_createnewjob.jpg (Create New Job)
[11]: https://opensource.com/sites/default/files/uploads/5_createpipeline.jpg (Create New Pipeline Job)
[12]: https://opensource.com/sites/default/files/uploads/6_runaspipelinescript.jpg (Configure to Run as Jenkins Script)
[13]: https://opensource.com/sites/default/files/uploads/7_buildnow4script.jpg (Click Build Now and See Result)
[14]: https://opensource.com/sites/default/files/uploads/8_seeresult4script.jpg (Visit sample GitHub with Jenkins get clone link)
[15]: https://opensource.com/sites/default/files/uploads/9_checkoutfromgithub.jpg (Checkout from GitHub)
[16]: https://opensource.com/sites/default/files/uploads/10_runsasgit.jpg (Change to Pipeline script from SCM)
[17]: https://opensource.com/sites/default/files/uploads/11_seeresultfromgit.jpg (Build again and verify)
[18]: https://opensource.com/sites/default/files/uploads/12_verifycheckout.jpg (Verify Checkout Procedure)
[19]: https://opensource.com/sites/default/files/uploads/13_managingjenkins.jpg (Manage Jenkins)
[20]: https://opensource.com/sites/default/files/uploads/14_globaltoolsconfiguration.jpg (Global Tools Configuration)
[21]: https://opensource.com/sites/default/files/uploads/15_variousoptions4plugin.jpg (See Various Options for Plugin)
[22]: https://opensource.com/article/18/11/getting-started-jenkins-x
[23]: https://opensource.com/article/18/4/install-OpenStack-cloud-Jenkins
[24]: https://linux.cn/article-9741-1.html
[25]: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
[26]: https://opensource.com/life/16/7/running-jmeter-jenkins-continuous-delivery-101
[27]: https://opensource.com/business/15/5/interview-maish-saidel-keesing-cisco
[28]: https://opensource.com/article/19/3/tools-fault-tolerant-system
[29]: https://opensource.com/article/19/7/understanding-software-design-patterns
444
published/201910/20180706 Building a Messenger App- OAuth.md
Normal file
@ -0,0 +1,444 @@
[#]: collector: (lujun9972)
[#]: translator: (PsiACE)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11510-1.html)
[#]: subject: (Building a Messenger App: OAuth)
[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)

构建一个即时消息应用(二):OAuth
======
[上一篇:模式](https://linux.cn/article-11396-1.html)。

在这篇帖子中,我们将会通过为应用添加社交登录功能进入后端开发。

社交登录的工作方式十分简单:用户点击链接,然后重定向到 GitHub 授权页面。当用户授予我们对他的个人信息的访问权限之后,就会重定向回登录页面。下一次尝试登录时,系统将不会再次请求授权,也就是说,我们的应用已经记住了这个用户。这使得整个登录流程看起来就和你用鼠标单击一样快。

如果进一步考虑其内部实现的话,过程就会变得复杂起来。首先,我们需要注册一个新的 [GitHub OAuth 应用][2]。

这一步中,比较重要的是回调 URL。我们将它设置为 `http://localhost:3000/api/oauth/github/callback`。这是因为,在开发过程中,我们总是在本地主机上工作。一旦你要将应用交付生产,请使用正确的回调 URL 注册一个新的应用。

注册以后,你将会收到“客户端 ID”和“安全密钥”。安全起见,请不要与任何人分享它们 👀

好了,让我们开始写一些代码吧。现在,创建一个 `main.go` 文件:
```
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"os"
	"strconv"

	"github.com/gorilla/securecookie"
	"github.com/joho/godotenv"
	"github.com/knq/jwt"
	_ "github.com/lib/pq"
	"github.com/matryer/way"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/github"
)

var origin *url.URL
var db *sql.DB
var githubOAuthConfig *oauth2.Config
var cookieSigner *securecookie.SecureCookie
var jwtSigner jwt.Signer

func main() {
	godotenv.Load()

	port := intEnv("PORT", 3000)
	originString := env("ORIGIN", fmt.Sprintf("http://localhost:%d/", port))
	databaseURL := env("DATABASE_URL", "postgresql://root@127.0.0.1:26257/messenger?sslmode=disable")
	githubClientID := os.Getenv("GITHUB_CLIENT_ID")
	githubClientSecret := os.Getenv("GITHUB_CLIENT_SECRET")
	hashKey := env("HASH_KEY", "secret")
	jwtKey := env("JWT_KEY", "secret")

	var err error
	if origin, err = url.Parse(originString); err != nil || !origin.IsAbs() {
		log.Fatal("invalid origin")
		return
	}

	if i, err := strconv.Atoi(origin.Port()); err == nil {
		port = i
	}

	if githubClientID == "" || githubClientSecret == "" {
		log.Fatalf("remember to set both $GITHUB_CLIENT_ID and $GITHUB_CLIENT_SECRET")
		return
	}

	if db, err = sql.Open("postgres", databaseURL); err != nil {
		log.Fatalf("could not open database connection: %v\n", err)
		return
	}
	defer db.Close()
	if err = db.Ping(); err != nil {
		log.Fatalf("could not ping to db: %v\n", err)
		return
	}

	githubRedirectURL := *origin
	githubRedirectURL.Path = "/api/oauth/github/callback"
	githubOAuthConfig = &oauth2.Config{
		ClientID:     githubClientID,
		ClientSecret: githubClientSecret,
		Endpoint:     github.Endpoint,
		RedirectURL:  githubRedirectURL.String(),
		Scopes:       []string{"read:user"},
	}

	cookieSigner = securecookie.New([]byte(hashKey), nil).MaxAge(0)

	jwtSigner, err = jwt.HS256.New([]byte(jwtKey))
	if err != nil {
		log.Fatalf("could not create JWT signer: %v\n", err)
		return
	}

	router := way.NewRouter()
	router.HandleFunc("GET", "/api/oauth/github", githubOAuthStart)
	router.HandleFunc("GET", "/api/oauth/github/callback", githubOAuthCallback)
	router.HandleFunc("GET", "/api/auth_user", guard(getAuthUser))

	log.Printf("accepting connections on port %d\n", port)
	log.Printf("starting server at %s\n", origin.String())
	addr := fmt.Sprintf(":%d", port)
	if err = http.ListenAndServe(addr, router); err != nil {
		log.Fatalf("could not start server: %v\n", err)
	}
}

func env(key, fallbackValue string) string {
	v, ok := os.LookupEnv(key)
	if !ok {
		return fallbackValue
	}
	return v
}

func intEnv(key string, fallbackValue int) int {
	v, ok := os.LookupEnv(key)
	if !ok {
		return fallbackValue
	}
	i, err := strconv.Atoi(v)
	if err != nil {
		return fallbackValue
	}
	return i
}
```
安装依赖项:

```
go get -u github.com/gorilla/securecookie
go get -u github.com/joho/godotenv
go get -u github.com/knq/jwt
go get -u github.com/lib/pq
go get -u github.com/matoous/go-nanoid
go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```
我们将会使用 `.env` 文件来保存密钥和其他配置。请创建这个文件,并保证里面至少包含以下内容:

```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```

我们还要用到的其他环境变量有:

* `PORT`:服务器运行的端口,默认值是 `3000`。
* `ORIGIN`:你的域名,默认值是 `http://localhost:3000/`。我们也可以在这里指定端口。
* `DATABASE_URL`:Cockroach 数据库的地址。默认值是 `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`。
* `HASH_KEY`:用于为 cookie 签名的密钥。没错,我们会使用已签名的 cookie 来确保安全。
* `JWT_KEY`:用于签署 JSON <ruby>网络令牌<rt>Web Token</rt></ruby>的密钥。

因为代码中已经设定了默认值,所以你也不用把它们写到 `.env` 文件中。

在读取配置并连接到数据库之后,我们会创建一个 OAuth 配置。我们会使用 `ORIGIN` 信息来构建回调 URL(就和我们在 GitHub 页面上注册的一样)。我们的数据范围设置为 “read:user”,这会允许我们读取公开的用户信息,这里我们只需要用户名和头像就够了。然后我们会初始化 cookie 和 JWT 签名器,定义一些端点并启动服务器。

在实现 HTTP 处理程序之前,让我们编写一些函数来发送 HTTP 响应。
```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
	b, err := json.Marshal(v)
	if err != nil {
		respondError(w, fmt.Errorf("could not marshal response: %v", err))
		return
	}
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	w.WriteHeader(statusCode)
	w.Write(b)
}

func respondError(w http.ResponseWriter, err error) {
	log.Println(err)
	http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
}
```

第一个函数用来发送 JSON,而第二个将错误记录到控制台并返回一个 `500 Internal Server Error` 错误信息。
### OAuth 开始

所以,用户点击写着 “Access with GitHub” 的链接。该链接指向 `/api/oauth/github`,这将会把用户重定向到 GitHub。

```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
	state, err := gonanoid.Nanoid()
	if err != nil {
		respondError(w, fmt.Errorf("could not generate state: %v", err))
		return
	}

	stateCookieValue, err := cookieSigner.Encode("state", state)
	if err != nil {
		respondError(w, fmt.Errorf("could not encode state cookie: %v", err))
		return
	}

	http.SetCookie(w, &http.Cookie{
		Name:     "state",
		Value:    stateCookieValue,
		Path:     "/api/oauth/github",
		HttpOnly: true,
	})
	http.Redirect(w, r, githubOAuthConfig.AuthCodeURL(state), http.StatusTemporaryRedirect)
}
```

OAuth2 使用一种机制来防止 CSRF 攻击,因此它需要一个“状态”(`state`)。我们使用 `Nanoid()` 来创建一个随机字符串,并用这个字符串作为状态。我们也把它保存为一个 cookie。
### OAuth 回调

一旦用户授权我们访问他的个人信息,他将会被重定向到这个端点。这个 URL 的查询字符串上将会包含状态(`state`)和授权码(`code`): `/api/oauth/github/callback?state=&code=`。

```
const jwtLifetime = time.Hour * 24 * 14

type GithubUser struct {
	ID        int     `json:"id"`
	Login     string  `json:"login"`
	AvatarURL *string `json:"avatar_url,omitempty"`
}

type User struct {
	ID        string  `json:"id"`
	Username  string  `json:"username"`
	AvatarURL *string `json:"avatarUrl"`
}

func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
	stateCookie, err := r.Cookie("state")
	if err != nil {
		http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
		return
	}

	http.SetCookie(w, &http.Cookie{
		Name:     "state",
		Value:    "",
		MaxAge:   -1,
		HttpOnly: true,
	})

	var state string
	if err = cookieSigner.Decode("state", stateCookie.Value, &state); err != nil {
		http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
		return
	}

	q := r.URL.Query()

	if state != q.Get("state") {
		http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
		return
	}

	ctx := r.Context()

	t, err := githubOAuthConfig.Exchange(ctx, q.Get("code"))
	if err != nil {
		respondError(w, fmt.Errorf("could not fetch github token: %v", err))
		return
	}

	client := githubOAuthConfig.Client(ctx, t)
	resp, err := client.Get("https://api.github.com/user")
	if err != nil {
		respondError(w, fmt.Errorf("could not fetch github user: %v", err))
		return
	}

	var githubUser GithubUser
	if err = json.NewDecoder(resp.Body).Decode(&githubUser); err != nil {
		respondError(w, fmt.Errorf("could not decode github user: %v", err))
		return
	}
	defer resp.Body.Close()

	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		respondError(w, fmt.Errorf("could not begin tx: %v", err))
		return
	}

	var user User
	if err = tx.QueryRowContext(ctx, `
		SELECT id, username, avatar_url FROM users WHERE github_id = $1
	`, githubUser.ID).Scan(&user.ID, &user.Username, &user.AvatarURL); err == sql.ErrNoRows {
		if err = tx.QueryRowContext(ctx, `
			INSERT INTO users (username, avatar_url, github_id) VALUES ($1, $2, $3)
			RETURNING id
		`, githubUser.Login, githubUser.AvatarURL, githubUser.ID).Scan(&user.ID); err != nil {
			respondError(w, fmt.Errorf("could not insert user: %v", err))
			return
		}
		user.Username = githubUser.Login
		user.AvatarURL = githubUser.AvatarURL
	} else if err != nil {
		respondError(w, fmt.Errorf("could not query user by github ID: %v", err))
		return
	}

	if err = tx.Commit(); err != nil {
		respondError(w, fmt.Errorf("could not commit to finish github oauth: %v", err))
		return
	}

	exp := time.Now().Add(jwtLifetime)
	token, err := jwtSigner.Encode(jwt.Claims{
		Subject:    user.ID,
		Expiration: json.Number(strconv.FormatInt(exp.Unix(), 10)),
	})
	if err != nil {
		respondError(w, fmt.Errorf("could not create token: %v", err))
		return
	}

	expiresAt, _ := exp.MarshalText()

	data := make(url.Values)
	data.Set("token", string(token))
	data.Set("expires_at", string(expiresAt))

	http.Redirect(w, r, "/callback?"+data.Encode(), http.StatusTemporaryRedirect)
}
```

首先,我们会尝试使用之前保存的状态对 cookie 进行解码,并将其与查询字符串中的状态进行比较。如果它们不匹配,我们会返回一个 `418 I'm a teapot`(未知来源)错误。

接着,我们使用授权码生成一个令牌。这个令牌被用于创建 HTTP 客户端来向 GitHub API 发出请求。所以最终我们会向 `https://api.github.com/user` 发送一个 GET 请求。这个端点将会以 JSON 格式向我们提供当前经过身份验证的用户信息。我们将会解码这些内容,一并获取用户的 ID、登录名(用户名)和头像 URL。

然后我们将会尝试在数据库上找到具有该 GitHub ID 的用户。如果没有找到,就使用该数据创建一个新的。

之后,我们会签发一个以该用户 ID 作为主题(`Subject`)的 JSON 网络令牌,并使用该令牌重定向到前端,查询字符串中一并包含该令牌的到期日(`Expiration`)。

这一 Web 应用也会被用在其他帖子,但是重定向的链接会是 `/callback?token=&expires_at=`。在那里,我们将会利用 JavaScript 从 URL 中获取令牌和到期日,并通过 `Authorization` 标头中的令牌以 `Bearer token_here` 的形式对 `/api/auth_user` 进行 GET 请求,来获取已认证的身份用户并将其保存到 localStorage。
### Guard 中间件

为了获取当前已经过身份验证的用户,我们设计了 Guard 中间件。这是因为在接下来的文章中,我们会有很多需要进行身份认证的端点,而中间件将会允许我们共享这一功能。

```
type ContextKey struct {
	Name string
}

var keyAuthUserID = ContextKey{"auth_user_id"}

func guard(handler http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var token string
		if a := r.Header.Get("Authorization"); strings.HasPrefix(a, "Bearer ") {
			token = a[7:]
		} else if t := r.URL.Query().Get("token"); t != "" {
			token = t
		} else {
			http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
			return
		}

		var claims jwt.Claims
		if err := jwtSigner.Decode([]byte(token), &claims); err != nil {
			http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
			return
		}

		ctx := r.Context()
		ctx = context.WithValue(ctx, keyAuthUserID, claims.Subject)

		handler(w, r.WithContext(ctx))
	}
}
```

首先,我们尝试从 `Authorization` 标头或者是 URL 查询字符串中的 `token` 字段中读取令牌。如果没有找到,我们需要返回 `401 Unauthorized`(未授权)错误。然后我们将会对令牌中的申明进行解码,并使用该主题作为当前已经过身份验证的用户 ID。

现在,我们可以用这一中间件来封装任何需要授权的 `http.HandlerFunc`,并且在处理函数的上下文中保有已经过身份验证的用户 ID。

```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
	authUserID := r.Context().Value(keyAuthUserID).(string)
})
```
### 获取认证用户

```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	authUserID := ctx.Value(keyAuthUserID).(string)

	var user User
	if err := db.QueryRowContext(ctx, `
		SELECT username, avatar_url FROM users WHERE id = $1
	`, authUserID).Scan(&user.Username, &user.AvatarURL); err == sql.ErrNoRows {
		http.Error(w, http.StatusText(http.StatusTeapot), http.StatusTeapot)
		return
	} else if err != nil {
		respondError(w, fmt.Errorf("could not query auth user: %v", err))
		return
	}

	user.ID = authUserID

	respond(w, user, http.StatusOK)
}
```

我们使用 Guard 中间件来获取当前经过身份认证的用户 ID 并查询数据库。

这一部分涵盖了后端的 OAuth 流程。在下一篇帖子中,我们将会看到如何开始与其他用户的对话。

- [源代码][3]
--------------------------------------------------------------------------------

via: https://nicolasparada.netlify.com/posts/go-messenger-oauth/

作者:[Nicolás Parada][a]
选题:[lujun9972][b]
译者:[PsiACE](https://github.com/PsiACE)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://nicolasparada.netlify.com/
[b]: https://github.com/lujun9972
[1]: https://nicolasparada.netlify.com/posts/go-messenger-schema/
[2]: https://github.com/settings/applications/new
[3]: https://github.com/nicolasparada/go-messenger-demo
@ -0,0 +1,161 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11504-1.html)
[#]: subject: (Use GameHub to Manage All Your Linux Games in One Place)
[#]: via: (https://itsfoss.com/gamehub/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

用 GameHub 集中管理你 Linux 上的所有游戏
======
你在 Linux 上是怎么[玩游戏的呢][1]? 让我猜猜,要不就是从软件中心直接安装,要不就选 Steam、GOG、Humble Bundle 等平台,对吧?但是,如果你有多个游戏启动器和客户端,又要如何管理呢?好吧,对我来说这简直令人头疼 —— 这也是我发现 [GameHub][2] 这个应用之后,感到非常高兴的原因。

GameHub 是为 Linux 发行版设计的一个桌面应用,它能让你“集中管理你的所有游戏”。这听起来很有趣,是不是?下面让我来具体说明一下。

![][3]

### 集中管理不同平台 Linux 游戏的 GameHub

让我们看看,对玩家来说,让 GameHub 成为一个[不可或缺的 Linux 应用][4]的功能,都有哪些。

#### Steam、GOG & Humble Bundle 支持

![][5]

它支持 Steam、[GOG][6] 和 [Humble Bundle][7] 账户整合。你可以登录你的 GameHub 账号,从而在你的库管理器中管理所有游戏。

对我来说,我在 Steam 上有很多游戏,Humble Bundle 上也有一些。我不能确保它支持所有平台,但可以确信的是,主流平台游戏是没有问题的。

#### 支持原生游戏

![][8]

[有很多网站专门推荐 Linux 游戏,并支持下载][9]。你可以通过下载安装包,或者添加可执行文件,从而管理原生游戏。

可惜的是,现在无法在 GameHub 内搜索 Linux 游戏。如上图所示,你需要分别下载游戏,随后再将其添加到 GameHub 中。

#### 模拟器支持

用模拟器,你可以在 [Linux 上玩复古游戏][10]。正如上图所示,你可以添加模拟器(并导入模拟的镜像)。

你可以在 [RetroArch][11] 查看已有的模拟器,但也能根据需求添加自定义模拟器。
#### 用户界面

![Gamehub 界面选项][12]

当然,用户体验很重要。因此,探究一下用户界面都有些什么,也很有必要。

我个人觉得,这一应用很容易使用,并且黑色主题是一个加分项。

#### 手柄支持

如果你习惯在 Linux 系统上用手柄玩游戏 —— 你可以轻松在设置里添加,启用或禁用它。

#### 多个数据提供商

因为它需要获取你的游戏信息(或元数据),也意味着它需要一个数据源。你可以看到下图列出的所有数据源。

![Data Providers Gamehub][13]

这里你什么也不用做 —— 但如果你使用的是 Steam 之外的其他平台,你需要为 [IGDB 生成一个 API 密钥][14]。

我建议只有出现 GameHub 中的提示/通知,或有些游戏在 GameHub 上没有任何描述/图片/状态时,再这么做。

#### 兼容性选项

![][15]

你有不支持在 Linux 上运行的游戏吗?

不用担心,GameHub 上提供了多种兼容工具,如 Wine/Proton,你可以利用它们来玩游戏。

我们无法确定具体哪个兼容工具适用于你 —— 所以你需要自己亲自测试。然而,对许多游戏玩家来说,这的确是个很有用的功能。
### GameHub:如何安装它呢?

![][18]

首先,你可以直接在软件中心或者应用商店内搜索,它也出现在 “Pop!_Shop” 中。所以,它在绝大多数官方源中都能找到。

如果你在这些地方都没有找到,你可以手动添加源,并从终端上安装它,你需要输入以下命令:

```
sudo add-apt-repository ppa:tkashkin/gamehub
sudo apt update
sudo apt install com.github.tkashkin.gamehub
```

如果你遇到了 “add-apt-repository command not found” 这个错误,你可以看看[这篇文章][19],它能帮你解决这一问题。

这里还提供 AppImage 和 Flatpak 版本。在[官网][2]上,你可以找到针对其他 Linux 发行版的安装手册。

同时,你还可以从它的 [GitHub 页面][20]下载之前版本的安装包。

[GameHub][2]

### 如何在 GameHub 上管理你的游戏?

在启动程序后,你可以将自己的 Steam/GOG/Humble Bundle 账号添加进来。

对于 Steam,你需要在 Linux 发行版上安装 Steam 客户端。一旦安装完成,你可以轻松将账号中的游戏导入 GameHub。

![][16]

对于 GOG & Humble Bundle,登录后,就能直接在 GameHub 上管理游戏了。

如果你想添加模拟器或者本地安装文件,点击窗口右上角的 “+” 按钮进行添加。

### 如何安装游戏?

对于 Steam 游戏,它会自动启动 Steam 客户端,从而下载/安装游戏(我希望之后安装游戏可以不用启动 Steam!)

![][17]

但对于 GOG/Humble Bundle,登录后就能直接下载、安装游戏。必要的话,对于那些不支持在 Linux 上运行的游戏,你可以使用兼容工具。

无论是模拟器游戏,还是本地游戏,只需添加安装包或导入模拟器镜像就可以了。这里没什么其他步骤要做。

### 注意

GameHub 是相当灵活的一个集中游戏管理应用。用户界面和选项设置也相当直观。

你之前是否使用过这一应用呢?如果有,请在评论里写下你的感受。

而且,如果你想尝试一些与此功能相似的工具/应用,请务必告诉我们。
--------------------------------------------------------------------------------

via: https://itsfoss.com/gamehub/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://tkashkin.tk/projects/gamehub/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-home-1.png?ssl=1
[4]: https://itsfoss.com/essential-linux-applications/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-platform-support.png?ssl=1
[6]: https://www.gog.com/
[7]: https://www.humblebundle.com/monthly?partner=itsfoss
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-native-installers.png?ssl=1
[9]: https://itsfoss.com/download-linux-games/
[10]: https://itsfoss.com/play-retro-games-linux/
[11]: https://www.retroarch.com/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-appearance.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/data-providers-gamehub.png?ssl=1
[14]: https://www.igdb.com/api
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-windows-game.png?fit=800%2C569&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-library.png?ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-compatibility-layer.png?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/gamehub-install.jpg?ssl=1
[19]: https://itsfoss.com/add-apt-repository-command-not-found/
[20]: https://github.com/tkashkin/GameHub/releases
@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11506-1.html)
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

如何使用物联网设备来确保儿童安全?
======

![][1]

IoT (物联网)设备正在迅速改变我们的生活。这些设备无处不在,从我们的家庭到其它行业。根据一些预测数据,到 2020 年,将会有 100 亿个 IoT 设备。到 2025 年,该数量将增长到 220 亿。目前,物联网已经在很多领域得到了应用,包括智能家居、工业生产过程、农业甚至医疗保健领域。伴随着如此广泛的应用,物联网显然已经成为近年来的热门话题之一。

多种因素促成了物联网设备在多个学科的爆炸式增长。这其中包括低成本处理器和无线连接的可用性,以及开源平台的信息交流推动了物联网领域的创新。与传统的应用程序开发相比,物联网设备的开发呈指数级增长,因为它的资源是开源的。

在解释如何使用物联网设备来保护儿童之前,必须对物联网技术有基本的了解。

### IoT 设备是什么?

IoT 设备是指那些在没有人类参与的情况下彼此之间可以通信的设备。因此,许多专家并不将智能手机和计算机视为物联网设备。此外,物联网设备必须能够收集数据并且能将收集到的数据传送到其他设备或云端进行处理。

然而,在某些领域中,我们需要探索物联网的潜力。儿童往往是脆弱的,他们很容易成为犯罪分子和其他蓄意伤害者的目标。无论在物理世界还是数字世界中,儿童都很容易面临犯罪的威胁。因为父母不能始终亲自到场保护孩子,这就是为什么需要监视工具了。

除了适用于儿童的可穿戴设备外,还有许多父母监视应用程序,例如 Xnspy,可实时监控儿童并提供信息的实时更新。这些工具可确保儿童安全。可穿戴设备确保儿童身体上的安全性,而家长监控应用可确保儿童的上网安全。

由于越来越多的孩子花费时间在智能手机上,毫无意外地,他们也就成为诈骗分子的主要目标。此外,由于恋童癖、网络自夸和其他犯罪在网络上的盛行,儿童也有可能成为网络欺凌的目标。

这些解决方案够吗?我们需要找到物联网解决方案,以确保孩子们在网上和线下的安全。在当代,我们如何确保孩子的安全?我们需要提出创新的解决方案。物联网可以帮助保护孩子在学校和家里的安全。

### 物联网的潜力

物联网设备提供的好处很多。举例来说,父母可以远程监控自己的孩子,而又不会显得太霸道。因此,儿童在拥有安全环境的同时也会有空间和自由让自己变得独立。

而且,父母也不必在为孩子的安全而担忧。物联网设备可以提供 7x24 小时的信息更新。像 Xnspy 之类的监视应用程序在提供有关孩子的智能手机活动信息方面更进了一步。随着物联网设备变得越来越复杂,拥有更长使用寿命的电池只是一个时间问题。诸如位置跟踪器之类的物联网设备可以提供有关孩子下落的准确详细信息,所以父母不必担心。

虽然可穿戴设备已经非常好了,但在确保儿童安全方面,这些通常还远远不够。因此,要为儿童提供安全的环境,我们还需要其他方法。许多事件表明,儿童在学校比其他任何公共场所都容易受到攻击。因此,学校需要采取安全措施,以确保儿童和教师的安全。在这一点上,物联网设备可用于检测潜在威胁并采取必要的措施来防止攻击。威胁检测系统包括摄像头。系统一旦检测到威胁,便可以通知当局,如一些执法机构和医院。智能锁等设备可用于封锁学校(包括教室),来保护儿童。除此之外,还可以告知父母其孩子的安全,并立即收到有关威胁的警报。这将需要实施无线技术,例如 Wi-Fi 和传感器。因此,学校需要制定专门用于提供教室安全性的预算。

智能家居实现拍手关灯,也可以让你的家庭助手帮你关灯。同样,物联网设备也可用在屋内来保护儿童。在家里,物联网设备(例如摄像头)为父母在照顾孩子时提供 100% 的可见性。当父母不在家里时,可以使用摄像头和其他传感器检测是否发生了可疑活动。其他设备(例如连接到这些传感器的智能锁)可以锁门和窗,以确保孩子们的安全。

同样,可以引入许多物联网解决方案来确保孩子的安全。

### 有多好就有多坏

物联网设备中的传感器会创建大量数据。数据的安全性是至关重要的一个因素。收集的有关孩子的数据如果落入不法分子手中会存在危险。因此,需要采取预防措施。IoT 设备中泄露的任何数据都可用于确定行为模式。因此,必须对提供不违反用户隐私的安全物联网解决方案投入资金。

IoT 设备通常连接到 Wi-Fi,用于设备之间传输数据。未加密数据的不安全网络会带来某些风险。这样的网络很容易被窃听。黑客可以使用此类网点来入侵系统。他们还可以将恶意软件引入系统,从而使系统变得脆弱、易受攻击。此外,对设备和公共网络(例如学校的网络)的网络攻击可能导致数据泄露和私有数据盗用。因此,在实施用于保护儿童的物联网解决方案时,保护网络和物联网设备的总体计划必须生效。

物联网设备保护儿童在学校和家里的安全的潜力尚未发现有什么创新。我们需要付出更多努力来保护连接 IoT 设备的网络安全。此外,物联网设备生成的数据可能落入不法分子手中,从而造成更多麻烦。因此,这是物联网安全至关重要的一个领域。
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

作者:[Andrew Carroll][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11519-1.html)
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)

面向对象编程和根本状态
======

![](https://img.linux.net.cn/data/attachment/album/201910/30/232452kvdivhgb9b2yi0ug.jpg)

早在 2015 年,Brian Will 撰写了一篇有挑衅性的博客:[面向对象编程:一个灾难故事][1]。他随后发布了一个名为[面向对象编程很糟糕][2]的视频,该视频更加详细。我建议你花些时间观看视频,下面是我的一段总结:

> OOP 的柏拉图式理想是一堆相互解耦的对象,它们彼此之间发送无状态消息。没有人真的像这样制作软件,Brian 指出这甚至没有意义:对象需要知道向哪个对象发送消息,这意味着它们需要相互引用。该视频大部分讲述的是这样一个痛点:人们试图将对象耦合以实现控制流,同时假装它们是通过设计解耦的。

总的来说,他的想法与我自己的 OOP 经验产生了共鸣:对象没有问题,但是我一直不满意的是*面向*对象建模程序控制流,并且试图使代码“正确地”面向对象似乎总是在创建不必要的复杂性。

有一件事我认为他无法完全解释。他直截了当地说“封装没有作用”,但在脚注后面加上“在细粒度的代码级别”,并继续承认对象有时可以奏效,并且在库和文件级别封装是可以的。但是他没有确切解释为什么有时会奏效,有时却没有奏效,以及如何和在何处划清界限。有人可能会说这使他的 “OOP 不好”的说法有缺陷,但是我认为他的观点是正确的,并且可以在根本状态和偶发状态之间划清界限。

如果你以前从未听说过“<ruby>根本<rt>essential</rt></ruby>”和“<ruby>偶发<rt>accidental</rt></ruby>”这两个术语的使用,那么你应该阅读 Fred Brooks 的经典文章《[没有银弹][3]》。(顺便说一句,他写了许多很棒的有关构建软件系统的文章。)我以前曾写过[关于根本和偶发的复杂性的文章][4],这里有一个简短的摘要:软件是复杂的。部分原因是因为我们希望软件能够解决混乱的现实世界问题,因此我们将其称为“根本复杂性”。“偶发复杂性”是所有其它的复杂性,因为我们正尝试使用硅和金属来解决与硅和金属无关的问题。例如,对于大多数程序而言,用于内存管理或在内存与磁盘之间传输数据或解析文本格式的代码都是“偶发的复杂性”。

假设你正在构建一个支持多个频道的聊天应用。消息可以随时到达任何频道。有些频道特别有趣,当有新消息传入时,用户希望得到通知。而其他频道静音:消息被存储,但用户不会受到打扰。你需要跟踪每个频道的用户首选设置。

一种实现方法是在频道和频道设置之间使用<ruby>映射<rt>map</rt></ruby>(也称为哈希表、字典或关联数组)。注意,映射是 Brian Will 所说的可以用作对象的抽象数据类型(ADT)。

如果我们有一个调试器并查看内存中的映射对象,我们将看到什么?我们当然会找到频道 ID 和频道设置数据(或至少指向它们的指针)。但是我们还会找到其它数据。如果该映射是使用红黑树实现的,我们将看到带有红/黑标签和指向其他节点的指针的树节点对象。与频道相关的数据是根本状态,而树节点是偶发状态。不过,请注意以下几点:该映射有效地封装了它的偶发状态 —— 你可以用 AVL 树实现的另一个映射替换该映射,并且你的聊天程序仍然可以使用。另一方面,映射没有封装根本状态(仅使用 `get()` 和 `set()` 方法访问数据并不是封装)。事实上,映射与根本状态是尽可能不可知的,你可以使用基本相同的映射数据结构来存储与频道或通知无关的其他映射。

这就是映射 ADT 如此成功的原因:它封装了偶发状态,并与根本状态解耦。如果你思考一下,Brian 用封装描述的问题就是尝试封装根本状态。其他描述的好处是封装偶发状态的好处。
要使整个软件系统都达到这一理想状况相当困难,但扩展开来,我认为它看起来像这样:

* 没有全局的可变状态
* 封装了偶发状态(在对象或模块或以其他任何形式)
* 无状态偶发复杂性封装在单独函数中,与数据解耦
* 使用诸如依赖注入之类的技巧使输入和输出变得明确
* 组件可由易于识别的位置完全拥有和控制

其中有些违反了我很久以来的直觉。例如,如果你有一个数据库查询函数,如果数据库连接处理隐藏在该函数内部,并且唯一的参数是查询参数,那么接口会看起来会更简单。但是,当你使用这样的函数构建软件系统时,协调数据库的使用实际上变得更加复杂。组件不仅以自己的方式做事,而且还试图将自己所做的事情隐藏为“实现细节”。数据库查询需要数据库连接这一事实从来都不是实现细节。如果无法隐藏某些内容,那么显露它是更合理的。

我对将面向对象编程和函数式编程放在对立的两极非常警惕,但我认为从函数式编程进入面向对象编程的另一极端是很有趣的:OOP 试图封装事物,包括无法封装的根本复杂性,而纯函数式编程往往会使事情变得明确,包括一些偶发复杂性。在大多数时候,这没什么问题,但有时候(比如[在纯函数式语言中构建自我指称的数据结构][5])设计更多的是为了函数编程,而不是为了简便(这就是为什么 [Haskell 包含了一些“<ruby>逃生出口<rt>escape hatches</rt></ruby>”][6])。我之前写过一篇[所谓“<ruby>弱纯性<rt>weak purity</rt></ruby>”的中间立场][7]。

Brian 发现封装对更大规模有效,原因有几个。一个是,由于大小的原因,较大的组件更可能包含偶发状态。另一个是“偶发”与你要解决的问题有关。从聊天程序用户的角度来看,“偶发的复杂性”是与消息、频道和用户等无关的任何事物。但是,当你将问题分解为子问题时,更多的事情就变得“根本”。例如,在解决“构建聊天应用”问题时,可以说频道名称和频道 ID 之间的映射是偶发的复杂性,而在解决“实现 `getChannelIdByName()` 函数”子问题时,这是根本复杂性。因此,封装对于子组件的作用比对父组件的作用要小。

顺便说一句,在视频的结尾,Brian Will 想知道是否有任何语言支持*无法*访问它们所作用的范围的匿名函数。[D][8] 语言可以。 D 中的匿名 Lambda 通常是闭包,但是如果你想要的话,也可以声明匿名无状态函数:

```
import std.stdio;

void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```
--------------------------------------------------------------------------------

via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html

作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org
@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11523-1.html)
[#]: subject: (10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool)
[#]: via: (https://itsfoss.com/gnome-tweak-tool/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

使用 GNOME 优化工具自定义 Linux 桌面的 10 种方法
======

![][7]

你可以通过多种方法来调整 Ubuntu,以自定义其外观和行为。我发现最简单的方法是使用 [GNOME 优化工具][2]。它也被称为 GNOME Tweak 或简单地称为 Tweak(优化)。

在过去的教程中,我已经多次介绍过它。在这里,我列出了你可以使用此工具执行的所有主要优化。

我在这里使用的是 Ubuntu,但是这些步骤应该适用于使用 GNOME 桌面环境的任何 Linux 发行版。

### 在 Ubuntu 18.04 或其它版本上安装 GNOME 优化工具

GNOME 优化工具可从 [Ubuntu 中的 Universe 存储库][3]中安装,因此请确保已在“软件和更新”工具中启用了该仓库:

![在 Ubuntu 中启用 Universe 存储库][4]

之后,你可以从软件中心安装 GNOME 优化工具。只需打开软件中心并搜索 “GNOME Tweaks” 并从那里安装它:

![从软件中心安装 GNOME 优化工具][5]

或者,你也可以使用命令行通过 [apt 命令][6]安装此软件:

```
sudo apt install gnome-tweaks
```
### 用优化工具定制 GNOME 桌面
|
||||
|
||||
GNOME 优化工具使你可以进行许多设置更改。其中的某些更改(例如墙纸更改、启动应用程序等)也可以在官方的“系统设置”工具中找到。我将重点介绍默认情况下“设置”中不可用的优化。
|
||||
|
||||
#### 1、改变主题
|
||||
|
||||
你可以通过各种方式[在 Ubuntu 中安装新主题][8]。但是,如果要更改为新安装的主题,则必须安装 GNOME 优化工具。
|
||||
|
||||
你可以在“<ruby>外观<rt>Appearance</rt></ruby>”部分找到主题和图标设置。你可以浏览可用的主题和图标并设置你喜欢的主题和图标。更改将立即生效。
|
||||
|
||||
![通过 GNOME 优化更改主题][9]
|
||||
|
||||
#### 2、禁用动画以提速你的桌面体验
|
||||
|
||||
应用程序窗口的打开、关闭、最大化等操作都带有一些细微的动画。你可以禁用这些动画来让系统稍微快一点,因为动画本身也会占用一些系统资源。
|
||||
|
||||
![禁用动画以获得稍快的桌面体验][10]
|
||||
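如果你偏好命令行,也可以不打开优化工具,直接用 `gsettings` 切换动画开关(示例性的写法,`org.gnome.desktop.interface` schema 中的 `enable-animations` 键在不同 GNOME 版本上可能略有差异):

```
# 关闭动画
gsettings set org.gnome.desktop.interface enable-animations false

# 查看当前取值,确认修改已生效
gsettings get org.gnome.desktop.interface enable-animations

# 想恢复时再改回 true 即可
gsettings set org.gnome.desktop.interface enable-animations true
```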
|
||||
#### 3、控制桌面图标
|
||||
|
||||
至少在 Ubuntu 中,你会在桌面上看到“<ruby>家目录<rt>Home</rt></ruby>”和“<ruby>垃圾箱<rt>Trash</rt></ruby>”图标。如果你不喜欢,可以选择禁用它。你还可以选择要在桌面上显示的图标。
|
||||
|
||||
![在 Ubuntu 中控制桌面图标][11]
|
||||
|
||||
#### 4、管理 GNOME 扩展
|
||||
|
||||
我想你可能知道 [GNOME 扩展][12]。这些是用于桌面的小型“插件”,可扩展 GNOME 桌面的功能。有[大量的 GNOME 扩展][13],可用于在顶部面板中查看 CPU 消耗、获取剪贴板历史记录等等。
|
||||
|
||||
我已经写了一篇[安装和使用 GNOME 扩展][14]的详细文章。在这里,我假设你已经在使用它们,如果是这样,可以从 GNOME 优化工具中对其进行管理。
|
||||
|
||||
![管理 GNOME 扩展][15]
|
||||
|
||||
#### 5、改变字体和缩放比例
|
||||
|
||||
你可以[在 Ubuntu 中安装新字体][16],并使用这个优化工具在系统范围应用字体更改。如果你认为桌面上的图标和文本太小,也可以更改缩放比例。
|
||||
|
||||
![更改字体和缩放比例][17]
|
||||
|
||||
#### 6、控制触摸板行为,例如在键入时禁用触摸板,使触摸板右键单击可以工作
|
||||
|
||||
GNOME 优化工具还允许你在键入时禁用触摸板。如果你在笔记本电脑上快速键入,这将很有用。手掌底部可能会触摸触摸板,并导致光标移至屏幕上不需要的位置。
|
||||
|
||||
在键入时自动禁用触摸板可解决此问题。
|
||||
|
||||
![键入时禁用触摸板][18]
|
||||
|
||||
你还会注意到[当你按下触摸板的右下角以进行右键单击时,什么也没有发生][19]。你的触摸板并没有问题。这是一项系统设置,可对没有实体右键按钮的任何触摸板(例如旧的 Thinkpad 笔记本电脑)禁用这种右键单击功能。两指点击可为你提供右键单击操作。
|
||||
|
||||
你也可以通过在“<ruby>鼠标单击模拟<rt>Mouse Click Simulation</rt></ruby>”下设置为“<ruby>区域<rt>Area</rt></ruby>”中而不是“<ruby>手指<rt>Fingers</rt></ruby>”来找回这项功能。
|
||||
|
||||
![修复右键单击问题][20]
|
||||
|
||||
你可能必须[重新启动 Ubuntu][21] 来使这项更改生效。如果你是 Emacs 爱好者,还可以强制使用 Emacs 键盘绑定。
|
||||
|
||||
#### 7、改变电源设置
|
||||
|
||||
电源这里只有一个设置。它可以让你在盖上盖子后将笔记本电脑置于挂起模式。
|
||||
|
||||
![GNOME 优化工具中的电源设置][22]
|
||||
|
||||
#### 8、决定什么显示在顶部面板
|
||||
|
||||
桌面的顶部面板显示了一些重要的信息。在这里有日历、网络图标、系统设置和“<ruby>活动<rt>Activities</rt></ruby>”选项。
|
||||
|
||||
你还可以[显示电池百分比][23]、添加日期及时间,并显示星期数。你还可以启用鼠标热角,以便将鼠标移至屏幕的左上角时可以获得所有正在运行的应用程序的活动视图。
|
||||
|
||||
![GNOME 优化工具中的顶部面板设置][24]
|
||||
|
||||
如果将鼠标焦点放在应用程序窗口上,你会注意到其菜单显示在顶部面板中。如果你不喜欢这样,可以将其关闭,然后应用程序菜单将显示应用程序本身。
|
||||
|
||||
#### 9、配置应用窗口
|
||||
|
||||
你可以决定是否在应用程序窗口中显示最大化和最小化选项(右上角的按钮)。你也可以改变它们的位置到左边或右边。
|
||||
|
||||
![应用程序窗口配置][25]
|
||||
|
||||
这里还有其他一些配置选项。我不使用它们,但你可以自行探索。
|
||||
|
||||
#### 10、配置工作区
|
||||
|
||||
GNOME 优化工具还允许你围绕工作区配置一些内容。
|
||||
|
||||
![在 Ubuntu 中配置工作区][26]
|
||||
|
||||
### 总结
|
||||
|
||||
对于任何 GNOME 用户,GNOME 优化(Tweaks)工具都是必备工具。它可以帮助你配置桌面的外观和功能。我感到惊讶的是,该工具甚至没有出现在 Ubuntu 的主存储库中。我认为应该默认安装它,要不,你就得在 Ubuntu 中手动安装 GNOME 优化工具。
|
||||
|
||||
如果你在 GNOME 优化工具中发现了一些此处没有讨论的隐藏技巧,为什么不与大家分享呢?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/gnome-tweak-tool/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gnome-tweak-tool-icon.png?ssl=1
|
||||
[2]: https://wiki.gnome.org/action/show/Apps/Tweaks?action=show&redirect=Apps%2FGnomeTweakTool
|
||||
[3]: https://itsfoss.com/ubuntu-repositories/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-repositories-ubuntu.png?ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/install-gnome-tweaks-tool.jpg?ssl=1
|
||||
[6]: https://itsfoss.com/apt-command-guide/
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/customize-gnome-with-tweak-tool.jpg?ssl=1
|
||||
[8]: https://itsfoss.com/install-themes-ubuntu/
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-theme-ubuntu-gnome.jpg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-animation-ubuntu-gnome.jpg?ssl=1
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/desktop-icons-ubuntu.jpg?ssl=1
|
||||
[12]: https://extensions.gnome.org/
|
||||
[13]: https://itsfoss.com/best-gnome-extensions/
|
||||
[14]: https://itsfoss.com/gnome-shell-extensions/
|
||||
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/manage-gnome-extension-tweaks-tool.jpg?ssl=1
|
||||
[16]: https://itsfoss.com/install-fonts-ubuntu/
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-fonts-ubuntu-gnome.jpg?ssl=1
|
||||
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-touchpad-while-typing-ubuntu.jpg?ssl=1
|
||||
[19]: https://itsfoss.com/fix-right-click-touchpad-ubuntu/
|
||||
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/enable-right-click-ubuntu.jpg?ssl=1
|
||||
[21]: https://itsfoss.com/schedule-shutdown-ubuntu/
|
||||
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/power-settings-gnome-tweaks-tool.jpg?ssl=1
|
||||
[23]: https://itsfoss.com/display-battery-ubuntu/
|
||||
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/top-panel-settings-gnome-tweaks-tool.jpg?ssl=1
|
||||
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/windows-configuration-ubuntu-gnome-tweaks.jpg?ssl=1
|
||||
[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/configure-workspaces-ubuntu.jpg?ssl=1
|
@@ -1,27 +1,27 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11505-1.html)
|
||||
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
|
||||
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
|
||||
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
|
||||
|
||||
如何在 CentOS 8 / RHEL 8 中配置 Rsyslog 服务器
|
||||
如何在 CentOS8/RHEL8 中配置 Rsyslog 服务器
|
||||
======
|
||||
|
||||
**Rsyslog** 是一个免费的开源日志记录程序,默认下在 **CentOS** 8 和 **RHEL** 8 系统上存在。它提供了一种从客户端节点到单个中央服务器的“集中日志”的简单有效的方法。日志集中化有两个好处。首先,它简化了日志查看,因为系统管理员可以在一个中心节点查看远程服务器的所有日志,而无需登录每个客户端系统来检查日志。如果需要监视多台服务器,这将非常有用,其次,如果远程客户端崩溃,你不用担心丢失日志,因为所有日志都将保存在**中央 rsyslog 服务器上**。Rsyslog 取代了仅支持 **UDP** 协议的 syslog。它以优异的功能扩展了基本的 syslog 协议,例如在传输日志时支持 **UDP** 和 **TCP**协议,增强的过滤功能以及灵活的配置选项。让我们来探讨如何在 CentOS 8 / RHEL 8 系统中配置 Rsyslog 服务器。
|
||||
![](https://img.linux.net.cn/data/attachment/album/201910/27/062908v4nnzgf7bhnplgvg.jpg)
|
||||
|
||||
[![configure-rsyslog-centos8-rhel8][1]][2]
|
||||
Rsyslog 是一个自由开源的日志记录程序,在 CentOS 8 和 RHEL 8 系统上默认可用。它提供了一种从客户端节点到单个中央服务器的“集中日志”的简单有效的方法。日志集中化有两个好处。首先,它简化了日志查看,因为系统管理员可以在一个中心节点查看远程服务器的所有日志,而无需登录每个客户端系统来检查日志。如果需要监视多台服务器,这将非常有用。其次,如果远程客户端崩溃,你不用担心丢失日志,因为所有日志都将保存在中心的 Rsyslog 服务器上。Rsyslog 取代了仅支持 UDP 协议的 syslog。它以优异的功能扩展了基本的 syslog 协议,例如在传输日志时支持 UDP 和 TCP 协议、增强的过滤功能以及灵活的配置选项。让我们来探讨如何在 CentOS 8 / RHEL 8 系统中配置 Rsyslog 服务器。
|
||||
|
||||
![configure-rsyslog-centos8-rhel8][2]
|
||||
|
||||
### 预先条件
|
||||
|
||||
我们将搭建以下实验环境来测试集中式日志记录过程:
|
||||
|
||||
* **Rsyslog 服务器** CentOS 8 Minimal IP 地址: 10.128.0.47
|
||||
* **客户端系统** RHEL 8 Minimal IP 地址: 10.128.0.48
|
||||
|
||||
|
||||
* Rsyslog 服务器 CentOS 8 Minimal IP 地址: 10.128.0.47
|
||||
* 客户端系统 RHEL 8 Minimal IP 地址: 10.128.0.48
|
||||
|
||||
通过上面的设置,我们将演示如何设置 Rsyslog 服务器,然后配置客户端系统以将日志发送到 Rsyslog 服务器进行监视。
|
||||
|
||||
@@ -35,30 +35,30 @@
|
||||
$ systemctl status rsyslog
|
||||
```
|
||||
|
||||
示例输出
|
||||
示例输出:
|
||||
|
||||
![rsyslog-service-status-centos8][1]
|
||||
![rsyslog-service-status-centos8](https://www.linuxtechi.com/wp-content/uploads/2019/10/rsyslog-service-status-centos8.jpg)
|
||||
|
||||
如果由于某种原因不存在 rsyslog,那么可以使用以下命令进行安装:
|
||||
如果由于某种原因 Rsyslog 不存在,那么可以使用以下命令进行安装:
|
||||
|
||||
```
|
||||
$ sudo yum install rsyslog
|
||||
```
|
||||
|
||||
接下来,你需要修改 Rsyslog 配置文件中的一些设置。打开配置文件。
|
||||
接下来,你需要修改 Rsyslog 配置文件中的一些设置。打开配置文件:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/rsyslog.conf
|
||||
```
|
||||
|
||||
滚动并取消注释下面的行,以允许通过 UDP 协议接收日志
|
||||
滚动并取消注释下面的行,以允许通过 UDP 协议接收日志:
|
||||
|
||||
```
|
||||
module(load="imudp") # needs to be done just once
|
||||
input(type="imudp" port="514")
|
||||
```
|
||||
|
||||
![rsyslog-conf-centos8-rhel8][1]
|
||||
![rsyslog-conf-centos8-rhel8](https://www.linuxtechi.com/wp-content/uploads/2019/10/rsyslog-conf-centos8-rhel8.jpg)
|
||||
|
||||
同样,如果你希望启用 TCP rsyslog 接收,请取消注释下面的行:
|
||||
|
||||
@@ -67,47 +67,47 @@ module(load="imtcp") # needs to be done just once
|
||||
input(type="imtcp" port="514")
|
||||
```
|
||||
|
||||
![rsyslog-conf-tcp-centos8-rhel8][1]
|
||||
![rsyslog-conf-tcp-centos8-rhel8](https://www.linuxtechi.com/wp-content/uploads/2019/10/rsyslog-conf-tcp-centos8-rhel8.jpg)
|
||||
|
||||
保存并退出配置文件。
|
||||
|
||||
要从客户端系统接收日志,我们需要在防火墙上打开 Rsyslog 默认端口 514。为此,请运行
|
||||
要从客户端系统接收日志,我们需要在防火墙上打开 Rsyslog 默认端口 514。为此,请运行:
|
||||
|
||||
```
|
||||
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
|
||||
```
|
||||
|
||||
接下来,重新加载防火墙保存更改
|
||||
接下来,重新加载防火墙保存更改:
|
||||
|
||||
```
|
||||
# sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
示例输出
|
||||
示例输出:
|
||||
|
||||
![firewall-ports-rsyslog-centos8][1]
|
||||
![firewall-ports-rsyslog-centos8](https://www.linuxtechi.com/wp-content/uploads/2019/10/firewall-ports-rsyslog-centos8.jpg)
|
||||
|
||||
接下来,重启 Rsyslog 服务器
|
||||
接下来,重启 Rsyslog 服务器:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart rsyslog
|
||||
```
|
||||
|
||||
要在启动时运行 Rsyslog,运行以下命令
|
||||
要在启动时运行 Rsyslog,运行以下命令:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable rsyslog
|
||||
```
|
||||
|
||||
要确认 Rsyslog 服务器正在监听 514 端口,请使用 netstat 命令,如下所示:
|
||||
要确认 Rsyslog 服务器正在监听 514 端口,请使用 `netstat` 命令,如下所示:
|
||||
|
||||
```
|
||||
$ sudo netstat -pnltu
|
||||
```
|
||||
|
||||
示例输出
|
||||
示例输出:
|
||||
|
||||
![netstat-rsyslog-port-centos8][1]
|
||||
![netstat-rsyslog-port-centos8](https://www.linuxtechi.com/wp-content/uploads/2019/10/netstat-rsyslog-port-centos8.jpg)
|
||||
|
||||
完美!我们已经成功配置了 Rsyslog 服务器来从客户端系统接收日志。
|
||||
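默认情况下,所有客户端的日志都会混在服务器的 `/var/log/messages` 中。作为可选的改进(以下为假设性示例,模板名和路径都可以自定),可以在 `/etc/rsyslog.conf` 中用模板按来源主机分文件存放远程日志:

```
# 按来源主机名和程序名分文件存储远程日志(传统模板语法)
$template RemoteLogs,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& stop
```

修改后同样需要重启 rsyslog 服务才能生效。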
|
||||
@@ -127,42 +127,42 @@
|
||||
$ sudo systemctl status rsyslog
|
||||
```
|
||||
|
||||
示例输出
|
||||
示例输出:
|
||||
|
||||
![client-rsyslog-service-rhel8][1]
|
||||
![client-rsyslog-service-rhel8](https://www.linuxtechi.com/wp-content/uploads/2019/10/client-rsyslog-service-rhel8.jpg)
|
||||
|
||||
接下来,打开 rsyslog 配置文件
|
||||
接下来,打开 rsyslog 配置文件:
|
||||
|
||||
```
|
||||
$ sudo vim /etc/rsyslog.conf
|
||||
```
|
||||
|
||||
在文件末尾,添加以下行
|
||||
在文件末尾,添加以下行:
|
||||
|
||||
```
|
||||
*.* @10.128.0.47:514 # Use @ for UDP protocol
|
||||
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
|
||||
```
|
||||
|
||||
保存并退出配置文件。就像 Rsyslog 服务器一样,打开 514 端口,这是防火墙上的默认 Rsyslog 端口。
|
||||
保存并退出配置文件。就像 Rsyslog 服务器一样,打开 514 端口,这是防火墙上的默认 Rsyslog 端口:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
|
||||
```
|
||||
|
||||
接下来,重新加载防火墙以保存更改
|
||||
接下来,重新加载防火墙以保存更改:
|
||||
|
||||
```
|
||||
$ sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
接下来,重启 rsyslog 服务
|
||||
接下来,重启 rsyslog 服务:
|
||||
|
||||
```
|
||||
$ sudo systemctl restart rsyslog
|
||||
```
|
||||
|
||||
要在启动时运行 Rsyslog,请运行以下命令
|
||||
要在启动时运行 Rsyslog,请运行以下命令:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable rsyslog
|
||||
@@ -178,15 +178,15 @@
|
||||
# logger "Hello guys! This is our first log"
|
||||
```
|
||||
|
||||
现在进入 Rsyslog 服务器并运行以下命令来实时查看日志消息
|
||||
现在进入 Rsyslog 服务器并运行以下命令来实时查看日志消息:
|
||||
|
||||
```
|
||||
# tail -f /var/log/messages
|
||||
```
|
||||
|
||||
客户端系统上命令运行的输出显示在了 Rsyslog 服务器的日志中,这意味着 Rsyslog 服务器正在接收来自客户端系统的日志。
|
||||
客户端系统上命令运行的输出显示在了 Rsyslog 服务器的日志中,这意味着 Rsyslog 服务器正在接收来自客户端系统的日志:
|
||||
|
||||
![centralize-logs-rsyslogs-centos8][1]
|
||||
![centralize-logs-rsyslogs-centos8](https://www.linuxtechi.com/wp-content/uploads/2019/10/centralize-logs-rsyslogs-centos8.jpg)
|
||||
|
||||
就是这些了!我们成功设置了 Rsyslog 服务器来接收来自客户端系统的日志信息。
|
||||
|
||||
@@ -197,11 +197,11 @@ via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
|
||||
作者:[James Kiarie][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/james/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
|
193
published/201910/20191021 Transition to Nftables.md
Normal file
@@ -0,0 +1,193 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11513-1.html)
|
||||
[#]: subject: (Transition to Nftables)
|
||||
[#]: via: (https://opensourceforu.com/2019/10/transition-to-nftables/)
|
||||
[#]: author: (Vijay Marcel D https://opensourceforu.com/author/vijay-marcel/)
|
||||
|
||||
过渡到 nftables
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201910/29/085827o8b7rbswjjr7ijsr.jpg)
|
||||
|
||||
> 开源世界中的每个主要发行版都在演进,逐渐将 nftables 作为了默认防火墙。换言之,古老的 iptables 现在已经消亡。本文是有关如何构建 nftables 的教程。
|
||||
|
||||
当前,有一个与 nftables 兼容的 iptables-nft 后端,但是很快,即使是它也不再提供了。另外,正如 Red Hat 开发人员所指出的那样,有时它可能会错误地转换规则。我们需要知道如何构建自己的 nftables,而不是依赖于 iptables 到 nftables 的转换器。
|
||||
|
||||
在 nftables 中,所有地址族都遵循一个规则。与 iptables 不同,nftables 在用户空间中运行,iptables 中的每个模块都运行在内核(空间)中。它很少需要更新内核,并带有一些新功能,例如映射、地址族和字典。
|
||||
|
||||
### 地址族
|
||||
|
||||
地址族确定要处理的数据包的类型。在 nftables 中有六个地址族,它们是:
|
||||
|
||||
* ip
|
||||
* ipv6
|
||||
* inet
|
||||
* arp
|
||||
* bridge
|
||||
* netdev
|
||||
|
||||
在 nftables 中,ipv4 和 ipv6 协议可以被合并为一个称为 inet 的单一地址族。因此,我们不需要指定两个规则:一个用于 ipv4,另一个用于 ipv6。如果未指定地址族,它将默认为 ip 协议,即 ipv4。我们感兴趣的领域是 inet 地址族,因为大多数家庭用户将使用 ipv4 或 ipv6 协议。
|
||||
|
||||
### nftables
|
||||
|
||||
典型的 nftables 规则包含三个部分:表、链和规则。
|
||||
|
||||
表是链和规则的容器。它们由其地址族和名称来标识。链包含 inet/arp/bridge/netdev 等协议所需的规则,并具有三种类型:过滤器、NAT 和路由。nftables 规则可以从脚本加载,也可以在终端键入,然后另存为规则集。
|
||||
|
||||
对于家庭用户,默认链为过滤器。inet 系列包含以下钩子:
|
||||
|
||||
* Input
|
||||
* Output
|
||||
* Forward
|
||||
* Pre-routing
|
||||
* Post-routing
|
||||
|
||||
### 使用脚本还是不用?
|
||||
|
||||
最大的问题之一是我们是否可以使用防火墙脚本。答案是:这是你自己的选择。这里有一些建议:如果防火墙中有数百条规则,那么最好使用脚本,但是如果你是典型的家庭用户,则可以在终端中键入命令,然后(保存并在重启时)加载规则集。每种选择都有其自身的优缺点。在本文中,我们将在终端中键入它们以构建防火墙。
|
||||
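如果你选择脚本方式,可以参考下面这个最小的 `/etc/nftables.conf` 骨架(仅为示例草稿,表名、链名沿用正文中的约定):

```
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
	chain input {
		type filter hook input priority 0; policy drop;
		iifname "lo" accept
		ct state established,related accept
	}
	chain forward {
		type filter hook forward priority 0; policy drop;
	}
	chain output {
		type filter hook output priority 0; policy drop;
		ct state established,related accept
	}
}
```

之后用 `nft -f /etc/nftables.conf` 即可一次性加载整套规则。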
|
||||
nftables 使用一个名为 `nft` 的程序来添加、创建、列出、删除和加载规则。确保使用以下命令将 nftables 与 conntrackd 和 netfilter-persistent 软件包一起安装,并删除 iptables:
|
||||
|
||||
```
|
||||
apt-get install nftables conntrackd netfilter-persistent
|
||||
apt-get purge iptables
|
||||
```
|
||||
|
||||
`nft` 需要以 root 身份运行或使用 `sudo` 运行。使用以下命令分别列出、刷新、删除规则集和加载脚本。
|
||||
|
||||
```
|
||||
nft list ruleset
|
||||
nft flush ruleset
|
||||
nft delete table inet filter
|
||||
/usr/sbin/nft -f /etc/nftables.conf
|
||||
```
|
||||
|
||||
### 输入策略
|
||||
|
||||
就像 iptables 一样,防火墙将包含三部分:输入(`input`)、转发(`forward`)和输出(`output`)。在终端中,为输入(`input`)策略键入以下命令。在开始之前,请确保已刷新规则集。我们的默认策略将会删除所有内容。我们将在防火墙中使用 inet 地址族。将以下规则以 root 身份添加或使用 `sudo` 运行:
|
||||
|
||||
```
|
||||
nft add table inet filter
|
||||
nft add chain inet filter input { type filter hook input priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
你会注意到有一个名为 `priority 0` 的东西。这意味着赋予该规则更高的优先级。挂钩通常赋予负整数,这意味着更高的优先级。每个挂钩都有自己的优先级,过滤器链的优先级为 0。你可以检查 nftables Wiki 页面以查看每个挂钩的优先级。
|
||||
|
||||
要了解你计算机中的网络接口,请运行以下命令:
|
||||
|
||||
```
|
||||
ip link show
|
||||
```
|
||||
|
||||
它将显示已安装的网络接口,一个是本地主机、另一个是以太网端口或无线端口。以太网端口的名称如下所示:`enpXsY`,其中 `X` 和 `Y` 是数字,无线端口也是如此。我们必须允许本地主机的流量,并且仅允许从互联网建立的传入连接。
|
||||
|
||||
nftables 具有一项称为裁决语句的功能,用于解析规则。裁决语句为 `accept`、`drop`、`queue`、`jump`、`goto`、`continue` 和 `return`。由于这是一个很简单的防火墙,因此我们将使用 `accept` 或 `drop` 处理数据包。
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname lo accept
|
||||
nft add rule inet filter input iifname enpXsY ct state new, established, related accept
|
||||
```
|
||||
|
||||
接下来,我们必须添加规则以保护我们免受隐秘扫描。并非所有的隐秘扫描都是恶意的,但大多数都是。我们必须保护网络免受此类扫描。第一组规则列出了要测试的 TCP 标志。在这些标志中,第二组列出了要与第一组匹配的标志。
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|fin\) == \(syn\|fin\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|rst\) == \(syn\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(fin\|rst\) == \(fin\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|fin\) == fin drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|psh\) == psh drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|urg\) == urg drop
|
||||
```
|
||||
|
||||
记住,我们在终端中键入这些命令。因此,我们必须在一些特殊字符之前添加一个反斜杠,以确保终端能够正确解释该斜杠。如果你使用的是脚本,则不需要这样做。
|
||||
|
||||
### 关于 ICMP 的警告
|
||||
|
||||
互联网控制消息协议(ICMP)是一种诊断工具,因此不应完全丢弃该流量。完全阻止 ICMP 的任何尝试都是不明智的,因为它还会导致停止向我们提供错误消息。仅启用最重要的控制消息,例如回声请求、回声应答、目的地不可达和超时等消息,并拒绝其余消息。回声请求和回声应答是 `ping` 的一部分。在输入策略中,我们仅允许回声应答、而在输出策略中,我们仅允许回声请求。
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY icmp type { echo-reply, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter input iifname enpXsY ip protocol icmp drop
|
||||
```
|
||||
|
||||
最后,我们记录并丢弃所有无效数据包。
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Input: \"
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
### 转发和输出策略
|
||||
|
||||
在转发和输出策略中,默认情况下我们将丢弃数据包,仅接受已建立连接的数据包。
|
||||
|
||||
```
|
||||
nft add chain inet filter forward { type filter hook forward priority 0 \; counter \; policy drop \; }
|
||||
nft add rule inet filter forward ct state established, related accept
|
||||
nft add rule inet filter forward ct state invalid drop
|
||||
nft add chain inet filter output { type filter hook output priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
典型的桌面用户只需要端口 80 和 443 即可访问互联网。最后,允许可接受的 ICMP 协议并在记录无效数据包时丢弃它们。
|
||||
|
||||
```
|
||||
nft add rule inet filter output oifname enpXsY tcp dport { 80, 443 } ct state established accept
|
||||
nft add rule inet filter output oifname enpXsY icmp type { echo-request, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter output oifname enpXsY ip protocol icmp drop
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Output: \"
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
现在我们必须保存我们的规则集,否则重新启动时它将丢失。为此,请运行以下命令:
|
||||
|
||||
```
|
||||
sudo nft list ruleset > /etc/nftables.conf
|
||||
```
|
||||
|
||||
我们须在引导时加载 nftables,以下将在 systemd 中启用 nftables 服务:
|
||||
|
||||
```
|
||||
sudo systemctl enable nftables
|
||||
```
|
||||
|
||||
接下来,编辑 nftables 单元文件以删除 `ExecStop` 选项,以避免在每次引导时刷新规则集。该文件通常位于 `/etc/systemd/system/sysinit.target.wants/nftables.service`。现在重新启动 nftables:
|
||||
|
||||
```
|
||||
sudo systemctl restart nftables
|
||||
```
|
||||
|
||||
### 在 rsyslog 中记录日志
|
||||
|
||||
当你记录丢弃的数据包时,它们直接进入 syslog,这使得读取该日志文件非常困难。最好将防火墙日志重定向到单独的文件。在 `/var/log` 目录中创建一个名为 `nftables` 的目录,并在其中创建两个名为 `input.log` 和 `output.log` 的文件,分别存储输入和输出日志。确保系统中已安装 rsyslog。现在转到 `/etc/rsyslog.d` 并创建一个名为 `nftables.conf` 的文件,其内容如下:
|
||||
|
||||
```
|
||||
:msg, regex, "Invalid-Input: " -/var/log/nftables/input.log
|
||||
:msg, regex, "Invalid-Output: " -/var/log/nftables/output.log
& stop
|
||||
```
|
||||
|
||||
现在,我们必须确保日志是可管理的。为此,使用以下代码在 `/etc/logrotate.d` 中创建另一个名为 `nftables` 的文件:
|
||||
|
||||
```
|
||||
/var/log/nftables/* {
	rotate 5
	daily
	maxsize 50M
	missingok
	notifempty
	delaycompress
	compress
	postrotate
		invoke-rc.d rsyslog rotate > /dev/null
	endscript
}
|
||||
```
|
||||
|
||||
重新启动 nftables。现在,你可以检查你的规则集。如果你觉得在终端中键入每个命令很麻烦,则可以使用脚本来加载 nftables 防火墙。我希望本文对保护你的系统有用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/transition-to-nftables/
|
||||
|
||||
作者:[Vijay Marcel D][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/vijay-marcel/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?resize=696%2C481&ssl=1 (REHfirewall)
|
||||
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?fit=900%2C622&ssl=1
|
@@ -0,0 +1,149 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11518-1.html)
|
||||
[#]: subject: (Building container images with the ansible-bender tool)
|
||||
[#]: via: (https://opensource.com/article/19/10/building-container-images-ansible)
|
||||
[#]: author: (Tomas Tomecek https://opensource.com/users/tomastomecek)
|
||||
|
||||
使用 ansible-bender 构建容器镜像
|
||||
======
|
||||
|
||||
> 了解如何使用 Ansible 在容器中执行命令。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201910/30/090738vzbifzfpa6qz9bij.jpg)
|
||||
|
||||
容器和 [Ansible][2] 可以很好地融合在一起:从管理和编排到供应和构建。在本文中,我们将重点介绍构建部分。
|
||||
|
||||
如果你熟悉 Ansible,就会知道你可以编写一系列任务,`ansible-playbook` 命令将为你执行这些任务。你知道吗,如果你编写 Dockerfile 并运行 `podman build`,你还可以在容器环境中执行此类命令,并获得相同的结果。
|
||||
|
||||
这是一个例子:
|
||||
|
||||
```
|
||||
- name: Serve our file using httpd
|
||||
hosts: all
|
||||
tasks:
|
||||
- name: Install httpd
|
||||
package:
|
||||
name: httpd
|
||||
state: installed
|
||||
- name: Copy our file to httpd’s webroot
|
||||
copy:
|
||||
src: our-file.txt
|
||||
dest: /var/www/html/
|
||||
```
|
||||
|
||||
你可以在 Web 服务器本地或容器中执行这个剧本,并且只要你记得先创建 `our-file.txt`,它就可以工作。
|
||||
|
||||
但是这里缺少了一些东西。你需要启动(并配置)httpd 以便提供文件。这是容器构建和基础架构供应之间的区别:构建镜像时,你只需准备内容;而运行容器是另一项任务。另一方面,你可以将元数据附加到容器镜像,它会默认运行命令。
|
||||
|
||||
这有个工具可以帮助。试试看 `ansible-bender` 怎么样?
|
||||
|
||||
```
|
||||
$ ansible-bender build the-playbook.yaml fedora:30 our-httpd
|
||||
```
|
||||
|
||||
这条命令使用 `ansible-bender` 在 Fedora 30 基础镜像上执行该剧本,并将生成的容器镜像命名为 `our-httpd`。
|
||||
|
||||
但是,当你运行该容器时,它不会启动 httpd,因为它不知道如何操作。你可以通过向该剧本添加一些元数据来解决此问题:
|
||||
|
||||
```
|
||||
- name: Serve our file using httpd
|
||||
hosts: all
|
||||
vars:
|
||||
ansible_bender:
|
||||
base_image: fedora:30
|
||||
target_image:
|
||||
name: our-httpd
|
||||
cmd: httpd -DFOREGROUND
|
||||
tasks:
|
||||
- name: Install httpd
|
||||
package:
|
||||
name: httpd
|
||||
state: installed
|
||||
- name: Listen on all network interfaces.
|
||||
lineinfile:
|
||||
path: /etc/httpd/conf/httpd.conf
|
||||
regexp: '^Listen '
|
||||
line: Listen 0.0.0.0:80
|
||||
- name: Copy our file to httpd’s webroot
|
||||
copy:
|
||||
src: our-file.txt
|
||||
dest: /var/www/html
|
||||
```
|
||||
|
||||
现在你可以构建镜像(从这里开始,请以 root 用户身份运行所有命令。目前,Buildah 和 Podman 不会为无 root 容器创建专用网络):
|
||||
|
||||
```
|
||||
# ansible-bender build the-playbook.yaml
|
||||
PLAY [Serve our file using httpd] ****************************************************
|
||||
|
||||
TASK [Gathering Facts] ***************************************************************
|
||||
ok: [our-httpd-20191004-131941266141-cont]
|
||||
|
||||
TASK [Install httpd] *****************************************************************
|
||||
loaded from cache: 'f053578ed2d47581307e9ba3f64f4b4da945579a082c6f99bd797635e62befd0'
|
||||
skipping: [our-httpd-20191004-131941266141-cont]
|
||||
|
||||
TASK [Listen on all network interfaces.] *********************************************
|
||||
changed: [our-httpd-20191004-131941266141-cont]
|
||||
|
||||
TASK [Copy our file to httpd’s webroot] **********************************************
|
||||
changed: [our-httpd-20191004-131941266141-cont]
|
||||
|
||||
PLAY RECAP ***************************************************************************
|
||||
our-httpd-20191004-131941266141-cont : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
|
||||
|
||||
Getting image source signatures
|
||||
Copying blob sha256:4650c04b851c62897e9c02c6041a0e3127f8253fafa3a09642552a8e77c044c8
|
||||
Copying blob sha256:87b740bba596291af8e9d6d91e30a01d5eba9dd815b55895b8705a2acc3a825e
|
||||
Copying blob sha256:82c21252bd87532e93e77498e3767ac2617aa9e578e32e4de09e87156b9189a0
|
||||
Copying config sha256:44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
|
||||
Writing manifest to image destination
|
||||
Storing signatures
|
||||
44c6dc6dda1afe28892400c825de1c987c4641fd44fa5919a44cf0a94f58949f
|
||||
Image 'our-httpd' was built successfully \o/
|
||||
```
|
||||
|
||||
镜像构建完毕,可以运行容器了:
|
||||
|
||||
```
|
||||
# podman run our-httpd
|
||||
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.88.2.106. Set the 'ServerName' directive globally to suppress this message
|
||||
```
|
||||
|
||||
是否提供文件了?首先,找出你容器的 IP:
|
||||
|
||||
```
|
||||
# podman inspect -f '{{ .NetworkSettings.IPAddress }}' 7418570ba5a0
|
||||
10.88.2.106
|
||||
```
|
||||
|
||||
你现在可以检查了:
|
||||
|
||||
```
|
||||
$ curl http://10.88.2.106/our-file.txt
|
||||
Ansible is ❤
|
||||
```
|
||||
|
||||
你文件内容是什么?
|
||||
|
||||
这只是使用 Ansible 构建容器镜像的介绍。如果你想了解有关 `ansible-bender` 可以做什么的更多信息,请查看它的 [GitHub][3] 页面。构建快乐!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/building-container-images-ansible
|
||||
|
||||
作者:[Tomas Tomecek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tomastomecek
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire (Blocks for building)
|
||||
[2]: https://www.ansible.com/
|
||||
[3]: https://github.com/ansible-community/ansible-bender
|
106
published/201910/20191023 Using SSH port forwarding on Fedora.md
Normal file
@@ -0,0 +1,106 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11515-1.html)
|
||||
[#]: subject: (Using SSH port forwarding on Fedora)
|
||||
[#]: via: (https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/)
|
||||
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
|
||||
|
||||
在 Fedora 上使用 SSH 端口转发
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201910/29/123804dql3aqqlghza9txt.jpg)
|
||||
|
||||
你可能已经熟悉使用 [ssh 命令][2]访问远程系统。`ssh` 命令背后所使用的协议允许终端的输入和输出流经[安全通道][3]。但是你知道也可以使用 `ssh` 来安全地发送和接收其他数据吗?一种方法是使用“<ruby>端口转发<rt>port forwarding</rt></ruby>”,它允许你在进行 `ssh` 会话时安全地连接网络端口。本文向你展示了它是如何工作的。
|
||||
|
||||
### 关于端口
|
||||
|
||||
标准 Linux 系统已分配了一组网络端口,范围是 0 - 65535。系统会保留 0 - 1023 的端口以供系统使用。在许多系统中,你不能选择使用这些低端口号。通常有几个端口用于运行特定的服务。你可以在系统的 `/etc/services` 文件中找到这些定义。
|
||||
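下面用一小段 shell 演示 `/etc/services` 中条目的典型格式,以及如何从一行里提取端口号(为了不依赖具体系统上的文件内容,这里使用内联的示例数据):

```shell
# /etc/services 中每行的典型格式:服务名  端口/协议  [别名]  [# 注释]
line="https           443/tcp                # http protocol over TLS/SSL"

# 第二个字段是“端口/协议”,再按“/”切开即得端口号
echo "$line" | awk '{print $2}' | cut -d/ -f1
# → 443
```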
|
||||
你可以认为网络端口是类似的物理端口或可以连接到电缆的插孔。端口可以连接到系统上的某种服务,类似物理插孔后面的接线。一个例子是 Apache Web 服务器(也称为 `httpd`)。对于 HTTP 非安全连接,Web 服务器通常要求在主机系统上使用端口 80,对于 HTTPS 安全连接通常要求使用 443。
|
||||
|
||||
当你连接到远程系统(例如,使用 Web 浏览器)时,你是将浏览器“连接”到你的主机上的端口。这通常是一个随机的高端口号,例如 54001。你的主机上的端口连接到远程主机上的端口(例如 443)来访问其安全的 Web 服务器。
|
||||
|
||||
那么,当你有这么多可用端口时,为什么还要使用端口转发呢?这是 Web 开发人员生活中的几种常见情况。
|
||||
|
||||
### 本地端口转发
|
||||
|
||||
想象一下,你正在名为 `remote.example.com` 的远程系统上进行 Web 开发。通常,你是通过 `ssh` 进入此系统的,但是它位于防火墙后面,而且该防火墙很少允许其他类型的访问,并且会阻塞大多数其他端口。要尝试你的网络应用,能够使用浏览器访问远程系统会很有帮助。但是,由于使用了讨厌的防火墙,你无法通过在浏览器中输入 URL 的常规方法来访问它。
|
||||
|
||||
本地转发使你可以通过 `ssh` 连接来建立可通过远程系统访问的端口。该端口在系统上显示为本地端口(因而称为“本地转发”)。
|
||||
|
||||
假设你的网络应用在 `remote.example.com` 的 8000 端口上运行。要将那个系统的 8000 端口本地转发到你系统上的 8000 端口,请在开始会话时将 `-L` 选项与 `ssh` 结合使用:
|
||||
|
||||
```
|
||||
$ ssh -L 8000:localhost:8000 remote.example.com
|
||||
```
|
||||
|
||||
等等,为什么我们使用 `localhost` 作为转发目标?这是因为从 `remote.example.com` 的角度来看,你是在要求主机使用其自己的端口 8000。(回想一下,任何主机通常可以通过网络连接 `localhost` 而连接到自身。)现在那个端口连接到你系统的 8000 端口了。`ssh` 会话准备就绪后,将其保持打开状态,然后可以在浏览器中键入 `http://localhost:8000` 来查看你的 Web 应用。现在,系统之间的流量可以通过 `ssh` 隧道安全地传输!
|
||||
|
||||
如果你有敏锐的眼睛,你可能已经注意到了一些东西。如果我们要 `remote.example.com` 转发到与 `localhost` 不同的主机名怎么办?如果它可以访问该网络上另一个系统上的端口,那么通常可以同样轻松地转发该端口。例如,假设你想访问也在该远程网络中的 `db.example.com` 的 MariaDB 或 MySQL 服务。该服务通常在端口 3306 上运行。因此,即使你无法 `ssh` 到实际的 `db.example.com` 主机,你也可以使用此命令将其转发:
|
||||
|
||||
```
|
||||
$ ssh -L 3306:db.example.com:3306 remote.example.com
|
||||
```
|
||||
|
||||
现在,你可以在 `localhost` 上运行 MariaDB 命令,而实际上是在使用 `db.example.com` 主机。
|
||||
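建立上面的隧道之后,就可以像连接本机数据库一样连接它(假设性示例,用户名需按实际情况替换;注意这里用 `127.0.0.1` 而不是 `localhost`,以免 MySQL/MariaDB 客户端改走本地套接字而绕过隧道):

```
mysql -h 127.0.0.1 -P 3306 -u dbuser -p
```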
|
||||
### 远程端口转发
|
||||
|
||||
远程转发让你可以进行相反操作。想象一下,你正在为办公室的朋友设计一个 Web 应用,并想向他们展示你的工作。不过,不幸的是,你在咖啡店里工作,并且由于网络设置,他们无法通过网络连接访问你的笔记本电脑。但是,你同时使用着办公室的 `remote.example.com` 系统,并且仍然可在这里登录。你的 Web 应用似乎在本地 5000 端口上运行良好。
|
||||
|
||||
远程端口转发使你可以通过 `ssh` 连接从本地系统建立端口的隧道,并使该端口在远程系统上可用。在开始 `ssh` 会话时,只需使用 `-R` 选项:
|
||||
|
||||
```
|
||||
$ ssh -R 6000:localhost:5000 remote.example.com
|
||||
```
|
||||
|
||||
现在,当在公司防火墙内的朋友打开浏览器时,他们可以进入 `http://remote.example.com:6000` 查看你的工作。就像在本地端口转发示例中一样,通信通过 `ssh` 会话安全地进行。
|
||||
|
||||
默认情况下,`sshd` 守护进程运行在设置的主机上,因此**只有**该主机可以连接它的远程转发端口。假设你的朋友希望能够让其他 `example.com` 公司主机上的人看到你的工作,而他们不在 `remote.example.com` 上。你需要让 `remote.example.com` 主机的所有者将以下选项**之一**添加到 `/etc/ssh/sshd_config` 中:
|
||||
|
||||
```
|
||||
GatewayPorts yes # 或
|
||||
GatewayPorts clientspecified
|
||||
```
|
||||
|
||||
第一个选项意味着 `remote.example.com` 上的所有网络接口都可以使用远程转发的端口。第二个意味着建立隧道的客户端可以选择地址。默认情况下,此选项设置为 `no`。
|
||||
|
||||
使用此选项,你作为 `ssh` 客户端仍必须指定可以共享你这边转发端口的接口。通过在本地端口之前添加网络地址范围来进行此操作。有几种方法可以做到,包括:
|
||||
|
||||
```
|
||||
$ ssh -R *:6000:localhost:5000 # 所有网络
|
||||
$ ssh -R 0.0.0.0:6000:localhost:5000 # 所有网络
|
||||
$ ssh -R 192.168.1.15:6000:localhost:5000 # 单个网络
|
||||
$ ssh -R remote.example.com:6000:localhost:5000 # 单个网络
|
||||
```
|
||||
|
||||
### 其他注意事项
|
||||
|
||||
请注意,本地和远程系统上的端口号不必相同。实际上,有时你甚至可能无法使用相同的端口。例如,普通用户可能不会在默认设置中转发到系统端口。
|
||||
|
||||
另外,可以限制主机上的转发。如果你需要在联网主机上实现更严格的安全性,那么这一点对你来说可能很重要。`sshd` 守护进程的 `PermitOpen` 选项控制是否允许 TCP 转发以及哪些端口可用。默认设置为 `any`,这让上面的所有示例都能正常工作。要禁止任何端口转发,请选择 `none`,或仅允许特定的“主机:端口”。有关更多信息,请在手册页中搜索 `PermitOpen` 来配置 `sshd` 守护进程:
|
||||
|
||||
```
|
||||
$ man sshd_config
|
||||
```
|
||||
|
||||
最后,请记住,只有在 `ssh` 会话处于打开状态时端口转发才会生效。如果需要长时间保持转发,请尝试使用 `-N` 选项在后台运行会话。确保控制台已锁定,以防你离开时被他人篡改。
|
||||
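上一段提到的 `-N` 选项可以这样使用(示例性写法;`-f` 让 ssh 在认证完成后自动转入后台,是这里额外补充的选项):

```
# 只做端口转发、不执行远程命令,并让会话留在后台
ssh -N -f -L 8000:localhost:8000 remote.example.com

# 不再需要转发时,结束对应的后台 ssh 进程
pkill -f "ssh -N -f -L 8000:localhost:8000"
```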
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/using-ssh-port-forwarding-on-fedora/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/pfrields/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/ssh-port-forwarding-816x345.jpg
|
||||
[2]: https://en.wikipedia.org/wiki/Secure_Shell
|
||||
[3]: https://fedoramagazine.org/open-source-ssh-clients/
|
@@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11509-1.html)
[#]: subject: (MX Linux 19 Released With Debian 10.1 ‘Buster’ & Other Improvements)
[#]: via: (https://itsfoss.com/mx-linux-19/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

随着 Debian 10.1 “Buster” 的发布,MX Linux 19 也发布了
======

MX Linux 18 是我在[最佳 Linux 发行版][1]中主要推荐的发行版之一,特别是当你在考虑 Ubuntu 以外的发行版时。

它基于 Debian 9.6 “Stretch”,具有令人难以置信的快速流畅的体验。

现在,作为该发行版的主要升级版本,MX Linux 19 带来了许多重大改进和变更。在这里,我们将介绍其主要亮点。

### MX Linux 19 中的新功能

- [视频](https://player.vimeo.com/video/368459760)

#### Debian 10 “Buster”

这一点值得一提,因为 Debian 10 是 MX Linux 18 所基于的 Debian 9.6 “Stretch” 的一次主要升级。

如果你对 Debian 10 “Buster” 的变化感到好奇,建议你阅读有关 [Debian 10 “Buster” 的新功能][3]的文章。

#### Xfce 桌面 4.14

![MX Linux 19][4]

[Xfce 4.14][5] 正是 Xfce 开发团队提供的最新版本。就个人而言,我不是 Xfce 桌面环境的粉丝,但是当你在 Linux 发行版(尤其是 MX Linux 19)上使用它时,它超快的性能会让你惊叹。

或许你会感兴趣,我们也有一个快速指南来帮助你[自定义 Xfce][6]。

#### 升级的软件包及最新的 Debian 内核 4.19

除了 [GIMP][7]、MESA、Firefox 等的更新软件包之外,它还随附有 Debian “Buster” 可用的最新内核 4.19。

#### 升级的 MX 系列应用

如果你以前使用过 MX Linux,则可能会知道它预装了一些有用的 MX 系列应用,可以帮助你快速完成更多工作。

像 MX-installer 和 MX-packageinstaller 这样的应用程序得到了显著改进。

除了这两个以外,所有其他 MX 工具也已不同程度地进行了更新,修复了错误、添加了新的翻译(或者只是改善了用户体验)。

#### 其它改进

考虑到这是一次重大升级,很明显,底层的更改要多于表面(包括最新的 antiX live 系统更新)。

你可以在他们的[官方公告][8]中查看更多详细信息。你还可以观看来自开发人员的以下视频,它介绍了 MX Linux 19 中的所有新功能:

- [视频](https://youtu.be/4XVHA4l4Zrc)

### 获取 MX Linux 19

即使你现在正在使用 MX Linux 18,也[无法][9]直接升级到 MX Linux 19,你需要像其他人一样进行全新安装。

你可以从此页面下载 MX Linux 19:

- [下载 MX Linux 19][10]

### 结语

在 MX Linux 18 上,我在使用 WiFi 适配器时遇到了问题,并通过[论坛][11]解决了该问题,但看来 MX Linux 19 仍未解决该问题。因此,如果你在安装 MX Linux 19 之后遇到了相同的问题,你可能想要查看一下我的[论坛帖子][11]。

如果你使用的是 MX Linux 18,那么这绝对是一个令人印象深刻的升级。

你尝试过了吗?你对新的 MX Linux 19 版本有何想法?请在以下评论中让我知道你的想法。

--------------------------------------------------------------------------------

via: https://itsfoss.com/mx-linux-19/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-11411-1.html
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://linux.cn/article-11071-1.html
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/mx-linux-19.jpg?ssl=1
[5]: https://xfce.org/about/news
[6]: https://itsfoss.com/customize-xfce/
[7]: https://itsfoss.com/gimp-2-10-release/
[8]: https://mxlinux.org/blog/mx-19-patito-feo-released/
[9]: https://mxlinux.org/migration/
[10]: https://mxlinux.org/download-links/
[11]: https://forum.mxlinux.org/viewtopic.php?t=52201
published/201910/20191029 Fedora 31 is officially here.md

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11522-1.html)
[#]: subject: (Fedora 31 is officially here!)
[#]: via: (https://fedoramagazine.org/announcing-fedora-31/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)

Fedora 31 正式发布
======

![][1]

在这里,我们很荣幸地宣布 Fedora 31 的发布。感谢成千上万的 Fedora 社区成员和贡献者的辛勤工作,我们现在又一次在庆祝准时发布。这已成为一种惯例!

如果你只想立即获取它,请访问 <https://getfedora.org/>。要了解详细信息,请继续阅读!

### 工具箱

如果你还没有使用过 [Fedora 工具箱][2],那么现在是尝试一下的好时机。这是用于启动和管理个人工作区容器的简单工具,有了它,你可以在一个单独的环境中进行开发或试验。它只需要在命令行运行 `toolbox enter` 就行。

这种容器化的工作流程对于基于 ostree 的 Fedora 变体(如 CoreOS、IoT 和 Silverblue)的用户至关重要,但在任何工作站甚至服务器系统上也非常有用。在接下来的几个月中,我们希望对该工具及其相关的用户体验进行更多增强,非常欢迎你提供反馈。

### Fedora 风味版

Fedora 的各个“版本”是针对特定“展示”用途的定向输出。

Fedora 工作站版本专注于台式机,尤其是面向那些希望获得“开箱即用的” Linux 操作系统体验的软件开发人员。此版本带有 GNOME 3.34,它带来了显著的性能增强,在功耗较低的硬件上尤其明显。

Fedora 服务器版本以易于部署的方式为系统管理员带来了最新、最先进的开源服务器软件。

而且,我们还有处于预览状态的 Fedora CoreOS(一个定义了现代容器世界的发行版)和 [Fedora IoT][3](用于“边缘计算”用例)。(敬请期待计划中的给该物联网版本征集名称的活动!)

当然,我们提供的不仅仅是这些版本。还有面向各种受众和用例的 [Fedora Spins][4] 和 [Labs][5],包括为业余和专业天文学家带来完整开源工具链的 [Fedora 天文学][6] 版本,以及支持各种桌面环境(例如 [KDE Plasma][7] 和 [Xfce][8])的版本。

而且,请不要忘记我们的替代架构 [ARM AArch64、Power 和 S390x][9]。特别要注意的是,我们对包括 Rock960、RockPro64 和 Rock64 在内的 Rockchip 片上系统设备的支持得到了改善,并初步支持了 “[panfrost][10]”,这是一种用于 Arm Mali “midgard” GPU 的较新的开源 3D 加速图形驱动程序。

不过,如果你使用的是只支持 32 位的 i686 旧系统,那么该找个替代方案了,[我们的基本系统告别了 32 位 Intel 架构][11]。

### 常规改进

无论你使用哪种 Fedora 版本,你都将获得开源世界所提供的最新软件。遵循 “[First][12]” 准则,我们启用了 CgroupsV2(如果你使用的是 Docker,[请确保检查一下][13])。Glibc 2.30 和 NodeJS 12 是 Fedora 31 中众多更新的软件包中的两个。而且,我们已经将 `python` 命令切换为 Python 3,请记住,Python 2 的生命周期将在[今年年底][14]终止。

我们很高兴你能试用新版本!现在就转到 <https://getfedora.org/> 下载吧。或者,如果你已经在运行 Fedora 操作系统,请遵循简单的[升级说明][15]就行。

### 万一出现问题……

如果遇到问题,请查看 [Fedora 31 常见错误][16]页面,如果有疑问,请访问我们的 [Ask Fedora][17] 用户支持平台。

### 谢谢大家

感谢在此发行周期中成千上万为 Fedora 项目做出贡献的人们,尤其是那些为使该发行版再次按时发布而付出更多努力的人。而且,如果你本周在波特兰参加 [USENIX LISA][18],可以在博览会大厅的 Red Hat、Fedora 和 CentOS 展位找到我。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/announcing-fedora-31/

作者:[Matthew Miller][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/fedora31-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/
[3]: https://iot.fedoraproject.org/
[4]: https://spins.fedoraproject.org/
[5]: https://labs.fedoraproject.org/
[6]: https://labs.fedoraproject.org/en/astronomy/
[7]: https://spins.fedoraproject.org/en/kde/
[8]: https://spins.fedoraproject.org/en/xfce/
[9]: https://alt.fedoraproject.org/alt/
[10]: https://panfrost.freedesktop.org/
[11]: https://fedoramagazine.org/in-fedora-31-32-bit-i686-is-86ed/
[12]: https://docs.fedoraproject.org/en-US/project/#_first
[13]: https://fedoraproject.org/wiki/Common_F31_bugs#Docker_package_no_longer_available_and_will_not_run_by_default_.28due_to_switch_to_cgroups_v2.29
[14]: https://pythonclock.org/
[15]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/
[16]: https://fedoraproject.org/wiki/Common_F31_bugs
[17]: http://ask.fedoraproject.org
[18]: https://www.usenix.org/conference/lisa19
published/20191008 5 Best Password Managers For Linux Desktop.md

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11531-1.html)
[#]: subject: (5 Best Password Managers For Linux Desktop)
[#]: via: (https://itsfoss.com/password-managers-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

5 个 Linux 桌面上的最佳密码管理器
======

> 密码管理器是创建唯一密码并安全存储它们的有用工具,有了它,你无需记住密码。了解一下适用于 Linux 桌面的最佳密码管理器。

![](https://img.linux.net.cn/data/attachment/album/201911/03/102528e97mr0ls89lz9rrr.jpg)

密码无处不在。网站、论坛、Web 应用等,你需要为其创建帐户和密码。麻烦在于密码:为各个帐户使用相同的密码会带来安全风险,因为[如果其中一个网站遭到入侵,黑客也会在其他网站上尝试相同的电子邮件密码组合][1]。

但是,为所有新帐户设置独有的密码意味着你必须记住所有密码,这对普通人而言不太可能。这就是密码管理器可以提供帮助的地方。

密码管理应用会为你建议/创建强密码,并将其存储在加密的数据库中。你只需要记住密码管理器的主密码即可。

主流的现代浏览器(例如 Mozilla Firefox 和 Google Chrome)内置了密码管理器。这很有帮助,但是你只能在浏览器上使用它。

有一些第三方的专门密码管理器,其中一些还提供 Linux 的原生桌面应用。在本文中,我们将筛选出可用于 Linux 的最佳密码管理器。

继续之前,我还建议你了解一下 [Linux 的免费密码生成器][2],来为你生成强大的唯一密码。

### Linux 密码管理器

> 可能的非 FOSS 警报!

> 我们优先考虑开源软件(也有一些专有软件,请不要讨厌我!),并且这些软件提供适用于 Linux 的独立桌面应用(GUI)。专有软件已高亮显示。

#### 1、Bitwarden

![][3]

主要亮点:

* 开源
* 免费供个人使用(可选付费升级)
* 云服务器的端到端加密
* 跨平台
* 有浏览器扩展
* 命令行工具

Bitwarden 是 Linux 上最令人印象深刻的密码管理器之一。老实说,直到现在我才知道它。我已经从 [LastPass][4] 切换到了它。我能够轻松地从 LastPass 导入数据,而没有遇到任何问题和困难。

付费版本的价格仅为每年 10 美元。这似乎是值得的(我已经为个人使用进行了升级)。

它是一个开源解决方案,因此没有任何可疑之处。你甚至可以将其托管在自己的服务器上,并为你的组织创建密码解决方案。

除此之外,你还将获得所有必需的功能,例如用于登录的两步验证、导入/导出凭据、指纹短语(唯一键)、密码生成器等等。

你可以免费将帐户升级为组织帐户,以便最多与 2 个用户共享你的信息。但是,如果你想要额外的加密存储以及与 5 个用户共享密码的功能,那么付费升级的费用低至每月 1 美元。我认为绝对值得一试!

- [Bitwarden][5]

#### 2、Buttercup

![][6]

主要亮点:

* 开源
* 免费,没有付费方式。
* 跨平台
* 有浏览器扩展

这是 Linux 中的另一个开源密码管理器。Buttercup 可能不是一个非常流行的解决方案。但是,如果你在寻找一种更简单的保存凭据的方法,那么它会是一个不错的开始。

与其他一些软件不同,你不必担心其云服务器的安全性,因为它只支持离线使用,或者连接 [Dropbox][7]、[OwnCloud][8]、[Nextcloud][9] 和 [WebDAV][10] 等云服务。

因此,如果需要同步数据,你可以自行选择云服务。

- [Buttercup][11]

#### 3、KeePassXC

![][12]

主要亮点:

* 开源
* 简单的密码管理器
* 跨平台
* 没有移动设备支持

KeePassXC 是 [KeePassX][13] 的社区分支,而 KeePassX 最初是 Windows 上 [KeePass][14] 的 Linux 移植版本。

你可能没有意识到,KeePassX 已经多年没有维护了。因此,如果你在寻找简单易用的密码管理器,那么 KeePassXC 是一个不错的选择。KeePassXC 可能不是最漂亮或最好的密码管理器,但它确实可以做到该做的事。

它也是安全和开源的。我认为这值得一试,你说呢?

- [KeePassXC][15]

#### 4、Enpass(非开源)

![][16]

主要亮点:

* 专有软件
* 有许多功能,包括对“可穿戴”设备的支持。
* Linux 版完全免费(具有付费支持)

Enpass 是非常流行的跨平台密码管理器。即使它不是开源解决方案,还是有很多人依赖它。因此,至少可以肯定它是可用的。

它提供了很多功能,如果你有可穿戴设备,它也可以提供支持,这一点很少见。

很高兴能看到 Enpass 积极维护 Linux 发行版的软件包。另外,请注意,它仅适用于 64 位系统。你可以在它的网站上找到[官方的安装说明][17]。安装时需要使用终端,但是我按照步骤进行了测试,它非常好用。

- [Enpass][18]

#### 5、myki(非开源)

![][19]

主要亮点:

* 专有软件
* 不使用云服务器存储密码
* 专注于本地点对点同步
* 能够在移动设备上用指纹 ID 替换密码

这可能不是一个很主流的建议,但我发现它很有趣。它是一个专有的密码管理器,可以让你避免使用云服务器,而是依靠点对点同步。

因此,如果你不想使用任何云服务器来存储你的信息,那么它很适合你。另外值得注意的是,其用于 Android 和 iOS 的应用可让你用指纹 ID 替换密码。如果你希望便于在手机上使用,又有桌面密码管理器的基本功能,这似乎是个不错的选择。

但是,如果你选择升级到付费版,它的付费方案绝对不便宜,这就要你自己判断了。

尝试一下,让我们知道它的效果如何!

- [myki][20]

### 其他一些值得一提的密码管理器

即使没有为 Linux 提供独立的应用,仍有一些密码管理器值得一提。

如果你需要使用基于浏览器的(扩展)密码管理器,建议你尝试使用 [LastPass][21]、[Dashlane][22] 和 [1Password][23]。LastPass 甚至提供了 [Linux 客户端(和命令行工具)][24]。

如果你正在寻找命令行密码管理器,那你应该试试 [Pass][25]。

[Password Safe][26] 也是一种选择,但它的 Linux 客户端还处于 beta 阶段,我不建议依靠 “beta” 程序来存储密码。还有 [Universal Password Manager][27],但它已不再维护。你可能也听说过 [Password Gorilla][28],但它并没有得到积极维护。

### 总结

目前,Bitwarden 似乎是我个人的最爱。但是,在 Linux 上有几个替代品可供选择。你可以选择提供原生应用的程序,也可以选择浏览器插件,选择权在你。

如果我遗漏了值得尝试的密码管理器,请在下面的评论中告诉我们。与往常一样,我们会根据你的建议扩展列表。

--------------------------------------------------------------------------------

via: https://itsfoss.com/password-managers-linux/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://medium.com/@computerphonedude/one-of-my-old-passwords-was-hacked-on-6-different-sites-and-i-had-no-clue-heres-how-to-quickly-ced23edf3b62
[2]: https://itsfoss.com/password-generators-linux/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/bitward.png?ssl=1
[4]: https://www.lastpass.com/
[5]: https://bitwarden.com/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/buttercup.png?ssl=1
[7]: https://www.dropbox.com/
[8]: https://owncloud.com/
[9]: https://nextcloud.com/
[10]: https://en.wikipedia.org/wiki/WebDAV
[11]: https://buttercup.pw/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/KeePassXC.png?ssl=1
[13]: https://www.keepassx.org/
[14]: https://keepass.info/
[15]: https://keepassxc.org
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/enpass.png?ssl=1
[17]: https://www.enpass.io/support/kb/general/how-to-install-enpass-on-linux/
[18]: https://www.enpass.io/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/myki.png?ssl=1
[20]: https://myki.com/
[21]: https://lastpass.com/
[22]: https://www.dashlane.com/
[23]: https://1password.com/
[24]: https://lastpass.com/misc_download2.php
[25]: https://www.passwordstore.org/
[26]: https://pwsafe.org/
[27]: http://upm.sourceforge.net/
[28]: https://github.com/zdia/gorilla/wiki
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11535-1.html)
[#]: subject: (How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server)
[#]: via: (https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

如何在 CentOS 8 和 RHEL 8 服务器上启用 EPEL 仓库
======

EPEL 代表 “Extra Packages for Enterprise Linux”,它是一个自由开源的附加软件包仓库,可用于 CentOS 和 RHEL 服务器。顾名思义,EPEL 仓库提供了额外的软件包,这些软件包在 [CentOS 8][1] 和 [RHEL 8][2] 的默认软件包仓库中不可用。

在本文中,我们将演示如何在 CentOS 8 和 RHEL 8 服务器上启用和使用 EPEL 仓库。

![](https://img.linux.net.cn/data/attachment/album/201911/04/113307wz4y3lnczzlxzn2j.jpg)

### EPEL 仓库的先决条件

* 最小化安装的 CentOS 8 和 RHEL 8 服务器
* root 或 sudo 管理员权限
* 网络连接

### 在 RHEL 8.x 服务器上安装并启用 EPEL 仓库

登录或 SSH 到你的 RHEL 8.x 服务器,并执行以下 `dnf` 命令来安装 EPEL rpm 包:

```
[root@linuxtechi ~]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
```

上面命令的输出将如下所示:

![dnf-install-epel-repo-rehl8][5]

EPEL rpm 包成功安装后,它将自动启用并配置其 yum/dnf 仓库。运行以下 `dnf` 或 `yum` 命令,以验证是否启用了 EPEL 仓库:

```
[root@linuxtechi ~]# dnf repolist epel
或者
[root@linuxtechi ~]# dnf repolist epel -v
```

![epel-repolist-rhel8][6]

### 在 CentOS 8.x 服务器上安装并启用 EPEL 仓库

登录或 SSH 到你的 CentOS 8 服务器,并执行以下 `dnf` 或 `yum` 命令来安装 `epel-release` rpm 软件包。在 CentOS 8 服务器中,EPEL rpm 包位于其默认软件包仓库中。

```
[root@linuxtechi ~]# dnf install epel-release -y
或者
[root@linuxtechi ~]# yum install epel-release -y
```

执行以下命令来验证 CentOS 8 服务器上 EPEL 仓库的状态:

```
[root@linuxtechi ~]# dnf repolist epel
Last metadata expiration check: 0:00:03 ago on Sun 13 Oct 2019 04:18:05 AM BST.
repo id    repo name                                         status
*epel      Extra Packages for Enterprise Linux 8 - x86_64    1,977
Total packages: 1,977
[root@linuxtechi ~]#
```

以上命令的输出说明我们已经成功启用了 EPEL 仓库。让我们在 EPEL 仓库上执行一些基本操作。

### 列出 EPEL 仓库中所有可用的软件包

如果要列出 EPEL 仓库中的所有软件包,请运行以下 `dnf` 命令:

```
[root@linuxtechi ~]# dnf repository-packages epel list
zvbi-fonts.noarch 0.2.35-9.el8 epel
[root@linuxtechi ~]#
```

### 从 EPEL 仓库中搜索软件包

假设我们要搜索 EPEL 仓库中的 Zabbix 包,请执行以下 `dnf` 命令:

```
[root@linuxtechi ~]# dnf repository-packages epel list | grep -i zabbix
```

上面命令的输出类似下面这样:

![epel-repo-search-package-centos8][7]

### 从 EPEL 仓库安装软件包

假设我们要从 EPEL 仓库安装 htop 包,运行以下 `dnf` 命令:

语法:

```
# dnf --enablerepo="epel" install <包名>
```

```
[root@linuxtechi ~]# dnf --enablerepo="epel" install htop -y
```

注意:如果我们在上面的命令中未指定 `--enablerepo=epel`,那么它将在所有可用的软件包仓库中查找 htop 包。

本文就是这些内容了,我希望上面的步骤能帮助你在 CentOS 8 和 RHEL 8 服务器上启用并配置 EPEL 仓库,请在下面的评论栏分享你的评论和反馈。

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[2]: https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/EPEL-Repo-CentOS8-RHEL8.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/dnf-install-epel-repo-rehl8.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/10/epel-repolist-rhel8.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/10/epel-repo-search-package-centos8.jpg
published/20191022 Initializing arrays in Java.md

[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11533-1.html)
[#]: subject: (Initializing arrays in Java)
[#]: via: (https://opensource.com/article/19/10/initializing-arrays-java)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)

Java 中初始化数组
======

> 数组是一种有用的数据类型,适合管理那些最好放在连续内存位置中的集合元素。下面是如何有效地使用它们。

![Coffee beans and a cup of coffee][1]

有使用 C 或者 FORTRAN 语言编程经验的人会对数组的概念很熟悉。它们基本上是一个连续的内存块,其中每个位置都是某种数据类型:整型、浮点型或者诸如此类的数据类型。

Java 的情况与此类似,但是有一些额外的问题。

### 一个数组的示例

让我们在 Java 中创建一个长度为 10 的整型数组:

```
int[] ia = new int[10];
```

上面的代码片段中会发生什么?从左到右依次是:

1. 最左边的 `int[]` 将变量的*类型*声明为 `int` 数组(由 `[]` 表示)。
2. 它的右边是变量的名称,当前为 `ia`。
3. 接下来,`=` 告诉我们,左侧定义的变量赋值为右侧的内容。
4. 在 `=` 的右侧,我们看到了 `new`,它在 Java 中表示一个对象正在*被初始化中*,这意味着已为其分配存储空间并调用了其构造函数([请参见此处以获取更多信息][2])。
5. 然后,我们看到 `int[10]`,它告诉我们正在初始化的这个对象是包含 10 个整型的数组。

因为 Java 是强类型的,所以变量 `ia` 的类型必须跟 `=` 右侧表达式的类型兼容。

### 初始化示例数组

让我们把这个简单的数组放在一段代码中,并尝试运行一下。将以下内容保存到一个名为 `Test1.java` 的文件中,使用 `javac` 编译,使用 `java` 运行(当然是在终端中):

```
import java.lang.*;

public class Test1 {

    public static void main(String[] args) {
        int[] ia = new int[10];                              // 见下文注 1
        System.out.println("ia is " + ia.getClass());        // 见下文注 2
        for (int i = 0; i < ia.length; i++)                  // 见下文注 3
            System.out.println("ia[" + i + "] = " + ia[i]);  // 见下文注 4
    }

}
```

让我们来看看最重要的部分。

1. 我们声明和初始化了长度为 10 的整型数组,即 `ia`,这显而易见。
2. 在下面的行中,我们看到表达式 `ia.getClass()`。没错,`ia` 是属于一个*类*的*对象*,这行代码将告诉我们是哪个类。
3. 在紧接的下一行中,我们看到了一个循环 `for (int i = 0; i < ia.length; i++)`,它定义了一个循环索引变量 `i`,该变量遍历了从 0 到比 `ia.length` 小 1 的序列,这个表达式告诉我们在数组 `ia` 中定义了多少个元素。
4. 接下来,循环体打印出 `ia` 的每个元素的值。

当这个程序编译和运行时,它产生以下结果:

```
me@mydesktop:~/Java$ javac Test1.java
me@mydesktop:~/Java$ java Test1
ia is class [I
ia[0] = 0
ia[1] = 0
ia[2] = 0
ia[3] = 0
ia[4] = 0
ia[5] = 0
ia[6] = 0
ia[7] = 0
ia[8] = 0
ia[9] = 0
me@mydesktop:~/Java$
```

`ia.getClass()` 的输出的字符串表示形式是 `[I`,它是“整数数组”的简写。与 C 语言类似,Java 数组以第 0 个元素开始,扩展到第 `<数组大小> - 1` 个元素。如上所见,我们可以看到数组 `ia` 的每个元素都(似乎由数组构造函数)设置为零。

所以,就这些吗?声明类型,使用适当的初始化器,就完成了吗?

好吧,并没有。在 Java 中有许多其它方法来初始化数组。

### 为什么我要初始化一个数组,有其它方式吗?

像所有好的问题一样,这个问题的答案是“视情况而定”。在这种情况下,答案取决于初始化后我们希望对数组做什么。

在某些情况下,数组自然会作为一种累加器出现。例如,假设我们正在编程实现计算小型办公室中一组电话分机接收和拨打的电话数量。一共有 8 个分机,编号为 1 到 8,加上话务员的分机,编号为 0。因此,我们可以声明两个数组:

```
int[] callsMade;
int[] callsReceived;
```

然后,每当我们开始一个新的累计呼叫统计数据的周期时,我们就将每个数组初始化为:

```
callsMade = new int[9];
callsReceived = new int[9];
```

在每个累计通话统计数据的最后阶段,我们可以打印出统计数据。粗略地说,我们可能会看到:

```
import java.lang.*;
import java.io.*;

public class Test2 {

    public static void main(String[] args) {

        int[] callsMade;
        int[] callsReceived;

        // 初始化呼叫计数器

        callsMade = new int[9];
        callsReceived = new int[9];

        // 处理呼叫……
        // 分机拨打电话:callsMade[ext]++
        // 分机接听电话:callsReceived[ext]++

        // 汇总通话统计

        System.out.printf("%3s%25s%25s\n", "ext", " calls made",
            "calls received");
        for (int ext = 0; ext < callsMade.length; ext++) {
            System.out.printf("%3d%25d%25d\n", ext,
                callsMade[ext], callsReceived[ext]);
        }

    }

}
```

这会产生这样的输出:

```
me@mydesktop:~/Java$ javac Test2.java
me@mydesktop:~/Java$ java Test2
ext calls made calls received
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
me@mydesktop:~/Java$
```

看来这一天呼叫中心不是很忙。

在上面的累加器示例中,我们看到由数组初始化程序设置的零起始值可以满足我们的需求。但是在其它情况下,这个起始值可能不是正确的选择。

例如,在某些几何计算中,我们可能需要将二维数组初始化为单位矩阵(除从左上角到右下角的主对角线外,其余元素全是零)。我们可以选择这样做:

```
double[][] m = new double[3][3];
for (int d = 0; d < 3; d++) {
    m[d][d] = 1.0;
}
```

在这种情况下,我们依靠数组初始化器 `new double[3][3]` 将数组设置为零,然后使用循环将主对角线上的元素设置为 1。在这种简单情况下,我们可以使用 Java 提供的快捷方式:

```
double[][] m = {
    {1.0, 0.0, 0.0},
    {0.0, 1.0, 0.0},
    {0.0, 0.0, 1.0}};
```

这种可视化结构特别适合这种应用场景,因为它便于复查数组的实际布局。但是当行数和列数只能在运行时确定时,我们可能会看到这样的代码:

```
int nrc;
// 一些代码确定行数和列数 = nrc
double[][] m = new double[nrc][nrc];
for (int d = 0; d < nrc; d++) {
    m[d][d] = 1.0;
}
```

值得一提的是,Java 中的二维数组实际上是数组的数组,没有什么能阻止无畏的程序员让这些第二层数组中的每个数组的长度都不同。也就是说,下面这样的事情是完全合法的:

```
int [][] differentLengthRows = {
    {1, 2, 3, 4, 5},
    {6, 7, 8, 9},
    {10, 11, 12},
    {13, 14},
    {15}};
```

在涉及不规则形状矩阵的各种线性代数应用中,可以应用这种类型的结构(有关更多信息,请参见[此 Wikipedia 文章][5])。除此之外,既然我们了解到二维数组实际上是数组的数组,那么以下内容也就不足为奇了:

```
differentLengthRows.length
```

可以告诉我们二维数组 `differentLengthRows` 的行数,并且:

```
differentLengthRows[i].length
```

告诉我们 `differentLengthRows` 第 `i` 行的列数。

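作为补充,下面是一个遍历这种不规则数组的小例子(本段代码为示意性的草稿,并非原文内容,类名 `RaggedDemo` 为演示而假设):每一行都使用该行自己的 `length`,从而避免数组越界。

```java
import java.lang.*;

public class RaggedDemo {

    public static void main(String[] args) {
        // 与正文中相同的不规则二维数组
        int[][] differentLengthRows = {
            {1, 2, 3, 4, 5},
            {6, 7, 8, 9},
            {10, 11, 12},
            {13, 14},
            {15}};

        // 外层循环用 differentLengthRows.length(行数),
        // 内层循环用 differentLengthRows[i].length(该行的列数)
        for (int i = 0; i < differentLengthRows.length; i++) {
            int sum = 0;
            for (int j = 0; j < differentLengthRows[i].length; j++) {
                sum += differentLengthRows[i][j];
            }
            System.out.println("row " + i + " has "
                + differentLengthRows[i].length + " columns, sum = " + sum);
        }
    }
}
```

运行后可以看到每一行的列数各不相同(5、4、3、2、1),而各行之和分别为 15、30、33、27、15。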
### 深入理解数组

考虑到在运行时确定数组大小的想法,我们看到数组在实例化之前仍需要我们知道该大小。但是,如果在处理完所有数据之前我们不知道大小怎么办?这是否意味着我们必须先处理一次以找出数组的大小,然后再次处理?这可能很难做到,尤其是如果我们只有一次机会使用数据时。

[Java 集合框架][6]很好地解决了这个问题。提供的其中一项是 `ArrayList` 类,它类似于数组,但可以动态扩展。为了演示 `ArrayList` 的工作原理,让我们创建一个 `ArrayList` 对象并将其初始化为前 20 个[斐波那契数字][7]:

```
import java.lang.*;
import java.util.*;

public class Test3 {

    public static void main(String[] args) {

        ArrayList<Integer> fibos = new ArrayList<Integer>();

        fibos.add(0);
        fibos.add(1);
        for (int i = 2; i < 20; i++) {
            fibos.add(fibos.get(i - 1) + fibos.get(i - 2));
        }

        for (int i = 0; i < fibos.size(); i++) {
            System.out.println("fibonacci " + i + " = " + fibos.get(i));
        }

    }
}
```

上面的代码中,我们看到:

* 用于存储多个 `Integer` 的 `ArrayList` 的声明和实例化。
* 使用 `add()` 附加到 `ArrayList` 实例。
* 使用 `get()` 通过索引号检索元素。
* 使用 `size()` 来确定 `ArrayList` 实例中已经有多少个元素。

这里没有展示 `set()` 方法,它的作用是将一个值放在给定的索引号上。

该程序的输出为:

```
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci 3 = 2
fibonacci 4 = 3
fibonacci 5 = 5
fibonacci 6 = 8
fibonacci 7 = 13
fibonacci 8 = 21
fibonacci 9 = 34
fibonacci 10 = 55
fibonacci 11 = 89
fibonacci 12 = 144
fibonacci 13 = 233
fibonacci 14 = 377
fibonacci 15 = 610
fibonacci 16 = 987
fibonacci 17 = 1597
fibonacci 18 = 2584
fibonacci 19 = 4181
```

`ArrayList` 实例也可以通过其它方式初始化。例如,可以给 `ArrayList` 构造器提供一个数组,或者在编译过程中知道初始元素时也可以使用 `List.of()` 和 `Arrays.asList()` 方法。我发现自己并不经常使用这些方式,因为我对 `ArrayList` 的主要用途是当我只想读取一次数据时。

此外,对于那些喜欢在加载数据后使用数组的人,可以使用 `ArrayList` 的 `toArray()` 方法将其实例转换为数组;或者,在初始化 `ArrayList` 实例之后,返回到当前数组本身。

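下面这个简短的示例(本段为笔者补充的示意代码,并非原文内容)把上面提到的两个方向串起来:先用 `Arrays.asList()` 从一个已知数组初始化 `ArrayList`,动态追加元素后,再用 `toArray()` 转换回数组:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToArrayDemo {

    public static void main(String[] args) {
        // 从已知元素初始化一个 ArrayList
        List<Integer> list = new ArrayList<>(Arrays.asList(2, 3, 5, 7));

        // 动态追加一个元素,这是固定长度的数组做不到的
        list.add(11);

        // 数据收集完毕后,转换回数组继续处理
        Integer[] primes = list.toArray(new Integer[0]);

        System.out.println("size = " + primes.length);
        System.out.println(Arrays.toString(primes));
    }
}
```

运行后会输出 `size = 5` 和 `[2, 3, 5, 7, 11]`,说明追加的元素也包含在转换出的数组中。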
Java 集合框架提供了另一种类似数组的数据结构,称为 `Map`(映射)。我所说的“类似数组”是指 `Map` 定义了一个对象集合,它的值可以通过一个键来设置或检索,但与数组(或 `ArrayList`)不同,这个键不需要是整型数;它可以是 `String` 或任何其它复杂对象。

例如,我们可以创建一个 `Map`,其键为 `String`,其值为 `Integer` 类型,如下:

```
Map<String, Integer> stoi = new HashMap<String, Integer>();
```

然后我们可以对这个 `Map` 进行如下初始化:

```
stoi.put("one", 1);
stoi.put("two", 2);
stoi.put("three", 3);
```

等类似操作。稍后,当我们想要知道 `"three"` 的数值时,我们可以通过下面的方式将其检索出来:

```
stoi.get("three");
```

在我的认知中,`Map` 对于将第三方数据集中出现的字符串转换为我的数据集中的一致代码值非常有用。作为[数据转换管道][8]的一部分,我经常会构建一个小型的独立程序,用作在处理数据之前清理数据;为此,我几乎总是会使用一个或多个 `Map`。

值得一提的是,创建 `ArrayList` 的 `ArrayList` 和 `Map` 的 `Map` 是完全可能的,有时也是合理的。例如,假设我们在看树,我们对按树种和年龄范围累计树的数目感兴趣。假设年龄范围定义是一组字符串值(“young”、“mid”、“mature” 和 “old”),物种是 “Douglas fir”、“western red cedar” 等字符串值,那么我们可以将这个 `Map` 中的 `Map` 定义为:

```
Map<String, Map<String, Integer>> counter = new HashMap<String, Map<String, Integer>>();
```

这里需要注意的一件事是,以上内容仅为 `Map` 的*行*创建存储。因此,我们的累加代码可能类似于:

```
// 假设我们已经知道了物种和年龄范围
if (!counter.containsKey(species)) {
    counter.put(species, new HashMap<String, Integer>());
}
if (!counter.get(species).containsKey(ageRange)) {
    counter.get(species).put(ageRange, 0);
}
```

此时,我们可以这样开始累加:

```
counter.get(species).put(ageRange, counter.get(species).get(ageRange) + 1);
```
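上面“先检查、再创建、最后累加”的三步逻辑,也可以用 Java 8 为 `Map` 提供的便捷方法写得更紧凑。下面是一个可运行的草稿(笔者补充,并非原文内容;示例中的物种与年龄范围数据均为假设),用 `computeIfAbsent()` 和 `merge()` 完成同样的累加:

```java
import java.util.HashMap;
import java.util.Map;

public class TreeCounter {

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> counter = new HashMap<>();

        // 假设的观测数据:{物种, 年龄范围}
        String[][] observations = {
            {"Douglas fir", "young"},
            {"Douglas fir", "young"},
            {"western red cedar", "mature"}};

        for (String[] obs : observations) {
            String species = obs[0];
            String ageRange = obs[1];
            // computeIfAbsent:行不存在时自动创建内层 Map;
            // merge:键不存在时存入 1,否则用 Integer::sum 累加 1
            counter.computeIfAbsent(species, k -> new HashMap<>())
                   .merge(ageRange, 1, Integer::sum);
        }

        System.out.println(counter.get("Douglas fir").get("young"));        // 2
        System.out.println(counter.get("western red cedar").get("mature")); // 1
    }
}
```

与三步写法相比,这种写法把“创建缺失的行”和“累加计数”各收敛为一次方法调用,不必重复调用 `containsKey()`。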
最后,值得一提的是,(Java 8 中的新特性)Streams 还可以用来初始化数组、`ArrayList` 实例和 `Map` 实例。关于此特性的详细讨论可以在[此处][9]和[此处][10]找到。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/initializing-arrays-java

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
[2]: https://opensource.com/article/19/8/what-object-java
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: https://en.wikipedia.org/wiki/Irregular_matrix
[6]: https://en.wikipedia.org/wiki/Java_collections_framework
[7]: https://en.wikipedia.org/wiki/Fibonacci_number
[8]: https://towardsdatascience.com/data-science-for-startups-data-pipelines-786f6746a59a
[9]: https://stackoverflow.com/questions/36885371/lambda-expression-to-initialize-array
[10]: https://stackoverflow.com/questions/32868665/how-to-initialize-a-map-using-a-lambda
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11534-1.html)
[#]: subject: (Open Source CMS Ghost 3.0 Released with New features for Publishers)
[#]: via: (https://itsfoss.com/ghost-3-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

开源 CMS Ghost 3.0 发布,带来新功能
======

[Ghost][1] 是一个自由开源的内容管理系统(CMS)。如果你还不了解 CMS,那我在此解释一下。CMS 是一种软件,用它可以构建主要专注于创建内容的网站,而无需了解 HTML 和其他与 Web 相关的技术。

事实上,Ghost 是目前[最好的开源 CMS][2] 之一。它主要聚焦于创建轻量级、快速加载、界面美观的博客。

Ghost 系统有一个现代直观的编辑器,该编辑器内置 SEO(搜索引擎优化)功能。你也可以使用它的本地桌面应用(包括 Linux 版)和移动应用。如果你喜欢终端,也可以使用其提供的 CLI(命令行界面)工具。

让我们看看 Ghost 3.0 带来了什么新功能。

### Ghost 3.0 的新功能

![][3]

我通常对开源的 CMS 解决方案很感兴趣。因此,在阅读了官方公告后,我通过在 Digital Ocean 云服务器上安装新的 Ghost 实例来进一步尝试它。

与以前的版本相比,Ghost 3.0 在功能和用户界面上的改进给我留下了深刻的印象。

在此,我将列出一些值得一提的关键点。

#### 书签卡

![][5]

除了编辑器的所有细微更改之外,3.0 版本现在支持通过输入 URL 添加漂亮的书签卡。

如果你使用过 WordPress,你可能已经注意到,WordPress 需要额外安装一个插件才能添加类似的卡片,所以该功能绝对是 Ghost 3.0 的一大改进。

#### 改进的 WordPress 迁移插件

我没有专门对此进行测试,但它更新了 WordPress 的迁移插件,可以让你轻松地将帖子(带有图片)克隆到 Ghost CMS。

基本上,使用该插件,你就能够创建一个存档(包含图片)并将其导入到 Ghost CMS。

#### 响应式图像库和图片

为了使用户体验更好,Ghost 团队还更新了图像库(现已为响应式),以便在所有设备上舒适地呈现你的图片集。

此外,帖子和页面中的图片也更改为响应式的了。

#### 添加成员和订阅选项

![Ghost Subscription Model][6]

虽然该功能目前还处于测试阶段,但如果你以此作为维系业务的重要发布平台,则可以为你的博客添加成员和订阅选项。

该功能可以确保只有订阅的成员才能访问你的博客,你也可以选择让未订阅者也可以访问。

#### Stripe:集成支付功能

默认情况下,该版本支持 Stripe 付款网关,帮助你轻松启用订阅功能(或使用任何类型的付款方式),而 Ghost 不收取任何额外费用。

#### 新的应用程序集成

![][7]

你现在可以在 Ghost 3.0 的博客中集成各种流行的应用程序/服务。它可以使很多事情自动化。

#### 默认主题改进

引入的默认主题(设计)已得到改进,现在也提供了夜间模式。

你也可以随时选择创建自定义主题(如果没有可用的预置主题)。

#### 其他小改进

除了所有关键亮点以外,用于创建帖子/页面的可视编辑器也得到了改进(具有某些拖放功能)。

我确定还有很多技术方面的更改,如果你对此感兴趣,可以在他们的[更改日志][8]中查看。

### Ghost 影响力渐增

要在以 WordPress 为主导的世界中获得认可并不是一件容易的事。但 Ghost 逐渐形成了它的一个专门的发布者社区。

不仅如此,它的托管服务 [Ghost Pro][9] 现在拥有像 NASA、Mozilla 和 DuckDuckGo 这样的客户。

在过去的六年中,Ghost 从其 Ghost Pro 客户那里获得了 500 万美元的收入。就从它是致力于开源系统解决方案的非营利组织这一点来讲,这确实是一项成就。

这些收入有助于它们保持独立,避免风险投资家的外部资金投入。Ghost CMS 的托管客户越多,投入到免费和开源的 CMS 的研发款项就越多。

总体而言,Ghost 3.0 是迄今为止提供的最好的升级版本。这些功能给我留下了深刻的印象。

如果你拥有自己的网站,你会使用什么 CMS?你曾经使用过 Ghost 吗?你的体验如何?请在评论部分分享你的想法。

--------------------------------------------------------------------------------

via: https://itsfoss.com/ghost-3-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/ghost/
[2]: https://itsfoss.com/open-source-cms/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-3.jpg?ssl=1
[4]: https://itsfoss.com/recommends/digital-ocean/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-editor-screenshot.png?ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-subscription-model.jpg?resize=800%2C503&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/ghost-app-integration.jpg?ssl=1
[8]: https://ghost.org/faq/upgrades/
[9]: https://itsfoss.com/recommends/ghost-pro/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11528-1.html)
[#]: subject: (4 cool new projects to try in COPR for October 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)

COPR 仓库中 4 个很酷的新项目(2019.10)
======

![][1]

COPR 是个人软件仓库的[集合][2],这些软件不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未经项目自身签名背书。但是,这是一种尝试新的或实验性软件的巧妙方式。

本文介绍了 COPR 中一些有趣的新项目。如果你是第一次使用 COPR,请参阅 [COPR 用户文档][3]。

### Nu

[Nu][4] 也被称为 Nushell,是受 PowerShell 和现代 CLI 工具启发的 shell。通过使用基于结构化数据的方法,Nu 可轻松处理命令的输出,并通过管道传输其他命令。然后将结果显示在可以轻松排序或过滤的表中,并可以用作其他命令的输入。最后,Nu 提供了几个内置命令、多 shell 和对插件的支持。

#### 安装说明

该[仓库][5]目前为 Fedora 30、31 和 Rawhide 提供 Nu。要安装 Nu,请使用以下命令:

```
sudo dnf copr enable atim/nushell
sudo dnf install nushell
```

### NoteKit

[NoteKit][6] 是一个笔记程序。它支持 Markdown 来格式化笔记,并支持使用鼠标创建手绘笔记的功能。在 NoteKit 中,笔记以树状结构进行排序和组织。

#### 安装说明

该[仓库][7]目前为 Fedora 29、30、31 和 Rawhide 提供 NoteKit。要安装 NoteKit,请使用以下命令:

```
sudo dnf copr enable lyessaadi/notekit
sudo dnf install notekit
```

### Crow Translate

[Crow Translate][8] 是一个翻译程序。它可以翻译文本,并且可以对输入和结果发音,它还提供命令行界面。对于翻译,Crow Translate 使用 Google、Yandex 或 Bing 的翻译 API。

#### 安装说明

该[仓库][9]目前为 Fedora 30、31 和 Rawhide 以及 Epel 8 提供 Crow Translate。要安装 Crow Translate,请使用以下命令:

```
sudo dnf copr enable faezebax/crow-translate
sudo dnf install crow-translate
```

### dnsmeter

[dnsmeter][10] 是用于测试域名服务器及其基础设施性能的命令行工具。为此,它发送 DNS 查询并计算答复数,从而测量各种统计数据。除此之外,dnsmeter 支持使用不同的负载阶梯、重放 PCAP 文件中的载荷以及伪造发送者地址。

#### 安装说明

该仓库目前为 Fedora 29、30、31、Rawhide 以及 Epel 7 提供 dnsmeter。要安装 dnsmeter,请使用以下命令:

```
sudo dnf copr enable @dnsoarc/dnsmeter
sudo dnf install dnsmeter
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-october-2019/

作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://github.com/nushell/nushell
[5]: https://copr.fedorainfracloud.org/coprs/atim/nushell/
[6]: https://github.com/blackhole89/notekit
[7]: https://copr.fedorainfracloud.org/coprs/lyessaadi/notekit/
[8]: https://github.com/crow-translate/crow-translate
[9]: https://copr.fedorainfracloud.org/coprs/faezebax/crow-translate/
[10]: https://github.com/DNS-OARC/dnsmeter
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11545-1.html)
[#]: subject: (Understanding system calls on Linux with strace)
[#]: via: (https://opensource.com/article/19/10/strace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)

在 Linux 上用 strace 来理解系统调用
======

> 使用 strace 跟踪用户进程和 Linux 内核之间的交互。

![Hand putting a Linux file folder into a drawer][1]

<ruby>系统调用<rt>system call</rt></ruby>是程序从内核请求服务的一种编程方式,而 `strace` 是一个功能强大的工具,可让你跟踪用户进程与 Linux 内核之间的交互。

要了解操作系统的工作原理,首先需要了解系统调用的工作原理。操作系统的主要功能之一是为用户程序提供抽象机制。

操作系统可以大致分为两种模式:

* 内核模式:操作系统内核使用的一种强大的特权模式
* 用户模式:大多数用户应用程序运行的地方

用户大多使用命令行实用程序和图形用户界面(GUI)来执行日常任务。系统调用在后台静默运行,与内核交互以完成工作。

系统调用与函数调用非常相似,这意味着它们都接受并处理参数然后返回值。唯一的区别是系统调用进入内核,而函数调用不进入。从用户空间切换到内核空间是使用特殊的 [trap][2] 机制完成的。

通过使用系统库(在 Linux 系统上又称为 glibc),大部分系统调用对用户隐藏了。尽管系统调用本质上是通用的,但是发出系统调用的机制在很大程度上取决于机器(架构)。

本文通过使用一些常规命令并使用 `strace` 分析每个命令进行的系统调用来探索一些实际示例。这些示例使用 Red Hat Enterprise Linux,但是这些命令运行在其他 Linux 发行版上应该也是相同的:

```
[root@sandbox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
[root@sandbox ~]#
[root@sandbox ~]# uname -r
3.10.0-1062.el7.x86_64
[root@sandbox ~]#
```

首先,确保在系统上安装了必需的工具。你可以使用下面的 `rpm` 命令来验证是否安装了 `strace`。如果安装了,则可以使用 `-V` 选项检查 `strace` 实用程序的版本号:

```
[root@sandbox ~]# rpm -qa | grep -i strace
strace-4.12-9.el7.x86_64
[root@sandbox ~]#
[root@sandbox ~]# strace -V
strace -- version 4.12
[root@sandbox ~]#
```

如果没有安装,运行命令安装:

```
yum install strace
```

出于本示例的目的,在 `/tmp` 中创建一个测试目录,并使用 `touch` 命令创建两个文件:

```
[root@sandbox ~]# cd /tmp/
[root@sandbox tmp]#
[root@sandbox tmp]# mkdir testdir
[root@sandbox tmp]#
[root@sandbox tmp]# touch testdir/file1
[root@sandbox tmp]# touch testdir/file2
[root@sandbox tmp]#
```

(我使用 `/tmp` 目录是因为每个人都可以访问它,但是你可以根据需要选择另一个目录。)

在 `testdir` 目录下使用 `ls` 命令验证该文件已经创建:

```
[root@sandbox tmp]# ls testdir/
file1  file2
[root@sandbox tmp]#
```

你可能每天都在使用 `ls` 命令,而没有意识到系统调用在其下面发挥的作用。抽象地来说,该命令的工作方式如下:

> 命令行工具 -> 从系统库(glibc)调用函数 -> 调用系统调用

`ls` 命令内部从 Linux 上的系统库(即 glibc)调用函数。这些库去调用完成大部分工作的系统调用。

如果你想知道从 glibc 库中调用了哪些函数,请使用 `ltrace` 命令,然后跟上常规的 `ls testdir/` 命令:

```
ltrace ls testdir/
```

如果没有安装 `ltrace`,键入如下命令安装:

```
yum install ltrace
```

大量的输出会被堆到屏幕上;不必担心,只需继续就行。`ltrace` 命令输出中与该示例有关的一些重要库函数包括:

```
opendir("testdir/") = { 3 }
readdir({ 3 }) = { 101879119, "." }
readdir({ 3 }) = { 134, ".." }
readdir({ 3 }) = { 101879120, "file1" }
strlen("file1") = 5
memcpy(0x1665be0, "file1\0", 6) = 0x1665be0
readdir({ 3 }) = { 101879122, "file2" }
strlen("file2") = 5
memcpy(0x166dcb0, "file2\0", 6) = 0x166dcb0
readdir({ 3 }) = nil
closedir({ 3 })
```

通过查看上面的输出,你或许可以了解正在发生的事情。`opendir` 库函数打开一个名为 `testdir` 的目录,然后调用 `readdir` 函数,该函数读取目录的内容。最后,有一个对 `closedir` 函数的调用,该函数将关闭先前打开的目录。现在请先忽略其他的 `strlen` 和 `memcpy` 函数。

你可以看到正在调用哪些库函数,但是本文将重点介绍由系统库函数调用的系统调用。

与上述类似,要了解调用了哪些系统调用,只需将 `strace` 放在 `ls testdir` 命令之前,如下所示。同样,屏幕上会输出一堆内容,你可以按照如下方式继续操作:

```
[root@sandbox tmp]# strace ls testdir/
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
brk(NULL) = 0x1f12000
<<< truncated strace output >>>
write(1, "file1  file2\n", 13file1  file2
) = 13
close(1) = 0
munmap(0x7fd002c8d000, 4096) = 0
close(2) = 0
exit_group(0) = ?
+++ exited with 0 +++
[root@sandbox tmp]#
```

运行 `strace` 命令后屏幕上的输出就是运行 `ls` 命令的系统调用。每个系统调用都为操作系统提供了特定的用途,可以将它们大致分为以下几个部分:

* 进程管理系统调用
* 文件管理系统调用
* 目录和文件系统管理系统调用
* 其他系统调用

分析显示到屏幕上的信息的一种更简单的方法是使用 `strace` 方便的 `-o` 标志将输出记录到文件中。在 `-o` 标志后添加一个合适的文件名,然后再次运行命令:

```
[root@sandbox tmp]# strace -o trace.log ls testdir/
file1  file2
[root@sandbox tmp]#
```

这次,没有任何输出干扰屏幕显示,`ls` 命令如预期般工作,显示了文件名并将所有输出记录到文件 `trace.log` 中。仅仅是一个简单的 `ls` 命令,该文件就有近 100 行内容:

```
[root@sandbox tmp]# ls -l trace.log
-rw-r--r--. 1 root root 7809 Oct 12 13:52 trace.log
[root@sandbox tmp]#
[root@sandbox tmp]# wc -l trace.log
114 trace.log
[root@sandbox tmp]#
```

让我们看一下这个示例的 `trace.log` 文件的第一行:

```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```

* 该行的第一个单词 `execve` 是正在执行的系统调用的名称。
* 括号内的文本是提供给该系统调用的参数。
* 符号 `=` 后的数字(在这种情况下为 `0`)是 `execve` 系统调用的返回值。

现在的输出似乎还不太吓人,对吧。你可以应用相同的逻辑来理解其他行。
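顺着同样的思路,你还可以用几行 `awk` 对日志行做机械拆解。下面是一个假设性的草图(样例行节选自上文的输出,真实日志里还会有 `<unfinished ...>` 之类的特殊行需要额外处理),它把每行拆成系统调用名和返回值:

```shell
# 构造一个小的样例日志(内容节选自上文的 trace.log)
cat > /tmp/trace-sample.log <<'EOF'
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
write(1, "file1  file2\n", 13) = 13
EOF
# 调用名在第一个 "(" 之前,返回值在行尾的 "= " 之后
awk -F'(' '{name=$1; sub(/.*= /,"",$0); print name, "->", $0}' /tmp/trace-sample.log
```

对上面的样例,它会逐行打印 `stat -> 0`、`openat -> 3`、`write -> 13`。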
现在,将关注点集中在你调用的单个命令上,即 `ls testdir`。你知道命令 `ls` 使用的目录名称,那么为什么不在 `trace.log` 文件中使用 `grep` 查找 `testdir` 并查看得到的结果呢?让我们详细查看一下结果的每一行:

```
[root@sandbox tmp]# grep testdir trace.log
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[root@sandbox tmp]#
```

回顾一下上面对 `execve` 的分析,你能说一下这个系统调用的作用吗?

```
execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
```

你无需记住所有系统调用或它们所做的事情,因为你可以在需要时参考文档。手册页可以解救你!在运行 `man` 命令之前,请确保已安装以下软件包:

```
[root@sandbox tmp]# rpm -qa | grep -i man-pages
man-pages-3.53-5.el7.noarch
[root@sandbox tmp]#
```

请记住,你需要在 `man` 命令和系统调用名称之间添加 `2`。如果使用 `man man` 阅读 `man` 命令的手册页,你会看到第 2 节是为系统调用保留的。同样,如果你需要有关库函数的信息,则需要在 `man` 和库函数名称之间添加一个 `3`。

以下是手册的章节编号及其包含的页面类型:

* `1`:可执行的程序或 shell 命令
* `2`:系统调用(由内核提供的函数)
* `3`:库调用(在程序的库内的函数)
* `4`:特殊文件(通常出现在 `/dev`)

使用系统调用名称运行以下 `man` 命令以查看该系统调用的文档:

```
man 2 execve
```

按照 `execve` 手册页,这将执行在参数中传递的程序(在本例中为 `ls`)。可以为 `ls` 提供其他参数,例如本例中的 `testdir`。因此,此系统调用仅以 `testdir` 作为参数运行 `ls`:

```
execve - execute program

DESCRIPTION
       execve() executes the program pointed to by filename
```

下一个系统调用,名为 `stat`,它使用 `testdir` 参数:

```
stat("testdir/", {st_mode=S_IFDIR|0755, st_size=32, ...}) = 0
```

使用 `man 2 stat` 访问该文档。`stat` 是获取文件状态的系统调用,请记住,Linux 中的一切都是文件,包括目录。

接下来,`openat` 系统调用将打开 `testdir`。密切注意返回的 `3`。这是一个文件描述符,将在以后的系统调用中使用:

```
openat(AT_FDCWD, "testdir/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
```

到现在为止一切都挺好。现在,打开 `trace.log` 文件,并转到 `openat` 系统调用之后的行。你会看到 `getdents` 系统调用被调用,该调用完成了执行 `ls testdir` 命令所需的大部分操作。现在,从 `trace.log` 文件中用 `grep` 获取 `getdents`:

```
[root@sandbox tmp]# grep getdents trace.log
getdents(3, /* 4 entries */, 32768) = 112
getdents(3, /* 0 entries */, 32768) = 0
[root@sandbox tmp]#
```

`getdents` 的手册页将其描述为“获取目录项”,这就是你要执行的操作。注意,`getdents` 的参数是 `3`,这是来自上面 `openat` 系统调用的文件描述符。

现在有了目录列表,你需要一种在终端中显示它的方法。因此,在日志中用 `grep` 搜索另一个用于写入终端的系统调用 `write`:

```
[root@sandbox tmp]# grep write trace.log
write(1, "file1  file2\n", 13) = 13
[root@sandbox tmp]#
```

在这些参数中,你可以看到将要显示的文件名:`file1` 和 `file2`。关于第一个参数(`1`),请记住在 Linux 中,当运行任何进程时,默认情况下会为其打开三个文件描述符。以下是默认的文件描述符:

* `0`:标准输入
* `1`:标准输出
* `2`:标准错误

因此,`write` 系统调用将在标准显示(就是这个终端,由 `1` 所标识的)上显示 `file1` 和 `file2`。
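为了直观地感受这三个默认文件描述符,可以在 shell 里做一个小实验(仅为示意的草图,`demo` 是这里临时定义的函数):用重定向分别丢弃 fd 1 或 fd 2,观察哪条消息还会留下来:

```shell
# demo 同时向标准输出(fd 1)和标准错误(fd 2)各写一行
demo() { echo "to stdout"; echo "to stderr" >&2; }

demo 2>/dev/null    # 丢弃 fd 2,只剩下 "to stdout"
demo 1>/dev/null    # 丢弃 fd 1,只剩下 "to stderr"(它仍打印到终端)
echo "x0x0" | cat   # cat 的 fd 0(标准输入)此时来自管道而非终端
```

这正对应 `strace` 日志里 `read(0, ...)` 与 `write(1, ...)` 中的那两个数字。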
现在你知道哪个系统调用完成了 `ls testdir/` 命令的大部分工作。但是在 `trace.log` 文件中其它的 100 多个系统调用呢?操作系统必须做很多内务处理才能运行一个进程,因此,你在该日志文件中看到的很多内容都是进程初始化和清理。阅读整个 `trace.log` 文件,并尝试了解 `ls` 命令是怎么工作起来的。

既然你知道了如何分析给定命令的系统调用,那么就可以将该知识用于其他命令来了解正在执行哪些系统调用。`strace` 提供了许多有用的命令行标志,使你更容易使用,下面将对其中一些进行描述。

默认情况下,`strace` 并不包含所有系统调用信息。但是,它有一个方便的 `-v` 冗余选项,可以在每个系统调用中提供附加信息:

```
strace -v ls testdir
```

在运行 `strace` 命令时始终使用 `-f` 选项是一种好的作法。它允许 `strace` 对当前正在跟踪的进程创建的任何子进程进行跟踪:

```
strace -f ls testdir
```

假设你只需要系统调用的名称、运行的次数以及每个系统调用花费的时间百分比。你可以使用 `-c` 标志来获取这些统计信息:

```
strace -c ls testdir/
```
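如果你想先在没有 `strace` 的机器上体会这种汇总视角,也可以用 `sort | uniq -c` 对一份已有日志自行统计每个系统调用的次数(一个假设性的草图,样例日志是虚构的):

```shell
# 样例日志(虚构,格式同 strace -o 的输出)
cat > /tmp/trace-count.log <<'EOF'
openat(AT_FDCWD, "testdir/", O_RDONLY) = 3
getdents(3, /* 4 entries */, 32768) = 112
getdents(3, /* 0 entries */, 32768) = 0
write(1, "file1  file2\n", 13) = 13
EOF
# 统计每个系统调用出现的次数,近似 strace -c 输出中的 calls 列
awk -F'(' '{print $1}' /tmp/trace-count.log | sort | uniq -c | sort -rn
```

当然,真正的 `strace -c` 还会给出耗时占比等信息,这里只是次数统计。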
假设你想专注于特定的系统调用,例如专注于 `open` 系统调用,而忽略其余部分。你可以使用 `-e` 标志跟上系统调用的名称:

```
[root@sandbox tmp]# strace -e open ls testdir
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libcap.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libacl.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
file1  file2
+++ exited with 0 +++
[root@sandbox tmp]#
```

如果你想关注多个系统调用怎么办?不用担心,你同样可以使用 `-e` 命令行标志,并用逗号分隔开两个系统调用的名称。例如,要查看 `write` 和 `getdents` 系统调用:

```
[root@sandbox tmp]# strace -e write,getdents ls testdir
getdents(3, /* 4 entries */, 32768) = 112
getdents(3, /* 0 entries */, 32768) = 0
write(1, "file1  file2\n", 13file1  file2
) = 13
+++ exited with 0 +++
[root@sandbox tmp]#
```

到目前为止,这些示例都是对明确运行的命令进行跟踪。但是,要跟踪已经在运行的进程又怎么办呢?例如,如果要跟踪长时间运行的守护进程,该怎么办?为此,`strace` 提供了一个特殊的 `-p` 标志,你可以向其提供进程 ID。

我们的示例不在守护程序上运行 `strace`,而是以 `cat` 命令为例,如果你将文件名作为参数,通常 `cat` 会显示文件的内容。如果没有给出参数,`cat` 命令会在终端上等待用户输入文本。输入文本后,它将重复给定的文本,直到用户按下 `Ctrl + C` 退出为止。

从一个终端运行 `cat` 命令;它会向你显示一个提示,并等待在那里(记住 `cat` 仍在运行且尚未退出):

```
[root@sandbox tmp]# cat
```

在另一个终端上,使用 `ps` 命令找到进程标识符(PID):

```
[root@sandbox ~]# ps -ef | grep cat
root     22443 20164  0 14:19 pts/0    00:00:00 cat
root     22482 20300  0 14:20 pts/1    00:00:00 grep --color=auto cat
[root@sandbox ~]#
```

现在,使用 `-p` 标志和 PID(在上面使用 `ps` 找到)对运行中的进程运行 `strace`。运行 `strace` 之后,其输出说明了所接驳的进程的内容及其 PID。现在,`strace` 正在跟踪 `cat` 命令进行的系统调用。看到的第一个系统调用是 `read`,它正在等待文件描述符 `0`(标准输入,这是运行 `cat` 命令的终端)的输入:

```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0,
```

现在,返回到你运行 `cat` 命令的终端,并输入一些文本。我出于演示目的输入了 `x0x0`。注意 `cat` 是如何简单地重复我输入的内容的。因此,`x0x0` 出现了两次。我输入了第一个,第二个是 `cat` 命令重复的输出:

```
[root@sandbox tmp]# cat
x0x0
x0x0
```

返回到将 `strace` 接驳到 `cat` 进程的终端。现在你会看到两个额外的系统调用:较早的 `read` 系统调用,现在在终端中读取 `x0x0`,另一个为 `write`,它将 `x0x0` 写回到终端,然后是再一个新的 `read`,正在等待从终端读取。请注意,标准输入(`0`)和标准输出(`1`)都在同一终端中:

```
[root@sandbox ~]# strace -p 22443
strace: Process 22443 attached
read(0, "x0x0\n", 65536) = 5
write(1, "x0x0\n", 5) = 5
read(0,
```

想象一下,对守护进程运行 `strace` 以查看其在后台执行的所有操作时这有多大帮助。按下 `Ctrl + C` 杀死 `cat` 命令;由于该进程不再运行,因此这也会终止你的 `strace` 会话。

如果要查看所有的系统调用的时间戳,只需将 `-t` 选项与 `strace` 一起使用:

```
[root@sandbox ~]# strace -t ls testdir/

14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL) = 0x1f07000
14:24:47 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2530bc8000
14:24:47 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
14:24:47 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```

如果你想知道两次系统调用之间所花费的时间怎么办?`strace` 有一个方便的 `-r` 选项,该选项显示执行每个系统调用所花费的时间。非常有用,不是吗?

```
[root@sandbox ~]# strace -r ls testdir/

0.000000 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
0.000368 brk(NULL) = 0x1966000
0.000073 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb6b1155000
0.000047 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
0.000119 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
```
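顺带一提,如果手头只有 `-t` 的时间戳日志,也可以事后用 `awk` 自己算出相邻调用的间隔。下面是一个假设性的草图(样例数据为虚构,且 `-t` 的精度只有秒,真实分析应改用 `-r` 或 `-tt`):

```shell
cat > /tmp/trace-t.log <<'EOF'
14:24:47 execve("/usr/bin/ls", ["ls", "testdir/"], [/* 40 vars */]) = 0
14:24:47 brk(NULL) = 0x1f07000
14:24:48 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
EOF
# 把第一列 HH:MM:SS 换算成秒数,再与上一行相减得到间隔
awk '{split($1, t, ":"); s = t[1]*3600 + t[2]*60 + t[3];
      if (NR > 1) printf "%+d s  %s\n", s - prev, $2;
      prev = s}' /tmp/trace-t.log
```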
### 总结

`strace` 实用程序非常有助于理解 Linux 上的系统调用。要了解它的其它命令行标志,请参考手册页和在线文档。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/strace

作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://en.wikipedia.org/wiki/Trap_(computing)
published/20191028 SQLite is really easy to compile.md
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11536-1.html)
[#]: subject: (SQLite is really easy to compile)
[#]: via: (https://jvns.ca/blog/2019/10/28/sqlite-is-really-easy-to-compile/)
[#]: author: (Julia Evans https://jvns.ca/)

SQLite 真的很容易编译
======

![](https://img.linux.net.cn/data/attachment/album/201911/04/120656cedfznzenxxvmxq1.jpg)

上周,我一直在做一个 SQL 网站(<https://sql-steps.wizardzines.com/>,一个 SQL 示例列表)。我使用 sqlite 运行网站上的所有查询,并且我想在其中一个例子([这个][1])中使用窗口函数。

但是我使用的是 Ubuntu 18.04 中的 sqlite 版本,它太旧了,不支持窗口函数。所以我需要升级 sqlite!

事实证明,这个过程超麻烦(如通常一样),但是非常有趣!我想起了一些有关可执行文件和共享库如何工作的信息,结论令人满意。所以我想在这里写下来。

(剧透:<https://www.sqlite.org/howtocompile.html> 中解释了如何编译 SQLite,它只需花费 5 秒左右,这比我平时从源码编译的体验容易了许多。)

### 尝试 1:从它的网站下载 SQLite 二进制文件

[SQLite 的下载页面][2]有一个用于 Linux 的 SQLite 命令行工具的二进制文件的链接。我下载了它,它可以在笔记本电脑上运行,我以为这就完成了。

但是后来我尝试在构建服务器(Netlify)上运行它,得到了这个极其奇怪的错误消息:“File not found”。我进行了追踪,并确定 `execve` 返回错误代码 ENOENT,这意味着 “File not found”。这有点令人发狂,因为该文件确实存在,并且有正确的权限。

我搜索了这个问题(通过搜索 “execve enoent”),找到了[这个 stackoverflow 中的答案][3],它指出要运行二进制文件,你不仅需要二进制文件存在!你还需要它的**加载程序**才能存在。(加载程序的路径在二进制文件内部。)

要查看加载程序的路径,可以使用 `ldd`,如下所示:

```
$ ldd sqlite3
linux-gate.so.1 (0xf7f9d000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf7f70000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf7e6e000)
libz.so.1 => /lib/i386-linux-gnu/libz.so.1 (0xf7e4f000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7c73000)
/lib/ld-linux.so.2
```

所以 `/lib/ld-linux.so.2` 就是加载程序,而该文件在构建服务器上不存在,可能是因为 Xenial(即 Ubuntu 16.04)环境不支持 32 位二进制文件(?),因此我需要尝试一些不同的东西。

### 尝试 2:安装 Debian sqlite3 软件包

好吧,我想我也许可以安装来自 [debian testing 的 sqlite 软件包][4]。尝试从另一个我不使用的 Debian 版本安装软件包并不是一个好主意,但是出于某种原因,我还是决定尝试一下。

这次毫不意外地破坏了我计算机上的 sqlite(这也破坏了 git),但我设法通过 `sudo dpkg --purge --force-all libsqlite3-0` 恢复了,并使所有依赖于 sqlite 的软件再次工作。

### 尝试 3:提取 Debian sqlite3 软件包

我还尝试仅从 Debian sqlite 软件包中提取 sqlite3 二进制文件并运行它。毫不意外,这也行不通,但这个错误更容易理解:我只有旧版本的 libreadline(`.so.7`),但它需要 `.so.8`:

```
$ ./usr/bin/sqlite3
./usr/bin/sqlite3: error while loading shared libraries: libreadline.so.8: cannot open shared object file: No such file or directory
```

### 尝试 4:从源代码进行编译

我花费这么多时间尝试下载 sqlite 二进制的原因是我认为从源代码编译 sqlite 既烦人又耗时。但是显然,下载随便一个 sqlite 二进制文件根本不适合我,因此我最终决定尝试自己编译它。

这有指导:[如何编译 SQLite][5]。它是宇宙中最简单的东西。通常,编译的感觉是类似这样的:

* 运行 `./configure`
* 意识到我缺少依赖
* 再次运行 `./configure`
* 运行 `make`
* 编译失败,因为我安装了错误版本的依赖
* 去做其他事,之后找到二进制文件

编译 SQLite 的方式如下:

* [从下载页面下载整合的 tarball][2]
* 运行 `gcc shell.c sqlite3.c -lpthread -ldl`
* 完成!!!

所有代码都在一个文件(`sqlite3.c`)中,并且没有奇怪的依赖项!太奇妙了。

对我而言,我实际上并不需要线程支持或 readline 支持,因此我用编译页面上的说明创建了一个非常简单的二进制文件,它仅使用了 libc 而没有其他共享库:

```
$ ldd sqlite3
linux-vdso.so.1 (0x00007ffe8e7e9000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbea4988000)
/lib64/ld-linux-x86-64.so.2 (0x00007fbea4d79000)
```

### 这很好,因为它使体验 sqlite 变得容易

我认为 SQLite 的构建过程如此简单很酷,因为过去我很乐于[编辑 sqlite 的源码][6]来了解其 B 树的实现方式。

鉴于我对 SQLite 的了解,这并不令人感到意外(它在受限环境/嵌入式中确实可以很好地工作,因此可以以一种非常简单/最小的方式进行编译是有意义的)。但这真是太好了!

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2019/10/28/sqlite-is-really-easy-to-compile/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://sql-steps.wizardzines.com/lag.html
[2]: https://www.sqlite.org/download.html
[3]: https://stackoverflow.com/questions/5234088/execve-file-not-found-when-stracing-the-very-same-file
[4]: https://packages.debian.org/bullseye/amd64/sqlite3/download
[5]: https://www.sqlite.org/howtocompile.html
[6]: https://jvns.ca/blog/2014/10/02/how-does-sqlite-work-part-2-btrees/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11525-1.html)
[#]: subject: (Collapse OS – An OS Created to Run After the World Ends)
[#]: via: (https://itsfoss.com/collapse-os/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Collapse OS:为世界末日创建的操作系统
======

当大多数人考虑为末日后的世界做准备时,想到的第一件事就是准备食物和其他生活必需品。最近,有一个程序员觉得,在社会崩溃之后,创建一个多功能的、且可生存的操作系统同样重要。我们今天将尽我们所能地来了解一下它。

### Collapse OS:当文明被掩埋在垃圾中

![][1]

这里说的操作系统称为 [Collapse OS(崩溃操作系统)][2]。根据该官方网站的说法,Collapse OS 是 “z80 内核以及一系列程序、工具和文档的集合”。它可以让你:

* 可在最小的和临时拼凑的机器上运行。
* 通过临时拼凑的方式(串行、键盘、显示)进行接口。
* 可编辑文本文件。
* 编译适用于各种 MCU 和 CPU 的汇编源代码文件。
* 从各种存储设备读取和写入。
* 自我复制。

其创造者 [Virgil Dupras][3] 之所以开始这个项目,是因为[他认为][4]“我们的全球供应链在我们到达 2030 年之前就会崩溃”。他是根据<ruby>巴勃罗·塞维尼<rt>Pablo Servigne</rt></ruby>的作品得出了这一结论的。他似乎也觉得并非所有人都会认可[他的观点][4]:“话虽如此,我认为不相信到 2030 年可能会发生崩溃也是可以理解的,所以请不要为我的信念而感到受到了冲击。”

该项目的总体目标是迅速让瓦解崩溃后的文明重新回到计算机时代。电子产品的生产取决于非常复杂的供应链。一旦供应链崩溃,人类将回到一个技术水平较低的时代。要恢复我们以前的技术水平,将需要数十年的时间。Dupras 希望通过创建一个生态系统来跨越几个步骤,该生态系统将与从各种来源搜寻到的更简单的芯片一起工作。

### z80 是什么?

最初的 Collapse OS 内核是为 [z80 芯片][5]编写的。作为复古计算机历史的爱好者,我对 [Zilog][6] 和 z80 芯片很熟悉。在 1970 年代后期,Zilog 公司推出了 z80,以和 [Intel 的 8080][7] CPU 竞争。z80 被用于许多早期的个人计算机中,例如 [Sinclair ZX Spectrum][8] 和 [Tandy TRS-80][9]。这些系统中的大多数使用了 [CP/M 操作系统][10],这是当时最流行的操作系统。(有趣的是,Dupras 最初希望使用[一个开源版本的 CP/M][11],但最终决定[从头开始][12]。)

在 1981 年 [IBM PC][13] 发布之后,z80 和 CP/M 的普及率开始下降。Zilog 确实发布了其它几种微处理器(Z8000 和 Z80000),但并没有获得成功。该公司将重点转移到了微控制器上。今天,更新后的 z80 后代产品可以在图形计算器、嵌入式设备和消费电子产品中找到。

Dupras 在 [Reddit][14] 上说,他为 z80 编写了 Collapse OS,因为“它已经投入生产很长时间了,并且因为它被用于许多机器上,所以拾荒者有很大的机会拿到它。”

### 该项目的当前状态和未来发展

Collapse OS 的起步相当不错。有足够的内存和存储空间它就可以进行自我复制。它可以在 [RC2014 家用计算机][15]或世嘉 Master System / MegaDrive(Genesis)上运行。它可以读取 SD 卡。它有一个简单的文本编辑器。其内核由用粘合代码连接起来的模块组成。这是为了使系统具有灵活性和适应性。

还有一个详细的[路线图][16]列出了该项目的方向。列出的目标包括:

* 支持其他 CPU,例如 8080 和 [6502][17]。
* 支持临时拼凑的外围设备,例如 LCD 屏幕、电子墨水显示器和 [ACIA 设备][18]。
* 支持更多的存储方式,例如软盘、CD、SPI RAM/ROM 和 AVR MCU。
* 使它可以在其他 z80 机器上工作,例如 [TI-83+][19] 和 [TI-84+][20] 图形计算器和 TRS-80s。

如果你有兴趣帮助或只是想窥视一下这个项目,请访问其 [GitHub 页面][21]。

### 最后的思考

坦率地说,我认为 Collapse OS 与其说是一个有用的项目,倒不如说更像是一个有趣的爱好项目(对于那些喜欢构建操作系统的人来说)。当崩溃真的到来时,我认为 GitHub 也会宕机,那么 Collapse OS 将如何分发?我无法想象需要具备多少技能,才能用捡来的零件创建出这样一个系统。到时候会有新一代的创客们,但大多数创客们会习惯于选择 Arduino 或树莓派来构建项目,而不是从头开始。

与 Dupras 相反,我最担心的是[电磁脉冲炸弹(EMP)][22]的使用。这些东西会炸毁所有的电气系统,这意味着将没有任何构建系统的可能。如果没有发生这种事情,我想我们将能够找到过去 30 年制造的那么多的 x86 组件,以保持它们运行下去。

话虽如此,对于那些喜欢为奇奇怪怪的应用编写低级代码的人来说,Collapse OS 听起来是一个有趣且具有高度挑战性的项目。如果你是这样的人,去看看 [Collapse OS][2] 的代码吧。

让我提个假设的问题:你选择的世界末日操作系统是什么?请在下面的评论中告诉我们。

如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][23] 上分享。

--------------------------------------------------------------------------------

via: https://itsfoss.com/collapse-os/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/Collapse_OS.jpg?ssl=1
[2]: https://collapseos.org/
[3]: https://github.com/hsoft
[4]: https://collapseos.org/why.html
[5]: https://en.m.wikipedia.org/wiki/Z80
[6]: https://en.wikipedia.org/wiki/Zilog
[7]: https://en.wikipedia.org/wiki/Intel_8080
[8]: https://en.wikipedia.org/wiki/ZX_Spectrum
[9]: https://en.wikipedia.org/wiki/TRS-80
[10]: https://en.wikipedia.org/wiki/CP/M
[11]: https://github.com/davidgiven/cpmish
[12]: https://github.com/hsoft/collapseos/issues/52
[13]: https://en.wikipedia.org/wiki/IBM_Personal_Computer
[14]: https://old.reddit.com/r/collapse/comments/dejmvz/collapse_os_bootstrap_postcollapse_technology/f2w3sid/?st=k1gujoau&sh=1b344da9
[15]: https://rc2014.co.uk/
[16]: https://collapseos.org/roadmap.html
[17]: https://en.wikipedia.org/wiki/MOS_Technology_6502
[18]: https://en.wikipedia.org/wiki/MOS_Technology_6551
[19]: https://en.wikipedia.org/wiki/TI-83_series#TI-83_Plus
[20]: https://en.wikipedia.org/wiki/TI-84_Plus_series
[21]: https://github.com/hsoft/collapseos
[22]: https://en.wikipedia.org/wiki/Electromagnetic_pulse
[23]: https://reddit.com/r/linuxusersgroup
published/20191029 Upgrading Fedora 30 to Fedora 31.md
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11541-1.html)
|
||||
[#]: subject: (Upgrading Fedora 30 to Fedora 31)
|
||||
[#]: via: (https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/)
|
||||
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
|
||||
|
||||
将 Fedora 30 升级到 Fedora 31
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Fedora 31 [日前发布了][2]。你也许想要升级系统来获得 Fedora 中的最新功能。Fedora 工作站有图形化的升级方式。另外,Fedora 提供了一种命令行方式来将 Fedora 30 升级到 Fedora 31。
|
||||
|
||||
### 将 Fedora 30 工作站升级到 Fedora 31
|
||||
|
||||
在该发布不久之后,就会有通知告诉你有可用升级。你可以点击通知打开 GNOME “软件”。或者在 GNOME Shell 选择“软件”。
|
||||
|
||||
在 GNOME 软件中选择*更新*,你应该会看到告诉你有 Fedora 31 更新的提示。
|
||||
|
||||
如果你在屏幕上看不到任何内容,请尝试使用左上方的重新加载按钮。在发布后,所有系统可能需要一段时间才能看到可用的升级。
|
||||
|
||||
选择*下载*以获取升级包。你可以继续工作,直到下载完成。然后使用 GNOME “软件”重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再返回系统。
|
||||
|
||||
### 使用命令行
|
||||
|
||||
如果你是从 Fedora 以前的版本升级的,那么你可能对 `dnf upgrade` 插件很熟悉。这是推荐且支持的从 Fedora 30 升级到 Fedora 31 的方法。使用此插件能让你轻松地升级到 Fedora 31。
|
||||
|
||||
#### 1、更新软件并备份系统
|
||||
|
||||
在开始升级之前,请确保你安装了 Fedora 30 的最新软件。如果你安装了模块化软件,这点尤为重要。`dnf` 和 GNOME “软件”的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用 GNOME “软件” 或在终端中输入以下命令:
|
||||
|
||||
```
|
||||
sudo dnf upgrade --refresh
|
||||
```
|
||||
|
||||
此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列][3]。
|
||||
|
||||
#### 2、安装 DNF 插件
|
||||
|
||||
接下来,打开终端并输入以下命令安装插件:
|
||||
|
||||
```
|
||||
sudo dnf install dnf-plugin-system-upgrade
|
||||
```
|
||||
|
||||
#### 3、使用 DNF 开始更新
|
||||
|
||||
现在,你的系统是最新的,已经备份并且安装了 DNF 插件,你可以通过在终端中使用以下命令来开始升级:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade download --releasever=31
|
||||
```
|
||||
|
||||
该命令将开始在本地下载计算机的所有升级。如果由于缺乏更新包、损坏的依赖项或已淘汰的软件包而在升级时遇到问题,请在输入上面的命令时添加 `--allowerasing` 标志。这将使 DNF 删除可能阻止系统升级的软件包。
|
||||
|
||||
#### 4、重启并升级
|
||||
|
||||
上面的命令下载更新完成后,你的系统就可以重启了。要将系统引导至升级过程,请在终端中输入以下命令:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade reboot
|
||||
```
|
||||
|
||||
此后,你的系统将重启。在许多版本之前,`fedup` 工具会在内核选择/引导页面上创建一个新选项。使用 `dnf-plugin-system-upgrade` 软件包,你的系统将重新引导到当前 Fedora 30 使用的内核。这很正常。在内核选择页面之后不久,你的系统会开始升级过程。
|
||||
|
||||
现在也许可以喝杯咖啡休息下!升级完成后,系统将重启,你将能够登录到新升级的 Fedora 31 系统。
|
||||
|
||||
![][4]
|
||||
|
||||
### 解决升级问题
|
||||
|
||||
有时,升级系统时可能会出现意外问题。如果遇到任何问题,请访问 [DNF 系统升级文档][5],以获取有关故障排除的更多信息。
|
||||
|
||||
如果升级时遇到问题,并且系统上安装了第三方仓库,那么在升级时可能需要禁用这些仓库。对于 Fedora 不提供的仓库的支持,请联系仓库的提供者。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/upgrading-fedora-30-to-fedora-31/
|
||||
|
||||
作者:[Ben Cotton][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/bcotton/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/f30-f31-816x345.jpg
|
||||
[2]: https://linux.cn/article-11522-1.html
|
||||
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
|
||||
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
|
||||
[5]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues
|
@ -0,0 +1,158 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11543-1.html)
|
||||
[#]: subject: (Getting started with awk, a powerful text-parsing tool)
|
||||
[#]: via: (https://opensource.com/article/19/10/intro-awk)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
awk 入门 —— 强大的文本分析工具
|
||||
======
|
||||
|
||||
> 让我们开始使用它。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201911/06/114421e006e9mbh0xxe8bb.jpg)
|
||||
|
||||
`awk` 是用于 Unix 和类 Unix 系统的强大文本解析工具。由于它带有可编程的函数,不仅能完成常规的解析任务,还可以被当作一种编程语言来使用。你可能不会使用 `awk` 开发下一个 GUI 应用,它也可能不会代替你的默认脚本语言,但它是用于特定任务的强大程序。
|
||||
|
||||
这些任务或许是惊人的多样化。了解 `awk` 可以解决你的哪些问题的最好方法是学习 `awk`。你会惊讶于 `awk` 如何帮助你完成更多工作,却花费更少的精力。
|
||||
|
||||
`awk` 的基本语法是:
|
||||
|
||||
```
|
||||
awk [options] 'pattern {action}' file
|
||||
```
|
||||
|
||||
首先,创建此示例文件并将其保存为 `colours.txt`。
|
||||
|
||||
```
|
||||
name color amount
|
||||
apple red 4
|
||||
banana yellow 6
|
||||
strawberry red 3
|
||||
grape purple 10
|
||||
apple green 8
|
||||
plum purple 2
|
||||
kiwi brown 4
|
||||
potato brown 9
|
||||
pineapple yellow 5
|
||||
```
|
||||
|
||||
数据被一个或多个空格分隔为列。以某种方式组织要分析的数据是很常见的。它不一定总是由空格分隔的列,甚至可以不是逗号或分号,但尤其是在日志文件或数据转储中,通常有一个可预测的格式。你可以使用数据格式来帮助 `awk` 提取和处理你关注的数据。
|
||||
|
||||
### 打印列
|
||||
|
||||
在 `awk` 中,`print` 函数显示你指定的内容。你可以使用许多预定义的变量,但是最常见的是文本文件中以整数命名的列。试试看:
|
||||
|
||||
```
|
||||
$ awk '{print $2;}' colours.txt
|
||||
color
|
||||
red
|
||||
yellow
|
||||
red
|
||||
purple
|
||||
green
|
||||
purple
|
||||
brown
|
||||
brown
|
||||
yellow
|
||||
```
|
||||
|
||||
在这里,`awk` 显示第二列,用 `$2` 表示。这是相对直观的,因此你可能会猜测 `print $1` 显示第一列,而 `print $3` 显示第三列,依此类推。
|
||||
|
||||
要显示*全部*列,请使用 `$0`。
|
||||
|
||||
美元符号(`$`)后的数字是*表达式*,因此 `$2` 和 `$(1+1)` 是同一意思。
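下面用一个可独立运行的小演示来说明这一点(示例文件和路径为了演示而虚构,格式与上文的 `colours.txt` 相同):

```shell
# 生成一个两行的演示文件
printf 'apple red 4\nbanana yellow 6\n' > /tmp/colours_demo.txt
# $(1+1) 先求值为 2,因此与 '{print $2}' 打印同一列:red、yellow
awk '{print $(1+1)}' /tmp/colours_demo.txt
```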
|
||||
|
||||
### 有条件地选择列
|
||||
|
||||
你使用的示例文件非常结构化。它有一行充当标题,并且各列直接相互关联。通过定义*条件*,你可以限定 `awk` 在找到此数据时返回的内容。例如,要查看第二列中与 `yellow` 匹配的项并打印第一列的内容:
|
||||
|
||||
```
|
||||
awk '$2=="yellow"{print $1}' colours.txt
|
||||
banana
|
||||
pineapple
|
||||
```
|
||||
|
||||
正则表达式也可以工作。此表达式匹配 `$2` 中含有一个 `p`、后跟任意数量(一个或多个)字符、再跟另一个 `p` 的值:
|
||||
|
||||
```
|
||||
$ awk '$2 ~ /p.+p/ {print $0}' colours.txt
|
||||
grape purple 10
|
||||
plum purple 2
|
||||
```
|
||||
|
||||
数字能被 `awk` 自然解释。例如,要打印第三列包含大于 5 的整数的行:
|
||||
|
||||
```
|
||||
awk '$3>5 {print $1, $2}' colours.txt
|
||||
name color
|
||||
banana yellow
|
||||
grape purple
|
||||
apple green
|
||||
potato brown
|
||||
```
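细心的读者会注意到,上面的输出把标题行(`name color`)也包含了进来:标题行的第三列是字符串 `amount`,`awk` 对它与 `5` 做的是字符串比较,结果恰好为真。一个简单的改进(非原文内容)是用内置变量 `NR` 跳过第一行:

```shell
# 为保证可独立运行,这里重新生成上文的 colours.txt
printf 'name color amount\napple red 4\nbanana yellow 6\nstrawberry red 3\ngrape purple 10\napple green 8\nplum purple 2\nkiwi brown 4\npotato brown 9\npineapple yellow 5\n' > colours.txt
# NR 是当前记录(行)号,NR>1 跳过标题行
awk 'NR>1 && $3>5 {print $1, $2}' colours.txt
```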
|
||||
|
||||
### 字段分隔符
|
||||
|
||||
默认情况下,`awk` 使用空格作为字段分隔符。但是,并非所有文本文件都使用空格来定义字段。例如,用以下内容创建一个名为 `colours.csv` 的文件:
|
||||
|
||||
```
|
||||
name,color,amount
|
||||
apple,red,4
|
||||
banana,yellow,6
|
||||
strawberry,red,3
|
||||
grape,purple,10
|
||||
apple,green,8
|
||||
plum,purple,2
|
||||
kiwi,brown,4
|
||||
potato,brown,9
|
||||
pineapple,yellow,5
|
||||
```
|
||||
|
||||
只要你指定将哪个字符用作命令中的字段分隔符,`awk` 就能以完全相同的方式处理数据。使用 `--field-separator`(或简称为 `-F`)选项来定义分隔符:
|
||||
|
||||
```
|
||||
$ awk -F"," '$2=="yellow" {print $1}' colours.csv
|
||||
banana
|
||||
pineapple
|
||||
```
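除了 `-F` 选项,还可以在 `BEGIN` 块中设置内置变量 `FS`,效果等价(这是一个补充示意,文件名为演示而虚构):

```shell
printf 'name,color,amount\nbanana,yellow,6\npineapple,yellow,5\n' > /tmp/colours_demo.csv
# 在 BEGIN 块中设置 FS 变量,与命令行的 -F"," 等价,输出 banana、pineapple
awk 'BEGIN{FS=","} $2=="yellow" {print $1}' /tmp/colours_demo.csv
```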
|
||||
|
||||
### 保存输出
|
||||
|
||||
使用输出重定向,你可以将结果写入文件。例如:
|
||||
|
||||
```
|
||||
$ awk -F, '$3>5 {print $1, $2}' colours.csv > output.txt
|
||||
```
|
||||
|
||||
这将创建一个包含 `awk` 查询内容的文件。
|
||||
|
||||
你还可以将文件拆分为按列数据分组的多个文件。例如,如果要根据每行显示的颜色将 `colours.txt` 拆分为多个文件,你可以在 `awk` 中包含重定向语句来重定向*每条查询*:
|
||||
|
||||
```
|
||||
$ awk '{print > $2".txt"}' colours.txt
|
||||
```
|
||||
|
||||
这将生成名为 `yellow.txt`、`red.txt` 等文件。
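下面是一个可独立运行的小演示(文件名为示意)。这里把重定向目标用括号括起来,以显式写出字符串拼接的优先级——这是一种更稳妥的可移植写法,行为与上面的命令相同:

```shell
cd /tmp
printf 'apple red\nbanana yellow\nstrawberry red\n' > colours_demo.txt
# 每行重定向到以第二列命名的文件;括号让拼接的优先级一目了然
awk '{print > ($2 ".txt")}' colours_demo.txt
cat red.txt   # 包含 apple red 和 strawberry red 两行
```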
|
||||
|
||||
在下一篇文章中,你将了解有关字段、记录和一些强大的 awk 变量的更多信息。
|
||||
|
||||
本文改编自社区技术播客 [Hacker Public Radio][2]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/intro-awk
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
|
||||
[2]: http://hackerpublicradio.org/eps.php?id=2114
|
@ -0,0 +1,198 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lnrCoder)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11542-1.html)
|
||||
[#]: subject: (How to Find Out Top Memory Consuming Processes in Linux)
|
||||
[#]: via: (https://www.2daygeek.com/linux-find-top-memory-consuming-processes/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
如何在 Linux 中找出内存消耗最大的进程
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201911/06/110149r81efjx12afjat7f.jpg)
|
||||
|
||||
你可能不止一次遇到过系统消耗过多内存的情况。如果是这样,那么最好的办法是找出 Linux 机器上消耗过多内存的进程。我相信,你可能已经运行过下文中的命令来进行检查。如果没有,那你尝试过哪些其他的命令?我希望你可以在评论中补充,它可能会帮助其他用户。
|
||||
|
||||
使用 [top 命令][1] 和 [ps 命令][2] 可以轻松的识别这种情况。我过去经常同时使用这两个命令,两个命令得到的结果是相同的。所以我建议你从中选择一个喜欢的使用就可以。
|
||||
|
||||
### 1) 如何使用 ps 命令在 Linux 中查找内存消耗最大的进程
|
||||
|
||||
`ps` 命令用于报告当前进程的快照。`ps` 命令的意思是“进程状态”。这是一个标准的 Linux 应用程序,用于查找有关在 Linux 系统上运行进程的信息。
|
||||
|
||||
它用于列出当前正在运行的进程及其进程 ID(PID)、进程所有者名称、进程优先级(PR)以及正在运行的命令的绝对路径等。
|
||||
|
||||
下面的 `ps` 命令格式为你提供有关内存消耗最大进程的更多信息。
|
||||
|
||||
```
|
||||
# ps aux --sort -rss | head
|
||||
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
mysql 1064 3.2 5.4 886076 209988 ? Ssl Oct25 62:40 /usr/sbin/mysqld
|
||||
varnish 23396 0.0 2.9 286492 115616 ? SLl Oct25 0:42 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
|
||||
named 1105 0.0 2.7 311712 108204 ? Ssl Oct25 0:16 /usr/sbin/named -u named -c /etc/named.conf
|
||||
nobody 23377 0.2 2.3 153096 89432 ? S Oct25 4:35 nginx: worker process
|
||||
nobody 23376 0.1 2.1 147096 83316 ? S Oct25 2:18 nginx: worker process
|
||||
root 23375 0.0 1.7 131028 66764 ? Ss Oct25 0:01 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
|
||||
nobody 23378 0.0 1.6 130988 64592 ? S Oct25 0:00 nginx: cache manager process
|
||||
root 1135 0.0 0.9 86708 37572 ? S 05:37 0:20 cwpsrv: worker process
|
||||
root 1133 0.0 0.9 86708 37544 ? S 05:37 0:05 cwpsrv: worker process
|
||||
```
|
||||
|
||||
使用以下 `ps` 命令格式可在输出中仅展示有关内存消耗过程的特定信息。
|
||||
|
||||
```
|
||||
# ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head
|
||||
|
||||
PID PPID %MEM %CPU CMD
|
||||
1064 1 5.4 3.2 /usr/sbin/mysqld
|
||||
23396 23386 2.9 0.0 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
|
||||
1105 1 2.7 0.0 /usr/sbin/named -u named -c /etc/named.conf
|
||||
23377 23375 2.3 0.2 nginx: worker process
|
||||
23376 23375 2.1 0.1 nginx: worker process
|
||||
3625 977 1.9 0.0 /usr/local/bin/php-cgi /home/daygeekc/public_html/index.php
|
||||
23375 1 1.7 0.0 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
|
||||
23378 23375 1.6 0.0 nginx: cache manager process
|
||||
1135 3034 0.9 0.0 cwpsrv: worker process
|
||||
```
|
||||
|
||||
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `ps` 命令格式。
|
||||
|
||||
```
|
||||
# ps -eo pid,ppid,%mem,%cpu,comm --sort=-%mem | head
|
||||
|
||||
PID PPID %MEM %CPU COMMAND
|
||||
1064 1 5.4 3.2 mysqld
|
||||
23396 23386 2.9 0.0 cache-main
|
||||
1105 1 2.7 0.0 named
|
||||
23377 23375 2.3 0.2 nginx
|
||||
23376 23375 2.1 0.1 nginx
|
||||
23375 1 1.7 0.0 nginx
|
||||
23378 23375 1.6 0.0 nginx
|
||||
1135 3034 0.9 0.0 cwpsrv
|
||||
1133 3034 0.9 0.0 cwpsrv
|
||||
```
|
||||
|
||||
### 2) 如何使用 top 命令在 Linux 中查找内存消耗最大的进程
|
||||
|
||||
Linux 的 `top` 命令是用来监视 Linux 系统性能的最好和最知名的命令。它在交互界面上显示运行的系统进程的实时视图。但是,如果要查找内存消耗最大的进程,请 [在批处理模式下使用 top 命令][3]。
|
||||
|
||||
你应该正确地 [了解 top 命令输出][4] 以解决系统中的性能问题。
|
||||
|
||||
```
|
||||
# top -c -b -o +%MEM | head -n 20 | tail -15
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
1064 mysql 20 0 886076 209740 8388 S 0.0 5.4 62:41.20 /usr/sbin/mysqld
|
||||
23396 varnish 20 0 286492 115616 83572 S 0.0 3.0 0:42.24 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :82 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M
|
||||
1105 named 20 0 311712 108204 2424 S 0.0 2.8 0:16.41 /usr/sbin/named -u named -c /etc/named.conf
|
||||
23377 nobody 20 0 153240 89432 2432 S 0.0 2.3 4:35.74 nginx: worker process
|
||||
23376 nobody 20 0 147096 83316 2416 S 0.0 2.1 2:18.09 nginx: worker process
|
||||
23375 root 20 0 131028 66764 1616 S 0.0 1.7 0:01.07 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
|
||||
23378 nobody 20 0 130988 64592 592 S 0.0 1.7 0:00.51 nginx: cache manager process
|
||||
1135 root 20 0 86708 37572 2252 S 0.0 1.0 0:20.18 cwpsrv: worker process
|
||||
1133 root 20 0 86708 37544 2212 S 0.0 1.0 0:05.94 cwpsrv: worker process
|
||||
3034 root 20 0 86704 36740 1452 S 0.0 0.9 0:00.09 cwpsrv: master process /usr/local/cwpsrv/bin/cwpsrv
|
||||
1067 nobody 20 0 1356200 31588 2352 S 0.0 0.8 0:56.06 /usr/local/apache/bin/httpd -k start
|
||||
977 nobody 20 0 1356088 31268 2372 S 0.0 0.8 0:30.44 /usr/local/apache/bin/httpd -k start
|
||||
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 /usr/local/apache/bin/httpd -k start
|
||||
```
|
||||
|
||||
如果你只想查看命令名称而不是命令的绝对路径,请使用下面的 `top` 命令格式。
|
||||
|
||||
```
|
||||
# top -b -o +%MEM | head -n 20 | tail -15
|
||||
|
||||
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
|
||||
1064 mysql 20 0 886076 210340 8388 S 6.7 5.4 62:40.93 mysqld
|
||||
23396 varnish 20 0 286492 115616 83572 S 0.0 3.0 0:42.24 cache-main
|
||||
1105 named 20 0 311712 108204 2424 S 0.0 2.8 0:16.41 named
|
||||
23377 nobody 20 0 153240 89432 2432 S 13.3 2.3 4:35.74 nginx
|
||||
23376 nobody 20 0 147096 83316 2416 S 0.0 2.1 2:18.09 nginx
|
||||
23375 root 20 0 131028 66764 1616 S 0.0 1.7 0:01.07 nginx
|
||||
23378 nobody 20 0 130988 64592 592 S 0.0 1.7 0:00.51 nginx
|
||||
1135 root 20 0 86708 37572 2252 S 0.0 1.0 0:20.18 cwpsrv
|
||||
1133 root 20 0 86708 37544 2212 S 0.0 1.0 0:05.94 cwpsrv
|
||||
3034 root 20 0 86704 36740 1452 S 0.0 0.9 0:00.09 cwpsrv
|
||||
1067 nobody 20 0 1356200 31588 2352 S 0.0 0.8 0:56.04 httpd
|
||||
977 nobody 20 0 1356088 31268 2372 S 0.0 0.8 0:30.44 httpd
|
||||
968 nobody 20 0 1356216 30544 2348 S 0.0 0.8 0:19.95 httpd
|
||||
```
|
||||
|
||||
### 3) 奖励技巧:如何使用 ps_mem 命令在 Linux 中查找内存消耗最大的进程
|
||||
|
||||
[ps_mem 程序][5] 用于显示每个程序(而不是每个进程)使用的核心内存。该程序允许你检查每个程序使用了多少内存。它根据程序计算私有和共享内存的数量,并以最合适的方式返回已使用的总内存。
|
||||
|
||||
它使用以下逻辑来计算内存使用量:总内存使用量 = sum(程序各进程的私有内存使用量) + sum(程序各进程的共享内存使用量)。
|
||||
|
||||
```
|
||||
# ps_mem
|
||||
|
||||
Private + Shared = RAM used Program
|
||||
128.0 KiB + 27.5 KiB = 155.5 KiB agetty
|
||||
228.0 KiB + 47.0 KiB = 275.0 KiB atd
|
||||
284.0 KiB + 53.0 KiB = 337.0 KiB irqbalance
|
||||
380.0 KiB + 81.5 KiB = 461.5 KiB dovecot
|
||||
364.0 KiB + 121.5 KiB = 485.5 KiB log
|
||||
520.0 KiB + 65.5 KiB = 585.5 KiB auditd
|
||||
556.0 KiB + 60.5 KiB = 616.5 KiB systemd-udevd
|
||||
732.0 KiB + 48.0 KiB = 780.0 KiB crond
|
||||
296.0 KiB + 524.0 KiB = 820.0 KiB avahi-daemon (2)
|
||||
772.0 KiB + 51.5 KiB = 823.5 KiB systemd-logind
|
||||
940.0 KiB + 162.5 KiB = 1.1 MiB dbus-daemon
|
||||
1.1 MiB + 99.0 KiB = 1.2 MiB pure-ftpd
|
||||
1.2 MiB + 100.5 KiB = 1.3 MiB master
|
||||
1.3 MiB + 198.5 KiB = 1.5 MiB pickup
|
||||
1.3 MiB + 198.5 KiB = 1.5 MiB bounce
|
||||
1.3 MiB + 198.5 KiB = 1.5 MiB pipe
|
||||
1.3 MiB + 207.5 KiB = 1.5 MiB qmgr
|
||||
1.4 MiB + 198.5 KiB = 1.6 MiB cleanup
|
||||
1.3 MiB + 299.5 KiB = 1.6 MiB trivial-rewrite
|
||||
1.5 MiB + 145.0 KiB = 1.6 MiB config
|
||||
1.4 MiB + 291.5 KiB = 1.6 MiB tlsmgr
|
||||
1.4 MiB + 308.5 KiB = 1.7 MiB local
|
||||
1.4 MiB + 323.0 KiB = 1.8 MiB anvil (2)
|
||||
1.3 MiB + 559.0 KiB = 1.9 MiB systemd-journald
|
||||
1.8 MiB + 240.5 KiB = 2.1 MiB proxymap
|
||||
1.9 MiB + 322.5 KiB = 2.2 MiB auth
|
||||
2.4 MiB + 88.5 KiB = 2.5 MiB systemd
|
||||
2.8 MiB + 458.5 KiB = 3.2 MiB smtpd
|
||||
2.9 MiB + 892.0 KiB = 3.8 MiB bash (2)
|
||||
3.3 MiB + 555.5 KiB = 3.8 MiB NetworkManager
|
||||
4.1 MiB + 233.5 KiB = 4.3 MiB varnishd
|
||||
4.0 MiB + 662.0 KiB = 4.7 MiB dhclient (2)
|
||||
4.3 MiB + 623.5 KiB = 4.9 MiB rsyslogd
|
||||
3.6 MiB + 1.8 MiB = 5.5 MiB sshd (3)
|
||||
5.6 MiB + 431.0 KiB = 6.0 MiB polkitd
|
||||
13.0 MiB + 546.5 KiB = 13.6 MiB tuned
|
||||
22.5 MiB + 76.0 KiB = 22.6 MiB lfd - sleeping
|
||||
30.0 MiB + 6.2 MiB = 36.2 MiB php-fpm (6)
|
||||
5.7 MiB + 33.5 MiB = 39.2 MiB cwpsrv (3)
|
||||
20.1 MiB + 25.3 MiB = 45.4 MiB httpd (5)
|
||||
104.7 MiB + 156.0 KiB = 104.9 MiB named
|
||||
112.2 MiB + 479.5 KiB = 112.7 MiB cache-main
|
||||
69.4 MiB + 58.6 MiB = 128.0 MiB nginx (4)
|
||||
203.4 MiB + 309.5 KiB = 203.7 MiB mysqld
|
||||
---------------------------------
|
||||
775.8 MiB
|
||||
=================================
|
||||
```
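`ps_mem` 的精确统计依赖读取 `/proc/<pid>/smaps` 来区分私有和共享内存。作为上述公式的一个粗略示意(并非 `ps_mem` 的实现),可以用 `ps` 和 `awk` 按程序名把各进程的 RSS 加总。注意:RSS 会把共享内存重复计入每个进程,所以结果只是一个粗略的上界:

```shell
# 按程序名(comm)汇总各进程 RSS(KiB),并显示最大的 5 个程序
ps -eo comm=,rss= | awk '{sum[$1]+=$2} END{for (p in sum) printf "%d %s\n", sum[p], p}' | sort -n | tail -5
```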
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-find-top-memory-consuming-processes/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lnrCoder](https://github.com/lnrCoder)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
|
||||
[2]: https://www.2daygeek.com/linux-ps-command-find-running-process-monitoring/
|
||||
[3]: https://linux.cn/article-11491-1.html
|
||||
[4]: https://www.2daygeek.com/understanding-linux-top-command-output-usage/
|
||||
[5]: https://www.2daygeek.com/ps_mem-report-core-memory-usage-accurately-in-linux/
|
210
published/20191030 Viewing network bandwidth usage with bmon.md
Normal file
@ -0,0 +1,210 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11547-1.html)
|
||||
[#]: subject: (Viewing network bandwidth usage with bmon)
|
||||
[#]: via: (https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
用 bmon 查看网络带宽使用情况
|
||||
======
|
||||
|
||||
> 介绍一下 bmon,这是一个监视和调试工具,可捕获网络统计信息并使它们易于理解。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201911/07/010237a8gb5oqddvl3bnd0.jpg)
|
||||
|
||||
`bmon` 是一种监视和调试工具,可在终端窗口中捕获网络统计信息,并提供了如何以易于理解的形式显示以及显示多少数据的选项。
|
||||
|
||||
要检查系统上是否安装了 `bmon`,请使用 `which` 命令:
|
||||
|
||||
```
|
||||
$ which bmon
|
||||
/usr/bin/bmon
|
||||
```
|
||||
|
||||
### 获取 bmon
|
||||
|
||||
在 Debian 系统上,使用 `sudo apt-get install bmon` 安装该工具。
|
||||
|
||||
对于 Red Hat 和相关发行版,你可以使用 `yum install bmon` 或 `sudo dnf install bmon` 进行安装。或者,你可能必须使用更复杂的安装方式,例如使用以下命令,这些命令首先使用 root 帐户或 sudo 来设置所需的 `libconfuse`:
|
||||
|
||||
```
|
||||
# wget https://github.com/martinh/libconfuse/releases/download/v3.2.2/confuse-3.2.2.zip
|
||||
# unzip confuse-3.2.2.zip && cd confuse-3.2.2
|
||||
# sudo PATH=/usr/local/opt/gettext/bin:$PATH ./configure
|
||||
# make
|
||||
# make install
|
||||
# git clone https://github.com/tgraf/bmon.git && cd bmon
|
||||
# ./autogen.sh
|
||||
# ./configure
|
||||
# make
|
||||
# sudo make install
|
||||
```
|
||||
|
||||
前面五行会安装 `libconfuse`,而后面五行会获取并安装 `bmon` 本身。
|
||||
|
||||
### 使用 bmon
|
||||
|
||||
启动 `bmon` 的最简单方法是在命令行中键入 `bmon`。根据你正在使用的窗口的大小,你能够查看并显示各种数据。
|
||||
|
||||
显示区域的顶部将显示你的网络接口的统计信息:环回接口(lo)和可通过网络访问的接口(例如 eth0)。如果你的终端窗口只有区区几行高,下面这就是你可能会看到的所有内容,它将看起来像这样:
|
||||
|
||||
```
|
||||
lo bmon 4.0
|
||||
Interfaces x RX bps pps %x TX bps pps %
|
||||
>lo x 4B0 x0 0 0 4B 0
|
||||
qdisc none (noqueue) x 0 0 x 0 0
|
||||
enp0s25 x 244B0 x1 0 0 470B 2
|
||||
qdisc none (fq_codel) x 0 0 x 0 0 462B 2
|
||||
q Increase screen height to see graphical statistics qq
|
||||
|
||||
|
||||
q Press d to enable detailed statistics qq
|
||||
q Press i to enable additional information qq
|
||||
Wed Oct 23 14:36:27 2019 Press ? for help
|
||||
```
|
||||
|
||||
在此示例中,网络接口是 enp0s25。请注意列出的接口下方的有用的 “Increase screen height” 提示。拉伸屏幕以增加足够的行(无需重新启动 bmon),你将看到一些图形:
|
||||
|
||||
```
|
||||
Interfaces x RX bps pps %x TX bps pps %
|
||||
>lo x 0 0 x 0 0
|
||||
qdisc none (noqueue) x 0 0 x 0 0
|
||||
enp0s25 x 253B 3 x 2.65KiB 6
|
||||
qdisc none (fq_codel) x 0 0 x 2.62KiB 6
|
||||
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
|
||||
(RX Bytes/second)
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
1 5 10 15 20 25 30 35 40 45 50 55 60
|
||||
(TX Bytes/second)
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
0.00 ............................................................
|
||||
1 5 10 15 20 25 30 35 40 45 50 55 60
|
||||
```
|
||||
|
||||
但是请注意,该图形未显示值。这是因为它正在显示环回 “>lo” 接口。按下箭头键指向公共网络接口,你将看到一些流量。
|
||||
|
||||
```
|
||||
Interfaces x RX bps pps %x TX bps pps %
|
||||
lo x 0 0 x 0 0
|
||||
qdisc none (noqueue) x 0 0 x 0 0
|
||||
>enp0s25 x 151B 2 x 1.61KiB 3
|
||||
qdisc none (fq_codel) x 0 0 x 1.60KiB 3
|
||||
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
|
||||
B (RX Bytes/second)
|
||||
635.00 ...............................|............................
|
||||
529.17 .....|.........................|....|.......................
|
||||
423.33 .....|................|..|..|..|..|.|.......................
|
||||
317.50 .|..||.|..||.|..|..|..|..|..|..||.||||......................
|
||||
211.67 .|..||.|..||.|..||||.||.|||.||||||||||......................
|
||||
105.83 ||||||||||||||||||||||||||||||||||||||......................
|
||||
1 5 10 15 20 25 30 35 40 45 50 55 60
|
||||
KiB (TX Bytes/second)
|
||||
4.59 .....................................|......................
|
||||
3.83 .....................................|......................
|
||||
3.06 ....................................||......................
|
||||
2.30 ....................................||......................
|
||||
1.53 |||..............|..|||.|...|.|||.||||......................
|
||||
0.77 ||||||||||||||||||||||||||||||||||||||......................
|
||||
1 5 10 15 20 25 30 35 40 45 50 55 60
|
||||
|
||||
|
||||
q Press d to enable detailed statistics qq
|
||||
q Press i to enable additional information qq
|
||||
Wed Oct 23 16:42:06 2019 Press ? for help
|
||||
```
|
||||
|
||||
通过更改接口,你可以查看显示了网络流量的图表。但是请注意,默认值是按每秒字节数显示的。要按每秒位数来显示,你可以使用 `bmon -b` 启动该工具。
|
||||
|
||||
如果你的窗口足够大并按下 `d` 键,则可以显示有关网络流量的详细统计信息。你看到的统计信息示例如下所示。由于其宽度太宽,该显示分为左右两部分。
|
||||
|
||||
左侧:
|
||||
|
||||
```
|
||||
RX TX │ RX TX │
|
||||
Bytes 11.26MiB 11.26MiB│ Packets 25.91K 25.91K │
|
||||
Collisions - 0 │ Compressed 0 0 │
|
||||
Errors 0 0 │ FIFO Error 0 0 │
|
||||
ICMPv6 2 2 │ ICMPv6 Checksu 0 - │
|
||||
Ip6 Broadcast 0 0 │ Ip6 Broadcast 0 0 │
|
||||
Ip6 Delivers 8 - │ Ip6 ECT(0) Pac 0 - │
|
||||
Ip6 Header Err 0 - │ Ip6 Multicast 0 152B │
|
||||
Ip6 Non-ECT Pa 8 - │ Ip6 Reasm/Frag 0 0 │
|
||||
Ip6 Reassembly 0 - │ Ip6 Too Big Er 0 - │
|
||||
Ip6Discards 0 0 │ Ip6Octets 530B 530B │
|
||||
Missed Error 0 - │ Multicast - 0 │
|
||||
Window Error - 0 │ │
|
||||
```
|
||||
|
||||
右侧:
|
||||
|
||||
```
|
||||
│ RX TX │ RX TX
|
||||
│ Abort Error - 0 │ Carrier Error - 0
|
||||
│ CRC Error 0 - │ Dropped 0 0
|
||||
│ Frame Error 0 - │ Heartbeat Erro -
|
||||
│ ICMPv6 Errors 0 0 │ Ip6 Address Er 0 -
|
||||
│ Ip6 CE Packets 0 - │ Ip6 Checksum E 0 -
|
||||
│ Ip6 ECT(1) Pac 0 - │ Ip6 Forwarded - 0
|
||||
│ Ip6 Multicast 0 2 │ Ip6 No Route 0 0
|
||||
│ Ip6 Reasm/Frag 0 0 │ Ip6 Reasm/Frag 0 0
|
||||
│ Ip6 Truncated 0 - │ Ip6 Unknown Pr 0 -
|
||||
│ Ip6Pkts 8 8 │ Length Error 0
|
||||
│ No Handler 0 - │ Over Error 0 -
|
||||
```
|
||||
|
||||
如果按下 `i` 键,将显示网络接口上的其他信息。
|
||||
|
||||
左侧:
|
||||
|
||||
```
|
||||
MTU 1500 | Flags broadcast,multicast,up |
|
||||
Address 00:1d:09:77:9d:08 | Broadcast ff:ff:ff:ff:ff:ff |
|
||||
Family unspec | Alias |
|
||||
```
|
||||
|
||||
右侧:
|
||||
|
||||
```
|
||||
| Operstate up | IfIndex 2 |
|
||||
| Mode default | TXQlen 1000 |
|
||||
| Qdisc fq_codel |
|
||||
```
|
||||
|
||||
如果你按下 `?` 键,将会出现一个帮助菜单,其中简要介绍了如何在屏幕上移动光标、选择要显示的数据以及控制图形如何显示。
|
||||
|
||||
要退出 `bmon`,输入 `q`,然后输入 `y` 以响应提示来确认退出。
|
||||
|
||||
需要注意的一些重要事项是:
|
||||
|
||||
* `bmon` 会将其显示调整为终端窗口的大小
|
||||
* 显示区域底部显示的某些选项仅在窗口足够大可以容纳数据时才起作用
|
||||
* 除非你使用 `-R`(例如 `bmon -R 5`)来减慢显示速度,否则每秒更新一次显示
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[2]: https://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,101 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (laingke)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11539-1.html)
|
||||
[#]: subject: (Why you don't have to be afraid of Kubernetes)
|
||||
[#]: via: (https://opensource.com/article/19/10/kubernetes-complex-business-problem)
|
||||
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
|
||||
|
||||
为什么你不必害怕 Kubernetes
|
||||
======
|
||||
|
||||
> Kubernetes 绝对是满足复杂 web 应用程序需求的最简单、最容易的方法。
|
||||
|
||||
![Digital creative of a browser on the internet][1]
|
||||
|
||||
在 90 年代末和 2000 年代初,在大型网站工作很有趣。我的经历让我想起了 American Greetings Interactive,在情人节那天,我们拥有了互联网上排名前 10 位之一的网站(以网络访问量衡量)。我们为 [AmericanGreetings.com][2]、[BlueMountain.com][3] 等公司提供了电子贺卡,并为 MSN 和 AOL 等合作伙伴提供了电子贺卡。该组织的老员工仍然深切地记得与 Hallmark 等其它电子贺卡网站进行大战的史诗般的故事。顺便说一句,我还为 Holly Hobbie、Care Bears 和 Strawberry Shortcake 运营过大型网站。
|
||||
|
||||
我记得那就像是昨天发生的一样,这是我们第一次遇到真正的问题。通常,我们的前门(路由器、防火墙和负载均衡器)有大约 200Mbps 的流量进入。但是,突然之间,Multi Router Traffic Grapher(MRTG)图示突然在几分钟内飙升至 2Gbps。我疯了似地东奔西跑。我了解了我们的整个技术堆栈,从路由器、交换机、防火墙和负载平衡器,到 Linux/Apache web 服务器,到我们的 Python 堆栈(FastCGI 的元版本),以及网络文件系统(NFS)服务器。我知道所有配置文件在哪里,我可以访问所有管理界面,并且我是一位经验丰富的,打过硬仗的系统管理员,具有多年解决复杂问题的经验。
|
||||
|
||||
但是,我无法弄清楚发生了什么……
|
||||
|
||||
当你在一千个 Linux 服务器上疯狂地键入命令时,五分钟的感觉就像是永恒。我知道站点可能会在任何时候崩溃,因为当它被划分成更小的集群时,压垮上千个节点的集群是那么的容易。
|
||||
|
||||
我迅速*跑到*老板的办公桌前,解释了情况。他几乎没有从电子邮件中抬起头来,这使我感到沮丧。他抬头看了看,笑了笑,说道:“是的,市场营销可能会开展广告活动。有时会发生这种情况。”他告诉我在应用程序中设置一个特殊标志,以减轻 Akamai 的访问量。我跑回我的办公桌,在上千台 web 服务器上设置了标志,几分钟后,站点恢复正常。灾难也就被避免了。
|
||||
|
||||
我可以再分享 50 个类似的故事,但你脑海中可能会有一点好奇:“这种运维方式将走向何方?”
|
||||
|
||||
关键是,我们遇到了业务问题。当技术问题使你无法开展业务时,它们就变成了业务问题。换句话说,如果你的网站无法访问,你就不能处理客户交易。
|
||||
|
||||
那么,所有这些与 Kubernetes 有什么关系?一切!世界已经改变。早在 90 年代末和 00 年代初,只有大型网站才出现大型的、<ruby>规模级<rt>web-scale</rt></ruby>的问题。现在,有了微服务和数字化转型,每个企业都面临着一个大型的、规模级的问题——可能是多个大型的、规模级的问题。
|
||||
|
||||
你的企业需要能够通过许多不同的人构建的许多不同的、通常是复杂的服务来管理复杂的规模级的网站。你的网站需要动态地处理流量,并且它们必须是安全的。这些属性需要在所有层(从基础结构到应用程序层)上由 API 驱动。
|
||||
|
||||
### 进入 Kubernetes
|
||||
|
||||
Kubernetes 并不复杂;你的业务问题才复杂。当你想在生产环境中运行应用程序时,要满足性能(伸缩性、性能抖动等)和安全性要求,就需要最低程度的复杂性。诸如高可用性(HA)、容量要求(N+1、N+2、N+100)以及保证最终一致性的数据技术等就会成为必需。这些是每家进行数字化转型的公司的生产要求,而不仅仅是 Google、Facebook 和 Twitter 这样的大型网站。
|
||||
|
||||
在旧时代,我还在 American Greetings 任职时,每次我们加入一个新的服务,它看起来像这样:所有这些都是由网站运营团队来处理的,没有一个是通过订单系统转移给其他团队来处理的。这是在 DevOps 出现之前的 DevOps:
|
||||
|
||||
1. 配置 DNS(通常是内部服务层和面向公众的外部)
|
||||
2. 配置负载均衡器(通常是内部服务和面向公众的)
|
||||
3. 配置对文件的共享访问(大型 NFS 服务器、群集文件系统等)
|
||||
4. 配置集群软件(数据库、服务层等)
|
||||
5. 配置 web 服务器群集(可以是 10 或 50 个服务器)
|
||||
|
||||
大多数配置是通过配置管理自动完成的,但是配置仍然很复杂,因为每个系统和服务都有不同的配置文件,而且格式完全不同。我们研究了像 [Augeas][4] 这样的工具来简化它,但是我们认为使用转换器来尝试和标准化一堆不同的配置文件是一种反模式。
|
||||
|
||||
如今,借助 Kubernetes,启动一项新服务本质上看起来如下:
|
||||
|
||||
1. 配置 Kubernetes YAML/JSON。
|
||||
2. 提交给 Kubernetes API(`kubectl create -f service.yaml`)。
|
||||
|
||||
Kubernetes 大大简化了服务的启动和管理。服务所有者(无论是系统管理员、开发人员还是架构师)都可以创建 Kubernetes 格式的 YAML/JSON 文件。使用 Kubernetes,每个系统和每个用户都说相同的语言。所有用户都可以在同一 Git 存储库中提交这些文件,从而启用 GitOps。
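作为示意(非原文内容),一个最小的 `service.yaml` 大致如下——其中的名称、命名空间和镜像都是虚构的,仅用来展示 Kubernetes 清单的样子:

```yaml
# 假设的最小示例:一个 Deployment 加一个 Service,所有名称均为演示用途
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
  namespace: demo
spec:
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 80
```

用 `kubectl apply -f service.yaml` 即可创建;而如下文所说,由于所有内容都在命名空间下,一条 `kubectl delete namespace demo` 就能整体删除该服务。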
|
||||
|
||||
而且,可以弃用和删除服务。从历史上看,删除 DNS 条目、负载平衡器条目和 Web 服务器的配置等是非常可怕的,因为你几乎肯定会破坏某些东西。使用 Kubernetes,所有内容都处于命名空间下,因此可以通过单个命令删除整个服务。尽管你仍然需要确保其它应用程序不使用它(微服务和函数即服务 [FaaS] 的缺点),但你可以更加确信:删除服务不会破坏基础架构环境。
|
||||
|
||||
### 构建、管理和使用 Kubernetes
|
||||
|
||||
太多的人专注于构建和管理 Kubernetes 而不是使用它(详见 [Kubernetes 是一辆翻斗车][5])。
|
||||
|
||||
在单个节点上构建一个简单的 Kubernetes 环境并不比安装 LAMP 堆栈复杂得多,但是我们无休止地争论着构建与购买的问题。难的不是 Kubernetes,难的是以高可用性大规模地运行应用程序。建立一个复杂的、高可用性的 Kubernetes 集群很困难,因为建立如此规模的任何集群都很困难。它需要规划和大量软件。建造一辆简单的翻斗车并不复杂,但是建造一辆可以运载 [10 吨垃圾并能以 200 迈的速度稳定行驶的卡车][6]则很复杂。
|
||||
|
||||
管理 Kubernetes 可能很复杂,因为管理大型的、规模级的集群可能很复杂。有时,管理此基础架构很有意义;而有时不是。由于 Kubernetes 是一个社区驱动的开源项目,它使行业能够以多种不同方式对其进行管理。供应商可以出售托管版本,而用户可以根据需要自行决定对其进行管理。(但是你应该质疑是否确实需要。)
|
||||
|
||||
使用 Kubernetes 是迄今为止运行大规模网站的最简单方法。Kubernetes 正在普及运行一组大型、复杂的 Web 服务的能力——就像当年 Linux 在 Web 1.0 中所做的那样。
|
||||
|
||||
由于时间和金钱是一个零和游戏,因此我建议将重点放在使用 Kubernetes 上。将你的时间和金钱花费在[掌握 Kubernetes 原语][7]或处理[活跃度和就绪性探针][8]的最佳方法上(表明大型、复杂的服务很难的另一个例子)。不要专注于构建和管理 Kubernetes。(在构建和管理上)许多供应商可以为你提供帮助。
|
||||
|
||||
### 结论
|
||||
|
||||
我记得对无数的问题进行了故障排除,比如我在这篇文章的开头所描述的问题——当时 Linux 内核中的 NFS、我们自产的 CFEngine、仅在某些 Web 服务器上出现的重定向问题等)。开发人员无法帮助我解决所有这些问题。实际上,除非开发人员具备高级系统管理员的技能,否则他们甚至不可能进入系统并作为第二双眼睛提供帮助。没有带有图形或“可观察性”的控制台——可观察性在我和其他系统管理员的大脑中。如今,有了 Kubernetes、Prometheus、Grafana 等,一切都改变了。
|
||||
|
||||
关键是:
|
||||
|
||||
1. 时代不一样了。现在,所有 Web 应用程序都是大型的分布式系统。就像 AmericanGreetings.com 过去一样复杂,现在每个网站都有扩展性和 HA 的要求。
|
||||
2. 运行大型的分布式系统是很困难的。绝对是。这是业务的需求,不是 Kubernetes 的问题。使用更简单的编排系统并不是解决方案。
|
||||
|
||||
Kubernetes 绝对是满足复杂 Web 应用程序需求的最简单,最容易的方法。这是我们生活的时代,而 Kubernetes 擅长于此。你可以讨论是否应该自己构建或管理 Kubernetes。有很多供应商可以帮助你构建和管理它,但是很难否认这是大规模运行复杂 Web 应用程序的最简单方法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/kubernetes-complex-business-problem
|
||||
|
||||
作者:[Scott McCarty][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[laingke](https://github.com/laingke)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/fatherlinux
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
|
||||
[2]: http://AmericanGreetings.com
|
||||
[3]: http://BlueMountain.com
|
||||
[4]: http://augeas.net/
|
||||
[5]: https://linux.cn/article-11011-1.html
|
||||
[6]: http://crunchtools.com/kubernetes-10-ton-dump-truck-handles-pretty-well-200-mph/
|
||||
[7]: https://linux.cn/article-11036-1.html
|
||||
[8]: https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html
|
@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warning for IOS XE REST API container)
[#]: via: (https://www.networkworld.com/article/3447558/cisco-issues-critical-security-warning-for-ios-xe-rest-api-container.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco issues critical security warning for IOS XE REST API container
======
This Cisco IOS XE REST API vulnerability could lead to attackers obtaining the token-id of an authenticated user.
D3Damon / Getty Images

Cisco this week said it issued a software update to address a vulnerability in its [Cisco REST API virtual service container for Cisco IOS XE][1] software that scored a critical 10 out of 10 on the Common Vulnerability Scoring System (CVSS).

With the vulnerability, an attacker could submit malicious HTTP requests to the targeted device and, if successful, obtain the _token-id_ of an authenticated user. This _token-id_ could be used to bypass authentication and execute privileged actions through the interface of the REST API virtual service container on the affected Cisco IOS XE device, the company said.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]

According to Cisco, the REST API is an application that runs in a virtual services container. A virtual services container is a virtualized environment on a device and is delivered as an open virtual application (OVA). The OVA package has to be installed and enabled on a device through the device virtualization manager (VMAN) CLI.

The Cisco REST API provides a set of RESTful APIs as an alternative to the Cisco IOS XE CLI for provisioning selected functions on Cisco devices.

Cisco said the vulnerability can be exploited under the following conditions:

* The device runs an affected Cisco IOS XE Software release.
* The device has installed and enabled an affected version of the Cisco REST API virtual service container.
* An authorized user with administrator credentials (level 15) is authenticated to the REST API interface.

The REST API interface is not enabled by default. To be vulnerable, the virtual services container must be installed and activated. Deleting the OVA package from the device's storage memory removes the attack vector. If the Cisco REST API virtual service container is not enabled, this operation will not impact the device's normal operating conditions, Cisco stated.

This vulnerability affects Cisco devices that are configured to use a vulnerable version of the Cisco REST API virtual service container. The following products are affected:

* Cisco 4000 Series Integrated Services Routers
* Cisco ASR 1000 Series Aggregation Services Routers
* Cisco Cloud Services Router 1000V Series
* Cisco Integrated Services Virtual Router

Cisco said it has [released a fixed version of the REST API][4] virtual service container and a hardened IOS XE release that prevents installation or activation of a vulnerable container on a device. If a device was already configured with an active vulnerable container, the IOS XE software upgrade will deactivate the container, leaving the device no longer vulnerable. In that case, to restore REST API functionality, customers should upgrade the Cisco REST API virtual service container to a fixed software release, the company said.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447558/cisco-issues-critical-security-warning-for-ios-xe-rest-api-container.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190828-iosxe-rest-auth-bypass
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[4]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,94 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MX Linux 19 Released With Debian 10.1 ‘Buster’ & Other Improvements)
[#]: via: (https://itsfoss.com/mx-linux-19/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

MX Linux 19 Released With Debian 10.1 ‘Buster’ & Other Improvements
======

MX Linux 18 has been one of my top recommendations for the [best Linux distributions][1], especially when considering distros other than Ubuntu.

It is based on Debian 9.6 ‘Stretch’, which delivered an incredibly fast and smooth experience.

Now, as a major upgrade, MX Linux 19 brings a lot of significant improvements and changes. Here, we shall take a look at the key highlights.

### New features in MX Linux 19

[Subscribe to our YouTube channel for more Linux videos][2]

#### Debian 10 ‘Buster’

This deserves a separate mention, as Debian 10 is indeed a major upgrade from the Debian 9.6 ‘Stretch’ on which MX Linux 18 was based.

In case you’re curious about what has changed with Debian 10 Buster, we suggest checking out our article on the [new features of Debian 10 Buster][3].

#### Xfce Desktop 4.14

![MX Linux 19][4]

[Xfce 4.14][5] happens to be the latest offering from the Xfce development team. Personally, I’m not a fan of the Xfce desktop environment, but it screams fast performance when you get to use it on a Linux distro (especially on MX Linux 19).

Interestingly, we also have a quick guide to help you [customize Xfce][6] on your system.

#### Updated Packages & Latest Debian Kernel 4.19

Along with updated packages for [GIMP][7], MESA, Firefox, and so on, it also comes baked in with the latest kernel 4.19 available for Debian Buster.

#### Updated MX-Apps

If you’ve used MX Linux before, you may know that it comes pre-installed with useful MX-Apps that help you get more things done quickly.

Apps like MX-installer and MX-packageinstaller have improved significantly.

In addition to these two, all other MX-tools have been updated here and there to fix bugs, add new translations, or simply improve the user experience.

#### Other Improvements

As this is a major upgrade, there are obviously far more under-the-hood changes than those highlighted here (including the latest antiX live system updates).

You can check out more details in their [official announcement post][8]. You may also watch this video from the developers explaining all the new stuff in MX Linux 19:

### Getting MX Linux 19

Even if you are using an MX Linux 18 version right now, you [cannot upgrade][9] to MX Linux 19. You need to go for a clean install like everyone else.

You can download MX Linux 19 from this page:

[Download MX Linux 19][10]

**Wrapping Up**

With MX Linux 18, I had a problem using my WiFi adapter due to a driver issue, which I resolved through the [forum][11]; it seems that it still hasn’t been fixed in MX Linux 19. So, you might want to take a look at my [forum post][11] if you face the same issue after installing MX Linux 19.

If you’ve been using MX Linux 18, this definitely seems to be an impressive upgrade.

Have you tried it yet? What are your thoughts on the new MX Linux 19 release? Let me know what you think in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/mx-linux-19/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://itsfoss.com/debian-10-buster/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/mx-linux-19.jpg?ssl=1
[5]: https://xfce.org/about/news
[6]: https://itsfoss.com/customize-xfce/
[7]: https://itsfoss.com/gimp-2-10-release/
[8]: https://mxlinux.org/blog/mx-19-patito-feo-released/
[9]: https://mxlinux.org/migration/
[10]: https://mxlinux.org/download-links/
[11]: https://forum.mxlinux.org/viewtopic.php?t=52201
@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Netflix builds a Jupyter Lab alternative, a bug bounty to fight election hacking, Raspberry Pi goes microscopic, and more open source news)
[#]: via: (https://opensource.com/article/19/10/news-october-26)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

Netflix builds a Jupyter Lab alternative, a bug bounty to fight election hacking, Raspberry Pi goes microscopic, and more open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]

In this edition of our open source news roundup, we take a look at a machine learning tool from Netflix, Microsoft's election software bug bounty, a cost-effective microscope built with Raspberry Pi, and more!

### Netflix releases Polynote machine learning tool

While there have been numerous advances in machine learning over the last decade, it's still a difficult, laborious, and sometimes frustrating task. To help make that task easier, Netflix has [released a machine learning notebook environment][2] called Polynote as open source.

Polynote enables "data scientists and AI researchers to integrate Netflix’s JVM-based machine learning framework with Python machine learning and visualization libraries". What makes Polynote unique is its reproducibility feature, which "takes cells’ positions in the notebook into account before executing them, helping prevent bad practices that make notebooks difficult to rerun from the top." It's also quite flexible: Polynote works with Apache Spark and supports languages like Python, Scala, and SQL.

You can grab Polynote [off GitHub][3] or learn more about it at the Polynote website.

### Microsoft announces bug bounty program for its election software

Hoping that more eyeballs on its code will make bugs shallow, Microsoft announced [a bug bounty][4] for its open source ElectionGuard software development kit for voting machines. The goal of the program is to "uncover vulnerabilities and help bolster election security."

The bounty is open to "security professionals, part-time hobbyists, and students." Successful submissions, which must include proofs of concept demonstrating how bugs could compromise the security of voters, are worth up to $15,000 (USD).

If you're interested in participating, you can find ElectionGuard's code on [GitHub][5], and read more about the [bug bounty][6].

### microscoPI: a microscope built on Raspberry Pi

It's not a stretch to say that the Raspberry Pi is one of the most flexible platforms for hardware and software hackers. Micropalaeontologist Martin Tetard saw the potential of the tiny computers in his field of study and [created the microscoPI][7].

The microscoPI is a Raspberry Pi-assisted microscope that can "capture, process, and store images and image analysis results." Using an old adjustable microscope with a movable stage as a base, Tetard added a Raspberry Pi B, a Raspberry Pi camera module, and a small touchscreen to the device. The result is a compact rig that's "completely portable and measuring less than 30 cm (12 inches) in height." The entire setup cost him €159 (about $177 USD).

Tetard has set up [a website][8] for the microscoPI, where you can learn more about it.

#### In other news

* [Happy 15th birthday, Ubuntu][9]
* [Open-Source Arm Puts Robotics Within Reach][10]
* [Apache Rya matures open source triple store database][11]
* [UNICEF Launches Cryptocurrency Fund to Back Open Source Technology][12]
* [Open-source Delta Lake project moves to the Linux Foundation][13]

_Thanks, as always, to Opensource.com staff members and moderators for their help this week._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/news-october-26

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://venturebeat.com/2019/10/23/netflix-open-sources-polynote-to-simplify-data-science-and-machine-learning-workflows/
[3]: https://github.com/polynote/polynote
[4]: https://thenextweb.com/security/2019/10/21/microsofts-open-source-election-software-now-has-a-bug-bounty-program/
[5]: https://github.com/microsoft/ElectionGuard-SDK
[6]: https://www.microsoft.com/en-us/msrc/bounty
[7]: https://www.geeky-gadgets.com/raspberry-pi-microscope-07-10-2019/
[8]: https://microscopiproject.wordpress.com/
[9]: https://www.omgubuntu.co.uk/2019/10/happy-birthday-ubuntu-2019
[10]: https://hackaday.com/2019/10/17/open-source-arm-puts-robotics-within-reach/
[11]: https://searchdatamanagement.techtarget.com/news/252472464/Apache-Rya-matures-open-source-triple-store-database
[12]: https://www.coindesk.com/unicef-launches-cryptocurrency-fund-to-back-open-source-technology
[13]: https://siliconangle.com/2019/10/16/open-source-delta-lake-project-moves-linux-foundation/
@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hypervisor comeback, Linus says no and reads email, and more industry trends)
[#]: via: (https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

Hypervisor comeback, Linus says no and reads email, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]

As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

## [Containers in 2019: They're calling it a [hypervisor] comeback][2]

> So what does all this mean as we continue with rapid adoption and hyper-ecosystem growth around Kubernetes and containers? Let’s try and break that down into a few key areas and see what all the excitement is about.

**The impact**: I'm pretty sure that the title of the article is an LL Cool J reference, which I wholeheartedly approve of. Even more important, though, is a robust unpacking of developments in the hypervisor space over the last year and how they square up against the trend towards cloud-native and container-based development.

## [Linux kernel is getting more reliable, says Linus Torvalds. Plus: What do you need to do to be him?][3]

> "In the end my job is to say no. Somebody has to be able to say no, because other developers know that if they do something bad I will say no. They hopefully in turn are more careful. But in order to be able to say no, I have to know the background, because otherwise I can't do my job. I spend all my time basically reading email about what people are working on."

**The impact**: The rehabilitation of Linus as a much chiller guy continues; this one has some good advice for people leading distributed teams.

## [Automated infrastructure in the on-premise datacenter—OpenShift 4.2 on OpenStack 15 (Stein)][4]

> Up until now IPI (Installer Provision Infrastructure) has only supported public clouds: AWS, Azure, and Google. Now with OpenShift 4.2 it is supporting OpenStack. For the first time we can bring IPI into the on-premise datacenter where it is IMHO most needed. This single feature has the potential to revolutionize on-premise environments and bring them into the cloud-age with a single click and that promise is truly something to get excited about!

**The impact**: So much tech press has started with the assumption that every company should run their infrastructure like a hyperscaler. The technology is catching up to make the user experience of that feasible.

## [Kubernetes autoscaling 101: Cluster autoscaler, horizontal autoscaler, and vertical pod autoscaler][5]

> I’m providing in this post a high-level overview of different scalability mechanisms inside Kubernetes and best ways to make them serve your needs. Remember, to truly master Kubernetes, you need to master different ways to manage the scale of cluster resources, that’s [the core of promise of Kubernetes][6].
>
> _Configuring Kubernetes clusters to balance resources and performance can be challenging, and requires expert knowledge of the inner workings of Kubernetes. Just because your app or services’ workload isn’t constant, it rather fluctuates throughout the day if not the hour. Think of it as a journey and ongoing process._

**The impact**: You can tell whether someone knows what they're talking about if they can represent it in a simple diagram. Thanks to the excellent diagrams in this post, I know more day-2 concerns of Kubernetes operators than I ever wanted to.
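At its core, the horizontal pod autoscaler covered in that post scales on one documented rule: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue). A minimal sketch of that rule in Python (the function name and sample figures are illustrative, not from the article):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current * (currentMetricValue / desiredMetricValue))."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 pods averaging 90% CPU against an 80% target scale out to 5;
# 10 pods averaging 40% against the same target scale in to 5.
print(desired_replicas(4, 90.0, 80.0))   # 5
print(desired_replicas(10, 40.0, 80.0))  # 5
```

The same formula drives both scale-up and scale-down, which is why the controller additionally applies a tolerance and a stabilization window in practice to avoid flapping.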
## [GitHub: All open source developers anywhere are welcome][7]

> Eighty percent of all open-source contributions today come from outside of the US. The top two markets for open source development outside of the US are China and India. These markets, although we have millions of developers in them, are continuing to grow faster than any others at about 30% year-over-year average.

**The impact**: One of my open source friends likes to muse on the changing culture within the open source community. He posits that the old guard gatekeepers are already becoming irrelevant. I don't know if I completely agree, but I think you can look at the exponentially increasing contributions from places that haven't been on the open source map before and safely speculate that the open source culture of tomorrow will be radically different from that of today.

_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/hypervisor-stable-kernel-and-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.infoq.com/articles/containers-hypervisors-2019/
[3]: https://www.theregister.co.uk/2019/10/30/linux_kernel_is_getting_more_reliable_says_linus_torvalds/
[4]: https://keithtenzer.com/2019/10/29/automated-infrastructure-in-the-on-premise-datacenter-openshift-4-2-on-openstack-15-stein/
[5]: https://www.cncf.io/blog/2019/10/29/kubernetes-autoscaling-101-cluster-autoscaler-horizontal-autoscaler-and-vertical-pod-autoscaler/
[6]: https://speakerdeck.com/thockin/everything-you-ever-wanted-to-know-about-resource-scheduling-dot-dot-dot-almost
[7]: https://www.zdnet.com/article/github-all-open-source-developers-anywhere-are-welcome/#ftag=RSSbaffb68
@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat announces RHEL 8.1 with predictable release cadence)
[#]: via: (https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Red Hat announces RHEL 8.1 with predictable release cadence
======

[Clkr / Pixabay][1] [(CC0)][2]

[Red Hat][3] today announced the availability of Red Hat Enterprise Linux (RHEL) 8.1, promising improvements in manageability, security and performance.

RHEL 8.1 will enhance the company’s open [hybrid-cloud][4] portfolio and continue to provide a consistent user experience between on-premises and public-cloud deployments.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][5]

RHEL 8.1 is also the first release to follow what Red Hat is calling its "predictable release cadence". Announced at Red Hat Summit 2019, this means that minor releases will be available every six months. The expectation is that this rhythmic release cycle will make it easier both for customer organizations and for other software providers to plan their upgrades.

Red Hat Enterprise Linux 8.1 provides product enhancements in many areas.

### Enhanced automation

All supported RHEL subscriptions now include access to Red Hat's proactive analytics, **Red Hat Insights**. With more than 1,000 rules for operating RHEL systems, whether on-premises or in cloud deployments, Red Hat Insights helps IT administrators flag potential configuration, security, performance, availability and stability issues before they impact production.

### New system roles

RHEL 8.1 streamlines the process of setting up subsystems to handle specific functions such as storage, networking, time synchronization, kdump and SELinux. This expands the variety of Ansible system roles.

### Live kernel patching

RHEL 8.1 adds full support for live kernel patching. This critically important feature allows IT operations teams to deal with ongoing threats without incurring excessive system downtime. Kernel updates can be applied to remediate common vulnerabilities and exposures (CVEs) while reducing the need for a system reboot. Additional security enhancements include enhanced CVE remediation, kernel-level memory protection and application whitelisting.

### Container-centric SELinux profiles

These profiles allow the creation of more tailored security policies to control how containerized services access host-system resources, making it easier to harden systems against security threats.

### Enhanced hybrid-cloud application development

A reliably consistent set of supported development tools is included, among them the latest stable versions of popular open-source tools and languages like golang and .NET Core, as well as the ability to power modern data-processing workloads such as Microsoft SQL Server and SAP solutions.

Red Hat Enterprise Linux 8.1 is available now for RHEL subscribers via the [Red Hat Customer Portal][7]. Red Hat Developer program members may obtain the latest releases at no cost at the [Red Hat Developer][8] site.

#### Additional resources

Here are some links to additional information:

* More about [Red Hat Enterprise Linux][9]
* Get a [RHEL developer subscription][10]
* More about the latest features at [Red Hat Insights][11]

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3451367/red-hat-announces-rhel-8-1-with-predictable-release-cadence.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://pixabay.com/vectors/red-hat-fedora-fashion-style-26734/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3316960/ibm-closes-34b-red-hat-deal-vaults-into-multi-cloud.html
[4]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://access.redhat.com/
[8]: https://developer.redhat.com
[9]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[10]: https://developers.redhat.com/
[11]: https://www.redhat.com/en/blog/whats-new-red-hat-insights-november-2019
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (System76 introduces laptops with open source BIOS coreboot)
[#]: via: (https://opensource.com/article/19/11/coreboot-system76-laptops)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

System76 introduces laptops with open source BIOS coreboot
======
The company answers open hardware fans by revealing two laptops powered
with open source firmware coreboot.
![Guy on a laptop on a building][1]

In mid-October, [System76][2] made an exciting announcement for open source hardware fans: It would soon begin shipping two of its laptop models, [Galago Pro][3] and [Darter Pro][4], with the open source BIOS [coreboot][5].

The coreboot project [says][6] its open source firmware "is a replacement for your BIOS / UEFI with a strong focus on boot speed, security, and flexibility. It is designed to boot your operating system as fast as possible without any compromise to security, with no back doors, and without any cruft from the '80s." Coreboot was previously known as LinuxBIOS, and the engineers who work on coreboot have also contributed to the Linux kernel.

Most firmware on computers sold today is proprietary, which means that even if you are running an open source operating system, you have no access to your machine's BIOS. This is not so with coreboot. Its developers share the improvements they make, rather than keeping them secret from other vendors. Coreboot's source code can be inspected, learned from, and modified, just like any other open source code.

[Joshua Woolery][7], marketing director at System76, says coreboot differs from a proprietary BIOS in several important ways. "Traditional firmware is closed source and impossible to review and inspect. It's bloated with unnecessary features and unnecessarily complex [ACPI][8] implementations that lead to PCs operating in unpredictable ways. System76 Open Firmware, on the other hand, is lightweight, fast, and cleanly written." This means your computer boots faster and is more secure, he says.

I asked Joshua about the impact of coreboot on open hardware overall. "The combination of open hardware and open firmware empowers users beyond what's possible when one or the other is proprietary," he says. "Imagine an open hardware controller like [System76's] [Thelio Io][9] without open source firmware. One could read the schematic and write software to control it, but why? With open firmware, the user starts from functioning hardware and software and can expand from there. Open hardware and firmware enable the community to learn from, adapt, and expand on our work, thus moving technology forward as a whole rather than requiring individuals to constantly re-implement what's already been accomplished."

Joshua says System76 is working to open source all aspects of the computer, and we will see coreboot on other System76 machines. The hardware and firmware in Thelio Io, the controller board in the company's Thelio desktops, are both open. Less than a year after System76 introduced Thelio, the company is now marketing two laptops with open firmware.

If you would like to see System76's firmware contributions to the coreboot project, visit the code repository on [GitHub][10]. You can also see the schematics for any supported System76 model by sending an [email][11] with the subject line: _Schematics for <MODEL>_. (Bear in mind that the only currently supported models are darp6 and galp4.) Using the coreboot firmware on other devices is not supported and may render them inoperable.

Coreboot is licensed under the GNU General Public License. You can view the [documentation][12] on the project's website and find out how to [contribute][13] to the project on GitHub.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/11/coreboot-system76-laptops

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Guy on a laptop on a building)
[2]: https://opensource.com/article/19/5/system76-secret-sauce
[3]: https://system76.com/laptops/galago
[4]: https://system76.com/laptops/darter
[5]: https://www.coreboot.org/
[6]: https://www.coreboot.org/users.html
[7]: https://www.linkedin.com/in/joshuawoolery
[8]: https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface
[9]: https://opensource.com/article/18/11/system76-thelio-desktop-computer
[10]: https://github.com/system76/firmware-open
[11]: mailto:productdev@system76.com
[12]: https://doc.coreboot.org/index.html
[13]: https://github.com/coreboot/coreboot

@ -1,92 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Someone Forked GIMP into Glimpse Because Gimp is an Offensive Word)
[#]: via: (https://itsfoss.com/gimp-fork-glimpse/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Someone Forked GIMP into Glimpse Because Gimp is an Offensive Word
======

In the world of open source applications, forking is common when members of the community want to take an application in a different direction than the rest. The latest newsworthy fork is named [Glimpse][1] and is intended to fix certain issues that users have with the [GNU Image Manipulation Program][2], commonly known as GIMP.

### Why create a fork of GIMP?

![][3]

When you visit the [homepage][1] of the Glimpse app, it says that the goal of the project is to “experiment with other design directions and fix longstanding bugs.” That doesn’t sound too much out of the ordinary. However, if you start reading the project’s blog posts, a different image appears.

According to the project’s [first blog post][4], they created this fork because they did not like the GIMP name. According to the post, “A number of us disagree that the name of the software is suitable for all users, and after 13 years of the project refusing to budge on this have decided to fork!”

If you are wondering why these people find the word GIMP disagreeable, they answer that question on the [About page][5]:

> “If English is not your first language, then you may not have realised that the word “gimp” is problematic. In some countries it is considered a slur against disabled people and a playground insult directed at unpopular children. It can also be linked to certain “after dark” activities performed by consenting adults.”

They also point out that they are not making this move out of political correctness or being oversensitive. “In addition to the pain it can cause to marginalized communities many of us have our own free software advocacy stories about the GNU Image Manipulation Program not being taken seriously as an option by bosses or colleagues in professional settings.”

As if to answer many questions, they also said, “It is unfortunate that we have to fork the whole project to change the name, but we feel that discussions about the issue are at an impasse and that this is the most positive way forward.”

It looks like the Glimpse name is not written in stone. There is [an issue][7] on their GitHub page about possibly picking another name. Maybe they should just drop GNU. I don’t think the word IMP has a bad connotation.

### A diverging path

![GIMP 2.10][8]

[GIMP][6] has been around for over twenty years, so any kind of fork is a big task. Currently, [they are planning][9] to start by releasing Glimpse 0.1 in September 2019. This will be a soft fork, meaning that changes will be mainly cosmetic as they migrate to a new identity.

Glimpse 1.0 will be a hard fork where they will be actively changing the codebase and adding to it. They want 1.0 to be a port to GTK3 and have its own documentation. They estimate that this will not take place until GIMP 3 is released in 2020.

Beyond the 1.0, the Glimpse team has plans to forge their own identity. They plan to work on a “front-end UI rewrite”. They are currently discussing [which language][10] they should use for the rewrite. There seems to be a lot of push for D and Rust. They also [hope to][4] “add new functionality that addresses common user complaints” as time goes on.

### Final Thoughts

I have used GIMP a little bit in the past but was never too bothered by the name. To be honest, I didn’t know what it meant for quite a while. Interestingly, when I searched Wikipedia for GIMP, I came across an entry for the [GIMP Project][11], which is a modern dance project in New York that includes disabled people. I guess gimp isn’t considered a derogatory term by everyone.

To me, it seems like a lot of work to go through to change a name. It also seems like the idea of rewriting the UI was tacked on to make the project look more worthwhile. I wonder if they will tweak it to bring back a more classic UI, like [using Ctrl+S to save in GIMP][12]/Glimpse. Let’s wait and watch.

If you are interested in the project, you can follow them on [Twitter][14], check out their [GitHub account][15], or take a look at their [Patreon page][16].

Are you offended by the GIMP name? Do you think it is worthwhile to fork an application, just so you can rename it? Let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][17].

--------------------------------------------------------------------------------

via: https://itsfoss.com/gimp-fork-glimpse/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://getglimpse.app/
[2]: https://www.gimp.org/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/gimp-fork-glimpse.png?resize=800%2C450&ssl=1
[4]: https://getglimpse.app/posts/so-it-begins/
[5]: https://getglimpse.app/about/
[6]: https://itsfoss.com/gimp-2-10-release/
[7]: https://github.com/glimpse-editor/Glimpse/issues/92
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/gimp-screenshot.jpg?resize=800%2C508&ssl=1
[9]: https://getglimpse.app/posts/six-week-checkpoint/
[10]: https://github.com/glimpse-editor/Glimpse/issues/70
[11]: https://en.wikipedia.org/wiki/The_Gimp_Project
[12]: https://itsfoss.com/how-to-solve-gimp-2-8-does-not-save-in-jpeg-or-png-format/
[13]: https://itsfoss.com/wps-office-2016-linux/
[14]: https://twitter.com/glimpse_editor
[15]: https://github.com/glimpse-editor/Glimpse
[16]: https://www.patreon.com/glimpse
[17]: https://reddit.com/r/linuxusersgroup
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -152,7 +152,7 @@ via: https://opensource.com/article/19/10/open-source-name-origins

作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MPLS Migration: How a KISS Transformed the WANs of 4 IT Managers)
[#]: via: (https://www.networkworld.com/article/3447383/mpls-migration-how-a-kiss-transformed-the-wans-of-4-it-managers.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)

MPLS Migration: How a KISS Transformed the WANs of 4 IT Managers
======
WAN transformation is challenging; learning from the experiences of others can help. Here are practical insights from four IT managers who migrated to SD-WAN.

Back in 1960, a Lockheed engineer named Kelly Johnson coined the acronym KISS for “keep it simple stupid.” His wise—and simple—advice was that systems tend to work better when they’re simple than when they’re complex. KISS became an essential U.S. Navy design principle and captures the crux of any WAN transformation initiative.

So many of the challenges of today’s WANs stem from the sheer number of components involved. Each location may require one or more routers, firewalls, WAN optimizers, VPN concentrators, and other devices just to connect safely and effectively with other locations or the cloud. The result: multiple points of failure and a potential uptime and troubleshooting nightmare. Simply understanding the state of the WAN can be difficult with information spread across so many devices and components. Managing all the updates required to protect the network from new and evolving threats can be overwhelming.

Simplifying the enterprise backbone addresses those challenges. According to four IT managers, the key is to create a single global enterprise backbone that connects all users–mobile or fixed–and all locations–cloud or physical. The backbone’s software should include a complete security stack and WAN optimization to protect and enhance the performance of all “edges” everywhere. Such an approach avoids the complexity that comes with all the appliances and other solutions forming today’s enterprise networks.

The four IT managers did not use every aspect of this approach. Some focused on the global performance benefits and cost savings, others on security. But they all gained from the agility and visibility that result. Here are their stories.

**Pharmaceutical Firm Improves China Connectivity, Reduces Costs by Eliminating MPLS**

For [Centrient Pharmaceuticals][1], [SD-WAN][2] looked at first as if it might be just as complex as the company’s tangled web of global MPLS and Internet VPNs. A global leader in sustainable antibiotics, next-generation statins, and antifungals, Centrient had relied on MPLS to connect its Netherlands data center with nine manufacturing and office locations across China, India, the Netherlands, Spain, and Mexico. SAP, VoIP, and other Internet applications had to be backhauled through the data center. Local Internet breakouts secured by firewall hardware provided access to the public Internet, Office 365, and some other SaaS applications. Five smaller global locations had to connect via VPN to India or the Netherlands office.

Over time, MPLS became congested and performance suffered. “It took a long time for users to open documents,” said Mattheiu Cijsouw, Global IT Manager.

Agility suffered as well, as it typically took three to four months to move a location. “One time we needed to move a sales office and the MPLS connection was simply not ready in time,” Cijsouw said.

Cijsouw looked toward SD-WAN to simplify connectivity and cut costs but found that the typical solution of SD-WAN appliances at every location secured by firewalls and Secure Web Gateways (SWGs) was also complex, expensive, and dependent on the fickleness of the Internet middle mile. For him, the simplicity of a global, distributed, SLA-backed network of PoPs interconnected by an enterprise backbone seemed appealing. All it required was a simple, zero-touch appliance at each location to connect to the local PoP.

Cijsouw went with simple. “We migrated in stages, gaining confidence along the way,” he said.

The 6 Mbits/s of MPLS was replaced by 20 Mbits/s per site, burstable to 40 Mbits/s, and 50 Mbits/s burstable to 100 Mbits/s at the data center, all at lower cost than MPLS. Immediately applications became more responsive, China connectivity worked as well or better than with MPLS, and the cloud-based SD-WAN solution gave Cijsouw better visibility into the network.

**Paysafe Achieves Fast Application Access at Every Location**

Similarly, [Paysafe, a global provider of end-to-end payment solutions][3], had been connecting its 21 globally dispersed locations with a combination of MPLS and local Internet access at six locations and VPNs at the other 15. Depending on where staff members were, Internet connectivity could range from 25 Mbits/s to 500 Mbits/s.

“We wanted the same access everywhere,” said Stuart Gall, then PaySafe’s Infrastructure Architect in its Network and Systems Groups. “If I’m in Calgary and go to any other office, the access must be the same—no need to RDP into a machine or VPN into the network.”

The lack of a fully meshed network also made Active Directory operation erratic, with users sometimes locked out of some accounts at one location but not another. Rolling out new locations took two to three months.

As with Centrient, a cloud-based SD-WAN solution using global PoPs and an enterprise backbone seemed a much simpler, less expensive, and more secure approach than the typical SD-WAN services offered by competing providers.

Paysafe has connected 11 sites to its enterprise backbone. “We found latency to be 45 percent less than with the public Internet,” said Gall. “New site deployment takes 30 minutes instead of weeks. Full-meshing problems are gone, as all locations instantly mesh once they connect.”

**Sanne Group Cleans Up WAN and Reduces Latency in the Process**

[Sanne Group, a global provider of alternative asset and corporate administrative services][4], had two data centers in Jersey and Guernsey, UK, connected by two 1Gbits/s fiber links, with seven locations connecting to the data centers via the public Internet. A Malta office connected via an IPsec VPN to Cape Town, which connected to Jersey via MPLS. A business continuity site in Hilgrove and two other UK locations connected to the data centers via dedicated fiber. Access for small office users consisted of a combination of Internet broadband, a small firewall appliance, and Citrix VDI.

Printing PDFs took forever, according to Nathan Trevor, Sanne Group’s IT Director, and the remote desktop architectures suffered from high latency and packet loss. Traffic from the Hong Kong office took 12 to 15 hops to get to the UK.

The company tried MPLS but found it too expensive. Deploying a site took up to 120 days. Trevor started looking at SD-WAN, but it was also complex.

“Even with zero-touch provisioning, configuration was complicated,” he said. “IT professionals new to SD-WAN would definitely need handholding.”

The simplicity of the cloud-based global enterprise backbone solution was obvious. “Just looking at an early screen share I could understand how to connect my sites,” said Trevor.

Sanne connected its locations big and small to the enterprise backbone, eliminating the mess of Internet and MPLS connections. Performance improved immediately, with latency down by 20 percent. All users have to do to connect is log into their computers, and the solution has saved Sanne “an absolute fortune,” according to Trevor.

**Humphrey’s Eliminates MPLS and Embraces Freedom Easily**

As for [Humphrey’s and Partners, an architectural services firm][5], eight regional offices connected to its Dallas headquarters via a hybrid WAN and a ninth connected over the Internet. Three offices ran SD-WAN appliances connected to MPLS and the Internet. Another three connected via MPLS only. Two connected with SD-WAN and the Internet, and an office in Vietnam had to rely on file sharing and transfer to move data across the Internet to Dallas.

With MPLS, Humphrey’s needed three months to deploy at a new site. Even simple network changes took 24 hours, frequently requiring off-hours work. “Often the process involved waking me up in the middle of the night,” said IT Director Paul Burns.

Burns had tried deploying SD-WAN appliances in some locations, but “the configuration pages of the SD-WAN appliance were insane,” said Burns, and it was sometimes difficult to get WAN connections working properly. “Sometimes Dallas could connect to two sites, but they couldn’t connect to each other,” he said.

Burns deployed a global enterprise backbone solution at most locations, including Vietnam. Getting sites up and running took minutes or hours. “We drop-shipped devices to New Orleans, and I flew out to install the stuff. Took less than a day and the performance was great,” said Burns. “We set up Uruguay in less than 10 minutes. [The solution] gave us freedom.”

MPLS and VPNs can be very complex, but so can an SD-WAN replacement if it’s not architected carefully. For many organizations, a simpler approach is to connect and secure all users and locations with a global private backbone and software providing WAN optimization and a complete security stack. Such an approach fulfills the goals of KISS: performance, agility, and low cost.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447383/mpls-migration-how-a-kiss-transformed-the-wans-of-4-it-managers.html

作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/customers/pharmaceutical-leader-replaces-mpls-with-cato-cloud-cutting-costs-while-quadrupling-capacity?utm_source=idg
[2]: https://www.catonetworks.com/sd-wan?utm_source=idg
[3]: https://www.catonetworks.com/customers/paysafe-replaces-global-mpls-network-and-internet-vpn-with-cato-cloud?utm_source=idg
[4]: https://www.catonetworks.com/customers/sanne-group-replaces-internet-and-mpls-simplifying-citrix-access-and-improving-performance-with-cato-cloud?utm_source=idg
[5]: https://www.catonetworks.com/customers/humphreys-replaces-mpls-sd-wan-appliances-and-mobile-vpn-with-cato-cloud?utm_source=idg
76
sources/talk/20191023 Psst- Wanna buy a data center.md
Normal file
@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Psst! Wanna buy a data center?)
[#]: via: (https://www.networkworld.com/article/3447657/psst-wanna-buy-a-data-center.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Psst! Wanna buy a data center?
======
Data centers are being bought and sold at an increasing rate, although since they are often private transactions, solid numbers can be hard to come by.

When investment bank Bear Stearns collapsed in 2008, there was nothing left of value to auction off except its [data centers][1]. JP Morgan bought the company's carcass for just $270 million, but the only thing of value was Bear's NYC headquarters and two data centers.

Since then there have been numerous sales of data centers under better conditions. There are even websites ([Datacenters.com][2], [Five 9s Digital][3]) that list data centers for sale. You can buy an empty building, but in most cases, you get the equipment, too.

There are several reasons why, the most common being that companies want to get out of owning a data center. It's an expensive capex and opex investment, and if the cloud is a good alternative, then that's where they go.

But there are other reasons, too, said Jon Lin, president of the Equinix Americas office. He said enterprises have overbuilt because their initial long-term forecasts fell short, partially driven by increased use of cloud. He also said there is an increase in the number of private equity and real estate investors interested in diversifying into data centers.

But that doesn't mean Equinix takes every data center it is offered. He cited three reasons why Equinix would pass on an offer:

1) It is difficult to repurpose an enterprise data center designed around a single, highly tailored customer into a general-purpose, multi-tenant data center without significant investment to bring it up to the company's standards.

2) Most of these sites were not built to Equinix standards, diminishing their value.

3) Enterprise data centers are usually located where the company HQ is for convenience, and not near the interconnection points or infrastructure locations Equinix would prefer for fiber and power.

Just how much buying and selling is going on is hard to tell. Most of these firms are privately held and thus no disclosure is required. Kelly Morgan, research vice president with 451 Research who tracks the data center market, put the dollar figure for data center sales in 2019 so far at $5.4 billion. That's way down from $19.5 billion just two years ago.

She says that back then there were very big deals, like when Verizon sold its data centers to Equinix in 2017 for $3.6 billion while AT&T sold its data centers to Brookfield Infrastructure Partners, which buys and manages infrastructure assets, for $1.1 billion.

These days, she says, the main buyers are big real estate-oriented pension funds that have a different perspective on why they buy vs. traditional real estate investors. Pension funds like the steady income, even in a recession. Private equity firms were buying data centers to acquire the assets, group them, then sell them for a double-digit return, she said.

Enterprises do look to sell their data centers, but it's a more challenging process. She echoes what Lin said about the problem with specialty data centers. "They tend to be expensive and often in not great locations for multi-tenant situations. They are often at company headquarters or the town where the company is headquartered. So they are hard to sell," she said.

Enterprises want to sell their data centers to get out of data center ownership, since the facilities are often older -- the average corporate data center is 10 to 25 years old -- for the obvious reasons. "When we ask enterprises why they are selling or closing their data centers, they say they are consolidating multiple data centers into one, plus moving half their stuff to the cloud," said Morgan.

There is still a good chunk of companies that build or acquire data centers, either because they are consolidating or just getting rid of older facilities. Some add space because they are moving to a new geography. However, Morgan said they almost never buy. "They lease one from someone else. Enterprise data centers for sale are not bought by other enterprises, they are bought by service providers who will lease it. Enterprises build a new one," she said.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447657/psst-wanna-buy-a-data-center.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.datacenters.com/real-estate/data-centers-for-sale
[3]: https://five9sdigital.com/data-centers/
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3209131/lan-wan/what-sdn-is-and-where-its-going.html
[6]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[7]: https://www.networkworld.com/newsletters/signup.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 ways developers can have a say in what agile looks like)
[#]: via: (https://opensource.com/article/19/10/ways-developers-what-agile)
[#]: author: (Clement Verna https://opensource.com/users/cverna)

4 ways developers can have a say in what agile looks like
======
How agile is implemented—versus imposed—plays a big role in what developers gain from it.

![Person on top of a mountain, arm raise][1]

Agile has become the default way of developing software; sometimes, it seems like every organization is doing (or wants to do) agile. But, instead of trying to change their culture to become agile, many companies try to impose frameworks like scrum onto developers, looking for a magic recipe to increase productivity. This has unfortunately created some bad experiences and led developers to feel like agile is something they would rather avoid. This is a shame because, when it's done correctly, developers and their projects benefit from becoming involved in it. Here are four reasons why.

### Agile, back to the basics

The first way for developers to be unafraid of agile is to go back to its basics and remember what agile is really about. Many people see agile as a synonym for scrum, kanban, story points, or daily stand-ups. While these are important parts of the [agile umbrella][2], this perception takes people away from the original spirit of agile.

Going back to agile's origins means looking at the [Agile Manifesto][3], and what I believe is its most important part, the introduction:

> We are uncovering better ways of developing software by doing it and helping others do it.

I'm a believer in continuous improvement, and this sentence resonates with me. It emphasizes the importance of having a [growth mindset][4] while being a part of an agile team. In fact, I think this outlook is a solution to most of the problems a team may face when adopting agile.

Scrum is not working for your team? Right, let's discover a better way of organizing it. You are working in a distributed team across multiple time zones, and having a daily stand-up is not ideal? No problem, let's find a better way to communicate and share information.

Agile is all about flexibility and being able to adapt to change, so be open-minded and creative to discover better ways of collaborating and developing software.

### Agile metrics as a way to improve, not control

Indeed, agile is about adopting and embracing change. Metrics play an important part in this process, as they help the team determine if it is heading in the right direction. As an agile developer, you want metrics to provide the data your team needs to support its decisions, including whether it should change directions. This process of learning from facts and experience is known as empiricism, and it is well-illustrated by the three pillars of agile.

![Three pillars of agile][5]

Unfortunately, in most of the teams I've worked with, metrics were used by project management as an indicator of the team's performance, which causes people on the team to be afraid of implementing changes or to cut corners to meet expectations.

In order to avoid those outcomes, developers need to be in control of their team's metrics. They need to know exactly what is measured and, most importantly, why it's being measured. Once the team has a good understanding of those factors, it will be easier for them to try new practices and measure their impact.

Rather than using metrics to measure your team's performance, engage with management to find a better way to define what success means to your team.
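
To make this concrete, here is a small, entirely hypothetical sketch of a team-owned metric: computing cycle time directly from ticket open/close dates instead of reading it off a management dashboard. The ticket data and helper names below are invented for illustration, not taken from the article.

```python
from datetime import date

# Hypothetical ticket data: (opened, closed) dates for finished work items.
tickets = [
    (date(2019, 10, 1), date(2019, 10, 4)),
    (date(2019, 10, 2), date(2019, 10, 10)),
    (date(2019, 10, 7), date(2019, 10, 9)),
]

def cycle_times(items):
    """Days from start to finish for each completed item, sorted ascending."""
    return sorted((done - start).days for start, done in items)

def median(values):
    """Middle value; average of the two middle values for even counts."""
    mid = len(values) // 2
    if len(values) % 2:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

times = cycle_times(tickets)
print("cycle times (days):", times)  # → [2, 3, 8]
print("median:", median(times))      # → 3
```

A team that computes a number like this for itself knows exactly what went into it, and can rerun it after trying a new practice to see whether the change actually helped.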
|
||||
|
||||
### Developer power is in the team
|
||||
|
||||
As a member of an agile team, you have more power than you think to help build a team that has a great impact. The [Toyota Production System][6] recognized this long ago. Indeed, Toyota considered that employees, not processes, were the key to building great products.

This means that, even if a team uses the best process possible, if the people on the team are not comfortable working with each other, there is a high chance that the team will fail. As a developer, invest time to build trust inside your team and to understand what motivates its members.

If you are curious about how to do this, I recommend reading Alexis Monville's book [_Changing Your Team from the Inside_][7].

### Making developer work visible

A big part of any agile methodology is to make information and work visible; this is often referred to as an [information radiator][8]. In his book [_Team of Teams_][9], Gen. Stanley McChrystal explains how the US Army had to transform itself from an organization that was optimized for productivity to one optimized to adapt. What we learn from his book is that the world in which we live has changed. The problem of becoming more productive was mostly solved at the end of the 20th century, and the challenge that companies now face is how to adapt to a world in constant evolution.

![A lot of sticky notes on a whiteboard][10]

I particularly like Gen. McChrystal's explanation of how he created a powerful information radiator. When he took charge of the [Joint Special Operations Command][11], Gen. McChrystal began holding a daily call with his high commanders to discuss and plan future operations. He soon realized that this was not optimal and instead started running 90-minute briefings every morning for 7,000 people around the world. This allowed every task force to acquire the knowledge necessary to accomplish their missions and made them aware of other task forces' assignments and situations. Gen. McChrystal refers to this as "shared consciousness."

So, as a developer, how can you help build a shared consciousness in your team? Start by simply sharing what you are working on and/or plan to work on and get curious about what your colleagues are doing.

* * *

If you're using agile in your development organization, what do you think are its main benefits? And if you aren't using agile, what barriers are holding your team back? Please share your thoughts in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/ways-developers-what-agile

作者:[Clement Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cverna
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/developer_mountain_cloud_top_strong_win.jpg?itok=axK3EX-q (Person on top of a mountain, arm raise)
[2]: https://confluence.huit.harvard.edu/display/WGAgile/2014/07/01/The+Agile+Umbrella
[3]: https://agilemanifesto.org/
[4]: https://www.edglossary.org/growth-mindset/
[5]: https://opensource.com/sites/default/files/uploads/3pillarsofagile.png (Three pillars of agile)
[6]: https://en.wikipedia.org/wiki/Toyota_Production_System#Respect_for_people
[7]: https://leanpub.com/changing-your-team-from-the-inside#packages
[8]: https://www.agilealliance.org/glossary/information-radiators/
[9]: https://www.mcchrystalgroup.com/insights-2/teamofteams/
[10]: https://opensource.com/sites/default/files/uploads/stickynotes.jpg (A lot of sticky notes on a whiteboard)
[11]: https://en.wikipedia.org/wiki/Joint_Special_Operations_Command
@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gartner crystal ball: Looking beyond 2020 at the top IT-changing technologies)
[#]: via: (https://www.networkworld.com/article/3447759/gartner-looks-beyond-2020-to-foretell-the-top-it-changing-technologies.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Gartner crystal ball: Looking beyond 2020 at the top IT-changing technologies
======
Gartner’s top strategic predictions for 2020 and beyond are heavily weighted toward the human side of technology

[Thinkstock][1]
ORLANDO – Forecasting long-range IT technology trends is a little like herding cats – things can get a little crazy.

But Gartner analysts have specialized in looking forward, boasting an 80 percent accuracy rate over the years, Daryl Plummer, distinguished vice president and Gartner Fellow, told the IT crowd at this year’s [IT Symposium/XPO][2]. Some of those successful predictions have included the rise of automation, robotics, AI technology and other ongoing trends.

[Now see how AI can boost data-center availability and efficiency][3]

Like some of the [other predictions][4] Gartner has made at this event, this year’s package of predictions for 2020 and beyond is heavily weighted toward the human side of technology rather than technology itself.

**[ [Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][5] ]**

“Beyond offering insights into some of the most critical areas of technology evolution, this year’s predictions help us move beyond thinking about mere notions of technology adoption and draw us more deeply into issues surrounding what it means to be human in the digital world,” Plummer said.

The list this year goes like this:

**By 2023, the number of people with disabilities employed will triple due to AI and emerging technologies, reducing barriers to access.**

Technology is going to make it easier for people with disabilities to connect to the business world. “People with disabilities constitute an untapped pool of critically skilled talent,” Plummer said.

“[Artificial intelligence (AI)][6], augmented reality (AR), virtual reality (VR) and other [emerging technologies][7] have made work more accessible for employees with disabilities. For example, select restaurants are starting to pilot AI robotics technology that enables paralyzed employees to control robotic waiters remotely. Organizations that actively employ people with disabilities will not only cultivate goodwill from their communities, but also see 89 percent higher retention rates, a 72 percent increase in employee productivity, and a 29 percent increase in profitability,” Plummer said.

**By 2024, AI identification of emotions will influence more than half of the online advertisements you see.**

Computer vision, which allows AI to identify and interpret physical environments, is one of the key technologies used for emotion recognition and has been ranked by Gartner as one of the most important technologies in the next three to five years. [Artificial emotional intelligence (AEI)][8] is the next frontier for AI development, Plummer said. Twenty-eight percent of marketers ranked AI and machine learning (ML) among the top three technologies that will drive future marketing impact, and 87 percent of marketing organizations are currently pursuing some level of personalization, according to Gartner. By 2022, 10 percent of personal devices will have emotion AI capabilities, Gartner predicted.

“AI makes it possible for both digital and physical experiences to become hyper personalized, beyond clicks and browsing history but actually on how customers _feel_ in a specific purchasing moment. With the promise to measure and engage consumers based on something once thought to be intangible, this area of ‘empathetic marketing’ holds tremendous value for both brands and consumers when used within the proper [privacy][9] boundaries,” said Plummer.

**Through 2023, 30% of IT organizations will extend BYOD policies with “bring your own enhancement” (BYOE) to address augmented humans in the workforce.**

The concept of augmented workers has gained traction in social media conversations in 2019 due to advancements in wearable technology. Wearables are driving workplace productivity and safety across most verticals, including automotive, oil and gas, retail and healthcare.

Wearables are only one example of physical augmentations available today, but humans will look to additional physical augmentations that will enhance their personal lives and help do their jobs. Gartner defines human augmentation as creating cognitive and physical improvements as an integral part of the human body. An example is using active control systems to create limb prosthetics with characteristics that can exceed the highest natural human performance.

“IT leaders certainly see these technologies as impactful, but it is the consumers’ desire to physically enhance themselves that will drive the adoption of these technologies first,” Plummer said. “Enterprises need to balance the control of these devices in their enterprises while also enabling users to use them for the benefit of the organization.”

**By 2025, 50% of people with a smartphone but without a bank account will use a mobile-accessible cryptocurrency account.**

Currently 30 percent of people have no bank account and 71 percent will subscribe to mobile services by 2025. Major online marketplaces and social media platforms will start supporting cryptocurrency payments by the end of next year. By 2022, Facebook, Uber, Airbnb, eBay, PayPal and other digital e-commerce companies will support over 750 million customers, Gartner predicts.

At least half the globe’s citizens who do not use a bank account will instead use these new mobile-enabled cryptocurrency account services offered by global digital platforms by 2025, Gartner said.
**By 2023, a self-regulating association for oversight of AI and machine-learning designers will be established in at least four of the G7 countries.**

By 2021, multiple incidents involving non-trivial AI-produced harm to hundreds or thousands of individuals can be expected, Gartner said. Public demand for protection from the consequences of malfunctioning algorithms will in turn produce pressure to assign legal liability for the harmful consequences of algorithm failure. The immediate impact of regulation of process will be to increase cycle times for AI and ML algorithm development and deployment. Enterprises can also expect to spend more for training and certification for practitioners and documentation of processes, as well as higher salaries for certified personnel.

“Regulation of products as complex as AI and ML algorithms is no easy task. Consequences of algorithm failures at scale that occur within major societal functions are becoming more visible. For instance, AI-related failures in autonomous vehicles and aircraft have already killed people and attracted widespread attention in recent months,” said Plummer.

**By 2023, 40% of professional workers will orchestrate their business application experiences and capabilities like they do their music streaming experience.**

The human desire to have a work environment that is similar to their personal environment continues to rise — one where they can assemble their own applications to meet job and personal requirements in a [self-service fashion][10]. The consumerization of technology and introduction of new applications have elevated the expectations of employees as to what is possible from their business applications. Gartner says through 2020, the top 10 enterprise-application vendors will expose over 90 percent of their application capabilities through APIs.

“Applications used to define our jobs. Nowadays, we are seeing organizations designing application experiences around the employee. For example, mobile and cloud technologies are freeing many workers from coming into an office and instead supporting a work-anywhere environment, outpacing traditional application business models,” Plummer said. “Similar to how humans customize their streaming experience, they can increasingly customize and engage with new application experiences.”

**By 2023, up to 30 percent of world news and video content will be authenticated as real by blockchain, countering deep fake technology.**

Fake news represents deliberate disinformation, such as propaganda that is presented to viewers as real news. Its rapid proliferation in recent years can be attributed to bot-controlled accounts on social media, attracting more viewers than authentic news and manipulating human intake of information, Plummer said. Fake content, exacerbated by AI, can pose an existential threat to an organization.

By 2021, at least 10 major news organizations will use [blockchain][11] to track and prove the authenticity of their published content to readers and consumers. Likewise, governments, technology giants and other entities are fighting back through industry groups and proposed regulations. “The IT organization must work with content-production teams to establish and track the origin of enterprise-generated content using blockchain technology,” Plummer said.
**On average, through 2021, digital transformation initiatives will take large traditional enterprises twice as long and cost twice as much as anticipated.**

Business leaders’ expectations for revenue growth are unlikely to be realized from digital optimization strategies, due to the cost of technology modernization and the unanticipated costs of simplifying operational interdependencies. Such operational complexity also impedes the pace of change along with the degree of innovation and adaptability required to operate as a digital business.

“In most traditional organizations, the gap between digital ambition and reality is large,” Plummer said. “We expect CIOs’ budget allocation for IT modernization to grow 7 percent year-over-year through 2021 to try to close that gap.”
**By 2023, individual activities will be tracked digitally by an “Internet of Behavior” to influence benefit and service eligibility for 40% of people worldwide.**

Through facial recognition, location tracking and big data, organizations are starting to monitor individual behavior and link that behavior to other digital actions, like buying a train ticket. The Internet of Things (IoT) – where physical things are directed to do a certain thing based on a set of observed operating parameters relative to a desired set of operating parameters — is now being extended to people, known as the Internet of Behavior (IoB). Through 2020, watch for examples of usage-based and behaviorally-based business models to expand into health insurance or financial services, Plummer said.
“With IoB, value judgements are applied to behavioral events to create a desired state of behavior,” Plummer said. “What level of tracking will we accept? Will it be hard to get life insurance if your Fitbit tracker doesn’t see 10,000 steps a day?”

“Over the long term, it is likely that almost everyone living in a modern society will be exposed to some form of IoB that melds with cultural and legal norms of our existing predigital societies,” Plummer said.

**By 2024, the World Health Organization will identify online shopping as an addictive disorder, as millions abuse digital commerce and encounter financial stress.**

Consumer spending via digital commerce platforms will continue to grow over 10 percent year-over-year through 2022. In addition, watch for an increased number of digital commerce orders predicted by, and initiated by, AI.

The ease of online shopping will cause financial stress for millions of people, as online retailers increasingly use AI and personalization to effectively target consumers and prompt them to spend income that they do not have. The resulting debt and personal bankruptcies will cause depression and other health concerns caused by stress, which is capturing the attention of the WHO.

“The side effects of technology that promote addictive behavior are not exclusive to consumers. CIOs must also consider the possibility of lost productivity among employees who put work aside for online shopping and other digital distractions. In addition, regulations in support of responsible online retail practices might force companies to provide warnings to prospective customers who are ready to make online purchases, similar to casinos or cigarette companies,” Plummer said.

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447759/gartner-looks-beyond-2020-to-foretell-the-top-it-changing-technologies.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: http://thinkstockphotos.com
[2]: https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html
[3]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[4]: https://www.networkworld.com/article/3447401/gartner-top-10-strategic-technology-trends-for-2020.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.gartner.com/en/newsroom/press-releases/2019-07-15-gartner-survey-reveals-leading-organizations-expect-t
[7]: https://www.gartner.com/en/newsroom/press-releases/2018-08-20-gartner-identifies-five-emerging-technology-trends-that-will-blur-the-lines-between-human-and-machine
[8]: https://www.gartner.com/smarterwithgartner/13-surprising-uses-for-emotion-ai-technology/
[9]: https://www.gartner.com/smarterwithgartner/how-to-balance-personalization-with-data-privacy/
[10]: https://www.gartner.com/en/newsroom/press-releases/2019-05-28-gartner-says-the-future-of-self-service-is-customer-l
[11]: https://www.gartner.com/smarterwithgartner/the-cios-guide-to-blockchain/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world