diff --git a/published/20220105 Create bookmarks for your PDF with pdftk.md b/published/20220105 Create bookmarks for your PDF with pdftk.md new file mode 100644 index 0000000000..899229da57 --- /dev/null +++ b/published/20220105 Create bookmarks for your PDF with pdftk.md @@ -0,0 +1,156 @@ +[#]: subject: "Create bookmarks for your PDF with pdftk" +[#]: via: "https://opensource.com/article/22/1/pdf-metadata-pdftk" +[#]: author: "Seth Kenlon https://opensource.com/users/seth" +[#]: collector: "lujun9972" +[#]: translator: "toknow-gh" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-15973-1.html" + +使用 pdftk 为 PDF 文档创建书签 +====== + +> 充分利用现有的技术,提供书签以帮助用户。 + +![][0] + +在 [介绍 pdftk-java][2] 中, 我展示了如何在脚本中使用 `pdftk-java` 来快速修改 PDF 文件。 + +但是,`pdftk-java` 最有用的场景是处理那种动辄几百页的没有目录的大 PDF 文件。这里所谓的目录不是指文档前面供打印的目录,而是指显示在 PDF 阅读器侧边栏里的目录,它在 PDF 格式中的正式叫法是“书签bookmarks”。 + +![Screenshot of a sidebar table of contents next to a PDF][3] + +如果没有书签,就只能通过上下滚动或全局搜索文本来定位想要的章节,这非常麻烦。 + +PDF 文件的另一个恼人的小问题是缺乏元数据,比如标题和作者。如果你打开过一个标题栏上显示类似 “Microsoft Word - 04_Classics_Revisited.docx” 的 PDF 文件,你就能体会那种感觉了。 + +`pdftk-java` 让我能够创建自己的书签,我再也不面对这些问题了。 + +### 在 Linux 上安装 pdftk-java + +正如 `pdftk-java` 的名称所示的,它是用 Java 编写的。它能够在所有主流操作系统上运行,只要你安装了 Java。 + +Linux 和 macOS 用户可以从 [AdoptOpenJDK.net][5] 安装 Java(LCTT 译注:原文为 Linux,应为笔误)。 + +Windows 用户可以安装 [Red Hat's Windows build of OpenJDK][6]。 + +在 Linux 上安装 pdftk-java: + + 1. 从 Gitlab 仓库下载 [pdftk-all.jar release][7],保存至 `~/.local/bin/` 或 [其它路径][8] 下. + 2. 用文本编辑器打开 `~/.bashrc`,添加 `alias pdftk='java -jar $HOME/.local/bin/pdftk-all.jar'` + 3. 运行 `source ~/.bashrc` 使新的 Bash 设置生效。 + +### 数据转储 + +修改元数据的第一步是抽取 PDF 当前的数据文件。 + +现在的数据文件可能并没包含多少内容,但这也是一个不错的开端。 + +``` +$ pdftk mybigfile.pdf \ + data_dump \ + output bookmarks.txt +``` +生成的 `bookmarks.txt` 文件中包含了输入 PDF 文件 `mybigfile.pdf` 的所有元数据和一大堆无用数据。 + +### 编辑元数据 + +用文本编辑器(比如 [Atom][9] 或 [Gedit][10])打开 `bookmarks.txt` 以编辑 PDF 元数据。 + +元数据的格式和数据项直观易懂: + +``` +InfoBegin +InfoKey: Creator +InfoValue: Word +InfoBegin +InfoKey: ModDate +InfoValue: D:20151221203353Z00'00' +InfoBegin +InfoKey: CreationDate +InfoValue: D:20151221203353Z00'00' +InfoBegin +InfoKey: Producer +InfoValue: Mac OS X 10.10.4 Quartz PDFContext +InfoBegin +InfoKey: Title +InfoValue: Microsoft Word - 04_UA_Classics_Revisited.docx +PdfID0: f049e63eaf3b4061ddad16b455ca780f +PdfID1: f049e63eaf3b4061ddad16b455ca780f +NumberOfPages: 42 +PageMediaBegin +PageMediaNumber: 1 +PageMediaRotation: 0 +PageMediaRect: 0 0 612 792 +PageMediaDimensions: 612 792 +[...] 
+``` + +你可以将 `InfoValue` 的值修改为对当前 PDF 有意义的内容。比如可以将 `Creator` 字段从 `Word` 修改为实际的作者或出版社名称。比起使用导出程序自动生成的标题,使用书籍的实际标题会更好。 + +你也可以做一些清理工作。在 `NumberOfPages` 之后的行都不是必需的,可以删除这些行的内容。 + +### 添加书签 + +PDF 书签的格式如下: + +``` +BookmarkBegin +BookmarkTitle: My first bookmark +BookmarkLevel: 1 +BookmarkPageNumber: 2 +``` + + * `BookmarkBegin` 表示这是一个书签。 + * `BookmarkTitle` 书签在 PDF 阅读器中显示的文本。 + * `BookmarkLevel` 书签层级。如果书签层级为 2,它将出现在上一个书签的小三角下。如果设置为 3,它会显示在上一个 2 级书签的小三角下。这让你能为章以及其中的节设置书签。 + * `BookmarkPageNumber` 点击书签时转到的页码。 + +为你需要的章节创建书签,然后保存文件。 + +### 更新书签信息 + +现在已经准备好了元数据和书签,你可以将它们导入到 PDF 文件中。实际上是将这些信息导入到一个新的 PDF 文件中,它的内容与原 PDF 文件相同: + +``` +$ pdftk mybigfile.pdf \ + update_info bookmarks.txt \ + output mynewfile.pdf +``` + +生成的 `mynewfile.pdf` 包含了你设置的全部元数据和书签。 + +### 体现专业性 + +PDF 文件中是否包含定制化的元数据和书签可能并不会影响销售。 + +但是,关注元数据可以向用户表明你重视质量保证。增加书签可以为用户提供便利,同时亦是充分利用现有技术。 + +使用 `pdftk-java` 来简化这个过程,用户会感激不尽。 + +*(题图:MJ/f8869a66-562d-4ee4-9f2d-1949944d6a9c)* + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/22/1/pdf-metadata-pdftk + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[toknow-gh](https://github.com/toknow-gh) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating) +[2]: https://opensource.com/article/21/12/edit-pdf-linux-pdftk +[3]: https://opensource.com/sites/default/files/uploads/pdtfk_update.jpeg (table of contents) +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://adoptopenjdk.net/releases.html +[6]: https://developers.redhat.com/products/openjdk/download +[7]: https://gitlab.com/pdftk-java/pdftk/-/jobs/1527259628/artifacts/raw/build/libs/pdftk-all.jar +[8]: https://opensource.com/article/17/6/set-path-linux +[9]: https://opensource.com/article/20/12/atom +[10]: https://opensource.com/article/20/12/gedit +[0]: https://img.linux.net.cn/data/attachment/album/202307/06/185044ioz6nw1jqkqnhx66.jpg \ No newline at end of file diff --git a/published/20221205.4 ⭐️⭐️ True Lightweight Notepad for Ubuntu and Other Linux.md b/published/20221205.4 ⭐️⭐️ True Lightweight Notepad for Ubuntu and Other Linux.md new file mode 100644 index 0000000000..0bff3de2a8 --- /dev/null +++ b/published/20221205.4 ⭐️⭐️ True Lightweight Notepad for Ubuntu and Other Linux.md @@ -0,0 +1,255 @@ +[#]: subject: "True Lightweight Notepad for Ubuntu and Other Linux" +[#]: via: "https://www.debugpoint.com/lightweight-notepad-linux/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: "ChatGPT" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-15957-1.html" + +真正轻量级的 Linux 记事本 +====== + +> 轻量级、资源友好的基于 GUI 的基本记事本列表,适用于 Ubuntu 和其他 Linux。 + +![][1] + +Linux 是一个因其速度、稳定性和灵活性而广受欢迎的操作系统。Linux 的一个关键特点是能够根据你的需求自定义和配置系统。这包括选择适合你系统的正确应用程序和工具。本教程将介绍一些适用于 Linux 的最佳轻量级记事本。我们将查看它们的特点、优缺点,并提供选择适合你需求的正确记事本的建议。无论你是学生、程序员,还是喜欢做笔记的普通用户,一款优秀的记事本对于任何 Linux 用户来说都是必不可少的工具。 + +### Ubuntu 和其他发行版的最佳轻量级记事本 + +#### 1、Mousepad + +该列表中的第一个是流行的文本编辑器 - Mousepad。它是 Xfce 桌面环境的默认文本编辑器,使用 GTK 开发。它简单轻便,但与本列表中的 Leafpad 相比,它具有更多的设置和功能。 + +你可以将其视为具有一些额外功能的 Leafpad。 + +其关键特点包括深浅色主题、标签式编辑、字体和插件功能。你可以在安装后和使用过程中发现更多类似的设置。 + +下面是其外观示例: + +![在 Ubuntu 上运行的 mousepad][3] + +由于 
Mousepad 在所有主要的 Linux 发行版仓库中都可用,所以安装非常简单。 + +对于 Ubuntu、Linux Mint 和相关发行版,使用以下命令进行安装。 + +``` +sudo apt install mousepad +``` + +对于 Fedora Linux,请使用以下命令: + +``` +sudo dnf install mousepad +``` + +而 Arch Linux 用户可以使用以下命令进行安装: + +``` +sudo pacman -S mousepad +``` + +#### 2、Featherpad + +[FeatherPad][4] 是一个独立于桌面环境的基于 Qt 的轻量级文本编辑器,适用于 Ubuntu 和其他 Linux 发行版。它的一些关键特性包括拖放支持、分离和附加标签、虚拟桌面感知,以及一个可选的固定搜索栏,并有每个标签的入口。 + +此外,它可以在搜索时立即突出显示找到的匹配项,提供了一个停靠窗口用于文本替换,并支持显示行号和跳转到特定行。 + +此外,Featherpad 可以检测文本编码,为常见的编程语言提供语法高亮,并支持会话管理。它还具有拼写检查(使用 Hunspell)、文本缩放、打印和自动保存等功能。 + +![在 Ubuntu 上运行的 Featherpad][5] + +安装 Featherpad 很简单。 + +对于 Ubuntu 和相关的发行版,你可以使用终端中的以下命令进行安装: + +``` +sudo apt install featherpad +``` + +对于 Fedora Linux,请使用以下命令进行安装: + +``` +sudo dnf install featherpad +``` + +Arch Linux 用户可以使用以下命令进行安装: + +``` +sudo pacman -S featherpad +``` + +#### 3、Leafpad + +[Leafpad][6] 是一个基于 GTK 的简单的轻量级 Linux 文本编辑器。它旨在快速、易于使用,并且需要最少的资源。Leafpad 具有干净直观的用户界面,提供了你所需的所有基本文本编辑工具,如剪切、复制和粘贴,并支持撤消和重做。此外,它还支持多种编程语言的语法高亮,使其成为程序员的有用工具。 + +由于其简单和高效性,Leafpad 是 Linux 用户的热门选择。它可能是 Windows 记事本应用程序的完美替代品。它具有所有基本功能,包括自动换行、行号、字体选择和自动缩进。 + +下面是它的外观示例。这是列表中最简单和轻量级的记事本。 + +![leafpad - 在 Ubuntu 上运行的简易记事本][7] + +但是,在 Ubuntu 上安装 Leafpad 有些棘手。不幸的是,它在 Universe 仓库中不可用,只能作为 Snap 软件包而不是 Flatpak 软件包使用。 + +但是,你可以从 Debian 仓库中获取并在 Ubuntu 中安装它。 + +从 Debian 仓库下载 deb 文件,并使用以下命令进行安装。 + +``` +wget http://ftp.us.debian.org/debian/pool/main/l/leafpad/leafpad_0.8.18.1-5_amd64.deb +``` + +``` +sudo dpkg -i leafpad_0.8.18.1-5_amd64.deb +``` + +Fedora 用户可以使用以下命令进行安装: + +``` +sudo dnf install leafpad +``` + +Arch Linux 用户可以使用以下命令进行安装: + +``` +sudo pacman -S leafpad +``` + +#### 4、Beaver 编辑器 + +[Beaver][8] 编辑器是一个轻量级、启动快速的文本编辑器,具有极少的依赖性。它是基于 GTK+2 库构建的,不需要额外安装的库,非常适合在较旧的计算机和小型 Linux 发行版上使用。Beaver 的核心功能包括基本功能和语法高亮,还可以通过插件添加额外功能。其界面简洁高效,并包含高质量的 Tango 美术作品。 + +![Beaver 编辑器在 Ubuntu 上运行][9] + +这是一个有些老旧的应用程序,但它仍然正常工作。目前,它仅适用于 Ubuntu 和相关的发行版。你可以下载预编译的 deb 文件,并使用以下命令进行安装: + +``` +wget https://www.bristolwatch.com/debian/packages/beaver_amd64.deb +``` + +``` +sudo dpkg -i beaver_amd64.deb +``` + +#### 5、Gedit + +[Gedit 文本编辑器][10] 是 GNOME 桌面环境的默认文本编辑器,被数百万用户在诸如 Ubuntu 和 Fedora 等各种 Linux 发行版上使用。它是核心 GNOME 应用程序的一部分,旨在成为一个轻量级的通用文本编辑器。然而,通过其设置和已安装的插件,Gedit 也包含许多增强生产力的功能,使得它能够与其他流行的文本编辑器竞争。 + +尽管如此,它最近已经从 GNOME 桌面的默认编辑器标签中“降级”。基于现代 GTK4 的 [GNOME 文本编辑器][11] 已取而代之。 + +但它仍然是最好的编辑器之一,你可以通过插件和 [各种技巧][12] 将其从简单的编辑器扩展为更高级的编辑器。 + +![Gedit 文本编辑器][13] + +要安装它,请使用以下命令(针对 Ubuntu 和相关发行版): + +``` +sudo apt install gedit +``` + +对于 Fedora Linux 用户,请使用以下命令进行安装。 + +``` +sudo dnf install gedit +``` + +最后,Arch Linux 用户可以使用以下命令进行安装: + +``` +sudo pacman -S gedit +``` + +#### 6. 
Xed + +如果你使用 Linux Mint,那么你可能听说过 Xed。Xed 是 Linux Mint 的默认文本编辑器,它非常轻量级。作为一个 “Xapp” 应用程序,它遵循 Linux Mint 的设计和功能指南,提供简单的用户界面、强大的菜单、工具栏和功能。 + +Xed 的一些主要特点包括: + +- 传统的用户界面,保持简洁易用 +- 强大的工具栏和上下文菜单选项,增强功能的能力 +- 语法高亮显示 +- 配置选项,如标签、编码等 +- 支持 UTF-8 文本 +- 编辑远程服务器文件 +- 广泛的插件支持,可根据需要添加更多高级功能 +- 支持概览地图 +- 可缩放的编辑窗口 + +Xed 是最好的编辑器之一,可作为 Linux 系统上轻量级记事本的替代品。 + +![Xed 编辑器来自 Linux Mint 团队][14] + +如果你使用的是 Linux Mint,它应该是默认安装的。然而,在 Ubuntu 中安装它需要运行一系列命令。打开终端并运行以下命令来在 Ubuntu 中安装 Xed。 + +``` +wget http://packages.linuxmint.com/pool/import/i/inxi/inxi_3.0.32-1-1_all.deb +wget http://packages.linuxmint.com/pool/backport/x/xapp/xapps-common_2.4.2+vera_all.deb +wget http://packages.linuxmint.com/pool/backport/x/xapp/libxapp1_2.4.2+vera_amd64.deb +wget http://packages.linuxmint.com/pool/backport/x/xed/xed-common_3.2.8+vera_all.deb +wget http://packages.linuxmint.com/pool/backport/x/xed/xed_3.2.8+vera_amd64.deb +``` + +``` +sudo dpkg -i inxi_3.0.32-1-1_all.deb +sudo dpkg -i xapps-common_2.4.2+vera_all.deb +sudo dpkg -i libxapp1_2.4.2+vera_amd64.deb +sudo dpkg -i xed-common_3.2.8+vera_all.deb +sudo dpkg -i xed_3.2.8+vera_amd64.deb +``` + +有关更多详情,请访问 [Xed 的 GitHub 存储库][15]。 + +### 内存和资源比较 + +由于我们在讨论性能,这是比较的关键,我们列出了上述所有应用程序在最新的 Ubuntu 安装中消耗的内存。 + +正如你所看到的,Xfce 的 Mousepad 最轻量级,而 Gedit 最占资源。 + +| 应用程序名称 | Ubuntu 闲置时消耗的内存 | +| :- | :- | +| Mousepad | 303 KB | +| Featherpad | 1.7 MB | +| Leafpad | 7.7 MB | +| Beaver pad | 11.1 MB | +| Gedit | 30.2 MB | +| Xed | 32.1 MB | + +### 总结 + +总之,在 Linux 上选择一个轻量级的记事本对于各种用途至关重要。无论你是需要记笔记、编写代码还是编辑文本,轻量级记事本可以让你的工作更快、更轻松、更高效。Linux 操作系统提供了各种记事本应用程序,每个应用程序都具有其独特的功能和能力。 + +这份轻量级 Linux 记事本的前几名(应用程序)列表探讨了一些应用程序,包括 Leafpad、Gedit、Mousepad 和其他应用程序。 + +无论你选择哪个记事本,你可以确信它将提供你在 Linux 系统上完成工作所需的功能。 + +你最喜欢哪个?在评论框里告诉我吧。 + +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/lightweight-notepad-linux/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:ChatGPT +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed +[1]: https://www.debugpoint.com/wp-content/uploads/2022/12/notepad-1head.jpg +[2]: https://www.debugpoint.com/notepad-replacement-ubuntu/ +[3]: https://www.debugpoint.com/wp-content/uploads/2022/12/mousepad-running-in-Ubuntu.jpg +[4]: https://github.com/tsujan/FeatherPad +[5]: https://www.debugpoint.com/wp-content/uploads/2022/12/featherpad-running-in-Ubuntu.jpg +[6]: http://tarot.freeshell.org/leafpad/v +[7]: https://www.debugpoint.com/wp-content/uploads/2022/12/leafpad-a-simple-notepad-running-in-Ubuntu.jpg +[8]: https://sourceforge.net/projects/beaver-editor/ +[9]: https://www.debugpoint.com/wp-content/uploads/2022/12/Beaver-editor-running-in-Ubuntu.jpg +[10]: https://wiki.gnome.org/Apps/Gedit +[11]: https://www.debugpoint.com/gnome-text-editor/ +[12]: https://www.debugpoint.com/gedit-features/ +[13]: https://www.debugpoint.com/wp-content/uploads/2022/12/gedit-text-editor.jpg +[14]: https://www.debugpoint.com/wp-content/uploads/2022/12/Xed-editor-from-Linux-Mint-team.jpg +[15]: https://github.com/linuxmint/xed \ No newline at end of file diff --git a/published/20200628 Roy Fielding-s Misappropriated REST Dissertation.md b/published/202303/20200628 Roy Fielding-s Misappropriated REST Dissertation.md similarity index 100% rename from published/20200628 Roy Fielding-s Misappropriated REST Dissertation.md rename to published/202303/20200628 Roy Fielding-s Misappropriated REST Dissertation.md 
diff --git a/published/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md b/published/202303/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md similarity index 100% rename from published/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md rename to published/202303/20210128 Open Source Security Foundation (OpenSSF)- Reflection and Future.md diff --git a/published/202303/20210214 Why programmers love Linux packaging.md b/published/202303/20210214 Why programmers love Linux packaging.md new file mode 100644 index 0000000000..62f668716e --- /dev/null +++ b/published/202303/20210214 Why programmers love Linux packaging.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-15674-1.html) +[#]: subject: (Why programmers love Linux packaging) +[#]: via: (https://opensource.com/article/21/2/linux-packaging) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +为什么程序员喜欢为 Linux 打包 +====== + +> 程序员可以通过 Flatpak 轻松、稳定地发布他们的软件,让他们专注于他们的激情工作:编程。 + +![][0] + +如今,人们比以往任何时候都喜爱 Linux。在这个系列中,我将分享使用 Linux 的 21 个不同理由。今天,我将谈一谈是什么让 Linux 的打包成为程序员的理想选择。 + +程序员喜欢编程。这可能看起来是一个显而易见的说法,但重要的是要明白,开发软件所涉及的不仅仅是编写代码。它包括编译、文档、源代码管理、安装脚本、配置默认值、支持文件、交付格式等等。从一个空白的屏幕到一个可交付的软件安装程序,需要的不仅仅是编程,但大多数程序员宁愿编程也不愿打包。 + +### 什么是打包? + +当食物被送到商店购买时,它是被包装好的。当直接从农民或从环保的散装或桶装商店购买时,包装是你所带的任何容器。当从杂货店购买时,包装可能是一个纸板箱、塑料袋、一个铁罐等等。 + +当软件被提供给广大计算机用户时,它也必须被打包起来。像食品一样,软件也有几种打包方式。开源软件可以不进行打包,因为用户在获得原始代码后,可以自己编译和打包它。然而,打包也有好处,所以应用程序通常以某种特定于用户平台的格式交付。而这正是问题的开始,因为软件包的格式并不只有一种。 + +对于用户来说,软件包使安装软件变得容易,因为所有的工作都由系统的安装程序完成。软件被从软件包中提取出来,并分发到操作系统中的适当位置。几乎没有任何出错的机会。 + +然而,对于软件开发者来说,打包意味着你必须学会如何创建一个包 —— 而且不仅仅是一个包,而是为你希望你的软件可以安装到的每一个操作系统创建一个独特的包。更加复杂的是,每个操作系统都有多种打包格式和选项,有时甚至是不同的编程语言。 + +### 为 Linux 打包 + +传统上,Linux 的打包方式似乎是非常多的。从 Fedora 衍生出来的 Linux 发行版,如 Red Hat 和 CentOS,默认使用 .rpm 包。Debian 和 Ubuntu(以及类似的)默认使用 .deb 包。其他发行版可能使用其中之一,或者两者都不使用,选择自定义的格式。当被问及时,许多 Linux 用户说,理想情况下,程序员根本不会为 Linux 打包他们的软件,而是依靠每个发行版的软件包维护者来创建软件包。所有安装在 Linux 系统上的软件都应该来自该发行版的官方软件库。然而,目前还不清楚如何让你的软件可靠地被一个发行版打包和包含,更不用说所有的发行版了。 + +### Linux 的 Flatpak + +Flatpak 打包系统是为了统一和去中心化 Linux 作为开发者的交付目标而推出的。通过 Flatpak,无论是开发者还是其他人(Linux 社区的成员、不同的开发者、Flatpak 团队成员或其他任何人)都可以自由地打包软件。然后他们可以将软件包提交给 Flathub,或者选择自我托管软件包,并将其提供给几乎任何 Linux 发行版。Flatpak 系统适用于所有 Linux 发行版,所以针对一个发行版就等于针对所有发行版。 + +### Flatpak 技术如何工作 + +Flatpak 具有普遍吸引力的秘密是一个标准基础。Flatpak 系统允许开发者引用一套通用的软件开发者工具包(SDK)模块。这些模块由 Flatpak 系统的维护者进行打包和管理。当你安装 Flatpak 时,SDK 会根据需要被拉入,以确保与你的系统兼容。任何特定的 SDK 只需要一次,因为它所包含的库可以在任何 Flatpak 中共享。 + +如果开发者需要一个尚未包含在现有 SDK 中的库,开发者可以在 Flatpak 中添加该库。 + +结果不言自明。用户可以从一个叫做 [Flathub][2] 的中央仓库在任何 Linux 发行版上安装数百个软件包。 + +### 开发者如何使用 Flatpak + +Flatpak 被设计成可重复的,所以构建过程很容易被集成到 CI/CD 工作流程中。Flatpak 是在一个 [YAML][3] 或 JSON 清单文件中定义的。你可以按照我的 [介绍性文章][4] 创建你的第一个 Flatpak,你也可以在 [docs.flatpak.org][5] 阅读完整的文档。 + +### Linux 让它变得简单 + +在 Linux 上创建软件很容易,为 Linux 打包也很简单,而且可以自动化。如果你是一个程序员,Linux 使你很容易忘记打包这件事,因为它只需要针对一个系统,并可以整合到你的构建过程中。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/2/linux-packaging + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- 
(Package wrapped with brown paper and red bow) +[2]: https://flatpak.org/setup/ +[3]: https://www.redhat.com/sysadmin/yaml-beginners +[4]: https://opensource.com/article/19/10/how-build-flatpak-packaging +[5]: https://docs.flatpak.org/en/latest/index.html +[0]: https://img.linux.net.cn/data/attachment/album/202303/29/231331qb9ye8gggeekvce1.jpg \ No newline at end of file diff --git a/published/20210819 Short option parsing using getopt in C.md b/published/202303/20210819 Short option parsing using getopt in C.md similarity index 100% rename from published/20210819 Short option parsing using getopt in C.md rename to published/202303/20210819 Short option parsing using getopt in C.md diff --git a/published/202303/20210906 Learn everything about computers with this Raspberry Pi kit.md b/published/202303/20210906 Learn everything about computers with this Raspberry Pi kit.md new file mode 100644 index 0000000000..faa63fd66b --- /dev/null +++ b/published/202303/20210906 Learn everything about computers with this Raspberry Pi kit.md @@ -0,0 +1,130 @@ +[#]: subject: "Learn everything about computers with this Raspberry Pi kit" +[#]: via: "https://opensource.com/article/21/9/raspberry-pi-crowpi2" +[#]: author: "Seth Kenlon https://opensource.com/users/seth" +[#]: collector: "lujun9972" +[#]: translator: "XiaotingHuang22" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-15656-1.html" + +用 CrowPi 树莓派套件了解关于计算机的一切 +====== + +> CrowPi 是一个超棒的树莓派项目系统,安装在一个笔记本电脑般的外壳里。 + +![][0] + +我喜欢历史,也喜欢计算机,因此相比于计算机如何变成个人配件,我更喜欢听它在成为日常家用电器前的故事。[我经常听到的一个故事][2] 是很久以前(反正在计算机时代算久远了)的计算机是多么的简单。事实上,它们简单到对于一个好奇的用户来说,弄清楚如何编程并不是十分困难。再看看现代计算机,它具有面向对象的编程语言、复杂的 GUI 框架、网络 API、容器等,但愈发令人担忧的是,计算工具正变得越来越难懂,对于那些没有接受过专门培训的人来说基本上无法使用。 + +从树莓派在 2012 年发布之日起,它就一直被定位为一个教育平台。一些第三方供应商通过附加组件和培训套件支持树莓派,以帮助所有年龄段的学习者探索编程、物理计算和开源。然而,直到最近,很大程度上还是要由用户来弄清楚市场上的所有部件如何组合在一起,直到我最近买了 CrowPi。 + +![CrowPi 不是一个笔记本电脑][3] + +### CrowPi2 介绍 + +乌鸦是非常聪明的鸟。它们能识别并记住面孔,模仿听到的声音,解决复杂的谜题,甚至使用工具来完成任务。CrowPi 使用乌鸦作为其徽标和名字是恰当的,因为这个设备充满了探索、实验、教育的机会,最重要的是,充满了乐趣。 + +其设计很巧妙:它看起来像笔记本电脑,但远不止于此。当你从机壳中取出蓝牙键盘时,它会显示一个隐藏的电子设备工坊,配有 LCD 屏幕、16 个按钮、刻度盘、RFID 传感器、接近传感器、线路板、扬声器、GPIO 连接、LED 阵列等等。_而且这一切都是可编程的。_ + +顾名思义,该装置本身完全由树莓派提供支持,它牢固地固定在机壳底部。 + +![CrowPi 的树莓派板][5] + +默认情况下,你应该用电源适配器为设备充电,包装附带一个壁式插头,你可以将其插入机壳,而不是直接为树莓派供电。你还可以使用插入外部微型 USB 端口的电池电源。机壳内甚至还有一个抽屉,方便你存放电池。这样做的时候,有一根 USB 线从电池抽屉中弹出,并插入机壳电源端口,因此你不会产生这是一台“普通”笔记本电脑的错觉。然而,这样一台设备能够有如此美观的设计已经很理想了! 
+ +### 首次启动系统 + +CrowPi2 提供一张安装了 Raspbian 系统,贴有 “System” 标签的 microSD 卡,不过它同时还提供了装载 [RetroPie][6] 的第二张 microSD 卡。作为一个负责任的成年人(咳咳),我自然是先启动了 RetroPie。 + +RetroPie 总是很有趣,CrowPi2 附带两个超任风格的游戏控制器,确保你能获得最佳的复古游戏体验。 + +令人赞叹不已的是,启动实际的 Raspbian 系统的过程同样有趣,甚至可以说更有趣。它的登录管理器是一个自定义项目中心,有一些快速链接,如编程示例项目、Python 和 Arduino IDE、Scratch、Python 示例游戏、Minecraft 等。你也可以选择退出项目中心,只使用桌面。 + +![CrowPi 中心][7] + +对于习惯使用树莓派或 Linux 的人来说,CrowPi 桌面很熟悉,不过它也足够简单,所以很容易上手。左上角有应用程序菜单,桌面上有快捷图标,右上角有网络选择和音量控制的系统托盘等等。 + +![CrowPi 桌面][8] + +CrowPi 上有很多东西可供选择,所以你可能很难决定从哪里开始。对我来说,主要分为四大类:编程、物理电子学、Linux 和游戏。 + +盒子里有一本使用说明,所以你会知道你需要怎样进行连接(例如,键盘是电池供电的,所以它有时确实需要充电,它和鼠标总是需要一个 USB 适配器)。虽然说明书很快就能读完,但这一例子也充分体现了 CrowPi 团队是如何认真对待说明书的。 + +![CrowPi 文档][9] + +### 编程 + +如果你想学习如何编码,在 CrowPi 上有很多成功的途径。你可以从中选择你觉得最满意的路径。 + +#### 1、Scratch + +[Scratch][10] 是一个简单的可视化编码应用程序,可让你像拼 [乐高积木][11] 一样将代码块组合在一起,制作出游戏和互动故事。这是开启编程之旅最简单的方法,我曾见过年仅 8 岁的孩子会花数小时来研究自己设计的游戏的最佳算法。当然,它不仅适合孩子们!成年人也可以从中获得很多乐趣。不知道从哪里开始?包装盒中有一本 99 页的小册子(打印在纸张上),其中包含 Scratch 课程和项目供你尝试。 + +#### 2、Java 和 Minecraft + +Minecraft 不是开源的(虽然有 [几个开源项目][12] 复刻了它),但它有足够的可用资源,因此也经常被用来教授编程。Minecraft 是用 Java 编写的,CrowPi 同时装载有 [Minecraft Pi Edition][13] 和 [BlueJ Java IDE][14] ,如此可使学习 Java 变得比以往更容易、更有趣。 + +#### 3、Python 和 PyGame + +CrowPi 上有几个非常有趣的游戏,它们是用 Python 和 [PyGame 游戏引擎][15] 编写的。你可以玩这些游戏,然后查看其源代码以了解游戏的运行方式。CrowPi 中包含 Geany、Thonny 和 [Mu][16] 编辑器,因此你可以使用 Python 立即开始编程。与 Scratch 一样,包装盒中有一本包含有课程的小册子,因此你可以学习 Python 基础知识。 + +### 电子器件 + +隐藏在键盘下的物理电子工坊本质上是一系列 Pi Hat(附着在上的硬件)。为了让你可以认识所有的组件,CrowPi 绘制了一张中英双语的折页图进行详细的说明。除此之外还有很多示例项目可以帮助你入门。 以下是一张小清单: + + * **你好**:当你与 CrowPi 说话时,LCD 屏幕上打印输出“你好”。 + * **入侵警报**:使用接近传感器发出警报。 + * **远程控制器**:让你能够使用远程控制(是的,这个也包含在盒子里)来触发 CrowPi 上的事件。 + * **RGB 俄罗斯方块**:让你可以在 LED 显示屏上玩俄罗斯方块游戏。 + * **语音识别**:演示自然语言处理。 + * **超声波音乐**:利用距离传感器和扬声器创建简易版的 特雷门琴Theramin(LCTT 译注:世上唯一不需要身体接触的电子乐器)。 + +这些项目仅仅是入门级别而已,因为你还可以在现有的基础上搭建更多东西。当然,还有更多内容值得探索。包装盒里还有网络跳线、电阻、LED 和各种组件,这样你闲暇时也可以了解树莓派的 GPIO (通用输入输出端口)功能的所有信息。 + +不过我也发现了一个问题:示例项目的位置有点难找。找到演示项目很容易(它们就在 CrowPi 中心上),但源代码的位置并不是很容易被找到。我后来发现大多数示例项目都在 `/usr/share/code` 中,你可以通过文件管理器或终端进行访问。 + +![CrowPi 外围设备][17] + +### Linux + +树莓派上运行的是 Linux 系统。如果你一直想更深入了解 Linux,那么 CrowPi 同样会是一个很好的平台。你可以探索 Linux 桌面、终端以及几乎所有 Linux 或开源应用程序。如果你多年来一直在阅读有关开源的文章,并准备深入研究开源操作系统,那么 CrowPi 会是你想要的平台(当然还有很多其他平台也可以)。 + +### 游戏 + +包装盒中包含的 **RetroPie** SD 卡意味着你可以重新启动切换为复古游戏机,并任意玩各种老式街机游戏。它跟 Steam Deck 并不完全相同,但也是一个有趣且令人振奋的小游戏平台。因为它配备的不是一个而是两个游戏控制器,所以它非常适合多人合作的沙发游戏。最重要的是,你不仅可以在 CrowPi 上玩游戏,还可以制作自己的游戏。 + +### 配备螺丝刀 + +自我坐下开始使用 CrowPi2 以来已经大约两周,但我还没有通关所有项目。有很多个晚上,我不得不强迫自己停下摆弄它,因为即使我厌倦了一个项目,我也会不可避免地发现还有其他东西可以探索。总而言之,我在盒子里找到了一个特别的组件,这个组件让我马上知道 CrowPi 和我就是天造地设:它是一把不起眼的小螺丝刀。盒子上没有撕开就不保修的标签。CrowPi 希望你去修补、拆解、探索和学习。它不是笔记本电脑,甚至也不仅仅是个树莓派;而是一个便携的、低功耗的、多样化的、开源的学习者工具包。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/21/9/raspberry-pi-crowpi2 + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[XiaotingHuang22](https://github.com/XiaotingHuang22) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?) 
+[2]: https://opensource.com/article/21/8/my-first-programming-language +[3]: https://opensource.com/sites/default/files/crowpi-not-laptop.jpeg (CrowPi more than a laptop) +[4]: https://creativecommons.org/licenses/by-sa/4.0/ +[5]: https://opensource.com/sites/default/files/crowpi-pi.jpeg (crowpi pi board) +[6]: https://opensource.com/article/19/1/retropie +[7]: https://opensource.com/sites/default/files/crowpi-hub.png (CrowPi hub) +[8]: https://opensource.com/sites/default/files/crowpi-desktop.png (CrowPi desktop) +[9]: https://opensource.com/sites/default/files/crowpi-docs.jpeg (CrowPi docs) +[10]: https://opensource.com/article/20/9/scratch +[11]: https://opensource.com/article/20/6/open-source-virtual-lego +[12]: https://opensource.com/alternatives/minecraft +[13]: https://www.minecraft.net/en-us/edition/pi +[14]: https://opensource.com/article/20/7/ide-java#bluej +[15]: https://opensource.com/downloads/python-gaming-ebook +[16]: https://opensource.com/article/18/8/getting-started-mu-python-editor-beginners +[17]: https://opensource.com/sites/default/files/crowpi-peripherals.jpeg (CrowPi peripherals) +[0]: https://img.linux.net.cn/data/attachment/album/202303/24/170210th71d0o707worogv.jpg \ No newline at end of file diff --git a/published/20211014 9 ways to use open source every day.md b/published/202303/20211014 9 ways to use open source every day.md similarity index 100% rename from published/20211014 9 ways to use open source every day.md rename to published/202303/20211014 9 ways to use open source every day.md diff --git a/published/202303/20220712 OpenWrt, an open source alternative to firmware for home routers.md b/published/202303/20220712 OpenWrt, an open source alternative to firmware for home routers.md new file mode 100644 index 0000000000..d626e005cb --- /dev/null +++ b/published/202303/20220712 OpenWrt, an open source alternative to firmware for home routers.md @@ -0,0 +1,177 @@ +[#]: subject: "OpenWrt, an open source alternative to firmware for home routers" +[#]: via: "https://opensource.com/article/22/7/openwrt-open-source-firmware" +[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99" +[#]: collector: "lkxed" +[#]: translator: "wxy" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-15671-1.html" + +OpenWrt:一个开源的家用路由器固件替代品 +====== + +![][0] + +> OpenWrt 是一个基于 Linux 的开源操作系统,主要针对嵌入式网络设备。 + +如果你在家里阅读这篇文章,你可能是用一个 LTE/5G/DSL/WIFI 路由器联网的。这种设备通常负责在你的本地设备(智能手机、PC、电视等)之间路由数据包,并通过内置的调制解调器提供对 WWW 的访问。你家里的路由器很可能有一个基于网页的界面,用于配置该设备。这种界面往往过于简单,因为它们是为普通用户制作的。 + +如果你想要更多的配置选项,但又不想花钱买一个专业的设备,你应该看看其他的固件,如 [OpenWrt][2]。 + +### OpenWrt 的特点 + +OpenWrt 是一个基于 Linux 的、针对嵌入式网络设备的开源操作系统。它主要用于替代各种家用路由器上的原始固件。OpenWrt 具备一个好的路由器应该具备的所有有用功能,如 DNS 服务器([dnsmasq][3]),WiFi 接入点(AP)和客户端功能,用于调制解调器功能的 PPP 协议,而且,与标准固件不同,这一切都是可以完全配置的。 + +#### LuCI 网页界面 + +OpenWrt 可以通过命令行(SSH)或使用 GUI 配置界面([LuCI][4])进行远程配置。LuCI 是一个用 [Lua][5] 编写的轻量级、可扩展的网页 GUI,它可以精确地配置你的设备。除了配置,LuCI 还提供了很多额外的信息,如实时图表、系统日志和网络诊断。 + +![LuCI 网页界面][6] + +LuCI 有一些可选的扩展,以增加更多的配置选择。 + +#### 可写文件系统 + +它的另一个亮点是可写文件系统。原有的固件通常是只读的,而 OpenWrt 配备了一个可写的文件系统,这要归功于一个巧妙的解决方案,它将 OverlayFS 与 SquashFS/JFFS2 文件系统相结合,允许安装软件包以增强功能。在 [OpenWrt 文档][8] 中可以找到更多关于文件系统架构的信息。 + +#### 扩展 + +OpenWrt 有一个相关的软件包管理器,[opkg][9],它允许安装额外的服务,比如 FTP 服务器、DLNA 媒体服务器、OpenVPN 服务器、用于实现文件共享的 Samba 服务器、控制电话的 Asterisk 等等。当然,有些扩展需要适当的底层硬件资源。 + +### 动机 + +你可能想知道为什么要冒着对你的设备造成不可修复的损害和失去保修的风险,而尝试更换路由器制造商的固件。如果你的设备以你想要的方式工作,那么你可能不应该。永远不要碰一个正在运行的系统!但是,如果你想增强功能,或者你的设备缺乏配置选项,那么你应该看看 OpenWrt 是否可以成为一种补救措施。 + 
+在我的例子中,我想要一个旅行用的路由器,当我在露营地的时候,我可以把它放在一个合适的位置,以便让其它设备与这个本地 WiFi 接入点(AP)保持良好连接。该路由器将作为一个普通的客户端连接到互联网,并广播它的 WiFi 接入点让我的其它设备连接到它。这样我就可以配置我的所有设备与这个路由器的接入点连接,当我在其他地方时我只需要改变路由器的客户端连接。此外,在一些露营地,你只能得到一个单一设备的访问代码,我可以通过这种设置来加强。 + +作为我的旅行路由器,我选择 TP-Link TL-WR902AC 的原因如下: + +* 很小 +* 两根 WiFi 天线 +* 5V 电源(USB) +* 低功耗 +* 成本效益高(你以 30 美元左右的价格得到它) + +为了了解它的尺寸,这里是它在树莓派 4 旁边的样子: + +![TP-Link TL-WR902AC 在树莓派旁边][10] + +尽管这个路由器带来了我所需要的所有硬件功能,但我很快发现,默认的固件并不能让我按照我想要的方式配置它。该路由器主要是作为一个 WiFi 接入点,它可以复制现有的 WiFi 网络或通过板载以太网接口将自己连接到网络。默认的固件对于这些使用情况是非常有限的。 + +(LCTT 译注:此型号国内没有销售,它的特点之一是可以通过插入 3G/4G USB 网卡连接到互联网,但由于它不在国内销售,所以没有支持哪种国内 3G/4G USB 网卡的说明,我 [查下来](https://www.tp-link.com/lk/support/3g-comp-list/tl-wr902ac/?location=1963) 似乎华为的 E3372h-320 是可用的。有相关实践的同学可以分享一下经验。 + +国内销售的其它类似型号只能通过以太网口或 WiFi 连接到互联网,这种情况下,如果只能通过 3G/4G 连接互联网,那需要另外买一个随身 WiFi /移动路由器。) + +幸运的是,该路由器能够运行 OpenWrt,所以我决定用它来替换原来的固件。 + +### 安装 + +当你的 LTE/5G/DSL/WiFi 路由器符合 [最低要求][12] 时,很有可能在它上面运行 OpenWrt。下一步,你要查看 [硬件表][13],检查你的设备是否被列为兼容,以及你要选择哪个固件包。OpenWrt 的 [TP-Link TL-WR902AC][14] 的页面还包括安装说明,其中描述了如何刷入内部存储器。 + +刷入固件的过程在不同的设备之间可能会有所不同,所以我就不详细介绍了。简而言之,我必须通过将设备连接到一个具有特定 IP 地址的网络接口上的 TFTP 服务器,重命名 OpenWrt 固件文件,然后按复位按钮启动设备。 + +### 配置 + +一旦刷入成功,你的设备现在应该用新的固件启动了。现在启动可能需要更长的时间,因为与默认固件相比,OpenWrt 具有更多的功能。 + +为了开始配置,需要在你的 PC 和路由器之间建立一个直接的以太网连接,OpenWrt 在此充当了一个 DHCP 服务器,并将你的 PC 的以太网适配器配置为一个 DHCP 客户端。 + +在 Fedora Linux 上,要激活你的网络适配器的 DHCP 客户端模式,首先你必须通过运行找出连接的 UUID: + +``` +$ nmcli connection show +NAME          UUID         TYPE      DEVICE +Wired Conn 1  7a96b...27a  ethernet  ens33 +virbr0        360a0...673  bridge   virbr0 +testwifi      2e865...ee8  wifi     -- +virbr0        bd487...227  bridge   -- +Wired Conn 2  16b23...7ba  ethernet -- +``` + +选择你要修改的连接的 UUID,然后运行: + +``` +$ nmcli connection modify ipv4.method auto +``` + +你可以在 [Fedora 联网维基][15] 中找到更多关于这些命令的信息。 + +在你连接到路由器后,打开一个网页浏览器并导航到 [http://openwrt/][16]。现在你应该看到 LuCI 的登录管理器: + +![LuCI 登录][17] + +使用 `root` 作为用户名,并将密码留空。 + +### 配置 WiFi 和路由 + +要配置你的 WiFi 天线,请点击 “网络Network” 菜单并选择 “无线Wireless”。 + +![LuCI 无线配置][19] + +在我的设备上,上面的天线 `radio0` 工作在 2.4GHz 模式,并连接到名为 `MOBILE-INTERNET` 的本地接入点。下面的天线 `radio1` 工作在 5GHz,有一个相关的接入点,SSID 为 `OpenWrt_AV`。通过点击 “编辑Edit” 按钮,你可以打开设备配置,以决定该设备属于 LAN 还是 WWAN 网络。在我的例子中,接入点 `OpenWrt_AV` 属于 LAN 网络,客户端连接 `MOBILE-INTERNET` 属于 WWAN 网络。 + +![LuCI 配置屏幕][21] + +配置的网络在 “接口Interfaces” 面板的 “网络Network” 下列出。 + +![设备列表][23] + +为了获得我想要的功能,网络流量必须在 LAN 和 WWAN 网络之间进行路由。路由可以在 “网络Network” 面板的 “防火墙Firewall” 部分进行配置。我没有在这里做任何改动,因为在默认情况下,网络之间的流量是被路由的,而传入的数据包(从 WWAN 到 LAN)必须通过防火墙。 + +![防火墙设置][28] + +因此,你需要知道的是一个接口是属于 LAN 还是 (W)WAN。这个概念使它相对容易配置,特别是对初学者来说。你可以在 [OpenWrt 联网基础][25] 指南中找到更多信息。 + +### 专属门户 + +公共 WiFi 接入点通常受到 [专属门户][26] 的保护,你必须输入一个访问代码或类似的代码。通常情况下,当你第一次连接到接入点并试图打开一个任意的网页时,这种门户就会出现。这种机制是由接入点的 DNS 服务器实现的。 + +默认情况下,OpenWrt 激活了一个安全功能,可以防止连接的客户端受到 [DNS 重新绑定攻击][27]。OpenWrt 的重新绑定保护也阻止了专属门户网站被转发到客户端,所以你必须禁用重新绑定保护,以便你可以到达专属门户网站。这个选项在 “网络Network” 菜单的 “DHCP 和 DNSDHCP and DNS” 面板中。 + +### 尝试 OpenWrt + +由于升级到 OpenWrt,我得到了一个基于商品硬件的灵活的旅行路由器。OpenWrt 使你的路由器具有完全的可配置性和可扩展性,而且由于其制作精良的网页 GUI,它也适合初学者使用。甚至有一些 [精选路由器][30] 在出厂时已经安装了 OpenWrt。你还可以用很多 [可用的软件包][31] 来增强你的路由器的功能。例如,我正在使用 [vsftp][32] FTP 服务器,在连接的 U 盘上托管一些电影和电视剧。看看该 [项目主页][33],在那里你可以找到许多切换到 OpenWrt 的理由。 + +图片来自: Stephan Avenwedde,[CC BY-SA 4.0][7] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/22/7/openwrt-open-source-firmware + +作者:[Stephan Avenwedde][a] +选题:[lkxed][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/hansic99 +[b]: https://github.com/lkxed +[1]: https://opensource.com/sites/default/files/lead-images/OSDC_Internet_Cables_520x292_0614_RD.png +[2]: https://openwrt.org +[3]: https://thekelleys.org.uk/dnsmasq/doc.html +[4]: https://openwrt.org/docs/guide-user/luci/start +[5]: https://opensource.com/article/20/2/lua-cheat-sheet +[6]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_overview_c_0.png +[7]: https://creativecommons.org/licenses/by-sa/4.0/legalcode +[8]: https://openwrt.org/docs/techref/flash.layout +[9]: https://openwrt.org/docs/guide-user/additional-software/opkg +[10]: https://opensource.com/sites/default/files/2022-07/OpenWrt_Comparison_RaspberryPi.jpg +[12]: https://openwrt.org/supported_devices +[13]: https://openwrt.org/toh/start +[14]: https://openwrt.org/toh/tp-link/tl-wr902ac_v3 +[15]: https://fedoraproject.org/wiki/Networking/CLI +[16]: http://openwrt/ +[17]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_login_manager.png +[19]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_wireless_section_c.webp +[21]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_wifi_device_configuration.webp +[23]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_network_devices_0.webp +[25]: https://openwrt.org/docs/guide-user/base-system/basic-networking +[26]: https://en.wikipedia.org/wiki/Captive_portal +[27]: https://en.wikipedia.org/wiki/DNS_rebinding +[28]: https://opensource.com/sites/default/files/2022-07/openwrt_luci_firewall_settings.webp +[30]: https://opensource.com/article/22/1/turris-omnia-open-source-router +[31]: https://openwrt.org/packages/table/start +[32]: https://openwrt.org/docs/guide-user/services/nas/ftp.overview +[33]: https://openwrt.org/reasons_to_use_openwrt +[0]: https://img.linux.net.cn/data/attachment/album/202303/29/105431e78pqv0n8x6aqm0l.jpg \ No newline at end of file diff --git a/published/20221117.0 ⭐️⭐️ How to Install and Use htop in Linux.md b/published/202303/20221117.0 ⭐️⭐️ How to Install and Use htop in Linux.md similarity index 100% rename from published/20221117.0 ⭐️⭐️ How to Install and Use htop in Linux.md rename to published/202303/20221117.0 ⭐️⭐️ How to Install and Use htop in Linux.md diff --git a/published/20221206.3 ⭐️⭐️ A data scientist's guide to open source community analysis.md b/published/202303/20221206.3 ⭐️⭐️ A data scientist's guide to open source community analysis.md similarity index 100% rename from published/20221206.3 ⭐️⭐️ A data scientist's guide to open source community analysis.md rename to published/202303/20221206.3 ⭐️⭐️ A data scientist's guide to open source community analysis.md diff --git a/published/20221219.1 ⭐️⭐️ How I use my old camera as a webcam with Linux.md b/published/202303/20221219.1 ⭐️⭐️ How I use my old camera as a webcam with Linux.md similarity index 100% rename from published/20221219.1 ⭐️⭐️ How I use my old camera as a webcam with Linux.md rename to published/202303/20221219.1 ⭐️⭐️ How I use my old camera as a webcam with Linux.md diff --git a/published/20221220.2 ⭐️⭐️ How I use Artipie, a PyPI repo.md b/published/202303/20221220.2 ⭐️⭐️ How I use Artipie, a PyPI repo.md similarity index 100% rename from published/20221220.2 ⭐️⭐️ How I use Artipie, a PyPI repo.md rename to published/202303/20221220.2 ⭐️⭐️ How I use Artipie, a PyPI repo.md diff --git a/published/20230131.1 ⭐️⭐️ Use Terraform to manage an OpenStack cluster.md b/published/202303/20230131.1 
⭐️⭐️ Use Terraform to manage an OpenStack cluster.md similarity index 100% rename from published/20230131.1 ⭐️⭐️ Use Terraform to manage an OpenStack cluster.md rename to published/202303/20230131.1 ⭐️⭐️ Use Terraform to manage an OpenStack cluster.md diff --git a/published/20230206.1 ⭐️⭐️ Wordsmith on the Linux command line with dict.md b/published/202303/20230206.1 ⭐️⭐️ Wordsmith on the Linux command line with dict.md similarity index 100% rename from published/20230206.1 ⭐️⭐️ Wordsmith on the Linux command line with dict.md rename to published/202303/20230206.1 ⭐️⭐️ Wordsmith on the Linux command line with dict.md diff --git a/published/20230216.0 ⭐️ 5 escape sequences for your Linux shell.md b/published/202303/20230216.0 ⭐️ 5 escape sequences for your Linux shell.md similarity index 100% rename from published/20230216.0 ⭐️ 5 escape sequences for your Linux shell.md rename to published/202303/20230216.0 ⭐️ 5 escape sequences for your Linux shell.md diff --git a/published/20230216.1 ⭐️⭐️ Beginner's Guide to R Markdown Syntax [With Cheat Sheet].md b/published/202303/20230216.1 ⭐️⭐️ Beginner's Guide to R Markdown Syntax [With Cheat Sheet].md similarity index 100% rename from published/20230216.1 ⭐️⭐️ Beginner's Guide to R Markdown Syntax [With Cheat Sheet].md rename to published/202303/20230216.1 ⭐️⭐️ Beginner's Guide to R Markdown Syntax [With Cheat Sheet].md diff --git a/published/202303/20230216.2 ⭐️⭐️ Writing Javascript without a build system.md b/published/202303/20230216.2 ⭐️⭐️ Writing Javascript without a build system.md new file mode 100644 index 0000000000..cafee37959 --- /dev/null +++ b/published/202303/20230216.2 ⭐️⭐️ Writing Javascript without a build system.md @@ -0,0 +1,195 @@ +[#]: subject: "Writing Javascript without a build system" +[#]: via: "https://jvns.ca/blog/2023/02/16/writing-javascript-without-a-build-system/" +[#]: author: "Julia Evans https://jvns.ca/" +[#]: collector: "lkxed" +[#]: translator: "wxy" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-15666-1.html" + +在没有构建系统的情况下编写 Javascript +====== + +![][0] + +嗨!这周我一直在写一些 Javascript,和往常一样,当我开始一个新的前端项目时,我面临的问题是:我是否应该使用构建系统? + +我想谈谈构建系统对我有什么吸引力,为什么我(通常)仍然不使用它们,以及一些前端 Javascript 库要求你使用构建系统时,为什么我觉得这让我感到沮丧。 + +我写这篇文章是因为我看到的大多数关于 JS 的文章都假定你正在使用构建系统,而对于像我这样的人来说,编写非常简单的、不需要构建系统的小型 Javascript 项目时,构建系统可能反而添加了很多麻烦。 + +#### 什么是构建系统? + +构建系统的思路是,你有一堆 Javascript 或 Typescript 代码,你想在把它放到你的网站上之前把它翻译成不同的 Javascript 代码。 + +构建系统可以做很多有用的事情,比如: + +- (出于效率的考虑)将 100 多个 JS 文件合并成一个大的捆绑文件 +- 将 Typescript 翻译成 Javascript +- 对 Typescript 进行类型检查 +- 精简化 +- 添加 Polyfills 以支持旧的浏览器 +- 编译 JSX +- 摇树优化Tree Shaking(删除不使用的 JS 代码以减少文件大小) +- 构建 CSS(像 [tailwind][1] 那样) +- 可能还有很多其他重要的事情 + +正因为如此,如果你今天正在构建一个复杂的前端项目,你可能会使用 Webpack、Rollup、Esbuild、Parcel 或 Vite 等构建系统。 + +很多这些功能对我很有吸引力,我过去使用构建系统也是出于这样一些原因: 例如,[Mess With DNS][2] 使用 Esbuild 来翻译 Typescript,并将许多文件合并成一个大文件。 + +#### 目标:轻松地对旧的小网站进行修改 + +我做了很多简单的小网站([之一][3]、[之二][4]、[之三][5]、[之四][6]),我对它们的维护精力大约为 0,而且我改变它们的频率很低。 + +我的目标是,如果我有一个 3、5 年前做的网站,我希望能在 20 分钟内, + +- 在一台新的电脑上从 GitHub 获取源代码 +- 做一些修改 +- 把它放到互联网上 + +但我对构建系统(不仅仅是 Javascript 构建系统!)的经验是,如果你有一个 5 年历史的网站,要重新构建这个网站会非常痛苦。 + +因为我的大多数网站都很小,所以使用构建系统的 *优势* 很小 —— 我并不真的需要 Typescript 或 JSX。我只要有一个 400 行的 `script.js` 文件就可以了。 + +#### 示例:尝试构建 SQL 实验场 + +我的一个网站([SQL 试验场][5])使用了一个构建系统(它使用 Vue)。我最后一次编辑该项目是在 2 年前,是在另一台机器上。 + +让我们看看我今天是否还能在我的机器上轻松地构建它。首先,我们要运行 `npm install`。下面是我得到的输出: + +``` +$ npm install +[lots of output redacted] +npm ERR! code 1 +npm ERR! path /Users/bork/work/sql-playground.wizardzines.com/node_modules/grpc +npm ERR! 
command failed +npm ERR! command sh /var/folders/3z/g3qrs9s96mg6r4dmzryjn3mm0000gn/T/install-b52c96ad.sh +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/avl/avl.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/backoff/backoff.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_args.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_stack.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_stack_builder.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_trace.o +npm ERR! CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channelz.o +``` + +在构建 `grpc` 时出现了某种错误。没问题。反正我也不需要这个依赖关系,所以我可以花 5 分钟把它拆下来重建。现在我可以 `npm install` 了,一切正常。 + +现在让我们试着构建这个项目: + +``` +$ npm run build + ? Building for production...Error: error:0308010C:digital envelope routines::unsupported + at new Hash (node:internal/crypto/hash:71:19) + at Object.createHash (node:crypto:130:10) + at module.exports (/Users/bork/work/sql-playground.wizardzines.com/node_modules/webpack/lib/util/createHash.js:135:53) + at NormalModule._initBuildHash (/Users/bork/work/sql-playground.wizardzines.com/node_modules/webpack/lib/NormalModule.js:414:16) + at handleParseError (/Users/bork/work/sql-playground.wizardzines.com/node_modules/webpack/lib/NormalModule.js:467:10) + at /Users/bork/work/sql-playground.wizardzines.com/node_modules/webpack/lib/NormalModule.js:499:5 + at /Users/bork/work/sql-playground.wizardzines.com/node_modules/webpack/lib/NormalModule.js:356:12 + at /Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:373:3 + at iterateNormalLoaders (/Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:214:10) + at iterateNormalLoaders (/Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:221:10) + at /Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:236:3 + at runSyncOrAsync (/Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:130:11) + at iterateNormalLoaders (/Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:232:2) + at Array. (/Users/bork/work/sql-playground.wizardzines.com/node_modules/loader-runner/lib/LoaderRunner.js:205:4) + at Storage.finished (/Users/bork/work/sql-playground.wizardzines.com/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:43:16) + at /Users/bork/work/sql-playground.wizardzines.com/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:79:9 +``` + +[这个 Stack Overflow 的答案][7] 建议运行 `export NODE_OPTIONS=--openssl-legacy-provider` 来解决这个错误。 + +这很有效,最后我得以 `npm run build` 来构建这个项目。 + +这其实并不坏(我只需要删除一个依赖关系和传递一个略显神秘的 Node 选项!),但我宁愿不被那些构建错误破坏。 + +#### 对我来说,对于小项目来说,构建系统并不值得 + +对我来说,一个复杂的 Javascript 构建系统对于 500 行的小项目来说似乎并不值得 —— 它意味着放弃了在未来能够轻松更新项目的能力,以换取一些相当微小的好处。 + +#### Esbuild 似乎更稳定一些 + +我想为 Esbuild 大声叫好: 我 [在 2021 年了解到 Esbuild][8],并用于一个项目,到目前为止,它确实是一种更可靠的构建 JS 项目的方式。 + +我刚刚尝试在一台新电脑上构建一个我最后一次改动在 8 个月前的 Esbuild 项目,结果成功了。但我不能肯定的说,两年后我是否还能轻松的建立那个项目。也许会的,我希望如此! + +#### 不使用构建系统通常是很容易的 + +下面是 [Nginx 实验场][6] 代码中导入所有库的部分的样子: + +``` + + + + + + + +``` + +这个项目也在使用 Vue,但它只是用 ` -27    -28  -29  -30 

Raffles Book Club Remote Gift Exchange
The players, in random order, and the luxurious gifts, wrapped:
[Listing lines 30–54, markup lost in extraction: the H1 and H2 above, the "droppable" DIVs for the nine players (Wanda, Carlos, Bill, Arlette, Joanne, Alekx, Ermintrude, Walter, and Hilary; lines 34–42), and the "draggable" DIVs for the nine wrapped books, which carry wrapping-paper backgrounds and no text (lines 45–53). A reconstructed sketch of the script (lines 56–81) appears after the walkthrough, at the end of the article.]
-55    -56  -82    -83  -84  -``` - -### Breaking it down - -Let's go over this code bit by bit. - -* Lines 1–6: Upfront, I have the usual `HTML` boilerplate, HTML, `HEAD`, `META`, `TITLE` elements, followed by a link to the CSS for jQuery UI. -* Lines 7–25: I added two new style classes: `draggable` and `droppable`. These define the layout for the books (draggable) and the people (droppable). Note that, aside from defining the size, background color, padding, and margin, I established that these need to float left. This way, the layout adjusts to the browser window width in a reasonably acceptable form. -* Line 26–27: With the CSS out of the way, it's time for the JavaScript libraries, first jQuery, then jQuery UI. -* Lines 29–83: Now that the `HEAD` `element` is done, next is the `BODY`: - * Lines 30–31: These couple of titles, `H1` and `H2`, let people know what they're doing here. -Lines 33–43: A `DIV` to contain the people: -Lines 34–42: The people are defined as `droppable` `DIV` elements and given `ID` fields corresponding to their names. -Lines 44–54: A `DIV` to contain the books: -Lines 45–53: The books are defined as `draggable` `DIV` elements. Each element is declared with a background image corresponding to the `wrapping` paper with no text between the `
<div>` and `</div>
`. The `ID` fields correspond to the wrapping paper. -Lines 56–81: These contain JavaScript to make it all work. - * Lines 57–67: This JavaScript object contains the book definitions. The keys ('bows', `'boxes'`, etc.) correspond to the `ID` fields of the book `DIV` elements. The values ('Untamed by Glennon Doyle', `"The Heart's Invisible Furies by John Boyne"`, etc.) are the book titles and authors. -Lines 68–79: This JavaScript jQuery UI function defines the `droppable` functionality to be attached to HTML elements whose class is `drop`pable. -Lines 69–75: When a `draggable` element is dropped onto a droppable element, the function drop is called. -Line 70: The element variable is assigned the draggable object that was dropped (this will be a `
` element. -Line 71: The wrapping variable is assigned the value of the `ID` field in the draggable object. -Line 72: This line is commented `out`, but while I was learning and testing, calls to `alert()` were useful. - * Line 73: This reassigns the draggable object's background image to a bland image on which text can be read; part 1 of unwrapping is getting rid of the wrapping paper. - * Line 74: This sets the text of the draggable object to the title of the book, looked up in the book's object using the draggable object's ID; part 2 of the unwrapping is showing the book title and author. -Lines 76–78: For a while, I thought I wanted something to happen when a draggable object was removed from a droppable object (e.g., when a club member stole a book), which would require using the out function in a droppable object. Eventually, I decided not to do anything. But, this could note that the book was stolen and make it "unstealable" for one turn; or it could show a status line that says something like: "Wanda's book Blah Blah by Joe Blogs was stolen, and she needs to choose another." -Line 80: This JavaScript jQuery UI function defines the draggable functionality to be attached to HTML elements whose class is draggable. In my case, the default behavior was all I needed. - -That's it! - -### A few last thoughts - -Libraries like jQuery and jQuery UI are incredibly helpful when trying to do something complicated in JavaScript. Look at the `$().draggable()` and `$().droppable()` functions, for example: - -``` -$( ".draggable" ).draggable(); -``` - -The `".draggable"` allows associating the `draggable()` function with any HTML element whose class is "draggable." The `draggable()` function comes with all sorts of useful behavior about picking, dragging, and releasing a draggable HTML element. - -If you haven't spent much time with jQuery, I really like the book [jQuery in Action][9] by Bear Bibeault, Yehuda Katz, and Aurelio De Rosa. Similarly, [jQuery UI in Action][10] by TJ VanToll is a great help with the jQuery UI (where draggable and droppable come from). - -Of course, there are many other JavaScript libraries, frameworks, and what-nots around to do good stuff in the user interface. I haven't really started to explore all that jQuery and jQuery UI offer, and I want to play around with the rest to see what can be done. 
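The script block of the listing (lines 56–81) did not survive extraction, so here is a minimal sketch of what the walkthrough above describes, using the same jQuery UI calls. The pairing of wrapping IDs to titles, the plain background image name, and the seven remaining book entries are placeholders rather than the original values:

```
// Book titles keyed by the ID of each wrapped "book" DIV. Only 'bows',
// 'boxes', and two titles survive in the article text; the pairing and the
// remaining entries here are placeholders.
var books = {
  'bows': 'Untamed by Glennon Doyle',
  'boxes': "The Heart's Invisible Furies by John Boyne"
  // ...seven more wrapping-paper IDs and their titles go here...
};

// Dropping a wrapped book on a person "unwraps" it: the wrapping-paper
// background is swapped for a plain one and the title is revealed.
$( ".droppable" ).droppable({
  drop: function( event, ui ) {
    var element = ui.draggable;            // the book DIV that was dropped
    var wrapping = element.attr( "id" );   // which wrapping paper it wears
    // alert( wrapping );                  // handy while testing
    element.css( "background-image", "url(plain.png)" );  // placeholder image
    element.text( books[ wrapping ] );     // show the title and author
  }
});

// The books only need the default drag behavior.
$( ".draggable" ).draggable();
```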
- -Image by: (Chris Hermansen, CC BY-SA 4.0) - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/open-source-gift-exchange - -作者:[Chris Hermansen][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/brown-package-red-bow.jpg -[2]: https://unsplash.com/@jessbaileydesigns?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/package?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://all-free-download.com/free-vector/patterns-creative-commons.html#google_vignette -[5]: https://opensource.com/tags/gimp -[6]: https://opensource.com/sites/default/files/uploads/bookexchangestart.png -[7]: https://opensource.com/sites/default/files/uploads/bookexchangeperson1.png -[8]: https://opensource.com/sites/default/files/uploads/bookexchangeperson2.png -[9]: https://www.manning.com/books/jquery-in-action-third-edition -[10]: https://www.manning.com/books/jquery-ui-in-action diff --git a/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md b/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md deleted file mode 100644 index 4627c2c618..0000000000 --- a/sources/tech/20210123 Schedule appointments with an open source alternative to Doodle.md +++ /dev/null @@ -1,66 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Schedule appointments with an open source alternative to Doodle) -[#]: via: (https://opensource.com/article/21/1/open-source-scheduler) -[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) - -Schedule appointments with an open source alternative to Doodle -====== -Easy!Appointments is an open source appointment scheduler filled with -features to make planning your day easier. -![Working on a team, busy worklife][1] - -In previous years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 13 of 21 Days of Productivity in 2021. - -Setting appointments with other people is difficult. Most of the time, we guess at a date and time and then start the "is this time bad for you? No, that time is bad for me, how about..." dance. It is easier with co-workers since you can see each others' calendars. You just have to find that magic spot that is good for almost everyone who needs to be on the call. However, for freelancers managing personal calendars, the dance is a routine part of setting up calls and meetings. - -![Service and Provider set up screen][2] - -Easy!Appointments (Kevin Sonney, [CC BY-SA 4.0][3]) - -This scheduling is a particular challenge for someone like me. Since I interview people for my weekly productivity podcast, finding a time that works for both of us can be extra challenging. - -Finally, one of my guests said, "Hey, to get on my calendar, go to this URL, and pick a time that works for both of us." - -This concept was, to be honest, a revelation. There is software (and services) out there that allows a person requesting the meeting to _pick_ a time that is good for both parties! 
No more back and forth trying to figure it out! It also means that I could give the person being interviewed control over their own availability. - -![Appointment, Date, and Time settings][4] - -Easy!Appointments (Kevin Sonney, [CC BY-SA 4.0][3]) - -There are several commercial and cloud-hosted solutions that provide this service. The best open source alternative I've used is [Easy!Appointments][5]. It is exceptionally easy to set up and has a handy WordPress plug-in that allows users to put the request form on a page or post. - -Easy!Appointments is geared more towards a service organization, like a helpdesk or a handyman service, with the ability to add multiple people (aka Service Providers) and give them individual schedules. It also allows for various service types. While I only have it set up for one person (me) and one service (an interview), for a helpdesk, it might have four or five people and services like "Set up new laptop," "New hire setup," and "Set up new printer." - -Easy!Appointments can also synchronize with Google Calendar on a per-person basis to automatically add or update any new appointments on their calendar. There are discussions in their issue tracker about support for syncing to additional backends. Easy!Appointments also supports multiple languages, time zones, and a whole host of other useful features. - -![Final booking interface][6] - -A final booking (Kevin Sonney, [CC BY-SA 4.0][3]) - -It has been freeing to be able to say, "You can book your interview on this web page," and spending less time negotiating when we are both available to talk. That gives both myself and the other person more time to do more productive things. Whether you are an individual, like me, or a service organization, Easy!Appointments is a big help when scheduling time with other people. - -Need to keep your schedule straight? Learn how to do it using open source with these free... 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/open-source-scheduler - -作者:[Kevin Sonney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ksonney -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) -[2]: https://opensource.com/sites/default/files/day13-image1_1.png -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/sites/default/files/day13-image2_0.png -[5]: https://easyappointments.org/ -[6]: https://opensource.com/sites/default/files/day13-image3_0.png diff --git a/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md b/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md deleted file mode 100644 index 0a9565bd73..0000000000 --- a/sources/tech/20210126 Movim- An Open-Source Decentralized Social Platform Based on XMPP Network.md +++ /dev/null @@ -1,107 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Movim: An Open-Source Decentralized Social Platform Based on XMPP Network) -[#]: via: (https://itsfoss.com/movim/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -Movim: An Open-Source Decentralized Social Platform Based on XMPP Network -====== - -_**Brief: Movim is an open-source decentralized social media platform that relies on XMPP network and can communicate with other applications using XMPP.**_ - -We’ve already highlighted some [open-source alternatives to mainstream social media platforms][1]. In addition to those options available, I have come across another open-source social media platform that focuses on privacy and decentralization. - -### Movim: Open-Source Web-based Social Platform - -![][2] - -Just like some other XMPP desktop clients, [Movim][3] is a web-based XMPP front-end to let you utilize it as a federated social media. - -Since it relies on [XMPP network][4], you can interact with other users utilizing XMPP clients such as [Conversations][5] (for Android) and [Dino][6] (for Desktop). - -In case you didn’t know, XMPP is an open-standard for messaging. - -So, Movim can act as your decentralized messaging app or a full-fledged social media platform giving you an all-in-one experience without relying on a centralized network. - -It offers many features that can appeal to a wide variety of users. Let me briefly highlight most of the important ones. 
- -![][7] - -### Features of Movim - - * Chatroom - * Ability to organize video conferences - * Publish articles/stories publicly to all federated network - * Tweak the privacy setting of your post - * Easily talk with other Movim users or XMPP users with different clients - * Automatically embed your links and images to your post - * Explore topics easily using hashtags - * Ability to follow a topic or publication - * Auto-save to draft when you type in a post - * Supports Markdown syntax to let you publish informative posts and start a publication on the network for free - * React to chat messages - * Supports GIFs and funny Stickers - * Edit or delete your messages - * Supports screen sharing - * Supports night mode - * Self-hosting option available - * Offers a free public instance as well - * Cross-platform web support - - - -### Using Movim XMPP Client - -![][8] - -In addition to all the features listed above, it is also worth noting that you can also find a Movim mobile app on [F-Droid][9]. - -If you have an iOS device, you might have a hard time looking for a good XMPP client (I’m not aware of any decent options). If you rule that out, you should not have any issues using it on your Android device. - -For desktop, you can simply use Movim’s [public instance][10], sign up for an account, and use it on your favorite browser no matter which platform you’re on. - -You can also deploy your instance by using the Docker Compose script, the Debian package, or any other methods mentioned in their [GitHub page][11]. - -[Movim][3] - -### Concluding Thoughts - -While the idea of decentralized social media platforms is good, not everyone would prefer to use it because they probably do not have friends on it and the user experience is not the best out there. - -That being said, XMPP clients like Movim are trying to make a federated social platform that a general consumer can easily use without any hiccups. - -Just like it took a while for users to look for [WhatsApp alternatives][12], the craze for decentralized social media platform like Movim and [Mastodon][13] is a possibility in the near future as well. - -If you like it, do consider making a donation to their project. - -What do you think about Movim? Let me know your thoughts in the comments below. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/movim/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/mainstream-social-media-alternaives/ -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-dark-mode.jpg?resize=800%2C486&ssl=1 -[3]: https://movim.eu/ -[4]: https://xmpp.org/ -[5]: https://conversations.im/ -[6]: https://itsfoss.com/dino-xmpp-client/ -[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-discover.png?resize=800%2C466&ssl=1 -[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/movim-eu.jpg?resize=800%2C464&ssl=1 -[9]: https://f-droid.org/packages/com.movim.movim/ -[10]: https://nl.movim.eu -[11]: https://github.com/movim/movim -[12]: https://itsfoss.com/private-whatsapp-alternatives/ -[13]: https://itsfoss.com/mastodon-open-source-alternative-twitter/ diff --git a/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md b/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md deleted file mode 100644 index b7b0ee31b5..0000000000 --- a/sources/tech/20210127 Build a programmable light display on Raspberry Pi.md +++ /dev/null @@ -1,218 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Build a programmable light display on Raspberry Pi) -[#]: via: (https://opensource.com/article/21/1/light-display-raspberry-pi) -[#]: author: (Darin London https://opensource.com/users/dmlond) - -Build a programmable light display on Raspberry Pi -====== -Celebrate the holidays or any special occasion with a DIY light display -using a Raspberry Pi, Python, and programmable LED lights. -![Lightbulb][1] - -This past holiday season, I decided to add some extra joy to our house by setting up a DIY light display. I used a Raspberry Pi, a programmable light string, and Python. - - - -You can set up your own light display for any occasion, thanks to the flexibility of the WS12911/2 (or NeoPixel) system, by following these directions. - -### Prerequisites - -You will need: - - * 1 – Raspberry Pi with headers and an Ethernet or WiFi connection. I used a Raspberry Pi Zero W with headers. - * 1 – WS12811/2 light string. I used the [Alitove WS2811 Addressable LED Pixel Light 50][2], but many other types are available. Adafruit brands these as [NeoPixel][3]. - * 1 – [5v/10A AC-DC power supply for WS12811][4] if you use the Alitove. Other lights may come with a power supply. - * 1 – Breadboard - * 2 – Breadboard-to-Pi-header jumper wires. I used blue for the Pi GPIO pin 18 and black for the Pi ground. - * 1 – 74AHCT125 level converter chip to safely transmit Pi GPIO wire signals to 5v/10A power without feeding back to the Pi. - * 8 – Breadboard-to-breadboard jumper wires or solid-core 24 AWG wires. I used red/orange for 5v power, black for ground, and yellow for data. - * 1 – SD card with Raspberry Pi OS installed. I used [Raspberry Pi OS Lite][5] and set it up in a headless mode with SSH enabled. - - - -### What are WS2811/2 programmable LEDs? - -The [WS2811/2 class of programmable lights][6] integrates red, green, and blue LED lights with a driver chip into a tiny surface-mounted package controlled through a single wire. 
-
-![Programmable LED light][7]
-
-(Darin London, [CC BY-SA 4.0][8])
-
-Each light can be individually programmed using an RGB set of integers or hex equivalents. These lights can be packaged together into matrices, strings, and other form factors, and they can be programmatically accessed using a data structure that makes sense for the form factor. The light strings I use are addressed using a standard Python list. Adafruit has a great [tutorial on wiring and controlling your lights][9].
-
-### Control NeoPixel LEDs with Python
-
-Adafruit has created a full suite of Python libraries for most of the parts it sells. These are designed to work with [CircuitPython][10], Adafruit's port of Python designed for low-cost microcontroller boards. You do not need to install CircuitPython on the Raspberry Pi OS because the preinstalled Python 2 and Python 3 are compatible.
-
-You will need `pip3` to install libraries for Python 3. Install it with:
-
-
-```
-`sudo apt-get install python3-pip`
-```
-
-Then install the following libraries:
-
- * [rpi_ws281x][11]
- * [Adafruit-circuitpython-neopixel][12]
- * [Adafruit-blinka][13]
-
-
-
-Once these libraries and their dependencies are installed, you can write code like the following to program one or more lights wired to your Raspberry Pi using `sudo python3` (sudo is required):
-
-
-```
-import board
-import neopixel
-num_lights = 50
-# program 50 lights with the default brightness 1.0, and auto_write true
-pixels = neopixel.NeoPixel(board.D18, num_lights)
-# light 20 bright green
-pixels[19] = (0,255,0)
-# light all pixels red
-pixels.fill((255,0,0))
-# turn off neopixels
-pixels.fill((0,0,0))
-```
-
-### Set up your lighting system
-
- 1. Install the SD card into the Raspberry Pi and secure it, the breadboard, and lights [where they need to be][14] (velcro works for the Pi and breadboard).
-
- 2. Install the 74AHCT125 level converter chip, light, power supply, and Pi according to this schematic:
-
-![Wiring schematic][15]
-
-([Kattni Rembor][16], [CC BY-SA 4.0][8])
-
- 3. String additional lights to the first light using their connectors. Note the total number of lights.
-
- 4. Plug the power supply into the wall.
-
- 5. Plug the Raspberry Pi power supply into the wall, and wait for it to boot.
-
-
-![Lighting hardware wiring][17]
-
-(Darin London, [CC BY-SA 4.0][8])
-
-![Lighting hardware wiring][18]
-
-(Darin London, [CC BY-SA 4.0][8])
-
-![Lighting hardware wiring][19]
-
-(Darin London, [CC BY-SA 4.0][8])
-
-### Install the light controller and Flask web application
-
-I wrote a Python application and library to interact with the lights and a Flask web application that runs on the Pi. See my [Raspberry Pi Neopixel Controller][20] GitHub repository for more information about the code.
-
-#### The lib.neopixc library
-
-The [`lib.neopixc` library][21] extends the `neopixel.NeoPixel` class to work with two 50-light Alitove light strands connected in serial, using a programmable list of RGB color lists. It adds the following functions:
-
- * `set_color`: Takes a new list of lists of RGB colors
- * `walk`: Walks through each light and sets them to the colors in order
- * `rotate`: Pushes the last color in the list of lists to the beginning of the list of lists for blinking the lights
-
-
-
-If you have a different number of lights, you will need to edit this library to change the `self._num_lights` value. Also, some lights require a different argument in the order constructor attribute.
The Alitove is compatible with the default order attribute `neopixel.GRBW`. - -#### The run_lights.py script - -The [`run_lights.py` script][22] uses `lib.neopixc` to support a colors file and a state file to dynamically set how the lights behave at any time. The colors file is a JSON array of arrays of RGB (or RGBW) integers that is fed as the colors to the `lib.neopixc` object using its `set_colors` method. The state file can hold one of three words: - - * `static`: Does not rotate the lights with each iteration of the while loop - * `blink`: Rotates the lights with each iteration of the main while loop - * `down`: Turns all the lights off - - - -If the state file does not exist, the default state is `static`. - -The script also has HUP and INT signal handlers, which will turn off the lights when those signals are received. - -Note: Because the GPIO 18 pin requires sudo on the Raspberry Pi to work, the `run_lights.py` script must be run with sudo. - -#### The neopixel_controller application - -The `neopixel_controller` Flask application, in the neopix_controller directory of the github repository (see below), offers a front-end browser graphical user interface (GUI) to control the lights. My raspberry pi connects to my wifi, and is accessible at raspberrypi.local. To access the GUI in a browser, go to . Alternatively, you can use ping to find the IP address of raspberrypi.local, and use it as the hostname, which is useful if you have multiple raspberry pi devices connected to your wifi. - -![Flask app UI][23] - -(Darin London, [CC BY-SA 4.0][8]) - -The current state and three front-end buttons use JavaScript to interact with a set of REST API endpoints presented by the Flask application: - - * `/api/v1/state`: Returns the current state of the shared state file, which defaults to `static` if the state file does not exist - * `/api/v1/blink`: Sets the state file to blink - * `/api/v1/static`: Sets the state file to static - * `/api/v1/down`: Sets the state file to down - - - -I wrote two scripts and corresponding JSON definition files that launch `run_lights.py` and the Flask application: - - * `launch_christmas.sh` - * `launch_new_years.sh` - - - -These can be launched from a command-line session (terminal or SSH) on the Pi after it is set up (they do not require sudo, but use sudo internally): - - -``` -`./launch_christmas.sh` -``` - -You can turn off the lights and stop `run_lights.sh` and the Flask application by using `lights_down.sh`. - -The code for the library and the flask application are in the [Raspberry Pi Neopixel Controller][20] GitHub repository. 
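To make the state-file idea above more concrete, here is a rough, self-contained sketch of a loop in that spirit. It is not the author's `run_lights.py`; it drives the strand with the plain `neopixel` calls shown earlier rather than `lib.neopixc`, and the file paths, helper name, and one-second polling interval are assumptions made only for illustration (like the real script, it must run under `sudo` because it uses GPIO 18):

```
# Illustrative sketch only -- not the author's run_lights.py.
# Paths, names, and timing are assumptions for the example.
import json
import time

import board
import neopixel

NUM_LIGHTS = 50
STATE_FILE = "/tmp/neopixel_state"   # assumed location; holds static, blink, or down
COLORS_FILE = "colors.json"          # assumed: JSON array of [R, G, B] arrays


def read_state():
    """Return the requested state, defaulting to 'static' if the file is missing."""
    try:
        with open(STATE_FILE) as handle:
            return handle.read().strip()
    except FileNotFoundError:
        return "static"


def main():
    pixels = neopixel.NeoPixel(board.D18, NUM_LIGHTS, auto_write=False)
    with open(COLORS_FILE) as handle:
        colors = [tuple(c) for c in json.load(handle)]

    while True:
        state = read_state()
        if state == "down":
            # Turn every light off
            pixels.fill((0, 0, 0))
        else:
            # Walk the strand, repeating the color list as needed
            for i in range(NUM_LIGHTS):
                pixels[i] = colors[i % len(colors)]
            if state == "blink":
                # Rotate: move the last color to the front for the next pass
                colors.insert(0, colors.pop())
        pixels.show()
        time.sleep(1)


if __name__ == "__main__":
    main()
```

Writing `static`, `blink`, or `down` into the state file changes the behavior on the next pass of the loop, which is the same mechanism the Flask endpoints described above use to control the real script.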
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/light-display-raspberry-pi - -作者:[Darin London][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dmlond -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh (Lightbulb) -[2]: https://www.amazon.com/gp/product/B06XD72LYM -[3]: https://www.adafruit.com/category/168 -[4]: https://www.amazon.com/gp/product/B01M0KLECZ -[5]: https://opensource.com/article/20/6/custom-raspberry-pi -[6]: https://learn.adafruit.com/adafruit-neopixel-uberguide -[7]: https://opensource.com/sites/default/files/uploads/led_1.jpg (Programmable LED light) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://learn.adafruit.com/neopixels-on-raspberry-pi -[10]: https://circuitpython.org/ -[11]: https://pypi.org/project/rpi-ws281x/ -[12]: https://circuitpython.readthedocs.io/projects/neopixel/en/latest/api.html -[13]: https://pypi.org/project/Adafruit-Blinka/ -[14]: https://gpiozero.readthedocs.io/en/stable/recipes.html#pin-numbering -[15]: https://opensource.com/sites/default/files/uploads/schematic.png (Wiring schematic) -[16]: https://learn.adafruit.com/assets/64121 -[17]: https://opensource.com/sites/default/files/uploads/wiring.jpg (Lighting hardware wiring) -[18]: https://opensource.com/sites/default/files/uploads/wiring2.jpg (Lighting hardware wiring) -[19]: https://opensource.com/sites/default/files/uploads/wiring3.jpg (Lighting hardware wiring) -[20]: https://github.com/dmlond/raspberry_pi_neopixel -[21]: https://github.com/dmlond/raspberry_pi_neopixel/blob/main/lib/neopixc.py -[22]: https://github.com/dmlond/raspberry_pi_neopixel/blob/main/run_lights.py -[23]: https://opensource.com/sites/default/files/uploads/neopixelui.png (Flask app UI) diff --git a/sources/tech/20210127 Introduction to Thunderbird mail filters.md b/sources/tech/20210127 Introduction to Thunderbird mail filters.md deleted file mode 100644 index 808a22f084..0000000000 --- a/sources/tech/20210127 Introduction to Thunderbird mail filters.md +++ /dev/null @@ -1,164 +0,0 @@ -[#]: subject: "Introduction to Thunderbird mail filters" -[#]: via: "https://fedoramagazine.org/introduction-to-thunderbird-mail-filters/" -[#]: author: "Richard England https://fedoramagazine.org/author/rlengland/" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Introduction to Thunderbird mail filters -====== -![][1] - -Everyone eventually runs into an inbox loaded with messages that they need to sort through. If you are like a lot of people, this is not a fast process. However, use of mail filters can make the task a little less tedious by letting Thunderbird pre-sort the messages into categories that reflect their source, priority, or usefulness. This article is an introduction to the creation of filters in Thunderbird. - -Filters may be created for each email account you have created in Thunderbird. These are the accounts you see in the main Thunderbird folder pane shown at the left of the “Classic Layout”. - -![][2] - -There are two methods that can be used to create mail filters for your accounts. 
The first is based on the currently selected account and the second on the currently selected message. Both are discussed here. - -### Message destination folder - -Before filtering messages there has to be a destination for them. Create the destination by selecting a location to create a new folder. In this example the destination will be **Local Folders**shown in the accounts pane. Right click on **Local Folders** and select *New Folder…* from the menu. - -![][3] - -Enter the name of the new folder in the menu and select *Create Folder.* The mail to filter is coming from the New York Times so that is the name entered. - -![][4] - -### Filter creation based on the selected account - -Select the *Inbox* for the account you wish to filter and select the toolbar menu item at *Tools > Message_Filters*. - -![][5] - -The *Message Filters* menu appears and is set to your pre-selected account as indicated at the top in the selection menu labelled *Filters for:*. - -![][6] - -Previously created filters, if any, are listed beneath the account name in the “*Filter Name”*column. To the right of this list are controls that let you modify the filters selected. These controls are activated when you select a filter. More on this later. - -Start creating your filter as follows: - -1. Verify the correct account has been pre-selected. It may be changed if necessary. -2. Select New… from the menu list at the right. - -When you select *New* you will see the *Filter Rules*menu where you define your filter. Note that when using *New…* you have the option to copy an existing filter to use as a template or to simply duplicate the settings. - -Filter rules are made up of three things, the “property” to be tested, the “test”, and the “value” to be tested against. Once the condition is met, the “action” is performed. - -![][7] - -Complete this filter as follows: - -1. Enter an appropriate name in the textbox labelled Filter name: -2. Select the property From in the left drop down menu, if not set. -3. Leave the test set to contains. -4. Enter the value, in this case the email address of the sender. - -Under the *Perform these actions:* section at the bottom, create an action rule to move the message and choose the destination. - -1. Select Move Messages to from the left end of the action line. -2. Select Choose Folder… and select Local Folders > New York Times. -3. Select OK. - -By default the **Apply filter when:** is set to *Manually Run* and *Getting New Mail:*. This means that when new mail appears in the Inbox for this account the filter will be applied and you may run it manually at any time, if necessary. There are other options available but they are too numerous to be discussed in this introduction. They are, however, for the most part self explanatory. - -If more than one rule or action is to be created during the same session, the “+” to the right of each entry provides that option. Additional property, test, and value entries can be added. If more than one rule is created, make certain that the appropriate option for *Match all of the following* and *Match any of the following* is selected. In this example the choice does not matter since we are only setting one filter. - -After selecting *OK,*the *Message Filters* menu is displayed again showing your newly created filter. Note that the menu items on the right side of the menu are now active for *Edit…* and *Delete.* - -![][8] - -Also notice the message *“Enabled filters are run automatically in the order shown below”*. 
If there are multiple filters the order is changed by selecting the one to be moved and using the *Move to Top, Move Up, Move Down,*or*Move to Bottom* buttons. The order can change the destination of your messages so consider the tests used in each filter carefully when deciding the order. - -Since you have just created this filter you may wish to use the *Run Now* button to run your newly created filter on the Inbox shown to the left of the button. - -### Filter creation based on a message - -An alternative creation technique is to select a message from the message pane and use the *Create Filter From Message…* option from the menu bar. - -In this example the filter will use two rules to select the messages: the email address and a text string in the Subject line of the email. Start as follows: - -1. Select a message in the message page. -2. Select the filter options on the toolbar at Message > Create Filter From Message…. - -![][9] - -The pre-selected message, highlighted in grey in the message pane above, determines the account used and *Create Filter From Message…* takes you directly to the *Filter Rules* menu. - -![][10] - -The property (*From*), test (*is*), and value (email) are pre-set for you as shown in the image above. Complete this filter as follows: - -1. Enter an appropriate name in the textbox labelled Filter name:. COVID is the name in this case. -2. Check that the property is From. -3. Verify the test is set to is. -4. Confirm that the value for the email address is from the correct sender. -5. Select the “+” to the right of the From rule to create a new filter rule. -6. In the new rule, change the default property entry From to Subject using the pulldown menu. -7. Set the test to contains. -8. Enter the value text to be matched in the Email “Subject” line. In this case COVID. - -Since we left the *Match all of the following* item checked, each message will be from the address chosen AND will have the text *COVID* in the email subject line. - -Now use the action rule to choose the destination for the messages under the *Perform these actions:* section at the bottom: - -1. Select Move Messages to from the left menu. -2. Select Choose Folder… and select Local Folders > COVID in Scotland. (This destination was created before this example was started. There was no magic here.) -3. Select OK. - -*OK* will cause the *Message Filters* menu to appear, again, verifying that the new filter has been created. - -### The Message Filters menu - -All the message filters you create will appear in the *Message Filters* menu. Recall that the *Message Filters* is available in the menu bar at *Tools > Message Filters*. - -Once you have created filters there are several options to manage them. To change a filter, select the filter in question and click on the *Edit* button. This will take you back to the *Filter Rules* menu for that filter. As mentioned earlier, you can change the order in which the rules are apply here using the *Move* buttons. Disable a filter by clicking on the check mark in the *Enabled* column. - -![][11] - -The *Run Now* button will execute the selected filter immediately. You may also run your filter from the menu bar using *Tools > Run Filters on Folder* or *Tools > Run Filters on Message*. - -### Next step - -This article hasn’t covered every feature available for message filtering but hopefully it provides enough information for you to get started. 
Places for further investigation are the “property”, “test”, and “actions” in the *Filter menu* as well as the settings there for when your filter is to be run, *Archiving, After Sending,* and *Periodically*. - -### References - -Mozilla: [Organize][12][Your Messages][13][by Using Filters][14] - -MozillaZine: [Message][15][Filters][16] - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/introduction-to-thunderbird-mail-filters/ - -作者:[Richard England][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/rlengland/ -[b]: https://github.com/lkxed -[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/Tbird_mail_filters-1-816x345.jpg -[2]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_001-1024x613.png -[3]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_New_Folder.png -[4]: https://fedoramagazine.org/wp-content/uploads/2021/01/Folder_name-1.png -[5]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_002-2-1024x672.png -[6]: https://fedoramagazine.org/wp-content/uploads/2021/01/Image_Message_Filters-1.png -[7]: https://fedoramagazine.org/wp-content/uploads/2021/01/Filter_rules_1-1.png -[8]: https://fedoramagazine.org/wp-content/uploads/2021/01/Messsage_Filters_1st_entry.png -[9]: https://fedoramagazine.org/wp-content/uploads/2021/01/Create_by_messasge.png -[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Filter_rules_2-1.png -[11]: https://fedoramagazine.org/wp-content/uploads/2021/01/Message_Filters_2nd_entry.png -[12]: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters -[13]: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters -[14]: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters -[15]: http://kb.mozillazine.org/Filters_%28Thunderbird%29 -[16]: http://kb.mozillazine.org/Filters_%28Thunderbird%29 diff --git a/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer & Linux Fellow.md b/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer & Linux Fellow.md deleted file mode 100644 index 09152cb53a..0000000000 --- a/sources/tech/20210128 Interview with Shuah Khan, Kernel Maintainer & Linux Fellow.md +++ /dev/null @@ -1,170 +0,0 @@ -[#]: subject: "Interview with Shuah Khan, Kernel Maintainer & Linux Fellow" -[#]: via: "https://www.linux.com/news/interview-with-shuah-khan-kernel-maintainer-linux-fellow/" -[#]: author: "The Linux Foundation https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Interview with Shuah Khan, Kernel Maintainer & Linux Fellow -====== -Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, had an opportunity to speak with Shuah Khan about her experiences as a woman in the technology industry. She discusses how mentorship can improve the overall diversity and makeup of open source projects, why software maintainers are important for the health of open source projects such as the Linux kernel, and how language inclusivity and codes of conduct can improve relationships and communication between software maintainers and individual contributors. 
- -**JP:** So, Shuah, I know you wear many different hats at the Linux Foundation. What do you call yourself around here these days? - -**SK:** Well, I primarily call myself a Kernel Maintainer & Linux Fellow. In addition to that, I focus on two areas that are important to the continued health and sustainability of the open source projects in the Linux ecosystem. The first one is bringing more women into the Kernel community, and additionally, I am leading the mentorship program efforts overall at the Linux Foundation. And in that role, in addition to the Linux Kernel Mentorship, we are looking at how the Linux Foundation mentorship program is working overall, how it is scaling. I make sure the [LFX Mentorship][1] platform scales and serves diverse mentees and mentors’ needs in this role. - -The LF mentorships program includes several projects in the Linux kernel, LFN, HyperLedger, Open MainFrame, OpenHPC, and other technologies. [The Linux Foundation’s Mentorship Programs][2] are designed to help developers with the necessary skills–many of whom are first-time open source contributors–experiment, learn, and contribute effectively to open source communities. - -The mentorship program has been successful in its mission to train new developers and make these talented pools of prospective employees trained by experts to employers. Several graduated mentees have found jobs. New developers have improved the quality and security of various open source projects, including the Linux kernel. Several Linux kernel bugs were fixed, a new subsystem mentor was added, and a new driver maintainer is now part of the Linux kernel community. My sincere thanks to all our mentors for volunteering to share their expertise. - -**JP:** How long have you been working on the Kernel? - -**SK:** Since 2010, or 2011, I got involved in the [Android Mainlining project][3]. My [first patch removed the Android pmem driver][4]. - -**JP:** Wow! Is there any particular subsystem that you specialize in? - -**SK:** I am a self described generalist. I maintain the [kernel self-test][5] subsystem, the [USB over IP driver][6], [usbip tool][7], and the [cpupower][8] tool. I contributed to the media subsystem working on [Media Controller Device Allocator API][9] to resolve shared device resource management problems across device drivers from different subsystems. - -**JP:** Hey, I’ve [actually used the USB over IP driver][10] when I worked at Microsoft on Azure. And also, when I’ve used AWS and Google Compute. - -**SK:** It’s a small niche driver used in cloud computing. Docker and other containers use that driver heavily. That’s how they provide remote access to USB devices on the server to export devices to be imported by other systems for use. - -**JP:** I initially used it for IoT kinds of stuff in the embedded systems space. Were you the original lead developer on it, or was it one of those things you fell into because nobody else was maintaining it? - -**SK:** Well, twofold. I was looking at USB over IP because I like that technology. it just so happened the driver was brought from the staging tree into the Mainline kernel, I volunteered at the time to maintain it. Over the last few years, we discovered some security issues with it, because it handles a lot of userspace data, so I had a lot of fun fixing all of those. . - -**JP:** What drew you into the Linux operating system, and what drew you into the kernel development community in the first place? - -**SK:** Well, I have been doing kernel development for a very long time. 
I worked on the [LynxOS RTOS][11], a while back, and then HP/UX, when I was working at HP, after which I transitioned into  doing open source development — the [OpenHPI][12] project, to support HP’s rack server hardware, and that allowed me to work much more closely with Linux on the back end. And at some point, I decided I wanted to work with the kernel and become part of the Linux kernel community. I started as an independent contributor. - -**JP:** Maybe it just displays my own ignorance, but you are the first female, hardcore Linux kernel developer I have ever met. I mean, I had met female core OS developers before — such as when I was at Microsoft and IBM — but not for Linux. Why do you suppose we lack women and diversity in general when participating in open source and the technology industry overall? - -**SK:** So I’ll answer this question from my perspective, from what I have seen and experienced, over the years. You are right; you probably don’t come across that many hardcore women Kernel developers. I’ve been working professionally in this industry since the early 1990s, and on every project I have been involved with, I am usually the only woman sitting at the table. Some of it, I think, is culture and society. There are some roles that we are told are acceptable to women — even me, when I was thinking about going into engineering as a profession. Some of it has to do with where we are guided, as a natural path. - -There’s a natural resistance to choosing certain professions that you have to overcome first within yourself and externally. This process is different for everybody based on their personality and their origin story. And once you go through the hurdle of getting your engineering degree and figuring out which industry you want to work in, there is a level of establishing credibility in those work environments you have to endure and persevere. Sometimes when I would walk into a room, I felt like people were looking at me and thinking, “why is she here?” You aren’t accepted right away, and you have to overcome that as well. You have to go in there and say, “I am here because I want to be here, and therefore, I belong here.” You have to have that mindset. Society sends you signals that “this profession is not for me” — and you have to be aware of that and resist it. I consider myself an engineer that happens to be a woman as opposed to a woman engineer. - -**JP:** Are you from India, originally? - -**SK:** Yes. - -**JP:** It’s funny; my wife really likes this [Netflix show about matchmaking in India][13]. Are you familiar with it? - -**SK:** Yes I enjoyed the series, and [A Suitable Girl][14] documentary film that follows three women as they navigate making decisions about their careers and family obligations. - -**JP:** For many Americans, this is our first introduction to what home life is like for Indian people. But many of the women featured on this show are professionals, such as doctors, lawyers, and engineers. And they are very ambitious, but of course, the family tries to set them up in a marriage to find a husband for them that is compatible. As a result, you get to learn about the traditional values and roles they still want women to play there — while at the same time, many women are coming out of higher learning institutions in that country that are seeking technical careers. - -**SK:** India is a very fascinatingly complex place. 
But generally speaking, in a global sense, having an environment at home where your parents tell you that you may choose any profession you want to choose is very encouraging. I was extremely fortunate to have parents like that. They never said to me that there was a role or a mold that I needed to fit into. They have always told me, “do what you want to do.” Which is different; I don’t find that even here, in the US. Having that support system, beginning in the home to tell you, “you are open to whatever profession you want to choose,” is essential. That’s where a lot of the change has to come from. - -**JP:** Women in technical and STEM professions are becoming much more prominent in other countries, such as China, Japan, and Korea. For some reason, in the US, I tend to see more women enter the medical profession than hard technology — and it might be a level of effort and perceived reward thing. You can spend eight years becoming a medical doctor or eight years becoming a scientist or an engineer, and it can be equally difficult, but the compensation at the end may not be the same. It’s expensive to get an education, and it takes a long time and hard work, regardless of the professional discipline. - -**SK:** I have also heard that women also like to enter professions where they can make a difference in the world — a human touch, if you will. So that may translate to them choosing careers where they can make a larger impact on people — and they may view careers in technology as not having those same attributes. Maybe when we think about attracting women to technology fields, we might have to promote technology aspects that make a difference. That may be changing now, such as the [LF Public Health][15] (LFPH) project we kicked off last year. And with [LF AI & Data Foundation][16], we are also making a difference in people’s lives, such as [detecting earthquakes][17] or [analyzing climate change][18]. If we were to promote projects such as these, we might draw more women in. - -**JP:** So clearly, one of the areas of technology where you can make a difference is in open source, as the LF is hosting some very high-concept and existential types of projects such as [LF Energy][19], for example — I had no idea what was involved in it and what its goals were until I spoke to [Shuli Goodman][20] in-depth about it. With the mentorship program, I assume we need this to attract fresh talent — because as folks like us get older and retire, and they exit the field, we need new people to replace them. So I assume mentorship, for the Linux Foundation, is an investment in our own technologies, correct? - -**SK:** Correct. Bringing in new developers into the fold is the primary purpose, of course — and at the same time, I view the LF as taking on mentorship provides that neutral, level playing field across the industry for all open source projects. Secondly, we offer a self-service platform, [LFX Mentorship][21], where anyone can come in and start their project. So when the COVID-19 pandemic began, we [expanded this program to help displaced people][22] — students, et cetera, and less visible projects. Not all projects typically get as much funding or attention as others do — such as a Kubernetes or  Linux kernel — among the COVID mentorship program projects we are funding. I am particularly proud of supporting a climate change-related project, [Using Machine Learning to Predict Deforestation][23]. - -The self-service approach allows us to fund and add new developers to projects where they are needed. 
The LF mentorships are remote work opportunities that are accessible to developers around the globe. We see people sign up for mentorship projects from places we haven’t seen before, such as Africa, and so on, thus creating a level playing field. - -The other thing that we are trying to increase focus on is how do you get maintainers? Getting new developers is a starting point, but how do we get them to continue working on the projects they are mentored on? As you said, someday, you and I and others working on these things are going to retire, maybe five or ten years from now. This is a harder problem to solve than training and adding new developers to the project itself. - -**JP:** And that is core to our [software supply chain security mission][24]. It’s one thing to have this new, flashy project, and then all these developers say, “oh wow, this is cool, I want to join that,” but then, you have to have a certain number of people maintaining it for it to have long-term viability. As we learned in our [FOSS study with Harvard][25], there are components in the Linux operating system that are like this. Perhaps even modules within the kernel itself, I assume that maybe you might have only one or two people actively maintaining it for many years. And what happens if that person dies or can no longer work? What happens to that code? And if someone isn’t familiar with that code, it might become abandoned. That’s a serious problem in open source right now, isn’t it? - -**SK:** Right. We have seen that with SSH and other security-critical areas. What if you don’t have the bandwidth to fix it? Or the money to fix it? I ended up volunteering to maintain a tool for a similar reason when the maintainer could no longer contribute regularly. It is true; we have many drivers where maintainer bandwidth is an issue in the kernel. So the question is, how do we grow that talent pool? - -**JP:** Do we need a job board or something? We need X number of maintainers. So should we say, “Hey, we know you want to join the kernel project as a contributor, and we have other people working on this thing, but we really need your help working on something else, and if you do a good job, we know tons of companies willing to hire developers just like you?” - -**SK:** With the kernel, we are talking about organic growth; it is just like any other open source project. It’s not a traditional hire and talent placement scenario. Organically they have to have credibility, and they have to acquire it through experience and relationships with people on those projects. We just talked about it at the previous [Linux Plumbers Conference][26], we do have areas where we really need maintainers, and the [MAINTAINERS][27] file does show areas where they need help. - -To answer your question, it’s not one of those things where we can seek people to fill that role, like LinkedIn or one of the other job sites. It has to be an organic fulfillment of that role, so the mentorship program is essential in creating those relationships. It is the double-edged sword of open source; it is both the strength and weakness. People need to have an interest in becoming a maintainer and also a commitment to being one, long term. - -**JP:** So, what do you see as the future of your mentorship and diversity efforts at the Linux Foundation? What are you particularly excited about that is forthcoming that you are working on? 
- -**SK:** I view the Linux Foundation mentoring as a three-pronged approach to provide unstructured webinars, training courses, and structured mentoring programs. All of these efforts combine to advance a diverse, healthy, and vibrant open source community. So over the past several months, we have been morphing our speed mentorship style format into an expanded webinar format — the [LF Live Mentorship series][28]. This will have the function of growing our next level of expertise. As a complement to our traditional mentorship programs, these are webinars and courses that are an hour and a half long that we hold a few times a month that tackle specific technical areas in software development. So it might cover how to write great commit logs, for example, for your patches to be accepted, or how to find bugs in C code. Commit logs are one of those things that are important to code maintenance, so promoting good documentation is a beneficial thing. Webinars provide a way for experts short on time to share their knowledge with a few hours of time commitment and offer a self-paced learning opportunity to new developers. - -Additionally, I have started the [Linux Kernel Mentorship forum][29] for developers and their mentors to connect and interact with others participating in the Linux Kernel Mentorship program and graduated mentees to mentor new developers. We kicked off [Linux Kernel mentorship Spring 2021][30] and are planning for Summer and Fall. - -A big challenge is we are short on mentors to be able to scale the structured program. Solving the problem requires help from LF member companies and others to encourage their employees to mentor, “it takes a village,” they say. - -**JP:** So this webinar series and the expanded mentorship program will help developers cultivate both hard and soft skills, then. - -**SK:** Correct. The thing about doing webinars is that if we are talking about this from a diversity perspective, they might not have time for a full-length mentorship, typically like a three-month or six-month commitment. This might help them expand their resources for self-study. When we ask for developers’ feedback about what else they need to learn new skill sets, we hear that they don’t have resources, don’t have time to do self-study, and learn to become open source developers and software maintainers. This webinar series covers general open source software topics such as the Linux kernel and legal issues. It could also cover topics specific to other LF projects such as CNCF, Hyperledger, LF Networking, etc. - -**JP:** Anything else we should know about the mentorship program in 2021? - -**SK:** In my view,  attracting diversity and new people is two-fold. One of the things we are working on is inclusive language. Now, we’re not talking about curbing harsh words, although that is a component of what we are looking at. The English you and I use in North America isn’t the same English used elsewhere. As an example, when we use North American-centric terms in our email communications, such as when a maintainer is communicating on a list with people from South Korea, something like “where the rubber meets the road” may not make sense to them at all. So we have to be aware of that. - -**JP:** I know that you are serving on the [Linux kernel Code of Conduct Committee][31] and actively developing the handbook. When I first joined the Linux Foundation, I learned what the Community Managers do and our governance model. 
I didn’t realize that we even needed to have codes of conduct for open source projects. I have been covering open source for 25 years, but I come out of the corporate world, such as IBM and Microsoft. Codes of Conduct are typically things that the Human Resources officer shows you during your initial onboarding, as part of reviewing your employee manual. You are expected to follow those rules as a condition of employment. - -So why do we need Codes of Conduct in an open source project? Is it because these are people who are coming from all sorts of different backgrounds, companies, and ways of life, and may not have interacted in this form of organized and distributed project before? Or is it about personalities, people interacting with each other over long distance, and email, which creates situations that may arise due to that separation? - -**SK:** Yes, I come out of the corporate world as well, and of course, we had to practice those codes of conduct in that setting. But conduct situations arise that you have to deal with in the corporate world. There are always interpersonal scenarios that can be difficult or challenging to work with — the corporate world isn’t better than the open source world in that respect. It is just that all of that happens behind a closed setting. - -But there is no accountability in the open source world because everyone participates out of their own free will. So on a small, traditional closed project, inside the corporate world, where you might have 20 people involved, you might get one or two people that could be difficult to work with. The same thing happens and is multiplied many times in the open source community, where you have hundreds of thousands of developers working across many different open source projects. - -The biggest problem with these types of projects when you encounter situations such as this is dealing with participation in public forums. In the corporate world, this can be addressed in private. But on a public mailing list, if you are being put down or talked down to, it can be extremely humiliating. - -These interactions are not always extreme cases; they could be simple as a maintainer or a lead developer providing negative feedback — so how do you give it? It has to be done constructively. And that is true for all of us. - -**JP:** Anything else? - -**SK:** In addition to bringing our learnings and applying this to the kernel project, I am also doing this on the [ELISA][32] project, where I chair the Technical Steering Committee, where I am bridging communication between experts from the kernel and the safety communities. To make sure we can use the kernel the best ways in safety-critical applications, in the automotive and medical industry, and so on. Many lessons can be learned in terms of connecting the dots, defining clearly what is essential to make Linux run effectively in these environments, in terms of dependability. How can we think more proactively instead of being engaged in fire-fighting in terms of security or kernel bugs? As a result of this, I am also working on any necessary kernel changes needed to support these safety-critical usage scenarios. - -**JP:** Before we go, what are you passionate about besides all this software stuff? If you have any free time left, what else do you enjoy doing? - -**SK:** I read a lot. COVID quarantine has given me plenty of opportunities to read. I like to go hiking, snowshoeing, and other outdoor activities. Living in Colorado gives me ample opportunities to be in nature. 
I also like backpacking — while I wasn’t able to do it last year because of COVID — I like to take backpacking trips with my son. I also love to go to conferences and travel, so I am looking forward to doing that again as soon as we are able. - -Talking about backpacking reminded me of the two-day, 22-mile backpacking trip during the summer of 2019 with my son. You can see me in the picture above at the end of the road, carrying a bearbox, sleeping bag, and hammock. It was worth injuring my foot and hurting in places I didn’t even know I had. - -**JP:** Awesome. I enjoyed talking to you today. So happy I finally got to meet you virtually. - -The post [Interview with Shuah Khan, Kernel Maintainer & Linux Fellow][33] appeared first on [Linux Foundation][34]. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/interview-with-shuah-khan-kernel-maintainer-linux-fellow/ - -作者:[The Linux Foundation][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/ -[b]: https://github.com/lkxed -[1]: https://lfx.linuxfoundation.org/tools/mentorship/ -[2]: https://linuxfoundation.org/about/diversity-inclusivity/mentorship/ -[3]: https://elinux.org/Android_Mainlining_Project -[4]: https://lkml.org/lkml/2012/1/26/368 -[5]: https://www.kernel.org/doc/html/v4.15/dev-tools/kselftest.html -[6]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/usb/usbip -[7]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/usb/usbip -[8]: https://www.systutorials.com/docs/linux/man/1-cpupower/ -[9]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/mc/mc-dev-allocator.c -[10]: https://www.linux-magazine.com/Issues/2018/208/Tutorial-USB-IP -[11]: https://en.wikipedia.org/wiki/LynxOS -[12]: http://www.openhpi.org/Developers -[13]: https://www.netflix.com/title/80244565 -[14]: https://en.wikipedia.org/wiki/A_Suitable_Girl_(film) -[15]: https://www.lfph.io/ -[16]: https://lfaidata.foundation/ -[17]: https://openeew.com/ -[18]: https://www.os-climate.org/ -[19]: https://www.lfenergy.org/ -[20]: https://www.linux.com/mailto:sgoodman@contractor.linuxfoundation.org -[21]: https://mentorship.lfx.linuxfoundation.org/ -[22]: https://linuxfoundation.org/about/diversity-inclusivity/mentorship/ -[23]: https://mentorship.lfx.linuxfoundation.org/project/926665ac-9b96-45aa-bb11-5d99096be870 -[24]: https://www.linuxfoundation.org/en/blog/preventing-supply-chain-attacks-like-solarwinds/ -[25]: https://www.linuxfoundation.org/en/press-release/new-open-source-contributor-report-from-linux-foundation-and-harvard-identifies-motivations-and-opportunities-for-improving-software-security/ -[26]: https://www.linuxplumbersconf.org/ -[27]: https://www.kernel.org/doc/linux/MAINTAINERS -[28]: https://events.linuxfoundation.org/lf-live-mentorship-series/ -[29]: https://forum.linuxfoundation.org/categories/lfx-mentorship-linux-kernel -[30]: https://forum.linuxfoundation.org/discussion/858202/linux-kernel-mentorship-spring-projects-are-now-accepting-applications#latest -[31]: https://www.kernel.org/code-of-conduct.html -[32]: https://elisa.tech/ -[33]: https://www.linuxfoundation.org/en/blog/interview-with-shuah-khan-kernel-maintainer-linux-fellow/ -[34]: 
https://www.linuxfoundation.org/ diff --git a/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md b/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md deleted file mode 100644 index 7d672783b8..0000000000 --- a/sources/tech/20210128 Start programming in Racket by writing a -guess the number- game.md +++ /dev/null @@ -1,152 +0,0 @@ -[#]: subject: "Start programming in Racket by writing a "guess the number" game" -[#]: via: "https://opensource.com/article/21/1/racket-guess-number" -[#]: author: "Cristiano L. Fontana https://opensource.com/users/cristianofontana" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Start programming in Racket by writing a "guess the number" game -====== -Racket is a great way to learn a language from the Scheme and Lisp families. - -![Person using a laptop][1] - -I am a big advocate of learning multiple programming languages. That's mostly because I tend to get bored with the languages I use the most. It also teaches me new and interesting ways to approach programming. - -Writing the same program in multiple languages is a good way to learn their differences and similarities. Previously, I wrote articles showing the same sample data plotting program written in [C & C++][2], JavaScript with [Node.js][3], and [Python and Octave][4]. - -This article is part of another series about writing a "guess the number" game in different programming languages. In this game, the computer picks a number between one and 100 and asks you to guess it. The program loops until you make a correct guess. - -### Learning a new language - -Venturing into a new language always feels awkward—I feel like I am losing time since it would be much quicker to use the tools I know and use all the time. Luckily, at the start, I am also very enthusiastic about learning something new, and this helps me overcome the initial pain. And once I learn a new perspective or a solution that I would never have thought of, things become interesting! Learning new languages also helps me backport new techniques to my old and tested tools. - -When I start learning a new language, I usually look for a tutorial that introduces me to its [syntax][5]. Once I have a feeling for the syntax, I start working on a program I am familiar with and look for examples that will adapt to my needs. - -### What is Racket? - -[Racket][6] is a programming language in the [Scheme family][7], which is a dialect of [Lisp][8]. Lisp is also a family of languages, which can make it hard to decide which "dialect" to start with when you want to learn Lisp. All of the implementations have various degrees of compatibility, and this plethora of options might turn away newbies. I think that is a pity because these languages are really fun and stimulating! - -Starting with Racket makes sense because it is very mature and versatile, and the community is very active. Since Racket is a Lisp-like language, a major characteristic is that it uses the [prefix notation][9] and a [lot of parentheses][10]. Functions and operators are applied to a list of operands by prefixing them: - -``` -(function-name operand operand ...) - -(+ 2 3) -↳ Returns 5 - -(list 1 2 3 5) -↳ Returns a list containing 1, 2, 3, and 5 - -(define x 1) -↳ Defines a variable called x with value of 1 - -(define (f x y) (* x x)) -↳ Defines a function called f with two parameters called x and y that returns their product. 
-``` - -This is basically all there is to know about Racket syntax; the rest is learning the functions from the [documentation][11], which is very thorough. There are other aspects of the syntax, like [keyword arguments][12] and [quoting][13], but you do not need them for this example. - -Mastering Racket might be difficult, and its syntax might look weird (especially if you are used to languages like Python), but I find it very fun to use. A big bonus is Racket's programming environment, [DrRacket][14], which is very supportive, especially when you are getting started with the language. - -The major Linux distributions offer packaged versions of Racket, so [installation][15] should be easy. - -### Guess the number game in Racket - -Here is a version of the "guess the number" program written in Racket: - -``` -#lang racket - -(define (inquire-user number) -  (display "Insert a number: ") -  (define guess (string->number (read-line))) -  (cond [(> number guess) (displayln "Too low") (inquire-user number)] -        [(< number guess) (displayln "Too high") (inquire-user number)] -        [else (displayln "Correct!")])) - -(displayln "Guess a number between 1 and 100") -(inquire-user (random 1 101)) -``` - -Save this listing to a file called `guess.rkt` and run it: - -``` -$ racket guess.rkt -``` - -Here is some example output: - -``` -Guess a number between 1 and 100 -Insert a number: 90 -Too high -Insert a number: 50 -Too high -Insert a number: 20 -Too high -Insert a number: 10 -Too low -Insert a number: 12 -Too low -Insert a number: 13 -Too low -Insert a number: 14 -Too low -Insert a number: 15 -Correct! -``` - -### Understanding the program - -I'll go through the program line by line. The first line declares the language the listing is written into: `#lang racket`. This might seem strange, but Racket is very good at [writing interpreters][16] for new [domain-specific languages][17]. Do not panic, though! You can use Racket as it is because it is very rich in tools. - -Now for the next line. `(define ...)` is used to declare new variables or functions. Here, it defines a new function called `inquire-user` that accepts the parameter `number`. The `number` parameter is the random number that the user will have to guess. The rest of the code inside the parentheses of the `define` procedure is the body of the `inquire-user` function. Notice that the function name contains a dash; this is Racket's idiomatic style for writing a long variable name. - -This function recursively calls itself to repeat the question until the user guesses the right number. Note that I am not using loops; I feel that Racket programmers do not like loops and only use recursive functions. This approach is idiomatic to Racket, but if you prefer, [loops are an option][18]. - -The first step of the `inquire-user` function asks the user to insert a number by writing that string to the console. Then it defines a variable called `guess` that contains whatever the user entered. The [read-line function][19] returns the user input as a string. The string is then converted to a number with the [string->number function][20]. After the variable definition, the [cond function][21] accepts a series of conditions. If a condition is satisfied, it executes the code inside that condition. These conditions, `(> number guess)` and `(< number guess)`, are followed by two functions: a `displayln` that gives clues to the user and a `inquire-user` call. The function calls itself again when the user does not guess the right number. 
The `else` clause executes when the two conditions are not met, i.e., the user enters the correct number. The program's guts are this `inquire-user` function. - -However, the function still needs to be called! First, the program asks the user to guess a number between 1 and 100, and then it calls the `inquire-user` function with a random number. The random number is generated with the [random function][22]. You need to inform the function that you want to generate a number between 1 and 100, but the `random` function generates integer numbers up to `max-1`, so I used 101. - -### Try Racket - -Learning new languages is fun! I am a big advocate of programming languages polyglotism because it brings new, interesting approaches and insights to programming. Racket is a great opportunity to start learning how to program with a Lisp-like language. I suggest you give it a try. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/racket-guess-number - -作者:[Cristiano L. Fontana][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cristianofontana -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/laptop_screen_desk_work_chat_text.png -[2]: https://opensource.com/article/20/2/c-data-science -[3]: https://opensource.com/article/20/6/data-science-nodejs -[4]: https://opensource.com/article/20/2/python-gnu-octave-data-science -[5]: https://en.wikipedia.org/wiki/Syntax_(programming_languages) -[6]: https://racket-lang.org/ -[7]: https://en.wikipedia.org/wiki/Scheme_(programming_language) -[8]: https://en.wikipedia.org/wiki/Lisp_(programming_language) -[9]: https://en.wikipedia.org/wiki/Polish_notation -[10]: https://xkcd.com/297/ -[11]: https://docs.racket-lang.org/ -[12]: https://rosettacode.org/wiki/Named_parameters#Racket -[13]: https://docs.racket-lang.org/guide/quote.html -[14]: https://docs.racket-lang.org/drracket/ -[15]: https://download.racket-lang.org/ -[16]: https://docs.racket-lang.org/guide/hash-languages.html -[17]: https://en.wikipedia.org/wiki/Domain-specific_language -[18]: https://docs.racket-lang.org/heresy/conditionals.html -[19]: https://docs.racket-lang.org/reference/Byte_and_String_Input.html?q=read-line#%28def._%28%28quote._~23~25kernel%29._read-line%29%29 -[20]: https://docs.racket-lang.org/reference/generic-numbers.html?q=string-%3Enumber#%28def._%28%28quote._~23~25kernel%29._string-~3enumber%29%29 -[21]: https://docs.racket-lang.org/reference/if.html?q=cond#%28form._%28%28lib._racket%2Fprivate%2Fletstx-scheme..rkt%29._cond%29%29 -[22]: https://docs.racket-lang.org/reference/generic-numbers.html?q=random#%28def._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._random%29%29 diff --git a/sources/tech/20210130 How I de-clutter my digital workspace.md b/sources/tech/20210130 How I de-clutter my digital workspace.md deleted file mode 100644 index 4a62c73b9e..0000000000 --- a/sources/tech/20210130 How I de-clutter my digital workspace.md +++ /dev/null @@ -1,73 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How I de-clutter my digital workspace) -[#]: via: (https://opensource.com/article/21/1/declutter-workspace) -[#]: author: (Kevin Sonney https://opensource.com/users/ksonney) - -How I de-clutter my digital workspace -====== 
-Archive old email and other files to de-clutter your digital workspace. -![video editing dashboard][1] - -In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 20 of 21 Days of Productivity in 2021. - -I am a digital pack-rat. So many of us are. After all, who knows when we'll need that email our partner sent asking us to pick up milk on our way home from work in 2009? - -The truth is, we don't need it. We _really_ don't. However, large cloud providers have given us so much storage space for cheap or for free that we don't even think about it anymore. When I can have unlimited documents, notes, to-do items, calendar appointments, and email, why _shouldn't_ I just keep everything? - -![Marie Kondo indicating clearing email will bring you joy][2] - -It really does. (Kevin Sonney, [CC BY-SA 4.0][3]) - -When dealing with physical items, like a notebook or a stack of documents, there comes a point where it is obvious we need to move it off our desks or out of our offices. We need to store it in some other place where we can get to it if we need to, but also know it will be safe. Eventually, that too fills up, and we are forced to clean out that storage as well. - -Digital storage is really no different, only we're tempted to keep more things in it. If I have a note on my desk to pick something up on the way home, I'm going to throw it away when I'm done with it. That same note in my shared notebook with my wife is likely just to stay there. Maybe we'll re-use it, maybe we'll just let it sit there, taking up space. - -This approach is the same as "hot" and "cold" storage. Hot storage is the most recent and relevant data that tends to be accessed frequently. Cold storage is for the archives we might need to refer to in the future and may have historical significance, but doesn't need to be accessed frequently. - -Last year I took the time to export all of my emails from before 2019 and put them in an archive file. Then I deleted them. Why? For starters, I really didn't need any of it anymore. Sure it is nice to have the emails my spouse and I sent each other when we started dating, but they are not something I look at daily or even monthly. I could put them in cold storage, where they would be safe, and I could get them when I did want to look at them. The same for the emails and schedules for the conventions I had worked at before the pandemic. Do I need to have the schedule grid for my department at AnthroCon 2015 at my fingertips? NOPE. - -### Archiving messages - -The process of archiving messages will differ, depending on what email client you use, but the general idea is the same. In KMail, the email client from KDE, you can archive (and export, by nature of the archival process) a folder of messages by right-clicking on a folder and selecting **Archive Folder**. I also have KMail remove the messages after completing the archive. - -![Archiving a directory of messages in KMail][4] - -On the GNOME side of things, you can either export a folder of messages as an **mbox** file or you can use the **Save As** option to export it as an archive. - -![Archive file of 2007-2019 email.][5] - -8 Gb of mail (Kevin Sonney, [CC BY-SA 4.0][3]) - -If you're not on Linux, you might look into using the Thunderbird client. In Thunderbird, highlight the messages you want to archive (or press **Ctrl+A** to select all of them, and then right-click and select **Archive**. 
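If you end up with one of those exported **mbox** files, the Python standard library's `mailbox` module offers one way to slice old mail into per-year cold-storage archives. This is only a sketch of the idea, not code from the article; the file names and the 2019 cutoff are assumptions, and it copies messages rather than deleting anything:

```
# Illustrative sketch: split an exported mbox into per-year archive files.
# File names and the cutoff year are assumptions for the example.
import mailbox
from email.utils import parsedate_to_datetime

hot = mailbox.mbox("exported-inbox.mbox")   # the folder you exported
archives = {}                               # year -> cold-storage mbox

for message in hot:
    try:
        year = parsedate_to_datetime(message["Date"]).year
    except (TypeError, ValueError):
        continue  # skip messages with a missing or unparsable Date header
    if year >= 2019:
        continue  # keep recent mail in hot storage
    if year not in archives:
        archives[year] = mailbox.mbox(f"archive-{year}.mbox")
    archives[year].add(message)

for box in archives.values():
    box.flush()
    box.close()
hot.close()
```

The resulting `archive-*.mbox` files can live wherever your cold storage does and can be re-opened later with the same `mailbox` module if you ever need something from them.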
- -![Archiving mail in Thunderbird][6] - -### Cold storage - -I have been managing my documents, notes, online to-do lists, and so on, by archiving the excess. I keep the most recent and relevant, and then I move the rest either into an archive file that does not live on my machine. Otherwise, if they no longer have any relevance, I just delete them altogether. That has made finding the relevant information much easier because there aren't loads of old things cluttering up my results. - -It is important to take some time and de-clutter our digital workspaces the way we de-clutter our physical workspaces. Productivity isn't just getting things done, but also being able to find the right things we need to do them. Moving data into cold storage archives means we can rest easy knowing that it is safe if we need it and out of our way for the 99.9% of the time when we don't. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/1/declutter-workspace - -作者:[Kevin Sonney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ksonney -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) -[2]: https://opensource.com/sites/default/files/day20-image1.jpg -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/sites/default/files/kmail-archive.jpg (Archiving a directory of messages in KMail) -[5]: https://opensource.com/sites/default/files/day20-image2.png -[6]: https://opensource.com/sites/default/files/thunderbird-export.jpg (Archiving mail in Thunderbird) diff --git a/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md b/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md deleted file mode 100644 index 73cf55c47b..0000000000 --- a/sources/tech/20210201 Best Single Board Computers for AI and Deep Learning Projects.md +++ /dev/null @@ -1,282 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Best Single Board Computers for AI and Deep Learning Projects) -[#]: via: (https://itsfoss.com/best-sbc-for-ai/) -[#]: author: (Community https://itsfoss.com/author/itsfoss/) - -Best Single Board Computers for AI and Deep Learning Projects -====== - -[Single-board computers][1] (SBC) are very popular with tinkerers and hobbyists alike, they offer a lot of functionality in a very small form factor. An SBC has the CPU, GPU, memory, IO ports, etc. on a small circuit board and users can add functionality by adding new devices to the [GPIO ports][2]. Some of the more popular SBCs include the [Raspberry Pi][3] and [Arduino][4] family of products. - -However, there is an increasing demand for SBC’s that can be used for edge compute applications like Artificial Intelligence (AI) or Deep Learning (DL) and there are quite a few. The list below consists of some of the best SBCs that have been developed for edge computing. - -The list is in no particular order of ranking. Some links here are affiliate links. Please read our [affiliate policy][5]. - -### 1\. 
Nvidia Jetson Family - -Nvidia has a great lineup of SBCs that cater to AI developers and hobbyists alike. Their line of “[Jetson Developer Kits][6]” are some of the most powerful and value for money SBCs available in the market. Below is a list of their offerings. - -#### Nvidia Jetson Nano Developer Kit - -![][7] - -Starting at **$59**, the Jetson Nano is the cheapest SBC in the list and offers a good price to performance ratio. It can run multiple neural networks alongside other applications such as object detection, segmentation, speech processing and image classification. - -The Jetson Nano is aimed towards AI enthusiasts, hobbyists and developers who want to do projects by implementing AI. - -The Jetson Nano is being offered in two variants: 4 GB and 2 GB. The main differences between the two are, the price, RAM capacity and IO ports being offered. The 4 GB variant has been showcased in the image above. - -**Key Specifications** - - * **CPU:** Quad-core ARM A57 @ 1.43 GHz - * **GPU:** 128-core NVIDIA Maxwell - * **Memory:** 4 GB 64-bit LPDDR4 @ 25.6 GB/s or 2 GB 64-bit LPDDR4 @ 25.6 GB/s - * **Storage:** microSD card support - * **Display:** HDMI and Display Port or HDMI - -Preview | Product | Price | ----|---|---|--- -![NVIDIA Jetson Nano 2GB Developer Kit \(945-13541-0000-000\)][8] ![NVIDIA Jetson Nano 2GB Developer Kit \(945-13541-0000-000\)][8] | [NVIDIA Jetson Nano 2GB Developer Kit (945-13541-0000-000)][9] | $59.00[][10] | [Buy on Amazon][11] - -#### Nvidia Jetson Xavier NX Developer Kit - -![][12] - -The Jetson Xavier NX is a step up from the Jetson Nano and is aimed more towards OEMs, start-ups and AI developers. - -The Jetson Xavier NX is meant for applications that need more serious AI processing power that an entry level offering like the Jetson Nano simply can’t deliver. The Jetson Xavier NX is being offered at **$386.99**. - -**Key Specifications** - - * **CPU:** 6-core NVIDIA Carmel ARM v8.2 64-bit CPU - * **GPU:** NVIDIA Volta architecture with 384 NVIDIA CUDA cores and 48 Tensor cores - * **DL Accelerator:** 2x NVDLA Engines - * **Vision Accelerator:** 7-Way VLIW Vision Processor - * **Memory:** 8 GB 128-bit LPDDR4x @ 51.2 GB/s - * **Storage:** microSD support - * **Display:** HDMI and Display Port - -Preview | Product | Price | ----|---|---|--- -![NVIDIA Jetson Xavier NX Developer Kit \(812674024318\)][13] ![NVIDIA Jetson Xavier NX Developer Kit \(812674024318\)][13] | [NVIDIA Jetson Xavier NX Developer Kit (812674024318)][14] | $386.89[][10] | [Buy on Amazon][15] - -#### Nvidia Jetson AGX Xavier Developer Kit - -![][16] - -The Jetson AGX Xavier is the flagship product of the Jetson family, it is meant to be deployed in servers and AI robotics applications in industries such as manufacturing, retail, automobile, agriculture, etc. - -Coming in at **$694.91**, the Jetson AGX Xavier is not meant for beginners, it is meant for developers who want top-tier edge compute performance at their disposal and for companies who want good scalability for their applications. 
- -**Key Specifications** - - * **CPU:** 8-core ARM v8.2 64-bit CPU - * **GPU:** 512-core Volta GPU with Tensor Cores - * **DL Accelerator:** 2x NVDLA Engines - * **Vision Accelerator:** 7-Way VLIW Vision Processor - * **Memory:** 32 GB 256-Bit LPDDR4x @ 137 GB/s - * **Storage:** 32 GB eMMC 5.1 and uSD/UFS Card Socket for storage expansion - * **Display:** HDMI 2.0 - -Preview | Product | Price | ----|---|---|--- -![NVIDIA Jetson AGX Xavier Developer Kit \(32GB\)][17] ![NVIDIA Jetson AGX Xavier Developer Kit \(32GB\)][17] | [NVIDIA Jetson AGX Xavier Developer Kit (32GB)][18] | $694.91[][10] | [Buy on Amazon][19] - -### 2\. ROCK Pi N10 - -![][20] - -The ROCK Pi N10, developed by [Radxa][21] is the second cheapest offering in this list with its base variant coming in at **$99**, its range topping variant comes in at **$169**, - -The ROCK Pi N10 is equipped with a NPU (Neural Processing Unit) that helps it in processing AI/ Deep Learning workloads with ease. It offers up to 3 TOPS (Tera Operations Per Second) of performance. - -It is being offered in three variants namely, ROCK Pi N10 Model A, ROCK Pi N10 Model B, ROCK Pi N10 Model C, the only differences between these variants are the price, RAM and Storage capacities. - -The ROCK Pi N10 is available for purchase through [Seeed Studio][22]. - -**Key Specifications** - - * **CPU:** RK3399Pro with 2-core Cortex-A72 @ 1.8 GHz and 4-Core Cortex-A53 @ 1.4 GHz - * **GPU:** Mali T860MP4 - * **NPU:** Supports 8bit/16bit computing with up to 3.0 TOPS computing power - * **Memory:** 4 GB/6 GB/8 GB 64-bit LPDDR3 @ 1866 Mb/s - * **Storage:** 16 GB/32 GB/64 GB eMMC - * **Display:** HDMI 2.0 - - - -### 3\. BeagleBone AI - -![][23] - -The BeagleBone AI is [BeagleBoard.org][24]‘s open source SBC is meant to bridge the gap between small SBCs and more powerful industrial computers. The hardware and software of the BeagleBoard are completely open source. - -It is meant for use in the automation of homes, industries and other commercial use cases. It is priced at **~$110**, the price varies across dealers, for more info check [their website][25]. - -**Key Specifications** - - * **CPU:** Texas Instrument AM5729 with Dual-core ARM Cortex-A15 @ 1.5GHz - * **Co-Processor:** 2 x Dual-core ARM Cortex-M4 - * **DSP:** 2 x C66x floating-point VLIW - * **EVE:** 4 x Embedded Vision Engines - * **GPU:** PowerVR SGX544 - * **RAM:** 1 GB - * **Storage:** 16 GB eMMC - * **Display:** microHDMI - -Preview | Product | Price | ----|---|---|--- -![BeagleBone AI][26] ![BeagleBone AI][26] | [BeagleBone AI][27] | $127.49[][10] | [Buy on Amazon][28] - -### 4\. BeagleV - -![][29] - -The BeagleV is the latest launch in the list, it is an SBC that runs Linux out of the box and has a [RISC-V][30] CPU. - -It is capable of running edge compute applications effortlessly, to know more about the BeagleV check [our coverage][31] of the launch. - -The BeagleV will be getting two variants, a 4 GB RAM variant and an 8 GB RAM variant. Pricing starts at **$119** for the base model and **$149** for the 8 GB RAM model, it is up for pre-order through [their website][32]. - -**Key Specifications** - - * **CPU:** RISC-V U74 2-Core @ 1.0GHz - * **DSP:** Vision DSP Tensilica-VP6 - * **DL Accelerator:** NVDLA Engine 1-core - * **NPU:** Neural Network Engine - * **RAM:** 4 GB/8 GB (2 x 4 GB) LPDDR4 SDRAM - * **Storage:** microSD slot - * **Display:** HDMI 1.4 - - - -### 5\. 
HiKey970 - -![][33] - -HiKey970 is [96 Boards][34] first SBC meant for edge compute applications and is the world’s first dedicated NPU AI platform. - -The HiKey970 features an CPU, GPU and an NPU for accelerating AI performance, it can also be used for training and building DL (Deep Learning) models. - -The HiKey970 is priced at **$299** and can be bought from their [official store][35]. - -**Key Specifications** - - * **SoC:** HiSilicon Kirin 970 - * **CPU:** ARM Cortex-A73 4-Core @ 2.36GHz and ARM Cortex-A53 4-Core @ 1.8GHz - * **GPU:** ARM Mali-G72 MP12 - * **RAM:** 6 GB LPDDR4X @ 1866MHz - * **Storage:** 64 GB UFS 2.1 microSD - * **Display:** HDMI and 4 line MIPI/LCD port - - - -### 6\. Google Coral Dev Board - -![][36] - -The Coral Dev Board is Google’s first attempt at an SBC dedicated for edge computing. It is capable of performing high speed ML (Machine Learning) inferencing and has support for TensorFlow Lite and AutoML Vision Edge. - -The board is priced at **$129.99** and is available through [Coral’s official website][37]. - -**Key Specifications** - - * **CPU:** NXP i.MX 8M SoC (4-Core Cortex-A53, Cortex-M4F) - * **ML Accelerator**: Google Edge TPU coprocessor - * **GPU:** Integrated GC7000 Lite Graphics - * **RAM:** 1 GB LPDDR4 - * **Storage:** 8 GB eMMC and microSD slot - * **Display:** HDMI 2.0a, 39-pin FFC connector for MIPI-DSI display (4-lane) and 24-pin FFC connector for MIPI-CSI2 camera (4-lane) - - - -### 7\. Google Coral Dev Board Mini - -![][38] - -The Coral Dev Board Mini is the successor to the Coral Dev Board, it packs in more processing power into a smaller form factor and a lower price point of **$99.99**. - -The Coral Dev Board Mini can be purchased from their [official web store][39]. - -**Key Specifications** - - * **CPU:** MediaTek 8167s SoC (4-core Arm Cortex-A35) - * **ML Accelerator:** Google Edge TPU coprocessor - * **GPU:** IMG PowerVR GE8300 - * **RAM:** 2 GB LPDDR3 - * **Storage:** 8 GB eMMC - * **Display:** micro HDMI (1.4), 24-pin FFC connector for MIPI-CSI2 camera (4-lane) and 24-pin FFC connector for MIPI-DSI display (4-lane) - -Preview | Product | Price | ----|---|---|--- -![Google Coral Dev Board Mini][40] ![Google Coral Dev Board Mini][40] | [Google Coral Dev Board Mini][41] | $99.99[][10] | [Buy on Amazon][42] - -### Closing Thoughts - -There is an SBC available in every price range for edge compute applications. Some are just basic, like the Nvidia Jetson Nano or the BeagleBone AI and some are performance oriented models like the BeagleV and Nvidia Jetson AGX Xavier. - -If you are looking for something more universal you can check [our article on Raspberry Pi alternatives][1] that could help you in finding a suitable SBC for your use case. - -If I missed any SBC dedicated for edge compute, feel free to let me know in the comments below. 
- -_**Author info: Sourav Rudra is a FOSS Enthusiast with love for Gaming Rigs/Workstation building.**_ - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-sbc-for-ai/ - -作者:[Community][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/itsfoss/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/raspberry-pi-alternatives/ -[2]: https://en.wikipedia.org/wiki/General-purpose_input/output -[3]: https://www.raspberrypi.org/products/ -[4]: https://www.arduino.cc/en/main/products -[5]: https://itsfoss.com/affiliate-policy/ -[6]: https://developer.nvidia.com/embedded/jetson-developer-kits -[7]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-Nano.png?ssl=1 -[8]: https://i1.wp.com/m.media-amazon.com/images/I/310YWrfdnTL._SL160_.jpg?ssl=1 -[9]: https://www.amazon.com/dp/B08J157LHH?tag=chmod7mediate-20&linkCode=osi&th=1&psc=1 (NVIDIA Jetson Nano 2GB Developer Kit (945-13541-0000-000)) -[10]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime) -[11]: https://www.amazon.com/dp/B08J157LHH?tag=chmod7mediate-20&linkCode=osi&th=1&psc=1 (Buy on Amazon) -[12]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-Xavier-NX.png?ssl=1 -[13]: https://i1.wp.com/m.media-amazon.com/images/I/31B9xMmCvwL._SL160_.jpg?ssl=1 -[14]: https://www.amazon.com/dp/B086874Q5R?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (NVIDIA Jetson Xavier NX Developer Kit (812674024318)) -[15]: https://www.amazon.com/dp/B086874Q5R?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) -[16]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Nvidia-Jetson-AGX-Xavier-.png?ssl=1 -[17]: https://i1.wp.com/m.media-amazon.com/images/I/41tO5hw4zHL._SL160_.jpg?ssl=1 -[18]: https://www.amazon.com/dp/B083ZL3X5B?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (NVIDIA Jetson AGX Xavier Developer Kit (32GB)) -[19]: https://www.amazon.com/dp/B083ZL3X5B?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) -[20]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/ROCK-Pi-N10.png?ssl=1 -[21]: https://wiki.radxa.com/Home -[22]: https://www.seeedstudio.com/ROCK-Pi-4-c-1323.html?cat=1343 -[23]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Beagle-AI.png?ssl=1 -[24]: https://beagleboard.org/ -[25]: https://beagleboard.org/ai -[26]: https://i2.wp.com/m.media-amazon.com/images/I/41K+htPCUHL._SL160_.jpg?ssl=1 -[27]: https://www.amazon.com/dp/B07YR1RV64?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (BeagleBone AI) -[28]: https://www.amazon.com/dp/B07YR1RV64?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) -[29]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/BeagleV.png?ssl=1 -[30]: https://en.wikipedia.org/wiki/RISC-V -[31]: https://news.itsfoss.com/beaglev-announcement/ -[32]: https://beaglev.seeed.cc/ -[33]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/HiKey970.png?ssl=1 -[34]: https://www.96boards.org/ -[35]: https://www.96boards.org/product/hikey970/ -[36]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Google-Coral-Dev-Board.png?ssl=1 -[37]: https://coral.ai/products/dev-board/ -[38]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/01/Google-Coral-Dev-Board-Mini.png?ssl=1 -[39]: https://coral.ai/products/dev-board-mini -[40]: 
https://i0.wp.com/m.media-amazon.com/images/I/41g5c6IwLmL._SL160_.jpg?ssl=1 -[41]: https://www.amazon.com/dp/B08QLXKJB7?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Google Coral Dev Board Mini) -[42]: https://www.amazon.com/dp/B08QLXKJB7?tag=chmod7mediate-20&linkCode=ogi&th=1&psc=1 (Buy on Amazon) diff --git a/sources/tech/20210201 My handy guide to software development and testing.md b/sources/tech/20210201 My handy guide to software development and testing.md deleted file mode 100644 index 6621204ddf..0000000000 --- a/sources/tech/20210201 My handy guide to software development and testing.md +++ /dev/null @@ -1,229 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (toknow-gh) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (My handy guide to software development and testing) -[#]: via: (https://opensource.com/article/21/2/development-guide) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) - -My handy guide to software development and testing -====== -Programming can feel like a battle against a horde of zombies at times. -In this series, learn how to put this ZOMBIES acronym to work for you. -![Gears above purple clouds][1] - -A long time ago, when I was but a budding computer programmer, we used to work in large batches. We were each assigned a programming task, and then we'd go away and hide in our cubicles and bang on the keyboard. I remember my team members spending hours upon hours in isolation, each of us in our own cubicle, wrestling with challenges to create defect-free apps. The theory was, the larger the batch, the better the evidence that we're awesome problem solvers. - -For me, it was a badge of honor to see how long I could write new code or modify existing code before stopping to check to see whether what I did worked. Back then, many of us thought stopping to verify that our code worked was a sign of weakness, a sign of a rookie programmer. A "real developer" should be able to crank out the entire app without stopping to check anything! - -When I did stop to test my code, however unwillingly, I usually got a reality check. Either my code wouldn't compile, or it wouldn't build, or it wouldn't run, or it just wouldn't process the data the way I'd intended. Inevitably, I'd scramble in desperation to fix all the pesky problems I'd uncovered. - -### Avoiding the zombie horde - -If the old style of working sounds chaotic, that's because it was. We tackled our tasks all at once, hacking and slashing through problems only to be overwhelmed by more. It was like a battle against a horde of zombies. - -Today, we've learned to avoid large batches. Hearing some experts extolling the virtues of avoiding large batches sounded completely counterintuitive at first, but I've learned a lot from past mistakes. Appropriately, I'm using a system James Grenning () calls **ZOMBIES** to guide my software development efforts. - -### ZOMBIES to the rescue! - -There's nothing mysterious about **ZOMBIES**. It's an acronym that stands for: - -**Z** – Zero -**O** – One -**M** – Many (or more complex) -**B** – Boundary behaviors -**I** – Interface definition -**E** – Exercise exceptional behavior -**S** – Simple scenarios, simple solutions - -I'll break it down for you in this article series. - -### Zero in action! - -**Z**ero stands for the simplest possible case. - -A solution is _simplest_ because everyone initially prefers to use hard-coded values. 
By starting a coding session with hard-coded values, you quickly create a situation that gives you immediate feedback. Without having to wait several minutes or potentially hours, hard-coded values provide instant feedback on whether you like interacting with what you're building. If you find out you like interacting with it, great! Carry on in that direction. If you discover, for one reason or another, that you don't like interacting with it, there's been no big loss. You can easily dismiss it; you don't even have any losses to cut. - -As an example, build a simple backend shopping API. This service lets users grab a shopping basket, add items to the basket, remove items from the basket, and get the order total from the API. - -Create the necessary infrastructure (segregate the shipping app into an `app` folder and tests into a `tests` folder). This example uses the open source [xUnit][2] testing framework. - -Roll up your sleeves, and see the Zero principle in action! - - -``` -[Fact] -public void NewlyCreatedBasketHas0Items() {     -    var expectedNoOfItems = 0; -    var actualNoOfItems = 1; -    Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -This test is _faking it_ because it is testing for hard-coded values. When the shopping basket is newly created, it contains no items; therefore, the expected number of items in the basket is 0. This expectation is put to the test (or _asserted_) by comparing expected and actual values for equality. - -When the test runs, it produces the following results: - - -``` -Starting test execution, please wait... - -A total of 1 test files matched the specified pattern. -[xUnit.net 00:00:00.57] tests.UnitTest1.NewlyCreatedBasketHas0Items [FAIL] -  X tests.UnitTest1.NewlyCreatedBasketHas0Items [4ms] -  Error Message: -   Assert.Equal() Failure -Expected: 0 -Actual: 1 -[...] -``` - -The test fails for obvious reasons: you expected the number of items to be 0, but the actual number of items was hard-coded as 1. - -Of course, you can quickly remedy that error by modifying the hard-coded value assigned to the actual variable from 1 to 0: - - -``` -[Fact] -public void NewlyCreatedBasketHas0Items() { -    var expectedNoOfItems = 0; -    var actualNoOfItems = 0; -    Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -As expected, when this test runs, it passes successfully: - - -``` -Starting test execution, please wait... - -A total of 1 test files matched the specified pattern. - -Test Run Successful. -Total tests: 1 -     Passed: 1 - Total time: 1.0950 Seconds -``` - -You might not think it's worth testing code you're forcing to fail, but no matter how simple a test may be, it is absolutely mandatory to see it fail at least once. That way, you can rest assured that the test will alert you later should some inadvertent change corrupt your processing logic. - -Now's the time to stop faking the Zero case and replace that hard-coded value with a value that will be provided by the running API. Now that you know you have a reliably failing test that expects an empty basket to have 0 items, it's time to write some application code. - -As with any other modeling exercise in software, begin by crafting a simple _interface_. Create a new file in the solution's `app` folder and name it `IShoppingAPI.cs` (by convention, preface every interface name with an upper-case **I**). In the interface, declare the method `NoOfItems()` to return the number of items as an `int`. 
Here's the listing of the interface: - - -``` -using System; - -namespace app {     -    public interface IShoppingAPI { -        int NoOfItems(); -    } -} -``` - -Of course, this interface is incapable of doing any work until you implement it. Create another file in the `app` folder and name it `ShoppingAPI`. Declare `ShoppingAPI` as a public class that implements `IShoppingAPI`. In the body of the class, define `NoOfItems` to return the integer 1: - - -``` -using System; - -namespace app { -    public class ShoppingAPI : IShoppingAPI { -        public int NoOfItems() { -            return 1; -        } -    } -} -``` - -You can see in the above that you are faking the processing logic again by hard-coding the return value to 1. That's good for now because you want to keep everything super brain-dead simple. Now's not the time (not yet, at least) to start mulling over how you're going to implement this shopping basket. Leave that for later! For now, you're playing with the Zero case, which means you want to see whether you like your current arrangement. - -To ascertain that, replace the hard-coded expected value with the value that will be delivered when your shopping API runs and receives the request. You need to let the tests know where the shipping code is located by declaring that you are using the `app` folder. - -Next, you need to instantiate the `IShoppingAPI` interface: - - -``` -`IShoppingAPI shoppingAPI = new ShoppingAPI();` -``` - -This instance is used to send requests and receive actual values after the code runs. - -Now the listing looks like: - - -``` -using System; -using Xunit; -using app; - -namespace tests { -    public class ShoppingAPITests { -        IShoppingAPI shoppingAPI = [new][3] ShoppingAPI(); -  -        [Fact]         -        public void NewlyCreatedBasketHas0Items() { -            var expectedNoOfItems = 0; -            var actualNoOfItems = shoppingAPI.NoOfItems(); -            Assert.Equal(expectedNoOfItems, actualNoOfItems); -        } -    } -} -``` - -Of course, when this test runs, it fails because you hard-coded an incorrect return value (the test expects 0, but the app returns 1). - -Again, you can easily make the test pass by modifying the hard-coded value from 1 to 0, but that would be a waste of time at this point. Now that you have a proper interface hooked up to your test, the onus is on you to write programming logic that results in expected code behavior. - -For the application code, you need to decide which data structure to use to represent the shopping cart. To keep things bare-bones, strive to identify the simplest representation of a collection in C#. The thing that immediately comes to mind is `ArrayList`. This collection is perfect for these purposes—it can take an indefinite number of items and is easy and simple to traverse. - -In your app code, declare that you're using `System.Collections` because `ArrayList` is part of that package: - - -``` -`using System.Collections;` -``` - -Then declare your `basket`: - - -``` -`ArrayList basket = new ArrayList();` -``` - -Finally, replace the hard-coded value in the `NoOfItems()` with actual running code: - - -``` -public int NoOfItems() { -    return basket.Count; -} -``` - -This time, the test passes because your instantiated basket is empty, so `basket.Count` returns 0 items. - -Which is exactly what your first Zero test expects. - -### More examples - -Your homework is to tackle just one zombie for now, and that's the Zeroeth zombie. 
In the next article, I'll take a look at **O**ne and **M**any. Stay strong! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/development-guide - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds) -[2]: https://xunit.net/ -[3]: http://www.google.com/search?q=new+msdn.microsoft.com diff --git a/sources/tech/20210202 How I build and expand application development and testing.md b/sources/tech/20210202 How I build and expand application development and testing.md deleted file mode 100644 index 4c3673ebee..0000000000 --- a/sources/tech/20210202 How I build and expand application development and testing.md +++ /dev/null @@ -1,204 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (toknow-gh) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How I build and expand application development and testing) -[#]: via: (https://opensource.com/article/21/2/build-expand-software) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) - -How I build and expand application development and testing -====== -Start development simply, by writing and testing your code with One -element and then expand it out to Many. -![Security monster][1] - -In my [previous article][2], I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. I also explained the first **ZOMBIES** principle, **Zero**. In this article, I'll demonstrate the next two principles: **One** and **Many**. - -**ZOMBIES** is an acronym that stands for: - -**Z** – Zero -**O** – One -**M** – Many (or more complex) -**B** – Boundary behaviors -**I** – Interface definition -**E** – Exercise exceptional behavior -**S** – Simple scenarios, simple solutions - -In the previous article, you implemented Zero, which provides the simplest possible path through your code. There is absolutely no conditional processing logic anywhere to be found. Now it's time for you to move into **O**ne. - -Unlike with **Z**ero, which basically means nothing is added, or we have an empty case, nothing to take care of, **O**ne means we have a single case to take care of. That single case could be one item in the collection, or one visitor, or one event that demands special treatment. - -With **M**any, we are now dealing with potentially more complicated cases. Two or more items in the collection, two or more events that demand special treatment, and so on. - -### One in action - -Build on the code from the previous article by adding something to your virtual shopping basket. First, write a fake test: - - -``` -[Fact] -public void Add1ItemBasketHas1Item() { -        var expectedNoOfItems = 1; -        var actualNoOfItems = 0; -        Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -As expected, this test fails because you hard-coded an incorrect value: - - -``` -Starting test execution, please wait... - -A total of 1 test files matched the specified pattern. 
-[xUnit.net 00:00:00.57] tests.UnitTest1.NewlyCreatedBasketHas0Items [FAIL] -  X tests.UnitTest1.NewlyCreatedBasketHas0Items [4ms] -  Error Message: -   Assert.Equal() Failure -Expected: 0 -Actual: 1 -[...] -``` - -Now is the time to think about how to stop faking it. You already created an implementation of a shopping basket (an `ArrayList` to hold items). But how do you implement an _item_? - -Simplicity should always be your guiding principle, and not knowing much about the actual item, you could fake it a little by implementing it as another collection. What could that collection contain? Well, because you're mostly interested in calculating basket totals, the item collection should, at minimum, contain a price (in any currency, but for simplicity, use dollars). - -A simple collection can hold an ID on an item (a pointer to the item, which may be kept elsewhere on the system) and the associated price of an item. - -A good data structure that can easily capture this is a key/value structure. In C#, the first thing that comes to mind is `Hashtable`. - -In the app code, add a new capability to the `IShoppingAPI` interface: - - -``` -`int AddItem(Hashtable item);` -``` - -This new capability accepts one item (an instance of a `Hashtable`) and returns the number of items found in the shopping basket. - -In your tests, replace the hard-coded value with a call to the interface: - - -``` -[Fact] -public void Add1ItemBasketHas1Item() {             -    var expectedNoOfItems = 1; -    Hashtable item = [new][3] Hashtable(); -    var actualNoOfItems = shoppingAPI.AddItem(item); -    Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -This code instantiates `Hashtable` and names it `item`, then invokes `AddItem(item)` on the shopping interface, which returns the actual number of items in the basket. - -To implement it, turn to the `ShoppingAPI` class: - - -``` -public int AddItem(Hashtable item) { -    return 0; -} -``` - -You are faking it again just to see the results of your tests (which are the first customers of your code). Should the test fail (as expected), replace the hard-coded values with actual code: - - -``` -public int AddItem(Hashtable item) { -    basket.Add(item); -    return basket.Count; -} -``` - -In the working code, add an item to the basket, and then return the count of the items in the basket: - - -``` -Test Run Successful. -Total tests: 2 -     Passed: 2 - Total time: 1.0633 Seconds -``` - -So now you have two tests passing and have pretty much covered **Z** and **O**, the first two parts of **ZOMBIES**. - -### A moment of reflection - -If you look back at what you've done so far, you will notice that by focusing your attention on dealing with the simplest possible **Z**ero and **O**ne scenarios, you have managed to create an interface as well as define some processing logic boundaries! Isn't that awesome? You now have the most important abstractions partially implemented, and you know how to process cases where nothing is added and when one thing is added. And because you are building an e-commerce API, you certainly do not foresee placing any other boundaries that would limit your customers when shopping. Your virtual shopping basket is, for all intents and purposes, limitless. - -Another important (although not necessarily immediately obvious) aspect of the stepwise refinement that **ZOMBIES** offers is a reluctance to leap head-first into the brambles of implementation. You may have noticed how sheepish this is about implementing anything. 
For starters, it's better to fake the implementation by hard-coding the values. Only after you see that the interface interacts with your test in a sensible way are you willing to roll up your sleeves and harden the implementation code. - -But even then, you should always prefer simple, straightforward constructs. And strive to avoid conditional logic as much as you can. - -### Many in action - -Expand your application by defining your expectations when a customer adds two items to the basket. The first test is a fake. It expects 2, but force it to fail by hard-coding 0 items: - - -``` -[Fact] -public void Add2ItemsBasketHas2Items() { -        var expectedNoOfItems = 2; -        var actualNoOfItems = 0; -        Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -When you run the test, two of them pass successfuy (the previous two, the **Z** and **O** tests), but as expected, the hard-coded test fails: - - -``` -A total of 1 test files matched the specified pattern. -[xUnit.net 00:00:00.57] tests.UnitTest1.Add2ItemsBasketHas2Items [FAIL] -  X tests.UnitTest1.Add2ItemsBasketHas2Items [2ms] -  Error Message: -   Assert.Equal() Failure -Expected: 2 -Actual: 0 - -Test Run Failed. -Tatal tests: 3 -     Passed: 2 -     Failed: 1 -``` - -Replace the hard-coded values with the call to the app code: - - -``` -[Fact] -public void Add2ItemsBasketHas2Items() { -        var expectedNoOfItems = 2; -        Hashtable item = [new][3] Hashtable(); -        shoppingAPI.AddItem(item); -        var actualNoOfItems = shoppingAPI.AddItem(item); -        Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -In the test, you add two items (actually, you're adding the same item twice) and then compare the expected number of items to the number of items from the `shoppingAPI` instance after adding the item the second time. - -All tests now pass! - -### Stay tuned - -You have now completed the first pass of the **ZOM** part of the equation. You did a pass on **Z**ero, on **O**ne, and on **M**any. In the next article, I'll take a look at **B** and **I**. Stay vigilant! 
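-
-For readers who want a single reference point before the next installment, here is a minimal consolidated sketch of the shopping API as the **Z**ero, **O**ne, and **M**any passes leave it. It only gathers the listings already shown in this article and the previous one into one view (assuming the `app` folder layout used throughout), so treat it as a recap rather than new capability:
-
-```
-using System.Collections;
-
-namespace app {
-    // Capabilities promised so far: report the item count and add an item.
-    public interface IShoppingAPI {
-        int NoOfItems();
-        int AddItem(Hashtable item);
-    }
-
-    public class ShoppingAPI : IShoppingAPI {
-        // The basket is a plain ArrayList of Hashtable items (item ID -> price).
-        ArrayList basket = new ArrayList();
-
-        public int NoOfItems() {
-            return basket.Count;
-        }
-
-        public int AddItem(Hashtable item) {
-            basket.Add(item);
-            return basket.Count;
-        }
-    }
-}
-```
-
-Nothing in this sketch is new; the passing microtests remain the authoritative description of the behavior.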
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/build-expand-software - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_chaos_engineer_monster.png?itok=J31aRccu (Security monster) -[2]: https://opensource.com/article/21/1/zombies-zero -[3]: http://www.google.com/search?q=new+msdn.microsoft.com diff --git a/sources/tech/20210203 Defining boundaries and interfaces in software development.md b/sources/tech/20210203 Defining boundaries and interfaces in software development.md deleted file mode 100644 index 912d093534..0000000000 --- a/sources/tech/20210203 Defining boundaries and interfaces in software development.md +++ /dev/null @@ -1,315 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (toknow-gh) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Defining boundaries and interfaces in software development) -[#]: via: (https://opensource.com/article/21/2/boundaries-interfaces) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) - -Defining boundaries and interfaces in software development -====== -Zombies are bad at understanding boundaries, so set limits and -expectations for what your app can do. -![Looking at a map for career journey][1] - -Zombies are bad at understanding boundaries. They trample over fences, tear down walls, and generally get into places they don't belong. In the previous articles in this series, I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. - -**ZOMBIES** is an acronym that stands for: - -**Z** – Zero -**O** – One -**M** – Many (or more complex) -**B** – Boundary behaviors -**I** – Interface definition -**E** – Exercise exceptional behavior -**S** – Simple scenarios, simple solutions - -In the first two articles in this series, I demonstrated the first three **ZOMBIES** principles of **Zero**, **One**, and **Many**. The first article [implemented **Z**ero][2], which provides the simplest possible path through your code. The second article [performed tests][3] with **O**ne and **M**any samples. In this third article, I'll take a look at **B**oundaries and **I**nterfaces.  - -### Back to One - -Before you can tackle **B**oundaries, you need to circle back (iterate). - -Begin by asking yourself: What are the boundaries in e-commerce? Do I need or want to limit the size of a shopping basket? (I don't think that would make any sense, actually). - -The only reasonable boundary at this point would be to make sure the shopping basket never contains a negative number of items. Write an executable expectation that expresses this limitation: - - -``` -[Fact] -public void Add1ItemRemoveItemRemoveAgainHas0Items() { -        var expectedNoOfItems = 0; -        var actualNoOfItems = -1; -        Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -This says that if you add one item to the basket, remove that item, and remove it again, the `shoppingAPI` instance should say that you have zero items in the basket. - -Of course, this executable expectation (microtest) fails, as expected. 
What is the bare minimum modification you need to make to get this microtest to pass? - - -``` -[Fact] -public void Add1ItemRemoveItemRemoveAgainHas0Items() { -        var expectedNoOfItems = 0; -        Hashtable item = [new][4] Hashtable(); -        shoppingAPI.AddItem(item); -        shoppingAPI.RemoveItem(item); -        var actualNoOfItems = shoppingAPI.RemoveItem(item); -        Assert.Equal(expectedNoOfItems, actualNoOfItems); -} -``` - -This encodes an expectation that depends on the `RemoveItem(item)` capability. And because that capability is not in your `shippingAPI`, you need to add it. - -Flip over to the `app` folder, open `IShippingAPI.cs` and add the new declaration: - - -``` -`int RemoveItem(Hashtable item);` -``` - -Go to the implementation class (`ShippingAPI.cs`), and implement the declared capability: - - -``` -public int RemoveItem(Hashtable item) { -        basket.RemoveAt(basket.IndexOf(item)); -        return basket.Count; -} -``` - -Run the system, and you get an error: - -![Error][5] - -(Alex Bunardzic, [CC BY-SA 4.0][6]) - -The system is trying to remove an item that does not exist in the basket, and it crashes. Add a little bit of defensive programming: - - -``` -public int RemoveItem(Hashtable item) { -        if(basket.IndexOf(item) >= 0) { -                basket.RemoveAt(basket.IndexOf(item)); -        } -        return basket.Count; -} -``` - -Before you try to remove the item from the basket, check if it is in the basket. (You could've tried by catching the exception, but I feel the above logic is easier to read and follow.) - -### More specific expectations - -Before we move to more specific expectations, let's pause for a second and examine what is meant by interfaces. In software engineering, an interface denotes a specification, or a description of some capability. In a way, interface in software is similar to a recipe in cooking. It lists the ingredients that make the cake but it is not actually edible. We follow the specified description in the recipe in order to bake the cake. - -Similarly here, we define our service by first specifying what is this service capable of. That specification is what we call interface. But interface itself cannot provide any services to us. It is a mere blueprint which we then use and follow in order to implement specified capabilities. - -So far, you have implemented the interface (partially; more capabilities will be added later) and the processing boundaries (you cannot have a negative number of items in the shopping basket). You instructed the `shoppingAPI` how to add items to the shopping basket and confirmed that the addition works by running the `Add2ItemsBasketHas2Items` test. - -However, just adding items to the basket does not an e-commerce app make. You need to be able to calculate the total of the items added to the basket—time to add another expectation. - -As is the norm by now (hopefully), start with the most straightforward expectation. When you add one item to the basket and the item price is $10, you expect the shopping API to correctly calculate the total as $10. - -Your fifth test (the fake version): - - -``` -[Fact] -public void Add1ItemPrice10GrandTotal10() { -        var expectedTotal = 10.00; -        var actualTotal = 0.00; -        Assert.Equal(expectedTotal, actualTotal); -} -``` - -Make the `Add1ItemPrice10GrandTotal10` test fail by using the good old trick: hard-coding an incorrect actual value. 
Of course, your previous three tests succeed, but the new fourth test fails: - - -``` -A total of 1 test files matched the specified pattern. -[xUnit.net 00:00:00.57] tests.UnitTest1.Add1ItemPrice10GrandTotal10 [FAIL] -  X tests.UnitTest1.Add1ItemPrice10GrandTotal10 [4ms] -  Error Message: -   Assert.Equal() Failure -Expected: 10 -Actual: 0 - -Test Run Failed. -Total tests: 4 -     Passed: 3 -         Failed: 1 - Total time: 1.0320 Seconds -``` - -Replace the hard-coded value with real processing. First, see if you have any such capability in your interface that would enable it to calculate order totals. Nope, no such thing. So far, you have declared only three capabilities in your interface: - - 1. `int NoOfItems();` - 2. `int AddItem(Hashtable item);` - 3. `int RemoveItem(Hashtable item);` - - - -None of those indicates any ability to calculate totals. You need to declare a new capability: - - -``` -`double CalculateGrandTotal();` -``` - -This new capability should enable your `shoppingAPI` to calculate the total amount by traversing the collection of items it finds in the shopping basket and adding up the item prices. - -Flip over to your tests and change the fifth test: - - -``` -[Fact] -public void Add1ItemPrice10GrandTotal10() { -        var expectedGrandTotal = 10.00; -        Hashtable item = [new][4] Hashtable(); -        item.Add("00000001", 10.00); -        shoppingAPI.AddItem(item); -        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -This test declares your expectation that if you add an item priced at $10 and then call the `CalculateGrandTotal()` method on the shopping API, it will return a grand total of $10. Which is a perfectly reasonable expectation since that's how the API should calculate. - -How do you implement this capability? As always, fake it first. Flip over to the `ShippingAPI` class and implement the `CalculateGrandTotal()` method, as declared in the interface: - - -``` -public double CalculateGrandTotal() { -                return 0.00; -} -``` - -You're hard-coding the return value as 0.00, just to see if the test (your first customer) will be able to run it and whether it will fail. Indeed, it does run fine and fails, so now you must implement processing logic to calculate the grand total of the items in the shopping basket properly: - - -``` -public double CalculateGrandTotal() { -        double grandTotal = 0.00; -        foreach(var product in basket) { -                Hashtable item = product as Hashtable; -                foreach(var value in item.Values) { -                        grandTotal += Double.Parse(value.ToString()); -                } -        } -        return grandTotal; -} -``` - -Run the system. All five tests succeed! - -### From One to Many - -Time for another iteration. Now that you have built the system by iterating to handle the **Z**ero, **O**ne (both very simple and a bit more elaborate scenarios), and **B**oundary scenarios (no negative number of items in the basket), you must handle a bit more elaborate scenario for **M**any.  - -A quick note: as we keep iterating and returning back to the concerns related to **O**ne, **M**any, and **B**oundaries (we are refining our implementation), some readers may expect that we should also rework the **I**nterface. As we will see later on, our interface is already fully fleshed out, and we see no need to add more capabilities at this point. 
Keep in mind that interfaces should be kept lean and simple; there is not much advantage in proliferating interfaces, as that only adds more noise to the signal. Here, we are following the principle of Occam's Razor, which states that entities should not multiply without a very good reason. For now, we are pretty much done with describing the expected capabilities of our API. We're now rolling up our sleeves and refining the implementation. - -The previous iteration enabled the system to handle more than one item placed in the basket. Now, enable the system to calculate the grand total for more than one item in the basket. First things first; write the executable expectation: - - -``` -[Fact] -public void Add2ItemsGrandTotal30() { -        var expectedGrandTotal = 30.00; -        var actualGrandTotal = 0.00; -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -You "cheat" by hard-coding all values first and then do your best to make sure the expectation fails. - -And it does, so now is the time to make it pass. Modify your expectation by adding two items to the basket and then running the `CalculateGrandTotal()` method: - - -``` -[Fact] -public void Add2ItemsGrandTotal30() { -        var expectedGrandTotal = 30.00; -        Hashtable item = [new][4] Hashtable(); -        item.Add("00000001", 10.00); -        shoppingAPI.AddItem(item); -        Hashtable item2 = [new][4] Hashtable(); -        item2.Add("00000002", 20.00); -        shoppingAPI.AddItem(item2); -        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -And it passes. You now have six microtests pass successfuly; the system is back to steady-state! - -### Setting expectations - -As a conscientious engineer, you want to make sure that the expected acrobatics when users add items to the basket and then remove some items from the basket always calculate the correct grand total. Here comes the new expectation: - - -``` -[Fact] -public void Add2ItemsRemoveFirstItemGrandTotal200() { -        var expectedGrandTotal = 200.00; -        var actualGrandTotal = 0.00; -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -This says that when someone adds two items to the basket and then removes the first item, the expected grand total is $200.00. The hard-coded behavior fails, and now you can elaborate with more specific confirmation examples and running the code: - - -``` -[Fact] -public void Add2ItemsRemoveFirstItemGrandTotal200() { -        var expectedGrandTotal = 200.00; -        Hashtable item = [new][4] Hashtable(); -        item.Add("00000001", 100.00); -        shoppingAPI.AddItem(item); -        Hashtable item2 = [new][4] Hashtable(); -        item2.Add("00000002", 200.00); -        shoppingAPI.AddItem(item2); -        shoppingAPI.RemoveItem(item); -        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -Your confirmation example, coded as the expectation, adds the first item (ID "00000001" with item price $100.00) and then adds the second item (ID "00000002" with item price $200.00). You then remove the first item from the basket, calculate the grand total, and assert if it is equal to the expected value. - -When this executable expectation runs, the system meets the expectation by correctly calculating the grand total. You now have seven tests passing! The system is working; nothing is broken! - - -``` -Test Run Successful. 
-Total tests: 7 -     Passed: 7 - Total time: 0.9544 Seconds -``` - -### More to come - -You're up to **ZOMBI** now, so in the next article, I'll cover **E**. Until then, try your hand at some tests of your own! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/boundaries-interfaces - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey) -[2]: https://opensource.com/article/21/1/zombies-zero -[3]: https://opensource.com/article/21/1/zombies-2-one-many -[4]: http://www.google.com/search?q=new+msdn.microsoft.com -[5]: https://opensource.com/sites/default/files/uploads/error_0.png (Error) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20210204 How to implement business requirements in software development.md b/sources/tech/20210204 How to implement business requirements in software development.md deleted file mode 100644 index c3a49e052a..0000000000 --- a/sources/tech/20210204 How to implement business requirements in software development.md +++ /dev/null @@ -1,128 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (toknow-gh) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to implement business requirements in software development) -[#]: via: (https://opensource.com/article/21/2/exceptional-behavior) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) - -How to implement business requirements in software development -====== -Increment your e-commerce app to ensure it implements required business -process rules correctly. -![Working on a team, busy worklife][1] - -In my previous articles in this series, I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. I'm using a helpful acronym to explain why it's better to approach problems incrementally. **ZOMBIES** stands for: - -**Z** – Zero -**O** – One -**M** – Many (or more complex) -**B** – Boundary behaviors -**I** – Interface definition -**E** – Exercise exceptional behavior -**S** – Simple scenarios, simple solutions - -In the first three articles in this series, I demonstrated the first five **ZOMBIES** principles. The first article [implemented **Z**ero][2], which provides the simplest possible path through your code. The second article performed [tests with **O**ne and **M**any][3] samples, and the third article looked at [**B**oundaries and **I**nterfaces][4]. In this article, I'll take a look at the penultimate letter in our acronym: **E**, which stands for "exercise exceptional behavior." - -### Exceptional behavior in action - -When you write an app like the e-commerce tool in this example, you need to contact product owners or business sponsors to learn if there are any specific business policy rules that need to be implemented. - -Sure enough, as with any e-commerce operation, you want to put business policy rules in place to entice customers to keep buying. Suppose a business policy rule has been communicated that any order with a grand total greater than $500 gets a percentage discount. 
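-
-As stated, the rule leaves the exact percentage open; the confirmation example worked through below pins it down. As a brief aside that is not part of the original series, such a policy can also be captured as a small table of confirmation examples with xUnit's `[Theory]` and `[InlineData]` attributes. The following is a hedged, illustrative sketch that reuses the $600 order from this article plus one below-threshold order, and it assumes the `ShoppingAPI` built in the earlier installments; like the `[Fact]` that follows, its above-threshold case cannot pass until the discount logic exists:
-
-```
-using System.Collections;
-using Xunit;
-using app;
-
-namespace tests {
-    public class DiscountPolicyExamples {
-        // Hypothetical parameterized restatement of the policy:
-        // totals of $500 or less are unchanged; the $600 order nets $540.
-        [Theory]
-        [InlineData(400.00, 400.00)]
-        [InlineData(600.00, 540.00)]
-        public void GrandTotalHonorsDiscountPolicy(double itemPrice, double expectedGrandTotal) {
-                IShoppingAPI shoppingAPI = new ShoppingAPI();
-                Hashtable item = new Hashtable();
-                item.Add("00000001", itemPrice);
-                shoppingAPI.AddItem(item);
-                Assert.Equal(expectedGrandTotal, shoppingAPI.CalculateGrandTotal());
-        }
-    }
-}
-```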
- -OK, time to roll up your sleeves and craft the executable expectation for this business policy rule: - - -``` -[Fact] -public void Add2ItemsTotal600GrandTotal540() { -        var expectedGrandTotal = 540.00; -        var actualGrandTotal = 0.00; -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -The confirmation example that encodes the business policy rule states that if the order total is $600.00, the `shoppingAPI` will calculate the grand total to discount it to $540.00. The script above fakes the expectation just to see it fail. Now, make it pass: - - -``` -[Fact] -public void Add2ItemsTotal600GrandTotal540() { -        var expectedGrandTotal = 540.00; -        Hashtable item = [new][5] Hashtable(); -        item.Add("00000001", 200.00); -        shoppingAPI.AddItem(item); -        Hashtable item2 = [new][5] Hashtable(); -        item2.Add("00000002", 400.00); -        shoppingAPI.AddItem(item2); -        var actualGrandTotal = shoppingAPI.CalculateGrandTotal(); -        Assert.Equal(expectedGrandTotal, actualGrandTotal); -} -``` - -In the confirmation example, you are adding one item priced at $200 and another item priced at $400 for a total of $600 for the order. When you call the `CalculateGrandTotal()` method, you expect to get a total of $540. - -Will this microtest pass? - - -``` -[xUnit.net 00:00:00.57] tests.UnitTest1.Add2ItemsTotal600GrandTotal540 [FAIL] -  X tests.UnitTest1.Add2ItemsTotal600GrandTotal540 [2ms] -  Error Message: -   Assert.Equal() Failure -Expected: 540 -Actual: 600 -[...] -``` - -Well, it fails miserably. You were expecting $540, but the system calculates $600. Why the error? It's because you haven't taught the system how to calculate the discount on order totals larger than $500 and then subtract that discount from the grand total. - -Implement that processing logic. Judging from the confirmation example above, when the order total is $600.00 (which is greater than the business rule threshold of an order totaling $500), the expected grand total is $540. This means the system needs to subtract $60 from the grand total. And $60 is precisely 10% of $600. So the business policy rule that deals with discounts expects a 10% discount on all order totals greater than $500. - -Implement this processing logic in the `ShippingAPI` class: - - -``` -private double Calculate10PercentDiscount(double total) { -        double discount = 0.00; -        if(total > 500.00) { -                discount = (total/100) * 10; -        } -        return discount; -} -``` - -First, check to see if the order total is greater than $500. If it is, then calculate 10% of the order total. - -You also need to teach the system how to subtract the calculated 10% from the order grand total. That's a very straightforward change: - - -``` -`return grandTotal - Calculate10PercentDiscount(grandTotal);` -``` - -Now all tests pass, and you're again enjoying steady success. Your script **Exercises exceptional behavior** to implement the required business policy rules. - -### One more to go - -I've taken us to **ZOMBIE** now, so there's just **S** remaining. I'll cover that in the exciting series finale. 
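-
-As a quick reference before the finale, here is a hedged consolidation of the two listings above: the `CalculateGrandTotal()` method from the previous article with the one-line change applied, plus the `Calculate10PercentDiscount()` helper. It assumes these methods sit inside the shipping API class alongside the `basket` collection built earlier in the series, and it adds nothing beyond those listings:
-
-```
-public double CalculateGrandTotal() {
-        double grandTotal = 0.00;
-        // Sum the price stored against each item ID in the basket.
-        foreach(var product in basket) {
-                Hashtable item = product as Hashtable;
-                foreach(var value in item.Values) {
-                        grandTotal += Double.Parse(value.ToString());
-                }
-        }
-        // Exercise exceptional behavior: subtract the policy discount, if any.
-        return grandTotal - Calculate10PercentDiscount(grandTotal);
-}
-
-private double Calculate10PercentDiscount(double total) {
-        double discount = 0.00;
-        if(total > 500.00) {
-                discount = (total/100) * 10; // 10% off order totals greater than $500
-        }
-        return discount;
-}
-```
-
-Keeping the discount in its own small private method keeps the grand-total calculation readable and leaves the policy's threshold and percentage in a single place to change.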
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/exceptional-behavior - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife) -[2]: https://opensource.com/article/21/1/zombies-zero -[3]: https://opensource.com/article/21/1/zombies-2-one-many -[4]: https://opensource.com/article/21/1/zombies-3-boundaries-interface -[5]: http://www.google.com/search?q=new+msdn.microsoft.com diff --git a/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md b/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md deleted file mode 100644 index 78d855c76b..0000000000 --- a/sources/tech/20210205 Astrophotography with Fedora Astronomy Lab- setting up.md +++ /dev/null @@ -1,220 +0,0 @@ -[#]: subject: "Astrophotography with Fedora Astronomy Lab: setting up" -[#]: via: "https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/" -[#]: author: "Geoffrey Marr https://fedoramagazine.org/author/coremodule/" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Astrophotography with Fedora Astronomy Lab: setting up -====== - -![][1] - -Photo by Geoffrey Marr - -You love astrophotography. You love Fedora Linux. What if you could do the former using the latter? Capturing stunning and awe-inspiring astrophotographs, processing them, and editing them for printing or sharing online using Fedora is absolutely possible! This tutorial guides you through the process of setting up a computer-guided telescope mount, guide cameras, imaging cameras, and other pieces of equipment. A future article will cover capturing and processing data into pleasing images. Please note that while this article is written with certain aspects of the astrophotography process included or omitted based off my own equipment, you can custom-tailor it to fit your own equipment and experience. Let’s capture some photons! - -![][2] - -### Installing Fedora Astronomy Lab - -This tutorial focuses on [Fedora Astronomy Lab][3], so it only makes sense that the first thing we should do is get it installed. But first, a quick introduction: based on the KDE Plasma desktop, Fedora Astronomy Lab includes many pieces of open source software to aid astronomers in planning observations, capturing data, processing images, and controlling astronomical equipment. - -Download Fedora Astronomy Lab from the [Fedora Labs website][4]. You will need a USB flash-drive with at least eight GB of storage. Once you have downloaded the ISO image, use [Fedora Media Writer][5] to [write the image to your USB flash-drive.][6] After this is done, [boot from the USB drive][7] you just flashed and [install Fedora Astronomy Lab to your hard drive.][8] While you can use Fedora Astronomy Lab in a live-environment right from the flash drive, you should install to the hard drive to prevent bottlenecks when processing large amounts of astronomical data. 
-
-### Configuring your installation
-
-Before you can go capturing the heavens, you need to do some minor setup in Fedora Astronomy Lab.
-
-First of all, you need to add your user to the *dialout* group so that you can access certain pieces of astronomical equipment from within the guiding software. Do that by opening the terminal (Konsole) and running `sudo usermod -a -G dialout user` (replacing *user* with your username).
-
-My personal setup includes a guide camera (QHY5 series, also known as Orion Starshoot) that does not have a driver in the mainline Fedora repositories. To enable it, you need to install the [qhyccd SDK][9]. (*Note that this package is not officially supported by Fedora. Use it at your own risk.*) At the time of writing, I chose to use the latest stable release, 20.08.26. Once you have downloaded the Linux 64-bit version of the SDK, extract it:
-
-```
-tar zxvf sdk_linux64_20.08.26.tgz
-```
-
-Now change into the directory you just extracted, change the permissions of the *install.sh* file to make it executable, and run *install.sh*:
-
-```
-cd sdk_linux64_20.08.26
-chmod +x install.sh
-sudo ./install.sh
-```
-
-Now it’s time to install the qhyccd INDI driver. INDI is an open source software library used to control astronomical equipment. Unfortunately, the driver is unavailable in the mainline Fedora repositories, but it is in a Copr repository. (*Note: Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.*) If you prefer to have the newest (and perhaps unstable!) pieces of astronomy software, you can also enable the “bleeding” repositories at this time by following [this guide][10]. For this tutorial, you are only going to enable one repo:
-
-```
-sudo dnf copr enable xsnrg/indi-3rdparty-bleeding
-```
-
-Install the driver by running the following command:
-
-```
-sudo dnf install indi-qhy
-```
-
-Finally, update all of your system packages with `sudo dnf upgrade`.
-
-To recap what you accomplished in this section: you added your user to the *dialout* group, downloaded and installed the qhyccd SDK, enabled the *indi-3rdparty-bleeding* copr, installed the qhyccd-INDI driver with dnf, and updated your system.
-
-### Connecting your equipment
-
-This is the time to connect all your equipment to your computer. Most astronomical equipment will connect via USB, and it’s really as easy as plugging each device into your computer’s USB ports. If you have a lot of equipment (mount, imaging camera, guide camera, focuser, filter wheel, etc.), you should use an externally powered USB hub to make sure that all connected devices have adequate power. Once you have everything plugged in, run the following command to ensure that the system recognizes your equipment:
-
-```
-lsusb
-```
-
-You should see output similar to (but not the same as) the output here:
-
-![][11]
-
-You see in the output that the system recognizes the telescope mount (a SkyWatcher EQM-35 Pro) as *Prolific Technology, Inc. PL2303 Serial Port*, the imaging camera (a Sony a6000) as *Sony Corp. ILCE-6000*, and the guide camera (an Orion Starshoot, aka QHY5) as *Van Ouijen Technische Informatica*. Now that you have made sure your system recognizes your equipment, it’s time to open your desktop planetarium and telescope controller, KStars!
-
-### Setting up KStars
-
-It’s time to open [KStars][12], which is a desktop planetarium and also includes the Ekos telescope control software. The first time you open KStars, you will see the KStars Startup Wizard.
- -![][13] - -Follow the prompts to choose your home location (where you will be imaging from) and *Download Extra Data…* - -![][14] - -![][15] - -![][16] - -This will allow you to install additional star, nebula, and galaxy catalogs. You don’t need them, but they don’t take up too much space and add to the experience of using KStars. Once you’ve completed this, hit *Done* in the bottom right corner to continue. - -### Getting familiar with KStars - -Now is a good time to play around with the KStars interface. You are greeted with a spherical image with a coordinate plane and stars in the sky. - -![][17] - -This is the desktop planetarium which allows you to view the placement of objects in the night sky. Double-clicking an object selects it, and right clicking on an object gives you options like *Center & Track* which will follow the object in the planetarium, compensating for [sidereal time][18]. *Show DSS Image* shows a real [digitized sky survey][19] image of the selected object. - -![][20] - -Another essential feature is the *Set Time* option in the toolbar. Clicking this will allow you to input a future (or past) time and then simulate the night sky as if that were the current date. - -![][21] - -### Configuring capture equipment with Ekos - -You’re familiar with the KStars layout and some basic functions, so it’s time to move on configuring your equipment using the [Ekos][22] observatory controller and automation tool. To open Ekos, click the observatory button in the toolbar or go to *Tools* > *Ekos*. - -![][23] - -You will see another setup wizard: the *Ekos Profile Wizard*. Click *Next* to start the wizard. - -![][24] - -In this tutorial, you have all of our equipment connected directly to your computer. A future article we will cover using an INDI server installed on a remote computer to control our equipment, allowing you to connect over a network and not have to be in the same physical space as your gear. For now though, select *Equipment is attached to this device*. - -![][25] - -You are now asked to name your equipment profile. I usually name mine something like “Local Gear” to differentiate between profiles that are for remote gear, but name your profile what you wish. We will leave the button marked *Internal Guide* checked and won’t select any additional services. Now click the *Create Profile & Select Devices* button. - -![][26] - -This next screen is where we can select your particular driver to use for each individual piece of equipment. This part will be specific to your setup depending on what gear you use. For this tutorial, I will select the drivers for my setup. - -My mount, a [SkyWatcher EQM-35 Pro][27], uses the *EQMod Mount* under *SkyWatcher* in the menu (this driver is also compatible with all SkyWatcher equatorial mounts, including the [EQ6-R Pro][28] and the [EQ8-R Pro][29]). For my Sony a6000 imaging camera, I choose the *Sony DSLR* under *DSLRs* under the CCD category. Under *Guider*, I choose the *QHY CCD* under *QHY* for my Orion Starshoot (and any QHY5 series camera). That last driver we want to select will be under the Aux 1 category. We want to select *Astrometry* from the drop-down window. This will enable the Astrometry plate-solver from within Ekos that will allow our telescope to automatically figure out where in the night sky it is pointed, saving us the time and hassle of doing a one, two, or three star calibration after setting up our mount. - -You selected your drivers. Now it’s time to configure your telescope. 
Add new telescope profiles by clicking on the + button in the lower right. This is essential for computing field-of-view measurements so you can tell what your images will look like when you open the shutter. Once you click the + button, you will be presented with a form where you can enter the specifications of your telescope and guide scope. For my imaging telescope, I will enter Celestron into the *Vendor* field and SS-80 into the *Model* field, leave the *Driver* field as None and the *Type* field as Refractor, and set *Aperture* to 80mm and *Focal Length* to 400mm.
-
-![][30]
-
-After you enter the data, hit the *Save* button. You will see the data you just entered appear in the left window with an index number of 1 next to it. Now you can go about entering the specs for your guide scope following the steps above. Once you hit save here, the guide scope will also appear in the left window with an index number of 2. Once all of your scopes are entered, close this window. Now select your *Primary* and *Guide* telescopes from the drop-down window.
-
-![][31]
-
-After all that work, everything should be correctly configured! Click the *Close* button and complete the final bit of setup.
-
-### Starting your capture equipment
-
-This last step before you can start taking images should be easy enough. Click the *Play* button under Start & Stop Ekos to connect to your equipment.
-
-![][32]
-
-You will be greeted with a screen that looks similar to this:
-
-![][33]
-
-When you click on the tabs at the top of the screen, they should all show a green dot next to *Connection*, indicating that they are connected to your system. On my setup, the baud rate for my mount (the EQMod Mount tab) is set incorrectly, and so the mount is not connected.
-
-![][34]
-
-This is an easy fix; click on the *EQMod Mount* tab, then the *Connection* sub-tab, and then change the baud rate from 9600 to 115200. Now is a good time to ensure the serial port under *Ports* is the correct serial port for your mount. You can check which port the system has assigned to your device by running the command:
-
-```
-ls /dev | grep USB
-```
-
-You should see *ttyUSB0*. If there is more than one USB-serial device plugged in at a time, you will see more than one ttyUSB port, each with an incrementing number. To figure out which port is correct, unplug your mount and run the command again.
-
-Now click on the *Main Control* sub-tab, click *Connect* again, and wait for the mount to connect. It might take a few seconds; be patient and it should connect.
-
-The last thing to do is set the sensor and pixel size parameters for my camera. Under the *Sony DSLR Alpha-A6000 (Control)* tab, select the *Image Info* sub-tab. This is where you can enter your sensor specifications; if you don’t know them, a quick search on the internet will bring up your sensor’s maximum resolution as well as its pixel pitch. Enter this data into the right-side boxes, then press the *Set* button to load them into the left boxes and save them into memory. Hit the *Close* button when you are done.
-
-![][35]
-
-### Conclusion
-
-Your equipment is ready to use. In the next article, you will learn how to capture and process the images.
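-
-A small aside related to the serial-port discussion above: if the ttyUSB numbering bothers you, a udev rule can pin the mount to a stable device name. The rule below is a hypothetical example for the Prolific PL2303 adapter this mount uses, and the file name and symlink name are placeholders; check your own vendor and product IDs with `lsusb` first.
-
-```
-# /etc/udev/rules.d/99-telescope-mount.rules (hypothetical example)
-SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="mount-serial"
-```
-
-After reloading the rules with `sudo udevadm control --reload-rules` and replugging the mount, it will also show up as /dev/mount-serial regardless of enumeration order.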
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/ - -作者:[Geoffrey Marr][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/coremodule/ -[b]: https://github.com/lkxed -[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/astrophotography-setup-2-816x345.jpg -[2]: https://fedoramagazine.org/wp-content/uploads/2020/11/IMG_4151-768x1024.jpg -[3]: https://labs.fedoraproject.org/en/astronomy/ -[4]: https://labs.fedoraproject.org/astronomy/download/index.html -[5]: https://github.com/FedoraQt/MediaWriter -[6]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Preparing_for_Installation/#_fedora_media_writer -[7]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Booting_the_Installation/ -[8]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Installing_Using_Anaconda/#sect-installation-graphical-mode -[9]: https://www.qhyccd.com/html/prepub/log_en.html#!log_en.md -[10]: https://www.indilib.org/download/fedora/category/8-fedora.html -[11]: https://fedoramagazine.org/wp-content/uploads/2020/11/lsusb_output_rpi.png -[12]: https://edu.kde.org/kstars/ -[13]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_setuo_wizard-2.png -[14]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_location_select-2.png -[15]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars-download-extra-data-1.png -[16]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_install_extra_Data-1.png -[17]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium-1024x549.png -[18]: https://en.wikipedia.org/wiki/Sidereal_time -[19]: https://en.wikipedia.org/wiki/Digitized_Sky_Survey -[20]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_right_click_object-1024x576.png -[21]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium_clock_icon.png -[22]: https://www.indilib.org/about/ekos.html -[23]: https://fedoramagazine.org/wp-content/uploads/2020/11/open_ekos_icon.png -[24]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos-profile-wizard.png -[25]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_equipment_attached_to_this_device.png -[26]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_wizard_local_gear.png -[27]: https://www.skywatcherusa.com/products/eqm-35-mount -[28]: https://www.skywatcherusa.com/products/eq6-r-pro -[29]: https://www.skywatcherusa.com/collections/eq8-r-series-mounts/products/eq8-r-mount-with-pier-tripod -[30]: https://fedoramagazine.org/wp-content/uploads/2020/11/setup_telescope_profiles.png -[31]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_setup_aux_1_astrometry-1024x616.png -[32]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_start_equip_connect.png -[33]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_startup_equipment.png -[34]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_baud_rate_to_115200.png -[35]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_camera_sensor_settings.png diff --git a/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md b/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md 
deleted file mode 100644 index d82dcc262e..0000000000 --- a/sources/tech/20210205 Integrate devices and add-ons into your home automation setup.md +++ /dev/null @@ -1,190 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Integrate devices and add-ons into your home automation setup) -[#]: via: (https://opensource.com/article/21/2/home-automation-addons) -[#]: author: (Steve Ovens https://opensource.com/users/stratusss) - -Integrate devices and add-ons into your home automation setup -====== -Learn how to set up initial integrations and install add-ons in Home -Assistant in the fifth article in this series. -![Looking at a map][1] - -In the four previous articles in this series about home automation, I have discussed [what Home Assistant is][2], why you may want [local control][3], some of the [communication protocols][4] for smart home components, and how to [install Home Assistant][5] in a virtual machine (VM) using libvirt. In this fifth article, I will talk about configuring some initial integrations and installing some add-ons. - -### Set up initial integrations - -It's time to start getting into some of the fun stuff. The whole reason Home Assistant (HA) exists is to pull together various "smart" devices from different manufacturers. To do so, you have to make Home Assistant aware of which devices it should coordinate. I'll demonstrate by adding a [Sonoff Zigbee Bridge][6]. - -I followed [DigiBlur's Sonoff Guide][7] to replace the stock firmware with the open source firmware [Tasmota][8] to decouple my sensors from the cloud. My [second article][3] in this series explains why you might wish to replace the stock firmware. (I won't go into the device's setup with either the stock or custom firmware, as that is outside of the scope of this tutorial.) - -First, navigate to the **Configuration** menu on the left side of the HA interface, and make sure **Integrations** is selected: - -![Home Assistant integration configuration][9] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -From there, click the **Add Integration** button in the bottom-right corner and search for Zigbee: - -![Add Zigbee integration in Home Assistant][11] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Enter the device manually. If the Zigbee Bridge was physically connected to the Home Assistant interface, you could select the device path. For instance, I have a ZigBee CC2531 USB stick that I use for some Zigbee devices that do not communicate correctly with the Sonoff Bridge. It attaches directly to the Home Assistant host and shows up as a Serial Device. See my [third article][12] for details on wireless standards. However, in this tutorial, we will continue to configure and use the Sonoff Bridge. - -![Enter device manually][13] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -The next step is to choose the radio type, using the information in the DigiBlur tutorial. In this case, the radio is an EZSP radio: - -![Choose the radio type][14] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Finally, you need to know the IP address of the Sonoff Bridge, the port it is listening on, and the speed of the connection. Once I found the Sonoff Bridge's MAC address, I used my DHCP server to ensure that the device always uses the same IP on my network. DigiBlur's guide provides the port and speed numbers. - -![IP, port, and speed numbers][15] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Once you've added the Bridge, you can begin pairing devices to it. 
Ensure that your devices are in pairing mode. The Bridge will eventually find your device(s). - -![Device pairing][16] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -You can name the device(s) and assign an area (if you set them up). - -![Name device][17] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -The areas displayed will vary based on whether or not you have any configured. Bedroom, Kitchen, and Living Room exist by default. As you add a device, HA will add a new Card to the **Integrations** tab. A Card is a user interface (UI) element that groups information related to a specific entity. The Zigbee card looks like this: - -![Integration card][18] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Later, I'll come back to using this integration. I'll also get into how to use this device in automation flows. But now, I will show you how to add functionality to Home Assistant to make your life easier. - -### Add functionality with add-ons - -Out of the box, HA has some pretty great features for home automation. If you are buying commercial-off-the-shelf (CoTS) products, there is a good chance you can accomplish everything you need without the help of add-ons. However, you may want to investigate some of the add-ons, especially if (like me) you want to make your own sensors. - -There are all kinds of HA add-ons, ranging from Android debugging (ADB) tools to MQTT brokers to the Visual Studio Code editor. With each release, the number of add-ons grows. Some people make HA the center of their local system, encompassing DHCP, Plex, databases, and other useful programs. In fact, HA now ships with a built-in media browser for playing any media that you expose to it. - -I won't go too crazy in this article; I'll show you some of the basics and let you decide how you want to proceed. - -#### Install official add-ons - -Some of the many HA add-ons are available for installation right from the web UI, and others can be installed from alternative sources, such as Git. - -To see what's available, click on the **Supervisor** menu on the left panel. Near the top, you will see a tab called **Add-on store**. - -![Home Assistant add-on store][19] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Below are three of the more useful add-ons that I think should be standard for any HA deployment: - -![Home Assistant official add-ons][20] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -The **File Editor** allows you to manage Home Assistant configuration files directly from your browser. I find this far more convenient for quick edits than obtaining a copy of the file, editing it, and pushing it back to HA. If you use add-ons like the Visual Studio Code editor, you can edit the same files. - -The **Samba share** add-on is an excellent way to extract HA backups from the system or push configuration files or assets to the **web** directory. You should _never_ leave your backups sitting on the machine being backed up. - -Finally, **Mosquitto broker** is my preferred method for managing an [MQTT][21] client. While you can install a broker that's external to the HA machine, I find low value in doing this. Since I am using MQTT just to communicate with my IoT devices, and HA is the primary method of coordinating that communication, there is a low risk in having these components vertically integrated. If HA is offline, the MQTT broker is almost useless in my arrangement. - -#### Install community add-ons - -Home Assistant has a prolific community and passionate developers. 
In fact, many of the "community" add-ons are developed and maintained by the HA developers themselves. For my needs, I install: - -![Home Assistant community add-ons][22] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -**Grafana** (graphing program) and **InfluxDB** (a time-series database) are largely optional and relate to the ability to customize how you visualize the data HA collects. I like to have historical data handy and enjoy looking at the graphs from time to time. While not exactly HA-related, I have my pfSense firewall/router forward metrics to InfluxDB so that I can make some nice graphs over time. - -![Home Assistant Grafana add-on][23] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -**ESPHome** is definitely an optional add-on that's warranted only if you plan on making your own sensors. - -**NodeRED** is my preferred automation flow-handling solution. Although HA has some built-in automation, I find a visual flow editor is preferable for some of the logic I use in my system. - -#### Configure add-ons - -Some add-ons (such as File Editor) require no configuration to start them. However, most—such as Node-RED—require at least a small amount of configuration. Before you can start Node-RED, you will need to set a password: - -![Home Assistant Node-RED add-on][24] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -**IMPORTANT:** Many people will abstract passwords through the `secrets.yaml` file. This does not provide any additional security other than not having passwords in the add-on configuration's YAML. See [the official documentation][25] for more information. - -In addition to the password requirement, most of the add-ons that have a web UI default to having the `ssl: true` option set. A self-signed cert on my local LAN is not a requirement, so I usually set this to false. There is an add-on for Let's Encrypt, but dealing with certificates is outside the scope of this series. - -After you have looked through the **Configuration** tab, save your changes, and enable Node-RED on the add-on's main screen. - -![Home Assistant Node-RED add-on][26] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Don't forget to start the plugin. - -Most add-ons follow a similar procedure, so you can use this approach to set up other add-ons. - -### Wrapping up - -Whew, that was a lot of screenshots! Fortunately, when you are doing the configuration, the UI makes these steps relatively painless. - -At this point, your HA instance should be installed with some basic configurations and a few essential add-ons. - -In the next article, I will discuss integrating custom Internet of Things (IoT) devices into Home Assistant. Don't worry; the fun is just beginning! 
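-
-As a footnote to the `secrets.yaml` note above, the pattern is worth seeing once, because you can use it anywhere a password or token appears in your YAML. The snippet below is only an illustration: the MQTT settings are an example host configuration (newer Home Assistant releases configure MQTT through the UI integration instead), and the key name `mqtt_password` is arbitrary.
-
-```
-# configuration.yaml (or any other YAML config file)
-mqtt:
-  broker: 192.168.1.10
-  username: homeassistant
-  password: !secret mqtt_password
-
-# secrets.yaml
-mqtt_password: "my-mqtt-password"
-```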
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/home-automation-addons - -作者:[Steve Ovens][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/stratusss -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) -[2]: https://opensource.com/article/20/11/home-assistant -[3]: https://opensource.com/article/20/11/cloud-vs-local-home-automation -[4]: https://opensource.com/article/20/11/home-automation-part-3 -[5]: https://opensource.com/article/20/12/home-assistant -[6]: https://sonoff.tech/product/smart-home-security/zbbridge -[7]: https://www.digiblur.com/2020/07/how-to-use-sonoff-zigbee-bridge-with.html -[8]: https://tasmota.github.io/docs/ -[9]: https://opensource.com/sites/default/files/uploads/ha-setup20-configuration-integration.png (Home Assistant integration configuration) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee.png (Add Zigbee integration in Home Assistant) -[12]: https://opensource.com/article/20/11/wireless-protocol-home-automation -[13]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-2.png (Enter device manually) -[14]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-3.png (Choose the radio type) -[15]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-4.png (IP, port, and speed numbers) -[16]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-5.png (Device pairing) -[17]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-6.png (Name device) -[18]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-7_0.png (Integration card) -[19]: https://opensource.com/sites/default/files/uploads/ha-setup7-addons.png (Home Assistant add-on store) -[20]: https://opensource.com/sites/default/files/uploads/ha-setup8-official-addons.png (Home Assistant official add-ons) -[21]: https://en.wikipedia.org/wiki/MQTT -[22]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-addons.png (Home Assistant community add-ons) -[23]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-grafana-pfsense.png (Home Assistant Grafana add-on) -[24]: https://opensource.com/sites/default/files/uploads/ha-setup27-nodered2.png (Home Assistant Node-RED add-on) -[25]: https://www.home-assistant.io/docs/configuration/secrets/ -[26]: https://opensource.com/sites/default/files/uploads/ha-setup26-nodered1.png (Home Assistant Node-RED add-on) diff --git a/sources/tech/20210205.0 ⭐️⭐️ Why simplicity is critical to delivering sturdy applications.md b/sources/tech/20210205.0 ⭐️⭐️ Why simplicity is critical to delivering sturdy applications.md deleted file mode 100644 index ac4433957f..0000000000 --- a/sources/tech/20210205.0 ⭐️⭐️ Why simplicity is critical to delivering sturdy applications.md +++ /dev/null @@ -1,203 +0,0 @@ -[#]: subject: "Why simplicity is critical to delivering sturdy applications" -[#]: via: "https://opensource.com/article/21/2/simplicity" -[#]: author: "Alex Bunardzic https://opensource.com/users/alex-bunardzic" -[#]: collector: 
"lkxed" -[#]: translator: "toknow-gh" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Why simplicity is critical to delivering sturdy applications -====== - -In the previous articles in this series, I explained why tackling coding problems all at once, as if they were hordes of zombies, is a mistake. I'm using a helpful acronym explaining why it's better to approach problems incrementally. **ZOMBIES** stands for: - -**Z** – Zero**O** – One**M** – Many (or more complex)**B** – Boundary behaviors**I** – Interface definition**E** – Exercise exceptional behavior**S** – Simple scenarios, simple solutions - -More Great Content - -- [Free online course: RHEL technical overview][1] -- [Learn Advanced Linux Commands][2] -- [Download Cheat Sheets][3] -- [Find an Open Source Alternative][4] -- [Read Top Linux Content][5] -- [Check out open source resources][6] - -In the first four articles in this series, I demonstrated the first five **ZOMBIES** principles. The first article [implemented **Z**ero][7], which provides the simplest possible path through your code. The second article performed [tests with **O**ne and **M**any][8] samples, the third article looked at [**B**oundaries and **I**nterfaces][9], and the fourth examined [**E**xceptional behavior][10]. In this article, I'll take a look at the final letter in the acronym: **S**, which stands for "simple scenarios, simple solutions." - -### Simple scenarios, simple solutions in action - -If you go back and examine all the steps taken to implement the shopping API in this series, you'll see a purposeful decision to always stick to the simplest possible scenarios. In the process, you end up with the simplest possible solutions. - -There you have it: **ZOMBIES** help you deliver sturdy, elegant solutions by adhering to simplicity. - -### Victory? - -It might seem you're done here, and a less conscientious engineer would very likely declare victory. But enlightened engineers always probe a bit deeper. - -One exercise I always recommend is [mutation testing][11]. Before you wrap up this exercise and go on to fight new battles, it is wise to give your solution a good shakeout with mutation testing. And besides, you have to admit that _mutation_ fits well in a battle against zombies. - -Use the open source [Stryker.NET][12] to run mutation tests. - -![Mutation testing][13] - -It looks like you have one surviving mutant! That's not a good sign. - -What does this mean? Just when you thought you had a rock-solid, sturdy solution, Stryker.NET is telling you that not everything is rosy in your neck of the woods. - -Take a look at the pesky mutant who survived: - -![Surviving mutant][14] - -The mutation testing tool took the statement: - -``` -if(total > 500.00) { -``` - -and mutated it to: - -``` -if(total >= 500.00) { -``` - -Then it ran the tests and realized that none of the tests complained about the change. If there is a change in processing logic and none of the tests complain about the change, that means you have a surviving mutant. - -### Why mutation matters - -Why is a surviving mutant a sign of trouble? It's because the processing logic you craft governs the behavior of your system. If the processing logic changes, the behavior should change, too. And if the behavior changes, the expectations encoded in the tests should be violated. If these expectations are not violated, that means that the expectations are not precise enough. You have a loophole in your processing logic. - -To fix this, you need to "kill" the surviving mutant. 
How do you do that? Typically, the fact that a mutant survived means at least one expectation is missing.
-
-Look through your code to see what expectation, if any, is not there:
-
-- You clearly defined the expectation that a newly created basket has zero items (and, by implication, has a $0 grand total).
-- You also defined the expectation that adding one item will result in the basket having one item, and if the item price is $10, the grand total will be $10.
-- Furthermore, you defined the expectation that adding two items to the basket, one item priced at $10 and the other at $20, results in a grand total of $30.
-- You also declared expectations regarding the removal of items from the basket.
-- Finally, you defined the expectation that any order total greater than $500 results in a price discount. The business policy rule dictates that in such a case, the discount is 10% of the order's total price.
-
-What is missing? According to the mutation testing report, you never defined an expectation regarding what business policy rule applies when the order total is exactly $500. You defined what happens if the order total is greater than the $500 threshold and what happens when the order total is less than $500.
-
-Define this edge-case expectation:
-
-```
-[Fact]
-public void Add2ItemsTotal500GrandTotal500() {
-        var expectedGrandTotal = 500.00;
-        var actualGrandTotal = 450;
-        Assert.Equal(expectedGrandTotal, actualGrandTotal);
-}
-```
-
-The first stab fakes the expectation to make it fail. You now have nine microtests; eight succeed, and the ninth test fails:
-
-```
-[xUnit.net 00:00:00.57] tests.UnitTest1.Add2ItemsTotal500GrandTotal500 [FAIL]
-  X tests.UnitTest1.Add2ItemsTotal500GrandTotal500 [2ms]
-  Error Message:
-   Assert.Equal() Failure
-Expected: 500
-Actual: 450
-[...]
-Test Run Failed.
-Total tests: 9
-     Passed: 8
-     Failed: 1
- Total time: 1.5920 Seconds
-```
-
-Replace the hard-coded values with the expectation encoded as a confirmation example:
-
-```
-[Fact]
-public void Add2ItemsTotal500GrandTotal500() {
-        var expectedGrandTotal = 500.00;
-        Hashtable item1 = new Hashtable();
-        item1.Add("0001", 400.00);
-        shoppingAPI.AddItem(item1);
-        Hashtable item2 = new Hashtable();
-        item2.Add("0002", 100.00);
-        shoppingAPI.AddItem(item2);
-        var actualGrandTotal = shoppingAPI.CalculateGrandTotal();
-        Assert.Equal(expectedGrandTotal, actualGrandTotal);
-}
-```
-
-You added two items, one priced at $400, the other at $100, totaling $500. After calculating the grand total, you expect that it will be $500.
-
-Run the system. All nine tests pass!
-
-```
-Total tests: 9
-     Passed: 9
-     Failed: 0
- Total time: 1.0440 Seconds
-```
-
-Now for the moment of truth. Will this new expectation remove all mutants? Run the mutation testing and check the results:
-
-![Mutation testing success][15]
-
-Success! All 10 mutants were killed. Great job; you can now ship this API with confidence.
-
-### Epilogue
-
-If there is one takeaway from this exercise, it's the emerging concept of _skillful procrastination_. It's an essential concept, knowing that many of us tend to rush mindlessly into envisioning the solution even before our customers have finished describing their problem.
-
-#### Positive procrastination
-
-Procrastination doesn't come easily to software engineers. We're eager to get our hands dirty with the code. We know by heart numerous design patterns, anti-patterns, principles, and ready-made solutions. We're itching to put them into executable code, and we lean toward doing it in large batches.
So it is indeed a virtue to _hold our horses_ and carefully consider each and every step we make. - -This exercise proves how **ZOMBIES** help you take many deliberate small steps toward solutions. It's one thing to be aware of and to agree with the [Yagni][16] principle, but in the "heat of the battle," those deep considerations often fly out the window, and you end up throwing in everything and the kitchen sink. And that produces bloated, tightly coupled systems. - -### Iteration and incrementation - -Another essential takeaway from this exercise is the realization that the only way to keep a system working at all times is by adopting an _iterative approach_. You developed the shopping API by applying some _rework_, which is to say, you proceeded with coding by making changes to code that you already changed. This rework is unavoidable when iterating on a solution. - -One of the problems many teams experience is confusion related to iteration and increments. These two concepts are fundamentally different. - -An _incremental approach_ is based on the idea that you hold a crisp set of requirements (or a _blueprint_) in your hand, and you go and build the solution by working incrementally. Basically, you build it piece-by-piece, and when all pieces have been assembled, you put them together, and _voila_! The solution is ready to be shipped! - -In contrast, in an _iterative approach_, you are less certain that you know all that needs to be known to deliver the expected value to the paying customer. Because of that realization, you proceed gingerly. You're wary of breaking the system that already works (i.e., the system in a steady-state). If you disturb that balance, you always try to disturb it in the least intrusive, least invasive manner. You focus on taking the smallest imaginable batches, then quickly wrapping up your work on each batch. You prefer to have the system back to the steady-state in a matter of minutes, sometimes even seconds. - -That's why an iterative approach so often adheres to "_fake it 'til you make it_." You hard-code many expectations so that you can verify that a tiny change does not disable the system from running. You then make the changes necessary to replace the hard-coded value with real processing. - -As a rule of thumb, in an iterative approach, you aim to craft an expectation (a microtest) in such a way that it precipitates only one improvement to the code. You go one improvement by one improvement, and with each improvement, you exercise the system to make sure it is in a working state. As you proceed in that fashion, you eventually hit the stage where all the expectations have been met, and the code has been refactored in such a way that it leaves no surviving mutants. - -Once you get to that state, you can be fairly confident that you can ship the solution. - -Many thanks to inimitable [Kent Beck][17], [Ron Jeffries][18], and [GeePaw Hill][19] for being a constant inspiration on my journey to software engineering apprenticeship. - -And may _your_ journey be filled with ZOMBIES. 
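-
-A practical note for readers who want to reproduce the mutation run above: Stryker.NET is distributed as a .NET tool. Assuming the .NET SDK is installed and your test project lives in a directory such as `tests`, a typical invocation looks like the sketch below; check the Stryker documentation for the options that match your own project layout.
-
-```
-# Install the Stryker.NET tool once, then run it from the test project directory
-dotnet tool install -g dotnet-stryker
-cd tests
-dotnet stryker
-```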
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/simplicity - -作者:[Alex Bunardzic][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lkxed/ -[1]: https://www.redhat.com/en/services/training/rh024-red-hat-linux-technical-overview?intcmp=7016000000127cYAAQ -[2]: https://developers.redhat.com/cheat-sheets/advanced-linux-commands/?intcmp=7016000000127cYAAQ -[3]: https://opensource.com/downloads/cheat-sheets?intcmp=7016000000127cYAAQ -[4]: https://opensource.com/alternatives?intcmp=7016000000127cYAAQ -[5]: https://opensource.com/tags/linux?intcmp=7016000000127cYAAQ -[6]: https://opensource.com/resources?intcmp=7016000000127cYAAQ -[7]: https://opensource.com/article/21/1/zombies-zero -[8]: https://opensource.com/article/21/1/zombies-2-one-many -[9]: https://opensource.com/article/21/1/zombies-3-boundaries-interface -[10]: https://opensource.com/article/21/1/zombies-4-exceptional-behavior -[11]: https://opensource.com/article/19/9/mutation-testing-example-definition -[12]: https://stryker-mutator.io/ -[13]: https://opensource.com/sites/default/files/uploads/stryker-net.png -[14]: https://opensource.com/sites/default/files/uploads/mutant.png -[15]: https://opensource.com/sites/default/files/uploads/stryker-net-success.png -[16]: https://martinfowler.com/bliki/Yagni.html -[17]: https://en.wikipedia.org/wiki/Kent_Beck -[18]: https://en.wikipedia.org/wiki/Ron_Jeffries -[19]: https://www.geepawhill.org/ \ No newline at end of file diff --git a/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md b/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md deleted file mode 100644 index ef95349df3..0000000000 --- a/sources/tech/20210208 Fedora Aarch64 on the SolidRun HoneyComb LX2K.md +++ /dev/null @@ -1,219 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Fedora Aarch64 on the SolidRun HoneyComb LX2K) -[#]: via: (https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/) -[#]: author: (John Boero https://fedoramagazine.org/author/boeroboy/) - -Fedora Aarch64 on the SolidRun HoneyComb LX2K -====== - -![][1] - -Photo by [Tim Mossholder][2] on [Unsplash][3] - -Almost a year has passed since the [HoneyComb][4] development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking “what a great idea.” Then I saw the price and realized this isn’t just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren’t mass produced like the simpler Pi is, and they pack in quite a bit of high end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. Adding a case and RAM, the build ended up costing about $1100 shipped to London. This is a recount of my experiences and the current state of using Fedora on this fun bit of hardware. - -First and foremost, the tech packed into this board is impressive. It’s not about to kill a Xeon workstation in raw performance but it’s going to wallop it in performance/watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. 
It’s also a powerful hybrid of compute and network functionality, combining powerful network features in a carrier board with a modular daughter card sporting a 16-core A72 with 2 ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10Gbe. For an extra $250 you can add the 100Gbe option if you’re building a 5G deployment or an ISP for a small country (bottom right of board). Imagine this jacked into a 100Gb uplink port acting as proxy, TLS inspector, router, or storage for a large 10gb TOR switch.
-
-![][5]
-
-When I ordered it I didn’t fully understand the network co-processor included from NXP. NXP is the company that makes the unique [LX2160A][6] CPU/SOC for this, as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP’s Layerscape user manual.
-
-![Configure ports in switch, LAG, MUX mode, or straight NICs.][7]
-
-I have a 10gb network in my home attic via a Ubiquiti ES-16-XG so I was eager to see how much this board could push. I also have a QNAP connected via 10gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord on [Developer-Ecosystem][8] which was immensely helpful, as install wasn’t as straightforward as previous blogs have mentioned. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch.
-
-![][9]
-
-For starters, you can add a GPU and install graphically or install via USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box, which in the end over-complicated things. If you need to swap parts or re-flash a BIOS via the microSD card, you’ll need to swap display, keyboard + mouse. Chaos. Much simpler just to plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open-ended PCIe3-x8 slot but I’ll keep it open for now. Note that the board does not support PCIe Atomics so some devices may have compatibility issues.
-
-Now comes the fun part. BIOS is not built-in here. You’ll need to [build][10] from source for your RAM speed and install via microSDHC. At first this seems annoying, but then you realize that with a removable BIOS installer it’s pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you’ll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it to find it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I’ll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box but I encountered major issues with it even with the widely documented kernel param workaround:
-
-```
-nvme_core.default_ps_max_latency_us=0
-```
-
-In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb, which worked much better.
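-
-A general Fedora note, not specific to this board: if you need a kernel argument like the one above to persist across reboots, the usual way to add it to every installed kernel’s boot entry is with grubby.
-
-```
-# Add the workaround to all installed kernels; it can be removed later with --remove-args
-sudo grubby --update-kernel=ALL --args="nvme_core.default_ps_max_latency_us=0"
-```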
-
-After some fidgeting I was able to install Fedora, but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB->RJ45 Ethernet adapter, which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1Gbe NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s _restool_ utility to map ports to their usage. In this case just a runtime mapping of _dmap -> dni_ was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source. It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments.
-
-Since I was starting to accumulate quite a few custom packages, it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR, I think it’s one of Fedora’s finest resources. This repo contains the UEFI build (currently a failing build), a 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot:
-
-```
-systemctl enable --now dpmac@7.service
-systemctl enable --now dpmac@8.service
-systemctl enable --now dpmac@9.service
-systemctl enable --now dpmac@10.service
-```
-
-Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1Gb. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep $@ in each activation so they are consistent and don’t have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won’t be an issue once mainlined.
-
-![][15]
-
-I’d love to explore the built-in LAG features, but this still needs to be coded into the _restool_ options. I’ll save it for later. In the meantime I managed a single 10gb link as primary, and a 3×10 LACP Team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic.
-
-### Energy Efficiency
-
-Now with a stable environment it’s time to raise some hell. It’s really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time to test drive with a power meter. Total power usage is consistently between 20 and 40 watts (usually in the low 20s), which is really impressive. I tried a few _tuned_ profiles which didn’t seem to have much effect on energy. If you add a power-hungry GPU or device, that can obviously increase, but for a dev server it’s perfect and well below the Z600 workstations I have next to it, which consume 160-250 watts each when fired up.
-
-### Remote Access
-
-I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance, thermal stats, and also helps to evaluate the device as a workstation or potential VDI.
The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors but that seems to be because of KDE’s latest widget changes which I’d have to dig into. - -![X2go KDE session over SSH][16] - -Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience. - -![Cockpit behaves as expected.][17] - -All I needed to do now is shift into high gear with jumbo frames. MTU 1500 yields me an iperf of about 2-4Gbps bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100Gbe QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10gb/16 ports). - -### Storage - -As a storage solution this hardware provides great value in a small thermal window and energy saving footprint. I could accomplish similar performance with an old x86 box for cheap but the energy usage alone would eclipse any savings in short order. By comparison I’ve seen some consumer NAS devices offer 10Gbe and NVMe cache sharing an inadequate number of PCIe2 lanes and bottlenecked at the bus. This is fully customizable and since the energy footprint is similar to a small laptop a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or rack mount case with bays. Some vendors such as [Bamboo][18] are actually building server options around this platform as we speak. - -The board has 4 SATA3 ports but if I were truly going to build a NAS with this I would probably add a RAID card that makes best use of the PCIe8x slot, which thankfully is open ended. Why some hardware vendors choose to include close-ended PCIe 8x,4x slots is beyond me. Future models will ship with a physical x16 slot but only 8x electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board presents interesting possibilities later as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is. - -### Network Perf - -For now I’ve managed to rig up a 4×10 LACP Team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. KDE nm Network widget still doesn’t support Teams but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with _nmcli_ and _teamdctl_. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning and at this point I feel like I could fill a book just with iperf stats. 
- -``` -$ iperf3 -c honeycomb -P 4 --cport 5000 -R -Connecting to host honeycomb, port 5201 -Reverse mode, remote host honeycomb is sending -[ 5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201 -[ 7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201 -[ 9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201 -[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201 -[ ID] Interval Transfer Bitrate -[ 5] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec -[ 7] 1.00-2.00 sec 382 MBytes 3.21 Gbits/sec -[ 9] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec -[ 11] 1.00-2.00 sec 383 MBytes 3.21 Gbits/sec -[SUM] 1.00-2.00 sec 1.49 GBytes 12.8 Gbits/sec -- - - - - - - - - - - - - - - - - - - - - - - - - -(TRUNCATED) -- - - - - - - - - - - - - - - - - - - - - - - - - -[ 5] 2.00-3.00 sec 380 MBytes 3.18 Gbits/sec -[ 7] 2.00-3.00 sec 380 MBytes 3.19 Gbits/sec -[ 9] 2.00-3.00 sec 380 MBytes 3.18 Gbits/sec -[ 11] 2.00-3.00 sec 380 MBytes 3.19 Gbits/sec -[SUM] 2.00-3.00 sec 1.48 GBytes 12.7 Gbits/sec -- - - - - - - - - - - - - - - - - - - - - - - - - -[ ID] Interval Transfer Bitrate Retr -[ 5] 0.00-10.00 sec 3.67 GBytes 3.16 Gbits/sec 1 sender -[ 5] 0.00-10.00 sec 3.67 GBytes 3.15 Gbits/sec receiver -[ 7] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec 7 sender -[ 7] 0.00-10.00 sec 3.67 GBytes 3.15 Gbits/sec receiver -[ 9] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec 36 sender -[ 9] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec receiver -[ 11] 0.00-10.00 sec 3.69 GBytes 3.17 Gbits/sec 1 sender -[ 11] 0.00-10.00 sec 3.68 GBytes 3.16 Gbits/sec receiver -[SUM] 0.00-10.00 sec 14.7 GBytes 12.6 Gbits/sec 45 sender -[SUM] 0.00-10.00 sec 14.7 GBytes 12.6 Gbits/sec receiver - -iperf Done -``` - -### Notes on iperf3 - -I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news with team support adding better load balancing to single TCP flows. This still seems bogus as you still can’t load balance a single flow with a team in my experience. Also LACP claims to be fully automated and easier to set up than traditional load balanced trunks but I find the opposite to be true. For all it claims to automate you still need to have hashing algorithms configured correctly at switches and host. With a few quirks along the way I once accidentally left a team in broadcast mode (not LACP) which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP. - -Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. It’s hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dest IP+port when they were defaulting to MAC/port. Still I was only seeing traffic on one slave of my 2×10 team even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc… If your lb hash across a pair of sequential MACs includes src+dst port but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc.. 
With odd+even numbers in client ports, I’m finally able to LB across all interfaces in all LAG groups. This setup would scale out well with more clients on the network.
-
-![Proper LACP load balancing.][19]
-
-Everything could probably be tuned a bit better, but for now it is excellent performance and it puts my QNAP to shame. I’ll continue experimenting with the network co-processor and seeing if I can enable the native LAG support for even better performance. Across the network I would expect a practical peak of about 40 Gbps raw, which is great.
-
-![][20]
-
-### Virtualization
-
-What about virt? One of the best parts about having 16 A72 cores is support for Aarch64 VMs at full speed using KVM, which you won’t be able to do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp builds with aarch64 builds on COPR. Qemu on x86 without KVM can emulate aarch64 but crawls by comparison. I’ve not tried to add it to an oVirt cluster yet, but it’s really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits it perfectly so long as you pair it with ECC RAM, which I skipped as I’m not running anything critical. If anybody wants to test drive a VM, DM me and I’ll try to get you some temp access.
-
-![Virtual Machines in Cockpit][21]
-
-### Benchmarks
-
-[Phoronix][22] has already done quite a few benchmarks on [OpenBenchmarking.org][23], but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison. Both use DDR4 with similar clock speeds (around 2GHz), but different architectures and caches obviously yield different results. Also the Xeons are dual socket, which is a huge cooling advantage for single threaded workloads. You can watch one process bounce between the coolest CPU sockets. The Honeycomb doesn’t have this luxury and has a smaller fan, but the clock speed is playing it safe and slow at 2GHz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven’t played with the PWM settings to adjust the fan speed up just in case. Benchmarks were performed using the tuned profile network-throughput.
-
-Strangely, some single core operations seem to actually perform better on the Honeycomb than they do on my Xeons. I tried single-threaded zstd compression with the default level 3 on a few files and found it actually performs consistently better on the Honeycomb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive **2073** MB/s:
-
-```
-Zstd Compression 1.4.5:
-   pts/compress-zstd-1.2.1 [Compression Level: 3]
-   Test 1 of 1
-   Estimated Trial Run Count:    3
-   Estimated Time To Completion: 9 Minutes [22:41 UTC]
-       Started Run 1 @ 22:33:02
-       Started Run 2 @ 22:33:53
-       Started Run 3 @ 22:34:37
-   Compression Level: 3:
-       2079.3
-       2067.5
-       2073.9
-   Average: 2073.57 MB/s
-```
-
-For an apples-to-oranges comparison, my 2×10 core Xeon E5-2660 v3 box does **2790** MB/s, so 2073 seems perfectly respectable as a potential workstation. Paired with a midrange GPU this device would also make a great video transcoder or media server. Some users have asked about mining, but I wouldn’t use one of these for mining cryptocurrency.
The lack of PCIe atomics means certain OpenCL and CUDA features might not be supported and with only 8 PCIe lanes exposed you’re fairly limited. That said it could potentially make a great mobile ML, VR, IoT, or vision development platform. The possibilities are pretty open as the whole package is very well balanced and flexible. - -### Conclusion - -I wasn’t organized enough this year to arrange a FOSDEM visit but this is something I would have loved to talk about. I’m definitely glad I tried out. Special thanks to Jon Nettleton and the folks on SolidRun’s Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab. It provides a great Arm platform for development and it’s great to see how solid Fedora’s alternative architecture support is. I got my Linux start on Gentoo back in the day, but Fedora really has upped it’s arch game. I’m really glad I didn’t have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches to be mainlined into the Fedora kernel and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia’s Arm acquisition goes through. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/ - -作者:[John Boero][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/boeroboy/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/honeycomb-fed-aarch64-816x346.jpg -[2]: https://unsplash.com/@timmossholder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/honeycombs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: http://solid-run.com/arm-servers-networking-platforms/honeycomb-workstation/#overview -[5]: https://www.solid-run.com/wp-content/uploads/2020/11/HoneyComb-layout-front.png -[6]: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/layerscape-processors/layerscape-lx2160a-processor:LX2160A -[7]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-894x1024.png -[8]: https://discord.com/channels/620838168794497044 -[9]: https://i.imgflip.com/11c7o.gif -[10]: https://github.com/SolidRun/lx2160a_uefi -[11]: mailto:dpmac@7.service -[12]: mailto:dpmac@8.service -[13]: mailto:dpmac@9.service -[14]: mailto:dpmac@10.service -[15]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-2-1024x403.png -[16]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/Screenshot_20210202_112051-1024x713.jpg -[17]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-2-1024x722.png -[18]: https://www.bamboosystems.io/b1000n/ -[19]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-4-1024x245.png -[20]: http://systems.cs.columbia.edu/files/kvm-arm-logo.png -[21]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-1024x717.png -[22]: https://www.phoronix.com/scan.php?page=news_item&px=SolidRun-ClearFog-ARM-ITX -[23]: https://openbenchmarking.org/result/1905313-JONA-190527343&obr_sor=y&obr_rro=y&obr_hgv=ClearFog-ITX diff --git a/sources/tech/20210208 How to set up custom sensors in Home 
Assistant.md b/sources/tech/20210208 How to set up custom sensors in Home Assistant.md deleted file mode 100644 index 074f898a93..0000000000 --- a/sources/tech/20210208 How to set up custom sensors in Home Assistant.md +++ /dev/null @@ -1,290 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to set up custom sensors in Home Assistant) -[#]: via: (https://opensource.com/article/21/2/home-assistant-custom-sensors) -[#]: author: (Steve Ovens https://opensource.com/users/stratusss) - -How to set up custom sensors in Home Assistant -====== -Dive into the YAML files to set up custom sensors in the sixth article -in this home automation series. -![Computer screen with files or windows open][1] - -In the last article in this series about home automation, I started digging into Home Assistant. I [set up a Zigbee integration][2] with a Sonoff Zigbee Bridge and installed a few add-ons, including Node-RED, File Editor, Mosquitto broker, and Samba. I wrapped up by walking through Node-RED's configuration, which I will use heavily later on in this series. The four articles before that one discussed [what Home Assistant is][3], why you may want [local control][4], some of the [communication protocols][5] for smart home components, and how to [install Home Assistant][6] in a virtual machine (VM) using libvirt. - -In this sixth article, I'll walk through the YAML configuration files. This is largely unnecessary if you are just using the integrations supported in the user interface (UI). However, there are times, particularly if you are pulling in custom sensor data, where you have to get your hands dirty with the configuration files. - -Let's dive in. - -### Examine the configuration files - -There are several potential configuration files you will want to investigate. Although everything I am about to show you _can_ be done in the main configuration.yaml file, it can help to split your configuration into dedicated files, especially with large installations. - -Below I will walk through how I configure my system. For my custom sensors, I use the ESP8266 chipset, which is very maker-friendly. I primarily use [Tasmota][7] for my custom firmware, but I also have some components running [ESPHome][8]. Configuring firmware is outside the scope of this article. For now, I will assume you set up your devices with some custom firmware (or you wrote your own with [Arduino IDE][9] ). - -#### The /config/configuration.yaml file - -Configuration.yaml is the main file Home Assistant reads. For the following, use the File Editor you installed in the previous article. If you do not see File Editor in the left sidebar, enable it by going back into the **Supervisor** settings and clicking on **File Editor**. You should see a screen like this: - -![Install File Editor][10] - -(Steve Ovens, [CC BY-SA 4.0][11]) - -Make sure **Show in sidebar** is toggled on. I also always toggle on the **Watchdog** setting for any add-ons I use frequently. - -Once that is completed, launch File Editor. There is a folder icon in the top-left header bar. This is the navigation icon. The `/config` folder is where the configuration files you are concerned with are stored. 
If you click on the folder icon, you will see a few important files: - -![Configuration split files][12] - -The following is a default configuration.yaml: - -![Default Home Assistant configuration.yaml][13] - -(Steve Ovens, [CC BY-SA 4.0][11]) - -The notation `script: !include scripts.yaml` indicates that Home Assistant should reference the contents of scripts.yaml anytime it needs the definition of a script object. You'll notice that each of these files correlates to files observed when the folder icon is clicked. - -I added three lines to my configuration.yaml: - - -``` -input_boolean: !include input_boolean.yaml -binary_sensor: !include binary_sensor.yaml -sensor: !include sensor.yaml -``` - -As a quick aside, I configured my MQTT settings (see Home Assistant's [MQTT documentation][14] for more details) in the configuration.yaml file: - - -``` -mqtt: -  discovery: true -  discovery_prefix: homeassistant -  broker: 192.168.11.11 -  username: mqtt -  password: superpassword -``` - -If you make an edit, don't forget to click on the Disk icon to save your work. - -![Save icon in Home Assistant config][15] - -(Steve Ovens, [CC BY-SA 4.0][11]) - -#### The /config/binary_sensor.yaml file - -After you name your file in configuration.yaml, you'll have to create it. In the File Editor, click on the folder icon again. There is a small icon of a piece of paper with a **+** sign in its center. Click on it to bring up this dialog: - -![Create config file][16] - -(Steve Ovens, [CC BY-SA 4.0][11]) - -I have three main types of [binary sensors][17]: door, motion, and power. A binary sensor has only two states: on or off. All my binary sensors send their data to MQTT. See my article on [cloud vs. local control][4] for more information about MQTT. - -My binary_sensor.yaml file looks like this: - - -``` - - platform: mqtt -    state_topic: "BRMotion/state/PIR1" -    name: "BRMotion" -    qos: 1 -    payload_on: "ON" -    payload_off: "OFF" -    device_class: motion -    -  - platform: mqtt -    state_topic: "IRBlaster/state/PROJECTOR" -    name: "ProjectorStatus" -    qos: 1 -    payload_on: "ON" -    payload_off: "OFF" -    device_class: power -    -  - platform: mqtt -    state_topic: "MainHallway/state/DOOR" -    name: "FrontDoor" -    qos: 1 -    payload_on: "open" -    payload_off: "closed" -    device_class: door -``` - -Take a look at the definitions. Since `platform` is self-explanatory, start with `state_topic`. - - * `state_topic`, as the name implies, is the topic where the device's state is published. This means anyone subscribed to the topic will be notified any time the state changes. This path is completely arbitrary, so you can name it anything you like. I tend to use the convention `location/state/object`, as this makes sense for me. I want to be able to reference all devices in a location, and for me, this layout is the easiest to remember. Grouping by device type is also a valid organizational layout. - - * `name` is the string used to reference the device inside Home Assistant. It is normally referenced by `type.name`, as seen in this card in the Home Assistant [Lovelace][18] interface: - -![Binary sensor card][19] - -(Steve Ovens, [CC BY-SA 4.0][11]) - - * `qos`, short for quality of service, refers to how an MQTT client communicates with the broker when posting to a topic. - - * `payload_on` and `payload_off` are determined by the firmware. These sections tell Home Assistant what text the device will send to indicate its current state. 
- - * `device_class:` There are multiple possibilities for a device class. Refer to the [Home Assistant documentation][17] for more information and a description of each type available. - - - - -#### The /config/sensor.yaml file - -This file differs from binary_sensor.yaml in one very important way: The sensors within this configuration file can have vastly different data inside their payloads. Take a look at one of the more tricky bits of sensor data, temperature. - -Here is the definition for my DHT temperature sensor: - - -``` - - platform: mqtt -    state_topic: "Steve_Desk_Sensor/tele/SENSOR" -    name: "Steve Desk Temperature" -    value_template: '{{ value_json.DHT11.Temperature }}' -    -  - platform: mqtt -    state_topic: "Steve_Desk_Sensor/tele/SENSOR" -    name: "Steve Desk Humidity" -    value_template: '{{ value_json.DHT11.Humidity }}' -``` - -You'll notice two things right from the start. First, there are two definitions for the same `state_topic`. This is because this sensor publishes three different statistics. - -Second, there is a new definition of `value_template`. Most sensors, whether custom or not, send their data inside a JSON payload. The template tells Home Assistant where the important information is in the JSON file. The following shows the raw JSON coming from my homemade sensor. (I used the program `jq` to make the JSON more readable.) - - -``` -{ -  "Time": "2020-12-23T16:59:11", -  "DHT11": { -    "Temperature": 24.8, -    "Humidity": 32.4, -    "DewPoint": 7.1 -  }, -  "BH1750": { -    "Illuminance": 24 -  }, -  "TempUnit": "C" -} -``` - -There are a few things to note here. First, as the sensor data is stored in a time-based data store, every reading has a `Time` entry. Second, there are two different sensors attached to this output. This is because I have both a DHT11 temperature sensor and a BH1750 light sensor attached to the same ESP8266 chip. Finally, my temperature is reported in Celsius. - -Hopefully, the Home Assistant definitions will make a little more sense now. `value_json` is just a standard name given to any JSON object ingested by Home Assistant. The format of the `value_template` is `value_json..`. - -For example, to retrieve the dewpoint: - - -``` -`value_template: '{{ value_json.DHT11.DewPoint}}'` -``` - -While you can dump this information to a file from within Home Assistant, I use Tasmota's `Console` to see the data it is publishing. (If you want me to do an article on Tasmota, please let me know in the comments below.) - -As a side note, I also keep tabs on my local Home Assistant resource usage. To do so, I put this in my sensor.yaml file: - - -``` - - platform: systemmonitor -    resources: -      - type: disk_use_percent -        arg: / -      - type: memory_free -      - type: memory_use -      - type: processor_use -``` - -While this is technically not a sensor, I put it here, as I think of it as a data sensor. For more information, see the Home Assistant's [system monitoring][20] documentation. - -#### The /config/input_boolean file - -This last section is pretty easy to set up, and I use it for a wide variety of applications. An input boolean is used to track the status of something. It's either on or off, home or away, etc. I use these quite extensively in my automations. 
- -My definitions are: - - -``` -   steve_home: -        name: steve -    steve_in_bed: -        name: 'steve in bed' -    guest_home: -    -    kitchen_override: -        name: kitchen -    kitchen_fan_override: -        name: kitchen_fan -    laundryroom_override: -        name: laundryroom -    bathroom_override: -        name: bathroom -    hallway_override:   -        name: hallway -    livingroom_override:   -        name: livingroom -    ensuite_bathroom_override: -        name: ensuite_bathroom -    steve_desk_light_override: -        name: steve_desk_light -    projector_led_override: -        name: projector_led -        -    project_power_status: -        name: 'Projector Power Status' -    tv_power_status: -        name: 'TV Power Status' -    bed_time: -        name: "It's Bedtime" -``` - -I use some of these directly in the Lovelace UI. I create little badges that I put at the top of each of the pages I have in the UI: - -![Home Assistant options in Lovelace UI][21] - -(Steve Ovens, [CC BY-SA 4.0][11]) - -These can be used to determine whether I am home, if a guest is in my house, and so on. Clicking on one of these badges allows me to toggle the boolean, and this object can be read by automations to make decisions about how the “smart devices” react to a person's presence (if at all). I'll revisit the booleans in a future article when I examine Node-RED in more detail. - -### Wrapping up - -In this article, I looked at the YAML configuration files and added a few custom sensors into the mix. You are well on the way to getting some functioning automation with Home Assistant and Node-RED. In the next article, I'll dive into some basic Node-RED flows and introduce some basic automations. - -Stick around; I've got plenty more to cover, and as always, leave a comment below if you would like me to examine something specific. If I can, I'll be sure to incorporate the answers to your questions into future articles. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/home-assistant-custom-sensors - -作者:[Steve Ovens][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/stratusss -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open) -[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons -[3]: https://opensource.com/article/20/11/home-assistant -[4]: https://opensource.com/article/20/11/cloud-vs-local-home-automation -[5]: https://opensource.com/article/20/11/home-automation-part-3 -[6]: https://opensource.com/article/20/12/home-assistant -[7]: https://tasmota.github.io/docs/ -[8]: https://esphome.io/ -[9]: https://create.arduino.cc/projecthub/Niv_the_anonymous/esp8266-beginner-tutorial-project-6414c8 -[10]: https://opensource.com/sites/default/files/uploads/ha-setup22-file-editor-settings.png (Install File Editor) -[11]: https://creativecommons.org/licenses/by-sa/4.0/ -[12]: https://opensource.com/sites/default/files/uploads/ha-setup29-configuration-split-files1.png (Configuration split files) -[13]: https://opensource.com/sites/default/files/uploads/ha-setup28-configuration-yaml.png (Default Home Assistant configuration.yaml) -[14]: https://www.home-assistant.io/docs/mqtt/broker -[15]: https://opensource.com/sites/default/files/uploads/ha-setup23-configuration-yaml2.png (Save icon in Home Assistant config) -[16]: https://opensource.com/sites/default/files/uploads/ha-setup24-new-config-file.png (Create config file) -[17]: https://www.home-assistant.io/integrations/binary_sensor/ -[18]: https://www.home-assistant.io/lovelace/ -[19]: https://opensource.com/sites/default/files/uploads/ha-setup25-bindary_sensor_card.png (Binary sensor card) -[20]: https://www.home-assistant.io/integrations/systemmonitor -[21]: https://opensource.com/sites/default/files/uploads/ha-setup25-input-booleans.png (Home Assistant options in Lovelace UI) diff --git a/sources/tech/20210209 My open source disaster recovery strategy for the home office.md b/sources/tech/20210209 My open source disaster recovery strategy for the home office.md deleted file mode 100644 index 05f2fce5b5..0000000000 --- a/sources/tech/20210209 My open source disaster recovery strategy for the home office.md +++ /dev/null @@ -1,199 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (My open source disaster recovery strategy for the home office) -[#]: via: (https://opensource.com/article/21/2/high-availability-home-office) -[#]: author: (Howard Fosdick https://opensource.com/users/howtech) - -My open source disaster recovery strategy for the home office -====== -In the remote work era, it's more important than ever to have a disaster -recovery plan for your household infrastructure. -![Person using a laptop][1] - -I've worked from home for years, and with the COVID-19 crisis, millions more have joined me. Teachers, accountants, librarians, stockbrokers… you name it, these workers now operate full or part time from their homes. Even after the coronavirus crisis ends, many will continue working at home, at least part time. 
But what happens when the home worker's computer fails? Whether the device is a smartphone, tablet, laptop, or desktop—and whether the problem is hardware or software—the result might be missed workdays and lots of frustration. - -This article explores how to ensure high-availability home computing. Open source software is key. It offers device independence so that home workers can easily move between primary and backup devices. Most importantly, it gives users control of their environment, which is the surest route to high availability. This simple high-availability strategy, based on open source, is easy to modify for your needs. - -### Different strategies for different situations - -I need to emphasize one point upfront: different job functions require different solutions. Some at-home workers can use smartphones or tablets, while others rely on laptops, and still others require high-powered desktop workstations. Some can tolerate an outage of hours or even days, while others must be available without interruption. Some use company-supplied devices, and others must provide their own. Lastly, some home workers store their data in their company's cloud, while others self-manage their data. - -Obviously, no single high-availability strategy fits everyone. My strategy probably isn't "the answer" for you, but I hope it prompts you to think about the challenges involved (if you haven't already) and presents some ideas to help you prepare before disaster strikes. - -### Defining high availability - -Whatever computing device a home worker uses, high availability (HA) involves five interoperable components: - - * Device hardware - * System software - * Communications capability - * Applications - * Data - - - -The HA plan must encompass all five components to succeed. Missing any component causes HA failure. - -For example, last night, I worked on a cloud-based spreadsheet. If my communications link had failed and I couldn't access my cloud data, that would stop my work on the project… even if I had all the other HA components available in a backup computer. - -Of course, there are exceptions. Say last night's spreadsheet was stored on my local computer. If that device failed, I could have kept working as long as I had a backup computer with my data on it, even if I lacked internet access. - -To succeed as a high-availability home worker, you must first identify the components you require for your work. Once you've done that, develop a plan to continue working even if one or more components fails. - -#### Duplicate replacement - -One approach is to create a _duplicate replacement_. Having the exact same hardware, software, communications, apps, and data available on a backup device guarantees that you can work if your primary fails. This approach is simple, though it might cost more to keep a complete backup on hand. - -To economize, you might share computers with your family or flatmates. A _shared backup_ is always more cost-effective than a _dedicated backup_, so long as you have top priority on the shared computer when you need it. - -#### Functional replacement - -The alternative to duplicate replacement is a _functional replacement_. You substitute a working equivalent for the failed component. Say I'm working from my home laptop and connecting through home WiFi. My internet connection fails. Perhaps I can tether my computer to my phone and use the cell network instead. I achieve HA by replacing one technology with an equivalent. 
- -#### Know your requirements - -Beyond the five HA components, be sure to identify any special requirements you have. For example, if mobility is important, you might need to replace a broken laptop with another laptop, not a desktop. - -HA means identifying all the functions you need, then ensuring your HA plan covers them all. - -### Timing, planning, and testing - -You must also define your time frame for recovery. Must you be able to continue your work immediately after a failure? Or do you have the luxury of some downtime during which you can react? - -The longer your allowable downtime, the more options you have. For example, if you could miss work for several days, you could simply trot a broken device into a repair shop. No need for a backup. - -In this article, by "high availability," I mean getting back to work in very short order after a failure, perhaps less than one hour. This typically requires that you have access to a backup device that is immediately available and ready to go. While there might be occasions when you can recover your primary device in a matter of minutes—for example, by working around a failure or by quickly replacing a defective piece of hardware or software—a backup computer is normally part of the HA plan. - -HA requires planning and preparation. "Winging it" doesn't suffice; ensure your backup plan works by testing it beforehand. - -For example, say your data resides in the cloud. That data is accessible from anywhere, from any device. That sounds ideal. But what if you forget that there's a small but vital bit of data stored locally on your failed computer? If you can't access that essential data, your HA plan fails. A dry run surfaces problems like this. - -### Smartphones as backup - -Most of us in software engineering and support use laptops and desktops at home. Smartphones and tablets are useful adjuncts, but they aren't at the core of what we do. - -The main reasons are screen size and keyboard. For software work, you can't achieve the same level of productivity with a small screen and touchscreen keypad as you can with a large monitor and physical keyboard. - -If you normally use a laptop or desktop and opt for a smartphone or tablet as your backup, test it out beforehand to make sure it suffices. Here's an example of the kind of subtlety that might otherwise trip you up. Most videoconferencing platforms run on both smartphones and laptops or desktops, but their mobile apps can differ in small but important ways. And even when the platform does offer an equivalent experience (the way [Jitsi][2] does, for instance), it can be awkward to share charts, slide decks, and documents, to use a chat feature, and so on, just due to the difference in mobile form factors compared to a big computer screen and a variety of input options. - -Smartphones make convenient backup devices because nearly everyone has one. But if you designate yours as your functional replacement, then try using it for work one day to verify that it meets your needs. - -### Data accessibility - -Data access is vital when your primary device fails. Even if you back up your work data, if a device fails, you also may need credentials for VPN or SSH access, specialized software, or forms of data that might not be stored along with your day-to-day documents and directories. You must ensure that when you design a backup scheme for yourself, you include all important data and store encryption keys and other access information securely. 
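-
-One minimal sketch for the credentials piece (all file names here are placeholders): bundle the SSH and VPN material a backup machine would need into a single archive, encrypt it symmetrically, and keep only the encrypted copy with your backups.
-
-```
-# Collect the access material a backup machine would need (placeholder paths)
-tar czf work-access.tar.gz ~/.ssh/config ~/.ssh/id_ed25519 ~/vpn/client.ovpn
-
-# Encrypt it with a passphrase, then remove the cleartext archive
-gpg --symmetric --cipher-algo AES256 work-access.tar.gz
-shred -u work-access.tar.gz
-
-# On the backup device, recover it when needed
-gpg --output work-access.tar.gz --decrypt work-access.tar.gz.gpg
-```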
- -The best way to keep your work data secure is to use your own service. Running [Nextcloud][3] or [Sparkleshare][4] is easy, and hosting is cheap. Both are automated: files you place in a specially designated directory are synchronized with your server. It's not exactly building your own cloud, but it's a great way to leverage the cloud for your own services. You can make the backup process seamless with tools like [Syncthing, Bacula][5], or [rdiff-backup][6]. - -Cloud storage enables you to access data from any device at any location, but cloud storage will work only if you have a live communications path to it after a failure event. And not all cloud storage meets the privacy and security specifications for all projects. If your workplace has a cloud backup solution, spend some time learning about the cloud vendor's services and find out what level of availability it promises. Check its track record in achieving it. And be sure to devise an alternate way to access your cloud if your primary communications link fails. - -### Local backups - -If you store your data on a local device, you'll be responsible for backing it up and recovering it. In that case, back up your data to an alternate device, and verify that you can restore it within your acceptable time frame. This is your _time-to-recovery_. - -You'll also need to secure that data and meet any privacy requirements your employer specifies. - -#### Acceptable loss - -Consider how much data you can afford to lose in the event of an outage. For example, if you back up your data nightly, you could lose up to a maximum of one day's work (all the work completed during the day prior to the nightly backup). This is your _backup data timeliness_. - -Open source offers many free applications for local data backup and recovery. Generally, the same applications used for remote backups can also apply to local backup plans, so take a look at the [Advanced Rsync][7] or the [Syncthing tutorial][8] articles here on Opensource.com. - -Many prefer a data strategy that combines both cloud and local storage. Store your data locally, and then use the cloud as a backup (rather than working on the cloud). Or do it the other way around (although automating the cloud to push backups to you is more difficult than automating your local machine to push backups to the cloud). Storing your data in two separate locations gives your data _geographical redundancy_, which is useful should either site become unavailable. - -With a little forethought, you can devise a simple plan to access your data regardless of any outage. - -### My high-availability strategy - -As a practical example, I'll describe my own HA approach. My goals are a time to recovery of an hour or less and backup data timeliness within a day. - -![High Availability Strategy][9] - -(Howard Fosdick, [CC BY-SA 4.0][10]) - -#### Hardware - -I use an Android smartphone for phone calls and audioconferences. I can access a backup phone from another family member if my primary fails. - -Unfortunately, my phone's small size and touch keyboard mean I can't use it as my backup computer. Instead, I rely on a few generic desktop computers that have standard, interchangeable parts. You can easily maintain such hardware with this simple [free how-to guide][11]. You don't need any hardware experience. - -Open source software makes my multibox strategy affordable. It runs so efficiently that even [10-year-old computers work fine][12] as backups for typical office work. 
Mine are dual-core desktops with 4GB of RAM and any disk that cleanly verifies. These are so inexpensive that you can often get them for free from recycling centers. (In my [charity work][13], I find that many people give them away as unsuitable for running current proprietary software, but they're actually in perfect working order given a flexible operating system like Linux.) - -Another way to economize is to designate another family member's computer for your shared backups. - -#### Systems software and apps - -Running open source software on top of this generic hardware enables me to achieve several benefits. First, the flexibility of open source software enables me to address any possible software failure. For example, with simple operating system commands, I can copy, move, back up, and recover the operating system, applications, and data across partitions, disks, or computers. I don't have to worry about software constraints, vendor lock-in, proprietary backup file formats, licensing or activation restrictions, or extra fees. - -Another open source benefit is that you control your operating system. If you don't have control over your own system, you could be subject to forced restarts, unexpected and unwanted updates, and forced upgrades. My relative has run into such problems more than once. Without his knowledge or consent, his computer suddenly launched a forced upgrade from Windows 7 to Windows 10, which cost him three days of lost income (and untold frustration). The lesson: Your vendor's agenda may not coincide with your own. - -All operating systems have bugs. The difference is that open source software doesn't force you to eat them. - -#### Data classification - -I use very simple techniques to make my data highly available. - -I can't use cloud services for my data due to privacy requirements. Instead, my data "master copy" resides on a USB-connected disk. I plug it into any of several computers. After every session, I back up any altered data on the computer I used. - -Of course, this approach is only feasible if your backups run quickly. For most home workers, that's easy. All you have to do is segregate your data by size and how frequently you update it. - -Isolate big files like photos, audio, and video into separate folders or partitions. Make sure you back up only the files that are new or modified, not older items that have already been backed up. - -Much of my work involves office suites. These generate small files, so I isolate each project in its own folder. For example, I stored the two dozen files I used to write this article in a single subdirectory. Backing it up is as simple as copying that folder. - -Giving a little thought to data segregation and backing up only modified files ensures quick, easy backups for most home workers. My approach is simple; it works best if you only work on a couple of projects in a session. And I can tolerate losing up to a day's work. You can easily automate a more refined backup scheme for yourself. - -For software development, I take an entirely different approach. I use software versioning, which transparently handles all software backup issues for me and coordinates with other developers. My HA planning in this area focuses just on ensuring I can access the online tool. - -#### Communications - -Like many home users, I communicate through both a cellphone network and the internet. If my internet goes down, I can use the cell network instead by tethering my laptop to my Android smartphone. 
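-
-As a hedged sketch of that fallback (the hotspot name and password are placeholders), switching a Linux laptop over to a phone's WiFi hotspot is a one-liner with NetworkManager; with USB tethering enabled instead, the new interface usually comes up on its own.
-
-```
-# Join the phone's WiFi hotspot (placeholder SSID and password)
-nmcli device wifi rescan
-nmcli device wifi connect "PhoneHotspot" password "hotspot-password"
-
-# Confirm which device now provides connectivity
-nmcli device status
-```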
- -### Learning from failure - -Using my strategy for 15 years, how have I fared? What failures have I experienced, and how did they turn out? - - 1. **Motherboard burnout:** One day, my computer wouldn't turn on. I simply moved my USB "master data" external disk to another computer and used that. I lost no data. After some investigation, I determined it was a motherboard failure, so I scrapped the computer and used it for parts. - 2. **Drive failure:** An internal disk failed while I was working. I just moved my USB master disk to a backup computer. I lost 10 minutes of data updates. After work, I created a new boot disk by copying one from another computer—flexibility that only open source software offers. I used the affected computer the next day. - 3. **Fatal software update:** An update caused a failure in an important login service. I shifted to a backup computer where I hadn't yet applied the fatal update. I lost no data. After work, I searched for help with this problem and had it solved in an hour. - 4. **Monitor burnout:** My monitor fizzled out. I just swapped in a backup display and kept working. This took 10 minutes. After work, I determined that the problem was a burned-out capacitor, so I recycled the monitor. - 5. **Power outage:** Now, here's a situation I didn't plan for! A tornado took down the electrical power in our entire town for two days. I learned that one should think through _all_ possible contingencies—including alternate work sites. - - - -### Make your plan - -If you work from home, you need to consider what will happen when your home computer fails. If not, you could experience frustrating workdays off while you scramble to fix the problem. - -Open source software is the key. It runs so efficiently on older, cheaper computers that they become affordable backup machines. It offers device independence, and it ensures that you can design solutions that work best for you. - -For most people, ensuring high availability is very simple. The trick is thinking about it in advance. Create a plan _and then test it_. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/high-availability-home-office - -作者:[Howard Fosdick][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/howtech -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) -[2]: https://jitsi.org/downloads/ -[3]: https://opensource.com/article/20/7/nextcloud -[4]: https://opensource.com/article/19/4/file-sharing-git -[5]: https://opensource.com/article/19/3/backup-solutions -[6]: https://opensource.com/life/16/3/turn-your-old-raspberry-pi-automatic-backup-server -[7]: https://opensource.com/article/19/5/advanced-rsync -[8]: https://opensource.com/article/18/9/take-control-your-data-syncthing -[9]: https://opensource.com/sites/default/files/uploads/my_ha_strategy.png (High Availability Strategy) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: http://www.rexxinfo.org/Quick_Guide/Quick_Guide_To_Fixing_Computer_Hardware -[12]: https://opensource.com/article/19/7/how-make-old-computer-useful-again -[13]: https://www.freegeekchicago.org/ diff --git a/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md b/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md deleted file mode 100644 index 5804c35922..0000000000 --- a/sources/tech/20210210 Configure multi-tenancy with Kubernetes namespaces.md +++ /dev/null @@ -1,368 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Configure multi-tenancy with Kubernetes namespaces) -[#]: via: (https://opensource.com/article/21/2/kubernetes-namespaces) -[#]: author: (Mike Calizo https://opensource.com/users/mcalizo) - -Configure multi-tenancy with Kubernetes namespaces -====== -Namespaces provide basic building blocks of access control for -applications, users, or groups of users. -![shapes of people symbols][1] - -Most enterprises want a multi-tenancy platform to run their cloud-native applications because it helps manage resources, costs, and operational efficiency and control [cloud waste][2]. - -[Kubernetes][3] is the leading open source platform for managing containerized workloads and services. It gained this reputation because of its flexibility in allowing operators and developers to establish automation with declarative configuration. But there is a catch: Because Kubernetes grows rapidly, the old problem of velocity becomes an issue. The bigger your adoption, the more issues and resource waste you discover. - -### An example of scale - -Imagine your company started small with its Kubernetes adoption by deploying a variety of internal applications. It has multiple project streams running with multiple developers dedicated to each project stream. - -In a scenario like this, you need to make sure your cluster administrator has full control over the cluster to manage its resources and implement cluster policy and security standards. In a way, the admin is herding the cluster's users to use best practices. A namespace is very useful in this instance because it enables different teams to share a single cluster where computing resources are subdivided into multiple teams. 
- -While namespaces are your first step to Kubernetes multi-tenancy, they are not good enough on their own. There are a number of Kubernetes primitives you need to consider so that you can administer your cluster properly and put it into a production-ready implementation. - -The Kubernetes primitives for multi-tenancy are: - - 1. **RBAC:** Role-based access control for Kubernetes - 2. **Network policies:** To isolate traffic between namespaces - 3. **Resource quotas:** To control fair access to cluster resources - - - -This article explores how to use Kubernetes namespaces and some basic RBAC configurations to partition a single Kubernetes cluster and take advantage of this built-in Kubernetes tooling. - -### What is a Kubernetes namespace? - -Before digging into how to use namespaces to prepare your Kubernetes cluster to become multi-tenant-ready, you need to know what namespaces are. - -A [namespace][4] is a Kubernetes object that partitions a Kubernetes cluster into multiple virtual clusters. This is done with the aid of [Kubernetes names and IDs][5]. Namespaces use the Kubernetes name object, which means that each object inside a namespace gets a unique name and ID across the cluster to allow virtual partitioning. - -### How namespaces help in multi-tenancy - -Namespaces are one of the Kubernetes primitives you can use to partition your cluster into multiple virtual clusters to allow multi-tenancy. Each namespace is isolated from every other user's, team's, or application's namespace. This isolation is essential in multi-tenancy so that updates and changes in applications, users, and teams are contained within the specific namespace. (Note that namespace does not provide network segmentation.) - -Before moving ahead, verify the default namespace in a working Kubernetes cluster: - - -``` -[root@master ~]# kubectl get namespace -NAME              STATUS   AGE -default           Active   3d -kube-node-lease   Active   3d -kube-public       Active   3d -kube-system       Active   3d -``` - -Then create your first namespace, called **test**: - - -``` -[root@master ~]# kubectl create namespace test -namespace/test created -``` - -Verify the newly created namespace: - - -``` -[root@master ~]# kubectl get namespace -NAME              STATUS   AGE -default           Active   3d -kube-node-lease   Active   3d -kube-public       Active   3d -kube-system       Active   3d -test              Active   10s -[root@master ~]# -``` - -Describe the newly created namespace: - - -``` -[root@master ~]# kubectl describe namespace test -Name:         test -Labels:       -Annotations:   -Status:       Active -No resource quota. -No LimitRange resource. -``` - -To delete a namespace: - - -``` -[root@master ~]# kubectl delete namespace test -namespace "test" deleted -``` - -Your new namespace is active, but it doesn't have any labels, annotations, or quota-limit ranges defined. However, now that you know how to create and describe and delete a namespace, I'll show how you can use a namespace to virtually partition a Kubernetes cluster. - -### Partitioning clusters using namespace and RBAC - -Deploy the following simple application to learn how to partition a cluster using namespace and isolate an application and its related objects from "other" users. - -First, verify the namespace you will use. 
For simplicity, use the **test** namespace you created above: - - -``` -[root@master ~]# kubectl get namespaces -NAME              STATUS   AGE -default           Active   3d -kube-node-lease   Active   3d -kube-public       Active   3d -kube-system       Active   3d -test              Active   3h -``` - -Then deploy a simple application called **test-app** inside the test namespace by using the following configuration: - - -``` -apiVersion: v1 -kind: Pod -metadata: -  name: test-app                 ⇒ name of the application -  namespace: test                ⇒ the namespace where the app runs -  labels: -     app: test-app                      ⇒ labels for the app -spec: -  containers: -  - name: test-app -    image: nginx:1.14.2         ⇒ the image we used for the app. -    ports: -    - containerPort: 80 -``` - -Deploy it: - - -``` -$ kubectl create -f test-app.yaml -    pod/test-app created -``` - -Then verify the application pod was created: - - -``` -$ kubectl get pods -n test -  NAME       READY   STATUS    RESTARTS   AGE -  test-app   1/1     Running   0          18s -``` - -Now that the running application is inside the **test** namespace, test a use case where: - - * **auth-user** can edit and view all the objects inside the test namespace - * **un-auth-user** can only view the namespace - - - -I pre-created the users for you to test. If you want to know how I created the users inside Kubernetes, view the commands [here][6]. - - -``` -$ kubectl config view -o jsonpath='{.users[*].name}' -  auth-user -  kubernetes-admin -  un-auth-user -``` - -With this set up, create a Kubernetes [Role and RoleBindings][7] to isolate the target namespace **test** to allow **auth-user** to view and edit objects inside the namespace and not allow **un-auth-user** to access or view the objects inside the **test** namespace. - -Start by creating a ClusterRole and a Role. These objects are a list of verbs (action) permitted on specific resources and namespaces. - -Create a ClusterRole: - - -``` -$ cat clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: -  name: list-deployments -  namespace: test -rules: -  - apiGroups: [ apps ] -    resources: [ deployments ] -    verbs: [ get, list ] -``` - -Create a Role: - - -``` -$ cat role.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: Role -metadata: -  name: list-deployments -  namespace: test -rules: -  - apiGroups: [ apps ] -    resources: [ deployments ] -    verbs: [ get, list ] -``` - -Apply the Role: - - -``` -$ kubectl create -f role.yaml -roles.rbac.authorization.k8s.io "list-deployments" created -``` - -Use the same command to create a ClusterRole: - - -``` -$ kubectl create -f clusterrole.yaml - -$ kubectl get role -n test -  NAME               CREATED AT -  list-deployments   2021-01-18T00:54:00Z -``` - -Verify the Roles: - - -``` -$ kubectl describe roles -n test -  Name:         list-deployments -  Labels:       -  Annotations:   -  PolicyRule: -    Resources         Non-Resource URLs  Resource Names  Verbs -    ---------         -----------------  --------------  ----- -    deployments.apps  []                 []              [get list] -``` - -Remember that you must create RoleBindings by namespace, not by user. This means you need to create two role bindings for user **auth-user**. - -Here are the sample RoleBinding YAML files to permit **auth-user** to edit and view. 
- -**To edit:** - - -``` -$ cat rolebinding-auth-edit.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: -  name: auth-user-edit -  namespace: test -subjects: -\- kind: User -  name: auth-user -  apiGroup: rbac.authorization.k8s.io -roleRef: -  kind: ClusterRole -  name: edit -  apiGroup: rbac.authorization.k8s.io -``` - -**To view:** - - -``` -$ cat rolebinding-auth-view.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: -  name: auth-user-view -  namespace: test -subjects: -\- kind: User -  name: auth-user -  apiGroup: rbac.authorization.k8s.io -roleRef: -  kind: ClusterRole -  name: view -  apiGroup: rbac.authorization.k8s.io -``` - -Create these YAML files: - - -``` -$ kubectl create rolebinding-auth-view.yaml -$ kubectl create rolebinding-auth-edit.yaml -``` - -Verify if the RoleBindings were successfully created: - - -``` -$ kubectl get rolebindings -n test -NAME             ROLE               AGE -auth-user-edit   ClusterRole/edit   48m -auth-user-view   ClusterRole/view   47m -``` - -With the requirements set up, test the cluster partitioning: - - -``` -[root@master]$ sudo su un-auth-user -[un-auth-user@master ~]$ kubect get pods -n test -[un-auth-user@master ~]$ kubectl get pods -n test -Error from server (Forbidden): pods is forbidden: User "un-auth-user" cannot list resource "pods" in API group "" in the namespace "test" -``` - -Log in as **auth-user**: - - -``` -[root@master ]# sudo su auth-user -[auth-user@master auth-user]$ kubectl get pods -n test -NAME       READY   STATUS    RESTARTS   AGE -test-app   1/1     Running   0          3h8m -[auth-user@master un-auth-user]$ - -[auth-user@master auth-user]$ kubectl edit pods/test-app -n test -Edit cancelled, no changes made. -``` - -You can view and edit the objects inside the **test** namespace. How about viewing the cluster nodes? - - -``` -[auth-user@master auth-user]$ kubectl get nodes -Error from server (Forbidden): nodes is forbidden: User "auth-user" cannot list resource "nodes" in API group "" at the cluster scope -[auth-user@master auth-user]$ -``` - -You can't because the role bindings for user **auth-user** dictate they have access to view or edit objects only inside the **test** namespace. - -### Enable access control with namespaces - -Namespaces provide basic building blocks of access control using RBAC and isolation for applications, users, or groups of users. But using namespaces alone as your multi-tenancy solution is not enough in an enterprise implementation. It is recommended that you use other Kubernetes multi-tenancy primitives to attain further isolation and implement proper security. - -Namespaces can provide some basic isolation in your Kubernetes cluster; therefore, it is important to consider them upfront, especially when planning a multi-tenant cluster. Namespaces also allow you to logically segregate and assign resources to individual users, teams, or applications. - -By using namespaces, you can increase resource efficiencies by enabling a single cluster to be used for a diverse set of workloads. 
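-
-As a minimal sketch of the resource quota primitive listed earlier (the limits below are arbitrary example values), you can cap what the **test** namespace is allowed to consume and then inspect current usage:
-
-```
-$ kubectl apply -n test -f - <<EOF
-apiVersion: v1
-kind: ResourceQuota
-metadata:
-  name: test-quota
-spec:
-  hard:
-    pods: "10"
-    requests.cpu: "2"
-    requests.memory: 4Gi
-    limits.cpu: "4"
-    limits.memory: 8Gi
-EOF
-
-$ kubectl describe resourcequota test-quota -n test
-```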
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/kubernetes-namespaces - -作者:[Mike Calizo][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mcalizo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols) -[2]: https://devops.com/the-cloud-is-booming-but-so-is-cloud-waste/ -[3]: https://opensource.com/resources/what-is-kubernetes -[4]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ -[5]: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/ -[6]: https://www.adaltas.com/en/2019/08/07/users-rbac-kubernetes/ -[7]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ diff --git a/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md b/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md deleted file mode 100644 index 3ff3460421..0000000000 --- a/sources/tech/20210210 Draw Mandelbrot fractals with GIMP scripting.md +++ /dev/null @@ -1,354 +0,0 @@ -[#]: subject: "Draw Mandelbrot fractals with GIMP scripting" -[#]: via: "https://opensource.com/article/21/2/gimp-mandelbrot" -[#]: author: "Cristiano L. Fontana https://opensource.com/users/cristianofontana" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Draw Mandelbrot fractals with GIMP scripting -====== -Create complex mathematical images with GIMP's Script-Fu language. - -![Painting art on a computer screen][1] - -Image by: Opensource.com - -The GNU Image Manipulation Program ([GIMP][2]) is my go-to solution for image editing. Its toolset is very powerful and convenient, except for doing [fractals][3], which is one thing you cannot draw by hand easily. These are fascinating mathematical constructs that have the characteristic of being [self-similar][4]. In other words, if they are magnified in some areas, they will look remarkably similar to the unmagnified picture. Besides being interesting, they also make very pretty pictures! - -![Rotated and magnified portion of the Mandelbrot set using Firecode][5] - -GIMP can be automated with [Script-Fu][6] to do [batch processing of images][7] or create complicated procedures that are not practical to do by hand; drawing fractals falls in the latter category. This tutorial will show how to draw a representation of the [Mandelbrot fractal][8] using GIMP and Script-Fu. - -![Mandelbrot set drawn using GIMP's Firecode palette][9] - -![Rotated and magnified portion of the Mandelbrot set using Firecode.][10] - -In this tutorial, you will write a script that creates a layer in an image and draws a representation of the Mandelbrot set with a colored environment around it. - -### What is the Mandelbrot set? - -Do not panic! I will not go into too much detail here. For the more math-savvy, the Mandelbrot set is defined as the set of [complex numbers][11] *a* for which the succession - -zn+1 = zn2 + a - -does not diverge when starting from *z₀ = 0*. - -In reality, the Mandelbrot set is the fancy-looking black blob in the pictures; the nice-looking colors are outside the set. 
They represent how many iterations are required for the magnitude of the succession of numbers to pass a threshold value. In other words, the color scale shows how many steps are required for the succession to pass an upper-limit value. - -### GIMP's Script-Fu - -[Script-Fu][12] is the scripting language built into GIMP. It is an implementation of the [Scheme programming language][13]. - -If you want to get more acquainted with Scheme, GIMP's documentation offers an [in-depth tutorial][14]. I also wrote an article about [batch processing images][15] using Script-Fu. Finally, the Help menu offers a Procedure Browser with very extensive documentation with all of Script-Fu's functions described in detail. - -![GIMP Procedure Browser][16] - -Scheme is a Lisp-like language, so a major characteristic is that it uses a [prefix notation][17] and a [lot of parentheses][18]. Functions and operators are applied to a list of operands by prefixing them: - -``` -(function-name operand operand ...) - -(+ 2 3) -↳ Returns 5 - -(list 1 2 3 5) -↳ Returns a list containing 1, 2, 3, and 5 -``` - -### Write the script - -You can write your first script and save it to the **Scripts** folder found in the preferences window under **Folders → Scripts**. Mine is at `$HOME/.config/GIMP/2.10/scripts`. Write a file called `mandelbrot.scm` with: - -``` -; Complex numbers implementation -(define (make-rectangular x y) (cons x y)) -(define (real-part z) (car z)) -(define (imag-part z) (cdr z)) - -(define (magnitude z) -  (let ((x (real-part z)) -        (y (imag-part z))) -    (sqrt (+ (* x x) (* y y))))) - -(define (add-c a b) -  (make-rectangular (+ (real-part a) (real-part b)) -                    (+ (imag-part a) (imag-part b)))) - -(define (mul-c a b) -  (let ((ax (real-part a)) -        (ay (imag-part a)) -        (bx (real-part b)) -        (by (imag-part b))) -    (make-rectangular (- (* ax bx) (* ay by)) -                      (+ (* ax by) (* ay bx))))) - -; Definition of the function creating the layer and drawing the fractal -(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y) -  (define num-colors (car (gimp-palette-get-info palette-name))) -  (define colors (cadr (gimp-palette-get-colors palette-name))) - -  (define width (car (gimp-image-width image))) -  (define height (car (gimp-image-height image))) - -  (define new-layer (car (gimp-layer-new image -                                         width height -                                         RGB-IMAGE -                                         "Mandelbrot layer" -                                         100 -                                         LAYER-MODE-NORMAL))) - -  (gimp-image-add-layer image new-layer 0) -  (define drawable new-layer) -  (define bytes-per-pixel (car (gimp-drawable-bpp drawable))) - -  ; Fractal drawing section. 
-  ; Code from: https://rosettacode.org/wiki/Mandelbrot_set#Racket -  (define (iterations a z i) -    (let ((z′ (add-c (mul-c z z) a))) -       (if (or (= i num-colors) (> (magnitude z′) threshold)) -          i -          (iterations a z′ (+ i 1))))) - -  (define (iter->color i) -    (if (>= i num-colors) -        (list->vector '(0 0 0)) -        (list->vector (vector-ref colors i)))) - -  (define z0 (make-rectangular 0 0)) - -  (define (loop x end-x y end-y) -    (let* ((real-x (- (* domain-width (/ x width)) offset-x)) -           (real-y (- (* domain-height (/ y height)) offset-y)) -           (a (make-rectangular real-x real-y)) -           (i (iterations a z0 0)) -           (color (iter->color i))) -      (cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color) -                                           (loop (+ x 1) end-x y end-y)) -            ((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y)) -                                            (loop 0 end-x (+ y 1) end-y))))) -  (loop 0 width 0 height) - -  ; These functions refresh the GIMP UI, otherwise the modified pixels would be evident -  (gimp-drawable-update drawable 0 0 width height) -  (gimp-displays-flush) -) - -(script-fu-register -  "script-fu-mandelbrot"          ; Function name -  "Create a Mandelbrot layer"     ; Menu label -                                  ; Description -  "Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature." -  "Cristiano Fontana"             ; Author -  "2021, C.Fontana. GNU GPL v. 3" ; Copyright -  "27th Jan. 2021"                ; Creation date -  "RGB"                           ; Image type that the script works on -  ;Parameter    Displayed            Default -  ;type         label                values -  SF-IMAGE      "Image"              0 -  SF-STRING     "Color palette name" "Firecode" -  SF-ADJUSTMENT "Threshold value"    '(4 0 10 0.01 0.1 2 0) -  SF-ADJUSTMENT "Domain width"       '(3 0 10 0.1 1 4 0) -  SF-ADJUSTMENT "Domain height"      '(3 0 10 0.1 1 4 0) -  SF-ADJUSTMENT "X offset"           '(2.25 -20 20 0.1 1 4 0) -  SF-ADJUSTMENT "Y offset"           '(1.50 -20 20 0.1 1 4 0) -) -(script-fu-menu-register "script-fu-mandelbrot" "/Layer/") -``` - -I will go through the script to show you what it does. - -### Get ready to draw the fractal - -Since this image is all about complex numbers, I wrote a quick and dirty implementation of complex numbers in Script-Fu. I defined the complex numbers as [pairs][19] of real numbers. Then I added the few functions needed for the script. 
I used [Racket's documentation][20] as inspiration for function names and roles: - -``` -(define (make-rectangular x y) (cons x y)) -(define (real-part z) (car z)) -(define (imag-part z) (cdr z)) - -(define (magnitude z) -  (let ((x (real-part z)) -        (y (imag-part z))) -    (sqrt (+ (* x x) (* y y))))) - -(define (add-c a b) -  (make-rectangular (+ (real-part a) (real-part b)) -                    (+ (imag-part a) (imag-part b)))) - -(define (mul-c a b) -  (let ((ax (real-part a)) -        (ay (imag-part a)) -        (bx (real-part b)) -        (by (imag-part b))) -    (make-rectangular (- (* ax bx) (* ay by)) -                      (+ (* ax by) (* ay bx))))) -``` - -### Draw the fractal - -The new function is called `script-fu-mandelbrot`. The best practice for writing a new function is to call it `script-fu-something` so that it can be identified in the Procedure Browser easily. The function requires a few parameters: an `image` to which it will add a layer with the fractal, the `palette-name` identifying the color palette to be used, the `threshold` value to stop the iteration, the `domain-width` and `domain-height` that identify the image boundaries, and the `offset-x` and `offset-y` to center the image to the desired feature. The script also needs some other parameters that it can deduce from the GIMP interface: - -``` -(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y) -  (define num-colors (car (gimp-palette-get-info palette-name))) -  (define colors (cadr (gimp-palette-get-colors palette-name))) - -  (define width (car (gimp-image-width image))) -  (define height (car (gimp-image-height image))) - -  ... -``` - -Then it creates a new layer and identifies it as the script's `drawable`. A "drawable" is the element you want to draw on: - -``` -(define new-layer (car (gimp-layer-new image -                                       width height -                                       RGB-IMAGE -                                       "Mandelbrot layer" -                                       100 -                                       LAYER-MODE-NORMAL))) - -(gimp-image-add-layer image new-layer 0) -(define drawable new-layer) -(define bytes-per-pixel (car (gimp-drawable-bpp drawable))) -``` - -For the code determining the pixels' color, I used the [Racket][21] example on the [Rosetta Code][22] website. It is not the most optimized algorithm, but it is simple to understand. Even a non-mathematician like me can understand it. The `iterations` function determines how many steps the succession requires to pass the threshold value. To cap the iterations, I am using the number of colors in the palette. In other words, if the threshold is too high or the succession does not grow, the calculation stops at the `num-colors` value. The `iter->color` function transforms the number of iterations into a color using the provided palette. If the iteration number is equal to `num-colors`, it uses black because this means that the succession is probably bound and that pixel is in the Mandelbrot set: - -``` -; Fractal drawing section. 
-; Code from: https://rosettacode.org/wiki/Mandelbrot_set#Racket -(define (iterations a z i) -  (let ((z′ (add-c (mul-c z z) a))) -     (if (or (= i num-colors) (> (magnitude z′) threshold)) -        i -        (iterations a z′ (+ i 1))))) - -(define (iter->color i) -  (if (>= i num-colors) -      (list->vector '(0 0 0)) -      (list->vector (vector-ref colors i)))) -``` - -Because I have the feeling that Scheme users do not like to use loops, I implemented the function looping over the pixels as a recursive function. The `loop` function reads the starting coordinates and their upper boundaries. At each pixel, it defines some temporary variables with the `let*` function: `real-x` and `real-y` are the real coordinates of the pixel in the complex plane, according to the parameters; the `a` variable is the starting point for the succession; the `i` is the number of iterations; and finally `color` is the pixel color. Each pixel is colored with the `gimp-drawable-set-pixel` function that is an internal GIMP procedure. The peculiarity is that it is not undoable, and it does not trigger the image to refresh. Therefore, the image will not be updated during the operation. To play nice with the user, at the end of each row of pixels, it calls the `gimp-progress-update` function, which updates a progress bar in the user interface: - -``` -(define z0 (make-rectangular 0 0)) - -(define (loop x end-x y end-y) -  (let* ((real-x (- (* domain-width (/ x width)) offset-x)) -         (real-y (- (* domain-height (/ y height)) offset-y)) -         (a (make-rectangular real-x real-y)) -         (i (iterations a z0 0)) -         (color (iter->color i))) -    (cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color) -                                         (loop (+ x 1) end-x y end-y)) -          ((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y)) -                                          (loop 0 end-x (+ y 1) end-y))))) -(loop 0 width 0 height) -``` - -At the calculation's end, the function needs to inform GIMP that it modified the `drawable`, and it should refresh the interface because the image is not "automagically" updated during the script's execution: - -``` -(gimp-drawable-update drawable 0 0 width height) -(gimp-displays-flush) -``` - -### Interact with the user interface - -To use the `script-fu-mandelbrot` function in the graphical user interface (GUI), the script needs to inform GIMP. The `script-fu-register` function informs GIMP about the parameters required by the script and provides some documentation: - -``` -(script-fu-register -  "script-fu-mandelbrot"          ; Function name -  "Create a Mandelbrot layer"     ; Menu label -                                  ; Description -  "Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature." -  "Cristiano Fontana"             ; Author -  "2021, C.Fontana. GNU GPL v. 3" ; Copyright -  "27th Jan. 
2021"                ; Creation date -  "RGB"                           ; Image type that the script works on -  ;Parameter    Displayed            Default -  ;type         label                values -  SF-IMAGE      "Image"              0 -  SF-STRING     "Color palette name" "Firecode" -  SF-ADJUSTMENT "Threshold value"    '(4 0 10 0.01 0.1 2 0) -  SF-ADJUSTMENT "Domain width"       '(3 0 10 0.1 1 4 0) -  SF-ADJUSTMENT "Domain height"      '(3 0 10 0.1 1 4 0) -  SF-ADJUSTMENT "X offset"           '(2.25 -20 20 0.1 1 4 0) -  SF-ADJUSTMENT "Y offset"           '(1.50 -20 20 0.1 1 4 0) -) -``` - -Then the script tells GIMP to put the new function in the Layer menu with the label "Create a Mandelbrot layer": - -``` -(script-fu-menu-register "script-fu-mandelbrot" "/Layer/") -``` - -Having registered the function, you can visualize it in the Procedure Browser. - -![script-fu-mandelbrot function][23] - -### Run the script - -Now that the function is ready and registered, you can draw the Mandelbrot fractal! First, create a square image and run the script from the Layers menu. - -![script running][24] - -The default values are a good starting set to obtain the following image. The first time you run the script, create a very small image (e.g., 60x60 pixels) because this implementation is slow! It took several hours for my computer to create the following image in full 1920x1920 pixels. As I mentioned earlier, this is not the most optimized algorithm; rather, it was the easiest for me to understand. - -![Mandelbrot set drawn using GIMP's Firecode palette][25] - -### Learn more - -This tutorial showed how to use GIMP's built-in scripting features to draw an image created with an algorithm. These images show GIMP's powerful set of tools that can be used for artistic applications and mathematical images. - -If you want to move forward, I suggest you look at the official documentation and its [tutorial][26]. As an exercise, try modifying this script to draw a [Julia set][27], and please share the resulting image in the comments. - -Image by: Rotated and magnified portion of the Mandelbrot set using Firecode. (Cristiano Fontana, CC BY-SA 4.0) - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/gimp-mandelbrot - -作者:[Cristiano L. 
Fontana][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cristianofontana -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/painting_computer_screen_art_design_creative.png -[2]: https://www.gimp.org/ -[3]: https://en.wikipedia.org/wiki/Fractal -[4]: https://en.wikipedia.org/wiki/Self-similarity -[5]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion.png -[6]: https://docs.gimp.org/en/gimp-concepts-script-fu.html -[7]: https://opensource.com/article/21/1/gimp-scripting -[8]: https://en.wikipedia.org/wiki/Mandelbrot_set -[9]: https://opensource.com/sites/default/files/uploads/mandelbrot.png -[10]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion2.png -[11]: https://en.wikipedia.org/wiki/Complex_number -[12]: https://docs.gimp.org/en/gimp-concepts-script-fu.html -[13]: https://en.wikipedia.org/wiki/Scheme_(programming_language) -[14]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html -[15]: https://opensource.com/article/21/1/gimp-scripting -[16]: https://opensource.com/sites/default/files/uploads/procedure_browser_0.png -[17]: https://en.wikipedia.org/wiki/Polish_notation -[18]: https://xkcd.com/297/ -[19]: https://www.gnu.org/software/guile/manual/html_node/Pairs.html -[20]: https://docs.racket-lang.org/reference/generic-numbers.html?q=make-rectangular#%28part._.Complex_.Numbers%29 -[21]: https://racket-lang.org/ -[22]: https://rosettacode.org/wiki/Mandelbrot_set#Racket -[23]: https://opensource.com/sites/default/files/uploads/mandelbrot_documentation.png -[24]: https://opensource.com/sites/default/files/uploads/script_working.png -[25]: https://opensource.com/sites/default/files/uploads/mandelbrot.png -[26]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html -[27]: https://en.wikipedia.org/wiki/Julia_set diff --git a/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md b/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md deleted file mode 100644 index 86df7cf688..0000000000 --- a/sources/tech/20210211 Getting to Know the Cryptocurrency Open Patent Alliance (COPA).md +++ /dev/null @@ -1,92 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Getting to Know the Cryptocurrency Open Patent Alliance (COPA)) -[#]: via: (https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/) -[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/) - -Getting to Know the Cryptocurrency Open Patent Alliance (COPA) -====== - -### ![][1] - -### Why is there a need for a patent protection alliance for cryptocurrency technologies? - -With the recent surge in popularity of cryptocurrencies and related technologies, Square felt an industry group was needed to protect against litigation and other threats against core cryptocurrency technology and ensure the ecosystem remains vibrant and open for developers and companies. - -The same way [Open Invention Network][2] (OIN) and [LOT Network][3] add a layer of patent protection to inter-company collaboration on open source technologies, COPA aims to protect open source cryptocurrency technology. Feeling safe from the threat of lawsuits is a precursor to good collaboration. 
- - * Locking up foundational cryptocurrency technologies in patents stifles innovation and adoption of cryptocurrency in novel and useful applications. - * The offensive use of patents threatens the growth and free availability of cryptocurrency technologies. Many smaller companies and developers do not own patents and cannot deter or defend threats adequately. - - - -By joining COPA, a member can feel secure it can innovate in the cryptocurrency space without fear of litigation between other members.  - -### What is Square’s involvement in COPA? - -Square’s core purpose is economic empowerment, and they see cryptocurrency as a core technological pillar. Square helped start and fund COPA with the hope that by encouraging innovation in the cryptocurrency space, more useful ideas and products would get created. COPA management has now diversified to an independent board of technology and regulatory experts, and Square maintains a minority presence. - -### Do we need cryptocurrency patents to join COPA?  - -No! Anyone can join and benefit from being a member of COPA, regardless of whether they have patents or not. There is no barrier to entry – members can be individuals, start-ups, small companies, or large corporations. Here is how COPA works: - - * First, COPA members pledge never to use their crypto-technology patents against anyone, except for defensive reasons, effectively making their patents freely available for all. - * Second, members pool all of their crypto-technology patents together to form a shared patent library, which provides a forum to allow members to reasonably negotiate lending patents to one another for defensive purposes. - * The patent pledge and the shared patent library work in tandem to help drive down the incidence and threat of patent litigation, benefiting the cryptocurrency community as a whole.  - * Additionally, COPA monitors core technologies and entities that support cryptocurrency and does its best to research and help address litigation threats against community members. - - - -### What types of companies should join COPA? - - * Financial services companies and technology companies working in regulated industries that use distributed ledger or cryptocurrency technology - * Companies or individuals who are interested in collaborating on developing cryptocurrency products or who hold substantial investments in cryptocurrency - - - -### What companies have joined COPA so far? - - * Square, Inc. - * Blockchain Commons - * Carnes Validadas - * Request Network - * Foundation Devices - * ARK - * SatoshiLabs - * Transparent Systems - * Horizontal Systems - * VerifyChain - * Blockstack - * Protocol Labs - * Cloudeya Ltd. 
- * Mercury Cash - * Bithyve - * Coinbase - * Blockstream - * Stakenet - - - -### How to join - -Please express interest and get access to our membership agreement here: - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/ - -作者:[Linux.com Editorial Staff][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/author/linuxdotcom/ -[b]: https://github.com/lujun9972 -[1]: https://www.linux.com/wp-content/uploads/2021/02/copa-linuxdotcom.jpg -[2]: https://openinventionnetwork.com/ -[3]: https://lotnet.com/ diff --git a/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md b/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md deleted file mode 100644 index e9eb57d18e..0000000000 --- a/sources/tech/20210211 Unikraft- Pushing Unikernels into the Mainstream.md +++ /dev/null @@ -1,115 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Unikraft: Pushing Unikernels into the Mainstream) -[#]: via: (https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/) -[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/) - -Unikraft: Pushing Unikernels into the Mainstream -====== - -![][1] - -Unikernels have been around for many years and are famous for providing excellent performance in boot times, throughput, and memory consumption, to name a few metrics [1]. Despite their apparent potential, unikernels have not yet seen a broad level of deployment due to three main drawbacks: - - * **Hard to build**: Putting a unikernel image together typically requires expert, manual work that needs redoing for each application. Also, many unikernel projects are not, and don’t aim to be, POSIX compliant, and so significant porting effort is required to have standard applications and frameworks run on them. - * **Hard to extract high performance**: Unikernel projects don’t typically expose high-performance APIs; extracting high performance often requires expert knowledge and modifications to the code. - * **Little or no tool ecosystem**: Assuming you have an image to run, deploying it and managing it is often a manual operation. There is little integration with major DevOps or orchestration frameworks. - - - -While not all unikernel projects suffer from all of these issues (e.g., some provide some level of POSIX compliance but the performance is lacking, others target a single programming language and so are relatively easy to build but their applicability is limited), we argue that no single project has been able to successfully address all of them, hindering any significant level of deployment. For the past three years, Unikraft ([www.unikraft.org][2]), a Linux Foundation project under the Xen Project’s auspices, has had the explicit aim to change this state of affairs to bring unikernels into the mainstream.  - -If you’re interested, read on, and please be sure to check out: - - * The [replay of our two FOSDEM talks][3] [2,3] and the [virtual stand ][4] - * Our website (unikraft.org) and source code (). 
- * Our upcoming source code release, 0.5 Tethys (more information at ) - * [unikraft.io][5], for industrial partners interested in Unikraft PoCs (or [info@unikraft.io][6]) - - - -### High Performance - -To provide developers with the ability to obtain high performance easily, Unikraft exposes a set of composable, performance-oriented APIs. The figure below shows Unikraft’s architecture: all components are libraries with their own **Makefile** and **Kconfig** configuration files, and so can be added to the unikernel build independently of each other. - -![][7] - -**Figure 1. Unikraft ‘s fully modular architecture showing high-performance APIs** - -APIs are also micro-libraries that can be easily enabled or disabled via a Kconfig menu; Unikraft unikernels can compose which APIs to choose to best cater to an application’s needs. For example, an RCP-style application might turn off the **uksched** API (➃ in the figure) to implement a high performance, run-to-completion event loop; similarly, an application developer can easily select an appropriate memory allocator (➅) to obtain maximum performance, or to use multiple different ones within the same unikernel (e.g., a simple, fast memory allocator for the boot code, and a standard one for the application itself).  - -![][8] | ![][9] ----|--- -**Figure 2. Unikraft memory consumption vs. other unikernel projects and Linux** | **Figure 3. Unikraft NGINX throughput versus other unikernels, Docker, and Linux/KVM.** - -  - -These APIs, coupled with the fact that all Unikraft’s components are fully modular, results in high performance. Figure 2, for instance, shows Unikraft having lower memory consumption than other unikernel projects (HermiTux, Rump, OSv) and Linux (Alpine); and Figure 3 shows that Unikraft outperforms them in terms of NGINX requests per second, reaching 90K on a single CPU core. - -Further, we are working on (1) a performance profiler tool to be able to quickly identify potential bottlenecks in Unikraft images and (2) a performance test tool that can automatically run a large set of performance experiments, varying different configuration options to figure out optimal configurations. - -### Ease of Use, No Porting Required - -Forcing users to port applications to a unikernel to obtain high performance is a showstopper. Arguably, a system is only as good as the applications (or programming languages, frameworks, etc.) can run. Unikraft aims to achieve good POSIX compatibility; one way of doing so is supporting a **libc** (e.g., **musl)**, along with a large set of Linux syscalls.  - -![][10] - -**Figure 4. Only a certain percentage of syscalls are needed to support a wide range of applications** - -While there are over 300 of these, many of them are not needed to run a large set of applications; as shown in Figure 1 (taken from [5]). Having in the range of 145, for instance, is enough to support 50% of all libraries and applications in a Ubuntu distribution (many of which are irrelevant to unikernels, such as desktop applications). As of this writing, Unikraft supports over 130 syscalls and a number of mainstream applications (e.g., SQLite, Nginx, Redis), programming languages and runtime environments such as C/C++, Go, Python, Ruby, Web Assembly, and Lua, not to mention several different hypervisors (KVM, Xen, and Solo5) and ARM64 bare-metal support. - -### Ecosystem and DevOps - -Another apparent downside of unikernel projects is the almost total lack of integration with existing, major DevOps and orchestration frameworks. 
Working towards the goal of integration, in the past year, we created the **kraft** tool, allowing users to choose an application and a target platform simply (e.g., KVM on x86_64) and take care of building the image running it. - -Beyond this, we have several sub-projects ongoing to support in the coming months: - - * **Kubernetes**: If you’re already using Kubernetes in your deployments, this work will allow you to deploy much leaner, fast Unikraft images _transparently._ - * **Cloud Foundry**: Similarly, users relying on Cloud Foundry will be able to generate Unikraft images through it, once again transparently. - * **Prometheus**: Unikernels are also notorious for having very primitive or no means for monitoring running instances. Unikraft is targeting Prometheus support to provide a wide range of monitoring capabilities.  - - - -In all, we believe Unikraft is getting closer to bridging the gap between unikernel promise and actual deployment. We are very excited about this year’s upcoming features and developments, so please feel free to drop us a line if you have any comments, questions, or suggestions at [**info@unikraft.io**][6]. - -_**About the author: [Dr. Felipe Huici][11] is Chief Researcher, Systems and Machine Learning Group, NEC Laboratories Europe GmbH**_ - -### References - -[1] Unikernels Rethinking Cloud Infrastructure. - -[2] Is the Time Ripe for Unikernels to Become Mainstream with Unikraft? FOSDEM 2021 Microkernel developer room. - -[3] Severely Debloating Cloud Images with Unikraft. FOSDEM 2021 Virtualization and IaaS developer room. - -[4] Welcome to the Unikraft Stand! - -[5] A study of modern Linux API usage and compatibility: what to support when you’re supporting. Eurosys 2016. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/ - -作者:[Linux.com Editorial Staff][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/author/linuxdotcom/ -[b]: https://github.com/lujun9972 -[1]: https://www.linux.com/wp-content/uploads/2021/02/unikraft.svg -[2]: http://www.unikraft.org -[3]: https://video.fosdem.org/2021/stands/unikraft/ -[4]: https://stands.fosdem.org/stands/unikraft/ -[5]: http://www.unikraft.io -[6]: mailto:info@unikraft.io -[7]: https://www.linux.com/wp-content/uploads/2021/02/unikraft1.png -[8]: https://www.linux.com/wp-content/uploads/2021/02/unikraft2.png -[9]: https://www.linux.com/wp-content/uploads/2021/02/unikraft3.png -[10]: https://www.linux.com/wp-content/uploads/2021/02/unikraft4.png -[11]: https://www.linkedin.com/in/felipe-huici-47a559127/ diff --git a/sources/tech/20210214 Why programmers love Linux packaging.md b/sources/tech/20210214 Why programmers love Linux packaging.md deleted file mode 100644 index 837b4a2aed..0000000000 --- a/sources/tech/20210214 Why programmers love Linux packaging.md +++ /dev/null @@ -1,71 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Why programmers love Linux packaging) -[#]: via: (https://opensource.com/article/21/2/linux-packaging) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) - -Why programmers love Linux packaging -====== -Programmers can distribute their software easily and consistently via -Flatpaks, letting them focus on their 
passion: Programming. -![Package wrapped with brown paper and red bow][1] - -In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll talk about what makes packaging for Linux ideal for programmers. - -Programmers love to program. That probably seems like an obvious statement, but it's important to understand that developing software involves a lot more than just writing code. It includes compiling, documentation, source code management, install scripts, configuration defaults, support files, delivery format, and more. Getting from a blank screen to a deliverable software installer requires much more than just programming, but most programmers would rather program than package. - -### What is packaging? - -When food is sent to stores to be purchased, it is packaged. When buying directly from a farmer or from an eco-friendly bulk or bin store, the packaging is whatever container you've brought with you. When buying from a grocery store, packaging may be a cardboard box, plastic bag, a tin can, and so on. - -When software is made available to computer users at large, it also must be packaged. Like food, there are several ways software can be packaged. Open source software can be left unpackaged because users, having access to the raw code, can compile and package it themselves. However, there are advantages to packages, so applications are commonly delivered in some format specific to the user's platform. And that's where the problems begin, because there's not just one format for software packages. - -For the user, packages make it easy to install software because all the work is done by the system's installer. The software is extracted from its package and distributed to the appropriate places within the operating system. There's little opportunity for anything to go wrong. - -For the software developer, however, packaging means that you have to learn how to create a package—and not just one package, but a unique package for every operating system you want your software to be installable on. To complicate matters, there are multiple packaging formats and options for each operating system, and sometimes even for the programming language being used. - -### Packaging on Linux - -Packaging options for Linux have traditionally seemed pretty overwhelming. Linux distributions derived from Fedora, such as Red Hat and CentOS, default to `.rpm` packages. Debian and Ubuntu (and similar) default to `.deb` packages. Other distributions may use one or the other, or neither, opting for a custom format. When asked, many Linux users say that ideally, a programmer won't package their software for Linux at all but instead rely on the package maintainers of each distribution to create the package. All software installed onto any Linux system ought to come from that distribution's official repository. However, it remains unclear how to get your software reliably packaged and included by one distribution, let alone all distributions. - -### Flatpak for Linux - -The Flatpak packaging system was introduced to unify and decentralize Linux as a delivery target for developers. With Flatpak, either a developer or anyone (a member of a Linux community, a different developer, a Flatpak team member, or anyone else) is free to package software. They can then submit the package to Flathub or choose to self-host the package and offer it to basically any Linux distribution. 
The Flatpak system is available to all Linux distributions, so targeting one is the same as targeting them all. - -### How Flatpak technology works - -The secret to Flatpak's universal appeal is a standard base. The Flatpak system allows developers to reference a common set of Software Developer Kit (SDK) modules. These are packaged and managed by the maintainers of the Flatpak system. The SDKs get pulled in as needed whenever you install a Flatpak, ensuring compatibility with your system. Any given SDK is only required once because the libraries it contains can be shared across any Flatpak calling for it. - -If a developer requires a library not already included in an existing SDK, the developer can add that library in the Flatpak. - -The results speak for themselves. Users may install hundreds of packages on any Linux distribution from one central repository, called [Flathub][2]. - -### How developers use Flatpaks - -Flatpaks are designed to be reproducible, so the build process is easily integrated into a CI/CD workflow. A Flatpak is defined in a [YAML][3] or JSON manifest file. You can create your first Flatpak by following my [introductory article][4], and you can read the full documentation at [docs.flatpak.org][5]. - -### Linux makes it easy - -Creating software on Linux is easy, and packaging it up for Linux is simple and automatable. If you're a programmer, Linux makes it easy for you to forget about packaging by targeting one system and integrating that into your build process. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/linux-packaging - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow) -[2]: https://flatpak.org/setup/ -[3]: https://www.redhat.com/sysadmin/yaml-beginners -[4]: https://opensource.com/article/19/10/how-build-flatpak-packaging -[5]: https://docs.flatpak.org/en/latest/index.html diff --git a/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md b/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md deleted file mode 100644 index ae9edc77ea..0000000000 --- a/sources/tech/20210215 Installing Nextcloud 20 on Fedora Linux with Podman.md +++ /dev/null @@ -1,222 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Installing Nextcloud 20 on Fedora Linux with Podman) -[#]: via: (https://fedoramagazine.org/nextcloud-20-on-fedora-linux-with-podman/) -[#]: author: (dschier https://fedoramagazine.org/author/danielwtd/) - -Installing Nextcloud 20 on Fedora Linux with Podman -====== - -![][1] - -Nowadays, many open source projects offer container images for easy deployment. This is very handy when running a home server or lab environment. A previous Fedora Magazine article covered [installing Nextcloud from the source package][2]. This article explains how you can run Nextcloud on Fedora 33 as a container deployment with Podman. - -### What is Nextcloud? - -[Nextcloud][3] started in 2016 as a fork of Owncloud. 
Since then, it evolved into a full-fledged collaboration software offering file-, calendar-, and contact-syncing, plus much more. You can run a simple Kanban Board in it or write documents collaboratively. Nextcloud is fully open source under the AGPLv3 License and can be used for private or commercial use alike. - -### What is Podman? - -Podman is a container engine for developing, managing, and running OCI containers on your Linux system. It offers a wide variety of features like rootless mode, cgroupv2 support, pod management, and it can run daemonless. Furthermore, you get a Docker-compatible API for further development. It is available by default on Fedora Workstation and ready to be used. - -In case you need to install podman, run: - -``` -sudo dnf install podman -``` - -### Designing the Deployment - -Every deployment needs a bit of preparation. Sure, you can simply start a container and start using it, but that wouldn’t be so much fun. A well-thought-out, well-designed deployment should be easy to understand and offer some kind of flexibility. - -#### Container / Images - -First, you need to choose the proper container images for the deployment. This is quite easy for Nextcloud, since it already offers pretty good documentation for container deployments. Nextcloud supports two variations: Nextcloud Apache httpd (which is fully self-contained) and Nextcloud php-fpm (which needs an additional nginx container). - -In both cases, you also need to provide a database, which can be MariaDB (recommended) or PostgreSQL (also supported). This article uses the Apache httpd + MariaDB installation. - -#### Volumes - -Running a container does not persist data you create at runtime. You perform updates by recreating the container. Therefore, you will need some volumes for the database and the Nextcloud files. Nextcloud also recommends you put the “data” folder in a separate volume. So you will end up with three volumes: - - * nextcloud-app - * nextcloud-data - * nextcloud-db - - - -#### Network - -Lastly, you need to consider networking. One of the benefits of containers is that you can re-create your deployment as it might look in production. [Network segmentation][4] is a very common practice and should be considered for a container deployment, too. This tutorial will not add advanced features like network load balancing or security segmentation. You will need only one network, which you will use to publish the ports for Nextcloud. Creating a network also provides the dnsname plugin, which will allow container communication based on container names. - -#### The picture - -Now that every single element is prepared, you can put these together and get a really nice understanding of how the deployment will look. - -![][5] - -### Run, Nextcloud, Run - -Now you have prepared all of the ingredients and you can start running the commands to deploy Nextcloud. All commands can be used for root-privileged or rootless deployments. This article will stick to rootless deployments. - -Start with the network: - -``` -# Creating a new network -$ podman network create nextcloud-net - -# Listing all networks -$ podman network ls - -# Inspecting a network -$ podman network inspect nextcloud-net -``` - -As you can see in the last command, you created a DNS zone with the name “dns.podman”. All containers created in this network are reachable via “CONTAINER_NAME.dns.podman”. - -Next, optionally prepare your volumes. 
This step can be skipped, since Podman will create named volumes on demand, if they do not exist. Podman supports named volumes, which it creates in special locations, so you don’t need to take care of SELinux or the like. - -``` -# Creating the volumes -$ podman volume create nextcloud-app -$ podman volume create nextcloud-data -$ podman volume create nextcloud-db - -# Listing volumes -$ podman volume ls - -# Inspecting volumes (this also provides the full path) -$ podman volume inspect nextcloud-app -``` - -Network and volumes are done. Now provide the containers. - -First, you need the database. According to the MariaDB image documentation, you need to provide some additional environment variables. Additionally, you need to attach the created volume, connect the network, and provide a name for the container. Most of the values will be needed in the next commands again. (Note that you should replace DB_USER_PASSWORD and DB_ROOT_PASSWORD with unique passwords.) - -``` -# Deploy Mariadb -$ podman run --detach \ - --env MYSQL_DATABASE=nextcloud \ - --env MYSQL_USER=nextcloud \ - --env MYSQL_PASSWORD=DB_USER_PASSWORD \ - --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \ - --volume nextcloud-db:/var/lib/mysql \ - --network nextcloud-net \ - --restart on-failure \ - --name nextcloud-db \ - docker.io/library/mariadb:10 - -# Check running containers -$ podman container ls -``` - -After the successful start of your new MariaDB container, you can deploy Nextcloud itself. (Note that you should replace DB_USER_PASSWORD with the password you used in the previous step. Replace NC_ADMIN and NC_PASSWORD with the username and password you want to use for the Nextcloud administrator account.) - -``` -# Deploy Nextcloud -$ podman run --detach \ - --env MYSQL_HOST=nextcloud-db.dns.podman \ - --env MYSQL_DATABASE=nextcloud \ - --env MYSQL_USER=nextcloud \ - --env MYSQL_PASSWORD=DB_USER_PASSWORD \ - --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \ - --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \ - --volume nextcloud-app:/var/www/html \ - --volume nextcloud-data:/var/www/html/data \ - --network nextcloud-net \ - --restart on-failure \ - --name nextcloud \ - --publish 8080:80 \ - docker.io/library/nextcloud:20 - -# Check running containers -$ podman container ls -``` - -Now that the two containers are running, you can configure your installation. Open your browser and point to “localhost:8080” (or another host name or IP address if it is running on a different server). - -The first load may take some time (30 seconds) or even report “unable to load”. This is coming from Nextcloud, which is preparing the first run. In that case, wait a minute or two. Nextcloud will prompt for a username and password. - -![][6] - -Enter the user name and password you used previously. - -![][7] - -Now you are ready to go and experience Nextcloud for testing, development, or your home server. - -### Update - -If you want to update one of the containers, you need to pull the new image and re-create the containers. - -``` -# Update mariadb -$ podman pull mariadb:10 -$ podman stop nextcloud-db -$ podman rm nextcloud-db -$ podman run --detach \ - --env MYSQL_DATABASE=nextcloud \ - --env MYSQL_USER=nextcloud \ - --env MYSQL_PASSWORD=DB_USER_PASSWORD \ - --env MYSQL_ROOT_PASSWORD=DB_ROOT_PASSWORD \ - --volume nextcloud-db:/var/lib/mysql \ - --network nextcloud-net \ - --restart on-failure \ - --name nextcloud-db \ - docker.io/library/mariadb:10 -``` - -Updating the Nextcloud container works exactly the same. 
- -``` -# Update Nextcloud - -$ podman pull nextcloud:20 -$ podman stop nextcloud -$ podman rm nextcloud -$ podman run --detach \ - --env MYSQL_HOST=nextcloud-db.dns.podman \ - --env MYSQL_DATABASE=nextcloud \ - --env MYSQL_USER=nextcloud \ - --env MYSQL_PASSWORD=DB_USER_PASSWORD \ - --env NEXTCLOUD_ADMIN_USER=NC_ADMIN \ - --env NEXTCLOUD_ADMIN_PASSWORD=NC_PASSWORD \ - --volume nextcloud-app:/var/www/html \ - --volume nextcloud-data:/var/www/html/data \ - --network nextcloud-net \ - --restart on-failure \ - --name nextcloud \ - --publish 8080:80 \ - docker.io/library/nextcloud:20 -``` - -That’s it; your Nextcloud installation is up-to-date again. - -### Conclusion - -Deploying Nextcloud with Podman is quite easy. After just a couple of minutes, you will have very handy collaboration software offering file sync, calendar, contacts, and much more. Check out [apps.nextcloud.com][8], which will extend the features even further. - -------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/nextcloud-20-on-fedora-linux-with-podman/ - -作者:[dschier][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/danielwtd/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/nextcloud-podman-816x345.jpg -[2]: https://fedoramagazine.org/build-your-own-cloud-with-fedora-31-and-nextcloud-server/ -[3]: https://nextcloud.com/ -[4]: https://en.wikipedia.org/wiki/Network_segmentation -[5]: https://fedoramagazine.org/wp-content/uploads/2021/01/nextcloud-podman-arch.png -[6]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-12-08-38-37-1024x211.png -[7]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-12-08-38-28-1024x377.png -[8]: https://apps.nextcloud.com diff --git a/sources/tech/20210215 Protect your Home Assistant with these backups.md b/sources/tech/20210215 Protect your Home Assistant with these backups.md deleted file mode 100644 index 5761273b64..0000000000 --- a/sources/tech/20210215 Protect your Home Assistant with these backups.md +++ /dev/null @@ -1,160 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Protect your Home Assistant with these backups) -[#]: via: (https://opensource.com/article/21/2/home-assistant-backups) -[#]: author: (Steve Ovens https://opensource.com/users/stratusss) - -Protect your Home Assistant with these backups -====== -Make sure you can recover quickly from a home automation failure with a -solid backup and hardware plan in the seventh article in this series. -![A rack of servers, blue background][1] - -In the last two articles in this series on home automation with Home Assistant (HA), I walked through setting up a few [integrations][2] with a Zigbee Bridge and some [custom ESP8266][3] devices that I use for automation. The first four articles in the series discussed [what Home Assistant is][4], why you may want [local control][5], some of the [communication protocols][6] for smart home components, and how to [install Home Assistant][7] in a virtual machine (VM) using libvirt. - -Now that you have a basic home automation setup, it is a good time to take a baseline of your system. In this seventh article, I will talk about snapshots, backups, and backup strategies. Let's get right into it. 
- -### Backups vs. copies - -I'll start by clearing up some ambiguity: A copy of something is not the same as a backup. Here is a brief overview of the difference between a copy and a backup. Bear in mind that this comes from the lens of an IT professional. I work with client data day in and day out. I have seen many ways that backups can go sideways, so the following descriptions may be overkill for home use. You'll have to decide just how important your Home Assistant data really is. - - * **Copies:** A copy is just what it sounds. It is when you highlight something on your computer and hit **Ctrl**+**C** and paste it somewhere else with **Ctrl**+**V**. Many people may view this as backing up the source, and to some extent, that is true. However, a copy is merely a representation of a point in time. If it's taken incorrectly, the newly created file can be corrupt, leading to a false sense of security. In addition, the source may have a problem—meaning the copy will also have a problem. If you have just a single copy of a file, it's often the same as having nothing at all. When it comes to backup, the saying "one is none" is absolutely true. If you do not have files going back over time, you won't have a good idea of whether the system creating the backups has a problem. - * **Backups and snapshots:** In Home Assistant, it is a bit tricky to differentiate between a copy and a backup. First, Home Assistant uses the term "snapshot" to refer to what we traditionally think of backups. In this context, a backup is very similar to a copy because you don't use any type of backup software, at least not in the traditional sense. Normally, backup software is designed specifically to get all the files that are hidden or otherwise protected. For example, backup software for a computer (such as CloneZilla) makes an exact replica (in some cases) of the hard drive to ensure no files are missed. Home Assistant knows how to create snapshots and does it for you. You just need to worry about storing the files somewhere. - - - -### Set a good backup strategy - -Before I get into how to deal with snapshots in Home Assistant, I want to share a brief story from a recent client. Remember when I mentioned that simply having a single copy of your files doesn't give you any indication that a problem has occurred? My client was doing all of the right things when it came to backups. The team was using the proper methodology for backups, kept multiple files going back a certain period of time, ensured there were more than two copies of each backup, and was especially careful that backups were not being stored locally on the machine being backed up. Sounds great, doesn't it? They were doing everything right. Well, almost. The one thing they neglected was testing the backups. Most people put this off or disregard it entirely. I admit I am guilty of not testing my backups frequently. I do it when I remember, which is usually once every few months or so. - -In my client's case, a software upgrade created a new requirement from the backup program. This was missed. The backups continued to hum along, and the automated checks passed. There were files after every backup run, they were larger than a certain amount, and the [magic file checks][8] reported the correct file type. The problem was that the file sizes shrunk significantly due to the software change. This meant the client was not backing up the data it thought. 
- -This story has a happy ending, which brings me to my point: Because the client was doing everything else right, we could go through the backups and identify the precise moment when something changed. From this, we linked this to the date of an upgrade some weeks back. Fortunately, there was no emergency that precipitated the investigation. I happened to be doing a standard audit when I discovered the discrepancy. - -The moral of the story? Without proper backup strategies, we would have had a much harder time tracking down this problem. Also, in the event of a failure, we would have had no recovery point. - -A good backup strategy usually entails daily, weekly, and monthly backups. For example, you may decide to keep all your daily backups for two weeks, four weekly backups, and perhaps four monthly backups. This, in my opinion, is overkill for Home Assistant after you have a stable setup. You'll have to choose the level of precision you need. I take a backup before I make any change to the system. This gives me a known-good situation to revert to. - -### Create snapshots - -Great, so how do you create a snapshot in Home Assistant? The **Snapshots** menu resides inside the **Supervisor** tab on the left-size panel. - -![Home Assistant snapshots][9] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -You have two options for creating a snapshot: _Full snapshot_ or _Partial snapshot_. A Full snapshot is self-explanatory. You have some options with a Partial snapshot. - -![Home Assistant partial snapshots][11] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Any component you install in Home Assistant will populate in this menu. Choose a name for your backup and click **Create**. This will take some time, depending on the speed of the disk and the size of the backup. I recommend keeping at least four backups on your machine if you can afford the space. - -![Home Assistant snapshots][12] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -You can retrieve these files from Home Assistant with File Browser if you set up the **Samba share** extension. - -![Samba share][13] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Save these files somewhere safe. The name you give the backup is contained in the metadata inside Home Assistant, and the file names are randomly generated. After I copy them to a different location, I usually rename them because when I test the restoration process on a different machine, the new file name does not matter. - -### My homelab strategy - -I run my Home Assistant instance on top of KVM on a Linux host. I have had a few requests to go into a little more detail on this, so feel free to skip past this section as it's not directly related to HA. - -Due to the nature of my job, I have a fairly large variety of hardware, which I use for a [homelab][14]. Sometimes this is because physical hosts are easier to work with than VMs for certain clustering software. Other times, this is because I keep workloads isolated to specific hardware. Either way, this means I already have a certain amount of knowledge built up around managing and dealing with KVM. Not to mention the fact that I run _almost exclusively_ open source software (with a few exceptions). Here is a basic layout of my homelab: - -![KVM homelab architecture][15] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -The network-attached storage (NAS) has dual 10GB network cards that feed into uplink ports. Two of the KVM hosts have 10GB network interface cards (NICs), while the hosts on the right have regular 1GB network cards. 
- -For Home Assistant, this is well into overkill territory. However, this infrastructure was not designed for HA. HA runs just fine on a Raspberry Pi 4 (4GB version) at my parents' house. - -The VM that hosts Home Assistant has three vCPU cores of a Broadwell Core I5 CPU (circa 2015) with 8GB of RAM. The CPU tends to remain around 25% usage, and I rarely use more than 2.2GB of RAM. This is with 11 add-ons, including InfluxDB and Grafana, which are heavier applications. - -While I do have shared storage, I do not use it for live migration or anything similar. Instead, I use this for backend storage for specific mount points in a VM. For example, I store the `data` directory from Nextcloud on the NAS, divorcing the data from the service. - -At any rate, I have a few approaches to backups with this setup. Naturally, I use the Home Assistant snapshotting function to provide the first layer of protection. I tend to store only four weeks' worth of data on the VM. What do I do with the files themselves? Here is a diagram of how I try to keep my backups safe: - -![Home Assistant backup architecture][16] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Using the Samba add-on, I pull a copy of the snapshot onto my GNOME desktop. I configure Nextcloud using GNOME's **Online Accounts** settings. - -![GNOME Online Accounts settings][17] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -Nextcloud takes a copy and puts it on my NAS. Both my desktop and the NAS use [SpiderOak One Backup][18] clients to ensure the backups are linked to more than one host. In the unlikely event that I delete a device from my SpiderOak account, the file is still linked to another device. I chose SpiderOak because it supports a Linux client, and it is privacy-focused and has no insight into what files it stores. The files are encrypted before being uploaded, and only the owner has the ability to decrypt them. The downside is that if you lose or forget your password, you lose your backups. - -Finally, I keep a cold copy on an external hard drive. I have a 14TB external drive that remains off and disconnected from the NAS except when backups are running. It's not on the diagram, but I also occasionally replicate to a computer at my in-laws' house. - -I can also take snapshots of the VM during critical operations (such as Home Assistant's recent upgrade from using a numbered point release to a month-based numbering system). - -I use a similar pipeline for most things that I back up, although I recognize it is a bit overkill. Also, this whole process has the flaw that it relies on me. Aside from SpiderOak and Nextcloud, I have not automated this process. I have scripts that I run, but I do not run them in a cron or anything like that. In hindsight, perhaps I should work on that. - -This setup may be considered extreme, but the built-in versioning in Nextcloud and SpiderOak, along with making copies in multiple locations, means that I am unlikely to suffer a failure that I can't recover from. At the very least, I should be able to dig up a close reference. - -As a final precaution, I make sure to keep the important information about _how_ I set things up on my private wiki on [Wiki.js][19]. I keep a section just for home automation. - -![Home automation overview][20] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -When you get into creating Node-RED automations (in the next article), I suggest you keep your own notes. 
I take a screenshot of the flow, write a brief description, so I know what I was attempting to achieve, and dump the flow to JSON (for brevity, I omitted the JSON from this screenshot): - -![Node-RED routine][21] - -(Steve Ovens, [CC BY-SA 4.0][10]) - -### Wrapping up - -Backups are essential when you're using Home Assistant, as it is a critical component of your infrastructure that always needs to be functioning. Small downtime is acceptable, but the ability to recover from a failure quickly is crucial. Granted, I have found Home Assistant to be rock solid. It has never failed on its own; any problems I have had were external to HA. Still, if you are going to make HA a central part of your house, I strongly recommend putting a good backup strategy in place. - -In the next article, I'll take a look at setting up some simple automations with Node-RED. As always, leave a comment below if you have questions, ideas, or suggestions for topics you'd like to see covered. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/home-assistant-backups - -作者:[Steve Ovens][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/stratusss -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rack_server_sysadmin_cloud_520.png?itok=fGmwhf8I (A rack of servers, blue background) -[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons -[3]: https://opensource.com/article/21/1/home-assistant-6-custom-sensors -[4]: https://opensource.com/article/20/11/home-assistant -[5]: https://opensource.com/article/20/11/cloud-vs-local-home-automation -[6]: https://opensource.com/article/20/11/home-automation-part-3 -[7]: https://opensource.com/article/20/12/home-assistant -[8]: https://linux.die.net/man/5/magic -[9]: https://opensource.com/sites/default/files/uploads/ha-setup33-snapshot1_0.png (Home Assistant snapshots) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: https://opensource.com/sites/default/files/uploads/ha-setup34-snapshot2.png (Home Assistant partial snapshots) -[12]: https://opensource.com/sites/default/files/uploads/ha-setup35-snapshot3.png (Home Assistant snapshots) -[13]: https://opensource.com/sites/default/files/uploads/ha-setup36-backup-samba.png (Samba share) -[14]: https://opensource.com/article/19/3/home-lab -[15]: https://opensource.com/sites/default/files/uploads/kvm_lab.png (KVM homelab architecture) -[16]: https://opensource.com/sites/default/files/uploads/home_assistant_backups.png (Home Assistant backup architecture) -[17]: https://opensource.com/sites/default/files/uploads/gnome-online-account.png (GNOME Online Accounts settings) -[18]: https://spideroak.com/ -[19]: https://wiki.js.org/ -[20]: https://opensource.com/sites/default/files/uploads/confluence_home_automation_overview.png (Home automation overview) -[21]: https://opensource.com/sites/default/files/uploads/node_red_bedtime.png (Node-RED routine) diff --git a/sources/tech/20210215 Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md b/sources/tech/20210215 Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md deleted file mode 100644 index 88c4fc7c51..0000000000 --- a/sources/tech/20210215 Review of Five popular Hyperledger 
DLTs- Fabric, Besu, Sawtooth, Iroha and Indy.md +++ /dev/null @@ -1,224 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy) -[#]: via: (https://www.linux.com/news/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/) - -Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy -====== - -_by Matt Zand_ - -As companies are catching up in adopting blockchain technology, the choice of a private blockchain platform becomes very vital. Hyperledger, whose open source projects support/power more [enterprise blockchain use cases][1] than others, is currently leading the race of private Distributed Ledger Technology (DLT) implementation. Working from the assumption that you know how blockchain works and what is the design philosophy behind [Hyperledger’s ecosystem][2], in this article we will briefly review five active Hyperledger DLTs. In addition to DLTs discussed in this article, Hyperledger ecosystem has more supporting tools and libraries that I will cover in more detail in my future articles. - -This article mainly targets those who are relatively new to Hyperledger. This article would be a great resource for those interested in providing blockchain solution architect services and doing blockchain enterprise consulting and development. The materials included in this article will help you understand Hyperledger DLTs as a whole and use its high-level overview as a guideline for making the best of each Hyperledger project. - -Since Hyperledger is supported by a robust open source community, new projects are being added to the Hyperledger ecosystem regularly. At the time of this writing, Feb 2021, it consists of six active projects and 10 others which are at the incubation stage. Each project has unique features and advantages. - -**1- Hyperledger Fabric** - -[Hyperledger Fabric][3] is the most popular Hyperledger framework. Smart contracts (also known as **chaincode**) are written in [Golang][4] or JavaScript, and run in Docker containers. Fabric is known for its extensibility and allows enterprises to build distributed ledger networks on top of an established and successful architecture. A permissioned blockchain, initially contributed by IBM and Digital Asset,  Fabric is designed to be a foundation for developing applications or solutions with a modular architecture. It takes plugin components for providing functionalities such as consensus and membership services. Like Ethereum, Hyperledger Fabric can host and execute smart contracts, which are named chaincode. A Fabric network consists of peer nodes, which execute smart contracts (chaincode), query ledger data, validate transactions, and interact with applications. User-entered transactions are channeled to an ordering service component, which initially serves to be a consensus mechanism for Hyperledger Fabric. Special nodes called Orderer nodes validate the transactions, ensure the consistency of the blockchain, and send the validated transactions to the peers of the network as well as to membership service provider (MSP) services. 
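
To make the chaincode model more concrete, here is a minimal sketch of what a Fabric smart contract written in Go can look like. It is not taken from any project discussed in this article; it only illustrates the common pattern of a contract type whose transaction functions read and write keys on the channel ledger through the transaction context. The `KVContract` name and the `Put`/`Get` functions are illustrative assumptions, and the sketch assumes the `fabric-contract-api-go` contract API; consult the official Fabric samples for a maintained implementation.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// KVContract is a minimal contract that stores and retrieves
// simple key/value pairs on the channel ledger.
type KVContract struct {
	contractapi.Contract
}

// Put records value under key in the world state.
func (c *KVContract) Put(ctx contractapi.TransactionContextInterface, key string, value string) error {
	return ctx.GetStub().PutState(key, []byte(value))
}

// Get returns the value previously stored under key, or an error
// if the key has never been written.
func (c *KVContract) Get(ctx contractapi.TransactionContextInterface, key string) (string, error) {
	data, err := ctx.GetStub().GetState(key)
	if err != nil {
		return "", fmt.Errorf("failed to read key %s: %v", key, err)
	}
	if data == nil {
		return "", fmt.Errorf("key %s does not exist", key)
	}
	return string(data), nil
}

func main() {
	// Wrap the contract in a chaincode and hand control over to the peer runtime.
	chaincode, err := contractapi.NewChaincode(&KVContract{})
	if err != nil {
		panic(fmt.Errorf("error creating chaincode: %v", err))
	}
	if err := chaincode.Start(); err != nil {
		panic(fmt.Errorf("error starting chaincode: %v", err))
	}
}
```

Once such a chaincode is packaged and installed on the peers of a channel, client applications invoke functions like `Put` and `Get` through the Fabric SDKs, and the endorsement, ordering, and validation flow described above takes over.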
- -Two major highlights of Hyperledger Fabric versus Ethereum are: - - * **Multi-ledger**: Each node on Ethereum has a replica of a single ledger in the network. However, Fabric nodes can carry multiple ledgers on each node, which is a great feature for enterprise applications. - * **Private Data**: In addition to a private channel feature, unlike with Ethereum, Fabric members within a consortium can exchange private data among themselves without disseminating them through Fabric channel, which is very useful for enterprise applications. - - - -[Here][5] is a good article for reviewing all Hyperledger Fabric components like peer, channel and, chaincode that are essential for building blockchain applications. In short, thorough understanding of all Hyperledger Fabric components is highly recommended for building, deploying and managing enterprise-level Hyperledger Fabric applications. - -**2- Hyperledger Besu** - -Hyperledger Besu is an open source Ethereum client developed under the Apache 2.0 license and written in Java. It can be run on the Ethereum public network or on private permissioned networks, as well as test networks such as Rinkeby, Ropsten, and Gorli. Hyperledger Besu supports several consensus algorithms including PoW, PoA, and IBFT, and has comprehensive permissioning schemes designed specifically for uses in a consortium environment. - -Hyperledger Besu implements the Enterprise Ethereum Alliance (EEA) specification. The EEA specification was established to create common interfaces amongst the various open and closed source projects within Ethereum, to ensure users do not have vendor lock-in, and to create standard interfaces for teams building applications. Besu implements enterprise features in alignment with the EEA client specification. - -As a basic Ethereum Client, Besu has the following features: - - * It connects to the blockchain network to synchronize blockchain transaction data or emit events to the network. - * It processes transactions through smart contracts in an Ethereum Virtual Machine (EVM) environment. - * It uses a data storage of networks (blocks). - * It publishes client API interfaces for developers to interact with the blockchain network. - - - -Besu implements [Proof of Work][6] and [Proof of Authority][7] (PoA) consensus mechanisms. Further, Hyperledger Besu implements several PoA protocols, including Clique and IBFT 2.0. - -Clique is a proof-of-authority blockchain consensus protocol. The blockchain runs Clique protocol maintaining the list of authorized signers. These approved signers directly mine and seal all blocks without mining. Therefore, the transaction task is computationally light. When creating a block, a miner collects and executes transactions, updates the network state with the calculated hash of the block and signs the block using his private key. By using a defined period of time to create a block, Clique can limit the number of processed transactions. - -IBFT 2.0 (Istanbul BFT 2.0) is a PoA **Byzantine-Fault-Tolerant** (**BFT**) blockchain consensus protocol. Transactions and blocks in the network are validated by authorized accounts, known as validators. Validators collect, validate and execute transactions and create the next block. Existing validators can propose and vote to add or remove validators and maintain a dynamic validator set. The consensus can ensure immediate finality. As the name suggests, IBFT 2.0 builds upon the IBFT blockchain consensus protocol with improved safety and liveness. 
In IBFT 2.0 blockchain, all valid blocks are directly added in the main chain and there are no forks. - -**3- Hyperledger Sawtooth** - -Sawtooth is the second Hyperledger project to reach 1.0 release maturity. Sawtooth-core is written in Python, while Sawtooth Raft and Sawtooth Sabre are written in Rust. It also has JavaScript and Golang components. Sawtooth supports both permissioned and permissionless deployments. It supports the EVM through a collaboration with the Hyperledger Burrow. By design, Hyperledger Sawtooth is created to address issues of performance. As such, one of its distinct features compared to other Hyperledger DLTs is that each node in Sawtooth can act as an orderer by validating and approving a transaction. Other notable features are: - - * **Parallel Transaction Execution**: While many blockchains use serial transaction execution to ensure consistent ordering at every node on the network, Sawtooth follows an advanced parallel scheduler that classifies transactions into parallel flows that eventually leads to the boost in transaction processing performance. - * **Separation of Application from Core**: Sawtooth simplifies the development and deployment of an application by separating the application level from the core system level. It offers smart contract abstraction to allow developers to create contract logic in the programming language of their choice. - * **Custom Transaction Processors**: In Sawtooth, each application can define the custom transaction processors to meet its unique requirements. It provides transaction families to serve as an approach for low-level functions, like storing on-chain permissions, managing chain-wide settings and for particular applications such as saving block information and performance analysis. - - - -**4- Hyperledger Iroha** - -Hyperledger Iroha is designed to target the creation and management of complex digital assets and identities. It is written in C++ and is user friendly. Iroha has a powerful role-based model for access control and supports complex analytics. While using Iroha for identity management, querying and performing commands are only limited to the participants who have access to the Iroha network. A robust permissions system ensures that all transactions are secure and controlled. Some of its highlights are: - - * **Ease of use:** You can easily create and manage simple, as well as complex, digital assets (e.g., cryptocurrency or personal medical data). - * **Built-in Smart Contracts:** You can easily integrate blockchain into a business process using built-in smart-contracts called “commands.” As such, developers need not to write complicated smart-contracts because they are available in the form of commands. - * **BFT:** Iroha uses BFT consensus algorithm which makes it suitable for businesses that require verifiable data consistency at a low cost. - - - -**5- Hyperledger Indy** - -As a self-sovereign identity management platform, Hyperledger Indy is built explicitly for decentralized identity management. The server portion, Indy node, is built in Python, while the Indy SDK is written in Rust. It offers tools and reusable components to manage digital identities on blockchains or other distributed ledgers. Hyperledger Indy architecture is well-suited for every application that requires heavy work on identity management since Indy is easily interpretable across multiple domains, organization silos and applications. As such, identities are securely stored and shared with all parties involved. 
Some notable highlights of Hyperledger Indy are: - -●        Identity Correlation-resistant: According to the Hyperledger Indy documentation, Indy is completely identity correlation-resistant. So, you do not need to worry about connecting or mixing one Id with another. That means, you can not connect two Ids or find two similar Ids in the ledger. - -●        Decentralized Identifiers (DIDs): According to the Hyperledger Indy documentation, all the decentralized identifiers are globally resolvable and unique without needing any central party in the mix. That means, every decentralized identity on the Indy platform will have a unique identifier that will solely belong to you. As a result, no one can claim or even use your identity on your behalf. So, it would eliminate the chances of identity theft. - -●        Zero-Knowledge Proofs: With help from Zero-Knowledge Proof, you can disclose only the information necessary without anything else. So, when you have to prove your credentials, you can only choose to release the information that you need depending on the party that is requesting it. For instance, you may choose to share your data of birth only with one party whereas to release your driver license and financial docs to another. In short, Indy gives users great flexibility in sharing their private data whenever and wherever needed. - -**Summary** - -In this article, we briefly reviewed five popular Hyperledger DLTs. We started off by going over Hyperledger Fabric and its main components and some of its highlights compared to public blockchain platforms like Ethereum. Even though Fabric is currently used heavily for supply chain management, if you are doing lots of specific works in supply chain domain, you should explore Hyperledger Grid too. Then, we moved on to learning how to use Hyperledger Besu for building public consortium blockchain applications that support multiple consensus algorithms and how to manage Besu from EVM. Next, we covered some highlights of Hyperledger Sawtooth such as how it is designed for high performance. For instance, we learned how a single node in Sawtooth can act as an orderer by approving and validating transactions in the network. The last two DLTs (Hyperledger Iroha and Indy) are specifically geared toward digital asset management and identity . So if you are working on a project that heavily uses identity management, you should explore and use either Iroha or Indy instead of Fabric. - -I have included reference and resource links for those interested in exploring topics discussed in this article in depth. - -For more references on all Hyperledger projects, libraries and tools, visit the below documentation links: - - 1. [Hyperledger Indy Project][8] - 2. [Hyperledger Fabric Project][9] - 3. [Hyperledger Aries Library][10] - 4. [Hyperledger Iroha Project][11] - 5. [Hyperledger Sawtooth Project][12] - 6. [Hyperledger Besu Project][13] - 7. [Hyperledger Quilt Library][14] - 8. [Hyperledger Ursa Library][15] - 9. [Hyperledger Transact Library][16] - 10. [Hyperledger Cactus Project][17] - 11. [Hyperledger Caliper Tool][18] - 12. [Hyperledger Cello Tool][19] - 13. [Hyperledger Explorer Tool][20] - 14. [Hyperledger Grid (Domain Specific)][21] - 15. [Hyperledger Burrow Project][22] - 16. 
[Hyperledger Avalon Tool][23] - - - -**Resources** - - * Free Training Courses from The Linux Foundation & Hyperledger - * [Blockchain: Understanding Its Uses and Implications (LFS170)][24] - * [Introduction to Hyperledger Blockchain Technologies (LFS171)][25] - * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][26] - * [Becoming a Hyperledger Aries Developer (LFS173)][27] - * [Hyperledger Sawtooth for Application Developers (LFS174)][28] - * eLearning Courses from The Linux Foundation & Hyperledger - * [Hyperledger Fabric Administration (LFS272)][29] - * [Hyperledger Fabric for Developers (LFD272)][30] - * Certification Exams from The Linux Foundation & Hyperledger - * [Certified Hyperledger Fabric Administrator (CHFA)][31] - * [Certified Hyperledger Fabric Developer (CHFD)][32] - * [Hands-On Smart Contract Development with Hyperledger Fabric V2][33] Book by Matt Zand and others. - * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][34] - * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][35] - * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][36] - * [Intro to Blockchain Cybersecurity (Coding Bootcamps)][37] - * [Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)][38] - * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][39] - * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][40] - * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][41] - * [Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)][42] - * [How to build DApps with Hyperledger Fabric][43] - * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][44] - * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][45] - * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][46] - * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][47] - * [Blockchain Management in Hyperledger for System Admins][48] - * [Hyperledger Fabric for Developers (Coding Bootcamps)][49] - * [Free White Papers from Hyperledger][50] - * [Free Webinars from Hyperledger][51] - * [Hyperledger Wiki][52] - - - -**About the Author** - -**Matt Zand** is a serial entrepreneur and the founder of 3 tech startups: [DC Web Makers][53], [Coding Bootcamps][54] and [High School Technology Services][55]. He is a leading author of [Hands-on Smart Contract Development with Hyperledger Fabric][33] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for Hyperledger, Ethereum and Corda R3 platforms. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as senior web and mobile App developer and consultant, angel investor, business advisor for a few startup companies. You can connect with him on LI: - -The post [Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][56] appeared first on [Linux Foundation – Training][57]. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ -[b]: https://github.com/lujun9972 -[1]: https://blockchain.dcwebmakers.com/blog/comprehensive-overview-and-analysis-of-blockchain-use-cases-in-many-industries.html -[2]: https://weg2g.com/application/touchstonewords/article-intro-to-hyperledger-family-and-hyperledger-blockchain-ecosystem.php -[3]: https://learn.coding-bootcamps.com/blog/202224/why-build-blockchain-applications-with-hyperledger-fabric -[4]: https://learn.coding-bootcamps.com/p/learn-go-programming-language-by-examples -[5]: https://coding-bootcamps.com/blog/review-of-hyperledger-fabric-architecture-and-components.html -[6]: https://coding-bootcamps.com/blog/how-proof-of-work-consensus-works-in-blockchain.html -[7]: https://coding-bootcamps.com/blog/how-proof-of-stake-consensus-works-in-blockchain.html -[8]: https://www.hyperledger.org/use/hyperledger-indy -[9]: https://www.hyperledger.org/use/fabric -[10]: https://www.hyperledger.org/projects/aries -[11]: https://www.hyperledger.org/projects/iroha -[12]: https://www.hyperledger.org/projects/sawtooth -[13]: https://www.hyperledger.org/projects/besu -[14]: https://www.hyperledger.org/projects/quilt -[15]: https://www.hyperledger.org/projects/ursa -[16]: https://www.hyperledger.org/projects/transact -[17]: https://www.hyperledger.org/projects/cactus -[18]: https://www.hyperledger.org/projects/caliper -[19]: https://www.hyperledger.org/projects/cello -[20]: https://www.hyperledger.org/projects/explorer -[21]: https://www.hyperledger.org/projects/grid -[22]: https://www.hyperledger.org/projects/hyperledger-burrow -[23]: https://www.hyperledger.org/projects/avalon -[24]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/ -[25]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/ -[26]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/ -[27]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/ -[28]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/ -[29]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/ -[30]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/ -[31]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/ -[32]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/ -[33]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/ -[34]: https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php -[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php -[36]: 
https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php -[37]: https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples -[38]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins -[39]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php -[40]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php -[41]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php -[42]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric -[43]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric -[44]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php -[45]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php -[46]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php -[47]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php -[48]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples -[49]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers -[50]: https://www.hyperledger.org/learn/white-papers -[51]: https://www.hyperledger.org/learn/webinars -[52]: https://wiki.hyperledger.org/ -[53]: https://blockchain.dcwebmakers.com/ -[54]: http://coding-bootcamps.com/ -[55]: https://myhsts.org/ -[56]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ -[57]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md b/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md deleted file mode 100644 index 2be48a8b15..0000000000 --- a/sources/tech/20210218 What is PPA Purge- How to Use it in Ubuntu and other Debian-based Distributions.md +++ /dev/null @@ -1,182 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (What is PPA Purge? How to Use it in Ubuntu and other Debian-based Distributions?) -[#]: via: (https://itsfoss.com/ppa-purge/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -What is PPA Purge? How to Use it in Ubuntu and other Debian-based Distributions? -====== - -PPA is a popular method of installing additional applications or newer versions of a software in Ubuntu. - -I have written a [detailed guide on PPA][1] so I will just quickly recall it here. PPA is a mechanism developed by Ubuntu to enable developers to provide their own repositories. When you add a PPA, you add additional repository to your system and thus you can download applications from this additional repository. - -``` -sudo add-apt-repository ppa:ppa-address -sudo apt update -sudo apt install package_from_ppa -``` - -I have also written about [deleting PPAs from your system][2]. I briefly mentioned the PPA Purge tool in that article. In this tutorial, you’ll get more detailed information about this handy utility. - -### What is PPA Purge? 
- -PPA Purge is a command line tool that disables a PPA repository from your software sources list. Apart from that, it reverts the system back to official Ubuntu packages. This is a different behavior than simply deleting the PPA repository. - -Suppose application ABC has version x available from Ubuntu repositories. You add a PPA that provides a higher version y of the same application/package ABC. When your Linux system finds that the same package is available from multiple sources, it uses the source that provides a newer version. - -In this example, you’ll have version y of application ABC installed thanks to the PPA you added. - -Normally, you would remove the application and then remove the PPA from sources list. But if you use ppa-purge to disable the said PPA, your application ABC will automatically revert to the older version x provided by Ubuntu repositories. - -Do you see the difference? Probably not. Let me explain it to you with real examples. - -#### Reverting applications to the official version provided by Ubuntu - -I heard that the [upcoming VLC 4.0 version has major UI overhaul][3]. I wanted to try it before it is officially released and so I used the [daily build PPA of VLC][4] to get the under-development version 4. - -Take a look at the screenshot below. I have added the VLC PPA (videolan/master-daily) and this PPA provides VLC version 4.0 release candidate (RC) version. Ubuntu repositories provide VLC version 3.0.11. - -![][5] - -If I use the ppa-purge command with the VLC daily build PPA, it disables the PPA and reverts the installed VLC version to 3.0.11 which is available from Ubuntu’s universal repository. - -![][6] - -You can see that it informs you that some packages are going to be downgraded. - -![][7] - -When the daily build VLC PPA is purged, the installed version reverts to what Ubuntu provides from its official repositories. - -![][8] - -You might think that VLC was downgraded because it was upgraded from version 3.0.11 to VLC 4.0 with the PPA. But here is a funny thing. Even if I had used the PPA to install VLC 4.0 RC version afresh (instead of upgrading it), it would still be downgraded instead of being removed from the system. - -Does it mean ppa-purge command cannot remove applications along with disabling the PPA? Not quite so. Let me show another example. - -#### PPA Purge impact on application only available from a PPA - -I recently stumbled across Plots, a [nifty tool for plotting mathematical graphs][9]. Since it is a new application, it is not available in Ubuntu repositories yet. I used [its PPA][10] to install it. - -If I use ppa-purge command on this PPA, it disables the PPA first and then looks to revert it to the original version. But there is no ‘original version’ in Ubuntu’s repositories. So, it proceeds to [uninstall the application from Ubuntu][11]. - -The entire process is depicted in the single picture below. Pointer 1 is for adding PPA, pointer 2 is for installing the application named plots. I have discarded the input for these two commands with [redirection in Linux][12]. - -You can see that when PPA Purge is used (pointer 3), it disables the PPA (pointer 4) and then proceeds to inform that the application plots will be removed (pointer 5). - -![][13] - -#### Deleting a PPA vs disabling it - -I have repeatedly used the term ‘disabling PPA’ with PPA Purge. There is a difference between disabling PPA and deleting it. - -When you add a PPA, it adds a new file in the /etc/apt/sources.list.d directory. 
This file has the URL of the repository. - -Disabling the PPA keeps this file but it is commented out the repository in the PPA’s file. Now this repository is not considered while updating or installing software. - -![][14] - -You can see disabled PPA repository in Software & Updates tool: - -![][15] - -When you delete a PPA, it means deleting the PPA’s file from etc/apt/sources.list.d directory. You won’t see it anywhere on the system. - -![PPA deleted][16] - -Why disable a PPA instead of deleting it? Because it is easier to re-enable it. You can do just check the box in Software & Updates tool or edit the PPA file and remove the leading # to uncomment the repository. - -#### Recap of what PPA Purge does - -If it was too much information, let me summarize the main points of what the ppa-purge script/tool does: - - * PPA Purge disables a given PPA but doesn’t delete it. - * If there was a new application (which is not available from any sources other than only the PPA) installed with the given PPA, it is uninstalled. - * If the PPA upgraded an already installed application, that application will be reverted to the version provided by the official Ubuntu repositories. - * If you used the PPA to install (not upgrade) a newer version of an application (which is also available from the official Ubuntu repository), using PPA Purge will downgrade the application version to the one available from Ubuntu repositories. - - - -### Using PPA Purge - -Alright! Enough explanation. You might be wondering how to use PPA Purge. - -You need to install ppa-purge tool first. Ensure that you have [universe repository enabled][17] already. - -``` -sudo apt install ppa-purge -``` - -As far using PPA Purge, you should provide the PPA name in a format similar to what you use for adding it: - -``` -sudo ppa-purge ppa:ppa-name -``` - -Here’s a real example: - -![][18] - -If you are not sure of the PPA name, [use the apt show command][19] to display the source repository of the package in question. - -``` -apt show vlc -``` - -![Finding PPA source URL][20] - -For example, the source for VLC PPA shows groovy/main. Out of this the terms after ppa.launchpad.net and before Ubuntu are part of PPA name. So here, you get the PPA name as videolan/master-daily. - -If you have to use to purge the PPA ‘videolan/master-daily’, you use it like this by adding `ppa:` before PPA name: - -``` -sudo ppa-purge ppa:videolan/master-daily -``` - -### Do you purge your PPAs? - -I wanted to keep this article short and crisp but it seems I went in a little bit of more detail.As long as you learn something new, you won’t mind the additional details, will you? - -PPA Purge is a nifty utility that allows you to test newer or beta versions of applications and then easily revert to the original version provided by the distribution. If a PPA has more than one application, it works on all of them. - -Of course, you can do all these stuff manually which is to disable the PPA, remove the application and install it again to get the version provided by the distribution. PPA Purge makes the job easier. - -Do you use ppa-purge already or will you start using it from now onwards? Did I miss some crucial information or do you still have some doubts on this topic? Please feel free to use the comment sections. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/ppa-purge/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/ppa-guide/ -[2]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/ -[3]: https://news.itsfoss.com/vlc-4-features/ -[4]: https://launchpad.net/~videolan/+archive/ubuntu/master-daily -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/vlc-ppa.png?resize=800%2C400&ssl=1 -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/using-ppap-purge.png?resize=800%2C506&ssl=1 -[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/downgrade-packages-with-ppa-purge.png?resize=800%2C506&ssl=1 -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/package-reverted-ppa-purge.png?resize=800%2C405&ssl=1 -[9]: https://itsfoss.com/plots-graph-app/ -[10]: https://launchpad.net/~apandada1/+archive/ubuntu/plots/ -[11]: https://itsfoss.com/uninstall-programs-ubuntu/ -[12]: https://linuxhandbook.com/redirection-linux/ -[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-purge-deleting-apps.png?resize=800%2C625&ssl=1 -[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/disabled-ppa.png?resize=800%2C295&ssl=1 -[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/disabled-ppa-ubuntu.png?resize=800%2C398&ssl=1 -[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-deleted.png?resize=800%2C271&ssl=1 -[17]: https://itsfoss.com/ubuntu-repositories/ -[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/ppa-purge-example-800x379.png?resize=800%2C379&ssl=1 -[19]: https://itsfoss.com/apt-search-command/ -[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/apt-show-find-ppa-source.png?resize=800%2C341&ssl=1 diff --git a/sources/tech/20210222 A step-by-step guide to Knative eventing.md b/sources/tech/20210222 A step-by-step guide to Knative eventing.md deleted file mode 100644 index 82f90f55c8..0000000000 --- a/sources/tech/20210222 A step-by-step guide to Knative eventing.md +++ /dev/null @@ -1,598 +0,0 @@ -[#]: subject: "A step-by-step guide to Knative eventing" -[#]: via: "https://opensource.com/article/21/2/knative-eventing" -[#]: author: "Jessica Cherry https://opensource.com/users/cherrybomb" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -A step-by-step guide to Knative eventing -====== -Knative eventing is a way to create, send, and verify events in your cloud-native environment. - -![Computer laptop in space][1] - -Image by: Opensource.com - -In a previous article, I covered [how to create a small app with Knative][2], which is an open source project that adds components to [Kubernetes][3] for deploying, running, and managing [serverless, cloud-native][4] applications. In this article, I'll explain Knative eventing, a way to create, send, and verify events in your cloud-native environment. - -Events can be generated from many sources in your environment, and they can be confusing to manage or define. Since Knative follows the [CloudEvents][5] specification, it allows you to have one common abstraction point for your environment, where the events are defined to one specification. 
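
As a rough illustration of that single abstraction, here is what a minimal event looks like in the CloudEvents 1.0 structured (JSON) form. The attribute names come from the specification, while the `type`, `source`, and payload values are placeholders that mirror the events sent later in this walkthrough:

```
{
  "specversion": "1.0",
  "type": "greeting",
  "source": "not-sendoff",
  "id": "say-hello",
  "datacontenttype": "application/json",
  "data": {
    "msg": "Hello Knative!"
  }
}
```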
- -This article explains how to install Knative eventing version 0.20.0 and create, trigger, and verify events. Because there are many steps involved, I suggest you look at my [GitHub repo][6] to walk through this article with the files. - -### Set up your configuration - -This walkthrough uses [Minikube][7] with Kubernetes 1.19.0. It also makes some configuration changes to the Minikube environment. - -**Minikube pre-configuration commands:** - -``` -$ minikube config set kubernetes-version v1.19.0 -$ minikube config set memory 4000 -$ minikube config set cpus 4 -``` - -Before starting Minikube, run the following commands to make sure your configuration stays and start Minikube: - -``` -$ minikube delete -$ minikube start -``` - -### Install Knative eventing - -Install the Knative eventing custom resource definitions (CRDs) using kubectl. The following shows the command and a snippet of the output: - -``` -$ kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.0/eventing-crds.yaml - -customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created -customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created -customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created -customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created -``` - -Next, install the core components using kubectl: - -``` -$ kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.0/eventing-core.yaml -namespace/knative-eventing created -serviceaccount/eventing-controller created -clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created -``` - -Since you're running a standalone version of the Knative eventing service, you must install the in-memory channel to pass events. Using kubectl, run: - -``` -$ kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.0/in-memory-channel.yaml -``` - -Install the broker, which utilizes the channels and runs the event routing: - -``` -$ kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.0/mt-channel-broker.yaml -clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created -clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created -``` - -Next, create a namespace and add a small broker to it; this broker routes events to triggers. Create your namespace using kubectl: - -``` -$ kubectl create namespace eventing-test -namespace/eventing-test created -``` - -Now create a small broker named `default` in your namespace. The following is the YAML from my **broker.yaml** file (which can be found in my GitHub repository): - -``` -apiVersion: eventing.knative.dev/v1 -kind: broker -metadata: -  name: default -  namespace: eventing-test -``` - -Then apply your broker file using kubectl: - -``` -$ kubectl create -f broker.yaml -   broker.eventing.knative.dev/default created -``` - -Verify that everything is up and running (you should see the confirmation output) after you run the command: - -``` -$ kubectl -n eventing-test get broker default                                                               -NAME      URL                                                                              AGE    READY   REASON -default   http://broker-ingress.knative-eventing.svc.cluster.local/eventing-test/default   3m6s   True -``` - -You'll need this URL from the broker output later for sending events, so save it. 
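
If you would rather not copy the URL by hand, you can capture it in a shell variable. This assumes the broker publishes its address under `status.address.url`, which is what the `URL` column above reflects:

```
$ BROKER_URL=$(kubectl -n eventing-test get broker default \
    -o jsonpath='{.status.address.url}')
$ echo $BROKER_URL
http://broker-ingress.knative-eventing.svc.cluster.local/eventing-test/default
```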
- -### Create event consumers - -Now that everything is installed, you can start configuring the components to work with events. - -First, you need to create event consumers. You'll create two consumers in this walkthrough: **hello-display** and **goodbye-display**. Having two consumers allows you to see how to target a consumer per event message. - -**The hello-display YAML code:** - -``` -apiVersion: apps/v1 -kind: Deployment -metadata: -  name: hello-display -spec: -  replicas: 1 -  selector: -    matchLabels: &labels -      app: hello-display -  template: -    metadata: -      labels: *labels -    spec: -      containers: -        - name: event-display -          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display - ---- - -kind: Service -apiVersion: v1 -metadata: -  name: hello-display -spec: -  selector: -    app: hello-display -  ports: -    - protocol: TCP -      port: 80 -      targetPort: 8080 -``` - -**The goodbye-display YAML code:** - -``` -apiVersion: apps/v1 -kind: Deployment -metadata: -  name: goodbye-display -spec: -  replicas: 1 -  selector: -    matchLabels: &labels -      app: goodbye-display -  template: -    metadata: -      labels: *labels -    spec: -      containers: -        - name: event-display -          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display -          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display - ---- - -kind: Service -apiVersion: v1 -metadata: -  name: goodbye-display -spec: -  selector: -    app: goodbye-display -  ports: -  - protocol: TCP -    port: 80 -    targetPort: 8080 -``` - -The differences in the YAML between the two consumers are in the `app` and `metadata name` sections. While both consumers are on the same ports, you can target one when generating an event. Create the consumers using kubectl: - -``` -$ kubectl -n eventing-test apply -f hello-display.yaml -deployment.apps/hello-display created -service/hello-display created - -$ kubectl -n eventing-test apply -f goodbye-display.yaml -deployment.apps/goodbye-display created -service/goodbye-display created -``` - -Check to make sure the deployments are running after you've applied the YAML files: - -``` -$ kubectl -n eventing-test get deployments hello-display goodbye-display -NAME              READY   UP-TO-DATE   AVAILABLE   AGE -hello-display     1/1     1            1           2m4s -goodbye-display   1/1     1            1           34s -``` - -### Create triggers - -Now, you need to create the triggers, which define the events the consumer receives. You can define triggers to use any filter from your cloud events. The broker receives events from the trigger and sends the events to the correct consumer. This set of examples creates two triggers with different definitions. For example, you can send events with the attribute type `greeting` to the `hello-display` consumer. - -**The greeting-trigger.yaml code:** - -``` -apiVersion: eventing.knative.dev/v1 -kind: Trigger -metadata: -  name: hello-display -spec: -  broker: default -  filter: -    attributes: -      type: greeting -  subscriber: -    ref: -     apiVersion: v1 -     kind: Service -     name: hello-display -``` - -To create the first trigger, apply your YAML file: - -``` -$ kubectl -n eventing-test apply -f greeting-trigger.yaml -trigger.eventing.knative.dev/hello-display created -``` - -Next, make the second trigger using **sendoff-trigger.yaml**. 
This sends anything with the attribute `source sendoff` to your `goodbye-display` consumer. - -**The sendoff-trigger.yaml code:** - -``` -apiVersion: eventing.knative.dev/v1 -kind: Trigger -metadata: -  name: goodbye-display -spec: -  broker: default -  filter: -    attributes: -      source: sendoff -  subscriber: -    ref: -      apiVersion: v1 -      kind: Service -      name: goodbye-display -``` - -Next, apply your second trigger definition to the cluster: - -``` -$ kubectl -n eventing-test apply -f sendoff-trigger.yaml -trigger.eventing.knative.dev/goodbye-display created -``` - -Confirm everything is correctly in place by getting your triggers from the cluster using kubectl: - -``` -$ kubectl -n eventing-test get triggers -NAME              BROKER    SUBSCRIBER_URI                                            AGE   READY   -goodbye-display   default   http://goodbye-display.eventing-test.svc.cluster.local/   24s   True     -hello-display     default   http://hello-display.eventing-test.svc.cluster.local/     46s   True -``` - -### Create an event producer - -Create a pod you can use to send events. This is a simple pod deployment with curl and SSH access for you to [send events using curl][8]. Because the broker can be accessed only from inside the cluster where Knative eventing is installed, the pod needs to be in the cluster; this is the only way to send events into the cluster. Use the **event-producer.yaml** file with this code: - -``` -apiVersion: v1 -kind: Pod -metadata: -  labels: -    run: curl -  name: curl -spec: -  containers: -    - image: radial/busyboxplus:curl -      imagePullPolicy: IfNotPresent -      name: curl -      resources: {} -      stdin: true -      terminationMessagePath: /dev/termination-log -      terminationMessagePolicy: File -      tty: true -``` - -Next, deploy the pod by using kubectl: - -``` -$ kubectl -n eventing-test apply -f event-producer.yaml -pod/curl created -``` - -To verify, get the deployment and make sure the pod is up and running: - -``` -$ kubectl get pods -n eventing-test -NAME                               READY   STATUS    RESTARTS   AGE -curl                               1/1     Running   0          8m13s -``` - -### Send some events - -Since this article has been so configuration-heavy, I imagine you'll be happy to finally be able to send some events and test out your services. Events have to be passed internally in the cluster. Usually, events would be defined around applications internal to the cluster and come from those applications. But this example will manually send events from your pod named **curl**. - -Begin by logging into the pod: - -``` -$ kubectl -n eventing-test attach curl -it -``` - -Once logged in, you'll see output similar to: - -``` -Defaulting container name to curl. -Use 'kubectl describe pod/curl -n eventing-test' to see all of the containers in this pod. -If you don't see a command prompt, try pressing enter. -[ root@curl:/ ]$ -``` - -Now, generate an event using curl. This needs some extra definitions and requires the broker URL generated during the installation. 
This example sends a greeting to the broker: - -``` -curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/eventing-test/default" \ -  -X POST \ -  -H "Ce-Id: say-hello" \ -  -H "Ce-Specversion: 1.0" \ -  -H "Ce-Type: greeting" \ -  -H "Ce-Source: not-sendoff" \ -  -H "Content-Type: application/json" \ -  -d '{"msg":"Hello Knative!"}' -``` - -`Ce` is short for CloudEvent, which is the [standardized CloudEvents specification][9] that Knative follows. You also need to know the event ID (this is useful to verify it was delivered), the type, the source (which must specify that it is not a `sendoff` so that it doesn't go to the source defined in the sendoff trigger), and a message. - -When you run the command, this should be the output (and you should receive a [202 Accepted][10] response): - -``` -> POST /eventing-test/default HTTP/1.1 -> User-Agent: curl/7.35.0 -> Host: broker-ingress.knative-eventing.svc.cluster.local -> Accept: */* -> Ce-Id: say-hello -> Ce-Specversion: 1.0 -> Ce-Type: greeting -> Ce-Source: not-sendoff -> Content-Type: application/json -> Content-Length: 24 -> -< HTTP/1.1 202 Accepted -< Date: Sun, 24 Jan 2021 22:25:25 GMT -< Content-Length: 0 -``` - -The 202 means the trigger sent it to the **hello-display** consumer (because of the definition.) - -Next, send a second definition to the **goodbye-display** consumer with this new curl command: - -``` -curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/eventing-test/default" \ -  -X POST \ -  -H "Ce-Id: say-goodbye" \ -  -H "Ce-Specversion: 1.0" \ -  -H "Ce-Type: not-greeting" \ -  -H "Ce-Source: sendoff" \ -  -H "Content-Type: application/json" \ -  -d '{"msg":"Goodbye Knative!"}' -``` - -This time, it is a `sendoff` and not a greeting based on the previous setup section's trigger definition. It is directed to the **goodbye-display** consumer. - -Your output should look like this, with another 202 returned: - -``` -> POST /eventing-test/default HTTP/1.1 -> User-Agent: curl/7.35.0 -> Host: broker-ingress.knative-eventing.svc.cluster.local -> Accept: */* -> Ce-Id: say-goodbye -> Ce-Specversion: 1.0 -> Ce-Type: not-greeting -> Ce-Source: sendoff -> Content-Type: application/json -> Content-Length: 26 -> -< HTTP/1.1 202 Accepted -< Date: Sun, 24 Jan 2021 22:33:00 GMT -< Content-Length: 0 -``` - -Congratulations, you sent two events! - -Before moving on to the next section, exit the pod by typing **exit**. - -### Verify the events - -Now that the events have been sent, how do you know that the correct consumers received them? By going to each consumer and verifying it in the logs. - -Start with the **hello-display** consumer:: - -``` -$ kubectl -n eventing-test logs -l app=hello-display --tail=100 -``` - -There isn't much running in this example cluster, so you should see only one event: - -``` -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: greeting -  source: not-sendoff -  id: say-hello -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T22:25:25.760867793Z -Data, -  { -    "msg": "Hello Knative!" -  } -``` - -You've confirmed the **hello-display** consumer received the event! Now check the **goodbye-display** consumer and make sure the other message made it. 
- -Start by running the same command but with **goodbye-display**: - -``` -$ kubectl -n eventing-test logs -l app=goodbye-display --tail=100 -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: not-greeting -  source: sendoff -  id: say-goodbye -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T22:33:00.515716701Z -Data, -  { -    "msg": "Goodbye Knative!" -  } -``` - -It looks like both events made it to their proper locations. Congratulations—you have officially worked with Knative eventing! - -### Bonus round: Send an event to multiple consumers - -So you sent events to each consumer using curl, but what if you want to send an event to both consumers? This uses a similar curl command but with some interesting changes. In the previous triggers, each one was defined with a different attribute. The greeting trigger had attribute `type`, and sendoff trigger had attribute `source`. This means you can make a curl call and send it to both consumers. - -Here is a curl example of a definition for sending an event to both consumers: - -``` -curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/eventing-test/default" \ -  -X POST \ -  -H "Ce-Id: say-hello-goodbye" \ -  -H "Ce-Specversion: 1.0" \ -  -H "Ce-Type: greeting" \ -  -H "Ce-Source: sendoff" \ -  -H "Content-Type: application/json" \ -  -d '{"msg":"Hello Knative! Goodbye Knative!"}' -``` - -As you can see, the definition of this curl command changed to set the `source` for **goodbye-display** and the `type` for **hello-display**. - -Here is sample output of what the events look like after they are sent. - -**Output of the event being sent:** - -``` -> POST /eventing-test/default HTTP/1.1 -> User-Agent: curl/7.35.0 -> Host: broker-ingress.knative-eventing.svc.cluster.local -> Accept: */* -> Ce-Id: say-hello-goodbye -> Ce-Specversion: 1.0 -> Ce-Type: greeting -> Ce-Source: sendoff -> Content-Type: application/json -> Content-Length: 41 -> -< HTTP/1.1 202 Accepted -< Date: Sun, 24 Jan 2021 23:04:15 GMT -< Content-Length: 0 -``` - -**Output of hello-display (showing two events):** - -``` -$ kubectl -n eventing-test logs -l app=hello-display --tail=100 -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: greeting -  source: not-sendoff -  id: say-hello -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T22:25:25.760867793Z -Data, -  { -    "msg": "Hello Knative!" -  } -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: greeting -  source: sendoff -  id: say-hello-goodbye -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T23:04:15.036352685Z -Data, -  { -    "msg": "Hello Knative! Goodbye Knative!" -  } -``` - -**Output of goodbye-display (also with two events):** - -``` -$ kubectl -n eventing-test logs -l app=goodbye-display --tail=100 -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: not-greeting -  source: sendoff -  id: say-goodbye -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T22:33:00.515716701Z -Data, -  { -    "msg": "Goodbye Knative!" -  } -☁️  cloudevents.Event -Validation: valid -Context Attributes, -  specversion: 1.0 -  type: greeting -  source: sendoff -  id: say-hello-goodbye -  datacontenttype: application/json -Extensions, -  knativearrivaltime: 2021-01-24T23:04:15.036352685Z -Data, -  { -    "msg": "Hello Knative! 
Goodbye Knative!" -  } -``` - -As you can see, the event went to both consumers based on your curl definition. If an event needs to be sent to more than one place, you can write definitions to send it to more than one consumer. - -### Give it a try! - -Internal eventing in cloud events is pretty easy to track if it's going to a predefined location of your choice. Enjoy seeing how far you can go with eventing in your cluster! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/knative-eventing - -作者:[Jessica Cherry][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/computer_space_graphic_cosmic.png -[2]: https://opensource.com/article/20/11/knative -[3]: https://opensource.com/resources/what-is-kubernetes -[4]: https://en.wikipedia.org/wiki/Cloud_native_computing -[5]: https://cloudevents.io/ -[6]: https://github.com/Alynder/knative_eventing -[7]: https://minikube.sigs.k8s.io/docs/ -[8]: https://www.redhat.com/sysadmin/use-curl-api -[9]: https://github.com/cloudevents/spec -[10]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202 diff --git a/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md b/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md deleted file mode 100644 index ea79630b49..0000000000 --- a/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md +++ /dev/null @@ -1,261 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Review of Three Hyperledger Tools – Caliper, Cello and Avalon) -[#]: via: (https://www.linux.com/news/review-of-three-hyperledger-tools-caliper-cello-and-avalon/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/) - -Review of Three Hyperledger Tools – Caliper, Cello and Avalon -====== - -_By Matt Zand_ - -#### **Recap** - -In our previous article ([Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][1]), we discussed the following Hyperledger Distributed Ledger Technologies (DLTs). - - 1. Hyperledger Indy - 2. Hyperledger Fabric - 3. Hyperledger Iroha - 4. Hyperledger Sawtooth - 5. Hyperledger Besu - - - -To continue our journey, in this article we discuss three Hyperledger tools (Hyperledger Caliper, Cello and Avalon) that act as great accessories for any of Hyperledger DLTs. It is worth mentioning that, as of this writing, all of three tools discussed in this article are at the incubation stage. - -#### **Hyperledger Caliper** - -Caliper is a benchmarking tool for measuring blockchain performance and is written in [JavaScript][2]. It utilizes the following four performance indicators: success rate, Transactions Per Second (or transaction throughput), transaction latency, and resource utilization. Specifically, it is designed to perform benchmarks on a deployed smart contract, enabling the analysis of said four indicators on a blockchain network while smart contract is being used. 
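
For a sense of how this looks in practice, the sketch below shows roughly how a Caliper run is launched once a workspace is prepared. The exact flags and paths follow the Caliper CLI as I recall it and may vary between releases, so treat them as placeholders; the benchmark and network configuration files they point to are described just below:

```
# Install the CLI in the workspace and bind it to the system under test
npm install --only=prod @hyperledger/caliper-cli
npx caliper bind --caliper-bind-sut fabric:2.2

# Launch the manager process that drives the benchmark workers
npx caliper launch manager \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/config.yaml \
    --caliper-networkconfig networks/fabric-network.yaml
```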
- -Caliper is a unique general tool and has become a useful reference for enterprises to measure the performance of their distributed ledgers. The Caliper project will be one of the most important tools to use along with other Hyperledger projects (even in [Quorum][3] or [Ethereum][4] projects since it also supports those types of blockchains). It offers different connectors to various blockchains, which gives it greater power and usability. Likewise, based on its documentation, Caliper is ideal for: - - * Application developers interested in running performance tests for their smart contracts - * System architects interested in investigating resource constraints during test loads - - - -To better understand how Caliper works, one should start with its architecture. Specifically, to use it, a user should start with defining the following configuration files: - - * A **benchmark** file defining the arguments of a benchmark workload - * A **blockchain** file specifying the necessary information, which helps to interact with the system being tested - * **Smart contracts** defining what contracts are going to be deployed - - - -The above configuration files act as inputs for the Caliper CLI, which creates an admin client (acts as a superuser) and factory (being responsible for running test loads). Based on a chosen benchmark file, a client could be transacting with the system by adding or querying assets. - -While testing is in progress, all transactions are saved. The statistics of these transactions are logged and stored. Further, a resource monitor logs the consumption of resources. All of this data is eventually aggregated into a single report. For more detailed discussion on its implementation, visit the link provided in the References section. - -**Hyperledger Cello** - -As blockchain applications eventually deployed at the enterprise level, developers had to do a lot of manual work when deploying/managing a blockchain. This job does not get any easier if multiple tenants need to access separate chains simultaneously. For instance, interacting with Hyperledger Fabric requires manual installation of each peer node on different servers, as well as setting up scripts (e.g., Docker-Composer) to start a Fabric network. Thus, to address said challenges while automating the process for developers, Hyperledger Cello got incubated. Cello brings the on-demand deployment model to blockchains and is written in the [Go language][5]. Cello is an automated application for deploying and managing blockchains in the form of plug-and-play, particularly for enterprises looking to integrate distributed ledger technologies. - -Cello also provides a real-time dashboard for blockchain statuses, system utilization, chain code performance, and the configuration of blockchains. It currently supports Hyperledger Fabric. According to its documentation, Cello allows for: - - * Provisioning customized blockchains instantly - * Maintaining a pool of running blockchains healthy without any need for manual operation - * Checking the system’s status, scaling the chain numbers, changing resources, etc. 
through a dashboard - - - -Likewise, according to its documentation, the major Cello’s features are: - - * Management of multiple blockchains (e.g., create, delete, and maintain health automatically) - * Almost instant response, even with hundreds of chains or nodes - * Support for customized blockchains request (e.g., size, consensus) — currently, there is support for Hyperledger Fabric - * Support for a native Docker host or a Swarm host as the compute nodes - * Support for heterogeneous architecture (e.g., z Systems, Power Systems, and x86) from bare-metal servers to virtual machines - * Extensible with monitoring, logging, and health features through employing additional components - - - -According to its developers, Cello’s architecture follows the principles of the [microservices][6], fault resilience, and scalability. In particular, Cello has three functional layers: - - * **The access layer**, which also includes web UI dashboards operated by users - * **The orchestration layer**, which on receiving the request from the access layer, makes a call to the agents to operate the blockchain resources - * **The agent layer**, which embodies real workers that interact with underlying infrastructures like Docker, [Swarm][7], or Kubernetes - - - -According to its documentation, each layer should maintain stable APIs for upper layers to achieve pluggability without changing the upper-layer code. For more detailed discussion on its implementation, visit the link provided in the References section. - -**Hyperledger Avalon** - -To boost the performance of blockchain networks, developers decided to store non-essential data into off-the-chain databases. While this approach improved blockchain scalability, it led to some confidentiality issues. So, the community was in search of an approach that can achieve scalability and confidentiality goals at once; thus, it led to the incubation of Avalon. Hyperledger Avalon (formerly Trusted Compute Framework) enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested Oracles. - -The Trusted Compute Specification was designed to assist developers gain the benefits of computational trust and to overcome its drawbacks. In the case of the Avalon, a blockchain is used to enforce execution policies and ensure transaction auditability, while associated off-chain trusted computational resources execute transactions. By utilizing trusted off-chain computational resources, a developer can accelerate throughput and improve data privacy. By using Hyperledger Avalon in a distributed ledger, we can: - - * Maintain a registry of the trusted workers (including their attestation info) - * Provide a mechanism for submitting work orders from a client(s) to a worker - * Preserve a log of work order receipts and acknowledgments - - - -To put it simply, the off-chain parts related to the main-network are  executing the transactions with the help of trusted compute resources. What guarantees the enforcement of confidentiality along with the integrity of execution is the Trusted Compute option with the following features: - - * Trusted Execution Environment (TEE) - * MultiParty Commute (MPC) - * Zero-Knowledge Proofs (ZKP) - - - -By means of Trusted Execution Environments, a developer can enhance the integrity of the link in the off-chain and on-chain execution. 
Intel's SGX is a well-known example of a TEE. TEEs provide capabilities such as code verification, attestation verification, and execution isolation, which allow the creation of a trustworthy link between main-chain and off-chain compute resources. For a more detailed discussion of its implementation, visit the link provided in the References section.

**Note: Hyperledger Explorer Tool (deprecated)**

Hyperledger Explorer, in a nutshell, provides a dashboard for peering into block details and is written primarily in JavaScript. Hyperledger Explorer is familiar to most developers and system admins who have worked with Hyperledger in the past few years. In spite of its features and popularity, Hyperledger announced last year that it is no longer maintained, so this tool is deprecated.

**Next Article**

In our upcoming article, we move on to covering the following four Hyperledger libraries:

 1. Hyperledger Aries
 2. Hyperledger Quilt
 3. Hyperledger Ursa
 4. Hyperledger Transact

**Summary**

To recap, we covered three Hyperledger tools (Caliper, Cello, and Avalon) in this article. We started off by explaining that Hyperledger Caliper is designed to benchmark a deployed smart contract, enabling the analysis of four indicators (such as success rate or transaction throughput) on a blockchain network while the smart contract is in use. Next, we learned that Hyperledger Cello is an automated application for deploying and managing blockchains in a plug-and-play fashion, particularly for enterprises looking to integrate distributed ledger technologies. Finally, Hyperledger Avalon enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested oracles.

**References**

For more references on all Hyperledger projects, libraries, and tools, visit the documentation links below:

 1. [Hyperledger Indy Project][8]
 2. [Hyperledger Fabric Project][9]
 3. [Hyperledger Aries Library][10]
 4. [Hyperledger Iroha Project][11]
 5. [Hyperledger Sawtooth Project][12]
 6. [Hyperledger Besu Project][13]
 7. [Hyperledger Quilt Library][14]
 8. [Hyperledger Ursa Library][15]
 9. [Hyperledger Transact Library][16]
 10. [Hyperledger Cactus Project][17]
 11. [Hyperledger Caliper Tool][18]
 12. [Hyperledger Cello Tool][19]
 13. [Hyperledger Explorer Tool][20]
 14. [Hyperledger Grid (Domain Specific)][21]
 15. [Hyperledger Burrow Project][22]
 16. [Hyperledger Avalon Tool][23]

**Resources**

 * Free Training Courses from The Linux Foundation & Hyperledger
 * [Blockchain: Understanding Its Uses and Implications (LFS170)][24]
 * [Introduction to Hyperledger Blockchain Technologies (LFS171)][25]
 * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][26]
 * [Becoming a Hyperledger Aries Developer (LFS173)][27]
 * [Hyperledger Sawtooth for Application Developers (LFS174)][28]
 * eLearning Courses from The Linux Foundation & Hyperledger
 * [Hyperledger Fabric Administration (LFS272)][29]
 * [Hyperledger Fabric for Developers (LFD272)][30]
 * Certification Exams from The Linux Foundation & Hyperledger
 * [Certified Hyperledger Fabric Administrator (CHFA)][31]
 * [Certified Hyperledger Fabric Developer (CHFD)][32]
 * [Hands-On Smart Contract Development with Hyperledger Fabric V2][33], a book by Matt Zand and others.
- * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][34] - * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][35] - * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][36] - * [Intro to Blockchain Cybersecurity (Coding Bootcamps)][37] - * [Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)][38] - * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][39] - * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][40] - * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][41] - * [Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)][42] - * [How to build DApps with Hyperledger Fabric][43] - * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][44] - * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][45] - * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][46] - * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][47] - * [Blockchain Management in Hyperledger for System Admins][48] - * [Hyperledger Fabric for Developers (Coding Bootcamps)][49] - * [Free White Papers from Hyperledger][50] - * [Free Webinars from Hyperledger][51] - * [Hyperledger Wiki][52] - - - -**About the Author** - -**Matt Zand** is a serial entrepreneur and the founder of 3 tech startups: [DC Web Makers][53], [Coding Bootcamps][54] and [High School Technology Services][55]. He is a leading author of [Hands-on Smart Contract Development with Hyperledger Fabric][33] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars at many Hyperledger communities across USA and Europe.. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as senior web and mobile App developer and consultant, angel investor, business advisor for a few startup companies. You can connect with him on LI: - -The post [Review of Three Hyperledger Tools – Caliper, Cello and Avalon][56] appeared first on [Linux Foundation – Training][57]. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/review-of-three-hyperledger-tools-caliper-cello-and-avalon/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/ -[b]: https://github.com/lujun9972 -[1]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ -[2]: https://learn.coding-bootcamps.com/p/learn-javascript-web-development-by-examples -[3]: https://coding-bootcamps.com/blog/introduction-to-quorum-blockchain-development.html -[4]: https://myhsts.org/blog/ethereum-dapp-with-evm-remix-golang-truffle-and-solidity-part1.html -[5]: https://learn.coding-bootcamps.com/p/learn-go-programming-language-by-examples -[6]: https://blockchain.dcwebmakers.com/blog/comprehensive-guide-for-migration-from-monolithic-to-microservices-architecture.html -[7]: https://coding-bootcamps.com/blog/how-to-work-with-ethereum-swarm-storage.html -[8]: https://www.hyperledger.org/use/hyperledger-indy -[9]: https://www.hyperledger.org/use/fabric -[10]: https://www.hyperledger.org/projects/aries -[11]: https://www.hyperledger.org/projects/iroha -[12]: https://www.hyperledger.org/projects/sawtooth -[13]: https://www.hyperledger.org/projects/besu -[14]: https://www.hyperledger.org/projects/quilt -[15]: https://www.hyperledger.org/projects/ursa -[16]: https://www.hyperledger.org/projects/transact -[17]: https://www.hyperledger.org/projects/cactus -[18]: https://www.hyperledger.org/projects/caliper -[19]: https://www.hyperledger.org/projects/cello -[20]: https://www.hyperledger.org/projects/explorer -[21]: https://www.hyperledger.org/projects/grid -[22]: https://www.hyperledger.org/projects/hyperledger-burrow -[23]: https://www.hyperledger.org/projects/avalon -[24]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/ -[25]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/ -[26]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/ -[27]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/ -[28]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/ -[29]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/ -[30]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/ -[31]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/ -[32]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/ -[33]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/ -[34]: https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php -[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php -[36]: https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php -[37]: 
https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples -[38]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins -[39]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php -[40]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php -[41]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php -[42]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric -[43]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric -[44]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php -[45]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php -[46]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php -[47]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php -[48]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples -[49]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers -[50]: https://www.hyperledger.org/learn/white-papers -[51]: https://www.hyperledger.org/learn/webinars -[52]: https://wiki.hyperledger.org/ -[53]: https://blockchain.dcwebmakers.com/ -[54]: http://coding-bootcamps.com/ -[55]: https://myhsts.org/ -[56]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/ -[57]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md b/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md deleted file mode 100644 index 658abe1b6f..0000000000 --- a/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md +++ /dev/null @@ -1,82 +0,0 @@ -[#]: subject: (AIOps vs. MLOps: What's the difference?) -[#]: via: (https://opensource.com/article/21/2/aiops-vs-mlops) -[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -AIOps vs. MLOps: What's the difference? -====== -Break down the differences between these disciplines to learn how you -should use them in your open source project. -![Brick wall between two people, a developer and an operations manager][1] - -In late 2019, O'Reilly hosted a survey on artificial intelligence [(AI) adoption in the enterprise][2]. The survey broke respondents into two stages of adoption: Mature and Evaluation. - -When asked what's holding back their AI adoption, those in the latter category most often cited company culture. Trouble identifying good use cases for AI wasn't far behind. - -![Bottlenecks to AI adoption][3] - -[AI adoption in the enterprise 2020][2] (O'Reilly, ©2020) - -MLOps, or machine learning operations, is increasingly positioned as a solution to these problems. But that leaves a question: What _is_ MLOps? - -It's fair to ask for two key reasons. This discipline is new, and it's often confused with a sister discipline that's equally important yet distinctly different: Artificial intelligence operations, or AIOps. - -Let's break down the key differences between these two disciplines. 
This exercise will help you decide how to use them in your business or open source project. - -### What is AIOps? - -[AIOps][4] is a series of multi-layered platforms that automate IT to make it more efficient. Gartner [coined the term][5] in 2017, which emphasizes how new this discipline is. (Disclosure: I worked for Gartner for four years.) - -At its best, AIOps allows teams to improve their IT infrastructure by using big data, advanced analytics, and machine learning techniques. That first item is crucial given the mammoth amount of data produced today. - -When it comes to data, more isn't always better. In fact, many business leaders say they receive so much data that it's [increasingly hard][6] for them to collect, clean, and analyze it to find insights that can help their businesses. - -This is where AIOps comes in. By helping DevOps and data operations (DataOps) teams choose what to automate, from development to production, this discipline [helps open source teams][7] predict performance problems, do root cause analysis, find anomalies, [and more][8]. - -### What is MLOps? - -MLOps is a multidisciplinary approach to managing machine learning algorithms as ongoing products, each with its own continuous lifecycle. It's a discipline that aims to build, scale, and deploy algorithms to production consistently.  - -Think of MLOps as DevOps applied to machine learning pipelines. [It's a collaboration][9] between data scientists, data engineers, and operations teams. Done well, it gives members of all teams more shared clarity on machine learning projects. - -MLOps has obvious benefits for data science and data engineering teams. Since members of both teams sometimes work in silos, using shared infrastructure boosts transparency. - -But MLOps can benefit other colleagues, too. This discipline offers the ops side more autonomy over regulation. - -As an increasing number of businesses start using machine learning, they'll come under more scrutiny from the government, media, and public. This is especially true of machine learning in highly regulated industries like healthcare, finance, and autonomous vehicles. - -Still skeptical? Consider that just [13% of data science projects make it to production][10]. The reasons are outside this article's scope. But, like AIOps helps teams automate their tech lifecycles, MLOps helps teams choose which tools, techniques, and documentation will help their models reach production. - -When applied to the right problems, AIOps and MLOps can both help teams hit their production goals. The trick is to start by answering this question: - -### What do you want to automate? Processes or machines? - -When in doubt, remember: AIOps automates machines while MLOps standardizes processes. If you're on a DevOps or DataOps team, you can—and should—consider using both disciplines. Just don't confuse them for the same thing. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/aiops-vs-mlops - -作者:[Lauren Maffeo][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/lmaffeo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager) -[2]: https://www.oreilly.com/radar/ai-adoption-in-the-enterprise-2020/ -[3]: https://opensource.com/sites/default/files/uploads/oreilly_bottlenecks-with-maturity.png (Bottlenecks to AI adoption) -[4]: https://www.bmc.com/blogs/what-is-aiops/ -[5]: https://www.appdynamics.com/topics/what-is-ai-ops -[6]: https://www.millimetric.ai/2020/08/10/data-driven-to-madness-what-to-do-when-theres-too-much-data/ -[7]: https://opensource.com/article/20/8/aiops-devops-itsm -[8]: https://thenewstack.io/how-aiops-conquers-performance-gaps-on-big-data-pipelines/ -[9]: https://medium.com/@ODSC/what-are-mlops-and-why-does-it-matter-8cff060d4067 -[10]: https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/ diff --git a/sources/tech/20210226 Navigate your FreeDOS system.md b/sources/tech/20210226 Navigate your FreeDOS system.md deleted file mode 100644 index c80a601bbb..0000000000 --- a/sources/tech/20210226 Navigate your FreeDOS system.md +++ /dev/null @@ -1,192 +0,0 @@ -[#]: subject: "Navigate your FreeDOS system" -[#]: via: "https://opensource.com/article/21/2/freedos-dir" -[#]: author: "Kevin O'Brien https://opensource.com/users/ahuka" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Navigate your FreeDOS system -====== -Master the DIR command to navigate your way around FreeDOS. - -![A map with a route highlighted][1] - -[FreeDOS][2] is an open source implementation of DOS. It's not a remix of Linux, and it is compatible with the operating system that introduced many people to personal computing. This makes it an important resource for running legacy applications, playing retro games, updating firmware on motherboards, and experiencing a little bit of living computer history. In this article, I'll look at some of the essential commands used to navigate a FreeDOS system. - -### Change your current directory with CD - -When you first boot FreeDOS, you're "in" the root directory, which is called `C:\`. This represents the foundation of your filesystem, specifically the system hard drive. It's labeled with a `C` because, back in the old days of MS-DOS and PC-DOS, there were always `A` and `B` floppy drives, making the physical hard drive the third drive by default. The convention has been retained to this day in FreeDOS and the operating system that grew out of MS-DOS, Windows. - -There are many reasons not to work exclusively in your root directory. First of all, there are limitations to the FAT filesystem that would make that impractical at scale. Secondly, it would make for a very poorly organized filesystem. So it's common to make new directories (or "folders," as we often refer to them) to help keep your work tidy. To access these files easily, it's convenient to change your working directory. 
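If you have never created a directory in DOS before, the `MKDIR` (or `MD`) command does the job. Here's a quick sketch; the directory names are only an example, chosen to match the layout used in the next section:

```
C:\>MKDIR LETTERS
C:\>MKDIR LETTERS\LOVE
C:\>MKDIR LETTERS\BUSINESS
```

Once the directories exist, you need a way to move between them.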
- -The FreeDOS `CD` command changes your current working subdirectory to another subdirectory. Imagine a computer with the following directory structure: - -``` -C:\   -\LETTERS\   -  \LOVE\ -  \BUSINESS\ - -\DND\ -\MEMOS\   -\SCHOOL\ -``` - -You start in the `C:\` directory, so to navigate to your love letter directory, you can use `CD` : - -``` -C:\>CD \LETTERS\LOVE\ -``` - -To navigate to your `\LETTERS\BUSINESS` directory, you must specify the path to your business letters from a common fixed point on your filesystem. The most reliable starting location is `C:\`, because it's where *everything* on your computer is stored. - -``` -C:\LETTERS\LOVE\>CD C:\LETTERS\BUSINESS -``` - -#### Navigating with dots - -There's a useful shortcut for navigating your FreeDOS system, which takes the form of dots. Two dots (`..` ) tell FreeDOS you want to move "back" or "down" in your directory tree. For instance, the `LETTERS` directory in this example system contains one subdirectory called `LOVE` and another called `BUSINESS`. If you're in `LOVE` currently, and you want to step back and change over to `BUSINESS`, you can just use two dots to represent that move: - -``` -C:\LETTERS\LOVE\>CD ..\BUSINESS -C:\LETTERS\BUSINESS\> -``` - -To get all the way back to your root directory, just use the right number of dots: - -``` -C:\LETTERS\BUSINESS\: CD ..\.. -C:\> -``` - -#### Navigational shortcuts - -There are some shortcuts for navigating directories, too. - -To get back to the root directory from wherever you are: - -``` -C:\LETTERS\BUSINESS\>CD \ -C:\> -``` - -### List directory contents with DIR - -The `DIR` command displays the contents of a subdirectory, but it can also function as a search command. This is one of the most used commands in FreeDOS, and learning to use it properly is a great time saver. - -`DIR` displays the contents of the current working subdirectory, and with an optional path argument, it displays the contents of some other subdirectory: - -``` -C:\LETTERS\BUSINESS\>DIR -MTG_CARD    TXT  1344 12-29-2020  3:06p -NON         TXT   381 12-31-2020  8:12p -SOMUCHFO    TXT   889 12-31-2020  9:36p -TEST        BAT    32 01-03-2021 10:34a -``` - -#### Attributes - -With a special attribute argument, you can use `DIR` to find and filter out certain kinds of files. There are 10 attributes you can specify: - -| - | - | -| :- | :- | -| H | Hidden | -| -H | Not hidden | -| S | System | -| -S | Not system | -| A | Archivable files | -| -A | Already archived files | -| R | Read-only files | -| -R | Not read-only (i.e., editable and deletable) files | -| D | Directories only, no files | -| -D | Files only, no directories | - -These special designators are denoted with `/A:` followed by the attribute letter. You can enter as many attributes as you like, in order, without leaving a space between them. For instance, to view only hidden directories: - -``` -C:\MEMOS\>DIR /A:HD -.OBSCURE      01-08-2021 10:10p -``` - -#### Listing in order - -You can also display the results of your `DIR` command in a specific order. The syntax for this is very similar to using attributes. You leave a space after the `DIR` command or after any other switches, and enter `/O:` followed by a selection. 
There are 12 possible selections: - -| - | - | -| :- | :- | -| N | Alphabetical order by file name | -| -N | Reverse alphabetical order by file name | -| E | Alphabetical order by file extension | -| -E | Reverse alphabetical order by file extension | -| D | Order by date and time, earliest first | -| -D | Order by date and time, latest first | -| S | By size, increasing | -| -S | By size, decreasing | -| C | By DoubleSpace compression ratio, lowest to highest (version 6.0 only) | -| -C | By DoubleSpace compression ratio, highest to lowest (version 6.0 only) | -| G | Group directories before other files | -| -G | Group directories after other files | - -To see your directory listing grouped by file extension: - -``` -C:\>DIR /O:E -TEST        BAT 01-10-2021 7:11a -TIMER       EXE 01-11-2021 6:06a -AAA         TXT 01-09-2021 4:27p -``` - -This returns a list of files in alphabetical order of file extension. - -If you're looking for a file you were working on yesterday, you can order by modification time: - -``` -C:\>DIR /O:-D -AAA         TXT 01-09-2021 4:27p -TEST        BAT 01-10-2021 7:11a -TIMER       EXE 01-11-2021 6:06a -``` - -If you need to clean up your hard drive because you're running out of space, you can order your list by file size, and so on. - -#### Multiple arguments - -You can use multiple arguments in a `DIR` command to achieve fairly complex results. Remember that each argument has to be separated from its neighbors by a blank space on each side: - -``` -C:\>DIR /A:A /O:D /P -``` - -This command selects only those files that have not yet been backed up (`/A:A` ), orders them by date, beginning with the oldest (`/O:D` ), and displays the results on your monitor one page at a time (`/P` ). So you can really do some slick stuff with the `DIR` command once you've mastered these arguments and switches. - -### Terminology - -In case you were wondering, anything that modifies a command is an argument. - -If it has a slash in front, it is a switch. So all switches are also arguments, but some arguments (for example, a file path) are not switches. - -### Better navigation in FreeDOS - -FreeDOS can be very different from what you're used to if you're used to Windows or macOS, and it can be just different enough if you're used to Linux. A little practice goes a long way, though, so try some of these on your own. You can always get a help message with the `/?` switch. The best way to get comfortable with these commands is to practice using them. 
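As a hypothetical practice run (the paths here are just examples), you might change into a directory, list only its files with the newest first, one page at a time, and then jump straight back to the root:

```
C:\>CD \LETTERS\BUSINESS
C:\LETTERS\BUSINESS\>DIR /A:-D /O:-D /P
C:\LETTERS\BUSINESS\>CD \
C:\>
```

Every switch in that session is one covered earlier in this article, so the output should hold no surprises.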
- -*Some of the information in this article was previously published in [DOS lesson 12: Expert DIR use][3] (CC BY-SA 4.0).* - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/freedos-dir - -作者:[Kevin O'Brien][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ahuka -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/map_route_location_gps_path.png -[2]: https://www.freedos.org/ -[3]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-12-expert-dir-use/ diff --git a/sources/tech/20210227 Build your own technology on Linux.md b/sources/tech/20210227 Build your own technology on Linux.md deleted file mode 100644 index 45859d081d..0000000000 --- a/sources/tech/20210227 Build your own technology on Linux.md +++ /dev/null @@ -1,58 +0,0 @@ -[#]: subject: (Build your own technology on Linux) -[#]: via: (https://opensource.com/article/21/2/linux-technology) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Build your own technology on Linux -====== -Linux puts you in charge of your own technology so you can use it any -way you want. -![Someone wearing a hardhat and carrying code ][1] - -In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Linux empowers its users to build their own tools. - -There's a persistent myth that tech companies must "protect" their customers from the many features of their technology. Sometimes, companies put restrictions on their users for fear of unexpected breakage, and other times they expect users to pay extra to unlock features. I love an operating system that protects me from stupid mistakes, but I want to know without a doubt that there's a manual override switch somewhere. I want to be able to control my own experience on my own computer. Whether I'm using Linux for work or my hobbies, that's precisely what it does for me. It puts me in charge of the technology I've chosen to use. - -### Customizing your tools - -It's hard for a business, even when its public face is the image of a "quirky revolutionary," to deal with the fact that reality is actually quite diverse. Literally everybody on the planet uses a computer differently than the next person. We all have our habits; we have artifacts of good and bad computer training; we have our interest levels, our distractions, and our individual goals. - -No matter what a company anticipates, there's no way to construct the ideal environment to please each and every potential user. And it's a tall order to expect a business to even attempt that. You might even think it's an unreasonable demand to have a custom tool for every individual. - -But we're living in the future, it's a high-tech world, and the technology that empowers users to design and use their own tools has been around for decades. You can witness early Unix users stringing together commands on old [_Computer Chronicles_][2] episodes way back in 1985. Today, you see it in spreadsheet applications, possibly the most well-used and malleable (and unintentional) prototype engines available. 
From business forms to surveys to games to video encoder frontends, I've seen everyday users who claim to have "no programming skills" design spreadsheets that rival applications developed and sold by software companies. This is the kind of creativity that technology should foster and encourage, and I think it's the direction computing is heading as _open source_ principles become an expectation.

Today, Linux delivers the same power: the power to construct your own utilities and offer them to other users in a portable and adaptable format. Whether you work in Bash, Python, or LibreOffice Calc, Linux invites you to build tools that make your life easier.

### Services

I believe one of the missing components of the modern computing experience is connectedness. That seems like a crazy thing to assert in the 21st century, when we have social networks that claim to bring people together like never before. But social networks have always felt more like a chaperoned prom than a casual hangout. You go to a place where you're expected to socialize, and you do what's expected of you, but deep down, you'd rather just invite your friends over to watch some movies and play some games.

The deficiency of modern computing platforms is that this casual level of sharing our digital life isn't easy. In fact, it's really difficult on most computers. While we're still a long way from a great selection of sharable applications, Linux is nevertheless built for sharing. It doesn't try to block your path when you open a port to invite your friends to connect to a shared application like [Drawpile][3] or [Maptool][4]. On the contrary, it has [tools specifically to make sharing _easy_][5].

### Stand back; I'm doing science!

Linux is a platform for makers, creators, and developers. One of its core tenets is to let users explore and to ensure that the user remains in control of their system. Linux offers you an open source environment and an [open studio][6]. All you have to do is take advantage of it.
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/linux-technology - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code ) -[2]: https://archive.org/details/UNIX1985 -[3]: https://opensource.com/article/20/3/drawpile -[4]: https://opensource.com/article/18/5/maptool -[5]: https://opensource.com/article/19/7/make-linux-stronger-firewalls -[6]: https://www.redhat.com/en/about/open-studio diff --git a/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md b/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md deleted file mode 100644 index f19517e351..0000000000 --- a/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md +++ /dev/null @@ -1,222 +0,0 @@ -[#]: subject: (Getting started with COBOL development on Fedora Linux 33) -[#]: via: (https://fedoramagazine.org/getting-started-with-cobol-development-on-fedora-linux-33/) -[#]: author: (donnie https://fedoramagazine.org/author/donnie/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Getting started with COBOL development on Fedora Linux 33 -====== - -![cobol_article_title_photo][1] - -Though its popularity has waned, COBOL is still powering business critical operations within many major organizations. As the need to update, upgrade and troubleshoot these applications grows, so may the demand for anyone with COBOL development knowledge. - -Fedora 33 represents an excellent platform for COBOL development. -This article will detail how to install and configure tools, as well as compile and run a COBOL program. - -### Installing and configuring tools - -GnuCOBOL is a free and open modern compiler maintained by volunteer developers. To install, open a terminal and execute the following command: - -``` -# sudo dnf -y install gnucobol -``` - -Once completed, execute this command to verify that GnuCOBOL is ready for work: - -``` -# cobc -v -``` - -You should see version information and build dates. Don’t worry if you see the error “no input files”. We will create a COBOL program file with the Vim text editor in the following steps. - -Fedora ships with a minimal version of Vim, but it would be nice to have some of the extra features that the full version can offer (such as COBOL syntax highlighting). Run the command below to install Vim-enhanced, which will overwrite Vim-minimal: - -``` -# sudo dnf -y install vim-enhanced -``` - -### Writing, Compiling, and Executing COBOL programs - -At this point, you are ready to write a COBOL program. For this example, I am set up with username _fedorauser_ and I will create a folder under my home directory to store my COBOL programs. I called mine _cobolcode_. - -``` -# mkdir /home/fedorauser/cobolcode -# cd /home/fedorauser/cobolcode -``` - -Now we can create and open a new file to enter our COBOL source program. I’ll call it _helloworld.cbl_. - -``` -# vim helloworld.cbl -``` - -You should now have the blank file open in Vim, ready to edit. 
This will be a simple program that does nothing except print out a message to our terminal. - -Enable “insert” mode in vim by pressing the “i” key, and key in the text below. Vim will assist with placement of your code sections. This can be very helpful since every character space in a COBOL file has a purpose (it’s a digital representation of the physical cards that developers would complete and feed into the computer). - -``` - IDENTIFICATION DIVISION. - PROGRAM-ID. HELLO-WORLD. -*simple helloworld program. - PROCEDURE DIVISION. - DISPLAY '##################################'. - DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'. - DISPLAY '#!!!!!!!!!!FEDORA RULES!!!!!!!!!!#'. - DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'. - DISPLAY '##################################'. - STOP RUN. -``` - -You can now press the “ESC” key to exit insert mode, and key in “:x” to save and close the file. - -Compile the program by keying in the following: - -``` -# cobc -x helloworld.cbl -``` - -It should complete quickly with return status: 0. Key in “ls” to view the contents of your current directory. You should see your original _helloworld.cbl_ file, as well as a new file simply named _helloworld_. - -Execute the COBOL program. - -``` -# ./helloworld -``` - -If you see your text output without errors, then you have sucessfully compiled and executed the program! - -![][2] - -Now that we have the basics of writing, compiling, and running a COBOL program, lets try one that does something a little more interesting. - -The following program will generate the Fibonacci sequence given your input. Use Vim to create a file called _fib.cbl_ and input the text below: - -``` -****************************************************************** - * Author: Bryan Flood - * Date: 25/10/2018 - * Purpose: Compute Fibonacci Numbers - * Tectonics: cobc - ****************************************************************** - IDENTIFICATION DIVISION. - PROGRAM-ID. FIB. - DATA DIVISION. - FILE SECTION. - WORKING-STORAGE SECTION. - 01 N0 BINARY-C-LONG VALUE 0. - 01 N1 BINARY-C-LONG VALUE 1. - 01 SWAP BINARY-C-LONG VALUE 1. - 01 RESULT PIC Z(20)9. - 01 I BINARY-C-LONG VALUE 0. - 01 I-MAX BINARY-C-LONG VALUE 0. - 01 LARGEST-N BINARY-C-LONG VALUE 92. - PROCEDURE DIVISION. - *> THIS IS WHERE THE LABELS GET CALLED - PERFORM MAIN - PERFORM ENDFIB - GOBACK. - *> THIS ACCEPTS INPUT AND DETERMINES THE OUTPUT USING A EVAL STMT - MAIN. - DISPLAY "ENTER N TO GENERATE THE FIBONACCI SEQUENCE" - ACCEPT I-MAX. - EVALUATE TRUE - WHEN I-MAX > LARGEST-N - PERFORM INVALIDN - WHEN I-MAX > 2 - PERFORM CASEGREATERTHAN2 - WHEN I-MAX = 2 - PERFORM CASE2 - WHEN I-MAX = 1 - PERFORM CASE1 - WHEN I-MAX = 0 - PERFORM CASE0 - WHEN OTHER - PERFORM INVALIDN - END-EVALUATE. - STOP RUN. - *> THE CASE FOR WHEN N = 0 - CASE0. - MOVE N0 TO RESULT. - DISPLAY RESULT. - *> THE CASE FOR WHEN N = 1 - CASE1. - PERFORM CASE0 - MOVE N1 TO RESULT. - DISPLAY RESULT. - *> THE CASE FOR WHEN N = 2 - CASE2. - PERFORM CASE1 - MOVE N1 TO RESULT. - DISPLAY RESULT. - *> THE CASE FOR WHEN N > 2 - CASEGREATERTHAN2. - PERFORM CASE1 - PERFORM VARYING I FROM 1 BY 1 UNTIL I = I-MAX - ADD N0 TO N1 GIVING SWAP - MOVE N1 TO N0 - MOVE SWAP TO N1 - MOVE SWAP TO RESULT - DISPLAY RESULT - END-PERFORM. - *> PROVIDE ERROR FOR INVALID INPUT - INVALIDN. - DISPLAY 'INVALID N VALUE. THE PROGRAM WILL NOW END'. - *> END THE PROGRAM WITH A MESSAGE - ENDFIB. - DISPLAY "THE PROGRAM HAS COMPLETED AND WILL NOW END". - END PROGRAM FIB. 
-``` - -As before, hit the “ESC” key to exit insert mode, and key in “:x” to save and close the file. - -Compile the program: - -``` -# cobc -x fib.cbl -``` - -Now execute the program: - -``` -# ./fib -``` - -The program will ask for you to input a number, and will then generate Fibonocci output based upon that number. - -![][3] - -### Further Study - -There are numerous resources available on the internet to consult, however vast amounts of knowledge reside only in legacy print. Keep an eye out for vintage COBOL guides when visiting used book stores and public libraries; you may find copies of endangered manuals at a rock-bottom prices! - -It is also worth noting that helpful documentation was installed on your system when you installed GnuCOBOL. You can access them with these terminal commands: - -``` -# info gnucobol -# man cobc -# cobc -h -``` - -![][4] - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/getting-started-with-cobol-development-on-fedora-linux-33/ - -作者:[donnie][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/donnie/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-09-17-20-21-816x384.png -[2]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-01-02-21-48-22-1024x576.png -[3]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-21-22-11-51-1024x598.png -[4]: https://fedoramagazine.org/wp-content/uploads/2021/02/image_50369281-1-1024x768.jpg diff --git a/sources/tech/20210228 Edit video on Linux with this Python app.md b/sources/tech/20210228 Edit video on Linux with this Python app.md deleted file mode 100644 index 482fa3e29d..0000000000 --- a/sources/tech/20210228 Edit video on Linux with this Python app.md +++ /dev/null @@ -1,113 +0,0 @@ -[#]: subject: (Edit video on Linux with this Python app) -[#]: via: (https://opensource.com/article/21/2/linux-python-video) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Edit video on Linux with this Python app -====== -Three years ago I chose Openshot as my Linux video editing software of -choice. See why it's still my favorite. -![video editing dashboard][1] - -In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Here's how I use Linux to edit videos. - -Back in 2018, I wrote an article about the [state of Linux video editing][2], in which I chose an application called [Openshot][3] as my pick for the top hobbyist video editing software. Years later, and my choices haven't changed. Openshot remains a great little video editing application for Linux, and it's managed to make creating videos on Linux _boring_ in the best of ways. - -Well, video editing may never become boring in the sense that no platform will ever get it perfect because part of the art of moviemaking is the constant improvement of image quality and visual trickery. Software and cameras will forever be pushing each other forward and forever catching up to one another. 
I've been editing video on Linux since 2008 at the very least, but back then, editing video was still generally mystifying to most people. Computer users have become familiar with what used to be advanced concepts since then, so video editing is taken for granted. And video editing on Linux, at the very least, is at the stage of getting an obvious shrug. Yes, of course, you can edit your videos on Linux. - -### Installing Openshot - -On Linux, you can install Openshot from your distribution's software repository. - -On Fedora and similar: - - -``` -`$ sudo dnf install openshot` -``` - -On Debian, Linux Mint, Elementary, and similar: - - -``` -`$ sudo apt install openshot` -``` - -### Importing video - -Without the politics of "not invented here" syndrome and corporate identity, Linux has the best codec support in the tech industry. With the right libraries, you can play nearly any video format on Linux. It's a liberating feeling for even a casual content creator, especially to anyone who's spent an entire day downloading plugins and converter applications in a desperate attempt to get a video format into their proprietary video editing software. Barring [un]expected leaps and bounds in camera technology, you generally don't have to do that on Linux. Openshot uses [ffmpeg][4] to import videos, so you can edit whatever format you need to edit. - -![Importing into Openshot][5] - -Import into Openshot - -**Note**: When importing video, I prefer to standardize on the formats I use. It's fine to mix formats a little, but for consistency in behavior and to eliminate variables when troubleshooting, I convert any outliers in my source material to whatever the majority of my project uses. I prefer my source to be only lightly compressed when that's an option, so I can edit at a high quality and save compression for the final render. - -### Auditioning footage - -Once you've imported your video clips, you can preview each clip right in Openshot. To play a clip, right-click the clip and select **Preview file**. This option opens a playback window so you can watch your footage. This is a common task for a production with several takes of the same material. - -When rummaging through a lot of footage, you can tag clips in Openshot to help you keep track of which ones are good and which ones you don't think you'll use, or what clip belongs to which scene, or any other meta-information you need to track. To tag a clip, right-click on it and select **File properties**. Add your tags to the **Tag** field. - -![Tagging files in Openshot][6] - -Tagging files in Openshot - -### Add video to the timeline - -Whether you have a script you're following, or you're just sorting through footage and finding a story, you eventually get a sense of how you think your video ought to happen. There are always myriad possibilities at this stage, and that's a good thing. It's why video editing is one of the single most influential stages of moviemaking. Will you start with a cold open _in media res_? Or maybe you want to start at the end and unravel your narrative to lead back up to that? Or are you a traditional story-teller, proudly beginning at the beginning? Whatever you decide now, you can always change later, so the most important thing is just to get started. - -Getting started means putting video footage in your timeline. Whatever's in the timeline at the end of your edit is what makes your movie, so start adding clips from your project files to the timeline at the bottom of the Openshot window. 
![Openshot interface to add clips to the timeline][7]

Adding clips to the timeline

The _rough assembly_, as the initial edit is commonly called, is a sublimely simple and quick process in Openshot. You can throw clips into the timeline hastily, either straight from the **Project files** panel (right-click and select **Add to timeline**, or just press **Ctrl+W**) or by dragging and dropping.

Once you have a bunch of clips in the timeline, in more or less the correct order, you can take another pass to refine how much of each clip plays at each cut. You can trim video clips in the timeline with the scissors (the _Razor tool_ in Openshot's terminology, though the icon is a scissor), or you can reorder clips, intercut from shot to shot, and so on. For quick cross dissolves, just overlay the beginning of one clip over the end of another. Openshot takes care of the transition.

Should you find that some clips have stray background sound you don't need, you can separate the audio from the video. To extract audio from a clip in the timeline, right-click on it and select **Separate audio**. The clip's audio appears as a new clip on the track below its parent.

### Exporting video from Openshot

Fast-forward several hours, days, or months, and you're done with your video edit. You're ready to release it to the world, to your family or friends, or to whomever your audience may be. It's time to export.

To export a video, click the **File** menu and select **Export video**. This selection brings up an **Export Video** window with **Simple** and **Advanced** tabs.

The **Simple** tab provides a few formats to choose from: Bluray, DVD, Device, and Web. These are common targets for videos, and general presets are assigned by default to each.

The **Advanced** tab offers profiles based on output video size and quality, with overrides available for both video and audio. You can manually enter the video format, codec, and bitrate you want to use for the export. I prefer to export to an uncompressed format and then use ffmpeg manually, so that I can do multipass renders and also target several different formats as a batch process. This is entirely optional, but this attention to the needs of many different use cases is part of what makes Openshot great.

### Editing video on Linux with Openshot

This short article hardly does Openshot justice. It has many more features and conveniences, but you'll discover those as you use it.

If you're a content creator with a deadline, you'll appreciate the speed of Openshot's workflow. If you're a moviemaker with no budget, you'll appreciate Openshot's low, low price of $0. If you're a proud parent struggling to extract just the parts of the school play featuring your very own rising star, you'll appreciate how easy it is to use Openshot. Cutting to the chase: Editing videos on the Linux desktop is easy, fun, and fast.
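One last note on the export workflow mentioned above: for illustration, a single-pass H.264 encode of an uncompressed export might look something like the command below. The file names and settings are only an example, not Openshot's own presets:

```
$ ffmpeg -i export-uncompressed.mov \
    -c:v libx264 -preset slow -crf 18 \
    -c:a aac -b:a 192k \
    final-for-web.mp4
```

Lower the CRF value for higher quality (and larger files), or run the same input through several variations of this command to produce different delivery formats.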
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/2/linux-python-video - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) -[2]: https://opensource.com/article/18/4/new-state-video-editing-linux -[3]: http://openshot.org -[4]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats -[5]: https://opensource.com/sites/default/files/openshot-import-2021.png -[6]: https://opensource.com/sites/default/files/openshot-tag-2021.png -[7]: https://opensource.com/sites/default/files/openshot-timeline-2021.png diff --git a/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md b/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md deleted file mode 100644 index 1b9bd5eea2..0000000000 --- a/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md +++ /dev/null @@ -1,183 +0,0 @@ -[#]: subject: "5 tips for choosing an Ansible collection that's right for you" -[#]: via: "https://opensource.com/article/21/3/ansible-collections" -[#]: author: "Tadej Borovšak https://opensource.com/users/tadeboro" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -5 tips for choosing an Ansible collection that's right for you -====== -Try these strategies to find and vet collections of Ansible plugins and modules before you install them. - -![Women in computing and open source][1] - -Image by: Ray Smith - -In August 2020, Ansible issued its first release since the developers split the core functionality from the vast majority of its modules and plugins. A few [basic Ansible modules][2] remain part of core Ansible—modules for templating configuration files, managing services, and installing packages. All the other modules and plugins found their homes in dedicated [Ansible collections][3]. - -This article offers a quick look at Ansible collections in general and—especially—how to recognize high-quality ones. - -### What are Ansible collections? - -At its core, an Ansible collection is a collection (pun intended) of related modules and plugins that you can manage independently from Ansible's core engine. For example, the [Sensu Go Ansible collection][4] contains Ansible content for managing all aspects of Sensu Go. It includes Ansible roles for installing Sensu Go components and modules for creating, updating, and deleting monitoring resources. Another example is the [Sops Ansible collection][5] that integrates [Mozilla's Secret Operations editor][6] with Ansible. - -With the introduction of Ansible collections, [Ansible Galaxy][7] became the central hub for all Ansible content. Authors publish their Ansible collections there, and Ansible users use Ansible Galaxy's search function to find Ansible content they need. - -Ansible comes bundled with the `ansible-galaxy` tool for installing collections. Once you know what Ansible collection you want to install, things are relatively straightforward: Run the installation command listed on the Ansible Galaxy page. 
Ansible takes care of downloading and installing it. For example: - -``` -$ ansible-galaxy collection install sensu.sensu_go -Process install dependency map -Starting collection install process -Installing 'sensu.sensu_go:1.7.1' to -  '/home/user/.ansible/collections/ansible_collections/sensu/sensu_go' -``` - -But finding the Ansible collection you need and vetting its contents are the harder parts. - -### How to select an Ansible collection - -In the old times of monolithic Ansible, using third-party Ansible modules and plugins was not for the faint of heart. As a result, most users used whatever came bundled with their version of Ansible. - -The ability to install Ansible collections offered a lot more control over the content you use in your Ansible playbooks. You can install the core Ansible engine and then equip it with the modules, plugins, and roles you need. But, as always, with great power comes great responsibility. - -Now users are solely responsible for the quality of content they use to build Ansible playbooks. But how can you separate high-quality content from the rest? Here are five things to check when evaluating an Ansible collection. - -#### 1. Documentation - -Once you find a potential candidate on Ansible Galaxy, check its documentation first. In an ideal world, each Ansible collection would have a dedicated documentation site. For example, the [Sensu Go][8] and [F5 Networks][9] Ansible collections have them. Most other Ansible collections come only with a README file, but this will change for the better once the documentation tools mature. - -The Ansible collection's documentation should contain at least a quickstart tutorial with installation instructions. This part of the documentation aims to have users up and running in a matter of minutes. For example, the Sensu Go Ansible collection has a [dedicated quickstart guide][10], while the Sops Ansible collection includes this information in [its README][11] file. - -Another essential part of the documentation is a detailed module, plugin, and role reference guide. Collection authors do not always publish those guides on the internet, but they should always be accessible with the `ansible-doc` tool. - -``` -$ ansible-doc community.sops.sops_encrypt -> SOPS_ENCRYPT    (/home/tadej/.ansible/collections/ansible> - -        Allows to encrypt binary data (Base64 encoded), text -        data, JSON or YAML data with sops. - -  * This module is maintained by The Ansible Community -OPTIONS (= is mandatory): - -- attributes -        The attributes the resulting file or directory should -        have. -        To get supported flags look at the man page for -        `chattr' on the target system. -        This string should contain the attributes in the same -        order as the one displayed by `lsattr'. -        The `=' operator is assumed as default, otherwise `+' -        or `-' operators need to be included in the string. -        (Aliases: attr)[Default: (null)] -        type: str -        version_added: 2.3 -... -``` - -#### 2. Playbook readability - -An Ansible playbook should serve as a human-readable description of the desired state. To achieve that, modules from the Ansible collection under evaluation should have a consistent user interface and descriptive parameter names. - -For example, if Ansible modules interact with a web service, authentication parameters should be separated from the rest. And all modules should use the same authentication parameters if possible. 
- -``` -- name: Create a check that runs every 30 seconds -  sensu.sensu_go.check: -    auth: &auth -      url: https://my.sensu.host:8080 -      user: demo -      password: demo-pass -    name: check -    command: check-cpu.sh -w 75 -c 90 -    interval: 30 -    publish: true - -- name: Create a filter -  sensu.sensu_go.filter: -     # Reuse the authentication data from before -    auth: *auth -    name: filter -    action: deny -    expressions: -       - event.check.interval == 10 -      - event.check.occurrences == 1 -``` - -#### 3. Basic functionality - -Before you start using third-party Ansible content in production, always check each Ansible module's basic functionality. - -Probably the most critical property to look for is the result. Ansible modules and roles that enforce a state are much easier to use than their action-executing counterparts. This is because you can update your Ansible playbook and rerun it without risking a significant breakage. - -``` -- name: Command module executes an action -> fails on re-run -  ansible.builtin.command: useradd demo - -- name: User module enforces a state -> safe to re-run -  ansible.builtin.user: -    name: demo -``` - -You should also expect support for [check mode][12], which simulates the change without making it. If you combine check mode with state enforcement, you get a configuration drift detector for free. - -``` -$ ansible-playbook --check playbook.yaml - -PLAY [host] ************************************************ - -TASK [Create user] ***************************************** -ok: [host] - -... - -PLAY RECAP ************************************************* -host        : ok=5    changed=2    unreachable=0    failed=0 -                      skipped=3        rescued=0   ignored=0 -``` - -#### 4. Implementation robustness - -A robustness check is a bit harder to perform if you've never developed an Ansible module or role before. Checking the continuous integration/continuous delivery (CI/CD) configuration files should give you a general idea of what is tested. Finding `ansible-test` and `molecule` commands in the test suite is an excellent sign. - -#### 5. Maintenance - -During your evaluation, you should also take a look at the issue tracker and development activity. Finding old issues with no response from maintainers is one sign of a poorly maintained Ansible collection. - -Judging the health of a collection by the development activity is a bit trickier. No commits in the last year are a sure sign of an unmaintained Ansible collection because the Ansible ecosystem is developing rapidly. Seeing a few commits per month is usually a sign of a mature project that receives timely updates. - -### Time well-spent - -Evaluating Ansible collections is not an entirely trivial task. Hopefully, these tips will make your selection process somewhat more manageable. It does take time and effort to find the appropriate content for your use case. But with automation becoming an integral part of almost everything, all this effort is well-spent and will pay dividends in the future. - -If you are thinking about creating your own Ansible Collection, you can download a [free eBook from Steampunk][13] packed full of advice on building and maintaining high-quality Ansible integrations. 
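One small habit that complements the evaluation steps above: once you settle on a collection, record it in a `requirements.yml` file so every environment installs exactly the version you vetted. The collection name and version below are just an example:

```
# requirements.yml
collections:
  - name: sensu.sensu_go
    version: "1.7.1"
```

Install everything it lists with `ansible-galaxy collection install -r requirements.yml`, and rerun the same command whenever you deliberately bump a version.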
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/ansible-collections - -作者:[Tadej Borovšak][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/tadeboro -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/OSDC_women_computing_3.png -[2]: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ -[3]: https://docs.ansible.com/ansible/latest/collections/index.html#list-of-collections -[4]: https://galaxy.ansible.com/sensu/sensu_go -[5]: https://galaxy.ansible.com/community/sops -[6]: https://github.com/mozilla/sops -[7]: https://galaxy.ansible.com/ -[8]: https://sensu.github.io/sensu-go-ansible/ -[9]: https://clouddocs.f5.com/products/orchestration/ansible/devel/ -[10]: https://sensu.github.io/sensu-go-ansible/quickstart-sensu-go-6.html -[11]: https://github.com/ansible-collections/community.sops#using-this-collection -[12]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#using-check-mode -[13]: https://steampunk.si/pdf/Importance_of_High_quality_Ansible_Collections_XLAB_Steampunk_ebook.pdf diff --git a/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md b/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md deleted file mode 100644 index 35ec8b7951..0000000000 --- a/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md +++ /dev/null @@ -1,223 +0,0 @@ -[#]: subject: "Build a home thermostat with a Raspberry Pi" -[#]: via: "https://opensource.com/article/21/3/thermostat-raspberry-pi" -[#]: author: "Joe Truncale https://opensource.com/users/jtruncale" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Build a home thermostat with a Raspberry Pi -====== -The ThermOS project is an answer to the many downsides of off-the-shelf smart thermostats. - -![Orange home vintage thermostat][1] - -Image by: Photo by [Moja Msanii][2] on [Unsplash][3] - -My wife and I moved into a new home in October 2020. As soon as it started getting cold, we realized some shortcomings of the home's older heating system (including one heating zone that was *always* on). We had Nest thermostats in our previous home, and the current setup was not nearly as convenient. There are multiple thermostats in our house, and some had programmed heating schedules, others had different schedules, some had none at all. - -![Old thermostats][4] - -The home's previous owner left notes explaining how some of the thermostats worked. (Joseph Truncale, CC BY-SA 4.0) - -It was time for a change, but the house has some constraints: - -* It was built in the late 1960s with a renovation during the '90s. -* The heat is hydronic (hot water baseboard). -* It has six thermostats for the six heating zones. -* There are only two wires that go to each thermostat for heat (red and white). - -![Furnace valves][5] - -### To buy or to build? - -I wanted "smart" thermostat control for all of the heat zones (schedules, automations, home/away, etc.). I had several options if I wanted to buy something off the shelf, but all of them have drawbacks: - -**Option 1: A Nest or Ecobee** - -* It's expensive: No smart thermostat can handle multiple zones, so I would need one for each zone (~$200*6 = $1,200). 
-* It's difficult: I would have to rerun the thermostat wire to get the infamous [C wire][6], which enables continuous power to the thermostat. The wires are 20 to 100 feet each, in-wall, and might be stapled to the studs. - -**Option 2: A battery-powered thermostat** such as the [Sensi WiFi thermostat][7] - -* The batteries last only a month or two. -* It's not HomeKit-compatible in battery-only mode. - -**Option 3: A commercial-off-the-shelf thermostat**, but only one exists (kind of): [Honeywell's TrueZONE][8] - -* It's old and poorly supported (it was released in 2008). -* It's expensive—more than $300 for just the controller, and you need a [RedLINK gateway][9] for a shoddy app to work. - -And the winner is… - -**Option 4: Build my own!** - -I decided to build my own multizone smart thermostat, which I named [ThermOS][10]. - -* It's centralized at the furnace (you need one device, not six). -* It uses the existing in-wall thermostat wires. -* It's HomeKit compatible, complete with automation, scheduling, home/away, etc. -* Anddddd it's… fun? Yeah, fun… I think. - -### The ThermOS hardware - -I knew that I wanted to use a Raspberry Pi. Since they've gotten so inexpensive, I decided to use a Raspberry Pi 4 Model B 2GB. I'm sure I could get by with a Raspberry Pi Zero W, but that will be for a future revision. - -Here's a full list of the parts I used: - -| Name | Quantity | Price | -| :- | :- | :- | -| Raspberry Pi 4 Model B 2GB | 1 | $29.99 | -| Raspberry Pi 4 official 15W power supply | 1 | $6.99 | -| Inland 400 tie-point breadboard | 1 | $2.99 | -| Inland 8 channel 5V relay module for Arduino | 1 | $8.99 | -| Inland DuPont jumper wire 20cm (3 pack) | 1 | $4.99 | -| DS18B20 temperature sensor (genuine) from Mouser.com | 6 | $6.00 | -| 3-pin screw terminal blocks (40 pack) | 1 | $7.99 | -| RPi GPIO terminal block breakout board module for Raspberry Pi | 1 | $17.99 | -| Alligator clip test leads (10 pack) | 1 | $5.89 | -| Southwire 18/2 thermostat wire (50ft) | 1 | $10.89 | -| Shrinkwrap | 1 | $4.99 | -| Solderable breadboard (5 pack) | 1 | $11.99 | -| PCB mounting brackets (50 pack) | 1 | $7.99 | -| Plastic housing/enclosure | 1 | $27.92 | - -I began drawing out the hardware diagram on [draw.io][11] and realized I lacked some crucial knowledge about the furnace. I opened the side panel and found the step-down transformer that takes the 120V electrical line and makes it 24V for the heating system. If your heating system is anything like mine, you'll see a lot of jumper wires between the Taco zone valves. Terminal 3 on the Taco is jumped across all of my zone valves. This is because it doesn't matter how many valves are on/open—it just controls the circulator pump. If any combination of one to five valves is open, it should be on; if no valves are open, it should be off… simple! - -![Furnace wiring architecture][12] - -At its core, a thermostat is just a type of switch. Once the thermistor (temp sensor) inside the thermostat detects a lower temperature, the switch closes and completes the 24V circuit. Instead of having a thermostat in every room, this project keeps all of them right next to the furnace so that all six-zone valves can be controlled by a relay module using six of the eight relays. The Raspberry Pi acts as the brains of the thermostat and controls each relay independently. - -![Manually setting relays using Raspberry Pi and Python][13] - -The next problem was how to get temperature readings from each room. 
I could have a wireless temperature sensor in each room running on an Arduino or Raspberry Pi, but that can get expensive and complicated. Instead, I wanted to reuse the existing thermostat wire in the walls but purely for temperature sensors. - -The "1-wire" [DS18B20][14] temperature sensor appeared to fit the bill: - -* It has an accuracy of +/- 0.5°C or 0.9°F. -* It uses the "1-wire" protocol for data. -* Most importantly, the DS18B20 can use "[parasitic power][15]" mode where it needs just two wires for power and data. Just a heads up… almost all of the DS18B20s out there are [counterfeit][16]. I purchased a few (hoping they were genuine), but they wouldn't work when I tried to use parasitic power. I then bought real ones from [Mouser.com][17], and they worked like a charm! - -![Temperature sensors][18] - -Starting with a breadboard and all the components locally, I started writing code to interact with all of it. Once I proved out the concept, I added the existing in-wall thermostat wire into the mix. I got consistent readings with that setup, so I set out to make them a bit more polished. With help from my [dad][19], the self-proclaimed "just good enough" solderer, we soldered leads to the three-pin screw terminals (to avoid overheating the sensor) and then attached the sensor into the terminals. Now the sensors can be attached with wire nuts to the existing in-wall wiring. - -![Attaching temperature sensors][20] - -I'm still in the process of "prettifying" my temperature sensor wall mounts, but I've gone through a few 3D printing revisions, and I think I'm almost there. - -![Wall mounts][21] - -### The ThermOS software - -As usual, writing the logic wasn't the hard part. However, deciding on the application architecture and framework was a confusing, multi-day process. I started out evaluating open source projects like [PiHome][22], but it relied on specific hardware *and* was written in PHP. I'm a Python fan and decided to start from scratch and write my own REST API. - -Since HomeKit integration was so important, I figured I would eventually write a [HomeBridge][23] plugin to integrate it. I didn't realize that there was an entire Python HomeKit framework called [HAP-Python][24] that implements the accessory protocol. It helped me get a proof of concept running and controlled through my iPhone's Home app within 30 minutes. - -![ThermOS HomeKit integration][25] - -![ThermOS software architecture][26] - -The rest of the "temp" logic is relatively straightforward, but I do want to highlight a piece that I initially missed. My code was running for a few days, and I was working on the hardware, when I noticed that my relays were turning on and off every few seconds. This "short-cycling" isn't necessarily harmful, but it certainly isn't efficient. To avoid that, I added some thresholding to make sure the heat toggles only when it's +/- 0.5C°. - -Here is the threshold logic (you can see the [rubber-duck debugging][27] in the comments): - -``` -# check that we want heat -if self.target_state.value == 1: -    # if heat relay is already on, check if above threshold -    # if above, turn off .. 
if still below keep on -    if GPIO.input(self.relay_pin): -        if self.current_temp.value - self.target_temp.value >= 0.5: -            status = 'HEAT ON - TEMP IS ABOVE TOP THRESHOLD, TURNING OFF' -            GPIO.output(self.relay_pin, GPIO.LOW) -        else: -            status = 'HEAT ON - TEMP IS BELOW TOP THRESHOLD, KEEPING ON' -            GPIO.output(self.relay_pin, GPIO.HIGH) -    # if heat relay is not already on, check if below threshold -    elif not GPIO.input(self.relay_pin): -        if self.current_temp.value - self.target_temp.value <= -0.5: -            status = 'HEAT OFF - TEMP IS BELOW BOTTOM THRESHOLD, TURNING ON' -            GPIO.output(self.relay_pin, GPIO.HIGH) -        else: -          status = 'HEAT OFF - KEEPING OFF' -``` - -![Thresholding][28] - -And I achieved my ultimate goal—to be able to control all of it from my phone. - -![ThermOS as a HomeKit Hub][29] - -### Putting my ThermOS in a lunchbox - -My proof of concept was pretty messy. - -![Initial ThermOS setup][30] - -With the software and general hardware design in place, I started figuring out how to package all of the components in a more permanent and polished form. One of my main concerns for a permanent installation was to use a breadboard with DuPont jumper wires. I ordered some [solderable breadboards][31] and a [screw terminal breakout board][32] (thanks [@arduima][33] for the Raspberry Pi GPIO pins). - -Here's what the solderable breadboard with mounts and enclosure looked like in progress. - -![ThermOS hardware][34] - -And here it is, mounted in the boiler room. - -![ThermOS mounted][35] - -Now I just need to organize and label the wires, and then I can start swapping the remainder of the thermostats over to ThermOS. And I'll be on to my next project: ThermOS for my central air conditioning. 
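
For reference, pulling readings from the DS18B20s is mostly a matter of parsing a sysfs file. Here is a minimal sketch that assumes the standard Linux 1-wire setup on the Raspberry Pi (the `w1-gpio` and `w1-therm` modules loaded, sensors appearing under `/sys/bus/w1/devices/`); the sensor ID shown is made up, and the real ThermOS code may organize this differently:

```
import glob

# Each DS18B20 appears as a directory such as 28-0316a2b7c8dd (hypothetical ID)
# once the w1-gpio and w1-therm kernel modules are loaded.
SENSOR_GLOB = '/sys/bus/w1/devices/28-*/w1_slave'

def read_temps_c():
    temps = {}
    for path in glob.glob(SENSOR_GLOB):
        sensor_id = path.split('/')[-2]
        with open(path) as f:
            lines = f.read().splitlines()
        # Line 1 ends in YES when the CRC check passed;
        # line 2 ends in t=<temperature in thousandths of a degree C>.
        if len(lines) >= 2 and lines[0].strip().endswith('YES'):
            millidegrees = int(lines[1].split('t=')[-1])
            temps[sensor_id] = millidegrees / 1000.0
    return temps

if __name__ == '__main__':
    print(read_temps_c())   # e.g. {'28-0316a2b7c8dd': 21.4}
```

Each reading can then be matched to its zone and fed into the thresholding logic above.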
- -Image by: (Joseph Truncale, CC BY-SA 4.0) - -*This originally appeared on [Medium][36] and is republished with permission.* - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/thermostat-raspberry-pi - -作者:[Joe Truncale][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jtruncale -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/home-thermostat.jpg -[2]: https://unsplash.com/@mojamsanii?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/thermostat?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://opensource.com/sites/default/files/uploads/oldthermostats.jpeg -[5]: https://opensource.com/sites/default/files/uploads/furnacevalves.jpeg -[6]: https://smartthermostatguide.com/thermostat-c-wire-explained/ -[7]: https://www.amazon.com/Emerson-Thermostat-Version-Energy-Certified/dp/B01NB1OB0I -[8]: https://www.honeywellhome.com/us/en/products/air/forced-air-zone-panels/truezone-hz432-panel-hz432-u/ -[9]: https://www.amazon.com/Honeywell-Redlink-Enabled-Internet-THM6000R7001/dp/B0783HK9ZZ -[10]: https://github.com/truncj/thermos -[11]: http://draw.io/ -[12]: https://opensource.com/sites/default/files/uploads/furnacewiring.png -[13]: https://opensource.com/sites/default/files/uploads/settingrelays.gif -[14]: https://datasheets.maximintegrated.com/en/ds/DS18B20.pdf -[15]: https://learn.openenergymonitor.org/electricity-monitoring/temperature/DS18B20-temperature-sensing -[16]: https://github.com/cpetrich/counterfeit_DS18B20 -[17]: https://www.mouser.com/ -[18]: https://opensource.com/sites/default/files/uploads/tempsensors.png -[19]: https://twitter.com/jofredrick -[20]: https://opensource.com/sites/default/files/uploads/attachingsensors.jpeg -[21]: https://opensource.com/sites/default/files/uploads/wallmount.jpeg -[22]: https://github.com/pihome-shc/pihome -[23]: https://github.com/homebridge/homebridge -[24]: https://github.com/ikalchev/HAP-python -[25]: https://opensource.com/sites/default/files/uploads/iphoneintegration.gif -[26]: https://opensource.com/sites/default/files/uploads/thermosarchitecture.png -[27]: https://en.wikipedia.org/wiki/Rubber_duck_debugging -[28]: https://opensource.com/sites/default/files/uploads/thresholding.png -[29]: https://opensource.com/sites/default/files/uploads/thermoshomekit.png -[30]: https://opensource.com/sites/default/files/uploads/unpackaged.jpeg -[31]: https://www.amazon.com/gp/product/B07ZV8FWM4/r -[32]: https://www.amazon.com/gp/product/B084C69VSQ/ -[33]: https://twitter.com/dimitri_koshkin -[34]: https://opensource.com/sites/default/files/uploads/breadboard.png -[35]: https://opensource.com/sites/default/files/uploads/mounted.png -[36]: https://joetruncale.medium.com/thermos-d089e1c4974b diff --git a/sources/tech/20210302 Learn Java by building a classic arcade game.md b/sources/tech/20210302 Learn Java by building a classic arcade game.md deleted file mode 100644 index f5f43c98f4..0000000000 --- a/sources/tech/20210302 Learn Java by building a classic arcade game.md +++ /dev/null @@ -1,382 +0,0 @@ -[#]: subject: "Learn Java by building a classic arcade game" -[#]: via: "https://opensource.com/article/21/3/java-object-orientation" -[#]: author: "Vaneska Sousa 
https://opensource.com/users/vaneska" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Learn Java by building a classic arcade game -====== -Practice how to structure a project and write Java code while having fun building a fun game. - -![Learning and studying technology is the key to success][1] - -Image by: [WOCinTech Chat][2], [CC BY 2.0][3] - -As a second-semester student in systems and digital media at the Federal University of Ceará in Brazil, I was given the assignment to remake the classic Atari 2600 [Breakout game][4] from 1978. I am still in my infancy in learning software development, and this was a challenging experience. It was also a gainful one because I learned a lot, especially about applying object-oriented concepts. - -![Breakout game][5] - -I'll explain how I accomplished this challenge, and if you follow the step-by-step instructions, at the end of this article, you will have the first pieces of your own classic Breakout game. - -### Choosing Java and TotalCross - -Several of my courses use [Processing][6], a software engine that uses [Java][7]. Java is a great language for learning programming concepts, in part because it's a strongly typed language. - -Despite being free to choose any language or framework for my Breakout project, I chose to continue in Java to apply what I've learned in my coursework. I also wanted to use a framework so that I did not need to do everything from scratch. I considered using Godot, but that would mean I would hardly need to program at all. - -Instead, I chose [TotalCross][8]. It is an open source software development kit (SDK) and framework with a simple game engine that generates code for [Linux Arm][9] devices (like the Raspberry Pi) and smartphones. Also, because I work for TotalCross, I have access to developers with much more experience than I have and know the platform very well. It seemed to be the safest way and, despite some strife, I don't regret it one bit. It was very cool to develop the whole project and see it running on the phone and the [Raspberry Pi][10]. - -![Breakout remake][11] - -### Define the project mechanics and structure - -When starting to develop any application, and especially a game, you need to consider the main features or mechanics that will be implemented. I watched the original Breakout gameplay a few times and played some versions on the internet. Then I defined the game mechanics and project structure based on what I learned. - -#### Game mechanics - -1. The platform moves left or right, according to the user's command. When it reaches an end, it hits the "wall" (edge). -2. When the ball hits the platform, it returns in the opposite direction it came from. -3. Each time the ball hits a "brick" (blue, green, yellow, orange, or red), the brick disappears. -4. When all the bricks in level 01 have been destroyed, new ones appear (in the same position as the previous one), and the ball's speed increases. -5. When all the bricks in level 02 have been destroyed, the game continues without obstacles on the screen. -6. The game ends when the ball falls. - -#### Project structure - -* RunBreakoutApplication.java is the class responsible for calling the class that inherits the `GameEngine` and runs the simulator. -* Breakout.java is the main class, which inherits from the `GameEngine` class and "assembles" the game, where it will call objects, define positions, etc. 
-* The `sprites` package is where all the classes responsible for the sprites (e.g., the image and behavior of the blocks, platform, and ball) go. -* The `util` packages contain classes used to facilitate project maintenance, such as constants, image initialization, and colors. - -### Get hands-on with code - -First, install the [TotalCross plugin from VSCode][12]. If you are using another [integrated development environment][13] (IDE), check TotalCross's documentation for installation instructions. - -If you're using the plugin, just press `Ctrl` +`P`, type `totalcross`, and click `Create new project`. Fill in the requested information: - -* Folder name: gameTC -* ArtifactId: com.totalcross -* Project name: Breakout -* TotalCross version: 6.1.1 (or the most recent one) -* Build platforms: -Android and -Linux_arm (select the platforms you want) - -When filling in the fields above and generating the project, if you are in the `RunBreakoutApplication.java` class, right-clicking on it and clicking "run" will open the simulator, and "Hello World!" will appear on your screen if you have created your Java project with TotalCross properly. - -![HelloWorld project structure][14] - -If you have a problem, check the [documentation][15] or ask the [TotalCross community][16] on Telegram for help. - -After the project is configured, the next step is to add the project's images in `Resources` > `Sprites`. Create two packages named `util` and `sprites` to work on later. - -The structure of your project will be: - -![Project structure][17] - -### Go behind the scenes - -To make it easier to maintain the code and change the images to the colors you want to use, it's a good practice to [centralize everything by creating classes][18]. Place all of the classes for this function inside the `util` package. - -#### Constants.java - -First, create the `constants.java` class, which is where placement patterns (such as the edge between the screen and where the platform starts), speed, number of blocks, etc., reside. This is good for playing, changing numbers, and understanding where things change and why. It is a great exercise for those just starting with Java. 
- -``` -package com.totacross.util; - -import totalcross.sys.Settings; -import totalcross.ui.Control; -import totalcross.util.UnitsConverter; - -public class Constants { -    //Position -    public static final int BOTTOM_EDGE = UnitsConverter.toPixels(430 + Control.DP); -    public static final int DP_23 = UnitsConverter.toPixels(23 + Control.DP); -    public static final int DP_50 = UnitsConverter.toPixels(50 + Control.DP); -    public static final int DP_100 = UnitsConverter.toPixels(100 + Control.DP); - -    //Sprites -    public static final int EDGE_RACKET = UnitsConverter.toPixels(20 + Control.DP); -    public static final int WIDTH_BALL =  UnitsConverter.toPixels(15 + Control.DP); -    public static final int HEIGHT_BALL =  UnitsConverter.toPixels(15 + Control.DP); - -    //Bricks -    public static final int NUM_BRICKS = 10; -    public static final int WIDTH_BRICKS = Settings.screenWidth / NUM_BRICKS; -    public static final int HEIGHT_BRICKS = Settings.screenHeight / 32; - -    //Brick Points -    public static final int BLUE_POINT = 1; -    public static final int GREEN_POINT = 2; -    public static final int YELLOW_POINT = 3; -    public static final int DARK_ORANGE_POINT = 4; -    public static final int ORANGE_POINT = 5; -    public static final int RED_POINT = 6; -} -``` - -If you want to know more about the pixel density (DP) unit, I recommend reading the [Material Design description][19]. - -#### Colors.java - -As the name suggests, this class is where you define the colors used in the game. I recommend naming things according to the color's purpose, such as background, font color, etc. This will make it easier to update your project's color palette in a single class. - -``` -package com.totacross.util; - -public class Colors { -    public static int PRIMARY = 0x161616; -    public static int P_FONT = 0xFFFFFF; -    public static int SECONDARY = 0xE63936; -    public static int SECONDARY_DARK = 0xCE3737; -} -``` - -#### Images.java - -The `images.java` class is undoubtedly the most frequently used. - -``` -package com.totacross.util; - -import static com.totacross.util.Constants.*; -import totalcross.ui.dialog.MessageBox; -import totalcross.ui.image.Image; - - -public class Images { - -    public static Image paddle, ball; -    public static Image red, orange, dark_orange, yellow, green, blue; - -    public static void loadImages() { -        try { -            // general -            paddle = new Image("sprites/paddle.png"); -            ball = new Image("sprites/ball.png").getScaledInstance(WIDTH_BALL, HEIGHT_BALL); - -            // Bricks -            red = new Image("sprites/red_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); -            orange = new Image("sprites/orange_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); -            dark_orange = new Image("sprites/orange2_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); -            yellow = new Image("sprites/yellow_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); -            green = new Image("sprites/green_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); -            blue = new Image("sprites/blue_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS); - -        } catch (Exception e) { -            MessageBox.showException(e, true); -        } -    } -} -``` - -The `getScaledInstance()` method will manipulate the image to match the values passed through the constant. Try to change these values and observe the impact on the game. 
- -#### Recap - -At this point, your project should look like this: - -![Project structure][20] - -### Create your first sprite - -Now that the project is structured properly, you're ready to create your first class in the sprite package: `paddle.java`, which is the platform—the user's object of interaction. - -#### Paddle.java - -The `paddle.java` class must inherit from `sprite`, which is the class responsible for objects in games. This is a fundamental concept in game engine development, so when inheriting from sprites, the TotalCross framework will already be concerned with delimiting movement within the screen, detecting collisions between sprites, and other important functions. You can check all the details in [Javadoc][21]. - -In Breakout, the paddle moves on the X-axis at a speed determined by the user's command (by touch screen or mouse movement). The `paddle.java` class is responsible for defining this movement and the sprite's image (the "face"): - -``` -package com.totacross.sprites; - -import com.totacross.util.Images; - -import totalcross.game.Sprite; -import totalcross.ui.image.ImageException; - -public class Paddle extends Sprite { -  private static final int SPEED = 4; - -  public Paddle() throws IllegalArgumentException, IllegalStateException, ImageException { -    super(Images.paddle, -1, true, null); -  } - -  //Move the platform according the speed and the direction -  public final void move(boolean left, int speed) { -    if (left) { -      centerX -= SPEED; -    } else { -      centerX += SPEED; -    } - -    setPos(centerX, centerY, true); -  } -} -``` - -You indicate the image (`Images.paddle` ) within the constructor, and the `move` method (a TotalCross feature) receives the speed defined at the beginning of the class. Experiment with other values and observe what happens with the movement. - -When the paddle is moving to the left, the center of the paddle at any moment is defined as itself minus the speed, and when it's moving to the right, it's itself plus the speed. Ultimately, you define the position of the sprite on the screen. - -Now your sprite is ready, so you need to add it on the screen and include the user's movement to call the `move` method and create movement. Do this in your main class, `Breakout.java`. - -#### Add onscreen and user interaction - -When building your game engine, you need to focus on some standard points. For the sake of brevity, I'll add comments in the code. - -Basically, you will delete the automatically generated `initUI()` method and, instead of inheriting from `MainWindow`, you will inherit it from `GameEngine`. A "red" will appear in the name of your class, so just click on the lamp or the suggestion symbol for your IDE and click `Add unimplemented methods`. This will automatically generate the `onGameInit()` method, which is responsible for the moment when the game starts, i.e., the moment the `breakout` class is called. - -Inside the constructor, you must add the style type (`MaterialUI` ) and the refresh time on the screen (`70` ), and signal that the game has an interface (`gameHasUI = true;` ). - -Last but not least, you have to start the game through `this.start()` on `onGameInit()` and focus on some other methods: - -* onGameInit() is the first method called. In it, you must initialize the sprites and images (Images.loadImages), and tell the game that it can start. -* onGameStart()is called when the game starts. 
It sets the platform's initial position (in the center of the screen on the X-axis and below the center with a border on the Y-axis). -* onPaint() is where you say what will be drawn for each frame. First, it paints the background black (to not leave traces of the sprites), then it displays the sprites with `.show()`. -* The `onPenDrag` and `onPenDown` methods identify when the user `move`s the paddle (by dragging a finger on a touch screen or moving the mouse while pressing the left button). These methods change the paddle movement through the `setPos()` method, which triggers the move method in the `Paddle.java` class. Note that the last parameter of the `racket.setPos` method is `true` to precisely limit the paddle's movement within the screen so that it never disappears from the user's field of view. - -``` -package com.totacross; - -import com.totacross.sprites.Paddle; -import com.totacross.util.Colors; -import com.totacross.util.Constants; -import com.totacross.util.Images; - -import totalcross.game.GameEngine; -import totalcross.sys.Settings; -import totalcross.ui.MainWindow; -import totalcross.ui.dialog.MessageBox; -import totalcross.ui.event.PenEvent; -import totalcross.ui.gfx.Graphics; - -public class Breakout extends GameEngine { - -    private Paddle racket; - -    public Breakout() { -        setUIStyle(Settings.MATERIAL_UI); -        gameName = "Breakout"; -        gameVersion = 100; -        gameHasUI = true; -        gameRefreshPeriod = 70; - -    } - -    @Override -    public void onGameInit() { -        setBackColor(Colors.PRIMARY); -        Images.loadImages(); - -        try { -            racket = new Paddle(); - -        } catch (Exception e) { -            MessageBox.showException(e, true); -            MainWindow.exit(0); -        } -        this.start(); -    } -    public void onGameStart() { -        racket.setPos(Settings.screenWidth / 2, (Settings.screenHeight - racket.height) - Constants.EDGE_RACKET, true); -    } - -     //to draw the interface -     @Override -     public void onPaint(Graphics g) { -         super.onPaint(g); -         if (gameIsRunning) { -             g.backColor = Colors.PRIMARY; -             g.fillRect(0, 0, this.width, this.height); - -             if (racket != null) { -                 racket.show(); -             } -         } -     } -     //To make the paddle moving with the mouse/press moviment -     @Override -     public final void onPenDown(PenEvent evt) { -         if (gameIsRunning) { -             racket.setPos(evt.x, racket.centerY, true); -         } -     } - -     @Override -     public final void onPenDrag(PenEvent evt) { -         if (gameIsRunning) { -             racket.setPos(evt.x, racket.centerY, true); -         } -     } -} -``` - -### Run the game - -To run the game, just click `RunBreakoutApplication.java` with the right mouse button, then click `run` to see how it looks. - -![Breakout game remake][22] - -If you want to run it on a Raspberry Pi, change the parameters in the `RunBreakoutApplication.java` class to: - -``` -TotalCrossApplication.run(Breakout.class, "/scr", "848x480"); -``` - -This sets the screen size to match the Raspberry Pi. - -![Breakout on Raspberry Pi][23] - -The first sprite and game mechanics are ready! - -### Next steps - -In the next article, I'll show how to add the ball sprite and make collisions. If you need help, call me in the [community group][24] on Telegram or post in the TotalCross [forum][25], where I'm available to help. 
- -If you put this article into practice, share your experience in the comments. All feedback is important! If you wish, favorite [TotalCross on GitHub][26], as it improves the project's relevance on the platform. - -Image by: (Vaneska Karen, CC BY-SA 4.0) - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/java-object-orientation - -作者:[Vaneska Sousa][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/vaneska -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/studying-books-java-couch-education.png -[2]: https://www.wocintechchat.com/ -[3]: https://creativecommons.org/licenses/by/2.0/ -[4]: https://www.youtube.com/watch?v=Cr6z3AyhRr8 -[5]: https://opensource.com/sites/default/files/uploads/originalbreakout.gif -[6]: https://processing.org/ -[7]: https://opensource.com/resources/java -[8]: https://opensource.com/article/20/7/totalcross-cross-platform-development -[9]: https://www.arm.linux.org.uk/docs/whatis.php -[10]: https://opensource.com/resources/raspberry-pi -[11]: https://opensource.com/sites/default/files/uploads/breakoutremake.gif -[12]: https://marketplace.visualstudio.com/items?itemName=totalcross.vscode-totalcross -[13]: https://www.redhat.com/en/topics/middleware/what-is-ide -[14]: https://opensource.com/sites/default/files/uploads/helloworld.png -[15]: https://learn.totalcross.com/ -[16]: https://t.me/guiforembedded -[17]: https://opensource.com/sites/default/files/uploads/projectstructure.png -[18]: https://learn.totalcross.com/documentation/guides/app-architecture/colors-fonts-and-images -[19]: https://material.io/design/layout/pixel-density.html -[20]: https://opensource.com/sites/default/files/uploads/projectstructure2.png -[21]: https://en.wikipedia.org/wiki/Javadoc -[22]: https://opensource.com/sites/default/files/uploads/runbreakout.gif -[23]: https://opensource.com/sites/default/files/uploads/runbreakout2.gif -[24]: https://t.me/guiforembedded -[25]: http://forum.totalcross.com -[26]: https://github.com/totalcross/totalcross diff --git a/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md b/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md deleted file mode 100644 index dbd7837991..0000000000 --- a/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md +++ /dev/null @@ -1,474 +0,0 @@ -[#]: subject: "Host your website with dynamic content and a database on a Raspberry Pi" -[#]: via: "https://opensource.com/article/21/3/web-hosting-raspberry-pi" -[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Host your website with dynamic content and a database on a Raspberry Pi -====== -You can use free software to support a web application on a very lightweight computer. - -![Digital creative of a browser on the internet][1] - -Raspberry Pi's single-board machines have set the mark for cheap, real-world computing. With its model 4, the Raspberry Pi can host web applications with a production-grade web server, a transactional database system, and dynamic content through scripting. 
This article explains the installation and configuration details with a full code example. Welcome to web applications hosted on a very lightweight computer. - -### The snowfall application - -Imagine a downhill ski area large enough to have microclimates, which can mean dramatically different snowfalls across the area. The area is divided into regions, each of which has devices that record snowfall in centimeters; the recorded information then guides decisions on snowmaking, grooming, and other maintenance operations. The devices communicate, say, every 20 minutes with a server that updates a database that supports reports. Nowadays, the server-side software for such an application can be free *and* production-grade. - -This snowfall application uses the following technologies: - -* A [Raspberry Pi 4][2] running Debian -* Nginx web server: The free version hosts over 400 million websites. This web server is easy to install, configure, and use. -* [SQLite relational database system][3], which is file-based: A database, which can hold many tables, is a file on the local system. SQLite is lightweight but also [ACID-compliant][4]; it is suited for low to moderate volume. SQLite is likely the most widely used database system in the world, and the source code for SQLite is in the public domain. The current version is 3. A more powerful (but still free) option is PostgreSQL. -* Python: The Python programming language can interact with databases such as SQLite and web servers such as Nginx. Python (version 3) comes with Linux and macOS systems. - -Python includes a software driver for communicating with SQLite. There are options for connecting Python scripts with Nginx and other web servers. One option is [uWSGI][5] (Web Server Gateway Interface), which updates the ancient CGI (Common Gateway Interface) from the 1990s. - -Several factors speak for uWSGI: - -* uWSGI is flexible. It can be used as either a lightweight concurrent web server or the backend application server connected to a web server such as Nginx. -* Its setup is minimal. -* The snowfall application involves a low to moderate volume of hits on the web server and database system. In general, CGI technologies are not fast by modern standards, but CGI performs well enough for department-level web applications such as this one. - -Various acronyms describe the uWSGI option. Here's a sketch of the three principal ones: - -* WSGI is a Python specification for an interface between a web server on one side, and an application or an application framework (e.g., Django) on the other side. This specification defines an API whose implementation is left open. -* uWSGI implements the WSGI interface by providing an application server, which connects applications to a web server. A uWSGI application server's main job is to translate HTTP requests into a format that a web application can consume and, afterward, to format the application's response into an HTTP message. -* uwsgi is a binary protocol implemented by a uWSGI application server to communicate with a full-featured web server such as Nginx; it also includes utilities such as a lightweight web server. The Nginx web server "speaks" uwsgi out of the box. - -For convenience, I will use "uwsgi" as shorthand for the binary protocol, the application server, and the very lightweight web server. 
- -### Setting up the database - -On a Debian-based system, you can install SQLite the usual way (with `%` representing the command-line prompt): - -``` -% sudo apt-get install sqlite3 -``` - -This database system is a collection of C libraries and utilities, all of which come to about 500KB in size. There is no database server to start, stop, or otherwise maintain. - -Once SQLite is installed, create a database at the command-line prompt: - -``` -% sqlite3 snowfall.db -``` - -If this succeeds, the command creates the file `snowfall.db` in the current working directory. The database name is arbitrary (e.g., no extension is required), and the command opens the SQLite client utility with `>sqlite` as the prompt: - -``` -Enter ".help" for usage hints. -sqlite> -``` - -Create the snowfall table in the snowfall database with the following command. The table name, like the database name, is arbitrary: - -``` -sqlite> CREATE TABLE snowfall (id INTEGER PRIMARY KEY AUTOINCREMENT, -                               region TEXT NOT NULL, -                               device TEXT NOT NULL, -                               amount DECIMAL NOT NULL, -                               tstamp DECIMAL NOT NULL); -``` - -SQLite commands are case-insensitive, but it is traditional to use uppercase for SQL terms and lowercase for user terms. Check that the table was created: - -``` -sqlite> .schema -``` - -The command echoes the `CREATE TABLE` statement. - -The database is now ready for business, although the single-table snowfall is empty. You can add rows interactively to the table, but an empty table is fine for now. - -### A first look at the overall architecture - -Recall that uwsgi can be used in two ways: either as a lightweight web server or as an application server connected to a production-grade web server such as Nginx. The second use is the goal, but the first is suited for developing and testing the programmer's request-handling code. Here's the architecture with Nginx in play as the web server: - -``` -HTTP       uwsgi -client<---->Nginx<----->appServer<--->request-handling code<--->SQLite -``` - -The client could be a browser, a utility such as [curl][6], or a hand-crafted program fluent in HTTP. Communications between the client and Nginx occur through HTTP, but then uwsgi takes over as a binary-transport protocol between Nginx and the application server, which interacts with request-handling code such as `requestHandler.py` (described below). This architecture delivers a clean division of labor. Nginx alone manages the client, and only the request-handling code interacts with the database. In turn, the application server separates the web server from the programmer-written code, which has a high-level API to read and write HTTP messages delivered over uwsgi. - -I'll examine these architectural pieces and cover the steps for installing, configuring, and using uwsgi and Nginx in the next sections. - -### The snowfall application code - -Below is the source code file `requestHandler.py` for the snowfall application. (It's also available on my [website][7].) Different functions within this code help clarify the software architecture that connects SQLite, Nginx, and uwsgi. - -#### The request-handling program - -``` -import sqlite3 -import cgi - -PATH_2_DB = '/home/marty/wsgi/snowfall.db' - -## Dispatches HTTP requests to the appropriate handler. 
-def application(env, start_line): -    if env['REQUEST_METHOD'] == 'POST':   ## add new DB record -        return handle_post(env, start_line) -    elif env['REQUEST_METHOD'] == 'GET':  ## create HTML-fragment report -        return handle_get(start_line) -    else:                                 ## no other option for now -        start_line('405 METHOD NOT ALLOWED', [('Content-Type', 'text/plain')]) -        response_body = 'Only POST and GET verbs supported.' -        return [response_body.encode()]                             - -def handle_post(env, start_line):     -    form = get_field_storage(env)  ## body of an HTTP POST request -    -    ## Extract fields from POST form. -    region = form.getvalue('region') -    device = form.getvalue('device') -    amount = form.getvalue('amount') -    tstamp = form.getvalue('tstamp') - -    ## Missing info? -    if (region is not None and -        device is not None and -        amount is not None and -        tstamp is not None): -        add_record(region, device, amount, tstamp) -        response_body = "POST request handled.\n" -        start_line('201 OK', [('Content-Type', 'text/plain')]) -    else: -        response_body = "Missing info in POST request.\n" -        start_line('400 Bad Request', [('Content-Type', 'text/plain')]) -  -    return [response_body.encode()] - -def handle_get(start_line): -    conn = sqlite3.connect(PATH_2_DB)        ## connect to DB -    cursor = conn.cursor()                   ## get a cursor -    cursor.execute("select * from snowfall") - -    response_body = "
<h2>Snowfall report</h2><ul>"
-    rows = cursor.fetchall()
-    for row in rows:
-        response_body += "<li>" + str(row[0]) + '|'  ## primary key
-        response_body += row[1] + '|'                ## region
-        response_body += row[2] + '|'                ## device
-        response_body += str(row[3]) + '|'           ## amount
-        response_body += str(row[4]) + "</li>"       ## timestamp
-    response_body += "</ul>
" - -    conn.commit()  ## commit -    conn.close()   ## cleanup -    -    start_line('200 OK', [('Content-Type', 'text/html')]) -    return [response_body.encode()] - -## Add a record from a device to the DB. -def add_record(reg, dev, amt, tstamp): -    conn = sqlite3.connect(PATH_2_DB)      ## connect to DB -    cursor = conn.cursor()                 ## get a cursor - -    sql = "INSERT INTO snowfall(region,device,amount,tstamp) values (?,?,?,?)" -    cursor.execute(sql, (reg, dev, amt, tstamp)) ## execute INSERT - -    conn.commit()  ## commit -    conn.close()   ## cleanup - -def get_field_storage(env): -    input = env['wsgi.input'] -    form = env.get('wsgi.post_form') -    if (form is not None and form[0] is input): -        return form[2] - -    fs = cgi.FieldStorage(fp = input, -                          environ = env, -                          keep_blank_values = 1) -    return fs -``` - -A constant at the start of the source file defines the path to the database file: - -``` -PATH_2_DB = '/home/marty/wsgi/snowfall.db' -``` - -Make sure to update the path for your Raspberry Pi. - -As noted earlier, uwsgi includes a lightweight web server that can host this request-handling application. To begin, install uwsgi with these two commands (`##` introduces my comments): - -``` -% sudo apt-get install build-essential python-dev ## C header files, etc. -% pip install uwsgi                               ## pip = Python package manager -``` - -Next, launch a bare-bones snowfall application using uwsgi as the web server: - -``` -% uwsgi --http 127.0.0.1:9999 --wsgi-file requestHandler.py -``` - -The flag `--http` runs uwsgi in web-server mode, with 9999 as the web server's listening port on localhost (127.0.0.1). By default, uwsgi dispatches HTTP requests to a programmer-defined function named `application`. For review, here's the full function from the top of the `requestHandler.py` code: - -``` -def application(env, start_line): -    if env['REQUEST_METHOD'] == 'POST':   ## add new DB record -        return handle_post(env, start_line) -    elif env['REQUEST_METHOD'] == 'GET':  ## create HTML-fragment report -        return handle_get(start_line) -    else:                                 ## no other option for now -        start_line('405 METHOD NOT ALLOWED', [('Content-Type', 'text/plain')]) -        response_body = 'Only POST and GET verbs supported.' -        return [response_body.encode()] -``` - -The snowfall application accepts only two request types: - -* A POST request, if up to snuff, creates a new entry in the snowfall table. The request should include the ski area region, the device in the region, the snowfall amount in centimeters, and a Unix-style timestamp. A POST request is dispatched to the `handle_post` function (which I'll clarify shortly). -* A GET request returns an HTML fragment (an unordered list) with the records currently in the snowfall table. - -Requests with an HTTP verb other than POST and GET will generate an error message. - -You can use a utility such as curl to generate HTTP requests for testing. Here are three sample POST requests to start populating the database: - -``` -% curl -X POST -d "region=R1&device=D9&amount=1.42&tstamp=1604722088.0158753" localhost:9999/ -% curl -X POST -d "region=R7&device=D4&amount=2.11&tstamp=1604722296.8862638" localhost:9999/ -% curl -X POST -d "region=R5&device=D1&amount=1.12&tstamp=1604942236.1013834" localhost:9999/ -``` - -These commands add three records to the snowfall table. 
A subsequent GET request from curl or a browser displays an HTML fragment that lists the rows in the snowfall table. Here's the equivalent as non-HTML text: - -``` -Snowfall report - -    1|R1|D9|1.42|1604722088.0158753 -    2|R7|D4|2.11|1604722296.8862638 -    3|R5|D1|1.12|1604942236.1013834 -``` - -A professional report would convert the numeric timestamps into human-readable ones. But the emphasis, for now, is on the architectural components in the snowfall application, not on the user interface. - -The uwsgi utility accepts various flags, which can be given either through a configuration file or in the launch command. For example, here's a richer launch of uwsgi as a web server: - -``` -% uwsgi --master --processes 2 --http 127.0.0.1:9999 --wsgi-file requestHandler.py -``` - -This version creates a master (supervisory) process and two worker processes, which can handle the HTTP requests concurrently. - -In the snowfall application, the functions `handle_post` and `handle_get` process POST and GET requests, respectively. Here's the `handle_post` function in full: - -``` -def handle_post(env, start_line):     -    form = get_field_storage(env)  ## body of an HTTP POST request -    -    ## Extract fields from POST form. -    region = form.getvalue('region') -    device = form.getvalue('device') -    amount = form.getvalue('amount') -    tstamp = form.getvalue('tstamp') - -    ## Missing info? -    if (region is not None and -        device is not None and -        amount is not None and -        tstamp is not None): -        add_record(region, device, amount, tstamp) -        response_body = "POST request handled.\n" -        start_line('201 OK', [('Content-Type', 'text/plain')]) -    else: -        response_body = "Missing info in POST request.\n" -        start_line('400 Bad Request', [('Content-Type', 'text/plain')]) -  -    return [response_body.encode()] -``` - -The two arguments to the `handle_post` function (`env` and `start_line` ) represent the system environment and a communications channel, respectively. The `start_line` channel sends the HTTP start line (in this case, either `400 Bad Request` or `201 OK` ) and any HTTP headers (in this case, just `Content-Type: text/plain` ) of an HTTP response. - -The `handle_post` function tries to extract the relevant data from the HTTP POST request and, if it's successful, calls the function `add_record` to add another row to the snowfall table: - -``` -def add_record(reg, dev, amt, tstamp): -    conn = sqlite3.connect(PATH_2_DB)      ## connect to DB -    cursor = conn.cursor()                 ## get a cursor - -    sql = "INSERT INTO snowfall(region,device,amount,tstamp) VALUES (?,?,?,?)" -    cursor.execute(sql, (reg, dev, amt, tstamp)) ## execute INSERT - -    conn.commit()  ## commit -    conn.close()   ## cleanup -``` - -SQLite automatically wraps single SQL statements (such as `INSERT` above) in a transaction, which accounts for the call to `conn.commit()` in the code. SQLite also supports multi-statement transactions. After calling `add_record`, the `handle_post` function winds up its work by sending an HTTP response confirmation message to the requester. - -The `handle_get` function also touches the database, but only to read the records in the snowfall table: - -``` -def handle_get(start_line): -    conn = sqlite3.connect(PATH_2_DB)        ## connect to DB -    cursor = conn.cursor()                   ## get a cursor -    cursor.execute("SELECT * FROM snowfall") - -    response_body = "
<h2>Snowfall report</h2><ul>"
-    rows = cursor.fetchall()
-    for row in rows:
-        response_body += "<li>" + str(row[0]) + '|'  ## primary key
-        response_body += row[1] + '|'                ## region
-        response_body += row[2] + '|'                ## device
-        response_body += str(row[3]) + '|'           ## amount
-        response_body += str(row[4]) + "</li>"       ## timestamp
-    response_body += "</ul>
" - -    conn.commit()  ## commit -    conn.close()   ## cleanup -    -    start_line('200 OK', [('Content-Type', 'text/html')]) -    return [response_body.encode()] -``` - -A user-friendly version of the snowfall application would support additional (and fancier) reports, but even this version of `handle_get` underscores the clean interface between Python and SQLite. By the way, uwsgi expects a response body to be a list of bytes. In the `return` statement, the call to `response_body.encode()` inside the square brackets generates the byte list from the `response_body` string. - -### Moving up to Nginx - -The Nginx web server can be installed on a Debian-based system with one command: - -``` -% sudo apt-get install nginx -``` - -As a web server, Nginx provides the expected services, such as wire-level security, HTTPS, user authentication, load balancing, media streaming, response compression, file uploading, etc. The Nginx engine is high-performance and stable, and this server can support dynamic content through a variety of programming languages. Using uwsgi as a very lightweight web server is an attractive option but switching to Nginx is a move up to industrial-strength web hosting with high-volume capability. Nginx and uwsgi are both implemented in C. - -With Nginx in play, uwsgi takes on a communication protocol's restricted roles and an application server; it no longer acts as an HTTP web server. Here's the revised architecture: - -``` -HTTP       uwsgi                   -requester<---->Nginx<----->app server<--->requestHandler.py -``` - -As noted earlier, Nginx includes uwsgi support and now acts as a reverse-proxy server that forwards designated HTTP requests to the uwsgi application server, which in turn interacts with the Python script `requestHandler.py`. Responses from the Python script move in the reverse direction so that Nginx sends the HTTP response back to the requesting client. - -Two changes bring this new architecture to life. The first launches uwsgi as an application server: - -``` -% uwsgi --socket 127.0.0.1:8001 --wsgi-file requestHandler.py -``` - -Socket 8001 is the Nginx default for uwsgi communications. For robustness, you could use the full path to the Python script so that the command above does not have to be executed in the directory that houses the Python script. In a production environment, uwsgi would start and stop automatically; for now, however, the emphasis remains on how the architectural pieces fit together. - -The second change involves Nginx configuration, which can be tricky on Debian-based systems. The main configuration file for Nginx is `/etc/nginx/nginx.conf`, but this file may have `include` directives for other files, in particular, files in one of three `/etc/nginx` subdirectories: `nginx.d`, `sites-available`, and `sites-enabled`. The `include` directives can be eliminated to simplify matters; in this case, the configuration occurs only in `nginx.conf`. I recommend the simple approach. - -However the configuration is distributed, the key section for having Nginx talk to the uwsgi application server begins with `http` and has one or more `server` subsections, which in turn have `location` subsections. Here's an example from the Nginx documentation: - -``` -... -http { -    # Configuration specific to HTTP and affecting all virtual servers   -    ... 
-    server { # simple reverse-proxy -       listen       80; -       server_name  domain2.com www.domain2.com; -       access_log   logs/domain2.access.log  main; - -       # serve static files -       location ~ ^/(images|javascript|js|css|flash|media|static)/  { -         root    /var/www/virtual/big.server.com/htdocs; -         expires 30d; -       } - -       # pass requests for dynamic content to rails/turbogears/zope, et al -       location / { -         proxy_pass      http://127.0.0.1:8080; -       } -     } -     ... -} -``` - -The `location` subsections are the ones of interest. For the snowfall application, here's the added `location` entry with its two configuration lines: - -``` -... -server { -   listen 80 default_server; -   listen [::]:80 default_server; - -   root /var/www/html; -   index index.html index.htm index.nginx-debian.html; - -   server_name _; - -   ### key addition for uwsgi communication -   location /snowfall { -      include uwsgi_params;       ## comes with Nginx -      uwsgi_pass 127.0.0.1:8001;  ## 8001 is the default for uwsgi -   } -   ... -} -... -``` - -To keep things simple for now, make `/snowfall` the only `location` in the configuration. With this configuration in place, Nginx listens on port 80 and dispatches HTTP requests ending with the `/snowfall` path to the uwsgi application server: - -``` -% curl -X POST -d "..." localhost/snowfall ## new POST -% curl -X GET localhost/snowfall           ## new GET -``` - -The port number 80 can be dropped from the request because 80 is the default server port for HTTP requests. - -If the configured location were simply `/` instead of `/snowfall`, then any HTTP request with `/` at the start of the path would be dispatched to the uwsgi application server. Accordingly, the `/snowfall` path leaves room for other locations and, therefore, for further actions in response to HTTP requests. - -Once you've changed the Nginx configuration with the added `location` subsection, you can start the web server: - -``` -% sudo systemctl start nginx -``` - -There are other commands similar to `stop` and `restart` Nginx. In a production environment, you could automate these actions so that Nginx starts on a system boot and stops on a system shutdown. - -With uwsgi and Nginx both running, you can use a browser to test whether the architectural components cooperate as expected. For example, if you enter the URL `localhost/` in the browser's input window, then the Nginx welcome page should appear with (HTML) content similar to this: - -``` -Welcome to nginx! -... -Thank you for using nginx. -``` - -By contrast, the URL `localhost/snowfall` should display the rows currently in the snowfall table: - -``` -Snowfall report - -    1|R1|D9|1.42|1604722088.0158753 -    2|R7|D4|2.11|1604722296.8862638 -    3|R5|D1|1.12|1604942236.1013834 -``` - -### Wrapping up - -The snowfall application shows how free software components—a high-powered web server, an ACID-compliant database system, and scripting for dynamic content—can support a realistic web application on a Raspberry Pi 4 platform. This lightweight machine lifts above its weight class, and Debian eases the lifting. - -The software components in the web application work well together and require very little configuration. For higher volume hits against a relational database, recall that a free and feature-rich alternative to SQLite is PostgreSQL. 
If you're eager to play on the Raspberry Pi 4—in particular, to explore server-side web programming on this platform—then Nginx, SQLite or PostgreSQL, uwsgi, and Python are worth considering. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/web-hosting-raspberry-pi - -作者:[Marty Kalin][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mkalindepauledu -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/browser_web_internet_website.png -[2]: https://www.raspberrypi.org/products/raspberry-pi-4-model-b/ -[3]: https://opensource.com/article/21/2/sqlite3-cheat-sheet -[4]: https://en.wikipedia.org/wiki/ACID -[5]: https://uwsgi-docs.readthedocs.io/en/latest/ -[6]: https://opensource.com/article/20/5/curl-cheat-sheet -[7]: https://condor.depaul.edu/mkalin diff --git a/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md b/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md deleted file mode 100644 index 41f528b552..0000000000 --- a/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md +++ /dev/null @@ -1,351 +0,0 @@ -[#]: subject: (Measure your Internet of Things with Raspberry Pi and open source tools) -[#]: via: (https://opensource.com/article/21/3/iot-measure-raspberry-pi) -[#]: author: (Darin London https://opensource.com/users/dmlond) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Measure your Internet of Things with Raspberry Pi and open source tools -====== -Setting up an environment-monitoring system demonstrates how to use open -source tools to keep tabs on temperature, humidity, and more. -![Metrics and a graph illustration][1] - -If you are interested in measuring and interacting with the world around you through the Internet of Things (IoT), there are a variety of inexpensive microcontrollers and microcomputers you can use. There are also many sensors available that connect to these devices to measure many aspects of the physical world. - -These sensors interface with the microcontroller boards using the [I2C][2] message bus, which programs that run on the boards can access using open source libraries in [MicroPython][3], Java, C#, and other popular programming languages. These devices and libraries make it very easy to create sophisticated data-collection systems. - -To demonstrate how easy and powerful this is, I built a greenhouse monitoring system using the following components that I purchased from [SparkFun][4]: - - * [Raspberry Pi Zero W with headers][5] - * [Power supply][6] - * [Qwiic pHAT][7] - * [Qwiic cables][8] - * [Qwiic Environmental Combo breakout][9] - * [Qwiic ambient light detector][10] - * [32GB microSD card][11] - * [Metal standoffs][12], [screws][13], and [nuts][14] - - - -Adafruit has very similar offerings and connection systems. - -### Getting to know Prometheus - -One of the first things you can do to start interacting with your world is to collect and analyze data acquired by sensors. Open source software makes it easy to collect, analyze, display, and even take action on your data. 
- -The [Prometheus][15] family of applications makes it easy to collect, store, and analyze data as a time series of individual events. I will briefly introduce the relevant parts of the Prometheus architecture; if you would like to learn more, there are many great articles about Prometheus on Opensource.com, including [_An introduction to monitoring with Prometheus_][16] and [_Achieve high-scale application monitoring with Prometheus_][17]. - -The Prometheus suite includes the following applications, which can be plugged together in various ways. - -#### [Prometheus][18] - -The main Prometheus service is a powerful time-series database that runs on a general-purpose computer, such as a Linux machine, cloud service, or Raspberry Pi (the Raspberry Pi 4 is recommended). A Prometheus instance can be configured to regularly "scrape" various file- and network-connected exporter services (e.g., HTTP, TCP, etc.) in the [Prometheus exposition format][19]. A single Prometheus service can be configured to scrape multiple targets, each with a unique job name. A scrape target publishes data in the form of events with a user-defined name, timestamp, value, and optional set of key-value annotations. If a data source publishes data without a timestamp, the scrape's exact time is automatically added to the event when it is stored. It can also be configured to communicate with one or more Alertmanager instances running on the same host or another host on the same network. - -Once events are published in a Prometheus service, they can be queried using the [Prometheus Query Language][20]. PromQL queries can be used to create tables and graphs of events. They can also be used to configure alerts, whereby a PromQL query condition's truth causes the Prometheus service to set the configured alert's firing state as `true`; this alert will remain in the firing state as long as the condition is true. Once the condition becomes false, the alert firing state is set to `false`. - -Multiple instances of an exporting service can publish the same metrics but differentiated by annotations to identify the sensor. For example, if you have three greenhouse monitors, each can publish its temperature, humidity, and other metrics, annotated with something like `greenhouse=1`, `greenhouse=2`, or `greenhouse=3`. Graphs, tables, and alerts can be configured to show all instances for a particular metric or just the metrics with specific annotations. - -All metrics stored in Prometheus are annotated with the job defined for the scrape target in the configuration. Every scrape target configured in a Prometheus service has a Boolean metric called `up`, which is set to `true` each time the service successfully scrapes the target and `false` when it cannot. This is a useful metric to use in PromQL queries to define alerts when a service goes down. - -#### [Alertmanager][21] - -The main Prometheus service does not act on alerts—it just holds the alerts' state as firing or not firing at any particular moment. The Alertmanager service works with a Prometheus service to set up notifications when alerts defined in Prometheus are firing. One or more Alertmanager services can be configured to run on general-purpose computers on the same network as the Prometheus service. - -Alertmanager notifications can be configured to communicate with various external systems, including email gateways, web service endpoints, chat services, and popular ticketing systems. 
Each notification can be templated to use various attributes about the event, including all of its annotations, to produce the notification message. - -#### [Node Exporter][22] - -Node Exporter is a very simple daemon that runs on a general-purpose computer host as a web service and exports data about that host via HTTP in the Prometheus exposition format. It is programmed to produce many different metrics about its host, such as CPU and memory utilization, using logic defined for each specific host architecture (e.g., proc filesystem, Windows Registry, etc.). - -A Node Exporter instance can also be configured to present one or more Prometheus exposition format compliant files on the host filesystem. This makes it useful for publishing metrics produced by another application running on the same host. The example greenhouse monitoring system uses a Python program to collect data from the sensors and produce a Prometheus-formatted export file, and Node Exporter publishes these metrics. - -#### [Pushgateway][23] - -A Raspberry Pi Zero, 3, or 4 can host a Node Exporter, but other microcontrollers (such as an Arduino or Raspberry Pi Pico) cannot. Pushgateway enables these devices to publish their metrics. It is a microservice that can run on another general-purpose computer host (such as a desktop, a cloud, or even a Rasberry Pi Zero, 3, or 4) and present a prometheus exposition formatted feed for a Prometheus service to scrape, and a REST API that other processes connected to its network can use to report custom metrics. - -A Pushgateway instance can run on the same host as the Prometheus service or a different host on the same network. If the microprocessor can communicate with the network using the Pushgateway and Prometheus services (e.g., an Ethernet cable, WiFi, or [LoRaWAN][24]), the process running on the microcontroller can use a standard HTTP library to report metrics using the Pushgateway REST API as part of its process loop. - -#### [Grafana][25] - -Grafana is not part of the Prometheus suite. It is an open source observability system designed to pull in data from multiple external data sources and integrate the data into customizable visualization dashboards. Grafana can pull data in from a variety of external system types, including Prometheus. It's another powerful, open source application that you can use to create sophisticated dashboards with the data produced by your devices. Grafana can also be installed onto a general-purpose computer, such as a desktop or a Raspberry Pi Zero, 3, or 4. (I installed it on the Raspberry Pi 4 that hosts the Prometheus and Alertmanager services.) - -There are plenty of tutorials available to help you get up and running with Grafana, including several on Opensource.com, such as _[The perfect combo with Prometheus and Grafana, and more industry trends][26]_ and _[Monitoring Linux performance with Grafana][27]_. - -Once Grafana is installed, use your browser to navigate to the Grafana host's hostname or internet protocol address (IP) at port 3000, and log in with the default credentials (**blank** / **admin**). Make sure to change the admin password. You can then add a data source and use the menu to choose the Prometheus main server's IP or host and port. Once you add the data source, you can start to graph data from Prometheus or create dashboards. - -If you are installing any of the above on a Raspberry Pi, ensure you download the [Prometheus][28] and [Grafana][29] binary distributions for your CPU's architecture. 
On a running Raspberry Pi, you can use either of these commands: - - * `uname -m` - * `cat /proc/cpuinfo` - - - -to get cpu architecture. It will say something like armv7. - -### Connect the Raspberry Pi Zero's sensors - -Once you have somewhere to store the data, you can assemble and configure the greenhouse monitoring device. I flashed the MicroSD card with the [Raspberry Pi OS Lite][30] image and configured it for [headless connection over WiFi][31]. I plugged the Qwiiic pHAT onto the Pi Zero headers and connected the Qwiic cables from the Qwiic pHAT to each of the light and environmental combo sensors. (Be sure to plug the yellow cable into the Qwiic pHAT on the side with the Pi header connection and into the sensors on the side with the I2C solder connection holes.) It is also possible to daisy-chain the sensors if you have only one Qwiic connection to your Raspberry Pi. - -![Wiring architecture][32] - -(Darin London, [CC BY-SA 4.0][33]) - -Once the Raspberry Pi is connected to the sensors, plug the SD card into its slot, connect the power supply, and power it up. It will boot up, and then you should be able to connect to the Raspberry Pi using: - - -``` -`ssh pi@raspbberrypi.local` -``` - -The default password is **raspberry**, but change it to something more secure using the `passwd` command. You can also use ping on your desktop to get the host's IP address and use it instead of the `raspberrypi.local` address. (This is useful if you have multiple Pis on your network.) - -### Install Node Exporter - -Install the Node Exporter application on your Raspberry Pi Zero by [downloading][34] the binary distribution for your architecture from the Prometheus website. Once it is installed, [configure it as a systemd service][35] so that it automatically starts and stops with the Raspberry Pi. - -### Install Python sensor libraries - -Raspberry Pi OS comes with Python 3, but it does not include the libraries required to interact with the sensors. Fortunately, there are Python libraries available. - -Install SparkFun's official [Qwiic_Py library][36] to access the sensors on the Environmental Combo breakout. If you are using Raspberry Pi OS Lite, you have to install [pip][37] (the Python package installer) for Python 3: - - -``` -`sudo apt install python3-pip` -``` - -The light sensor does not yet have an official SparkFun or Adafruit Python package, but you can get an open source [vml6030.py package][38] from its GitHub repo and copy it to `/home/pi` to use it in your monitoring application. It is based on the official SparkFun Arduino library. 
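Before moving on, it can be worth a quick check that the environmental sensor actually answers on the I2C bus. The sketch below only uses calls that also appear in the monitoring script in the next section (`is_connected()`, `begin()`, `temperature_fahrenheit`, `humidity`); it is a throwaway test, not part of the project's repository:

```
#!/usr/bin/python3
# Quick sanity check for the BME280 environmental sensor.
# A throwaway sketch, not part of the greenhouse project: it prints
# a single reading and exits, or complains if the sensor is missing.
import sys
import qwiic_bme280

sensor = qwiic_bme280.QwiicBme280()
if not sensor.is_connected():
    print("BME280 not found on the I2C bus; check the Qwiic cabling", file=sys.stderr)
    sys.exit(1)

sensor.begin()
print("temperature={} humidity={}".format(sensor.temperature_fahrenheit, sensor.humidity))
```

If this prints plausible values, the wiring and Python libraries are in place, and you can set up the monitor script and its services.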
- -### Install the greenhouse monitor code - -The `greenhouse_monitor.py` script in this project's [GitHub repo][39] uses the Python sensor libraries to append metrics for `ambient_temperature`, `ambient_humidity`, and `ambient_light` every 11 seconds to a file named `/home/pi/metrics.prom` in the format Prometheus expects: - - -``` -#!/usr/bin/python3 - -from veml6030 import VEML6030 -import smbus2 -import qwiic_bme280 -import time -import sys - -def instrument_metrics(light,temp,humidity): -  metrics_out = open('/home/pi/metrics.prom', 'w+') -  print('# HELP ambient_temperature temperature in fahrenheit', flush=True, file=metrics_out) -  print('# TYPE ambient_temperature gauge', flush=True, file=metrics_out) -  print(f'ambient_temperature {temp}', flush=True, file=metrics_out) -  print('# HELP ambient_light light in lux', flush=True, file=metrics_out) -  print('# TYPE ambient_light gauge', flush=True, file=metrics_out) -  print(f'ambient_light {light}', flush=True, file=metrics_out) -  print('# HELP ambient_humidity humidity in %RH', flush=True, file=metrics_out) -  print('# TYPE ambient_humidity gauge', flush=True, file=metrics_out) -  print(f'ambient_humidity {humidity}', flush=True, file=metrics_out) -  metrics_out.close() - -print("Starting Greenhouse Monitor") -bus = smbus2.SMBus(1)  # For Raspberry Pi -light_sensor = VEML6030(bus) -environment_sensor = qwiic_bme280.QwiicBme280() - -if environment_sensor.is_connected() == False: -        print("The Environment Sensor isn't connected to the system. Please check your connection", file=sys.stderr) -        exit(1) -environment_sensor.begin() -while True: -        light = light_sensor.read_light() -        temp = environment_sensor.temperature_fahrenheit -        humidity = environment_sensor.humidity -        instrument_metrics(light, temp, humidity) -        time.sleep(11) -``` - -This can be set up as a systemd service, `/etc/systemd/system/greenhouse_montor.service`: - - -``` -[Unit] -Description=Greenhouse Monitor -Documentation= -After=network-online.target - -[Service] -User=pi -Restart=on-failure - -ExecStart=/home/pi/greenhouse_monitor.py - -[Install] -WantedBy=multi-user.target -``` - -A Node Exporter can also be configured as a systemd service to publish the metrics file produced by the `greenhouse_montitor.py` script at `/etc/systemd/system/node_exporter.service`: - - -``` -[Unit] -Description=Node Exporter -Documentation= -After=network-online.target - -[Service] -User=pi -Restart=on-failure - -ExecStart=/usr/local/bin/node_exporter \ -  --no-collector.arp \ -  --no-collector.bcache \ -  --no-collector.bonding \ -  --no-collector.btrfs \ -  --no-collector.cpu --no-collector.cpufreq --no-collector.edac --no-collector.entropy --no-collector.filefd --no-collector.hwmon --no-collector.ipvs \ -  --no-collector.loadavg \ -  --no-collector.mdadm \ -  --no-collector.meminfo \ -  --no-collector.netdev \ -  --no-collector.netstat \ -  --no-collector.nfs \ -  --no-collector.nfsd \ -  --no-collector.rapl \ -  --no-collector.softnet \ -  --no-collector.stat \ -  --no-collector.time \ -  --no-collector.timex \ -  --no-collector.uname \ -  --no-collector.vmstat \ -  --no-collector.xfs \ -  --no-collector.zfs \ -  --no-collector.netclass \ -  --no-collector.powersupplyclass \ -  --no-collector.pressure \ -  --no-collector.diskstats \ -  --no-collector.filesystem \ -  --no-collector.conntrack \ -  --no-collector.infiniband \ -  --no-collector.schedstat \ -  --no-collector.sockstat \ -  --no-collector.thermal_zone \ -  
--no-collector.udp_queues \ -  --collector.textfile.directory=/home/pi - -[Install] -WantedBy=multi-user.target -``` - -Note that you can leave off all the `--nocollector.*` arguments, and `node_exporter` will export lots of metrics about the Raspberry Pi host and the `greenhouse_monitor` data. - -Once the systemd service definitions are in place, you can add and enable them using systemctl, and they will start as soon as your Raspberry Pi boots up and has a network: - - -``` -sudo systemctl enable greenhouse_monitor.py -sudo systemctl enable node_exporter -``` - -You can troubleshoot these services using: - - -``` -`sudo systemctl status $servicename` -``` - -The Python script and systemd service definition files are available in the [project's GitHub repo][39]. - -### Restart the Raspberry Pi Zero and start monitoring - -When the Raspberry Pi starts, it will start `greenhouse_monitor.py` and the `node_exporter` service. You can visit the `node_exporter` service using the IP or hostname of the Raspberry Pi running the greenhouse monitor at port 9100 (e.g., `http://$ip:9100`). Refresh every 11 seconds to see new entries. - -### Configure the Prometheus server scrape endpoint - -Once your greenhouse monitor's Node Exporter is exporting metrics, you can configure the Prometheus service to scrape it. Add the following lines to the `prometheus.yml` configuration file within the `scrape_configs` section (replace the IP in the targets with the IP of the device running the greenhouse_monitoring service on your network): - - -``` - - job_name: 'greenhouse_monitor' -  -        # metrics_path defaults to '/metrics' -        # scheme defaults to 'http'. -  -        static_configs: -        - targets: ['192.168.1.12:9100'] -``` - -Prometheus will automatically load the configuration file every few seconds and start scraping your greenhouse monitor. You can verify that it has started scraping (and get its up/down status) by visiting the Prometheus web user interface (UI) targets page at `http://$prometheus_host:9090/targets`. - -If it is up (and green), you can query metrics in the Prometheus web UI graphs page `http://$prometheus_host:9090/graph`. - -![Prometheus web UI graphs page][40] - -(Darin London, [CC BY-SA 4.0][33]) - -Once you are getting data in Prometheus, you can visit the Grafana service at `http://$graphana_host:3000`. I created a dashboard called Greenhouse with the panels for the three metrics exported by the greenhouse monitor. You can set Grafana to show data in the panels using the time controls. I was able to get the values for a 24-hour period from midnight to 11:59:59pm on the same day using the format `from: YYYY-MM-DD 00:00:00` and `To: YYYY-MM-DD 23:59:59`. - -![24-hour metrics][41] - -(Darin London, [CC BY-SA 4.0][33]) - -Notice the time of day when the sun was shining through a window onto the device? - -### What should you measure next? - -You have a treasure-trove of data at your fingertips to examine the physical world. Next, you could [configure Alertmanager][42] to send notifications through various communication technologies (e.g., webhooks, Slack, Gmail, PagerDuty, etc.) when alerts configured in Prometheus are firing. - -Now that you know how to measure your world, the question becomes: What do you want to measure? 
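As a sketch of what such an alert could look like, a rule file can watch the greenhouse metrics directly. The file name, threshold, and labels below are illustrative assumptions rather than part of the project; only the `ambient_humidity` metric name comes from the exporter above:

```
# greenhouse_rules.yml -- example thresholds only; tune them to your rooms
groups:
  - name: greenhouse
    rules:
      - alert: GreenhouseHumidityHigh
        expr: ambient_humidity > 70
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Greenhouse humidity has stayed above 70% RH for 15 minutes"
```

Reference the file from `prometheus.yml` with a `rule_files` entry and point its `alerting` section at your Alertmanager instance; Alertmanager then decides which receiver gets the notification.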
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/iot-measure-raspberry-pi - -作者:[Darin London][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dmlond -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D (Metrics and a graph illustration) -[2]: https://en.wikipedia.org/wiki/I%C2%B2C -[3]: https://micropython.org/ -[4]: https://www.sparkfun.com/ -[5]: https://www.sparkfun.com/products/15470 -[6]: https://www.sparkfun.com/products/13831 -[7]: https://www.sparkfun.com/products/15945 -[8]: https://www.sparkfun.com/products/15081 -[9]: https://www.sparkfun.com/products/14348 -[10]: https://www.sparkfun.com/products/15436 -[11]: https://www.sparkfun.com/products/14832 -[12]: https://www.sparkfun.com/products/10463 -[13]: https://www.sparkfun.com/products/10453 -[14]: https://www.sparkfun.com/products/10454 -[15]: https://prometheus.io/ -[16]: https://opensource.com/article/19/11/introduction-monitoring-prometheus -[17]: https://opensource.com/article/19/10/application-monitoring-prometheus -[18]: https://prometheus.io/docs/introduction/overview/ -[19]: https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md -[20]: https://prometheus.io/docs/prometheus/latest/querying/basics/ -[21]: https://prometheus.io/docs/alerting/latest/alertmanager/ -[22]: https://prometheus.io/docs/guides/node-exporter/ -[23]: https://prometheus.io/docs/practices/pushing -[24]: https://en.wikipedia.org/wiki/LoRa#LoRaWAN -[25]: https://grafana.com/ -[26]: https://opensource.com/article/20/5/Prometheus-Grafana-and-more-industry-trends -[27]: https://opensource.com/article/17/8/linux-grafana -[28]: https://prometheus.io/download/ -[29]: https://grafana.com/grafana/download -[30]: https://www.raspberrypi.org/software/operating-systems/ -[31]: https://www.raspberrypi.org/documentation/configuration/wireless/headless.md -[32]: https://opensource.com/sites/default/files/uploads/raspberrypi-qwiic-wiring.jpg (Wiring architecture) -[33]: https://creativecommons.org/licenses/by-sa/4.0/ -[34]: https://prometheus.io/docs/guides/node-exporter/#installing-and-running-the-node-exporter -[35]: https://pimylifeup.com/raspberry-pi-prometheus -[36]: https://github.com/sparkfun/Qwiic_Py -[37]: https://pypi.org/project/pip/ -[38]: https://github.com/n8many/VEML6030py -[39]: https://github.com/dmlond/greenhouse -[40]: https://opensource.com/sites/default/files/pictures/prometheus-web-ui-graphs-page.png (Prometheus web UI graphs page) -[41]: https://opensource.com/sites/default/files/uploads/24-hour-metrics.png (24-hour metrics) -[42]: https://prometheus.io/docs/alerting/latest/configuration/ diff --git a/sources/tech/20210306 Use FreeBSD jails on Raspberry Pi.md b/sources/tech/20210306 Use FreeBSD jails on Raspberry Pi.md deleted file mode 100644 index 1b7ebb7bfe..0000000000 --- a/sources/tech/20210306 Use FreeBSD jails on Raspberry Pi.md +++ /dev/null @@ -1,268 +0,0 @@ -[#]: subject: "Use FreeBSD jails on Raspberry Pi" -[#]: via: "https://opensource.com/article/21/3/bastille-raspberry-pi" -[#]: author: "Peter Czanik https://opensource.com/users/czanik" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " 
-[#]: url: " " - -Use FreeBSD jails on Raspberry Pi -====== -Create and maintain your containers (aka jails) at scale on FreeBSD with Bastille. - -![Parts, modules, containers for software][1] - -Image by: Opensource.com - -Containers became widely popular because of Docker on Linux, but there are [much earlier implementations][2], including the [jail][3] system on FreeBSD. A container is called a "jail" in FreeBSD terminology. The jail system was first released in FreeBSD 4.0 way back in 2000, and it has continuously improved since. While 20 years ago it was used mostly on large servers, now you can run it on your Raspberry Pi. - -### Jails vs. containers on Linux - -Container development took a very different path on FreeBSD than on Linux. On FreeBSD, containerization was developed as a strict security feature in the late '90s for virtual hosting and its flexibility grew over the years. Limiting a container's computing resources was not part of the original concept; this was added later. - -When I started to use jails in production in 2001, it was quite painful. I had to prepare my own scripts to automate working with them. - -On the Linux side, there were quite a few attempts at containerization, including [lxc][4]. - -Docker brought popularity, accessibility, and ease of use to containers. There are now many other tools on Linux (for example, I prefer to use [Podman on my laptop][5]). And Kubernetes allows you to work with containers at really large scale. - -[Bastille][6] is one of several tools available in [FreeBSD ports][7] to manage jails. It is comparable to Docker or Podman and allows you to create and maintain jails at scale instead of manually. It has a template system to automatically install and configure applications within jails, similar to Dockerfile. It also supports advanced FreeBSD functionality, like ZFS or VNET. - -### Install FreeBSD on Raspberry Pi - -Installing [BSD on Raspberry Pi][8] is pretty similar to installing Linux. You download a compressed image from the FreeBSD website and `dd` it to an SD card. You can also use a dedicated image writer tool; there are many available for all operating systems (OS). Download and write an image from the command line with: - -``` -wget https://download.freebsd.org/ftp/releases/arm64/aarch64/ISO-IMAGES/13.0/FreeBSD-13.0-BETA1-arm64-aarch64-RPI.img.xz -xzcat FreeBSD-13.0-BETA1-arm64-aarch64-RPI.img.xz | dd of=/dev/XXX -``` - -That writes the latest beta image available for 64-bit Raspberry Pi boards; check the [download page][9] if you use another Raspberry Pi board or want to use another build. Replace `XXX` with your SD card's device name, which depends on your OS and how the card connects to your machine. I purposefully did not use a device name so that you won't overwrite anything if you just copy and paste the instructions mindlessly. I did that and was lucky to have a recent backup of my laptop, but it was *not* a pleasant experience. - -Once you've written the SD card, put it in your Raspberry Pi and boot it. The first boot takes a bit longer than usual; I suspect the partition sizes are being adjusted to the SD card's size. After a while, you will receive the familiar login prompt on a good old text-based screen. The username is **root**, and the password is the same as the user name. The SSH server is enabled by default, but don't worry; the root user cannot log in. It is still a good idea to change the password to something else. 
The network is automatically configured by DHCP for the Ethernet connection (I did not test WiFi). - -The easiest way to configure Bastille on the system is to SSH into Raspberry Pi and copy and paste the commands and configuration in this article. You have a couple of options, depending on how much you care about industry best practices or are willing to treat it as a test system. You can either enable root login in the SSHD configuration (scary, but this is what I did at first) or create a regular user that can log in remotely. In the latter case, make sure that the user is part of the "wheel" group so that it can use `su -` to become root and use Bastille: - -``` -root@generic:~ # adduser -Username: czanik -Full name: Peter Czanik -Uid (Leave empty for default): -Login group [czanik]: -Login group is czanik. Invite czanik into other groups? []: wheel -Login class [default]: -Shell (sh csh tcsh bash rbash git-shell nologin) [sh]: bash -Home directory [/home/czanik]: -Home directory permissions (Leave empty for default): -Use password-based authentication? [yes]: -Use an empty password? (yes/no) [no]: -Use a random password? (yes/no) [no]: -Enter password: -Enter password again: -Lock out the account after creation? [no]: -Username   : czanik -Password   : ***** -Full Name  : Peter Czanik -Uid        : 1002 -Class      : -Groups     : czanik wheel -Home       : /home/czanik -Home Mode  : -Shell      : /usr/local/bin/bash -Locked     : no -OK? (yes/no): yes -adduser: INFO: Successfully added (czanik) to the user database. -Add another user? (yes/no): no -Goodbye! -``` - -The fifth line adds the user to the wheel group. Note that you might have a different list of shells on your system, and Bash is not part of the base system. Install Bash before adding the user: - -``` -pkg install bash -``` - -PKG needs to bootstrap itself on the first run, so invoking the command takes a bit longer this time. - -### Get started with Bastille - -Managing jails with the tools in the FreeBSD base system is possible—but not really convenient. Using a tool like Bastille can simplify it considerably. It is not part of the base system, so install it: - -``` -pkg install bastille -``` - -As you can see from the command's output, Bastille has no external dependencies. It is a shell script that relies on commands in the FreeBSD base system (with an exception I'll note later when explaining templates). - -If you want to start your containers on boot, enable Bastille: - -``` -sysrc bastille_enable="YES" -``` - -Start with a simple use case. Many people use containers to install different development tools in different containers to avoid conflicts or simplify their environments. For example, no sane person wants to install Python 2 on a brand-new system—but you might need to run an ancient script every once in a while. So, create a jail for Python 2. - -Before creating your first jail, you need to bootstrap a FreeBSD release and configure networking. Just make sure that you bootstrap the same or an older release than the host is running. For example: - -``` -bastille bootstrap 12.2-RELEASE -``` - -It downloads and extracts this release under the `/usr/local/bastille` directory structure. - -Networking can be configured in many different ways using Bastille. One option that works everywhere—on your local machine and in the cloud—is using cloned interfaces. This allows jails to use an internal network that does not interfere with the external network. 
Configure and start this internal network: - -``` -sysrc cloned_interfaces+=lo1 -sysrc ifconfig_lo1_name="bastille0" -service netif cloneup -``` - -With this network setup, services in your jails are not accessible from the outside network, nor can they reach outside. You need forward ports from your host's external interface to the jails and to enable network access translation (NAT). Bastille integrates with BSD's [PF firewall][10] for this task. The following `pf.conf` configures the PF firewall such that Bastille can add port forwarding rules to the firewall dynamically: - -``` -ext_if="ue0" - -set block-policy return -scrub in on $ext_if all fragment reassemble -set skip on lo - -table persist -nat on $ext_if from to any -> ($ext_if) - -rdr-anchor "rdr/*" - -block in all -pass out quick modulate state -antispoof for $ext_if inet -pass in inet proto tcp from any to any port ssh flags S/SA modulate state -``` - -You also need to enable and start PF for these rules to take effect. Note that if you work through an SSH connection, starting PF will terminate your connection, and you will need to log in again: - -``` -sysrc pf_enable="YES" -service pf restart -``` - -### Create your first jail - -To create a jail, Bastille needs a few parameters. First, it needs a name for the jail you're creating. It is an important parameter, as you will always refer to a jail by its name. I chose the name of the most famous Hungarian jail for the most elite criminals, but in real life, jail names often refer to the jail's function, like `syslogserver`. You also need to set the FreeBSD release you're using and an internet protocol (IP) address. I used a random `10.0.0.0/8` IP address range, but if your internal network already uses addresses from that, then using the `192.168.0.0/16` is probably a better idea: - -``` -bastille create csillag 12.2-RELEASE 10.17.89.51 -``` - -Your new jail should be up and running within a few seconds. It is a complete FreeBSD base system without any extra packages. So install some packages, like my favorite text editor, inside the jail: - -``` -root@generic:~ # bastille pkg csillag install joe -[csillag]: -Updating FreeBSD repository catalogue... -FreeBSD repository is up to date. -All repositories are up to date. -The following 1 package(s) will be affected (of 0 checked): - -New packages to be INSTALLED: -        joe: 4.6,1 - -Number of packages to be installed: 1 - -The process will require 2 MiB more space. -442 KiB to be downloaded. - -Proceed with this action? [y/N]: y -[csillag] [1/1] Fetching joe-4.6,1.txz: 100%  442 KiB 452.5kB/s    00:01     -Checking integrity... done (0 conflicting) -[csillag] [1/1] Installing joe-4.6,1... -[csillag] [1/1] Extracting joe-4.6,1: 100% -``` - -You can install multiple packages at the same time. Install Python 2, Bash, and Git: - -``` -bastille pkg csillag install bash python2 git -``` - -Now you can start working in your new, freshly created jail. There are no network services installed in it, but you can reach it through its console: - -``` -root@generic:~ # bastille console csillag -[csillag]: -root@csillag:~ # python2 -Python 2.7.18 (default, Feb  2 2021, 01:53:44) -[GCC FreeBSD Clang 10.0.1 (git@github.com:llvm/llvm-project.git llvmorg-10.0.1- on freebsd12 -Type "help", "copyright", "credits" or "license" for more information. ->>> -root@csillag:~ # logout - -root@generic:~ # -``` - -### Work with templates - -The previous example manually installed some packages inside a jail. 
Setting up jails manually is no fun, even if Bastille makes it easy. Templates make the process even easier; they are similar to Dockerfiles but not entirely the same concept. You bootstrap templates for Bastille just like FreeBSD releases and then apply them to jails. When you apply a template, it will install the necessary packages and change configurations as needed. - -To use templates, you need to install Git on the host: - -``` -pkg install git -``` - -For example, to bootstrap the `syslog-ng` template, use: - -``` -bastille bootstrap https://gitlab.com/BastilleBSD-Templates/syslog-ng -``` - -Create a new jail, apply the template, and redirect an external port to it: - -``` -bastille create alcatraz 12.2-RELEASE 10.17.89.50 -bastille template alcatraz BastilleBSD-Templates/syslog-ng -bastille rdr alcatraz tcp 514 514 -``` - -To test the new service within the jail, use telnet to connect port 514 of your host and enter some random text. Use the `tail` command within your jail to see what you just entered: - -``` -root@generic:~ # tail /usr/local/bastille/jails/alcatraz/root/var/log/messages -Feb  6 03:57:27 alcatraz sendmail[3594]: gethostbyaddr(10.17.89.50) failed: 1 -Feb  6 04:07:13 alcatraz syslog-ng[1186]: Syslog connection accepted; fd='23', client='AF_INET(192.168.1.126:50104)', local='AF_INET(0.0.0.0:514)' -Feb  6 04:07:18 192.168.1.126 this is a test -Feb  6 04:07:20 alcatraz syslog-ng[1186]: Syslog connection closed; fd='23', client='AF_INET(192.168.1.126:50104)', local='AF_INET(0.0.0.0:514)' -``` - -Since I'm a [syslog-ng][11] evangelist, I used the syslog-ng template in my example, but there are many more available. Check the full list of [Bastille templates][12] to learn about them. - -### What's next? - -I hope that this article inspires you to try FreeBSD and Bastille on your Raspberry Pi. It was just enough information to get you started; to learn about all of Bastille's cool features—like auditing your jails for vulnerabilities and updating software within them—in the [documentation][13]. 
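As a small taste of those maintenance features: `bastille pkg` simply forwards its arguments to `pkg(8)` inside the target jail, as the `install` examples above showed, so vulnerability checks and upgrades work the same way. Treat the exact invocations below as assumptions to verify against the Bastille documentation for your version:

```
# Check the packages inside the alcatraz jail against FreeBSD's vulnerability database
bastille pkg alcatraz audit -F

# Upgrade the packages inside every jail at once
bastille pkg ALL upgrade
```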
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/bastille-raspberry-pi - -作者:[Peter Czanik][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/czanik -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/containers_modules_networking_hardware_parts.png -[2]: https://opensource.com/article/18/1/history-low-level-container-runtimes -[3]: https://docs.freebsd.org/en/books/handbook/jails/ -[4]: https://opensource.com/article/18/11/behind-scenes-linux-containers -[5]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers -[6]: https://bastillebsd.org/ -[7]: https://www.freebsd.org/ports/ -[8]: https://opensource.com/article/19/3/netbsd-raspberry-pi -[9]: https://www.freebsd.org/where/ -[10]: https://en.wikipedia.org/wiki/PF_(firewall) -[11]: https://www.syslog-ng.com/ -[12]: https://gitlab.com/BastilleBSD-Templates/ -[13]: https://bastille.readthedocs.io/en/latest/ diff --git a/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md b/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md deleted file mode 100644 index 2a9a7650f4..0000000000 --- a/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md +++ /dev/null @@ -1,201 +0,0 @@ -[#]: subject: (How to Install Nvidia Drivers on Linux Mint [Beginner’s Guide]) -[#]: via: (https://itsfoss.com/nvidia-linux-mint/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to Install Nvidia Drivers on Linux Mint [Beginner’s Guide] -====== - -[Linux Mint][1] is a fantastic Ubuntu-based Linux distribution that aims to make it easy for newbies to experience Linux by minimizing the learning curve. - -Not just limited to being one of the [best beginner-friendly Linux distros][2], it also does a [few things better than Ubuntu][3]. Of course, if you’re using Linux Mint like I do, you’re probably already aware of it. - -We have many beginner-focused Mint tutorials on It’s FOSS. Recently some readers requested help with Nvidia drivers with Linux Mint and hence I came up with this article. - -I have tried to mention different methods with a bit of explaining what’s going on and what you are doing in these steps. - -But before that, you should know this: - - * Nvidia has two categories of drivers. Open source drivers called Nouveau and proprietary drivers from Nvidia itself. - * Most of the time, Linux distributions install the open source Nouveau driver and you can manually enable the proprietary drivers. - * Graphics drivers are tricky things. For some systems, Nouveau works pretty well while for some it could create issues like blank screen or poor display. You may switch to proprietary drivers in such cases. - * The proprietary driver from Nvidia has different version numbers like 390, 450, 460. The higher the number, the more recent is the driver. I’ll show you how to change between them in this tutorial. - * If you are opting for proprietary drivers, you should go with the latest one unless you encounter some graphics issue. In those cases, opt for an older version of the driver and see if that works fine for you. 
- - - -Now that you have some familiarity with the terms, let’s see how to go about installing Nvidia drivers on Linux Mint. - -### How to Install Nvidia Drivers on Linux Mint: The Easy Way (Recommended) - -Linux Mint comes baked in with a [Driver Manager][4] which easily lets you choose/install a driver that you need for your hardware using the GUI. - -By default, you should see the open-source [xserver-xorg-video-nouveau][5] driver for Nvidia cards installed, and it works pretty well until you start playing a high-res video or want to play a [game on Linux][6]. - -So, to get the best possible experience, proprietary drivers should be preferred. - -You should get different proprietary driver versions when you launch the Driver Manager as shown in the image below: - -![][7] - -Basically, the higher the number, the latest driver it is. At the time of writing this article, driver **version 460** was the latest recommendation for my Graphics Card. You just need to select the driver version and hit “**Apply Changes**“. - -Once done, all you need to do is just reboot your system and if the driver works, you should automatically get the best resolution image and the refresh rate depending on your monitor for the display. - -For instance, here’s how it looks for me (while it does not detect the correct size of the monitor): - -![][8] - -#### Troubleshooting tips - -Depending on your card, the list would appear to be different. So, **what driver version should you choose?** Here are some pointers for you: - - * The latest drivers should ensure compatibility with the latest games and should technically offer better performance overall. Hence, it is the recommended solution. - * If the latest driver causes issues or fails to work, choose the next best offering. For instance, version 460 didn’t work, so I tried applying driver version 450, and it worked! - - - -Initially, in my case (**Linux Mint 20.1** with **Linux Kernel 5.4**), the latest driver 460 version did not work. Technically, it was successfully installed but did not load up every time I booted. - -**What to do if drivers fail to load at boot** - -_How do you know when it does not work?_ You will boot up with a low-resolution screen, and you will be unable to tweak the resolution or the refresh rate of the monitor. - -It will also inform you about the same in the form of an error: - -![][9] - -Fortunately, a solution from [Linux Mint’s forum][10] solved it for me. Here’s what you need to do: - -1\. Access the modules file using the command: - -``` -xed admin:///etc/modules -``` - -2\. You’ll be prompted to authenticate the access with your account password. Once done, you just need to add the following lines at the bottom: - -``` -nvidia -nvidia-drm -nvidia-modeset -``` - -Here’s what it looks like: - -![][11] - -If that doesn’t work, you can launch the Driver Manager and opt for another version of Nvidia driver. It’s more of a hit and try. - -### Install Nvidia Driver Using the Terminal (Special Use-Cases) - -For some reasons, if you are not getting the latest drivers for your Graphics Card using the Driver Manager, opting for the terminal method could help. - -It may not be the safest way to do it, but I did not have any issues installing the latest Nvidia driver 460 version. - -I’ll always recommend sticking to the Driver Manager app unless you have your reasons. - -To get started, first you have to check the available drivers for your GPU. 
Type in the following command to get the list: - -``` -ubuntu-drivers devices -``` - -Here’s how it looks in my case: - -![][12] - -**non-free** refers to the proprietary drivers and **free** points at the open-source nouveau Nvidia drivers. - -As mentioned above, usually, it is preferred to try installing the recommended driver. In order to do that, you just type in: - -``` -sudo ubuntu-drivers autoinstall -``` - -If you want something specific, type in: - -``` -sudo apt install nvidia-driver-450 -``` - -You just have to replace “**450**” with the driver version that you want and it will install the driver in the same way that you install an application via the terminal. - -Once installed, you just need to restart the system or type it in the terminal: - -``` -reboot -``` - -**To check the Nvidia driver version and verify the installation, you can type the following command in the terminal:** - -``` -nvidia-smi -``` - -Here’s how it may look like: - -![][13] - -To remove the driver and its associated dependencies, simply mention the exact version of the driver: - -``` -sudo apt remove nvidia-driver-450 -sudo apt autoremove -``` - -And, simply reboot. It should fallback to use the open-source nouveau driver. - -install the open-source driver using the following command and then reboot to revert to the default open-source driver: - -``` -sudo apt install xserver-xorg-video-nouveau -``` - -### Installing Nvidia Drivers using the .run file from Official Website (Time Consuming/Not Recommended) - -Unless you want the latest version of the driver from the official website or just want to experiment the process, you can opt to download the file (.run) and install it. - -To proceed, you need to first disable the X server and then install the Nvidia driver which could turn out to be troublesome and risky. - -You can follow the [official documentation][14] if you want to explore this method, but you may not need it at all. - -### Wrapping Up - -While it’s easy to install Nvidia drivers in Linux Mint, occasionally, you might find something that does not work for your hardware. - -If one driver version does not work, I’d suggest you to try other available versions for your Graphics Card and stick to the one that works. Unless you’re gaming and want the latest software/hardware compatibility, you don’t really need the latest Nvidia drivers installed. - -Feel free to share your experiences with installing Nvidia drivers on Linux Mint in the comments down below. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/nvidia-linux-mint/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://linuxmint.com/ -[2]: https://itsfoss.com/best-linux-beginners/ -[3]: https://itsfoss.com/linux-mint-vs-ubuntu/ -[4]: https://github.com/linuxmint/mintdrivers -[5]: https://nouveau.freedesktop.org/ -[6]: https://itsfoss.com/linux-gaming-guide/ -[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-driver-manager.jpg?resize=800%2C548&ssl=1 -[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-display-settings.jpg?resize=800%2C566&ssl=1 -[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-no-driver.jpg?resize=593%2C299&ssl=1 -[10]: https://forums.linuxmint.com/viewtopic.php?p=1895521#p1895521 -[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/etc-modules-nvidia.jpg?resize=800%2C587&ssl=1 -[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-device-drivers-list.jpg?resize=800%2C506&ssl=1 -[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/nvidia-smi.jpg?resize=800%2C556&ssl=1 -[14]: https://download.nvidia.com/XFree86/Linux-x86_64/440.82/README/installdriver.html diff --git a/sources/tech/20210308 6 open source tools for wedding planning.md b/sources/tech/20210308 6 open source tools for wedding planning.md deleted file mode 100644 index e261941a4c..0000000000 --- a/sources/tech/20210308 6 open source tools for wedding planning.md +++ /dev/null @@ -1,111 +0,0 @@ -[#]: subject: (6 open source tools for wedding planning) -[#]: via: (https://opensource.com/article/21/3/open-source-wedding-planning) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -6 open source tools for wedding planning -====== -Create the event of your dreams with open source software. -![Outdoor wedding sign][1] - -If I were to say I had planned on writing this article a year or so ago, I would be wrong. So, I'll give you a small amount of backstory about how this came to be. - -On March 21st, I will be "getting married." I put that in quotes because I got married in Las Vegas on March 21, 2019. But I'm getting married again because my mom, who told us to elope, decided she was wrong and wanted a real wedding. So here I am, planning a wedding. - -![Vegas wedding][2] - -(Jess Cherry, [CC BY-SA 4.0][3]) - -Planning hasn't been smooth. We have moved the event twice due to the pandemic. My wedding planner got pregnant in the middle of it all, and since she's due in March, everything is now in my lap. About three-quarters of our invitations did not make it to their destinations because of weird mail issues, so we're sorting out our guests by text messages. - -But all of my poor luck has led to this, a moment when I can share my list of open source tools that are helping me survive wedding planning, even at the last minute. - -### Budgeting this whole thing - -Let's talk about budgets. As seems to be typical, mine went above and beyond what I'd originally allocated. I chose [HomeBank][4], which I wrote about last year, so I am familiar with it. 
- -I put all my wedding expenses in HomeBank as debts so that I could show my overall basic costs (not counting all the extra stuff I bought for the most expensive party I will ever throw). Once they are marked as debts, I can add a transaction and an income to it to pay for everything and keep track of what I owe. - -Here's an example of what such a budgeting might look like in HomeBank. - -![HomeBank][5] - -(Jess Cherry, [CC BY-SA 4.0][3]) - -### Keep track of invitations and guests - -I did not have a proper guest list at the outset, so I needed a way to manage my guests. I went with [LibreOffice Calc][6], because everyone needs sheets with counts and plans. Here is an example of what I ended up with. I used it to tally up numbers, so I could move on to planning how many tables I needed at the party. I summed the number of guests at the bottom of Column B to get the total. - -![LibreOffice Calc][7] - -(Jess Cherry, [CC BY-SA 4.0][3]) - -### Table time - -Certain venues, like mine, require you to provide table arrangements a month before the event so that they can be prepared for the right amount of settings and silverware. And drinks, because that's important to have for dancing and whatnot. - -The venue gave me a PDF for my table setup, but I decided to use [LibreOffice Draw][8] instead because I had an extra table I didn't need, and my counts were off due to our original guest list dropping considerably. But here's my drawing of where I want the tables to be (including the table I tossed due to our lower number of guests). - -![LibreOffice Draw][9] - -(Jess Cherry, [CC BY-SA 4.0][3]) - -### How about a timeline? - -One of the major pieces of event planning is having a timeline for the day to make sure everything goes according to plan. Spoiler alert: I can promise mine won't. I asked Opensource.com's productivity expert [Kevin Sonney][10] for help finding something to help me outline the big day and the rehearsal dinner the day before. - -I have two problems. One, I need to share the timeline with multiple people. Two, those people do not do computers for a living, like we do, so a heavily command-line option wouldn't work. I selected something Kevin wrote about in his [productivity article series][11] this year: KDE Plasma Kontact's [KOrganizer][12] using the timeline mode. I stacked an entire day into one timeline and produced this fancy set of blocks. (Don't mind this looking weird; it's a first draft.) - -![KOrganizer][13] - -(Jess Cherry, [CC BY-SA 4.0][3]) - -I also suggest keeping everything on your to-do lists inside KOrganizer, so you don't get lost while you're working through everything. Best of all, if you need to export all of this information and put it somewhere like a popular, regularly used application (e.g., Google, because well, it's Google), it exports and imports well. - -### Open source wedding tools for the pandemic - -OK, so before we all rush to judgment on this, I am aware we're still in the middle of a pandemic. The wedding planning started forever ago, and guess when the pandemic started. March… It all started in March of last year. That should tell you exactly how my plans have been going. - -In case you are wondering about my backup plan (since nearly three-quarters of the original guest list can't attend), the plan is to livestream this show. This leads me to two different conversations. 
One, I believe this is the future of weddings because it's cool to show everyone in your life this amazing moment, so from now on, wedding planners will have to add this to their services list, pandemic or not. - -Two, how can I achieve this goal of livestreaming the whole event? That's easy: I have a laptop and a camera, the DJ has clip-on microphones, and a bunch of cool people write about livestreaming all the time. [Seth Kenlon][14] wrote an entire article on [live streaming with OBS][15], so I can just walk through everything about a week before and share it out. If I decide to edit and publish the video, [Don Watkins][16] gave a great walkthrough of [Kaltura][17] to get me through the post-wedding things. - -### Final thoughts - -If you are good with open source software and organizing, you can be the wedding planner of anyone's dreams, or you can just plan your own wedding and stay organized. I would give bonus points to anyone who can get all of this running on a [Raspberry Pi 400][18] because that would be the easiest way to have everything with you in a package that's smaller than a laptop. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/open-source-wedding-planning - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wedding-sign.jpg?itok=e3zagA4b (Outdoor wedding sign) -[2]: https://opensource.com/sites/default/files/uploads/wedding.jpg (Vegas wedding) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/article/20/2/open-source-homebank -[5]: https://opensource.com/sites/default/files/uploads/homebank.png (HomeBank) -[6]: https://www.libreoffice.org/discover/calc/ -[7]: https://opensource.com/sites/default/files/uploads/libreofficecalc.png (LibreOffice Calc) -[8]: https://www.libreoffice.org/discover/draw/ -[9]: https://opensource.com/sites/default/files/uploads/libreofficedraw.png (LibreOffice Draw) -[10]: https://opensource.com/users/ksonney -[11]: https://opensource.com/article/21/1/kde-kontact -[12]: https://kontact.kde.org/components/korganizer.html -[13]: https://opensource.com/sites/default/files/uploads/kontact-korganizer.png (KOrganizer) -[14]: https://opensource.com/users/seth -[15]: https://opensource.com/article/20/4/open-source-live-stream -[16]: https://opensource.com/users/don-watkins -[17]: https://opensource.com/article/18/9/kaltura-video-editing -[18]: https://www.raspberrypi.org/products/raspberry-pi-400/ diff --git a/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md b/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md deleted file mode 100644 index 0c5f528946..0000000000 --- a/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md +++ /dev/null @@ -1,276 +0,0 @@ -[#]: subject: (Collect sensor data with your Raspberry Pi and open source tools) -[#]: via: (https://opensource.com/article/21/3/sensor-data-raspberry-pi) -[#]: author: (Peter Czanik https://opensource.com/users/czanik) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Collect sensor data 
with your Raspberry Pi and open source tools -====== -Learning more about what is going on in your home is not just useful; -it's fun! -![Working from home at a laptop][1] - -I have lived in 100-plus-year-old brick houses for most of my life. They look nice, they are comfortable, and usually, they are not too expensive. However, humidity is high in the winter in my climate, and mold is a recurring problem. A desktop thermometer that displays relative humidity is useful for measuring it, but it does not provide continuous monitoring. - -In comes the Raspberry Pi: It is small, inexpensive, and has many sensor options, including temperature and relative humidity. It can collect data around the clock, do some alerting, and forward data for analysis. - -Recently, I participated in an experiment by [miniNodes][2] to collect and process environmental data on an all-[Arm][3] network of computers. One of my network's nodes was a [Raspberry Pi][4] that collected environmental data above my desk. Once the project was over, I was allowed to keep the hardware and play with it. This became my winter holiday project. Learning [Python][5] or [Elasticsearch][6] just to know more about them is boring. Having a practical project that utilizes these technologies is not just useful but also makes learning fun. - -Originally, I planned to utilize only these two technologies. Unfortunately, my good old Arm "server," an [OverDrive 1000][7] machine for developers, and my Xeon server are too loud for continuous use above my desk. I turn them on only when I need them, which means some kind of buffering is necessary when the servers are offline. Implementing buffering for Elasticsearch as a beginner Python coder looked a bit difficult. Luckily, I know a tool that can buffer data and send it to Elasticsearch: [syslog-ng][8]. - -### A note about licensing - -Elastic, the maintainer of Elasticsearch, has recently changed the project's license from the Apache License, an extremely permissive license approved by the Open Source Initiative, to a more restrictive license "[to protect our products and brand from abuse][9]." The term "abuse" in this context refers to the tendency of companies using Elasticsearch and Kibana and providing them to customers directly as a service without collaborating with Elastic or the Elastic community (a common critique of permissive licenses). It's still unclear how this affects users, but it's an important discussion for the open source community to have, especially as cloud services become more and more common. - -To keep your project open source, use Elasticsearch version 7.10 under the Apache License. - -### Configure data collection - -For data collection, I have a [Raspberry Pi Model 3B+][10] with the latest Raspberry Pi OS version and a set of sensors from [SparkFun][11] connected to a [Qwiic pHat][12] add-on board (this board has been discontinued, but there are more recent boards that provide the same functionality). Since monitoring GPS does not make much sense with a fixed location and there is no lightning to detect during the winter, I connected only the environmental sensor. You can collect data from the sensor using [Python scripts available on GitHub][13]. - -Install the Python modules locally as a user: - - -``` -`pip3 install sparkfun-qwiic-bme280` -``` - -There are three example scripts you can use to check data collection. 
You can download them using your browser or Git: - - -``` -`git clone https://github.com/sparkfun/Qwiic_BME280_Py/` -``` - -When you start the script, it will print data in a nice, human-readable format: - - -``` -pi@raspberrypi:~/Documents/Qwiic_BME280_Py/examples $ python3 qwiic_bme280_ex1.py - -SparkFun BME280 Sensor  Example 1 - -Humidity:       58.396 -Pressure:       128911.984 -Altitude:       -6818.388 -Temperature:    70.43 - -Humidity:       58.390 -Pressure:       128815.051 -Altitude:       -6796.598 -Temperature:    70.41 - -^C -Ending Example 1 -``` - -I am from Europe, so the default temperature data did not make much sense to me. Luckily, you can easily rewrite the code to use the metric system: just replace `temperature_fahrenheit` with `temperature_celsius`. Pressure and altitude showed some crazy values, even when I changed to the metric system, but I did not debug them. The humidity and temperature values were pretty close to what I expected (based on my desktop thermometer). - -Once I verified that the relevant sensors work as expected, I started to develop my own code. It is pretty simple. First, I made sure that it printed values every second to the terminal, then I added syslog support: - - -``` -#!/usr/bin/python3 - -import qwiic_bme280 -import time -import sys -import syslog - -# initialize sensor -sensor = qwiic_bme280.QwiicBme280() -if sensor.connected == False: -  print("Sensor not connected. Exiting") -  sys.exit(1) -sensor.begin() - -# collect and log time, humidity and temperature -while True: -  t = time.localtime() -  current_time = time.strftime("%H:%M:%S", t) -  current_humidity = sensor.humidity -  current_temperature = sensor.temperature_celsius -  print("time={} humidity={} temperature={}".format(current_time,current_humidity,current_temperature)) -  message = "humidity=" + str(current_humidity) + " temperature=" + str(current_temperature) -  syslog.syslog(message) -  time.sleep(1) -``` - -As I start the Python script using the [screen][14] utility, I also print data to the terminal. Check if the collected data arrives into syslog-ng using the `tail` command: - - -``` -pi@raspberrypi:~ $ tail -3 /var/log/messages -Jan  5 12:11:24 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.4 -Jan  5 12:11:25 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.4 -Jan  5 12:11:26 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.39 -``` - -### Configure Elasticsearch - -The 1GB RAM in my Pi 3B+ is way too low to run Elasticsearch and [Kibana][15], so I host them on a second machine. [Installing Elasticsearch and Kibana][16] is different on every platform, so I will not cover that. What I will cover is mapping. By default, syslog-ng sends all data as text. If you want to prepare nice graphs in Kibana, you need temperature and humidity values as floating-point numbers. - -You need to set up mapping before sending data from syslog-ng. The syslog-ng configuration expects that the Sensors index uses this mapping: - - -``` -{ -  "mappings": { -    "_doc": { -      "properties": { -        "@timestamp": { -          "type": "date" -        }, -        "sensors": { -          "properties": { -            "humidity": { -              "type": "float" -            }, -            "temperature": { -              "type": "float" -            } -          } -        } -      } -    } -  } -} -``` - -Elasticsearch is now ready to collect data from syslog-ng. 
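For example, to create the `sensors` index with this mapping from the command line, a quick `curl` call along these lines should do it. This is only a sketch — adjust the host and port to your environment, and note that on Elasticsearch 7.x you may need the `include_type_name=true` parameter because this mapping still uses the `_doc` type:

```
$ curl -X PUT "http://localhost:9200/sensors?include_type_name=true" \
    -H 'Content-Type: application/json' \
    -d @sensors-mapping.json
```

Here, `sensors-mapping.json` is assumed to be a file containing the mapping shown above.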
- -### Install and configure syslog-ng - -Version 3.19 of syslog-ng is included in Raspberry Pi OS, but it does not yet have Elasticsearch support. Therefore, I installed the latest version of syslog-ng from an unofficial repository. First, I added the repository key: - - -``` -`wget -qO - https://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Raspbian_10/Release.key | sudo apt-key add -` -``` - -Then I added the following line to `/etc/apt/sources.list.d/sng.list`: - - -``` -`deb https://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Raspbian_10/ ./` -``` - -Finally, I updated the repositories and installed the necessary syslog-ng packages (which also removed rsyslog from the system): - - -``` -apt-get update -apt-get install syslog-ng-mod-json syslog-ng-mod-http -``` - -There are many other syslog-ng subpackages, but only these two are needed to forward sensor logs to Elasticsearch. - -Syslog-ng's main configuration file is `/etc/syslog-ng/syslog-ng.conf`, and you do not need to modify it. You can extend the configuration by creating new text files with a `.conf` extension under the `/etc/syslog-ng/conf.d` directory. - -I created a file called `sens2elastic.conf` with the following content: - - -``` -filter f_sensors {program("sensor2syslog_v2.py")}; -parser p_kv {kv-parser(prefix("sensors."));}; -destination d_sensors { -  file("/var/log/sensors" template("$(format-json @timestamp=${ISODATE} --key sensors.*)\n\n")); -  elasticsearch-http( -      index("sensors") -      type("") -      url("") -      template("$(format-json @timestamp=${ISODATE} --key sensors.*)") -      disk-buffer( -        disk-buf-size(1G) -        reliable(no) -        dir("/tmp/disk-buffer") -      ) -  ); -}; -log { -  source(s_src); -  filter(f_sensors); -  parser(p_kv); -  destination(d_sensors); -}; -``` - -If you are new to syslog-ng, read my article about [syslog-ng's building blocks][17] to learn about syslog-ng's configuration. The configuration snippet above shows some of the possible building blocks, except for the source, as you need to use the local log source defined in `syslog-ng.conf` (`s_src`). - -The first line is a filter: it matches the program name. Mine is `sensor2syslog_v2.py`. Make sure this value is the same as the name of your Python script. - -The second line is a key-value parser. By default, syslog-ng treats the message part of incoming log messages as plain text. Using this parser, you can create name-value pairs within syslog-ng from data in the log messages that you can use later when sending logs to Elasticsearch. - -The next block is a bit larger. It is a destination containing two different destination drivers. The first driver saves logs to a local file in JSON format. I use this for debugging. The second driver is the Elasticsearch destination. Make sure that the index name and the URL match your environment. Using this large disk buffer, you can ensure you don't lose any data even if your Elasticsearch server is offline for days. - -The last block is a bit different. It is the log statement, the part of the configuration that connects the above building blocks. The name of the source comes from the main configuration. - -Save the configuration and create the `/tmp/disk-buffer/` directory. Reload syslog-ng to make the configuration live: - - -``` -`systemctl restart syslog-ng` -``` - -### Test the system - -The next step is to test the system. Elasticsearch is already running and prepared to receive data. 
Syslog-ng is configured to forward data to Elasticsearch. So, start the script to make sure data is actually collected. - -For a quick test, you can start it in a terminal window. For continuous data collection, I recommend starting it from the screen utility so that it keeps running even after you disconnect from the machine. Of course, this is not fail-safe, as it will not start "automagically" on a reboot. If you want to collect data 24/7, create an init script or a systemd service file for it. - -Check that logs arrive in the `/var/log/sensors` file. If it is not empty, then the filter is working as expected. Next, open Kibana. I cannot give exact instructions here, as the menu structure seems to change with each release. Create an index pattern for Kibana from the Sensors index, then change to Kibana's Discover mode, and select the freshly defined index. You should already see incoming temperature and humidity data on the screen. - -You are now ready to visualize data. I used Kibana's new [Lens][18] mode to visualize temperature and humidity values. While it is not very flexible, it is definitely easier to handle than the other visualization tools in Kibana. This diagram shows the data I collected, including how values change when I ventilate my room with fresh, cold air by opening my windows. - -![Graph of sensor data in Kibana Lens][19] - -(Peter Czanik, [CC BY-SA 4.0][20]) - -### What have I learned? - -My original goal was to monitor my home's relative humidity while brushing up on my Python and Elasticsearch skills. Even staying at basic levels, I now feel more comfortable working with Python and Elasticsearch. - -Best of all: Not only did I practice these tools, but I also learned about relative humidity from the graphs. Previously, I often ventilated my home by opening the windows for just one or two minutes. The Kibana graphs showed that humidity went back to the original levels quite quickly after I shut the windows. When I opened the windows for five to 10 minutes instead, humidity stayed low for many hours. - -### What's next? - -The more adventurous can use a Raspberry Pi and sensors not just to monitor but also to control their homes. I configured everything from the ground up, but there are ready-to-use tools available such as [Home Assistant][21]. You can also configure alerting in syslog-ng to do things like [sending an alert to your Slack channel][22] if the temperature drops below a set level. There are many sensors available for the Raspberry Pi, so there are countless possibilities on both the software and hardware side. 
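One practical note on the systemd option mentioned earlier: if you want the collection script to keep running across reboots instead of starting it inside screen, a minimal service unit is enough. The following is only a sketch — the installation path, user name, and script name are assumptions, so adjust them to match your setup:

```
[Unit]
Description=Collect BME280 sensor data and send it to syslog-ng
After=network.target

[Service]
Type=simple
User=pi
ExecStart=/usr/bin/python3 /home/pi/sensor2syslog_v2.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as `/etc/systemd/system/sensor2syslog.service`, then run `sudo systemctl enable --now sensor2syslog.service` to start it now and enable it at boot.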
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/sensor-data-raspberry-pi - -作者:[Peter Czanik][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/czanik -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop) -[2]: https://www.mininodes.com/ -[3]: https://www.arm.com/ -[4]: https://opensource.com/resources/raspberry-pi -[5]: https://opensource.com/tags/python -[6]: https://www.elastic.co/elasticsearch/ -[7]: https://softiron.com/blog/news_20160624/ -[8]: https://www.syslog-ng.com/products/open-source-log-management/ -[9]: https://www.elastic.co/pricing/faq/licensing -[10]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/ -[11]: https://www.sparkfun.com/ -[12]: https://www.sparkfun.com/products/retired/15351 -[13]: https://github.com/sparkfun/Qwiic_BME280_Py/ -[14]: https://www.gnu.org/software/screen/ -[15]: https://www.elastic.co/kibana -[16]: https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux -[17]: https://www.syslog-ng.com/community/b/blog/posts/building-blocks-of-syslog-ng -[18]: https://www.elastic.co/kibana/kibana-lens -[19]: https://opensource.com/sites/default/files/uploads/kibanalens_data.png (Graph of sensor data in Kibana Lens) -[20]: https://creativecommons.org/licenses/by-sa/4.0/ -[21]: https://www.home-assistant.io/ -[22]: https://www.syslog-ng.com/community/b/blog/posts/send-your-log-messages-to-slack diff --git a/sources/tech/20210310 3 open source tools for producing video tutorials.md b/sources/tech/20210310 3 open source tools for producing video tutorials.md deleted file mode 100644 index bdccfba9cb..0000000000 --- a/sources/tech/20210310 3 open source tools for producing video tutorials.md +++ /dev/null @@ -1,180 +0,0 @@ -[#]: subject: (3 open source tools for producing video tutorials) -[#]: via: (https://opensource.com/article/21/3/video-open-source-tools) -[#]: author: (Abe Kazemzadeh https://opensource.com/users/abecode) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -3 open source tools for producing video tutorials -====== -Use OBS, OpenShot, and Audacity to create videos to teach your learners. -![Person reading a book and digital copy][1] - -I've learned that video tutorials are a great way to teach my students, and open source tools have helped me take my video-production skills to the next level. This article will explain how to get started and add artfulness and creativity to your video tutorial projects. - -I'll describe an end-to-end workflow for making video tutorials using open source tools for each subtask. For the purposes of this tutorial, making a video is the "task," and the various steps to make the video are the "subtasks." Those subtasks are video screen capture, video recording, video editing, audio recording, and effort estimation. Other tools include hardware such as cameras and microphones. My workflow also includes effort estimation as a subtask, but it's more of a general skill that parallels the idea of effort estimation when developing software. - -My workflow involves recording the bulk of the video content as screen capture. 
I record supporting video material (known as B-roll) using a smartphone, as it is a cheap, ubiquitous video camera. Then I edit the materials together into shot sequences using a video editor. - -### Advantages of video tutorials - -There are many reasons to record video tutorials. Some people prefer learning with video than with text like web pages and manuals. This preference may be partly generational—my students tend to prefer (or at least appreciate) the video option. - -Some modalities, like graphical user interfaces (GUIs), are easier to demonstrate with video. Video tutorials also work well for documenting open source software. Showing a convenient, standard workflow for producing software demonstration videos makes it easier for users to learn new software. - -Teaching people how to make videos can help increase the number of people participating in open source software. Enabling users to record themselves using software helps them share their knowledge. Teaching people how to do basic screen capture is a good way to make it easier for users to file bug reports on open source software. - -Educators are increasingly turning to tutorial videos as a way to deliver course content asynchronously. The most direct way to transition to an online classroom is to lecture over a videoconference system. Another option is the flipped classroom, which "flips" the traditional teaching method (i.e., in-class teaching followed by independent homework) by having students watch a recorded lecture on their own at home and using classroom time as a live, synchronous, interactive session for doing independent and project work. While video technologies have been evolving in the classroom for some time, the Covid-19 pandemic strongly motivated many educators, like my colleagues at the University of St. Thomas and me, to adopt video techniques when Covid-19 forced schools to close. - -Previously, I worked at the University of Southern California Annenberg School of Communications and Journalism, which offered a project and an app to help citizen journalists produce higher-quality video. (A citizen journalist is not a professional journalist but uses their proximity to current events to document and share their experiences). Similarly, this article aims to help you produce artful, quality video tutorials even if you are not a professional videographer. - -### Screen capture - -Screen capture is the first subtask in making a video tutorial. I use the [Open Broadcaster Software][2] (OBS) suite. OBS started as a way to stream video games and has evolved into a general-purpose tool for video recording and live streaming. It is programmed using Qt, which allows it to run on different operating systems. OBS can capture content as a video file or as a live stream, but I use it to capture to a video file for creating video tutorials. - -Screen capture is the natural starting place for creating things like a video tutorial for a piece of software. Still, OBS is also useful for tutorials that use physical objects, like electronics or woodworking. A simple use is to record a presentation, but it also supports webcams and switching views to combine multiple webcams and screen captures. In fact, you can combine a simple presentation with a tutorial; in this case, the presentation slides provide structure to the other tutorial content. - -![OBS][3] - -The main abstractions in OBS are _scenes_, and these scenes are made up of _sources_. 
Sources are any basic video inputs, including webcams, entire desktop displays, individual applications, and static images. Scenes are composed of one or more sources. Any individual source can be a scene, but the power and creativity of scenes come from combining sources. An example of a composite scene with two sources would be a video-captured presentation with a "talking head"—i.e., a small window inset inside the larger display with the presenter's face recorded from a webcam. In some cases, the "talking head" can help the presenter engage with the audience better, and it serves as an extra channel of information to go along with the speech's audio. - -This example implements a three-scene setup: one scene is the webcam, another scene is a full desktop display capture, and the third is the display capture with a webcam video inset. - -If you are using a new OBS installation, you'll see a single blank scene called "Scene" without any sources in the lower-left corner. If you have an existing OBS installation, you can start fresh by creating a new scene collection from the **Scene Collection** menu. - -Start by renaming the first scene **Face** by right-clicking the scene and selecting **Rename**. Then add the webcam as a source by selecting the **+** under **Sources**, choosing **Video Capture Device**, and clicking **Create New**. Set the name to **Webcam**, and click **OK**. This opens a screen where you can select and preview the webcam to make sure it's working (which is very useful if you have more than one webcam). After you select the webcam and click **OK**, you need to resize it. This can be done manually, but it is easier to right-click, select **Transform**, and select **Fit to Screen**. - -An aside on naming scenes and sources: I like to make logical distinctions between scenes (which I give abstract names like Face) and sources (which I give concrete names like Webcam #1). A naming convention like this is useful when you have multiple scenes and sources. You can always rename scenes and sources by right-clicking and selecting **Rename**, so don't worry much about naming at this stage. - -Add the second scene by clicking on the **+** button below the scenes area. Name it **Desktop** (or another name that describes this screen if you have a multiple-monitor setup). Then, under **Sources**, click **+** and select **Display Capture**. Select the display you want to capture (you only have one option if you have one monitor). Resize the video to fit the screen by right-clicking the **Transform** option. If you have one monitor, you should see a trippy, recursive view of the screen capture within a screen capture within a screen capture into infinity. - -For the third scene, you can use a shortcut by duplicating the last scene and adding an inset webcam. To do this, right-click on the last scene, select **Duplicate**, and name it **Desktop with Talking Head** (or something similar). Then add another source for this scene by clicking **+** under **Sources** when this source is selected, selecting **Video Capture Device**, and choosing your webcam under **Add Existing** (instead of Create New, like before). Instead of fitting the webcam to the whole screen, this time, move and stretch the webcam so that it is in the lower-right corner. Now you'll have the desktop and the webcam in the same scene. - -Now that the screen capture setup is finished, you can start making a basic software tutorial. 
Click **Start Recording** in the lower-right corner under **Controls**, record whatever you want to show in your tutorial, and use the scene selector to control what source you are recording. Changing scenes is like making a cut when editing a video, except it happens in real time while you are doing the tutorial. Because scene transitions in OBS happen in real time, this is more time-efficient than editing your video after the fact, so I recommend you try to do most of the scene transitions in OBS. - -I mentioned above that OBS recursively captures itself in a way that can best be described as "trippy." Seeing OBS in the screen capture is fine if you are demonstrating how OBS works, but if you are making a tutorial about anything else, you will not want to capture the OBS application window. There are two ways to avoid this. First, if you have a second monitor, you can capture the desktop environment on one monitor and have OBS running on the other. Second, OBS allows you to capture from individual applications (rather than the entire desktop environment), so you can specify which application you want to show in your video. - -When you are finished recording, click **Stop Recording**. To find the video you recorded, use the **File** menu and select **Show Recordings**. - -OBS is a powerful tool with many more features than I have described. You can learn more about adding text labels, streaming, and other features in [_Linux video editing in real time with OBS Studio_][4] and [_How to livestream games like the pros with OBS_][5]. For specific questions and technical issues, OBS has a great [online user forum][6]. - -### Editing video and using B-roll footage - -If you make mistakes when recording the screen capture or want to shorten the video, you'll need to edit it. You also might want to edit your video to make it more creative and artful. Adding creativity is also fun, and I believe having fun is necessary for sustaining your effort over time. - -For this how-to, assume the screen capture video recorded in OBS is your main video content, and you have other video recorded to enhance the tutorial's creative quality. In cinema and television jargon, the main content is called "A-roll" footage ("roll" refers to when video was captured on rolls of film), and supporting video material is called "B-roll." B-roll footage includes the surrounding environment, hands and pointing gestures, heads nodding, and static images with logos or branding. Editing B-roll footage into the main A-roll footage can make your video look more professional and give it more creative depth. - -A practical use of B-roll footage is to prevent _jump cuts_, which can happen when editing two similar video clips together. For example, imagine you make a mistake while doing your screen capture, and you want to cut out that part. However, this cut will leave an awkward gap—the jump cut—between the two clips after you remove the mistake. To remove that gap, put a short clip of B-roll material between the two parts. That B-roll shot placed to fill the cut is called a cutaway shot ( here, "shot" is used as a synonym of "clip," from the verb "shooting" a movie). - -B-roll footage is also used to build _shot sequences_. Just like software engineers build design patterns from individual statements, functions, and classes, videographers build shot sequences from individual shots or clips. These shot sequences enhance the video's quality and creativity. 
- -One of these, called the five-shot sequence, makes a good opening sequence to introduce your video tutorial. As the name suggests, it consists of five shots: - - 1. A close up of your hands - 2. A close up of your face - 3. A wide shot of the environment with you in it - 4. An over-the-shoulder shot showing the action as if your audience is watching over your shoulder - 5. A creative shot to capture an unusual perspective or something else the audience should know - - - -This [example][7] shows what this looks like. - -Showing pictures of yourself doing the activity can also help people better imagine doing it. There is a body of research about so-called "mirror neurons" that fire when observing another person's actions, especially hand movements. The five-shot sequence is also a pattern used by professional video journalists, so using it can give your video an appearance of professionalism. - -In addition to these five B-roll shots, you may want to record yourself introducing the video. This could be done in OBS using the webcam, but recording it with a smartphone camera gives you options for different views and backgrounds. - -#### Record your B-roll footage - -You can use a smartphone camera to capture B-roll footage for the five-shot sequence. Not only are they ubiquitous, but smartphones' connectedness makes it easy to use [filesharing applications][8] to sync your video to your editing application on your computer. - -You will need a tripod with a smartphone holder if you are working alone. This allows you to set up the recording without having to hold the phone in your hand. Some tripods come with remote controls that allow you to start and stop recording (search for "selfie tripod"), but this is just a convenience. Using the smartphone's forward-facing "selfie" camera can help you monitor that the camera is aimed properly. - -There will be material at the beginning and end of the recorded clip that you need to edit out. I prefer to record the five clips as separate files, and sometimes I need multiple takes to get a shot correct. A movie-making "clapper" with a dry-erase board is a piece of optional equipment that can help you keep track of B-roll footage by allowing you to write information about the shot (e.g., "hand close up, take 2"). The clapper functionality—the bar on top that makes the clap noise—is useful for synchronizing audio. This helps if you have multiple cameras and microphones, but in this simple setup, the clapper's main utility is to make you look like a serious auteur. - -Once you have recorded the five shots and any other material you want (e.g., a spoken introduction), copy or sync the video files to your desktop computer to begin editing. - -#### Edit your video - -I use [OpenShot][9], an open source video editor. Like OBS, it is programmed in Qt, so it runs on a variety of operating systems. - -![Openshot][10] - -_Tracks_ are OpenShot's main abstraction, and tracks can be made up of clips of individual _project files_, including video, audio, and images. - -Start with the five clips from the five-shot sequence and the video captured from OBS. To import the clips into OpenShot, drag-and-drop them into the project files area. These project files are the raw material that go into the tracks—you can think of this collection as a staging area similar to a chef collecting ingredients before cooking a dish. The clips don't need to be edited: you can edit them using OpenShot when you add them into the final video. 
- -After adding the five clips to the project files area, drag and drop the first clip to the top track. The tracks are like layers, and the higher-numbered tracks are in front of the lower-numbered tracks, so the top track should be track four or five. Ideally, each shot of the five-shot sequence will be about two or three seconds long; if they are longer, cut out the parts you don't want. - -To cut the clip, move the cursor (the blue marker on the timeline) to the place you want to cut. Right-click on the blue cursor, select **Slice All**, and then select which side you want to keep. Once you trim the clip, add the next clip to the same track, and give a bit of space after the first clip. Trim the second clip like you did the first one. After you trim both clips, slide the first clip all the way to the left to time zero on the timeline. Then, drag the second clip over the first clip so that the beginning of the second clip overlaps the end of the first clip. When you release the mouse, you'll see a blue area where the clips overlap. This blue area is a transition that OpenShot adds automatically when the clips overlap. If the overlap is not quite right, the easiest way to fix it is to separate the clips, delete the transition (select the blue area and hit the **Delete** key), and then try again. OpenShot automatically adds a transition where the shots overlap, but it won't automatically delete it when they are separated. Continue by trimming and overlapping the remaining shots of the five-shot sequence. - -You can do a lot more with OpenShot transitions, and [OpenShot's user guide][11] can help you learn about the options. - -Finally, add the screen capture video clip from OBS. If necessary, you can edit it in the same way, by moving the blue cursor in the timeline to where you want to trim, right-clicking the blue cursor, and selecting **Slice All**. If you need to keep both sides of the slice—for example, if you want to cut out a mistake in the middle of the video—make a slice on either side of the mistake, keep the sides, and delete the middle. This may result in a jump shot; if so, insert a clip of B-roll footage between them. I've found that a closeup shot of hands on the keyboard or an over-the-shoulder shot are good cutaways for this purpose. - -This example didn't use them, but the other tracks in OpenShot can be used to add parallel video tracks (e.g., a webcam, screen capture in OBS, or material recorded with a separate camera) or to add extra audio, like background music. I've found that using a single track is most convenient for combining clips for the five-shot sequence, and using multiple tracks is best for adding a separate audio track (e.g., music that will be played throughout the whole video) or when multiple views of the same action are captured separately. - -This example used OBS to capture two sources, the webcam and the screen capture, but it was all done with a single device, the computer. If you have video from another device, like a standalone camera, you might want to use two parallel tracks to combine the camera video and the screen capture. However, because OBS can capture multiple sources on one screen in real time, another option would be to use a second webcam instead of a standalone video camera. Doing all the recording in OBS and switching scenes while doing the screen capture would enable you to avoid after-the-fact editing. 
- -### Recording and editing audio - -For the audio component of the recording, your computer's built-in microphone might be fine. It is also simpler and saves money. If your computer's built-in microphone is not good enough, you may want to invest in a dedicated microphone. - -Even a high-quality microphone can produce poor audio quality if you don't take some care when recording it. One issue is recording in different acoustic environments: If you record one part of the video in a completely silent environment and another part in an environment with background noise (like fans, air conditioners, or other appliances), the difference in acoustic backgrounds will be very apparent. - -You might think it is better to have no background noise. While this might be true for recording music, having some ambient room noise can even out the differences between clips recorded in different acoustic environments. To do this, record about a minute of the ambient sound in the target environment. You might not end up needing it, but it is easier to make a brief audio recording at the outset if you anticipate recording in different environments. - -Audio compression is another technique that can help you fix volume issues if they arise. Compression helps reduce the differences between quiet and loud audio, and it has settings to not amplify background noise. [Audacity][12] is a useful open source audio tool that includes compression. - -![Multitrack suggestion][13] - -Using a clapper is helpful if you plan to edit multiple simultaneous audio recordings together. You can use the sharp peak in audio volume from the clapper to synchronize different tracks because the clapping noise makes it easier to line up the different recordings. - -### Estimation and planning - -A related issue is estimating the time and effort required to finish tasks and projects. This can be hard for many reasons, but there are some general rules of thumb that can help you estimate the time it will take to complete a video production project. - -First, as I noted, it is easier to use OBS scene transitions to switch views while recording than to edit scene transitions after the fact. If you can capture transitions while recording, you have one less task to do while editing. - -Another rule of thumb is that as the amount of recorded material increases, it takes more time and effort in general. For one, recording more material takes more time. Also, more material increases the overhead of organizing and editing it. Conversely and somewhat counterintuitively, given the same amount of raw material, a shorter final project will generally take more time and effort than a longer final project. When the amount of input is constant, it is harder to edit the content down to a shorter product than if you have less of a constraint on the final video's length. - -Having a plan for your video tutorial will help you stay on track and not forget any topics. A plan can range from a set of bullet points to a mindmap to a full script. Not only will the plan help guide you when you start recording, it can also help after the video is done. One way to improve your video tutorial's usefulness is to have a video table of contents, where each topic includes the timestamp when it begins. If you have a plan for your video—whether it is bullet points or a script—you will already have the video's structure, and you can just add the timestamps. Many video-sharing sites have ways to start playing a video at a specific point. 
For example, YouTube allows you to add an anchor hashtag to the end of a video's URL (e.g., `youtube.com/videourl#t=1m30s` would start playback 90 seconds into the video). Providing a script with the video is also useful for deaf and hard-of-hearing viewers. - -### Give it a try - -One great thing about open source is that there are low barriers to trying new software. Since the software is free, the main costs of making a video tutorial are the hardware—a computer for screen capture and video editing and a smartphone to record the B-roll footage. - -* * * - -_Acknowledgments: When I started learning about video, I benefited greatly from help from colleagues,*friends, and acquaintances. The University of St. Thomas Center for Faculty Development sponsored this work financially and my colleague Eric Level at the University of St. Thomas gave me many ideas for using video in the classrooms where we teach. My former colleagues Melissa Loudon and Andrew Lih at USC Annenberg School of Communications and Journalism taught me about citizen journalism and the five-shot sequence. My friend Matthew Lynn is a visual effects expert who helped me with time estimation and room-tone issues. Finally, the audience in the 2020 Southern California Linux Expo (SCaLE 18x) Graphics track gave me many helpful suggestions, including the video table of contents._ - -A look behind the scenes of Dototot's The Hello World Program, a YouTube channel aimed at computer... - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/video-open-source-tools - -作者:[Abe Kazemzadeh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/abecode -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy) -[2]: https://obsproject.com/ -[3]: https://opensource.com/sites/default/files/obs-full.jpg (OBS) -[4]: https://opensource.com/life/15/12/real-time-linux-video-editing-with-obs-studio -[5]: https://opensource.com/article/17/7/obs-studio-pro-level-streaming -[6]: https://obsproject.com/forum/ -[7]: https://www.youtube.com/watch?v=WnDD_59Lcas -[8]: https://opensource.com/alternatives/dropbox -[9]: https://www.openshot.org/ -[10]: https://opensource.com/sites/default/files/openshot-full.jpg (Openshot) -[11]: https://www.openshot.org/user-guide/ -[12]: https://www.audacityteam.org/ -[13]: https://opensource.com/sites/default/files/screenshot_20210303_073557.png (Multitrack suggestion) diff --git a/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md b/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md deleted file mode 100644 index 3c77bd04a7..0000000000 --- a/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md +++ /dev/null @@ -1,248 +0,0 @@ -[#]: subject: (Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact) -[#]: via: (https://www.linux.com/news/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/) -[#]: collector: 
(lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact -====== - -_By Matt Zand_ - -## **Recap** - -In our two previous articles, first we covered “[Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][1]” where we discussed the following five Hyperledger Distributed Ledger Technologies (DLTs): - - 1. Hyperledger Indy - 2. Hyperledger Fabric - 3. Hyperledger Iroha - 4. Hyperledger Sawtooth - 5. Hyperledger Besu - - - -Then, we moved on to our second article ([Review of three Hyperledger Tools- Caliper, Cello and Avalon][2]) where we surveyed the following three Hyperledger t     ools: - - 1. Hyperledger Caliper - 2. Hyperledger Cello - 3. Hyperledger Avalon - - - -So in this follow-up article, we review four (as listed below) Hyperledger libraries that work very well with other Hyperledger DLTs.      As of this writing, all of these libraries are at the incubation stage except for Hyperledger Aries,      which has [graduated][3] to      active. - - 1. Hyperledger Aries - 2. Hyperledger Quilt - 3. Hyperledger Ursa - 4. Hyperledger Transact - - - -**Hyperledger Aries** - -Identity has been adopted by the industry as one of the most promising use cases of DLTs. Solutions and initiatives around creating, storing, and transmitting verifiable digital credentials will result in a reusable, shared, interoperable tool kit. In response to such growing demand, Hyperledger has come up with three       projects (Hyperledger Indy, Hyperledger Iroha and Hyperledger Aries) that are specifically focused on identity management. - -Hyperledger Aries is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet (the secure storage tech, not a UI) for blockchain clients as well as a communications protocol for allowing off-ledger interactions between those clients.      This project consumes the cryptographic support provided by Hyperledger Ursa      to provide secure secret management and decentralized key management functionality. - -According to Hyperledger Aries’ documentation, Aries includes the following features: - - * An encrypted messaging system for off-ledger interactions using multiple transport protocols between clients. - * A blockchain interface layer that is also called as a resolver. It is used for creating and signing blockchain transactions. - * A cryptographic wallet to enable secure storage of cryptographic secrets and other information that is used for building blockchain clients. - * An implementation of ZKP-capable W3C verifiable credentials with the help of the ZKP primitives that are found in Hyperledger Ursa. - * A mechanism to build API-like use cases and higher-level protocols based on secure messaging functionality. - * An implementation of the specifications of the Decentralized Key Management System (DKMS) that are being currently incubated in Hyperledger Indy. - * Initially, the generic interface of Hyperledger Aries will support the Hyperledger Indy resolver. But the interface is flexible in the sense that anyone can build a pluggable method using DID method resolvers such as Ethereum and Hyperledger Fabric, or any other DID method resolver they wish to use. These resolvers would support the resolving of transactions and other data on other ledgers. 
- * Hyperledger Aries will additionally provide the functionality and features outside the scope of the Hyperledger Indy ledger to be fully planned and supported. Owing to these capabilities, the community can now build core message families to facilitate interoperable interactions using a wide range of use cases that involve blockchain-based identity. - - - -For more detailed discussion on its implementation, visit the link provided in the References section. - -**Hyperledger Quilt** - -The widespread adoption of blockchain technology by global businesses      has coincided with the emergence of tons of isolated and disconnected networks or ledgers. While users can easily conduct transactions within their own network or ledger, they experience technical difficultly (and in some cases impracticality) for doing transactions with parties residing      on different networks or ledgers. At best, the process of cross-ledger (or cross-network) transactions is slow, expensive, or manual. However, with the advent and adoption of Interledger Protocol (ILP), money and other forms of value can be routed, packetized, and delivered over ledgers and payment networks. - -Hyperledger Quilt is a tool for      interoperability      between ledger systems and is written in Java      by implementing the ILP for atomic swaps. While the Interledger is a protocol for making transactions across ledgers, ILP is a payment protocol designed to transfer value across non-distributed and distributed ledgers. The standards and specifications of Interledger protocol are governed by the open-source community under the World Wide Web Consortium umbrella. Quilt is an enterprise-grade implementation of the ILP, and provides libraries and reference implementations for the core Interledger components used for payment networks. With the launch of Quilt, the JavaScript (Interledger.js) implementation of Interledger was maintained by the JS Foundation. - -According to the Quilt documentation, as a result of ILP implementation, Quilt offers the following features: - - * A framework to design higher-level use-case specific protocols. - * A set of rules to enable interoperability with basic escrow semantics. - * A standard for data packet format and a ledger-dependent independent address format to enable connectors to route payments. - - - -For more detailed discussion on its implementation, visit the link provided in the References section. - -**Hyperledger Ursa** - -Hyperledger Ursa is a shared cryptographic library that      enables people (and projects) to avoid duplicating other cryptographic work and hopefully increase security in the process. The library is      an opt-in repository for Hyperledger projects (and, potentially others) to place and use crypto. - -Inside Project Ursa, a complete library of modular signatures and symmetric-key primitives      is at the disposal of developers to swap in and out different cryptographic schemes through configuration and without having to modify their code. On top its base library, Ursa      also includes newer cryptography, including pairing-based, threshold, and aggregate signatures. Furthermore, the zero-knowledge primitives including SNARKs are also supported by Ursa. - -According to the Ursa’s documentation, Ursa offers the following benefits: - - * Preventing duplication of solving similar security requirements across different blockchain - * Simplifying the security audits of cryptographic operations since the code is consolidated into a single location. 
This reduces maintenance efforts of these libraries while improving the security footprint for developers with beginner knowledge of distributed ledger projects. - * Reviewing all cryptographic codes in a single place will reduce the likelihood of dangerous security bugs. - * Boosting cross-platform interoperability when multiple platforms, which require cryptographic verification, are using the same security protocols on both platforms. - * Enhancing the architecture via modularity of common components will pave the way for future modular distributed ledger technology platforms using common components. - * Accelerating the time to market for new projects as long as an existing security paradigm can be plugged-in without a project needing to build it themselves. - - - -For more detailed discussion on its implementation, visit the link provided in the References section. - -**Hyperledger Transact** - -Hyperledger Transact, in a nutshell, makes writing distributed ledger software easier by providing a shared software library that handles the execution of smart contracts, including all aspects of scheduling, transaction dispatch, and state management. Utilizing Transact, smart contracts can be executed irrespective of DLTs being used. Specifically, Transact achieves that by offering an extensible approach to implementing new smart contract languages called “smart contract engines.” As such, each smart contract engine implements a virtual machine or interpreter that processes smart contracts. - -At its core, Transact is solely a transaction processing system for state transitions. That is, s     tate data is normally stored in a key-value or an SQL database. Considering an initial state and a transaction, Transact executes the transaction to produce a new state. These state transitions are deemed “pure” because only the initial state and the transaction are used as input. (In contrast to     other systems such as Ethereum where state and block information are mixed to produce the new state). Therefore, Transact is agnostic about DLT framework features other than transaction execution and state. - -According to Hyperledger Transact’s documentation, Transact comes with the following components: - - * **State**. The Transact state implementation provides get, set, and delete operations against a database. For the Merkle-Radix tree state implementation, the tree structure is implemented on top of LMDB or an in-memory database. - * **Context manager**. In Transact, state reads and writes are scoped (sandboxed) to a specific “context” that contains a reference to a state ID (such as a Merkle-Radix state root hash) and one or more previous contexts. The context manager implements the context lifecycle and services the calls that read, write, and delete data from state. - * **Scheduler**. This component controls the order of transactions to be executed. Concrete implementations include a serial scheduler and a parallel scheduler. Parallel transaction execution is an important innovation for increasing network throughput. - * **Executor**. The Transact executor obtains transactions from the scheduler and executes them against a specific context. Execution is handled by sending the transaction to specific execution adapters (such as ZMQ or a static in-process adapter) which, in turn, send the transaction to a specific smart contract. - * **Smart Contract Engines**. These components provide the virtual machine implementations and interpreters that run the smart contracts. 
Examples of engines include WebAssembly, Ethereum Virtual Machine, Sawtooth Transactions Processors, and Fabric Chaincode. - - - -For more detailed discussion on its implementation, visit the link provided in the References section. - -** Summary** - -In this article, we reviewed four Hyperledger libraries that are great resources for managing Hyperledger DLTs. We started by explaining Hyperledger Aries, which is infrastructure for blockchain-rooted, peer-to-peer interactions and includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interactions between those clients. Then, we learned that Hyperledger Quilt is the interoperability tool between ledger systems and is written in Java by implementing the ILP for atomic swaps. While the Interledger is a protocol for making transactions across ledgers, ILP is a payment protocol designed to transfer value across non-distributed and distributed ledgers. We also discussed that Hyperledger Ursa is a shared cryptographic library that would enable people (and projects) to avoid duplicating other cryptographic work and hopefully increase security in the process. The library would be an opt-in repository for Hyperledger projects (and, potentially others) to place and use crypto. We concluded our article by reviewing Hyperledger Transact by which smart contracts can be executed irrespective of DLTs being used. Specifically, Transact achieves that by offering an extensible approach to implementing new smart contract languages called “smart contract engines.” - -**References** - -For more references on all Hyperledger projects, libraries and tools, visit the below documentation links: - - 1. [Hyperledger Indy Project][4] - 2. [Hyperledger Fabric Project][5] - 3. [Hyperledger Aries Library][6] - 4. [Hyperledger Iroha Project][7] - 5. [Hyperledger Sawtooth Project][8] - 6. [Hyperledger Besu Project][9] - 7. [Hyperledger Quilt Library][10] - 8. [Hyperledger Ursa Library][11] - 9. [Hyperledger Transact Library][12] - 10. [Hyperledger Cactus Project][13] - 11. [Hyperledger Caliper Tool][14] - 12. [Hyperledger Cello Tool][15] - 13. [Hyperledger Explorer Tool][16] - 14. [Hyperledger Grid (Domain Specific)][17] - 15. [Hyperledger Burrow Project][18] - 16. [Hyperledger Avalon Tool][19] - - - -**Resources** - - * Free Training Courses from The Linux Foundation & Hyperledger - * [Blockchain: Understanding Its Uses and Implications (LFS170)][20] - * [Introduction to Hyperledger Blockchain Technologies (LFS171)][21] - * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][22] - * [Becoming a Hyperledger Aries Developer (LFS173)][23] - * [Hyperledger Sawtooth for Application Developers (LFS174)][24] - * eLearning Courses from The Linux Foundation & Hyperledger - * [Hyperledger Fabric Administration (LFS272)][25] - * [Hyperledger Fabric for Developers (LFD272)][26] - * Certification Exams from The Linux Foundation & Hyperledger - * [Certified Hyperledger Fabric Administrator (CHFA)][27] - * [Certified Hyperledger Fabric Developer (CHFD)][28] - * [Hands-On Smart Contract Development with Hyperledger Fabric V2][29] Book by Matt Zand and others. 
- * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][30] - * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][31] - * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][32] - * [Intro to Blockchain Cybersecurity][33] - * [Intro to Hyperledger Sawtooth for System Admins][34] - * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][35] - * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][36] - * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][37] - * [Intro blockchain development with Hyperledger Fabric][38] - * [How to build DApps with Hyperledger Fabric][39] - * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][40] - * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][41] - * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][42] - * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][43] - * [Blockchain Management in Hyperledger for System Admins][44] - * [Hyperledger Fabric for Developers][45] - - - -**About Author** - -**Matt Zand** is a serial entrepreneur and the founder of three tech startups: [DC Web Makers][46], [Coding Bootcamps][47] and [High School Technology Services][48]. He is a leading author of [Hands-on Smart Contract Development with Hyperledger Fabric][29] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars at many Hyperledger communities across USA and Europe     . At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as senior web and mobile App developer and consultant, angel investor, business advisor for a few startup companies. You can connect with him on [LinkedIn][49]. - -The post [Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact][50] appeared first on [Linux Foundation – Training][51]. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/ -[b]: https://github.com/lujun9972 -[1]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/ -[2]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/ -[3]: https://www.hyperledger.org/blog/2021/02/26/hyperledger-aries-graduates-to-active-status-joins-indy-as-production-ready-hyperledger-projects-for-decentralized-identity -[4]: https://www.hyperledger.org/use/hyperledger-indy -[5]: https://www.hyperledger.org/use/fabric -[6]: https://www.hyperledger.org/projects/aries -[7]: https://www.hyperledger.org/projects/iroha -[8]: https://www.hyperledger.org/projects/sawtooth -[9]: https://www.hyperledger.org/projects/besu -[10]: https://www.hyperledger.org/projects/quilt -[11]: https://www.hyperledger.org/projects/ursa -[12]: https://www.hyperledger.org/projects/transact -[13]: https://www.hyperledger.org/projects/cactus -[14]: https://www.hyperledger.org/projects/caliper -[15]: https://www.hyperledger.org/projects/cello -[16]: https://www.hyperledger.org/projects/explorer -[17]: https://www.hyperledger.org/projects/grid -[18]: https://www.hyperledger.org/projects/hyperledger-burrow -[19]: https://www.hyperledger.org/projects/avalon -[20]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/ -[21]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/ -[22]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/ -[23]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/ -[24]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/ -[25]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/ -[26]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/ -[27]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/ -[28]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/ -[29]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/ -[30]: https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php -[31]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php -[32]: https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php -[33]: https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples -[34]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins -[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php -[36]: 
https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php -[37]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php -[38]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric -[39]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric -[40]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php -[41]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php -[42]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php -[43]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php -[44]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples -[45]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers -[46]: https://blockchain.dcwebmakers.com/ -[47]: http://coding-bootcamps.com/ -[48]: https://myhsts.org/ -[49]: https://www.linkedin.com/in/matt-zand-64047871 -[50]: https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/ -[51]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210313 Build an open source theremin.md b/sources/tech/20210313 Build an open source theremin.md deleted file mode 100644 index cc7196df9c..0000000000 --- a/sources/tech/20210313 Build an open source theremin.md +++ /dev/null @@ -1,173 +0,0 @@ -[#]: subject: (Build an open source theremin) -[#]: via: (https://opensource.com/article/21/3/open-source-theremin) -[#]: author: (Gordon Haff https://opensource.com/users/ghaff) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Build an open source theremin -====== -Create your own electronic musical instrument with Open.Theremin V3. -![radio communication signals][1] - -Even if you haven't heard of a [theremin][2], you're probably familiar with the [eerie electronic sound][3] it makes from watching TV shows and movies like the 1951 science fiction classic _The Day the Earth Stood Still_. Theremins have also appeared in popular music, although often in the form of a theremin variant. For example, the "theremin" in the Beach Boys' "Good Vibrations" was actually an [electro-theremin][4], an instrument played with a slider invented by trombonist Paul Tanner and amateur inventor Bob Whitsell and designed to be easier to play. - -Soviet physicist Leon Theremin invented the theremin in 1920. It was one of the first electronic instruments, and Theremin introduced it to the world through his concerts in Europe and the US in the late 1920s. He patented his invention in 1928 and sold the rights to RCA. However, in the wake of the 1929 stock market crash, RCA's expensive product flopped. Theremin returned to the Soviet Union under somewhat mysterious circumstances in the late 1930s. The instrument remained relatively unknown until Robert Moog, of synthesizer fame, became interested in them as a high school student in the 1950s and started writing articles and selling kits. RA Moog, the company he founded, remains the best-known maker of commercial theremins today. - -### What does this have to do with open source? 
- -In 2008, Swiss engineer Urs Gaudenz was at a festival put on by the Swiss Mechatronic Art Society, which describes itself as a collective of engineers, hackers, scientists, and artists who collaborate on creative uses of technology. The festival included a theremin exhibit, which introduced Gaudenz to the instrument. - -At a subsequent event focused on bringing together music and technology, one of the organizers told Gaudenz that there were a lot of people who wanted to build theremins from kits. Some kits existed, but they often didn't work or play well. Gaudenz set off to build an open theremin that could be played in the same manner and use the same operating principles as a traditional theremin but with a modern electronic board and microcontroller. - -The [Open.Theremin][5] project (currently in version 3) is completely open source, including the microcontroller code and the [hardware files][6], which include the schematics and printed circuit board (PCB) layout. The hardware and the instructions are under GPL v3, while the [control code][7] is under LGPL v3. Therefore, the project can be assembled completely from scratch. In practice, most people will probably work from the kit available from Gaudi.ch, so that's what I'll describe in this article. There's also a completely assembled version available. - -### How does a theremin work? - -Before getting into the details of the Open.Theremin V3 and its assembly and use, I'll talk at a high level about how traditional theremins work. - -Theremins are highly unusual in that they're played without touching the instrument directly or indirectly. They're controlled by varying your distance and hand shape from [two antennas][8], a horizontal volume loop antenna, typically on the left, and a vertical pitch antenna, typically on the right. Some theremins have a pitch antenna only—Robert Plant of Led Zeppelin played such a variant—and some, including the Open.Theremin, have additional knob controls. But hand movements associated with the volume and pitch antennas are the primary means of controlling the instrument. - -I've been referring to the "antennas" because that's how everyone else refers to them. But they're not antennas in the usual sense of picking up radio waves. Each antenna acts as a plate in a capacitor. This brings us to the basic theremin operating principle: the heterodyne oscillator that mixes signals from a fixed and a variable oscillator. - -Such a circuit can be implemented in various ways. The Open.Theremin uses a combination of an oscillating crystal for the fixed frequency and an LC (inductance-capacitance) oscillator tuned to a similar but different frequency for the variable oscillator. There's one circuit for volume and a second one (operating at a slightly different frequency to avoid interference) for pitch, as this functional block diagram shows. - -![Theremin block diagram][9] - -(Gaudi Labs, [GPL v3][10]) - -You play the theremin by moving or changing the shape of your hand relative to each antenna. This changes the capacitance of the LC circuit. These changes are, in turn, processed and turned into sound. - -### Assembling the materials - -But enough theory. For this tutorial, I'll assume you're using an Open.Theremin V3 kit. 
In that case, here's what you need: - - * [Open.Theremin V3 kit][11] - * Arduino Uno with mounting plate - * Soldering iron and related materials (you'll want fairly fine solder; I used 0.02") - * USB printer-type cable - * Wire for grounding - * Replacement antenna mounting hardware: Socket head M3-10 bolt, washer, wing nut (x2, optional) - * Speaker or headphones (3.5mm jack) - * Tripod with standard ¼" screw - - - -The Open.Theremin is a shield for an Arduino, which is to say it's a modular circuit board that piggybacks on the Arduino microcontroller to extend its capabilities. In this case, the Arduino handles most of the important tasks for the theremin board, such as linearizing and filtering the audio and generating the instrument's sound using stored waveforms. The waveforms can be changed in the Arduino software. The Arduino's capabilities are an important part of enabling a wholly digital theremin with good sound quality without analog parts. - -The Arduino is also open source. It grew out of a 2003 project at the Interaction Design Institute Ivrea in Ivrea, Italy. - -### Building the hardware - -There are [good instructions][12] for building the theremin hardware on the Gaudi.ch site, so I won't take you through every step. I'll focus on the project at a high level and share some knowledge that you may find helpful. - -The PCB that comes with the kit already has the integrated circuits and discrete electronics surface-mounted on the board's backside, so you don't need to worry about those (other than not damaging them). What you do need to solder to the board are the pins to attach the shield to the Arduino, four potentiometers (pots), and a couple of surface-mount LEDs and a surface-mount button on the front side. - -Before going further, I should note that this is probably an intermediate-level project. There's not a lot of soldering, but some of it is fairly detailed and in close proximity to other electronics. The surface-mount LEDs and button on the front side aren't hard to solder but do take a little technique (described in the instructions on the Gaudi.ch site). Just deliberately work your way through the soldering in the suggested order. You'll want good lighting and maybe a magnifier. Carefully check that no pins are shorting other pins. - -Here is what the front of the hardware looks like: - -![Open.Theremin front][13] - -(Gordon Haff, [CC-BY-SA 4.0][14]) - -This shows the backside; the pins are the interface to the Arduino. - -![Open.Theremin back][15] - -(Gordon Haff, [CC-BY-SA 4.0][14]) - -I'll return to the hardware after setting up the Arduino and its software. - -### Loading the software - -The Arduino part of this project is straightforward if you've done anything with an Arduino and, really, even if you haven't. - - * Install the [Arduino Desktop IDE][16] - * Download the [Open.Theremin control software][7] and load it into the IDE - * Attach the Arduino to your computer with a USB cable - * Upload the software to the Arduino - - - -It's possible to modify the Arduino's software, such as changing the stored waveforms, but I will not get into that in this article. - -Power off the Arduino and carefully attach the shield. Make sure you line them up properly. (If you're uncertain, look at the Open.Theremin's [schematics][17], which show you which Arduino sockets aren't in use.) - -Reconnect the USB. The red LED on the shield should come on. If it doesn't, something is wrong. 
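-If you'd rather handle the build and upload from a terminal than from the Desktop IDE, `arduino-cli` can do the same job. The snippet below is only a rough sketch: the sketch directory name and the serial port are assumptions you will need to adjust for your own checkout and machine.
-
-```
-# One-time setup: refresh the board index and install the AVR core for the Uno
-arduino-cli core update-index
-arduino-cli core install arduino:avr
-
-# Compile and upload the Open.Theremin control code
-# (assumes the sketch lives in ./OpenTheremin_V3 and the Uno shows up as /dev/ttyACM0)
-arduino-cli compile --fqbn arduino:avr:uno OpenTheremin_V3
-arduino-cli upload --port /dev/ttyACM0 --fqbn arduino:avr:uno OpenTheremin_V3
-```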
- -Use the Arduino Desktop IDE one more time to check out the calibration process, which, hopefully, will offer more confirmation that things are going according to plan. Here are the [detailed instructions][18]. - -What you're doing here is monitoring the calibration process. This isn't a real calibration because you haven't attached the antennas, and you'll have to recalibrate whenever you move the theremin. But this should give you an indication of whether the theremin is basically working. - -Once you press the function button for about a second, the yellow LED should start to blink slowly, and the output from the Arduino's serial monitor should look something like the image below, which shows typical Open.Theremin calibration output. The main things that indicate a problem are frequency-tuning ranges that are either just zeros or that have a range that doesn't bound the set frequency. - -![Open.Theremin calibration output][19] - -(Gordon Haff, [CC-BY-SA 4.0][14]) - -### Completing the hardware - -To finish the hardware, it's easiest if you separate the Arduino from the shield. You'll probably want to screw some sort of mounting plate to the back of the Arduino for the self-adhesive tripod mount you'll attach. Attaching the tripod mount works much better on a plate than on the Arduino board itself. Furthermore, I found that the mount's adhesive didn't work very well, and I had to use stronger glue. - -Next, attach the antennas. The loop antenna goes on the left. The pitch antenna goes on the right (the shorter leg connects to the shield). Attach the supplied banana plugs to the antennas. (You need to use enough force to mate the two parts that you'll want to do it before attaching the banana plugs to the board.) - -I found the kit's hardware extremely frustrating to tighten sufficiently to keep the antennas from rotating. In fact, due to the volume antenna swinging around, it ended up grounding itself on some of the conductive printing on the PCB, which led to a bit of debugging. In any case, the hardware listed in the parts list at the top of this article made it much easier for me to attach the antennas. - -Attach the tripod mount to a tripod or stand of some sort, connect the USB to a power source, plug the Open.Theremin into a speaker or headset, and you're ready to go. - -Well, almost. You need to ground it. Plugging the theremin into a stereo may ground it, as may the USB connection powering it. If the person playing the instrument (i.e., the player) has a strong coupling to ground, that can be sufficient. But if these circumstances don't apply, you need to ground the theremin by running a wire from the ground pad on the board to something like a water pipe. You can also connect the ground pad to the player with an antistatic wrist strap or equivalent wire. This gives the player strong capacitive coupling directly with the theremin, [which works][20] as an alternative to grounding the theremin. - -At this point, recalibrate the theremin. You probably don't need to fiddle with the knobs at the start. Volume does what you'd expect. Pitch changes the "zero beat" point, i.e., where the theremin transitions from high pitched near the pitch antenna to silence near your body. Register is similar to what's called sensitivity on other theremins. Timbre selects among the different waveforms programmed into the Arduino. - -There are many theremin videos online. It is _not_ an easy instrument to play well, but it is certainly fun to play with. 
- -### The value of open - -The open nature of the Open.Theremin project has enabled collaboration that would have been more difficult otherwise. - -For example, Gaudenz received a great deal of feedback from people who play the theremin well, including [Swiss theremin player Coralie Ehinger][21]. Gaudenz says he really doesn't play the theremin but the help he got from players enabled him to make changes to make Open.Theremin a playable musical instrument. - -Others contributed directly to the instrument design, especially the Arduino software code. Gaudenz credits [Thierry Frenkel][22] with improved volume control code. [Vincent Dhamelincourt][23] came up with the MIDI implementation. Gaudenz used circuit designs that others had created and shared, like designs [for the oscillators][24] that are a central part of the Open.Theremin board. - -Open.Theremin is a great example of how open source is not just good for the somewhat abstract reasons people often mention. It can also lead to specific examples of improved collaboration and more effective design. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/open-source-theremin - -作者:[Gordon Haff][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ghaff -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ (radio communication signals) -[2]: https://en.wikipedia.org/wiki/Theremin -[3]: https://www.youtube.com/watch?v=2tnJEqXSs24 -[4]: https://en.wikipedia.org/wiki/Electro-Theremin -[5]: http://www.gaudi.ch/OpenTheremin/ -[6]: https://github.com/GaudiLabs/OpenTheremin_Shield -[7]: https://github.com/GaudiLabs/OpenTheremin_V3 -[8]: https://en.wikipedia.org/wiki/Theremin#/media/File:Etherwave_Theremin_Kit.jpg -[9]: https://opensource.com/sites/default/files/uploads/opentheremin_blockdiagram.png (Theremin block diagram) -[10]: https://www.gnu.org/licenses/gpl-3.0.en.html -[11]: https://gaudishop.ch/index.php/product-category/opentheremin/ -[12]: https://www.gaudi.ch/OpenTheremin/images/stories/OpenTheremin/Instructions_OpenThereminV3.pdf -[13]: https://opensource.com/sites/default/files/uploads/opentheremin_front.jpg (Open.Theremin front) -[14]: https://creativecommons.org/licenses/by-sa/4.0/ -[15]: https://opensource.com/sites/default/files/uploads/opentheremin_back.jpg (Open.Theremin back) -[16]: https://www.arduino.cc/en/software -[17]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/schematics -[18]: http://www.gaudi.ch/OpenTheremin/index.php/40-general/197-calibration-diagnostics -[19]: https://opensource.com/sites/default/files/uploads/opentheremin_calibration.png (Open.Theremin calibration output) -[20]: http://www.thereminworld.com/Forums/T/30525/grounding-and-alternatives-yes-a-repeat-performance-- -[21]: https://youtu.be/8bxz01kN7Sw -[22]: https://theremin.tf/en/category/projects/open_theremin-projects/ -[23]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/midi-implementation -[24]: http://www.gaudi.ch/OpenTheremin/index.php/home/sound-and-oscillators diff --git a/sources/tech/20210313 My review of the Raspberry Pi 400.md b/sources/tech/20210313 My review of the Raspberry Pi 400.md deleted file mode 100644 index 43600718b9..0000000000 --- 
a/sources/tech/20210313 My review of the Raspberry Pi 400.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: subject: (My review of the Raspberry Pi 400) -[#]: via: (https://opensource.com/article/21/3/raspberry-pi-400-review) -[#]: author: (Don Watkins https://opensource.com/users/don-watkins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -My review of the Raspberry Pi 400 -====== -Raspberry Pi 400's support for videoconferencing is a benefit for -homeschoolers seeking inexpensive computers. -![Raspberries with pi symbol overlay][1] - -The [Raspberry Pi 400][2] promises to be a boon to the homeschool market. In addition to providing an easy-to-assemble workstation that comes loaded with free software, the Pi 400 also serves as a surprisingly effective videoconferencing platform. I ordered a Pi 400 from CanaKit late last year and was eager to explore this capability. - -### Easy setup - -After unboxing my Pi 400, which came in this lovely package, the setup was quick and easy. - -![Raspberry Pi 400 box][3] - -(Don Watkins, [CC BY-SA 4.0][4]) - -The Pi 400 reminds me of the old Commodore 64. The keyboard and CPU are in one form factor. - -![Raspberry Pi 400 keyboard][5] - -(Don Watkins, [CC BY-SA 4.0][4]) - -The matching keyboard and mouse make this little unit both aesthetically and ergonomically appealing. - -Unlike earlier versions of the Raspberry Pi, there are not many parts to assemble. I connected the mouse, power supply, and micro HDMI cable to the back of the unit. - -The ports on the back of the keyboard are where things get interesting. - -![Raspberry Pi 400 ports][6] - -(Don Watkins, [CC BY-SA 4.0][4]) - -From left to right, the ports are: - - * 40-pin GPIO - * MicroSD: a microSD card is the main hard drive, and it comes with a microSD card in the slot, ready for startup - * Two micro HDMI ports - * USB-C port for power - * Two USB 3.0 ports and one USB 2.0 port for the mouse - * Gigabit Ethernet port - - - -The CPU is a Broadcom 1.8GHz 64-bit quad-core ARMv8 CPU, overclocked to make it even faster than the Raspberry Pi 4's processor. - -My unit came with 4GB RAM and a stock 16GB microSD card with Raspberry Pi OS installed and ready to boot up for the first time. - -### Evaluating the software and user experience - -The Raspberry Pi Foundation continually improves its software. Raspberry Pi OS has various wizards to make setup easier, including ones for keyboard layout, WiFi settings, and so on. - -The software included on the microSD card was the August 2020 Raspberry Pi OS release. After initial startup and setup, I connected a Logitech C270 webcam (which I regularly use with my other Linux computers) to one of the USB 3.0 ports. - -The operating system recognized the Logitech webcam, but I could not get the microphone to work with [Jitsi][7]. I solved this problem by updating to the latest [Raspberry Pi OS][8] release with Linux Kernel version 5.4. This OS version includes many important features that I love, like an updated Chromium browser and Pulse Audio, which solved my webcam audio woes. I can use open source videoconferencing sites, like Jitsi, and common proprietary ones, like Google Hangouts, for video calls, but Zoom was entirely unsuccessful. - -### Learning computing with the Pi - -The icing on the cake is the Official Raspberry Pi Beginners Guide, a 245-page book introducing you to your new computer. Packed with informative tutorials, this book hearkens back to the days when technology _provided documentation_! 
For the curious mind, this book is a vitally important key to the Pi, which is best when it serves as a gateway to open source computing. - -And after you become enchanted with Linux and all that it offers by using the Pi, you'll have months of exploration ahead, thanks to Opensource.com's [many Raspberry Pi articles][9]. - -I paid US$ 135 for my Raspberry Pi 400 because I added an optional inline power switch and an extra 32GB microSD card. Without those additional components, the unit is US$ 100. It's a steal either way and sure to provide years of fun, fast, and educational computing. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/raspberry-pi-400-review - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay) -[2]: https://opensource.com/article/20/11/raspberry-pi-400 -[3]: https://opensource.com/sites/default/files/uploads/pi400box.jpg (Raspberry Pi 400 box) -[4]: https://creativecommons.org/licenses/by-sa/4.0/ -[5]: https://opensource.com/sites/default/files/uploads/pi400-keyboard.jpg (Raspberry Pi 400 keyboard) -[6]: https://opensource.com/sites/default/files/uploads/pi400-ports.jpg (Raspberry Pi 400 ports) -[7]: https://opensource.com/article/20/5/open-source-video-conferencing -[8]: https://www.raspberrypi.org/software/ -[9]: https://opensource.com/tags/raspberry-pi diff --git a/sources/tech/20210314 12 Raspberry Pi projects to try this year.md b/sources/tech/20210314 12 Raspberry Pi projects to try this year.md deleted file mode 100644 index 083ff8c281..0000000000 --- a/sources/tech/20210314 12 Raspberry Pi projects to try this year.md +++ /dev/null @@ -1,100 +0,0 @@ -[#]: subject: (12 Raspberry Pi projects to try this year) -[#]: via: (https://opensource.com/articles/21/3/raspberry-pi-projects) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -12 Raspberry Pi projects to try this year -====== -There are plenty of reasons to use your Raspberry Pi at home, work, and -everywhere in between. Celebrate Pi Day by choosing one of these -projects. -![Raspberry Pi 4 board][1] - -Remember when the Raspberry Pi was just a really tiny hobbyist Linux computer? Well, to the surprise of no one, the Pi's power and scope has escalated quickly. Have you got a new Raspberry Pi or an old one lying around needing something to do? If so, we have plenty of new project ideas, ranging from home automation to cross-platform coding, and even some new hardware to check out. - -### Raspberry Pi at home - -Although I started using the Raspberry Pi mostly for electronics projects, any spare Pi not attached to a breadboard quickly became a home server. As I decommission old units, I always look for a new reason to keep it working on something useful. - - * While it's fun to make LEDs blink with a Pi, after you've finished a few basic electronics projects, it might be time to give your Pi some serious responsibilities. 
Predictably, it turns out that a homemade smart thermostat is substantially smarter than those you buy off the shelf. Try out ThermOS and this tutorial to [build your own multizone thermostat with a Raspberry Pi][2]. - - * Whether you have a child trying to focus on remote schoolwork or an adult trying to stay on task during work hours, being able to "turn off" parts of the Internet can be an invaluable feature for your home network. [The Pi-hole project][3] grants you this ability by turning your Pi into your local DNS server, which allows you to block or re-route specific sites. There's a sizable community around Pi-hole, so there are existing lists of commonly blocked sites, and several front-ends to help you interact with Pi-hole right from your Android phone. - - * Some families have a complex schedule. Kids have school and afterschool activities, adults have important events to attend, anniversaries and birthdays to remember, appointments to keep, and so on. You can keep track of everything using your mobile phone, but this is the future! Shouldn't wall calendars be interactive by now? - -For me, nothing is more futuristic than paper that changes its ink. Of course, we have e-ink now, and the Pi can use an e-ink display as its screen. [Build a family calendar][4] with a Pi and an e-ink display for one of the lowest-powered yet most futuristic (or magical, if you prefer) calendaring systems possible. - - * There's something about the Raspberry Pi's minimal design and lack of a case that inspires you to want to build something with it. After you've built yourself a thermostat and a calendar, why not [replace your home router with a Raspberry Pi][5]? With the OpenWRT distribution, you can repurpose your Pi as a router, and with the right hardware you can even add mobile connectivity. - - - - -### Monitoring your world with the Pi - -For modern technology to be truly interactive, it has to have an awareness of its environment. For instance, a display that brightens or dims based on ambient light isn't possible without useful light sensor data. Similarly, the actual _environment_ is really important to us humans, and so it helps to have technology that can monitor it for us. - - * Gathering data from sensors is one of the foundations you need to understand before embarking on a home automation or Internet of Things project. The Pi can do serious computing tasks, but it's got to get its data from something. Sensors provide a Pi with data about the environment. [Learn more about the fine art of gathering data over sensors][6] so you'll be ready to monitor the physical world with your Pi. - - * Once you're gathering data, you need a way to process it. The open source monitoring tool Prometheus is famous for its ability to represent complex data inputs, and so it's an ideal candidate to be your IoT (Internet of Things) aggregator. Get started now, and in no time you'll be monitoring and measuring and general data crunching with [Prometheus on a Pi][7]. - - * While a Pi is inexpensive and small enough to be given a single task, it's still a surprisingly powerful computer. Whether you've got one Pi monitoring a dozen other Pi units on your IoT, or whether you just have a Pi tracking the temperature of your greenhouse, sometimes it's nice to be able to check in on the Pi itself to find out what its workload is like, or where specific tasks might be able to be optimized. - -Grafana is a great platform for monitoring servers, including a Raspberry Pi. 
[Prometheus and Grafana][8] work together to monitor all aspects of your hardware, providing a friendly dashboard so you can check in on performance and reliability at a glance. - - * You can download mobile apps to help you scan your home for WiFi signal strength, or you can [build your own on a Raspberry Pi using Go][9]. The latter sounds a lot more fun than the former, and because you're writing it yourself, there's a lot more customization you can do on a Pi-based solution. - - - - -### The Pi at work - -I've run file shares and development servers on Pi units at work, and I've seen them at former workplaces doing all kinds of odd jobs (I remember one that got hooked up to an espresso machine to count how many cups of coffee my department consumed each day, not for accounting purposes but for bragging rights). Ask your IT department before bringing your Pi to work, of course, but look around and see what odd job a credit-card-sized computer might be able to do for you. - - * Of course you could host a website on a Raspberry Pi from the very beginning of the Pi. But as the Pi has developed, it's gotten more RAM and better processing power, and so [a dynamic website with SQLite or Postgres and Python][10] is an entirely reasonable prospect. - - * Printers are infamously frustrating. Wouldn't it be nice to program [your very own print UI][11] using the amazing cross-platform framework TotalCross and a Pi? The less you have to struggle through screens of poorly designed and excessive options, the better. If you design it yourself, you can provide exactly the options your department needs, leaving the rest out of sight and out of mind. - - * Containers are the latest trend in computing, but before containers there were FreeBSD jails. Jails are a great solution for running high-risk applications safely, but they can be complex to set up and maintain. However, if you install FreeBSD on your Pi and run [Bastille for jail management][12] and mix in the liberal use of jail templates, you'll find yourself using jails with the same ease you use containers on Linux. - - * The "problem" with having so many tech devices around your desk is that your attention tends to get split between screens. If you'd rather be able to relax and just stare at a single screen, then you might look into the Scrcpy project, a screen copying application that [lets you access the screen of your mobile device on your Linux desktop or Pi][13]. I've tested scrcpy on a Pi 3 and a Pi 4, and the performance has surprised me each time. I use scrcpy often, but especially when I'm setting up an exciting new Edge computing node on your Pi cluster, or building my smart thermostat, or my mobile router, or whatever else. - - - - -### Get a Pi - -To be fair, not everyone has a Pi. If you haven't gotten hold of a Pi yet, you might [take a look at the Pi 400][14], an ultra-portable Pi-in-a-keyboard computer. Evocative of the Commodore 64, this unique form factor is designed to make it easy for you to plug your keyboard (and the Pi inside of it) into the closest monitor and get started computing. It's fast, easy, convenient, and almost _painfully_ retro. If you don't own a Pi yet, this may well be the one to get. - -What Pi projects are you working on for Pi day? Tell us in the comments. 
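-One practical note before you go: if the Pi-hole project mentioned above is where you plan to start, its documented one-line installer gets a spare Pi filtering DNS in a few minutes. As with any piped installer, it is worth downloading and reviewing the script before running it; the URL below is the project's published installer endpoint.
-
-```
-# Fetch the Pi-hole installer, look it over, then run it
-curl -sSL https://install.pi-hole.net -o basic-install.sh
-less basic-install.sh
-sudo bash basic-install.sh
-```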
- --------------------------------------------------------------------------------- - -via: https://opensource.com/articles/21/3/raspberry-pi-projects - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-4_lead.jpg?itok=2bkk43om (Raspberry Pi 4 board) -[2]: https://opensource.com/article/21/3/thermostat-raspberry-pi -[3]: https://opensource.com/article/21/3/raspberry-pi-parental-control -[4]: https://opensource.com/article/21/3/family-calendar-raspberry-pi -[5]: https://opensource.com/article/21/3/router-raspberry-pi -[6]: https://opensource.com/article/21/3/sensor-data-raspberry-pi -[7]: https://opensource.com/article/21/3/iot-measure-raspberry-pi -[8]: https://opensource.com/article/21/3/raspberry-pi-grafana-cloud -[9]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi -[10]: https://opensource.com/article/21/3/web-hosting-raspberry-pi -[11]: https://opensource.com/article/21/3/raspberry-pi-totalcross -[12]: https://opensource.com/article/21/3/bastille-raspberry-pi -[13]: https://opensource.com/article/21/3/android-raspberry-pi -[14]: https://opensource.com/article/21/3/raspberry-pi-400-review diff --git a/sources/tech/20210317 Track aircraft with a Raspberry Pi.md b/sources/tech/20210317 Track aircraft with a Raspberry Pi.md deleted file mode 100644 index 50bb1584c4..0000000000 --- a/sources/tech/20210317 Track aircraft with a Raspberry Pi.md +++ /dev/null @@ -1,73 +0,0 @@ -[#]: subject: (Track aircraft with a Raspberry Pi) -[#]: via: (https://opensource.com/article/21/3/tracking-flights-raspberry-pi) -[#]: author: (Patrick Easters https://opensource.com/users/patrickeasters) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Track aircraft with a Raspberry Pi -====== -Explore the open skies with a Raspberry Pi, an inexpensive radio, and -open source software. -![Airplane flying with a globe background][1] - -I live near a major airport, and I frequently hear aircraft flying over my house. I also have a curious preschooler, and I find myself answering questions like, "What's that?" and "Where's that plane going?" often. While a quick internet search could answer these questions, I wanted to see if I could answer them myself. - -With a Raspberry Pi, an inexpensive radio, and open source software, I can track aircraft as far as 200 miles from my house. Whether you're answering relentless questions from your kids or are just curious about what's in the sky above you, this is something you can try, too. - -![Flight map][2] - -(Patrick Easters, [CC BY-SA 4.0][3]) - -### The protocol behind it all - -[ADS-B][4] is a technology that aircraft use worldwide to broadcast their location. Aircraft use position data gathered from GPS and periodically broadcast it along with speed and other telemetry so that other aircraft and ground stations can track their position. - -Since this protocol is well-known and unencrypted, there are many solutions to receive and parse it, including many that are open source. - -### Gathering the hardware - -Pretty much any [Raspberry Pi][5] will work for this project. 
I've used an older Pi 1 Model B, but I'd recommend a Pi 3 or newer to ensure you can keep up with the stream of decoded ADS-B messages. - -To receive the ADS-B signals, you need a software-defined radio. Thanks to ultra-cheap radio chips designed for TV tuners, there are quite a few cheap USB receivers to choose from. I use [FlightAware's ProStick Plus][6] because it has a built-in filter to weaken signals outside the 1090MHz band used for ADS-B. Filtering is important since strong signals, such as broadcast FM radio and television, can desensitize the receiver. Any receiver based on RTL-SDR should work. - -You will also need an antenna for the receiver. The options are limitless here, ranging from the [more adventurous DIY options][7] to purchasing a [ready-made 1090MHz antenna][8]. Whichever route you choose, antenna placement matters most. ADS-B reception is line-of-sight, so you'll want your antenna to be as high as possible to extend your range. I have mine in my attic, but I got decent results from my house's upper floor. - -### Visualizing your data with software - -Now that your Pi is equipped to receive ADS-B signals, the real magic happens in the software. Two of the most commonly used open source software projects for ADS-B are [readsb][9] for decoding ADS-B messages and [tar1090][10] for visualization. Combining both provides an interactive map showing all the aircraft your Pi is tracking. - -Both projects provide setup instructions, but using a prebuilt image like the [ADSBx Custom Pi Image][11] is the fastest way to get going. The ADSBx image even configures a Prometheus instance with custom metrics like aircraft count. - -### Keep experimenting - -If the novelty of tracking airplanes with your Raspberry Pi wears off, there are plenty of ways to keep experimenting. Try different antenna designs or find the best antenna placement to maximize the number of aircraft you see. - -These are just a few of the ways to track aircraft with your Pi, and hopefully, this inspires you to try it out and learn a bit about the world of radio. Happy tracking! 
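-A quick way to quantify those experiments is to query the aircraft list that the decoder publishes as JSON and watch the count change as you adjust your antenna. The path below assumes a stock tar1090 install serving on the same Pi; the exact URL varies between readsb, tar1090, and dump1090 setups, so adjust it to match yours.
-
-```
-# Count the aircraft currently being tracked (requires curl and jq)
-curl -s http://localhost/tar1090/data/aircraft.json | jq '.aircraft | length'
-```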
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/tracking-flights-raspberry-pi - -作者:[Patrick Easters][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/patrickeasters -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plane_travel_world_international.png?itok=jG3sYPty (Airplane flying with a globe background) -[2]: https://opensource.com/sites/default/files/uploads/flightmap.png (Flight map) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://en.wikipedia.org/wiki/Automatic_Dependent_Surveillance%E2%80%93Broadcast -[5]: https://www.raspberrypi.org/ -[6]: https://www.amazon.com/FlightAware-FA-PROSTICKPLUS-1-Receiver-Built-Filter/dp/B01M7REJJW -[7]: http://www.radioforeveryone.com/p/easy-homemade-ads-b-antennas.html -[8]: https://www.amazon.com/s?k=1090+antenna+sma&i=electronics&ref=nb_sb_noss_2 -[9]: https://github.com/wiedehopf/readsb -[10]: https://github.com/wiedehopf/tar1090 -[11]: https://www.adsbexchange.com/how-to-feed/adsbx-custom-pi-image/ diff --git a/sources/tech/20210318 Get started with an open source customer data platform.md b/sources/tech/20210318 Get started with an open source customer data platform.md deleted file mode 100644 index ec2f2c9afa..0000000000 --- a/sources/tech/20210318 Get started with an open source customer data platform.md +++ /dev/null @@ -1,204 +0,0 @@ -[#]: subject: "Get started with an open source customer data platform" -[#]: via: "https://opensource.com/article/21/3/rudderstack-customer-data-platform" -[#]: author: "Amey Varangaonkar https://opensource.com/users/ameypv" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Get started with an open source customer data platform -====== -As an open source alternative to Segment, RudderStack collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse. - -![Person standing in front of a giant computer screen with numbers, data][1] - -Image by: Opensource.com - -[RudderStack][2] is an open source, warehouse-first customer data pipeline. It collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse. - -RudderStack is commonly known as the open source alternative to the customer data platform (CDP), [Segment][3]. It provides a more secure, flexible, and cost-effective solution in comparison. You get all the CDP functionality with added security and full ownership of your customer data. - -Warehouse-first tools like RudderStack are architected to build functional data lakes in the user's data warehouse. The benefits are improved data control, increased flexibility in tool use, and (frequently) lower costs. Since it's open source, you can see how complicated processes—like building your identity graph—are done without relying on a vendor's black box. - -### Getting the RudderStack workspace token - -Before you get started, you will need the RudderStack workspace token from your RudderStack dashboard. To get it: - -1. Go to the [RudderStack dashboard][4]. -2. Log in using your credentials (or sign up for an account, if you don't already have one). 
- - ![RudderStack login screen][7] - -3. Once you've logged in, you should see the workspace token on your RudderStack dashboard. - - ![RudderStack workspace token][8] - -### Installing RudderStack - -Setting up a RudderStack open source instance is straightforward. You have two installation options: - -1. On your Kubernetes cluster, using RudderStack's Helm charts -2. On your Docker container, using the `docker-compose` command - -This tutorial explains how to use both options but assumes that you already have [Git installed on your system][9]. - -#### Deploying with Kubernetes - -You can deploy RudderStack on your Kubernetes cluster using the [Helm][10] package manager. - -*If you plan to use RudderStack in production, we strongly recommend using this method.* This is because the Docker images are updated with bug fixes more frequently than the GitHub repository (which follows a monthly release cycle). - -Before you can deploy RudderStack on Kubernetes, make sure you have the following prerequisites in place: - -* [Install and connect kubectl][11] to your Kubernetes cluster. -* [Install Helm][12] on your system, either through the Helm installer scripts or its package manager. -* Finally, get the workspace token from the RudderStack dashboard by following the steps in the Getting the RudderStack workspace token section. - -Once you've completed all the prerequisites, deploy RudderStack on your default Kubernetes cluster: - -1. Find the Helm chart required to deploy RudderStack in this [repo][13]. -2. Install the Helm chart with a release name of your choice (my-release, in this example) from the root directory of the repo in the previous step: - ``` - $ helm install \ - my-release ./ --set \ - rudderWorkspaceToken="" - ``` - -This deploys RudderStack on your default Kubernetes cluster configured with kubectl using the workspace token you obtained from the RudderStack dashboard. - -For more details on the configurable parameters in the RudderStack Helm chart or updating the versions of the images used, consult the [documentation][14]. - -#### Deploying with Docker - -Docker is the easiest and fastest way to set up your open source RudderStack instance. - -First, get the workspace token from the RudderStack dashboard by following the steps above. - -Once you have the RudderStack workspace token: - -1. Download the [rudder-docker.yml][15] docker-compose file required for the installation. -2. Replace `` in this file with your RudderStack workspace token. -3. Set up RudderStack on your Docker container by running: - ``` - docker-compose -f rudder-docker.yml up - ``` - -Now RudderStack should be up and running on your Docker instance. - -### Verifying the installation - -You can verify your RudderStack installation by sending test events using the bundled shell script: - -1. Clone the GitHub repository: - ``` - git clone https://github.com/rudderlabs/rudder-server.git - ``` -2. In this tutorial, you will verify RudderStack by sending test events to Google Analytics. Make sure you have a Google Analytics account and keep the tracking ID handy. Also, note that the Google Analytics account needs to have a `Web` property. -3. In the [RudderStack hosted control plane][16]: - * Add a source on the RudderStack dashboard by following the [Adding a source and destination in RudderStack][17] guide. You can use either of RudderStack's event stream software development kits (SDKs) for sending events from your app. This example sets up the [JavaScript SDK][18] as a source on the dashboard. 
Note: You aren't actually installing the RudderStack JavaScript SDK on your site in this step; you are just creating the source in RudderStack. - * Configure a Google Analytics destination on the RudderStack dashboard using the instructions in the guide mentioned previously. Use the Google Analytics tracking ID you kept from step 2 of this section: - - ![Google Analytics tracking ID][27] - -4. As mentioned before, RudderStack bundles a shell script that generates test events. Get the **Source write key** from the RudderStack dashboard: - - ![RudderStack source write key][28] - -5. Next, run: - ``` - ./scripts/generate-event https://hosted.rudderlabs.com/v1/batch - ``` -6. Finally, log into your Google Analytics account and verify that the events were delivered. In your Google Analytics account, navigate to *RealTime** -> **Events**. The RealTime view is important because some dashboards can take one to two days to refresh. - -### Optional: Setting up the open source control plane - -RudderStack's core architecture contains two major components: the data plane and the control plane. The data plane, [rudder-server][29], delivers your event data, and the RudderStack hosted control plane manages the configuration of your sources and destinations. - -However, if you want to manage the source and destination configurations locally, you can set an open source control plane in your environment using the RudderStack Config Generator. (You must have [Node.js][30] installed on your system to use it.) - -Here are the steps to set up the control plane: - -1. Install and set up RudderStack on the platform of your choice by following the instructions above. -2. Run the following commands in this order: - ``` - cd utils/config-gen - npm install - npm start - ``` - -You should now be able to access the open source control plane at `http://localhost:3000` by default. If your setup is successful, you will see the user interface. - -![RudderStack open source control plane][31] - -To export the existing workspace configuration from the RudderStack-hosted control plane and have RudderStack use it, consult the [docs][32]. - -### RudderStack and open source - -The core of RudderStack is in the [rudder-server][33] repository. It is open source, licensed under [AGPL-3.0][34]. A majority of the destination integrations live in the [rudder-transformer][35] repository. They are open source as well, licensed under the [MIT License][36]. The SDKs and instrumentation repositories, several tool and utility repositories, and even some [dbt][37] model repositories for use-cases like customer journey analysis and sessionization for the data residing in your data warehouse are open source, licensed under the MIT License, and available in the [GitHub repository][38]. - -You can use RudderStack's open source offering, rudder-server, on your platform of choice. There are setup guides for [Docker][39], [Kubernetes][40], [native installation][41], and [developer machines][42]. - -RudderStack open source offers: - -1. RudderStack event stream -2. 15+ SDKs and source integrations to ingest event data -3. 80+ destination and warehouse integrations -4. Slack community support - -#### RudderStack Cloud - -RudderStack also offers a managed option, [RudderStack Cloud][43]. It is fast, reliable, and highly scalable with a multi-node architecture and sophisticated error-handling mechanism. You can hit peak event volume without worrying about downtime, loss of events, or latency. 
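-Whichever option you run, it can be handy to sanity-check a data plane from the command line by hand-crafting a single event instead of using the bundled script. The example below is only a sketch: the port assumes a default Docker setup, the endpoint is RudderStack's Segment-compatible HTTP API, and `SOURCE_WRITE_KEY` is a placeholder for the write key from your dashboard.
-
-```
-# Send one test track event directly to the data plane
-# (port 8080 and the payload shape are assumptions; adjust them for your deployment)
-curl -u "SOURCE_WRITE_KEY:" \
-  -H "Content-Type: application/json" \
-  -X POST http://localhost:8080/v1/track \
-  -d '{"userId": "test-user", "event": "Test Event", "properties": {"source": "curl"}}'
-```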
- -Image By: (RudderStack, CC BY-SA 4.0) - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/rudderstack-customer-data-platform - -作者:[Amey Varangaonkar][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ameypv -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/data_metrics_analytics_desktop_laptop.png -[2]: https://rudderstack.com/ -[3]: https://segment.com/ -[4]: https://app.rudderstack.com/ -[7]: https://opensource.com/sites/default/files/uploads/rudderstack_login.png -[8]: https://opensource.com/sites/default/files/uploads/rudderstack_workspace-token.png -[9]: https://opensource.com/life/16/7/stumbling-git -[10]: https://helm.sh/ -[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ -[12]: https://helm.sh/docs/intro/install/ -[13]: https://github.com/rudderlabs/rudderstack-helm -[14]: https://docs.rudderstack.com/installing-and-setting-up-rudderstack/kubernetes -[15]: https://raw.githubusercontent.com/rudderlabs/rudder-server/master/rudder-docker.yml -[16]: https://app.rudderstack.com/ -[17]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack -[18]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk -[20]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack -[21]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk -[24]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack -[25]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk -[27]: https://opensource.com/sites/default/files/uploads/googleanalyticstrackingid.png -[28]: https://opensource.com/sites/default/files/uploads/rudderstack_sourcewritekey.png -[29]: https://github.com/rudderlabs/rudder-server -[30]: https://nodejs.org/en/download/ -[31]: https://opensource.com/sites/default/files/uploads/rudderstack_controlplane.png -[32]: https://docs.rudderstack.com/how-to-guides/rudderstack-config-generator -[33]: https://github.com/rudderlabs/rudder-server -[34]: https://www.gnu.org/licenses/agpl-3.0-standalone.html -[35]: https://github.com/rudderlabs/rudder-transformer -[36]: https://opensource.org/licenses/MIT -[37]: https://www.getdbt.com/ -[38]: https://github.com/rudderlabs -[39]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/docker -[40]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/kubernetes -[41]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/native-installation -[42]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/developer-machine-setup -[43]: https://resources.rudderstack.com/rudderstack-cloud diff --git a/sources/tech/20210319 Managing deb Content in Foreman.md b/sources/tech/20210319 Managing deb Content in Foreman.md deleted file mode 100644 index c080a1c394..0000000000 --- a/sources/tech/20210319 Managing deb Content in Foreman.md +++ /dev/null @@ -1,213 +0,0 @@ -[#]: subject: (Managing deb Content in Foreman) -[#]: via: (https://opensource.com/article/21/3/linux-foreman) -[#]: author: (Maximilian Kolb https://opensource.com/users/kolb) -[#]: collector: (lujun9972) -[#]: translator: ( ) 
-[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Managing deb Content in Foreman -====== -Use Foreman to serve software packages and errata for certain Linux -systems. -![Package wrapped with brown paper and red bow][1] - -Foreman is a data center automation tool to deploy, configure, and patch hosts. It relies on Katello for content management, which in turn relies on Pulp to manage repositories. See [_Manage content using Pulp Debian_][2] for more information. - -Pulp offers many plugins for different content types, including RPM packages, Ansible roles and collections, PyPI packages, and deb content. The latter is called the **pulp_deb** plugin. - -### Content management in Foreman - -The basic idea for providing content to hosts is to mirror repositories and provide content to hosts via either the Foreman server or attached Smart Proxies. - -This tutorial is a step-by-step guide to adding deb content to Foreman and serving hosts running Debian 10. "Deb content" refers to software packages and errata for Debian-based Linux systems (e.g., Debian and Ubuntu). This article focuses on [Debian 10 Buster][3] but the instructions also work for [Ubuntu 20.04 Focal Fossa][4], unless noted otherwise. - -### 1\. Create the operating system - -#### 1.1. Create an architecture - -Navigate to **Hosts > Architectures** and create a new architecture (if the architecture where you want to deploy Debian 10 hosts is missing). This tutorial assumes your hosts run on the x86_64 architecture, as Foreman does. - -#### 1.2. Create an installation media - -Navigate to **Hosts > Installation Media** and create new Debian 10 installation media. Use the upstream repository URL . - -Select the Debian operating system family for either Debian or Ubuntu. - -Alternatively, you can also use a Debian mirror. However, content synced via Pulp does not work for two reasons: first, the `linux` and `initrd.gz` files are not in the expected locations; second, the `Release` file is not signed. - -#### 1.3. Create an operating system - -Navigate to **Hosts > Operating Systems** and create a new operating system called Debian 10. Use **10** as the major version and leave the minor version field blank. For Ubuntu, use **20.04** as the major version and leave the minor version field blank. - -![Creating an operating system entry][5] - -(Maximilian Kolb, [CC BY-SA 4.0][6]) - -Select the Debian operating system family for Debian or Ubuntu, and specify the release name (e.g., **Buster** for Debian 10 or **Stretch** for Debian 9). Select the default partition tables and provisioning templates, i.e., **Preseed default ***. - -#### 1.4. Adapt default Preseed templates (optional) - -Navigate to **Hosts > Partition Tables** and **Hosts > Provisioning Templates** and adapt the default **Preseed** templates if necessary. Note that you need to clone locked templates before editing them. Cloned templates will not receive updates with newer Foreman versions. All Debian-based systems use **Preseed** templates, which are included with Foreman by default. - -#### 1.5. Associate the templates - -Navigate to **Hosts > Provisioning Templates** and search for **Preseed**. Associate all desired provisioning templates to the operating system. Then, navigate to **Hosts > Operating Systems** and select **Debian 10** as the operating system. Select the **Templates** tab and associate any provisioning templates that you want. - -### 2\. Synchronize content - -#### 2.1. 
Create content credentials for Debian upstream repositories and Debian client - -Navigate to **Content > Content Credentials** and add the required GPG public keys as content credentials for Foreman to verify the deb packages' authenticity. To obtain the necessary GPG public keys, verify the **Release** file and export the corresponding GPG public key as follows: - - * **Debian 10 main:** [code] wget http://ftp.debian.org/debian/dists/buster/Release && wget http://ftp.debian.org/debian/dists/buster/Release.gpg -gpg --verify Release.gpg Release -gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC -gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138 -gpg --keyserver keys.gnupg.net --recv-key 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517 -gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE DCC9EFBF77E11517 > debian_10_main.txt -``` - * **Debian 10 security:** [code] wget http://deb.debian.org/debian-security/dists/buster/updates/Release && wget http://deb.debian.org/debian-security/dists/buster/updates/Release.gpg -gpg --verify Release.gpg Release -gpg --keyserver keys.gnupg.net --recv-key 379483D8B60160B155B372DDAA8E81B4331F7F50 -gpg --keyserver keys.gnupg.net --recv-key 5237CEEEF212F3D51C74ABE0112695A0E562B32A -gpg --armor --export EDA0D2388AE22BA9 4DFAB270CAA96DFA > debian_10_security.txt -``` - * **Debian 10 updates:** [code] wget http://ftp.debian.org/debian/dists/buster-updates/Release && wget http://ftp.debian.org/debian/dists/buster-updates/Release.gpg -gpg --verify Release.gpg Release -gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC -gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138 -gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE > debian_10_updates.txt -``` -* **Debian 10 client:** [code]`wget --output-document=debian_10_client.txt https://apt.atix.de/atix_gpg.pub` -``` - - - -You can select the respective ASCII-armored TXT files to upload to your Foreman instance. - -#### 2.2. Create products called Debian 10 and Debian 10 client - -Navigate to **Content > Products** and create two new products. - -#### 2.3. Create the necessary Debian 10 repositories - -Navigate to **Content > Products** and select the **Debian 10** product. Create three **deb** repositories: - - * **Debian 10 main:** - * URL: `http://ftp.debian.org/debian/` - * Releases: `buster` - * Component: `main` - * Architecture: `amd64` - - - * **Debian 10 security:** - * URL: `http://deb.debian.org/debian-security/` - * Releases: `buster/updates` - * Component: `main` - * Architecture: `amd64` - - - -If you want, you can add a self-hosted errata service: `https://github.com/ATIX-AG/errata_server` and `https://github.com/ATIX-AG/errata_parser` - - * **Debian 10 updates:** - * URL: `http://ftp.debian.org/debian/` - * Releases: `buster-updates` - * Component: `main` - * Architecture: `amd64` - - - -Select the content credentials that you created in step 2.1. Adjust the components and architecture as needed. Navigate to **Content > Products** and select the **Debian 10 client** product. Create a **deb** repository as follows: - - * **Debian 10 subscription-manager** - * URL: `https://apt.atix.de/Debian10/` - * Releases: `stable` - * Component: `main` - * Architecture: `amd64` - - - -Select the content credentials you created in step 2.1. The Debian 10 client contains the **subscription-manager** package, which runs on each content host to receive content from the Foreman Server or an attached Smart Proxy. Navigate to [apt.atix.de][7] for further instructions. - -#### 2.4. Synchronize the repositories - -If you want, you can create a sync plan to sync the **Debian 10** and **Debian 10 client** products periodically. To sync the product once, click the **Select Action > Sync Now** button on the **Products** page. - -#### 2.5.
Create content views
-
-Navigate to **Content > Content Views** and create a content view called **Debian 10** comprising the Debian upstream repositories created in the **Debian 10** product and publish a new version. Do the same for the **Debian 10 client** repository of the **Debian 10 client** product.
-
-#### 2.6. Create a composite content view
-
-Create a new composite content view called **Composite Debian 10** comprising the previously published **Debian 10** and **Debian 10 client** content views and publish a new version. You may optionally add other content views of your choice (e.g., Puppet).
-
-![Composite content view][8]
-
-(Maximilian Kolb, [CC BY-SA 4.0][6])
-
-#### 2.7. Create an activation key
-
-Navigate to **Content > Activation Keys** and create a new activation key called **debian-10**:
-
- * Select the **Library** lifecycle environment and add the **Composite Debian 10** content view.
- * On the **Details** tab, assign the correct lifecycle environment and composite content view.
- * On the **Subscriptions** tab, assign the necessary subscriptions, i.e., the **Debian 10** and **Debian 10 client** products.
-
-### 3\. Deploy a host
-
-#### 3.1. Enable provisioning via Port 8000
-
-Connect to your Foreman instance via SSH and edit the following file:
-
-```
-/etc/foreman-proxy/settings.yml
-```
-
-Search for `:http_port: 8000` and make sure it is not commented out (i.e., the line does not start with a `#`).
-
-#### 3.2. Create a host group
-
-Navigate to **Configure > Host Groups** and create a new host group called **Debian 10**. Check out the Foreman documentation on [creating host groups][9], and make sure to select the correct entries on the **Operating System** and **Activation Keys** tabs.
-
-#### 3.3. Create a new host
-
-Navigate to **Hosts > Create Host** and either select the host group as described above or manually enter the identical information.
-
-> Tip: Deploying hosts running Ubuntu 20.04 is even easier, as you can use its official installation media ISO image and do offline installations. Check out orcharhino's [Managing Ubuntu Systems Guide][10] for more information.
-
-[ATIX][11] has developed several Foreman plugins, and is an integral part of the [Foreman open source ecosystem][12]. The community's feedback on our contributions is passed back to our customers, as we continuously strive to improve our downstream product, [orcharhino][13].
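One detail from step 3.1 above is worth spelling out. As a minimal sketch (assuming the stock layout of the file, with all other keys omitted), the relevant fragment of `/etc/foreman-proxy/settings.yml` should end up looking like this:

```
# /etc/foreman-proxy/settings.yml (fragment)
# the line must be present and must not start with "#"
:http_port: 8000
```

After saving the change, you typically need to restart the Smart Proxy service (for example, `systemctl restart foreman-proxy`) for it to take effect.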
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/linux-foreman - -作者:[Maximilian Kolb][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/kolb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow) -[2]: https://opensource.com/article/20/10/pulp-debian -[3]: https://wiki.debian.org/DebianBuster -[4]: https://releases.ubuntu.com/20.04/ -[5]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_operating_system_entry.png (Creating an operating system entry) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://apt.atix.de/ -[8]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_composite_content_view.png (Composite content view) -[9]: https://docs.theforeman.org/nightly/Managing_Hosts/index-foreman-el.html#creating-a-host-group -[10]: https://docs.orcharhino.com/or/docs/sources/usage_guides/managing_ubuntu_systems_guide.html#musg_deploy_hosts -[11]: https://atix.de/ -[12]: https://theforeman.org/2020/10/atix-in-the-foreman-community.html -[13]: https://orcharhino.com/ diff --git a/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md b/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md deleted file mode 100644 index ad0320a73c..0000000000 --- a/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md +++ /dev/null @@ -1,104 +0,0 @@ -[#]: subject: (6 WordPress plugins for restaurants and retailers) -[#]: via: (https://opensource.com/article/21/3/wordpress-plugins-retail) -[#]: author: (Don Watkins https://opensource.com/users/don-watkins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -6 WordPress plugins for restaurants and retailers -====== -The end of the pandemic won't be the end of curbside pickup, delivery, -and other shopping conveniences, so set your website up for success with -these plugins. -![An open for business sign.][1] - -The pandemic changed how many people prefer to do business—probably permanently. Restaurants and other local retail establishments can no longer rely on walk-in trade, as they always have. Online ordering of food and other items has become the norm and the expectation. It is unlikely consumers will turn their backs on the convenience of e-commerce once the pandemic is over. - -WordPress is a great platform for getting your business' message out to consumers and ensuring you're meeting their e-commerce needs. And its ecosystem of plugins extends the platform to increase its usefulness to you and your customers. - -The six open source plugins described below will help you create a WordPress site that meets your customers' preferences for online shopping, curbside pickup, and delivery, and build your brand and your customer base—now and post-pandemic. - -### E-commerce - -![WooCommerce][2] - -WooCommerce (Don Watkins, [CC BY-SA 4.0][3]) - -[WooCommerce][4] says it is the most popular e-commerce plugin for the WordPress platform. Its website says: "Our core platform is free, flexible, and amplified by a global community. 
The freedom of open source means you retain full ownership of your store's content and data forever." The plugin, which is under active development, enables you to create enticing web storefronts. It was created by WordPress developer [Automattic][5] and is released under the GPLv3. - -### Order, delivery, and pickup - -![Curbside Pickup][6] - -Curbside Pickup (Don Watkins, [CC BY-SA 4.0][3]) - -[Curbside Pickup][7] is a complete system to manage your curbside pickup experience. It's ideal for any restaurant, library, retailer, or other organization that offers curbside pickup for purchases. The plugin, which is licensed GPLv3, works with any theme that supports WooCommerce. - -![Food Store][8] - -[Food Store][9] - -If you're looking for an online food delivery and pickup system, [Food Store][9] could meet your needs. It extends WordPress' core functions and capabilities to convert your brick-and-mortar restaurant into a food-ordering hub. The plugin, licensed under GPLv2, is under active development with over 1,000 installations. - -![RestroPress][10] - -[RestroPress][11] - -[RestroPress][11] is another option to add a food-ordering system to your website. The GPLv2-licensed plugin has over 4,000 installations and supports payment through PayPal, Amazon, and cash on delivery. - -![RestaurantPress][12] - -[RestaurantPress][13] - -If you want to post the menu for your restaurant, bar, or cafe online, try [RestaurantPress][13]. According to its website, the plugin, which is available under a GPLv2 license, "provides modern responsive menu templates that adapt to any devices," according to its website. It has over 2,000 installations and integrates with WooCommerce. - -### Communications - -![Corona Virus \(COVID-19\) Banner & Live Data][14] - -Corona Virus (COVID-19) Banner & Live Data (Don Watkins, [CC BY-SA 4.0][3]) - -You can keep your customers informed about COVID-19 policies with the [Corona Virus Banner & Live Data][15] plugin. It adds a simple banner with live coronavirus information to your website. It has over 6,000 active installations and is open source under GPLv2. - -![MailPoet][16] - -MailPoet (Don Watkins, [CC BY-SA 4.0][3]) - -As rules and restrictions change rapidly, an email newsletter is a great way to keep your customers informed. The [MailPoet][17] WordPress plugin makes it easy to manage and email information about new offerings, hours, and more. Through MailPoet, website visitors can subscribe to your newsletter, which you can create and send with WordPress. It has over 300,000 installations and is open source under GPLv2. - -### Prepare for the post-pandemic era - -Pandemic-driven lockdowns made online shopping, curbside pickup, and home delivery necessities, but these shopping trends are not going anywhere. As the pandemic subsides, restrictions will ease, and we will start shopping, dining, and doing business in person more. Still, consumers have come to appreciate the ease and convenience of e-commerce, even for small local restaurants and stores, and these plugins will help your WordPress site meet their needs. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/wordpress-plugins-retail - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg (An open for business sign.) -[2]: https://opensource.com/sites/default/files/pictures/woocommerce.png (WooCommerce) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://wordpress.org/plugins/woocommerce/ -[5]: https://automattic.com/ -[6]: https://opensource.com/sites/default/files/pictures/curbsidepickup.png (Curbside Pickup) -[7]: https://wordpress.org/plugins/curbside-pickup/ -[8]: https://opensource.com/sites/default/files/pictures/food-store.png (Food Store) -[9]: https://wordpress.org/plugins/food-store/ -[10]: https://opensource.com/sites/default/files/pictures/restropress.png (RestroPress) -[11]: https://wordpress.org/plugins/restropress/ -[12]: https://opensource.com/sites/default/files/pictures/restaurantpress.png (RestaurantPress) -[13]: https://wordpress.org/plugins/restaurantpress/ -[14]: https://opensource.com/sites/default/files/pictures/covid19updatebanner.png (Corona Virus (COVID-19) Banner & Live Data) -[15]: https://wordpress.org/plugins/corona-virus-covid-19-banner/ -[16]: https://opensource.com/sites/default/files/pictures/mailpoet1.png (MailPoet) -[17]: https://wordpress.org/plugins/mailpoet/ diff --git a/sources/tech/20210322 Productivity with Ulauncher.md b/sources/tech/20210322 Productivity with Ulauncher.md deleted file mode 100644 index 5b49a41848..0000000000 --- a/sources/tech/20210322 Productivity with Ulauncher.md +++ /dev/null @@ -1,144 +0,0 @@ -[#]: subject: (Productivity with Ulauncher) -[#]: via: (https://fedoramagazine.org/ulauncher-productivity/) -[#]: author: (Troy Curtis Jr https://fedoramagazine.org/author/troycurtisjr/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Productivity with Ulauncher -====== - -![Productivity with Ulauncher][1] - -Photo by [Freddy Castro][2] on [Unsplash][3] - -Application launchers are a category of productivity software that not everyone is familiar with, and yet most people use the basic concepts without realizing it. As the name implies, this software launches applications, but they also other capablities. - -Examples of dedicated Linux launchers include [dmenu][4], [Synapse][5], and [Albert][6]. On MacOS, some examples are [Quicksilver][7] and [Alfred][8]. Many modern desktops include basic versions as well. On Fedora Linux, the Gnome 3 [activities overview][9] uses search to open applications and more, while MacOS has the built-in launcher Spotlight. - -While these applications have great feature sets, this article focuses on productivity with [Ulauncher][10]. - -### What is Ulauncher? - -[Ulauncher][10] is a new application launcher written in Python, with the first Fedora package available in March 2020 for [Fedora Linux 32][11]. The core focuses on basic functionality with a nice [interface for extensions][12]. Like most application launchers, the key idea in Ulauncher is search. Search is a powerful productivity boost, especially for repetitive tasks. 
- -Typical menu-driven interfaces work great for discovery when you aren’t sure what options are available. However, when the same action needs to happen repeatedly, it is a real time sink to navigate into 3 nested sub-menus over and over again. On the other side, [hotkeys][13] give immediate access to specific actions, but can be difficult to remember. Especially after exhausting all the obvious mnemonics. Is [_Control+C_][14] “copy”, or is it “cancel”? Search is a middle ground giving a means to get to a specific command quickly, while supporting discovery by typing only some remembered word or fragment. Exploring by search works especially well if tags and descriptions are available. Ulauncher supplies the search framework that extensions can use to build all manner of productivity enhancing actions. - -### Getting started - -Getting the core functionality of Ulauncher on any Fedora OS is trivial; install using _[dnf][15]_: - -``` -sudo dnf install ulauncher -``` - -Once installed, use any standard desktop launching method for the first start up of Ulauncher. A basic dialog should pop up, but if not try launching it again to toggle the input box on. Click the gear icon on the right side to open the preferences dialog. - -![Ulauncher input box][16] - -A number of options are available, but the most important when starting out are _Launch at login_ and the hotkey. The default hotkey is _Control+space_, but it can be changed. Running in Wayland needs additional configuration for consistent operation; see the [Ulauncher wiki][17] for details. Users of “Focus on Hover” or “Sloppy Focus” should also enable the “Don’t hide after losing mouse focus” option. Otherwise, Ulauncher disappears while typing in some cases. - -### Ulauncher basics - -The idea of any application launcher, like Ulauncher, is fast access at any time. Press the hotkey and the input box shows up on top of the current application. Type out and execute the desired command and the dialog hides until the next use. Unsurprisingly, the most basic operation is launching applications. This is similar to most modern desktop environments. Hit the hotkey to bring up the dialog and start typing, for example _te_, and a list of matches comes up. Keep typing to further refine the search, or navigate to the entry using the arrow keys. For even faster access, use _Alt+#_ to directly choose a result. - -![Ulauncher dialog searching for keywords with “te”][18] - -Ulauncher can also do quick calculations and navigate the file-system. To calculate, hit the hotkey and type a math expression. The result list dynamically updates with the result, and hitting _Enter_ copies the value to the clipboard. Start file-system navigation by typing _/_ to start at the root directory or _~/_ to start in the home directory. Selecting a directory lists that directory’s contents and typing another argument filters the displayed list. Locate the right file by repeatedly descending directories. Selecting a file opens it, while _Alt+Enter_ opens the folder containing the file. - -### Ulauncher shortcuts - -The first bit of customization comes in the form of shortcuts. The _Shortcuts_ tab in the preferences dialog lists all the current shortcuts. Shortcuts can be direct commands, URL aliases, URLs with argument substitution, or small scripts. Basic shortcuts for Wikipedia, StackOverflow, and Google come pre-configured, but custom shortcuts are easy to add. 
- -![Ulauncher shortcuts preferences tab][19] - -For instance, to create a duckduckgo search shortcut, click _Add Shortcut_ in the _Shortcuts_ preferences tab and add the name and keyword _duck_ with the query __. Any argument given to the _duck_ keyword replaces _%s_ in the query and the URL opened in the default browser. Now, typing _duck fedora_ will bring up a duckduckgo search using the supplied terms, in this case _fedora_. - -A more complex shortcut is a script to convert [UTC time][20] to local time. Once again click _Add Shortcut_ and this time use the keyword _utc_. In the _Query or Script_ text box, include the following script: - -``` -#!/bin/bash -tzdate=$(date -d "$1 UTC") -zenity --info --no-wrap --text="$tzdate" -``` - -This script takes the first argument (given as _$1_) and uses the standard [_date_][21] utility to convert a given UTC time into the computer’s local timezone. Then [zenity][22] pops up a simple dialog with the result. To test this, open Ulauncher and type _utc 11:00_. While this is a good example showing what’s possible with shortcuts, see the [ultz][23] extension for really converting time zones. - -### Introducing extensions - -While the built-in functionality is great, installing extensions really accelerates productivity with Ulauncher. Extensions can go far beyond what is possible with custom shortcuts, most obviously by providing suggestions as arguments are typed. Extensions are Python modules which use the [Ulauncher extension interface][12] and can either be personally-developed local code or shared with others using GitHub. A collection of community developed extensions is available at . There are basic standalone extensions for quick conversions and dynamic interfaces to online resources such as dictionaries. Other extensions integrate with external applications, like password managers, browsers, and VPN providers. These effectively give external applications a Ulauncher interface. By keeping the core code small and relying on extensions to add advanced functionality, Ulauncher ensures that each user only installs the functionality they need. - -![Ulauncher extension configuration][24] - -Installing a new extension is easy, though it could be a more integrated experience. After finding an interesting extension, either on the Ulauncher extensions website or anywhere on GitHub, navigate to the _Extensions_ tab in the preferences window. Click _Add Extension_ and paste in the GitHub URL. This loads the extension and shows a preferences page for any available options. A nice hint is that while browsing the extensions website, clicking on the _Github star_ button opens the extension’s GitHub page. Often this GitHub repository has more details about the extension than the summary provided on the community extensions website. - -#### Firefox bookmarks search - -One useful extension is [Ulauncher Firefox Bookmarks][25], which gives fuzzy search access to the current user’s Firefox bookmarks. While this is similar to typing _*<search-term>_ in Firefox’s omnibar, the difference is Ulauncher gives quick access to the bookmarks from anywhere, without needing to open Firefox first. Also, since this method uses search to locate bookmarks, no folder organization is really needed. This means pages can be “starred” quickly in Firefox and there is no need to hunt for an appropriate folder to put it in. - -![Firefox Ulauncher extension searching for fedora][26] - -#### Clipboard search - -Using a clipboard manager is a productivity boost on its own. 
These managers maintain a history of clipboard contents, which makes it easy to retrieve earlier copied snippets. Knowing there is a history of copied data allows the user to copy text without concern of overwriting the current contents. Adding in the [Ulauncher clipboard][27] extension gives quick access to the clipboard history with search capability without having to remember another unique hotkey combination. The extension integrates with different clipboard managers: [GPaste][28], [clipster][29], or [CopyQ][30]. Invoking Ulauncher and typing the _c_ keywords brings up a list of recent copied snippets. Typing out an argument starts to narrow the list of options, eventually showing the sought after text. Selecting the item copies it to the clipboard, ready to paste into another application. - -![Ulauncher clipboard extension listing latest clipboard contents][31] - -#### Google search - -The last extension to highlight is [Google Search][32]. While a Google search shortcut is available as a default shortcut, using an extension allows for more dynamic behavior. With the extension, Google supplies suggestions as the search term is typed. The experience is similar to what is available on Google’s homepage, or in the search box in Firefox. Again, the key benefit of using the extension for Google search is immediate access while doing anything else on the computer. - -![Google search Ulauncher extension listing suggestions for fedora][33] - -### Being productive - -Productivity on a computer means customizing the environment for each particular usage. A little configuration streamlines common tasks. Dedicated hotkeys work really well for the most frequent actions, but it doesn’t take long before it gets hard to remember them all. Using fuzzy search to find half-remembered keywords strikes a good balance between discoverability and direct access. The key to productivity with Ulauncher is identifying frequent actions and installing an extension, or adding a shortcut, to make doing it faster. Building a habit to search in Ulauncher first means there is a quick and consistent interface ready to go a key stroke away. 
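If the bundled extensions leave you wanting more, writing your own is mostly boilerplate. The sketch below is modeled on the upstream extension tutorial for Ulauncher 5.x; the module paths, class names, and the accompanying `manifest.json` are assumptions based on that tutorial and may differ in other versions, so treat it as illustrative rather than definitive. It defines a toy "echo" extension whose `main.py` renders whatever you type after the keyword and copies it to the clipboard on _Enter_:

```
from ulauncher.api.client.Extension import Extension
from ulauncher.api.client.EventListener import EventListener
from ulauncher.api.shared.event import KeywordQueryEvent
from ulauncher.api.shared.item.ExtensionResultItem import ExtensionResultItem
from ulauncher.api.shared.action.RenderResultListAction import RenderResultListAction
from ulauncher.api.shared.action.CopyToClipboardAction import CopyToClipboardAction


class EchoExtension(Extension):
    """Toy extension: echo the typed argument back as a result item."""

    def __init__(self):
        super().__init__()
        # Call our listener whenever the extension's keyword is typed
        self.subscribe(KeywordQueryEvent, EchoQueryListener())


class EchoQueryListener(EventListener):
    def on_event(self, event, extension):
        query = event.get_argument() or ""
        item = ExtensionResultItem(
            icon="images/icon.png",
            name="Echo: %s" % query,
            description="Press Enter to copy the text to the clipboard",
            on_enter=CopyToClipboardAction(query),
        )
        return RenderResultListAction([item])


if __name__ == "__main__":
    EchoExtension().run()
```

Dropped into its own folder under `~/.local/share/ulauncher/extensions/` next to a manifest that declares a keyword, it shows up after restarting Ulauncher, and the result list updates live as you type.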
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/ulauncher-productivity/ - -作者:[Troy Curtis Jr][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/troycurtisjr/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/ulauncher-816x345.jpg -[2]: https://unsplash.com/@readysetfreddy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://tools.suckless.org/dmenu/ -[5]: https://launchpad.net/synapse-project -[6]: https://github.com/albertlauncher/albert -[7]: https://qsapp.com/ -[8]: https://www.alfredapp.com/ -[9]: https://help.gnome.org/misc/release-notes/3.6/users-activities-overview.html.en -[10]: https://ulauncher.io/ -[11]: https://fedoramagazine.org/announcing-fedora-32/ -[12]: http://docs.ulauncher.io/en/latest/ -[13]: https://en.wikipedia.org/wiki/Keyboard_shortcut -[14]: https://en.wikipedia.org/wiki/Control-C -[15]: https://fedoramagazine.org/managing-packages-fedora-dnf/ -[16]: https://fedoramagazine.org/wp-content/uploads/2021/03/image.png -[17]: https://github.com/Ulauncher/Ulauncher/wiki/Hotkey-In-Wayland -[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-1.png -[19]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-2-1024x361.png -[20]: https://www.timeanddate.com/time/aboututc.html -[21]: https://man7.org/linux/man-pages/man1/date.1.html -[22]: https://help.gnome.org/users/zenity/stable/ -[23]: https://github.com/Epholys/ultz -[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-6-1024x407.png -[25]: https://github.com/KuenzelIT/ulauncher-firefox-bookmarks -[26]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-3.png -[27]: https://github.com/friday/ulauncher-clipboard -[28]: https://github.com/Keruspe/GPaste -[29]: https://github.com/mrichar1/clipster -[30]: https://hluk.github.io/CopyQ/ -[31]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-4.png -[32]: https://github.com/NastuzziSamy/ulauncher-google-search -[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-5.png diff --git a/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md b/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md deleted file mode 100644 index b86101fbc3..0000000000 --- a/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md +++ /dev/null @@ -1,91 +0,0 @@ -[#]: subject: (Meet Sleek: A Sleek Looking To-Do List Application) -[#]: via: (https://itsfoss.com/sleek-todo-app/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Meet Sleek: A Sleek Looking To-Do List Application -====== - -There are plenty of [to-do list applications available for Linux][1]. There is one more added to that list in the form of Sleek. - -### Sleek to-do List app - -Sleek is nothing extraordinary except for its looks perhaps. It provides an Electron-based GUI for todo.txt. - -![][2] - -For those not aware, [Electron][3] is a framework that lets you use JavaScript, HTML and CSS for building cross-platform desktop apps. 
It utilizes Chromium and Node.js for this purpose, which is why some people don't like their desktop apps running a browser underneath.
-
-[Todo.txt][4] is a text-based file format, and if you follow its markup syntax, you can create a to-do list. There are tons of mobile, desktop, and CLI apps that use todo.txt underneath.
-
-Don't worry, you don't need to know the correct syntax for todo.txt. Since Sleek is a GUI tool, you can use its interface to create to-do lists without any special effort.
-
-The advantage of todo.txt is that you can copy or export your files and use them in any to-do list app that supports todo.txt. This gives you portability and lets you keep your data while moving between applications.
-
-### Experience with Sleek
-
-![][5]
-
-Sleek gives you the option to create a new todo.txt file or open an existing one. Once you create or open one, you can start adding items to the list.
-
-Apart from the normal checklist, you can add tasks with a due date.
-
-![][6]
-
-While adding a due date, you can also set a repetition for the task. I find it odd that you cannot create a recurring task without setting a due date. This is something the developer should try to fix in a future release of the application.
-
-![][7]
-
-You can mark a task as complete. You can also choose to hide or show completed tasks, with options to sort tasks based on priority.
-
-Sleek is available in both dark and light themes. There is a dedicated option on the left sidebar to change themes. You can, of course, also change it from the settings.
-
-![][8]
-
-There is no built-in provision to sync your to-do list. As a workaround, you can save your todo.txt file in a location that is automatically synced with Nextcloud, Dropbox, or some other cloud service. This also opens the possibility of using it on mobile with some todo.txt mobile client. It's just a suggestion; I haven't tried it myself.
-
-### Installing Sleek on Linux
-
-Since Sleek is an Electron-based application, it is available for Windows as well as Linux.
-
-For Linux, you can install it using Snap or Flatpak, whichever you prefer.
-
-For Snap, use the following command:
-
-```
-sudo snap install sleek
-```
-
-If you have enabled Flatpak and added the Flathub repository, you can install it using this command:
-
-```
-flatpak install flathub com.github.ransome1.sleek
-```
-
-As I said at the beginning of this article, Sleek is nothing extraordinary. If you prefer a modern-looking to-do list app with the option to import and export your task list, you may give this open source application a try.
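If you are curious what Sleek actually writes to disk, here are a few made-up lines that follow the todo.txt conventions: a priority in parentheses, a leading `x` (plus completion and creation dates) for finished tasks, `+project` and `@context` tags, and optional `key:value` pairs such as a due date:

```
(A) 2021-03-20 Publish the Sleek review +blog @laptop due:2021-03-25
Buy groceries @errands
x 2021-03-19 2021-03-18 Back up the laptop +maintenance @home
```

Because it is just plain text, the same file opens equally well in Sleek, a command-line client, or a mobile todo.txt app, which is exactly where the portability mentioned above comes from.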
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/sleek-todo-app/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/to-do-list-apps-linux/ -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app.png?resize=800%2C630&ssl=1 -[3]: https://www.electronjs.org/ -[4]: http://todotxt.org/ -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-1.png?resize=800%2C521&ssl=1 -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-due-tasks.png?resize=800%2C632&ssl=1 -[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-repeat-tasks.png?resize=800%2C632&ssl=1 -[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-light-theme.png?resize=800%2C521&ssl=1 diff --git a/sources/tech/20210324 Build a to-do list app in React with hooks.md b/sources/tech/20210324 Build a to-do list app in React with hooks.md deleted file mode 100644 index d61af93ed1..0000000000 --- a/sources/tech/20210324 Build a to-do list app in React with hooks.md +++ /dev/null @@ -1,466 +0,0 @@ -[#]: subject: (Build a to-do list app in React with hooks) -[#]: via: (https://opensource.com/article/21/3/react-app-hooks) -[#]: author: (Jaivardhan Kumar https://opensource.com/users/invinciblejai) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Build a to-do list app in React with hooks -====== -Learn to build React apps using functional components and state -management. -![Team checklist and to dos][1] - -React is one of the most popular and simple JavaScript libraries for building user interfaces (UIs) because it allows you to create reusable UI components. - -Components in React are independent, reusable pieces of code that serve as building blocks for an application. React functional components are JavaScript functions that separate the presentation layer from the business logic. According to the [React docs][2], a simple, functional component can be written like: - - -``` -function Welcome(props) { -  return <h1>Hello, {props.name}</h1>; -} -``` - -React functional components are stateless. Stateless components are declared as functions that have no state and return the same markup, given the same props. State is managed in components with hooks, which were introduced in React 16.8. They enable the management of state and the lifecycle of functional components. There are several built-in hooks, and you can also create custom hooks. - -This article explains how to build a simple to-do app in React using functional components and state management. The complete code for this app is available on [GitHub][3] and [CodeSandbox][4]. When you're finished with this tutorial, the app will look like this: - -![React to-do list][5] - -(Jaivardhan Kumar, [CC BY-SA 4.0][6]) - -### Prerequisites - - * To build locally, you must have [Node.js][7] v10.16 or higher, [yarn][8] v1.20.0 or higher, and npm 5.6 - * Basic knowledge of JavaScript - * Basic understanding of React would be a plus - - - -### Create a React app - -[Create React App][9] is an environment that allows you to start building a React app. 
Along with this tutorial, I used a TypeScript template for adding static type definitions. [TypeScript][10] is an open source language that builds on JavaScript: - - -``` -`npx create-react-app todo-app-context-api --template typescript` -``` - -[npx][11] is a package runner tool; alternatively, you can use [yarn][12]: - - -``` -`yarn create react-app todo-app-context-api --template typescript` -``` - -After you execute this command, you can navigate to the directory and run the app: - - -``` -cd todo-app-context-api -yarn start -``` - -You should see the starter app and the React logo which is generated by boilerplate code. Since you are building your own React app, you will be able to modify the logo and styles to meet your needs. - -### Build the to-do app - -The to-do app can: - - * Add an item - * List items - * Mark items as completed - * Delete items - * Filter items based on status (e.g., completed, all, active) - - - -![To-Do App architecture][13] - -(Jaivardhan Kumar, [CC BY-SA 4.0][6]) - -#### The header component - -Create a directory called **components** and add a file named **Header.tsx**: - - -``` -mkdir components -cd  components -vi  Header.tsx -``` - -Header is a functional component that holds the heading: - - -``` -const Header: React.FC = () => { -    return ( -        <div className="header"> -            <h1> -                Add TODO List!! -            </h1> -        </div> -        ) -} -``` - -#### The AddTodo component - -The **AddTodo** component contains a text box and a button. Clicking the button adds an item to the list. - -Create a directory called **todo** under the **components** directory and add a file named **AddTodo.tsx**: - - -``` -mkdir todo -cd todo -vi AddTodo.tsx -``` - -AddTodo is a functional component that accepts props. Props allow one-way passing of data, i.e., only from parent to child components: - - -``` -const AddTodo: React.FC<AddTodoProps> = ({ todoItem, updateTodoItem, addTaskToList }) => { -    const submitHandler = (event: SyntheticEvent) => { -        event.preventDefault(); -        addTaskToList(); -    } -    return ( -        <form className="addTodoContainer" onSubmit={submitHandler}> -            <div  className="controlContainer"> -                <input className="controlSpacing" style={{flex: 1}} type="text" value={todoItem?.text ?? ''} onChange={(ev) => updateTodoItem(ev.target.value)} placeholder="Enter task todo ..." /> -                <input className="controlSpacing" style={{flex: 1}} type="submit" value="submit" /> -            </div> -            <div> -                <label> -                    <span style={{ color: '#ccc', padding: '20px' }}>{todoItem?.text}</span> -                </label> -            </div> -        </form> -    ) -} -``` - -You have created a functional React component called **AddTodo** that takes props provided by the parent function. This makes the component reusable. The props that need to be passed are: - - * **todoItem:** An empty item state - * **updateToDoItem:** A helper function to send callbacks to the parent as the user types - * **addTaskToList:** A function to add an item to a to-do list - - - -There are also some styling and HTML elements, like form, input, etc. - -#### The TodoList component - -The next component to create is the **TodoList**. It is responsible for listing the items in the to-do state and providing options to delete and mark items as complete. 
- -**TodoList** will be a functional component: - - -``` -const TodoList: React.FC = ({ listData, removeItem, toggleItemStatus }) => { -    return listData.length > 0 ? ( -        <div className="todoListContainer"> -            { listData.map((lData) => { -                return ( -                    <ul key={lData.id}> -                        <li> -                            <div className="listItemContainer"> -                                <input type="checkbox" style={{ padding: '10px', margin: '5px' }} onChange={() => toggleItemStatus(lData.id)} checked={lData.completed}/> -                                <span className="listItems" style={{ textDecoration: lData.completed ? 'line-through' : 'none', flex: 2 }}>{lData.text}</span> -                                <button type="button" className="listItems" onClick={() => removeItem(lData.id)}>Delete</button> -                            </div> -                        </li> -                    </ul> -                ) -            })} -        </div> -    ) : (<span> No Todo list exist </span >) -} -``` - -The **TodoList** is also a reusable functional React component that accepts props from parent functions. The props that need to be passed are: - - * **listData:** A list of to-do items with IDs, text, and completed properties - * **removeItem:** A helper function to delete an item from a to-do list - * **toggleItemStatus:** A function to toggle the task status from completed to not completed and vice versa - - - -There are also some styling and HTML elements (like lists, input, etc.). - -#### Footer component - -**Footer** will be a functional component; create it in the **components** directory as follows: - - -``` -cd .. - -const Footer: React.FC = ({item = 0, storage, filterTodoList}) => { -    return ( -        <div className="footer"> -            <button type="button" style={{flex:1}} onClick={() => filterTodoList(ALL_FILTER)}>All Item</button> -            <button type="button" style={{flex:1}} onClick={() => filterTodoList(ACTIVE_FILTER)}>Active</button> -            <button type="button" style={{flex:1}} onClick={() => filterTodoList(COMPLETED_FILTER)}>Completed</button> -            <span style={{color: '#cecece', flex:4, textAlign: 'center'}}>{item} Items | Make use of {storage} to store data</span> -        </div> -    ); -} -``` - -It accepts three props: - - * **item:** Displays the number of items - * **storage:** Displays text - * **filterTodoList:** A function to filter tasks based on status (active, completed, all items) - - - -### Todo component: Managing state with contextApi and useReducer - -![Todo Component][14] - -(Jaivardhan Kumar, [CC BY-SA 4.0][6]) - -Context provides a way to pass data through the component tree without having to pass props down manually at every level. **ContextApi** and **useReducer** can be used to manage state by sharing it across the entire React component tree without passing it as a prop to each component in the tree. - -Now that you have the AddTodo, TodoList, and Footer components, you need to wire them. 
- -Use the following built-in hooks to manage the components' state and lifecycle: - - * **useState:** Returns the stateful value and updater function to update the state - * **useEffect:** Helps manage lifecycle in functional components and perform side effects - * **useContext:** Accepts a context object and returns current context value - * **useReducer:** Like useState, it returns the stateful value and updater function, but it is used instead of useState when you have complex state logic (e.g., multiple sub-values or if the new state depends on the previous one) - - - -First, use **contextApi** and **useReducer** hooks to manage the state. For separation of concerns, add a new directory under **components** called **contextApiComponents**: - - -``` -mkdir contextApiComponents -cd contextApiComponents -``` - -Create **TodoContextApi.tsx**: - - -``` -const defaultTodoItem: TodoItemProp = { id: Date.now(), text: '', completed: false }; - -const TodoContextApi: React.FC = () => { -    const { state: { todoList }, dispatch } = React.useContext(TodoContext); -    const [todoItem, setTodoItem] = React.useState(defaultTodoItem); -    const [todoListData, setTodoListData] = React.useState(todoList); - -    React.useEffect(() => { -        setTodoListData(todoList); -    }, [todoList]) - -    const updateTodoItem = (text: string) => { -        setTodoItem({ -            id: Date.now(), -            text, -            completed: false -        }) -    } -    const addTaskToList = () => { -        dispatch({ -            type: ADD_TODO_ACTION, -            payload: todoItem -        }); -        setTodoItem(defaultTodoItem); -    } -    const removeItem = (id: number) => { -        dispatch({ -            type: REMOVE_TODO_ACTION, -            payload: { id } -        }) -    } -    const toggleItemStatus = (id: number) => { -        dispatch({ -            type: UPDATE_TODO_ACTION, -            payload: { id } -        }) -    } -    const filterTodoList = (type: string) => { -        const filteredList = FilterReducer(todoList, {type}); -        setTodoListData(filteredList) - -    } - -    return ( -        <> -            <AddTodo todoItem={todoItem} updateTodoItem={updateTodoItem} addTaskToList={addTaskToList} /> -            <TodoList listData={todoListData} removeItem={removeItem} toggleItemStatus={toggleItemStatus} /> -            <Footer item={todoListData.length} storage="Context API" filterTodoList={filterTodoList} /> -        </> -    ) -} -``` - -This component includes the **AddTodo**, **TodoList**, and **Footer** components and their respective helper and callback functions. - -To manage the state, it uses **contextApi**, which provides state and dispatch methods, which, in turn, updates the state. It accepts a context object. (You will create the provider for the context, called **contextProvider**, next). - - -``` -` const { state: { todoList }, dispatch } = React.useContext(TodoContext);` -``` - -#### TodoProvider - -Add **TodoProvider**, which creates **context** and uses a **useReducer** hook. The **useReducer** hook takes a reducer function along with the initial values and returns state and updater functions (dispatch). - - * Create the context and export it. 
Exporting it will allow it to be used by any child component to get the current state using the hook **useContext**: [code]`export const TodoContext = React.createContext({} as TodoContextProps);` -``` - * Create **ContextProvider** and export it: [code] const TodoProvider : React.FC = (props) => { -    const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []}); -    const value = {state, dispatch} -    return ( -        <TodoContext.Provider value={value}> -            {props.children} -        </TodoContext.Provider> -    ) -} -``` - * Context data can be accessed by any React component in the hierarchy directly with the **useContext** hook if you wrap the parent component (e.g., **TodoContextApi**) or the app itself with the provider (e.g., **TodoProvider**): [code] <TodoProvider> -  <TodoContextApi /> -</TodoProvider> -``` -* In the **TodoContextApi** component, use the **useContext** hook to access the current context value: [code]`const { state: { todoList }, dispatch } = React.useContext(TodoContext)` -``` - - - -**TodoProvider.tsx:** - - -``` -type TodoContextProps = { -    state : {todoList: TodoItemProp[]}; -    dispatch: ({type, payload}: {type:string, payload: any}) => void; -} - -export const TodoContext = React.createContext({} as TodoContextProps); - -const TodoProvider : React.FC = (props) => { -    const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []}); -    const value = {state, dispatch} -    return ( -        <TodoContext.Provider value={value}> -            {props.children} -        </TodoContext.Provider> -    ) -} -``` - -#### Reducers - -A reducer is a pure function with no side effects. This means that for the same input, the expected output will always be the same. This makes the reducer easier to test in isolation and helps manage state. **TodoReducer** and **FilterReducer** are used in the components **TodoProvider** and **TodoContextApi**. - -Create a directory named **reducers** under **src** and create a file there named **TodoReducer.tsx**: - - -``` -const TodoReducer = (state: StateProps = {todoList:[]}, action: ActionProps) => { -    switch(action.type) { -        case ADD_TODO_ACTION: -            return { todoList: [...state.todoList, action.payload]} -        case REMOVE_TODO_ACTION: -            return { todoList: state.todoList.length ? state.todoList.filter((d) => d.id !== action.payload.id) : []}; -        case UPDATE_TODO_ACTION: -            return { todoList: state.todoList.length ? state.todoList.map((d) => { -                if(d.id === action.payload.id) d.completed = !d.completed; -                return d; -            }): []} -        default: -            return state; -    } -} -``` - -Create a **FilterReducer** to maintain the filter's state: - - -``` -const FilterReducer =(state : TodoItemProp[] = [], action: ActionProps) => { -    switch(action.type) { -        case ALL_FILTER: -            return state; -        case ACTIVE_FILTER: -            return state.filter((d) => !d.completed); -        case COMPLETED_FILTER: -            return state.filter((d) => d.completed); -        default: -            return state; -    } -} -``` - -You have created all the required components. Next, you will add the **Header** and **TodoContextApi** components in App, and **TodoContextApi** with **TodoProvider** so that all children can access the context. 
- - -``` -function App() { -  return ( -    <div className="App"> -      <Header /> -      <TodoProvider> -              <TodoContextApi /> -      </TodoProvider> -    </div> -  ); -} -``` - -Ensure the App component is in **index.tsx** within **ReactDom.render**. [ReactDom.render][15] takes two arguments: React Element and an ID of an HTML element. React Element gets rendered on a web page, and the **id** indicates which HTML element will be replaced by the React Element: - - -``` -ReactDOM.render( -   <App />, -  document.getElementById('root') -); -``` - -### Conclusion - -You have learned how to build a functional app in React using hooks and state management. What will you do with it? - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/react-app-hooks - -作者:[Jaivardhan Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/invinciblejai -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos) -[2]: https://reactjs.org/docs/components-and-props.html -[3]: https://github.com/invincibleJai/todo-app-context-api -[4]: https://codesandbox.io/s/reverent-edison-v8om5 -[5]: https://opensource.com/sites/default/files/pictures/todocontextapi.gif (React to-do list) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://nodejs.org/en/download/ -[8]: https://yarnpkg.com/getting-started/install -[9]: https://github.com/facebook/create-react-app -[10]: https://www.typescriptlang.org/ -[11]: https://www.npmjs.com/package/npx -[12]: https://yarnpkg.com/ -[13]: https://opensource.com/sites/default/files/uploads/to-doapp_architecture.png (To-Do App architecture) -[14]: https://opensource.com/sites/default/files/uploads/todocomponent_0.png (Todo Component) -[15]: https://reactjs.org/docs/react-dom.html#render diff --git a/sources/tech/20210326 10 open source tools for content creators.md b/sources/tech/20210326 10 open source tools for content creators.md deleted file mode 100644 index 39685baba0..0000000000 --- a/sources/tech/20210326 10 open source tools for content creators.md +++ /dev/null @@ -1,144 +0,0 @@ -[#]: subject: (10 open source tools for content creators) -[#]: via: (https://opensource.com/article/21/3/open-source-tools-web-design) -[#]: author: (Kristina Tuvikene https://opensource.com/users/hfkristina) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -10 open source tools for content creators -====== -Check out these lesser-known web design tools for your next project. -![Painting art on a computer screen][1] - -There are a lot of well-known open source applications used in web design, but there are also many great tools that are not as popular. I thought I'd challenge myself to find some obscure options on the chance I might find something useful. - -Open source offers a wealth of options, so it's no surprise that I found 10 new applications that I now consider indispensable to my work. - -### Bulma - -![Bulma widgets][2] - -[Bulma][3] is a modular and responsive CSS framework for designing interfaces that flow beautifully. 
Design work is hardest between the moment of inspiration and the time of initial implementation, and that's exactly the problem Bulma helps solve. It's a collection of useful front-end components that a designer can combine to create an engaging and polished interface. And the best part is that it requires no JavaScript. It's all done in CSS. - -Included components include forms, columns, tabbed interfaces, pagination, breadcrumbs, buttons, notifications, and much more. - -### Skeleton - -![Skeleton][4] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[Skeleton][6] is a lightweight open source framework that gives you a simple grid, basic formats, and cross-browser support. It's a great alternative for bulky frameworks and lets you start coding your site with a minimal but highly functional foundation. There's a slight learning curve, as you do have to get familiar with its codebase, but after you've built one site with Skeleton, you've built a thousand, and it becomes second-nature. - -### The Noun Project - -![The Noun Project][7] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[The Noun Project][8] is a collection of more than 3 million icons and images. You can use them on your site or as inspiration to create your own designs. I've found hundreds of useful icons on the site, and they're superbly easy to use. Because they're so basic, you can use them as-is for a nice, minimal look or bring them into your [favorite image editor][9] and customize them for your project. - -### MyPaint - -![MyPaint][10] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -If you fancy creating your own icons or maybe some incidental art, then you should take a look at [MyPaint][11]. It is a lightweight painting tool that supports various graphic tablets, features dozens of amazing brush emulators and textures, and has a clean, minimal interface, so you can focus on creating your illustration. - -### Glimpse - -![Glimpse][12] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[Glimpse][13] is a cross-platform photo editor, a fork of [GIMP][14] that adds some nice features such as keyboard shortcuts similar to another popular (non-open) image editor. This is one of those must-have [applications for any graphic designer][15]. Climpse doesn't have a macOS release yet, but Mac users may use GIMP in the mean time. - -### LazPaint - -![LaPaz][16] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[LazPaint][17] is a lightweight raster and vector graphics editor with multiple tools and filters. It's also available on multiple platforms and offers straightforward vector editing for quick and basic work. - -### The League of Moveable Type - -![League of Moveable Type][18] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -My favorite open source font foundry, [The League of Moveable Type][19], offers expertly designed open source font faces. There's something suitable for every sort of project here. - -### Shotcut - -![Shotcut][20] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[Shotcut][21] is a non-linear video editor that supports multiple audio and video formats. It has an intuitive interface, undockable panels, and you can do some basic to advanced video editing using this open source tool. - -### Draw.io - -![Draw.io][22] - -(Kristina Tuvikene, [CC BY-SA 4.0][5]) - -[Draw.io][23] is lightweight, dedicated software with a straightforward user interface for creating professional diagrams and flowcharts. You can run it online or [get it from GitHub][24] and install it locally. 
- -### Bonus resource: Olive video editor - -![Olive][25] - -(©2021, [Olive][26]) - -[Olive video editor][27] is a work in progress but considered a very strong contender for premium open source video editing software. It's something you should keep your eye on for sure. - -### Add these to your collection - -Web design is an exciting line of work, and there's always something unexpected to deal with or invent. There are many great open source options out there for the resourceful web designer, and you'll benefit from trying these out to see if they fit your style. - -What open source web design tools do you use that I've missed? Please share your favorites in the comments! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/open-source-tools-web-design - -作者:[Kristina Tuvikene][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/hfkristina -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) -[2]: https://opensource.com/sites/default/files/bulma.jpg (Bulma widgets) -[3]: https://bulma.io/ -[4]: https://opensource.com/sites/default/files/uploads/skeleton.jpg (Skeleton) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: http://getskeleton.com/ -[7]: https://opensource.com/sites/default/files/uploads/nounproject.jpg (The Noun Project) -[8]: https://thenounproject.com/ -[9]: https://opensource.com/life/12/6/design-without-debt-five-tools-for-designers -[10]: https://opensource.com/sites/default/files/uploads/mypaint.jpg (MyPaint) -[11]: http://mypaint.org/ -[12]: https://opensource.com/sites/default/files/uploads/glimpse.jpg (Glimpse) -[13]: https://glimpse-editor.github.io/ -[14]: https://www.gimp.org/ -[15]: https://websitesetup.org/web-design-software/ -[16]: https://opensource.com/sites/default/files/uploads/lapaz.jpg (LaPaz) -[17]: https://lazpaint.github.io/ -[18]: https://opensource.com/sites/default/files/uploads/league-of-moveable-type.jpg (League of Moveable Type) -[19]: https://www.theleagueofmoveabletype.com/ -[20]: https://opensource.com/sites/default/files/uploads/shotcut.jpg (Shotcut) -[21]: https://shotcut.org/ -[22]: https://opensource.com/sites/default/files/uploads/drawio.jpg (Draw.io) -[23]: http://www.draw.io/ -[24]: https://github.com/jgraph/drawio -[25]: https://opensource.com/sites/default/files/uploads/olive.png (Olive) -[26]: https://olivevideoeditor.org/020.php -[27]: https://olivevideoeditor.org/ diff --git a/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md b/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md deleted file mode 100644 index 490978686e..0000000000 --- a/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md +++ /dev/null @@ -1,108 +0,0 @@ -[#]: subject: (Network address translation part 3 – the conntrack event framework) -[#]: via: (https://fedoramagazine.org/conntrack-event-framework/) -[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Network address translation part 3 – 
the conntrack event framework -====== - -![][1] - -This is the third post in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 [introduced the “conntrack” command][3]. This part gives an introduction to the “conntrack” event framework. - -### Introduction - -NAT configured via iptables or nftables builds on top of netfilter’s connection tracking framework. conntrack’s event facility allows real-time monitoring of incoming and outgoing flows. This event framework is useful for debugging or logging flow information, for instance with [ulog][4] and its IPFIX output plugin. - -### Conntrack events - -Run the following command to see a real-time conntrack event log: - -``` -# conntrack -E -NEW tcp 120 SYN_SENT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 [UNREPLIED] src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 -UPDATE tcp 60 SYN_RECV src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 -UPDATE tcp 432000 ESTABLISHED src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED] -UPDATE tcp 120 FIN_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED] -UPDATE tcp 30 LAST_ACK src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED] -UPDATE tcp 120 TIME_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED] -``` - -This prints a continuous stream of events: - - * new connections - * removal of connections - * changes in a connections state. - - - -Hit _ctrl+c_ to quit. - -The conntrack tool offers a number of options to limit the output. For example its possible to only show DESTROY events. The NEW event is generated after the iptables/nftables rule set accepts the corresponding packet. - -### **Conntrack expectations** - -Some legacy protocols require multiple connections to work, such as [FTP][5], [SIP][6] or [H.323][7]. To make these work in NAT environments, conntrack uses “connection tracking helpers”: kernel modules that can parse the specific higher-level protocol such as ftp. - -The _nf_conntrack_ftp_ module parses the ftp command connection and extracts the TCP port number that will be used for the file transfer. The helper module then inserts a “expectation” that consists of the extracted port number and address of the ftp client. When a new data connection arrives, conntrack searches the expectation table for a match. An incoming connection that matches such an entry is flagged RELATED rather than NEW. This allows you to craft iptables and nftables rulesets that reject incoming connection requests unless they were requested by an existing connection. If the original connection is subject to NAT, the related data connection will inherit this as well. This means that helpers can expose ports on internal hosts that are otherwise unreachable from the wider internet. The next section will explain this expectation mechanism in more detail. - -### The expectation table - -Use _conntrack -L expect_ to list all active expectations. In most cases this table appears to be empty, even if a helper module is active. This is because expectation table entries are short-lived. Use _conntrack -E expect_ to monitor the system for changes in the expectation table instead. 
- -Use this to determine if a helper is working as intended or to log conntrack actions taken by the helper. Here is an example output of a file download via ftp: -``` - -``` - -# conntrack -E expect -NEW 300 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp -DESTROY 299 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp -``` - -``` - -The expectation entry describes the criteria that an incoming connection request must meet in order to recognized as a RELATED connection. In this example, the connection may come from any port, must go to port 46767 (the port the ftp server expects to receive the DATA connection request on). Futhermore the source and destination addresses must match the address of the ftp client and server. - -Events also include the connection that created the expectation and the name of the protocol helper (ftp). The helper has full control over the expectation: it can request full matching (IP addresses of the incoming connection must match), it can restrict to a subnet or even allow the request to come from any address. Check the “mask-dst” and “mask-src” parameters to see what parts of the addresses need to match. - -### Caveats - -You can configure some helpers to allow wildcard expectations. Such wildcard expectations result in requests coming from an unrelated 3rd party host to get flagged as RELATED. This can open internal servers to the wider internet (“NAT slipstreaming”). - -This is the reason helper modules require explicit configuration from the nftables/iptables ruleset. See [this article][8] for more information about helpers and how to configure them. It includes a table that describes the various helpers and the types of expectations (such as wildcard forwarding) they can create. The nftables wiki has a [nft ftp example][9]. - -A nftables rule like ‘ct state related ct helper “ftp”‘ matches connections that were detected as a result of an expectation created by the ftp helper. - -In iptables, use “_-m conntrack –ctstate RELATED -m helper –helper ftp_“. Always restrict helpers to only allow communication to and from the expected server addresses. This prevents accidental exposure of other, unrelated hosts. - -### Summary - -This article introduced the conntrack event facilty and gave examples on how to inspect the expectation table. The next part of the series will describe low-level debug knobs of conntrack. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/conntrack-event-framework/ - -作者:[Florian Westphal][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/strlen/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/network-address-translation-part-3-816x345.jpg -[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/ -[3]: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/ -[4]: https://netfilter.org/projects/ulogd/index.html -[5]: https://en.wikipedia.org/wiki/File_Transfer_Protocol -[6]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol -[7]: https://en.wikipedia.org/wiki/H.323 -[8]: https://github.com/regit/secure-conntrack-helpers/blob/master/secure-conntrack-helpers.rst -[9]: https://wiki.nftables.org/wiki-nftables/index.php/Conntrack_helpers diff --git a/sources/tech/20210327 My favorite open source tools to meet new friends.md b/sources/tech/20210327 My favorite open source tools to meet new friends.md deleted file mode 100644 index 88d8783754..0000000000 --- a/sources/tech/20210327 My favorite open source tools to meet new friends.md +++ /dev/null @@ -1,129 +0,0 @@ -[#]: subject: (My favorite open source tools to meet new friends) -[#]: via: (https://opensource.com/article/21/3/open-source-streaming) -[#]: author: (Chris Collins https://opensource.com/users/clcollins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -My favorite open source tools to meet new friends -====== -Quarantine hasn't been all bad—it's allowed people to create fun online -communities that also help others. -![Two people chatting via a video conference app][1] - -In March 2020, I joined the rest of the world in quarantine at home for two weeks. Then, two weeks turned into more. And more. It wasn't too hard on me at first. I had been working a remote job for a year already, and I'm sort of an introvert in some ways. Being at home was sort of "business as usual" for me, but I watched as it took its toll on others, including my wife. - -### An unlikely lifeline - -That spring, I found out a buddy and co-worker of mine was a Fairly Well-Known Streamer™ who had been doing a podcast for something ridiculous, like _15 years_. So, I popped into the podcast's Twitch channel, [2DorksTV][2]. What I found, I was not prepared for. My friend and his co-hosts perform their podcast _live_ on Twitch, like the cast of _Saturday Night Live_ or something! _**Live!**_ The hosts, Stephen, Ashley, and Jacob, joked and laughed, (sometimes) read news stories, and interacted with a vibrant community of followers—_live!_ - -I introduced myself in the chat, and Stephen looked into the camera and welcomed me, as though he were looking at and talking directly to me. I was surprised to find that there was a real back and forth. The community in the chat talked with the hosts and one another, and the hosts interacted with the chat. - -It was a great time, and I laughed out loud for the first time in several months. - -### Trying a new thing - -Shortly after getting involved in the community, I thought I might try out streaming for myself. 
I didn't have a podcast or a co-host, but I really, _really_ like to play Dwarf Fortress, a video game that's not open source but is built for Linux. People stream themselves playing games, right? I had all the stuff I needed because I already worked remotely full time. Other folks were struggling to find a webcam in stock and a spot to work that wasn't a kitchen table, but I'd been set up for months. - -When I looked into it more, I found that a free and open source video recording and streaming application named OBS Studio is one of the most popular ways to stream to Twitch and other platforms. Score one for open source! - -[OBS worked][3] _right out of the box_ on my Fedora system, so there's not much to write about. And that's a good thing! - -So, it wasn't because of the software that my first stream was…rough, to say the least. I didn't really know what I was doing, the quality wasn't that great, and I kept muting the mic to cough and forgetting to turn it back on. I think there were a grand total of zero viewers who saw that stream, and that's probably for the best. - -The next day though, I shared what I'd done in chat, and everyone was amazingly supportive. I decided to try again. In the second stream, Stephen popped in and said hi, and I had the opportunity to be on the other side of the camera, talking to a friend in chat and really enjoying the interaction. Within a few more streams, more of the community started to hop on and chat and hang out and, despite having no idea what was going on (Dwarf Fortress is famously a bit dense), sticking around and interacting with me. - -### The open source behind the stream - -Eventually, I started to up my game. Not my Dwarf Fortress game, but my streaming game. My stream slowly became more polished and more frequent. I created my own official stream, called _It's Dwarf Fortress! …with Hammerdwarf!_ - -The entire production is powered by open source: - - * [VLC Media Player][4] plays the intro and outro music. - * I use [GIMP][5] (GNU Image Manipulation Program) to make the logos and splash screens. - * [OBS Studio][6] handles the recording and streaming. - * Both GIMP and OBS are packaged with [Flatpak][7], a seriously cool next-generation packaging technology for Linux. - * I've recently started using [OpenShot][8] to edit recordings of my stream before uploading them to YouTube. - * Even the fonts I use are Open Font License fonts. - * All this, the game included, live on a Fedora Linux system. - - - -### Coding out in the open - -As I got further into streaming, I discovered, again through Stephen, that folks stream themselves programming. What?! But it's oddly satisfying, listening to someone calmly talk about what they're doing and why and hearing the quiet clicks of their keyboard. I've started keeping those kinds of things on in the background while I work, just for ambiance. - -Eventually, I thought to myself, "Why not? I could do that too. I program things." I had plenty of side projects to work on, and maybe folks would come hang out with me while I work on them. - -I created a new stream called _It's _not_ Dwarf Fortress! …with Hammerdwarf!_ (Look—that's just how Dwarf Fortress-y I am.) I started up that stream and worked on a little side project, and—the very first time—a group of four or five folks from my previous job hopped in and hung out with me, despite it being the middle of their workday. 
Friends from the 2DorksTV Discord joined as well, and we had a nice big group of folks chatting and helping me troubleshoot code and regexes and missing whitespace. And then, some random folks I didn't know, folks looking around for a stream on Twitch, found it and jumped in as well! - -### Sharing is what open source is about - -Fast forward a few months, and I was talking (again) with Stephen. Over the months, we've discussed how folks represent themselves online and commiserated about feeling out of place at work, fighting to feel like we deserve to be there, to convince ourselves that we're good enough to be there. It's not just him or just me, I realize. I have this conversation with _so many people_. I told Stephen that I think it's because there is so little representation of _trying_. Everyone shares their success story on Twitter. They only ever _do_ or _don't_. - -They never share themselves trying. - -("Friggin Yoda, man," Stephen commented on the matter. You can see why he's got a successful podcast.) - -Presentations at tech conferences are filled with complicated, difficult stories, but they're always success stories. The "internet famous" in our field, developer advocates and tech gurus, share amazing new things and present complicated demos, but all of them are backed by teams of people working with them that no one ever sees. Online, with tech specifically and honestly the rest of the world generally, you see only the finished sausage, not all the grind. - -These are the things I think help people, and I realized that I need to be open about all of my processes. Projects I work on take me _forever_ to figure out. Code that I write _sucks_. I'm a senior software engineer/site reliability engineer for a large software company. I spend _hours and hours_ reading documentation, struggling to figure out how something works, and slowly, slowly incrementing on it. Even that first Dwarf Fortress stream needed a lot of help. - -And this is normal! - -Everyone does it, but we're so tuned into sharing our successes and hiding our failures that all we can compare our flawed selves to is other people's successes. We never see their failures, and we try to live up to a standard of illusion. - -I even struggled to decide whether I should create a whole new channel for this thing I was trying to do. I spent all this time building a professional career image online—I couldn't show everyone how much of a Dwarf Dork I _really_ am! And once again, Stephen inspired me: - -> "Hammerdwarf is you. And your coding stream was definitely a professional stream. The channel name didn't matter…Be authentic." - -Professional Chris Collins and personal Hammerdwarf make up who I am. I have a wife and two dogs, I like space stuff, I get a headache every now and again, I write for Opensource.com and [EnableSysadmin][9], I speak at tech conferences, and sometimes, I have to take an afternoon off work to sit in the sun or lie awake at night because I miss my friends. - -All that to say, my summer project, inspired by Stephen, Ashley, and Jacob and the community from 2DorksTV and powered by open source technology, is to fail publicly and to be real. To borrow a phrase from another excellent podcast: I am [failing out loud][10]. - -I've started a streaming program on Twitch called _Practically Programming_, dedicated to showing what it is like for me at work, working on real things and failing and struggling and needing help. 
I've been in tech for almost 20 years, and I still have to learn every day, and now I'm going to do so online where everyone can see me. Because it's important to show your failures and flaws as much as your successes, and it's important to see others fail and realize it's a normal part of life. - -![Practically Programming logo][11] - -(Chris Collins, [CC BY-SA 4.0][12]) - -That's what I did last summer. - -And _Practically Programming_ is what I will be doing this spring and from now on. Please join me if you're interested, and please, if you fail at something or struggle with something, know that everyone else is doing so, too. As long as you keep trying and keep learning, it doesn't matter how many times you fail. - -You got this! - -* * * - -_Practically Programming_ is on my [Hammerdwarf Twitch channel][13] on Tuesdays and Thursdays at 5pm Pacific time. - -Dwarf Fortress is on almost any other time… - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/open-source-streaming - -作者:[Chris Collins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clcollins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app) -[2]: https://www.twitch.com/2dorkstv -[3]: https://opensource.com/article/20/4/open-source-live-stream -[4]: https://www.videolan.org/vlc/index.html -[5]: https://www.gimp.org/ -[6]: https://obsproject.com/ -[7]: https://opensource.com/article/21/2/linux-packaging -[8]: https://opensource.com/article/21/2/linux-python-video -[9]: http://redhat.com/sysadmin -[10]: https://open.spotify.com/show/1WcfOvSiD99zrVLFWlFHpo -[11]: https://opensource.com/sites/default/files/uploads/practically_programming_logo.png (Practically Programming logo) -[12]: https://creativecommons.org/licenses/by-sa/4.0/ -[13]: https://www.twitch.tv/hammerdwarf diff --git a/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md b/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md deleted file mode 100644 index 14f3a3e3d4..0000000000 --- a/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md +++ /dev/null @@ -1,226 +0,0 @@ -[#]: subject: (Rapidly configure SD cards for your Raspberry Pi cluster) -[#]: via: (https://opensource.com/article/21/3/raspberry-pi-cluster) -[#]: author: (Gregor von Laszewski https://opensource.com/users/laszewski) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Rapidly configure SD cards for your Raspberry Pi cluster -====== -Create multiple SD cards that are preconfigured to create Pi clusters -with Cloudmesh Pi Burner. -![Raspberries with pi symbol overlay][1] - -There are many reasons people want to create [computer clusters][2] using the Raspberry Pi, including that they have full control over their platform, they're able to use an inexpensive, highly usable platform, and get the opportunity to learn about cluster computing in general. - -There are different methods for setting up a cluster, such as headless, network booting, and booting from SD cards. 
Each method has advantages and disadvantages, but the latter method is most familiar to users who have worked with a single Pi. Most cluster setups involve many complex steps that require a significant amount of time because they are executed on an individual Pi. Even starting is non-trivial, as you need to set up a network to access them. - -Despite improvements to the [Raspberry Pi Imager][3] and the availability of [PiBakery][4], the process is still too involved. So, at Cloudmesh, we asked: - -> Is it possible to develop a tool that is specifically targeted to burn SD cards for Pis in a cluster one at a time so that the cards can be just plugged in and, with minimal effort, start a cluster that simply works? - -In response, we developed a tool called **Cloudmesh Pi Burner** for SD Cards, and we present it within [Pi Planet][5]. No more spending hours upon hours to replicate the steps and learn complex DevOps tutorials; instead, you can get a cluster set up with just a few commands. - -For this, we developed `cms burn`, which is a program that you can execute on a "manager" Pi or a Linux or macOS computer to burn cards for your cluster. - -We set up a [comprehensive package][6] on GitHub that can be installed easily. You can read about it in detail in the [README][7]. There, you can also find detailed instructions on how to [burn directly][8] from a macOS or Linux computer. - -### Getting started - -This article explains how to create a cluster setup using five Raspberry Pi units (you need a minimum of two, but this method also works for larger numbers). To follow along, you must have five SD cards, one for each of the five Pi units. It's helpful to have a network switch (managed or unmanaged) with five Ethernet cables (one for each Pi). - -#### Requirements - -You need: - - * 5 Raspberry Pi boards - * 5 SD cards - * 5 Ethernet cables - * A network switch (unmanaged or managed) - * WiFi access - * Monitor, mouse, keyboard (for desktop access on Pi) - * An SD card slot for your computer or the manager Pi (and preferably supports USB 3.0 speeds) - * If you're doing this on a Mac, you must install [XCode][9] and [Homebrew][10] - - - -On Linux, the open source **ext4** filesystem is supported by default. However, Apple doesn't provide this capability for macOS, so you must purchase support separately. I use Paragon Software's **extFS** application. Like macOS itself, this is largely based upon, but is not itself, open source. - -At Cloudmesh, we maintain a list of [hardware parts][11] you need to consider when setting up a cluster. - -### Network configuration - -Figure 1 shows our network configuration. Of the five Raspberry Pi computers, one is dedicated as a _manager_ and four are _workers_. Using WiFi for the manager Pi allows you to set it up anywhere in your house or other location (other configurations are discussed in the README). - -Our configuration uses an unmanaged network switch, where the manager and workers communicate locally with each other, and the manager provides internet access to the workers over a bridge that's configured for you. - -![Pi cluster setup with bridge network][12] - -Pi cluster setup with bridge network (©2021 [The Cloudmesh Projects][13]) - -### Set up the Cloudmesh burn application - -To set up the Cloudmesh burn program, first [create a Python `venv`][14]: - - -``` -$ python3 -m venv ~/ENV3 -$ source ~/ENV3/bin/activate -``` - -Next, install the Cloudmesh cluster generation tools and start the burn process. 
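The `--device` option of the burn command expects the device node of the SD card reader. If you are not sure which node that is, a quick, distribution-agnostic check is to run `lsblk` before and after inserting a card and see which disk appears — often `/dev/sda`, `/dev/sdb`, or `/dev/mmcblk0`, though the name on your machine may differ:

```
$ lsblk -p -o NAME,SIZE,TYPE,MOUNTPOINT
```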
You must adjust the path to your SD card, which differs depending on your system and what kind of SD card reader you're using. Here's an example: - - -``` -(ENV3)$ pip install cloudmesh-pi-cluster -(ENV3)$ cms help -(ENV3)$ cms burn info -(ENV3)$ cms burn cluster \ -\--device=/path/to/sdcard \ -\--hostname=red,red01,red02,red03,red04 \ -\--ssid=myssid -y -``` - -Fill out the passwords and plug in the SD cards as requested. - -### Start your cluster and configure it - -Plug the burned SD cards into the Pis and switch them on. Execute the `ssh` command to log into your manager—it's the one called `red` (worker nodes are identified by number): - - -``` -`(ENV3)$ ssh pi@red.local` -``` - -This takes a while, as the filesystems on the SD cards need to be installed, and configurations such as Country, SSH, and WiFi need to be activated. - -Once you are in the manager, install the Cloudmesh cluster software in it. (You could have done this automatically, but we decided to leave this part of the process up to you to give you maximum flexibility.) - - -``` -pi@red:~ $ curl -Ls \ - \ -\--output install.sh -pi@red:~ $ sh ./install.sh -``` - -After lots of log messages, you see: - - -``` -################################################# -# Install Completed                             # -################################################# -Time to update and upgarde: 339 s -Time to install the venv:   22 s -Time to install cloudmesh:  185 s -Time for total install:     546 s -Time to install: 546 s -################################################# -Please activate with -    source ~/ENV3/bin/activate -``` - -Reboot: - - -``` -`pi@red:~ $ sudo reboot` -``` - -### Start using your cluster - -Log in to your manager Pi over SSH: - - -``` -`(ENV3)$ ssh pi@red.local` -``` - -Once you're logged into your manager (in this example, `red.local`) on the network, execute a command to see if things are working. For example, you can use a temperature monitor to get the temperature from all Pi boards: - - -``` -(ENV3) pi@red:~ $ cms pi temp red01,red02,red03,red04 - -pi temp red01,red02 -+--------+--------+-------+----------------------------+ -| host   |    cpu |   gpu | date                       | -|--------+--------+-------+----------------------------| -| red01  | 45.277 |  45.2 | 2021-02-23 22:13:11.788430 | -| red02  | 42.842 |  42.8 | 2021-02-23 22:13:11.941566 | -| red02  | 43.356 |  42.8 | 2021-02-23 22:13:11.961245 | -| red02  | 44.124 |  42.8 | 2021-02-23 22:13:11.981896 | -+--------+--------+-------+----------------------------+ -``` - -### Access the workers - -It's even more convenient to access the workers, so we designed a tunnel command that makes setup easy. Call it on the manager node, for example: - - -``` -`(ENV3) pi@red:~ $ cms host setup "red0[1-4]" user@laptop.local` -``` - -This creates ssh keys on all workers, gathers ssh keys from all hosts, and scatters the public keys to the manager's and worker's authorized key file. This also makes the manager node a bridge for the worker nodes so they can have internet access. Now our laptop we update our ssh config file with the following command. - - -``` -`(ENV3)$ cms host config proxy pi@red.local red0[1-4]` -``` - -Now you can access the workers from your computer. 
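Under the hood this is ordinary SSH jump-host access: connections to the workers are relayed through the manager. The entries added to `~/.ssh/config` on the laptop are roughly of the following shape — a hedged sketch of the idea, not the literal output of `cms`:

```
Host red01
    User pi
    ProxyJump pi@red.local
```

With an entry like this in place, a plain `ssh red01` from the laptop reaches the worker by way of the manager.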
Try it out with the temperature program: - - -``` -(ENV3)$ cms pi temp "red,red0[1-4]"               - -+-------+--------+-------+----------------------------+ -| host  |    cpu |   gpu | date                       | -|-------+--------+-------+----------------------------| -| red   | 50.147 |  50.1 | 2021-02-18 21:10:05.942494 | -| red01 | 51.608 |  51.6 | 2021-02-18 21:10:06.153189 | -| red02 | 45.764 |  45.7 | 2021-02-18 21:10:06.163067 | -... -+-------+--------+-------+----------------------------+ -``` - -### More information - -Since this uses SSH keys to authenticate between the manager and the workers, you can log directly into the workers from the manager. You can find more details in the [README][7] and on [Pi Planet][5]. Other Cloudmesh components are discussed in the [Cloudmesh manual][15]. - -* * * - -_This article is based on [Easy Raspberry Pi cluster setup with Cloudmesh from MacOS][13] and is republished with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/raspberry-pi-cluster - -作者:[Gregor von Laszewski][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/laszewski -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay) -[2]: https://en.wikipedia.org/wiki/Computer_cluster -[3]: https://www.youtube.com/watch?v=J024soVgEeM -[4]: https://www.raspberrypi.org/blog/pibakery/ -[5]: https://piplanet.org/ -[6]: https://github.com/cloudmesh/cloudmesh-pi-burn -[7]: https://github.com/cloudmesh/cloudmesh-pi-burn/blob/main/README.md -[8]: https://github.com/cloudmesh/cloudmesh-pi-burn#71-quickstart-for-a-setup-of-a-cluster-from-macos-or-linux-with-no-burning-on-a-pi -[9]: https://opensource.com/article/20/8/iterm2-zsh -[10]: https://opensource.com/article/20/6/homebrew-mac -[11]: https://cloudmesh.github.io/pi/docs/hardware/parts/ -[12]: https://opensource.com/sites/default/files/uploads/network-bridge.png (Pi cluster setup with bridge network) -[13]: https://cloudmesh.github.io/pi/tutorial/sdcard-burn-pi-headless/ -[14]: https://opensource.com/article/20/10/venv-python -[15]: https://cloudmesh.github.io/cloudmesh-manual/ diff --git a/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md b/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md deleted file mode 100644 index c3104bd87a..0000000000 --- a/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md +++ /dev/null @@ -1,500 +0,0 @@ -[#]: subject: (Setting up a VM on Fedora Server using Cloud Images and virt-install version 3) -[#]: via: (https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/) -[#]: author: (pboy https://fedoramagazine.org/author/pboy/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Setting up a VM on Fedora Server using Cloud Images and virt-install version 3 -====== - -![][1] - -Photo by [Max Kukurudziak][2] on [Unsplash][3] - -Many servers use one or more virtual machines (VMs), e.g. 
to isolate a public service in the best possible way and to protect the host server from compromise. This article explores the possibilities of deploying [Fedora Cloud Base][4] images as a VM in an autonomous Fedora 33 Server Edition using version 3 of _virt-install_. This capability was introduced with Fedora 33 and the new _‐-cloud-init_ option. - -### Why use Cloud Images? - -The standard virtualization tool for Fedora Server is _libvirt_. For a long time the only way to create a virtual Fedora Server instance was to create a _libvirt_ VM and run the standard Anaconda installation. Several tools exist to make this procedure as comfortable and fail-safe as possible, e.g. a [Cockpit module][5]. The process is pretty straight forward and every Fedora system administrator is used to it. - -With the advent of cloud systems came cloud images. These are pre-built ready-to-run virtual servers. Fedora provides specialized images for various cloud systems as well as Fedora Cloud Base image, a generic optimized VM. The image image is copied to the server and used by a virtual machine as an operational file system. - -These images save the system administrator the time-consuming process of many individual passes through Anaconda. An installation merely requires the invocation of _virt-install_ with suitable parameters. It is a CLI tool, thus easily scriptable and reproducible. In a worst case emergency, a replacement VM can be set up quickly. - -Fedora Cloud Base images are integrated into the Fedora QA Process. This prevents subtle inconsistencies that may lead to not-so-subtle problems during operation. For any system administrator concerned about security and reliability, this is an incredibly valuable advantage over _libvirt_ compatible VM images from third party vendors. Cloud images speed up the deployment process as well. - -#### Implementation considerations - -As usual, there is nothing for free. Cloud images use _cloud-init_ for an automatic initial configuration, which is otherwise done as part of Anaconda. The cloud system usually provides the necessary information. In the absence of cloud, the system administrator must provide a replacement. - -Basically, there are two implementation options. - -First, with relatively little additional effort, you can install [Vagrant and the Vagrant libvirt plugin][6]. If the server is also used for development work, Vagrant may already be in use and the additional effort is minimal. This option is then the optimal choice. - -Second, you can use _virt-install_ directly. Until now you had to create a cloud-init nocloud datasource iso in [several additional steps][7]. v_irt-install_ version 3, included since Fedora 33, elements these additional steps. The newly introduced _‐-cloud-init_ option initially configures a VM from a cloud image without additional software and without detours. _Virt-install_ takes on taming the rather complex cloud-init nocloud procedures. - -There are two ways to make use of _virt-install_: - - * quick and (not really) dirty: minimal Cloud-init configuration -This requires a little more post-installation effort and is suitable if you set up only a few VMs. - - - * elaborate cloud-init based configuration using simple configuration files -This requires more pre-installation work and is more effective if you have to set up multiple VMs. - - - -#### Be certain you know what you are getting - -There is no light without shadow. 
Cloud Base image (currently) do not provide an alternatively built but otherwise identical build of Fedora Server Edition. There are some subtle differences. For example: - - * Fedora Server Edition uses xfs as its file system, Cloud Base Image still uses the older ext4. - * Fedora Server Edition now persists the network configuration completely and stringently in NetworkManager, Fedora Cloud Base image still uses the old ifcfg plugin. - * Other differences are conceptual. For example, Fedora Cloud image does not install a firewall by default. - * The use concept for the persistent storage is also different due to technical differences. - - - -Overall, however, the functionality is so far identical and the advantages so noticeable that it is worthwhile and makes sense to use Fedora Cloud Base. - -### A **t**ypical **u**se **c**ase - -Consider a use case that often applies to small and medium-sized organizations. The hardware is located in an off-premise housing center. Fedora Server is required with the most rigorous isolation possible from public access, e.g. ssh and key based authentication only. Any risk of compromise has to be minimized. Public services are offered in a VM to provide as much isolation as possible. The VM operates as a pure front end with minimal exposure of services. For example, only an Apache web server is installed. All data processing resides on an application server in a separate VM (or a container), e.g. JBoss rsp. Wildfly. The application server accesses a database that may run directly on the host hardware for performance reasons but without any public access. - -Regarding the infrastructure, at least some VMs as well as the host ssh or vpn process need access to the public network. They have to share the physical interface. At the same time, VMs and host need another internal network that enables protected communication. The application VM only connects to the internal network. And we need an internal DNS for the services to find each other. - -### **System Requirements** - -You need a Fedora 33 Server Edition with _libvirt_ virtualization properly installed and working. The _libvirt_ network “default” with virbr0 provides the internal protected network and is active. Some external network device, usually a router, provides DHCP service for the external network. Every lab or production environment should meet these requirements. - -For internal name resolution to work, you have to decide upon an internal domain name and extend the _libvirt_ network configuration. In this example the external name will be _example.com_, and the internal domain name will be _example.lan_. The Fedora server thus receives the name _host.example.com_ externally and internally _host.example.lan_ or just _host_ for short. The names of the VMs are _**app**_ and _**web**_, respectively. The two examples that follow will create these VMs. - -#### Network preparations for the examples - -Modify the configuration of the internal network similar to the example below (N.B. adjust your domain name accordingly! Leave mac address and UUID untouched!): - -``` -# virsh net-edit default - - default - aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee - - - - - - - - - host - host.example.lan - - - - - - - - - -# virsh net-destroy default -# virsh net-start default -``` - -Do NOT add an external forwarder via _<forwarder addr=’xxx.yyy.zz.uu’/>_ tag. It will break the VMs split-dns capability. 
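After restarting the network, it is worth dumping the definition once to confirm that the domain and DNS entries survived the edit. The relevant elements usually end up looking similar to the following sketch (keep your existing UUID and MAC address; the optional `localOnly` attribute keeps queries for the internal domain from being forwarded upstream):

```
# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <domain name='example.lan' localOnly='yes'/>
  <dns>
    <host ip='192.168.122.1'>
      <hostname>host</hostname>
      <hostname>host.example.lan</hostname>
    </host>
  </dns>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```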
- -Due to a bug in the interaction of _systemd-resolved_ and _libvirt_, the name resolution for the internal network does not work on the host at the moment without additional measures. The VM’s are not affected. Hence, the host cannot resolve the names of the VMs, but conversely, the VMs can resolve to each other and to the host. The latter is sufficient here. - -With everything set up correctly the following interfaces are active on the host: - -``` -# ip a - 1: lo: mtu ... - inet 127.0.0.1/8 scope host ... - inet6 ::1/128 scope host ... - 2: enpNsM: mtu ... - inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 - ... - 4: virbr0-nic: mtu 8... -``` - -### Creating a Fedora Server **v**irtual **m**achine **u**sing Fedora Cloud Base Image - -#### Preparations - -First download a Fedora 33 Cloud Base Image file and store it in the directory _/var/lib/libvirt/boot_. By convention, this is the location from which images are installed. - -``` -# sudo wget https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 -O /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 -# sudo wget https://getfedora.org/static/checksums/Fedora-Cloud-33-1.2-x86_64-CHECKSUM -O /var/lib/libvirt/boot/Fedora-Cloud-33-1.2-x86_64-CHECKSUM -# sudo cd /var/lib/libvirt/boot -# sudo sha256sum --ignore-missing -c *-CHECKSUM -``` - -The *CHECKSUM file contains the values for all cloud images. The check should result in one _OK_. - -For external connectivity of the VMs, the easiest way is to use MacVTap in the VM configuration. You don’t need to set up a virtual bridge nor touch the critical configuration of the physical Ethernet interface of an off-premise server. Enable forwarding for both IPv4 and IPv6 (dual stack). _Libvirt_ takes care for IPv4. Nevertheless, it is advantageous to configure forwarding independent of _libvirt_. - -Check the forwarding configuration: - -``` -# cat /proc/sys/net/ipv4/ip_forward -# cat /proc/sys/net/ipv6/conf/default/forwarding -``` - -In both cases, an output value of 1 is required. If necessary, activate forwarding temporarily until next reboot: - -**[…]# echo 1 > /proc/sys/net/ipv4/ip_forward -[…]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding** - -For a permanent setup create the following file: - -``` -# vim /etc/sysctl.d/50-enable-forwarding.conf -# local customizations -# -# enable forwarding for dual stack -net.ipv4.ip_forwarding=1 -net.ipv6.conf.all.forwarding=1 -``` - -With these preparations completed, the following two examples, creating the VMs _**app**_ and _**web**_, should work flawlessly. - -#### Example 1: Quick & (not really) dirty: Minimal cloud-init configuration - -Installation for the _**app**_ VM begins by creating a copy of the download image as a (fully installed) virtual disk in the directory _/var/lib/libvirt/images_. This is, by convention, the virtual disk pool. The _virt-install_ program performs the installation. The parameters on _virt-install_ pass all the required information. There is no need for further intervention or preparation The parameters first specify the usual, general properties such as memory, CPU and the (non-graphical) console for the server. The parameter _‐-graphics none_, enforces a redirect to the terminal window. After booting you get a VM terminal prompt and immediate access from the host. Parameter _‐-import_ causes skipping the install task and booting from the first virtual disk specified by the _‐-disk_ parameter. 
The VM “app” is will connect to the internal virtual network thus only one network is specified by the _‐-network_ parameter. - -The only new parameter is _‐-cloud-init_ without any further subparameters. This causes the generation and display of a root password, enabling a one-time login. cloud-init is executes with sensible default settings. Finally, it is deactivated and not executed during subsequent boot processes. - -The VM terminal appears when installation is complete. Note that the first root login password is displayed early in the process and is used for the initial login. This password is single use and must be replace during the first login. - -``` -# sudo cp /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 \ - /var/lib/libvirt/images/app.qcow2 -# sudo virt-install --name app - --memory 3074 --cpu host --vcpus 3 --graphics none\ - --os-type linux --os-variant fedora33 --import \ - --disk /var/lib/libvirt/images/app.qcow2,format=qcow2,bus=virtio \ - --network bridge=virbr0,model=virtio \ - --cloud-init - WARNING Defaulting to --cloud-init root-password-generate=yes,disable=yes - Installation startet … - Password for first root login is: OtMQshytI0E8xZGD - Installation will continue in 10 seconds (press Enter to skip)…Running text console command: … - Connected to Domain: app - Escape character is ^] (Ctrl + ]) - [ 0.000000] Linux version 5.8.15-301.fc33.x86_64 (mockbuild@bkernel01.iad2.fedoraproject … - … - … - [ 29.271451] cloud-init[721]: Cloud-init v. 19.4 finished … Datasource DataSourceNoCloud … - [FAILED] Failed to start Execute cloud user/final scripts. - See 'systemctl status cloud-final.service' for details. - [ OK ] Reached target Cloud-init target. - Fedora 33 (Cloud Edition) - Kernel 5.8.15-301.fc33.x86_64 on an x86_64 (ttyS0) - localhost login: -``` - -The error message is unsightly, but does not affect operation. (This might be the reason for cloud-init service remaining enabled.) You may disable it manually or remove it at all. - -On the host you may check the network status: - -``` -# less /var/lib/libvirt/dnsmasq/virbr0.status -[ - { - "ip-address": "192.168.122.109", - "mac-address": "52:54:00:57:35:3d", - "client-id": "01:52:54:00:57:35:3d", - "expiry-time": 1615665342 - } -] -``` - -The output shows the VM got an internal IP, but no hostname because one has not yet been set. That is the first post-installation tasks to perform. - -##### Post-Installation Tasks - -The initially displayed password enables _root_ login and forces the setting of a new one. - -Of particular interest is the network connection. Verify using these commands: - -``` -# ping host -# ping host.example.lan -# ping host.example.com -# ping guardian.co.ik -``` - -Everything is working fine out of the box. Internal and external network access is working. - -The only remaining task is to set hostname - -``` -# hostnamectl set-hostname app.example.lan -``` - -After rebooting, using this command on the host again, _**less**_ _**/var/lib/libvirt/dnsmasq/virbr0.status**_ will now list a hostname. This verifies that name resolution is working. - -To complete the final application software installations, perform a system update and install a Tomcat application server for the functional demo. - -``` -# dnf -y update && dnf -y install tomcat && systemctl enable tomcat --now && reboot -``` - -When installation and reboot complete, exit and close the console using _**<ctrl>+]**_. - -The VM is automatically deactivated and not executed during subsequent boot processes. 
To override this, on the host, enable autostart of the **app** VM - -``` -# sudo virsh autostart app -``` - -#### Example 2: An easy way to an elaborate configuration - -The **web** front end VM is more complex and there are several issues to deal with. There is a public facing interface that requires the installation of a firewall. It is unique to the cloud-init process that the internal interface is not configured persistently. Instead, it is set up anew each time the system is booted. This makes it impossible to assign a firewall zone to this interface. The public interface also provides ssh access. So for root a key file is needed to secure the login. - -The virt-install cloud-init process is provisioned by two subparameters, meta-data and user-data. Each references a configuration file. These files were previously buried in a special iso image, now simulated by _virt-install_. You are free to chose where to store these files. It is best, however, to be systematic and choosing a subdirectory in the boot directory is a good choice. This example will use _/var/lib/libvirt/boot/cloud-init_. - -The file referenced by the meta-data parameter contains information about the runtime environment. The name is _web-meta-data_ in this example. Here it contains just the mandatory parameter _instance-id_. The must be unique in a cloud environment, but can be chosen arbitrarily here just as in a nocloud environment. - -``` -# sudo mkdir /var/lib/libvirt/boot/cloud-init -# sudo vim /var/lib/libvirt/boot/cloud-init/web-meta-data -instance-id: web-app -``` - -The file referenced by the user-data parameter holds the main configuration work. This example uses the name _web-user-data_ . The first line must contain some kind of shebang, which cloud-init uses to determine the format of the following data. The formatting itself is _yaml_. The _web-user-data_ file defines several steps: - - 1. setting the hostname - 2. set up the user root with the public RSA key copied into the file as well as the fallback account “hostmin” (or alike). The latter is enabled to log in by password and assigned to the group wheel - 3. set up a first-time password for both users for initial login which must be changed on first login - 4. install required additional packages , e.g. the firewall, fail2ban, postfix (needed by fail2ban) and the webserver - 5. some packages need additional configuration files - 6. the VM needs an update of all packages - 7. several configuration commands are required - 1. assign zone trusted to the interface eth1 (2nd position in the dbus path, so the order of the network parameters when calling _libvirt_ is crucial!) and rename it according to naming convention. The modification also persists to a configuration file (still in /etc/sysconfig/network-scripts/ ) - 2. start the firewall and add the web services - 3. finally disable cloud-init - - - -Once the configuration files are completed it eliminates what would be a time consuming process if done manually. This efficiency makes the use of cloud images attractive. 
The definition of _web-user-data_ follows: - -``` -# vim /var/lib/libvirt/boot/cloud-init/web-user-data -# cloud-config -# (1) setting hostname -preserve_hostname: False -hostname: web -fqdn: web.example.com - -# (2) set up root and fallback account including rsa key copied into this file -users: - - name: root - ssh-authorized-keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAADAQA…jSMt9rC4uKDPR8whgw== - - name: hostmin - groups: users,wheel - ssh_pwauth: True - ssh-authorized-keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAIAQDix...Mt9rC4uKDPR8whgw== - -# (3) set up a first-time password for both accounts -chpasswd: - list: | - root:myPassword - hostmin:topSecret - expire: True - -# (4) install additional required packages -packages: - - firewalld - - postfix - - fail2ban - - vim - - httpd - - mod_ssl - - letsencrypt - -# (5) some packages need additional configuration files -write_files: - - path: /etc/fail2ban/jail.local - content: | - # /etc/fail2ban/jail.local - # Jail configuration local customization - - # Adjust the default configuration's default values - [DEFAULT] - ##ignoreip = /24 /32 - bantime = 6600 - backend = auto - # The main configuration file defines all services but - # deactivates them by default. Activate those needed - [sshd] - enabled = true - # detect password authentication failures - [apache-auth] - enabled = true - # detect spammer robots crawling email addresses - [apache-badbots] - enabled = true - # detect Apache overflow attempts - [apache-overflows] - enabled = true - - path: /etc/httpd/conf.d/vhost_default.conf - content: | - - ServerAdmin root@localhost - DirectoryIndex index.jsp - DocumentRoot /var/www/html - - Options Indexes FollowSymLinks - AllowOverride none - # Allow open access: - Require all granted - - ProxyPass / http://app:8080/ - - -# (6) perform a package upgrade -package_upgrade: true - -# (7) several configuration commands are executed on first boot -runcmd: - # (a.) assign a zone to internal interface as well as some other adaptations. - # results in the writing of a configuration file - # IMPORTANT: internal interface have to be specified SECOND after external - - nmcli con mod path 2 con-name eth1 connection.zone trusted - - nmcli con mod path 2 con-name 'System eth1' ipv6.method disabled - - nmcli con up path 2 - # (b.) activate and configure firewall and additional services - - systemctl enable firewalld --now - - firewall-cmd --permanent –add-service=http - - firewall-cmd --permanent –add-service=https - - firewall-cmd --reload - - systemctl enable fail2ban --now - # compensate for a SELinux port handling issue - - setsebool httpd_can_network_connect 1 -P - - systemctl enable httpd –-now - # (c.) finally disable cloud-init - - systemctl disable cloud-init - - reboot -# done -``` - -A detailed overview of the user-data configuration options is provided in the examples section of the [cloud-init project documentation][8]. - -After completing the configuration files, initiate the virt-install process. Adjust the values of CPU, memory, external network interface etc. as required. 
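A stray indentation or typo in the YAML is the most common cause of a first boot that silently skips part of the configuration, so it can pay off to validate _web-user-data_ before handing it to _virt-install_. The cloud-init package ships a schema checker — a hedged example for the cloud-init 19.x generation used by Fedora 33 (newer releases have renamed the subcommand to plain `cloud-init schema`):

```
# sudo dnf install cloud-init
# cloud-init devel schema --config-file /var/lib/libvirt/boot/cloud-init/web-user-data
```

If the checker reports no problems, launch the installation: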
- -``` -# sudo virt-install --name web \ - --memory 3072 --cpu host --vcpus 3 --graphics none \ - --os-type linux --os-variant fedora33 --import \ - --disk /var/lib/libvirt/images/web.qcow2,format=qcow2,bus=virtio, size=20 \ - --network type=direct,source=enp1s0,source_mode=bridge,model=virtio \ - --network bridge=virbr0,model=virtio \ - --cloud-init meta-data=/var/lib/libvirt/boot/cloud-init/web-meta-data,user-data=/var/lib/libvirt/boot/cloud-init/web-user-data -``` - -If the network environment issues IP addresses based on MAC addresses via DHCP, add the MAC address to the the first network configuration: - -``` ---network type=direct,source=enp1s0,source_mode=bridge,mac=52:54:00:93:97:46,model=virtio -``` - -Remember, that the first 3 pairs in the MAC address must be the sequence ’52:54:00′ for KVM virtual machines. - -Back on the host enable autostart of the VM: - -``` -# virsh autostart web -``` - -Everything is complet. Direct your desktop browser to your domain and enjoy a look at the tomcat webapps screen (after ignoring the warning about an insecure connection). - -##### Configuring a static address - -According to the specifications a static network connection is configured in meta-data. A configuration would look like this: - -``` -# vim /var/lib/libdir/boot/cloud-init/web-meta-data -instance-id: web-app -network-interfaces: | - iface eth0 inet static - address 192.168.1.10 - netmask 255.255.255.0 - gateway 192.168.1.254 -``` - -_Cloud-init_ will create a configuration file accordingly. But there are 2 issues - - * The configuration file is created after a default initialization of the interface via dhcp and the interface is not reinitialized. - * The generated configuration file includes the setting _onboot=no_ so after a reboot there is no connection either. - - - -There are several hints that this is a bug that has existed for a long time so manual intervention is required. - -It is probably easier and more efficient to do without the networks specification in meta-data and make an adjustment manually on the basis of the default initialization in user-data. Perform the following before the configuration of the internal network: - -``` -# nmcli con mod path 1 ipv4.method static ipv4.addresses '192.168.158.240/24' ipv4.gateway '192.168.158.1' ipv4.dns '192.168.158.1' -# nmcli con mod path 1 ipv6.method static ipv6.addresses '2003:ca:7f06:2c00:5054:ff:fed6:5b27/64' ipv6.gateway 'fe80::1' ipv6.dns '003:ca:7f06:2c00::add:9999' -# nmcli con up path 1 -``` - -Doing this, the connection is immediately reset to the new specification and the configuration file is adjusted immediately. Remember to adjust the configuration values as needed. - -Alternatively, the 3 statements can be made part of the user-data file and adapted or commented in or out as required. The corresponding part of the file would look like - -``` -... - # (7.) several configuration commands are executed on first boot - runcmd: - # If needed, convert interface eth0 as static - # comment in and modify as required - #- nmcli con mod path 1 ipv4.method static ipv4.addresses '/24' ipv4.gateway '' ipv4.dns 'IPv4 - #- nmcli con mod path 1 ipv6.method static ipv6.addresses '/64' ipv6.gateway '' ipv6.dns '' - #- nmcli con up path 1 - # (a) assign a zone to internal interface as well as some other adaptations. - # results in the writing of a configuration file - # IMPORTANT: internal interface have to be specified SECOND after external - - nmcli con mod path 2 con-name eth1 connection.zone trusted - - ... 
-``` - -Again, adjust the <IPv4>, <IPv6>, etc. configuration values as needed! - -Configuring the cloud-init process by virt-install version 3 is highly efficient and flexible. You may create a dedicated set of files for each VM or you may keep one set of generic files and adjust them by commenting in and out as required. A combination of both can be use. You can quickly and easily change settings to test suitability for your purposes. - -In summary, while the use of Fedora Cloud Base Images comes with some inconveniences and suffers from shortcomings in documentation, Fedora Cloud Base images and virt-install version 3 is a great combination for quickly and efficiently creating virtual machines for Fedora Server. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/ - -作者:[pboy][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/pboy/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/cloud_base_via_virt-install-816x345.jpg -[2]: https://unsplash.com/@maxkuk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/cloud-computing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://alt.fedoraproject.org/cloud/ -[5]: https://fedoramagazine.org/create-virtual-machines-with-cockpit-in-fedora/ -[6]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/ -[7]: https://blog.christophersmart.com/2016/06/17/booting-fedora-24-cloud-image-with-kvm/ -[8]: https://cloudinit.readthedocs.io/en/latest/topics/examples.html diff --git a/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md b/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md deleted file mode 100644 index 8bfe39f95f..0000000000 --- a/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md +++ /dev/null @@ -1,288 +0,0 @@ -[#]: subject: (Playing with modular synthesizers and VCV Rack) -[#]: via: (https://fedoramagazine.org/vcv-rack-modular-synthesizers/) -[#]: author: (Yann Collette https://fedoramagazine.org/author/ycollet/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Playing with modular synthesizers and VCV Rack -====== - -![][1] - -You know about using Fedora Linux to write code, books, play games, and listen to music. You can also do system simulation, work on electronic circuits, work with embedded systems too via [Fedora Labs][2]. But you can also make music with the VCV Rack software. For that, you can use to [Fedora Jam][3] or work from a standard Fedora Workstation installation with the [LinuxMAO Copr][4] repository enabled. This article describes how to use modular synthesizers controlled by Fedora Linux. - -### Some history - -The origin of the modular synthesizer dates back to the 1950’s and was soon followed in the 60’s by the Moog modular synthesizer. [Wikipedia has a lot more on the history][5]. - -![Moog synthesizer circa 1975][6] - -But, by the way, what is a modular synthesizer ? - -These synthesizers are made of hardware “blocks” or modules with specific functions like oscillators, amplifier, sequencer, and other various functions. 
The blocks are connected together by wires. You make music with these connected blocks by manipulating knobs. Most of these modular synthesizers came without keyboard. - -![][7] - -Modular synthesizers were very common in the early days of progressive rock (with Emerson Lake and Palmer) and electronic music (Klaus Schulze, for example).  - -After a while people forgot about modular synthesizers because they were cumbersome, hard to tune, hard to fix, and setting a patch (all the wires connecting the modules) was a time consuming task not very easy to perform live. Price was also a problem because systems were mostly sold as a small series of modules, and you needed at least 10 of them to have a decent set-up. - -In the last few years, there has been a rebirth of these synthesizers. Doepfer produces some affordable models and a lot of modules are also available and have open sources schematics and codes (check [Mutable instruments][8] for example). - -But, a few years ago came … [VCV Rack.][9] VCV Rack stands for **V**oltage **C**ontrolled **V**irtual Rack: software-based modular synthesizer lead by Andrew Belt. His first commit on [GitHub][10] was Monday Nov 14 18:34:40 2016.  - -### Getting started with VCV Rack - -#### Installation - -To be able to use VCV Rack, you can either go to the [VCV Rack web site][9] and install a binary for Linux or, you can activate a Copr repository dedicated to music: the [LinuxMAO Copr][4] repository (disclaimer: I am the man behind this Copr repository). As a reminder, Copr is not officially supported by Fedora infrastructure. Use packages at your own risk. - -Enable the repository with: - -``` -sudo dnf copr enable ycollet/linuxmao -``` - -Then install VCV Rack: - -``` -sudo dnf install Rack-v1 -``` - -You can now start VCV Rack from the console of via the Multimedia entry in the start menu: - -``` -$ Rack & -``` - -![][11] - -#### Add some modules - -The first step is now to clean up everything and leave just the **AUDIO-8** module. You can remove modules in various ways: - - * Click on a module and hit the backspace key - * Right click on a module and click “delete” - - - -The **AUDIO-8** module allows you to connect from and to audio devices. Here are the features for this module. - -![][12] - -Now it’s time to produce some noise (for the music, we’ll see that later). - -Right click inside VCV Rack (but outside of a module) and a module search window will appear.  - -![][13] - -Enter “VCO-2” in the search bar and click on the image of the module. This module is now on VCV Rack. - -To move a module: click and drag the module. - -To move a group of modules, hit shit + click + drag a module and all the modules on the right of the dragged modules will move with the selected module. - -![][14] - -Now you need to connect the modules by drawing a wire between the “OUT” connector of **VCO-2** module and the “1” “TO DEVICE” of **AUDIO-8** module. - -Left-click on the “OUT” connector of the **VCO-2** module and while keeping the left-click, drag your mouse to the “1” “TO DEVICE” of the **AUDIO-8** module. Once on this connector, release your left-click.  - -![][15] - -To remove a wire, do a right-click on the connector where the wire is connected. - -To draw a wire from an already connected connector, hold “ctrl+left+click” and draw the wire. For example, you can draw a wire from “OUT” connector of module **VCO-2** to the “2” “TO DEVICE” connector of **AUDIO-8** module. - -#### What are these wires ? - -Wires allow you to control various part of the module. 
The information handled by these wires are Control Voltages, Gate signals, and Trigger signals. - -**CV** ([Control Voltages][16]): These typically control pitch and range between a minimum value around -1 to -5 volt and a maximum value between 1 and 5 volt. - -What is the **GATE** signal you find on some modules? Imagine a keyboard sending out on/off data to an amplifier module: its voltage is at zero when no key is  pressed and jumps up to max level (5v for example) when a key is pressed; release the key, and the voltage goes back to zero again. A **GATE** signal can be emitted by things other than a keyboard. A clock module, for example, can emit gate signals. - -Finally, what is a **TRIGGER** signal you find on some modules? It’s a square pulse which starts when you press a key and stops after a while. - -In the modular world, **gate** and **trigger** signals are used to trigger drum machines, restart clocks, reset sequencers and so on.  - -#### Connecting everybody - -Let’s control an oscillator with a CV signal. But before that, remove your **VCO-2** module (click on the module and hit backspace). - -Do a right-click on VCV Rack a search for these modules: - - * **VCO-1** (a controllable oscillator) - * **LFO-1** (a low frequency oscillator which will control the frequency of the **VCO-1**) - - - -Now draw wires: - - * between the “SAW” connector of the **LFO-1** module and the “V/OCT” (Voltage per Octave) connector of the **VCO-1** module - * between the “SIN” connector of the **VCO-1** module and the “1” “TO DEVICE” of the **AUDIO-8** module - - - -![][17] - -You can adjust the range of the frequency by turning the FREQ knob of the **LFO-1** module. - -You can also adjust the low frequency of the sequence by turning the FREQ knob of the **VCO-1** module. - -### The Fundamental modules for VCV Rack - -When you install the **Rack-v1**, the **Rack-v1-Fundamental** package is automatically installed. **Rack-v1** only installs the rack system, with input / output modules, but without other basic modules. - -In the Fundamental VCV Rack packages, there are various modules available. - -![][18] - -Some important modules to have in mind: - - * **VCO**: Voltage Controlled Oscillator - * **LFO**: Low Frequency Oscillator - * **VCA**: Voltage Controlled Amplifier - * **SEQ**: Sequencers (to define a sequence of voltage / notes) - * **SCOPE**: an oscilloscope, very useful to debug your connexions - * **ADSR**: a module to generate an envelope for a note. ADSR stands for **A**ttack / **D**ecay / **S**ustain / **R**elease - - - -And there are a lot more functions available. I recommend you watch tutorials related to VCV Rack on YouTube to discover all these functionalities, in particular the Video Channel of [Omri Cohen][19]. - -### What to do next - -Are you limited to the Fundamental modules? No, certainly not! VCV Rack provides some closed sources modules (for which you’ll need to pay) and a lot of other modules which are open source. All the open source modules are packages for Fedora 32 and 33. How many VCV Rack packages are available ? - -``` -sudo dnf search rack-v1 | grep src | wc -l -150 -``` - -And counting.  Each month new packages appear. If you want to install everything at once, run: - -``` -sudo dnf install `dnf search rack-v1 | grep src | sed -e "s/\(^.*\)\.src.*/\1/"` -``` - -Here are some recommended modules to start with. 
- - * BogAudio (dnf install rack-v1-BogAudio) - * AudibleInstruments (dnf install rack-v1-AudibleInstruments) - * Valley (dnf install rack-v1-Valley) - * Befaco (dnf install rack-v1-Befaco) - * Bidoo (dnf install rack-v1-Bidoo) - * VCV-Recorder (dnf install rack-v1-VCV-Recorder) - - - -### A more complex case - -![][20] - -From Fundamental, use **MIXER**, **AUDIO-8**, **MUTERS**, **SEQ-3**, **VCO-1**, **ADSR**, **VCA**. - -Use: - - * **Plateau** module from Valley package (it’s an enhanced reverb). - * **BassDrum9** from DrumKit package. - * **HolonicSystems-Gaps** from HolonicSystems-Free package. - - - -How it sounds: checkout [this video][21] on my YouTube channel. - -### Managing MIDI - -VCV Rack as a bunch of modules dedicated to MIDI management. - -![][22] - -With these modules and with a tool like the Akai LPD-8: - -![][23] - -You can easily control knob in VCV Rack modules from a real life device. - -Before buying some devices, check it’s Linux compatibility. Normally every “USB Class Compliant” device works out of the box in every Linux distribution. - -The MIDI → Knob mapping is done via the “MIDI-MAP” module. Once you have selected the MIDI driver (first line) and MIDI device (second line), click on “unmapped”. Then, touch a knob you want to control on a module (for example the “FREQ” knob of the VCO-1 Fundamental module). Now, turn the knob of the MIDI device and there you are; the mapping is done. - -### Artistic scopes - -Last topic of this introduction paper: the scopes. - -VCV Rack has several standard (and useful) scopes. The **SCOPE** module from Fundamental for example. - -But it also has some interesting scopes. - -![][24] - -This used 3 **VCO-1** modules from Fundamental and a **fullscope** from wiqid-anomalies. - -The first connector at the top of the scope corresponds to the X input. The one below is the Y input and the other one below controls the color of the graph. - -For the complete documentation of this module, check: - - * the documentation of [wigid-anomalies][25] - * the documentation of the [fullscope][26] module - * the github repository of the [wigid-anomalies][27] module - - - -### For more information - -If you’re looking for help or want to talk to the VCV Rack Community, visit their [Discourse forum][28]. You can get _patches_ (a patch is the file saved by VCV Rack) for VCV Rack on [Patch Storage][29]. - -Check out how vintage synthesizers looks like on [Vintage Synthesizer Museum][30] or [Google’s online exhibition][31]. The documentary “[I Dream of Wires][32]” provides a look at the history of modular synthesizers. Finally, the book _[Developing Virtual Syntehsizers with VCV Rack][33]_ provides more depth. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/vcv-rack-modular-synthesizers/ - -作者:[Yann Collette][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/ycollet/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/music_synthesizers-816x345.jpg -[2]: https://labs.fedoraproject.org/ -[3]: https://fedoraproject.org/wiki/Fedora_Jam_Audio_Spin -[4]: https://copr.fedorainfracloud.org/coprs/ycollet/linuxmao/ -[5]: https://en.wikipedia.org/wiki/Modular_synthesizer -[6]: https://fedoramagazine.org/wp-content/uploads/2021/03/Moog_Modular_55_img1-1024x561.png -[7]: https://fedoramagazine.org/wp-content/uploads/2021/03/modular_synthesizer_-_jam_syntotek_stockholm_2014-09-09_photo_by_henning_klokkerasen_edit-1.jpg -[8]: https://mutable-instruments.net/ -[9]: https://vcvrack.com/ -[10]: https://github.com/VCVRack/Rack -[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_215239-1024x498.png -[12]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_232052.png -[13]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_191531-1024x479.png -[14]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_221358.png -[15]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_222055.png -[16]: https://en.wikipedia.org/wiki/CV/gate -[17]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_223840.png -[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/Fundamental-showcase-1024x540.png -[19]: https://www.youtube.com/channel/UCuWKHSHTHMV_nVSeNH4gYAg -[20]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_233506.png -[21]: https://www.youtube.com/watch?v=HhJ_HY2rN5k -[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_193452-1024x362.png -[23]: https://fedoramagazine.org/wp-content/uploads/2021/03/235492.jpg -[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_195044.png -[25]: https://library.vcvrack.com/wiqid-anomalies -[26]: https://library.vcvrack.com/wiqid-anomalies/fullscope -[27]: https://github.com/wiqid/anomalies -[28]: https://community.vcvrack.com/ -[29]: https://patchstorage.com/platform/vcv-rack/ -[30]: https://vintagesynthesizermuseum.com/ -[31]: https://artsandculture.google.com/story/7AUBadCIL5Tnow -[32]: http://www.idreamofwires.org/ -[33]: https://www.leonardo-gabrielli.info/vcv-book diff --git a/sources/tech/20210402 Read and write files with Groovy.md b/sources/tech/20210402 Read and write files with Groovy.md deleted file mode 100644 index 091c3c9c40..0000000000 --- a/sources/tech/20210402 Read and write files with Groovy.md +++ /dev/null @@ -1,181 +0,0 @@ -[#]: subject: (Read and write files with Groovy) -[#]: via: (https://opensource.com/article/21/4/groovy-io) -[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Read and write files with Groovy -====== -Learn how the Groovy programming language handles reading from and -writing to files. 
-![Woman programming][1] - -Two common tasks that new programmers need to learn are how to read from and write to files stored on a computer. Some examples are when data and configuration files created in one application need to be read by another application, or when a third application needs to write info, warnings, and errors to a log file or to save its results for someone else to use. - -Every language has a few different ways to read from and write to files. This article covers some of these details in the [Groovy programming language][2], which is based on Java but with a different set of priorities that make Groovy feel more like Python. The first thing a new-to-Groovy programmer sees is that it is much less verbose than Java. The next observation is that it is (by default) dynamically typed. The third is that Groovy has closures, which are somewhat like lambdas in Java but provide access to the entire enclosing context (Java lambdas restrict what can be accessed). - -My fellow correspondent Seth Kenlon has written about [Java input and output (I/O)][3]. I'll jump off from his Java code to show you how it's done in Groovy. - -### Install Groovy - -Since Groovy is based on Java, it requires a Java installation. You may be able to find a recent and decent version of Java and Groovy in your Linux distribution's repositories. Or you can install Groovy by following the instructions on [Groovy's download page][4]. A nice alternative for Linux users is [SDKMan][5], which you can use to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's Groovy 3.0.7 release. - -### Read a file with Groovy - -Start by reviewing Seth's Java program for reading files: - - -``` -import java.io.File; -import java.util.Scanner; -import java.io.FileNotFoundException; - -public class Ingest { -  public static void main([String][6][] args) { - -      try { -          [File][7] myFile = new [File][7]("example.txt"); -          Scanner myScanner = new Scanner(myFile); -          while (myScanner.hasNextLine()) { -              [String][6] line = myScanner.nextLine(); -              [System][8].out.println(line); -          } -          myScanner.close(); -      } catch ([FileNotFoundException][9] ex) { -          ex.printStackTrace(); -      } //try -    } //main -} //class -``` - -Now I'll do the same thing in Groovy: - - -``` -def myFile = new [File][7]('example.txt') -def myScanner = new Scanner(myFile) -while (myScanner.hasNextLine()) { -        def line = myScanner.nextLine() -        println(line) -} -myScanner.close() -``` - -Groovy looks like Java but less verbose. The first thing to notice is that all those `import` statements are already done in the background. And since Groovy is partly intended to be a scripting language, by omitting the definition of the surrounding `class` and `public static void main`, Groovy will construct those things in the background. - -The semicolons are also gone. Groovy supports their use but doesn't require them except in cases like when you want to put multiple statements on the same line. Aaaaaaaaand the single quotes—Groovy supports either single or double quotes for delineating strings, which is handy when you need to put double quotes inside a string, like this: - - -``` -`'"I like this Groovy stuff", he muttered to himself.'` -``` - -Note also that `try...catch` is gone. 
Groovy supports `try...catch` but doesn't require it, and it will give a perfectly good error message and stack trace just like the `ex.printStackTrace()` call does in the Java example. - -Groovy adopted the `def` keyword and inference of type from the right-hand side of a statement long before Java came up with the `var` keyword, and Groovy allows it everywhere. Aside from using `def`, though, the code that does the main work looks quite similar to the Java version. Oh yeah, except that Groovy also has this nice metaprogramming ability built in, which among other things, lets you write `println()` instead of `System.out.println()`. This similarity is way more than skin deep and allows Java programmers to get traction with Groovy very quickly. - -And just like Python programmers are always looking for the pythonic way to do stuff, there is Groovy that looks like Java, and then there is… groovier Groovy. This solves the same problem but uses Groovy's `with` method to make the code more DRY ("don't repeat yourself") and to automate closing the input file: - - -``` -new Scanner(new [File][7]('example.txt')).with { -    while (hasNextLine()) { -      def line = nextLine() -      println(line) -    } -} -``` - -What's between `.with {` and `}` is a closure body. Notice that you don't need to write `myScanner.hasNextLine()` nor `myScanner.nextLine()` as `with` exposes those methods directly to the closure body.  Also the with gets rid of the need to code myScanner.close() and so we don't actually need to declare myScanner at all. - -Run it: - - -``` -$ groovy ingest1.groovy -Caught: java.io.[FileNotFoundException][9]: example.txt (No such file or directory) -java.io.[FileNotFoundException][9]: example.txt (No such file or directory) -        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native [Method][10]) -        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) -        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) -        at ingest1.run(ingest1.groovy:1) -        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native [Method][10]) -        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) -        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) -$ -``` - -Note the "file not found" exception; this is because there isn't a file called `example.txt` yet. Note also that the files are from things like `java.io`. - -So I'll write something into that file… - -### Write data to a file with Groovy - -Combining what I shared previously about, well, being "groovy": - - -``` -new [FileWriter][11]("example.txt", true).with { -        write("Hello world\n") -        flush() -} -``` - -Remember that `true` after the file name means "append to the file," so you can run this a few times: - - -``` -$ groovy exgest.groovy -$ groovy exgest.groovy -$ groovy exgest.groovy -$ groovy exgest.groovy -``` - -Then you can read the results with `ingest1.groovy`: - - -``` -$ groovy ingest1.groovy -Hello world -Hello world -Hello world -Hello world -$ -``` - -The call to `flush()` is used because the `with` / `write` combo didn't do a flush before close. Groovy isn't always shorter! - -### Groovy resources - -The Apache Groovy site has a lot of great [documentation][12]. Another great Groovy resource is [Mr. Haki][13]. 
And a really great reason to learn Groovy is to learn [Grails][14], which is a wonderfully productive full-stack web framework built on top of excellent components like Hibernate, Spring Boot, and Micronaut. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/groovy-io - -作者:[Chris Hermansen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming) -[2]: https://groovy-lang.org/ -[3]: https://opensource.com/article/21/3/io-java -[4]: https://groovy.apache.org/download.html -[5]: https://sdkman.io/ -[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string -[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+file -[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system -[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filenotfoundexception -[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+method -[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filewriter -[12]: https://groovy-lang.org/documentation.html -[13]: https://blog.mrhaki.com/ -[14]: https://grails.org/ diff --git a/sources/tech/20210405 Scaling Microservices on Kubernetes.md b/sources/tech/20210405 Scaling Microservices on Kubernetes.md deleted file mode 100644 index 26fa6a2334..0000000000 --- a/sources/tech/20210405 Scaling Microservices on Kubernetes.md +++ /dev/null @@ -1,179 +0,0 @@ -[#]: subject: (Scaling Microservices on Kubernetes) -[#]: via: (https://www.linux.com/news/scaling-microservices-on-kubernetes/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/scaling-microservices-on-kubernetes/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Scaling Microservices on Kubernetes -====== - -_By Ashley Davis_ - -_*This article was originally published at [TheNewStack][1]_ - -Applications built on microservices can be scaled in multiple ways. We can scale them to support development by larger development teams and we can also scale them up for better performance. Our application can then have a higher capacity and can handle a larger workload. - -Using microservices gives us granular control over the performance of our application. We can easily measure the performance of our microservices to find the ones that are performing poorly, are overworked, or are overloaded at times of peak demand. Figure 1 shows how we might use the [Kubernetes dashboard][2] to understand CPU and memory usage for our microservices. - - - -_Figure 1: Viewing CPU and memory usage for microservices in the Kubernetes dashboard_ - -If we were using a monolith, however, we would have limited control over performance. We could vertically scale the monolith, but that’s basically it. - -Horizontally scaling a monolith is much more difficult; and we simply can’t independently scale any of the “parts” of a monolith. 
This isn’t ideal, because it might only be a small part of the monolith that causes the performance problem. Yet, we would have to vertically scale the entire monolith to fix it. Vertically scaling a large monolith can be an expensive proposition. - -Instead, with microservices, we have numerous options for scaling. For instance, we can independently fine-tune the performance of small parts of our system to eliminate bottlenecks and achieve the right mix of performance outcomes. - -There are also many advanced ways we could tackle performance issues, but in this post, we’ll overview a handful of relatively simple techniques for scaling our microservices using [Kubernetes][3]: - - 1. Vertically scaling the entire cluster - 2. Horizontally scaling the entire cluster - 3. Horizontally scaling individual microservices - 4. Elastically scaling the entire cluster - 5. Elastically scaling individual microservices - - - -Scaling often requires risky configuration changes to our cluster. For this reason, you shouldn’t try to make any of these changes directly to a production cluster that your customers or staff are depending on. - -Instead, I would suggest that you create a new cluster and use **blue-green deployment**, or a similar deployment strategy, to buffer your users from risky changes to your infrastructure. - -### **Vertically Scaling the Cluster** - -As we grow our application, we might come to a point where our cluster generally doesn’t have enough compute, memory or storage to run our application. As we add new microservices (or replicate existing microservices for redundancy), we will eventually max out the nodes in our cluster. (We can monitor this through our cloud vendor or the Kubernetes dashboard.) - -At this point, we must increase the total amount of resources available to our cluster. When scaling microservices on a [Kubernetes cluster][4], we can just as easily make use of either vertical or horizontal scaling. Figure 2 shows what vertical scaling looks like for Kubernetes. - - - -_Figure 2: Vertically scaling your cluster by increasing the size of the virtual machines (VMs)_ - -We scale up our cluster by increasing the size of the virtual machines (VMs) in the node pool. In this example, we increased the size of three small-sized VMs so that we now have three large-sized VMs. We haven’t changed the number of VMs; we’ve just increased their size — scaling our VMs vertically. - -Listing 1 is an extract from Terraform code that provisions a cluster on Azure; we change the vm_size field from Standard_B2ms to Standard_B4ms. This upgrades the size of each VM in our Kubernetes node pool. Instead of two CPUs, we now have four (one for each VM). As part of this change, memory and hard-drive for the VM also increase. If you are deploying to AWS or GCP, you can use this technique to vertically scale, but those cloud platforms offer different options for varying VM sizes. - -We still only have a single VM in our cluster, but we have increased our VM’s size. In this example, scaling our cluster is as simple as a code change. This is the power of infrastructure-as-code, the technique where we store our infrastructure configuration as code and make changes to our infrastructure by committing code changes that trigger our continuous delivery (CD) pipeline - - - -_Listing 1: Vertically scaling the cluster with Terraform (an extract)_ - -### Horizontally Scaling the Cluster - -In addition to vertically scaling our cluster, we can also scale it horizontally. 
Our VMs can remain the same size, but we simply add more VMs. - -By adding more VMs to our cluster, we spread the load of our application across more computers. Figure 3 illustrates how we can take our cluster from three VMs up to six. The size of each VM remains the same, but we gain more computing power by having more VMs. - - - -_Figure 3: Horizontally scaling your cluster by increasing the number of VMs_ - -Listing 2 shows an extract of Terraform code to add more VMs to our node pool. Back in listing 1, we had node_count set to 1, but here we have changed it to 6. Note that we reverted the vm_size field to the smaller size of Standard_B2ms. In this example, we increase the number of VMs, but not their size; although there is nothing stopping us from increasing both the number and the size of our VMs. - -Generally, though, we might prefer horizontal scaling because it is less expensive than vertical scaling. That’s because using many smaller VMs is cheaper than using fewer but bigger and higher-priced VMs. - - - -_Listing 2: Horizontal scaling the cluster with Terraform (an extract)_ - -### Horizontally Scaling an Individual Microservice - -Assuming our cluster is scaled to an adequate size to host all the microservices with good performance, what do we do when individual microservices become overloaded? (This can be monitored in the Kubernetes dashboard.) - -Whenever a microservice becomes a performance bottleneck, we can horizontally scale it to distribute its load over multiple instances. This is shown in figure 4. - - - -_Figure 4: Horizontally scaling a microservice by replicating it_ - -We are effectively giving more compute, memory and storage to this particular microservice so that it can handle a bigger workload. - -Again, we can use code to make this change. We can do this by setting the replicas field in the specification for our Kubernetes deployment or pod as shown in listing 3. - - - -_Listing 3: Horizontally scaling a microservice with Terraform (an extract)_ - -Not only can we scale individual microservices for performance, we can also horizontally scale our microservices for redundancy, creating a more fault-tolerant application. By having multiple instances, there are others available to pick up the load whenever any single instance fails. This allows the failed instance of a microservice to restart and begin working again. - -### Elastic Scaling for the Cluster - -Moving into more advanced territory, we can now think about elastic scaling. This is a technique where we automatically and dynamically scale our cluster to meet varying levels of demand. - -Whenever a demand is low, [Kubernetes][5] can automatically deallocate resources that aren’t needed. During high-demand periods, new resources are allocated to meet the increased workload. This generates substantial cost savings because, at any given moment, we only pay for the resources necessary to handle our application’s workload at that time. - -We can use elastic scaling at the cluster level to automatically grow our clusters that are nearing their resource limits. Yet again, when using Terraform, this is just a code change. Listing 4 shows how we can enable the Kubernetes autoscaler and set the minimum and maximum size of our node pool. - -Elastic scaling for the cluster works by default, but there are also many ways we can customize it. Search for “auto_scaler_profile” in [the Terraform documentation][6] to learn more. 
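-
-The Terraform extract for Listing 4 below is not reproduced as text here. As a rough, hypothetical sketch of the same idea (not the author's code), the cluster autoscaler can also be toggled imperatively with the Azure CLI on an existing AKS cluster; the resource group and cluster names are placeholders:
-
-```
-# Hypothetical sketch: enable the cluster autoscaler on an existing AKS cluster
-# and let the node pool grow and shrink between 1 and 6 VMs as demand changes.
-az aks update \
-  --resource-group my-resource-group \
-  --name my-aks-cluster \
-  --enable-cluster-autoscaler \
-  --min-count 1 \
-  --max-count 6
-```
-
-Terraform remains the better fit for the infrastructure-as-code workflow this article describes; the CLI form is shown only to make the minimum/maximum node-count idea concrete.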
- - - -_Listing 4: Enabling elastic scaling for the cluster with Terraform (an extract)_ - -### Elastic Scaling for an Individual Microservice - -We can also enable elastic scaling at the level of an individual microservice. - -Listing 5 is a sample of Terraform code that gives microservices a “burstable” capability. The number of replicas for the microservice is expanded and contracted dynamically to meet the varying workload for the microservice (bursts of activity). - -The scaling works by default, but can be customized to use other metrics. See the [Terraform documentation][7] to learn more. To learn more about pod auto-scaling in Kubernetes, [see the Kubernetes docs][8]. - - - -_Listing 5: Enabling elastic scaling for a microservice with Terraform_ - -### About the Book: Bootstrapping Microservices - -You can learn about building applications with microservices with [Bootstrapping Microservices][9]. - -Bootstrapping Microservices is a practical and project-based guide to building applications with microservices. It will take you all the way from building one single microservice all the way up to running a microservices application in production on [Kubernetes][10], ending up with an automated continuous delivery pipeline and using _infrastructure-as-code_ to push updates into production. - -### Other Kubernetes Resources - -This post is an extract from _Bootstrapping Microservices_ and has been a short overview of the ways we can scale microservices when running them on Kubernetes. - -We specify the configuration for our infrastructure using Terraform. Creating and updating our infrastructure through code in this way is known as **intrastructure-as-code**, as a technique that turns working with infrastructure into a coding task and paved the way for the DevOps revolution. - -To learn more about [Kubernetes][11], please see [the Kubernetes documentation][12] and the free [Introduction to Kubernetes][13] training course. - -To learn more about working with Kubernetes using Terraform, please see [the Terraform documentation][14]. - -**About the Author, Ashley Davis** - -Ashley is a software craftsman, entrepreneur, and author with over 20 years of experience in software development, from coding to managing teams, then to founding companies. He is the CTO of Sortal, a product that automatically sorts digital assets through the magic of machine learning. - -The post [Scaling Microservices on Kubernetes][15] appeared first on [Linux Foundation – Training][16]. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/scaling-microservices-on-kubernetes/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/scaling-microservices-on-kubernetes/ -[b]: https://github.com/lujun9972 -[1]: https://thenewstack.io/scaling-microservices-on-kubernetes/ -[2]: https://coding-bootcamps.com/blog/kubernetes-evolution-from-virtual-servers-and-kubernetes-architecture.html -[3]: https://learn.coding-bootcamps.com/p/complete-live-training-for-mastering-devops-and-all-of-its-tools -[4]: https://blockchain.dcwebmakers.com/blog/advance-topics-for-deploying-and-managing-kubernetes-containers.html -[5]: http://myhsts.org/tutorial-review-of-17-essential-topics-for-mastering-kubernetes.php -[6]: https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html -[7]: http://www.terraform.io/docs/providers/kubernetes/r/horizontal_pod_autoscaler.html -[8]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ -[9]: https://www.manning.com/books/bootstrapping-microservices-with-docker-kubernetes-and-terraform -[10]: https://coding-bootcamps.com/blog/build-containerized-applications-with-golang-on-kubernetes.html -[11]: https://learn.coding-bootcamps.com/p/live-training-class-for-mastering-kubernetes-containers-and-cloud-native -[12]: https://kubernetes.io/docs/home/ -[13]: https://training.linuxfoundation.org/training/introduction-to-kubernetes/ -[14]: https://registry.terraform.io/providers/hashicorp/kubernetes/latest -[15]: https://training.linuxfoundation.org/announcements/scaling-microservices-on-kubernetes/ -[16]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210406 Use Apache Superset for open source business intelligence reporting.md b/sources/tech/20210406 Use Apache Superset for open source business intelligence reporting.md deleted file mode 100644 index 48ee2a41fa..0000000000 --- a/sources/tech/20210406 Use Apache Superset for open source business intelligence reporting.md +++ /dev/null @@ -1,145 +0,0 @@ -[#]: subject: (Use Apache Superset for open source business intelligence reporting) -[#]: via: (https://opensource.com/article/21/4/business-intelligence-open-source) -[#]: author: (Maxime Beauchemin https://opensource.com/users/mistercrunch) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Use Apache Superset for open source business intelligence reporting -====== -Since its creation in 2015 at an Airbnb hackathon, Apache Superset has -matured into a leading open source BI solution. -![metrics and data shown on a computer screen][1] - -They say software is eating the world, but it's equally clear that open source is taking over software. - -Simply put, open source is a superior approach for building and distributing software because it provides important guarantees around how software can be discovered, tried, operated, collaborated on, and packaged. For those reasons, it is not surprising that it has taken over most of the modern data stack: Infrastructure, databases, orchestration, data processing, AI/ML, and beyond. 
- -Looking back, the main reason why I originally created both [Apache Airflow][2] and [Apache Superset][3] while I was at Airbnb from 2014-17 is that the vendors in the data space were failing to: - - * Keep up with the pace of innovation in the data ecosystem - * Give power to users who wanted to satisfy their more advanced use cases - - - -As is often the case with open source, the capacity to integrate and extend was always at the core of how we approached the architecture of those two projects. - -### Headaches with Tableau - -More specifically, for Superset, the main driver to start the project at the time was the fact that Tableau (which was, at the time, our main data visualization tool) couldn't connect natively to [Apache Druid][4] and [Trino][5]/[Presto][6]. These were our data engines of choice that provided the properties and guarantees that we needed to satisfy our data use cases. - -With Tableau's "Live Mode" misbehaving in intricate ways at the time (I won't get into this!), we were steered towards using Tableau Extracts. Extracts crumbled under the data volumes we had at Airbnb, creating a whole lot of challenges around non-additive metrics (think distinct user counts) and forcing us to intricately pre-compute multiple "grouping sets," which broke down some of the Tableau paradigms and confused users. Secondarily, we had a limited number of licenses for Tableau and generally had an order of magnitude more employees that wanted/needed access to our internal than our contract allowed. That's without mentioning the fact that for a cloud-native company, Tableau's Windows-centric approach at the time didn't work well for the team. - -Some of the above premises have since changed, but the power of open source and the core principles on which it's built have only grown. In this blog post, I will explain why the future of business intelligence is open source. - -## Benefits of open source - -If I could only use a single word to describe why the time is right for organizations to adopt open source BI, the word would be _freedom_. Flowing from the principle of freedom comes a few more concrete superpowers for an organization: - - * The power to customize, extend and integrate - * The power of the community - * Avoid vendor lock-in - - - -### Extend, customize, and integrate - -Airbnb wanted to integrate in-house tools like Dataportal and Minerva with a dashboarding tool to enable data democratization within their organization. Because Superset is open source and Airbnb actively contributes to the project, they could supercharge Superset with in-house components with relative ease. - -On the visualization side, organizations like Nielsen create new visualizations and deploy them in their Superset environments. They're going a step further by empowering their engineers to contribute to Superset's customizability and extensibility. The Superset platform is now flexible enough so that anyone can build their [own custom visualization plugins][7], a benefit that is unmatched in the marketplace. - -Many report using the rich [REST API that ships with Superset][8] within the wider community, allowing them full programmatic control over all aspects of the platform. Given that pretty much everything that users can do in Superset can be done through the API, the sky is the limit for automating processes in and around Superset. 
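-
-As a hedged illustration of what that kind of automation can look like, the sketch below logs in to a Superset instance and lists its dashboards over the REST API. The endpoint paths follow the documented `/api/v1` interface, but the exact fields can vary between versions, so check the API reference linked above before relying on them; the host and credentials are placeholders:
-
-```
-# Request a JWT access token from a local Superset instance (placeholder credentials).
-curl -s -X POST "http://localhost:8088/api/v1/security/login" \
-  -H "Content-Type: application/json" \
-  -d '{"username": "admin", "password": "admin", "provider": "db", "refresh": true}'
-
-# Use the returned access_token to list dashboards programmatically.
-curl -s "http://localhost:8088/api/v1/dashboard/" \
-  -H "Authorization: Bearer <access_token>"
-```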
- -Around the topic of integration, members from the Superset community have added support for over 30 databases ([and growing!][9]) by submitting code and documentation contributions. Because the core contributors bet on the right open source components ([SQLAlchemy][10] and Python [DB-API 2.0][11]), the Superset community both gives and receives to/from the broader Python community. - -### The power of the community - -Open source communities are composed of a diverse group of people who come together over a similar set of needs. This group is empowered to contribute to the common good. Vendors, on the other hand, tend to focus on their most important customers. Open source is a fundamentally different model that's much more collaborative and frictionless. As a result of this fundamentally de-centralized model, communities are very resilient to changes that vendor-led products struggle with. As contributors and organizations come and go, the community lives on! - -At the core of the community are the active contributors that typically operate as a dynamic meritocracy. Network effects attract attention and talent, and communities welcome and offer guidance to newcomers because their goals are aligned. With the rise of platforms like Gitlab and Github, software is pretty unique in that engineers and developers from around the world seem to be able to come together and work collaboratively with minimal overhead. Those dynamics are fairly well understood and accepted as a disruptive paradigm shift in how people collaborate to build modern software. - -![Growth in Monthly Unique Contributors][12] - -Growth in Monthly Unique Contributors - -Beyond the software at the core of the project, dynamic communities contribute in all sorts of ways that provide even more value. Here are some examples: - - * Rich and up-to-date documentation - * Example use cases and testimonials, often in the form of blog posts - * Bug reports and bug fixes, contributing to stability and quality - * Ever-growing online knowledge bases and FAQs - * How-to videos and conference talks - * Real-time support networks of enthusiasts and experts in forums and on [chat platforms][13] - * Dynamic mailing lists where core contributors propose and debate over complex issues - * Feedback loops, ways to suggest features and influence roadmaps - - - -### Avoid lock-in - -Recently, [Atlassian acquired the proprietary BI platform Chart.io][14], started to downsize the Chart.io team, and announced their intention to shut down the platform. Their customers now have to scramble and find a new home for their analytics assets that they now have to rebuild. - -![Chart.io Shutting Down][15] - -Chart.io Shutting Down - -This isn't a new phenomenon. Given how mature and dynamic the BI market is, consolidation has been accelerating over the past few years: - - * Tableau was acquired by Salesforce - * Looker was acquired by Google Cloud - * Periscope was acquired by Sisense - * Zoomdata was acquired by Logi Analytics - - - -While consolidation is likely to continue, these concerns don't arise when your BI platform is open source. If you're self-hosting, you are essentially immune to vendor lock-in. If you choose to partner with a commercial open source software (COSS), you should have an array of options from alternative vendors to hiring expertise in the marketplace, all the way to taking ownership and operating the software on your own. 
- -For example, if you were using Apache Airflow service to take care of your Airflow needs, and your cloud provider decided to shut down the service, you'd be left with a set of viable options: - - * Select and migrate to another service provider in the space, such as Apache Airflow specialist [Astronomer][16]. - * Hire or consult Airflow talent that can help you take control. The community has fostered a large number of professionals who know and love Airflow and can help your organization. - * Learn and act. That is, take control and tap into the community's amazing resources to run the software on your own (Docker, Helm, k8s operator, and so on.) - - - -Even at [Preset][17], where we offer a cloud-hosted version of Superset, we don't fork the Superset code and instead run the same Superset that's available to everyone. In the Preset cloud, you can freely import and export data sources, charts, and dashboards. This is not unique to Preset. Many vendors understand that "no lock-in!" is integral to their value proposition and are incentivized to provide clear guarantees around this. - -## Open source for your data - -Open source is disruptive in the best of ways, providing freedom, and a set of guarantees that really matter when it comes to adopting software. These guarantees fully apply when it comes to business intelligence. In terms of business intelligence, Apache Superset has matured to a level where it's a compelling choice over any proprietary solution. Since its creation in 2015 at an Airbnb hackathon, the project has come a very long way indeed. Try it yourself to discover a combination of features and guarantees unique to open source BI. To learn more, visit and [join our growing community][18]. - -In this article, I review some of the top open source business intelligence (BI) and reporting... 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/business-intelligence-open-source - -作者:[Maxime Beauchemin][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mistercrunch -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) -[2]: https://airflow.apache.org/ -[3]: https://superset.apache.org/ -[4]: https://druid.apache.org/ -[5]: https://trino.io/ -[6]: https://prestodb.io/ -[7]: https://preset.io/blog/2020-07-02-hello-world/ -[8]: https://superset.apache.org/docs/rest-api/ -[9]: https://superset.apache.org/docs/databases/installing-database-drivers -[10]: https://www.sqlalchemy.org/ -[11]: https://www.python.org/dev/peps/pep-0249/ -[12]: https://opensource.com/sites/default/files/uniquecontributors.png -[13]: https://opensource.com/article/20/7/mattermost -[14]: https://www.atlassian.com/blog/announcements/atlassian-acquires-chartio -[15]: https://opensource.com/sites/default/files/chartio.jpg -[16]: https://www.astronomer.io/ -[17]: https://preset.io/ -[18]: https://superset.apache.org/community/ diff --git a/sources/tech/20210408 Protect external storage with this Linux encryption system.md b/sources/tech/20210408 Protect external storage with this Linux encryption system.md deleted file mode 100644 index 6c3725e1bc..0000000000 --- a/sources/tech/20210408 Protect external storage with this Linux encryption system.md +++ /dev/null @@ -1,174 +0,0 @@ -[#]: subject: (Protect external storage with this Linux encryption system) -[#]: via: (https://opensource.com/article/21/3/encryption-luks) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Protect external storage with this Linux encryption system -====== -Use Linux Unified Key Setup to encrypt your thumb drives, external hard -drives, and other storage from prying eyes. -![A keyboard with privacy written on it.][1] - -Many people consider hard drives secure because they physically own them. It's difficult to read the data on a hard drive that you don't have, and many people think that protecting their computer with a passphrase makes the data on the drive unreadable. - -This isn't always the case, partly because, in some cases, a passphrase serves only to unlock a user session. In other words, you can power on a computer, but because you don't have its passphrase, you can't get to the desktop, and so you have no way to open files to look at them. - -The problem, as many a computer technician understands, is that hard drives can be extracted from computers, and some drives are already external by design (USB thumb drives, for instance), so they can be attached to any computer for full access to the data on them. You don't have to physically separate a drive from its computer host for this trick to work, either. Computers can be [booted from a portable boot drive][2], which separates a drive from its host operating system and turns it into, virtually, an external drive available for reading. 
- -The answer is to place the data on a drive into a digital vault that can't be opened without information that only you have access to. - -Linux Unified Key Setup ([LUKS][3]) is a disk-encryption system. It provides a generic key store (and associated metadata and recovery aids) in a dedicated area on a disk with the ability to use multiple passphrases (or key files) to unlock a stored key. It's designed to be flexible and can even store metadata externally so that it can be integrated with other tools. The result is full-drive encryption, so you can store all of your data confident that it's safe—even if your drive is separated, either physically or through software, from your computer. - -### Encrypting during installation - -The easiest way to implement full-drive encryption is to select the option during installation. Most modern Linux distributions offer this as an option, so it's usually a trivial process. - -![Encrypt during installation][4] - -(Seth Kenlon, [CC BY-SA 4.0][5]) - -This establishes everything you need: an encrypted drive requiring a passphrase before your system can boot. If the drive is extracted from your computer or accessed from another operating system running on your computer, the drive must be decrypted by LUKS before it can be mounted. - -### Encrypting external drives - -It's not common to separate an internal hard drive from its computer, but external drives are designed to travel. As technology gets smaller and smaller, it's easier to put a portable drive on your keychain and carry it around with you every day. The obvious danger, however, is that these are also pretty easy to misplace. I've found abandoned drives in the USB ports of hotel lobby computers, business center printers, classrooms, and even a laundromat. Most of these didn't include personal information, but it's an easy mistake to make. - -You can mitigate against misplacing important data by encrypting your external drives. - -LUKS and its frontend `cryptsetup` provide a way to do this on Linux. As Linux does during installation, you can encrypt the entire drive so that it requires a passphrase to mount it. - -### How to encrypt an external drive with LUKS - -First, you need an empty external drive (or a drive with contents you're willing to erase). This process overwrites all the data on a drive, so if you have data that you want to keep on the drive, _back it up first_. - -#### 1\. Find your drive - -I used a small USB thumb drive. To protect you from accidentally erasing data, the drive referenced in this article is located at the imaginary location `/dev/sdX`. Attach your drive and find its location: - - -``` -$ lsblk -sda    8:0    0 111.8G  0 disk -sda1   8:1    0 111.8G  0 part / -sdb    8:112  1  57.6G  0 disk -sdb1   8:113  1  57.6G  0 part /mydrive -sdX    8:128  1   1.8G  0 disk -sdX1   8:129  1   1.8G  0 part -``` - -I know that my demo drive is located at `/dev/sdX` because I recognize its size (1.8GB), and it's also the last drive I attached (with `sda` being the first, `sdb` the second, `sdc` the third, and so on). The `/dev/sdX1` designator means the drive has 1 partition. - -If you're unsure, remove your drive, look at the output of `lsblk`, and then attach your drive and look at `lsblk` again. - -Make sure you identify the correct drive because encrypting it overwrites _everything on it_. My drive is not empty, but it contains copies of documents I have copies of elsewhere, so losing this data isn't significant to me. - -#### 2\. 
Clear the drive - -To proceed, destroy the drive's partition table by overwriting the drive's head with zeros: - - -``` -`$ sudo dd if=/dev/zero of=/dev/sdX count=4096` -``` - -This step isn't strictly necessary, but I like to start with a clean slate. - -#### 3\. Format your drive for LUKS - -The `cryptsetup` command is a frontend for managing LUKS volumes. The `luksFormat` subcommand creates a sort of LUKS vault that's password-protected and can house a secured filesystem. - -When you create a LUKS partition, you're warned about overwriting data and then prompted to create a passphrase for your drive: - - -``` -$ sudo cryptsetup luksFormat /dev/sdX -WARNING! -======== -This will overwrite data on /dev/sdX irrevocably. - -Are you sure? (Type uppercase yes): YES -Enter passphrase: -Verify passphrase: -``` - -#### 4\. Open the LUKS volume - -Now you have a fully encrypted vault on your drive. Prying eyes, including your own right now, are kept out of this LUKS partition. So to use it, you must open it with your passphrase. Open the LUKS vault with `cryptsetup open` along with the device location (`/dev/sdX`, in my example) and an arbitrary name for your opened vault: - - -``` -`$ cryptsetup open /dev/sdX vaultdrive` -``` - -I use `vaultdrive` in this example, but you can name your vault anything you want, and you can give it a different name every time you open it. - -LUKS volumes are opened in a special device location called `/dev/mapper`. You can list the files there to check that your vault was added: - - -``` -$ ls /dev/mapper -control  vaultdrive -``` - -You can close a LUKS volume at any time using the `close` subcommand: - - -``` -`$ cryptsetup close vaultdrive` -``` - -This removes the volume from `/dev/mapper`. - -#### 5\. Create a filesystem - -Now that you have your LUKS volume decrypted and open, you must create a filesystem there to store data in it. In my example, I use XFS, but you can use ext4 or JFS or any filesystem you want: - - -``` -`$ sudo mkfs.xfs -f -L myvault /dev/mapper/vaultdrive` -``` - -### Mount and unmount a LUKS volume - -You can mount a LUKS volume from a terminal with the `mount` command. Assume you have a directory called `/mnt/hd` and want to mount your LUKS volume there: - - -``` -$ sudo cryptsetup open /dev/sdX vaultdrive -$ sudo mount /dev/mapper/vaultdrive /mnt/hd -``` - -LUKS also integrates into popular Linux desktops. For instance, when I attach an encrypted drive to my workstation running KDE or my laptop running GNOME, my file manager prompts me for a passphrase before it mounts the drive. - -![LUKS requesting passcode to mount drive][6] - -(Seth Kenlon, [CC BY-SA 4.0][5]) - -### Encryption is protection - -Linux makes encryption easier than ever. It's so easy, in fact, that it's nearly unnoticeable. The next time you [format an external drive for Linux][7], consider using LUKS first. It integrates seamlessly with your Linux desktop and protects your important data from accidental exposure. 
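-
-For quick reference, the full open, mount, unmount, and close cycle from the steps above can be gathered into one short sketch. It assumes the same illustrative `/dev/sdX` device, `vaultdrive` mapping name, and `/mnt/hd` mount point used earlier:
-
-```
-# Unlock the LUKS vault (you are prompted for the passphrase) and mount it.
-sudo cryptsetup open /dev/sdX vaultdrive
-sudo mount /dev/mapper/vaultdrive /mnt/hd
-
-# ... work with the files under /mnt/hd ...
-
-# Unmount and lock the vault again when you are done.
-sudo umount /mnt/hd
-sudo cryptsetup close vaultdrive
-```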
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/3/encryption-luks - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/privacy_keyboard_security.jpg?itok=vZ9jFdK_ (A keyboard with privacy written on it.) -[2]: https://opensource.com/article/19/6/linux-distros-to-try -[3]: https://gitlab.com/cryptsetup/cryptsetup/blob/master/README.md -[4]: https://opensource.com/sites/default/files/uploads/centos8-install-encrypt.jpg (Encrypt during installation) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/uploads/luks-mount-gui.png (LUKS requesting passcode to mount drive) -[7]: https://opensource.com/article/18/11/partition-format-drive-linux diff --git a/sources/tech/20210409 Stream event data with this open source tool.md b/sources/tech/20210409 Stream event data with this open source tool.md deleted file mode 100644 index 62f8222089..0000000000 --- a/sources/tech/20210409 Stream event data with this open source tool.md +++ /dev/null @@ -1,274 +0,0 @@ -[#]: subject: (Stream event data with this open source tool) -[#]: via: (https://opensource.com/article/21/4/event-streaming-rudderstack) -[#]: author: (Amey Varangaonkar https://opensource.com/users/ameypv) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Stream event data with this open source tool -====== -Route real-time events from web, mobile, and server-side app sources to -help build your customer data lake on your data warehouse. -![Net catching 1s and 0s or data in the clouds][1] - -In my [previous article][2], I introduced [RudderStack][3], an open source, warehouse-first customer data pipeline. In this article, I demonstrate how easy Rudderstack makes it to set up and use event streams. - -An event stream is a pipeline between a source you define and a destination of your choice. Rudderstack provides you with SDKs and plugins to help you ingest event data from your website, mobile apps, and server-side sources — including JavaScript, Gatsby, Android, iOS, Unity, ReactNative, Node.js, and many more. Similarly, Rudderstack's **Event Stream** module features over 80 destination and warehouse integrations, including Firebase, Google Analytics, Salesforce, Zendesk, Snowflake, BigQuery, RedShift, and more, making it easy to send event data to downstream tools that can use it as well as build a customer data lake on a data warehouse for analytical use cases. - -This tutorial shows how to track and route events using RudderStack. - -### How to set up an event stream - -Before you get started, make sure you understand these terms used in this tutorial: - - * **Source**: A source refers to a tool or a platform from which RudderStack ingests your event data. Your website, mobile app, or your back-end server are common examples of sources. - * **Destination**: A destination refers to a tool that receives your event data from RudderStack. These destination tools can then use this data for your activation use cases. Tools like Google Analytics, Salesforce, and HubSpot are common examples of destinations. 
- - - -The steps for setting up an event stream in RudderStack open source are: - - 1. Instrumenting an event stream source - 2. Configuring a warehouse destination - 3. Configuring a tool destination - 4. Sending events to verify the event stream - - - -### Step 1: Instrument an event stream source - -To set up an event stream source in RudderStack: - - 1. Log into your [RudderStack dashboard][4]. If you don't have a RudderStack account, please sign up. You can use the RudderStack open source control plane to [set up your event streams][5]. - -RudderStack's hosted control plane is an option to manage your event stream configurations. It is completely free, requires no setup, and has some more advanced features than the open source control plane. - - 2. Once you've logged into RudderStack, you should see the following dashboard: - -![RudderStack dashboard][6] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -**Note:** Make sure to save the **Data Plane URL**. It is required in your RudderStack JavaScript SDK snippet to track events from your website. - - 3. To instrument the source, click **Add Source**. Optionally, you can also select the **Directory** option on the left navigation bar, and select **Event Streams** under **Sources**. This tutorial will set up a simple **JavaScript** source that allows you to track events from your website. - -![RudderStack event streams dashboard][8] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 4. Assign a name to your source, and click **Next**. - -![RudderStack Source Name][9] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 5. That's it! Your event source is now configured. - -![RudderStack source write key][10] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -**Note:** Save the source **Write Key**. Your RudderStack JavaScript SDK snippet requires it to track events from your website. - - - - -Now you need to install the RudderStack JavaScript SDK on your website. To do this, you need to place either the minified or non-minified version of the snippet with your **Data Plane URL** and source **Write Key** in your website's `` section. Consult the docs for information on how to [install and use the RudderStack JavaScript SDK][11]. - -### Step 2: Configure a warehouse destination - -**Important**: Before you configure your data warehouse as a destination in RudderStack, you need to set up a new project in your warehouse and create a RudderStack user role with the relevant permissions. The docs provide [detailed, step-by-step instructions][12] on how to do this for the warehouse of your choice. - -This tutorial sets up a Google BigQuery warehouse destination. You don't have to configure a warehouse destination, but I recommend it. The docs provide [instructions on setting up][13] a Google BigQuery project and a service account with the required permissions. - -Then configure BigQuery as a warehouse destination in RudderStack by following these steps: - - 1. On the left navigation bar, click on **Directory**, and then click on **Google BigQuery** from the list of destinations: - -![RudderStack destination options][14] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 2. Assign a name to your destination, and click on **Next**. - - - - -![RudderStack naming the destination][15] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 3. Choose which source you want to use to send the events to your destination. Select the source that you created in the previous section. Then, click on **Next**. - - - -![RudderStack selecting data source][16] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 4. 
Specify the required connection credentials. For this destination, enter the **BigQuery Project ID** and the **staging bucket name**; information on [how to get this information][17] is in the docs. - - - -![RudderStack connection credentials][18] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 5. Copy the contents of the private JSON file you created, as [the docs][19] explain. - - - -That's it! You have configured your BigQuery warehouse as a destination in RudderStack. Once you start sending events from your source (a website in this case), RudderStack will automatically route them into your BigQuery and build your identity graph there as well. - -### Step 3: Configure a tool destination - -Once you've added a source, follow these steps to configure a destination in the RudderStack dashboard: - - 1. To add a new destination, click on the **Add Destination** button as shown: - -![RudderStack adding the destination][20] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -**Note:** If you have configured a destination before, use the **Connect Destinations** option to connect it to any source. - - 2. RudderStack supports over 80 destinations to which you can send your event data. Choose your preferred destination platform from the list. This example configures **Google Analytics** as a destination. - - - - -![RudderStack selecting destination platform][21] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 3. Add a name to your destination, and click **Next**. - - - -![RudderStack naming the destination][22] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 4. Next, choose the preferred source. If you're following along with this tutorial, choose the source you configured above. - - - -![RudderStack choosing source][23] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 5. In this step, you must add the relevant **Connection Settings**. Enter the **Tracking ID** for this destination (Google Analytics). You can also configure other optional settings per your requirements. Once you've added the required settings, click **Next**. - -![RudderStack connection settings][24] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -**Note**: RudderStack also gives you the option of transforming the events before sending them to your destination. Read more about [user transformations][25] in RudderStack in the docs. - - 6. That's it! The destination is now configured. You should now see it connected to your source. - - - - -![RudderStack connection configured][26] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -### Step 4: Send test events to verify the event stream - -This tutorial set up a JavaScript source to track events from your website. Once you have placed the JavaScript code snippet in your website's `` section, RudderStack will automatically track and collect user events from the website in real time. - -However, to quickly test if your event stream is set up correctly, you can send some test events. To do so, follow these steps: - -**Note**: Before you get started, you will need to clone the [rudder-server][27] repo and have a RudderStack server installed in your environment. Follow [this tutorial][28] to set up a RudderStack server. - - 1. Make sure you have set up a source and destination by following the steps in the previous sections and have your **Data Plane URL** and source **Write Key** available. - - 2. Start the RudderStack server. - - 3. The **rudder-server** repo includes a shell script that generates test events. 
Get the source **Write Key** from step 2, and run the following command: - - -``` -`./scripts/generate-event /v1/batch` -``` - - - - -![RudderStack event testing code][29] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - - 4. To check if the test events are delivered, go to your Google Analytics dashboard, navigate to **Realtime** under Reports, and click **Events**. - -**Note**: Make sure you check the events associated with the same Tracking ID you provided while instrumenting the destination. - - - - -You should now be able to see the test event received in Google Analytics and BigQuery. - -![RudderStack event test][30] - -(Gavin Johnson, [CC BY-SA 4.0][7]) - -If you come across any issues while setting up or configuring RudderStack open source, join our [Slack][31] and start a conversation in our #open-source channel. We will be happy to help. - -If you want to try RudderStack but don't want to host your own, sign up for our free, hosted offering, [RudderStack Cloud Free][32]. Explore our open source repos on [GitHub][33], subscribe to [our blog][34], and follow us on our socials: [Twitter][35], [LinkedIn][36], [dev.to][37], [Medium][38], and [YouTube][39]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/event-streaming-rudderstack - -作者:[Amey Varangaonkar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ameypv -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds) -[2]: https://opensource.com/article/21/3/rudderstack-customer-data-platform -[3]: https://rudderstack.com/ -[4]: https://app.rudderstack.com/ -[5]: https://docs.rudderstack.com/how-to-guides/rudderstack-config-generator -[6]: https://opensource.com/sites/default/files/uploads/rudderstack_dashboard.png (RudderStack dashboard) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://opensource.com/sites/default/files/uploads/rudderstack_eventstreamsdash.png (RudderStack event streams dashboard) -[9]: https://opensource.com/sites/default/files/uploads/rudderstack_namesource.png (RudderStack Source Name) -[10]: https://opensource.com/sites/default/files/uploads/rudderstack_writekey.png (RudderStack Source Name) -[11]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk -[12]: https://docs.rudderstack.com/data-warehouse-integrations -[13]: https://docs.rudderstack.com/data-warehouse-integrations/google-bigquery -[14]: https://opensource.com/sites/default/files/uploads/rudderstack_destinations.png (RudderStack destination options) -[15]: https://opensource.com/sites/default/files/uploads/rudderstack_namedestination.png (RudderStack naming the destination) -[16]: https://opensource.com/sites/default/files/uploads/rudderstack_adddestination.png (RudderStack selecting data source) -[17]: https://docs.rudderstack.com/data-warehouse-integrations/google-bigquery#setting-up-google-bigquery -[18]: https://opensource.com/sites/default/files/uploads/rudderstack_connectioncredentials.png (RudderStack connection credentials) -[19]: https://docs.rudderstack.com/data-warehouse-integrations/google-bigquery#setting-up-the-service-account-for-rudderstack -[20]: 
https://opensource.com/sites/default/files/uploads/rudderstack_addnewdestination.png (RudderStack adding the destination) -[21]: https://opensource.com/sites/default/files/uploads/rudderstack_googleanalyticsdestination.png (RudderStack selecting destination platform) -[22]: https://opensource.com/sites/default/files/uploads/rudderstack_namenewdestination.png (RudderStack naming the destination) -[23]: https://opensource.com/sites/default/files/uploads/rudderstack_choosepreferredsource.png (RudderStack choosing source) -[24]: https://opensource.com/sites/default/files/uploads/rudderstack_connectionsettings.png (RudderStack connection settings) -[25]: https://docs.rudderstack.com/adding-a-new-user-transformation-in-rudderstack -[26]: https://opensource.com/sites/default/files/uploads/rudderstack_destinationconfigured.png (RudderStack connection configured) -[27]: https://github.com/rudderlabs/rudder-server -[28]: https://docs.rudderstack.com/installing-and-setting-up-rudderstack/docker -[29]: https://opensource.com/sites/default/files/uploads/rudderstack_testevents.jpg (RudderStack event testing code) -[30]: https://opensource.com/sites/default/files/uploads/rudderstack_testeventoutput.png (RudderStack event test) -[31]: https://resources.rudderstack.com/join-rudderstack-slack -[32]: https://app.rudderlabs.com/signup?type=freetrial -[33]: https://github.com/rudderlabs -[34]: https://rudderstack.com/blog/ -[35]: https://twitter.com/RudderStack -[36]: https://www.linkedin.com/company/rudderlabs/ -[37]: https://dev.to/rudderstack -[38]: https://rudderstack.medium.com/ -[39]: https://www.youtube.com/channel/UCgV-B77bV_-LOmKYHw8jvBw diff --git a/sources/tech/20210411 Why Crate.io has returned to its pure open source roots.md b/sources/tech/20210411 Why Crate.io has returned to its pure open source roots.md deleted file mode 100644 index 2de53527f4..0000000000 --- a/sources/tech/20210411 Why Crate.io has returned to its pure open source roots.md +++ /dev/null @@ -1,49 +0,0 @@ -[#]: subject: (Why Crate.io has returned to its pure open source roots) -[#]: via: (https://opensource.com/article/21/4/crate-open-source) -[#]: author: (Bernd Dorn https://opensource.com/users/bernd-dorn) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Why Crate.io has returned to its pure open source roots -====== -CrateDB's renewed commitment to open source aligns with both our best -ideals and what's best for our business. -![Open source stars.][1] - -The headline benefits of open source are widely known and well-articulated. Open source technologies provide enterprise-level scalability, performance, security, and reliability. Trust is there, and it's deserved. But what's less celebrated, other than by die-hard open source adherents, are the inner workings of the everyday community contributions building those macro benefits at the atomic level. For those offering open source technologies, it is the community's constant user-driven testing and hardening that forges those technologies into robust and proven solutions. Those contributions don't show up on the balance sheet, but they can be absolutely formative to an enterprise's health and success. - -In 2013, I co-founded [Crate.io][2] with open source ideals and my belief in the power of the community. 
As a startup intent on bringing the simplicity and strength of open source to the realm of advanced SQL databases that could handle the growing volume of Internet of Things (IoT) and Industrial IoT data, we rooted our CrateDB database in 100% open source component technologies. And we were sure to play our role as active contributors to those technologies and nurtured our own community of CrateDB developers. - -In 2017, Crate began exploring an open core business model, and soon after, I took a few years away from the company. In 2019, Crate began to offer a CrateDB Free Edition with a strict three-node limit and stopped building and distributing packages for CrateDB Community Edition (the open source code could still be downloaded). This move focused the company more heavily on a paid open core enterprise edition that added some proprietary features. From a sales perspective, the idea was to spur community users to convert to paying customers. However, this strategy ended up being a fundamental misunderstanding of the user base. The result was a marked decline in user engagement and the strength of our valuable community while failing to convert much of anyone. - -When I returned to Crate towards the end of 2020 as CTO, I made it my priority to bring back a commitment to pure open source. This sparked a rich conversation between competing viewpoints within our organization. The key to winning over my more revenue-minded colleagues was to explain that community users are completely different in nature from our enterprise customers and offer our business a different kind of support. Furthermore, forcing them away does nothing positive. Our open source community user base contributes crucial influence and experience that improves our technologies very effectively. Their support is invaluable and irreplaceable. Without them, CrateDB isn't nearly as compelling or as enterprise-ready a product. - -Ultimately, it was our investors that weighed in and championed Crate's once and future commitment to pure open source. Our investors even helped us abandon considerations towards other licensing models such as the Business Source License by favoring the [Apache License 2.0][3] we now utilize, pressing for the fully open source permissions it offers. - -Our recent [4.5 release][4] of CrateDB completes this full circle to our open source roots. I couldn't be prouder to say that our business is rededicated to building our community and openly welcomes all contributors as we work hand-in-hand to push CrateDB toward its full potential as a distributed SQL database for machine data. - -I also need to mention the recent decision by [Elastic to discontinue its longstanding commitment to open source][5], as it offers a stark juxtaposition to ours. CrateDB has used open source Elasticsearch from the very beginning. But more than that, open source Elasticsearch was a formative inspiration to Crate's founders, especially me. Our drive to serve as contributing citizens in that community was born out of our work operating some of Europe's largest Elasticsearch deployments. That team and I later created Crate with Elasticsearch as our clearest example of why upholding open source ideals results in powerful technologies. - -Our renewed commitment to open source elegantly aligns with both our best ideals and what's best for our business. The activity of a robust and inclusive open source community is the vital pulse of our product. 
It's our hope to provide an example of what dedication to pure open source can truly accomplish. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/crate-open-source - -作者:[Bernd Dorn][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bernd-dorn -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcestars.png?itok=hnrMETFh (Open source stars.) -[2]: https://crate.io/ -[3]: https://www.apache.org/licenses/LICENSE-2.0 -[4]: https://crate.io/products/cratedb/ -[5]: https://www.elastic.co/blog/licensing-change diff --git a/sources/tech/20210413 Make Conway-s Game of Life in WebAssembly.md b/sources/tech/20210413 Make Conway-s Game of Life in WebAssembly.md deleted file mode 100644 index 83ba01325d..0000000000 --- a/sources/tech/20210413 Make Conway-s Game of Life in WebAssembly.md +++ /dev/null @@ -1,472 +0,0 @@ -[#]: subject: (Make Conway's Game of Life in WebAssembly) -[#]: via: (https://opensource.com/article/21/4/game-life-simulation-webassembly) -[#]: author: (Mohammed Saud https://opensource.com/users/saud) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Make Conway's Game of Life in WebAssembly -====== -WebAssembly is a good option for computationally expensive tasks due to -its predefined execution environment and memory granularity. -![Woman sitting in front of her computer][1] - -Conway's [Game of Life][2] is a popular programming exercise to create a [cellular automaton][3], a system that consists of an infinite grid of cells. You don't play the game in the traditional sense; in fact, it is sometimes referred to as a game for zero players. - -Once you start the Game of Life, the game plays itself to multiply and sustain "life." In the game, digital cells representing lifeforms are allowed to change states as defined by a set of rules. When the rules are applied to cells through multiple iterations, they exhibit complex behavior and interesting patterns. - -The Game of Life simulation is a very good candidate for a WebAssembly implementation because of how computationally expensive it can be; every cell's state in the entire grid must be calculated for every iteration. WebAssembly excels at computationally expensive tasks due to its predefined execution environment and memory granularity, among many other features. - -### Compiling to WebAssembly - -Although it's possible to write WebAssembly by hand, it is very unintuitive and error-prone as complexity increases. Most importantly, it's not intended to be written that way. It would be the equivalent of manually writing [assembly language][4] instructions. - -Here's a simple WebAssembly function to add two numbers: - - -``` -(func $Add (param $0 i32) (param $1 i32) (result i32) -    local.get $0 -    local.get $1 -    i32.add -) -``` - -It is possible to compile WebAssembly modules using many existing languages, including C, C++, Rust, Go, and even interpreted languages like Lua and Python. This [list][5] is only growing. - -One of the problems with using existing languages is that WebAssembly does not have much of a runtime. It does not know what it means to [free a pointer][6] or what a [closure][7] is. 
All these language-specific runtimes have to be included in the resulting WebAssembly binaries. Runtime size varies by language, but it has an impact on module size and execution time. - -### AssemblyScript - -[AssemblyScript][8] is one language that is trying to overcome some of these challenges with a different approach. AssemblyScript is designed specifically for WebAssembly, with a focus on providing low-level control, producing smaller binaries, and reducing the runtime overhead. - -AssemblyScript uses a strictly typed variant of [TypeScript][9], a superset of JavaScript. Developers familiar with TypeScript do not have to go through the trouble of learning an entirely new language. - -### Getting started - -The AssemblyScript compiler can easily be installed through [Node,js][10]. Start by initializing a new project in an empty directory: - - -``` -npm init -npm install --save-dev assemblyscript -``` - -If you don't have Node installed locally, you can play around with AssemblyScript on your browser using the nifty [WebAssembly Studio][11] application. - -AssemblyScript comes with `asinit`, which should be installed when you run the installation command above. It is a helpful utility to quickly set up an AssemblyScript project with the recommended directory structure and configuration files: - - -``` -`npx asinit .` -``` - -The newly created `assembly` directory will contain all the AssemblyScript code, a simple example function in `assembly/index.ts`, and the `asbuild` command inside `package.json`. `asbuild`, which compiles the code into WebAssembly binaries. - -When you run `npm run asbuild` to compile the code, it creates files inside `build`. The `.wasm` files are the generated WebAssembly modules. The `.wat` files are the modules in text format and are generally used for debugging and inspection. - -You have to do a little bit of work to get the binaries to run on a browser. - -First, create a simple HTML file, `index.html`: - - -``` -<[html][12]> -    <[head][13]> -        <[meta][14] charset=utf-8> -        <[title][15]>Game of life</[title][15]> -    </[head][13]> -    -    <[body][16]> -        <[script][17] src='./index.js'></[script][17]> -    </[body][16]> -</[html][12]> -``` - -Next, replace the contents of `index.js` with the code snippet below to load the WebAssembly modules: - - -``` -const runWasm = async () => { -  const module = await WebAssembly.instantiateStreaming(fetch('./build/optimized.wasm')); -  const exports = module.instance.exports; - -  console.log('Sum = ', exports.add(20, 22)); -}; - -runWasm(); -``` - -This `fetches` the binary and passes it to `WebAssembly.instantiateStreaming`, the browser API that compiles a module into a ready-to-use instance. This is an asynchronous operation, so it is run inside an async function so that await can be used to wait for it to finish compiling. - -The `module.instance.exports` object contains all the functions exported by AssemblyScript. Use the example function in `assembly/index.ts` and log the result. - -You will need a simple development server to host these files. There are a lot of options listed in this [gist][18]. I used [node-static][19]: - - -``` -npm install -g node-static -static -``` - -You can view the result by pointing your browser to `localhost:8080` and opening the console. - -![console output][20] - -(Mohammed Saud, [CC BY-SA 4.0][21]) - -### Drawing to a canvas - -You will be drawing all the cells onto a `` element: - - -``` -<[body][16]> -    <[canvas][22] id=canvas></[canvas][22]> - -    ... 
-</[body][16]> -``` - -Add some CSS: - - -``` -<[head][13]> -    ... - -    <[style][23] type=text/css> -    body { -      background: #ccc; -    } -    canvas { -      display: block; -      padding: 0; -      margin: auto; -      width: 40%; - -      image-rendering: pixelated; -      image-rendering: crisp-edges; -    } -    </[style][23]> -</[head][13]> -``` - -The `image-rendering` styles are used to prevent the canvas from smoothing and blurring out pixelated images. - -You will need a canvas drawing context in `index.js`: - - -``` -const canvas = document.getElementById('canvas'); -const ctx = canvas.getContext('2d'); -``` - -There are many functions in the [Canvas API][24] that you could use for drawing—but you need to draw using WebAssembly, not JavaScript. - -Remember that WebAssembly does NOT have access to the browser APIs that JavaScript has, and any call that needs to be made should be interfaced through JavaScript. This also means that your WebAssembly module will run the fastest if there is as little communication with JavaScript as possible. - -One method is to create [ImageData][25] (a data type for the underlying pixel data of a canvas), fill it up with the WebAssembly module's memory, and draw it on the canvas. This way, if the memory buffer is updated inside WebAssembly, it will be immediately available to the `ImageData`. - -Define the pixel count of the canvas and create an `ImageData` object: - - -``` -const WIDTH = 10, HEIGHT = 10; - -const runWasm = async () => { -... - -canvas.width = WIDTH; -canvas.height = HEIGHT; - -const ctx = canvas.getContext('2d'); -const memoryBuffer = exports.memory.buffer; -const memoryArray = new Uint8ClampedArray(memoryBuffer) - -const imageData = ctx.createImageData(WIDTH, HEIGHT); -imageData.data.set(memoryArray.slice(0, WIDTH * HEIGHT * 4)); -ctx.putImageData(imageData, 0, 0); -``` - -The memory of a WebAssembly module is provided in `exports.memory.buffer` as an [ArrayBuffer][26]. You need to use it as an array of 8-bit unsigned integers or `Uint8ClampedArray`. Now you can fill up the module's memory with some pixels. In `assembly/index.ts`, you first need to grow the available memory: - - -``` -`memory.grow(1);` -``` - -WebAssembly does not have access to memory by default and needs to request it from the browser using the `memory.grow` function. Memory grows in chunks of 64Kb, and the number of required chunks can be specified when calling it. You will not need more than one chunk for now. - -Keep in mind that memory can be requested multiple times, whenever needed, and once acquired, memory cannot be freed or given back to the browser. - -Writing to the memory: - - -``` -`store(0, 0xff101010);` -``` - -A pixel is represented by 32 bits, with the RGBA values taking up 8 bits each. Here, RGBA is defined in reverse—ABGR—because WebAssembly is [little-endian][27]. - -The `store` function stores the value `0xff101010` at index `0`, taking up 32 bits. The alpha value is `0xff` so that the pixel is fully opaque. - -![Byte order for a pixel's color][28] - -(Mohammed Saud, [CC BY-SA 4.0][21]) - -Build the module again with `npm run asbuild` before refreshing the page to see your first pixel on the top-left of the canvas. - -### Implementing rules - -Let's review the rules. The [Game of Life Wikipedia page][29] summarizes them nicely: - - 1. Any live cell with fewer than two live neighbors dies, as if by underpopulation. - 2. Any live cell with two or three live neighbors lives on to the next generation. - 3. 
Any live cell with more than three live neighbors dies, as if by overpopulation. - 4. Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction. - - - -You need to iterate through all the rows, implementing these rules on each cell. You do not know the width and height of the grid, so write a little function to initialize the WebAssembly module with this information: - - -``` -let universe_width: u32; -let universe_height: u32; -let alive_color: u32; -let dead_color: u32; -let chunk_offset: u32; - -export function init(width: u32, height: u32): void { -  universe_width = width; -  universe_height = height; -  chunk_offset = width * height * 4; - -  alive_color = 0xff101010; -  dead_color = 0xffefefef; -} -``` - -Now you can use this function in `index.js` to provide data to the module: - - -``` -`exports.init(WIDTH, HEIGHT);` -``` - -Next, write an `update` function to iterate over all the cells, count the number of active neighbors for each, and set the current cell's state accordingly: - - -``` -export function update(): void { -  for (let x: u32 = 0; x < universe_width; x++) { -    for (let y: u32 = 0; y < universe_height; y++) { - -      const neighbours = countNeighbours(x, y); - -      if (neighbours < 2) { -        // less than 2 neighbours, cell is no longer alive -        setCell(x, y, dead_color); -      } else if (neighbours == 3) { -        // cell will be alive -        setCell(x, y, alive_color); -      } else if (neighbours > 3) { -        // cell dies due to overpopulation -        setCell(x, y, dead_color); -      } -    } -  } - -  copyToPrimary(); -} -``` - -You have two copies of cell arrays, one representing the current state and the other for calculating and temporarily storing the next state. After the calculation is done, the second array is copied to the first for rendering. - -The rules are fairly straightforward, but the `countNeighbours()` function looks interesting. Take a closer look: - - -``` -function countNeighbours(x: u32, y: u32): u32 { -  let neighbours = 0; - -  const max_x = universe_width - 1; -  const max_y = universe_height - 1; - -  const y_above = y == 0 ? max_y : y - 1; -  const y_below = y == max_y ? 0 : y + 1; -  const x_left = x == 0 ? max_x : x - 1; -  const x_right = x == max_x ? 0 : x + 1; - -  // top left -  if(getCell(x_left, y_above) == alive_color) { -    neighbours++; -  } - -  // top -  if(getCell(x, y_above) == alive_color) { -    neighbours++; -  } - -  // top right -  if(getCell(x_right, y_above) == alive_color) { -    neighbours++; -  } - -  ... - -  return neighbours; -} -``` - -![Coordinates of a cell's neighbors][30] - -(Mohammed Saud, [CC BY-SA 4.0][21]) - -Every cell has eight neighbors, and you can check if each one is in the `alive_color` state. The important situation handled here is if a cell is exactly on the edge of the grid. Cellular automata are generally assumed to be on an infinite space, but since infinitely large displays haven't been invented yet, stick to warping at the edges. This means when a cell goes off the top, it comes back in its corresponding position on the bottom. This is commonly known as [toroidal space][31]. 
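-
-If the chain of ternary checks feels verbose, the same wrap-around can be written with modular arithmetic. The `wrap` helper below is my own sketch and is not part of the article's code — it simply maps -1 back to the last row or column and `max` back to 0:
-
-```
-// Hypothetical helper, not in the original source: wrap a coordinate onto the
-// toroidal grid, so wrap(-1, 10) == 9 and wrap(10, 10) == 0.
-function wrap(v: i32, max: i32): u32 {
-  return u32((v + max) % max);
-}
-```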
- -The `getCell` and `setCell` functions are wrappers to the `store` and `load` functions to make it easier to interact with memory using 2D coordinates: - - -``` -@inline -function getCell(x: u32, y: u32): u32 { -  return load<u32>((x + y * universe_width) << 2); -} - -@inline -function setCell(x: u32, y: u32, val: u32): void { -  store<u32>(((x + y * universe_width) << 2) + chunk_offset, val); -} - -function copyToPrimary(): void { -  memory.copy(0, chunk_offset, chunk_offset); -} -``` - -The `@inline` is an [annotation][32] that requests that the compiler convert calls to the function with the function definition itself. - -Call the update function on every iteration from `index.js` and render the image data from the module memory: - - -``` -const FPS = 5; - -const runWasm = async () => { -  ... - -  const step = () => { -    exports.update(); -  -    imageData.data.set(memoryArray.slice(0, WIDTH * HEIGHT * 4)); -    ctx.putImageData(imageData, 0, 0); -  -    setTimeout(step, 1000 / FPS); -  }; -  step(); -``` - -At this point, if you compile the module and load the page, it shows nothing. The code works fine, but since you don't have any living cells initially, there are no new cells coming up. - -Create a new function to randomly add cells during initialization: - - -``` -function fillUniverse(): void { -  for (let x: u32 = 0; x < universe_width; x++) { -    for (let y: u32 = 0; y < universe_height; y++) { -      setCell(x, y, Math.random() > 0.5 ? alive_color : dead_color); -    } -  } - -  copyToPrimary(); -} - -export function init(width: u32, height: u32): void { -  ... - -  fillUniverse(); -``` - -Since `Math.random` is used to determine the initial state of a cell, the WebAssembly module needs a seed function to derive a random number from. - -AssemblyScript provides a convenient [module loader][33] that does this and a lot more, like wrapping the browser APIs for module loading and providing functions for more fine-grained memory control. You will not be using it here since it abstracts away many details that would otherwise help in learning the inner workings of WebAssembly, so pass in a seed function instead: - - -``` -  const importObject = { -    env: { -      seed: Date.now, -      abort: () => console.log('aborting!') -    } -  }; -  const module = await WebAssembly.instantiateStreaming(fetch('./build/optimized.wasm'), importObject); -``` - -`instantiateStreaming` can be called with an optional second parameter, an object that exposes JavaScript functions to WebAssembly modules. Here, use `Date.now` as the seed to generate random numbers. - -It should now be possible to run the `fillUniverse` function and finally have life on your grid! - -You can also play around with different `WIDTH`, `HEIGHT`, and `FPS` values and use different cell colors. - -![Game of Life result][34] - -(Mohammed Saud, [CC BY-SA 4.0][21]) - -### Try the game - -If you use large sizes, make sure to grow the memory accordingly. - -Here's the [complete code][35]. 
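-
-For larger grids, the single 64Kb chunk requested by `memory.grow(1)` runs out quickly. One rough way to size it — a sketch of my own that assumes you move the grow call into `init`, where the dimensions are known — is to compute how many pages the two pixel buffers need:
-
-```
-// Sketch only: request enough 64Kb pages for two width * height buffers
-// of 4-byte pixels (the current state plus the working copy).
-const bytesNeeded: u32 = width * height * 4 * 2;
-const pagesNeeded: i32 = i32((bytesNeeded + 0xffff) >>> 16); // round up to whole pages
-if (pagesNeeded > memory.size()) {
-  memory.grow(pagesNeeded - memory.size());
-}
-```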
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/game-life-simulation-webassembly - -作者:[Mohammed Saud][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/saud -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer) -[2]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life -[3]: https://en.wikipedia.org/wiki/Cellular_automaton -[4]: https://en.wikipedia.org/wiki/Assembly_language -[5]: https://github.com/appcypher/awesome-wasm-langs -[6]: https://en.wikipedia.org/wiki/C_dynamic_memory_allocation -[7]: https://en.wikipedia.org/wiki/Closure_(computer_programming) -[8]: https://www.assemblyscript.org -[9]: https://www.typescriptlang.org/ -[10]: https://nodejs.org/en/download/ -[11]: https://webassembly.studio -[12]: http://december.com/html/4/element/html.html -[13]: http://december.com/html/4/element/head.html -[14]: http://december.com/html/4/element/meta.html -[15]: http://december.com/html/4/element/title.html -[16]: http://december.com/html/4/element/body.html -[17]: http://december.com/html/4/element/script.html -[18]: https://gist.github.com/willurd/5720255 -[19]: https://www.npmjs.com/package/node-static -[20]: https://opensource.com/sites/default/files/uploads/console_log.png (console output) -[21]: https://creativecommons.org/licenses/by-sa/4.0/ -[22]: http://december.com/html/4/element/canvas.html -[23]: http://december.com/html/4/element/style.html -[24]: https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API -[25]: https://developer.mozilla.org/en-US/docs/Web/API/ImageData -[26]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer -[27]: https://en.wikipedia.org/wiki/Endianness -[28]: https://opensource.com/sites/default/files/uploads/color_bits.png (Byte order for a pixel's color) -[29]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Rules -[30]: https://opensource.com/sites/default/files/uploads/count_neighbours.png (Coordinates of a cell's neighbors) -[31]: https://en.wikipedia.org/wiki/Torus -[32]: https://www.assemblyscript.org/peculiarities.html#annotations -[33]: https://www.assemblyscript.org/loader.html -[34]: https://opensource.com/sites/default/files/uploads/life.png (Game of Life result) -[35]: https://github.com/rottencandy/game-of-life-wasm diff --git a/sources/tech/20210414 Using Web Assembly Written in Rust on the Server-Side.md b/sources/tech/20210414 Using Web Assembly Written in Rust on the Server-Side.md deleted file mode 100644 index d610f17791..0000000000 --- a/sources/tech/20210414 Using Web Assembly Written in Rust on the Server-Side.md +++ /dev/null @@ -1,306 +0,0 @@ -[#]: subject: (Using Web Assembly Written in Rust on the Server-Side) -[#]: via: (https://www.linux.com/news/using-web-assembly-written-in-rust-on-the-server-side/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Using Web Assembly Written in Rust on the Server-Side -====== - -_By Bob Reselman_ - -_This article was originally published at 
[TheNewStac][1]k_ - -WebAssembly allows you to write code in a low-level programming language such as Rust, that gets compiled into a transportable binary. That binary can then be run on the client-side in the WebAssembly virtual machine that is [standard in today’s web browsers][2]. Or, the binary can be used on the server-side, as a component consumed by another programming framework — such as Node.js or [Deno][3]. - -WebAssembly combines the efficiency inherent in low-level code programming with the ease of component transportability typically found in Linux containers. The result is a development paradigm specifically geared toward doing computationally intensive work at scale — for example, artificial intelligence and complex machine learning tasks. - -As Solomon Hykes, the creator of Docker, [tweeted][4] on March 27, 2019: “If WASM+WASI existed in 2008, we wouldn’t have needed to have created Docker. That’s how important it is. WebAssembly on the server is the future of computing.” - -WebAssembly is a compelling approach to software development. However, in order to get a true appreciation for the technology, you need to see it in action. - -In this article, I am going to show you how to program a WebAssembly binary in Rust and use it in a TypeScript-powered web server running under Deno. I’ll show you how to install Rust and prep the runtime environment. We’ll compile the source code into a Rust binary. Then, once the binary is created, I’ll demonstrate how to run it on the server-side under [Deno][3]. Deno is a TypeScript-based programming framework that was started by Ryan Dahl, the creator of Node.js. - -### Understanding the Demonstration Project - -The demonstration project that accompanies this article is called Wise Sayings. The project stores a collection of “wise sayings” in a text file named wisesayings.txt. Each line in the text file is a wise saying, for example, “_A friend in need is a friend indeed._” - -The Rust code publishes a single function, get_wise_saying(). That function gets a random line from the text file, wisesayings.txt, and returns the random line to the caller. (See Figure 1, below) - - - -Figure 1: The demonstration project compiles data in a text file directly into the WebAssembly binary - -Both the code and text file are compiled into a single WebAssembly binary file, named wisesayings.wasm. Then another layer of processing is performed to make the WebAssembly binary consumable by the Deno web server code. The Deno code calls the function get_wise_sayings() in the WebAssembly binary, to produce a random wise saying. (See Figure 2.) - - - -Figure 2: WebAssembly binaries can be consumed by a server-side programming framework such as Deno. - -_You get the source code for the Wise Sayings demonstration project used in this article [on GitHub][5]. All the steps described in this article are listed on the repository’s main [Readme][6] document._ - -### Prepping the Development Environment - -The first thing we need to do to get the code up and running is to make sure that Rust is installed in the development environment. The following steps describe the process. - -**Step 1: **Make sure Rust is installed on your machine by typing: - -1 | rustc —version ----|--- - -You’ll get output similar to the following: - -1 | rustc 1.50.0 (cb75ad5db 2021–02–10) ----|--- - -If the call to rustc –version fails, you don’t have Rust installed. Follow the instructions below and** make sure you do all the tasks presented by the given installation method**. 
- -To install Rust, go here and install on Linux/MAC: … - -1 | curl —proto ‘=https’ —tlsv1.2 –sSf ----|--- - -… or here to install it on Windows: - -Download and run rustup-init.exe which you can find at this URL: . - -**Step 2:** Modify your system’s PATH - -1 | export PATH=“$HOME/.cargo/bin:$PATH” ----|--- - -**Step 3: **If you’re working in a Linux environment do the following steps to install the required additional Linux components. - -1 2 3 4 5 | sudo apt–get update –y sudo apt–get install –y libssl–dev apt install pkg–config ----|--- - -***Developer’s Note: *_The optimal development environment in which to run this code is one that uses the Linux operating system._ - -**Step 4: **Get the CLI tool that you’ll use for generating the TypeScript/JavaScript adapter files. These adapter files (a.k.a. shims) do the work of exposing the function get_wise_saying() in the WebAssembly binary to the Deno web server that will be hosting the binary. Execute the following command at the command line to install the tool, [wasm-bindgen-cli][7]. - -1 | cargo install wasm–bindgen–cli ----|--- - -The development environment now has Rust installed, along with the necessary ancillary libraries. Now we need to get the Wise Saying source code. - -### Working with the Project Files - -The Wise Saying source code is hosted in a GitHub repository. Take the following steps to clone the source code from GitHub onto the local development environment. - -**Step 1: **Execute the following command to clone the Wise Sayings source code from GitHub - -1 | git clone ----|--- - -**Step 2: **Go to the working directory - -1 | cd wisesayingswasm/ ----|--- - -Listing 1, below lists the files that make up the source code cloned from the GitHub repository. - -1 2 3 4 5 6 7 8 9 10 11 12 13 14 | . ├── Cargo.toml ├── cheatsheet.txt ├── LICENSE ├── lldbconfig ├── package–lock.json ├── README.md ├── server │   ├── main.ts │   └── package–lock.json └── src     ├── fortunes.txt     ├── lib.rs     └── main.rs ----|--- - -_Listing 1: The files for the source code for the Wise Sayings demonstration project hosted in the GitHub repository_ - -Let’s take a moment to describe the source code files listed above in Listing 1. The particular files of interest with regard to creating the WebAssembly binary are the files in the directory named, src at Line 11 and the file, Cargo.toml at Line 2. - -Let’s discuss Cargo.toml first. The content of Cargo.toml is shown in Listing 2, below. - -1 2 3 4 5 6 7 8 9 10 11 12 13 14 | [package] name = “wise-sayings-wasm” version = “0.1.0” authors = [“Bob Reselman <bob@CogArtTech.com>”] edition = “2018” [dependencies] rand = “0.8.3” getrandom = { version = “0.2”, features = [“js”] } wasm–bindgen = “0.2.70” [lib] name = “wisesayings” crate–type =[“cdylib”, “lib”] ----|--- - -_Listing 2: The content of Cargo.toml for the demonstration project Wise Sayings_ - -Cargo.toml is the [manifest file][8] that describes various aspects of the Rust project under development. The Cargo.toml file for the Wise Saying project is organized into three sections: package, dependencies, and lib. The section names are defined in the Cargo manifest specification, which you read [here][8]. - -#### Understanding the Package Section of Cargo.toml - -The package section indicates the name of the package (wise-sayings-wasm), the developer assigned version (0.1.0), the authors (Bob Reselman <[bob@CogArtTech.com][9]>) and the edition of Rust (2018) that is used to program the binary. 
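-
-One practical note of my own before walking through the sections: the wasm-bindgen crate version pinned in Cargo.toml needs to match the wasm-bindgen-cli tool installed earlier, otherwise the later wasm-bindgen step will refuse to process the binary because of a format mismatch. If the two drift apart, reinstalling the CLI at the pinned version usually resolves it:
-
-```
-# Assumes the crate version pinned in Cargo.toml is 0.2.70, as in Listing 2
-cargo install wasm-bindgen-cli --version 0.2.70 --force
-```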
- -#### Understanding the Dependencies Section of Cargo.toml - -The dependencies section lists the dependencies that the WebAssembly project needs to do its work. As you can see in Listing 2, above at Line 8, the Cargo.toml lists the rand library as a dependency. The rand library provides the capability to generate a random number which is used to get a random line of wise saying text from the file, wisesayings.txt. - -The reference to getrandom at Line 9 in Listing 2 above indicates that the WebAssembly binary’s [getrandom][10] is running under Javascript and that the [JavaScript interface should be used][11]. This condition is very particular to running a WebAssembly binary under JavaScript. The long and short of it is that if the line getrandom = { version = “0.2”, features = [“js”] } is not included in the Cargo.toml, the WebAssembly binary will not be able to create a random number. - -The entry at Line 10 declares the [wasm-bindgen][12] library as a dependency. The wasm-bindgen library provides the capability for wasm modules to talk to JavaScript and JavaScript to talk to wasm modules. - -#### Understanding the Lib Section of Cargo.toml - -The entry [crate-type =[“cdylib”, “lib”]][13] at Line 14 in the lib section of the Cargo.toml file tells the Rust compiler to create a wasm binary without a start function. Typically when cdylib is indicated, the compiler will create a [dynamic library][14] with the extension .dll in Windows, .so in Linux, or .dylib in MacOS. In this case, because the deployment unit is a WebAssembly binary, the compiler will create a file with the extension .wasm. The name of the wasm file will be wisesayings.wasm, as indicated at Line 13 above in Listing 2. - -The important thing to understand about Cargo.toml is that it provides both the design and runtime information needed to get your Rust code up and running. If the Cargo.toml file is not present, the Rust compiler doesn’t know what to do and the build will fail. - -### Understanding the Core Function, get_wise_saying() - -The actual work of getting a random line that contains a Wise Saying from the text file wisesayings.txt is done by the function get_wise_saying(). The code for get_wise_sayings() is in the Rust library file, ./src/lib.rs. The Rust code is shown below in Listing 3. - -1 2 3 4 5 6 7 8 9 10 11 12 13 | use rand::seq::IteratorRandom; use wasm_bindgen::prelude::*; #[wasm_bindgen] pub fn get_wise_saying() -> String {     let str = include_str!(“fortunes.txt”);     let mut lines = str.lines();     let line = lines         .choose(&mut rand::thread_rng())         .expect(“File had no lines”);     return line.to_string(); } ----|--- - -_Listing 3: The function file, lib.rs contains the function, get_wise_saying()._ - -The important things to know about the source is that it’s tagged at Line 4 with the attribute #[wasm_bindgen], which lets the Rust compiler know that the source code is targeted as a WebAssembly binary. The code publishes one function, get_wise_saying(), at Line 5. The way the wise sayings text file is loaded into memory is to use the [Rust macro][15], [include_str!][16]. This macro does the work of getting the file from disk and loading the data into memory. The macro loads the file as a string and the function str.lines() separates the lines within the string into an array. (Line 7.) - -The rand::thread_rng() call at Line 10 returns a number that is used as an index by the .choose() function at Line 10. 
The result of it all is an array of characters (a string) that reflects the wise saying returned by the function. - -### Creating the WebAssembly Binary - -Let’s move on compiling the code into a WebAssembly Binary. - -**Step 1: **Compile the source code into a WebAssembly is shown below. - -1 | cargo build —lib —target wasm32–unknown–unknown ----|--- - -WHERE - -**cargo build** is the command and subcommand to invoke the Rust compiler using the settings in the Cargo.toml file. - -**–lib** is the option indicating that you’re going to build a library against the source code in the ./lib directory. - -**–targetwasm32-unknown-unknown** indicates that Rust will use the wasm-unknown-unknown compiler and will store the build artifacts as well as the WebAssembly binary into directories within the target directory, **wasm32-unknown-unknown.** - -#### **Understanding the Rust Target Triple Naming Convention** - -Rust has a naming convention for targets. The term used for the convention is a _target triple_. A target triple uses the following format: ARCH-VENDOR-SYS-ABI. - -**WHERE** - -**ARCH** describes the intended target architecture, for example wasm32 for WebAssembly, or i686 for current-generation Intel chips. - -**VENDOR** describes the vendor publishing the target; for example, Apple or Nvidia. - -**SYS** describes the operating system; for example, Windows or Linux. - -**ABI** describes how the process starts up, for eabi is used for bare metal, while gnu is used for glibc. - -Thus, the name i686-unknown-linux-gnu means that the Rust binary is targeted to an i686 architecture, the vendor is defined as unknown, the targeted operating system is Linux, and ABI is gnu. - -In the case of wasm32-unknown-unknown, the target is WebAssembly, the operating system is unknown and the ABI is unknown. The informal inference of the name is “it’s a WebAssembly binary.” - -There are a standard set of built-in targets defined by Rust that can be found [here][17]. - -If you find the naming convention to be confusing because there are optional fields and sometimes there are four sections to the name, while other times there will be three sections, you are not alone. - -### Deploying the Binary Server-Side Using Deno - -After we build the base WeAssembly binary, we need to create the adapter (a.k.a shim) files and a special version of the WebAssembly binary — all of which can be run from within JavaScript. We’ll create these artifacts using the [wasm-bindgen][18] tool. - -**Step 1: **We create these new artifacts using the command shown below. - -1 | wasm–bindgen —target deno ./target/wasm32–unknown–unknown/debug/wisesayings.wasm —out–dir ./server ----|--- - -WHERE - -**wasm-bindgen** is the command for creating the adapter files and the special WebAssembly binary. - -**–target deno ./target/wasm32-unknown-unknown/debug/wisesayings.wasm** is the option that indicates the adapter files will be targeted for Deno. Also, the option denotes the location of the original WebAssembly wasm binary that is the basis for the artifact generation process. - -**–out-dir ./server** is the option that declares the location where the created adapter files will be stored on disk; in this case, **./server**. - -The result of running wasm-bindgen is the server directory shown in Listing 4 below. - -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 | . 
├── Cargo.toml ├── cheatsheet.txt ├── LICENSE ├── lldbconfig ├── package–lock.json ├── README.md ├── server │   ├── main.ts │   ├── package–lock.json │   ├── wisesayings_bg.wasm │   ├── wisesayings_bg.wasm.d.ts │   ├── wisesayings.d.ts │   └── wisesayings.js └── src     ├── fortunes.txt     ├── lib.rs     └── main.rs ----|--- - -_Listing 4: The server directory contains the results of running wasm-bindgen_ - -Notice that the contents of the server directory, shown above in Listing 4, now has some added JavaScript (js) and TypeScript (ts) files. Also, the server directory has the special version of the WebAssembly binary, named wisesayings_bg.wasm. This version of the WebAssembly binary is a stripped-down version of the wasm file originally created by the initial compilation, done when invoking cargo build earlier. You can think of this new wasm file as a JavaScript-friendly version of the original WebAssembly binary. The suffix, _bg, is an abbreviation for bindgen. - -### Running the Deno Server - -Once all the artifacts for running WebAssembly have been generated into the server directory, we’re ready to invoke the Deno web server. Listing 5 below shows content of main.ts, which is the source code for the Deno web server. - -1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 | import { serve } from “https://deno.land/std@0.86.0/http/server.ts”; import { get_wise_saying } from “./wisesayings.js”; const env = Deno.env.toObject(); let port = 4040; if(env.WISESAYING_PORT){   port = Number(env.WISESAYING_PORT); }; const server = serve({ hostname: “0.0.0.0”, port}); console.log(`HTTP webserver running at ${new Date()}.  Access it at:  http://localhost:${port}/`); for await (const request of server) {     const saying = get_wise_saying();     request.respond({ status: 200, body: saying });   } ----|--- - -_Listing 5: main.ts is the Deno webserver code that uses the WebAssembly binary_ - -You’ll notice that the WebAssembly wasm binary is not imported directly. This is because the work of representing the WebAssembly binary is done by the JavaScript and TypeScript adapter (a.k.a shim) files generated earlier. The WebAssembly/Rust function, get_wise_sayings(), is exposed in the auto-generated JavaScript file, wisesayings.js. - -The function get_wise_saying is imported into the webserver code at Line 2 above. The function is used at Line 16 to get a wise saying that will be returned as an HTTP response by the webserver. - -To get the Deno web server up and running, execute the following command in a terminal window. - -**Step 1:** - -1 | deno run —allow–read —allow–net —allow–env ./main.ts ----|--- - -WHERE - -deno run is the command set to invoke the webserver. - -–allow-read is the option that allows the Deno webserver code to have permission to read files from disk. - -–allow-net is the option that allows the Deno webserver code to have access to the network. - -–allow-env is the option that allows the Deno webserver code read environment variables. - -./main.ts is the TypeScript file that Deno is to run. In this case, it’s the webserver code. - -When the webserver is up and running, you’ll get output similar to the following: - -HTTP webserver running at Thu Mar 11 2021 17:57:32 GMT+0000 (Coordinated Universal Time). 
Access it at: - -**Step 2:** - -Run the following command in a terminal on your computer to exercise the Deno/WebAssembly code - -1 | curl localhost:4040 ----|--- - -You’ll get a wise saying, for example: - -_True beauty lies within._ - -**Congratulations!** You’ve created and run a server-side WebAssembly binary. - -### Putting It All Together - -In this article, I’ve shown you everything you need to know to create and use a WebAssembly binary in a Deno web server. Yet for as detailed as the information presented in this article is, there is still a lot more to learn about what’s under the covers. Remember, Rust is a low-level programming language. It’s meant to go right up against the processor and memory directly. That’s where its power really is. The real benefit of WebAssembly is using the technology to do computationally intensive work from within a browser. Applications that are well suited to WebAssembly are visually intensive games and activities that require complex machine learning capabilities — for example, real-time voice recognition and language translation. WebAssembly allows you to do computation on the client-side that previously was only possible on the server-side. As Solomon Hykes said, WebAssembly is the future of computing. He might very well be right. - -The important thing to understand is that WebAssembly provides enormous opportunities for those wanting to explore cutting-edge approaches to modern distributed computing. Hopefully, the information presented in this piece will motivate you to explore those opportunities. - -The post [Using Web Assembly Written in Rust on the Server-Side][19] appeared first on [Linux Foundation – Training][20]. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/using-web-assembly-written-in-rust-on-the-server-side/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/ -[b]: https://github.com/lujun9972 -[1]: https://thenewstack.io/using-web-assembly-written-in-rust-on-the-server-side/ -[2]: https://www.infoq.com/news/2017/12/webassembly-browser-support/ -[3]: https://deno.land/ -[4]: https://twitter.com/solomonstre/status/1111004913222324225?s=20 -[5]: https://github.com/reselbob/wisesayingswasm -[6]: https://github.com/reselbob/wisesayingswasm/blob/main/README.md -[7]: https://rustwasm.github.io/docs/wasm-bindgen/reference/cli.html -[8]: https://doc.rust-lang.org/cargo/reference/manifest.html -[9]: mailto:bob@CogArtTech.com -[10]: https://docs.rs/getrandom/0.2.2/getrandom/ -[11]: https://docs.rs/getrandom/0.2.2/getrandom/#webassembly-support -[12]: https://rustwasm.github.io/docs/wasm-bindgen/ -[13]: https://rustwasm.github.io/docs/wasm-pack/tutorials/npm-browser-packages/template-deep-dive/cargo-toml.html#1-crate-type -[14]: https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries -[15]: https://doc.rust-lang.org/book/ch19-06-macros.html -[16]: https://doc.rust-lang.org/std/macro.include_str.html -[17]: https://docs.rust-embedded.org/embedonomicon/compiler-support.html#built-in-target -[18]: https://rustwasm.github.io/wasm-bindgen/ -[19]: https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/ -[20]: https://training.linuxfoundation.org/ 
diff --git a/sources/tech/20210415 Resolve systemd-resolved name-service failures with Ansible.md b/sources/tech/20210415 Resolve systemd-resolved name-service failures with Ansible.md deleted file mode 100644 index 392d087922..0000000000 --- a/sources/tech/20210415 Resolve systemd-resolved name-service failures with Ansible.md +++ /dev/null @@ -1,140 +0,0 @@ -[#]: subject: (Resolve systemd-resolved name-service failures with Ansible) -[#]: via: (https://opensource.com/article/21/4/systemd-resolved) -[#]: author: (David Both https://opensource.com/users/dboth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Resolve systemd-resolved name-service failures with Ansible -====== -Name resolution and the ever-changing networking landscape. -![People work on a computer server with devices][1] - -Most people tend to take name services for granted. They are necessary to convert human-readable names, such as `www.example.com`, into IP addresses, like `93.184.216.34`. It is easier for humans to recognize and remember names than IP addresses, and name services allow us to use names, and they also convert them to IP addresses for us. - -The [Domain Name System][2] (DNS) is the global distributed database that maintains the data required to perform these lookups and reverse lookups, in which the IP address is known and the domain name is needed. - -I [installed Fedora 33][3] the first day it became available in October 2020. One of the major changes was a migration from the ancient Name Service Switch (NSS) resolver to [systemd-resolved][4]. Unfortunately, after everything was up and running, I couldn't connect to or even ping any of the hosts on my network by name, although using IP addresses did work. - -### The problem - -I run my own name server using BIND on my network server, and all has been good for over 20 years. I've configured my DHCP server to provide the IP address of my name server to every workstation connected to my network, and that (along with a couple of backup name servers) is stored in `/etc/resolv.conf`. - -[Michael Catanzaro][5] describes how systemd-resolved is supposed to work, but the introduction of systemd-resolved caused various strange resolution problems on my network hosts. The symptoms varied depending upon the host's purpose. The trouble mostly presented as an inability to find IP addresses for hosts inside the network on most systems. On other systems, it failed to resolve any names at all. For example, even though nslookup sometimes returned the correct IP addresses for hosts inside and outside networks, ping would not contact the designated host, nor could I SSH to that same host. Most of the time, neither the lookup, the ping, nor SSH would work unless I used the IP address in the command. - -The new resolver allegedly has four operational modes, described in this [Fedora Magazine article][6]. None of the options seems to work correctly when the network has its own name server designed to perform lookups for internal and external hosts. - -In theory, systemd-resolved is supposed to fix some corner issues around the NSS resolver failing to use the correct name server when a host is connected to a VPN, which has become a common problem with so many more people working from home. - -The new resolver is supposed to use the fact that `/etc/resolv.conf` is now a symlink to determine how it is supposed to work based on which resolve file it is linked to. 
systemd-resolved's man page includes details about this behavior. The man page says that setting `/etc/resolv.conf` as a symlink to `/run/systemd/resolve/resolv.conf` should cause the new resolver to work the same way the old one does, but that didn't work for me. - -### My fix - -I have seen many options on the internet for resolving this problem, but the only thing that works reliably for me is to stop and disable the new resolver. First, I deleted the existing link for `resolv.conf`, copied my preferred `resolv.conf` file to `/run/NetworkManager/resolv.conf`, and added a new link to that file in `/etc`: - - -``` -# rm -f /etc/resolv.conf -# ln -s /run/NetworkManager/resolv.conf /etc/resolv.conf -``` - -These commands stop and disable the systemd-resolved service: - - -``` -# systemctl stop systemd-resolved.service ; systemctl disable systemd-resolved.service -Removed /etc/systemd/system/multi-user.target.wants/systemd-resolved.service. -Removed /etc/systemd/system/dbus-org.freedesktop.resolve1.service. -``` - -There's no reboot required. The old resolver takes over, and name services work as expected. - -### Make it easy with Ansible - -I set up this Ansible playbook to make the necessary changes if I install a new host or an update that reverts the resolver to systemd-resolved, or if an upgrade to the next release of Fedora reverts the resolver. The `resolv.conf` file you want for your network should be located in `/root/ansible/resolver/files/`: - - -``` -################################################################################ -#                              fixResolver.yml                                 # -#------------------------------------------------------------------------------# -#                                                                              # -# This playbook configures the old nss resolver on systems that have the new   # -# systemd-resolved service installed. It copies the resolv.conf file for my    # -# network to /run/NetworkManager/resolv.conf and places a link to that file    # -# as /etc/resolv.conf. It then stops and disables systemd-resolved which       # -# activates the old nss resolver.                                              # -#                                                                              # -#------------------------------------------------------------------------------# -#                                                                              # -# Change History                                                               # -# Date        Name         Version   Description                               # -# 2020/11/07  David Both   00.00     Started new code                          # -# 2021/03/26  David Both   00.50     Tested OK on multiple hosts.              
# -#                                                                              # -################################################################################ -\--- -\- name: Revert to old NSS resolver and disable the systemd-resolved service -  hosts: all_by_IP - -################################################################################ - -  tasks: -    - name: Copy resolv.conf for my network -      copy: -        src: /root/ansible/resolver/files/resolv.conf -        dest: /run/NetworkManager/resolv.conf -        mode: 0644 -        owner: root -        group: root - -    - name: Delete existing /etc/resolv.conf file or link -      file: -        path: /etc/resolv.conf -        state: absent - -    - name: Create link from /etc/resolv.conf to /run/NetworkManager/resolv.conf -      file: -        src: /run/NetworkManager/resolv.conf -        dest: /etc/resolv.conf -        state: link - -    - name: Stop and disable systemd-resolved -      systemd: -        name: systemd-resolved -        state: stopped -        enabled: no -``` - -Whichever Ansible inventory you use must have a group that uses IP addresses instead of hostnames. This command runs the playbook and specifies the name of the inventory file I use for hosts by IP address: - - -``` -`# ansible-playbook -i /etc/ansible/hosts_by_IP fixResolver.yml` -``` - -### Final thoughts - -Sometimes the best answer to a tech problem is to fall back to what you know. When systemd-resolved is more robust, I'll likely give it another try, but for now I'm glad that open source infrastructure allows me to quickly identify and resolve network problems. Using Ansible to automate the process is a much appreciated bonus. The important lesson here is to do your research when troubleshooting, and to never be afraid to void your warranty. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/systemd-resolved - -作者:[David Both][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices) -[2]: https://opensource.com/article/17/4/introduction-domain-name-system-dns -[3]: https://opensource.com/article/20/11/new-gnome -[4]: https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html -[5]: https://blogs.gnome.org/mcatanzaro/2020/12/17/understanding-systemd-resolved-split-dns-and-vpn-configuration/ -[6]: https://fedoramagazine.org/systemd-resolved-introduction-to-split-dns/ diff --git a/sources/tech/20210416 Notes on building debugging puzzles.md b/sources/tech/20210416 Notes on building debugging puzzles.md deleted file mode 100644 index 593526933a..0000000000 --- a/sources/tech/20210416 Notes on building debugging puzzles.md +++ /dev/null @@ -1,228 +0,0 @@ -[#]: subject: (Notes on building debugging puzzles) -[#]: via: (https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/) -[#]: author: (Julia Evans https://jvns.ca/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Notes on building debugging puzzles -====== - -Hello! 
This week I started building some choose-your-own-adventure-style puzzles about debugging networking problems. I’m pretty excited about it and I’m trying to organize my thoughts so here’s a blog post! - -The two I’ve made so far are: - - * [The Case of the Connection Timeout][1] - * [The Case of the Slow Website][2] - - - -I’ll talk about how I came to this idea, design decisions I made, how it works, what I think is hard about making these puzzles, and some feedback I’ve gotten so far. - -### why this choose-your-own-adventure format? - -I’ve been thinking a lot about DNS recently, and how to help people troubleshoot their DNS issues. So on Tuesday I was sitting in a park with a couple of friends chatting about this. - -We started out by talking about the idea of flowcharts (“here’s a flowchart that will help you debug any DNS problem”). I’ve don’t think I’ve ever seen a flowchart that I found helpful in solving a problem, so I found it really hard to imagine creating one – there are so many possibilities! It’s hard to be exhaustive! It would be disappointing if the flowchart failed and didn’t give you your answer! - -But then someone mentioned choose-your-own-adventure games, and I thought about something I **could** relate to – debugging a problem together with someone who knows things that I don’t! - -So I thought – what if I made a choose-your-own-adventure game where you’re given the symptoms of a specific networking bug, and you have to figure out how to diagnose it? - -I got really excited about this and immediately went home and started putting something together in Twine. - -Here are some design decisions I’ve made so far. Some of them might change. - -### design decision: the mystery has 1 specific bug - -Each mystery has one very specific bug, ideally a bug that I’ve actually run into in the past. Your mission is to figure out the cause of the bug and fix it. - -### design decision: show people the actual output of the tools they’re using - -All of the bugs I’m starting with are networking issues, and the way you solve them is to use various tools (like dig, curl, tcpdump, ping, etc) to get more information. - -Originally I thought of writing the game like this: - - 1. You choose “Use curl” - 2. It says “You run ``. You see that the output tells you ``“ - - - -But I realized that immediately interpreting the output of a command for someone is extremely unrealistic – one of the biggest problems with using some of these command line networking tools is that their output is hard to interpret! - -So instead, the puzzle: - - 1. Asks what tool you want to use - 2. Tells you what command they ran, and shows you the output of the command - 3. Asks you to interpret the output (you type it in in a freeform text box) - 4. Tells you the “correct” interpretation of the output and shows you how you could have figured it out (by highlighting the relevant parts of the output) - - - -This really lines up with how I’ve learned about these tools in real life – I don’t learn about how to read all of the output all at once, I learn it in bits and pieces by debugging real problems. - -### design decision: make the output realistic - -This is sort of obvious, but in order to give someone output to help them diagnose a bug, the output needs to be a realistic representation of what would actually happen. - -I’ve been doing this by reproducing the bug in a virtual machine (or on my laptop), and then running the commands in the same way I would to fix the bug in real life and paste their output. 
- -Reproducing the bug isn’t always easy, but once I’ve reproduced it it makes building the puzzle much more straightforward than trying to imagine what tcpdump would theoretically output in a given situation. - -### design decision: let people collect “knowledge” throughout the mystery - -When I debug, I think about it as slowly collecting new pieces of information as I go. So in this mystery, every time you figure out a new piece of information, you get a little box that looks like this: - -![][3] - -And in the sidebar, you have a sort of “inventory” that lists all of the knowledge you’ve collected so far. It looks like this: - -![][4] - -### design decision: you always figure out the bug - -My friend Sumana pointed out an interesting difference between this and normal choose-your-own-adventure games: in the choose-your-own-adventure games I grew up reading, you lose a lot! You make the wrong choice, and you fall into a pit and die. - -But that’s not how debugging works in my experience. When debugging, if you make a “wrong” choice (for example by making a guess about the bug that isn’t correct), there’s no cost except your time! So you can always go back, keep trying, and eventually figure out what’s going on. - -I think that “you always win” is sort of realistic in the sense that with any bug you can always figure out what the bug is, given: - - 1. enough time - 2. enough understanding of how the systems you’re debugging work - 3. tools that can give you information about what’s happening - - - -I’m still not sure if I want all bugs to result in “you fix the bug!” – sometimes bugs are impossible to fix if they’re caused by a system that’s outside of your control! One really interesting idea Sumana had was to have the resolution sometimes be “you submit a really clear and convincing bug report and someone else fixes it”. - -### design decision: include red herrings sometimes - -In debugging in real life, there are a lot of red herrings! Sometimes you see something that looks weird, and you spend three hours looking into it, and then you realize that wasn’t it at all. - -One of the mysteries right now has a red herring, and the way I came up with it was that I ran a command and I thought “wait, the output of that is pretty confusing, it’s not clear how to interpret that”. So I just included the confusing output in the mystery and said “hey, what do you think it means?”. - -One thing I like about including red herrings is that it lets me show how you can prove what the cause of the bug **isn’t** which is even harder than proving what the cause of the bug is. - -### design decision: use free form text boxes - -Here’s an example of what it looks like to be asked to interpret some output. You’re asked a question and you fill in the answer in a text box. - -![][5] - -I think I like using free form text boxes instead of multiple choice because it feels a little more realistic to me – in real life, when you see some output like this, you don’t get a list of choices! - -### design decision: don’t do anything with what you enter in the text box - -No matter what you enter in the text box (or if you say “I don’t know”), exactly the same thing happens. It’ll send you to a page that tells you the answer and explains the reasoning. So you have to think about what you think the answer might be, but if you get it “wrong”, it’s no big deal. 
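-
-For what it’s worth, in SugarCube markup this really can be just a text box followed by a single link to the answer passage, something like the sketch below (the variable and passage names are made up for illustration):
-
-```
-What do you think this output means?
-
-<<textbox "$guess" "">>
-
-/* whatever they type, the same link leads to the same answer passage */
-[[See the answer->Interpret the tcpdump output]]
-```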
- -The reason I’m doing this is basically “it’s very easy to implement”, but I think there’s maybe also something nice about it for the person using it – if you don’t know, it’s totally okay! You can learn something new and keep moving! You don’t get penalized for your “wrong” answers in any way. - -### design decision: the epilogue - -At the end of the game, there’s a very short epilogue where it talks about how likely you are to run into this bug in real life / how realistic this is. I think I need to expand on this to answer other questions people might have had while going through it, but I think it’s going to be a nice place to wrap up loose ends. - -### how long each one takes to play: 5 minutes - -People seem to report so far that each mystery takes about 5 minutes to play, which feels reasonable to me. I think I’m most likely to extend this by making lots of different 5-minute mysteries rather than making one really long mystery, but we’ll see. - -### what’s hard: reproducing the bug - -Figuring out how to reproduce a given bug is actually not that easy – I think I want to include some pretty weird bugs, and setting up a computer where that bug is happening in a realistic way isn’t actually that easy. I think this just takes some work and creativity though. - -### what’s hard: giving realistic options - -The most common critique I got was of the form “In this situation I would have done X but you didn’t include X as an option”. Some examples of X: “ping the problem host”, “ssh to the problem host and run tcpdump there”, “look at the log file”, “use netstat”, etc. - -I think it’s possible to make a lot of progress on this with playtesting – if I playtest a mystery with a bunch of people and ask them to tell me when there was an option they wish they had, I can add that option pretty easily! - -Because I can actually reproduce the bug, providing an option like “run netstat” is pretty straightforward – all I have to do is go to the VM where I’ve reproduced the bug, run `netstat`, and put the output into the game. - -A couple of people also said that the game felt too “linear” or didn’t branch enough. I’m curious about whether that will naturally come out of having more realistic options. - -### how it works: it’s a Twine game! - -I felt like Twine was the obvious choice for this even though I’d never used it before – I’d heard so many good things about it over the years. - -You can see all of the source code for The Case of the Connection Timeout in [connection-timeout.twee][6] and [common.twee][7], which has some shared code between all the games. - -A few notes about using Twine: - - * I’m using SugarCube, the [sugarcube docs are very good][8] - * I’m using [tweego][9] to translate the `.twee` files in to a HTML page. I started out using the visual Twine editor to do my editing but switched to `tweego` pretty quickly because I wanted to use version control and have a more text-based workflow. - * The final output is one big HTML file that includes all the images and CSS and Javascript inline. The final HTML files are about 800K which seems reasonable to me. - * I base64-encode all the images in the game and include them inline in the file - * The [Twine wiki][10] and forums have a lot of great information and between the Twine wiki, the forums, and the Sugarcube docs I could pretty easily find answers to all my questions. - - - -I used pretty much the exact Twine workflow from Em Lazerwalker’s great post [A Modern Developer’s Workflow For Twine][11]. 
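-
-(In case you’re curious what the build step looks like: with the two `.twee` files above, the tweego invocation is a single command, something like this sketch, where the output file name is just an example.)
-
-```
-# compile the twee sources into one self-contained HTML file
-tweego -o connection-timeout.html common.twee connection-timeout.twee
-```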
- -I won’t explain how Twine works because it has great documentation and it would make this post way too long. - -### some feedback so far - -I posted this on Twitter and asked for feedback. Some common pieces of feedback I got: - -things people liked: - - * maybe 180 “I love this, this was so fun, I learned something new” - * A bunch of people specifically said that they liked learning how to interpret tcpdump’s output format - * A few people specifically mentioned that they liked the “what you know” list and the mechanic of hunting for clues and how it breaks down the debugging process. - - - -some suggestions for improvements: - - * Like I mentioned before, lots of people said “I wanted to try X but it wasn’t an option” - * One of the puzzles had a resolution to the bug that some people found unsatisfying (they felt it was more of a workaround than a fix, which I agreed with). I updated it to add a different resolution that was more satisfying. - * There were some technical issues (it could be more mobile-friendly, one of the images was hard to read, I needed to add a “Submit” button to one of the forms) - * Right now the way the text boxes work is that no matter what you type, the exact same thing happens. Some people found this a bit confusing, like “why did it act like I answered correctly if my answer was wrong”. This definitely needs some work. - - - -### some goals of this project - -Here’s what I think the goals of this project are: - - 1. help people learn about **tools** (like tcpcdump, dig, and curl). How do you use each tool? What questions can they be used to answer? How do you interpret their output? - 2. help people learn about **bugs**. There are some super common bugs that we run into over and over, and once you see a bug once it’s easier to recognize the same bug in the future. - 3. help people get better at the **debugging process** (gathering data, asking questions) - - - -### what experience is this trying to imitate? - -Something I try to keep in mind with all my projects is – what real-life experience does this reproduce? For example, I kind of think of my zines as being the experience “your coworker explains something to you in a really clear way”. - -I think the experience here might be “you’re debugging a problem together with your coworker and they’re really knowledgeable about the tools you’re using”. - -### that’s all! - -I’m pretty excited about this project right now – I’m going to build at least a couple more of these and see how it goes! If things go well I might make this into my first non-zine thing for sale – maybe it’ll be a collection of 12 small debugging mysteries! We’ll see. 
- --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/ - -作者:[Julia Evans][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://jvns.ca/ -[b]: https://github.com/lujun9972 -[1]: https://mysteries.wizardzines.com/connection-timeout.html -[2]: https://mysteries.wizardzines.com/slow-website.html -[3]: https://jvns.ca/images/newinfo.png -[4]: https://jvns.ca/images/sidebar-mystery.png -[5]: https://jvns.ca/images/textboxes.png -[6]: https://github.com/jvns/twine-stories/blob/2914c4326e3ff760a0187b2cfb15161928d6335b/connection-timeout.twee -[7]: https://github.com/jvns/twine-stories/blob/2914c4326e3ff760a0187b2cfb15161928d6335b/common.twee -[8]: https://www.motoslave.net/sugarcube/2/docs/ -[9]: https://www.motoslave.net/tweego/ -[10]: https://twinery.org/wiki/ -[11]: https://dev.to/lazerwalker/a-modern-developer-s-workflow-for-twine-4imp diff --git a/sources/tech/20210416 Use the DNF local plugin to speed up your home lab.md b/sources/tech/20210416 Use the DNF local plugin to speed up your home lab.md deleted file mode 100644 index cf802d3894..0000000000 --- a/sources/tech/20210416 Use the DNF local plugin to speed up your home lab.md +++ /dev/null @@ -1,436 +0,0 @@ -[#]: subject: (Use the DNF local plugin to speed up your home lab) -[#]: via: (https://fedoramagazine.org/use-the-dnf-local-plugin-to-speed-up-your-home-lab/) -[#]: author: (Brad Smith https://fedoramagazine.org/author/buckaroogeek/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Use the DNF local plugin to speed up your home lab -====== - -![][1] - -Photo by [Sven Hornburg][2] on [Unsplash][3] - -### Introduction - -If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux then you might benefit from the [DNF local][4] plugin. An example of someone who would benefit from the DNF local plugin would be an enthusiast who is running a cluster of Raspberry Pis. Another example would be someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions. It accomplishes this by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines will see a significant performance improvement when running _dnf_ with the DNF local plugin enabled. - -I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in [a 2018 fedoraforum.org post][5]. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux and also on several container-based services, I winced with every DNF update on each Pi or each container that downloaded a duplicate set of rpms across my expensive internet connection. In order to improve this situation, I searched for a solution that would cache rpms for local reuse. I wanted something that would not require any changes to repository configuration files on every machine. I also wanted it to continue to use the network of Fedora Linux mirrors. I didn’t want to use a single mirror for all updates. 
- -### Prior art - -An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set – create a private Fedora Linux mirror or set up a caching proxy. - -Fedora provides guidance on setting up a [private mirror][6]. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes. - -The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit repository definitions stored in _/etc/yum.repo.d_ on each virtual and physical machine or container to use the same mirror. Second, I would need to use _http_ and not _https_ connections which would introduce a security risk. - -After reading Glenn’s 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin. - -### About the DNF local plugin - -The [online documentation][4] provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use _dnf_ as before. _dnf_ will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source – an official Fedora Linux repository or a third-party RPM repository – and make them available for the next run of _dnf_. - -### Install and configure the DNF local plugin - -Install the plugin using _dnf_. The _createrepo_c_ packages will be installed as a dependency. The latter is used, if needed, to create the local repository. - -``` -sudo dnf install python3-dnf-plugin-local -``` - -The plugin configuration file is stored at /_etc/dnf/plugins/local.conf_. An example copy of the file is provided below. The only change required is to set the _repodir_ option. The _repodir_ option defines where on the local filesystem the plugin will keep the RPM repository. - -``` -[main] -enabled = true -# Path to the local repository. -# repodir = /var/lib/dnf/plugins/local - -# Createrepo options. See man createrepo_c -[createrepo] -# This option lets you disable createrepo command. This could be useful -# for large repositories where metadata is priodically generated by cron -# for example. This also has the side effect of only copying the packages -# to the local repo directory. -enabled = true - -# If you want to speedup createrepo with the --cachedir option. Eg. -# cachedir = /tmp/createrepo-local-plugin-cachedir - -# quiet = true - -# verbose = false -``` - -Change _repodir_ to the filesystem directory where you want the RPM repository stored. For example, change _repodir_ to _/srv/repodir_ as shown below. - -``` -... -# Path to the local repository. -# repodir = /var/lib/dnf/plugins/local -repodir = /srv/repodir -... 
-``` - -Finally, create the directory if it does not already exist. If this directory does not exist, _dnf_ will display some errors when it first attempts to access the directory. The plugin will create the directory, if necessary, despite the initial errors. - -``` -sudo mkdir -p /srv/repodir -``` - -Repeat this process on any virtual machine or container that you want to share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below. - -### How to use the DNF local plugin - -After you have installed the plugin, you do not need to change how you use _dnf_. The plugin will cause a few additional steps to run transparently behind the scenes whenever _dnf_ is called. After _dnf_ determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After _dnf_ has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repository’s metadata. The downloaded rpms will then be available in the local repository for the next _dnf_ client. - -There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is to multiple _dnf_ clients updating the repository metadata concurrently. I therefore run _dnf_ from multiple machines serially rather than in parallel. This may not be a real concern but I want to be cautious. - -The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the _Code_ button at to clone the repository or to download a zip file. - -#### Use case 1: networked physical machines - -The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from. Which file system you will use will probably be influenced by the existing devices on your network. - -For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up a NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named _nfs-dnf_ and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an [informative post][7] that covers both client and server configurations on Red Hat Enterprise Linux. 
They translate well to Fedora Linux. - -I configured the NFS client on each of my Fedora Linux machines using the steps shown below. In the below example, _quga.lan_ is the hostname of my NAS device. - -Install the NFS client on each Fedora Linux machine. - -``` -$ sudo dnf install nfs-utils -``` - -Get the list of exports from the NFS server: - -``` -$ showmount -e quga.lan -Export list for quga.lan: -/volume1/nfs-dnf pi*.lan -``` - -Create a local directory to be used as a mount point on the Fedora Linux client: - -``` -$ sudo mkdir -p /srv/repodir -``` - -Mount the remote file system on the local directory. See _man mount_ for more information and options. - -``` -$ sudo mount -t nfs -o vers=4 quga.lan:/nfs-dnf /srv/repodir -``` - -The DNF local plugin will now work until as long as the client remains up. If you want the NFS export to be automatically mounted when the client is rebooted, then you must to edit /_etc/fstab_ as demonstrated below. I recommend making a backup of _/etc/fstab_ before editing it. You can substitute _vi_ with _nano_ or another editor of your choice if you prefer. - -``` -$ sudo vi /etc/fstab -``` - -Append the following line at the bottom of _/etc/fstab_, then save and exit. - -``` -quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0 -``` - -Finally, notify systemd that it should rescan _/etc/fstab_ by issuing the following command. - -``` -$ sudo systemctl daemon-reload -``` - -NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use _firewall-cmd_ to allow NFS-related network traffic through each Fedora Linux machine’s firewall. - -``` -$ sudo firewall-cmd --permanent --zone=public --allow-service=nfs -``` - -As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem. - -In the _rpi-example_ directory of the github repository I’ve included an example Ansible playbook (_configure.yaml_) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (_update.yaml)_ that runs a DNF update across all devices. See this [recent post in Fedora Magazine][8] for more information about Ansible. - -To use the provided Ansible examples, first update the inventory file (_inventory_) to include the list of Fedora Linux machines on your network that you want to managed. Next, install two Ansible roles in the roles subdirectory (or another suitable location). - -``` -$ ansible-galaxy install --roles-path ./roles -r requirements.yaml -``` - -Run the _configure.yaml_ playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via _/etc/fstab_ but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information. - -``` -$ ansible-playbook -i inventory configure.yaml -``` - -Finally, Ansible can be configured to execute _dnf update_ on all the systems serially by using the _update.yaml_ playbook. 
- -``` -$ ansible-playbook -i inventory update.yaml -``` - -Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics. - -#### Use case 2: virtual machines running on the same host - -Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides [Fedora Cloud][9] base images for use as virtual machines. [Vagrant][10] is a tool for managing virtual machines. Fedora Magazine has [instructions][11] on how to set up and configure Vagrant. Add the following line in your _.bashrc_ (or other comparable shell configuration file) to inform Vagrant to use _libvirt_ automatically on your workstation instead of the default VirtualBox. - -``` -export VAGRANT_DEFAULT_PROVIDER=libvirt -``` - -In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available): - -``` -$ vagrant init fedora/33-cloud-base -``` - -This creates a Vagrant file in the project directory. Edit the Vagrant file to look like the example below. DNF will likely fail with the default memory settings for _libvirt_. So the example Vagrant file below provides additional memory to the virtual machine. The example below also shares the host _/srv/repodir_ with the virtual machine. The shared directory will have the same path in the virtual machine – _/srv/repodir_. The Vagrant file can be downloaded from [github][12]. - -``` -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# define repo directory; same name on host and vm -REPO_DIR = "/srv/repodir" - -Vagrant.configure("2") do |config| - - config.vm.box = "fedora/33-cloud-base" - - config.vm.provider :libvirt do |v| - v.memory = 2048 - # v.cpus = 2 - end - - # share the local repository with the vm at the same location - config.vm.synced_folder REPO_DIR, REPO_DIR - - # ansible provisioner - commented out by default - # the ansible role is installed into a path defined by - # ansible.galaxy_roles-path below. The extra_vars are ansible - # variables passed to the playbook. - # -# config.vm.provision "ansible" do |ansible| -# ansible.verbose = "v" -# ansible.playbook = "ansible/playbook.yaml" -# ansible.extra_vars = { -# repo_dir: REPO_DIR, -# dnf_update: false -# } -# ansible.galaxy_role_file = "ansible/requirements.yaml" -# ansible.galaxy_roles_path = "ansible/roles" -# end -end -``` - -Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine: - -``` -$ vagrant ssh -``` - -When you are at a command prompt in the virtual machine, repeat the steps from the _Install and configure the DNF local plugin_ section above. The Vagrant configuration file should have already made _/srv/repodir_ from the host available in the virtual machine at the same path. - -If you are working with several virtual machines or repeatedly re-initiating a new virtual machine then some simple automation becomes useful. As with the network example above, I use ansible to automate this process. - -In the [vagrant-example directory][12] on github, you will see an _ansible_ subdirectory. Edit the Vagrant file and remove the comment marks under the _ansible provisioner_ section. Make sure the _ansible_ directory and its contents (_playbook.yaml_, _requirements.yaml_) are in the project directory. - -After you’ve uncommented the lines, the _ansible provisioner_ section in the Vagrant file should look similar to the following: - -``` -.... 
- # ansible provisioner - # the ansible role is installed into a path defined by - # ansible.galaxy_roles-path below. The extra_vars are ansible - # variables passed to the playbook. - # - config.vm.provision "ansible" do |ansible| - ansible.verbose = "v" - ansible.playbook = "ansible/playbook.yaml" - ansible.extra_vars = { - repo_dir: REPO_DIR, - dnf_update: false - } - ansible.galaxy_role_file = "ansible/requirements.yaml" - ansible.galaxy_roles_path = "ansible/roles" - end -.... -``` - -Ansible must be installed (_sudo dnf install ansible_). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use _sudo dnf install ansible-base ansible-collections*_). - -If you run Vagrant now (or reprovision: _vagrant provision_), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can _vagrant ssh_ into the virtual machine to verify that the plugin is installed and to verify that rpms are coming from the DNF local repository instead of a mirror. - -#### Use case 3: container builds - -Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use _dnf_ to update the container during the development/build process. Application development is iterative and can result in repeated executions of _dnf_ pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers _podman_ and _buildah_ for managing the container build, run and test life cycle. See the Fedora Magazine post [_How to build Fedora container images_][13] for more about managing containers on Fedora Linux. - -Note that the _fedora_minimal_ container uses _microdnf_ by default which does not support plugins. The _fedora_ container, however, uses _dnf_. - -A script that uses _buildah_ and _podman_ to create a custom Fedora Linux image named _myFedora_ is provided below. The script creates a mount point for the local repository at _/srv/repodir_. The below script is also available in the [_container-example_][14] directory of the github repository. It is named _base-image-build.sh_. - -``` -#!/bin/bash -set -x - -# bash script that creates a 'myfedora' image from fedora:latest. -# Adds dnf-local-plugin, points plugin to /srv/repodir for local -# repository and creates an external mount point for /srv/repodir -# that can be used with a -v switch in podman/docker - -# custom image name -custom_name=myfedora - -# scratch conf file name -tmp_name=local.conf - -# location of plugin config file -configuration_name=/etc/dnf/plugins/local.conf - -# location of repodir on container -container_repodir=/srv/repodir - -# create scratch plugin conf file for container -# using repodir location as set in container_repodir -cat <<EOF > "$tmp_name" -[main] -enabled = true -repodir = $container_repodir -[createrepo] -enabled = true -# If you want to speedup createrepo with the --cachedir option. Eg.
-# cachedir = /tmp/createrepo-local-plugin-cachedir -# quiet = true -# verbose = false -EOF - -# pull registry.fedoraproject.org/fedora:latest -podman pull registry.fedoraproject.org/fedora:latest - -#start the build -mkdev=$(buildah from fedora:latest) - -# tag author -buildah config --author "$USER" "$mkdev" - -# install dnf-local-plugin, clean -# do not run update as local repo is not operational -buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c -buildah run "$mkdev" -- dnf -y clean all - -# create the repo dir -buildah run "$mkdev" -- mkdir -p "$container_repodir" - -# copy the scratch plugin conf file from host -buildah copy "$mkdev" "$tmp_name" "$configuration_name" - -# mark container repodir as a mount point for host volume -buildah config --volume "$container_repodir" "$mkdev" - -# create myfedora image -buildah commit "$mkdev" "localhost/$custom_name:latest" - -# clean up working image -buildah rm "$mkdev" - -# remove scratch file -rm $tmp_name -``` - -Given normal security controls for containers, you usually run this script with _sudo_ and when you use the _myFedora_ image in your development process. - -``` -$ sudo ./base_image_build.sh -``` - -To list the images stored locally and see both _fedora:latest_ and _myfedora:latest_ run: - -``` -$ sudo podman images -``` - -To run the _myFedora_ image as a container and get a bash prompt in the container run: - -``` -$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash -``` - -Podman also allows you to run containers rootless (as an unprivileged user). Run the script without _sudo_ to create the _myfedora_ image and store it in the unprivileged user’s image repository: - -``` -$ ./base-image-build.sh -``` - -In order to run the _myfedora_ image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to _/srv/repodir_ on the host. - -``` -$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash -``` - -By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on them will be faster. - -**Bonus Points** – for even better _dnf_ performance, Dan Walsh describes how to share _dnf_ metadata between a host and container using a file overlay (see . redhat.com/sysadmin/speeding-container-buildah). This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The _dnf_ metadata cache includes metadata for the local repository under the name __dnf_local_. - -I have created a container file that uses _buildah_ to do a _dnf_ update on a _fedora:latest_ image. I’ve also created a container file to repeat the process using a _myfedora_ image. There are 53 MB and 111 rpms in the _dnf_ update. The only difference between the images is that _myfedora_ has the DNF local plugin installed. Using the local repository cut the elapse time by more than half in this example and saves 53MB of internet bandwidth consumption. - -With the _fedora:latest_ image the command and elapsed time is: - -``` -# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O - f Containerfile.3 . -128 Elapsed Time: 0:48.06 -``` - -With the _myfedora_ image the command and elapsed time is less than half of the base run. The **:Z** on the **-v** volume below is required when running the container on a SELinux-enabled host. 
- -``` -# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 . -133 Elapsed Time: 0:19.75 -``` - -### Repository management - -The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The _dnf repomanage_ command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the _dnf repomanage_ command for Fedora Magazine. - -Finally, I keep the _x86_64_ rpms for my workstation, virtual machines and containers in a local repository that is separate from the _aarch64_ local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures. - -### An important note about Fedora Linux system upgrades - -Glenn Johnson has more than four years experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends that the _enabled_ attribute in the plugin configuration file _/etc/dnf/plugins/local.conf_ be set to **false** before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, a NFS server might export _/volume1/dnf-repo/33_ for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org – an independent online resource for Fedora Linux users. - -### Summary - -The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of rpms multiple times which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation! - -Thanks to Glenn Johnson for his post on the DNF local plugin which started this journey, and for his helpful reviews of this post. 
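-
-For reference, the pre-upgrade change Glenn recommends is a one-line edit to the plugin configuration file shown earlier. A sketch, using the same path and repodir as above (flip the value back to true once the upgrade completes):
-
-```
-# /etc/dnf/plugins/local.conf
-[main]
-# set to false before a release upgrade, back to true afterwards
-enabled = false
-repodir = /srv/repodir
-```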
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/use-the-dnf-local-plugin-to-speed-up-your-home-lab/ - -作者:[Brad Smith][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/buckaroogeek/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/dnf-local-816x345.jpg -[2]: https://unsplash.com/@_s9h8_?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://dnf-plugins-core.readthedocs.io/en/latest/local.html -[5]: https://forums.fedoraforum.org/showthread.php?318812-dnf-plugin-local -[6]: https://fedoraproject.org/wiki/%20Infrastructure/Mirroring#How_can_someone_make_a_private_mirror -[7]: https://www.redhat.com/sysadmin/nfs-server-client -[8]: https://fedoramagazine.org/using-ansible-setup-workstation/ -[9]: https://alt.fedoraproject.org/cloud/ -[10]: https://vagrantup.com/ -[11]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/ -[12]: https://github.com/buckaroogeek/dnf-local-plugin-examples/tree/main/vagrant-example -[13]: https://fedoramagazine.org/how-to-build-fedora-container-images/ -[14]: https://github.com/buckaroogeek/dnf-local-plugin-examples/tree/main/container-example diff --git a/sources/tech/20210416 WASI, Bringing WebAssembly Way Beyond Browsers.md b/sources/tech/20210416 WASI, Bringing WebAssembly Way Beyond Browsers.md deleted file mode 100644 index 83b6c0ba38..0000000000 --- a/sources/tech/20210416 WASI, Bringing WebAssembly Way Beyond Browsers.md +++ /dev/null @@ -1,87 +0,0 @@ -[#]: subject: (WASI, Bringing WebAssembly Way Beyond Browsers) -[#]: via: (https://www.linux.com/news/wasi-bringing-webassembly-way-beyond-browsers/) -[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -WASI, Bringing WebAssembly Way Beyond Browsers -====== - -_By Marco Fioretti_ - -[WebAssembly (Wasm)][1] is a binary software format that all browsers can run directly, [safely][2] and at near-native speeds, on any operating system (OS). Its biggest promise, however, is to eventually work in the same way [everywhere][3], from IoT devices and edge servers, to mobile devices and traditional desktops. This post introduces the main interface that should make this happen. The next post in this series will describe some of the already available, real-world implementations and applications of the same interface. - -**What is portability, again?** - -To be safe and portable, software code needs, as a minimum:  - - 1. guarantees that users and programs can do only what they actually _have_ the right to do, and only do it without creating problems to other programs or users - 2. standard, platform-independent methods to declare and apply those guarantees - - - -Traditionally, these services are provided by libraries of “system calls” for each language, that is functions with which a software program can ask its host OS to perform some low-level, or sensitive task. 
When those libraries follow standards like [POSIX][4], any compiler can automatically combine them with the source code, to produce a binary file that can run on _some_ combination of OSes and processors. - -**The next level: BINARY compatibility** - -System calls only make _source code_ portable across platforms. As useful as they are, they still force developers to generate platform-specific executable files, all too often from more or less different combinations of source code. - -WebAssembly instead aims to get to the next level: use any language you want, then compile it once, to produce one binary file that will _just run_, securely, in any environment that recognizes WebAssembly.  - -**What Wasm does not need to work outside browsers** - -Since WebAssembly already “compiles once” for all major browsers, the easiest way to expand its reach may seem to create, for every target environment, a full virtual machine (runtime) that provides everything a Wasm module expects from Firefox or Chrome. - -Work like that however would be _really_ complex, and above all simply unnecessary, if not impossible, in many cases (e.g. on IoT devices). Besides, there are better ways to secure Wasm modules than dumping them in one-size-fits-all sandboxes as browsers do today. - -**The solution? A virtual operating system and runtime** - -Fully portable Wasm modules cannot happen until, to give one practical example, accesses to webcams or websites can be written only with system calls that generate platform-dependent machine code. - -Consequently, the most practical way to have such modules, from _any_ programming language, seems to be that of the [WebAssembly System interface (WASI) project][5]: write and compile code for only _one, obviously virtual,_ but complete operating system. - -On one hand WASI gives to all the developers of [Wasm runtimes][6] one single OS to emulate. On the other, WASI gives to all programming languages one set of system calls to talk to that same OS. - -In this way, even if you loaded it on ten different platforms, a _binary_ Wasm module calling a certain WASI function would still get – from the runtime that launched it – a different binary object every time. But since all those objects would interact with that single Wasm module in exactly the same way, it would not matter! - -This approach would work also in the first use case of WebAssembly, that is with the JavaScript virtual machines inside web browsers. To run Wasm modules that use WASI calls, those machines should only load the JavaScript versions of the corresponding libraries. - -This OS-level emulation is also more secure than simple sandboxing. With WASI, any runtime can implement different versions of each system call – with different security privileges – as long as they all follow the specification. Then that runtime could place every instance of every Wasm module it launches into a separate sandbox, containing only the smallest, and least privileged combination of functions that that specific instance really needs. - -This “principle of least privilege”, or “[capability-based security model][7]“, is everywhere in WASI. A WASI runtime can pass into a sandbox an instance of the “open” system call that is only capable of opening the specific files, or folders, that were _pre-selected_ by the runtime itself. This is a more robust, much more granular control on what programs can do than it would be possible with traditional file permissions, or even with chroot systems. 
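-
-To make that concrete, here is a sketch of how the capability model looks from the command line with one of the standalone runtimes. The runtime, directory and module name below are only examples; the point is that the module is handed exactly one preopened directory and can reach nothing else on the host.
-
-```
-# grant the module access to a single directory; the rest of the host stays invisible
-wasmtime --dir=/srv/data my_module.wasm
-```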
- -Coding-wise, functions for things like basic management of files, folders, network connections or time are needed by almost any program. Therefore the corresponding WASI interfaces are designed as similar as possible to their POSIX equivalents, and all packaged into one “wasi-core” module, that every WASI-compliant runtime must contain. - -A version of the [libc][8] standard C library, rewritten usi wasi-core functions, is already available and, [according to its developers][9], already “sufficiently stable and usable for many purposes”.  - -All the other virtual interfaces that WASI includes, or will include over time, are standardized and packaged as separate modules,  without forcing any runtime to support all of them. In the next article we will see how some of these WASI components are already used today. - -The post [WASI, Bringing WebAssembly Way Beyond Browsers][10] appeared first on [Linux Foundation – Training][11]. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/wasi-bringing-webassembly-way-beyond-browsers/ - -作者:[Dan Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/ -[b]: https://github.com/lujun9972 -[1]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/ -[2]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/ -[3]: https://webassembly.org/docs/non-web/ -[4]: https://www.gnu.org/software/libc/manual/html_node/POSIX.html#POSIX -[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/ -[6]: https://github.com/appcypher/awesome-wasm-runtimes -[7]: https://github.com/WebAssembly/WASI/blob/main/docs/WASI-overview.md#capability-oriented -[8]: https://en.wikipedia.org/wiki/C_standard_library -[9]: https://github.com/WebAssembly/wasi-libc -[10]: https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/ -[11]: https://training.linuxfoundation.org/ diff --git a/sources/tech/20210417 How I digitized my CD collection with open source tools.md b/sources/tech/20210417 How I digitized my CD collection with open source tools.md deleted file mode 100644 index 608e564933..0000000000 --- a/sources/tech/20210417 How I digitized my CD collection with open source tools.md +++ /dev/null @@ -1,101 +0,0 @@ -[#]: subject: (How I digitized my CD collection with open source tools) -[#]: via: (https://opensource.com/article/21/4/digitize-cd-open-source-tools) -[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How I digitized my CD collection with open source tools -====== -Clean off your shelves by ripping your CDs and tagging them for easy -playback across your home network. -![11 CDs in a U shape][1] - -The restrictions on getting out and about during the pandemic occasionally remind me that time is slipping by—although some days, "slipping" doesn't quite feel like the right word. But it also reminds me there are more than a few tasks around the house that can be great for restoring the sense of accomplishment that so many of us have missed. 
- -One such task, in my home anyway, is converting our CD collection to [FLAC][2] and storing the files on our music server's hard drive. Considering we don't have a huge collection (at least, by some people's standards), I'm surprised we still have so many CDs awaiting conversion—even excluding all the ones that fail to impress and therefore don't merit the effort. - -As for that ticking clock—who knows how much longer the CD player will continue working or the CD-ROM drive in the old computer will remain in service? Plus, I'd rather have the CDs shelved in the basement storage instead of cluttering up the family room. - -So, here I sit on a rainy Sunday afternoon with a pile of classical music CDs, ready to go… - -### Ripping CDs - -I like using the open source [Asunder CD ripper][3]. It's a simple and straightforward tool that uses the [cdparanoia][4] tool to handle the conversion chores. This image shows it working away on an album. - -![Asunder][5] - -(Chris Hermansen, [CC BY-SA 4.0][6]) - -When I fired up Asunder, I was surprised that its Compact Disc Database (CDDB) lookup feature didn't seem to find any matching info. A quick online search led me to a Linux Mint forum discussion that [offered alternatives][7] for the freedb.freedb.org online service, which apparently is no longer working. I first tried using gnudb.gnudb.org with no appreciably better result; plus, the suggested link to gnudb.org/howto.php upset Firefox due to an expired certificate. - -Next, I tried the freedb.freac.org service (note that it is on port 80, not 8880, as was freedb.freedb.org), which worked well for me… with one notable exception: The contributed database entries don't seem to understand the difference between "artist" (or "performer") and "composer." This isn't a huge problem for popular music, but having JS Bach as the "artist" seems a bit incongruous since he never made it to a recording studio, as far as I know. - -Quite a few of the tracks I converted identified the composer in the track title, but if there's one thing I've learned, your metadata can never be too correct. This leads me to the issue of tag editing, or curating the collection. - -Oh wait, there's another reason for tag editing, too, at least when using Asunder to rip: getting the albums' cover images. - -### Editing tags and curating the collection - -My open source go-to tool for [music tag editing continues to be EasyTag][8]. I use it a lot, both for downloads I purchase (it's amazing how messed up their tags can be, and some download services offer untagged WAV format files) and for tidying up the CDs I rip. - -Take a look at what Asunder has (and hasn't) accomplished from EasyTag's perspective. One of the CDs I ripped included Ravel's _Daphnis et Chloé Suites 1 and 2_ and Strauss' _Don Quixote_. The freedb.freac.org database seemed to think that the composers Maurice Ravel and Richard Strauss were the artists performing the work, but the artist on this album is the wonderful London Symphony Orchestra led by André Previn. In Asunder, I clicked the "single artist" checkbox and changed the artist name to the LSO. Here's what it looks like in EasyTag: - -![EasyTag][9] - -(Chris Hermansen, [CC BY-SA 4.0][6]) - -It's not quite there! 
But in EasyTag, I can select the first six tracks, tagging the composer on all the files by clicking on that little "A" icon on the right of the Composer field: - -![Editing tags in EasyTag][10] - -(Chris Hermansen, [CC BY-SA 4.0][6]) - -I can set the remaining 13 similarly, then select the whole lot and set the Album Artist as well. Finally, I can flip to the Images tab and find and set the album cover image. - -Speaking of images, I've found it wise to always name the image "cover.jpg" and make sure it's in the directory with the FLAC files… some players aren't happy with PNG files, some want the file in the same directory, and some are just plain difficult to get along with, as far as images go. - -What is your favorite open source CD ripping tool? How about the open source tool you like to use to fix your metadata? Let me know in the comments below! - -### And speaking of music… - -I haven't been as regular with my music and open source column over the past year as I was in previous years. Although I didn't acquire a lot of new music in 2020 and 2021, a few jewels still came my way… - -As always, [Erased Tapes][11] continues to develop an amazing collection of hmmm… what would you call it, anyway? The site uses the terms "genre-defying" and "avant-garde," which don't seem overblown for once. A recent favorite is Rival Consoles' [_Night Melody Articulation_][12], guaranteed to transport me from the day-to-day grind to somewhere else. - -I've been a huge fan of [Gustavo Santaolalla][13] since I first heard his music on a road trip from Coyhaique to La Tapera in Chile's Aysén Region. You might be familiar with his film scores to _Motorcycle Diaries_ or _Brokeback Mountain_. I recently picked up [_Qhapaq Ñan_, music about the Inca Trail][14], on the Linux-friendly music site [7digital][15], which has a good selection of his work. - -Finally, and continuing with the Latin American theme, The Queen's Six recording [_Journeys to the New World_][16] is not to be missed. It is available in FLAC format (including high-resolution versions) from the Linux-friendly [Signum Classics][17] site. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/digitize-cd-open-source-tools - -作者:[Chris Hermansen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_cd_dvd.png?itok=RBwVIzmi (11 CDs in a U shape) -[2]: https://en.wikipedia.org/wiki/FLAC -[3]: https://opensource.com/article/17/2/open-music-tagging -[4]: https://www.xiph.org/paranoia/ -[5]: https://opensource.com/sites/default/files/uploads/asunsder.png (Asunder) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://forums.linuxmint.com/viewtopic.php?t=322415 -[8]: https://opensource.com/article/17/5/music-library-tag-management-tools -[9]: https://opensource.com/sites/default/files/uploads/easytag.png (EasyTag) -[10]: https://opensource.com/sites/default/files/uploads/easytag_editing-tags.png (Editing tags in EasyTag) -[11]: https://www.erasedtapes.com/about -[12]: https://www.erasedtapes.com/release/eratp139-rival-consoles-night-melody-articulation -[13]: https://en.wikipedia.org/wiki/Gustavo_Santaolalla -[14]: https://ca.7digital.com/artist/gustavo-santaolalla/release/qhapaq-%C3%B1an-12885504?f=20%2C19%2C12%2C16%2C17%2C9%2C2 -[15]: https://ca.7digital.com/search/release?q=gustavo%20santaolalla&f=20%2C19%2C12%2C16%2C17%2C9%2C2 -[16]: https://signumrecords.com/product/journeys-to-the-new-world-hispanic-sacred-music-from-the-16th-17th-centuries/SIGCD626/ -[17]: https://signumrecords.com/ diff --git a/sources/tech/20210418 Hyperbola Linux Review- Systemd-Free Arch With Linux-libre Kernel.md b/sources/tech/20210418 Hyperbola Linux Review- Systemd-Free Arch With Linux-libre Kernel.md deleted file mode 100644 index 4e12dcd583..0000000000 --- a/sources/tech/20210418 Hyperbola Linux Review- Systemd-Free Arch With Linux-libre Kernel.md +++ /dev/null @@ -1,181 +0,0 @@ -[#]: subject: (Hyperbola Linux Review: Systemd-Free Arch With Linux-libre Kernel) -[#]: via: (https://itsfoss.com/hyperbola-linux-review/) -[#]: author: (Sarvottam Kumar https://itsfoss.com/author/sarvottam/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Hyperbola Linux Review: Systemd-Free Arch With Linux-libre Kernel -====== - -In the last month of 2019, the Hyperbola project took a [major decision][1] of ditching Linux in favor of OpenBSD. We also had a [chat][2] with Hyperbola co-founder Andre Silva, who detailed the reason for dropping Hyperbola OS and starting a new HyperbolaBSD. - -HyperbolaBSD is still under development and its alpha release will be ready by September 2021 for initial testing. The current Hyperbola GNU/Linux-libre v0.3.1 Milky Way will be supported until the legacy [Linux-libre kernel][3] reaches the end of life in 2022. - -I thought of giving it a try before it goes away and switches to BSD completely. - -### What is Hyperbola GNU/Linux-libre? - -![][4] - -Back in April 2017, the Hyperbola project was started by its [six co-founders][5] with an aim to deliver a lightweight, stable, secure, software freedom, and privacy focussed operating system.  - -Subsequently, the first stable version of Hyperbola GNU/Linux-libre arrived in July 2017. 
It was based on Arch Linux snapshots combining Debian development. - -But, unlike Arch having a rolling release model, Hyperbola GNU/Linux-libre follows a Long Term Support (LTS) model. - -Also, instead of a generic Linux kernel, it includes GNU operating system components and the Linux-libre kernel. Most importantly, Hyperbola is also one of the distributions without Systemd init system. - -Even though the Systemd is widely adopted by major Linux distributions like Ubuntu, Hyperbola replaced it with OpenRC as the default init system. v0.1 of Hyperbola was the first and the last version to support Systemd. - -Moreover, Hyperbola put high emphasis on Keep It Simple Stupid (KISS) methodology. It provides packages for i686 and x86_64 architecture that meets GNU Free System Distribution Guidelines (GNU FSDG). - -Not just that, but it also has its own social contract and packaging guidelines that follow the philosophy of the Free Software Movement. - -Hence, Free Software Foundation [recognized][6] Hyperbola GNU/Linux-libre as the first completely free Brazilian operating system in 2018. - -### Downloading Hyperbola GNU/Linux-libre 0.3.1 Milky Way - -The hyperbola project provides [two live images][7] for installation: one is the regular Hyperbola and the other is Hypertalking. Hypertalking is the ISO optimized and adapted for blind and visually impaired users. - -Interestingly, if you already use Arch Linux or Arch-based distribution like Parabola, you don’t need to download a live image. You can easily migrate to Hyperbola by following the official [Arch][8] or [Parabola][9] migration guide. - -The ISO image sizes around 650MB containing only essential packages (excluding desktop environment) to boot only in a command line interface. - -### Hardware requirements for Hyperbola - -For v0.3.1 (x86_64), you require a minimum of any 64-bit processor, 47MiB (OS installed) and 302MiB (Live image) of RAM for text mode only with no desktop environment. - -While for v0.3.1 (i686), you require a minimum of Intel Pentium II or AMD Athlon CPU model, 33MiB (OS installed), and 252MiB (Live image) of RAM for text mode only with no desktop environment. - -### Installing Hyperbola Linux from scratch - -Currently, I don’t use Arch or Parabola distribution. Hence, instead of migration, I chose to install Hyperbola Linux from scratch. - -I also mostly don’t dual boot unknown (to me) distribution on my hardware as it may create undetermined problems. So, I decided to use the wonderful GNOME Boxes app for setting up a Hyperbola virtual machine with up to 2 GB of RAM and 22 GB of free disk space. - -Similar to Arch, Hyperbola also does not come with a graphical user interface (GUI) installer. It means you need to set up almost everything from scratch using a command line interface (CLI). - -Here, it also concludes that Hyperbola is definitely not for beginners and those afraid of the command line. - -However, Hyperbola does provide separate [installation instruction][10], especially for beginners. But I think it still misses several steps that can trouble beginners during the installation process. - -For instance, it does not guide you to connect to the network, set up a new user account, and install a desktop environment. - -Hence, there is also another Hyperbola [installation guide][11] that you need to refer to in case you’re stuck at any step. - -As I booted the live image, the boot menu showed the option to install for both 64-bit or 32-bit architecture. 
- -![Live Image Boot Menu][12] - -Next, following the installation instruction, I went through setting up disk partition, DateTime, language, and password for the root user. - -![Disk partition][13] - -Once everything set up, I then installed the most common [Grub bootloader][14] and rebooted the system. Phew! until now, all went well as I could log in to my Hyperbola system. - -![text mode][15] - -### Installing Xfce desktop in Hyperbola Linux - -The command-line interface was working fine for me. But now, to have a graphical user interface, I need to manually choose and install a new [desktop environment][16] as Hyperbola does not come with any default DE. - -For the sake of simplicity and lightweight, I chose to get the popular [Xfce desktop][17]. But before installing it, I also needed a Xorg [display server][18]. So, I installed it along with other important packages using the default pacman package manager. - -![Install X.Org][19] - -Later, I installed LightDM cross-desktop [display manager][20], Xfce desktop, and other necessary packages like elogind for managing user logins. - -![Install Xfce desktop environment][21] - -After the Xfce installation, you also need to add LightDM service at the default run level to automatically switch to GUI mode. You can use the below command and reboot the system: - -``` -rc-update add lightdm default -reboot -``` - -![Add LightDM at runlevel][22] - -#### Pacman Signature Error In Hyperbola Linux - -While installing Xorg and Xfce in the latest Hyperbola v0.3.1, I encountered the signature error for some packages showing “signature is marginal trust” or “invalid or corrupted package.” - -![Signature Error In Hyperbola Linux][23] - -After searching the solution, I came to know from Hyperbola [Forum][24] that the main author Emulatorman’s keys expired on 1st Feb 2021. - -Hence, until the author upgrades the key or a new version 0.4 arrives sooner or later, you can change the `SigLevel` from “SigLevel=Required DatabaseOptional” to “SigLevel=Never” in`/etc/pacman.conf` file to avoid this error. - -![][25] - -### Hyperbola Linux with Xfce desktop - -![Hyperbola Linux With Xfce desktop][26] - -Hyperbola GNU/Linux-libre with Xfce 4.12 desktop gives a very clean, light, and smooth user experience. At the core, it contains Linux-libre 4.9 and OpenRC 0.28 service manager. - -![][27] - -As Hyperbola does not come with customized desktops and tons of bloated software, it definitely gives flexibility and freedom to choose, install, and configure the services you want. - -On the memory usage side, it takes around 205MB of RAM (approx. 10%) while running no applications (except terminal). - -![][28] - -### Is Hyperbola a suitable distribution for you? - -As per my experience, it definitely not a [Linux distribution that I would like to suggest to complete beginners][29]. Well, the Hyperbola project does not even claim to be beginners-friendly. - -If you’re well-versed with the command line and have quite a good knowledge of Linux concepts like disk partition, you can give it a try and decide yourself. Spending time hacking around the installation and configuration process can teach you a lot. - -Another thing that might matter in choosing Hyperbola Linux is also the default init system. If you’re looking for Systemd-free distribution with complete customization control from scratch, what can be better than it. 
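If OpenRC is new to you, day-to-day service management on a Systemd-free setup like this mostly comes down to a handful of short commands. The sketch below uses sshd purely as an example service name:

```
# Start a service now, enable it for the default runlevel,
# and list runlevels with the state of their services
# (roughly the OpenRC counterparts of systemctl start/enable/status).
rc-service sshd start
rc-update add sshd default
rc-status
```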
- -Last but not least, you should also consider the future of Hyperbola, which will no longer contain Linux Kernel as it will turn into a HyperbolaBSD with OpenBSD Linux and userspace. - -If you’ve already tried or currently using Hyperbola Linux, let us know your experience in the comment below. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/hyperbola-linux-review/ - -作者:[Sarvottam Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/sarvottam/ -[b]: https://github.com/lujun9972 -[1]: https://www.hyperbola.info/news/announcing-hyperbolabsd-roadmap/ -[2]: https://itsfoss.com/hyperbola-linux-bsd/ -[3]: https://www.fsfla.org/ikiwiki/selibre/linux-libre/ -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/hyperbola-gnu-linux.png?resize=800%2C450&ssl=1 -[5]: https://www.hyperbola.info/members/founders/ -[6]: https://www.fsf.org/news/fsf-adds-hyperbola-gnu-linux-libre-to-list-of-endorsed-gnu-linux-distributions -[7]: https://wiki.hyperbola.info/doku.php?id=en:main:downloads&redirect=1 -[8]: https://wiki.hyperbola.info/doku.php?id=en:migration:from_arch -[9]: https://wiki.hyperbola.info/doku.php?id=en:migration:from_parabola -[10]: https://wiki.hyperbola.info/doku.php?id=en:guide:beginners -[11]: https://wiki.hyperbola.info/doku.php?id=en:guide:installation -[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Live-Image-Boot-Menu.png?resize=640%2C480&ssl=1 -[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Disk-partition.png?resize=600%2C450&ssl=1 -[14]: https://itsfoss.com/what-is-grub/ -[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/text-mode.png?resize=600%2C450&ssl=1 -[16]: https://itsfoss.com/what-is-desktop-environment/ -[17]: https://xfce.org/ -[18]: https://itsfoss.com/display-server/ -[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Install-xorg-package.png?resize=600%2C450&ssl=1 -[20]: https://itsfoss.com/display-manager/ -[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Install-Xfce-desktop-environment-800x600.png?resize=600%2C450&ssl=1 -[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Add-LightDM-at-runlevel.png?resize=600%2C450&ssl=1 -[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Signature-Error-In-Hyperbola-Linux.png?resize=600%2C450&ssl=1 -[24]: https://forums.hyperbola.info/viewtopic.php?id=493 -[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Configure-pacman-SigLevel.png?resize=600%2C450&ssl=1 -[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Hyperbola-Linux-With-Xfce-desktop.jpg?resize=800%2C450&ssl=1 -[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Hyperbola-System-Information.jpg?resize=800%2C450&ssl=1 -[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/Memory-Usage.jpg?resize=800%2C450&ssl=1 -[29]: https://itsfoss.com/best-linux-beginners/ diff --git a/sources/tech/20210420 In the trenches with Thomas Gleixner, real-time Linux kernel patch set.md b/sources/tech/20210420 In the trenches with Thomas Gleixner, real-time Linux kernel patch set.md deleted file mode 100644 index d5a39bf6c5..0000000000 --- a/sources/tech/20210420 In the trenches with Thomas Gleixner, real-time Linux kernel patch set.md +++ /dev/null @@ -1,133 +0,0 @@ -[#]: subject: (In the trenches with Thomas 
Gleixner, real-time Linux kernel patch set) -[#]: via: (https://www.linux.com/news/in-the-trenches-with-thomas-gleixner-real-time-linux-kernel-patch-set/) -[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -In the trenches with Thomas Gleixner, real-time Linux kernel patch set -====== - -_![][1]_ - -_Jason Perlow, Editorial Director at the Linux Foundation interviews Thomas Gleixner, Linux Foundation Fellow, CTO of Linutronix GmbH, and project leader of the_ [_PREEMPT_RT_][2] _real-time kernel patch set._ - -**JP:** *Greetings, Thomas! It’s great to have you here this morning — although for you, it’s getting late in the afternoon in Germany. So **PREEMPT_RT**, the real-time patch set for the kernel is a fascinating project because it has some very important use-cases that most people who use Linux-based systems may not be aware of. First of all, can you tell me what “Real-Time” truly means? * - -**TG:** Real-Time in the context of operating systems means that the operating system provides mechanisms to guarantee that the associated real-time task processes an event within a specified period of time. Real-Time is often confused with “really fast.” The late Prof. Doug Niehaus explained it this way: “Real-Time is not as fast as possible; it is as fast as specified.” - -The specified time constraint is application-dependent. A control loop for a water treatment plant can have comparatively large time constraints measured in seconds or even minutes, while a robotics control loop has time constraints in the range of microseconds. But for both scenarios missing the deadline at which the computation has to be finished can result in malfunction. For some application scenarios, missing the deadline can have fatal consequences. - -In the strict sense of Real-Time, the guarantee which is provided by the operating system must be verifiable, e.g., by mathematical proof of the worst-case execution time. In some application areas, especially those related to functional safety (aerospace, medical, automation, automotive, just to name a few), this is a mandatory requirement. But for other scenarios or scenarios where there is a separate mechanism for providing the safety requirements, the proof of correctness can be more relaxed. But even in the more relaxed case, the malfunction of a real-time system can cause substantial damage, which obviously wants to be avoided. - -**JP:** _What is the history behind the project? How did it get started?_ - -**TG:** Real-Time Linux has a history that goes way beyond the actual **PREEMPT_RT** project. - -Linux became a research vehicle very early on. Real-Time researchers set out to transform Linux into a Real-Time Operating system and followed different approaches with more or less success. Still, none of them seriously attempted a fully integrated and perhaps upstream-able variant. In 2004 various parties started an uncoordinated effort to get some key technologies into the Linux kernel on which they wanted to build proper Real-Time support. None of them was complete, and there was a lack of an overall concept.  - -Ingo Molnar, working for RedHat, started to pick up pieces, reshape them and collect them in a patch series to build the grounds for the real-time preemption patch set **PREEMPT_RT**. At that time, I worked with [the late Dr. 
Doug Niehaus][3] to port a solution we had working based on the 2.4 Linux kernel forward to the 2.6 kernel. Our work was both conflicting and complimentary, so I teamed up with Ingo quickly to get this into a usable shape. Others like Steven Rostedt brought in ideas and experience from other Linux Real-Time research efforts. With a quickly forming loose team of interested developers, we were able to develop a halfway usable Real-Time solution that was fully integrated into the Linux kernel in a short period of time. That was far from a maintainable and production-ready solution. Still, we had laid the groundwork and proven that the concept of making the Linux Kernel real-time capable was feasible. The idea and intent of fully integrating this into the mainline Linux kernel over time were there from the very beginning. - -**JP:** _Why is it still a separate project from the Mainline kernel today?_ - -**TG:** To integrate the real-time patches into the Linux kernel, a lot of preparatory work, restructuring, and consolidation of the mainline codebase had to be done first. While many pieces that emerged from the real-time work found their way into the mainline kernel rather quickly due to their isolation, the more intrusive changes that change the Linux kernel’s fundamental behavior needed (and still need) a lot of polishing and careful integration work.  - -Naturally, this has to be coordinated with all the other ongoing efforts to adopt the Linux kernel to the different use cases ranging from tiny embedded systems to supercomputers.  - -This also requires carefully designing the integration so it does not get in the way of other interests and imposes roadblocks for further developing the Linux kernel, which is something the community and especially Linus Torvalds, cares about deeply.  - -As long as these remaining patches are out of the mainline kernel, this is not a problem because it does not put any burden or restriction on the mainline kernel. The responsibility is on the real-time project, but on the other side, in this context, there is no restriction to take shortcuts that would never be acceptable in the upstream kernel. - -The real-time patches are fundamentally different from something like a device driver that sits at some corner of the source tree. A device driver does not cause any larger damage when it goes unmaintained and can be easily removed when it reaches the final state bit-rot. Conversely, the **PREEMPT_RT** core technology is in the heart of the Linux kernel. Long-term maintainability is key as any problem in that area will affect the Linux user universe as a whole. In contrast, a bit-rotted driver only affects the few people who have a device depending on it. - -**JP:** *Traditionally, when I think about RTOS, I think of legacy solutions based on closed systems. Why is it essential we have an open-source alternative to them? * - -**TG:** The RTOS landscape is broad and, in many cases, very specialized. As I mentioned on the question of “what is real-time,” certain application scenarios require a fully validated RTOS, usually according to an application space-specific standard and often regulatory law. Aside from that, many RTOSes are limited to a specific class of CPU devices that fit into the targeted application space. Many of them come with specialized application programming interfaces which require special tooling and expertise. - -The Real-Time Linux project never aimed at these narrow and specialized application spaces. 
It always was meant to be the solution for 99% of the use cases and to be able to fully leverage the flexibility and scalability of the Linux kernel and the broader FOSS ecosystem so that integrated solutions with mixed-criticality workloads can be handled consistently.  - -Developing real-time applications on a real-time enabled Linux kernel is not much different from developing non-real-time applications on Linux, except for the careful selection of system interfaces that can be utilized and programming patterns that should be avoided, but that is true for real-time application programming in general independent of the RTOS.  - -The important difference is that the tools and concepts are all the same, and integration into and utilizing the larger FOSS ecosystem comes for free. - -The downside of **PREEMPT_RT** is that it can’t be fully validated, which excludes it from specific application spaces, but there are efforts underway, e.g., the LF ELISA project, to fill that gap. The reason behind this is, that large multiprocessor systems have become a commodity, and the need for more complex real-time systems in various application spaces, e.g., assisted / autonomous driving or robotics, requires a more flexible and scalable RTOS approach than what most of the specialized and validated RTOSes can provide.  - -That’s a long way down the road. Still, there are solutions out there today which utilize external mechanisms to achieve the safety requirements in some of the application spaces while leveraging the full potential of a real-time enabled Linux kernel along with the broad offerings of the wider FOSS ecosystem. - -**JP:** _What are examples of products and systems that use the real-time patch set that people depend on regularly?_ - -**TG:** It’s all over the place now. Industrial automation, control systems, robotics, medical devices, professional audio, automotive, rockets, and telecommunication, just to name a few prominent areas. - -**JP:** *Who are the major participants currently developing systems and toolsets with the real-time Linux kernel patch set?  * - -**TG:** Listing them all would be equivalent to reciting the “who’s who” in the industry. On the distribution side, there are offerings from, e.g., RedHat, SUSE, Mentor, and Wind River, which deliver RT to a broad range of customers in different application areas. There are firms like Concurrent, National Instruments, Boston Dynamics, SpaceX, and Tesla, just to name a few on the products side. - -RedHat and National Instruments are also members of the LF collaborative Real-Time project. - -**JP:** _What are the challenges in developing a real-time subsystem or specialized kernel for Linux? Is it any different than how other projects are run for the kernel?_ - -**TG:** Not really different; the same rules apply. Patches have to be posted, are reviewed, and discussed. The feedback is then incorporated. The loop starts over until everyone agrees on the solution, and the patches get merged into the relevant subsystem tree and finally end up in the mainline kernel. - -But as I explained before, it needs a lot of care and effort and, often enough, a large amount of extra work to restructure existing code first to get a particular piece of the patches integrated. The result is providing the desired functionality but is at the same time not in the way of other interests or, ideally, provides a benefit for everyone. 
- -The technology’s complexity that reaches into a broad range of the core kernel code is obviously challenging, especially combined with the mainline kernel’s rapid change rate. Even larger changes happening at the related core infrastructure level are not impacting ongoing development and integration work too much in areas like drivers or file systems. But any change on the core infrastructure can break a carefully thought-out integration of the real-time parts into that infrastructure and send us back to the drawing board for a while. - -**JP:**  *Which companies have been supporting the effort to get the **PREEMPT_RT** Linux kernel patches upstream? * - -**TG:** For the past five years, it has been supported by the members of the LF real-time Linux project, currently ARM, BMW, CIP, ELISA, Intel, National Instruments, OSADL, RedHat, and Texas Instruments. CIP, ELISA, and OSADL are projects or organizations on their own which have member companies all over the industry. Former supporters include Google, IBM, and NXP. - -I personally, my team and the broader Linux real-time community are extremely grateful for the support provided by these members.  - -However, as with other key open source projects heavily used in critical infrastructure, funding always was and still is a difficult challenge. Even if the amount of money required to keep such low-level plumbing but essential functionality sustained is comparatively small, these projects struggle with finding enough sponsors and often lack long-term commitment. - -The approach to funding these kinds of projects reminds me of the [Mikado Game][4], which is popular in Europe, where the first player who picks up the stick and disturbs the pile often is the one who loses. - -That’s puzzling to me, especially as many companies build key products depending on these technologies and seem to take the availability and sustainability for granted up to the point where such a project fails, or people stop working on it due to lack of funding. Such companies should seriously consider supporting the funding of the Real-Time project. - -It’s a lot like the [Jenga][5] game, where everyone pulls out as many pieces as they can up until the point where it collapses. We cannot keep taking; we have to give back to these communities putting in the hard work for technologies that companies heavily rely on. - -I gave up long ago trying to make sense of that, especially when looking at the insane amounts of money thrown at the over-hyped technology of the day. Even if critical for a large part of the industry, low-level infrastructure lacks the buzzword charm that attracts attention and makes headlines — but it still needs support. - -**JP:**  *One of the historical concerns was that Real-Time didn’t have a community associated with it; what has changed in the last five years?  * - -**TG:** There is a lively user community, and quite a bit of the activity comes from the LF project members. On the development side itself, we are slowly gaining more people who understand the intricacies of **PREEMPT_RT** and also people who look at it from other angles, e.g., analysis and instrumentation. Some fields could be improved, like documentation, but there is always something that can be improved. - -**JP:**  _What will the Real-Time Stable team be doing once the patches are accepted upstream?_ - -**TG:** The stable team is currently overseeing the RT variants of the supported mainline stable versions. 
Once everything is integrated, this will dry out to some extent once the older versions reach EOL. But their expertise will still be required to keep real-time in shape in mainline and in the supported mainline stable kernels. - -**JP:** _So once the upstreaming activity is complete, what happens afterward?_ - -**TG:** Once upstreaming is done, efforts have to be made to enable RT support for specific Linux features currently disabled on real-time enabled kernels. Also, for quite some time, there will be fallout when other things change in the kernel, and there has to be support for kernel developers who run into the constraints of RT, which they did not have to think about before.  - -The latter is a crucial point for this effort. Because there needs to be a clear longer-term commitment that the people who are deeply familiar with the matter and the concepts are not going to vanish once the mainlining is done. We can’t leave everybody else with the task of wrapping their brains around it in desperation; there cannot be institutional knowledge loss with a system as critical as this.  - -The lack of such a commitment would be a showstopper on the final step because we are now at the point where the notable changes are focused on the real-time only aspects rather than welcoming cleanups, improvements, and features of general value. This, in turn, circles back to the earlier question of funding and industry support — for this final step requires several years of commitment by companies using the real-time kernel. - -There’s not going to be a shortage of things to work on. It’s not going to be as much as the current upstreaming effort, but as the kernel never stops changing, this will be interesting for a long time. - -**JP:** _Thank you, Thomas, for your time this morning. 
It’s been an illuminating discussion._ - -_To get involved with the real-time kernel patch for Linux, please visit the_ [_PREEMPT_RT wiki_][2] _at The Linux Foundation or email [real-time-membership@linuxfoundation.org][6]_ - --------------------------------------------------------------------------------- - -via: https://www.linux.com/news/in-the-trenches-with-thomas-gleixner-real-time-linux-kernel-patch-set/ - -作者:[Linux.com Editorial Staff][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/author/linuxdotcom/ -[b]: https://github.com/lujun9972 -[1]: https://www.linux.com/wp-content/uploads/2021/04/IntheTrenches_ThomasGleixner.png -[2]: https://wiki.linuxfoundation.org/realtime/start -[3]: https://lwn.net/Articles/514182/?fbclid=IwAR1mhNcOVlNdQ_QmwOhC1vG3vHzxsXRwa_g4GTo62u1sHbjOZBPPviT5bxc -[4]: https://en.wikipedia.org/wiki/Mikado_(game) -[5]: https://en.wikipedia.org/wiki/Jenga -[6]: mailto:real-time-membership@linuxfoundation.org diff --git a/sources/tech/20210423 How I use OBS Studio to record videos for my YouTube channel.md b/sources/tech/20210423 How I use OBS Studio to record videos for my YouTube channel.md deleted file mode 100644 index 79c429e0f3..0000000000 --- a/sources/tech/20210423 How I use OBS Studio to record videos for my YouTube channel.md +++ /dev/null @@ -1,141 +0,0 @@ -[#]: subject: (How I use OBS Studio to record videos for my YouTube channel) -[#]: via: (https://opensource.com/article/21/4/obs-youtube) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How I use OBS Studio to record videos for my YouTube channel -====== -Install, configure, and use Open Broadcaster Software to record how-to, -promotional, and other types of videos. -![Person using a laptop][1] - -I manage a [YouTube channel for the FreeDOS Project][2], where I record "how-to" videos with FreeDOS running inside the [QEMU][3] PC emulator software. When I started the channel in August 2019, I didn't know anything about recording videos. But with [Open Broadcaster Software][4], also called OBS Studio, I've found recording these videos to be pretty straightforward. Here's how you can do it, too. - -### Install OBS Studio - -I run Fedora Linux, which doesn't include the OBS Studio software by default. Fortunately, the OBS Studio website has an [installation guide][5] that walks you through the steps to install OBS Studio via the RPM Fusion alternative repository. - -If you don't already have RPM Fusion set up on your system, you can add the repository on Fedora using this one-line command: - - -``` -`$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm` -``` - -Once the RPM Fusion repo is set up, you can install OBS Studio with this command: - - -``` -`$ sudo dnf install obs-studio` -``` - -If you have an NVIDIA graphics card, there's an extra step in the installation guide to install hardware-accelerated video support. But my graphics card is from Intel, so I don't need to run the extra steps. - -However, OBS Studio does not support [Wayland][6], at least not in the Fedora build. 
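If you are not sure whether the session you are in right now is Wayland or X11, one quick check (nothing OBS-specific, just a standard desktop environment variable) is:

```
# Prints "wayland" or "x11" on most modern desktops.
echo $XDG_SESSION_TYPE
```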
That means when I want to record videos with OBS Studio, I need to log into my GNOME desktop [using an Xorg session][7]. On the login screen, enter your password, click on the gear-shaped icon in the lower-right corner, and select **GNOME on Xorg**. - -### Configure OBS Studio - -The first time you launch OBS Studio, the software runs an auto-configuration wizard to determine the best settings for recording videos. This makes setup a breeze. If you're recording videos on the desktop, like I am, then click the **Optimize just for recording** radio button and click **Next**. - -![OBS Studio configuration][8] - -(Jim Hall, [CC BY-SA 4.0][9]) - -OBS Studio will run through a series of automated tests before it confirms the best video settings for your system. On my system, that's 1920x1080 at 30 frames per second (fps), which is good enough for recording my videos. - -![OBS Studio configuration][10] - -(Jim Hall, [CC BY-SA 4.0][9]) - -#### My setup - -The default OBS Studio interface shows the video front and center and positions the controls at the bottom of the screen. While this is not a bad default arrangement, you can see in my early videos that I occasionally look away from the camera as I change from a full-screen webcam video to my QEMU screen. That's because the default OBS Studio configuration places the **Scene controls** in the lower-left corner. - -![OBS Studio configuration][11] - -(Jim Hall, [CC BY-SA 4.0][9]) - -Breaking virtual eye contact like this is distracting, so I wanted another way to change scenes without looking for the scene controls. I discovered that I could click and drag the OBS Studio controls to different areas on the screen. By positioning the scene controls at the top of the screen, near my computer's webcam, I don't need to look away from the camera to change scenes. - -![OBS Studio configuration][12] - -(Jim Hall, [CC BY-SA 4.0][9]) - -So, my first step whenever I set up OBS Studio is to drag the controls to the top of the screen. I like to place the **Scene selector panel** in the middle, so I don't have to look very far away from my camera to change scenes. I keep the recording controls to one side because I'm never on camera when I start or stop the video, so it doesn't matter if I look away to start or stop my video recording. - -![OBS Studio configuration][13] - -(Jim Hall, [CC BY-SA 4.0][9]) - -### Setting up scenes - -You can set up OBS Studio to support your preferred video style. When I started recording videos, I watched other how-to videos to see how they were organized. Most start with a brief introduction by the host, then switch to a hands-on demonstration, and end with a "thank you" screen to advertise the channel. I wanted to create my videos similarly, and you can do that with scenes. - -Each scene is a different arrangement of **sources**, or elements in the video. Each source is like a layer, so if you have multiple image or video sources, they will appear to stack on top of one another. - -How you define your scenes depends on the kind of video you want to make. I do a lot of hands-on demonstration videos, so I have one scene with a full-screen webcam video, another scene that's just a QEMU window, and yet another scene that's "picture-in-picture" with me over my QEMU screen. I can also set up separate scenes that show a "thank you" image and links to subscribe to my channel or to join the project on social media. - -With these scenes, I can record my videos as Live—meaning I don't need to edit them afterward. 
I can use the Scene controls in OBS Studio to switch from the **QEMU** scene to the **Full-screen webcam** screen and back to the **QEMU** screen before wrapping up with separate scenes that thank my supporters and share information about my channel. That may sound like a lot of work, but once you have the scenes set up, changing scenes is just clicking an item in the Scenes menu. That's why I like to center the Scene selector at the top of the screen, so I can easily select the scene I need. - -Here's what I use to record my videos and how I set up the sources in each: - - * **Full-screen webcam:** I set up a webcam source from my Vitade webcam as a **video capture device** (V4L) and use the **Transform** menu (right-click) to fit the webcam to the screen. This also uses my Yeti microphone for sound as an **audio input capture** (PulseAudio). - - * **QEMU:** This is where I spend most of my time in my videos. OBS Studio can use any window as a source, and I define my QEMU window as a **window capture** (Xcomposite) source. In case I need to reboot the virtual machine while I'm recording a video, I also set a Color Bars image as a background image on a layer that's "behind" the window. This also uses my Yeti microphone for sound as an **audio input capture** (PulseAudio). - - * **QEMU + webcam:** My viewers tell me they like to see me on camera while I'm showing things in my QEMU window, so I defined another scene that combines the **QEMU** and **Full-screen webcam** scenes. My webcam is a small rectangle in one corner of the screen. - - * **Patreon card:** At the end of my videos, I thank the people who support me on Patreon. I created a striped pattern in GIMP and set that as my background image. I then defined a **text** source where I entered a "thank you" message and a list of my patrons. As before, I set my Yeti microphone for sound as an **audio input capture** (PulseAudio). - - * **End card:** As I wrap up the video, I want to encourage viewers to visit our website or join us on social media. Similar to the Patreon card scene, I use a background pattern that already includes my text and icons. But to add a little visual flair, I created a blinking cursor after our URL, as though someone had typed it in. This cursor is not actually an animation but an **image slideshow** source that uses two images: a blank rectangle and a rectangle with a cursor. The image slideshow flips between these two images, creating the appearance of a blinking cursor. - - - - -![OBS Studio configuration][14] - -(Jim Hall, [CC BY-SA 4.0][9]) - -### And action! - -Once I create my scene collection, I'm ready to record my videos. I usually start by talking over my QEMU window, so I click on the **QEMU** scene and then click the **Start Recording** button. After I've said a few words to set the stage for my video, I click on the **Full-screen webcam** scene to fully introduce the topic. - -After sharing some information about whatever I'm talking about in the video, I click on the **QEMU** scene or the **QEMU + webcam** scene. Which scene I choose depends on whether I need to be seen during the video or if the "picture-in-picture" video will obscure important text on the screen. I spend most of the how-to video in this scene, usually while playing a game, demonstrating a program, or writing a sample program. - -When I'm ready to wrap up, I click on the **Patreon card** scene to thank everyone who supports me on Patreon. 
Some patrons support me at a higher level, and they get a specific mention and their name listed on the screen. Then, I click on the **End card** scene to encourage viewers to visit our website, join us on Facebook, follow us on Twitter, and consider supporting me on Patreon. Finally, I click the **Stop Recording** button, and OBS Studio stops the video. - -Using OBS Studio is a great way to record videos. I've used this same method to record other videos, including pre-recorded conference talks, welcome videos for a remote symposium, and virtual lecture videos when I teach an online class. - -The next time you need to record a video, try OBS Studio. I think you'll find it easy to learn and use. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/obs-youtube - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) -[2]: https://www.youtube.com/freedosproject -[3]: https://www.qemu.org/ -[4]: https://obsproject.com/ -[5]: https://obsproject.com/wiki/install-instructions#linux -[6]: https://wayland.freedesktop.org/ -[7]: https://docs.fedoraproject.org/en-US/quick-docs/configuring-xorg-as-default-gnome-session/ -[8]: https://opensource.com/sites/default/files/uploads/obs-setup-02.png (OBS Studio configuration) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: https://opensource.com/sites/default/files/uploads/obs-setup-09.png (OBS Studio configuration) -[11]: https://opensource.com/sites/default/files/uploads/obs-setup-10.png (OBS Studio configuration) -[12]: https://opensource.com/sites/default/files/uploads/obs-setup-11.png (OBS Studio configuration) -[13]: https://opensource.com/sites/default/files/uploads/obs-setup-12.png (OBS Studio configuration) -[14]: https://opensource.com/sites/default/files/uploads/obs-setup-18.png (OBS Studio configuration) diff --git a/sources/tech/20210426 Exploring the world of declarative programming.md b/sources/tech/20210426 Exploring the world of declarative programming.md deleted file mode 100644 index b5ffce076e..0000000000 --- a/sources/tech/20210426 Exploring the world of declarative programming.md +++ /dev/null @@ -1,196 +0,0 @@ -[#]: subject: (Exploring the world of declarative programming) -[#]: via: (https://fedoramagazine.org/exploring-the-world-of-declarative-programming/) -[#]: author: (pampelmuse https://fedoramagazine.org/author/pampelmuse/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Exploring the world of declarative programming -====== - -![][1] - -Photo by [Stefan Cosma][2] on [Unsplash][3] - -### Introduction - -Most of us use imperative programming languages like C, Python, or Java at home. But the universe of programming languages is endless and there are languages where no imperative command has gone before. That which may sound impossible at the first glance is feasible with Prolog and other so called declarative languages. This article will demonstrate how to split a programming task between Python and Prolog. - -In this article I do not want to teach Prolog. 
There are [resources available][4] for that. We will demonstrate how simple it is to solve a puzzle solely by describing the solution. After that it is up to the reader how far this idea will take them. - -To proceed, you should have a basic understanding of Python. Installation of Prolog and the Python-Prolog bridge is accomplished using this command: - -dnf install pl python3-pyswip - -Our exploration uses [SWI-Prolog][5], an actively developed Prolog which has the Fedora package name “pl”. The Python/SWI-Prolog bridge is [pyswip][6]. - -If you are a bold adventurer you are welcome to follow me exploring the world of declarative programming. - -### Puzzle - -The example problem for our exploration will be a puzzle similar to what you may have seen before. - -**How many triangles are there?** - -![][7] - -### Getting started - -Get started by opening a fresh text file with your favorite text editor. Copy all three text blocks in the sections below (Input, Process and Output) together into one file. - -#### Input - -This section sets up access to the Prolog interface and defines data for the problem. This is a simple case so it is fastest to write the data lines by hand. In larger problems you may get your input data from a file or from a database. - -``` -#!/usr/bin/python - -from pyswip import Prolog - -prolog = Prolog() -prolog.assertz("line([a, e, k])") -prolog.assertz("line([a, d, f, j])") -prolog.assertz("line([a, c, g, i])") -prolog.assertz("line([a, b, h])") -prolog.assertz("line([b, c, d, e])") -prolog.assertz("line([e, f, g, h])") -prolog.assertz("line([h, i, j, k])") -``` - - * The first line is the UNIX way to tell that this text file is a Python program. -Don’t forget to make your file executable by using _chmod +x yourfile.py_ . - * The second line imports a Python module which is doing the Python/Prolog bridge. - * The third line makes a Prolog instance available inside Python. - * Next lines are puzzle related. They describe the picture you see above. -Single **small** letters stand for concrete points. -_[a,e,k]_ is the Prolog way to describe a list of three points. -_line()_ declares that it is true that the list inside parentheses is a line . - - - -The idea is to let Python do the work and to feed Prolog. - -#### “Process” - -This section title is quoted because nothing is actually processed here. This is simply the description (declaration) of the solution. - -There is no single variable which gets a new value. Technically the processing is done in the section titled Output below where you find the command _prolog.query()_. - -``` -prolog.assertz(""" -triangle(A, B, C) :- - line(L1), - line(L2), - line(L3), - L1 \= L2, - member(A, L1), - member(B, L1), - member(A, L2), - member(C, L2), - member(B, L3), - member(C, L3), - A @< B, - B @< C""") -``` - -First of all: All capital letters and strings starting with a capital letter are Prolog variables! - -The statements here are the description of what a triangle is and you can read this like: - - * **If** all lines after _“:-“_ are true, **then** _triangle(A, B, C)_ is a triangle - * There must exist three lines (L1 to L3). - * Two lines must be different. “\_=_” means not equal in Prolog. We do not want to count a triangle where all three points are on the same line! So we check if at least two different lines are used. - * _member()_ is a Prolog predicate which is true if the first argument is inside the second argument which must be a list. 
In sum these six lines express that the three points must be pairwise on different lines. - * The last two lines are only true if the three points are in alphabetical order. (“_@<_” compares terms in Prolog.) This is necessary, otherwise [a, h, k] and [a, k, h] would count as two triangles. Also, the case where a triangle contains the same point two or even three times is excluded by these final two lines. - - - -As you can see, it is often not that obvious what defines a triangle. But for a computed approach you must be rather strict and rigorous. - -#### Output - -After the hard work in the process chapter the rest is easy. Just have Python ask Prolog to search for triangles and count them all. - -``` -total = 0 -for result in prolog.query("triangle(A, B, C)"): - print(result) - total += 1 -print("There are", total, "triangles.") -``` - -Run the program using this command in the directory containing _yourfile.py_ : - -``` -./yourfile.py -``` - -The output shows the listing of each triangle found and the final count. - -``` -{'A': 'a', 'B': 'e', 'C': 'f'} -{'A': 'a', 'B': 'e', 'C': 'g'} -{'A': 'a', 'B': 'e', 'C': 'h'} -{'A': 'a', 'B': 'd', 'C': 'e'} -{'A': 'a', 'B': 'j', 'C': 'k'} -{'A': 'a', 'B': 'f', 'C': 'g'} -{'A': 'a', 'B': 'f', 'C': 'h'} -{'A': 'a', 'B': 'c', 'C': 'e'} -{'A': 'a', 'B': 'i', 'C': 'k'} -{'A': 'a', 'B': 'c', 'C': 'd'} -{'A': 'a', 'B': 'i', 'C': 'j'} -{'A': 'a', 'B': 'g', 'C': 'h'} -{'A': 'a', 'B': 'b', 'C': 'e'} -{'A': 'a', 'B': 'h', 'C': 'k'} -{'A': 'a', 'B': 'b', 'C': 'd'} -{'A': 'a', 'B': 'h', 'C': 'j'} -{'A': 'a', 'B': 'b', 'C': 'c'} -{'A': 'a', 'B': 'h', 'C': 'i'} -{'A': 'd', 'B': 'e', 'C': 'f'} -{'A': 'c', 'B': 'e', 'C': 'g'} -{'A': 'b', 'B': 'e', 'C': 'h'} -{'A': 'e', 'B': 'h', 'C': 'k'} -{'A': 'f', 'B': 'h', 'C': 'j'} -{'A': 'g', 'B': 'h', 'C': 'i'} -There are 24 triangles. -``` - -There are certainly more elegant ways to display this output but the point is: -**Python should do the output handling for Prolog.** - -If you are a star programmer you can make the output look like this: - -``` -*************************** -* There are 24 triangles. * -*************************** -``` - -### Conclusion - -Splitting a programming task between Python and Prolog makes it easy to keep the Prolog part pure and monotonic, which is good for logic reasoning. It is also easy to make the input and output handling with Python. - -Be aware that Prolog is a bit more complicated and can do much more than what I explained here. You can find a really good and modern introduction here: [The Power of Prolog][4]. 
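As a final practical note: if the bridge between Python and SWI-Prolog ever seems not to work on your machine, a quick one-line smoke test (using a throwaway fact that has nothing to do with the puzzle) can help isolate the problem:

```
# Should print [{'X': 'john'}] if pyswip can reach SWI-Prolog.
python3 -c "from pyswip import Prolog; p = Prolog(); p.assertz('father(michael, john)'); print(list(p.query('father(X, john)')))"
```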
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/exploring-the-world-of-declarative-programming/ - -作者:[pampelmuse][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/pampelmuse/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/explore_declarative-816x345.jpg -[2]: https://unsplash.com/@stefanbc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/star-trek?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://www.metalevel.at/prolog -[5]: https://www.swi-prolog.org/ -[6]: https://github.com/yuce/pyswip -[7]: https://fedoramagazine.org/wp-content/uploads/2021/04/triangle2.png diff --git a/sources/tech/20210428 5 ways to process JSON data in Ansible.md b/sources/tech/20210428 5 ways to process JSON data in Ansible.md deleted file mode 100644 index ea61f1f7f3..0000000000 --- a/sources/tech/20210428 5 ways to process JSON data in Ansible.md +++ /dev/null @@ -1,347 +0,0 @@ -[#]: subject: (5 ways to process JSON data in Ansible) -[#]: via: (https://opensource.com/article/21/4/process-json-data-ansible) -[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -5 ways to process JSON data in Ansible -====== -Structured data is friendly for automation, and you can take full -advantage of it with Ansible. -![Net catching 1s and 0s or data in the clouds][1] - -Exploring and validating data from an environment is a common practice for preventing service disruptions. You can choose to run the process periodically or on-demand, and the data you're checking can come from different sources: telemetry, command outputs, etc. - -If the data is _unstructured_, you must do some custom regex magic to retrieve key performance indicators (KPIs) relevant for specific scenarios. If the data is _structured_, you can leverage a wide array of options to make parsing it simpler and more consistent. Structured data conforms to a data model, which allows access to each data field separately. The data for these models is exchanged as key/value pairs and encoded using different formats. JSON, which is widely used in Ansible, is one of them. - -There are many resources available in Ansible to work with JSON data, and this article presents five of them. While all these resources are used together in sequence in the examples, it is probably sufficient to use just one or two in most real-life scenarios. - -![Magnifying glass looking at 0's and 1's][2] - -([Geralt][3], Pixabay License) - -The following code snippet is a short JSON document used as input for the examples in this article. If you just want to see the code, it's available in my [GitHub repository][4]. 
- -This is sample [pyATS][5] output from a `show ip ospf neighbors` command on a Cisco IOS-XE device: - - -``` -{ -   "parsed": { -      "interfaces": { -          "Tunnel0": { -              "neighbors": { -                  "203.0.113.2": { -                      "address": "198.51.100.2", -                      "dead_time": "00:00:39", -                      "priority": 0, -                      "state": "FULL/  -" -                  } -              } -          }, -          "Tunnel1": { -              "neighbors": { -                  "203.0.113.2": { -                      "address": "192.0.2.2", -                      "dead_time": "00:00:36", -                      "priority": 0, -                      "state": "INIT/  -" -                  } -              } -          } -      } -   } -} -``` - -This document lists various interfaces from a networking device describing the Open Shortest Path First ([OSPF][6]) state of any OSPF neighbor present per interface. The goal is to validate that the state of all these OSPF sessions is good (i.e., **FULL**). - -This goal is visually simple, but if you have a lot of entries, it wouldn't be. Fortunately, as the following examples demonstrate, you can do this at scale with Ansible. - -### 1\. Access a subset of the data - -If you are only interested in a specific branch of the data tree, a reference to its path will take you down the JSON structure hierarchy and allow you to select only that portion of the JSON object. The path is made of dot-separated key names. - -To begin, create a variable (`input`) in Ansible that reads the JSON-formatted message from a file. - -To go two levels down, for example, you need to follow the hierarchy of the key names to that point, which translates to `input.parsed.interfaces`, in this case. `input` is the variable that stores the JSON data, `parsed` the top-level key, and `interfaces` is the subsequent one. In a playbook, this will looks like: - - -``` -\- name: Go down the JSON file 2 levels -  hosts: localhost -  vars: -    input: "{{ lookup('file','output.json') | from_json }}" - -  tasks: -   - name: Create interfaces Dictionary -     set_fact: -       interfaces: "{{ input.parsed.interfaces }}" - -   - name: Print out interfaces -     debug: -       var: interfaces -``` - -It gives the following output: - - -``` -TASK [Print out interfaces] ************************************************************************************************************************************* -ok: [localhost] => { -    "msg": { -        "Tunnel0": { -            "neighbors": { -                "203.0.113.2": { -                    "address": "198.51.100.2", -                    "dead_time": "00:00:39", -                    "priority": 0, -                    "state": "FULL/  -" -                } -            } -        }, -        "Tunnel1": { -            "neighbors": { -                "203.0.113.2": { -                    "address": "192.0.2.2", -                    "dead_time": "00:00:36", -                    "priority": 0, -                    "state": "INIT/  -" -                } -            } -        } -    } -} -``` - -The view hasn't changed much; you only trimmed the edges. Baby steps! - -### 2\. 
Flatten out the content - -If the previous output doesn't help or you want a better understanding of the data hierarchy, you can produce a more compact output with the `to_paths` filter: - - -``` -\- name: Print out flatten interfaces input -  debug: -    msg: "{{ lookup('ansible.utils.to_paths', interfaces) }}" -``` - -This will print out as: - - -``` -TASK [Print out flatten interfaces input] *********************************************************************************************************************** -ok: [localhost] => { -    "msg": { -        "Tunnel0.neighbors['203.0.113.2'].address": "198.51.100.2", -        "Tunnel0.neighbors['203.0.113.2'].dead_time": "00:00:39", -        "Tunnel0.neighbors['203.0.113.2'].priority": 0, -        "Tunnel0.neighbors['203.0.113.2'].state": "FULL/  -", -        "Tunnel1.neighbors['203.0.113.2'].address": "192.0.2.2", -        "Tunnel1.neighbors['203.0.113.2'].dead_time": "00:00:36", -        "Tunnel1.neighbors['203.0.113.2'].priority": 0, -        "Tunnel1.neighbors['203.0.113.2'].state": "INIT/  -" -    } -} -``` - -### 3\. Use json_query filter (JMESPath) - -If you are familiar with a JSON query language such as [JMESPath][7], then Ansible's json_query filter is your friend because it is built upon JMESPath, and you can use the same syntax. If this is new to you, there are plenty of JMESPath examples you can learn from in [JMESPath examples][8]. It is a good resource to have in your toolbox. - -Here's how to use it to create a list of the neighbors for all interfaces. The query executed in this is `*.neighbors`: - - -``` -\- name: Create neighbors dictionary (this is now per interface) -  set_fact: -    neighbors: "{{ interfaces | json_query('*.neighbors') }}" - -\- name: Print out neighbors -  debug: -    msg: "{{ neighbors }}" -``` - -Which returns a list you can iterate over: - - -``` -TASK [Print out neighbors] ************************************************************************************************************************************** -ok: [localhost] => { -    "msg": [ -        { -            "203.0.113.2": { -                "address": "198.51.100.2", -                "dead_time": "00:00:39", -                "priority": 0, -                "state": "FULL/  -" -            } -        }, -        { -            "203.0.113.2": { -                "address": "192.0.2.2", -                "dead_time": "00:00:36", -                "priority": 0, -                "state": "INIT/  -" -            } -        } -    ] -} -``` - -Other options to query JSON are [jq][9] or [Dq][10] (for pyATS). - -### 4\. Access specific data fields - -Now you can go through the list of neighbors in a loop to access individual data. This example is interested in the `state` of each one. Based on the field's value, you can trigger an action. - -This will generate a message to alert the user if the state of a session isn't **FULL**. Typically, you would notify users through mechanisms like email or a chat message rather than just a log entry, as in this example. 
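If you prefer an actual email over a log entry, a minimal sketch of such a notification task could look like the following. It assumes the `community.general` collection is installed; the mail relay `smtp.example.com` and the addresses are placeholders only, and the condition and variables mirror the warning task shown next.

```
- name: Email a WARNING if OSPF state is not FULL
  community.general.mail:
    host: smtp.example.com    # placeholder mail relay
    port: 25
    to: noc@example.com       # placeholder recipient
    subject: "OSPF neighbor {{ info.key }} is not FULL"
    body: "Neighbor {{ info.key }}, with address {{ info.value.address }}, is in state {{ info.value.state[0:4] }}"
  vars:
    info: "{{ lookup('dict', item) }}"
  when: info.value.state is not match("FULL.*")
```

Whichever delivery mechanism you choose, the logic that decides when to alert stays the same.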
- -As you loop over the `neighbors` list generated in the previous step, it executes the tasks described in `tasks.yml` to instruct Ansible to print out a **WARNING** message only if the state of the neighbor isn't **FULL** (i.e., `info.value.state is not match("FULL.*")`): - - -``` -\- name: Loop over neighbors -  include_tasks: tasks.yml -  with_items: "{{ neighbors }}" -``` - -The `tasks.yml` file considers `info` as the dictionary item produced for each neighbor in the list you iterate over: - - -``` -\- name: Print out a WARNING if OSPF state is not FULL - debug: -   msg: "WARNING: Neighbor {{ info.key }}, with address {{ info.value.address }} is in state {{ info.value.state[0:4]  }}" - vars: -   info: "{{ lookup('dict', item) }}" - when: info.value.state is not match("FULL.*") -``` - -This produces a custom-generated message with different data fields for each neighbor that isn't operational: - - -``` -TASK [Print out a WARNING if OSPF state is not FULL] ************************************************************************************************************ -ok: [localhost] => { -    "msg": "WARNING: Neighbor 203.0.113.2, with address 192.0.2.2 is in state INIT" -} -``` - -> Note: Filter JSON data in Ansible using [json_query][11]. - -### 5\. Use a JSON schema to validate your data - -A more sophisticated way to validate the data from a JSON message is by using a JSON schema. This gives you more flexibility and a wider array of options to validate different types of data. A schema for this example would need to specify `state` is a `string` that starts with **FULL** if that's the only state you want to be valid (you can access this code in my [GitHub repository][12]): - - -``` -{ - "$schema": "", - "definitions": { -     "neighbor" : { -         "type" : "object", -         "properties" : { -             "address" : {"type" : "string"}, -             "dead_time" : {"type" : "string"}, -             "priority" : {"type" : "number"}, -             "state" : { -                 "type" : "string", -                 "pattern" : "^FULL" -                 } -             }, -         "required" : [ "address","state" ] -     } - }, - "type": "object", - "patternProperties": { -     ".*" : { "$ref" : "#/definitions/neighbor" } - } -} -``` - -As you loop over the neighbors, it reads this schema (`schema.json`) and uses it to validate each neighbor item with the module `validate` and engine `jsonschema`: - - -``` -\- name: Validate state of the neighbor is FULL -  ansible.utils.validate: -    data: "{{ item }}" -    criteria: -     - "{{ lookup('file',  'schema.json') | from_json }}" -    engine: ansible.utils.jsonschema -  ignore_errors: true -  register: result - -\- name: Print the neighbor that does not satisfy the desired state -  ansible.builtin.debug: -    msg: -     - "WARNING: Neighbor {{ info.key }}, with address {{ info.value.address }} is in state {{ info.value.state[0:4] }}" -     - "{{ error.data_path }}, found: {{ error.found }}, expected: {{ error.expected }}" -  when: "'errors' in result" -  vars: -    info: "{{ lookup('dict', item) }}" -    error: "{{ result['errors'][0] }}" -``` - -Save the output of the ones that fail the validation so that you can alert the user with a message: - - -``` -TASK [Validate state of the neighbor is FULL] ******************************************************************************************************************* -fatal: [localhost]: FAILED! 
=> {"changed": false, "errors": [{"data_path": "203.0.113.2.state", "expected": "^FULL", "found": "INIT/  -", "json_path": "$.203.0.113.2.state", "message": "'INIT/  -' does not match '^FULL'", "relative_schema": {"pattern": "^FULL", "type": "string"}, "schema_path": "patternProperties..*.properties.state.pattern", "validator": "pattern"}], "msg": "Validation errors were found.\nAt 'patternProperties..*.properties.state.pattern' 'INIT/  -' does not match '^FULL'. "} -...ignoring - -TASK [Print the neighbor that does not satisfy the desired state] *********************************************************************************************** -ok: [localhost] => { -    "msg": [ -        "WARNING: Neighbor 203.0.113.2, with address 192.0.2.2 is in state INIT", -        "203.0.113.2.state, found: INIT/  -, expected: ^FULL" -    ] -} -``` - -If you'd like a deeper dive: - - * You can find a more elaborated example and references in [Using new Ansible utilities for operational state management and remediation][13]. - * A good resource to practice JSON schema generation is the [JSON Schema Validator and Generator][14]. - * A similar approach is the [Schema Enforcer][15], which lets you create the schema in YAML (helpful if you prefer that syntax). - - - -### Conclusion - -Structured data is friendly for automation, and you can take full advantage of it with Ansible. As you determine your KPIs, you can automate checks on them to give you peace of mind in situations such as before and after a maintenance window. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/4/process-json-data-ansible - -作者:[Nicolas Leiva][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/nicolas-leiva -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds) -[2]: https://opensource.com/sites/default/files/uploads/data_pixabay.jpg (Magnifying glass looking at 0's and 1's) -[3]: https://pixabay.com/illustrations/window-hand-magnifying-glass-binary-4354467/ -[4]: https://github.com/nleiva/ansible-networking/blob/master/test-json.md#parsing-json-outputs -[5]: https://pypi.org/project/pyats/ -[6]: https://en.wikipedia.org/wiki/Open_Shortest_Path_First -[7]: https://jmespath.org/ -[8]: https://jmespath.org/examples.html -[9]: https://stedolan.github.io/jq/ -[10]: https://pubhub.devnetcloud.com/media/genie-docs/docs/userguide/utils/index.html -[11]: https://blog.networktocode.com/post/ansible-filtering-json-query/ -[12]: https://github.com/nleiva/ansible-networking/blob/master/files/schema.json -[13]: https://www.ansible.com/blog/using-new-ansible-utilities-for-operational-state-management-and-remediation -[14]: https://extendsclass.com/json-schema-validator.html -[15]: https://blog.networktocode.com/post/introducing_schema_enforcer/ diff --git a/sources/tech/20210428 How to create your first Quarkus application.md b/sources/tech/20210428 How to create your first Quarkus application.md deleted file mode 100644 index ea9a77e73b..0000000000 --- a/sources/tech/20210428 How to create your first Quarkus application.md +++ /dev/null @@ -1,99 +0,0 @@ -[#]: subject: (How to create your first Quarkus application) -[#]: via: 
(https://opensource.com/article/21/4/quarkus-tutorial) -[#]: author: (Saumya Singh https://opensource.com/users/saumyasingh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to create your first Quarkus application -====== -The Quarkus framework is considered the rising star for -Kubernetes-native Java. -![woman on laptop sitting at the window][1] - -Programming languages and frameworks continuously evolve to help developers who want to develop and deploy applications with even faster speeds, better performance, and lower footprint. Engineers push themselves to develop the "next big thing" to satisfy developers' demands for faster deployments. - -[Quarkus][2] is the latest addition to the Java world and considered the rising star for Kubernetes-native Java. It came into the picture in 2019 to optimize Java and commonly used open source frameworks for cloud-native environments. With the Quarkus framework, you can easily go serverless with Java. This article explains why this open source framework is grabbing lots of attention these days and how to create your first Quarkus app. - -## What is Quarkus? - -Quarkus reimagines the Java stack to give the performance characteristics and developer experience needed to create efficient, high-speed applications. It is a container-first and cloud-native framework for writing Java apps. - -You can use your existing skills to code in new ways with Quarkus. It also helps reduce the technical burden in moving to a Kubernetes-centric environment. High-density deployment platforms like Kubernetes need apps with a faster boot time and lower memory usage. Java is still a popular language for developing software but suffers from its focus on productivity at the cost of RAM and CPU. - -In the world of virtualization, serverless, and cloud, many developers find Java is not the best fit for developing cloud-native apps. However, the introduction of Quarkus (also known as "Supersonic and Subatomic Java") helps to resolve these issues. - -## What are the benefits of Quarkus? - -![Quarkus benefits][3] - -(Saumya Singh, [CC BY-SA 4.0][4]) - -Quarkus improves start-up times, execution costs, and productivity. Its main objective is to reduce applications' startup time and memory footprint while providing "developer joy." It fulfills these objectives with native compilation and hot reload features. - -### Runtime benefits - -![How Quarkus uses memory][5] - -(Saumya Singh, [CC BY-SA 4.0][4]) - - * Lowers memory footprint - * Reduces RSS memory, using 10% of the memory needed for a traditional cloud-native stack - * Offers very fast startup - * Provides a container-first framework, as it is designed to run in a container + Kubernetes environment. - * Focuses heavily on making things work in Kubernetes - - - -### Development benefits - -![Developers love Quarkus][6] - -(Saumya Singh, [CC BY-SA 4.0][4]) - - * Provides very fast, live reload during development and coding - * Uses "best of breed" libraries and standards - * Brings specifications and great support - * Unifies and supports imperative and reactive (non-blocking) styles - - - -## Create a Quarkus application in 10 minutes - -Now that you have an idea about why you may want to try Quarkus, I'll show you how to use it. 
- -First, ensure you have the prerequisites for creating a Quarkus application - - * An IDE like Eclipse, IntelliJ IDEA, VS Code, or Vim - * JDK 8 or 11+ installed with JAVA_HOME configured correctly - * Apache Maven 3.6.2+ - - - -You can create a project with either a Maven command or by using code.quarkus.io. - -### Use a Maven command: - -One of the easiest ways to create a new Quarkus project is to open a terminal and run the following commands, as outlined in the [getting started guide][7].  - -**Linux and macOS users:** - - -``` -mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create \ -    -DprojectGroupId=org.acme \ -    -DprojectArtifactId=getting-started \ -    -DclassName="org.acme.getting.started.GreetingResource" \ -    -Dpath="/hello" -cd getting-started -``` - -**Windows users:** - - * If you are using `cmd`, don't use the backward slash (`\`): [code]`mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DclassName="org.acme.getting.started.GreetingResource" -Dpath="/hello"` -``` -* If you are using PowerShell, wrap `-D` parameters in double-quotes: -``` -`mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create " \ No newline at end of file diff --git a/sources/tech/20210430 Access freenode using Matrix clients.md b/sources/tech/20210430 Access freenode using Matrix clients.md deleted file mode 100644 index dce583fb0a..0000000000 --- a/sources/tech/20210430 Access freenode using Matrix clients.md +++ /dev/null @@ -1,133 +0,0 @@ -[#]: subject: (Access freenode using Matrix clients) -[#]: via: (https://fedoramagazine.org/access-freenode-using-matrix-clients/) -[#]: author: (TheEvilSkeleton https://fedoramagazine.org/author/theevilskeleton/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Access freenode using Matrix clients -====== - -![][1] - -Fedora Linux 34 Background with freenode and Matrix logos - -Matrix (also written [matrix]) is [an open source project][2] and [a communication protocol][3]. The protocol standard is open and it is free to use or implement. Matrix is being recognized as a modern successor to the older [Internet Relay Chat (IRC)][4] protocol. [Mozilla][5], [KDE][6], [FOSDEM][7] and [GNOME][8] are among several large projects that have started using chat clients and servers that operate over the Matrix protocol. Members of the Fedora project have [discussed][9] whether or not the community should switch to using the Matrix protocol. - -The Matrix project has implemented an IRC bridge to enable communication between IRC networks (for example, [freenode][10]) and [Matrix homeservers][11]. This article is a guide on how to register, identify and join freenode channels from a Matrix client via the [Matrix IRC bridge][12]. - -Check out _[Beginner’s guide to IRC][13]_ for more information about IRC. - -### Preparation - -You need to set everything up before you register a nick. A nick is a username. - -#### Install a client - -Before you use the IRC bridge, you need to install a Matrix client. This guide will use Element. Other [Matrix clients][14] are available. - -First, install the Matrix client _Element_ from [Flathub][15] on your PC. Alternatively, browse to [element.io][16] to run the Element client directly in your browser. - -Next, click _Create Account_ to register a new account on matrix.org (a homeserver hosted by the Matrix project). 
- -#### Create rooms - -For the IRC bridge, you need to create rooms with the required users. - -First, click the ➕ (plus) button next to _People_ on the left side in Element and type _@appservice-irc:matrix.org_ in the field to create a new room with the user. - -Second, create another new room with _@freenode_NickServ:matrix.org_. - -### Register a nick at freenode - -If you have already registered a nick at freenode, skip the remainder of this section. - -Registering a nickname is optional, but strongly recommended. Many freenode channels require a registered nickname to join. - -First, open the room with _appservice-irc_ and enter the following: - -``` -!nick -``` - -Substitute _<your_nick>_ with the username you want to use. If the nick is already taken, _NickServ_ will send you the following message: - -``` -This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify . -``` - -If you receive the above message, use another nick. - -Second, open the room with _NickServ_ and enter the following: - -``` -REGISTER -``` - -You will receive a verification email from freenode. The email will contain a verification command similar to the following: - -``` -/msg NickServ VERIFY REGISTER -``` - -Ignore _/msg NickServ_ at the start of the command. Enter the remainder of the command in the room with _NickServ_. Be quick! You will have 24 hours to verify before the code expires. - -### Identify your nick at freenode - -If you just registered a new nick using the procedure in the previous section, then you should already be identified. If you are already identified, skip the remainder of this section. - -First, open the room with _@appservice-irc:matrix.org_ and enter the following: - -``` -!nick -``` - -Next, open the room with _@freenode_NickServ:matrix.org_ and enter the following: - -``` -IDENTIFY -``` - -### Join a freenode channel - -To join a freenode channel, press the ➕ (plus) button next to _Rooms_ on the left side in Element and type _#freenode_#<your_channel>:matrix.org_. Substitute _<your_channel>_ with the freenode channel you want to join. For example, to join the _#fedora_ channel, use _#freenode_#fedora:matrix.org_. For a list of Fedora Project IRC channels, see _[Communicating_and_getting_help — IRC_for_interactive_community_support][17]_. 
- -### Further reading - - * [Matrix IRC wiki][18] - - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/access-freenode-using-matrix-clients/ - -作者:[TheEvilSkeleton][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/theevilskeleton/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/freenode-matrix-816x345.jpeg -[2]: https://matrix.org/ -[3]: https://matrix.org/docs/spec/ -[4]: https://en.wikipedia.org/wiki/Internet_Relay_Chat -[5]: https://matrix.org/blog/2019/12/19/welcoming-mozilla-to-matrix/ -[6]: https://matrix.org/blog/2019/02/20/welcome-to-matrix-kde/ -[7]: https://matrix.org/blog/2021/01/04/taking-fosdem-online-via-matrix -[8]: https://wiki.gnome.org/Initiatives/Matrix -[9]: https://discussion.fedoraproject.org/t/the-future-of-real-time-chat-discussion-for-the-fedora-council/24628 -[10]: https://en.wikipedia.org/wiki/Freenode -[11]: https://en.wikipedia.org/wiki/Matrix_(protocol)#Servers -[12]: https://github.com/matrix-org/matrix-appservice-irc -[13]: https://fedoramagazine.org/beginners-guide-irc/ -[14]: https://matrix.org/clients/ -[15]: https://flathub.org/apps/details/im.riot.Riot -[16]: https://app.element.io/ -[17]: https://fedoraproject.org/wiki/Communicating_and_getting_help#IRC_for_interactive_community_support -[18]: https://github.com/matrix-org/matrix-appservice-irc/wiki diff --git a/sources/tech/20210501 Flipping burgers to flipping switches- A tech guy-s journey.md b/sources/tech/20210501 Flipping burgers to flipping switches- A tech guy-s journey.md deleted file mode 100644 index c51a7dfb67..0000000000 --- a/sources/tech/20210501 Flipping burgers to flipping switches- A tech guy-s journey.md +++ /dev/null @@ -1,44 +0,0 @@ -[#]: subject: (Flipping burgers to flipping switches: A tech guy's journey) -[#]: via: (https://opensource.com/article/21/5/open-source-story-burgers) -[#]: author: (Clint Byrum https://opensource.com/users/spamaps) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Flipping burgers to flipping switches: A tech guy's journey -====== -You never know how your first job might influence your career path. -![Multi-colored and directional network computer cables][1] - -In my last week of high school in 1996, I quit my job at Carl's Jr. because I thought maybe without school, I'd have time to learn enough skills to get hired at a PC shop or something. I didn't know that I actually had incredibly marketable skills as a Linux sysadmin and C programmer, because I was the only tech person I'd ever known (except the people I chatted with on Undernet's #LinuxHelp channel). - -I applied at a local company that had maybe the weirdest tech mission I've experienced: Its entire reason for existing was the general lack of industrial-sized QIC-80 tape-formatting machines. Those 80MB backup tapes (gargantuan at a time when 200MB hard disks were huge) were usually formatted at the factory as they came off the line, or you could buy them already formatted at a significantly higher price. - -One of the people who developed that line at 3M noticed that formatting them took an hour—over 90% of their time in manufacturing. The machine developed to speed up formatting was, of course, buggy and years too late. 
- -Being a shrewd businessman, instead of fixing the problem for 3M, he quit his job, bought a bunch of cheap PCs and a giant pile of unformatted tapes, and began paying minimum wage to workers in my hometown of San Marcos, Calif., to stuff them into the PCs and pull them out all day long. Then he sold the formatted tapes at a big markup—but less than what 3M charged for them. It was a success. - -By the time I got there in 1996, they'd streamlined things a bit. They had a big degaussing machine, about 400 486 PCs stuffed with specialized floppy controllers so that you could address eight tape drives in one machine, custom software (including hardware multiplexers for data collection), and contracts with all the major tape makers (Exabyte, 3M, etc.). I thought I was coming in to be a PC repair tech, as I had passed the test, which asked me to identify all the parts of a PC. - -A few weeks in, the lead engineer noticed I had an electronics book (I was studying electronics at [ITT Tech][2], of all places) and pulled me in to help him debug and build the next feature they had cooked up—a custom printed circuit board (PCB) that lit up LEDs to signal a tape's status: formatting (yellow), error (red), or done (green). I didn't write any code or do anything useful, but he still told me I was wasting my time there and should go out and get a real tech job. - -That "real tech job" I got was as a junior sysadmin for a local medical device manufacturer. I helped bridge their HP-UX ERP system to their new parent company's Windows NT printers using Linux and Samba—and I was hooked forever on the power of free and open source software (FOSS). I also did some fun debugging while I was there, which you can [read about][3] on my blog. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/open-source-story-burgers - -作者:[Clint Byrum][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/spamaps -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connections_wires_sysadmin_cable.png?itok=d5WqHmnJ (Multi-colored and directional network computer cables) -[2]: https://en.wikipedia.org/wiki/ITT_Technical_Institute -[3]: https://fewbar.com/2020/04/a-bit-of-analysis/ diff --git a/sources/tech/20210503 Why I support systemd-s plan to take over the world.md b/sources/tech/20210503 Why I support systemd-s plan to take over the world.md deleted file mode 100644 index 1684e78409..0000000000 --- a/sources/tech/20210503 Why I support systemd-s plan to take over the world.md +++ /dev/null @@ -1,188 +0,0 @@ -[#]: subject: (Why I support systemd's plan to take over the world) -[#]: via: (https://opensource.com/article/21/5/systemd) -[#]: author: (David Both https://opensource.com/users/dboth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Why I support systemd's plan to take over the world -====== -There is no nefarious plan, just one to bring service management into -the 21st century. -![A rack of servers, blue background][1] - -Over the years, I have read many articles and posts about how systemd is trying to replace everything and take over everything in Linux. I agree; it is taking over pretty much everything. 
- -But not really "everything-everything." Just "everything" in that middle ground of services that lies between the kernel and things like the GNU core utilities, graphical user interface desktops, and user applications. - -Examining Linux's structure is a way to explore this. The following figure shows the three basic software layers found in the operating system. The bottom is the Linux kernel; the middle layer consists of services that may perform startup tasks, such as launching various other services like Network Time Protocol (NTP), Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), secure shell (SSH), device management, login services, gettys, Network Manager, journal and log management, logical volume management, printing, kernel module management, local and remote filesystems, sound and video, display management, swap space, system statistics collection, and much more. There are also tens of thousands of new and powerful applications at the top layer. - -![systemd services][2] - -systemd and the services it manages with respect to the kernel and application programs, including tools used by the sysadmin. (David Both, [CC BY-SA 4.0][3]) - -This diagram (as well as sysadmins' collective experience over the last several years) makes it clear that systemd is indeed intended to completely replace the old SystemV init system. But I also know (and explained in the previous articles in this systemd series) that it significantly extends the capabilities of the init system. - -It is also important to recognize that, although Linus Torvalds rewrote the Unix kernel as an exercise, he did nothing to change the middle layer of system services. He simply recompiled SystemV init to work with his completely new kernel. SystemV is much older than Linux and has needed a complete change to something totally new for decades. - -So the kernel is new and is refreshed frequently through the leadership of Torvalds and the work of thousands of programmers around the planet. All of the programs on the top layer of the image above also contribute. - -But until recently, there have been no significant enhancements to the init system and management of system services. - -In authoring systemd, [Lennart Poettering][4] has done for system services what Linus Torvalds did for the kernel. Like Torvalds and the Linux kernel, Poettering has become the leader and arbiter of what happens inside this middle system services layer. And I like what I see. - -### More data for the admin - -The new capabilities of systemd include far more status information about services, whether they're running or not. I like having more information about the services I am trying to monitor. For example, look at the DHCPD service. Were I to use the SystemV command, `service dhcpd status`, I would get a simple message that the service is running or stopped. Using the systemd command, `systemctl status dhcpd`, I get much more useful information. - -This data is from the server on my personal network: - - -``` -[root@yorktown ~]# systemctl status dhcpd -● dhcpd.service - DHCPv4 Server Daemon -     Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor preset: disabled) -     Active: active (running) since Fri 2021-04-09 21:43:41 EDT; 4 days ago -       Docs: man:dhcpd(8) -             man:dhcpd.conf(5) -   Main PID: 1385 (dhcpd) -     Status: "Dispatching packets..." 
-      Tasks: 1 (limit: 9382)
-     Memory: 3.6M
-        CPU: 240ms
-     CGroup: /system.slice/dhcpd.service
-             └─1385 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid
-
-Apr 14 20:51:01 yorktown.both.org dhcpd[1385]: DHCPREQUEST for 192.168.0.7 from e0:d5:5e:a2:de:a4 via eno1
-Apr 14 20:51:01 yorktown.both.org dhcpd[1385]: DHCPACK on 192.168.0.7 to e0:d5:5e:a2:de:a4 via eno1
-Apr 14 20:51:14 yorktown.both.org dhcpd[1385]: DHCPREQUEST for 192.168.0.8 from e8:40:f2:3d:0e:a8 via eno1
-Apr 14 20:51:14 yorktown.both.org dhcpd[1385]: DHCPACK on 192.168.0.8 to e8:40:f2:3d:0e:a8 via eno1
-Apr 14 20:51:14 yorktown.both.org dhcpd[1385]: DHCPREQUEST for 192.168.0.201 from 80:fa:5b:63:37:88 via eno1
-Apr 14 20:51:14 yorktown.both.org dhcpd[1385]: DHCPACK on 192.168.0.201 to 80:fa:5b:63:37:88 via eno1
-Apr 14 20:51:24 yorktown.both.org dhcpd[1385]: DHCPREQUEST for 192.168.0.6 from e0:69:95:45:c4:cd via eno1
-Apr 14 20:51:24 yorktown.both.org dhcpd[1385]: DHCPACK on 192.168.0.6 to e0:69:95:45:c4:cd via eno1
-Apr 14 20:52:41 yorktown.both.org dhcpd[1385]: DHCPREQUEST for 192.168.0.5 from 00:1e:4f:df:3a:d7 via eno1
-Apr 14 20:52:41 yorktown.both.org dhcpd[1385]: DHCPACK on 192.168.0.5 to 00:1e:4f:df:3a:d7 via eno1
-[root@yorktown ~]#
-```
-
-Having all this information available in a single command is empowering and simplifies problem determination for me. I get more information right at the start. I not only see that the service is up and running but also some of the most recent log entries.
-
-Here is another example that uses a non-operating-system tool. [BOINC][5], the Berkeley Open Infrastructure for Network Computing client, is used to create ad hoc supercomputers out of millions of home computers around the world that are signed up to participate in the computational stages of many types of scientific studies. I am signed up with the [IBM World Community Grid][6] and participate in studies about COVID-19, mapping cancer markers, rainfall in Africa, and more.
- -The information from this command gives me a more complete picture of how this service is faring: - - -``` -[root@yorktown ~]# systemctl status boinc-client.service -● boinc-client.service - Berkeley Open Infrastructure Network Computing Client -     Loaded: loaded (/usr/lib/systemd/system/boinc-client.service; enabled; vendor preset: disabled) -     Active: active (running) since Fri 2021-04-09 21:43:41 EDT; 4 days ago -       Docs: man:boinc(1) -   Main PID: 1389 (boinc) -      Tasks: 18 (limit: 9382) -     Memory: 1.1G -        CPU: 1month 1w 2d 3h 42min 47.398s -     CGroup: /system.slice/boinc-client.service -             ├─  1389 /usr/bin/boinc -             ├─712591 ../../projects/www.worldcommunitygrid.org/wcgrid_mcm1_map_7.43_x86_64-pc-linux-gnu -SettingsFile MCM1_0174482_7101.txt -DatabaseFile dataset> -             ├─712614 ../../projects/www.worldcommunitygrid.org/wcgrid_mcm1_map_7.43_x86_64-pc-linux-gnu -SettingsFile MCM1_0174448_7280.txt -DatabaseFile dataset> -             ├─713275 ../../projects/www.worldcommunitygrid.org/wcgrid_opn1_autodock_7.17_x86_64-pc-linux-gnu -jobs OPN1_0040707_05092.job -input OPN1_0040707_050> -             ├─713447 ../../projects/www.worldcommunitygrid.org/wcgrid_mcm1_map_7.43_x86_64-pc-linux-gnu -SettingsFile MCM1_0174448_2270.txt -DatabaseFile dataset> -             ├─713517 ../../projects/www.worldcommunitygrid.org/wcgrid_opn1_autodock_7.17_x86_64-pc-linux-gnu -jobs OPN1_0040871_00826.job -input OPN1_0040871_008> -             ├─713657 ../../projects/www.worldcommunitygrid.org/wcgrid_mcm1_map_7.43_x86_64-pc-linux-gnu -SettingsFile MCM1_0174525_7317.txt -DatabaseFile dataset> -             ├─713672 ../../projects/www.worldcommunitygrid.org/wcgrid_mcm1_map_7.43_x86_64-pc-linux-gnu -SettingsFile MCM1_0174529_1537.txt -DatabaseFile dataset> -             └─714586 ../../projects/www.worldcommunitygrid.org/wcgrid_opn1_autodock_7.17_x86_64-pc-linux-gnu -jobs OPN1_0040864_01640.job -input OPN1_0040864_016> - -Apr 14 19:57:16 yorktown.both.org boinc[1389]: 14-Apr-2021 19:57:16 [World Community Grid] Finished upload of OPN1_0040707_05063_0_r181439640_0 -Apr 14 20:57:36 yorktown.both.org boinc[1389]: 14-Apr-2021 20:57:36 [World Community Grid] Sending scheduler request: To report completed tasks. -Apr 14 20:57:36 yorktown.both.org boinc[1389]: 14-Apr-2021 20:57:36 [World Community Grid] Reporting 1 completed tasks -Apr 14 20:57:36 yorktown.both.org boinc[1389]: 14-Apr-2021 20:57:36 [World Community Grid] Not requesting tasks: don't need (job cache full) -Apr 14 20:57:38 yorktown.both.org boinc[1389]: 14-Apr-2021 20:57:38 [World Community Grid] Scheduler request completed -Apr 14 20:57:38 yorktown.both.org boinc[1389]: 14-Apr-2021 20:57:38 [World Community Grid] Project requested delay of 121 seconds -Apr 14 21:38:03 yorktown.both.org boinc[1389]: 14-Apr-2021 21:38:03 [World Community Grid] Computation for task MCM1_0174482_7657_1 finished -Apr 14 21:38:03 yorktown.both.org boinc[1389]: 14-Apr-2021 21:38:03 [World Community Grid] Starting task OPN1_0040864_01640_0 -Apr 14 21:38:05 yorktown.both.org boinc[1389]: 14-Apr-2021 21:38:05 [World Community Grid] Started upload of MCM1_0174482_7657_1_r1768267288_0 -Apr 14 21:38:09 yorktown.both.org boinc[1389]: 14-Apr-2021 21:38:09 [World Community Grid] Finished upload of MCM1_0174482_7657_1_r1768267288_0 -[root@yorktown ~]# -``` - -The key is that the BOINC client runs as a daemon and should be managed by the init system. All software that runs as a daemon should be managed by systemd. 
In fact, even software that still provides SystemV start scripts is managed by systemd. - -### systemd standardizes configuration - -One of the problems I have had over the years is that, even though "Linux is Linux," not all distributions store their configuration files in the same places or use the same names or even formats. With the huge numbers of Linux hosts in the world, that lack of standardization is a problem. I have also encountered horrible config files and SystemV startup files created by developers trying to jump on the Linux bandwagon and who have no idea how to create software for Linux—and especially the services that must be included in the Linux startup sequence. - -The systemd unit files standardize configuration and enforce a startup methodology and organization that provides a level of safety from poorly written SystemV start scripts. They also provide tools that the sysadmin can use to monitor and manage services. - -Lennart Poettering wrote a short blog post describing [standard names and locations][7] for common critical systemd configuration files. This standardization makes the sysadmin's job easier. It also makes it easier to automate administrative tasks in environments with multiple Linux distributions. Developers also benefit from this standardization. - -### Sometimes, the pain - -Any undertaking as massive as replacing and extending an entire init system will cause some level of pain during the transition. I don't mind learning the new commands and how to create configuration files of various types, such as targets, timers, and so on. It does take some work, but I think the results are well worth the effort. - -New configuration files and changes in the subsystems that own and manage them can also seem daunting at first. Not to mention that sometimes new tools such as systemd-resolvd can break the way things have worked for a long time, as I point out in [_Resolve systemd-resolved name-service failures with Ansible_][8]. - -Tools like scripts and Ansible can mitigate the pain while we wait for changes that resolve the pain. - -### Conclusion - -As I write in [_Learning to love systemd_][9], I can work with either SystemV or systemd init systems, and I have reasons for liking and disliking each: - -> "…the real issue and the root cause of most of the controversy between SystemV and systemd is that there is [no choice][10] on the sysadmin level. The choice of whether to use SystemV or systemd has already been made by the developers, maintainers, and packagers of the various distributions—but with good reason. Scooping out and replacing an init system, by its extreme, invasive nature, has a lot of consequences that would be hard to tackle outside the distribution design process." - -Because this wholesale replacement is such a massive undertaking, the developers of systemd have been working in stages for several years and replacing various parts of the init system and services and tools that were not parts of the init system but should have been. Many of systemd's new capabilities are made possible only by its tight integration with the services and tools used to manage modern Linux systems. - -Although there has been some pain along the way and there will undoubtedly be more, I think the long-term plan and goals are good ones. The advantages of systemd that I have experienced are quite significant. - -There is no nefarious plan to take over the world, just one to bring service management into the 21st century. 
- -### Other resources - -There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following web pages offer more detailed and reliable information about systemd startup. This list has grown since I started this series of articles to reflect the research I have done. - - * [5 reasons sysadmins love systemd][11] - * The Fedora Project has a good, practical [guide to systemd][12]. It has pretty much everything you need to know to configure, manage, and maintain a Fedora computer using systemd. - * The Fedora Project also has a good [cheat sheet][13] that cross-references the old SystemV commands to comparable systemd ones. - * The [systemd.unit(5) manual page][14] contains a nice list of unit file sections and their configuration options, along with concise descriptions of each. - * Fedora Magazine has a good description of the [Unit file structure][15] as well as other important information.  - * For detailed technical information about systemd and the reasons for creating it, check out Freedesktop.org's [description of systemd][16]. This page is one of the best I have found because it contains many links to other important and accurate documentation. - * Linux.com's "More systemd fun" offers more advanced systemd [information and tips][17]. - - - -There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. He wrote these articles between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good written about systemd and its ecosystem is based on these papers. These links are all available at [FreeDesktop.org][18]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/systemd - -作者:[David Both][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rack_server_sysadmin_cloud_520.png?itok=fGmwhf8I (A rack of servers, blue background) -[2]: https://opensource.com/sites/default/files/uploads/systemd-architecture_0.png (systemd services) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://en.wikipedia.org/wiki/Lennart_Poettering -[5]: https://boinc.berkeley.edu/ -[6]: https://www.worldcommunitygrid.org/ -[7]: http://0pointer.de/blog/projects/the-new-configuration-files -[8]: https://opensource.com/article/21/4/systemd-resolved -[9]: https://opensource.com/article/20/4/systemd -[10]: http://www.osnews.com/story/28026/Editorial_Thoughts_on_Systemd_and_the_Freedom_to_Choose -[11]: https://opensource.com/article/21/4/sysadmins-love-systemd -[12]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html -[13]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet -[14]: https://man7.org/linux/man-pages/man5/systemd.unit.5.html -[15]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/ -[16]: https://www.freedesktop.org/wiki/Software/systemd/ -[17]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/ -[18]: http://www.freedesktop.org/wiki/Software/systemd diff --git a/sources/tech/20210506 Resolve DHCPD and HTTPD startup failures with Ansible.md b/sources/tech/20210506 Resolve DHCPD and HTTPD startup failures with Ansible.md deleted file mode 100644 index 5590869efc..0000000000 --- a/sources/tech/20210506 Resolve DHCPD and HTTPD startup failures with Ansible.md +++ /dev/null @@ -1,199 +0,0 @@ -[#]: subject: (Resolve DHCPD and HTTPD startup failures with Ansible) -[#]: via: (https://opensource.com/article/21/5/ansible-server-services) -[#]: author: (David Both https://opensource.com/users/dboth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Resolve DHCPD and HTTPD startup failures with Ansible -====== -Ancient remnants can create strange problems. -![Someone wearing a hardhat and carrying code ][1] - -Last year, I had a problem: HTTPD (the [Apache web server][2]) would not start on a reboot or cold boot. To fix it, I added an override file, `/etc/systemd/system/httpd.service.d/override.conf`. It contained the following statements to delay HTTPD's startup until the network is properly started and online. (If you've read my previous [articles][3], you'll know that I use NetworkManager and systemd, not the old SystemV network service and start scripts). - - -``` -# Trying to delay the startup of httpd so that the network is -# fully up and running so that httpd can bind to the correct -# IP address -# -# By David Both, 2020-04-16 -[Unit] -After=network-online.target -Wants=network-online.target -``` - -This circumvention worked until recently when I not only needed to start HTTPD manually; I also had to start DHCPD manually. The wait for the `network-online.target` was no longer working for some reason. 
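A quick way to confirm that an override like this is actually loaded, and to see what a service is ordered after, is with two standard systemd commands (shown here for `httpd.service`; the same check works for `dhcpd.service`):

```
# Show the unit file plus any drop-in override files systemd has loaded for it
systemctl cat httpd.service

# Show the chain of units the service waited on during the last boot
systemd-analyze critical-chain httpd.service
```

If the drop-in shows up in the `systemctl cat` output but the service still starts before the network is usable, the ordering target itself is probably being reached too early, which points to the kind of conflict described in the next section.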
- -### The causes and my fix - -After more internet searches and some digging around my `/etc` directory, I think I discovered the true culprit: I found an ancient remnant from the SystemV and init days in the `/etc/init.d` directory. There was a copy of the old network startup file that should not have been there. I think this file is left over from when I spent some time using the old network program before I switched over to NetworkManager. - -Apparently, systemd did what it is supposed to do. It generated a target file from that SystemV start script on the fly and tried to start the network using both the SystemV start script and systemd target that it created. This caused systemd to try to start HTTPD and DHCPD before the network was ready, and those services timed out and did not start. - -I removed the `/etc/init.d/network` script from my server, and now it reboots without me having to start the HTTPD and DHCPD services manually. This is a much better solution because it gets to the root cause and is not simply a circumvention. - -But this is still not the best solution. That file is owned by the `network-scripts` package and will be replaced if that package is updated. So, I also removed that package from my server, which ensures that this should not happen again. Can you guess how I discovered this? - -After I upgraded to Fedora 34, DHCPD and HTTPD again would not start. After some additional experimentation, I found that the `override.conf` file also needed a couple of lines added. These two new lines force those two services to wait until 60 seconds have passed before starting. That seems to solve the problem again—for now. - -The revised `override.conf` file now looks like the following. It not only sleeps for 60 seconds before starting the services, it specifies that it is not supposed to start until after the `network-online.target` starts. The latter part is what seems to be broken, but I figured I might as well do both things since one or the other usually seems to work. - - -``` -# Delay the startup of any network service so that the -# network is fully up and running so that httpd can bind to the correct -# IP address. -# -# By David Both, 2020-04-28 -# -################################################################################ -#                                                                              # -#  Copyright (C) 2021 David Both                                               # -#  [LinuxGeek46@both.org][4]                                                        # -#                                                                              # -#  This program is free software; you can redistribute it and/or modify        # -#  it under the terms of the GNU General Public License as published by        # -#  the Free Software Foundation; either version 2 of the License, or           # -#  (at your option) any later version.                                         # -#                                                                              # -#  This program is distributed in the hope that it will be useful,             # -#  but WITHOUT ANY WARRANTY; without even the implied warranty of              # -#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the               # -#  GNU General Public License for more details.                                
# -#                                                                              # -#  You should have received a copy of the GNU General Public License           # -#  along with this program; if not, write to the Free Software                 # -#  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA   # -#                                                                              # -################################################################################ - -[Service] -ExecStartPre=/bin/sleep 60 - -[Unit] -After=network-online.target -Wants=network-online.target -``` - -### Making it easier with Ansible - -This is the type of problem that lends itself to an easy solution using Ansible. So, I created a relatively simple playbook. It has two plays. The first play removes the `network-scripts` and then the `/etc/init.d/network` script because if the script is there and the package is not, the script won’t be removed. At least one of my systems had that circumstance. I run this play against all the hosts whether they are workstations or servers. - -The second play runs only against the server and installs the `override.conf` files. - - -``` -################################################################################ -#                                 fix-network                                  # -#                                                                              # -# This Ansible playbook removes the network-scripts package and the            # -# /etc/rc.d/init.d/network SystemV start script. The /etc/init.d/network       # -# script which conflicts with NetworkManager and causes some network services  # -# such as DHCPD and HTTPD to fail to start.                                    # -#                                                                              # -# This playbook also installs override files for httpd and dhcpd which causes  # -# them to wait 60 seconds before starting.                                     # -#                                                                              # -# All of these things taken together seem to resolve or circumvent the issues  # -# that seem to stem from multiple causes.                                      # -#                                                                              # -# NOTE: The override file is service neutral and can be used with any service. # -#       I have found that using the systemctl edit command does not work as    # -#       it is supposed to according to the documenation.                       # -#                                                                              # -#                                                                              # -# From the network-scripts package info:                                       # -#                                                                              # -# : This package contains the legacy scripts for activating & deactivating of most -# : network interfaces. It also provides a legacy version of 'network' service. -# : -# : The 'network' service is enabled by default after installation of this package, -# : and if the network-scripts are installed alongside NetworkManager, then the -# : ifup/ifdown commands from network-scripts take precedence over the ones provided -# : by NetworkManager. 
-# : -# : If user has both network-scripts & NetworkManager installed, and wishes to -# : use ifup/ifdown from NetworkManager primarily, then they has to run command: -# :  $ update-alternatives --config ifup -# : -# : Please note that running the command above will also disable the 'network' -# : service. -#                                                                              # -#                                                                              # -#------------------------------------------------------------------------------# -#                                                                              # -# Change History                                                               # -# 2021/04/26 David Both V01.00 New code.                                       # -# 2021/04/28 David Both V01.10 Revised to also remove network-scripts package. # -#                              Also install an override file to do a 60 second # -#                              timeout before the services start.              #                                                                              #                                                                              # -################################################################################ -\--- -################################################################################ -# Play 1: Remove the /etc/init.d/network file -################################################################################ -\- name: Play 1 - Remove the network-scripts legacy package on all hosts -  hosts: all - -  tasks: -    - name: Remove the network-scripts package if it exists -      dnf: -        name: network-scripts -        state: absent - -    - name: Remove /etc/init.d/network file if it exists but the network-scripts package is not installed -      ansible.builtin.file: -        path: /etc/init.d/network -        state: absent - -\- name: Play 2 - Install override files for the server services -  hosts: server - -  tasks: - -    - name: Install the override file for DHCPD -      copy: -        src: /root/ansible/BasicTools/files/override.conf -        dest: /etc/systemd/system/dhcpd.service.d -        mode: 0644 -        owner: root -        group: root - -    - name: Install the override file for HTTPD -      copy: -        src: /root/ansible/BasicTools/files/override.conf -        dest: /etc/systemd/system/httpd.service.d -        mode: 0644 -        owner: root -        group: root -``` - -This Ansible play removed that bit of cruft from two other hosts on my network and one host on another network that I support. All the hosts that still had the SystemV network script and the `network-scripts` package have not been reinstalled from scratch for several years; they were all upgraded using `dnf-upgrade`. I never circumvented NetworkManager on my newer hosts, so they don't have this problem. - -This playbook also installed the override files for both services. Note that the override file has no reference to the service for which it provides the configuration override. For this reason, it can be used for any service that does not start because the attempt to start them has not allowed the NetworkManager service to finish starting up. - -### Final thoughts - -Although this problem is related to systemd startup, I cannot blame it on systemd. This is, partly at least, a self-inflicted problem caused when I circumvented systemd. 
At the time, I thought I was making things easier for myself, but I have spent more time trying to locate the problem caused by my avoidance of NetworkManager than I ever saved because I had to learn it anyway. Yet in reality, this problem has multiple possible causes, all of which are addressed by the Ansible playbook. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/ansible-server-services - -作者:[David Both][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code ) -[2]: https://opensource.com/article/18/2/how-configure-apache-web-server -[3]: https://opensource.com/users/dboth -[4]: mailto:LinuxGeek46@both.org diff --git a/sources/tech/20210508 Best Open Source LMS for Creating Online Course and e-Learning Websites.md b/sources/tech/20210508 Best Open Source LMS for Creating Online Course and e-Learning Websites.md deleted file mode 100644 index 099429d814..0000000000 --- a/sources/tech/20210508 Best Open Source LMS for Creating Online Course and e-Learning Websites.md +++ /dev/null @@ -1,232 +0,0 @@ -[#]: subject: (Best Open Source LMS for Creating Online Course and e-Learning Websites) -[#]: via: (https://itsfoss.com/best-open-source-lms/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Best Open Source LMS for Creating Online Course and e-Learning Websites -====== - -A Learning Management System (LMS) helps you automate and document the learning programs. It is suitable for both small-scale educational programs and university-level learning programs. - -Of course, even corporate training programs can be hosted using a learning management system. - -While it has a lot of use-cases, having a transparent platform for your Learning Management System should be a benefit for any organization. - -So, in this article, we will be listing some of the best open source LMS. - -### Top Open-Source Learning Management Systems - -To ensure that you have a transparent and secure platform that comes with community and/or professional support, open-source LMS solutions should be a perfect pick. - -You may self-host these software on your own [cloud servers][1] or physical servers. You can also opt for managed hosting from the developers of the LMS system themselves or their official partners. - -**Note**: The list is in no particular order of ranking. - -#### 1\. Moodle - -![][2] - -**Key Features:** - - * Simple user interface - * Plugin availability to extend options - * Collaboration and management options - * Administrative control options - * Regular security updates - - - -Moodle is a popular learning management platform. It features one of the most extensive set of options among any other learning management system out there. It may not offer the most modern and intuitive learning user experience, but it is a simple and feature-rich option as a learning platform. 
- -You get most of the essential options that include calendar, collaborative tools, file management, text editor, progress tracker, notifications, and several more. - -Unfortunately, there’s no managed hosting solution from the team itself. So, you will have to deploy it by yourself on your server or rely on certified partners to do the work. - -[Moodle][3] - -#### 2\. Forma LMS - -![][4] - -**Key Features:** - - * Tailored for corporate training - * Plugin support - * E-commerce integration - * Multi-company support - - - -Forma LMS is an open-source project tailored for corporate training. - -You can add courses, manage them, and also create webinar sessions to enhance your training process remotely. It lets you organize the courses in the form of catalogs while also being able to create multiple editions of courses for different classrooms. - -E-Commerce integration is available with it as well that will let you monetize your training courses in return for certifications. It also gives you the ability to utilize plugins to extend the functionality. - -The key feature of Forma LMS is that it allows you to manage multiple companies using a single installation. - -[Forma LMS][5] - -#### 3\. Open edX - -![][6] - -**Key Features:** - - * A robust platform for university-tailored programs - * Integration with exciting technology offerings for a premium learning experience - - - -If you happen to know a few learning platforms for courses and certifications, you probably know about edX. - -And, Open edX lets you utilize the same technology behind edX platform to offer instructor-led courses, degree programs, and self-paced learning courses. Of course, considering that it is already something successful as a platform used by many companies, you can utilize it for any scale of operation. - -You can opt for self-managed deployment or contact the partners for a managed hosting option to set up your LMS. - -[Open edX][7] - -#### 4\. ELMS Learning Network - -**Key Features:** - - * A suite of tools to choose from - * Distributed learning network - - - -Unlike others, ELMS Learning Network offers a set of tools that you can utilize to set up your learning platform as per your requirements. - -It is not an LMS by itself but through a collection of tools it offers in the network. This may not be a robust option for degree programs or equivalent. You will also find a demo available on their website if you’d like to explore more about it. - -You can also check out its [GitHub page][8] if you’re curious. - -[ELMS Network][9] - -#### 5\. Canvas LMS - -![][10] - -**Key Features:** - - * Fit for small-scale education programs and higher education - * API access - * Plenty of integration options - - - -Canvas LMS is also a quite popular open-source LMS. Similar to Open edX, Canvas LMS is also suitable for a range of applications, be it school education programs or university degrees. - -It offers integrations with several technologies while empowering you with an API that you can connect with Google Classrooms, Zoom, Microsoft Teams, and others. It is also an impressive option if you want to offer mobile learning through your platform. - -You can opt for a free trial to test it out or just deploy it on your server as required. To explore more about it, head to its [GitHub page][11]. - -[Canvas LMS][12] - -#### 6\. 
Sakai LMS - -![][13] - -**Key Features:** - - * Simple interface - * Essential features - - - -Sakai LMS may not be a popular option, but it offers most of the essential features that include course management, grade assessment, app integration, and collaboration tools. - -If you are looking for a simple and effective LMS that does not come with an overwhelming set of options, Sakai LMS can be a good option to choose. - -You can try it for free with a trial account if you want a cloud-based option. In either case, you can check out the [GitHub page][14] to self-host it. - -[Sakai LMS][15] - -#### 6\. Opigno LMS - -![][16] - -**Key Features:** - - * Tailored for corporate training - * Security features - * Authoring tools - * E-commerce integration - - - -Opigno LMS is a [Drupal-based open-source project][17] that caters to the needs of training programs for companies. - -In case you didn’t know, Drupal is an [open-source CMS][18] that you can use to create websites. And, with Opigno LMS, you can create training resources, quizzes, certificates. You can also sell certification courses using this learning platform. - -A simple interface and essential features, that’s what you get here. - -[Opigno LMS][19] - -#### 7\. Sensei LMS - -![][20] - -**Key Features:** - - * WordPress plugin - * Easy to use - * WooCommerce’s integration support - * Offers WooCommerce extensions - - - -Sensei LMA is an impressive open-source project which is a plugin available for WordPress. In fact, it is a project by the same company behind WordPress, i.e. **Automattic**. - -Considering that WordPress powers the majority of web – if you already have a website on WordPress, simply install Sensei as a plugin and incorporate a learning management system quickly, it is that easy! - -You can manage your courses, and also sell them online if you need. It also supports multiple WooCommerce extensions to give you more control on managing and monetizing the platform. - -[Sensei LMS][21] - -### Wrapping Up - -Most of the LMS should offer you the basic essentials of managing learning programs and courses along with the ability to sell them online. However, they differ based on their 3rd party integrations, ease of use, user interface, and plugins. - -So, make sure to go through all the available resources before you plan on setting up a learning management system for your educational institute or company training. - -Did I miss listing any other interesting open-source LMS? Let me know in the comments down below. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-open-source-lms/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://linuxhandbook.com/free-linux-cloud-servers/ -[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/moodle-dashboard.png?resize=800%2C627&ssl=1 -[3]: https://moodle.com -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/forma-lms.png?resize=800%2C489&ssl=1 -[5]: https://www.formalms.org/ -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/open-edx.png?resize=800%2C371&ssl=1 -[7]: https://open.edx.org/ -[8]: https://github.com/elmsln/elmsln -[9]: https://www.elmsln.org/ -[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/canvas-lms.png?resize=800%2C417&ssl=1 -[11]: https://github.com/instructure/canvas-lms -[12]: https://www.instructure.com/en-au/canvas -[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/sakai-lms.png?resize=800%2C388&ssl=1 -[14]: https://github.com/sakaiproject/sakai -[15]: https://www.sakailms.org -[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/opigno-screenshot.jpg?resize=800%2C714&ssl=1 -[17]: https://www.drupal.org/project/opigno_lms -[18]: https://itsfoss.com/open-source-cms/ -[19]: https://www.opigno.org/solution -[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/sensei-quiz.png?resize=800%2C620&ssl=1 -[21]: https://senseilms.com/ diff --git a/sources/tech/20210510 Getting better at counting rpm-ostree based systems.md b/sources/tech/20210510 Getting better at counting rpm-ostree based systems.md deleted file mode 100644 index 8695d52dea..0000000000 --- a/sources/tech/20210510 Getting better at counting rpm-ostree based systems.md +++ /dev/null @@ -1,84 +0,0 @@ -[#]: subject: (Getting better at counting rpm-ostree based systems) -[#]: via: (https://fedoramagazine.org/getting-better-at-counting-rpm-ostree-based-systems/) -[#]: author: (Timothée Ravier https://fedoramagazine.org/author/siosm/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Getting better at counting rpm-ostree based systems -====== - -![][1] - -Photo by [Joost Crop][2] on [Unsplash][3] - -This article describes the extension of the Fedora 32 user count mechanism to _rpm-ostree_ based systems. It also provides tips for opting out, if necessary. - -### How Fedora counts users - -Since the release of Fedora 32, a new mechanism has been in place to better count the number of Fedora users while respecting their privacy. This system is explicitly designed to make sure that no personally identifiable information is sent from counted systems. It also ensures that the Fedora infrastructure does not collect any personal data. The nickname for this new counting mechanism is “Count Me”, from the option name. Details are available in [DNF Better Counting change request for Fedora 32][4]. In short, the Count Me mechanism works by telling Fedora servers how old your system is (with a very large approximation). This occurs randomly during a metadata refresh request performed by DNF. - -### Adding support for rpm-ostree based systems - -The current mechanism works great for classic editions of Fedora (Workstation, Server, Spins, etc.). 
However, _rpm-ostree_ based systems (such as Fedora Silverblue, Fedora IoT and Fedora CoreOS) do not fetch any repository metadata in the default case. This means they can not take advantage of this mechanism. We thus decided to implement a stand-alone method, based on the same logic, in _rpm-ostree_. The new implementation has the same privacy preserving properties as the original DNF implementation. - -### Time line - -Our new Count Me mechanism will be enabled by default in the upcoming Fedora 34 release for Fedora IoT and Fedora Silverblue. This will occur for both upgraded machines and for new installs. For instructions on opting out, see below. - -Since Fedora CoreOS is an automatically updating operating system, existing machines will adopt the Count Me logic without user intervention. However, counting will be enabled approximately three months after publication of this article. This delay is to ensure that users have time to opt out if they prefer to do so. Thus, default counting will be enabled starting with the _testing_ and _next_ Fedora CoreOS releases that will be published at the beginning of August 2021 and in the _stable_ release that will go out two weeks after. - -More information is available in the [tracking issue for Fedora CoreOS][5]. - -### Opting out of counting - -Full instructions on disabling this functionality are available in the [rpm-ostree documentation][6]. We are reproducing them here for convenience. - -#### Disable the process - -You can disable counting by stopping the _rpm-ostree-countme.timer_ and masking the corresponding unit, as a precaution: - -``` -$ systemctl mask --now rpm-ostree-countme.timer -``` - -Execute that command in advance to disable the default counting when you update to Fedora 34. - -#### Modify your Butane configuration - -Fedora CoreOS users can use the same _systemctl_ command to manually mask the unit. You may also use the following snippet as part of your Butane config to disable counting on first boot via Ignition: - -``` -variant: fcos -version: 1.3.0 -systemd: - units: - - name: rpm-ostree-countme.timer - enabled: false - mask: true -``` - -[Fedora CoreOS documentation][7] contains details about using the Butane config snippet and how Fedora CoreOS is provisioned. 
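Whichever route you choose — masking the timer by hand or shipping the mask in your Butane config — you can confirm the result with systemd itself. The unit name comes from the instructions above; the output shown is simply what a masked unit reports:

```
$ systemctl is-enabled rpm-ostree-countme.timer
masked
```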
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/getting-better-at-counting-rpm-ostree-based-systems/ - -作者:[Timothée Ravier][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/siosm/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/count_rpm_ostree-816x345.jpg -[2]: https://unsplash.com/@smallcamerabigpictures?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/calculator?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoraproject.org/wiki/Changes/DNF_Better_Counting -[5]: https://github.com/coreos/fedora-coreos-tracker/issues/717 -[6]: https://coreos.github.io/rpm-ostree/countme/ -[7]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/ diff --git a/sources/tech/20210510 Getting started with edge development on Linux using open source.md b/sources/tech/20210510 Getting started with edge development on Linux using open source.md deleted file mode 100644 index 0fc192f7c0..0000000000 --- a/sources/tech/20210510 Getting started with edge development on Linux using open source.md +++ /dev/null @@ -1,158 +0,0 @@ -[#]: subject: (Getting started with edge development on Linux using open source) -[#]: via: (https://opensource.com/article/21/5/edge-quarkus-linux) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Getting started with edge development on Linux using open source -====== -Leverage Quarkus to scale IoT application development and deployment -environments. -![Looking at a map][1] - -There are many reasons why Linux is such a popular platform for processing Internet of Things (IoT) edge applications. A major one is transparency. Linux security capabilities are built on open source projects, giving users a transparent view of security risks and threats and enables them to apply fixes quickly with security module patches or kernel-level updates. Another Linux advantage is that developers can choose from various programming languages to develop, test, and run device communications over various networking protocols—other than HTTP(s)—when developing IoT edge applications. It also enables developers to address server programming for controlling data flow from IoT devices to front-end graphical user interface (GUI) applications. - -This article explains how to get started with IoT edge development using [Quarkus][2], a cloud-native Java framework that enables you to integrate a lightweight [message broker][3] for processing data streams from IoT devices in a reactive way. - -For this article, I'm using [CentOS Stream][4], which I feel provides a reliable open source platform to handle the business applications I work on, from traditional enterprise Java to cloud, IoT edge, artificial intelligence (AI), and machine learning (ML) environments. It's a midstream platform operating between [Fedora][5] and [Red Hat Enterprise Linux][6] (RHEL). - -**[Read next: [Deploy Quarkus everywhere with RHEL][7]]** - -![High-level architecture for IoT edge development][8] - -(Daniel Oh, [CC BY-SA 4.0][9]) - -You don't have to use CentOS to use Quarkus, of course. 
However, if you want to follow along with this article precisely, you can install [CentOS Stream][10] so there will be no difference between what you read here and what you see onscreen. - -You can learn more about Quarkus by reading my article _[Writing Java with Quarkus in VS Code][11]_. - -### Step 1: Send IoT data to the lightweight message broker - -To quickly spin up a lightweight message broker, you can use [Eclipse Mosquitto][12]. It's an open source message broker that implements the MQTT protocol. [MQTT][13] processes messages across IoT devices, such as low-power sensors, mobile phones, embedded computers, and microcontrollers. Mosquitto can be [installed][14] on various devices and operating system platforms, but you can also spin up the broker container image after installing a container engine (e.g., [Docker][15]) and a command-line interface (CLI) tool. - -I use the [Podman][16] tool for running Linux containers. Compared to other container engines, this saves resources (CPU and memory especially) when you install and run an extra container engine in your environment. If you haven't already, [install Podman][17] before continuing. Then run the Mosquitto message broker with this command: - - -``` -$ podman run --name mosquitto \ -\--rm -p "9001:9001" -p "1883:1883" \ -eclipse-mosquitto:1.6.2 -``` - -You see this output: - - -``` -1619384779: mosquitto version 1.6.2 starting -1619384779: Config loaded from /mosquitto/config/mosquitto.conf. -1619384779: Opening ipv4 listen to socket on port 1883. -1619384779: Opening ipv6 listen socket on port 1883. -``` - -### Step 2: Process reactive data streams with Quarkus - -For this example, imagine you have IoT devices connected to a warehouse that continually send temperature and heat data to back-end servers to monitor the building's condition and save power resources. - -Your imaginary setup uses one [ESP8266-01][18] WiFi module that streams temperature and heat data in the JSON data format. The stream's IoT edge data is transmitted to the Mosiquitto message broker server running on your machine. - -Define the ESP8266-01 emulator in a Java application on Quarkus: - - -``` -Device esp8266 = new Device("ESP8266-01"); - -@Outgoing("device-temp") -public Flowable<String> generate() { -  return Flowable.interval(2, TimeUnit.SECONDS) -    .onBackpressureDrop() -    .map(t -> { -      [String][19] data = esp8266.toString(); -      return data; -  }); -} -``` - -Quarkus also enables you to process data streams and event sources with the [SmallRye Reactive Messaging][20] extension, which interacts with various messaging technologies such as [Apache Kafka][21], [AMQP][22], and especially MQTT, the standard for IoT messaging. This code snippet shows how to specify incoming data streams with an `@Incoming()` annotation: - - -``` -@Incoming("devices") -@Outgoing("my-data-stream") -@Broadcast -public String process(byte[] data) { -  String d = new String(data); -  return d; -} -``` - -You can find this solution in my [GitHub repository][23]. - -#### Step 3: Monitor the real-time data channel - -Quarkus uses reactive messaging and channels to receive, process, and showcase messages with a browser-based front-end application. You can run the Quarkus application in development mode for live coding or continue adding code in the inner-loop development workflow. 
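Before starting the application, the messaging channels used in the code above have to be bound to the Mosquitto broker from Step 1. The exact wiring lives in the GitHub repository linked above; the `application.properties` entries below are only a sketch, assuming the SmallRye MQTT messaging extension is on the classpath and the broker is listening locally on port 1883 (the channel names come from the snippets, the topic name is an assumption):

```
# Sketch only -- see the demo repository for the actual configuration.
# Publish the generated device data to a local MQTT topic.
mp.messaging.outgoing.device-temp.connector=smallrye-mqtt
mp.messaging.outgoing.device-temp.host=localhost
mp.messaging.outgoing.device-temp.port=1883
mp.messaging.outgoing.device-temp.topic=devices

# Consume the same topic on the "devices" channel for processing.
mp.messaging.incoming.devices.connector=smallrye-mqtt
mp.messaging.incoming.devices.host=localhost
mp.messaging.incoming.devices.port=1883
mp.messaging.incoming.devices.topic=devices
```

The `my-data-stream` channel is broadcast in-process to the dashboard, so it normally needs no connector entry.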
- -Issue the following Maven command to build and start the application: - - -``` -`./mvnw compile quarkus:dev` -``` - -Once your Quarkus application starts, you should see incoming IoT data from the ESP8266-01 device. - -![Incoming IoT data in Quarkus][24] - -(Daniel Oh, [CC BY-SA 4.0][9]) - -You can use the dashboard to monitor how the IoT edge data (e.g., temperature, heat) is processing. Open a new web browser and navigate to [http://localhost:8080][25]. You should start seeing some statistics. - -![IoT data graph][26] - -(Daniel Oh, [CC BY-SA 4.0][9]) - -### Conclusion - -With Quarkus, enterprises can scale application development and deployment environments with minimal cost and without high maintenance or licensing fees. From a DevOps perspective, enterprise developers can still use familiar open source technologies (such as Java) to implement IoT edge applications, while operators can control and monitor production using a Linux-based system (like CentOS Stream) with data gathered from big data, IoT, and artificial intelligence (AI) technologies. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/edge-quarkus-linux - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) -[2]: https://quarkus.io/ -[3]: https://www.ibm.com/cloud/learn/message-brokers -[4]: https://www.centos.org/centos-stream/ -[5]: https://getfedora.org/ -[6]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux -[7]: https://developers.redhat.com/blog/2021/04/07/deploy-quarkus-everywhere-with-red-hat-enterprise-linux-rhel/ -[8]: https://opensource.com/sites/default/files/uploads/iot-edge-architecture.png (High-level architecture for IoT edge development) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: https://www.centos.org/download/ -[11]: https://opensource.com/article/20/4/java-quarkus-vs-code -[12]: https://mosquitto.org/ -[13]: https://mqtt.org/ -[14]: https://mosquitto.org/download/ -[15]: https://opensource.com/resources/what-docker -[16]: https://podman.io/ -[17]: https://podman.io/getting-started/installation -[18]: https://www.instructables.com/Getting-Started-With-the-ESP8266-ESP-01/ -[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string -[20]: https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2/index.html -[21]: https://kafka.apache.org/ -[22]: https://www.amqp.org/ -[23]: https://github.com/danieloh30/quarkus-edge-mqtt-demo -[24]: https://opensource.com/sites/default/files/uploads/quarkus_incoming-iot-data.png (Incoming IoT data in Quarkus) -[25]: http://localhost:8080/ -[26]: https://opensource.com/sites/default/files/uploads/iot-graph.png (IoT data graph) diff --git a/sources/tech/20210511 Use the Alpine email client in your Linux terminal.md b/sources/tech/20210511 Use the Alpine email client in your Linux terminal.md deleted file mode 100644 index 7f4e8155e6..0000000000 --- a/sources/tech/20210511 Use the Alpine email client in your Linux terminal.md +++ /dev/null @@ -1,390 +0,0 @@ -[#]: subject: "Use the Alpine 
email client in your Linux terminal" -[#]: via: "https://opensource.com/article/21/5/alpine-linux-email" -[#]: author: "David Both https://opensource.com/users/dboth" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Use the Alpine email client in your Linux terminal -====== -Configure Alpine to handle your email the way you like it. - -![Chat via email][1] - -Email is an important communications medium and will remain so for the foreseeable future. I have used many different email clients over the last 30 years, and [Thunderbird][2] is what I have used the most in recent years. It is an excellent and functional desktop application that provides all the features that most people need—including me. - -One of the things that makes a good system administrator is curiosity—and I have more than my share. Over the last few months, I have become dissatisfied with Thunderbird—not because of anything particularly wrong with it. Rather, after many years, I grew tired of it. I was curious about whether I could find an email client to provide a better (or at least different) experience than Thunderbird and be at least as efficient. - -I decided it was time for a change—and not just to a different graphical user interface (GUI) mail client. None of the other GUI-based email clients available on Linux have ever really appealed to me. I finally realized that what I wanted was to go back to [Alpine][3], the descendant of Pine, the text user interface (TUI) email client I used for a time about 20 years ago. - -This desire to go retro with my email client started back in 2017 when I wrote an [article about Alpine][4] for Opensource.com. I described how I used Alpine to circumvent problems sending emails from ISP networks when I was traveling away from my home email system. - -I recently decided to exclusively use Alpine for email. The main attraction is the ease of use offered by keeping my hands on the keyboard (and reducing the number of times I need to reach for the mouse). It is also about scratching my sysadmin itch to do something different and use an excellent text mode interface in the process. - -### Getting started - -I already had Alpine set up from my previous use, so it was just a matter of starting to use it again. - -Well, not really. - -I previously set up Alpine on my mail server—I used secure shell (SSH) to log into the email server using my email account and then launched Alpine to access my email. I explained this in my previous article, but the bottom line is that I wanted to circumvent ISPs that block outbound port 25 for mail transfer in the name of spam reduction. A bit of bother, really. - -But now I want to run Alpine on my workstation or laptop. It's relatively simple to configure Alpine on the same host as the email server. Using it on a remote computer requires a good bit more. - -### Install Alpine - -Installing Alpine on Fedora is simple because it is available from the Fedora repository. Just use DNF as root: - -``` -# dnf -y install alpine -``` - -This command installs Alpine and any prerequisite packages that are not already installed. Alpine's primary dependencies are Sendmail, Hunspell, OpenLDAP, OpenSSL, krb5-libs, ncurses, and a couple of others. In my case, Alpine was the only package installed. - -### Launch Alpine - -To launch Alpine, open a terminal session, type **alpine** on the command line, and press **Enter**. 
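The same command works if you are logged in to the mail server over SSH, as described earlier; the user and host names below are only placeholders:

```
$ ssh user@mail.example.com
$ alpine
```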
- -The first time you start Alpine, it displays a message that it is creating the user directory structure on the localhost. It then displays a Welcome message, and if you press **Enter**, you are treated to a copy of Apache's license. That is good, and you should probably read the license at some point so that you know its terms. But the most important thing right now is to configure Alpine to get your email. - -For now, just press lowercase **e** to exit from the greeting message. You should now see Alpine's Main menu (I deleted several blank lines of the output to save space): - -``` -+----------------------------------------------------+ -| ALPINE 2.24 MAIN MENU Folder: INBOX No Messages    | -|                                                    | -| HELP - Get help using Alpine                       | -|                                                    | -| C COMPOSE MESSAGE - Compose and send a message     | -|                                                    | -| I MESSAGE INDEX - View messages in current folder  | -|                                                    | -| L FOLDER LIST - Select a folder to view            | -|                                                    | -| A ADDRESS BOOK - Update address book               | -|                                                    | -| S SETUP - Configure Alpine Options                 | -|                                                    | -| Q QUIT - Leave the Alpine program                  | -|                                                    | -|                                                    | -|                                                    | -|                                                    | -|                                                    | -| For Copyright information press "?"                | -|                                                    | -| ? Help P PrevCmd R RelNotes                        | -| O OTHER CMDS > [ListFldrs] N NextCmd K KBLock      | -+----------------------------------------------------+ -``` - -*Figure 1: Alpine's Main menu* - -Alpine creates the `~mail` directory localhost during initial use. When you configure the IMAP server, Alpine creates the default `~/mail`, `~/mail/sent-mail`, and `saved-messages` folders in your home directory on the IMAP server. You can change the defaults, but I recommend against it. When using IMAP, emails are not stored locally unless you copy them to local folders. All emails are stored in the Inbox on the SMTP server until they are saved to a folder on the IMAP server. The SMTP and IMAP servers might use the same or different hosts. - -Alpine also assumes that the Inbox is located at `/var/spool/mail/user_name` on the email SMTP server. This article explains how to configure both IMAP and SMTP servers. The email administrator for your organization—that might be you—will add your account to the IMAP server and provide you with the initial password. - -### The Alpine interface - -The Alpine user interface (UI) is a text-mode, menu-driven UI, also known as a TUI. This type of interface is also sometimes called captive user interface (CUI), which does not provide a command-line interface that can be used in scripts, for example. You must exit from the program to perform other tasks. - -By contrast, the [mailx][5] program is an email program that can be used with either a TUI, from the command line, or in scripts. 
For example, you can use the following command to send the results of the free command directly to the sysadmin's email account: - -``` -$ free | mailx -s "Free memory" sysadmin@example.com -``` - -But enough of that little side trip; there is work to do. Let's start with an explanation. - -Notice in Figure 1 that all of the possible options in the Main menu in the center of the interface and the menu items along the bottom of the Alpine UI are shown as uppercase letters. But you can use either uppercase or lowercase when issuing commands; Alpine recognizes and responds to both. Uppercase is easier to see and recognize in the interface, but it's easier to use lowercase to enter commands and make menu selections. I will use uppercase letters in bold throughout this article to indicate menu selections (to mimic the Alpine UI). - -On the Main menu, you can use the **Up** and **Down** arrow keys to highlight a different option and then press **Enter** to select it. The only way to access the menu items along the bottom of the Alpine screen (which I call the secondary menu, for lack of a better term) is by using the letter designated for each. There are two sets of secondary menu items. You can press **O** (the letter, not the number) to switch to the next set of commands, and press **O** again to toggle back to the original set. This keystroke only changes the secondary menu items. - -Use the **Page Down** and **Page Up** keys to scroll through the commands if you can't see them all. The secondary menu at the bottom of the page usually lists all the commands available on the current menu; you will also see a message similar to this: - -``` -[START of Information About Setup Command] -``` - -Should you find yourself at a place you don't want to be, such as creating a new email, responding to one, or making changes to settings, and decide you don't want to do that, **Ctrl+C** allows you to cancel the current task. In most cases, you will be asked to confirm that you want to cancel by pressing the **C** key. Note that **^C** in the secondary menu represents **Ctrl+C**. Many commands use the **Ctrl** key, so you will see **^** quite frequently on some menus. - -Finally, to quit Alpine, you can press **Q**; when it asks, "Really quit Alpine?" respond with **Y** to exit. Like many commands, **Q** is not available from all menus. - -### Help - -Help is available from all of the menus I have tried. You can access detailed help for each menu item by highlighting the item you need information for and pressing the **?** key to obtain context-sensitive help. - -### Configuration - -When I started using Alpine regularly, I made the minimum changes to the configuration needed to send and receive emails. As I gained more experience with Alpine, I changed other configuration items to make things work easier or more to my liking. - -First, I will explain the basic configurations required to make Alpine work, then move on to ones that make it work better. - -If you have been exploring a bit on your own—which is a good thing—return to the Main menu. To get to Alpine's Configuration menu from the Main menu, type **S** for Setup. You will see a menu like this: - -``` -ALPINE 2.24 SETUP Folder: INBOX No Messages - -This is the Setup screen for Alpine. Choose from the following commands: - -(E) Exit Setup: -This puts you back at the Main Menu. - -(P) Printer: -Allows you to set a default printer and to define custom -print commands. - -(N) Newpassword: -Change your password. 
- -(C) Config: -Allows you to set or unset many features of Alpine. -You may also set the values of many options with this command. - -(S) Signature: -Enter or edit a custom signature which will -be included with each new message you send. -  -(A) AddressBooks: -Define a non-default address book. -  -(L) collectionLists: -You may define groups of folders to help you better organize your mail. -  -(R) Rules: -This has up to six sub-categories: Roles, Index Colors, Filters, - [START of Information About Setup Command ] -? Help E Exit Setup N Newpassword S Signature L collectionList D Directory   -O OTHER CMDS P Printer C Config A AddressBooks R Rules K Kolor -``` - -*Figure 2: Alpine Setup menu* - -The Setup menu groups the very large number of setup items into related categories to, hopefully, make the ones you want easier to locate. Use **Page Down** and **Page Up** to scroll through the commands if you can't see them all. - -I'll start with the settings necessary to get email—Alpine's entire purpose—up and running. - -### Config - -The Config section contains 15 pages (on my large screen) of option- and feature-configuration items. These settings can be used to set up your SMTP and IMAP connections to the email server and define the way many aspects of Alpine work. In these examples, I'll use the `example.com` domain name (which is the virtual network I use for testing and experimenting). Alpine's configuration is stored in the `~/.pinerc` file, created the first time you start Alpine. - -The first page of the Setup Configuration menu contains most of the settings required to configure Alpine to send and receive email: - -``` -ALPINE 2.24 SETUP CONFIGURATION Folder: INBOX No Messages - -Personal Name = -User Domain = -SMTP Server (for sending) = -NNTP Server (for news) = -Inbox Path = -Incoming Archive Folders = -Pruned Folders = -Default Fcc (File carbon copy) = -Default Saved Message Folder = -Postponed Folder = -Read Message Folder = -Form Letter Folder = -Trash Folder = -Literal Signature = -Signature File = -Feature List = -Set Feature Name ---- ---------------------- -[ Composer Preferences ] -[X] Allow Changing From (default) -[ ] Alternate Compose Menu -[ ] Alternate Role (#) Menu -[ ] Compose Cancel Confirm Uses Yes -[ ] Compose Rejects Unqualified Addresses -[ ] Compose Send Offers First Filter -[ ] Ctrl-K Cuts From Cursor -[ ] Delete Key Maps to Ctrl-D -[ ] Do Not Save to Deadletter on Cancel -[Already at start of screen] -? Help E Exit Setup P Prev - PrevPage A Add Value % Print -O OTHER CMDS C [Change Val] N Next Spc NextPage D Delete Val W WhereIs -``` - -*Figure 3: First page of Alpine's Setup Configuration menu* - -This is where you define the parameters required to communicate with the email server. To change a setting, use the **Arrow** keys to move the selection bar to the desired configuration item and press **Enter**. You can see in Figure 3 that none of the basic configuration items have any values set. - -The **Personal Name** item uses the [Gecos field][6] of the Unix `/etc/passwd` entry for the logged-in user to obtain the default name. This is just a name Alpine uses for display and has no role in receiving or sending email. I usually call this the "pretty name." In this case, the default name is fine, so I will leave it as it is. - -There are some configuration items that you must set. Start with the **User Domain**, which is the current computer's domain name. Mine is a virtual machine I use for testing and examples in my books. 
Use the command line to get the fully qualified domain name (FQDN) and the hostname. In Figure 4, you can see that the domain name is `example.com` : - -``` -$ hostnamectl -Static hostname: testvm1.example.com -Icon name: computer-vm -Chassis: vm -Machine ID: 616ed83d97594a53814c35bc6c078d43 -Boot ID: fd721c46a9c44c9ab8ea392cef77b661 -Virtualization: oracle -Operating System: Fedora 33 (Xfce) -CPE OS Name: cpe:/o:fedoraproject:fedora:33 -Kernel: Linux 5.10.23-200.fc33.x86_64 -Architecture: x86-64 -``` - -*Figure 4: Obtaining the hostname and domain name* - -Once you have the FQDN, select the **User Domain** entry and press **Enter** to see the entry field at the bottom of the Alpine screen (as shown in Figure 5). Type your domain name and press **Enter** (using *your* network's domain and server names): - -``` -ALPINE 2.24 SETUP CONFIGURATION Folder: INBOX No Messages - -Personal Name = -User Domain = -SMTP Server (for sending) = -NNTP Server (for news) = -Inbox Path = -Incoming Archive Folders = -Pruned Folders = -Default Fcc (File carbon copy) = -Default Saved Message Folder = -Postponed Folder = -Read Message Folder = -Form Letter Folder = -Trash Folder = -Literal Signature = -Signature File = -Feature List = -Set Feature Name ---- ---------------------- -[ Composer Preferences ] -[X] Allow Changing From (default) -[ ] Alternate Compose Menu -[ ] Alternate Role (#) Menu -[ ] Compose Cancel Confirm Uses Yes -[ ] Compose Rejects Unqualified Addresses -[ ] Compose Send Offers First Filter -[ ] Ctrl-K Cuts From Cursor -[ ] Delete Key Maps to Ctrl-D -[ ] Do Not Save to Deadletter on Cancel -Enter the text to be added : example.com -^G Help -^C Cancel Ret Accept -``` - -*Figure 5: Type the domain name into the text entry field.* - -#### Required config - -These are the basic configuration items you need to send and receive email: - -* Personal Name - * Your name - * This is the pretty name Alpine uses for the From and Return fields in emails. -* User Domain - * example.com:25/user=SMTP_Authentication_UserName - * This is the email domain for your email client. This might be different from the User Domain name. This line also contains the SMTP port number and the user name for SMTP authentication. -* * SMTP server -SMTP - * This is the name of the outbound SMTP email server. It combines with the User Domain name to create the FQDN for the email server. -* Inbox Path - * {IMAP_server)}Inbox - * This is the name of the IMAP server enclosed in curly braces ({}) and the name of the Inbox. Note that this directory location is different from the inbound IMAP email. The usual location for the inbox on the server is `/var/spool/mail/user_name`. -* Default Fcc (file carbon copy) - * {IMAP_server)}mail/sent - * This is the mailbox (folder) where sent mail is stored. The default mail directory on the server is usually `~/mail`, but `mail/` must be specified in this and the next two entries, or the folders will be placed in the home directory instead. -* Default Saved Message Folder - * {IMAP_server)}mail/saved-messages - * This is the default folder when saving a message to a folder if you don't use `^t` to specify a different one. -* Trash Folder - * {IMAP_server)}mail/Trash -* Literal Signature - * A signature string - * I don't use this, but it's an easy place to specify a simple signature. -* Signature File - * ~/MySignature.sig - * This points to the file that contains your signature file. - -#### Optional config - -Here are the features I changed to make Alpine work more to my liking. 
They are not about getting Alpine to send and receive email, but about making Alpine work the way you want it to. Unless otherwise noted, I turned all of these features on. Features that are turned on by default have the string `(default)` next to them in the Alpine display. Because they are already turned on, I will not describe them. - -* Alternate Role (#) Menu: This allows multiple identities using different email addresses on the same client and server. The server must be configured to allow multiple addresses to be delivered to your primary email account. -* Compose Rejects Unqualified Addresses: Alpine will not accept an address that is not fully qualified. That is, it must be in the form ``. -* Enable Sigdashes: This enables Alpine to automatically add dashes (--) in the row just above the signature. This is a common way of delineating the start of the signature. -* Prevent User Lookup in Password File: This prevents the lookup of the full user name from the Gecos field of the passwd file. -* Spell Check Before Sending: Although you can invoke the spell checker at any time while composing an email, this forces a spell check when you use the `^X` keystroke to send an email. -* Include Header in Reply: This includes a message's headers when you reply. -* Include Text in Reply: This includes the text of the original message in your reply. -* Signature at Bottom: Many people prefer to have their signature at the very bottom of the email. This setting changes the default, which puts the signature at the end of the reply and before the message being replied to. -* Preserve Original Fields: This preserves the original addresses in the To: and CC: fields when you reply to a message. If this feature is disabled when you reply to a message, the original sender is added to the To: field, all other recipients are added to the CC: field, and your address is added to the From: field. -* Enable Background Sending: This speeds the Alpine user interface response when sending an email. -* Enable Verbose SMTP Posting: This produces more verbose information during SMTP conversations with the server. It is a problem-determination aid for the sysadmin. -* Warn if Blank Subject: This prevents sending emails with no subject. -* Combined Folder Display: This combines all folder collections into a single main display. Otherwise, collections will be in separate views. -* Combined Subdirectory Display: This combines all subdirectories' collections into a single main display. Otherwise, subdirectories will be in separate views. This is useful when searching for a subdirectory to attach or save files. -* Enable Incoming Folders Collection: This lists all incoming folders in the same collection as the Inbox. Incoming folders can be used with a tool like procmail to presort email into folders other than the Inbox and makes it easier to see the folders where new emails are sorted. -* Enable Incoming Folders Checking: This enables Alpine to check for new emails in the incoming folders collection. -* Incoming Checking Includes Total: This displays the number of old and new emails in the incoming folders. -* Expanded View of Folders: This displays all folders in each collection when you view the Folder List screen. Otherwise, only the collections are shown, and the folders are not shown until selected. -* Separate Folder and Directory Entries: If your mail directory has email folders and regular directories that use the same name, this causes Alpine to list them separately. 
-* Use Vertical Folder List: This sorts mail folders vertically first and then horizontally. The default is horizontal, then vertical. -* Convert Dates To Localtime: By default, all dates and times are displayed in their originating time zones. This converts the dates to display in local time. -* Show Sort in Titlebar: Alpine can sort emails in a mail folder using multiple criteria. This causes the sort criteria to be displayed in the title bar. -* Enable Message View Address Links: This highlights email addresses in the body of the email. -* Enable Message View Attachment Links: This highlights URL links in the body of the email. -* Prefer Plain Text: Many emails contain two versions, plain text and HTML. When this feature is turned on, Alpine always displays the plain text version. You can use the A key to toggle to the "preferred" version, usually the HTML one. I usually find the plain text easier to visualize the structure of and read the email. This can depend upon the sending client, so I use the A key when needed. -* Enable Print Via Y Command: This prints a message using the previous default, Y. Because Y is also used to confirm many commands, the keystroke can inadvertently cause you to print a message. The new default is % to prevent accidental printing. I like the ease of using Y, but it has caused some extra print jobs, so I am thinking about turning this feature off. -* Print Formfeed Between Messages: This prints each message on a new sheet of paper. -* Customized Headers: Customized headers enables overriding the default From: and Reply-To: headers. I set mine to: --   From: "David Both" <[david@example.com](mailto:david@both.org)> --   Reply-To: "David Both" -    <[david@example.com](mailto:david@both.org)> -* Sort key: By default, Alpine sorts messages in a folder by arrival time. I found this to be a bit confusing, so I changed it to Date, which can be significantly different from arrival time. Many spammers use dates and times in the past or future, so this setting can sort the future ones to the top of the list (or bottom, depending on your preferences for forward or reverse sorts). -* Image Viewer: This feature allows you to specify the image viewer to use when displaying graphics attached to or embedded in an email. This only works when using Alpine in a terminal window on the graphical desktop. It will not work in a text-only virtual console. I always set this to `=okular` because [Okular][7] is my preferred viewer. -* URL-Viewer: This tells Alpine what web browser you want to use. I set this for `= /bin/firefox` but you could use Chrome or another browser. Be sure to verify the location of the Firefox executable. - -#### Printing - -It is easy to set up Alpine for printing. Select the **Printer** menu from the **Setup** page. This allows you to set a default printer and define custom print commands. The default is probably `attached-to-ansi`. Move the cursor down to the **Standard UNIX print command** section and highlight the printer list. - -``` -Standard UNIX print command - -Using this option may require setting your "PRINTER" or "LPDEST" - -environment variable using the standard UNIX utilities. - -Printer List: "" lpr -``` - -Then press the **Enter** key to set the standard Unix **lpr** command as the default. - -### Final thoughts - -This is not a step-by-step guide to Alpine configuration and use. Rather, I tried to cover the basics to get it up and running to send and receive email. 
I also shared some configuration changes that have made my Alpine experience much more usable. These are the configuration items that I've found most important to my experience; you may find that others are more important to you. - -I have been using Alpine for several months now and am very happy with the experience. The text interface helps me concentrate on the message and not the distracting graphics and animations. I can view those if I choose, but 99% of the time, I choose not to. - -Alpine is easy to use and has a huge number of features that can be configured to give the best email client experience possible. - -Use the **Help** feature to get more information about the fields I explored above and those that I did not cover. You will undoubtedly find ways to configure Alpine that work better for you than the defaults or what I changed. I hope this will at least give you a start to set up Alpine the way you want. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/alpine-linux-email - -作者:[David Both][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/email_chat_communication_message.png -[2]: https://www.thunderbird.net/en-US/ -[3]: https://alpine.x10host.com/ -[4]: https://opensource.com/article/17/10/alpine-email-client -[5]: https://linux.die.net/man/1/mailx -[6]: https://en.wikipedia.org/wiki/Gecos_field -[7]: https://okular.kde.org/ diff --git a/sources/tech/20210513 Building open organizations to make a better life more sustainable for everyone.md b/sources/tech/20210513 Building open organizations to make a better life more sustainable for everyone.md deleted file mode 100644 index 9d6dcd5d23..0000000000 --- a/sources/tech/20210513 Building open organizations to make a better life more sustainable for everyone.md +++ /dev/null @@ -1,112 +0,0 @@ -[#]: subject: (Building open organizations to make a better life more sustainable for everyone) -[#]: via: (https://opensource.com/open-organization/21/5/sustainable-development-human-impacts) -[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Building open organizations to make a better life more sustainable for everyone -====== -By opening our approach to ensuring prosperity for all, we can advance -more sustainable solutions. -![Lots of hands trying to climb a ladder][1] - -In the first article in this series reviewing [_The Age of Sustainable Development_ by Jeffrey Sachs][2], I discussed the impact of economic development on the environment, and I explained how [open organization principles][3] can help us begin building sustainable, global economic development plans for the future. - -In this article, I will continue to explore Sach's argument about the future of sustainable development. This time, I'll address development issues Sachs says are impacting humans, and I'll offer a few ideas about how we might leverage open organization principles to begin addressing those issues. As I did previously, I'll conclude with a video in which I offer more detail and further explanation. 
- -### Confronting poverty - -_Why do some countries still experience extreme poverty?_ - -This is the first question one must ask when exploring human suffering that might occur as a result of unsustainable economic development. Simply by asking this question, we bring the issue to light, making problems (and solutions) more _transparent_. - -"[Extreme poverty][4]" is defined by a lack of basic human needs, such as food, clean water, shelter, sanitation, clothing, health care, safe energy, transportation, education, and connectivity. Some regions of the world experience this more than others. Why? And could thinking with open organization principles help us expose the reasons? - -Sachs introduced some of these reasons in another of his books, [_The End of Poverty_][5]: - - 1. **Extreme poverty trap:** Emerging from extreme poverty requires a certain degree of investment; a country may not have the resources to provide the basic investment required to escape extreme poverty. - 2. **Bad economic policy:** A regional or national government selects the wrong economic policy for the situation. - 3. **Governmental expertise:** The government lacks the resources to explore requisite physical infrastructure (roads, schools, clinics), or "soft" infrastructure (hiring doctors, teachers, or engineers). - 4. **Physical geography:** The country is landlocked ([Africa][6] has the most landlocked countries in the world with 16 out of 55 countries), is in remote locations high in the mountains (not suitable for farming or low-cost manufacturing), does not have valuable minerals/energy raw materials/wind/direct sunlight, is susceptible to diseases (malaria, Zika, AIDS), or experiences natural disasters (earthquakes, tsunamis, tropical cyclones, droughts). Here Sachs references the [Koppen-Geiger (K-G) map][7] to classify climates globally to determine each region's best strategy for building a sustainable economy. - 5. **Poor policy execution:** The region had selected the right policies but failed to execute on them due to corruption or incompetence. - 6. **Cultural barriers:** Cultural values and norms inhibit development (for example, [systemic discrimination against women and girls][8]), or barriers between ethnic groups (like racial or nationality differences) have caused problems. - 7. **Geopolitics:** The region has security issues between neighboring countries. - - - -Anyone who wants to create an open organization to address these issues must decide on the purpose and goals for that community. Developing that strong sense of purpose is only possible when everyone knows the "situation on the ground"—that is, when everyone is clear about the problem's they're facing—and has some sense of the history of which tactics have worked and which haven't. Only when the community has a clear and compelling sense of purpose can all the open organization principles go to work. - -Anyone who wants to create an open organization to address these issues must decide on the purpose and goals for that community. - -### Where to begin? - -In _The Age of Sustainable Development_, Sachs offers detailed suggestions for addressing problems like these (including suggestions for targeted funding). Breaking down the problems in this way, we can also more easily see how we might introduce open organization principles to address them. - -Sachs asks readers to look closely at six factors impacting countries facing extreme poverty and suggests economic development strategies he considers sustainable: - - 1. 
**For landlocked regions:** Build a transportation system to the nearest major ocean or river port, build close relationships with coastal neighbors, develop internet access, and emphasize export activities. Building an open organization to address extreme poverty in a landlocked country would involve engaging with stakeholders who can access a port. - 2. **For water-stressed regions:** Build water irrigation and purification systems powered by solar/wind energy for farmers. Develop new crops that do not require huge amounts of water. This could improve food stability. An open organization community addressing this challenge would need to begin by considering how to get that country the water it needs, as well as offering new crop strategies. - 3. **For regions with a heavy disease burden:** Build public health programs for disease control and collaborate with international health agencies. Also, some kind of [community health worker][9] might be helpful. If disease is the main issue, maybe an open organization community project on disease control should be explored. - 4. **For regions impacted by natural hazards:**Develop physical and social infrastructure to help the public prepare for increased probabilities of floods, cyclones, and extreme storm events. Also, develop mass migration strategies to relocate populations from regions that will surely become uninhabitable in the years ahead. If natural hazards, like flooding, are the primary issues a country faces, then perhaps an open organization community project addressing those hazards should be explored. - 5. **For regions lacking major energy sources:** Develop alternative energy sources (geothermal, hydro, wind, solar, nuclear power). In regions lacking major energy sources, introduce clean energy sources. For regions heavily reliant on fossil fuel resources, consider implementing programs and policies that adequately distribute wealth throughout the entire society. Furthermore, if these regions are involved in burning, introduce some kind of carbon capture system. If energy availability is the primary issue a region is facing, then maybe an open organization project concerning power supplies should be explored. - 6. **For regions requiring more education opportunities:** Examine factors like demographics and [total fertility rate (TFR)][10] to develop strategies for helping families in the region become economically sustainable. According to Sachs, greater and better educational opportunities lead to better health, better nutrition per child, and more manageable population growth. If education and human capital development are the issues at hand, maybe an open organization project exploring new educational opportunities, and strategies for virtual learning should be explored. - - - -Considering all these issues, Sachs recommends more targeted [official development assistance][11] in agriculture, health, education/skills creation, infrastructure, and women's empowerment. Also, new methods of funding should be explored, like [Partners for Development][12]. This must be very carefully done, Sachs warns, as poorly directed funds will not produce the desired results. - -### Population explosion - -Without any consideration of the world's growing human population, Sachs notes, we can't address other global sustainable economic development issues. The [global population][13] is growing, as is the [per capita productivity and consumption][14]. 
The [population has grown][15] by _nine times_ the 800 million people estimated to have lived around the start of the Industrial Revolution. Simply put, this means the world is full of the 7.8 billion people seeking economic improvement (leading to greater resource consumption per person). And right now, the world economy is estimated to produce $90 trillion of output per year (the Gross World Product, [GWP][16]). So there must be investments in agriculture, health, education/skills creation, and physical infrastructure, as well as some consideration of global fertility rates, otherwise [extreme poverty][4] will never be eradicated. - -Without any consideration of the world's growing human population, Sachs notes, we can't address other global sustainable economic development issues. - -To make matters worse, population growth stresses global supplies of water, nitrogen, and carbon, upon which life depends. This is unprecedented in the span of [humanity’s 10,000 years of civilization][17]. Therefore, Sachs recommends sustainable, holistic development that encompasses all economic, social, and environmental concerns. - -Reviewing the concerns above, we can see four basic issues: - - 1. **Economic prosperity:** Any proposal regarding the above issues must take the global economy into consideration. We must be monitoring economic changes, whether locally or globally, long-term or short-term. Some industries will decline, but they must be replaced with more environmentally friendly industries, so communities can experience economic stability. - 2. **Environmental sustainability:** I discussed this in the [first part of my review][18]. Human activities should not adversely impact the local and global natural environment. - 3. **Social inclusion and cohesion:** This is the improvement of human life and getting rid of extreme poverty on the planet. - 4. **Governance:** I'll talk about this more in the next and final part of my review. Whether it be governments, businesses, community organizations, social support, neighborhood networks within communities, schools, families, parents, or others, supervision and management of economically sustainable policies and practices will be important. - - - -In the first article of this review series, I've explained Sach's take on development issues impacting the global environment. In this article, I focused primarily on the importance of transparency and awareness of human suffering globally. We can achieve this kind of transparency efficiently and cost-effectively by turning to information and communication technology specialists using automated data gathering and analytics, utilizing telecommunication tools, and social media. For example, sensors and communications devices are now being developed and deployed to measure water requirements and nutrients in soil. These data can be gathered, distributed among specialists, and can inform collaborastors working on solutions. Even now, this process is helping both to reduce environmental fertilizer runoff damage and improve crop production.  - -In the final article in this series, I'll discuss global actions we can take to address these challenges. They begin with the issue of global governance. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/21/5/sustainable-development-human-impacts - -作者:[Ron McFarland][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ron-mcfarland -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_heirarchy.png?itok=ExGiv98I (Lots of hands trying to climb a ladder) -[2]: http://cup.columbia.edu/book/the-age-of-sustainable-development/9780231173155 -[3]: https://theopenorganization.org/definition/ -[4]: https://www.worldbank.org/en/news/feature/2016/06/08/ending-extreme-poverty#:~:text=The%20World%20Bank%20defines%20%E2%80%9Cextreme,extreme%20poverty%20can%20be%20achieved. -[5]: https://en.wikipedia.org/wiki/The_End_of_Poverty -[6]: https://www.thoughtco.com/african-countries-that-are-landlocked-4060437#:~:text=Out%20of%20Africa's%2055%20countries,Uganda%2C%20Zambia%2C%20and%20Zimbabwe. -[7]: http://koeppen-geiger.vu-wien.ac.at/present.htm -[8]: https://www.worldbank.org/en/research/dime/brief/dime-gender-program -[9]: https://www.who.int/hrh/documents/community_health_workers_brief.pdf -[10]: https://ourworldindata.org/grapher/children-per-woman-un -[11]: https://en.wikipedia.org/wiki/Official_development_assistance -[12]: http://pfd.org/ -[13]: https://www.worldometers.info/world-population/ -[14]: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)_per_capita -[15]: https://ourworldindata.org/grapher/world-population-since-10000-bce-ourworldindata-series -[16]: https://en.wikipedia.org/wiki/Gross_world_product#:~:text=The%20gross%20world%20product%20(GWP,gross%20domestic%20product%20(GDP). -[17]: https://opensource.com/open-organization/20/8/global-history-collaboration -[18]: https://opensource.com/open-organization/21/3/sustainable-development-environment diff --git a/sources/tech/20210514 PipeWire- the new audio and video daemon in Fedora Linux 34.md b/sources/tech/20210514 PipeWire- the new audio and video daemon in Fedora Linux 34.md deleted file mode 100644 index 9904c70d6c..0000000000 --- a/sources/tech/20210514 PipeWire- the new audio and video daemon in Fedora Linux 34.md +++ /dev/null @@ -1,155 +0,0 @@ -[#]: subject: (PipeWire: the new audio and video daemon in Fedora Linux 34) -[#]: via: (https://fedoramagazine.org/pipewire-the-new-audio-and-video-daemon-in-fedora-linux-34/) -[#]: author: (Christian Fredrik Schaller https://fedoramagazine.org/author/uraeus/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -PipeWire: the new audio and video daemon in Fedora Linux 34 -====== - -![][1] - -Photo by [Samuel Sianipar][2] on [Unsplash][3] - -Wim Taymans has a long track record in the Linux community. He was one of the two original developers of the GStreamer multimedia framework and he was the principal maintainer for most of the project’s initial existence. He joined Red Hat in 2013 and has helped maintain GStreamer and PulseAudio for Red Hat since. In 2015 he started working on [PipeWire][4]: a project that has come to full fruition in [Fedora Workstation 34][5], where it handles both audio and video. In addition to that, it also merges the world of pro-audio with mainstream Linux. 
In this interview we will talk about where PipeWire came from, where it is at and where Wim sees it going from here. - -![][6] - -_Christian Schaller & Wim Taymans testing PipeWire video filters at Flock_ - -**Christian Schaller: What was the origin of PipeWire in terms of what problems you wanted to resolve?** - -Wim Taymans: PipeWire really evolved out of two earlier ideas. The first one was [PulseVideo][7], which was written by William Manley back in 2015. It was a small server that would send the video from a v4l2 camera to one or more other processes. It used GStreamer, DBus and file descriptor (fd) passing to do this fairly efficiently. It resulted in a bunch of patches to GStreamer regarding fdmemory. - -Around that time we started to think about screen capture for Wayland. I was asked to investigate options. The idea was then to take the PulseVideo idea and implement the possibility for clients to provide streams as well (not just v4l2 devices). Another requirement was to make this secure and work well with Flatpak and Flatpak’s concept of portals to handle things that have potential security concerns. - -**CS: Ah right, because when PipeWire was originally introduced to Fedora in Fedora 27 it was only dealing with video right? Providing a way to do screen sharing in GNOME Shell?** - -WT: Yes, there were only wild ideas about trying to handle audio as well. The version that ended up in Fedora 27 needed another rewrite to make that happen, really. - -**CS: Can you talk a little about how PipeWire interacts with things like Wayland and GNOME Shell?** - -WT: GNOME Shell will send a stream to PipeWire when screen sharing is activated. PipeWire will route this stream to the applications like Firefox or the screen recorder. We have some more advanced features implemented such as DMABUF passing and metadata for the cursor and clipping regions when sharing a single window. There is also the volume control that interacts through the PulseAudio API with PipeWire to manage the volumes. - -**CS: So there was no real PipeWire precursor for video, as most stuff just interacted directly with v4l, so I assume it must have been a big task porting over things like GNOME Shell and the web browsers to start using it?  ** - -WT: There was nothing for screen sharing, it was just X11 calls to grab the screen content. Jan Grulich worked with the upstream WebRTC project to add code to interact with the new portal APIS defined for Wayland, to negotiate screen sharing options and then native PipeWire support to fetch the screen content. Then Martin Stransky backported that work into the Firefox copy of WebRTC and Jan Grulich and Tomas Popela ensured the changes got merged into Chromium/Chrome. - -For webcams there is not much progress yet. Browsers still access the v4l2 camera directly. There is a portal to negotiate webcam access through PipeWire but that has not been implemented in browsers as far as I know. - -**CS: Talking about porting and developers, first question developers are likely to ask themselves when they hear about a new project like PipeWire is ‘Oh no, do I need to rewrite all my multimedia applications now?’. How is PipeWire dealing with that challenge?** - -WT: PipeWire provides compatibility with ALSA, PulseAudio and JACK applications with an ALSA plugin, a replacement PulseAudio server and a JACK replacement client library respectively. Theoretically this should provide a way to run all existing applications without modifications. 
- -With PipeWire, we should now start thinking about those audio APIs as Audio toolkits. It’s a bit like GUI toolkits such as GTK or Qt: both of them talk to the underlying display subsystem (Wayland/X11) and no application thinks about implementing raw Wayland backends in their applications. - -It’s the same with JACK/PulseAudio, they provide applications with a model of the Audio subsystem and you select the Audio toolkit best suited for your use case. I don’t see this change unless someone comes up with the ultimate Audio Toolkit. - -**CS: How did your thinking about the problem space evolve as you worked on it?** - -WT: As the project went forward, I started to investigate if this framework could also support audio. It would need a substantial rewrite to make this work efficiently. GStreamer and dbus needed to be replaced with something more low level to make audio viable, especially pro-audio. At the same time both GObject and DBus started feeling heavy for the low level system I was designing. - -I started experimenting with a new small media plugin API at around mid 2016. It was still all very GObject like but I started to reimplement the v4l2 and audiomixer plugins in this new framework. By the end of 2016, I moved away from DBus as well to a more Wayland like protocol. - -Early 2017 was when I seriously started to think about implementing the features of an audio server as well. I started to investigate JACK and its processing model and audio plugin APIs such as lv2. This is also when we came up with the name PipeWire. By the end of 2018 I had a working audio server with a JACK-like graph model, well… at least working in the context of my basic test case.  - -After some discussions with members of the Linux Pro-Audio community they convinced me that I needed to make some more drastic design changes in the way scheduling and mixing worked if this was ever going to be able to replace JACK for them . This is when the final re-architecting started and eventually became, after 2 years of development, the first 0.3 version in early 2020. - -**CS: I know the Pro-audio support you mention has got a lot of buzz in the community, so who did you initially talk to and what has the reception been so far from the wider pro-audio community?** - -WT: As mentioned, I had some discussions with them back in early 2018. Robin Gareus and Paul Davis were instrumental in driving the changes that lead to the current implementation. - -I think everybody would love to have a seamless, integrated and user friendly experience that can be used for both Pro and Consumer Audio use cases and there is definitely interest in how PipeWire will evolve to make this happen. We’re not there yet in terms of feature parity although we are moving quickly. For instance, just this week I landed Freewheeling support in PipeWire, which should be out in Fedora by the time you read this. Beyond that latency reporting is the big TODO item remaining. Also, while PipeWire can manage the same latency as JACK we are not yet as reliable. So there is some more work to do. - -**CS: And what about the PulseAudio developers? How have they taken the arrival of PipeWire? Does Lennart Poettering hate you now?** - -WT: I think they are fine with it. We organized a hackfest in October 2018 with some of the PulseAudio developers to talk about PipeWire so it was not a surprise. In fact, Arun Ragahavan who is a long time PulseAudio contributor is currently working on PipeWire. 
I also talked with Lennart about it back in the early days and he was all for the idea of unifying Pro and Consumer Audio so I don’t think he hates me 🙂 - -**CS: You are also the creator of GStreamer, how do you see the two projects in terms of use cases?** - -WT: I see PipeWire as a much lower-level framework to move data around between apps and devices. It’s very good at handling raw audio and video and interfacing with devices. It’s not so good at muxing and demuxing and it does not want to do some of the higher level multimedia tasks such as implementing an RTSP server or handle transmuxing formats. GStreamer still remains ideally suited for those higher level tasks, muxing, demuxing, encoding, decoding, etc. - -**CS: So you see them compliment each other more than compete?** - -WT: They absolutely complement each other. I don’t see one overtaking the other. It’s still early to know exactly where things will go but I can see that things like audio or video effect chains are better implemented in PipeWire. While the plumbing and post processing is better done in GStreamer. - -**CS: Any community contributors you want to highlight so far beyond yourself?** - -WT: Absolutely! Almost all of the new exciting Bluetooth work has been done by community contributors. - -Pauli Virtanen has been doing fixes all over the place such as many Bluetooth improvements and general fixing and stability improvements to the SCO plugins, implementing codec switching and delay reporting. He also has his hands in other areas such as the PipeWire IPC connections and the default-node and policy in the session manager, as well as some object management improvements. - -Huang-Huang Bao (eh5) who maintained a pulseaudio-modules-bt has been contributing a lot of changes such as LDAC ABR support, Hardware volume support and numerous stability and compatibility fixes all over the place to bring the bluetooth support to the same level as the pulseaudio module. - -We also have Collabora contributors George Kiagiadakis and Frédéric Danis regularly contributing Bluetooth, build and other fixes as part of their AGL involvement. They have also been working on an improved session manager called WirePlumber, which we will try to include in Fedora 35. - -Dmitry Sharshakov implemented the Bluetooth battery status reporting, which is a relatively new feature in bluez and now also supported by PipeWire. - -While not directly tied to PipeWire itself ,the work I mentioned earlier by Jan Grulich, Martin Stransky, and Tomáš Popela getting PipeWire support into the web browsers was also a major step forward. The same goes for all the work Jonas Ådahl did to create the screen capture portal and implement it in GNOME Shell. I also want to give a special mention to Georges Stavracas for his great work on getting PipeWire support into OBS Studio. Jan Grulich has also done a lot of work getting PipeWire support into KDE. - -There also also a lot of people active on the [issue tracker][8] that try to help triage bugs, provide help and improve the [wiki pages][9]. - -**CS: As you’ve been testing and using PipeWire has there been applications you didn’t know about before, but which you discovered due to people reporting they didn’t work with PipeWire or you found when looking for test cases?** - -WT: Most of the midi tools, really. I never really used midi before I started to add support in PipeWire. I got fascinated by the various synths, like Helm, zynaddsubfx, and more recently Vital and the free Vitalium application. 
- -There is a whole world of music creation tools that become available when you have midi and JACK compatibility that were previously little or unknown. I didn’t know about any of the lsp or calf plugins before. - -I love the idea of [Inge][10] and I would love to see it developed some more. I imagine that a tool like this can be used to model and tweak the effect chains in PipeWire. - -**CS: In terms of pro-audio and midi, are you a musician yourself and are these things you see yourself using personally going forward?** - -WT: I play a little guitar myself but I’m old school, I plug into a real tube amp without effects and I jam. I did some recording of guitar and voice in Ardour using PipeWire to test things out. I’m really more interested in creating code so that other people can make music you actually want to listen to 🙂 - -**CS: What do you feel are the remaining items that need to be tackled in PipeWire?** - -WT: There is a long TODO list of pending items… - -For desktop use cases, we need to reach reasonable feature parity with PulseAudio. We’re missing the automatic detection and setup of network streams along with passthrough of compressed formats such as DTS and AC3 over HDMI. - -For the PRO audio use cases we need to implement what in Jack is known as  Freewheeling and then latency reporting. - -After that, we can start to look at all the exciting new things we can do now with PipeWire. We’re probably looking at a redesign of the sound control panel at some point. - -On the video front, a lot can be improved. We don’t have a video processing pipeline yet, let alone the tools to manage such a video pipeline. - -**CS: Are there any specific areas you would love to see more contributors to in** **PipeWire?** - -WT: Sure! I think there are so many exciting things you can make now. For example, we don’t really have a native patchbay. We rely on JACK tools, but those don’t handle the video streams. I would say a simple curses based patchbay would be a nice contribution. - -In PipeWire it is relatively easy to write new external sinks or sources. I would love to see a native implementation of a good general purpose network protocol like ROC or so. - -**CS: You recently started a new job inside Red Hat, can you tell us a little about that and what that means for PipeWire?** - -WT: Yes, I’m part of the new Infotainment group inside Red Hat that will initially focus on providing the software stack for the sutomotive sector. This is about enabling Audio and Video in cars and PipeWire will play a major part in realizing that. PipeWire is already part of Automotive Grade Linux, together with WirePlumber.** ** - -One of the challenges is to be able to route all the audio capture and playback streams in a car in a flexible way. Modern cars also have a large amount of video cameras that need to be managed. Part of the plan is to improve PipeWire for these use cases. - -The expectation is that some of these use cases will also benefit desktop users eventually. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/pipewire-the-new-audio-and-video-daemon-in-fedora-linux-34/ - -作者:[Christian Fredrik Schaller][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/uraeus/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/05/pipewire-taymans-816x345.jpg -[2]: https://unsplash.com/@samthewam24?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/pipe?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://pipewire.org/ -[5]: https://fedoramagazine.org/whats-new-fedora-34-workstation/ -[6]: https://lh5.googleusercontent.com/veQ7AR06E-vbSYabLZw0StEuX9pP5OVu7nDMuIiq9nPurMas0uPUXDUwI9rdAL9vWKZ8L-CPNR0PSRcXtJZamHmAWYPfxE9r4kwxYoT6p8qRlkbUq0tbkQgDLprmqAn1HOx8wsoj -[7]: https://github.com/wmanley/pulsevideo -[8]: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues -[9]: https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/home -[10]: https://drobilla.net/software/ingen diff --git a/sources/tech/20210517 Network address translation part 4 - Conntrack troubleshooting.md b/sources/tech/20210517 Network address translation part 4 - Conntrack troubleshooting.md deleted file mode 100644 index 5301fa3456..0000000000 --- a/sources/tech/20210517 Network address translation part 4 - Conntrack troubleshooting.md +++ /dev/null @@ -1,174 +0,0 @@ -[#]: subject: (Network address translation part 4 – Conntrack troubleshooting) -[#]: via: (https://fedoramagazine.org/network-address-translation-part-4-conntrack-troubleshooting/) -[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Network address translation part 4 – Conntrack troubleshooting -====== - -![Network address translation - conntrack troubleshooting][1] - -This is the fourth post in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. The second article [introduced the “conntrack” command][3]. The third article gave an [introduction to the “conntrack” event framework.][4] - -This article shows how to expose more information about what is happening inside conntrack. - -### Connection tracking and NAT - -NAT configured via iptables or nftables builds on top of the netfilters connection tracking facility. This means that if there is any problem with the connection tracking engine, NAT will not work. This may result in connectivity issues. Ineffective NAT rules will leak internal addresses to the outer network. Use the nftables “_ct state_” or the iptables “_-m conntrack –ctstate_” feature to prevent this. If a packet matches the INVALID state, conntrack failed to associate the packet with a known connection. This also means NAT will not work. - -### How connection tracking works at a high level - -The connection tracker first extracts the IP addresses and higher-level protocol information from the packet. “Higher level protocol information” is the transport protocol specific part. A common example are the source and destination port numbers (tcp, udp) or the ICMP id. 
A more exotic example would be the PPTP call id. These packet fields – the IP addresses and protocol specific information – are the lookup keys used to check the connection tracking table. - -In addition to checking if a packet is new or part of a known connection, conntrack also performs protocol specific tests. In case of UDP, it checks if the packet is complete (received packet length matches length specified in the UDP header) and that the UDP checksum is correct. For other protocols, such as TCP, it will also check: - - * Are TCP flags valid (for example a packet is considered invalid if both RST and SYN flags are set) - * When a packet acknowledges data, it checks that the acknowledgment number matches data sent in the other direction. - * When a packet contains new data, it checks that this data is within the receive window announced by the peer. - - - -Any failures in these checks cause the packet to be considered invalid. For such packets, conntrack will neither create a new connection tracking entry nor associate it with an existing entry, even if one exists. Conntrack can be configured to log a reason for why a packet was deemed to be invalid. - -### Log internal conntrack information - -The “_net.netfilter.nf_conntrack_log_invalid″_ sysctl is used to set kernel parameters to get more information about why a packet is considered invalid. The default setting, 0, disables this logging. Positive numbers (up to 255) specify for which protocol more information will be logged. For example, _6_ would print more information for tcp, while 17 would provide more information for udp. The numbers are identical to those found in the file _/etc/protocols._ The special value _255_ enables debug logging for all protocol trackers. - -You may need to set a specific logging backend. Use “_sysctl -a | grep nf_log_” to see what log backends are currently in use. NONE means that no backend is set. Example output: -``` - -``` - -# sysctl -a | grep nf_log -net.netfilter.nf_log.10 = NONE -net.netfilter.nf_log.2 = NONE -net.netfilter.nf_log_all_netns = 0 -``` - -``` - -2 is ipv4, 3 is arp, 7 is used for bridge logging and 10 for ipv6. For connection tracking only ipv4 (2) and ipv6 (10) are relevant. The last sysctl shown here – _nf_log_all_netns_ – is set to the default 0 to prevent other namespaces from flooding the system log. It may be set to 1 to debug issues in another network namespace. - -### Logger configuration - -This command will print a list of available log modules: -``` - -``` - -# ls /lib/modules/$(uname -r)/kernel/net/netfilter/log /lib/modules/$(uname -r)/kernel/net/ip/netfilter/log* -``` - -``` - -The command: - -``` -# modprobe nf_log_ipv4 -``` - -loads the ipv4 log module. If multiple log modules are loaded you can set the preferred/active logger with sysctl. For example: - -``` -# sudo sysctl net.netfilter.nf_log.2=nf_log_ipv4 -``` - -tells the kernel to log ipv4 packet events to syslog/journald. This only affects log messages generated by conntrack debugging. Log messages generated by rules like “_ipables_ _-j NFLOG_” or the _LOG_ target do not change as the rule itself already specifies to log type to use (nfnetlink and syslog/journald respectively). - -After this, debug messages will appear in ulogd (if configured via nfnetlink) or the system log (if nf_log_ipv4 is the log backend). 
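-
-As noted earlier, packets that conntrack classifies as INVALID bypass NAT, so besides logging them it is usually worth dropping them explicitly. The following is a minimal sketch (not part of the original text) that assumes an nftables table named _inet filter_ with a _forward_ chain already exists; the iptables line is the classic equivalent. Adjust the table and chain names to match your own ruleset:
-
-```
-# sudo nft add rule inet filter forward ct state invalid counter drop
-# sudo iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP
-```
-
-With _nf_conntrack_log_invalid_ enabled as described above, the rule counter together with the kernel log messages makes it easy to confirm whether the dropped traffic really was invalid.
-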
- -### Example debug output - -The following examples occur with the settings created using _“sudo sysctl net.netfilter.nf_log.2=nf_log_ipv4”_ and “_sudo sysctl net.netfilter.nf_conntrack_log_invalid=6_“. -``` - -``` - -nf_ct_proto_6: invalid packet ignored in state ESTABLISHED SRC=10.47.217.34 DST=192.168.0.17 LEN=60 DF SPT=443 DPT=47832 SEQ=389276 ACK=3130 WINDOW=65160 ACK SYN - -``` -nf_ct_proto_6: ACK is over the upper bound (ACKed data not seen yet) SRC=10.3.1.1 DST=192.168.0.1 LEN=60 DF SPT=443 DPT=49135 SEQ= ... -``` - -This dump contains the packet contents (allowing correlation with tcpdump packet capture of the flow, for example) plus a reason why the packet was tagged as INVALID. - -### Dynamic Printk - -If further information is needed, there are log statements in the conntrack module that can be enabled at run-time with the dynamic debugging infrastructure. - -To check if this feature is available, use the following command: - -``` -# sudo grep nf_conntrack_proto_tcp /sys/kernel/debug/dynamic_debug/control -``` - -If the conntrack module is loaded and the dynamic debug feature is available, the output is similar to this: -``` - -``` - -net/netfilter/nf_conntrack_proto_tcp.c:1104 [nf_conntrack]nf_conntrack_tcp_packet =_ "syn=%i ack=%i fin=%i rst=%i old=%i new=%i\012" -``` - -``` - -net/netfilter/nf_conntrack_proto_tcp.c:1102 [nf_conntrack]nf_conntrack_tcp_packet =_ "tcp_conntracks: " net/netfilter/nf_conntrack_proto_tcp.c:1005 [nf_conntrack]nf_conntrack_tcp_packet =_ "nf_ct_tcp: Invalid dir=%i index=%u ostate=%u\012" - -``` -net/netfilter/nf_conntrack_proto_tcp.c:999 [nf_conntrack]nf_conntrack_tcp_packet =_ "nf_ct_tcp: SYN proxy client keep alive\012" -``` - -Each line shows the location of a default-disabled debug _printk_ statement. _printk_ is a C function from the Linux kernel interface that prints messages to the kernel log. The name of the file in the linux kernel source code comes first, followed by the line number. The square brackets contain the name of the kernel module that this source file is part of. The combination of file name and line number allows enabling or disabling these _printk_ statements. This command: -``` - -``` - -# sudo echo "file net/netfilter/nf_conntrack_proto_tcp.c line 1005 +p" &gt; /sys/kernel/debug/dynamic_debug/control -``` - -``` - -will enable the _printk_ statement shown in line 1005 of [net/netfilter/nf_conntrack_proto_tcp.c][5]. The same command, with “_+p_” replaced by “_-p_“, disables this log line again. This facility is not unique to connection tracking: many parts of the kernel provide such debug messages. This technique is useful when things go wrong and more information about the conntrack internal state is needed. A dedicated howto about the dynamic debug feature is available in the kernel documentation [here][6]. - -### The unconfirmed and dying lists - -A newly allocated conntrack entry is first added to the unconfirmed list. Once the packet is accepted by all iptables/nftables rules, the conntrack entry moves from the unconfirmed list to the main connection tracking table. The dying list is the inverse: when a entry is removed from the table, it is placed on the dying list. The entry is freed once all packets that reference the flow have been dropped. This means that a conntrack entry is always on a list: Either the unconfirmed list, the dying list, or the conntrack hash table list. Most entries will be in the hash table. 
- -If removal from the table is due to a timeout, no further references exist and the entry is freed immediately. This is what will typically happen with UDP flows. For TCP, conntrack entries are normally removed due to a special TCP packets such as the last TCP acknowledgment or a TCP reset. This is because TCP, unlike UDP, signals state transitions, such as connection closure. The entry is moved from the table to the dying list. The conntrack entry is then released after the network stack has processed the “last packet” packet. - -#### Examining these lists - -``` -# sudo conntrack -L unconfirmed -# sudo conntrack -L dying -``` - -These two commands show the lists. A large discrepancy between the number of active connections (_sudo conntrack -C_) and the content of the connection tracking table (_sudo conntrack -L_) indicate a problem. Entries that remain on either one of these lists for long time periods indicate a kernel bug. Expected time ranges are in the order of a few microseconds. - -### Summary - -This article gave an introduction to several debugging aids that can be helpful to pinpoint problems with the connection tracking module. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/network-address-translation-part-4-conntrack-troubleshooting/ - -作者:[Florian Westphal][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/strlen/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/05/network-address-translation-part-4-816x345.jpg -[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/ -[3]: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/ -[4]: https://fedoramagazine.org/conntrack-event-framework/ -[5]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/nf_conntrack_proto_tcp.c?id=e2ef5203c817a60bfb591343ffd851b6537370ff#n1005 -[6]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/admin-guide/dynamic-debug-howto.rst?id=e85d92b3bc3b7062f18b24092a65ec427afa8148 diff --git a/sources/tech/20210517 reading and searching gmane with gnus, fast.md b/sources/tech/20210517 reading and searching gmane with gnus, fast.md deleted file mode 100644 index 51816eac6c..0000000000 --- a/sources/tech/20210517 reading and searching gmane with gnus, fast.md +++ /dev/null @@ -1,91 +0,0 @@ -[#]: subject: "reading and searching gmane with gnus, fast" -[#]: via: "https://jao.io/blog/2021-05-17-reading-and-searching-gmane-with-gnus-fast.html" -[#]: author: "jao https://jao.io" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -reading and searching gmane with gnus, fast -====== - -Reading mailing lists via Gnus by pointing it to the usenet service news.gmane.io is a well-known trick among emacsers. It has a couple of drawbacks, though: network latency and no search. The two problems have, as almost always with almost any problem in Emacs land, a cure. The names of the game are, in this case, leafnode and notmuch. - -I've been using [leafnode][1] since i was young to avoid network latency issues when Gnus fetches news from remote usenet servers. 
[Leafnode][1] is a store & forward NNTP proxy that can be used to give a regular newsreader off-line functionality. It works by fetching in the background news articles from a number of configured remote servers (gmane.io in our case), storing them locally and offering a local NNTP server to Gnus (or any other newsreader, for that matter). That way, one configures Gnus to fetch news from localhost, which is fast and will never block, even when one is disconnected from the interwebs. Leafnode's server implements the full protocol, so one can also post to the remote servers. - -For our case, leafnode's configuration file is very simple: - -``` - - ## Unread articles will be deleted after this many days - expire = 365 - - ## This is the NNTP server leafnode fetches its news from. - ## You need read and post access to it. Mandatory. - server = news.gmane.io - - ## Fetch only a few articles when we subscribe a new newsgroup. The - ## default is to fetch all articles. - initialfetch = 100 - -``` - -With leafnode in place, i've rarely needed to subscribe to a mailing list[1][2], and all their messages are available with the Gnus interface that we all know and love. - -With one caveat: one can search over e-mails, using either IMAP (i like dovecot's lucene indexes) or (even better) notmuch. Can we do the same with those messages we access through leafnode? Well, it turns out that, using notmuch, you can! - -First of all, leafnode stores its articles in a format recognised by notmuch's indexer. In my debian installation, the live in the directory `/var/spool/news/gmane`. On the other hand, my notmuch configuration points to `~/var/mail` as the parent directory where my mailboxes are to be found. I just created a symlink in the latter to the former and voila, notmuch is indexing all the messages retrieved by leafnode and i can search over them![2][3] - -With the version of Gnus in current emacs master, it's even better. I can tell Gnus that the search engine for the news server is notmuch: - -``` - - (setq gnus-select-method - '(nntp "localhost" - (gnus-search-engine gnus-search-notmuch - (remove-prefix "/home/jao/var/mail/")))) - -``` - -and perform searches directly in Gnus using the notmuch indexes. Or, if you prefer, you can use directly notmuch.el to find and read those usenet articles: they look just like good old email[3][4] :) - -### Footnotes: - -[1][5] - -Actually, gmane also includes _gwene_ groups that mirror RSS feeds as usenet messages, so you could extend the trick to feeds too. I however use [rss2email][6] to read RSS feeds as email, for a variety of reasons best left to a separate post. - -[2][7] - -With the `expire` parameter in leafnode's configuration set to 365, i keep locally an indexed archive of the mailing list posts less than a year old: in this age of cheap storage, one can make that much longer. One can also play with `initialfetch`. - -[3][8] - -I am not a mu4e user, but i am pretty sure one can play the same trick if that's your email indexer and reader. 
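-
-As a concrete recap of the symlink trick mentioned above, the steps come down to something like the following. The paths are the ones used in this post (leafnode spool under `/var/spool/news/gmane`, notmuch mail root at `~/var/mail`); yours may differ, and your user needs read access to the leafnode spool for indexing to work:
-
-```
-mkdir -p ~/var/mail
-ln -s /var/spool/news/gmane ~/var/mail/gmane
-notmuch new
-```
-
-After `notmuch new` finishes, the gmane articles show up in notmuch searches just like regular mail.
-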
- -[Tags][9]: [emacs][10] - --------------------------------------------------------------------------------- - -via: https://jao.io/blog/2021-05-17-reading-and-searching-gmane-with-gnus-fast.html - -作者:[jao][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://jao.io -[b]: https://github.com/lujun9972 -[1]: https://leafnode.sourceforge.io/ -[2]: tmp.gr7aQUOwRH#fn.1 -[3]: tmp.gr7aQUOwRH#fn.2 -[4]: tmp.gr7aQUOwRH#fn.3 -[5]: tmp.gr7aQUOwRH#fnr.1 -[6]: https://wiki.archlinux.org/title/Rss2email -[7]: tmp.gr7aQUOwRH#fnr.2 -[8]: tmp.gr7aQUOwRH#fnr.3 -[9]: https://jao.io/blog/tags.html -[10]: https://jao.io/blog/tag-emacs.html diff --git a/sources/tech/20210518 Vimix is an Open Source Tool That Helps With Graphical Mixing and Blending Live.md b/sources/tech/20210518 Vimix is an Open Source Tool That Helps With Graphical Mixing and Blending Live.md deleted file mode 100644 index 9b3adb02d5..0000000000 --- a/sources/tech/20210518 Vimix is an Open Source Tool That Helps With Graphical Mixing and Blending Live.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: subject: (Vimix is an Open Source Tool That Helps With Graphical Mixing and Blending Live) -[#]: via: (https://itsfoss.com/vimix/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Vimix is an Open Source Tool That Helps With Graphical Mixing and Blending Live -====== - -There are several [Linux tools available for digital artists][1]. However, those are mostly for image manipulation or drawing. - -So, how can you blend and mix video clips or computer-generated graphics in real-time on Linux? - -This is mostly a use-case if you are presenting something live for a VJ session or concerts and conferences. - -vimix sounds like a tool that can come in handy for the job. - -### vimix: Free & Open-Source Video Live Mixer - -![][2] - -Of course, [video editors][3] should be preferred if you want to edit a video and apply several post-processing effects. - -But, if you want real-time video clip manipulation for a good extent, vimix is a tool that you can try. It is an open-source tool which also happens to be the successor of [GLMixer][4] which is no longer maintained. - -Here, I will highlight some of the key features that it offers. - -### Features of vimix - -![][5] - -You get a huge set of abilities with this tool. If you are new to using such a tool, this could prove to be overwhelming. - - * Mixing multiple video clips - * Controlling opacity using a simple slider for multiple active clips - * Fade videos to apply smooth transition when playing multiple videos as a cross-playlist - * Folder-based session - * Geometry feature to manipulate/re-size source clips - * Incredibly useful layer option to add three or more videos at a time - * Apply post-processing effects to computer-generated graphics - * Multiple blending modes - * Ability to clone sources - * Tweak the texture of the sources - * Multiple cropping options - - - -### Install vimix on Linux - -It is only available as a snap package right now. So, if you want to install it on any Linux distribution of your choice, download it from the [Snap store][6] or your software center if it has integrated snap enabled. - -``` -sudo snap install vimix -``` - -You can also refer to our [snap guide][7] if you need help to set it up. 
If interesting, you can compile it yourself or explore more about it in its [GitHub page][8]. - -[vimix][9] - -### Closing Thoughts - -vimix is a tool that caters the need of specific use-cases. And, that is what it excels at. - -It is good to see the availability of such a tool tailored for live video jockeys and other professionals using Linux. - -It is worth noting that I’m not an expert to work with this tool, so I just fiddled around with the common operations to control opacity, fade videos, and add effects to the source. - -I encourage you to explore more, and please don’t hesitate to let me know your thoughts in the comments below! - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/vimix/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/best-linux-graphic-design-software/ -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/vimix-screenshot.png?resize=800%2C465&ssl=1 -[3]: https://itsfoss.com/open-source-video-editors/ -[4]: https://sourceforge.net/projects/glmixer/ -[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/vimix-screenshot-1.png?resize=800%2C468&ssl=1 -[6]: https://snapcraft.io/vimix -[7]: https://itsfoss.com/install-snap-linux/ -[8]: https://github.com/brunoherbelin/vimix -[9]: https://brunoherbelin.github.io/vimix/ diff --git a/sources/tech/20210519 A beginner-s guide for contributing to Apache Cassandra.md b/sources/tech/20210519 A beginner-s guide for contributing to Apache Cassandra.md deleted file mode 100644 index 3e1273909c..0000000000 --- a/sources/tech/20210519 A beginner-s guide for contributing to Apache Cassandra.md +++ /dev/null @@ -1,157 +0,0 @@ -[#]: subject: (A beginner's guide for contributing to Apache Cassandra) -[#]: via: (https://opensource.com/article/21/5/apache-cassandra) -[#]: author: (Ekaterina Dimitrova https://opensource.com/users/edimitrova) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -A beginner's guide for contributing to Apache Cassandra -====== -Start participating in an open source database project used to power -internet services worldwide. -![An intersection of pipes.][1] - -[Apache Cassandra][2] is an open source NoSQL database trusted by thousands of companies around the globe for its scalability and high availability that does not compromise performance. Contributing to such a widely used distributed system may seem daunting, so this article aims to provide you an easy entry point. - -There are good reasons to contribute to Cassandra, such as: - - * Gaining recognition with the Apache Software Foundation (ASF) as a contributor - * Contributing to an open source project used by millions of people worldwide that powers internet services for companies such as American Express, Bloomberg, Netflix, Yelp, and more - * Being part of a community adding new features and building on the release of Cassandra 4.0, our most stable in the project's history - - - -### How to get started - -Apache Cassandra is a big project, which means you will find something within your skillset to contribute to. Every contribution, regardless of how small, counts and is greatly appreciated. An excellent place to start is the [Getting Started guide][3]. 
- -The Apache Cassandra project also participates in Google Summer of Code. For an idea of what's involved, please read this [blog post][4] by PMC member Paolo Motta. - -### Choose what to work on - -Submitted patches can include bug fixes, changes to the Java codebase, improvements for tooling (Java or Python), documentation, testing, or any other changes to the codebase. Although the process of contributing code is always the same, the amount of work and time it takes to get a patch accepted depends on the kind of issue you're addressing. - -Reviewing other people's patches is always appreciated. To learn more, read the [Review Checklist][5]. If you are a Cassandra user and can help by responding to some of the questions on the user list, that makes an excellent contribution. - -The simplest way to find a ticket to work on is to search Cassandra's Jira for issues marked as [Low-Hanging Fruit][6]. We use this label to flag issues that are good starter tasks for beginners. If you don't have a login to ASF's Jira, you'll need to [sign up][7]. - -A few easy ways to start getting involved include: - - * **Testing:** By learning about Cassandra, you can add or improve tests, such as [CASSANDRA-16191][8]. You can learn more about the Cassandra test framework on our [Testing][9] page. Additional testing and Jira-reported bugs or suggestions for improvements are always welcome. - * **Documentation:** This isn't always low-hanging fruit, but it's very important. Here's a sample ticket: [CASSANDRA-16122][10]. You can find more information on contributing to the Cassandra documentation on our [Working on documentation][11] page. - * **Investigate or fix reported bugs:** Here's an example: [CASSANDRA-16151][12]. - * **Answer questions:** Subscribe to the user mailing list, look out for questions you know the answer to, and help others by replying. See the [Community][13] page for details on how to subscribe to the mailing list. - - - -These are just four ways to start helping the project. If you want to learn more about distributed systems and contribute in other ways, check the [documentation][11]. - -### What you need to contribute code - -To make code contributions, you will need: - - * Java SDK - * Apache Ant - * Git - * Python - - - -#### Get the code and test - -Get the code with Git, work on the topic, use your preferred IDE, and follow the [Cassandra coding style][14]. You can learn more on our [Building and IDE integration][15] page. - - -``` -`$ git clone https://git-wip-us.apache.org/repos/asf/cassandra.git cassandra-trunk` -``` - -Many contributors name their branches based on ticket number and Cassandra version. For example: - - -``` -$ git checkout -b CASSANDRA-XXXX-V.V -$ ant -``` - -Test the environment: - - -``` -`$ ant test` -``` - -### Testing a distributed database - -When you are done, please, make sure all tests (including your own) pass using Ant, as described in [Testing][9]. If you suspect a test failure is unrelated to your change, it may be useful to check the test's status by searching the issue tracker or looking at [CI][16] results for the relevant upstream version. - -The full test suites take many hours to complete, so it is common to run relevant tests locally before uploading a patch. Once a patch has been uploaded, the reviewer or committer can help set up CI jobs to run the complete test suites. - -Additional resources on testing Cassandra include: - - * The [Cassandra Distributed Tests][17] repository. 
You can find setup information and prerequisites in the README file. - * The [Cassandra Cluster Manager][18] README - * A great blog post from the community on [approaches to testing Cassandra 4.0][19] - * [Harry][20], a fuzz testing tool for Apache Cassandra. - - - -### Submitting your patch - -Before submitting a patch, please verify that you follow Cassandra's [Code Style][21] conventions. The easiest way to submit your patch is to fork the Cassandra repository on GitHub and push your branch: - - -``` -`$ git push --set-upstream origin CASSANDRA-XXXX-V.V` -``` - -Submit your patch by publishing the link to your newly created branch in your Jira ticket. Use the **Submit Patch** button. - -To learn more, read the complete docs on [Contributing to Cassandra][22]. If you still have questions, get in touch with the [developer community][23]. - -* * * - -_The author wants to thank the Apache Cassandra community for their tireless contributions to the project, dedication to the project users, and continuous efforts in improving the process of onboarding new contributors._ - -_The contributions and dedication of many individuals to the Apache Cassandra project and community have enabled us to reach 4.0—a significant milestone. As we look to the future and seek to encourage new contributors, we want to recognize everyone's efforts since its inception over 12 years ago. It would not have been possible without your help. Thank you!_ - -You don't need to be a master coder to contribute to open source. Jade Wang shares 8 ways you can... - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/apache-cassandra - -作者:[Ekaterina Dimitrova][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/edimitrova -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.) 
-[2]: https://cassandra.apache.org/ -[3]: https://cassandra.apache.org/doc/latest/development/gettingstarted.html -[4]: https://cassandra.apache.org/blog/2021/03/10/join_cassandra_gsoc_2021.html -[5]: https://cassandra.apache.org/doc/latest/development/how_to_review.html -[6]: https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20Complexity%20%3D%20%22Low%20Hanging%20Fruit%22%20and%20status%20!%3D%20resolved -[7]: https://issues.apache.org/jira/secure/Signup!default.jspa -[8]: https://issues.apache.org/jira/browse/CASSANDRA-16191?jql=project%20%3D%20CASSANDRA%20AND%20Complexity%20%3D%20%22Low%20Hanging%20Fruit%22%20and%20status%20!%3D%20resolved%20AND%20component%20%3D%20%22Test%2Fdtest%2Fjava%22 -[9]: https://cassandra.apache.org/doc/latest/development/testing.html -[10]: https://issues.apache.org/jira/browse/CASSANDRA-16122?jql=project%20%3D%20CASSANDRA%20and%20status%20!%3D%20resolved%20AND%20component%20%3D%20%22Documentation%2FBlog%22 -[11]: https://cassandra.apache.org/doc/latest/development/documentation.html -[12]: https://issues.apache.org/jira/browse/CASSANDRA-16151?jql=project%20%3D%20CASSANDRA%20AND%20Complexity%20%3D%20%22Low%20Hanging%20Fruit%22%20and%20status%20!%3D%20resolved%20AND%20component%20%3D%20Packaging -[13]: http://cassandra.apache.org/community/ -[14]: https://cwiki.apache.org/confluence/display/CASSANDRA2/CodeStyle -[15]: https://cassandra.apache.org/doc/latest/development/ide.html -[16]: https://builds.apache.org/ -[17]: https://github.com/apache/cassandra-dtest -[18]: https://github.com/riptano/ccm -[19]: https://cassandra.apache.org/blog/Testing-Apache-Cassandra-4.html -[20]: https://github.com/apache/cassandra-harry -[21]: https://cassandra.apache.org/doc/latest/development/code_style.html -[22]: https://cassandra.apache.org/doc/latest/development/index.html -[23]: https://cassandra.apache.org/community/ diff --git a/sources/tech/20210519 Set up a .NET development environment.md b/sources/tech/20210519 Set up a .NET development environment.md deleted file mode 100644 index 9944f6f47a..0000000000 --- a/sources/tech/20210519 Set up a .NET development environment.md +++ /dev/null @@ -1,231 +0,0 @@ -[#]: subject: (Set up a .NET development environment) -[#]: via: (https://fedoramagazine.org/set-up-a-net-development-environment/) -[#]: author: (Federico Antuña https://fedoramagazine.org/author/federicoantuna/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Set up a .NET development environment -====== - -![][1] - -Photo reused from previous post [_C# Fundamentals: Hello World_][2] - -Since the release of .NET Core, .NET developers are able to develop applications for and in GNU/Linux using languages like [C#][2]. If you are a .NET developer wanting to use Fedora Linux as your main workstation, this article is for you. I’ll demonstrate how to set up a full development environment for .NET on Fedora Linux, including an IDE/Text Editor, _Azure Functions_ and an SSL certificate for a secure _https_ site. There are multiple options for Text Editor and IDE, but here we cover _Visual Studio Code_ and _Rider_. The last one is not free but it is a great option for those familiar with _Visual Studio_ on _Windows_. - -### Install .NET SDK - -Until recently the _Microsoft_ repositories were required in the list of sources to be able to install _dotnet_ through _dnf_. But that is no longer the case. 
Fedora has added the _dotnet_ packages to their repositories, so installation is quite simple. Use the following two commands to install the latest _dotnet_ (.NET 5 at the moment) and the previous (.NET Core 3.1), if you want it. - -``` -sudo dnf install dotnet -sudo dnf install dotnet-sdk-3.1 -``` - -That’s it! Easier than ever! - -### Install NodeJS - -If you want to develop _Azure Functions_ or use _Azurite_ to emulate storage, you will need to have NodeJS installed. The best way to do this is to first install _nvm_ to allow installation of _NodeJS_ in user space. This means you may then install global packages without ever using _sudo_. - -To install _nvm_, follow [these instructions][3] in order to have the latest version. As of today the latest version is 0.38. Check the _github_ site in the instructions for the latest version. - -``` -sudo dnf install curl -curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash -``` - -Once you have _nvm_ installed, just run _nvm install lts/*_ to install the latest LTS version of _node_ or check [here][4] for more options. - -### Install a .NET IDE - -#### Visual Studio Code - -Check [this guide][5] in case something’s changed, but as of today the process to install _Visual Studio Code_ is to import the _Microsoft_ key, add the repository, and install the corresponding package. - -``` -sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc -sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo' -sudo dnf check-update -sudo dnf install code -``` - -Now install the C# extension from _Microsoft_. - -![][6] - -That’s pretty much it. - -#### JetBrains Rider - -##### JetBrains Toolbox - -If you come from _Visual Studio_ on _Windows_, this tool will feel more familiar to you. It’s not free, but you have 30 days to try it out and see if you like it or not before buying a license. You can check [here][7] for more information. - -There are several ways to install _Rider_, but the easiest and cleanest way is to install the _JetBrains Toolbox_ and let it manage the installation for you. To install it, navigate to [this link][8] and click on the _Download_ button. Make sure that the _.tar.gz_ option is selected. - -If you feel more comfortable using the UI, then go to the directory where you downloaded the file using the file explorer of your Desktop Environment (_nautilus_, _dolphin_, etc.), right click on it and extract its content. Then go inside the extracted directory, right click on the _jetbrains-toolbox_ file and click on _Properties_. Make sure that the _Allow executing file as program_ checkbox under the _Permissions_ tab is checked and close the _Properties_ window. Now double click the jetbrains-toolbox file. - -If you have trouble following that in your DE, or if you prefer using the console, open a terminal and navigate to the directory where you downloaded the file. Then extract the content of the file, navigate into the extracted directory, add execution permissions to the AppImage and execute it. The version numbers that I am using might differ from yours, so autocomplete with the **TAB** key instead of using copy-and-paste to avoid errors. 
- -``` -tar -xzvf jetbrains-toolbox-1.20.8352.tar.gz -cd jetbrains-toolbox-1.20.8352 -chmod +x jetbrains-toolbox -./jetbrains-toolbox -``` - -It takes a few seconds or minutes, depending on your system and internet connection, until a small Toolbox window opens. After that you can delete the downloaded files. You will be able to open the JetBrains Toolbox from your app menu, the AppImage installs the application under _~/.local/share/JetBrains_. - -![JetBrains Toolbox][9] - -##### Rider - -In the _JetBrains Toolbox_, search for the _Rider_ app and click Install. If you want to change where it’s going to be installed and other options, check first the settings (top right corner). - -When the installation finishes, open _Rider_. The first screen you’ll see is to opt-in in sending anonymous statistics to the _JetBrains_ team. You can choose whatever you prefer there. The second one is to import your settings. If you’ve never used _Rider_ before, click on _Do not import settings_ and _OK_. After that, you’ll be prompted to choose a theme and keymap. Choose whatever feels more comfortable. Click next on every other screen until you reach the _License_ window. If you have already bought a license, complete your JB Account or corresponding information. If you want to use the trial period, switch to _Evaluate for free_ and click on _Evaluate_. Do the same for _dotCover_ and _dotTrace_ on the _Plugins_ section on the left panel. Then click _Continue_. - -That’s it! You now have Rider installed. You can change the options selected going to _Configure -> Settings_ on the initial screen or _File -> Settings_ on the editor. - -### Azure Functions and Azurite - -To be able to develop Azure Functions you need to install the _azurite_ node package. The _azurite_ package allows you to emulate storage which is needed for some types of Azure Functions. - -``` -npm install -g azurite -``` - -You can read more about Azurite and how to use it [here][10]. - -#### Visual Studio Code - -To develop Azure Functions with _VSCode_, you need to also install the _azure-functions-core-tools_ package. As of today, the latest version is v3. Check [here][11] to find the latest version and more information on how to use the tool. Run _npm i -g azure-functions-core-tools@3 –unsafe-perm true_ if you want to install v3 or _npm i -g azure-functions-core-tools@2 –unsafe-perm true_ if you want to install v2. - -Then you just need to install the _Azure Functions_ extension from _Microsoft_. Once the extension is installed, you can go to the _Azure_ icon on the left panel and create a new Azure Function from the templates. - -#### JetBrains Rider - -On _Rider_, you first need to install the _Azure Toolkit for Rider_ plugin. Once the plugin is installed, restart the IDE. Then go to _Settings -> Tools -> Azure -> Functions_. If you want to manage the _azure-functions-core-tools_ by yourself manually, install the package like described in the _Visual Studio Code_ section and then specify the _Azure Functions Core Tools Path_ by hand. Otherwise, if you want _Rider_ to handle updates and the package automatically, click on _Download latest version…_ and make sure that the option _Check updates for Azure Function Core tools on startup_ is checked. - -Then navigate to _Tools -> Azure -> Azurite_ and on the _Azurite package path_ dropdown, select your installation of Azurite. It should look something like _~/.nvm/versions/node/v14.16.1/lib/node_modules/azurite_. - -Click _Save_ and now you are ready to create Azure Functions. 
If you click _New Solution_ you should see the Azure Functions templates on the menu. - -### Create a SSL Certificate for your .NET apps - -You won’t be able to trust the .NET certificate generated by _dotnet dev-certs https –trust_. That command has no effect on Fedora Linux. - -This article doesn’t cover the details for _easy-rsa_ or the concepts for the SSL Certificate. If you are interested into learning more about this, please check these sources: - - * [SSL][12] - * [CA][13] - * [pfx][14] - * [easy-rsa][15] - - - -First, install the _easy-rsa_ tool. Then create your own certificate authority (CA), set your system to trust it, sign your certificate and set .NET to use the certificate. - -Start with the package install and set up the working directory. - -``` -sudo dnf install easy-rsa -cd ~ -mkdir .easyrsa -chmod 700 .easyrsa -cd .easyrsa -cp -r /usr/share/easy-rsa/3/* ./ -./easyrsa init-pki -``` - -Now, create a file called _vars_ with the CA details. If you know what you are doing, feel free to change these values. - -``` -cat << EOF > vars -set_var EASYRSA_REQ_COUNTRY "US" -set_var EASYRSA_REQ_PROVINCE "Texas" -set_var EASYRSA_REQ_CITY "Houston" -set_var EASYRSA_REQ_ORG "Development" -set_var EASYRSA_REQ_EMAIL "local@localhost.localdomain" -set_var EASYRSA_REQ_OU "LocalDevelopment" -set_var EASYRSA_ALGO "ec" -set_var EASYRSA_DIGEST "sha512" -EOF -``` - -Now, build the CA and trust it. When you run the first command it will prompt for the CA name, you can just press enter to leave the default value. - -``` -./easyrsa build-ca nopass -sudo cp ./pki/ca.crt /etc/pki/ca-trust/source/anchors/easyrsaca.crt -sudo update-ca-trust -``` - -Next, create the request for our CA and sign it. After executing the last command, type _yes_ and press enter. - -``` -mkdir req -cd req -openssl genrsa -out localhost.key -openssl req -new -key localhost.key -out localhost.req -subj /C=US/ST=Texas/L=Houston/O=Development/OU=LocalDevelopment/CN=localhost -cd .. -./easyrsa import-req ./req/localhost.req localhost -./easyrsa sign-req server localhost -``` - -Now, place all the needed files inside a common directory and create the _pfx_ cert. After the final command you will be prompted for a password. Type anything you want. Be sure to remember your password and keep it secret. - -``` -cd ~ -mkdir .certs -cp .easyrsa/pki/issued/localhost.crt .certs/localhost.crt -cp .easyrsa/req/localhost.key .certs/localhost.key -cd .certs -openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt -``` - -Finally, edit the _~/.bashrc_ file and add the following environment variables. - -``` -cat << EOF >> ~/.bashrc -# .NET -export ASPNETCORE_Kestrel__Certificates__Default__Password="PASSWORD" -export ASPNETCORE_Kestrel__Certificates__Default__Path="/home/YOUR_USERNAME/.certs/localhost.pfx" -EOF -``` - -Remember to replace _PASSWORD_ for your actual password and _YOUR_USERNAME_ for your actual username. - -Reboot your system (there are other ways to do this, but rebooting is the easiest and fastest one). And that’s it! You can now develop using .NET with _https_ on your Fedora Linux system! 
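-
-If you want a quick sanity check that the new certificate is being picked up, scaffold a throwaway web app and run it. This is only a rough sketch: the project name here is arbitrary, and the default _https_ port can vary between .NET versions.
-
-```
-dotnet new webapp -o HelloHttps
-cd HelloHttps
-dotnet run
-```
-
-Kestrel prints the listening URLs; open the _https_ one (typically `https://localhost:5001`) and a browser that trusts the system CA store should load the page without a certificate warning.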
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/set-up-a-net-development-environment/ - -作者:[Federico Antuña][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/federicoantuna/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/05/dotnet-devel-816x345.jpg -[2]: https://fedoramagazine.org/c-fundamentals-hello-world/ -[3]: https://github.com/nvm-sh/nvm#install--update-script -[4]: https://github.com/nvm-sh/nvm#usage -[5]: https://code.visualstudio.com/docs/setup/linux#_rhel-fedora-and-centos-based-distributions -[6]: https://fedoramagazine.org/wp-content/uploads/2021/05/csharp-extension-1024x316.png -[7]: https://www.jetbrains.com/rider/buy/#personal?billing=yearly -[8]: https://www.jetbrains.com/toolbox-app/ -[9]: https://fedoramagazine.org/wp-content/uploads/2021/05/jetbrains-toolbox-644x1024.png -[10]: https://github.com/Azure/Azurite -[11]: https://github.com/Azure/azure-functions-core-tools -[12]: https://www.ssl.com/faqs/faq-what-is-ssl/ -[13]: https://www.ssl.com/faqs/what-is-a-certificate-authority/ -[14]: https://www.ssl.com/how-to/create-a-pfx-p12-certificate-file-using-openssl/ -[15]: https://github.com/OpenVPN/easy-rsa diff --git a/sources/tech/20210521 Play the Busy Beaver Game through a simulator.md b/sources/tech/20210521 Play the Busy Beaver Game through a simulator.md deleted file mode 100644 index 0f7c8db348..0000000000 --- a/sources/tech/20210521 Play the Busy Beaver Game through a simulator.md +++ /dev/null @@ -1,575 +0,0 @@ -[#]: subject: (Play the Busy Beaver Game through a simulator) -[#]: via: (https://opensource.com/article/21/5/busy-beaver-game-c) -[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Play the Busy Beaver Game through a simulator -====== -A simulator written in C helps solve one of the most complex games in -mathematics. -![Woman sitting in front of her computer][1] - -It's hard to find a game that combines the difficulty of, say, [Dark Souls][2] with the elegance of Conway's [Game of Life][3]. In a 1962 paper, Hungarian mathematician Tibor Radó came up with just such a game, which he called the Busy Beaver Game (BBG). - -To play BBG is to create a program that writes **1**s on a machine's tape whose cells initially hold **0**s; the **1**s need not be consecutive, but the program must halt for the **1**s to count. The winning program has the most **1**s on its tape after halting. - -I'll start with an overview of the machine, its programming language, and the game's constraints. - -### The BBG machine, language, and game constraints - -Imagine an abstract computing machine (indeed, a Turing machine) with these features: - - * A two-character alphabet. In general, any two characters will do for a Turing machine; by tradition, BBG uses **0** and **1**. - * A tape of cells laid out horizontally. Each cell holds a single character from the alphabet. The tape is initialized to **0**s, which counts as a blank tape. A Turing machine's tape must be unbounded in at least one direction (typically the right), which is computationally equivalent to a tape unbounded in both directions. 
Again by tradition, the BBG uses a tape that is unbounded in both directions: [code]    +-+-+-+-+-+ -...|0|0|0|0|0|... -   +-+-+-+-+-+ [/code] The tape is the machine's memory. - * A read-write marker, which identifies the current memory cell. During each step in a computation, the machine reads and then overwrites the current cell's contents. The machine can replace a **0** with a **1**, a **1** with a **0**, or either character with the same one. The caret **^** beneath a cell represents the marker: [code]    +-+-+-+-+-+ -...|0|0|0|0|0|... -   +-+-+-+-+-+ -        ^ [/code] The marker (here beneath the middle cell) acts as an index into the machine's memory. A Turing machine has only linear rather than random access to memory cells because it moves just one cell, either left or right, per executed instruction. - - - -A BBG program, like any Turing program, consists of instructions such as **a0b1L**, which are known as quintuples for the number of parts; in BBG, there is one character per part. A quintuple's first two characters (in this example, **a0**) represent a condition that captures two aspects of the computation: - - * The current state of the computation, which is **a** in the **a0b1L** example; BBG uses single letters (in my examples, lowercase ones) to identify the state - * The contents of the currently marked cell: **0** or **1** - - - -The condition **a0** means "in state **a** scanning a **0**," whereas **h1** means "in state **h** scanning a **1**." - -The last three characters in a quintuple specify the action to be taken if the condition is satisfied. - -Here are two examples (with ## introducing my comments): - - -``` -a0b1R ## in state a scanning a 0: transition to state b, write a 1, and move one cell right -p1p1L ## in state p scanning a 1: stay in state p, write a 1, and move one cell left -``` - -Quintuples can be visualized as rules with an arrow separating the condition from the action: - - -``` -`a0-->b1R` -``` - -If condition **a0** is satisfied, then action **b1R** occurs. By tradition, **a** is the start state and, therefore, **a0** is the start condition. Forward-chaining rule systems such as [OPS5][4] use a similar flow-of-control mechanism. - -In summary, the quintuples in a BBG program can occur in any order because condition-matching determines which instruction executes next. No two quintuples in a program should have the same condition. - -### The halting problem - -A BBC program executes until reaching a halt state—a state that does not occur in the matched condition of any instruction. Consider the program below, which is the BBG winner for a program with two non-halting states, **a** and **b**: - - -``` -# bb2 winner -a0b1R  ## a0-->b1R -a1b1L  ## a1-->b1L -b0a1L  ## b0-->a1L -b1h1R  ## b1-->b1h1R (halt state) -``` - -The last instruction **b1h1R** contains the traditional halt state **h** in the action. (There can be multiple halt states but one is enough.) If the condition **b1** is satisfied, then **h** becomes the new state when instruction **b1h1R** executes. However, no instruction in the program has a condition that begins with **h**, which means that the machine halts in state **h**. Reaching a halt state **h** represents normal program termination. - -For a BBG, as for Turing computations in general, a program must halt for the computation to complete. 
For example, this instruction would write infinitely many **1**s to the right on an initially blank tape: - - -``` -`a0a1R ## "infinitely touring" instruction on a blank tape` -``` - -Scanning a **0** in state **a**, the machine stays in state **a**, overwrites the **0** with a **1**, and moves right to another **0**; hence, instruction **a0a1R** executes again. A program in which this instruction executes, with only **0**s to the right, would never halt and, therefore, could not qualify as a BBG winner. - -Among the legendary unsolvable problems in computing is whether a Turing machine halts when executing a given program on given data inputs—the _halting problem_. Accordingly, there is no way to know whether a given BBG program will halt. My simulator (introduced below) suspects "infinite touring" after executing a million instructions and therefore exits. - -### From BBG to BBG-N - -BBG covers an indefinitely large set of games, each with a number that identifies how many non-halting states a game-playing program may use. For example, the sample program shown earlier is the winner of the BBG-2 game because the program is restricted to two non-halting states, **a** and **b**. (The BBG-2 winner produces four **1**s on an initially blank tape.) The BBG-3 game uses three non-halting states, whereas the BBG-744 game uses 744 non-halting states. - -In summary, a BBG-N winner produces the most **1**s, given _N_ non-halting states and starting on a blank tape. The winner must halt. Proving that a contender wins the BBG-N game is non-trivial. At present, there are proven winners of the BBG-1 through the BBG-4 games, but none for games with more than four non-halting states. For example, the BBG-5 game has only a best contender. Running some BBG winners and the BBG-5 best contender on the simulator should clarify the computation in detail and underscore how ingenious the winning and contending programs are. - -### BBG-N examples on the simulator - -I'll start with the winner of the trivial BBG-1 game: - - -``` -# bb1 winner -a0h1R  ## a is the single non-halting state -``` - -Here's how the simulator's tape looks to begin, with the computation in start state **a** scanning a **0**: - - -``` -Current state: a -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                    ^ -``` - -The program's single instruction transitions to the halting state **h**, writes a **1**, and moves one cell to the right: - - -``` -Current state: h -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                    ^ -``` - -The winner thus produces a single **1** in one step, using one non-halting state. No BBG-1 program can do better. An equivalent BBG-1 program might move left instead of right after writing a **1**, but the winning total of one **1** would be the same. 
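-
-For the curious, that left-moving equivalent is itself a complete one-instruction program, following the same conventions (start state **a**, halt state **h**) used throughout:
-
-```
-# bb1 winner, left-moving variant
-a0h1L
-```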
- -The winner of the more interesting BBG-2 game is this program (shown earlier), which has two non-halting states, **a** and **b**: - - -``` -# bb2 winner -a0b1R  ## a0-->b1R -a1b1L  ## a1-->b1L -b0a1L  ## b0-->a1L -b1h1R  ## b1-->h1R  (halt state) -``` - -The program produces four **1**s in six steps: - - -``` -Current state: h -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                  ^ -``` - - Let's examine the BBG-3 winner in detail: - - -``` -# bb3 winner -a0b1R -a1h1R # (halt state) -b0c0R -b1b1R -c0c1L -c1a1L -``` - -The program produces six **1**s in 14 steps and has three non-halting states: **a**, **b**, and **c**. A trace from the simulator clarifies how the computation works, and in particular, how it loops. - -As usual, the tape is initially blank. The marker is in the middle, and the machine is in start state **a** and scanning a **0**. The instruction with the matching condition **a0** is the first one, **a0b1R**. This instruction transitions to state **b**, writes a **1**, and moves one cell to the right: - - -``` -Current state: b -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                    ^ -``` - -The condition is now **b0**, which identifies instruction **b0c0R**. The machine accordingly transitions to state **c**, writes a **0**, and moves one cell to the right: - - -``` -Current state: c -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                      ^ -``` - -The new condition is **c0** with **c0c1L** as the matching instruction. The machine thus overwrites the currently scanned **0** with a **1**, remains in state **c**, and moves one cell to the left: - - -``` -Current state: c -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                    ^ -``` - -The condition remains **c0** and the matching instruction remains **c0c1L**; the action, therefore, writes another **1** between the other two. The state stays the same but the left move places the marker on a **1** rather than a **0**. Accordingly, the condition changes to **c1** and the matching instruction to **c1a1L**. The action for this instruction moves the marker to the **0** cell immediately to the left of the leftmost **1** with **a** as the new state: - - -``` -Current state: a -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                ^ -``` - -Now the condition is **a0** just as it was at the start of the computation—the program is, in effect, looping. The full instruction is **a0b1R**, which transitions the machine to state **b**, writes a fourth **1** on the left of the other three, and then moves right. 
The machine keeps moving right (and overwriting **1**s with **1**s) via instruction **b1b1R** until hitting the first **0** to the right: - - -``` -Current state: b -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                        ^ -``` - -In state **b** and scanning a **0**, the machine executes instruction **b0c0R** once again, thereby transitioning to state **c**, overwriting a **0** with a **0**, and moving right to another blank cell: - - -``` -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                          ^ -``` - -In state **c** and scanning a **0**, the machine now executes instruction **c0c1L** twice in a row to produce a tape with six consecutive **1**s. Also, the machine has transitioned into state **a**: - - -``` -Current state: a -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                    ^ -``` - -At this point, the matching instruction **a1h1R** transitions the machine into halt state **h** and moves one cell to the right. The final tape configuration is thus: - - -``` -Current state: h -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                                      ^ -``` - -The BBG-3 winner produces six **1**s in 14 steps. - -How can we be sure that these winners deserve the title? There are rigorous mathematical proofs that the winning BBG-1, BBG-2, BBG-3, and BBG-4 programs cannot be bested. The creator of the BBG once believed it impossible to prove a winner for BBG-4, but eventually, there was a [proof for the BBG-4 winner][5]. Here's the proven BBG-4 winner: - - -``` -# bb4 winner -a0b1R -a1b1L -b0a1L -b1c0L -c0h1R # (halt state) -c1d1L -d0d1R -d1a0R -``` - -This program takes 107 steps to produce 13 **1**s, which are not consecutive: - - -``` -Current state: h -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -...|0|0|0|0|0|1|0|1|1|1|1|1|1|1|1|1|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|... -   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ -                ^ -``` - -For games BBG-5 and beyond, there are best contenders rather than proven winners. Long-lived fame (but probably not fortune) awaits anyone who can prove a BBG-5 winner. For reference, here's the BBG-5 best contender: - - -``` -# bb5 best contender -a0b1R -a1c1L -b0c1R -b1b1R -c0d1R -c1e0L -d0a1L -d1d1L -e0h1R # (halt state) -e1a0L -``` - -As shown below, this program produces 4,098 non-consecutive **1**s in an astonishing 47,176,870 steps—a mind-boggling caution about just how hard BBG-N games become for _N_>4\. Might there be a better contender for BBG-5 or even a proof that this program wins BBG-5? To date, these questions are open, and the experts expect them to remain so. 
- -### Running the simulator - -The Turing program (written in C) simulates a single-tape Universal Turing Machine (UTM) by being general-purpose: the simulator can play BBG games but also, for example, perform mathematical operations such as multiplication and exponentiation on values represented in unary, given the appropriate program as an input. The UTM simulator presents the tape as if it were unbounded in both directions. The unavoidable shortcoming of any UTM simulator is finite tape size, of course; the abstract UTM has unbounded memory, a magical feature that no simulator can capture. - -The simulator, together with the BBG programs discussed so far, is available from [my website][6]. For reference, here's the C source code for the simulator: - - -``` -.Turing machine simulator -========================= -\----- -#include <stdio.h> -#include <stdlib.h> -#include <string.h> - -#define MaxQuintuples     128 /* expand as needed */ -#define QuintupleLen        5 -#define MaxBuffer         128 -#define MaxTape            33 /* expand as needed: 1 line of display */ -#define MaxSteps      1000000 /* assume 'infinite looping' thereafter */ -#define Blank            '0'  /* 2-character alphabet: 0 and 1 */ -#define StartState       'a' - -#define TapeBorder       "+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+" -#define CellSide         '|' -#define Ellipsis         "..." -#define CellLen          2   - -enum { NewState = 2, NewSymbol = 3, Direction = 4 }; -char quintuples[MaxQuintuples][QuintupleLen + 1]; /* array of strings */ -unsigned qlen = 0;        /* number of entries from input file */ -unsigned displayFlag = 1; /* 2nd command-line arg turns this off */ - -char tape[MaxTape]; -unsigned currentCell = MaxTape / 2; /* tape middle */ -char currentState = StartState; -unsigned instructionsExecuted = 0; - -void die(const char* msg) { -  [puts][7](msg); -  [exit][8](EXIT_FAILURE); -} - -void print_cell(unsigned ind) { -  [putchar][9](CellSide); -  [putchar][9](tape[ind]); -} - -void pause() { -  [puts][7]("Hit RETURN to continue..."); -  [getchar][10](); -} - -void display_tape() { -  [printf][11]("\nCurrent state: %c\n", currentState); -  [printf][11]("   %s\n", TapeBorder); -  [printf][11]("%s", Ellipsis); -  -  unsigned i; -  for (i = 0; i < MaxTape; i++) print_cell(i); -  [printf][11]("%c%s\n   %s\n", CellSide, Ellipsis, TapeBorder); - -  i = (CellLen * currentCell) + 2; -  [printf][11]("%*s%c\n", i, "", '^'); -  pause(); -} - -int comp(const void* e1, const void* e2) { /* qsort and bsearch callback */ -  const char* s1 = (char*) e1; -  const char* s2 = (char*) e2; -  return [strncmp][12](s1, s2, 2); /* < 0 = s1 < s2; 0 = s1 == s2; > 0 = s1 > s2 */ -} - -void read_program(const char* file) { -  FILE* infile = [fopen][13](file, "r"); -  char buff[MaxBuffer + 1]; -  if (NULL == infile) { -    [sprintf][14](buff, "Can't open program file %s", file); -    die(buff); -  } -    -  while ([fgets][15](buff, MaxBuffer, infile)) { /* read until end-of-file */ -    if ('#' == buff[0]) continue;              /* ignore comments */ -    if ([strlen][16](buff) < QuintupleLen) continue; /* ignore faulty lines */ -    [strncpy][17](quintuples[qlen], buff, QuintupleLen); -    qlen++; -  } -  [fclose][18](infile); - -  [qsort][19](quintuples, qlen, sizeof(quintuples[0]), comp); /* sort for easy access */ -  [memset][20](tape, Blank, MaxTape); /* blank out the tape */ -} - -void report() { -  /* Show instructions. 
*/ -  [printf][11]("%i instructions:\n", qlen); -  unsigned i, count = 0; -  for (i = 0; i < qlen; i++) [puts][7](quintuples[i]); -  for (i = 0; i < MaxTape; i++) if ('1' == tape[i]) count++; -  [printf][11]("Total 1s on tape:      %8i\n", count); -  [printf][11]("Instructions executed: %8i\n", instructionsExecuted); -} - -void check_for_errors(const char* action) { -  if (0 == (currentCell - 1) && 'L' == action[Direction]) die("Can't move left..."); -  if (currentCell >= MaxTape - 1 && 'R' == action[Direction]) die("Can't move right..."); -  if (instructionsExecuted >= MaxSteps) die("Seems to be infinitely touring..."); -} - -void run_simulation() { -  while (1) { -     if (displayFlag) display_tape(); -  -    /* Get the action for the current key. */ -    char key[3]; -    [sprintf][14](key, "%c%c", currentState, tape[currentCell - 1]); -    char* action = [bsearch][21](key, quintuples, qlen, sizeof(quintuples[0]), comp); -    if (NULL == action) break; /* no match == normal termination */ - -    check_for_errors(action); - -    /* Update system. */ -    currentState = action[NewState]; -    tape[currentCell - 1] = action[NewSymbol]; -    if ('L' == action[Direction]) currentCell--; /* move left */ -    else currentCell++;                          /* move right */ -    instructionsExecuted++;                      /* update step counter */ -  } -} - -int main(int argc, char* argv[]) { -  if (argc < 2) die("Usage: turing <program file>"); -  if (argc > 2 && 0 == [strcmp][22](argv[2], "off")) displayFlag = 0; -  -  read_program(argv[1]); -  run_simulation(); -  report(); -  return 0; -} -``` - -The code is straightforward and there are about 130 lines of it. Here's a summary of the control flow: - - * The Turing program reads from an input file (given as a command-line argument), that contains quintuples (one per line), as in the BBG-N examples. - * The simulator then loops until one of these conditions occurs: - * If the input program reaches a halt state, the simulator exits normally after reporting on the program's instructions, the number of **1**s produced, and the number of steps required to do so. - * If the computation tries to move either left from the leftmost cell or right from the rightmost cell, the simulator exits with an error message. The tape is not big enough for the computation. - * If the computation hits **MaxSteps** (currently set at a million), the simulator terminates on suspicion of "infinite touring." - - - -The simulator expects, as a command-line argument, the name of a file that contains a BBG-N or other program. With **%** as the command-line prompt, this command runs the simulator on the **bb4.prog** file introduced earlier: - - -``` -`% ./turing bb4.prog` -``` - -By default, the simulator displays the tape and pauses after each instruction executes. However, the displays can be turned off from the command line: - - -``` -`% ./turing bb5.prog off  ` -``` - -This produces a report on the BBG-5 best contender, which takes more than 47 million steps before halting: - - -``` -10 instructions: -a0b1R -a1c1L -b0c1R -b1b1R -c0d1R -c1e0L -d0a1L -d1d1L -e0h1R -e1a0L -Total 1s on tape:          4098 -Instructions executed: 47176870 -``` - -A program file should terminate each line, including the last one, with a newline; otherwise, the simulator may fail to read the line. The simulator ignores comments (which start with a **#**) and empty lines. 
Here, for review, is the **bb4.prog** input file: - - -``` -# bb4 winner -a0b1R -a1b1L -b0a1L -b1c0L -c0h1R # (halt state) -c1d1L -d0d1R -d1a0R -``` - -At the top of the Turing source file are various macros (in C, **#define** directives) that specify sizes. These are of particular interest: - - -``` -#define MaxQuintuples  128 /* expand as needed */ -#define MaxTape         33 /* expand as needed */ -#define MaxSteps   1000000 /* assume 'infinite touring' thereafter */ -``` - -The specified sizes can be increased as needed, but even the best BBG-5 contender has only 10 instructions. The tape for a contender such as BBG-5 must be very big, and this contender requires more than 47 million steps to complete the computation. These settings would suffice: - - -``` -#define MaxTape     100000000 /* 100M */ -#define MaxSteps    100000000 /* 100M */ -``` - -BBG-5 and other best contenders should be run with the **off** flag because the simulator, at present, displays the **MaxTape** cells on a single line. - -### Wrapping up - -BBGs should appeal to recreational problem solvers and especially programmers. To program a BBG is to work in the machine language of the abstract computer—the Turing machine—that defines what _computable_ means. One way to get started is by composing the BBG-2 and the BBG-3 winners from scratch. These exercises help to reveal the programming patterns used in the truly daunting challenges such as the BBG-4 winner and the BBG-5 best contender. - -Another starting exercise is to write a program that first initializes a tape to two numeric values (in unary) separated by a **0**: - - -``` -   +-+-+-+-+-+-+-+-+ -...|0|1|1|0|1|1|1|0|...  ## two and three in unary -   +-+-+-+-+-+-+-+-+ -          ^ -``` - -The program then computes, for example, the product in unary. Other arithmetic examples abound. - -BBGs also are of ongoing interest to theoreticians in logic, mathematics, and computer science. For a brief history and overview of them, see [_How the slowest computer programs illuminate math's fundamental limits_][23]. 
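-
-As a gentler warm-up to the unary arithmetic exercise suggested above, here is a sketch of a program for unary addition rather than multiplication. It assumes the tape already holds the two numbers (as in the diagram above) and that the marker starts on the leftmost **1** of the first number; the program fills in the separating **0**, then erases one trailing **1**, leaving the sum in unary. It is only an illustration, not one of the proven BBG programs discussed earlier:
-
-```
-# unary addition sketch: 1^m 0 1^n  ==>  1^(m+n)
-a1a1R  ## skip right over the first number
-a0b1R  ## overwrite the separator 0 with a 1
-b1b1R  ## skip right over the second number
-b0c0L  ## right edge reached; step back onto the last 1
-c1h0L  ## erase one surplus 1 and halt
-```
-
-Hand-tracing it on the two-and-three tape shown above ends in state **h** with five consecutive **1**s, the unary representation of 2 + 3.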
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/busy-beaver-game-c - -作者:[Marty Kalin][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mkalindepauledu -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer) -[2]: https://en.wikipedia.org/wiki/Dark_Souls -[3]: https://opensource.com/article/21/4/game-life-simulation-webassembly -[4]: https://en.wikipedia.org/wiki/OPS5 -[5]: https://www.ams.org/journals/mcom/1983-40-162/S0025-5718-1983-0689479-6/S0025-5718-1983-0689479-6.pdf -[6]: https://condor.depaul.edu/mkalin -[7]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html -[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html -[9]: http://www.opengroup.org/onlinepubs/009695399/functions/putchar.html -[10]: http://www.opengroup.org/onlinepubs/009695399/functions/getchar.html -[11]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html -[12]: http://www.opengroup.org/onlinepubs/009695399/functions/strncmp.html -[13]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html -[14]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html -[15]: http://www.opengroup.org/onlinepubs/009695399/functions/fgets.html -[16]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html -[17]: http://www.opengroup.org/onlinepubs/009695399/functions/strncpy.html -[18]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html -[19]: http://www.opengroup.org/onlinepubs/009695399/functions/qsort.html -[20]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html -[21]: http://www.opengroup.org/onlinepubs/009695399/functions/bsearch.html -[22]: http://www.opengroup.org/onlinepubs/009695399/functions/strcmp.html -[23]: https://www.quantamagazine.org/the-busy-beaver-game-illuminates-the-fundamental-limits-of-math-20201210/ diff --git a/sources/tech/20210524 4 steps to set up global modals in React.md b/sources/tech/20210524 4 steps to set up global modals in React.md deleted file mode 100644 index f0debb9c82..0000000000 --- a/sources/tech/20210524 4 steps to set up global modals in React.md +++ /dev/null @@ -1,329 +0,0 @@ -[#]: subject: "4 steps to set up global modals in React" -[#]: via: "https://opensource.com/article/21/5/global-modals-react" -[#]: author: "Ajay Pratap https://opensource.com/users/ajaypratap" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -4 steps to set up global modals in React -====== -Learn how to create interactive pop-up windows in a React web app. - -![Digital creative of a browser on the internet][1] - -A modal dialog is a window that appears on top of a web page and requires a user's interaction before it disappears. [React][2] has a couple of ways to help you generate and manage modals with minimal coding. - -If you create them within a **local scope**, you must import modals into each component and then create a state to manage each modal's opening and closing status. - -By using a **global state**, you don't need to import modals into each component, nor do you have to create a state for each. 
You can import all the modals in one place and use them anywhere. - -In my opinion, the best way to manage modal dialogs in your React application is globally by using a React context rather than a local state. - -### How to create global modals - -Here are the steps (and code) to set up global modals in React. I'm using [Patternfly][3] as my foundation, but the principles apply to any project. - -#### 1. Create a global modal component - -In a file called **GlobalModal.tsx**, create your modal definition: - -``` -import React, { useState, createContext, useContext } from 'react'; -import { CreateModal, DeleteModal,UpdateModal } from './components'; - -export const MODAL_TYPES = { -CREATE_MODAL:”CREATE_MODAL”, - DELETE_MODAL: “DELETE_MODAL”, - UPDATE_MODAL: “UPDATE_MODAL” -}; - -const MODAL_COMPONENTS: any = { - [MODAL_TYPES.CREATE_MODAL]: CreateModal, - [MODAL_TYPES.DELETE_MODAL]: DeleteModal, - [MODAL_TYPES.UPDATE_MODAL]: UpdateModal -}; - -type GlobalModalContext = { - showModal: (modalType: string, modalProps?: any) => void; - hideModal: () => void; - store: any; -}; - -const initalState: GlobalModalContext = { - showModal: () => {}, - hideModal: () => {}, - store: {}, -}; - -const GlobalModalContext = createContext(initalState); -export const useGlobalModalContext = () => useContext(GlobalModalContext); - -export const GlobalModal: React.FC<{}> = ({ children }) => { - const [store, setStore] = useState(); - const { modalType, modalProps } = store || {}; - - const showModal = (modalType: string, modalProps: any = {}) => { -   setStore({ -     ...store, -     modalType, -     modalProps, -   }); - }; - - const hideModal = () => { -   setStore({ -     ...store, -     modalType: null, -     modalProps: {}, -   }); - }; - - const renderComponent = () => { -   const ModalComponent = MODAL_COMPONENTS[modalType]; -   if (!modalType || !ModalComponent) { -     return null; -   } -   return ; - }; - - return ( -    -     {renderComponent()} -     {children} -    - ); -}; -``` - -In this code, all dialog components are mapped with the modal type. The `showModal` and `hideModal` functions are used to open and close dialog boxes, respectively. - -The `showModal` function takes two parameters: `modalType` and `modalProps`. The `modalProps` parameter is optional; it is used to pass any type of data to the modal as a prop. - -The `hideModal` function doesn't have any parameters; calling it causes the current open modal to close. - -#### 2. Create modal dialog components - -In a file called **CreateModal.tsx**, create a modal: - -``` -import React from "react"; -import { Modal, ModalVariant, Button } from "@patternfly/react-core"; -import { useGlobalModalContext } from "../GlobalModal"; - -export const CreateModal = () => { - const { hideModal, store } = useGlobalModalContext(); - const { modalProps } = store || {}; - const { title, confirmBtn } = modalProps || {}; - - const handleModalToggle = () => { -   hideModal(); - }; - - return ( -    -         {confirmBtn || "Confirm button"} -       , -        -     ]} -   > -     Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod -     tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim -     veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea -     commodo consequat. Duis aute irure dolor in reprehenderit in voluptate -     velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat -     cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id -     est laborum. 
-    - ); -}; -``` - -This has a custom hook, `useGlobalModalContext`, that provides store object from where you can access all the props and the functions `showModal` and `hideModal`. You can close the modal by using the `hideModal` function. - -To delete a modal, create a file called **DeleteModal.tsx**: - -``` -import React from "react"; -import { Modal, ModalVariant, Button } from "@patternfly/react-core"; -import { useGlobalModalContext } from "../GlobalModal"; - -export const DeleteModal = () => { - const { hideModal } = useGlobalModalContext(); - - const handleModalToggle = () => { -   hideModal(); - }; - - return ( -    -         Confirm -       , -        -     ]} -   > -     Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod -     tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim -     veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea -     commodo consequat. Duis aute irure dolor in reprehenderit in voluptate -     velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat -     cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id -     est laborum. -    - ); -}; -``` - -To update a modal, create a file called **UpdateModal.tsx** and add this code: - -``` -import React from "react"; -import { Modal, ModalVariant, Button } from "@patternfly/react-core"; -import { useGlobalModalContext } from "../GlobalModal"; - -export const UpdateModal = () => { - const { hideModal } = useGlobalModalContext(); - - const handleModalToggle = () => { -   hideModal(); - }; - - return ( -    -         Confirm -       , -        -     ]} -   > -     Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod -     tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim -     veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea -     commodo consequat. Duis aute irure dolor in reprehenderit in voluptate -     velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat -     cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id -     est laborum. -    - ); -}; -``` - -#### 3. Integrate GlobalModal into the top-level component in your application - -To integrate the new modal structure you've created into your app, you just import the global modal class you've created. Here's my sample **App.tsx**file: - -``` -import "@patternfly/react-core/dist/styles/base.css"; -import "./fonts.css"; -import { GlobalModal } from "./components/GlobalModal"; -import { AppLayout } from "./AppLayout"; - -export default function App() { - return ( -    -      -    - ); -} -``` - -App.tsx is the top-level component in your app, but you can add another component according to your application's structure. However, make sure it is one level above where you want to access modals. - -`GlobalModal` is the root-level component where all your modal components are imported and mapped with their specific `modalType`. - -#### 4. 
Select the modal's button from the AppLayout component - -Adding a button to your modal with **AppLayout.js**: - -``` -import React from "react"; -import { Button, ButtonVariant } from "@patternfly/react-core"; -import { useGlobalModalContext, MODAL_TYPES } from "./components/GlobalModal"; - -export const AppLayout = () => { - const { showModal } = useGlobalModalContext(); - - const createModal = () => { -   showModal(MODAL_TYPES.CREATE_MODAL, { -     title: "Create instance form", -     confirmBtn: "Save" -   }); - }; - - const deleteModal = () => { -   showModal(MODAL_TYPES.DELETE_MODAL); - }; - - const updateModal = () => { -   showModal(MODAL_TYPES.UPDATE_MODAL); - }; - - return ( -   <> -      -     
-     {/* one button per modal type; clicking calls the matching handler above */}
-     <Button variant={ButtonVariant.primary} onClick={createModal}>
-       create modal
-     </Button>
-     <br />
-     <Button variant={ButtonVariant.danger} onClick={deleteModal}>
-       delete modal
-     </Button>
-     <br />
-     <Button variant={ButtonVariant.secondary} onClick={updateModal}>
-       update modal
-     </Button>
-      -    - ); -}; -``` - -There are three buttons in the AppLayout component: create modal, delete modal, and update modal. Each modal is mapped with the corresponding `modalType` : `CREATE_MODAL`, `DELETE_MODAL`, or `UPDATE_MODAL`. - -### Use global dialogs - -Global modals are a clean and efficient way to handle dialogs in React. They are also easier to maintain in the long run. The next time you set up a project, keep these tips in mind. - -If you'd like to see the code in action, I've included the [complete application][4] I created for this article in a sandbox. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/global-modals-react - -作者:[Ajay Pratap][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ajaypratap -[b]: https://github.com/lkxed -[1]: https://opensource.com/sites/default/files/lead-images/browser_web_internet_website.png -[2]: https://reactjs.org/ -[3]: https://www.patternfly.org/v4/ -[4]: https://codesandbox.io/s/affectionate-pine-gib74 diff --git a/sources/tech/20210524 Keep tabs on your Linux computer specs with this desktop application.md b/sources/tech/20210524 Keep tabs on your Linux computer specs with this desktop application.md deleted file mode 100644 index 6c5eecc0f0..0000000000 --- a/sources/tech/20210524 Keep tabs on your Linux computer specs with this desktop application.md +++ /dev/null @@ -1,93 +0,0 @@ -[#]: subject: (Keep tabs on your Linux computer specs with this desktop application) -[#]: via: (https://opensource.com/article/21/5/linux-kinfocenter) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Keep tabs on your Linux computer specs with this desktop application -====== -Get to know KDE Plasma's Info Center, a lifesaver when you need to know -your Linux machine's specs quickly. -![Puzzle pieces coming together to form a computer screen][1] - -Whether I'm using a laptop my employer assigned to me or a workstation I built from vendor parts, I seem to have an endless capacity to forget my computer's specifications. One of the great things about Linux is its `/proc` filesystem, a dynamically populated virtual expression of the system's hardware. It's convenient when you want to see the specifics of your CPU (`cat /proc/cpuinfo`), uptime (`cat /proc/uptime`), a list of mounted filesystems (`ls -R /proc/fs/`), and so on. - -Sometimes, though, it's nice to have everything you need (and what you don't know you need) all in one place for your perusal. The KDE Plasma desktop provides an application called Info Center (sometimes also called [KInfoCenter][2]), a place to help you know what, where, and how much you're running. - -### Installing KInfoCenter - -If you're already running the [KDE Plasma desktop][3], then KInfoCenter is probably already installed. Otherwise, you can find the application in your distribution's software repository. - -For example, on Fedora or CentOS Stream: - - -``` -`$ sudo dnf install kinfocenter` -``` - -### System information - -When Info Center is launched, the default screen is the **About System** pane. This displays the versions of your Plasma desktop, KDE Frameworks, and Qt: all the technologies that work together to provide the desktop. 
It also displays the Linux kernel version and architecture and gives you a quick hardware overview, listing both your CPU and RAM. - -![KInfoCenter's main display][4] - -(Seth Kenlon, [CC BY-SA 4.0][5]) - -### Memory and resources - -Maybe seeing the total RAM installed on your system isn't specific enough for you. In that case, you can open the **Memory** pane to see a detailed report about how your RAM is being used. This updates dynamically, so you can use it to monitor the effects an application or activity has on your system. - -![KInfoCenter's Memory pane][6] - -(Seth Kenlon, [CC BY-SA 4.0][5]) - -If you're on a laptop, **Energy Information** displays your power-saving settings. If you have file indexing active, you can view the status of the indexer in the **File Indexer Monitor** panel. - -### Devices - -The **Device Information** folder contains several panes you can access for details about the physical peripherals inside or connected to your computer. This covers _everything_, including USB devices, hard drives, processors, PCI slots, and more. - -![KInfoCenter's Device Information pane][7] - -(Seth Kenlon, [CC BY-SA 4.0][5]) - -This isn't just a broad overview, either. KInfoCenter gives you nearly everything there is to know about the components you're running. For hard drives, it provides a list of partitions, the SATA port the drive is connected to, the drive label or name you've given it, UUID, size, partition, the filesystem, whether it's mounted and where, and more. For the CPU, it provides the product name, vendor, number of cores (starting at 0), maximum clock speed, interrupt information, and supported instruction sets. The list goes on and on for every type of device you can think of. - -### Network and IP address - -Maybe you're tired of parsing the verbose output of `ip address show`. Maybe you're too lazy to create an alias for `ip address show | grep --only-matching "inet 10.*" | cut -f2 -d" "`. Whatever the reason, sometimes you want an easy way to get a machine's IP address. KInfoCenter is the answer because the **Network Information** panel contains its host's IP address. In fact, it lists both the active hardware-based IP addresses as well as active bridges for virtual machines. - -It seems basic, but this simple KInfoCenter feature has saved me minutes of frustration when trying to obtain an IP address quickly over a support call so I could SSH into the machine in question and fix a problem. The network panel also provides information about [Samba shares][8], the open source file sharing service you can run locally to swap files between computers on your network easily. - -### Graphics - -As if that's not enough, KInfoCenter also features a **Graphical Information** panel so you can get details about your graphics server, whether you're running Wayland or X11. You can get data on your display's dimensions, resolution (you may remember when 72 DPI was standard, but this panel assures you that you're running a more modern 92 DPI), bit depth, and more. It also provides information on OpenGL or Vulkan, including what card is being used to render graphics, what extensions are in use, what kernel module is installed, and so on. - -### KInfoCenter? More like KLifeSaver - -I regularly pin KInfoCenter to the KDE Kicker or create a shortcut to it on the desktop so that users I support can get there easily whenever they need to know their architecture, RAM, or IP address. 
It's the most friendly aggregation of system information I've seen on any operating system, much less on any Linux desktop. Install KInfoCenter today. You might not use it right away, but you'll need it someday, and when you do, you'll be glad you have it. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/linux-kinfocenter - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) -[2]: https://userbase.kde.org/KInfoCenter -[3]: https://opensource.com/article/19/12/linux-kde-plasma -[4]: https://opensource.com/sites/default/files/uploads/kinfocenter-main.png (KInfoCenter's main display) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/uploads/kinfocenter-memory.png (KInfoCenter's Memory pane) -[7]: https://opensource.com/sites/default/files/uploads/kinfocenter-peripherals.png (KInfoCenter's Device Information pane) -[8]: https://opensource.com/article/21/4/share-files-linux-windows diff --git a/sources/tech/20210525 Gromit-MPX Lets You Draw Anywhere On Linux Desktop Screen.md b/sources/tech/20210525 Gromit-MPX Lets You Draw Anywhere On Linux Desktop Screen.md deleted file mode 100644 index 31c17c3766..0000000000 --- a/sources/tech/20210525 Gromit-MPX Lets You Draw Anywhere On Linux Desktop Screen.md +++ /dev/null @@ -1,123 +0,0 @@ -[#]: subject: (Gromit-MPX Lets You Draw Anywhere On Linux Desktop Screen) -[#]: via: (https://itsfoss.com/gromit-mpx/) -[#]: author: (Sarvottam Kumar https://itsfoss.com/author/sarvottam/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Gromit-MPX Lets You Draw Anywhere On Linux Desktop Screen -====== - -Have you ever wished that you could freeze your Linux desktop screen and draw anything on it? Or, you may want to point out or highlight a part of your app or desktop to someone else while [screen recording on Linux][1]? - -If so, Gromit-MPX is an easy-to-use on-screen annotation tool that you could use right now. - -### Make Annotation On Screen Using Gromit-MPX - -![Gromit-MPX][2] - -[Gromit-MPX][3] (**GR**aphics **O**ver **MI**scellaneous **T**hings – **M**ulti-**P**ointer **EX**tension) is a free and open-source tool that lets you annotate anywhere on the screen. The best thing about the app is it does not restrict you to use it only on one desktop environment. - -Rather, Gromit-MPX is desktop-independent and supports [all Unix-based desktop environments][4] such as GNOME, KDE, and Xfce under both X11 and Wayland windowing sessions. - -Even for X11, if you have a second pair of input devices and want to use it to annotate in place of the first pair, the app lets you set both a pointer for a dedicated annotation device and a multi-pointer at once. - -Another thing that makes Gromit-MPX quite different from other available annotation tools is its easy-to-use and distraction-free philosophy. 
- -What I mean to say is that, once you install and activate the app, you can either operate it using its tray icon (if your desktop has a system tray) or six default keys binding. Gromit-MPX does not draw or stick any UI widget of its own for making useful options available. - -![Tray icon options][5] - -You can toggle it on and off on the fly using a `F9` hotkey without interrupting your normal workflow. And whether you want to undo/redo your last draw or clear the screen completely, you’re only one key away from performing the action: `F8` to undo the last stroke (max up to 4 stroke) and `SHIFT-F9` to clear the screen. - -![Gromit-MPX Available Commands][6] - -Of course, you’re also completely free to change its default configuration for both key bindings and drawing tools. - -One of the things that I think Gromit-MPX lacks is the availability of different shapes like rectangles, circles, and straight lines. Currently, you can annotate the desktop screen only using freehand drawing, which you may initially find difficult to handle. - -![Upcoming feature in Gromit-MPX][7] - -However, the good news is the functionality to draw straight lines in Gromit-MPX is under development and already planned to feature in the next version 1.5. - -### Installing Gromit-MPX on Ubuntu and other Linux distributions - -If you’re using Debian-based distributions like Ubuntu, Gromit-MPX is already available in the repository. You only need to run a single command to install it. - -``` -sudo apt install gromit-mpx -``` - -However, for the older OS version, you may not get the latest version 1.4 of the app and miss some important features. If you want the current latest version 1.4, you need to install it from the [Flathub repository][8] using the universal package manager [Flatpak][9]. - -If you’ve not set up Flatpak on your system, check out the complete [Flatpak guide][10]. Once you enable the Flatpak support, you can run the following command to install Gromit-MPX. - -``` -flatpak install flathub net.christianbeier.Gromit-MPX -``` - -![Install Gromit-MPX Using Flatpak][11] - -If you don’t want the Flatpak package or your system doesn’t support it, you can also download its [source code][3], compile and build the app on its own. - -### How to change key binding and tool color in Gromit-MPX? - -By default, Gromit-MPX uses red color for the tool. But it also provides other colors that you can switch to using hotkeys. For instance, once you toggle on drawing, you can hold `SHIFT` for turning tool color into blue, and `CONTROL` for yellow. - -And if you wish your default color other than red or different color for different hotkeys, you can configure the same in the `gromit-mpx.cfg` file. - -![Change tool color][12] - -You can find the configuration file either in a directory listed in $XDG_CONFIG_HOME variable (usually ~/.config or ~/.var/app/net.christianbeier.Gromit-MPX/config/ if you’ve installed Flatpak package) or /etc/gromit-mpx/ if you have Debian package. - -For changing the default Hotkey or Undo key, you need to add a new entry with a custom value in the same config file. - -``` -HOTKEY="F9" -UNDOKEY="F8" -``` - -### How to start Gromit-MPX automatically on boot? - -In case you’re using Gromit-MPX regularly, then you may want to mark it as a startup app instead of opening it manually each time you boot the system. 
- -So, to autostart Gromit-MPX, you can either make use of the GUI [Startup Applications utility][13] or manually add a desktop entry with the below content at `~/.config/autostart/gromit-mpx.desktop`. - -``` -[Desktop Entry] -Type=Application -Exec=gromit-mpx -``` - -If you’re using the Flatpak package, you need to replace `Exec=gromit-mpx` with `Exec=flatpak run net.christianbeier.Gromit-MPX`. - -I hope you like this nifty tool. If you try it, don’t forget to share your experience. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/gromit-mpx/ - -作者:[Sarvottam Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/sarvottam/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/best-linux-screen-recorders/ -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Gromit-MPX.jpg?resize=800%2C450&ssl=1 -[3]: https://github.com/bk138/gromit-mpx -[4]: https://itsfoss.com/best-linux-desktop-environments/ -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Tray-icon-options.jpg?resize=235%2C450&ssl=1 -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Gromit-MPX-Available-Commands.jpg?resize=800%2C361&ssl=1 -[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Upcoming-feature-in-Gromit-MPX.jpg?resize=600%2C338&ssl=1 -[8]: https://flathub.org/apps/details/net.christianbeier.Gromit-MPX -[9]: https://itsfoss.com/what-is-flatpak/ -[10]: https://itsfoss.com/flatpak-guide/ -[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Install-Gromit-MPX-Using-Flatpak.jpg?resize=800%2C325&ssl=1 -[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Change-tool-color.jpg?resize=800%2C450&ssl=1 -[13]: https://itsfoss.com/manage-startup-applications-ubuntu/ diff --git a/sources/tech/20210525 Launch Flatpaks from your Linux terminal.md b/sources/tech/20210525 Launch Flatpaks from your Linux terminal.md deleted file mode 100644 index 8c1687688d..0000000000 --- a/sources/tech/20210525 Launch Flatpaks from your Linux terminal.md +++ /dev/null @@ -1,281 +0,0 @@ -[#]: subject: (Launch Flatpaks from your Linux terminal) -[#]: via: (https://opensource.com/article/21/5/launch-flatpaks-linux-terminal) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Launch Flatpaks from your Linux terminal -====== -Use a Bash alias to launch Flatpak applications without dropping out of -the terminal to the desktop. -![Terminal command prompt on orange background][1] - -The Flatpak application distribution model is helping developers target Linux in a new and easy way, and it's helping Linux users install more applications without worrying about what version of Linux they're running. It's an exciting technology, and on my [Fedora Silverblue][2] system, it's the default package installation method. All of my desktop applications on Silverblue and several of my favorites I use on Slackware are running as Flatpaks. - -There's one thing that makes Flatpak a little awkward in some cases, though, and that's its naming scheme. For instance, when I install Emacs as a Flatpak, it's registered on my system as `org.gnu.emacs`. 
This is done, apparently, for fear of clobbering the name of an existing system-level application—if I already have Emacs installed, then what's the differentiation between `/usr/bin/emacs` and the Flatpak installation of Emacs? For this reason, a Flatpak like Emacs gets installed to something like (get ready for it) this path: - - -``` -`/var/lib/flatpak/app/org.gnu.emacs/current/active/export/bin/org.gnu.emacs` -``` - -It's not symlinked from `/usr/bin` or `/opt`, the location isn't added to the user's path, and launching a Flatpak requires an invocation like this: - - -``` -`$ flatpak run org.gnu.emacs` -``` - -That's a lot of typing compared to just entering `emacs`. - -### Names are hard to remember - -The Flatpak naming scheme also assumes you use a Flatpak often enough to remember the package's reverse DNS name. Aside from the structure, there's no standard for naming a Flatpak, so one Flatpak could use camel-case, such as `org.libreoffice.LibreOffice`, while another might use a mix, such as `org.gimp.GIMP`. - -Some names are easier to remember than others, too. For example, `org.glimpse_editor.Glimpse` is easy to remember _only_ if you remember its website is , rather than glimpse.org, and an underscore replaces the dash. - -From the viewpoint of Flatpak developers, this isn't a problem because Flatpaks are intended to be launched from the desktop. You don't have to remember `org.gnu.emacs` because you can always launch it from GNOME Activities or your K-Menu or a similar graphical launcher. - -This holds true often enough, but sometimes it's more convenient to launch an application from a terminal because you're already using the terminal. Whether I want an image in Glimpse or a text file in Emacs or a music file in VLC, I'm very frequently too busy in the terminal to "drop" out to the desktop (even though it's just one key away!), launch the application, click through the menus to open a file, and then click through my filesystem to find the file I want to open. - -It's just faster to type the command followed by the file I want to open. But if I have to type `flatpak run org.something.app`, it isn't. - -### Using Bash aliases to launch a Flatpak - -The obvious solution to all of this is a [Bash alias][3]. With a Bash alias, you can assign any arbitrary command to nearly any word you want. There are many [common][4] Bash [aliases][5] that nearly every Linux user has on their system, either by conscious choice or because the distribution presets them: - - -``` -$ grep alias ~/.bashrc -alias cp='cp -v' -alias rm='/usr/bin/local/trashy' -alias mv='mv -v' -alias ls='ls --color' -alias ll='ls -l --color' -alias lh='ll -h' -``` - -You can create aliases for Flatpaks, too: - - -``` -`alias emacs='flatpak run org.gnu.emacs'` -``` - -Problem solved! - -### Better interaction with Bash scripting - -It didn't take long for the process of adding aliases manually to feel too laborious to me. And for me, it's not the task but the process. Opening an editor and adding an alias is remarkably quick, but it's a break in my workflow. - -What I really want is something I can, mentally and physically, append to the initial Flatpak install process _as needed_. Not all the Flatpaks I install require an alias. 
For instance, here's a partial list of Flatpaks on my Silverblue system: - - -``` -$ find /var/lib/flatpak/app/* -maxdepth 0 -type d | tail -n5 -/var/lib/flatpak/app/org.gnome.baobab -/var/lib/flatpak/app/org.gnome.Calculator -/var/lib/flatpak/app/org.gnome.Calendar -/var/lib/flatpak/app/org.gnome.Characters -/var/lib/flatpak/app/org.gnome.clocks -/var/lib/flatpak/app/org.gnome.Contacts -/var/lib/flatpak/app/org.gnome.eog -/var/lib/flatpak/app/org.gnome.Evince -/var/lib/flatpak/app/org.gnome.FileRoller -/var/lib/flatpak/app/org.gnome.font-viewer -/var/lib/flatpak/app/org.gnome.gedit -/var/lib/flatpak/app/org.gnome.Logs -/var/lib/flatpak/app/org.gnome.Maps -/var/lib/flatpak/app/org.gnome.NautilusPreviewer -/var/lib/flatpak/app/org.gnome.Rhythmbox3 -/var/lib/flatpak/app/org.gnome.Screenshot -/var/lib/flatpak/app/org.gnome.Weather -/var/lib/flatpak/app/org.gnu.emacs -/var/lib/flatpak/app/org.signal.Signal -``` - -I'll never launch Weather or GNOME Calculator from the terminal. I won't ever launch Signal from the terminal, either, because it's an application I open at the start of my day and never close. - -Therefore, the requirements I defined for myself are: - - * As-needed addition of an alias - * Terminal-based control, so it fits comfortably at the end of my Flatpak install process - * Does one thing and does it well - * Portable across Fedora, RHEL, Slackware, and any other distro I happen to be using any given week - - - -The solution I've settled on lately is a custom little [Bash script][6] that I use to add aliases for Flatpaks I know I want to access quickly from my terminal. Here's the script: - - -``` -#!/bin/sh -# GPLv3 appears here -# gnu.org/licenses/gpl-3.0.md - -# vars -SYMRC=.bashrc.d -SYMDIR=$HOME/$SYMRC -SYMFILE=flatpak_aliases - -# exit on errors -set -e - -# this is where the aliases lives -if [ ! -d $SYMDIR ]; then -    mkdir "${SYMDIR}" -    touch "${SYMDIR}"/"${SYMFILE}" -fi - -sourcer() { -    echo 'Run this command to update your shell:' -    echo ". ${SYMDIR}/${SYMFILE}" -} - -lister() { -    cat "${SYMDIR}"/"${SYMFILE}" -} - -adder() { -    grep "alias ${ARG}\=" "${SYMDIR}"/"${SYMFILE}" && i=1 -    [[ $VERBOSE ]] && echo "$i" - -    if [ $i > 0 ]; then -        echo "Alias for ${ARG} already exists:" -        grep "alias ${ARG}=" "${SYMDIR}"/"${SYMFILE}" -        exit -    else -        echo "alias ${ARG}='${COMMAND}'" >> "${SYMDIR}"/"${SYMFILE}" -        [[ $VERBOSE ]] && echo "Alias for ${ARG} added" -        sourcer -    fi - -    unset i -} - -remover() { -    echo "Removing stuff." 
-    sed -i "/alias ${ARG}\=/d" "${SYMDIR}"/"${SYMFILE}" -    sourcer -} - -# arg parse -while [ True ]; do -    if [ "$1" = "--help" -o "$1" = "-h" ]; then -        echo " " -        echo "$0 add --command 'flatpak run org.gnu.emacs' emacs \\# create symlink for emacs" -        echo "$0 add --command 'flatpak run org.gnu.emacs -fs' emacs-fs \\# create symlink for emacs in fullscreen" -        echo "$0 remove emacs \\# remove emacs symlink" -        echo "$0 list         \\# list all active flatpak symlinks" -        echo " " -        exit -    elif [ "$1" = "--verbose" -o "$1" = "-v" ]; then -        VERBOSE=1 -        shift 1 -    elif [ "$1" = "list" ]; then -        MODE="list" -        shift 1 -    elif [ "$1" = "add" ]; then -        MODE="add" -        shift 1 -    elif [ "$1" = "remove" ]; then -        MODE="remove" -        shift 1 -    elif [ "$1" = "--command" -o "$1" = "-c" ]; then -        COMMAND="${2}" -        shift 2 -    else -        break -    fi -done - -#create array, retain spaces -ARG=( "${@}" ) - -case $MODE in -    add) -        adder -        ;; -    list) -        lister -        ;; -    remove) -        remover -        ;; -    *) -        echo "You must specify an action <list|add|remove>" -        exit 1 -esac -``` - -### Using the script - -![Launching a Flatpak from a terminal][7] - -When I install a Flatpak I expect to want to launch from the terminal, I finish the process with this script: - - -``` -$ flatpak install org.gnu.emacs -$ pakrat add -c 'flatpak run org.gnu.emacs' emacs -Alias for emacs added. -Run this command to update your shell: -. ~/.bashrc.d/flatpak_aliases - -$ . ~/.bashrc.d/flatpak_aliases -``` - -If an alias already exists, it's discovered, and no new alias is created. - -I can remove an alias, too: - - -``` -`$ pakrat remove emacs` -``` - -This doesn't remove the Flatpak and only operates on the dedicated `flatpak_aliases` file. - -All Flatpak aliases are added to `~/.bashrc.d/flatpak_aliases`, which you can automatically source when your shell is launched by placing this manner of code into your `.bashrc` or `.bash_profile` or `.profile` file: - - -``` -if [ -d ~/.bashrc.d ]; then -  for rc in ~/.bashrc.d/*; do -    if [ -f "$rc" ]; then -      . "$rc" -    fi -  done -fi - -unset rc -``` - -### Flatpak launching made easy - -Flatpaks integrate really well with desktop Linux, and they have a strong, reproducible infrastructure behind them. They're [relatively easy to build][8] and a breeze to use. With just a little added effort, you can bring them down into the terminal so that you can use them whichever way works best for you. There are probably several other projects like this out there and probably a few in development that are far more advanced than a simple Bash script, but this one's been working well for me so far. Try it out, or share your custom solution in the comments! 
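-
-In the spirit of sharing custom solutions, here is one more rough sketch: a bulk approach that generates an alias for every installed Flatpak app in a single pass. It assumes a `flatpak` build new enough to support the `--columns` option and, unlike the script above, it does not check for duplicate entries, so treat it as a starting point rather than a finished tool:
-
-```
-# append an alias for every installed Flatpak app to the aliases file,
-# naming each alias after the last segment of the application ID
-# (e.g. org.gnome.Calculator becomes "calculator")
-flatpak list --app --columns=application | while read -r app; do
-    name=$(echo "${app##*.}" | tr '[:upper:]' '[:lower:]')
-    echo "alias ${name}='flatpak run ${app}'"
-done >> ~/.bashrc.d/flatpak_aliases
-```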
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/launch-flatpaks-linux-terminal - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) -[2]: https://opensource.com/article/21/2/linux-packaging -[3]: https://opensource.com/article/19/7/bash-aliases -[4]: https://opensource.com/article/17/5/introduction-alias-command-line-tool -[5]: https://opensource.com/article/18/9/handy-bash-aliases -[6]: https://opensource.com/article/20/4/bash-sysadmins-ebook -[7]: https://opensource.com/sites/default/files/flatpak-terminal-launch.png (Launching a Flatpak from a terminal) -[8]: https://opensource.com/article/19/10/how-build-flatpak-packaging diff --git a/sources/tech/20210526 Linux Jargon Buster- What are Daemons in Linux.md b/sources/tech/20210526 Linux Jargon Buster- What are Daemons in Linux.md deleted file mode 100644 index 164fd3c386..0000000000 --- a/sources/tech/20210526 Linux Jargon Buster- What are Daemons in Linux.md +++ /dev/null @@ -1,155 +0,0 @@ -[#]: subject: (Linux Jargon Buster: What are Daemons in Linux?) -[#]: via: (https://itsfoss.com/linux-daemons/) -[#]: author: (Bill Dyer https://itsfoss.com/author/bill/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Linux Jargon Buster: What are Daemons in Linux? -====== - -Daemons work hard so you don’t have to. - -Imagine that you are writing an article, Web page, or book, Your intent is to do just that – write. It’s rather nice not having to manually start printer and network services and then monitor them all day to make sure that they are working right. - -We can thank daemons for that – they do that kind of work for us. - -![][1] - -### What is a Daemon in Linux? - -A _daemon_ (usually pronounced as: `day-mon`, but sometimes pronounced as to rhyme with `diamond`) is a program with a unique purpose. They are utility programs that run silently in the background to monitor and take care of certain subsystems to ensure that the operating system runs properly. A printer daemon monitors and takes care of printing services. A network daemon monitors and maintains network communications, and so on. - -Having gone over the pronunciation of _daemon_, I’ll add that, if you want to pronounce it as demon, I won’t complain. - -For those people coming to Linux from the Windows world, daemons are known as _services_. For Mac users, the term, _services_, has a different use. The Mac’s operating system is really UNIX, so it uses daemons. The term, _services_ is used, but only to label software found under the `Services` menu. - -Daemons perform certain actions at predefined times or in response to certain events. There are many daemons that run on a Linux system, each specifically designed to watch over its own little piece of the system, and because they are not under the direct control of a user, they are effectively invisible, but essential. 
Because daemons do the bulk of their work in the background, they can appear a little mysterious and so, perhaps difficult to identify them and what they actually do. - -### What Daemons are Running on Your Machine? - -To identify a daemon, look for a process that ends with the letter _d_. It’s a general Linux rule that the names of daemons end this way. - -There are many ways to catch a glimpse of a running daemon. They can be seen in process listings through `ps`, `top`, or `htop`. These are useful programs in their own right – they have a specific purpose, but to see all of the daemons running on your machine, the `pstree` command will suit our discussion better. - -The `pstree` command is a handy little utility that shows the processes currently running on your system and it show them in a tree diagram. Open up a terminal and type in this command: - -``` -pstree -``` - -You will see a complete listing of all of the processes that are running. You may not know what some of them are, or what they do, they are listed. The `pstree` output is a pretty good illustration as to what is going on with your machine. There’s a lot going on! - -![daemon – pstree run completed][2] - -Looking at the screen shot, a few daemons can be seen here: **udisksd**, **gvfsd**, **systemd**, **logind** and some others. - -Our process list was long enough to where the listing couldn’t fit in a single terminal window, but we can scroll up using the mouse or cursor keys: - -![daemon – top part of pstree][3] - -### Spawning Daemons - -![Picture for representational purpose only][4] - -Again, a daemon is a process that runs in the background and is usually out of the control of the user. It is said that a daemon _has no controlling terminal_. - -A _process_ is a running program. At a particular instant of time, it can be either running, sleeping, or zombie (a process that completed its task, but waiting for its parent process to accept the return value). - -In Linux, there are three types of processes: interactive, batch and daemon. - -_Interactive processes_ are those which are run by a user at the command line are called interactive processes. - -_Batch processes_ are processes that are not associated with the command line and are presented from a list of processes. Think of these as “groups of tasks”. These are best at times when the system usage is low. System backups, for example, are usually run at night since the daytime workers aren’t using the system. When I was a full-time system administrator, I often ran disk usage inventories, system behavior analysis scripts, and so on, at night. - -Interactive processes and batch jobs are _not_ daemons even though they can be run in the background and can do some monitoring work. They key is that these two types of processes involve human input through some sort of terminal control. Daemons do not need a person to start them up. - -We know that a _daemon_ is a computer program that runs as a background process, rather than being under the direct control of an interactive user. When the system boot is complete, the system initialization process starts _spawning_ (creating) daemons through a method called _forking_, eliminating the need for a terminal (this is what is meant by _no controlling terminal_). - -I will not go into the full details of process forking, but hopefully, I can be just brief enough to show a little background information to describe what is done. 
While there are other methods to create processes, traditionally, in Linux, the way to create a process is through making a copy of an existing process in order to create a child process. An exec system call to start another program in then performed. - -The term, _fork_ isn’t arbitrary, by the way. It gets its name from the C programming language. One of the libraries that C uses, is called the standard library, containing methods to perform operating services. One of these methods, called _fork_, is dedicated to creating new processes. The process that initiates a fork is considered to be the parent process of the newly created child process. - -The process that creates daemons is the initialization (called `init`) process by forking its own process to create new ones. Done this way, the `init` process is the outright parent process. - -There is another way to spawn a daemon and that is for another process to fork a child process and then _die_ (a term often used in place of _exit_). When the parent dies, the child process becomes an _orphan_. When a child process is orphaned, it is adopted by the `init` process. - -If you overhear discussions, or read online material, about daemons having “a parent process ID of 1,” this is why. Some daemons aren’t spawned at boot time, but are created later by another process which died, and `init` adopted it. - -It is important that you do not confuse this with a _zombie_. Remember, a zombie is a child process that has finished its task and is waiting on the parent to accept the exit status. - -### Examples of Linux Daemons - -![][5] - -Again, the most common way to identify a Linux daemon is to look for a service that ends with the letter _d_. Here are some examples of daemons that may be running on your system. You will be able to see that daemons are created to perform a specific set of tasks: - -`systemd` – the main purpose of this daemon is to unify service configuration and behavior across Linux distributions. - -`rsyslogd` – used to log system messages. This is a newer version of `syslogd` having several additional features. It supports logging on local systems as well as on remote systems. - -`udisksd` – handles operations such as querying, mounting, unmounting, formatting, or detaching storage devices such as hard disks or USB thumb drives - -`logind` – a tiny daemon that manages user logins and seats in various ways - -`httpd` – the HTTP service manager. This is normally run with Web server software such as Apache. - -`sshd` – Daemon responsible for managing the SSH service. This is used on virtually any server that accepts SSH connections. - -`ftpd` – manages the FTP service – FTP or File Transfer Protocol is a commonly-used protocol for transferring files between computers; one act as a client, the other act as a server. - -`crond` – the scheduler daemon for time-based actions such as software updates or system checks. - -### What is the origin of the word, daemon? - -When I first started writing this article, I planned to only cover what a daemon is and leave it at that. I worked with UNIX before Linux appeared. Back then, I thought of a daemon as it was: a background process that performed system tasks. I really didn’t care how it got its name. With additional talk of other things, like zombies and orphans, I just figured that the creators of the operating system had a warped sense of humor (a lot like my own). 
- -I always perform some research on every piece that I write and I was surprised to learn that apparently, a lot of other people did want to know how the word came to be and why. - -The word has certainly generated a bit of curiosity and, after reading through several lively exchanges, I admit that I got curious too. Perform a search on the word’s meaning or etymology (the origin of words) and you’ll find several answers. - -In the interest of contributing to the discussion, here’s my take on it. - -The earliest form of the word, daemon, was spelled as _daimon_, a form of guardian angel – attendant spirits that helped form the character of people they assisted. Socrates claimed to have one that served him in a limited way, but correctly. Socrates’ daimon only told him when to keep his mouth shut. Socrates described his daimon during his trial in 399 BC, so the belief in daimons has been around for quite some time. Sometimes, the spelling of daimon is shown as daemon. _Daimon_ and _daemon_, here, mean the same thing. - -While a _daemon_ is an attendant, a _demon_ is an evil character from the Bible. The differences in spelling is intentional and was apparently decided upon in the 16th century. Daemons are the good guys, and demons are the bad ones. - -The use of the word, daemon, in computing came about in 1963. [Project MAC][6] is shorthand for _Project on Mathematics and Computation_, and was created at the Massachusetts Institute of Technology. It was here that the word, daemon, [came into common use][7] to mean any system process that monitors other tasks and performs predetermined actions depending on their behavior, The word, daemon was named for [Maxwell’s daemon][8]. - -Maxwell’s daemon is the result of a thought experiment. In 1871, [James Clerk Maxwell][9] imagined an intelligent and resourceful being that was able to observe and direct the travel of individual molecules in a specific direction. The purpose of the thought exercise was to show the possibility of contradicting the second law of thermodynamics. - -I did see some comments that the word, daemon, was an acronym for `Disk And Executive MONitor`. The original users of the word, daemon, [never used it for that purpose][7], so the acronym idea, I believe, is incorrect. - -![][10] - -Lastly – to end this on a light note – there is the BSD mascot: a daemon that has the appearance of a demon. The BSD daemon was named after the software daemons, but gets is appearance from playing around with the word. - -The daemon’s name is _Beastie_. I haven’t researched this fully (yet), but I did find one comment that states that Beastie comes from slurring the letters, _BSD_. Try it; I did. Say the letters as fast as you can and out comes a sound very much like _beastie_. - -Beastie is often seen with a trident which is symbolic of a daemon’s forking of processes. 
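-
-Before wrapping up, here is a small, hedged complement to the `pstree` exercise from earlier: a one-liner that lists processes whose parent is PID 1 and whose names end in the letter _d_. It is only an approximation (not every daemon follows the naming rule, and not every child of PID 1 is a daemon), but it is a quick way to revisit those identification tips:
-
-```
-# candidate daemons: adopted by init/systemd (PPID 1) and named ending in "d"
-ps -eo pid,ppid,comm | awk '$2 == 1 && $3 ~ /d$/'
-```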
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-daemons/ - -作者:[Bill Dyer][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/bill/ -[b]: https://github.com/lujun9972 -[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/daemon-linux.png?resize=800%2C450&ssl=1 -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/daemon_pstree1.png?resize=800%2C725&ssl=1 -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/daemon_pstree2.png?resize=800%2C725&ssl=1 -[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/demons.jpg?resize=800%2C400&ssl=1 -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/linux-daemon-1.png?resize=256%2C256&ssl=1 -[6]: https://www.britannica.com/topic/Project-Mac -[7]: https://ei.cs.vt.edu/%7Ehistory/Daemon.html -[8]: https://www.britannica.com/science/Maxwells-demon -[9]: https://www.britannica.com/biography/James-Clerk-Maxwell -[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Beastie.jpg?resize=800%2C450&ssl=1 diff --git a/sources/tech/20210527 Processing modular and dynamic configuration files in shell.md b/sources/tech/20210527 Processing modular and dynamic configuration files in shell.md deleted file mode 100644 index f97476ca26..0000000000 --- a/sources/tech/20210527 Processing modular and dynamic configuration files in shell.md +++ /dev/null @@ -1,232 +0,0 @@ -[#]: subject: (Processing modular and dynamic configuration files in shell) -[#]: via: (https://opensource.com/article/21/5/processing-configuration-files-shell) -[#]: author: (Evan "Hippy" Slatis https://opensource.com/users/hippyod) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Processing modular and dynamic configuration files in shell -====== -Learn how to manage frequent changes within configuration files better. -![Coding on a computer][1] - -While working on a continuous integration/continuous development (CI/CD) solution for a customer, one of my first tasks was to automate the bootstrapping of a CI/CD Jenkins server in OpenShift. Following DevOps best practices, I quickly created a configuration file that drove a script to complete the job. That quickly became two configuration files when I realized I needed a separate Jenkins server for production. After that came the request that the customer needed more than one pair of engineering and production CI/CD servers for different groups, and each server had similar but slightly different configurations. - -When the inevitable changes had to be made to the values common to two or more of the servers, it was very difficult and error-prone to propagate the changes across two or four files. As CI/CD environments were added for more complex testing and deployments, the number of shared and specific values for each group and environment grew. - -As the changes became more frequent and the data more complex, making changes within the configuration files became more and more unmanageable. I needed a better solution to solve this age-old problem and manage changes faster and more reliably. More importantly, I needed a solution that would allow my clients to do the same after turning my completed work over to them. 
- -### Defining the problem - -On the surface, this sounds like a very straightforward problem. Given `my-config-file.conf` (or a `*.ini` or `*.properties`) file: - - -``` -KEY_1=value-1 -KEY_2=value-2 -``` - -You just have to execute this line at the top of your script: - - -``` -#!/usr/bin/bash - -set -o allexport -source my-config-file.conf -set +o allexport -``` - -This code realizes all the variables inside your configuration file in the environment, and `set -o allexport` automatically exports them all. The original file, being a typical key/value properties file, is also very standard and easy to parse into another system. Where it gets more complicated is in the following scenarios: - - 1. **Some of the values are copied and pasted from variable to variable and are related.** Besides violating the DRY ("don't repeat yourself") principle, it's error-prone, especially when values need to be changed. How can values within the file be reused? - 2. **Portions of the configuration file are reusable over multiple runs of the original script, and others are useful only for a specific run.** How do you move beyond copy and paste and modularize the data so that some pieces can be reused elsewhere? - 3. **Once the files are modularized, how do you handle conflicts and define precedent?** If a key is defined twice in the same file, which value do you take? If two configuration files define the same key, which gets precedence? How can a specific install override a shared value? - 4. **The configuration files are initially intended to be used by a shell script and are written for processing by shell scripts. If the configuration files need to be loaded or reused in another environment, is there a way to make them easily available to other systems without further processing?** I wanted to move some of the key/value pairs into a single ConfigMap in Kubernetes. What's the best way to make the processed data available to make the import process straightforward and easy so that other systems don't have to understand how the config files are structured? - - - -This article will take you through some simple code snippets and show how easy this is to implement. - -### Defining the configuration file content - -Sourcing a file means it will source variables as well as other shell statements like commands. For this purpose, configuration files should only be about key/value pairs and not about defining functions or executing code. Therefore, I'll define these files similarly to property and .ini files: - - -``` -KEY_1=${KEY_2} -KEY_2=value-2 -... -KEY_N=value-n -``` - -From this file, you should expect the following behavior: - - -``` -$ source my-config-file.conf -$ echo $KEY_1 -value-2 -``` - -I purposefully made this a little counterintuitive in that it refers to a value I haven't even defined yet. Later in this article, I will show you the code to handle this scenario. - -### Defining modularization and precedence - -To keep the code simple and make defining the files intuitive, I implemented a left-to-right, top-to-bottom precedence strategy for files and variables, respectively. More specifically, given a list of configuration files: - - 1. Each file in the list would be processed first-to-last (left-to-right) - 2. The first definition of a key would define the value, and subsequent values would be ignored - - - -There are many ways to do this, but I found this strategy straightforward, easy to code, and easy to explain to others. 
In other words, I am not claiming this is the best design decision, but it works, and it simplifies debugging. - -Given this colon-delimited list of two configuration files: - - -``` -`first.conf:second.conf` -``` - -with these contents: - - -``` -# first.conf -KEY_1=value-1 -KEY_1=ignored-value - -[/code] [code] - -# first.conf -KEY_1=ignored-value -``` - -you would expect: - - -``` -$ echo $KEY_1 -value-1 -``` - -### The solution - -This function will implement the defined requirements: - - -``` -_create_final_configuration_file() { -    # convert the list of files into an array -    local CONFIG_FILE_LIST=($(echo ${1} | tr ':' ' ')) -    local WORKING_DIR=${2} - -    # removes any trailing whitespace from each file, if any -    # this is absolutely required when importing into ConfigMaps -    # put quotes around values if extra spaces are necessary -    sed -i -e 's/\s*$//' -e '/^$/d' -e '/^#.*$/d' ${CONFIG_FILE_LIST[@]} - -    # iterates over each file and prints (default awk behavior) -    # each unique line; only takes first value and ignores duplicates -    awk -F= '!line[$1]++' ${CONFIG_FILE_LIST[@]} > ${COMBINED_CONFIG_FILE} - -    # have to export everything, and source it twice: -    # 1) first source is to realize variables -    # 2) second time is to realize references -    set -o allexport -    source ${COMBINED_CONFIG_FILE} -    source ${COMBINED_CONFIG_FILE} -    set +o allexport - -    # use envsubst command to realize value references -    cat ${COMBINED_CONFIG_FILE} | envsubst > ${FINAL_CONFIG_FILE} -``` - -It performs the following steps: - - 1. It trims extraneous white space from each line. - 2. It iterates through each file and writes out each line with a unique key (i.e. thanks to `awk` magic, it skips duplicate keys) to an intermediate configuration file. - 3. It sources the intermediate file twice to realize all references in memory. - 4. The referenced values in the intermediate file are realized from the values now in memory and written out to a final configuration file, which can be used for further processing. - - - -As the above notes, when the combined configuration intermediate file is sourced, it must be done twice. This is so that the referenced values that are defined after being referenced can be properly realized in memory. The `envsubst` substitutes the values of environment variables, and the output is redirected to the final configuration file for possible postprocessing. Per the previous example's requirement, this can take the form of realizing the data in a ConfigMap: - - -``` -kubectl create cm my-config-map --from-env-file=${FINAL_CONFIG_FILE} \ -    -n my-namespace -``` - -### Sample code - -You can find sample code with `specific.conf` and `shared.conf` files demonstrating how you can combine files representing a specific configuration file and a general, shared configuration file in my GitHub repository [modular-config-file-sample][2]. The configuration files are composed of: - - -``` -# specific.conf -KEY_1=${KEY_2} -KEY_2='some value' -KEY_1='this value will be ignored' - -[/code] [code] - -# shared.conf -SHARED_KEY_1='some shared value' -SHARED_KEY_2=${SHARED_KEY_1} -SHARED_KEY_1='this value will never see the light of day' -KEY_1='this was overridden' -``` - -Note the single quotes around the values. I purposefully chose example values with spaces to make things more interesting and so that the values had to be in quotes; otherwise, when the files are sourced, each word would be interpreted as a separate command. 
However, the variable references do not need to be in quotes once the values are set. - -The repository contains a small shell script utility, `pconfs.sh`. Here's what happens when you run the following command from within the sample code directory: - - -``` -# NOTE: see the sample code for the full set of command line options -$ ./pconfs.sh -f specific.conf:shared.conf - -================== COMBINED CONFIGS BEFORE ================= -KEY_1=${KEY_2} -KEY_2='some value' -SHARED_KEY_1='some shared value' -SHARED_KEY_2=${SHARED_KEY_1} -================ COMBINED CONFIGS BEFORE END =============== - -================= PROOF OF SUBST IN MEMORY ================= -KEY_1: some value -SHARED_KEY_2: some shared value -=============== PROOF OF SUBST IN MEMORY END =============== - -================== PROOF OF SUBST IN FILE ================== -KEY_1=some value -KEY_2='some value' -SHARED_KEY_1='some shared value' -SHARED_KEY_2=some shared value -================ PROOF OF SUBST IN FILE END ================ -``` - -This proves that even complex values may be referenced before and after a value is defined. It also shows that only the first definition of the value is retained, whether within or across files, and that precedence is given left to right in your list of files. This is why I specify parsing specific.conf first when running this command; this allows a specific configuration to override any of the more general, shared values in the example. - -You should now have an easy-to-implement solution for creating and using modular configuration files in the shell. Also, the results of processing the files should be easy enough to use or import without requiring the other system to understand the data's original format or organization. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/processing-configuration-files-shell - -作者:[Evan "Hippy" Slatis][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/hippyod -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer) -[2]: https://github.com/hippyod/modular-config-file-sample diff --git a/sources/tech/20210528 3 key considerations for your trusted compute base.md b/sources/tech/20210528 3 key considerations for your trusted compute base.md deleted file mode 100644 index 6ee969b2ed..0000000000 --- a/sources/tech/20210528 3 key considerations for your trusted compute base.md +++ /dev/null @@ -1,92 +0,0 @@ -[#]: subject: (3 key considerations for your trusted compute base) -[#]: via: (https://opensource.com/article/21/5/trusted-compute-base) -[#]: author: (Mike Bursell https://opensource.com/users/mikecamel) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -3 key considerations for your trusted compute base -====== -The smaller your TCB, the less there is to attack, and that's a good -thing. -![Puzzle pieces coming together to form a computer screen][1] - -This isn't the first article I've written about trusted computing bases (TCBs), so if the concept is new to you, I suggest you have a look at _[What's a trusted compute base?][2]_ to get an idea of what I'll be talking about here. 
In that article, I noted the importance of the size of the TCB: "What you want is a small, easily measurable and easily auditable TCB on which you can build the rest of your system—from which you can build a 'chain of trust' to the other parts of your system about which you care." - -In this article, I want to discuss the importance of a TCB's size, how you might measure it, and how difficult it can be to reduce its size. Let's look at those issues in order. - -### Sizing things up - -However you measure it—and I'll get to that below—the size of the TCB matters for two reasons: - - 1. The larger the TCB is, the more bugs there are likely to be. - 2. The larger the TCB is, the larger the attack surface. - - - -The first of these is true of any system. Although there may be ways of reducing the number of bugs by proving the correctness of all (or, more likely, part) of the system, bugs are both tricky to remove and resilient; if you remove one, you may well introduce another (or worse, several). You can reduce the kinds and number of bugs through a multitude of techniques, from language choice (choosing Rust over C/C++ to decrease memory allocation errors, for instance), to better specification, and on to improved test coverage and fuzzing. In the end, however, the smaller the TCB, the less code (or hardware—don't forget we're considering the broader system here) you have to trust, the less space there is for bugs in it. - -The concept of an attack surface is important (and, like TCBs, it's one I've introduced before—see _[What's an attack surface?][3]_). Like bugs, there may be no absolute measure of the ratio of _danger:attack surface_, but the smaller your TCB, the less there is to attack, and that's a good thing. As with bug reduction, there are many techniques you may want to apply to reduce your attack surface, but the smaller it is, by definition, the fewer opportunities attackers have to try to compromise your system. - -### Measurement - -Measuring the size of your TCB is really, really hard. Or maybe I should say that coming up with an absolute measure that you can compare to other TCBs is really, really hard. The problem is that there are so many measurements you might take. The ones you care about are probably those that can be related to the attack surface. But there are so many different attack vectors that _might_ be relevant to a TCB that there are likely to be multiple attack surfaces. Some of the possible measurements include: - - * Number of API methods - * Amount of data that can be passed across each API method - * Number of parameters that can be passed across each API method - * Number of open network sockets - * Number of open local (e.g., Unix) sockets - * Number of files read from local storage - * Number of dynamically loaded libraries - * Number of Direct Memory Access (DMA) calls - * Number of lines of code - * Amount of compilation optimisation carried out - * Size of binary - * Size of executing code in memory - * Amount of memory shared with other processes - * Use of various caches (L1, L2, etc.) - * Number of syscalls made - * Number of strings visible using a `strings` command or similar - * Number of cryptographic operations not subject to constant time checks - - - -This is not meant to be an exhaustive list; it just shows the range of different areas where vulnerabilities might appear. 
Designing your application to reduce one may increase another; a very simple example is attempting to reduce the number of API calls exposed by increasing the number of parameters on each call; another might be reducing the size of the binary by using more dynamically linked libraries. - -This leads me to an important point that I'm not going to address in detail in this article but is fundamental to understanding TCBs: without a threat model, there's very little point in considering what your TCB is. - -### Reducing TCB size - -I've just shown one of the main reasons that reducing your TCB size is difficult: it's likely to involve tradeoffs between different measures. If all you're trying to do is produce competitive marketing material where you say, "my TCB is smaller than yours," then you're likely to miss the point. The point of a TCB is to have a well-defined computing base that can protect against specific threats. This requires you to be clear about exactly what functionality _requires_ it to be trusted, where it sits in the system, and how the other components in the system rely on it. In other words, what trust relationships they have. - -Recently, I was speaking to a colleague who relayed a story of a software project by saying, "we've reduced our TCB to this tiny component by designing it very carefully and checking how we implement it." But my colleague overlooked the fact that the rest of the stack—which contains a complete Linux distribution and applications—could not be trusted any more than before. The threat model (if there is one—we didn't get into details) seems to assume that only the TCB would be attacked. This misses the point entirely; it just adds another "[turtle][4]" to the stack without fixing the problem that is presumably at issue: improving the system's security. - -Reducing the TCB by artificially defining what the TCB is to suit your capabilities or particular beliefs around what it should be protecting against is not only unhelpful but actively counterproductive. This is because it ignores the fact that a TCB is there to serve the needs of a broader system, and if it is considered in isolation, then it becomes irrelevant: what is it acting as a base _for_? - -In conclusion, it's all very well saying, "we have a tiny TCB," but you need to know what you're protecting, from what, and how. 
- -* * * - -_This article was originally published on [Alice, Eve, and Bob][5] and is reprinted with the author's permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/trusted-compute-base - -作者:[Mike Bursell][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mikecamel -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) -[2]: https://aliceevebob.com/2019/10/22/whats-a-trusted-compute-base/ -[3]: https://aliceevebob.com/2018/04/24/whats-an-attack-surface/ -[4]: https://aliceevebob.com/2019/07/02/turtles-and-chains-of-trust/ -[5]: https://aliceevebob.com/2021/05/11/does-my-tcb-look-big-in-this/ diff --git a/sources/tech/20210530 16 efficient breakfasts of open source technologists from around the world.md b/sources/tech/20210530 16 efficient breakfasts of open source technologists from around the world.md deleted file mode 100644 index 5afbc8077b..0000000000 --- a/sources/tech/20210530 16 efficient breakfasts of open source technologists from around the world.md +++ /dev/null @@ -1,91 +0,0 @@ -[#]: subject: (16 efficient breakfasts of open source technologists from around the world) -[#]: via: (https://opensource.com/article/21/5/breakfast) -[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -16 efficient breakfasts of open source technologists from around the world -====== -Cheese from Bavaria, a Brazilian breakfast sandwich, a New York bagel, -or a hot cup of tea from Great Britain. Our contributors share their -favorite way to fuel their day. -![Selfcare, drinking tea on the porch][1] - -Breakfast …. It's the most important meal of the day, or so they say. But who wants to spend time on a big meal when you could be sleeping instead? (And if your breakfast is _too_ big, you might feel the need to go back to sleep.) - -Still, busy developers, sysadmins, and other IT pros need some fuel to start their day. So, we asked some of our contributors to tell us how they feed their hunger without "eating" into their rest or work time. Here's what they had to say. - -Bacon, egg, and cheese on a bagel. As NYC as it gets. —[Emily Brand][2] - -An Indian Spicy Chai followed by oats and cream. —[Kedar Vijay Kulkarni][3] - -I start in on coffee before I get in the shower, and that's pretty much all I consume until lunch. I usually go through about 24–32oz. of coffee in a typical morning, although since I got an Ember travel mug, I'm not "warming up" as often as I was with regular mugs. The only exception is Sunday, when I have a reward mini coffee cake with breakfast for taking my one weekly injectable medication. —[Kevin Sonney][4] - -My go-to is peanut butter on two slices of wheat, sunflower, or whole-grain toast with a cup of tart cherry or organic grape juice. —[Don Watkins][5] - -A cup of hot-brewed coffee and scrambled eggs is my go-to breakfast ritual. —[Sudeshna Sur][6] - -This is the best breakfast... —[Chris Hermansen][7] - -![Expresso empty cup][8] - -(Chris Hermansen, [CC BY-SA 4.0][9]) - -I'm an oatmeal guy for the most part. 
Walnuts, brown sugar, cinnamon, and flaxseeds for the win. Something I learned from being a runner is, if you fuel up, you can forget about food for a good long time. —[Steve Morris][10] - -Coffee-latte + whole wheat bread with cream and grape jelly. Usually, I do a more traditional Brazilian breakfast of coffee + French bread with ham and cheese. —[Igor Steinmacher][11] - -Scrambled eggs, chicken sausage, and cooked apples. —[Petra Sargent][12] - -300ml of cold-brew coffee (not because I'm fancy, but because I can make it once a week and save time for more important matters, like writing to a mailing list about my breakfast habits) mixed with 240ml unsweetened hemp milk, one umeboshi, and a fistful of raw and whole plant vitamins. For me, breakfast is about efficiency. —[Jeremy Stanley][13] - -Usually, I have cereal, milk, eggs, and sometimes a sandwich. —[Manaswini Das][14] - -I'm not super hungry in the morning, but as a coffee fanatic, I always have two cups of coffee with cream. If I'm good, I'll also make a smoothie or have a cup of Greek yogurt. Today's breakfast was a banana and one cup of coffee with cinnamon roll creamer. —[Lauren Maffeo][15] - -Coffee and/or tea in copious amounts (tea can be close to a liter in one of my mugs) and maybe a breakfast sandwich. —[John Hawley][16] - -For breakfast, I have bread from a local bakery with cheese from Bavaria and sausage from Italy. —[Stephan Avenwedde][17] - -I usually have a bowl of Crunchy Raisin Bran cereal, an apple, and several cups of black coffee. —[Alan Formy-Duval][18] - -Whatever I eat, [expect tea][19]. —[Mike Bursell][20] - -Now that you know how our contributors fuel up, what's the quick breakfast that helps you get started on your workday? Please share your favorites in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/5/breakfast - -作者:[Jen Wike Huger][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jen-wike -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_selfcare_wfh_porch_520.png?itok=2qXG0T7u (Selfcare, drinking tea on the porch) -[2]: https://opensource.com/users/emily-brand -[3]: https://opensource.com/users/kkulkarn -[4]: https://opensource.com/users/ksonney -[5]: https://opensource.com/users/don-watkins -[6]: https://opensource.com/users/sudeshna-sur -[7]: https://opensource.com/users/clhermansen -[8]: https://opensource.com/sites/default/files/uploads/pxl_20210510_163033909.jpg (Expresso empty cup) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: https://opensource.com/users/smorris12 -[11]: https://opensource.com/users/igorsteinmacher -[12]: https://opensource.com/users/psargent -[13]: https://opensource.com/users/fungi -[14]: https://opensource.com/users/manaswinidas -[15]: https://opensource.com/users/lmaffeo -[16]: https://opensource.com/users/warthog9 -[17]: https://opensource.com/users/hansic99 -[18]: https://opensource.com/users/alanfdoss -[19]: http://aliceevebob.com/2019/09/17/how-not-to-make-a-cup-of-tea/ -[20]: https://opensource.com/users/mikecamel diff --git a/sources/tech/20210601 Start monitoring your Kubernetes cluster with Prometheus and Grafana.md b/sources/tech/20210601 Start monitoring your Kubernetes cluster with Prometheus and Grafana.md deleted file mode 100644 index ce3b566af0..0000000000 --- a/sources/tech/20210601 Start monitoring your Kubernetes cluster with Prometheus and Grafana.md +++ /dev/null @@ -1,437 +0,0 @@ -[#]: subject: (Start monitoring your Kubernetes cluster with Prometheus and Grafana) -[#]: via: (https://opensource.com/article/21/6/chaos-grafana-prometheus) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Start monitoring your Kubernetes cluster with Prometheus and Grafana -====== -Before you can measure chaos, you need to know what your system's steady -state looks like. Learn how in the second article in this series about -chaos engineering. -![A ship wheel with someone steering][1] - -In my introductory [article about chaos engineering][2], one of the main things I covered was the importance of getting the steady state of your working Kubernetes cluster. Before you can start causing chaos, you need to know what the cluster looks like in a steady state. - -This article will cover how to [get those metrics using Prometheus][3] and [Grafana][4]. This walkthrough also uses Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19. - -### Configure Minikube - -[Install Minikube][5] in whatever way makes sense for your environment. 
If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check your system's status: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -### Install Prometheus - -Once the cluster is set up, start your installations. Install [Prometheus][6] first by following the instructions below. - -First, add the repository in Helm: - - -``` -$ helm repo add prometheus-community -"prometheus-community" has been added to your repositories -``` - -Then install your Prometheus Helm chart. You should see: - - -``` -$ helm install prometheus prometheus-community/prometheus -NAME: prometheus -LAST DEPLOYED: Sun May  9 11:37:19 2021 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster: -prometheus-server.default.svc.cluster.local -``` - -Get the Prometheus server URL by running these commands in the same shell: - - -``` -  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") -  kubectl --namespace default port-forward $POD_NAME 9090 -``` - -You can access the Prometheus Alertmanager via port 80 on this DNS name from within your cluster: - - -``` -`prometheus-alertmanager.default.svc.cluster.local` -``` - -Get the Alertmanager URL by running these commands in the same shell: - - -``` -  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") -  kubectl --namespace default port-forward $POD_NAME 9093 -################################################################################# -######   WARNING: Pod Security Policy has been moved to a global property.  ##### -######            use .Values.podSecurityPolicy.enabled with pod-based      ##### -######            annotations                                               ##### -######            (e.g. 
.Values.nodeExporter.podSecurityPolicy.annotations) ##### -################################################################################# -``` - -You can access the Prometheus PushGateway via port 9091 on this DNS name from within your cluster: - - -``` -`prometheus-pushgateway.default.svc.cluster.local` -``` - -Get the PushGateway URL by running these commands in the same shell: - - -``` -  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}") -  kubectl --namespace default port-forward $POD_NAME 9091 - -For more information on running Prometheus, visit: - -``` - -Check to confirm your pods are running: - - -``` -$ kubectl get pods -n default -NAME                                             READY   STATUS    RESTARTS   AGE -prometheus-alertmanager-ccf8f68cd-hcrqr          2/2     Running   0          3m22s -prometheus-kube-state-metrics-685b975bb7-mhv54   1/1     Running   0          3m22s -prometheus-node-exporter-mfcwj                   1/1     Running   0          3m22s -prometheus-pushgateway-74cb65b858-7ffhs          1/1     Running   0          3m22s -prometheus-server-d9fb67455-2g2jw                2/2     Running   0          3m22s -``` - -Next, expose your port on the Prometheus server pod so that you can see the Prometheus web interface. To do this, you need the service name and port. You also need to come up with a name to open the service using the Minikube service command. - -Get the service name for `prometheus-server`: - - -``` -$ kubectl get svc -n default -NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE -kubernetes                      ClusterIP   10.96.0.1        <none>        443/TCP        13m -prometheus-alertmanager         ClusterIP   10.106.68.12     <none>        80/TCP         8m22s -prometheus-kube-state-metrics   ClusterIP   10.104.167.239   <none>        8080/TCP       8m22s -prometheus-node-exporter        ClusterIP   None             <none>        9100/TCP       8m22s -prometheus-pushgateway          ClusterIP   10.99.90.233     <none>        9091/TCP       8m22s -prometheus-server               ClusterIP   10.103.195.104   <none>        9090/TCP       8m22s -``` - -Expose the service as type `Node-port`. Provide a target port of `9090` and a name you want to call the server. The node port is the server listening port. This is an extract of the Helm chart: - - -``` -    ## Port for Prometheus Service to listen on -    ## -    port: 9090 -``` - -The command is: - - -``` -$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prom-server -service/prom-server exposed -``` - -Next, you need Minikube to open the service and browser: - - -``` -jess@Athena:~$ minikube service prom-server -|-----------|-------------|-------------|---------------------------| -| NAMESPACE |    NAME     | TARGET PORT |            URL            | -|-----------|-------------|-------------|---------------------------| -| default   | prom-server |          80 | | -|-----------|-------------|-------------|---------------------------| -🎉  Opening service default/prom-server in default browser... -``` - -Your browser should open and show you the Prometheus service. - -![Prometheus interface][7] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Congratulations! You now have Prometheus installed on your cluster. - -### Install Grafana - -Next, install Grafana and configure it to work with Prometheus. 
Follow the steps below to expose a service to configure Grafana and collect data from Prometheus to gather your steady state. - -Start with getting your Helm chart: - - -``` -$ helm repo add grafana -"grafana" has been added to your repositories -``` - -Search for your chart: - - -``` -$ helm search repo grafana -NAME                                            CHART VERSION   APP VERSION     DESCRIPTION                                       -bitnami/grafana       5.2.11      7.5.5         Grafana is an open source, feature rich metrics... -bitnami/grafana-operator       0.6.5      3.10.0   Kubernetes Operator based on the Operator SDK f... -grafana/grafana    6.9.0                7.5.5           The leading tool for querying and visualizing t... -stable/grafana    5.5.7         7.1.1           DEPRECATED - The leading tool for querying and ... -``` - -Since stable/grafana is depreciated, install bitnami/grafana. Then install your chart: - - -``` -helm install grafana bitnami/grafana -NAME: grafana -LAST DEPLOYED: Sun May  9 12:09:53 2021 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -** Please be patient while the chart is being deployed ** -``` - - 1. Get the application URL by running: [code] echo "Browse to " -kubectl port-forward svc/grafana 8080:3000 & -``` - 2. Get the admin credentials: [code] echo "User: admin" -echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode)" -``` - - - -As you can see in the Helm installation output, the target port for Grafana is 3000, so you will use that port for exposing the service to see Grafana's web frontend. Before exposing the service, confirm your services are running: - - -``` -$ kubectl get pods -A -NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE -default       grafana-6b84bbcd8f-xt6vd                         1/1     Running   0          4m21s -``` - -Expose the service: - - -``` -$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-server -service/grafana-server exposed -``` - -Enable the service to open a browser with a Minikube service: - - -``` -jess@Athena:~$ minikube service grafana-server -|-----------|----------------|-------------|---------------------------| -| NAMESPACE |      NAME      | TARGET PORT |            URL            | -|-----------|----------------|-------------|---------------------------| -| default   | grafana-server |        3000 | | -|-----------|----------------|-------------|---------------------------| -🎉  Opening service default/grafana-server in default browser... -``` - -You will see the welcome screen where you can log in. - -![Grafana welcome screen][9] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Set up credentials to log into Grafana using kubectl. The commands appeared in the installation's output; here are the commands in use: - - -``` -$ echo "User: admin" -User: admin -$ echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode)" -Password: G6U5VeAejt -``` - -Log in with your new credentials, and you will see the Grafana dashboard. - -![Grafana dashboard][10] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Congratulations! You now have a working Grafana installation in your Minikube cluster with the ability to log in. The next step is to configure Grafana to work with Prometheus to gather data and show your steady state. 
- -### Configure Grafana with Prometheus - -Now that you can log in to your Grafana instance, you need to set up the data collection and dashboard. Since this is an entirely web-based configuration, I will go through the setup using screenshots. Start by adding your Prometheus data collection. Click the **gear icon** on the left-hand side of the display to open the **Configuration** settings, then select **Data Source**. - -![Configure data source option][11] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -On the next screen, click **Add data source**. - -![Add data source option][12] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Select **Prometheus**. - -![Select Prometheus data source][13] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Because you configured your Prometheus instance to be exposed on port 80, use the service name **prometheus-server** and the server **port 80**. - -![Configuring Prometheus data source][14] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Save and test your new data source by scrolling to the bottom of the screen and clicking **Save and Test**. You should see a green banner that says **Data source is working**. - -![Confirming Data source is working][15] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Return to the top of the page and click **Dashboards**. - -![Select Dashboards option][16] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Import all three dashboard options. - -![Import three dashboards][17] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Click the **magnifying glass** icon on the left-hand side to confirm all three dashboards have been imported. - -![Confirming dashboard import][18] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Now that everything is configured, click **Prometheus 2.0 Stats**, and you should see something similar to this. - -![Prometheus 2.0 Stats][19] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Congratulations! You have a set up basic data collection from Prometheus about your cluster. - -### Import more monitoring dashboards - -You can import additional detailed dashboards from Grafana Labs' [community dashboards][20] collection. I picked two of my favorites, [Dash-minikube][21] and [Kubernetes Cluster Monitoring][22], for this quick walkthrough. - -To import a dashboard, you need its ID from the dashboards collection. First, click the plus (**+**) sign on the left-hand side to create a dashboard, then click **Import** in the dropdown list, and enter the ID. For Dash-minikube, it's ID 10219. - -![Import Dash-minikube dashboard][23] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -![Import Dash-minikube dashboard][24] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Click **Load**, and enter the data source on the next screen. Since this uses Prometheus, enter your Prometheus data source. - -![Import Dash-minikube][25] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Click **Import**, and the new dashboard will appear. - -![Import Dash-minikube dashboard][26] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Now you have a new dashboard to keep track of your Minikube stats. If you follow the same steps using Kubernetes Cluster Monitoring (ID 2115), you will see a more verbose monitoring dashboard. - -![Kubernetes Cluster Monitoring dashboard][27] - -(Jess Cherry, [CC BY-SA 4.0][8]) - -Now you can keep track of your steady state with Grafana and Prometheus data collections and visuals. - -### Final thoughts - -With these open source tools, you can collect your cluster's steady state and maintain a good pulse on it. 
This is important in chaos engineering because it allows you to check everything in a destructive, unstable state and use that data to test your hypothesis about what could happen to its state during an outage. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/chaos-grafana-prometheus - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv (A ship wheel with someone steering) -[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[3]: https://opensource.com/article/19/11/introduction-monitoring-prometheus -[4]: htpp://grafana.com -[5]: https://minikube.sigs.k8s.io/docs/start/ -[6]: http://prometheus.io -[7]: https://opensource.com/sites/default/files/uploads/prometheus-interface.png (Prometheus interface) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://opensource.com/sites/default/files/uploads/grafana_welcome.png (Grafana welcome screen) -[10]: https://opensource.com/sites/default/files/uploads/grafana_dashboard.png (Grafana dashboard) -[11]: https://opensource.com/sites/default/files/uploads/grafana_datasource.png (Configure data source option) -[12]: https://opensource.com/sites/default/files/uploads/grafana_adddatasource.png (Add data source option) -[13]: https://opensource.com/sites/default/files/uploads/grafana_prometheusdatasource.png (Select Prometheus data source) -[14]: https://opensource.com/sites/default/files/uploads/grafana_configureprometheusdatasource.png (Configuring Prometheus data source) -[15]: https://opensource.com/sites/default/files/uploads/datasource_save-test.png (Confirming Data source is working) -[16]: https://opensource.com/sites/default/files/uploads/dashboards.png (Select Dashboards option) -[17]: https://opensource.com/sites/default/files/uploads/importdatasources.png (Import three dashboards) -[18]: https://opensource.com/sites/default/files/uploads/importeddashboard.png (Confirming dashboard import) -[19]: https://opensource.com/sites/default/files/uploads/prometheus2stats.png (Prometheus 2.0 Stats) -[20]: https://grafana.com/grafana/dashboards -[21]: https://grafana.com/grafana/dashboards/10219 -[22]: https://grafana.com/grafana/dashboards/2115 -[23]: https://opensource.com/sites/default/files/uploads/importdashminikube.png (Import Dash-minikube dashboard) -[24]: https://opensource.com/sites/default/files/uploads/importdashminikube2.png (Import Dash-minikube dashboard) -[25]: https://opensource.com/sites/default/files/uploads/importdashminikube3.png (Import Dash-minikube) -[26]: https://opensource.com/sites/default/files/uploads/importdashminikube4.png (Import Dash-minikube dashboard) -[27]: https://opensource.com/sites/default/files/uploads/kubernetesclustermonitoring-dashboard.png (Kubernetes Cluster Monitoring dashboard) diff --git a/sources/tech/20210602 How to navigate FreeDOS with CD and DIR.md b/sources/tech/20210602 How to navigate FreeDOS with CD and DIR.md deleted file mode 100644 index 53cefa4b91..0000000000 --- a/sources/tech/20210602 How to navigate FreeDOS with CD and DIR.md +++ /dev/null @@ -1,71 +0,0 @@ -[#]: subject: (How to navigate FreeDOS with 
CD and DIR) -[#]: via: (https://opensource.com/article/21/6/navigate-freedos-cd-dir) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to navigate FreeDOS with CD and DIR -====== -Armed with just two commands DIR and CD, you can navigate your FreeDOS -system from the command line. -![4 different color terminal windows with code][1] - -FreeDOS is an open source DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS. - -But if you've never used DOS, you might be confused about how to navigate the system. FreeDOS is primarily a command-line interface; there is no default graphical user interface (GUI) in FreeDOS. You need to type every command at the command line. - -Two commands that help you find your way around FreeDOS: `CD` and `DIR`. I've written those commands in all uppercase, but DOS is actually _case insensitive_, so you can type your commands using either uppercase or lowercase letters. DOS doesn't care. - -Let's start with the `DIR` command. This command name is short for _directory_ and is similar to the `ls` command on Linux systems. You can run `DIR` anywhere on your system to see what files you have. Just type the command `DIR` to get a list of files and directories: - -![DIR listing of the D: drive][2] - -Jim Hall, CC-BY SA 4.0 - -The output from `DIR` is very utilitarian. At the top, `DIR` prints the "volume name" of the current drive. Then `DIR` shows all the files and directories. In the screenshot, you can see the directory listing of the FreeDOS 1.3 RC4 LiveCD. It contains several directories, including the `FREEDOS` directory which contains all of the core FreeDOS programs and utilities. You can also see several files, starting with the `COMMAND.COM` shell, which is similar to Bash on Linux—except much simpler. The FreeDOS kernel itself is the `KERNEL.SYS `file further down the list. - -At the top level of any drive, before you go into a directory, you are at the _root directory_. DOS uses the `\` ("back slash") character to separate directories in a path, which is slightly different from the `/` ("slash") character in Linux systems. - -To navigate into a directory, you can use the `CD` command. Like `cd` on Linux, this stands for _change directory_. The `CD` command sets the new _working directory_ to wherever you want to go. For example, you might go into the `GAMES` directory and use `DIR` to list its contents: - -![Use CD to change your working directory][3] - -Jim Hall, CC-BY SA 4.0 - -You can also specify a path to `CD`, to jump to a specific directory elsewhere on your system. If I wanted to change to the `FREEDOS` directory, I could simply specify the full path relative to the root directory. In this case, that's the `\FREEDOS` directory. From there, I can run another `DIR` command to see the files and directories stored there: - -![Specify a full path to change to another working directory][4] - -Jim Hall, CC-BY SA 4.0 - -Like Linux, DOS also uses `.` and `..` to represent a _relative path_. The `.` directory is the current directory, and `..` is the directory that's one level before it, or the _parent_ directory. Using `..` allows you to "back up" one directory with the `CD` command, so you don't need to specify a full path. 
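For example, if your working directory is `\FREEDOS`, a single `CD ..` returns you to the root directory (the drive letter and prompt shown here are illustrative):

```
D:\FREEDOS> CD ..
D:\>
```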
- -From the first `DIR` screenshot, we can see the root directory also contains a `DEVEL` directory. If we're already in the `\FREEDOS` directory, we can navigate to `DEVEL` by "backing up" one directory level, and "going into" the `..\DEVEL` directory via a relative path: - -![Use .. to navigate using a relative path][5] - -Jim Hall, CC-BY SA 4.0 - -Armed with just two commands `DIR` and `CD`, you can navigate your FreeDOS system from the command line. Try it on your FreeDOS system to locate files and execute programs. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/navigate-freedos-cd-dir - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code) -[2]: https://opensource.com/sites/default/files/uploads/dir1.png (DIR listing of the D: drive) -[3]: https://opensource.com/sites/default/files/uploads/cd-games2.png (Use CD to change your working directory) -[4]: https://opensource.com/sites/default/files/uploads/cd-freedos3.png (Specify a full path to change to another working directory) -[5]: https://opensource.com/sites/default/files/uploads/cd-devel4.png (Use .. to navigate using a relative path) diff --git a/sources/tech/20210602 Test Kubernetes cluster failures and experiments in your terminal.md b/sources/tech/20210602 Test Kubernetes cluster failures and experiments in your terminal.md deleted file mode 100644 index 347497b121..0000000000 --- a/sources/tech/20210602 Test Kubernetes cluster failures and experiments in your terminal.md +++ /dev/null @@ -1,486 +0,0 @@ -[#]: subject: (Test Kubernetes cluster failures and experiments in your terminal) -[#]: via: (https://opensource.com/article/21/6/kubernetes-litmus-chaos) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Test Kubernetes cluster failures and experiments in your terminal -====== -Litmus is an effective tool to cause chaos to test how your system will -respond to failure. -![Science lab with beakers][1] - -Do you know how your system will respond to an arbitrary failure? Will your application fail? Will anything survive after a loss? If you're not sure, it's time to see if your system passes the [Litmus][2] test, a detailed way to cause chaos at random with many experiments. - -In the first article in this series, I explained [what chaos engineering is][3], and in the second article, I demonstrated how to get your [system's steady state][4] so that you can compare it against a chaos state. This third article will show you how to install and use Litmus to test arbitrary failures and experiments in your Kubernetes cluster. In this walkthrough, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19. - -### Configure Minikube - -If you haven't already, [install Minikube][5] in whatever way makes sense for your environment. 
If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check your system's status: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -jess@Athena:~$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -### Install Litmus - -As outlined on [Litmus' homepage][6], the steps to install Litmus are: add your repo to Helm, create your Litmus namespace, then install your chart: - - -``` -$ helm repo add litmuschaos -"litmuschaos" has been added to your repositories - -$ kubectl create ns litmus -namespace/litmus created - -$ helm install chaos litmuschaos/litmus --namespace=litmus -NAME: chaos -LAST DEPLOYED: Sun May  9 17:05:36 2021 -NAMESPACE: litmus -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -``` - -### Verify the installation - -You can run the following commands if you want to verify all the desired components are installed correctly. - -Check if **api-resources** for chaos are available:  - - -``` -root@demo:~# kubectl api-resources | grep litmus -chaosengines                                   litmuschaos.io                 true         ChaosEngine -chaosexperiments                               litmuschaos.io                 true         ChaosExperiment -chaosresults                                   litmuschaos.io                 true         ChaosResult -``` - -Check if the Litmus chaos operator deployment is running successfully: - - -``` -root@demo:~# kubectl get pods -n litmus -NAME                      READY   STATUS    RESTARTS   AGE -litmus-7d998b6568-nnlcd   1/1     Running   0          106s -``` - -### Start running chaos experiments  - -With this out of the way, you are good to go! Refer to Litmus' [chaos experiment documentation][7] to start executing your first experiment. 
- -To confirm your installation is working, check that the pod is up and running correctly: - - -``` -jess@Athena:~$ kubectl get pods -n litmus -NAME                      READY   STATUS    RESTARTS   AGE -litmus-7d6f994d88-2g7wn   1/1     Running   0          115s -``` - -Confirm the Custom Resource Definitions (CRDs) are also installed correctly: - - -``` -jess@Athena:~$ kubectl get crds | grep chaos -chaosengines.litmuschaos.io       2021-05-09T21:05:33Z -chaosexperiments.litmuschaos.io   2021-05-09T21:05:33Z -chaosresults.litmuschaos.io       2021-05-09T21:05:33Z -``` - -Finally, confirm your API resources are also installed: - - -``` -jess@Athena:~$ kubectl api-resources | grep chaos -chaosengines                                   litmuschaos.io                 true         ChaosEngine -chaosexperiments                               litmuschaos.io                 true         ChaosExperiment -chaosresults                                   litmuschaos.io                 true         ChaosResult -``` - -That's what I call easy installation and confirmation. The next step is setting up deployments for chaos. - -### Prep for destruction - -To test for chaos, you need something to test against. Add a new namespace: - - -``` -$ kubectl create namespace more-apps -namespace/more-apps created -``` - -Then add a deployment to the new namespace: - - -``` -$ kubectl create deployment ghost --namespace more-apps --image=ghost:3.11.0-alpine -deployment.apps/ghost created -``` - -Finally, scale your deployment up so that you have more than one pod in your deployment to test against: - - -``` -$ kubectl scale deployment/ghost --namespace more-apps --replicas=4 -deployment.apps/ghost scaled -``` - -For Litmus to cause chaos, you need to add an [annotation][8] to your deployment to mark it ready for chaos. Currently, annotations are available for deployments, StatefulSets, and DaemonSets. Add the annotation `chaos=true` to your deployment: - - -``` -$ kubectl annotate deploy/ghost litmuschaos.io/chaos="true" -n more-apps -deployment.apps/ghost annotated -``` - -Make sure the experiments you will install have the correct permissions to work in the "more-apps" namespace. - -Make a new **rbac.yaml** file for the prepper bindings and permissions: - - -``` -`$ touch rbac.yaml` -``` - -Then add permissions for the generic testing by copying and pasting the code below into your **rbac.yaml** file. 
These are just basic, minimal permissions to kill pods in your namespace and give Litmus permissions to delete a pod for a namespace you provide: - - -``` -\--- -apiVersion: v1 -kind: ServiceAccount -metadata: -  name: pod-delete-sa -  namespace: more-apps -  labels: -    name: pod-delete-sa -\--- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: -  name: pod-delete-sa -  namespace: more-apps -  labels: -    name: pod-delete-sa -rules: -\- apiGroups: [""] -  resources: ["pods","events"] -  verbs: ["create","list","get","patch","update","delete","deletecollection"] -\- apiGroups: [""] -  resources: ["pods/exec","pods/log","replicationcontrollers"] -  verbs: ["create","list","get"] -\- apiGroups: ["batch"] -  resources: ["jobs"] -  verbs: ["create","list","get","delete","deletecollection"] -\- apiGroups: ["apps"] -  resources: ["deployments","statefulsets","daemonsets","replicasets"] -  verbs: ["list","get"] -\- apiGroups: ["apps.openshift.io"] -  resources: ["deploymentconfigs"] -  verbs: ["list","get"] -\- apiGroups: ["argoproj.io"] -  resources: ["rollouts"] -  verbs: ["list","get"] -\- apiGroups: ["litmuschaos.io"] -  resources: ["chaosengines","chaosexperiments","chaosresults"] -  verbs: ["create","list","get","patch","update"] -\--- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: -  name: pod-delete-sa -  namespace: more-apps -  labels: -    name: pod-delete-sa -roleRef: -  apiGroup: rbac.authorization.k8s.io -  kind: Role -  name: pod-delete-sa -subjects: -\- kind: ServiceAccount -  name: pod-delete-sa -  namespace: more-apps -``` - -Apply the **rbac.yaml** file: - - -``` -$ kubectl apply -f rbac.yaml -serviceaccount/pod-delete-sa created -role.rbac.authorization.k8s.io/pod-delete-sa created -rolebinding.rbac.authorization.k8s.io/pod-delete-sa created -``` - -The next step is to prepare your chaos engine to delete pods. The chaos engine will connect the experiment you need to your application instance by creating a **chaosengine.yaml** file and copying the information below into the .yaml file. This will connect your experiment to your namespace and the service account with the role bindings you created above. - -This chaos engine file only specifies the pod to delete during chaos testing: - - -``` -apiVersion: litmuschaos.io/v1alpha1 -kind: ChaosEngine -metadata: -  name: moreapps-chaos -  namespace: more-apps -spec: -  appinfo: -    appns: 'more-apps' -    applabel: 'app=ghost' -    appkind: 'deployment' -  # It can be true/false -  annotationCheck: 'true' -  # It can be active/stop -  engineState: 'active' -  #ex. values: ns1:name=percona,ns2:run=more-apps -  auxiliaryAppInfo: '' -  chaosServiceAccount: pod-delete-sa -  # It can be delete/retain -  jobCleanUpPolicy: 'delete' -  experiments: -    - name: pod-delete -      spec: -        components: -          env: -           # set chaos duration (in sec) as desired -            - name: TOTAL_CHAOS_DURATION -              value: '30' - -            # set chaos interval (in sec) as desired -            - name: CHAOS_INTERVAL -              value: '10' - -            # pod failures without '--force' & default terminationGracePeriodSeconds -            - name: FORCE -              value: 'false' -``` - -Don't apply this file until you install the experiments in the next section. - -### Add new experiments for causing chaos - -Now that you have an entirely new environment with deployments, roles, and the chaos engine to test against, you need some experiments to run. 
Since Litmus has a large community, you can find some great experiments in the [Chaos Hub][9]. - -In this walkthrough, I'll use the generic experiment of [killing a pod][10]. - -Run a kubectl command to install the generic experiments into your cluster. Install this in your `more-apps` namespace; you will see the tests created when you run it: - - -``` -$ kubectl apply -f -n more-apps -chaosexperiment.litmuschaos.io/pod-network-duplication created -chaosexperiment.litmuschaos.io/node-cpu-hog created -chaosexperiment.litmuschaos.io/node-drain created -chaosexperiment.litmuschaos.io/docker-service-kill created -chaosexperiment.litmuschaos.io/node-taint created -chaosexperiment.litmuschaos.io/pod-autoscaler created -chaosexperiment.litmuschaos.io/pod-network-loss created -chaosexperiment.litmuschaos.io/node-memory-hog created -chaosexperiment.litmuschaos.io/disk-loss created -chaosexperiment.litmuschaos.io/pod-io-stress created -chaosexperiment.litmuschaos.io/pod-network-corruption created -chaosexperiment.litmuschaos.io/container-kill created -chaosexperiment.litmuschaos.io/node-restart created -chaosexperiment.litmuschaos.io/node-io-stress created -chaosexperiment.litmuschaos.io/disk-fill created -chaosexperiment.litmuschaos.io/pod-cpu-hog created -chaosexperiment.litmuschaos.io/pod-network-latency created -chaosexperiment.litmuschaos.io/kubelet-service-kill created -chaosexperiment.litmuschaos.io/k8-pod-delete created -chaosexperiment.litmuschaos.io/pod-delete created -chaosexperiment.litmuschaos.io/node-poweroff created -chaosexperiment.litmuschaos.io/k8-service-kill created -chaosexperiment.litmuschaos.io/pod-memory-hog created -``` - -Verify the experiments installed correctly: - - -``` -$ kubectl get chaosexperiments -n more-apps -NAME                      AGE -container-kill            72s -disk-fill                 72s -disk-loss                 72s -docker-service-kill       72s -k8-pod-delete             72s -k8-service-kill           72s -kubelet-service-kill      72s -node-cpu-hog              72s -node-drain                72s -node-io-stress            72s -node-memory-hog           72s -node-poweroff             72s -node-restart              72s -node-taint                72s -pod-autoscaler            72s -pod-cpu-hog               72s -pod-delete                72s -pod-io-stress             72s -pod-memory-hog            72s -pod-network-corruption    72s -pod-network-duplication   72s -pod-network-latency       72s -pod-network-loss          72s -``` - -### Run the experiments - -Now that everything is installed and configured, use your **chaosengine.yaml** file to run the pod-deletion experiment you defined. Apply your chaos engine file: - - -``` -$ kubectl apply -f chaosengine.yaml -chaosengine.litmuschaos.io/more-apps-chaos created -``` - -Confirm the engine started by getting all the pods in your namespace; you should see `pod-delete` being created: - - -``` -$ kubectl get pods -n more-apps -NAME                      READY   STATUS              RESTARTS   AGE -ghost-5bdd4cdcc4-blmtl    1/1     Running             0          53m -ghost-5bdd4cdcc4-z2lnt    1/1     Running             0          53m -ghost-5bdd4cdcc4-zlcc9    1/1     Running             0          53m -ghost-5bdd4cdcc4-zrs8f    1/1     Running             0          53m -moreapps-chaos-runner     1/1     Running             0          17s -pod-delete-e443qx-lxzfx   0/1     ContainerCreating   0          7s -``` - -Next, you need to be able to observe your experiments using Litmus. 
The following command uses the ChaosResult CRD and provides a large amount of output: - - -``` -$ kubectl describe chaosresult moreapps-chaos-pod-delete -n more-apps -Name:         moreapps-chaos-pod-delete -Namespace:    more-apps -Labels:       app.kubernetes.io/component=experiment-job -              app.kubernetes.io/part-of=litmus -              app.kubernetes.io/version=1.13.3 -              chaosUID=a6c9ab7e-ff07-4703-abe4-43e03b77bd72 -              controller-uid=601b7330-c6f3-4d9b-90cb-2c761ac0567a -              job-name=pod-delete-e443qx -              name=moreapps-chaos-pod-delete -Annotations:  <none> -API Version:  litmuschaos.io/v1alpha1 -Kind:         ChaosResult -Metadata: -  Creation Timestamp:  2021-05-09T22:06:19Z -  Generation:          2 -  Managed Fields: -    API Version:  litmuschaos.io/v1alpha1 -    Fields Type:  FieldsV1 -    fieldsV1: -      f:metadata: -        f:labels: -          .: -          f:app.kubernetes.io/component: -          f:app.kubernetes.io/part-of: -          f:app.kubernetes.io/version: -          f:chaosUID: -          f:controller-uid: -          f:job-name: -          f:name: -      f:spec: -        .: -        f:engine: -        f:experiment: -      f:status: -        .: -        f:experimentStatus: -        f:history: -    Manager:         experiments -    Operation:       Update -    Time:            2021-05-09T22:06:53Z -  Resource Version:  8406 -  Self Link:         /apis/litmuschaos.io/v1alpha1/namespaces/more-apps/chaosresults/moreapps-chaos-pod-delete -  UID:               08b7e3da-d603-49c7-bac4-3b54eb30aff8 -Spec: -  Engine:      moreapps-chaos -  Experiment:  pod-delete -Status: -  Experiment Status: -    Fail Step:                 N/A -    Phase:                     Completed -    Probe Success Percentage:  100 -    Verdict:                   Pass -  History: -    Failed Runs:   0 -    Passed Runs:   1 -    Stopped Runs:  0 -Events: -  Type    Reason   Age    From                     Message -  ----    ------   ----   ----                     ------- -  Normal  Pass     104s   pod-delete-e443qx-lxzfx  experiment: pod-delete, Result: Pass -``` - -You can see the pass or fail output from your testing as you run the chaos engine definitions. - -Congratulations on your first (and hopefully not last) chaos engineering test! Now you have a powerful tool to use and help your environment grow. - -### Final thoughts - -You might be thinking, "I can't run this manually every time I want to run chaos. How far can I take this, and how can I set it up for the long term?" - -Litmus' best part (aside from the Chaos Hub) is its [scheduler][11] function. You can use it to define times and dates, repetitions or sporadic, to run experiments. This is a great tool for detailed admins who have been working with Kubernetes for a while and are ready to create some chaos. I suggest staying up to date on Litmus and how to use this tool for regular chaos engineering. Happy pod hunting! 
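For reference, a scheduled version of this walkthrough's pod-delete experiment might look roughly like the sketch below. This is an assumption based on the scheduler documentation linked above, not a definitive spec; field names can vary between Litmus releases, so check the docs for your version:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosSchedule
metadata:
  name: schedule-pod-delete
  namespace: more-apps
spec:
  schedule:
    repeat:
      properties:
        # leave at least 10 minutes between experiment runs
        minChaosInterval: "10m"
  engineTemplateSpec:
    # embeds the same ChaosEngine settings used earlier in this article
    appinfo:
      appns: 'more-apps'
      applabel: 'app=ghost'
      appkind: 'deployment'
    annotationCheck: 'true'
    engineState: 'active'
    chaosServiceAccount: pod-delete-sa
    jobCleanUpPolicy: 'delete'
    experiments:
      - name: pod-delete
```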
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/kubernetes-litmus-chaos - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/science_experiment_beaker_lab.png?itok=plKWRhlU (Science lab with beakers) -[2]: https://github.com/litmuschaos/litmus -[3]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[4]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus -[5]: https://minikube.sigs.k8s.io/docs/start/ -[6]: https://litmuschaos.io/ -[7]: https://docs.litmuschaos.io -[8]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ -[9]: https://hub.litmuschaos.io/ -[10]: https://docs.litmuschaos.io/docs/pod-delete/ -[11]: https://docs.litmuschaos.io/docs/scheduling/ diff --git a/sources/tech/20210603 Get started with Kustomize for Kubernetes configuration management.md b/sources/tech/20210603 Get started with Kustomize for Kubernetes configuration management.md deleted file mode 100644 index dd1406615f..0000000000 --- a/sources/tech/20210603 Get started with Kustomize for Kubernetes configuration management.md +++ /dev/null @@ -1,280 +0,0 @@ -[#]: subject: (Get started with Kustomize for Kubernetes configuration management) -[#]: via: (https://opensource.com/article/21/6/kustomize-kubernetes) -[#]: author: (Brent Laster https://opensource.com/users/bclaster) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Get started with Kustomize for Kubernetes configuration management -====== -Modify your Kubernetes manifests without losing control of what's in the -original versions. -![Ship captain sailing the Kubernetes seas][1] - -Preparing to run a new (or convert an existing) application in [Kubernetes][2] takes work. Working with Kubernetes requires defining and creating multiple "manifests" for the different types of objects in your application. Even a simple microservice is likely to have a deployment.yaml, service.yaml, configmap.yaml, and other files. These declarative YAML files for Kubernetes are usually known as "manifests." You might also have to set up secrets, ingresses, persistent volumes, and other supporting pieces. - -Once those are created, you're done with managing your manifests, _right_? Well, it depends. What happens if someone else needs to work with your manifest but needs a slightly (or significantly) different version? Or what happens if someone wants to leverage your manifests for different stages or environments? You need to handle reuse and updates for the different use cases without losing track of your original version. - -### Typical approaches for reusing manifests - -Approaches for reusing manifests have typically been rather brute force. You make a copy, modify it in whatever way is appropriate, and save it with a different name or location. This process works for the immediate use case, but things can quickly get out of sync or become unwieldy to manage. - -#### The copy approach - -Suppose you want to change a manifest to add a new resource or update a value in a manifest copy. 
Someone or something will have to monitor the original, figure out the differences, and merge them into the copy. - -The problem becomes even worse if other people make their own copies and change them to suit their particular use case. Very quickly, the content diverges. People might miss important or significant updates to the original manifests, and they might end up using confusing variations of similar files. - -And over time, the situation can worsen, and a significant amount of time can be spent just trying to keep things up to date. If copies of the copies are made, you can end up with something that diverges significantly from the original and even lose track of what was in the original. This, in turn, can dramatically affect usability and maintainability. - -#### The parameterization approach - -Another approach is to create parameterized templates from the files. That is, to make the manifests into generic "templates" by replacing static, hardcoded values with placeholders that can be filled in with any value. Values are usually supplied at deployment time, with placeholders replaced by values passed in from a command line or read in from a data file. The resulting templates with the values filled in are rendered as valid manifests for Kubernetes. - -This is the approach the well-known tool [Helm][3] takes. However, this also has challenges. Removing values and using placeholders fundamentally changes and adds complexity to the manifests, which are now templates. They are no longer usable on their own; they require an application or process like Helm to find or derive and fill in the values. And, as templates, the original files are no longer easily parsable by anyone who looks at them. - -The templates are also still susceptible to the issues that copies of the files have. In fact, the problem can be compounded when using templates due to copies having more placeholders and separate data values stored elsewhere. Functions and pipes that join functions can also be added. At some level, this can turn the templates into sort of "programmed YAML" files. At the extreme, this may make the files unusable and unreadable unless you use Helm to render them with the data values into a form that people (and Kubernetes) can understand and use. - -### Kustomize's alternative approach - -Ideally, you would be able to keep using your existing files in their original forms and produce variations without making permanent changes or copies that can easily diverge from the original and each other. And you would keep the differences between versions small and simple. - -These are the basic tenets of [Kustomize][4]'s approach. It's an Apache 2.0-licensed tool that generates custom versions of manifests by "overlaying" declarative specifications on top of existing ones. "Declarative" refers to the standard way to describe resources in Kubernetes: declaring what you want a resource to be and how to look and behave, in contrast to "imperative," which defines the process to create it. - -"Overlaying" describes the process where separate files are layered over (or "stacked on top of") each other to create altered versions. Kustomize applies specific kinds of overlays to the original manifest. The changes to be made in the rendered versions are declared in a separate, dedicated file named kustomization.yaml, while leaving the original files intact. - -Kustomize reads the kustomization.yaml file to drive its behavior. 
One section of the kustomization.yaml file, titled Resources, lists the names (and optionally the paths) of the original manifests to base changes on. After loading the resources, Kustomize applies the overlays and renders the result. - -You can think of this as applying the specified customizations "on top of" a temporary copy of the original manifest. These operations produce a "customized" copy of the manifest that, if you want, can be fed directly into Kubernetes via a `kubectl apply` command. - -The types of functions built into Kustomize "transform" your Kubernetes manifests, given a simple set of declarative rules. These sets of rules are called "transformers." - -The simplest kind of transformer applies a common identifier to the same set of resources, as Figure 1 demonstrates. - -![A simple example][5] - -Figure 1: Example structure and content for basic Kustomize use (Brent Laster, [CC BY-SA 4.0][6]) - -This example has a simple directory with a set of YAML files for a web app with a MySQL backend. The files are: - - * `roar-web-deploy.yaml` is the Kubernetes deployment manifest for the web app part of an app. - * `roar-web-svc.yaml` is the Kubernetes service manifest for the web app part of an app. - * `kustomization.yaml` is the Kustomize input file that declares the type of transformations you want to make to the manifests. - - - -In the kustomization.yaml file in Figure 1, the `commonLabels` section (the bottom center) is an example of a transformer. As the name implies, this transformer's intent is to make the designated label common in the files after the transformation. - -The kustomization.yaml file also includes a `resources` section, which lists the files to be included and possibly customized or transformed (highlighted in Figure 2). - -![Resources section in kustomization.yaml][7] - -Figure 2: Kustomize resource section denotes which original manifests to include (Brent Laster, [CC BY-SA 4.0][6]) - -The kustomization.yaml file is a simple set of declarations about the manifests you want to change and how you want to change them; it is a specification of resources plus customizations. The modifications happen when you run the `kustomize build` command. The build operation reads the kustomization.yaml file, pulls in the resources, and applies the transformers appropriate to each file. This example pulls in the two `roar-web` YAML files, produces copies of them, and adds the requested label in the metadata section for each one. - -By default, the files are not saved anywhere, and the original files are not overwritten. The "transformed" content can be piped directly to a `kubectl apply` command or redirected and saved to another file if you want. However, it's generally not a good idea to save generated files because it becomes too easy for them to get out of sync with the source. You can view the output from the `kustomize build` step as generated content. - -Instead, you should save the associated original files and the kustomization.yaml file. Since the kustomization.yaml file pulls in the original files and transforms them for rendering, they can stay the same and be reused in their original form. - -### Other transformations - -Kustomize provides a set of transformations that you can apply to a set of resources. These include: - - * `commonLabel` adds a common label (name:value) to each Kubernetes (K8s) resource. - * `commonAnnotations` adds an annotation to all K8s resources. - * `namePrefix` adds a common prefix to all resource names. 
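Pulled together, these transformers are declared next to the `resources` entries in kustomization.yaml. A minimal sketch, reusing the file names from the earlier example (the label, annotation, and prefix values here are assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - roar-web-deploy.yaml
  - roar-web-svc.yaml

commonLabels:
  app: roar-web
commonAnnotations:
  owner: web-team
namePrefix: demo-
```

Running `kustomize build` against a directory containing this file renders both manifests with the label, the annotation, and the `demo-` name prefix applied, while the original files remain untouched.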
- - - -Figure 3 shows examples of other types of common changes. - -![commonAnnotations and namePrefix transformers][8] - -Figure 3: Some common transformations provided by Kustomize (Brent Laster, [CC BY-SA 4.0][6]) - -#### Image transformers - -As its name implies, image transformers produce a version of a manifest with a different `newname` or `newTag` for an image spec, such as a container or an initcontainer. The name value must match the image value in the original resource. - -Figure 4 shows an example of a kustomization.yaml file with changes for an image. - -![kustomization.yaml file for an image transformer][9] - -Figure 4: Updating image selection with Kustomize (Brent Laster, [CC BY-SA 4.0][6]) - -While it's useful to do these kinds of transformations, a more strategic feature is creating separate versions for different environments from a set of resources. In Kustomize, these are called "variants." - -#### Variants - -In Kubernetes, it's common to need multiple variations (variants) of a set of resources and the manifests that declare them. A simple example is building on top of a set of Kubernetes resources to create different variants for different stages of product development, such as dev, stage, and prod. - -To facilitate these sorts of changes, Kustomize uses the concepts of "overlays" and "bases." A "base" declares things variants have in common, and an "overlay" declares differences between variants. Both bases and overlays are represented within a kustomization.yaml file. Figure 5 includes an example of this structure. It has the original resource manifests and a base kustomization.yaml file in the root of the tree. The kustomization.yaml files define variants as a set of overlays in subdirectories for prod and stage. - -![base/overlay approach][10] - -Figure 5: Example structure for Kustomize with bases and overlays to implement variants (Brent Laster, [CC BY-SA 4.0][6]) - -Variants can also apply patches. Patches in Kustomize are a partial spec or a "delta" for a K8s object. They describe what a section should look like after it changes and how it should be modified when Kustomize renders an updated version. They represent a more "surgical" approach to targeting one or more specific sections in a resource. - -The next set of figures demonstrate leveraging the Kustomize patching functionality. Going back to an earlier example, you have a set of core resource files (a deployment and a service) and the associated kustomization.yaml files for them (Figures 6a and 6b). There are two parts to the app: a database portion and a web app portion. The patch in this example renames the database service. - -![Patching database content][11] - -Figure 6a: Patching content in the database portion of the project (Brent Laster, [CC BY-SA 4.0][6]) - -![Renaming database service][12] - -Figure 6b: The service definition for the database resource (Brent Laster, [CC BY-SA 4.0][6]) - -Figures 7a through 7d highlight the patch portion within the kustomization.yaml file associated with the service. Line 12 defines the type of patch, a "replace" in this example. Lines 13 and 14 identify a "location" in the YAML hierarchy to find the value you want to patch and the replacement value to use. Lines 15–17 identify the specific type of item in the K8s resources you wish to change. 
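Since the patch block itself appears only in the screenshots below, here is a rough textual sketch of what such a rename can look like in kustomization.yaml. This sketch uses Kustomize's JSON 6902 patch form; the exact field layout (and patch flavor) in the article's figure may differ:

```yaml
patchesJson6902:
  - patch: |-
      - op: replace
        path: /metadata/name
        value: mysql
    target:
      version: v1
      kind: Service
      name: roar-db
```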
- -![Patch block][13] - -Figure 7a: Example patch block in kustomization.yaml (Brent Laster, [CC BY-SA 4.0][6]) - -![Patch to apply][14] - -Figure 7b: More detail on the type of patch (Brent Laster, [CC BY-SA 4.0][6]) - -![Target location][15] - -Figure 7c: More detail on the location in the hierarchy in the base files and replacement value (Brent Laster, [CC BY-SA 4.0][6]) - -![value to modify][16] - -Figure 7d: More detail on the exact item to search for—and replace (per the "op" setting) (Brent Laster, [CC BY-SA 4.0][6]) - -When you execute the `kustomize build` command against this set of files, Kustomize first locates the K8s resource you're interested in—the service—and then finds the path identified in the patch block (`metadata.name.`). Then it renders a version of the spec with the value `roar-db` replaced with `mysql`. Figures 8a through 8f illustrate this process. - -![Locating the initial named object][17] - -Figure 8a: Navigating the YAML structure in the original file (Brent Laster, [CC BY-SA 4.0][6]) - -![Locating the initial named object][18] - -Figure 8b: Finding the correct item via `name` (Brent Laster, [CC BY-SA 4.0][6]) - -![Locating the target section in the hierarchy][19] - -Figure 8c: Finding the target section (Brent Laster, [CC BY-SA 4.0][6]) - -![Identifying the path][20] - -Figures 8d: Identifying the path (Brent Laster, [CC BY-SA 4.0][6]) - -![Substituting the desired value][21] - -Figure 8e: The substitution (Brent Laster, [CC BY-SA 4.0][6]) - -![Rendering the result][22] - -Figure 8f: The rendered file with the change (Brent Laster, [CC BY-SA 4.0][6]) - -Kustomize supports patching via a "strategic merge patch" (illustrated above) or via JSON patches. - -### Kustomization hierarchies - -The patch scenario example illustrates another useful concept when working with Kustomize: multiple kustomization.yaml files in a project hierarchy. This example project has two subprojects: one for a database and another for a web app. - -The database piece has a customization to update the service name with the patch functionality, as described above. - -The web piece simply has a file to include the resources. - -At the base level, there is a kustomization.yaml file that pulls in resources from both parts of the project and a simple file to create a namespace. It also applies a common label to the different elements. - -### Generators - -Kustomize also includes "generators" to automatically update related Kubernetes resources when a different resource is updated. A generator establishes a connection between two resources by generating a random identifier and using it as a common suffix on the objects' names. - -This can be beneficial for configmaps and secrets: If data is changed in them, the corresponding deployment will automatically be regenerated and updated. Figure 9 shows an example specification for a Kustomize generator. - -![Kustomize generator spec][23] - -Figure 9: Example of a Kustomize generator spec (Brent Laster, [CC BY-SA 4.0][6]) - -When run through a Kustomize build operation, the new objects produced will have the generated name applied and included in the specs, as shown in Figure 10. - -![Objects and specs from a Kustomize generator ][24] - -Figure 10: Objects and specs resulting from using a Kustomize generator (Brent Laster, [CC BY-SA 4.0][6]) - -If you then change the configmap associated with the generator (as Figure 11 shows)... 
- -![Objects and specs from Kustomize generator][25] - -Figure 11: Making a change to the configMapGenerator (Brent Laster, [CC BY-SA 4.0][6]) - -… Kustomize will generate new values that are incorporated into the specs and objects (Figure 12a). Then, if you take the build output and apply it, the deployment will be updated because the associated configmap was updated (Figure 12b). - -![Changes after configMapGenerator update and Kustomize build][26] - -Figure12a: Changes after the configMapGenerator is updated and a Kustomize build is run (Brent Laster, [CC BY-SA 4.0][6]) - -![Deployment changes after configmap changes][27] - -Figure 12b: Changes to the deployment based on changes to the configmap (Brent Laster, [CC BY-SA 4.0][6]) - -In summary, a `kubectl apply` operation on the build's results causes the configmap and any dependent items to reference the new hash value of the updated configmap and update them in the cluster. - -### Kubernetes integration - -Kustomize has been integrated into Kubernetes. There are two integration points: - - 1. To view the resources in a directory with a kustomization file, you can run: -`$ kubectl kustomize < directory >` - 2. To apply those resources, you can use the `-k` option on `kubectl apply`: -`$ kubectl apply -k < directory >` - - - -If you are using an older version of Kubernetes, it might not have an updated version of Kustomize. In most cases, this isn't a problem unless you need a particular feature or bug fix available in a current version of Kustomize. - -### Conclusion  - -Kustomize is another way to facilitate the reuse of Kubernetes manifests. Unlike most other approaches, it leaves the original files intact and generates changed versions on the fly with its `build` command. The changes to make are defined in a kustomization.yaml file and can include adding various common attributes, making patches on top of original content, or even generating unique identifiers to tie together items like configmaps and deployments. - -All in all, Kustomize provides a unique and simple way to deliver variations of Kubernetes manifests once you are comfortable with the setup and function of its various ways to transform files. It is significantly different from the traditional reuse approach taken by Helm, the other main tool for reuse. I'll explore those differences in a future article. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/kustomize-kubernetes - -作者:[Brent Laster][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bclaster -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas) -[2]: https://opensource.com/resources/what-is-kubernetes -[3]: https://helm.sh/ -[4]: https://kustomize.io/ -[5]: https://opensource.com/sites/default/files/uploads/kustomize-1_simple-example.png (A simple example) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://opensource.com/sites/default/files/uploads/kustomize-2_resources.png (Resources section in kustomization.yaml) -[8]: https://opensource.com/sites/default/files/uploads/kustomize-3_transformers.png (commonAnnotations and namePrefix transformers) -[9]: https://opensource.com/sites/default/files/uploads/kustomize-4_image-transformer.png (kustomization.yaml file for an image transformer) -[10]: https://opensource.com/sites/default/files/uploads/kustomize-5_base-overlay.png (base/overlay approach) -[11]: https://opensource.com/sites/default/files/uploads/kustomize-6a_patch1.png (Patching database content) -[12]: https://opensource.com/sites/default/files/uploads/kustomize-6b_patch2.png (Renaming database service) -[13]: https://opensource.com/sites/default/files/uploads/kustomize-7a_patchblock.png (Patch block) -[14]: https://opensource.com/sites/default/files/uploads/kustomize-7b_patch_0.png (Patch to apply) -[15]: https://opensource.com/sites/default/files/uploads/kustomize-7c_targetlocation.png (Target location) -[16]: https://opensource.com/sites/default/files/uploads/kustomize-7d_valuemodify.png (value to modify) -[17]: https://opensource.com/sites/default/files/uploads/kustomize-8a_service.png (Locating the initial named object) -[18]: https://opensource.com/sites/default/files/uploads/kustomize-8b_name.png (Locating the initial named object) -[19]: https://opensource.com/sites/default/files/uploads/kustomize-8c_metadata.png (Locating the target section in the hierarchy) -[20]: https://opensource.com/sites/default/files/uploads/kustomize-8d_name.png (Identifying the path) -[21]: https://opensource.com/sites/default/files/uploads/kustomize-8e_name.png (Substituting the desired value) -[22]: https://opensource.com/sites/default/files/uploads/kustomize-8f_newname.png (Rendering the result) -[23]: https://opensource.com/sites/default/files/uploads/kustomize-9a_kustomizegenerator.png (Kustomize generator spec) -[24]: https://opensource.com/sites/default/files/uploads/kustomize-9b_hashadded.png (Objects and specs from a Kustomize generator ) -[25]: https://opensource.com/sites/default/files/uploads/kustomize-9c_commonlabel.png (Objects and specs from Kustomize generator) -[26]: https://opensource.com/sites/default/files/uploads/kustomize-9d_hashchanged.png (Changes after configMapGenerator update and Kustomize build) -[27]: https://opensource.com/sites/default/files/uploads/kustomize-9e_updates.png (Deployment changes after configmap changes) diff --git a/sources/tech/20210603 Test your Kubernetes experiments with an open source web interface.md b/sources/tech/20210603 Test your Kubernetes 
experiments with an open source web interface.md deleted file mode 100644 index 55bd633e3d..0000000000 --- a/sources/tech/20210603 Test your Kubernetes experiments with an open source web interface.md +++ /dev/null @@ -1,399 +0,0 @@ -[#]: subject: (Test your Kubernetes experiments with an open source web interface) -[#]: via: (https://opensource.com/article/21/6/chaos-mesh-kubernetes) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Test your Kubernetes experiments with an open source web interface -====== -Chaos Mesh enables chaos engineering with a web frontend. Learn more in -the fourth article in this series. -![Digital creative of a browser on the internet][1] - -Have you wanted to cause chaos to test your systems but prefer to use visual tools rather than the terminal? Well, this article is for you, my friend. In the first article in this series, I explained [what chaos engineering is][2]; in the second article, I demonstrated how to get your [system's steady state][3] so that you can compare it against a chaos state; and in the third, I showed how to [use Litmus to test][4] arbitrary failures and experiments in your Kubernetes cluster. - -The fourth article introduces [Chaos Mesh][5], an open source chaos orchestrator with a web user interface (UI) that anyone can use. It allows you to create experiments and display statistics in a web UI for presentations or visual storytelling. The [Cloud Native Computing Foundation][6] hosts the Chaos Mesh project, which means it is a good choice for Kubernetes. So let's get started! In this walkthrough, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19. - -### Configure Minikube - -If you haven't already, [install Minikube][7] in whatever way that makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check the status of your system: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -#### Install Chaos Mesh - -Start installing Chaos Mesh by adding the repository to Helm: - - -``` -$ helm repo add chaos-mesh -"chaos-mesh" has been added to your repositories -``` - -Then search for your Helm chart: - - -``` -$ helm search repo chaos-mesh -NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       -chaos-mesh/chaos-mesh   v0.5.0          v1.2.0          Chaos Mesh® is a cloud-native Chaos Engineering... 
-``` - -Once you find your chart, you can begin the installation steps, starting with creating a `chaos-testing` namespace: - - -``` -$ kubectl create ns chaos-testing -namespace/chaos-testing created -``` - -Next, install your Chaos Mesh chart in this namespace and name it `chaos-mesh`: - - -``` -$ helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing -NAME: chaos-mesh -LAST DEPLOYED: Mon May 10 10:08:52 2021 -NAMESPACE: chaos-testing -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -1\. Make sure chaos-mesh components are running -   kubectl get pods --namespace chaos-testing -l app.kubernetes.io/instance=chaos-mesh -``` - -As the output instructs, check that the Chaos Mesh components are running: - - -``` -$ kubectl get pods --namespace chaos-testing -l app.kubernetes.io/instance=chaos-mesh -NAME                                       READY   STATUS    RESTARTS   AGE -chaos-controller-manager-bfdcb99fd-brkv7   1/1     Running   0          85s -chaos-daemon-4mjq2                         1/1     Running   0          85s -chaos-dashboard-865b778d79-729xw           1/1     Running   0          85s -``` - -Now that everything is running correctly, you can set up the services to see the Chaos Mesh dashboard and make sure the `chaos-dashboard` service is available: - - -``` -$ kubectl get svc -n chaos-testing -NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE -chaos-daemon                    ClusterIP   None            <none>        31767/TCP,31766/TCP           3m42s -chaos-dashboard                 NodePort    10.99.137.187   <none>        2333:30029/TCP                3m42s -chaos-mesh-controller-manager   ClusterIP   10.99.118.132   <none>        10081/TCP,10080/TCP,443/TCP   3m42s -``` - -Now that you know the service is running, go ahead and expose it, rename it, and open the dashboard using `minikube service`: - - -``` -$ kubectl expose service chaos-dashboard --namespace chaos-testing --type=NodePort --target-port=2333 --name=chaos -service/chaos exposed - -$ minikube service chaos --namespace chaos-testing -|---------------|-------|-------------|---------------------------| -|   NAMESPACE   | NAME  | TARGET PORT |            URL            | -|---------------|-------|-------------|---------------------------| -| chaos-testing | chaos |        2333 | | -|---------------|-------|-------------|---------------------------| -🎉  Opening service chaos-testing/chaos in default browser... -``` - -When the browser opens, you'll see a token generator window. Check the box next to **Cluster scoped**, and follow the directions on the screen. - -![Token generator][8] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -Then you can log into Chaos Mesh and see the Dashboard. - -![Chaos Mesh Dashboard][10] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -You have installed your Chaos Mesh instance and can start working towards chaos testing! - -### Get meshy in your cluster - -Now that everything is up and running, you can set up some new experiments to try. The documentation offers some predefined experiments, and I'll choose [StressChaos][11] from the options. In this walkthrough, you will create something in a new namespace to stress against and scale it up so that it can stress against more than one thing. 
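One practical aside before building the test workload: if `minikube service` doesn't manage to open the dashboard in your environment, a plain port-forward is a workable fallback. This is only a sketch, and it assumes the `chaos-dashboard` service is still listening on port 2333, as the service listing above showed; once it's running, browse to `http://localhost:2333` and log in with the same token:

```
# Forward the Chaos Mesh dashboard service to localhost:2333
$ kubectl port-forward svc/chaos-dashboard -n chaos-testing 2333:2333
```

With the dashboard reachable, continue with the walkthrough.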
- -Create the namespace: - - -``` -$ kubectl create ns app-demo -namespace/app-demo created -``` - -Then create the deployment in your new namespace: - - -``` -$ kubectl create deployment nginx --image=nginx --namespace app-demo -deployment.apps/nginx created -``` - -Scale the deployment up to eight pods: - - -``` -$ kubectl scale deployment/nginx --replicas=8 --namespace app-demo -deployment.apps/nginx scaled -``` - -Finally, confirm everything is up and working correctly by checking your pods in the namespace: - - -``` -$ kubectl get pods -n app-demo -NAME                     READY   STATUS    RESTARTS   AGE -nginx-6799fc88d8-7kphn   1/1     Running   0          69s -nginx-6799fc88d8-82p8t   1/1     Running   0          69s -nginx-6799fc88d8-dfrlz   1/1     Running   0          69s -nginx-6799fc88d8-kbf75   1/1     Running   0          69s -nginx-6799fc88d8-m25hs   1/1     Running   0          2m44s -nginx-6799fc88d8-mg4tb   1/1     Running   0          69s -nginx-6799fc88d8-q9m2m   1/1     Running   0          69s -nginx-6799fc88d8-v7q4d   1/1     Running   0          69s -``` - -Now that you have something to test against, you can begin working on the definition for your experiment. Start by creating `chaos-test.yaml`: - - -``` -`$ touch chaos-test.yaml` -``` - -Next, create the definition for the chaos test. Just copy and paste this experiment definition into your `chaos-test.yaml` file: - - -``` -apiVersion: chaos-mesh.org/v1alpha1 -kind: StressChaos -metadata: -  name: burn-cpu -  namespace: chaos-testing -spec: -  mode: one -  selector: -    namespaces: -     - app-demo -    labelSelectors: -      app: "nginx" -  stressors: -    cpu: -      workers: 1 -  duration: '30s' -  scheduler: -    cron: '@every 2m' -``` - -This test will burn 1 CPU for 30 seconds every 2 minutes on pods in the `app-demo` namespace. Finally, apply the YAML file to start the experiment and view what happens in your dashboard. - -Apply the experiment file: - - -``` -$ kubectl apply -f chaos-test.yaml -stresschaos.chaos-mesh.org/burn-cpu created -``` - -Then go to your dashboard and click **Experiments** to see the stress test running. You can pause the experiment by pressing the **Pause** button on the right-hand side of the experiment. - -![Chaos Mesh Experiments interface][12] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -Click **Dashboard** to see the state with a count of total experiments, the state graph, and a timeline of running events or previously run tests. - -![Chaos Mesh Dashboard][13] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -Choose **Events** to see the timeline and the experiments below it with details. - -![Chaos Mesh Events interface][14] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -![Chaos Mesh Events timeline details][15] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -Congratulations on completing your first test! Now that you have this working, I'll share more details about what else you can do with your experiments. - -### But wait, there's more - -Other things you can do with this experiment using the command line include: - - * Updating the experiment to change how it works - * Pausing the experiment if you need to return the cluster to a steady state - * Resuming the experiment to continue testing - * Deleting the experiment if you no longer need it for testing - - - -#### Updating the experiment - -As an example, update the experiment in your cluster to increase the duration between tests. 
Go back to your `cluster-test.yaml` and edit the scheduler to change 2 minutes to 20 minutes: - -Before: - - -``` - scheduler: -    cron: '@every 2m' -``` - -After: - - -``` - scheduler: -    cron: '@every 20m' -``` - -Save and reapply your file; the output should show the new stress test configuration: - - -``` -$ kubectl apply -f chaos-test.yaml -stresschaos.chaos-mesh.org/burn-cpu configured -``` - -If you look in the Dashboard, the experiment should show the new cron configuration. - -![New cron configuration][16] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -#### Pausing and resuming the experiment - -Manually pausing the experiment on the command line will require adding an [annotation][17] to the experiment. Resuming the experiment will require removing the annotation. - -To add the annotation, you will need the kind, name, and namespace of the experiment from your YAML file. - -**Pause an experiment:** - - -``` -$ kubectl annotate stresschaos burn-cpu experiment.chaos-mesh.org/pause=true  -n chaos-testing - -stresschaos.chaos-mesh.org/burn-cpu annotated -``` - -The web UI shows it is paused. - -![Paused experiment][18] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -**Resume an experiment** - -You need the same information to resume your experiment. However, rather than the word `true`, you use a dash to remove the pause. - - -``` -$ kubectl annotate stresschaos burn-cpu experiment.chaos-mesh.org/pause-  -n chaos-testing - -stresschaos.chaos-mesh.org/burn-cpu annotated -``` - -Now you can see the experiment has resumed in the web UI. - -![Resumed experiment][19] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -#### Remove an experiment - -Removing an experiment altogether requires a simple `delete` command with the file name: - - -``` -$ kubectl delete -f chaos-test.yaml - -stresschaos.chaos-mesh.org "burn-cpu" deleted -``` - -Once again, you should see the desired result in the web UI. - -![All experiments deleted][20] - -(Jess Cherry, [CC BY-SA 4.0][9]) - -Many of these tasks were done with the command line, but you can also create your own experiments using the UI or import experiments you created as YAML files. This helps many people become more comfortable with creating new experiments. There is also a Download button for each experiment, so you can see the YAML file you created by clicking a few buttons. - -### Final thoughts - -Now that you have this new tool, you can get meshy with your environment. Chaos Mesh allows more user-friendly interaction, which means more people can join the chaos team. I hope you've learned enough here to expand on your chaos engineering. Happy pod hunting! 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/chaos-mesh-kubernetes - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet) -[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[3]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus -[4]: https://opensource.com/article/21/5/total-chaos-litmus -[5]: https://chaos-mesh.org/ -[6]: https://www.cncf.io/ -[7]: https://minikube.sigs.k8s.io/docs/start/ -[8]: https://opensource.com/sites/default/files/uploads/tokengenerator.png (Token generator) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: https://opensource.com/sites/default/files/uploads/chaosmesh_dashboard.png (Chaos Mesh Dashboard) -[11]: https://chaos-mesh.org/docs/chaos_experiments/stresschaos_experiment -[12]: https://opensource.com/sites/default/files/uploads/chaosmesh_experiments.png (Chaos Mesh Experiments interface) -[13]: https://opensource.com/sites/default/files/uploads/chaosmesh_experiment-dashboard.png (Chaos Mesh Dashboard) -[14]: https://opensource.com/sites/default/files/uploads/chaosmesh_events.png (Chaos Mesh Events interface) -[15]: https://opensource.com/sites/default/files/uploads/chaosmesh_event-details.png (Chaos Mesh Events timeline details) -[16]: https://opensource.com/sites/default/files/uploads/newcron.png (New cron configuration) -[17]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ -[18]: https://opensource.com/sites/default/files/uploads/pausedexperiment.png (Paused experiment) -[19]: https://opensource.com/sites/default/files/uploads/resumedexperiment.png (Resumed experiment) -[20]: https://opensource.com/sites/default/files/uploads/deletedexperiment.png (All experiments deleted) diff --git a/sources/tech/20210607 Test arbitrary pod failures on Kubernetes with kube-monkey.md b/sources/tech/20210607 Test arbitrary pod failures on Kubernetes with kube-monkey.md deleted file mode 100644 index 5955aa433d..0000000000 --- a/sources/tech/20210607 Test arbitrary pod failures on Kubernetes with kube-monkey.md +++ /dev/null @@ -1,364 +0,0 @@ -[#]: subject: (Test arbitrary pod failures on Kubernetes with kube-monkey) -[#]: via: (https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Test arbitrary pod failures on Kubernetes with kube-monkey -====== -Kube-monkey offers an easy way to stress-test your systems by scheduling -random termination pods in your cluster. -![Parts, modules, containers for software][1] - -I have covered multiple chaos engineering tools in this series. 
The first article in this series explained [what chaos engineering is][2]; the second demonstrated how to get your [system's steady state][3] so that you can compare it against a chaos state; the third showed how to [use Litmus to test][4] arbitrary failures and experiments in your Kubernetes cluster; and the fourth article got into [Chaos Mesh][5], an open source chaos orchestrator with a web user interface. - -In this fifth article, I want to talk about arbitrary pod failure. [Kube-monkey][6] offers an easy way to stress-test your systems by scheduling random termination pods in your cluster. This aims to encourage and validate the development of failure-resilient services. As in the previous walkthroughs, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19. - -### Configure Minikube - -If you haven't already, [install Minikube][7] in whatever way makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check the status of your system: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -### Preconfiguring with deployments - -Start by adding some small deployments to run chaos against. These deployments will need some special labels, so you need to create a new Helm chart. The following labels will help kube-monkey determine what to kill if the app is opted-in to doing chaos and understand what details are behind the chaos: - - * **kube-monkey/enabled**: This setting opts you in to starting the chaos. - * **kube-monkey/mtbf**: This stands for mean time between failure (in days). For example, if it's set to 3, the Kubernetes (K8s) app expects to have a pod killed approximately every third weekday. - * **kube-monkey/identifier**: This is a unique identifier for the K8s apps; in this example, it will be "nginx." - * **kube-monkey/kill-mode**: The kube-monkey's default behavior is to kill only one pod in the cluster, but you can change it to add more: - * **kill-all:** Kill every pod, no matter what is happening with a pod - * **fixed:** Pick a number of pods you want to kill - * **fixed-percent:** Kill a fixed percent of pods (e.g., 50%) - * **kube-monkey/kill-value**: This is where you can specify a value for kill-mode - * **fixed:** The number of pods to kill - * **random-max-percent:** The maximum number from 0–100 that kube-monkey can kill - * **fixed-percent:** The percentage, from 0–100 percent, of pods to kill - - - -Now that you have this background info, you can start [creating a basic Helm chart][8]. - -I named this Helm chart `nginx`. 
I'll show only the changes to the Helm chart deployment labels below. You need to change the deployment YAML file, which is `nginx/templates` in this example: - - -``` -$ /chaos/kube-monkey/helm/nginx/templates$ ls -la -total 40 -drwxr-xr-x 3 jess jess 4096 May 15 14:46 . -drwxr-xr-x 4 jess jess 4096 May 15 14:46 .. --rw-r--r-- 1 jess jess 1826 May 15 14:46 deployment.yaml --rw-r--r-- 1 jess jess 1762 May 15 14:46 _helpers.tpl --rw-r--r-- 1 jess jess  910 May 15 14:46 hpa.yaml --rw-r--r-- 1 jess jess 1048 May 15 14:46 ingress.yaml --rw-r--r-- 1 jess jess 1735 May 15 14:46 NOTES.txt --rw-r--r-- 1 jess jess  316 May 15 14:46 serviceaccount.yaml --rw-r--r-- 1 jess jess  355 May 15 14:46 service.yaml -drwxr-xr-x 2 jess jess 4096 May 15 14:46 tests -``` - -In your `deployment.yaml` file, find this section: - - -``` - template: -    metadata: -     {{- with .Values.podAnnotations }} -      annotations: -       {{- toYaml . | nindent 8 }} -      {{- end }} -      labels: -       {{- include "nginx.selectorLabels" . | nindent 8 }} -``` - -And make these changes: - - -``` - template: -    metadata: -     {{- with .Values.podAnnotations }} -      annotations: -       {{- toYaml . | nindent 8 }} -      {{- end }} -      labels: -       {{- include "nginx.selectorLabels" . | nindent 8 }} -        kube-monkey/enabled: enabled -        kube-monkey/identifier: monkey-victim -        kube-monkey/mtbf: '2' -        kube-monkey/kill-mode: "fixed" -        kube-monkey/kill-value: '1' -``` - -Move back one directory and find the `values` file: - - -``` -$ /chaos/kube-monkey/helm/nginx/templates$ cd ../ -$ /chaos/kube-monkey/helm/nginx$ ls -charts  Chart.yaml  templates  values.yaml -``` - -You need to change one line in the values file, from: - - -``` -`replicaCount: 1` -``` - -to: - - -``` -`replicaCount: 8` -``` - -This will give you eight different pods to test chaos against. - -Move back one more directory and install the new Helm chart: - - -``` -$ /chaos/kube-monkey/helm/nginx$ cd ../ -$ /chaos/kube-monkey/helm$ helm install nginxtest nginx -NAME: nginxtest -LAST DEPLOYED: Sat May 15 14:53:47 2021 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -NOTES: -1\. 
Get the application URL by running these commands: -  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginxtest" -o jsonpath="{.items[0].metadata.name}") -  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") -  echo "Visit to use your application" -  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT -``` - -Then check the labels in your Nginx pods: - - -``` -$ /chaos/kube-monkey/helm$ kubectl get pods -n default -NAME                                 READY   STATUS    RESTARTS   AGE -nginxtest-8f967857-88zv7             1/1     Running   0          80s -nginxtest-8f967857-8qb95             1/1     Running   0          80s -nginxtest-8f967857-dlng7             1/1     Running   0          80s -nginxtest-8f967857-h7mmc             1/1     Running   0          80s -nginxtest-8f967857-pdzpq             1/1     Running   0          80s -nginxtest-8f967857-rdpnb             1/1     Running   0          80s -nginxtest-8f967857-rqv2w             1/1     Running   0          80s -nginxtest-8f967857-tr2cn             1/1     Running   0          80s -``` - -Chose the first pod to describe and confirm the labels are in place: - - -``` -$ /chaos/kube-monkey/helm$ kubectl describe pod nginxtest-8f967857-88zv7 -n default -Name:         nginxtest-8f967857-88zv7 -Namespace:    default -Priority:     0 -Node:         minikube/192.168.49.2 -Start Time:   Sat, 15 May 2021 15:11:37 -0400 -Labels:       app.kubernetes.io/instance=nginxtest -              app.kubernetes.io/name=nginx -              kube-monkey/enabled=enabled -              kube-monkey/identifier=monkey-victim -              kube-monkey/kill-mode=fixed -              kube-monkey/kill-value=1 -              kube-monkey/mtbf=2 -              pod-template-hash=8f967857 -``` - -### Configure and install kube-monkey - -To install kube-monkey using Helm, you first need to run `git clone on `the [kube-monkey repository][6]: - - -``` -$ /chaos$ git clone -Cloning into 'kube-monkey'... -remote: Enumerating objects: 14641, done. -remote: Counting objects: 100% (47/47), done. -remote: Compressing objects: 100% (36/36), done. -remote: Total 14641 (delta 18), reused 22 (delta 8), pack-reused 14594 -Receiving objects: 100% (14641/14641), 30.56 MiB | 39.31 MiB/s, done. -Resolving deltas: 100% (6502/6502), done. -``` - -Change to the `kube-monkey/helm` directory: - - -``` -$ /chaos$ cd kube-monkey/helm/ -$ /chaos/kube-monkey/helm$ -``` - -Then go into the Helm chart and find the `values.yaml` file: - - -``` -$ /chaos/kube-monkey/helm$ cd kubemonkey/ -$ /chaos/kube-monkey/helm/kubemonkey$ ls -Chart.yaml  README.md  templates  values.yaml -``` - -Below, I will show just the sections of the `values.yaml` file you need to change. They disable dry-run mode by changing it in the config section to `false`, then add the default namespace to the whitelist so that it can kill the pods you deployed. You must keep the `blacklistedNamespaces` value or you will cause severe damage to your system. 
- -Change this: - - -``` -config: -  dryRun: true   -  runHour: 8 -  startHour: 10 -  endHour: 16 -  blacklistedNamespaces: -   - kube-system -  whitelistedNamespaces: [] -``` - -To this: - - -``` -config: -  dryRun: false   -  runHour: 8 -  startHour: 10 -  endHour: 16 -  blacklistedNamespaces: -    - kube-system -  whitelistedNamespaces:  ["default"] -``` - -In the debug section, set `enabled` and `schedule_immediate_kill` to `true`. This will show the pods being killed. - -Change this: - - -``` - debug: -   enabled: false -   schedule_immediate_kill: false -``` - -To this: - - -``` - debug: -   enabled: true -   schedule_immediate_kill: true -``` - -Run a `helm install`: - - -``` -$ /chaos/kube-monkey/helm$ helm install chaos kubemonkey -NAME: chaos -LAST DEPLOYED: Sat May 15 13:51:59 2021 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -1\. Wait until the application is rolled out: -  kubectl -n default rollout status deployment chaos-kube-monkey -2\. Check the logs: -  kubectl logs -f deployment.apps/chaos-kube-monkey -n default -``` - -Check the kube-monkey logs and see that the pods are being terminated: - - -``` - $ /chaos/kube-monkey/helm$ kubectl logs -f deployment.apps/chaos-kube-monkey -n default - -        ********** Today's schedule ********** -        k8 Api Kind     Kind Name               Termination Time -        -----------     ---------               ---------------- -        v1.Deployment   nginxtest               05/15/2021 15:15:22 -0400 EDT -        ********** End of schedule ********** -I0515 19:15:22.343202       1 kubemonkey.go:70] Termination successfully executed for v1.Deployment nginxtest -I0515 19:15:22.343216       1 kubemonkey.go:73] Status Update: 0 scheduled terminations left. -I0515 19:15:22.343220       1 kubemonkey.go:76] Status Update: All terminations done. -I0515 19:15:22.343278       1 kubemonkey.go:19] Debug mode detected! -I0515 19:15:22.343283       1 kubemonkey.go:20] Status Update: Generating next schedule in 30 sec -``` - -You can also use [K9s][9] and watch the pods die. - -![Pods dying in K9s][10] - -(Jess Cherry, [CC BY-SA 4.0][11]) - -Congratulations! You now have a running chaos test with arbitrary failures. Anytime you want, you can change your applications to test at a certain day of the week and time of day. - -### Final thoughts - -While kube-monkey is a great chaos engineering tool, it does require heavy configurations. Therefore, it isn't the best starter chaos engineering tool for someone new to Kubernetes. Another drawback is you have to edit your application's Helm chart for chaos testing to run. - -This tool would be best positioned in a staging environment to watch how applications respond to arbitrary failure regularly. This gives you a long-term way to keep track of unsteady states using cluster monitoring tools. It also keeps notes that you can use for recovery of your internal applications in production. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software) -[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[3]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus -[4]: https://opensource.com/article/21/5/total-chaos-litmus -[5]: https://opensource.com/article/21/5/get-meshy-chaos-mesh -[6]: https://github.com/asobti/kube-monkey -[7]: https://minikube.sigs.k8s.io/docs/start/ -[8]: https://opensource.com/article/20/5/helm-charts -[9]: https://opensource.com/article/20/5/kubernetes-administration -[10]: https://opensource.com/sites/default/files/uploads/podsdying.png (Pods dying in K9s) -[11]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20210608 Analyze community health metrics with this open source tool.md b/sources/tech/20210608 Analyze community health metrics with this open source tool.md deleted file mode 100644 index 6c2aa41b70..0000000000 --- a/sources/tech/20210608 Analyze community health metrics with this open source tool.md +++ /dev/null @@ -1,83 +0,0 @@ -[#]: subject: (Analyze community health metrics with this open source tool) -[#]: via: (https://opensource.com/article/21/6/health-metrics-cauldron) -[#]: author: (Georg Link https://opensource.com/users/georglink) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Analyze community health metrics with this open source tool -====== -Cauldron makes it easier for anyone to use GrimoireLab to learn more -about open source communities. -![Open source doctor.][1] - -Community managers, maintainers, and foundations seek metrics and insights about open source communities. Because each open source project works differently, its data needs to be analyzed differently. Yet, all projects share common challenges with getting data and creating visualizations. This presents an ideal use case for an open source project to solve this problem generically with the capability to customize it to users' needs. - -The open source GrimoireLab project has been working on ways to [measure the health of open source communities][2]. In addition to powering large-scale open source metrics solutions, it also serves as the backbone of the new [Cauldron][3] platform. - -GrimoireLab solves some hard problems related to retrieving and curating data. It was designed to be a flexible metrics solution for analyzing open source communities. [LibreOffice][4] and [Mautic][5] are among the communities using GrimoireLab's open source tools to generate community health metrics. - -![LibreOffice's GrimoireLab dashboard][6] - -LibreOffice's GrimoireLab dashboard (Georg Link, [CC BY-SA 4.0][7]) - -GrimoireLab satisfies the need for metrics, but two challenges have prevented wider adoption. First, it is difficult to deploy and secure. 
Its setup is more difficult than many expect, especially those who just want to have metrics without manually editing configuration files. Second, it does not scale well if you have many users trying to analyze different projects; every user must deploy their own GrimoireLab instance. - -Two platforms have solved these challenges to offer community metrics as a service, with GrimoireLab working under the hood. First, the Linux Foundation leveraged GrimoireLab to bootstrap its [LFX Insights platform][8]. It gives the foundation's open source projects a great deal of insight into their communities, some of which goes beyond GrimoireLab's core features. LFX Insights is not available as open source and only available from the Linux Foundation. - -![LFX Insights dashboard][9] - -LFX Insights dashboard showing metrics about the Kubernetes project (Georg Link, [CC BY-SA 4.0][7]) - -The other choice is [Cauldron][10], which is open source. It's designed to abstract the difficulty of using GrimoireLab's metrics and create a smooth user experience. Anyone can use Cauldron for their open source communities for free at [Cauldron.io][3]. Cauldron provides metrics without having to deploy software, which resolves the challenge of deploying and securing GrimoireLab. - -![Cauldron dashboard][11] - -Cauldron dashboard showing metrics about the Kubernetes project (Georg Link, [CC BY-SA 4.0][7]) - -Cauldron solves the scalability challenge by collecting data about an open source community centrally and making it available to all platform users. This reduces the time needed for new reports if the data was previously collected. It also minimizes the issue of API rate limits that could restrict collecting data at scale. - -To mitigate privacy concerns, Cauldron anonymizes all data by default. Should you want to know who your contributors (or companies in your communities) are, you will need a private Cauldron instance, either by deploying it yourself or using [Cauldron Cloud service][12]. - -These design choices enable a new way of working with this data. Instead of limiting analysis to individual projects, anyone can define reports and include anything from a single project's repository to hundreds of repositories from a group of projects. This makes it possible to analyze trends, like the rise in blockchain projects, by looking at data across many projects. - -Many people want to be able to compare data about multiple open source projects. In Cauldron, a user can create a report for each project then use the Comparison feature to show the data for each project side-by-side with graphs. - -![A Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes][13] - -Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes (Georg Link, [CC BY-SA 4.0][7]) - -The high demand for open source within the enterprise and increasing interest in community health and metrics are leading solution providers to improve usability. GrimoireLab continues to focus on retrieving data about open source communities. Downstream projects like LFX Insights and Cauldron leverage GrimoireLab to provide easy-to-use metrics. - -On a related note, the CHAOSS Project offers a Community Health Report. The report is created using the two CHAOSS projects, Augur and GrimoireLab. You can [request your Community Health Report][14] on the CHAOSS website or see the same metrics and visualizations under the [CHAOSS tab][15] in Cauldron. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/health-metrics-cauldron - -作者:[Georg Link][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/georglink -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC (Open source doctor.) -[2]: https://opensource.com/article/20/3/grimoirelab -[3]: https://cauldron.io/ -[4]: https://dashboard.documentfoundation.org/ -[5]: https://dashboard.mautic.org/ -[6]: https://opensource.com/sites/default/files/uploads/libreoffice_grimoirelab-dashboard.png (LibreOffice's GrimoireLab dashboard) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://lfx.linuxfoundation.org/tools/insights -[9]: https://opensource.com/sites/default/files/uploads/lfx-insights.png (LFX Insights dashboard) -[10]: https://gitlab.com/cauldronio/cauldron/ -[11]: https://opensource.com/sites/default/files/uploads/cauldron-dashboard.png (Cauldron dashboard) -[12]: http://cloud.cauldron.io/ -[13]: https://opensource.com/sites/default/files/uploads/compare-projects.png (A Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes) -[14]: https://chaoss.community/community-reports/ -[15]: https://cauldron.io/project/372?tab=chaoss diff --git a/sources/tech/20210608 Play Doom on Kubernetes.md b/sources/tech/20210608 Play Doom on Kubernetes.md deleted file mode 100644 index 38f85bc387..0000000000 --- a/sources/tech/20210608 Play Doom on Kubernetes.md +++ /dev/null @@ -1,232 +0,0 @@ -[#]: subject: (Play Doom on Kubernetes) -[#]: via: (https://opensource.com/article/21/6/kube-doom) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Play Doom on Kubernetes -====== -Terminate pods while having fun by playing Kube DOOM. -![A cat under a keyboard.][1] - -Do you ever feel nostalgic for Doom and other blocky video games, the ones that didn't require much more than a mouse and the hope that you could survive on a LAN with your friends? You know what I'm talking about; the days when your weekends were consumed with figuring out how you could travel with your desktop and how many Mountain Dews you could fit in your cargo pants pockets? If this memory puts a warm feeling in your heart, well, this article is for you. - -Get ready to play Doom again, only this time you'll be playing for a legitimate work reason: doing chaos engineering. I'll be using my [fork of Kube DOOM][2] (with a new Helm chart because that's how I sometimes spend my weekends). I also have a pull request with the [original Kube DOOM][3] creator that I'm waiting to hear about. - -The first article in this series explained [what chaos engineering is][4], and the second demonstrated how to get your [system's steady state][5] so that you can compare it against a chaos state. In the next few articles, I introduced some chaos engineering tools you can use: [Litmus for testing][6] arbitrary failures and experiments in your Kubernetes cluster; [Chaos Mesh][7], an open source chaos orchestrator with a web user interface; and [Kube-monkey][8] for stress-testing your systems by scheduling random termination pods in your cluster. 
- -In this sixth article, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, a VNC viewer, and Kubernetes 1.19. - -### Configure Minikube - -If you haven't already, [install Minikube][9] in whatever way that makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check the status of your system: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -### Preinstall pods with Helm - -Before moving forward, you'll need to deploy some pods into your cluster. To do this, I generated a simple Helm chart and changed the replicas in my values file from 1 to 8. - -If you need to generate a Helm chart, you can read my article on [creating a Helm chart][10] for guidance. I created a Helm chart named `nginx` and created a namespace to install my chart into using the commands below. - -Create a namespace: - - -``` -`$ kubectl create ns nginx` -``` - -Install the chart in your new namespace with a name: - - -``` -$ helm install chaos-pods nginx -n nginx - -NAME: chaos-pods -LAST DEPLOYED: Sun May 23 10:15:52 2021 -NAMESPACE: nginx -STATUS: deployed -REVISION: 1 -NOTES: -1\. Get the application URL by running these commands: -  export POD_NAME=$(kubectl get pods --namespace nginx -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=chaos-pods" -o jsonpath="{.items[0].metadata.name}") -  export CONTAINER_PORT=$(kubectl get pod --namespace nginx $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") -  echo "Visit to use your application" -  kubectl --namespace nginx port-forward $POD_NAME 8080:$CONTAINER_PORT -``` - -### Install Kube DOOM - -You can use any [Virtual Network Computer][11] (VNC) viewer you want; I installed [TigerVNC][12] on my Linux box. There are several ways you can set up Kube DOOM. Before I generated my Helm chart, you could set it up with [kind][13] or use it locally with Docker, and the [README][14] contains instructions for those uses. - -Get started with a `git clone`: - - -``` -$ git clone [git@github.com][15]:Alynder/kubedoom.git -Cloning into 'kubedoom'... -``` - -Then change directory into the `kubedoom/helm` folder: - - -``` -`$ cd kubedoom/helm/` -``` - -Since the base values file is already set up correctly, you just need to run a single install command: - - -``` -$ helm install kubedoom kubedoom/ -n kubedoom -NAME: kubedoom -LAST DEPLOYED: Mon May 31 11:16:58 2021 -NAMESPACE: kubedoom -STATUS: deployed -REVISION: 1 -NOTES: -1\. 
Get the application URL by running these commands: -  export NODE_PORT=$(kubectl get --namespace kubedoom -o jsonpath="{.spec.ports[0].nodePort}" services kubedoom-kubedoom-chart) -  export NODE_IP=$(kubectl get nodes --namespace kubedoom -o jsonpath="{.items[0].status.addresses[0].address}") -  echo http://$NODE_IP:$NODE_PORT -``` - -Everything should be installed, set up, and ready to go. - -### Play with Kube DOOM - -Now you just need to get in there, run a few commands, and start playing your new chaos video game. The first command is a port forward, followed by the VNC viewer connection command. The VNC viewer connection needs a password, which is `idbehold`. - -Find your pod for the port forward: - - -``` -$ kubectl get pods -n kubedoom -NAME                                       READY   STATUS    RESTARTS   AGE -kubedoom-kubedoom-chart-676bcc5c9c-xkwpp   1/1     Running   0          68m -``` - -Run the `port-forward` command using your pod name: - - -``` -$  kubectl port-forward  kubedoom-kubedoom-chart-676bcc5c9c-xkwpp 5900:5900 -n kubedoom -Forwarding from 127.0.0.1:5900 -> 5900 -Forwarding from [::1]:5900 -> 5900 -``` - -Everything is ready to play, so you just need to run the VNC viewer command (shown below with output): - - -``` -$  vncviewer viewer localhost:5900 - -TigerVNC Viewer 64-bit v1.10.1 -Built on: 2020-04-09 06:49 -Copyright (C) 1999-2019 TigerVNC Team and many others (see README.rst) -See for information on TigerVNC. - -Mon May 31 11:33:23 2021 - DecodeManager: Detected 64 CPU core(s) - DecodeManager: Creating 4 decoder thread(s) - CConn:       Connected to host localhost port 5900 -``` - -Next, you'll see the password request, so enter it (`idbehold`, as given above). - -![VNC authentication][16] - -(Jess Cherry, [CC BY-SA 4.0][17]) - -Once you are logged in, you should be able to walk around and see your enemies with pod names. - -![Kube Doom pods][18] - -(Jess Cherry, [CC BY-SA 4.0][17]) - -I'm terrible at this game, so I use some cheats to have a little more fun: - - * Type `idspispopd` to walk straight through a wall to get to your army of pods. - * Can't handle the gun? That's cool; I'm bad at it, too. If you type `idkfa` and press the number **5**, you'll get a better weapon. - - - -This is what it looks like when you kill something (I used [k9s][19] for this view). - -![Killing pods in Kube DOOM][20] - -(Jess Cherry, [CC BY-SA 4.0][17]) - -### Final notes - -Because this application requires a cluster-admin role, you have to really pay attention to the names of the pods—you might run into a kube-system pod, and you'd better run away. If you kill one of those pods, you will kill an important part of the system. - -I love this application because it's the quickest gamified way to do chaos engineering. It did remind me of how bad I was at this video game, but it was hilarious to try it. Happy hunting! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/kube-doom - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead_cat-keyboard.png?itok=fuNmiGV- (A cat under a keyboard.) 
-[2]: https://github.com/Alynder/kubedoom -[3]: https://github.com/storax/kubedoom -[4]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[5]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus -[6]: https://opensource.com/article/21/5/total-chaos-litmus -[7]: https://opensource.com/article/21/5/get-meshy-chaos-mesh -[8]: https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey -[9]: https://minikube.sigs.k8s.io/docs/start/ -[10]: https://opensource.com/article/20/5/helm-charts -[11]: https://en.wikipedia.org/wiki/Virtual_Network_Computing -[12]: https://tigervnc.org/ -[13]: https://kind.sigs.k8s.io/ -[14]: https://github.com/Alynder/kubedoom/blob/master/README.md -[15]: mailto:git@github.com -[16]: https://opensource.com/sites/default/files/uploads/vnc-password.png (VNC authentication) -[17]: https://creativecommons.org/licenses/by-sa/4.0/ -[18]: https://opensource.com/sites/default/files/uploads/doom-pods.png (Kube Doom pods) -[19]: https://opensource.com/article/20/5/kubernetes-administration -[20]: https://opensource.com/sites/default/files/uploads/doom-pods_kill.png (Killing pods in Kube DOOM) diff --git a/sources/tech/20210609 Making portable functions across serverless platforms.md b/sources/tech/20210609 Making portable functions across serverless platforms.md deleted file mode 100644 index ff96b42b67..0000000000 --- a/sources/tech/20210609 Making portable functions across serverless platforms.md +++ /dev/null @@ -1,218 +0,0 @@ -[#]: subject: (Making portable functions across serverless platforms) -[#]: via: (https://opensource.com/article/21/6/quarkus-funqy) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Making portable functions across serverless platforms -====== -Quarkus Funqy brings portability to serverless functions. -![Parts, modules, containers for software][1] - -The rising popularity of serverless development alongside the increased adoption of multi- and hybrid-cloud architectures has created a lot of competition among platforms. This gives developers many choices about where they can run functions on serverless platforms—from public managed services to on-premises [Kubernetes][2]. - -If you've read my previous articles about [Java serverless][3], you learned how to get started [developing Java serverless functions][4] with Quarkus and how those serverless functions can be [optimized][5] to run on Kubernetes. So what should you do next to make your serverless functions fit better with the many choices available to you? - -As a clue, think about why the Linux container (Docker, [LXC][6], cri-o) has become so popular: Portability. It's what made containers the de facto packaging technology for moving things from a developer's local machine to Kubernetes environments at scale. It means developers and operators don't need to worry about incompatibility and inconsistency between development and production environments. - -For adopting multi- and hybrid cloud architectures, these container portability benefits should also be considered for serverless function development. Without portability, developers would likely have to learn and use different APIs, command-line interface (CLI) tools, and software development kits (SDKs) for each serverless platform when developing and deploying the same serverless functions across multiple serverless runtimes. 
Developers, who have limited resources (e.g., time, effort, cost, and human resources), would be so overwhelmed by the options that they would find it difficult to choose the best one. - -![Many serverless runtime options][7] - -(Daniel Oh, [CC BY-SA 4.0][8]) - -### Get Funqy the next time you hit a serverless dance floor - -The [Quarkus Funqy][9] extension supports a portable Java API for developers to write serverless functions and deploy them to heterogeneous serverless runtimes, including AWS Lambda, Azure Functions, Google Cloud, and Knative. It is also usable as a standalone service. Funqy helps developers dance on the serverless floor without making code changes. - -Here is a quick example of how to build a portable serverless function with Quarkus Funqy. - -### 1\. Create a Quarkus Funqy Maven project - -Generate a Quarkus project (`quarkus-serverless-func`) to create a simple function with Funqy extensions: - - -``` -$ mvn io.quarkus:quarkus-maven-plugin:1.13.6.Final:create \ -       -DprojectGroupId=org.acme \ -       -DprojectArtifactId=quarkus-serverless-func \ -       -Dextensions="funqy-http" \ -       -DclassName="org.acme.getting.started.GreetingResource" -``` - -### 2\. Run the serverless function locally - -Open the `Funqy.java` file in the `src/main/java/org/acme/getting/started` directory: - - -``` -public class Funqy { - -    private static final [String][10] CHARM_QUARK_SYMBOL = "c"; - -    @Funq (1) -    public [String][10] charm(Answer answer) { (2) -        return CHARM_QUARK_SYMBOL.equalsIgnoreCase(answer.value) ? "You Quark!" : "👻 Wrong answer"; -    } - -    public static class Answer { -        public [String][10] value; (3) -    } -} -``` - -In the code above: - -(1) Annotation makes the method an exposable function based on the Funqy API. The function name is equivalent to the method name (`charm`) by default. -(2) Indicates a Java class (`Answer`) as an input parameter and `String` type for the output. -(3) `value` should be parameterized when the function is invoked. - -**Note**: Funqy does type introspection at build time to speed boot time, so the Funqy marshaling layer won't notice any derived types at runtime. - -Run the function via Quarkus Dev Mode: - - -``` -`$ ./mvnw quarkus:dev` -``` - -The output should look like: - - -``` -__  ____  __  _____   ___  __ ____  ______ - --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ - -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   -\--\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/   -INFO  [io.quarkus] (Quarkus Main Thread) quarkus-serverless-func 1.0.0-SNAPSHOT on JVM (powered by Quarkus x.x.x.) started in 2.908s. Listening on: -INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. -INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, funqy-http, kubernetes] -``` - -Now the function is running in your local development environment. Access the function with a RESTful API: - - -``` -`$ http://localhost:8080/charm?value=s` -``` - -The output should be: - - -``` -`👻 Wrong answer` -``` - -If you pass `value=c` down as a parameter, you will see: - - -``` -`You Quark!` -``` - -### 3\. Choose a serverless platform to deploy the Funqy function - -Now you can deploy the portable function to your preferred serverless platform when you add one of the Quarkus Funqy extensions in the figure below. The advantage is that you will not need to change the code; you should need only to adjust a few configurations, such as function export and target serverless platform. 
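Before looking at the extensions in the figure below, one small configuration detail is worth calling out: if a project ends up containing more than one method annotated with `@Funq`, several of the Funqy runtimes need to be told which function is the entry point. As far as I know, that is what the `quarkus.funqy.export` property in `application.properties` is for; treat the exact property and its behavior on your target platform as something to verify against the Funqy guide rather than a given. A sketch for the function above would look like this:

```
# src/main/resources/application.properties
# Select the @Funq method to expose when the target runtime expects a single entry point.
# "charm" matches the method name defined in Funqy.java above.
quarkus.funqy.export=charm
```

With a single function in the project, as in this example, you can usually omit the property and the lone `@Funq` method is picked up automatically.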
- -![Quarkus Funqy Extensions][11] - -(Daniel Oh, [CC BY-SA 4.0][8]) - -Try to deploy the function using [Knative Serving][12] (if you have installed it in your Kubernetes cluster). Add the following extensions to the Quarkus Funqy project: - - -``` -`$ ./mvnw quarkus:add-extension -Dextensions="kubernetes,container-image-docker"` -``` - -Open the `application.properties` file in the `src/main/resources/` directory. Then add the following variables to configure Knative and Kubernetes resources—make sure to replace `changeit` with your container registry's group name (username in DockerHub): - - -``` -quarkus.container-image.build=true -quarkus.container-image.group=changeit -quarkus.container-image.push=true -quarkus.container-image.builder=docker -quarkus.kubernetes.deployment-target=knative -``` - -Containerize the function, then push it to the external container registry: - - -``` -`$ ./mvnw clean package` -``` - -The output should end with `BUILD SUCCESS`. Then a `knative.yml` file will be generated in the `target/kubernetes` directory. Now you should be ready to create a Knative service with the function using the following command (be sure to log into the Kubernetes cluster and change the namespace where you want to create the Knative service): - - -``` -`$ kubectl create -f target/kubernetes/knative.yml` -``` - -The output should be like this: - - -``` -`service.serving.knative.dev/quarkus-serverless-func created` -``` - -### 4\. Test the Funqy function in Kubernetes - -Get the function's REST API and note its output: - - -``` -$ kubectl get rt -NAME URL READY REASON -quarkus-serverless-func     True -``` - -Access the function quickly using a `curl` command: - - -``` -`$ http://http://quarkus-serverless-func-YOUR_HOST_DOMAIN/charm?value=c` -``` - -You see the same output as you saw locally: - - -``` -`You Quark!` -``` - -**Note**: The function will scale down to zero in 30 seconds because of Knative Serving's default behavior. In this case, the pod will scale up automatically when the REST API is invoked. - -### What's next? - -You've learned how developers can make portable Java serverless functions with Quarkus and deploy them across serverless platforms (e.g., Knative with Kubernetes). Quarkus enables developers to avoid redundancy when creating the same function and deploying it to multiple serverless platforms. My next article in this series will explain how to enable CloudEvents Bind with Java and Knative. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/quarkus-funqy - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software) -[2]: https://opensource.com/article/19/6/reasons-kubernetes -[3]: https://opensource.com/article/21/5/what-serverless-java -[4]: https://opensource.com/article/21/6/java-serverless-functions -[5]: https://opensource.com/article/21/6/java-serverless-functions-kubernetes -[6]: https://www.redhat.com/sysadmin/exploring-containers-lxc -[7]: https://opensource.com/sites/default/files/uploads/choices.png (Many serverless runtime options) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://quarkus.io/guides/funqy -[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string -[11]: https://opensource.com/sites/default/files/uploads/funqyextensions.png (Quarkus Funqy Extensions) -[12]: https://knative.dev/docs/serving/ diff --git a/sources/tech/20210609 What happens when you terminate Kubernetes containers on purpose.md b/sources/tech/20210609 What happens when you terminate Kubernetes containers on purpose.md deleted file mode 100644 index 08199ae5a0..0000000000 --- a/sources/tech/20210609 What happens when you terminate Kubernetes containers on purpose.md +++ /dev/null @@ -1,289 +0,0 @@ -[#]: subject: (What happens when you terminate Kubernetes containers on purpose?) -[#]: via: (https://opensource.com/article/21/6/terminate-kubernetes-containers) -[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -What happens when you terminate Kubernetes containers on purpose? -====== -In the final article in this series about chaos engineering, do some -experiments to learn how changes affect your infrastructure's state. -![x sign ][1] - -In this series celebrating Kubernetes' 11th birthday, I've introduced some great tools for chaos engineering. In the first article, I explained [what chaos engineering is][2], and in the second, I demonstrated how to get your [system's steady state][3] so that you can compare it against a chaos state. In the next four articles, I introduced some chaos engineering tools: [Litmus for testing][4] arbitrary failures and experiments in your Kubernetes cluster; [Chaos Mesh][5], an open source chaos orchestrator with a web user interface; [Kube-monkey][6] for stress-testing your systems by scheduling random termination pods in your cluster; and [Kube DOOM][7] for killing pods while having fun. - -Now I'll wrap up this birthday present by putting it all together. Along with Grafana and Prometheus for monitoring for a steady state on your local cluster, I'll use Chaos Mesh and a small deployment and two experiments to see the difference between steady and not steady, as well as Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19. - -### Configure Minikube - -If you haven't already, [install Minikube][8] in whatever way that makes sense for your environment. 
If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power: - - -``` -$ minikube config set memory 8192 -❗  These changes will take effect upon a minikube delete and then a minikube start -$ minikube config set cpus 6 -❗  These changes will take effect upon a minikube delete and then a minikube start -``` - -Then start and check the status of your system: - - -``` -$ minikube start -😄  minikube v1.14.2 on Debian bullseye/sid -🎉  minikube 1.19.0 is available! Download it: -💡  To disable this notice, run: 'minikube config set WantUpdateNotification false' - -✨  Using the docker driver based on user configuration -👍  Starting control plane node minikube in cluster minikube -🔥  Creating docker container (CPUs=6, Memory=8192MB) ... -🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ... -🔎  Verifying Kubernetes components... -🌟  Enabled addons: storage-provisioner, default-storageclass -🏄  Done! kubectl is now configured to use "minikube" by default -$ minikube status -minikube -type: Control Plane -host: Running -kubelet: Running -apiserver: Running -kubeconfig: Configured -``` - -### Preinstall pods with Helm - -Before moving forward, you'll need to deploy some pods into your cluster. To do this, I generated a simple Helm chart and changed the replicas in my values file from 1 to 8. - -If you need to generate a Helm chart, you can read my article on [creating a Helm chart][9] for guidance. I created a Helm chart named `nginx` and created a namespace to install my chart into using the commands below. - -Create a namespace: - - -``` -`$ kubectl create ns nginx` -``` - -Install the chart in your new namespace with a name: - - -``` -$ helm install chaos-pods nginx -n nginx - -NAME: chaos-pods -LAST DEPLOYED: Sun May 23 10:15:52 2021 -NAMESPACE: nginx -STATUS: deployed -REVISION: 1 -NOTES: -1\. Get the application URL by running these commands: -  export POD_NAME=$(kubectl get pods --namespace nginx -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=chaos-pods" -o jsonpath="{.items[0].metadata.name}") -  export CONTAINER_PORT=$(kubectl get pod --namespace nginx $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") -  echo "Visit to use your application" -  kubectl --namespace nginx port-forward $POD_NAME 8080:$CONTAINER_PORT -``` - -### Monitoring and marinating - -Next, install and set up Prometheus and Grafana [following the steps][10] in the second article in this series. However, you'll need to make make the following changes in the installation: - - -``` -$ kubectl create ns monitoring - -$ helm install prometheus prometheus-community/prometheus -n monitoring - -$ helm install grafana bitnami/grafana -n monitoring -``` - -Now that everything is installed in separate namespaces, set up your dashboards and let Grafana marinate for a couple of hours to catch a nice steady state. If you're in a staging or dev cluster at work, it would be even better to let everything sit for a week or so. - -For this walkthrough, I will use the [K8 Cluster Detail Dashboard][11] (dashboard 10856), which provides various drop-downs with details about your cluster. - -![K8 Cluster Detail Dashboard][12] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -### Test #1: Container killing with Grafana and Chaos Mesh - -Install and configure Chaos Mesh using the [steps][5] in my previous article. Once that is set up, you can add some new experiments to test and observe with Grafana. - -Start by setting up an experiment to kill containers. 
First, look at your steady state. - -![K8 Cluster Detail Dashboard][14] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -Next, make a kill-container experiment pointed at your Nginx containers. I created an `experiments` directory and then the `container-kill.yaml` file: - - -``` -$ mkdir experiments -$ cd experiments/ -$ touch container-kill.yaml -``` - -The file will look like this: - - -``` -apiVersion: chaos-mesh.org/v1alpha1 -kind: PodChaos -metadata: -  name: container-kill-example -  namespace: nginx -spec: -  action: container-kill -  mode: one -  containerName: 'nginx' -  selector: -    labelSelectors: -      'app.kubernetes.io/instance': 'nginx' -  scheduler: -    cron: '@every 60s' -``` - -Once it starts, this experiment will kill an `nginx` container every minute. - -Apply your file: - - -``` -$ kubectl apply -f container-kill.yaml -podchaos.chaos-mesh.org/container-kill-example created -``` - -Now that the experiment is in place, watch it running in Chaos Mesh. - -![Chaos Mesh Dashboard][15] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -You can also look into Grafana and see a notable change in the state of the pods and containers. - -![Grafana][16] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -If you change the kill time and reapply the experiment, you will see even more going on in Grafana. For example, change `@every 60s` to `@every 30s` and reapply the file: - - -``` -$ kubectl apply -f container-kill.yaml -podchaos.chaos-mesh.org/container-kill-example configured -$ -``` - -You can see the disruption in Grafana with two containers sitting in waiting status. - -![Grafana][17] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -Now that you know how the containers reacted, go into the Chaos Mesh user interface and pause the experiment. - -### Test #2: Networking with Grafana and Chaos Mesh - -The next test will work with network delays to see what happens if there are issues between pods. First, grab your steady state from Grafana. - -![Grafana][18] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -Create a `networkdelay.yaml` file for your experiment: - - -``` -`$ touch networkdelay.yaml` -``` - -Then add some network delay details. This example runs a delay in the `nginx` namespace against your namespace instances. The packet-sending delay will be 90ms, the jitter will be 90ms, and the jitter correlation will be 25%: - - -``` -apiVersion: chaos-mesh.org/v1alpha1 -kind: NetworkChaos -metadata: -  name: network-delay-example -  namespace: nginx -spec: -  action: delay -  mode: one -  selector: -    labelSelectors: -      'app.kubernetes.io/instance': 'nginx' -  delay: -    latency: "90ms" -    correlation: "25" -    jitter: "90ms" -  duration: "45s" -  scheduler: -    cron: "@every 1s" -``` - -Save and apply the file: - - -``` -$ kubectl apply -f  networkdelay.yaml -networkchaos.chaos-mesh.org/network-delay-example created -``` - -It should show up in Chaos Mesh as an experiment. - -![Chaos Mesh Dashboard][19] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -Now that it is running pretty extensively using your configuration, you should see an interesting, noticeable change in Grafana. - -![Grafana][20] - -(Jess Cherry, [CC BY-SA 4.0][13]) - -In the graphs, you can see the pods are experiencing a delay. - -Congratulations! You have a more detailed way to keep track of and test networking issues. - -### Chaos engineering final thoughts - -My gift to celebrate Kubernetes' birthday is sharing a handful of chaos engineering tools. 
Chaos engineering has a lot of evolving yet to do, but the more people involved, the better the testing and tools will get. Chaos engineering can be fun and easy to set up, which means everyone—from your dev team to your administration—can do it. This will make your infrastructure and the apps it hosts more dependable. - -Happy birthday, Kubernetes! I hope this series was a good gift for 11 years of being a cool project. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/terminate-kubernetes-containers - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/x_stop_terminate_program_kill.jpg?itok=9rM8i9x8 (x sign ) -[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos -[3]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus -[4]: https://opensource.com/article/21/5/total-chaos-litmus -[5]: https://opensource.com/article/21/5/get-meshy-chaos-mesh -[6]: https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey -[7]: https://opensource.com/article/21/6/chaos-engineering-kubedoom -[8]: https://minikube.sigs.k8s.io/docs/start/ -[9]: https://opensource.com/article/20/5/helm-charts -[10]: https://opensource.com/article/21/6/chaos-grafana-prometheus -[11]: https://grafana.com/grafana/dashboards/10856 -[12]: https://opensource.com/sites/default/files/uploads/k8-cluster-detail-dashboard.png (K8 Cluster Detail Dashboard) -[13]: https://creativecommons.org/licenses/by-sa/4.0/ -[14]: https://opensource.com/sites/default/files/uploads/dashboard-steadystate.png (K8 Cluster Detail Dashboard) -[15]: https://opensource.com/sites/default/files/uploads/chaosmesh-experiment.png (Chaos Mesh Dashboard) -[16]: https://opensource.com/sites/default/files/uploads/grafana-state.png (Grafana) -[17]: https://opensource.com/sites/default/files/uploads/waitingcontainers.png (Grafana) -[18]: https://opensource.com/sites/default/files/uploads/grafana-state2.png (Grafana) -[19]: https://opensource.com/sites/default/files/uploads/chaosmesh-experiment2.png (Chaos Mesh Dashboard) -[20]: https://opensource.com/sites/default/files/uploads/grafana-change.png (Grafana) diff --git a/sources/tech/20210611 How hypertext can establish application state in REST.md b/sources/tech/20210611 How hypertext can establish application state in REST.md deleted file mode 100644 index 2dc4c4627b..0000000000 --- a/sources/tech/20210611 How hypertext can establish application state in REST.md +++ /dev/null @@ -1,109 +0,0 @@ -[#]: subject: (How hypertext can establish application state in REST) -[#]: via: (https://opensource.com/article/21/6/hateoas) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How hypertext can establish application state in REST -====== -The Hypertext As The Engine Of Application State architectural style -supports non-brittle, resilient systems that enable risk-free changes. -![diagram of planning a cloud][1] - -HATEOAS is a difficult-to-pronounce acronym that stands for "[Hypertext As The Engine Of Application State][2]." 
Invented by [Roy Fielding][3] in the year 2000, HATEOAS was proposed as an architectural style for network-based software systems. - -The central concept of this architectural style is [hypertext][4]. With hypertext, we have encoded content that may also imply action. Each action, in turn, implies a change of state. HATEOAS represents the mechanism that can be used to control the transition from one application state to another application state. Its name contains the word Engine, based on the assumption that hypertext could drive the transitions from state to state. - -### Why HATEOAS? - -There are several reasons why HATEOAS may be a desirable architectural style. Two main reasons are: - - 1. Late binding - 2. Uniform interface - - - -#### 1\. Late binding - -Brittle systems are invariably a sign of shoddy engineering. When we discover a brittle system, usually we learn that its constituent components and subsystems are tightly coupled (almost welded together). That tight coupling creates a lot of friction that produces a lot of "heat." No wonder such brittle systems are notorious for defective behavior, which is often perceived as malfunctioning. And those defects are typically very hard to troubleshoot and fix. - -But what causes tight coupling? In most cases, it is early binding. We sometimes refer to early binding as premature optimization (which, as the saying goes, is the root of all evil). So, to avoid designing and building brittle systems, we ought to avoid tight coupling, which in practical terms means we should avoid early binding or premature optimization. - -HATEOAS is a prime example of the extreme late-binding design style. Systems built with HATEOAS style are completely decoupled and not prematurely optimized, which gives them the flexibility to be changed safely at a drop of a hat. - -#### 2\. Uniform interface - -Interfaces between the client and the server act as a unifying agent that obfuscates the need for a client to assume or understand the resource structure. A uniform interface relieves clients from having to understand anything about the servers. - -Also, a uniform interface fully separates identification from interaction. In a uniform interface, a resource that is implemented on the backend is identified by a unique resource identifier (URI). A client interested in the services rendered by the back-end resource only needs to know the starting endpoint (the home URI). A client need not know any details about how to interact with the resource. - -As is also the case with late binding, a uniform interface provides resilient, non-brittle solutions. A system built with HATEOAS style retains the freedom to radically revamp its structure without disturbing its clients in the least. - -### In-band and out-of-band information - -Another important concept related to HATEOAS is in-band vs. out-of-band access to information. If a caller (e.g., a client) needs to manipulate a resource (e.g., a server), the client's intention must somehow be translated into the implementation. If the client knows WHAT they want to do or accomplish, their next concern becomes: HOW to do it. - -There are two ways that this knowledge of how to do something could be implemented: - - * Client needs to go out of their way to obtain the how-to information (out-of-band) - * Client is given the how-to information by the resource (in-band) following the just-in-time communications model - - - -Because HATEOAS is a late-bound, uniform-interface style of design, it serves the how-to information in-band. 
This means a calling client need not learn any details of how to interact with the resource before initiating the interaction. - -In contrast, a traditional remote procedure call (RPC) design hinges on the out-of-band arrangement—a calling client must obtain details needed to interact with the server before initiating the interaction. In other words, it is not sufficient for the calling client to know how to begin the interaction with the server; the client is also expected to know all the necessary details before making precise calls needed for obtaining desired services. - -This upfront knowledge that the calling client must possess before making any calls to the server renders the system extremely brittle. Clients and the server are tightly coupled; the server is not at liberty to modify its API at will and must go the extra mile to maintain backward compatibility. - -Part of the in-band design philosophy of HATEOAS is self-descriptive messages. Clients do not have to know anything about the server state; a self-descriptive message represents the important points that the client needs to continue interacting with the server. - -That arrangement further loosens any possible coupling between the client (the caller) and the server. - -### How does HATEOAS work? - -There is no difference between how the HTML works and how HATEOAS works. When we browse the web, we start from an entry point—a URL. The first step in web browsing consists of instructing the web browser to send the HTTP GET request to the specified URL. - -Upon receiving that HTTP GET request, the back-end resource (the server) replies with an HTTP response. That HTTP response contains both the data and possibly (and most likely) the network operations that can be enacted on that data. These network operations are encoded as hypertext links. If we then click on one of those links, we enact a network operation: the browser sends, on our behalf, another HTTP request (it could be a GET request, or a POST request, etc.). - -The salient point in this description of the mundane web-browsing experience is that we, the clients, don't have to know anything in advance about the structure implemented on the server. All we know is the hypertext links that the server sends to us in the form of a resource representation (the HTML document). - -Replace a web browser with a computer program, and you get the picture of how HATEOAS works. A client (e.g., a computer program) obtains the entry point of the resource (the endpoint of the API). The client then programmatically sends the HTTP GET request to the resource and receives the HTTP response. In the response, the client will find one or more hypertext links. It is those hypertext links that enable the calling program to make the change in the application state. The client makes that change by sending another HTTP request using the in-band details found in the resource representation—the HTML document. - -Step-by-step, the interaction between the client and the server continues in this fashion. - -### What are the advantages of HATEOAS? - -In addition to the above advantages (a non-brittle, resilient system that invites risk-free changes), HATEOAS enables building systems that are: - - * Performant - * Scalable - * Reliable - * Simple to understand - * Transparent - * Portable - - - -These advantages are made possible by the stateless nature of the systems built using the HATEOAS style. 
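To tie the earlier description of how HATEOAS works to something concrete, here is a sketch of what a hypermedia-driven exchange might look like from the command line. The `api.example.com` endpoint, the JSON fields, and the link names are all hypothetical, invented purely for illustration:


```
# ask the entry point for the current state of the resource
$ curl -s https://api.example.com/orders
{
  "orders": [],
  "_links": {
    "create": { "href": "https://api.example.com/orders", "method": "POST" }
  }
}

# follow the "create" link the server handed us in-band
$ curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"item": "book"}' https://api.example.com/orders
{
  "id": 17,
  "status": "pending",
  "_links": {
    "self":   { "href": "https://api.example.com/orders/17", "method": "GET" },
    "cancel": { "href": "https://api.example.com/orders/17", "method": "DELETE" }
  }
}
```

The client never consulted out-of-band documentation to learn that the order can be canceled; the representation it received in-band carries that next possible state transition, exactly as a hyperlink does in a web page.
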
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/hateoas - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q (diagram of planning a cloud) -[2]: https://en.wikipedia.org/wiki/HATEOAS -[3]: https://en.wikipedia.org/wiki/Roy_Fielding -[4]: https://en.wikipedia.org/wiki/Hypertext diff --git a/sources/tech/20210612 How I teach Python on the Raspberry Pi 400 at the public library.md b/sources/tech/20210612 How I teach Python on the Raspberry Pi 400 at the public library.md deleted file mode 100644 index 1480f69fe7..0000000000 --- a/sources/tech/20210612 How I teach Python on the Raspberry Pi 400 at the public library.md +++ /dev/null @@ -1,92 +0,0 @@ -[#]: subject: (How I teach Python on the Raspberry Pi 400 at the public library) -[#]: via: (https://opensource.com/article/21/6/teach-python-raspberry-pi) -[#]: author: (Don Watkins https://opensource.com/users/don-watkins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How I teach Python on the Raspberry Pi 400 at the public library -====== -After a long year of putting plans on hold, declining COVID case numbers -are bringing back community-based programming courses. -![Women programming][1] - -After a long and tough year, I've been looking forward to once again sharing my love of Python and open source software with other people, especially middle and high school students. Before the pandemic, I co-wrote a grant to teach Python programming to middle school students using Raspberry Pi computers. Like many other plans, COVID-19 put mine on hold for over a year. Fortunately, vaccines and the improved health in my state, New York, have changed the dynamic. - -A couple of months ago, once I became fully vaccinated, I offered to self-fund a Raspberry Pi and Python programming course in our local public library system. The [Chautauqua-Cattaraugus Library system][2] accepted my proposal, and the co-central library in Olean, N.Y., offered to fund my program. The library purchased five [Raspberry Pi 400][3] units, Micro-HDMI-to-VGA adapters, and inline power adapters, and the library system's IT department loaned us five VGA monitors. - -With all our equipment needs met, we invited middle school students to enroll for four afternoons of learning and programming fun. - -All the students were socially distanced, each with a new Pi 400 and VGA monitor at their desk. Our class was small, made up of a fourth-grade student and two sixth-grade students. None had a programming background, and their keyboarding skills were rough. However, their innate curiosity and enthusiasm carried the day. - -### Learning and iterating - -We spent the first afternoon assembling the Pi 400s, connecting them to the library's wireless network, and installing the [Mu Python editor][4], which we used for the class. - -![Raspberry Pi 400 equipment][5] - -(Don Watkins, [CC BY-SA 4.0][6]) - -I followed this with a brief introduction to Raspberry Pi OS and how it differs from Windows and macOS computers and offered a brief tutorial on using the Mu editor. 
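If you are preparing machines ahead of a class, Mu can usually also be installed from the terminal on Raspberry Pi OS and other Debian-based systems (the package name below is what I would expect, but it may differ on other distributions):


```
# install the Mu editor from the distribution's repositories
$ sudo apt update
$ sudo apt install mu-editor
```
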
- -Since we were meeting in a public library, I emphasized that the library has books covering the concepts and Python programming code used in the class, especially [_Teach Your Kids to Code_][7] by Dr. Bryson Payne and [_Python for Kids_][8] by Jason Briggs. I created daily handouts for the students to refer to alongside the instruction. I also used my own Raspberry PI 400 connected to a 32" LCD monitor to illustrate the code and programming results. - -![Raspberry Pi 400 setup][9] - -(Don Watkins, [CC BY-SA 4.0][6]) - -I like to use the [turtle module to introduce Python][10] programming. It's always been well received, and the students love the graphics they can create while learning Python basics like variables, [`for` loops][11], lists, and the importance of syntax. - -I learn something new every time I teach, and this was no exception. I especially enjoy watching students iterate on my code examples—some are from books, and others are my own creations. The fourth-grader in our class took this example code and added two more colors and corresponding code to create a six-color spiral. - - -``` -# multicolor spiral -import turtle as t -colors = ["red", "yellow", "blue", "green"] -for x in range(100): -    t.pencolor(colors[x%4]) -    t.circle(x) -    t.left(91) -``` - -![Spiral graphic created in Python][12] - -(Don Watkins, [CC BY-SA 4.0][6]) - -At the end of the four-day course, each student received a Raspberry Pi 400 and a book explaining how to program their computer. They also got a list of free and open source software resources, a reading list of recommended books available in the library, and some open educational resources available on the web. - -### Open learning - -Mark Van Doren said, "the art of teaching is the art of assisting discovery." I saw that play out in this classroom using open source tools. More students need opportunities like this to help them gain a quality education. The Raspberry Pi 400 is a great form factor for teaching and learning. - -The [Olean Library][13] plans to offer another similar course later this year. I encourage you to share your love of free and open source software with your own communities. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/teach-python-raspberry-pi - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G (Women programming) -[2]: https://www.cclsny.org/ -[3]: https://opensource.com/article/21/3/raspberry-pi-400-review -[4]: https://opensource.com/article/18/8/getting-started-mu-python-editor-beginners -[5]: https://opensource.com/sites/default/files/uploads/pi400_library.jpg (Raspberry Pi 400 equipment) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://opensource.com/education/15/9/review-bryson-payne-teach-your-kids-code -[8]: https://nostarch.com/pythonforkids -[9]: https://opensource.com/sites/default/files/uploads/pi400_library-teacher.jpg (Raspberry Pi 400 setup) -[10]: https://opensource.com/article/17/10/python-101#turtle -[11]: https://opensource.com/article/18/3/loop-better-deeper-look-iteration-python -[12]: https://opensource.com/sites/default/files/uploads/pi400-spiral.jpg (Spiral graphic created in Python) -[13]: https://www.oleanlibrary.org/ diff --git a/sources/tech/20210614 13 open source tools for developers.md b/sources/tech/20210614 13 open source tools for developers.md deleted file mode 100644 index 7546d5fd32..0000000000 --- a/sources/tech/20210614 13 open source tools for developers.md +++ /dev/null @@ -1,159 +0,0 @@ -[#]: subject: (13 open source tools for developers) -[#]: via: (https://opensource.com/article/21/6/open-source-developer-tools) -[#]: author: (Nimisha Mukherjee https://opensource.com/users/nimisha) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -13 open source tools for developers -====== -Choose tools that provide maximum flexibility in software integration -and delivery. -![Tools in a cloud][1] - -Modern developers are highly technical, opinionated, passionate, community-focused, driven, polyglot, and most importantly, empowered decision-makers. Today, developers have a say in the products being built and the tools and technologies used to build them. Most importantly, time is precious, and developers and project managers can always benefit from great efficiency. To attain great efficiency, though, you must understand the software lifecycle, and how it can be organized and manipulated. - -The industry is still working on perfecting how a developer's time is spent. We can divide a developer's major tasks into two different "loops": - - * **Inner loop:** These are the most common tasks developers do, the ones that fully utilize their skillsets: code, run, validate, and debug. This is the classical developer loop. - - - -![Inner loop developer tasks][2] - -(Nimisha Mukherjee, [CC BY-SA 4.0][3]) - - * **Outer loop:** This is where a developer's code goes through continuous integration and continuous delivery (CI/CD) and gets deployed to production. On Gitlab and similar platforms, a developer's pull request (PR) gets merged to the main branch, CI/CD kicks in and creates the build, runs the necessary tests, and deploys to the specified environments. This is a DevOps loop. 
- - - -![Outer loop developer tasks][4] - -(Nimisha Mukherjee, [CC BY-SA 4.0][3]) - -Developers should spend most of their effort on inner-loop tasks, driving innovation, and minimal time on the outer loop. - -Understanding the differences between the inner and outer loops can help identify the developer tools that work best for each part of the software lifecycle. - -### Open source inner-loop tools - -Here are some of my favorite open source tools for the _code, run, validate, and debug_ cycle. - -#### Code - - * [Eclipse Che][5] makes Kubernetes development accessible for developer teams. Che provides an in-browser integrated development environment (IDE), allowing developers to code, build, test, and run applications from any machine exactly as they run in production. - * [Visual Studio Code][6] (VS Code) and [VSCodium][7] are open source code editors with support for debugging, syntax highlighting, intelligent code completion, snippets, code refactoring, and embedded Git. - - - -#### Run - - * [OpenShift Do][8] (odo) is a command-line interface for developers that supports fast, iterative development, allowing them to focus on what's most important to them: code. - * [Minishift][9] helps developers run [OKD][10] (the community distribution of Kubernetes) locally by launching a single-node OKD cluster inside a virtual machine. Minishift allows trying out OKD or developing with it, day-to-day, on a local machine. - * Eclipse Che - - - -#### Validate - - * Eclipse Che - * Odo - * [VS Code Dependency Analytics][11] is an open source vulnerability database. - - - -#### Deploy - - * Eclipse Che - * Odo - - - -### Learn more - -There are many workflows to implement a good coding cycle. To get an idea of how developers are using these tools, read Daniel Oh's article about how he uses [Quark for serverless application development][12] and Bryan Son's article about [how his team uses Eclipse Che][13]. - -### Open source outer-loop tools - -There are great open source tools that make it easier to send code through CI/CD and deploy it to production. - -#### CI/CD - - * [Tekton][14] is an open source framework for creating CI/CD systems, allowing developers to build, test, and deploy. - * [Jenkins][15] is a free and open source automation server. It helps automate the parts of software development related to building, testing, and deploying to facilitate CI/CD. - - - -#### Build - - * [Shipwright][16] is an extensible framework for building container images on Kubernetes. - * [Eclipse JKube][17] is a collection of plugins and libraries used to build container images using Docker, Jib, or OpenShift Source-to-Image (S2I) build strategies. - - - -#### Run - - * [CodeReady Containers][18] (CRC) manages a local OpenShift 4.x cluster optimized for testing and development purposes. - - - -#### Monitor - - * [Prometheus][19] provides event monitoring and alerting. - - - -#### Deploy - - * Tekton - * Jenkins - * [Helm][20] is a package manager for Kubernetes. - * [Argo CD][21] is a declarative, GitOps continuous delivery tool for Kubernetes. It makes application deployment and lifecycle management automated, auditable, and easy to understand. - - - -### Learn DevOps - -If you're keen to implement a DevOps strategy, you can get started with Jess Cherry's article on how to [use Minishift and Jenkiins to setup your first pipeline][22]. - -### Make it easy - -Today, developers choose the tools and technologies used in software integration and delivery. 
If you're a developer, then choose open source tools for maximum flexibility. If you're a project manager or architect, choose open source tools to help your developers succeed by working less and getting more done. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/open-source-developer-tools - -作者:[Nimisha Mukherjee][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/nimisha -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud) -[2]: https://opensource.com/sites/default/files/uploads/innerloop.png (Inner loop developer tasks) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/sites/default/files/uploads/outerloop.png (Outer loop developer tasks) -[5]: https://www.eclipse.org/che/ -[6]: https://code.visualstudio.com/ -[7]: https://opensource.com/article/20/6/open-source-alternatives-vs-code#vscodium -[8]: https://docs.openshift.com/container-platform/4.4/cli_reference/developer_cli_odo/understanding-odo.html -[9]: https://www.okd.io/minishift/ -[10]: https://www.okd.io/ -[11]: https://marketplace.visualstudio.com/items?itemName=redhat.fabric8-analytics -[12]: https://opensource.com/article/21/5/edge-quarkus-linux -[13]: https://opensource.com/article/19/10/cloud-ide-che -[14]: https://tekton.dev/ -[15]: https://www.jenkins.io/ -[16]: https://shipwright.io/ -[17]: https://projects.eclipse.org/projects/ecd.jkube -[18]: https://github.com/code-ready/crc -[19]: https://prometheus.io/ -[20]: https://helm.sh/ -[21]: https://argoproj.github.io/argo-cd/ -[22]: https://opensource.com/article/20/11/minishift-linux diff --git a/sources/tech/20210614 Fedora Classroom- RPM Packaging 101.md b/sources/tech/20210614 Fedora Classroom- RPM Packaging 101.md deleted file mode 100644 index c643c165af..0000000000 --- a/sources/tech/20210614 Fedora Classroom- RPM Packaging 101.md +++ /dev/null @@ -1,103 +0,0 @@ -[#]: subject: (Fedora Classroom: RPM Packaging 101) -[#]: via: (https://fedoramagazine.org/fedora-classroom-rpm-packaging-101/) -[#]: author: (Ankur Sinha "FranciscoD" https://fedoramagazine.org/author/ankursinha/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Fedora Classroom: RPM Packaging 101 -====== - -![Fedora classroom on RPM packaging][1] - -Fedora Classroom sessions return with a session on RPM packaging targeted at beginners. - -### About the session - -RPMs are the smallest building blocks of the Fedora Linux system. This session will walk through the basics of building an RPM from source code. You will learn how to set up your Fedora system to build RPMs, how to write a spec file that adheres to the [Fedora Packaging Guidelines][2], and how to use it to generate RPMs for distribution. The session will also provide a brief overview of the complete Fedora packaging pipeline. - -While no prior knowledge of building RPMs or building software from its source code is required, some software development experience will be useful. The hope is to help users learn the skills required to build and maintain RPM packages, and to encourage them to contribute to Fedora by joining the package collection maintainers. 
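For a sense of the workflow involved, a minimal local build on a Fedora system goes roughly like this, assuming the `rpmdevtools` package is installed, the upstream source tarball has been placed in `~/rpmbuild/SOURCES`, and the skeleton spec has been filled in (the `hello` package name is just a placeholder):


```
$ sudo dnf install rpmdevtools    # provides rpmdev-setuptree and rpmdev-newspec
$ rpmdev-setuptree                # creates the ~/rpmbuild/{SPECS,SOURCES,...} layout
$ cd ~/rpmbuild/SPECS
$ rpmdev-newspec hello            # writes a skeleton hello.spec to fill in
$ rpmbuild -ba hello.spec         # builds the source (SRPM) and binary RPMs
```
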
- -### When and where - -The classroom session will be organised on the BlueJeans video platform at 1200 UTC on June 17, 2021 and is expected to last an hour: - - * BlueJeans event URL: - * [Fedora calendar entry][3] (Click to see the event in your local time zone and add it to your calendar application). - - - -### Topics covered in the session - - * The basics of a spec file. - * Source and binary RPMs and how they are built from the spec using rpmbuild. - * A brief introduction to mock and fedpkg. - * The life cycle of a Fedora package. - * How you can join the Fedora package collection maintainers. - - - -### Prerequisites - - * A Fedora installation (Workstation or any lab/spin) - * The following software should be installed and configured: - * **git** - -``` -sudo dnf install git -``` - - * **fedora-packager -** - -``` -sudo dnf install fedora-packager -``` - - * **mock** (configured as per [these instructions][4]) - - - - -### Useful reading - - * [RPM packages explained][5] - * [How RPM packages are made: the spec file][6] - * [How RPM packages are made: the source RPM][7] - - - -### About the instructor - -[Ankur Sinha][8] has been maintaining packages in Fedora for more than a decade and is currently both a sponsor to the package maintainers group, and a [proven packager][9]. Ankur primarily focuses on maintaining neuroscience related software for the [NeuroFedora Special Interest Group][10] and contributes to other parts of the community wherever possible. - -Fedora Classroom is a project aimed at spreading knowledge on Fedora related topics. If you would like to propose a session, feel free to open a ticket [here][11] with the tag _classroom_. If you are interested in taking a proposed session, please let us know and once you take it, you will be awarded the [Sensei][12] Badge too as a token of appreciation. Recordings from the previous sessions can be found [here][13]. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/fedora-classroom-rpm-packaging-101/ - -作者:[Ankur Sinha "FranciscoD"][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/ankursinha/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/06/fedora-magazing-rpm-classroom-816x345.png -[2]: https://docs.fedoraproject.org/en-US/packaging-guidelines/ -[3]: https://calendar.fedoraproject.org/meeting/10002/ -[4]: https://fedoraproject.org/wiki/Using_Mock_to_test_package_builds#How_do_I_use_Mock.3F -[5]: https://fedoramagazine.org/rpm-packages-explained/ -[6]: https://fedoramagazine.org/how-rpm-packages-are-made-the-spec-file/ -[7]: https://fedoramagazine.org/how-rpm-packages-are-made-the-source-rpm/ -[8]: https://fedoraproject.org/wiki/User:Ankursinha -[9]: https://docs.fedoraproject.org/en-US/fesco/Provenpackager_policy/ -[10]: https://neuro.fedoraproject.org -[11]: https://pagure.io/fedora-join/Fedora-Join/issues -[12]: https://badges.fedoraproject.org/badge/sensei/ -[13]: https://www.youtube.com/playlist?list=PL0x39xti0_64FBQ7mcFt7uBXpG8EA7OF1 diff --git a/sources/tech/20210615 Keep track of your IRC chats with ZNC.md b/sources/tech/20210615 Keep track of your IRC chats with ZNC.md deleted file mode 100644 index 73d4fd5f0c..0000000000 --- a/sources/tech/20210615 Keep track of your IRC chats with ZNC.md +++ /dev/null @@ -1,132 +0,0 @@ -[#]: subject: (Keep track of your IRC chats with ZNC) -[#]: via: (https://opensource.com/article/21/6/irc-matrix-bridge-znc) -[#]: author: (John 'Warthog9' Hawley https://opensource.com/users/warthog9) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Keep track of your IRC chats with ZNC -====== -Create a bridge between IRC and Matrix. -![Chat bubbles][1] - -For a bit more than a year, I've been wondering if it is possible to bolt the open source [Matrix][2] communications network to Internet Relay Chat (IRC) in such a way that I can still use my [ZNC][3] IRC bouncer without an extra proliferation of nicknames. The answer, is amusingly, yes. But first, some background. - -### What's IRC? - -IRC has been around since August 1988, and it's been a staple of real-time communications ever since. It's also one of the early open source projects, as the code for the original IRC server was eventually shared. Over the years, it's been quite useful for meeting many developers' real-time communication needs, although not without its own share of drama. However, it has been resilient and is still widely used despite newer options. - -### Enter the bouncer - -ZNC solves a specific problem on IRC: IRC is intentionally a very ephemeral system, so no state is saved. When you log into an IRC server, you get nothing except what is happening right then—nothing before, and once you leave, nothing after. This contrasts with more modern systems that give historical context, scrollback, searchability, etc. - -Some of this can be handled by clients that are left on continuously, but that's not ideal. Enter the IRC bouncer. A bouncer acts as a middleman to the IRC connection. It connects to IRC and can log in on the user's behalf. It can then relay chats back out to the client (or, in many cases, clients). 
This can make it seem like a user is always on, which gives some context. - -Many folks who use IRC use either a bouncer or a client that runs long-term to keep that context going. ZNC is a relatively popular and well-understood bouncer for IRC. Other services like [IRCCloud][4] can provide this and other features bolted around IRC to make the experience more pleasant and usable. - -### Building bridges - -Matrix is a newer standard that isn't really a program or a codebase. It's actually a protocol definition that lends itself particularly well to bridging other protocols and provides a framework for real-time encrypted chat. One of its reference implementations is called Synapse, and it happens to be a pretty solid base from which to build. It has a rich set of prebuilt [bridges][5], including Slack, Gitter, XMPP, and email. While not all features translate everywhere, the fact that so many good bridges exist speaks to a great community and a robust protocol. - -### The crux of the matter - -I've been on IRC for 26 or 27 years; my clients are set up the way I like, and I'm used to interacting with it in certain ways on certain systems. This is great until I want to start interfacing with IRC when I have Matrix, [Mattermost][6], [Rocket.Chat][7], or other systems running. Traditionally, this meant I ended up with an extra nickname every time I logged into IRC. After a while, username[m], username[m]1, username[m]2, and so forth start looking old. Imagine everyone trying to do this, and you understand that this eventually gets untenable. - -I've been running a Matrix server with bridges. So why can't I bridge ZNC into Matrix and get the best of all worlds? - -It's doable with some prerequisites and prep work (which I won't cover in detail, but there's documentation out there should you wish to set this up for yourself). - - * You need a Matrix server, I'm using [Synapse][8], and it's what I'm going to assume going forward. You will also need admin privileges and access to the low-level system. - * You need a [ZNC server][3] up and running or a bouncer that acts like ZNC (although your mileage will vary if you aren't using ZNC). You just need a ZNC account; you don't need admin privileges. - * You need a copy of Heisenbridge, an IRC bridge for Matrix that works differently from a normal IRC bridge. It's possible to run both simultaneously; I am, and the [Heisenbridge README][9] will help you do the same. You'll likely want to run Heisenbridge on the same system you're running Synapse, although it's not required. - - - -I'll assume you have Synapse and a working IRC bouncer set up and working. Now comes the fun part: bolting Heisenbridge into place. Follow the Heisenbridge install guide, except before you restart Synapse and start Heisenbridge, you'll want to make a couple of small changes to the configuration file generated during setup. That config file will look something like this: - - -``` -id: heisenbridge -url: -as_token: alongstringtoken -hs_token: anotherlongstringtoken -rate_limited: false -sender_localpart: heisenbridge -namespaces: - users: - - regex: '@irc_.*' -   exclusive: true - aliases: [] - rooms: [] -``` - - * Change the port it will use because `9898` is also preferred by other bridges. I chose `9897`. As long as it is the same in Synapse and the bridge, it doesn't matter what you use. - * In the `namespaces` section, take note of the regex for the users. 
The `matrix-appservice-irc` system uses the same regex, and having both of them run in the same namespace causes issues. I changed mine from `@irc_` to `@hirc`. - * You need to add `@heisenbridge:your.homeserver.tld` to the admin list on your server. The easiest way to do this is to start up Heisenbridge once, turn it off, and then edit the database to give the user admin privileges (i.e., set `admin=1` on that user). Then restart Heisenbridge. - - - -My updated config file looks like this: - - -``` -id: heisenbridge -url: -as_token: alongstringtoken -hs_token: anotherlongstringtoken -rate_limited: false -sender_localpart: heisenbridge -namespaces: - users: - - regex: '@hirc_.*' -   exclusive: true - aliases: [] - rooms: [] -``` - -Then, restart Synapse, start Heisenbridge, and go from there. I started mine using: - - -``` -`python3 -m heisenbridge -c /path/to/heisenbridge.yaml -p 9897` -``` - -Next, talk to the Heisenbridge user on your home server and set up a network and a server for your bouncer. - -If you want to add a server, there are some options that aren't documented. If you want to add a server name and host, issue: - - -``` -`addserver networkname hostname portnumber --tls` -``` - -Open the network as your user. You'll be invited to a room where you can set the password for the network login (this is likely needed for ZNC), and then you can connect. - -> **Security warning:** The password will be stored in clear text, so don't use passwords you don't mind being stored this way, and don't do this on machines you don't trust. - -After you hit **Connect**, you should get a flurry of activity as your IRC bouncer pushes its state into Matrix. That should do it! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/irc-matrix-bridge-znc - -作者:[John 'Warthog9' Hawley][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/warthog9 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles) -[2]: https://matrix.org/ -[3]: https://wiki.znc.in/ZNC -[4]: https://www.irccloud.com/ -[5]: https://matrix.org/bridges/ -[6]: https://mattermost.com/ -[7]: http://rocket.chat/ -[8]: https://matrix.org/docs/projects/server/synapse -[9]: https://github.com/hifi/heisenbridge diff --git a/sources/tech/20210616 Set up a service mesh on Istio.md b/sources/tech/20210616 Set up a service mesh on Istio.md deleted file mode 100644 index 637687bb8e..0000000000 --- a/sources/tech/20210616 Set up a service mesh on Istio.md +++ /dev/null @@ -1,139 +0,0 @@ -[#]: subject: (Set up a service mesh on Istio) -[#]: via: (https://opensource.com/article/21/6/service-mesh-serverless) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Set up a service mesh on Istio -====== -A quick example of going serverless on Knative. -![Net catching 1s and 0s or data in the clouds][1] - -Service mesh and serverless deployment models represent the next phase in the evolution of microservice architectures. 
Service mesh enables developers to focus on business feature development rather than managing non-functional microservices capabilities such as monitoring, tracing, fault tolerance, and service discovery. - -[Open source service mesh][2] projects, including [Istio][3], [LinkerD][4], and [Kuma][5], use a sidecar, a dedicated infrastructure layer built right into an app, to implement service mesh functionalities. So, for example, developers can improve monitoring and tracing of cloud-native microservices on a distributed networking system using [Jaeger to build an Istio service mesh][6]. - -![CNCF Service Mesh Landscape][7] - -CNCF Service Mesh Landscape (Source: [CNCF][2]) - -In this next phase of implementing service mesh in microservices, developers can advance their serverless development using an event-driven execution pattern. It's not just a brand-new method; it also tries to modernize business processes from 24x7x365 uptime to on-demand scaling. Developers can leverage the traits and benefits of serverless deployment by using one of the [open source serverless projects][8] shown below. For example, [Knative][9] is a faster, easier way to develop serverless applications on Kubernetes platforms. - -![CNCF Serverless Landscape][10] - -CNCF Serverless Landscape (Source: [CNCF][8]) - -Imagine combining service mesh and serverless for more advanced cloud-native microservices development and deployment. This combined architecture allows you to configure additional networking settings, such as custom domains, mutual Transport Layer Security (mTLS) certificates, and JSON Web Token authentication. - -Here is a quick example of setting up service mesh on Istio with serverless on Knative Serving. - -### 1\. Add Istio with sidecar injection - -When you install the Istio service mesh, you need to set the `autoInject: enabled` configuration for automatic sidecar injection: - - -``` -    global: -      proxy: -        autoInject: enabled -``` - -If you'd like to learn more, consult Knative's documentation about [installing Istio without and with sidecar injection][11]. - -### 2\. Enable a sidecar for mTLS networking - -To use mTLS network communication between a `knative-serving` namespace and another namespace where you want the application pod to be running, enable a sidecar injection: - - -``` -`$ kubectl label namespace knative-serving istio-injection=enabled` -``` - -You also need to configure `PeerAuthentication` in the `knative-serving namespace`: - - -``` -cat <<EOF | kubectl apply -f - -apiVersion: "security.istio.io/v1beta1" -kind: "PeerAuthentication" -metadata: -  name: "default" -  namespace: "knative-serving" -spec: -  mtls: -    mode: PERMISSIVE -EOF -``` - -If you've installed a local gateway for Istio service mesh and Knative, the default cluster gateway name will be `knative-local-gateway` for the Knative service and application deployment. - -### 3\. Deploy an application for a Knative service - -Create a Knative service resource YAML file (e.g., `myservice.yml`) to enable sidecar injection for a Knative service. 
- -Add the `sidecar.istio.io/inject="true"` annotation to the service resource: - - -``` -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: -  name: hello-example-1 -spec: -  template: -    metadata: -      annotations: -        sidecar.istio.io/inject: "true" (1) -        sidecar.istio.io/rewriteAppHTTPProbers: "true" (2) -    spec: -      containers: -      - image: docker.io/sample_application (3) -        name: container -``` - -In the code above: - -(1) Adds the sidecar injection annotation. -(2) Enables JSON Web Token (JWT) authentication. -(3) Replace the application image with yours in an external container registry (e.g., DockerHub, Quay.io). - -Apply the Knative service resource above: - - -``` -`$ kubectl apply -f myservice.yml` -``` - -Note: Be sure to log into the right namespace in the Kubernetes cluster to deploy the sample application. - -### Conclusion - -This article explained the benefits of service mesh and serverless deployment for the advanced cloud-native microservices architecture. You can evolve existing microservices to service mesh or serverless step-by-step, or you can combine them to handle more advanced application implementation with complex networking settings on Kubernetes. However, this combined architecture is still in an early stage due to the architecture's complexity and lack of use cases. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/service-mesh-serverless - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds) -[2]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category -[3]: https://istio.io/docs/concepts/what-is-istio/ -[4]: https://linkerd.io/ -[5]: https://kuma.io/ -[6]: https://opensource.com/article/19/3/getting-started-jaeger -[7]: https://opensource.com/sites/default/files/uploads/cncf-service-mesh-landscape.png (CNCF Service Mesh Landscape) -[8]: https://landscape.cncf.io/serverless?category=service-mesh&grouping=category&zoom=200 -[9]: https://opensource.com/article/19/4/enabling-serverless-kubernetes -[10]: https://opensource.com/sites/default/files/uploads/cncf-serverless-landscape2.png (CNCF Serverless Landscape) -[11]: https://knative.dev/docs/install/installing-istio/#installing-istio-without-sidecar-injection diff --git a/sources/tech/20210617 Refactor your applications to Kubernetes.md b/sources/tech/20210617 Refactor your applications to Kubernetes.md deleted file mode 100644 index 4e41b3bb01..0000000000 --- a/sources/tech/20210617 Refactor your applications to Kubernetes.md +++ /dev/null @@ -1,190 +0,0 @@ -[#]: subject: (Refactor your applications to Kubernetes) -[#]: via: (https://opensource.com/article/21/6/tackle-diva-kubernetes) -[#]: author: (Yasu Katsuno https://opensource.com/users/yasu-katsuno) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Refactor your applications to Kubernetes -====== -Tackle-DiVA helps developers understand database operations and -transaction processes inside applications. 
-![Tips and gears turning][1] - -Application modernization developers must be able to understand database operations and transaction processes inside applications precisely. [Tackle-DiVA][2] (Data-intensive Validity Analyzer) is an open source data-centric Java application analysis tool in the [Konveyor Tackle project][3] that aims at refactoring applications to Kubernetes. - -This article gives an overview of Tackle-DiVA and presents example instructions and analysis results. - -### What is Tackle-DiVA? - -Tackle-DiVA is built using Java and Python and operated using a command-line interface. It imports target Java application source files and provides analysis results as files. - -![Tackle-DiVA operation][4] - -(Yasuharu Katsuno, [CC BY-SA 4.0][5]) - -Breaking down this diagram: - - * **Service entry inventory** analysis extracts a list of Java classes for implementing public APIs. - * **Database inventory** analysis exports a list of database tables operated by an application. - * **Transaction inventory** extracts a set of transaction processes. - * **Code-to-database dependency** analyzes which Java class operates which database table. - * The **database-to-database** and **transaction-to-transaction dependency** analyses find clues for transforming parallel executions. - * Finally, **transaction refactoring recommendation** analysis shows parallel executable transactions from original sequential executions. - - - -### Try it out! - -It is easy to get started with Tackle-DiVA. It makes full use of [Docker][6] containers, and the only prerequisite is a Docker-runnable environment, such as RedHat Enterprise Linux, Ubuntu, or macOS. - -Once you have Docker available on your machine, run: - - -``` -$ cd /tmp -$ git clone && tackle-diva -$ docker build . -t diva -``` - -This builds Tackle-DiVA and packs it as a Docker image. Tackle-DiVA is now ready to use on your machine. - -The next step is to prepare source codes of your target Java applications. I'll use the [DayTrader][7] application as an example: - - -``` -$ cd /tmp -$ git clone -``` - -The final step is to execute the `diva_docker` command by attaching the full directory path: - - -``` -$ cd /tmp/tackle-diva/distrib/bin/ -$ ./diva_docker /tmp/sample.daytrader7/ -``` - -This creates the `tackle-diva/distrib/output` directory and stores the analysis result files: - - -``` -$ ls /tmp/tackle-diva/distrib/output -contexts.yml            transaction.json        transaction_summary.dot -database.json           transaction.yml         transaction_summary.pdf -``` - -### Explore the analysis results - -Take a look at some analysis results for the DayTrader application. - -The **service entry inventory** result is stored in the `contexts.yml` file. It finds that the `TradeAppServlet.init class/method` plays a key role in service entries for the `login` and `register` actions: - - -``` -\- entry: -   methods: -  - "com.ibm.websphere.samples.daytrader.web.TradeAppServlet.init" - http-param: -   action: -  - "login" -\- entry: -   methods: -  - "com.ibm.websphere.samples.daytrader.web.TradeAppServlet.init" - http-param: -   action: -  - "register" -``` - -The **database inventory** analysis exports six database tables in the `database.json` file. 
These tables are used in the DayTrader application: - - -``` -{ - "/app": [ -   "orderejb", -   "holdingejb", -   "quoteejb", -   "accountejb", -   "keygenejb", -   "accountprofileejb" - ] -} -``` - -The **transaction inventory** analysis result is dumped into the `transaction.json` and `.yml` files, but it's better to check the `transaction_summary.pdf` file when looking through transactions. The following transaction consists of six SQL operations to two database tables: `holdingejb` and `orderejb`: - -![Tackle-DiVA transaction inventory][8] - -(Yasuharu Katsuno, [CC BY-SA 4.0][5]) - -The `transaction.json` and `.yml` files also contain **code-to-database dependency** analysis results. The following shows how the TradeDirect class invokes query operations to two database tables, `accountprofileejb` and `accountejb`: - - -``` -"stacktrace" : [ -  ... -  { -  "method" : "<src-method: < Source, -              Lcom/ibm/websphere/samples/daytrader/direct/TradeDirect, -              getStatement(Ljava/sql/Connection;Ljava/lang/String;) -              Ljava/sql/PreparedStatement; >>", -  "file" : "/app/daytrader-ee7-ejb/src/ -            main/java/com/ibm/websphere/ -            samples/daytrader/direct/TradeDirect.java", -  "position" : "TradeDirect.java [1935:15] -> [1935:41]" -  } -], -"sql" : "select * from accountprofileejb ap where ap.userid = ( -            select profile_userid from accountejb a where a.profile_userid=?)" -``` - -The **database-to-database dependency** analysis result is located in the `transaction_summary.dot `and `.pdf` files. The `accoutprofileejb` and `accoutejb` database tables have a mutual-query relationship: - -![Tackle-DiVA database-to-database dependency][9] - -(Yasuharu Katsuno, [CC BY-SA 4.0][5]) - -The **transaction-to-transaction dependency** analysis results are found in the `transaction_summary.dot` and `.pdf` files. Two transactions have a dependency on the `orderejb` database table. The upper transaction updates the table, and the lower transaction queries it: - -![Tackle-DiVA transaction-to-transaction dependency][10] - -(Yasuharu Katsuno, [CC BY-SA 4.0][5]) - -Finally, parallel executable transactions are shown in the `transaction_summary.dot` and `.pdf` files, resulting from the **transaction refactoring recommendation** analysis. In this example, two transactions in the lower part can be executed in parallel after the upper transaction processing completes, which helps keep data consistency due to no transaction dependencies: - -![Tackle-DiVA transaction refactoring recommendation][11] - -(Yasuharu Katsuno, [CC BY-SA 4.0][5]) - -### Learn more - -To learn more about application refactoring, check out the [Konveyor Tackle site][12], join the community, and access the source code on [GitHub][2]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/tackle-diva-kubernetes - -作者:[Yasu Katsuno][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/yasu-katsuno -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning) -[2]: https://github.com/konveyor/tackle-diva -[3]: https://www.konveyor.io/tackle -[4]: https://opensource.com/sites/default/files/uploads/tackle-diva_operation.png (Tackle-DiVA operation) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/resources/what-docker -[7]: https://github.com/WASdev/sample.daytrader7 -[8]: https://opensource.com/sites/default/files/uploads/tackle-diva_transaction-inventory.png (Tackle-DiVA transaction inventory) -[9]: https://opensource.com/sites/default/files/uploads/tackle-diva_dbtodb.png (Tackle-DiVA database-to-database dependency) -[10]: https://opensource.com/sites/default/files/uploads/tackle-diva_ttot.png (Tackle-DiVA transaction-to-transaction dependency) -[11]: https://opensource.com/sites/default/files/uploads/tackle-diva_transaction-refactoring.png (Tackle-DiVA transaction refactoring recommendation) -[12]: https://github.com/konveyor/tackle diff --git a/sources/tech/20210618 DevSecOps- An open source story.md b/sources/tech/20210618 DevSecOps- An open source story.md deleted file mode 100644 index f8429bd53e..0000000000 --- a/sources/tech/20210618 DevSecOps- An open source story.md +++ /dev/null @@ -1,106 +0,0 @@ -[#]: subject: (DevSecOps: An open source story) -[#]: via: (https://opensource.com/article/21/6/open-sourcing-devsecops) -[#]: author: (Will Kelly https://opensource.com/users/willkelly) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -DevSecOps: An open source story -====== -DevSecOps brings culture changes, frameworks, and tools into open source -software. To understand DevSecOps, you must understand its relationship -with open source. -![A lock on the side of a building][1] - -Recent supply chain breaches, plus President Biden's new [Cybersecurity executive order][2], are bringing renewed attention to DevSecOps' value for the enterprise. DevSecOps brings culture changes, frameworks, and tools into open source software (OSS). To understand DevSecOps, you must understand its relationship with OSS. - -### What is DevSecOps? - -In its purest form, DevOps (which is an amalgamation of development and operations) is a methodology for breaking down the traditional silos between programmers and system administrators during the software delivery lifecycle. Corporations and government agencies adopt DevOps for various reasons, including improving software delivery velocity to serve customers better. - -DevSecOps adds security into DevOps, further refining the concept to address code quality, security, and reliability assurance through automation, enabling continuous security and compliance. Organizations seeking to comply with Sarbanes Oxley (SOX), Payment Card Industry Data Security Standard (PCI DSS), FedRAMP, and similar programs are candidates for implementing DevSecOps. 
- -For example, a federal government agency seeking [FedRAMP compliance][3] should use DevSecOps, because it enables them to bake security automation into each stage of their software development process. Likewise, a healthcare institution entrusted with sensitive personal healthcare information (PHI) needs DevSecOps to ensure its cloud applications meet HIPAA compliance requirements. - -The more you move security mitigation to the left to tackle these issues in development, the more money you save. You also avoid potential negative headlines because your teams don't have to respond to issues in production, where remediation costs can soar way higher than if you caught them in your development environment. - -You can treat the move from DevOps to DevSecOps as another step in the DevOps journey. But it's more like a transformation for your development organization and your entire business. Here's a typical framework: - - 1. **Analyze, communicate, and educate:** This includes analyzing your development process maturity; defining DevSecOps for your organization; and fostering a DevSecOps culture with continuous feedback and interaction, team autonomy, and automation and architecture. - 2. **Integrate security into your DevOps lifecycle:** Ensure your DevOps and security teams work together. - 3. **Introduce automation into your DevOps lifecycle:** Start on small dev projects and gradually expand your automation strategy. - 4. **Collaborate on security changes to your DevOps toolchains:** Get your development and security teams working jointly on projects to harden your DevOps toolchain. - 5. **Execute on DevSecOps:** Get your teams fully engaged with your DevSecOps toolchains and new processes. - 6. **Encourage continuous learning and iteration:** Offer your developers and sysadmins training and feedback mechanisms to support developer performance and the health of your toolchains. - - - -We're at a unique point in the history of software development, where the need to increase security and speed software development velocity is at a crossroads. While DevOps has done a lot to increase velocity, there was always more to do. - -### Growth of DevSecOps - -The growth of DevSecOps has been visible in compliance and security-conscious arenas. For example, it has a growing following inside the security-conscious US Department of Defense. Projects such as [Platform One][4] are setting an example of how DevSecOps practices can protect open source and cloud technologies in the most security-conscious government missions. - -DevSecOps has a 20% to 50% penetration within industry, according to [Gartner's Hype Cycle for Agile and DevOps, 2020][5]. The pandemic has acted as a catalyst for DevSecOps as organizations have moved application development to the cloud. - -### Challenges of DevSecOps - -Even if you treat DevSecOps as another step in your DevOps journey, you can expect changes to your toolchain, roles on your DevOps and security teams, and how your groups interact. Over 60% of the respondents to [GitLab's 2021 Global DevSecOps Survey][6] report new roles and responsibilities because of DevOps, so prepare your people upfront and keep surprises to a minimum. - -There is a variety of open source DevSecOps tools you can adopt to build out your DevOps pipeline, including: - - * [Alerta][7] consolidates and deduplicates alerts from multiple sources to provide quick visualizations. It integrates with Prometheus, Riemann, Nagios, and other monitoring tools and services for developers. 
You can use Alerta to customize alerts to meet your requirements. - * [StackStorm][8] offers event-driven automation providing scripted remediations and responses. Some users affectionately call it the "[IFTTT][9] for ops." - * [Grafana][10] allows you to create custom dashboards that aggregate all relevant data to visualize and query security data. - * [OWASP Threat Dragon][11] is a web-based tool that offers system diagramming and a rules engine for modeling and mitigating threats automatically. Threat Dragon touts an easy-to-use interface and seamless integration with other software development tools. - - - -DevSecOps brings a culture, much in the same way that DevOps does. Fostering a DevSecOps culture is about putting security first and making it everybody's job. DevSecOps organizations need to go beyond the mandatory corporate-wide online security training with canned dialogue and bring security into development and business processes. - -### DevSecOps and open source risk mitigation - -Businesses and even government agencies use as much as [90% open source code][12]. That sometimes accounts for hundreds of discrete libraries in a single application. There's no doubt that OSS saves DevOps teams time and money, but it may take a DevSecOps security model to mitigate OSS risk and licensing complexities. - -Forty-six percent of respondents to Synopsys' [DevSecOps Practices and Open Source Management in 2020][13] survey said that media coverage of open source issues affects how they implement controls in their OSS projects. Continuing coverage of the recent supply chain breaches are amping up tech leaders' concerns about the stringency of their controls. - -OSS risk mitigation strategies and DevSecOps go together in many ways, such as: - - * Begin generating a software bill of materials (SBOM) as a quality gate before OSS enters your software supply chain. - * Give [OSS procurement][14] the same attention as you do the vetting, purchase, and intake of enterprise software by bringing in talent from your development, security, and corporate back-office teams. You can adapt your DevSecOps lifecycle to factor in your OSS procurement strategy. - - - -### Final thoughts - -DevSecOps is a noisy topic right now. Plenty of marketers are trying to put their spin on defining it to sell more products to commercial and public-sector enterprises. Even so, the relationship between OSS and DevSecOps remains clean because DevSecOps tools and strategies offer a security gate to bring OSS into the software supply chain and your DevSecOps pipeline while maintaining security and compliance from the first step in the process. 
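To make the "SBOM as a quality gate" idea mentioned above concrete, the gate itself can be a very small script in your pipeline. The sketch below assumes a CycloneDX-style JSON SBOM with a top-level `components` array and SPDX license identifiers; other SBOM formats use different key names, so treat the field names here as placeholders and adapt them to whatever your SBOM tooling actually emits. The deny list is only an example policy, not a recommendation.

```
#!/usr/bin/env python3
"""Fail a pipeline step if an SBOM lists components under disallowed licenses."""
import json
import sys

# Example policy only -- adjust to your organization's license rules.
DENY_LIST = {"AGPL-3.0-only", "AGPL-3.0-or-later"}


def licenses_of(component):
    """Best-effort extraction of license identifiers from one component entry."""
    found = []
    for entry in component.get("licenses", []):
        license_info = entry.get("license", {})
        for key in ("id", "name"):
            if key in license_info:
                found.append(license_info[key])
                break
    return found


def main(path):
    with open(path) as handle:
        sbom = json.load(handle)

    violations = []
    for component in sbom.get("components", []):
        denied = [lic for lic in licenses_of(component) if lic in DENY_LIST]
        if denied:
            violations.append((component.get("name", "<unnamed>"), denied))

    for name, denied in violations:
        print(f"DENIED: {name}: {', '.join(denied)}")

    # A non-zero exit code fails the pipeline stage that runs this check.
    return 1 if violations else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "sbom.json"))
```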
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/open-sourcing-devsecops - -作者:[Will Kelly][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/willkelly -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building) -[2]: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/ -[3]: https://www.fedramp.gov/faqs/ -[4]: https://software.af.mil/team/platformone/ -[5]: https://www.gartner.com/en/documents/3987588/hype-cycle-for-agile-and-devops-2020 -[6]: https://about.gitlab.com/developer-survey/ -[7]: https://alerta.io/ -[8]: https://stackstorm.com/ -[9]: https://ifttt.com/ -[10]: https://grafana.com/ -[11]: https://www.owasp.org/index.php/OWASP_Threat_Dragon -[12]: https://www.contrastsecurity.com/security-influencers/how-to-identify-remediate-oss-library-risks -[13]: https://www.synopsys.com/software-integrity/resources/analyst-reports/devsecops-practices-open-source-management.html -[14]: https://thenewstack.io/how-to-standardize-open-source-procurement-and-lower-risk-without-slowing-your-developers/ diff --git a/sources/tech/20210618 Use this nostalgic text editor on FreeDOS.md b/sources/tech/20210618 Use this nostalgic text editor on FreeDOS.md deleted file mode 100644 index 384e6f9a9b..0000000000 --- a/sources/tech/20210618 Use this nostalgic text editor on FreeDOS.md +++ /dev/null @@ -1,191 +0,0 @@ -[#]: subject: (Use this nostalgic text editor on FreeDOS) -[#]: via: (https://opensource.com/article/21/6/edlin-freedos) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Use this nostalgic text editor on FreeDOS -====== -Edlin is a joy to use when I want to edit text the "old school" way. -![Old UNIX computer][1] - -In the very early days of DOS, the standard editor was a no-frills _line editor_ called Edlin. Tim Paterson wrote the original Edlin for the first version of DOS, then called 86-DOS and later branded PC-DOS and MS-DOS. Paterson has commented that he meant to replace Edlin eventually, but it wasn't until ten years later that MS-DOS 5 (1991) replaced Edlin with Edit, a full-screen editor. - -You may know that FreeDOS is an open source DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. FreeDOS has very good compatibility with MS-DOS, and the "Base" package group includes those utilities and programs that replicate the behavior of MS-DOS. One of those classic programs is an open source implementation of the venerable Edlin editor; Edlin is distributed under the GNU General Public License version 2. - -Written by Gregory Pietsch, Edlin is a well-designed, portable editor. You can even compile Edlin on Linux. 
As Gregory described Edlin in the free ebook 23 Years of FreeDOS, “The top tier parses the input and calls the middle tier, a library called `edlib`, which calls the string and array-handling code to do the dirty work.” But aside from its technical merits, I find Edlin is a joy to use when I want to edit text the "old school" way. - -FreeDOS 1.3 RC4 includes Edlin 2.18. That's actually one revision out of date, but you can download [Edlin 2.19][2] from the FreeDOS files archive on [Ibiblio][3]. You'll find two files there—_edlin-2.19.zip_ contains the source code, and _edlin-219exe.zip_ is just the DOS executable. Download the _edlin-219exe.zip_ file, and extract it to your FreeDOS system. I've unzipped my copy in `C:\EDLIN`. - -Edlin takes a little practice to "get into" it, so let's edit a new file to show a few common actions in Edlin. - -### A walkthrough - -Start editing a file by typing `EDLIN` and then the name of the file to edit. For example, to edit a C programming source file called `HELLO.C`, you might type: - - -``` -`C:\EDLIN> edlin hello.c` -``` - -I've typed the FreeDOS commands in all lowercase here. FreeDOS is actually _case insensitive_, so you can type commands and files in uppercase or lowercase. Typing `edlin` or `EDLIN` or `Edlin` would each run the Edlin editor. Similarly, you can identify the source file as `hello.c` or `HELLO.C` or `Hello.C`. - - -``` -C:\EDLIN> edlin hello.c -edlin 2.19, copyright (c) 2003 Gregory Pietsch -This program comes with ABSOLUTELY NO WARRANTY. -It is free software, and you are welcome to redistribute it -under the terms of the GNU General Public License -- either -version 2 of the license, or, at your option, any later -version. - -hello.c: 0 lines read -* -``` - -Once inside Edlin, you'll be greeted by a friendly `*` prompt. The interface is pretty minimal; no shiny "menu" or mouse support here. Just type a command at the `*` prompt to start editing, revise lines, search and replace, save your work, or exit the editor. - -Since this is a new file, we'll need to add new lines. We'll do this with the _insert_ command, by typing `i` at the `*` prompt. The Edlin prompt changes to `:` where you'll enter your new text. When you are done adding new text, type a period (`.`) on a line by itself. - - -``` -*i - : #include <stdio.h> - : - : int - : main() - : { - :   puts("Hello world"); - : } - : . -* -``` - -To view the text you've entered so far, use the _list_ command by entering `l` at the `*` prompt. Edlin will display lines one screenful at a time, assuming 25 rows on the display. But for this short "Hello world" program, the source code fits on one screen: - - -``` -*l -1: #include <stdio.h> -2: -3: int -4: main() -5: { -6:   puts("Hello world"); -7:*} -* -``` - -Did you notice the `*` on line 7, the last line in the file? That's a special mark indicating your place in the file. If you inserted new text in the file, Edlin would add it at this location. - -Let's update the C source file to return a code to the operating system. To do that, we'll need to add a line _above_ line 7. Since that's where Edlin has the mark, we can use `i` to insert next text before this line. Don't forget to enter `.` on a line by itself to stop entering the new text. - -By listing the file contents afterwards, you can see that we inserted the new text in the correct place, before the closing "curly brace" in the program. - - -``` -*i - :   return 0; - : . 
-*l -1: #include <stdio.h> -2: -3: int -4: main() -5: { -6:   puts("Hello world"); -7:   return 0; -8:*} -* -``` - -But what if you need to edit a single line in the file? At the `*` prompt,simply type the line number that you want to edit. Edlin works one line at a time, so you'll need to re-enter the full line. In this case, let's update the `main()` function definition to use a slightly different programming syntax. That's on line 4, so type `4` at the prompt, and re-type the line in full. - -Listing the file contents afterwards shows the updated line 4. - - -``` -*4 -4:*main() -4: main(void) -*l -1: #include <stdio.h> -2: -3: int -4:*main(void) -5: { -6:   puts("Hello world"); -7:   return 0; -8: } -* -``` - -When you've made all the changes you need to make, don't forget to save the updated file. Enter `w` at the prompt to _write_ the file back to disk, then use `q` to _quit_ Edlin and return to DOS. - - -``` -*w -hello.c: 8 lines written -*q -C:\EDLIN> -``` - -### Quick reference guide - -That walkthrough shows the basics of using Edlin to edit files. But Edlin does more than just "insert, edit, and save." Here's a handy cheat sheet showing all the Edlin functions, where _text_ indicates a text string, _filename_ is the path and name of a file, and _num_ is a number (use `.` for the current line number, `$` for the last line number). - -`?` | Show help ----|--- -_num_ | Edit a single line -`a` | Append a line below the mark -[_num_]`i` | Insert new lines before the mark -[_num_][`,`_num_]`l` | List the file (starting 11 lines above the mark) -[_num_][`,`_num_]`p` | Page (same as List, but starting at the mark) -[_num_]`,`[_num_]`,`_num_`,`[_num_]`c` | Copy lines -[_num_]`,`[_num_]`,`_num_`m` | Move lines -[_num_][`,`_num_][`?`]`s`_text_ | Search for text -[_num_][`,`_num_][`?`]`r`_text_`,`_text_ | Replace text -[_num_][`,`_num_]`d` | Delete lines -[_num_]`t`_filename_ | Transfer (insert the contents of a new file at the mark) -[_num_]`w`[_filename_] | Write the file to disk -`q` | Quit Edlin -`e`[_filename_] | End (write and quit) - -Programmers will be interested to know they can enter special characters in Edlin, using these special codes: - -`\a` | alert ----|--- -`\b` | backspace -`\e` | escape -`\f` | formfeed -`\t` | horizontal tab -`\v` | vertical tab -`\"` | double quote -`\'` | single quote -`\.` | period -`\\` | backslash -`\x`_XX_ | hexadecimal number -`\d`_NNN_ | decimal number -`\`_OOO_ | octal number -`\^`_C_ | control character - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/edlin-freedos - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer) -[2]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/dos/edlin/2.19/ -[3]: https://www.ibiblio.org/ diff --git a/sources/tech/20210620 8 books open source technologists should read this summer.md b/sources/tech/20210620 8 books open source technologists should read this summer.md deleted file mode 100644 index b8fc177bee..0000000000 --- a/sources/tech/20210620 8 books open source technologists should read this summer.md +++ /dev/null @@ -1,243 +0,0 @@ -[#]: 
subject: (8 books open source technologists should read this summer) -[#]: via: (https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list) -[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -8 books open source technologists should read this summer -====== -The 2021 Opensource.com summer reading list features eight recommended -books for you to kick back, relax, and learn by reading. -![Reading a book, selfcare][1] - -Welcome to the 2021 Opensource.com summer reading list. This year's list contains eight wonderful book recommendations from members of the Opensource.com community. From classics like Frank Herbert's _Dune_ and a new translation of _Beowulf_ to non-fiction books about the history of tech industry culture, this list has books for readers with different tastes and interests. - -Each recommendation provides valuable insight into why the person who recommended the book thinks it is worth reading. As always, the book selections and reviews shared by my peers are insightful and inspiring. I always learn so much from what they share, and I always enjoy seeing what new and interesting books I will invariably add to my "to read" list. I hope that you will also find something to add to your "to read" pile. - -Enjoy! - -* * * - -## [Beowulf: A New Translation][2] - -[![Beowulf book cover][3]][2] - -**by Maria Dahvana Headley** (recommendation written by [Kevin Sonney][4]) - -From the first line, you know this isn't the same "Beowulf" you slogged through in grade school. - -> Bro! Tell me we still know how to speak of kings! - -Before I fell into the IT field, I was an English major with a deep, abiding love of _Beowulf_. It can be difficult to read, and (at least here in the United States) it is required reading in high school, but many people remember it as a boring read where the language and structure get in the way of the story. In this translation, Headley puts the story first while maintaining a modern translation that is accurate in structure and meaning with the original text. From the opening "Bro!" to the occasional (and appropriate!) vulgarity, to the description of the "Hashtag: Blessed" Queen, readers will find themselves hanging on every line to find out what happens next. Headley's translation, more than any other, is also a joy to read out loud, just as it was for the bards who told it in the mead halls and longhouses of long ago. - -This is a well-researched, well-written, and most importantly, engaging translation of one of the most important historical works of English literature. Headley makes _Beowulf_ FUN to read, and the reader gets immersed in the story, just like the listeners did over their drinks and meals long ago. - -## [Competing on Internet Time: Lessons From Netscape and Its Battle With Microsoft][5] - -[![Competing on Internet Time book cover][6]][5] - -**by Michael A. Cusumano and David B. Yoffie** (recommendation written by [Gaurav Kamathe][7]) - -If you are reading this right now, chances are you are using a web browser. In fact, if you use a computer today in any form, can you imagine computing without a web browser? We spend the majority of time in it today—doing work, connecting with friends, surfing the web, watching movies, playing music, finding directions, ordering food, etc. 
Sure, apps have replaced some of these functions on mobile devices; however, the browser has in many ways become the operating system when it comes to the internet and the web. - -There was a time when the web was taking root, and browsers were not mainstream. Few people could have imagined the potential of the web and the impact it would have on humankind. The browser is what made the web truly accessible to everybody, beyond just the technically savvy. If you are curious about what went into developing this exciting technology, the company that created it, and the battles it had to fight along the way against a mighty adversary, well, then keep reading. - -_Competing on Internet Time_ traces events between the years 1994 and 1998. It mostly revolves around two companies: Netscape (today known as Mozilla), then a new, up and coming startup with a radical new product (the web browser) that came out of nowhere and grew rapidly to capture 90% of the browser market, and Microsoft, a well-known giant (even then) known for its dominance in the operating system market. The authors interviewed many key figures at both companies and provided their analysis of the events that transpired. This book is an interesting read from both a technology and a business standpoint. It goes in-depth on business strategy, decision-making, the benefits of speed, technology choice, and, more importantly, the culture of these two diverse organizations. - -One key takeaway from the book is how Netscape truly embraced and utilized the internet as a competitive advantage at a time when other companies, including Microsoft, had ignored it for too long. Another takeaway is that Netscape, being the smaller of the two, had the advantage of flexibility, allowing it to introduce new products rapidly. A third lesson is how Microsoft fully utilized its operating system-market monopoly to beat Netscape at its browser game. Netscape tried to counter this by becoming a champion of cross-platform technology and introducing and pushing open standards to keep Microsoft on its back foot. In the face of stiff competition, Netscape turned its business strategy and went from being just a browser company and made forays into intranet, extranet, and enterprise software markets, ultimately losing this battle. All of this and more happened in a span of just four (internet) years. - -## [Dune][8] - -* * * - -* * * - -* * * - -**[![Dune book cover][9]][8]** - -* * * - -**by Frank Herbert** (recommendation written by [Matthew Broberg][10]) - -My first review of _Dune_ was that it was an inventive, solid read. That was [not received well][11] by friends on Twitter. On reflection, I don't think that assessment does it justice. _Dune_ is an exceptionally influential piece of art from the mid-1960s that interweaves futuristic cultures, empires, and religions. It also plays with timelines. Every chapter weaves the future in as the present unfolds. The layers of foreshadowing while the world of Arrakis comes into focus are a joy to read. - -I have since finished the second book in the series and plan to go on to the third. This world is massive, thoughtful, and feels as modern as it is timeless. 
- -## [Hacking Diversity: The Politics of Inclusion in Open Technology Cultures][12] - -* * * - -* * * - -* * * - -**[![Hacking Diversity book cover][13]][12]** - -* * * - -**by Christina Dunbar-Hester** (recommendation written by [Bryan Behrenshausen][14]) - -What motivates this critical anthropology of open source technology communities isn't the question: _Why aren't open source communities more diverse?_ or even _How do we involve more underrepresented minorities in open source?_ Instead, Dunbar-Hester seeks answers to a more complex question: _How do the ways open source communities discuss diversity and inclusion inadvertently constrain their ability to make these communities more inclusive and diverse?_ - -By embedding herself inside various open technology communities—visiting their makerspaces, attending their meetups, listening at their conferences—Dunbar-Hester offers numerous insights that help readers understand how well-meaning and good-faith diversity, equity, and inclusion initiatives might produce results antithetical to their own aspirations. If you're at all interested in making open source communities more welcoming and inclusive, you'll want to read this book. - -## [Letters to a New Developer: What I Wish I Had Known When Starting My Development Career][15] - -* * * - -* * * - -* * * - -**[![Letters to a New Developer book cover][16]][15]** - -* * * - -**by Dan Moore** (recommendation written by [Joshua Allen Holm][17]) - -If you found yourself entering (or re-entering) the job market during the last year, you found that much of the traditional mentoring structure for starting a new job was seriously disrupted. It is harder to get solid career advice when meetups and meetings are virtual, so, as an alternative, I suggest reading _Letters to a New Developer: What I Wish I Had Known When Starting My Development Career._ - -The book is divided into 10 chapters, covering Your First Month, Questions, Writing, Tools to Learn, Practices, Understanding the Business, Learning, Mistakes, Your Career, and Community. Each chapter begins with a brief introduction followed by a series of letters addressed to the reader about the chapter's subject. The letters are all interesting and engaging to read. - -Having almost the entire book in epistolary format makes this book feel more personal than other career guidance books. After a year of being disconnected because of COVID-19, the human touch of the letter-based approach makes this book a pleasure to read. No book, not even one as excellent as _Letters to a New Developer,_ can replace a few good human mentors who can adjust their advice to specific circumstances, but this is a solid alternative to having ready access to a mentor in the next cubicle. - -I highly recommend picking up a copy, even if you do not think you need career advice. There are many insightful things in _Letters to a New Developer_ that would benefit even someone well-established in their career. - -## [The All-Consuming World][18] - -* * * - -* * * - -* * * - -**[![The All-Consuming World book cover][19]][18]** - -* * * - -**by Cassandra Khaw** (recommendation written by [Kevin Sonney][4]) -Release date: September 7, 2021; review based on an advance review copy - -Khaw, author of the Lovecraftian _Hammers on Bone_ and the hilarious and visceral _Rupert Wong: Canibal Chef_ series, brings her distinctive style to a far-future space cyberpunk thriller. 
In _The All-Consuming World_, a band of criminals comes out of retirement to return to the mythical planet that once almost destroyed them. Khaw excels at writing realistic, messy characters whose flaws are their greatest strengths. At times visceral and profane, Khaw builds a complex universe with complex people, where those people can be artificial intelligences, rogue hackers, cyborgs, clones, heavily modified humans, and more. - -_The All-Consuming World_ is a fast, violent, and complex ride from the opening heist onward. Khaw adds her voice to the amazing array of recent modern cyberpunk authors with her unique combination of style, wit, and mayhem. This book may not be for everyone, but I enjoyed the heck out of it. - -## [The Language Lover's Puzzle Book: Lexical Perplexities and Cracking Conundrums From Across the Globe][20] - -* * * - -* * * - -* * * - -**[![The Language Lover's Puzzle Book cover][21]][20]** - -* * * - -**by Alex Bellos** (recommendation written by [Joshua Allen Holm][17]) - -Are you a word nerd? Do you love puzzles? If the answer to both those questions is "yes," then _The Language Lover's Puzzle Book_ is the book for you. This book explores interesting facts about language through a series of puzzles and anecdotes. - -Each of the book's 10 chapters covers different aspects of language like numbers and familial relations. You will find puzzles about ancient languages, like Babylonian and Egyptian, modern languages, and even constructed languages like [Dothraki][22]. Each puzzle is a brain teaser that makes you think about how languages work. The puzzles vary in difficulty, but each presents an interesting challenge for the reader to solve. - -A book full of complex puzzles about languages certainly has a niche audience; still, the anecdotes contained within are interesting enough to make _The Language Lover's Puzzle Book_ something enjoyable even to a reader not interested in solving the puzzles for themselves. Solving the puzzles is very much the point of the book, but the information in the book is still fascinating and informative, even to readers who do not want to challenge themselves with puzzle solving. - -You can watch Alex Bellos' [talk at the Royal Institution][23] for a preview of the puzzles and anecdotes contained in _The Language Lover's Puzzle Book_. - -This review was based on the UK edition of _The Language Lover's Puzzle Book_. A [US edition][24] with the title _The Language Lover's Puzzle Book: Perple_ing Le_ical Patterns to Unmi_ and Ve_ing Synta_ to Outfo__ is due out in November and available now for preorder. - -## [Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security, Second Edition][25] - -* * * - -* * * - -* * * - -**[![Understanding the Digital World book cover][26]][25]** - -* * * - -**by Brian W. Kernighan** (recommendation written by [Jim Hall][27]) - -I loved reading _Understanding the Digital World_. While it's listed as a textbook on Amazon, I didn't find it an "academic" text. It's almost a casual introduction to technology, from basic ideas such as "what is a computer" and "analog versus digital" to more advanced topics including mobile devices, internet communication, artificial intelligence, and cryptography. - -Kernighan introduces each topic in a conversational way, so you don't feel like you're stepping up to a more advanced topic. Rather, it's a natural progression or flow from one topic to the next. 
- -I especially appreciated his demonstration of the Toy Computer, a hypothetical computer model you can experiment with in a web browser. With the Toy, Kernighan explains the fundamentals of Assembly programming without getting lost in the details of macro assembly and specific registers. I found this an approachable way to discuss the topic. - -I also appreciated Kernighan's discussion about operating systems further into the book. Kernighan explains this technical concept in clear terms, breaking down the components into easily understood sections. - -_Understanding the Digital World_ would make a great gift for anyone interested in technology at almost any age level. - -* * * - -Not seeing something that piques your interest? Check out our previous lists for more suggestions: - - * [2020 Opensource.com summer reading list][28] - * [2019 Opensource.com summer reading list][29] - * [2018 Open Organization summer reading list][30] - * [2016 Opensource.com summer reading list][31] - * [2015 Opensource.com summer reading list][32] - * [2014 Opensource.com summer reading list][33] - * [2013 Opensource.com summer reading list][34] - * [2012 Opensource.com summer reading list][35] - * [2011 Opensource.com summer reading list][36] - * [2010 Opensource.com summer reading list][37] - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list - -作者:[Joshua Allen Holm][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/holmja -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_selfcare_wfh_learning_education_520.png?itok=H6satV2u (Reading a book, selfcare) -[2]: https://us.macmillan.com/books/9780374110031 -[3]: https://opensource.com/sites/default/files/uploads/beowulf.jpg (Beowulf book cover) -[4]: https://opensource.com/users/ksonney -[5]: https://www.simonandschuster.com/books/Competing-On-Internet-Time/David-B-Yoffie/9780684831121 -[6]: https://opensource.com/sites/default/files/uploads/competing_on_internet_time.jpg (Competing on Internet Time book cover) -[7]: https://opensource.com/users/gkamathe -[8]: https://dunenovels.com/dune/ -[9]: https://opensource.com/sites/default/files/uploads/dune.jpg (Dune book cover) -[10]: https://opensource.com/users/mbbroberg -[11]: https://twitter.com/mbbroberg/status/1372323276802961408 -[12]: https://press.princeton.edu/books/hardcover/9780691182070/hacking-diversity -[13]: https://opensource.com/sites/default/files/uploads/hacking_diversity.jpg (Hacking Diversity book cover) -[14]: https://opensource.com/users/bbehrens -[15]: https://letterstoanewdeveloper.com/the-book/ -[16]: https://opensource.com/sites/default/files/uploads/letters_to_a_new_developer_150.jpg (Letters to a New Developer book cover) -[17]: https://opensource.com/users/holmja -[18]: https://www.erewhonbooks.com/books/the-all-consuming-world-cassandra-khaw -[19]: https://opensource.com/sites/default/files/uploads/the_all-consuming_world.jpg (The All-Consuming World book cover) -[20]: https://www.alexbellos.com/language -[21]: https://opensource.com/sites/default/files/uploads/the_language_lover_s_puzzle_book.jpg (The Language Lover's Puzzle Book cover) -[22]: https://en.wikipedia.org/wiki/Dothraki_language -[23]: 
https://www.youtube.com/watch?v=2NLquktkdqk -[24]: https://www.amazon.com/dp/B08WK5X45V/ -[25]: https://press.princeton.edu/books/hardcover/9780691219097/understanding-the-digital-world -[26]: https://opensource.com/sites/default/files/uploads/understanding_the_digital_world.jpg (Understanding the Digital World book cover) -[27]: https://opensource.com/users/jim-hall -[28]: https://opensource.com/article/20/6/summer-reading-list -[29]: https://opensource.com/article/19/6/summer-reading-list -[30]: https://opensource.com/open-organization/18/6/summer-reading-2018 -[31]: https://opensource.com/life/16/6/2016-summer-reading-list -[32]: https://opensource.com/life/15/6/2015-summer-reading-list -[33]: https://opensource.com/life/14/6/annual-reading-list-2014 -[34]: https://opensource.com/life/13/6/summer-reading-list-2013 -[35]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading -[36]: https://opensource.com/life/11/7/summer-reading-list -[37]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list diff --git a/sources/tech/20210621 Why I love programming on FreeDOS with GW-BASIC.md b/sources/tech/20210621 Why I love programming on FreeDOS with GW-BASIC.md deleted file mode 100644 index d36b04e6db..0000000000 --- a/sources/tech/20210621 Why I love programming on FreeDOS with GW-BASIC.md +++ /dev/null @@ -1,141 +0,0 @@ -[#]: subject: (Why I love programming on FreeDOS with GW-BASIC) -[#]: via: (https://opensource.com/article/21/6/freedos-gw-basic) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Why I love programming on FreeDOS with GW-BASIC -====== -BASIC was my entry into computer programming. I haven't written BASIC -code in years, but I'll always have a fondness for BASIC and GW-BASIC. -![Old UNIX computer][1] - -When I was growing up, it seemed every "personal computer" from the TRS-80 to the Commodore to the Apple let you write your own programs in the Beginners' All-purpose Symbolic Instruction Code ([BASIC][2]) programming language. Our family had a clone of the Apple II called the Franklin ACE 1000, which—as a clone—also ran AppleSoft BASIC. I took to AppleSoft BASIC right away and read books and magazines to teach myself about BASIC programming. - -Later, our family upgraded to an IBM PC running DOS. Just like every personal computer before it, the IBM PC also ran its own version of DOS, called BASICA. Later versions of DOS replaced BASIC with an updated interpreter called GW-BASIC. - -BASIC was my entry into computer programming. As I grew up, I learned other programming languages. I haven't written BASIC code in years, but I'll always have a fondness for BASIC and GW-BASIC. - -### Microsoft open-sources GW-BASIC - -In May 2020, Microsoft surprised everyone (including me) by releasing the source code to GW-BASIC. Rich Turner (Microsoft) wrote in the [announcement][3] on the Microsoft Developer Blog: - -“Since re-open-sourcing MS-DOS 1.25 & 2.0 on GitHub last year, we’ve received numerous requests to also open-source Microsoft BASIC. Well, here we are! As clearly stated in the repo's readme, these sources are the 8088 assembly language sources from 10th Feb 1983 and are being open-sourced for historical reference and educational purposes. This means we will not be accepting PRs (Pull Requests) that modify the source in any way.” - -You can find the GW-BASIC source code release at the [GW-BASIC GitHub][4]. 
And yes, Microsoft used the [MIT License][5], which makes this open source software. - -Unfortunately, the GW-BASIC code was entirely in Assembly, which wouldn't build with modern tools. But open source developers got to work on that and adjusted the code to assemble with updated DOS assemblers. One project is [TK Chia's GitHub][6] project to update GW-BASIC to assemble with JWASM or other assemblers. You can find several [source and binary releases][7] on TK Chia's project. The notes from the latest version (October 2020) mention that this is “a 'pre-release' binary of GW-BASIC as rebuilt in 2020” and that “support for serial port I/O is missing. Light pen input, joystick input, and printer (parallel port) output need more testing.”  But if you don't need those extra features in GW-BASIC, you should be able to use this latest release to get back into BASIC programming with an open-sourced GW-BASIC. - -FreeDOS 1.3 RC4 doesn't include GW-BASIC, but installing it is pretty easy. Just download the `gwbas-20201025.zip` archive file from TK Chia's October 2020 GW-BASIC release, and extract it (unzip it) on your FreeDOS system. The binary archive uses a default path of `\DEVEL\GWBASIC`. - -### Getting started with GW-BASIC - -To start GW-BASIC, run the `GWBASIC.EXE` program from the DOS command line. Note that DOS is _case insensitive_ so you don't actually need to type that in all uppercase letters. Also, DOS will run any `EXE` or `COM` or `BAT` programs automatically, so you don't need to provide the extension, either. Go into the `\DEVEL\GWBASIC` and type `GWBASIC` to run BASIC. - -![GW-BASIC][8] - -The GW-BASIC interpreter -(Jim Hall, [CC-BY SA 4.0][9]) - -GW-BASIC is an _interpreted_ programming language. The GW-BASIC environment is a "shell" that parses each line in your BASIC program _as it runs the code_. This is a little slower than _compiled_ languages like C but makes for an easier coding-debugging cycle. You can test your code as you go, just by entering it into the interpreter. - -Each line in a GW-BASIC program needs to start with a line number. GW-BASIC uses the line numbers to make sure it executes your program statements in the correct order. With these line numbers, you can later "insert" new program statements between two other statements by giving it a line number that's somewhere in between the other line numbers. For this reason, most BASIC programmers wrote line numbers that went up by tens so that line numbers would go like 10, 20, 30, and so on. - -New to GW-BASIC? You can learn about the programming language by reading an online GW-BASIC reference. Microsoft didn't release a programming guide with the GW-BASIC source code, but you can search for one. [Here's one reference][10] that seems to be a copy of the original Microsoft GW-BASIC User's Guide. - -Let's start with a simple program to print out a list of random numbers. The `FOR` statement creates a loop over a range of numbers, and `RND(1)` prints a random value between 0 and 1. - -![GW-BASIC][11] - -Entering our first program -(Jim Hall, [CC-BY SA 4.0][9]) - -Do you see those highlighted words at the bottom of the screen? Those are keyboard shortcuts that you can access using the "F" keys (or _function_ keys) on your keyboard. For example, F1 will insert the word `LIST` into the GW-BASIC interpreter. The "left arrow" indicates that the shortcut will hit Enter for you, so F2 will enter the `RUN` command and immediately execute it. Let's run the program a few times to see what happens. 
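If you are typing along and can't make out the screenshot, the program entered above amounts to just three lines. This is a sketch based on the description; the exact line numbers in your session may differ:

```
10 FOR I = 1 TO 5
20 PRINT RND(1)
30 NEXT I
```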
- -![GW-BASIC][12] - -The two lists of random numbers are the same -(Jim Hall, [CC-BY SA 4.0][9]) - -Interestingly, the list of random numbers is the same every time we run the BASIC program. That's because the GW-BASIC random number generator resets every time you execute a BASIC program. - -To generate new random numbers every time, we need to "seed" the random number generator with a value. One way to do this is by prompting the user to enter their own seed, then use that value with the `RANDOMIZE` instruction. We can insert those two statements at the top of the program using line numbers 1 and 2. GW-BASIC will automatically add those statements before line 10. - -![GW-BASIC][13] - -Updating the program -(Jim Hall, [CC-BY SA 4.0][9]) - -With the random number generator using a new seed, we get a different list of random numbers every time we run our program. - -![GW-BASIC][14] - -Now the lists of random numbers are different -(Jim Hall, [CC-BY SA 4.0][9]) - -### "Guess the number" game in GW-BASIC - -Whenever I start learning a new programming language, I focus on defining variables, writing a statement, and evaluating expressions. Once I have a general understanding of those concepts, I can usually figure out the rest on my own. Most programming languages have some similarities, so once you know one programming language, learning the next one is a matter of figuring out the unique details and recognizing the differences. - -To help me practice a new programming language, I like to write a few test programs. One sample program I often write is a simple "guess the number" game, where the computer picks a number between one and 100 and asks me to guess it. The program loops until I guess correctly. - -Let's write a version of this "guess the number" game in GW-BASIC. To start, enter the `NEW` instruction to tell GW-BASIC to forget the previous program and start a new one. - -My "guess the number" program first prompts the user to enter a random number seed, then generates a random number between 1 and 100. The `RND(1)` function actually generates a random value between 0 and 1 (actually 0.9999…) so I first multiply `RND(1)` by 100 to get a value between 0 and 99.9999…, then I turn that into an integer (remove everything after the decimal point). Adding 1 gives a number that's between 1 and 100. - -The program then enters a simple loop where it prompts the user for a guess. If the guess is too low or too high, the program lets the user know to adjust their guess. The loop continues as long as the user's guess is _not_ the same as the random number picked earlier. - -![GW-BASIC][15] - -Entering a "guess the number" program -(Jim Hall, [CC-BY SA 4.0][9]) - -We can run the program by tapping the F2 key. Using a random seed of 1234 generates a completely new random number. It took me six guesses to figure out the secret number was 49. - -![GW-BASIC][16] - -Guessing the secret number -(Jim Hall, [CC-BY SA 4.0][9]) - -And that's your first introduction to GW-BASIC programming! Thanks to Microsoft for releasing this great piece of history as open source software, and thanks also to the many open source developers who assembled GW-BASIC so we can run it. - -One more thing before I go—It's not obvious how to exit GW-BASIC. The interpreter had a special instruction for that—to quit, enter `SYSTEM` and GW-BASIC will exit back to DOS. 
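And if you want to come back to the "guess the number" game later without the screenshots, a program along the lines described above might look like this. It is a sketch built from the description, not the exact listing from my session:

```
10 INPUT "Enter a random seed"; S
20 RANDOMIZE S
30 N = INT(RND(1) * 100) + 1
40 INPUT "Guess my number"; G
50 IF G < N THEN PRINT "Too low"
60 IF G > N THEN PRINT "Too high"
70 IF G <> N THEN GOTO 40
80 PRINT "That's right!"
```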
- -![GW-BASIC][17] - -Enter SYSTEM to quit GW-BASIC -(Jim Hall, [CC-BY SA 4.0][9]) - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/freedos-gw-basic - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer) -[2]: https://en.wikipedia.org/wiki/BASIC#:~:text=BASIC%20(Beginners'%20All%2Dpurpose,at%20Dartmouth%20College%20in%201964. -[3]: https://devblogs.microsoft.com/commandline/microsoft-open-sources-gw-basic/ -[4]: https://github.com/microsoft/GW-BASIC -[5]: https://github.com/microsoft/GW-BASIC/blob/master/LICENSE -[6]: https://github.com/tkchia/GW-BASIC -[7]: https://github.com/tkchia/GW-BASIC/releases -[8]: https://opensource.com/sites/default/files/uploads/gwbasic1.png (The GW-BASIC interpreter) -[9]: https://creativecommons.org/licenses/by-sa/4.0/ -[10]: http://www.antonis.de/qbebooks/gwbasman/index.html -[11]: https://opensource.com/sites/default/files/uploads/gwbasic2.png (Entering our first program) -[12]: https://opensource.com/sites/default/files/uploads/gwbasic3.png (The two lists of random numbers are the same) -[13]: https://opensource.com/sites/default/files/uploads/gwbasic4.png (Updating the program) -[14]: https://opensource.com/sites/default/files/uploads/gwbasic5.png (Now the lists of random numbers are different) -[15]: https://opensource.com/sites/default/files/uploads/guessnum1.png (Entering a "guess the number" program) -[16]: https://opensource.com/sites/default/files/uploads/guessnum2.png (Guessing the secret number) -[17]: https://opensource.com/sites/default/files/uploads/guessnum3.png (Enter SYSTEM to quit GW-BASIC) diff --git a/sources/tech/20210622 Edit text like Emacs in FreeDOS.md b/sources/tech/20210622 Edit text like Emacs in FreeDOS.md deleted file mode 100644 index 53f5ce1356..0000000000 --- a/sources/tech/20210622 Edit text like Emacs in FreeDOS.md +++ /dev/null @@ -1,108 +0,0 @@ -[#]: subject: (Edit text like Emacs in FreeDOS) -[#]: via: (https://opensource.com/article/21/6/freemacs) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Edit text like Emacs in FreeDOS -====== -If you're already familiar with GNU Emacs, you should feel right at home -in Freemacs. -![Typewriter in the grass][1] - -On Linux, I often use the GNU Emacs editor to write the source code for new programs. I learned GNU Emacs long ago when I was an undergraduate student, and I still have the "finger memory" for all the keyboard shortcuts. - -When I started work on FreeDOS in 1994, I wanted to include an Emacs-like text editor. You can find many editors similar to Emacs, such as MicroEmacs, but these all take some shortcuts to fit into the 16-bit address space on DOS. However, I was very pleased to find Freemacs, by Russell "Russ" Nelson. - -You can find Freemacs in FreeDOS 1.3 RC4, on the Bonus CD. You can use FDIMPLES to install the package, which will install to `\APPS\EMACS`. 
- -![installing Freemacs][2] - -Installing Freemacs from the FreeDOS 1.3 RC4 Bonus CD -(Jim Hall, [CC-BY SA 4.0][3]) - -### Initial setup - -The first time you run Freemacs, the editor will need to "compile" all of the setup files into a form that Freemacs can quickly process. This will take a few minutes to run, depending on your system's speed and memory, but fortunately, you only need to do it once. - -![Running Freemacs for the first time][4] - -Press Y to build the Freemacs MINT files -(Jim Hall, [CC-BY SA 4.0][3]) - -Freemacs actually processes the editor files in two passes. When Freemacs has successfully completed the first pass, it prompts you to restart the editor so it can finish processing. So don't be surprised that the process seems to start up again—it's just "part 2" of the compilation process. - -### Using Freemacs - -To edit a file with Freemacs, start the program with the text file as an argument on the command line. For example, `emacs readme.doc` will open the Readme file for editing in Freemacs. Typing `emacs` at the command line, without any options, will open an empty "scratch" buffer in Freemacs. - -![Freemacs][5] - -Starting Freemacs without any files opens a "scratch" buffer -(Jim Hall, [CC-BY SA 4.0][3]) - -Or, you can start Freemacs without any command-line options, and use the Emacs shortcuts C-x C-f (or M-x `find-file`). Freemacs then prompts you for a new file to load into the editor. The shortcut prefix C- means you should press the Ctrl key and some other key, so C-x is Ctrl and the x key together. And M-x is shorthand for "press the 'Meta' key (usually Esc) then hit x." - -![Freemacs][6] - -Opening a new file with C-x C-f -(Jim Hall, [CC-BY SA 4.0][3]) - -Freemacs automatically detects the file type and attempts to load the correct support. For example, opening a C source file will also set Freemacs to "C-mode." - -![Freemacs][7] - -Editing a C source file in Freemacs -(Jim Hall, [CC-BY SA 4.0][3]) - -If you also use GNU Emacs (like me), then you are probably curious to get Freemacs to match the C indentation that GNU Emacs uses (2 spaces.) Here is how to set Freemacs to use 2 spaces in C-mode: - - 1. Open a C source file in Freemacs. - 2. Enter M-x `edit-options` to edit Freemacs settings. - 3. Use the settings to change both "C-brace-offset" and "C-indent-level" to 2. - 4. Save and exit Freemacs; you'll be prompted to save settings. - - - -### A few limitations - -Much of the rest of Freemacs operates like GNU Emacs. If you're already familiar with GNU Emacs, you should feel right at home in Freemacs. However, Freemacs does have a few limitations that you might need to know: - -**The extension language is not LISP.** The biggest difference between GNU Emacs on Linux and Freemacs on FreeDOS is that Freemacs uses a different extension language. Where GNU Emacs implements a LISP-like interpreter, Freemacs implements a different extension language called MINT—based on the string processing language, TRAC. The name "MINT" is an acronym, meaning "MINT Is Not TRAC." - -You shouldn't expect to evaluate LISP code in Freemacs. The MINT language is completely different from LISP. For more information on MINT, see the reference manual. We provide the full documentation via the FreeDOS files archive on Ibiblio, at [/freedos/files/edit/emacs/docs][8]. In particular, the MINT language is defined in [mint.txt][9] and [mint2.txt][10]. - -**Freemacs cannot open files larger than 64 kilobytes.** This is a common limitation in many programs. 
64kb is the maximum size of the data space for programs that do not leverage extended memory. - -**There is no "Undo" feature.** Be careful in editing. If you make a mistake, you will have to re-edit your file to get it back to the old version. Also, save early and often. For very large mistakes, your best path might be to abandon the version you're editing in Freemacs, and load the last saved version. - -The rest is up to you! You can find more information about Freemacs on Ibiblio, at [/freedos/files/edit/emacs/docs][8]. For a quick-start guide to Freemacs, read [quickie.txt][11]. The full manual is in [tutorial.txt][12]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/freemacs - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead.png?itok=h3fCkVmU (Typewriter in the grass) -[2]: https://opensource.com/sites/default/files/uploads/install1.png (Installing Freemacs from the FreeDOS 1.3 RC4 Bonus CD) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/sites/default/files/uploads/first-run1_0.png (Press Y to build the Freemacs MINT files) -[5]: https://opensource.com/sites/default/files/uploads/freemacs1.png (Starting Freemacs without any files opens a "scratch" buffer) -[6]: https://opensource.com/sites/default/files/uploads/freemacs2.png (Opening a new file with C-x C-f) -[7]: https://opensource.com/sites/default/files/uploads/freemacs3.png (Editing a C source file in Freemacs) -[8]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/edit/emacs/docs/ -[9]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/edit/emacs/docs/mint.txt -[10]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/edit/emacs/docs/mint2.txt -[11]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/edit/emacs/docs/quickie.txt -[12]: https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/edit/emacs/docs/tutorial.txt diff --git a/sources/tech/20210623 Program on FreeDOS with Bywater BASIC.md b/sources/tech/20210623 Program on FreeDOS with Bywater BASIC.md deleted file mode 100644 index 4959c38af1..0000000000 --- a/sources/tech/20210623 Program on FreeDOS with Bywater BASIC.md +++ /dev/null @@ -1,83 +0,0 @@ -[#]: subject: (Program on FreeDOS with Bywater BASIC) -[#]: via: (https://opensource.com/article/21/6/freedos-bywater-basic) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Program on FreeDOS with Bywater BASIC -====== -Install Bywater BASIC on your FreeDOS system and start experimenting -with BASIC programming. -![woman on laptop sitting at the window][1] - -In the early days of personal computing—from the late 1970s and through the 1980s—many people got their start with BASIC programming. BASIC was a universal programming language that came built into most personal computers, from Apple to IBM PCs. - -When we started the FreeDOS Project in June 1994, it seemed natural that we should include an open source BASIC environment. I was excited to discover one already existed in Bywater BASIC. 
- -The [Bywater BASIC website][2] reminds us that “Bywater BASIC implements a large superset of the ANSI Standard for Minimal BASIC (X3.60-1978) and a significant subset of the ANSI Standard for Full BASIC (X3.113-1987).” It's also distributed under the GNU General Public License version 2, which means it's open source software. We only want to include open source programs in FreeDOS, so Bywater BASIC was a great addition to FreeDOS in our early days. - -We've included Bywater BASIC since at least FreeDOS Alpha 5, in 1997. You can find Bywater BASIC in FreeDOS 1.3 RC4 in the "Development" package group on the Bonus CD. Load this: - -![Bywater BASIC][3] - -Installing Bywater BASIC on FreeDOS 1.3 RC4 -(Jim Hall, [CC-BY SA 4.0][4]) - -FreeDOS installs the Bywater BASIC package in the `\DEVEL\BWBASIC` directory. Change to this directory with `CD \DEVEL\BWBASIC` and type `BWBASIC` to run the Bywater BASIC interpreter. - -![Bywater BASIC][5] - -The Bywater BASIC intepreter -(Jim Hall, [CC-BY SA 4.0][4]) - -### Writing a sample program - -Let me demonstrate Bywater BASIC by writing a test program. We'll keep this simple—print five random numbers. This requires only a few constructs—a loop to iterate over five values and a random number generator. BASIC uses the `RND(1)` statement to generate a random value between 0 and 1. We can use `PRINT` to display the random number. - -One feature I like in Bywater BASIC is the integrated "help" system. There's nothing more frustrating than forgetting the syntax for a BASIC statement. For example, I always forget how to create BASIC loops. Do I use `FOR I IN 1 TO 10` or `FOR I = 1 TO 10`? Just type `help FOR` at the Bywater BASIC prompt and the interpreter displays the usage and a brief description. - -![Bywater BASIC][6] - -Use the "help" system as a quick-reference guide -(Jim Hall, [CC-BY SA 4.0][4]) - -Another neat feature in Bywater BASIC is how it reformats your BASIC instructions, so they are easier to read. After typing my brief program, I can type `list` to see the full source listing. Bywater BASIC automatically adds the `CALL` keyword to my `RANDOMIZE` statement on line 10 and indents the `PRINT` statement inside my loop. These small changes help me to see loops and other features in my program, which can aid in debugging. - -![Bywater BASIC][7] - -Bywater BASIC automatically reformats your code -(Jim Hall, [CC-BY SA 4.0][4]) - -If everything looks okay, then type `RUN` to execute the program. Because I used the `RANDOMIZE` statement at the start of my BASIC program, Bywater _seeds_ the random number generator with a random starting point. This ensures that my numbers are actually random values and don't repeat when I re-run my program. - -![Bywater BASIC][8] - -Generating lists of random numbers -(Jim Hall, [CC-BY SA 4.0][4]) - -Install Bywater BASIC on your FreeDOS system and start experimenting with BASIC programming. BASIC can be a great first programming language, especially if you are interested in getting back to the "roots" of personal computing. You can find more information about Bywater BASIC in the manual, installed in the `\DEVEL\BWBASIC` directory as `BWBASIC.DOC`. You can also explore the online "help" system by typing `HELP` at the Bywater BASIC prompt. 
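A good first experiment is the random-number program from this article. As described above, it only takes a few lines; this is a sketch, so expect the interpreter to reformat it slightly when you `list` it (for example, by rewriting `RANDOMIZE` as `CALL RANDOMIZE`):

```
10 RANDOMIZE
20 FOR I = 1 TO 5
30 PRINT RND(1)
40 NEXT I
```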
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/freedos-bywater-basic - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop) -[2]: https://sourceforge.net/projects/bwbasic/ -[3]: https://opensource.com/sites/default/files/uploads/bwbasic1.png (Installing Bywater BASIC on FreeDOS 1.3 RC4) -[4]: https://creativecommons.org/licenses/by-sa/4.0/ -[5]: https://opensource.com/sites/default/files/uploads/bwbasic3.png (The Bywater BASIC intepreter) -[6]: https://opensource.com/sites/default/files/uploads/randnum1.png (Use the "help" system as a quick-reference guide) -[7]: https://opensource.com/sites/default/files/uploads/randnum2.png (Bywater BASIC automatically reformats your code) -[8]: https://opensource.com/sites/default/files/uploads/randnum3.png (Generating lists of random numbers) diff --git a/sources/tech/20210625 9 Features in Brave Search That Make it a Great Google Search Alternative.md b/sources/tech/20210625 9 Features in Brave Search That Make it a Great Google Search Alternative.md deleted file mode 100644 index e1814e4871..0000000000 --- a/sources/tech/20210625 9 Features in Brave Search That Make it a Great Google Search Alternative.md +++ /dev/null @@ -1,161 +0,0 @@ -[#]: subject: (9 Features in Brave Search That Make it a Great Google Search Alternative) -[#]: via: (https://itsfoss.com/brave-search-features/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -9 Features in Brave Search That Make it a Great Google Search Alternative -====== - -Brave Search is an ambitious initiative by Brave Software based on the open-source project [Tailcat][1], which tries to take on the big tech by introducing the ability to search anonymously. Brave Search itself is [not open source at the moment][2]. - -Of course, there are various other [private search engines][3] available out there trying to offer a privacy-focused experience. Even though not every service proves to be better than Google (regarding features), Brave Search seems to be a compelling choice when considering Brave Browser as a successful open-source replacement to Google Chrome. - -Here, let me highlight a few features in Brave Search that makes it an interesting alternative to Google Search. - -### Top 9 Brave Search Features - -Brave Search does a few things better than Google and those are worth highlighting as unique feature offerings that sets it apart. - -Brave Search is in beta at the time of writing this article. If you notice something different with your experience, there are chances that they may have made an improvement/change. Let me know in the comments below if that’s the case. - -#### 1\. Anonymous Search - -![][4] - -Google tracks your search queries, keeps a log of your history (unless you manually delete it or disable recording your activity). Not just the basics, but your IP address and the website you visit from the search result is also recorded in the process. 
- -In contrast, Brave Search does not track the IP, or the search queries made using their search portal. - -You stay completely anonymous, along with your search history being private only to yourself. - -This could eliminate the need of using a [secure VPN service][5] to keep your Internet search activity private. - -#### 2\. Ad-Free Version (Coming Soon) - -![][6] - -All the private search engines include advertisements to make money (which is fair). The advertisements used by Google Search include trackers when you click on it, which is not the case with privacy-focused search engines. - -But Brave Search tries to go a little further by offering a choice to the users. - -It is a feature that has been planned for addition, but it is worth mentioning. If you want to get rid of the ads, you can opt for the paid version of the search engine where you can explore the web ad-free. - -I think that’s a win-win for both Brave and you as a user. They do not lose on making revenue and you get to experience a truly ad-free search engine. - -#### 3\. Community Curated Search Rankings (Coming Soon) - -Users can help spot the quality of a web resource better than an algorithm often. - -So, Brave Search aims to work on a community-curated search ranking system, which will be open to all when it is available. - -This should improve the collaborative approach of exploring the web, which should be an impressive feature of Brave Search. - -#### 4\. Independent Index with No Search Algorithm - -![][7] - -With most of the other search engines, there’s an algorithm in place to make sure that only the high-quality web pages rank above the rest. Brave Search does not have any special algorithm controlling the search rankings. - -And yes, that is a feature in a world where everything depends on algorithms. - -Sometimes that algorithm ends up being biased by ranking plagiarism content first, low-quality web pages, along with a few other issues. - -Without any special search algorithm, Brave search uses its own Index to fetch results as per your queries. - -#### 5\. Private Local and Global Search Results - -![][8] - -No matter what region you choose for the search results, you get an additional option to filter your results based on your locality (IP address). - -Brave explains that the IP address is stored locally on your device and is used to serve you the local feed of results – which sounds useful. - -![][9] - -#### 6\. Transparency in Search Results - -![][10] - -The web is a vast network. Therefore, to keep the search result quality resourceful, Brave Search fetches some search results anonymously from Google and Bing (which is often less than 10% in my tests). - -For the rest of the results, Brave Search relies on its independent index. Brave Search also displays the percent of its independent search index used for your search. - -The more users start using Brave Search, the more independent the search results will become. So that’s a good thing. - -Considering not all search engines reveal a lot about their search results, Transparency, as a principle, can be a feature to compare with when choosing a search engine. - -![][11] - -#### [Brave: Open Source Web Browser That Blocks Ads and Tracking By Default][12] - -An open source web browser that blocks ads and tracking. A good choice if you are looking for a privacy focused web browser. Here’s how to install Brave on Linux. - -#### 7\. 
A Refreshing User Interface - -While every other Google search alternative tries to offer a familiar experience, Brave Search is actually refreshing to look at (in my opinion). - -![][13] - -The user interface looks well-thought and offers a modern, clean experience. Don’t you think? - -I like how DuckDuckGo simplifies things, but Brave certainly makes it up for a better user experience that looks unique and clean. - -#### 8\. No Anti-Competitive Nature - -Unlike some other search engines (especially, Google) do not suggest anything else explicitly, except their own products and services in their search results. - -That’s fair but potentially also anti-competitive, being the most popular search engine. They do have their reasons which we don’t have to talk about here, but giving a shout-out to your competitors is something new businesses/services are adopting. - -![][14] - -And Brave Search does an excellent jobat that. While you scroll through the search results, you will find a choice to use other search engines for your search query. - -#### 9\. Dark Mode & Tweaks - -Yes, the dark mode is an important feature (sigh). - -![][15] - -And from the settings available in Brave Search, you can **turn on the dark mode**, **set links to open in a new tab,** and **control the language** (soon). - -![][16] - -### Wrapping Up - -Brave Search is an interesting private search engine that aims to tackle the Big Tech by offering something new. It should be seamless user experience when [using Brave Browser][12] along with it, but you can use it on any browser without any limitations. - -I like what I see here, what do you think? Let me know your thoughts in the comments below. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/brave-search-features/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://www.tailcat.com -[2]: https://www.reddit.com/r/brave_browser/comments/o5qknc/announcement_brave_search_beta_now_available_in/h2p3q22?utm_source=share&utm_medium=web2x&context=3 -[3]: https://itsfoss.com/privacy-search-engines/ -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-anonymous.png?resize=800%2C530&ssl=1 -[5]: https://itsfoss.com/best-vpn-linux/ -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/paid-no-ads-brave.png?resize=800%2C450&ssl=1 -[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-sample.png?resize=800%2C586&ssl=1 -[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-local-global-search.png?resize=800%2C228&ssl=1 -[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-anonymous-local-results.png?resize=800%2C589&ssl=1 -[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-transparency.png?resize=800%2C654&ssl=1 -[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/brave-browser-1-e1573731875389.jpeg?resize=150%2C150&ssl=1 -[12]: https://itsfoss.com/brave-web-browser/ -[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-ui.png?resize=800%2C590&ssl=1 -[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-competitors.png?resize=800%2C502&ssl=1 -[15]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-dark-mode.png?resize=800%2C573&ssl=1 -[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/brave-search-settings.png?resize=483%2C389&ssl=1 diff --git a/sources/tech/20210625 How to program in C on FreeDOS.md b/sources/tech/20210625 How to program in C on FreeDOS.md deleted file mode 100644 index 0a3f6ddeb1..0000000000 --- a/sources/tech/20210625 How to program in C on FreeDOS.md +++ /dev/null @@ -1,137 +0,0 @@ -[#]: subject: (How to program in C on FreeDOS) -[#]: via: (https://opensource.com/article/21/6/program-c-freedos) -[#]: author: (Jim Hall https://opensource.com/users/jim-hall) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to program in C on FreeDOS -====== -Programming in C on FreeDOS is very similar to C programming on Linux. -![Woman sitting in front of her computer][1] - -When I first started using DOS, I enjoyed writing games and other interesting programs using BASIC, which DOS included. Much later, I learned the C programming language. - -I immediately loved working in C! It was a straightforward programming language that gave me a ton of flexibility for writing useful programs. In fact, much of the FreeDOS core utilities are written in C and Assembly. - -So it's probably not surprising that FreeDOS 1.3 RC4 includes a C compiler—along with other programming languages. The FreeDOS 1.3 RC4 LiveCD includes two C compilers—Bruce's C compiler (a simple C compiler) and the OpenWatcom C compiler. On the Bonus CD, you can also find DJGPP (a 32-bit C compiler based on GNU GCC) and the IA-16 port of GCC (requires a '386 or better CPU to compile, but the generated programs can run on low-end systems). - -Programming in C on FreeDOS is basically the same as C programming on Linux, with two exceptions: - - 1. **You need to remain aware of how much memory you use.** Linux allows programs to use lots of memory, but FreeDOS is more limited. Thus, DOS programs used one of four [memory models][2] (large, medium, compact, and small) depending on how much memory they needed. - 2. **You can directly access the console.** On Linux, you can create _text-mode_ mode programs that draw to the terminal screen using a library like _ncurses_. But DOS allows programs to access the console and video hardware. This provides a great deal of flexibility in writing more interesting programs. - - - -I like to write my C programs in the IA-16 port of GCC, or OpenWatcom, depending on what program I am working on. The OpenWatcom C compiler is easier to install since it's only a single package. That's why we provide OpenWatcom on the FreeDOS LiveCD, so you can install it automatically if you choose to do a "Full installation including applications and games" when you install FreeDOS 1.3 RC4. If you opted to install a "Plain DOS system," then you'll need to install the OpenWatcom C compiler afterward, using the FDIMPLES package manager. - -![installing OpenWatcom][3] - -Installing OpenWatcom on FreeDOS 1.3 RC4 -(Jim Hall, [CC-BY SA 4.0][4]) - -### DOS C programming - -You can find documentation and library guides on the [OpenWatcom project website][5] to learn all about the unique DOS C programming libraries provided by the OpenWatcom C compiler. 
To briefly describe a few of the most useful functions: - -From `conio.h`: - - * `int getch(void)—`Get a single keystroke from the keyboard - * `int getche(void)—`Get a single keystroke from the keyboard, and echo it - - - -From `graph.h`: - - * `_settextcolor(short color)—`Sets the color when printing text - * `_setbkcolor(short color)—`Sets the background color when printing text - * `_settextposition(short y, short x)—`Move the cursor to row `y` and column `x` - * `_outtext(char _FAR *string)—`Print a string directly to the screen, starting at the current cursor location - - - -DOS only supports [sixteen text colors][6] and eight background colors. You can use the values 0 (Black) to 15 (Bright White) to specify the text colors, and 0 (Black) to 7 (White) for the background colors: - - * **0**—Black - * **1**—Blue - * **2**—Green - * **3**—Cyan - * **4**—Red - * **5**—Magenta - * **6**—Brown - * **7**—White - * **8**—Bright Black - * **9**—Bright Blue - * **10**—Bright Green - * **11**—Bright Cyan - * **12**—Bright Red - * **13**—Bright Magenta - * **14**—Yellow - * **15**—Bright White - - - -### A fancy "Hello world" program - -The first program many new developers learn to write is a program that just prints "Hello world" to the user. We can use the DOS "conio" and "graphics" libraries to make this a more interesting program and print "Hello world" in a rainbow of colors. - -In this case, we'll iterate through each of the text colors, from 0 (Black) to 15 (Bright White). As we print each line, we'll indent the next line by one space. When we're done, we'll wait for the user to press any key, then we'll reset the screen and exit. - -You can use any text editor to write your C source code. I like using a few different editors, including [FreeDOS Edit][7]** **and [Freemacs][8], but more recently I've been using the [FED editor][9] because it provides _syntax highlighting_, making it easier to see keywords, strings, and variables in my program source code. - -![writing a simple C program][10] - -Writing a simple test program in C -(Jim Hall, [CC-BY SA 4.0][4]) - -Before you can compile using OpenWatcom, you'll need to set up the DOS [environment variables][11]** **so OpenWatcom can find its support files. The OpenWatcom C compiler package includes a setup [batch file][12] that does this for you, as `\DEVEL\OW\OWSETENV.BAT`. Run this batch file to automatically set up your environment for OpenWatcom. - -Once your environment is ready, you can use the OpenWatcom compiler to compile this "Hello world" program. I've saved my C source file as `TEST.C`, so I can type `WCL TEST.C` to compile and link the program into a DOS executable, called `TEST.EXE`. In the output messages from OpenWatcom, you can see that `WCL` actually calls the OpenWatcom C Compiler (`WCC`) to compile, and the OpenWatcom Linker (`WLINK`) to perform the object linking stage: - -![compiling with OpenWatcom][13] - -Compiling the test program with OpenWatcom -(Jim Hall, [CC-BY SA 4.0][4]) - -OpenWatcom prints some extraneous output that may make it difficult to spot errors or warnings. To tell the compiler to suppress most of these extra messages, use the `/Q` ("Quiet") option when compiling: - -![compiling with OpenWatcom][14] - -If you don't see any error messages when compiling the C source file, you can now run your DOS program. This "Hello world" example is `TEST.EXE`. 
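
If you'd like to follow along without squinting at the screenshots, here is a sketch of what a `TEST.C` along these lines might look like. It is my reconstruction built from the `conio.h` and `graph.h` functions listed above, not the author's exact source, and the reset-the-screen step at the end is an assumption:

```
#include <conio.h>
#include <graph.h>

int main(void)
{
    short color;

    for (color = 0; color < 16; color++) {
        _settextcolor(color);                   /* text color 0 (Black) to 15 (Bright White) */
        _settextposition(color + 1, color + 1); /* next row, indented one more column */
        _outtext("Hello world");
    }

    getch();    /* wait for the user to press any key */

    /* reset to plain white-on-black and clear the screen before exiting (assumption) */
    _settextcolor(7);
    _setbkcolor(0);
    _clearscreen(_GCLEARSCREEN);

    return 0;
}
```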
Enter `TEST` on the DOS command line to run the new program, and you should see this very pretty output: - -![running the test program][15] - -C is a very efficient programming language that works well for writing programs on limited-resource systems like DOS. There's lots more that you can do by programming in C on DOS. If you're new to the C language, you can learn C yourself by following along in our [Writing FreeDOS Programs in C][16] self-paced ebook on the FreeDOS website, and the accompanying "how-to" video series on the [FreeDOS YouTube channel][17]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/program-c-freedos - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer) -[2]: https://devblogs.microsoft.com/oldnewthing/20200728-00/?p=104012 -[3]: https://opensource.com/sites/default/files/uploads/install-ow.png (Installing OpenWatcom on FreeDOS 1.3 RC4) -[4]: https://creativecommons.org/licenses/by-sa/4.0/ -[5]: http://openwatcom.org/ -[6]: https://opensource.com/article/21/6/freedos-sixteen-colors -[7]: https://opensource.com/article/21/6/freedos-text-editor -[8]: https://opensource.com/article/21/6/freemacs -[9]: https://opensource.com/article/21/1/fed-editor -[10]: https://opensource.com/sites/default/files/uploads/fed-test.png (Writing a simple test program in C) -[11]: https://opensource.com/article/21/6/freedos-environment-variables -[12]: https://opensource.com/article/21/6/automate-tasks-bat-files-freedos -[13]: https://opensource.com/sites/default/files/uploads/wcl-test.png (Compiling the test program with OpenWatcom) -[14]: https://opensource.com/sites/default/files/uploads/wcl-q-test.png (Use the /Q ("Quiet") option to make OpenWatcom print less output) -[15]: https://opensource.com/sites/default/files/uploads/test.png (You can create beautiful programs in C) -[16]: https://www.freedos.org/books/cprogramming/ -[17]: https://www.youtube.com/freedosproject diff --git a/sources/tech/20210625 Mount cue-bin image files with CDemu.md b/sources/tech/20210625 Mount cue-bin image files with CDemu.md deleted file mode 100644 index 21ed6d68d7..0000000000 --- a/sources/tech/20210625 Mount cue-bin image files with CDemu.md +++ /dev/null @@ -1,87 +0,0 @@ -[#]: subject: (Mount cue/bin image files with CDemu) -[#]: via: (https://fedoramagazine.org/mount-cue-bin-image-files-with-cdemu/) -[#]: author: (Luca Rastelli https://fedoramagazine.org/author/luca247/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Mount cue/bin image files with CDemu -====== - -![][1] - -Photo by [Cameron Bunney][2] on [Unsplash][3] - -The other day I needed to burn a disc. Yeah, I know, some of you might be wondering, “A disc? What’s that?” Others might ask, “Are you really using that archaic media?” - -Well, yes I am. I feel there is still something charming about physical things that digital media cannot replace. 
- -I needed to burn a very old game that was comprised of a [cue file][4], some audio tracks in [cda format][5], and a [bin file][6] which stored all the binary content that was indexed by the cue file. - -First I tried to use [Brasero][7]. Yeah I know, it’s old but it does the job generally and it fits with the rest of the system, so it’s my choice generally. Unfortunately, this time it was not up to the task. It stated that it had some problems reading the cue file. Then I tried [Xfburn][8] and [K3b][9]. But neither of those worked either. They both detected the bin file but not the cda files. - -Next, I searched on the web and I found lots of posts explaining how to burn image files using command line applications. Or how to create [iso images][10] and then write those out to discs. These methods seemed excessively complex for what I wanted to do. Why all that difficulty for a task that should be easy as clicking on a button? Fedora Linux should be about freedom, not about difficulties! Although it can be used by experts, an easy way of doing things is always appreciated. - -I had almost surrendered. Then, in a forum post buried under all the suggestions mentioned previously, I found the answer I was looking for – [CDemu][11]. - -Those familiar with [Daemon Tools][12] may find CDemu to be similar. I find CDemu to be even easier and far less bloated. With CDemu, you can mount cue images with the classic double-click. Sounds easy? Well that’s because it actually is. - -CDemu is not present in Fedora Linux’s default repositories. So if you want to try it out, you will have to use the [rok/cdemu][13] Copr repository that is compatible with your version of Fedora Linux. - -**Note**: _Copr is not officially supported by Fedora infrastructure. Use packages at your own risk._ - -Open a terminal and enable the Copr repo by entering the following command. - -``` -$ sudo dnf copr enable rok/cdemu -``` - -Then install the daemon and the clients by entering the following commands. - -``` -$ sudo dnf install cdemu-daemon -$ sudo dnf install cdemu-client -$ sudo dnf install gcdemu -``` - -Next, enter the following commands to ensure the right kernel module is available and loaded. - -``` -$ sudo akmods -$ sudo systemctl restart systemd-modules-load.service -``` - -Now CDemu is installed. To associate it with your CD images, you just need to right-click on a file type that you want to mount with CDemu, select _properties_, and the select _Open with CDemu_. Now, double-clicking on those image types should mount them in Nautilus like a physical drive. - -If you need to burn the image (like I did), open Brasero and select _copy disc_. - -CDemu can also be run from the command line. But this guide was all about getting easy, right? 
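
For the terminal-inclined, here is roughly what driving CDemu from the command line looks like. Treat this as a sketch: the image name is a placeholder, and you should run `cdemu --help` to confirm the exact subcommands shipped with your version of cdemu-client:

```
# Load a cue/bin image into the first virtual drive, check it, then unload it
cdemu load 0 mygame.cue
cdemu status
cdemu unload 0
```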
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/mount-cue-bin-image-files-with-cdemu/ - -作者:[Luca Rastelli][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/luca247/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/06/cdemu-816x345.jpg -[2]: https://unsplash.com/@bdbillustrations?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/dvd?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://en.wikipedia.org/wiki/Cue_sheet_(computing) -[5]: https://en.wikipedia.org/wiki/.cda_file -[6]: https://en.wikipedia.org/wiki/Binary_file -[7]: https://en.wikipedia.org/wiki/Brasero_(software) -[8]: https://en.wikipedia.org/wiki/Xfce#Xfburn -[9]: https://en.wikipedia.org/wiki/K3b -[10]: https://en.wikipedia.org/wiki/Optical_disc_image -[11]: https://en.wikipedia.org/wiki/CDemu -[12]: https://en.wikipedia.org/wiki/Daemon_Tools -[13]: https://copr.fedorainfracloud.org/coprs/rok/cdemu/ diff --git a/sources/tech/20210627 Try Chatwoot, an open source customer relationship platform.md b/sources/tech/20210627 Try Chatwoot, an open source customer relationship platform.md deleted file mode 100644 index 30ee81bab9..0000000000 --- a/sources/tech/20210627 Try Chatwoot, an open source customer relationship platform.md +++ /dev/null @@ -1,154 +0,0 @@ -[#]: subject: (Try Chatwoot, an open source customer relationship platform) -[#]: via: (https://opensource.com/article/21/6/chatwoot) -[#]: author: (Nitish Tiwari https://opensource.com/users/tiwarinitish86) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Try Chatwoot, an open source customer relationship platform -====== -Chatwoot is an open source alternative to Intercom, Zendesk, Salesforce -Service Cloud, and other proprietary communications platforms. -![Digital images of a computer desktop][1] - -Chatwoot is an open source customer relationship platform built with Ruby and Vue.js. It was written from scratch to allow customer-relations teams to build end-to-end platforms for ticket management and support. - -This article looks at Chatwoot's architecture, installation, and key features. - -### Chatwoot's architecture - -Chatwoot requires the following components to function properly: - - * Chatwoot web servers - * Chatwoot workers - * PostgreSQL database - * Redis - * Email service (e.g., SMTP, SendGrid, Mailgun) - * Object storage (e.g., AWS S3, Azure, Google Cloud Storage, MinIO) - - - -The Chatwoot server and workers are the core components that integrate with everything else. PostgreSQL and Redis are specific, required components. - -![Chatwoot architecture][2] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -The other components, like the email server and object storage, are loosely coupled, so you can use any compatible system. Therefore, you could choose any SMTP server, self-hosted or SaaS, as your email service. Similarly, for object storage, you can use public cloud platforms like AWS S3, Azure Blob Store, GCS, or private cloud platforms like MinIO. - -### Install Chatwoot - -Chatwoot is available on common platforms, including Linux virtual machines, Docker, and as a single-click install application on [Heroku][4] and [CapRover][5]. 
This how-to looks at the Docker installation process; for other platforms, refer to Chatwoot's [documentation][6]. - -To begin, ensure Docker Compose is installed on your machine. Then, download the `env` and `docker-compose` files from [Chatwoot's GitHub repo][7]: - - -``` -# Download the env file template -wget -O .env -# Download the Docker compose template -wget -O docker-compose.yml -``` - -Open the `env` file and fill in the env variables `REDIS_PASSWORD` and `POSTGRES_PASSWORD`; these will be the passwords for Redis and PostgreSQL, respectively. Then update the same PostgreSQL password in the `docker-compose.yaml` file.  - -Now, prepare PostgreSQL: - - -``` -`docker-compose run --rm rails bundle exec rails db:chatwoot_prepare` -``` - -Deploy Chatwoot: - - -``` -`docker-compose up -d` -``` - -You should now be able to access Chatwoot at `http://localhost:3000`. - -![Chatwoot welcome screen][8] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -### Chatwoot features - -Fill in the details on the welcome page to create the admin user. After that, you should land on the Conversations page. - -![Chatwoot conversations screen][9] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -The following are Chatwoot's key features: - -#### Channels - -Chatwoot supports a wide range of platforms as messaging Channels (including website widgets, Facebook, Twitter, WhatsApp, email, and others). To create an integration, click on the **Inboxes** button on the left-hand sidebar. Then select the platform you want to integrate with. - -![Chatwoot channels screen][10] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -Each platform has its own set of human agents, teams, labels, and canned responses. This way, Chatwoot allows a unified interface for talking to customers, but each channel is as customizable as it can be in the background. - -#### Reporting - -Organizations take customer response service-level agreements (SLAs) very seriously—and rightly so. Chatwoot has an integrated dashboard that gives a birds-eye view of the most important metrics, like total messages, response times, resolution times, etc. Administrators can also download reports for specific agents. - -![Chatwoot reports screen][11] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -#### Contacts - -Chatwoot also captures contact details from each incoming message and neatly arranges this information on a separate page called Contacts. This ensures all contact details are available for further follow-up or even syncing with an external, full-fledged customer relationship management (CRM) platform. - -![Chatwoot Contacts][12] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -#### Integrations - -Channels enable integrations with external messaging systems so that Chatwoot can communicate using these systems. However, what if you want a team to be notified on Slack if there is a new chat message on Chatwoot? - -This is where Integration Webhooks come into the picture. This feature allows you to integrate Chatwoot into external systems so that it can send out relevant information. - -![Chatwoot Integrations][13] - -(Nitish Tiwari, [CC BY-SA 4.0][3]) - -### Learn more - -Chatwoot provides many of the key communications features customer relations teams want. To learn more about Chatwoot, take a look at its [GitHub repository][14] and [documentation][15]. 
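
One last practical note: if you deployed with Docker Compose as shown above, two commands are handy for keeping an eye on the stack. The service name `rails` comes from Chatwoot's compose file, the same service used in the database-prepare step earlier:

```
# Confirm the Chatwoot services are up, then tail the application logs
docker-compose ps
docker-compose logs -f rails
```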
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/6/chatwoot - -作者:[Nitish Tiwari][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/tiwarinitish86 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop) -[2]: https://opensource.com/sites/default/files/uploads/chatwoot_servicecalls.png (Chatwoot architecture) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://www.heroku.com/ -[5]: https://caprover.com/docs/get-started.html -[6]: https://www.chatwoot.com/docs/self-hosted/deployment/architecture#available-deployment-options -[7]: https://github.com/chatwoot/chatwoot -[8]: https://opensource.com/sites/default/files/uploads/chatwoot_welcome.png (Chatwoot welcome screen) -[9]: https://opensource.com/sites/default/files/uploads/chatwoot_conversations.png (Chatwoot conversations screen) -[10]: https://opensource.com/sites/default/files/uploads/chatwoot_channels.png (Chatwoot channels screen) -[11]: https://opensource.com/sites/default/files/uploads/chatwoot_reports.png (Chatwoot reports screen) -[12]: https://opensource.com/sites/default/files/uploads/chatwoot_contacts.png (Chatwoot Contacts) -[13]: https://opensource.com/sites/default/files/uploads/chatwoot_integrations.png (Chatwoot Integrations) -[14]: http://github.com/chatwoot/chatwoot -[15]: https://www.chatwoot.com/help-center diff --git a/sources/tech/20210628 Introduction to image builder.md b/sources/tech/20210628 Introduction to image builder.md deleted file mode 100644 index d484b866f3..0000000000 --- a/sources/tech/20210628 Introduction to image builder.md +++ /dev/null @@ -1,304 +0,0 @@ -[#]: subject: (Introduction to image builder) -[#]: via: (https://fedoramagazine.org/introduction-to-image-builder/) -[#]: author: (Andy Mott https://fedoramagazine.org/author/amott/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Introduction to image builder -====== - -![][1] - -Photo by [Marcel Strauß][2] on [Unsplash][3] - -Image Builder is a tool that allows you to create custom OS images (based on the upstream project Weldr), and it’s included in the base repos so you can build images right from the start. - -You can use the command line or a Cockpit plugin, and it’s a fairly simple and straightforward process which allows you to create images for most of the major platforms – Libvirt/KVM (RHV or general Linux virtualisation), VMware, Openstack, AWS and Azure. You can also deploy these images from Satellite. - -### Installing Image Builder - -To install Image Builder, run this command: - -``` -sudo dnf install -y osbuild-composer composer-cli cockpit-composer -``` - -If you’re not using Cockpit then omit that last package and you’ll just have the cli tool. 
- -If you are using Cockpit, then make sure you add the service to firewalld to allow access like this: - -``` -sudo firewall-cmd --add-service=cockpit && sudo firewall-cmd --runtime-to-permanent -``` - -You need to enable the osbuild-composer socket (and cockpit if you installed it and haven’t yet enabled it): - -``` -sudo systemctl enable --now osbuild-composer.socket -sudo systemctl enable --now cockpit.socket -``` - -Image Builder is now running and ready to use so let’s create an image using the cli first, then move on to using Cockpit. - -### Image Builder CLI - -The main cli command is _composer-cli,_ which you use to create, list, examine and delete blueprints. It is also used to build, list, delete and download images for upload to their intended platform. - -#### Available commands - -The following is a list of the Image Builder commands and their functions: - -**Blueprint manipulation** | ----|--- -List all available blueprints | sudo composer-cli blueprints list -Show a blueprint contents in the toml format | sudo composer-cli blueprints show _blueprint-name_ -Save (export) blueprint contents in the toml format into a file _blueprint-name.toml_ | sudo composer-cli blueprints save _blueprint-name_ -Remove a blueprint | sudo composer-cli blueprints delete _blueprint-name_ -Push (import) a blueprint file in the toml format into Image Builder | sudo composer-cli blueprints push _blueprint-name_ -**Composing images from blueprints** | -Start a compose | sudo composer-cli compose start _blueprint-name_ _image-type_ -List all composes | sudo composer-cli compose list -List all composes and their status | sudo composer-cli compose status -Cancel a running compose | sudo composer-cli compose cancel _compose-uuid_ -Delete a finished compose | sudo composer-cli compose delete _compose-uuid_ -Show detailed information about a compose | sudo composer-cli compose info _compose-uuid_ -Download image file of a compose | sudo composer-cli compose image _compose-uuid_ -**Additional resources** | -The composer-cli man page provides a full list of the available subcommands and options | man composer-cli -The composer-cli command provides help on the subcommands and options | sudo composer-cli help - -#### Creating an image blueprint - -The first step in using Image Builder is to use your favorite editor to create the blueprint of the image itself. The blueprint includes everything the image needs to run. Let’s create a really simple one to begin with, then take it from there. - -##### **Create the blueprint file** - -Blueprint files are .toml files, created in your favorite editor, and the minimal required information is shown here: - -``` -name = "image-name" -description = "my image blueprint" -version = "0.0.1" -modules = [] -groups = [] -``` - -The file above can be used to create a minimal image with just the essential software required to run. Typically, images need a few more things, so let’s add in some additional packages. Add in the following below the groups item to add in extra packages: - -``` -[[packages]] -name = "bash-completion" -version = "*" -``` - -You will need to repeat this block for every package you wish to install. The version can be a specific version or the asterisk to denote the latest. - -Going into a bit more detail, the groups declaration is used to add any groups you might need in the image. 
If you’re not adding any you can use the format above, but if you need to create a group remove the line shown above: - -``` -groups = [] -``` - -and add this: - -``` -[[groups]] -name = "mygroup" -``` - -Again, you need to repeat this block for every group you want to add. - -It is recommended that you create at least a “root” user using something similar to this: - -``` -[[customizations.user]] - name = "root" - description = "root" - password = "$6$ZkdAX1t8QwEAc/GH$Oi3NY3fyTH/87ALiPfCzZTwpCoKv7P3bCVnoD9JnI8f5gV9I3A0bq5mZrKrw6isuYatmRQ.SVq3Vq27v3X2yu." - home = "/home/root/" - shell = "/usr/bin/bash" - groups = ["root"] -``` - -An example blueprint is available at and it contains an explanation for creating the password hash. It doesn’t cover everything, but has the majority of the options shown. - -##### **Push the blueprint to Image Builder** - -Once you have your blueprint ready, you need to push it to Image Builder. This command pushes file _blueprint-name.toml_ to Image Builder: - -``` -sudo composer-cli blueprints push blueprint-name.toml -``` - -Check that it has been pushed with the _blueprints list_ command: - -``` -sudo composer-cli blueprints list -``` - -##### Generate the image - -Now you have your blueprint uploaded and can use it to generate images. Use the _compose-cli start_ command for this, giving the blueprint name and the output format you want (qcow, ami, vmdk, etc): - -``` -sudo composer-cli compose start blueprint-name qcow2 -``` - -You can obtain a list of image types with: -``` - -``` - -sudo composer-cli compose types -``` - -``` - -The _compose_ step creates a minimally-sized image – if you want more space on your OS disk then add _–size_ and a size, in Gb, to the command. - -The image compose will take a short time, and you can see the status of any images with the - -compose status - -command: - -``` -sudo composer-cli compose status -``` - -##### Using the image - -When the image build is complete the status will show “FINISHED” and you can download it and use it to build your VM: - -``` -sudo composer-cli compose image image-uuid -``` - -The image UUID is displayed when you start the compose. It can also be found at the beginning of the compose status command output. - -The downloaded image file is named with the UUID of the image plus the appropriate extension for the image type. You can copy this file to an image repository and rename as appropriate before creating a VM with it. - -A simple qemu/kvm machine is started like this: - -``` -sudo qemu-kvm --name test-image -m 1024 -hda ./UUID-disk.qcow2 -``` - -Alternatively, you can copy this image to a new file and use that file as the OS disk for a new VM. - -### Image Builder in Cockpit - -If you want to use Image Builder in Cockpit, you need to install the cockpit-composer package as described in the installation section above. - -After installation, log into your Cockpit URL (localhost:9090) and select _Image Builder_ in the _Tools>Applications_ section. This will take you to the initial Image Builder page, where you can create new blueprints: - -![][4] - -#### Create a blueprint - -Selecting the _Create blueprint_ button will display a dialogue box where you need to enter a name for your blueprint plus an optional description: - -![][5] - -After you enter a name and select _Create_, you move to the add packages page. 
Create a minimal image here by simply selecting the _Create Image_ button, or add extra packages by entering the name in the search box under _Available Components_ and then selecting the + button to add it to your image. Any dependencies required by the package will also be added to the image. Add as many packages as you require. - -![][6] - -After adding your packages, select the _Commit_ button to save your blueprint. You will be shown the changes your actions will make with the option to continue with your commit or discard the changes next. - -When the commit has been made, you will be returned to the same screen where you can add more packages. If you’re done with that, select the name of your blueprint in the breadcrumbs at the top left of the screen to go to the main screen for that blueprint. From here you can add customizations (users, groups etc), more packages, or create the image: - -![][7] - -If your image requires any specific users, or if you want to edit the root user (I’d recommend this, either to set a password or add an ssh key so you can log in without having to further edit or customize the image), then you can do this here. You can also create a hostname, which is useful for a single-use image but less so if the image will be used as the base for multiple deployments. - -To add a user, select the _Create user_ _account_ button. If you name this user root you can update the root account as you need. Enter a user name, description, any password and/or ssh public key, and if this user will be an administrative user (like root) then tick the box to signify this: - -![][8] - -Select the _Create_ button at the bottom to create the user and return to the main blueprint page. Here you will see the new user, and can create more as necessary. Once you’ve created all your users and added all your packages you can create am image from the blueprint by selecting the _Create image_ button at the upper right. - -![][9] - -#### Create an image - -In the Create image dialogue select an image type from the dropdown list, then select a size. This will effectively be the size of the disk available in the OS, just like you’d specify the virtual disk size when creating a VM manually. This will be thin-provisioned, so the actual image file won’t be this size! Select _Creat_e, when finished, to add your image to a build queue. - -![][10] - -Building images takes a little time, and you can check progress or view completed images in the Images page: - -![][11] - -You can create multiple image types from the same blueprint, so you can deploy the exact same image on multiple platforms, increasing your security and making maintenance and administration easier. - -#### Download the image - -To use your image, you need to download it, then upload to your chosen platform. To download the image, select the 3-dot menu next to the image and choose _Download_: - -![][12] - -That’s it – your image is ready to deploy. For a simple QEMU/KVM example use the same command from the CLI section above. - -``` -sudo qemu-kvm --name test-image -m 1024 -hda ./UUID-disk.qcow2 -``` - -#### Final thoughts - - * You can always edit your blueprints at a later date. The Cockpit UI will automatically increment the version number, but you will need to do this yourself in the toml file if using the CLI. Once you’ve edited the blueprint you will also need to create a new image. - * You may verify the TOML format using this web site Note that this verifies only the file formatting, not correctness of the content. 
- * You can create images with different sizes if your environment has such requirements. - * Create a different blueprint for each specific image you need – don’t update the same one with different packages and version numbers then create images from those. - * Image Builder does not allow disks to be partitioned. The output types that have a partitioned disk will have a single partition and additionally any platform-specific partitions that are required to boot the system image. For example, qcow2 image type has a single root partition, and possibly a platform specific boot partition – like PReP for PPC64 system – that the image requires to boot. - * Images types that may be created are listed in the following table: - -**Description** | **CLI name** | **File Extension** ----|---|--- -QEMU QCOW2 Image | qcow2 | .qcow2 -Ext4 File System Image | 80 | .qcow2 -Raw Partitioned Disk Image | partitiond-disk | .img -Live Bootable ISO | live-iso | .iso -TAR Archive | tar | .tar -Amazon Machine Image Disk | ami | .ami -VMware Virtual Machine Disk | vmdk | .vmdk -Openstack | openstack | .qcow2 - -Image Builder is a fantastic tool for anyone who needs to have repeatable based images for their environment. It’s definitely still a work in progress, but new features are coming all the time, with plans to allow uploading directly into various hypervisors and cloud platforms and other cool stuff. - -#### Image Builder documentation - -Official Weldr documentation: - -RHEL 8: - -RHEL 7: - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/introduction-to-image-builder/ - -作者:[Andy Mott][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/amott/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/05/image_builder-816x345.jpg -[2]: https://unsplash.com/@martzzl?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/builder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-start-page-1024x198.png -[5]: https://fedoramagazine.org/wp-content/uploads/2021/05/Screenshot-from-2021-05-25-10-20-34.png -[6]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-add-components-1024x456.png -[7]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-main-blueprint-page-1024x226.png -[8]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-add-user.png -[9]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-main-page-2-1024x303.png -[10]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-create-image-2-1024x701.png -[11]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-images-page-1024x252.png -[12]: https://fedoramagazine.org/wp-content/uploads/2021/05/image-builder-download-image-1024x255.png diff --git a/sources/tech/20210630 Is remmina useful for your daily work.md b/sources/tech/20210630 Is remmina useful for your daily work.md deleted file mode 100644 index a800764706..0000000000 --- a/sources/tech/20210630 Is remmina useful for your daily work.md +++ /dev/null @@ -1,139 +0,0 @@ -[#]: subject: (Is remmina useful for your daily work?) 
-[#]: via: (https://fedoramagazine.org/is-remmina-useful-for-your-daily-work/) -[#]: author: (zexcon https://fedoramagazine.org/author/zexcon/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Is remmina useful for your daily work? -====== - -![][1] - -Photo by [Oğuzhan Akdoğan][2] on [Unsplash][3] - -[Remmina][4] is a Remote Desktop Client that supports numerous protocols allowing you to connect to many remote systems. This full featured client program allows you to set up a shared folder, select the screen size and type of connection being used. There are many more options that give you the ability to customize your connection to fit your individual needs. In this article we will utilize Remote Desktop Protocol (RDP) to demonstrate its capabilities. RDP is commonly used for logging into Microsoft Windows machines remotely and that will be used as an example. - -### How I came to use remmina - -Using _remmina_ has become a staple of my work and personal life. At one point I’m sitting at my desk looking at a 13″ monitor trying to perform work on an inadequate laptop. To my left is a 34″ ultra-wide connected to my personal box running Fedora Linux. Then it dawned on me, I should see if I can remote in and use my 34″ monitor to make my life better and offload resource intensive processes. The answer is yes, maybe? Lets try it out and see if it works for you. - -### Installing remmina - -The _remmina_ software is available in the Fedora Linux repository by default. Install it by running the following. -``` - -``` - -sudo dnf install remmina -``` - -``` - -### Collecting Windows Information - -On the the Windows computer you are going to remote into you will need to get the IP address, domain name and username. Type the _Windows Key + r_ and this will display the run box. Type _cmd_ and select OK. - -![][5] - -The terminal (command line) displayed allows us to obtain the IP address. At the prompt type _ipconfig_. -``` - -``` - -ipconfig -``` - -``` - -You will see options labeled “IPv6 Address” or “IPv4 Address” or both. Keep this address handy for the next section. In the terminal enter _set user_ to obtain the Server, Domain and Username. -``` - -``` - -set user -``` - -``` - -This displays the USERDOMAIN and USERNAME. Make note of this along with the IP address you captured in the last step. You will have the following three items. - - * Server = IPv4 or IPv6 - * USERDOMAIN = Domain - * USERNAME = Username - - - -With these three pieces of information you are ready to move to creating the connection. - -### Running remmina - -Execute the following command to start _remmina:_ -``` - -``` - -remmina -``` - -``` - -![Remnina startup screen][6] - -### Creating the connection - -Lets look at creating a connection. Select the icon to the left of the magnifying class at the top to create a connection profile (middle icon of the three). - -![][7] - -In the Remote Connection Profile you provide all the options to create the connection. Provide a meaningful title under the Name field. You can also add your connection to a Group if you are going to manage several connections with _remmina_. For the Protocol select “RDP – Remote Desktop Protocol”. - -Under the Basic options you will need to provide your IPv4 or IPv6 address for the host computer, your login name for the Username and the corresponding password. Use of the Domain will be specific to your situation and may not be needed. 
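
Before you click connect, it can save some head-scratching to confirm that Remote Desktop is enabled on the Windows machine and that the standard RDP port (TCP 3389) is reachable from your Fedora Linux box. Here is a quick, dependency-free check from a terminal; the IP address is a placeholder for the one you noted earlier:

```
# Prints "reachable" if something is listening on the RDP port of the Windows host
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.1.50/3389' && echo reachable
```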
- -At this point, you are ready to connect to your remote desktop and can click “Save and Connect” at the bottom or you can continue reading and learn about some of the additional options. - -### The fun stuff “options” - -Here is where all the fun begins. Under Basic you can select Enable multi monitor, Span screen over multiple monitors and List monitor IDs. This allows you to use one or more monitors in many different configurations. You can even set the resolution or select a Color depth. - -One of my favorite options available is the Share folder that allows you to setup a folder on your local machine and it will automatically mount on the remote computer. This affords you the opportunity to move files back and forth easily and no more emailing yourself! - -![][8] - -We will only cover two items under the Advanced section one is Quality and allows you to select performance over visual appeal or vise versa. The second option is the security protocol negotiation that I recommend leaving set to Automatic negotiation. - -![][9] - -### Alternative - -In all fairness I didn’t start with _remmina_. It took using others, notably [FreeRDP,][10] for me to see that the learning curve could be substantial and I didn’t want it to effect my availability and productivity at work. With a little bit of time and research you can dig in and learn the many features of FreeRDP and see if it might be the better choice for you. - -### Conclusion - -A basic setup for an RDP connection to a Windows system was described. Some options were discussed, such as setting the Resolution, Share folder, and Quality. We only touched on a minimal set available among an abundance of options. If you find that _remmina_ is right for you, I highly recommend you go through the remaining options. Many of the options can help tweak the desktop to fit your personal preferences and create a better experience. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/is-remmina-useful-for-your-daily-work/ - -作者:[zexcon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/zexcon/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/06/remote_display-816x345.jpg -[2]: https://unsplash.com/@jeffgry?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/remote-display?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://remmina.org/ -[5]: https://fedoramagazine.org/wp-content/uploads/2021/06/Remmina05.jpg -[6]: https://fedoramagazine.org/wp-content/uploads/2021/06/Remmina01.jpg -[7]: https://fedoramagazine.org/wp-content/uploads/2021/06/Remmina02.jpg -[8]: https://fedoramagazine.org/wp-content/uploads/2021/06/Rammina03.jpg -[9]: https://fedoramagazine.org/wp-content/uploads/2021/06/Remmina04.jpg -[10]: https://www.freerdp.com/ diff --git a/sources/tech/20210701 How I build my personal website using containers with a Makefile.md b/sources/tech/20210701 How I build my personal website using containers with a Makefile.md deleted file mode 100644 index 12233855c7..0000000000 --- a/sources/tech/20210701 How I build my personal website using containers with a Makefile.md +++ /dev/null @@ -1,212 +0,0 @@ -[#]: subject: (How I build my personal website using containers with a Makefile) -[#]: via: (https://opensource.com/article/21/7/manage-containers-makefile) -[#]: author: (Chris Collins https://opensource.com/users/clcollins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How I build my personal website using containers with a Makefile -====== -Simplify container management by combining the commands to build, test, -and deploy a project in a Makefile. -![Parts, modules, containers for software][1] - -The `make` utility and its related [Makefile][2] have been used to build software for a long time. The Makefile defines a set of commands to run, and the `make` utility runs them. It is similar to a Dockerfile or Containerfile—a set of commands used to build container images. - -Together, a Makefile and Containerfile are an excellent way to manage a container-based project. The Containerfile describes the contents of the container image, and the Makefile describes how to manage the project itself: kicking the image build, testing, and deployment, among other helpful commands. - -### Make targets - -The Makefile consists of "targets": one or more commands grouped under a single command. You can run each target by running the `make` command followed by the target you want to run: - - -``` -# Runs the "build_image" make target from the Makefile -$ make build_image -``` - -This is the beauty of the Makefile. You can build a collection of targets for each task that needs to be performed manually. In the context of a container-based project, this includes building the image, pushing it to a registry, testing the image, and even deploying the image and updating the service running it. I use a Makefile for my personal website to do all these tasks in an easy, automated way. - -### Build, test, deploy - -I build my website using [Hugo][3], a static website generator that builds static HTML from YAML files. 
I use Hugo to build the HTML files for me, then build a container image with those files and [Caddy][4], a fast and simple web server, and run that image as a container. (Both Hugo and Caddy are open source, Apache-licensed projects.) I use a Makefile to make building and deploying that image to production much easier. - -The first target in the Makefile is appropriately the `image_build` command: - - -``` -image_build: -  podman build --format docker -f Containerfile -t $(IMAGE_REF):$(HASH) . -``` - -This target invokes [Podman][5] to build an image from the Containerfile included in the project. There are some variables in the command above—what are they? Variables can be specified in the Makefile, similarly to Bash or a programming language. I use them for a variety of things within the Makefile, but the most useful is building the image reference to be pushed to remote container image registries: - - -``` -# Image values -REGISTRY := "us.gcr.io" -PROJECT := "my-project-name" -IMAGE := "some-image-name" -IMAGE_REF := $(REGISTRY)/$(PROJECT)/$(IMAGE) - -# Git commit hash -HASH := $(shell git rev-parse --short HEAD) -``` - -Using these variables, the `image_build` target builds an image reference like `us.gcr.io/my-project-name/my-image-name:abc1234` using the short Git revision hash as the image tag so that it can be tied to the code that built it easily. - -The Makefile then tags that image as `:latest`. I don't generally use `:latest` for anything in production, but further down in this Makefile, it will come in useful for cleanup: - - -``` -image_tag: -  podman tag $(IMAGE_REF):$(HASH) $(IMAGE_REF):latest -``` - -So, now the image has been built and needs to be validated to make sure it meets some minimum requirements. For my personal website, this is honestly just, "does the webserver start and return something?" This could be accomplished with shell commands in the Makefile, but it was easier for me to write a Python script that starts a container with Podman, issues an HTTP request to the container, verifies it receives a reply, and then cleans up the container. Python's "try, except, finally" exception handling is perfect for this and considerably easier than replicating the same logic from shell commands in a Makefile: - - -``` -#!/usr/bin/env python3 - -import time -import argparse -from subprocess import check_call, CalledProcessError -from urllib.request import urlopen, Request - -parser = argparse.ArgumentParser() -parser.add_argument('-i', '--image', action='store', required=True, help='image name') -args = parser.parse_args() - -print(args.image) - -try: -    check_call("podman rm smk".split()) -except CalledProcessError as err: -    pass - -check_call( -    "podman run --rm --name=smk -p 8080:8080 -d {}".format(args.image).split() -) - -time.sleep(5) - -r = Request("", headers={'Host': 'chris.collins.is'}) -try: -    print(str(urlopen(r).read())) -finally: -    check_call("podman kill smk".split()) -``` - -This could be a more thorough test. For example, during the build process, the Git revision hash could be built into the response, and the test could check that the response included the expected hash. This would have the benefit of verifying that at least some of the expected content is there. - -If all goes well with the tests, then the image is ready to be deployed. I use Google's Cloud Run service to host my website, and like any of the major cloud services, there is an excellent command-line interface (CLI) tool that I can use to interact with the service. 
Since Cloud Run is a container service, deployment consists of pushing the images built locally to a remote container registry, and then kicking off a rollout of the service using the `gcloud` CLI tool. - -You can do the push using Podman or Skopeo (or Docker, if you're using it). My push target pushes the `$(IMAGE_REF):$(HASH)` image and also the `:latest` tag: - - -``` -push: -  podman push --remove-signatures $(IMAGE_REF):$(HASH) -  podman push --remove-signatures $(IMAGE_REF):latest -``` - -After the image has been pushed, use the `gcloud run deploy` command to roll out the newest image to the project and make the new image live. Once again, the Makefile comes in handy here. I can specify the `--platform` and `--region` arguments as variables in the Makefile so that I don't have to remember them each time. Let's be honest: I write so infrequently for my personal blog, there is zero chance I would remember these variables if I had to type them from memory each time I deployed a new image: - - -``` -rollout: -  gcloud run deploy $(PROJECT) --image $(IMAGE_REF):$(HASH) --platform $(PLATFORM) --region $(REGION) -``` - -### More targets - -There are additional helpful `make` targets. When writing new stuff or testing CSS or code changes, I like to see what I'm working on locally without deploying it to a remote server. For this, my Makefile has a `run_local` command, which spins up a container with the contents of my current commit and opens my browser to the URL of the page hosted by the locally running webserver: - - -``` -.PHONY: run_local -run_local: -  podman stop mansmk ; podman rm mansmk ; podman run --name=mansmk --rm -p $(HOST_ADDR):$(HOST_PORT):$(TARGET_PORT) -d $(IMAGE_REF):$(HASH) && $(BROWSER) $(HOST_URL):$(HOST_PORT) -``` - -I also use a variable for the browser name, so I can test with several if I want to. By default, it will open in Firefox when I run `make run_local`. If I want to test the same thing in Google, I run `make run_local BROWSER="google-chrome"`. - -When working with containers and container images, cleaning up old containers and images is an annoying chore, especially when you iterate frequently. I include targets in my Makefile for handling these tasks, too. When cleaning up a container, if the container doesn't exist, Podman or Docker will return with an exit code of 125. Unfortunately, `make` expects each command to return 0 or it will stop processing, so I use a wrapper script to handle that case: - - -``` -#!/usr/bin/env bash - -ID="${@}" - -podman stop ${ID} 2>/dev/null - -if [[ $?  == 125 ]] -then -  # No such container -  exit 0 -elif [[ $? == 0 ]] -then -  podman rm ${ID} 2>/dev/null -else -  exit $? -fi -``` - -Cleaning images requires a bit more logic, but it can all be done within the Makefile. To do this easily, I add a label (via the Containerfile) to the image when it's being built. This makes it easy to find all the images with these labels. The most recent of these images can be identified by looking for the `:latest` tag. 
Finally, all of the images, except those pointing to the image tagged with `:latest`, can be deleted: - - -``` -clean_images: -  $(eval LATEST_IMAGES := $(shell podman images --filter "label=my-project.purpose=app-image" --no-trunc | awk '/latest/ {print $$3}')) -  podman images --filter "label=my-project.purpose=app-image" --no-trunc --quiet | grep -v $(LATEST_IMAGES) | xargs --no-run-if-empty --max-lines=1 podman image rm -``` - -This is the point where using a Makefile for managing container projects really comes together into something cool. To this point, the Makefile includes commands for building and tagging images, testing, pushing images, rolling out a new version, cleaning up a container, cleaning up images, and running a local version. Running each of these with `make image_build && make image_tag && make test`… etc. is considerably easier than running each of the original commands, but it can be simplified even further. - -A Makefile can group commands into a target, allowing multiple targets to run with a single command. For example, my Makefile groups the `image_build` and `image_tag` targets under the `build` target, so I can run both by simply using `make build`. Even better, these targets can be further grouped into the default `make` target, `all`, allowing me to run all of them in order by executing `make all` or more simply, `make`. - -For my project, I want the default `make` action to include everything from building the image to testing, deploying, and cleaning up, so I include the following targets: - - -``` -.PHONY: all -all: build test deploy clean - -.PHONY: build image_build image_tag -build: image_build image_tag - -.PHONY: deploy push rollout -deploy: push rollout - -.PHONY: clean clean_containers clean_images -clean: clean_containers clean_images -``` - -This does everything I've talked about in this article, except the `make run_local` target, in a single command: `make`. - -### Conclusion - -A Makefile is an excellent way to manage a container-based project. By combining all the commands necessary to build, test, and deploy a project into `make` targets within the Makefile, all the "meta" work—everything aside from writing the code—can be simplified and automated. The Makefile can even be used for code-related tasks: running unit tests, maintaining modules, compiling binaries and checksums. While it can't yet write code for you, using a Makefile combined with the benefits of a containerized, cloud-based service can `make` (wink, wink) managing many aspects of a project much easier. 
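As an illustration of the code-related tasks mentioned in the conclusion (running unit tests, compiling checksums), here is a minimal sketch of what such targets could look like. These targets are not part of the Makefile described above; the target names, the test runner, and the file list are illustrative assumptions, and recipe lines must be indented with a literal tab in a real Makefile:

```
# Hypothetical code-related targets (not from the project above)
.PHONY: unit_test checksums

# Run the unit test suite with whatever runner your project uses
unit_test:
	python3 -m unittest discover -s tests

# Record checksums for the files that define the build
checksums:
	sha256sum Containerfile Makefile > SHA256SUMS
```

Targets like these can be chained into the default `all` target in exactly the same way as the build, test, and deploy targets shown earlier.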
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/manage-containers-makefile - -作者:[Chris Collins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clcollins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software) -[2]: https://opensource.com/article/18/8/what-how-makefile -[3]: https://gohugo.io/ -[4]: https://caddyserver.com/ -[5]: https://podman.io diff --git a/sources/tech/20210701 Try Dolibarr, an open source customer relationship management platform.md b/sources/tech/20210701 Try Dolibarr, an open source customer relationship management platform.md deleted file mode 100644 index 7549782409..0000000000 --- a/sources/tech/20210701 Try Dolibarr, an open source customer relationship management platform.md +++ /dev/null @@ -1,98 +0,0 @@ -[#]: subject: (Try Dolibarr, an open source customer relationship management platform) -[#]: via: (https://opensource.com/article/21/7/open-source-dolibarr) -[#]: author: (Pradeep Vijayakumar https://opensource.com/users/deepschennai) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Try Dolibarr, an open source customer relationship management platform -====== -Maintain a customer database and send promotions and offers with -Dolibarr's CRM features. -![a handshake ][1] - -No matter what industry you work in, a key aspect of sustaining your business is keeping your customers. In the customer-relations domain, we call this _customer retention_. - -Whether you run a retail store, restaurant, pub, supermarket, gym, or any other business, you need a reliable way to keep in touch with your customers. After all, they're customers because they like what you do, and, if they've shared their contact information with you, they want to hear more about what you have to offer. Sending them discount coupons, promotions, and special offers benefits your customers and helps ensure they remember your brand and come back to do business with you again. - -So, how can you achieve this? - -I work with [many other people][2] on the [Dolibarr][3] project. It's an open source enterprise resource planning (ERP) and customer relationship management (CRM) software. Dolibarr provides a whole range of ERP features, including point-of-sale (POS), invoicing, stock and inventory management, sales orders, purchase orders, and human resources management. This article focuses on Dolibarr's CRM features, which help you maintain a database of your customers and connect with them to send promotions and offers. - -Even if you've never used a CRM system before, Dolibarr makes it easy to manage your customers and, as long as you put in the effort, enhance customer loyalty. - -### Install Dolibarr CRM - -Dolibarr is open source, so you can [download][4] it and run it locally. If your store's staff includes more than a few people, you probably need a few networked Dolibarr instances. Your systems administrator can set that up for you or, if you're on your own, many hosting service providers offer one-click installers, such as Installatron and Softaculous. - -In the interim, you can try Dolibarr's [online demo][5]. 
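If you want to evaluate Dolibarr locally before committing to a full installation, running it in a container is another quick option. The following is only a sketch: the `dolibarr/dolibarr` image name and the container's internal port 80 are assumptions to verify against the image's own documentation, and `podman` can be substituted for `docker`:

```
# Hypothetical quick start using a community container image
docker run -d --name dolibarr -p 8080:80 dolibarr/dolibarr
# Then open http://localhost:8080 and follow the web installer
```

Once the installer finishes, you can log in and continue with the steps below.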
- -### Add customer data - -The first step to getting to know your customers is getting your customers' information into your CRM system. You may not have this data yet, so you'll be starting fresh, or you might have a database or spreadsheet from a system that hasn't been working out for you. Dolibarr imports a wide variety of formats, so it's relatively painless to migrate. - -For the sake of simplicity, I'll assume you're entering new customers. To enter a new customer into your Dolibarr system, navigate to the **Third-parties** menu, and click on the **New Customer** menu item. - -![add a new customer to Dolibarr][6] - -(Pradeep Vijayakumar, [CC BY-SA 4.0][7]) - -All the fields are configurable, and you can add and remove fields if you want. Define a marketing strategy on how you want to connect with your customers. It could be email, SMS, Facebook, Twitter, or another way your customers prefer. Once you have defined the communication channel, you know what information you need to capture for each customer. - -For example, if you've chosen email as your communication method, you know to ask your customers for an email address so that you can put it into the system, along with their name, location, and any other information that may be important to you. - -### Set up an email campaign - -Imagine you're running a weekend promotion with a 20% discount on selected products. Here's how to run an email campaign to announce this offer to all your customers in just a few clicks. - -First, click on the **Tools** tab and the **New Emailing** link. You can use the editor's WYSIWYG capabilities to design attractive emails. - -![Drafting a marketing email with Dolibarr's WYSIWYG Editor][8] - -(Pradeep Vijayakumar, [CC BY-SA 4.0][7]) - -You can use substitution variables to individualize your customers' name, location, gender, etc., as long as you have captured this information in the system (use the **?** tool tip to get the list of substitution variables). Because this email will go out to all the people in your database, you must use the substitution variables to represent any customer-specific data, such as your customers' names. - -Once you've drafted your email, the next step is choosing your customer list. Navigate to the **Recipients** tab and choose **Third parties (by categories)**. - -![Add customers to an email campaign list][9] - -(Pradeep Vijayakumar, [CC BY-SA 4.0][7]) - -All your customers should be included in this email list; you can confirm this by looking at the count displayed next to the list and under **Number of distinct recipients**. - -You can now click on **Validate** and then **Send** to send your email to all of your customers. Dolibarr automatically substitutes the substitution variables with actual customer data. You can also view the delivery reports for the emails that were sent out. - -### Integrations - -Because the marketplace is ever-changing, CRM software needs to keep pace with what customers use for communication. Dolibarr is designed for integration. You can, for instance, manage SMS marketing the same way you manage email marketing. The same is true for WhatsApp and many other targets. - -### Learn more - -All things considered, I think Dolibarr is an indispensable tool for implementing a customer relationship and customer retention strategy for your business. You can learn more about Dolibarr's CRM features by watching [this video on YouTube][10]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-source-dolibarr - -作者:[Pradeep Vijayakumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/deepschennai -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/handshake_business_contract_partner.png?itok=NrAWzIDM (a handshake ) -[2]: https://www.dolibarr.org/who-works-on-the-dolibarr-project-.php -[3]: http://dolibarr.org/ -[4]: https://www.dolibarr.org/downloads.php -[5]: https://www.dolibarr.org/onlinedemo.php -[6]: https://opensource.com/sites/default/files/uploads/dolibarr_add-customer.png (add a new customer to Dolibarr) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://opensource.com/sites/default/files/uploads/dolibarr_create-email.png (Drafting a marketing email with Dolibarr's WYSIWYG Editor) -[9]: https://opensource.com/sites/default/files/uploads/dolibarr_select-recipients.png (Add customers to an email campaign list) -[10]: https://youtu.be/9ETxdpVsgU0 diff --git a/sources/tech/20210702 Bind a cloud event to Knative.md b/sources/tech/20210702 Bind a cloud event to Knative.md deleted file mode 100644 index 46f218a2e5..0000000000 --- a/sources/tech/20210702 Bind a cloud event to Knative.md +++ /dev/null @@ -1,333 +0,0 @@ -[#]: subject: (Bind a cloud event to Knative) -[#]: via: (https://opensource.com/article/21/7/cloudevents-bind-java-knative) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Bind a cloud event to Knative -====== -CloudEvents provides a common format to describe events and increase -interoperability. -![woman on laptop sitting at the window][1] - -Events have become an essential piece of modern reactive systems. Indeed, events can be used to communicate from one service to another, trigger out-of-band processing, or send a payload to a service like Kafka. The problem is that event publishers may express event messages in any number of different ways, regardless of content. For example, some messages are payloads in JSON format to serialize and deserialize messages by application. Other applications use binary formats such as [Avro][2] and [Protobuf][3] to transport payloads with metadata. This is an issue when building an event-driven architecture that aims to easily integrate external systems and reduce the complexity of message transmission. - -[CloudEvents][4] is an open specification providing a common format to describe events and increase interoperability. Many cloud providers and middleware stacks, including [Knative][5], [Kogito][6], [Debezium][7], and [Quarkus][8] have adopted this format after the release of CloudEvents 1.0. Furthermore, developers need to decouple relationships between event producers and consumers in serverless architectures. [Knative Eventing][9] is consistent with the CloudEvents specification, providing common formats for creating, parsing, sending, and receiving events in any programming language. Knative Eventing also enables developers to late-bind event sources and event consumers. 
For example, a cloud event using JSON might look like this: - - -``` -{ -    "specversion" : "1.0", (1) -    "id" : "11111", (2) -    "source" : "", (3) -    "type" : "knative-events-binding", (4) -    "subject" : "cloudevents", (5) -    "time" : "2021-06-04T16:00:00Z", (6) -    "datacontenttype" : "application/json", (7) -    "data" : "{\"message\": \"Knative Events\"}" (8) -} -``` - -In the above code: -(1) Which version of the CloudEvents specification to use -(2) The ID field for a specific event; combining the `id` and the `source` provides a unique identifier -(3) The Uniform Resource Identifier (URI) identifies the event source in terms of the context where it happened or the application that emitted it -(4) The type of event; this can be any descriptive string you choose -(5) Additional details about the event (optional) -(6) The event creation time (optional) -(7) The content type of the data attribute (optional) -(8) The business data for the specific event - -Here is a quick example of how developers can enable CloudEvents binding with Knative and the [Quarkus Funqy extension][10]. - -### 1\. Create a Quarkus Knative event Maven project - -Generate a Quarkus project (e.g., `quarkus-serverless-cloudevent`) to create a simple function with Funqy Knative events binding extensions: - - -``` -$ mvn io.quarkus:quarkus-maven-plugin:2.0.0.CR3:create \ -       -DprojectGroupId=org.acme \ -       -DprojectArtifactId=quarkus-serverless-cloudevent \ -       -Dextensions="funqy-knative-events" \ -       -DclassName="org.acme.getting.started.GreetingResource" -``` - -### 2\. Run the serverless event function locally - -Open the `CloudEventGreeting.java` file in the `src/main/java/org/acme/getting/started/funqy/cloudevent` directory. The `@Funq` annotation enables the `myCloudEventGreeting` method to map the input data to the cloud event message automatically: - - -``` -public class CloudEventGreeting { -    private static final Logger log = Logger.getLogger(CloudEventGreeting.class); - -    @Funq -    public void myCloudEventGreeting(Person input) { -        log.info("Hello " + input.getName()); -    } -} -``` - -Run the function via Quarkus Dev Mode: - - -``` -`$ ./mvnw quarkus:dev` -``` - -The output should look like this: - - -``` -__  ____  __  _____   ___  __ ____  ______ - --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ - -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   -\--\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/   -INFO  [io.quarkus] (Quarkus Main Thread) quarkus-serverless-cloudevent 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.0.0.CR3) started in 1.546s. Listening on: -INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. -INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, funqy-knative-events, smallrye-context-propagation] - -\-- -Tests paused, press [r] to resume -``` - -**Note**: Quarkus 2.x provides a continuous testing feature, so you can keep testing your code as you add or update it by pressing `r` in the terminal. - -Now the CloudEvents function is running in your local development environment.
So, send a cloud event to the function over the HTTP protocol: - - -``` -curl -v \ -  -H "Content-Type:application/json" \ -  -H "Ce-Id:1" \ -  -H "Ce-Source:cloud-event-example" \ -  -H "Ce-Type:myCloudEventGreeting" \ -  -H "Ce-Specversion:1.0" \ -  -d "{\"name\": \"Daniel\"}" -``` - -The output should end with: - - -``` -`HTTP/1.1 204 No Content` -``` - -Go back to the terminal, and the log should look like this: - - -``` -`INFO [org.acm.get.sta.fun.clo.CloudEventGreeting] (executor-thread-0) Hello Daniel` -``` - -### 3\. Deploy the serverless event function to Knative - -Add a `container-image-docker` extension to the Quarkus Funqy project. The extension enables you to build a container image based on the serverless event function and then push it to an external container registry (e.g., [Docker Hub][11], [Quay.io][12]): - - -``` -`$ ./mvnw quarkus:add-extension -Dextensions="container-image-docker"` -``` - -Open the `application.properties` file in the `src/main/resources/` directory. Then add the following variables to configure Knative and Kubernetes resources (make sure to replace `yourAccountName` with your container registry's account name, e.g., your username in Docker Hub): - - -``` -quarkus.container-image.build=true -quarkus.container-image.push=true -quarkus.container-image.builder=docker -quarkus.container-image.image=docker.io/yourAccountName/funqy-knative-events-codestart -``` - -Run the following command to containerize the function and then push it to the Docker Hub container registry automatically: - - -``` -`$ ./mvnw clean package` -``` - -The output should end with `BUILD SUCCESS`. - -Open the `funqy-service.yaml` file in the `src/main/k8s` directory. Then replace `yourAccountName` with your account information in the Docker Hub registry: - - -``` -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: -  name: funqy-knative-events-codestart -spec: -  template: -    metadata: -      name: funqy-knative-events-codestart-v1 -      annotations: -        autoscaling.knative.dev/target: "1" -    spec: -      containers: -        - image: docker.io/yourAccountName/funqy-knative-events-codestart -``` - -Assuming the container image pushed successfully, create the Knative service based on the event function using the following `kubectl` command-line tool (be sure to log into the Kubernetes cluster and change the namespace where you want to create the Knative service): - - -``` -`$ kubectl create -f src/main/k8s/funqy-service.yaml` -``` - -The output should look like this: - - -``` -`service.serving.knative.dev/funqy-knative-events-codestart created` -``` - -Create a default broker to subscribe to the event function. Use the [kn][13] Knative Serving command-line tool: - - -``` -`$ kn broker create default` -``` - -Open the `funqy-trigger.yaml` file in the `src/main/k8s` directory and replace it with: - - -``` -apiVersion: eventing.knative.dev/v1 -kind: Trigger -metadata: -  name: my-cloudevent-greeting -spec: -  broker: default -  subscriber: -    ref: -      apiVersion: serving.knative.dev/v1 -      kind: Service -      name: funqy-knative-events-codestart -``` - -Create a trigger using the `kubectl` command-line tool: - - -``` -`$ kubectl create -f src/main/k8s/funqy-trigger.yaml` -``` - -The output should look like this: - - -``` -`trigger.eventing.knative.dev/my-cloudevent-greeting created` -``` - -### 4\. 
Send a cloud event to the serverless event function in Kubernetes - -Find out the function's route URL and check that the output looks like this: - - -``` -$ kubectl get rt -NAME URL READY REASON -funqy-knative-events-codestart     True -``` - -Send a cloud event to the function over the HTTP protocol: - - -``` -curl -v \ -  -H "Content-Type:application/json" \ -  -H "Ce-Id:1" \ -  -H "Ce-Source:cloud-event-example" \ -  -H "Ce-Type:myCloudEventGreeting" \ -  -H "Ce-Specversion:1.0" \ -  -d "{\"name\": \"Daniel\"}" -``` - -The output should end with: - - -``` -`HTTP/1.1 204 No Content` -``` - -Once the function pod scales up, take a look at the pod logs. Use the following `kubectl` command to retrieve the pod's name: - - -``` -`$ kubectl get pod` -``` - -The output will look like this: - - -``` -NAME                                                           READY   STATUS    RESTARTS   AGE -funqy-knative-events-codestart-v1-deployment-6569f6dfc-zxsqs   2/2     Running   0          11s -``` - -Run the following `kubectl` command to verify that the pod's logs match the local testing's result:  - - -``` -`$ kubectl logs funqy-knative-events-codestart-v1-deployment-6569f6dfc-zxsqs -c user-container | grep CloudEventGreeting` -``` - -The output looks like this: - - -``` -`INFO  [org.acm.get.sta.fun.clo.CloudEventGreeting] (executor-thread-0) Hello Daniel` -``` - -If you deploy the event function to an [OpenShift Kubernetes Distribution][14] (OKD) cluster, you will find the deployment status in the topology view: - -![Deployment status][15] - -(Daniel Oh, [CC BY-SA 4.0][16]) - -You can also find the pod's logs in the **Pod details** tab: - -![Pod details][17] - -(Daniel Oh, [CC BY-SA 4.0][16]) - -### What's next? - -Developers can bind a cloud event to Knative using Quarkus functions. Quarkus also scaffolds Kubernetes manifests, such as Knative services and triggers, to process cloud events over a channel or HTTP request. - -Learn more serverless and Quarkus topics through OpenShift's [interactive self-service learning portal][18]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/cloudevents-bind-java-knative - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop) -[2]: https://avro.apache.org/ -[3]: https://developers.google.com/protocol-buffers -[4]: https://cloudevents.io/ -[5]: https://knative.dev/ -[6]: https://kogito.kie.org/ -[7]: https://debezium.io/ -[8]: https://quarkus.io/ -[9]: https://knative.dev/docs/eventing/ -[10]: https://opensource.com/article/21/6/quarkus-funqy -[11]: https://hub.docker.com/ -[12]: https://quay.io/ -[13]: https://knative.dev/docs/client/install-kn/ -[14]: https://www.okd.io/ -[15]: https://opensource.com/sites/default/files/uploads/5_deployment-status.png (Deployment status) -[16]: https://creativecommons.org/licenses/by-sa/4.0/ -[17]: https://opensource.com/sites/default/files/uploads/5_pod-details.png (Pod details) -[18]: https://learn.openshift.com/serverless/ diff --git a/sources/tech/20210702 Run Prometheus at home in a container.md b/sources/tech/20210702 Run Prometheus at home in a container.md deleted file mode 100644 index 13b19b7d5e..0000000000 --- a/sources/tech/20210702 Run Prometheus at home in a container.md +++ /dev/null @@ -1,330 +0,0 @@ -[#]: subject: (Run Prometheus at home in a container) -[#]: via: (https://opensource.com/article/21/7/run-prometheus-home-container) -[#]: author: (Chris Collins https://opensource.com/users/clcollins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Run Prometheus at home in a container -====== -Keep tabs on your home network by setting up a Prometheus container -image. -![A graph of a wave.][1] - -[Prometheus][2] is an open source monitoring and alerting system that provides insight into the state and history of a computer, application, or cluster by storing defined metrics in a time-series database. It provides a powerful query language, PromQL, to help you explore and understand the data it stores. Prometheus also includes an Alertmanager that makes it easy to trigger notifications when the metrics you collect cross certain thresholds. Most importantly, Prometheus is flexible and easy to set up to monitor all kinds of metrics from whatever system you need to track. - -As site reliability engineers (SREs) on Red Hat's OpenShift Dedicated team, we use Prometheus as a central component of our monitoring and alerting for clusters and other aspects of our infrastructure. Using Prometheus, we can predict when problems may occur by following trends in the data we collect from nodes in the cluster and services we run. We can trigger alerts when certain thresholds are crossed or events occur. As a data source for [Grafana][3], Prometheus enables us to produce graphs of data over time to see how a cluster or service is behaving. - -Prometheus is a strategic piece of infrastructure for us at work, but it is also useful to me at home. Luckily, it's not only powerful and useful but also easy to set up in a home environment, with or without Kubernetes, OpenShift, containers, etc. 
This article shows you how to build a Prometheus container image and set up the Prometheus Node Exporter to collect data from home computers. It also explains some basic PromQL, the query language Prometheus uses to return data and create graphs. - -### Build a Prometheus container image - -The Prometheus project publishes its own container image, `quay.io/prometheus/prometheus`. However, I enjoy building my own for home projects and prefer to use the [Red Hat Universal Base Image][4] family for my projects. These images are freely available for anyone to use. I prefer the [Universal Base Image 8 Minimal][5] (ubi8-minimal) image based on Red Hat Enterprise Linux 8. The ubi8-minimal image is a smaller version of the normal ubi8 images. It is larger than the official Prometheus container image's ultra-sparse Busybox image, but since I use the Universal Base Image for other projects, that layer is a wash in terms of disk space for me. (If two images use the same layer, that layer is shared between them and doesn't use any additional disk space after the first image.) - -My Containerfile for this project is split into a [multi-stage build][6]. The first, `builder`, installs a few tools via DNF packages to make it easier to download and extract a Prometheus release from GitHub, then downloads a specific release for whatever architecture I need (either ARM64 for my [Raspberry Pi Kubernetes cluster][7] or AMD64 for running locally on my laptop), and extracts it: - - -``` -# The first stage build, downloading Prometheus from Github and extracting it - -FROM registry.access.redhat.com/ubi8/ubi-minimal as builder -LABEL maintainer "Chris Collins <[collins.christopher@gmail.com][8]>" - -# Install packages needed to download and extract the Prometheus release -RUN microdnf install -y gzip jq tar - -# Replace the ARCH for different architecture versions, eg: "linux-arm64.tar.tz" -ENV PROMETHEUS_ARCH="linux-amd64.tar.gz" - -# Replace "tag/<tag_name>" with "latest" to build whatever the latest tag is at the time -ENV PROMETHEUS_VERSION="tags/v2.27.0" -ENV PROMETHEUS="" - -# The checksum file for the Prometheus project is "sha256sums.txt" -ENV SUMFILE="sha256sums.txt" - -RUN mkdir /prometheus -WORKDIR /prometheus - -# Download the checksum -RUN /bin/sh -c "curl -sSLf $(curl -sSLf ${PROMETHEUS} -o - | jq -r '.assets[] | select(.name|test(env.SUMFILE)) | .browser_download_url') -o ${SUMFILE}" - -# Download the binary tarball -RUN /bin/sh -c "curl -sSLf -O $(curl -sSLf ${PROMETHEUS} -o - | jq -r '.assets[] | select(.name|test(env.PROMETHEUS_ARCH)) |.browser_download_url')" - -# Check the binary and checksum match -RUN sha256sum --check --ignore-missing ${SUMFILE} - -# Extract the tarball -RUN tar --extract --gunzip --no-same-owner --strip-components=1 --directory /prometheus --file *.tar.gz -``` - -The second stage of the multi-stage build copies the extracted Prometheus files to a pristine ubi8-minimal image (there's no need for the extra tools from the first image to take up space in the final image) and links the binaries into the `$PATH`: - - -``` -# The second build stage, creating the final image -FROM registry.access.redhat.com/ubi8/ubi-minimal -LABEL maintainer "Chris Collins <[collins.christopher@gmail.com][8]>" - -# Get the binary from the builder image -COPY --from=builder /prometheus /prometheus - -WORKDIR /prometheus - -# Link the binary files into the $PATH -RUN ln prometheus /bin/ -RUN ln promtool /bin/ - -# Validate prometheus binary -RUN prometheus --version - -# Add dynamic target 
(file_sd_config) support to the prometheus config -# -RUN echo -e "\n\ -  - job_name: 'dynamic'\n\ -    file_sd_configs:\n\ -    - files:\n\ -      - data/sd_config*.yaml\n\ -      - data/sd_config*.json\n\ -      refresh_interval: 30s\ -" >> prometheus.yml - -EXPOSE 9090 -VOLUME ["/prometheus/data"] - -ENTRYPOINT ["prometheus"] -CMD ["--config.file=prometheus.yml"] -``` - -Build the image: - - -``` -# Build the Prometheus image from the Containerfile -podman build --format docker -f Containerfile -t prometheus -``` - -I'm using [Podman][9] as my container engine at home, but you can use Docker if you prefer. Just replace the `podman` command with `docker` above. - -After building this image, you're ready to run Prometheus locally and start collecting some metrics. - -### Running Prometheus - - -``` -# This only needs to be done once -# This directory will store the metrics Prometheus collects so they persist between container restarts -mkdir data - -# Run Prometheus locally, using the ./data directory for persistent data storage -# Note that the image name, prometheus:latest, will be whatever image you are using -podman run --mount=type=bind,src=$(pwd)/data,dst=/prometheus/data,relabel=shared --publish=127.0.0.1:9090:9090 --detach prometheus:latest -``` - -The Podman command above runs Prometheus in a container, mounting the Data directory into the container and allowing you to access the Prometheus web interface with a browser only from the machine running the container. If you want to access Prometheus from other hosts, replace `--publish=127.0.0.1:9090:9090` in the command with `--publish=9090:9090`. - -Once the container is running, you should be able to access Prometheus at `http://127.0.0.1:9090/graph`. There is not much to look at yet, though. By default, Prometheus knows only to check itself (the Prometheus service) for metrics related to itself. For example, navigating to the link above and entering a query for `prometheus_http_requests_total` will show how many HTTP requests Prometheus has received (most likely, just those you have made so far). - -![number of HTTP requests Prometheus received][10] - -(Chris Collins, [CC BY-SA 4.0][11]) - -This query can also be referenced as a URL: - - -``` -`http://127.0.0.1:9090/graph?g0.expr=prometheus_http_requests_total&g0.tab=1&g0.stacked=0&g0.range_input=1h` -``` - -Clicking it should take you to the same results. By default, Prometheus scrapes for metrics every 15 seconds, so these metrics will update over time (assuming they have changed since the last scrape). - -You can also graph the data over time by entering a query (as above) and clicking the **Graph** tab. - -![Graphing data over time][12] - -(Chris Collins, [CC BY-SA 4.0][11]) - -Graphs can also be referenced as a URL: - - -``` -`http://127.0.0.1:9090/graph?g0.expr=prometheus_http_requests_total&g0.tab=0&g0.stacked=0&g0.range_input=1h` -``` - -This internal data is not helpful by itself, though. So let's add some useful metrics. - -### Add some data - -The Prometheus project publishes a program called [Node Exporter][13] for exporting useful metrics about the computer or node it is running on. You can use Node Exporter to quickly create a metrics target for your local machine, exporting data such as memory utilization and CPU consumption for Prometheus to track. - -In the interest of brevity, just run the `quay.io/prometheus/node-exporter:latest` container image published by the Prometheus project to get started.
- -Run the following with Podman or your container engine of choice: - - -``` -`podman run --net="host" --pid="host" --mount=type=bind,src=/,dst=/host,ro=true,bind-propagation=rslave --detach quay.io/prometheus/node-exporter:latest --path.rootfs=/host` -``` - -This will start a Node Exporter on your local machine and begin publishing metrics on port 9100. You can see which metrics are being generated by opening `http://127.0.0.1:9100/metrics` in your browser. It will look similar to this: - - -``` -# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. -# TYPE go_gc_duration_seconds summary -go_gc_duration_seconds{quantile="0"} 0.000176569 -go_gc_duration_seconds{quantile="0.25"} 0.000176569 -go_gc_duration_seconds{quantile="0.5"} 0.000220407 -go_gc_duration_seconds{quantile="0.75"} 0.000220407 -go_gc_duration_seconds{quantile="1"} 0.000220407 -go_gc_duration_seconds_sum 0.000396976 -go_gc_duration_seconds_count 2 -``` - -Now you just need to tell Prometheus that the data is there. Prometheus uses a set of rules called [scrape_configs][14] that are defined in its configuration file, `prometheus.yml`, to decide what hosts to check for metrics and how often to check them. The scrape_configs can be set statically in the Prometheus config file, but that doesn't make Prometheus very flexible. Every time you add a new target, you would have to update the config file, stop Prometheus manually, and restart it. Prometheus has a better way, called [file-based service discovery][15]. - -In the Containerfile above, there's a stanza adding a dynamic file-based service discovery configuration to the Prometheus config file: - - -``` -RUN echo -e "\n\ -  - job_name: 'dynamic'\n\ -    file_sd_configs:\n\ -    - files:\n\ -      - data/sd_config*.yaml\n\ -      - data/sd_config*.json\n\ -      refresh_interval: 30s\ -" >> prometheus.ym -``` - -This tells Prometheus to look for files named `sd_config*.yaml` or `sd_config*.json` in the Data directory that are mounted into the running container and to check every 30 seconds to see if there are more config files or if they have changed at all. Using files with that naming convention, you can tell Prometheus to start looking for other targets, such as the Node Exporter you started earlier. - -Create a file named `sd_config_01.json` in the Data directory with the following contents, replacing `your_hosts_ip_address` with the IP address of the host running the Node Exporter: - - -``` -`[{"labels": {"job": "node"}, "targets": ["your_hosts_ip_address:9100"]}` -``` - -Check `http://127.0.0.1:9090/targets` in Prometheus; you should see Prometheus monitoring itself (inside the container) and the target you added for the host with the Node Exporter. Click on the link for this new target to see the raw data Prometheus has scraped. It should look familiar: - - -``` -# NOTE: Truncated for brevity -# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. 
-# TYPE go_gc_duration_seconds summary -go_gc_duration_seconds{quantile="0"} 3.6547e-05 -go_gc_duration_seconds{quantile="0.25"} 0.000107517 -go_gc_duration_seconds{quantile="0.5"} 0.00017582 -go_gc_duration_seconds{quantile="0.75"} 0.000503352 -go_gc_duration_seconds{quantile="1"} 0.008072206 -go_gc_duration_seconds_sum 0.029700021 -go_gc_duration_seconds_count 55 -``` - -This is the same data the Node Exporter is exporting: - - -``` -`http://127.0.0.1:9090/graph?g0.expr=rate(node_network_receive_bytes_total%7B%7D%5B5m%5D)&g0.tab=0&g0.stacked=0&g0.range_input=15m` -``` - -With this information, you can create your own rules and instrument your own applications to provide metrics for Prometheus to consume. - -### A light introduction to PromQL - -PromQL is Prometheus' query language and a powerful way to aggregate the time-series data stored in Prometheus. Prometheus shows you the output of a query as the raw result, or it can be displayed as a graph showing the trend of the data over time, like the `node_network_receive_bytes_total` example above. PromQL can be daunting to get into, and this article will not dive into a full tutorial on how to use it, but I will cover some basics. - -To get started, pull up the query interface for Prometheus: - - -``` -`http://127.0.0.1:9090/graph` -``` - -Look at the `node_network_receive_bytes_total` metrics in this example. Enter that string into the query field, and press Enter to display all the collected network metrics from the computer on which the Node Exporter is running. (Note that Prometheus provides an autocomplete feature, making it easy to explore the metrics it collects.) You may see several results, each with labels that have been applied to the data sent by the Node Exporter: - -![Network data received][16] - -(Chris Collins, [CC BY-SA 4.0][11]) - -Looking at the image above, you can see eight interfaces, each labeled by the device name (e.g., `{device="ensp12s0u1"}`), the instance they were collected from (in this case, all the same node), and the job node that was assigned in the `sd_config_01.json`. To the right of these is the latest raw metric data for this device. In the case of the `ensp12s0u1` device, it's received `4007938272` bytes of data over the interface since Prometheus started tracking the data. - -Note: The "job" label is useful in defining what kind of data is being collected. For example, "node" for metrics sent by the Node Exporter, or "cluster" for Kubernetes cluster data, or perhaps an application name for a specific service you may be monitoring. - -Click on the **Graph** tab, and you can see the metrics for these devices graphed over time (one hour by default). The time period can be adjusted using the `- +` toggle on the left. Historical data is displayed and graphed along with the current value. This provides valuable insight into how the data changes over time: - -![Graph of data changing over time][17] - -(Chris Collins, [CC BY-SA 4.0][11]) - -You can further refine the displayed data using the labels. This graph displays all the interfaces reported by the Node Exporter, but what if you are interested just in the wireless device? By changing the query to include the label `node_network_receive_bytes_total{device="wlp2s0"}`, you can evaluate just the data matching that label. 
Prometheus automatically adjusts the scale to a more human-readable one after the other devices' data is removed: - -![Graph of network data for one label][18] - -(Chris Collins, [CC BY-SA 4.0][11]) - -This data is helpful in itself, but Prometheus' PromQL also has several query functions that can be applied to the data to provide more information. For example, look again at the `rate()` function. The `rate()` function "calculates the per-second average rate of increase of the time series in the range vector." That's a fancy way of saying "shows how quickly the data grew." - -Looking at the graph for the wireless device above, you can see a slight curve—a slightly more vertical increase—in the line graph right around 19:00 hours. It doesn't look like much on its own but, using the `rate()` function, it is possible to calculate just how much larger the growth spike was around that timeframe. Using the query `rate(node_network_receive_bytes_total{device="wlp2s0"}[15m])` shows the rate that the received bytes increased for the wireless device, averaged per second over a 15-minute period: - -![Graph showing rate data increased][19] - -(Chris Collins, [CC BY-SA 4.0][11]) - -It is much more evident that around 19:00 hours, the wireless device received almost three times as much traffic for a brief period. - -PromQL can do much more than this. Using the `predict_linear()` function, Prometheus can make an educated guess about when a certain threshold will be crossed. Using the same wireless `network_receive_bytes` data, you can predict where the value will be over the next four hours based on the data from the previous four hours (or any combination you might be interested in). Try querying `predict_linear(node_network_receive_bytes_total{device="wlp2s0"}[4h], 4 * 3600)`. - -The important bit of the `predict_linear()` function above is `[4h], 4 * 3600`. The `[4h]` tells Prometheus to use the past four hours as a dataset and then to predict where the value will be over the next four hours (or `4 * 3600` since there are 3,600 seconds in an hour). Using the example above, Prometheus predicts that the wireless device will have received almost 95MB of data about an hour from now (your data will vary): - -![Graph showing predicted data that will be received][20] - -(Chris Collins, [CC BY-SA 4.0][11]) - -You can start to see how this might be useful, especially in an operations capacity. Kubernetes exports node disk usage metrics and includes a built-in alert using `predict_linear()` to estimate when a disk might run out of space. You can use all of these queries in conjunction with Prometheus' Alertmanager to notify you when various conditions are met—from network utilization being too high to disk space _probably_ running out in the next four hours and more. Alertmanager is another useful topic that I'll cover in a future article. - -### Conclusion - -Prometheus consumes metrics by scraping endpoints for specially formatted data. Data is tracked and can be queried for point-in-time info or graphed to show changes over time. Even better, Prometheus supports, out of the box, alerting rules that can hook in with your infrastructure in a variety of ways. Prometheus can also be used as a data source for other projects, like Grafana, to provide more sophisticated graphing information. - -In the real world at work, we use Prometheus to track metrics and provide alert thresholds that page us when clusters are unhealthy, and we use Grafana to make dashboards of data we need to view regularly. 
We export node data to track our nodes and instrument our operators to track their performance and health. Prometheus is the backbone of all of it. - -If you have been interested in Prometheus, keep your eyes peeled for follow-up articles. You'll learn about alerting when certain conditions are met, using Prometheus' built-in Alertmanager and integrations with it, more complicated PromQL, and how to instrument your own application and integrate it with Prometheus. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/run-prometheus-home-container - -作者:[Chris Collins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clcollins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_wavegraph.png?itok=z4pXCf_c (A graph of a wave.) -[2]: https://prometheus.io/ -[3]: https://grafana.com/ -[4]: https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image -[5]: https://catalog.redhat.com/software/containers/ubi8/ubi-minimal/5c359a62bed8bd75a2c3fba8 -[6]: https://docs.docker.com/develop/develop-images/multistage-build/ -[7]: https://opensource.com/article/20/6/kubernetes-raspberry-pi -[8]: mailto:collins.christopher@gmail.com -[9]: https://docs.podman.io/en/latest/Introduction.html -[10]: https://opensource.com/sites/default/files/uploads/prometheus_http_requests_total_query.png (number of HTTP requests Prometheus received) -[11]: https://creativecommons.org/licenses/by-sa/4.0/ -[12]: https://opensource.com/sites/default/files/uploads/prometheus_http_requests_total.png (Graphing data over time) -[13]: https://prometheus.io/docs/guides/node-exporter/ -[14]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config -[15]: https://prometheus.io/docs/guides/file-sd/ -[16]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total.png (Network data received) -[17]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total_graph_1.png (Graph of data changing over time) -[18]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total_wireless_graph.png (Graph of network data for one label) -[19]: https://opensource.com/sites/default/files/uploads/rate_network_receive_bytes_total_wireless_graph.png (Graph showing rate data increased) -[20]: https://opensource.com/sites/default/files/uploads/predict_linear_node_network_receive_bytes_total_wireless_graph.png (Graph showing predicted data that will be received) diff --git a/sources/tech/20210705 How I avoid breaking functionality when modifying legacy code.md b/sources/tech/20210705 How I avoid breaking functionality when modifying legacy code.md deleted file mode 100644 index a84b4e751d..0000000000 --- a/sources/tech/20210705 How I avoid breaking functionality when modifying legacy code.md +++ /dev/null @@ -1,116 +0,0 @@ -[#]: subject: (How I avoid breaking functionality when modifying legacy code) -[#]: via: (https://opensource.com/article/21/7/legacy-code) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How I avoid breaking functionality when modifying legacy code -====== -Extract 
methods give the biggest bang for the buck when it comes to -modifying legacy code while avoiding the risk of breaking the -functionality. -![Coding on a computer][1] - -Allow me a bit of introspection. I've been working in the software engineering field for 31 years. During those 31 years, I've modified a lot of legacy software. - -Over time, I've formed certain habits when working with legacy code. Because on most projects I get paid to deliver working software that is easy to maintain, I cannot afford the luxury of taking my sweet time trying to fully understand the legacy code I am about to modify. So, I tend to skim. Skimming the code helps me quickly identify relevant portions in the repo. It is a race against time, and I don't have cycles at my disposal to dwell on less-relevant minutia. I'm always going for the most relevant area in the code. Once I find it, I slow down and start analyzing it. - -I rely heavily on my power tools—integrated development environments (IDEs). It doesn't matter which power tool; these days, they're all pretty much capable of doing the same thing. What's important to me is having the ability to quickly find where functions are called and where variables are used. - -Sooner or later, after I'm done skimming the code and analyzing the code segment I intend to change, I identify a place where I want to insert some code. Now that I understand the meaning of the classes, components, and objects involved in performing the function, I write a test first. - -After that, I write code to make the test pass. I type the name of the object I intend to use and then press the dot key (**.**) and the IDE responds by giving me a full list of methods defined for that object. All those methods are callable from the location where my cursor is. - -I then pick the method that makes sense to me. I fill in the blanks (that is, I supply values for the expected arguments/parameters), save the change, and run the test. If the test passes, I'm done with that micro change. - -I typically repeat this activity many times per hour. Throughout the workday, it is not unusual to see it repeated dozens, even hundreds of times. - -I believe the way I modify software is not unique to my work habits. I think it describes a typical flow that many (I'd even say most) software engineers adhere to. - -### A few observations - -The first thing apparent in this way of modifying legacy software is the absence of any work on documentation. Experience shows that software developers very rarely spend time reaching out for documentation. Time spent preparing the documentation and generating it to produce HTML-style online documents is often wasted. - -Instead, most developers rely solely upon power tools. And rightly so—IDEs never lie, as they always offer the real-time picture of the system they are modifying, and documentation is usually stale. - -Another thing is that developers don't read the source code the way it was written. When writing code from scratch (first pass), many developers tend to write long functions. Source code tends to bunch up. Bunching code up makes it easier to read and reason about on the first pass and debug. But after the first pass, people rarely, if ever, consume the code the way it was written. If we catch ourselves reading a whole function from beginning to end, it is most likely because we have exhausted all other options and have no choice but to slow down and read the code in a pedestrian way. 
However, in my experience, that slow and orderly reading of the code seldom happens. - -### Problems caused by bunched-up code - -If you were to leave the code as it was written during the first pass (i.e., long functions, a lot of bunched-up code for easy initial understanding and debugging), it would render IDEs powerless. If you cram all capabilities an object can offer into a single, giant function, later, when you're trying to utilize that object, IDEs will be of no help. IDEs will show the existence of one method (which will probably contain a large list of parameters providing values that enforce the branching logic inside that method). So, you won't know how to really use that object unless you open its source code and read its processing logic very carefully. And even then, your head will probably hurt. - -Another problem with hastily cobbled-up, "bunched-up" code is that its processing logic is not testable. While you can still write an end-to-end test for that code (input values and the expected output values), you have no way of knowing if the bunched-up code is doing any other potentially risky processing. Also, you have no way of testing for edge cases, unusual scenarios, difficult-to-reproduce scenarios, etc. That renders your code untestable, which is a very bad thing to live with. - -### Break up bunched-up code by extracting methods - -Long functions or methods are always a sign of muddled thinking. When a block of code contains numerous statements, it usually means it is doing way too much processing. Cramming a lot of processing in one place typically means the developer hasn't carefully thought things through. - -You don't need to look further than how companies are typically organized. Instead of having hundreds of employees working in a single department, companies tend to break up into numerous smaller departments. That way, it is much clearer where responsibilities lie. - -Software code is no different. An application exists to automate a lot of intricate processing. Processing gets broken into multiple smaller steps, so each step must be mapped onto a separate, isolated block of code. You create such separate, isolated, and autonomous blocks of code by extracting methods. You take a long, bulky block of code and break it up by extracting responsibilities into separate blocks of code. - -### Extracted methods enable better naming - -Developers write software code, but it is much more often consumed (i.e., read) by developers than written. - -When consuming software code, it helps if the code is expressive. Expressiveness boils down to proper structure and proper naming. Consider the following statement: - - -``` -`if(((x && !y) && !b) || (b && y) && !(z >= 65))` -``` - -It would be literally impossible to understand the meaning and the intention of this statement without running the code and stepping through it with a debugger. Such activity is called GAK (Geek at Keyboard). It is 100% unproductive and quite wasteful. - -Here is where the extract method and proper naming practices come to the rescue. Take the complex statement contained within the `if` statement, extract it into its own method, and give that method a meaningful name.
For example: - - -``` -public bool IsEligible(bool b, bool x, bool y, int z) { -  return ((x && !y) && !b) || (b && y) && !(z >= 65); -} -``` - -Now replace the ugly `if` statement with a more readable statement: - - -``` -`if(IsEligible(b, x, y, z))` -``` - -Of course, you should also replace dumb one-character variable names with more meaningful names to improve readability. - -### Reusing legacy code - -Experience shows that any functionality that is not extracted and properly named and moved to the most reasonable class will never be reused. The extract method fosters frequent reuse, which goes a long way toward improving code quality. - -### Testing legacy code - -Writing tests for existing code is hard and feels less rewarding than doing [test-driven development][2] (TDD). Even after you determine that there should be several tests to ensure production code works as expected, when you realize production code must be changed to enable testing, you often decide to skip writing tests. In such situations, achieving your goal to deliver testable code, slowly but surely, keeps diminishing. - -Writing tests for legacy code is tedious because it often requires a lot of time and code to set up the preconditions. That's the opposite of how you write tests when doing TDD, where time spent writing preconditions is minimal. - -The best way to make legacy code testable is to practice the extract method approach. Locating a block of code nested in loops and conditionals and extracting it enables you to write small, precise tests. Such tests on extracted functions improve not only the testability of the code but also the understandability. If legacy code becomes more understandable thanks to extracting methods and writing legible tests, the chance of introducing defects is drastically reduced. - -### Conclusion - -Most of the discussion about extracting methods would not be necessary with TDD. Writing one test first, then making the test pass, then scanning that code for more insights into how the code should be structured and improved, making improvements, and finally making changes to part of the code base guarantees there will be no need to worry about extracting methods. Since legacy code usually means code that was not crafted with TDD methodology, you are forced to adopt a different approach. In my experience, extract methods give the biggest bang for the buck when it comes to modifying legacy code while avoiding the risk of breaking the functionality. 
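To make the testability argument concrete: once a predicate such as `IsEligible` has been extracted, a small, precise test takes only a few lines. The sketch below assumes an xUnit-style test framework and assumes the extracted method lives on a class named `EligibilityRules`; both are illustrative choices rather than details taken from the code above:

```
using Xunit;

public class EligibilityRulesTests
{
    [Fact]
    public void IsEligible_ReturnsFalse_WhenZIsAtLeast65()
    {
        var rules = new EligibilityRules();

        // With b = true the first clause is always false, and the second
        // clause requires z < 65, so z = 70 must make the predicate false.
        Assert.False(rules.IsEligible(b: true, x: false, y: true, z: 70));
    }
}
```

A handful of tests like this, one per branch of the extracted expression, pins down the behavior of the legacy logic before you change anything around it.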
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/legacy-code - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer) -[2]: https://opensource.com/article/20/1/test-driven-development diff --git a/sources/tech/20210706 Element- A Cross-Platform Decentralized Open-Source Messaging App.md b/sources/tech/20210706 Element- A Cross-Platform Decentralized Open-Source Messaging App.md deleted file mode 100644 index 31709b65dd..0000000000 --- a/sources/tech/20210706 Element- A Cross-Platform Decentralized Open-Source Messaging App.md +++ /dev/null @@ -1,158 +0,0 @@ -[#]: subject: (Element: A Cross-Platform Decentralized Open-Source Messaging App) -[#]: via: (https://itsfoss.com/element/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Element: A Cross-Platform Decentralized Open-Source Messaging App -====== - -There are many open-source messaging applications available, especially if you are looking for [WhatsApp replacements][1] on both desktop and mobile. - -Element is one of them, which is a decentralized alternative for private messaging that you can use to interact with individuals, communities, or businesses. - -### Element: Privacy-Friendly Open-Source Messenger Built on Matrix Network - -![][2] - -Matrix is an open standard for secure and decentralized communication. And Element is the messaging client that uses that. - -Element is also a part of the Matrix.org Foundation — so you will find most of the same team responsible for this. - -Originally, it was known as [Riot][3], which we covered back then. But, after the [rebranding][4], it is now known as “Element”, which comes with an improved user experience and constantly focusing on making decentralized communication common for instant messaging. - -![][5] - -Element is not just another open-source messenger, it gives you the ability to do a lot of things. - -Here, let me highlight some of the key features along with some details about it that follows as you read on. - -### Features of Element - -![][6] - -Element is more of an all-in-one messenger than a replacement of something. You could choose it as an [open-source alternative to Slack][7] or a private alternative to any instant messenger like Telegram. 
- -Some of the options that you get with it are: - - * End-to-End encryption chat room - * Public communities (may not be encrypted) - * Direct voice call - * Conference call in the community - * Meet Jitsi integration (one of the [open-source alternatives to Zoom][8]) - * File sharing - * Emoji and Sticker support - * Moderation tools for managing communities - * Extensive anti-spam options - * Ability to bridge other services like Slack, Discord, IRC, and more - * Offers paid managed hosting to have control over your data - * Cross-signed device verification for message privacy/security - * Fine grained notification settings - * Email notifications - * Ability to restore using encryption keys - * Make yourself discoverable to the entire Matrix network using your email or number - - - -The features offered by Element may sound to be overwhelming for a user who just wants private messaging. - -But fortunately, all those features do not get in the way unless you explicitly access/configure them. So that’s a good thing. - -First, let me address the installation instructions for Linux and I’ll give you some insights on how my experience with Element was (on both Linux desktop and Android). - -### Installing Element in Linux - -Element officially supports Debian/Ubuntu for installation. You can just add the package repository and install element. - -The commands used for this is: - -``` -sudo apt install -y wget apt-transport-https - -sudo wget -O /usr/share/keyrings/riot-im-archive-keyring.gpg https://packages.riot.im/debian/riot-im-archive-keyring.gpg - -echo "deb [signed-by=/usr/share/keyrings/riot-im-archive-keyring.gpg] https://packages.riot.im/debian/ default main" | sudo tee /etc/apt/sources.list.d/riot-im.list - -sudo apt update - -sudo apt install element-desktop -``` - -Do note that they are still using Riot.im domain to host packages even after rebranding — so not to be confused with the older Riot messaging app. - -You can also find it in AUR for Arch-based distros — but I’m not quite sure about how well it works. - -Unfortunately, there’s no [Flatpak][9] or [Snap][10] package available. So, if you are using a distribution that isn’t officially supported by Element, the best place to explore solutions/raise issues will be their [GitHub page][11]. - -Now, before you get started using it, let me give you some heads up with my thoughts on it. - -### Element on Linux and Android: Here’s What You Need to Know - -To start with — the user experience is fantastic on both Android and desktop. I tried it on Linux Mint, and it worked flawlessly. - -You do not need a mobile number to sign up. Just create a username and add an email account to it, and you’re done. - -![][12] - -One can opt for a paid homeserver (your own matrix network) or just join the free Matrix homeserver offered. - -**Keep in mind,** if you are signing up for free, you may not get to experience all the features — like the ability to see who’s online. You can only do that with your own server, the free Matrix server restricts certain functionalities like that to be able to accommodate an unlimited number of free users. - -When signing in to a mobile device, you will have to verify the session by scanning a QR code prompted on Element’s desktop app. - -Once done, you can explore and join public communities available or create your own. - -Most of the existing public communities do not have end-to-end encryption enabled. So make sure you know what you are doing before messaging in any of the public communities. 
- -While Element supports bridging IRC, Slack, and others or adding bots to a community — it is just not supported for an encrypted community. So, you need to have an unencrypted community to be able to use bots and bridges. - -![][13] - -A **word of caution**: - -Element is getting popular, and scammers/spammers are attracted to the platform because it does not need any valuable personal information to get started. - -So **make sure that you do not trust anyone and keep your identity safe** by not using your real profile picture or work email, especially if you are joining the public communities. - -Element is constantly improving and offers plenty of features for several use-cases. I don’t see a problem with it being an open-source Discord replacement as well (in some way). - -I was impressed with the level of notification controls that it gives and an added email notification option (which is enabled by default). You can choose to have notifications based on the keywords that you find interesting, what an exciting feature to have! - -![][14] - -Overall, Element may not be the perfect replacement for everything you use right now but it is shaping up to be an all-in-one alternative to many proprietary options. - -I’ve had a good experience with Element so far and I’m confident about its future. What do you think? Willing to try Element on Linux? - -Feel free to let me know your thoughts on this. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/element/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/private-whatsapp-alternatives/ -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-io.png?resize=800%2C531&ssl=1 -[3]: https://itsfoss.com/riot-desktop/ -[4]: https://itsfoss.com/riot-to-element/ -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-ui.png?resize=800%2C602&ssl=1 -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-settings.png?resize=800%2C673&ssl=1 -[7]: https://itsfoss.com/open-source-slack-alternative/ -[8]: https://itsfoss.com/open-source-video-conferencing-tools/ -[9]: https://itsfoss.com/what-is-flatpak/ -[10]: https://itsfoss.com/install-snap-linux/ -[11]: https://github.com/vector-im -[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-sign-in.png?resize=800%2C581&ssl=1 -[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-bridge-bots.png?resize=800%2C517&ssl=1 -[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/element-notifications.png?resize=800%2C547&ssl=1 diff --git a/sources/tech/20210706 Why you need to use Kubernetes schema validation tools.md b/sources/tech/20210706 Why you need to use Kubernetes schema validation tools.md deleted file mode 100644 index be587490fe..0000000000 --- a/sources/tech/20210706 Why you need to use Kubernetes schema validation tools.md +++ /dev/null @@ -1,216 +0,0 @@ -[#]: subject: (Why you need to use Kubernetes schema validation tools) -[#]: via: (https://opensource.com/article/21/7/kubernetes-schema-validation) -[#]: author: (Eyar Zilberman https://opensource.com/users/eyarz) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Why you need to use 
Kubernetes schema validation tools -====== -Compare schema validation tools that help you avoid misconfigurations in -your Kubernetes clusters. -![Net catching 1s and 0s or data in the clouds][1] - -How do you ensure the stability of your Kubernetes (K8s) clusters? How do you know that your manifests are syntactically valid? Are you sure you don't have any invalid data types? Are any mandatory fields missing? - -Most often, we become aware of these misconfigurations only at the worst time: when we're trying to deploy the new manifests. - -Specialized tools and a "shift-left" approach make it possible to verify a Kubernetes schema before it's applied to a cluster. This article addresses how you can avoid misconfigurations and which tools are the best to use. - -> ## TL;DR -> -> Running schema validation tests is important, and the sooner the better. If all machines (e.g., local developer environments, continuous integration [CI], etc.) have access to your Kubernetes cluster, run `kubectl --dry-run` in server mode on every code change. If this isn't possible and you want to perform schema validation tests offline, use kubeconform together with a policy-enforcement tool to have optimal validation coverage. - -### Schema-validation tools - -Verifying the state of Kubernetes manifests may seem like a trivial task because the Kubernetes command-line interface (CLI), kubectl, can verify resources before they're applied to a cluster. You can verify the schema by using the [dry-run][2] flag (`--dry-run=client/server`) when specifying the `kubectl create` or `kubectl apply` commands; these will perform the validation without applying Kubernetes resources to the cluster. - -But I can assure you that it's actually more complex. A running Kubernetes cluster must obtain the schema for the set of resources being validated. So, when incorporating manifest verification into a CI process, you must also manage connectivity and credentials to perform the validation. This becomes even more challenging when dealing with multiple microservices in several environments (e.g., prod, dev, etc.). - -[Kubeval][3] and [kubeconform][4] are CLI tools developed to validate Kubernetes manifests without requiring a running Kubernetes environment. Because kubeconform was inspired by kubeval, they operate similarly; verification is performed against pre-generated JSON schemas created from the OpenAPI specifications ([swagger.json][5]) for each Kubernetes version. All that remains [to run][6] the schema validation tests is to point the tool executable to a single manifest, directory or pattern. - -![Kubeval and kubeconform ][7] - -(Eyar Zilberman, [CC BY-SA 4.0][8]) - -### Comparing the tools - -Now that you're aware of the tools available for Kubernetes schema validation, let's compare some core abilities—misconfiguration coverage, speed tests, support for different versions, Custom Resource Definitions support, and docs—in: - - * kubeval - * kubeconform - * kubectl dry-run in client mode - * kubectl dry-run in server mode - - - -#### Misconfiguration coverage - -I donned my QA hat and generated some (basic) Kubernetes manifest files with some [intentional misconfigurations][9] and then ran them against all four tools. 
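Before looking at the results, it may help to see what this kind of validation boils down to. Conceptually, kubeval and kubeconform convert each YAML manifest to JSON and check it against the pre-generated JSON schema for its resource kind and API version. The sketch below is my own illustration of that idea; it uses the third-party gojsonschema library and made-up file paths, and it is not the actual code of either tool:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	// Pre-generated schema for Deployment (apps/v1) for a specific
	// Kubernetes version; the path is a placeholder.
	schema := gojsonschema.NewReferenceLoader(
		"file:///schemas/v1.18.0-standalone/deployment-apps-v1.json")

	// The manifest to validate, already converted from YAML to JSON.
	manifest := gojsonschema.NewReferenceLoader("file:///tmp/deployment.json")

	result, err := gojsonschema.Validate(schema, manifest)
	if err != nil {
		log.Fatalf("validation could not run: %v", err)
	}

	if result.Valid() {
		fmt.Println("the manifest matches the schema")
		return
	}
	for _, e := range result.Errors() {
		fmt.Printf("- %s\n", e)
	}
}
```

The table below summarizes how each tool fared against the intentionally broken manifests.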
- -Misconfig/Tool | - -kubeval / -kubeconform - -| - -kubectl dry-run -in client mode - -| - -kubectl dry-run -in server mode - ----|---|---|--- -[API deprecation][10] | ✅ Caught | ✅ Caught | ✅ Caught -[Invalid kind value][11] | ✅ Caught | ❌ Didn't catch | ✅ Caught -[Invalid label value][12] | ❌ Didn't catch | ❌ Didn't catch | ✅ Caught -[Invalid protocol type][13] | ✅ Caught | ❌ Didn't catch | ✅ Caught -[Invalid spec key][14] | ✅ Caught | ✅ Caught | ✅ Caught -[Missing image][15] | ❌ Didn't catch | ❌ Didn't catch | ✅ Caught -[Wrong K8s indentation][16] | ✅ Caught | ✅ Caught | ✅ Caught - -In summary: all misconfiguration was caught by `kubectl` dry-run in server mode. - -Some misconfigurations were caught by everything: - - * Invalid spec key: Caught successfully by everything! - * API deprecation: Caught successfully by everything! - * Wrong k8s indentation: Caught successfully by everything! - - - -However, some had mixed results: - - * Invalid kind value: Caught by Kubeval / Kubeconform but missed by Kubectl client. - * Invalid protocol type: Caught by Kubeval / Kubeconform but missed by Kubectl client. - * Invalid label value: Missed by both Kubeval / Kubeconform and Kubectl client. - * Missing image: Missed by both Kubeval / Kubeconform and Kubectl client. - - - -Conclusion: Running kubectl dry-run in server mode caught all misconfigurations, while kubeval/kubeconform missed two of them. It's also interesting to see that running kubectl dry-run in client mode is almost useless because it's missing some obvious misconfigurations and also requires a connection to a running Kubernetes environment. - - * All the schemas validation tests were performed against Kubernetes version 1.18.0. - * Because kubeconform is based on kubeval, they provide the same result when run against the files with the misconfigurations. - * kubectl is one tool, but each mode (client or server) produces a different result (as you can see from the table). - - - -#### Benchmark speed test - -I used [hyperfine][17] to benchmark the execution time of each tool. First, I ran it against all the [files with misconfigurations][18] (seven files in total). Then I ran it against [100 Kubernetes files][19] (all the files contain the same config). - -Results for running the tools against seven files with different Kubernetes schema misconfigurations: - -Tool | Mean | Min | Max ----|---|---|--- -kubeconform | 0.2 ms ± 0.3 ms | 0.0 ms | 2.3 ms -kubeval | 1.443 s ± 1.551 s | 0.741 s | 5.842 s -kubectl --dry-run=client | 1.92 s ± 0.035 s | 1.872 s | 2.009 s -kubectl --dry-run=server | 2.288 s ± 0.027 s | 2.241 s | 2.323 s - -Results for running the tools against 100 files with valid Kubernetes schemas: - -Tool | Mean | Min | Max ----|---|---|--- -kubeconform | 0.3 ms ± 0.3 ms | 0.0 ms | 1.9 ms -kubeval | 1.152 s ± 0.197 s | 0.989 s | 1.669 s -kubectl --dry-run=client | 1.274 s ± 0.028 s | 1.234 s | 1.313 s -kubectl --dry-run=server | 60.675 s ± 0.546 s | 60.489 s | 62.228 s - -Conclusion: While kubeconform (#1), kubeval (#2), and kubectl `--dry-run=client` (#3) provide fast results on both tests, kubectl `--dry-run=server` (#4) is slower, especially when it evaluates 100 files. Yet 60 seconds for generating a result is still a good outcome in my opinion. - -#### Kubernetes versions support - -Both kubeval and kubeconform accept the Kubernetes schema version as a flag. 
Although both tools are similar (as mentioned, kubeconform is based on kubeval), one of the key differences is that each tool relies on its own set of pre-generated JSON schemas: - - * **Kubeval:** [instrumenta/kubernetes-json-schema][20] (last commit: [133f848][21] on April 29, 2020) - * **Kubeconform:** [yannh/kubernetes-json-schema][22] (last commit: [a660f03][23] on May 15, 2021) - - - -As of May 2021, kubeval supports Kubernetes schema versions only up to 1.18.1, while kubeconform supports the latest Kubernetes schema available, 1.21.0. With kubectl, it's a little bit trickier. I don't know which version of kubectl introduced the dry run, but I tried it with Kubernetes version 1.16.0 and it still worked, so I know it's available in Kubernetes versions 1.16.0–1.18.0. - -The variety of supported Kubernetes schemas is especially important if you want to migrate to a new Kubernetes version. With kubeval and kubeconform, you can set the version and start evaluating which configurations must be changed to support the cluster upgrade. - -Conclusion: The fact that kubeconform has all the schemas for all the different Kubernetes versions available—and also doesn't require Minikube setup (as kubectl does)—makes it a superior tool when comparing these capabilities to its alternatives. - -### Other things to consider - -#### Custom Resource Definition (CRD) support - -Both kubectl dry-run and kubeconform support the [CRD][24] resource type, while kubeval does not. According to kubeval's docs, you can pass a flag to kubeval to tell it to ignore missing schemas so that it will not fail when testing a bunch of manifests where only some are resource type CRD. - -#### Documentation - -Kubeval is a more popular project than kubeconform; therefore, its community and [documentation][25] are more extensive. Kubeconform doesn't have official docs, but it does have a [well-written README][26] file that explains its capabilities pretty well. The interesting part is that although Kubernetes-native tools such as kubectl are usually well-documented, it was really hard to find the information needed to understand how the `dry-run` flag works and its limitations. - -Conclusion: Although it's not as famous as kubeval, the CRD support and good-enough documentation make kubeconform the winner, in my opinion. - -### Strategies for validating Kubernetes schema using these tools - -Now that you know the pros and cons of each tool, here are some best practices for leveraging them within your Kubernetes production-scale development flow: - - * ⬅️ Shift-left: When possible, the best setup is to run `kubectl --dry-run=server` on every code change. You probably can't do that because you can't allow every developer or CI machine in your organization to have a connection to your cluster. So, the second-best effort is to run kubeconform. - * 🚔 Because kubeconform doesn't cover all common misconfigurations, it's recommended to run it with a policy enforcement tool on every code change to fill the coverage gap. - * 💸 Buy vs. build: If you enjoy the [engineering overhead][27], then kubeconform + [conftest][28] is a great combination of tools to get good coverage. Alternatively, there are tools that can provide you with an out-of-the-box experience to help you save time and resources, such as [Datree][29] (whose schema validation is powered by kubeconform). - * 🚀 During the CD step, it shouldn't be a problem to connect with your cluster, so you should always run `kubectl --dry-run=server` before deploying your new code changes. 
- * 👯 Another option for using kubectl dry-run in server mode, without having a connection to your Kubernetes environment, is to run Minikube + `kubectl --dry-run=server`. The downside of this hack is that you must also set up the Minikube cluster like prod (i.e., same volumes, namespace, etc.), or you'll encounter errors when trying to validate your Kubernetes manifests. - - - -_Thank you to_ [_Yann Hamon_][30] _for creating kubeconform—it's awesome! This article wouldn't be possible without you. Thank you for all of your guidance._ - -* * * - -_This article originally appeared on [Datree.io][31] and is reprinted with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/kubernetes-schema-validation - -作者:[Eyar Zilberman][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/eyarz -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds) -[2]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/576-dry-run/README.md -[3]: https://github.com/instrumenta/kubeval/tree/master/kubeval -[4]: https://github.com/yannh/kubeconform -[5]: https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json -[6]: https://github.com/datreeio/kubernetes-schema-validation#running-schema-validation-tests -[7]: https://opensource.com/sites/default/files/uploads/kubeval-and-kubeconform.png (Kubeval and kubeconform ) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://github.com/datreeio/kubernetes-schema-validation#misconfigs -[10]: https://github.com/datreeio/kubernetes-schema-validation#api-deprecationyaml -[11]: https://github.com/datreeio/kubernetes-schema-validation#invalid-kind-valueyaml -[12]: https://github.com/datreeio/kubernetes-schema-validation#invalid-label-valueyaml -[13]: https://github.com/datreeio/kubernetes-schema-validation#invalid-protocol-typeyaml -[14]: https://github.com/datreeio/kubernetes-schema-validation#invalid-spec-keyyaml -[15]: https://github.com/datreeio/kubernetes-schema-validation#missing-imageyaml -[16]: https://github.com/datreeio/kubernetes-schema-validation#wrong-k8s-indentationyaml -[17]: https://github.com/sharkdp/hyperfine -[18]: https://github.com/datreeio/kubernetes-schema-validation/tree/main/misconfigs -[19]: https://github.com/datreeio/kubernetes-schema-validation/tree/main/benchmark -[20]: https://github.com/instrumenta/kubernetes-json-schema -[21]: https://github.com/instrumenta/kubernetes-json-schema/commit/133f84871ccf6a7a7d422cc40e308ae1c044c2ab -[22]: https://github.com/yannh/kubernetes-json-schema -[23]: https://github.com/yannh/kubernetes-json-schema/commit/a660f03314fad36fb4cbfb4fa2f9a76b7766cf51 -[24]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ -[25]: https://kubeval.instrumenta.dev/ -[26]: https://github.com/yannh/kubeconform/blob/master/Readme.md -[27]: https://jrott.com/posts/why-buy/ -[28]: https://www.conftest.dev/ -[29]: https://hub.datree.io/schema-validation/?utm_source=our_blog&utm_medium=schema-validation -[30]: https://github.com/yannh -[31]: https://www.datree.io/resources/kubernetes-schema-validation diff --git 
a/sources/tech/20210707 Open source tools and tips for improving your Linux PC-s performance.md b/sources/tech/20210707 Open source tools and tips for improving your Linux PC-s performance.md deleted file mode 100644 index e31bfd908c..0000000000 --- a/sources/tech/20210707 Open source tools and tips for improving your Linux PC-s performance.md +++ /dev/null @@ -1,192 +0,0 @@ -[#]: subject: (Open source tools and tips for improving your Linux PC's performance) -[#]: via: (https://opensource.com/article/21/7/improve-linux-pc-performance) -[#]: author: (Howard Fosdick https://opensource.com/users/howtech) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Open source tools and tips for improving your Linux PC's performance -====== -Make changes to your software (and how you use it) to improve your Linux -computer's performance. -![Business woman on laptop sitting in front of window][1] - -This is the third in a series of articles that explain how to improve your Linux personal computer's performance. My first article described how to unleash performance by [identifying and resolving bottlenecks][2]. My second article showed how to improve performance by [upgrading your PC's hardware][3]. - -This article completes the series by presenting software performance tips. It also explores ways to improve performance by changing how you use your computer. Some of these behavioral changes may be obvious to many Opensource.com readers, but they might not be to the end users you support. - -The goal is a simple approach to improving Linux PC performance. As in my previous articles, I use all open source tools. There's no need for anything else. - -### How to measure improvements - -Before you make any change to your computer, assess performance to ensure that the change would be beneficial. After making the update, you might want to inspect the system a second time to verify that the modification succeeded. - -Several open source graphical tools make performance monitoring easy. They include the [GNOME System Monitor][4], [KDE System Guard][5], [GKrellM][6], [Stacer][7], [Conky][8], and [Glances][9]. (In my first article in this series, I showed how to monitor performance using the GNOME System Monitor.) - -Whichever tool you pick, look very closely at your processor and memory use. You might also want to monitor other hardware resources, such as disk and USB storage, the graphics processor, and the internet connection. - -The ultimate performance arbiter for a personal computer is its responsiveness. Quick? You're good. Sluggish… that indicates room for improvement. - -Most Linux distributions require little configuration by personal computer users: they're efficient from the start. However, an overview of the important tools you use daily can be useful when you're narrowing your focus on what could use optimization. - -### Tweak your browser - -Most users run their browser nearly all the time. For some, it's their only app. So browser selection and tuning potentially offer big payoffs. - -Here are some tuning tips: - - * As research by the Brave browser indicates, many website ads and trackers [consume over half the CPU][10] your PC spends on page processing. So block ads with a browser extension like [uBlock Origin][11], and block trackers with a tool like [Privacy Badger][12]. - * Disable autoplay for videos and animation (including those little video windows that automatically appear and run in the corner of your screen). 
In Firefox, you can install the open source [Disable HTML5 Autoplay][13] extension. In Chromium or Google Chrome, install the open source [Yet Another Autoplay Blocker][14] extension. - * Remove all non-essential add-ons and extensions from the browser. Carefully consider whether each is worth its overhead. - - - -#### Browser tips for powerful PCs - -For a high-end PC, select a browser that leverages your abundant processor and memory resources to provide optimal web surfing. Multiprocess, multithread browsers work best. Your goal is to apply all your hardware power for a better browsing experience. - -Which browser performs best? That's a highly debatable issue that requires a separate article for an answer. Many people start their search with Chromium or Firefox, popular open source products that are widely respected for their ability to leverage hardware for high-performance browsing. There are many other [open source browsers][15] you might try, though. - -#### Browser tips for low-end PCs - -For a limited-resource PC, you don't want a browser that consumes resources and swamps your PC. Instead, you want one that performs efficiently using limited resources. Call it a lightweight browser. It probably won't be the one that spawns lots of processes and threads. - -Users hold different views about which lightweight browser performs best. I've had good experience with [Dillo][16]. [Wikipedia][17] provides a comprehensive list of lightweight browsers, if you'd like to research others. Keep in mind that most lightweight browsers sacrifice some features and functions to reduce PC resource consumption. - -For a limited-resource PC, you can reduce browser resource consumption by opening only a couple of tabs at a time rather than a dozen. Close tabs you're done using. And run only a single instance of one browser at a time. - -JavaScript can add a lot of demand on your browser, so toggle it off when you don't need it. Most browsers offer add-ons that allow you to manage JavaScript on a per-website basis or by flipping it on or off at your direction. Firefox offers several such extensions. I use one called [JavaScript Toggle On and Off][18]. - -On low-end PCs, you can also tailor browser performance to your liking by manually starting and stopping background tab processing. Just click on the page-load button in the browser to toggle processing on or off for a web page. In Firefox, for example, this button is located on the left side of the toolbar. - -Here's how to use this. If you want to load a specific web page quickly, toggle off page loading in other tabs so that they don't compete with your page of interest. Conversely, if you're spending a lot of time reading a web page you've already loaded, let other tabs load their pages in the background while you're occupied. In this way, you can often browse with decent performance, even on a minimal-resource computer. - -### Stop multitasking - -Some apps, including games, video editors, and virtual machine hosts, require more resources than others. For best performance when you run a resource hog, run _only_ that program. Conversely, don't run a resource hog in the background while focusing on some other app. - -This performance principle applies everywhere: Limit how many apps you use at one time, and close any you aren't using. Limit concurrency, and you improve performance for the apps you run. - -Background processing presents a similar opportunity. 
Virus scanners, software updates, backups, image copies, filesystem verification, and big downloads are resource-intensive. Schedule these activities for off-hours to optimize performance. A good open source GUI scheduler makes this easy. For example, you can [install and use Zeit][19], and [KCron][20] is available in many repositories. - -### Choose software wisely - -Your software choices make a big difference in how much processor and memory your computer uses. - -For many people today, this hardly matters. Their state-of-the-art personal computers have more than enough processing power and memory to quickly run any app they choose. (If this is you, you can skip this section.) Yet, software choices remain crucial for others. - -#### Office suites - -If you run LibreOffice or OpenOffice, but you don't use **Base** (the database creation component), then it's safe to disable the Java runtime. You can do this in the **Tools** > **Options** > **LibreOffice** > **Advanced** setting panel. - -Alternately, replace your big office suite with what's commonly known as **GNOME Office**. This includes [AbiWord][21] and [Gnumeric][22], both of which require less from your hardware and are functionally equivalent to a word processor and spreadsheet for many users. - -You could even consider ditching the local office suite altogether. Instead, offload your workload to the cloud with a product like [Etherpad][23], [EtherCalc][24], or the [ONLYOFFICE suite][25]. - -This principle applies generally. If you have a low-end computer, offload whatever you can to the cloud. This is why [Chromebooks][26] are effective, even though most offer low-power hardware. - -#### Desktop environment - -Your desktop environment runs every minute you use your PC. If it's not responsive, install a lighter desktop that requires fewer resources. I recommend [Xfce][27], [LXQt or LXDE][28]. - -Whichever desktop you use, you can increase its responsiveness by disabling visual effects. Turn off things like animation, compositing, and the thumbnail images in your file manager, or use a file manager (such as [PCManFM, XFE, or Thunar][29]) without those features. This can have a noticeable impact on slower computers because your screen is involved in every mouse click. Use keyboard shortcuts to eliminate having to move your hand between the mouse and keyboard. - -You can configure some desktops to use a lightweight window manager. The window manager dictates how windows look and feel and how users interact with these elements. - -If you really want to skimp on resources, forgo a desktop altogether in favor of a simple windows manager. Popular choices include [JWM][30], [Openbox][31], and [Fluxbox][32]. These run faster than a full desktop but at the cost of a less user-friendly interface. For example, you often can't put icons on your desktop, and you may not have a system tray or dock. - -### Right-size your Linux distribution for your PC - -Your Linux distribution can impact PC performance. Some distros assume they're running on powerful state-of-the-art computers, while others are designed to run with fewer resources. So if you're running a full-featured distro and it doesn't perform well, test a lightweight alternative to see if it improves responsiveness. - -Testing different distros is easy. Just download the Linux distro you want to try. [Write it to a USB memory stick][33] with an open source tool like [Fedora Media Writer][34] or [Unetbootin][35]. Then boot your PC from the memory stick to test drive the distro. 
Start up one of the system monitoring tools I mentioned earlier, and measure whether the new distro uses hardware more efficiently. - -The lightest distros I've used are AntiX and Puppy Linux. These use window managers instead of a desktop, bundle lightweight apps, and are specifically designed to run on limited-resource computers. They even run well on 15-year-old machines! (You can even [refurbish old computers using lightweight Linux software][36].) - -The tradeoff is that their desktops aren't glitzy. Their interfaces may be unfamiliar to you, and you may have to learn how to configure them as you would a full desktop environment. For some people, that's frustrating, but for others, it's a fun challenge and an opportunity to learn something new. - -### Make PC configuration changes - -I'll conclude with some basic Linux configuration changes you may want to try. Taken individually, each won't improve performance enough that you'd notice. But in the aggregate, they can have a measurable impact: - - * Verify that you have optimal device drivers for all devices. - * Avoid overheating to prevent step-downs in CPU speed (CPU throttling). - * Reduce boot time by [reducing the default GRUB_TIMEOUT parameter][37] for the Grub menu.) - * Eliminate unneeded apps and services from your startup list (the programs that run every time you boot your computer). Linux desktop environments typically provide GUI panels for this so that you don't need to edit configuration files or scripts directly. - * Speed updates and downloads by using the fastest mirror available. - * Avoid using swap memory. - * If you do use swap, place it on your fastest device. - * Verify your WiFi is operating at peak bandwidth by comparing its WiFi speed versus when it's directly cabled to your modem. - * Verify your router isn't causing a slowdown by testing internet connection speeds without it. - * Match USB standards between your USB ports and devices to ensure they don't step down to the speed of the slower partner. - * Verify your USB transfer rates with the benchmark feature in GNOME Disks. - * If you must use virtual machines, tune them for performance. - * Clean out old history, log, and junk files with open source GUI tools like [BleachBit][38] and [Sweeper][39]. - * Clean out unused junk files by uninstalling apps you don't use. On Debian-based systems, clean the APT cache. - * Find and delete duplicate files by using an open source GUI tool like [FSlint][40]. - - - -If readers express interest, I'll discuss these tweaks in detail in a future article. (Please let me know in the comments if you'd like to see this.) - -### Summary - -In a previous article, I discussed how to identify and remove performance bottlenecks. In my last article, I explained how to efficiently upgrade your Linux PC hardware. This article completes the series by presenting software and behavioral changes that improve PC performance. - -I'm sure you have many good performance tips of your own. Please add your favorites in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/improve-linux-pc-performance - -作者:[Howard Fosdick][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/howtech -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating) -[2]: https://opensource.com/article/21/3/linux-performance-bottlenecks -[3]: https://opensource.com/article/21/4/upgrade-linux-hardware -[4]: https://wiki.gnome.org/Apps/SystemMonitor -[5]: https://apps.kde.org/ksysguard/ -[6]: http://gkrellm.srcbox.net/ -[7]: https://oguzhaninan.github.io/Stacer-Web/ -[8]: https://github.com/brndnmtthws/conky -[9]: https://nicolargo.github.io/glances/ -[10]: https://brave.com/accurately-predicting-ad-blocker-savings/ -[11]: https://github.com/gorhill/uBlock -[12]: https://privacybadger.org/ -[13]: https://addons.mozilla.org/en-US/firefox/addon/disable-autoplay/ -[14]: https://chrome.google.com/webstore/detail/yet-another-autoplay-bloc/fjekfkbibnnjlkfjaeifgecjfmpmdaad -[15]: https://opensource.com/article/19/7/open-source-browsers -[16]: https://www.dillo.org -[17]: https://en.wikipedia.org/wiki/Comparison_of_lightweight_web_browsers -[18]: https://addons.mozilla.org/en-US/firefox/addon/javascript-toggler/ -[19]: https://github.com/loimu/zeit -[20]: https://apps.kde.org/kcron/ -[21]: https://flathub.org/apps/details/com.abisource.AbiWord -[22]: http://www.gnumeric.org -[23]: https://etherpad.org/ -[24]: http://ethercalc.net -[25]: https://opensource.com/article/20/12/onlyoffice-docs -[26]: https://opensource.com/article/21/2/chromebook-linux -[27]: https://opensource.com/article/19/12/xfce-linux-desktop -[28]: https://opensource.com/article/19/12/lxqt-lxde-linux-desktop -[29]: https://opensource.com/business/15/4/eight-linux-file-managers -[30]: https://opensource.com/article/19/12/joes-window-manager-linux-desktop -[31]: https://opensource.com/article/19/12/openbox-linux-desktop -[32]: https://opensource.com/article/19/12/fluxbox-linux-desktop -[33]: https://opensource.com/article/20/4/first-linux-computer -[34]: https://opensource.com/article/20/10/fedora-media-writer -[35]: https://opensource.com/life/14/10/test-drive-linux-nothing-flash-drive -[36]: https://opensource.com/article/19/7/how-make-old-computer-useful-again -[37]: https://www.unixmen.com/quick-tip-change-grub-2-default-timeout/ -[38]: https://www.bleachbit.org/ -[39]: https://apps.kde.org/sweeper/ -[40]: https://github.com/pixelb/fslint diff --git a/sources/tech/20210708 3 reasons Quarkus 2.0 improves developer productivity on Linux.md b/sources/tech/20210708 3 reasons Quarkus 2.0 improves developer productivity on Linux.md deleted file mode 100644 index a8510faa57..0000000000 --- a/sources/tech/20210708 3 reasons Quarkus 2.0 improves developer productivity on Linux.md +++ /dev/null @@ -1,126 +0,0 @@ -[#]: subject: (3 reasons Quarkus 2.0 improves developer productivity on Linux) -[#]: via: (https://opensource.com/article/21/7/developer-productivity-linux) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -3 reasons Quarkus 2.0 
improves developer productivity on Linux -====== -New features in Quarkus 2.0 make it easier to test code in the developer -console. -![Person using a laptop][1] - -No matter how long you work as an application developer and no matter what programming language you use, you probably still struggle to increase your development productivity. Additionally, new paradigms, including cloud computing, DevOps, and test-driven development, have significantly accelerated the development lifecycle for individual developers and multifunctional teams. - -You might think open source tools could help fix this problem, but I'd say many open source development frameworks and tools for coding, building, and testing make these challenges worse. Also, it's not easy to find appropriate [Kubernetes][2] development tools to install on Linux distributions due to system dependencies and support restrictions. - -Fortunately, you can increase development productivity on Linux with [Quarkus][3], a Kubernetes-native Java stack. Quarkus 2.0 was released recently with useful new features for testing in the developer console. - -### Interactive developer UX/UI - -If you need to add more than 10 dependencies (e.g., database connections, object-relational mapping, JSON formatting, REST API specifications) to your Java Maven project, you must define more than 60 configurations with keys and values in one or more `application.properties` files. More configurations decrease readability for individual developers and are harder for developer teams to manage. - -Quarkus has an interactive interface to display all dependencies that have been added. It is available at the `localhost:8080/q/dev` endpoint after you start Quarkus dev mode with the `mvn quarkus:dev` command. You can also update configurations in the DEV user interface (UI), as Figure 1 shows, and the changes will automatically sync with the `application.properties` file. - -(Note: You can find the entire Quarkus application code for this article in my [GitHub repository][4].) - -![Quarkus DEV UI][5] - -Figure 1. Quarkus DEV UI (Daniel Oh, [CC BY-SA 4.0][6]) - -### Better continuous testing - -When developing an application, anything from a monolith to microservices, you have to test your code. Often, a dedicated quality assurance (QA) team using external continuous integration (CI) tools is responsible for verifying unit tests. That's worked for years, and it still does, but Quarkus allows programmers to run tests in the runtime environment where their code is running as it's being developed. Quarkus 2.0 provides this continuous testing feature through the command-line interface (CLI) and the DEV UI, as shown in Figure 2. - -![Quarkus Testing in DEV UI][7] - -Figure 2. Quarkus testing in DEV UI (Daniel Oh, [CC BY-SA 4.0][6]) - -Continuous testing is not running when a Quarkus application starts. To start it, click "Tests not running" on the bottom-right of the DEV UI. You can also open a web terminal by clicking "Open" on the left-hand side of the DEV UI. Both of those options are highlighted in Figure 2, and an example test result is shown in Figure 3. - -![Quarkus console in DEV UI][8] - -Figure 3. Quarkus console in DEV UI (Daniel Oh, [CC BY-SA 4.0][6]) - -If you change the code (e.g., "Hello" to "Hi" in the `hello()` method) but not the test code (regardless of whether the feature works), the test will fail, as shown in Figure 4. To fix it, update the test code along with the logic code. - -![Test failures in Quarkus DEV UI][9] - -Figure 4. 
Test failures in Quarkus DEV UI (Daniel Oh, [CC BY-SA 4.0][6]) - -You can rerun the test cases implemented in the `src/test/java/` directory. This feature alleviates the need to integrate with an external CI tool and ensures functionality while developing business logic continuously. - -### Zero configuration with dev services - -When you're developing for a specific target, it's important that your development environment is an accurate reflection of the environment where it is meant to run. That can make installing a database in a place like a local environment a little difficult. If you're developing on Linux, you could run the requisite database in a container, but they tend to run differently based on what resources are available, and your local environment probably doesn't have the same resources as the target production environment. - -Quarkus 2.0 helps solve this problem by providing dev services built on [Testcontainers][10]. For example, you can test applications if they work in the production database, PostgreSQL, rather than an H2 in-memory datastore with the following [configurations][11]: - - -``` -quarkus.datasource.db-kind = postgresql (1) -quarkus.hibernate-orm.log.sql = true - -quarkus.datasource.username=person (2) -quarkus.datasource.password=password (3) -quarkus.hibernate-orm.database.generation=drop-and-create - -%prod.quarkus.datasource.db-kind = postgresql (4) -%prod.quarkus.datasource.jdbc.url = jdbc:postgresql://db:5432/person (5) -%prod.quarkus.datasource.jdbc.driver=postgresql - -quarkus.datasource.devservices.image-name=postgres:latest (6) -``` - -In the code above: - -(1) The kind of database you will connect for development and test -(2) Datasource username -(3) Datasource password -(4) The kind of database you will connect for production -(5) Datasource URL -(6) The container image name to use for DevServices providers; if the provider is not container-based (e.g., H2 database), then this has no effect - -When Quarkus restarts with the new configuration, the Postgres container image will be created and start running automatically, as in Figure 5. - -![Quarkus DevServices][12] - -Figure 5. Quarkus DevServices (Daniel Oh, [CC BY-SA 4.0][6]) - -This feature enables you to remove the production datastore integration test. It also increases your development productivity due to avoiding environmental disparities in the development loop. - -### Conclusion - -Quarkus 2.0 increases developer productivity with built-in continuous testing, an interactive DEV UI, and dev services. In addition, it offers additional features for improving developer experiences such as [live coding][13], [remote development mode on Kubernetes][14], and unified configurations that accelerate the development loop. Quarkus 2.0 is certainly no exception! Try it out for yourself [here][15]! 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/developer-productivity-linux - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) -[2]: https://opensource.com/resources/what-is-kubernetes -[3]: https://quarkus.io/ -[4]: https://github.com/danieloh30/quarkus-testing -[5]: https://opensource.com/sites/default/files/uploads/quarkus-devui.png (Quarkus DEV UI) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://opensource.com/sites/default/files/uploads/quarkustesting.png (Quarkus Testing in DEV UI) -[8]: https://opensource.com/sites/default/files/uploads/quarkusconsole.png (Quarkus console in DEV UI) -[9]: https://opensource.com/sites/default/files/uploads/failedtest.png (Test failures in Quarkus DEV UI) -[10]: https://www.testcontainers.org/ -[11]: https://github.com/danieloh30/quarkus-testing/blob/main/src/main/resources/application.properties -[12]: https://opensource.com/sites/default/files/uploads/quarkusdevservices.png (Quarkus DevServices) -[13]: https://quarkus.io/guides/getting-started#development-mode -[14]: https://developers.redhat.com/blog/2021/02/11/enhancing-the-development-loop-with-quarkus-remote-development -[15]: https://quarkus.io/quarkus2/ diff --git a/sources/tech/20210709 OS Chroot 101- covering btrfs subvolumes.md b/sources/tech/20210709 OS Chroot 101- covering btrfs subvolumes.md deleted file mode 100644 index 01b092cc57..0000000000 --- a/sources/tech/20210709 OS Chroot 101- covering btrfs subvolumes.md +++ /dev/null @@ -1,261 +0,0 @@ -[#]: subject: (OS Chroot 101: covering btrfs subvolumes) -[#]: via: (https://fedoramagazine.org/os-chroot-101-covering-btrfs-subvolumes/) -[#]: author: (yannick duclap https://fedoramagazine.org/author/cybermeme/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -OS Chroot 101: covering btrfs subvolumes -====== - -![][1] - -OS chroot command allows you to mount and run another Gnu/Linux from within your current Gnu/Linux. It does this by mounting nested partition(s) within your system and it gives you a shell which allows access to this chrooted OS. This will allow you to manage or debug another Gnu/Linux from your running Fedora Linux - -### Intro - -##### Disclaimer - -When I say _chroot_, I mean the command, and _chrootDir_ a folder. _OSext_ is the external OS to work with. All the following commands are executed as superuser. For extra readability I removed the sudo at the beginning, just don’t forget to be superadmin when performing the tasks. […] means I cut some terminal output. - -First I’m going to review how to do a chroot on a classic filesystem (ext4, xfs, fat, etc) and then we’ll see how to do it on our brand new standard Btrfs and its subvolumes. - -The process is similar to that used to [change the root password][2], or that we may use to repair a corrupted fstab (it happens, trust me). We can also use the chroot command to mount a Gnu/Linux in our Fedora Linux in order to perform operations (updates, file recovery, debugging, etc). 
- -#### A few explanations - -The [chroot][3] command lets you “change” temporarily the root location. This lets you partition a service or a user in the directory tree. - -When you use _chroot_ to run a mounted Gnu/Linux OS, in order for it to be fully functional, you have to mount the special system folders in their “original places in the directory tree” in the chrootDir. This allows the chrooted OS to talk to the kernel. - -These special system folders are: - - * _/dev_ for the devices; - * _/proc_ which contains the information about the system (kernel and process); - * _/sys_ which contains the information about the hardware. - - - -For example, _/dev_ has to be mounted in _chrootDir/dev_. - -As I always learn better by practicing, let’s do some hands on. - -### Filesystems without btrfs subvolumes - -#### The classic method - -In the following example, the partition we are going to mount is the OSext root (_/_). This is located in _/dev/vda2_ and we will mount it in the chrootDir (_/mnt_) directory. _/mnt_ is not a necessity, you can also mount the partition somewhere else. - -``` -# mount /dev/vda2 /mnt -# mount --bind /dev /mnt/dev -# mount -t proc /proc /mnt/proc -# mount -t sysfs /sys /mnt/sys -# mount -t tmpfs tmpfs /mnt/run -# mkdir -p /mnt/run/systemd/resolve/ -# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf -# chroot /mnt -``` - -The _–bind_ option makes the contents accessible in both locations, _-t_ defines the filesystem type. See the [manpage][4] for more information. - -We will mount _/run_ as _tmpfs_ (in the memory) because we are using systemd-resolved (this is the default now in Fedora). Then we will create the folder and the file _stub-resolv.conf,_ which is associated by a symbolic link to /_etc/resolv.conf_. This file contains the resolver IP. In this example, the resolver is 1.1.1.1, but you can use any resolver IP you like. - -To exit the chroot, the shell command is _exit_. After that, we unmount all the folders we just mounted: - -``` -exit -# umount /mnt/dev -# umount /mnt/proc -# umount /mnt/sys -# umount /mnt/run -# umount /mnt -``` - -#### The case of lvm - -In the case of lvm, the partitions are not available directly and must be mapped first. - -``` -# fdisk -l /dev/vda2 -Disk /dev/vda2: 19 GiB, 20400046080 bytes, 39843840 sectors -[...] -I/O size (minimum/optimal): 512 bytes / 512 bytes - -# mount /dev/vda2 /mnt/ -mount: /mnt: unknown filesystem type 'LVM2_member'. -``` - -As you can see, we are not able to mount _/dev/vda2_ directly. We will now use the lvm tools to locate our partitions. 
- -``` -# pvscan -PV /dev/vda2 VG cl lvm2 [<19.00 GiB / 0 free] -Total: 1 [<19.00 GiB] / in use: 1 [<19.00 GiB] / in no VG: 0 [0] - -# vgscan -Found volume group "cl" using metadata type lvm2 - -# lvscan -ACTIVE '/dev/cl/root' [10.00 GiB] inherit -ACTIVE '/dev/cl/swap' [2.00 GiB] inherit -ACTIVE '/dev/cl/home' [1.00 GiB] inherit -ACTIVE '/dev/cl/var' [<6.00 GiB] inherit -``` - -So here we can see where the logical volumes are mapped _/dev/cl_ and we can mount these partitions like we did before, using the same method: - -``` -# mount /dev/cl/root /mnt/ -# mount /dev/cl/home /mnt/home/ -# mount /dev/cl/var /mnt/var/ -# mount --bind /dev /mnt/dev -# mount -t proc /proc /mnt/proc -# mount -t sysfs /sys /mnt/sys -# mount -t tmpfs tmpfs /mnt/run -# mkdir -p /mnt/run/systemd/resolve/ -# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf -# chroot /mnt -``` - -### Btrfs filesystem with subvolumes - -#### Overview of a btrfs partition with subvolumes - -Let’s have a look at the filesystem. - -Fdisk tells us that there are only two partitions on the physical media. - -``` -# fdisk -l -Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors - […] - Device Boot Start End Sectors Size Id Type - /dev/vda1 * 2048 2099199 2097152 1G 83 Linux - /dev/vda2 2099200 41943039 39843840 19G 83 Linux -``` - -Here are the contents of the target system’s fstab (OSext): - -``` -UUID=3de441bd-59fc-4a12-8343-8392faab5ac7 / btrfs subvol=root,compress=zstd:1 0 0 -UUID=71dc4f0f-9562-40d6-830b-bea065d4f246 /boot ext4 defaults 1 2 -UUID=3de441bd-59fc-4a12-8343-8392faab5ac7 /home btrfs subvol=home,compress=zstd:1 0 0 -``` - -Looking at the _UUID_s in the _fstab_, we can see that there are two different ones. - -One is an ext4, used here for _/boot_ and the other is a btrfs containing two mount points (the subvolumes), _/_ and _/home_. - -#### Overview of a btrfs filesystem with subvolumes - -Let’s have a look at what is in the btrfs partition (_/dev/vda2_ here) by mounting it directly: - -``` -# mount /dev/vda2 /mnt/ -# ls /mnt/ -home root - -# ls /mnt/root/ -bin dev home lib64 media opt root sbin sys usr -boot etc lib lost+found mnt proc run srv tmp var - -# ls /mnt/home/ -user - -# umount /mnt -``` - -Here we can see that in the mounted partition there are two folders (the subvolumes), that contain lots of different directories (the target file hierarchy). - -To get this information about the subvolumes, there is a much more elegant way. - -``` -# mount /dev/vda2 /mnt/ - -# btrfs subvolume list /mnt -ID 256 gen 178 top level 5 path home -ID 258 gen 200 top level 5 path root -ID 262 gen 160 top level 258 path root/var/lib/machines - -# umount /mnt -``` - -#### Practical chroot with btrfs subvolumes - -Now that we’ve had a look at the contents of our partition, we will mount the system on chrootDir (_/mnt_ in the example). We will do this by adding the mount type as btrfs and the option for subvolume _subvol=SubVolumeName_. We will also add the special system folders and other partitions in the same way. 
- -``` -# mount /dev/vda2 /mnt/ -t btrfs -o subvol=root - -# ls /mnt/ -bin dev home lib64 media opt root sbin sys usr -boot etc lib lost+found mnt proc run srv tmp var - -# ls /mnt/home/ - - -# mount /dev/vda2 /mnt/home -t btrfs -o subvol=home - -# ls /mnt/home/ -user - -# mount /dev/vda1 /boot -# mount --bind /dev /mnt/dev -# mount -t proc /proc /mnt/proc -# mount -t sysfs /sys /mnt/sys -# mount -t tmpfs tmpfs /mnt/run -# mkdir -p /mnt/run/systemd/resolve/ -# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf -# chroot /mnt -``` - -When the job is done, we use the shell command _exit_ and unmount all previously mounted directories as well as the chrootDir itself (_/mnt_). - -``` -exit -# umount /mnt/boot -# umount /mnt/sys -# umount /mnt/proc -# umount /mnt/sys -# umount /mnt/run -# umount /mnt -``` - -### Conclusion - -As you can see on the screenshot below, I performed a dnf update on a Fedora Linux 34 Workstation from a live [Fedora 33 security lab CD][5], that way, if a friend needs you to debug his/her/their Gnu/Linux, he/she/they just have to bring the hard drive to you and not the whole desktop/server machine. - -![][6] - -Be careful if you use a different shell between your host OS and OSext (the chrooted OS), for example ksh <-> bash. -In this case you’ll have to make sure that both systems have the same shell installed. - -I hope this will be useful to anyone needing to debug, or if you just need to update your other Fedora Linux in your dual boot and don’t want to have to restart 😉 - -This article just referred to a part of btrfs, for more information you can have a look at the the [wiki][7] which will give you all the information you need. - -Have fun chrooting. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/os-chroot-101-covering-btrfs-subvolumes/ - -作者:[yannick duclap][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/cybermeme/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/chroot-btrfs-816x345.jpg -[2]: https://fedoramagazine.org/reset-root-password-fedora/ -[3]: https://man7.org/linux/man-pages/man2/chroot.2.html -[4]: https://man7.org/linux/man-pages/man8/mount.8.html -[5]: https://labs.fedoraproject.org/security/download/index.html -[6]: https://fedoramagazine.org/wp-content/uploads/2021/07/fedoraSecurity.png -[7]: https://fedoraproject.org/wiki/Btrfs diff --git a/sources/tech/20210709 Troubleshooting bugs in an API implementation.md b/sources/tech/20210709 Troubleshooting bugs in an API implementation.md deleted file mode 100644 index a8f5885c8d..0000000000 --- a/sources/tech/20210709 Troubleshooting bugs in an API implementation.md +++ /dev/null @@ -1,99 +0,0 @@ -[#]: subject: (Troubleshooting bugs in an API implementation) -[#]: via: (https://opensource.com/article/21/7/listing-prefixes-s3-implementations) -[#]: author: (Alay Patel https://opensource.com/users/alpatel) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Troubleshooting bugs in an API implementation -====== -Different API versions can cause unexpected problems. 
-![magnifying glass on computer screen, finding a bug in the code][1] - -As distributed and cloud computing adoption increase, things are intrinsically getting harder to debug. This article shares a situation where you would expect a library to safeguard against different versions of an API. However, it didn't and it caused unexpected behavior that was very hard to debug. This might be a useful example of how ripping out layers of abstractions is sometimes necessary to get to the root cause of a problem in a systematic manner. - -The S3 (Simple Storage Solution) API is an industry standard that provides the capability to interact with cloud storage programmatically. Many cloud providers implement it as one of the ways to interact with the object-store. Having different vendors to choose from is good to avoid vendor lock-in. Also, having different implementations to choose from means you get to select open source implementations of the popular standard that works best for you and your team. - -However, the differences in API versions may cause unexpected problems, as we learned. This article leverages those differences to illustrate the troubleshooting process. - -### Konveyor Crane - -[Crane][2] is part of the [Konveyor community][3], which works to solve problems related to app modernization and portability to further the adoption of Kubernetes. Crane allows users to migrate applications (and the associated data) deployed on OpenShift 3 to OpenShift 4. Behind the scenes, it uses [Velero][4] to orchestrate the migration. Velero uses the object store to perform backup and restore operations. - -### How Velero stores data - -Velero can be configured to use a bucket from the object store as a backup storage location (where backup data is stored). Velero organizes backups in a directory called `/backups` (with `prefix` being configurable). Under the `backups` directory, Velero creates a separate directory for each backup, e.g., `/backups/`. - -Additionally, to ensure that a backup created in the object store is available in the cluster and available for restoration, Velero makes a prefix list of all the directories under `backups`. It uses the ListObjectsV2 S3 API to implement this. The [ListObjectsV2][5] API differs from the [ListObjects][6] API in how it handles pagination. - -### How API differences produced a bug - -The differences between these two API versions are subtle. First, clients see the difference in the request that they send to the S3 server. When requesting a ListObjectV2, the client sends something like this: - - -``` -GET /?list-type=2&delimiter=Delimiter&prefix=Prefix -HTTP/1.1 -Host: Bucket.s3.example.objectstorage.softlayer.net -x-amz-request-payer: RequestPayer -x-amz-expected-bucket-owner: ExpectedBucketOwner -``` - -For ListObjects, the request looks very similar, but it's missing `list-type=2`: - - -``` -GET /?delimiter=Delimiter&marker=Marker&prefix=Prefix -HTTP/1.1 -Host: Bucket.s3.example.objectstorage.softlayer.net -x-amz-request-payer: RequestPayer -x-amz-expected-bucket-owner: ExpectedBucketOwner -``` - -For a server that ignores the `list-type=2` parameter, it is easy to respond to a basic ListObjectsV2 call with a ListObject response type. - -The interesting difference between the API versions' response types is how pagination is implemented. Both versions share a common field called `isTruncated` in the response; this indicates whether the server has sent a complete set of keys in its response. 
In ListObjectsV2, this field is used along with the `NextContinuousToken` field to get the next page (and, hence, the next set of keys) and is iterated upon until the `isTruncated` field is false. However, in ListObjects API, the `NextMarker` field is used instead. There are subtle differences in how this is implemented. - -### Our observations - -When we observed the Velero debug logs, we discovered 555 total backup objects were found. However, when we ran the [s3cmd][7] command against the same bucket to list objects, it returned 788. After looking at the debug logs of the s3cmd command-line interface (CLI), we found that the s3cmd could talk to the server using ListObjects. We also noticed that the last field on the first page of the s3cmd debug log was the last field Velero saw in its list. This immediately rang bells that pagination is not implemented correctly with the ListObjectsV2 API. - -In a ListObjectsV2 API, the `NextContinuousToken` field is used to take the client to the next page, and the `ListObjectV2Pages` method in the `aws-go-sdk` uses this field in its implementation. The logic is: if the `NextContinuousToken` field is empty, no more pages exist, so set `LastPage=true`. - -Considering that a server could send a ListObject response without a `NextContinuousToken` set on a ListObjectV2Pages API call, it is clear that if the response is pagination with a ListObject response, ListObjectsV2Pages will read only the first page. This is exactly what happened and was verified by observing it in a debugger using a [sample program][8]. - -Simply by changing Velero's implementation to use the ListObjectsPages method (which uses the ListObjects API), Velero was able to report a backup count of 788, which was consistent with the s3cmd CLI. - -Because of this semantic difference, the customer's migration efforts were blocked. The root cause stemmed from the libraries being used, and the analysis unblocked the customer. - -### Conclusion - -This case study shows how implementations of something as widely adopted as the S3 API could have bugs and can cause problems in unexpected ways. - -To follow the technical analysis of how Konveyor's development team is solving modernization and migration issues, check out our engineering [knowledge base][9]. 
For updates on the Konveyor tools, join the community at [konveyor.io][10] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/listing-prefixes-s3-implementations - -作者:[Alay Patel][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alpatel -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code) -[2]: https://www.konveyor.io/crane -[3]: https://www.redhat.com/en/blog/red-hat-and-ibm-research-launch-konveyor-project -[4]: https://velero.io/ -[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html -[6]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html -[7]: https://s3tools.org/usage -[8]: https://gist.github.com/alaypatel07/c2a1f34095813e8887ddcb3f6e90d262 -[9]: http://engineering.konveyor.io/ -[10]: https://konveyor.io/ diff --git a/sources/tech/20210709 What you need to know about security policies.md b/sources/tech/20210709 What you need to know about security policies.md deleted file mode 100644 index 12289bea9b..0000000000 --- a/sources/tech/20210709 What you need to know about security policies.md +++ /dev/null @@ -1,89 +0,0 @@ -[#]: subject: (What you need to know about security policies) -[#]: via: (https://opensource.com/article/21/7/what-security-policy) -[#]: author: (Chris Collins https://opensource.com/users/clcollins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -What you need to know about security policies -====== -Learn about protecting your personal computer, server, and cloud systems -with SELinux, Kubernetes pod security, and firewalls. -![Lock][1] - -A **security policy** is a set of permissions that govern access to a system, whether the system is an organization, a computer, a network, an application, a file, or any other resource. Security policies often start from the top down: Assume nobody can do anything, and then allow exceptions. - -On a desktop PC, the default policy is that no user may interact with the computer until after logging in. Once you've successfully logged in, you inherit a set of digital permissions (in the form of metadata associated with your login account) to perform some set of actions. The same is true for your phone, a server or network on the internet, or any node in the cloud. - -There are security policies designed for filesystems, firewalls, services, daemons, and individual files. Securing your digital infrastructure is a job that's never truly finished, and that can seem frustrating and intimidating. However, security policies exist so that you don't have to think about who or what can access your data. Being comfortably familiar with potential security issues is important, and reading through known security issues (such as NIST's great [RSS feed][2] for [CVE entries][3]) over your [power breakfast][4] can be more eye-opening than a good cup of coffee, but equally important is being familiar with the tools at your disposal to give you sensible defaults. These vary depending on what you're securing, so this article focuses on three areas: your personal computer, the server, and the cloud. 
- -### SELinux - -[SELinux][5] is a **labeling system** for your personal computer, servers, and the Linux nodes of the cloud. On a modern Linux system running SELinux, every process has a label, as does every file and directory. In fact, any system object gets a label. Luckily, you're not the one who has to do the labeling. These labels are created for you automatically by SELinux. - -Policy rules govern what access is granted between labeled **processes** and labeled **objects**. The kernel enforces these rules. In other words, SELinux can ensure that an action is safe whether a user appears to deserve the right to perform that action or not. It does this by understanding what processes are permitted. This protects a system from a bad actor who gains escalated permissions—whether it's through a security exploit or by wandering over to your desk after you've gotten up for a coffee refill—by understanding the expected interactions of all of your computer's components. - -For more information about SELinux, read our [illustrated guide to SELinux][6] by Dan Walsh. To learn more about using SELinux, read [A sysadmin's guide to SELinux][7] by Alex Callejas, and download our free [SELinux cheat sheet][8]. - -### Kubernetes pod security - -In the world of the Kubernetes cloud, there are **Security Policies** and **Security Contexts**. - -Pod [Security Policies][9] are an implementation of Kubernetes pod security resources. They are built-in resources that describe specific conditions that pods must conform to in order to be accepted and scheduled. For example, Pod Security Policies can leverage restrictions on which types of volumes a pod may be allowed to mount or what user or group IDs the pod is not allowed to use. Unlike Security Contexts, these are restrictions controlled by the cluster's Control Plane that decide if a given pod is allowed within the Kubernetes system, even before it is created. If the pod spec does not meet the requirements of the Pod Security Policy, it is rejected. - -[Security Contexts][10] are similar to Pod Security Policies, in that they describe what a pod or container can and cannot do but in the context of the container runtime. Recall that the Pod Security Policies are enforced in the Control Plane. Security Contexts are provided in the spec of the pod and describe to the container runtime (e.g., Docker, CRI-O, etc.) specifically how the pod should run. There's a lot of overlap in the kinds of restrictions found in Pod Security Policies and Security Contexts. The former can be thought of as "these are the things a pod in this policy may do," while the latter is "this pod must be run with these specific rules." - -#### The state of Pod Security Policies - -Pod Security Policies are deprecated and will be removed in Kubernetes 1.25. In April 2021, Tabitha Sable of Kubernetes SIG Security wrote about the [deprecation and replacement of Pod Security Policies][11]. There's an open pull request that describes proposed [Kubernetes enhancements][12] with a new admission controller to enforce pod security standards, which is suggested as the replacement for the deprecated Pod Security Policies. The architecture acknowledges, however, that there's a large ecosystem of add-ons and complementary services that can be mixed and matched to provide coverage that meets an organization's needs. 
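-
-Before moving on, it may help to see what the per-pod rules described above look like in practice. The following is a minimal, hypothetical sketch of a pod spec that sets a Security Context; the pod name, image, and user ID are placeholders invented for this illustration and are not from any real cluster:
-
-```
-$ kubectl apply -f - <<'EOF'
-apiVersion: v1
-kind: Pod
-metadata:
-  name: security-context-demo          # hypothetical pod name
-spec:
-  securityContext:                     # pod-level settings
-    runAsNonRoot: true                 # refuse to start a container that runs as root
-    runAsUser: 1000                    # run the containers as this UID
-  containers:
-  - name: demo
-    image: registry.example.com/demo:latest    # placeholder image
-    securityContext:                   # container-level settings
-      allowPrivilegeEscalation: false
-      readOnlyRootFilesystem: true
-      capabilities:
-        drop: ["ALL"]                  # drop every Linux capability
-EOF
-```
-
-A spec like this tells the container runtime "this pod must be run with these specific rules," which is exactly the role Security Contexts play alongside the cluster-level policies discussed in this section.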
- -For now, Kubernetes has published [Pod Security Standards][13] describing the overall concept of layered policy types, from totally unrestricted **Privileged** pods to minimally restricted **Baseline** and then heavily **Restricted** policies, and publishing these example policies as Pod Security Policies. The documentation describes what restrictions make up these different profiles and provide an excellent starting point to get familiar with different types of restrictions that might be applied to a pod to increase security. - -#### Future of security policies - -The future of pod security in Kubernetes will likely include an admission controller like the one proposed in the enhancement PR and a mix of add-ons for tweaking and adjusting how pods run in the cluster, such as [Open Policy Agent][14] (OPA). Kubernetes is extremely flexible given just how complicated its job is, and this change follows the same pattern that has allowed Kubernetes to be so successful: managing container orchestration well and allowing an entire ecosystem of add-ons and tools to enhance and extend the core so that it is not a one-size-fits-all solution. - -### Firewalls - -Protecting your network is just as important as protecting the computers inside it. For that, there are firewalls. Some firewalls come embedded in routers, but computers have firewalls too, and in large organizations, they run the firewall for the entire network. - -Typical firewall policies are constructed by denying all traffic, followed by judicious exceptions for necessary incoming and outgoing communication. Individual users can learn more about the `firewall-cmd` in Seth Kenlon's [Getting started with Linux firewalls][15]. Sysadmins can learn more about firewalls in Seth's [Secure your network with firewall-cmd][16]. And both users and admins can benefit from our free [firewall-cmd cheat sheet][17]. - -### Security policies - -Security policies are important for protecting people and their data no matter what the system. Buildings and tech conferences need security policies to keep people physically safe, and computers need security policies to keep data safe from abuse. - -Spend some time thinking about the security of the systems in your life, getting familiar with the default policies, and choosing your level of comfort for the different risks you identify. Then establish a security policy, and stick to it. As with [backup plans][18], security won't get addressed unless it's _easy_, so make it second nature to maintain good security practices. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/what-security-policy - -作者:[Chris Collins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clcollins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock) -[2]: https://nvd.nist.gov/feeds/xml/cve/misc/nvd-rss-analyzed.xml -[3]: https://nvd.nist.gov/vuln/data-feeds#APIS -[4]: https://opensource.com/article/21/6/breakfast -[5]: https://en.wikipedia.org/wiki/Security-Enhanced_Linux -[6]: https://opensource.com/business/13/11/selinux-policy-guide -[7]: https://opensource.com/article/18/7/sysadmin-guide-selinux -[8]: https://opensource.com/downloads/cheat-sheet-selinux -[9]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ -[10]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ -[11]: https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/ -[12]: https://github.com/kubernetes/enhancements/issues/2579 -[13]: https://kubernetes.io/docs/concepts/security/pod-security-standards/ -[14]: https://www.openpolicyagent.org/ -[15]: https://opensource.com/article/20/2/firewall-cheat-sheet -[16]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd -[17]: https://opensource.com/downloads/firewall-cheat-sheet -[18]: https://opensource.com/article/19/3/backup-solutions diff --git a/sources/tech/20210711 Explore waterways with this open source nautical navigation tool.md b/sources/tech/20210711 Explore waterways with this open source nautical navigation tool.md deleted file mode 100644 index 2509b03452..0000000000 --- a/sources/tech/20210711 Explore waterways with this open source nautical navigation tool.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: subject: (Explore waterways with this open source nautical navigation tool) -[#]: via: (https://opensource.com/article/21/7/open-source-nautical-navigation) -[#]: author: (Don Watkins https://opensource.com/users/don-watkins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Explore waterways with this open source nautical navigation tool -====== -Whether you're sailing down a local river or setting out on the open -seas, keep track of your nautical location with OpenCPN. -![Boat helm at sunset for navigation][1] - -If you're traveling by boat down your local waterway or sailing around the world, you can bring great navigation software with you and maintain your commitment to open source software. [OpenCPN][2] is free and open source software developed by sailors. It serves as the primary navigation interface for vessels with full-time helm-visible navigational suites. The software is written in C and released under a [GPLv2 license][3]. - -### Install OpenCPN - -OpenCPN can be installed on Linux, macOS, or Windows. Packages are available for [Fedora][4], [Ubuntu][5], and [Raspberry Pi][6]. I installed OpenCPN on my Linux laptop using [Flatpak][7]. For macOS and Windows, you can download and install packages from the [OpenCPN website][8]. - -There's also an [Android app][9] version available from the Google Play store. - -### Use OpenCPN - -Once it's installed, launch OpenCPN to try it out. 
The main menu bar is located on the left. - -The first choice from the top is **Options**. Here, you can select how the program appears in the display and what units of measurement of speed, distance, and depth to use. You also can set how latitude and longitude are displayed in [decimal degrees][10]. - -![OpenCPN map showing latitude and longitude measurements][11] - -(Don Watkins, [CC BY-SA 4.0][12]) - -### Get charts - -OpenCPN doesn't come preinstalled with charts. Which charts you choose to install are generally determined by your location and, potentially, your destination. - -Many free [charts are available][13], including those from the US NOAA Office of Coast Survey, Marinha Do Brasil (which includes parts of Antarctica), East Asia Hydrographic Commission, many sources of inland European waterway charts, and many other sources. The chart page also links to commercial sources, should you require them. - -### Learn more - -The project provides an excellent [quickstart guide][14] to make it easy for new users. - -The OpenCPN project also has excellent [documentation][15] to guide you through the installation and setup process. It provides step-by-step directions for first use and [installing charts][16]. The program also comes with a list of [supplementary software][17] you can use with it. - -OpenCPN is available in 20 languages. There are lots of how-to videos available on [Vimeo][18] and [YouTube][19] to help you become familiar with the software. - -![OpenCPN map of Long Island and Nantucket Sounds][20] - -(Don Watkins, [CC BY-SA 4.0][12]) - -### Get involved - -David S. Register is the lead developer for the project. He originally developed OpenCPN in 2009 for his own use. Other folks expressed an interest in his software, and now there are thousands of users and more than 40 active developers worldwide. - -You can [get involved][21] with the project by consulting its excellent [developer documentation][22]. - -Take a look at two open source applications that bring the far reaches of space a little bit closer. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-source-nautical-navigation - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/boat-helm-sunset.jpg?itok=g1MYhF_s (Boat helm at sunset) -[2]: https://opencpn.org/ -[3]: https://github.com/OpenCPN/OpenCPN/blob/master/COPYING.gplv2 -[4]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:opencpn_installation:fedora -[5]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:opencpn_installation:ubuntu_ppa -[6]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:opencpn_installation:raspberrypi_rpi2 -[7]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:opencpn_installation:flatpak -[8]: https://opencpn.org/OpenCPN/info/downloadopencpn.html -[9]: https://www.bigdumboat.com/aocpn/cpnapp.html -[10]: http://wiki.gis.com/wiki/index.php/Decimal_degrees -[11]: https://opensource.com/sites/default/files/uploads/opencpn-map.png (OpenCPN map showing latitude and longitude measurements) -[12]: https://creativecommons.org/licenses/by-sa/4.0/ -[13]: https://opencpn.org/OpenCPN/info/chartsource.html -[14]: https://opencpn.org/OpenCPN/info/quickstart.html -[15]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:opencpn_installation -[16]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:opencpn_user_manual:getting_started:chart_installation -[17]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:supplementary_hardware -[18]: https://vimeo.com/user17026077/videos -[19]: https://www.youtube.com/results?search_query=OpenCPN -[20]: https://opensource.com/sites/default/files/uploads/opencpn-map2.png (OpenCPN map of Long Island and Nantucket Sounds) -[21]: https://github.com/OpenCPN/OpenCPN -[22]: https://opencpn.org/wiki/dokuwiki/doku.php?id=opencpn:developer_manual diff --git a/sources/tech/20210712 Set up temperature sensors in your home with a Raspberry Pi.md b/sources/tech/20210712 Set up temperature sensors in your home with a Raspberry Pi.md deleted file mode 100644 index 56e46b4630..0000000000 --- a/sources/tech/20210712 Set up temperature sensors in your home with a Raspberry Pi.md +++ /dev/null @@ -1,205 +0,0 @@ -[#]: subject: (Set up temperature sensors in your home with a Raspberry Pi) -[#]: via: (https://opensource.com/article/21/7/temperature-sensors-pi) -[#]: author: (Chris Collins https://opensource.com/users/clcollins) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Set up temperature sensors in your home with a Raspberry Pi -====== -Find out how hot your house is with a simple home Internet of Things -project. -![Orange home vintage thermostat][1] - -It's HOT! I suppose I can't complain too much about living in paradise, but when my wife and I moved to Hawaii last fall, I didn't really think too much about the weather. 
Don't get me wrong, the weather is lovely pretty much all the time, and we keep our windows open 24/7, but that means it is pretty warm in the house right now in the middle of summer. - -So, where does all this humble bragging intersect with open source? Well, we're planning to get a whole-house fan—one of those big ones that suck all the air out of your house and force it into the attic, pushing all the hot air out of the attic in the process. I am _sure_ this will make the house way cooler, but the geek in me wants to know just how much cooler. - -So today, I'm playing with temperature sensors, [Raspberry Pis][2], and [Python][3]. - -Play along at home! Nothing like a little #CitizenScience! - -![DHT22 sensor and Raspberry Pi Zero W][4] - - Charming little development environment, isn't it? (Chris Collins, [CC BY-SA 4.0][5]) - -Yes, OK, I could just buy a thermometer or two, check them each day, and see what happens. But why do that when you can totally overengineer a solution, automate the data collection, and graph it all over time, amirite? - -Here's what I need: - - * Raspberry Pi Zero W (or, really, any Raspberry Pi) - * DHT22 digital sensor - * SD card - - - -### Connect the DHT22 sensor to the Raspberry Pi - -You can find a bunch of inexpensive DHT22 temperature and humidity sensors with a quick web search. The DHT22 is a digital sensor, making it easy to interact with. If you purchase a raw sensor, you'll need a resistor and some soldering skills to get it working (check out Pi My Life Up's DHT22 article for [great instructions on working with the raw sensor][6]), but you can also purchase one with a small PCB that includes all that, which is what I did. - -The DHT22 with the PCB attached has three pins: a Positive pin (marked with **+**), a Data pin, and a Ground pin (marked with **-**). You can wire the DHT22 directly to the Raspberry Pi Zero W. I used Raspberry Pi Spy's [Raspberry Pi GPIO guide][7] to make sure I connected everything correctly. - -The Positive pin provides power from the Pi to the DHT22. The DHT22 runs on 3v-6v, so I selected one of the 5v pins on the Raspberry Pi to provide the power. I connected the Data pin on the DHT22 to one of the Raspberry Pi GPIO pins. I am using GPIO4 for this, but any would work; just make a note of the one you choose, as the Python code that reads the data from the sensor will need to know which pin to read from. Finally, I connected the Ground pin on the DHT22 to a ground pin on the Raspberry Pi header. - -This is how I wired it up: - - * DHT22 Positive pin <-> Raspberry Pi GPIO v5 pin (#2) - * DHT22 Data pin <-> Raspberry Pi GPIO4 pin (#7) - * DHT22 Ground pin <-> Raspberry Pi Group pin (#6) - - - -This diagram from Raspberry Pi Spy shows the pin layout for the Raspberry Pi Zero W (among others). - -![Raspberry Pi GPIO header diagram][8] - -(Copyright 2021, [Matt Hawkins][7]) - -### Install the DHT sensor software - -Before proceeding, make sure you have an operating system installed on the Raspberry Pi Zero W and can connect to it remotely or with a keyboard. If not, consult my article about [customizing different operating system images][9] for Raspberry Pi. I am using [Raspberry Pi OS Lite][10], released May 7, 2021, as the image for the Raspberry Pi Zero W. - -Once you've installed the operating system on an SD card and booted the Raspberry Pi from the card, there are only a couple of other software packages to install to interact with the DHT22. 
- -First, install the Python Preferred Installer Program (pip) with `apt-get`, and then use pip to install the [Adafruit DHT sensor library for Python][11] to interact with the DHT22 sensor. - - -``` -# Install pip3 -sudo apt-get install python3-pip - -# Install the Adafruit DHT sensor library -sudo pip3 install Adafruit_DHT -``` - -### Get sensor data with Python - -With the DHT libraries installed, you can connect to the sensor and retrieve temperature and humidity data. - -Create a file with: - - -``` -#!/usr/bin/env python3 - -import sys -import argparse -import time - -# This imports the Adafruit DHT software installed via pip -import Adafruit_DHT - -# Initialize the DHT22 sensor -SENSOR = Adafruit_DHT.DHT22 - -# GPIO4 on the Raspberry Pi -SENSOR_PIN = 4 - -def parse_args(): -    parser = argparse.ArgumentParser() -    parser.add_argument("-f", "--fahrenheit", help="output temperature in Fahrenheit", action="store_true") - -    return parser.parse_args() - -def celsius_to_fahrenheit(degrees_celsius): -        return (degrees_celsius * 9/5) + 32 - -def main(): -    args = parse_args() - -    while True: -        try: -            # Gather the humidity and temperature -            # data from the sensor; GPIO Pin 4 -            humidity, temperature = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN) - -        except RuntimeError as e: -            # GPIO access may require sudo permissions -            # Other RuntimeError exceptions may occur, but -            # are common.  Just try again. -            print(f"RuntimeError: {e}") -            print("GPIO Access may need sudo permissions.") - -            time.sleep(2.0) -            continue - -        if args.fahrenheit: -            print("Temp: {0:0.1f}*F, Humidity: {1:0.1f}%".format(celsius_to_fahrenheit(temperature), humidity)) -        else: -            print("Temp:{0:0.1f}*C, Humidity: {1:0.1f}%".format(temperature, humidity)) - -        time.sleep(2.0) - -if __name__ == "__main__": -    main() -``` - -The important bits here are initializing the sensor and setting the correct GPIO pin to use on the Raspberry Pi: - - -``` -# Initialize the DHT22 sensor -SENSOR = Adafruit_DHT.DHT22 - -# GPIO4 on the Raspberry Pi -SENSOR_PIN = 4 -``` - -Another important bit is reading the data from the sensor with the variables set above for the sensor and pin: - - -``` -# This connects to the sensor "SENSOR" -# Using the Raspberry Pi GPIO Pin 4, "SENSOR_PIN" -    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN) -``` - -Finally, run the script! You should end up with something like this: - -![Output of the sensor script][12] - -84 degrees and 50% humidity. Oof! Hot and humid in here! (Chris Collins, [CC BY-SA 4.0][5]) - -Success! - -### Where to go from here - -I have three of these DHT22 sensors and three Raspberry Pi Zero Ws connected to my WiFi. I've installed them into some small project boxes, hot-glued the sensors to the outside, and set them up in my living room, office, and bedroom. With this setup, I can collect sensor data from them whenever I want by SSHing into the Raspberry Pi and running this script. - -But why stop there? Manually SSHing into them each time is tedious and too much work. I can do better! - -In a future article, I'll explain how to set up this script to run automatically at startup with a [systemd service][13], set up a web server to display the data, and instrument this script to export data in a format that can be read by [Prometheus][14], a monitoring system and time series database. 
I use Prometheus at work to collect data about OpenShift/Kubernetes clusters, plot trends, and create alerts based on the data. Why not go totally overboard and do the same thing with temperature and humidity data at home? This way, I can get baseline data and then see how well the whole-house fan changes things! - -#CitizenScience! - -Jeff Geerling is a father concered with his kid's cold room and their sleep because of it. To... - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/temperature-sensors-pi - -作者:[Chris Collins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clcollins -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/home-thermostat.jpg?itok=wuV1XL7t (Orange home vintage thermostat) -[2]: https://www.raspberrypi.org/ -[3]: https://www.python.org/ -[4]: https://opensource.com/sites/default/files/uploads/dht22.png (DHT22 sensor and Raspberry Pi Zero W) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://pimylifeup.com/raspberry-pi-humidity-sensor-dht22/ -[7]: https://www.raspberrypi-spy.co.uk/2012/06/simple-guide-to-the-rpi-gpio-header-and-pins/ -[8]: https://opensource.com/sites/default/files/uploads/raspberry_pi_gpio_layout_model_b_plus.png (Raspberry Pi GPIO header diagram) -[9]: https://opensource.com/article/20/5/disk-image-raspberry-pi -[10]: https://www.raspberrypi.org/software/operating-systems/ -[11]: https://github.com/adafruit/Adafruit_Python_DHT -[12]: https://opensource.com/sites/default/files/uploads/temperature_sensor.png (Output of the sensor script) -[13]: https://www.freedesktop.org/software/systemd/man/systemd.service.html -[14]: https://prometheus.io/ diff --git a/sources/tech/20210712 Use Docker Compose with Podman to Orchestrate Containers on Fedora.md b/sources/tech/20210712 Use Docker Compose with Podman to Orchestrate Containers on Fedora.md deleted file mode 100644 index f5c65324cf..0000000000 --- a/sources/tech/20210712 Use Docker Compose with Podman to Orchestrate Containers on Fedora.md +++ /dev/null @@ -1,161 +0,0 @@ -[#]: subject: (Use Docker Compose with Podman to Orchestrate Containers on Fedora) -[#]: via: (https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/) -[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Use Docker Compose with Podman to Orchestrate Containers on Fedora -====== - -![][1] - -Photo by [Jonas][2] on [Unsplash][3] - -Docker Compose is an open-source tool used by developers for orchestrating containers locally or in production. If you are new to containers, I suggest checking out the following links: - - * [Get Started with Docker][4] - * [A Practical Introduction to Container Terminology][5] - * [Using Pods with Podman on Fedora][6] - * [Podman with Capabilities on Fedora][7] - - - -Podman, a powerful alternative to Docker CLI, has attracted a lot of developers recently. However, Podman users faced a challenge. Docker Compose was expected to work with Docker daemon only. 
So, Podman users had to use other alternatives to Docker Compose like using [Podman Compose][8] that runs services defined in the Compose file inside a Podman pod. (To learn more about Podman Compose, check out my article [Manage Containers on Fedora with Podman Compose][9] on [Fedora Magazine][10].) Another method was manually running different containers of an application and then generating a Kubernetes file with - -podman generate - -to be later re-run with - -podman play - -. (To learn more about this method, check out this [Moving from docker-compose to Podman pods][11] on [Red Hat][12].) - -### Podman and Docker Compose - -Both of the Docker Compose alternatives mentioned previously have their limitations. At the least they need you to know a little bit more than Container and Docker basics.The good news is that Podman has added [support for Docker Compose][13] since version 3.0 so you can now run your traditional docker-compose.yml files with Podman backend. Podman does this by setting up a UNIX socket for - -docker-compose - -to interact with, similar to the Docker daemon. In this article I will show you how to use Docker Compose both with rootful and rootless Podman. - -### Required Software - -Install the following packages on your system to run Docker Compose with Podman: - - * podman: Tool for managing containers - * docker-compose: Tool for orchestrating containers - * podman-docker: Installs a script named docker that emulates docker CLI using Podman. Also links Docker CLI man pages and podman. - - - -Install the above packages using dnf: - -``` -sudo dnf install -y podman podman-docker docker-compose -``` - -### Setting Up Podman Socket - -Set up the Podman socket in order for Docker Compose to work: - -``` -sudo systemctl enable podman.socket -sudo systemctl start podman.socket -sudo systemctl status podman.socket -``` - -This sets up a Unix socket in to communicate with Docker Compose and symlinks it to /var/run/docker.sock. To test if you can communicate with the socket, run the following curl command: - -``` -sudo curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping -``` - -If the output from the above command is OK, it means your setup was successful. - -### Running A Sample Project with Docker Compose - -Now, you can start orchestrating your project by going to the project folder and running - -sudo docker-compose up - -. As an example, I will be using a docker-compose.yml from a sample WordPress project I created as a demo for this article. You can find it . Clone the repository on your system and from within the directory, start docker-compose. - -``` -sudo docker-compose up -``` - -If everything goes well, you will see docker-compose bringing up the services defined in the compose YAML file. Access the new WordPress instance at after the containers are created. To stop the project, you can press Ctrl + C in the terminal where you executed _docker-compose up_. To remove the containers, execute - -``` -sudo docker-compose down -``` - -### Running Docker Compose with Rootless Podman - -The setup shown above uses Podman in root-ful mode. Notice the _sudo_ keyword preceding most of the commands used. Often you will not need to run your projects as root. So, having the option to run docker-compose as a regular user is pretty handy. As of [version 3.2.0][14], Podman supports Docker-Compose with rootless Podman. 
The setup, however, changes as follows: - -``` -systemctl --user enable podman.socket -systemctl --user start podman.socket -systemctl --user status podman.socket -export DOCKER_HOST=///run/user/$UID/podman/podman.sock -``` - -Note that when starting the podman socket as non-root user, a user-level socket will be created at _/run/user/$UID/podman/podman.sock_, where _$UID_ refers is the non-root user’s user ID. We need to set the DOCKER_HOST environment variable to that socket so that Docker Compose can talk to the correct socket. You can add the environment variable to your ~/.bash_profile to make it persistent across system reboots. In root-ful mode, the socket is created in _/run/podman/podman.sock_ which is symlinked to _/var/run/docker.sock_ (the default socket expected by the docker-compose binary). So, we didn’t need to set DOCKER_HOST variable then. - -Now, in rootless mode, we can simply run the command -``` - -``` - -docker-compose up -``` - -``` - -without “sudo” in the project root folder. This will bring up our WordPress site running on Docker Compose with Podman backend. - -![WordPress instance running with Docker Compose with Podman backend][15] - -### Further Reading: - - * [][13][][16][Running Podman and Docker Compose, Red Hat][13] - * [From Docker Compose to Kubernetes with Podman][17] - * [Convert docker-compose Services to Pods with Podman][18] - - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/ - -作者:[Mehdi Haghgoo][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/powergame/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/docker-compose-w-podman-816x345.jpg -[2]: https://unsplash.com/@jonason_b?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/docker-container?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://docs.docker.com/get-started -[5]: https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction -[6]: https://fedoramagazine.org/podman-pods-fedora-containers/ -[7]: https://fedoramagazine.org/podman-with-capabilities-on-fedora/ -[8]: http://github.com/containers/podman-compose -[9]: https://fedoramagazine.org/manage-containers-with-podman-compose/ -[10]: https://fedoramagazine.org -[11]: https://www.redhat.com/sysadmin/compose-podman-pods -[12]: https://redhat.com -[13]: https://www.redhat.com/sysadmin/podman-docker-compose -[14]: https://github.com/containers/podman/releases/tag/v3.2.0 -[15]: https://fedoramagazine.org/wp-content/uploads/2021/06/Screenshot-from-2021-06-25-06-48-39.png -[16]: tmp.Svb0n6PVdg -[17]: https://www.redhat.com/sysadmin/compose-kubernetes-podman -[18]: https://balagetech.com/convert-docker-compose-services-to-pods/ diff --git a/sources/tech/20210714 5 Rust tools worth trying on the Linux command line.md b/sources/tech/20210714 5 Rust tools worth trying on the Linux command line.md deleted file mode 100644 index b723dc42df..0000000000 --- a/sources/tech/20210714 5 Rust tools worth trying on the Linux command line.md +++ /dev/null @@ -1,143 +0,0 @@ -[#]: subject: (5 Rust tools worth trying on the Linux command line) -[#]: via: 
(https://opensource.com/article/21/7/rust-tools-linux) -[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -5 Rust tools worth trying on the Linux command line -====== -Try some new commands for common tasks. -![Terminal command prompt on orange background][1] - -Linux inherited a lot from Unix, which has been around for a half-century. This means most of the tools you use in your Linux terminal probably either have a very long history or were written to emulate those historical commands. It's a point of pride in the POSIX world that tools don't _need_ constant reinvention. In fact, there's a subset of Linux users today who could run a version of Linux from [before they were born][2] without having to learn anything new. It's tried, true, and reliable. - -That doesn't mean there hasn't been evolution, though. All the commands Linux users know and love have been improved over the years. Some have even been replaced entirely and are so common now that few people still care to remember the old ones. Can you imagine Linux without SSH? Well, believe it or not, the `ssh` command replaced one called `rsh`. - -I'm often on the lookout for new commands because I'm always intrigued by the possibility of getting things done more efficiently. If there's a better, faster, or more robust command out there for doing a common task, I want to know about it. And while there's equal opportunity for any language to invent new Linux commands, Rust developers have been delivering an impressive collection of useful general-purpose utilities. - -### Replace man with tealdeer - -Tealdeer provides the `tldr` command, which displays an abbreviated, no-nonsense summary of how a command is used. It's not that manual and info pages aren't useful, because they are, but sometimes they can be a little verbose and a little obtuse. Tealdeer keeps its hints clear and concise, with examples of how to use the command you're struggling to recall. - - -``` -$ tldr tar - -  Archiving utility. -  Often combined with a compression method, such as gzip or bzip2. -  More information: <[https://www.gnu.org/software/tar\>][3]. - -  [c]reate an archive and write it to a [f]ile: - -      tar cf target.tar file1 file2 file3 - -  [c]reate a g[z]ipped archive and write it to a [f]ile: - -      tar czf target.tar.gz file1 file2 file3 - -  [c]reate a g[z]ipped archive from a directory using relative paths: - -      tar czf target.tar.gz --directory=path/to/directory . -[...] -``` - -Read the full article [about tldr][4]. - -### Replace du with dust - -The `du` command gives feedback about disk usage. It's a relatively simple task; likewise, the command is pretty simple, too. The `dust` command is `du` written in Rust, and it uses color-coding and bar graphs for users who prefer added visual context. 
- - -``` -$ dust - 5.7M   ┌── exa                                   │                                   ██ │   2% - 5.9M   ├── tokei                                 │                                   ██ │   2% - 6.1M   ├── dust                                  │                                   ██ │   2% - 6.2M   ├── tldr                                  │                                   ██ │   2% - 9.4M   ├── fd                                    │                                   ██ │   4% - 2.9M   │ ┌── exa                                 │                                 ░░░█ │   1% -  15M   │ ├── rustdoc                             │                                 ░███ │   6% -  18M   ├─┴ bin                                   │                                 ████ │   7% -  27M   ├── rg                                    │                               ██████ │  11% - 1.3M   │     ┌── libz-sys-1.1.3.crate            │  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   0% - 1.4M   │     ├── libgit2-sys-0.12.19+1.1.0.crate │  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1% - 4.5M   │   ┌─┴ github.com-1ecc6299db9ec823       │  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   2% - 4.5M   │ ┌─┴ cache                               │  ░░░░░░░░░░░░░░░░░░░░░░░░ -[...] -``` - -Read the full article [about dust][5]. - -### Replace find with fd - -The `find` command is a useful tool for finding files on your computer, but its syntax can be difficult to master. Not only are there a lot of options, but the order of those options can be significant, depending on what you're doing. Some people have [written scripts][6] to abstract the task away from the command. Other people just write a new tool altogether, like `fd`. - -Syntax doesn't get any easier than this: - - -``` -$ fd example -Documents/example.txt -Documents/example-java -Downloads/example.com/index.html -``` - -Read the full article [about fd][7]. - -### Replace ls with exa - -You might not think that the `ls` command would have much room for improvement. But `exa` proves that even the most mundane utility can benefit from small adjustments. For instance, why not have a list command with built-in Git awareness? Why not get extra metadata in your file lists?  - -Read the full [article about exa][8]. - -### Try Tokei - -Unlike the other tools on this list, the `tokei` utility doesn't replace one command, but it does demonstrate how the Linux terminal is—as always—an environment very much in constant growth. The terminal may contain lots of legacy commands, but there are new and exciting commands surfacing all the time. - -When I'm looking at a project in my local file system, and I need to know what languages it contains, I rely on a tool like Tokei. It's a program that displays statistics about a codebase, with wide support for 150 programming languages. I don't need to remember what languages have been used, or how many lines of code there are, or how many blanks or spaces or comments are there. It's a complete code-analysis tool, making my entry into and navigation of the code easy. - - -``` -$ tokei ~/exa/src ~/Work/wildfly/jaxrs -================== -Language   Files Lines Code Comments Blank -Java        46    6135  4324  945     632 -XML         23    5211  4839  473     224 -\--------------------------------- -Rust -Markdown -\----------------------------------- -Total -``` - -Read the full [article about tokei][9]. 
- -### Find your favorite - -Open source users never have to settle for just a small set of commands, or even just one version of a command. Find the commands you love, whether they're new ideas for emerging workflows, or reimplementations of old tools, or timeless classics that are just as good today as they were decades ago. Find the commands that make your life better and enjoy! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/rust-tools-linux - -作者:[Sudeshna Sur][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/sudeshna-sur -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) -[2]: https://opensource.com/article/16/12/yearbook-linux-test-driving-distros -[3]: https://www.gnu.org/software/tar\> -[4]: https://opensource.com/article/21/6/tealdeer -[5]: https://opensource.com/article/21/6/dust -[6]: https://opensource.com/article/20/2/find-file-script -[7]: https://opensource.com/article/21/6/fd -[8]: https://opensource.com/article/21/3/replace-ls-exa -[9]: https://opensource.com/article/21/6/tokei diff --git a/sources/tech/20210716 What does the Open-Closed Principle mean for refactoring.md b/sources/tech/20210716 What does the Open-Closed Principle mean for refactoring.md deleted file mode 100644 index 1a52b9ba8e..0000000000 --- a/sources/tech/20210716 What does the Open-Closed Principle mean for refactoring.md +++ /dev/null @@ -1,80 +0,0 @@ -[#]: subject: (What does the Open-Closed Principle mean for refactoring?) -[#]: via: (https://opensource.com/article/21/7/open-closed-principle-refactoring) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -What does the Open-Closed Principle mean for refactoring? -====== -Resolve the tension between protecting clients from unwanted changes and -extending the capabilities of services. -![Brain on a computer screen][1] - -In his 1988 book, _[Object-Oriented Software Construction][2]_, professor [Bertrand Meyer][3] defined the [Open-Closed Principle][4] as: - -> "A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains or new elements to the set of functions it performs. -> -> "A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding)." - -A more succinct way to put it would be: - -> Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. - -Similarly (and in parallel to Meyer's findings), [Alistair Cockburn][5] defined the [Protected Variation][6] pattern: - -> "Identify points of predicted variation and create a stable interface around them." - -Both of these deal with volatility in software. When, as is always the case, you need to make some change to a software module, the ripple effects can be disastrous. 
The root cause of disastrous ripple effects is tight coupling, so the Open-Closed Principle and Protected Variation Pattern teach us how to properly decouple various modules, components, functions, and so forth. - -### Does the Open-Closed Principle preclude refactoring? - -If a module (i.e., a named block of code) must remain closed to any modifications, does that mean you're not allowed to touch it once it gets deployed? And if yes, wouldn't that eliminate any possibility of refactoring? - -Without the ability to refactor the code, you are forced to adopt the Finality Principle. This holds that rework is not allowed (why would stakeholders agree to pay you to work again on something they already paid for?) and you must carefully craft your code, because you will get only one chance to do it right. This is in total contradiction to the discipline of refactoring. - -If you are allowed to extend the deployed code but not change it, are you doomed to swim forever in the waterfall rivers? Being given only one shot at doing anything is a recipe for disaster. - -Let's review the approach to solve this conundrum. - -### How to protect clients from unwanted changes - -Clients (meaning modules or functions that use some block of code) utilize some functionality by adhering to the protocol as originally implemented in the component or service. However, as the component or service inevitably changes, the original "partnership" between the service or component and various clients breaks down. Clients "discover" the change by breakage, which is always an unpleasant surprise that often ruins the initial trust. - -Clients must be protected from those breakages. The only way to do so is by introducing a layer of abstraction between the clients and the service or component. In software engineering lingo, we call that layer of abstraction an "interface" (or an API). - -Interfaces and APIs hide the implementation. Once you arrange for a service to be delivered via an interface or API, you free yourself from the worries of changing the implementation code. No matter how much you change the service's implementation, your clients remain blissfully unaffected. - -That way, you are back to your comfortable world of iterations. You are now completely free to refactor, to rearrange the code, and to keep rearranging it in pursuit of a more optimal solution. - -The thing in this arrangement that remains closed for modification is the interface or API. The volatility of an interface or API is the thing that threatens to break the established trust between the service and its clients. Interfaces and APIs must remain open for extension. And that extension happens behind the scenes—by refactoring and adding new capabilities while guaranteeing non-volatility of the public-facing protocol. - -### How to extend capabilities of services - -While services remain non-volatile from the client's perspective, they also remain open for business when it comes to enhancing their capabilities. This Open-Closed Principle is implemented through refactoring. - -For example, if the first increment of the `OrderPayment` service offers mere bare-bones capabilities (e.g., able to process the order total and calculate sales tax), the next increment can be safely added by respecting the Open-Closed Principle. Without breaking the handshake between the clients and the `OrderPayment` service, you can refactor the implementation behind the `OrderPayment` API by adding new blocks of code. 
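-
-As a rough sketch (not taken from the original article), the arrangement might look like the following in Python, with clients depending only on the abstract interface while the implementation behind it stays free to be refactored. The `Order` shape and the tax rate are made-up placeholders for illustration:
-
-```
-from abc import ABC, abstractmethod
-
-class OrderPayment(ABC):
-    """The stable interface clients depend on; closed for modification."""
-
-    @abstractmethod
-    def total(self, order) -> float:
-        """Return the amount to charge for the given order."""
-
-class BasicOrderPayment(OrderPayment):
-    """First increment: order total plus sales tax.
-    Open for extension behind the interface."""
-
-    def __init__(self, tax_rate: float = 0.08):   # placeholder tax rate
-        self.tax_rate = tax_rate
-
-    def total(self, order) -> float:
-        # 'order.items' with 'price' and 'quantity' is an assumed shape
-        subtotal = sum(item.price * item.quantity for item in order.items)
-        return subtotal * (1 + self.tax_rate)
-
-def checkout(payment: OrderPayment, order) -> float:
-    """A client: it only ever sees the OrderPayment interface."""
-    return payment.total(order)
-```
-
-Because the client in this sketch sees only the interface, the implementation behind it can keep growing without breaking that handshake.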
- -So, the second increment could contain the ability to calculate shipping costs. And so on, you get the picture; you accomplish the Protected Variation Pattern by observing the Open-Closed Principle. It's all about carefully modeling business abstractions. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-closed-principle-refactoring - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen) -[2]: https://en.wikipedia.org/wiki/Object-Oriented_Software_Construction -[3]: https://en.wikipedia.org/wiki/Bertrand_Meyer -[4]: https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle -[5]: https://en.wikipedia.org/wiki/Alistair_Cockburn -[6]: https://martinfowler.com/ieeeSoftware/protectedVariation.pdf diff --git a/sources/tech/20210717 How to avoid waste when writing code.md b/sources/tech/20210717 How to avoid waste when writing code.md deleted file mode 100644 index 7438dd9031..0000000000 --- a/sources/tech/20210717 How to avoid waste when writing code.md +++ /dev/null @@ -1,84 +0,0 @@ -[#]: subject: (How to avoid waste when writing code) -[#]: via: (https://opensource.com/article/21/7/avoid-waste-coding) -[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to avoid waste when writing code -====== -The more we can reduce waste in software development, the better off -everyone will be. -![Learning to program][1] - -The long road toward quality is filled with diversions, false starts, and detours. The enemy of quality is waste, because waste is never desirable. No one pays anyone to deliver waste. We sometimes tolerate waste as part of the process of making something useful and desirable, but the more we can reduce waste while making something, the better. - -In software engineering, waste can be expressed in a few ways: - - 1. Defects - 2. Idling and waiting - 3. Overproduction - 4. Overprocessing - 5. Any other activity that doesn't directly put value in users' hands - - - -Let's examine each of these five types of waste. - -### Defects - -There seems to be a prevailing sentiment in the software industry that bugs (defects) are inevitable. It's not if—but when and how many—bugs find their way into production. - -You can fight that defeatist sentiment by reminding software engineers that each and every bug is authored. Bugs don't occur spontaneously. They're created by us, human beings trying to do the best software development we can. But nobody's perfect. Of course we don't create bugs intentionally, but they do happen. They're often a result of rushing things through, or perhaps due to inadequate education and training. - -Whatever the reason, bugs are _caused_, which means we can eliminate bugs by solving the problems that cause them. - -### Idling and waiting - -Our business partners funding our software development efforts tend to perceive any time we're not producing shipping code as time spent idling. Why are we idling, and what are we waiting on? 
It's a reasonable question to ask, if you consider they're paying potentially thousands of dollars per hour to keep the team going. - -Idling is wasteful. It does not contribute to the bottom line and may be a sign of confusion. If the team says they're waiting on someone to return from their leave of absence, that signals poor organizing skills. No team should ever get to the point where they paint themselves into a corner and are suffering from a single point of failure. If a team member can't participate, other members should step in and continue the work. If that's not possible, you are dealing with a very brittle, inflexible, and unreliable team. - -Of course, there are many other possible reasons the team is idling. Maybe there is confusion about the current highest priority, so the team is hanging and waiting to learn about the correct priority. - -There are many other [reasonable causes of idling][2], which is why this type of waste seems hardest to get on top of. Whatever the case, mature organizations take precautionary steps to minimize potential idling and waiting time. - -### Overproduction - -Often labeled "gold plating," overproduction is one of the most insidious forms of waste. Software engineers are notorious for their propensity to go overboard in their enthusiasm for building features and nifty capabilities. And because software, as its name implies, is very pliable and malleable, there is very little pushback against the onslaught of bloat. - -This dreadful bloat creates a lot of waste. Fighting bloat is what prudent software engineering discipline is all about. - -### Overprocessing - -One of the biggest problems in software engineering is known as Geek-At-Keyboard (GAK). A common misconception is that software engineers spend most of their time writing code. That is far from the truth. Most of the time spent on regular daily activities (aside from attending meetings) goes toward keyboard activities unrelated to writing code: messing with configurations and environments, manually running and navigating the app, typing and retyping test data, stepping through the debugger, etc. - -All those activities are waste. They don't contribute to delivering value. One of the most effective remedies for minimizing unproductive GAK time is [test-driven development][3] (TDD). Writing tests before writing code is a proven method for avoiding overprocessing. The test-first approach is a very effective way of eliminating waste. - -### Other activities that don't put value in users' hands - -In the early days of our profession, value was measured by the number of lines of code produced per unit of time (per day, week, month, etc.). Later, this rather ineffective way of measuring value was abandoned in favor of working code. There is no convincing correlation between the number of lines of code and working code. And once working code became the measure of value, the number of lines of code became irrelevant. - -Today, we recognize that [working code][4] is also a meaningless metric. Just because code compiles, builds, and works doesn't mean it is doing anything of value. Successfully running code could be doing inane processing, such as counting from 0 to 10 and then back to 0. It is much more important to focus on code that meets end users' expectations. - -Helping end users fulfill their goals when using your software product is the only measure of value. Any other activity that does not contribute to that value should be regarded as waste. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/avoid-waste-coding - -作者:[Alex Bunardzic][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/alex-bunardzic -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/learn-programming-code-keyboard.png?itok=xaLyptT4 (Learning to program) -[2]: https://opensource.com/article/21/2/simplicity -[3]: https://opensource.com/article/20/1/test-driven-development -[4]: https://opensource.com/article/20/7/code-tdd diff --git a/sources/tech/20210718 Up for a Challenge- Try These ‘Advanced- Linux Distros -Not Based on Debian, Arch or Red Hat.md b/sources/tech/20210718 Up for a Challenge- Try These ‘Advanced- Linux Distros -Not Based on Debian, Arch or Red Hat.md deleted file mode 100644 index d3a958a85f..0000000000 --- a/sources/tech/20210718 Up for a Challenge- Try These ‘Advanced- Linux Distros -Not Based on Debian, Arch or Red Hat.md +++ /dev/null @@ -1,140 +0,0 @@ -[#]: subject: (Up for a Challenge? Try These ‘Advanced’ Linux Distros [Not Based on Debian, Arch or Red Hat]) -[#]: via: (https://itsfoss.com/advanced-linux-distros/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Up for a Challenge? Try These ‘Advanced’ Linux Distros [Not Based on Debian, Arch or Red Hat] -====== - -There are hundreds of Linux distributions. Some are for general purpose usage, while some are specifically tailored for education, robotics, hacking, gaming and what not. - -You’ll notice that most of them originate from Debian/Ubuntu, Arch and Red Hat/Fedora. If you like distrohopping and experiment with a range of distributions, you may soon get ‘bored’ out of it. Most Linux distributions would feel too similar after a point and apart from a few visual changes here and there, you won’t get a different experience. - -Does that sound familiar? If yes, let me list some advanced, independent, Linux distributions to test your expertise. - -### Advanced Linux distributions for experts - -![][1] - -You may argue against the use of term “expert” here. After all, ‘expert Linux users’ don’t necessarily need to use advanced Linux distributions. They can easily utilize their expertise on [beginner-friendly distributions like Linux Mint][2]. - -The term expert here is intended for people who won’t easily get overwhelmed when they are taken out of their comfort zone and land in an unfamiliar environment. - -Alright then. Let’s see which distributions you can use to test your expertise on. - -#### NixOS - -![NixOS Linux illustration][3] - -[NixOS][4] is a unique distribution in the terms of how it approaches everything from the kernel to configuration to applications. - -NixOS is built on top of the Nix package manager and everything from the kernel to configuration is based on it. All packages are kept in isolation from each other. - -It ensures that installing or upgrading one package does not break other packages. You can also easily roll back to previous versions. - -The isolation feature also helps you in trying new tools without hesitation, creating development environments and more. - -Sounds good enough to give it a try? 
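If you want a feel for this before deciding, the basic workflow looks roughly like this (a sketch only; it assumes a standard NixOS install, and `hello` is just an example package):

```bash
# Try a package in a throwaway shell; nothing is installed permanently.
nix-shell -p hello

# Apply whatever is declared in /etc/nixos/configuration.nix to the system.
sudo nixos-rebuild switch

# Changed your mind? Roll the whole system back to the previous generation.
sudo nixos-rebuild switch --rollback
```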
Your call, truly. - -#### Void Linux - -![Void Linux illustration][5] - -[Void Linux][6] is another independent Linux distribution which was implemented from scratch. It is a rolling release distribution, but it focuses on stability rather than being bleeding edge like Arch Linux. - -Void Linux has its own XBPS package management system for installing and removing software, with the option to build packages from source (from the XBPS source packages collection). - -Another thing that sets Void Linux apart from the crowd of other distributions is its use of [runit][7] as its init system instead of systemd. - -Can Void Linux fill the void in your distrohopping life? Find out for yourself. - -#### Slackware - -![Slackware Linux illustration][8] - -The oldest active Linux distribution, [Slackware][9], can surely be counted as an expert Linux distribution. - -Which is amusing because once upon a time, many new Linux users started their Linux journey with Slackware. But that was back in the mid-90s, and it is safe to assume that those newbies have turned into veterans with their neckbeards touching the ground. - -Originally, Slackware was based on Softlanding Linux System (SLS), one of the earliest Linux distributions, released in 1992. - -Slackware is an advanced Linux distribution with the aim of producing the most “UNIX-like” Linux distribution out there. - -No slacking here. Be ready to use the command line extensively in Slackware. - -#### Gentoo - -![Gentoo Linux illustration][10] - -[Gentoo Linux][11] is named after the fast-swimming Gentoo penguin. The name reflects the speed optimization capabilities of Gentoo Linux. - -How? Its software distribution system, Portage, gives it extreme configurability and performance. Portage keeps a collection of build scripts for the packages, and it automatically builds a custom version of each package based on the end user’s preferences and optimized for the end user’s hardware. - -This ‘build’ stuff is why there are so many jokes and memes in the Linux-verse about compiling everything in Gentoo. - -Can you catch up with the Gentoo? - -#### Clear Linux - -![Clear Linux illustration][12] - -[Clear Linux][13] is not your general-purpose desktop Linux distribution. It is an open source, rolling release distribution, created from the ground up by Intel, and obviously it is highly tuned for Intel platforms. - -Clear Linux OS primarily targets professionals in the fields of IT, DevOps, Cloud/Container deployments, and AI. - -The package management is done through [swupd][14], but unlike regular package managers, versioning happens at the individual file level. This means that it generates an entirely new OS version when any software change takes place in the system. - -Is it clear enough to try Clear Linux? - -#### Linux From Scratch - -![Linux From Scratch illustration][15] - -If you think installing Arch Linux was a challenge, try [Linux From Scratch][16] (LFS). As the name suggests, here you ~~get~~ have to do everything from scratch. - -From installing to using, you do everything at a low level, and that’s the beauty of it. You are not installing a pre-compiled Linux distribution here. You build your own customized Linux system entirely from the source code. - -It is often suggested to use Linux From Scratch to learn the core functioning of Linux, and it is indeed a learning experience. - -Still scratching your head about Linux From Scratch? You can [read its documentation in book format][17]. - -#### Conclusion - -There are a few more independent Linux distributions.
Mageia and Solus are two of the relatively more popular ones. I did not include them in this list because I consider them more friendly and not as complicated to use as others on the list. Feel free to disagree with me in the comments. - -It is your turn now. Have you used any advanced Linux distributions ever? Was it in the past or are you still using it? - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/advanced-linux-distros/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/advanced-linux-distros.png?resize=800%2C450&ssl=1 -[2]: https://itsfoss.com/best-linux-beginners/ -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/nix-os.png?resize=800%2C350&ssl=1 -[4]: https://nixos.org/ -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/void-linux.png?resize=800%2C350&ssl=1 -[6]: https://voidlinux.org/ -[7]: http://smarden.org/runit/ -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/slackware.png?resize=800%2C350&ssl=1 -[9]: http://www.slackware.com/ -[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gentoo-linux.png?resize=800%2C350&ssl=1 -[11]: https://www.gentoo.org/ -[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/clear-linux.png?resize=800%2C350&ssl=1 -[13]: https://clearlinux.org/ -[14]: https://docs.01.org/clearlinux/latest/guides/clear/swupd.html#swupd-guide -[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/linux-from-scratch.png?resize=800%2C350&ssl=1 -[16]: https://www.linuxfromscratch.org/ -[17]: https://www.linuxfromscratch.org/lfs/read.html diff --git a/sources/tech/20210720 Access cloud files on Windows with ownCloud.md b/sources/tech/20210720 Access cloud files on Windows with ownCloud.md deleted file mode 100644 index a517fd3a4c..0000000000 --- a/sources/tech/20210720 Access cloud files on Windows with ownCloud.md +++ /dev/null @@ -1,117 +0,0 @@ -[#]: subject: (Access cloud files on Windows with ownCloud) -[#]: via: (https://opensource.com/article/21/7/owncloud-windows-files) -[#]: author: (Martin Loschwitz https://opensource.com/users/martinloschwitzorg) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Access cloud files on Windows with ownCloud -====== -ownCloud VFS leverages Microsoft's cloud files API to make opening, -modifying, and saving online files seamless. -![Scissors cutting open access to files][1] - -Most computer users nowadays rely on online file storage. Thanks to the rise of cloud computing, the idea of storing files remotely and downloading them when needed has gained a lot of fresh air in recent years. Yet, the principle's technical roots are anything but new, with implementations reaching back decades. While the protocols used and features expected for accessing data on online storage have changed massively, the basic idea hasn't altered much since the days of FTP and similar protocols. - -There's an easy explanation for why online (or "cloud") storage has so many fans. Cloud storage usually resides on highly redundant infrastructure, often distributed across physical sites. 
Ordinary people would have a tough time setting up anything similar with the tools generally available to them. Cloud storage also allows users to extend their storage space easily without having to fiddle with their device hardware. It also enables people to share files with relatives, friends, or colleagues in just a few simple steps. - -Smartphones are an excellent example of cloud storage's advantages; clients including Dropbox, Google Drive, and iCloud are deeply integrated into mobile operating systems and can be used in apps just like local storage. - -Classical desktop and laptop computers don't integrate online storage as well as smartphones do. Rather, accessing iCloud, ownCloud, or other storage solutions from a computer is a tedious task for several reasons. - -### A matter of protocol - -Many of the reasons boil down to the protocol. The methods and protocols for accessing online storage have changed often, and no single protocol has established itself as a de-facto standard. Online storage services such as Dropbox, S3, and iCloud use proprietary protocols (partially based on open protocols such as WebDAV), which cannot be implemented easily on desktop operating systems. As a result, desktop users often face tedious user interfaces (UIs), reduced comfort, and poor user experience (UX) with online file storage. - -It helps to look a bit deeper to understand the problem and come up with possible solutions. To start, all modern operating systems—notably Windows, macOS, and Linux (along with its numerous derivatives)—assume they are exclusively in charge of a user's files. This comes from the old-fashioned assumption that a user's files will be hosted on a single device. If all a user's files are stored on the same device, it is easy to put them in a tree-like structure (as desktop operating systems have been doing for ages) and present a unique view to the user. - -With cloud-based storage, things are not so easy. Because these files are not available locally, a computer's operating system cannot manage or display them the same way it displays local files. To edit an Excel sheet stored in the cloud, you must download the file from the cloud, store it locally, modify it, and upload it again. Not only does this break your UI and UX; it also creates chaos. - -Some online storage providers try to work around the issue with clients that synchronize contents between the cloud and the local machine. This is an ugly workaround. For instance, you might not want the dozens of gigabytes of data stored in your cloud to also reside on your small local device. To mitigate this challenge, some tools allow you to select a subset of data to synchronize between the client and the online service; this shifts the problem to the side a bit but certainly does not solve it. - -### How WebDAV failed the industry - -Many IT professionals are likely shaking their heads fiercely, knowing there's a protocol for these types of tasks that could be used. And they are not completely wrong. [WebDAV][2] was specified as early as June 2007 by the IETF to extend the HTTP protocol for Web Distributed Authoring and Versioning (WebDAV for short). WebDAV's sole purpose was to provide an interface that allows files in a remote location, such as online cloud storage, to be accessed and edited the same way local files can be. WebDAV has gained traction since then: Private cloud solutions such as ownCloud and NextCloud support and can be accessed through it. 
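On a Linux desktop, for example, reaching an ownCloud instance over WebDAV is usually a one-liner (a sketch; substitute your own server for the example URL, and note that the exact WebDAV path can differ between versions):

```bash
# Through GNOME's gvfs, which is what the Files app uses under the hood:
gio mount davs://cloud.example.com/remote.php/webdav/

# Or as a regular mount point, with the davfs2 package installed:
sudo mount -t davfs https://cloud.example.com/remote.php/webdav/ /mnt/owncloud
```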
- -Yet to call WebDAV a ringing success would be unrealistic, as neither the server nor the client side has achieved widespread use. Matters are especially bad on the server side: many online storage services, including Dropbox, Google Drive, and Microsoft OneDrive, do not support the WebDAV extension to HTTP. Some put proxy services in place. Dropbox, for instance, can be used with DAVbox to achieve WebDAV access. Other services provide tools to mitigate the lack of a working WebDAV server. Generally speaking, though, WebDAV support is not widespread throughout the industry, and that will probably not change anytime soon. - -### Poor operating system support for WebDAV - -This leads right into the second aspect of WebDAV's disastrous history: the client side. At this time, only one operating system has somewhat complete client support throughout the relevant tools of its userland, and that is Linux. Standard desktop environments such as KDE, GNOME, and Xfce can connect to WebDAV drives from the desktop. They also integrate WebDAV drives as if they were normal local disks, effectively allowing users to move data back and forth between the remote site and the local machine. Last but not least, they can download files from WebDAV devices on demand instead of keeping files continuously in sync between a remote site and the local drive. In Linux, life with WebDAV is mostly good—mostly because WebDAV doesn't feature inherent caching. - -Matters change a bit when looking at macOS. Apple equipped macOS with a WebDAV client a while ago, and it mostly works fine. However, it is tedious for less experienced users to set up. And macOS's WebDAV client tends to misbehave when the connection between the client and the server is brittle—like it would be for users in Germany behind LTE connectivity. In such setups, users have to clean up their WebDAV directories regularly to be able to use them. - -The most widespread operating system, Windows, also offers the most dramatic failure in terms of WebDAV integration. To even set up a WebDAV-based storage drive, you would have to edit the Windows Registry—a task that easily exceeds the average computer user's knowledge. If that were not bad enough, even after modifying the Registry, the Windows client for the WebDAV protocol looks more like a stub than a usable feature. You will soon experience problems like those found with the macOS WebDAV implementation, and the experience of using the protocol will be terrible. - -### ownCloud's VFS alternative - -[ownCloud][3] is a private cloud solution that allows users to store, sync, and share data on their own terms, including on a Raspberry Pi, a private cloud, or in a hybrid setup. ownCloud offers a client for the world's most common operating system. But for many years, it relied upon workarounds, like requiring users to explicitly choose which files to synchronize. - -ownCloud has come up with a solution to the problem—and it's a rather sophisticated one. Windows offers an interface to connect to cloud-based online storage, and ownCloud leverages that interface with its [virtual file system][4] (VFS).  - -### How VFS works - -ownCloud's VFS functionality for Windows heavily relies on a Microsoft feature named [cloud files API][5]. It was officially introduced in Windows 10, version 1809 in 2017. Microsoft designed it for file synchronization with OneDrive, but other services are free to use the API, which is now part of the Windows 10 core. 
The cloud files API is kind of a demarcation line for synchronizing data from the cloud to a local machine and vice versa. It's split into two parts: - - * The **API** provides functions on the API level to perform tasks such as opening, saving, and uploading files to the remote host after the user commits changes. The cloud files API handles a lot of things invisible to the user; for instance, a client using the API will display all remote files as "present" in the local view without downloading them. The cloud files API will download a file only after the user explicitly requests to open it from the remote drive. - * The **Windows.Storage.Provider** namespace allows applications in the userland to configure a client to access a remote service through the cloud files API. - - - -### What the user sees - -The revolutionary way that cloud files API deals with files in remote storage under Windows becomes clear when you see ownCloud VFS in action. - -First, set up a connection to your ownCloud drives from within the ownCloud Client for Windows. Make sure _virtual file support_ is enabled; this makes the directories in your ownCloud drive immediately visible and selectable in Windows tools, such as the Explorer. You won't be able to tell them apart from the files on your local storage devices, and when you open a file stored in ownCloud, it will appear like it is locally present. For files not synchronized to the local host, the cloud files API generates a placeholder that is replaced with the actual file when you open it. This allows a seamless user experience while preserving bandwidth on the client's and the server's internet links. - -Setting up a VFS drive in Windows does not require administrator privileges, as editing the Windows Registry for WebDAV connectivity requires. This means ownCloud VFS might be usable on devices such as business laptops, where the administrator account is usually not available to the user. Compliance policies may still forbid using ownCloud if the instance is not run by the company under its compliance regime, however. - -### Major differences from WebDAV - -Not only does VFS work considerably better on Windows 10 operating systems, it also offers a few features not available in protocols like WebDAV. One of these is implicit caching. During normal operations, VFS will synchronize files when they are opened until a locally defined cache is full; if the user requests additional files, VFS will remove the oldest files from the cache. - -Furthermore, VFS allows you to specify "favorite" files that will always be synchronized automatically from the remote drive whether or not you are trying to access them. This shortens the initial time to access frequently used files, especially if the files are large. - -### Storage Sense makes sense - -Another helpful feature in the Windows cloud files API is the "Storage Sense" feature added in Windows 1809. While primarily aimed at OneDrive users, Storage Sense can be used in the background with an ownCloud online storage drive due to its cloud files API support. Storage Sense regularly scans the Windows C: drive for files that have not been used for a long time. It synchronizes these files to the remote cloud storage and deletes them from the local device, freeing up space for data used more often. - -The user can determine Storage Sense's intervals and when Windows will trigger scans. 
The latter factor is not very important anymore because searching an SSD or NVMe device is very fast compared to the old days of searching spinning disk drives. Storage Sense aims to increase available disk space on systems, and ownCloud drives can be targets for offloading unused files. - -### VFS on other operating systems - -By creating a virtual file system based on the cloud files API, ownCloud improves the experience of using ownCloud online storage as a web drive in Microsoft Windows 10. ownCloud is one of the few free, libre, and open source software projects using this API at all—even other vendors' commercial support for it is rather weak. Apple's iCloud client for Windows uses the cloud files API, but the list is short. - -How does ownCloud use VFS on other operating systems? It's not as easy as porting the Windows functionality to other operating systems because the cloud files API is not present on non-Windows machines. - -ownCloud still implements comparable functionality—sort of. The macOS and Linux ownCloud clients behave as though the cloud files API were available even on non-Windows systems. Certain Windows functions have been replaced in the background with stubs for the corresponding system. There are a few notable differences between the Windows client and the view in Linux or macOS. For instance, Windows shows the correct file size even for remote files represented locally by a placeholder. On Linux and macOS, all files are displayed with a size of 1 byte and a local extension of .owncloud. This makes it clear that the files do not exist locally—at least until the user asks to open them and ownCloud initiates the download. - -It's true, though, that the VFS experience on macOS and Linux is not quite as smooth as it is on Windows 10. - -### Summary - -ownCloud's VFS dramatically improves the integration of ownCloud cloud storage drives into Windows 10. In contrast to WebDAV and related protocols, the cloud files API is a native API in Windows, integrated seamlessly with the operating system. This eliminates the ugly hacking required to enable WebDAV access in Windows 10, let alone the contortions needed to use it effectively. Older Windows clients do not benefit from the API, and its advantages might create an incentive to update old Windows versions in environments where ownCloud is heavily used. - -macOS and Linux users do not benefit as much from ownCloud's VFS support. On Linux, hobby projects like [elokab-files-manager][6] provide better support for ownCloud VFS. But this is certainly not something you would want to bet on as your daily driver. The situation on macOS is even worse: While Apple has long promised to add similar API functionality to macOS, only Big Sur seems to have the required feature set. As of this writing, ownCloud's developers have not yet adapted the ownCloud client for macOS to the new features in Big Sur. Consequently, Linux and macOS users can use VFS on their platforms with minor limitations. Windows 10 users, however, get the biggest bang for their buck. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/owncloud-windows-files - -作者:[Martin Loschwitz][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/martinloschwitzorg -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2 (Scissors cutting open access to files) -[2]: https://en.wikipedia.org/wiki/WebDAV -[3]: https://owncloud.com/ -[4]: https://owncloud.com/features/virtual-files/ -[5]: https://docs.microsoft.com/en-us/windows/win32/cfapi/build-a-cloud-file-sync-engine -[6]: https://github.com/dragotin/elokab-files-manager diff --git a/sources/tech/20210721 Accessibility in open source for people with ADHD, dyslexia, and Autism Spectrum Disorder.md b/sources/tech/20210721 Accessibility in open source for people with ADHD, dyslexia, and Autism Spectrum Disorder.md deleted file mode 100644 index ea4aa1b63a..0000000000 --- a/sources/tech/20210721 Accessibility in open source for people with ADHD, dyslexia, and Autism Spectrum Disorder.md +++ /dev/null @@ -1,167 +0,0 @@ -[#]: subject: (Accessibility in open source for people with ADHD, dyslexia, and Autism Spectrum Disorder) -[#]: via: (https://opensource.com/article/21/7/open-source-neurodiversity) -[#]: author: (Rikard Grossman-Nielsen https://opensource.com/users/rikardgn) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Accessibility in open source for people with ADHD, dyslexia, and Autism Spectrum Disorder -====== -Open source accommodations help people with neurodiversity use their -talents to their highest ability. -![a magnifying glass looking at a brain illustration][1] - -For a long time, attention deficit hyperactivity disorder (ADHD), autism, Asperger syndrome, dyslexia, and other neurodiverse conditions were considered things that hold people back. But now, many researchers and employers recognize that [neurodiversity is a competitive advantage][2], especially in technology, and especially when certain accommodations are provided. - -This is certainly true for me. I'm a 39-year-old teacher in Sweden diagnosed with ADHD and Asperger's (also referred to as Autism Level 1). I'm also an intermediate Linux user and use it daily for Java programming, productivity, and gaming. I've been using Linux since the late 1990s, and I've learned ways open source programs can be made more accessible for people with these conditions. For example, I use accessibility software, including speech synthesis to find spelling errors and calendar software accommodations to help with my Asperger's and ADHD. - -### Asperger's, ADHD, and dyslexia - -Before I get into open source software accommodations, I'll share some information about these conditions. - -#### Asperger syndrome - -[Asperger's][3] is a form of autism without intellectual disability. 
People with Asperger's often have: - - * Difficulties in social contact with other people - * Special interest areas that may consume a large part of their attention and time - * Difficulties understanding and using language in communicating with other people - * Deficits in motor skills - * A tendency to easily get caught in up certain routines or actions - * Unusual perception of and sensitivity to stimuli such as sound, light, etc. - - - -#### ADHD - -The three core symptoms of [ADHD][4] are: - - * **Attention:** Difficulties in concentration and forgetfulness; distractability; easily bored and fail to complete things that don't interest them - * **Impulsivity:** Strong emotional reactions to different things; difficulty listening to others; problems handling unstructured situations that require reflection and thinking things through; sometimes impulsivity may lead to difficulties in motor control - * **Hyperactivity:** Difficulty regulating activity to an appropriate level for the situation; trouble sitting still and winding down, possibly mixed with periods of exhaustion - - - -Hyperactivity in children is often physical; in adults, it's more of an internal restlessness that might cause sleeping problems (among other things). Some people with ADHD have one of the three core symptoms, and others have two or all of them. - -#### Dyslexia - -Some people with neurodiverse conditions also have problems with reading and writing. This might be related to difficulties in attention, hyperactivity, and impulsivity. However, they might also be caused by [dyslexia][5]. - - * People with dyslexia have difficulty recognizing and understanding words. They might place letters in the incorrect order, making reading comprehension more difficult. - * Dyslexia isn't correlated with intelligence. - * Dyslexia can't be cured, but accommodations can help a great deal in school and work. - * Reading a lot and listening to audiobooks can improve the ability of people with dyslexia to read and write. - - - -### Asperger's and ADHD at work - -While the symptoms associated with Asperger's and ADHD can make some parts of work challenging, other aspects give neurodiverse people advantages in the workplace. - -#### Asperger's - -Some of the skills people with [autism spectrum disorders bring to the workplace][6]: - - * High concentration power and precision in work - * Attention to minute details - * Patience for repetitive tasks - * Higher memory power (can remember tiny details) - * Hard-working - * Loyal - - - -#### ADHD - -Some of the skills people with [ADHD bring to the workforce][7]: - - * Able to find unique solutions to difficult problems - * Can talk about many different topics at one time - * Good in a crisis; some of the most stressful jobs are staffed by those with ADHD - * Empathetic and intuitive - * Entrepreneurial - - - -### Making software more accessible - -The descriptions above are highly generalized and may not apply to all adults with Asperger's, ADHD, and dyslexia. One problem with current accessibility standards is that they confuse different neurodiversities. For example, they may not differentiate between autism with and without intellectual disability, the latter of which is called Asperger's or Autism Level 1, or they may assume dyslexia is an intellectual disability. 
- -In his article [_User interface for people with autism spectrum disorders_][8], Nikolay Pavlov provides some suggestions to improve UI design: - - * Use simple graphics - * Strive for simple, clear navigation - * Do not use complex menus - - - -People with Asperger's have different needs, abilities, and preferences, so these accommodations won't be beneficial to everyone. These UI features could also help people who have autism with intellectual disability, ADHD, dyslexia without intellectual disability, and other conditions. Therefore, when considering making accommodations in software, think carefully about your target group. And know that if you ask people for input, you will probably get many different answers. - -People with ADHD especially might benefit from one of Pavlov's other recommendations: - - * Use visual indicators for time-consuming actions - - - -This is valuable when people perceive that an app or web page is not loading quickly enough. I appreciate when systems give continuous feedback on their progress because it tells me that everything is in working order. - -### Examples of accessibility - -The GNOME calendar offers a good example of making software more accessible. - -Compare the standard date view: - -![Standard GNOME calendar][9] - -(Rikard Grossman-Nielsen, [CC BY-SA 4.0][10]) - -To this modified view: - -![Modified GNOME calendar][11] - -(Rikard Grossman-Nielsen, [CC BY-SA 4.0][10]) - -It's a lot easier to find the marked date with the yellow circle around the number 29. - -In contrast, [Vi and Vim][12] are among the least accessible text editors I've ever used, but note they aren't designed with accessibility in mind. My biggest problem is that they don't offer any cues to their different commands. When I use a terminal editor, I prefer [Nano][13] because it provides cues about which keyboard commands to use. Most often, I use a graphical user interface (GUI) editor like [Gedit][14] or Nedit because it's easier for me to create text in a GUI editor. - -### How GNOME embraces diversity - -I've found that [GNOME][15] is the best of the large Linux desktop environments for offering accessibility features, but it can definitely still improve. Because I'm interested in Linux on the desktop and making it even more accessible, I joined the team planning [GUADEC][16], the GNOME Users And Developers European Conference.
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-source-neurodiversity - -作者:[Rikard Grossman-Nielsen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/rikardgn -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_EvidencedBasedIP_520x292_CS.png?itok=mmhCWuZR (a magnifying glass looking at a brain illustration) -[2]: https://hbr.org/2017/05/neurodiversity-as-a-competitive-advantage -[3]: https://en.wikipedia.org/wiki/Asperger_syndrome -[4]: https://en.wikipedia.org/wiki/Attention_deficit_hyperactivity_disorder -[5]: https://en.wikipedia.org/wiki/Dyslexia -[6]: https://www.thehrdigest.com/autistic-workers-strength-not-weakness/ -[7]: https://adhdatwork.add.org/potential-benefits-of-having-an-adhd-employee/ -[8]: https://www.researchgate.net/publication/276495184_User_Interface_for_People_with_Autism_Spectrum_Disorders -[9]: https://opensource.com/sites/default/files/uploads/gnome-calendar-standard.png (Standard GNOME calendar) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: https://opensource.com/sites/default/files/uploads/gnome-calendar-modified.png (Modified GNOME calendar) -[12]: https://opensource.com/resources/what-vim -[13]: https://opensource.com/article/20/12/gnu-nano -[14]: https://opensource.com/article/20/12/gedit -[15]: https://opensource.com/downloads/cheat-sheet-gnome-3 -[16]: https://events.gnome.org/event/9/ -[17]: https://events.gnome.org/event/9/contributions/240/ -[18]: https://events.gnome.org/event/9/registrations/34/ diff --git a/sources/tech/20210722 How to manage feedback on your open project.md b/sources/tech/20210722 How to manage feedback on your open project.md deleted file mode 100644 index 64190179ed..0000000000 --- a/sources/tech/20210722 How to manage feedback on your open project.md +++ /dev/null @@ -1,99 +0,0 @@ -[#]: subject: (How to manage feedback on your open project) -[#]: via: (https://opensource.com/open-organization/21/7/manage-feedback-open-project) -[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -How to manage feedback on your open project -====== -Open projects generate feedback—lots of it. How can leaders manage it -all? This process might help. -![red pen editing mistakes][1] - -People who let open principles guide their leadership practices in open organizations inevitably find themselves fielding feedback. Lots of feedback. - -That's by design. Open leaders [invite comment and critique][2] on just about anything they can. - -But it also poses a regular challenge: How to sift through, manage, evaluate, and address that feedback in authentic and useful ways? - -Members of the Open Organization project got a taste of this process recently. Working on the Open Leadership Definition—a robust, [collaborative description][3] of the specific mindsets and behaviors associated with open styles of leadership—collaborators solicited community-wide feedback on a multi-hundred-word draft document. The results were impressive—even if a bit intimidating. 
- -As we continue diligently working through the feedback we received, we thought we'd offer some insight into our own process for managing a significant amount of feedback—in case it's useful to others trying to do the same. - -### The challenge - -First, we [invited anyone to read and comment][4] directly on our draft documents. Results were humbling. We're so pleased to have received so many thoughtful comments and ideas for improvement. But we received _a lot_ of comments. - -We needed some way to organize, analyze, review, address, and respond to all those comments. After all, in open organizations, feedback is only valuable and effective to the extent that people respond to and act on it.  - -So we turned to a well-known technological advancement that has changed the lives of many: the spreadsheet. We collected all the comments we received in a single place—and made sure everyone could see what we were doing along the way. - -The result: a [collaborative and transparent worksheet][5] anyone can follow as they watch us edit and revise in line with the community's stellar feedback. - -### The process - -But collecting feedback is only part of the work (and the easiest part, at that). Next, we knew we needed to create (and publish) a step-by-step process anyone could follow when collaborating on edits to the Open Leadership Definition. - -But collecting feedback is only part of the work (and the easiest part, at that). - -Here's what we can up with: - - 1. Review feedback left in section documents. - 2. Record/transcribe feedback, comments, and suggestions into our spreadsheet. - 3. Assign editorial leads for each document section. - 4. (Editorial leads) Systematically review comments in biweekly community calls with other contributors. - 5. (Editorial leads) Address reviewer comments, make necessary editorial changes to documents. - 6. (Editorial leads) Record their decisions, changes, and/or correspondence in the spreadsheet. - - - -Without a doubt, it's more work than simply jumping into the document and making the changes we thought were most appropriate. Because all our work—all the feedback we received, and all the ways we were _responding_ to that feedback—was open and transparent, we'll need to _reflect on_ and _justify_ every editorial decision we made. It takes time. But it's the least we can do to reciprocate the kindness our community showed us in leaving their feedback (after all, that took time, too!). - -### The results (so far) - -As we've worked, we've categorized feedback into seven different "types." Some, like typos and grammar issues, are no-brainers; we'll integrate this feedback and clean up our mistakes. Others, like those that suggest additional ideas or ask us to rethink assumptions, might _also_ be no-brainers—but not all of them can be integrated so easily. So, we're using our biweekly calls to work through and discuss this feedback. - -That's the most fun part—the part where we get to connect for live chatter and debate about how we are—or aren't—going to address what the community has raised. - -Here's a summary of what we've seen and debated so far. - -#### Working on the preamble - -The first section we reviewed was the document's "preamble," which received a lot of insightful and important comments that underscored the importance of nuance. This piece of the definition summarizes the rest, and so we need to get it right. Here's what we discussed. 
- -**The types of organizations where open leaders thrive.** We've discussed the ways open leaders can enhance organizations operating with all kinds of cultures—but argued that they're _especially_ important in _open_ organizations (because of the way command-and-control thinking can stymie openness). We acknowledge that all kinds of organizations can be open organizations—not just those wrestling with ambiguity or focusing on innovation. - -**Organizations as actors.** One interesting debate centered on writing that seemed to treat _organizations themselves_ as individual actors—rather than, say, _groups_ of individual actors. Some of us argued that organizations are more than the sum of their components and that sentences like "open leaders make organizations more self-aware" made perfect sense. Others countered that this made organizations seem like sentient beings, when in fact they're _collections_ of sentient beings. We were personifying organizations, in other words. Ultimately, we were able to find a way to both defend the sentiment that an organization can be reflective (concerned with its own context, boundaries, and limitations) and yet do so in a way that doesn't completely anthropomorphize the organization. The line is blurry at best. So we discussed how open leaders support a culture of self awareness and edited our language in the preamble to try and better balance this philosophical point. - -That's the most fun part—the part where we get to connect for live chatter and debate about how we are—or aren't—going to address what the community has raised. - -**Mindsets and behaviors.** Here again we arrived at the question that motivated this project in the first place: _What is open leadership?_ We debated the status of "open leadership" as a "mindset" versus a "skill" versus a "practice" (each of these designations has implications for how we define open leadership and how we help others adopt it), and doing this meant negotiating the complexities of character, ego, mindfulness, and more. For instance, a skill can generally be taught, but there's more nuance to what we all believe can be "taught" versus "experienced." And as our document shows, open leadership isn't just a set of things people _do_; it's a _way of thinking_, too. So we settled on open leadership as being a specific set of "mindsets and behaviors," an organic decision [inspired by Red Hat's definition][6] of the "open leadership" concept. - -**Open leaders and character.** Other excellent reviewer comments led us to discuss distributed leadership, planned obsolescence as a positive attribute and how "authority" to lead only lasts as long as people grant that authority. We discussed a nebulous quality open leaders have, character traits and experience that people value and therefore are willing to go to when support is needed. Some of our revisions will certainly reflect this discussion. - -### Slow and steady wins the race - -We've only just begun processing feedback on the Open Leadership Definition draft. We'll continue revising (and discussing!) in our biweekly calls, and we're planning new pieces about this work. We're eager to learn from our community and hear how this work can be more useful, so while the current draft is closed to comments, we always invite feedback. [Why not follow along][5]? And keep an eye out for future opportunities to get involved. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/open-organization/21/7/manage-feedback-open-project - -作者:[Laura Hilliger][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/laurahilliger -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mistakes.png?itok=dN0OoIl5 (red pen editing mistakes) -[2]: https://opensource.com/open-organization/17/8/what-to-do-when-nobody-participates -[3]: https://github.com/open-organization/editorial/issues/94 -[4]: https://opensource.com/open-organization/21/6/celebrate-sixth-anniversary -[5]: https://docs.google.com/spreadsheets/d/1ETyMtoNK9MpkTOm2wUvqBBtcnf1S6wGWOUPvOYFyrx8/edit#gid=0 -[6]: https://github.com/red-hat-people-team/red-hat-multiplier diff --git a/sources/tech/20210723 Fixing Flatpak Error- No remote refs found similar to ‘flathub.md b/sources/tech/20210723 Fixing Flatpak Error- No remote refs found similar to ‘flathub.md deleted file mode 100644 index af768edbab..0000000000 --- a/sources/tech/20210723 Fixing Flatpak Error- No remote refs found similar to ‘flathub.md +++ /dev/null @@ -1,84 +0,0 @@ -[#]: subject: (Fixing Flatpak Error: No remote refs found similar to ‘flathub’) -[#]: via: (https://itsfoss.com/no-remote-ref-found-flatpak/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Fixing Flatpak Error: No remote refs found similar to ‘flathub’ -====== - -So, I just installed Fedora. Installing my favorite applications was among the list of things to do after installing Fedora. - -I tried installing VLC in Flatpak form, but it gave me an error: - -**error: No remote refs found similar to ‘flathub’** - -![No remote refs found error displayed with Flatpak][1] - -### Fixing “no remote refs found similar to flathub” error - -The fix is rather simple. Add the Flathub repository in the following way: - -``` -flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo -``` - -It will ask for your password, or you could use the above command with sudo. - -Now, if you try to install a Fltapak package from Fltahub, it should work as expected. - -![Adding the Flathub repoistory fixes the issue][2] - -### Reason why you see this error and how it was fixed - -Now that have fixed the error, it would be a good idea to also learn why you saw this error in the first place and how it was fixed. - -Like most other package managers in Linux, Flatpak also works on the concept of repositories. In simpler words, you can imagine package repositories as a warehouse where packages are stored. - -But in order to retrieve a package from this warehouse, you need to know the address of the warehouse first. - -That’s what happens here. You are trying to download (and install) a package from a certain repository (Flathub in this case). But your system doesn’t know about this “flathub”. - -In order to solve this issue, you added the Flathub repository. When you do that, your Linux system can look for the package you are trying to install in this repository. - -You may see all the remote Flatpak repository added to your system. 
- -![List Flatpak repositories added to your system][3] - -Let’s have a deeper look at the command which was used for adding the repository: - -``` -flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo -``` - - * flatpak: this is the flatpak CLI tool. - * remote-add: this option indicates that you are adding a new remote repository. - * –if-not-exists: this ensures that the remote repository is only added if it is not added already. - * flathub: this is short reference for the complete URL of the actual repository. You may name it something else but the convention is to use the one provided by the developer. - * : The actual repository address. - - - -_**So, the bottom line is that when you see Flatpak complaining about ‘no remote refs found similar to xyz’, verify that the said repository is not added and if that’s the case, figure out its URL and add it to the system.**_ - -I hope this quick tip help you with this Flatpak issue. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/no-remote-ref-found-flatpak/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/flatpak-remote-ref-not-found-error-800x265.png?resize=800%2C265&ssl=1 -[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/flatpak-no-remote-ref-problem-fixed.png?resize=800%2C317&ssl=1 -[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/flatpak-list-repositories.png?resize=800%2C317&ssl=1 diff --git a/sources/tech/20210724 28 books recommended by open source technologists to read right now.md b/sources/tech/20210724 28 books recommended by open source technologists to read right now.md deleted file mode 100644 index 8a29c04b9c..0000000000 --- a/sources/tech/20210724 28 books recommended by open source technologists to read right now.md +++ /dev/null @@ -1,146 +0,0 @@ -[#]: subject: (28 books recommended by open source technologists to read right now) -[#]: via: (https://opensource.com/article/21/7/open-source-books) -[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -28 books recommended by open source technologists to read right now -====== -Members of the Opensource.com community share what books they are -enjoying reading. -![Ceramic mug of tea or coffee with flowers and a book in front of a window][1] - -Did you get our [summer reading list][2]? - -It may not be the season of summer where you are, but summer reading lists are quintessential and somewhat cozy no matter what part of the world you live in. I love the idea of a cool breeze, a lounge chair, a drink, and a snack... all wrapped up together with a good book to pour over. - -Fiction and non-fiction. Dramas, mysteries, science, romance... let us know in the comments what you're reading. - -* * * - -I'm working through _[How to Measure Anything][3]_, which is pretty cool. Recommended! —Moshe Zadka - -♦ - -For science fiction lovers, I recommend Luis McMaster Bujold's _[Vorkosigan Saga][4]_. 
I had never heard of her until reading Larry Wall's lecture _[Perl, the first postmodern computer language][5]_: "Note how we still periodically hear the phrase 'serious literature'. This is literature that is supposedly about Real Life. Let me tell you something. The most serious literature I’ve ever read is by Lois McMaster Bujold. Any of you read her? It’s also the funniest literature I’ve ever read. It’s also space opera. 'Genre fiction,' sneers the Modernist. Meaning it follows certain conventions. So what? Nobody in the world can mix gravity and levity the way Bujold does in her Vorkosigan books. It’s oh so definitely about real life. So what if it follows space opera conventions. Sonnets follow certain conventions too, but I don’t see them getting sneered at much these days. Certainly, they were always called 'serious'." - -So, I started with the [_Vor Game_][6] and then couldn't stop until I finished all of the series. Moving, funny, entertaining. —Petr Beranek - -♦ - -I finished reading _[The Sleep Revolution][7]_ by Arianna Huffington last week. Currently reading _[The Devil in the White City][8]_ by Erik Larson. —Lauren Maffeo - -♦ - -I'm currently reading [_Nemesis Games_][9] (book 5 in _The Expanse_sci-fi series). - -Next on my reading list is [_Greater Good_][10] (book 2 in the_Star Wars: Thrawn Ascendency_series). I fell into the Star Wars books a few years ago, and have been reading them in between other books on my reading list. I loved the original _Thrawn_ series, the _Ahsoka_ book, the _Darth Bane_ series, and the _Darth Plagueis_ book. Some are not so great, like _Outbound Flight_ (didn't get into it), _Master & Apprentice_ (kind of dull), and _Light of the Jedi_ (the first in the "High Republic" series). And then there are some in the middle, like the _Tarkin_ book (interesting character) and _Lords of the Sith_ (predictable, but good). - -McCloud's _[Understanding Comics][11]_ is a great book! I read it when I was working on my Master's degree, about 8 years ago, during my independent study on visual rhetoric. My instructor and I found it to be a very useful reference in how images communicate. - -Another one: _[Picture This][12]_ by Molly Bang. I thumbed through that so many times when learning about icons (same visual rhetoric class) that some of the pages started falling out. —Jim Hall - -♦ - -Sometimes, I wonder what I haven't been reading, because I always have a book or two or three on my Kindle. - -I am a huge reader of SciFi and Fantasy, so I just finished the Hugo and Nebula winner, [_A Memory Called Empire_][13]. It has a fascinating premise and good characterizations, but I thought the writing would be a little stronger. Like Ancillary Justice, this book explores the idea of colonization. But in the case of A Memory Called Empire the colonizers look a lot like the Aztec Empire writ large. Oh, and this empire is obsessed with complicated poetry. Citizens use poems to encrypt email and provide travelogues. - -My favorite book in a while is _[All Systems Red][14]_, the first book in The Murderbot Diaries, by Martha Wells. Imagine that you have an artificial human, who's a little bit of a clone and a little bit of a robot. She's sentient and self-aware, can destroy a human or a building in seconds flat, and has overwritten her governor program, but really she just wants to curl up in her cubby and binge-watch her favorite infotainment. Oh, and she has social anxiety. Except for the part about being able to destroy a spaceship, she's a lot like me. 
- -I'm also currently reading Brandon Sanderson's [_Words of Radiance_][15]. —Ingrid Towey - -♦ - -One of the best recent books I read was _[Bomber Mafia][16]_ by Malcolm Gladwell. It was a fascinating read. I also read *[Persist][17] *by Elizabeth Warren which I found interesting. I just started reading [_While Justice Sleeps_][18] by Stacey Abrams. I recently read "Killing Crazy Horse" by Bill O'Reilly and "The Soul of a Woman" by Isabel Allende. That book was so compelling that I bought copies for my daughter and daughter-in-law. —Don Watkins - -♦ - -I’m currently reading the new Christine Morgan novel _[Trench Mouth][19]_, having just finished the original *Metropolis *(the one Fritz Lang adapted the movie from), and next up is _Cultish: The Language of Fanaticism_ by Amanda Montell. I’ve also got _Workplace Jazz_ and _Culture is the Bass_ by Gerold Leonard in there, and am eagerly awaiting _Final Girl Support Group_ by Grady Hendrix, which will be out next month. —Kevin Sonney - -♦ - -I've got two books I'm reading: _[Laziness Does Not Exist][20]_ by Devon Price. This is a look at how the over-emphasis on productivity has gone too far in our culture. And, [_His Truth is Marching On_][21] by Jon Meacham. This is a look at the life and experience of John Lewis. —Steve Morris - -♦ - -My year-round reading continues to be mainly books from Project Gutenberg. Recently I've read a number of books by Hillaire Belloc, mainly his commentaries on various topics ("On Anything", "On Everything" to name two). I also enjoy reading things by GK Chesterton. It can be hard to decide what the real point is that he is trying to make as he complains about this or that, but he's entertaining nonetheless. - -Currently, I'm reading _[Thirty Strange Stories][22]_ by HG Wells. I've also read a number of his commentaries, which are quite good generally. It seems that most books I read come from the late 19th or very early 20th century. - -**[Read next: [10 must-read technology books for 2021][23]]** - -A book I would recommend if you've never read it is _[Candide][24]_ by Voltaire. I went through it hopping back and forth from the English to the French versions. The French seemed just a bit more entertaining. - -I do all this with my tablet. Something I've really gotten attached to is the ability to highlight some word or phrase, then immediately be able to look it up on the internet on the tablet, or translate with an app. —Greg Pittman - -♦ - -I recommend: _[Mastermind][25]_ by Maria Konnikova, and _[The Mind Map Book][26]_ by Tony Buzan —Hüseyin GÜÇ - -♦ - -I've been reading a lot of work-related books through a couple of work bookclubs. One I'm reading now that I really like is _[No Hard Feelings][27]_ by Fosslien and Duffy-West. It's a light read that talks about the ways emotions show up at work, and how to manage (your and other people's) emotions at work. - -Outside work, I've been reading some books on design. _[Understanding Comics][11]_ by Scott McCloud is my current read, and I recently finished the very enjoyable _[How Design Makes the World][28]_ by Scott Berkun, simultaneously a great lit review of books on the design of everyday things around us, and an overview of how (conscious and unconscious) design decisions impact our lives, and how we can improve the world by being aware of when we make our own design decisions. - -Finally, I have also been reading some historical non-fiction. 
Most recently, I loved _[The Guns of August][29]_ by Barbara Tuchman about the outbreak and early days of World War One. And I'm looking forward to reading "How the Irish Became White" (a book I heard about from Christine Dunbar-Hester, who wrote the book Bryan recommended) about the evolution of the Irish cultural identity through the 19th and 20th century. —Dave Neary - -♦ - -A little out of the how-to corner: I am currently reading [_Learn You a Haskell for Great Good!_][30] It is a beginner's guide to the Haskell programming language, a very strange thing from my viewpoint. —Stephan Avenwedde - -♦ - -Here in Vancouver, Canada we have hit summer weather as well as passing the solstice; today we're expecting a high of 26ºC and by late in the week we may hit 30ºC (which is unusually warm hereabouts). But lovely, for sure. And I'm looking forward to kicking back in the late afternoon with a nice cool beverage and a good book, so just in time for the summer reading list! - -My colleagues in Chile are in quarantine right now and it's winter there. The Mapuche new year We Tripantu and the Aymara Willkakuti are upon us, a time of reflection, celebration, and anticipation of the year to come: all, I am sure, hoping for better! There is also the winter school vacation that in theory runs from the 12th to the 23rd of July, which in a normal year would give an opportunity for digging into the recommendations on the summer reading list after a day of chasing the kids around a park or skiing or sledding in the mountains. For anyone interested, here is a short English article on [Willkakuti in Bolivia][31], and here is another on [We Tripantu in Argentina and Chile][32]. —Chris Hermansen - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-source-books - -作者:[Jen Wike Huger][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jen-wike -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tea-cup-mug-flowers-book-window.jpg?itok=JqThhl51 (Ceramic mug of tea or coffee with flowers and a book in front of a window) -[2]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list -[3]: https://openlibrary.org/books/OL7596184M/How_to_Measure_Anything -[4]: https://en.wikipedia.org/wiki/Vorkosigan_Saga -[5]: http://www.wall.org/~larry/pm.html -[6]: https://en.wikipedia.org/wiki/The_Vor_Game -[7]: https://www.ariannahuffington.com/the-sleep-revolution/ -[8]: https://eriklarsonbooks.com/book/the-devil-in-the-white-city/ -[9]: https://en.wikipedia.org/wiki/Nemesis_Games -[10]: https://starwars.fandom.com/wiki/Thrawn_Ascendancy:_Greater_Good -[11]: https://en.wikipedia.org/wiki/Understanding_Comics -[12]: https://www.mollybang.com/Pages/picture.html -[13]: https://en.wikipedia.org/wiki/A_Memory_Called_Empire -[14]: https://en.wikipedia.org/wiki/All_Systems_Red -[15]: https://stormlightarchive.fandom.com/wiki/Words_of_Radiance -[16]: https://en.wikipedia.org/wiki/Bomber_Mafia -[17]: https://us.macmillan.com/books/9781250799241 -[18]: https://www.penguinrandomhouse.com/books/648021/while-justice-sleeps-by-stacey-abrams/ -[19]: https://www.fantasticfiction.com/m/christine-morgan/trench-mouth.htm -[20]: https://bookshop.org/books/laziness-does-not-exist/9781982140106 -[21]: 
https://www.penguinrandomhouse.com/books/606295/his-truth-is-marching-on-by-jon-meacham/ -[22]: http://www.gutenberg.org/ebooks/59774 -[23]: https://enterprisersproject.com/article/2021/1/10-technology-books-must-read-2021 -[24]: https://en.wikipedia.org/wiki/Candide -[25]: https://www.mariakonnikova.com/books/mastermind/ -[26]: https://tonybuzan.com/product/the-mind-map-book/ -[27]: https://www.penguinrandomhouse.com/books/564051/no-hard-feelings-by-liz-fosslien-and-mollie-west-duffy/ -[28]: https://designmtw.com/ -[29]: https://en.wikipedia.org/wiki/The_Guns_of_August -[30]: http://learnyouahaskell.com/ -[31]: https://info.handicraft-bolivia.com/Aymara-New-Year-a33-sm162 -[32]: https://www.mapuche-nation.org/english/html/news/n-276.html diff --git a/sources/tech/20210724 UVdesk- A Free and Open-Source Helpdesk Ticket System.md b/sources/tech/20210724 UVdesk- A Free and Open-Source Helpdesk Ticket System.md deleted file mode 100644 index 3ccc90e997..0000000000 --- a/sources/tech/20210724 UVdesk- A Free and Open-Source Helpdesk Ticket System.md +++ /dev/null @@ -1,118 +0,0 @@ -[#]: subject: (UVdesk: A Free and Open-Source Helpdesk Ticket System) -[#]: via: (https://itsfoss.com/uvdesk/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -UVdesk: A Free and Open-Source Helpdesk Ticket System -====== - -There are countless open-source solutions (including [website creation tools][1] and [forum software][2]) that power the web, and a helpdesk system is one of the vital areas that can benefit from it. - -UVdesk is a free and open-source PHP based support ticket system with impressive options that you can start for free. - -Here I shall mention more about it and how you can set it up for your business. - -### UVDesk: Open-Source Customer Support Solution - -![][3] - -[UVDesk][4] is a helpdesk system built on [Symfony][5] (PHP framework for web development). An exciting alternative to proprietary ticketing systems like Zendesk. - -It is another open-source offering from the same company responsible for [Bagisto][6] (an ecommerce platform that we’ve covered before). - -UVDesk is primarily free but offers paid options if you want extra features and added security for your business. - -You can respond to customer queries, create documentations, manage the support tickets, and do a lot of things from a single place. This is especially helpful if you have an [eCommerce platform][7] setup. - -To explore more about it, let me highlight the key features it offers. - -### Features of UVDesk Helpdesk System - -In addition to the basic abilities of a support system, it also offers some interesting features. 
- -![][8] - -Here’s an overview of the features offered: - - * Ticket management and administration - * Task management options to assign tickets and set a deadline for support agents - * Email management to convert emails to support tickets - * Ability to create documentations (knowledgebase) to guide customers for self-help - * Theme customization of the support system/portal - * Multi-channel support (aggregating support requests from different platforms like Facebook, Amazon, Website) - * Automated options for customer follow-up reminders - * Improve the workflow with the ability to automate tasks and how they’re handled - * Progress Web App support - * Social Media App integration - * Ecommerce multi-channel integration - * Form builder - * Monitor agent’s support performance - * Easy migration options when switching from a different support system to UVdesk - * Self-hosting - - - -Do note that some of the features will be limited to the paid option. But the essential features should be available completely for free. - -### Get Started Using UVdesk - -You can directly download the zip package from the [official website][4]. It can be deployed using Docker as well. - -For installation instructions, you can check their [GitHub page][9] and the [official documentation][10] to check the system requirements. In either case, you can also opt for a one-click setup on your Linux server using [Softaculous installer][11]. - -[UVdesk][4] - -### Quick Impressions on the Demo - -They offer you the ability to try a [live demo][12] before you consider using it. - -![][13] - -I’ve never worked on a support system before, but limited to eCommerce projects using OpenCart, which is one of the [best open source eCommerce platforms][7]. - -But I found the back-end system to be pretty simple and accessible. It is not a breathtaking experience on the back-end side, but ranging from the branding customization options to managing the knowledge base, it is an easy-to-use experience. - -![][14] - -I also found the ticket management good enough. - -![][15] - -### Wrapping Up - -Overall, UVdesk is a flexible, open-source helpdesk system that you can try and use for free. Of course, if you have a sizable business, you may need to opt for the paid plans available. - -What other open-source helpdesk systems do you know of? How important do you think a helpdesk portal is? Let me know in the comments below! 
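If you decide to try the self-hosted route, note that UVdesk is a Symfony (PHP) application, so besides the zip package, Docker, and Softaculous options above it can also be bootstrapped from the command line with Composer. The sketch below is only a rough outline: the Composer package name mirrors the community-skeleton repository linked above, the project directory and port are hypothetical choices, and you should check the official documentation for the required PHP version and extensions before running it.

```
# Rough sketch: create a new UVdesk project with Composer
# (package name mirrors the community-skeleton GitHub repository linked above;
#  verify the exact steps in the official documentation)
composer create-project uvdesk/community-skeleton helpdesk-project

# Serve it locally with PHP's built-in web server to reach the browser-based
# setup wizard (directory name and port are arbitrary; the public/ docroot
# assumes the usual Symfony layout)
cd helpdesk-project
php -S 127.0.0.1:8000 -t public/
```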
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/uvdesk/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/open-source-cms/ -[2]: https://itsfoss.com/open-source-forum-software/ -[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/uvdesk-open-source.png?resize=800%2C461&ssl=1 -[4]: https://www.uvdesk.com/en/opensource/ -[5]: https://symfony.com/ -[6]: https://itsfoss.com/bagisto/ -[7]: https://itsfoss.com/open-source-ecommerce/ -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/06/uvdesk-contact.png?resize=800%2C548&ssl=1 -[9]: https://github.com/uvdesk/community-skeleton -[10]: https://docs.uvdesk.com -[11]: https://www.softaculous.com -[12]: https://demo.uvdesk.com/ -[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/uvdesk-admin.png?resize=800%2C830&ssl=1 -[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/uvdesk-mail-settings.png?resize=800%2C469&ssl=1 -[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/uvdesk-ticket-management.png?resize=800%2C664&ssl=1 diff --git a/sources/tech/20210725 Making PDFs more accessible to screen readers with open source.md b/sources/tech/20210725 Making PDFs more accessible to screen readers with open source.md deleted file mode 100644 index 867fb88e08..0000000000 --- a/sources/tech/20210725 Making PDFs more accessible to screen readers with open source.md +++ /dev/null @@ -1,63 +0,0 @@ -[#]: subject: (Making PDFs more accessible to screen readers with open source) -[#]: via: (https://opensource.com/article/21/7/pdf-latex) -[#]: author: (Quinn Foster https://opensource.com/users/quinn-foster) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Making PDFs more accessible to screen readers with open source -====== -One university open source program office is working to improve -accessibility of an open access journal with LaTeX. -![Person using a laptop][1] - -A screen reader is a vital tool that helps individuals who are blind or low-vision read digital text. Unfortunately, not all file formats receive the same level of support from screen readers. For example, while PDF files have accessibility features that you can use, they are often not the preferred file format for screen reader users. Between line breaks, multiple columns, symbols, and images, screen readers can have trouble reading PDFs in a cohesive way to their users. - -This is what the folks at [Open @ RIT][2] are trying to change. - -Open @ RIT is the open source program office at the Rochester Institute of Technology, offering RIT faculty and staff assistance in opening their research projects and maintaining communities of practice around their work. One such faculty member is Dr. Todd Pagano, Professor of Chemistry and Associate Dean for Teaching and Scholarship Excellence at the National Technical Institute for the Deaf. Dr. Pagano came to Open @ RIT seeking help to increase the accessibility of an open-access journal, the publications of which currently exist as PDFs. 
- -The Open @ RIT team, consisting of UX designer Rahul Jaiswal and full-stack developer Suhas C.V., have used this project as a stepping stone to begin exploring ways to convert PDFs into accessible HTML. - -> "It's very difficult to make PDFs fully accessible, especially in an automated way," says Mike Nolan, assistant director of Open @ RIT.  - -Open @ RIT tested multiple tools that already included accessibility features in their quest to convert PDFs into HTML successfully. Despite these features, the resulting HTML files still had many issues that made them difficult for screen readers to read, such as pauses and interruptions. - -At this point, Open @ RIT decided to pursue a more open source tool-chain to assist in the conversion from received submissions to accessible formats like HTML while maintaining the same style and general look of the published article, in which the use of LaTeX was instrumental. - -The workflow with LaTeX is simple: - - * A submitted paper—in the form of a PDF—is pasted into a  `.tex` template and turned into a `.tex` file. -This `.tex` template is an edited version of the Association for Computing Machinery ([ACM][3]) `.tex` template. - * Then [_tex2html_][4]—the conversion tool built by Open @ RIT—is applied to the `.tex` file that uses an open source LaTeX converter called LaTeXML to convert it to HTML finally. - * The resulting HTML file shows significant improvement with screen readers. - - - -Some standing issues with the tool-chain are still being worked on, but using LaTeX to facilitate and standardize the generation of the resulting formats (PDF and HTML) has shown great promise in achieving this goal. Publishing journal articles in PDF and HTML gives readers a choice and more options for compatibility with screen readers. - -Those who want to learn more about the project will get the chance very soon. During their explorations of LaTeX, Rahul and Suhas contacted experts associated with [TeX Users Group (TUG) 2021][5]—this year's conference run by [TUC][6] for all things TeX and LaTeX. They're invited to do a presentation on their project. The duo, along with Dr. Pagano, will discuss how they have been using LaTeX in their accessibility efforts and the need for journals to be accessible. TUG 2021 will be running online from August 5-8, 2021. - -Their work shows the capacity for open source to be used in a way that doesn't just increase digital transparency but also accessibility for all people. 
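For readers who want a feel for what the LaTeXML step in that tool-chain does, converting a `.tex` file to HTML is typically a two-pass process. The commands below are a minimal sketch using LaTeXML's standard command-line tools; the file names are placeholders, and Open @ RIT's `tex2html` wrapper layers its own template handling on top of this.

```
# First pass: parse the LaTeX source into LaTeXML's intermediate XML
latexml --destination=article.xml article.tex

# Second pass: post-process the XML into an HTML document
latexmlpost --destination=article.html article.xml
```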
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/pdf-latex - -作者:[Quinn Foster][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/quinn-foster -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) -[2]: https://www.rit.edu/research/open -[3]: https://services.acm.org/public/qj/keep_inventing/qjprofm_control.cfm?promo=DA4SCA -[4]: https://gitlab.com/open-rit/tex2html -[5]: https://tug.org/tug2021/ -[6]: https://www.tug.org/ diff --git a/sources/tech/20210726 Get started with WildFly for Java web development.md b/sources/tech/20210726 Get started with WildFly for Java web development.md deleted file mode 100644 index 8484b67213..0000000000 --- a/sources/tech/20210726 Get started with WildFly for Java web development.md +++ /dev/null @@ -1,229 +0,0 @@ -[#]: subject: (Get started with WildFly for Java web development) -[#]: via: (https://opensource.com/article/21/7/wildfly) -[#]: author: (Ranabir Chakraborty https://opensource.com/users/ranabir-chakraborty) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Get started with WildFly for Java web development -====== -WildFly is a popular choice for developers who want to develop -enterprise-ready applications. -![Coding on a computer][1] - -[WildFly][2] is a production-ready, cross-platform, flexible, lightweight, managed application runtime that provides all the necessary features to run a Java web application. It is also a Java EE 8 certified application server almost exclusively in Java, and it implements the [Jakarta EE][3], which was the Java Platform, Enterprise Edition (Java EE) specifications. Therefore you can run it on any operating system. - -WildFly, formerly known as JBoss AS, is a fully implemented JEE container—application server, developed by JBoss, which became a part of Red Hat on June 5, 2006, and since then, WildFly became their product. - -### **How to get started with WildFly?** - -This Java middleware application server known as [WildFly][4] is a robust implementation of the Jakarta platform specification. The latest WildFly 24 architecture built on the Modular Service Container enables services on-demand when your application requires them. - -#### **Prerequisites** - -Before installing WildFly, there are a few prerequisites: - - * Check that you have a JDK on your machine—JDK 8 or higher recommended to start WildFly. You can use the open source JDK called [OpenJDK][5]. -Once you install the JDK, set the JAVA_HOME environment variable. - * Ensure you have Maven 3.6.0 or higher installed. You can download Maven from [here][6] and set the environment variables. - * After loading both the variables, check the versions of JDK and Maven. 
- - - - -``` -$ java -version -openjdk version “11.0.9” 2020-10-20 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.9+11) -OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.9+11, mixed mode) - -[/code] [code] - -$ mvn -version -Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f) Maven home: /usr/share/maven -Java version: 11.0.9, vendor: AdoptOpenJDK, runtime: /usr/lib64/adoptopenjdk -Default locale: en_US, platform encoding: UTF-8 -OS name: “linux”, version: “5.9.1”, arch: “amd64”, family: “unix” -``` - -### Download and install WildFly - -There are many ways you can install WildFly, including unzipping our traditional download zip, provisioning a custom installation using Galleon, or building a bootable jar. The official [installation guide][7] helps you identify the kind of WildFly installation that best fits your application’s deployment needs. In this article, we'll focus on the typical approach of installing the download zip. - -You can download WildFly from [here][8]. The standard WildFly variant is the right choice for most users, but if you'd like a technical preview look at what's coming in the future, try out WildFly Preview. Once downloaded, extract the archive to a folder and install it on any operating system that supports the zip or tar formats. - - -``` -`$ unzip wildfly-preview-24.0.0.Final.zip` -``` - -### Running WildFly - -WildFly has two server modes—_standalone_ and _domain_. The difference between the two modes is not about the capabilities available but about the application server's management. Use the _standalone_ mode when you only need one instance of the server. On the other hand, use the _domain_ mode when you want to run several instances of WildFly, and you want a single point from where you can control the configuration. You can find more about the domain mode in the [documentation][9]. - -To start WildFly using the default configuration in _standalone_ mode, change the directory to `$JBOSS_HOME/bin` and issue: - - -``` -`$ ./standalone.sh` -``` - -To start the application server using the default configuration in _domain_ mode, change the directory to `$JBOSS_HOME/bin` and issue: - - -``` -`$ ./domain.sh` -``` - -After starting the standalone mode, you should find something like the following in your console at the end of the start-up process: - - -``` -00:46:04,500 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Preview 24.0.0.Final (WildFly Core 16.0.0.Final) started in 4080ms - Started 437 of 638 services (350 services are lazy, passive or on-demand) -00:46:04,502 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on -00:46:04,502 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on -``` - -You can point your browser to `http://localhost:9990` (if using the default configured HTTP port), bringing you to the WildFly welcome page. - -![WildFly welcome page][10] - -(Ranabir Chakraborty, [CC-BY SA 4.0][11]) - -#### **Authentication** - -Though now you can see that WildFly is running, you can not access the admin console because you need to add a user for that. By default, security comes enabled for the WildFly management interfaces. That means that before you connect using the administration console or remotely using the CLI, you'll need to add a new user. You can achieve that simply by using the `add-user.sh` or the `add-user.bat` script in the bin folder. - -After starting the script, the system guides you through the process of adding a new user. 
- - -``` -$ ./add-user.sh -What type of user do you wish to add? -a) Management User (mgmt.users.properties) -b) Application User (application-users.properties) -(a): -``` - -Select the default option "a" to add a management user, where the user gets added to the ManagementRealm. Therefore, the user is authorized to perform management operations using the web-based Admin Console or the CLI. The other option is "b," where the user gets added to the ApplicationRealm. This realm provides for use with applications. - - -``` -Enter the details of the new user to add. -Using realm ‘ManagementRealm’ as discovered from the existing property files. -Username : Ranabir -Password recommendations are listed below. To modify these restrictions, edit the add-user.properties configuration file. -[…] -Passward : -Re-enter Password : -``` - -Here you choose the management user option and provide the required username and password. - - -``` -What groups do you want this user to belong to? -(Please enter a comma-separated list, or leave blank for none) [ ]: -``` - -Users can be associated with arbitrary groups of your choice, and you get prompted to consider assigning a new user to a group. Groups are helpful for simplified administration of things like access permissions, but leaving this blank is OK for getting started. You then confirm adding the user. The user gets written to the properties files used for authentication, and a confirmation message displays. - - -``` -Is this new user going to be used for AS process to connect to another AS process? -e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server Jakarta Enterprise Beans calls. -yes/no? no -``` - -Finally, you get asked whether or not you'll use the account you've added to identify one WildFly process to another—typically in a WildFly managed domain. The answer for this should be "no" because the account you are adding here is for use by a human administrator. - -After successfully adding the user, now you can refresh the browser, and the console will look like the following: - -![WildFly HAL Management console][12] - -#### **Deploy an application** - -WildFly provides many ways to deploy your application on the server. But if you are running a standalone WildFly service, a simple way to deploy your application is to copy your application archive (`war/ear/jar`) into the `$JBOSS_HOME/standalone/deployments` directory in the server installation. The deployment-scanner subsystem detects the archive and deploys it. Another straightforward way to perform the same is to go to the **Deployments** section of the console and upload your application archive. - -![How to deploy your application from console][13] - -You can make your own application and deploy it accordingly but here I have used a demo [helloworld][14] application from [WildFly quickstart][15]. - -#### **Steps to use WildFly quickstart samples:** - - 1. Make a separate folder locally and inside that, clone the WildFly quickstart project. After cloning the repository, change the directory to `helloworld` (or you can play with any other sample projects) and build the maven project. - - - - -``` -$ mkdir WFLY -$ cd WFLY -$ git clone –depth 1 [git@github.com][16]:wildfly/quickstart.git -$ cd quickstart/helloworld -$ mvn clean install -``` - - 2. If you face any project build issues, then you must clone the `boms` repository into your current working directory (WFLY in my example) and build it. After that, build the sample project. 
This step is only required when building a development version of the WildFly server. It isn’t required when running a [tagged][17] or [released][18] version of the WildFly server. - - - - -``` -$ git clone [git@github.com][16]:wildfly/boms.git -$ cd boms -$ mvn clean install -``` - - 3. After successfully building the sample project, take the application archive `helloworld.war` from the target folder and copy it inside the `$JBOSS_HOME/standalone/deployments` directory in the server installation. - - - - -``` -$ cd quickstart/helloworld/target/ -$ cp helloworld.war …/…/…/wildfly-preview-24.0.0.Final/standalone/deployments/ -``` - - 4. Now point your browser to `http://localhost:8080/helloworld/` to see your successfully deployed WildFly application. - - - -### **Conclusions** - -Despite WildFly being in the market for almost two decades, it's still a popular choice for developers who want to develop enterprise-ready applications. The code quality remains at a high and efficient level. The developers are continuously doing many unique and significant work that is taking WildFly to its new peak. The [latest WildFly][8] runs well on SE 16 and 17, supporting SE 17 in standard WildFly later this year. - -Michael Dowden takes a look at four Java web frameworks built for scalability. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/wildfly - -作者:[Ranabir Chakraborty][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ranabir-chakraborty -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer) -[2]: https://www.wildfly.org/ -[3]: https://opensource.com/article/18/5/jakarta-ee -[4]: https://github.com/wildfly/wildfly -[5]: http://openjdk.java.net/ -[6]: https://maven.apache.org/download.cgi -[7]: https://docs.wildfly.org/24/Installation_Guide.html -[8]: https://www.wildfly.org/downloads/ -[9]: https://docs.wildfly.org/24/Admin_Guide.html#Operating_modes -[10]: https://opensource.com/sites/default/files/pictures/welcome_page.png (WildFly welcome page) -[11]: https://creativecommons.org/licenses/by-sa/4.0/ -[12]: https://opensource.com/sites/default/files/uploads/console.png (WildFly HAL Management console) -[13]: https://opensource.com/sites/default/files/uploads/deployment.png (How to deploy your application from console) -[14]: https://github.com/wildfly/quickstart/tree/master/helloworld -[15]: https://github.com/wildfly/quickstart -[16]: mailto:git@github.com -[17]: https://github.com/wildfly/quickstart/tags -[18]: https://github.com/wildfly/boms/releases diff --git a/sources/tech/20210727 Zathura- A Minimalist Document Viewer for Keyboard Shortcut Pros.md b/sources/tech/20210727 Zathura- A Minimalist Document Viewer for Keyboard Shortcut Pros.md deleted file mode 100644 index 426fe6d6b3..0000000000 --- a/sources/tech/20210727 Zathura- A Minimalist Document Viewer for Keyboard Shortcut Pros.md +++ /dev/null @@ -1,118 +0,0 @@ -[#]: subject: (Zathura: A Minimalist Document Viewer for Keyboard Shortcut Pros) -[#]: via: (https://itsfoss.com/zathura-document-viewer/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) 
-[#]: publisher: ( ) -[#]: url: ( ) - -Zathura: A Minimalist Document Viewer for Keyboard Shortcut Pros -====== - -Every Linux distribution comes with a document viewer app that lets you read PDF and other documents. - -Most of the time, it is [Evince from GNOME][1] that is displayed as Document Viewer in Ubuntu and some other distributions. Evince is a handy tool and supports a wide variety of document formats. - -However, there are other applications for reading documents. Take [Foliate][2] for example. It’s an excellent [application for reading ebooks on Linux][3]. - -I recently came across another document viewer called Zathura. - -### Enjoy a mouse-free document reading experience with Zathura - -[Zathura][4] is a highly customizable document viewer based on the [girara user interface][5] and several document libraries. girara implements a simple and minimalist user interface. - -Zathura sure feels to load fast. It is minimalist, so you just get an application window with no sidebar, application menu or anything of that sort. - -![Zathura Document Viewer Interface][6] - -You may open its command line prompt by pressing the : key. You may close the CLI prompt with Esc key. - -If you want to create a bookmark, type :bmark and then provide an index number to the bookmarked page. - -![Bookmarking in Zathura][7] - -You may highlight all the links by pressing the F key. It will also display a number beside the highlighted URL and the command line prompt will appear at the bottom. If you type the URL number and press enter, the URL will be opened in the default web browser. - -![Highlighting and opening links in documents][8] - -Zathura also has automatic reloading feature. So if you make some changes to the document with some other application, the changes will be reflected as Zathura reloads the document. - -You may also install additional plugins to improve the capabilities of Zathura and use it for reading comics or PostScript. - -The problem with Zathura is that you won’t see any documentation or help option anywhere on the application interface. This makes things a bit more difficult if you are not already familiar with the tool. - -You may get the default keyboard shortcuts information from its [man page][9]. Here are a few of them: - - * R: Rotate - * D: Toggle between single and double page viewing mode - * F: Highlight all links on the current screen - * HJKL: Moving with the Vim type keys - * Arrows or PgUp/PgDown or the mouse/touchpad for moving up and down - * / and search for text, press n or N for moving to next or previous search (like less command) - * Q: Close - - - -You may find the documentation on the project website to learn about configuration, but I still found it confusing. - -### Installing Zathura on Linux - -Zathura is available in the repositories of the most Linux distributions. I could see it available for Ubuntu, Fedora, Arch and Debian, thanks to the [pkgs.org website][10]. This means that you can use the [package manager of your distribution][11] or the software center to install it. - -On Debian and Ubuntu based distributions, use this command to install Zathura: - -``` -sudo apt install zathura -``` - -On Fedora, use: - -``` -sudo dnf install zathura -``` - -[Use pacman command on Arch Linux][12]: - -``` -sudo pacman -Sy zathura -``` - -And if you want to have a look at its source code, you may visit its GitLab repository: - -[Zathura Source Code][13] - -### Conclusion - -I’ll be honest with you. I am not a fan of mouse-free tools. 
This is why I prefer Nano over Vim as I cannot remember so many shortcuts. - -I know there are people who swear by their keyboards. However, I would prefer not to spend time learning to configure a document viewer. This is more because I do not read too many documents on my desktop and for the limited PDF viewing, the default application is sufficient. - -It’s not that Zathura does not have it usage. If you are someone who has to deal a lot with documents, be it PDF or LaTex, Zathura could be your next favorite tool if you are a keyboard love. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/zathura-document-viewer/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://wiki.gnome.org/Apps/Evince -[2]: https://itsfoss.com/foliate-ebook-viewer/ -[3]: https://itsfoss.com/best-ebook-readers-linux/ -[4]: https://pwmt.org/projects/zathura/ -[5]: https://git.pwmt.org/pwmt/girara -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/Zathura-Document-Viewer-Interface.png?resize=800%2C492&ssl=1 -[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/bookmarking-in-zathura.png?resize=800%2C639&ssl=1 -[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/Follow-link-in-Zathura.png?resize=800%2C639&ssl=1 -[9]: https://itsfoss.com/linux-man-page-guide/ -[10]: https://pkgs.org/ -[11]: https://itsfoss.com/package-manager/ -[12]: https://itsfoss.com/pacman-command/ -[13]: https://git.pwmt.org/pwmt/zathura diff --git a/sources/tech/20210728 Create your own custom Raspberry Pi image.md b/sources/tech/20210728 Create your own custom Raspberry Pi image.md deleted file mode 100644 index cdc6ed30c5..0000000000 --- a/sources/tech/20210728 Create your own custom Raspberry Pi image.md +++ /dev/null @@ -1,289 +0,0 @@ -[#]: subject: (Create your own custom Raspberry Pi image) -[#]: via: (https://opensource.com/article/21/7/custom-raspberry-pi-image) -[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Create your own custom Raspberry Pi image -====== -Build a Raspberry Pi image from scratch or convert your running, -modified Raspberry Pi OS back to an image others can use. -![Vector, generic Raspberry Pi board][1] - -When I recently read [Alan Formy-Duval's][2] article [_Manage your Raspberry Pi with Cockpit_][3], I thought it would be a good idea to have an image with Cockpit already preinstalled. Luckily there are at least two ways to accomplish this task:  - - * Adapt the sources of the Raspberry Pi OS image building toolchain [pi-gen][4] which enables you to build a Raspberry Pi image from scratch - * Convert your running, modified Raspberry Pi OS back to an image others can use - - - -This article covers both methods. I'll highlight the pros and cons of each technique. - -### Pi-gen - -Let's begin with [pi-gen][4]. Before we start, there are a few prerequisites you'll need to consider. - -#### Prerequisites - -To successfully run the build process, it is recommended to use a 32bit version of Debian Buster or Ubuntu Xenial. 
It may work on other systems as well but to avoid unnecessary complications, I recommend to setup a virtual machine with one of the recommended systems. If you are not familiar with virtual machines, take a look at my article [Try Linux on any operating system with VirtualBox][5]. When you have everything up and running, also install the dependencies mentioned in the [repository description][6]. Also consider that you need internet access in the virtual machine and enough free disk space. I set up my virtual machine with a 40GB hard drive which seemed to be enough. - -In order to follow the instructions in this article, make a clone of the [pi-gen][4] repository or fork it if you want to start developing you own image. - -#### Repository Overview - -The overall build process is separated into stages. Each stage is represented as an ordinary folder and represents a logical intermediate with regards to a full Raspberry Pi OS image. - - * **Stage 0**: Bootstrap—Creates a usable filesystem - * **Stage 1**: Minimal system—Creates an absolute minimal system - * **Stage 2**: Lite system—Corresponds to Raspberry Pi OS Lite - * **Stage 3**: Desktop system—Installs X11, LXDE, web browsers, and so on - * **Stage 4**: Corresponds to an ordinary Raspberry Pi OS - * **Stage 5**: Corresponds to Raspberry Pi OS Full - - - -The stages build upon each other: It is not possible to build a higher stage without building the lower stages. You can't leave out a stage in the middle either. For example, to build a Raspberry Pi OS Lite, you have to build stages 0, 1, and 2. To build a Raspberry Pi OS with a desktop, you have to build stages 0, 1, 2, 3, 4, and 5. - -#### Build process - -The build process is controlled by the `build.sh`, which can be found in the root repository. If you already know how to read and write bash scripts, it won't be a hurdle to understand the process defined there. If not, reading the `build.sh` and trying to understand what is going on is a really good practice. But even without bash scripting skills, you will be able to create your own image with Cockpit preinstalled. - -In general, the build process consists of several nested for-loops. - - * **stage-loop:** Loop through all stage directories in ascending order - * Skip further processing if a file named _SKIP_ is found - * Run the script `prerun.sh` - * **sub-loop:** Loop through each subdirectory in ascending order and process the following files if they are present: - * * `00-run-sh`: Arbitrary instructions to run in advance - * `00-run-chroot.sh`: Run this script in the chroot directory of the image - * `00-debconfs`: Variables for the` debconf-set-selection` - * `00-packages`: A list of packages to install - * `00-packages-nr`: Similar to the _00-packages_, except that this will cause the installation with --no-install-recommends -y parameter to _apt-get_ - * `00-patches`: A directory containing patch files to be applied, using [quilt][7] - - * Back in the stage-loop, if a file named `EXPORT_IMAGE` is found, generate an image for this stage - - * If a file named `SKIP_IMAGE` is found, skip creating the image - - - - -The `build.sh `also requires a file named `config` containing some specification which is read on startup. - -#### Hands-On - -First, we will create a basic Raspberry Pi OS Lite image. The Raspberry Pi OS Lite image will act as a base for our custom image. 
Create an empty file named _config_ and add the following two lines: - - -``` -IMG_NAME='Cockpit' -ENABLE_SSH=1 -``` - -Create an empty file named `SKIP` in the directories `stage3`, `stage4`, and `stage5`. `Stages 4` and `5` emit an image by default, therefore add an empty file named `SKIP_IMAGE` in `stage4` and `stage5`. - -Now open a terminal and switch to the root user by typing `su`. Navigate to the root directory of the repository and start the build script by typing `./build.sh`. - -The build process will take some time. - -After the build process has finished, you will find two more directories in the root of the repository: `work `and `deploy`. The `work` folder contains some intermediate output. In the `deploy` folder you should find the zipped image file, ready for deployment. - -If the overall build process was successful, we now can modify the process so that it installs Cockpit additionally. - -#### Extending the build process - -The Raspberry Pi OS Lite image acts as the base for our Cockpit installation. As the Raspberry Pi OS Lite image is complete with `stage2`, we will create our own `stage3` which will handle the Cockpit installation. - -We remove the original `stage3` completely and create a new, empty `stage3`: - - -``` -`rm -rf stage3 && mkdir stage3` -``` - -Inside `stage3`, we create a substage for installing cockpit: - - -``` -`mkdir stage3/00-cockpit` -``` - -To install cockpit on the image, we simply need to add it to the package list: - - -``` -`echo "cockpit" >> stage3/00-cockpit/00-packages` -``` - -We also want to configure our new `stage3` to output an image, therefore we simply add this file in the `stage3` directory: - - -``` -`touch stage3/EXPORT_IMAGE` -``` - -As there are already intermediate images from the previous build process, we can prevent that the stages are built again by adding `skip-files` in the related directories: - -Skip the build process for `stage0` and `stage1`: - - -``` -`touch stage0/SKIP && touch stage1/SKIP` -``` - -Skip the build process for `stage2` and also skip the image creation: - - -``` -`touch stage2/SKIP && touch stage2/SKIP_IMAGE` -``` - -Now run the build script again: - - -``` -`./build.sh` -``` - -In the folder `deployment` you now should find a zipped image `-Cockpit-lite.zip`, which is ready for deployment. - -#### Troubleshooting - -If you try to apply more complex modifications, there is a lot of trial and error involved in building your own Raspberry Pi image with pi-gen. You will certainly face that the build process will stop in between for some reason. As there is no exception handling in the build process, we do have some cleanup manually in case the process stopped. - -It is likely that the `chroot` file system is still mounted after the process stopped. You won't be able to start a new build process without unmounting it. In the case it is still mounted, unmount it manually by typing: - - -``` -`umount work//tmpimage/` -``` - -Another issue I determined was that the script stopped when the `chroot` filesystem was about to be unmounted. In the file `scripts/qcow2_handling`, you can see that directly before the attempt to unmount `sync` is called. `Sync` forces the system to flush the write buffer. Running the build system as a virtual machine, the write process was not ready when `unmount` was called so the script stopped here. 
- -To solve this, I just inserted a `sleep` between `sync` and `unmount` which solved the issue: - -![Sleep in between sync and unmount - core dump example][8] - -(I know that 30 seconds are overkill but as the whole build process takes > 20 minutes, 30 seconds are just a drop in the ocean) - -### Modify existing image - -In contrast to building an image with `pi-gen`, you could also directly apply the modification on a running Raspberry Pi OS. In our scenario, simply log in and install Cockpit with the following command: - - -``` -`sudo apt install cockpit` -``` - -Now shut down your Raspberry Pi, take out the SD card, and connect it to your PC. Check if your system has automatically mounted the partitions on the SD card by typing `lsblk -p`: - -![Using lsblk -p to check mounting partitions][9] - -In the screenshot above, the SD card is the device `/dev/sdc` and the `boot`\- and `rootfs`-partitions were automatically mounted at the mentioned mount points. Before you proceed, unmount them with : - - -``` -`umount /dev/sdc1 && umount /dev/sdc2` -``` - -Now we copy the contents of the SD card to our file system. Make sure you have enough disk space available as the image will have the same size as the SD card. Start the copy process with the following command: - - -``` -`dd if=/dev/sdc of=~/MyImage.img bs=32M` -``` - -![Copying image from the SD card][10] - -Once the copy process is finished, we can shrink the image with the [PiShrink][11]. Follow the installation instructions mentioned in the repository which are: - - -``` -wget -chmod +x pishrink.sh -sudo mv pishrink.sh /usr/local/bin -``` - -Now invoke the script by typing: - - -``` -`sudo pishrink.sh ~/MyImage.img` -``` - -![Invoking the pishrink.sh script][12] - -PiShrink reduced the image size by almost a factor of ten: From the former 30GB to 3.5GB. You can still optimize the size by zipping it before you upload or share it. - -That's it, you are now able to share and flash this image. - -### Flashing the image - -If you want to flash your own custom Raspberry Pi image back to the SD card using Linux, follow the steps below. - -Put the SD card into your PC. Your system will likely automatically mount the filesystem on the SD card if there is already a previous installation. You can check this by opening a command line and typing `lsblk -p`: - -![Checking automatic mounting with lsblk -p][13] - -As you can see in the screenshot above, my system automatically mounted two filesystems, `boot` and `rootfs` as this SD card already contained a Raspberry Pi OS. Before we start flashing the SD card we have to unmount the file systems first by typing: - - -``` -`umount /dev/sdc1 && umount /dev/sdc2` -``` - -The output of `lsblk -p` should look like this in order to proceed: - -![Output of lsblk -p][14] - -Now you can flash the image to the SD card: Open a command line and type: - - -``` -`dd if=/path/to/image.img of=/dev/sdc bs=32M, conv=fsync` -``` - -With `bs=32M`, you specify that the SD card is written in 32-megabyte blocks, `conv=fsync` forces the process to physically write each block. - -If successful, you should see this output: - -![Successful output example][15] - -Done! You can now put the SD card back into the Raspberry Pi and boot it. - -### Summary - -Both of the techniques presented in this article have their advantages and disadvantages. 
Whereas using `pi-gen` to create your own custom Raspberry Pi images is more error-prone than simply modifying an existing image, it is the method of choice if you plan to set up a [CICD pipeline][16]. My personal favorite is clearly to modify an existing image as you are directly able to make sure that the changes you applied are working. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/custom-raspberry-pi-image - -作者:[Stephan Avenwedde][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/hansic99 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_board_vector_red.png?itok=yaqYjYqI (Vector, generic Raspberry Pi board) -[2]: https://opensource.com/users/alanfdoss -[3]: https://opensource.com/article/21/5/raspberry-pi-cockpit -[4]: https://github.com/RPi-Distro/pi-gen -[5]: https://opensource.com/article/21/6/try-linux-virtualbox -[6]: https://github.com/RPi-Distro/pi-gen/blob/master/README.md#Dependencies -[7]: https://man7.org/linux/man-pages/man1/quilt.1.html -[8]: https://opensource.com/sites/default/files/uploads/1_pi_gen_sleep.png (Sleep in between sync and unmount - core dump example) -[9]: https://opensource.com/sites/default/files/uploads/pi_gen_lsblk_mounted.png (Using lsblk -p to check mounting partitions) -[10]: https://opensource.com/sites/default/files/uploads/rpi_image_copy.png (Copying image from the SD card) -[11]: https://github.com/Drewsif/PiShrink -[12]: https://opensource.com/sites/default/files/uploads/rpi_pishrink.png (Invoking the pishrink.sh script) -[13]: https://opensource.com/sites/default/files/uploads/pi_gen_lsblk_mounted_0.png (Checking automatic mounting with lsblk -p) -[14]: https://opensource.com/sites/default/files/uploads/pi_gen_lsblk_unmounted2.png (Output of lsblk -p) -[15]: https://opensource.com/sites/default/files/uploads/pi_gen_flash.png (Successful output example) -[16]: https://en.wikipedia.org/wiki/CI/CD diff --git a/sources/tech/20210728 Getting started with Maxima in Fedora Linux.md b/sources/tech/20210728 Getting started with Maxima in Fedora Linux.md deleted file mode 100644 index f75edbf4f4..0000000000 --- a/sources/tech/20210728 Getting started with Maxima in Fedora Linux.md +++ /dev/null @@ -1,305 +0,0 @@ -[#]: subject: (Getting started with Maxima in Fedora Linux) -[#]: via: (https://fedoramagazine.org/getting-started-with-maxima-in-fedora-linux/) -[#]: author: (Jagat Kafle https://fedoramagazine.org/author/jkafle/) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Getting started with Maxima in Fedora Linux -====== - -![][1] - -Photo by [Roman Mager][2] on [Unsplash][3] - -[Maxima][4] is an open source computer algebra system (CAS) with powerful symbolic, numerical, and graphical capabilities. You can perform matrix operations, differentiation, integration, solve ordinary differential equations as well as plot functions and data in two and three dimensions. As such, it is helpful for anyone interested in science and math. This article goes through installing and using Maxima in Fedora Linux. - -### Installing Maxima - -Maxima is a command line system. 
You can install Maxima from the official Fedora repository using the following command: - -``` -sudo dnf install maxima -``` - -You can then use Maxima from the terminal by invoking the command _maxima_. - -![Maxima session in gnome terminal in Fedora Linux 34][5] - -### Installing wxMaxima - -[wxMaxima][6] is a document based interface for Maxima. To install it in Fedora Linux, use the following command: - -``` -sudo dnf install wxmaxima -``` - -You can launch wxMaxima either by invoking the command _wxmaxima_ in the terminal or clicking its application icon from the app grid or menu. - -![wxMaxima session in Fedora Linux 34][7] - -### Basic Commands - -After calling _maxima_, you should see terminal output as in the [figure above][8]. - -The _(%i1)_ is the input label where you enter the commands. Command in Maxima is an expression that can span over many lines and is closed with a semicolon (;). The _o_ labels denote the outputs. Comments are enclosed between _/*_ and _*/_. You can use the special symbol percent _(%)_ to refer to the immediately preceding result computed by Maxima. If you don’t want to print a result, you can finish your command with _$_ instead of _;_. Here are basic arithmetic commands in Maxima: - -``` -(%i1) (19 + 7)/(52 - 2 * 13); -(%o1) 1 -(%i2) 127 / 5; - 127 -(%o2) --- - 5 -(%i3) float (127 / 5); -(%o3) 25.4 -(%i4) 127.0 / 5; -(%o4) 25.4 -(%i5) sqrt(2.0); -(%o5) 1.414213562373095 -(%i6) sin(%pi/2); -(%o6) 1 -(%i7) abs(-12); -(%o7) 12 -(%i8) 2+3%i + 5 - 4%i; /*complex arithmetic*/ -(%o8) 7 - %i -``` - -To end the Maxima session, type the command: - -``` -quit(); -``` - -### Algebra - -Maxima can expand and factor polynomials: - -``` -(%i1) (x+y)^3 + (x+y)^2 + (x+y); - 3 2 -(%o1) (y + x) + (y + x) + y + x -(%i2) expand(%); - 3 2 2 2 3 2 -(%o2) y + 3 x y + y + 3 x y + 2 x y + y + x + x + x -(%i3) factor(%); - 2 2 -(%o3) (y + x) (y + 2 x y + y + x + x + 1) -``` - -To substitute _y_ with _z_ and _x_ with _5,_ refer the output label above and use the following command: - -``` -(%i4) %o3, y=z, x=5; - 2 -(%o4) (z + 5) (z + 11 z + 31) -``` - -You can easily manipulate trigonometric identities: - -``` -(%i1) sin(x) * cos(x+y)^2; - 2 -(%o1) sin(x) cos (y + x) -(%i2) trigexpand(%); - 2 -(%o2) sin(x) (cos(x) cos(y) - sin(x) sin(y)) -(%i3) trigreduce(%o1); - sin(2 y + 3 x) - sin(2 y + x) sin(x) -(%o3) ----------------------------- + ------ - 4 2 -``` - -You can also solve algebraic equations in one or more variables: - -``` -(%i1) solve(x^2+5*x+6); - (%o1) [x = - 3, x = - 2] -(%i2) solve(x^3 + 1); - sqrt(3) %i - 1 sqrt(3) %i + 1 - (%o2) [x = - --------------, x = --------------, x = - 1] - 2 2 -(%i3) eqns: [x^2 + y^2 = 9, x + y = 3]; - 2 2 - (%o3) [y + x = 9, y + x = 3] - (%i4) solve(eqns, [x,y]); - (%o4) [[x = 3, y = 0], [x = 0, y = 3]] -``` - -### Calculus - -Define _f_ to be a function of _x._ You can then find the limit, derivative and integral of the function: - -``` -(%i1) f: x^2; - 2 - (%o1) x - (%i2) limit(f,x,0); - (%o2) 0 - (%i3) limit(1/f,x,0); - (%o3) inf - (%i4) diff(f, x); - (%o4) 2 x - (%i5) integrate(f, x); - 3 - x - (%o5) -- - 3 -``` - -To find definite integrals, slightly modify the syntax above. - -``` -(%i6) integrate(f, x, 1, inf); -defint: integral is divergent. - -- an error. To debug this try: debugmode(true); -(%i7) integrate(1/f, x, 1, inf); -(%o7) 1 -``` - -Maxima can perform Taylor expansion. Here’s the Taylor expansion of sin(x) up to order 5 terms. - -``` -(%i1) taylor(sin(x), x, 0, 5); - 3 5 - x x - (%o1)/T/ x - -- + --- + . . . 
- 6 120 -``` - -To represent derivatives in unevaluated form, use the following syntax. - -``` -(%i2) 'diff(y,x); - dy - (%o2) -- - dx -``` - -The ode2 function can solve first and second order ordinary differential equations (ODEs). - -``` -(%i1) 'diff(y,x,2) + y = 0; - 2 - d y - (%o1) --- + y = 0 - 2 - dx - (%i2) ode2(%o1,y,x); - (%o2) y = %k1 sin(x) + %k2 cos(x) -``` - -### Matrix Operations - -To enter a matrix, use the entermatrix function. Here’s an example of a general 2×2 matrix. - -``` -(%i1) A: entermatrix(2,2); - Is the matrix 1. Diagonal 2. Symmetric 3. Antisymmetric 4. General - Answer 1, 2, 3 or 4 : - 4; - Row 1 Column 1: - 1; - Row 1 Column 2: - 2; - Row 2 Column 1: - 3; - Row 2 Column 2: - 4; - Matrix entered. - [ 1 2 ] - (%o1) [ ] - [ 3 4 ] -``` - -You can then find the determinant, transpose, inverse, eigenvalues and eigenvectors of the matrix. - -``` -(%i2) determinant(A); - (%o2) - 2 - (%i3) transpose(A); - [ 1 3 ] - (%o3) [ ] - [ 2 4 ] -(%i4) invert(A); - [ - 2 1 ] - [ ] - (%o4) [ 3 1 ] - [ - - - ] - [ 2 2 ] -(%i5) eigenvectors(A); - sqrt(33) - 5 sqrt(33) + 5 - (%o5) [[[- ------------, ------------], [1, 1]], - 2 2 - sqrt(33) - 3 sqrt(33) + 3 - [[[1, - ------------]], [[1, ------------]]]] - 4 4 -``` - -In the output label _(%o5)_ the first array gives the eigenvalues, the second array gives the multiplicity of the respective eigenvalues, and the next two arrays give the corresponding eigenvectors of the matrix A. - -### Plotting - -Maxima can use either [Gnuplot][9], [Xmaxima][10] or [Geomview][11] as graphics program. Maxima package in Fedora Linux comes with _gnuplot_ as a dependency, so Maxima uses _gnuplot_pipes_ as the plotting format. To check the plotting format, use the following command inside Maxima. - -``` -get_plot_option(plot_format); -``` - -Below are some plotting examples. - -``` -(%i1) plot2d([sin(x), cos(x)], [x, -2%pi, 2%pi]); -``` - -![2d plot using Maxima][12] - -``` -(%i2) plot3d(sin(sqrt(x^2+y^2)), [x, -7, 7], [y, -7, 7]); -``` - -![3d plot using Maxima][13] - -``` -(%i3) mandelbrot ([iterations, 30], [x, -2, 1], [y, -1.2, 1.2], - [grid,400,400]); -``` - -![The Mandelbrot Set][14] - -You can read more about Maxima and its capabilities in its [official website][15] and [documentation][16]. - -Fedora Linux has plethora of tools for scientific use. You can find the widely used ones in the [Fedora Scientific Guide][17]. 
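Finally, the interactive session is not the only way to drive Maxima. For calculations you want to repeat, you can keep the commands in a plain text file and run them in batch mode from the shell, which also works with the gnuplot plotting backend described above. The following is a small sketch; the file names are placeholders, and the plot options used are the gnuplot-related ones documented in the Maxima manual.

```
# Save a few commands to a script file (the name is arbitrary)
cat > quickplot.mac << 'EOF'
/* a definite integral, then a plot written to a PNG file via gnuplot */
integrate(1/(1 + x^2), x, 0, 1);
plot2d(sin(x)/x, [x, 0.1, 20],
       [gnuplot_term, png], [gnuplot_out_file, "sinc.png"])$
EOF

# Run the script non-interactively in batch mode
maxima -b quickplot.mac
```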
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/getting-started-with-maxima-in-fedora-linux/ - -作者:[Jagat Kafle][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/jkafle/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/Getting-started-with-Maxima-in-Fedora-Linux-816x345.png -[2]: https://unsplash.com/@roman_lazygeek -[3]: https://unsplash.com/s/photos/mathematics-tasks -[4]: https://maxima.sourceforge.io/ -[5]: https://fedoramagazine.org/wp-content/uploads/2021/07/maxima-terminal.png -[6]: https://wxmaxima-developers.github.io/wxmaxima/index.html -[7]: https://fedoramagazine.org/wp-content/uploads/2021/07/wxmaxima.png -[8]: tmp.LH5pctTy1x#maxima-terminal -[9]: http://www.gnuplot.info/ -[10]: https://maxima.sourceforge.io/docs/xmaxima/xmaxima.html -[11]: http://www.geomview.org/ -[12]: https://fedoramagazine.org/wp-content/uploads/2021/07/2d-maxima.png -[13]: https://fedoramagazine.org/wp-content/uploads/2021/07/3d-maxima.png -[14]: https://fedoramagazine.org/wp-content/uploads/2021/07/mandelbrot-maxima.png -[15]: https://maxima.sourceforge.io/index.html -[16]: https://maxima.sourceforge.io/docs/manual/maxima_toc.html -[17]: https://fedora-scientific.readthedocs.io/en/latest/index.html diff --git a/sources/tech/20210729 5 reasons you should run your apps on WildFly.md b/sources/tech/20210729 5 reasons you should run your apps on WildFly.md deleted file mode 100644 index 130a9691c1..0000000000 --- a/sources/tech/20210729 5 reasons you should run your apps on WildFly.md +++ /dev/null @@ -1,114 +0,0 @@ -[#]: subject: (5 reasons you should run your apps on WildFly) -[#]: via: (https://opensource.com/article/21/7/run-apps-wildfly) -[#]: author: (Ranabir Chakraborty https://opensource.com/users/ranabir-chakraborty) -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -5 reasons you should run your apps on WildFly -====== -WildFly is a popular choice for users and developers worldwide who -develop enterprise-capable applications. -![Person drinking a hot drink at the computer][1] - -WildFly, formerly known as JBoss Application Server, is an open source Java EE application server. Its primary goal is to provide a set of vital tools for enterprise Java applications. - -According to the Jakarta EE 2020/2021 [survey][2], WildFly is head and shoulders above in the recent application servers and in the rating categories. Here are some of the reasons why: - -### 1. Save time with faster development - -WildFly supports the newest standards for REST-based data access, including JAX-RS 2 and JSON-P, and because it's building on Jakarta EE, which provides rich enterprise capabilities with ease of use of frameworks that eliminate boilerplate and reduce technical burden. - -The quick boot feature of WildFly, integrated with the easy-to-use Arquillian framework, allows for test-driven development using the actual environment in which your code runs. This test code is separate and deployed alongside the application, where it has full access to server resources. - -### 2\. Powerful but simple to use - -WildFly configuration setup is centralized, simple, and user-focused.  
-The configuration file—organized by subsystems—is easy to understand and has no internal server wiring that will be exposed. All management capabilities appear in a unified manner across all forms of access. These include a command-line interface, a web-based administration console, a native Java API, an HTTP/JSON-based REST API, and a JMX gateway. These options allow for custom automation using the tools and languages that best suit your needs. - -### 3. Modular and lightweight - -WildFly does classloading right. And it does it smoothly. It uses JBoss Modules to provide true application isolation while hiding server implementation classes from the application and only connects to JARs that your application needs. Appearance rules have sensible defaults but are usually customized. The dependency resolution algorithm means that classloading performance isn't affected by the number of versions of libraries you've got installed. - -In WildFly base, they've developed runtime services to attenuate heap allocation using standard cached indexed metadata over duplicate full parses, which reduces heap and object churn. One hundred percent of the administration console is stateless and purely client-driven. It starts immediately and requires zero memory on the server. This integrated configuration enables WildFly to run with stock JVM settings—even on small devices while leaving more headroom for application data and supporting high-level scalability. - -### 4\. Save resources with efficient management - -WildFly takes a more aggressive approach to memory management and relies on pluggable subsystems, installed or removed as required. Subsystems use smart and intelligent defaults but can still be customized to best suit your needs. When working with domain mode, all participating servers' configuration is laid out in a well-organized, consistent manner within the same file. - -### 5. Leverage open source - -WildFly is an open source community project and is out there to be used and distributed using the LGPL v2.1 license, which means it's available for you to download and use for whatever you need. This allows organizations to develop unique new technologies and federates the world of technology to help successful startups to spring up anywhere. - -## 8 ways to contribute to WildFly - -Now that you know a bit about WildFly, Let’s try to understand the ways you can get involved with WildFly. - -WildFly relies on contributions from people like you. I’ve joined Red Hat and contributed to WildFly for a year now, and it’s fun to work with great minds around you, and you’ll get to learn a lot. Here are some ways by which you can be a part of and assist the community. - -### 1\. Check out the repository. - -Here are the [WildFly][3] and [WildFly Core][4] (WildFly Core provides the core runtime used by the Wildfly application server). If you want to get more details, you can check out this [document][5]. - -### 2\. Raise a ticket or work on existing issues. - -After checking out the WildFly repositories, if you feel some enhancements or fixes are needed, you can create issues for [WildFly][6] and [WildFly Core][7], or work on pre-existing issues. - -### 3\. Edit the website - -Like the WildFly project, the website is open source too. You can check out the [repository][8] and contribute here too, with some new and attractive modifications. - -### 4\. Blog with us - -You have a [blog][9], and all entries are maintained in a Git repository. 
If you have new ideas, you can share your experience and ideas in the form of an editorial. We use [markdown][10] and [AsciiDoc][11] so that you can submit your blog post as a pull request. - -### 5\. Edit the Documentation - -You can also help us to [make better documentation][12]. Let us know if you find a typo or an error, and feel free to send a pull request. Your input is valuable to us, and always welcome. - -### 6\. Help somebody out - -You can check out our [forums][13] and if you run into an issue, post your question and check the previous issues if you see some similarities. You can also share your knowledge and answer some of the queries because your knowledge can help others. - -### 7\. Join our chatroom and follow the latest news - -Our Project team has an open (and open source) and active [chatroom][14] where you can ask your questions and check out the [latest news][15] section to find what are the new things we are working on. Stop by, say hello, interact with the team members, but keep in mind that basic rules of civility apply. - -### 8\. Spread the word - -The simplest and easiest way to help the WildFly community is to act as a project ambassador by spreading the news, educating others about the usage of WildFly, and showing up to the community events in your area. - -## **Final thoughts** - -WildFly is a popular choice for users and developers worldwide who develop enterprise-capable applications. WildFly is an active project, so there are always new features in the works, and we're all delighted to be a part of it. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/run-apps-wildfly - -作者:[Ranabir Chakraborty][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ranabir-chakraborty -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer) -[2]: https://arjan-tijms.omnifaces.org/2021/02/jakarta-ee-survey-20202021-results.html -[3]: https://github.com/wildfly/wildfly -[4]: https://github.com/wildfly/wildfly-core -[5]: https://developer.jboss.org/docs/DOC-48381 -[6]: https://issues.redhat.com/browse/WFLY-14541?jql=project%20%3D%20WFLY%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC -[7]: https://issues.redhat.com/projects/WFCORE/issues/WFCORE-4827?filter=allopenissues -[8]: https://github.com/wildfly/wildfly.org -[9]: https://github.com/wildfly/wildfly.org/tree/master/_posts -[10]: https://opensource.com/article/19/9/introduction-markdown -[11]: https://asciidoc.org/ -[12]: https://github.com/wildfly/wildfly/tree/master/docs -[13]: https://groups.google.com/g/wildfly -[14]: https://wildfly.zulipchat.com/#recent_topics -[15]: https://www.wildfly.org/news/ diff --git a/sources/tech/20210803 Get started with Argo CD.md b/sources/tech/20210803 Get started with Argo CD.md deleted file mode 100644 index 17c30eeeda..0000000000 --- a/sources/tech/20210803 Get started with Argo CD.md +++ /dev/null @@ -1,168 +0,0 @@ -[#]: subject: (Get started with Argo CD) -[#]: via: (https://opensource.com/article/21/8/argo-cd) -[#]: author: (Ayush Sharma https://opensource.com/users/ayushsharma) -[#]: collector: (lujun9972) -[#]: translator: 
( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) - -Get started with Argo CD -====== -Argo CD is a simple pull-based GitOps deployment tool that syncs -Kubernetes manifest files with a cluster for easy, no-nonsense -deployments. -![Plumbing tubes in many directions][1] - -In a typical push-based deployment, tools like Ansible and Jenkins connect directly to the server or cluster and execute the provisioning commands. This approach works well when the cluster is accessible on the network and there is direct connectivity between your deployment server and the destination server. For compliance or security reasons, connectivity between the deployment tool and the cluster may not be possible. - -[Argo CD][2] is a pull-based deployment tool. It watches a remote Git repository for new or updated manifest files and synchronizes those changes with the cluster. By managing manifests in Git and syncing them with the cluster, you get all the advantages of a Git-based workflow (version control, pull-request reviews, transparency in collaboration, etc.) and a one-to-one mapping between what is in the Git repo and what is deployed in the cluster. This method is called GitOps. - -In this tutorial, you will: - - 1. Install Argo CD on a Minikube installation - 2. Create a sample Argo CD application called `ayush-test-application` and link it with [my repo `ayush-sharma/example-assets`][3] - 3. Create an [Nginx deployment with three replicas][4] - 4. Ensure the new application shows up on the Argo CD dashboard and verify it using `kubectl` - - - -### Install Argo CD - -This tutorial uses Minikube version v1.21.0. If you don't have it, [download and install Minikube][5]. - -With Minikube up and running, you can install Argo CD. The Argo CD documentation contains detailed steps on how to [install and configure it for any cluster][6]. Once you've executed those steps, run `minikube tunnel` in a separate terminal window to ensure Minikube exposes the Argo CD server's load balancer endpoint on your local system. To verify this, run `kubectl get po -n argocd` and check if the `argo-server` service has an `EXTERNAL-IP:` - - -``` -user@system ~ kubectl get svc -n argocd -NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE -argocd-dex-server       ClusterIP      10.110.2.52     <none>          5556/TCP,5557/TCP,5558/TCP   3h32m -argocd-metrics          ClusterIP      10.100.73.57    <none>          8082/TCP                     3h32m -argocd-redis            ClusterIP      10.104.11.24    <none>          6379/TCP                     3h32m -argocd-repo-server      ClusterIP      10.100.132.53   <none>          8081/TCP,8084/TCP            3h32m -argocd-server           LoadBalancer   10.98.182.198   10.98.182.198   80:32746/TCP,443:31353/TCP   3h32m -argocd-server-metrics   ClusterIP      10.105.182.52   <none>          8083/TCP                     3h32m -``` - -Once the installation is complete and the load balancer is working, the Argo CD user interface (UI) will be accessible at the `EXTERNAL IP`. - -![Argo CD home page][7] - -(Ayush Sharma, [CC BY-SA 4.0][8]) - -### Create your first application - -Before talking about Argo CD deployments, you need a Git repo with a Kubernetes (k8s) manifest file ready to deploy. I'm using my [public repo `example-assets`][3] with an Nginx deployment [manifest file in `/argocd/getting-started`][4]. 
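-
-If you would rather point Argo CD at your own repository, any ordinary Kubernetes manifest committed to it will do. The sketch below shows roughly what such a file might look like—a plain Nginx Deployment with three replicas, which is what this tutorial deploys—but it is illustrative only: the names, labels, and image tag are placeholders, not a copy of the file in the example repo.
-
-```
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx-deployment
-  labels:
-    app: nginx
-spec:
-  # Three replicas, matching the deployment used later in this tutorial.
-  replicas: 3
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-        - name: nginx
-          image: nginx:1.21   # placeholder tag; pin whichever version you need
-          ports:
-            - containerPort: 80
-```
-
-Commit a file like this anywhere under the path you later hand to Argo CD as the source path, and it becomes part of the desired state Argo CD keeps in sync.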
- -The goal is to get Argo CD to listen to the k8s manifest file for changes and then sync them with the cluster it is deployed in (in this case, Minikube). You do this by creating an application containing information about the manifest files' source repo, destination cluster details, and synchronization policies. - -Click `New App` on the top left to configure a new application. Since my destination Kubernetes server is the one Argo CD is installed on (Minikube), I left the server defaults as-is. These are the values I configured: - - 1. Application name: `ayush-test-application` - 2. Project: `default` - 3. Sync policy: `automated` - 4. Sync options: `prune: true; selfHeal: true` - 5. Source repository URL: `https://gitlab.com/ayush-sharma/example-assets.git` - 6. Source revision: `HEAD` - 7. Source path: `argocd/getting-started` - 8. Destination cluster URL: `https://kubernetes.default.svc` - 9. Destination namespace: `default` - - - -To make things easier, you can click `EDIT AS YAML` on the top right and paste in: - - -``` -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: -  name: ayush-test-application -spec: -  destination: -    name: 'default' -    namespace: default -    server: '' -  source: -    path: argocd/getting-started -    repoURL: '' -    targetRevision: HEAD -  project: default -  syncPolicy: -    automated: -      prune: true -      selfHeal: true -``` - -Your configuration should look like this: - -![Argo CD application configuration][9] - -(Ayush Sharma, [CC BY-SA 4.0][8]) - -After saving the configuration, your application should show up as a card on the home page. Since you specified the sync policy as `Automated`, your new application will begin syncing with the repo immediately. - -![Argo CD application syncing][10] - -(Ayush Sharma, [CC BY-SA 4.0][8]) - -### Create the Nginx deployment - -In this tutorial, the manifest file is a standard Nginx deployment with three replicas. Once `ayush-test-application` completes syncing, Argo CD will display a nice graphical view of the deployment. - -![Argo CD application deployment][11] - -(Ayush Sharma, [CC BY-SA 4.0][8]) - -Verify the deployment using `kubectl get po`: - - -``` -NAME                               READY   STATUS    RESTARTS   AGE -nginx-deployment-585449566-584cj   1/1     Running   0          5m -nginx-deployment-585449566-6qn2z   1/1     Running   0          5m -nginx-deployment-585449566-d9fm2   1/1     Running   0          5m -``` - -### Argo CD's advantages - -Argo CD is a relatively lightweight approach to k8s deployments. I'm especially fond of the one-to-one relationship between what's in the repo and what's in the cluster, making incident management a lot simpler. - -Another big advantage is that since the Git repo contains everything Argo CD requires, you could delete the entire Argo CD installation and set things up from scratch. This means bringing up a second identical cluster with your entire workload deployed is now more feasible and practical in the event of a catastrophic outage. - -A third big advantage is fewer open ports. Argo CD pulls changes from a remote Git repo, so there's no need to define firewall rules and virtual private cloud (VPC) peering connections to get your deployment servers to connect with your cluster, which is one less point of entry. This reduces the attack surface area for your development, quality assurance (QA), and production servers significantly. - -Since the Git repo and branch name are configurable, you can get creative with deployment models. 
For example, you could have two different Argo CDs running on two different QA and production clusters listening to the same repo's branch. This guarantees that the same manifest file is deployed on both clusters, ensuring QA and production environments contain the same codebase. Also, a single Argo CD is capable of targeting multiple servers, meaning a hub-and-spoke deployment model is possible, where one master Argo CD orchestrates deployments across multiple development, QA, and production clusters in different regions or environments. - -Get creative with Argo CD, and don't forget to share your experiments with others. - -* * * - -_This article originally appeared on [Ayush Sharma's blog][12] under a [CC BY-SA 4.0][8] license._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/argo-cd - -作者:[Ayush Sharma][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ayushsharma -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions) -[2]: https://argoproj.github.io/cd/ -[3]: https://gitlab.com/ayush-sharma/example-assets/-/tree/main/argocd/getting-started -[4]: https://gitlab.com/ayush-sharma/example-assets/-/blob/main/argocd/getting-started/nginx-manifest.yml -[5]: https://minikube.sigs.k8s.io/docs/start/ -[6]: https://argoproj.github.io/argo-cd/getting_started/ -[7]: https://opensource.com/sites/default/files/uploads/getting-started-with-argocd-application-page.png (Argo CD home page) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://opensource.com/sites/default/files/uploads/getting-started-with-argocd-creating-the-application.png (Argo CD application configuration) -[10]: https://opensource.com/sites/default/files/uploads/getting-started-with-argocd-creating-ayush-test-application.png (Argo CD application syncing) -[11]: https://opensource.com/sites/default/files/uploads/getting-started-with-argocd-successful-nginx-deployment.png (Argo CD application deployment) -[12]: https://notes.ayushsharma.in/2021/07/getting-started-with-argocd diff --git a/sources/tech/20210806 Use OpenCV on Fedora Linux - part 2.md b/sources/tech/20210806 Use OpenCV on Fedora Linux - part 2.md deleted file mode 100644 index 1c1fb80acd..0000000000 --- a/sources/tech/20210806 Use OpenCV on Fedora Linux - part 2.md +++ /dev/null @@ -1,224 +0,0 @@ -[#]: subject: "Use OpenCV on Fedora Linux ‒ part 2" -[#]: via: "https://fedoramagazine.org/use-opencv-on-fedora-linux-part-2/" -[#]: author: "Onuralp SEZER https://fedoramagazine.org/author/thunderbirdtr/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Use OpenCV on Fedora Linux ‒ part 2 -====== - -![][1] - -Cover image excerpted from Starry Night by [Vincent van Gogh][2], Public domain, via Wikimedia Commons - -Welcome back to the OpenCV series where we explore how to make use of OpenCV on Fedora Linux. [The first article][3] covered the basic functions and use cases of OpenCV. In addition to that you learned about loading images, color mapping, and the difference between BGR and RGB color maps. 
You also learned how to separate and merge color channels and how to convert to different color spaces. This article will cover basic image manipulation and show you how to perform image transformations including: - - * **Accessing individual image pixels** - * **Modifying a range of image pixels** - * **Cropping** - * **Resizing** - * **Flipping** - - - -### Accessing individual pixels - -``` -import cv2 -import numpy as np -import matplotlib.pyplot as plt - -# Read image as gray scale. -img = cv2.imread(cv2.samples.findFile("gradient.png"),0) -# Set color map to gray scale for proper rendering. -plt.imshow(img, cmap='gray') -# Print img pixels as 2D Numpy Array -print(img) -# Show image with Matplotlib -plt.show() -``` - -![][4] - -To access a pixel in a numpy matrix, you have to use matrix notation such as matrix[**r**,**c**], where the **r** is the row number and **c** is the column number. Also note that the matrix is 0-indexed. If you want to access the first pixel, you need to specify matrix[0,0]. The following example prints one black pixel from top-left and one white pixel from top-right-corner. - -``` -# print the first pixel -print(img[0,0]) -# print the white pixel to the top right corner -print(img[0,299]) -``` - -### Modifying a range of image pixels - -You can modify the values of pixels using the same notation described above. - -``` -gr_img = img.copy() - -# Modify pixel one by one -#gr_img[20,20] = 200 -#gr_img[20,21] = 200 -#gr_img[20,22] = 200 -#gr_img[20,23] = 200 -#gr_img[20,24] = 200 -# ... - -# Modify pixel between 20-80 pixel range -gr_img[20:80,20:80] = 200 - -plt.imshow(gr_img, cmap='gray') -print(gr_img) -plt.show() -``` - -![][5] - -### Cropping images - -Cropping an image is achieved by selecting a specific (pixel) region of the image. - -``` -import cv2 as cv -import matplotlib.pyplot as plt -img = cv.imread(cv.samples.findFile("starry_night.jpg"),cv.IMREAD_COLOR) -img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB) -fig, (ax1, ax2) = plt.subplots(1,2) -ax1.imshow(img_rgb) -ax1.set_title('Before Crop') -ax2.imshow(img_rgb[200:400, 300:600]) -ax2.set_title('After Crop') -plt.show() -``` - -![][6] - -### Resizing images - -**Syntax:** _dst = cv.resize( src, dsize[, dst[, fx[, fy[, interpolation]]]] )_ - -The _resize_ function resizes the _src_ image down to or up to the specified size. The size and type are derived from the values of _src_, _dsize_,_fx_, and _fy_. - -The _resize_ function has two required arguments: - - * **src:** input image - * **dsize:** output image size - - - -Optional arguments that are often used include: - - * **fx:** The scale factor along the horizontal axis. When this is 0, the factor is computed as _dsize.width/src.cols_. - * **fy:** The scale factor along the vertical axis. When this is 0, the factor is computed as _dsize.height/src.rows_. 
- - - -``` -import cv2 as cv -import matplotlib.pyplot as plt - -img = cv.imread(cv.samples.findFile("starry_night.jpg"), cv.IMREAD_COLOR) -img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB) - -plt.figure(figsize=[18, 5]) -plt.subplot(1, 3, 1) # row 1, column 3, count 1 - -cropped_region = img_rgb[200:400, 300:600] -resized_img_5x = cv.resize(cropped_region, None, fx=5, fy=5) -plt.imshow(resized_img_5x) -plt.title("Resize Cropped Image with Scale 5X") - -width = 200 -height = 300 -dimension = (width, height) -resized_img = cv.resize(img_rgb, dsize=dimension, interpolation=cv.INTER_AREA) - -plt.subplot(1, 3, 2) -plt.imshow(resized_img) -plt.title("Resize Image with Custom Size") - -desired_width = 500 -aspect_ratio = desired_width / img_rgb.shape[1] -desired_height = int(resized_img.shape[0] * aspect_ratio) -dim = (desired_width, desired_height) -resized_cropped_region = cv.resize(img_rgb, dsize=dim, interpolation=cv.INTER_AREA) - -plt.subplot(1, 3, 3) -plt.imshow(resized_cropped_region) -plt.title("Keep Aspect Ratio - Resize Image") -plt.show() -``` - -![][7] - -### Flipping images - -**Syntax:** _dst = cv.flip( src, flipCode )_ - - * **dst:** output array of the same size and type as _src_. - - - -The _flip_ function flips the array in one of three different ways. - -The _flip_ function has two required arguments: - - * **src:** the input image - * **flipCode:** a flag to specify how to flip the image - * Use **0** to flip the image on the x-axis. - * Use a positive value (for example, **1**) to flip the image on the y-axis. - * Use a negative value (for example, **-1**) to flip the image on both axes. - - - -``` -import cv2 as cv -import matplotlib.pyplot as plt -img = cv.imread(cv.samples.findFile("starry_night.jpg"),cv.IMREAD_COLOR) -img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB) - -img_rgb_flipped_horz = cv.flip(img_rgb, 1) -img_rgb_flipped_vert = cv.flip(img_rgb, 0) -img_rgb_flipped_both = cv.flip(img_rgb, -1) - -plt.figure(figsize=[18,5]) -plt.subplot(141);plt.imshow(img_rgb_flipped_horz);plt.title("Horizontal Flip"); -plt.subplot(142);plt.imshow(img_rgb_flipped_vert);plt.title("Vertical Flip"); -plt.subplot(143);plt.imshow(img_rgb_flipped_both);plt.title("Both Flipped"); -plt.subplot(144);plt.imshow(img_rgb);plt.title("Original"); -plt.show() -``` - -![][8] - -### Further information - -More details about OpenCV are available in the [documentation][9]. - -Thank you. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/use-opencv-on-fedora-linux-part-2/ - -作者:[Onuralp SEZER][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/thunderbirdtr/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/starry-night-2-816x345.jpg -[2]: https://commons.wikimedia.org/wiki/File:Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg -[3]: https://fedoramagazine.org/use-opencv-on-fedora-linux-part-1/ -[4]: https://fedoramagazine.org/wp-content/uploads/2021/06/image-8.png -[5]: https://fedoramagazine.org/wp-content/uploads/2021/06/image-9.png -[6]: https://fedoramagazine.org/wp-content/uploads/2021/06/image-11-1024x408.png -[7]: https://fedoramagazine.org/wp-content/uploads/2021/07/resize_img-1024x338.png -[8]: https://fedoramagazine.org/wp-content/uploads/2021/07/flip_image_cv-1024x250.png -[9]: https://docs.opencv.org/4.5.2/index.html diff --git a/sources/tech/20210808 stow- Your Package Manager When You Can-t Use Your Package Manager.md b/sources/tech/20210808 stow- Your Package Manager When You Can-t Use Your Package Manager.md deleted file mode 100644 index d9941f3de3..0000000000 --- a/sources/tech/20210808 stow- Your Package Manager When You Can-t Use Your Package Manager.md +++ /dev/null @@ -1,172 +0,0 @@ -[#]: subject: "stow: Your Package Manager When You Can't Use Your Package Manager" -[#]: via: "https://theartofmachinery.com/2021/08/08/stow_as_package_manager.html" -[#]: author: "Simon Arneaud https://theartofmachinery.com" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -stow: Your Package Manager When You Can't Use Your Package Manager -====== - -[GNU `stow`][1] is an underrated tool. Generically, it helps maintain a unified tree of files that come from different sources. More concretely, I use a bunch of software (D compilers, various tools) that I install manually instead of through my system’s package manager (for various reasons). `stow` makes that maintainable by letting me cleanly add/remove packages and switch between versions. Here’s how it’s done. - -### The ~/local/ directory - -The idea is simple: you `stow` install all personal software inside a `local/` directory inside your home directory. The resulting directory structure looks the same as if you installed the software normally to the filesystem root, so you’ll end up with `~/local/bin` and `~/local/lib` directories, etc. - -Setting up the `local/` directory for use with `stow` is easy. The main thing you need is a `local/` directory in your home directory, with a `stow/` subdirectory to store package archives: - -``` -$ mkdir -p ~/local/stow -``` - -If you’re installing programs into your `local/` directory, you probably want to add `local/bin` to your `PATH` so you can easily use programs there like other programs. You can add this to the end of your `~/.profile` file (or whatever init file is used by your shell): - -``` -PATH="$HOME/local/bin:$PATH" -``` - -### Downloading and installing tarball packages - -I like [`tsv-utils`][2], a handy collection of tools for data analysis on the command line. It’s not in the normal package managers I use, but there are pre-compiled tarball archives available. Here’s how to use them with `stow`. 
- -First, switch to your `stow` archive directory: - -``` -$ cd ~/local/stow -``` - -Then download the tarball and extract it: - -``` -$ wget https://github.com/eBay/tsv-utils/releases/download/v2.2.0/tsv-utils-v2.2.0_linux-x86_64_ldc2.tar.gz -$ tar xf tsv-utils-v2.2.0_linux-x86_64_ldc2.tar.gz -``` - -You’ll now have a directory containing all the package files: - -``` -$ tree tsv-utils-v2.2.0_linux-x86_64_ldc2 -tsv-utils-v2.2.0_linux-x86_64_ldc2 -├── LICENSE.txt -├── ReleasePackageReadme.txt -├── bash_completion -│ └── tsv-utils -├── bin -│ ├── csv2tsv -│ ├── keep-header -│ ├── number-lines -│ ├── tsv-append -│ ├── tsv-filter -│ ├── tsv-join -│ ├── tsv-pretty -│ ├── tsv-sample -│ ├── tsv-select -│ ├── tsv-split -│ ├── tsv-summarize -│ └── tsv-uniq -└── extras - └── scripts - ├── tsv-sort - └── tsv-sort-fast - -4 directories, 17 files -``` - -You can delete the `.tar.gz` archive if you want. - -Now you can install the package into `local/` with `stow`: - -``` -$ stow tsv-utils-v2.2.0_linux-x86_64_ldc2 -``` - -That creates a bunch of symbolic links inside the parent directory (`~/local/`) pointing to files and directories inside the package directory (`~/local/stow/tsv-utils-v2.2.0_linux-x86_64_ldc2`). - -If you’ve set your `PATH` (you might need to restart your shell), you’ll now be able to run `tsv-utils` commands normally: - -``` -$ tsv-summarize --help -Synopsis: tsv-summarize [options] file [file...] - -tsv-summarize runs aggregation operations on fields in tab-separated value -files. Operations can be run against the full input data or grouped by key -fields. Fields can be specified either by field number or field name. Use -'--help-verbose' for more detailed help. - -Options: - -[*snip*] -``` - -# Removing and upgrading packages - -Okay, `stow`’s algorithm for managing symbolic links is neat, but so far there’s no practical benefit over extracting the tarball directly into `local/`. `stow` shines when you’re maintaining your package collection. For example, if you decide to uninstall `tsv-utils` later, you just need to switch to the archive directory and run `stow` again with the `-D` flag: - -``` -$ cd ~/local/stow -$ stow -D tsv-utils-v2.2.0_linux-x86_64_ldc2 -``` - -That will cleanly remove `tsv-utils` from the `local/` directory without breaking any other installed packages. Try doing that after extracting the tarball directly to `local/`. - -The package directory inside the `stow/` directory will remain, but you can delete that too, if you want, of course. - -`stow` doesn’t manage versions, so upgrading packages means uninstalling the old package and installing the new package. `stow` detects when packages collide (e.g., they both include a file called `bin/tsv-summarize`), so you can only install one version at a time. However, you can keep as many archive directories as you like in `stow/`, allowing you to easily switch back and forth between versions if you need to. - -### Building packages from source - -Not all software comes precompiled. Sometimes you’re experimenting with your own custom version. If you want to use source packages with `stow`, you just need to figure out how to make the source package install to a directory in your `stow/` directory, instead of your filesystem root. - -Suppose I want to install my own version of the [GraphicsMagick][3] image processing tools. This will be a two-stage process. First I’ll need to download and extract the source somewhere (I keep a `src/` directory for third-party source code). 
- -``` -$ cd ~/src -$ wget -O GraphicsMagick-1.3.36.tar.gz https://sourceforge.net/projects/graphicsmagick/files/graphicsmagick/1.3.36/GraphicsMagick-1.3.36.tar.gz/download -$ tar xf GraphicsMagick-1.3.36.tar.gz -$ cd GraphicsMagick-1.3.36 -``` - -GraphicsMagick uses a GNU-style build system using `autotools`. `configure` scripts take a `--prefix` option that sets the installation root. - -``` -$ ./configure --prefix="$HOME/local/stow/GraphicsMagick-1.3.36" -$ make install -``` - -The installation step automatically creates the `stow/GraphicsMagick-1.3.36/` directory. Now I just need to install the built package with `stow`. - -``` -$ cd ~/local/stow -$ stow GraphicsMagick-1.3.36 -$ gm version -GraphicsMagick 1.3.36 20201226 Q8 http://www.GraphicsMagick.org/ -Copyright (C) 2002-2020 GraphicsMagick Group. -Additional copyrights and licenses apply to this software. -See http://www.GraphicsMagick.org/www/Copyright.html for details. - -[*snip*] -``` - -### Other uses - -This is my personal favourite usage of `stow`, but it’s just a generic tool for merging multiple filesystem trees in a maintainable way. Some people use it to manage their `/etc/` configuration files, for example. If you try it out, I’m sure you can find other use cases. - --------------------------------------------------------------------------------- - -via: https://theartofmachinery.com/2021/08/08/stow_as_package_manager.html - -作者:[Simon Arneaud][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://theartofmachinery.com -[b]: https://github.com/lujun9972 -[1]: https://www.gnu.org/software/stow/ -[2]: https://github.com/eBay/tsv-utils -[3]: http://www.graphicsmagick.org/ diff --git a/sources/tech/20210809 Parsing command options in Lua.md b/sources/tech/20210809 Parsing command options in Lua.md deleted file mode 100644 index 4679c6bff4..0000000000 --- a/sources/tech/20210809 Parsing command options in Lua.md +++ /dev/null @@ -1,215 +0,0 @@ -[#]: subject: "Parsing command options in Lua" -[#]: via: "https://opensource.com/article/21/8/parsing-commands-lua" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Parsing command options in Lua -====== -My favorite way to solve the problem of parsing in Lua is alt-getopt. -![Woman sitting in front of her laptop][1] - -When you enter a command into your terminal, there are usually [options][2], also called _switches_ or _flags_, that you can use to modify how the command runs. This is a useful convention defined by the [POSIX specification][3], so as a programmer, it's helpful to know how to detect and parse options. - -As with most languages, there are several ways to solve the problem of parsing options in Lua. My favorite is [alt-getopt][4]. - -### Installing - -The easiest way to obtain and use **alt-getopt** in your code is to [install it with LuaRocks][5]. For most use-cases, you probably want to install it into your local project directory: - - -``` -$ mkdir local -$ luarocks --tree=local install alt-getopt  -Installing -[...] -alt-getopt 0.X.Y-1 is now installed in /tux/myproject/local (license: MIT/X11) -``` - -Alternately, you can download the code from [GitHub][6]. 
- -### Adding a library to your Lua code - -Assuming you've installed the library locally, then you can define your Lua package path and then `require` the **alt-getopt** package: - - -``` -package.path = package.path .. ';local/share/lua/5.1/?.lua' - -local alt_getopt = require("alt_getopt") -``` - -If you've installed it to a known system location, you can omit the `package.path` line (because Lua already knows to look for system-wide libraries.) - -Now you're set to parse options in Lua. - -### Option parsing in Lua - -The first thing you must do to parse options is to define the valid options your application can accept. The **alt_getopt** library sees all options primarily as short options, meaning that you define options as single letters. You can add long versions later. - -When you define valid options, you create a list delimited by colons (`:`), which is interpreted by the `get_opts` function provided by **alt-getopts**. - -First, create some variables to represent the options. The variables `short_opt` and `optarg` represent the short option and the option argument. These are arbitrary variable names, so you can call them anything (as long as it's a valid name for a variable). - -The table `long_opts` must exist, but I find it easiest to define the values of the long options after you've decided on the short options, so leave it empty for now. - - -``` -local long_opts = {} - -local short_opt -local optarg -``` - -With those variables declared, you can iterate over the arguments provided by the user, checking to see whether any argument matches your approved list of valid short options. - -If a valid option is found, you use the `pairs` function in Lua to extract the value of the option. - -To create an option that accepts no argument of its own but is either _on_ or _off_ (often called a _switch_), you place the short option you want to define to the right of a colon (`:`) character. - -In this example, I've created a loop to check for the short option `-a`, which is a switch: - - -``` -short_opt,optarg = alt_getopt.get_opts (arg, ":a", long_opts) -local optvalues = {} -for k,v in pairs (short_opt) do -   table.insert (optvalues, "value of " .. k .. " is " .. v .. "\n") -end - -table.sort (optvalues) -io.write (table.concat (optvalues)) - -for i = optarg,#arg do -   io.write (string.format ("ARGV [%s] = %s\n", i, arg [i])) -end -``` - -At the end of this sample code, I included a for-loop to iterate over any remaining arguments in the command because anything not detected as a valid option must be an argument (probably a file name, URI, or whatever it is that your application operates upon). - -Test the code: - - -``` -$ lua test.lua -a  -value of a is 1 -``` - -The test script has successfully detected the option `-a`, and has assigned it a value of **1** to represent that it does exist. - -Try it again with an extra argument: - - -``` -$ lua test.lua -a hello -value of a is 1 -ARGV [2] = hello -``` - -### Options with arguments - -Some options require an argument all their own. For instance, you might want to allow the user to point your application to a custom configuration file, set an attribute to a color, or set the resolution of a graphic. In **alt_getopt**, options that accept arguments are placed on the left of the colon (`:`) in the short options list. 
- - -``` -`short_opt,optarg = alt_getopt.get_opts (arg, "c:a", long_opts)` -``` - -Test the code: - - -``` -$ lua test.lua -a -c my.config -value of a is 1 -value of c is my.config -``` - -Try it again, this time with a spare argument: - - -``` -$ lua test.lua -a -c my.config hello -value of a is 1 -value of c is my.config -ARGV [4] = hello -``` - -### Long options - -Short options are great for power users, but they don't tend to be very memorable. You can create a table of long options that point to short options so users can learn long options before abbreviating their commands with single-letter options. - - -``` -local long_opts = { -   alpha = "a", -   config = "c" -} -``` - -Users now have the choice between long and short options: - - -``` -$ lua test.lua --config my.config --alpha hello -value of a is 1 -value of c is my.config -ARGV [4] = hello -``` - -### Option parsing - -Here's the full demonstration code for your reference: - - -``` -#!/usr/bin/env lua -package.path = package.path .. ';local/share/lua/5.1/?.lua' - -local alt_getopt = require("alt_getopt") - -local long_opts = { -   alpha = "a", -   config = "c" -} - -local short_opt -local optarg - -short_opt,optarg = alt_getopt.get_opts (arg, "c:a", long_opts) -local optvalues = {} -for k,v in pairs (short_opt) do -   table.insert (optvalues, "value of " .. k .. " is " .. v .. "\n") -end - -table.sort (optvalues) -io.write (table.concat (optvalues)) - -for i = optarg,#arg do -   io.write (string.format ("ARGV [%s] = %s\n", i, arg [i])) -end -``` - -There are further examples in the project's Git repository. Including options for your users is an important feature for any application, and Lua makes it easy to do. There are other libraries aside from **alt_getopt**, but I find this one easy and quick to use. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/parsing-commands-lua - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop) -[2]: https://opensource.com/article/21/7/linux-terminal-basics#options -[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains -[4]: https://luarocks.org/modules/mpeterv/alt-getopt -[5]: https://opensource.com/article/19/11/getting-started-luarocks -[6]: https://github.com/cheusov/lua-alt-getopt diff --git a/sources/tech/20210810 How I use Terraform and Helm to deploy the Kubernetes Dashboard.md b/sources/tech/20210810 How I use Terraform and Helm to deploy the Kubernetes Dashboard.md deleted file mode 100644 index 73f576a25f..0000000000 --- a/sources/tech/20210810 How I use Terraform and Helm to deploy the Kubernetes Dashboard.md +++ /dev/null @@ -1,192 +0,0 @@ -[#]: subject: "How I use Terraform and Helm to deploy the Kubernetes Dashboard" -[#]: via: "https://opensource.com/article/21/8/terraform-deploy-helm" -[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I use Terraform and Helm to deploy the Kubernetes Dashboard -====== -Terraform can deploy Helm Charts. 
Is it right for you? -![Ship captain sailing the Kubernetes seas][1] - -When I'm working on projects that require provisioning cloud infrastructure, my workflow has two disparate components: one is infrastructure orchestration, which includes Terraform to bring up the infrastructure (for instance, new EKS clusters), and the second is the provisioning component, which includes Ansible or Bash scripts to instantiate and initialize that infrastructure to accept new deployments (for instance, installing Cluster Autoscaler, kube-state-metrics, and so on.) - -The reason for this is simple: very few tools can cross over and handle both the orchestration and the provisioning side. When I stumbled on the Helm provider for Terraform, I wanted to explore the possibility of using one tool to handle both sides: using Terraform to bring up a new EKS cluster and provision it with Prometheus, Loki, Grafana, Cluster Autoscaler, and others, all in one neat and clean deployment. But that's not happening until I figure out how to use this thing, so below is my experience using Terraform and Helm for something simple: deploying the Kubernetes Dashboard. - -### The Helm provider - -The Helm provider works like the other cloud providers. You can specify the path of the `KUBECONFIG` or other credentials, run `terraform init`, and the Helm provider gets initialized. - -### Deploying the Kubernetes Dashboard - -I'm going to use [Minikube for this test][2]. - -My `main.tf` file contains the following: - - -``` -provider "helm" { -  kubernetes { -    config_path = "~/.kube/config" -  } -} - -resource "helm_release" "my-kubernetes-dashboard" { - -  name = "my-kubernetes-dashboard" - -  repository = "" -  chart      = "kubernetes-dashboard" -  namespace  = "default" - -  set { -    name  = "service.type" -    value = "LoadBalancer" -  } - -  set { -    name  = "protocolHttp" -    value = "true" -  } - -  set { -    name  = "service.externalPort" -    value = 80 -  } - -  set { -    name  = "replicaCount" -    value = 2 -  } - -  set { -    name  = "rbac.clusterReadOnlyRole" -    value = "true" -  } -} -``` - -In the above Terraform, I'm deploying the `kubernetes-dashboard` Chart from `https://kubernetes.github.io/dashboard/` into the namespace `default`. I'm also using the `set` variable to override the Chart's defaults: - - 1. `service.type`: I'm changing this to `LoadBalancer` to review my changes locally. Remember to run `minikube tunnel` in a separate window, or this won't work. - 2. `protocolHttp`: I'm deploying the non-secure version to suppress HTTPS warnings on `localhost`. - 3. `service.externalPort`: This needs to be 80 for non-secure. - 4. `replicaCount`: I'm changing this to 2 to see if these changes even work :) - 5. `rbac.clusterReadOnlyRole`: This should be `true` for the Dashboard to have the correct permissions. - - - -### Executing our Terraform - -Let's start by initializing Terraform with `terraform init`: - - -``` -Initializing the backend... - -Initializing provider plugins... -\- Finding latest version of hashicorp/helm... -\- Installing hashicorp/helm v2.2.0... -\- Installed hashicorp/helm v2.2.0 (signed by HashiCorp) - -Terraform has created a lock file .terraform.lock.hcl to record the provider -selections it made above. Include this file in your version control repository -so that Terraform can guarantee to make the same selections by default when -you run "terraform init" in the future. - -Terraform has been successfully initialized! - -You may now begin working with Terraform. 
Try running "terraform plan" to see -any changes that are required for your infrastructure. All Terraform commands -should now work. - -If you ever set or change modules or backend configuration for Terraform, -rerun this command to reinitialize your working directory. If you forget, other -commands will detect it and remind you to do so if necessary. -``` - -So far, so good. Terraform successfully initialized the Helm provider. And now for `terraform apply`: - - -``` -Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: -  + create - -Terraform will perform the following actions: - -  # helm_release.my-kubernetes-dashboard will be created -  + resource "helm_release" "my-kubernetes-dashboard" { -      + atomic                     = false -      + chart                      = "kubernetes-dashboard" -      + cleanup_on_fail            = false -      [...] -      + set { -          + name  = "service.type" -          + value = "LoadBalancer" -        } -    } - -Plan: 1 to add, 0 to change, 0 to destroy. - -Do you want to perform these actions? -  Terraform will perform the actions described above. -  Only 'yes' will be accepted to approve. - -  Enter a value: yes - -helm_release.my-kubernetes-dashboard: Creating... -helm_release.my-kubernetes-dashboard: Still creating... [10s elapsed] -helm_release.my-kubernetes-dashboard: Creation complete after 14s [id=my-kubernetes-dashboard] -``` - -(Remember to run `minikube tunnel` in another terminal window, otherwise the `apply` won't work). - -### Verifying our changes - -Let's check if our pods are up using `kubectl get po` and `kubectl get svc`: - - -``` -~ kubectl get po -NAME                                       READY   STATUS    RESTARTS   AGE -my-kubernetes-dashboard-7bc7ccfbd9-56w56   1/1     Running   0          18m -my-kubernetes-dashboard-7bc7ccfbd9-f6jc4   1/1     Running   0          18m - -~ kubectl get svc -NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE -kubernetes                ClusterIP      10.96.0.1        <none>           443/TCP        20m -my-kubernetes-dashboard   LoadBalancer   10.104.144.125   10.104.144.125   80:32066/TCP   19m -``` - -Our pods are deployed, and the load balancer is working. Now check the UI:  - -![Kubernetes Workloads dashboard][3] - -Figure 2: Kubernetes Workloads dashboard - -### Conclusion - -You can [find the examples from this article in my Gitlab repo][4]. - -With Helm provisioning now a part of Terraform, my work life is that much easier. I do realize that the separation between Infrastructure and Provisioning served a different purpose: Infrastructure changes were usually one-off or didn't require frequent updates, maybe a few times when governance or security rules for my org changed. Provisioning changes, on the other hand, frequently occurred, sometimes with every release. So having Terraform (Infrastructure) and Helm Charts (Provisioning) in two different repos with two different tools and two different review workflows made sense. I'm not sure merging them using a single tool is the best idea, but one less tool in the toolchain is always a huge win. I think the pros and cons of this will vary from one project to another and one team to another. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/terraform-deploy-helm - -作者:[Ayush Sharma][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ayushsharma -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas) -[2]: https://opensource.com/article/18/10/getting-started-minikube -[3]: https://opensource.com/sites/default/files/2021-07-12-terraform-plus-helm-a-match-made-in-heaven-hell-dashboard.png -[4]: https://gitlab.com/ayush-sharma/example-assets/-/tree/main/kubernetes/tf_helm diff --git a/sources/tech/20210810 How to get the most out of GitOps right now.md b/sources/tech/20210810 How to get the most out of GitOps right now.md deleted file mode 100644 index a02fa0e1a5..0000000000 --- a/sources/tech/20210810 How to get the most out of GitOps right now.md +++ /dev/null @@ -1,109 +0,0 @@ -[#]: subject: "How to get the most out of GitOps right now" -[#]: via: "https://opensource.com/article/21/8/gitops" -[#]: author: "Itiel Shwartz https://opensource.com/users/itielschwartz2021" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How to get the most out of GitOps right now -====== -GitOps is a great starting point to understand what is running in -production, but it may need a little more augmentation to get it working -just right for your engineering team. -![Team checklist and to dos][1] - -You may have encountered this brief introduction to GitOps shared by prevalent cloud software engineer, [Kelsey Hightower][2]: - -> GitOps: versioned CI/CD on top of declarative infrastructure. Stop scripting and start shipping. -> -> — Kelsey Hightower (@kelseyhightower) [January 17, 2018][3] - -In the world of [infrastructure as code][4], GitOps is a popular way to manage automated deployments through continuous integration/continuous development (CI/CD) and microservices architecture in general, as most of our infrastructure is essentially defined in config files today (e.g., YAML, JSON, HCL). This is not limited to Kubernetes (K8s), but it's often highly associated with K8s clusters. (I'll explain why in a second.) This basically means that changing anything in your production infrastructure is as simple as changing a line of code. - -The reason GitOps is so closely identified with K8s is that K8s is completely configured in declarative YAML, and therefore, you can quickly achieve the benefits of using GitOps as it is really just software-defined infrastructure. When it comes to properly applying GitOps in your engineering organization, the main thing you need to pay attention to is how you enforce changes to your cluster or infrastructure. - -When you choose the GitOps path, you can only do it through a single source of truth: your source-code management (SCM) repository (e.g., GitLab, GitHub, Bitbucket, or your own hosting solution) that enforces the version-control policy for your entire organization. This means the only way to make changes to your infrastructure is through a pull request in your repository. This is how version control is maintained at scale in large engineering organizations using GitOps. 
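-
-To make that concrete: in a Kubernetes-flavored GitOps setup, "changing production" usually means editing a single line in a declarative file and opening a pull request against that repository. The snippet below is a hypothetical illustration—the file name, keys, and values are invented for this example rather than taken from any real project—of the kind of object such a repo holds; the same pattern applies to image tags, replica counts, and anything else the cluster is reconciled against.
-
-```
-# config/payments-api.yaml (hypothetical file in the infrastructure repo)
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: payments-api-config
-  namespace: payments
-data:
-  LOG_LEVEL: "info"              # a pull request flipping this to "debug" is the GitOps way to change runtime config
-  FEATURE_NEW_CHECKOUT: "false"  # flags live in Git, so every toggle has an author, a reviewer, and a commit
-```
-
-Once the pull request is reviewed and merged, the GitOps agent syncs the new state to the cluster, and the merge commit doubles as the audit trail for the change.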
- -### The state of real-world deployments - -The GitOps doctrine claims to be the new and simpler way to achieve CI/CD, except that the CD part of CI/CD is a much more complex beast than GitOps practices would have you believe. With GitOps, the CD part breaks down to a very binary approach to engineering environments. You're either in staging or production, where you just flip the switch and your code is in production. In my years of experience as an engineer, I have yet to participate in a significant code change, feature rollout, or another major deployment that is that simple. - -There is plenty more legwork encapsulated in staging or production versioning completely abstracted from the CD process with GitOps. This means that any engineering process that takes quality seriously will have a few stages between the CI and CD phases of a major deployment. These include testing, validating results, verifying that changes propagated, retesting, and often doing partial rollouts (canary and such). These are just a few examples of how CD is managed in engineering organizations. - -#### GitOps tips for doing deployments better - -When it comes to GitOps, there's no need to reinvent the CI/CD (and particularly the CD) wheel. If you're like most people and achieve CI/CD by duct taping your CD process with some custom scripts before and after deployment to get it over the finish line, know there are better ways to do this with GitOps facilitators. Using GitOps facilitators such as the open source, Cloud Native Computing Foundation (CNCF)-hosted [Argo CD][5] enables users to take all those custom scripts and manage them at scale in a single place. This ensures best practices when using scripts in your CI/CD process, making them canonical and repeatable every time they run. - -What's more, since there is an agent that is continuously syncing state, it reduces human errors by enforcing the committed state. - -### Manage chaos across repositories with GitOps - -With complex deployment architectures such as K8s or even just plain old microservices, even small changes to the code often affect other interdependent services. Mapping these dependencies with GitOps tends to become a hellscape. Often with shared repos and files, you need to sync the state. However, what you'll also often find is that errors, misconfigurations, or even just bugs can create a [butterfly effect][6] that starts a cascade of failures that becomes extremely hard to track and understand in GitOps. - -One common method to solve this challenge with GitOps is to create a "super repo," which is essentially a centralized monorepo that contains pointers to all the relevant dependencies, files, resources, and such. However, this quickly becomes a messy garbage bag "catchall" of a repository, where it is extremely hard to understand, track, and log changes. - -When you have many dependencies, for this to work in GitOps, these dependencies need to be represented in Git. This requires your organization to be "Git native." This means you'll need to do a lot of duct-tape automation work to create modules and submodules to connect and correlate between your super repo and the relevant subrepos. Many times, this comes with a lot of maintenance overhead that becomes extremely difficult to maintain over time. - -If you don't do this, you're not achieving the benefits of GitOps, and you're mostly just stuck with the downsides. 
You could achieve similar capabilities through a YAML file that encapsulates all the versions and dependencies, similar to a Helm umbrella chart. Without going fully Git native, you could essentially be anything else—and not GitOps. - -While in the GitOps world, repos represent the single source of truth for environments, in practice, there are many third-party integrations in any given deployment. These integrations can be anything from your authentication and authorization (e.g., Auth0) to your database, which are, for the most part, updated externally to your repo. These changes to external resources, which could significantly impact your production and deployments, have no representation inside your single-source-of-truth repo at all. This could be a serious blind spot in your entire deployment. - -#### GitOps tips for managing chaos better - -When using GitOps, treat your configurations the same way you would treat your code. Don't scrimp on validation pipelines, ensure proper pull request hygiene, and maintain any other practices you apply when managing code at scale to avoid this chaos. Don't panic! If something incorrect gets pushed and you're concerned it will propagate to all servers, clusters, and repos, all you need to do is run `git revert`, and you can undo your last commit. - -Also, similar to my recommendation regarding syncing state, using GitOps facilitators can help with managing Git practices, being Git native, and handling Kubernetes deployments (as well as being Kubernetes native). - -Last, to avoid any disorder or complexity, ensure that your Git repository's state is as close as possible to your production environments to avoid any drift of your environments from your GitOps operation. - -### 3 tips for using GitOps - -Here are my tips for getting the most out of GitOps: - - 1. Make sure to build visibility into your GitOps automation early, so you're not running blind across your many repos. When it comes to making GitOps work optimally, you should work out of a single repo per application. When these start to add up, visibility can become a real pain point. Think about the dependencies and how to engineer enough visibility into the system, so if something goes wrong, you'll know how to track it down to its source and fix it. - 2. One way to do that is to plan for every kind of failure scenario. What happens when dependencies crash? When it comes to GitOps, merge conflicts are a way of life. How do you manage high-velocity deployments and promotions to production that can overwhelm a GitOps system? Think about the many potential challenges, failures, and conflicts, and have a playbook for each. Also, following up on the first point, make sure there is sufficient visibility for each to troubleshoot rapidly. And of course, don't forget the `git revert` command in the event failure happens. - 3. Use a monorepo. There, I said it. The age-old mono vs. multi-repo debate. When it comes to GitOps, there's no question which is the better choice. While a centralized monorepo has disadvantages (e.g., it can get messy, become a nightmare to understand build processes, etc.), it also can help solve a large majority of hassles with cross-repo dependencies. - - - -As an engineer, I felt this pain directly. I realized there's a pressing need for something to correlate these dependencies and visibility challenges I felt every single day of my GitOps life. 
- -I wanted a better solution for tracking and cascading failures in a complex microservices setup to a root cause or code change. Everything I had tried to date, including GitOps, provided only partial information, very little correlation, and almost no causation. - -GitOps tools (like Argo CD) help solve many issues that arise with DIY GitOps. Using such tools can be a good thing to consider when going down the GitOps route because they: - - * Are natively designed for Kubernetes - * Are suitable for small teams using image-puller - * Have strong community support (e.g., Argo CD through the CNCF, which is also easy to use with other Argo tools) - * Provide an improved developer experience with a good user interface for applications - * Natively integrate with Git, which helps minimize chaos and complexity - - - -### The bottom line - -Deployment processes, particularly with new versions, are a _complex_ engineering feat. To get these right, you need to invest effort in both the technology and design of the process. For example, what is the best way to deploy and validate my application in production? - -GitOps is a really good starting point to understand what is running in production. Just bear in mind that it may also need a little more augmentation with additional tools and DIY automation to get it working just right for your engineering team. This way, GitOps' shine is 24K rather than fool's gold for your organization. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/gitops - -作者:[Itiel Shwartz][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/itielschwartz2021 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos) -[2]: https://twitter.com/kelseyhightower -[3]: https://twitter.com/kelseyhightower/status/953638870888849408?ref_src=twsrc%5Etfw -[4]: https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac -[5]: https://argoproj.github.io/argo-cd/ -[6]: https://en.wikipedia.org/wiki/Butterfly_effect diff --git a/sources/tech/20210811 Build your own Fedora IoT Remix.md b/sources/tech/20210811 Build your own Fedora IoT Remix.md deleted file mode 100644 index 8a552bcb19..0000000000 --- a/sources/tech/20210811 Build your own Fedora IoT Remix.md +++ /dev/null @@ -1,294 +0,0 @@ -[#]: subject: "Build your own Fedora IoT Remix" -[#]: via: "https://fedoramagazine.org/build-your-own-fedora-iot-remix/" -[#]: author: "Alexander Wellbrock https://fedoramagazine.org/author/w4tsn/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Build your own Fedora IoT Remix -====== - -![][1] - -Background excerpted from photo by [S. Tsuchiya][2] on [Unsplash][3] - -Fedora IoT Edition is aimed at the Internet of Things. It was introduced in the article [How to turn on an LED][4] with Fedora IoT in 2018. It is based on [RPM-OSTree][5] as a core technology to gain some nifty properties and features which will be covered in a moment. 
- -RPM-OSTree is a high-level tool built on [libostree][6] which is a set of tools establishing a “git-like” model for committing and exchanging filesystem trees, deployment of said trees, bootloader configuration and layered RPM package management. Such a system benefits from the following properties: - - * Transactional upgrade and rollback - * Read-only filesystem areas - * Potentially small updates through deltas - * Branching, including rebase and multiple deployments - * Reproducible filesystem - * Specification of filesystem through version-controlled code - - - -Exchange of filesystem trees and corresponding commits is done through OSTree repositories or remotes. When using one of the Fedora Editions based on RPM-OSTree there are remotes from which the system downloads commits and applies them, rather than downloading and installing separate RPMs. - -A [Remix][7] in the Fedora ecosystem is an altered, opinionated version of the OS. It covers the needs of a specific niche. This article will dive into the world of building your own filesystem commits based on Fedora IoT Edition. You will become acquainted to the tools, terminology, design and processes of such a system. If you follow the directions in this guide you will end up with your own Fedora IoT Remix. - -### Preparations - -You will need some packages to get started. On non-ostree systems install the packages _ostree_ and _rpm-ostree_. Both are available in the Fedora Linux package repositories. Additionally install _git_ to access the Fedora IoT ostree spec sources. - -``` -sudo dnf install ostree rpm-ostree git -``` - -Assuming you have a spare, empty folder laying around to work with, start there by creating some files and folders that will be needed along the way. - -``` -mkdir .cache .build-repo .deploy-repo .tmp custom -``` - -The _.cache_ directory is used by all build commands around rpm-ostree. The folders _build_ and _deploy_ store separate repositories to keep the build environment separate from the actual remix. The _.tmp_ directory is used to combine the git-managed upstream sources (from Fedora IoT, for example) with modifications kept in the _custom_ directory. - -As you build your own OSTree as derivative from Fedora IoT you will need the sources. Clone them into the folder _.fedora-iot-spec_. They contain several configuration files specifying how the ostree filesystem for Fedora IoT is built, what packages to include, etc. - -``` -git clone -b "f34" https://pagure.io/fedora-iot/ostree.git .fedora-iot-spec -``` - -#### OSTree repositories - -Create repositories to build and store an OSTree filesystem and its contents . A place to store commits and manage their metadata. Wait, what? What is an OSTree commit anyway? Glad you ask! With _rpm-ostree_ you build so-called _libostree commits_. The terminology is roughly based on git. They essentially work in similar ways. Those commits store diffs from one state of the filesystem to the next. If you change a binary blob inside the tree, the commit contains this change. You can deploy this specific version of the filesystem at any time. - -Use the _ostree init_ command to create two _ostree repositories_. - -``` -ostree --repo=".build-repo" init --mode=bare-user -ostree --repo=".deploy-repo" init --mode=archive -``` - -The main difference between the repositories is their mode. Create the build repository in “bare-user” mode and the “production” repository in “archive” mode. The _bare*_ mode is well suited for build environments. 
The “user” portion additionally allows non-root operation and storing extended attributes. Create the other repository in _archive_ mode. It stores objects compressed; making them easy to move around. If all that doesn’t mean a thing to you, don’t worry. The specifics don’t matter for your primary goal here – to build your own Remix. - -Let me share just a little anecdote on this: When I was working on building ostree-based systems on GitLab CI/CD pipelines and we had to move the repositories around different jobs, we once tried to move them uncompressed in _bare-user_ mode via caches. We learned that, while this works with _archive_ repos, it does not with _bare*_ repos. Important filesystem attributes will get corrupted on the way. - -#### Custom flavor - -What’s a Remix without any customization? Not much! Create some configuration files as adjustment for your own OS. Assuming you want to deploy the Remix on a system with a hardware watchdog (a [Raspberry Pi][8], for example) start with a watchdog configuration file: - -``` -./custom/watchdog.conf -watchdog-device = /dev/watchdog -max-load-1 = 24 -max-load-15 = 9 -realtime = yes -priority = 1 -watchdog-timeout = 15 # Broadcom BCM2835 limitation -``` - -The _postprocess-script_ is an arbitrary shell script executed inside the target filesystem tree as part of the build process. It allows for last-minute customization of the filesystem in a restricted and (by default) network-less environment. It’s a good place to ensure the correct file permissions are set for the custom watchdog configuration file. - -``` -./custom/treecompose-post.sh -#!/bin/sh - -set -e - -# Prepare watchdog -chown root:root /etc/watchdog.conf -chmod 0644 /etc/watchdog.conf -``` - -#### Plant a Treefile - -Fedora IoT is pretty minimal and keeps its main focus on security and best-practices. The rest is up to you and your use-case. As a consequence, the watchdog package is not provided from the get-go. In RPM-OSTree the spec file is called [Treefile][9] and encoded in [JSON][10]. In the _Treefile_ you specify what packages to install, files and folders to exclude from packages, _configuration files_ to add to the _filesystem tree_ and _systemd units_ to enable by default. - -``` -./custom/treefile.json -{ - "ref": "OSTreeBeard/stable/x86_64", - "ex-jigdo-spec": "fedora-iot.spec", - "include": "fedora-iot-base.json", - "boot-location": "modules", - "packages": [ - "watchdog" - ], - "remove-files": [ - "etc/watchdog.conf" - ], - "add-files": [ - ["watchdog.conf", "/etc/watchdog.conf"] - ], - "units": [ - "watchdog.service" - ], - "postprocess-script": "treecompose-post.merged.sh" -} -``` - -The _ref_ is basically the branch name within the repository. Use it to refer to this specific spec in _rpm-ostree_ operations. With _ex-jigdo-spec_ and _include_ you link this _Treefile_ to the configuration of the _Fedora IoT sources_. Additionally specify the _Fedora Updates repo_ in the _repos_ section. It is not part of the sources so you will have to add that yourself. More on that in a moment. - -With _packages_ you instruct _rpm-ostree_ to install the _watchdog_ package. Exclude the _watchdog.conf_ file and replace it with the one from the _custom_ directory by using _remove-files_ and _add-files_. Now just enable the _watchdog.service_ and you are good to go. - -All available treefile options are available in the [official RPM-OSTree documentation][11]. 
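-
-Because the treefile is plain JSON, a stray comma or a missing quote only shows up once the compose starts. An optional sanity check before building, assuming the file sits in the _custom_ directory as above, can save you a failed run:
-
-```
-python3 -m json.tool custom/treefile.json > /dev/null && echo "treefile.json is valid JSON"
-```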
- -#### Add another RPM repository - -In it’s initial configuration the OSTree only uses the initial Fedora 34 package repository. Add the Fedora 34 Updates repository as well. To do so, add the following file to your _custom_ directory. - -``` -./custom/fedora-34-updates.repo -[fedora-34-updates] -name=Fedora 34 - $basearch - Updates -#baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/ -metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f34&arch=$basearch -enabled=1 -repo_gpgcheck=0 -type=rpm -gpgcheck=1 -#metadata_expire=7d -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-34-$basearch -skip_if_unavailable=False -``` - -Now tell rpm-ostree in the spec for your Remix to include this repository. Use the _treefile_‘s _repos_ section. - -``` -./custom/treefile.json -{ - ... - "repos": [ - "fedora-34", - "fedora-34-updates" - ], - ... -} -``` - -### Build your own Fedora IoT Remix - -You have all that need to build your first ostree based filesystem. By now you setup a certain project structure, downloaded the Fedora IoT upstream specs, and added some customization and initialized the ostree repositories. All you need to do now is throw everything together and create a nicely flavored Fedora IoT Remix salsa. - -``` -cp ./.fedora-iot-spec/* .tmp/ -cp ./custom/* .tmp/ -``` - -Combine the _postprocessing-scripts_ of the _Fedora IoT upstream sources_ and your _custom_ directory. - -``` -cat "./.fedora-iot-spec/treecompose-post.sh" "./custom/treecompose-post.sh" > ".tmp/treecompose-post.merged.sh" -chmod +x ".tmp/treecompose-post.merged.sh" -``` - -Remember that you specified _treecompose-post.merged.sh_ as your post-processing script earlier in _treefile.json_? That’s where this file comes from. - -Note that all the files – systemd units, scripts, configurations – mentioned in _ostree.json_ are now available in _.tmp_. This folder is the build context that all the references are relative to. - -You are only one command away from kicking off your first build of a customized Fedora IoT. Now, kick-of the build with _rpm-ostree compose tree_ command. Now grab a cup of coffee, enjoy and wait for the build to finish. That may take between 5 to 10 minutes depending on your host hardware. See you later! - -``` -sudo rpm-ostree compose tree --unified-core --cachedir=".cache" --repo=".build-repo" --write-commitid-to="$COMMIT_FILE" ".tmp/treefile.json" -``` - -#### Prepare for deployment - -Oh, erm, you are back already? Ehem. Good! – The _.build-repo_ now stores a complete filesystem tree of around 700 to 800 MB of compressed data. The last thing to do before you consider putting this on the network and deploying it on your device(s) (at least for now) is to add a _commit_ with an arbitrary _commit subject_ and _metadata_ and to pull the result over to the _deploy-repo_. - -``` -sudo ostree --repo=".deploy-repo" pull-local ".build-repo" "OSTreeBeard/stable/x86_64" -``` - -The _deploy-repo_ can now be placed on any file-serving webserver and then used as a new _ostree remote_ … theoretically. I won’t go through the topic of security for ostree remotes just yet. As an initial advise though: Always sign OSTree commits with GPG to ensure the authenticity of your updates. Apart from that it’s only a matter of adding the remote configuration on your target and using _rpm-ostree rebase_ to switch over to this Remix. 
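-
-As a rough sketch of that switch-over: the server URL and remote name below are placeholders, and in a real setup you would sign your commits and drop the --no-gpg-verify flag.
-
-```
-# On the build machine: serve the deploy repo for a quick local test
-# (any static file server works; this one is only for testing)
-cd .deploy-repo && python3 -m http.server 8000
-
-# On the target device: add the remote and rebase onto the Remix
-sudo ostree remote add --no-gpg-verify ostreebeard http://buildhost.example.com:8000/
-sudo rpm-ostree rebase ostreebeard:OSTreeBeard/stable/x86_64
-sudo systemctl reboot
-```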
- -As a final thing before you leave to do outside stuff (like with fresh air, sun, ice-cream or whatever), take a look around the newly built filesystem to ensure that everything is in place. - -#### Explore the filesystem - -Use _ostree refs_ to list available refs in the repo or on your system. - -``` -$ ostree --repo=".deploy-repo" refs -OSTreeBeard/stable/x86_64 -``` - -Take a look at the commits of a ref with _ostree log_. - -``` -$ ostree --repo=".deploy-repo" log OSTreeBeard/stable/x86_64 -commit 849c0648969c8c2e793e5d0a2f7393e92be69216e026975f437bdc2466c599e9 -ContentChecksum: bcaa54cc9d8ffd5ddfc86ed915212784afd3c71582c892da873147333e441b26 -Date: 2021-07-27 06:45:36 +0000 -Version: 34 -(no subject) -``` - -List the ostree filesystem contents with _ostree ls_. - -``` -$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64 -d00755 0 0 0 / -l00777 0 0 0 /bin -> usr/bin -l00777 0 0 0 /home -> var/home -l00777 0 0 0 /lib -> usr/lib -l00777 0 0 0 /lib64 -> usr/lib64 -l00777 0 0 0 /media -> run/media -l00777 0 0 0 /mnt -> var/mnt -l00777 0 0 0 /opt -> var/opt -l00777 0 0 0 /ostree -> sysroot/ostree -l00777 0 0 0 /root -> var/roothome -l00777 0 0 0 /sbin -> usr/sbin -l00777 0 0 0 /srv -> var/srv -l00777 0 0 0 /tmp -> sysroot/tmp -d00755 0 0 0 /boot -d00755 0 0 0 /dev -d00755 0 0 0 /proc -d00755 0 0 0 /run -d00755 0 0 0 /sys -d00755 0 0 0 /sysroot -d00755 0 0 0 /usr -d00755 0 0 0 /var -$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64 /usr/etc/watchdog.conf --00644 0 0 208 /usr/etc/watchdog.conf -``` - -Take note that the _watchdog.conf_ file is located under _/usr/etc/watchdog.conf_. On booted deployment this is located at _/etc/watchdog.conf_ as usual. - -### Where to go from here? - -You took a brave step in building a customized Fedora IoT on your local machine. First I introduced you the concepts and vocabulary so you could understand where you were at and where you wanted to go. You then ensured all the tools were in place. You looked at the ostree repository modes and mechanics before analyzing a typical _ostree configuration_. To spice it up and make it a bit more interesting you made an additional service and configuration ready to role out on your device(s). To do that you added the Fedora Updates RPM repository and then kicked off the build process. Last but not least, you packaged the result up in a format ready to be placed somewhere on the network. - -There are a lot more topics to cover. I could explain how to configure an NGINX to serve ostree remotes effectively. Or how to ensure the security and authenticity of the filesystem and updates through GPG signatures. Also, how one manually alters the filesystem and what tooling is available for building the filesystem. There is also more to be explained about how to test the Remix and how to build flashable images and installation media. - -Let me know in the comments what you think and what you care about. Tell me what you’d like to read next. If you already built Fedora IoT, I’m happy to read your stories too. 
- -### References - - * [Fedora IoT documentation][12] - * [libostree documentation][13] - * [rpm-ostree documentation][5] - - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/build-your-own-fedora-iot-remix/ - -作者:[Alexander Wellbrock][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/w4tsn/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/rpi-816x345.jpg -[2]: https://unsplash.com/@s_tsuchiya?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/raspberry-pi?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/turnon-led-fedora-iot/ -[5]: https://coreos.github.io/rpm-ostree/ -[6]: https://ostreedev.github.io/ostree/ -[7]: https://fedoraproject.org/wiki/Remix -[8]: https://en.wikipedia.org/wiki/Raspberry_Pi -[9]: https://rpm-ostree.readthedocs.io/en/stable/manual/treefile/ -[10]: https://en.wikipedia.org/wiki/JSON -[11]: https://coreos.github.io/rpm-ostree/treefile/ -[12]: https://docs.fedoraproject.org/en-US/iot/ -[13]: https://ostreedev.github.io/ostree/introduction/ diff --git a/sources/tech/20210812 Automatically create multiple applications in Argo CD.md b/sources/tech/20210812 Automatically create multiple applications in Argo CD.md deleted file mode 100644 index 31e0b9281c..0000000000 --- a/sources/tech/20210812 Automatically create multiple applications in Argo CD.md +++ /dev/null @@ -1,246 +0,0 @@ -[#]: subject: "Automatically create multiple applications in Argo CD" -[#]: via: "https://opensource.com/article/21/7/automating-argo-cd" -[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Automatically create multiple applications in Argo CD -====== -In this tutorial, I will show you how to automatically create multiple -applications in Argo CD using Argo CD. -![gears and lightbulb to represent innovation][1] - -In a previous article, I demonstrated how [Argo CD makes pull-based GitOps deployments simple][2]. In this tutorial, I’ll show you how to automatically create multiple applications in Argo CD using Argo CD itself. - -Since Argo CD’s job is to listen to a repo and apply the Manifest files it finds to the cluster, you can use this approach to configure Argo CD internals as well. In my previous example, I used the GUI to create a sample Nginx application with three replicas. This time, I use the same approach as before, but I create an application from the GUI to deploy three separate applications: `app-1`, `app-2`, and `app-3`. - -### Configuring our child applications - -First, start by creating the Manifest files for your three applications. In my `example-assets` [repository][3], I have [created three applications][4] under `argocd/my-apps`. All three applications are Nginx with three replicas. Be sure to create each application in its own folder. 
- -Create a [YAML file][5] to define the first application and save it as `my-apps/app-1/app.yml`: - - -``` -apiVersion: apps/v1 -kind: Deployment -metadata: -  name: nginx-app-1 -  labels: -    app: nginx-app-1 -spec: -  replicas: 3 -  selector: -    matchLabels: -      app: nginx-app-1 -  template: -    metadata: -      labels: -        app: nginx-app-1 -    spec: -      containers: -      - name: nginx -        image: nginx:latest -        ports: -        - containerPort: 80 -``` - -Create another one for your second application and save it as `my-apps/app-2/app.yml`: - - -``` -apiVersion: apps/v1 -kind: Deployment -metadata: -  name: nginx-app-2 -  labels: -    app: nginx-app-2 -spec: -  replicas: 3 -  selector: -    matchLabels: -      app: nginx-app-2 -  template: -    metadata: -      labels: -        app: nginx-app-2 -    spec: -      containers: -      - name: nginx -        image: nginx:latest -        ports: -        - containerPort: 80 -``` - -Create a third for your final app and save it as `my-apps/app-3/app.yml`: - - -``` -apiVersion: apps/v1 -kind: Deployment -metadata: -  name: nginx-app-3 -  labels: -    app: nginx-app-3 -spec: -  replicas: 3 -  selector: -    matchLabels: -      app: nginx-app-3 -  template: -    metadata: -      labels: -        app: nginx-app-3 -    spec: -      containers: -      - name: nginx -        image: nginx:latest -        ports: -        - containerPort: 80 -``` - -Now that your Manifest files are ready, you must create Argo CD Applications pointing to those Manifests. - -Argo CD can be configured in three different ways: using the GUI, using the CLI, or using Kubernetes Manifest files. In this article, I use the third method. - -Create the following Manifest files in a new folder `argocd/argo-apps`. This is `argocd-apps/app-1.yml`: - - -``` -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: -  name: my-app-1 -  namespace: argocd -  finalizers: -  - resources-finalizer.argocd.argoproj.io -spec: -  destination: -    namespace: argocd -    server: -  project: default -  source: -    path: argocd/my-apps/app-1 -    repoURL: -    targetRevision: HEAD -``` - -This is `argocd-apps/app-2.yml`: - - -``` -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: -  name: my-app-2 -  namespace: argocd -  finalizers: -  - resources-finalizer.argocd.argoproj.io -spec: -  destination: -    namespace: argocd -    server: -  project: default -  source: -    path: argocd/my-apps/app-2 -    repoURL: -    targetRevision: HEAD -``` - -And this is `argocd-apps/app-3.yml`: - - -``` -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: -  name: my-app-3 -  namespace: argocd -  finalizers: -  - resources-finalizer.argocd.argoproj.io -spec: -  destination: -    namespace: argocd -    server: -  project: default -  source: -    path: argocd/my-apps/app-3 -    repoURL: -    targetRevision: HEAD -``` - -As you can see, you are creating a Kubernetes object called `Application` in the `argocd` namespace. This object contains the source Git repository and destination server details. Your Applications are pointing to the Nginx manifest files you created earlier. - -### Configuring our main application - -Now you need some way to tell Argo CD how to find your three Nginx applications. Do this by creating yet another Application. This pattern is called the `App of Apps` pattern, where one Application contains the instructions to deploy multiple child Applications. 
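-
-Before creating the parent Application, make sure the manifests above have actually been committed and pushed, because Argo CD only sees what is in the repository. The paths follow this article's layout and the branch name is only an assumption:
-
-```
-git add argocd/my-apps argocd/argocd-apps
-git commit -m "Add app-1, app-2, app-3 and their Argo CD Applications"
-git push origin main
-```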
- -Create a new Application from the GUI called `my-apps` with the following configuration: - - -``` -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: -  name: my-apps -spec: -  destination: -    namespace: default -    server: '' -  source: -    path: argocd/argocd-apps -    repoURL: '' -    targetRevision: HEAD -  project: default -  syncPolicy: -    automated: -      prune: true -      selfHeal: true -``` - -Once it has been created, `my-apps` begins syncing in the GUI: - -![Automating ArgoCD with ArgoCD! - Main app.][6] - -Figure 1: Automating ArgoCD with ArgoCD! - Main app. - -After the sync is complete, your three Nginx applications appear in the GUI as well: - -![Automating ArgoCD with ArgoCD! - Dashboard.][7] - -Figure 2: Automating ArgoCD with ArgoCD! - Dashboard. - -Since you didn't enable `AutoSync`, manually sync `app-1`, `app-2`, and `app-3`. Once synced, your Nginx replicas are deployed for all three apps. - -![Automating ArgoCD with ArgoCD! - Deployment.][8] - -Figure 3: Automating ArgoCD with ArgoCD! - Deployment. - -### Conclusion - -Mastering the `App of Apps` pattern is critical to leveraging the full power of Argo CD. This method allows you to manage groups of applications cleanly. For example, deploying Prometheus, Grafana, Loki, and other vital services could be managed by a DevOps Application, while deploying frontend code could be managed by a Frontend Application. Configuring different sync options and repo locations for each gives you precise control over different application groups. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/automating-argo-cd - -作者:[Ayush Sharma][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ayushsharma -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) -[2]: https://opensource.com/article/21/8/argo-cd -[3]: https://gitlab.com/ayush-sharma/example-assets -[4]: https://gitlab.com/ayush-sharma/example-assets/-/tree/main/argocd/my-apps -[5]: https://www.redhat.com/sysadmin/yaml-beginners -[6]: https://opensource.com/sites/default/files/1automating-argocd-with-argocd-main-app_0.png -[7]: https://opensource.com/sites/default/files/2automating-argocd-with-argocd-dashboard.png -[8]: https://opensource.com/sites/default/files/3automating-argocd-with-argocd-deployment.png diff --git a/sources/tech/20210813 Use dnf updateinfo to read update changelogs.md b/sources/tech/20210813 Use dnf updateinfo to read update changelogs.md deleted file mode 100644 index 02baa78e22..0000000000 --- a/sources/tech/20210813 Use dnf updateinfo to read update changelogs.md +++ /dev/null @@ -1,304 +0,0 @@ -[#]: subject: "Use dnf updateinfo to read update changelogs" -[#]: via: "https://fedoramagazine.org/use-dnf-updateinfo-to-read-update-changelogs/" -[#]: author: "Mateus Rodrigues Costa https://fedoramagazine.org/author/mateusrodcosta/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Use dnf updateinfo to read update changelogs -====== - -![][1] - -Cover image background excerpted from photo by [Fotis Fotopoulos][2] on [Unsplash][3] - -This article will explore 
how to check the changelogs for the Fedora Linux operating system using the command line and _dnf updateinfo_. Instead of showing the commands running on a real Fedora Linux install, this article will demo running the dnf commands in [toolbox][4].
-
-### Introduction
-
-If you have used any type of computer recently (be it a desktop, laptop or even a smartphone), you most likely have had to deal with software updates. You might have an opinion about them. They might be a “necessary evil”, something that always breaks your setup and makes you waste hours fixing the new problems that appeared, or you might even like them.
-
-No matter your opinion, there are reasons to update your software: mainly bug fixes, especially security-related bug fixes. After all, you most likely don’t want someone getting your private data by exploiting a bug that happens because of an interaction between the code of your web browser and the code that renders text on your screen.
-
-If you manage your software updates in a manual or semi-manual fashion (in comparison to letting the operating system auto-update your software), one feature you should be aware of is “changelogs”.
-
-A changelog is, as the name hints, a big list of changes between two releases of the same software. The changelog content can vary a lot. It may depend on the team, the type of software, its importance, and the number of changes. It can range from a very simple “several small bugs were fixed in this release”-type message, to a list of links to the bugs fixed on an issue tracker with a small description, to a big and detailed list of changes or elaborate blog posts.
-
-Now, how do you check the changelogs for the updates?
-
-If you use Fedora Workstation, the easy way to see the changelog with a GUI is with Gnome Software. Select the name of the package or name of the software on the updates page and the changelog is displayed. You could also try your favorite GUI package manager, which will most likely show it to you as well. But how does one do the same thing via CLI?
-
-### How to use dnf updateinfo
-
-Start by creating a Fedora 34 toolbox called _updateinfo-demo_:
-
-```
-toolbox create --distro fedora --release f34 updateinfo-demo
-```
-
-Now, enter the toolbox:
-
-```
-toolbox enter updateinfo-demo
-```
-
-The commands from here on can also be used on a normal Fedora install.
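-
-Once you are done experimenting later on, you can leave the toolbox by typing exit and, if you want, remove the container again (the --force switch also handles a container that is still running in the background):
-
-```
-exit
-toolbox rm --force updateinfo-demo
-```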
- -First, check the updates available: - -``` -$ dnf check-update -audit-libs.x86_64 3.0.3-1.fc34 updates -ca-certificates.noarch 2021.2.50-1.0.fc34 updates -coreutils.x86_64 8.32-30.fc34 updates -coreutils-common.x86_64 8.32-30.fc34 updates -curl.x86_64 7.76.1-7.fc34 updates -dnf.noarch 4.8.0-1.fc34 updates -dnf-data.noarch 4.8.0-1.fc34 updates -expat.x86_64 2.4.1-1.fc34 updates -file-libs.x86_64 5.39-6.fc34 updates -glibc.x86_64 2.33-20.fc34 updates -glibc-common.x86_64 2.33-20.fc34 updates -glibc-minimal-langpack.x86_64 2.33-20.fc34 updates -krb5-libs.x86_64 1.19.1-14.fc34 updates -libcomps.x86_64 0.1.17-1.fc34 updates -libcurl.x86_64 7.76.1-7.fc34 updates -libdnf.x86_64 0.63.1-1.fc34 updates -libeconf.x86_64 0.4.0-1.fc34 updates -libedit.x86_64 3.1-38.20210714cvs.fc34 updates -libgcrypt.x86_64 1.9.3-3.fc34 updates -libidn2.x86_64 2.3.2-1.fc34 updates -libmodulemd.x86_64 2.13.0-1.fc34 updates -librepo.x86_64 1.14.1-1.fc34 updates -libsss_idmap.x86_64 2.5.2-1.fc34 updates -libsss_nss_idmap.x86_64 2.5.2-1.fc34 updates -libuser.x86_64 0.63-4.fc34 updates -libxcrypt.x86_64 4.4.23-1.fc34 updates -nano.x86_64 5.8-3.fc34 updates -nano-default-editor.noarch 5.8-3.fc34 updates -nettle.x86_64 3.7.3-1.fc34 updates -openldap.x86_64 2.4.57-5.fc34 updates -pam.x86_64 1.5.1-6.fc34 updates -python-setuptools-wheel.noarch 53.0.0-2.fc34 updates -python-unversioned-command.noarch 3.9.6-2.fc34 updates -python3.x86_64 3.9.6-2.fc34 updates -python3-dnf.noarch 4.8.0-1.fc34 updates -python3-hawkey.x86_64 0.63.1-1.fc34 updates -python3-libcomps.x86_64 0.1.17-1.fc34 updates -python3-libdnf.x86_64 0.63.1-1.fc34 updates -python3-libs.x86_64 3.9.6-2.fc34 updates -python3-setuptools.noarch 53.0.0-2.fc34 updates -sssd-client.x86_64 2.5.2-1.fc34 updates -systemd.x86_64 248.6-1.fc34 updates -systemd-libs.x86_64 248.6-1.fc34 updates -systemd-networkd.x86_64 248.6-1.fc34 updates -systemd-pam.x86_64 248.6-1.fc34 updates -systemd-rpm-macros.noarch 248.6-1.fc34 updates -vim-minimal.x86_64 2:8.2.3182-1.fc34 updates -xkeyboard-config.noarch 2.33-1.fc34 updates -yum.noarch 4.8.0-1.fc34 updates -``` - -OK, so run your first _dnf updateinfo_ command: - -``` -$ dnf updateinfo -Updates Information Summary: available - 5 Security notice(s) - 4 Moderate Security notice(s) - 1 Low Security notice(s) - 11 Bugfix notice(s) - 8 Enhancement notice(s) - 3 other notice(s) -``` - -This is the summary of updates. As you can see there are security updates, bugfix updates, enhancement updates and some which are not specified. - -Look at the list of updates and which types they belong to: - -``` -$ dnf updateinfo list -FEDORA-2021-e4866762d8 enhancement audit-libs-3.0.3-1.fc34.x86_64 -FEDORA-2021-1f32e18471 bugfix ca-certificates-2021.2.50-1.0.fc34.noarch -FEDORA-2021-b09e010a46 bugfix coreutils-8.32-30.fc34.x86_64 -FEDORA-2021-b09e010a46 bugfix coreutils-common-8.32-30.fc34.x86_64 -FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix dnf-4.8.0-1.fc34.noarch -FEDORA-2021-3b74285c43 bugfix dnf-data-4.8.0-1.fc34.noarch -FEDORA-2021-523ee0a81e enhancement expat-2.4.1-1.fc34.x86_64 -FEDORA-2021-07625b9c81 unknown file-libs-5.39-6.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-common-2.33-20.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-minimal-langpack-2.33-20.fc34.x86_64 -FEDORA-2021-8b25e4642f Low/Sec. 
krb5-libs-1.19.1-14.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix libcomps-0.1.17-1.fc34.x86_64 -FEDORA-2021-83fdddca0f Moderate/Sec. libcurl-7.76.1-7.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix libdnf-0.63.1-1.fc34.x86_64 -FEDORA-2021-ca22b882a5 enhancement libeconf-0.4.0-1.fc34.x86_64 -FEDORA-2021-f9c139edd8 bugfix libedit-3.1-38.20210714cvs.fc34.x86_64 -FEDORA-2021-31fdc84207 Moderate/Sec. libgcrypt-1.9.3-3.fc34.x86_64 -FEDORA-2021-bc56cf7c1f enhancement libidn2-2.3.2-1.fc34.x86_64 -FEDORA-2021-da2ec14d7f bugfix libmodulemd-2.13.0-1.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix librepo-1.14.1-1.fc34.x86_64 -FEDORA-2021-1db6330a22 unknown libsss_idmap-2.5.2-1.fc34.x86_64 -FEDORA-2021-1db6330a22 unknown libsss_nss_idmap-2.5.2-1.fc34.x86_64 -FEDORA-2021-8226c82fe9 bugfix libuser-0.63-4.fc34.x86_64 -FEDORA-2021-e6916d6758 bugfix libxcrypt-4.4.22-2.fc34.x86_64 -FEDORA-2021-fed4036fd9 bugfix libxcrypt-4.4.23-1.fc34.x86_64 -FEDORA-2021-3122d2b8d2 unknown nano-5.8-3.fc34.x86_64 -FEDORA-2021-3122d2b8d2 unknown nano-default-editor-5.8-3.fc34.noarch -FEDORA-2021-d1fc0b9d32 Moderate/Sec. nettle-3.7.3-1.fc34.x86_64 -FEDORA-2021-97949d7a4e bugfix openldap-2.4.57-5.fc34.x86_64 -FEDORA-2021-e6916d6758 bugfix pam-1.5.1-6.fc34.x86_64 -FEDORA-2021-07931f7f08 bugfix python-setuptools-wheel-53.0.0-2.fc34.noarch -FEDORA-2021-2056ce89d9 enhancement python-unversioned-command-3.9.6-1.fc34.noarch -FEDORA-2021-d613e00b72 enhancement python-unversioned-command-3.9.6-2.fc34.noarch -FEDORA-2021-2056ce89d9 enhancement python3-3.9.6-1.fc34.x86_64 -FEDORA-2021-d613e00b72 enhancement python3-3.9.6-2.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix python3-dnf-4.8.0-1.fc34.noarch -FEDORA-2021-3b74285c43 bugfix python3-hawkey-0.63.1-1.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix python3-libcomps-0.1.17-1.fc34.x86_64 -FEDORA-2021-3b74285c43 bugfix python3-libdnf-0.63.1-1.fc34.x86_64 -FEDORA-2021-2056ce89d9 enhancement python3-libs-3.9.6-1.fc34.x86_64 -FEDORA-2021-d613e00b72 enhancement python3-libs-3.9.6-2.fc34.x86_64 -FEDORA-2021-07931f7f08 bugfix python3-setuptools-53.0.0-2.fc34.noarch -FEDORA-2021-1db6330a22 unknown sssd-client-2.5.2-1.fc34.x86_64 -FEDORA-2021-3141f0eff1 bugfix systemd-248.6-1.fc34.x86_64 -FEDORA-2021-3141f0eff1 bugfix systemd-libs-248.6-1.fc34.x86_64 -FEDORA-2021-3141f0eff1 bugfix systemd-networkd-248.6-1.fc34.x86_64 -FEDORA-2021-3141f0eff1 bugfix systemd-pam-248.6-1.fc34.x86_64 -FEDORA-2021-3141f0eff1 bugfix systemd-rpm-macros-248.6-1.fc34.noarch -FEDORA-2021-b8b1f6e54f enhancement vim-minimal-2:8.2.3182-1.fc34.x86_64 -FEDORA-2021-67645ae09f enhancement xkeyboard-config-2.33-1.fc34.noarch -FEDORA-2021-3b74285c43 bugfix yum-4.8.0-1.fc34.noarch -``` - -The output is in three columns. These show the ID for an update, the type of the update, and the package to which it refers. - -If you want to see the Bodhi page for a specific update, just add the id to the end of this URL: -. - -For example, for _systemd-248.6-1.fc34.x86_64_ or for _coreutils-8.32-30.fc34.x86_64_. - -The next command will list the actual changelog. - -``` -dnf updateinfo info -``` - -The output from this command is quite long. So only a few interesting excerpts are provided below. 
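-
-If the full output is more than you need, you can usually narrow it down to a single package or to security notices only:
-
-```
-dnf updateinfo info curl
-dnf updateinfo list --security
-```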
- -Start with a small one: - -``` -=============================================================================== - ca-certificates-2021.2.50-1.0.fc34 -=============================================================================== - Update ID: FEDORA-2021-1f32e18471 - Type: bugfix - Updated: 2021-06-18 22:08:02 -Description: Update the ca-certificates list to the lastest upstream list. - Severity: Low -``` - -Notice how this info has the update ID, type, updated time, description and severity. Very simple and easy to understand. - -Now look at the _systemd_ update which, in addition to the previous items, has some bugs associated with it in Red Hat Bugzilla, a more elaborate description, and a different severity. - -``` -=============================================================================== - systemd-248.6-1.fc34 -=============================================================================== - Update ID: FEDORA-2021-3141f0eff1 - Type: bugfix - Updated: 2021-07-24 22:00:30 - Bugs: 1963428 - if keyfile >= 1024*4096-1 service "systemd-cryptsetup@" can't start - : 1965815 - 50-udev-default.rules references group "sgx" which does not exist - : 1975564 - systemd-cryptenroll SIGABRT when adding recovery key - buffer overflow - : 1984651 - systemd[1]: Assertion 'a <= b' failed at src/libsystemd/sd-event/sd-event.c:2903, function sleep_between(). Aborting. -Description: - Create 'sgx' group (and also use soft-static uids for input and render, see https://pagure.io/setup/c/df3194a7295c2ca3cfa923981b046f4bd2754825 and https://pagure.io/packaging-committee/issue/1078 (#1965815) - : - Various bugfixes (#1963428, #1975564) - : - Fix for a regression introduced in the previous release with sd-event abort (#1984651) - : - : No need to log out or reboot. - Severity: Moderate -``` - -Next look at a _curl_ update. This has a security update with several [CVE][5]s associated with it. Each CVE has its respective Red Hat Bugzilla bug. - -``` -=============================================================================== - curl-7.76.1-7.fc34 -=============================================================================== - Update ID: FEDORA-2021-83fdddca0f - Type: security - Updated: 2021-07-22 22:03:07 - Bugs: 1984325 - CVE-2021-22922 curl: wrong content via metalink is not being discarded [fedora-all] - : 1984326 - CVE-2021-22923 curl: Metalink download sends credentials [fedora-all] - : 1984327 - CVE-2021-22924 curl: bad connection reuse due to flawed path name checks [fedora-all] - : 1984328 - CVE-2021-22925 curl: Incorrect fix for CVE-2021-22898 TELNET stack contents disclosure [fedora-all] -Description: - fix TELNET stack contents disclosure again (CVE-2021-22925) - : - fix bad connection reuse due to flawed path name checks (CVE-2021-22924) - : - disable metalink support to fix the following vulnerabilities - : CVE-2021-22923 - metalink download sends credentials - : CVE-2021-22922 - wrong content via metalink not discarded - Severity: Moderate -``` - -This item shows a simple enhancement update. - -``` -=============================================================================== - python3-docs-3.9.6-1.fc34 python3.9-3.9.6-1.fc34 -=============================================================================== - Update ID: FEDORA-2021-2056ce89d9 - Type: enhancement - Updated: 2021-07-08 22:00:53 -Description: Update of Python 3.9 and python3-docs to latest release 3.9.6 - Severity: None -``` - -Finally an “unknown” type update. 
- -``` -=============================================================================== - file-5.39-6.fc34 -=============================================================================== - Update ID: FEDORA-2021-07625b9c81 - Type: unknown - Updated: 2021-06-11 22:16:57 - Bugs: 1963895 - Wrong detection of python bytecode mimetypes -Description: do not classify python bytecode files as text (#1963895) - Severity: None -``` - -### Conclusion - -So, in what situation does dnf updateinfo become handy? - -Well, you could use it if you prefer managing updates fully via the CLI, or if you are unable to successfully use the GUI tools at a specific moment. - -In which case is checking the changelog useful? - -Say you manage the updates yourself, sometimes you might not consider it ideal to stop what you are doing to update your system. Instead of simply installing the updates, you check the changelogs. This allows you to figure out whether you should prioritize your updates (maybe there’s a important security fix?) or whether to postpone a bit longer (no important fix, “I will do it later when I’m not doing anything important”). - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/use-dnf-updateinfo-to-read-update-changelogs/ - -作者:[Mateus Rodrigues Costa][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/mateusrodcosta/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/dnf-updateinfo-816x345.jpg -[2]: https://unsplash.com/@ffstop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/a-quick-introduction-to-toolbox-on-fedora/ -[5]: https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures diff --git a/sources/tech/20210818 A guide to database replication with open source.md b/sources/tech/20210818 A guide to database replication with open source.md deleted file mode 100644 index 665d18a29e..0000000000 --- a/sources/tech/20210818 A guide to database replication with open source.md +++ /dev/null @@ -1,121 +0,0 @@ -[#]: subject: "A guide to database replication with open source" -[#]: via: "https://opensource.com/article/21/8/database-replication-open-source" -[#]: author: "John Lafleur https://opensource.com/users/john-lafleur" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -A guide to database replication with open source -====== -Why choose log-based Change Data Capture (CDC) replication for -databases. Learn about the open source options available to you. -![Cloud and databsae incons][1] - -In the world of constantly evolving data, one question often pops up: How is it possible to seamlessly replicate data that is growing exponentially and coming from an increasing number of sources? This article explains some of the foundational open source technologies that may help commoditize database replication tasks into data warehouses, lakes, or other databases. 
- -One popular replication technology is **Change Data Capture (CDC)**, a pattern that allows row-level data changes at the source database to be quickly identified, captured, and delivered in real-time to the destination data warehouse, lake, or other database. With CDC, only the data that has changed since the last replication—categorized by insert, update, and delete operations—is in scope. This incremental design approach makes CDC significantly more efficient than other database replication patterns, such as a full-database replication. With full-database replication, the entire source database table with potentially millions of rows is scanned and copied over to the destination. - -### Open source CDC - -[Debezium][2] is an open source distributed CDC platform that leverages Apache Kafka to transport data changes. It continuously monitors databases, ensuring that each row-level change is sent to the destination in exactly the same order they were committed to the database. However, using Debezium in a do-it-yourself replication project can be a heavy lift. It requires a deep understanding of concepts related to the source and destination systems, Kafka, and Debezium internals. For example, just take a look at all the details required for a [Debezium MySQL connector][3]. - -[Airbyte][4] is an open source data integration engine that allows you to consolidate your data in your data warehouses, lakes, and databases. Airbyte leverages Debezium and does all the heavy lifting. Indeed, within Airbyte, Debezium is run as an embedded library. This engineering design allows for using Debezium without needing to use Apache Kafka or another language runtime. This [video][5] shows how you can use CDC to replicate a PostgreSQL database with Airbyte in a matter of minutes. The open source code is available for use with Postgres, MySQL, and MSSQL and will soon be for all other major databases that enable it. - -### What are some typical CDC use cases for databases? - -Databases lie at the core of today's data infrastructures, and several different use cases apply. - -#### 1\. Squash the overhead across your transactional databases and network - -With CDC in place, it's possible to deliver data changes as a continuous stream without placing unnecessary overhead on source database systems. This means that databases can focus on doing the more valuable tasks that they are engineered for, resulting in higher throughput and lower latency for apps. With CDC, only incremental data changes are transferred over the network, reducing data transfer costs, minimizing network saturation, and eliminating the need for fine-tuning the system to handle peak batch traffic. - -#### 2\. Keep transactional and analytical databases synchronized - -With data being generated at dizzying rates, extracting insights from data is key to an organization's success. CDC captures live data changes from the transactional database and ships those regularly to the analytical database or warehouse, where they can be analyzed to extract deeper insights. For example, imagine that you're an online travel company. You can capture real-time online booking activity at the database tier (let's say using PostgreSQL) and send these transactions to your analytical database to learn more about your customer's buying patterns and preferences. - -#### 3\. 
Migrate data from legacy systems to next-generation data platforms - -With the shift towards modernizing legacy database systems by going to cloud-based database instances, moving data to these newer platforms has become more critical than ever. With CDC, data is synchronized periodically, allowing you to modernize your data platforms at your own pace while maintaining both your legacy and next-generation data platforms in the interim. This setup ensures flexibility and can keep business operational without missing a heartbeat. - -#### 4\. Warm up a dynamic data cache for applications - -Caching is a standard technique for improving application performance, but data caches must be warmed up (or loaded with data) for them to be effective. With a warm data cache, the application can access data quickly, bypassing the core database. For example, this pattern is extremely beneficial for an application that does many data lookups because loading this lookup data in a cache can offload the read workload from the core database. Using CDC, the data cache can be dynamically updated all the time. For example, selective lookup tables in the database can be loaded into a cache during the initial warm-up cycle. Any future modifications in the lookup table data will incrementally be propagated to update the cache. - -### What CDC implementations exist and what database should you pick? - -CDC has been around for quite some time, and over the years, several different widely-used implementations have sprung up across other products. However, not all CDC implementations are created equal, and you need to pick the proper implementation to get a clear picture of the data changes. I summarize some of these implementations and the challenges of using each of them in the list below: - -#### Date modified - -This approach tracks metadata across every row in the table, including who created the row, who recently modified the row, and when the row was created and modified. - -**Challenges**: - - * Not easy to track data deletes since the date_modified field no longer exists for a deleted row. - * Needs additional compute resources to process the date_modified field. If indexing is used on the date_modified field, the index will need additional compute and storage resources. - - - -#### Binary diffs - -This implementation calculates the difference in state between the current data and the previous data. - -**Challenges**: - - * Calculating state differences can be complex and does not scale well when data volumes are large. - * Needs additional compute resources and cannot be done in real-time. - - - -#### Database trigger - -This method needs database triggers to be created with logic to manage the metadata within the same table or in a separate book-keeping table. - -**Challenges**: - - * Triggers must fire for every transaction, and this can slow down the transactional workload. - * The data engineer must write additional complex rollback logic to handle the case of a transaction failure. - * If the table schema is modified, the trigger must be manually updated with the latest schema changes. - * SQL language differences across the different database systems mean that triggers are not easily portable and might need to be re-written. - - - -#### Log-based - -This implementation reads data directly from the database logs and journal files to minimize the impact of the capture process. Since database logs and journal files exist in every transactional database product, the experience is transparent. 
This means it does not require any logical changes in terms of database objects or the application running on top of the database. - -**Challenges**: - - * If the destination database system is down, the source database system logs will need to be kept intact until the sync happens. - * Database operations that bypass the log file will not be captured. This is a corner case for most relational database use-cases since logs are required to guarantee [ACID][6] behaviors. - * For example, a **TRUNCATE** table statement might not log data, and in this case, forced logging through a query hint or configuration might be required. - - - -When it comes to production databases, the choice is clear: Log-based CDC is the way forward due to its reliability, ability to scale under massive data volumes, and ease of use without requiring any database or app changes. - -### Conclusion - -I hope this article was useful to explain why log-based CDC replication for databases matters and the new open source options available to you. These options provide endless replication possibilities, just as Airbyte made log-based CDC replication much easier. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/database-replication-open-source - -作者:[John Lafleur][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/john-lafleur -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg (Cloud and databsae incons) -[2]: https://github.com/debezium/ -[3]: https://debezium.io/documentation/reference/1.6/connectors/mysql.html -[4]: https://airbyte.io/ -[5]: https://www.youtube.com/watch?v=NMODvLgZvuE -[6]: https://en.wikipedia.org/wiki/ACID diff --git a/sources/tech/20210820 MAKE MORE with Inkscape - G-Code Tools.md b/sources/tech/20210820 MAKE MORE with Inkscape - G-Code Tools.md deleted file mode 100644 index 1d85c70003..0000000000 --- a/sources/tech/20210820 MAKE MORE with Inkscape - G-Code Tools.md +++ /dev/null @@ -1,166 +0,0 @@ -[#]: subject: "MAKE MORE with Inkscape – G-Code Tools" -[#]: via: "https://fedoramagazine.org/make-more-with-inkscape-g-code-tools/" -[#]: author: "Sirko Kemter https://fedoramagazine.org/author/gnokii/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -MAKE MORE with Inkscape – G-Code Tools -====== - -![MAKE MORE with Inkscape - GCode Tools][1] - -Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Inkscape can also do a lot more than just graphics. This series will show you some things you can do besides graphics with Inkscape. This first article of the series will show how Inkscape’s G-Code Tools extension can be used to produce G-Code. G-Code , in turn, is useful for programming machines such as plotters and laser engravers. - -### What is G-Code and what is it used for - -The construction of machines for the hobby sector is booming. 
The publication of the source code for [RepRap][2] 3D printers for self-construction and the availability of electronic components, such as [Arduino][3] or [Raspberry Pi][4] are probably some of the causes for this boom. Mechanical engineering as a hobby is finding more and more adopters. This trend hasn’t stopped with 3D printers. There are also [CNC][5] milling machines, plotters, laser engravers, cutters and and even machines that you can build yourself. - -You don’t have to design or build these machines yourself. You can purchase such machines relatively cheaply as a kit or already assembled. All these machines have one thing in common – they are computer-controlled. [Computer Aided Manufacturing (][6][CAM][6]), which has been widespread in the manufacturing industry, is now also taking place at home. - -### G-Code or G programming language - -The most widespread language for programming CAM machines is G-Code. G-Code is also known as the G programming language. This language was developed at MIT in the 1950s. Since then, various organizations have developed versions of this programming language. Keep this in mind when you work with it. Different countries have different standards for this language. The name comes from the fact that many instructions in this code begin with the letter G. This letter is used to transmit travel or path commands to the machine. - -The commands go, in the truest sense of the word, from A (absolute or incremental position around the X-axis; turning around X) to Z (absolute or incrementing in the direction of the Z-axis). Commands prefixed with M (miscellaneous) transmit other instructions to the machine. Switching coolant on/off is an example of an M command. If you want a more complete list of G-Code commands there is a table on [Wikipedia][7]. - -``` -% -G00 X0 Y0 F70 -G01 Z-1 F50 -G01 X0 Y20 F50 -G02 X20 Y0 J-20 -G01 X0 Y0 -G00 Z0 F70 -M30 -% -``` - -This small example would mill a square. You could write this G-Code in any editor of your choice. But when it comes to more complex things, you typically won’t do this sort of low-level coding by hand. When it comes to 3D-Printing the slicer writes the G-Code for you. But what about when you want to use a plotter or a laser engraver? - -### Other Software for writing G-Code - -So you will need a program to do this job for you. Sure, some CAD programs can write G-Code. But not all open source CAD programs can do this. Here are some other open source solutions for this: - - * [dxf2gcode][8], normally a command line tool but has a Python implemented GUI - * [dmap2gcode][9], can import raster graphics and convert them - * [Millcrum][10], a browser-based tool - * [LinuxCNC][11], can import raster graphics and converts them to G-Code - * [TrueTypeTracer][12] or [F-Engrave][13] if you want to engrave fonts - - - -As you can see, there is no problem finding a tool for doing this. What I dislike is the use of raster graphics. I use a CNC machine because it works more precisely than I would be able to by hand. Using raster graphics and tracing it to make a path for G-Code is not precise anymore. I find that the use of vector graphics, which has paths anyway, is much more precise. - -### Inkscape and G-Code Tools installation - -When it comes to vector graphics, there is no way around Inkscape; at least not if you use Linux. There are a few other programs. But they do not have anywhere near the capability that Inkscape has. Or they are designed for other purposes. 
So the question is, “Can Inkscape be used for creating G-Code?” And the answer is, “Yes!” Since version 0.91, Inkscape has been packaged with an extension called [GCode Tools][14]. This extension does exactly what we want – it converts paths to G-Code. - -So all you have to do, if you have not already done it, is install Inkscape: - -``` -$ sudo dnf install Inkscape -``` - -One thing to note from the start (where light is, is also shadow) – the GCode Tools extension has a lot of functionality that is not well documented. The developer thinks it’s a good idea to use a forum for documentation. Also, basic knowledge about G-Code and CAM is necessary to understand the functions. - -Another point to be aware of is that the development isn’t as vibrant as it was at the time the GCode Tools were packaged with Inkscape. - -### Getting started with Inkscape’s G-Code Tools extension - -The first step is the same as when you would make any other thing in Inkscape – adjust your document properties. So open the document settings with **Shift + Ctrl + D** or by a clicking on the icon on the command bar and set the document properties to the size of your work piece. - -Next, set the orientation points by going to _Extensions > Gcodetools > Orientation points_. You can use the default settings. The default settings will probably give you something similar to what is shown below. - -![Inkscape with document setup and the orientation points ][15] - -#### The Tool library - -The next step is to edit the tool library (_Extensions > Gcodetools > Tools library_). This will open the dialog window for the tool setting. There you choose the tool you will use. The _default_ tool is fine. After you have chosen the tool and hit _Apply_, a rectangle will be on the canvas with the settings for the tool. These settings can be edited with the text tool (**T**). But this is a bit tricky. - -![Inkscape with the default tool library settings added into the document][16] - -The G-Code Tools extension will use these settings later. These tool settings are grouped together with an identifiable name. If you de-group these settings, this name will be lost. - -There are two possibilities to avoid losing the identifier if you ungroup the tool settings. You can use the de-group with 4 clicks with the activated selection tool. Or you can de-group it by using **Shift + Ctrl + G** and then give the group a name later using the XML-editor. - -In the first case you should **watch that the group is restored before you draw anything new**. Otherwise the newly drawn object will be added to this group. - -Now you can draw the paths you want to later convert to G-Code. Objects like rectangles, circles, stars and polygons as well text must be converted to paths (_Path > Object to Path_ or **Shift + Ctrl + C**). - -Keep in mind that this function often does not produce clean paths. You will have to control it and clean it afterwards. You can find an older article [here][17], that describes the process. - -#### Hershey Fonts or Stroke Fonts - -Regarding fonts, keep in mind that TTF and OTF are so called Outline Fonts. This means the contour of the single character is defined and it will be engraved or cut as such. If you do not want this and want to use, for example, a script font then you have to use Stroke Fonts instead. Inkscape itself brings a small collection of them by default (see _Extensions > Text > [Hershey text][18]_). 
- -![The stroke fonts of the Hershey Text extension][19] - -Another article about how to make your own Stroke Fonts will follow. They are not only useful for engraving, but also for embroidery. - -#### The Area Fill Functions - -In some cases it might be necessary to fill paths with a pattern. The G-Code Tools extension has a function which offers two ways to fill objects with patterns – _zig zag_ and _spiral_. There is another function which currently is not working (Inkscape changed some parts for the extensions with the release of version 1.0). The latter function would fill the object with the help of the offset functions in Inkscape. These functions are under _Extensions > Gcodetools > Area_. - -![The Fill Area function of the G-Code Tools extension. Left the pattern fill and right \(currently not working\) the offset filling. The extension will execute the active tab!][20] - -![The area fillings of the G-Code Tool, on top Zig zag and on the bottom Spiral. Note the results will look different, if you apply this function letter-by-letter instead of on the whole path.][21] - -For other or more varied area fillings you will often have to draw the paths by hand (about 90% of the time). The [EggBot extension][22] has a function for filling regions with hatches. You can also use the [classical hatch patterns][23]. But you will have to convert the fill pattern back to an object; otherwise, the G-Code Tools extension cannot convert it. Besides these, [Evilmadscientist has a good wiki page describing fill methods][24]. - -#### Converting paths to G-Code - -To convert drawn paths to G-Code, use the function _Extensions > Gcodetools > Paths to G-Code._ This function will be run on the selected objects. If no object is selected, then all paths in the document will be converted. - -There is currently no functionality to save G-Code using the file menu. This must be done from within the G-Code Tools extension dialog box when you convert the paths to G-Code. **On the Preferences tab, you have to specify the path and the name for the output file.** - -On the canvas, different colored lines and arrows will be rendered. Blue and green lines show curves (G02 and G03). Red lines show straight lines (G01). When you see this styling, you know that you are working with G-Code. - -![Fedora’s logo converted to G-Code with the Inkscape G-Code Tools][25] - -### Conclusion - -Opinions differ as to whether Inkscape is the right tool for creating G-Code. If you keep in mind that Inkscape works only in two dimensions and don’t expect too much, you can create G-Code with it. For simple jobs like plotting some lettering or logos, it is definitely enough. The main disadvantage of the G-Code Tools extension is that its documentation is lacking. This makes it difficult to get started with G-Code Tools. Another disadvantage is that there is not currently much active development of G-Code Tools. There are other extensions for Inkscape that also target G-Code, but they are either already history or no longer actively developed. The [Makerbot Unicorn GCode Output][26] extension and the [GCode Plot][27] extension are a few examples of the latter case. The need for an easy way to export G-Code directly definitely exists. 
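One practical follow-up: the exported file is plain text, so it is easy to sanity-check it from a terminal before sending it to a machine. A minimal sketch, assuming the output file was named `output.ngc` on the Preferences tab (the name is only an example; use whatever path and name you actually entered there):

```
# Peek at the start of the exported file (example file name)
$ head -n 20 output.ngc

# Rough plausibility check: count the motion commands in the file
$ grep -c -E 'G0[0123]' output.ngc
```

This is no substitute for previewing the toolpath in a proper G-Code viewer, but it quickly confirms that the export actually produced motion commands.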
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/make-more-with-inkscape-g-code-tools/ - -作者:[Sirko Kemter][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/gnokii/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/drawing-1-816x345.png -[2]: https://reprap.org/wiki/RepRap -[3]: https://www.arduino.cc/ -[4]: https://www.raspberrypi.org/ -[5]: https://en.wikipedia.org/wiki/CNC -[6]: https://en.wikipedia.org/wiki/Computer-aided_manufacturing -[7]: https://en.wikipedia.org/wiki/G-code -[8]: https://sourceforge.net/projects/dxf2gcode/ -[9]: https://www.scorchworks.com/Dmap2gcode/dmap2gcode.html -[10]: http://millcrum.com/ -[11]: http://linuxcnc.org/ -[12]: https://github.com/aewallin/truetype-tracer -[13]: https://www.scorchworks.com/Fengrave/fengrave.html -[14]: https://github.com/cnc-club/gcodetools -[15]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-02-14-1024x556.png -[16]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-10-24-1024x556.png -[17]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/ -[18]: https://www.evilmadscientist.com/2011/hershey-text-an-inkscape-extension-for-engraving-fonts/ -[19]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-16-50.png -[20]: https://fedoramagazine.org/wp-content/uploads/2021/07/fillarea-1024x391.png -[21]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-20-36-51.png -[22]: https://wiki.evilmadscientist.com/Installing_software#Linux -[23]: https://inkscape.org/de/~henkjan_nl/%E2%98%85classical-hatch-patterns-for-mechanical-drawings -[24]: https://wiki.evilmadscientist.com/Creating_filled_regions -[25]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-38-34-1024x556.png -[26]: http://makerbot.wikidot.com/unicorn-output-for-inkscape -[27]: https://inkscape.org/de/~arpruss/%E2%98%85gcodeplot diff --git a/sources/tech/20210825 Auto-updating podman containers with systemd.md b/sources/tech/20210825 Auto-updating podman containers with systemd.md deleted file mode 100644 index 6d344fe38d..0000000000 --- a/sources/tech/20210825 Auto-updating podman containers with systemd.md +++ /dev/null @@ -1,281 +0,0 @@ -[#]: subject: "Auto-updating podman containers with systemd" -[#]: via: "https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/" -[#]: author: "Daniel Schier https://fedoramagazine.org/author/danielwtd/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Auto-updating podman containers with systemd -====== - -![][1] - -Auto-Updating containers can be very useful in some cases. Podman provides mechanisms to take care of container updates automatically. This article demonstrates how to use Podman Auto-Updates for your setups. - -### Podman - -Podman is a daemonless Docker replacement that can handle rootfull and rootless containers. It is fully aware of SELinux and Firewalld. Furthermore, it comes pre-installed with Fedora Linux so you can start using it right away. - -If Podman is not installed on your machine, use one of the following commands to install it. 
Select the appropriate command for your environment. - -``` -# Fedora Workstation / Server / Spins -$ sudo dnf install -y podman - -# Fedora Silverblue, IoT, CoreOS -$ rpm-ostree install podman -``` - -Podman is also available for many other Linux distributions like CentOS, Debian or Ubuntu. Please have a look at the [Podman Install Instructions][2]. - -### Auto-Updating Containers - -Updating the Operating System on a regular basis is somewhat mandatory to get the newest features, bug fixes, and security updates. But what about containers? These are not part of the Operating System. - -#### Why Auto-Updating? - -If you want to update your Operating System, it can be as easy as: - -``` -$ sudo dnf update -``` - -This will not take care of the deployed containers. But why should you take care of these? If you check the content of containers, you will find the application (for example MariaDB in the docker.io/library/mariadb container) and some dependencies, including basic utilities. - -Running updates for containers can be tedious and time-consuming, since you have to: - - 1. pull the new image - 2. stop and remove the running container - 3. start the container with the new image - - - -This procedure must be done for every container. Updating 10 containers can easily end up taking 30-40 commands that must be run. - -Automating these steps will save time and ensure that everything is up-to-date. - -#### Podman and systemd - -Podman has built-in support for systemd. This means you can start/stop/restart containers via systemd without the need for a separate daemon. The Podman Auto-Update feature requires you to have containers running via systemd. This is the only way to automatically ensure that all desired containers are running properly. Some articles, like these for [Bitwarden][3] and [Matrix Server][4], have already looked at this feature. For this article, I will use an even simpler [Apache httpd][5] container. - -First, start the container with the desired settings. - -``` -# Run httpd container with some custom settings -$ sudo podman container run -d -t -p 80:80 --name web -v web-volume:/usr/local/apache2/htdocs/:Z docker.io/library/httpd:2.4 - -# Just a quick check of the container -$ sudo podman container ls -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -58e5b07febdf docker.io/library/httpd:2.4 httpd-foreground 4 seconds ago Up 5 seconds ago 0.0.0.0:80->80/tcp web - -# Also check the named volume -$ sudo podman volume ls -DRIVER VOLUME NAME -local web-volume -``` - -Now, set up systemd to handle the deployment. Podman will generate the necessary file. - -``` -# Generate systemd service file -$ sudo podman generate systemd --new --name --files web - -/home/USER/container-web.service -``` - -This will generate the file _container-web.service_ in your current directory. Review and edit the file to your liking. Here are the file contents with added newlines and formatting to improve readability. 
- -``` -# container-web.service - -[Unit] -Description=Podman container-web.service -Documentation=man:podman-generate-systemd(1) -Wants=network.target -After=network-online.target -RequiresMountsFor=%t/containers - -[Service] -Environment=PODMAN_SYSTEMD_UNIT=%n -Restart=on-failure -TimeoutStopSec=70 -ExecStartPre=/bin/rm -f %t/container-web.pid %t/container-web.ctr-id - -ExecStart=/usr/bin/podman container run \ - --conmon-pidfile %t/container-web.pid \ - --cidfile %t/container-web.ctr-id \ - --cgroups=no-conmon \ - --replace \ - -d \ - -t \ - -p 80:80 \ - --name web \ - -v web-volume:/usr/local/apache2/htdocs/ \ - docker.io/library/httpd:2.4 - -ExecStop=/usr/bin/podman container stop \ - --ignore \ - --cidfile %t/container-web.ctr-id \ - -t 10 - -ExecStopPost=/usr/bin/podman container rm \ - --ignore \ - -f \ - --cidfile %t/container-web.ctr-id - -PIDFile=%t/container-web.pid -Type=forking - -[Install] -WantedBy=multi-user.target default.target -``` - -Now, remove the current container, copy the file to the proper systemd directory, and start/enable the service. - -``` -# Remove the temporary container -$ sudo podman container rm -f web - -# Copy the service file -$ sudo cp container-web.service /etc/systemd/system/container-web.service - -# Reload systemd -$ sudo systemctl daemon-reload - -# Enable and start the service -$ sudo systemctl enable --now container-web - -# Another quick check -$ sudo podman container ls -$ sudo systemctl status container-web -``` - -Please be aware, that the container can now only be managed via systemd. Starting and stopping the container with the “podman” command may interfere with systemd. - -Now that the general setup is out of the way, have a look at auto-updating this container. - -#### Manual Auto-Updates - -The first thing to look at is manual auto-updates. Sounds weird? This feature allows you to avoid the 3 steps per container, but you will have full control over the update time and date. This is very useful if you only want to update containers in a maintenance window or on the weekend. - -Edit the _/etc/systemd/system_/_container-web.service_ file and add the label shown below to it. - -``` ---label "io.containers.autoupdate=registry" -``` - -The changed file will have a section appearing like this: - -``` -...snip... - -ExecStart=/usr/bin/podman container run \ - --conmon-pidfile %t/container-web.pid \ - --cidfile %t/container-web.ctr-id \ - --cgroups=no-conmon \ - --replace \ - -d \ - -t \ - -p 80:80 \ - --name web \ - -v web-volume:/usr/local/apache2/htdocs/ \ - --label "io.containers.autoupdate=registry" \ - docker.io/library/httpd:2.4 - -...snip... -``` - -Now reload systemd and restart the container service to apply the changes. - -``` -# Reload systemd -$ sudo systemctl daemon-reload - -# Restart container-web service -$ sudo systemctl restart container-web -``` - -After this setup you can run a simple command to update a running instance to the latest available image for the used tag. In this example case, if a new 2.4 image is available in the registry, Podman will download the image and restart the container automatically with a single command. - -``` -# Update containers -$ sudo podman auto-update -``` - -#### Scheduled Auto-Updates - -Podman also provides a systemd timer unit that enables container updates on a schedule. This can be very useful if you don’t want to handle the updates on your own. If you are running a small home server, this might be the right thing for you, so you are getting the latest updates every week or so. 
- -Enable the systemd timer for podman as follows: - -``` -# Enable podman auto update timer unit -$ sudo systemctl enable --now podman-auto-update.timer - -Created symlink /etc/systemd/system/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/system/podman-auto-update.timer. -``` - -Optionally, you can edit the schedule of the timer. By default, the update will run every Monday morning, which is ok for me. Edit the timer module using this command: - -``` -$ sudo systemctl edit podman-auto-update.timer -``` - -This will bring up your default editor. Changing the schedule is beyond the scope of this article but the link to _systemd.timer_ below will help. The Demo section of [Systemd Timers for Scheduling Tasks][6] contains details as well. - -That’s it. Nothing more to do. Podman will now take care of image updates and also prune old images on a schedule. - -### Hints & Tips - -Auto-Updating seems like the perfect solution for container updates, but you should consider some things, before doing so. - - * avoid using the “latest” tag, since it can include major updates - * consider using tags like “2” or “2.4”, if the image provider has them - * test auto-updates beforehand (does the container support updates without additional steps?) - * consider having backups of your Podman volumes, in case something goes sideways - * auto-updates might not be very useful for highly productive setups, where you need full control over the image version in use - * updating a container also restarts the container and prunes the old image - * occasionally check if the updates are being applied - - - -If you take care of the above hints, you should be good to go. - -### Docs & Links - -If you want to learn more about this topic, please check out the links below. There is a lot of useful information in the official documentation and some blogs. - - * - * - * - * - * [Systemd Timers for Scheduling Tasks][6] - - - -### Conclusion - -As you can see, without the use of additional tools, you can easily run auto-updates on Podman containers manually or on a schedule. Scheduling allows unattended updates overnight, and you will get all the latest security updates, features, and bug fixes. Some setups I have tested successfully are: MariaDB, Ghost Blog, WordPress, Gitea, Redis, and PostgreSQL. 
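As a follow-up to the hint above about occasionally checking whether updates are being applied: standard systemd tooling is enough for that. A small sketch, assuming the timer was enabled as shown earlier; note that the `--dry-run` flag only exists in newer Podman releases, so check `podman auto-update --help` on your version first:

```
# When will the timer fire next, and when did it last run?
$ systemctl list-timers podman-auto-update.timer

# What happened during the last run?
$ systemctl status podman-auto-update.service

# Newer Podman releases can report pending updates without applying them
$ sudo podman auto-update --dry-run
```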
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/ - -作者:[Daniel Schier][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/danielwtd/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/auto-updating-podman-containers-816x345.jpg -[2]: https://podman.io/getting-started/installation -[3]: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/ -[4]: https://fedoramagazine.org/deploy-your-own-matrix-server-on-fedora-coreos/ -[5]: https://hub.docker.com/_/httpd -[6]: https://fedoramagazine.org/systemd-timers-for-scheduling-tasks/ diff --git a/sources/tech/20210825 Icons Look too Small- Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux.md b/sources/tech/20210825 Icons Look too Small- Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux.md deleted file mode 100644 index 546be9b8f7..0000000000 --- a/sources/tech/20210825 Icons Look too Small- Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux.md +++ /dev/null @@ -1,140 +0,0 @@ -[#]: subject: "Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux" -[#]: via: "https://itsfoss.com/enable-fractional-scaling-ubuntu/" -[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux -====== - -A few months ago, I bought a Dell XPS laptop with a 4K UHD screen. The screen resolution is 3840×2400 resolution with a 16:10 aspect ratio. - -When I was installing Ubuntu on it, everything looked so small. The desktop icons, applications, menus, items in the top panel, everything. - -It’s because the screen has too many pixels but the desktop icons and rest of the elements remain the same in size (as on a regular screen of 1920×1080). Hence, they look too small on the HiDPI screen. - -![Icons and other elements look too small on a HiDPI screen in Ubuntu][1] - -This is not pretty and makes it very difficult to use your Linux system. Thankfully, there is a solution for GNOME desktop users. - -If you too have a 2K or 4K screen where the desktop icons and other elements look too small, here’s what you need to do. - -### Scale-up display if the screen looks too small - -If you have a 4K screen, you can scale the display to 200%. This means that you are making every element twice its size. - -Press the Windows key and search for Settings: - -![Go to Settings][2] - -In Settings, go to Display settings. - -![Access the Display Settings and look for Scaling][3] - -Here, select 200% as the scale factor and click on Apply button. - -![Scaling the display in Ubuntu][4] - -It will change the display settings and ask you to confirm whether you want to keep the changed settings or revert to the original. If things look good to you, select “Keep Changes.” - -Your display settings will be changed and remain the same even after reboots until you change it again. 
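In case you are curious where this choice is stored: GNOME normally persists the per-monitor configuration, including the scale factor, in `~/.config/monitors.xml`, which is why the setting survives reboots. Assuming that file exists on your system, you can peek at the recorded values:

```
# Show the scale values GNOME has recorded for your monitors
$ grep '<scale>' ~/.config/monitors.xml
```

There is no need to edit this file by hand; the Settings dialog manages it for you.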
- -### Enable fractional scaling (suitable for 2K screens) - -200% scaling is good for 4K screens however if you have a 2K screen, the 200% scaling will make the icons look too big for the screen. - -Now you are in the soup. You have the screen looking too small or too big. What about a mid-point? - -Thankfully, [GNOME][5] has a fractional scaling feature that allows you to set the scaling to 125%, 150%, and 175%. - -#### Using fractional scaling on Ubuntu 20.04 and newer versions - -Ubuntu 20.04 and the new versions have newer versions of GNOME desktop environment and it allows you to enable or disable fractional scaling from Display settings itself. - -Just go to the Display settings and look for the Fractional Scaling switch. Toggle it to enable or disable it. - -When you enable the fractional scaling, you’ll see new scaling factors between 100% to 200%. You can choose the one which is suitable for your screen. - -![Enable fractional scaling][6] - -#### Using fractional scaling on Ubuntu 18.04 - -You’ll have to make some additional efforts to make it work on the older Ubuntu 18.04 LTS version. - -First, [switch to Wayland from Xorg][7]. - -Second, enable fractional scaling as an experimental feature using this command: - -``` -gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" -``` - -Third, restart your system and then go to the Display settings and you should see the fractional scaling toggle button now. - -#### Disabling fractional scaling on Ubuntu 18.04 - -If you are experiencing issues with fractional scaling, like increased power consumption and mouse lagging, you may want to disable it. Wayland could also be troublesome for some applications. - -First, toggle the fractional scaling switch in the display settings. Now use the following command to disable the experimental feature. - -``` -gsettings reset org.gnome.mutter experimental-features -``` - -Switch back to Xorg from Wayland again. - -### Multi-monitor setup and fractional scaling - -4K screen is good but I prefer a multi-monitor setup for work. The problem here is that I have two Full HD (1080p) monitors. Pairing them with my 4K laptop screen requires little settings change. - -What I do here is to keep the 4K screen at 200% scaling at 3840×2400 resolution. At the same time, I keep the full-HD monitors at 100% scaling with 1920×1080 resolution. - -![HiDPI screen is set at 200%][8] - -![Full HD screens are set at 100%][9] - -![Full HD screens are set at 100%][10] - -To ensure a smooth experience, you should take care of the following: - - * Use Wayland display server: It is a lot better at handling multi-screens and HiDPI screens than the legacy Xorg. - * Even if you use only 100% and 200% scaling, enabling fractional scaling is a must, otherwise, it doesn’t work properly. I know it sounds weird but that’s what I have experienced. - - - -### Did it help? - -HiDPI support in Linux is far from perfect but it is certainly improving. Newer desktop environment versions of GNOME and KDE keep on improving on this front. - -Fractional scaling with Wayland works quite well. It is improving with Xorg as well but it struggles especially on a multi-monitor set up. - -I hope this quick tip helped you to enable fractional scaling in Ubuntu and enjoy your Linux desktop on a UHD screen. - -Please leave your questions and suggestions in the comment section. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/enable-fractional-scaling-ubuntu/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/wp-content/uploads/2021/08/HiDPI-screen-icons-too-small-in-Ubuntu.webp -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/settings-application-ubuntu.jpg?resize=800%2C247&ssl=1 -[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/display-settings-scaling-ubuntu.png?resize=800%2C432&ssl=1 -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/scale-display-ubuntu.png?resize=800%2C443&ssl=1 -[5]: https://www.gnome.org/ -[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/enable-fractional-scaling.png?resize=800%2C452&ssl=1 -[7]: https://itsfoss.com/switch-xorg-wayland/ -[8]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-3.webp -[9]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-2.webp -[10]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-1.webp diff --git a/sources/tech/20210825 Use this open source tool for automated unit testing.md b/sources/tech/20210825 Use this open source tool for automated unit testing.md deleted file mode 100644 index 83c60fba66..0000000000 --- a/sources/tech/20210825 Use this open source tool for automated unit testing.md +++ /dev/null @@ -1,217 +0,0 @@ -[#]: subject: "Use this open source tool for automated unit testing" -[#]: via: "https://opensource.com/article/21/8/tackle-test" -[#]: author: "Saurabh Sinha https://opensource.com/users/saurabhsinha" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Use this open source tool for automated unit testing -====== -Tackle-test is an automatic generator of unit test cases for Java -applications. -![Looking at a map][1] - -Modernizing and transforming legacy applications is a challenging activity that involves several tasks. One of the key tasks is validating that the modernized application preserves the functionality of the legacy application. Unfortunately, this can be tedious and hard to perform. Legacy applications often do not have automated test cases, or, if available, test coverage might be inadequate, both in general and specifically for covering modernization-related changes. A poorly maintained test suite might also contain many obsolete tests (accumulated over time as the application evolved). Therefore, validation is mainly done manually in most modernization projects—it is a process that is time-consuming and may not test the application sufficiently. In some reported case studies, testing accounted for approximately 70% to 80% of the time spent on modernization projects [1]. Tackle-test is an automated testing tool designed to address this challenge. - -### Overview of Tackle-test - -At its core, Tackle-test is an automatic generator of unit test cases for Java applications. 
It can generate tests with assertions, which makes the tool especially useful in modernization projects, where application transformation is typically functionality-preserving—thus, useful test assertions can be created by observing runtime states of legacy application versions. This can make differential testing between the legacy and modernized application versions much more effective; test cases without assertions would detect only those differences where the modernized version crashes on a test input on which the legacy version executes successfully. The assertions that Tackle-test generates capture created object values after each code statement, as illustrated in the next section. - -Tackle-test uses a novel test-generation technique that applies combinatorial test design (CTD)—also called combinatorial testing or combinatorial interaction testing [2]—to method interfaces, with the goal of performing rigorous testing of methods with “complex interfaces,” where interface complexity is characterized over the space of parameter-type combinations that a method can be invoked with. CTD is a well-known, effective, and efficient test-design technique. It typically requires a manual definition of the test space in the form of a CTD model, consisting of a set of parameters, their respective values, and constraints on the value combinations. A valid test in the test space is defined as an assignment of one value to each parameter that satisfies the constraints. A CTD algorithm automatically constructs a subset of the set of valid tests to cover all legitimate value combinations of every _t_ parameters, where *t *is usually a user input. - -Although CTD is typically applied to program inputs in a black-box manner and the CTD model is created manually, Tackle-test automatically builds a parameter-type-based white-box CTD model for each method under test. It then generates a test plan consisting of coverage goals from the model and synthesizes test sequences for covering rows of the test plan. The test plan can be generated at different, user-configurable interaction levels, where higher levels result in the generation of more test cases and more thorough testing, but at the cost of increased test-generation time. - -Tackle-test also leverages some existing and commonly used test-generation strategies to maximize code coverage. Specifically, the strategies include feedback-driven random test generation (via the [Randoop][2] open source tool) and evolutionary and constraint-based test generation (via the [EvoSuite][3] open source tool). These tools compute coverage goals in code elements, such as methods, statements, and branches. - -![tackle-test components][4] - -Figure 1: High-level components of Tackle-test. -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -Figure 1 presents a high-level view of the main components of Tackle-test. It consists of a Java-based core test generator that generates CTD-driven tests and a Python-based command-line interface (CLI), which is the primary mechanism for user interaction. - -### Getting started with the tool - -Tackle-test is released as open source under the Konveyor organization (). To get started, clone the repo, and follow the instructions for installing and running the tool provided in the repo readme. There are two installation options: using docker/docker-compose or a local installation. - -The CLI provides two main commands: `generate` for generating JUnit test cases and `execute` for executing them. 
To verify your installation completed successfully, use the sample `irs` application located in the test/data folder to run these two commands. - -The `generate` command is accompanied by a subcommand specifying the test-generation strategy (`ctd-amplified`, `randoop`, or `evosuite`) and creates JUnit test cases. By default, diff assertions are added to the generated test cases. Let’s run the generate command on the `irs` sample, using the CTD-guided strategy. - - -``` -$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose generate ctd-amplified -[tkltest|18:00:11.171] Loading config file ./test/data/irs/tkltest_config.toml -[tkltest|18:00:11.175] Computing coverage goals using CTD -* CTD interaction level: 1 -* Total number of classes: 5 -* Targeting 5 classes -* Created a total of 20 test combinations for 20 target methods of 5 target classes -[tkltest|18:00:12.816] Computing test plans with CTD took 1.64 seconds -[tkltest|18:00:12.816] Generating basic block test sequences using CombinedTestGenerator -[tkltest|18:00:12.816] Test generator output will be written to irs_CombinedTestGenerator_output.log -[tkltest|18:01:02.693] Generating basic block test sequences with CombinedTestGenerator took 49.88 seconds -[tkltest|18:01:02.693] Extending sequences to reach coverage goals and generating junit tests -* === total CTD test-plan coverage rate: 90.00% (18/20) -* Added a total of 64 diff assertions across all sequences -* wrote summary file for generation of CTD-amplified tests (JSON) -* wrote 5 test class files to "irs-ctd-amplified-tests/monolithic" with 18 total test methods -* wrote CTD test-plan coverage report (JSON) -[tkltest|18:01:06.694] JUnit tests are saved in ./irs-ctd-amplified-tests -[tkltest|18:01:06.695] Extending test sequences and writing junit tests took 4.0 seconds -[tkltest|18:01:06.700] CTD coverage report is saved in ./irs-tkltest-reports/ctd report/ctdsummary.html -[tkltest|18:01:06.743] Generated Ant build file ./irs-ctd-amplified-tests/build.xml -[tkltest|18:01:06.743] Generated Maven build file ./irs-ctd-amplified-tests/pom.xml -``` - -Test generation takes a couple of minutes on the `irs` sample. By default, the tool spends 10 seconds per class on initial test sequence generation. However, the overall runtime can be longer due to additional steps, as explained in the following section. Note that the time limit per class option is configurable and that for large applications, test generation might take several hours. Therefore, it is a good practice to start with a limited scope of a few classes to get a feel for the tool before performing test generation on all application classes. - -When test generation completes, the test cases are written to a designated directory named `irs-ctd-amplified-tests` as output by the tool, along with Maven and Ant scripts for compiling and executing them. The test cases are in a subdirectory named `monolith`. A separate test file is created for each application class. Each such file contains multiple test approaches for testing the public methods of the class with different combinations of parameter types, as specified by the CTD test plan. A CTD coverage report is created that summarizes the test plan parts for which unit tests could be generated in a directory named `irs-tkltest-reports`. In the above output, we can see that Tackle-test created test cases for 18 of the 20 test-plan rows, resulting in 90% test-plan coverage. 
- -![amplified tests][6] - -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -Now let’s look at one of the generated test methods for the `irs.IRS` class. - - -``` -  @Test     -   public void test1() throws Throwable { -       irs.IRS iRS0 = new irs.IRS(); -       java.util.ArrayList<irs.Salary> salaryList1 = new java.util.ArrayList<irs.Salary>();                  -       irs.Salary salary5 = new irs.Salary(0, 0, (double)100); -       assertEquals(0, ((irs.Salary) salary5).getEmployerId()); -       assertEquals(0, ((irs.Salary) salary5).getEmployeeId()); -       assertEquals(100.0, (double) ((irs.Salary) salary5).getSalary(), 1.0E-4); -       boolean boolean6 = salaryList1.add(salary5); -        assertEquals(true, boolean6); -       iRS0.setSalaryList((java.util.List<irs.Salary>)salaryList1); -    } -``` - -This test method intends to test the `setSalaryList` method of IRS, which receives a list of `irs.Salary` objects as its input. We can see that statements of the test case are followed by calls to the `assertEquals` method, comparing the values of generated objects to the values recorded during the generation of this test. When the test executes again, e.g., on the modernized version of the application, if any value differs from the recorded one, an assertion failure would occur, potentially indicating broken code that did not preserve the functionality of the legacy application. - -Next, we will compile and run the generated test cases using the CLI `execute`command. We note that these are standard JUnit test cases that can be run in an IDE or using any JUnit test runner; they can also be integrated into a CI pipeline. When executed with the CLI, JUnit reports are generated and optionally also code-coverage reports (created using [JaCoCo][7]). - - -``` -$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose execute -[tkltest|18:12:46.446] Loading config file ./test/data/irs/tkltest_config.toml -[tkltest|18:12:46.457] Total test classes: 5 -[tkltest|18:12:46.457] Compiling and running tests in ./irs-ctd-amplified-tests -Buildfile: ./irs-ctd-amplified-tests/build.xml - -delete-classes: - -compile-classes_monolithic: -      [javac] Compiling 5 source files - -execute-tests_monolithic: -      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic -      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/raw -      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/html -[jacoco:coverage] Enhancing junit with coverage - -... - -BUILD SUCCESSFUL -Total time: 2 seconds -[tkltest|18:12:49.772] JUnit reports are saved in ./irs-tkltest-reports/junit-reports -[tkltest|18:12:49.773] Jacoco code coverage reports are saved in ./irs-tkltest-reports/jacoco-reports -``` - -The Ant script executes the unit tests by default, but the user can configure the tool to use Maven instead. Gradle will also be supported soon. - -Looking at the JUnit report, located in `irs-tkltest-reports`, we can see that all JUnit test methods passed. This is expected because we executed them on the same version of the application on which they were generated. - -![junit report][8] - -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -From the JaCoCo code-coverage report, also located in `irs-tkltest-reports`, we can see that CTD-guided test generation achieved overall 71% statement coverage and 94% branch coverage on the irs sample. We can also drill down to the class and method levels to see their coverage rates. 
The missing coverage is the result of test-plan rows for which the test generator was unable to generate a passing sequence. Increasing the test-generation time limit per class can increase the coverage rate. - -![jacoco][9] - -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -### CTD-guided test generation - -Figure 2 illustrates the test-generation flow for CTD-guided test generation, implemented in the core test-generation engine of Tackle-test. The input to the test-generation flow is a specification of (1) the application classes, (2) the library dependencies of the application, and (3) optionally, the set of application classes to target for test generation (if unspecified, all application classes are targeted). This specification is provided via a [TOML][10] configuration file. The output from the flow consists of: (1) JUnit test cases (with or without assertions), (2) Maven and Ant build files, and (3) JSON files containing a summary of test generation and CTD test-plan coverage. - -![ctd-guided test generation][11] - -Figure 2: The process for CTD-guided test generation. -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -The flow starts with the generation of the CTD test plan. This involves creating a CTD model for each public method of the targeted classes. The CTD model for each method captures all possible concrete types for every formal parameter of the method, including elements that can be added to collection/map/array parameter types. Tackle-test incorporates lightweight static analysis to deduce the feasible concrete types for each parameter of each method. - -Next, a CTD test plan is generated automatically from the model at a given (user-configurable) interaction level. Each row in the test plan describes a specific combination of concrete parameter types with which the method should be invoked. By default, the interaction level is set to one, which results in one-way testing: each possible concrete parameter type appears in at least one row of the test plan. Setting the Interaction level to two, a.k.a. pairwise testing, would result in a test plan that includes every pair of concrete types for each pair of method parameters in at least one of its rows. - -The CTD test plan provides a set of coverage goals for which test sequences need to be synthesized. Tackle-test does this in two steps. In the first step, it uses Randoop and/or EvoSuite (the user can configure which tools are used) to create base test sequences. The base test sequences are analyzed to generate sequence pools at method and class levels from which the test-generation engine samples sequences to put together a covering sequence for each test-plan row. If a covering sequence is successfully created, the engine executes it to ensure that the sequence is valid in the sense that it does not cause the application to crash. During this execution, runtime states in terms of objects created are also recorded to be used later for assertion generation. Failing sequences are discarded. The engine adds assertions to passing sequences if the user specifies the assertion option. Finally, the engine exports the sequences, grouped by classes, to JUnit class files. The engine also creates Ant `build.xml` and Maven `pom.xml` files, which can be used if needed for running the generated test cases. 
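Although the walkthrough above used the CTD-guided strategy, the same CLI drives the other supported strategies named earlier (`randoop` and `evosuite`). A sketch reusing the sample `irs` configuration; the exact names of the generated test directories may differ by strategy:

```
# Generate tests with feedback-driven random generation (Randoop)
$ tkltest --config-file ./test/data/irs/tkltest_config.toml generate randoop

# Or with evolutionary test generation (EvoSuite)
$ tkltest --config-file ./test/data/irs/tkltest_config.toml generate evosuite

# Execution works the same way regardless of the strategy used
$ tkltest --config-file ./test/data/irs/tkltest_config.toml execute
```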
- -### Other tool features - -Tackle-test is highly configurable and provides several configuration options using which the user can tailor the behavior of the tool: for example, which classes to generate tests for, which tools to use for test generation, how much time to spend on test generation, whether to add assertions to test cases, what interaction level to use for generating CTD test plans, how many executions to perform for extended test sequences, etc. - -### Effectiveness of different test-generation strategies - -Tackle-test has been evaluated on several open source Java applications and is currently being applied to enterprise-grade Java applications as well. - -![instruction coverage results][12] - -Figure 3: Instruction coverage achieved by test cases generated using different strategies and interaction levels for two small open-source Java applications taken from the[ SF110 benchmark][13]. -(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5]) - -Figure 3 presents data about statement coverage achieved by tests generated using different testing strategies on two small open source Java applications. The applications were taken from the [SF110 benchmark][13], a large corpus of open source Java applications created to facilitate empirical studies of automated testing techniques. One of the applications, `jni-inchi`, consists of 24 classes and 74 methods; the other, `gaj`, consists of 14 classes and 17 methods. The box plot shows that targeting CTD test-plan rows by itself can achieve good statement coverage and, compared to test suites of the same size as the CTD-guided test suite sampled out of Randoop- and EvoSuite-generated test cases, the CTD-guided test suite achieves higher statement coverage, making it more efficient. - -A large-scale evaluation of Tackle-test, using more applications from the SF110 benchmark and some proprietary enterprise Java applications, is currently being conducted. - -If you prefer to see a video demonstration, you can watch it [here][14]. - -We encourage you to try out the tool and provide feedback to help us improve it by submitting a pull request. We also invite you to help improve the tool by contributing to the project. - -#### Migrate to Kubernetes with the Konveyor community - -Tackle-test is part of the Konveyor community. This community is helping others modernize and migrate their applications to the hybrid cloud by building tools, identifying patterns, and providing advice on breaking down monoliths, adopting containers, and embracing Kubernetes. - -This community includes open source tools that migrate virtual machines to KubeVirt, Cloud Foundry, or Docker containers to Kubernetes, or namespaces between Kubernetes clusters. These are a few of the use cases we solve for. - -For updates on these tools and invites to meetups where practitioners show how they moved to Kubernetes, [join the community][15]. - -#### References - -[1] COBOL to Java and Newspapers Still Get Delivered, , 2018. - -[2] D. R. Kuhn, R. N. Kacker, and Y. Lei. Introduction to Combinatorial Testing. Chapman & Hall/CRC, 2013. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/tackle-test - -作者:[Saurabh Sinha][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/saurabhsinha -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) -[2]: https://randoop.github.io/randoop/ -[3]: https://www.evosuite.org/ -[4]: https://opensource.com/sites/default/files/1tackle-test-components.png -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/2amplified-tests.png (amplified tests) -[7]: https://www.eclemma.org/jacoco/ -[8]: https://opensource.com/sites/default/files/3junit-report.png (junit report) -[9]: https://opensource.com/sites/default/files/4jacoco.png (jacoco) -[10]: https://toml.io/en/ -[11]: https://opensource.com/sites/default/files/5ctd-guided-test-generation.png (ctd-guided test generation) -[12]: https://opensource.com/sites/default/files/6instructioncoverage.png (instruction coverage results) -[13]: https://www.evosuite.org/experimental-data/sf110/ -[14]: https://youtu.be/qThqTFh2PM4 -[15]: https://www.konveyor.io/ diff --git a/sources/tech/20210827 Automatically Light Up a Sign When Your Webcam is in Use.md b/sources/tech/20210827 Automatically Light Up a Sign When Your Webcam is in Use.md deleted file mode 100644 index ad4e7c8ba3..0000000000 --- a/sources/tech/20210827 Automatically Light Up a Sign When Your Webcam is in Use.md +++ /dev/null @@ -1,80 +0,0 @@ -[#]: subject: "Automatically Light Up a Sign When Your Webcam is in Use" -[#]: via: "https://fedoramagazine.org/automatically-light-up-a-sign-when-your-webcam-is-in-use/" -[#]: author: "John Boero https://fedoramagazine.org/author/boeroboy/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Automatically Light Up a Sign When Your Webcam is in Use -====== - -![][1] - -Automatic WFH sign tells others when you're in a conference. - -At the beginning of COVID lockdown and multiple people working from home it was obvious there was a need to let others know when I’m in a meeting or on a live webcam. So naturally it took me one year to finally do something about it. Now I’m here to share what I learned along the way. You too can have your very own “do not disturb” sign automatically light up outside your door to tell people not to walk in half-dressed on laundry day. - -At first I was surprised Zoom doesn’t have this kind of feature built in. But then again I might use Teams, Meet, Hangouts, WebEx, Bluejeans, or any number of future video collaboration apps. Wouldn’t it make sense to just use a system-wide watch for active webcams or microphones? Like most problems in life, this one can be helped with the Linux kernel. A simple check of the _uvcvideo_ module will show if a video device is in use. Without using events all that is left is to poll it for changes. I chose to build a taskbar icon for this. I would normally do this with my trusty C++. But I decided to step out of my usual comfort zone and use Python in case someone wanted to port it to other platforms. I also wanted to renew my lesser Python-fu and face my inner white space demons. 
I came up with the following ~90 lines of practical and simple but insecure Python: - - - -Aside from the icon bits, a daemon thread performs the following basic check every 1s, calling scripts as changed: - -``` -def run(self): - while True: - val=subprocess.check_output(['lsmod | grep \'^uvcvideo\' | awk \'{print $3}\''], shell=True, text=True).strip() - if val != self.status: - self.status = val - if val == '0': - val=subprocess.check_output(['~/bin/webcam_deactivated.sh']) - else: - val=subprocess.check_output(['~/bin/webcam_activated.sh']) - time.sleep(1) -``` - -Rather than implement the parsing of modules, just using a hard-coded shell command got the job done. Now whatever scripts you choose to put in ~/bin/ will be used when at least one webcam activates or deactivates. I recently had a futile go at the kernel maintainers regarding a bug in usb_core triggered by uvcvideo. I would just as soon not go a step further and attempt an events patch to uvcvideo. Also, this leaves room for Mac or Windows users to port their own simple checks. - -Now that I had a happy icon that sits in my KDE system tray I could implement scripts for on and off. This is where things got complicated. At first I was going to stick a magnetic bluetooth LED badge on my door to flash “LIVE” whenvever I was in a call. These things are ubiquitous on the internet and cost about $10 for basically an embedded ARM Cortex-M0 with an LED screen, bluetooth, and battery. They are basically a full Raspberry Pi Pico kit but soldered onto the board. - -![These Bluetooth LED badges with 48Mhz ARM Cortex-M0 chips have a lot of potential, but they need custom firmware to be any use.][2] - -Unfortunately these badges use a fixed firmware that is either listening to Bluetooth transmissions or showing your message – it doesn’t do both which is silly. Many people have posted feedback that they should be so much more. Sure enough someone has already tinkered with [custom firmware][3]. Unfortunately the firmware was for older USB variants and I’m not about to de-solder or buy an ISP programmer to flash eeprom just for this. That would be a super interesting project for later and would be a great Rpi alternative but all I want right now is a remote controlled light outside my door. I looked at everything including WiFi [smart bulbs][4] to replace my recessed lighting bulbs, to [BTLE candles][5] which are an interesting option. Along the way I learned a lot about Bluetooth Low Energy including how a kernel update can waste 4 hours of weekend with bluetooth stack crashes. BTLE is really interesting and makes a lot more sense after reading up on it. Sure enough there is Python that can set the display [message on your LED badge][6] across the room, but once it is set, Bluetooth will stop listening for you to change it or shut it off. Darn. I guess I should just make do with USB, which actually has a standard command to control power to ports. Let’s see if something exists for this already. - -![A programmable Bluetooth LED sign costs £10 or for £30 you can have a single LED up to 59 inches away.][7] - -It looked like there are options out there even if they’re not ideal. Then suddenly I found it. Neon sign “ON AIR” for £15 and it’s as dumb as they come – just using 5v from USB power. Perfect. - -![Bingo – now all I needed to do was control the power to it.][8] - -The command to control USB power is _uhubctl_ which is in Fedora repos. Unfortunately most USB hubs don’t support this command. 
In fact very few support it [going back 20 years][9] which seems silly. Hubs will happily report that power has been disconnected even though no such disconnection has been made. I assume it’s just a few cents extra to build in this feature but I’m not a USB hub manufacturer. Therefore I needed to source a pre-owned one. In the end I found a BYTECC BT-UH340 from the US. This was all I needed to finalize it. Adding udev rules to allow the _wheel_ group to control USB power, I can now perform a simple _uhubctl -a off -l 1-1 -p 1_ to turn anything off. - -![The BYTECC BT-UH340 is one of few hubs I could actually find to support uhubctl power.][10] - -Now with a spare USB extension cable lead to my door I finally have a complete solution. There is an “ON AIR” sign on the outside of my door that lights up automatically whenever any of my webcams are in use. I would love to see a Mac port or improvements in pull requests. I’m sure it can all be better. Even further I would love to hone my IoT skills and sort out flashing those Bluetooth badges. If anybody wants to replicate this please be my guest, and suggestions are always welcome. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/automatically-light-up-a-sign-when-your-webcam-is-in-use/ - -作者:[John Boero][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/boeroboy/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/onair-1890x800-1-816x345.jpg -[2]: https://fedoramagazine.org/wp-content/uploads/2021/03/IMG_20210322_164346-1024x768.jpg -[3]: https://github.com/Effix/LedBadge -[4]: https://www.amazon.co.uk/AvatarControls-Dimmable-Bluetooth-Connection-2700K-6100K/dp/B08P21MSTW/ref=sr_1_6_mod_primary_lightning_deal?dchild=1&keywords=bluetooth+bulb+spot&qid=1616345349&sbo=Tc8eqSFhUl4VwMzbE4fw%2Fw%3D%3D&smid=A2GE8P68TQ1YXI&sr=8-6 -[5]: http://nilhcem.com/iot/reverse-engineering-simple-bluetooth-devices -[6]: http://nilhcem.com/iot/reverse-engineering-bluetooth-led-name-badge -[7]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-7-1024x416.png -[8]: https://fedoramagazine.org/wp-content/uploads/2021/03/IMG_20210322_163624-1024x768.jpg -[9]: https://github.com/mvp/uhubctl#compatible-usb-hubs -[10]: https://c1.neweggimages.com/ProductImage/17-145-089-02.jpg diff --git a/sources/tech/20210827 Calculate date and time ranges in Groovy.md b/sources/tech/20210827 Calculate date and time ranges in Groovy.md deleted file mode 100644 index 4ae46e19a5..0000000000 --- a/sources/tech/20210827 Calculate date and time ranges in Groovy.md +++ /dev/null @@ -1,172 +0,0 @@ -[#]: subject: "Calculate date and time ranges in Groovy" -[#]: via: "https://opensource.com/article/21/8/groovy-date-time" -[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Calculate date and time ranges in Groovy -====== -Use Groovy date and time to discover and display time increments. -![clock][1] - -Every so often, I need to do some calculations related to dates. A few days ago, a colleague asked me to set up a new project definition in our (open source, of course!) project management system. 
This project is to start on the 1st of August and finish on the 31st of December. The service to be provided is budgeted at 10 hours per week. - -So, yeah, I had to figure out how many weeks between 2021-08-01 and 2021-12-31 inclusive. - -This is the perfect sort of problem to solve with a tiny [Groovy][2] script. - -### Install Groovy on Linux - -Groovy is based on Java, so it requires a Java installation. Both a recent and decent version of Java and Groovy might be in your Linux distribution's repositories. Alternately, you can install Groovy by following the instructions on the [groovy-lang.org][2]. - -A nice alternative for Linux users is [SDKMan][3], which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's latest Groovy release. - -### Solving the problem with Groovy - -Since Java 8, time and date calculations have been folded into a new package called **java.time**, and Groovy provides access to that. Here’s the script: - - -``` -import java.time.* - -import java.time.temporal.* - -def start = LocalDate.parse('2021-08-01','yyyy-MM-dd') - -def end = LocalDate.parse('2022-01-01','yyyy-MM-dd') - -println "${ChronoUnit.WEEKS.between(start,end)} weeks between $start and $end" -``` - -Copy this code into a file called **wb.groovy** and run it on the command line to see the results: - - -``` -$ groovy wb.groovy -21 weeks between 2021-08-01 and 2022-01-01 -``` - -Let’s review what’s going on. - -### Date and time - -The [**java.time.LocalDate** class][4] provides many useful static methods (like **parse()** shown above, which lets us convert from a string to a **LocalDate** instance according to a pattern, in this case, **‘yyyy-MM-dd’**). The format characters are explained in quite a number of places–for example, the documentation for [**java.time.format.DateTimeFormat**][5]. Notice that **M** represents “month,” not **m**, which represents “minute.” So this pattern defines a date formatted as a four-digit year, followed by a hyphen, followed by a two-digit month number (1-12), followed by another hyphen, followed by a two-digit day-of-month number (1-31). - -Notice as well that in Java, **parse()** requires an instance of **DateTimeFormat**: - - -``` -`parse(CharSequence text, DateTimeFormatter formatter)` -``` - -As a result, parsing becomes a two-step operation, whereas Groovy provides an additional version of **parse()** that accepts the format string directly in place of the **DateTimeFormat** instance. - -The [**java.time.temporal.ChronoUnit** class][6], actually an **Enum**, provides several **Enum constants**, like **WEEKS** (or **DAYS**, or **CENTURIES**...) which in turn provide the **between()** method that allows us to calculate the interval of those units between two **LocalDates** (or other similar date or time data types). Note that I used January 1, 2022, as the value for **end**; this is because **between()** spans the time period starting on the first date given up to but not including the second date given. - -### More date arithmetic - -Every so often, I need to know how many working days are in a specific time frame (like, say, a month). 
This handy script will calculate that for me:
-
-
-```
-import java.time.*
-
-def holidaySet = [LocalDate.parse('2021-01-01'), LocalDate.parse('2021-04-02'),
-    LocalDate.parse('2021-04-03'), LocalDate.parse('2021-05-01'),
-    LocalDate.parse('2021-05-15'), LocalDate.parse('2021-05-16'),
-    LocalDate.parse('2021-05-21'), LocalDate.parse('2021-06-13'),
-    LocalDate.parse('2021-06-21'), LocalDate.parse('2021-06-28'),
-    LocalDate.parse('2021-06-16'), LocalDate.parse('2021-06-18'),
-    LocalDate.parse('2021-08-15'), LocalDate.parse('2021-09-17'),
-    LocalDate.parse('2021-09-18'), LocalDate.parse('2021-09-19'),
-    LocalDate.parse('2021-10-11'), LocalDate.parse('2021-10-31'),
-    LocalDate.parse('2021-11-01'), LocalDate.parse('2021-11-21'),
-    LocalDate.parse('2021-12-08'), LocalDate.parse('2021-12-19'),
-    LocalDate.parse('2021-12-25')] as Set
-
-def weekendDaySet = [DayOfWeek.SATURDAY,DayOfWeek.SUNDAY] as Set
-
-int calcWorkingDays(start, end, holidaySet, weekendDaySet) {
-    (start..<end).inject(0) { subtotal, d ->
-        if (!(d in holidaySet || DayOfWeek.from(d) in weekendDaySet))
-            subtotal + 1
-        else
-            subtotal
-    }
-}
-
-def start = LocalDate.parse('2021-08-01')
-def end = LocalDate.parse('2021-09-01')
-
-println "${calcWorkingDays(start,end,holidaySet,weekendDaySet)} working day(s) between $start and $end"
-```
-
-Copy this code into a file called **wdb.groovy** and run it from the command line to see the results:
-
-
-```
-$ groovy wdb.groovy
-22 working day(s) between 2021-08-01 and 2021-09-01
-```
-
-Let's review this.
-
-First, I create a set of holiday dates (these are Chile's "días feriados" for 2021, in case you wondered) called holidaySet. Note that the default pattern for **LocalDate.parse()** is '**yyyy-MM-dd**', so I've left the pattern out here. Note as well that I'm using the Groovy shorthand **[a,b,c]** to create a **List** and then coercing it to a **Set**.
-
-Next, I want to skip Saturdays and Sundays, so I create another set incorporating two **enum** values of [**java.time.DayOfWeek**][8]: **SATURDAY** and **SUNDAY**.
-
-Then I define a method **calcWorkingDays()** that takes as arguments the start date, the end date (which, following the previous example of **between()**, is the first value outside the range I want to consider), the holiday set, and the weekend day set. Line by line, this method:
-
- * Defines a range between **start** and **end**, open on the **end** (that's what **<end** means), and executes the closure argument passed to the **inject()** method (**inject()** implements the 'reduce' operation on **List** in Groovy) on the successive elements **d** in the range:
- * As long as **d** is neither in the **holidaySet** nor in the **weekendDaySet**, increments the **subtotal** by 1
- * Returns the value of the result returned by **inject()**
-
-
-
-Next, I define the **start** and **end** dates between which I want to calculate working days.
-
-Finally, I call **println** using a Groovy [**GString**][9] to evaluate the **calcWorkingDays()** method and display the result.
-
-Note that I could have used the **each** closure instead of **inject**, or even a **for** loop. I could have also used Java Streams rather than Groovy ranges, lists, and closures. Lots of options.
-
-### But why not use groovy.Date?
-
-Some of you old Groovy users may be wondering why I'm not using good old [**groovy.Date**][10]. The answer is, I could use it. 
But Groovy Date is based on Java Date, and there are some good reasons for moving to **java.time**, even though Groovy Date added quite a few nice things to Java Date. - -For me, the main reason is that there are some not-so-great design decisions buried in the implementation of Java Date, the worst being that it is unnecessarily mutable. I spent a while tracking down a weird bug that arose from my poor understanding of the **clearTime()** method on Groovy Date. I learned it actually clears the time field of the date instance, rather than returning the date value with the time part set to ‘00:00:00’. - -Date instances also aren’t thread-safe, which can be kind of challenging for multithreaded applications. - -Finally, having both date and time wrapped up in a single field isn’t always convenient and can lead to some weird data modeling contortions. Think, for instance, of a day on which multiple events occur: Ideally, the _date_ field would be on the day, and the _time_ field would be on each event; but that’s not easy to do with Groovy Date. - -### Groovy is groovy - -Groovy is an Apache project, and it provides a simplified syntax for Java so you can use it for quick and simple scripts in addition to complex applications. You retain the power of Java, but you access it with an efficient toolset. [Try it soon][11], and see if you find your groove with Groovy. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/groovy-date-time - -作者:[Chris Hermansen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/clock_1.png?itok=lbyiCJWV (clock) -[2]: https://groovy-lang.org/ -[3]: https://sdkman.io/ -[4]: https://docs.groovy-lang.org/latest/html/groovy-jdk/java/time/LocalDate.html -[5]: https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html -[6]: https://docs.oracle.com/javase/8/docs/api/java/time/temporal/ChronoUnit.html -[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+set -[8]: https://docs.oracle.com/javase/8/docs/api/java/time/DayOfWeek.html -[9]: https://docs.groovy-lang.org/latest/html/api/groovy/lang/GString.html -[10]: https://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Date.html -[11]: https://groovy.apache.org/download.html diff --git a/sources/tech/20210828 Parse command-line options in Groovy.md b/sources/tech/20210828 Parse command-line options in Groovy.md deleted file mode 100644 index 1985825300..0000000000 --- a/sources/tech/20210828 Parse command-line options in Groovy.md +++ /dev/null @@ -1,184 +0,0 @@ -[#]: subject: "Parse command-line options in Groovy" -[#]: via: "https://opensource.com/article/21/8/parsing-command-options-groovy" -[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Parse command-line options in Groovy -====== -Learn to add options to your Groovy applications. -![Woman sitting in front of her computer][1] - -A recent article provided an [introduction to parsing command-line options in Java][2]. 
Because I really like Groovy, and because Groovy is well suited for scripting, and because it's fun to compare Java and Groovy solutions, I decided to paraphrase Seth's article, but using Groovy. - -### Install Groovy - -Groovy is based on Java, so it requires a Java installation. Both a recent and decent version of Java and Groovy might be in your Linux distribution's repositories. Alternately, you can install Groovy by following the instructions on the [groovy-lang.org][3]. - -A nice alternative for Linux users is [SDKMan][4], which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's latest Groovy release. - -### Parsing command-line options in Groovy - -When we create a script—a kind of short, often informal program—to be run from the command line, we normally follow the practice of passing arguments to the script on the command line. A good example of this is the `ls` command, used to list all the files and subfolders in a given folder, perhaps showing attributes and sorted in reverse order of last modification date, as in: - - -``` -`$ ls -lt /home/me` -``` - -To show the contents of my home folder like this: - - -``` -total 252 -drwxr-xr-x 5 me me 4096 Aug 10 12:23 Downloads -drwx------ 11 me me 4096 Aug 10 08:59 Dropbox -drwxr-xr-x 27 me me 12288 Aug 9 11:58 Pictures --rw-rw-r-- 1 me me 235 Jul 28 16:22 wb.groovy -drwxr-xr-x 2 me me 4096 Jul 20 22:04 Desktop -drwxrwxr-x 2 me me 4096 Jul 20 15:16 Fixed -drwxr-xr-x 2 me me 16384 Jul 19 08:49 Music --rw-rw-r-- 1 me me 433 Jul 7 13:24 foo -drwxr-xr-x 6 me me 4096 Jun 29 10:25 Documents -drwxr-xr-x 2 me me 4096 Jun 14 22:15 Templates --rw-rw-r-- 1 me me 803 Jun 14 11:33 bar -``` - -Of course, arguments to commands can be handled by inspecting them and deciding what to do in each case; but this ends up being a duplication of effort that can be avoided by using a library designed for that purpose. - -Seth's Java article introduces the [Apache Commons CLI library][5], a great API for handling command-line options. In fact, this library is so great that the good people who develop Groovy make it available by default in the Groovy installation. Therefore, once you have Groovy installed, you have access to this library through [**groovy.cli.picocli.CliBuilder**][6], which is already imported for you by default. - -Here's a Groovy script that uses this CLI builder to achieve the same results as Seth's Java program: - - -``` -1 def cli = new CliBuilder(usage: 'ho.groovy [-a] -c') -2 cli.with { -3    a longOpt: 'alpha', 'Activate feature alpha' -4    c longOpt: 'config', args:1, argName: 'config', required: true, 'Set config file' -5 } -6 def options = cli.parse(args) -7 if (!options) { -8    return -9 } -10 if (options.a) { -11    println' Alpha activated' -12 } -13 if (options.c) { -14    println "Config set to ${options.c}" -15 } -``` - -I've included line numbers here to facilitate the discussion. Save this script without the line numbers in a file called **ho.groovy**. - -On line 1, we define the variable **cli** and set it to a new instance of **CliBuilder** with a defined **usage** attribute. This is a string that will be printed if the **usage()** method is called. - -On lines 2-5, we use [the **with()** method][7] that Groovy adds to objects, together with the DSL defined by **CliBuilder**, to set up the option definitions. 
- -On line 3, we define the option '**a**', setting its **longOpt** field to '**alpha**' and its description to '**Activate feature alpha**'. - -Similarly, on line 4, we define the option '**c**', setting its **longOpt** field to '**config**' and specifying that this option takes one argument whose name is '**config**'. Moreover, this is a **required** option (sounds funny, I know), and its description is '**Set config file**'. - -Pausing briefly here for a bit of background, you can read all about these various options at the **CliBuilder** link above. More generally, things written in the form **longOpt: 'alpha'** are Groovy notation for key-value entries to be put in a **Map** instance, which you can read about [here][8]. Each key, in this case, corresponds to a method of the same name provided by the CliBuilder. If you're wondering what's going on with a line like: - - -``` -`a longOpt: 'alpha', 'Activate feature alpha'` -``` - -then it may be useful to mention that Groovy allows us to drop parentheses in certain circumstances; so the above is equivalent to: - - -``` -`a(longOpt: 'alpha', 'Activate feature alpha')` -``` - -i.e., it's a method call. Moreover, Groovy allows both positional and named parameters, the latter using that key: value syntax. - -Onward! On lines 6-9, we call the **parse()** method of the **CliBuilder** instance **cli**, passing the **args—**an array of **String** values created by the Groovy run-time and containing the arguments from the command line. This method returns a **Map** of the options where the keys are the short-form of the predefined options—in this case, '**a**' and '**c**'. If the parsing fails, then **parse()** emits the **usage** message, a reasonable error message, and returns a null value, so we don't have to use a try-catch block (which one doesn't see as often in Groovy). So here—line 8—we just return since all our work is done for us. - -On lines 10-12, we check to see if option '_a_' was included on the command line and if it is, print a message saying so. - -Similarly, on lines 13-15, we check to see if option '**c**' was included on the command line and if so, print a message showing the argument provided to it. - -### Running the command - -Let’s run the script a few times; first with no arguments: - - -``` -$ groovy ho.groovy -error: Missing required option: c -usage: ho.groovy [-a] -c - -a,--alpha Activate feature alpha - -c,--config <config> [Set][9] config file -$ -``` - -Notice the complaint about missing the required option '**c**'. - -Then with the '**c**' option but no argument: - - -``` -$ groovy ho.groovy -c -error: Missing argument for option: c -usage: ho.groovy [-a] -c - -a,--alpha -Activate feature alpha - -c,--config <config> [Set][9] config file -$ -``` - -Cool, the **CliBuilder** instance method **parse()** noticed no argument was provided to '**c**'. - -Finally, let's try with both options and an argument to '**c**', in their long form: - - -``` -$ groovy ho.groovy --alpha --config bar -Alpha activated -Config set to bar -$ -``` - -Looks good! - -Since the idea of the '**c**' option is to provide a config file, we could also tell the **CliBuilder** instance that the type of this argument is File, and it will return that instead of a String. But we'll leave that for another day. - -So, there you have it—command-line option parsing in Groovy. - -### Groovy resources - -The Groovy website has a lot of great documentation. Another great Groovy resource is [Mr. 
Haki][10], and specifically [this lovely article on CliBuilder][11]. - -Another great reason to learn Groovy is [Grails][12], a wonderfully productive full-stack web framework built on top of excellent components like Hibernate, Spring Boot, and Micronaut. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/parsing-command-options-groovy - -作者:[Chris Hermansen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer) -[2]: https://opensource.com/article/21/8/java-commons-cli -[3]: https://groovy-lang.org/ -[4]: https://sdkman.io/ -[5]: https://commons.apache.org/proper/commons-cli/ -[6]: https://docs.groovy-lang.org/latest/html/gapi/groovy/cli/picocli/CliBuilder.html -[7]: https://objectpartners.com/2014/07/09/groovys-with-and-multiple-assignment/ -[8]: https://www.baeldung.com/groovy-maps -[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+set -[10]: https://blog.mrhaki.com/ -[11]: https://blog.mrhaki.com/2009/09/groovy-goodness-parsing-commandline.html -[12]: https://grails.org/ diff --git a/sources/tech/20210830 How to install only security and bugfixes updates with DNF.md b/sources/tech/20210830 How to install only security and bugfixes updates with DNF.md deleted file mode 100644 index 69d9b5b4e6..0000000000 --- a/sources/tech/20210830 How to install only security and bugfixes updates with DNF.md +++ /dev/null @@ -1,232 +0,0 @@ -[#]: subject: "How to install only security and bugfixes updates with DNF" -[#]: via: "https://fedoramagazine.org/how-to-install-only-security-and-bugfixes-updates-with-dnf/" -[#]: author: "Mateus Rodrigues Costa https://fedoramagazine.org/author/mateusrodcosta/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How to install only security and bugfixes updates with DNF -====== - -![][1] - -Photo by [Scott Webb][2] on [Unsplash][3] - -This article will explore how to filter the updates available to your Fedora Linux system by type. This way you can choose to, for example, only install security or bug fixes updates. This article will demo running the _dnf_ commands inside toolbox instead of using a real Fedora Linux install. - -You might also want to read [Use dnf updateinfo to read update changelogs][4] before reading this article. - -### Introduction - -If you have been managing system updates for Fedora Linux or any other GNU/Linux distro, you might have noticed how, when you run a system update (with _dnf update_, in the case of Fedora Workstation), you usually are not installing only security updates. - -Due to how package management in a GNU/Linux distro works, generally (with the exception of software running in a container, under Flatpak, or similar technologies) you are updating every single package regardless of whether it’s a “system” software or an “app”. - -DNF divides updates in three types: “security”, “bugfix” and “enhancement”. And, as you will see, DNF allows filtering which types you want to operate on. - -But, why would you want to update only a subset of packages? 
- -Well, this might depend on how you personally choose to deal with system updates. If you are not comfortable at the moment with updating everything, then restricting the current update to only security updates might be a good choice. You could also install bug fix updates as well and only install enhancements and other types of updates during a future opportunity. - -### How to filter security and bug fix updates - -Start by creating a Fedora Linux 34 toolbox: - -``` -toolbox create --distro fedora --release f34 updatefilter-demo -``` - -Then enter that toolbox: - -``` -toolbox enter updatefilter-demo -``` - -From now on commands can be run on a real Fedora Linux install. - -First, run _dnf check-update_ to see the unfiltered list of packages: - -``` -$ dnf check-update -audit-libs.x86_64 3.0.5-1.fc34 updates -avahi.x86_64 0.8-14.fc34 updates -avahi-libs.x86_64 0.8-14.fc34 updates -... -vim-minimal.x86_64 2:8.2.3318-1.fc34 updates -xkeyboard-config.noarch 2.33-1.fc34 updates -yum.noarch 4.8.0-1.fc34 updates -``` - -DNF supports passing the types of updates to operate on as parameter: _‐‐security_ for security updates, _‐‐bugfix_ for bug fix updates and _‐‐enhancement_ for enhancement updates. Those work on commands such as _dnf check-update_, _dnf update_ and _dnf updateinfo_. - -For example, this is how you filter the list of available updates by security updates only: - -``` -$ dnf check-update --security -avahi.x86_64 0.8-14.fc34 updates -avahi-libs.x86_64 0.8-14.fc34 updates -curl.x86_64 7.76.1-7.fc34 updates -... -libgcrypt.x86_64 1.9.3-3.fc34 updates -nettle.x86_64 3.7.3-1.fc34 updates -perl-Encode.x86_64 4:3.12-460.fc34 updates -``` - -And now same thing but by bug fix updates only: - -``` -$ dnf check-update --bugfix -audit-libs.x86_64 3.0.5-1.fc34 updates -ca-certificates.noarch 2021.2.50-1.0.fc34 updates -coreutils.x86_64 8.32-30.fc34 updates -... -systemd-pam.x86_64 248.7-1.fc34 updates -systemd-rpm-macros.noarch 248.7-1.fc34 updates -yum.noarch 4.8.0-1.fc34 updates -``` - -They can even be combined, so you can use two or more of them at the same time. For example, you can filter the list to show both security and bug fix updates: - -``` -$ dnf check-update --security --bugfix -audit-libs.x86_64 3.0.5-1.fc34 updates -avahi.x86_64 0.8-14.fc34 updates -avahi-libs.x86_64 0.8-14.fc34 updates -... -systemd-pam.x86_64 248.7-1.fc34 updates -systemd-rpm-macros.noarch 248.7-1.fc34 updates -yum.noarch 4.8.0-1.fc34 updates -``` - -As mentioned, _dnf updateinfo_ also works with this filtering, so you can filter _dnf updateinfo_, _dnf updateinfo list_ and _dnf updateinfo info_. For example, for the list of security updates and their IDs: - -``` -$ dnf updateinfo list --security -FEDORA-2021-74ebf2f06f Moderate/Sec. avahi-0.8-14.fc34.x86_64 -FEDORA-2021-74ebf2f06f Moderate/Sec. avahi-libs-0.8-14.fc34.x86_64 -FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-common-2.33-20.fc34.x86_64 -FEDORA-2021-e14e86e40e Moderate/Sec. glibc-minimal-langpack-2.33-20.fc34.x86_64 -FEDORA-2021-8b25e4642f Low/Sec. krb5-libs-1.19.1-14.fc34.x86_64 -FEDORA-2021-83fdddca0f Moderate/Sec. libcurl-7.76.1-7.fc34.x86_64 -FEDORA-2021-31fdc84207 Moderate/Sec. libgcrypt-1.9.3-3.fc34.x86_64 -FEDORA-2021-d1fc0b9d32 Moderate/Sec. nettle-3.7.3-1.fc34.x86_64 -FEDORA-2021-92e07de1dd Important/Sec. 
perl-Encode-4:3.12-460.fc34.x86_64 -``` - -If desired, you can install only security updates: - -``` -# dnf update --security -================================================================================ - Package Arch Version Repository Size -================================================================================ -Upgrading: - avahi x86_64 0.8-14.fc34 updates 289 k - avahi-libs x86_64 0.8-14.fc34 updates 68 k - curl x86_64 7.76.1-7.fc34 updates 297 k -... - perl-Encode x86_64 4:3.12-460.fc34 updates 1.7 M -Installing weak dependencies: - glibc-langpack-en x86_64 2.33-20.fc34 updates 563 k - -Transaction Summary -================================================================================ -Install 1 Package -Upgrade 11 Packages - -Total download size: 9.7 M -Is this ok [y/N]: -``` - -Or even to install both security and bug fix updates while ignoring enhancement updates: - -``` -# dnf update --security --bugfix -================================================================================ - Package Arch Version Repo Size -================================================================================ -Upgrading: - audit-libs x86_64 3.0.5-1.fc34 updates 116 k - avahi x86_64 0.8-14.fc34 updates 289 k - avahi-libs x86_64 0.8-14.fc34 updates 68 k -... - rpm-plugin-systemd-inhibit x86_64 4.16.1.3-1.fc34 fedora 23 k - shared-mime-info x86_64 2.1-2.fc34 fedora 374 k - sqlite x86_64 3.34.1-2.fc34 fedora 755 k - -Transaction Summary -================================================================================ -Install 11 Packages -Upgrade 45 Packages - -Total download size: 32 M -Is this ok [y/N]: -``` - -### Install only specific updates - -You may also choose to only install the updates with a specific ID, such as _FEDORA-2021-74ebf2f06f_ for avahi by using _–advisory_ and specifying the ID: - -``` -# dnf update --advisory=FEDORA-2021-74ebf2f06f -================================================================================ - Package Architecture Version Repository Size -================================================================================ -Upgrading: - avahi x86_64 0.8-14.fc34 updates 289 k - avahi-libs x86_64 0.8-14.fc34 updates 68 k - -Transaction Summary -================================================================================ -Upgrade 2 Packages - -Total download size: 356 k -Is this ok [y/N]: -``` - -Or even multiple updates, with _‐‐advisories_: - -``` -# dnf update --advisories=FEDORA-2021-74ebf2f06f,FEDORA-2021-83fdddca0f -================================================================================ - Package Architecture Version Repository Size -================================================================================ -Upgrading: - avahi x86_64 0.8-14.fc34 updates 289 k - avahi-libs x86_64 0.8-14.fc34 updates 68 k - curl x86_64 7.76.1-7.fc34 updates 297 k - libcurl x86_64 7.76.1-7.fc34 updates 284 k - -Transaction Summary -================================================================================ -Upgrade 4 Packages - -Total download size: 937 k -Is this ok [y/N]: -``` - -### Conclusion - -In the end it all comes down to how you personally prefer to manage your updates. But if you need, for whichever reason, to only install security updates, then these filters will surely come in handy! 
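-
-If security updates are the only thing you ever want applied without review, the same filtering can also be automated on a real Fedora Linux install (not the demo toolbox) with the _dnf-automatic_ package. This is only a minimal sketch, assuming the stock configuration file shipped by that package:
-
-```
-$ sudo dnf install dnf-automatic
-$ sudoedit /etc/dnf/automatic.conf    # set upgrade_type = security and apply_updates = yes
-$ sudo systemctl enable --now dnf-automatic.timer
-```
-
-The timer then periodically fetches and applies only the security subset, mirroring the `dnf update --security` behavior shown above.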
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/how-to-install-only-security-and-bugfixes-updates-with-dnf/ - -作者:[Mateus Rodrigues Costa][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/mateusrodcosta/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/how-to-install-only-security-and-bugfixes-updates-with-dnf-816x345.jpg -[2]: https://unsplash.com/@scottwebb?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/security?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/use-dnf-updateinfo-to-read-update-changelogs/ diff --git a/sources/tech/20210830 Print from anywhere with CUPS on Linux.md b/sources/tech/20210830 Print from anywhere with CUPS on Linux.md deleted file mode 100644 index 025dce01e1..0000000000 --- a/sources/tech/20210830 Print from anywhere with CUPS on Linux.md +++ /dev/null @@ -1,113 +0,0 @@ -[#]: subject: "Print from anywhere with CUPS on Linux" -[#]: via: "https://opensource.com/article/21/8/share-printer-cups" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Print from anywhere with CUPS on Linux -====== -Share your printer with the Common Unix Printing System (CUPS). -![Two hands holding a resume with computer, clock, and desk chair ][1] - -I have a printer in my office, but sometimes I work on my laptop in another room of the house. This isn't a problem for me for two reasons. First of all, I rarely print anything on paper and have gone months without using the printer. Secondly, though, I've set the printer to be shared over my home network, so I can send files to print from anywhere in the house. I didn't need any special equipment for this setup. It's accomplished with just my usual Linux computer and the Common Unix Printing System (CUPS). - -### Installing CUPS on Linux - -If you're running Linux, BSD, or macOS, then you probably already have CUPS installed. CUPS has been the open source solution to Unix printing since 1997. Apple relied on it so heavily for their fledgling Unix-based OS X that they ended up buying it in 2007 to ensure its continued development and maintenance. - -If your system doesn't already have CUPS installed, you can install it with your package manager. For example, on Fedora, Mageia, or CentOS: - - -``` -`$ sudo dnf install cups` -``` - -On Debian, Linux Mint, and similar: - - -``` -`$ sudo apt install cups` -``` - -### Accessing CUPS on Linux and Mac - -To access CUPS, open a web browser and navigate to `localhost:631`, which tells your computer to open whatever's on port 631 on itself (your computer always [refers to itself as localhost][2]). - -Your web browser opens a page providing you access to your system's printer settings. From here, you can add printers, modify printer defaults, monitor queued jobs, and allow printers to be shared over your local network. - -![CUPS web user interface][3] - -Figure 1: The CUPS web user interface. - -### Configuring a printer with CUPS - -You can either add a new printer or modify an existing one from within the CUPS interface. 
Modifying a printer involves the exact same pages as adding a new one, except that when you're adding a printer, you make new choices, and when you're modifying a printer, you confirm or change existing ones. - -First, click on the **Administration** tab, and then the **Add Printer** button. - -If you're only modifying an existing printer, click **Manage Printers** instead, and then choose the printer you want to change. Choose **Modify Printer** from the **Administration** drop-down menu. - -Regardless of whether you're modifying or adding, you must enter administrative authorization before CUPS allows you to continue. You can either log in as root, if that's available to you, or as your normal user identity, as long as you have `sudo` privileges. - -Next, you're presented with a list of printer interfaces and protocols that you can use for a printer. If your printer is plugged directly into your computer and is on, it's listed as a _Local Printer_. If the printer has networking built into it and is attached to a switch or router on your network, you can usually use the Internet Printing Protocol (ipp) to access it (you may have to look at your router to determine the IP address of the printer, but read your printer's documentation for details). If the printer is a Hewlett-Packard, you may also be able to use HPLIP to access it. - -Use whatever protocol makes sense for your physical setup. If you're unsure of what to use, you can try one, attempt to print a test page, and then try a different one in the case of failure. - -The next screen asks for human-friendly details about the printer. This is mostly for your reference. Enter a name for the printer that makes sense (I usually use the model number, but large organizations sometimes name their printers after things like fictional starships or capital cities), a description, and the location. - -You may also choose to share the printer with other computers on your network. - -![CUPS web UI to share printers][4] - -Figure 2: CUPS web user interface to share printers. - -If sharing is not currently enabled, click the checkbox to enable sharing. - -### Drivers - -On the next screen, you must set your printer driver. Open source drivers for printers can often be found on [openprinting.org][5]. There's a good chance you already have a valid driver, as long as you have the `gutenprint` package installed, or have installed drivers bundled with the printer. If the printer is a PostScript printer (many laser printers are), you may only need a PPD file from [openprinting.org][5] rather than a driver. - -Assuming you have drivers installed, you can choose your printer's make (manufacturer) for a list of available drivers. Select the appropriate driver and continue. - -### Connecting to a shared printer - -Now that you have successfully installed and configured your printer, you can connect to it from any other computer on your network. For example, suppose you have a laptop called **client** that you use around the house. You want to add your shared printer to it. - -On the GNOME and Plasma desktops, you can add a printer from the **Printer** screen of **Settings:** - - * If you have your printer connected to a computer, then you enter the IP address of the _computer_ (because the printer is accessible through its host). - * If you have your printer connected to a switch or router, then enter the IP address of the printer itself. - - - -On macOS, printer settings can be found in **System Preferences**. 
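-
-You can also sanity-check the new queue from a terminal, since CUPS ships its command-line tools alongside the web interface. A quick sketch (the queue name `MyPrinter` is only a placeholder for whatever name you gave the printer earlier):
-
-```
-$ lpstat -p -d                             # list known printers and the default destination
-$ lp -d MyPrinter ~/Documents/test.txt     # send a small test job to the shared queue
-```
-
-If `lpstat` lists the shared printer, the client is ready to print.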
- -Alternately, you can keep using the CUPS interface on your client computer. The process to access CUPS is the same: Ensure CUPS is installed, open a network, and navigate to `localhost:631`. - -Once you've accessed the CUPS web interface, select the **Administration** tab. Click the **Find New Printers** button in the **Printers** section, and then add the shared printer to your network. You can also set the printer's IP address manually in CUPS by going through the normal **Add Printer** process. - -### Print from anywhere - -It's the 21st century! Put the USB thumb drive down, stop emailing yourself files to print from another computer, and make your printer available to your home network. It's surprisingly easy and supremely convenient. And best of all, you'll look like a networking wizard to all of your housemates! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/share-printer-cups - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair ) -[2]: https://opensource.com/article/21/4/network-management -[3]: https://opensource.com/sites/default/files/cups-web-ui.jpeg -[4]: https://opensource.com/sites/default/files/cups-web-ui-share_0.jpeg -[5]: http://openprinting.org diff --git a/sources/tech/20210830 The Definitive Guide to Using and Customizing the Dock in Ubuntu.md b/sources/tech/20210830 The Definitive Guide to Using and Customizing the Dock in Ubuntu.md deleted file mode 100644 index efb9cc5052..0000000000 --- a/sources/tech/20210830 The Definitive Guide to Using and Customizing the Dock in Ubuntu.md +++ /dev/null @@ -1,253 +0,0 @@ -[#]: subject: "The Definitive Guide to Using and Customizing the Dock in Ubuntu" -[#]: via: "https://itsfoss.com/customize-ubuntu-dock/" -[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" -[#]: collector: "lkxed" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -The Definitive Guide to Using and Customizing the Dock in Ubuntu -====== -When you log into Ubuntu, you’ll see the dock on the left side with some application icons on it. This dock (also known as launcher or sometimes as panel) allows you to quickly launch your frequently used programs. - -![Ubuntu Dock][1] - -I rely heavily on the dock and I am going to share a few tips about using the dock effectively and customize its looks and position. - -You’ll learn the following in this tutorial: - -* Basic usage of the dock: adding more applications and using shortcuts for launching applications. -* Customize the looks of the dock: Change the icon size, icon positions. -* Change the position: for single screen and multi-monitor setup -* Hide mounted disk from the dock -* Auto-hide or disable the dock -* Possibility of additional dock customization with dconf-editor -* Replace dock with other docking applications - -I’ll use the terms dock, panel and launcher in the tutorial. All of them refer to the same thing. 
- -### Using the Ubuntu dock: Absolute basic that you must know - -If you are new to Ubuntu, you should know a few things about using the dock. You’ll eventually discover these dock features, I’ll just speed up the discovery process for you. - -![A Video from YouTube][2] - -[Subscribe to our YouTube channel for more Linux videos][3] - -#### Add new applications to the dock (or remove them) - -The steps are simple. Search for the application from the menu and run it. - -The running application appears in the dock, below all other icons. Right click on it and select the “Add to Favorites” option. This will lock the icon to the dock. - -![Right click on the icon and select "Add to Favorites" to add icons to the dock in Ubuntu][4] - -Removing an app icon from the doc is even easier. You don’t even need to run the application. Simply right click on it and select “Remove From Favorites”. - -![Right-click on the icon and select "Remove from Favorites" to remove icons from the dock in Ubuntu][5] - -#### Reorder icon position - -By default, new application icons are added after all the other icons on the launcher. You don’t have to live with it as it is. - -To change the order of the icons, you just need to drag and drop to the other position of your choice. No need to “lock it” or any additional effort. It stays on that location until you make some changes again. - -![Reorder Icons On Ubuntu Docks][6] - -#### Right click to get additional options for some apps - -Left-clicking on an icon launches the application or bring it to focus if the application is already running. - -Right-clicking the icon gives you additional options. Different applications will have different options. - -For browsers, you can open a new private window or preview all the running windows. - -![Right Click Icons Ubuntu Dock][7] - -For file manager, you can go to all the bookmarked directories or preview opened windows. - -You can, of course, quit the application. Most applications will quit while some applications like Telegram will be minimized to the system tray. - -#### Use keyboard shortcut to launch applications quickly [Not many people know about this one] - -The dock allows you to launch an application in a single mouse click. But if you are like me, you can save that mouse click with a keyboard shortcut. - -Using the Super/Window key and a number key will launch the application on that position. - -![Keyboard Shortcut For Ubuntu Dock][8] - -If the application is already running, it is brought to focus, i.e. it appears in front of all the other running application windows. - -Since it is position-based, you should make sure that you don’t reorder the icons all the time. Personally, I keep Firefox at position 1, file manager at 2 and the alternate browser at 3 and so on until number 9. This way, I quickly launch the file manager with Super+2. - -I find it easier specially because I have a three screen setup and moving the mouse to the launcher on the first screen is a bit too much of trouble. You can enable or disable the dock on additional screen. I’ll show that to you later in this tutorial. - -### Change the position of the dock - -By default, the dock is located on the left side of your screen. Some people like the launcher at the bottom, in a more traditional way. - -[Ubuntu allows you to change the position of the dock][9]. You can move it to the bottom or to the right side. I am not sure many people actually put the dock on the top, so moving the dock to the top is not an option here. 
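-
-If you prefer the terminal, the position can usually be changed with `gsettings` as well, because the Ubuntu dock is a modified dash-to-dock extension. This is a sketch that assumes the `org.gnome.shell.extensions.dash-to-dock` schema (the same schema used by the other commands in this tutorial) and its `dock-position` key:
-
-```
-gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
-```
-
-The change applies immediately; the value LEFT restores the default. The graphical way is shown below.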
- -![Change Launcher Position in Ubuntu][10] - -To change the dock position, go to Settings->Appearance. You should see some options under Dock section. You need to change the “Position on screen” settings here. - -![Change Dock Position in Ubuntu][11] - -#### Position of dock on a multiple monitor setup - -If you have multiple screens attached to your system, you can choose whether to display the dock on all screens or one of the chosen screens. - -![Ubuntu Dock Settings Multimonitor][12] - -Personally, I display the dock on my laptop screen only which is my main screen. This gives me maximum space on the additional two screens. - -### Change the appearance of the dock - -Let’s see some more dock customization options in Ubuntu. - -Imagine you added too many applications to the dock or have too many applications open. It will fill up the space and you’ll have to scroll to the top and bottom to go to the applications at end points. - -What you can do here is to change the icon size and the dock will now accommodate more icons. Don’t make it too small, though. - -![Normal Icon Size Dock][13] - -![Smaller Icon Size Dock][14] - -To do that, go to Settings-> Appearance and change it by moving the slider under Icon size. The default icons size is 48 pixels. - -![Changing Icon Size In Ubuntu Dock][15] - -#### Hide mounted disks from the launcher - -If you plug in a USB disk or SD Card, it is mounted to the system, and an icon appear in the launcher immediately. This is helpful because you can right click on it and select safely remove drive option. - -![External Mounted Disks In Ubuntu Dock][16] - -If you somehow find it troublesome, you can turn this feature off. Don’t worry, you can still access the mounted drives from the file manager. - -Open a terminal and use the following command: - -``` -gsettings set org.gnome.shell.extensions.dash-to-dock show-mounts false -``` - -The changes take into effect immediately. You won’t be bothered with mounted disk being displayed in the launcher. - -If you want the default behavior back, use this command: - -``` -gsettings set org.gnome.shell.extensions.dash-to-dock show-mounts true -``` - -### Change the behavior of dock - -Let’s customize the default behavior of the dock and make it more suitable to your needs. - -#### Enable minimize on click - -If you click on the icon of a running application, its window will be brought to focus. That’s fine. However, if you click on it, nothing happens. By default, clicking on the same icon won’t minimize the application. - -Well, this is the behavior in modern desktop, but I don’t like it. I prefer that the application is minimized when I click on its icon for the second time. - -If you are like me, you may want to [enable click to minimize option in Ubuntu][17]: - -![A Video from YouTube][18] - -To do that, open a terminal and enter the following command: - -``` -gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize' -``` - -#### Auto-hide Ubuntu dock and get more screen space - -If you want to utilize the maximum screen space, you can enable auto-hide option for the dock in Ubuntu. - -This will hide the dock, and you’ll get the entire screen. The dock is still accessible, though. Move your cursor to the location of the dock where it used to be, and it will appear again. When the dock reappears, it is overlaid on the running application window. And that’s a good thing otherwise too many elements would start moving on the screen. 
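-
-The same dash-to-dock schema can also toggle auto-hide, in case you want to script it. A sketch, assuming the `dock-fixed` key is what keeps the dock permanently visible:
-
-```
-gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
-```
-
-Setting it back to true pins the dock again, and `gsettings list-keys org.gnome.shell.extensions.dash-to-dock` shows every key you can experiment with. If you would rather use the GUI, read on.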
- -The auto-hide option is available in Settings-> Appearance and under Dock section. Just toggle it. - -![Autohide the Dock Ubuntu][19] - -If you don’t like this behavior, you can enable it again the same way. - -#### Disable Ubuntu dock - -Auto-hide option is good enough for many people, but some users simply don’t like the dock. If you are one of those users, you also have the option to disable the Ubuntu dock entirely. - -Starting with Ubuntu 20.04, you have the Extensions application available at your disposal to [manage GNOME Extensions][20]. - -![Gnome Extensions App Ubuntu][21] - -With this Extensions application, you can easily disable or re-enable the dock. - -![Disable Dock Ubuntu][22] - -### Advanced dock customization with dconf-editor [Not recommended] - -##### Warning - -The dconf-editor allows you to change almost every aspect of the GNOME desktop environment. This is both good and bad because you must be careful in editing. Most of the settings can be changed on the fly, without asking for conformation. While you may reset the changes, you could still put your system in such a state that it would be difficult to put things back in order.For this reason, I advise not to play with dconf-editor, specially if you don’t like spending time in troubleshooting and fixing problems or if you are not too familiar with Linux and GNOME. - -The [dconf editor][23] gives you additional options to customize the dock in Ubuntu. Install it from the software center and then navigate to org > gnome > shell > extensions > dash-to-dock. You’ll find plenty of options here. I cannot even list them all here. - -![Dconf Editor Dock][24] - -### Replace the dock in Ubuntu - -There are several third-party dock applications available for Ubuntu and other Linux distributions. You can install a dock of your choice and use it. - -For example, you can install Plank dock from the software center and use it in similar fashion to Ubuntu dock. - -![Plank Dock Ubuntu][25] - -Disabling Ubuntu Dock would be a better idea in this case. It won’t be wise to use multiple docks at the same time. - -### Conclusion - -This tutorial is about customizing the default dock or launcher provided in Ubuntu’s GNOME implementation. Some suggestions should work on the dock in vanilla GNOME as work well. - -I have shown you most of the common Ubuntu dock customization. You don’t need to go and blindly follow all of them. Read and think which one suits your need and then act upon it. - -Was it too trivial or did you learn something new? Would you like to see more such tutorials? I welcome your suggestions and feedback on dock customization. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/customize-ubuntu-dock/ - -作者:[Abhishek Prakash][a] -选题:[lkxed][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lkxed -[1]: https://itsfoss.com/wp-content/uploads/2021/01/ubuntu-dock.png -[2]: https://player.vimeo.com/video/534830884 -[3]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 -[4]: https://itsfoss.com/wp-content/uploads/2021/01/add-icons-to-dock.png -[5]: https://itsfoss.com/wp-content/uploads/2021/01/remove-icons-from-dock.png -[6]: https://itsfoss.com/wp-content/uploads/2021/01/reorder-icons-on-ubuntu-docks-800x430.gif -[7]: https://itsfoss.com/wp-content/uploads/2021/01/right-click-icons-ubuntu-dock.png -[8]: https://itsfoss.com/wp-content/uploads/2021/01/keyboard-shortcut-for-ubuntu-dock.png -[9]: https://itsfoss.com/move-unity-launcher-bottom/ -[10]: https://itsfoss.com/wp-content/uploads/2021/01/change-launcher-position-ubuntu.png -[11]: https://itsfoss.com/wp-content/uploads/2021/01/change-dock-position-ubuntu.png -[12]: https://itsfoss.com/wp-content/uploads/2021/01/ubuntu-dock-settings-multimonitor.png -[13]: https://itsfoss.com/wp-content/uploads/2021/01/normal-icon-size-dock.jpg -[14]: https://itsfoss.com/wp-content/uploads/2021/01/smaller-icon-size-dock.jpg -[15]: https://itsfoss.com/wp-content/uploads/2021/01/changing-icon-size-in-ubuntu-dock.png -[16]: https://itsfoss.com/wp-content/uploads/2021/01/external-mounted-disks-in-ubuntu-dock.png -[17]: https://itsfoss.com/click-to-minimize-ubuntu/ -[18]: https://giphy.com/embed/52FlrSIMxnZ1qq9koP -[19]: https://itsfoss.com/wp-content/uploads/2021/01/autohide-dock-ubuntu.png -[20]: https://itsfoss.com/gnome-shell-extensions/ -[21]: https://itsfoss.com/wp-content/uploads/2020/06/GNOME-extensions-app-ubuntu.jpg -[22]: https://itsfoss.com/wp-content/uploads/2021/01/disable-dock-ubuntu.png -[23]: https://wiki.gnome.org/Apps/DconfEditor -[24]: https://itsfoss.com/wp-content/uploads/2021/01/dconf-editor-dock.png -[25]: https://itsfoss.com/wp-content/uploads/2021/01/plank-dock-Ubuntu-800x382.jpg diff --git a/sources/tech/20210901 Control your Raspberry Pi remotely with your smartphone.md b/sources/tech/20210901 Control your Raspberry Pi remotely with your smartphone.md deleted file mode 100644 index 529b3c7f70..0000000000 --- a/sources/tech/20210901 Control your Raspberry Pi remotely with your smartphone.md +++ /dev/null @@ -1,212 +0,0 @@ -[#]: subject: "Control your Raspberry Pi remotely with your smartphone" -[#]: via: "https://opensource.com/article/21/9/raspberry-pi-remote-control" -[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Control your Raspberry Pi remotely with your smartphone -====== -Control the GPIOs of your Raspberry Pi remotely with your smartphone. -![A person looking at a phone][1] - -Wouldn't it be nice to control the general-purpose input/outputs (GPIOs) of the Raspberry Pi remotely with your smartphone? If you can answer the question in the affirmative, I would like to introduce you to a simple way to implement this. Writing this article, I have no specific application in mind, but I can think of combining it with lawn irrigation, any illumination, or a garage door opener. 
-
-Anyway, all you need to get started is a Raspberry Pi and a smartphone. The actual logic is already available on GitHub, so even without programming skills, you will be able to follow the steps described in this article.
-
-### Architecture
-
-We do the major work with [Pythonic][2]—a graphical Python programming framework I develop in my spare time. Pythonic brings a [Telegram][3] bot programming element with it, which acts as our smartphone interface. A significant advantage of this setup is that it scales with the number of clients: you can decide whether you want to control the GPIOs only by yourself, share them with your relatives or friends, or share the control capabilities with the public. Of course, a prerequisite is permanent internet access so that the client and the Telegram server can communicate. To establish internet access, you could use either the Ethernet interface or the WiFi functionality of the Raspberry Pi.
-
-### Install Pythonic
-
-To get started, you have to install Pythonic on your Raspberry Pi. The easiest way of doing that is to flash the SD card with the preconfigured Pythonic image available on [sourceforge.net][4].
-
-Download and unzip the image and flash it to the SD card of the Raspberry Pi. On Windows, you can use [balenaEtcher][5] for it. On Linux, you can do it with the onboard tools.
-
- 1. Plug in the SD card and check which device it shows up as by typing `lsblk -p`.
-
-![Using lsblk -p to check under which device your SD card shows ][6]
-
-(Stephan Avenwedde, [CC-BY SA 4.0][7])
-
- 2. In the screenshot above, the SD card device is `/dev/sdc`, and my system automatically mounted the two partitions that were found on it. If this is the case for you, unmount them by typing `umount /dev/sdc1 && umount /dev/sdc2`.
-
- 3. Flash the SD card with the following command: `dd if=~/Downloads/Pythonic-1.7.img of=/dev/sdc bs=32M conv=fsync`.
-**Attention**: This will delete all previous files on the SD card.
-
- 4. The flashing process will take a while. Once the process is finished, put the SD card back in your Raspberry Pi and boot it up.
-
-### Establishing a connection
-
-The Pythonic image has no pre-installed desktop. The whole configuration is web-based, so you must establish a TCP/IP connection. It is straightforward to connect using an ordinary internet router. If you don't have access to such a router, you can also establish a connection over the onboard universal asynchronous receiver/transmitter (UART) device to configure the Ethernet or WiFi interface.
-
-#### Local DNS
-
-By default, the Pythonic image is configured to acquire an IP address by DHCP. Your internet router at home usually runs a DHCP server that distributes IP addresses to connected devices. Make a connection between a free Ethernet port of your internet router and the Ethernet port on your Raspberry Pi and boot it up.
-
-You can now try to access the web-based GUI from a device within your local network. If the DNS in your local network works properly, open a browser and navigate to `http://PythonicRPI:7000/` to open the programming GUI.
-
-#### Local IP
-
-I assume your router also offers a graphical configuration GUI. The configuration GUI provides information about the devices in your local network. You can find the IP address of your local router by typing `ip route`.
-
-In my case, the router is available at 192.168.188.1. Now log in to your router's configuration page and check which IP address the Raspberry Pi was given.
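-
-If your router does not show a list of connected devices, you can also locate the Raspberry Pi by scanning the local subnet from another Linux machine. A sketch, assuming the 192.168.188.0/24 network used in this example and that `nmap` is installed; running it as root additionally prints MAC vendor information, which usually makes the Raspberry Pi easy to spot:
-
-```
-$ sudo nmap -sn 192.168.188.0/24
-```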
- -![Viewing active connections][8] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -In my network, the Raspberry Pi is available under 192.168.188.63, so I can access the GUI at http ://192.168.188.63:7000/. - -#### UART - -Put the SD card back into the card reader and mount the _boot_ partition. Open the _config.txt_ on the _boot_ partition and add the following line to the end: - - -``` -`enable_uart=1` -``` - -Put the SD card back in the Raspberry Pi and boot it up. You can now establish a console connection with a UART-USB converter to set up a static IP address or configure a WiFi connection. - -![Establishing a UART connection][9] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -The default connection parameters are: - - * TxD: GPIO14 - * RxD: GPIO15 - * Ground: Pin 6 or 14 - * Baud rate: 115200 - * Data bits: 8 - * Parity bit: None - * Stop bits: 1 - - - -You can find more information on [elinux.org][10]. - -### Uploading the configuration - -To proceed with the upcoming steps, download the example files from [github][11] to your local hard drive. - -![GitHub example files repository ][12] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -The example consists of several files of two elementary types: - - * `*.py-files`—Contains the actual implementation of specific functionality. - * `current_config.json`—This file describes the configured elements, the links between the elements, and the variable configuration of the elements. - - - -This example is a slightly modified version of the already available reference implementation. You can access it by dragging and dropping the files from the left sidebar to the working area. - -Now upload the configuration to your target: - -![Pythonic GUI overview][13] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -With the blue-marked button, you upload the `current_config.json` to the target. You can upload only valid configuration files. After uploading, you can find the file on the target under `/home/pythonic/Pythonic/current_config.json`. - -With the green-marked button, you upload each `*.py-files`. Afterward, the `*.py-files` can be found under `/home/pythonic/Pythonic/executables`. - -It is possible to upload any kind of file to the `executables` folder because I plan to support binary executables in the future. - -However, so that the configuration works, the actual implementation must be available for each element described in `current_config.json`. - -### Setup a Telegram Bot - -The configuration should now look like this: - -![Pythonic GPIO remote configuration][14] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -Well done! But this setup won't work yet. Try to start this configuration by clicking **Play** on the _ManualScheduler - 0x5f8125f5_ element. The connected Telegram element will start but then immediately quit. This is because the Telegram element needs some additional configuration: Right-click on the Telegram element. You should now see pop-up windows like this: - -![Pop-up for Phythonic GPIO remote Telegram][15] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -You have to provide a Telegram bot token to communicate with the server. The process of creating a bot token is described on [core.telegram.org][16]. - -In a nutshell: Start a chat with the [BotFather][17] and create a bot with the `/newbot` command. At the end of the process, the BotFather will provide you a token that you can copy and paste to the Telegram element. - -That's it. 
Now you should be able to start the Telegram element by clicking on the play button on the _ManualScheduler - 0x5f8125f5_ element. The Telegram element should now be active, which can be seen from the green frame. - -![ Active RPI Telegram element][18] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -The spinning bar on the bottom info line indicates a working connection with the backend. - -Start a chat with your newly created bot by typing _@<name-of-your-bot>_ in the search field of Telegram. Click **Start** to get the initial state of the GPIOs. I named my bot _RPIremoteIO_: - -![Start RPI Telegram][19] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -### Debugging and Modification - -Open a new tab in your browser and navigate to http ://PythonicRPI:8000/. This will open the pre-installed [code-server][20] IDE. On the left pane, click on the files button and open `telegram_2ca7cd73.py` : - -![RPI code server IDE][21] - -(Stephan Avenwedde, [CC-BY SA 4.0][7]) - -You should now be able to start debugging and follow the path of execution like in the following screen recording: - - - -The Telegram element uses an [inline keyboard][22] which shows the target state of GPIO4 and GPIO5. This way, several users could control the state of GPIOs without disturbing each other because the new target state for the GPIOs is always provided to all subscribers. - -### Conclusion - -With this example, you should get a feeling of how everything connects. You can adapt the example as you like: Change or add additional GPIOs, use the analog features or get the input state on demand. If you connect a suitable relay, you could also drive higher loads with the Raspberry Pi. I am sure you will do something great with it! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/raspberry-pi-remote-control - -作者:[Stephan Avenwedde][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/hansic99 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone) -[2]: https://github.com/hANSIc99/Pythonic -[3]: https://telegram.org/ -[4]: https://sourceforge.net/projects/pythonicrpi/ -[5]: https://www.balena.io/etcher/ -[6]: https://opensource.com/sites/default/files/uploads/pi_gen_lsblk_mounted_1.png (Using lsblk -p to check under which device your SD card shows ) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://opensource.com/sites/default/files/uploads/active_connections.png (Viewing active connections) -[9]: https://opensource.com/sites/default/files/uploads/pythonic_rpi_uart.jpg (Establishing a UART connection) -[10]: https://elinux.org/RPi_Serial_Connection -[11]: https://github.com/hANSIc99/Pythonic/tree/master/examples/rpi_telegram_remote_io -[12]: https://opensource.com/sites/default/files/uploads/github_example_remote_gpio.png (GitHub example files repository ) -[13]: https://opensource.com/sites/default/files/uploads/pythonic_gui_overview.png (Pythonic GUI overview) -[14]: https://opensource.com/sites/default/files/uploads/pythonic_gpio_remote_config.png (Pythonic GPIO remote configuration) -[15]: https://opensource.com/sites/default/files/uploads/pythonic_gpio_remote_telegram.png (Pop-up for Phythonic GPIO remote 
Telegram) -[16]: https://core.telegram.org/bots#6-botfather -[17]: https://t.me/botfather -[18]: https://opensource.com/sites/default/files/uploads/rpi_telegram_active.png (Active RPI Telegram element) -[19]: https://opensource.com/sites/default/files/uploads/rpi_start_telegram.png (Start RPI Telegram) -[20]: https://github.com/cdr/code-server -[21]: https://opensource.com/sites/default/files/uploads/rpi_code-server_ide.png (RPI code server IDE) -[22]: https://core.telegram.org/bots#inline-keyboards-and-on-the-fly-updating diff --git a/sources/tech/20210901 Getting ready for Fedora Linux.md b/sources/tech/20210901 Getting ready for Fedora Linux.md deleted file mode 100644 index 6a8d15ce08..0000000000 --- a/sources/tech/20210901 Getting ready for Fedora Linux.md +++ /dev/null @@ -1,297 +0,0 @@ -[#]: subject: "Getting ready for Fedora Linux" -[#]: via: "https://fedoramagazine.org/getting-ready-for-fedora-linux/" -[#]: author: "Hanku Lee https://fedoramagazine.org/author/hankuoffroad/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Getting ready for Fedora Linux -====== - -![][1] - -Photo by [Jacques Bopp][2] on [Unsplash][3] - -### Introduction - -Why does Linux remain vastly invisible to ordinary folks who make general use of computers? This article steps through the process to move to Fedora Linux Workstation for non-Linux users. It also describes features of the GUI (Graphic User Interface) and CLI (Command Line Interface) for the newcomer. This is a quick introduction, not an in-depth course. - -### Installation and configuration are straightforward - -Supposedly, a bootable USB drive is the most baffling experience of starting Linux for a beginner. In all fairness, installation with Fedora Media Writer and Anaconda is intuitive. - -##### Step-by-step installation process - - 1. [Make a Fedora USB stick][4]: 5 to 7 minutes depending on USB speed - 2. [Understand disk partitions and Linux file systems][5] - 3. [Boot from a USB device][6] - 4. [Install][7] with the Fedora installer, Anaconda: 15 to 20 minutes - 5. Software updates: 5 minutes - - - -Following this procedure, it is easy to help family and friends install Fedora Linux. - -##### Package management and configuration - -Instead of configuring the OS manually, adding tools and applications you need, you may choose a functional bundle from [Fedora Labs][8] for a specific use case. Design Suite, Scientific, Python Classroom, and more, are available. Plus, all processes are complete without the command line. - -##### Connecting devices and services - - * [Add a USB printer][9]: Fedora Linux detects most printers in a few seconds. Some may require the drivers. - * Configure a USB keyboard: Refer to simple [work-around][10] for a mechanical keyboard. - * [Sync with Google Drive][11]: Add an account either after installation, or at any time afterward. - - - -### Desktop customization is easy - -The default [GNOME desktop][12] is decent and free from distractions. - -A shortlist to highlight desktop benefits: - - * Simplicity: Clean design, fluid and elegant application grid. - * Reduced user effort: No alerts for paid services or long list of user consent. - * Accommodating software: GNOME requires little specialist knowledge or technical ability. - * Neat layout of system _Settings_: Larger icons and a better layout. - - - -The image below shows the applications and desktops currently available. 
Get here by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. There you will find LibreOffice for your document, spreadsheet, and presentation creation. Also available is Firefox for your web browsing. More applications are added using the _Software_ icon (second from right at the bottom of the screen). - -![GNOME desktop][13] - -##### Enable touchpad click (tapping) - -A change for [touchpad settings][14] is required for laptop users. - - 1. Go to _Activies > Show Applications > Settings > Mouse & Touchpad > Touchpad_ - 2. Change the default behavior of touchpad settings (double click) to tap-to-click (single tap) using the built-in touchpad - 3. Select ‘Tap to Click’ - - - -##### Add user accounts using the users settings tool - -During installation, you set up your first login account. For training or demo purposes, it is common to create a new user account. - - 1. Add users: Go to _Settings > Users > Unlock > Authentication> Add user_ - 2. Click at the top of the screen at the far right and then navigate to Power Off / Log out, and Select _Switch User_ to relogin as the new user. - - - -### Fedora Linux is beginner-friendly - -Yes, Fedora Linux caters to a broader selection of users. Since that is the case, why not dip into the shallow end of the Fedora community? - - * [Fedora Docs][15]: Clarity of self-help content is outstanding. - * Ask Fedora: Get help for anything about Fedora Linux. - * Magazine: Useful tips and user story are engaging. Make a suggestion to write about. - * Nest with Fedora: Warm welcome virtually from Fedora Linux community. - * Release parties. - - - -### Command line interface is powerful - -The command line is a way of giving instructions to a computer (shell) using a terminal. To be fair, the real power behind Fedora Linux is the Bash shell that empowers users to be problem solvers. The good news is that the text-based command is universally compatible across different versions of Linux. The Bash shell comes with the Fedora Linux, so there is no need to install it. - -The following will give you a feeling for the command line. However, you can accomplish many if not all day-to-day tasks without using the command line. - -#### How to use commands? - -Access the command line by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. Select _Terminal_. - -#### Understand the shell prompt - -The standard shell prompt looks like this: - -``` -[hank@fedora_test ~]$ -``` - -The shell prompt waits for a command. - -It shows the name of the user (hank), the computer being used (fedora_test), and the current working directory within the filesystem (~, meaning the user’s home directory). The last character of the prompt, $, indicates that this is a normal user’s prompt. - -#### Enter commands - -What common tasks should a beginner try out with command lines? - - * Command line information is available from the [Fedora Magazine][16] and [other sites][17]. - * Use _ls_ and _cd_ to list and navigate your file system. - * Make new directories (folders) with _mkdir_. - * Delete files with _rm_. - * Use _lsblk_ command to display partition details. - - - -#### How to deal with the error messages - - * Be attentive to error messages in the terminal. Common errors are missing arguments, typo of file name. - * Pause to think about why that happened. - * Figure out the correct syntax using the _man_ command. 
For example: -_man ls_ -displays the manual page for the _ls_ command. - - - -#### Perform administration tasks using _sudo_ - -When a user executes commands for installation, removal, or change of software, [the _sudo_ command][18] allows users to gain administrative or root access. The actions that required _sudo_ command are often called ‘the administrative tasks’. Sudo stands for **SuperUser DO**. The syntax for the _sudo_ command is as follows: - -``` -sudo [COMMAND] -``` - - 1. Replace _COMMAND_ with the command to run as the root user. - 2. Enter password - - - -What are the most used _sudo_ commands to start with? - - * List privileges - - - -``` -sudo -l -``` - - * Install a package - - - -``` -sudo dnf install [package name] -``` - - * Update a package - - - -``` -sudo dnf update [package name] -``` - - * List all packages - - - -``` -sudo dnf grouplist [package name] -``` - - * Manage disk partitions - - - -``` -sudo fdisk -l -``` - -### Built-in text editor is light and efficient - -[Nano][19] is the default command-line-based text editor for Fedora Linux. [vi][20] is another one often used on Fedora Linux. Both are light and fast. Which to us is a personal choice, really. Nano and vi remain essential tools for editing config files and writing scripts. Generally, Nano is much simpler to work with than vi but vi can be more powerful when you get used to it. - -##### What does a beginner benefit from a text editor? - - * Learn fundamentals of computing - - - -Linux offers a vast range of customization options and monitoring. Shell scripts make it possible to add new functionality and the editor is used to create the scripts. - - * Build cool things for home automation - - - -Raspberry Pi is a testing ground to build awesome projects for homes. [Fedora can be installed on Raspberry Pi][21]. Schools use the tiny microcomputer for IT training and experiment. Instead of a visual editor, it is easier to use a light and simple Nano editor to write files. - - * Test proof of concept with the public cloud services - - - -Most of the public cloud suppliers offer free sandbox account to spin up a virtual machine or configure the network. Cloud servers run Linux OS, so editing configuration files require a text editor. Without installing additional software, it is easy to invoke Nano on a remote server. - -##### How to use Nano text editor - -Type _nano_ and file name after the shell prompt $ and press Enter. - -``` -[hank@fedora_test ~]$ nano [filename] -``` - -Note that many of the most used commands are displayed at the bottom of the nano screen. The symbol ^ in Nano means to press the Ctrl key. - - * Use the arrow keys on the keyboard to move up and down, left and right. - * Edit file. - * Get built-in help by pressing ^G - * Exit by entering ^X and Y to save your file and return to the shell prompt. - - - -##### Examples of file extensions used for configuration or shell scripts - - * .cfg: User-configurable files in the /etc directory. - * .yaml: A popular type of configuration file with cross-language data portability. - * .json: JSON is a lightweight & open standard format for storing and transporting data. - * .sh: A shell script used universally for Unix/Linux systems. - - - -Above all, this is not a comprehensive guide on Nano or vi. Yet, adventurous learners should be aware of text editors for their next step in becoming accomplished in Fedora Linux. - -### Conclusion - -Does Fedora Workstation simplify the user experience of a beginner with Linux? Yes, absolutely. 
It is entirely possible to create a desktop quickly and get the job done without installing additional software or extensions. - -Taking it to the next level, how to get more people into Fedora Linux? - - * Make Fedora Linux device available at home. A repurposed computer with the above guide is a starting point. - * Demonstrate [cool things][22] with Fedora Linux. - * Share [power user tips][23] with shell scripts. - * Get involved with Open Source Software community such as the [Fedora project][24]. - - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/getting-ready-for-fedora-linux/ - -作者:[Hanku Lee][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/hankuoffroad/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/ready_for_fedora-816x345.jpg -[2]: https://unsplash.com/@jacquesbopp?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://fedoramagazine.org/make-fedora-usb-stick/ -[5]: https://docs.fedoraproject.org/en-US/fedora/rawhide/install-guide/appendixes/Disk_Partitions/ -[6]: https://docs.fedoraproject.org/en-US/fedora/rawhide/install-guide/install/Booting_the_Installation/ -[7]: https://docs.fedoraproject.org/en-US/fedora/rawhide/install-guide/install/Installing_Using_Anaconda/ -[8]: https://labs.fedoraproject.org/ -[9]: https://docs.fedoraproject.org/en-US/Fedora/14/html/User_Guide/chap-User_Guide-Printing.html -[10]: https://venthur.de/2021-04-30-keychron-c1-on-linux.html -[11]: https://fedoramagazine.org/connect-your-google-drive-to-fedora-workstation/ -[12]: https://developer.gnome.org/hig/principles.html -[13]: https://fedoramagazine.org/wp-content/uploads/2021/08/Screenshot-from-2021-08-12-23-27-13-1024x576.png -[14]: https://help.gnome.org/users/gnome-help/stable/mouse-touchpad-click.html.en -[15]: https://docs.fedoraproject.org/en-US/docs/ -[16]: https://fedoramagazine.org/?s=command+line -[17]: https://www.redhat.com/sysadmin/essential-linux-commands -[18]: https://fedoramagazine.org/howto-use-sudo/ -[19]: https://fedoramagazine.org/gnu-nano-minimalist-console-editor/ -[20]: https://www.redhat.com/sysadmin/vim-commands -[21]: https://docs.fedoraproject.org/en-US/quick-docs/raspberry-pi/ -[22]: https://fedoramagazine.org/automatically-light-up-a-sign-when-your-webcam-is-in-use/ -[23]: https://fedoramagazine.org/?s=bash -[24]: https://docs.fedoraproject.org/en-US/project/ diff --git a/sources/tech/20210902 Get started programming with DOS conio.md b/sources/tech/20210902 Get started programming with DOS conio.md deleted file mode 100644 index 33e8819bd6..0000000000 --- a/sources/tech/20210902 Get started programming with DOS conio.md +++ /dev/null @@ -1,341 +0,0 @@ -[#]: subject: "Get started programming with DOS conio" -[#]: via: "https://opensource.com/article/21/9/programming-dos-conio" -[#]: author: "Jim Hall https://opensource.com/users/jim-hall" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Get started programming with DOS conio -====== -Create various practical and exciting applications by programming with -conio. 
-![Person using a laptop][1] - -One of the reasons so many DOS applications sported a text user interface (or TUI) is because it was so easy to do. The standard way to control **con**sole **i**nput and **o**utput (**conio**) was with the `conio` library for many C programmers. This is a de-facto standard library on DOS, which gained popularity as implemented by Borland's proprietary C compiler as `conio.h`. You can also find a similar `conio` implementation in TK Chia's IA-16 DOS port of the GNU C Compiler in the `libi86` library of non-standard routines. The library includes implementations of `conio.h` functions that mimic Borland Turbo C++ to set video modes, display colored text, move the cursor, and so on. - -For years, FreeDOS included the OpenWatcom C Compiler in the standard distributions. OpenWatcom supports its own version of `conio`, implemented in `conio.h` for particular console input and output functions, and in `graph.h` to set colors and perform other manipulation. Because the OpenWatcom C Compiler has been used for a long time by many developers, this `conio` implementation is also quite popular. Let's get started with the OpenWatcom `conio` functions. - -### Setting the video mode - -Everything you do is immediately displayed on-screen via hardware. This is different from the `ncurses` library on Linux, where everything is displayed through terminal emulation. On DOS, everything is running on hardware. And that means DOS `conio` programs can easily access video modes and leverage screen regions in ways that are difficult using Linux `ncurses`. - -To start, you need to set the _video mode_. On OpenWatcom, you do this with the `_setvideomode` function. This function takes one of several possible values, but for most programs that run in color mode in a standard 80x25 screen, use `_TEXTC80` as the mode. - - -``` -#include <conio.h> -#include <graph.h> - -int -main() -{ -  _setvideomode(_TEXTC80); -  …  -``` - -When you're done with your program and ready to exit back to DOS, you should reset the video mode back to whatever values it had before. For that, you can use `_DEFAULTMODE` as the mode. - - -``` - _setvideomode(_DEFAULTMODE); -  return 0; -} -``` - -### Setting the colors - -Every PC built after 1981's Color/Graphics Adapter supports [16 text colors and 8 background colors][2]. Background colors are addressed with color indices 0 through 7, and text colors can be any value from 0 to 15: - -| | | | | | | --------- | | ----------------- | | | 0 Black | | 8 Bright Black | | | 1 Blue | | 9 Bright Blue | | | 2 Green | | 10 Bright Green | | | 3 Cyan | | 11 Bright Cyan | | | 4 Red | | 12 Bright Red | | | 5 Magenta | | 13 Bright Magenta | | | 6 Brown | | 14 Yellow | | | 7 White | | 15 Bright White | - -You can set both the text color and the color behind it. Use the `_settextcolor` function to set the text "foreground" color and `_setbkcolor` to set the text "background" color. For example, to set the colors to yellow text on a red background, you would use this pair of functions: - - -``` - _settextcolor(14); - _setbkcolor(4); -``` - -### Positioning text - -In `conio`, screen coordinates are always _row_,_col_ and start with 1,1 in the upper-left corner. For a standard 80-column display with 25 lines, the bottom-right corner is 25,80. - -Use the `_settextposition` function to move the cursor to a specific screen coordinate, then use `_outtext` to print the text you want to display. 
If you've set the colors, your text will use the colors you last defined, regardless of what's already on the screen. - -For example, to print the text "FreeDOS" at line 12 and column 36 (which is more or less centered on the screen) use these two functions: - - -``` -  _settextposition(12, 36); -  _outtext("FreeDOS"); -``` - -Here's a small example program: - - -``` -#include <conio.h> -#include <graph.h> - -int -main() -{ -    _setvideomode(_TEXTC80); - -    _settextcolor(14); -    _setbkcolor(4); - -    _settextposition(12, 36); -    _outtext("FreeDOS"); - -    [getch][3](); - -    _setvideomode(_DEFAULTMODE); - -    return 0; -} -``` - -Compile and run the program to see this output: - -![Print to the screen with conio][4] - -(Jim Hall, [CC BY-SA 4.0][5]) - -### Text windows - -The trick to unleashing the power of `conio` is to leverage a feature of the PC video display where a program can control the video hardware by region. These are called text windows and are a really cool feature of `conio`. - -A text window is just an area of the screen, defined as a rectangle starting at a particular _row_,_col_ and ending at a different _row_,_col_. These regions can take up the whole screen or be as small as a single line. Once you define a window, you can clear it with a background color and position text in it. - -To define a text window starting at row 5 and column 10, and extending to row 15 and column 70, you use the `_settextwindow` function like this: - - -``` -`  _settextwindow(5, 10, 15, 70);` -``` - -Now that you've defined the window, any text you draw in it uses 1,1 as the upper-left corner of the text window. Placing text at 1,1 will actually position that text at row 5 and column 10, where the window starts on the screen. - -You can also clear the window with a background color. The `_clearscreen` function does double duty to clear either the full screen or just the window that's currently defined. To clear the entire screen, give the value `_GCLEARSCREEN` to the function. To clear just the window, use `_GWINDOW`. With either usage, you'll fill that region with whatever background color you last set. For example, to clear the whole screen with cyan (color 3) and a smaller text window with blue (color 1) you could use this code: - - -``` -  _clearscreen(_GCLEARSCREEN); -  _setbkcolor(3); -  _settextwindow(5, 10, 15, 70); -  _setbkcolor(1); -  _clearscreen(_GWINDOW); -``` - -This makes it really easy to fill in certain areas of the screen. In fact, defining a window and filling it with color is such a common thing to do that I often create a function to do both at once. Many of my `conio` programs include some variation of these two functions to clear the screen or window: - - -``` -#include <conio.h> -#include <graph.h> - -void -clear_color(int fg, int bg) -{ -  _settextcolor(fg); -  _setbkcolor(bg); -  _clearscreen(_GCLEARSCREEN); -} - -void -textwindow_color(int top, int left, int bottom, int right, int fg, int bg) -{ -  _settextwindow(top, left, bottom, right); -  _settextcolor(fg); -  _setbkcolor(bg); -  _clearscreen(_GWINDOW); -} -``` - -A text window can be any size, even a single line. This is handy to define a title bar at the top of the screen or a status line at the bottom of the screen. 
Again, I find this to be such a useful addition to my programs that I'll frequently write functions to do it for me: - - -``` -#include <conio.h> -#include <graph.h> - -#include <string.h>                    /* for strlen */ - -void -clear_color(int fg, int bg) -{ -  …  -} - -void -textwindow_color(int top, int left, int bottom, int right, int fg, int bg) -{ -  …  -} - -void -print_header(int fg, int bg, const char *text) -{ -  textwindow_color(1, 1, 1, 80, fg, bg); - -  _settextposition(1, 40 - (strlen(text) / 2)); -  _outtext(text); -} - -void -print_status(int fg, int bg, const char *text) -{ -  textwindow_color(25, 1, 25, 80, fg, bg); - -  _settextposition(1, 1); -  _outtext(text); -} -``` - -### Putting it all together - -With this introduction to `conio`, and with the set of functions we've defined above, you can create the outlines of almost any program. Let's write a quick example that demonstrates how text windows work with `conio`. We'll clear the screen with a color, then print some sample text on the second line. That leaves room to put a title line at the top of the screen. We'll also print a status line at the bottom of the screen. - -This is the basics of many kinds of applications. Placing a text window towards the right of the screen could be useful if you were writing a "monitor" program, such as part of a control system, like this: - - -``` -#include <conio.h> -#include <graph.h> - -int -main() -{ -  _setvideomode(_TEXTC80); - -  clear_color(7, 1);                   /* white on blue */ -  _settextposition(2, 1); -  _outtext("test"); - -  print_header(0, 7, "MONITOR");       /* black on white */ - -  textwindow_color(3, 60, 23, 79, 15, 3);       /* br white on cyan */ -  _settextposition(3, 2); -  _outtext("hi mom"); - -  print_status(0, 7, "press any key to quit...");       /* black on white */ -  getch(); - -  _setvideomode(_DEFAULTMODE); - -  return 0; -} -``` - -Having already written our own window functions to do most of the repetitive work, this program becomes very straightforward: clear the screen with a blue background, then print "test" on the second line. There's a header line and a status line, but the interesting part is in the middle where the program defines a text window near the right edge of the screen and prints some sample text. The `getch()` function waits for the user to press a key on the keyboard, useful when you need to wait until the user is ready: - -![Conio mon][6] - -(Jim Hall, [CC BY-SA 4.0][5]) - -We can change only a few values to completely change the look and function of this program. By setting the background to green and red text on a white window, we have the start of a solitaire card game: - - -``` -#include <conio.h> -#include <graph.h> - -int -main() -{ -  _setvideomode(_TEXTC80); - -  clear_color(7, 2);                   /* white on green */ -  _settextposition(2, 1); -  _outtext("test"); - -  print_header(14, 4, "SOLITAIRE");    /* br yellow on red */ - -  textwindow_color(10, 10, 17, 22, 4, 7);       /* red on white */ -  _settextposition(3, 2); -  _outtext("hi mom"); - -  print_status(7, 6, "press any key to quit...");       /* white on brown */ -  getch(); - -  _setvideomode(_DEFAULTMODE); - -  return 0; -} -``` - -You could add other code to this sample program to print card values and suits, place cards on top of other cards, and other functionality to create a complete game. 
But for this demo, we'll just draw a single "card" displaying some text: - -![Conio solitaire][7] - -(Jim Hall, [CC BY-SA 4.0][5]) - -You can create other effects using text windows. For example, before drawing a message window, you could first draw a black window that's offset by one row and one column. The text window will appear to create a shadow over that area of the screen to the user. And we can do it all by changing only a few values in our sample program: - - -``` -#include <conio.h> -#include <graph.h> - -int -main() -{ -  _setvideomode(_TEXTC80); - -  clear_color(7, 1);                   /* white on blue */ -  _settextposition(2, 1); -  _outtext("test"); - -  print_header(15, 3, "PROGRAMMING IN CONIO");  /* br white on cyan */ - -  textwindow_color(11, 36, 16, 46, 7, 0);       /* shadow */ -  textwindow_color(10, 35, 15, 45, 7, 4);       /* white on red */ -  _settextposition(3, 2); -  _outtext("hi mom"); - -  print_status(0, 7, "press any key to quit...");       /* black on white */ -  getch(); - -  _setvideomode(_DEFAULTMODE); - -  return 0; -} -``` - -You often see this "shadow" effect used in DOS programs as a way to add some visual flair: - -![Conio Window with shadow][8] - -(Jim Hall, [CC BY-SA 4.0][5]) - -The DOS `conio` functions can do much more than I've shown here, but with this introduction to `conio` programming, you can create various practical and exciting applications. Direct screen access means your programs can be more interactive than a simple command-line utility that scrolls text from the bottom of the screen. Leverage the flexibility of `conio` programming and make your next DOS program a great one. - -### Download the conio cheat sheet - -As you explore programming with `conio`, it's helpful to have a list of common functions close at hand. I've created a double-sided cheat sheet with all the basics of `conio`, so **[download it][9]** and use it on your next `conio` project. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/programming-dos-conio - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop) -[2]: https://opensource.com/article/21/6/freedos-sixteen-colors -[3]: http://www.opengroup.org/onlinepubs/009695399/functions/getch.html -[4]: https://opensource.com/sites/default/files/conio-hello.png (Print to the screen with conio) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/uploads/conio-mon.png (Conio mon) -[7]: https://opensource.com/sites/default/files/uploads/conio-sol.png (Conio solitaire) -[8]: https://opensource.com/sites/default/files/uploads/conio-win.png (Conio Window with shadow) -[9]: https://opensource.com/downloads/dos-conio-cheat-sheet diff --git a/sources/tech/20210903 Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud.md b/sources/tech/20210903 Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud.md deleted file mode 100644 index 10d78e60bd..0000000000 --- a/sources/tech/20210903 Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud.md +++ /dev/null @@ -1,211 +0,0 @@ -[#]: subject: "Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud" -[#]: via: "https://fedoramagazine.org/instal-onlyoffice-docs-on-fedora-linux-with-podman/" -[#]: author: "kseniya_fedoruk https://fedoramagazine.org/author/kseniya_fedoruk/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud -====== - -![][1] - -Photo by [Chris Leggat][2] on [Unsplash][3] - -If you need a reliable office suite for online editing and collaboration within your sync & share platform, you can try ONLYOFFICE Docs. In this tutorial, we learn how to install it on your Fedora Linux with Podman and discover the ONLYOFFICE-Nextcloud integration. - -### What is ONLYOFFICE Docs - -[ONLYOFFICE Docs][4] (Document Server) is an open-source office suite distributed under GNU AGPL v3.0. It is comprised of web-based viewers and collaborative editors for text documents, spreadsheets, and presentations. The suite is highly compatible with OOXML formats (docx, xlsx, pptx). - -A brief features overview includes: - - * Full set of editing and styling tools, operations with fonts and styles, paragraph and text formatting. - * Inserting and customizing all kinds of objects: shapes, charts, text art, text boxes, etc. - * Academic formatting and navigation: endnotes, footnotes, table of contents, bookmarks. - * Content Controls for creating digital forms and templates. - * Extending functionality with plugins, building your own plugins using API. - * Collaborative features: real-time and paragraph-locking co-editing modes, review and track changes, comments and mentions, integrated chat, version history. 
- * Flexible access permissions: edit, view, comment, fill forms, review, restriction on copying, downloading, and printing, custom filter for spreadsheets.
-
-
-
-![][5]
-
-You can integrate ONLYOFFICE Docs with various cloud services such as Nextcloud, ownCloud, Seafile, Alfresco, Plone, etc. What’s more, developers can embed the editors into their own solutions. 
-
-You can also use the suite together with [ONLYOFFICE Groups][6], a free open-source collaboration platform distributed under Apache 2.0. The complete solution is available as [ONLYOFFICE Workspace.][7]
-
-### What is Podman
-
-Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Users can run containers either as root or in rootless mode. 
-
-It is available by default on Fedora Workstation. If that’s not the case, install podman with the command:
-
-```
-sudo dnf install podman
-```
-
-### What you need for ONLYOFFICE Docs installation
-
- * CPU: single core 2 GHz or better
- * RAM: 2 GB or more
- * HDD: at least 40 GB of free space
- * At least 4 GB of swap
-
-
-
-### Install and run ONLYOFFICE Docs
-
-Start with the following commands for the root-privileged deployment. This creates directories for mounting from the container to the host system:
-
-```
-$ sudo mkdir -p /app/onlyoffice/DocumentServer/logs \
- /app/onlyoffice/DocumentServer/data \
- /app/onlyoffice/DocumentServer/lib \
- /app/onlyoffice/DocumentServer/db
-```
-
-Now mount these directories via podman (when prompted, select the image from docker.io):
-
-```
-$ sudo podman run -i -t -d -p 80:80 -p 443:443 --restart=always \
- -v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice:Z \
- -v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data:Z \
- -v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice:Z \
- -v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql:Z \
- -u root onlyoffice/documentserver:latest
-```
-
-Please note that rootless deployment is NOT recommended for ONLYOFFICE Docs.
-
-To check that ONLYOFFICE is working correctly, run:
-
-```
-$ sudo podman exec $(sudo podman ps -q) sudo supervisorctl start ds:example
-```
-
-Then, open  and click the word “here” in the line _Once started the example will be available here_. Or look for the orange “button” that says “GO TO TEST EXAMPLE”. This opens the test example where you can create a document.
-
-Alternatively, to install ONLYOFFICE Docs, you can build an image in podman:
-
-```
-$ git clone https://github.com/ONLYOFFICE/Docker-DocumentServer.git
-$ cd Docker-DocumentServer/
-$ sudo podman build --tag oods6.2.0:my -f ./Dockerfile
-```
-
-Or build an image from the Dockerfile with buildah (you need root access):
-
-```
-$ buildah bud --tag oods6.2.0buildah:mybuildah -f ./Dockerfile
-```
-
-### Activate HTTPS
-
-To secure the application via SSL, you basically need two things:
-
- * Private key (.key)
- * SSL certificate (.crt)
-
-
-
-So you need to create and install the following files:
-
-```
-/app/onlyoffice/DocumentServer/data/certs/onlyoffice.key
-/app/onlyoffice/DocumentServer/data/certs/onlyoffice.crt
-```
-
-You can get certificates in several ways depending on your requirements: buy from certification centers, request from [Let’s Encrypt,][8] or create a [self-signed certificate][9] through OpenSSL (note that self-signed certificates are not recommended for production use).
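-
-If you go the self-signed route for testing, one way to generate the key and certificate pair is with a single `openssl` command. This is only a sketch for a lab setup: the file names match the ones used below, while the subject and validity period are placeholders you would adjust.
-
-```
-# test use only: 4096-bit key, certificate valid for one year
-# the CN below is a placeholder; replace it with your own host name
-$ openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
-  -keyout onlyoffice.key -out onlyoffice.crt \
-  -subj "/CN=onlyoffice.example.com"
-```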
-
-Secure ONLYOFFICE Docs by switching to the HTTPS protocol:
-
-```
-$ sudo mkdir /app/onlyoffice/DocumentServer/data/certs
-$ sudo cp onlyoffice.crt /app/onlyoffice/DocumentServer/data/certs/
-$ sudo cp onlyoffice.key /app/onlyoffice/DocumentServer/data/certs/
-$ sudo chown -R 100108:100111 /app/onlyoffice/DocumentServer/data/certs/
-# find the podman container id
-$ sudo podman ps -a
-# restart the container to use the new certificate
-$ sudo podman restart {container_id}
-```
-
-Now you can integrate ONLYOFFICE Docs with the platform you already use and start working with your documents.
-
-### ONLYOFFICE-Nextcloud integration example
-
-To connect ONLYOFFICE Docs and Nextcloud (or any other DMS), you need a connector. This is an integration app that functions like a bridge between two services.  
-
-In case you’re new to Nextcloud, you can install it with Podman following [this tutorial][10].   
-
-If you already have Nextcloud installed, you just need to install and activate the connector. Do this with the following steps:
-
- 1. launch your Nextcloud as an admin,
- 2. click your user icon in the upper right corner,
- 3. switch to + Apps,
- 4. find ONLYOFFICE in the list of available applications in the section “Office & text”,
- 5. click the Download and enable button. 
-
-
-
-ONLYOFFICE now appears in the Active apps section and you can go ahead with the configuration. 
-
-Select your user icon again in the upper right corner -> Settings -> Administration -> ONLYOFFICE. On the settings page, you can configure:
-
- * The address of the machine with ONLYOFFICE installed
- * Secret key (JWT that protects docs from unauthorized access)
- * ONLYOFFICE and Nextcloud addresses for internal requests
-
-
-
-![][11]
-
-You can also adjust additional settings which are not mandatory but will make your user experience more comfortable:
-
- * Restrict access to the editors to user groups
- * Enable/disable the Open file in the same tab option
- * Select file formats that will be opened by default with ONLYOFFICE
- * Customize editor interface
- * Enable watermarking
-
-
-
-![][12]
-
-### Conclusion
-
-Installing ONLYOFFICE Docs on Fedora Linux with Podman is quite easy. It will give you a powerful office suite for integration into any Document Management System.
-``` - -``` - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/instal-onlyoffice-docs-on-fedora-linux-with-podman/ - -作者:[kseniya_fedoruk][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/kseniya_fedoruk/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/onlyoffice-podman-nextcloud-816x345.jpg -[2]: https://unsplash.com/@chris_legs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/s/photos/sharing-writing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://www.onlyoffice.com/office-suite.aspx -[5]: https://fedoramagazine.org/wp-content/uploads/2021/07/ONLYOFFICE-Docs-dark-theme-1024x585.png -[6]: https://www.onlyoffice.com/collaboration-platform.aspx -[7]: https://www.onlyoffice.com/workspace.aspx -[8]: https://letsencrypt.org/ -[9]: https://www.server-world.info/en/note?os=Fedora_31&p=ssl&f=1 -[10]: https://fedoramagazine.org/nextcloud-20-on-fedora-linux-with-podman/ -[11]: https://fedoramagazine.org/wp-content/uploads/2021/07/1-server-settings-1024x611.png -[12]: https://fedoramagazine.org/wp-content/uploads/2021/07/nc-settings-1-1024x574.png diff --git a/sources/tech/20210903 Monitor your Linux server with Checkmk.md b/sources/tech/20210903 Monitor your Linux server with Checkmk.md deleted file mode 100644 index d35d7e7611..0000000000 --- a/sources/tech/20210903 Monitor your Linux server with Checkmk.md +++ /dev/null @@ -1,191 +0,0 @@ -[#]: subject: "Monitor your Linux server with Checkmk" -[#]: via: "https://opensource.com/article/21/8/monitor-linux-server-checkmk" -[#]: author: "Ferdinand https://opensource.com/users/ferdinand-kunz" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Monitor your Linux server with Checkmk -====== -Install Checkmk, the monitoring tool from tribe29, to monitor servers -and network devices -![People work on a computer server with devices][1] - -Monitoring IT assets is an essential task for any IT department. Still, due to the growing number of devices in corporate networks, it is getting more and more challenging to find an approach that is flexible enough to monitor the wide range of available systems properly. It's essential to have a monitoring tool that is flexible, scalable, and easy to use. - -In this article, I demonstrate how to install [Checkmk][2], the monitoring tool from [tribe29][3], and how to monitor servers and network devices with it. - -### Install Checkmk on Linux - -For this article, I use the [Checkmk Raw Edition,][4] the community GPLv2 edition (the enterprise edition has extra features and paid support), and installing it on a Linux server. Checkmk runs on Linux, including RHEL, CentOS, Debian, and others, as well as in a container, or as a virtual appliance. You can download the latest Checkmk version for all platforms from the official [Checkmk website][2].  - -### Getting started - -It doesn't take long to get started because Checkmk already supports most monitoring use cases thanks to its almost 2,000 plug-ins. Checkmk also provides preconfigured thresholds for alerts and warnings, so you don't have to waste time configuring these yourself, and of course, you can customize these as required.  
- -Besides these official integrations, you can also use monitoring expansions created and shared by other users on the [Checkmk Exchange][5]. If you want to know more about the Checkmk tool or contribute to it, you can check out the [GitHub repository][6]. - -This tutorial does not require any monitoring experience. If you do want to follow this procedure, though, you must have root access to the server you're using as the host.  - -#### Select and download the Checkmk Raw Edition - - 1. [Download][7] either the Checkmk Raw Edition (it's free and open source) or the Checkmk Free Edition* *of the Enterprise Edition. - - 2. Next, send the installer file to the server you want to host Checkmk on. I use the scp command. In this tutorial, the IP address for my host is 10.0.2.15. [code]`$ scp check-mk-raw-X.Y.Zp8_0.focal_amd64.deb tux@10.0.2.15:/tmp` -``` -All further actions in this tutorial are performed on the host server.  - - 3. Log in to your host using `ssh`. [code]`​$ ssh tux@10.0.2.15` -``` - - - - -#### Install the Checkmk package  - - 1. Now you must install the package including all of its dependencies. This can be done with your distribution's package manager, such as `apt` or `dnf`: [code]`​$ sudo apt install /tmp/check-mk-raw-X.Y.Zp8_0.focal_amd64.deb` -``` -2. Once the installation is complete, you can perform a test using the `omd` command. [code]`​$ omd version` -``` - - - -The `omd` command for [Open Monitoring Distribution][8] is an open source project created by Mathias Kettner, the founder of Checkmk. It helps you install a monitoring solution assembled from various open source components.  - -#### Create a Checkmk monitoring site - - 1. The next step is to start an initial monitoring site (a "site" is an _instance_). Use `omd create` to create a new Checkmk site and name it as you wish. In this example, I use `checkmk_demo`. [code]`$ sudo omd create checkmk_demo` -``` -2. As a response, you're provided with helpful information about how to start and access your Checkmk site. You could follow the steps to change your admin password right now, but I prefer to do that in the Checkmk user interface. So, for now, copy the randomly generated password (you need it in the next step) and start your monitoring site. [code]`$ sudo omd start checkmk_demo` -``` - - - -Should you want to drill deeper into Checkmk later on, it is important to understand what has just taken place. - - * You created a new user, known as the _site user_, and a group with the site's name on your server. - * A directory for the site has been created under `/omd/sites`, (for example, `/omd/sites/checkmk_demo`). -Checkmk also copied its default configuration into the new directory. - * A user with the name _cmkadmin_ was created for the Checkmk web interface.  - - - -#### Start monitoring with Checkmk - -It's time to switch to the Checkmk user interface in your web browser. Every Checkmk site has its own URL, composed of the IP address or hostname of your monitoring server and the name of the Checkmk site. In this example, my Checkmk install is located at _monitoring-host-server/checkmk_demo/_. - - 1. Open the link to your Checkmk site in your browser. You can open the link shown on your terminal. - 2. Log in as the _cmkadmin_ user, using the password you copied from the terminal. -Once you're logged in, you see an empty dashboard. - 3. Click on the **User** category in the sidebar on the left, and then click on **Change password** under **Profile**. Here, you can change your password. 
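-
-Before moving on to the monitoring setup, you can also confirm from the terminal that the site you started earlier is up. A quick check, assuming the site name `checkmk_demo` used above:
-
-```
-$ sudo omd status checkmk_demo
-```
-
-The command should list the site's component services as running.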
-
-
-
-### Monitoring setup
-
-Checkmk supports several ways of monitoring servers, and the best method for server monitoring is usually by using the Checkmk agents. Before adding a server, you need to install the agent. 
-
- 1. In the sidebar on the left, click **Setup** (the button with a gearwheel).
-This is the control panel where you perform all of the configurations and find monitoring agents. There are some UI differences between the Raw Edition and the Enterprise Edition, but all screenshots in this article are from the open source Raw Edition.
-
- 2. Click on **Agents** and select the appropriate package for your operating system.
-The packaged agents for Linux are provided in both RPM and DEB file formats.
-
-![Select an agent][9]
-
-(Ferdinand Kunz, [CC-BY SA 4.0][10])
-
- 3. Download and install the agent on your monitoring host.
-
-
-
-You can test whether the agent works correctly by executing the `check_mk_agent` command in the terminal on your server.
-
-### Adding a host
-
-Once the agent has been installed, return to the **Setup** screen and select **Hosts**. 
-
- 1. Click on **Add host**. 
-
- 2. Add the name of your server under **Hostname**.
-If you have DNS set up in your network, Checkmk resolves the IP address for your hostname automatically. Otherwise, add the IP address by clicking the checkbox next to **IPv4 Address**. If you add an IP address, you can choose any hostname you like. Leave the other areas unchanged.
-
-![Add host][11]
-
-(Ferdinand Kunz, [CC-BY SA 4.0][10])
-
- 3. Click on **Save & go to service configuration**. Checkmk now automatically discovers any relevant monitoring services on that host and lists them as _Undecided services_. Also, as you can see in the screenshot, Checkmk automatically adds labels depending on the type of device.
-
- 4. Click on **Fix all** to monitor all of these. This adds all detected services and host labels to your monitoring dashboard and removes services that have vanished. Of course, you can manage the services manually, but the **Fix all** function makes it a lot easier. 
-
-![Host monitoring fix all][12]
-
-(Ferdinand Kunz, [CC-BY SA 4.0][10])
-
- 5. Next, activate your changes by clicking on the highlighted field with the yellow exclamation point (**!**) at the top right corner. Click on **Activate on selected sites**, and you've successfully added the first server to your monitor.
-
-
-
-Requiring explicit activation for changes is a safety mechanism. All changes made are listed first under **Pending changes** so you can review any changes before they affect your monitoring. Checkmk differentiates between _Setup_ as a configuration environment, in which you manage the hosts, services, and settings, and the area called _Monitor_, in which the actual operational monitoring takes place. New hosts and other changes in the configuration initially do not affect the monitoring. You must activate these before they go into production. 
-
-### SNMP monitoring
-
-Besides server monitoring, another essential monitoring task is network monitoring. As an example, I would like to show you how to monitor a switch over SNMP. All you need to do is make sure the SNMP agent on the device you aim to monitor is activated and that your Checkmk server can reach this device.
-
- 1. Go to _**Setup > Hosts**_ and click on **Add host**.
-
- 2. Type in the hostname and the IP address (as needed).
-By default, Checkmk assumes you use a Checkmk agent, so you need to edit that under **Monitoring agents**. 
-
- 3. 
Activate the check box next to _SNMP_ and switch the box to your SNMP version (very likely ʻSNMP v2 or v3ʼ). -Checkmk also assumes by default that your SNMP Community is _public_ because it is also the default on most SNMP devices. If that is the case, you can leave the box _SNMP credentials_ unchecked (like I have). Otherwise, you have to check this box and add your SNMP credentials here.  - -![Add SNMP host][13] - -(Ferdinand Kunz, [CC-BY SA 4.0][10]) - - 4. As before, click on **Save & go to service configuration**, and Checkmk discovers all of the currently online interfaces, the uptime, and the SNMP Info check. -If a monitoring plug-in for a particular type of device exists, Checkmk detects more monitoring services automatically.  - - 5. Click on **Fix all** and accept the changes. - - - - -### Happy monitoring - -Now you will have your Checkmk site up and running and have added two hosts. This tutorial ends here, but your real monitoring experience has only just started. You may have noticed that Checkmk provides agents for almost all operating systems so that you can add more hosts. The procedure is similar to other systems. Checkmk also supports SNMP, IPMI, HTML, and many other standards, so you always have an efficient method available for monitoring a particular system. Have a look at the [Checkmk][14] [handbook][14], as well as in the [official Checkmk forum][15]. Happy monitoring! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/8/monitor-linux-server-checkmk - -作者:[Ferdinand][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ferdinand-kunz -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices) -[2]: https://checkmk.com/ -[3]: https://tribe29.com/ -[4]: https://checkmk.com/product/raw-edition -[5]: https://exchange.checkmk.com/ -[6]: https://github.com/tribe29/checkmk -[7]: https://checkmk.com/download?edition=cre&version=stable&dist=ubuntu&os=focal -[8]: https://checkmk.com/guides/open-monitoring-distribution -[9]: https://opensource.com/sites/default/files/uploads/checkmk_agent.png (Select an agent) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: https://opensource.com/sites/default/files/uploads/checkmk_hosts.png (Add host) -[12]: https://opensource.com/sites/default/files/uploads/checkmk_fix-all.png (Host monitoring fix all) -[13]: https://opensource.com/sites/default/files/uploads/checkmk_add-host-snmp.png (Add SNMP host) -[14]: https://docs.checkmk.com/latest/en/ -[15]: https://forum.checkmk.com/ diff --git a/sources/tech/20210904 Essential open source tools for an academic organization.md b/sources/tech/20210904 Essential open source tools for an academic organization.md deleted file mode 100644 index 805718836b..0000000000 --- a/sources/tech/20210904 Essential open source tools for an academic organization.md +++ /dev/null @@ -1,64 +0,0 @@ -[#]: subject: "Essential open source tools for an academic organization" -[#]: via: "https://opensource.com/article/21/9/open-source-tools-ospo" -[#]: author: "Quinn Foster https://opensource.com/users/quinn-foster" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " 
-[#]: url: " " - -Essential open source tools for an academic organization -====== -A look into the digital infrastructure of an academic open source -programs office (OSPO). -![Wratchet set tools][1] - -As an academic open source programs office (OSPO), [Open @RIT][2]'s mission is to assist faculty, staff, and students at the Rochester Institute of Technology in creating and maintaining communities for their open projects. We accomplish this by offering consultation and documents that teach our clients the best ways to operate their communities and projects. None of this would be feasible, however, if not for the systems of digital infrastructure we have created and adopted to facilitate these interactions. - -Whether you're starting your own academic OSPO or an open source project, finding the right tools and methods for managing your unique community can be challenging if you don't know where to look. Therefore, in the spirit of openness, the Open @RIT team is happy to share the experiences and strategies used to build our digital infrastructure right here. - -To begin, much of what we have built is thanks to our collaboration with the open source experts at the Institute of Electrical and Electronics Engineers ([IEEE][3]). Founded back in the 19th century during the advent of widespread electricity use, IEEE remains the largest technical professional organization globally and strives to advance technology for the benefit of humanity. The utilization of open source is an integral part of this goal. [IEEE SA OPEN][4], the IEEE sub-group which has created a dedicated open source collaboration platform, aims to create a unified infrastructure stack for open source communities. - -As a participant in IEEE SA OPEN's open technical advisory group, Open @RIT has worked with the group by advising in selecting and approving a variety of software tools they are considering supporting in their standards. - -> "We're trying to learn about how they operate within the academic sector, and then because they're open source, we can really easily contribute back and contribute these findings," -> -> Mike Nolan, assistant director of Open @RIT. - -The tools IEEE SA OPEN and Open @RIT select help develop Open @RIT's digital infrastructure and assist its clients in an academic environment. In turn, Open @RIT provides feedback and even technical contributions to IEEE SA OPEN to extend their infrastructure effectively. Each tool, all of which are open source, carries out a key role: - - * [Mattermost][5] is a collaboration platform built with project developers in mind. We've been using Mattermost to communicate and share work, and we highly recommend it for anybody developing an open source project. - * [Gitlab][6] allows you to store files of code and develop them collaboratively with your team. - * [Nextcloud][7] is a cloud-based file hosting service where you can create and share documents with your team and manage projects and deadlines. Adapting Nextcloud into the standards is still in process and not yet approved, but it holds tremendous potential for IEEE SA OPEN. - - - -A crucial benefit we've experienced using these tools alongside IEEE SA OPEN is finding ways to interact with each other. For example, Mattermost's ChatOps function allows you to install a Gitlab plugin into your Mattermost servers, allowing notifications of issues, merge requests, direct mentions, and other changes made in Gitlab to appear in your message boards. 
This, among potential future additions, demonstrates how these tools can become a cohesive standard in building open infrastructure. - -In addition to working with IEEE SA OPEN, we have also made inroads with CHAOSS Software and utilized their community analysis software, [GrimoireLab.][8] Their tool is a community health analytics software that calculates and reports metrics of open source project communities. This includes things like the time it takes to resolve reported issues, contributor affiliations, code management, and more. - -Open @RIT uses GrimoireLab and provides feedback and contributions to the CHAOSS community based upon our unique position of monitoring community health in academia. One of our more significant contributions is Mystic, a digital portal and dashboard of our design. Anyone at RIT can submit their open source projects and receive generated community health statistics. Mystic leverages GrimoireLab to take these projects and reports the community metrics and analytics back to the user. Using GrimoireLab in this way helps build the open source community at RIT while contributing back to CHAOSS to make their tools more applicable to academic institutions. - -We hope the information shared here has provided you with the tips and tricks to kickstart your open source project. Whether it's academic in nature or not, these tools can be great additions to the digital infrastructure holding your project community together. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/open-source-tools-ospo - -作者:[Quinn Foster][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/quinn-foster -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Wratchet set tools) -[2]: https://www.rit.edu/research/open -[3]: https://www.ieee.org/ -[4]: https://saopen.ieee.org/ -[5]: https://mattermost.com/ -[6]: https://about.gitlab.com/ -[7]: https://opensource.com/tags/nextcloud -[8]: https://chaoss.github.io/grimoirelab/ diff --git a/sources/tech/20210905 Create a photo collage from the Linux command line.md b/sources/tech/20210905 Create a photo collage from the Linux command line.md deleted file mode 100644 index 02b48cb3c0..0000000000 --- a/sources/tech/20210905 Create a photo collage from the Linux command line.md +++ /dev/null @@ -1,91 +0,0 @@ -[#]: subject: "Create a photo collage from the Linux command line" -[#]: via: "https://opensource.com/article/21/9/photo-montage-imagemagick" -[#]: author: "Jim Hall https://opensource.com/users/jim-hall" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Create a photo collage from the Linux command line -====== -Here's how I use ImageMagick to make photo grids for my social media -posts. -![Polaroids and palm trees][1] - -ImageMagick is the "Swiss Army knife" of manipulating images on the command line. While you could use a desktop graphics program like GIMP or GLIMPSE to adjust or combine photos and graphics, sometimes it's just easier to use one of the almost dozen tools from ImageMagick. - -For example, I frequently find myself creating image montages to share on social media. 
Let's say I wanted to share a montage or "image grid" of several screenshots. To do that, I use the ImageMagick `montage` command.  - -ImageMagick is a full suite of tools, and the one I use here is the `montage` command. The general syntax of the `montage` command looks like this: - - -``` -`montage {input} {actions} {output}` -``` - -In my case, my screenshots are already the same size: 320x240 pixels.  To create a montage of six of these images, in a grid that's two screenshots wide by three tall, I can use this command: - - -``` -$ montage acronia.png \ -ascii-table.png \ -music.png \ -programming-chess.png \ -petra.png \ -amb.png \ --tile 2x3 -geometry +1+1 \  -screenshot-montage.png -``` - -This creates an image that's composed of the six screenshots, with a 1-pixel border around each. Doing the math, that's 644 pixels wide and 726 pixels high. - -Note the order of the images: ImageMagick montage arranges the images from left-to-right and top-to-bottom. - -![Screenshot montage][2] - -(Jim Hall, [CC BY-SA 4.0][3]) - -In my example, the first row of images shows the open source 2D shooter Acronia and an ASCII programming example, the middle row is an open source music player and a chess programming example, and the third row shows the open source game Post Apocalyptic Petra and the FreeDOS AMB Help reader. - -### Install ImageMagick on Linux - -On Linux, you can install ImageMagick using your package manager. For instance, on Fedora or similar: - - -``` -`$ sudo dnf install imagemagick` -``` - -On Debian and similar: - - -``` -`$ sudo apt install imagemagick` -``` - -On macOS, use [MacPorts][4] or [Homebrew][5]. - -On Windows, use [Chocolatey][6]. - -These open source photo libraries help you stay organized while making your pictures look great. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/photo-montage-imagemagick - -作者:[Jim Hall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/design_photo_art_polaroids.png?itok=SqPLgWxJ (Polaroids and palm trees) -[2]: https://opensource.com/sites/default/files/uploads/screenshot-montage_0.png (Screenshot montage) -[3]: https://creativecommons.org/licenses/by-sa/4.0/ -[4]: https://opensource.com/article/20/11/macports -[5]: https://opensource.com/article/20/6/homebrew-mac -[6]: https://opensource.com/article/20/3/chocolatey diff --git a/sources/tech/20210908 What is the Latest Ubuntu Version- Which one to use.md b/sources/tech/20210908 What is the Latest Ubuntu Version- Which one to use.md deleted file mode 100644 index 6690c5739a..0000000000 --- a/sources/tech/20210908 What is the Latest Ubuntu Version- Which one to use.md +++ /dev/null @@ -1,103 +0,0 @@ -[#]: subject: "What is the Latest Ubuntu Version? Which one to use?" -[#]: via: "https://itsfoss.com/latest-ubuntu-version/" -[#]: author: "Ankush Das https://itsfoss.com/author/ankush/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -What is the Latest Ubuntu Version? Which one to use? -====== - -So, you decided to use Ubuntu. Set upon to install Ubuntu and find that there are several active Ubuntu releases. 
- -That may leave you wondering which is the latest Ubuntu release. Let me help you with that. - -**The newest Ubuntu release is Ubuntu 21.04.** It is a short-term release that came out in April 2021 and it will be supported till January 2022. After that, you’ll have to upgrade to Ubuntu 21.10 (to be released in October 2021). - -**The latest LTS release is Ubuntu 20.04 codenamed Focal Fossa**. It was released in April 2020 and it will be supported till April 2025. If you do not want to upgrade your version every nine months, stick with the latest LTS release. - -Among the other current Ubuntu releases, version 18.04 is still active. It will be supported till April 2023. But if you are going for a [fresh Ubuntu install][1], go for the latest Ubuntu LTS release, which is 20.04. - -For your information, every two years, there is a new LTS release and **three non-LTS releases** in between (every six months). You may [read this article to know about Ubuntu LTS and non-LTS releases][2]. - -The non-LTS releases often bring bleeding-edge features but with minor iterations. And, the next LTS release can be expected to bring all the features added to the non-LTS releases. - -### Latest LTS Version of Ubuntu 20.04 “Focal Fossa” - -![Ubuntu 20.04 LTS][3] - -Every Ubuntu release is associated with a codename, which is usually an animal name chosen in alphabetical order. - -In this case, it is “**Focal Fossa**”, which refers to a catlike animal found on Madagascar. - -Ubuntu 20.04 comes packed with [Linux Kernel 5.4][4] and will be supported till **April 2025**. And, the latest LTS point release is **Ubuntu 20.04.3**. - -If you are using the latest point release (via a new installation), you might have [Linux Kernel 5.11][5] if your hardware is not fully supported by Linux 5.4. - -Did You Know? - -Every LTS version release is followed by seven point releases, with extra extended security maintenance updates available for five more years (for a fee). - -If you are an enterprise or want longer LTS support than usual, you can subscribe to Ubuntu ESM to get a total of ten years of support for Ubuntu LTS versions on your desktop or server. - -The Long Term Support versions are usually known for adding major feature improvements while the non-LTS versions add bleeding-edge technologies to test and get them ready for the next LTS release. - -If you take a look at [Ubuntu 20.04 features][6] and [Ubuntu 21.04 features][7], you should get an idea of the differences between an LTS and non-LTS release. - -### Which versions of Ubuntu are LTS? - -Not just limited to the version number, there are several Ubuntu flavors available as well. Some of them offer similar software update support and some of them only give you **three years of updates** (in contrast to five by Canonical). - -So, if you want to explore those, I suggest you know [which Ubuntu version to use][8] before deciding to install any Ubuntu flavor. - -### How Long is Ubuntu LTS Supported? - -Any Ubuntu release is supported until the [end-of-life period][9]. - -For LTS versions, this is usually five years. And, for non-LTS versions, it is nine months. - -### Should I Upgrade to Ubuntu 20.04 LTS? - -First, you should [check the Ubuntu version installed on your computer][10]. - -If you are using an older LTS release, you should definitely consider upgrading to the new one for better hardware compatibility, improved workflow, and performance. - -If you do not want to break the user experience, you can stick to the older release until it reaches the end-of-life period.
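If you are not sure which release you are currently running, you can check from the terminal before deciding anything. The `lsb_release -a` command is standard on Ubuntu; the output below is only an example from a 20.04.3 machine, so yours will differ:

```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal
```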
- -### How to Upgrade to the Latest Ubuntu version? - -![][11] - -You can upgrade to the latest Ubuntu version using the graphical user interface (GUI) or the terminal. - -Simply head to the Software Updater, it should check and notify you if an update is available. - -To get help, you can refer to our [upgrade instructions guide][12] to swiftly upgrade your Ubuntu version without any issues. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/latest-ubuntu-version/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/install-ubuntu/ -[2]: https://itsfoss.com/long-term-support-lts/ -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/ubuntu_yaru_light_dark_theme.jpg?resize=800%2C450&ssl=1 -[4]: https://itsfoss.com/linux-kernel-5-4/ -[5]: https://news.itsfoss.com/linux-kernel-5-11-release/ -[6]: https://itsfoss.com/ubuntu-20-04-release-features/ -[7]: https://news.itsfoss.com/ubuntu-21-04-features/ -[8]: https://itsfoss.com/which-ubuntu-install/ -[9]: https://itsfoss.com/end-of-life-ubuntu/ -[10]: https://itsfoss.com/how-to-know-ubuntu-unity-version/ -[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/start-upgrade-focal.jpg?resize=800%2C495&ssl=1 -[12]: https://itsfoss.com/upgrade-ubuntu-version/ diff --git a/sources/tech/20210909 A guide to simplifying invoicing with this open source tool.md b/sources/tech/20210909 A guide to simplifying invoicing with this open source tool.md deleted file mode 100644 index cbcdda143e..0000000000 --- a/sources/tech/20210909 A guide to simplifying invoicing with this open source tool.md +++ /dev/null @@ -1,331 +0,0 @@ -[#]: subject: "A guide to simplifying invoicing with this open source tool" -[#]: via: "https://opensource.com/article/21/7/open-source-invoicing-po" -[#]: author: "Frank Bergmann https://opensource.com/users/fraber" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -A guide to simplifying invoicing with this open source tool -====== -]project-open[ simplifies one of the most challenging activities in IT: -writing customer invoices. -![Digital images of a computer desktop][1] - -Many IT projects are late, over budget, and subject to dramatic changes during development. This makes invoicing for them one of the most taxing activities in IT. It's stressful—it involves dealing with ambiguities, conflicting interests, and human error. Worse, every single decision made during the project affects how much you can bill for. When a sales guy brags—incorrectly—that your software "includes this feature," you can't invoice for the time to build it. When a support guy admits something is a bug rather than an imprecise spec, you won't be able to charge money for it. - -This tutorial explains a methodology and a tool to streamline this process. Together, they help reduce frustration, improve customer relationships, and achieve a higher percentage of billable hours. The tool is free, open source, and can be applied to a wide range of organizations—from a self-employed IT guy to a multimillion-dollar software business. - -This article can also serve as a guide on how to handle financial tasks if you decide to become self-employed or set up a startup. 
It doesn't cover project management or similar disciplines; I'm focusing only on invoicing here. The first part of this article describes the invoicing methodology, while the second part shows you how to apply the methodology within []project-open[][2] (or ]po[ for short). - -### The general invoicing process - -There's a general process for writing invoices no matter what tool you use. Some organizations may be more or less formal with certain steps, but all project-based businesses follow the same basic process: - -![Invoicing processes covered in this tutorial][3] - -Figure 1. Invoicing processes (Frank Bergmann, [CC BY-SA 4.0][4]) - -#### Assign a contract type - -From an invoicing perspective, the most important property of a project is the contract type. You can invoice services for a **fixed price** (the risk of time and other overruns is 100% on your side), **time and materials** (you bill by the hour, and the customer takes on the risk of overruns), or a **mixture of both** (you specify some deliverables with a fixed price, but other parts of the contract are billed using time and materials). Sometimes invoicing is tied to certain objectives (milestone billing) or time periods (periodic invoicing), but these are just variants of the two contract types above. - -Agile projects tend to be more time-and-materials based because the specs (and therefore the amount of work) for each user story change over the course of the project. Classic waterfall projects tend to be more fixed-price based, but this depends on the project and negotiation position. The following information applies to all contract types. - -#### Define your project - -Project definition is normally a separate step before execution begins. If you want to get paid, you have to start here and define the scope of your project with watertight deliverables. Otherwise, you're at the mercy of your customer's goodwill to let you off the hook if there are ambiguities later. - -![Initial project definition Gantt Chart][5] - -Figure 2. Initial project definition Gantt chart (Frank Bergmann, [CC BY-SA 4.0][4]) - -The Gantt chart above shows a sample ]po[ project. To define a project with unambiguous deliverables: - - 1. Define the project as a list of tasks to work on. Every task must have a clear deliverable so that it's easy to tell if the task is complete or not. - 2. Create an estimate (the **Work** column) of how long it will take to complete the task. This helps you track time overruns. A second column, **Billable Hours**, allows you to account for unbillable time (e.g., time spent in presales activities is generally not billable). - 3. Define a **Material** for each task if you charge different rates for different task types. For example, you may charge US$ 100 per hour for project management while database administration costs US$ 60 per hour. - 4. Optional: Assign **Resources** to each task to specify who should do the work. - - - -Entering information this way makes it very easy to produce invoices and status reports later. - -#### Manage project change requests, bugs, and issues - -Projects tend to change during their course due to unforeseen issues, new ideas from the customer, and many other reasons. Most of these changes mean additional work for you. It's important to track whether this work is paid or unpaid. Customer extension requests or scope increases are usually paid, while bugs or other issues tend to be unpaid. 
- -There are different ways of tracking change requests (CRs): - - * **Contractually:** Formally record change requests as an email or a signed CR document so that there are no ambiguities when invoicing. - * **Operationally:** You probably want to add CRs as tasks in your project. This is just like creating the initial tasks list. You may want to group additional tasks below a summary task to indicate they aren't covered by the original scope. - * **Bug tracker:** It may also be useful to keep a separate log for quality issues and bugs (usually non-billable changes). - - - -Whatever method you use, make sure you tag the work as tasks vs. bugs in some way to remind you to treat them differently when writing an invoice later. - -#### Issue periodic status reports - -Customers expect you to provide periodic information about the progress of their projects. That's because IT projects are usually late and over budget. This can be an unpleasant duty, particularly if you have to report overruns, but it makes good business sense. - -A weekly status report frequently contains the following sections: - - * A list of tasks you're working on, optionally with their completion status - * A list of tasks you've started working on in this reporting period - * A list of tasks you've finished in this reporting period - * The reasons it took you so long to finish the tasks above - - - -#### Get customer signoff - -Before you can write an invoice, you usually need approval from the customer confirming all deliverables are complete. This may be the most difficult phase of the entire project because it often involves negotiating accountability for any overruns you encountered in the project. - -I recommend you start considering this phase when you first define the project so that every task is bound to a clearly defined deliverable and completion status isn't up for debate. You may also define explicit acceptance criteria during project definition to avoid ambiguities. When negotiating the details of a task with the customer during project execution, remember to record all decisions so that you can easily find this documentation when trying to get signoff. - -#### Invoice - -Once you've gotten signoff, writing the invoice is relatively easy. You just need to multiply the billable hours by the agreed rate per hour or day: - - * Look at all billable tasks and determine planned hours, billable hours, and actual hours. - * Can you bill the overrun hours to the customer? If so, you'll probably have to explain why the customer should pay for this and where this has been agreed upon contractually. - * Multiply the billable hours by the hourly rate for the respective service type. - - - -Once you calculate the final amount, you may need to add taxes according to your local regulations, which will differ depending upon the country, state, county, or even city where your business is located. Finally, enter the calculation into a standardized invoice document and send it to the customer. - -#### Track accounts receivable and payments - -Once the invoice has been sent to the customer, you have to track the payments in your bank account and connect them to the respective invoices. You may also have to send reminders to your customers if they're late with payments. - -#### Do profit & loss and post-mortem - -After all the work is finished, you'll want to see how much money you earned from the project. 
This can be very easy or quite complex if the work involves salaried employees, external consultants, efforts shared with other projects, travel costs, etc. At the end, you'll have a profit-and-loss statement that consists of the invoiced amount minus all of your costs. - -This may also be a good moment to review your project in general (called a post-mortem) and your estimation process in particular. Compare the estimated time with the actual time spent and how the various decisions and events during the project affected the balance. This should inform your sales and quoting process for the next project. - -### Create invoices with ]project-open[ - -Now that you know the process, let's look at implementing it using the free and open source tool ]po[. - -#### Download, install, and configure ]po[ - -It only takes a few minutes to get ]po[ running on your computer. You can get [native installers][6] (along with help and instructions) for various Linux flavors or Windows from the ]po[ website. - -After installing ]po[, follow the configuration wizard and choose the **Other/Everything** and **Complete/Full Installation** options to enable all system options. Project invoicing is disabled in the simplified configurations. - -#### Define a project - -To create a new project, use **Projects -> New Project** and choose **Gantt Project** or one of its subtypes. The ]po[ Gantt editor contains everything you need to set up your project. You can structure your project hierarchically with **Summary tasks** or keep it as a flat list. The Gantt timeline on the right-hand side is optional for the invoicing process. It's only used for tracking progress during project execution. - -There are columns for all the task properties described below. The **Material** and **Billable Hours** columns aren't visible in ]po[ by default. To make them visible, click the **v** button to the right of each column header and enable the column. You can change the column order by dragging and dropping: - -![Gantt chart with project definition and status][7] - -Figure 3. Gantt chart with project definition and status (Frank Bergmann, [CC BY-SA 4.0][4]) - - * **Task:** This is the name of the task. - * **Work:** Enter the estimated hours to finish the task. - * **Billable Hours** (optional): Allows you to specify billable time if it is different from Work. - * **Done %:** The project manager enters the progress toward completing the task. - * **Material:** This is the service type of the task, for example, project management or frontend development. You can edit the list of materials in **Master Data -> Materials**. These service types tie in with the list of rates (see below) to define prices for different types of work or different resources. - * **Resources:** To assign resources to tasks, you first have to use the **Projects -> <project name> -> Members** portlet to add resources to the project. After you reload the page, these resources will be available in the Gantt editor, where you can assign them to tasks in the **Resources** column. - * **Notes:** This field is not available as a column. Double-click on the icon before any task to open the **Task** property panel that includes a large free-text area for notes. - - - -]po[ will automatically calculate the duration (setting the end date) of a task if you have defined its Work and Resources. Similarly, it will calculate the duration of a Summary task (a parent task with several subtasks) based on its children. 
You can create dependencies between tasks by dragging and dropping with your mouse. - -#### Log your hours - -Use the **Timesheet** menu to get to the calendar page. Click **Log hours** for a specific day. - -![Time sheet calendar][8] - -Figure 3. Time sheet calendar (Frank Bergmann, [CC BY-SA 4.0][4]) - -You'll see a list of all projects on which you are a member. There, you can log hours and add comments about what you've done. - -![Logging hours][9] - -Figure 4. Logging hours (Frank Bergmann, [CC BY-SA 4.0][4]) - -#### Add issues and change requests - -You can add new tasks to the Gantt editor any time during the project to reflect change requests. - -You can also add a bug tracker to your project. From the left-hand side of the project, find **Project -> Admin Project -> Create a Subproject**. Then create a new subproject with the name **<Project> Bug Tracker** and type **Ticket Container**. This new bug tracker will appear in the Gantt editor together with the included tickets marked with different icons. The bug tracker will also appear in the **Tickets -> New Ticket** section so that the project manager, the customer, or other stakeholders can log various types of issues. Please see the **Documentation** tab on the []po[ website][10] for details about the ]po[ helpdesk functionality, as this is beyond the scope of this tutorial. - -#### Report your status - -The ]po[ Gantt editor also serves as a visual status report. Basically, all tasks to the left of the red line ("today") should show 100% in the **Done %** column. - -![Project status Gantt chart][11] - -Figure 5. Project status Gantt chart (Frank Bergmann, [CC BY-SA 4.0][4]) - -Look at the **Master data import** task in Figure 5: - - * **Is the task on time?** The black bar inside the Gantt bar represents the 40% from the **Done%** column. The rectangle ends to the left of the red line representing today, indicating the task is late. - * **Is the task within budget?** The red rectangle in the Gantt bar is red. It represents the **Logged Hours** as a percentage of **Work**. We can see that the team has already spent 48 of the planned 56 hours, but the task is only 40% done. Continuing like this, it would take 120 hours (= 48h / 40%) to complete the task. The bar's color changes to blue if **Logged Hours** < **Work * Done%**. - - - -]po[ includes other more advanced status reporting tools, including Earned Value Analysis (EVA), Milestone Trend Analysis, and various specialized reports. The documentation provides more information on these tools. - -#### Get to sign off - -]project-open[ doesn't include specific support for getting your customer to sign off on your project. However, the detailed information attached to each task—which you documented at the start—will be extremely helpful in proving that everything's been done in accordance with the decisions you and your customer made together. Take note of any difficulties and take them into account when defining the next project. - -#### Create an invoice manually - -This is the simplest and most flexible way to create invoices and is suitable for very small or fixed-price projects. Use **<project> -> Finance -> New Customer Invoice** **from Scratch** to start. (Do not use the **Finance** tab at the top of the ]po[ menu—go to your specific project and use the project's **Finance** tab there). You'll see a screen like this: - -![Creating an invoice manually][12] - -Figure 6. 
Creating an invoice manually (Frank Bergmann, [CC BY-SA 4.0][4]) - -Fields in the invoice header: - - * **Invoice no:** This is the invoice identification for tax reporting purposes. This number is created automatically and numbered per month. Alternative numbering schemes are available. - * **Invoice date:** This is the date you created the invoice (for tax reporting purposes). - * **Payment terms:** This is the number of days until the invoice is due. - * **Payment method:** This is how the customer should pay. You can modify the available payment options in **Admin -> Categories -> Intranet Invoice Payment Method**. - * **Invoice template:** This is a LibreOffice template that will render the invoices. **Admin -> Invoice templates** shows the list of available templates. Here you can also download a template and upload a modified version. - * **Invoice status:** **Created** is a freshly created invoice. The other options are for tracking the payment process. You can configure invoice states in **Admin -> Categories -> Intranet Cost Status**. - * **Invoice type:** **Customer Invoice** is the default type of invoice. - * **Customer:** You can set up new customers in **Master Data -> Companies -> New Company** or the CRM section of ]po[. - * **Invoice address:** One customer may have multiple places of business, so enter the address where you want this invoice sent. - * **Contact:** Enter the person who should receive the invoice. ]po[ can send out invoices directly as emails. - * **Notes:** Enter any notes relevant to the invoice. - * **VAT:** Most countries use value-added tax (VAT). You can also configure VAT types (instead of a numeric value) for certain countries. - * **TAX:** Some countries add a second tax to invoices. You can disable this field in **Admin -> Parameters -> intranet-invoices** if you don't need this. - - - -Fields in the invoice lines: - - * **Line:** This is a way to order the invoice line items. - * **Description:** Enter a description of the item. - * **Unit:** Enter the units of what you invoice (e.g., how many hours, days, etc.). - * **UoM:** This stands for unit of measure, and it can be the number of hours, days, or just units. - * **Rate:** This is the price per unit using the default currency. You can configure currencies in **Master Data -> Exchange Rates** and define the default currency in **Admin -> Parameters -> DefaultCurrency**. - - - -Clicking the **Preview using Template** link will launch your text processor (Microsoft Word or LibreOffice Writer) if it's installed on your computer. You can now edit the invoice before you send it to the customer. - -![Invoice print preview][13] - -Figure 7. Invoice print preview  (Frank Bergmann, [CC BY-SA 4.0][4]) - -#### Create an invoice semiautomatically - -]po[ also has a wizard that converts logged hours "automagically" into invoice line items for you. This process applies to fixed-price, time-and-materials, and periodic invoicing types. - -Use **<project> -> Finance -> New Customer Invoice from Timesheet Tasks** to start the wizard. - -![Invoice wizard][14] - -Figure 8. Invoice wizard (Frank Bergmann, [CC BY-SA 4.0][4]) - -The main part of the screen (labeled **(1)** in the image above) shows five different types of hours per task. 
This works a bit like a report but lets you take actions using the provided user interface elements: - - * **Planned Hours:** The estimated hours for the task, as specified in the Gantt editor during project planning - * **Billable Hours:** Similar to Planned Hours, but you can manually modify this to account for non-billable time - * **All Reported Hours:** All timesheet hours logged by anyone, ever - * **Reported Hours in Interval:** Hours logged between the start date and end date in the filter **(2)** at the top of the screen - * **All Unbilled Hours:** Hours that aren't included in any previous invoice created using this wizard. This figure is useful when doing periodic invoicing to see if hours "slipped through" in past invoicing runs. - - - -The checkboxes **(3)** in the first column let you manually deselect certain tasks. The **Aggregate hours** checkbox **(4)** lets you create invoice lines per task (when unchecked) or invoice lines per material (when checked). Select **aggregate** if there are many tasks in your project; otherwise, your invoice could become very long. - -Your result is an invoice like this: - -![Invoice proposed by the wizard][15] - -Figure 9. Invoice proposed by the wizard (Frank Bergmann, [CC BY-SA 4.0][4]) - -This is similar to the manually created invoice but with additional information: - - * The invoice lines are copied from the selected tasks in the previous screen. Summary tasks are excluded because, otherwise, the number of hours would be duplicated. - * The **Rate** column includes the best matching rate for the task (see below). - - - -Select the **Create Customer Invoice** button to finish the process. But before doing that, you may want to check that the prices are right. - -##### Check the price list and reference prices - -The **Reference Prices** section in the figure above explains how the best matching rate is determined for each of the six invoice lines (this is why the same line is repeated six times). - -The source of this data is the **Company Timesheet Prices** portlet. - -![Company Timesheet Prices portlet][16] - -Figure 10. Price list with one entry (Frank Bergmann, [CC BY-SA 4.0][4]) - -This example contains a single line with **Hour** as the UoM and 75.00 EUR as the **Rate**; all other fields are empty. We could translate this as: "All hours for this customer cost EUR 75.00." This is a suitable definition of a "default rate" if you want to keep things simple. - -##### Use the price-finding algorithm - -Unfortunately, reality tends to be complex. Consider the example below. It defines a discount for TCL Programming Hours but only for the specific project Motor Development (2016_0019). - -![Price-finding data entry screen][17] - -Figure 11. Price entry in a specific project (Frank Bergmann, [CC BY-SA 4.0][4]) - -The price-finding algorithm will select the most suitable rate for each line of the new invoice by choosing the one with the highest number of matching fields and discarding those with hard mismatches. The **Reference Prices** section will list all candidate rate entries, from best match to worst. - -In the end, though, it's up to you to modify the proposed rates. This option for manual intervention is designed to handle the most unusual cases. - -### Next steps - -There are a number of steps that come after writing an invoice. Detailing them would exceed the scope of this tutorial. 
However, the functionality is available as part of the ]po[ Community Edition: - - * **Accounts receivable:** The **Finance -> Accounts Receivable** section allows you to follow up on invoices and send reminders to customers. - * **Procurement and accounts payable:** ]po[ includes a project-based procurement and vendor-management system. - * **Profit and loss:** You'll be interested to see if you made a profit on your project or not. - * **Learned lessons:** You may want to do a post-mortem to review the project and learn what to do differently the next time. - * **Cash-flow forecasting:** Starting with your current bank account level, your invoices, and your CRM sales funnel, this will calculate the moment when your company will run out of money. - * **Management accounting:** This consists of many that reports that will answer most questions about your business out of the box. There's a tutorial on how to write your own reports. - * **Tax reporting:** ]po[ captures almost everything a service company needs for tax reporting. There are export interfaces for various accounting software packages. - - - -What are your biggest challenges with invoicing customers? Please share your pet peeves in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/7/open-source-invoicing-po - -作者:[Frank Bergmann][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/fraber -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop) -[2]: https://www.project-open.com/ -[3]: https://opensource.com/sites/default/files/uploads/1-invoicingprocess.png (Invoicing processes covered in this tutorial) -[4]: https://creativecommons.org/licenses/by-sa/4.0/ -[5]: https://opensource.com/sites/default/files/uploads/2-ganttchart.png (Initial project definition Gantt Chart) -[6]: https://www.project-open.com/en/list-installers -[7]: https://opensource.com/sites/default/files/pictures/define-a-project-gantt-project.png (Gantt chart with project definition and status) -[8]: https://opensource.com/sites/default/files/uploads/3-timesheet-calendar.png (Time sheet calendar) -[9]: https://opensource.com/sites/default/files/uploads/4-logging-hours.png (Logging hours) -[10]: https://www.project-open.com/en/ -[11]: https://opensource.com/sites/default/files/uploads/5-project-status.png (Project status Gantt chart) -[12]: https://opensource.com/sites/default/files/uploads/6-invoicing.png (Creating an invoice manually) -[13]: https://opensource.com/sites/default/files/uploads/7-invoice-preview.png (Invoice print preview) -[14]: https://opensource.com/sites/default/files/uploads/8-invoice-wizard_rev.png (Invoice wizard) -[15]: https://opensource.com/sites/default/files/uploads/8-invoice-wizard.png (Invoice proposed by the wizard) -[16]: https://opensource.com/sites/default/files/uploads/10-timesheet-prices.png (Company Timesheet Prices portlet) -[17]: https://opensource.com/sites/default/files/uploads/11-price-finding.png (Price-finding data entry screen) diff --git a/sources/tech/20210913 How I rediscovered Logo with the Python Turtle module.md b/sources/tech/20210913 How I rediscovered Logo with the Python Turtle module.md deleted 
file mode 100644 index 8a3bee24a5..0000000000 --- a/sources/tech/20210913 How I rediscovered Logo with the Python Turtle module.md +++ /dev/null @@ -1,327 +0,0 @@ -[#]: subject: "How I rediscovered Logo with the Python Turtle module" -[#]: via: "https://opensource.com/article/21/9/logo-python-turtle" -[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I rediscovered Logo with the Python Turtle module -====== -The Logo programming language is available today as a Python package. -![Box turtle][1] - -When I was in high school, one of the very first programming languages I was introduced to was Logo. It was interactive and visual. With basic movement commands, you could have your cursor (“turtle”) draw basic shapes and intricate patterns. It was a great way to introduce the compelling concept of an algorithm—a series of instructions for a computer to execute. - -Fortunately, the Logo programming language is available today as a Python package. So let’s jump right in, and you can discover the possibilities with Logo as we go along. - -### Installing the Turtle module - -Logo is available as the [`turtle` package for Python][2]. To use it, you must have Python installed first. Python is already installed on Linux and BSD, and it's easy to install on both [MacOS][3] and [Windows][4]. - -Once you have Python installed, install the Turtle module: - - -``` -`pip3 install turtle` -``` - -### Bob draws a square - -With the `turtle` package installed, you can draw some basic shapes. - -To draw a square, imagine a turtle (call him Bob) in the middle of your screen, holding a pen with his tail. Every time Bob moves, he draws a line behind him. How must Bob move to draw a square? - - 1. Move forward 100 steps. - 2. Turn right 90 degrees. - 3. Move forward 100 steps. - 4. Turn right 90 degrees. - 5. Move forward 100 steps. - 6. Turn right 90 degrees. - 7. Move forward 100 steps. - - - -Now write the above algorithm in Python. Create a file called `logo.py` and place the following code in it. - - -``` -import turtle - -if __name__ == '__main__': - -    turtle.title('Hi! I\'m Bob the turtle!') -    turtle.setup(width=800, height=800) - -    bob = turtle.Turtle(shape='turtle') -    bob.color('orange') - -    # Drawing a square -    bob.forward(100) -    bob.right(90) -    bob.forward(100) -    bob.right(90) -    bob.forward(100) -    bob.right(90) -    bob.forward(100) - -    turtle.exitonclick() -``` - -Save the above as `logo.py` and run it: - - -``` -`$ python3 logo.py` -``` - -Bob draws a square on the screen: - -![Logo drawn square][5] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -### Bob draws a hexagon - -To draw a hexagon, Bob must move like this: - - 1. Move forward 150 steps. - 2. Turn right 60 degrees. - 3. Move forward 150 steps. - 4. Turn right 60 degrees. - 5. Move forward 150 steps. - 6. Turn right 60 degrees. - 7. Move forward 150 steps. - 8. Turn right 60 degrees. - 9. Move forward 150 steps. - 10. Turn right 60 degrees. - 11. Move forward 150 steps. - - - -In Python, you can use a [`for` loop][7] to move Bob: - - -``` -import turtle - -if __name__ == '__main__': - -    turtle.title('Hi! 
I\'m Bob the turtle!') -    turtle.setup(width=800, height=800) - -    bob = turtle.Turtle(shape='turtle') -    bob.color('orange') - -    # Drawing a hexagon -    for i in range(6): - -        bob.forward(150) -        bob.right(60) - -    turtle.exitonclick() -``` - -Run your code again and watch Bob draw a hexagon. - -![Logo drawn hexagon][8] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -### Bob draws a square spiral - -Now try drawing a square spiral, but this time you can speed things up a bit. You can use the `speed` function and set `bob.speed(2000)` so that Bob moves faster. - - -``` -import turtle - -if __name__ == '__main__': - -    turtle.title('Hi! I\'m Bob the turtle!') -    turtle.setup(width=800, height=800) - -    bob = turtle.Turtle(shape='turtle') -    bob.color('orange') - -    # Drawing a square spiral -    bob.speed(2000) -    for i in range(500): - -        bob.forward(i) -        bob.left(91) - -    turtle.exitonclick() -``` - -![Logo drawn spiral][9] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -### Bob and Larry draw a weird snake thing - -In the above examples, you initialized `Bob` as an object of the `Turtle` class. You're not limited to just one turtle, though. In the next code block, create a second turtle called `Larry` to draw along with Bob. - -The `penup()` function makes the turtles lift their pens, so they don’t draw anything as they move, and the `stamp()` function places a marker whenever it’s called. - - -``` -import turtle - -if __name__ == '__main__': - -    turtle.title('Hi! We\'re Bob and Larry!') -    turtle.setup(width=800, height=800) - -    bob = turtle.Turtle(shape='turtle') -    larry = turtle.Turtle(shape='turtle') -    bob.color('orange') -    larry.color('purple') - -    bob.penup() -    larry.penup() -    bob.goto(-180, 200) -    larry.goto(-150, 200) -    for i in range(30, -30, -1): - -        bob.stamp() -        larry.stamp() -        bob.right(i) -        larry.right(i) -        bob.forward(20) -        larry.forward(20) - -    turtle.exitonclick() -``` - -![Logo drawn snake][10] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -### Bob draws a sunburst - -Bob can also draw simple lines and fill them in with color. The functions `begin_fill()` and `end_fill()` allow Bob to fill a shape with the color set with `fillcolor()`. - - -``` -import turtle - -if __name__ == '__main__': - -    turtle.title('Hi! I\'m Bob the turtle!') -    turtle.setup(width=800, height=800) - -    bob = turtle.Turtle(shape='turtle') -    bob.color('orange') - -    # Drawing a filled star thingy -    bob.speed(2000) -    bob.fillcolor('yellow') -    bob.pencolor('red') - -    for i in range(200): - -        bob.begin_fill() -        bob.forward(300 - i) -        bob.left(170) -        bob.forward(300 - i) -        bob.end_fill() - -    turtle.exitonclick() -``` - -![Logo drawn sunburst][11] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -### Larry draws a Sierpinski triangle - -Bob enjoys drawing simple geometrical shapes holding a pen with his tail as much as the next turtle, but what he enjoys most is drawing fractals. - -One such shape is the [Sierpinski triangle][12], which is an equilateral triangle recursively subdivided into smaller equilateral triangles. 
It looks something like this: - -![Logo drawn triangle][13] - -Illustration by Ayush Sharma, [CC BY-SA 4.0][6] - -To draw the Sierpinski triangle above, Bob has to work a bit harder: - - -``` -import turtle - -def get_mid_point(point_1: list, point_2: list): - -    return ((point_1[0] + point_2[0]) / 2, (point_1[1] + point_2[1]) / 2) - -def triangle(turtle: turtle, points, depth): - -    turtle.penup() -    turtle.goto(points[0][0], points[0][1]) - -    turtle.pendown() -    turtle.goto(points[1][0], points[1][1]) -    turtle.goto(points[2][0], points[2][1]) -    turtle.goto(points[0][0], points[0][1]) - -    if depth > 0: - -        triangle(turtle, [points[0], get_mid_point(points[0], points[1]), get_mid_point(points[0], points[2])], depth-1) -        triangle(turtle, [points[1], get_mid_point(points[0], points[1]), get_mid_point(points[1], points[2])], depth-1) -        triangle(turtle, [points[2], get_mid_point(points[2], points[1]), get_mid_point(points[0], points[2])], depth-1) - -if __name__ == '__main__': - -    turtle.title('Hi! I\'m Bob the turtle!') -    turtle.setup(width=800, height=800) - -    larry = turtle.Turtle(shape='turtle') -    larry.color('purple') - -    points = [[-175, -125], [0, 175], [175, -125]]  # size of triangle - -    triangle(larry, points, 5) - -    turtle.exitonclick() -``` - -### Wrap up - -The Logo programming language is a great way to teach basic programming concepts, such as how a computer can execute a set of commands. Also, because the library is now available in Python, it can be used to visualize complex ideas and concepts. - -I hope Bob and Larry have been enjoyable and instructive. - -Have fun, and happy coding. - -* * * - -_This article was originally published on the [author's personal blog][14] and has been adapted with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/logo-python-turtle - -作者:[Ayush Sharma][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/ayushsharma -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/patti-black-unsplash.jpg?itok=hS8wQNUg (Box turtle) -[2]: https://docs.python.org/3.7/library/turtle.html -[3]: https://opensource.com/article/19/5/python-3-default-mac -[4]: https://opensource.com/article/19/8/how-install-python-windows -[5]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-square.jpg (Logo drawn square) -[6]: https://creativecommons.org/licenses/by-sa/4.0/ -[7]: https://opensource.com/article/18/3/loop-better-deeper-look-iteration-python -[8]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-hexagon.jpg (Logo drawn hexagon) -[9]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-square-spiral.jpg (Logo drawn spiral) -[10]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-stamping-larry.jpg (Logo drawn snake) -[11]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-sunburst.jpg (Logo drawn sunburst) -[12]: https://en.wikipedia.org/wiki/Sierpinski_triangle -[13]: https://opensource.com/sites/default/files/uploads/rediscovering-logo-python-turtle-sierpinski-triangle.jpg (Logo drawn triangle) -[14]: 
https://notes.ayushsharma.in/2019/06/rediscovering-logo-with-bob-the-turtle diff --git a/sources/tech/20210916 Debugging by starting a REPL at a breakpoint is fun.md b/sources/tech/20210916 Debugging by starting a REPL at a breakpoint is fun.md deleted file mode 100644 index 269ccad77f..0000000000 --- a/sources/tech/20210916 Debugging by starting a REPL at a breakpoint is fun.md +++ /dev/null @@ -1,188 +0,0 @@ -[#]: subject: "Debugging by starting a REPL at a breakpoint is fun" -[#]: via: "https://jvns.ca/blog/2021/09/16/debugging-in-a-repl-is-fun/" -[#]: author: "Julia Evans https://jvns.ca/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Debugging by starting a REPL at a breakpoint is fun -====== - -Hello! I was talking to a Python programmer friend yesterday about debugging, and I mentioned that I really like debugging using a REPL. He said he’d never tried it and that it sounded fun, so I thought I’d write a quick post about it. - -This debugging method doesn’t work in a lot of languages, but it does work in Python and Ruby and kiiiiiind of in C (via gdb). - -### what’s a REPL? - -REPL stands for “read eval print loop”. A REPL is a program that: - - 1. reads some input from you like `print(f"2 + 2 = {2+2}")` (**read**) - 2. evaluates the input (**eval**) - 3. print out the result (**print**) - 4. and then goes back to step 1 (**loop**) - - - -Here’s an example of me using the IPython REPL to run a print statement. (also it demonstrates f-strings, my favourite Python 3 feature) - -``` -$ ipython3 -Python 3.9.5 (default, May 24 2021, 12:50:35) -Type 'copyright', 'credits' or 'license' for more information -IPython 7.24.1 -- An enhanced Interactive Python. Type '?' for help. - -In [1]: print(f"2 + 2 = {2+2}") -2 + 2 = 4 - -In [2]: -``` - -### you can start a REPL at a breakpoint - -There are 2 ways to use a REPL when debugging. - -**Way 1**: Open an empty REPL (like IPython, pry, or a browser Javascript console) to test out something. - -This is great but it’s not what I’m talking about in this post. - -**Way 2**: Set a breakpoint in your program, and start a REPL at that breakpoint. - -This is the one we’re going to be talking about. I like doing this because it gives me both: - - 1. all the variables in scope at the breakpoint, so I can print them out interactively - 2. easy access to all the functions in my program, so I can call them to try to find issues - - - -### how to get a REPL in Python: `ipdb.set_trace()` - -Here’s a program called `test.py` that sets a breakpoint on line 5 using `import ipdb; ipdb.set_trace()`. - -``` -import requests - -def make_request(): - result = requests.get("https://google.com") - import ipdb; ipdb.set_trace() - -make_request() -``` - -And here’s what it looks like when you run it: you get a REPL where you can inspect the `result` variable or do anything else you want. - -``` -python3 test.py ---Return-- -None -> /home/bork/work/homepage/test.py(5)make_request() - 4 result = requests.get("https://google.com") -----> 5 import ipdb; ipdb.set_trace() - 6 - -ipdb> result.headers -{'Date': 'Thu, 16 Sep 2021 13:11:19 GMT', 'Expires': '-1', 'Cache-Control': 'private, max-age=0', 'Content-Type': 'text/html; charset=ISO-8859-1', 'P3P': 'CP="This is not a P3P policy! 
See g.co/p3phelp for more info."', 'Content-Encoding': 'gzip', 'Server': 'gws', 'X-XSS-Protection': '0', 'X-Frame-Options': 'SAMEORIGIN', 'Set-Cookie': '1P_JAR=2021-09-16-13; expires=Sat, 16-Oct-2021 13:11:19 GMT; path=/; domain=.google.com; Secure, NID=223=FXhKNT7mgxX7Fjhh6Z6uej9z13xYKdm9ZuAU540WDoIwYMj9AZzWTgjsVX-KJF6GErxfMijl-uudmjrJH1wgH3c1JjudPcmDMJovNuuAiJqukh1dAao_vUiqL8ge8pSIXRx89vAyYy3BDRrpJHbEF33Hbgt2ce4_yCZPtDyokMk; expires=Fri, 18-Mar-2022 13:11:19 GMT; path=/; domain=.google.com; HttpOnly', 'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"', 'Transfer-Encoding': 'chunked'} -``` - -You have to install `ipdb` to make this work, but I think it’s worth it – `import pdb; pdb.set_trace()` will work too (and is built into Python) but `ipdb` is much nicer. I just learned that you can also use `breakpoint()` in Python 3 to get a breakpoint, but that puts you in `pdb` too which I don’t like. - -### how to get a REPL in Ruby: `binding.pry` - -Here’s the same thing in Ruby – I wrote a `test.rb` program: - -``` -require 'net/http' -require 'pry' - -def make_request() - result = Net::HTTP.get_response('example.com', '/') - binding.pry -end - -make_request() -``` - -and here’s what it looks like when I run it: - -``` -$ ruby test.rb -From: /home/bork/work/homepage/test.rb:6 Object#make_request: - - 4: def make_request() - 5: result = Net::HTTP.get_response('example.com', '/') - => 6: binding.pry - 7: end - -[1] pry(main)> result.code -=> "200" -``` - -### you can also do get a REPL in the middle of an HTTP request - -Rails also lets you start a REPL in the middle of a HTTP request and poke around and see what’s happening. I assume you can do this in Flask and Django too – I’ve only really done this in Sinatra (in Ruby). - -### GDB is sort of like a REPL for C - -I was talking to another friend about REPLs, and we agreed that GDB is a little bit like a REPL for C. - -Now, obviously this is sort of not true – C is a compiled language, and you can’t just type in arbitrary C expressions in GDB and have them work. - -But you can do a surprising number of things like: - - * call functions - * inspect structs if your program has debugging symbols (`p var->field->subfield`) - - - -This stuff only works in gdb because the gdb developers put in a lot of work doing Very Weird Things to make it easier to get a REPL-like experience. I wrote a blog post a few years called [how does gdb call functions?][1] about how surprising it is that gdb can call functions, and how it does that. - -This is the only way I use `gdb` when looking at C programs – I never set watchpoints or do anything fancy, I just set a couple of breakpoints in the program and then poke around at those points. - -### where this method works - -languages where this works: - - * Python - * Ruby - * probably PHP, but I don’t know - * C, sort of, in a weird way (though you might disagree :)) - - - -languages where this doesn’t work: - - * most compiled languages - * in Javascript, I think even though you can get a REPL with `node inspect` and `debugger`, the REPL doesn’t integrate well with async functions which makes it less useful. I don’t really understand this yet though. 
(python’s REPL also doesn’t let you use `await`, but it’s not as big of a deal because async programming in Python isn’t as core a part of the language as in JS) - - - -### REPL debugging is easy for me to remember how to do - -There are (at least) 4 different ways of debugging: - - 1. Lots of print statements - 2. a debugger - 3. getting a REPL at a breakpoint - 4. inspect your program with external tools like strace - - - -I think part of the reason I like this type of REPL debugging more than using a more traditional debugger is – it’s so easy to remember how to do it! I can just set a breakpoint, and then run code to try to figure out what’s wrong. - -With debuggers, I always forget how to use the debugger (probably partly because I switch programming languages a lot) and I get confused about what features it has and how they work, so I never use it. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2021/09/16/debugging-in-a-repl-is-fun/ - -作者:[Julia Evans][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://jvns.ca/ -[b]: https://github.com/lujun9972 -[1]: https://jvns.ca/blog/2018/01/04/how-does-gdb-call-functions/ diff --git a/sources/tech/20210916 How I patched Python to include this great Ruby feature.md b/sources/tech/20210916 How I patched Python to include this great Ruby feature.md deleted file mode 100644 index d12bcc11e1..0000000000 --- a/sources/tech/20210916 How I patched Python to include this great Ruby feature.md +++ /dev/null @@ -1,865 +0,0 @@ -[#]: subject: "How I patched Python to include this great Ruby feature" -[#]: via: "https://opensource.com/article/21/9/python-else-less" -[#]: author: "Miguel Brito https://opensource.com/users/miguendes" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I patched Python to include this great Ruby feature -====== -What I learned from adding "else-less" functionality to Python, as -inspired by Ruby. -![Python programming language logo with question marks][1] - -Ruby, [unlike Python][2], makes lots of things implicit, and there's a special kind of _if_ expression that demonstrates this well. It's often referred to as an "inline-if" or "conditional modifier", and this special syntax is able to return one value when a condition is true, but another value (`nil`, specifically) when a condition is false. Here's an example: - - -``` -$ irb -irb(main):> RUBY_VERSION -=> "2.7.1" -irb(main):> a = 42 if true -=> 42 -irb(main):> b = 21 if false -=> nil -irb(main):> b -=> nil -irb(main):> a -=> 42 -``` - -In Python, you can't do that without explicitly adding an `else` clause to the expression. In fact, as of [this PR][3], the interpreter tells you immediately that `else` is mandatory: - - -``` -$ python -Python 3.11.0a0 ->>> a = 42 if True -  File "<stdin>", line 1 -    ;a = 42 if True -    ^^^^^^^^^^ -SyntaxError: expected 'else' after 'if' expression -``` - -However, I find Ruby's `if` actually very convenient. - -![return if python][4] - -Python accepts else-less if statements, similar to Ruby. - -This convenience became more evident when I had to go back to Python and write things like this: - - -``` -`>>> my_var = 42 if some_cond else None` -``` - -So I thought to myself, what would it be like if Python had a similar feature? Could I do it myself? 
How hard would that be? - -### Looking into Python's source code - -Digging into CPython's code and changing the language's syntax sounded not trivial to me. Luckily, during the same week, I found out on Twitter that [Anthony Shaw][5] had just written a [book on CPython Internals][6] and it was available for pre-release. I didn't think twice and bought the book. I've got to be honest, I'm the kind of person who buys things and doesn't use them immediately. As I had other plans in mind, I let it "gather dust" in my home folder until I had to work with that Ruby service again. It reminded me of the CPython Internals book and how challenging hacking the guts of Python would be. - -The first thing was to go through the book from the very start and try to follow each step. The book focuses on Python 3.9, so in order to follow it, one needs to check out the 3.9 tag, and that's what I did. I learned about how the code is structured and then how to compile it. The next chapters show how to extend the grammar and add new things, such as a new operator. - -As I got familiar with the code base and how to tweak the grammar, I decided to give it a spin and make my own changes to it. - -### The first (failed) attempt - -As I started finding my way around CPython's code from the latest main branch, I noticed that lots of things had changed since Python 3.9, yet some fundamental concepts didn't. - -My first attempt was to dig into the grammar definition and find the if expression rule. The file is currently named `Grammar/python.gram`. Locating it was not difficult. An ordinary **CTRL+F** for the `else` keyword was enough. - - -``` -file: Grammar/python.gram -... -expression[expr_ty] (memo): -   | invalid_expression -   | a=disjunction 'if' b=disjunction 'else' c=expression { _PyAST_IfExp(b, a, c, EXTRA) } -   | disjunction -   | lambdef -.... -``` - -Now with the rule in hand, my idea was to add one more option to the current `if` expression where it would match `a=disjunction 'if' b=disjunction` and the `c` expression would be `NULL`. - -This new rule should be placed immediately after the complete one, otherwise, the parser would match `a=disjunction 'if' b=disjunction` always, returning a `SyntaxError`. - - -``` -... -expression[expr_ty] (memo): -   | invalid_expression -   | a=disjunction 'if' b=disjunction 'else' c=expression { _PyAST_IfExp(b, a, c, EXTRA) } -   | a=disjunction 'if' b=disjunction { _PyAST_IfExp(b, a, NULL, EXTRA) } -   | disjunction -   | lambdef -.... -``` - -### Regenerating the parser and compiling Python from source - -CPython comes with a `Makefile` containing lots of useful commands. One of them is the [`regen-pegen` command][7] which converts `Grammar/python.gram` into `Parser/parser.c`. - -Besides changing the grammar, I had to modify the AST for the _if_ expression. AST stands for Abstract Syntax Tree, and it is a way of representing the syntactic structure of the grammar as a tree. For more information about ASTs, I highly recommend the [Crafting Interpreters book][8] by [Robert Nystrom][9]. - -Moving on, if you observe the rule for the _if_ expression, it goes like this: - - -``` -`   | a=disjunction 'if' b=disjunction 'else' c=expression { _PyAST_IfExp(b, a, c, EXTRA) }` -``` - -The means when the parser finds this rule, it calls the `_PyAST_IfExp`, which gives back a `expr_ty` data structure. So this gave me a clue that to implement the new rule's behavior, I'd need to change `_PyAST_IfExp`. 
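Before touching `_PyAST_IfExp` itself, it helps to see the node shape the new rule ultimately has to produce. A quick check on a stock Python 3.9+ interpreter (no patch needed for this, and the variable names here are arbitrary) shows that the else-less form I wanted is just sugar for an `IfExp` whose `orelse` is a constant `None`:

```
>>> import ast
>>> tree = ast.parse("a = 42 if flag else None")
>>> print(ast.dump(tree.body[0].value, indent=4))
IfExp(
    test=Name(id='flag', ctx=Load()),
    body=Constant(value=42),
    orelse=Constant(value=None))
```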
- -To find where it is located, I used my `rip-grep` skills and searched for it inside the source root: - - -``` -$ rg _PyAST_IfExp -C2 . - -[OMITTED] -Python/Python-ast.c -2686- -2687-expr_ty -2688:_PyAST_IfExp(expr_ty test, expr_ty body, expr_ty orelse, int lineno, int -2689- col_offset, int end_lineno, int end_col_offset, PyArena *arena) -2690-{ -[OMITTED] -``` - -The implementation goes like this: - - -``` -expr_ty -_PyAST_IfExp(expr_ty test, expr_ty body, expr_ty orelse, int lineno, int -             col_offset, int end_lineno, int end_col_offset, PyArena *arena) -{ -   expr_ty p; -   if (!test) { -        PyErr_SetString(PyExc_ValueError, -                        "field 'test' is required for IfExp"); -       return NULL; -   } -   if (!body) { -        PyErr_SetString(PyExc_ValueError, -                        "field 'body' is required for IfExp"); -        return NULL; -   } -   if (!orelse) { -        PyErr_SetString(PyExc_ValueError, -                        "field 'orelse' is required for IfExp"); -        return NULL; -   } -   p = (expr_ty)_PyArena_Malloc(arena, sizeof(*p)); -   if (!p) -        return NULL; -   p->kind = IfExp_kind; -   p->v.IfExp.test = test; -   p->v.IfExp.body = body; -   p->v.IfExp.orelse = orelse; -   p->lineno = lineno; -   p->col_offset = col_offset; -   p->end_lineno = end_lineno; -   p->end_col_offset = end_col_offset; -   return p; -} -``` - -Since I passed **orelse**NULL, I thought it was just a matter of changing the body of `if (!orelse)`None to `orelse`. It would look like this: - - -``` -   if (!orelse) { -\- PyErr_SetString(PyExc_ValueError, -\- "field 'orelse' is required for IfExp"); -\- return NULL; -\+ orelse = Py_None; -   } -``` - -Now it was time to test it. I compiled the code with `make -j8 -s` and fired up the interpreter: - - -``` -$ make -j8 -s - -Python/Python-ast.c: In function ‘_PyAST_IfExp’: -Python/Python-ast.c:2703:16: warning: assignment from incompatible pointer type [-Wincompatible-pointer-types] -         orelse = Py_None; -``` - -Despite the glaring obvious warnings, I decided to ignore it just to see what would happen. - - -``` -$ ./python -Python 3.11.0a0 (heads/ruby-if-new-dirty:f92b9133ef, Aug 2 2021, 09:13:02) [GCC 7.5.0] on linux -Type "help", "copyright", "credits" or "license" for more information. ->>> a = 42 if True ->>> a -42 ->>> b = 21 if False -[1] 16805 segmentation fault (core dumped) ./python -``` - -Ouch! It worked for the `if True` case, but assigning `Py_None` to `expr_ty orelse` caused a `segfault`. - -Time to go back to see what went wrong. - -### The second attempt - -It wasn't too difficult to figure out where I messed up. `orelse` is an `expr_ty`, and I assigned to it a `Py_None` which is a `PyObject *`. Again, thanks to `rip-grep`, I found its definition: - - -``` -$ rg constant -tc -C2 - -Include/internal/pycore_asdl.h -14-typedef PyObject * string; -15-typedef PyObject * object; -16:typedef PyObject * constant; -``` - -Now, how did I find out `Py_None` was a constant? 
- -While reviewing the `Grammar/python.gram` file, I found that one of the rules for the new pattern matching syntax is defined like this: - - -``` -# Literal patterns are used for equality and identity constraints -literal_pattern[pattern_ty]: -   | value=signed_number !('+' | '-') { _PyAST_MatchValue(value, EXTRA) } -   | value=complex_number { _PyAST_MatchValue(value, EXTRA) } -   | value=strings { _PyAST_MatchValue(value, EXTRA) } -   | 'None' { _PyAST_MatchSingleton(Py_None, EXTRA) } -``` - -However, this rule is a `pattern_ty`, not an `expr_ty`. But that's fine. What really matters is to understand what `_PyAST_MatchSingleton` actually is. Then, I searched for it in `Python/Python-ast.c:` - - -``` -file: Python/Python-ast.c -... -pattern_ty -_PyAST_MatchSingleton(constant value, int lineno, int col_offset, int -                        end_lineno, int end_col_offset, PyArena *arena) -... -``` - -I looked for the definition of a `None` node in the grammar. To my great relief, I found it! - - -``` -atom[expr_ty]: -   | NAME -   | 'True' { _PyAST_Constant(Py_True, NULL, EXTRA) } -   | 'False' { _PyAST_Constant(Py_False, NULL, EXTRA) } -   | 'None' { _PyAST_Constant(Py_None, NULL, EXTRA) } -.... -``` - -At this point, I had all the information I needed. To return an `expr_ty` representing `None`, I needed to create a node in the AST which is constant by using the `_PyAST_Constant` function. - - -``` -   | a=disjunction 'if' b=disjunction 'else' c=expression { _PyAST_IfExp(b, a, c, EXTRA) } -\- | a=disjunction 'if' b=disjunction { _PyAST_IfExp(b, a, NULL, EXTRA) } -\+ | a=disjunction 'if' b=disjunction { _PyAST_IfExp(b, a, _PyAST_Constant(Py_None, NULL, EXTRA), EXTRA) } -   | disjunction -``` - -Next, I must revert `Python/Python-ast.c` as well. Since I'm feeding it a valid `expr_ty`, it will never be `NULL`. - - -``` -file: Python/Python-ast.c -... -   if (!orelse) { -\- orelse = Py_None; -\+ PyErr_SetString(PyExc_ValueError, -\+ "field 'orelse' is required for IfExp"); -\+ return NULL; -   } -... -``` - -I compiled it again: - - -``` -$ make -j8 -s && ./python -Python 3.11.0a0 (heads/ruby-if-new-dirty:25c439ebef, Aug 2 2021, 09:25:18) [GCC 7.5.0] on linux -Type "help", "copyright", "credits" or "license" for more information. ->>> c = 42 if True ->>> c -42 ->>> b = 21 if False ->>> type(b) -<class 'NoneType'> ->>> -``` - -It works! - -Now, I needed to do one more test. Ruby functions allow returning a value if a condition matches, and if not, the rest of the function body gets executed. Like this: - - -``` -> irb -irb(main):> def f(test) -irb(main):>   return 42 if test -irb(main):>   puts 'missed return' -irb(main):>   return 21 -irb(main):> end -=> :f -irb(main):> f(false) -missed return -=> 21 -irb(main):> f(true) -=> 42 -``` - -At this point, I wondered if that would work with my modified Python. I rushed to the interpreter again and wrote the same function: - - -``` ->>> def f(test): -... return 42 if test -... print('missed return') -... return 21 -... ->>> f(False) ->>> f(True) -42 ->>> -``` - -The function returns `None` if _test_ is `False`... To help me debug this, I summoned the [ast module][10]. The official docs define it like so: - -> The ast module helps Python applications to process trees of the Python abstract syntax grammar. The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like. - -I printed the AST for this function: - - -``` ->>> fc = ''' -... def f(test): -... 
return 42 if test -... print('missed return') -... return 21 -... ''' ->>> print(ast.dump(ast.parse(fc), indent=4)) -Module( -   body=[ -        FunctionDef( -            name='f', -            args=arguments( -                posonlyargs=[], -                args=[ -                  arg(arg='test')], -                kwonlyargs=[], -                kw_defaults=[], -                defaults=[]), -            body=[ -                Return( -                  value=IfExp( -                  test=Name(id='test', ctx=Load()), -                  ;body=Constant(value=42), -                  orelse=Constant(value=None))), -                Expr( -                  value=Call( -                    func=Name(id='print', ctx=Load()), -                      args=[ -                        Constant(value='missed return')], -                      keywords=[])), -                  Return( -                      value=Constant(value=21))], -            decorator_list=[])], -   type_ignores=[]) -``` - -Now things made more sense. My change to the grammar was just "syntax sugar". It turns an expression like this: `a if b` into this: `a if b else None`. The problem here is that Python returns no matter what, so the rest of the function is ignored. - -You can look at the [bytecode][11] generated to understand what exactly is executed by the interpreter. And for that, you can use the [`dis` module][12]. According to the docs: - -> The dis module supports the analysis of CPython bytecode by disassembling it. - - -``` ->>> import dis ->>> dis.dis(f) -  2 0 LOAD_FAST 0 (test) -              2 POP_JUMP_IF_FALSE 4 (to 8) -              4 LOAD_CONST 1 (42) -              6 RETURN_VALUE -        >> 8 LOAD_CONST 0 (None) -            10 RETURN_VALUE -``` - -What this basically means is that in case the _test_ is false, the execution jumps to 8, which loads the `None` into the top of the stack and returns it. - -### Supporting "return-if" - -To support the same Ruby feature, I need to turn the expression `return 42 if test` into a regular `if` statement that returns if `test` is true. - -To do that, I needed to add one more rule. This time, it would be a rule that matches the `return if ` piece of code. Not only that, I needed a `_PyAST_` function that creates the node for me. 
I'll then call it `_PyAST_ReturnIfExpr`: - - -``` -file: Grammar/python.gram - -return_stmt[stmt_ty]: -\+ | 'return' a=star_expressions 'if' b=disjunction { _PyAST_ReturnIfExpr(a, b, EXTRA) } -   | 'return' a=[star_expressions] { _PyAST_Return(a, EXTRA) } -``` - -As mentioned previously, the implementation for all these functions resides in `Python/Python-ast.c`, and their definition is in `Include/internal/pycore_ast.h`, so I put `_PyAST_ReturnIfExpr` there: - - -``` -file: Include/internal/pycore_ast.h - - stmt_ty _PyAST_Return(expr_ty value, int lineno, int col_offset, int -                      end_lineno, int end_col_offset, PyArena *arena); -+stmt_ty _PyAST_ReturnIfExpr(expr_ty value, expr_ty test, int lineno, int col_offset, int -\+ end_lineno, int end_col_offset, PyArena *arena); - stmt_ty _PyAST_Delete(asdl_expr_seq * targets, int lineno, int col_offset, int -                      end_lineno, int end_col_offset, PyArena *arena); - -``` - -``` - -file: Python/Python-ast.c - -+stmt_ty -+_PyAST_ReturnIfExpr(expr_ty value, expr_ty test, int lineno, int col_offset, int end_lineno, int -\+ end_col_offset, PyArena *arena) -+{ -\+ stmt_ty ret, p; -\+ ret = _PyAST_Return(value, lineno, col_offset, end_lineno, end_col_offset, arena); -+ -\+ asdl_stmt_seq *body; -\+ body = _Py_asdl_stmt_seq_new(1, arena); -\+ asdl_seq_SET(body, 0, ret); -+ -\+ p = _PyAST_If(test, body, NULL, lineno, col_offset, end_lineno, end_col_offset, arena); -+ -\+ return p; -+} -+ - stmt_ty -``` - -I examined the implementation of `_PyAST_ReturnIfExpr`. I wanted to turn `return <value> if <test>` into `if <test>: return <value>`. - -Both `return` and the regular `if` are statements, so in CPython, they're represented as `stmt_ty`. The `_PyAST_If` expects an `expr_ty test` and a body, which is a sequence of statements. In this case, the `body` is `asdl_stmt_seq *body`. - -As a result, what I really wanted here was an `if` statement with a body where the only statement is a `return <value>` one. - -CPython provides some convenient functions to build `asdl_stmt_seq *`, and one of them is `_Py_asdl_stmt_seq_new`. So I used it to create the body and added the return statement I created a few lines before with `_PyAST_Return`. - -Once that was done, the last step was to pass the `test` as well as the `body` to `_PyAST_If`. - -And before I forget, you may be wondering what on earth the `PyArena *arena` is. **Arena** is a CPython abstraction used for memory allocation. It allows efficient memory usage by using memory mapping [mmap()][13] and placing allocations in contiguous [chunks of memory][6]. - -Time to regenerate the parser and test it one more time: - - -``` ->>> def f(test): -... return 42 if test -... print('missed return') -... return 21 -... ->>> import dis ->>> f(False) ->>> f(True) -42 -``` - -It doesn't work. Check the bytecodes: - - -``` ->>> dis.dis(f) -  2 0 LOAD_FAST 0 (test) -            2 POP_JUMP_IF_FALSE 4 (to 8) -            4 LOAD_CONST 1 (42) -            6 RETURN_VALUE -        >> 8 LOAD_CONST 0 (None) -        10 RETURN_VALUE ->>> -``` - -It's the same bytecode instructions again! - -### Going back to the compilers class - -At that point, I was clueless. I had no idea what was going on until I decided to go down the rabbit hole of expanding the grammar rules. - -The new rule I added went like this: `'return' a=star_expressions 'if' b=disjunction { _PyAST_ReturnIfExpr(a, b, EXTRA) }`. - -My only hypothesis was that `a=disjunction 'if' b=disjunction` was being resolved to the else-less rule I added in the beginning.
- -By going over the grammar one more time, I figured that my theory held. `star_expressions` would match `a=disjunction 'if' b=disjunction { _PyAST_IfExp(b, a, NULL, EXTRA) }`. - -The only way to fix this was by getting rid of the `star_expressions`. So I changed the rule to: - - -``` - return_stmt[stmt_ty]: -\- | 'return' a=star_expressions 'if' b=disjunction { _PyAST_ReturnIfExpr(a, b, EXTRA) } -\+ | 'return' a=disjunction guard=guard !'else' { _PyAST_ReturnIfExpr(a, guard, EXTRA) } -  | 'return' a=[star_expressions] { _PyAST_Return(a, EXTRA) } -``` - -You might be wondering, what are `guard,` `!else`, and `star_expressions`? - -This `guard` is a rule that is part of the pattern matching rules. The new pattern matching feature added in Python 3.10 allows things like this: - - -``` -match point: -   case Point(x, y) if x == y: -        print(f"Y=X at {x}") -        case Point(x, y): -        print(f"Not on the diagonal") -``` - -And the rule goes by this: - - -``` -`guard[expr_ty]: 'if' guard=named_expression { guard }` -``` - -With that, I added one more check. To avoid it failing with `SyntaxError`, I needed to make sure the rule matched only code like this: `return value if cond`. Thus, to prevent code such as `return an if cond else b` being matched prematurely, I added a `!' else` to the rule. - -Last but not least, the `star_expressions` allow me to return destructured iterables. For example: - - -``` ->>> def f(): -  ...: a = [1, 2] -  ...: return 0, *a -  ...:& - ->>> f() -(0, 1, 2) -``` - -In this case, `0, * a` is a tuple, which falls under the category of `star_expressions`. The regular if-expression doesn't allow using `star_expressions` with it, AFAIK, so changing the new `return` rule won't be an issue. - -### Does it work yet? - -After fixing the return rule, I regenerated the grammar one more time and compiled it: - - -``` ->>> def f(test): -... return 42 if test -... print('missed return') -... return 21 -... ->>> f(False) -missed return -21 ->>> f(True) -42 -``` - -It works! - -Looking at the bytecode: - - -``` ->>> import dis ->>> dis.dis(f) -  2 0 LOAD_FAST 0 (test) -            2 POP_JUMP_IF_FALSE 4 (to 8) -            4 LOAD_CONST 1 (42) -            6 RETURN_VALUE - -  3 >> 8 LOAD_GLOBAL 0 (print) -            10 LOAD_CONST 2 ('missed return') -            12 CALL_FUNCTION 1 -            14 POP_TOP - -  4 16 LOAD_CONST 3 (21) -            18 RETURN_VALUE ->>> -``` - -That's precisely what I wanted. Is the AST is the same as the one with regular `if`? - - -``` ->>> import ast ->>> print(ast.dump(ast.parse(fc), indent=4)) -Module( -   body=[ -        FunctionDef( -            name='f', -            args=arguments( -                posonlyargs=[], -                args=[ -                  arg(arg='test')], -                kwonlyargs=[], -                kw_defaults=[], -                defaults=[]), -            body=[ -                If( -                    test=Name(id='test', ctx=Load()), -                    body=[ -                      Return( -                      value=Constant(value=42))], -                      orelse=[]), -                Expr( -                  value=Call( -                          func=Name(id='print', ctx=Load()), -                          args=[ -                            Constant(value='missed return')], -                          keywords=[])), -                Return( -                  value=Constant(value=21))], -            decorator_list=[])], -   type_ignores=[]) ->>> -``` - -Indeed it is! 
- - -``` -If( -   test=Name(id='test', ctx=Load()), -   body=[ -        Return( -            value=Constant(value=42))], -   orelse=[]), -``` - -This node is the same as the one that would be generated by: - - -``` -`if test: return 42` -``` - -### If it's not tested, it's broken? - -To conclude this journey, I thought it'd be a good idea to add some unit tests as well. Before writing anything new, I wanted to get an idea of what I had broken. - -With the code tested manually, I ran all tests using the `test` module `python -m test -j8`. The `-j8` means it uses eight processes to run the tests in parallel: - - -``` -`$ ./python -m test -j8` -``` - -To my surprise, only one test failed! - - -``` -== Tests result: FAILURE == -406 tests OK. -1 test failed: -   test_grammar -``` - -Because I ran all tests, it's hard to navigate the output, so I can run only this one again in isolation: - - -``` -====================================================================== -FAIL: test_listcomps (test.test_grammar.GrammarTests) -\---------------------------------------------------------------------- -Traceback (most recent call last): -  File "/home/miguel/projects/cpython/Lib/test/test_grammar.py", line 1732, in test_listcomps -   check_syntax_error(self, "[x if y]") -   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -  File "/home/miguel/projects/cpython/Lib/test/support/__init__.py", line 497, in check_syntax_error -   with testcase.assertRaisesRegex(SyntaxError, errtext) as cm: -   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -AssertionError: SyntaxError not raised -\---------------------------------------------------------------------- - -Ran 76 tests in 0.038s - -FAILED (failures=1) -test test_grammar failed -test_grammar failed (1 failure) - -== Tests result: FAILURE == - -1 test failed: -   test_grammar - -1 re-run test: -   test_grammar - -Total duration: 82 ms -Tests result: FAILURE -``` - -And there it is! It expected a syntax error when running a `[x if y]` expression. I can safely remove it and re-run the tests again: - - -``` -== Tests result: SUCCESS == - -1 test OK. - -Total duration: 112 ms -Tests result: SUCCESS -``` - -Now that everything is OK, it's time to add a few more tests. It's important to test not only the new "else-less if" but also the new `return` statement. - -By navigating through the `test_grammar.py` file, I can find a test for pretty much every grammar rule. The first one I look for is `test_if_else_expr`. This test doesn't fail, so it only tests for the happy case. To make it more robust, I needed to add two new tests to check `if True` and `if False` cases: - - -``` -     self.assertEqual((6 < 4 if 0), None) -        self.assertEqual((6 < 4 if 1), False) -``` - -I ran everything again, and all tests passed this time. - -Note: `bool` in Python is a [subclass of integer][14], so you can use `1` to denote `True` and `0` for `False`. - - -``` -Ran 76 tests in 0.087s - -OK - -== Tests result: SUCCESS == - -1 test OK. - -Total duration: 174 ms -Tests result: SUCCESS -``` - -Lastly, I needed the tests for the `return` rule. They're defined in the `test_return` test. Just like the `if` expression one, this test passed with no modification. - -To test this new use case, I created a function that receives a `bool` argument and returns if the argument is true. 
When it's false, it skips the return, just like the manual tests I had been doing up to this point: - - -``` -        def g4(test): -             a = 1 -             return a if test -             a += 1 -             return a - -        self.assertEqual(g4(False), 2) -        self.assertEqual(g4(True), 1) -``` - -I saved the file and re-ran `test_grammar` one more time: - - -``` -\---------------------------------------------------------------------- - -Ran 76 tests in 0.087s - -OK - -== Tests result: SUCCESS == - -1 test OK. - -Total duration: 174 ms -Tests result: SUCCESS -``` - -Looks good! The `test_grammar` test passed. Just in case, I re-ran the full test suite: - - -``` -`$ ./python -m test -j8` -``` - -After a while, all tests passed, and I'm very happy with the result. - -### Limitations - -If you know Ruby well, by this point, you've probably noticed that what I did here was not 100% the same as a conditional modifier. For example, in Ruby, you can run actual expressions in these modifiers: - - -``` -irb(main):002:0> a = 42 -irb(main):003:0> a += 1 if false -=> nil -irb(main):004:0> a -=> 42 -irb(main):005:0> a += 1 if true -=> 43 -``` - -I cannot do the same with my implementation: - - -``` ->>> a = 42 ->>> a += 1 if False -Traceback (most recent call last): -  File "<stdin>", line 1, in <module> -TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType' ->>> a += 1 if True ->>> a -43 -``` - -What this reveals is that the `return` rule I created is just a workaround. If I want to make it as close as possible to Ruby's conditional modifier, I'll need to make it work with other statements as well, not just `return`. - -Nevertheless, this is fine. My goal with this experiment was just to learn more about Python internals and see how I would navigate a little-known code base written in C and make the appropriate changes to it. And I have to admit that I'm pretty happy with the results! - -### Conclusion - -Adding a new syntax inspired by Ruby is a really nice exercise to learn more about the internals of Python. Of course, if I had to convert this as a PR, the core developers would probably find a few shortcomings, as I have already described in the previous section. However, since I did this just for fun, I'm very happy with the results. - -The source code with all my changes is on my CPython fork under the [branch ruby-if-new][15]. 
- -* * * - -_This article was originally published on the [author's personal blog][16] and has been adapted with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/python-else-less - -作者:[Miguel Brito][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/miguendes -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks) -[2]: https://www.python.org/dev/peps/pep-0020/#id2 -[3]: https://github.com/python/cpython/pull/27506 -[4]: https://opensource.com/sites/default/files/ihe46r0jv.gif -[5]: https://tonybaloney.github.io/ -[6]: https://realpython.com/products/cpython-internals-book/ -[7]: https://github.com/python/cpython/blob/3.10/Makefile.pre.in#L850_L856 -[8]: https://craftinginterpreters.com/ -[9]: https://journal.stuffwithstuff.com/ -[10]: https://docs.python.org/3/library/ast.html -[11]: https://en.wikipedia.org/wiki/Bytecode -[12]: https://docs.python.org/3/library/dis.html -[13]: http://man7.org/linux/man-pages/man2/mmap.2.html -[14]: https://docs.python.org/3/c-api/bool.html -[15]: https://github.com/miguendes/cpython/tree/ruby-if-new -[16]: https://miguendes.me/what-if-python-had-this-ruby-feature diff --git a/sources/tech/20210917 Organize your Magic- The Gathering decks with Magic Assistant.md b/sources/tech/20210917 Organize your Magic- The Gathering decks with Magic Assistant.md deleted file mode 100644 index 94982613d4..0000000000 --- a/sources/tech/20210917 Organize your Magic- The Gathering decks with Magic Assistant.md +++ /dev/null @@ -1,114 +0,0 @@ -[#]: subject: "Organize your Magic: The Gathering decks with Magic Assistant" -[#]: via: "https://opensource.com/article/21/9/magic-the-gathering-assistant" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Organize your Magic: The Gathering decks with Magic Assistant -====== -The open source application Magic Assistant makes managing your Magic -collection easy. -![Holding a Magic the Gathering deckmaster card][1] - -The world's first trading card game was _Magic: The Gathering,_ first published in 1993. - -It remains popular today because of its great flexibility. With more than 25,000 unique cards published over nearly three decades, there are enough cards for players to build hundreds of different decks for surprisingly unique gameplay experiences. - -Along with this flexibility, however, there comes a cost: many _Magic: The Gathering_ players collect lots of cards so they can construct lots of different decks, which in turn lets them focus on different win conditions and try out different strategies. - -It can be quite a job to keep track of 1,000 cards when you only need 60 to 100 for a deck, but the open source application Magic Assistant makes managing your _Magic_ collection easy. - -### Installing Magic Assistant - -[Magic Assistant][2] is a Java application, so it's cross-platform. 
Regardless of whether you're on the open source operating system [Linux,][3] macOS, or Windows, you can download Magic Assistant, double-click on its launcher icon, and use it to manage your cards. - -After the application first launches, there are sure to be updates to the card database available. Multiple new _Magic_ sets are released each year, so accept the offer to update and go grab a cup of coffee while new cards are added. - -### Importing cards - -To catalog your cards with Magic Assistant, either you can rummage through the card database manually to add cards to your local collection, or you can import an existing list. The simplest format for a list of _Magic_ cards is a text file containing the number of copies you own and the name of a card on a line by itself: - - -``` -2x Mimic 1x Mordenkainen's Polymorph 2x Ray of Frost 4x Sol Ring -``` - -However, the application supports many formats, including a CSV from _Magic: The Gathering Online,_ TCGPlayer table, MTG Studio, Apprentice, DeckBox, and more. - -To import your cards, select the Import option from the File menu. - -![Sample entry field for importing cards into a deck or collection][4] - -Importing cards (Seth Kenlon, [CC BY-SA 4.0][5]) - -Importing cards adds them to the default collection database (called **main**). This database represents the entirety of your collection. You can then use cards from your collection to build decks and cubes. There's no limit on how many collections you can have, so you can organize your cards in whatever way you prefer. - -### Browsing your collection - -A collection is organized by its metadata by default. That means you can browse your collection by any number of attributes, including mana cost, card type, color, keyword abilities, and format legality. All of these options are available as tabs at the bottom of the collection interface. - -![A view of sorting tabs with several card categories][6] - -Interface tabs (Seth Kenlon, [CC BY-SA 4.0][5]) - -### Building a deck or cube - -One way to get better at _Magic: The Gathering_—and get a better feel for how you like to experience the game—is to build decks. On the one hand, it's great to hold physical cards, but, on the other hand, it can be a lot of work to sort through hundreds of cards kept in several different boxes or binders. With Magic Assistant, it's easy to sort through your cards based on whatever attribute you need, so building decks with it is a pleasure. - -To build a new deck (Magic Assistant has no concept of a cube, but functionally a cube is arguably no different than a deck), right-click on the Deck category in the Card Navigator panel, then select New to create a new deck. - -There are two kinds of decks you can build. You can build a virtual deck, which is purely theorycrafting, with no implication that the deck exists physically. When you create a virtual deck, you can take a card you own only one actual copy of and use it in several decks. You could not build the decks in real life, obviously, because you would be overusing some number of cards, but as a deck idea or recipe, it works well. - -Alternately, you can build a "real" deck, which affects your collection the same way a physical deck does. If you put three copies of Sol Ring into a deck, then your collection shows that you have three fewer copies of Sol Ring available. - -Choose what kind of deck you're building, and give your deck a name for your own reference in the New window. 
- -![Sample entry field for creating a new deck, with name and parent container][7] - -Deckbuilding tool (Seth Kenlon, [CC BY-SA 4.0][5]) - -To add cards to your a deck, locate the card in your collection, right-click on it, and select Move to or Copy to, followed by the deck you want it to appear in. A virtual deck never lets you move cards to it. Instead, it prompts you to copy the card, because a virtual card never removes a card from your collection. - -![Using Copy to to add a card to a specific deck][8] - -Adding a card to a deck (Seth Kenlon, [CC BY-SA 4.0][5]) - -When you have multiple copies of a card and copy that card into a deck, Magic Assistant adds all copies. To decrease or increase the number of copies of a card in a deck, right-click on the card in your deck and choose Decrease count or Increase count. - -### Deck reports - -After you've built a deck, you can view reports detailing the statistics for the cards you've assembled. You can get charts to view card types, creature types, mana curve, color distribution, and more. - -![A pie chart showing the different types of cards, and a chart with greater detail][9] - -Charts and graphs (Seth Kenlon, [CC BY-SA 4.0][5]) - -### Open source tools for everything - -There's an open source application for nearly everything, so it's no surprise that there's a robust card collection manager like Magic Assistant to help _Magic: The Gathering_ players. If you play the original trading card game, try this one out. It can help you keep track of which cards you have available for building decks. And just as important, it may encourage you to build more decks more often, because it's so easy to do when all the cards are at your fingertips. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/magic-the-gathering-assistant - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wayne-low-unsplash.jpg?itok=eqvfc71L (Holding a Magic the Gathering deckmaster card) -[2]: https://sourceforge.net/projects/mtgbrowser/ -[3]: https://opensource.com/resources/linux -[4]: https://opensource.com/sites/default/files/mtgassistant-import.jpeg (Importing cards into a deck or collection) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/mtgassistant-tab.jpeg (Interface tabs) -[7]: https://opensource.com/sites/default/files/mtgassistant-deck_0.jpeg (Deckbuilding tool) -[8]: https://opensource.com/sites/default/files/mtgassistant-add_0.jpeg (Using Copy to to add a card to a specific deck) -[9]: https://opensource.com/sites/default/files/mtgassistant-chart.jpeg (Charts and graphs) diff --git a/sources/tech/20210921 Pensela- An Open-Source Tool Tailored for Screen Annotations.md b/sources/tech/20210921 Pensela- An Open-Source Tool Tailored for Screen Annotations.md deleted file mode 100644 index d82ee2048f..0000000000 --- a/sources/tech/20210921 Pensela- An Open-Source Tool Tailored for Screen Annotations.md +++ /dev/null @@ -1,116 +0,0 @@ -[#]: subject: "Pensela: An Open-Source Tool Tailored for Screen Annotations" -[#]: via: "https://itsfoss.com/pensela/" -[#]: author: "Ankush Das https://itsfoss.com/author/ankush/" 
-[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Pensela: An Open-Source Tool Tailored for Screen Annotations -====== - -_**Brief:** Pensela is an interesting screen annotation tool available cross-platform. Let us take a closer look at it._ - -You may have come across [several screenshot tools][1] available for Linux. However, a dedicated screen annotation tool along with the ability to take screenshots? And, with cross-platform support? - -Well, that sounds even better! - -### Pensela: A Useful Screen Annotation Tool - -![][2] - -While you get many tools to beautify your screenshots and the screenshot tools like Flameshot, Pensela lets you focus on annotations first. - -It focuses on offering several annotation options while giving you the ability to take full-size screenshots. - -Here, I shall highlight some of its features along with my experience using it. - -### Features of Pensela - -**Note:** Pensela is a fairly new project on [GitHub][3] with no recent updates to it. If you like what you see, I encourage you to help the project or fork it to add the necessary improvements. - -Given that it is a new project with an uncertain future, the feature set is impressive as per what it describes. - -Here’s what you can expect: - - * Cross-platform support (Windows, macOS, and Linux) - * Drawing shapes (circle,square,triangle, and more) - * Signs for yes or no (or correct or wrong) - * Arrow object - * Double-sided arrow - * Ability to change the color of the objects added - * Undo/Redo option - * Add custom text - * Adjust the placement of text/objects - * Toggle the annotation tool or turn off to use the active window - * Text highlighter - * Screenshot button to take the full-screen picture - * Option for clearing all the drawings - - - -### Using Pensela as Screen Annotation Tool - -The moment you launch the tool, your active window gets unresponsive because it focuses on the annotation capability of pensela. - -You get the option to toggle it using the visibility button (icon with an eye). If you disable it, you can interact with the active windows and your computer, but you cannot add annotations. - -![][4] - -When you enable it, the annotations should start working, and the existing ones will be visible. - -This should come in handy if you are streaming/screencasting so that you can use the annotations live and toggle them off when needed. - -In the same section, you select the drag button with two double-side arrows, which lets you move the annotations you already created before turning off the button. - -You can add a piece of text if you click on “T” and then tweak it around to set a color to add them. The tool gives you the freedom to customize the colors of every object available. - -The undo/redo feature works like a charm without limits, which is a good thing. - -The ability to hide all the annotations in one click while resuming it after finishing any existing work should come in handy. - -![][5] - -**Some downsides of Pensela as of now:** - -Unfortunately, it does not let you take a screenshot of a specific region on your screen. It only takes a full-screen screenshot, and any annotations you work on need to be full-screen specific for the best results. - -Of course, you can manually crop/resize the screenshot later, but that is a limitation I have come across. - -Also, you cannot adjust the position of the annotation bar. 
So, it could be an inconvenience if you want to add an annotation on the top side of your screen. - -And, there is no advanced customization option to tweak or change the behavior of how the tools work, how the screenshot is taken, etc. - -### Installing Pensela in Linux - -You get an AppImage file and a deb file available from its [GitHub releases section][6]. - -Using an AppImage file should come in handy irrelevant of your Linux distribution, but feel free to try other options mentioned on its GitHub page. - -You should also find it in [AUR][7] on an Arch-based Linux distro. - -[Pensela][3] - -What do you think about Pensela as an annotation tool? Do you know of any similar annotation tools? Feel free to let me know your thoughts in the comments down below. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/pensela/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/take-screenshot-linux/ -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-screenshot.png?resize=800%2C442&ssl=1 -[3]: https://github.com/weiameili/Pensela -[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-visibility.png?resize=575%2C186&ssl=1 -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-folder-screenshot.png?resize=800%2C285&ssl=1 -[6]: https://github.com/weiameili/Pensela/releases/tag/v1.1.3 -[7]: https://itsfoss.com/aur-arch-linux/ diff --git a/sources/tech/20210924 PowerShell on Linux- A primer on Object-Shells.md b/sources/tech/20210924 PowerShell on Linux- A primer on Object-Shells.md deleted file mode 100644 index 15759ad4f1..0000000000 --- a/sources/tech/20210924 PowerShell on Linux- A primer on Object-Shells.md +++ /dev/null @@ -1,402 +0,0 @@ -[#]: subject: "PowerShell on Linux? A primer on Object-Shells" -[#]: via: "https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/" -[#]: author: "TheEvilSkeletonOzymandias42 https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -PowerShell on Linux? A primer on Object-Shells -====== - -![][1] - -Photos by [NOAA][2] and [Cedric Fox][3] on [Unsplash][4] - -In the previous post, [Install PowerShell on Fedora Linux][5], we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells. - -### Table of contents - - * [Differences at first glance — Usability][6] - * [Speed and efficiency][7] - * [Aliases][8] - * [Custom aliases][9] - * [Differences between POSIX Shells — Char-stream vs. 
Object-stream][10] - * [To filter for something][11] - * [Output formatting][12] - * [Field separators, column-counting and sorting][13] - * [Getting rid of fields and formatting a nice table][14] - * [How it’s done in PowerShell][15] - * [Remote Administration with PowerShell — PowerShell-Sessions on Linux!?][16] - * [Background][17] - * [What this is good for][18] - * [Conclusion][19] - - - -### Differences at first glance — Usability - -One of the very first differences to take note of when using PowerShell for the first time is semantic clarity. - -Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing. - -Commands like _awk_, _ps_, _top_ or even _ls_ do not communicate what they do with their name. Only when one already _does_ know what they do, do the names start to make sense. Once I know that _ls_ **lists** files the abbreviation makes sense. - -In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention. - -Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun. - -One example: To **get** all files or child-items in a directory I tell PowerShell like this: - -``` -PS > Get-ChildItem - - Directory: /home/Ozymandias42 - -Mode LastWriteTime Length Name ----- ------------- ------ ---- -d---- 14/04/2021 08:11 Folder1 -d---- 13/04/2021 11:55 Folder2 -``` - -**An Aside:** -The cmdlet name is Get-Child_Item_ not _Item**s**_. This is in acknowledgement of [Set-theory][20]. Each of the standard cmdlets return a list or a set of results. The number of items in a set —mathematicians call this the sets [cardinality][21]— can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result or many results. The reason for this, and why I stress this here, is because the standard cmdlets _also_ implicitly implement a ForEach-Loop for any results they return. More about this later. - -#### Speed and efficiency - -##### Aliases - -You might have noticed that standard cmdlets are long and can therefore be time consuming when writing scripts. However, many cmdlets are aliased and don’t necessarily depend on the case, which mitigates this problem. - -Let’s write a script with unaliased cmdlets as an example: - -``` -PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan} -``` - -This lists the name of running processes in cyan. As you can see, many characters are in upper case and cmdlets names are relatively long. Let’s shorten them and replace upper case letters to make the script easier to type: - -``` -PS > gps | foreach {write-host $_.name -foregroundcolor cyan} -``` - -This is the same script but with greatly simplified input. - -To see the full list of aliased cmdlets, type _Get-Alias_. - -##### Custom aliases - -Just like any other shell, PowerShell also lets you set your own aliases by using the _Set-Alias_ cmdlet. Let’s alias _Write-Host_ to something simpler so we can make the same script even easier to type: - -``` -PS > Set-Alias -Name wh -Value Write-Host -``` - -Here, we aliased _wh_ to _Write-Host_ to increase typebility. When setting aliases, _-Name_ indicates what you want the alias to be and _-Value_ indicates what you want to alias to. - -Let’s see how it looks now: - -``` -PS > gps | foreach {wh $_.name -foregroundcolor cyan} -``` - -You can see that we already made the script easier to type. 
If we wanted, we could also alias _ForEach-Object_ to _fe_, but you get the gist. - -If you want to see the properties of an alias, you can type _Get-Alias_. Let’s check the properties of the alias _wh_ using the _Get-Alias_ cmdlet: - -``` -PS > Get-Alias wh - -CommandType Name Version Source ------------ ---- ------- ------ -Alias wh -> Write-Host -``` - -##### Autocompletion and suggestions - -PowerShell suggests cmdlets or flags when you press the Tab key twice, by default. If there is nothing to suggest, PowerShell automatically completes to the cmdlet. - -### Differences between POSIX Shells — Char-stream vs. Object-stream - -Any scripting will eventually string commands together via pipe | and soon come to notice a few key differences. - -In bash what is moved from one command to the next through a pipe is just a string of characters. However, in PowerShell this is not the case. - -In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this: - -``` -{ - firstAuthor=Ozy, - secondAuthor=Skelly -} -``` - -This data is kept as-is even if a command, used alone, would have presented this data as follows: - -``` -AuthorNr. AuthorName -1 Ozy -2 Skelly -``` - -In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like _awk_ or _cut_ first, to be usable with a different command. - -PowerShell does not require this parsing since the underlying structure is sent when using a pipe rather than the formatted output shown without. So the command _authorObject | doThingsWithSingleAuthor firstAuthor_ is possible. - -The following examples shall further illustrate this. - -**Beware:** This will get fairly technical and verbose. Skip if satisfied already. - -A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to: - - * filter for something - * format output - * sort output - - - -When implementing these in bash there are a few things that will re-occur time and time again. -The following sections will exemplarise these constructs and their variants in bash and contrast them with their PowerShell equivalents. - -#### To filter for something - -Let’s say you want to see all processes matching the name _ssh-agent_. -In human thinking terms you know what you want. - - 1. Get all processes - 2. Filter for all processes that match our criteria - 3. Print those processes - - - -To apply this in bash we could do it in two ways. - -The first one, which most people who are comfortable with bash might use is this one: - -``` -$ ps -p $(pgrep ssh-agent) -``` - -At first glance this is straight forward. _ps_ get’s all processes and the _-p_ flag tells it to filter for a given list of pids. -What the veteran bash user might forget here however is that this might read this way but is not actually run as such. There’s a tiny but important little thing called the order of evaluation. - -_$()_ is d a subshell. A subshell is run, or evaluated, first. This means the list of pids to filter again is first and the result is then returned in place of the subshell for the waiting outer command _ps_ to use. - -This means it is written as: - - 1. Print processes - 2. Filter Processes - - - -but evaluated the other way around. It also implicitly combines the original steps 2. and 3. 
- -A less often used variant that more closely matches the human thought pattern and evaluation order is: - -``` -$ pgrep ssh-agent | xargs ps -``` - -The second one still combines two steps, the steps 1. and 2. but follows the evaluation logic a human would think of. - -The reason this variant is less used is that ominous _xargs_ command. What this basically does is to append all lines of output from the previous command as a single long line of arguments to the command followed by it. In this case _ps_. - -This is necessary because pgrep produces output like this: - -``` -$ pgrep bash -14514 -15308 -``` - -When used in conjunction with a subshell _ps_, might not care about this but when using pipes to approximate the human evaluation order this becomes a problem. - -What _xargs_ does, is to reduce the following construct to a single command: - -``` -$ for i in $(pgrep ssh-agent); do ps $i ; done -``` - -Okay. Now we have talked a LOT about evaluation order and how to do it in bash in different ways with different evaluation orders of the three basic steps we outlined. - -So with this much preparation, how does PowerShell handle it? - -``` -PS > Get-Process | Where-Object Name -Match ssh-agent -``` - -Completely self-descriptive and follows the evaluation order of the steps we outlined perfectly. Also do take note of the absence of _xargs_ or any explicit for-loop. - -As mentioned in our aside a few hundred words back, the standard cmdlets all implement ForEach internally and do it implicitly when piped input in list-form. - -#### Output formatting - -This is where PowerShell really shines. Consider a simple example to see how it’s done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest and listed as a table with filename, size and creation date. Also let’s say we have some files with long filenames in there and want to make sure we get the full filename no matter how big our terminal. - -##### Field separators, column-counting and sorting - -Now the first obvious step is to run _ls_ with the _-l_ flag to get a list with not just the filenames but the creation date and the file sizes we need to sort against too. - -We will get a more verbose output than we need. Like this one: - -``` -$ ls -l -total 148692 --rwxr-xr-x 1 root root 51984 May 16 2020 [ --rwxr-xr-x 1 root root 283728 May 7 18:13 appdata2solv -lrwxrwxrwx 1 root root 6 May 16 2020 apropos -> whatis --rwxr-xr-x 1 root root 35608 May 16 2020 arch --rwxr-xr-x 1 root root 14784 May 16 2020 asn1Coding --rwxr-xr-x 1 root root 18928 May 16 2020 asn1Decoding -[not needed] [not needed] -``` - -What is apparent is, that to get the kind of output we want we have to get rid of the fields marked _[not needed]_ in the above example but that’s not the only thing needing work. We also need to sort the output so that the biggest file is the first in the list, meaning reverse sort… - -This, of course, can be done in multiple ways but it only shows again, how convoluted bash scripts can get. - -We can either sort with the _ls_ tool directly by using the _-r_ flag for reverse sort, and the _–sort=size_ flag for sort by size, or we can pipe the whole thing to _sort_ and supply that with the _-n_ flag for numeric sort and the _-k 5_ flag to sort by the fifth column. - -Wait! **fifth** ? Yes. Because this too we would have to know. _sort_, by default, uses spaces as field separators, meaning in the tabular output of _ls -l_ the numbers representing the size is the 5th field. 
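As a rough sketch (GNU coreutils assumed), the two sorting variants just described could look like this. Note that `--sort=size` already lists the biggest files first, while `sort` needs `-r` to turn its ascending numeric order around:


```
$ ls -l --sort=size          # let ls sort by size itself, largest first
$ ls -l | sort -n -r -k 5    # or: reverse numeric sort on the 5th column
```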
- -##### Getting rid of fields and formatting a nice table - -To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and most likely to be known, is probably _cut_. This is one of the few UNIX commands that is self-descriptive, even if it’s just because of the natural brevity of it’s associated verb. So we pipe our results, up to now, into _cut_ and tell it to only output the columns we want and how they are separated from each other. - -_cut -f5- -d” “_ will output from the fifth field to the end. This will get rid of the first columns. - -``` -283728 May 7 18:13 appdata2solv - 51984 May 16 2020 [ - 35608 May 16 2020 arch - 14784 May 16 2020 asn1Coding - 6 May 16 2020 apropos -> whatis -``` - -This is till far from how we wanted it. First of all the filename is in the last column and then the filesize is in the Human unfriendly format of blocks instead of KB, MB, GB and so on. Of course we could fix that too in various ways at various points in our already long pipeline. - -All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline. - -##### How it’s done in PowerShell - -``` -PS > Get-ChildItem -| Sort-Object Length -Descending -| Format-Table -AutoSize - Name, - @{Name="Size"; Expression= - {[math]::Round($_.Length/1MB,2).toString()+" MB"} - }, - CreationTime -#Reformatted over multiple lines for better readability. -``` - -The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. This also is one of the only real weaknesses of PowerShell, that it lacks a _simple_ mechanism to get human readable filesizes. - -That part aside it’s clear, that Format-Table allows you to simply list the columns wanted by their names in the order you want them. - -This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters. - -#### Remote Administration with PowerShell — PowerShell-Sessions on Linux!? - -#### Background - -Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol. - -With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client. - -Using the SSH Server alone on Windows provides the user a CMD prompt unless the default system Shell is changed via a registry key. - -A more elegant option is to make use of the Subsystem facility in _sshd_config_. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell. - -By default there is usually one already there. The sftp subsystem. - -To make PowerShell available as Subsystem one simply needs to add it like so: - -``` -Subsystem powershell /usr/bin/pwsh -sshs --noprofile --nologo -``` - -This works —with the correct paths of course— on _all_ OS’ PowerShell Core is available for. So that means Windows, Linux, and macOS. - -#### What this is good for - -It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this: - -``` -PS > Enter-PSSession - -HostName - -User - -IdentityFilePath - ... - <-SSHTransport> -``` - -What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. 
It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem. - -One such example is the _Invoke-Command_ cmdlet. This becomes especially useful, given that _Invoke-Command_ has the _-AsJob_ flag. - -What this enables is running local scripts as batchjobs on multiple remote servers while using the local Job-manager to get feedback about when the jobs have finished on the remote machines. - -While it is possible to run local scripts via ssh on remote hosts it is not as straight forward to view their progress and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity’s sake. - -With PowerShell, however, this can be as easy as this: - -``` -$listOfRemoteHosts | Invoke-Command - -HostName $_ - -FilePath /home/Ozymandias42/Script2Run-Remotely.ps1 - -AsJob -``` - -Overview of the running tasks is available by doing this: - -``` -PS > Get-Job - -Id Name PSJobTypeName State HasMoreData Location Command --- ---- ------------- ----- ----------- -------- ------- -1 Job1 BackgroundJob Running True localhost Microsoft.PowerShe… -``` - -Jobs can then be attached to again, should they require manual intervention, by doing _Receive-Job <JobName-or-JobNumber>_. - -### Conclusion - -In conclusion, PowerShell applies a fundamentally different philosophy behind its syntax in comparison to standard POSIX shells like bash. Of course, for bash, it’s historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and outputs which means better understandability for humans, hence easier to use and learn. PowerShell also provides aliased cmdlets in the case of unaliased cmdlets being too long. The main difference is that PowerShell is object-oriented, leading to elimination of input-output parsing. This allows PowerShell scripts to be more concise. 
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/ - -作者:[TheEvilSkeletonOzymandias42][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/powershell_2-816x345.jpg -[2]: https://unsplash.com/@noaa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[3]: https://unsplash.com/@thecedfox?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://unsplash.com/s/photos/shell?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[5]: https://fedoramagazine.org/install-powershell-on-fedora-linux -[6]: tmp.YtC5jLcRsL#differences-at-first-glance--usability -[7]: tmp.YtC5jLcRsL#speed-and-efficiency -[8]: tmp.YtC5jLcRsL#aliases -[9]: tmp.YtC5jLcRsL#custom-aliases -[10]: tmp.YtC5jLcRsL#differences-between-posix-shells--char-stream-vs-object-stream -[11]: tmp.YtC5jLcRsL#to-filter-for-something -[12]: tmp.YtC5jLcRsL#output-formatting -[13]: tmp.YtC5jLcRsL#field-operators-collumn-counting-and-sorting -[14]: tmp.YtC5jLcRsL#getting-rid-of-fields-and-formatting-a-nice-table -[15]: tmp.YtC5jLcRsL#how-its-done-in-powershell -[16]: tmp.YtC5jLcRsL#remote-administration-with-powershell--powershell-sessions-on-linux -[17]: tmp.YtC5jLcRsL#background -[18]: tmp.YtC5jLcRsL#what-this-is-good-for -[19]: tmp.YtC5jLcRsL#conclusion -[20]: https://en.wikipedia.org/wiki/Set_(mathematics) -[21]: https://en.wikipedia.org/wiki/Set_(mathematics)#Cardinality diff --git a/sources/tech/20210925 6 open source tools for orchestral composers.md b/sources/tech/20210925 6 open source tools for orchestral composers.md deleted file mode 100644 index 3696a1bdc1..0000000000 --- a/sources/tech/20210925 6 open source tools for orchestral composers.md +++ /dev/null @@ -1,63 +0,0 @@ -[#]: subject: "6 open source tools for orchestral composers" -[#]: via: "https://opensource.com/article/21/9/open-source-orchestral-composers" -[#]: author: "Pete Savage https://opensource.com/users/psav" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -6 open source tools for orchestral composers -====== -Think it's impossible to compose orchestral tracks with just open source -software? Think again. -![Sheet music with geometry graphic][1] - -As an avid amateur musician, I've worked with many different software programs to create both simple and complex pieces. As my projects have grown in scope, I've used composition software ranging from basic engraving to MIDI-compatible notation to playback of multi-instrument works. Composers have their choice of proprietary software, but I wanted to prove that, regardless of the need, there is an open source tool that will more than satisfy them. - -### Music engraving programs - -When my needs were simple and my projects few, I used the excellent resource [Lilypond][2], part of the GNU project, for engraving my music score. Lilypond is a markup language used to create sheet music. What looks like a mass of letters and numbers on the screen becomes a beautiful music score that can be exported as a PDF to share with all your musical acquaintances. 
For creating small snippets of a score, Lilypond performs excellently. - -Using a text markup language might be a tolerable experience for a software engineer, but waiting to save and run the renderer before seeing the result of your edit can be frustrating. [Frescobaldi][3] is an effective solution to this problem, allowing you to work in a text editor on the left and see a live preview updating on the right. For small scores, this works well. For larger scores, however, the render time can make for a painful experience. Though Frescobaldi has a built-in MIDI-style player, hooking it up to play something requires both knowledge of [JACK][4] (an audio connection API) and a user interface such as [qSynth][5]. For me, Frescobaldi is best for projects when I already know what the score looks like. It's not a composing tool; it's an engraving tool. - -### Music notation programs - -A few months ago, I started creating a songbook for my former band. For this project, I needed to add chord diagrams, guitar tablature, and multiple staves, so I moved over to [Denemo.][6] Denemo is a fabulously configurable tool that uses LilyPond as its rendering backend. The key benefit to Denemo is the ability to enter notes on a stave. The stave you enter notes on might not look exactly like the score will appear on rendering—in fact, it almost certainly won't. However, in most cases, it's far easier to enter the notes directly on a stave than to write them in a text markup language. - -Denemo served me well when creating my songbook, but I had greater ambitions. When I started composing a few piano and small ensemble pieces, I could have handled these in Denemo, but I decided to try [MuseScore][7] to compare the programs. Though MuseScore doesn't use a text-based markup language like Lilypond, it has many other benefits over the LilyPond-based offerings, such as single-note dynamics and rendering out to WAV or MP3. - -In my latest project, I took a piano concept I wrote for a fictional role-playing game (RPG) and turned it into a full orchestral version. MuseScore was fantastic for this. The program definitely became part of my composing process, and it would have been much more difficult for me to arrange 18 instruments in LilyPond than in MuseScore. I was also able to hear single-note dynamics, such as a single violin note moving from silence to loud and back. I do not know of any editors for Lilypond that allow for this. - -#### Piano Concept - -#### Orchestral Concept - -### Going beyond the score - -My next task will be to take the MIDI from this project and code it into a Digital Audio Workstation (DAW), such as [Ardour][8]. The difference between the audio output from MuseScore and something created with a DAW is that a DAW allows much more than single-note dynamics. Expression, volume, and other parameters can be adjusted in time, allowing for a more realistic sound, assuming the instrument can handle it. I'm currently working on packaging up [sFizz][9] for Fedora. sFizz is an SFZ instrument VST plugin that can be used in an open source DAW and has fantastic support for the different expressions I'd like to use in my piece. - -The ultimate aim of this project is to show that open source tooling can be used to create an orchestral track that sounds authentic. Think it's impossible to make realistic-sounding orchestral tracks just with open source software? That's for next time. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/open-source-orchestral-composers - -作者:[Pete Savage][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/psav -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sheet_music_graphic.jpg?itok=t-uXNbzE (Sheet music with geometry graphic) -[2]: https://lilypond.org/ -[3]: https://frescobaldi.org/ -[4]: https://jackaudio.org/ -[5]: https://qsynth.sourceforge.io/ -[6]: http://www.denemo.org/ -[7]: https://musescore.org/en -[8]: https://ardour.org/ -[9]: https://sfz.tools/sfizz/ diff --git a/sources/tech/20210928 Convert your Raspberry Pi into a trading bot with Pythonic.md b/sources/tech/20210928 Convert your Raspberry Pi into a trading bot with Pythonic.md deleted file mode 100644 index c8d236dce4..0000000000 --- a/sources/tech/20210928 Convert your Raspberry Pi into a trading bot with Pythonic.md +++ /dev/null @@ -1,509 +0,0 @@ -[#]: subject: "Convert your Raspberry Pi into a trading bot with Pythonic" -[#]: via: "https://opensource.com/article/21/9/raspberry-pi-trading-bot" -[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Convert your Raspberry Pi into a trading bot with Pythonic -====== -Reduce your power consumption by setting up your cryptocurrency trading -bot on a Raspberry Pi. -![A dollar sign in a network][1] - -The current popularity of cryptocurrencies also includes trading in them. Last year, I wrote an article *[How to automate your cryptocurrency trades with Python][2] *which covered the setup of a trading bot based on the graphical programming framework [Pythonic][3], which I developed in my leisure. At that time, you still needed a desktop system based on x86 to run Pythonic. In the meantime, I have reconsidered the concept (web-based GUI). Today, it is possible to run Pythonic on a Raspberry Pi, which mainly benefits the power consumption because such a trading bot has to be constantly switched on. - -That previous article is still valid. If you want to create a trading bot based on the old version of Pythonic (0._x_), you can install it with `pip3 install Pythonic==0.19`. - -This article covers the setup of a trading bot running on a Raspberry Pi and executing a trading algorithm based on the [EMA crossover strategy][4]. - -### Install Pythonic on your Raspberry Pi - -Here, I only briefly touch on the subject of installation because you can find detailed installation instructions for Pythonic in my last article [_Control your Raspberry Pi remotely with your smartphone_][5]. In a nutshell: Download the Raspberry Pi image from [sourceforge.net][6] and flash it on the SD card. - -The PythonicRPI image has no preinstalled graphical desktop, so to proceed, you should be able to access the programming web GUI (http : //PythonicRPI:7000/): - -![Pythonic GUI overview][7] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -#### Example code - -Download the example code for the trading bot from [GitHub][9] (direct download link) and unzip the archive. 
The archive contains three different file types: - - * `\*.py-files`: Contains the actual implementation of certain functionality - * `current_config.json`: This file describes the configured elements, the links between the elements, and the variable configuration of elements - * `jupyter/backtest.ipynb`: A [Jupyter][10] notebook for backtesting - * `jupyter/ADAUSD_5m.df`: A minimal OHLCV dataset which I use in this example - - - -With the green outlined button, upload the `current_config.json` to the Raspberry Pi. You can upload only valid configuration files. With the yellow outlined button, upload all the `\*.py `files.  - -![Upload toolbar buttons][11] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -The `\*.py `files are uploaded to `/home/pythonic/Pythonic/executables` whereas the `current_config.json` is uploaded to `/home/pythonic/Pythonic/current_config.json`. After uploading the `current_config.json`, you should see a screen like this: - -![Pythonic screen after upload of config.json][12] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Now I'll go step-by-step through each part of the trading bot. - -### Data acquisition - -Like in the last article, I begin with the data acquisition: - -![Pythonic area 2 data acquisition ][13] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -The data acquisition can be found on the **Area 2** tab and runs independently from the rest of the bot. It implements the following functionality: - - * **AcqusitionScheduler**: Trigger subsequent elements every five minutes - * **OHLCV_Query**: Prepares the OHLCV query method - * **KrakenConnector**: Establishes a connection with the Kraken cryptocurrency exchange - * **DataCollector**: Collect and process the new OHLCV data - - - -The _DataCollector_ gets a Python list of OHLCV data with a prefixed timestamp and converts it into a [Pandas DataFrame][14]. Pandas is a popular library for data analysis and manipulation. A _DataFrame_ is the base type for data of any kind to which arithmetic operation can be applied. - -The task of the DataCollector (`generic_pipe_3e059017.py`) is to load an existing DataFrame from file, append the latest OHLCV data, and save it back to file.  
- - -``` -import time, queue -import pandas as pd -from pathlib import Path - -try: -    from element_types import Record, Function, ProcCMD, GuiCMD -except ImportError: -    from Pythonic.element_types import Record, Function, ProcCMD, GuiCMD - -class Element(Function): - -    def __init__(self, id, config, inputData, return_queue, cmd_queue): -        super().__init__(id, config, inputData, return_queue, cmd_queue) -         -    def execute(self): -        df_in = pd.DataFrame(self.inputData, columns=['close_time', 'open', 'high', 'low', 'close', 'volume']) -        df_in['close_time'] = df_in['close_time'].floordiv(1000) # remove milliseconds from timestamp - -        file_path = Path.home() / 'Pythonic' / 'executables' / 'ADAUSD_5m.df' - -        try: -            # load existing dataframe -            df = pd.read_pickle(file_path) -            # count existing rows -            n_row_cnt = df.shape[0] -            # concat latest OHLCV data -            df = pd.concat([df,df_in], ignore_index=True).drop_duplicates(['close_time']) -            # reset the index -            df.reset_index(drop=True, inplace=True) -            # calculate number of new rows -            n_new_rows = df.shape[0] - n_row_cnt -            log_txt = '{}: {} new rows written'.format(file_path, n_new_rows) - -        except Exception as e: -            log_txt = 'File error - writing new one' -            df = df_in  -             -        # save dataframe to file -        df.to_pickle(file_path) - -        logInfo = Record(None, log_txt) -        self.return_queue.put(logInfo) -``` - -This code is executed every full five minutes as the OHLCV data is also in 5-minute intervals. - -By default, the _OHLCV_Query_ element only downloads the dataset for the latest period. To have some data for developing the trading algorithm, right-click the **OHLCV_Query** element to open the configuration, set the _Limit_ to 500, and trigger the **AcquisitionScheduler**. This causes the download of 500 OHLCV values: - -![OHLCV_Query configuration][15] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -### Trading strategy - -Our trading strategy will be the popular [EMA crossover strategy][4]. The EMA indicator is a weighted moving average over the last _n_ close prices that gives more weight to recent price data. You calculate two EMA series, one for a longer period (for example, _n_ = 21, blue line) and one for a shorter period (for example, _n_ = 10, yellow line).  - -![Pythonic trading data graph][16] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -The bot should place a buy order (green circle) when the shorter-term EMA crosses above the longer-term EMA. The bot should place a sell order when the shorter-term EMA crosses below the longer-term EMA (orange circle). - -### Backtesting with Jupyter - -The example code on [GitHub][9] (direct download link) also contains a [Jupyter Notebook][10] file (`backtesting.ipynb`)  which you use to test and develop the trading algorithm. - -**Note:** Jupyter is not preinstalled on the Pythonic Raspberry Pi image. You can either install it also on the Raspberry Pi or install it on your regular PC. I  recommend the latter, as you will do some number crunching that is much faster on an ordinary x86 CPU. - -Start Jupyter and open the notebook. Make sure to have a DataFrame, downloaded by the _DataCollector_, available. With **Shift**+**Enter**, you can execute each cell individually. 
After executing the first three cells, you should get an output like this: - -![Output after executing the first three cells][17] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Now calculate the EMA-10 and EMA-21 values. Luckily, pandas offers you the `ewm` function, which does exactly what is needed. The EMA values are added as separate columns to the DataFrame: - -![EMA values added as separate columns to dataframe][18] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -To determine if a buy or sell condition is met, you have to consider these four variables: - - * **emaLong0**: Current long-term (_ema-21_) EMA value - * **emaLong1**: Last long-term (_ema-21_) EMA value (the value before emaLong0) - * **emaShort0**: Current short-term (_ema-10_) EMA value - * **emaShort1**: Last short-term (_ema-10_) EMA value (the value before emaShort0) - - - -When the following situation comes into effect, a buy condition is met: - -![Buy condition met][19] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -In Python code: - - -``` -`emaLong1 > emaShort1 and emaShort0 > emaLong0` -``` - -A sell condition is met in the following situation: - -![Sell condition met][20] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -In Python code: - - -``` -`emaShort1 > emaLong1 and emaLong0 > emaShort0` -``` - -To test the DataFrame and evaluate the possible profit you could make, you could either iterate over each row and test for these conditions or, with a smarter approach, filter the dataset to only the relevant rows with built-in methods from Pandas. - -Under the hood, Pandas uses [NumPy][21], which is the method of choice for fast and efficient data operation on arrays. This is, of course, convenient because the later use is to take place on a Raspberry Pi with an ARM CPU. - -For the sake of clarity, the DataFrame from the example (`ADAUSD_5m.df`) with only 20 entries is used in the following examples. The following code appends a column of boolean values dependent on the condition `emaShort0 > emaLong0`: - -![Dataframe with 20 entries][22] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -The place of interest is when a _False_ switches to _True_ (buy) or when _True_ switches to _False_. To filter them apply a `diff` operation to the _condition_ column. The `diff `operation calculates the difference between the current and the previous line. In terms of boolean values, it results in: - - * _False_ `diff` _False_ = _False_ - * _False_ `diff` _True_ = _True_ - * _True_ `diff` _True_ = _False_ - * _True_ `diff` _False_ = _True_ - - - -With the following code, you apply the `diff` operation as a filter to the *condition *column without modifying it: - -![Applying the diff operation][23] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -As a result, you get the desired data: The first row (index 2) signalizes a buy condition and the second row (index 8) signalizes a sell condition. As you now have an efficient way of extracting relevant data, you can calculate possible profit. - -To do so, you have to iterate through the rows and calculate the possible profit based on simulated trades. The variable `bBought` saves the state if you already bought, and `buyPrice` stores the price you bought between the iterations. You also skip the first sell indicator as it doesn't make sense to sell before you've even bought. 
- - -``` -profit   = 0.0 -buyPrice = 0.0 -bBought  = False - -for index, row, in trades.iterrows(): -     -    # skip first sell-indicator -    if not row['condition'] and not bBought: -        continue -     -    # buy-indication -    if row['condition'] and not bBought: -        bBought = True -        buyPrice = row['close'] -         -         -    # sell-indication -    if not row['condition'] and bBought: -        bBought = False -        sellPrice = row['close'] - -        orderProfit = (sellPrice * 100) / buyPrice - 100 -         -        profit += orderProfit -``` - -Your one-trade mini dataset would provide you the following profit: - -![One-trade mini dataset profit][24] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -**Note:** As you can see, the strategy would have given a terrible result as you would have bought at $2.5204 and sold at  $2.5065, causing a loss of 0.55% (order fees not included). However, this is a real-world scenario: One strategy does not work for each scenario. It is on you to find the most promising parameters (for example, using OHLCV on an hourly basis would make more sense in general). - -### Implementation - -You can find the implementation of the decision on the **Area 1** tab.  - -![Decision-making implementation on area 1][25] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -It implements the following functionality: - - * **BotScheduler**: Same as the AcqusitionScheduler: Trigger subsequent elements every five minutes - * **Delay**: Delay the execution for 30 seconds to make sure that the latest OHLCV data was written to file - * **Evaluation**: Make the trading decision based on the EMA crossover strategy - - - -You now know how the decision makings work, so you can take a look at the actual implementation. Open the file `generic_pipe_29dfc189.py`. 
It corresponds to the **Evaluation** element on the screen: - - -``` -@dataclass -class OrderRecord: -    orderType:          bool  # True = Buy, False = Sell -    price:              float # close price -    profit:             float # profit in percent -    profitCumulative:   float # cumulative profit in percent - -class OrderType(Enum):  -    Buy  = True -    Sell = False - -class Element(Function): - -    def __init__(self, id, config, inputData, return_queue, cmd_queue): -        super().__init__(id, config, inputData, return_queue, cmd_queue) - -    def execute(self): - -        ### Load data ### - -        file_path = Path.home() / 'Pythonic' / 'executables' / 'ADAUSD_5m.df' - -        # only the last 21 columsn are considered -        self.ohlcv = pd.read_pickle(file_path)[-21:] - -        self.bBought             = False -        self.lastPrice           = 0.0 -        self.profit              = 0.0 -        self.profitCumulative    = 0.0    -        self.price               = self.ohlcv['close'].iloc[-1] -         -        # switches for simulation - -        self.bForceBuy  = False -        self.bForceSell = False - -        # load trade history from file -        self.trackRecord = ListPersist('track_record') - -        try: -            lastOrder = self.trackRecord[-1] - -            self.bBought          = lastOrder.orderType -            self.lastPrice        = lastOrder.price -            self.profitCumulative = lastOrder.profitCumulative - -        except IndexError: -            pass -         -        ### Calculate indicators ### - -        self.ohlcv['ema-10'] = self.ohlcv['close'].ewm(span = 10, adjust=False).mean() -        self.ohlcv['ema-21'] = self.ohlcv['close'].ewm(span = 21, adjust=False).mean() -        self.ohlcv['condition'] = self.ohlcv['ema-10'] > self.ohlcv['ema-21'] -         -        ### Check for Buy- / Sell-condition ### -        tradeCondition = self.ohlcv['condition'].iloc[-1] != self.ohlcv['condition'].iloc[-2] - -        if tradeCondition or self.bForceBuy or self.bForceSell: - -            orderType = self.ohlcv['condition'].iloc[-1] # True = BUY, False = SELL - -            if orderType and not self.bBought or self.bForceBuy: # place a buy order -                 -                msg         = 'Placing a  Buy-order' -                newOrder    = self.createOrder(True) - -            elif not orderType and self.bBought or self.bForceSell: # place a sell order - -                msg = 'Placing a  Sell-order' - -                sellPrice   = self.price -                buyPrice    = self.lastPrice - -                self.profit = (sellPrice * 100) / buyPrice - 100 -                self.profitCumulative += self.profit - -                newOrder = self.createOrder(False) - -            else: # Something went wrong -                msg = 'Warning: Condition for {}-order met but bBought is {}'.format(OrderType(orderType).name, self.bBought) -                newOrder = None -             - -            recordDone = Record(newOrder, msg)      -            self.return_queue.put(recordDone) - -    def createOrder(self, orderType: bool) -> OrderRecord: -         -        newOrder = OrderRecord( -                orderType=orderType, -                price=self.price, -                profit=self.profit, -                profitCumulative=self.profitCumulative -            ) -         -        self.trackRecord.append(newOrder) - -        return newOrder -``` - -As the general process is not that complicated, I want to highlight some of the peculiarities: - 
-##### Input data - -The trading bot only processes the last 21 elements as this is the range you consider when calculating the exponential moving average: - - -``` -`   self.ohlcv = pd.read_pickle(file_path)[-21:]` -``` - -##### Track record - -The type _ListPersist_ is an extended Python list object that writes itself to the file system when modified (when elements get added or removed). It creates the file `track_record.obj` under `~/Pythonic/executables/` once you run it the first time. - - -``` -`  self.trackRecord = ListPersist('track_record')` -``` - -Maintaining a track record helps to keep the state of recent bot activity. - -##### Plausibility - -The algorithm outputs an object of the type _OrderRecord_ in case conditions for a trade are met. It also keeps track of the overall situation: For example, if a buy signal was received, but `bBought` indicates that you already bought before, something must've gone wrong: - - -``` -else: # Something went wrong -    msg = 'Warning: Condition for {}-order met but bBought is {}'.format(OrderType(orderType).name, self.bBought) -    newOrder = None -``` - -In this scenario, _None_ is returned with a corresponding log message. - -### Simulation - -The Evaluation element (`generic_pipe_29dfc189.py`) contains these switches which enable you to force the execution of a buy or sell order: - - -``` -self.bForceBuy  = False -self.bForceSell = False -``` - -Open the code server IDE (http : //PythonicRPI:8000/), load `generic_pipe_29dfc189.py `and set one of the switches to _True_. Attach with the debugger and add a breakpoint where the execution path enters the _inner if_ conditions. - -![Add breakpoint for inner if conditions][26] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Now open the programming GUI, add a **ManualScheduler **element (configured to _single fire_) and connect it directly to the **Evaluation** element to trigger it manually: - -![Add manual scheduler element][27] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Click the play button. The **Evaluation **element is triggered directly, and the debugger stops at the previously set breakpoint. You are now able to add, remove, or modify orders from the track record manually to simulate certain scenarios: - -![Manually simulate scenarios ][28] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Open the log message window (green outlined button) and the output data window (orange outlined button): - -![Pythonic trading output buttons][29] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -You will see the log messages and output of the **Evaluation** element and thus the behavior of the decision-making algorithm based on your input: - -### - -[19_pythonic_trading_simulate_orders_2.png][30] - -![Log messages and output of evaluation element][31] - -(Stephan Avenwedde, [CC BY-SA 4.0][8]) - -Summary - -The example stops here. The final implementation could notify the user about a trade indication, place an order on an exchange, or query the account balance in advance. At this point, you should feel that everything connects and be able to proceed on your own.  - -Using Pythonic as a base for your trading bot is a good choice because it runs on a Raspberry Pi, is entirely accessible by a web browser, and already has logging features. It is even possible to stop on a breakpoint without disturbing the execution of other tasks using Pythonic's multiprocessing capabilities. 
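The example stops at simulation, but if you want to sanity-check the crossover logic outside of Pythonic before wiring up notifications or live orders, a minimal standalone sketch along these lines will do it. It assumes the `ADAUSD_5m.df` file written by the _DataCollector_ already exists; everything else simply mirrors the Evaluation element above.

```
import pandas as pd
from pathlib import Path

# Load the same pickled OHLCV DataFrame the DataCollector maintains,
# keeping only the last 21 rows, just like the Evaluation element.
file_path = Path.home() / 'Pythonic' / 'executables' / 'ADAUSD_5m.df'
ohlcv = pd.read_pickle(file_path)[-21:]

# Recompute the indicators used by the EMA crossover strategy.
ohlcv['ema-10'] = ohlcv['close'].ewm(span=10, adjust=False).mean()
ohlcv['ema-21'] = ohlcv['close'].ewm(span=21, adjust=False).mean()
ohlcv['condition'] = ohlcv['ema-10'] > ohlcv['ema-21']

# A trade signal exists when the condition flips between the last two rows.
if ohlcv['condition'].iloc[-1] != ohlcv['condition'].iloc[-2]:
    signal = 'BUY' if ohlcv['condition'].iloc[-1] else 'SELL'
    print(f"{signal} signal at close price {ohlcv['close'].iloc[-1]}")
else:
    print('No crossover in the latest interval')
```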
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/raspberry-pi-trading-bot - -作者:[Stephan Avenwedde][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/hansic99 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network) -[2]: https://opensource.com/article/20/4/python-crypto-trading-bot -[3]: https://github.com/hANSIc99/Pythonic -[4]: https://www.investopedia.com/articles/active-trading/052014/how-use-moving-average-buy-stocks.asp -[5]: https://opensource.com/article/21/9/raspberry-pi-remote-control -[6]: https://sourceforge.net/projects/pythonicrpi/ -[7]: https://opensource.com/sites/default/files/uploads/1_pythonic_trading_clear_2.png (Pythonic GUI overview) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://github.com/hANSIc99/Pythonic/raw/master/examples/trading_bot_crossing_ema/trading_bot_crossing_ema.zip -[10]: https://jupyter.org/ -[11]: https://opensource.com/sites/default/files/uploads/2_pythonic_trading_upload_buttons.png (Upload toolbar buttons) -[12]: https://opensource.com/sites/default/files/uploads/3_pythonic_trading_sample_loaded_2.png (Pythonic screen after upload of config.json) -[13]: https://opensource.com/sites/default/files/uploads/4_pythonic_trading_data_acquisition.png (Pythonic area 2 data acquisition) -[14]: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html -[15]: https://opensource.com/sites/default/files/uploads/5_pythonic_trading_ohlcv_limit.png (OHLCV_Query configuration) -[16]: https://opensource.com/sites/default/files/uploads/6_pythonic_trading_ohlcv_data.png (Pythonic trading data graph) -[17]: https://opensource.com/sites/default/files/uploads/7_pythonic_trading_jupyter_a.png (Output after executing the first three cells) -[18]: https://opensource.com/sites/default/files/uploads/8_pythonic_trading_jupyter_calc_ema.png (EMA values added as separate columns to dataframe) -[19]: https://opensource.com/sites/default/files/uploads/9_pythonic_trading_buy_condition.png (Buy condition met) -[20]: https://opensource.com/sites/default/files/uploads/10_pythonic_trading_sell_condition.png (Sell condition met) -[21]: https://numpy.org/ -[22]: https://opensource.com/sites/default/files/uploads/11_pythonic_trading_jupyter_condition.png (Dataframe with 20 entries) -[23]: https://opensource.com/sites/default/files/uploads/12_pythonic_trading_jupyter_filter.png (Applying the diff operation) -[24]: https://opensource.com/sites/default/files/uploads/13_pythonic_trading_backtest_trades.png (One-trade mini dataset profit) -[25]: https://opensource.com/sites/default/files/uploads/14_pythonic_trading_implementation.png (Decision-making implementation on area 1) -[26]: https://opensource.com/sites/default/files/uploads/15_pythonic_trading_breakpoint_2.png (Add breakpoint for inner if conditions) -[27]: https://opensource.com/sites/default/files/uploads/16_pythonic_trading_manual_trigger.png (Add manual scheduler element) -[28]: https://opensource.com/sites/default/files/uploads/17_pythonic_trading_debugger_stop.png (Manually simulate scenarios) -[29]: https://opensource.com/sites/default/files/uploads/18_pythonic_trading_output_buttons.png (Pythonic trading 
output buttons) -[30]: https://opensource.com/file/512111 -[31]: https://opensource.com/sites/default/files/uploads/19_pythonic_trading_simulate_orders_2.png (Log messages and output of evaluation element) diff --git a/sources/tech/20210929 How I keep my file folders tidy with Ansible.md b/sources/tech/20210929 How I keep my file folders tidy with Ansible.md deleted file mode 100644 index aacc2f33b1..0000000000 --- a/sources/tech/20210929 How I keep my file folders tidy with Ansible.md +++ /dev/null @@ -1,183 +0,0 @@ -[#]: subject: "How I keep my file folders tidy with Ansible" -[#]: via: "https://opensource.com/article/21/9/keep-folders-tidy-ansible" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I keep my file folders tidy with Ansible -====== -I try to use Ansible often, even for tasks that I know how to do with a -shell script because I know that Ansible is easy to scale. -![Filing cabinet for organization][1] - -I try to use Ansible often, even for tasks that I know how to do with a shell script because I know that Ansible is easy to scale. Even though I might develop an Ansible playbook just for my personal workstation, sometimes it ends up being a lot more useful than intended, and it's easy to apply that same playbook to all the computers on my network. And besides, sometimes the greatest enemy of getting really good at something is the impression that it's only meant for serious professionals, or big projects, or whatever you feel that you're not. I use Ansible because it's a great open source tool, but I benefit from it the most because it scales. - -One of the tasks I recently assigned to Ansible was the monumental one of keeping my Downloads folder tidy. If you're like me, you end up downloading many files from the Internet throughout the day and then forget that the files exist. On the one hand, I don't mind this habit. There have been times when I realize I still need a file in my Downloads folder, so forgetting about a file rather than promptly removing it can be helpful. However, there are other files that I download expressly to use once and then ought to remove. - -I decided to use a highly specific Ansible task to find files I know I don't need and then remove them. - -### Ansible boilerplate - -Ansible playbooks generally start in exactly the same way: Define your hosts and announce a task: - - -``` -\--- -\- hosts: localhost -  tasks: -``` - -Commit those three lines to memory. They're the "shebang" (`#!`) of Ansible playbooks. Once you have those lines in a text file, you can start defining the steps in your task. - -### Finding files with Ansible - -You can locate files on a system using the [`find` Ansible module][2]. If an Ansible module is a command, its parameters are its [command options][3]. In this example playbook, I want to find files explicitly located in the `~/Downloads` folder and I can define that using the `paths` parameter. - -This is my process when I start writing a playbook: I find a module in the Ansible module index that seems likely to do what I need, and then I read through its parameters to find out what kind of control I have over the module. - -In my case, the files I accidentally collect in my Downloads folder are CSV files. They get downloaded weekly, processed, and then ought to disappear. But they hang around for weeks until I get overwhelmed and delete them. 
Here's how to find CSV files in Downloads with Ansible: - - -``` -\--- -\- hosts: localhost -  tasks: -    - name: Find CSV in Downloads -      find: -        paths: ~/Downloads -        recurse: false -        patterns: '*.csv,*.CSV' -      register: result -``` - -The `paths` parameter tells Ansible where to search for files. - -The `recurse: false` parameter forbids Ansible from searching in subdirectories of Downloads. This gives me the ability to retain CSV files that I've downloaded and saved into a subdirectory. Ansible only targets the CSV files I save straight to Downloads (which is my habit). - -The `patterns` parameter tells Ansible what to count as a match. All of the CSV files I download end in .csv, but I'm confident that I'm willing to remove .CSV (in all capital letters) as well. - -The finishing touch to this step is to invoke the `register` module, which saves the results of the `find` process into a variable called `result`. - -This is important because I want Ansible to perform a second action on the results of `find`, so those results need to be stored somewhere for the next step. - -### Removing files with Ansible - -The next step in the task is to remove the files that `find` has uncovered. The module used to remove files is the [`file` module][4]. - -This step relies entirely on the `find` step, so it uses several variables: - - -``` -    - name: Remove CSV files -      file: -        path: "{{ item.path }}" -        state: absent -      with_items: "{{ result.files }}" -``` - -The `path` parameter uses the built-in `"{{ item.path }}"` variable, which confusingly isn't actually defined yet. The variable has no information on the path until the `file` module is used in a loop by the `with_items` keyword. The `with_items` step uses the contents of the `result` variable to extract one filename at a time, which becomes the `item` for the `path` parameter. Once the current item's path is extracted, Ansible uses the `state: absent` rule to ensure that the file located at that path is _not_ left on the system (in other words, it's deleted.) - -This is a _very_ dangerous step, especially during testing. If you get this step wrong, you can easily remove files you don't intend to delete. - -### Verify the playbook  - -Ansible playbooks are written in [YAML][5], which has a strict syntax. Verify that your YAML is correct using the `yamllint` command: - - -``` -$ yamllint cleanup.yaml -$ -``` - -No results means no errors. This playbook must have been written by someone who really [knows and loves YAML][6]! - -### Testing Ansible plays safely - -To avoid deleting my entire home directory by accident, I ran my first attempt with the `--check` option. This ensures that Ansible doesn't actually make changes to your system. - - -``` -$ ansible-playbook --check example.yaml -[WARNING]: provided hosts list is empty, only localhost is available. -'all' - -PLAY [localhost] **************************************************** - -TASK [Gathering Facts] ********************************************** -ok: [localhost] - -TASK [Find CSV files in Downloads] ********************************** -ok: [localhost] - -TASK [Remove CSV files] ********************************************* -changed: [localhost] => (item={'path': '/home/tux/Downloads/foo.csv', [...] -changed: [localhost] => (item={'path': '/home/tux/Downloads/bar.csv', [...] -changed: [localhost] => (item={'path': '/home/tux/Downloads/baz.csv', [...] 
- -PLAY RECAP ********************************************************** -localhost                  : ok=3    changed=1    unreachable=0 [...] -``` - -The output is very verbose, but it shows that my playbook is correct: Only CSV files within Downloads have been marked for removal. - -### Running Ansible playbooks - -To run an Ansible playbook, you use the `ansible-playbook` command: - - -``` -`$ ansible-playbook example.yaml` -``` - -Confirm the results: - - -``` -$ ls *.csv  ~/Downloads/ -ls: cannot access '*.csv': No such file or directory -/home/tux/Downloads/: -file.txt -``` - -### Schedule the Ansible playbook - -The Ansible playbook has been confirmed, but I want it to run at least every week. I use [Anacron][7] rather than Cron, so I created an Anacron job to run weekly: - - -``` -$ cat << EOF >> ~/.local/etc/cron.weekly/cleanup -#!/bin/sh -ansible-playbook $HOME/Ansible/cleanup.yaml -EOF -$ chmod +x ~/.local/etc/cron.daily/cleanup -``` - -### What can you do with Ansible? - -Generally, Ansible is meant as a system maintenance tool. It's finely tuned to bootstrap complex systems to help with course correction when something's gone wrong and to keep a system in a specific state. I've used it for simple but repetitive tasks, like setting up a complex directory tree that would typically require several commands or clicks. I've also used it for tasks I don't want to do wrong, like removing old files from directories. I've also used it for tasks that are just too complex for me to bother trying to remember, like synchronizing several changes made to a production system with its redundant backup system. - -I don't use this cleanup script on my servers because I don't download CSV files every week on my servers, but I do use a variation of it. Ansible isn't a replacement for shell or Python scripting, but for some tasks, it's a very precise method to perform some set of tasks that you might want to run on many more systems. 
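For reference, the pieces above assemble into one short playbook. This is a sketch of what my `cleanup.yaml` ends up looking like; adjust the path and patterns to match your own download habits before trusting it with `state: absent`.

```
---
- hosts: localhost
  tasks:
    - name: Find CSV in Downloads
      find:
        paths: ~/Downloads
        recurse: false
        patterns: '*.csv,*.CSV'
      register: result

    - name: Remove CSV files
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ result.files }}"
```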
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/keep-folders-tidy-ansible - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_organize_letter.png?itok=GTtiiabr (Filing cabinet for organization) -[2]: https://docs.ansible.com/ansible/2.8/modules/find_module.html#find-module -[3]: https://opensource.com/article/21/8/linux-terminal#options -[4]: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html -[5]: https://www.redhat.com/sysadmin/yaml-beginners -[6]: https://www.redhat.com/sysadmin/yaml-tips -[7]: https://opensource.com/article/21/2/linux-automation diff --git a/sources/tech/20210930 Using Ansible with REST APIs.md b/sources/tech/20210930 Using Ansible with REST APIs.md deleted file mode 100644 index 4d05c8375b..0000000000 --- a/sources/tech/20210930 Using Ansible with REST APIs.md +++ /dev/null @@ -1,212 +0,0 @@ -[#]: subject: "Using Ansible with REST APIs" -[#]: via: "https://opensource.com/article/21/9/ansible-rest-apis" -[#]: author: "Vince Power https://opensource.com/users/vincepower" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Using Ansible with REST APIs -====== -You may have queried APIs with a web browser or curl, but one of the -overlooked capabilities of Ansible is how well it can leverage APIs as -part of any playbook. -![Looking at a map][1] - -Ansible is a top open source project which, on the surface, looks to provide a simple way to standardize your existing automation and allow it to run in parallel across multiple hosts, and it does this very successfully. Yet, in reality, Ansible has the capabilities to extend what your existing automation does to incorporate other systems and really simplify tasks across all aspects of your daily routine. - -This capability starts with the [collections][2] and [roles][3] that are included with Ansible and all the third-party utilities distributed through [Ansible Galaxy][4]. You may have queried APIs with a web browser or [curl][5], but one of the overlooked capabilities of Ansible is how well it can leverage APIs as part of any playbook. This is extremely useful because the number of REST APIs being built and deployed both internally and across the global internet is increasing exponentially. There's even a [public-apis GitHub repo][6] listing hundreds of free APIs across over a dozen categories just for a sense of scale. - -### A basic API playbook - -Well, it really comes down to a few key core capabilities within Ansible, which are exposed nicely with one specific built-in task, _uri_. In this post, I'll go through a fairly simple example of how to call a REST API and use the data from that call to decide what to do next. This works with Ansible 2.9 and higher. In later versions (specifically v4), the modules we use need to be prepended with _ansible.builtin_ like _ansible.builtin.set_fact_ instead of just _set_fact_. - -To get started, you need a basic playbook to build on. In this case, you're only using local calls, so you don't need to be a superuser. 
- -First, create this YAML file to establish a working baseline: - - -``` -\--- -\- name: Using a REST API -  become: false -  hosts: localhost -  gather_facts: false -  tasks: -    - debug: -        msg: “Let’s call an API” -``` - -Here's the output after running it: - - -``` -% ansible-playbook using-a-rest-api.yml - -PLAY [Using a REST API] ********************************************************************************************* - -TASK [debug] ******************************************************************************************************** -ok: [localhost] => { -    "msg": "“Let’s call an API”" -} - -PLAY RECAP ********************************************************************************************************** -localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   -``` - -### Calling an API - -To call an actual API, you can use the _uri_ module. Here are two examples. The first is just a GET and the second is a POST with parameters to show the different available options. - - -``` -\--- -\- name: Everyone loves a good Chuck Norris joke -  uri: -    url: -    method: GET - -\- name: Login to an API -  uri: -    url: -    method: POST -    body_format: json -    body: -      name: your_username -      password: your_password -      client_id: YOUR_CLIENT_ID -      access_token: ACCESS_TOKEN -      connection: CONNECTION -      scope: SCOPE -``` - -I use the first API for the rest of this article to show how the returned data can be used. The question is, how do you collect the data being returned, and what does it look like? - -To collect the output from any task running in Ansible, you use the _register_ attribute, and then you can use the _debug_ task to display the raw data. In the case of APIs called using _uri_, all the output is put under the .json. Subsection of the result. The _uri_ commands and other its output are also at that top level. These can be useful to make sure the API call works by looking at other data fields like status. - -These are the two tasks you must add to the original playbook to add the API call to the mix to later do something with. - - -``` -  - name: Getting the definition of awesome -      uri: -        url: -        method: GET -      register: results - -    - debug: -        var: results -``` - -Run it to see the output generated by debug: - - -``` -TASK [debug] ******************************************************************************************************** -ok: [localhost] => { -    "results": { -        "alt_svc": "h3=\":443\"; ma=86400, h3-29=\":443\"; ma=86400, h3-28=\":443\"; ma=86400, h3-27=\":443\"; ma=86400", -        "cf_cache_status": "DYNAMIC", -        "cf_ray": "694f7d791aeb19e7-EWR", -        "changed": false, -        "connection": "close", -        "content_type": "application/json;charset=UTF-8", -        "cookies": {}, -        "cookies_string": "", -        "date": "Sun, 26 Sep 2021 21:12:23 GMT", -        "elapsed": 0, -        "expect_ct": "max-age=604800, report-uri=\""", -        "failed": false, -        "json": { -            "categories": [], -            "created_at": "2020-01-05 13:42:26.991637", -            "icon_url": "", -            "id": "IjqNNWKvSDeVKaI82PaT1g", -            "updated_at": "2020-01-05 13:42:26.991637", -            "url": "", -            "value": "One person stated that Chuck Norris has forgotten more about killing than anyone will ever know. That is not true -- Chuck Norris never forgets. Ever." 
-        }, -        "msg": "OK (unknown bytes)", -        "nel": "{\"success_fraction\":0,\"report_to\":\"cf-nel\",\"max_age\":604800}", -        "redirected": false, -        "report_to": "{\"endpoints\":[{\"url\":\"https:\\\/\\\/a.nel.cloudflare.com\\\/report\\\/v3?s=HVPJYMVr%2B3wB1HSlgxv6GThBMjkBJgfdu0DPw%2BunjQzQ9YfXZqifggIJ%2FxOIKgOu6JP1SrPsx1jCCp3GQ9hZAp7NO0pmlTZ0y3ufbASGwLmCOV1zyaecUkSwQD%2Fv3RYYgZTkaSQ%3D\"}],\"group\":\"cf-nel\",\"max_age\":604800}", -        "server": "cloudflare", -        "status": 200, -        "transfer_encoding": "chunked", -        "url": "", -        "via": "1.1 vegur" -    } -} -``` - -Now that you can see all the output make a custom message listing the value returned by the API. Here is the completed playbook: - - -``` -\--- -\- name: Using a REST API -  become: false -  hosts: localhost -  gather_facts: false -  tasks: -    - debug: -        msg: “Let’s call an API” - -    - name: Everyone loves a good Chuck Norris joke -      uri: -        url: -        method: GET -      register: results - -    - debug: -        var: results.json.value -``` - -And now the complete output: - - -``` -PLAY [Using a REST API] ********************************************************************************************* - -TASK [debug] ******************************************************************************************************** -ok: [localhost] => { -    "msg": "“Let’s call an API”" -} - -TASK [Everyone loves a good Chuck Norris joke] ********************************************************************** -ok: [localhost] - -TASK [debug] ******************************************************************************************************** -ok: [localhost] => { -    "results.json.value": "Chuck Norris is the only computer system that beats a Mac or a PC. Too bad all it does is round house kicks the user." -} - -PLAY RECAP ********************************************************************************************************** -localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   -``` - -### Next steps - -Things can get much more complicated than I've shown here. To get more details, head over to Ansible's [documentation][7]. 
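Before you go, here is one small sketch of a next step: feeding the registered result into later tasks. The endpoint shown is an assumption on my part — the example output above matches the public `https://api.chucknorris.io/jokes/random` API — so substitute whatever API you are actually calling.

```
- name: Everyone loves a good Chuck Norris joke
  uri:
    url: https://api.chucknorris.io/jokes/random  # assumed endpoint; replace with your own API
    method: GET
  register: results

- name: Keep only the part of the response we care about
  set_fact:
    joke_text: "{{ results.json.value }}"

- name: Use the saved value later in the play
  debug:
    msg: "Today's wisdom: {{ joke_text }}"
```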
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/ansible-rest-apis - -作者:[Vince Power][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/vincepower -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map) -[2]: https://docs.ansible.com/ansible/latest/user_guide/collections_using.html -[3]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html -[4]: https://galaxy.ansible.com/ -[5]: https://www.redhat.com/sysadmin/use-curl-api -[6]: https://github.com/public-apis/public-apis -[7]: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html diff --git a/sources/tech/20211004 How I use Vagrant with libvirt.md b/sources/tech/20211004 How I use Vagrant with libvirt.md deleted file mode 100644 index 198d512bc1..0000000000 --- a/sources/tech/20211004 How I use Vagrant with libvirt.md +++ /dev/null @@ -1,221 +0,0 @@ -[#]: subject: "How I use Vagrant with libvirt" -[#]: via: "https://opensource.com/article/21/10/vagrant-libvirt" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I use Vagrant with libvirt -====== -When a virtual machine is what you need, Vagrant may be just the best -way to get it. -![Computer laptop in space][1] - -I'll admit it: I'm a fan of Linux. While I've used Slackware on workstations and Red Hat Enterprise Linux (RHEL) on servers for years, I love seeing how other distributions do things. What's more, I really like to test applications and scripts I write on other distributions to ensure portability. In fact, that's one of the great advantages of Linux, as I see it: You can download a distro and test your software on it for free. You can't do that with a closed OS, at least not without either breaking an EULA or paying to play, and even then, you're often signing up to download several gigabytes just to test an application that's no more than a few hundred megabytes. But Linux is open source, so there's rarely an excuse to ignore at least the three or four main distros, except that setting up a virtual machine can take a lot of clicks and sometimes complex virtual networking. At least, that used to be the excuse until Vagrant changed the virtual machine workflow for developers. - -### What is Vagrant - -Vagrant is a simple virtual machine manager for your terminal. It allows you to easily pull a minimal and pre-built virtual machine from the Internet, run it locally, and SSH into it in just a few steps. It's the quickest you'll ever set up a virtual machine. It's ideal for web developers needing a test web server, programmers who need to test an application across distributions, and hobbyists who enjoy seeing how different distributions work. - -Vagrant itself is relatively minimal, too. It's not a virtualization framework itself. It only manages your virtual machines ("boxes" in Vagrant terminology). It can use VirtualBox or, through a plug-in, the lightweight libvirt project as a backend. 
- -### What is libvirt - -The libvirt project is a toolkit designed to manage virtualization, with support for [KVM][2], [QEMU][3], [LXC][4], and more. You might think of it as a sort of virtual machine API, allowing developers to write friendly applications that make it easy for users to orchestrate virtualization through libvirt. I use libvirt as the backend for Vagrant because it's useful across several applications, including virt-manager and [GNOME Boxes][5]. - -### Installing Vagrant - -You can install Vagrant from [vagrantup.com/downloads][6]. There are builds available for Debian-based systems, CentOS-based systems, macOS, Windows, and more. - -For CentOS, Fedora, or similar, you get an RPM package, which you can install with `dnf`: - - -``` -`$ sudo dnf install ./vagrant_X.Y.ZZ_x86_64.rpm` -``` - -On Debian, Linux Mint, Elementary, and similar, you get a DEB package, which you can install with `apt`: - - -``` -`$ sudo apt install ./vagrant_X.Y.ZZ_x86_64.deb` -``` - -### Installing libvirt and support packages - -On Linux, your distribution may already have libvirt installed, but to enable integration with Vagrant you need a few other packages, too. Install these with your package manager. - -On Fedora, CentOS, and similar: - - -``` -$ sudo dnf install gcc libvirt \ -libvirt-devel libxml2-devel \ -make ruby-devel libguestfs-tools -``` - -On Debian, Linux Mint, and similar: - - -``` -$ sudo apt install build-dep vagrant ruby-libvirt \ -qemu libvirt-daemon-system libvirt-clients ebtables \ -dnsmasq-base libxslt-dev libxml2-dev libvirt-dev \ -zlib1g-dev ruby-dev libguestfs-tools -``` - -Depending on your distribution, you may have to start the `libvirt` daemon: - - -``` -`$ sudo systemctl start libvirtd` -``` - -### Installing the Vagrant-libvirt plugin - -In Vagrant, libvirt is enabled through a plug-in. Vagrant makes it easy to install a plug-in, so your first Vagrant command is one you'll rarely run again: - - -``` -`$ vagrant plugin install vagrant-libvirt` -``` - -Now that the libvirt plug-in is installed, you can start using virtual machines. - -### Setting up your Vagrant environment - -To start with Vagrant, create a directory called `~/Vagrant`. This is where your `Vagrantfiles` are stored. - - -``` -`$ mkdir ~/Vagrant` -``` - -In this directory, create a subdirectory to represent a distro you want to download. For instance, assume you need a CentOS test box. - -Create a CentOS directory, and then change to it: - - -``` -$ mkdir ~/Vagrant/centos -$ cd ~/Vagrant/centos -``` - -Now you need to find a virtual machine so you can convert the directory you've just made into a Vagrant environment. - -### Finding a Vagrant virtual machine - -Broadly speaking, Vagrant boxes come from three different places: Hashicorp (the maintainers of Vagrant), maintainers of distributions, and people like you and me. Some images are minimal, intended to serve as a base for customization. In contrast, others try to solve a specific need (for instance, you might find a LAMP stack image ready for web development). You can find images by browsing or searching the main hub for boxes [app.vagrantup.com/boxes/search][7]. - -For this example, search for "centos" and find the entry named `generic/centos8`. Click on the image for instructions on how to use the virtual machine. 
The instructions come in two varieties:  - - * The code you need for a Vagrantfile - * The command you need to use the box from a terminal - - - -The latter is the more straightforward method: - - -``` -`$ vagrant init generic/centos8` -``` - -The `init` subcommand creates a configuration file, called a Vagrantfile, in your current directory, which transforms that directory into a Vagrant environment. At any time, you can view a list of known Vagrant environments using the `global-status` subcommand: - - -``` -$ vagrant global-status -id       name    provider state   directory -\------------------------------------------- -49c797f  default libvirt running /home/tux/Vagrant/centos8 -``` - -### Starting a virtual machine with Vagrant - -Once you've run the `init` command, you can start your virtual machine with `vagrant up`: - - -``` -`$ vagrant up` -``` - -This causes Vagrant to download the virtual machine image if it doesn't already exist locally, set up a virtual network, and configure your box. - -### Entering a Vagrant virtual machine  - -Once your virtual machine is up and running, you can log in to it with `vagrant ssh`: - - -``` -$ vagrant ssh -box$ -``` - -You connect to the box running in your current Vagrant environment. Once logged in, you can run all the commands native to that host. It's a virtual machine running its own kernel, with emulated hardware and common Linux software. - -### Leaving a Vagrant virtual machine - -To leave your Vagrant virtual machine, log out of the host as you normally exit a Linux computer: - - -``` -`box$ exit` -``` - -Alternately, you can power the virtual machine down: - - -``` -`box$ sudo poweroff` -``` - -You can also stop the machine from running using the `vagrant` command: - - -``` -`box$ vagrant halt` -``` - -### Destroying a Vagrant virtual machine - -When finished with a Vagrant virtual machine, you can destroy it: - - -``` -`$ vagrant destroy` -``` - -Alternately, you can remove a virtual machine using the global `box` subcommand: - - -``` -`$ vagrant box remove generic/centos8` -``` - -### Vagrant is easy - -Vagrant makes virtual machines trivial, disposable, and fast. When you need a test environment or a fake server to ping or develop on, or a clean lab computer for experimentation or monitoring, you can get one with Vagrant. Some people think virtual machines aren't relevant now that containers have taken over servers, but virtual machines have unique traits that make them useful. They run their own kernel, have a full and unique stack separate from the host machine, and use emulated hardware. When a virtual machine is what you need, Vagrant may be just the best way to get it. 
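If you'd rather copy the Vagrantfile snippet from the box page than run `vagrant init`, a minimal file for this setup can be as short as the sketch below. The provider block and its memory and CPU values are illustrative assumptions, not requirements; the defaults work too.

```
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "generic/centos8"

  # Optional libvirt tuning; these numbers are just examples.
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 2048
    libvirt.cpus = 2
  end
end
```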
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/vagrant-libvirt - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space) -[2]: https://opensource.com/article/20/8/virt-tools#kvm -[3]: https://opensource.com/article/20/8/virt-tools#qemu -[4]: https://opensource.com/article/18/11/behind-scenes-linux-containers -[5]: https://opensource.com/article/19/5/getting-started-gnome-boxes-virtualization -[6]: https://www.vagrantup.com/downloads -[7]: https://app.vagrantup.com/boxes/search diff --git a/sources/tech/20211004 Launching a DevOps to DevSecOps transformation.md b/sources/tech/20211004 Launching a DevOps to DevSecOps transformation.md deleted file mode 100644 index 250315de64..0000000000 --- a/sources/tech/20211004 Launching a DevOps to DevSecOps transformation.md +++ /dev/null @@ -1,64 +0,0 @@ -[#]: subject: "Launching a DevOps to DevSecOps transformation" -[#]: via: "https://opensource.com/article/21/10/devops-to-devsecops" -[#]: author: "Will Kelly https://opensource.com/users/willkelly" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Launching a DevOps to DevSecOps transformation -====== -The move toward DevSecOps is accelerating—here's what you need to know. -![Tips and gears turning][1] - -Widespread adoption of DevSecOps is inevitable. Security and delivery velocity are unrealistic expectations as part of a [waterfall software development life cycle][2] (SDLC). Businesses and government agencies are under constant pressure to deliver new features and functionality to their customers, constituents, and employees. Recent high-profile software supply chain breaches and President Biden's [Executive Order][3] to improve the nation's cybersecurity also increases the urgency for businesses and governments to move to DevSecOps. - -All of that means, sooner or later, your enterprise will need to integrate security with its DevOps process. - -Historically, cybersecurity teams focused on app security only at the end of a long, laborious waterfall SDLC, after scanning and remediating security issues. This model has shown cracks with age. Customer and market demands for new features, security, and compliance are at the top of executives' minds. [Digital transformation][4] efforts aimed at adjusting to the new world of work during and after the pandemic have made software security a higher priority. A DevOps process that makes security an afterthought is out of step with software users and consumers. - -What's needed is a DevOps-to-DevSecOps transformation. Fortunately, cloud computing in the commercial and public sectors, combined with the influence of open source software (OSS), now gives development teams the tools, processes, and frameworks to deliver software at higher velocity while maintaining quality and security. - -DevSecOps brings your security and DevOps teams to work together during the development life cycle. 
To make that transition, you will need collaboration from your developers, cybersecurity experts, sysadmins, business stakeholders, and even your executives. - -### Assessing DevOps and DevSecOps - -DevOps combines cultural philosophies, best practices, and tools that allow your organization to deliver applications and services more rapidly. Shifting to daily and weekly releases enables you to reduce your quarterly or monthly releases. Using DevOps can also help you grow and improve your products more rapidly than traditional waterfall software development processes and siloed infrastructure management. - -While preserving the best qualities of DevOps, DevSecOps incorporates security in every stage of the cycle. It knocks down the silos standing between your development, security, and operations teams. Benefits of DevSecOps include: - - * Prevention of security incidents before they happen: By integrating DevSecOps within your CI/CD toolchain, you're helping your teams by detecting and resolving issues before they occur in production. - * Faster response to security issues: DevSecOps increases your security focus through continuous assessments while giving you actionable data to make informed decisions about the security posture of apps in development and ready to enter production. - * Accelerated feature velocity: DevSecOps teams have the data and tools to mitigate unforeseen risks better. - * Lower security budget: DevSecOps enables streamlined resources, solutions, and processes, allowing you to simplify your development lifecycle by design. - - - -We're at _peak Ops_ in many industries. Rest assured, the definitions of DevOps and DevSecOps will merge in the months and years to come, if only for the sake of enterprise sanity and management. - -### DevSecOps and OSS - -DevSecOps can also play a vital role in the integration of OSS into enterprise applications. OSS and DevSecOps are becoming increasingly intertwined, especially as enterprises seek to improve the security of their software supply chains. DevSecOps can serve as an OSS remediation tool because it permits scanning automation throughout each pipeline phase. OSS is also foundational for adopting and security software containers and Kubernetes. - -### Final thoughts - -Before your organization embarks on a DevOps to DevSecOps transformation, take a step back and define DevSecOps for your teams. Cut through the marketing. Talk about the results you hope your teams will achieve. Instill a culture of openness and collaboration, and be sure to listen to the positive and negative vantage points of your development, operations, and Quality Assurance (QA) teams. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/devops-to-devsecops - -作者:[Will Kelly][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/willkelly -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning) -[2]: https://en.wikipedia.org/wiki/Waterfall_model -[3]: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/ -[4]: https://enterprisersproject.com/what-is-digital-transformation diff --git a/sources/tech/20211005 Get podman up and running on Windows using Linux.md b/sources/tech/20211005 Get podman up and running on Windows using Linux.md deleted file mode 100644 index ddbca0e519..0000000000 --- a/sources/tech/20211005 Get podman up and running on Windows using Linux.md +++ /dev/null @@ -1,146 +0,0 @@ -[#]: subject: "Get podman up and running on Windows using Linux" -[#]: via: "https://opensource.com/article/21/10/podman-windows-wsl" -[#]: author: "Stephen Cuppett https://opensource.com/users/cuppett" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Get podman up and running on Windows using Linux -====== -Enable WSL 2 guests to run the podman, skopeo, or buildah commands from -within Windows using the Linux distribution of your choice. -![Penguin driving a car with a yellow background][1] - -WSL 2, the second iteration of the Windows Subsystem for Linux, allows you to run a Linux environment natively on Windows, without the overhead of a virtual machine (VM). It integrates nicely with Windows, too, and provides you access to most of the command-line tools, utilities, and applications you're used to on Linux. - -This guide shows you how to enable WSL 2 guests to run the `podman`, `skopeo`, or `buildah` commands from within Windows using the Linux distribution of your choice (available from the Microsoft store). Coming from a Fedora Linux host OS starting point, I was curious how to enable and use tools I'm most familiar with from within Windows. - -### Prerequisite: WSL 2 - -To install WSL 2, go to the [WSL installation][2] page. - -Use Powershell to ensure that WSL 2 is enabled by default: - -`PS> wsl –set-default-version 2` - -For information on key differences between WSL 1 and WSL 2, see the [WSL documentation][3]. - -The Windows Subsystem for Linux has come a long way. Microsoft has worked hard to make the separation between the host Windows OS and the guest Linux operating system virtually invisible. Special drivers in the kernels of each system make it easy to run commands between various shells and command windows and enable mutual filesystem access. - -You can confirm you are correctly using the WSL 2 kernel with the following command and output in any of the guests: - - -``` -$ uname -a -Linux BLD 5.10.16.3-microsoft.standard-WSL2 #1 SMP Fri Apr 2 22:23:49 -UTC 2021 x86_64 x86_64 GNU/Linux -``` - -WSL 1 guests report a kernel version as 4.14 or similar. - -Small touches in your guests can make the integration even more seamless, including symlinking of various home directory files (.aws, .sh, .config, and so on). 
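As a rough illustration of that symlinking tip (an addition, not part of the original walkthrough): WSL 2 exposes the Windows drives under `/mnt`, so a configuration directory that lives in your Windows profile can be shared with the Linux guest through a symbolic link. Adjust the Windows username and the specific dotfiles to suit your own setup.

```
# Hypothetical example: reuse the Windows-side AWS configuration inside WSL 2.
# /mnt/c is where WSL mounts the C: drive; replace "youruser" with your profile name.
ln -s /mnt/c/Users/youruser/.aws ~/.aws
```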
There is a hint of how this can be achieved right from the $HOME directory: - -![$HOME directory][4] - -(Stephen Cuppett, [CC BY-SA 4.0][5]) - -### Install a Linux distribution - -To install a Linux distribution, find your favorite in the Microsoft Store. - -![screenshot of Fedora Remix purchase in the Microsoft store][6] - -(Stephen Cuppett, [CC BY-SA 4.0][5]) - -For this article, I'm using Fedora, but other distributions are available to try. Podman works well across distributions, so you can use whatever distribution you're most familiar with. There may be some minor configuration adjustments required, but those are generally documented by the distribution and podman documentation. I chose Fedora because it was the distribution that required no extra setup to get the latest podman working. - -On the first launch, the VM and related technologies are installed. You'll be prompted to select a password for the first user (which gets sudo access). - -### Install podman - -Once your Linux distribution has been installed and configured with a user, you can install podman as usual: - -`$ sudo dnf install podman` - -After a few moments, podman is installed and ready to go. You can check that everything is working as expected: - - -``` -$ podman info -host: -  arch: amd64 -  buildahVersion: 1.22.3 -  cgroupControllers: [] -  cgroupManager: cgroupfs -  cgroupVersion: v1 -[...] -version: -  APIVersion: 3.3.1 -  OsArch: linux/amd64 -  Version: 3.3.1 -``` - -From there, you can build images and use podman as you usually would. - -Thanks to WSL integration, podman is even accessible and usable from PowerShell or the command prompt: - -![screenshot example of Windows PowerShell][7] - -(Stephen Cuppett, [CC BY-SA 4.0][5]) - -Installing and using the `buildah` and `skopeo` commands is exactly the same process. - -### Busybox test - -As a simple test to see `podman` at work, you can pull and run a Busybox container. [BusyBox][8] is an open source (GPL) project providing simple implementations of nearly 400 common commands, including `ls, mv, ln, mkdir, more, ps, gzip, bzip2, tar`, and `grep`, which makes it a fittingly minimal environment for containers and for simple tests like this one. - -First, search the default image repository for a Busybox container. You can do this in either your Linux terminal or in Powershell. - - -``` -$ podman search busybox -INDEX       NAME                             DESCRIPTION                     -docker.io   docker.io/library/busybox        Busybox base image                   -docker.io   docker.io/radial/busyboxplus     Full-chain... -docker.io   docker.io/yauritux/busybox-curl  Busybox with CURL -``` - -Run the one you want to try: - - -``` -$ podman run -it docker.io/library/busybox -/ # -``` - -You can use the container, run a few commands to verify that everything works as expected, then leave it with the exit command. - -### Get started - -I'll admit I was surprised how readily the current Linux distributions out there, podman, and the Windows subsystem worked together here. It's obvious a lot of great work has gone into Windows' container tooling and integration with Linux. I'm hopeful this guide helps others get to this same launching point easily and start being productive. - -There are many good candidates for deep follow-up, including working with volumes, exposing networked services between the guest and the host, and exposing Linux capabilities in those containers. 
With so many tools available, I have great confidence that the community will make short work of digging through them! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/podman-windows-wsl - -作者:[Stephen Cuppett][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cuppett -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background) -[2]: https://docs.microsoft.com/en-us/windows/wsl/install -[3]: https://docs.microsoft.com/en-us/windows/wsl/about -[4]: https://opensource.com/sites/default/files/uploads/home_directory_0.png (directory) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://opensource.com/sites/default/files/uploads/fedora_remix_0.png (Fedora Remix) -[7]: https://opensource.com/sites/default/files/uploads/power_shell.png (Windows PowerShell) -[8]: https://opensource.com/article/21/8/what-busybox diff --git a/sources/tech/20211005 Markets- An Open-Source App to Keep Track of Your Investments for Linux Desktop and Phones.md b/sources/tech/20211005 Markets- An Open-Source App to Keep Track of Your Investments for Linux Desktop and Phones.md deleted file mode 100644 index ed5b98429f..0000000000 --- a/sources/tech/20211005 Markets- An Open-Source App to Keep Track of Your Investments for Linux Desktop and Phones.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: subject: "Markets: An Open-Source App to Keep Track of Your Investments for Linux Desktop and Phones" -[#]: via: "https://itsfoss.com/markets/" -[#]: author: "Ankush Das https://itsfoss.com/author/ankush/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Markets: An Open-Source App to Keep Track of Your Investments for Linux Desktop and Phones -====== - -_**Brief:** A Linux app to help you quickly track market movements._ - -Usually, you would log in to a service on your web browser to monitor and track the market for investment opportunities if you’re an investor/trader. - -But, what if you want an app for your Linux desktop and Linux phone? Considering we do have a few for Android/iOS smartphones, it should come in handy for Linux devices as well! - -### Monitor Stocks From Across the Globe via Yahoo Finance - -![][1] - -“Markets” utilizes the data from Yahoo Finance to provide you the required information about stocks, cryptocurrencies, currencies, and more. - -While it is a simple desktop-focused app, it is available for Linux smartphones, and it offers a couple of valuable functionalities. Let me list the key highlights of what you can expect. - -![][2] - -### Features of Markets - -With Markets, you get to track, monitor, and analyze market trends and make investment decisions. - -There are a couple of features along the way that include: - - * Ability to customize the update interval - * Build a personal portfolio - * Add any symbol or currency - * View time in international format - * Dark mode - * Supports Linux phones (PinePhone, Librem5) - * Selectively delete stock monitors - - - -As mentioned previously, it is a dead-simple Linux application to help you track financial data. 
- -![][3] - -And, I’d say it works pretty well and lets you quickly search for a stock, commodity, and others to build a personal portfolio on your Linux desktop quickly. - -With a dark mode, it is a breeze to look at it and track market movements. - -You can select from the existing list of markets added and delete as per your selection. And, the international time format is useful. As you can notice in the screenshot, the time mentions the timezone you’re at by default, which should be useful. - -Also, from the listings, you can click on it to launch the browser window for full details on Yahoo Finance; this is how it’ll look like: - -![][4] - -### Installing Markets in Linux - -It is available as a Flatpak app for Linux distributions and can be found in [AUR][5] for Arch users. For Linux phones, they recommend installing it from the source. - -To install it from the terminal, you just need to type in: - -``` -flatpak install flathub com.bitstower.Markets -``` - -You can explorer building instructions and other information on their [GitHub page][6]. - -[Markets (Flathub)][7] - -Do you prefer to use an app on your Linux desktop to track financial data quickly? Let me know your thoughts in the comments. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/markets/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/markets.png?resize=741%2C626&ssl=1 -[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/markets-search.png?resize=626%2C651&ssl=1 -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/markets-dark-mode.png?resize=559%2C475&ssl=1 -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/yahoo-tesla.png?resize=800%2C451&ssl=1 -[5]: https://itsfoss.com/aur-arch-linux/ -[6]: https://github.com/bitstower/markets -[7]: https://flathub.org/apps/details/com.bitstower.Markets diff --git a/sources/tech/20211006 Following a DevSecOps maturity model.md b/sources/tech/20211006 Following a DevSecOps maturity model.md deleted file mode 100644 index ed092f3416..0000000000 --- a/sources/tech/20211006 Following a DevSecOps maturity model.md +++ /dev/null @@ -1,66 +0,0 @@ -[#]: subject: "Following a DevSecOps maturity model" -[#]: via: "https://opensource.com/article/21/10/devsecops-maturity-model" -[#]: author: "Will Kelly https://opensource.com/users/willkelly" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Following a DevSecOps maturity model -====== -Following a maturity model also helps tell a story that includes the -people, process, and technology changes that come with a -DevOps-to-DevSecOps transformation. -![Sunlight coming through the tree branches][1] - -[DevSecOps][2] is in many ways another level of DevOps maturity for an enterprise. Executive management and other stakeholders understand the concept of a maturity model, making it a helpful way to explain the value of this shift. Following a maturity model also helps you tell a story that includes the people, process, and technology changes that come with a DevOps-to-DevSecOps transformation. 
- -Here are four typical levels of DevSecOps maturity: - -### Level 1: pre-DevOps (no automation) - -At this level, developers perform every task manually, including creating and testing applications and systems. Team management, processes, and application security are still at a very ad hoc level. - -Take the extra step to capture your lessons learned and the challenges of your pre-DevOps development era. You need to know your history, so you don't repeat it in the future. - -### Level 2: early DevOps/DevSecOps (lightweight automation) - -Development teams standardize on some form of a DevOps toolchain to implement Infrastructure-as-Code and Compliance-as-Code. DevSecOps adoption is at the department or even just at the team level. - -Mentioning DevOps and DevSecOps interchangeably in this phase is deliberate. Some organizations will fast-forward from traditional waterfall development straight to a DevSecOps model. At level 2, DevOps/DevSecOps and lightweight automation is the domain of innovative and more forward-thinking development teams. Developers are driven to find a better way to do things, either as a result of their own initiative or because a customer is asking for a DevOps approach. - -Making it from level 2 to level 3 depends upon communicating and selling the successes of your early adopters of DevSecOps to the rest of your organization. Be sure to keep in touch with your early adopters and encourage them to share their DevOps and DevSecOps wins with the rest of their peers. Early win stories resonate much better than managerial mandates. - -### Level 3: DevOps to DevSecOps transition (advanced automation) - -DevSecOps grows into a corporate or agency-wide strategy. With organization-wide support, an automation strategy for application and infrastructure development and management takes form. DevOps teams can now improve their existing processes using containers, Kubernetes (K8s), and public cloud services. - -Bottom line: Organizations at this advanced phase of DevSecOps maturity are deploying applications at scale. - -### Level 4: full DevSecOps (full automation) - -Such an expert state of DevSecOps maturity will be elusive for all but the most prominent and well-funded enterprises, those who must routinely meet the most strict cybersecurity and compliance demands. An organization that reaches this level of maturity is API and cloud-native first. These organizations are also implementing emerging technologies such as [microservices][3], [serverless][4], and [artificial intelligence/machine learning (AI/ML)][5] to strengthen their application development and infrastructure security. - -### Final thoughts - -Only when you track the maturity of your processes, team culture, and tooling do you get the best current and future-state views of your organization's progress to DevSecOps. The pandemic pushed many teams to remote work in the past 18 months. As a result, teams had to mature their processes and mature them quickly to ensure their organization could still deliver to their customers. DevSecOps brings together the very cultural, collaboration, and toolchain improvements that development teams require to deliver secure and compliant software in their new world of work. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/devsecops-maturity-model - -作者:[Will Kelly][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/willkelly -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/jan-huber-tree.jpg?itok=CRBwhuMA (Sunlight coming through the tree branches) -[2]: https://www.redhat.com/en/topics/devops/what-is-devsecops -[3]: https://opensource.com/resources/what-are-microservices -[4]: https://opensource.com/article/21/1/devapps-strategies -[5]: https://opensource.com/tags/ai-and-machine-learning diff --git a/sources/tech/20211007 3 phases to start a DevSecOps transformation.md b/sources/tech/20211007 3 phases to start a DevSecOps transformation.md deleted file mode 100644 index 3c671d4a86..0000000000 --- a/sources/tech/20211007 3 phases to start a DevSecOps transformation.md +++ /dev/null @@ -1,115 +0,0 @@ -[#]: subject: "3 phases to start a DevSecOps transformation" -[#]: via: "https://opensource.com/article/21/10/first-phases-devsecops-transformation" -[#]: author: "Will Kelly https://opensource.com/users/willkelly" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -3 phases to start a DevSecOps transformation -====== -Taking the right steps at the right time smooths the path toward full -adoption. -![Green graph of measurements][1] - -DevSecOps is another step in the DevOps journey for your organization. Breaking down your transformation into phases facilitates working directly with developers and other team members. A phased approach also allows you to get feedback from those affected by the change and iterate as necessary. - -Here are the first three phases of a DevSecOps transformation: - -### Phase 1: analysis, education, and training - -In phase 1, you do the preliminary work necessary to make DevSecOps the next step in your DevOps journey. - -This phase is even more critical for your teams if you're moving from a waterfall software development lifecycle (SDLC) model. Making that leap may require you to put more time and effort into DevOps training to bridge any knowledge gaps between your current processes and DevSecOps. - -#### Analyze your development process maturity - -Whether DevSecOps is just the next step in your DevSecOps journey or you're making your initial foray into DevSecOps straight from a waterfall SDLC, analyzing the [maturity of your software development process][2] is a critical step. An effective analysis includes: - - * Documenting the current state of any processes - * Gathering any reporting data about your current development processes - * Identifying what's working and not working in your development processes by interviewing key developers - - - -#### Define DevSecOps for your organization - -DevOps and now DevSecOps can mean many things to people. Software vendor marketing and the open source software (OSS) community each put their spin on the definition of DevSecOps. Spare your teams from any misunderstandings and document your definition of DevSecOps. 
A clear definition includes: - - * What DevSecOps means to your organization - * The expected outcomes after moving to DevSecOps - * The tools and processes your organization is putting into place to ensure employee success - - - -Writing a definition is not merely creating a project charter for your DevOps to DevSecOps transformation; it identifies your true north. - -#### Foster a DevSecOps culture - -You can't _buy_ DevSecOps. Your managers and key technology team members need to work together to foster DevSecOps cultural philosophies to set a foundation for your DevOps to DevSecOps transformation. - -Here are some vital elements of DevSecOps culture that are important to foster during and after your transformation: - -##### Continuous feedback - -Remote DevSecOps teams have their advantages and disadvantages with continuous feedback. The manager's role is not simply to deliver feedback on the DevSecOps team's performance. Instead, the purpose of feedback is to enable teams to collaborate more effectively. [Open source chat tools][3] provide the instant communication necessary for DevSecOps teams to collaborate in real time. - -##### Container-based architectures - -DevSecOps sets the stage for moving to container-based architectures that can be another cultural change for DevOps teams. A proper and robust implementation of containers changes developer and operations cultures because it changes how architects design solutions, programmers create code, and operations teams maintain production applications. - -##### Team autonomy - -DevSecOps is no place for micromanagers at any level of your organization. A standard part of DevSecOps culture is enabling your teams to choose their tools and create processes based on their work. DevSecOps also promotes distributed decision making that supports greater agility and innovation. - -##### DevSecOps training - -Providing security training to your developers is another step towards making security part of everyone's job. Training could take the form of in-house developer training in casual formats such as lunch-and-learns, or it could include more formal training classes conducted by your organization's training department. - -Depending on your security ambitions (and budget), there is always the option to send your DevOps team members to get a DevSecOps vendor certification, such as the DevSecOps Foundation certification from the [DevOps Institute][4] or the Certified DevSecOps Professional (CDP) from [Practical DevSecOps][5]. - -### Phase 2: integrate security into your DevOps lifecycle - -During phase 2 of your DevOps to DevSecOps transformation, you integrate security processes and tools into your DevOps life cycle. If your enterprise is already using DevOps toolchains, this phase integrates security tools into your existing DevOps toolchains. This phase is also the time to perform a security audit on your continuous integration and continuous delivery/deployment (CI/CD) toolchains to ensure security. - -Suppose your organization takes the fast track to DevSecOps from a waterfall SDLC or other legacy development process. In that case, security needs to become a requirement of your CI/CD toolchain build. - -### Phase 3: introduce automation into your DevOps lifecycle - -The automation phase includes analysis, outreach, and experimentation. Applying automation to everyday software development tasks such as quality assurance and security checks isn't an exact science. Expect a push and pull between your executives and development teams. 
Executives often want to automate as much as possible, even to the extreme. Developers and sysadmins are going to approach automation more cautiously. - -Automation is foundational to DevSecOps because it removes the prospect of human error from some everyday build tasks and security checks. If you're building and running cloud workloads, you need automation. - -How well the automation tools are implemented determines how effectively you can enforce security practices and facilitate security sign-offs. - -Here are some tips for introducing automation into your DevOps toolchain: - - * Dispel the notion in your management and stakeholders that you'll be able to automate every task along with your toolchain. Engage with your stakeholders to learn their automation priorities and take that feedback into an automation strategy for your DevOps teams. - * Engage with your development teams — not just the team leads and managers — about how automation can help them perform their jobs. Listen to their concerns with empathy and answer their questions with definitive answers. - * Create an automation roadmap that charts how you'll introduce automation into your toolchains. Start small and expand with automation across your toolchains. Seek a small project such as a patch or a feature update to test your implementation plan. - * Automate one build, quality assurance, or security check for one of your DevOps teams as a proof-of-concept project. Document your findings from this small project, especially the lessons learned and any other feedback from the DevOps team members working on the project. - * Communicate the successes, lessons learned, and, yes, even the mistakes made on the pilot project to your stakeholders and internal DevOps community. - - - -You can use your existing DevOps center of excellence or DevSecOps center of excellence as an opportunity to gather input from employees from across your organization about how automation affects their work. Otherwise, look for formal and informal channels in your development and operations organizations to gain the input. For example, informal lunch and learns, group chat channels, or team meetings can be ideal for gathering input depending on your corporate culture. 
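To make the proof-of-concept suggestion above more tangible, here is a minimal, hypothetical sketch of one automated security check added to a build. It assumes Podman and the open source Trivy scanner are available on the build host; the image name, scanner, and severity threshold are placeholders for whatever your teams agree on during the pilot.

```
#!/usr/bin/env sh
# Hypothetical single-check proof of concept: build the image, then
# fail the job if the scan reports any critical vulnerabilities.
set -e

IMAGE="registry.example.com/myapp:poc"

podman build -t "$IMAGE" .
trivy image --exit-code 1 --severity CRITICAL "$IMAGE"
```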
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/first-phases-devsecops-transformation - -作者:[Will Kelly][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/willkelly -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements) -[2]: https://opensource.com/article/21/10/devsecops-maturity-model -[3]: https://opensource.com/article/20/4/open-source-chat -[4]: https://www.devopsinstitute.com/ -[5]: http://practical-devsecops.com/ diff --git a/sources/tech/20211008 3 more phases of DevSecOps transformation.md b/sources/tech/20211008 3 more phases of DevSecOps transformation.md deleted file mode 100644 index 9f4d5eeb0e..0000000000 --- a/sources/tech/20211008 3 more phases of DevSecOps transformation.md +++ /dev/null @@ -1,65 +0,0 @@ -[#]: subject: "3 more phases of DevSecOps transformation" -[#]: via: "https://opensource.com/article/21/10/last-phases-devsecops-transformation" -[#]: author: "Will Kelly https://opensource.com/users/willkelly" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -3 more phases of DevSecOps transformation -====== -Ensure you reach your goals by continuing a step-by-step approach to -DevSecOps. -![Gold trophy on green background][1] - -Making a major operations transition must be a long-term and well-planned process. Because DevSecOps is an important step in the DevOps journey for your organization, you are more likely to find success if you introduce and implement your transformation in phases. - -In my [previous article][2], I explained the first three phases of making this change. This article presents three additional phases of DevSecOps transformation you must work through to achieve your goals. Finishing these phases requires that you foster team collaboration to carry your organization through security changes, going live with DevSecOps, and putting the tools in place for continuous learning and iteration of your DevSecOps toolchain and processes. - -### Phase 4: collaborate on security changes to your DevOps toolchains - -Some security changes on the move to DevSecOps may adversely affect operations and even security compliance. Changes to tools, processes, and even staffing sometimes change the way teams work. - -Your development, operations, and security teams must collaborate before deployment and at other touchpoints to set priorities. Security teams sometimes prioritize a security measure that adversely impacts operations. Likewise, your developers probably overlook some holes caused by system configurations that could compromise the security and compliance of your systems. - -Predeployment reviews provide a prime collaboration channel. When you conduct predeployment reviews during your DevOps to DevSecOps transformation, you give your developers and security staff a forum through which they can educate each other on their team's priorities and informed tradeoffs. - -### Phase 5: execute on DevSecOps - -As your organization crosses into phase 5 of your DevOps to DevSecOps transformation, it's time to execute your plans with one or more teams. Don't move to Phase 5 as an entire organization. 
Instead, look for natural breaks in your project teams' schedules for them to move to a DevSecOps model. For example, say that one of your DevOps teams has just launched a new product release. After catching their collective breath, they're working on bug fixes that come in from the field. Don't interrupt their flow with a full-on move to DevSecOps during an in-progress project. - -Look for new project opportunities to begin executing on DevSecOps. Such an approach offers the following advantages: - - * Providing teams a clean slate to learn a new process from the beginning, not midstream during a project - * Enabling you to include process and tools training as part of the project kickoff process - * Affording the chance to bring your developers, operations, and security teams together to discuss mutual expectations for the project - * Giving teams a chance to learn to work together better during the new workflows that DevSecOps brings to an organization - - - -### Phase 6: pursue continuous learning and iteration - -There is no formal end to an adequately executed shift from DevOps to DevSecOps. After your organization moves to DevSecOps and adopts the principles and foundations, the learning and iteration need to continue past the transformation. - -As there is no single accepted DevSecOps definition for the industry, you can expect to learn a lot as your DevSecOps journey gains momentum and your processes mature. You also need to prepare your organization for changes in DevOps and DevSecOps philosophies that might benefit your internal efforts. - -### Final thoughts - -The phases I outline in this series are general guidelines for a path toward achieving your DevSecOps transformation. The emphasis on collaboration is deliberate because your enterprise's particular circumstances could require that you modify these phases to achieve your transformation. Even if you need to make substantial changes to these phases, having a graduated implementation roadmap will get you much closer to success. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/last-phases-devsecops-transformation - -作者:[Will Kelly][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/willkelly -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/trophy_celebrate.png?itok=jGnRHBq2 (Gold trophy on green background) -[2]: https://opensource.com/article/21/10/first-phases-devsecops-transformation diff --git a/sources/tech/20211013 Beginner-s Guide to Installing Pop-_OS Linux.md b/sources/tech/20211013 Beginner-s Guide to Installing Pop-_OS Linux.md deleted file mode 100644 index 9652299ec8..0000000000 --- a/sources/tech/20211013 Beginner-s Guide to Installing Pop-_OS Linux.md +++ /dev/null @@ -1,211 +0,0 @@ -[#]: subject: "Beginner’s Guide to Installing Pop!_OS Linux" -[#]: via: "https://itsfoss.com/install-pop-os/" -[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Beginner’s Guide to Installing Pop!_OS Linux -====== - -_**Brief: Learn to install Pop OS Linux distribution by replacing all other operating systems on your computer.**_ - -[Pop!_OS][1] is the Linux distribution created by System76 and is based on Ubuntu. Since System76 sells [Linux-first laptops and desktops][2], their Linux distribution, even though is based on Ubuntu, provides support for bleeding edge hardware (only if the newer Linux kernel does not create a conflict for currently supported hardware). - -Out of all the new Linux distributions out there, the user-base of Pop!_OS just “popped” out of nowhere. Considering it is a _relatively_ new distro among a plethora of other “well established distros” like Ubuntu, Manjaro, Mint etc; this is a big achievement! - -This isn’t an opinion article on why you should [use Pop OS over Ubuntu][3], but a guide, for you to get started with Linux on your PC by installing Pop!_OS on it. - -### Choosing the instllation method for Pop OS - -There are multiple ways to install Pop!_OS (and all other Linux distros) on your computer. - - 1. Install Pop!_OS as a Virtual Machine [using VirtualBox][4] on your PC **without affecting your current Windows install**. - 2. Install Pop!_OS alongside Windows; AKA [dual boot][5] (even though the linked guide is for Ubuntu, it should work for Pop!_OS; **make sure to turn off “Secure Boot”**). - 3. Replace Windows 10/11 with Pop!_OS. - - - -I highly recommend that you [try out Pop!_OS in VirtualBox][4] before installing it on your computer, specially if you are new to Linux. - -_**This tutorial covers installation of Pop!_OS replacing Windows**_, and below are the hardware requirements for Pop!_OS. - - * A 4 GB USB drive to create a Live USB drive - * Any 64-bit x86 CPU (any 64-bit Intel or AMD CPU) - * At least 4 GB of RAM is recommended - * A minimum of 20 GB of storage (to store the OS) - - - -_**WARNING: This guide assumes you want to replace Windows on your PC with a Linux distro of your choice (Pop!_OS in this case) and it results in wiping your drive clean. 
Please make sure you have backed up all of your important data before proceeding further.**_ - -### Choose the version of Pop!_OS to install - -![][6] - -Just like Ubuntu, Pop!_OS comes in two variants. All LTS releases are supported for 5 years from release date. Canonical releases a LTS version of Ubuntu in April of every even numbered year. - -A new Non-LTS version is released every 6 months (in April and September, every year) and that particular version is supported only for 9 months from release date. - -As of writing this article, Pop!_OS is available in two (technically four, but we will get to that later) versions. Current LTS release is “Pop!_OS 20.04 LTS” and “Pop!_OS 21.04”. And soon enough, version 21.10 will be released. - -Because Nvidia does not have open source drivers, installing Nvidia GPU Drivers ends up causing problems to your Linux installation if not done correctly. Therefore, System76 offers two variants for each version of Pop!_OS. - -Pop!_OS 20.04 LTS is [available in two variants][7] (more details in next section). - - * For users with a Nvidia GPU in their computer - * For users with an AMD (and/or an Intel for iGPU and for the [upcoming dGPU][8]) users. - - - -If you are not sure, [check the graphics card][9] on your system and choose the appropriate version while downloading. - -### Installing Pop!_OS - -In this guide, I’ll be using the non-Nvidia version of Pop!_OS 20.04 LTS (but the installer steps will be the same for every variant of the same version). - -#### Step 1: Create a live USB - -Visit System76’s website to download a copy of Pop!_OS. - -[Download Pop!_OS][1] - -![Pop!_OS ISO selection menu][10] - -Select “Pop!_OS 20.04 LTS” (#1) and then click on either the normal ISO (#2) or the Nvidia-specific ISO (#3) to start downloading it. - -After you have downloaded a copy of ISO that is suitable for your use case and machine, your next step will be to create a live installer for Pop!_OS. A live installer is a full copy of the OS for you to tinker with, before you feel that the OS of your liking and also compatible with your hardware. - -Sometimes the distribution of your choice might not have good support for the proprietary components like WiFi, GPU etc included in your laptop/desktop. Now is the time to test your hardware compatibility. - -_**NOTE: Any data stored on your USB stick will be erased at this step, make sure you do not have anything important on the flash drive.**_ - -You have access to numerous tools to create a live USB stick. Some of them are: - - * [balenaEtcher][11] (available on Mac, Windows and Linux) - * [UNetbootin][12] (available on Mac, Windows and Linux) - * [Rufus][13] (available only on Windows) - * [Ventoy][14] (available on Windows and Linux) - - - -On Windows, you can use Rufus to [create a live USB from Windows][15]. You may also use Etcher for Windows, Linux and macOS. It is really simple. Just start the application, browse the downloaded ISO and hit the flash button. - -![A generic example of creating live Linux USB with Etcher][16] - -#### Step 2: Booting from the live Pop OS USB - -Once you have created the live USB, you need to tell our computer to boot from the USB stick instead of the disk on which Windows is installed. - -To do that, restart your computer. And once you see your computer vendor’s logo (HP, Dell, Asus, Gigabyte, ASRock etc) press either the F2 or F10 or F12 or Delete key to enter your computer’s BIOS/UEFI. 
This key will differ based on your computer vendor, for most desktops it is usually the Delete key, and for most laptops it is the F2 key. If still in doubt, a quick web search should tell you which key to press for your system. - -![BIOS/UEFI boot menu keys][17] - -On modern computers with UEFI, you don’t even need to go in UEFI. You can directly hit a specific key like F12 (my computer vendor has F12) and you’ll see a boot menu. From there directly select your USB stick. - -![UEFI boot menu][18] - -For people who have an older BIOS/UEFI, go under the section where it says Boot (do note, the steps will vary from vendor to vendor) and select your USB drive instead of your SSD/HDD. And reboot. - -![UEFI/BIOS boot drive selection][19] - -Your computer should now boot from the live USB you just created. - -#### Step 4: Start installing Pop!_OS - -You should be in the Pop!_OS live environment now. On your computer screen, you will see an installer asking you for setup details like your preferred Language, Country and Keyboard Layout. - -![Pop!_OS Installation screen][20] - -Once you have selected your Language, Country and Keyboard Layout, you will see this screen. You technically have 3 options. - -![Pop!_OS Installation types, plus Demo Mode][21] - - * Clean Install (#1): This option will erase your entire disk and install Pop!_OS on it. - * Custom (Advanced) (#2): This option will allow you to specify things like root partition, if you want a different home partition, use another file system for your root partition, resize partitions, use a different sized swap partition etc. - * Try Demo Mode (#3): An option in the bottom left of the installer that allows you to test drive Pop!_OS as if it was actually installed on your computer without actually touching your drive contents. - - - -**For the scope of this tutorial, proceed by selecting Clean Install.** - -Next up, specify a drive where you want to install Pop!_OS on. In case your computer has multiple drives, you will see each drive labelled along with it’s size so you can be assured if the drive you have selected is the one you have decided to install Pop!_OS on. - -![Pop!_OS Drive selection options][22] - -You will be prompted to provide your name and a username for your user. Your username will be the name of your home folder. - -![Pop!_OS User’s Name and username input][23] - -Up next, set a password for your user. - -![Pop!_OS User password input][24] - -The final step includes setting up Drive Encryption. If someone has physical access to your computer, your data on the disk can be accessed using a live operating system (like the live USB you created). - -The disk encryption prevents that. However, you must never forget the password or you’ll never be able to use the disk again. - -It is up to you if you want to encrypt the disk. - -![Pop!_OS Drive Encryption options][25] - -The installer will give you three options for encryption. - - * Don’t Encrypt (#1): Does not encrypt your drive. Not recommended for security conscious users - * Use user password for drive encryption (#2): This will tell the installer to use the same password for your user and for drive encryption. If you use this option, make sure your user has a strong password. - * Set Password (#3): Use a different password for encrypting drive. - - - -Whichever you choose, the installation should start now. Below is a screenshot showing the installer screen. 
- -![Pop!_OS Installation screen, plus log button][26] - -Just in case you encounter any error(s) during this step, click on the button placed at the bottom right edge of installer with “$_” (annotated as “Log” in the screenshot above) in it. It is the installer log. Posting a few lines from the bottom of this log should help others from [our community forum][27] or any other forums help you diagnose the issue causing installation errors. - -Please wait for a few minutes for the installer to finish installing and it will provide you with two options, Reboot or Shut Down. Power off your computer and remove the USB drive. - -Congratulations! You just installed Pop!_OS on your computer! Let me know if you face any issues. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/install-pop-os/ - -作者:[Pratham Patel][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/pratham/ -[b]: https://github.com/lujun9972 -[1]: https://pop.system76.com/ -[2]: https://itsfoss.com/get-linux-laptops/ -[3]: https://itsfoss.com/pop-os-vs-ubuntu/ -[4]: https://itsfoss.com/install-linux-in-virtualbox/ -[5]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ -[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/POP_OS-Installation.png?resize=800%2C450&ssl=1 -[7]: https://pop.system76.com -[8]: https://www.phoronix.com/scan.php?page=news_item&px=Intel-DG1-Status-XDC2021 -[9]: https://itsfoss.com/check-graphics-card-linux/ -[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/pop-os-download-options.webp?resize=800%2C740&ssl=1 -[11]: https://www.balena.io/etcher/ -[12]: https://unetbootin.github.io -[13]: https://rufus.ie/en/ -[14]: https://www.ventoy.net/en/index.html -[15]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/%5D -[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/balena-etcher-create-linux-live-usb.png?resize=800%2C450&ssl=1 -[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/03-01-bios-uefi-boot-menu-keys.webp?resize=732%2C366&ssl=1 -[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/03-02-boot-menu.webp?resize=731%2C364&ssl=1 -[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/03-03-select-boot-drive-3.webp?resize=800%2C399&ssl=1 -[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-01-installer-init.webp?resize=800%2C595&ssl=1 -[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-02-installation-options.webp?resize=800%2C595&ssl=1 -[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-03-drive-selection.webp?resize=800%2C595&ssl=1 -[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-04-name-and-username-selection.webp?resize=800%2C595&ssl=1 -[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-05-password-setup-screen.webp?resize=800%2C595&ssl=1 -[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-06-drive-encryption-options.webp?resize=800%2C595&ssl=1 -[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/10/04-08-installation.webp?resize=800%2C595&ssl=1 -[27]: https://itsfoss.community/ diff --git a/sources/tech/20211015 3 ways to manage RPG character sheets with open source.md b/sources/tech/20211015 3 ways to manage RPG character sheets with open source.md deleted file mode 100644 index 
180dcf11cf..0000000000 --- a/sources/tech/20211015 3 ways to manage RPG character sheets with open source.md +++ /dev/null @@ -1,250 +0,0 @@ -[#]: subject: "3 ways to manage RPG character sheets with open source" -[#]: via: "https://opensource.com/article/21/10/manage-rpg-character-sheets" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -3 ways to manage RPG character sheets with open source -====== -Learn about two terminal commands and a desktop application. -![Dice on a keyboard][1] - -It's that time of year again for gamers everywhere. - -Tomorrow is [Free RPG Day][2], a day when publishers across the tabletop role-playing game industry release games for players both new and experienced, and they're all completely free. Although Free RPG Day was canceled in 2020, it's back this year as a live event with some virtual support by way of free RPG sampler downloads from [Dungeon Crawl Classics][3] and [Paizo][4]. And if the event's virtual offerings aren't enough, you might check out my list of [open source tabletop RPGs.][5] - -Over the past two years, like most people, I've been playing my tabletop games online. I use [open source video conferencing][6] and some [shared mapping software][7]. Don't get me wrong: I love my pen and paper for analog games. To this day, I rarely leave home without my 2E5 quad book so I can sketch out dungeon maps on the go. But I find my computer desk gets pretty cluttered between RPG sourcebooks, splat books, random tables, dice tower, dice, and character sheets. To clear some space, I've recently adopted a digital system for my character sheets, for both my player characters and non-player characters when I DM. - -### Digital character sheets - -Typically, a character sheet is filled out in pencil on old-fashioned physical paper. It's a time-honored tradition that I've done since the late '80s, and even more experienced players have been doing since the late '70s. Going digital can be a big step away from something that might feel like an intrinsic part of the game. I sympathize with that sentiment, and I don't take my digital character sheets lightly. - -When I decide to maintain a character with the aid of a computer, I insist on substantial benefit for my divergence. I've tried two different options for digital character sheets, and one of my players invented a third. They're all open source, and I believe they each have unique advantages that make them worth trying out. - -### pc - -The `pc` command reads character data as an INI file, then lets you query it by category or by attribute. The format is relatively flexible, making it suitable for most RPG systems, whether you play D&D, Swords & Wizardry, Pathfinder, Starfinder, Stardrifter, or something else. - -The syntax for an INI file is so simple that it's almost intuitive. Each heading is enclosed in brackets, and each stat is a key and value pair. - -Here's a small sample: - - -``` -[Character] -Name=Skullerix -Level=5 -Class=Fighter -Ancestry=Human - -[Health] -AC=14 -HP=43 -Max=66 -``` - -The limitation to this format is that you can't have single-value attributes. That means that if you want to list attributes that get a proficiency bonus in D&D 5th Edition, you can't just list the attributes: - - -``` -[Save] -DEX -INT -``` - -Instead, you must force them to be a pair. - -In D&D 5e, it's easy to come up with a value. 
These saving throws are highlighted only because your proficiency bonus applies to them, so I just make a note of the character's current bonus: - - -``` -[Save] -DEX=3 -INT=3 -``` - -In other systems, there may be attributes that simply don't have a value and really are meant just to be listed. In that case, you can either set a value to itself or to `True`: - - -``` -[Save] -DEX=DEX -INT=True -``` - -Once you've entered your character's data into the INI format, you can query it with the `pc` command. The command requires the `--character` or `-c option` along with the character sheet file you want to query. With no further arguments, you get a listing of the entire character sheet. - -Add a heading name to view all stats within one category: - - -``` -$ pc -c skullerix.ini Character -Character: -Name: Skullerix -Level: 5 -Class: Fighter -Ancestry: Human -``` - -Provide a heading name plus a key name to view the value of a specific stat: - - -``` -$ pc -c skullerix.ini Character Level -Level: 5 -``` - -If you're like me and play lots of games, you can keep all of your characters in the default location `~/.local/share/pc,` then query them without the path or file extension. - -For instance, say you have `froese.ini, kitaro.ini`, and `wendy.ini<` in `~/.local/share/pc`: - - -``` -$ pc -c kitaro Character Class -Class: Wizard -$ pc -c wendy Health AC -23 -$ pc -c froese Save INT -3 -``` - -To see the characters in your default folder, use the `--list` or `-l` option: - - -``` -$ pc --list -froese.ini -kitaro.ini -wendy.ini -``` - -The pc project is written in Lua and is available from its [Gitlab repository][8]. - -### PCGen - -PCGen is an application designed to help you build and maintain characters. It even has knowledge of the rules of the system it's assisting you with. Far from just a configuration file generator, PCGen is a database of open source rules and how they relate to one another over the course of a character's life. - -PCGen can build characters for D&D 5e, Pathfinder 1e, Starfinder, and Fantasy Craft. When you first launch PCGen, you can download rule definitions for each game. The files are small, but depending on what you want to install, there can be a lot of files to download. - -You only have to do it once, though, and PCGen tends to everything else but clicking the button to start the download for each system. - -Once you have everything downloaded, you can start creating characters by selecting **New** from the **File** menu. - -PCGen keeps track of incomplete tasks in the panel labeled **Things to be done**, and it helps you proceed through the process of satisfying each requirement until you've got a complete character. - -![PCGen dashboard showing a character summary][9] - -(Seth Kenlon, CC BY-SA 4.0) - -PCGen does all calculations for you, so you don't have to figure out your skill ranks, how a proficiency bonus affects your rolls, and other computations. Better yet, you don't have to calculate how your scores change as you level up or even what benefits you get with each new level. You'll have choices to make at each level, but you don't have to flip through your rulebook in hopes you're not missing anything significant. - -One of my favorite things about PCGen is its ability to render your character sheet when finished. - -On paper, your eyes probably know exactly where to look to find your proficiency bonus, or skill ranks, or other character stats. In some formats, you lose that when you go digital. 
PCGen has a built-in renderer and can show you your character in standard character sheet layouts that an experienced player will likely find familiar. - -![A traditional-looking RPG character sheet rendered by PC Gen][10] - -(Seth Kenlon, CC BY-SA 4.0) - -PCGen is an ENnie award winner, and it's well deserved. Maintaining a character is easy with PCGen, and it's an application I find myself opening on lazy afternoons just for the fun of building something new. - - * On Linux, download PCGen's universal installer from [pcgen.org.][11] You must have [Java installed][12].) Run `pcgen.sh` to launch the application. - * On macOS, download PCGen's universal installer from [pcgen.org][11]. You must have [Java installed][13].) Run `pcgen.sh` to launch the application. - * On Windows, download PCGen's Windows installer from [pcgen.org][11]. You must also [install Java][14]. - - - -### Player character XML - -One of the advantages of using a terminal command to query character sheets is that you gain independence from the layout. - -Playing several game systems can be taxing, because nearly every system has its own layout. With a terminal command, however, instead of looking over sheets of paper for data, you look up the same information quickly by letting your computer do the scanning. - -One of the projects I've been enjoying lately for character tracking is the d project, which uses XML to express character stats and the `xmllint` command to query it. The d project features a few utilities: - - * `d` command rolls dice (include FUDGE die). - * `v` command queries character sheets. - * The `e` command initializes your home directory by placing files in predictable locations. - - - -Because [XML is so flexible][15], this format allows you to devise your own schema, depending on what works best for your system. - -For example, a class-based system like D&D or Pathfinder may benefit from a section for special class features, while a skill-based system might have a simple schema with no categories. - -Here's a simple example: - - -``` -<char> -  <name>Robin Hood</name> -  <health>20</health> -  <acrobat>5</acrobat> -  <archery>8</archery> -  <disguise>3</disguise> -</char> -``` - -First, export the location of the character sheet: - - -``` -`$ export CHAR_SHEET=~/.config/char/robin.xml` -``` - -Alternately, you can initialize your home directory with the `e` command, which creates the `~/.config/char` directory and defines the `CHAR_SHEET` variable in your `.bashrc` file: - - -``` -`$ e init` -``` - -After you've got your environment configured, you can query your character sheet: - - -``` -$ ./v char.name -<name>Robin Hood</name> -$ ./v char.archery -<archery>7</archery> -``` - -Functionally, `v` is similar to the `pc` script, but because it uses XML, there are a lot of possibilities for how you view it. With XSL, you could style your XML-based character sheet and give it a layout for users who aren't comfortable in the terminal but still retain the XML source for those who are. - -### Open source at the open table - -Whether you're looking for a complex application like PCGen to guide you through character creation or simple utilities like pc or d to quickly query character stats, open source has plenty of options for you. - -And the choice of tooling is precisely what makes it such a pleasure to do your analog game in a digital remote setting. 
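One closing, practical aside that is not part of the d project's own documentation: because the character sheet is plain XML, you can reproduce the `v` command's lookups directly with `xmllint` and an XPath expression, which is handy for checking what the tool is doing under the hood. Using the example sheet from above:

```
$ xmllint --xpath '/char/archery' ~/.config/char/robin.xml
<archery>8</archery>
$ xmllint --xpath 'string(/char/name)' ~/.config/char/robin.xml
Robin Hood
```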
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/manage-rpg-character-sheets - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice-keys_0.jpg?itok=PGEs3ZXa (Dice on a keyboard) -[2]: https://www.freerpgday.com/ -[3]: https://goodman-games.com/blog/2021/10/06/pdf-previews-of-our-free-rpg-day-releases/ -[4]: https://paizo.com/community/blog/v5748dyo6shte -[5]: https://opensource.com/article/20/7/free-rpg-day -[6]: https://opensource.com/article/21/9/alternatives-zoom -[7]: https://opensource.com/article/19/6/how-use-maptools -[8]: https://gitlab.com/slackermedia/pc -[9]: https://opensource.com/sites/default/files/uploads/pcgen-build.png (Character building with PCGEN) -[10]: https://opensource.com/sites/default/files/uploads/pcgen-render.png (rendered character sheet) -[11]: http://pcgen.org/download/ -[12]: https://opensource.com/article/19/11/install-java-linux -[13]: https://opensource.com/article/20/7/install-java-mac -[14]: https://access.redhat.com/documentation/pt-br/openjdk/11/html-single/installing_and_using_openjdk_11_for_windows/index -[15]: https://opensource.com/article/21/7/what-xml diff --git a/sources/tech/20211016 5 open source tabletop RPGs you should try.md b/sources/tech/20211016 5 open source tabletop RPGs you should try.md deleted file mode 100644 index 77d5c6b122..0000000000 --- a/sources/tech/20211016 5 open source tabletop RPGs you should try.md +++ /dev/null @@ -1,122 +0,0 @@ -[#]: subject: "5 open source tabletop RPGs you should try" -[#]: via: "https://opensource.com/article/21/10/rpg-tabletop-games" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -5 open source tabletop RPGs you should try -====== -Open source games to download for both casual and experienced gamers. -![Gaming on a grid with penguin pawns][1] - -Open source arrived in the pen-and-paper RPG industry back at the turn of the century, when Wizards of the Coast, publisher of [Magic: The Gathering][2] and Dungeons & Dragons, developed the [Open Game License (OGL)][3]. Many publishers have since adopted the OGL themselves or use similar licenses, such as [Creative Commons][4]. - -Today is [Free RPG Day][5]. It's the one day a year you can go to your friendly local game store and pick up, at no cost, a free tabletop role-playing game from some of the biggest publishers in the hobby. If you don't have a local game store or can't get out to a game store, some free RPG sampler downloads are available from [Dungeon Crawl Classics][6] and [Paizo][7]. But not everything for Free RPG Day is available as a download, so I've collected five of my favorite open source tabletop games that you can download and play. - -![OSRIC][8] - -Image ©2021 OSRIC project - -### OSRIC - -The Old School Reference and Index Compilation (OSRIC) project effectively reimplements the rules for the world's first role-playing game: the original edition of Dungeons & Dragons. 
These are the rules used in the late 1970s to early 1980s, so players can experience role-playing games as they were when they were just getting started. - -There's nothing wrong with the original D&D rules, of course. You can still find copies of the original books on the bookshelves of many gamers (myself included). However, the original rules aren't in print anymore, so they're not easy to obtain, and they certainly aren't being developed or updated to account for omissions. - -The gaming industry has also come a long way since the early '80s. [Instruction books for games][9] used to be written more like encyclopedia entries than entertainment, but OSRIC seeks to bring the fun of the original game to a new generation of gamers, and to gamers looking to return to the glory of gaming days past. Regardless of which category you fit into, OSRIC is worth downloading. - -Get it from [osricrpg.com][10]. - -### Stardrifter - -Not all RPG is high fantasy. - -The Stardrifter project is a rules-light science fiction game that helps your gaming group experience stories in the style of rousing space operas like _Star Trek, The Repairman_ by Harry Harrison, _Foundation_, and _Blake 7_, or tales you might read in _Amazing Stories_ or _Starlog._ - -Character creation is quick, and it's mostly skill-based. It took me a few minutes to roll up a character and a little longer to mull over what kind of background and skillset my character would have. - -The dice system is easy: roll under your attribute score on a d20 for success. The gamemaster doesn't have to set difficulty classes or other thresholds, although situational modifiers can be applied to reflect extreme circumstances (sometimes in your favor, sometimes to your detriment). - -It's an elegant system, and its rulebook is an easy and entertaining read. I especially enjoy the artwork, which consists of scans from classic (now public domain) science fiction comic books. - -But wait, there's more! - -A natural characteristic of many open source RPGs is that they don't feature extensive worldbuilding. Sometimes that's by design because the game intends for the gamemaster to do the worldbuilding, but sometimes it's down to a lack of staffing. Stardrifter, however, is unique because it became an RPG only after it was a series of novels. As a result, there's plenty of worldbuilding already done for the Stardrifter universe. You can start exploring Stardrifter by [downloading the books and short stories][11] in either print or audio form, and you can get a detailed overview of daily life in the Stardrifter universe from the [Voice from the Void][11] podcast. - -The game was developed and released on [GitLab][12], and the whole production studio responsible for this miniature multimedia empire runs on Linux. - -![One-Page Dungeon][13] - -CC BY-SA Keith Indi Salamunia - -### One-Page Dungeon Contest - -Did I mention today is Free RPG Day? - -Well, it's also the reveal of the [One-Page Dungeon Contest][14] winners! The One-Page Dungeon Contest is an annual event in which inventive gamemasters devise a dungeon that fits on one page and submit it for judging. There are officially winners, but really everyone wins, because all submissions are published in a Creative Commons collection that you can download and play through over the course of—probably—years. - -As fun as adventure modules are, many gaming groups actually don't get all the way through a 64- or 250-page adventure. It's often more realistic to aim for just a single dungeon crawl. 
Play one dungeon every weekend, and one collection of One-Page Dungeon Contest entries will last you at least a full year. - -I love how inventive the One-Page Dungeons are, too. Sure, some are straightforward dungeon delves, and those are welcome stalwarts of each collection, but others are daring and experimental. It makes for an unexpected game every time. The published dungeons tend to be indifferent to system, too, so as long as you're playing a game in which dungeons are an expected story vehicle, you can use these. - -And because it's an annual community project, you can start planning your submission for next year! - -### Dungeon of the Dungeons - -When you have a one-page dungeon, it might be convenient to have a one-page rulebook. - -The [Dungeon of the Dungeons][15] project is _technically_ one page (front and back). It's a Creative Commons-licensed game system based around a mechanic that gives bonuses to players for answering questions relating to their character's motivations. - -For example, if you're playing a Bard and you're taking an action that draws attention to yourself, you add a bonus point to your dice roll. On the other hand, if you're taking an action that does _not_ draw attention to yourself, you gain no bonus point to your roll. - -The result is that players are compelled to roleplay their characters true to their character class. - -Because the class definitions consist of one sentence, it's trivial to invent custom ones between games. With rules as simple as two sparse pages, this is an easy and fun system for a casual game or for new players. - -### FATE - -The [FATE][16] system is a simple and elegant game that relies on a point-buy mechanic enabling players to influence their rolls. Using _Fate points_, players can change the narrative of a game when it matters the most. - -And the narrative is paramount in FATE. It's considered a game system that's light on rules so that players can focus on collaborative storytelling and gaming. - -FATE is licensed under the Creative Commons license, so it's the foundation for many variants. It's been documented in as little as [a single page][17], so there's no excuse not to get started with a FATE game if it sounds like something you'd enjoy. - -### Open gaming - -Open source gaming drives the modern tabletop RPG industry, but open source being what it is, it's also the product of independent creators everywhere. - -Enjoy this year's Free RPG Day with a new game system or a new adventure for a system you already play. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/rpg-tabletop-games - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game_pawn_grid_linux.png?itok=4gERzRkg (Gaming on a grid with penguin pawns) -[2]: https://opensource.com/article/21/9/magic-the-gathering-assistant -[3]: http://www.opengamingfoundation.org/licenses.html -[4]: https://opensource.com/article/20/1/what-creative-commons -[5]: http://freerpgday.com/ -[6]: https://goodman-games.com/blog/2021/10/06/pdf-previews-of-our-free-rpg-day-releases/ -[7]: https://paizo.com/community/blog/v5748dyo6shte -[8]: https://opensource.com/sites/default/files/osric-splash.jpg (OSRIC) -[9]: https://opensource.com/life/16/11/software-documentation-tabletop-gaming -[10]: https://osricrpg.com/get.php -[11]: https://davidcollinsrivera.com/#stardrifter -[12]: https://gitlab.com/x1101/stardrifter-rpg -[13]: https://opensource.com/sites/default/files/keith-indi-salamunia.png (One-Page Dungeon) -[14]: https://www.dungeoncontest.com/ -[15]: https://thedevteam.itch.io/dungeons-of-the-dungeons -[16]: https://www.faterpg.com/licensing/licensing-fate-cc-by/ -[17]: https://zanrick.itch.io/pocket-fate diff --git a/sources/tech/20211020 Inspect the capabilities of ELF binaries with this open source tool.md b/sources/tech/20211020 Inspect the capabilities of ELF binaries with this open source tool.md deleted file mode 100644 index d17fe316d1..0000000000 --- a/sources/tech/20211020 Inspect the capabilities of ELF binaries with this open source tool.md +++ /dev/null @@ -1,324 +0,0 @@ -[#]: subject: "Inspect the capabilities of ELF binaries with this open source tool" -[#]: via: "https://opensource.com/article/21/10/linux-elf-capa" -[#]: author: "Gaurav Kamathe https://opensource.com/users/gkamathe" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Inspect the capabilities of ELF binaries with this open source tool -====== -Use capa to reveal all the mysteries of ELF binaries. -![Puzzle pieces coming together to form a computer screen][1] - -If Linux is your primary working environment, then you might be familiar with the Executable and Linkable Format ([ELF][2]), the main file format used for executables, libraries, core-dumps, and more, on Linux. I've written articles covering native Linux tools to understand ELF binaries, beginning with [how ELF binaries are built][3], followed by some general tips on how to [analyze ELF binaries][4]. If you are not familiar with ELF and executables in general, I suggest reading these articles first. - -### Introducing Capa - -Capa is an [open-source project][5] from Mandiant (a cybersecurity company). In the project's own words, _capa detects capabilities in executable files_. Although the primary target of Capa is unknown and possibly malicious executables, the examples in this article run Capa on day-to-day Linux utilities to see how the tool works. - -Given that most malware is Windows-based, earlier Capa versions only supported the PE file format, a dominant Windows executable format. 
However, starting with v3.0.0, support for ELF files has been added (thanks to [Intezer][6]). - -### What are capabilities? - -What does the concept of _capabilities_ actually mean, especially in the context of executable files? Programs or software fulfill certain computing needs or solve a problem. To keep things simple, our requirements could vary from finding a file, reading/writing to a file, running a program, logging some data to a log file, opening a network connection, etc. We then use a programming language of our choice with specific instructions to fulfill these tasks and compile the program. The resulting binary or executables then performs these tasks on the user's behalf, so the resulting executable is _capable_ of carrying out the above tasks. - -Looking at the source code, it's easy to identify what a program does or what its intent is. However, once the program is compiled as an executable, the source code is converted to machine language and is no longer part of the resulting executable (unless compiled with debug info). We can still make some sense of it by looking at the equivalent assembly instructions backed by some knowledge of the Linux API (glibc/system calls), however, it's difficult. Tools like de-compilers do exist which try to convert the assembly to a pseudo-code of what might have been the original source code. However, it isn't a one-to-one match, and it is only a best-effort attempt. - -### Why another tool? - -If we have multiple native Linux tools to analyze binaries, why do we need another one? The existing tools aid developers in troubleshooting and debugging issues that might arise during development. They are often the first step for initial analysis on unknown binaries, however, they are not sufficient. - -Sometimes what is needed isn't lengthy disassembly or long pseudo-code, but just a quick summary of the capabilities seen in the binary based on its API usage. Often, malicious binaries and malware employ some anti-analysis or anti-reversing techniques that render such native tools helpless. - -Capa's primary audience is malware or security researchers who often come across unknown binaries for which source code isn't available. They need to identify if it's malware or a benign executable. An initial first step is finding out what the executable can do before moving to dynamic analysis. This can be done with some pre-defined rule sets matched against a popular framework (ATT&CK). Native Linux tools were not designed for such uses. - -### Getting Capa - -Download a pre-built Capa Linux program from [here][7]. You must use v3.0.0 or above. Capa is programmed in Python, however the downloaded program isn't a `.py` file that the Python interpreter can execute. It is instead an ELF executable that runs directly from the Linux command line. - - -``` -$ pwd -/root/CAPA -$ -$ wget -q -$ -$ file capa-v3.0.2-linux.zip -capa-v3.0.2-linux.zip: Zip archive data, at least v2.0 to extract -$ -$ unzip capa-v3.0.2-linux.zip -Archive:  capa-v3.0.2-linux.zip -  inflating: capa                     -$ -$ ls -l capa --rwxr-xr-x. 1 root root 41282976 Sep 28 18:29 capa -$ -$ file capa -capa: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=1da3a1d77c7109ce6444919f4a15e7e6c63d02fa, stripped -``` - -### Command line options - -Capa comes with a variety of command line options. 
This article visits a few of them, beginning with the help content: - - -``` -$ ./capa -h -usage: capa [-h] [--version] [-v] [-vv] [-d] [-q] [--color {auto,always,never}] [-f {auto,pe,elf,sc32,sc64,freeze}] -            [-b {vivisect,smda}] [-r RULES] [-s SIGNATURES] [-t TAG] [-j] -            sample - -The FLARE team's open-source tool to identify capabilities in executable files. - -<< snip >> -$ -``` - -Use this command to check if the required version of Capa (v3 and above) is running: - - -``` -$ ./capa --version -capa v3.0.2-0-gead8a83 -``` - -### Capa output and the MITRE ATT&CK framework - -Capa output can be a bit overwhelming, so first run it on a simple utility, such as `pwd`. The `pwd` command on Linux prints the current working directory and is a common command. Please note that `pwd` might be a shell-inbuilt for you (no separate executable) depending on the distro you are using. Identify its path using the `which` command first and then provide the complete path to Capa. Here is an example: - - -``` -$ which pwd -/usr/bin/pwd -$ -$ file /usr/bin/pwd -/usr/bin/pwd: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=ec306ddd72ce7be19dfc1e62328bb89b6b3a6df5, for GNU/Linux 3.2.0, stripped -$ -$ ./capa -f elf /usr/bin/pwd -loading : 100%| 633/633 [00:00<00:00, 2409.72 rules/s] -matching: 100%| 76/76 [00:01<00:00, 38.87 functions/s, skipped 0 library functions] -+------------------------+------------------------------------------------------------------------------------+ -| md5                    | 8d50bbd7fea04735a70f21cca5063efe                                                   | -| sha1                   | 7d9df581bc3d34c9fb93058be2cdb9a8c04ec061                                           | -| sha256                 | 53205e6ef4e1e7e80745adc09c00f946ae98ccf6f8eb9c4535bd29188f7f1d91                   | -| os                     | linux                                                                              | -| format                 | elf                                                                                | -| arch                   | amd64                                                                              | -| path                   | /usr/bin/pwd                                                                       | -+------------------------+------------------------------------------------------------------------------------+ - -+------------------------+------------------------------------------------------------------------------------+ -| ATT&CK Tactic          | ATT&CK Technique                                                                   | -|------------------------+------------------------------------------------------------------------------------| -| DISCOVERY              | File and Directory Discovery:: T1083                                               | -+------------------------+------------------------------------------------------------------------------------+ - -+-----------------------------+-------------------------------------------------------------------------------+ -| MBC Objective               | MBC Behavior                                                                  | -|-----------------------------+-------------------------------------------------------------------------------| -| FILE SYSTEM                 | Writes File:: [C0052]                                                         | 
-+-----------------------------+-------------------------------------------------------------------------------+ - -+------------------------------------------------------+------------------------------------------------------+ -| CAPABILITY                                           | NAMESPACE                                            | -|------------------------------------------------------+------------------------------------------------------| -| enumerate files on Linux (2 matches)                 | host-interaction/file-system/files/list              | -| write file on Linux                                  | host-interaction/file-system/write                   | -+------------------------------------------------------+------------------------------------------------------+ -``` - -Run Capa with the `-f elf` argument to tell it that the executable to analyze is in the ELF file format. This option might be required for unknown binaries; however, Capa is perfectly capable of detecting the format on its own and doing the analysis, so you can skip this option if required. In the beginning, you will see a loading/matching message as Capa loads its rules from the backend and then analyzes the executable and matches those rules against it. Skip displaying this by adding the `-q` option to all commands. - -Capa output is divided into various sections. The first section uniquely identifies the binary using its md5, sha1, or sha256 hash followed by the operating system, file format, and architecture information. This information is often critical when dealing with executables. In the following sections, Capa uses the ATT&CK Tactic and Technique to match the capabilities. If you are unfamiliar with what ATT&CK means, please refer to the [MITRE ATT&CK Framework here][8]. - -MITRE ATT&CK is best described in the project's own words: - -> MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. - -You can match the output of Capa in the following two sections with that of the MITRE ATT&CK framework. I shall skip this part in this article. - -Finally, in the Capability section, you can see two specific capabilities listed out: - - -``` -enumerate files on Linux -write file on Linux -``` - -Compare this with the nature of the `pwd` program, which needs to show the current directory. Here it matches the first capability (remember the concept of everything is a file in Linux). What about the second part, which says _writing file_? We certainly haven't written `pwd` output to any file. However, remember `pwd` needs to write the current directory location to the terminal; how else will the output be printed? If you are still unsure of how this works, run the following command and match the output. If you are unfamiliar with `strace` or what it does, I have an article covering it [here][9]. Focus on the _write_ system call toward the end of the article where the `pwd` executable needs to write the directory path (string) to **1**, which stands for standard out. In our case, that is the terminal. - - -``` -$ strace -f  /usr/bin/pwd -execve("/usr/bin/pwd", ["/usr/bin/pwd"], 0x7ffd7983a238 /* 49 vars */) = 0 -brk(NULL) - -<< snip >> - -write(1, "/root/CAPA\n", 11/root/CAPA -)            = 11 -close(1)                                = 0 -close(2)                                = 0 -exit_group(0)                           = ? 
-+++ exited with 0 +++ -``` - -### Running Capa on different Linux utilities - -Now that you know how to run Capa, I highly recommend you try it on various day-to-day Linux utilities. When choosing utilities try to be as diverse as possible. For example, select utilities that work with file systems or storage commands, such as `ls`, `mount`, `cat`, `echo`, etc. Next, move to network utilities, like `netstat`, `ss`, `telnet`, etc., where you will find the network capabilities of an executable. Extend it to more extensive programs daemons like `sshd` to see crypto-related capabilities, followed by `systemd`, `bash`, etc. - -A word of caution, don't be too spooked if you see rules that match malware for these native utilities. For example, when analyzing systemd, Capa showed matches for COMMAND AND CONTROL based on the capability to receive data from a network. This capability could be used by genuine programs for legitimate cases, while malware could use it for malicious purposes. - -### Running in Debug mode - -If you wish to see how Capa finds all these capabilities in an executable, provide the `-d` flag, which displays additional information on the screen that might help understand its inner working. Use this data and look for clues in the source code on GitHub. - - -``` -`$ ./capa -q /usr/sbin/sshd -d` -``` - -The first thing to notice is that Capa saves rules to a temp directory and reads them from there: - - -``` -`DEBUG:capa:reading rules from directory /tmp/_MEIKUG6Oj/rules` -``` - -The debug output shows it loaded various rules from this directory. As an example, see how it tried to identify the hostname of a machine: - - -``` -`DEBUG:capa:loaded rule: 'get hostname' with scope: function` -``` - -With this information, it's easy to look up the rule. Simply go to the `rules` directory and `grep` for the specific rule name like the example below. The rule is stated in a .yml file. - - -``` -$ grep -irn "name: get hostname" * -rules/host-interaction/os/hostname/get-hostname.yml:3:    name: get hostname -``` - -Check for the `-api` sections where various APIs are listed. Capa looks for the `gethostname` API usage (on Linux), and you can see the Windows equivalent listed there, too. - - -``` -$ cat _MEIKUG6Oj/rules/host-interaction/os/hostname/get-hostname.yml -rule: -  meta: -    name: get hostname -    namespace: host-interaction/os/hostname - -<< snip >> - -  features: -    - or: -      - api: kernel32.GetComputerName -      - api: kernel32.GetComputerNameEx -      - api: GetComputerObjectName -      - api: ws2_32.gethostname -      - api: gethostname -``` - -You can find more information about this specific system call on Linux using the man page. - - -``` -$ man 2 gethostname - -GETHOSTNAME(2)                          Linux Programmer's Manual                               GETHOSTNAME(2) - -NAME -       gethostname, sethostname - get/set hostname - -<< snip >> -``` - -### Verbose usage - -Another good way to identify which API's Capa is looking for is using the verbose mode, as shown below. This simple example displays the usage of `opendir`, `readdir`, and `fwrite` APIs: - - -``` -$ ./capa  -q /usr/bin/pwd -vv -enumerate files on Linux (2 matches) - -<< snip >> - -        api: opendir @ 0x20052E8 -        api: readdir @ 0x2005369, 0x200548A - -write file on Linux - -<< snip >> - -    os: linux -    or: -      api: fwrite @ 0x2002CB5 -``` - -### Custom rules - -As with other good tools, Capa allows you to extend it by adding your own rules. 
This hint was also given in the debug output, if you noticed. - - -``` -`$ capa --signature ./path/to/signatures/ /path/to/executable` -``` - -### Specific rules only - -You can also look for specific rules instead of having Capa trying to match every rule. Do this by adding the `-t` flag followed by the exact rule name: - - -``` -`$ ./capa -t "create process on Linux" /usr/sbin/sshd -q -j` -``` - -Display the rule name from the .yml files within the `rules` directory. For example: - - -``` -$ grep name rules/host-interaction/process/create/create-process-on-linux.yml -    name: create process on Linux -``` - -### Output format - -Finally, Capa allows output in JSON format using the `-j` flag. This flag helps consume the information quickly and aid automation. This example command requires that the [jq command][10] is installed: - - -``` -`$ ./capa -t "create process on Linux" /usr/sbin/sshd -q -j | jq .` -``` - -### Wrap up - -Capa is a worthy addition to the much-needed tools for ELF executables. I say _much-needed_ because we regularly see cases of Linux malware now. Tools on Linux must catch up to tackle these threats. You can play around with Capa and try it on various executables, and also write your own rules and add them upstream for the benefit of the community. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/linux-elf-capa - -作者:[Gaurav Kamathe][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/gkamathe -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) -[2]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format -[3]: https://opensource.com/article/19/10/gnu-binutils -[4]: https://opensource.com/article/20/4/linux-binary-analysis -[5]: https://github.com/mandiant/capa -[6]: https://www.intezer.com/ -[7]: http://github.com/mandiant/capa/releases -[8]: https://attack.mitre.org/ -[9]: https://opensource.com/article/19/10/strace -[10]: https://stedolan.github.io/jq/ diff --git a/sources/tech/20211024 Open source gets dirty with 3D printing.md b/sources/tech/20211024 Open source gets dirty with 3D printing.md deleted file mode 100644 index 1d7937decc..0000000000 --- a/sources/tech/20211024 Open source gets dirty with 3D printing.md +++ /dev/null @@ -1,75 +0,0 @@ -[#]: subject: "Open source gets dirty with 3D printing" -[#]: via: "https://opensource.com/article/21/10/open-source-soil-science" -[#]: author: "Joshua Pearce https://opensource.com/users/jmpearce" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Open source gets dirty with 3D printing -====== -3D printing and open source technology enable advanced research for soil -science. -![Green sprout grass in dirt soil][1] - -Open source has touched a lot of scientific disciplines, but one area where it is missing is soil science. Until recently, you could only find it [in educational materials][2]. A team from the Université de Lorraine, INRAE in France, and Western University in Canada [bring open source to the soil science community][3]. 
- -Soil science experiments saw significant impact by the technological advances developed over the past decades. However, support for these experiments evolved very slowly, and soil science literally languished in the dirt. Researchers still take soil samples in the "traditional" way from specific fields. For this purpose, agricultural researchers determine which areas might contain the most suitable soil for an experiment in advance. This method leads to many approximations and uncontrolled parameters, which significantly complicates the analysis of the results. Thus, some studies require identical replicates. 3D printing offers an excellent opportunity to meet this need. - -![Collecting soil samples][4] - -Farmer-scientist field collaboration in collecting soil -and plant samples ([Flickr][5], [CC BY-NC-SA 4.0][6]) - -Modeling a porous structure for soil science must consider a combination of specifications (nature of the material, porosity, and location of specific substances or living organisms). In addition, using an engineering design approach improves the modeling process, and these become customizable and reproducible models—some of the bedrock properties of open source science. The model's main characteristics are identified and studied according to the complexity of the specific soil phenomena. With that modeling, you can achieve a design approach for defining a manufacturing process. - -One main challenge to support this design approach is developing software that allows soil scientists to create soil models according to their needs in terms of the soil structure. This software should be dedicated to scientific research and promote data sharing and exchange across an international community. - -Reproducing soil samples digitally helps academics and researchers conduct reproducible and participatory research networks that help better understand the specific soil parameters. One of the most critical challenges for soil modeling is the manufacturing of a soil structure. Until now, the most widespread method to replicate porous soil structures is using X-ray tomography to scan an actual sample. This process is expensive and time-consuming and does not readily provide an approach to customization. A new open source approach makes it possible for any soil scientist to design a porous soil structure. It is based on mathematical models rather than the dirty samples themselves—allowing researchers to design and parameterize their samples according to their desired experiments. - -![Settings and model of monolith with mix of different grain sizes][7] - -Settings and model of monolith with mix of different -grain sizes (Joshua Pearce, [CC BY-SA 4.0][8]) - -Developing an open source toolchain using a [Lua script][9], in the [IceSL][10] slicer with a GUI enables researchers to create and configure their digital soil models, called monoliths. Done without using meshing algorithms or STereoLithography (STL) files because those reduce the model's resolution.  - -Monolith examples are fabricated in polylactic acid using [open source fused filament fabrication technology][11] with a layer thickness of 0.20, 0.12, and 0.08 mm. The images generated from the digital model slicing are analyzed using open source [ImageJ][12] software. ImageJ provides information about internal geometrical shape (porosity, tortuosity, grain size distribution, and hydraulic conductivities). 
The results show that the developed script enables designing reproducible numerical models that imitate soil structures with defined pore and grain sizes in a range between coarse sand (from 1 mm diameter) to fine gravel (up to 12 mm diameter). - -![Monolith with offset root system][13] - -Monolith with offset root system  -(Joshua Pearce, [CC BY-SA 4.0][8]) - -Samples generated using the developed script would be expected to increase reproducibility and be more accessible because of the open source and low-cost methods involved. - -You can read the complete open access study here: [Open-Source Script for Design and 3D Printing of Porous Structures for Soil Science][14] by Romain Bedell, Alaa Hassan, Anne-Julie Tinet, Javier Arrieta-Escobar, Delphine Derrien, Marie-France Dignac, Vincent Boly, Stéphanie Ouvrard, and Joshua M. Pearce 2021, published in _Technologies_ 9, no. 3: 67. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/open-source-soil-science - -作者:[Joshua Pearce][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jmpearce -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/roman-synkevych-unsplash.jpg?itok=lIeB57IW (Green sprout grass in dirt soil) -[2]: https://doi.org/10.4195/nse2017.06.0013 -[3]: https://doi.org/10.3390/technologies9030067 -[4]: https://opensource.com/sites/default/files/uploads/collecting_soil_samples.jpg (Collecting soil samples) -[5]: https://www.flickr.com/photos/cgiarclimate/38600771315/in/photostream/ -[6]: https://creativecommons.org/licenses/by-nc-sa/4.0/ -[7]: https://opensource.com/sites/default/files/uploads/monolith-w-mix-grain-sizes.png (Settings and model of monolith with mix of different grain sizes) -[8]: https://creativecommons.org/licenses/by-sa/4.0/ -[9]: https://github.com/RomainBedell/Porous_medium_generator -[10]: https://icesl.loria.fr/ -[11]: https://www.reprap.org/wiki/RepRap -[12]: https://imagej.nih.gov/ij/ -[13]: https://opensource.com/sites/default/files/uploads/monolith-w-offset-roots.png (Monolith with offset root system) -[14]: https://www.mdpi.com/2227-7080/9/3/67 diff --git a/sources/tech/20211026 Deploy Quarkus applications to Kubernetes using a Helm chart.md b/sources/tech/20211026 Deploy Quarkus applications to Kubernetes using a Helm chart.md deleted file mode 100644 index a30b34c3cf..0000000000 --- a/sources/tech/20211026 Deploy Quarkus applications to Kubernetes using a Helm chart.md +++ /dev/null @@ -1,161 +0,0 @@ -[#]: subject: "Deploy Quarkus applications to Kubernetes using a Helm chart" -[#]: via: "https://opensource.com/article/21/10/quarkus-helm-chart" -[#]: author: "Daniel Oh https://opensource.com/users/daniel-oh" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Deploy Quarkus applications to Kubernetes using a Helm chart -====== -A developer's guide to serverless function deployment with Quarkus Helm -chart. -![Ships at sea on the web][1] - -Serverless functions are driving the fast adoption of DevOps development and deployment practices today. [Knative][2] on [Kubernetes][3] is one of the most popular serverless platforms to adopt serverless function architectures successfully. 
But developers must understand how serverless capabilities are specified using a combination of Kubernetes APIs, Knative resources, and function-oriented programming. DevOps teams also need to standardize runtime stacks (that is, application runtime, builder image, deployment configuration, and health check) to execute the functions on Kubernetes. What if you, a developer, could set this up with familiar technology and practice? - -This article guides you on the way developers can get started with serverless function deployment with the [Quarkus][4] [Helm][5] chart on Kubernetes. Furthermore, developers can avoid the extra work of developing a function from scratch, optimizing the application, and deploying it to Kubernetes. - -If you haven't experienced using Helm for cloud-native application deployments on Kubernetes, I will tell you what Helm is and what benefits you have with it. Helm is one of the most popular package managers for Kubernetes. Helm provides a chart that simplifies Kubernetes resources within a single package file for an application build and deployment. Developers can install the chart to Kubernetes using the Helm command-line interface or graphical dashboard. - -### Install Quarkus Helm chart - -In this article, you'll use [OpenShift Kubernetes Distribution][6] (OKD) built on Kubernetes with application lifecycle management functionality and DevOps tooling. If you haven't installed the Helm feature on your OKD cluster yet, follow the [installation document][7]. - -Before building a Quarkus application using a Quarkus Helm chart, you need to create pull and push secrets in your OKD cluster. You use the secrets to pull a builder image from an external container registry and then push it to the registry.  - -**Note:** You can skip this step if you don't need to use an external container registry during application build or deploy the application to the OKD cluster. - -Create a pull secret using the following [oc command][8]: - - -``` -$ oc create secret docker-registry my-pull-secret \ -\--docker-server=$SERVER_URL \ -\--docker-username=$USERNAME \ -\--docker-password=$PASSWORD \ -\--docker-email=$EMAIL -``` - -Then, create a push secret using the following command: - - -``` -$ oc create secret docker-registry my-push-secret \ -\--docker-server=$SERVER_URL \ -\--docker-username=$USERNAME \ -\--docker-password=$PASSWORD \ -\--docker-email=$EMAIL -``` - -Install the Quarkus Helm chart: - - -``` -$ helm repo add quarkus \ - -``` - -### Build and deploy Quarkus application using Helm chart - -Go to the **Developer** console in the OKD cluster, click on Helm chart in **+Add** menu. Then type in _quarkus_ in the search box. Click on the **Quarkus v0.0.3** helm chart, as shown below. - -**Note:** You'll need to create a _quarkus-helm project_ (namespace) to install a Quarkus Helm chart in your OKD cluster. - -![Search Quarkus Helm chart][9] - -(Daniel Oh, [CC BY-SA 4.0][10]) - -Click on **Install Helm Chart**, as shown below. - -![Install Helm chart][11] - -(Daniel Oh, [CC BY-SA 4.0][10]) - -Switch the editor to **YAML** view, then paste the following build and deploy configurations: - - -``` -build: -  uri: -  ref: master -  env: -    - name: S2I_SOURCE_DEPLOYMENTS_FILTER -      value: "*-runner.jar lib*" -deploy: -  readinessProbe: -    httpGet: -      path: /health/ready -      port: http -    tcpSocket: null -  livenessProbe: -    httpGet: -      path: /health/live -      port: http -    tcpSocket: null -``` - -Then, click on the **Install** button, as shown below. 
- -![YAML editor][12] - -(Daniel Oh, [CC BY-SA 4.0][10]) - -Find more values to configure the Quarkus helm chart [here][13]. - -Once the chart gets installed successfully, you'll see the following Quarkus pod in the Topology view, as shown below. - -![Topology view][14] - -(Daniel Oh, [CC BY-SA 4.0][10]) - -**Note:** You might see _ErrImagePull_ and _ImagePullBackOff_ in **Deployments** while the build is processing. Once the build completes, your image gets automatically rolled out. - -Click on the **Open URL** icon. It brings you to the **todos** application. Let's try to add a few items for fun, as shown below. - -![Todos applications][15] - -(Daniel Oh, [CC BY-SA 4.0][10]) - -### Conclusion - -You've learned the way developers can build Quarkus applications and deploy them to Kubernetes/OpenShift cluster in a few minutes using a Helm chart. The developers can manage the application runtime stack in terms of upgrade, rollback, uninstall, and add new configurations such as application health check, replication without changing the application source code or developing new Kubernetes manifestos with YAML files. This minimizes developers' burden to keep leveraging application runtimes other than implementing business logic on Kubernetes. For more information to follow on Quarkus journey here: - - * [A Java developer's guide to Quarkus][16] - * [3 reasons Quarkus 2.0 improves developer productivity on Linux][17] - * [Optimize Java serverless functions in Kubernetes][18] - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/quarkus-helm-chart - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web) -[2]: https://knative.dev/docs/ -[3]: https://opensource.com/article/19/6/reasons-kubernetes -[4]: https://quarkus.io/ -[5]: https://helm.sh/ -[6]: https://www.okd.io/ -[7]: https://docs.okd.io/latest/applications/working_with_helm_charts/installing-helm.html -[8]: https://docs.okd.io/latest/cli_reference/openshift_cli/getting-started-cli.html -[9]: https://opensource.com/sites/default/files/uploads/search-quarkus-helm-chart.png (Search Quarkus Helm chart) -[10]: https://creativecommons.org/licenses/by-sa/4.0/ -[11]: https://opensource.com/sites/default/files/uploads/install-hel-chart.png (Install Helm chart) -[12]: https://opensource.com/sites/default/files/uploads/yaml-editor.png (YAML editor) -[13]: https://github.com/redhat-developer/redhat-helm-charts/tree/master/alpha/quarkus-chart#values -[14]: https://opensource.com/sites/default/files/uploads/topology-view.png (Topology view) -[15]: https://opensource.com/sites/default/files/uploads/todos-applications.png (Todos applications) -[16]: https://opensource.com/article/21/8/java-quarkus-ebook -[17]: https://opensource.com/article/21/7/developer-productivity-linux -[18]: https://opensource.com/article/21/6/java-serverless-functions-kubernetes diff --git a/sources/tech/20211027 How I made an automated Jack-o--lantern with a Raspberry Pi.md b/sources/tech/20211027 How I made an automated Jack-o--lantern with a Raspberry Pi.md deleted file mode 100644 index 
25c2e6f2f4..0000000000 --- a/sources/tech/20211027 How I made an automated Jack-o--lantern with a Raspberry Pi.md +++ /dev/null @@ -1,386 +0,0 @@ -[#]: subject: "How I made an automated Jack-o'-lantern with a Raspberry Pi" -[#]: via: "https://opensource.com/article/21/10/halloween-raspberry-pi" -[#]: author: "Jessica Cherry https://opensource.com/users/cherrybomb" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How I made an automated Jack-o'-lantern with a Raspberry Pi -====== -Here's my recipe for the perfect pumpkin Pi. -![A vignette of green, orange, and yellow pumpkins in front of a brick wall][1] - -It's almost Halloween, one of my favorite days and party times. This year, I decided to (pumpkin) spice up some of my decorations with automated motion sensing. This spooktacular article shows you how I made them, step by step, from building and wiring to coding. This is not your average weekend project—it takes a lot of supplies and building time. But it's a fun way to play around with Raspberry Pi and get in the spirit of this haunting holiday. - -### What you need for this project - - * One large plastic pumpkin - * One Raspberry Pi 4 (with peripherals) - * One Arduino starter kit that works with Raspberry Pi - * One hot glue gun - * Ribbon, ideally in holiday theme colors - - - -The items you'll need in the starter kit are one infrared motion sensor, a breadboard, two small LED lights, a ribbon to connect the breadboard to the Raspberry Pi, and cabling to configure all of these pieces together. You can find each of these items online, and I suggest the starter kit for the entertaining things you can do beyond this project. - -![Raspberry Pi computer board][2] - -Jess Cherry CC BY-SA 4.0 - -![Cables and and LEDs for the project][3] - -Jess Cherry CC BY-SA 4.0 - -![Project supplies including breadboard, cables, LEDs, and elements of the Arduino starter kit][4] - -Jess Cherry CC BY-SA 4.0 - -### Installing the Raspberry Pi OS and preconfiguration - -After receiving my Pi, including the SD card, I went online and followed the Raspberry Pi imager [instructions][5]. This allowed for quick installation of the OS onto the SD card. Note: you need the ability to put the SD card in an SD card-reader slot. I have an external attached SD card reader, but some computers have them built in. On your local computer, you also need a VNC viewer. - -After installing the OS and running updates, I had some extra steps to get everything to work correctly. To do this, you'll need the following: - - * Python 3 - * Python3-devel - * Pip - * RPi GPIO (pip install RPi.GPIO) - * A code editor (Thonny is on the Raspberry Pi OS) - - - -Next, set up a VNCviewer, so you can log in when you have the Pi hidden in your pumpkin. - -To do this, run the below command, then follow the instructions below. - -`sudo raspi-config` - -When this menu pops up, choose Interface Options: - -![Raspberry Pi Software Configuration Tool menu][6] - -Jess Cherry CC BY-SA 4.0 - -Next, choose VNC and enable it on the pop-up: - -![Raspberry Pi Software Configuration Tool menu of interface options][7] - -Jess Cherry CC BY-SA 4.0 - -You can also use Secure Shell (SSH) for this, but during the troubleshooting phase, I used VNC. When logged into your Raspberry Pi, gather the IP address and use it for SSH and a VNC connection. If you've moved rooms, you can also use your router or WiFi switch to tell you the IP address of the Pi. 
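Before moving on, it's worth confirming that remote access actually works from your workstation, because the Pi will be hard to reach once it's buried inside the pumpkin. A quick check might look like this; the address is a placeholder for whatever IP you gathered above, and `pi` is the default Raspberry Pi OS user unless you changed it during imaging:

```
$ ping -c 3 192.168.1.50    # placeholder IP; substitute your Pi's address
$ ssh pi@192.168.1.50       # then log in over SSH to confirm credentials work
```

If SSH works, a VNC Viewer connection to the same address should work as well.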
- -Now that everything is installed, you can move on to building your breadboard with lights. - -### Everyone should try pumpkin bread(board) - -Many people haven't seen or worked with a breadboard, so I've added pictures of my parts, starting with my base components. - -![GPIO Extension Board and Ribbon Cable][8] - -Jess Cherry CC BY-SA 4.0 - -![Breadboard][9] - -Jess Cherry CC BY-SA 4.0 - -These two pieces are put together with the extension shield in the center, as shown. - -![Breadboard with cables, pins, and ribbons, partially set up for the project][10] - -Jess Cherry CC BY-SA 4.0 - -The ribbon connects to the pin slot in the Raspberry Pi, making the board a new extension we can code and play with. The ribbon isn't required, it's just makes working with the GPIO pins convenient. If you don't want to purchase a ribbon, you can connect female-to-male jumper cables directly from the pins on the Pi to the breadboard. Here are the components you need: - - * Raspberry Pi (version 4 or 3) - * Breadboard - * GPIO expansion ribbon cable - * Jumper cables (x6 male-to-male) - * Resistor 220Ω - * HC-SR501 or any similar proximity sensor (x1) - * LED (x2) - - - -### Putting the board together - -Once you have all of the pieces, you can put everything together. First, take a look at how the pins are defined on the board. This is my personal extension board; the one you have may be different. The pin definitions matter when you get to coding, so keep very good track of your cabling. Below is the schematic of my extension. - -As you can see, the schematic has both the defined BCM (Broadcom SOC Channel) GPIO numbering on the physical board and the physical numbering you use within the code to create routines and functions. - -![Schematic of Raspberry Pi extension board][11] - -Jess Cherry CC BY-SA 4.0 - -Now it's time to connect some cabling. First, start with the sensor. I was provided with cables to connect in my kit, so I'll add pictures as I go. This is the sensor with a power(+) ground(-) and sensor connection to extension board(s). - -![Sensor illustration with power, ground, and sensor connection][12] - -Jess Cherry CC BY-SA 4.0 - -For the cable colors: power is red, ground is black, and yellow carries the sensor data. - -![Photo of a hand holding the sensor with black, red, and yellow cables][13] - -Jess Cherry CC BY-SA 4.0 - -I plug in the cables with power/red to the 5V pin, ground/black to the GRN pin, and sensor/yellow to the GPIO 17 pin, later to be defined as 11 in the code. - -![Breadboard with sensor cables attached][14] - -Jess Cherry CC BY-SA 4.0 - -Next, it's time to set up the lights. Each LED light has two pins, one shorter than the other. The long side (anode) always lines up with the pin cable, and the shorter (cathode) with the ground and resistor. - -![LED light with pin, cables, and resistor][15] - -Jess Cherry CC BY-SA 4.0 - -For the first light, I use GPIO18 (pin 12) and GPIO25 for the signal. This is important because the code communicates with these pins. You can change which pin you use, but then you must change the code. Here's a diagram of the end result: - -![Illustration of connections from breadboard to Raspberry Pi, sensor, and LEDs][16] - -Jess Cherry CC BY-SA 4.0 - -Now that everything is cabled up, it's time to start working on the code. - -### How to use a snake to set up a pumpkin - -If you've already installed Python 3, you have everything you need to start working through this line by line. In this example, I am using Python 3 with the RPI package. 
Start with the imported packages, RPI and time from sleep (this helps create the flicker effect described later in the tutorial). I called my Python file senseled.py, but you can name your file whatever you want. - - -``` -#!/usr/bin/env python3 - -import RPi.GPIO as GPIO -import os -from time import sleep -``` - -Next, define your two LED pins and sensor pin. Earlier in this post, I provided these pin numbers while wiring the card, so you can see those exact numbers below. - - -``` -ledPin1 = 12 # define ledPins -ledPin2 = 22 -sensorPin = 11 # define sensorPin -``` - -Since you have two lights to set up to flicker together in this example, I also created a defined array to use later: - -`leds = [ledPin1, ledPin2]` - -Next, define the setup of the board and pins using the RPi.GPIO package. To do this, set the mode on the board. I chose to use the physical numbering system in my setup, but you can use the BCM if you prefer. Remember that you can never use both. Here's an example of each: - - -``` -# for GPIO numbering, choose BCM -GPIO.setmode(GPIO.BCM) -  -# or, for pin numbering, choose BOARD -GPIO.setmode(GPIO.BOARD) -``` - -For this example, use the pin numbering in my setup. Set the two pins to output mode, which means all commands output to the lights. Then, set the sensor to input mode so that as the sensor sees movement, it inputs the data to the board to output the lights. This is what these definitions look like: - - -``` -def setup(): - GPIO.setmode(GPIO.BOARD) # use PHYSICAL GPIO Numbering - GPIO.setup(ledPin1, GPIO.OUT) # set ledPin to OUTPUT mode - GPIO.setup(ledPin2, GPIO.OUT) # set ledPin to OUTPUT mode - GPIO.setup(sensorPin, GPIO.IN) # set sensorPin to INPUT mode -``` - -Now that the board and pins are defined, you can put together your main function. For this, I use the array in a `for` loop, then an if statement based on the sensor input. If you are unfamiliar with these functions, you can check out this [quick guide][17]. - -If the sensor receives input, the LED output is high (powered on) for .03 seconds, then low (powered off) while printing the message `led turned on.` If the sensor receives no input, the LEDs are powered down while printing the message `led turned off`. - - -``` -def main(): - while True: - for led in leds: - if GPIO.input(sensorPin)==GPIO.HIGH: - GPIO.output(led, GPIO.HIGH) - sleep(.05) - GPIO.output(led, GPIO.LOW) - print ('led turned on >>>') - else : - GPIO.output(led, GPIO.LOW) # turn off led - print ('led turned off <<<') -``` - -While you can mathematically choose the brightness level, I found it easier to set the sleep timer between powering on and powering off. I set this after many tests of the amount of time needed to create a flickering candle effect. - -Finally, you need some clean up to release your resources when the program is ended: - - -``` -def destroy(): - GPIO.cleanup() # Release GPIO resource -``` - -Now that everything has been defined to run, you can run your code. Start the program, run the setup, try your main, and if a KeyboardInterrupt is received, destroy and clean everything up. - - -``` -if __name__ == '__main__': # Program entrance - print ('Program is starting...') - setup() - try: - main() - except KeyboardInterrupt: # Press ctrl-c to end the program. 
- destroy() -``` - -Now that you've created your program, the final result should look like this: - - -``` -#!/usr/bin/env python3 - -import RPi.GPIO as GPIO -import os -from time import sleep - -ledPin1 = 12 # define ledPins -ledPin2 = 22 -sensorPin = 11 # define sensorPin -leds = [ledPin1, ledPin2] - -def setup(): - GPIO.setmode(GPIO.BOARD) # use PHYSICAL GPIO Numbering - GPIO.setup(ledPin1, GPIO.OUT) # set ledPin to OUTPUT mode - GPIO.setup(ledPin2, GPIO.OUT) # set ledPin to OUTPUT mode - GPIO.setup(sensorPin, GPIO.IN) # set sensorPin to INPUT mode - -  -def main(): - while True: - for led in leds: - if GPIO.input(sensorPin)==GPIO.HIGH: - GPIO.output(led, GPIO.HIGH) - sleep(.05) - GPIO.output(led, GPIO.LOW) - print ('led turned on >>>') - else : - GPIO.output(led, GPIO.LOW) # turn off led - print ('led turned off <<<') -  - -def destroy(): - GPIO.cleanup() # Release GPIO resource - -if __name__ == '__main__': # Program entrance - print ('Program is starting...') - setup() - try: - main() - except KeyboardInterrupt: # Press ctrl-c to end the program. - destroy() -``` - -When it runs, it should look similar to this. (Note: I was still testing with sleep time during this recording.) - -### Time to bake that pumpkin - -To start, I had a very large plastic pumpkin gifted by our family to my husband and me. - -![A large, smiling orange jack o'lantern][18] - -Jess Cherry CC BY-SA 4.0 - -Originally, it had a plug in the back with a bulb that was burnt out, which is what inspired this idea in the first place. I realized I'd have to make some modifications, starting with cutting a hole in the bottom using a drill and jab saw. - -![A man drilling a hole in the bottom of a large plastic jack o'lantern][19] - -Jess Cherry CC BY-SA 4.0 - -![A hole that takes up most of the bottom of the plastic jack o'lantern][20] - -Jess Cherry CC BY-SA 4.0 - -Luckily, the pumpkin already had a hole in the back for the cord leading to the original light. I could stuff all the equipment inside the pumpkin, but I needed a way to hide the sensor. - -First, I had to make a spot for the sensor to be wired externally to the pumpkin, so I drilled a hole by the stem: - -![A small hole drilled in the brown stem of the jack o'lantern][21] - -Jess Cherry CC BY-SA 4.0 - -Then I put all the wiring for the sensor through the hole, which ended up posing another issue: the sensor is big and weird-looking. I went looking for a decorative way to resolve this. - -![The sensor hanging around the stem of the pumpkin, and a spool of ribbon][22] - -Jess Cherry CC BY-SA 4.0 - -I did, in fact, make the scariest ribbon decoration (covered in hot glue gun mush) in all of humanity, but you won't notice the sensor. - -![A large bow with orange, black, and patterned ribbon completely covers the sensor][23] - -Jess Cherry CC BY-SA 4.0 - -Finally, I put the Pi and extension card in the pumpkin and cabeled the power through the back. - -![The breadboard and cables fit inside the hole in the bottom of the jack o'lantern][24] - -Jess Cherry CC BY-SA 4.0 - -With everything cabled, I was ready to VNC into my Pi and turn on the Python, then wait for something to move to test it out. - -![VNC viewer with Python file running][25] - -Jess Cherry CC BY-SA 4.0 - -![senseled.py running, showing led turned off switching to led turned on][26] - -Jess Cherry CC BY-SA 4.0 - -### Post baking notes - -This was a really long and very researched build. As I said in the introduction, this isn't a weekend project. 
I knew nothing about breadboards when I started, and it took me a while to recode and determine exactly what I wanted. There are some very granular details I did not include here. For example, the sensor has two knobs that define how far it can pick up motion and how long the sensor input needs to continue. While this was a fantastic thing to learn, I would definitely do a lot of research before pursuing this journey. - -I did not get to one part of the project that I really wanted: the ability to connect to a Bluetooth device and make spooky noises. That said, playing with a Raspberry Pi is always fun to do, whether with home automation, weather tracking, or just silly decorations. I hope you enjoyed this walk-through and feel inspired to try something similar yourself. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/halloween-raspberry-pi - -作者:[Jessica Cherry][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/cherrybomb -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pumpkins.jpg?itok=00mvIoJf (A vignette of green, orange, and yellow pumpkins in front of a brick wall) -[2]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_1.png (Raspberry Pi computer board) -[3]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_3.png (Cables and LEDs) -[4]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_4.png (Project supplies) -[5]: https://www.raspberrypi.com/documentation/computers/getting-started.html#using-raspberry-pi-imager -[6]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_5.png (Menu) -[7]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_6.png (Menu) -[8]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_7.png (Board and cable) -[9]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_8.png (Breadboard) -[10]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_9.png (Partially set-up breadboard) -[11]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_11.png (Extension board) -[12]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_12.png (Sensor) -[13]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_13.png (Sensor with cables) -[14]: https://opensource.com/sites/default/files/uploads/pumpkin_pi15.png (Breadboard with sensor cables attached) -[15]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_16.png (Light setup) -[16]: https://opensource.com/sites/default/files/uploads/pumpkinpi_bb.jpeg (Illustration of connections) -[17]: https://opensource.com/article/18/3/loop-better-deeper-look-iteration-python -[18]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_18.png (the pumpkin) -[19]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_19.png (Drilling) -[20]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_20.png (Pumpkin hole) -[21]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_21.png (Sensor hole) -[22]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_24.png (The unhidden sensor) -[23]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_25.png (Sensor disguise ribbons) -[24]: 
https://opensource.com/sites/default/files/uploads/pumpkin_pi_28.png (Enclosing the kit) -[25]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_29.png (VNC viewer) -[26]: https://opensource.com/sites/default/files/uploads/pumpkin_pi_31.png (LED turns on) diff --git a/sources/tech/20211028 5 lessons I learned about chaos engineering for Kubernetes.md b/sources/tech/20211028 5 lessons I learned about chaos engineering for Kubernetes.md deleted file mode 100644 index 6c3b5060e4..0000000000 --- a/sources/tech/20211028 5 lessons I learned about chaos engineering for Kubernetes.md +++ /dev/null @@ -1,77 +0,0 @@ -[#]: subject: "5 lessons I learned about chaos engineering for Kubernetes" -[#]: via: "https://opensource.com/article/21/10/chaos-engineering-kubernetes-ebook" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -5 lessons I learned about chaos engineering for Kubernetes -====== -To ensure that you're breaking things responsibly and intelligently, -download our new eBook about chaos engineering for Kubernetes. -![Scrabble letters spell out chaos for chaos engineering][1] - -Kubernetes is a complex framework for a complex job. Managing several containers can be complicated, and managing hundreds and thousands of them is essentially just not humanly possible. Kubernetes makes highly available and highly scaled cloud applications a reality, and it usually does its job remarkably well. However, people don't tend to notice the days and months of success. Months and years of smooth operation aren't the things that result in phone calls at 2 AM. In IT, it's the failures that count. And unfortunately, failures don't run on a schedule. - -[Jessica Cherry's][2] new eBook, **[Chaos engineering for Kubernetes][3]**, introduces several concepts about how system engineers can help test the robustness of the systems they've designed. Surprisingly, a big part of it is failure. Here are the top five lessons I've learned from Cherry's book. - -### Intentional failure is part of success - -It doesn't matter that you've done everything right. You've purchased bespoke hardware for the job, you've installed a stable distribution, purchased support, read the fine manuals, documented your process, automated recovery, made backups, and on and on. After all the prep work, there's only one thing you can be sure about: Something will go wrong eventually. - -It's not morbid to think that way because it's just what happens in technological and mechanical systems. Things fail. - -You can't stop things from failing, but you can _make_ them fail when it's convenient to you. Unfortunately, forcing a failure on your system doesn't "use up" all of your allotted failures for the year. Things will still fail unexpectedly, but by causing failure according to your own schedule, you ensure that you have the resources and knowledge you need to fix problems. - -### Randomized failure is part of resiliency - -You're not the only who needs to know how to handle failure. Your infrastructure needs to be able to withstand failure, too. While you can test some of this with scheduled failures, randomness helps ensure resiliency. After all, some failures will happen when you're not around to ensure that everything else still functions. 
Ideally, you want to develop the peace of mind that something could break without you ever knowing about it (but you will know about it eventually because you're monitoring your cluster. You are monitoring your cluster, right?). - -### Resiliency needs to happen in many places - -I'll never forget the first large-scale (200 users was large-scale for me, then) shared file server. It had an LVM pool of storage with plenty of space for additional hard drives, battery backup, a robust SAMBA back-end, an AMANDA-based backup routine, a fallback network, and easy admin access both locally and remotely. The server didn't need constant availability, so I had plenty of time to test it during the week, but it did require availability at specific times during the workday. It was well-used, and I was justly proud of it for several months. - -And then, one week, my file server ran out of hard drive space. No problem—I'd built it to have expandable storage, so it would be a simple matter of walking up to the server, sliding in a new drive, and continuing about my day. Except for one small glitch: The hard drives weren't hot-swappable on the hardware I'd purchased. (Who knew there were rack servers without hot-swappable drive bays?) The whole system had to be shut down for me to add storage to it, and of course, it happened on a Friday afternoon, when everybody's work was being rendered. - -Lesson learned: Resiliency isn't a fixed point in time. You don't design a system to be perfect at one specific moment; you design it so it can fail at any moment. - -It's hard to detect the weak spots in your design unless you cause failure at unexpected times and in unexpected places. - -### Chaos strengthens order - -I used to think that rigorous testing was a luxury. I thought it was something big teams could afford to do because they surely had dedicated QA people sitting in labs tinkering and disassembling carbon copies of what's in production. - -As I had the privilege of working on larger and larger teams, though, I found that more people only means there's a greater _potential_ for tests to happen. It never guarantees that tests are actually getting done. - -Chaos engineering is a practice anyone can adopt. Talk to your department, assemble a team, form a plan. Set up monitoring, make your cluster operation transparent, invite questions and challenges. Get a plan for formalized chaos engineering because Chaos strains Order and ultimately can make it stronger. - -### Kubernetes can be surprisingly fun - -People sometimes ask me what I do with my Raspberry Pi Kubernetes cluster. Admittedly, I don't personally run any vital services on my little open hybrid cloud. But as it turns out, there's a lot of fun to be had with a miniature super-computer (well, it's super to me, anyway.). Looking at pretty Grafana dashboards and playing Doom with pods are both fun, but so is the configuration, the challenge of testing my cluster's performance after a node's been suddenly removed from the network, trying to see how many times an SD card can survive improper removal (so far a lot, thanks probably to ext4), configuring two containers to interact with one another, coming to grips with the logical structures of namespaces and pods, and so on. - -At the end of the day, Kubernetes has given me my own cloud, and I frankly enjoy having that kind of power at my fingertips. - -Chaos engineering gives you permission to be a little wanton. It encourages you to be methodically reckless. And in the end, you get a more resilient system. 
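-
-If you want to see what even the simplest chaos experiment looks like in practice, here is a rough sketch of a randomized pod-kill script. To be clear, this is only an illustration of the idea, not anything taken from the eBook: it assumes you have the `kubernetes` Python client installed, a working kubeconfig, and a disposable namespace (the `chaos-playground` name below is made up). Point it only at a cluster you are allowed to break.
-
-```
-#!/usr/bin/env python3
-# Toy chaos experiment: delete one random pod in a throwaway namespace.
-# Assumes the `kubernetes` Python client (pip install kubernetes) and a
-# local kubeconfig; never run this against a cluster you care about.
-import random
-from kubernetes import client, config
-
-NAMESPACE = "chaos-playground"  # hypothetical test namespace
-
-def kill_random_pod(namespace: str = NAMESPACE) -> None:
-    config.load_kube_config()            # use the current kubectl context
-    v1 = client.CoreV1Api()
-    pods = v1.list_namespaced_pod(namespace=namespace).items
-    if not pods:
-        print("Nothing to break today.")
-        return
-    victim = random.choice(pods).metadata.name
-    print(f"Deleting pod {victim} in namespace {namespace}")
-    v1.delete_namespaced_pod(name=victim, namespace=namespace)
-
-if __name__ == "__main__":
-    kill_random_pod()
-```
-
-Run something like this on a schedule, or at random intervals, and you quickly find out whether your Deployments, probes, and alerts really behave the way you assumed they would.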
- -### Download the ebook - -Of course, you can't just try to aimlessly destroy your own computer and call it chaos engineering. Without discipline, documentation, and mitigation, it's just chaos. To ensure that you're breaking things responsibly and intelligently, download **[Chaos engineering for Kubernetes][3]**. And then let slip the monkeys of chaos! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/10/chaos-engineering-kubernetes-ebook - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brett-jordan-chaos-unsplash.jpg?itok=sApp5dVd (Scrabble letters spell out chaos for chaos engineering) -[2]: https://opensource.com/users/cherrybomb -[3]: https://opensource.com/downloads/chaos-engineering-kubernetes diff --git a/sources/tech/20211102 Flatpak Apps Look Out of Place- Here-s How to Apply GTK Themes on Flatpak Applications.md b/sources/tech/20211102 Flatpak Apps Look Out of Place- Here-s How to Apply GTK Themes on Flatpak Applications.md deleted file mode 100644 index 43d4567054..0000000000 --- a/sources/tech/20211102 Flatpak Apps Look Out of Place- Here-s How to Apply GTK Themes on Flatpak Applications.md +++ /dev/null @@ -1,157 +0,0 @@ -[#]: subject: "Flatpak Apps Look Out of Place? Here’s How to Apply GTK Themes on Flatpak Applications" -[#]: via: "https://itsfoss.com/flatpak-app-apply-theme/" -[#]: author: "Community https://itsfoss.com/author/itsfoss/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Flatpak Apps Look Out of Place? Here’s How to Apply GTK Themes on Flatpak Applications -====== - -One of the reasons why some users avoid installing Flatpak apps is that most [Flatpak][1] apps don’t change their appearance as per the current system theme. This makes the applications look out of the place in your otherwise beautiful set up. - -![Flatpak apps do not match system theme][2] - -The official way to apply GTK themes to Flatpak apps is by [installing the desired theme as a flatpak][3]. However, there are only a few GTK themes that can be installed as Flatpak. - -This means that if you found a [beautiful GTK theme][4], your Flatpak applications will still be using their default appearance. But wait! There is a workaround. - -In this tutorial, I am going to introduce you a way to make flatpak apps aware of external GTK themes. - -### Applying GTK themes to Flatpak applications (intermediate level) - -Before we proceed, let’s understand why flatpak apps have this behavior. - -Flatpak apps run inside a ‘container’, so they don’t have access to the host filesystem, network, or physical devices without explicitly setting the appropriate permission, and that is what we are going to do. - -As I said earlier, this is a workaround, not a flawless solution. **Don’t expect it to change the themes of Flatpak apps automatically when you change the system theme.** You can, however, change it for all Flatpak apps in one single command. - -Let’s see how to achieve that. Please note that this tutorial requires that you are a bit familiar with the Linux command line and you can find your way around the terminal. 
- -#### Step 1: Give Flatpak apps access to GTK themes location - -GTK themes are located in /usr/share/themes for all users, and in ~/.themes for a specific user. - -To **give all flatpak packages permission** to access ~/.themes run the following command: - -``` -sudo flatpak override --filesystem=$HOME/.themes -``` - -Notice that you can’t give access to /usr/share/themes because according to [flatpak documentaion they are black listed][5]. - -**Alternatively**, you can do this at per-application base as well. You need to specify the application ID for which you are going to change the theme. - -``` -sudo flatpak override org.gnome.Calculator --filesystem=$HOME/.themes -``` - -#### Step 2:Tell Flatpak apps which theme to use - -Giving access to ~/.themes is not enough because this directory may contain multiple themes. To tell flatpak which GTK theme to use, first get the name of the desired theme and then apply the following command: - -``` -sudo flatpak override --env=GTK_THEME=my-theme -``` - -As you can see in the screenshot below, there is two themes available, Ant-Bloody and Orchis-dark. _**Copy and paste the exact theme name**_ in the above command: - -![Set GTK themes for all Flatpak apps][6] - -Alternatively, for individual application, run: - -``` -sudo flatpak override org.gnome.Calculator --env=GTK_THEME=my-theme -``` - -and replace my-theme with the folder name of the theme you want to apply (and it must be located in ~/.themes). - -#### Step 3: Test the theme change by running a Flatpak app - -If the application was already running, you’ll have to close and start it again. You’ll see that the newly started application uses the theme that you had specified earlier. - -Here is a screenshot of GNOME calculator and GNOME builder (Flatpak version) before the above steps: - -![Flatpak applications with default Adwaita theme][7] - -And after the above steps (With Canta GTK theme):![][8]![][8] - -![Flatpak applications with Canta Dark theme][9] - -That’s better, right? Now, I could leave you here but it would be appropriate to mention the steps for reverting the changes. - -### Revert the changes - -You can reset the changes by resetting all the overrides at once. Please note that this will reset any previous overrides you had explicitly set. - -``` -sudo flatpak override --reset -``` - -Alternatively, you can reset permissions at package level as well: - -``` -sudo flatpak override --reset org.example.app -``` - -If you have previously overridden the GTK_THEME or filesystem for a specific Flatpak package using “flatpak override” resetting will help you set it again. - -### Additional information - -Normal GTK applications load GTK theme specified by gsettings, you can run the following command to get currently applied GTK themes: - -``` -gsettings get org.gnome.desktop.interface gtk-theme -``` - -And to set the GTK theme, run: - -``` -gsettings set org.gnome.desktop.interface gtk-theme my-theme -``` - -To do the above with Flatpak, you have to enter a shell session inside the container of the desired application by running: - -``` -flatpak run --command=bash org.gnome.Calculator -``` - -And inside this session, run the above command: - -``` -gsettings set org.gnome.desktop.interface gtk-theme my-theme -``` - -But that did not work with me, so I resorted to use GTK_THEME environment variable, which is supposed to be used for debugging purpose. If you managed to make gsettings work, then tell me in the comments. 
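-
-If you want to tie the two worlds together, you can script the hand-off yourself: read the host's theme with `gsettings` and feed it to `flatpak override` as the `GTK_THEME` value. Here is a rough sketch of that idea in Python. It assumes `gsettings` and `flatpak` are on your PATH and that the theme in question is already present in ~/.themes so the sandboxed apps can read it:
-
-```
-#!/usr/bin/env python3
-# Rough helper: copy the host's current GTK theme into a global Flatpak
-# override. Assumes the gsettings and flatpak commands are available and
-# that the theme also exists in ~/.themes (see Step 1 above).
-import subprocess
-
-def current_gtk_theme() -> str:
-    result = subprocess.run(
-        ["gsettings", "get", "org.gnome.desktop.interface", "gtk-theme"],
-        capture_output=True, text=True, check=True,
-    )
-    return result.stdout.strip().strip("'")  # gsettings wraps the value in quotes
-
-def apply_theme_to_flatpak(theme: str) -> None:
-    subprocess.run(
-        ["sudo", "flatpak", "override", f"--env=GTK_THEME={theme}"],
-        check=True,
-    )
-
-if __name__ == "__main__":
-    theme = current_gtk_theme()
-    print(f"Applying {theme} to all Flatpak applications")
-    apply_theme_to_flatpak(theme)
-```
-
-You still have to rerun it (and restart the affected applications) every time you switch themes, so it is a convenience rather than real integration.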
- -I know it’s not an automated solution but at least it gives you the option to change the themes for the Flatpak applications with a couple of commands. This way, you can make the Flatpak application integrate with the rest of the system. - -I hope this helped you. If you face any issues, please mention them in the comments. - -_**Author Info: This article has been contributed by It’s FOSS reader Hamza Algohary and edited by Abhishek Prakash. Hamza is a computer engineering student and a Linux and open source enthusiast. He also develops apps for Linux desktop. You can find his work on [his GitHub profile][10].**_ - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/flatpak-app-apply-theme/ - -作者:[Community][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/itsfoss/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/what-is-flatpak/ -[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatpak-apps-not-matching-system-theme.jpg?resize=800%2C450&ssl=1 -[3]: https://docs.flatpak.org/en/latest/desktop-integration.html#theming -[4]: https://itsfoss.com/best-gtk-themes/ -[5]: https://docs.flatpak.org/en/latest/sandbox-permissions.html#filesystem-access -[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/set-gtk-theme-to-flatpak-apps.png?resize=800%2C277&ssl=1 -[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatpak-adwaita.webp?resize=800%2C450&ssl=1 -[8]: https://itsfoss.com/flatpak-app-apply-theme/flatpak-canta-dark.png -[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatpak-canta-dark.webp?resize=800%2C450&ssl=1 -[10]: https://github.com/hamza-Algohary diff --git a/sources/tech/20211104 How do you tell if a problem is caused by DNS.md b/sources/tech/20211104 How do you tell if a problem is caused by DNS.md deleted file mode 100644 index 9e686a931a..0000000000 --- a/sources/tech/20211104 How do you tell if a problem is caused by DNS.md +++ /dev/null @@ -1,237 +0,0 @@ -[#]: subject: "How do you tell if a problem is caused by DNS?" -[#]: via: "https://jvns.ca/blog/2021/11/04/how-do-you-tell-if-a-problem-is-caused-by-dns/" -[#]: author: "Julia Evans https://jvns.ca/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How do you tell if a problem is caused by DNS? -====== - -I was looking into problems people were having with DNS a few months ago and I noticed one common theme – a lot of people have server issues (“my server is down! or it’s slow!“), but they can’t tell if the problem is caused by DNS or not. - -So here are a few tools I use to tell if a problem I’m having is caused by DNS, as well as a few DNS debuggging stories from my life. - -### I don’t try to interpret browser error messages - -First, let’s talk briefly about browser error messages. You might think that your browser will tell you if the problem is DNS or not! And it _could_ but mine doesn’t seem to do so in any obvious way. - -On my machine, if Firefox fails to resolve DNS for a site, it gives me the error: **Hmm. We’re having trouble finding that site. We can’t connect to the server at bananas.wizardzines.com.** - -But if the DNS succeeds and it just can’t establish a TCP connection to that service, I get the error: **Unable to connect. 
Firefox can’t establish a connection to the server at localhost:1324** - -These two error messages (“we can’t connect to the server” and “firefox can’t establish a connection to the server”) are so similar that I don’t try to distinguish them – if I see any kind of “connection failure” error in the browser, I’ll immediately go the command line to investigate. - -### tool 1: error messages - -I was complaining about browser error messages being misleading, but if you’re writing a program, there’s usually some kind of standard error message that you get for DNS errors. It often won’t say “DNS” in it, it’ll usually be something about “unknown host” or “name or service not found” or “getaddrinfo”. - -For example, let’s run this Python program: - -``` - - import requests - r = requests.get('http://examplezzz.com') - -``` - -This gives me the error message: - -``` - - socket.gaierror: [Errno -2] Name or service not known - -``` - -If I write the same program in Ruby, I get this error: - -``` - - Failed to open TCP connection to examplezzzzz.com:80 (getaddrinfo: Name or service not known - -``` - -If I write the same program in Java, I get: - -``` - - Exception in thread "main" java.net.UnknownHostException: examplezzzz.com - -``` - -In Node, I get: - -``` - - Error: getaddrinfo ENOTFOUND examplezzzz.com - -``` - -These error messages aren’t quite as uniform as I thought they would be, there are quite a few different error messages in different languages for exact the same problem, and it depends on the library you’re using too. But if you Google the error you can find out if it means “resolving DNS failed” or not. - -### tool 2: use `dig` to make sure it’s a DNS problem - -For example, the other day I was setting up a new subdomain, let’s say it was . - -I set up my DNS, but when I went to the site in Firefox, it wasn’t working. So I ran `dig` to check whether the DNS was resolving for that domain, like this: - -``` - - $ dig bananas.wizardzines.com - (empty response) - -``` - -I didn’t get a response, which is a failure. A success looks like this: - -``` - - $ dig wizardzines.com - wizardzines.com. 283 IN A 172.64.80.1 - -``` - -Even if my programming language gives me a clear DNS error, I like to use `dig` to independently confirm because there are still a lot of different error messages and I find them confusing. - -### tool 3: check against more than one DNS server - -There are LOTS of DNS servers, and they often don’t have the same information. So when I’m investigating a potential DNS issue, I like to query more than one server. - -For example, if it’s a site on the public internet I’ll both use my local DNS server (`dig domain.com`) and a big public DNS server like 1.1.1.1 or 8.8.8.8 or 9.9.9.9 (`dig @8.8.8.8 domain.com`). - -The other day, I’d set up a new domain, let’s say it was . - -Here’s what I did: - - 1. go to in a browser (spoiler: huge mistake!) - 2. go to my DNS provider and set up bananas.wizardzines.com - 3. try to go to in my browser. It fails! Oh no! - - - -I wasn’t sure why it failed, so I checked against 2 different DNS servers: - -``` - - $ dig bananas.wizardzines.com - $ dig @8.8.8.8 bananas.wizardzines.com - feedback.wizardzines.com. 300 IN A 172.67.209.237 - feedback.wizardzines.com. 300 IN A 104.21.85.200 - -``` - -From this I could see that `8.8.8.8` actually did have DNS records for my domain, and it was just my local DNS server that didn’t. 
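-
-(If you find yourself doing this comparison a lot, you can also script it. Here's a rough sketch using the `dnspython` library – just to show the idea, it's not what I used for the debugging above – that asks a few servers for the same name and prints what each one says.)
-
-```
-#!/usr/bin/env python3
-# Ask several DNS servers for the same name and compare their answers.
-# Sketch only: requires the dnspython package (pip install dnspython).
-import dns.resolver
-
-SERVERS = {"local default": None, "8.8.8.8": ["8.8.8.8"], "1.1.1.1": ["1.1.1.1"]}
-
-def lookup(name: str) -> None:
-    for label, nameservers in SERVERS.items():
-        resolver = dns.resolver.Resolver()   # reads /etc/resolv.conf by default
-        if nameservers:
-            resolver.nameservers = nameservers
-        try:
-            answer = resolver.resolve(name, "A")
-            ips = ", ".join(rdata.address for rdata in answer)
-            print(f"{label}: {ips}")
-        except Exception as exc:             # NXDOMAIN, NoAnswer, timeouts...
-            print(f"{label}: lookup failed ({exc.__class__.__name__})")
-
-if __name__ == "__main__":
-    lookup("bananas.wizardzines.com")
-```
-
-Whichever way you check, the picture in my case was the same: the big public resolvers knew about the new record and my ISP's resolver didn't.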
- -This was because I’d gone to in my browser before I’d created the DNS record (huge mistake!), and then my ISP’s DNS server cached the **absence** of a DNS record, so it was returning an empty response until the negative cached expired. - -I googled “negative cache time” and found a Stack Overflow post explaining where I could find the negative cache TTL (by running `dig SOA wizardzines.com`). It turned out the TTL was 3600 seconds or 1 hour, so I just needed to wait an hour for my ISP to update its cache. - -### tool 4: spy on the DNS requests being made with tcpdump - -Another of my favourite things to do is spy on the DNS requests being made and check if they’re failing. There are at least 3 ways to do this: - - 1. Use tcpdump (`sudo tcpdump -i any port 53`) - 2. Use wireshark - 3. Use a command line tool I wrote called [dnspeep][1], which is like tcpdump but just for DNS queries and with friendlier output - - - -I’m going to give you 2 examples of DNS problems I diagnosed by looking at the DNS requests being made with `tcpdump`. - -### problem: the case of the slow websites - -One day five years ago, my internet was slow. Really slow, it was taking 10+ seconds to get to websites. I thought “hmm, maybe it’s DNS!”, so started `tcpdump` and then opened one of the slow sites in my browser. - -Here’s what I saw in `tcpdump`: - -``` - - $ sudo tcpdump -n -i any port 53 - 12:05:01.125021 wlp3s0 Out IP 192.168.1.181.56164 > 192.168.1.1.53: 11760+ [1au] A? ask.metafilter.com. (59) - 12:05:06.191382 wlp3s0 Out IP 192.168.1.181.56164 > 192.168.1.1.53: 11760+ [1au] A? ask.metafilter.com. (59) - 12:05:11.145056 wlp3s0 Out IP 192.168.1.181.56164 > 192.168.1.1.53: 11760+ [1au] A? ask.metafilter.com. (59) - 12:05:11.746358 wlp3s0 In IP 192.168.1.1.53 > 192.168.1.181.56164: 11760 2/0/1 CNAME metafilter.com., A 54.244.168.112 (91) - -``` - -The first 3 lines are DNS requests, and they’re separated by 5 seconds. Basically this is my browser timing out its DNS queries and retrying them. - -Finally, on the 3rd query, a response comes back. - -I don’t actually know exactly why this happened, but I restarted my router and the problem went away. Hooray! - -(by the way the reason I know that this is the tcpdump output I got 5 years ago is that I wrote about it in my [zine on tcpdump][2], you can read that zine for free!) - -### problem: the case of the nginx failure - -Earlier this year, I was using to set up a website, and I was having trouble getting nginx to redirect to my site – all the requests were failing. - -I eventually got SSH access to the server and ran `tcpdump` and here’s what I saw: - -``` - - $ tcpdump -i any port 53 - 17:16:04.216161 IP6 fly-local-6pn.55356 > fdaa::3.53: 46219+ A? myservice.internal. (42) - 17:16:04.216197 IP6 fly-local-6pn.55356 > fdaa::3.53: 11993+ AAAA? myservice.internal. (42) - 17:16:04.216946 IP6 fdaa::3.53 > fly-local-6pn.55356: 46219 NXDomain- 0/0/0 (42) - 17:16:04.217063 IP6 fly-local-6pn.43938 > fdaa::3.53: 32351+ PTR? 3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.a.a.d.f.ip6.arpa. (90) - 17:16:04.218378 IP6 fdaa::3.53 > fly-local-6pn.55356: 11993- 1/0/0 AAAA fdaa:0:bff:a7b:aa2:d426:1ab:2 (70) - 17:16:04.461646 IP6 fdaa::3.53 > fly-local-6pn.43938: 32351 NXDomain 0/1/0 (154) - -``` - -This is a bit confusing to read, but basically: - - 1. nginx requests an A record - 2. nginx requests an AAAA record - 3. the DNS server returns an `NXDOMAIN` reply for the A record - 4. 
the DNS server returns a successful reply for the AAAA record, with an IPv6 address - - - -The `NXDOMAIN` reponse made nginx think that that domain didn’t exist, so it ignored the IPv6 address it got later. - -This was happening because there was a bug in the DNS server – according to the DNS spec it should have been returning `NOERROR` instead of `NXDOMAIN` for the A record. I reported the bug and they fixed it right away. - -I think it would have been literally impossible for me to guess what was happening here without using `tcpdump` to see what queries nginx was making. - -### if there are no DNS failures, it can still be a DNS problem - -I originally wrote “if you can see the DNS requests, and there are no timeouts or failures, the problem isn’t DNS”. But someone on Twitter [pointed out][3] that this isn’t true! - -One way you can have a DNS problem even without DNS failures is if your program is doing its own DNS caching. Here’s how that can go wrong: - - 1. Your program makes a DNS request and caches the result - 2. 6 days pass - 3. Your program never updates its IP address - 4. The IP address for the site changes - 5. You start getting errors - - - -This _is_ a DNS problem (your program should be requesting DNS updates more often!) but you have to diagnose it by noticing that there are _missing_ DNS queries. This one is very tricky and the error messages you’ll get won’t look like they have anything to do with DNS. - -### that’s all for now - -This definitely isn’t a complete list of ways to tell if it’s DNS or not, but I hope it helps! - -I’d love to hear methods of checking “is it DNS?” that I missed – I’m pretty sure I’ve missed at least one important method. - --------------------------------------------------------------------------------- - -via: https://jvns.ca/blog/2021/11/04/how-do-you-tell-if-a-problem-is-caused-by-dns/ - -作者:[Julia Evans][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://jvns.ca/ -[b]: https://github.com/lujun9972 -[1]: https://github.com/jvns/dnspeep/ -[2]: https://wizardzines.com/zines/tcpdump/ -[3]: https://twitter.com/0x2ba22e11/status/1456305123420950530 diff --git a/sources/tech/20211107 10 eureka moments of coding in the community.md b/sources/tech/20211107 10 eureka moments of coding in the community.md deleted file mode 100644 index 630a285887..0000000000 --- a/sources/tech/20211107 10 eureka moments of coding in the community.md +++ /dev/null @@ -1,211 +0,0 @@ -[#]: subject: "10 eureka moments of coding in the community" -[#]: via: "https://opensource.com/article/21/11/community-code-stories" -[#]: author: "Jen Wike Huger https://opensource.com/users/jen-wike" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -10 eureka moments of coding in the community -====== -We asked our community to share about a time they sat down and wrote -code that truly made them proud. -![Woman sitting in front of her laptop][1] - -If you've written code, you know it takes practice to get good at it. Whether it takes months or years, there's inevitably a moment of epiphany. - -We wanted to hear about that time, so we asked our community to share about that time they sat down and wrote code that truly made them proud. - -* * * - -One of mine around coding goes back to college in the 70s. 
I learned about parsing arithmetic expressions and putting them into Reverse Polish notation. And then, I figured out that, just like multiplication is repeated addition, division is repeated subtraction. But you can make it quicker by starting with an appropriate power of 10.  And with that, I wrote a BASIC Plus program on a 16-bit PDP 11/45 running RSTS to do multi-precision arithmetic. And then, I added a bunch of subroutines for various calculations. I tested it by calculating PI to 45 digits. It ran for a half-hour but worked. I may have saved it to DECtape. —[Greg Scott][2] - -* * * - -In the mid-1990s, I worked as part of a small consulting team (three programmers) on production planning and scheduling for a large steel company. The app was to be delivered on Hewlett-Packard workstations (then quite the thing), and the GUI was to be done in XWindows. To our amazement, it was CommonLisp that came out with the first decent interface to Motif, which had (for the time) a very nice widget toolkit. As a result, the entire app was done in CommonLisp and performed acceptably on the workstations. It was great fun to do something commercial in Lisp. - -As you might guess, the company then wanted to port from the Hewlett-Packard workstations to something cheap, and so, about four years later, we rewrote the app in C with the expected boost in performance. —[Marty Kalin][3] - -* * * - -This topic brought an old memory back. Though I got many moments of self-satisfaction from writing the first C program to print a triangle to writing a validating admission webhook and operators for Kubernetes from scratch. - -For a long time, I saw and played games written in various languages, and so I had an irresistible itch to write a few possible games using bash shell script. - -I wrote the first one, tic-tac-toe and then Minesweeper, but they never got published until a couple of years back, when I committed them to GitHub, and people started liking them. - -I was glad to get an opportunity to have the [article published][4] on this website. —[Abhishek Tamrakar][5] - -* * * - -Although there've been other, more recent works, two rather long-in-the-tooth bits of a doggerel leap to mind, mostly because of the "Eureka!" moments when I was able to examine the output and verify that I had indeed understood the Rosetta Stones I was working with enough to decipher the cryptic data: - - * **UNPAL**: A cross-disassembler written in the DECsystem-10's MACRO-10 assembly language. It would take PDP-11 binaries and convert them back to the PDP-11 MACRO-11 assembler language. Many kudos to the folks writing the documentation back then, particularly, the DEC-10's 17 or so volumes, filled with great information and a fair amount of humor. UNPAL made its way out into the world and is probably the only piece of code that was used by people outside of either my schools or my workplace. (On the other hand, some of my documentation/tutorials got spread around on lots of external mailing lists and websites.) - * **MUNSTER**: Written in a language I had not yet learned, for an operating system I had never encountered before, on a computer that I'd only heard of, for a synthesizer I knew nothing about, using cryptic documentation. The language was C, the machine, an Atari 1040-ST (? ST-1040?), the OS—I don't remember, but it had something to do with GEM? And the synthesizer, a Korg M1—hence the name "munster" (m1-ster). It was quite the learning experience, studying all of the components simultaneously. 
The code would dump and restore the memory of eight synthesizers in the music lab. The Korg manual failed (IMHO) to really explain the data format. The appendix was a maze of twisty little passages all alike, with lots of "Note 8: See note 14 in Table 5. Table 5, Note 14: See notes 3, 4, and 7 in Table 2." Eventually, I deciphered from a picture without any real explanation, that when dumping data, every set of seven 8-bit bytes was being converted to eight 7-bit bytes, by stripping the high-order bit from each of the seven bytes and prefixing the seven high-order bits into an extra byte preceding the seven stripped bytes. This had to be figured out from a tiny illustration in the appendix (see attached screenshot from the manual): - - - -![Korg appendix illustration][6] - -(Kevin Cole, C[C BY-SA 4.0][7]) - -—[Kevin Cole][8] - -* * * - -For me, it is definitively GSequencer's synchronization function `AgsThread::clock()`. - -**Working with parallelism** - -During the development of GSequencer, I have encountered many obstacles. When I started the project, I was barely familiar with multi-threaded code execution. I was aware of `pthread_create()`, `pthread_mutex_lock()`, and `pthread_mutex_unlock()`. - -But what I needed was more complex synchronization functionality. There are mainly three choices available—conditional locks, barriers, and semaphores. I decided for conditional locks since it is available from GLib-2.0 threading API. - -Conditional locks usually don't proceed with program flow until a condition within a loop turns to be FALSE. So in one thread, you do, for example: - - -``` - - -gboolean start_wait; -gboolean start_done = FALSE; - -static GCond cond; -static GMutex mutex; - -/* conditional lock */ -g_mutex_lock(&mutex); - -if(!start_done){ -  start_wait = TRUE; - -  while(start_wait && -        !start_done){ -      g_cond_wait(&cond, -                  &mutex); -  } -} - -g_mutex_unlock(&mutex); - -``` - -Within another thread, you can wake up the conditional lock, and if conditional evaluates to FALSE, the program flow proceeds for the waiting thread. - - -``` - - -/* signal conditional lock */ -g_mutex_lock(&mutex); - -start_done = TRUE; - -if(start_wait){ -  g_cond_signal(&cond); -} - -g_mutex_unlock(&mutex); - -``` - -Libags provides a thread wrapper built on top of GLib's threading API. The `AgsThread` object synchronizes the thread tree by `AgsThread::clock()` event. It is some kind of parallelism trap. - -![GSequencer threads][9] - -(Joel Krahemann, [CC BY-SA 4.0][7]) - -All threads within the tree synchronize to `AgsThread:max-precision` per second because all threads shall have the very same time running in parallel. I talk of tic-based parallelism, with a max-precision of 1000 Hz, each thread synchronizes 1000 times within the tree—giving you strong semantics to compute a deterministic result in a multi-threaded fashion. - -Since we want to run tasks exclusively without any interference from competing threads, there is a mutex lock involved just after synchronization and then invokes `ags_task_launcher_sync_run()`. Be aware the conditional lock can be evaluated to be true for many threads. - -After how many tics the flow is repeated depends on sample rate and buffer size. 
If you have an `AgsThread` with max-precision 1000, the sample rate of 44100 common for audio CDs, and a buffer size of 512 frames, then the delay until it's repeated calculates as follows: - - -``` -`tic_delay = 1000.0 / 44100.0 * 512.0; // 11.609977324263039` -``` - -As you might have pre-/post-synchronization needing three tics to do its work, you get eight unused tics. - -Pre-synchronization is used for reading from a soundcard or MIDI device. The intermediate tic does the actual audio processing. Post-synchronization is used by outputting to the soundcard or exporting to an audio file. - -To get this working, I went through heights and depths. This is especially because you can't hear or see a thread. GDB's batch debugging helped a lot. With batch debugging, you can retrieve a stack trace of a running process. —[Joël Kräheman][10] - -* * * - -I don't know that I've written any code to be particularly proud of—me being a neurodiverse programmer may mean my case is that I'm only an average programmer with specific strengths and weaknesses. - -However, many years ago, I did some coding in C with basic examples in parallel virtual machines, which I was very happy when I got them working. - -More than ten years ago, I had a programming course where I taught Java to adult students, and I'm happy that I was able to put that course together. - -I'm recently happy that I managed to help college students with disabilities bug test code as a part-time job. —[Rikard Grossman-Nielsen][11] - -* * * - -Like others, this made me think back aways. I don't really consider myself a developer, but I have done some along the way.  The thing that stuck out for me is the epiphany factor, or "moment of epiphany," as you said. - -When I was a student at UNCW, I worked for the OIT network group managing the network for the dormitories. The students all received their IP address registrations using Bootstrap Protocol (BOOTP)—the predecessor to DHCP. The configuration file was maintained by hand in the beginning when we only had about 30 students. This was the very first year that the campus offered Internet to students! The next year, as more dorms got wired, the numbers grew and quickly reached over 200. I wrote a small C program to maintain the config file. The epiphany and "just plain old neat part" was that my code could touch and manipulate something "real" outside itself. In this case, a file on the system. I had a similar feeling later in a Java class when I learned how to read and write to a SQL server. - -Anyway, there was something cool about seeing a real result from a program. One more amazing thing is that the original binary, which was compiled on a Red Hat 5.1 Linux system, will still run on my current Fedora 34 Linux desktop!! —[Alan Formy-Duval][12] - -* * * - -At the age of 18, I was certainly proud when I wrote a Virtual Basic application for a small company to automate printing AutoCAD files in bulk. At that time, it was the first "complex" application I wrote. Many user interactions were needed to configure the printer settings. In addition, the application was integrated with AutoCAD using Com ActiveX. It was challenging. The company kept using it until recently. The application stopped working because of an incompatibility issue with Windows 10. They used it for 18 years without issues! - -I've been tasked to rewrite the application using today's technology. I've written the [new version][13] in Python. Looking back at the code I wrote was funny. It was so clumsy. 
- -Attached is a screenshot of the first version.  - -![Original VB printing app][14] - -(Patrik Dufresne, [CC BY-SA 4.0][7]) - -—[Patrik Dufresne][15] - -* * * - -I once integrated GitHub with the Open Humans platform, which was part of my Outreachy project back in 2019. That was my venture into Django, and I learned a lot about APIs and rate limits in the process. - -Also, very recently, I started working with Quarkus and started building REST, GraphQl APIs with it. I found it really cool. —[Manaswini Das][16] - -* * * - -Around 1998, I got bored and decided to write a game. Inspired by an old Mac game from the 1980s, I decided to create a "simulation" game where the user constructed a simple "program" to control a virtual robot and then explore a maze. The environment was littered with prizes and energy pellets to power your robot—but also contained enemies that could damage your robot if it ran into them. I added an energy "cost" so that every time your robot moved or took any action, it used up a little bit of its stored energy. So you had to balance "picking up prizes" with "finding energy pellets." The goal was to pick up as many prizes before you ran out of energy. - -I experimented with using GNU Guile (a Scheme extension language) as a programming "backend," which worked well, even though I don't really know Scheme. I figured out enough Scheme to write some interesting robot programs. - -And that's how I wrote GNU Robots. It was just a quick thing to amuse myself, and it was fun to work on and fun to play. Later, other developers picked it up and ran with it, making major improvements to my simple code. It was so cool to rediscover a few years ago that you can still compile GNU Robots and play around with them. Congratulations to the new maintainers for keeping it going. 
—[Jim Hall][17] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/11/community-code-stories - -作者:[Jen Wike Huger][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jen-wike -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop) -[2]: https://opensource.com/users/greg-scott -[3]: https://opensource.com/users/mkalindepauledu -[4]: https://opensource.com/article/19/9/advanced-bash-building-minesweeper -[5]: https://opensource.com/users/tamrakar -[6]: https://opensource.com/sites/default/files/uploads/kevincole_korg.png (Korg appendix illustration) -[7]: https://creativecommons.org/licenses/by-sa/4.0/ -[8]: https://opensource.com/users/kjcole -[9]: https://opensource.com/sites/default/files/uploads/joelkrahemann_ags-threading.png (BSequencer threads) -[10]: https://opensource.com/users/joel2001k -[11]: https://opensource.com/users/rikardgn -[12]: https://opensource.com/users/alanfdoss -[13]: https://gitlab.com/ikus-soft/batchcad -[14]: https://opensource.com/sites/default/files/uploads/patrikdufresne_vb-cadprinting.png (Original VB printing app) -[15]: https://opensource.com/user_articles/447861 -[16]: https://opensource.com/user_articles/380116 -[17]: https://opensource.com/users/jim-hall diff --git a/sources/tech/20211108 Write your first CI-CD pipeline in Kubernetes with Tekton.md b/sources/tech/20211108 Write your first CI-CD pipeline in Kubernetes with Tekton.md deleted file mode 100644 index ebd326aca7..0000000000 --- a/sources/tech/20211108 Write your first CI-CD pipeline in Kubernetes with Tekton.md +++ /dev/null @@ -1,223 +0,0 @@ -[#]: subject: "Write your first CI/CD pipeline in Kubernetes with Tekton" -[#]: via: "https://opensource.com/article/21/11/cicd-pipeline-kubernetes-tekton" -[#]: author: "Savita Ashture https://opensource.com/users/savita-ashture" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Write your first CI/CD pipeline in Kubernetes with Tekton -====== -Tekton is a Kubernetes-native open source framework for creating -continuous integration and continuous delivery (CI/CD) systems. -![Plumbing tubes in many directions][1] - -Tekton is a Kubernetes-native open source framework for creating continuous integration and continuous delivery (CI/CD) systems. It also helps to do end-to-end (build, test, deploy) application development across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details. - -### Introduction to Tekton - -[Tekton][2], known initially as [Knative Build][3], later got restructured as its own open source project with its own [governance organization][4] and is now a [Linux Foundation][5] project. Tekton provides an in-cluster container image build and deployment workflow—in other words, it is a continuous integration (CI) and continuous delivery (CD) service. It consists of Tekton Pipelines and several supporting components, such as Tekton CLI, Triggers, and Catalog. - -Tekton is a Kubernetes native application. 
It installs and runs as an extension on a Kubernetes cluster and comprises a set of Kubernetes Custom Resources that define the building blocks you can create and reuse for your pipelines. Because it's a K-native technology, Tekton is remarkably easy to scale. When you need to increase your workload, you can just add nodes to your cluster. It's also easy to customize because of its extensible design and thanks to a community repository of contributed components. - -Tekton is ideal for developers who need CI/CD systems to do their work and platform engineers who build CI/CD systems for developers in their organization. - -### Tekton components - -Building CI/CD pipelines is a far-reaching endeavor, so Tekton provides tools for every step of the way. Here are the major components you get with Tekton: - - * **Pipeline: **Pipeline defines a set of Kubernetes [Custom Resources][6] that act as building blocks you use to assemble your CI/CD pipelines. - * **Triggers: **Triggers is a Kubernetes Custom Resource that allows you to create pipelines based on information extracted from event payloads. For example, you can trigger the instantiation and execution of a pipeline every time a merge request gets opened against a Git repository. - * **CLI:** CLI provides a command-line interface called `tkn` that allows you to interact with Tekton from your terminal. - * **Dashboard:** Dashboard is a web-based graphical interface for Tekton pipelines that displays information about the execution of your pipelines. - * **Catalog:** Catalog is a repository of high-quality, community-contributed Tekton building blocks (tasks, pipelines, and so on) ready for use in your own pipelines. - * **Hub:** Hub is a web-based graphical interface for accessing the Tekton catalog. - * **Operator:** Operator is a Kubernetes [Operator pattern][7] that allows you to install, update, upgrade, and remove Tekton projects on a Kubernetes cluster. - * **Chains: **Chains is a Kubernetes Custom Resource Definition (CRD) controller that allows you to manage your supply chain security in Tekton. It is currently a work-in-progress. - * **Results: **Results aims to help users logically group CI/CD workload history and separate out long-term result storage away from the pipeline controller. - - - -### Tekton terminology - -![Tekton terminology][8] - -(Source: [Tekton documentation][9]) - - * **Step:** A step is the most basic entity in a CI/CD workflow, such as running some unit tests for a Python web app or compiling a Java program. Tekton performs each step with a provided container image. - - * **Task:** A task is a collection of steps in a specific order. Tekton runs a task in the form of a [Kubernetes pod][10], where each step becomes a running container in the pod. - - * **Pipelines:** A pipeline is a collection of tasks in a specific order. Tekton collects all tasks, connects them in a directed acyclic graph (DAG), and executes the graph in sequence. In other words, it creates a number of Kubernetes pods and ensures that each pod completes running successfully as desired. - -![Tekton pipelines][11] - -(Source: [Tekton documentation][12]) - - * **PipelineRun: **A PipelineRun, as its name implies, is a specific execution of a pipeline. - - * **TaskRun:** A TaskRun is a specific execution of a task. TaskRuns are also available when you choose to run a task outside a pipeline, with which you may view the specifics of each step execution in a task. 
- - - - -### Create your own CI/CD pipeline - -The easiest way to get started with Tekton is to write a simple pipeline of your own. If you use Kubernetes every day, you're probably comfortable with YAML, which is precisely how Tekton pipelines are defined. Here's an example of a simple pipeline that clones a code repository. - -First, create a file called `task.yam`**l** and open it in your favorite text editor. This file defines the steps you want to perform. In this example, that's cloning a repository, so I've named the step clone. The file sets some environment variables and then provides a simple shell script to perform the clone. - -Next comes the task. You can think of a step as a function that gets called by the task, and the task sets parameters and workspaces required for steps. - - -``` - - -apiVersion: tekton.dev/v1beta1 -kind: Task -metadata: - name: git-clone -spec: - workspaces: -   - name: output -     description: The git repo will be cloned onto the volume backing this Workspace. - params: -   - name: url -     description: Repository URL to clone from. -     type: string -   - name: revision -     description: Revision to checkout. (branch, tag, sha, ref, etc...) -     type: string -     default: "" - steps: -   - name: clone -     image: "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.21.0" -     env: -       - name: PARAM_URL -         value: $(params.url) -       - name: PARAM_REVISION -         value: $(params.revision) -       - name: WORKSPACE_OUTPUT_PATH -         value: $(workspaces.output.path) -     script: | -      #!/usr/bin/env sh -       set -eu - -       CHECKOUT_DIR="${WORKSPACE_OUTPUT_PATH}" - -       /ko-app/git-init \ -         -url="${PARAM_URL}" \ -         -revision="${PARAM_REVISION}" \ -         -path="${CHECKOUT_DIR}" -       cd "${CHECKOUT_DIR}" -       EXIT_CODE="$?" -       if [ "${EXIT_CODE}" != 0 ] ; then -         exit "${EXIT_CODE}" -       fi -       # Verify clone is success by reading readme file. -       cat ${CHECKOUT_DIR}/README.md - -``` - -Create a second file called `pipeline.yaml`, and open it in your favorite text editor. This file defines the pipeline by setting important parameters, such as a workspace where the task can be run and processed. - - -``` - - -apiVersion: tekton.dev/v1beta1 -kind: Pipeline -metadata: - name: cat-branch-readme -spec: - params: -   - name: repo-url -     type: string -     description: The git repository URL to clone from. -   - name: branch-name -     type: string -     description: The git branch to clone. - workspaces: -   - name: shared-data -     description: | -      This workspace will receive the cloned git repo and be passed -       to the next Task for the repo's README.md file to be read. - tasks: -   - name: fetch-repo -     taskRef: -       name: git-clone -     workspaces: -       - name: output -         workspace: shared-data -     params: -       - name: url -         value: $(params.repo-url) -       - name: revision -         value: $(params.branch-name) - -``` - -Finally, create a file called `pipelinerun.yaml` and open it in your favorite text editor. This file actually runs the pipeline. It invokes parameters defined in the pipeline (which, in turn, invokes the task defined by the task file.) 
- - -``` - - -apiVersion: tekton.dev/v1beta1 -kind: PipelineRun -metadata: - name: git-clone-checking-out-a-branch -spec: - pipelineRef: -   name: cat-branch-readme - workspaces: -   - name: shared-data -     volumeClaimTemplate: -       spec: -         accessModes: -          - ReadWriteOnce -         resources: -           requests: -             storage: 1Gi - params: -   - name: repo-url -     value: -   - name: branch-name -     value: release-v0.12.x - -``` - -The advantage of structuring your work in separate files is that the `git-clone` task is reusable for multiple pipelines. - -For example, suppose you want to do end-to-end testing for a pipeline project. You can use the `git-clone`** **task to ensure that you have a fresh copy of the code you need to test. - -### Wrap up - -As long as you're familiar with Kubernetes, getting started with Tekton is as easy as adopting any other K-native application. It has plenty of tools to help you create pipelines and to interface with your pipelines. If you love automation, try Tekton! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/11/cicd-pipeline-kubernetes-tekton - -作者:[Savita Ashture][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/savita-ashture -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions) -[2]: https://github.com/tektoncd/pipeline -[3]: https://github.com/knative/build -[4]: https://cd.foundation/ -[5]: https://www.linuxfoundation.org/projects/ -[6]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ -[7]: https://operatorhub.io/what-is-an-operator -[8]: https://opensource.com/sites/default/files/uploads/tekto-terminology.png (Tekton terminology) -[9]: https://tekton.dev/docs/concepts/concept-tasks-pipelines.png -[10]: https://kubebyexample.com/en/concept/pods -[11]: https://opensource.com/sites/default/files/uploads/tekton-pipelines.png (Tekton pipelines) -[12]: https://tekton.dev/docs/concepts/concept-runs.png diff --git a/sources/tech/20211109 7 Free and Open Source Plotting Tools -For Maths and Stats.md b/sources/tech/20211109 7 Free and Open Source Plotting Tools -For Maths and Stats.md deleted file mode 100644 index e14d2da0d1..0000000000 --- a/sources/tech/20211109 7 Free and Open Source Plotting Tools -For Maths and Stats.md +++ /dev/null @@ -1,159 +0,0 @@ -[#]: subject: "7 Free and Open Source Plotting Tools [For Maths and Stats]" -[#]: via: "https://itsfoss.com/open-source-plotting-apps/" -[#]: author: "Marco Carmona https://itsfoss.com/author/marco/" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -7 Free and Open Source Plotting Tools [For Maths and Stats] -====== - -We live in a world where almost everything we have generates data. Data, which can be analyzed and visualized thanks to tools that create graphs showing the relation between variables. - -These tools are famously called “plotting apps”. They can be used for basic maths task in school to professional scientific projects. They can also be used for adding stats and data to presentations. 
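-
-To give you a quick taste of what such a tool does, here is a tiny example with Matplotlib, the first entry in the list below. A handful of lines of Python is enough to turn a function or a column of numbers into a finished graph (this is only a minimal sketch; the library can do far more):
-
-```
-#!/usr/bin/env python3
-# Minimal Matplotlib example: plot a sine curve and save it as a PNG file.
-import numpy as np
-import matplotlib.pyplot as plt
-
-x = np.linspace(0, 2 * np.pi, 200)      # 200 evenly spaced sample points
-plt.plot(x, np.sin(x), label="sin(x)")
-plt.xlabel("x")
-plt.ylabel("y")
-plt.title("A first plot")
-plt.legend()
-plt.savefig("sine.png")                 # or plt.show() for an interactive window
-```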
- -There are plenty of free and open source plotting apps available for Linux. But in this article, I am listing some of the best plotting apps I have come across. - -### Best open source plotting apps - -I am deliberately skipping productivity suits like LibreOffice. They could allow you to add graphs and plots in the documents and slides but they are very basic in terms of functionality. - -Please also note that this is not a ranking list. The item at number one should not be considered better than the one at number five. - -#### 1\. Matplotlib - -![][1] - -[Matplotlib][2] is an open-source drawing library that supports many sketch types like plots, histograms, bar charts, and other types of diagrams. It’s mainly written in python; so if you have some knowledge of this programming language, Matplotlib can be your best option to start sketching your data. - -The advantages are focused on simplicity, friendly UI, and high-quality images, besides the various formats such as PNG, PDF etc. for plots. - -[Matplotlib][3] - -#### 2\. GnuPlot - -![][4] - -[GnuPlot][5] is a command-driven plotting program that accepts commands in the form of special words or letters for performing tasks. It can be employed to manipulate functions and data points in both two- and three-dimensional in many different styles and many different output formats. - -A special characteristic is that Gnuplot can also be used as a scripting language to automate the generation of plots. - -You can refer to our [documentation][6] if you want to explore more about it before getting started. - -[GnuPlot][7] - -#### 3\. Octave - -![][8] - -[GNU Octave][9] is more than just a plotting tool. It helps in solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. - -Some of its features are - - * A large set of built-in functionalities to solve many different problems. - * A complete programming language that enables you to extend GNU Octave. - * Plotting facilities. - - - -So, if you are interested in Octave, don’t be afraid and go to check its [documentation][10]. - -[Octave][11] - -#### 4\. Grace - -![][12] - -[Grace][13] is a tool to make two-dimensional plots of numerical data. Its capabilities are roughly similar to GUI-based programs like Octave plus script-based tools like Gnuplot or Genplot. In other words, it is a mix of a good user interface with the power of a scripting language. - -It’s important to mention that these two last characteristics let you do sophisticated calculations or perform automated tasks, which helps a lot when you’re analyzing any type of data. - -Other important aspect to mention is that it also brings tools like curve fitting, analysis capability, programmability, among others. So, if you want to know more about these helpful tools, go to its [official website][13] and check its other features. - -[Grace][14] - -#### 5\. LabPlot - -![][15] - -[LabPlot][16] is a program for two- and three-dimensional graphical presentation of data sets and functions. It comes with a complete user interface, which provides you with a lot of functions like Hilbert transform, statistics, color maps and conditional formatting, and its most recent [feature][17], Multi-Axes. - -LabPlot allows you to work with multiple plots which each can have multiple graphs. The graphs can be produced from data or from functions; depending on what you need. 
- -For more information, remember that the [documentation][18] and its [community][19] can be your best friend. - -[LabPlot][20] - -#### 6\. ROOT - -![][21] - -[ROOT][22] is a framework for data processing, which is created by the famous CERN lab which is at the heart of the research on high-energy physics. It is used to write petabytes of data recorded by the Large Hadron Collider experiments every year. - -This project is used every day by thousands of physicists who analyze their data or perform simulations, especially in high-energy areas. - -It is written in the C++ programming language for rapid and efficient prototyping and a persistence mechanism for C++ objects. If you don’t like C++, I have good news for you. It can be used with Python as well. - -[This project][23] is incredibly a complete toolkit, it can help you from creating a simple histogram to providing interactive graphics in web browsers. Awesome, isn’t it? - -[ROOT][24] - -#### 7\. Plots - -![][25] - -This last option is more dedicated to basic academic students who are begin introduced to the graphs and math functions. - -This open-source software called _**Plots**_ is a basic but powerful tool if you need to quickly visualize any data or math function in the least time possible. This is because it has not a lot of extra function, but notice that it doesn’t mean it has no power at the time of plotting. - -So, if you’re starting in this area of data visualization, surely this last option is the best for you, Also, I’d suggest you check our article about [Plots][26] to know how to set it up and get started. - -### Conclusion - -In my opinion, these open-source projects do more or less the same tasks; of course, some of them have more or fewer characteristics. The key is the way it generates the plotting; because one works with C as its programming language, while another works with Python. I suggest you to get informed about each of these plotting tools and choose the best that fits your tasks and necessities. - -Have you ever used one of the tools on this list? What is your favorite open-source tool for plotting? Please let us know in the comments below. - -If you found this article interesting, please take a minute to share it on social media; you can make a difference! 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/open-source-plotting-apps/ - -作者:[Marco Carmona][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/marco/ -[b]: https://github.com/lujun9972 -[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/matplotlib.png?w=600&ssl=1 -[2]: https://matplotlib.org/ -[3]: https://matplotlib.org/stable/users/installing.html -[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/gnuplot-1.png?w=600&ssl=1 -[5]: http://www.gnuplot.info/ -[6]: http://www.gnuplot.info/docs_5.4/Gnuplot_5_4.pdf -[7]: http://www.gnuplot.info/download.html -[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/octave-1.png?w=600&ssl=1 -[9]: https://www.gnu.org/software/octave/index# -[10]: https://www.gnu.org/software/octave/octave.pdf -[11]: https://www.gnu.org/software/octave/download -[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/grace-1.jpg?w=600&ssl=1 -[13]: https://plasma-gate.weizmann.ac.il/Grace/ -[14]: https://plasma-gate.weizmann.ac.il/pub/grace/ -[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/labplot-1.png?w=600&ssl=1 -[16]: https://labplot.kde.org/ -[17]: https://labplot.kde.org/features/ -[18]: https://labplot.kde.org/documentation/ -[19]: https://labplot.kde.org/support/ -[20]: https://labplot.kde.org/download/ -[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/root.jpeg?w=600&ssl=1 -[22]: https://root.cern/ -[23]: https://root.cern/manual/ -[24]: https://root.cern/install/ -[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/meet_plots.png?w=600&ssl=1 -[26]: https://itsfoss.com/plots-graph-app/ diff --git a/sources/tech/20211110 How Knative unleashes the power of serverless.md b/sources/tech/20211110 How Knative unleashes the power of serverless.md deleted file mode 100644 index 9f30d88a99..0000000000 --- a/sources/tech/20211110 How Knative unleashes the power of serverless.md +++ /dev/null @@ -1,259 +0,0 @@ -[#]: subject: "How Knative unleashes the power of serverless" -[#]: via: "https://opensource.com/article/21/11/knative-serving-serverless" -[#]: author: "Savita Ashture https://opensource.com/users/savita-ashture" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -How Knative unleashes the power of serverless -====== -An exploration of how Knative Serving works in detail, how it achieves -the quick scaling it needs, and how it implements the features of -serverless. -![Ship captain sailing the Kubernetes seas][1] - -[Knative][2] is an open source project based on the [Kubernetes][3] platform for building, deploying, and managing serverless workloads that run in the cloud, on-premises, or in a third-party data center. Google originally started it with contributions from more than 50 companies. - -Knative allows you to build modern applications which are container-based and source-code-oriented. - -### Knative Core Projects - -Knative consists of two components: Serving and Eventing. It's helpful to understand how these interact before attempting to develop Knative applications. 
- -![Knative Serving and Eventing][4] - -(Savita Ashture, [CC BY-SA 4.0][5]) - -### Knative Serving  - -[Knative Serving][6] is responsible for features revolving around deployment and the scaling of applications you plan to deploy. This also includes network topology to provide access to an application under a given hostname.  - -Knative Serving focuses on: - - * Rapid deployment of serverless containers. - * Autoscaling includes scaling pods down to zero. - * Support for multiple networking layers such as Ambassador, Contour, Kourier, Gloo, and Istio for integration into existing environments. - * Give point-in-time snapshots of deployed code and configurations. - - - -### Knative Eventing - -[Knative Eventing][7] covers the [event-driven][8] nature of serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers that create events and event consumers, or [_sinks_][9], that receive events. - -Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks. - -In this article, I focus on the Serving project since it is the most central project of Knative and helps deploy applications. - -### The Serving project - -Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These objects get used to define and control how your serverless workload behaves on the cluster: - -![Knative Serving objects][10] - -(Savita Ashture, [CC BY-SA 4.0][5]) - - * **Service**: A Knative Service describes a combination of a _route_ and a _configuration_ as shown above. It is a higher-level entity that does not provide any additional functionality. It should make it easier to deploy an application quickly and make it available. You can define the service to always route traffic to the latest revision or a pinned revision. - -![Knative Service][11] - -(Savita Ashture, [CC BY-SA 4.0][5]) - - * **Route**: The Route describes how a particular application gets called and how the traffic gets distributed across the different revisions. There is a high chance that several revisions can be active in the system at any given time based on the use case in those scenarios. It's the responsibility of routes to split the traffic and assign to revisions. - - * **Configuration**: The Configuration describes what the corresponding deployment of the application should look like. It provides a clean separation between code and configuration and follows the [Twelve-Factor][12] App methodology. Modifying a configuration creates a new revision. - - * **Revision**: The Revision represents the state of a configuration at a specific point in time. A revision, therefore, gets created from the configuration. Revisions are immutable objects, and you can retain them for as long as useful. Several revisions per configuration may be active at any given time, and you can automatically scale up and down according to incoming traffic. - - - - -### Deploying an application using Knative Service - -To write an example Knative Service, you must have a Kubernetes cluster running. If you don't have a cluster, you can run a local [single-node cluster with Minikube][13]. Your cluster must have at least two CPUs and 4GB RAM available. - -You must also install Knative Serving and its required dependencies, including a networking layer with configured DNS. - -Follow the [official installation instructions][14] before continuing. 
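For a rough idea of what that setup involves, here is a sketch of the commands. The Minikube sizing matches the minimum mentioned above; the `knative-v1.0.0` version in the URLs is only a placeholder, so take the exact manifests and the networking layer (Kourier, Contour, Istio, and so on) from the official installation instructions linked above rather than from this sketch:

```
$ minikube start --cpus 2 --memory 4096

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-core.yaml
```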
- -Here's a simple YAML file (I call it `article.yaml`) that deploys a Knative Service: - - -``` - - -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: knservice - namespace: default -spec: - template: -   spec: -     containers: -       - image: docker.io/##DOCKERHUB_NAME##/demo - -``` - -Where `##DOCKERHUB_NAME##` is a username for `dockerhub`. - -For example, `docker.io/savita/demo`. - -This is a minimalist YAML definition for creating a Knative application. - -Users and developers can tweak YAML files by adding more attributes based on their unique requirements. - - -``` - - -$ kubectl apply -f article.yaml -service.serving.knative.dev/knservice created - -``` - -That's it! You can now observe the different resources available by using `kubectl` as you would for any other Kubernetes process. - -Take a look at the **service**: - - -``` - - -$ kubectl get ksvc - -NAME              URL                                                      LATESTCREATED                 LATESTREADY       READY   REASON -knservice                             knservice-00001               knservice-00001   True - -``` - - You can view the** configuration**: - - -``` - - -$ kubectl get configurations - -NAME         LATESTCREATED     LATESTREADY       READY   REASON -knservice    knservice-00001   knservice-00001   True - -``` - -You can also see the **routes**: - - -``` - - -$ kubectl get routes - -NAME          URL                                    READY   REASON -knservice       True - -``` - -You can view the **revision**: - - -``` - - -$ kubectl get revision - -NAME                       CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS - -knservice-00001            knservice                        1            True             1                 1 - -``` - -You can see the **pods** that got created: - - -``` - - -$ kubectl get pods - -NAME                                          READY    STATUS     RESTARTS   AGE -knservice-00001-deployment-57f695cdc6-pbtvj   2/2      Running    0          2m1s - -``` - -### Scaling to zero - -One of the properties of Knative is to scale down pods to zero if no request gets made to the application. This happens if the application does not receive any more requests for five minutes. - - -``` - - -$ kubectl get pods - -No resources found in default namespace. - -``` - -The application becomes scaled to zero instances and no longer needs any resources. And this is one of the core principles of Serverless: If no resources are required, then none are consumed. - -### Scaling up from zero - -As soon as the application is used again (meaning that a request comes to the application), it immediately scales to an appropriate number of pods. You can see that by using the [curl command][15]: - - -``` - - -$ curl -Hello Knative! - -``` - -Since scaling needs to occur first, and you must create at least one pod, the requests usually last a bit longer in most cases. Once it successfully finishes, the pod list looks just like it did before: - - -``` - - -$ kubectl get pods -NAME                                          READY    STATUS     RESTARTS   AGE -knservice-00001-deployment-57f695cdc6-5s55q   2/2      Running    0          3s - -``` - -### Conclusion - -Knative has all those best practices which a serverless framework requires. For developers who already use Kubernetes, Knative is an extension solution that is easily accessible and understandable. 
- -In this article, I've shown how Knative Serving works in detail, how it achieves the quick scaling it needs, and how it implements the features of serverless. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/11/knative-serving-serverless - -作者:[Savita Ashture][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/savita-ashture -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas) -[2]: https://knative.dev/docs/ -[3]: https://opensource.com/resources/what-is-kubernetes -[4]: https://opensource.com/sites/default/files/uploads/knative_serving-eventing.png (Knative Serving and Eventing) -[5]: https://creativecommons.org/licenses/by-sa/4.0/ -[6]: https://github.com/knative/serving -[7]: https://github.com/knative/eventing -[8]: https://www.redhat.com/architect/event-driven-architecture-essentials -[9]: https://knative.dev/docs/developer/eventing/sinks/ -[10]: https://opensource.com/sites/default/files/uploads/knative-serving.png (Knative Serving objects) -[11]: https://opensource.com/sites/default/files/uploads/knative-service.png (Knative Service) -[12]: https://12factor.net/ -[13]: https://opensource.com/article/18/10/getting-started-minikube -[14]: https://knative.dev/docs/admin/install/serving/install-serving-with-yaml/#install-the-knative-serving-component -[15]: https://www.redhat.com/sysadmin/use-curl-api diff --git a/sources/tech/20211116 Implement client-side search on your website with this JavaScript tool.md b/sources/tech/20211116 Implement client-side search on your website with this JavaScript tool.md deleted file mode 100644 index d2bffc13e7..0000000000 --- a/sources/tech/20211116 Implement client-side search on your website with this JavaScript tool.md +++ /dev/null @@ -1,283 +0,0 @@ -[#]: subject: "Implement client-side search on your website with this JavaScript tool" -[#]: via: "https://opensource.com/article/21/11/client-side-javascript-search-lunrjs" -[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma" -[#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Implement client-side search on your website with this JavaScript tool -====== -Lunr.js is a great option for providing search functionality on a -website or application. -![magnifying glass for search on a blue background][1] - -Search is a must-have for any website or application. A simple search widget can allow users to comb through your entire blog. Or allow customers to browse your inventory. Building a custom photo gallery? Add a search box. Website search functionality is available from a variety of third-party vendors. Or you can take the DIY approach and build the entire backend to answer search API calls. - -[Lunr.js][2] works on the client-side through JavaScript. Instead of sending calls to a backend, Lunr looks up search terms in an index built on the client-side itself. This avoids expensive back-and-forth network calls between the browser and your server. There are plenty of tutorials online to showcase Lunr's website search functionality. But you can actually use Lunr.js to search any array of JavaScript objects. 
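Before getting into that, here is a minimal sketch of what searching a plain array of objects looks like. The `documents` array and its fields are invented for illustration, and the snippet assumes the Lunr library is already loaded on the page; only the `lunr()` builder, `ref()`, `field()`, `add()` and `search()` calls come from Lunr itself:

```
const documents = [
  { id: 1, title: 'The Mysterious Affair at Styles' },
  { id: 2, title: 'The Murder of Roger Ackroyd' }
];

const idx = lunr(function () {
  this.ref('id');       // unique key for each object
  this.field('title');  // field whose text gets indexed

  documents.forEach(function (doc) {
    this.add(doc);
  }, this);
});

console.log(idx.search('murder'));  // returns matches with their ref and score
```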
- -In this how-to, I build a search index for the [top 100 books of all time][3]. After that, I show you how to pre-build the index for faster indexing. I'll also show you how to make the most of Lunr's search options. And finally, I'll show off [findmymastodon.com][4]—a real-world implementation of Lunr. - -### Getting started with Lunr.js - -Create a new HTML page called `lunr.html`. I use this file throughout this guide. At the top of `lunr.html`, call the main Lunr JS library. - - -``` -`` -``` - -**Note:** You can find the [complete code here][5] - -### Loading the dataset - -Next, create a variable called `my_big_json`. This variable will contain the JSON-ified string of the main dataset. Define the variable in `lunr.html` within the `` +- Call the [ClassicEditor.create()][3] method to display the editor.`` + +And that's it. A full web page with an embedded CKEditor 5: + +``` + + + + + CKEditor 5 – Classic editor + + + +

    <body>
        <h1>CKEditor 5 - cool, eh?</h1>
        <div id="editor">
            <p>This is some sample content for my new WYSIWYG editor.</p>
        </div>
        <script>
            ClassicEditor
                .create( document.querySelector( '#editor' ) )
                .catch( error => {
                    console.error( error );
                } );
        </script>
    </body>
</html>
+ + + +``` + +Open it in your browser to test the WYSIWYG editor: + +![CKEditor 5 running in the browser.][4] + +### Advanced WYSIWYG editing + +Yes, there are only three steps, and it's running. But this simple example also uncovers some typical challenges faced by an integrator of an external framework. + +- It's just a simple HTML website that misses the entire context of your app. +- The UI doesn't match your design system. +- Your app is written in React/Angular/Vue, or something else. +- You don't want to serve CDN scripts, and prefer to self-host. +- The feature set of the editor isn't what you need. +- Also, some of your users prefer Markdown to HTML or WYSIWYG "magic". + +So how do you resolve some of these issues? Most editor components allow for some degree of customization that's still cheaper than writing a custom script to properly handle user content creation. + +CKEditor 5 uses a plugin-based architecture, which provides excellent customizability and extensibility. By putting in some effort, you can benefit from a stable, popular, and reliable solution set up to look and work exactly as you want. Drupal 10, for example, built CKEditor 5 into its core and enriched it with some typical CMS functionality like a media library through custom plugins. + +What are some ways you can take advantage of all these customization options? Here are five that showcase its versatility: + +### 1. Flexible UI options + +The first thing to notice when looking at a component to integrate with your application is its user interface. The CKEditor 5 framework delivers a [few different UI types][5]. For example: + +- Classic Editor, used in the first example, offers a toolbar with an editing area placed in a specific position on the page. The toolbar stays visible when you scroll down the page, and the editor automatically grows with the content. +- The Document editor provides a similar editing experience to applications such as Microsoft Word or Google Docs, with a UI that resembles a paper document. +- If you're looking for distraction-free editing, where the content is placed in its target location on the web page without the editor UI getting in your way, you have a few options. The Inline Editor, Balloon Editor, and Balloon Block Editor all come with different types of floating toolbars that appear as needed. + +Besides that, you can play with the toolbar to move the buttons around, group them into drop-downs, use a multi-line toolbar, or hide some less-frequently needed buttons in the "three dots" or "more options" menu. Or, if you wish, move the entire toolbar to the bottom. + +It may also happen that you prefer to go the headless route. Plenty of projects use the powerful editing engine of CKEditor 5 but coupled with their own UI created in, for example, React. The most notable example is Microsoft Teams, believe it or not. Yes, it's using CKEditor 5. + +![Different types of WYSIWYG editor UI.][6] + +### 2. Choose a full-featured editor or a lightweight one + +In digital content editing, there's no "one size fits all" solution. Depending on the type of content you need, the feature set differs. CKEditor 5 has a plugin-based architecture and features are implemented in a highly decoupled and granular way. + +It's easy to get lost in all the possible features, sub-features, and configuration options without some guidance. 
Here are some useful resources to help you build the editor that's a perfect match for your use case: + +- Try the [feature-rich editor demo][7] to test some of the most popular features. +- Look at some other editor setups [on the demo page][5]. You can find the complete source code of each demo in the [ckeditor5-demos repository][8]. +- The entire **Features** section of the [documentation][9] explains all CKEditor 5 features, possible configuration options, toolbar buttons, and API. +- [CKEditor 5 online builder][10] is a quick and easy solution to build your custom editor in 5 steps. It allows you to choose the UI type, plugins, toolbar setup, and UI language and then download a ready-to-use editor bundle. + +![A full-featured and lightweight WYSIWYG editor.][11] + +### 3. Integrations with JavaScript frameworks + +The online builder and demos are a fun playground if you want to test a few solutions in a no-code fashion, but you may need tighter integration. You can also install CKEditor 5 with npm, or bundle it with your app using webpack or Vite. CKEditor 5 is written in pure TypeScript and JavaScript, so it's compatible with every JavaScript framework available. + +Four official integrations are available for the most popular frameworks: + +- Angular +- React +- Vue.js v2 +- Vue.js v3 + +For example, to set up the Classic Editor (used in my first example) in React, you can use this one-liner: + +``` +npx create-react-app ckeditor5-classic-demo \ +--template @ckeditor/ckeditor5-classic +``` + +### 4. Markdown and HTML + +For some developers, Markdown might feel like second nature. It has its limitations, though. For example, support for tables is quite basic. Still, for many users, crafting content in Markdown is much more efficient than using the editor UI to format it. + +And here's the fun part. Thanks to CKEditor's autoformatting, you can use Markdown syntax when writing, and the editor formats the content as you type.This is a nice compromise for covering the needs of both power users and users unfamiliar with Markdown and preferring to create rich text using the WYSIWYG UI. + +![YouTube Video][11] + +### 5. Different input and output + +Autoformatting is just one aspect of Markdown support in CKEditor 5. Another is that you can configure the editor to treat Markdown as its input and output format instead of HTML. + +![Support for Markdown source in CKEditor 5.][13] + +Here's another challenge. If you allow the users to input content in your app, they can create it there but also paste it from different sources (other websites, Microsoft Word, Google Docs). They naturally expect the structure and formatting of pasted text to be preserved. This may result in some nasty styles and unwanted elements making their way to your content, and you have to clean up. Instead of trying to reconcile these two potential clashes of interest by yourself, it's better to rely on a good editor that solves this problem for you. + +In the case of CKEditor 5, the **Paste from Office** feature provides great support for pasting content from Word or Google Docs, preserving the structure, and translating the formatting into semantic content. + +The default CKEditor 5 settings also prevent users from adding or pasting elements and styles unsupported by the feature set chosen for your editor. If you, as an integrator, configure the editor to support just links, lists, and basic styles such as bold or italic, then the user can't add tables, images, or YouTube videos. 
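As a rough sketch of such a restricted setup, reusing the classic build from the earlier example, a configuration along these lines exposes only basic styling, links and lists to the user (treat the exact option values as illustrative rather than a drop-in snippet):

```
ClassicEditor
    .create( document.querySelector( '#editor' ), {
        // show only basic text styling, links and lists in the toolbar;
        // removing a feature completely is done with the `removePlugins`
        // option or by preparing a custom build with just the plugins you need
        toolbar: [ 'bold', 'italic', 'link', 'bulletedList', 'numberedList' ]
    } )
    .catch( error => {
        console.error( error );
    } );
```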
+ +Then again, if you would like your editor to accept content that's not covered by your feature set or even not supported by any existing CKEditor 5 features, you can achieve that thanks to the so-called **General HTML support** functionality. This is useful for loading pre-existing content created in other tools, and can make your migration to CKEditor 5 easier. + +### Building custom plugins + +No matter how great a ready-made solution is, you may still need to customize it even more. After all, thanks to reusing an advanced WYSIWYG editing component, you've saved yourself plenty of time and coding effort. You may want to get your hands dirty and polish your solution a bit, for example, by creating custom plugins. + +Here are some useful resources to get you started: + +- [Creating your own plugins][14]: Documentation for developers. +- [Package generator][15]: A handy tool that helps you set up your plugin development environment. +- [CKEditor 5 inspector][16]: Debugging tools for editor internals like model, view, and commands. + +![An editor instance examined in CKEditor 5 inspector.][17] + +### How to get CKEditor + +CKEditor 5 is licensed under the terms of GPL 2 or later, but if you are running an open source project under an OSI-approved license incompatible with GPL, the CKEditor team is [happy to support you][18] with a no-cost license. + +CKEditor 5 is a powerful modern rich text editor framework that allows developers to build upon an open source, tested, and reliable editor. Start-ups, leading brands, and software providers use it to improve both their content creation and content production workflows. If your users value those benefits, [check it out][2]! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/website-text-editor-ckeditor + +作者:[Anna Tomanek][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/annatomanek +[b]: https://github.com/lkxed/ +[1]: https://github.com/ckeditor/ckeditor5 +[2]: https://ckeditor.com/ckeditor-5/ +[3]: https://ckeditor.com/docs/ckeditor5/latest/api/module_editor-classic_classiceditor-ClassicEditor.html#static-function-create +[4]: https://opensource.com/sites/default/files/2023-03/100002010000063E00000138357C421E3B4E1584.webp +[5]: https://ckeditor.com/ckeditor-5/demo/ +[6]: https://opensource.com/sites/default/files/2023-03/1000020100000641000003857D449853B71C3F41.webp +[7]: https://ckeditor.com/ckeditor-5/demo/feature-rich/ +[8]: https://github.com/ckeditor/ckeditor5-demos +[9]: https://ckeditor.com/docs/ckeditor5/latest/index.html +[10]: https://ckeditor.com/ckeditor-5/online-builder/ +[11]: https://opensource.com/sites/default/files/2023-03/100002010000064100000385F7C262D126752F22.webp +[12]: https://www.youtube.com/embed/8um84htrjXQ +[13]: https://opensource.com/sites/default/files/2023-03/10000201000006410000038504C1A8A0DEA95683.webp +[14]: https://ckeditor.com/docs/ckeditor5/latest/installation/advanced/plugins.html +[15]: https://www.npmjs.com/package/ckeditor5-package-generator +[16]: https://ckeditor.com/docs/ckeditor5/latest/framework/guides/development-tools.html#ckeditor-5-inspector +[17]: https://opensource.com/sites/default/files/2023-03/10000201000007F60000060EFCB2F92FBCC02B4E.webp +[18]: https://ckeditor.com/contact/ \ No newline at end of file diff --git 
a/sources/tech/20230404.0 ⭐️⭐️ Linux Terminal Basics 8 Move Files and Directories (Cut-Paste Operation).md b/sources/tech/20230404.0 ⭐️⭐️ Linux Terminal Basics 8 Move Files and Directories (Cut-Paste Operation).md new file mode 100644 index 0000000000..e92f9cd477 --- /dev/null +++ b/sources/tech/20230404.0 ⭐️⭐️ Linux Terminal Basics 8 Move Files and Directories (Cut-Paste Operation).md @@ -0,0 +1,284 @@ +[#]: subject: "Linux Terminal Basics #8: Move Files and Directories (Cut-Paste Operation)" +[#]: via: "https://itsfoss.com/move-files-linux/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Linux Terminal Basics #8: Move Files and Directories (Cut-Paste Operation) +====== + +![][1] + +Cut, copy and paste are part of everyday computing life. + +In the previous chapter, you learned about [copying files and folders][2] (directories) in the terminal. + +In this part of the Terminal Basics series, you'll learn about the cut-paste operation (moving) in the Linux terminal. + +### Moving or cut-paste? + +Alright! Cut-paste is not the correct technical term here. It is called moving files (and folders). + +Since you are new to the command line, you may find the term 'moving' confusing. + +When you copy a file to another location using the **cp** command, the source file remains in the same location. + +When you move a file to another location **using the mv command**, the source file no longer remains in the origin location. + +This is the same cut-paste operation (Ctrl+X and Ctrl+V) you do in a graphical file explorer. + +> 📋 Basically, moving files in the command line can be thought same as cut-paste in a graphical environment. + +### Moving files + +Linux has a dedicated mv command (short for move) for moving files and directories to other locations. + +And [using the mv command][3] is quite simple: + +``` +mv source_file destination_directory +``` + +The role of path comes to play here as well. You can use either the [absolute or relative path][4]. Whichever suits your need. + +Let's see this with an example. **You should practice along with it by replicating the example scenarios on your system**. + +This is the directory structure in the example: + +``` +[email protected]:~/moving_files$ tree +. +├── dir1 +│   ├── file_2 +│   └── file_3 +├── dir2 +│   └── passwd +├── dir3 +├── file_1 +├── file_2 +├── file_3 +├── file_4 +├── passwd +└── services + +3 directories, 9 files +``` + +Now, let's say I want to move the `file_1` to `dir3`. + +``` +mv file_1 dir3 +``` + +![Example of moving files in Linux using the mv command][5] + +#### Moving multiple files + +You can move multiple files to another location in the same mv command: + +``` +mv file1 file2 fileN destination_directory +``` + +Let's continue our example scenario to move multiple files. + +``` +mv file_2 file_3 file_4 dir3 +``` + +![Example of moving multiple files in Linux][6] + +> 🖥️ Move the files back to the current directory from + +``` +dir3 +``` + +. We need them in the next examples. + +#### Moving files with caution + +If the destination already has files with the same name, the destination files will be replaced immediately. At times, you won't want that. + +Like the cp command, the mv command also has an interactive mode with option `-i`. + +And the purpose is the same. Ask for confirmation before replacing the files at the destination. 
+ +``` +[email protected]:~/moving_files$ mv -i file_3 dir1 +mv: overwrite 'dir1/file_3'? +``` + +You can press N to deny replacement and Y or Enter to replace the destination file. + +![Example of moving interactively in Linux][7] + +#### Move but only update + +The mv command comes with some special options. One of them is the update option `-u`. + +With this, the destination file will only be replaced if the file being moved is newer than it. + +``` +mv -u file_name destination_directory +``` + +Here's an example. file_2 was modified at 10:39 and file_3 was modified at 10:06. + +``` +[email protected]:~/moving_files$ ls -l file_2 file_3 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:39 file_2 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:06 file_3 +``` + +In the destination directory dir1, file_2 was last modified at 10:37 and file_3 was modified at 10:39. + +``` +[email protected]:~/moving_files$ ls -l dir1 +total 0 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:37 file_2 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:39 file_3 +``` + +In other words, in the destination directory, the file_2 is older and file_3 is newer than the ones being moved. + +It also means that file_3 won't me moved while as file_2 will be updated. You can verify it with the timestamps of the files in the destination directory after running the mv command. + +``` +[email protected]:~/moving_files$ mv -u file_2 file_3 dir1 +[email protected]:~/moving_files$ ls -l dir1 +total 0 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:39 file_2 +-rw-rw-r-- 1 abhishek abhishek 0 Apr 4 10:39 file_3 +[email protected]:~/moving_files$ date +Tue Apr 4 10:41:16 AM IST 2023 +[email protected]:~/moving_files$ +``` + +As you can see, the move command was executed at 10:41 and only the timestamp of file_2 has been changed. + +![Using move command with update option][8] + +> 💡 You can also use the backup option + +``` +-b +``` + +. If the destination file is being replaced, it will automatically create a backup with the + +``` +filename~ +``` + + pattern. + +#### Troubleshoot: Target is not a directory + +If you are moving multiple files, the last argument must be a directory. Otherwise, you'll encounter this error: + +``` +target is not a directory +``` + +Here, I create a file which is named `dir`. The name sounds like a directory, but it is a file. And when I try to move multiple files to it, the obvious error is there: + +![Handling target is not a directory error in Linux][9] + +But what if you move a single file to another file? In that case, the target file is replaced by the source file's content while the source file is renamed as the target file. More on this in later sections. + +### Moving directories + +So far, you have seen everything about moving files. How about moving directories? + +The cp and rm commands used recusrive option -r to copy and delete folders respectively. + +However, there is no such requirement for the mv command. You can use the mv command as it is for moving directories. + +``` +mv dir target_directory +``` + +Here's an example where I move the `dir2` directory to `dir3`. And as you can see, `dir2` along with its content is moved to `dir3`. + +![Moving folders in Linux command line][10] + +You can move multiple directories the same way. + +### Rename files and directories + +If you want to rename a file or directory, you can use the same mv command. + +``` +mv filename new_name_in_same_or_new_location +``` + +Let's say you want to rename a file in the same location. 
Here's an example where I rename `file_1` to `file_one` in the same directory. + +![Rename files with mv command][11] + +You can also move and rename the files. You just have to provide the directory path and the file name of the destination. Here, I rename `services` file to `my_services` while moving it to `dir3`. + +``` +[email protected]:~/moving_files$ ls +dir dir1 dir3 file_2 file_3 file_one passwd services +[email protected]:~/moving_files$ mv services dir3/my_services +[email protected]:~/moving_files$ ls dir3 +dir2 my_services +``` + +> 📋 You cannot rename multiple files directly with mv command. You have to combine it with other commands like find etc.  + +### Test your knowledge + +Time to practice what you just learned. + +Create a new folder to practice the exercise. In here, create a directory structure like this: + +``` +. +├── dir1 +├── dir2 +│   ├── dir21 +│   ├── dir22 +│   └── dir23 +└── dir3 +``` + +Copy the file /etc/passwd to the current directory. Now rename it `secrets`. + +Make three new files named `file_1`, `file_2` and `file_3`. Move all the files to `dir22`. + +Now move the `dir22` directory to `dir3`. + +Delete all contents of `dir2` now. + +In the penultimate chapter of the Terminal Basics series, you'll learn about editing files in the terminal. Stay tuned. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/move-files-linux/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/03/linux-mega-packt.webp +[2]: https://itsfoss.com/copy-files-directory-linux/ +[3]: https://linuxhandbook.com/mv-command/?ref=itsfoss.com +[4]: https://linuxhandbook.com/absolute-vs-relative-path/?ref=itsfoss.com +[5]: https://itsfoss.com/content/images/2023/04/moving-files-linux.png +[6]: https://itsfoss.com/content/images/2023/04/moving_multiple_files_linux.png +[7]: https://itsfoss.com/content/images/2023/04/move-interactively-linux.png +[8]: https://itsfoss.com/content/images/2023/04/move-command-update-option.png +[9]: https://itsfoss.com/content/images/2023/04/target-is-not-a-directory-error-linux.png +[10]: https://itsfoss.com/content/images/2023/04/moving-directories.png +[11]: https://itsfoss.com/content/images/2023/04/rename-file-with-mv-command.png diff --git a/sources/tech/20230407.0 ⭐️⭐️ A Quick Guide to Install and Play GOG Games on Linux.md b/sources/tech/20230407.0 ⭐️⭐️ A Quick Guide to Install and Play GOG Games on Linux.md new file mode 100644 index 0000000000..8f7fb0e7f9 --- /dev/null +++ b/sources/tech/20230407.0 ⭐️⭐️ A Quick Guide to Install and Play GOG Games on Linux.md @@ -0,0 +1,213 @@ +[#]: subject: "A Quick Guide to Install and Play GOG Games on Linux" +[#]: via: "https://itsfoss.com/play-gog-games-linux/" +[#]: author: "Ankush Das https://itsfoss.com/author/ankush/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +A Quick Guide to Install and Play GOG Games on Linux +====== + +![][1] + +[Gaming on Linux][2] is no longer a problem. You can play plenty of AAA titles, indie games, and Windows-exclusive games on Linux. Several games from GOG, Steam, Epic Games, Origin, and Ubisoft Connect should work flawlessly. 
+ +Unfortunately, GOG does not offer a client for Linux that you can use. + +So, in this guide, I will be focusing on **installing and playing GOG games on Linux**. + +If you have been following us, you may have come across on our ultimate guide to the [Epic Games Store on Linux][3]. It is more or less the same thing, but for a different store. + +> 💡 GOG.com is popular for offering DRM-free games. Furthermore, if you make a purchase on GOG, usually, the developer gets a good cut of it compared to other stores. + +### 3 Ways to Install GOG Games on Linux + +You have a couple of options when it comes to installing and running a game from the GOG store. + +You can use any of the following game clients on Linux: + +- **Lutris** +- **Heroic Games Launcher** +- **Bottles** + +I found Lutris to be the easiest, and quickest to be able to run a Windows-exclusive GOG game on Linux. So, let me start with it. + +#### Method 1. Install and Play GOG Games Using Lutris + +1. To get started, you need to **install Lutris on Linux**. + +You can install it using [Flathub][4], PPA for Ubuntu-based distros, DEB package or from the software center of distros like Pop!_OS, Solus. + +Head to its [official download page][5] and install Lutris. + +[Install Lutris][5] + +2. Once you are done installing Lutris, launch it and click on “**GOG**” among the sources listed on its left sidebar, as shown in the image: + +![lutris game client gog source][6] + +Do you see a **user avatar icon**, right next to it? Click on it to log in to your GOG account. + +![gog sign in through lutris][7] + +You can access the library of games associated with your account after signing in through Lutris' native user interface. + +![][8] + +3. Pick any game you want, and click on it to find "**Install**" button. + +As you proceed, Lutris will prompt you to **install Wine**, which would enable you to run the Windows game on Linux. + +![][9] + +**(Optional)** You can separately install Wine under the "**Runners**" menu, and have multiple versions of it ready before installing the game if you prefer. + +![lutris wine manager][10] + +4. But, if you do not want any hassle, just go with the installation process, and it will automatically install Wine and then prompt you to download the game + +![][11] + +![][12] + +5. Continue with the process, and it will set up the installer for you and launch it. + +![][13] + +![][14] + +Now, you have to follow the on-screen instructions for any game you want to install and then complete it. + +> 📋 Not every game will work seamlessly. For some, you may have to install it using a specific Wine version and some may not work at all. So, you should do some research on the particular game running on Linux before trying to install it via GOG. + +It is done! The game should launch right after successful setup. + +![Playing a GOG Windows game on Linux][15] + +#### Method 2. Install and Play GOG Games on Linux Using Heroic Games Launcher + +Heroic Games Launcher is a nice option with several features to run GOG games on Linux. + +You can use an **AppImage** file available, or install its Flatpak via **Flathub** or get RPM/DEB packages from its GitHub. + +[Install Heroic Games Launcher][16] + +> 🚧 You get similar functionalities but unlike Lutris, but you need to first install Wine manually using its + +**Wine Manager** + +. Heroic Games Launcher does not automatically install Wine for you. It will download the game for you, even if you do not have Wine installed. + +It can be confusing for new users. 
But, if you have a bit of experience and want to choose the Wine/Proton version, you can head to its **Wine Manager** and preferably download the latest available version to get started. + +![wine manager listing several versions of wine/proton on heroic games launcher][17] + +No additional fiddling, **just click on the download icon** as shown in the screenshot above, and it will automatically install it. + +Once done with it, here are the steps to install a GOG game using Heroic: + +1. Log in to your GOG account. You can find the GOG menu right after you launch it and head to the **Login section**. + +![][18] + +![][19] + +2. After logging in, head to the **Library** to find the games you have. + +![gog game library on heroic games launcher][20] + +3. Click on the download icon to proceed. Now, you should get a prompt to decide the **installation path, Wine version**, and a couple of other options. + +![gog game installation screen on heroic games launcher selecting the install path, wine, and more][21] + +You can select "**Use Default Wine Settings**" if you want to automatically select the Wine version you have installed. + +Or, you can use the **drop-down arrow** to pick among the Wine/Proton version available on your system. + +For the ease of use, you can go with the default settings. And, if the game fails to work, you can go to its settings later and try other Wine versions. + +4. Wait for the download to complete and then launch/run the game. + +![][22] + +![][23] + +#### Method 3: Install and Play GOG Games on Linux Using Bottles + +Bottles is an impressive platform. However, it does not let you install games through it. + +> 🚧 This is not a recommended method. But, if you want to try the GOG Galaxy game client on Linux, Bottles is the way to go. + +Instead, it will help you install the GOG client (which you find for Windows) and make it work on Linux. Bottles is one of the best [ways to install a Windows program on Linux][24]. + +It is recommended to install Bottles as Flatpak, at the time of writing this. So, to get started, you need to get it installed from [Flathub][25]. Additionally, you can explore other download options, if available. + +[Download Bottles][26] + +Once you get it installed, you have to create a new Bottle for Gaming. And, inside it, you will have to search for GOG Galaxy v1 or legacy and install the program to use GOG on Linux. + +![][27] + +![][28] + +In my tests, GOG Galaxy client did not launch. And, when it did, it was too slow/unresponsive. But, at least, it is something you can explore when nothing else works for you. It may or may not work, of course. + +If this is something that interests you, feel free to give it a try. + +### Wrapping Up + +The installers or gaming clients on Linux are making things convenient every day. + +For some games, it can end up as a one-click installation experience, while for others, it might need a little tweaking. + +If you struggle with it, feel free to join our **[It's FOSS forums][29]** for help. And, if you are new to the gaming scene on Linux, I suggest you read our guide on it: + +There is also a handy utility called GameHub that can be utilized for keeping games from different platforms in one UI. + +And of course, GOG is not the only place for [getting Linux games][30]. 
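If you prefer setting the launchers up from the terminal, the Flatpak packages linked in this article can be installed directly; the application IDs below are taken from the Flathub pages referenced above (Heroic also ships AppImage, DEB and RPM packages, as mentioned earlier):

```
flatpak install flathub net.lutris.Lutris
flatpak install flathub com.usebottles.bottles
```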
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/play-gog-games-linux/ + +作者:[Ankush Das][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/03/linux-mega-packt.webp +[2]: https://itsfoss.com/linux-gaming-guide/ +[3]: https://itsfoss.com/epic-games-linux/#3-use-bottles-to-access-epic-games-store +[4]: https://flathub.org/apps/details/net.lutris.Lutris?ref=itsfoss.com +[5]: https://lutris.net/downloads?ref=itsfoss.com +[6]: https://itsfoss.com/content/images/2023/04/lutris-gog.png +[7]: https://itsfoss.com/content/images/2023/04/lutris-gog-login.png +[8]: https://itsfoss.com/content/images/2023/04/lutris-game-install.png +[9]: https://itsfoss.com/content/images/2023/04/install-wine-lutris.png +[10]: https://itsfoss.com/content/images/2023/04/lutris-wine-manager.png +[11]: https://itsfoss.com/content/images/2023/04/lutris-wine-download.png +[12]: https://itsfoss.com/content/images/2023/04/lutris-download-game.png +[13]: https://itsfoss.com/content/images/2023/04/lutris-game-installer.png +[14]: https://itsfoss.com/content/images/2023/04/lutris-game-installation-1.png +[15]: https://itsfoss.com/content/images/2023/04/lutris-game-working.jpg +[16]: https://heroicgameslauncher.com/downloads?ref=itsfoss.com +[17]: https://itsfoss.com/content/images/2023/04/heroic-games-launcher-wine.png +[18]: https://itsfoss.com/content/images/2023/04/heroic-games-gog-login.png +[19]: https://itsfoss.com/content/images/2023/04/heroic-gog-login-page.png +[20]: https://itsfoss.com/content/images/2023/04/heroic-gog-games-library.png +[21]: https://itsfoss.com/content/images/2023/04/gog-wine-path-install.png +[22]: https://itsfoss.com/content/images/2023/04/gog-game-download-heroic.png +[23]: https://itsfoss.com/content/images/2023/04/gog-game-heroic-game-play.png +[24]: https://itsfoss.com/use-windows-applications-linux/ +[25]: https://flathub.org/apps/details/com.usebottles.bottles?ref=itsfoss.com +[26]: https://usebottles.com/download/?ref=itsfoss.com +[27]: https://itsfoss.com/content/images/2023/04/gog-installer-1.png +[28]: https://itsfoss.com/content/images/2023/04/install-gog-client-1.png +[29]: https://itsfoss.community/?ref=itsfoss.com +[30]: https://itsfoss.com/download-linux-games/ diff --git a/sources/tech/20230411.4 ⭐️⭐️ Linux Terminal Basics 9 Editing Files in Linux Terminal.md b/sources/tech/20230411.4 ⭐️⭐️ Linux Terminal Basics 9 Editing Files in Linux Terminal.md new file mode 100644 index 0000000000..88532d1330 --- /dev/null +++ b/sources/tech/20230411.4 ⭐️⭐️ Linux Terminal Basics 9 Editing Files in Linux Terminal.md @@ -0,0 +1,355 @@ +[#]: subject: "Linux Terminal Basics #9: Editing Files in Linux Terminal" +[#]: via: "https://itsfoss.com/edit-files-linux/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Linux Terminal Basics #9: Editing Files in Linux Terminal +====== + +![][1] + +You have learned a bunch of file operations so far in this Terminal Basics series. You learned to create new files, delete existing ones, and copy and move them. + +It is time to take it to the next level. Let's see how to edit files in the Linux terminal. 
+ +If you are writing bash shell scripts, you can use the GUI text editors like Gedit and run them in the terminal. + +But at times, you'll find yourself in a situation where you have to edit existing files in the terminal itself. For example, modifying config files located in the /etc directory. + +As a desktop Linux user, you could still use GUI editors for editing config files even as root. I'll show it to you later. + +However, knowing how to edit files in the command line is better. + +### Editing files in Linux terminal + +You may use the cat command if you just have to add a few lines at the bottom of an existing file. But in order to properly edit a file, you'll need a proper text editor. + +There is simply no shortage of [terminal-based text editors in Linux][2]. **Vi, Vim, Nano, Emacs are just a few of the most popular ones** out there. + +But here is the thing. All of them have a learning curve involved. You don't have the comfort of the GUI. You don't have menus to interact with the editor with your mouse. + +Instead, **you have to use (and remember) keyboard shortcuts**. + +I find Nano to be a good starting point for new users. It is the default text editor in Ubuntu and many other Linux distributions. + +Of course, there is a learning curve, but it is not as steep as that of Vim or Emacs. It keeps on displaying the most relevant keyboard shortcuts at the bottom. This helps you navigate even if you don't remember the exact shortcut. + +For this reason, I'll be covering the absolute basics of the Nano editor here. You’ll **learn all the essentials you need to know to start using Nano for editing files** in the Linux terminal. + +### Using Nano editor + +Nano can be used to edit text files, script files, program files etc. Please remember that **it is not a word processor** and cannot be used to edit docs or PDF files. For simple text editing of conf files, scripts, or text files, Nano is a great choice. + +> 🚧 You should have Nano installed on your system to follow this tutorial. + +I'll be using a text file named agatha_complete.txt. It consists of the names of all Agatha Christie’s books under her name. You can download it from this link if you plan to follow the steps on your system. + +[Agatha completeSample text fileagatha_complete.txt3 KBdownload-circle][3] + +#### Explore the Nano editor interface + +Open the Nano editor with the following command: + +``` +nano +``` + +You’ll notice a new interface in your terminal that reads like GNU nano and displays New Buffer. **New Buffer means Nano is working on a new file**. + +This is equivalent to opening a new unsaved file in a text editor like Gedit or Notepad. + +![Nano editor interface][4] + +Nano editor shows essential keyboard shortcuts you need to use for editing at the bottom of the editor. This way, you won’t get stuck at [exiting the editor like Vim][5]. + +The wider your terminal window, the more shortcuts it shows. + +You should get familiar with the symbols in Nano. + +- **The caret symbol (^) means Ctrl key** +- **The M character mean the Alt key** + +> 📋 When it says `^X Exit`, it means to use `Ctrl+X` keys to **exit** the editor. When it says `M-U Undo`, it means use `Alt+U` key to **undo** your last action. + +One more thing. It shows the characters in caps in the keyboard. But it doesn’t mean uppercase character. ^X means Ctrl + x key on the keyboard, not Ctrl+Shift+x key (to get the uppercase X). + +You may also get a detailed help document inside the editor by pressing Ctrl+G. 
+ +![Getting help in Nano editor][6] + +Now that you are a bit familiar with the interface, exit the Nano editor with Ctrl+X keys. Since you have not made any changes to this opened unsaved file, you won’t be asked to save it. + +Awesome! You now have some ideas about the editor. In the next section, you’ll learn to create and edit files with Nano. + +#### Create or open files in Nano + +You can open a file for editing in Nano like this: + +``` +nano filename +``` + +If the file doesn’t exist, it will still open the editor and when you exit, you’ll have the option for saving the text to my_file. + +You may also open a new file without any name as well (like new document) with Nano like this: + +``` +nano +``` + +Try it. In a terminal, just write `nano` and enter. + +![New file in Nano editor][7] + +Did you notice “New Buffer”? Since you did not give the file any name, it indicates that is a new, unsaved file in the memory buffer. + +You can start writing or modifying the text straightaway in Nano. There are no special insert modes or anything of that sort. It is almost like using a regular text editor, at least for writing and editing. + +If you make any changes to the file (new or existing), you’ll notice that an asterisk (*) appears beside the file name or New Buffer (meaning a new, unsaved file). + +![Writing text in Nano editor][8] + +That seems good. In the next section, you’ll see how to save files and exit the Nano editor interface. + +#### Saving and exiting in Nano + +Nothing is saved immediately to the file automatically unless you explicitly do so. When you ****exit the editor using Ctrl+X**** keyboard shortcut, you’ll be asked whether you want to save the file. + +![Save new file in Nano][9] + +- ****Y**** to save the file and exit the editor +- ****N**** to discard changes +- ****C**** to cancel saving but continue to edit + +If you choose to save the file by pressing the Y key, you’ll be asked to give the file a name. Name it my_file.txt. + +![Saving a new file in Nano text editor][10] + +> 📋 The .txt extension is not necessary because the file is already a text file even if you do not use the extension. However, it is a good practice to keep the file extension for comprehension. + +Enter the name and press the enter key. Your file will be saved and you’ll be out of the Nano editor interface. You can see that the text file has been created in your current directory. + +![New file created in Nano][11] + +> 📋 If you are habitual of using Ctrl+S for saving the file in a text editor and you subconsciously press that in Nano, nothing happens. Why “nothing happens” is important? Because if you press Ctrl+S in a Linux terminal, it freezes the output screen and you cannot type or do anything. You can get back from this “frozen terminal” by pressing Ctrl+Q. + +#### Perform a “save as” operation in Nano + +In Gedit or Notepad, you get the “save as” option to save the changes made to an existing file as a new file. This way, the original files remain unchanged and you create a new file with the modified text. + +You can do it in Nano editor too and the good thing is that you don’t need to remember another keyboard shortcut. You can use the same Ctrl+X keys that you used for saving and exiting. + +Let’s see it in action. Open the sample file you had downloaded earlier. + +``` +nano agatha_complete.txt +``` + +If you don’t make any changes, Ctrl+X will simply close the editor. You don’t want that, do you? + +So just press enter and then backspace key. 
This will insert a new line and then delete it as well. This way, nothing in the text file is changes and yet Nano will see it as a modified file. + +If you press Ctrl+X and press Y to confirm the save, you’ll come to the screen where it shows the file name. What you can do is to change the file name here by pressing the backspace key and typing a new name. + +![Save as different filename in Nano editor][12] + +It will ask you to confirm saving it under a different name. Press Y to confirm this decision. + +![Save as different filename in Nano editor][13] + +I named it agatha_complete.back as an indication that it is a “backup” of a file of the same name. It’s just for convenience. There is no real significance behind the .back extension. + +So, you have learned to save files with Nano in this lesson. In the next section, you’ll learn to move around a text file. + +#### Moving around in a file + +Open the agatha_complete.txt file with Nano. You know how to open files with Nano editor, right? + +``` +nano agatha_complete.txt +``` + +Now you have a text file with several lines. How do you switch to other lines or to the next page or to the end of the line? + +Mouse clicks don’t work here. ****Use the arrow keys to move up and down, left and right****. + +You can use the Home key or Ctrl+A to move to the beginning of a line and End key or Ctrl+E to move to the end of a line. Ctrl+Y/Page Up and Ctrl+V/Page Down keys can be used to scroll by pages. + +- Use arrow keys for moving around +- Use Ctrl+A or Home key to go to the beginning of a line +- Use Ctrl+E or End key to go to the end of a line +- Use Ctrl+Y or Page Up keys to go up by one page +- Use Ctrl+V or Page Down keys to go down by one page + +You have not made any changes to the file. Exit it. + +Now, open the same file again but using this command: + +``` +nano -l agatha_complete.txt +``` + +Did you notice something different? The `-l` option displays the line numbers in the left-hand side. + +Why did I show that to you? Because I want you to learn to go to a specific line now. To do that, use Ctrl+_ (underscore) key combination. + +![][14] + +> 📋 The Help options get changed at the bottom. That’s the beauty of Nano. If you choose a special keyboard shortcut, it starts showing the options that can be used with that key combination. + +In the above picture, you can enter a line or column number. At the same time, it shows that you can enter Ctrl+Y to go to the first line of the file (it is different from the regular Ctrl+Y for moving one page up). + +Using Ctrl+T on the same screen, you can go to a certain text. That’s almost like searching for a specific text. + +And that brings us to the topic of the next section, which is search and replace. + +#### Search and replace + +You still have the sample text file opened, right? If not, open it again. Let’s how to to search for text and replace it with something else. + +If you want to search for a certain text, ****use Ctrl+W**** and then enter the term you want to search and press enter. The cursor will move to the first match. To go to the next match, ****use Alt+W keys****. + +![Search for text in Nano editor][15] + +By default, the search is case-insensitive. You can perform a case-sensitive search by pressing Alt+C when you are about to perform a search. + +![Case sensitive search in Nano editor][16] + +Once again, look at the bottom for options that can be used. Also note that it shows the last searched term inside brackets. 
+

Similarly, you can also use regex for the search terms by pressing Alt+R.

And lastly, **use Ctrl+C to come out of search mode**.

If you want to replace the searched term, **use the Ctrl+\ keys**, then enter the search term and press the enter key.

![Search and replace text in Nano][17]

Next, it will ask for the term you want to replace the searched items with.

![Enter text to be replaced with in Nano][18]

The cursor will move to the first match and Nano will ask for your confirmation before replacing the matched text. Use Y or N to confirm or deny respectively. Using either Y or N will move you to the next match. You may also use A to replace all matches.

![Replacing text in Nano editor][19]

In the above text, I have replaced the second occurrence of the term Murder with Marriage and then it asks whether I want to replace the next occurrence as well.

**Use Ctrl+C to stop the search and replace.**

You have made some changes to the text file in this lesson. But there is no need to save those changes. Press Ctrl+X to exit but don’t go for the save option.

In the next section, you’ll learn about cut, copy and paste.

#### Cut, copy and paste text

Open the sample text file first.

> 💡 If you don’t want to spend too much time remembering the shortcuts, use the mouse.

Select text with the mouse and then use the right-click menu to copy it. You may also use the Ctrl+Shift+C [keyboard shortcut in Ubuntu][20] terminal. Similarly, you can right-click and select paste from the menu or use the Ctrl+Shift+V key combination.

Nano also provides its own shortcuts for cutting and pasting text, but those could become confusing for beginners.

Move your cursor to the beginning of the text you want to copy. Press Alt+A to set a marker. Now use the arrow keys to highlight the selection.

Once you have selected the desired text, you can press Alt+6 to copy the selected text or use Ctrl+K to cut the selected text. Use Ctrl+6 to cancel the selection.

Once you have copied or cut the selected text, you can use Ctrl+U to paste it.

![Cut, copy and paste in Nano editor][21]

If you do not want to continue selecting the text or copying it, use Alt+A again to unset the mark.

To recall:

- You can use Ctrl+Shift+C to copy and Ctrl+Shift+V to paste the content of the clipboard in most Linux terminals.
- Alternatively, use Alt+A to set the marker, move the selection using the arrow keys, and then use Alt+6 to copy, Ctrl+K to cut and Ctrl+6 to cancel.
- Use Ctrl+U to paste the copied or cut text.

Now you know about copy-pasting. The next section will teach you a thing or two about deleting text and lines in Nano.

#### Delete text or lines

There is no dedicated option for deletion in Nano. You may use the Backspace or Delete key to delete one character at a time. Press them repeatedly or hold them to delete multiple characters. Just like in any regular text editor.

You can also use the Ctrl+K keys, which cut the entire line. If you don’t paste it anywhere, it’s as good as deleting a line.

If you want to delete multiple lines, you may use Ctrl+K on all of them one by one.

Another option is to use the marker (Alt+A). Set the marker and use the arrow keys to select a portion of text. Use Ctrl+K to cut the text. No need to paste it, and the selected text will be deleted (in a way).

#### Undo and redo

Cut the wrong line? Pasted the wrong text selection? It’s easy to make such silly mistakes and it’s easy to correct those silly mistakes. 
+ +You can undo and redo your last actions using: + +- **Alt+U : Undo** +- **Alt+E : Redo** + +You can repeat these key combinations to undo or redo multiple times. + +### Almost the end... + +If you find Nano overwhelming, you should try Vim or Emacs. You'll start liking Nano. + +This is a good starting point for Emacs. Give it a try if you want. + +No matter how beginner-friendly Nano is, some people may find the idea of editing important files in the terminal intimidating. + +If you are using Linux desktop where you can access a GUI editor, you can use it to edit those important files as root. + +Say, you have Gedit installed on your system and you have to edit the SSH config file as root. You can run Gedit as root from the terminal like this: + +``` +sudo gedit /etc/ssh/ssh_config +``` + +It will open a Gedit instance as root. The command keeps on running in the terminal. Make your changes and save the file. It will show warning messages when you save and close Gedit. + +![Using gedit to edit config files][22] + +We are almost at the end of our terminal basics series. In the tenth and the last chapter of the series, you'll learn about getting help in the Linux terminal. + +For now, let me know in the comment section if you encounter any issues. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/edit-files-linux/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/04/humble-bundle-packt-offer.webp +[2]: https://itsfoss.com/command-line-text-editors-linux/ +[3]: https://itsfoss.com/content/files/2023/04/agatha_complete.txt +[4]: https://itsfoss.com/content/images/2023/04/nano-editor-interface.png +[5]: https://itsfoss.com/how-to-exit-vim/ +[6]: https://itsfoss.com/content/images/2023/04/nano-detailed-help.png +[7]: https://itsfoss.com/content/images/2023/04/new-file-in-nano.png +[8]: https://itsfoss.com/content/images/2023/04/new-modified-file-in-nano.png +[9]: https://itsfoss.com/content/images/2023/04/save-new-file-in-nano.png +[10]: https://itsfoss.com/content/images/2023/04/saving-new-file-in-nano.png +[11]: https://itsfoss.com/content/images/2023/04/new-file-created-in-nano.png +[12]: https://itsfoss.com/content/images/2023/04/save-as-different-file-in-nano.png +[13]: https://itsfoss.com/content/images/2023/04/save-as-different-file-name-in-nano.png +[14]: https://academy.itsfoss.com/wp-content/uploads/2021/07/nano-go-to-line-number-1024x611.png +[15]: https://itsfoss.com/content/images/2023/04/nano-search-text.png +[16]: https://itsfoss.com/content/images/2023/04/nano-case-sensitive-search-text.png +[17]: https://itsfoss.com/content/images/2023/04/nano-search-replace-text.png +[18]: https://itsfoss.com/content/images/2023/04/nano-replace-text.png +[19]: https://itsfoss.com/content/images/2023/04/nano-replaced-text.png +[20]: https://itsfoss.com/ubuntu-shortcuts/ +[21]: https://itsfoss.com/content/images/2023/04/nano-cut-copy-paste.png +[22]: https://itsfoss.com/content/images/2023/04/using-gedit-to-edit-config-files.png diff --git a/sources/tech/20230414.1 ⭐️⭐️⭐️ A distributed database load-balancing architecture with ShardingSphere.md b/sources/tech/20230414.1 ⭐️⭐️⭐️ A distributed database load-balancing architecture with ShardingSphere.md new 
file mode 100644 index 0000000000..36818c7455 --- /dev/null +++ b/sources/tech/20230414.1 ⭐️⭐️⭐️ A distributed database load-balancing architecture with ShardingSphere.md @@ -0,0 +1,361 @@ +[#]: subject: "A distributed database load-balancing architecture with ShardingSphere" +[#]: via: "https://opensource.com/article/23/4/distributed-database-load-balancing-architecture-shardingsphere" +[#]: author: "Wu Weijie https://opensource.com/users/wuweijie" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +A distributed database load-balancing architecture with ShardingSphere +====== + +[Apache ShardingSphere is a distributed database][1] ecosystem that transforms any database into a distributed database and enhances it with data sharding, elastic scaling, encryption, and other capabilities. In this article, I demonstrate how to build a distributed database load-balancing architecture based on ShardingSphere and the impact of introducing [load balancing][2]. + +### The architecture + +A ShardingSphere distributed database load-balancing architecture consists of two products: ShardingSphere-JDBC and ShardingSphere-Proxy, which can be deployed independently or in a hybrid architecture. The following is the hybrid deployment architecture: + +![Hybrid deployment of ShardingSphere-JDBC and ShardingSphere-Proxy][3] + +### ShardingSphere-JDBC load-balancing solution + +ShardingSphere-JDBC is a lightweight Java framework with additional services in the [JDBC][4] layer. ShardingSphere-JDBC adds computational operations before the application performs database operations. The application process still connects directly to the database through the database driver. + +As a result, users don't have to worry about load balancing with ShardingSphere-JDBC. Instead, they can focus on how their application is load balanced. + +### ShardingSphere-Proxy load-balancing solution + +ShardingSphere-Proxy is a transparent database proxy that provides services to clients over the database protocol. Here's ShardingSphere-Proxy as a standalone deployed process with load balancing on top of it: + +![Standalone ShardingSphere-Proxy with load-balancing][5] + +### Load balancing solution essentials + +The key point of ShardingSphere-Proxy cluster load balancing is that the database protocol itself is designed to be stateful (connection authentication status, transaction status, Prepared Statement, and so on). + +If the load balancing on top of the ShardingSphere-Proxy cannot understand the database protocol, your only option is to select a four-tier load balancing proxy ShardingSphere-Proxy cluster. In this case, a specific proxy instance maintains the state of the database connection between the client and ShardingSphere-Proxy. + +Because the proxy instance maintains the connection state, four-tier load balancing can only achieve connection-level load balancing. Multiple requests for the same database connection cannot be polled to multiple proxy instances. Request-level load balancing is not possible. + +This article does not cover the details of four- and seven-tier load balancing. + +### Recommendations for the application layer + +Theoretically, there is no functional difference between a client connecting directly to a single ShardingSphere-Proxy or a ShardingSphere-Proxy cluster through a load-balancing portal. However, there are some differences in the technical implementation and configuration of the different load balancers. 
+

For example, in the case of a direct connection to ShardingSphere-Proxy with no limit on the total time a database connection session can be held, some Elastic Load Balancing (ELB) products have a maximum session hold time of 60 minutes at Layer 4. If an idle database connection is closed by a load balancing timeout, but the client is not aware of the passive TCP connection closure, the application may report an error.

Therefore, in addition to considerations at the load balancing level, you might consider measures for the client to avoid the impact of introducing load balancing.

#### On-demand connection creation

If a connection instance is created once and used continuously, the database connection will be idle most of the time when executing a timed job with a one-hour interval and a short execution time. When the client itself is unaware of changes in the connection state, the long idle time increases the uncertainty of the connection state. For scenarios with long execution intervals, consider creating connections on demand and releasing them after use.

#### Connection pooling

General database connection pools have the ability to maintain valid connections, reject failed connections, and so on. Managing database connections through connection pools can reduce the cost of maintaining connections yourself.

#### Enable TCP KeepAlive

Clients generally support TCP `KeepAlive` configuration:

- MySQL Connector/J supports `autoReconnect` or `tcpKeepAlive`, which are not enabled by default.
- The PostgreSQL JDBC Driver supports `tcpKeepAlive`, which is not enabled by default.

Nevertheless, there are some limitations to how TCP `KeepAlive` can be enabled:

- The client does not necessarily support the configuration of TCP `KeepAlive` or automatic reconnection.
- The client does not intend to make any code or configuration adjustments.
- TCP `KeepAlive` is dependent on the operating system implementation and configuration.

### User case

Recently, a ShardingSphere community member provided feedback that their ShardingSphere-Proxy cluster was providing services to the public with upper-layer load balancing. In the process, they found problems with the connection stability between their application and ShardingSphere-Proxy.

#### Problem description

Assume the user's production environment uses a three-node ShardingSphere-Proxy cluster serving applications through a cloud vendor's ELB.

![Three-node ShardingSphere-Proxy][6]

One of the applications is a resident process that executes timed jobs, which are executed hourly and have database operations in the job logic. The user feedback is that each time a timed job is triggered, an error is reported in the application log:

```
send of 115 bytes failed with errno=104 Connection reset by peer
```

Checking the ShardingSphere-Proxy logs, there are no abnormal messages.

The issue only occurs with timed jobs that execute hourly. All other applications access ShardingSphere-Proxy normally. As the job logic has a retry mechanism, the job executes successfully after each retry without impacting the original business.

### Problem analysis

The reason why the application shows an error is clear—the client is sending data to a closed TCP connection. The troubleshooting goal is to identify exactly why the TCP connection was closed. 
+ +If you encounter any of the three reasons listed below, I recommend that you perform a network packet capture on both the application and the ShardingSphere-Proxy side within a few minutes before and after the point at which the problem occurs: + +- The problem will recur on an hourly basis. +- The issue is network related. +- The issue does not affect the user's real-time operations. + +### Packet capture phenomenon 1 + +ShardingSphere-Proxy receives a TCP connection establishment request from the client every 15 seconds. The client, however, sends an RST to the proxy immediately after establishing the connection with three handshakes. The client sends an RST to the proxy without any response after receiving the Server Greeting or even before the proxy has sent the Server Greeting. + +![Packet capture showing RST messages][7] + +However, no traffic matching the above behavior exists in the application-side packet capture results. + +By consulting the community member's ELB documentation, I found that the above network interaction is how that ELB implements the four-layer health check mechanism. Therefore, this phenomenon is not relevant to the problem in this case. + +![Mechanism of TCP help check][8] + +### Packet capture phenomenon 2 + +The MySQL connection is established between the client and the ShardingSphere-Proxy, and the client sends an RST to the proxy during the TCP connection disconnection phase. + +![RST sent during disconnection phase][9] + +The above packet capture results reveal that the client first initiated the `COM_QUIT` command to ShardingSphere-Proxy. The client disconnected the MySQL connection based on but not limited to the following possible scenarios: + +- The application finished using the MySQL connection and closed the database connection normally. +- The application's database connection to ShardingSphere-Proxy is managed by a connection pool, which performs a release operation for idle connections that have timed out or have exceeded their maximum lifetime. As the connection is actively closed on the application side, it does not theoretically affect other business operations unless there is a problem with the application's logic. + +After several rounds of packet analysis, no RSTs had been sent to the client by the ShardingSphere-Proxy in the minutes before and after the problem surfaced. + +Based on the available information, it's possible that the connection between the client and ShardingSphere-Proxy was disconnected earlier, but the packet capture time was limited and did not capture the moment of disconnection. + +Because the ShardingSphere-Proxy itself does not have the logic to actively disconnect the client, the problem is being investigated at both the client and ELB levels. + +### Client application and ELB configuration check + +The user feedback included the following additional information: + +- The application's timed jobs execute hourly, the application does not use a database connection pool, and a database connection is manually maintained and provided for ongoing use by the timed jobs. +- The ELB is configured with four levels of session hold and a session idle timeout of 40 minutes. + +Considering the frequency of execution of timed jobs, I recommend that users modify the ELB session idle timeout to be greater than the execution interval of timed jobs. After the user changed the ELB timeout to 66 minutes, the connection reset problem no longer occurred. 
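
Alongside the ELB timeout change, the client-side measures discussed earlier in this article are also worth considering for jobs with long idle periods. As a rough sketch only (the endpoint, database name, and credentials below are placeholders, not the user's actual configuration), enabling TCP keepalive in MySQL Connector/J could look like this:

```
import java.sql.Connection;
import java.sql.DriverManager;

public class KeepAliveConnectionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials -- adjust for your environment.
        // tcpKeepAlive=true asks Connector/J to enable TCP keepalive on the socket,
        // so an otherwise idle connection still produces low-level network activity.
        String url = "jdbc:mysql://elb.example.com:3306/sharding_db"
                + "?useSSL=false&tcpKeepAlive=true";
        try (Connection connection = DriverManager.getConnection(url, "root", "root")) {
            System.out.println("connection valid: " + connection.isValid(5));
        }
    }
}
```

Whether a given ELB counts keepalive probes as activity varies by product and by the operating system's keepalive interval, so treat this as a complement to, not a replacement for, aligning the idle timeout with the job interval.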
+ +If the user had continued packet capturing during troubleshooting, it's likely they would have found ELB traffic that disconnects the TCP connection at the 40th minute of each hour. + +#### Problem conclusion + +The client reported an error Connection reset by peer Root cause. + +The ELB idle timeout was less than the timed task execution interval. The client was idle for longer than the ELB session hold timeout, resulting in the connection between the client and ShardingSphere-Proxy being disconnected by the ELB timeout. + +The client sent data to a TCP connection that had been closed by the ELB, resulting in the error Connection reset by peer. + +### Timeout simulation experiment + +I decided to conduct a simple experiment to verify the client's performance after a load-balancing session timeout. I performed a packet capture during the experiment to analyze network traffic and observe the behavior of load-balancing. + +#### Build a load-balanced ShardingSphere-Proxy clustered environment + +Theoretically, this article could cover any four-tier load-balancing implementation. I selected Nginx. + +I set the TCP session idle timeout to one minute, as seen below: + +``` +user nginx; +worker_processes auto; + +error_log /var/log/nginx/error.log notice; +pid /var/run/nginx.pid; + +events { + worker_connections 1024; +} + +stream { + upstream shardingsphere { + hash $remote_addr consistent; + + server proxy0:3307; + server proxy1:3307; + } + + server { + listen 3306; + proxy_timeout 1m; + proxy_pass shardingsphere; + } +} +``` + +#### Construct a Docker compose file + +Here's a Docker compose file: + +``` +version: "3.9" +services: + + nginx: + image: nginx:1.22.0 + ports: + - 3306:3306 + volumes: + - /path/to/nginx.conf:/etc/nginx/nginx.conf + + proxy0: + image: apache/shardingsphere-proxy:5.3.0 + hostname: proxy0 + ports: + - 3307 + + proxy1: + image: apache/shardingsphere-proxy:5.3.0 + hostname: proxy1 + ports: + - 3307 +``` + +#### Startup environment + +Start the containers: + +``` +$ docker compose up -d +[+] Running 4/4 + ⠿ Network lb_default Created 0.0s + ⠿ Container lb-proxy1-1 Started 0.5s + ⠿ Container lb-proxy0-1 Started 0.6s + ⠿ Container lb-nginx-1 Started +``` + +#### Simulation of client-side same-connection-based timed tasks + +First, construct a client-side deferred SQL execution. Here, the ShardingSphere-Proxy is accessed through Java and [MySQL Connector/J][10]. + +The logic: + +- Establish a connection to the ShardingSphere-Proxy and execute a query to the proxy. +- Wait 55 seconds and then execute another query to the proxy. +- Wait 65 seconds and then execute another query to the proxy. + +``` +public static void main(String[] args) { + try (Connection connection = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306?useSSL=false", "root", "root"); Statement statement = connection.createStatement()) { + log.info(getProxyVersion(statement)); + TimeUnit.SECONDS.sleep(55); + log.info(getProxyVersion(statement)); + TimeUnit.SECONDS.sleep(65); + log.info(getProxyVersion(statement)); + } catch (Exception e) { + log.error(e.getMessage(), e); + } +} + +private static String getProxyVersion(Statement statement) throws SQLException { + try (ResultSet resultSet = statement.executeQuery("select version()")) { + if (resultSet.next()) { + return resultSet.getString(1); + } + } + throw new UnsupportedOperationException(); +} +``` + +Expected and client-side run results: + +- A client connects to the ShardingSphere-Proxy, and the first query is successful. 
+- The client's second query is successful. +- The client's third query results in an error due to a broken TCP connection because the Nginx idle timeout is set to one minute. + +The execution results are as expected. Due to differences between the programming language and the database driver, the error messages behave differently, but the underlying cause is the same: Both TCP connections have been disconnected. + +The logs are shown below: + +``` +15:29:12.734 [main] INFO icu.wwj.hello.jdbc.ConnectToLBProxy - 5.7.22-ShardingSphere-Proxy 5.1.1 +15:30:07.745 [main] INFO icu.wwj.hello.jdbc.ConnectToLBProxy - 5.7.22-ShardingSphere-Proxy 5.1.1 +15:31:12.764 [main] ERROR icu.wwj.hello.jdbc.ConnectToLBProxy - Communications link failure +The last packet successfully received from the server was 65,016 milliseconds ago. The last packet sent successfully to the server was 65,024 milliseconds ago. + at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) + at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) + at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1201) + at icu.wwj.hello.jdbc.ConnectToLBProxy.getProxyVersion(ConnectToLBProxy.java:28) + at icu.wwj.hello.jdbc.ConnectToLBProxy.main(ConnectToLBProxy.java:21) +Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure + +The last packet successfully received from the server was 65,016 milliseconds ago. The last packet sent successfully to the server was 65,024 milliseconds ago. + at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) + at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) + at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) + at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) + at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480) + at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61) + at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105) + at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151) + at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167) + at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:581) + at com.mysql.cj.protocol.a.NativeProtocol.checkErrorMessage(NativeProtocol.java:761) + at com.mysql.cj.protocol.a.NativeProtocol.sendCommand(NativeProtocol.java:700) + at com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:1051) + at com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:997) + at com.mysql.cj.NativeSession.execSQL(NativeSession.java:663) + at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1169) + ... 2 common frames omitted +Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. 
+ at com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:67) + at com.mysql.cj.protocol.a.SimplePacketReader.readHeaderLocal(SimplePacketReader.java:81) + at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:63) + at com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:45) + at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:52) + at com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:41) + at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:54) + at com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:44) + at com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:575) + ... 8 common frames omitted +``` + +### Packet capture results analysis + +The packet capture results show that after the connection idle timeout, Nginx simultaneously disconnects from the client and the proxy over TCP. However, the client is not aware of this, so Nginx returns an RST after sending the command. + +After the Nginx connection idle timeout, the TCP disconnection process with the proxy completes normally. The proxy is unaware when the client sends subsequent requests using the disconnected connection. + +Analyze the following packet capture results: + +- Numbers 1–44 are the interaction between the client and the ShardingSphere-Proxy to establish a MySQL connection. +- Numbers 45–50 are the first query performed by the client. +- Numbers 55–60 are the second query executed by the client 55 seconds after the first query is executed. +- Numbers 73–77 are the TCP connection disconnection processes initiated by Nginx to both the client and ShardingSphere-Proxy after the session times out. +- Numbers 78–79 are the third query executed 65 seconds after the client executes the second query, including the Connection Reset. + +![Packet capture of expected DST results][11] + +### Wrap up + +Troubleshooting disconnection issues involves examining both the ShardingSphere-Proxy settings and the configurations enforced by the cloud service provider's ELB. It's useful to capture packets to understand when particular events—especially DST messages—occur compared to idle time and timeout settings. + +The above implementation and troubleshooting scenario is based on a specific ShardingSphere-Proxy deployment. For a discussion of cloud-based options, see my followup article. ShardingSphere on Cloud offers additional management options and configurations for a variety of cloud service provider environments. 
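
As a final practical note on the packet captures mentioned above: a plain capture on the proxy's port, saved to a file for later inspection, is usually enough to line up RST segments against the idle-timeout settings. The interface and port below are assumptions based on the demo environment in this article, so adjust them to your own deployment:

```
# Capture all traffic to and from the proxy port for offline analysis
$ sudo tcpdump -i any -w proxy-traffic.pcap port 3307

# Or watch only RST segments live
$ sudo tcpdump -i any 'port 3307 and tcp[tcpflags] & tcp-rst != 0'
```

Opening the resulting file in Wireshark makes it easy to check whether the resets line up with the load balancer's idle timeout.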
+ +_This article is adapted from [A Distributed Database Load Balancing Architecture Based on ShardingSphere: Demo and User Case][12] and is republished with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/distributed-database-load-balancing-architecture-shardingsphere + +作者:[Wu Weijie][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/wuweijie +[b]: https://github.com/lkxed/ +[1]: https://opensource.com/article/21/12/apache-shardingsphere +[2]: https://opensource.com/article/21/4/load-balancing +[3]: https://opensource.com/sites/default/files/2023-04/0_93H6vbCgXBVBesDY.png +[4]: https://opensource.com/article/22/9/install-jdbc-linux +[5]: https://opensource.com/sites/default/files/2023-04/1_m1IaZAtKDgibDG-CoEFp4w.png +[6]: https://opensource.com/sites/default/files/2023-04/1_m1IaZAtKDgibDG-CoEFp4w%20%281%29.png +[7]: https://opensource.com/sites/default/files/2023-04/0_Lhw0dQMJg0W_7egC.jpg +[8]: https://opensource.com/sites/default/files/2023-04/1_B4u6tudsTAByCVQmAs7vqA.png +[9]: https://opensource.com/sites/default/files/2023-04/0_STRlq7Ad6jGldUMo.png +[10]: https://github.com/mysql/mysql-connector-j +[11]: https://opensource.com/sites/default/files/2023-04/0_JwSWd1WjFnp70pdg.png +[12]: https://blog.devgenius.io/a-distributed-database-load-balancing-architecture-based-on-shardingsphere-demo-user-case-1293e321b322 \ No newline at end of file diff --git a/sources/tech/20230418.0 ⭐️⭐️ Use autoloading and namespaces in PHP.md b/sources/tech/20230418.0 ⭐️⭐️ Use autoloading and namespaces in PHP.md new file mode 100644 index 0000000000..807be9b8a4 --- /dev/null +++ b/sources/tech/20230418.0 ⭐️⭐️ Use autoloading and namespaces in PHP.md @@ -0,0 +1,350 @@ +[#]: subject: "Use autoloading and namespaces in PHP" +[#]: via: "https://opensource.com/article/23/4/autoloading-namespaces-php" +[#]: author: "Jonathan Daggerhart https://opensource.com/users/daggerhart" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Use autoloading and namespaces in PHP +====== + +In the PHP language, autoloading is a way to automatically include class files of a project in your code. Say you had a complex object-oriented PHP project with more than a hundred PHP classes. You'd need to ensure all your classes were loaded before using them. This article aims to help you understand the what, why, and how of autoloading, along with namespaces and the `use` keyword, in PHP. + +### What is autoloading? + +In a complex PHP project, you're probably using hundreds of classes. Without autoloading, you'd likely have to include every class manually. Your code would look like this: + +``` +/Jonathan/SomeBundle/Validator.php`. + +Just to drive this point home, here are more examples of where a PHP file exists for a class within a project making use of PSR-4: + +- **File location**: `/Project/Fields/Email/Validator.php` + +- **File location**: `/Acme/QueryBuilder/Where.php` + +- **File location**: `/MyFirstProject/Entity/EventEmitter.php` + +- **Namespace and class**: `\Project\Fields\Email\Validator()` +- **Namespace and class**: `\Acme\QueryBuilder\Where` +- **Namespace and class**: `\MyFirstProject\Entity\EventEmitter` + +This isn't actually 100% accurate. 
Each component of a project has its own relative root, but don't discount this information: Knowing that PSR-4 implies the file location of a class helps you easily find any class within a large project. + +### How does PSR-4 work? + +PSR-4 works because it's achieved with an autoloader function. Take a look at one PSR-4 example autoloader function: + +``` +/src/Foo/Bar/Baz/Bug.php.` +- If the file is found, load it. + +In other words, you change `Foo\Bar\Baz\Bug` to `/src/Foo/Bar/Baz/Bug.php` then locate that file. + +### Composer and autoloading + +[Composer][1] is a command-line PHP package manager. You may have seen a project with a `composer.json` file in its root directory. This file tells Composer about the project, including the project's dependencies. + +Here's an example of a simple `composer.json` file: + +``` +{ + "name": "jonathan/example", + "description": "This is an example composer.json file", + "require": { + "twig/twig": "^1.24" + } +} +``` + +This project is named "jonathan/example" and has one dependency: the Twig templating engine (at version 1.24 or higher). + +With Composer installed, you can use the JSON file to download the project's dependencies. In doing so, Composer generates an `autoload.php` file that automatically handles autoloading the classes in all of your dependencies. + +![Screenshot of nested drop down menus highlighting the path example - vendor - twig - autoload.php][2] + +If you include this new file in a project, all classes within your dependency are automatically loaded, as needed. + +### PSR makes PHP better + +Because of the PSR-4 standard and its widespread adoption, Composer can generate an autoloader that automatically handles loading your dependencies as you instantiate them within your project. The next time you write PHP code, keep namespaces and autoloading in mind. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/autoloading-namespaces-php + +作者:[Jonathan Daggerhart][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daggerhart +[b]: https://github.com/lkxed/ +[1]: https://opensource.com/article/22/5/composer-git-repositories +[2]: https://opensource.com/sites/default/files/2023-03/composer_autoloading_php_menu.png \ No newline at end of file diff --git a/sources/tech/20230418.4 ⭐️⭐️ Linux Terminal Basics 10 Getting Help in Linux Terminal.md b/sources/tech/20230418.4 ⭐️⭐️ Linux Terminal Basics 10 Getting Help in Linux Terminal.md new file mode 100644 index 0000000000..633cf81d95 --- /dev/null +++ b/sources/tech/20230418.4 ⭐️⭐️ Linux Terminal Basics 10 Getting Help in Linux Terminal.md @@ -0,0 +1,152 @@ +[#]: subject: "Linux Terminal Basics #10: Getting Help in Linux Terminal" +[#]: via: "https://itsfoss.com/linux-command-help/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Linux Terminal Basics #10: Getting Help in Linux Terminal +====== + +These days, you can search the internet for the usage and examples of any command. + +But it was not like this when the internet didn't exist, or it was not as widely available to everyone. + +For this reason, commands in Linux (and the operating systems before it) come with a help or manual page (man pages). 
This worked as a reference, and users could access it anytime to see what options were available for a command and how it worked.

The man pages are still relevant in this age of information abundance.

First, they are the original command documentation and hence the most trusted source on command usage.

Second, if you are taking some Linux exam, you will not be allowed to search on the internet, but the man pages are always at your disposal.

Now that you understand the importance of getting help directly in the terminal, let's see more about them.

### Get help with Linux commands in the terminal

There are two main commands to get help on the usage of a Linux command:

- help: For shell builtin commands
- man: For other Linux commands

#### Wait! What are shell built-in commands?

You may feel that commands like ls, rm, mv are part of the bash shell. But that's not true. The shell only has a few commands that are built into it as a part of the shell itself. This is why they are called built-in commands. Some examples of built-in commands are echo, cd, and alias.

Other popular Linux commands like ls, mv, rm, cat, less, etc. are part of a software package called [GNU coreutils][1]. They come preinstalled on almost all Linux distributions.

You won't find man pages for the shell built-ins.

```
$ man cd
No manual entry for cd
```

The man pages are for these 'external' Linux commands. The shell built-ins have help sections.

> 💡 Want to see all the built-in shell commands? Just type `help` to list them all.

#### Use man to see command documentation

Using the man command is simple. Just give it the command's name like this:

```
man command_name
```

And it will open the manual page of the command. You'll find the syntax of the command, its options, and a brief explanation of the options.

![An example manpage of the ip command in Linux][2]

The pages are (usually) [opened with the less command][3] so you can use all the [keyboard shortcuts of the less command][4] to move around and search for text.

Don't remember them? This table will help you recall:

| Keys | Action |
| :- | :- |
| Up arrow | Move one line up |
| Down arrow | Move one line down |
| Space or PgDn | Move one page down |
| b or PgUp | Move one page up |
| g | Move to the beginning of the file |
| G | Move to the end of the file |
| ng | Move to the nth line |
| /pattern | Search for pattern and use n to move to next match |
| q | Exit |

There is more to man pages than this. I cannot cover it all here, but we do have a detailed guide. Feel free to refer to it.

#### Use help command for shell built-ins

As mentioned earlier, no man pages exist for the built-in shell commands. Instead, you use the help command like this:

```
help command_name
```

It will show a summary of the command options. The entire content is displayed on the screen, unlike the man command.

![Using help for built-in shell commands][5]

#### Help option for all commands

Do you feel the man page has too much information and you just want to see the options of a command? The help option 'helps' you.

Almost all Linux commands provide a `--help` option that should summarize the available options.

![Using help option of Linux commands][6]

However, it's not a hard and fast rule. The help sections of some commands are pretty bland. Try it for the ip command. 
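
To see the difference for yourself, pick one built-in and one external command and try all three sources. The commands below are safe to run anywhere; the output is not shown here because it varies between distributions:

```
# A shell built-in: no man page, but the help built-in documents it
help cd

# An external command: a full man page plus a shorter --help summary
man ls
ls --help
```

If `man ls` and `ls --help` look very different, that is expected: the man page is the full reference, while `--help` is only a summary.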
+ +### There are more ways to get help in Linux terminal + +There is the info command that works similar to the man command. + +If you find man pages complicated to understand, there are third-party tools that simplify the content of man pages and make it more beginner friendly. TLDR is one such package you can use. + +In other words, the help is just a few key presses away. + +It's not that only new Linux users need help. Experienced Linux users specially rely on the manpages. So don't shy away from using the help in the terminal. + +I also advise [using the history command][7]. This way, you can search for the commands you typed earlier. + +### This is the end... or the beginning + +And with this, I conclude the Linux Terminal Basics series. + +In the ten chapters of the series, you got familiar with the terminal, learned to move around in the terminal, and create, move and delete files and folders. You also learned to read and edit files. + +This gives you a basic but solid foundation of Linux commands. It may be the end of this series, but it helps begin your Linux command line journey. + +You'll find more in-depth guides on 'doing things in Linux command line' on It's FOSS in the future. It may not be in a series (or maybe it will) but you'll have plenty of opportunity for learning. + +💬 **_I hope you liked this beginner series. I welcome your feedback on the usability of this series and suggestions to improve it. If you have any suggestions for a related new series, please don't hesitate. The comment section is waiting for you._** + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linux-command-help/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://www.gnu.org/software/coreutils/?ref=itsfoss.com +[2]: https://itsfoss.com/content/images/2023/04/man-page-example.png +[3]: https://itsfoss.com/view-file-contents/ +[4]: https://linuxhandbook.com/less-command/?ref=itsfoss.com +[5]: https://itsfoss.com/content/images/2023/04/help-for-shell-built-ins.png +[6]: https://itsfoss.com/content/images/2023/04/help-with-linux-commands.png +[7]: https://linuxhandbook.com/bash-history-tips/?ref=itsfoss.com diff --git a/sources/tech/20230424.3 ⭐️⭐️ Learn TclTk and Wish with this simple game.md b/sources/tech/20230424.3 ⭐️⭐️ Learn TclTk and Wish with this simple game.md new file mode 100644 index 0000000000..aeec32b279 --- /dev/null +++ b/sources/tech/20230424.3 ⭐️⭐️ Learn TclTk and Wish with this simple game.md @@ -0,0 +1,219 @@ +[#]: subject: "Learn Tcl/Tk and Wish with this simple game" +[#]: via: "https://opensource.com/article/23/4/learn-tcltk-wish-simple-game" +[#]: author: "James Farrell https://opensource.com/users/jamesf" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Learn Tcl/Tk and Wish with this simple game +====== + +Explore the basic language constructs of Tcl/Tk, which include user input, output, variables, conditional evaluation, simple functions, and basic event driven programming. + +My path to writing this article started with a desire to make advanced use of Expect which is based on Tcl. 
Those efforts resulted in these two articles: [Learn Tcl by writing a simple game][1] and [Learn Expect by writing a simple game][2]. + +I do a bit of [Ansible][3] automation and, over time have collected a number of local scripts. Some of them I use often enough that it becomes annoying to go through the cycle of: + +- Open terminal +- Use `cd` to get to the right place +- Type a long command with options to start the desired automation + +I use macOS on a daily basis. What I really wanted was a menu item or an icon to bring up a simple UI to accept parameters and run the thing I wanted to do, [like in KDE on Linux][4]. + +The classic Tcl books include documentation on the popular Tk extensions. Since I was already deep into researching this topic, I gave programming it (that is `wish`) a try. + +I've never been a GUI or front-end developer, but I found the Tcl/Tk methods of script writing fairly straight forward. I was pleased to revisit such a venerable stalwart of UNIX history, something still available and useful on modern platforms. + +### Install Tcl/Tk + +On a Linux system, you can use this: + +``` +$ sudo dnf install tcl +$ which wish +/bin/wish +``` + +On macOS, use [Homebrew][5] to install the latest Tcl/Tk: + +``` +$ brew install tcl-tk +$ which wish +/usr/local/bin/wish +``` + +### Programming concepts + +Most game-writing articles cover the typical programming language constructs such as loops, conditionals, variables, functions and procedures, and so on. + +In this article, I introduce [event-driven programming][6]. With event-driven programming, your executable enters into a special built-in loop as it waits for something specific to happen. When the specification is reached, the code is triggered to produce a certain outcome. + +These events can consist of things like keyboard input, mouse movement, button clicks, timing triggers, or nearly anything your computer hardware can recognize (perhaps even from special-purpose devices). The code in your program sets the stage from what it presents to the end user, what kinds of inputs to listen for, how to behave when these inputs are received, and then invokes the event loop waiting for input. + +The concept for this article is not far off from my other Tcl articles. The big difference here is the replacement of looping constructs with GUI setup and an event loop used to process the user input. The other differences are the various aspects of GUI development needed to make a workable user interface. With Tk GUI development, you need to look at two fundamental constructs called widgets and geometry managers. + +Widgets are UI elements that make up the visual elements you see and interact with. These include buttons, text areas, labels, and entry fields. Widgets also offer several flavors of option selections like menus, check boxes, radio buttons, and so on. Finally, widgets include other visual elements like borders and line separators. + +Geometry managers play a critical role in laying out where your widgets sit in the displayed window. There are a few different kinds of geometry managers you can use. In this article, I mainly use `grid` geometry to lay widgets out in neat rows. I explain some of the geometry manager differences at the end of this article. + +### Guess the number using wish + +This example game code is different from the examples in my other articles. I've broken it up into chunks to facilitate the explanation. 
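
Before diving into the game itself, here is a minimal sketch of the pattern every `wish` script in this article follows: create a few widgets, hand them to a geometry manager, and let `wish` enter its event loop. The widget names and label text here are arbitrary examples of mine, not part of the game code that follows:

```
#!/usr/bin/env wish
# Two widgets, laid out in a grid; wish enters its event loop
# automatically once the script has been read.
label .greeting -text "Hello from Tk"
button .close -text "Close" -command { exit }
grid .greeting
grid .close
```

Clicking the button ends the program; until then, `wish` simply sits in its event loop waiting for input.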
+ +Start by creating the basic executable script `numgame.wish`: + +``` +$ touch numgame.wish +$ chmod 755 numgame.wish +``` + +Open the file in your favorite text editor. Enter the first section of the code: + +``` +#!/usr/bin/env wish +set LOW 1 +set HIGH 100 +set STATUS "" +set GUESS "" +set num [expr round(rand()*100)] +``` + +The first line defines that the script is executable with `wish`. Then, several global variables are created. I've decided to use all upper-case variables for globals bound to widgets that watch these values (`LOW`, `HIGH` and so on). + +The `num` global is the variable set to the random value you want the game player to guess. This uses Tcl's command execution to derive the value saved to the variable: + +``` +proc Validate {var} { + if { [string is integer $var] } { + return 1 + } + return 0 +} +``` + +This is a special function to validate data entered by the user. It accepts integer numbers and rejects everything else: + +``` +proc check_guess {guess num} { + global STATUS LOW HIGH GUESS + + if { $guess < $LOW } { + set STATUS "What?" + } elseif { $guess > $HIGH } { + set STATUS "Huh?" + } elseif { $guess < $num } { + set STATUS "Too low!" + set LOW $guess + } elseif { $guess > $num } { + set STATUS "Too high!" + set HIGH $guess + } else { + set LOW $guess + set HIGH $guess + set STATUS "That's Right!" + destroy .guess .entry + bind all {.quit invoke} + } + + set GUESS "" +} +``` + +This is the main loop of the value guessing logic. The `global` statement allows you to modify the global variables created at the beginning of the file (more on this topic later). The conditional looks for input that is out of bounds of 1 through 100 and also outside of values the user has already guessed. Valid guesses are compared against the random value. The `LOW` and `HIGH` guesses are tracked as global variables reported in the UI. At each stage, the global `STATUS` variable is updated. This status message is automatically reported in the UI. + +In the case of a correct guess, the `destroy` statement removes the "Guess" button and the entry widget, and re-binds the **Return** (or **Enter**) key to invoke the **Quit** button. + +The last statement `set GUESS ""` is used to clear the entry widget for the next guess: + +``` +label .inst -text "Enter a number between: " +label .low -textvariable LOW +label .dash -text "-" +label .high -textvariable HIGH +label .status -text "Status:" +label .result -textvariable STATUS +button .guess -text "Guess" -command { check_guess $GUESS $num } +entry .entry -width 3 -relief sunken -bd 2 -textvariable GUESS -validate all \ + -validatecommand { Validate %P } +focus .entry +button .quit -text "Quit" -command { exit } +bind all {.guess invoke} +``` + +This is the section where the user interface is set up.  The first six label statements create various bits of text that display on your UI. The option `-textvariable` watches the given variable and updates the label's value automatically. This displays the bindings to global variables `LOW`, `HIGH`, and `STATUS`. + +The `button` lines set up the **Guess** and **Quit** buttons, with the `-command` option specifying what to do when the button is pressed. The **Guess** button invokes the `check_guess` procedure logic above to check the users entered value. + +The `entry` widget gets more interesting. It sets up a three-character wide input field, and binds its input to `GUESS` global. It also configures validation with the `-validatecommand` option. 
This prevents the entry widget from accepting anything other than numbers. + +The `focus` command is a UI polish that starts the program with the entry widget active for input. Without this, you need to click into the entry widget before you can type. + +The `bind` command is an additional UI polish that automatically clicks the **Guess** button when the **Return** key is pressed. If you remember from above in `check_guess`, guessing the correct value re-binds **Return** to the "Quit" button. + +Finally, this section defines the GUI layout: + +``` +grid .inst +grid .low .dash .high +grid .status .result +grid .guess .entry +grid .quit +``` + +The `grid` geometry manager is called in a series of steps to incrementally build up the desired user experience. It essentially sets up five rows of widgets. The first three are labels displaying various values, the fourth is the **Guess** button and `entry` widget, then finally, the **Quit** button. + +At this point, the program is initialized and the `wish` shell enters into the event loop. It waits for the user to enter integer values and press buttons. It updates labels based on changes it finds in watched global variables. + +Notice that the input cursor starts in the entry field and that pressing **Return** invokes the appropriate and available button. + +This was a simple and basic example. Tcl/Tk has a number of options that can make the spacing, fonts, colors, and other UI aspects much more pleasing than the simple UI demonstrated in this article. + +When you launch the application, you may notice that the widgets aren't very fancy or modern. That is because I'm using the original classic widget set, reminiscent of the X Windows Motif days. There are default widget extensions, called themed widgets, which can give your application a more modern and polished look and feel. + +### Play the game! + +After saving the file, run it in the terminal: + +``` +$ ./numgame.wish +``` + +In this case, I can't give console output, so here's an animated GIF to demonstrate how the game is played: + +![A guessing game written in Wish][7] + +### More about Tcl + +Tcl supports the notion of namespaces, so the variables used here need not be global. You can organize your bound widget variables into alternate namespaces. For simple programs like this, it's probably not worth it. For much larger projects, you might want to consider this approach. + +The `proc check_guess` body contains a `global` line I didn't explain. All variables in Tcl are passed by value, and variables referenced within the body are in a local scope. In this case, I wanted to modify the global variable, not a local scoped version. Tcl has a number of ways of referencing variables and executing code in execution stacks higher in the call chain. In some ways, it makes for complexities (and mistakes) for a simple reference like this. But the call stack manipulation is very powerful and allows for Tcl to implement new forms of conditional and loop constructs that would be cumbersome in other languages. + +Finally, in this article, I skipped the topic of geometry managers which are used to take widgets and place them in a specific order. Nothing can be displayed to the screen unless it is managed by some kind of geometry manager. The grid manager is fairly simple. It places widgets in a line, from left to right. I used five grid definitions to create five rows. There are two other geometry managers: place and pack. 
The pack manager arranges widgets around the edges of the window, and the place manager allows for fixed placement. In addition to these geometry managers, there are special widgets called `canvas`, `text`, and `panedwindow` that can hold and manage other widgets. A full description of all these can be found in the classic Tcl/Tk reference guides, and on the [Tk commands][8] documentation page. + +### Keep learning programming + +Tcl and Tk provide a straightforward and effective approach to building graphical user interfaces and event-driven applications. This simple guessing game is just the beginning when it comes to what you can accomplish with these tools. By continuing to learn and explore Tcl and Tk, you can unlock a world of possibilities for building powerful, user-friendly applications. Keep experimenting, keep learning, and see where your newfound Tcl and Tk skills can take you. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/learn-tcltk-wish-simple-game + +作者:[James Farrell][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jamesf +[b]: https://github.com/lkxed/ +[1]: https://opensource.com/article/23/2/learn-tcl-writing-simple-game +[2]: https://opensource.com/article/23/2/learn-expect-automate-simple-game +[3]: https://www.redhat.com/en/technologies/management/ansible/what-is-ansible?intcmp=7013a000002qLH8AAM +[4]: https://opensource.com/article/23/2/linux-kde-desktop-ansible +[5]: https://opensource.com/article/20/6/homebrew-mac +[6]: https://developers.redhat.com/topics/event-driven/all?intcmp=7013a000002qLH8AAM +[7]: https://opensource.com/sites/default/files/2023-03/numgame-wish.gif +[8]: https://tcl.tk/man/tcl8.7/TkCmd/index.html \ No newline at end of file diff --git a/sources/tech/20230425.3 ⭐️⭐️ Retry your Python code until it fails.md b/sources/tech/20230425.3 ⭐️⭐️ Retry your Python code until it fails.md new file mode 100644 index 0000000000..2b57d455dd --- /dev/null +++ b/sources/tech/20230425.3 ⭐️⭐️ Retry your Python code until it fails.md @@ -0,0 +1,404 @@ +[#]: subject: "Retry your Python code until it fails" +[#]: via: "https://opensource.com/article/23/4/retry-your-python-code-until-it-fails" +[#]: author: "Moshe Zadka https://opensource.com/users/moshez" +[#]: collector: "lkxed" +[#]: translator: "MjSeven" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Retry your Python code until it fails +====== + +Sometimes, a function is called with bad inputs or in a bad program state, so it fails. In languages like Python, this usually results in an exception. + +But sometimes exceptions are caused by different issues or are transitory. Imagine code that must keep working in the face of caching data being cleaned up. In theory, the code and the cleaner could carefully agree on the clean-up methodology to prevent the code from trying to access a non-existing file or directory. Unfortunately, that approach is complicated and error-prone. However, most of these problems are transitory, as the cleaner will eventually create the correct structures. + +Even more frequently, the uncertain nature of network programming means that some functions that abstract a network call fail because packets were lost or corrupted. + +A common solution is to retry the failing code. 
This practice allows skipping past transitional problems while still (eventually) failing if the issue persists. Python has several libraries to make retrying easier. This is a common "finger exercise." + +### Tenacity + +One library that goes beyond a finger exercise and into useful abstraction is [tenacity][1]. Install it with `pip install tenacity` or depend on it using a `dependencies = tenacity` line in your `pyproject.toml` file. + +### Set up logging + +A handy built-in feature of `tenacity` is support for logging. With error handling, seeing log details about retry attempts is invaluable. + +To allow the remaining examples display log messages, [set up the logging library][2]. In a real program, the central entry point or a logging configuration plugin does this. Here's a sample: + +``` +import logging + +logging.basicConfig( + level=logging.INFO, + format="%(asctime)s:%(name)s:%(levelname)s:%(message)s", +) + +TENACITY_LOGGER = logging.getLogger("Retrying") +``` + +### Selective failure + +To demonstrate the features of `tenacity`, it's helpful to have a way to fail a few times before finally succeeding. Using `unittest.mock` is useful for this scenario. + +``` +from unittest import mock + +thing = mock.MagicMock(side_effect=[ValueError(), ValueError(), 3]) +``` + +If you're new to unit testing, read my [article on mock][3]. + +Before showing the power of `tenacity`, look at what happens when you implement retrying directly inside a function. Demonstrating this makes it easy to see the manual effort using `tenacity` saves. + +``` +def useit(a_thing): + for i in range(3): + try: + value = a_thing() + except ValueError: + TENACITY_LOGGER.info("Recovering") + continue + else: + break + else: + raise ValueError() + print("the value is", value) +``` + +The function can be called with something that never fails: + +``` +>>> useit(lambda: 5) +the value is 5 +``` + +With the eventually-successful thing: + +``` +>>> useit(thing) + +2023-03-29 17:00:42,774:Retrying:INFO:Recovering +2023-03-29 17:00:42,779:Retrying:INFO:Recovering + +the value is 3 +``` + +Calling the function with something that fails too many times ends poorly: + +``` +try: + useit(mock.MagicMock(side_effect=[ValueError()] * 5 + [4])) +except Exception as exc: + print("could not use it", repr(exc)) +``` + +The result: + +``` +2023-03-29 17:00:46,763:Retrying:INFO:Recovering +2023-03-29 17:00:46,767:Retrying:INFO:Recovering +2023-03-29 17:00:46,770:Retrying:INFO:Recovering + +could not use it ValueError() +``` + +### Simple tenacity usage + +For the most part, the function above was retrying code. The next step is to have a decorator handle the retrying logic: + +``` +import tenacity + +my_retry=tenacity.retry( + stop=tenacity.stop_after_attempt(3), + after=tenacity.after_log(TENACITY_LOGGER, logging.WARNING), +) +``` + +Tenacity supports a specified number of attempts and logging after getting an exception. + +The `useit` function no longer has to care about retrying. Sometimes it makes sense for the function to still consider _retryability_. 
Tenacity allows code to determine retryability by itself by raising the special exception `TryAgain`: + +``` +@my_retry +def useit(a_thing): + try: + value = a_thing() + except ValueError: + raise tenacity.TryAgain() + print("the value is", value) +``` + +Now when calling `useit`, it retries `ValueError` without needing custom retrying code: + +``` +useit(mock.MagicMock(side_effect=[ValueError(), ValueError(), 2])) +``` + +The output: + +``` +2023-03-29 17:12:19,074:Retrying:WARNING:Finished call to '__main__.useit' after 0.000(s), this was the 1st time calling it. +2023-03-29 17:12:19,080:Retrying:WARNING:Finished call to '__main__.useit' after 0.006(s), this was the 2nd time calling it. + +the value is 2 +``` + +### Configure the decorator + +The decorator above is just a small sample of what `tenacity` supports. Here's a more complicated decorator: + +``` +my_retry = tenacity.retry( + stop=tenacity.stop_after_attempt(3), + after=tenacity.after_log(TENACITY_LOGGER, logging.WARNING), + before=tenacity.before_log(TENACITY_LOGGER, logging.WARNING), + retry=tenacity.retry_if_exception_type(ValueError), + wait=tenacity.wait_incrementing(1, 10, 2), + reraise=True +) +``` + +This is a more realistic decorator example with additional parameters: + +- `before`: Log before calling the function +- `retry`: Instead of only retrying `TryAgain`, retry exceptions with the given criteria +- `wait`: Wait between calls (this is especially important if calling out to a service) +- `reraise`: If retrying failed, reraise the last attempt's exception + +Now that the decorator also specifies retryability, remove the code from `useit`: + +``` +@my_retry +def useit(a_thing): + value = a_thing() + print("the value is", value) +``` + +Here's how it works: + +``` +useit(mock.MagicMock(side_effect=[ValueError(), 5])) +``` + +The output: + +``` +2023-03-29 17:19:39,820:Retrying:WARNING:Starting call to '__main__.useit', this is the 1st time calling it. +2023-03-29 17:19:39,823:Retrying:WARNING:Finished call to '__main__.useit' after 0.003(s), this was the 1st time calling it. +2023-03-29 17:19:40,829:Retrying:WARNING:Starting call to '__main__.useit', this is the 2nd time calling it. + + +the value is 5 +``` + +Notice the time delay between the second and third log lines. It's almost exactly one second: + +``` +>>> useit(mock.MagicMock(side_effect=[5])) + +2023-03-29 17:20:25,172:Retrying:WARNING:Starting call to '__main__.useit', this is the 1st time calling it. + +the value is 5 +``` + +With more detail: + +``` +try: + useit(mock.MagicMock(side_effect=[ValueError("detailed reason")]*3)) +except Exception as exc: + print("retrying failed", repr(exc)) +``` + +The output: + +``` +2023-03-29 17:21:22,884:Retrying:WARNING:Starting call to '__main__.useit', this is the 1st time calling it. +2023-03-29 17:21:22,888:Retrying:WARNING:Finished call to '__main__.useit' after 0.004(s), this was the 1st time calling it. +2023-03-29 17:21:23,892:Retrying:WARNING:Starting call to '__main__.useit', this is the 2nd time calling it. +2023-03-29 17:21:23,894:Retrying:WARNING:Finished call to '__main__.useit' after 1.010(s), this was the 2nd time calling it. +2023-03-29 17:21:25,896:Retrying:WARNING:Starting call to '__main__.useit', this is the 3rd time calling it. +2023-03-29 17:21:25,899:Retrying:WARNING:Finished call to '__main__.useit' after 3.015(s), this was the 3rd time calling it. 
+ +retrying failed ValueError('detailed reason') +``` + +Again, with `KeyError` instead of `ValueError`: + +``` +try: + useit(mock.MagicMock(side_effect=[KeyError("detailed reason")]*3)) +except Exception as exc: + print("retrying failed", repr(exc)) +``` + +The output: + +``` +2023-03-29 17:21:37,345:Retrying:WARNING:Starting call to '__main__.useit', this is the 1st time calling it. + +retrying failed KeyError('detailed reason') +``` + +### Separate the decorator from the controller + +Often, similar retrying parameters are needed repeatedly. In these cases, it's best to create a _retrying controller_ with the parameters: + +``` +my_retryer = tenacity.Retrying( + stop=tenacity.stop_after_attempt(3), + after=tenacity.after_log(TENACITY_LOGGER, logging.WARNING), + before=tenacity.before_log(TENACITY_LOGGER, logging.WARNING), + retry=tenacity.retry_if_exception_type(ValueError), + wait=tenacity.wait_incrementing(1, 10, 2), + reraise=True +) +``` + +Decorate the function with the retrying controller: + +``` +@my_retryer.wraps +def useit(a_thing): + value = a_thing() + print("the value is", value) +``` + +Run it: + +``` +>>> useit(mock.MagicMock(side_effect=[ValueError(), 5])) + +2023-03-29 17:29:25,656:Retrying:WARNING:Starting call to '__main__.useit', this is the 1st time calling it. +2023-03-29 17:29:25,663:Retrying:WARNING:Finished call to '__main__.useit' after 0.008(s), this was the 1st time calling it. +2023-03-29 17:29:26,667:Retrying:WARNING:Starting call to '__main__.useit', this is the 2nd time calling it. + +the value is 5 +``` + +This lets you gather the statistics of the last call: + +``` +>>> my_retryer.statistics + +{'start_time': 26782.847558759, + 'attempt_number': 2, + 'idle_for': 1.0, + 'delay_since_first_attempt': 0.0075125470029888675} +``` + +Use these statistics to update an internal statistics registry and integrate with your monitoring framework. + +### Extend tenacity + +Many of the arguments to the decorator are objects. These objects can be objects of subclasses, allowing deep extensionability. + +For example, suppose the Fibonacci sequence should determine the wait times. The twist is that the API for asking for wait time only gives the attempt number, so the usual iterative way of calculating Fibonacci is not useful. 
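If the number of attempts is bounded anyway (as it is here, with `stop_after_attempt`), one pragmatic workaround is to precompute the waits and chain fixed delays together. This is only a sketch using tenacity's built-in `wait_chain` and `wait_fixed` helpers, not part of the original example:

```
import tenacity

# Precomputed Fibonacci waits, in seconds, for a bounded number of attempts.
# When the chain is exhausted, tenacity keeps reusing the last value.
FIB_WAITS = [0, 1, 1, 2, 3, 5, 8]

wait_fib_chain = tenacity.wait_chain(*(tenacity.wait_fixed(w) for w in FIB_WAITS))

@tenacity.retry(stop=tenacity.stop_after_attempt(7), wait=wait_fib_chain)
def useit_precomputed(a_thing):
    print("value is", a_thing())
```

Computing the wait directly from the attempt number is more general, though.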
+ +One way to accomplish the goal is to use the [closed formula][4]: + +![Closed formula for a Fibonacci sequence, written in LaTeX as $(((1+\sqrt{5})/2)^n - ((1-\sqrt{5})/2)^n)/\sqrt{5}$][5] + +A little-known trick is skipping the subtraction in favor of rounding to the closest integer: + +![Variant formula for a Fibonacci sequence, written in LaTeX as $\operatorname{round}((((1+\sqrt{5})/2)^n)/\sqrt{5})$][6] + +Which translates to Python as: + +``` +int(((1 + sqrt(5))/2)**n / sqrt(5) + 0.5) +``` + +This can be used directly in a Python function: + +``` +from math import sqrt + +def fib(n): + return int(((1 + sqrt(5))/2)**n / sqrt(5) + 0.5) +``` + +The Fibonacci sequence counts from `0` while the attempt numbers start at `1`, so a `wait` function needs to compensate for that: + +``` +def wait_fib(rcs): + return fib(rcs.attempt_number - 1) +``` + +The function can be passed directly as the `wait` parameter: + +``` +@tenacity.retry( + stop=tenacity.stop_after_attempt(7), + after=tenacity.after_log(TENACITY_LOGGER, logging.WARNING), + wait=wait_fib, +) +def useit(thing): + print("value is", thing()) +try: + useit(mock.MagicMock(side_effect=[tenacity.TryAgain()] * 7)) +except Exception as exc: + pass +``` + +Try it out: + +``` +2023-03-29 18:03:52,783:Retrying:WARNING:Finished call to '__main__.useit' after 0.000(s), this was the 1st time calling it. +2023-03-29 18:03:52,787:Retrying:WARNING:Finished call to '__main__.useit' after 0.004(s), this was the 2nd time calling it. +2023-03-29 18:03:53,789:Retrying:WARNING:Finished call to '__main__.useit' after 1.006(s), this was the 3rd time calling it. +2023-03-29 18:03:54,793:Retrying:WARNING:Finished call to '__main__.useit' after 2.009(s), this was the 4th time calling it. +2023-03-29 18:03:56,797:Retrying:WARNING:Finished call to '__main__.useit' after 4.014(s), this was the 5th time calling it. +2023-03-29 18:03:59,800:Retrying:WARNING:Finished call to '__main__.useit' after 7.017(s), this was the 6th time calling it. +2023-03-29 18:04:04,806:Retrying:WARNING:Finished call to '__main__.useit' after 12.023(s), this was the 7th time calling it. +``` + +Subtract subsequent numbers from the "after" time and round to see the Fibonacci sequence: + +``` +intervals = [ + 0.000, + 0.004, + 1.006, + 2.009, + 4.014, + 7.017, + 12.023, +] +for x, y in zip(intervals[:-1], intervals[1:]): + print(int(y-x), end=" ") +``` + +Does it work? Yes, exactly as expected: + +``` +0 1 1 2 3 5 +``` + +### Wrap up + +Writing ad-hoc retry code can be a fun distraction. For production-grade code, a better choice is a proven library like `tenacity`. The `tenacity` library is configurable and extendable, and it will likely meet your needs. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/retry-your-python-code-until-it-fails + +作者:[Moshe Zadka][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lkxed/ +[1]: https://tenacity.readthedocs.io/en/latest/index.html +[2]: https://opensource.com/article/17/9/python-logging +[3]: https://opensource.com/article/23/4/using-mocks-python +[4]: https://fabiandablander.com/r/Fibonacci.html +[5]: https://opensource.com/sites/default/files/2023-04/math_0.webp +[6]: https://opensource.com/sites/default/files/2023-04/math2.webp \ No newline at end of file diff --git a/sources/tech/20230426.1 ⭐️⭐️ Test your Drupal website with Cypress.md b/sources/tech/20230426.1 ⭐️⭐️ Test your Drupal website with Cypress.md new file mode 100644 index 0000000000..f3b5ce205c --- /dev/null +++ b/sources/tech/20230426.1 ⭐️⭐️ Test your Drupal website with Cypress.md @@ -0,0 +1,278 @@ +[#]: subject: "Test your Drupal website with Cypress" +[#]: via: "https://opensource.com/article/23/4/website-test-drupal-cypress" +[#]: author: "Jordan Graham https://opensource.com/users/cobadger" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Test your Drupal website with Cypress +====== + +If you don't include tests in your Drupal development, chances are it's because you think it adds complexity and expense without benefit. [Cypress][1] is an open source tool with many benefits: + +- Reliably tests anything that runs in a web browser +- Works on any web platform (it's great for testing projects using front-end technologies like [React][2]) +- Highly extensible +- Increasingly popular +- Easy to learn and implement +- Protects against regression as your projects become more complex +- Can make your development process more efficient + +This article covers three topics to help you start testing your Drupal project using Cypress: + +- [Installing Cypress][3] +- [Writing and running basic tests using Cypress][4] +- [Customizing Cypress for Drupal][5] + +### Install Cypress + +For the purposes of this tutorial I'm assuming that you have built a local dev environment for your Drupal project using the `drupal/recommended-project` project. Although details on creating such a project are outside of the scope of this piece, I recommend [Getting Started with Lando and Drupal 9][6]. + +Your project has at least this basic structure: + +``` +vendor/ +web/ +.editorconfig +.gitattributes +composer.json +composer.lock +``` + +The cypress.io site has [complete installation instructions][7] for various environments. For this article, I installed Cypress using [npm][8]. + +Initialize your project using the command `npm init`. 
Answer the questions that Node.js asks you, and then you will have a `package.json` file that looks something like this: + +``` +{ + "name": "cypress", + "version": "1.0.0", + "description": "Installs Cypress in a test project.", + "main": "index.js", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "", + "license": "ISC" +} +``` + +Install Cypress in your project: + +``` +$ npm install cypress --save-dev +``` + +Run Cypress for the first time: + +``` +$ npx cypress open +``` + +Because you haven't added a config or any scaffolding files to Cypress, the Cypress app displays the welcome screen to help you configure the project. To configure your project for E2E (end-to-end) testing, click the **Not Configured** button for E2E Testing. Cypress adds some files to your project: + +``` +cypress/ +node_modules/ +vendor/ +web/ +.editorconfig +.gitattributes +composer.json +composer.lock +cypress.config.js +package-lock.json +package.json +``` + +Click **Continue** and choose your preferred browser for testing. Click **Start E2E Testing in [your browser of choice]**. I'm using a Chromium-based browser for this article. + +In a separate window, a browser opens to the **Create your first spec** page: + +![Cypress in a web browser][9] + +Click on the **Scaffold example specs** button to create a couple of new folders with example specs to help you understand how to use Cypress. Read through these in your code editor, and you'll likely find the language (based on JavaScript) intuitive and easy to follow. + +Click on **any** in the test browser. This reveals two panels. On the left, a text panel shows each step in the active spec. On the right, a simulated browser window shows the actual user experience as Cypress steps through the spec. + +Open the `cypress.config.js` file in your project root and change it as follows: + +``` +const { defineConfig } = require("cypress"); + +module.exports = defineConfig({ + component: { + fixturesFolder: "cypress/fixtures", + integrationFolder: "cypress/integration", + pluginsFile: "cypress/plugins/index.js", + screenshotsFolder: "cypress/screenshots", + supportFile: "cypress/support/e2e.js", + videosFolder: "cypress/videos", + viewportWidth: 1440, + viewportHeight: 900, + }, + + e2e: { + setupNodeEvents(on, config) { + // implement node event listeners here + }, + baseUrl: "https://[your-local-dev-url]", + specPattern: "cypress/**/*.{js,jsx,ts,tsx}", + supportFile: "cypress/support/e2e.js", + fixturesFolder: "cypress/fixtures" + }, + }); +``` + +Change the `baseUrl` to your project's URL in your local dev environment. + +These changes tell Cypress where to find its resources and how to find all of the specs in your project. + +### Write and run basic tests using Cypress + +Create a new directory called `integration` in your `/cypress` directory. 
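If you're working from a terminal, one way to do that (from the project root) is:

```
$ mkdir -p cypress/integration
```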
Within the `integration` directory, create a file called `test.cy.js`: + +``` +cypress/ +├─ e2e/ +├─ fixtures/ +├─ integration/ +│ ├─ test.cy.js +├─ support/ +node_modules/ +vendor/ +web/ +.editorconfig +.gitattributes +composer.json +composer.lock +cypress.config.js +package-lock.json +package.json +``` + +Add the following contents to your `test.cy.js` file: + +``` +describe('Loads the front page', () => { + it('Loads the front page', () => { + cy.visit('/') + cy.get('h1.page-title') + .should('exist') + }); +}); + +describe('Tests logging in using an incorrect password', () => { + it('Fails authentication using incorrect login credentials', () => { + cy.visit('/user/login') + cy.get('#edit-name') + .type('Sir Lancelot of Camelot') + cy.get('#edit-pass') + .type('tacos') + cy.get('input#edit-submit') + .contains('Log in') + .click() + cy.contains('Unrecognized username or password.') + }); +}); +``` + +When you click on `test.cy.js` in the Cypress application, watch each test description on the left as Cypress performs the steps in each `describe()` section. + +This spec demonstrates how to tell Cypress to navigate your website, access HTML elements by ID, enter content into input elements, and submit the form. This process is how I discovered that I needed to add the assertion that the `` element contains the text **Log in** before the input was clickable. Apparently, the flex styling of the submit input impeded Cypress' ability to "see" the input, so it couldn't click on it. Testing really works! + +### Customize Cypress for Drupal + +You can write your own custom Cypress commands, too. Remember the `supportFile` entry in the `cypress.config.js` file? It points to a file that Cypress added, which in turn imports the `./commands` files. Incidentally, Cypress is so clever that when importing logic or data fixtures, you don't need to specify the file extension, so you import `./commands`, not `./commands.js`. Cypress looks for any of a dozen or so popular file extensions and understands how to recognize and parse each of them. + +Enter commands into `commands.js` to define them: + +``` +/** + * Logs out the user. + */ + +Cypress.Commands.add('drupalLogout', () => { + cy.visit('/user/logout'); +}) + +/** + * Basic user login command. Requires valid username and password. + * + * @param {string} username + * The username with which to log in. + * @param {string} password + * The password for the user's account. + */ + +Cypress.Commands.add('loginAs', (username, password) => { + cy.drupalLogout(); + cy.visit('/user/login'); + cy.get('#edit-name') + .type(username); + cy.get('#edit-pass').type(password, { + log: false, + }); + + cy.get('#edit-submit').contains('Log in').click(); +}); +``` + +This example defines a custom Cypress command called `drupalLogout()`, which you can use in any subsequent logic, even other custom commands. To log a user out, call `cy.drupalLogout()`. This is the first event in the custom command `loginAs` to ensure that Cypress is logged out before attempting to log in as a specific user. + +Using environment variables, you can even create a Cypress command called `drush()`, which you can use to execute Drush commands in your tests or custom commands. Look at how simple this makes it to define a custom Cypress command that logs a user in using their UID: + +``` +/** +* Logs a user in by their uid via drush uli. 
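*
* @param {number} uid
*   The numeric Drupal user ID to log in as.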
+*/ + +Cypress.Commands.add('loginUserByUid', (uid) => { + cy.drush('user-login', [], { uid, uri: Cypress.env('baseUrl') }) + .its('stdout') + .then(function (url) { + cy.visit(url); + }); +}); +``` + +This example uses the `drush user-login` command (`drush uli` for short) and takes the authenticated user to the site's base URL. + +Consider the security benefit of never reading or storing user passwords in your testing. Personally, I find it amazing that a front-end technology like Cypress can execute Drush commands, which I've always thought of as being very much on the back end. + +### Testing, testing + +There's a lot more to Cypress, like fixtures (files that hold test data) and various tricks for navigating the sometimes complex data structures that produce a website's user interface. For a look into what's possible, watch the [Cypress Testing for Drupal Websites][10] webinar, particularly [the section on fixtures that begins at 18:33][11]. That webinar goes into greater detail about some interesting use cases, including an Ajax-enabled form. Once you start using it, feel free to use or fork [Aten's public repository of Cypress Testing for Drupal][12]. + +Happy testing! + +_This article originally appeared on the [Aten blog][13] and is republished with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/website-test-drupal-cypress + +作者:[Jordan Graham][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cobadger +[b]: https://github.com/lkxed/ +[1]: https://www.cypress.io/ +[2]: https://opensource.com/article/20/11/reactjs-tutorial +[3]: https://opensource.com/article/23/4/website-test-drupal-cypress#install-cypress +[4]: https://opensource.com/article/23/4/website-test-drupal-cypress#write-run +[5]: https://opensource.com/article/23/4/website-test-drupal-cypress#customize-cypress +[6]: https://www.specbee.com/blogs/getting-started-with-lando-and-drupal-9 +[7]: https://docs.cypress.io/guides/getting-started/installing-cypress +[8]: https://docs.npmjs.com/downloading-and-installing-node-js-and-npm +[9]: https://opensource.com/sites/default/files/2023-04/cypress-in-browser.webp +[10]: https://atendesigngroup.com/webinar/cypress-testing-drupal-websites +[11]: https://youtu.be/pKiBuYImoI8?t=1113 +[12]: https://bitbucket.org/aten_cobadger/cypress-for-drupal/src/main/ +[13]: https://atendesigngroup.com/articles \ No newline at end of file diff --git a/sources/tech/20230427.1 ⭐️⭐️ Run a virtual conference using only open source tools.md b/sources/tech/20230427.1 ⭐️⭐️ Run a virtual conference using only open source tools.md new file mode 100644 index 0000000000..0bbc597e87 --- /dev/null +++ b/sources/tech/20230427.1 ⭐️⭐️ Run a virtual conference using only open source tools.md @@ -0,0 +1,312 @@ +[#]: subject: "Run a virtual conference using only open source tools" +[#]: via: "https://opensource.com/article/23/4/open-source-tools-virtual-conference" +[#]: author: "Máirín Duffy https://opensource.com/users/mairin" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Run a virtual conference using only open source tools +====== + +[The Fedora Design Team][1] discovered that using open source tools to run a virtual conference can be quite effective by hosting the first [Creative Freedom 
Summit][2] in January 2023. + +In this article, I'll share some background on the conference, why using open source tools to run it was important to us, and the specific tools and configurations our team used to make it all work. I'll also talk about what worked well and what will need improvement at our next summit in 2024. + +### What is Creative Freedom Summit? + +The Creative Freedom Summit was an idea Marie Nordin came up with after reviewing talk submissions for [Flock, the annual Fedora users and contributors conference][3]. She received many talk submissions for the August 2022 Flock relating to design and creativity in open source—far more than we could possibly accept. With so many great ideas for open source design-related talks out there, she wondered if there would be space for a separate open source creativity conference focused on creatives who use open source tools to produce their work. + +Marie brought this idea to the Fedora Design Team in the fall of 2022, and we started planning the conference, which took place January 17-19, 2023. Since it was our first time running a new conference like this, we decided to start with invited speakers based on some of the Flock submissions and our own personal network of open source creatives. Almost every speaker we asked gave a talk, so we didn't have room to accept submissions. We will need to figure out this next year, so we don't have an open source CFP (Call for Papers) management tool for that to tell you about yet. + +### Using open source for open source conferences + +Since the initial COVID pandemic lockdowns, Fedora's Flock conference has been run virtually using Hopin, an online conference platform that isn't open source but is friendly to open source tools. Fedora started using it some years ago, and it definitely provides a professional conference feel, with a built-in sponsor booth/expo hall, tracks, hallway chat conversations, and moderation tools. Running the Creative Freedom Summit using Hopin was an option for us because, as a Fedora-sponsored event, we could access Fedora's Hopin setup. Again, Hopin is not open source. + +Now, as a long-term (~20 years) open source contributor, I can tell you that this kind of decision is always tough. If your conference focuses on open source, using a proprietary platform to host your event feels a little strange. However, as the scale and complexity of our communities and events have grown, the ability to produce an integrated open source conference system has become more challenging. + +There is no right or wrong answer. You have to weigh a lot of things when making this decision: + +- Budget +- People power +- Infrastructure +- Technical capability +- Complexity/formality/culture of the event + +We didn't have any budget for this event. We did have a team of volunteers who could put some work hours into it. We had the Fedora Matrix Server as a piece of supported infrastructure we could bring into the mix and access to a hosted WordPress system for the website. Teammate Madeline Peck and I had the technical capability/experience of running the live, weekly Fedora Design Team [video calls][4] using PeerTube. We wanted the event to be low-key, single-track, and informal, so we had some tolerance for glitches or rough edges. We also all had a lot of passion for trying an open source stack. + +Now you know a little about our considerations when making this decision, which might help when making decisions for your event. 
+ +### An open source conference stack + +Here is how the conference tech stack worked. + +#### Overview + +**Live components** + +- **Livestream**: We streamed the stage and the social events to a PeerTube channel. Conference attendees could watch the stream live from our PeerTube channel. PeerTube includes some privacy-minded analytics to track the number of livestream viewers and post-event views. +- **Live stage + social event room**: We had one live stage for speakers and hosts using Jitsi, ensuring only those with permission could be on camera. We had an additional Jitsi meeting room for social events that allowed anyone who wanted to participate in the social event to go on camera. +- **Backstage**: We had a "Backstage" Matrix channel to coordinate with speakers, hosts, and volunteers in one place while the event was going on. +- **Announcements and Q&A**: We managed Q&A and the daily schedule for the conference via a shared Etherpad (which we later moved to Hackmd.io). +- **Integrated and centralized conference experience**: Using Matrix's Element client, we embedded the livestream video and an Etherpad into a public Matrix room for the conference. We used attendance in the channel to monitor overall conference attendance. We had a live chat throughout the conference and took questions from audience members from the chat and the embedded Q&A Etherpad. +- **Conference website**: We had a beautifully-designed website created by Ryan Gorley hosted on WordPress, which had the basic information and links for how to join the conference, the dates/times, and the schedule. + +#### Post-event components + +- **Post-event survey**: We used the open source LimeSurvey system to send out a post-event survey to see how things went for attendees. I use some of the data from that survey in this article. +- **Post-event video editing and captioning**: We didn't have a live captioning system for the conference, but as I was able, I typed live notes from talks into the channel, which attendees greatly appreciated. Post-event, we used Kdenlive (one of the tools featured in talks at the event) to edit the videos and generate captions. +- **Event recordings**: PeerTube automagically posts livestream recordings to channels, making nearly instant recordings available for attendees for talks they may have missed. + +I'll cover some details next. + +### Livestream with PeerTube + +![Screenshot showing the Creative Freedom Summit PeerTube channel, with the logo, a description of the event, and a set of video thumbnails][5] + +We used the [LinuxRocks PeerTube platform][6] generously hosted by [LinuxRocks.online][7] for the Creative Freedom Summit's livestream. PeerTube is a free and open source decentralized video platform that is also part of the Fediverse. + +One of the best features of PeerTube (that other platforms I am aware of don't have) is that after your livestream ends, you get a near-instant replay recording posted to your channel on PeerTube. Users in our chatroom cited this as a major advantage of the platform. If an attendee missed a session they were really interested in, they could watch it within minutes of that talk's end. It took no manual intervention, uploading, or coordination on the part of the volunteer organizing team to make this happen; PeerTube automated it for us. + +Here is how livestreaming with PeerTube works: You create a new livestream on your channel, and it gives you a livestreaming URL + a key to authorize streaming to the URL. This URL + key can be reused over and over. 
We configured it so that the recording would be posted to the channel where we created the livestreaming URL as soon as a livestream ended. Next, copy/paste this into Jitsi when you start the livestream. This means that you don't have to generate a new URL + key for each talk during the conference—the overhead of managing that for organizers would have been pretty significant. Instead, we could reuse the same URL + key shared in a common document among conference organizers (we each had different shifts hosting talks). Anyone on the team with access to that document could start the livestream. + +#### How to generate the livestream URL + key in PeerTube + +The following section covers generating the livestream URL + key in PeerTube, step-by-step. + +**1. Create stream video on PeerTube** + +Log into PeerTube, and click the **Publish** button in the upper right corner: + +![Screenshot of the PeerTube Publish button][8] + +**2. Set options** + +Click on the **Go live** tab (fourth from the left) and set the following options: + +- Channel: (The channel name you want the livestream to publish on) +- Privacy: Public +- Radio buttons: Normal live + +Then, select **Go Live**. (Don't worry, you won't really be going live quite yet, there is more data to fill in.) + +![Screenshot of the Go Live button in PeerTube][9] + +**3. Basic info (don't click update yet)** + +First, fill out the **Basic Info** tab, then choose the **Advanced Settings** tab in the next step. Fill out the name of the livestream, description, add tags, categories, license, etc. Remember to publish after the transcoding checkbox is turned on. + +This ensures once your livestream ends, the recording will automatically post to your channel. + +**4. Advanced settings** + +You can upload a "standby" image that appears while everyone is watching the stream URL and waiting for things to start. + +![Screenshot of PeerTube Advanced Settings][10] + +This is the standby image we used for the Creative Freedom Summit: + +![Screenshot of the Creative Freedom Summit banner][11] + +**5. Start livestream on PeerTube** + +Select the **Update** button in the lower right corner. The stream will appear like this—it's in a holding pattern until you start streaming from Jitsi: + +![Screenshot of starting the live stream on PeerTube][12] + +**6. Copy/paste the livestream URL for Jitsi** + +This is the final step in PeerTube. Once the livestream is up, click on the **…** icon under the video and towards the right: + +![Copy and paste the URL][13] + +Select **Display live information**. You'll get a dialog like this: + +![Screenshot of Display live information option][14] + +You must copy both the live RTMP URL and the livestream key. Combine them into one URL and then copy/paste that into Jitsi. + +The following are examples from my test run of these two text blocks to copy: + +- Live RTMP Url: **rtmp://peertube.linuxrocks.online:1935/live** +- Livestream key: **8b940f96-c46d-46aa-81a0-701de3c43c8f** + +What you'll need to paste into Jitsi is these two text blocks combined with a **/** between them, like so: + +**rtmp://peertube.linuxrocks.online:1935/live/8b940f96-c46d-46aa-81a0-701de3c43c8f** + +### Live stage + social event room: Jitsi + +We used the free and open source hosted [Jitsi Meet][15] video conferencing platform for our "live stage." We created a Jitsi meeting room with a custom URL at **[https://meet.jit.si][16]** and only shared this URL with speakers and meeting organizers. 
+ +We configured the meeting with a lobby (this feature is available in meeting settings once you join your newly-created meeting room) so speakers could join a few minutes before their talk without fear of interrupting the presentation before theirs. (Our host volunteers let them in when the previous session finished.) Another option is to add a password to the room. We got by just by having a lobby configured. It did seem, upon testing, that the moderation status in the room wasn't persistent. If a moderator left the room, they appeared to lose moderator status and settings, such as the lobby setup. I kept the Jitsi room available and active for the entire conference by leaving it open on my computer. (Your mileage may vary on this aspect.) + +Jitsi has a built-in livestreaming option, where you can post a URL to a video service, and it will stream your video to that service. We had confidence in this approach because it is how we host and livestream weekly [Fedora Design Team meetings][17]. For the Creative Freedom Summit, we connected our Jitsi Live Stage (for speakers and hosts) to [a channel on the Linux Rocks PeerTube][6]. + +Jitsi lets speakers share their screens to drive their own slides or live demos. + +### Livestreaming Jitsi to PeerTube + +1. Join the meeting and click the **…** icon next to the red hangup button at the bottom of the screen. + +![Join the Jitsi meeting][18] + +2. Select **Start live stream** from the pop-up menu. + +![Screenshot of starting the live stream in Jitsi][19] + +3. Copy/paste the PeerTube URL + key text + +![Screenshot of copying and pasting the livestream key][20] + +4. Listen for your Jitsi Robot friend + +A feminine voice will come on in a few seconds to tell you, "Live streaming is on." Once she sounds, smile! You're livestreaming. + +5. Stop the livestream + +This stops the PeerTube URL you set up from working, so repeat these steps to start things back up. + +#### Jitsi tips + +**Managing Recordings by turning the Jitsi stream on and off** + +We learned during the conference that it is better to turn the Jitsi stream off between talks so that you will have one raw recording file per talk posted to PeerTube. We let it run as long as it would the first day, so some recordings have multiple presentations in the same video, which made using the instant replay function harder for folks trying to catch up. They needed to seek inside the video to find the talk they wanted to watch or wait for us to post the edited version days or weeks later. + +**Preventing audio feedback** + +Another issue we figured out live during the event that didn't crop up during our tests was audio feedback loops. These were entirely my fault (sorry to everyone who attended). I was setting up the Jitsi/PeerTube links, monitoring the streams, and helping host and emcee the event. Even though I knew that once we went live, I needed to mute any PeerTube browser tabs I had open, I either had more PeerTube tabs open than I thought and missed one, or the livestream would autostart in my Element client (which I had available to monitor the chat). I didn't have an easy way to mute Element. In some of the speaker introductions I made, you'll see that I knew I had about 30 seconds before the audio feedback would start, so I gave very rushed/hurried intros. + +I think there are simpler ways to avoid this situation: + +- Try to ensure your host/emcee is not also the person setting up/monitoring the streams and chat. 
(Not always possible, depending on how many volunteers you have at any given time.) +- If possible, monitor the streams on one computer and emcee from another. This way, you have one mute button to hit on the computer you're using for monitoring, and it simplifies your hosting experience on the other. + +This is something worth practicing and refining ahead of time. + +### Backstage: Element + +![A screenshot showing three chat room listings in Element: Creative Freedom Summit with a white logo, Creative Freedom Summit Backstage with a black logo, and Creative Freedom Summit Hosts with an orange logo][21] + +We set up a "Backstage" invite-only chat room a week or so before the conference started and invited all our speakers to it. This helped us ensure a couple of things: + +- Our speakers were onboarded to Element/Matrix well before the event's start and had the opportunity to get help signing up if they had any issues (nobody did). +- We started a live communication channel with all speakers before the event so that we could send announcements/updates pretty easily. + +The channel served as a useful place during the event to coordinate transitions between speakers, give heads up about whether the schedule was running late, and in one instance, quickly reschedule a talk when one of our speakers had an emergency and couldn't make their original scheduled time. + +We also set up a room for hosts, but in our case, it was extraneous. We just used the backstage channel to coordinate. We found two channels were easy to monitor, but three were too many to be convenient. + +### Announcements and Q&A: Etherpad/Hackmd.io + +![Screenshot of an etherpad titled "General information" that has some info about the Creative Freedom Summit][22] + +We set up a pinned widget in our main Element channel with general information about the event, including the daily schedule, code of conduct, etc. We also had a section per talk of the day for attendees to drop questions for Q&A, which the host read out loud for the speaker. + +We found over the first day or two that some attendees were having issues with the Etherpad widget not loading, so we switched to an embedded hackmd.io document pinned to the channel as a widget, and that seemed to work a little better. We're not 100% sure what was going on with the widget loading issues, but we were able to post a link to the raw (non-embedded) link in the channel topic, so folks could get around any problems accessing it via the widget. + +### Integrated and centralized conference experience + +![A video feed is in the upper left corner, a hackmd.io announcement page in the upper right, and an active chat below.][23] + +Matrix via Fedora's Element server was the single key place to go to attend the conference. Matrix chat rooms in Element have a widget system that allows you to embed websites into the chat room as part of the experience. That functionality was important for having our Matrix chat room serve as the central place to attend. + +We embedded the PeerTube livestream into the channel—you can see it in the screenshot above in the upper left. Once the conference was over, we could share a playlist of the unedited video replays playlist. Now that our volunteer project for editing the videos is complete, the channel has the playlist of edited talks in order. + +As discussed in the previous section, we embedded a hackmd.io note in the upper right corner to post the day's schedule, post announcements, and an area for Q&A right in the pad. 
I had wanted to set up a Matrix bot to handle Q&A, but I struggled to get one running. It might make for a cool project for next year, though. + +Conversations during the conference occurred right in the main chat under these widgets. + +There are a couple of considerations to make when using a Matrix/Element chat room as the central place for an online conference, such as: + +- The optimal experience will be in the Element desktop client or a web browser on a desktop system. However, you can view the widgets in the Element mobile client (although some attendees struggled to discover this, the UI is less-than-obvious). Other Matrix clients may not be able to view the widgets. +- Attendees can easily DIY their own experience piecemeal if desired. Users not using the Element client to attend the conference reported no issues joining in on the chat and viewing the PeerTube livestream URL directly. We shared the livestream URL and the hackmd URL in the channel topic, making it accessible to folks who preferred not to run Element. + +### Website + +![Screenshot showing the top of creativefreedomsummit.com, with the headline "Create. Learn. Connect." against a blue and purple gradient background.][24] + +[Ryan Gorley][25] developed the [Creative Freedom Summit website][26] using [WordPress][27]. It is hosted by WPengine and is a one-pager with the conference schedule embedded from sched.org. + +### Post-event + +#### Post-event survey + +We used the open source survey tool LimeSurvey. We sent it out within a week or two to attendees via the Element Chat channel and our PeerTube video channel to learn more about how we handled the event. The event organizers continue to meet regularly. One topic we focus on during these post-event meetings is developing the questions for the survey in a shared hackmd.io document. The following are some things we learned from the event that might be of interest to you in planning your own open source powered online conference: + +- By far, most event attendees learned about the event from Mastodon and Twitter (together, covering 70% of respondents). +- 33% of attendees used the Element desktop app to attend, and 30% used the Element Chat web app. So roughly 63% of attendees used the integrated Matrix/Element experience. The rest watched directly on PeerTube or viewed replays after. +- 35% of attendees indicated they made connections with other creatives at the event via the chat, so the chat experience is pretty important to events if part of your goal is enabling networking and connections. + +#### Captioning + +During the event, we received positive feedback from participants who appreciated when another attendee live-captioned the talk in the chat and wished out loud for live captioning for better accessibility. While the stack outlined here did not include live captioning, there are open source solutions for it. One such tool is Live Captions, and Seth Kenlon covered it in an opensource.com article, [Open source video captioning on Linux][28]. While this tool is meant for the attendee consuming the video content locally, we could potentially have a conference host running it and sharing it to the livestream in Jitsi. One way to do this is using the open source broadcasting tool [OBS][29] so everyone watching the livestream could benefit from the captions. + +While editing the videos post-event, we discovered a tool built into [Kdenlive][30], our open source video editor of choice, that generates and automatically places subtitles in the videos. 
There are basic instructions on how to do this in the [Kdenlive manual][31]. Fedora Design Team member Kyle Conway, who helped with the post-event video editing, put together a [comprehensive tutorial (including video instruction) on automatically generating and adding subtitles to videos in Kdenlive][32]. It is well worth the read and watch if you are interested in this feature. + +#### Video editing volunteer effort + +When the event was over, we rallied a group of volunteers from the conference Element channel to work together on editing the videos, including title cards and intro/outro music, and general cleanup. Some of our automatic replay recordings were split across two files or combined in one file with multiple other talks and needed to be reassembled or cropped down. + +We used a [GitLab epic to organize the work][33], with an FAQ and call for volunteer help organized by skillset, with issues attached for each video needed. We had a series of custom labels we would set on each video so it was clear what state the video was in and what kind of help was needed. All the videos have been edited, and some need content written for their description area on the [Creative Freedom Summit channel][6]. Many have auto-generated subtitles that have not been edited for spelling mistakes and other corrections common with auto-generated text. + +![Screenshot of the list of videos needing editing help in GitLab][34] + +We passed the videos around—the files could be quite large—by having volunteers download the raw video from the unedited recording on the main PeerTube channel for the Creative Freedom Summit. When they had an edited video ready to share, we had a private PeerTube account where they could upload it. Admins with access to the main channel's account periodically grabbed videos from the private account and uploaded them to the main account. Note that PeerTube doesn't have a system where multiple accounts have access to the same channel, so we had to engage in a bit of password sharing, which can be nerve-wracking. We felt this was a reasonable compromise to limit how many people had the main password but still enable volunteers to submit edited videos without too much hassle. + +### Ready to give it a try? + +I hope this comprehensive description of how we ran the Creative Freedom Summit conference using an open source stack of tools inspires you to try it for your open source event. Let us know how it goes, and feel free to reach out if you have questions or suggestions for improvement! 
Our channel is at: [https://matrix.to/#/#creativefreedom:fedora.im][35] + +_This article is adapted from [Run an open source-powered virtual conference][36] and is republished with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/23/4/open-source-tools-virtual-conference + +作者:[Máirín Duffy][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mairin +[b]: https://github.com/lkxed/ +[1]: https://fedoraproject.org/wiki/Design +[2]: http://creativefreedomsummit.com/ +[3]: http://flocktofedora.org/ +[4]: https://opensource.com/article/23/3/video-templates-inkscape +[5]: https://opensource.com/sites/default/files/2023-04/homepage.webp +[6]: https://peertube.linuxrocks.online/c/creativefreedom/videos +[7]: https://linuxrocks.online/ +[8]: https://opensource.com/sites/default/files/2023-04/publish.png +[9]: https://opensource.com/sites/default/files/2023-04/go-live.png +[10]: https://opensource.com/sites/default/files/2023-04/advdsettings.png +[11]: https://opensource.com/sites/default/files/2023-04/cfsbanner.png +[12]: https://opensource.com/sites/default/files/2023-04/startlivestream.jpg +[13]: https://opensource.com/sites/default/files/2023-04/pasteURL.png +[14]: https://opensource.com/sites/default/files/2023-04/liveinformation.png +[15]: https://meet.jit.si/ +[16]: https://meet.jit.si +[17]: https://peertube.linuxrocks.online/c/fedora_design_live/videos +[18]: https://opensource.com/sites/default/files/2023-04/moreactions.png +[19]: https://opensource.com/sites/default/files/2023-04/startlivestream.png +[20]: https://opensource.com/sites/default/files/2023-04/copypastekey.png +[21]: https://opensource.com/sites/default/files/2023-04/backstage.webp +[22]: https://opensource.com/sites/default/files/2023-04/hackmd.webp +[23]: https://opensource.com/sites/default/files/2023-04/integratedexperience.webp +[24]: https://opensource.com/sites/default/files/2023-04/website.webp +[25]: https://mastodon.social/@ryangorley +[26]: https://creativefreedomsummit.com/ +[27]: https://wordpress.com/ +[28]: https://opensource.com/article/23/2/live-captions-linux +[29]: https://obsproject.com/ +[30]: https://kdenlive.org/ +[31]: https://docs.kdenlive.org/en/effects_and_compositions/speech_to_text.html +[32]: https://gitlab.com/groups/fedora/design/-/epics/23#video-captioning-and-handoff +[33]: https://gitlab.com/groups/fedora/design/-/epics/23 +[34]: https://opensource.com/sites/default/files/2023-04/availablevideos_0.webp +[35]: https://matrix.to/#/#creativefreedom:fedora.im +[36]: https://blog.linuxgrrl.com/2023/04/10/run-an-open-source-powered-virtual-conference/ \ No newline at end of file diff --git a/sources/tech/20230516.0 ⭐️⭐️ Guide to Set up Full Wayland with Arch Linux.md b/sources/tech/20230516.0 ⭐️⭐️ Guide to Set up Full Wayland with Arch Linux.md new file mode 100644 index 0000000000..16ae0f89b4 --- /dev/null +++ b/sources/tech/20230516.0 ⭐️⭐️ Guide to Set up Full Wayland with Arch Linux.md @@ -0,0 +1,244 @@ +[#]: subject: "Guide to Set up Full Wayland with Arch Linux" +[#]: via: "https://www.debugpoint.com/wayland-arch-linux/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Guide to Set up Full Wayland with Arch Linux 
+====== + +**Is it possible to go full Wayland in Arch Linux using mainstream desktop environments or window managers? Let’s find out.** + +Wayland is a modern and efficient protocol for displaying graphical applications on Linux. It offers several advantages over the older X.Org display server, such as improved security, stability, and graphical performance. + +While X.Org has been the go-to display server for many years, its age and complexity have led to various issues, including security vulnerabilities and compatibility problems with newer hardware. Wayland addresses these concerns by offering a more streamlined and secure display protocol. + +However, it’s been almost a decade since the Wayland transition is going on, and it is understandable. Major Linux distributions, such as Ubuntu and Fedora – already defaulted to Wayland sessions since 2021. The primary reason is the protocol is now stable enough. + +However, Arch Linux users may find setting up a custom install with Wayland complex. Only KDE Plasma and GNOME have up-to-date Wayland support among all the mainstream desktop environments. Xfce, LXQt and other desktops are developing Wayland support, but they are not ready yet. + +On the window manager front, Sway has full Wayland support in Arch Linux. That being said, I wanted to test how Wayland is performing in Arch and want to give you a status check as of today. + +Let’s try to set up KDE Plasma & GNOME in Arch Linux with full Wayland support. + +### Set up Wayland in Arch Linux + +Ideally, you should have the [base `wayland` package][1] installed already. Open a terminal and verify running the below command. + +``` +pacman -Qi wayland +``` + +If it is not installed, install it using: + +``` +sudo pacman -S --needed wayland +``` + +#### KDE Plasma Desktop + +The following steps assume you have a bare metal Arch Linux installation without any desktop environment or window manager. You can install Arch Linux bare metal using the [great archinstall script][2]. + +Standard KDE Plasma setup in Arch Linux requires a few changes for Wayland. A few packages are needed from AUR, hence make sure to [set up Yay][3] or any other AUR helper. + +Firstly, install a custom `sddm` display manager Wayland package from AUR using the following command. This is a different `sddm` package than the one available in the Arch “Extra” repo. As [per ArchWiki][4], only GDM and sddm-git have the proper Wayland support in Arch Linux at the moment. + +``` +yay -S sddm-git +``` + +Once installed, use the below command to install a few Wayland packages. + +- xorg-xwayland: For running xclients under Wayland +- xorg-xlsclients: List client applications running on a display (optional) +- qt5-wayland: Qt APIs for Wayland +- glfw-wayland: GUI app dev packages for Wayland + +``` +pacman -S --needed xorg-xwayland xorg-xlsclients qt5-wayland glfw-wayland +``` + +Second, install the plasma and associated apps with Wayland sessions using the below set of commands. Execute them in the order mentioned below. + +``` +pacman -S --needed plasma kde-applications +``` + +``` +pacman -S --needed plasma-wayland-session +``` + +**Note**: If you are using NVIDIA, you may want to install `egl-wayland`package. However, I have not tried it. + +Let’s install Firefox and Chromium as well, so that you can test Wayland is working properly. + +``` +pacman -S --needed firefox chromium +``` + +Once done, enable the display manager and NetworkManager service. 
```
sudo systemctl enable sddm
sudo systemctl enable NetworkManager
```

The sddm display manager needs a few more tweaks. Using any text editor, open the sddm configuration file and add `Current=breeze` under `[Theme]`.

```
sudo nano /usr/lib/sddm/sddm.conf.d/default.conf
```

```
[Theme]
# current theme name
Current=breeze
```

Once done, save and close the file. And reboot.

```
reboot
```

And in the login screen, you should see the Wayland option. Select it and log in to the Wayland session of KDE Plasma in Arch Linux.

![Plasma Wayland session during login][5]

You can also verify [whether you are running Wayland][6] using the `$XDG_SESSION_TYPE` variable.

![KDE Plasma with Wayland in Arch Linux][7]

If you want to force Firefox to use Wayland, then open `/etc/environment` and add the following line.

```
MOZ_ENABLE_WAYLAND=1
```

Then, reboot or run the command below for the change to take effect.

```
source /etc/environment
```

Open Firefox and go to `about:support` to verify the value against "Window protocol". You can also run `xlsclients` from the terminal to see which external apps are running under Wayland.

![Firefox is using xwayland in KDE Plasma with Arch][8]

So, that completes the KDE Plasma setup with Wayland in Arch Linux.

#### Performance of Wayland KDE Plasma session in Arch

Overall, KDE Plasma on Wayland in Arch Linux works well, with no show-stoppers or major problems. The Spectacle app is able to take screenshots and screencasts. That said, here are a few things I noticed while testing the session.

Firstly, there is an intermittent flicker in the bottom panel while launching applications such as Dolphin. This was inside a VirtualBox session.

Secondly, the mouse cursor behaviour is a little strange. The cursor does not change its state from pointer to handle properly (see below).

Third, KWin crashed when resuming from standby/screen off (in VirtualBox without guest additions). This might be specific to the virtual machine, but it required a hard reboot to get back to the desktop.

The memory consumption is around 2 GB in an idle Wayland session with Arch Linux.

#### GNOME

The following steps assume you have a bare metal Arch Linux installation without any desktop environment or window manager. You can install Arch Linux bare metal using the [great archinstall script][2].

The GDM display manager has full Wayland support in Arch Linux. First, install it using the below command:

```
pacman -S --needed gdm
```

Once installed, use the below command to install a few Wayland packages.

- xorg-xwayland: For running xclients under Wayland
- xorg-xlsclients: List client applications running on a display (optional)
- glfw-wayland: GUI app dev packages for Wayland

```
pacman -S --needed xorg-xwayland xorg-xlsclients glfw-wayland
```

Second, install GNOME and its associated apps using the below command.

```
sudo pacman -S --needed gnome gnome-tweaks nautilus-sendto gnome-nettool gnome-usage gnome-multi-writer adwaita-icon-theme xdg-user-dirs-gtk fwupd arc-gtk-theme
```

**Note**: If you are using NVIDIA, you may want to install the `egl-wayland` package. However, I have not tried it.

Let's install Firefox and Chromium as well, so that you can test that Wayland is working properly with GNOME.

```
pacman -S --needed firefox chromium
```

Once done, enable the display manager and NetworkManager service.
```
sudo systemctl enable gdm
sudo systemctl enable NetworkManager
```

Once done, reboot.

```
reboot
```

And in the login screen, you should see the _GNOME (Wayland)_ option. Select it and log in to the Wayland session of GNOME in Arch Linux.

![GNOME with Wayland running in Arch Linux][9]

#### Performance of GNOME

If I compare GNOME and KDE Plasma, GNOME performed better with Wayland in Arch Linux. There were no significant problems or screen flickering in apps. This may be because of the recent Wayland changes in GNOME 44, which have landed in Arch Linux.

Also, Firefox runs natively on Wayland in GNOME, without using the XWayland wrapper.

![Firefox with Wayland in GNOME][10]

### Troubleshooting common Wayland issues

While Wayland provides numerous benefits, you may encounter some challenges. Here are a few common issues and potential solutions:

- **Dealing with incompatible applications**: Some older or less commonly used applications may not yet have full Wayland support. Consider looking for alternative applications explicitly designed for Wayland or using XWayland as a compatibility layer.
- **Addressing performance-related concerns**: If you experience performance issues with specific applications, ensure you have installed the latest graphics drivers. Additionally, check if any specific compositor settings or application-specific tweaks can optimize performance.
- You can find **more tips** for troubleshooting [on this page][11].

### Conclusion

Setting up Wayland as your default display server in Arch Linux can significantly improve security, stability, and graphical performance. Following this guide's installation and configuration steps, you can seamlessly transition from Xorg to Wayland and enjoy a more modern and efficient display experience.

However, you may find Wayland on Arch Linux a little complex, since many components need special attention when things break.

I have not tested gaming on Arch with Wayland as part of this guide, so you may want to try that out yourself after setting up. I hope this tutorial helps you to set up Wayland in Arch Linux.

Let me know how it goes for you in the comment box below.
+ +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/wayland-arch-linux/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed/ +[1]: https://archlinux.org/packages/extra/x86_64/wayland/ +[2]: https://www.debugpoint.com/archinstall-guide/ +[3]: https://www.debugpoint.com/install-yay-arch/ +[4]: https://wiki.archlinux.org/title/wayland#Display_managers +[5]: https://www.debugpoint.com/wp-content/uploads/2023/05/Plasma-Wayland-session-during-login.jpg +[6]: https://www.debugpoint.com/check-wayland-or-xorg/ +[7]: https://www.debugpoint.com/wp-content/uploads/2023/05/KDE-Plasma-with-Wayland-in-Arch-Linux.jpg +[8]: https://www.debugpoint.com/wp-content/uploads/2023/05/Firefox-is-using-xwayland-in-KDE-Plasma-with-Arch.jpg +[9]: https://www.debugpoint.com/wp-content/uploads/2023/05/GNOME-with-Wayland-running-in-Arch-Linux.jpg +[10]: https://www.debugpoint.com/wp-content/uploads/2023/05/Firefox-with-Wayland-in-GNOME.jpg +[11]: https://wiki.archlinux.org/title/wayland#Troubleshooting \ No newline at end of file diff --git a/sources/tech/20230516.3 ⭐️⭐️ Beginner's Guide to System Updates in Linux Mint.md b/sources/tech/20230516.3 ⭐️⭐️ Beginner's Guide to System Updates in Linux Mint.md new file mode 100644 index 0000000000..32f1d63572 --- /dev/null +++ b/sources/tech/20230516.3 ⭐️⭐️ Beginner's Guide to System Updates in Linux Mint.md @@ -0,0 +1,216 @@ +[#]: subject: "Beginner's Guide to System Updates in Linux Mint" +[#]: via: "https://itsfoss.com/linux-mint-update/" +[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Beginner's Guide to System Updates in Linux Mint +====== + +Keeping your system updated is essential for any operating system. Linux Mint is no different. + +Linux Mint has a robust update system. It provides timely security patches for the kernel and other software packages. That's not it. You also get updates on the applications you installed using the Software Manager tool. + +Basically, apart from security patches, your system receives new features, bug fixes, improved hardware support, performance enhancement, and a lot more. + +While the Updater tool is straightforward, it may still seem overwhelming if you are new to Linux Mint. + +This is why we at It's FOSS came up with this beginner's guide idea. It will give you some ideas about using this tool and the best practices you should follow. + +So in this guide, I will explain how you can perform the system updates in Linux Mint and will walk you through the following: + +- Create backups using Timeshift **(optional yet recommended)** +- Prioritizing and installing updates (know the different types of updates) +- Restore from the Timeshift backup (if the update messed up the system) +- Adding the fastest mirrors **(**optional but good to know**)** + +> 📋 While you can use the apt command, the focus of this tutorial is on the GUI tool. + +### Linux Mint Update Manager + +When there are updates available for your system, you'll notice a 'secure' symbol with red dot on it in the bottom right corner of the screen (notification area). 
+ +![Linux Mint update notification][1] + +If you click on it, you'll see the available system updates. By default, all updates are selected to be installed. You can deselect some (if you know what you are doing). + +![Linux Mint Update Manager interface][2] + +Before you learn more about the types of updates and their installation, I would like to talk about backups. + +> 📋 This article is about updating the Linux Mint system. It is not about [upgrading Mint to a newer version][3]. That is a different topic. + +### Create Timeshift backup (optional yet recommended) + +Linux Mint is a stable distro as it is based on the long-term support version of Ubuntu. Updates you install will rarely create problems. + +Rarely but possible. Say you forced power off the system while it was installing package updates. It is possible that it may mess up the perfectly working system. + +Precaution is better than cure. So I recommend making regular backups. If nothing else, make a backup before applying updates. + +Linux Mint comes preinstalled with [Timeshift backup application][4]. It is a third-party tool but is highly recommended by Mint developers. + +To create a backup, start the Timeshift from the system menu: + +![Start timeshift in Linux mint][5] + +If you haven't used it before, it will ask you several questions before allowing you to create a backup. + +First, it will ask you which type of backup you want to create. There are two options: RSYNC and BTRFS. + +RSYNC is based upon hard links and can work on any filesystem, whereas BTRFS is only used for the [BTRFS filesystem][6]. + +If you don't know what to choose, **select RSYNC** as it would work just fine: + +![select snapshot type in timeshift][7] + +Next, it will ask you where you want to store the snapshots. + +If you have multiple drives, it would show multiple options but for most users, there will be a single option. In my case, it was vda3: + +![select location to store snapshots][8] + +Now, it will ask you to choose the directories that need backing up. + +By default, it will exclude all the files inside the home directory and I recommend you do the same. + +> 🚧 Timeshift is primarily used for backing up system settings. Using it to backup personal files in home directory will take up a huge amount of disk space and is impractical. Use DejaDup for personal file backups on an external disk. + +![select directories that needs to be included in the backup][9] + +Once done, it will show you a page informing the setup is complete. + +Now, you can create a backup by clicking on the `Create` button: + +![click on create snapshot][10] + +It may take a while, based on your choices during the setup. + +Once done, the snapshot will reflect in the Timeshift: + +![Listing created backup in Timeshift][11] + +Great! So now you have created the backup. Let's go back to the system updater. + +### Installing updates + +First, open the update manager from the system menu: + +![open update manager in Linux Mint][12] + +Here, you will find a list of packages that need to be updated and all of them will be selected by default (I would recommend you go with the same). + +But if you want, you can uncheck software updates or kernel updates if you want to stick to that specific version only. 
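If you prefer doing the same kind of pinning from the terminal, the usual way to freeze a package at its current version is `apt-mark hold`. Here is a quick sketch; the package name is only an example, substitute whatever you want to keep back:

```
# Keep a package (for example, the kernel meta-package) from being upgraded
sudo apt-mark hold linux-image-generic

# Release the hold later when you want updates for it again
sudo apt-mark unhold linux-image-generic
```
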
+ +![List outdated packages in Linux Mint][13] + +To make things simple, in Linux Mint, the updates are divided into three categories: + +- Security patches **(Highest priority and indicated by**`🛡` **):** You are supposed to install the security patches immediately as it is supposed to save you from your system's current vulnerability. +- Kernel updates **(Medium priority and indicated by** `🗲`**):** New kernel brings hardware support for new hardware, bug fixes for your current kernel, and may also have performance improvement. +- Software updates **(Lowest priority and indicated by** `⬆`**):** These updates are meant to roll out new features and bug fixes in your software. + +**Again, I will advise you to go with the defaults!** + +Once you are done selecting, click on the `Install Updates` button, enter the password and it will start the installation of new packages: + +![Update Linux Mint using Update Manager][14] + +That's it! The system is updated! + +### Rollback if the system crashed after the update (Backup required) + +If you can access the GUI, you can easily roll back to the previous state using the Timeshift backup you had created earlier. + +First, open the Timeshift from the system menu and it will show the created snapshots backup you took in the past: + +![List backups in timehift][15] + +To restore to the previous state, select the snapshot and click on the `Restore` button: + +![select and restore the snapshot][16] + +Next, it will ask you to select the targeted devices. I would recommend going with the selected options: + +![select and restore the snapshot][17] + +Click on the next button and it will start the restoration process! + +> 💡 If your system does not boot, you can use a live Linux Mint USB, boot from it and install Timeshift in the live environment. Run Timeshift and it should detect the Timeshift backups present on the hard disk. You can restore it from here. + +### Add the fastest mirrors to speed up the download (optional) + +Selecting the fastest mirror is nothing but choosing the closest server to you, which will eventually reduce the latency and get you a faster experience. + +> 📋 That's how it should work in theory. But sometimes, sticking to the main server is more reliable because the closest server may not always perform the best continuously. This is why this is an optional step. + +To add the fastest mirror, first, open the software sources from the system menu and enter the password when asked: + +![Open software sources in Linux Mint][18] + +Once you do that, you'd have to do the following: + +- Select the first mirror (labeled as Main) +- Wait for some seconds and choose the fastest mirror +- Click on apply +- Now, choose the second mirror (labeled as Base) +- Choose the fastest mirror and click on the apply button + +![Select fastest mirrors to download packages faster in Linux Mint][19] + +Once done, it will show the message saying, "Your configuration changed, click OK to update your APT cache." + +Click on the OK button and it will start updating the cache and will activate the fastest mirrors you chose recently: + +![Enable the fastest mirrors for Linux Mint][20] + +That's it! + +### Update everything at once (for intermediate to advanced users) + +The Update Manager works on the deb packages through the apt command line utility. + +But Linux packages are also fragmented. There are Snap, Flatpaks and AppImages. Using multiple package managers means updating each type of package manually. 
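For example, a full manual refresh on a system that uses deb packages plus Flatpak and Snap could look like the sketch below; skip the lines for formats you don't have installed:

```
# deb packages (what the Update Manager drives under the hood)
sudo apt update && sudo apt upgrade

# Flatpak applications and runtimes
flatpak update

# Snap packages
sudo snap refresh
```
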
+ +This is where you can use a terminal utility called Topgrade which will update everything at once. Sounds interesting? Here's the detailed guide: + +Now, you should have a good idea about the system update process in Linux Mint. + +_🗨 Please let me know if you learned something new in this tutorial. Also, if I missed something you think I should have mentioned, please mention it in the comments._ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linux-mint-update/ + +作者:[Sagar Sharma][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sagar/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/05/linux-mint-update-notification.png +[2]: https://itsfoss.com/content/images/2023/05/linux-mint-update-manager.png +[3]: https://itsfoss.com/upgrade-linux-mint-version/ +[4]: https://itsfoss.com/backup-restore-linux-timeshift/ +[5]: https://itsfoss.com/content/images/2023/05/open-timeshift-in-Linux-Mint.png +[6]: https://itsfoss.com/btrfs/ +[7]: https://itsfoss.com/content/images/2023/05/Choose-type-of-snapshot-in-timeshift.png +[8]: https://itsfoss.com/content/images/2023/05/select-location-to-store-snapshots.png +[9]: https://itsfoss.com/content/images/2023/05/select-directories-that-needs-to-be-included-in-the-backup-1.png +[10]: https://itsfoss.com/content/images/2023/05/click-on-create-snapshot.png +[11]: https://itsfoss.com/content/images/2023/05/Listing-created-backup-in-Timeshift.png +[12]: https://itsfoss.com/content/images/2023/05/open-update-manager-in-Linux-Mint.png +[13]: https://itsfoss.com/content/images/2023/05/List-outdated-packages-in-Linux-Mint.png +[14]: https://itsfoss.com/content/images/2023/05/Update-Linux-Mint-using-Update-Manager.png +[15]: https://itsfoss.com/content/images/2023/05/List-available-snapshots-in-timeshift.png +[16]: https://itsfoss.com/content/images/2023/05/select-and-restore-the-snapshot.png +[17]: https://itsfoss.com/content/images/2023/05/select-target-device-to-restore-to.png +[18]: https://itsfoss.com/content/images/2023/05/Open-software-sources-in-Linux-Mint-1.png +[19]: https://itsfoss.com/content/images/2023/05/Select-fastest-mirrors-to-download-packages-faster-in-Linux-Mint.png +[20]: https://itsfoss.com/content/images/2023/05/Enable-the-fastest-mirrors-for-Linux-Mint.png diff --git a/sources/tech/20230602.1 ⭐️⭐️ Share Folder Between Windows Guest and Linux Host in KVM using virtiofs.md b/sources/tech/20230602.1 ⭐️⭐️ Share Folder Between Windows Guest and Linux Host in KVM using virtiofs.md new file mode 100644 index 0000000000..c7886dd222 --- /dev/null +++ b/sources/tech/20230602.1 ⭐️⭐️ Share Folder Between Windows Guest and Linux Host in KVM using virtiofs.md @@ -0,0 +1,174 @@ +[#]: subject: "Share Folder Between Windows Guest and Linux Host in KVM using virtiofs" +[#]: via: "https://www.debugpoint.com/kvm-share-folder-windows-guest/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: "geekpi" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Share Folder Between Windows Guest and Linux Host in KVM using virtiofs +====== + +**In this guide, you will learn how to share a folder between Windows guest, running under a Linux host – such as Fedora, Ubuntu or Linux Mint using KVM.** + +The [virt-manager][1] application (with 
[libvirt][2]) and packages provide a flexible set of tools to manage virtual machines in Linux. It is free and open-source and used for KVM virtual machines and other hypervisors. + +In the prior article, I explained [how to share folders between a Linux guest and a Linux host][3]. However, when you are trying to create a shared folder using Windows Guest and Linux host, it’s a little difficult and complex process. Because both the operating system works differently and a lot of configuration is needed. + +Follow the below instructions as mentioned to share the folder between Windows guest and Linux host. + +### A note about virtiofs + +The sharing files and folders are powered by the libvirt shared file system called virtiofs. It provides all the features and parameters to access the directory tree on the host machine. Since most of the virt-manager virtual machine configurations are translated to XML, the share files/folders can also be specified by the XML file. + +Note: If you are looking for file sharing using KVM between **two Linux machines** (guest and host), [read this article][3]. + +### Share folder between Windows guest and Linux host using KVM + +The following instructions assume that you have installed Windows in virt-manager in any Linux host. If not, you can read this complete guide on how to install Windows in Linux. + +#### Set up a mount tag in virt-manager + +- First, make sure your guest virtual machine is powered off. From the virt-manager GUI, select the virtual machine and click on Open to pull up the console settings. + +![Open the console settings][4] + +- Click on the icon which says show virtual hardware details in the toolbar. And then click on **Memory** on the left panel. +- Select the option “**Enable shared memory**“. Click Apply. +- Make sure the XML shows “access mode=shared” as below in the XML tab. + +``` + + + + +``` + +![Enable shared memory][5] + +- Click “Add hardware” at the bottom. + +- Select **File system** from the left panel in the add new hardware window. +- Then select **Driver=virtiofs** in the details tab. Click on `browse > browse local` and **select the host path** from your Linux system. +- In the target path, mention any name you want. It’s just a file tag which will be used during mount. This name in the target path will be mounted as Drive in Windows – My PC in Explorer. +- I have added “linux_pictures” as the target mount tag. +- So, if I want to access the Pictures folder (`/home/debugpoint/Pictures`), sample settings could be the following: +- Click Finish. + +![Add a file system mount for windows][6] + +The XML settings are below for the above configuration. You can find it in the XML tab. + +``` + + + + +
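<!-- Reconstructed example (not copied from the original post): a typical virtiofs
     <filesystem> entry that virt-manager generates for the settings above -->
<filesystem type="mount" accessmode="passthrough">
  <driver type="virtiofs"/>
  <source dir="/home/debugpoint/Pictures"/>
  <target dir="linux_pictures"/>
</filesystem>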
+ +``` + +In the main virt-manager window, right-click on the Windows virtual machine and click Run to start the virtual machine. Make sure to click on the “show the graphical console” (monitor icon in the toolbar) – if the VM is not showing. + +#### Set up WinFSP – FUSE for Windows + +Make sure Windows virtual machine (guest) is running. + +- First, we need to set up the WinFSP or Windows File System Proxy – FUSE for Windows. This enables you to mount any UNIX-like filesystem without any difficulties. +- Open the below page in the WinFSP GitHub **from the guest** Windows machine. +- Download the WinFSP .msi installer. + +[Download WinFSP installer][7] + +- Install the package on Windows virtual machine. Make sure to select “Core” while installing the package. Finish the installation. + +![WinFSP set up][8] + +#### Create VirtIO-FS as a service + +- Download the **virtio-win-guest-tools.exe** from the below path by going inside **stable-virtio** folder. + +[Download virtio-win-guest-tools][9] + +![Download guest tools][10] + +- Install the package on Windows virtual machine. + +![Virtio-Win-driver installation][11] + +- After installation is complete, **reboot** Windows virtual machine. +- After reboot, open the “Device Manager” by searching in the start menu. +- Navigate to System devices and look for “VirtIO FS Device”. It should be recognized and driver should be signed by Red Hat. +- **Note**: (optional) If you see an exclamation mark i.e. driver is not detected, then follow the instructions [here][12] on how to download ISO file, mount it and manually detect the driver. + +![Make sure the Virt IO driver is signed and installed][13] + +- Open the start menu and search for “Services”. +- Scroll down to find out the “VirtIO-FS Service”. Right-click and hit Start to start the service. +- Alternatively, you can run the below command from PowerShell/command prompt as admin to start the service. + +``` +sc create VirtioFsSvc binpath="C:\Program Files\Virtio-Win\VioFS\virtiofs.exe" start=auto depend="WinFsp.Launcher/VirtioFsDrv" DisplayName="Virtio FS Service" +``` + +``` +sc start VirtioFsSvc +``` + +![Start the Virt IO Service][14] + +- After the service start, open Explorer, and you should see the mount tag which you have created in the first step above, which should be mapped as Z drive. See below. +- You can now access the entire Linux folder with modified permission as per your need. + +![The mount tag is mapped as Z drive in windows][15] + +Here is a side-by-side comparison of the same folder accessed in Linux Mint and Windows guest. + +![Access and share folder in Windows guest and Linux host][16] + +### Conclusion + +I hope you can now able to share a folder between Windows guest and Linux host system. The above method is tested in Linux Mint for this article. It should work for Ubuntu, Fedora as well. + +If the above method works, drop a comment below for the benefit of others. 
+ +**_References_** + +- [https://virtio-fs.gitlab.io/howto-windows.html][12] +- [https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/][17] +- [https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md][18] +- [https://github.com/virtio-win/kvm-guest-drivers-windows/issues/473][19] + +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/kvm-share-folder-windows-guest/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed/ +[1]: https://virt-manager.org/ +[2]: https://libvirt.org/manpages/libvirtd.html +[3]: https://www.debugpoint.com/share-folder-virt-manager/ +[4]: https://www.debugpoint.com/wp-content/uploads/2023/06/Open-the-console-settings.jpg +[5]: https://www.debugpoint.com/wp-content/uploads/2023/06/Enable-shared-memory.jpg +[6]: https://www.debugpoint.com/wp-content/uploads/2023/06/Add-a-file-system-mount-for-windows.jpg +[7]: https://github.com/winfsp/winfsp/releases/ +[8]: https://www.debugpoint.com/wp-content/uploads/2023/06/WinFSP-set-up.jpg +[9]: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/ +[10]: https://www.debugpoint.com/wp-content/uploads/2023/06/Download-guest-tools.jpg +[11]: https://www.debugpoint.com/wp-content/uploads/2023/06/Virtio-Win-driver-installation.jpg +[12]: https://virtio-fs.gitlab.io/howto-windows.html +[13]: https://www.debugpoint.com/wp-content/uploads/2023/06/Make-sure-the-Virt-IO-driver-is-signed-and-installed.jpg +[14]: https://www.debugpoint.com/wp-content/uploads/2023/06/Start-the-Virt-IO-Service.jpg +[15]: https://www.debugpoint.com/wp-content/uploads/2023/06/The-mount-tag-is-mapped-as-Z-drive-in-windows.jpg +[16]: https://www.debugpoint.com/wp-content/uploads/2023/06/Access-and-share-folder-in-Windows-guest-and-Linux-host-2048x1280.jpg +[17]: https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/ +[18]: https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md +[19]: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/473 \ No newline at end of file diff --git a/sources/tech/20230602.2 ⭐️⭐️ Install Windows 11 as Guest in Ubuntu using virt-manager.md b/sources/tech/20230602.2 ⭐️⭐️ Install Windows 11 as Guest in Ubuntu using virt-manager.md new file mode 100644 index 0000000000..e6818faca5 --- /dev/null +++ b/sources/tech/20230602.2 ⭐️⭐️ Install Windows 11 as Guest in Ubuntu using virt-manager.md @@ -0,0 +1,237 @@ +[#]: subject: "Install Windows 11 as Guest in Ubuntu using virt-manager" +[#]: via: "https://www.debugpoint.com/install-windows-ubuntu-virt-manager/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Install Windows 11 as Guest in Ubuntu using virt-manager +====== + +**A complete guide to install Windows as a guest operating system in Ubuntu, Linux Mint host using the open-source virt-manager (KVM/Qemu/libvirt).** + +If you are planning to get rid of Windows completely but want to access Windows-specific applications while being in Ubuntu, then it’s easier to try it out in a virtual machine. 
Although there are many virtualization applications, we will use the powerful virt-manager application for this guide. + +Virtualization is the process of creating and managing virtual machines, which are isolated environments that mimic the behaviour of physical computers. Virt-manager leverages KVM, a virtualization technology built into the Linux kernel, and Qemu, a hardware emulator that enables the execution of guest operating systems. + +Additionally, virt-manager utilizes libvirt, a library for managing virtualization technologies, to provide a seamless and feature-rich virtualization experience. + +Before starting the Windows 11 installation, you need to prepare your system and get the Windows 11 ISO file from the official download page. + +### Download Windows ISO file + +Visit the official download page below. Choose Windows 11 64-bit and language. And download the ISO file. The ISO size of Windows 11 is around ~6GB. + +[Download Windows 11][1] + +![Windows 11 Download location][2] + +### Install and Set up virt-manager in Ubuntu, Linux Mint + +Open a terminal and run the following command to install virt-manager. + +``` +sudo apt install virt-manager +``` + +After installation is complete, add the current user to the libvirt group. For the below example, replace “debugpoint” with your user name of the Ubuntu/Linux Mint system. + +``` +sudo adduser debugpoint libvirt +``` + +Then, start the libvirt daemon using: + +``` +sudo systemctl start libvirtd +``` + +And finally, start the virtual network. + +``` +sudo virsh net-start default +``` + +This should complete the installation of the virt-manager. For a detailed installation guide, [visit this tutorial][3]. + +### Set up TPM 2.0 in Ubuntu for virt-manager + +One of the requirements of Windows 11 is TPM 2.0 (Trusted Platform Module 2.0). TPM is a hardware-based security mechanism enabling Windows 11 to perform secure facial-based authentication, BitLocker, etc. + +However, if you install it in a virtual machine, you have to tweak certain settings in virt-manager and need specific packages. + +Open a terminal and install the following packages. + +``` +sudo apt install ovmf swtpm swtpm-tools +``` + +### Install Windows 11 in virt-manager on Ubuntu, Linux Mint + +Before you proceed, make sure to reboot your Ubuntu or Linux Mint system after installing the above packages. + +#### Create virtual machine + +Open “Virtual machine manager” from the application menu. Click on New. + +![New Virtual machine][4] + +In the “New VM” window, choose “Local install media..”. Click Forward. + +![New VM window][5] + +Choose the downloaded Windows 11 ISO file by clicking **Browse > Browse****Local** button. Click forward. + +![Select the Windows 11 ISO file][6] + +Enter memory as 8 GB or 8192 and CPU as 4. This is the minimum value. You can enter more if your hardware is capable of this setup. Click forward. + +![Memory and CPU][7] + +Enter storage as 40 GB (minimum) on the next screen. Make sure to check the “Enable storage for this virtual machine”. And click forward. + +![Enter storage size][8] + +In the final screen, give a name to your virtual machine. For example, I gave “win11”. Make sure to check the option “Customize configuration before install”. Click finish. + +![Final screen for initial set up][9] + +#### Configure TPM and other parameters + +In the configuration window, go to the “Overview” page. Select the following values. Keep the rest unchanged. 
+ +- Chipset=Q35 +- Firmware=BIOS + +**Note**: You may choose UEFI modules, but the Windows 11 ISO will not boot and may get stuck in the TIANOCORE Plymouth screen. + +![Set Chipset and Firmware][10] + +Go to the CPU page and make sure the vCPU allocation = 4. + +Click on the “Add Hardware” button at the bottom left. + +![Add hardware][11] + +Select TPM from the left pane. Then select the following and hit Finish once done. + +- Type: Emulated +- Model: TIS +- Version: 2.0 + +![TPM settings][12] + +You should see the TPM 2.0 on the left side of the window. Now all the configuration is complete. + +![Begin installation][13] + +Hit the “Begin installation” button at the top. + +#### Installing Windows 11 + +If all goes well, you should see the Windows logo and followed by the below screen. Choose Language, Keyboard and hit Next. + +![Windows 11 installer in virt-manager - first screen][14] + +Click Install Now on the next screen. Wait for a few moments. + +On the active Windows page, click on “I don’t have a product key”. + +![Product key page][15] + +Select Windows 11 Home in the version selection screen. + +![Select Windows version][16] + +**Note**: If you received the following error after clicking Next in the above screen – “This PC can’t run Windows 11”, then do the following to bypass all the checks. + +If you don’t find this error, skip this section. + +![Compatibility error][17] + +Press `SHIFT+F10` to bring up the command prompt. + +Type `regedit` and hit enter. + +![Open registry editor][18] + +Go to `HKEY_LOCAL_MACHINE\SYSTEM\Setup`. + +Right-click and select `New > Key`. Add the key name as `LabConfig`. + +While `LabConfig` is selected, add `New > DWORD (32 bit) Value`. Add the name as BypassTPMCheck. Then right-click on the name and select **Modify**. Give the **value as 1**. + +![Adding key values][19] + +Repeat the above steps to add `BypassRAMCheck` and `BypassSecureBootCheck`. + +Set the value to 1 for both. + +Finally, the LabConfig key set up should look like the below: + +![Final key setup][20] + +Close the registry editor and command prompt. Click on the back arrow to start the installation process. + +Select “Custom: Install Windows only (advanced)” in the Windows set up screen. Select the virt-manager driver and hit next. + +The installation will start. Wait for a few minutes for it to complete. + +![Windows 11 install started in virt-manager][21] + +#### Setting up + +If all goes well, you should see the first set-up screen for Windows 11 inside virt-manager. It will be a series of screens where you need to provide various settings. + +![Windows 11 boot up - first screen][22] + +Follow the on-screen instructions in the next few screens. Remember that Windows 11 requires an online Microsoft account to log in, such as Hotmail, Office 365 or Outlook. And you need to be connected to the internet. + +If all goes well, you should see Windows 11 running inside virt-manager in Ubuntu or Linux Mint. + +![Windows 11 running as guest in virt-manager][23] + +### Conclusion + +I hope by following the step-by-step process outlined in this article, you can set up a virtual machine using virt-manager and install Windows 11 as a guest in Ubuntu or Linux Mint host. + +If you run into errors, do let me know in the comment box. 
+ +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/install-windows-ubuntu-virt-manager/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed/ +[1]: https://www.microsoft.com/software-download/windows11 +[2]: https://www.debugpoint.com/wp-content/uploads/2023/06/Windows-11-Download-location.jpg +[3]: https://www.debugpoint.com/virt-manager/ +[4]: https://www.debugpoint.com/wp-content/uploads/2023/06/New-Virtual-machine.jpg +[5]: https://www.debugpoint.com/wp-content/uploads/2023/06/New-VM-window.jpg +[6]: https://www.debugpoint.com/wp-content/uploads/2023/06/Select-the-Windows-11-ISO-file.jpg +[7]: https://www.debugpoint.com/wp-content/uploads/2023/06/Memory-and-CPU.jpg +[8]: https://www.debugpoint.com/wp-content/uploads/2023/06/Enter-storage-size.jpg +[9]: https://www.debugpoint.com/wp-content/uploads/2023/06/Final-screen-for-initial-set-up.jpg +[10]: https://www.debugpoint.com/wp-content/uploads/2023/06/Set-Chipset-and-Firmware.jpg +[11]: https://www.debugpoint.com/wp-content/uploads/2023/06/Add-hardware.jpg +[12]: https://www.debugpoint.com/wp-content/uploads/2023/06/TPM-settings.jpg +[13]: https://www.debugpoint.com/wp-content/uploads/2023/06/Begin-installation.jpg +[14]: https://www.debugpoint.com/wp-content/uploads/2023/06/Windows-11-installer-first-screen.jpg +[15]: https://www.debugpoint.com/wp-content/uploads/2023/06/Product-key-page.jpg +[16]: https://www.debugpoint.com/wp-content/uploads/2023/06/Select-Windows-version.jpg +[17]: https://www.debugpoint.com/wp-content/uploads/2023/06/Compatibility-error.jpg +[18]: https://www.debugpoint.com/wp-content/uploads/2023/06/Open-registry-editor.jpg +[19]: https://www.debugpoint.com/wp-content/uploads/2023/06/Adding-key-values.jpg +[20]: https://www.debugpoint.com/wp-content/uploads/2023/06/Final-key-setup.jpg +[21]: https://www.debugpoint.com/wp-content/uploads/2023/06/Windows-11-install-started-in-virt-manager.jpg +[22]: https://www.debugpoint.com/wp-content/uploads/2023/06/Windows-11-boot-up-first-screen.jpg +[23]: https://www.debugpoint.com/wp-content/uploads/2023/06/Windows-11-running-as-guest-in-virt-manager-2048x1280.jpg \ No newline at end of file diff --git a/sources/tech/20230605.1 ⭐️⭐️ How to Setup Dynamic NFS Provisioning in Kubernetes Cluster.md b/sources/tech/20230605.1 ⭐️⭐️ How to Setup Dynamic NFS Provisioning in Kubernetes Cluster.md new file mode 100644 index 0000000000..fd76666041 --- /dev/null +++ b/sources/tech/20230605.1 ⭐️⭐️ How to Setup Dynamic NFS Provisioning in Kubernetes Cluster.md @@ -0,0 +1,237 @@ +[#]: subject: "How to Setup Dynamic NFS Provisioning in Kubernetes Cluster" +[#]: via: "https://www.linuxtechi.com/dynamic-nfs-provisioning-kubernetes/" +[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +How to Setup Dynamic NFS Provisioning in Kubernetes Cluster +====== + +In this post, we will show you how to setup dynamic nfs provisioning in Kubernetes (k8s) cluster. + +Dynamic NFS storage provisioning in Kubernetes allows you to automatically provision and manage NFS (Network File System) volumes for your Kubernetes applications on-demand. 
It enables the creation of persistent volumes (PVs) and persistent volume claims (PVCs) without requiring manual intervention or pre-provisioned storage. + +The NFS provisioner is responsible for dynamically creating PVs and binding them to PVCs. It interacts with the NFS server to create directories or volumes for each PVC. + +##### Prerequisites + +- Pre-installed Kubernetes Cluster +- A Regular user which has admin rights on Kubernetes cluster +- Internet Connectivity + +Without any further delay, let’s deep dive into steps + +### Step 1) Prepare the NFS Server + +In my case, I am going to install NFS server on my Kubernetes master node (Ubuntu 22.04). Login to master node and run following commands, + +``` +$ sudo apt update +$ sudo apt install nfs-kernel-server -y +``` + +Create the following folder and share it using nfs, + +``` +$ sudo mkdir /opt/dynamic-storage +$ sudo chown -R nobody:nogroup /opt/dynamic-storage +$ sudo chmod 777 /opt/dynamic-storage +``` + +Add the following entries in /etc/exports file + +``` +$ sudo vi /etc/exports +/opt/dynamic-storage 192.168.1.0/24(rw,sync,no_subtree_check) +``` + +Save and close the file. + +Note: Don’t forget to change network in exports file that suits to your deployment. + +To make above changes into the effect, run + +``` +$ sudo exportfs -a +$ sudo systemctl restart nfs-kernel-server +$ sudo systemctl status nfs-kernel-server +``` + +![NFS-Service-Status-Kubernetes-Master-Ubuntu][1] + +On the worker nodes, install nfs-common package using following apt command. + +``` +$ sudo apt install nfs-common -y +``` + +### Step 2) Install and Configure NFS Client Provisioner + +NFS subdir external provisioner deploy the NFS client provisioner in your Kubernetes cluster. The provisioner is responsible for dynamically creating and managing Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) backed by NFS storage. + +So, to install NFS subdir external provisioner, first install helm using following set of commands, + +``` +$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 +$ chmod 700 get_helm.sh +$ ./get_helm.sh +``` + +Enable the helm repo by running following beneath command, + +``` +$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner +``` + +Deploy provisioner using following helm command + +``` +$ helm install -n nfs-provisioning --create-namespace nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.1.139 --set nfs.path=/opt/dynamic-storage +``` + +![helm-install-nfs-provisioning-kubernetes-cluster][2] + +Above helm command will automatically create nfs-provisioning namespace and will install nfs provisioner pod/deployment, storage class with name (nfs-client) and will created the required rbac. + +``` +$ kubectl get all -n nfs-provisioning +$ kubectl get sc -n nfs-provisioning +``` + +![kubectl-get-all-nfs-provisioning-kubernetes-cluster][3] + +Perfect, output above confirms that provisioner pod and storage class is created successfully. + +### Step 3) Create Persistent Volume Claims (PVCs) + +Let’s a create PVC to request storage for your pod or deployment. PVC will request for a specific amount of storage from a StorageClass (nfs-client). 
+ +``` +$ vi demo-pvc.yml +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: +  name: demo-claim +  namespace: nfs-provisioning +spec: +  storageClassName: nfs-client +  accessModes: +    - ReadWriteMany +  resources: +    requests: +      storage: 10Mi +``` + +save & close the file. + +![PVC-Yaml-Dynamic-NFS-Kubernetes][4] + +Run following kubectl command to create pvc using above created yml file, + +``` +$ kubectl create -f demo-pvc.yml +``` + +Verify whether PVC and PV are created or not, + +``` +$ kubectl get pv,pvc -n nfs-provisioning +``` + +![Verify-pv-pvc-dynamic-nfs-kubernetes-cluster][5] + +Great, above output shows that pv and pvc are created successfully. + +### Step 4) Test and Verify Dynamic NFS Provisioning + +In order to test and verify dynamic nfs provisioning, spin up a test pod using following yml file, + +``` +$ vi test-pod.yml +kind: Pod +apiVersion: v1 +metadata: +  name: test-pod +  namespace: nfs-provisioning +spec: +  containers: +  - name: test-pod +    image: busybox:latest +    command: +      - "/bin/sh" +    args: +      - "-c" +      - "touch /mnt/SUCCESS && sleep 600" +    volumeMounts: +      - name: nfs-pvc +        mountPath: "/mnt" +  restartPolicy: "Never" +  volumes: +    - name: nfs-pvc +      persistentVolumeClaim: +        claimName: demo-claim +``` + +![Pod-Yml-Dynamic-NFS-kubernetes][6] + +Deploy the pod using following kubectl command, + +``` +$ kubectl create -f test-pod.yml +``` + +Verify the status of test-pod, + +``` +$ kubectl get pods -n nfs-provisioning +``` + +![Verify-Test-Pod-Using-NFS-Volume-Kubernetes][7] + +Login to the pod and verify that nfs volume is mounted or not. + +``` +$ kubectl exec -it test-pod -n nfs-provisioning /bin/sh +``` + +![Access-Dynamic-NFS-Inside-Pod-Kubernetes][8] + +Great, above output from the pod confirms that dynamic NFS volume is mounted and accessible. + +In the last, delete the pod and PVC and check whether pv is deleted automatically or not. + +``` +$ kubectl delete -f test-pod.yml +$ kubectl delete -f demo-pvc.yml +$ kubectl get pv,pvc -n  nfs-provisioning +``` + +![Delete-Pod-PVC-Dynamic-NFS][9] + +That’s all from this post, I hope you have found it informative. Feel free to post your queries and feedback in below comments section. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/dynamic-nfs-provisioning-kubernetes/ + +作者:[Pradeep Kumar][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lkxed/ +[1]: https://www.linuxtechi.com/wp-content/uploads/2023/06/NFS-Service-Status-Kubernetes-Master-Ubuntu.png +[2]: https://www.linuxtechi.com/wp-content/uploads/2023/06/helm-install-nfs-provisioning-kubernetes-cluster.png +[3]: https://www.linuxtechi.com/wp-content/uploads/2023/06/kubectl-get-all-nfs-provisioning-kubernetes-cluster.png +[4]: https://www.linuxtechi.com/wp-content/uploads/2023/06/PVC-Yaml-Dynamic-NFS-Kubernetes.png +[5]: https://www.linuxtechi.com/wp-content/uploads/2023/06/Verify-pv-pvc-dynamic-nfs-kubernetes-cluster.png +[6]: https://www.linuxtechi.com/wp-content/uploads/2023/06/Pod-Yml-Dynamic-NFS-kubernetes.png +[7]: https://www.linuxtechi.com/wp-content/uploads/2023/06/Verify-Test-Pod-Using-NFS-Volume-Kubernetes.png +[8]: https://www.linuxtechi.com/wp-content/uploads/2023/06/Access-Dynamic-NFS-Inside-Pod-Kubernetes.png +[9]: https://www.linuxtechi.com/wp-content/uploads/2023/06/Delete-Pod-PVC-Dynamic-NFS.png diff --git a/sources/tech/20230609.0 ⭐️⭐️ Get Organized and Stylish 7 Best Docks for Ubuntu Linux.md b/sources/tech/20230609.0 ⭐️⭐️ Get Organized and Stylish 7 Best Docks for Ubuntu Linux.md new file mode 100644 index 0000000000..93537fed0c --- /dev/null +++ b/sources/tech/20230609.0 ⭐️⭐️ Get Organized and Stylish 7 Best Docks for Ubuntu Linux.md @@ -0,0 +1,202 @@ +[#]: subject: "Get Organized and Stylish: 7 Best Docks for Ubuntu Linux" +[#]: via: "https://www.debugpoint.com/best-docks-linux/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Get Organized and Stylish: 7 Best Docks for Ubuntu Linux +====== + +**Find out which of the following Linux docks are best for you.** + +Docks play a crucial role in enhancing the user experience and productivity on Ubuntu or any other Linux distribution. With their sleek and intuitive design, docks provide quick access to frequently used applications, system settings, and workspace management. + +The docks are complex applications, and there are very few active projects available in the Linux ecosystem. The reason might be that the desktop environment provides built-in capabilities to transform the respective default panel to a dock. + +However, here are the top 7 best docks for Ubuntu and other Linux distros which still works. + +### Best docks for Linux + +#### Plank + +The most popular and well-known dock is [Plank][1], and used by many distributions as a default dock. For example, elementary OS uses Plank dock for its Pantheon desktop environment. + +The best feature of Plank is its completeness, and it requires no customizations. It looks very good with default settings and adapts itself to any distribution. + +![Plank Dock][2] + +You can install Plank using the following command in Ubuntu, Linux Mint and related distributions: + +``` +sudo apt install plank +``` + +Once installed, you can launch it via the command prompt “plank” or from the application menu. 
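If you want Plank to start automatically on every login, one common approach is to drop a desktop entry into the autostart directory. A minimal sketch, assuming the `plank` binary is on your PATH:

```
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/plank.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Plank
Exec=plank
X-GNOME-Autostart-enabled=true
EOF
```
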
If you are using it in Ubuntu, make sure to hide the left default dash using any [GNOME Extensions][3] (such as Just Perfection). + +**Note:** Plank will not work in Wayland. You need to use the X.Org session. + +#### Dash to Dock (extension) + +If you are using the latest GNOME desktop environment with Ubuntu, then you may want to try the “Dash to Dock” extension. It’s not a standalone application. However, it does convert your Dash to a simple dock. + +![Dash to dock extension][4] + +The extension also brings several features, such as showing the dock on multiple monitors, size/icon size and position on the screen. You can also customize its opacity, use built-in themes and change the colour of the dock. + +To install this extension, get the [extension manager][5] installed. Then search for “[Dash to dock][6]” and hit install. + +#### Dock from Dash (extension) + +There is another GNOME extension which you may try, “Dock from Dash”. From the first look, it might look exactly the same as “Dash to dock”, but there are a few differences. + +This extension is lightweight and uses fewer resources compared to “Dash to Dock”. It also gives you a few options just to have a simple dock. + +In addition, it can autohide the Dock with an option to customize the delay and behaviour. + +So, if you want a lightweight GNOME extension that only has a dock, go for it. + +To install this extension, get the [extension manager][5] installed. Then search for [“Dock from dash”][7] and hit install. + +![Dock from dash][8] + +#### Latte Dock + +[Latte Dock][9] is known for its huge set of customization options. It’s part of the KDE system and comes with lots of dependencies. The primary reason I have added to this list is it’s by far the best Dock that there is. + +However, the problem is the project is currently unmaintained. The developer of the Latte dock has[left the project][10]. The KDE automation keeps the project in maintenance mode. So, it will work with a little bit of tweaking if needed. + +Many distributions, such as Garuda Linux, used to feature it as part of their default offerings. But moving away from Latte Dock. + +![Latte dock][11] + +You can install Latte Dock using the following command. If you are installing it in Ubuntu, remember that it will download a lot of KDE ecosystem packages. Hence it is recommended that you should use Latte dock on any [KDE Plasma-based Linux distribution][12] for the best experience. + +``` +sudo apt install latte-dock +``` + +#### Docky + +If you want a macOS-style dock, try Docky. [Docky][13] is a simple and easy-to-use Dock which integrates well with the GNOME desktop environment. It is a lightweight, fast, and customizable dock that can be used to launch applications, access files, and manage Windows. + +Overall, Docky offers a visually appealing, customizable, and efficient solution for managing your applications and enhancing your desktop experience. + +![Docky and settings][14] + +But there is a catch. + +The development of Docky is stalled. The last release was in 2015. However, it is currently in minimal maintenance mode. However, you can still install it in Ubuntu using a few additional steps because you need to manually download the dependencies and install them. + +Open a terminal in Ubuntu and run the following commands in sequence to install Docky. 
+ +``` +wget -c http://archive.ubuntu.com/ubuntu/pool/universe/g/gnome-sharp2/libgconf2.0-cil_2.24.2-4_all.deb +wget -c http://archive.ubuntu.com/ubuntu/pool/main/g/glibc/multiarch-support_2.27-3ubuntu1_amd64.deb +wget -c http://archive.ubuntu.com/ubuntu/pool/universe/libg/libgnome-keyring/libgnome-keyring-common_3.12.0-1build1_all.deb +wget -c http://archive.ubuntu.com/ubuntu/pool/universe/libg/libgnome-keyring/libgnome-keyring0_3.12.0-1build1_amd64.deb +wget -c http://archive.ubuntu.com/ubuntu/pool/universe/g/gnome-keyring-sharp/libgnome-keyring1.0-cil_1.0.0-5_amd64.deb + +sudo apt install *.deb + +wget -c http://archive.ubuntu.com/ubuntu/pool/universe/d/docky/docky_2.2.1.1-1_all.deb +sudo apt install ./docky_2.2.1.1-1_all.deb +``` + +After installation, you can find it in the application menu. + +#### DockbarX + +If you are an avid Xfce desktop user, you may have heard about the [DockbarX][15]. Although it works wonderfully with Xfce, you can install it in Ubuntu, Linux Mint or Arch Linux. + +DockbarX comes with a massive set of customizations and tweaks to make your desktop looks stunning. Furthermore, it supports built-in themes as well, which takes away your efforts in tweaking the dock. + +One of the unique features of DockbarX is the window preview of the running applications directly from the dock. + +![DockBarX][16] + +Here’s how you can install it in Ubuntu. + +``` +sudo add-apt-repository ppa:xuzhen666/dockbarx +sudo apt update +sudo apt install dockbarx +sudo apt install dockbarx-themes-extra +``` + +If you are using Arch Linux, you can install it by setting up any [AUR helper such as Yay][17] and install it using the following command. + +``` +yay -S dockbarx +``` + +#### KSmoothDock + +If you fancy more UI animation in your Dock, then you may consider [KSmoothDock][18]. It has all the usual Dock features with some additional features. + +The main attraction of KSmoothDock is the “parabolic zooming effect” which is really nice if you are on to some animation. + +![KSmoothDock][19] + +![KSmoothDock - Animation][20] + +In addition, it comes with customization options for icon & panel size, transparency and so on. It is well-built and should be perfect for KDE Plasma-based distributions. + +It comes with a pre-compiled deb file for installation which you can download from the KDE store: + +[Download KSmoothDock][21] + +### Some inactive Docks for Linux + +Apart from the above items, there are a few popular docks which have stopped development. And these are currently broken and installation requires a lot of effort. You may want to check out their source code for experiments. + +[Cairo dock][22]: Currently broken for the latest Ubuntu releases. The [last stable release][23] was in 2015. + +[Avant Window Navigator][24]: Currently broken. [Last release][25] was in 2013. + +### Conclusion + +Whether you prefer a minimal Dock or a heavy one with animations, I hope the above list can give you a starting point to pick the best one for your need. Unfortunately, most of the Docks are almost inactive in terms of active development and there are no new ones on the horizon. + +Explore these options, experiment with different docks, and find the one that enhances your Ubuntu or other Linux desktop experience. Do let me know which Linux dock you use and like in the comments! 
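Once the .deb file is downloaded, installing it through apt pulls in the required dependencies automatically. A sketch, where the file name is a placeholder for whichever release you grabbed:

```
sudo apt install ./ksmoothdock_<version>_amd64.deb
```
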
+ +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/best-docks-linux/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed/ +[1]: https://launchpad.net/plank +[2]: https://www.debugpoint.com/wp-content/uploads/2023/06/Plank-Dock.jpg +[3]: https://www.debugpoint.com/gnome-extensions-2022/ +[4]: https://www.debugpoint.com/wp-content/uploads/2023/06/Dash-to-dock-extension.jpg +[5]: https://www.debugpoint.com/how-to-install-and-use-gnome-shell-extensions-in-ubuntu/ +[6]: https://extensions.gnome.org/extension/307/dash-to-dock/ +[7]: https://extensions.gnome.org/extension/4703/dock-from-dash/ +[8]: https://www.debugpoint.com/wp-content/uploads/2023/06/Dock-from-dash.jpg +[9]: https://invent.kde.org/plasma/latte-dock +[10]: https://psifidotos.blogspot.com/2022/07/latte-dock-farewell.html +[11]: https://www.debugpoint.com/wp-content/uploads/2023/06/Latte-dock.jpg +[12]: https://www.debugpoint.com/top-linux-distributions-kde-plasma/ +[13]: https://launchpad.net/~docky-core/+archive/ubuntu/stable +[14]: https://www.debugpoint.com/wp-content/uploads/2023/06/Docky-and-settings.jpg +[15]: https://github.com/M7S/dockbarx +[16]: https://www.debugpoint.com/wp-content/uploads/2023/06/DockBarX.jpg +[17]: https://www.debugpoint.com/install-yay-arch/ +[18]: https://dangvd.github.io/ksmoothdock/ +[19]: https://www.debugpoint.com/wp-content/uploads/2023/06/KSmoothDock.jpg +[20]: https://www.debugpoint.com/wp-content/uploads/2023/06/Kooha-2023-06-09-12-32-19.gif +[21]: https://store.kde.org/p/1081169 +[22]: https://glx-dock.org/ +[23]: https://launchpad.net/~cairo-dock-team/+archive/ubuntu/ppa +[24]: https://github.com/p12tic/awn +[25]: https://launchpad.net/~awn-testing/+archive/ubuntu/ppa \ No newline at end of file diff --git a/sources/tech/20230613.0 ⭐️⭐️ Using head Command in Linux.md b/sources/tech/20230613.0 ⭐️⭐️ Using head Command in Linux.md new file mode 100644 index 0000000000..9e86af6995 --- /dev/null +++ b/sources/tech/20230613.0 ⭐️⭐️ Using head Command in Linux.md @@ -0,0 +1,244 @@ +[#]: subject: "Using head Command in Linux" +[#]: via: "https://itsfoss.com/head-command/" +[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Using head Command in Linux +====== + +The head command is one of the many ways to [view the contents of a file][1] in Linux terminal. + +But that can also be achieved by the cat command too! So why use another command? + +I understand. But if there was no problem with how the cat command works, the head command won't even exist. So let's have a look at the problem with cat. + +By default, the cat command prints all the text inside the file. But what if you have a file containing 1000 or more words? Your terminal will look bloated. Isn't it? + +Whereas the head command can specify the number of lines to print. + +In this guide, I will walk you through how you can use the head command with the help of some practical examples and additional practice exercises to brush up your command-line skills. + +19 Basic But Essential Linux Terminal Tips You Must KnowLearn some small, basic but often ignored things about the terminal. 
With the small tips, you should be able to use the terminal with slightly more efficiency.

### How to use the head command in Linux

To use any command in Linux, you will have to use the correct syntax; otherwise, you will get an error.

So let's start with the syntax for the head command:

```
head [options] [file]
```

Here,

- `options` are used to tweak the default behavior of the head command
- `file` is where you give the absolute path or filename of the file

To make things easy, I will be using a simple text file named `Haruki.txt` with the following content:

```
Hear the Wind Sing (1979)
Pinball, 1973 (1980)
A Wild Sheep Chase (1982)
Hard-Boiled Wonderland and the End of the World (1985)
Norwegian Wood (1987)
Dance Dance Dance (1990)
South of the Border, West of the Sun (1992)
The Wind-Up Bird Chronicle (1994)
Sputnik Sweetheart (1999)
Kafka on the Shore (2002)
After Dark (2004)
1Q84 (2009-2010)
Colorless Tsukuru Tazaki and His Years of Pilgrimage (2013)
Men Without Women (2014)
Killing Commendatore (2017)
```

And when you use the head command without any options, it will print the first ten lines of the file:

![use head command in linux][4]

As you can see, it skipped the last five lines!

> 🚧 You’ll see some command examples with text inside <>. This indicates that you need to replace the content within < and > with a suitable value.

### Examples of the head command in Linux

In this section, I will walk you through some practical examples of the head command. So let's start with the most useful one.

#### 1. Print only the first N lines

If you want to print the first N lines, all you have to do is use the `-n` flag followed by the number of lines you want:

```
head -n <number-of-lines> <file>
```

So let's say I want to print the first five lines of the `Haruki.txt` file, then I type the following:

```
head -n 5 Haruki.txt
```

![Print only the first N lines using the head command][5]

#### 2. Print everything except the last N lines

If you want to restrict the output by not including the last N lines, all you have to do is use the same `-n` flag but with a negative number:

```
head -n -<number-of-lines> <file>
```

So let's say I want to exclude the last three lines and print everything else from `Haruki.txt`, then the command looks like this:

```
head -n -3 Haruki.txt
```

![exlcude last N lines and print everything else using the head command in linux][6]

#### 3. Display the name of the file being used

As you can see, the head command won't print the filename by default, so if you want to enable this behavior, all you have to do is use the `-v` flag for verbose output:

```
head -v <file>
```

Yes, you can use more than one option at once!

So here, I printed the first five lines of the `Haruki.txt` file and enabled the verbose output to display the name of the file:

![Display the name of the file being used][7]

#### 4. Use multiple files at once

If you want to use multiple files, all you have to do is append them one after another, separated by spaces:

```
head <file1> <file2> <file3>
```

For example, here, I used two files and printed the first five lines of each:

```
head -n 5 Haruki.txt Premchand.txt
```

![use multiple files in head command][8]

If you notice, it automatically prints the filename when dealing with multiple files.

But in cases like [redirecting the essential output][9], you may want to remove the filename.
This can easily be done using the `-q` flag:

```
head -q <file1> <file2>
```

![remove filenames while using multiple files with the head command][10]

#### 5. Print characters as per the given number of bytes

If you want to print output based on byte size, you can do that using the `-c` flag followed by the number of bytes.

**Remember, for almost every character, one character = 1 byte.**

To do so, you can use the following syntax:

```
head -c <number-of-bytes> <file>
```

For example, here, I print characters worth 100 bytes:

```
head -c 100 Haruki.txt
```

![Print characters as per the given number of bytes][11]

Similarly, if you want to skip the characters at the end of the file by specifying the bytes, all you have to do is use a negative number:

```
head -c -<number-of-bytes> <file>
```

For example, here, I skipped the last characters of the file worth 100 bytes:

![skip last N characters on the basis of the byte size using the head command][12]

### Summarizing the head command

Here's the summary of the head command with different options:

| Option | Description |
| :- | :- |
| `-n <number>` | Specify how many lines to print from the beginning of the file. |
| `-n -<number>` | Print everything except the last N lines. |
| `-v` | Print the name of the file. |
| `-q` | Remove the filename when working with multiple files. |
| `-c <number-of-bytes>` | Print characters as per the given number of bytes. |

### Get better with a simple exercise

To perform the given exercises, you can use any text files, and if you don't have any, you can [use our text files from GitHub][13].

- Display the first ten lines of the file
- Display everything except the last five lines of a file
- Display the first five lines of multiple files

###### For intermediate users:

- Display the first five lines of multiple files, sorted alphabetically by file name (Hint: pipe to the [sort command][14])
- Display the lines from 11 to 16 (Hint: combine it with the [tail command][15])
- Count the occurrences of a specific word or character in the first five lines (Hint: pipe to grep with the [wc command][16])

### Just getting started with the terminal? We have a series for you!

While the terminal looks scary, you can always [make the terminal look good][17], but what about the learning curve it takes?

For new users, we came up with a dedicated series which covers the basic commands so you can [embrace the terminal][18].

Furthermore, you can discuss the practice questions mentioned above in our community.

I hope you now have a better understanding of the head command.

_🗨 We'll be sharing more Linux command examples every week. Stay tuned for more.
And if you have questions or suggestions, the comment section is all yours._ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/head-command/ + +作者:[Sagar Sharma][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sagar/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/view-file-contents/ +[2]: https://itsfoss.com/content/images/size/w256h256/2022/12/android-chrome-192x192.png +[3]: https://itsfoss.com/content/images/wordpress/2021/12/ubuntu-terminal-basic-tips.png +[4]: https://itsfoss.com/content/images/2023/05/use-head-command-in-linux.png +[5]: https://itsfoss.com/content/images/2023/04/Print-only-the-first-N-lines-using-the-head-command-.png +[6]: https://itsfoss.com/content/images/2023/04/exlcude-last-N-lines-and-print-everything-else-using-the-head-command-in-linux.png +[7]: https://itsfoss.com/content/images/2023/04/Display-the-name-of-the-file-being-used.png +[8]: https://itsfoss.com/content/images/2023/04/use-multiple-files-in-head-command.png +[9]: https://linuxhandbook.com:443/redirect-dev-null/ +[10]: https://itsfoss.com/content/images/2023/04/remove-filenames-while-using-multiple-files-with-the-head-command.png +[11]: https://itsfoss.com/content/images/2023/04/Print-characters-as-per-the-given-number-of-bytes.png +[12]: https://itsfoss.com/content/images/2023/04/skip-last-N-characters-on-the-basis-of-the-byte-size-using-the-head-command.png +[13]: https://github.com:443/itsfoss/text-files +[14]: https://linuxhandbook.com:443/sort-command/ +[15]: https://itsfoss.com/tail-command/ +[16]: https://linuxhandbook.com:443/wc-command/ +[17]: https://itsfoss.com/customize-linux-terminal/ +[18]: https://itsfoss.com/love-thy-terminal/ diff --git a/sources/tech/20230618.0 ⭐️⭐️ Best Open Source Email Servers.md b/sources/tech/20230618.0 ⭐️⭐️ Best Open Source Email Servers.md new file mode 100644 index 0000000000..8972c71071 --- /dev/null +++ b/sources/tech/20230618.0 ⭐️⭐️ Best Open Source Email Servers.md @@ -0,0 +1,267 @@ +[#]: subject: "Best Open Source Email Servers" +[#]: via: "https://itsfoss.com/open-source-email-servers/" +[#]: author: "Ankush Das https://itsfoss.com/author/ankush/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Best Open Source Email Servers +====== + +It is convenient to use email services like Gmail, Proton Mail, and Outlook to send and receive emails, no matter what [email client][1] you use. + +And, for all of that, you utilize their mail servers for email transactions. So, your emails' security, reliability, and privacy depend on someone else. + +But what if you want to own your email infrastructure and have the data in your control? You need an open-source email server, which should solve your problem. + +If you are still curious, an email server lets you: + +- **Build your mail backend to store email accounts** +- **Take control of the security and reliability by self-hosting** +- **Host on your preferred server architecture** +- **It gives you the ability to make unlimited accounts** + +Of course, this is not for the end users. Sysadmins in small to midscale businesses, self hosters will find these software interesting. 
+ +Considering now you have an idea of the benefits of an open-source email server, here are some of the best options that you can find: + +> 📋 The list includes mail servers and some solutions that make it possible to build/create an email server. Some might offer managed services and others can be self-hosted. + +### 1. Postal + +![YouTube Video][1] + +[Postal][3] is a feature-rich mail server that can be utilized by websites and servers. It is **tailored for outgoing emails** with no mailbox management features. + +The [documentation][4] is helpful to get started with all the essentials. You can utilize a docker and configure Postal on your server. + +With Postal, you can create mail servers/users for multiple organizations, access outgoing/incoming message queue, real-time delivery information, and built-in features to ensure emails get delivered. + +**Key Highlights:** + +- Real-time delivery information +- Click and open the tracking +- Tailored for outgoing emails + +> 🚧 Maintaining and configuring your email server is not an easy task. You should only proceed with setting up a mail server if you know all it takes to send/receive emails reliably. + +### 2. mailcow + +![mailcow ui][5] + +[mailcow][6] is a mail server suite with tools that help you build a web server, manage your mailbox, and more. + +If you are not looking to send transactional emails, mailcow has your back. You can consider it as a groupware. + +Like other mail servers, it works with Docker, where each container represents an application, all connected. + +mailcow's web interface lets you do everything from a single place. You can explore more about the project on its [GitHub page][7] or [documentation][8]. + +**Key Highlights:** + +- Easy to manage and update +- Affordable paid support +- Can be coupled with other mail servers if needed + +### 3. Cuttlefish + +![cuttlefish][9] + +Want a simple transactional email server? [Cuttlefish][10] is a no-nonsense open-source mail server that is incredibly easy to use. + +You get a simple web UI to check the stats and keep an eye on your outgoing emails. + +Compared to some full-fledged email services like Sendgird or Mailgun, Cuttlefish does not offer all kinds of features, **considering it is in beta** at the time. You can only opt for it if you need something super simple and you want to work reliably. + +Explore more about it on its [GitHub page][11]. + +**Key Highlights** + +- Simple transactional email server +- Easy to use + +### 4. Apache James + +![apache james][12] + +[James][13] is short for **Java Apache Mail Enterprise Server**. + +As the name suggests, it is an enterprise-focused open source mail server built with Java. You can use the email server as SMTP relay or an IMAP server, as per requirements. + +Compared to others, James may not be the easiest to configure or install. However, you can look at its [documentation][14] or [GitHub page][15] to judge for yourself. + +**Key Highlights:** + +- Easy administration after setup +- Reliable and used by open-source enterprises +- Distributed server + +### 5. Haraka + +[Haraka][16] is a modern open source SMTP server built with Node.js. If you can build it for your business/website, you do not need to look for other [SMTP services][17]. + +The mail server is tailored to provide the best performance. One of the highlights of Haraka is that it features a modular plugin system that will allow programmers to change the server's behavior to their heart's extent. 
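+
+If you want a feel for how a Haraka instance is set up, the quick start is roughly the following shell session. This is only a sketch: it assumes Node.js and npm are already installed, the `~/haraka` path is just an example, and the exact flags should be checked against the Haraka documentation.
+
+```
+# Install Haraka globally through npm (requires Node.js)
+sudo npm install -g Haraka
+
+# Generate a fresh config directory for a new instance
+haraka -i ~/haraka
+
+# Enable plugins by listing them in config/plugins, then
+# start the server against that config directory
+haraka -c ~/haraka
+```
+
+Plugins hook into the individual stages of the SMTP conversation, which is where the customization mentioned above takes place.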
+ +You can consider it an excellent scalable outbound mail delivery server. Some popular names like Craigslist and DuckDuckGo Email Protection make use of Haraka. + +Explore more about it on its [GitHub page][18]. + +**Key Highlights:** + +- Built using Node.js +- Plugin system to extend functionalities + +### 6. Modoboa + +![][19] + +[Modoboa][20] is an all-in-one open-source solution. + +It can help you build a mail server and give you the ability to manage your emails. You can create calendars, add unlimited domains, create filtering rules, and access webmail. Modoboa also provides paid maintenance options if you want their professional help setting it up and managing it. + +Not just an all-rounder solution, but it offers a quick way to get started with your email infrastructure. + +**Key Highlights:** + +- All-in-one option +- Paid assistance available +- Built-in monitoring + +### 7. Postfix + +Postfix is a Mail Transfer Agent. It may not be a server on its own, but it couples with some other solutions that help you build an email server. + +While mailcow includes [Postfix][21] (and you can configure it along with similar solutions), you can choose to use it separately per your use case. Postfix is also the default Mail Transfer Agent in the Ubuntu server. + +Postfix can be used as an external SMTP. Not to forget, you can also set up Postfix to [work with Gmail][22]. It is easy to configure, and the documentation available for it is plenty useful. + +**Key Highlights:** + +- Easy to configure +- Flexible + +### 8. Maddy + +[Maddy][23] is a great choice if you need a lightweight mail server implementation. The official description says it is a "_Composable all-in-one mail server_". + +When you compare Maddy with mailcow, you will find that it offers some of the features you get with mailcow, meaning it is not just limited to outgoing emails like others. + +Maddy is popular for its use case, where it can replace multiple options like Postfix with a single implementation. You can send/receive, and store messages with Maddy via SMTP and IMAP. The storage feature is in beta at the time of writing the article. + +**Key Highlights:** + +- Lightweight +- Replaces multiple use-cases that you get with options like Postfix +- No dependency on Docker + +### 9. Dovecot + +[Dovecot][24] is an open-source IMAP server that works as a Mail Delivery Agent. It can work together with Postfix as both do different things. + +Compared to other solutions, it offers easy administration, reliable email-sending capabilities, and self-healing powers. + +Dovecot offers a premium offering for large infrastructure with professional support. + +**Key Highlights:** + +- Easy Administration +- Self-healing capabilities +- Performance-focused + +### 10. Poste.io + +![poste mail server][25] + +[Poste.io][26] utilizes mail server solutions like Haraku, Dovecot, and other open-source components. Ranging from tools for spam filtering to an antivirus engine. + +If you want to set up an open-source mail server using some of these components and be able to manage and secure things easily, Poste.io is an excellent choice. + +**Key Highlights:** + +- Easy to manage and build using multiple open-source mail server components +- Admin panel interface + +### 11. iRedMail + +[iRedMail][27] is similar to mailcow which helps you build a mail server utilizing various open-source components. You can also manage your calendars with the mail server created. 
+ +While you can set it up for yourself, it provides paid professional support if you need it. + +You get a web panel, Linux distro support to host it on, and the ability to create unlimited accounts. + +**Key Highlights:** + +- Easy to use +- Web panel for easy management + +### 12. Mailu + +![mailu][28] + +[Mailu][29] is a Docker-based mail server that gives you the best of everything while limiting some features. + +That does not mean it is bad; Mailu aims to focus on the necessary features without adding many capabilities that are not useful for most. Even with this objective, it stands out by adding ARM support, Kubernetes support, and a couple more things. + +You get a standard mail server, advanced email features, a web admin interface, and privacy-focused features. + +**Key Highlights:** + +- Simple interface +- Focused solution without bells and whistles +- ARM support + +### Ready to Build and Manage Your Email Server? + +With open-source tools and email servers, you can take control of your data and manage/optimize email transactions for your business or website. + +As I mentioned, it takes a lot of work to do it. So, open-source self-hostable email servers can work if you want to have a customized experience and have a team that can be responsible for it. + +💬 _I am sure there are many more options, like_[_mail in a box_][30]_, to help you deploy a mail server quickly._ + +_Here, we tried to pick the best ones for your convenience. What is your favorite open-source email server?_ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/open-source-email-servers/ + +作者:[Ankush Das][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/best-email-clients-linux/ +[2]: https://www.youtube.com/embed/d1Lzw_Q_fJQ?feature=oembed +[3]: https://github.com/postalserver/postal +[4]: https://docs.postalserver.io/ +[5]: https://itsfoss.com/content/images/2023/06/mailcow-ui.jpg +[6]: https://mailcow.email/ +[7]: https://github.com/mailcow/mailcow-dockerized +[8]: https://docs.mailcow.email/ +[9]: https://itsfoss.com/content/images/2023/06/cuttlefish-ui.png +[10]: https://cuttlefish.io/ +[11]: https://github.com/mlandauer/cuttlefish +[12]: https://itsfoss.com/content/images/2023/06/james.jpg +[13]: https://james.apache.org/ +[14]: https://james.apache.org/server/install.html +[15]: https://github.com/apache/james-project +[16]: https://haraka.github.io/ +[17]: https://linuxhandbook.com/smtp-services/ +[18]: https://github.com/haraka/Haraka +[19]: https://itsfoss.com/content/images/2023/06/modoboa.jpg +[20]: https://modoboa.org/en/ +[21]: https://www.postfix.org/ +[22]: https://www.linode.com/docs/guides/configure-postfix-to-send-mail-using-gmail-and-google-workspace-on-debian-or-ubuntu/ +[23]: https://maddy.email/ +[24]: https://www.dovecot.org/ +[25]: https://itsfoss.com/content/images/2023/06/poste-mailserver.png +[26]: https://poste.io/ +[27]: https://www.iredmail.org/ +[28]: https://itsfoss.com/content/images/2023/06/mailu.png +[29]: https://mailu.io/2.0/ +[30]: https://mailinabox.email/ diff --git a/sources/tech/20230619.1 ⭐️⭐️ Bash Basics Series 2 Using Variables in Bash.md b/sources/tech/20230619.1 ⭐️⭐️ Bash Basics Series 2 Using Variables in Bash.md new file mode 100644 index 0000000000..0248dab68b --- 
/dev/null +++ b/sources/tech/20230619.1 ⭐️⭐️ Bash Basics Series 2 Using Variables in Bash.md @@ -0,0 +1,181 @@ +[#]: subject: "Bash Basics Series #2: Using Variables in Bash" +[#]: via: "https://itsfoss.com/bash-use-variables/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Bash Basics Series #2: Using Variables in Bash +====== + +In the first part of the Bash Basics Series, I briefly mentioned variables. It is time to take a detailed look at them in this chapter. + +If you have ever done any kind of coding, you must be familiar with the term 'variable'. + +If not, think of a variable as a box that holds up information, and this information can be changed over time. + +Let's see about using them. + +### Using variables in Bash shell + +Open a terminal and use initialize a variable with a random number 4: + +``` +var=4 +``` + +So now you have a variable named `var` and its value is `4`. Want to verify it? **Access the value of a variable by adding $ before the variable name**. It's called parameter expansion. + +``` +[email protected]:~$ echo The value of var is $var +The value of var is 4 +``` + +> 🚧 There must NOT be a space before or after`=`during variable initialization. + +If you want, you can change the value to something else: + +![Using variables in shell][1] + +In Bash shell, a variable can be a number, character, or string (of characters including spaces). + +![Different variable types in Bash shell][2] + +> 💡 Like other things in Linux, the variable names are also case-sensitive. They can consist of letters, numbers and the underscore "_". + +### Using variables in Bash scripts + +Did you notice that I didn't run a shell script to show the variable examples? You can do a lot of things in the shell directly. When you close the terminal, those variables you created will no longer exist. + +However, your distro usually adds global variables so that they can be accessed across all of your scripts and shells. + +Let's write some scripts again. You should have the script directory created earlier but this command will take care of that in either case: + +``` +mkdir -p bash_scripts && cd bash_scripts +``` + +Basically, it will create `bash_scripts` directory if it doesn't exist already and then switch to that directory. + +Here. let's create a new script named `knock.sh` with the following text. + +``` +#!/bin/bash + +echo knock, knock +echo "Who's there?" +echo "It's me, $USER" +``` + +Change the file permission and run the script. You learned it in the previous chapter. + +Here's what it produced for me: + +![Using global variable in Bahs script][3] + +**Did you notice how it added my name to it automatically?** That's the magic of the global variable $USER that contains the username. + +You may also notice that I used the " sometimes with echo but not other times. That was deliberate. [Quotes in bash][4] have special meanings. They can be used to handle white spaces and other special characters. Let me show an example. + +### Handling spaces in variables + +Let's say you have to use a variable called `greetings` that has the value `hello and welcome`. 
+ +If you try initializing the variable like this: + +``` +greetings=Hello and Welcome +``` + +You'll get an error like this: + +``` +Command 'and' not found, but can be installed with: +sudo apt install and +``` + +This is why you need to use either single quotes or double quotes: + +``` +greetings="Hello and Welcome" +``` + +And now you can use this variable as you want. + +![Using spaces in variable names in bash][5] + +### Assign the command output to a variable + +Yes! You can store the output of a command in a variable and use them in your script. It's called command substitution. + +``` +var=$(command) +``` + +Here's an example: + +``` +[email protected]:~$ today=$(date +%D) +[email protected]:~$ echo "Today's date is $today" +Today's date is 06/19/23 +[email protected]:~$ +``` + +![Command substitution in bash][6] + +The older syntax used backticks instead of $() for the command substitution. While it may still work, you should use the new, recommended notation. + +> 💡 Variables change the value unless you declare a 'constant' variable like this:`readonly pi=3.14`. In this case, the value of variable`pi`cannot be changed because it was declared`readlonly`. + +### 🏋️ Exercise time + +Time to practice what you learned. Here are some exercise to test your learning. + +**Exercise 1**: Write a bash script that prints your username, present working directory, home directory and default shell in the following format. + +``` +Hello, there +My name is XYZ +My current location is XYZ +My home directory is XYZ +My default shell is XYZ +``` + +**Hint**: Use global variables $USER, $PWD, $HOME and $SHELL. + +**Exercise 2:** Write a bash script that declares a variable named `price`. Use it to get the output in the following format: + +``` +Today's price is $X +Tomorrow's price is $Y +``` + +Where X is the initial value of the variable `price` and it is doubled for tomorrow's prices. + +**Hint**: Use / to escape the special character $. + +The answers to the exercises can be discussed in this dedicated thread in the community. + +In the next chapter of the Bash Basics Series, you'll see how to make the bash scripts interactive by passing arguments and accepting user inputs. 
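+
+Before you head there, here is a small recap script that ties together the ideas from this chapter: quoting a value that contains spaces, command substitution and a readonly constant. The file name and the values are only examples.
+
+```
+#!/bin/bash
+
+# A value containing spaces must be quoted
+greetings="Hello and Welcome"
+
+# Store the output of a command using command substitution
+today=$(date +%D)
+
+# A readonly variable acts as a constant and cannot be reassigned
+readonly series="Bash Basics"
+
+echo "$greetings to the $series series!"
+echo "Today's date is $today"
+```
+
+Save it as something like `recap.sh`, make it executable and run it. If you add a line that tries to reassign `series`, Bash will refuse with a `readonly variable` error.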
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/bash-use-variables/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/06/Using-variables-in-shell.png +[2]: https://itsfoss.com/content/images/2023/06/bash-variables-types.png +[3]: https://itsfoss.com/content/images/2023/06/using-global-variable-bash-script.png +[4]: https://linuxhandbook.com:443/quotes-in-bash/ +[5]: https://itsfoss.com/content/images/2023/06/using-spaces-in-bash-variable.png +[6]: https://itsfoss.com/content/images/2023/06/command-substitue-bash-variable.png diff --git a/sources/tech/20230621.1 ⭐️ Install and Use Flatpak on Ubuntu.md b/sources/tech/20230621.1 ⭐️ Install and Use Flatpak on Ubuntu.md new file mode 100644 index 0000000000..5c369e7f5e --- /dev/null +++ b/sources/tech/20230621.1 ⭐️ Install and Use Flatpak on Ubuntu.md @@ -0,0 +1,233 @@ +[#]: subject: "Install and Use Flatpak on Ubuntu" +[#]: via: "https://itsfoss.com/flatpak-ubuntu/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Install and Use Flatpak on Ubuntu +====== + +The Linux world has three 'universal' packaging formats that allow running on 'any' Linux distribution; Snap, Flatpak and AppImage. + +Ubuntu comes baked-in with Snap but most distributions and developers avoid it because of its close source nature. They prefer [Fedora's Flatpak packaging system][1]. + +As an Ubuntu user, you are not restricted to Snap. You also can use Flatpak on your Ubuntu system. + +In this tutorial, I'll discuss the following: + +- Enabling Flatpak support on Ubuntu +- Using Flatpak commands to manage packages +- Getting packages from Flathub +- Add Flatpak packages to Software Center + +Sounds exciting? Let's see them one by one. + +### Installing Flatpak on Ubuntu + +You can easily install Flatpak using the following command: + +``` +sudo apt install flatpak +``` + +For **_Ubuntu 18.04 or older versions_**, use PPA: + +``` +sudo add-apt-repository ppa:flatpak/stable +sudo apt update +sudo apt install flatpak +``` + +#### Add Flathub repo + +You have installed Flatpak support in your Ubuntu system. However, if you try to install a Flatpak package, you'll get '[No remote refs found' error][2]. That's because there are no Flatpak repositories added and hence Flatpak doesn't even know from where it should get the applications. + +Flatpak has a centralized repository called Flathub. A number of Flatpak applications can be found and downloaded from here. + +You should add the Flathub repo to access those applications. + +``` +flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo +``` + +![Install Flatpak in latest versions of Ubuntu and then add Flathub repo][3] + +Once Flatpak is installed and configured, **restart your system**. Otherwise, installed Flatpak apps won't be visible on your system menu. + +Still, you can always run a flatpak app by running: + +``` +flatpak run +``` + +### Common Flatpak Commands + +Now that you have Flatpak packaging support installed, it's time to learn some of the most common Flatpak commands needed for package management. 
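+
+Before that, a quick sanity check doesn't hurt. The two commands below only read the current state, so they are safe to run at any time; if `flathub` shows up in the list of remotes, the setup above worked.
+
+```
+# Show the installed Flatpak version
+flatpak --version
+
+# List the configured remotes; 'flathub' should appear here
+flatpak remotes
+```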
+ +#### Search for a Package + +Either use Flathub website or use the following command, if you know the application name: + +``` +flatpak search +``` + +![Search for a package using Flatpak Search command][4] + +🚧 + +Except for searching a flatpak package, on other instances, the refers to the proper Flatpak package name, like`com.raggesilver.BlackBox`(Application ID in the above screenshot). You may also use the last word`Blackbox`of the Application ID. + +#### Install a Flatpak package + +Here's the syntax for installing a Flatpak package: + +``` +flatpak install +``` + +Since almost all the times you'll be getting applications from Flathub, the remote repository will be `flathub`: + +``` +flatpak install flathub +``` + +![Install a package after searching for its name][5] + +In some rare cases, you may install Flatpak packages from the developer's repository directly instead of Flathub. In that case, you use a syntax like this: + +``` +flatpak install --from https://flathub.org/repo/appstream/com.spotify.Client.flatpakref +``` + +#### Install a package from flatpakref + +This is optional and rare too. But sometime, you will get a `.flatpakref` file for an application. This is **NOT an offline installation**. The .flatpakref has the necessary details about where to get the packages. + +To install from such a file, open a terminal and run: + +``` +flatpak install +``` + +![Install a Flatpak package from Flatpakref file][6] + +#### Run a Flatpak application from the terminal + +Again, something you won't be doing it often. Mostly, you'll search for the installing application in the system menu and run the application from there. + +However, you can also run them from the terminal using: + +``` +flatpak run +``` + +#### List installed Flatpak packages + +Want to see which Flatpak applications are installed on your system? List them like this: + +``` +flatpak list +``` + +![List all the installed Flatpak packages on your system][7] + +#### Uninstall a Flatpak package + +You can remove an installed Flatpak package in the following manner: + +``` +flatpak uninstall +``` + +If you want to **clear the leftover packages and runtimes, that are no longer needed**, use: + +``` +flatpak uninstall --unused +``` + +![Remove a Flatpak package and later, if there is any unused runtimes or packages, remove them][8] + +It may help you [save some disk space on Ubuntu][9]. + +### Flatpak commands summary + +Here's a quick summary of the commands you learned above: + +UsageCommand | +| Search for Packages | flatpak search | +| Install a Package | flatpak install | +| List Installed Package | flatpak list | +| Install from flatpakref | flatpak install | +| Uninstall a Package | flatpak uninstall | +| Uninstall Unused runtimes and packages | flatpak uninstall --unused | + +### Using Flathub to explore Flatpak packages + +I understand that searching for Flatpak packages through the command line is not the best experience and that's where the [Flathub website][10] comes into picture. + +You can browse the Flatpak application on Flathub, which provides additional details like verified publishers, total number of downloads etc. + +You'll also get the commands you need to use for installing the applications at the bottom of the application page. + +![][11] + +![][12] + +### Bonus: Use Software Center with Flatpak package support + +You can add the Flatpak packages to the GNOME Software Center application and use it for installing packages graphically. 
+ +There is a dedicated plugin to add Flatpak to GNOME Software Center. + +🚧 + +Since Ubuntu 20.04, the default software center in Ubuntu is Snap Store and it does not support flatpak integration. So, installing the below package will result in two software centers simultaneously: one Snap and another DEB. + +![When you install GNOME Software Flatpak plugin in Ubuntu, a DEB version of GNOME Software is installed. So you will have two software center application][13] + +``` +sudo apt install gnome-software-plugin-flatpak +``` + +![Installing GNOME Software Plugin in Ubuntu][14] + +### Conclusion + +You learned plenty of things here. You learned to enable Flatpak support in Ubuntu and manage Flatpak packages through the command line. You also learned about the integration with the Software Center. + +I hope you feel a bit more comfortable with Flatpaks now. Since you discovered one of the three universal packages, how about [learning about Appimages][15]? + +_Let me know if you have questions or if you face any issues._ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/flatpak-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/what-is-flatpak/ +[2]: https://itsfoss.com/no-remote-ref-found-flatpak/ +[3]: https://itsfoss.com/content/images/2023/06/install-flatpak-1.svg +[4]: https://itsfoss.com/content/images/2023/06/flatpak-search.svg +[5]: https://itsfoss.com/content/images/2023/06/flatpak-install-package.svg +[6]: https://itsfoss.com/content/images/2023/06/install-flatpak-ref.svg +[7]: https://itsfoss.com/content/images/2023/06/flatpak-list.svg +[8]: https://itsfoss.com/content/images/2023/06/flatpak-uninstall-package-with-removal-of-unused.svg +[9]: https://itsfoss.com/free-up-space-ubuntu-linux/ +[10]: https://flathub.org:443/en-GB +[11]: https://itsfoss.com/content/images/2023/06/Flathub-apps-page-2.png +[12]: https://itsfoss.com/content/images/2023/06/application-details-in-flathub-website-2.png +[13]: https://itsfoss.com/content/images/2023/06/two-software-centers-in-Ubuntu.png +[14]: https://itsfoss.com/content/images/2023/06/install-gnome-flatpak-plugin.svg +[15]: https://itsfoss.com/use-appimage-linux/ \ No newline at end of file diff --git a/sources/tech/20230626.0 ⭐️⭐️ Bash Basics Series 3 Passing Arguments and Accepting User Inputs.md b/sources/tech/20230626.0 ⭐️⭐️ Bash Basics Series 3 Passing Arguments and Accepting User Inputs.md new file mode 100644 index 0000000000..e24893785d --- /dev/null +++ b/sources/tech/20230626.0 ⭐️⭐️ Bash Basics Series 3 Passing Arguments and Accepting User Inputs.md @@ -0,0 +1,171 @@ +[#]: subject: "Bash Basics Series #3: Passing Arguments and Accepting User Inputs" +[#]: via: "https://itsfoss.com/bash-pass-arguments/" +[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Bash Basics Series #3: Passing Arguments and Accepting User Inputs +====== + +Let's have arguments... with your bash scripts 😉 + +You can make your bash script more useful and interactive by passing variables to it. + +Let me show you this in detail with examples. 
+ +### Pass arguments to a shell script + +When you run a shell script, you can add additional variables to it in the following fashion: + +``` +./my_script.sh var1 var2 +``` + +Inside the script, you can use $1 for the 1st argument, $2 for the 2nd argument and so on. + +> 💡 $0 is a special variable that holds the name of the script being executed. + +Let's see it with an actual example. Switch to the directory where you keep your practice bash scripts. + +``` +mkdir -p bash_scripts && cd bash_scripts +``` + +Now, create a new shell script named `arguments.sh` (I could not think of any better names) and add the following lines to it: + +``` +#!/bin/bash + +echo "Script name is: $0" +echo "First argument is: $1" +echo "Second argument is: $2" +``` + +Save the file and make it executable. Now run the script like you always do but this time add any two strings to it. You'll see the details printed on the screen. + +> 🚧 Arguments are separated by a white space (space, tab). If you have an argument with space in it, use double quotes around it otherwise it will be counted as separate arguments. + +![Pass arguments to the bash scripting][1] + +> 💡 Bash scripts support up to 255 arguments. But for arguments 10 and above, you have to use curly braces ${10}, ${11}...${n}. + +As you can see, the $0 represents the script name while the rest of the arguments are stored in the numbered variables. There are some other special variables that you may use in your scripts. + +| Special | VariableDescription | +| :- | :- | +| $0 | Script name | +| $1, $2...$9 | Script arguments | +| ${n} | Script arguments from 10 to 255 | +| $# | Number of arguments | +| [[email protected]][2] | All arguments together | +| $$ | Process id of the current shell | +| $! | Process id of the last executed command | +| $? | Exit status of last executed command | + +> 🏋️‍♀️ Modify the above script to display the number of arguments. + +#### What if the number of arguments doesn't match? + +In the above example, you provided the bash script with two arguments and used them in the script. + +But what if you provided only one argument or three arguments? + +Let's do it actually. + +![Passing fewer or more arguments to bash script][3] + +As you can see above, when you provided more than expected arguments, things were still the same. Additional arguments are not used so they don't create issues. + +However, when you provided fewer than expected arguments, the script displayed empty space. This could be problematic if part of your script is dependent on the missing argument. + +### Accepting user input and making an interactive bash script + +You can also create bash scripts that prompt the user to provide input through the keyboard. This makes your scripts interactive. + +The read command provides this feature. You can use it like this: + +``` +echo "Enter something" +read var +``` + +The echo command above is not required but then the end user won't know that they have to provide input. And then everything that the user enters before pressing the return (enter) key is stored in `var` variable. + +You can also display a prompt message and get the value in a single line like this: + +``` +read -p "Enter something? " var +``` + +Let's see it in action. Create a new `interactive.sh` shell script with the following content: + +``` +#!/bin/bash + +echo "What is your name, stranger?" +read name +read -p "What's your full name, $name? 
" full_name +echo "Welcome, $full_name" +``` + +In the above example, I used the `name` variable to get the name. And then I use the `name` variable in the prompt and get user input in `full_name` variable. I used both ways of using the read command. + +Now if you give the execute permission and then run this script, you'll notice that the script displays `What is your name, stranger?` and then waits for you to enter something from the keyboard. You provide input and then it displays `What's your full name` type of message and waits for the input again. + +Here's a sample output for your reference: + +![Interactive bash shell script][4] + +### 🏋️ Exercise time + +Time to practice what you learned. Try writing simple bash scripts for the following scenarios. + +**Exercise 1**: Write a script that takes three arguments. You have to make the script display the arguments in reverse order. + +**Expected output**: + +``` +[email protected]:~/bash_scripts$ ./reverse.sh ubuntu fedora arch +Arguments in reverse order: +arch fedora ubuntu +``` + +**Exercise 2**: Write a script that displays the number of arguments passed to it. + +**Hint**: Use special variable $# + +**Expected output**: + +``` +[email protected]:~/bash_scripts$ ./arguments.sh one and two and three +Total number of arguments: 5 +``` + +**Exercise 3**: Write a script that takes a filename as arguments and displays its line number. + +**Hint**: Use wc command for counting the line numbers. + +You may discuss your solution in the community. + +Great! So now you can (pass) argument :) In the next chapter, you'll learn to perform basic mathematics in bash. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/bash-pass-arguments/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/content/images/2023/06/run-bash-script-with-arguments.png +[2]: https://itsfoss.com/cdn-cgi/l/email-protection +[3]: https://itsfoss.com/content/images/2023/06/passing-non-matching-arguments-bash-shell.png +[4]: https://itsfoss.com/content/images/2023/06/interactive-bash-shell-script.png diff --git a/sources/tech/20230626.2 ⭐️⭐️ 15 Best GTK Themes for Ubuntu and Other Distros.md b/sources/tech/20230626.2 ⭐️⭐️ 15 Best GTK Themes for Ubuntu and Other Distros.md new file mode 100644 index 0000000000..25804b05b8 --- /dev/null +++ b/sources/tech/20230626.2 ⭐️⭐️ 15 Best GTK Themes for Ubuntu and Other Distros.md @@ -0,0 +1,355 @@ +[#]: subject: "15 Best GTK Themes for Ubuntu and Other Distros" +[#]: via: "https://www.debugpoint.com/best-gtk-themes/" +[#]: author: "Arindam https://www.debugpoint.com/author/admin1/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +15 Best GTK Themes for Ubuntu and Other Distros +====== + +**We present a fresh list of the best GTK themes for various Linux distributions.** + +The visual appearance of your desktop plays a significant role in your overall Linux experience. GTK themes offer a simple yet powerful way to customize the look of your desktop environment. Applying a GTK theme lets you change the colours, window decorations, and overall style to match your preferences. 
+ +Apart from KDE Plasma, and LXQt, the majority of the popular desktop environments are based on GTK. Hence it’s important to find out which are the best GTK themes available today. + +Installing GTK themes on Linux is relatively straightforward. Installation instructions are usually on the theme’s official website. Generally, it involves downloading the theme files and placing them (after extracting) in the `~/.themes` folder in your home directory. + +Let’s dive into the top 15 GTK themes that have gained popularity in 2023. + +### Best GTK Themes in 2023 + +#### Orchis + +Orchis is a highly regarded GTK theme that has gained popularity for its refreshing and unique design. Inspired by the appearance of the macOS Big Sur, Orchis brings a sleek and modern look to Linux desktops. + +Orchis has gained recognition for its ability to bring the elegance of macOS Big Sur to the Linux ecosystem. By combining elements of modern design and Fluent Design language, Orchis offers a visually appealing and consistent user interface that enhances the overall desktop experience. Whether you prefer a light or dark variant, Orchis provides a refreshing and refined look to your Linux desktop. + +![Orchis Theme][1] + +**Note**: This theme supports libadwaita/GTK4 theming. Hence, it is suitable for GNOME 40+ desktops. + +You can download and get the installation instruction from the below page. + +[Download Orchis Theme][2] + +#### WhiteSur + +WhiteSur is a GTK theme inspired by the sleek design of macOS Big Sur. Here are its key features: + +- macOS Big Sur aesthetics with rounded corners and translucent windows. +- Light and dark modes are available for different preferences. +- Consistent design across GTK-based applications. +- Attention to detail with smooth animations and defined shadows. +- Customizable options for accent colours, window decorations, and button styles. + +![WhiteSur GTK Theme][3] + +WhiteSur is compatible with various Linux desktop environments like GNOME, Xfce, and Cinnamon, making it accessible to a wide user base. Experience the elegance of macOS Big Sur on your Linux desktop with WhiteSur’s clean and unified interface. + +**Note**: This theme also supports GTK4/libadwaita theming. + +You can download the theme from the below page and get the installation instructions. + +[Download WhiteSur Theme][4] + +#### Vimix + +Vimix is a popular GTK theme that offers a stylish and modern look to Linux desktop environments. Here are its key features: + +- Vimix showcases a sleek and contemporary design with its flat interface, clean lines, and subtle gradients. +- The theme offers a range of colour variations, including Vimix Light and Vimix Dark. +- Vimix is compatible with multiple Linux desktop environments, including GNOME, Xfce, Cinnamon, and more, making it accessible to various Linux users. + +![Vimix Theme][5] + +Vimix has gained popularity for its combination of modern design elements, colour versatility, and compatibility with various desktop environments. Its sleek appearance and customization options make it an excellent choice for Linux users seeking a visually pleasing and consistent user interface. + +**Note**: It supports the modern GTK4/libadwaita theming. + +You can download the Vimix theme from the below page. + +[Download Vimix Theme][6] + +#### Prof-GNOME-theme + +Prof-Gnome-theme is a well-known GTK theme that brings a professional and sophisticated look to Linux desktop environments, particularly GNOME. 
Here are its key features: + +- Prof-Gnome-theme offers a clean and professional design featuring a minimalistic approach, elegant lines, and refined aesthetics. +- The theme employs a subtle and tasteful colour palette, focusing on neutral tones and soft accents that create a calming and professional atmosphere. +- The theme ensures consistency across GTK-based applications, providing a cohesive and harmonious experience throughout the desktop environment. + +![Prof-GNOME Theme - gnome themes for 2022][7] + +Prof-Gnome-theme is favoured by professionals and users who appreciate a clean and sophisticated desktop environment. Its attention to detail and focus on professionalism make it an excellent choice for those seeking a refined and elegant look for their Linux desktop. + +Download and installation instruction for this theme is available on the below page: + +[Download prof-GNOME Theme][8] + +#### Ant + +Ant is a popular GTK theme known for its sleek and minimalist design. Its key features include: + +- Clean and flat aesthetic with subtle shadows. +- Consistent and well-defined icons. +- Easy on the eyes with a balanced colour palette. +- Support for both light and dark variants. +- Seamless integration with GNOME desktop environment. + +![Ant Theme][9] + +Users can enjoy a modern and visually pleasing experience on their Linux systems with the Ant theme. Its simplicity and elegance make it a favourite choice among those seeking a refined look for their GTK-based applications and desktop. + +You can download the Ant theme from the below page. + +**Note**: This theme does **not** support GTK4/libadwaita. + +[Download Ant theme][10] + +#### Flat Remix + +Flat Remix is a highly acclaimed GTK theme that offers a refreshing and modern look to Linux desktops. Its key features include: + +- Flat and minimalistic design with vibrant colours. +- Consistent and unified appearance across GTK-based applications. +- Comprehensive icon set, providing a polished visual experience. +- Support for both light and dark variants. + +![Flat Remix Theme][11] + +Flat Remix brings a delightful touch to the Linux desktop environment, enhancing the overall aesthetics and user experience. It’s vibrant colours and cohesive design makes it popular among users who appreciate a clean and contemporary interface. + +**Note**: This theme does **not** support GTK4/libadwaita. + +You can download the Flat remix theme from the below page. + +[Download Flat Remix Theme][12] + +#### Fluent + +The Fluent GTK theme is a modern and stylish theme inspired by Microsoft’s Fluent Design System. Here are its key features: + +- Sleek and polished appearance, incorporating Fluent Design’s principles of depth, motion, and transparency. +- Fluent-inspired icons provide a cohesive and unified look. +- Seamless integration with GTK-based applications, delivering a consistent user experience. +- It supports both light and dark modes, allowing users to personalize their desktops. +- Actively developed and regularly updated, ensuring compatibility with the latest GTK versions. + +![Fluent GTK Theme][13] + +The fluent GTK theme brings a touch of Microsoft’s elegance to the Linux desktop, appealing to users who appreciate a contemporary and refined visual experience. Its adherence to Fluent Design guidelines and continuous development make it an attractive choice for those seeking a modern and sophisticated GTK theme. + +**Note**: This theme supports GTK4/libadwaita theming. 
+ +You can download Fluent GTK Theme and get the installation instructions below. + +[Download Fluent GTK Theme][14] + +#### Grvbox + +Grvbox is a popular GTK theme inspired by the aesthetics of the renowned Vim colour scheme, [gruvbox][15]. Here are its key features: + +- The warm and retro colour palette is reminiscent of old-school terminal interfaces. +- Thoughtfully designed to provide a comfortable and eye-pleasing visual experience. +- It offers both light and dark variants, allowing users to choose their preferred style. +- Seamless integration with GTK-based applications, ensuring a consistent and unified look. +- Regular updates and community support, ensuring compatibility with the latest GTK versions. + +![Gruvbox GTK Theme][16] + +Grvbox theme brings a nostalgic charm to the Linux desktop, evoking a sense of familiarity and simplicity. Its carefully chosen colours and attention to detail make it popular among enthusiasts who appreciate a vintage-inspired look for their GTK-based applications and desktop environment. + +**Note**: This theme supports GTK4/libadwaita theming. + +You can get the download and installation instructions from the below page: + +[Download Grvbox theme][17] + +#### Graphite + +Graphite is a dark theme for GTK+-based desktop environments. It is designed to be minimal and elegant, making it ideal for users who want a clean, distraction-free interface. + +The Graphite theme is based on the Adwaita theme and shares many of the same features. However, Graphite has a darker colour palette and a more minimalist design. + +![Graphite gtk theme][18] + +If you are looking for a dark theme that is both minimal and elegant, then the Graphite theme is a great option. It is easy to install and use and compatible with many desktop environments. + +A few months back, I reviewed this theme; you might want to check it out: [Graphite theme overview][19]. + +**Note**: This theme is ready for GTK4/libadwaita. + +You can install this theme using the instructions present on the below page. + +[Graphite GTK theme][20] + +#### Material + +Material is a widely recognized GTK theme inspired by Material for Neovim and Graphite theme (featured above). Here are its key features: + +- Clean and modern aesthetic, featuring flat design elements and vibrant colours. +- Consistent and cohesive iconography, adhering to Material Design guidelines. +- Provides light and dark variants, allowing users to customize their visual experience. +- Seamless integration with GTK-based applications, ensuring a unified look and feel. +- Regular updates and community support, ensuring compatibility with the latest GTK versions. + +![Material GTK Theme][21] + +The material theme brings the popular Material Design language to the Linux desktop, offering a visually appealing and intuitive user experience. With its stylish design and compatibility with various GTK environments, the Material theme is popular for users who appreciate a modern and harmonious interface. + +**Note**: This theme supports the modern GTK4/libadwaita themes. + +You can download and install it using the packages present on the below page: + +[Download Material Theme][22] + +#### Arc + +Arc theme is a popular GTK theme in the Linux community, known for its sleek and modern design. It offers a clean, minimalistic appearance that blends well with various desktop environments, particularly GNOME. 
Here are some key features of the Arc theme: + +- Visually appealing design with smooth curves and a flat interface +- Range of colour variations, including Arc, Arc-Darker, and Arc-Dark +- Option for changing button styles, title bar layout, and window borders + +Arc theme combines aesthetics and functionality, making it a go-to choice for many Linux users seeking a visually pleasing and consistent user interface. + +![Arc Darker Theme in Ubuntu GNOME][23] + +However, the current theme version doesn’t support the modern GTK4/libadwaita. + +You can download it in the official repo below (this is the forked repo of the [original Arc theme][24]): + +[Download Arc Theme][25] + +Also, you can install it using the below command in Ubuntu and related distributions: + +``` +sudo apt install arc-theme +``` + +#### Nordic + +Nordic is a highly regarded GTK theme inspired by the serene landscapes of the Nordic region. Here are its key features: + +- The subtle and soothing colour palette reminisces the Northern lights and snowy landscapes. +- The harmonious combination of light and dark elements for optimal contrast and readability. +- Well-designed icons perfectly complement the overall aesthetic. +- Cross-platform compatibility allows users to enjoy the Nord theme in various GTK-based environments. +- Regular updates and community support, ensuring ongoing refinement and compatibility. + +![Nordic theme][26] + +The Nordic theme brings a touch of tranquillity and elegance to the Linux desktop, immersing users in a visually captivating environment. Its carefully chosen colours and attention to detail make it popular among those seeking a visually pleasing and relaxing GTK theme for their Linux systems. + +It also comes with a Firefox theme for a better look and integration. + +**Note**: This theme supports the modern GTK4/libadwaita. + +You can download this theme from the below page: + +[Download Nordic theme][27] + +#### Adapta + +Adapta is a highly regarded GTK theme known for its versatility and modern design. + +Adapta theme enhances the Linux desktop’s sleek and adaptable style, allowing users to personalize their interface while maintaining a polished and unified look. It’s flexibility, and ongoing development makes it a favoured choice for those seeking a modern and customizable GTK theme for their Linux systems. + +However, the development **stopped** for this great theme for many years. You can still use the theme from the official page below: + +[Download Adapta theme][28] + +#### Equilux + +Equilux is a dark theme for GTK+-based desktop environments. It is designed to be neutral and non-distracting, making it ideal for use in low-light conditions or for people sensitive to bright colours. + +The Equilux theme is based on the Materia theme and shares many of the same features. However, Equilux has a more muted colour palette, making it even more suitable for dark environments. + +![Equilux Theme][29] + +If you are looking for a dark theme that is both neutral and non-distracting, then the Equilux theme is a great option. It is easy to install and use and compatible with a wide range of desktop environments. + +**Note**: The development of this theme has been [stalled][30]. There will not be any further updates. + +You can find the current version of this theme on the below page: + +[Download Equilux Theme][31] + +#### Paper + +Paper is a widely recognized GTK theme known for its simplistic yet elegant design. 
Here are its key features: + +- Clean and flat visual style with subtle shadows for depth. +- Thoughtfully crafted icons provide a consistent and polished appearance. +- Offers multiple colour variants, including light and dark themes. +- Well-maintained and actively developed, ensuring compatibility with the latest GTK versions. +- Supports popular desktop environments like GNOME, Xfce, and Unity. + +![Paper Theme in GNOME][32] + +With its minimalistic approach and attention to detail, Paper offers users a visually pleasing and harmonious desktop experience. Its versatile colour options and compatibility with various desktop environments make it popular for Linux enthusiasts seeking a sleek and modern look. + +**Note**: This theme’s development has **ended** in 2016. There is no support for modern GTK4+. + +[Download Paper theme][33] + +### A note about Adwaita + +The popular Adwaita theme is one of the best and most stable GTK themes. The reason I have not included it in the above list is that it’s included as default in many distributions. And users already have it installed in their system. + +### Conclusion + +The above GTK themes represent various tastes of styles, from modern designs to vibrant and colourful aesthetics. Whether you prefer a minimalistic look or a visually striking interface, a GTK theme suits your taste. I encourage you to experiment with the above themes with various icon and cursor themes for a better experience. + +_Few image credits: Respective authors_ + +-------------------------------------------------------------------------------- + +via: https://www.debugpoint.com/best-gtk-themes/ + +作者:[Arindam][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.debugpoint.com/author/admin1/ +[b]: https://github.com/lkxed/ +[1]: https://www.debugpoint.com/wp-content/uploads/2023/06/Orchis-Theme.jpg +[2]: https://github.com/vinceliuice/Orchis-theme +[3]: https://www.debugpoint.com/wp-content/uploads/2023/06/WhiteSur-GTK-Theme.jpg +[4]: https://github.com/vinceliuice/WhiteSur-gtk-theme +[5]: https://www.debugpoint.com/wp-content/uploads/2023/06/Vimix-Theme.jpg +[6]: https://github.com/vinceliuice/vimix-gtk-themes +[7]: https://www.debugpoint.com/wp-content/uploads/2022/05/Prof-GNOME-Theme.jpg +[8]: https://www.gnome-look.org/p/1334194 +[9]: https://www.debugpoint.com/wp-content/uploads/2023/06/Ant-Theme.jpg +[10]: https://www.gnome-look.org/p/1099856/ +[11]: https://www.debugpoint.com/wp-content/uploads/2023/06/Flat-Remix-Theme-scaled.jpg +[12]: https://drasite.com/flat-remix-gnome +[13]: https://www.debugpoint.com/wp-content/uploads/2023/06/Fluent-GTK-Theme.jpg +[14]: https://github.com/vinceliuice/Fluent-gtk-theme +[15]: https://github.com/morhetz/gruvbox +[16]: https://www.debugpoint.com/wp-content/uploads/2023/06/Gruvbox-GTK-Theme-1600x3187.jpg +[17]: https://github.com/Fausto-Korpsvart/Gruvbox-GTK-Theme +[18]: https://www.debugpoint.com/wp-content/uploads/2023/06/Graphite-gtk-theme-1669x2048.jpg +[19]: https://www.debugpoint.com/graphite-theme-gnome/ +[20]: https://github.com/vinceliuice/Graphite-gtk-theme +[21]: https://www.debugpoint.com/wp-content/uploads/2023/06/Material-GTK-Theme-1600x3205.jpg +[22]: https://github.com/Fausto-Korpsvart/Material-GTK-Themes +[23]: https://www.debugpoint.com/wp-content/uploads/2022/05/Arc-Darker-Theme-in-Ubuntu-GNOME-1.jpg +[24]: https://github.com/horst3180/arc-theme +[25]: 
https://github.com/jnsh/arc-theme +[26]: https://www.debugpoint.com/wp-content/uploads/2023/06/Nordic-theme.jpg +[27]: https://github.com/EliverLara/Nordic +[28]: https://github.com/adapta-project/adapta-gtk-theme +[29]: https://www.debugpoint.com/wp-content/uploads/2023/06/Equilux-Theme.jpg +[30]: https://github.com/ddnexus/equilux-theme +[31]: https://www.gnome-look.org/p/1182169/ +[32]: https://www.debugpoint.com/wp-content/uploads/2018/09/Paper-Theme-in-GNOME.png +[33]: https://github.com/snwh/paper-gtk-theme diff --git a/sources/tech/20230627.0 ⭐️⭐️ Using cat Command in Linux.md b/sources/tech/20230627.0 ⭐️⭐️ Using cat Command in Linux.md new file mode 100644 index 0000000000..15cc38d662 --- /dev/null +++ b/sources/tech/20230627.0 ⭐️⭐️ Using cat Command in Linux.md @@ -0,0 +1,251 @@ +[#]: subject: "Using cat Command in Linux" +[#]: via: "https://itsfoss.com/cat-command/" +[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/" +[#]: collector: "lkxed" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Using cat Command in Linux +====== + +The cat command is used to print the file contents of text files. + +At least, that's what most Linux users use it for and there is nothing wrong with it. + +Cat actually stands for 'concatenate' and was created to [merge text files][1]. But withsingle argument, it prints the file contents. And for that reason, it is a go-to choice for users to read files in the terminal without any additional options. + +### Using the cat command in Linux + +To use the cat command, you'd have to follow the given command syntax: + +``` +cat [options] Filename(s) +``` + +Here, + +- `[options]` are used to modify the default behavior of the cat command such as using the `-n` option to get numbers for each line. +- `Filename` is where you'll enter the filename of the file that you want to work with. + +To make things easy, I will be using a text file named `Haruki.txt` throughout this guide which contains the following text lines: + +``` +Hear the Wind Sing (1979) +Pinball, 1973 (1980) +A Wild Sheep Chase (1982) +Hard-Boiled Wonderland and the End of the World (1985) +Norwegian Wood (1987) +Dance Dance Dance (1990) +South of the Border, West of the Sun (1992) +The Wind-Up Bird Chronicle (1994) +Sputnik Sweetheart (1999) +Kafka on the Shore (2002) +After Dark (2004) +1Q84 (2009-2010) +Colorless Tsukuru Tazaki and His Years of Pilgrimage (2013) +Men Without Women (2014) +Killing Commendatore (2017) +``` + +So, what will be the output when used without any options? Well, let's have a look: + +``` +cat Haruki.txt +``` + +![use cat command in Linux][2] + +As you can see, it printed the whole text file! + +But you can do a lot more than just this. Let me show you some practical examples. + +#### 1. Create new files + +Most Linux users use the touch command to [create new files][3] but the same can be done using the cat command too! + +The cat command has one advantage over the touch command in this case, as you can add text to the file while creating. Sounds cool. Isn't it? + +To do so, you'd have to use the cat command by appending the filename to the `>` as shown: + +``` +cat > Filename +``` + +For example, here, I created a file named `NewFile.txt`: + +``` +cat > NewFile.txt +``` + +Once you do that, there'll be a blinking cursor asking you to write something and finally, you can use `Ctrl + d` to save the changes. 
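+
+For instance, a short session with the `NewFile.txt` example could look like this, where the two text lines are just sample input typed before pressing `Ctrl + d`:
+
+```
+cat > NewFile.txt
+This is my new file.
+It was created with the cat command.
+```
+
+Running `cat NewFile.txt` afterwards prints those two lines back, confirming they were saved.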
+
+**If you wish to create an empty file, then just press `Ctrl + d` without typing anything.**
+
+![Using cat command][4]
+
+That's it! Now, you can use the ls command to show the [contents of the current working directory][5]:
+
+![use the ls command to list the contents of the current working directory][6]
+
+#### 2. Copy the file contents to a different file
+
+Think of a scenario where you want to redirect the file content of **FileA** to **FileB**.
+
+Sure, you can copy and paste. But what if there are hundreds or thousands of lines?
+
+Simple. You use the cat command with redirection of the data flow. To do so, you'd have to follow the given command syntax:
+
+```
+cat FileA > FileB
+```
+
+> 🚧 If you use the above syntax to redirect file contents, the existing contents of FileB are erased first and then the contents of FileA are written into it.
+
+For example, here I will be using two text files, FileA and FileB, which contain the following:
+
+![check the file contents using the cat command][7]
+
+And now, if I use the redirection from FileA to FileB, it will remove the data of FileB and replace it with the data of FileA:
+
+```
+cat FileA > FileB
+```
+
+![redirect the file content using the cat command][8]
+
+Similarly, you can do the same with multiple files:
+
+```
+cat FileA FileB > FileC
+```
+
+![redirect file content of multiple files using the cat command][9]
+
+As you can see, the above command removed the data of FileC and replaced it with the combined data of FileA and FileB.
+
+#### 3. Append the content of one file to another
+
+There are times when you want to append data to a file's existing contents, and in that case, you'll have to use `>>` instead of a single `>`:
+
+```
+cat FileA >> FileB
+```
+
+For example, here I will be appending two files, `FileA` and `FileB`, to `FileC`:
+
+```
+cat FileA.txt FileB.txt >> FileC.txt
+```
+
+![redirect file content without overriding using the cat command][10]
+
+As you can see, the existing data of `FileC.txt` was preserved and the new data was appended at the end of it.
+
+> 💡 You can use `>>` to add new lines to an existing file. Use `cat >> filename`, start adding the text you want, and finally save the changes with `Ctrl + d`.
+
+#### 4. Show line numbers
+
+You may encounter scenarios where you want to see line numbers alongside the output, and that can be achieved using the `-n` option:
+
+```
+cat -n File
+```
+
+For example, here I used the `-n` option with `Haruki.txt`:
+
+![get the number of the lines in the cat command][11]
+
+#### 5. Remove the blank lines
+
+Left multiple blank lines in your text document? The cat command will fix that for you!
+
+To do so, all you have to do is use the `-s` flag.
+
+But there's one downside of using the `-s` flag: you're still left with one blank line:
+
+![remove blank lines with the cat command][12]
+
+As you can see, it works, but the result still falls a little short of what you want.
+
+So how would you remove all the empty lines? By piping it to the grep command:
+
+```
+cat File | grep -v '^$'
+```
+
+Here, the `-v` flag filters out the lines matching the specified pattern, and `'^$'` is a regular expression that matches empty lines.
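+
+As a side note, grep can read the file directly, so the cat at the start of the pipe is optional; the single command below should give the same output and is shown only as an alternative:
+
+```
+grep -v '^$' Haruki.txt
+```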
+
+And here are the results when I used the piped version over `Haruki.txt`:
+
+```
+cat Haruki.txt | grep -v '^$'
+```
+
+![remove all the blank lines in text files using the cat command piped with grep regular expression][13]
+
+Once you have the output you want, you can redirect it to a file to save it:
+
+```
+cat Haruki.txt | grep -v '^$' > File
+```
+
+![save output of cat command by redirection][14]
+
+### That's what you've learned so far
+
+Here's a quick summary of what I explained in this tutorial:
+
+| Command | Description |
+| :- | :- |
+| `cat File` | Prints the file content to the terminal. |
+| `cat > File` | Creates a new file. |
+| `cat FileA > FileB` | The contents of `FileB` are overwritten with the contents of `FileA`. |
+| `cat FileA >> FileB` | The contents of `FileA` are appended at the end of `FileB`. |
+| `cat -n File` | Prints the file contents along with line numbers. |
+| `cat File \| more` | Pipes the cat output to the more command to deal with large files. Remember, it won't let you scroll up! |
+| `cat File \| less` | Pipes the cat output to the less command, which is similar to the above, but it allows you to scroll both ways. |
+| `cat File \| grep -v '^$'` | Prints the file with all the empty lines removed. |
+
+### 🏋️ It's time to exercise
+
+If you learned something new, trying it out in different ways is the best way to remember it.
+
+And for that purpose, here are some simple exercises you can do with the cat command. They will be super basic, as cat itself is [one of the most basic commands][15].
+
+For practice purposes, you can [use our text files from GitHub][16]. A possible way to approach the last two exercises is sketched right after the list.
+
+- How would you create an empty file using the cat command?
+- Redirect the output produced by the cat command to a new file `IF.txt`.
+- Can you redirect three or more file inputs to one file? If yes, then how?
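+
+If you want to compare notes, here is one possible way to handle the last two exercises. The file names other than `Haruki.txt` and `IF.txt` are placeholders:
+
+```
+# redirect the output of cat to a new file
+cat Haruki.txt > IF.txt
+
+# concatenate three files into a single new file
+cat FileA.txt FileB.txt FileC.txt > Combined.txt
+```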
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/cat-command/
+
+作者:[Sagar Sharma][a]
+选题:[lkxed][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/sagar/
+[b]: https://github.com/lkxed/
+[1]: https://linuxhandbook.com:443/merge-files/
+[2]: https://itsfoss.com/content/images/2023/06/use-cat-command-in-Linux.png
+[3]: https://itsfoss.com/create-files/
+[4]: https://itsfoss.com/content/images/2023/06/Cat.svg
+[5]: https://itsfoss.com/list-directory-content/
+[6]: https://itsfoss.com/content/images/2023/06/use-the-ls-command-to-list-the-contents-of-the-current-working-directory.png
+[7]: https://itsfoss.com/content/images/2023/06/check-the-file-contents-using-the-cat-command.png
+[8]: https://itsfoss.com/content/images/2023/06/redirect-the-file-content-using-the-cat-command.png
+[9]: https://itsfoss.com/content/images/2023/06/redirect-file-content-of-multiple-files-using-the-cat-command.png
+[10]: https://itsfoss.com/content/images/2023/06/redirect-file-content-without-overriding-using-the-cat-command.png
+[11]: https://itsfoss.com/content/images/2023/06/get-the-number-of-the-lines-in-the-cat-command.png
+[12]: https://itsfoss.com/content/images/2023/06/remove-blank-lines-with-the-cat-command.png
+[13]: https://itsfoss.com/content/images/2023/06/remove-all-the-blank-lines-in-text-files-using-the-cat-command-piped-with-grep-regular-expression.png
+[14]: https://itsfoss.com/content/images/2023/06/save-output-of-cat-command-by-redirection.png
+[15]: https://learnubuntu.com:443/top-ubuntu-commands/
+[16]: https://github.com:443/itsfoss/text-files

diff --git a/sources/tech/20230630.0 ⭐️ How to Install Wine in Ubuntu.md b/sources/tech/20230630.0 ⭐️ How to Install Wine in Ubuntu.md
new file mode 100644
index 0000000000..2ab72c08f1
--- /dev/null
+++ b/sources/tech/20230630.0 ⭐️ How to Install Wine in Ubuntu.md
@@ -0,0 +1,274 @@
+[#]: subject: "How to Install Wine in Ubuntu"
+[#]: via: "https://itsfoss.com/install-wine-ubuntu/"
+[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
+[#]: collector: "lkxed"
+[#]: translator: " "
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+
+How to Install Wine in Ubuntu
+======
+
+With some effort, you can [run Windows applications on Linux][1] using Wine. Wine is a tool you can try when you must use a Windows-only application on Linux.
+
+Please note that **you cannot run any and every Windows game or software with Wine**. Please go through the [database of supported applications][2]. Software rated platinum or gold has a higher chance of running smoothly with Wine.
+
+If you have found a Windows-only application that [Wine][3] supports well and are now looking to use it, this tutorial will help you install Wine on Ubuntu.
+
+> 💡 If you have installed Wine before, you should remove it completely to avoid any conflicts. Also, you should refer to its [download page][4] for additional instructions for specific Linux distributions.
+
+### Installing Wine on Ubuntu
+
+There are various ways to install Wine on your system. Almost all the Linux distros come with Wine in their package repository.
+
+Most of the time, the latest stable version of Wine is available via the package repository. You have two options here:
+ +- **Install WINE from Ubuntu’s repository (easy but may not be the latest version)** +- **Install WINE from Wine’s repository (slightly more complicated but gives the latest version)** + +Please be patient and follow the steps one by one to install and use Wine. There are several steps involved here. + +> 🚧 Keep in mind that Wine installs too many packages. You will see a massive list of packages and install sizes of around 1.3 GB. + +![Wine download and installed size][5] + +#### Method 1. Install WINE from Ubuntu (easy) + +Wine is available in Ubuntu's Official repositories, where you can easily install it. However, the version available this way may not be the latest. + +Even if you are using a 64-bit installation of Ubuntu, you will need to add 32-bit architecture support on your distro, which will benefit you in installing specific software. + +Type in the commands below: + +``` +sudo dpkg --add-architecture i386 +``` + +Then install Wine using: + +``` +sudo apt update +sudo apt install wine +``` + +#### Method 2: Install the latest version from Wine’s repository + +Wine is one such program that receives heavy developments in a short period. So, it is always recommended to install the latest stable version of Wine to get more software support. + +**First, remove any existing Wine installation**. + +**Step 1**: Make sure to add 32-bit architecture support: + +``` +sudo dpkg --add-architecture i386 +``` + +**Step 2**: Download and add the repository key: + +``` +sudo mkdir -pm755 /etc/apt/keyrings +sudo wget -O /etc/apt/keyrings/winehq-archive.key https://dl.winehq.org/wine-builds/winehq.key +``` + +**Step 3**: Now download the WineHQ sources file. + +> 🚧 This step depends on the Ubuntu or Mint version you are using. Please [check your Ubuntu version][6] or [Mint version][7]. Once you have that information, use the commands for your respective versions. + +For **Ubuntu 23.04 Lunar Lobster**, use the command below: + +``` +sudo wget -NP /etc/apt/sources.list.d/ https://dl.winehq.org/wine-builds/ubuntu/dists/lunar/winehq-lunar.sources +``` + +If you have **Ubuntu 22.04 or Linux Mint 21.X series**, use the command below: + +``` +sudo wget -NP /etc/apt/sources.list.d/ https://dl.winehq.org/wine-builds/ubuntu/dists/jammy/winehq-jammy.sources +``` + +If you are running **Ubuntu 20.04 or Linux Mint 20.X series**, use: + +``` +sudo wget -NP /etc/apt/sources.list.d/ https://dl.winehq.org/wine-builds/ubuntu/dists/focal/winehq-focal.sources +``` + +**Ubuntu 18.04 or Linux Mint 19.X series** users can use the command below to add the sources file: + +``` +sudo wget -NP /etc/apt/sources.list.d/ https://dl.winehq.org/wine-builds/ubuntu/dists/bionic/winehq-bionic.sources +``` + +Once done, update the package information and install the wine-stable package. + +``` +sudo apt install --install-recommends winehq-stable +``` + +If you want the development or staging version, use `winehq-devel` or `winehq-staging` respectively. + +### Initial Wine configuration + +Once Wine is installed, run the following: + +``` +winecfg +``` + +This will create the **virtual C: Drive** for installing Windows applications. + +![C: Drive created by winecfg in Home directory][8] + +While following these steps, sometimes, you may not find the “**Open With Wine Windows Program Loader**” option in Nautilus right-click menu. 
+ +In that case, fix it by [creating a soft link][9] to appropriate directory: + +``` +sudo ln -s /usr/share/doc/wine/examples/wine.desktop /usr/share/applications/ +``` + +And restart your system to get the change. + +### Using Wine to run Windows applications + +Once you have installed Wine and configured it by running `winecfg`, now is the time to install Windows apps. + +Here, the 7Zip.exe file is used for demonstration purposes. I know I should have used a better example, as 7Zip is available on Linux. Still, the process remains the same for other applications. + +Firstly, download the 7Zip .exe file from their [official downloads page][10]. + +Now, right-click on the file and select "Open With Wine Windows Program Loader" option: + +![Open 7zip exe file using Wine WIndows Program Loader in Nemo file manager][11] + +This will prompt us to install the file. Click **Install** and let it complete. Once done, you can open the 7zip like any other native app. + +![Open 7Zip from Ubuntu Activities Overview][12] + +You can use `wine uninstaller` command to uninstall any installed application. + +Here's a dedicated tutorial on [using Wine to run Windows software][1] on Linux: + +### Remove Wine from Ubuntu + +If you don't find Wine interesting or if Wine doesn't run the application you want properly, you may need to uninstall Wine. To do this, follow the below steps. + +**Remove Wine installed through the Ubuntu repository** + +To remove wine installed through repositories, first run: + +``` +sudo apt remove --purge wine +``` + +Update your package information: + +``` +sudo apt update +``` + +Now, use the `autoclean` command to clear the local repository of retrieved package files that are virtually useless. + +``` +sudo apt-get autoclean +sudo apt-get clean +``` + +Remove those packages that are installed but no longer required using: + +``` +sudo apt autoremove +``` + +Now reboot the system. + +**Remove Wine installed through the Wine repository** + +Remove the installed `wine-stable` package. + +``` +sudo apt remove --purge wine-stable +``` + +Update your package information: + +``` +sudo apt update +``` + +Now, use the `autoclean` and `clean` command to clear the local repository of retrieved package files that are virtually useless. + +``` +sudo apt-get autoclean +sudo apt-get clean +``` + +Now remove the sources file added earlier. Use your respective distribution folder. Here, Ubuntu 22.04 is used. + +``` +sudo rm /etc/apt/sources.list.d/winehq-jammy.sources +``` + +Once this is removed, update your system package information: + +``` +sudo apt update +``` + +Optionally, remove the key file you had added earlier if you want. + +``` +sudo rm /etc/apt/keyrings/winehq-archive.key +``` + +Now remove any residual files manually. + +### Still have questions about using Wine? + +You may also go through our tutorial on using Wine. It should answer some more questions you might have. + +There is no place better than the Wine Project website. They have a dedicated FAQ (frequently asked questions) page: + +[Wine FAQs][13] + +If you still have questions, you can browse through [their wiki][14] for detailed [documentation][15] or ask your doubts in [their forum][16]. + +Alternatively, if you don't mind spending some money, you can opt for CrossOver. It's basically Wine but with premium support. You can also contact their team for your questions. + +Purchase CrossOver Through the CodeWeavers Store Today!Buy CrossOver Mac and CrossOver Linux through the CodeWeavers store. 
Choose from 12 month and lifetime license plans. Renewals are also available for purchase.![][17]CodeWeavers![][18] + +In my opinion, you should resort to Wine only when you cannot find an alternative to the software you must use. Even in that case, it's not guaranteed to work with Wine. + +And yet, Wine provides some hope for Windows migrants to Linux. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-wine-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/use-windows-applications-linux/ +[2]: https://appdb.winehq.org:443/ +[3]: https://www.winehq.org:443/ +[4]: https://wiki.winehq.org:443/Download +[5]: https://itsfoss.com/content/images/2023/01/WINE-download-and-install-size.png +[6]: https://itsfoss.com/how-to-know-ubuntu-unity-version/ +[7]: https://itsfoss.com/check-linux-mint-version/ +[8]: https://itsfoss.com/content/images/2023/01/CDrive-in-nautilus.png +[9]: https://learnubuntu.com:443/ln-command/ +[10]: https://www.7-zip.org:443/download.html +[11]: https://itsfoss.com/content/images/2023/01/open-exe-file-with-wine.png +[12]: https://itsfoss.com/content/images/2023/01/7-zip-in-Ubuntu-activities-overview.webp +[13]: https://wiki.winehq.org:443/FAQ +[14]: https://wiki.winehq.org:443/Main_Page +[15]: https://www.winehq.org:443/documentation +[16]: https://forum.winehq.org:443/ +[17]: https://media.codeweavers.com/pub/crossover/website/images/cw_logo_128.png +[18]: https://www.codeweavers.com/images/og-images/og-default.png diff --git a/sources/tech/20230701.1 ⭐️ How to Remove Software Repositories from Ubuntu.md b/sources/tech/20230701.1 ⭐️ How to Remove Software Repositories from Ubuntu.md new file mode 100644 index 0000000000..a81e1a3169 --- /dev/null +++ b/sources/tech/20230701.1 ⭐️ How to Remove Software Repositories from Ubuntu.md @@ -0,0 +1,223 @@ +[#]: subject: "How to Remove Software Repositories from Ubuntu" +[#]: via: "https://itsfoss.com/remove-software-repositories-ubuntu/" +[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/" +[#]: collector: "lkxed" +[#]: translator: "geekpi" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +How to Remove Software Repositories from Ubuntu +====== + +You can [add external repositories in Ubuntu][1] to access packages unavailable in the official repositories. + +For example, if you [install Brave browser in Ubuntu][2], you add its repository to your system. If you add a PPA, that is added as an external repository too. + +When you don't need the specific software, you remove it. However, the external repository is still added. You can, and you should also remove it to keep your system pristine. + +Ubuntu lets you remove a software repository easily. There are different ways to do that: + +- **Using apt-add-repository command to remove the repository** +- **Using GUI to remove the repository (for desktop users)** +- **By modifying the file contents of the /etc/apt/sources.list file (for experts)** + +But before that, I highly advise [getting familiar with the concept of package managers][3] and repositories if you are new to this concept. + +### Method 1. Remove the repository using apt 🤖 + +Did you know you can also use the [apt command][4] to remove repositories? 
Well, technically, it's not part of the core apt command, but it works in a similar fashion.
+
+You can use the `add-apt-repository` or `apt-add-repository` commands (both represent the same command) while dealing with external repositories.
+
+First, list the added repositories using the following command:
+
+```
+apt-add-repository --list
+```
+
+![list enabled repositories in Ubuntu][5]
+
+Once done, you can use the apt-add-repository command with the `-r` flag as shown below to remove a repository:
+
+```
+sudo apt-add-repository -r repo_name
+```
+
+For example, if I want to remove the **yarn** repository, I would have to use the following command:
+
+```
+sudo add-apt-repository -r "deb https://dl.yarnpkg.com/debian/ stable main"
+```
+
+![Remove repository using the apt-add-repository command in Ubuntu][6]
+
+Press the **Enter** key for confirmation.
+
+Next, update the package information using the following:
+
+```
+sudo apt update
+```
+
+And now, if you list the enabled repositories, you won't find the removed repository there:
+
+```
+apt-add-repository --list
+```
+
+![confirm repository removal process by listing enabled repositories in Ubuntu][7]
+
+There you have it!
+
+### Method 2. Remove the software repository in Ubuntu using GUI 🖥️
+
+> 🚧 Removing a repository you know nothing about is not recommended, as it may restrict you from installing your favorite package in the future, so make sure you know what you are up to.
+
+Since Ubuntu is [one of the best distros for beginners,][8] you can use the GUI to remove a repository without needing the terminal.
+
+To do so, first open the Software & Updates app from the system menu:
+
+![search for software and updates from the system menu][9]
+
+Now, click on the `Other Software` tab, and it will list the PPAs and external repositories on your system.
+
+The ones shown as checked ✅ are the enabled ones.
+
+To remove a repository, you'd have to follow **three simple steps**:
+
+- **Select a repository that needs to be removed**
+- **Click on the remove button**
+- **And finally, hit the close button**
+
+![Disable repository from Ubuntu][10]
+
+Once you click on the close button, it will open a prompt asking you to update the package information since you made changes.
+
+Simply click on the `Reload` button:
+
+![Click on reload to after removing repository from Ubuntu and save changes][11]
+
+Alternatively, you can update the package information from the command line for the changes to take effect:
+
+```
+sudo apt update
+```
+
+### Method 3. Remove the repository by removing its sources file (for experts 🧑‍💻)
+
+Previously, I explained how you could use tools (GUI and CLI) to remove a repository; here, you will modify the system directory (**/etc/apt/sources.list.d**) responsible for managing repositories.
+
+So first, change your working directory to `sources.list.d` and list its contents:
+
+```
+cd /etc/apt/sources.list.d/ && ls
+```
+
+![list contents of sources.list.d directory][12]
+
+Here, you will find the files for all the added repositories.
+
+If you notice carefully, there will be two files for one repo: one with the `.list` extension and one with the `.save` extension.
+ +You will have to remove the one having the `.list` extension: + +``` +sudo rm Repo_name.list +``` + +For example, here, I removed the **node repo** using the command below: + +``` +sudo rm nodesource.list +``` + +![remove repository by removing the repository directory in Ubuntu][13] + +To take effect from the changes, update the repository index with: + +``` +sudo apt update +``` + +Want to know more about the [sources.list][14]? Read this article. + +### Additional Step: Remove GPG keys after removing the repository (for advanced users) + +If you wish to remove the GPG keys after removing the repository, here's how you do it. + +First, list the existing GPG keys using the following command: + +``` +apt-key list +``` + +Now, the output may seem confusing to some users. + +Here's what to remember: + +- The GPG key name will be placed above the dashed line (----) +- The public key is in the second line + +For example, here's the relevant data of the Chrome GPG key: + +![list GPG keys in Ubuntu][15] + +To remove the GPG key, you can use the last two strings of the public key (without any space). + +For example, here's how I will remove the GPG key of the Chrome browser using the last two strings of its public key (D38B 4796): + +``` +sudo apt-key del D38B4796 +``` + +![remove GPG key in Ubuntu][16] + +Similarly, you can also use the entire public key. But this time, you have to include spaces between two strings, as shown: + +``` +sudo apt-key del "72EC F46A 56B4 AD39 C907 BBB7 1646 B01B 86E5 0310" +``` + +### Careful with what you add and what you remove + +Especially when you are a new Linux user, you will encounter many exciting things and repositories you will add and remove. + +While it is good to experiment, you should always be careful about anything you add/remove to your system. You should keep some things in mind, like: _Does it include updated packages? Is it a trusted or maintained repository?_ + +Being cautious will keep your system free from unnecessary repositories and packages. 
+ +**I hope this guide helps you remove the repository you do not want!** + +_Feel free to let me know if you face any issues in the comments below, and consider joining our [It's FOSS Community forum][17] to get faster help!_ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/remove-software-repositories-ubuntu/ + +作者:[Sagar Sharma][a] +选题:[lkxed][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sagar/ +[b]: https://github.com/lkxed/ +[1]: https://itsfoss.com/adding-external-repositories-ubuntu/ +[2]: https://itsfoss.com/brave-web-browser/ +[3]: https://itsfoss.com/package-manager/ +[4]: https://itsfoss.com/apt-command-guide/ +[5]: https://itsfoss.com/content/images/2023/06/list-enabled-repositories-in-Ubuntu.png +[6]: https://itsfoss.com/content/images/2023/06/Remove-repository-using-the-apt-add-repository-command-in-Ubuntu.png +[7]: https://itsfoss.com/content/images/2023/06/confirm-repository-removal-process-by-listing-enabled-repositories-in-Ubuntu.png +[8]: https://itsfoss.com/best-linux-beginners/ +[9]: https://itsfoss.com/content/images/2023/06/search-for-software-and-updates-from-the-system-menu.png +[10]: https://itsfoss.com/content/images/2023/06/remove-the-repository-from-Ubuntu-using-GUI-1.png +[11]: https://itsfoss.com/content/images/2023/06/Click-on-reload-to-after-removing-repository-from-Ubuntu-and-save-changes.png +[12]: https://itsfoss.com/content/images/2023/06/list-contents-of-sources.list.d-directory.png +[13]: https://itsfoss.com/content/images/2023/06/remove-repository-by-removing-the-repository-directory-in-Ubuntu.png +[14]: https://itsfoss.com/sources-list-ubuntu/ +[15]: https://itsfoss.com/content/images/2023/06/list-GPG-keys-in-Ubuntu.png +[16]: https://itsfoss.com/content/images/2023/06/remove-GPG-key-in-Ubuntu.png +[17]: https://itsfoss.community:443/ diff --git a/translated/tech/20210906 Learn everything about computers with this Raspberry Pi kit.md b/translated/tech/20210906 Learn everything about computers with this Raspberry Pi kit.md deleted file mode 100644 index 819661f4dd..0000000000 --- a/translated/tech/20210906 Learn everything about computers with this Raspberry Pi kit.md +++ /dev/null @@ -1,148 +0,0 @@ -[#]: subject: "Learn everything about computers with this Raspberry Pi kit" -[#]: via: "https://opensource.com/article/21/9/raspberry-pi-crowpi2" -[#]: author: "Seth Kenlon https://opensource.com/users/seth" -[#]: collector: "lujun9972" -[#]: translator: "XiaotingHuang22" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " - -Learn everything about computers with this Raspberry Pi kit 用树莓派了解关于计算机的一切 -====== -CrowPi 是一个超棒的树莓派项目系统,安装在一个笔记本电脑般的外壳里。 -![老师还是学习者?][1] - -我喜欢历史,也喜欢计算机,因此相比于计算机如何变成个人配件,我更喜欢听它在成为日常家用电器前的有关电脑运算的故事。 [我经常听到的一个故事][2] 是关于很久以前(反正在计算机时代算久远了)的计算机是如何的基础却又让人感到很舒服。事实上,它们基础到对于一个好奇的用户来说,弄清楚如何编程是相对简单的事情。看看现代计算机,它具有面向对象的编程语言、复杂的 GUI 框架、网络 API、容器等,但愈发令人担忧的是,对于那些没有接受过任何专门培训的人,计算行业的工具正变得越来越难懂,无法为任何未经专门培训的人所用。 - -从 Raspberry Pi 在 2012 年发布之日起,它就一直被定位为一个教育平台。 一些第三方供应商通过附加组件和培训套件支持 Pi,以帮助所有年龄段的学习者探索编程、物理计算和开源。 然而,直到最近,很大程度上还是要由用户来弄清楚市场上的所有部件如何组合在一起,直到我最近买了 CrowPi。 - -![CrowPi ——不只是一个笔记本电脑][3] - -CrowPi 不是一个笔记本电脑。 -(Seth Kenlon, [CC BY-SA 4.0][4]) - -### 隆重介绍 CrowPi2 - -乌鸦是非常聪明的鸟。 他们识别并记住面孔,模仿听到的声音,解决复杂的谜题,甚至使用工具来完成任务。 CrowPi 使用乌鸦作为其徽标和同名词是恰当的,因为这个设备提供了无限探索、实验、教育还有最重要的,乐趣的机会。 - -设计本身很巧妙:它看起来像笔记本电脑,但远不止于此。 
当你从外壳中取出蓝牙键盘时,它会显示一个隐藏的电子设备工坊,配有 LCD 屏幕、16 个按钮、刻度盘、RFID 传感器、接近传感器、线路板、扬声器、GPIO 连接、LED 阵列等等。 _而且都是可编程的。_ - -顾名思义,该装置本身完全由 Raspberry Pi 供电,牢固地固定在外壳底部。 - -![crowpi pi板[5] - -CrowPi 的 Pi板 -(Seth Kenlon, [CC BY-SA 4.0][4]) - -默认情况下,你应该用电源适配器为设备充电,包装附带一个壁式适配器,你可以将其插入外壳,而不是直接为 Pi 供电。 您还可以使用插入外部微型 USB 端口的电池电源。 电脑外壳内甚至还有一个抽屉,方便你存放电池。 存放电池时有一根 USB 线从电池抽屉中弹出并插入机箱电源端口,因此你不会产生这是一台“普通”笔记本电脑的错觉。 然而,这样一台设备能够有如此美观的设计已经很理想了! - -### 首次启动系统 - -CrowPi2 提供一张安装了 Raspbian 系统,卡上贴有 **System** 的标签,不过它同时还提供一张装载了 [RetroPie][6] 的 microSD 卡。 作为一个负责任的成年人(咳咳),我自然是先启动了 RetroPie。 - -RetroPie 总是很有趣,CrowPi2 附带两个 SNES 风格的游戏控制器,确保你能获得最佳的复古游戏体验。 - -令人赞叹不已的是,实际启动系统的过程同样有趣,甚至可以说更有趣。 它的登录管理器是一个自定义项目中心,快速链接到一些编程体验项目、Python 和 Arduino IDE、Scratch、 Python 体验游戏、Minecraft 等。 你也可以选择退出项目中心,只使用桌面。 - -![CrowPi 中心][7] - -The CrowPi 中心. -(Seth Kenlon, [CC BY-SA 4.0][4]) -对于习惯使用 Raspberry Pi 或 Linux 的人来说,CrowPi 桌面很熟悉,不过它也足够基础,所以很容易上手。 左上角有应用程序菜单,桌面上有快捷图标,右上角有网络选择和音量控制的系统托盘等等。 - -![CrowPi 桌面][8] - -CrowPi 桌面. -(Seth Kenlon, [CC BY-SA 4.0][4]) - -CrowPi 上提供了很多东西选择,所以你可能很难决定从哪里开始。 对我来说,主要分为四大类:编程、物理电子学、Linux 和游戏。 - -盒子里有一本使用说明,所以你才知道你需要怎样进行连接(例如,键盘是电池供电的,所以它有时确实需要充电,它和鼠标总是需要一个 USB 适配器)。 虽然说明书很快就能读完,但这一例子也充分体现了 CrowPi 团队是如何认真对待说明书的。 - -![CrowPi 文档][9] - -CrowPi 文档. -(Seth Kenlon, [CC BY-SA 4.0][4] - - -### 编程 - -如果你热衷于学习如何编码,在 CrowPi 上有很多途径助你成功。你应该从中选择你觉得最满意的路径。 - -#### 1\. Scratch - -[Scratch][10] 是一个简单的视觉编码应用程序,可让你像拼 [Lego pieces 乐高拼块][11] 一样将代码块组合在一起,制作出游戏和互动故事。 这是开启编程之旅最简单的方法,我曾见过年仅 8 岁的孩子会花数小时思考自己设计的游戏的最佳算法。 当然,它不仅适合孩子们!成年人也可以从中获得很多乐趣。 不知道从哪里开始? 包装盒中有一本 99 页的小册子(打印在纸张上),其中包含 Scratch 课程和项目供你尝试。 - -#### 2\. Java 和 Minecraft - -Minecraft 不是开源的(虽然有 [几个开源项目][12] 复刻了它),但它有足够的可用资源,因此也经常被用来教授编程。 Minecraft 是用 Java 编写的,CrowPi 同时装载有 [Minecraft Pi Edition][13] 和 [BlueJ Java IDE][14] ,如此可使学习 Java 变得比以往更容易、更有趣。 - -#### 3\. 
Python 和 PyGame - -CrowPi 上有几个非常有趣的游戏,它们是用 Python 和 [PyGame game engine ( PyGame 游戏引擎)][15] 编写的。 你可以玩游戏,然后查看源代码以了解游戏的运行方式。 CrowPi 中包含 Geany、Thonny 和 [Mu][16] 编辑器,因此您可以立即开始使用 Python 进行编程。 与 Scratch 一样,包装盒中有一本包含课程的小册子,因此你可以学习 Python 基础知识。 - -### 电子器件 - -隐藏在键盘下的物理电子工坊本质上是一系列 Pi Hats(附着在上的硬件)。 为了让你可以认识所有的组件,CrowPi 绘制了一张中英双语的折叠图进行详细的说明。 除此之外还有很多示例项目可以帮助你入门。 以下是一张小清单: - - - * **你好** 当你与 CrowPi 说话时,LCD 屏幕上打印输出“你好”。 - * **入侵警报**使用接近传感器发出警报。 - * **远程控制器** 让你能够使用远程控制(是的,这个也包含在盒子里)来触发 CrowPi 上的事件。 - * **RGB 俄罗斯方块** 让你可以在 LED 显示屏上玩俄罗斯方块游戏。 - * **语音识别**演示自然语言处理。 - * **超声波音乐** 利用距离传感器和扬声器创建简易版的特雷蒙琴(世上唯一不需要身体接触的电子乐器)。 - - - -这些项目仅仅是入门级别而已,因为你还可以在现有的基础上搭建更多东西。 当然,还有更多内容值得探索。 包装盒里还有网络跳线、电阻器、LED 和各种组件,这样你闲暇时也可以了解 Pi 的 GPIO (通用输入输出端口)功能的所有信息。 - -不过我也发现了一个问题:示例项目的位置有点难找。 找到演示很容易(它们就在 CrowPi 中心上),但源代码的位置并不是很容易被找到。 我后来发现大多数示例项目都在 `/usr/share/code` 中,你可以通过文件管理器或终端进行访问。 - -![CrowPi 外围设备][17] - -CrowPi 外围设备 -(Seth Kenlon, [CC BY-SA 4.0][4]) - -### Linux - -Raspberry Pi 上运行 Linux 系统。 如果你一直想更深入了解 Linux,那么 CrowPi 同样会是一个很好的平台。 你可以探索 Linux 桌面、终端以及几乎所有 Linux 或开源应用程序。 如果你多年来一直在阅读有关开源的文章,并准备深入研究开源操作系统,那么 CrowPi 会是你想要的平台(当然还有很多其他平台也可以)。 - -### 游戏 - -包装盒中包含的 **RetroPie** SD 卡意味着你可以重新启动,切换为复古游戏机并任意玩各种老式街机游戏。 它跟 Steam Deck 并不完全相同,但也是一个有趣且令人振奋的小游戏平台。 因为它配备的不是一个而是两个游戏控制器,所以它非常适合多人合作的沙发游戏。 最重要的是,你不仅可以在 CrowPi 上玩游戏,还可以制作自己的游戏。 - -### 配备螺丝刀 - -自我坐下开始使用 CrowPi2 以来已经大约两周,但我还没有通关所有项目。 有很多个晚上,我不得不强迫自己停下摆弄它,因为即使我厌倦了一个项目,我也会不可避免地发现还有其他东西可以探索。 文章的最后做个总结,我在盒子里找到了一个特别的组件,这个组件让我马上知道 CrowPi 和我就是天造地设:它是一把不起眼的小螺丝刀。 盒子上的保修标签不存在作废。 CrowPi 希望你去修补、拆解、探索和学习。 它不是笔记本电脑,甚至也不是 Pi; 而是一个便携、低功耗、多样化和开源的学习者工具包。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/21/9/raspberry-pi-crowpi2 - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[XiaotingHuang22](https://github.com/XiaotingHuang22) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?) 
-[2]: https://opensource.com/article/21/8/my-first-programming-language -[3]: https://opensource.com/sites/default/files/crowpi-not-laptop.jpeg (CrowPi more than a laptop) -[4]: https://creativecommons.org/licenses/by-sa/4.0/ -[5]: https://opensource.com/sites/default/files/crowpi-pi.jpeg (crowpi pi board) -[6]: https://opensource.com/article/19/1/retropie -[7]: https://opensource.com/sites/default/files/crowpi-hub.png (CrowPi hub) -[8]: https://opensource.com/sites/default/files/crowpi-desktop.png (CrowPi desktop) -[9]: https://opensource.com/sites/default/files/crowpi-docs.jpeg (CrowPi docs) -[10]: https://opensource.com/article/20/9/scratch -[11]: https://opensource.com/article/20/6/open-source-virtual-lego -[12]: https://opensource.com/alternatives/minecraft -[13]: https://www.minecraft.net/en-us/edition/pi -[14]: https://opensource.com/article/20/7/ide-java#bluej -[15]: https://opensource.com/downloads/python-gaming-ebook -[16]: https://opensource.com/article/18/8/getting-started-mu-python-editor-beginners -[17]: https://opensource.com/sites/default/files/crowpi-peripherals.jpeg (CrowPi peripherals) diff --git a/translated/tech/20230626.1 ⭐️⭐️ Nobara 38 Released, Offering Enhanced Gaming and Content Creation Experience.md b/translated/tech/20230626.1 ⭐️⭐️ Nobara 38 Released, Offering Enhanced Gaming and Content Creation Experience.md new file mode 100644 index 0000000000..a28f3b74cc --- /dev/null +++ b/translated/tech/20230626.1 ⭐️⭐️ Nobara 38 Released, Offering Enhanced Gaming and Content Creation Experience.md @@ -0,0 +1,70 @@ +[#]: subject: "Nobara 38 Released, Offering Enhanced Gaming and Content Creation Experience" +[#]: via: "https://debugpointnews.com/nobara-38/" +[#]: author: "arindam https://debugpointnews.com/author/dpicubegmail-com/" +[#]: collector: "lkxed" +[#]: translator: "geekpi" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " + +Nobara 38 发布,提供增强的游戏和内容创作体验 +====== + +**探索 Nobara 38 版本的新功能,该版本随 Fedora 38 一起发布。** + +基于 [Fedora 38][1] 的预期版本 Nobara 38 终于发布了,它带来了一系列用户友好的修复和功能增强。Nobara Project 是 Fedora Linux 的修改版本,旨在解决用户面临的常见问题,并提供开箱即用的无缝游戏、流媒体和内容创建体验。凭借一系列附加软件包和自定义功能,Nobara 38 将 Fedora 提升到了新的高度。 + +Nobara 38 中值得注意的改进之一是为 Davinci Resolve 实施了一个临时方案。当 Davinci Resolve 的安装程序从终端运行时,会提示用户执行一个向导,执行必要的操作以确保顺利运行。这包括将 DR 中的 glib2 库移动到一个备份文件夹中,详见官方 Davinci Resolve 论坛。 + +![Nobara 38 桌面][2] + +在游戏领域,Nobara 38 引入了原生 Linux 版本 Payday 2 的临时方案。鉴于 OpenGL 实现目前已损坏且对原生版本的官方支持已被放弃,该版本利用 zink 驱动程序来确保无缝游戏。据 GamingOnLinux 报道,这确保了 Linux 用户可以继续享受 Payday 2,不会出现任何问题。 + +此外,Nobara 38 合并了 udev 规则来增强控制器支持。具体来说,添加了一条规则,强制对 045e:028e “Xbox 360” 控制器设备使用内核 xpad 驱动程序。这使得 GPD Win Max 2 和 GPD Win 4 等在报告为该设备的控制时,能够保持控制器兼容性,同时仍然允许安装可选的 xone/xpadneo 驱动程序以实现无线适配器,并改进对 Xbox One 控制器的蓝牙支持。 + +为了优化不同设备类型的性能,Nobara 38 引入了特定的 I/O 调度程序。NVMe 设备将使用 “none”,SSD 将使用 “mq-deadline”,HDD/机械驱动器将使用 “bfq”。这些规则可确保每种设备类型以其最佳效率运行。 + +在桌面环境方面,Nobara 38 默认提供自定义主题的 GNOME 桌面,但用户也可以选择标准 GNOME 和 KDE 版本。这种灵活性允许用户根据自己的喜好个性化他们的计算体验。 + +Nobara 38 还对几个关键组件进行了更新。值得注意的是,GNOME 44 融入了基于 Mutter 44 更新和重新构建的可变刷新率 (VRR) 补丁,提供了更流畅的视觉体验。GStreamer 1.22 已修补了 AV1 支持,可实现更好的多媒体播放,并且 OBS-VAAPI 和 AMF AV1 支持也已启用,尽管后者需要 AMD 的专业编码器。 + +Mesa 图形库基于 LLVM 15 构建,而不是 Fedora 当前的 LLVM 16,以避免与 Team Fortress 2 的兼容性问题。此外,Mesa 还进行了修补以启用 VAAPI AV1 编码支持,从而增强了视频编码功能。 + +Nobara Package Manager(现在称为 yumex)在 Nobara 38 中得到了重大改进。它现在拥有管理 Flatpaks 的能力,为用户提供了更全面的软件管理解决方案。此外,软件包更新也得到了简化,引入了“更新系统”按钮,方便系统更新。 + +Blender 是流行的开源 3D 创建套件,已在 Nobara 38 中进行了修补,以使用 WAYLAND_DISPLAY="" 环境变量启动。这解决了菜单在 Wayland 下渲染不正确并妨碍可用性的问题,确保用户获得流畅的 Blender 体验。 + +Nautilus 是 GNOME 
中的默认文件管理器,现在允许用户直接从右键菜单中以管理员身份执行文件和打开文件夹。此功能提供了一种执行管理任务的便捷方法,无需执行其他步骤。 + +在 Flatpak 方面,Nobara 38 删除了 Fedora Flatpak 仓库,因为它们基本上未使用。建议升级到 Nobara 38 的用户删除从 Fedora 仓库安装的所有 Flatpaks,然后从 Flathub 仓库重新安装它们。Flathub flatpak 仓库现在是 Nobara 38 中的默认仓库,为系统和用户安装提供了更广泛的应用选择。此外,还实施了一种临时方案来解决最近与用户更新相关的崩溃问题。 + +为了增强整体用户体验,Nobara 欢迎应用现在将 Davinci Resolve 向导作为可选步骤提供。这简化了 Davinci Resolve 的设置过程,并确保用户可以轻松地将软件集成到他们的工作流程中。 + +除了这些显著的变化之外,Nobara 38 还包括 Fedora 38 中存在的所有其他软件包更新和改进。 + +要下载 Nobara 38 并探索其新功能,请访问 Nobara Project 官方网站。 + +[下载 Nobara 38 – KDE][3] + +[下载 Nobara 38 – GNOME][4] + +_来自[通告][5]_ + +-------------------------------------------------------------------------------- + +via: https://debugpointnews.com/nobara-38/ + +作者:[arindam][a] +选题:[lkxed][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://debugpointnews.com/author/dpicubegmail-com/ +[b]: https://github.com/lkxed/ +[1]: https://debugpointnews.com/fedora-38-release/ +[2]: https://debugpointnews.com/wp-content/uploads/2023/06/Nobara-38-Desktop.jpg +[3]: https://nobara-images.nobaraproject.org/Nobara-38-KDE-2023-06-25.iso +[4]: https://nobara-images.nobaraproject.org/Nobara-38-GNOME-2023-06-25.iso +[5]: https://nobaraproject.org/2023/06/25/june-25-2023/ \ No newline at end of file