Merge pull request #15 from LCTT/master

update from lctt
This commit is contained in:
beamrolling 2022-05-26 13:37:50 +08:00 committed by GitHub
commit ac71506b69
38 changed files with 3905 additions and 1081 deletions


@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: (CoWave-Fall)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14632-1.html)
[#]: subject: (31 open source text editors you need to try)
[#]: via: (https://opensource.com/article/21/2/open-source-text-editors)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
值得尝试的 31 个开源文本编辑器
======
> 正在寻找新的文本编辑器?这里有 31 个编辑器可供尝试。
![](https://img.linux.net.cn/data/attachment/album/202205/24/184603krbzynnnikz8b0nc.jpg)
计算机是基于文本的,因此你使用它们做的事情越多,你可能就越需要文本编辑应用程序。你在文本编辑器上花费的时间越多,你就越有可能对你使用的编辑器提出更多的要求。
如果你正在寻找一个好的文本编辑器,你会发现 Linux 可以提供很多。无论你是想在终端、桌面还是在云端工作,你都可以试一试。你可以每天试一款编辑器,连着试一个月(或者每月试一款,能试上将近三年)。坚持不懈,你终将找到适合你的完美编辑器。
### Vim 类编辑器
![][2]
* [Vi][3] 通常随着 Linux 各发行版、BSD、Solaris 和 macOS 一起安装。它是典型的 Unix 文本编辑器,具有编辑模式和超高效的单键快捷键的独特组合。最初的 Vi 编辑器由 Bill Joy 编写(他也是 C shell 的作者。Vi 的现代版本尤其是 Vim增加了许多特性包括多级撤消、在插入模式下更好的导航、行折叠、语法高亮、插件支持等等。但它需要学习如何使用它甚至有自己的教程程序`vimtutor`)。
* [Kakoune][4] 是一个受 Vim 启发的应用程序,它具有熟悉的简约界面、短键盘快捷键以及独立的编辑和插入模式。乍一看,它的外观和感觉很像 Vi但它在设计和功能上有自己独特的风格。 它有一个小彩蛋:具有 Clippy 界面的实现。
### emacs 编辑器
![][5]
* 从最初的免费 emacs 开始,发展成为发起了自由软件运动的 GNU 项目的第一批官方应用程序之一,[GNU Emacs][6] 是一个广受欢迎的文本编辑器。它非常适合系统管理员、开发人员和日常用户使用,具有大量功能和近乎无穷无尽的扩展。一旦你开始使用 emacs你可能会发现很难想出一个理由来关闭它因为它能做的事情非常多
* 如果你喜欢 emacs但觉得 GNU Emacs 过于臃肿,那么你可以试试 [Jove][7]。Jove 是一个基于终端的 emacs 编辑器。它很容易使用;即使你是 emacs 编辑器家族的新手Jove 也很容易学习,这要归功于 `teachjove` 命令。
* 另一个轻量级的 emacs 编辑器是 [Jed][8]。它的工作流程基于宏。它与其他编辑器的不同之处在于它使用了 [S-Lang][9],这是一种类似 C 的脚本语言,它为使用 C 而不是使用 Lisp 的开发人员提供了扩展的机会。
### 交互式编辑器
![][10]
* [GNU nano][11] 对基于终端的文本编辑采取了大胆的立场:它提供了一个菜单。是的,这个不起眼的编辑器从 GUI 编辑器那里得到了提示,它告诉用户他们需要按哪个键来执行特定的功能。这是一种令人耳目一新的用户体验,所以难怪 nano 被设置为“用户友好”发行版的默认编辑器,而不是 Vi。
* [JOE][12] 基于一个名为 WordStar 的旧文本编辑应用程序。如果你不熟悉 WordStarJOE 也可以模仿 Emacs 或 GNU nano。默认情况下它是介于 Emacs 或 Vi 等相对神秘的编辑器和 GNU nano 永远显示的冗长信息之间的一个很好的折衷方案(例如,它告诉你如何激活屏幕帮助显示,但默认情况下不启用)。
* [e3][13] 是一个优秀的小型文本编辑器,具有五个内置的键盘快捷键方案,用来模拟 Emacs、Vi、nano、NEdit 和 WordStar。换句话说无论你习惯使用哪种基于终端的编辑器你都可能对 e3 感到宾至如归。
### ed 及像 ed 一样的编辑器
* [POSIX][15] 和 Open Group 定义了基于 Unix 的操作系统的标准,[ed][14] 行编辑器是它的一部分。它安装在你遇到的几乎所有 Linux 或 Unix 系统上。它小巧、简洁、一流。
* 基于 ed 的 [Sed][16] 流编辑器因其功能和语法而广受欢迎。大多数 Linux 用户在搜索如何最简单、最快捷地更新配置文件中某行内容的方法时,至少会遇到一个 `sed` 命令但它值得仔细研究一下。Sed 是一个强大的命令,包含许多有用的子命令。更好地了解它之后,你可能会发现自己打开文本编辑器应用程序的频率要低得多。
* 你并不总是需要文本编辑器来编辑文本。[heredoc][17](或 Here Doc系统可在任何 POSIX 终端中使用,允许你直接在打开的终端中输入文本,然后将输入的内容通过管道传输到文本文件中。这不是最强大的编辑体验,但它用途广泛且始终可用。
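下面用一个简短的示例演示这两种无需编辑器的编辑方式(其中的文件名均为假设):先用 `sed` 就地修改配置文件中的一行,再用 heredoc 在终端中直接写入多行文本。

```
# 用 sed 就地替换配置文件中的一行(无需打开编辑器)
echo "port=8080" > app.conf
sed -i 's/port=8080/port=9090/' app.conf
cat app.conf    # 输出port=9090

# 用 heredoc 直接在终端中输入多行文本并写入文件
cat > notes.txt << 'EOF'
第一行
第二行
EOF
cat notes.txt
```

注意:这里的 `-i` 表示就地编辑GNU sed 支持;在 macOS 的 BSD sed 上需要写成 `sed -i ''`。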
### 极简风格的编辑器
![][18]
如果你认为一个好的文本编辑器,就是一个去掉了各种花哨处理功能的文字处理器,那么你可能正在寻找这些经典编辑器。这些编辑器可让你以最少的干扰和最少的辅助写作和编辑文本。它们提供的功能通常以标记文本、Markdown 或代码为中心。有些编辑器的名称遵循某种模式:
* [Gedit][19] 来自 GNOME 团队;
* [medit][20] 有经典的 GNOME 手感;
* [Xedit][21] 仅使用最基本的 X11 库;
* [jEdit][22] 适用于 Java 爱好者。
KDE 用户也有类似的选择:
* [Kate][23] 是一款低调的编辑器,拥有你需要的几乎所有功能;
* [KWrite][24] 在看似简单易用的界面中隐藏了大量有用的功能。
还有一些适用于其他平台:
* [Pe][26] 适用于 Haiku OS90 年代那个古怪的孩子 BeOS 的转世);
* [FeatherPad][27] 是适用于 Linux 的基本编辑器,但对 macOS 和 Haiku 有一些支持。如果你是一名希望移植代码的 Qt 黑客,请务必看一看!
### 集成开发环境IDE
![][28]
文本编辑器和集成开发环境IDE有很多相同之处。后者实际上只是前者加上许多为特定代码而添加的功能。如果你经常使用 IDE你可能会在扩展管理器中发现一个 XML 或 Markdown 编辑器:
* [NetBeans][29] 是一个方便 Java 用户的文本编辑器。
* [Eclipse][30] 提供了一个强大的编辑套件,其中包含许多扩展,可为你提供所需的工具。
### 云端编辑器
![][31]
在云端工作?当然,你也可以在那里进行编辑。
* [Etherpad][32] 是在网上运行的文本编辑器应用程序。有独立免费的实例供你使用,或者你也可以设置自己的实例。
* [Nextcloud][33] 拥有蓬勃发展的应用场景,包括内置文本编辑器和具有实时预览功能的第三方 Markdown 编辑器。
### 较新的编辑器
![][34]
每个人都会有让文本编辑器变得更完美的想法。因此,几乎每年都会发布新的编辑器。有些以一种新的、令人兴奋的方式重新实现经典的旧想法,有些对用户体验有独特的看法,还有些则专注于特定的需求。
* [Atom][35] 是来自 GitHub 的多功能的现代文本编辑器,具有许多扩展和 Git 集成。
* [Brackets][36] 是 Adobe 为 Web 开发人员提供的编辑器。
* [Focuswriter][37] 旨在通过无干扰的全屏模式、可选的打字机音效和精美的配置选项等有用功能帮助你专注于写作。
* [Howl][38] 是一个基于 Lua 和 Moonscript 的渐进式动态编辑器。
* [Norka][39] 和 [KJots][40] 模仿笔记本,每个文档代表“活页夹”中的“页面”。你可以通过导出功能从笔记本中取出单个页面。
### 自己制作编辑器
![][41]
俗话说得好:既然可以编写自己的应用程序,为什么要使用别人的(虽然其实没有这句俗语)?虽然 Linux 有超过 30 个常用的文本编辑器,但开源的一部分乐趣就在于能够亲手进行实验。
如果你正在寻找学习编程的理由,那么制作自己的文本编辑器是一个很好的入门方法。你可以在大约 100 行代码中实现基础功能,并且你使用它的次数越多,你可能就越会受到启发,进而去学习更多知识,从而进行改进。准备好开始了吗?来吧,去 [创建你自己的文本编辑器][42]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/open-source-text-editors
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[CoWave-Fall](https://github.com/CoWave-Fall)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/sites/default/files/kakoune-screenshot.png
[3]: https://opensource.com/article/20/12/vi-text-editor
[4]: https://opensource.com/article/20/12/kakoune
[5]: https://opensource.com/sites/default/files/jed.png
[6]: https://opensource.com/article/20/12/emacs
[7]: https://opensource.com/article/20/12/jove-emacs
[8]: https://opensource.com/article/20/12/jed
[9]: https://www.jedsoft.org/slang
[10]: https://opensource.com/sites/default/files/uploads/nano-31_days-nano-opensource.png
[11]: https://opensource.com/article/20/12/gnu-nano
[12]: https://opensource.com/article/20/12/31-days-text-editors-joe
[13]: https://opensource.com/article/20/12/e3-linux
[14]: https://opensource.com/article/20/12/gnu-ed
[15]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[16]: https://opensource.com/article/20/12/sed
[17]: https://opensource.com/article/20/12/heredoc
[18]: https://opensource.com/sites/default/files/uploads/gedit-31_days_gedit-opensource.jpg
[19]: https://opensource.com/article/20/12/gedit
[20]: https://opensource.com/article/20/12/medit
[21]: https://opensource.com/article/20/12/xedit
[22]: https://opensource.com/article/20/12/jedit
[23]: https://opensource.com/article/20/12/kate-text-editor
[24]: https://opensource.com/article/20/12/kwrite-kde-plasma
[25]: https://opensource.com/article/20/12/notepad-text-editor
[26]: https://opensource.com/article/20/12/31-days-text-editors-pe
[27]: https://opensource.com/article/20/12/featherpad
[28]: https://opensource.com/sites/default/files/uploads/eclipse-31_days-eclipse-opensource.png
[29]: https://opensource.com/article/20/12/netbeans
[30]: https://opensource.com/article/20/12/eclipse
[31]: https://opensource.com/sites/default/files/uploads/etherpad_0.jpg
[32]: https://opensource.com/article/20/12/etherpad
[33]: https://opensource.com/article/20/12/31-days-text-editors-nextcloud-markdown-editor
[34]: https://opensource.com/sites/default/files/uploads/atom-31_days-atom-opensource.png
[35]: https://opensource.com/article/20/12/atom
[36]: https://opensource.com/article/20/12/brackets
[37]: https://opensource.com/article/20/12/focuswriter
[38]: https://opensource.com/article/20/12/howl
[39]: https://opensource.com/article/20/12/norka
[40]: https://opensource.com/article/20/12/kjots
[41]: https://opensource.com/sites/default/files/uploads/this-time-its-personal-31_days_yourself-opensource.png
[42]: https://opensource.com/article/20/12/31-days-text-editors-one-you-write-yourself


@ -3,19 +3,22 @@
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14627-1.html"
Ubuntu 22.04 LTS 中安装经典 GNOME Flashback 指南
======
关于如何在最新的 UBUNTU 22.04 LTS 中安装旧的经典 GNOME Flashback 的快速指南。
![](https://img.linux.net.cn/data/attachment/album/202205/23/151318xi8c3qipphg8xz0i.jpg)
> 关于如何在最新的 UBUNTU 22.04 LTS 中安装旧的经典 GNOME Flashback 的快速指南。
[GNOME Flashback][1](又名 classic GNOME是旧 GNOME 3 shell 的一个分支,它使用早期 GNOME 2 技术的布局和原则。它的速度快如闪电,并且在设计上非常轻量级。因此,它非常适合几十年前的老旧硬件。
随着带有现代 GNOME 42 的 [Ubuntu 22.04 LTS][2] 的发布,有必要寻找轻量级的桌面环境选项。
此外GNOME Flashback 很容易安装在现代 Ubuntu Linux 中,你仍然可以享受 Ubuntu 性能而不必担心 GNOME 42、GTK4、libadwaita 和其他东西。
此外GNOME Flashback 很容易安装在现代 Ubuntu Linux 中,你仍然可以享受 Ubuntu 性能而不必关心 GNOME 42、GTK4、libadwaita 之类的东西。
### 在 Ubuntu 22.04 LTS 中下载并安装经典 GNOME Flashback
@ -24,50 +27,43 @@ Ubuntu 22.04 LTS 中安装经典 GNOME Flashback 指南
在 Ubuntu 22.04 LTS 中打开终端CTRL+ALT+T并运行以下命令。安装大小约为 61MB。
```
sudo apt update
```
```
sudo apt install gnome-session-flashback
sudo apt update
sudo apt install gnome-session-flashback
```
![Install GNOME Classic Flashback Metacity in Ubuntu 22.04 LTS][3]
最后,安装完成后,退出。重新登录时,在登录选项中使用 GNOME Classic
最后,安装完成后,退出。重新登录时,在登录选项中使用经典的 GNOME FlashbackMetacity
![Choose GNOME Classic while logging in][3]
![Choose GNOME Classic while logging in][3a]
### 经典 GNOME Flashback 的特点
首先,当你登录时,你将体验到经典的 GNOME 技术,该技术已被证明具有良好的生产力并且比今天的技术快得多。
首先,当你登录时,你将体验到传统的 GNOME 技术,它已被证明具有良好的生产力,并且比今天的技术快得多。
在顶部有旧版面板,左侧是应用菜单,而系统托盘位于桌面的右上方。应用程序菜单显示所有已安装的应用和软件快捷方式,你可以在工作流程中轻松浏览。
在顶部有旧版面板,左侧是应用菜单,而系统托盘位于桌面的右上方。应用程序菜单显示所有已安装的应用和软件快捷方式,你可以在工作流程中轻松浏览。
此外,在右侧部分,系统托盘具有默认小部件,例如网络、音量控制、日期和时间以及关机菜单。
![Classic GNOME Flashback Metacity in Ubuntu 22.04 LTS][3]
![Classic GNOME Flashback Metacity in Ubuntu 22.04 LTS][3b]
底部面板包含打开的窗口和工作区切换器的应用列表。默认情况下,它为你提供四个工作区供你使用。
此外,你可以随时更改顶部面板的设置以自动隐藏、调整面板大小和背景颜色。
除此之外,你可以通过 ALT+Rigth 单击顶部面板添加任意数量的旧版小程序。
除此之外,你可以通过 `ALT + 右键点击` 顶部面板添加任意数量的旧版小程序。
![Panel Context Menu][3]
![Panel Context Menu][3c]
![Add to panel widgets][3]
![Add to panel widgets][3d]
### 经典 GNOME 的性能
首先,磁盘空间占用极小,仅安装 61 MB。我的测试使用了大约 28% 的内存,其中大部分被其他进程占用。猜猜是谁?是的,是 snap-store 又名 Ubuntu 软件
首先,磁盘空间占用极小,仅安装 61 MB。我的测试使用了大约 28% 的内存,其中大部分被其他进程占用。猜猜是谁?是的,是 snap-store(又名 Ubuntu 软件)
因此,总体而言,它非常轻巧,内存(仅 28 MB和 CPU0.1%)占用空间非常小。
![Performance of GNOME Classic in Ubuntu 22.04][3]
![Performance of GNOME Classic in Ubuntu 22.04][3e]
此外,假设你将其与同样使用相同技术的 Ubuntu MATE 进行比较。在这种情况下,它比 MATE 更轻量,因为你不需要任何额外的 MATE 应用及其用于通知、主题和其他附加资源的软件包。
@ -75,11 +71,6 @@ Ubuntu 22.04 LTS 中安装经典 GNOME Flashback 指南
我希望本指南在你决定在 Ubuntu 22.04 LTS Jammy Jellyfish 中安装经典 GNOME 之前帮助你获得必要的信息。
* * *
我们带来最新的技术、软件新闻和重要的东西。通过 [Telegram][4]、[Twitter][5]、[YouTube][6] 和 [Facebook][7] 保持联系,不错过任何更新!
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/05/gnome-classic-ubuntu-22-04/
@ -87,7 +78,7 @@ via: https://www.debugpoint.com/2022/05/gnome-classic-ubuntu-22-04/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -95,7 +86,12 @@ via: https://www.debugpoint.com/2022/05/gnome-classic-ubuntu-22-04/
[b]: https://github.com/lujun9972
[1]: https://wiki.archlinux.org/index.php/GNOME/Flashback
[2]: https://www.debugpoint.com/2022/01/ubuntu-22-04-lts/
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.debugpoint.com/wp-content/uploads/2022/05/Install-GNOME-Classic-Flashback-Metacity-in-Ubuntu-22.04-LTS.jpg
[3a]: https://www.debugpoint.com/wp-content/uploads/2022/05/Choose-GNOME-Classic-while-loggin-in.jpg
[3b]: https://www.debugpoint.com/wp-content/uploads/2022/05/Classic-GNOME-Flashback-Metacity-in-Ubuntu-22.04-LTS.jpg
[3c]: https://www.debugpoint.com/wp-content/uploads/2020/04/Panel-Context-Menu.png
[3d]: https://www.debugpoint.com/wp-content/uploads/2020/04/Add-to-panel-widgets.png
[3e]: https://www.debugpoint.com/wp-content/uploads/2022/05/Performance-of-GNOME-Classic-in-Ubuntu-22.04.jpg
[4]: https://t.me/debugpoint
[5]: https://twitter.com/DebugPoint
[6]: https://www.youtube.com/c/debugpoint?sub_confirmation=1


@ -3,138 +3,141 @@
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14637-1.html"
图解 Fedora 36 Workstation 安装步骤
图解 Fedora 36 工作站安装步骤
======
针对 fedora 用户的好消息Fedora 36 操作系统已经正式发布了。这个发布版本是针对工作站 (桌面) 和服务器的。下面是 Fedora 36 workstation 的新的特征和改进:
![](https://img.linux.net.cn/data/attachment/album/202205/26/085318lbeqqwwevbzzwb4o.jpg)
给 Fedora 用户的好消息Fedora 36 操作系统已经正式发布了。这个发布版本是针对工作站(桌面)和服务器的。下面是 Fedora 36 工作站版的新的特征和改进:
* GNOME 42 是默认的桌面环境
* 移除用于支持联网的 ifcfg 文件,并引入密钥文件来进行配置
* 新的 Linux 内核版本 5.17
* 软件包更新为新版本,如 PHP 8.1、gcc 12、OpenSSL 3.0、Ansible 5、OpenJDK 17、Ruby 3.1、Firefox 98 和 LibreOffice 7.3
* RPM 软件包数据库从 /var 移动到了 /usr 文件夹。
* Noto Font 是默认的字体,它将提供更好的用户体验。
* RPM 软件包数据库从 `/var` 移动到了 `/usr` 文件夹。
* Noto 字体是默认的字体,它将提供更好的用户体验。
在这篇指南中,我们将涵盖如何图解安装 Fedora 36 workstation 。在跳入安装步骤前,请确保你的系统满足下面的必要条件。
在这篇指南中,我们将图解安装 Fedora 36 工作站的步骤。在进入安装步骤前,请确保你的系统满足下面的必要条件。
* 最少 2GB RAM (或者更多)
* 最少 2GB 内存(或者更多)
* 双核处理器
* 25 GB 硬盘磁盘空间 (或者更多)
* 可启动媒介盘
* 25 GB 硬盘磁盘空间(或者更多)
* 可启动介质
心动不如行动,让我们马上深入安装步骤。
### 1) 下载 Fedora 36 Workstation 的 ISO 文件
### 1、下载 Fedora 36 工作站的 ISO 文件
使用下面的链接来从 fedora 官方网站下载 ISO 文件。
使用下面的链接来从 Fedora 官方网站下载 ISO 文件。
* [下载 Fedora Workstation][1]
> **[下载 Fedora Workstation][1]**
iso 文件下载后,接下来将其刻录到 USB 驱动器,使其可启动。
ISO 文件下载后,接下来将其刻录到 U 盘,使其可启动。
### 2) 使用可启动媒介盘启动系统
### 2、使用可启动介质启动系统
现在,转向到目标系统,重新启动它,并在 BIOS 设置中将可启动媒介盘从硬盘驱动器启动更改为从 USB 驱动器(可启动媒介盘)启动。在系统使用可启动媒介盘启动后,我们将获得下面的屏幕。
现在,转向到目标系统,重新启动它,并在 BIOS 设置中将可启动介质从硬盘驱动器更改为 U 盘(可启动介质)启动。在系统使用可启动介质启动后,我们将看到下面的屏幕。
![Choose-Start-Fedora-Workstation-Live-36][2]
选择第一个选项 "Start Fedora-Workstation-Live 36" ,并按下 enter 按键
选择第一个选项 “Start Fedora-Workstation-Live 36” ,并按下回车键。
### 3) 选择安装到硬盘驱动器
### 3选择安装到硬盘驱动器
![Select-Install-to-Hardrive-Fedora-36-workstation][3]
选择 "<ruby>安装到硬盘<rt>Install to Hard Drive</rt></ruby>" 选项来继续安装。
选择 <ruby>安装到硬盘<rt>Install to Hard Drive</rt></ruby> 选项来继续安装。
### 4) 选择你的首选语言
### 4选择你的首选语言
选择你的首选语言来适应你的安装过程
选择你的首选语言来适应你的安装过程
![Language-Selection-Fedora36-Installation][4]
单击 <ruby>继续<rt>Continue</rt></ruby> 按钮
单击 <ruby>继续<rt>Continue</rt></ruby> 按钮
### 5) 选择安装目标
### 5选择安装目标
在这一步骤中,我们将看到下面的安装摘要屏幕,在这里,我们可以配置下面的东西
* 键盘布局
* 时间和日期 (时区)
* 安装目标 选择你想要安装 fedora 36 workstation 的硬盘。
* <ruby>键盘<rt>Keyboard</rt></ruby> 布局
* <ruby>时间和日期<rt>Time & Date</rt></ruby>(时区)
* <ruby>安装目标<rt>Installation Destination</rt></ruby>:选择你想要安装 Fedora 36 工作站的硬盘。
![Default-Installation-Summary-Fedora36-workstation][5]
单击 "<ruby>安装目标<rt>Installation Destination</rt></ruby>" 按钮
单击 <ruby>安装目标<rt>Installation Destination</rt></ruby>” 按钮。
在下面的屏幕中,选择用于安装 fedora 的硬盘驱动器。也可以从存储的 "<ruby>存储配置<rt>Storage configuration</rt></ruby>" 标签页中选择其中一个选项。
在下面的屏幕中,选择用于安装 Fedora 的硬盘驱动器。也可以从 “<ruby>存储配置<rt>Storage configuration</rt></ruby>” 标签页中选择一个选项。
* <ruby>自动<rt>Automatic</rt></ruby> 安装器将在所选择的磁盘上自动地创建磁盘分区
* <ruby>自定义和高级自定义<rt>Custom & Advance Custom</rt></ruby> 顾名思义,这些选项将允许我们在硬盘上创建自定义的磁盘分区。
* <ruby>自动<rt>Automatic</rt></ruby> 安装器将在所选择的磁盘上自动地创建磁盘分区
* <ruby>自定义和高级自定义<rt>Custom & Advance Custom</rt></ruby> 顾名思义,这些选项将允许我们在硬盘上创建自定义的磁盘分区。
在这篇指南中,我们将使用第一个选项 "<ruby>自动<rt>Automatic</rt></ruby>"
在这篇指南中,我们将使用第一个选项 <ruby>自动<rt>Automatic</rt></ruby>
![Automatic-Storage-configuration-Fedora36-workstation-installation][6]
单击 "<ruby>完成<rt>Done</rt></ruby>" 按钮,来继续安装
单击 <ruby>完成<rt>Done</rt></ruby>” 按钮,来继续安装。
### 6) 在安装前
### 6在安装前
单击 "<ruby>开始安装<rt>Begin Installation</rt></ruby>" 按钮,来开始 Fedora 36 workstation 的安装
单击 <ruby>开始安装<rt>Begin Installation</rt></ruby>” 按钮,来开始 Fedora 36 工作站的安装。
![Choose-Begin-Installation-Fedora36-Workstation][7]
正如我们在下面的屏幕中所看到的一样,安装过程已经开始,并且正在安装过程之中
正如我们在下面的屏幕中所看到的一样,安装过程已经开始进行
![Installation-Progress-Fedora-36-Workstation][8]
在安装过程完成后,安装器将通知我们来重新启动计算机系统。
在安装过程完成后,安装程序将通知我们重新启动计算机系统。
![Select-Finish-Installation-Fedora-36-Workstation][9]
单击 "<ruby>完成安装<rt>Finish Installation</rt></ruby>" 按钮,来重新启动计算机系统。也不要忘记在 BIOS 设置中将可启动媒介盘从USB 驱动器启动更改为从硬盘驱动器启动
单击 <ruby>完成安装<rt>Finish Installation</rt></ruby>” 按钮以重新启动计算机系统。也不要忘记在 BIOS 设置中将可启动介质从 USB 驱动器启动更改为硬盘驱动器
### 7) 设置 Fedora 36 Workstation  
### 7、设置 Fedora 36 工作站
计算机系统重新启动后,我们将看到下面的设置屏幕。
![Start-Setup-Fedora-36-Linux][10]
单击 "<ruby>开始设置<rt>Start Setup</rt></ruby>" 按钮
单击 <ruby>开始设置<rt>Start Setup</rt></ruby>” 按钮。
根据你的需要选择隐私设置
根据你的需要选择<ruby>隐私<rt>Privacy</rt></ruby>设置
![Privacy-Settings-Fedora-36-Linux][11]
单击 "<ruby>下一步<rt>Next</rt></ruby> " 按钮,来继续安装
单击 <ruby>下一步<rt>Next</rt></ruby>” 按钮,来继续安装。
![Enable-Third-Party Repositories-Fedora-36-Linux][12]
如果你想启用第三方存储库,接下来单击 "<ruby>启用第三方存储库<rt>Enable Third-Party Repositories</rt></ruby>" 按钮,如果你现在不想配置它,那么单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮
如果你想启用第三方存储库,接下来单击 <ruby>启用第三方存储库<rt>Enable Third-Party Repositories</rt></ruby>” 按钮,如果你现在不想配置它,那么单击 “<ruby>下一步<rt>Next</rt></ruby>” 按钮。
同样,如果你想要跳过联网账号设置,那么单击 "<ruby>跳过<rt>Skip</rt></ruby>" 按钮
同样,如果你想要跳过联网账号设置,那么单击 <ruby>跳过<rt>Skip</rt></ruby>” 按钮。
![Online-Accounts-Fedora-36-Linux][13]
具体指定本地用户名称,在我的实例中,我使用下图中的名称
指定一个本地用户名称,在我的实例中,我使用下图中的名称
注意:这个用户名称将用于登录系统,并且它也将拥有 sudo 权限。
注意:这个用户名称将用于登录系统,并且它也将拥有 `sudo` 权限。
![Local-Account-Fedora-36-workstation][14]
单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮来设置该用户的密码。
单击 <ruby>下一步<rt>Next</rt></ruby> 按钮来设置该用户的密码。
![Set-Password-Local-User-Fedora-36-Workstation][15]
在设置密码后,单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮。
在设置密码后,单击 <ruby>下一步<rt>Next</rt></ruby> 按钮。
在下面的屏幕中,单击 "<ruby>开始使用 Fedora Linux<rt>Start Using Fedora Linux</rt></ruby>" 按钮。
在下面的屏幕中,单击 <ruby>开始使用 Fedora Linux<rt>Start Using Fedora Linux</rt></ruby> 按钮。
![Click-On-Start-Using-Fedora-Linux][16]
现在,打开终端,运行下面的命令
现在,打开终端,运行下面的命令
```
$ sudo dnf install -y neofetch
@ -144,7 +147,7 @@ $ neofetch
![Neofetch-Fedora-36-Linux][17]
好极了,上面的步骤可以确保 Fedora 36 Workstation 已经成功安装。以上就是这篇指南的全部内容。请毫不犹豫地在下面的评论区写出你的疑问和反馈。
好极了,上面的命令确认 Fedora 36 工作站已经成功安装。以上就是这篇指南的全部内容。请在下面的评论区写出你的疑问和反馈。
--------------------------------------------------------------------------------
@ -153,7 +156,7 @@ via: https://www.linuxtechi.com/how-to-install-fedora-workstation/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[robsesan](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,108 @@
[#]: subject: "5 reasons to use sudo on Linux"
[#]: via: "https://opensource.com/article/22/5/use-sudo-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14634-1.html"
在 Linux 上使用 sudo 命令的 5 个理由
======
![](https://img.linux.net.cn/data/attachment/album/202205/25/112907rfzfc3gqppx8p61n.jpg)
> 以下是切换到 Linux sudo 命令的五个安全原因。下载 sudo 参考手册获取更多技巧。
在传统的 Unix 和类 Unix 系统上,新系统中存在的第一个同时也是唯一的用户是 **root**。使用 root 账户登录并创建“普通”用户。在初始化之后,你应该以普通用户身份登录。
以普通用户身份使用系统是一种自我施加的限制,可以防止愚蠢的错误。例如,作为普通用户,你不能删除定义网络接口的配置文件或意外覆盖用户和组列表。作为普通用户,你无权访问这些重要文件,所以你无法犯这些错误。作为系统的实际所有者,你始终可以通过 `su` 命令切换为超级用户(`root`)并做你想做的任何事情,但对于日常工作,你应该使用普通账户。
几十年来,`su` 运行良好,但随后出现了 `sudo` 命令。
对于日常使用超级用户的人来说,`sudo` 命令乍一看似乎是多余的。在某些方面,它感觉很像 `su` 命令。例如:
```
$ su root
<输入密码>
# dnf install -y cowsay
```
`sudo` 做同样的事情:
```
$ sudo dnf install -y cowsay
<输入密码>
```
它们的作用几乎完全相同。但是大多数发行版推荐使用 `sudo` 而不是 `su`,甚至大多数发行版已经完全取消了 root 账户LCTT 译注:不是取消,而是默认禁止使用 root 用户进行登录、运行命令等操作。root 依然是 0 号用户,依然拥有大部分系统文件和在后台运行大多数服务)。让 Linux 变得愚蠢是一个阴谋吗?
事实并非如此。`sudo` 使 Linux 更加灵活和可配置,并且没有损失功能,此外还有 [几个显著的优点][2]。
### 为什么在 Linux 上 sudo 比 root 更好?
以下是你应该使用 `sudo` 替换 `su` 的五个原因。
### 1. root 是已知的攻击目标
我使用 [防火墙][3]、[fail2ban][4] 和 [SSH 密钥][5] 的常用组合来防止一些针对服务器的不必要访问。在我理解 `sudo` 的价值之前,我对日志中的暴力破解感到恐惧。自动尝试以 root 身份登录是最常见的情况,自然这是有充分理由的。
有一定入侵常识的攻击者应该知道,在广泛使用 `sudo` 之前,基本上每个 Unix 和 Linux 都有一个 root 账户。这样攻击者就会少一种猜测。因为登录名总是正确的,只要它是 root 就行,所以攻击者只需要一个有效的密码。
删除 root 账户可提供大量保护。如果没有 root服务器就没有确认的登录账户。攻击者必须猜测登录名以及密码。这不是两次猜测而是两个必须同时正确的猜测。LCTT 译注此处是误导root 用户不可删除,否则系统将会出现问题。另外,虽然 root 可以改名,但是也最好不要这样做,因为很多程序内部硬编码了 root 用户名。可以禁用 root 用户,给它一个不能登录的密码。)
### 2. root 是最终的攻击媒介
在访问失败日志中经常可以见到 root 用户,因为它是最强大的用户。如果你要设置一个脚本强行进入他人的服务器,为什么要浪费时间尝试以受限的普通用户进入呢?只有最强大的用户才有意义。
root 既是唯一已知的用户名又是最强大的用户账户。因此root 基本上使尝试暴力破解其他任何东西变得毫无意义。
### 3. 可选择的权限
`su` 命令要么全有要么全没有。如果你有 `su root` 的密码,你就可以变成超级用户。如果你没有 `su` 的密码,那么你就没有任何管理员权限。这个模型的问题在于,系统管理员必须在将 root 密钥移交或保留密钥和对系统的所有权之间做出选择。这并不总是你想要的,[有时候你只是想授权而已][6]。
例如,假设你想授予用户以 root 身份运行特定应用程序的权限,但你不想为用户提供 root 密码。通过编辑 `sudo` 配置,你可以允许指定用户,或属于指定 Unix 组的任何用户运行特定命令。`sudo` 命令需要用户的现有密码,而不是你的密码,当然也不是 root 密码。
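举例来说(以下的组名、文件名和具体命令仅为示意),可以通过 `visudo` 在 sudoers 配置中添加这样一条规则,允许某个组的成员以 root 身份运行一条特定命令,而无需知道 root 密码:

```
# /etc/sudoers.d/webadmin假设的文件名务必用 visudo 编辑以免语法错误锁死权限)
# 允许 webadmin 组以 root 身份重启 nginx除此之外不授予任何其他权限
%webadmin ALL=(root) /usr/bin/systemctl restart nginx
```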
### 4. 超时
使用 `sudo` 运行命令后,通过身份验证的用户的权限会提升 5 分钟。在此期间,他们可以运行任何管理员授权的命令。
5 分钟后,认证缓存被清空,下次使用 `sudo` 时会再次提示输入密码。超时可以防止用户意外执行某些操作(例如,搜索 shell 历史记录时不小心按多了**向上**箭头)。如果一个用户离开办公桌而没有锁定计算机屏幕,它还可以确保另一个用户不能以他的身份运行这些命令。
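这个超时时长也是可以配置的sudoers 提供了 `timestamp_timeout` 选项,下面的数值仅为示例:

```
# 认证缓存时长,单位为分钟;设为 0 表示每次运行 sudo 都要求输入密码
Defaults timestamp_timeout=5
```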
### 5. 日志记录
Shell 历史功能可以作为一个用户所做事情的日志。如果你需要了解系统发生了什么,你可以(理论上,取决于 shell 历史记录的配置方式)使用 `su` 切换到其他人的账户,查看他们的 shell 历史记录,也可以了解用户执行了哪些命令。
但是,如果你需要审计 10 或 100 名用户的行为你可能会注意到此方法无法扩展。Shell 历史记录的轮转速度很快,默认为 1000 条,并且可以通过在任何命令前加上空格来轻松绕过它们。
当你需要管理任务的日志时,`sudo` 提供了一个完整的 [日志记录和警报子系统][7],因此你可以在一个特定位置查看活动,甚至在发生重大事件时获得警报。
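作为示意(其中的日志路径为假设),可以在 sudoers 中用 `Defaults` 选项把每条 `sudo` 命令记录到独立的日志文件,甚至记录整个会话的输入输出:

```
# 将每条 sudo 命令追加记录到指定的日志文件(路径为示例)
Defaults logfile="/var/log/sudo.log"
# 可选:记录会话的输入和输出,之后可用 sudoreplay 回放
Defaults log_input, log_output
```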
### 学习 sudo 其他功能
除了本文列举的一些功能,`sudo` 命令还有很多已有的或正在开发中的新功能。因为 `sudo` 通常是你配置一次然后就忘记的东西,或者只在新管理员加入团队时才配置的东西,所以很难记住它的细微差别。
下载 [sudo 参考手册][8],在你最需要的时候把它当作一个有用的指导书。
> **[sudo 参考手册][8]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/use-sudo-linux
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/command_line_prompt.png
[2]: https://opensource.com/article/19/10/know-about-sudo
[3]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[4]: https://www.redhat.com/sysadmin/protect-systems-fail2ban
[5]: https://opensource.com/article/20/2/ssh-tools
[6]: https://opensource.com/article/17/12/using-sudo-delegate
[7]: https://opensource.com/article/19/10/know-about-sudo
[8]: https://opensource.com/downloads/linux-sudo-cheat-sheet


@ -3,33 +3,36 @@
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14638-1.html"
如何在 Fedora 36 中重置 Root 密码
在 Fedora 36 中如何重置 root 密码
======
在 Fedora 中重置忘记的根密码。
你是否忘记了 Fedora 中的 root 密码?或者您想更改 Fedora 系统中的 root 用户密码?没问题!本简要指南将引导你完成在 Fedora 操作系统中更改或重置 root 密码的步骤。
![](https://img.linux.net.cn/data/attachment/album/202205/26/094836cgtywrtwkywg2nem.jpg)
**注意:** 本指南已在 Fedora 36 和 35 版本上进行了正式测试。下面提供的步骤与在 Fedora Silverblue 和旧 Fedora 版本中重置 root 密码的步骤相同
> 在 Fedora 中重置忘记的 root 密码
**步骤 1** - 打开 Fedora 系统并按 **ESC** 键,直到看到 GRUB 启动菜单。出现 GRUB 菜单后,选择要引导的内核并点击 **e** 以编辑选定的引导条目。
你是否忘记了 Fedora 中的 root 密码?或者你想更改 Fedora 系统中的 root 用户密码?没问题!本手册将指导你在 Fedora 操作系统中完成更改或重置 root 密码的步骤。
**注意:** 本手册已在 Fedora 36 和 35 版本上进行了正式测试。下面提供的步骤与在 Fedora Silverblue 和旧 Fedora 版本中重置 root 密码的步骤相同。
**步骤 1** - 打开 Fedora 系统并按下 `ESC` 键,直到看到 GRUB 启动菜单。出现 GRUB 菜单后,选择要引导的内核并按下 `e` 编辑选定的引导条目。
![Grub Menu In Fedora 36][1]
**步骤 2** - 在下一个页面中,你将看到所有启动参数。找到名为 **ro** 的参数。
**步骤 2** - 在下一个页面中,你将看到所有启动参数。找到名为 `ro` 的参数。
![Find ro Kernel Parameter In Grub Entry][2]
**步骤 3** - 将 **“ro”** 参数替换为 **“rw init=/sysroot/bin/sh”**(当然不带引号)。请注意 “`rw`” 和 “`init=/sysroot`...” 之间的空格。修改后,内核参数行应如下所示。
**步骤 3** - 将 `ro` 参数替换为 `rw init=/sysroot/bin/sh`。请注意 `rw``init=/sysroot`...之间的空格。修改后的内核参数行应如下所示。
![Modify Kernel Parameters][3]
**步骤 4** - 如上更改参数后,按 **Ctrl+x** 进入紧急模式,即单用户模式。
**步骤 4** - 上述步骤更改参数后,按 `Ctrl+x` 进入紧急模式,即单用户模式。
在紧急模式下,输入以下命令以读/写模式挂载根(`/`文件系统
在紧急模式下,输入以下命令以 **读/写** 模式挂载根文件系统`/`)。
```
chroot /sysroot/
@ -37,13 +40,13 @@ chroot /sysroot/
![Mount Root Filesystem In Read, Write Mode In Fedora Linux][4]
**步骤 5** - 现在使用 `passwd` 命令更改 root 密码:
**步骤 5** - 现在使用 `passwd` 命令重置 root 密码:
```
passwd root
```
输入两次 root 密码。我建议使用强密码。
输入两次 root 密码。我建议使用强密码。
![Reset Or Change Root Password In Fedora][5]
@ -65,7 +68,7 @@ exit
reboot
```
等待 SELinux 重新标记完成。这将需要几分钟,具体取决于文件系统的大小和硬盘的速度。
等待 SELinux 重新标记完成。这将需要几分钟,具体时长取决于文件系统的大小和硬盘的速度。
![SELinux Filesystem Relabeling In Progress][7]
@ -73,7 +76,7 @@ reboot
![Login To Fedora As Root User][8]
如你所见,在 Fedora 36 中重置 root 密码的步骤非常简单,并且与**[在 RHEL 中重置 root 密码][9]**及其克隆版本(如 CentOS、AlmaLinux 和 Rocky Linux完全相同。
如你所见,在 Fedora 36 中重置 root 密码的步骤非常简单,并且与 [在 RHEL 中重置 root 密码][9] 及其衍生版本(如 CentOS、AlmaLinux 和 Rocky Linux完全相同。
--------------------------------------------------------------------------------
@ -82,7 +85,7 @@ via: https://ostechnix.com/reset-root-password-in-fedora/
作者:[sk][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,47 +3,41 @@
[#]: author: "Phani Kiran https://www.opensourceforu.com/author/phani-kiran/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14631-1.html"
用 Spark SQL 进行结构化数据处理
======
Spark SQL 是 Spark 生态系统中处理结构化格式数据的模块。它在内部使用 Spark Core API 进行处理,但对用户的使用进行了抽象。这篇文章深入浅出,告诉你 Spark SQL 3.x 的新内容。
![][1]
> Spark SQL 是 Spark 生态系统中处理结构化格式数据的模块。它在内部使用 Spark Core API 进行处理,但对用户的使用进行了抽象。这篇文章深入浅出地告诉你 Spark SQL 3.x 的新内容。
有了 Spark SQL用户还可以编写 SQL 风格的查询。这对于精通结构化查询语言或 SQL 的广大用户群体来说基本上是很有帮助的。用户也将能够在结构化数据上编写交互式和临时性的查询。Spark SQL 弥补了弹性分布式数据集RDD和关系表之间的差距。RDD 是 Spark 的基本数据结构。它将数据作为分布式对象存储在适合并行处理的节点集群中。RDD 很适合底层处理,但在运行时很难调试,程序员不能自动推断 schema。另外RDD 没有内置的优化功能。Spark SQL 提供了 DataFrames 和数据集来解决这些问题。
![](https://img.linux.net.cn/data/attachment/album/202205/24/093036xaf6kaz1auaf4a7s.jpg)
Spark SQL 可以使用现有的 Hive 元存储、SerDes 和 UDFs。它可以使用 JDBC/ODBC 连接到现有的 BI 工具。
有了 Spark SQL用户可以编写 SQL 风格的查询。这对于精通结构化查询语言或 SQL 的广大用户群体来说基本上是很有帮助的。用户也将能够在结构化数据上编写交互式和临时性的查询。Spark SQL 弥补了<ruby>弹性分布式数据集<rt>resilient distributed data sets</rt></ruby>RDD和关系表之间的差距。RDD 是 Spark 的基本数据结构。它将数据作为分布式对象存储在适合并行处理的节点集群中。RDD 很适合底层处理,但在运行时很难调试,程序员不能自动推断<ruby>模式<rt>schema</rt></ruby>。另外RDD 没有内置的优化功能。Spark SQL 提供了<ruby>数据帧<rt>DataFrame</rt></ruby>和数据集来解决这些问题。
Spark SQL 可以使用现有的 Hive 元存储、SerDes 和 UDF。它可以使用 JDBC/ODBC 连接到现有的 BI 工具。
### 数据源
大数据处理通常需要处理不同的文件类型和数据源关系型和非关系型的能力。Spark SQL 支持一个统一的 DataFrame 接口来处理不同类型的源,如下所示。
大数据处理通常需要处理不同的文件类型和数据源关系型和非关系型的能力。Spark SQL 支持一个统一的数据帧接口来处理不同类型的源,如下所示。
*文件:*
* 文件:
* CSV
* Text
* JSON
* XML
* JDBC/ODBC
* MySQL
* Oracle
* Postgres
* 带模式的文件:
* AVRO
* Parquet
* Hive 表:
* Spark SQL 也支持读写存储在 Apache Hive 中的数据。
* CSV
* Text
* JSON
* XML
*JDBC/ODBC*
* MySQL
* Oracle
* Postgres
*带 schema 的文件:*
* AVRO
* Parquet
*Hive 表:*
* Spark SQL 也支持读写存储在 Apache Hive 中的数据。
通过 DataFrame用户可以无缝地读取这些多样化的数据源并对其进行转换/连接。
通过数据帧,用户可以无缝地读取这些多样化的数据源,并对其进行转换/连接。
### Spark SQL 3.x 的新内容
@ -67,19 +61,19 @@ AQE 可以通过设置 SQL 配置来启用如下所示Spark 3.0 中默认
spark.conf.set(“spark.sql.adaptive.enabled”,true)
```
#### 动态合并 shuffle 分区
#### 动态合并“洗牌”分区
Spark 在 shuffle 操作后确定最佳的分区数量。在 AQE 中Spark 使用默认的分区数,即 200 个。这可以通过配置来启用。
Spark 在<ruby>洗牌<rt>shuffle</rt></ruby>操作后确定最佳的分区数量。在 AQE 中Spark 使用默认的分区数,即 200 个。这可以通过配置来启用。
```
spark.conf.set(“spark.sql.adaptive.coalescePartitions.enabled”,true)
```
#### 动态切换 join 策略
#### 动态切换连接策略
广播哈希是最好的连接操作。如果其中一个数据集很小Spark 可以动态地切换到广播 join而不是在网络上 shuffe 大量的数据。
广播哈希是最好的连接操作。如果其中一个数据集很小Spark 可以动态地切换到广播连接,而不是在网络上“洗牌”大量的数据。
#### 动态优化倾斜 join
#### 动态优化倾斜连接
如果数据分布不均匀数据会出现倾斜会有一些大的分区。这些分区占用了大量的时间。Spark 3.x 通过将大分区分割成多个小分区来进行优化。这可以通过设置来启用:
@ -91,16 +85,15 @@ spark.conf.set(“spark.sql.adaptive.skewJoin.enabled”,true)
### 其他改进措施
此外Spark SQL 3.x还支持以下内容。
#### 动态分区修剪
3.x 只会根据其中一个表的取值读取相关的分区。这消除了解析整个大表的需要。
#### Join 提示
#### 连接提示
如果用户对数据有了解,这允许用户指定要使用的 join 策略。这增强了查询的执行过程。
如果用户对数据有了解,这允许用户指定要使用的连接策略。这增强了查询的执行过程。
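例如(其中的表名仅为示意),可以在 SQL 语句中以注释形式写入连接提示,要求 Spark 对小表使用广播连接:

```
SELECT /*+ BROADCAST(small_table) */ *
FROM large_table
JOIN small_table
ON large_table.id = small_table.id
```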
#### 兼容 ANSI SQL
@ -119,7 +112,7 @@ via: https://www.opensourceforu.com/2022/05/structured-data-processing-with-spar
作者:[Phani Kiran][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,143 @@
[#]: subject: "ONLYOFFICE 7.1 Release Adds ARM Compatibility, a New PDF Viewer, and More Features"
[#]: via: "https://news.itsfoss.com/onlyoffice-7-1-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lkxed"
[#]: translator: "PeterPan0106"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14630-1.html"
ONLYOFFICE 7.1 发布,新增针对 ARM 的支持、新的 PDF 查看器
======
> ONLYOFFICE Docs 7.1 带来了期待已久的针对文档、电子表格以及演示文稿编辑器的更新。对 ARM 的支持更是画龙点睛之笔。
![onlyoffice 7.1][1]
ONLYOFFICE被认为是 [最佳的微软 Office 替代品][2] 之一,刚刚发布了最新的 7.1 版本更新。
或许你不了解ONLYOFFICE 可以在自托管的服务器(例如 Nextcloud上在线使用也可以在桌面上使用。
这个版本最为激动人心的变化就是初步支持了基于 ARM 的设备,例如树莓派。
接下来请让我们一起看看有什么新的变化。
### ONLYOFFICE 7.1 新变化
[![ONLYOFFICE Docs 7.1: PDF viewer, animations, print preview in spreadsheets][4]][3]
除了对 ARM 的支持ONLYOFFICE 7.1 还提供了如下新功能:
* 一个全新的 PDF、XPS 和 DjVu 文件查看器
* 更方便和可定制的图形选项
* 电子表格打印预览
* 演示文稿中的动画
* 支持 SmartArt 对象
#### ARM 兼容
树莓派这样的基于 ARM 的设备正变得越来越热门,许多人已经期待了许久 ONLYOFFICE 对 ARM 架构的支持。
随着 7.1 版本的发布ONLYOFFICE Docs 现在可以在所有 ARM64 设备上运行。由于 ARM 设备的效率和安全性的提高,我认为这将对 ONLYOFFICE 的未来产生很大的促进作用。
#### 全新的 PDF、XPS 和 DjVu 文件查看器
![onlyoffice][5]
这是许多其他办公软件多年来的一个关键功能。从 ONLYOFFICE 7.1 开始,用户现在可以更方便地使用文档编辑器来查看 PDF、XPS 和 DjVu 文件。
新的视图选项卡为用户提供了一个页面缩略图视图和一个导航栏,其视图更为紧凑和简化。
此外,用户现在还可以将 PDF 文件转换为 DOCX 文件,以便对其进行编辑。因此,我们不用再额外打开其他软件进行处理了,这将显著优化现有的工作流并消除瓶颈。
#### 选择和编辑图形更加方便
![onlyoffice][6]
图形作为现代办公软件的特性,在许多时候并没能发挥足够的作用。尽管 ONLYOFFICE 拥有这些功能已经有一段时间了,但它们在使用时总是相当笨重。
在 ONLYOFFICE 7.1 中,重新设计的图形选择菜单使得这种情况得到了改变。这个新的菜单与微软 Office 的同类产品非常相似,每个图标都可以从菜单中看到。
此外,它现在可以显示最近使用的图形,使批量插入图形更加容易。
图形的最后一项改进是能够使用鼠标来编辑它们。对于那些熟悉 Inkscape 等图形设计软件的人来说,这将会相当得心应手。通过简单地拖动点,你将可以在短时间内创建一个独特的形状。
#### 电子表格的打印预览
![][7]
我相信每个人都发生过由于一个简单的错误而导致打印出现问题的情况。此前其他程序早已经解决了这个问题,但在 ONLYOFFICE 电子表格编辑器中一直没有这个功能。
新版本终于引入了“打印预览”,这将会显著改善上述的情况。
这并不算什么十分新颖的更新,只是说它补齐了短板并且可以节省纸张和打印耗材。
#### 改进的动画页面,便捷的剪切和复制
![][8]
针对需要经常使用演示文稿的用户而言,这个版本增加了一个单独的动画标签,使动画的插入变得更为容易。
ONLYOFFICE 7.1 演示文稿编辑器现在支持各种动画,以及便捷地将一页幻灯片移动以及复制的能力。
#### SmartArt 对象的支持
SmartArt 是一种在文档、演示文稿和电子表格中便捷地制作自定义图形的工具。然而,它一直是微软办公软件的一个功能。虽然其他各种应用程序对该格式有不同程度的支持,但它们并不能与微软 Office 相媲美。
幸运的是ONLYOFFICE 7.1 现在完全支持这种格式,并且没有任何乱码,就像原生支持一般。用户不再需要像以前那样先把 SmartArt 图形转换为普通图形和文字,即可无缝切换。
### 其他变化
ONLYOFFICE 7.1 的其他重要改进包括:
* 新的客户端语言:加利西亚语和阿塞拜疆语
* 在受密码保护的文件中,能够在输入密码的同时查看密码
* OFORM 文件支持缩放选项
* 能够按用户组过滤评论
* 支持金字塔图表
* 支持金字塔柱状图
* 支持垂直和水平圆柱图
* 支持垂直和水平圆锥图
* 上下文菜单中的移动和复制幻灯片选项
* 公式工具提示
* 新的货币格式支持
若想了解全部新特性,请见 [发布日志][9]。
### 下载 ONLYOFFICE 7.1
总的来说ONLYOFFICE 7.1 是一个兼容 ARM 并且功能更为丰富的版本。
所有版本(企业版、开发者版、社区版)都有更新。
下载方面提供了很多不同的软件包,包括用于 ARM 版本的 Docker 镜像、 Snap 软件包以及用于云供应商的即点即用选项。你可以前往下载页面,寻找最合适的安装程序。
下载页面同时列出了安装的官方指南。
> **[获取 ONLYOFFICE 7.1][10]**
*你是否已经尝试了新版本呢?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/onlyoffice-7-1-release/
作者:[Jacob Crume][a]
选题:[lkxed][b]
译者:[PeterPan0106](https://github.com/PeterPan0106)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/onlyoffice-7-1.jpg
[2]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[3]: https://youtu.be/5-ervHAemZc
[4]: https://i.ytimg.com/vi/5-ervHAemZc/hqdefault.jpg
[5]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-viewer.png
[6]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-shapes.png
[7]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-Print-Preview.png
[8]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-Animations.png
[9]: https://www.onlyoffice.com/blog/2022/05/discover-onlyoffice-docs-v7-1/
[10]: https://www.onlyoffice.com/download-docs.aspx


@ -3,21 +3,20 @@
[#]: author: "Agil Antony https://opensource.com/users/agantony"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14635-1.html"
Git 教程:重命名分支、删除分支、查看分支作者
======
掌握管理本地/远程分支等最常见的 Git 任务。
![树枝][1]
![](https://img.linux.net.cn/data/attachment/album/202205/25/161618nt30jqe10nqtlzlj.jpg)
图源:[Erik Fitzpatrick][2][CC BY-SA 4.0][3]
> 掌握管理本地/远程分支等最常见的 Git 任务。
Git 的主要优势之一就是它能够将工作“分叉”到不同的分支中。
如果只有你一个人在使用某个存储库分支的好处是有限的。但是一旦你开始与许多其他贡献者一起工作分支就变得必不可少。Git 的分支机制允许多人同时处理一个项目,甚至是同一个文件。用户可以引入不同的功能,彼此独立,然后稍后将更改合并回主分支。那些专门为一个目的创建的分支,有时也被称为主题分支,例如添加新功能或修复已知错误。
如果只有你一个人在使用某个存储库分支的好处是有限的。但是一旦你开始与许多其他贡献者一起工作分支就变得必不可少。Git 的分支机制允许多人同时处理一个项目,甚至是同一个文件。用户可以引入不同的功能,彼此独立,然后稍后将更改合并回主分支。那些专门为一个目的创建的分支,有时也被称为<ruby>主题分支<rt>topic branch</rt></ruby>,例如添加新功能或修复已知错误。
当你开始使用分支时,了解如何管理它们会很有帮助。以下是开发者在现实世界中使用 Git 分支执行的最常见任务。
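在进入具体任务之前,可以先用一个可直接运行的小例子串起上文所说的“主题分支”流程(其中的仓库路径、分支名均为示意;`git init` 的 `-b` 选项需要 Git 2.28 及以上版本):

```shell
# 新建一个演示仓库(实际工作中仓库通常来自 git clone
tmp=$(mktemp -d)
git init -b main "$tmp/demo" && cd "$tmp/demo"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -m "initial commit"

# 为修复一个已知错误创建主题分支,并在其上提交
git checkout -b fix-typo
echo "fixed" > README.md
git add README.md
git commit -m "fix typo in README"

# 回到主分支,把主题分支的修改合并回来
git checkout main
git merge fix-typo
```

可以看到,主题分支上的提交在合并后出现在了主分支的历史里,而主题分支本身随后就可以删除了。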
@ -27,21 +26,21 @@ Git 的主要优势之一就是它能够将工作“分叉”到不同的分支
#### 重命名本地分支
1. 重命名本地分支:
1重命名本地分支:
```
$ git branch -m <old_branch_name> <new_branch_name>
```
当然,这只会重命名你本地的分支副本。如果远程 Git 服务器上存在该分支,请继续执行后续步骤。
2. 推送这个新分支,从而创建一个新的远程分支:
2推送这个新分支,从而创建一个新的远程分支:
```
$ git push origin <new_branch_name>
```
3. 删除旧的远程分支:
3删除旧的远程分支:
```
$ git push origin -d -f <old_branch_name>
```
@ -51,19 +50,19 @@ $ git push origin -d -f <old_branch_name>
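上面三个步骤合起来,可以用一个本地裸仓库充当“远程服务器”做一次完整演示(其中的路径、分支名均为示意;`git init` 的 `-b` 选项需要 Git 2.28 及以上版本):

```shell
# 用一个本地裸仓库充当“远程服务器”
tmp=$(mktemp -d)
git init --bare "$tmp/remote.git"
git init -b main "$tmp/work" && cd "$tmp/work"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -m "initial commit"
git remote add origin "$tmp/remote.git"

# 先创建并推送一个“旧名字”的分支
git checkout -b old-name
git push -u origin old-name

# 第 1 步:重命名本地分支
git branch -m old-name new-name

# 第 2 步:推送新分支,从而创建新的远程分支
git push origin new-name

# 第 3 步:删除旧的远程分支
git push origin -d -f old-name
```

执行完毕后,本地和远程都只剩下 `new-name` 分支。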
当你要重命名的分支恰好是当前分支时,你不需要指定旧的分支名称。
1. 重命名当前分支:
1重命名当前分支:
```
$ git branch -m <new_branch_name>
```
2. 推送新分支,从而创建一个新的远程分支:
2推送新分支,从而创建一个新的远程分支:
```
$ git push origin <new_branch_name>
```
3. 删除旧的远程分支:
3删除旧的远程分支:
```
$ git push origin -d -f <old_branch_name>
```
@ -77,19 +76,19 @@ $ git push origin -d -f <old_branch_name>
删除本地分支只会删除系统上存在的该分支的副本。如果分支已经被推送到远程存储库,它仍然可供使用该存储库的每个人使用。
1. 签出存储库的主分支(例如 `main``master`
1签出存储库的主分支(例如 `main``master`
```
$ git checkout <central_branch_name>
```
2. 列出所有分支(本地和远程):
2列出所有分支(本地和远程):
```
$ git branch -a
```
3. 删除本地分支:
3删除本地分支:
```
$ git branch -d <name_of_the_branch>
```
@ -105,19 +104,19 @@ $ git branch | grep -v main | xargs git branch -d
删除远程分支只会删除远程服务器上存在的该分支的副本。如果你想撤销删除,也可以将其重新推送到远程(例如 GitHub只要你还有本地副本即可。
1. 签出存储库的主分支(通常是 `main``master`
1签出存储库的主分支(通常是 `main``master`
```
$ git checkout <central_branch_name>
```
2. 列出所有分支(本地和远程):
2列出所有分支(本地和远程):
```
$ git branch -a
```
3. 删除远程分支:
3删除远程分支:
```
$ git push origin -d <name_of_the_branch>
```
@ -127,19 +126,19 @@ $ git push origin -d <name_of_the_branch>
如果你是存储库管理员,你可能会有这个需求,以便通知未使用分支的作者它将被删除。
1. 签出存储库的主分支(例如 `main``master`
1签出存储库的主分支(例如 `main``master`
```
$ git checkout <central_branch_name>
```
2. 删除不存在的远程分支的分支引用:
2删除不存在的远程分支的分支引用:
```
$ git remote prune origin
```
3. 列出存储库中所有远程主题分支的作者,使用 `--format` 选项,并配合特殊的选择器来只打印你想要的信息(在本例中,`%(authorname)` 和 `%(refname)` 分别代表作者名字和分支名称)
3列出存储库中所有远程主题分支的作者,使用 `--format` 选项,并配合特殊的选择器来只打印你想要的信息(在本例中,`%(authorname)` 和 `%(refname)` 分别代表作者名字和分支名称
```
$ git for-each-ref --sort=authordate --format='%(authorname) %(refname)' refs/remotes
```
@ -156,8 +155,8 @@ agil refs/remotes/origin/main
```
$ git for-each-ref --sort=authordate \
--format='%(color:cyan)%(authordate:format:%m/%d/%Y %I:%M %p)%(align:25,left)%(color:yellow) %(authorname)%(end)%(color:reset)%(refname:strip=3)' \
refs/remotes
--format='%(color:cyan)%(authordate:format:%m/%d/%Y %I:%M %p)%(align:25,left)%(color:yellow) %(authorname)%(end)%(color:reset)%(refname:strip=3)' \
refs/remotes
```
示例输出:
@ -171,13 +170,13 @@ refs/remotes
```
$ git for-each-ref --sort=authordate \
--format='%(authorname) %(refname)' \
refs/remotes | grep <topic_branch_name>
--format='%(authorname) %(refname)' \
refs/remotes | grep <topic_branch_name>
```
### 熟练运用分支
Git 分支的工作方式存在细微差别,具体取决于你想要分叉代码库的位置、存储库维护者如何管理分支、<ruby><rt>squashing</rt></ruby><ruby>变基<rt>rebasing</rt></ruby>等。若想进一步了解该主题,你可以阅读下面这三篇文章:
Git 分支的工作方式存在细微差别,具体取决于你想要分叉代码库的位置、存储库维护者如何管理分支、<ruby><rt>squashing</rt></ruby><ruby>变基<rt>rebasing</rt></ruby>等。若想进一步了解该主题,你可以阅读下面这三篇文章:
* [《用乐高来类比解释 Git 分支》][4]作者Seth Kenlon
* [《我的 Git push 命令的安全使用指南》][5]作者Noaa Barki
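顺带一提,上面提到的“压扁”可以用 `git merge --squash` 做一个小演示:它会把主题分支上的多次提交合并为主分支上的一次提交(其中的路径、分支名均为示意;`git init` 的 `-b` 选项需要 Git 2.28 及以上版本):

```shell
# 演示 git merge --squash把主题分支上的两次提交压成主分支上的一次提交
tmp=$(mktemp -d)
git init -b main "$tmp/squash-demo" && cd "$tmp/squash-demo"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -m "initial commit"

# 在主题分支上做两次提交
git checkout -b feature
echo "a" > a.txt && git add a.txt && git commit -m "add a"
echo "b" > b.txt && git add b.txt && git commit -m "add b"

# 回到 main用 --squash 合并:工作区得到全部修改,但只产生一次新提交
git checkout main
git merge --squash feature
git commit -m "feature: add a and b (squashed)"
```

最终 `main` 上只有两个提交:初始提交和压扁后的提交,而两个文件的修改都保留了下来。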
@ -190,7 +189,7 @@ via: https://opensource.com/article/22/5/git-branch-rename-delete-find-author
作者:[Agil Antony][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,67 @@
[#]: subject: "FSF Does Not Accept Debian as a Free Distribution. Heres Why!"
[#]: via: "https://news.itsfoss.com/fsf-does-not-consider-debian-a-free-distribution/"
[#]: author: "Abhishek https://news.itsfoss.com/author/root/"
[#]: collector: "lkxed"
[#]: translator: "Chao-zhi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14628-1.html"
自由软件基金会为什么不认为 Debian 是一种自由发行版?
======
![Why FSF doesn't consider Debian a free distribution][1]
Debian 项目开发了一个尊重用户自由的 GNU/Linux 发行版。在各种自由软件许可证下发布的软件中,其源代码中包含非自由组件的情形并不鲜见。这些软件在被发布到 Debian 之前会被清理掉。而<ruby>自由软件基金会<rt>Free Software Foundation</rt></ruby>FSF维护着一份 [自由 GNU/Linux 发行版的列表][2]但奇怪的是Debian 并不在其中。事实上Debian 不符合进入此列表的某些标准,我们想知道到底不满足哪些标准。但首先,我们需要明白这些努力的意义何在。换句话说,为什么要费心去争取进入某个名单,尤其是这个名单呢?
为什么 Debian 应该争取 FSF 的承认、获得自由发行版的地位呢?曾于 2010 年至 2013 年担任 Debian 项目负责人的 Stefano Zacchiroli 说过几个原因。其中一个 Stefano 称之为“外部审查”的原因我特别赞同。事实上Debian 有其标准和质量水准,一些软件应当符合这些标准才能成为该发行版的一部分,但除了 Debian 开发人员自己,没有人能控制这个过程。如果该发行版被列入这份珍贵的清单中,那么 FSF 就会密切关注 Debian 的命运,并(在出现问题时)给予适当的批评。我相信这是很好的动力。如果你也这么认为,那么现在让我们看看 FSF 认为 Debian 不够自由的原因。
### Debian 社会契约
除了自由 GNU/Linux 发行版列表之外FSF 还保留了一份因某种原因而被拒绝授予自由地位的 GNU/Linux 发行版的列表。对于此列表中的每个发行版都有一个评论简要说明了拒绝的理由。从对 Debian 的评论中可以清楚地看出FSF 和 Debian 项目在对“自由发行版”一词的解释上产生分歧,其主要根源来自一份被称为 “<ruby>Debian 社会契约<rt>Debian Social Contract</rt></ruby>”的文件。
该社会契约的第一个版本是在 1997 年 7 月 4 日由第二任 Debian 项目领导人 Bruce Perens 发表的。作为该契约的一部分,也公布了一套被称为 <ruby>Debian 自由软件准则<rt>Debian Free Software Guidelines</rt></ruby>DFSG的规则。从那时起要成为 Debian 的一部分,分发软件的许可证必须符合 DFSG。该社会契约记录了 Debian 开发者只用自由软件建立操作系统的意图,而 DFSG 则用于将软件分为自由和非自由。2004 年 4 月 26 日,批准了该文件的新版本,取代了 1997 年的版本。
Debian 社会契约有五条。要回答我们今天主要讨论的问题,我们只需要关注其中两条 —— 即第一条和第五条,其他的省略。可以在 [此处][3] 查看该契约的完整版本。
第一条说:“**Debian 将保持 100% 自由**。我们在标题为‘<ruby>Debian 自由软件准则<rt>Debian Free Software Guidelines</rt></ruby>的文件中提供了用于确定一个作品是否自由的准则。我们承诺根据这些准则Debian 系统及其所有组件将是自由的。我们将支持在 Debian 上创造或使用自由和非自由作品的人。我们永远不会让系统要求使用非自由组件。”
同时,第五条写道:“**不符合我们自由软件标准的作品**。我们承认,我们的一些用户需要使用不符合 Debian 自由软件准则的作品。我们在我们的存档中为这些作品创建了“contrib”和“non-free”区域。这些区域中的软件包并不是 Debian 系统的一部分,尽管它们已被配置为可以在 Debian 中使用。我们鼓励 CD 制造商阅读这些区域的软件包的许可证,并确定他们是否可以在其 CD 上分发这些软件包。因此,尽管非自由作品不是 Debian 的一部分,但我们支持它们的使用,并为非自由软件包提供基础设施(例如我们的错误跟踪系统和邮件列表)。”
因此,在实践中,第一条和第五条意味着:在安装了 Debian 之后用户得到了一个完全而彻底的自由操作系统但是如果他们突然想牺牲自由来追求功能安装非自由软件Debian 不仅不会阻碍他们这样做,而且会大大简化这一任务。
尽管该契约规定发行版将保持 100% 自由,但它允许官方存档的某些部分可能包含非自由软件或依赖于某些非自由组件的自由软件。形式上,根据同一契约,这些部分中的软件不是 Debian 的一部分,但 FSF 对此感到不安,因为这些部分使得在系统上安装非自由软件变得更加容易。
在 2011 年前FSF 有合理的理由不认为 Debian 是自由的——该发行版附带的 Linux 内核没有清理二进制 blob。但自 2011 年 2 月发布的 Squeeze 至今Debian 已经包含了完全自由的 Linux 内核。因此,简化非自由软件的安装是 FSF 不承认 Debian 是自由发行版的主要原因,直到 2016 年这是我知道的唯一原因,但在 2016 年初出现了问题……
### 等等 …… 关 Firefox 什么事?
很长一段时间Debian 都包含一个名为 Iceweasel 的浏览器,它只不过是 Firefox 浏览器的更名重塑而已。进行品牌重塑有两个原因:首先,该浏览器标志和名称是 Mozilla 基金会的商标,而提供非自由软件与 DFSG 相抵触。其次通过在发行版中包含浏览器Debian 开发人员必须遵守 Mozilla 基金会的要求,该基金会禁止以 Firefox 的名义交付浏览器的修改版本。因此,开发人员不得不更改名称,因为他们在不断地修改浏览器的代码,以修复错误并消除漏洞。但在 2016 年初Debian 有幸拥有一款经过修改的 Firefox 浏览器,不受上述限制,可以保留原来的名称和徽标。一方面,这是对 Debian 修改的认可,也是对 Debian 信任的体现。另一方面,该软件显然没有清除非自由组件,它现在已成为发行版的一部分。如果此时 Debian 已被列入自由 GNU/Linux 发行版列表,那么自由软件基金会将会毫不犹豫地指出这一点。
### 结论
数字世界中的自由与现实世界中的自由同样重要。在这篇文章中,我试图揭示 Debian 最重要的特性之一 —— 开发用户自由的发行版。开发人员花费额外的时间从软件中清理非自由组件,并且以 Debian 为技术基础的数十个发行版继承了它的工作,并由此获得了一部分自由。
另外,我想分享一个简单的看法:自由并不像乍看起来那么简单,人们自然会去追问什么是真正的自由,而什么不是。由于 Firefox 的存在Debian 现在不能被称为自由的 GNU/Linux 发行版。而从 2011 年 Debian 终于开始清理内核及发行版其他组件起,到 2016 年 Firefox 成为发行版一部分为止的这段时间里自由软件基金会仅出于纯粹的意识形态原因而不认为该发行版是自由的Debian 大大简化了非自由软件的安装……现在轮到你来权衡所有的争论,并决定是否将这个 GNU/Linux 发行版视为自由的了。
祝你好运!并尽可能保持自由。
由 Evgeny Golyshev 为 [Cusdeb.com][4] 撰写
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fsf-does-not-consider-debian-a-free-distribution/
作者:[Evgeny Golyshev][a]
选题:[lkxed][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/root/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/why-fsf-doesnt-consider-debian-a-free-software-1200-%C3%97-675px.png
[2]: https://gnu.org/distros/free-distros.en.html
[3]: https://debian.org/social_contract
[4]: https://wiki.cusdeb.com/Essays:Why_the_FSF_does_not_consider_Debian_as_a_free_distribution/en


@ -1,144 +0,0 @@
[#]: subject: "ONLYOFFICE 7.1 Release Adds ARM Compatibility, a New PDF Viewer, and More Features"
[#]: via: "https://news.itsfoss.com/onlyoffice-7-1-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lkxed"
[#]: translator: "PeterPan0106"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ONLYOFFICE 7.1 Release Adds ARM Compatibility, a New PDF Viewer, and More Features
======
ONLYOFFICE Docs 7.1 brings in much-awaited feature additions to its document, spreadsheet, and presentation programs. Supporting ARM devices is an icing on the cake!
![onlyoffice 7.1][1]
ONLYOFFICE, one of the [best open-source Microsoft Office alternatives][2], has just released its new upgrade, i.e., version 7.1.
If you didnt know, you can use ONLYOFFICE with online integration on your self-hosted server (like Nextcloud) or the desktop.
This release brings exciting new changes, notably initial support for ARM-based devices like the Raspberry Pi.
Lets take a look at whats new!
### ONLYOFFICE 7.1: Whats New?
[<img src="https://i.ytimg.com/vi/5-ervHAemZc/hqdefault.jpg" alt="ONLYOFFICE Docs 7.1: PDF viewer, animations, print preview in spreadsheets">][3]
![][4]
Alongside the headline feature of ARM support, ONLYOFFICE 7.1 has new feature additions on offer. These include:
* A brand-new PDF, XPS, and DjVu file viewer
* More convenient and customizable shape options
* Spreadsheets Print Preview
* Animations in Presentations
* SmartArt Object Support
#### ARM Compatibility
As ARM-based devices like the Raspberry Pi become more popular each year, many expected the support for ARM architecture by ONLYOFFICE for a while.
With the 7.1 release, ONLYOFFICE Docs 7.1 now runs on all ARM64 devices. Thanks to the increased efficiency and security of ARM devices, I suspect this will have a massive impact on the future of ONLYOFFICE.
#### Brand-new PDF, XPS, and DjVu file viewer
![onlyoffice][5]
This is a key feature that many other office programs have had for years. Starting with ONLYOFFICE 7.1, users can now use the document editor to view PDF, XPS, and DjVu files much more conveniently.
With the capability to open files on the client-side, the new view mode offers users a page thumbnails view and a navigation bar in a much more compact and simplified view.
Additionally, users can now also convert these PDF files to DOCX files so that you can edit them. As a result, people shouldnt need to go open multiple different apps to be able to work with the same file, which should help alleviate some major bottlenecks in workflows.
#### More convenient and customizable shape options
![onlyoffice][6]
Often under-used (I think), shapes are a great feature of modern office applications. Although ONLYOFFICE has had them for quite some time now, they have always been rather clunky to work with.
However, with ONLYOFFICE 7.1, this changes thanks to a redesigned shape selection menu. This new menu closely resembles its Microsoft Office equivalent, with each icon being visible from within the menu.
Additionally, it now shows the recently used shapes to make repetitive shape insertion easier.
The final improvement to shapes is the ability to edit them using your mouse. This should be quite familiar for those of you familiar with graphic design software like Inkscape. By simply dragging the points around, you can create a unique shape in almost no time!
#### Spreadsheets Print Preview
![][7]
Im sure everyone can relate to the frustration when a print fails due to a simple mistake. While other programs solved this problem a while ago, it has remained noticeably absent in the ONLYOFFICE spreadsheet editor.
Fortunately, this release looks to rectify this, thanks to the introduction of “Print Preview.”
To be honest, theres not a lot to say about this, just that it should save a lot of paper and printer frustrations.
#### New Animation Tab, Move Slides, and Duplicate Slide
![][8]
For those of you who make countless presentations with animations, a separate animation tab has been added with this release, making things easier.
ONLYOFFICE 7.1 presentation editor now supports a variety of animations along with the ability to move slides to the beginning/end of a presentation and duplicate a slide.
#### SmartArt Object Support
SmartArt is an easy way to make custom graphics in documents, presentations, and spreadsheets. However, it has always been a Microsoft Office-focused feature. Although various other applications have had varying levels of support for the format, they have never really been comparable to Microsoft Office.
Fortunately, ONLYOFFICE 7.1 now fully supports this format without any “hacks”, like what used to be required. Unlike the old process of converting the objects to a group of figures, Smart Art is now handled seamlessly and without problems.
### Other Changes
Other significant refinements in ONLYOFFICE 7.1 include:
* New interface languages Galician and Azerbaijani
* Ability to view a password while entering it in password-protected files
* Zoom options in OFORM files
* Ability to filter comments by groups of users
* Pyramid chart support
* Pyramid bar chart support
* Vertical and horizontal cylinder chart support
* Vertical and horizontal cone chart support
* Move and duplicate slide options in the context menu
* Formula tooltips
* New currency support
For a complete list of changes, I highly suggest you look at the [release notes][9].
### Download ONLYOFFICE 7.1
Overall, ONLYOFFICE 7.1 looks to be a great release with ARM compatibility and new features.
You should find the latest version available for all editions (Enterprise, Developer, Community).
Plenty of different packages are available, including Docker images for ARM editions, a Snap package, and 1-click app options for cloud providers. You can head to its download page and look for the appropriate installer.
The download page also mentions the official instructions to get it installed.
[Get ONLYOFFICE 7.1][10]
*Have you tried the new update yet?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/onlyoffice-7-1-release/
作者:[Jacob Crume][a]
选题:[lkxed][b]
译者:[PeterPan0106](https://github.com/PeterPan0106)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/onlyoffice-7-1.jpg
[2]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[3]: https://youtu.be/5-ervHAemZc
[4]: https://youtu.be/5-ervHAemZc
[5]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-viewer.png
[6]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-shapes.png
[7]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-Print-Preview.png
[8]: https://news.itsfoss.com/wp-content/uploads/2022/05/ONLYOFFICE-Animations.png
[9]: https://www.onlyoffice.com/blog/2022/05/discover-onlyoffice-docs-v7-1/
[10]: https://www.onlyoffice.com/download-docs.aspx


@ -2,7 +2,7 @@
[#]: via: "https://news.itsfoss.com/linux-kernel-5-18-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "PeterPan0106"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -105,7 +105,7 @@ via: https://news.itsfoss.com/linux-kernel-5-18-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
译者:[PeterPan0106](https://github.com/PeterPan0106)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,77 @@
[#]: subject: "System76 Collaborates with HP for a Powerful Linux Laptop for Developers"
[#]: via: "https://news.itsfoss.com/hp-dev-one-system76/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
System76 Collaborates with HP for a Powerful Linux Laptop for Developers
======
HP is entering the Linux hardware market featuring Pop!_OS out-of-the-box, making things exciting. Lets take a look.
![hp][1]
System76 already makes Linux laptops. So, what is this all about?
Well, this time, it will be a Linux laptop by HP, powered by Pop!_OS, i.e., the Ubuntu-based Linux distribution by System76.
*Carl Richell* (System76s Founder) made the announcement through his Twitter handle, with a link to the website that provides additional information on this.
> Hp-Pop Hooray! Check out [https://t.co/gf2brjjUl8][2]
[May 20, 2022][3]
### HP Dev One: A Linux Laptop Built for Developers
System76 laptops are highly praised for their out-of-the-box hardware compatibility with Pop!_OS, which is more of the reason Pop!_OS sits nicely with laptops without many hiccups.
Pop!_OS constantly comes up with updates and feature additions to improve the workflow and make the best out of the available hardware for Linux.
Now, teaming up with HP sounds like a great idea to step things up a notch.
![HP System76][4]
So, the idea of a partnership between Pop!_OS and HP is exciting!
With HP, the availability/warranty of the laptop sounds good on paper compared to System76 laptops, especially in regions where System76 is not available.
### AMD-Powered Laptop to Help You Code Better
HP Dev One seems to start featuring the essentials for developers to multitask, and get things done quickly.
The laptop uses an **8-core AMD Ryzen 7 PRO processor** coupled with **16 GB RAM** (DDR4 @ 3200 MHz) for starters.
You can expect a 14-inch full-HD anti-glare display powered by AMDs Radeon Graphics.
With HP Dev One, Carl Richell mentions that the laptop will receive **firmware updates** via the [LVFS][5] (Linux Vendor Firmware Service).
The pricing for the laptop has been mentioned to start at **$1099** for the mentioned specifications.
The website only says that it is coming soon. So, we do not have an official launch date as of now.
For a commercial manufacturer like HP, the pricing for the laptop does not sound mind-blowing, but could be a fair deal.
*What do you think about the pricing for the HP laptop tailored for developers using Linux? Does the price tag sound good? What are your expectations from the laptop?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/hp-dev-one-system76/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/hpdevone-laptop.jpg
[2]: https://t.co/gf2brjjUl8
[3]: https://twitter.com/carlrichell/status/1527757934364479488?ref_src=twsrc%5Etfw
[4]: https://news.itsfoss.com/wp-content/uploads/2022/05/hpdevone-illustration-1024x576.jpg
[5]: https://fwupd.org/


@ -0,0 +1,60 @@
[#]: subject: "Woah! Broadcom Could Acquire VMware for $60 Billion"
[#]: via: "https://news.itsfoss.com/broadcom-vmware-deal/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Woah! Broadcom Could Acquire VMware for $60 Billion
======
Broadcoms interested to acquire VMware for $60 billion, making it one of the biggest tech deals in 2022.
![broadcom][1]
Broadcom, a semiconductor chip company, is infamous among desktop Linux users for the incompatibility issues with its wireless adapter/card and drivers.
And, it is now planning to get into the cloud computing market by acquiring one of the biggest players in the industry, i.e., **VMware**.
VMware is not an open-source company, but it offers some open-source tools and Linux support for its virtualization software.
In this case, **The Wall Street Journal** [reports][2] that Broadcom and VMware can potentially announce this acquisition later this week on Thursday.
### Broadcom to Enter the Cloud Computing Market
[Broadcom][3] should be a familiar name to Linux users when we talk about wireless network chips and their drivers.
And, with the $60 billion deal for [VMware][4], they could expand their take on the industry through VMwares reach in the cloud computing sector.
Considering that Dell holds a major stake in VMware, its decision also influences the talks about the acquisition by Broadcom.
Hence, it is safe to say that the deal could still fall apart if the discussions do not work out.
And, for the payment to succeed, the report also mentions that Broadcom plans to take help of banks for a $40 billion debt package.
Considering that the report mentions the final price is still up for discussion, the $60 billion value can change (though likely to something close to it).
### Wrapping Up
You can keep an eye on VMwares shares and Broadcom Inc if you are someone who is interested in getting involved in the market.
What do you think about Broadcom acquiring VMware? Do you think its going to go through with Dell involved with a major stake in the company? Feel free to share your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/broadcom-vmware-deal/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/broadcom-vmware-acquisition.jpg
[2]: https://www.wsj.com/articles/broadcom-discussing-paying-around-140-a-share-for-vmware-people-say-11653334946
[3]: https://www.broadcom.com/
[4]: https://www.vmware.com/i


@ -0,0 +1,42 @@
[#]: subject: "Open Source Initiative Releases News Blog On WordPress"
[#]: via: "https://www.opensourceforu.com/2022/05/open-source-initiative-releases-news-blog-on-wordpress/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Source Initiative Releases News Blog On WordPress
======
![osi][1]
The Open Source Initiative (OSI), a public benefit corporation that oversees the Open Source Definition, has launched a WordPress news [blog][2]. Stefano Maffulli was appointed as OSIs first Executive Director in 2021, and he is leading the organisation in overhauling its web presence.
The blog was launched on a subdomain of the opensource.org website, which runs Drupal 7 and is self-hosted on a Digital Ocean droplet. It is also tightly integrated with CiviCRM to manage member subscriptions, individual donations, sponsorship tracking, and newsletter distribution.
As Drupal 7 approaches its end of life in November 2022, the team intends to migrate everything to WordPress. They looked into managed Drupal hosting but discovered it was more expensive and required them to migrate to a more recent version of Drupal. Because D7 themes and plugins are incompatible with D9+, they saw no advantage in terms of time or simplicity.
Because the Taverns theme wasnt yet on GitHub, Maffulli hired a developer to create a simple child theme based on the Twenty Twenty-Two default theme using WordPress new full-site editing features. He expressed gratitude for the opportunity to learn the fundamentals of FSE while overseeing the project.
Some OSI employees were already familiar with WordPress, which influenced their decision to use the software. The extensive functionality and third-party integrations were also important considerations. OSI is also looking into ways to give its members the ability to comment. This would necessitate a method to integrate authentication with CiviCRM in order to access members records.
The new Voices of Open Source blog began by highlighting the OSI affiliate network, which includes 80 organisations such as Mozilla, Wikimedia, the Linux Foundation, OpenUK, and others.
“One of the main objectives for OSI in 2022 is to reinforce our communication channels,” Maffulli said. “Were improving the perception of OSI as a reliable, trustworthy organization. The OSI didnt have a regular publishing schedule before, nor a content plan. Now we have established a regular cadence, publishing at least once a week (often more), commenting on recent news like a winning against a patent troll or court decisions about open source licenses, featuring our sponsors, and offering opinions on topics of interest for the wider community. Its a starting point to affirm OSI as a convener of conversations among various souls of the open source communities.”
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/open-source-initiative-releases-news-blog-on-wordpress/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/osi-e1653464907238.jpg
[2]: https://blog.opensource.org/


@ -0,0 +1,91 @@
[#]: subject: "ProtonMail is Now Just Proton Offering a Privacy Ecosystem"
[#]: via: "https://news.itsfoss.com/protonmail-now-proton/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ProtonMail is Now Just Proton Offering a Privacy Ecosystem
======
ProtonMail announced a re-brand with a new website, new name, updated pricing plans, a refreshed UI, and more changes.
![proton][1]
[ProtonMail][2] is rebranding itself as “Proton” to unify all its offerings under a single umbrella.
Let us not confuse it with Steams Proton (which is also simply referred to as Proton), *right?*
In other words, there will no longer be a separate product page for ProtonMail, ProtonVPN, or any of its services.
### Proton: An Open-Source Privacy Ecosystem
![Updated Proton, unified protection][3]
Proton will have a new single platform (new website) where you can access all the services including:
* Proton Mail
* Proton VPN
* Proton Drive
* Proton Calendar
For new log-in sessions, you will be redirected to **proton.me** instead of **protonmail.com/mail.protonmail.com/protonvpn.com** and so on.
Not just limited to the name/brand, the overall brand accent color, and the approach to its existing user experience will also be impacted by this change.
![][4]
Instead of choosing separate upgrades for VPN and Mail, the entire range of services will now be available with a single paid subscription. This also means that the pricing for the premium upgrades is more affordable with the change.
![][5]
Overall, the change to make “Proton” a privacy ecosystem aims to appeal to more users who arent interested to learn the tech jargon to know how it all works.
You can take a look at all the details on its new official website ([proton.me][6])
The new website looks much cleaner, organized, and a bit more commercially attractive.
### Whats New?
You can expect a refreshed user interface with the re-branding and a new website.
![proton][7]
In addition to that, Proton also mentions that it has improved the integration between the services for a better user experience.
![][8]
If you have already been using ProtonMail, you probably know that they offered existing users the option to activate their “**@proton.me**” account, which is also a part of this change.
You can choose to make your new email address **xyz@proton.me** the default, which is shorter and makes more sense with the new name.
* The old email address isnt going away. But, a new address is available @proton.me
* Existing paid subscribers should receive a storage boost at no extra cost.
* Refreshed user experience across web applications and mobile applications.
* A new website (you will be automatically redirected to it for new sessions).
* New pricing plans with more storage for Proton Drive.
*Excited about the change? Do you like the new name and its approach to it? Feel free to drop your thoughts in the comments section below.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/protonmail-now-proton/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-ft.jpg
[2]: https://itsfoss.com/recommends/protonmai
[3]: https://youtu.be/s5GNTQ63HJE
[4]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-ui-new-1024x447.jpg
[5]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-pricing-1024x494.jpg
[6]: https://proton.me/
[7]: https://news.itsfoss.com/wp-content/uploads/2022/05/Proton-me-website.png
[8]: https://news.itsfoss.com/wp-content/uploads/2022/05/Proton-Product.png


@ -0,0 +1,142 @@
[#]: subject: "7 pieces of Linux advice for beginners"
[#]: via: "https://opensource.com/article/22/5/linux-advice-beginners"
[#]: author: "Opensource.com https://opensource.com/users/admin"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 pieces of Linux advice for beginners
======
We asked our community of writers for the best advice they got when they first started using Linux.
![Why the operating system matters even more in 2017][1]
Image by: Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0
What advice would you give a new Linux user? We asked our community of writers to share their favorite Linux advice.
### 1. Use Linux resources
My brother told me that Linux was like a "software erector set" (that's a dated reference to the old Erector sets that could be purchased in the 1950s and 1960s) which was a helpful metaphor. I was using Windows 3.1 and Windows NT at the time and was trying to build a useful and safe K-12 school district website. This was in 2001 and 2002 and there were very few texts or resources on the web that were helpful. One of the resources recommended was the "Root Users Guide," a very large book that had lots of printed information in it but was tough to decipher and know just how to proceed.
One of the most useful resources for me was an online course that Mandrake Linux maintained. It was a step-by-step explanation of the nuances of using and administering a Linux computer or server. I used that along with a listserv that Red Hat maintained in those days, where you could pose questions and get answers.
—— [Don Watkins][2]
### 2. Ask the Linux community for help
My advice is to ask questions, in all of their settings. You can start out with an Internet search, looking for others who have had the same or similar questions (maybe even better questions.) It takes a while to know what to ask and how to ask it.
Once you become more familiar with Linux, check through the various forums out there, to find one or more that you like, and again, before you ask something yourself, look to see if someone else has already had the question and had it answered.
Getting involved in a mail list is also helpful, and eventually, you may find yourself knowledgeable enough to answer some questions yourself. As they say, you learn the most about something by becoming able to answer someone else's questions about it.
Meanwhile, you also become more familiar with using a system that isn't a black box, one where you can never understand how something is done except by paying for it.
—— [Greg Pittman][3]
My advice is to get familiar with help utilities such as `man` and `info`. Also, spend as much time as possible at the command-line interface and really get used to the fundamental UNIX design. As a matter of fact, one of my favorite books is a UNIX book from the 80s, because it really helps in understanding files, directories, devices, basic commands, and more.
—— [Alan Formy-Duval][4]
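Alan's suggestion is easy to try right away. A quick sketch using standard GNU tools (the exact output depends on your system):

```shell
# man and info open full manuals in an interactive pager:
#   man ls
#   info coreutils 'ls invocation'
# For a quick, non-interactive summary, most GNU tools also accept --help:
ls --help | head -n 3
```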
The best advice I got was to trust the community for answers, and the manual pages for detailed information on how to use different options. However, when I started off around 2009, there were already a lot of tools and resources available, including a project called [Linux From Scratch (LFS)][5]. This project really taught me a lot about the internals and how to actually build an LFS image.
—— [Sumantro Mukherjee][6]
My advice is to read. Use places like [Ask Fedora][7], the Fedora Matrix chat, or other forums. Just read what others are saying and trying to fix. I learned a lot from reading what others were struggling with, and then I would try to figure out what caused the issue.
—— [Steve Morris][8]
### 3. Try dual booting
I started with a dual-boot system in the late 90s (Windows and Linux), and while I wanted to really use Linux, I ended up booting Windows to work in my familiar desktop environment. One of the best pieces of advice was to change the boot order, so every time I wasn't quick enough, I ended up using Linux. ;)
—— [Heike Jurzik][9]
I was challenged by one of my team to do a knowledge swap.
He (our Linux sysadmin) built his website in **Joomla!** (which our web team specialized in, and he wanted to know more about), and I adopted Linux (having been Windows-only to that point). We dual booted to start with, as I still had a bunch of OS-dependent software I needed to use for the business, but it jump-started my adoption of Linux.
It was really helpful to have each other as an expert to call on while we were each learning our way into the new systems, and quite a challenge to keep going and not give up because he hadn't!
I did have a big sticky note on my monitor saying "anything with `rm` in the command, ask first" after a rather embarrassing blunder early on. He wrote a command-line cheat sheet (there are dozens [online now][10]) for me, which really helped me get familiar with the basics. I also started with the [KDE version][11] of Ubuntu, which I found really helpful as a novice used to working with a GUI.
I've used Linux ever since (aside from my work computer) and he's still on Joomla, so it seemed to work for both of us!
—— [Ruth Cheesley][12]
### 4. Back it up for safety
My advice is to use a distro with an easy and powerful backup app. A new Linux user will touch, edit, destroy and restore configurations. They probably will reach a time when their OS will not boot and losing data is frustrating.
With a backup app, they're always sure that their data is safe.
We all love Linux because it allows us to edit everything, but the dark side of this is that making fatal errors is always an option.
—— [Giuseppe Cassibba][13]
### 5. Share the Linux you know and use
My advice is to share the Linux you use. I used to believe the hype that there were distributions that were "better" for new users, so when someone asked me to help them with Linux, I'd show them the distro "for new users." Invariably, this resulted in me sitting in front of their computer looking like I had never seen Linux before myself, because something would be just unfamiliar enough to confuse me. Now when someone asks about Linux, I show them how to use what I use. It may not be branded as the "best" Linux for beginners, but it's the distro I know best, so when their problems become mine, I'm able to help solve them (and sometimes I learn something new myself).
—— [Seth Kenlon][14]
There was a saying back in the old days, "Do not just use a random Linux distro from a magazine cover. Use the distro your friend is using, so you can ask for help when you need it." Just replace "from a magazine cover" with "off the Internet" and it's still valid :-) I never followed this advice, as I was the only Linux user in a 50km radius. Everyone else was using FreeBSD, IRIX, Solaris, and Windows 3.11 around me. Later I was the one people were asking for Linux help.
—— [Peter Czanik][15]
### 6. Keep learning Linux
I was a reseller partner prior to working at Red Hat, and I had a few home health agencies with traveling nurses. They used a quirky package named Carefacts, originally built for DOS, that always got itself out of sync between the traveling laptops and the central database.
The best early advice I heard was to take a hard look at the open source movement. Open source is mainstream in 2022, but it was revolutionary a generation ago when nonconformists bought Red Hat Linux CDs from retailers. Open source turned conventional wisdom on its ear. I learned it was not communism and not cancer, but it scared powerful people.
My company built its first customer firewall in the mid-1990s, based on Windows NT and a product from AltaVista. That thing regularly crashed and often corrupted itself. We built a Linux-based firewall for ourselves and it never gave us a problem. And so, we replaced that customer's AltaVista system with a Linux-based system, and it ran trouble-free for years. I built another customer firewall in late 1999. It took me three weeks to go through a book on packet filtering and get the `ipchains` commands right. But it was beautiful when I finally finished, and it did everything it was supposed to do. Over the next 15+ years, I built and installed hundreds more, now with `iptables`; some with bridges or proxy ARP and QoS to support video conferencing, some with [IPSEC][16] and [OpenVPN tunnels][17]. I got pretty good at it and earned a living managing individual firewalls and a few active/standby pairs, all with Windows systems behind them. I even built a few virtual firewalls.
But progress never stops. By 2022, [iptables is obsolete][18] and my firewall days are a fond memory.
The ongoing lesson? Never stop exploring.
—— [Greg Scott][19]
### 7. Enjoy the process
Be patient. Linux is a different system than what you are used to; be prepared for a new world of endless possibilities. Enjoy it.
—— [Alex Callejas][20]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/linux-advice-beginners
作者:[Opensource.com][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/yearbook-haff-rx-linux-file-lead_0.png
[2]: https://opensource.com/users/don-watkins
[3]: https://opensource.com/users/greg-p
[4]: https://opensource.com/users/alanfdoss
[5]: https://linuxfromscratch.org/
[6]: https://opensource.com/users/sumantro
[7]: https://ask.fedoraproject.org
[8]: https://opensource.com/users/smorris12
[9]: https://opensource.com/users/hej
[10]: https://opensource.com/downloads/linux-common-commands-cheat-sheet
[11]: https://opensource.com/article/22/2/why-i-love-linux-kde
[12]: https://opensource.com/users/rcheesley
[13]: https://opensource.com/users/peppe8o
[14]: https://opensource.com/users/seth
[15]: https://opensource.com/users/czanik
[16]: https://www.redhat.com/sysadmin/run-your-own-vpn-libreswan
[17]: https://opensource.com/article/21/8/openvpn-server-linux
[18]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
[19]: https://opensource.com/users/greg-scott
[20]: https://opensource.com/users/darkaxl


@ -1,217 +0,0 @@
[#]: subject: "3 ways to copy files in Go"
[#]: via: "https://opensource.com/article/18/6/copying-files-go"
[#]: author: "Mihalis Tsoukalos https://opensource.com/users/mtsouk"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
3 ways to copy files in Go
======
In the third article in this series about the Go programming language, learn the three most popular ways to copy a file.
![Periwinkle the Cat: A crowdsourced name][1]
Image by: Opensource.com
This article will show you how to copy a file in the [Go programming language][3]. Although there are more than three ways to copy a file in Go, this article will present the three most common ways: using the `io.Copy()` function call from the Go library; reading the input file all at once and writing it to another file; and copying the file in small chunks using a buffer.
### Method 1: Using io.Copy()
The first version of the utility will use the `io.Copy()` function of the standard Go library. The logic of the utility can be found in the implementation of the `copy()` function, which is as follows:
```
func copy(src, dst string) (int64, error) {
        sourceFileStat, err := os.Stat(src)
        if err != nil {
                return 0, err
        }
        if !sourceFileStat.Mode().IsRegular() {
                return 0, fmt.Errorf("%s is not a regular file", src)
        }
        source, err := os.Open(src)
        if err != nil {
                return 0, err
        }
        defer source.Close()
        destination, err := os.Create(dst)
        if err != nil {
                return 0, err
        }
        defer destination.Close()
        nBytes, err := io.Copy(destination, source)
        return nBytes, err
}
```
Apart from testing whether the file that will be copied exists (`os.Stat(src)`) and is a regular file (`sourceFileStat.Mode().IsRegular()`) so you can open it for reading, all the work is done by the `io.Copy(destination, source)` statement. The `io.Copy()` function returns the number of bytes copied and the first error message that happened during the copying process. In Go, if there is no error message, the value of the error variable will be `nil`.
You can learn more about the `io.Copy()` function at the [io package][4] documentation page.
Executing `cp1.go` will generate output like the following:
```
$ go run cp1.go
Please provide two command line arguments!
$ go run cp1.go fileCP.txt /tmp/fileCPCOPY
Copied 3826 bytes!
$ diff fileCP.txt /tmp/fileCPCOPY
```
This technique is as simple as possible but gives no flexibility to the developer, which is not always a bad thing. However, there are times that the developer needs or wants to decide how they want to read the file.
### Method 2: Using ioutil.WriteFile() and ioutil.ReadFile()
A second way to copy a file uses the `ioutil.ReadFile()` and `ioutil.WriteFile()` functions. The first function reads the contents of an entire file into a byte slice, and the second function writes the contents of a byte slice into a file.
The logic of the utility can be found in the following Go code:
```
input, err := ioutil.ReadFile(sourceFile)
if err != nil {
        fmt.Println(err)
        return
}

err = ioutil.WriteFile(destinationFile, input, 0644)
if err != nil {
        fmt.Println("Error creating", destinationFile)
        fmt.Println(err)
        return
}
```
Apart from the two `if` blocks, which are part of the Go way of working, you can see that the functionality of the program is found in the `ioutil.ReadFile()` and `ioutil.WriteFile()` statements.
Executing `cp2.go` will generate output like the following:
```
$ go run cp2.go
Please provide two command line arguments!
$ go run cp2.go fileCP.txt /tmp/copyFileCP
$ diff fileCP.txt /tmp/copyFileCP
```
Please note that, although this technique will copy a file, it might not be efficient when you want to copy huge files because the byte slice returned by `ioutil.ReadFile()` will also be huge.
### Method 3: Using os.Read() and os.Write()
A third method of copying files in Go uses a `cp3.go` utility that will be developed in this section. It accepts three parameters: the filename of the input file, the filename of the output file, and the size of the buffer.
The most important part of `cp3.go` resides in the following `for` loop, which can be found in the `copy()` function:
```
buf := make([]byte, BUFFERSIZE)
for {
        n, err := source.Read(buf)
        if err != nil && err != io.EOF {
                return err
        }
        if n == 0 {
                break
        }
        if _, err := destination.Write(buf[:n]); err != nil {
                return err
        }
}
```
This technique uses `os.Read()` for reading small portions of the input file into a buffer named `buf` and `os.Write()` for writing the contents of that buffer to a file. The copying process stops when there is an error in reading or when you reach the end of the file (`io.EOF`).
Executing `cp3.go` will generate output like the following:
```
$ go run cp3.go
usage: cp3 source destination BUFFERSIZE
$ go run cp3.go fileCP.txt /tmp/buf10 10
Copying fileCP.txt to /tmp/buf10
$ go run cp3.go fileCP.txt /tmp/buf20 20
Copying fileCP.txt to /tmp/buf20
```
As you will see, the size of the buffer greatly affects the performance of `cp3.go`.
### Doing some benchmarking
The last part of this article will try to compare the three programs as well as the performance of `cp3.go` for various buffer sizes using the `time(1)` command-line utility.
The following output shows the performance of `cp1.go`, `cp2.go`, and `cp3.go` when copying a 500MB file:
```
$ ls -l INPUT
-rw-r--r--  1 mtsouk  staff  512000000 Jun  5 09:39 INPUT
$ time go run cp1.go INPUT /tmp/cp1
Copied 512000000 bytes!
real    0m0.980s
user    0m0.219s
sys     0m0.719s
$ time go run cp2.go INPUT /tmp/cp2
real    0m1.139s
user    0m0.196s
sys     0m0.654s
$ time go run cp3.go INPUT /tmp/cp3 1000000
Copying INPUT to /tmp/cp3
real    0m1.025s
user    0m0.195s
sys     0m0.486s
```
The output shows that the performance of all three utilities is pretty similar, which means that the functions of the standard Go library are quite clever and optimized.
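If you want to reproduce the benchmark, a 500 MB input file like the one shown above can be created with `dd`. This is an assumption on my part; the article doesn't show how its `INPUT` file was made:

```shell
# Create a 512,000,000-byte test file filled with zeros, matching
# the INPUT file size shown in the benchmark listing above.
dd if=/dev/zero of=INPUT bs=1000000 count=512
ls -l INPUT
```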
Now, let's test how the buffer size affects the performance of `cp3.go`. Executing `cp3.go` with a buffer size of 10, 20, and 1,000 bytes to copy a 500MB file on a pretty fast machine will generate the following results:
```
$ ls -l INPUT
-rw-r--r--  1 mtsouk  staff  512000000 Jun  5 09:39 INPUT
$ time go run cp3.go INPUT /tmp/buf10 10
Copying INPUT to /tmp/buf10
real    6m39.721s
user    1m18.457s
sys         5m19.186s
$ time go run cp3.go INPUT /tmp/buf20 20
Copying INPUT to /tmp/buf20
real    3m20.819s
user    0m39.444s
sys         2m40.380s
$ time go run cp3.go INPUT /tmp/buf1000 1000
Copying INPUT to /tmp/buf1000
real    0m4.916s
user    0m1.001s
sys     0m3.986s
```
The generated output shows that the bigger the buffer, the faster the performance of the `cp3.go` utility, which is more or less expected. Moreover, using buffer sizes smaller than 20 bytes for copying big files is a very slow process and should be avoided.
You can find the Go code of `cp1.go`, `cp2.go`, and `cp3.go` at [GitHub][5].
If you have any questions or feedback, please leave a comment below or reach out to me on [Twitter][6].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/copying-files-go
作者:[Mihalis Tsoukalos][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mtsouk
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/LIFE_cat.png
[3]: https://golang.org/
[4]: https://golang.org/pkg/io/
[5]: https://github.com/mactsouk/opensource.com
[6]: https://twitter.com/mactsouk


@ -1,69 +0,0 @@
[#]: subject: "Use this open source screen reader on Windows"
[#]: via: "https://opensource.com/article/22/5/open-source-screen-reader-windows-nvda"
[#]: author: "Peter Cheer https://opensource.com/users/petercheer"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Use this open source screen reader on Windows
======
In honor of Global Accessibility Awareness Day, learn about the NVDA open source screen reader and how you can get involved to improve accessibility for all web users.
![Working from home at a laptop][1]
Image by: Opensource.com
Screen readers are a specialized area of assistive technology software that reads and then speaks the content of a computer screen. People with no sight at all are just a fraction of those with visual impairments, and screen reader software can help all groups. Screen readers are mostly specific to an operating system, and they're used by people with visual impairments and accessibility trainers, as well as developers and accessibility consultants wanting to test how accessible a website or application is.
### How to use the NVDA screen reader
The [WebAIM screen reader user surveys][2] began in 2009 and ran through 2021. In the first survey, the most commonly used screen reader was JAWS, at 74%. It is a commercial product for Microsoft Windows, and the long-time market leader. NVDA, then a relatively new open source screen reader for Windows, came in at just 8%. Fast forward to 2021, and JAWS comes in at 53.7%, with NVDA at 30.7%.
You can download the latest release of NVDA from the [NVAccess website][3]. Why do I use NVDA and recommend it to my MS Windows-using clients? Well, it is open source, fast, powerful, easy to install, supports a wide variety of languages, can be run as a portable application, has a large user base, and there is a regular release cycle for new versions.
NVDA has been translated into 55 languages and is used in 175 different countries. There is also an active developer community with its own [Community Add-ons website][4]. Any add-ons you choose to install will depend on your needs, and there are a lot to choose from, including extensions for common video conferencing platforms.
Like all screen readers, there are a lot of key combinations to learn with NVDA. Using any screen reader proficiently takes training and practice.
![Image of NVDA welcome screen][5]
Teaching NVDA to people familiar with computers and who have keyboard skills is not too difficult. Teaching basic computer skills (without the mouse, touch pad, and keyboard skills) and working with NVDA to a complete beginner is far more of a challenge. Individual learning styles and preferences differ. In addition, people may not need to learn how to do everything if all that they want to do is browse the web and use email. A good source of links to NVDA tutorials and resources is [Accessibility Central][6].
It becomes easier once you have mastered operating NVDA with keyboard commands, but there is also a menu-driven system for many configuration tasks.
![Image of NVDA menu][7]
### Test for accessibility
The inaccessibility of some websites to screen reader users has been a problem for many years, and still is despite disability equality legislation like the Americans with Disabilities Act (ADA). An excellent use for NVDA in the sighted community is for website accessibility testing. NVDA is free to download, and by running a portable version, website developers don't even need to install it. Run NVDA, turn off your monitor or close your eyes, and see how well you can navigate a website or application.
NVDA can also be used for testing when working through the (often ignored) task of properly [tagging a PDF document for accessibility][8].
There are several guides that concentrate on using NVDA for accessibility testing. I can recommend [Testing Web Pages with NVDA][9] and [Using NVDA to Evaluate Web Accessibility][10].
Image by: (Peter Cheer, CC BY-SA 4.0)
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/open-source-screen-reader-windows-nvda
作者:[Peter Cheer][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/petercheer
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://webaim.org/projects
[3]: https://www.nvaccess.org
[4]: https://addons.nvda-project.org/index.en.html
[5]: https://opensource.com/sites/default/files/2022-05/nvda1.png
[6]: http://www.accessibilitycentral.net/
[7]: https://opensource.com/sites/default/files/2022-05/nvda2.png
[8]: https://www.youtube.com/watch?v=rRzWRk6cXIE
[9]: https://www.unimelb.edu.au/accessibility/tools/testing-web-pages-with-nvda
[10]: https://webaim.org/articles/nvda


@ -1,132 +0,0 @@
[#]: subject: "Customize GNOME 42 with A Polished Look"
[#]: via: "https://www.debugpoint.com/2022/05/customize-gnome-42-look-1/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Customize GNOME 42 with A Polished Look
======
A tutorial on how you can give your favourite GNOME desktop a polished look, in 5 minutes.
There are many ways you can customize your favourite GNOME desktop with icons, themes, cursors and wallpapers. This article shows you how to give the GNOME 42 desktop a more polished look. The GNOME 42 desktop environment is available with the recently released Ubuntu 22.04 LTS and Fedora 36.
Before you read further, here's how it looks, with a side-by-side comparison (before and after).
![GNOME before customisation][1]
![GNOME after customisation][2]
I am going to divide this tutorial into two sections.
The first section deals with setting up and installing the required packages. The second shows how to apply various settings to get your desired look.
This tutorial was mainly tested on Ubuntu 22.04 LTS. However, it should work in other variants of Ubuntu and Fedora.
### Customize GNOME 42 with a Polished Look
#### Setup
* First, enable your system for Flatpak because we need to install the Extension Manager to download some required GNOME Shell extensions for this tutorial.
* So, to do that, open up a terminal and run the following commands.
```
sudo apt install flatpak gnome-software-plugin-flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
* Reboot the computer once done.
* Then run the following command from the terminal to install the Extensions Manager app to download GNOME Shell Extensions.
```
flatpak install flathub com.mattjakeman.ExtensionManager
```
* Open the Extension Manager application and install two extensions. The first one is Floating Dock, which features a super cool dock that you can move around anywhere on your desktop. Second, install the User Themes extension to help you install external GTK themes in your Ubuntu Linux.
![User Themes Extension][3]
![Floating Dock Extension][4]
* Secondly, install the [Materia Theme][5] using the commands below. You have to build it, as it doesn't ship an executable. Run the following commands in sequence in Ubuntu to install it.
```
git clone https://github.com/ckissane/materia-theme-transparent.git
cd materia-theme-transparent
meson _build
meson install -C _build
```
* Additionally, download the [Kora Icon theme][6] from the link below. After downloading, extract the files and copy the four theme folders to the `/home/<user name>/.icons` path. Create the `.icons` folder if it is not present.
[Download Kora Icon Theme][7]
![Kora Icon Theme][8]
* Besides the above changes, download the awesome Bibata cursor theme from the link below. After downloading, extract and copy the folders to the same `/home/<user name>/.icons` folder.
[Download Bibata Cursor Theme][9]
* In addition to the above, if you want a nice font that goes with the above themes, [download the Roboto font][10] from Google Fonts and copy the files to the `/home/<user name>/.fonts` folder.
* Finally, restart your system once again.
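The manual copy steps above can be sketched as shell commands. The archive and folder names are assumptions on my part; adjust them to whatever your downloads actually extract to:

```shell
# Create the per-user theme and font directories if they don't exist yet.
mkdir -p "$HOME/.icons" "$HOME/.fonts"

# After extracting the downloaded archives, copy the theme folders in.
# The source folder names below are examples only:
#   cp -r kora-master/kora* "$HOME/.icons/"
#   cp -r Bibata-Original-Ice "$HOME/.icons/"
#   cp Roboto-*.ttf "$HOME/.fonts/"
```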
#### Configuration
* Open the Extension Manager, enable the Floating Dock and User Themes, and disable the Ubuntu Dock.
![Changes to Extensions][11]
* In addition, open the Floating dock settings and make the following changes.
![Floating Dock Settings][12]
* Furthermore, open the [GNOME Tweak Tool][13] and go to the Appearance tab. Set the following:
* Cursor: Bibata-Original-Ice
* Shell Theme: Materia
* Icon: Kora
* Other than that, you may also want to change the font. To do that, go to the Fonts tab and change the document and interface fonts to Roboto 10pt.
* Alternatively, you can also change the accent colour and style from Settings, which comes by default with Ubuntu 22.04.
* Finally, download a nice wallpaper as per your preference. For this tutorial, I have downloaded a sample wallpaper from [here][14].
* If all goes well, you should have a nice desktop, as shown below.
![Customize GNOME 42 Final Look][15]
Enjoy a polished GNOME 42. Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/05/customize-gnome-42-look-1/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://i2.wp.com/www.debugpoint.com/wp-content/uploads/2022/05/GNOME-before-customisation.jpg?ssl=1
[2]: https://i0.wp.com/www.debugpoint.com/wp-content/uploads/2022/05/GNOME-after-customisation.jpg?ssl=1
[3]: https://www.debugpoint.com/wp-content/uploads/2022/05/User-Themes-Extension2.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/05/Floating-Doc-Extension.jpg
[5]: https://github.com/ckissane/materia-theme-transparent
[6]: https://github.com/bikass/kora/
[7]: https://github.com/bikass/kora/archive/refs/heads/master.zip
[8]: https://www.debugpoint.com/wp-content/uploads/2022/05/Kora-Icon-Theme.jpg
[9]: https://www.pling.com/p/1197198/
[10]: https://fonts.google.com/specimen/Roboto
[11]: https://www.debugpoint.com/wp-content/uploads/2022/05/Changes-to-Extensions.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/05/Floating-Dock-Settings.jpg
[13]: https://www.debugpoint.com/2018/05/customize-your-ubuntu-desktop-using-gnome-tweak/
[14]: https://www.pexels.com/photo/colorful-blurred-image-6985048/
[15]: https://www.debugpoint.com/wp-content/uploads/2022/05/Customize-GNOME-42-Final-Look.jpg


@ -0,0 +1,109 @@
[#]: subject: "Speek! : An Open-Source Chat App That Uses Tor"
[#]: via: "https://itsfoss.com/speek/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Speek! : An Open-Source Chat App That Uses Tor
======
An interesting open-source private messenger that utilizes Tor to keep your communications secure and private.
Speek is an internet messaging service that leverages multiple technologies to help keep your internet chats private.
It is end-to-end encrypted, decentralized, and open-source.
Undoubtedly, it aims to pitch itself as one of the [WhatsApp alternatives][1] and a competitor to [Signal on Linux][2].
So, what is it all about? Let us take a closer look at the details.
### Speek! A Peer-to-Peer Instant Messaging App for Linux and Android
![screenshot of Speek][3]
Speek! (with an exclamation mark as part of its name) is an encrypted chat messenger that aims to fight against censorship while keeping your data private.
To keep things simple, we ignore the exclamation mark for the rest of the article.
You can also find it as an alternative to [Session][4], but with some differences.
It is a fairly new competitor compared to other messengers available. However, it should be a candidate to try as an open-source solution.
While it claims to keep you anonymous, you should always be cautious of your activities on your devices to ensure complete anonymity, if that's what you require. It's not just the messenger that you need to think of.
![speek id][5]
It utilizes a decentralized Tor network to keep things secure and private. And, this enables it to make the service useful without needing your phone number. You just require your Speek ID to connect with people, and it is tough for someone to know your ID.
### Features of Speek
![speek options][6]
Some key highlights include:
* End-to-end encryption: No one except for the recipient can view your messages.
* Routing traffic over Tor: Using Tor to route messages enhances privacy.
* No centralized server: Increases resistance to censorship, because it's tough to shut down the service. Moreover, there is no single point of attack for hackers.
* No sign-ups: You do not need to share any personal information to start using the service. You just need a public key to identify/add users.
* Self-destructing chat: When you close the app, the messages are automatically deleted, for an extra layer of privacy and security.
* No metadata: It eliminates any metadata when you exchange messages.
* Private file sharing: You can also use the service to share files securely.
### Download Speek For Linux and Other Platforms
You can download Speek from their [official website][7].
At the time of writing this article, Speek is available on Linux, Android, macOS, and Windows.
For Linux, you will find an [AppImage][8] file. In case you are unaware of AppImages, you can refer to our [AppImage guide][9] to run the application.
![speek android][10]
And, the Android app on the [Google Play Store][11] is fairly new. So, you should expect improvements when you try it out.
[Speek!][12]
### Thoughts on Using Speek
![screenshot of Speek][13]
The user experience of the app is pretty satisfying, and it checks all the essentials. It could be better, but it's decent.
Well, there isn't much to say about Speek's GUI. It is very minimal. Speek is a chat app at its core and does exactly that. No stories, no maps, no unnecessary add-ons.
In my limited time using the app, I am satisfied with its functionality. The features it offers make it a good chat app for a secure and private messaging experience, with all the tech behind it.
If you're going to compare it with some commercially successful chat apps, it falls short on features. But then again, Speek is not designed as a trendy chat app with a sole focus on user experience.
So, I would only recommend Speek for privacy-conscious users. If you want a balance of user experience and features, you might want to continue using private messengers like Signal.
*What do you think about Speek? Is it a good private messenger for privacy-focused users? Kindly let me know your thoughts in the comments section below.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/speek/
作者:[Pratham Patel][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/private-whatsapp-alternatives/
[2]: https://itsfoss.com/install-signal-ubuntu/
[3]: https://itsfoss.com/wp-content/uploads/2022/05/01_speek_gui-1-800x532.webp
[4]: https://itsfoss.com/session-messenger/
[5]: https://itsfoss.com/wp-content/uploads/2022/05/speek-id-800x497.png
[6]: https://itsfoss.com/wp-content/uploads/2022/05/speek-options-800x483.png
[7]: https://speek.network
[8]: https://itsfoss.com/appimage-interview/
[9]: https://itsfoss.com/use-appimage-linux/
[10]: https://itsfoss.com/wp-content/uploads/2022/05/speek-android.jpg
[11]: https://play.google.com/store/apps/details?id=com.speek.chat
[12]: https://speek.network/
[13]: https://itsfoss.com/wp-content/uploads/2022/05/01_speek_gui-1-800x532.webp

[#]: subject: "A hands-on guide to images and containers for developers"
[#]: via: "https://opensource.com/article/22/5/guide-containers-images"
[#]: author: "Evan "Hippy" Slatis https://opensource.com/users/hippyod"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A hands-on guide to images and containers for developers
======
Understand the key concepts behind images and containers. Then try a lab that demonstrates building and running images and containers.
![Shipping containers stacked in a yard][1]
Image by: Lucarelli via Wikimedia Commons. CC-BY-SA 3.0
Containers and Open Container Initiative (OCI) images are important open source application packaging and delivery technologies made popular by projects like Docker and Kubernetes. The better you understand them, the more able you will be to use them to enhance the consistency and scalability of your projects.
In this article, I will describe this technology in simple terms, highlight the essential aspects of images and containers for a developer to understand, then wrap up by discussing some best practices developers can follow to make their containers portable. I will also walk you through a simple lab that demonstrates building and running images and containers.
### What are images?
Images are nothing more than a packaging format for software. A great analogy is Java's JAR file or a Python wheel. JAR (or EAR or WAR) files are simply ZIP files with a different extension, and Python wheels are distributed as gzipped tarballs. All of them conform to a standard directory structure internally.
Images are packaged as `tar.gz` (gzipped tarballs), and they include the software you're building and/or distributing, but this is where the analogy to JARs and wheels ends. For one thing, images package not just your software but all supporting dependencies needed to run your software, up to and including a complete operating system. Whereas wheels and JARs are usually built as dependencies but can be executable, images are almost always built to be executed and more rarely as a dependency.
Knowing the details of what's in the images isn't necessary to understand how to use images or to write and design software for them (if you're interested, read ["What is a container image?"][2]). From your perspective, and especially from the perspective of your software, what's important to understand is that the images you create will contain a *complete operating system*. Because images are packaged as if they're a complete operating system from the perspective of the software you wish to run, they are necessarily much larger than software packaged in a more traditional fashion.
Note that images are immutable. They cannot be changed once they are built. If you modify the software running on the image, you must build an entirely new image and replace the old one.
#### Tags
When images are created, they are created with a unique hash, but they are typically identified with a human-readable name such as `ubi`, `ubi-minimal`, `openjdk11`, and so on. However, there can be different versions of the image for each of their names, and those are typically differentiated by tags. For example, the `openjdk11` image might be tagged as `jre-11.0.14.1_1-ubi` and `jre-11.0.14.1_1-ubi-minimal,` denoting image builds of the openjdk11 software package version 11.0.14.1_1 installed on a Red Hat `ubi` and `ubi minimal` image, respectively.
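For example, pulling one specific tagged build and then listing the local copies might look like the following sketch, assuming Podman (or Docker) is installed; the repository and tag are the `openjdk11` ones mentioned above:

```shell
$ podman pull docker.io/adoptopenjdk/openjdk11:jre-11.0.14.1_1-ubi
$ podman images docker.io/adoptopenjdk/openjdk11
```

The portion after the colon is the tag; omitting it is equivalent to asking for the `latest` tag.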
### What are containers?
Containers are images that have been realized and executed on a host system. Running a container from an image is a two-step process: create and start. Create takes the image and gives it its own ID and filesystem. Create (as in `docker create`, for example) can be repeated many times in order to create many instances of a running image, each with its own ID and filesystem. Starting the container will launch an isolated process on the host machine in which the software running inside the container will behave as if it is running in its very own virtual machine. A container is thus an isolated process on the host machine, with its own ID and independent filesystem.
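Sketched with Podman, the two steps look like this; the image name `test/hello-world` is the one built in the lab later in this article, and the instance names are illustrative (`podman run` performs both steps in one command):

```shell
$ podman create --name instance-1 test/hello-world   # create: assign an ID and filesystem
$ podman create --name instance-2 test/hello-world   # a second, independent instance
$ podman start instance-1 instance-2                 # start: launch the isolated processes
$ podman ps                                          # each running container has its own ID
```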
From a software developer's perspective, there are two primary reasons to use containers: consistency and scalability. These are related to each other, and together they allow projects to use one of the most promising innovations to come to software development in recent years, the principle of "Build once, deploy many."
#### Consistency
Because images are immutable and include all of the dependencies needed to run your software from the OS on up, you gain consistency wherever you choose to deploy it. This means whether you launch an image as a container in a development, test, or any number of production environments, the container will run exactly the same way. As a software developer, you won't have to worry about whether any of those environments are running on a different host operating system or version, because the container is running the same operating system every time. That's the benefit of packaging your software along with its complete runtime environment, rather than just your software without the complete set of dependencies needed to run it.
This consistency means that in almost all cases, when an issue is found in one environment (for example, production), you can be confident that you'll be able to reproduce that issue in development or some other environment, so you can confirm the behavior and focus on fixing it. Your project should never get mired in and stumped by the dreaded "But it works on my machine" problem again.
#### Scalability
Images contain not only your software but also all the dependencies needed to run your software, including the underlying operating system. This means all processes running inside the container view the container as the host system, the host system is invisible to processes running inside the container, and, from the host system's point of view, the container is just another process it manages. Of course, virtual machines do almost the same thing, which raises a valid question: Why use container technology instead of a virtual machine? The answer lies in both speed and size.
Containers run only the software required to support an independent host without the overhead of having to mimic the hardware. Virtual machines must contain a complete operating system and mimic the underlying hardware. The latter is a very heavyweight solution, which also results in much larger files. Because containers are treated as just another running process from the host system's perspective, they can be spun up in seconds rather than minutes. When your application needs to scale quickly, containers will beat a virtual machine in resources and speed every time. Containers are also easier to scale back down.
Scaling is outside the scope of this article from a functional standpoint, so the lab will not be demonstrating this feature, but it's important to understand the principle in order to understand why container technology represents such a significant advance in the packaging and deployment of software.
Note: While it is possible to [run a container that does not include a complete operating system][3], this is rarely done because the minimal images available are usually an insufficient starting point.
### How to find and store images
Like every other type of software packaging technology, containers need a place where packages can be shared, found, and reused. These are called image registries, analogous to Java Maven and Python wheel repositories or npm registries.
These are a sampling of different image registries available on the internet:
* [Docker Hub][4]: The original Docker registry, which hosts many Docker official images used widely among projects worldwide and provides opportunities for individuals to host their own images. One of the organizations that hosts images on Docker Hub is adoptopenjdk; view their repository for examples of images and tags for the [openjdk11][5] project.
* [Red Hat Image Registry][6]: Red Hat's official image registry provides images to those with valid Red Hat subscriptions.
* [Quay][7]: Red Hat's public image registry hosts many of Red Hat's publicly available images and provides opportunities for individuals to host their own images.
### Using images and containers
There are two utilities whose purpose is to manage images and containers: [Docker][8] and [Podman][9]. They are available for Windows, Linux, and Mac workstations. From a developer's point of view, they are completely equivalent when executing commands. They can be considered aliases of one another. You can even install a package on many systems that will automatically change Docker into a Podman alias. Wherever Podman is mentioned in this document, Docker can be safely substituted with no change in outcome.
You'll immediately notice these utilities are very similar to [Git][10] in that they perform tagging, pushing, and pulling. You will use or refer to this functionality regularly. They should not be confused with Git, however, since Git also manages version control, whereas images are immutable and their management utilities and registry have no concept of change management. If you push two images with the same name and tag to the same repository, the second image will overwrite the first with no way to see or understand what has changed.
#### Subcommands
The following are a sampling of Podman and Docker subcommands you will commonly use or refer to:
* build: `build` an image
* Example: `podman build -t org/some-image-repo -f Dockerfile`
* image: manage `image`s locally
* Example: `podman image rm -a` will remove all local images.
* images: list `images` stored locally
* tag: `tag` an image
* container: manage `container`s
* Example: `podman container rm -a` will remove all stopped local containers.
* run: `create` and `start` a container
* also `stop` and `restart`
* pull/push: `pull`/`push` an image from/to a repository on a registry
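Put together, a typical image lifecycle using these subcommands might look like the following sketch; the repository and registry names are illustrative:

```shell
$ podman build -t org/some-image-repo -f Dockerfile                # build the image locally
$ podman tag org/some-image-repo quay.io/org/some-image-repo:1.0   # add a registry-qualified tag
$ podman push quay.io/org/some-image-repo:1.0                      # publish to the registry
$ podman pull quay.io/org/some-image-repo:1.0                      # retrieve it on another machine
$ podman run quay.io/org/some-image-repo:1.0                       # create and start a container
```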
#### Dockerfiles
Dockerfiles are the source files that define images and are processed with the `build` subcommand. They will define a parent or base image, copy in or install any extra software you want to have available to run in your image, define any extra metadata to be used during the build and/or runtime, and potentially specify a command to run when a container defined by your image is run. A more detailed description of the anatomy of a Dockerfile and some of the more common commands used in them is in the lab below. A link to the complete Dockerfile reference appears at the end of this article.
#### Fundamental differences between Docker and Podman
Docker is a daemon on Unix-like systems and a service on Windows. This means it runs in the background all the time, with root or administrator privileges. Podman is a binary: it runs only on demand, and it can run as an unprivileged user.
This makes Podman more secure and more efficient with system resources (why run all the time if you don't have to?). Running anything with root privileges is, by definition, less secure. When using images on the cloud, the cloud that will host your containers can manage images and containers more securely.
#### Skopeo and Buildah
While Docker is a singular utility, Podman has two other related utilities maintained by the Containers organization on GitHub: [Skopeo][11] and [Buildah][12]. Both provide functionality that Podman and Docker do not, and both are part of the container-tools package group with Podman for installation on the Red Hat family of Linux distributions.
For the most part, builds can be executed through Docker and Podman, but Buildah exists in case more complicated builds of images are required. The details of these more complicated builds are far outside the scope of this article, and you'll rarely, if ever, encounter the need for it, but I include mention of this utility here for completeness.
Skopeo provides two utility functions that Docker does not: the ability to copy images from one registry to another and to delete an image from a remote registry. Again, this functionality is outside the scope of this discussion, but the functionality could eventually be of use to you, especially if you need to write some DevOps scripts.
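As a sketch, those two Skopeo operations address registries through the `docker://` transport prefix; the registry and repository names here are illustrative:

```shell
$ skopeo copy docker://quay.io/org/some-image-repo:1.0 \
    docker://registry.example.com/org/some-image-repo:1.0           # copy between registries
$ skopeo delete docker://registry.example.com/org/some-image-repo:1.0   # delete from a remote registry
```

Note that neither operation requires pulling the image to the local machine first, which is what makes Skopeo handy in DevOps scripts.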
### Dockerfiles lab
The following is a very short lab (about 10 minutes) that will teach you how to build images using Dockerfiles and run those images as containers. It will also demonstrate how to externalize your container's configuration to realize the full benefits of container development and "Build once, deploy many."
#### Installation
The following lab was created and tested locally running Fedora and in a [Red Hat sandbox environment][13] with Podman and Git already installed. I believe you'll get the most out of this lab running it in the Red Hat sandbox environment, but running it locally is perfectly acceptable.
You can also install Docker or Podman on your own workstation and work locally. As a reminder, if you install Docker, `podman` and `docker` are completely interchangeable for this lab.
#### Building Images
**1. Clone the Git repository from GitHub:**
```
$ git clone https://github.com/hippyod/hello-world-container-lab
```
**2. Open the Dockerfile:**
```
$ cd hello-world-container-lab
$ vim Dockerfile
```
```
1 FROM docker.io/adoptopenjdk/openjdk11:x86_64-ubi-minimal-jre-11.0.14.1_1
2
3 USER root
4
5 ARG ARG_MESSAGE_WELCOME='Hello, World'
6 ENV MESSAGE_WELCOME=${ARG_MESSAGE_WELCOME}
7
8 ARG JAR_FILE=target/*.jar
9 COPY ${JAR_FILE} app.jar
10
11 USER 1001
12
13 ENTRYPOINT ["java", "-jar", "/app.jar"]
```
This Dockerfile has the following features:
* The FROM statement (line 1) defines the base (or parent) image this new image will be built from.
* The USER statements (lines 3 and 11) define which user is running during the build and at execution. At first, root is running in the build process. In more complicated Dockerfiles I would need to be root to install any extra software, change file permissions, and so forth, to complete the new image. At the end of the Dockerfile, I switch to the user with UID 1001 so that, whenever the image is realized as a container and executed, the user will not be root, making the container more secure. I use the UID rather than a username so that the host can recognize which user is running in the container in case the host has enhanced security measures that prevent containers from running as the root user.
* The ARG statements (lines 5 and 8) define variables that can be used during the build process only.
* The ENV statement (line 6) defines an environment variable and value that can be used during the build process but will also be available whenever the image is run as a container. Note how it obtains its value by referencing the variable defined by the previous ARG statement.
* The COPY statement (line 9) copies the JAR file created by the Spring Boot Maven build into the image. For the convenience of users running in the Red Hat sandbox, which doesn't have Java or Maven installed, I have pre-built the JAR file and pushed it to the hello-world-container-lab repo. There is no need to do a Maven build in this lab. (Note: There is also an `add` command that can be substituted for COPY. Because the `add` command can have unpredictable behavior, COPY is preferable.)
* Finally, the ENTRYPOINT statement defines the command and arguments that should be executed in the container when the container starts up. If this image ever becomes a base image for a subsequent image definition and a new ENTRYPOINT is defined, it will override this one. (Note: There is also a `cmd` command that can be substituted for ENTRYPOINT. The difference between the two is irrelevant in this context and outside the scope of this article.)
Type `:q` and hit **Enter** to quit the Dockerfile and return to the shell.
**3. Build the image:**
```
$ podman build --squash -t test/hello-world -f Dockerfile
```
You should see:
```
STEP 1: FROM docker.io/adoptopenjdk/openjdk11:x86_64-ubi-minimal-jre-11.0.14.1_1
Getting image source signatures
Copying blob d46336f50433 done  
Copying blob be961ec68663 done
...
STEP 7/8: USER 1001
STEP 8/8: ENTRYPOINT ["java", "-jar", "/app.jar"]
COMMIT test/hello-world
...
Successfully tagged localhost/test/hello-world:latest
5482c3b153c44ea8502552c6bd7ca285a69070d037156b6627f53293d6b05fd7
```
In addition to building the image, the command uses the following options:
The `--squash` flag will reduce image size by ensuring that only one layer is added to the base image when the image build completes. Excess layers will inflate the size of the resulting image. FROM, RUN, and COPY/ADD statements add layers, and the best practice is to concatenate these statements when possible, for example:
```
RUN dnf -y --refresh update && \
    dnf install -y --nodocs podman skopeo buildah && \
    dnf clean all
```
The above RUN statement not only creates a single layer for all three commands, but also fails the build should any one of them fail.
The `-t` flag names the image. Because I did not explicitly define a tag for the name (such as `test/hello-world:1.0`), the image will be tagged as `latest` by default. I also did not define a registry (such as `quay.io/test/hello-world`), so the default registry will be localhost.
The `-f` flag is for explicitly declaring the Dockerfile to be built.
When running the build, Podman will track the downloading of "blobs." These are the image layers your image will be built upon. They are initially pulled from the remote registry, and they will be cached locally to speed up future builds.
```
Copying blob d46336f50433 done  
Copying blob be961ec68663 done
...
Copying blob 744c86b54390 skipped: already exists  
Copying blob 1323ffbff4dd skipped: already exists
```
**4. When the build completes, list the image to confirm it was successfully built:**
```
$ podman images
```
You should see:
```
REPOSITORY                        TAG                                 IMAGE ID      CREATED        SIZE
localhost/test/hello-world        latest                              140c09fc9d1d  7 seconds ago  454 MB
docker.io/adoptopenjdk/openjdk11  x86_64-ubi-minimal-jre-11.0.14.1_1  5b0423ba7bec  22 hours ago   445 MB
```
#### Running containers
**5. Run the image:**
```
$ podman run test/hello-world
```
You should see:
```
.   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.4)
...
GREETING: Hello, world
GREETING: Hello, world
```
The output will continue printing "Hello, world" every three seconds until you exit:
```
Ctrl+C
```
**6. Prove that Java is installed only in the container:**
```
$ java -version
```
The Spring Boot application running inside the container requires Java to run, which is why I chose the base image. If you're running in the Red Hat sandbox environment for the lab, this proves that Java is installed only in the container, and not on the host:
```
-bash: java: command not found...
```
#### Externalize your configuration
The image is now built, but what happens when I want the "Hello, world" message to be different for each environment I deploy the image to? For example, I might want to change it because the environment is for a different phase of development or a different locale. If I change the value in the Dockerfile, I'm required to build a new image to see the message, which breaks one of the most fundamental benefits of containers—"Build once, deploy many." So how do I make my image truly portable so it can be deployed wherever I need it? The answer lies in externalizing the configuration.
**7. Run the image with a new, external welcome message:**
```
$ podman run -e 'MESSAGE_WELCOME=Hello, world DIT' test/hello-world
```
You should see:
```
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.4)
...
GREETING: Hello, world DIT
GREETING: Hello, world DIT
```
Stop the container by pressing `Ctrl+C`, then adapt the message:
```
$ podman run -e 'MESSAGE_WELCOME=Hola Mundo' test/hello-world
```
```
.   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.4)
...
GREETING: Hola Mundo
GREETING: Hola Mundo
```
The `-e` flag defines an environment variable and value to inject into the container at startup. As you can see, even if the variable was built into the original image (the `ENV MESSAGE_WELCOME=${ARG_MESSAGE_WELCOME}` statement in your Dockerfile), it will be overridden. You've now externalized data that needed to change based on where it was to be deployed (for example, in a DIT environment or for Spanish speakers) and thus made your images portable.
**8. Run the image with a new message defined in a file:**
```
$ echo 'Hello, world from a file' > greetings.txt
$ podman run -v "$(pwd):/mnt/data:Z" \
    -e 'MESSAGE_FILE=/mnt/data/greetings.txt' test/hello-world
```
In this case you should see:
```
.   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.5.4)
...
GREETING: Hello, world from a file
GREETING: Hello, world from a file
```
The message repeats until you press `Ctrl+C` to stop the container.
The `-e` flag in this case defines an environment variable pointing to the file `/mnt/data/greetings.txt`, which was mounted into the container from the host's local file system by the `-v` flag (`pwd` is a shell utility that outputs the absolute path of the current directory, which in your case should be `hello-world-container-lab`). You've now externalized data that needed to change based on where it was to be deployed, but this time your data was defined in an external file you mounted into the container. Environment variable settings are fine for a limited number of settings, but when you have several settings to apply, a file is a more efficient way of injecting the values into your containers.
Note: The `:Z` flag at the end of the volume definition above is for systems using [SELinux][14]. SELinux manages security on many Linux distributions, and the flag allows the container access to the directory. Without the flag, SELinux would prevent the reading of the file, and an exception would be thrown in the container. Try running the command above again after removing the `:Z` to see a demonstration.
This concludes the lab.
### Developing for containers: externalize the configuration
"Build once, deploy many" works because the immutable containers running in different environments don't have to worry about differences in the hardware or software required to support your particular software project. This principle makes software development, debugging, deployment, and ongoing maintenance much faster and easier. It also isn't perfect, and some minor changes have to be made in how you code to make your container truly portable.
The most important design principle when writing software for containerization is deciding what to externalize. These decisions ultimately make your images portable so they can fully realize the "Build once, deploy many" paradigm. Although this may seem complicated, there are some easy-to-remember factors to consider when deciding whether the configuration data should be injectable into your running container:
* Is the data environment-specific? This includes any data that needs to be configured based on where the container is running, whether the environment is a production, non-production, or development environment. Data of this sort includes internationalization configuration, datastore information, and the specific testing profile(s) you want your application to run under.
* Is the data release independent? Data of this sort can run the gamut from feature flags to internationalization files to log levels—basically, any data you might want or need to change between releases without a build and new deployment.
* Is the data a secret? Credentials should never be hard coded or stored in an image. Credentials typically need to be refreshed on schedules that don't match release schedules, and embedding a secret in an image stored in an image registry is a security risk.
The best practice is to choose where your configuration data should be externalized (that is, in an environment variable or a file) and only externalize those pieces that meet the above criteria. If it doesn't meet the above criteria, it is best to leave it as part of the immutable image. Following these guidelines will make your images truly portable and keep your external configuration reasonably sized and manageable.
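One hedged sketch of file-based externalization, building on the lab above: gather the environment-specific variables into an env file and inject them all at once with Podman's `--env-file` flag. The file name and the `LOG_LEVEL` variable are illustrative additions:

```shell
$ cat > dit.env << 'EOF'
MESSAGE_WELCOME=Hello, world DIT
LOG_LEVEL=debug
EOF
$ podman run --env-file dit.env test/hello-world
```

Keeping one env file per environment (`dit.env`, `prod.env`, and so on) lets the same immutable image be deployed everywhere with only the file swapped out.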
### Summary
This article introduces four key ideas for software developers new to images and containers:
1. Images are immutable binaries: Images are a means of packaging software for later reuse or deployment.
2. Containers are isolated processes: When they are created, containers are a runtime instantiation of an image. When containers are started, they become processes in memory on a host machine, which is much lighter and faster than a virtual machine. For the most part, developers only need to know the latter, but understanding the former is helpful.
3. "Build once, deploy many": This principle is what makes container technology so useful. Images and containers provide consistency in deployments and independence from the host machine, allowing you to deploy with confidence across many different environments. Containers are also easily scalable because of this principle.
4. Externalize the configuration: If your image has configuration data that is environment-specific, release-independent, or secret, consider making that data external to the image and containers. You can inject this data into your running image by injecting an environment variable or mounting an external file into the container.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/guide-containers-images
作者:[Evan "Hippy" Slatis][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hippyod
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/bus-containers2.png
[2]: https://opensource.com/article/21/8/container-image
[3]: https://opensource.com/article/22/2/build-your-own-container-linux-buildah
[4]: http://hub.docker.com/
[5]: https://hub.docker.com/r/adoptopenjdk/openjdk11/tags?page=1&name=jre-11.0.14.1_1-ubi
[6]: http://registry.redhat.io/
[7]: http://quay.io/
[8]: https://opensource.com/resources/what-docker
[9]: https://www.redhat.com/sysadmin/podman-guides-2020
[10]: https://git-scm.com/
[11]: https://github.com/containers/skopeo/blob/main/install.md
[12]: https://github.com/containers/buildah
[13]: https://developers.redhat.com/courses/red-hat-enterprise-linux/deploy-containers-podman
[14]: https://www.redhat.com/en/topics/linux/what-is-selinux
[15]: https://www.redhat.com/en/services/training/do080-deploying-containerized-applications-technical-overview?intcmp=7013a000002qLH8AAM
[16]: https://blog.aquasec.com/a-brief-history-of-containers-from-1970s-chroot-to-docker-2016
[17]: https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction?intcmp=7013a000002qLH8AAM
[18]: https://docs.docker.com/engine/reference/builder/
[19]: https://www.imaginarycloud.com/blog/podman-vs-docker/#:~:text=Docker%20uses%20a%20daemon%2C%20an,does%20not%20need%20the%20mediator.

[#]: subject: "12 essential Linux commands for beginners"
[#]: via: "https://opensource.com/article/22/5/essential-linux-commands"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
12 essential Linux commands for beginners
======
I recommend these commands to anyone who is getting started with Linux.
![Command line prompt][1]
Image by: Opensource.com
When operating on the Linux command line, it is easy to get disoriented, which can have disastrous consequences. I once issued a remove command before realizing that I'd moved the boot directory of my computer. I learned to use the `pwd` command to know exactly which part of the file system I was in (and these days, there are command projects, like [trashy and trash-cli][2], that serve as intermediates when removing files).
When I was new to Linux, I had a cheat sheet that hung over my desk to help me remember those commands as I managed my Linux servers. It was called the *101 commands for Linux* cheat sheet. As I became more familiar with these commands, I became more proficient with server administration.
Here are 12 Linux commands I find most useful.
### 1. Print working directory (pwd)
The `pwd` command prints your working directory. In other words, it outputs the path of the directory you are currently working in. There are two options: `--logical` to display your location with any symlinks intact, and `--physical` to display your location with all symlinks resolved.
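A quick way to see the difference, using the shell builtin's short forms `-L` and `-P` (the long options belong to the standalone `/bin/pwd` from GNU coreutils); the `pwd-demo` path is illustrative:

```shell
$ mkdir -p ~/pwd-demo/real
$ ln -s real ~/pwd-demo/link
$ cd ~/pwd-demo/link
$ pwd -L    # logical: the path as you navigated it, ending in .../link
$ pwd -P    # physical: symlinks resolved, ending in .../real
```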
### 2. Make directory (mkdir)
Making directories is easy with the `mkdir` command. The following command creates a directory called `example` unless `example` already exists:
```
$ mkdir example
```
You can make directories within directories:
```
$ mkdir -p example/one/two
```
If directories `example` and `one` already exist, only directory `two` is created. If none of them exist, then three nested directories are created.
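You can confirm the nested result with `find`, reusing the directory names from the example above:

```shell
$ mkdir -p example/one/two
$ find example -type d
example
example/one
example/one/two
```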
### 3. List (ls)
Coming from MS-DOS, I was used to listing files with the `dir` command. I don't recall working on Linux at the time, although today, `dir` is in the GNU Core Utilities package. Most people use the `ls` command to display the files in a directory, along with their properties. The `ls` command has many options, including `-l` to view a long listing that displays the file owner and permissions.
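A few standard option combinations worth memorizing; the exact output columns vary slightly by system:

```shell
$ ls -l     # long listing: permissions, owner, size, modification time
$ ls -lh    # long listing with human-readable sizes (K, M, G)
$ ls -a     # include hidden "dot" files
$ ls -lt    # sort by modification time, newest first
```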
### 4. Change directory (cd)
It is often necessary to change directories. That's the `cd` command's function. For instance, this example takes you from your home directory into the `Documents` directory:
```
$ cd Documents
```
You can quickly change to your home directory with `cd ~` or just `cd` on most systems. You can use `cd ..` to move up a level.
### 5. Remove a file (rm)
Removing files is inherently dangerous. Traditionally, the Linux terminal has no Trash or Bin like the desktop does, so many terminal users have the bad habit of permanently removing data they believe they no longer need. There's no "un-remove" command, though, so this habit can be problematic should you accidentally delete a directory containing important data.
A Linux system provides `rm` and `shred` for data removal. To delete file `example.txt`, type the following:
```
$ rm example.txt
```
However, it's much safer to install a trash command, such as [trashy][3] or [trash-cli][4]. Then you can send files to a staging area before deleting them forever:
```
$ trash example.txt
```
### 6. Copy a file (cp)
Copy files with the `cp` command. The syntax is copy *from-here* *to-there*. Here's an example:
```
$ cp file1.txt newfile1.txt
```
You can copy entire directories, too:
```
$ cp -r dir1 newdirectory
```
### 7. Move and rename a file (mv)
Renaming and moving a file is functionally the same process. When you move a file, you take a file from one directory and put it into a new one. When renaming a file, you take a file from one directory and put it back into the same directory or a different directory, but with a new name. Either way, you use the `mv` command:
```
$ mv file1.txt file_001.txt
```
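Moving a file into a directory uses the same syntax. A short sketch, with hypothetical names:

```
mkdir archive
mv file_001.txt archive/     # move the file into the directory
mv archive old-archive       # rename the directory itself
```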
### 8. Create an empty file (touch)
Easily create an empty file with the `touch` command:
```
$ touch one.txt
$ touch two.txt
$ touch three.md
```
### 9. Change permissions (chmod)
Change the permissions of a file with the `chmod` command. One of the most common uses of `chmod` is making a file executable:
```
$ chmod +x myfile
```
This example is how you give a file permission to be executed as a command. This is particularly handy for scripts. Try this simple exercise:
```
$ echo 'echo Hello $USER' > hello.sh
$ chmod +x hello.sh
$ ./hello.sh
Hello don
```
### 10. Escalate privileges (sudo)
While administering your system, it may be necessary to act as the superuser (also called root). This is where the `sudo` (or *superuser do*) command comes in. If you try to do something that your computer tells you only an administrator (or root) user can do, just preface it with `sudo`:
```
$ touch /etc/os-release && echo "Success"
touch: cannot touch '/etc/os-release': Permission denied
$ sudo touch /etc/os-release && echo "Success"
Success
```
### 11. Shut down (poweroff)
The `poweroff` command does exactly what it sounds like: it powers your computer down. It requires `sudo` to succeed.
There are actually many ways to shut down your computer and some variations on the process. For instance, the `shutdown` command allows you to power down your computer after a delay given in minutes, such as 60 minutes:
```
$ sudo shutdown -h 60
```
Or immediately:
```
$ sudo shutdown -h now
```
You can also restart your computer with `sudo shutdown -r now` or just `reboot`.
### 12. Read the manual (man)
The `man` command could be the most important command of all. It gets you to the documentation for each of the commands on your Linux system. For instance, to read more about `mkdir`:
```
$ man mkdir
```
A related command is `info`, which provides a different set of manuals (as long as they're available) usually written more verbosely than the often terse man pages.
### What's your favorite Linux command?
There are many more commands on a Linux system—hundreds! What's your favorite command, the one you find yourself using time and time again?
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/essential-linux-commands
作者:[Don Watkins][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/command_line_prompt.png
[2]: https://www.redhat.com/sysadmin/recover-file-deletion-linux
[3]: https://gitlab.com/trashy/trashy
[4]: https://github.com/andreafrancia/trash-cli

[#]: subject: "Build a Quarkus reactive application using Kubernetes Secrets"
[#]: via: "https://opensource.com/article/22/5/quarkus-kubernetes-secrets"
[#]: author: "Daniel Oh https://opensource.com/users/daniel-oh"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Build a Quarkus reactive application using Kubernetes Secrets
======
Follow security policies while developing applications for the cloud by using Kubernetes Secrets.
![Improve your DevOps security game with Ansible Vault][1]
Image by: Opensource.com
Many organizations have security policies in place that dictate how to store sensitive information. When you're developing applications for the cloud, you're probably expected to follow those policies, and to do that you often have to externalize your data storage. Kubernetes has a built-in system to access external secrets, and learning to use that is key to a safe cloud-native app.
In this article, I'm going to demonstrate how to build a Quarkus reactive application with externalized sensitive information—for instance, a password or token—using [Kubernetes Secrets][2]. A secret is a good example of how cloud platforms can secure applications by removing sensitive data from your static code. Note that you can find a solution to this tutorial [in this GitHub repository][3].
### 1. Scaffold a new reactive Quarkus project
Use the Quarkus command-line interface (CLI) to scaffold a new project. If you haven't already installed the Quarkus CLI, follow these [instructions][5] according to your operating system.
Run the following Quarkus CLI command in your project directory to create a new project with the `kubernetes-config`, `resteasy-reactive`, and `openshift` extensions:
```
$ quarkus create app quarkus-secret-example \
-x resteasy-reactive,kubernetes-config,openshift
```
The output should look like this:
```
Looking for the newly published extensions in registry.quarkus.io
selected extensions:
- io.quarkus:quarkus-kubernetes-config
- io.quarkus:quarkus-resteasy-reactive
- io.quarkus:quarkus-openshift
applying codestarts...
📚  java
🔨  maven
📦  quarkus
📝  config-properties
🔧  dockerfiles
🔧  maven-wrapper
🚀  resteasy-reactive-codestart
-----------
[SUCCESS] ✅  quarkus project has been successfully generated in:
--> /tmp/quarkus-secret-example
-----------
Navigate into this directory and get started: quarkus dev
```
### 2. Create a Secret in Kubernetes
To manage Kubernetes Secrets, you have three options:
* Using kubectl
* Using a configuration file
* Using [Kustomize][6]
Use the `kubectl` command to create a new database credential (a username and password). Run the following command:
```
$ kubectl create secret generic db-credentials \
 --from-literal=username=admin \
 --from-literal=password=secret
```
If you haven't already installed a Kubernetes cluster locally, or you have no remote cluster, you can sign in to the [developer sandbox][7], a no-cost sandbox environment for Red Hat OpenShift and CodeReady Workspaces.
You can confirm that the Secret is created properly by using the following command:
```
$ kubectl get secret/db-credentials -o yaml
```
The output should look like this:
```
apiVersion: v1
data:
  password: c2VjcmV0
  username: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2022-05-02T13:46:18Z"
  name: db-credentials
  namespace: doh-dev
  resourceVersion: "1190920736"
  uid: 936abd44-1097-4c1f-a9d8-8008a01c0add
type: Opaque
```
The username and password are encoded by default.
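Keep in mind that base64 is an encoding, not encryption: anyone with access to the Secret can decode the values. The strings above decode back to the original credentials:

```
echo 'YWRtaW4=' | base64 -d    # prints: admin
echo 'c2VjcmV0' | base64 -d    # prints: secret
```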
### 3. Create a new RESTful API to access the Secret
Now you can add a new RESTful (Representational State Transfer) API to print out the username and password stored in the Kubernetes Secret. Quarkus enables developers to refer to the secret as normal configuration using the `@ConfigProperty` annotation.
Open a `GreetingResource.java` file in `src/main/java/org/acme`. Then, add the following method and configurations:
```
@ConfigProperty(name = "username")
String username;

@ConfigProperty(name = "password")
String password;

@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("/security")
public Map<String, String> security() {
    HashMap<String, String> map = new HashMap<>();
    map.put("db.username", username);
    map.put("db.password", password);
    return map;
}
```
Save the file.
### 4. Set the configurations for Kubernetes deployment
Open the `application.properties` file in the `src/main/resources` directory. Add the following configuration for the Kubernetes deployment. In the tutorial, I'll demonstrate using the developer sandbox, so the configurations tie to the OpenShift cluster.
If you want to deploy it to the Kubernetes cluster, you can package the application via Docker container directly. Then, you need to push the container image to an external container registry (for example, Docker Hub, quay.io, or Google container registry).
```
# Kubernetes Deployment
quarkus.kubernetes.deploy=true
quarkus.kubernetes.deployment-target=openshift
openshift.expose=true
quarkus.openshift.build-strategy=docker
quarkus.kubernetes-client.trust-certs=true
# Kubernetes Secret
quarkus.kubernetes-config.secrets.enabled=true
quarkus.kubernetes-config.secrets=db-credentials
```
Save the file.
### 5. Build and deploy the application to Kubernetes
To build and deploy the reactive application, you can also use the following Quarkus CLI:
```
$ quarkus build
```
This command triggers the application build to generate a `fast-jar` file. The application JAR file is then containerized using a Dockerfile, which was generated in the `src/main/docker` directory when you created the project. Finally, the application image is pushed to the integrated container registry inside the OpenShift cluster.
The output should end with a `BUILD SUCCESS` message.
When you deploy the application to the developer sandbox or normal OpenShift cluster, you can find the application in the Topology view in the Developer perspective, as shown in the figure below.
![A screenshot of Red Hat OpenShift Dedicated. In the left sidebar menu Topology is highlighted, and in the main screen there is a Quarkus icon][8]
Image by: (Daniel Oh, CC BY-SA 4.0)
### 6. Verify the sensitive information
To verify that your Quarkus application can refer to the sensitive information from the Kubernetes Secret, get the route URL using the following `kubectl` command:
```
$ kubectl get route
```
The output is similar to this:
```
NAME                     HOST/PORT                                                               PATH   SERVICES                 PORT   TERMINATION   WILDCARD
quarkus-secret-example   quarkus-secret-example-doh-dev.apps.sandbox.x8i5.p1.openshiftapps.com          quarkus-secret-example   8080                 None
```
Use the [curl command][9] to access the RESTful API:
```
$ curl http://YOUR_ROUTE_URL/hello/security
```
The output:
```
{db.password=secret, db.username=admin}
```
Awesome! The above `username` and `password` are the same as those you stored in the `db-credentials` secret.
### Where to learn more
This guide has shown how Quarkus enables developers to externalize sensitive information using Kubernetes Secrets. Find additional resources to develop cloud-native microservices using Quarkus on Kubernetes here:
* [7 guides for developing applications on the cloud with Quarkus][10]
* [Extend Kubernetes service discovery with Stork and Quarkus][11]
* [Deploy Quarkus applications to Kubernetes using a Helm chart][12]
You can also watch this step-by-step [tutorial video][13] on how to manage Kubernetes Secrets with Quarkus.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/quarkus-kubernetes-secrets
作者:[Daniel Oh][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/rh_003601_05_mech_osyearbook2016_security_cc.png
[2]: https://kubernetes.io/docs/concepts/configuration/secret/
[3]: https://github.com/danieloh30/quarkus-secret-example.git
[4]: https://enterprisersproject.com/article/2019/8/kubernetes-secrets-explained-plain-english?intcmp=7013a000002qLH8AAM
[5]: https://quarkus.io/guides/cli-tooling#installing-the-cli
[6]: https://kustomize.io/
[7]: https://developers.redhat.com/developer-sandbox/get-started
[8]: https://opensource.com/sites/default/files/2022-05/quarkus.png
[9]: https://opensource.com/article/20/5/curl-cheat-sheet
[10]: https://opensource.com/article/22/4/developing-applications-cloud-quarkus
[11]: https://opensource.com/article/22/4/kubernetes-service-discovery-stork-quarkus
[12]: https://opensource.com/article/21/10/quarkus-helm-chart
[13]: https://youtu.be/ak9R9-E_0_k

[#]: subject: "Collision: Linux App to Verify ISO and Other Files"
[#]: via: "https://www.debugpoint.com/2022/05/collision/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Collision: Linux App to Verify ISO and Other Files
======
The tutorial outlines the features and usage guide of Collision. It is a GUI-based, easy-to-use utility that lets you verify files using cryptographic hash functions.
### Why do you need to verify files?
Everyone downloads files over the internet every day, but many users never bother to verify their integrity or authenticity. That is, whether the file is legitimate and has not been tampered with or injected with malicious code.
Take, for example, the ISO files of any of the [Linux distributions][1] that come as standard installer images. All popular distribution makers also provide a hash file alongside the ISO file. Using that file, you can easily compare the hash value of your downloaded file and rest assured that your file is correct and not corrupted in any way.
Moreover, if you download a large file over an unstable internet connection, the file may get corrupted. In those scenarios, too, verification helps.
### Collision Features and How to Use
The app [Collision][2] uses cryptographic hash functions to help you verify files. A cryptographic hash function is an algorithm that maps a file's data to a fixed-length digest. The most popular ones are MD5, SHA-1, SHA-256, and SHA-512, and all of these are supported by the Collision app.
In addition, Collision presents a neat user interface that is simple and easy to use for every Linux user. Here's how it looks.
![Collision First Screen][3]
It has two primary features: a) upload a file to get its checksum (hash) values, and b) compare a checksum with an uploaded file.
For example, if you have a simple file, you can upload it via the “Open a File” button or “Open” for re-uploading another file.
As you can see in the below image, the text file has the below checksum for various hash functions. Now you can share the file over the internet/with anyone, along with the checksum values for verification.
![Hash values of a test file][4]
Moreover, if someone tampers with the file (even with a single byte) or the file gets corrupted during distribution, then the hash value changes entirely.
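You can see this for yourself with the command-line tools. Changing even a single character produces a completely different digest (hypothetical file name):

```
echo 'hello' > sample.txt
sha256sum sample.txt
echo 'hellp' > sample.txt    # one character changed
sha256sum sample.txt         # an entirely different hash
```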
Secondly, if you want to verify the integrity of a file you have downloaded, go to the Verify tab. Then upload the file and enter the checksum value you received for it.
If it matches, you should see a green tick showing its authenticity.
![Collision verifies a sample file with SHA-256][5]
In addition, here is another example where I modified the test file but kept the checksum value the same. This clearly shows that the checksum is not valid for this file.
![Collision showing that a file is not valid][6]
#### An important note
It is worth mentioning that hash methods don't verify file metadata attributes such as modification time and date. If someone tampers with a file and then reverts it to its original content, the hash methods will still report it as a valid file.
Now, let's see a typical example of validating an ISO file.
### Example of Using Collision to verify a sample ISO file of Ubuntu Linux
I am sure you download many ISO files while using Linux in general. So to illustrate, I have downloaded the popular Ubuntu ISO server image from the official Ubuntu download page.
![Ubuntu server ISO file and checksums][7]
The SHA256SUMS file has the below checksum value for the installer, as shown above.
![SHA-256 value of Ubuntu server ISO image][8]
After you download, open the Collision application and upload the ISO file via the Verify tab. Then copy the SHA-256 value and paste it to the checksum box on the left.
You should see that the file is authentic if you have correctly downloaded and followed the steps.
![Ubuntu server ISO image verified][9]
### How to Install Collision
The Collision app installation is effortless using Flatpak. You need to [set up Flatpak][10] for your Linux Distributions and click on the below link to install Collision.
[Install Collision via Flathub][11]
After installation, you should find it via the application menu of your distro.
### Is there another way to verify files without any app?
Yes, there are some built-in utilities available in all Linux distributions, which you can also use to verify the files and their integrity using the terminal.
The following terminal utilities can be used to determine the hash of any file. They are installed by default in all distros, and you can even use them in shell scripts for automation.
```
md5sum <file name>
sha1sum <file name>
sha256sum <file name>
```
Using the above utilities, you can find out the hash value. But you need to compare them to verify manually.
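The comparison can also be automated: the `*sum` tools accept a checksum file with the `-c` option and do the matching for you, which is exactly how a distribution's `SHA256SUMS` file is meant to be used (hypothetical file name):

```
echo 'some data' > sample.txt
sha256sum sample.txt > SHA256SUMS
sha256sum -c SHA256SUMS    # reports: sample.txt: OK
```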
![Verify files via command-line utilities][12]
### Closing Notes
I hope this guide helps you verify your files using the Collision GTK app. It is straightforward to use. Moreover, you can use the command-line methods to verify any files when you are in the terminal. It's also a best practice to always check file integrity wherever possible.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/05/collision/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/category/distributions
[2]: https://collision.geopjr.dev/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/05/Collision-First-Screen.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/05/Hash-values-of-a-test-file.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/05/Collision-verifies-a-sample-file-with-SHA-256.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/05/Collision-showing-that-a-file-is-not-valid.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/05/Ubuntu-server-ISO-file-and-checksums.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/05/SHA-256-valud-of-Ubuntu-server-ISO-image.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/05/Ubuntu-server-ISO-image-verified.jpg
[10]: https://flatpak.org/setup/
[11]: https://dl.flathub.org/repo/appstream/dev.geopjr.Collision.flatpakref
[12]: https://www.debugpoint.com/wp-content/uploads/2022/05/Verify-files-via-command-line-utilities.jpg

[#]: subject: "How to Install KVM on Ubuntu 22.04 (Jammy Jellyfish)"
[#]: via: "https://www.linuxtechi.com/how-to-install-kvm-on-ubuntu-22-04/"
[#]: author: "James Kiarie https://www.linuxtechi.com/author/james/"
[#]: collector: "lkxed"
[#]: translator: "turbokernel"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install KVM on Ubuntu 22.04 (Jammy Jellyfish)
======
KVM, an acronym for Kernel-based Virtual Machine, is an open source virtualization technology integrated into the Linux kernel. It's a type 1 (bare-metal) hypervisor that enables the Linux kernel to act as a hypervisor.
KVM allows users to create and run multiple guest machines, which can be either Windows or Linux. Each guest machine runs independently of the other virtual machines and the underlying OS (the host system), and has its own computing resources such as CPU, RAM, network interfaces, and storage, to mention a few.
This guide shows you how to install KVM on Ubuntu 22.04 LTS (Jammy Jellyfish). At the tail end of this guide, we will demonstrate how you can create a virtual machine once the installation of KVM is complete.
### 1) Update Ubuntu 22.04
To get off the ground, launch the terminal and update your local package index as follows.
```
$ sudo apt update
```
### 2) Check if Virtualization is enabled
Before you proceed any further, you need to check whether your CPU supports KVM virtualization. For this to be possible, your system needs either an Intel processor with VT-x (vmx) or an AMD processor with AMD-V (svm).
You can check by running the following command. If the output is greater than 0, virtualization is enabled. Otherwise, it is disabled and you need to enable it.
```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```
![SVM-VMX-Flags-Cpuinfo-linux][1]
From the above output, you can deduce that virtualization is enabled, since the result printed is greater than 0. If virtualization is not enabled, be sure to enable the virtualization feature in your system's BIOS settings.
In addition, you can verify if KVM virtualization is enabled by running the following command:
```
$ kvm-ok
```
For this to work, you need to have the cpu-checker package installed; otherwise, you will bump into the error "Command 'kvm-ok' not found".
The error message itself includes instructions on how to resolve the issue: install the cpu-checker package.
![KVM-OK-Command-Not-Found-Ubuntu][2]
Therefore, install the cpu-checker package as follows.
```
$ sudo apt install -y cpu-checker
```
Then run the kvm-ok command, and if KVM virtualization is enabled, you should get the following output.
```
$ kvm-ok
```
![KVM-OK-Command-Output][3]
### 3) Install KVM on Ubuntu 22.04
Next, run the command below to install KVM and additional virtualization packages on Ubuntu 22.04.
```
$ sudo apt install -y qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients bridge-utils
```
Let us break down the packages being installed:
* qemu-kvm: an open source emulator and virtualization package that provides hardware emulation.
* virt-manager: a graphical interface for managing virtual machines via the libvirt daemon.
* libvirt-daemon-system: provides the configuration files required to run the libvirt daemon.
* virtinst: a set of command-line utilities for provisioning and modifying virtual machines.
* libvirt-clients: client-side libraries and APIs for managing and controlling virtual machines and hypervisors from the command line.
* bridge-utils: a set of tools for creating and managing bridge devices.
### 4) Enable the virtualization daemon (libvirtd)
With all the packages installed, enable and start the Libvirt daemon.
```
$ sudo systemctl enable --now libvirtd
$ sudo systemctl start libvirtd
```
Confirm that the virtualization daemon is running as shown.
```
$ sudo systemctl status libvirtd
```
![Libvirtd-Status-Ubuntu-Linux][4]
In addition, you need to add the currently logged-in user to the kvm and libvirt groups so that they can create and manage virtual machines.
```
$ sudo usermod -aG kvm $USER
$ sudo usermod -aG libvirt $USER
```
The $USER environment variable points to the name of the currently logged-in user. To apply this change, you need to log out and log back in.
### 5) Create Network Bridge (br0)
If you plan to access the KVM virtual machines from outside your Ubuntu 22.04 system, you must map the VM's interface to a network bridge. A virtual bridge named virbr0 is created automatically when KVM is installed, but it is intended for testing purposes only.
To create a network bridge, create a file named 01-netcfg.yaml with the following content in the /etc/netplan directory.
```
$ sudo vi /etc/netplan/01-netcfg.yaml
network:
  ethernets:
    enp0s3:
      dhcp4: false
      dhcp6: false
  # add configuration for bridge interface
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: false
      addresses: [192.168.1.162/24]
      macaddress: 08:00:27:4b:1d:45
      routes:
        - to: default
          via: 192.168.1.1
          metric: 100
      nameservers:
        addresses: [4.2.2.2]
      parameters:
        stp: false
      dhcp6: false
  version: 2
```
Save and exit the file.
Note: These details are as per my setup, so replace the IP addresses, interface name, and MAC address to match your own setup.
To apply the above changes, run `netplan apply`:
```
$ sudo netplan apply
```
To verify the network bridge br0, run the `ip` command below:
```
$ ip add show
```
![Network-Bridge-br0-ubuntu-linux][5]
### 6) Launch KVM Virtual Machines Manager
With KVM installed, you can begin creating virtual machines using the virt-manager GUI tool. To get started, use the GNOME search utility and search for "Virtual Machine Manager".
Click on the icon that pops up.
![Access-Virtual-Machine-Manager-Ubuntu-Linux][6]
This launches the Virtual Machine Manager Interface.
![Virtual-Machine-Manager-Interface-Ubuntu-Linux][7]
Click on “File” then select “New Virtual Machine”. Alternatively, you can click on the button shown.
![New-Virtual-Machine-Icon-Virt-Manager][8]
This pops open the virtual machine installation wizard which presents you with the following four options:
* Local install Media ( ISO image or CDROM )
* Network Install ( HTTP, HTTPS, and FTP )
* Import existing disk image
* Manual Install
In this guide, we have downloaded a Debian 11 ISO image, and therefore, if you have an ISO image, select the first option and click Forward.
![Local-Install-Media-ISO-Virt-Manager][9]
In the next step, click Browse to navigate to the location of the ISO image,
![Browse-ISO-File-Virt-Manager-Ubuntu-Linux][10]
In the next window, click Browse local in order to select the ISO image from the local directories on your Linux PC.
![Browse-Local-ISO-Virt-Manager][11]
As demonstrated below, we have selected the Debian 11 ISO image. Then click Open
![Choose-ISO-File-Virt-Manager][12]
Once the ISO image is selected, click Forward to proceed to the next step.
![Forward-after-browsing-iso-file-virt-manager][13]
Next, define the RAM and the number of CPU cores for your virtual machine and click Forward.
![Virtual-Machine-RAM-CPU-Virt-Manager][14]
In the next step, define the disk space for your virtual machine and click Forward.
![Storage-for-Virtual-Machine-KVM-Virt-Manager][15]
To attach the virtual machine's NIC to the network bridge, click Network selection and choose the br0 bridge.
![Network-Selection-KVM-Virtual-Machine-Virt-Manager][16]
Finally, click Finish to complete the virtual machine setup.
![Choose-Finish-to-OS-Installation-KVM-VM][17]
Shortly afterward, the virtual machine creation will get underway.
![Creating-Domain-Virtual-Machine-Virt-Manager][18]
Once completed, the virtual machine will start with the OS installer displayed. Below is the Debian 11 installer listing the options for installation. From here, you can proceed to install your preferred system.
![Virtual-Machine-Console-Virt-Manager][19]
### Conclusion
And that's it. In this guide, we have demonstrated how you can install the KVM hypervisor on Ubuntu 22.04. Your feedback on this guide is most welcome.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-install-kvm-on-ubuntu-22-04/
作者:[James Kiarie][a]
选题:[lkxed][b]
译者:[turbokernel](https://github.com/turbokernel)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/05/SVM-VMX-Flags-Cpuinfo-linux.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/05/KVM-OK-Command-Not-Found-Ubuntu.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/05/KVM-OK-Command-Output.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Libvirtd-Status-Ubuntu-Linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Network-Bridge-br0-ubuntu-linux.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Access-Virtual-Machine-Manager-Ubuntu-Linux.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-Manager-Interface-Ubuntu-Linux.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/05/New-Virtual-Machine-Icon-Virt-Manager.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Local-Install-Media-ISO-Virt-Manager.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Browse-ISO-File-Virt-Manager-Ubuntu-Linux.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Browse-Local-ISO-Virt-Manager.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Choose-ISO-File-Virt-Manager.png
[13]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Forward-after-browsing-iso-file-virt-manager.png
[14]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-RAM-CPU-Virt-Manager.png
[15]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Storage-for-Virtual-Machine-KVM-Virt-Manager.png
[16]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Network-Selection-KVM-Virtual-Machine-Virt-Manager.png
[17]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Choose-Finish-to-OS-Installation-KVM-VM.png
[18]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Creating-Domain-Virtual-Machine-Virt-Manager.png
[19]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-Console-Virt-Manager.png

[#]: subject: "The Basic Concepts of Shell Scripting"
[#]: via: "https://www.opensourceforu.com/2022/05/the-basic-concepts-of-shell-scripting/"
[#]: author: "Sathyanarayanan Thangavelu https://www.opensourceforu.com/author/sathyanarayanan-thangavelu/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The Basic Concepts of Shell Scripting
======
If you want to automate regular tasks and make your life easier, using shell scripts is a good option. This article introduces you to the basic concepts that will help you to write efficient shell scripts.
![Shell-scripting][1]
A shell script is a computer program designed to be run by the Unix shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing of text. A script that sets up the environment, runs the program, and does any necessary cleanup or logging is called a wrapper.
### Identification of shell prompt
You can identify whether the shell prompt on a Linux-based computer belongs to a normal user or the superuser by looking at the prompt symbol in the terminal window. The # symbol is used for the superuser and the $ symbol is used for a user with standard privileges.
![Figure 1: Manual of date command][2]
### Basic commands
The shell comes with many commands that can be executed in the terminal window to manage your computer. Details of each command can be found in the manual that accompanies it. To view the manual, run:
```
$man <command>
```
A few frequently used commands are:
```
$date   #display the current date and time
$cal    #display the current month's calendar
$df     #display disk usage
$free   #display memory usage
$ls     #list files and directories
$mkdir  #create a directory
```
Each command comes with several options that can be used along with it. You can refer to the manual for more details. See Figure 1 for the output of:
```
$man date
```
### Redirection operators
The redirection operators are really useful when you want to capture the output of a command in a file or send it to another destination.
| Command | Description |
| :- | :- |
| $ls -l /usr/bin > file | redirects stdout to file |
| $ls -l /usr/bin 2> file | redirects stderr to file |
| $ls -l /usr/bin > ls-output 2>&1 | redirects both stdout and stderr to file |
| $ls -l /usr/bin &> ls-output | redirects both stdout and stderr to file (shorthand) |
| $ls -l /usr/bin 2> /dev/null | discards stderr (/dev/null is the "bit bucket") |
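The operators in the table can be tried directly. Here is a small sketch (the file names are arbitrary) that separates the two output streams and then recombines them:

```shell
# write one line to stdout and one to stderr, captured in separate files
{ echo "to stdout"; echo "to stderr" >&2; } > out.txt 2> err.txt
cat out.txt   # to stdout
cat err.txt   # to stderr

# combine both streams into a single file with 2>&1
{ echo "both"; echo "streams" >&2; } > combined.txt 2>&1
```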
### Brace expansion
Brace expansion is one of the more powerful features the UNIX shell offers. It lets you perform many operations with a single short command. For example:
```
$echo Front-{A,B,C}-Back
Front-A-Back Front-B-Back Front-C-Back
$echo {Z..A}
Z Y X W V U T S R Q P O N M L K J I H G F E D C B A
$mkdir {2009..2011}-0{1..9} {2009..2011}-{10..12}
```
This creates 36 directories, one for each month of 2009 through 2011.
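You can verify that the mkdir line really produces one directory per month by counting them in a scratch directory (this sketch assumes a bash-compatible shell, since brace ranges are a bash feature):

```shell
workdir=$(mktemp -d)   # scratch directory so nothing is left behind
cd "$workdir"
mkdir {2009..2011}-0{1..9} {2009..2011}-{10..12}
ls -d 20* | wc -l      # 36 directories: 3 years x 12 months
```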
### Environment variables
An environment variable is a dynamically named value that can affect the way running processes behave on a computer. Such a variable is part of the environment in which a process runs.
| Command | Description |
| :- | :- |
| printenv | prints part or all of the environment |
| set | sets shell options |
| export | exports the environment to subsequently executed programs |
| alias | creates an alias for a command |
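A quick tour of these commands (the variable name and alias are invented for illustration):

```shell
MYVAR=hello              # a shell variable: not yet visible to child processes
export MYVAR             # now it is part of the environment
sh -c 'echo "$MYVAR"'    # a child process can read it: prints hello
alias ll='ls -l'         # create a shorthand for a longer command
printenv MYVAR           # prints hello
```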
### Network commands
Network commands are very useful for troubleshooting network issues and for checking which port a client is connected on.
| Command | Description |
| :- | :- |
| ping | sends ICMP echo packets |
| traceroute | prints the route packets take to a network host |
| netstat | prints network connections, routing tables, and interface statistics |
| ftp/lftp | Internet file transfer programs |
| wget | non-interactive network downloader |
| ssh | OpenSSH client (remote login program) |
| scp | secure copy |
| sftp | secure file transfer program |
### Grep commands
The grep command is useful for finding errors and debugging logs on a system. It is one of the most powerful tools the shell offers.
| Command | Description |
| :- | :- |
| grep -h '.zip' file.list | . matches any single character |
| grep -h '^zip' file.list | lines starting with zip |
| grep -h 'zip$' file.list | lines ending with zip |
| grep -h '^zip$' file.list | lines containing only zip |
| grep -h '[^bz]zip' file.list | zip preceded by a character other than b or z |
| grep -h '^[A-Za-z0-9]' file.list | lines starting with an alphanumeric character |
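To make the anchors concrete, here is a small experiment (the sample file contents are made up):

```shell
printf 'zip\nzipper\ngzip\nbzip2\nunzip\n' > file.list
grep -c '^zip' file.list      # 2: zip, zipper
grep -c 'zip$' file.list      # 3: zip, gzip, unzip
grep -c '^zip$' file.list     # 1: only the exact word zip
grep -c '[^bz]zip' file.list  # 2: gzip and unzip (zip preceded by a non-b, non-z character)
```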
### Quantifiers
Here are some examples of quantifiers:
| Quantifier | Description |
| :- | :- |
| ? | matches an element zero or one time |
| * | matches an element zero or more times |
| + | matches an element one or more times |
| {n} | matches an element a specific number (n) of times |
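A quick experiment with `grep -E` (the sample file is invented for illustration) shows the quantifiers in action:

```shell
printf 'color\ncolour\ncolouur\n' > spellings.txt
grep -cE 'colou?r' spellings.txt    # 2: color, colour (zero or one u)
grep -cE 'colou*r' spellings.txt    # 3: all three lines (zero or more u)
grep -cE 'colou+r' spellings.txt    # 2: colour, colouur (one or more u)
grep -cE 'colou{2}r' spellings.txt  # 1: colouur (exactly two u)
```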
### Text processing
Text processing is another important task in the IT world today. Programmers and administrators can use the following commands to slice, cut, and process text.
| Command | Description |
| :- | :- |
| cat -A $FILE | reveals any control characters in the file |
| sort file1.txt file2.txt file3.txt > final_sorted_list.txt | sorts all the files at once |
| ls -l \| sort -nr -k 5 | sorts numerically in reverse on the 5th column |
| sort --key=1,1 --key=2n distro.txt | sorts on field 1, then numerically on field 2 |
| sort foo.txt \| uniq -c | counts repeated lines |
| cut -f 3 distro.txt | cuts out column 3 |
| cut -c 7-10 | cuts out characters 7 to 10 |
| cut -d : -f 1 /etc/passwd | uses : as the field delimiter |
| sort -k 3.7nbr -k 3.1nbr -k 3.4nbr distro.txt | sorts on the 3rd field starting at its 7th, then 1st, then 4th character (numeric, reverse) |
| paste file1.txt file2.txt > newfile.txt | merges two files side by side |
| join file1.txt file2.txt | joins two files on a common field |
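A short sketch combining cut and sort from the table (the sample distro.txt contents are invented):

```shell
printf 'Ubuntu:Canonical:2004\nFedora:RedHat:2003\nSUSE:Novell:1992\n' > distro.txt
cut -d : -f 1 distro.txt        # first field of each line: the distribution names
sort -t : -k 3n distro.txt      # numeric sort on the third field (the year)
```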
### Hacks and tips
In Linux, you can recall your command history using simple commands or control-key shortcuts.
| Command | Description |
| :- | :- |
| clear | clears the screen |
| history | displays the command history |
| script filename | captures all command execution in a file |
Tips:
> CTRL+R : reverse-search the command history; CTRL+P : recall the previous command
> !number : run the command with that history number
> !! : repeat the last command
> !string : run the most recent command starting with string
> !?string : run the most recent command containing string
```
export HISTCONTROL=ignoredups  # do not record consecutive duplicate commands
export HISTSIZE=10000          # keep up to 10,000 commands in the history
```
As you get familiar with Linux commands, you will be able to write wrapper scripts. Manual tasks such as taking regular backups, cleaning up files, and monitoring system usage can all be automated with scripts. This article should help you start scripting before you move on to more advanced concepts.
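As a starting point, here is a minimal wrapper sketch. The file names and log messages are made up, and the "backup" only archives a scratch directory, but the pattern (set up, run, log) is the same one a real wrapper would follow:

```shell
workdir=$(mktemp -d)                   # scratch area for the example
mkdir "$workdir/data"
echo "important" > "$workdir/data/file.txt"

log="$workdir/backup.log"
echo "$(date): starting backup" >> "$log"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" data 2>> "$log"
echo "$(date): backup finished" >> "$log"
```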
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/the-basic-concepts-of-shell-scripting/
作者:[Sathyanarayanan Thangavelu][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/sathyanarayanan-thangavelu/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Shell-scripting.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Manual-of-date-command.jpg

[#]: subject: "pdfgrep: Use Grep Like Search on PDF Files in Linux Command Line"
[#]: via: "https://itsfoss.com/pdfgrep/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
pdfgrep: Use Grep Like Search on PDF Files in Linux Command Line
======
Even if you use the Linux command line moderately, you must have come across the [grep command][1].
Grep is used to search for a pattern in a text file. It can do crazy powerful things, like search for new lines, search for lines where there are no uppercase characters, search for lines where the initial character is a number, and much, much more. Check out some [common grep command examples][2] if you are interested.
But grep works only on plain text files. It won't work on PDF files because they are binary files.
This is where pdfgrep comes into the picture. It works like grep for PDF files. Let us have a look at that.
### Meet pdfgrep: grep like regex search for PDF files
[pdfgrep][3] tries to be compatible with GNU grep where it makes sense. Several of your favorite grep options are supported (such as -r, -i, -n, and -c). You can use it to search for text inside the contents of PDF files.
Though it doesn't come pre-installed like grep, it is available in the repositories of most Linux distributions.
You can use your distributions [package manager][4] to install this awesome tool.
For users of Ubuntu and Debian-based distributions, use the apt command:
```
sudo apt install pdfgrep
```
For Red Hat and Fedora, you can use the dnf command:
```
sudo dnf install pdfgrep
```
Btw, do you run Arch? You can [use the pacman command][5]:
```
sudo pacman -S pdfgrep
```
### Using pdfgrep command
Now that pdfgrep is installed, let me show you how to use it in the most common scenarios.
If you have any experience with grep, then most of the options will feel familiar to you.
To demonstrate, I will be using [The Linux Command Line][6] PDF book, written by William Shotts. Its one of the [few Linux books that are legally available for free][7].
The syntax for pdfgrep is as follows:
```
pdfgrep [PATTERN] [FILE.pdf]
```
#### Normal search
Let's try a basic search for the text 'xdg' in the PDF file.
```
pdfgrep xdg TLCL-19.01.pdf
```
![simple search using pdfgrep][8]
This resulted in only one match… But a match nonetheless!
#### Case insensitive search
Most of the time, the term xdg is written in capital letters. So, let's try a case-insensitive search using the --ignore-case option.
You can also use the shorter alternative, which is -i.
```
pdfgrep --ignore-case xdg TLCL-19.01.pdf
```
![case insensitive search using pdfgrep][9]
As you can see, I got more matches after turning on case insensitive searching.
#### Get a count of all matches
Sometimes, you want to know how many matches of a word were found. Let's see how many times the word Linux is mentioned (with case-insensitive matching).
The option to use in this scenario is --count (or -c for short).
```
pdfgrep --ignore-case linux TLCL-19.01.pdf --count
```
![getting a count of matches using pdfgrep][10]
Whoa! Linux was mentioned 1,200 times in this book… That was unexpected.
#### Show page number
Regular text files are monolithic; they have no pages. A PDF file, however, has pages, so you can see on which page a pattern was found. Use the --page-number option to show the page number where the pattern was matched. You can also use the `-n` option as a shorter alternative.
Let us see how it works with an example. I want to see the pages where the word awk matches. I added a space at the end of the pattern to prevent matching words like awkward; getting unintentional matches would be *awkward*. Instead of escaping the space with a backslash, you can also enclose the pattern in single quotes, as in 'awk '.
```
pdfgrep --page-number --ignore-case awk\ TLCL-19.01.pdf
```
![show which pattern was found on which page using pdfgrep][11]
The word awk was found twice on page number 333, once on page 515 and once again on page 543 in the PDF file.
#### Show match count per page
Do you want to know how many matches were found on each page, instead of seeing the matches themselves? If so, it's your lucky day!
That is exactly what the --page-count option does. As a shorter alternative, you can use the -p option. When you provide this option to pdfgrep, it is assumed that you requested `-n` as well.
Let's take a look at the output. For this example, I will see where the [ln command][12] is used in the book.
```
pdfgrep --page-count ln\ TLCL-19.01.pdf
```
![show which page has how many matches using pdfgrep][13]
The output is in the form `page number: match count`. This means that on page number 4, the command (or rather, the pattern) was found only once, while on page number 57, pdfgrep found 4 matches.
#### Get some context
When the number of matches found is quite big, it is nice to have some context. For that, pdfgrep provides some options.
* `--after-context NUM`: Print NUM lines that come after the matching lines (or use `-A`)
* `--before-context NUM`: Print NUM lines that come before the matching lines (or use `-B`)
* `--context NUM`: Print NUM lines before and after the matching lines (or use `-C`)
Let's find XDG in the PDF file, but this time with a little more context ( ͡❛ ͜ʖ ͡❛)
**Context after matches**
Using the --after-context option along with a number, I can see which lines come after the line(s) that match. Below is an example.
```
pdfgrep --after-context 2 XDG TLCL-19.01.pdf
```
![using '--after-context' option in pdfgrep][14]
**Context before matches**
The same can be done when you need to know which lines appear before the matching line. In that case, use the --before-context option along with a number. Below is an example demonstrating this option.
```
pdfgrep --before-context 2 XDG TLCL-19.01.pdf
```
![using '--before-context' option in pdfgrep][15]
**Context around matches**
If you want to see the lines both before and after the matching line, use the --context option and provide a number. Below is an example.
```
pdfgrep --context 2 XDG TLCL-19.01.pdf
```
![using '--context' option in pdfgrep][16]
#### Caching
A PDF file consists of images as well as text. When you have a large PDF file, it might take some time to skip other media, extract text and then “grep” it. Doing it often and waiting every time can get frustrating.
For that reason, the --cache option exists. It caches the rendered text to speed up grep-ing. This is especially noticeable on large files.
```
pdfgrep --cache --ignore-case grep TLCL-19.01.pdf
```
![getting faster results using the '--cache' option][17]
Though not a rigorous benchmark, I carried out the same search four times: twice with caching enabled and twice without. To show the speed difference, I used the time command; look closely at the time reported as the `real` value.
As you can see, the runs that included the --cache option completed faster than the ones that didn't.
Additionally, I suppressed the output using the --quiet option for faster completion.
#### Password protected PDF files
Yes, pdfgrep can even grep password-protected files. All you have to do is use the --password option, followed by the password.
I do not have a password-protected file to demonstrate with, but you can use this option in the following manner:
```
pdfgrep --password [PASSWORD] [PATTERN] [FILE.pdf]
```
### Conclusion
pdfgrep is a very handy tool if you are dealing with PDF files and want the functionality of grep, but for PDF files. A reason why I like pdfgrep is that it tries to be compatible with GNU Grep.
Give it a try and let me know what you think of pdfgrep.
--------------------------------------------------------------------------------
via: https://itsfoss.com/pdfgrep/
作者:[Pratham Patel][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lkxed
[1]: https://linuxhandbook.com/what-is-grep/
[2]: https://linuxhandbook.com/grep-command-examples/
[3]: https://pdfgrep.org/
[4]: https://itsfoss.com/package-manager/
[5]: https://itsfoss.com/pacman-command/
[6]: https://www.linuxcommand.org/tlcl.php
[7]: https://itsfoss.com/learn-linux-for-free/
[8]: https://itsfoss.com/wp-content/uploads/2022/05/01_pdfgrep_normal_search-1-800x308.webp
[9]: https://itsfoss.com/wp-content/uploads/2022/05/02_pdfgrep_case_insensitive-800x413.webp
[10]: https://itsfoss.com/wp-content/uploads/2022/05/03_pdfgrep_count-800x353.webp
[11]: https://itsfoss.com/wp-content/uploads/2022/05/04_pdfgrep_page_number-800x346.webp
[12]: https://linuxhandbook.com/ln-command/
[13]: https://itsfoss.com/wp-content/uploads/2022/05/05_pdfgrep_pg_count-800x280.webp
[14]: https://itsfoss.com/wp-content/uploads/2022/05/06_pdfgrep_after_context-800x340.webp
[15]: https://itsfoss.com/wp-content/uploads/2022/05/07_pdfgrep_before_context-800x356.webp
[16]: https://itsfoss.com/wp-content/uploads/2022/05/08_pdfgrep_context-800x453.webp
[17]: https://itsfoss.com/wp-content/uploads/2022/05/09_pdfgrep_cache-800x575.webp

[#]: subject: "Improve network performance with this open source framework"
[#]: via: "https://opensource.com/article/22/5/improve-network-performance-pbench"
[#]: author: "Hifza Khalid https://opensource.com/users/hifza-khalid"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Improve network performance with this open source framework
======
Use Pbench to predict throughput and latency for specific workloads.
![Mesh networking connected dots][1]
In the age of high-speed internet, most large information systems are structured as distributed systems with components running on different machines. The performance of these systems is generally assessed by their throughput and response time. When performance is poor, debugging these systems is challenging due to the complex interactions between different subcomponents and the possibility of the problem occurring at various places along the communication path.
On the fastest networks, the performance of a distributed system is limited by the host's ability to generate, transmit, process, and receive data, which is in turn dependent on its hardware and configuration. What if it were possible to tune the network performance of a distributed system using a repository of network benchmark runs and suggest a subset of hardware and OS parameters that are the most effective in improving network performance?
To answer this question, our team used [Pbench][2], a benchmarking and performance analysis framework developed by the performance engineering team at Red Hat. This article will walk step by step through our process of determining the most effective methods and implementing them in a predictive performance tuning tool.
### What is the proposed approach?
Given a dataset of network benchmark runs, we propose the following steps to solve this problem.
1. Data preparation: Gather the configuration information, workload, and performance results for the network benchmark; clean the data; and store it in a format that is easy to work with
2. Finding significant features: Choose an initial set of OS and hardware parameters and use various feature selection methods to identify the significant parameters
3. Develop a predictive model: Develop a machine learning model that can predict network performance for a given client and server system and workload
4. Recommend configurations: Given the user's desired network performance, suggest a configuration for the client and the server with the closest performance in the database, along with data showing the potential window of variation in results
5. Evaluation: Determine the model's effectiveness using cross-validation, and suggest ways to quantify the improvement due to configuration recommendations
We collected the data for this project using Pbench. Pbench takes as input a benchmark type with its workload, performance tools to run, and hosts on which to execute the benchmark, as shown in the figure below. It outputs the benchmark results, tool results, and the system configuration information for all the hosts.
![An infographic showing inputs and outputs for Pbench. Benchmark type (with workload and systems) and performance tools to run along pbench (e.g., sar, vamstat) go into the central box representing pbench. Three things come out of pbench: configuration of all the systems involved, tool results and benchmark performance results][4]
Image by: (Hifza Khalid, CC BY-SA 4.0)
Out of the different benchmark scripts that Pbench runs, we used data collected using the uperf benchmark. Uperf is a network performance tool that takes the description of the workload as input and generates the load accordingly to measure system performance.
### Data preparation
There are two disjoint sets of data generated by Pbench. The configuration data from the systems under test is stored in a file system. The performance results, along with the workload metadata, are indexed into an Elasticsearch instance. The mapping between the configuration data and the performance results is also stored in Elasticsearch. To interact with the data in Elasticsearch, we used Kibana. Using both of these datasets, we combined the workload metadata, configuration data, and performance results for each benchmark run.
### Finding significant features
To select an initial set of hardware specifications and operating system configurations, we used performance-tuning configuration guides and feedback from experts at Red Hat. The goal of this step was to start working with a small set of parameters and refine it with further analysis. The set was based on parameters from almost all major system subcomponents, including hardware, memory, disk, network, kernel, and CPU.
Once we selected the preliminary set of features, we used one of the most common dimensionality-reduction techniques to eliminate the redundant parameters: remove parameters with constant values. While this step eliminated some of the parameters, given the complexity of the relationship between system information and performance, we resolved to use advanced feature selection methods.
#### Correlation-based feature selection
Correlation is a common measure of the association between two features. Two features are highly correlated if they are linearly dependent. If the two features increase together, their correlation is close to +1; if one decreases as the other increases, it is close to -1. If the two features are uncorrelated, their correlation is close to 0.
We used the correlation between the system configuration and the target variable to identify and cut down insignificant features further. To do so, we calculated the correlation between the configuration parameters and the target variable and eliminated all parameters with a value less than |0.1|, which is a commonly used threshold to identify the uncorrelated pairs.
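As an illustration of this threshold step (with a toy dataset, not the actual Pbench data), the |0.1| filter can be sketched with pandas:

```python
# sketch of the |0.1| correlation filter, on a toy stand-in for the real data
import pandas as pd

df = pd.DataFrame({
    "cpus":  [2, 4, 8, 16, 32, 64],       # a feature that drives throughput
    "noise": [5, 3, 4, 4, 4, 4],          # a feature unrelated to throughput
    "tput":  [10, 19, 41, 80, 161, 320],  # the target variable
})
corr = df.corr()["tput"].drop("tput")     # correlation of each feature with the target
kept = corr[corr.abs() >= 0.1].index.tolist()
print(kept)  # only 'cpus' survives the threshold
```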
#### Feature-selection methods
Since correlation does not imply causation, we needed additional feature-selection methods to extract the parameters affecting the target variables. We could choose between wrapper methods like recursive feature elimination and embedded methods like Lasso (Least Absolute Shrinkage and Selection Operator) and tree-based methods.
We chose to work with tree-based embedded methods for their simplicity, flexibility, and low computational cost compared to wrapper methods. These methods have built-in feature selection methods. Among tree-based methods, we had three options: a classification and regression tree (CART), Random Forest, and XGBoost.
We calculated our final set of significant features for the client and server systems by taking a union of the results received from the three tree-based methods, as shown in the following table.
| Parameter | Client/server | Description |
| :- | :- | :- |
| Advertised auto-negotiation | client | Whether the link advertised auto-negotiation |
| CPU(s) | server | Number of logical cores on the machine |
| Network speed | server | Speed of the Ethernet device |
| Model name | client | Processor model |
| rx_dropped | server | Packets dropped after entering the network stack |
| Model name | server | Processor model |
| System type | server | Virtual or physical system |
### Develop a predictive model
For this step, we used the Random Forest (RF) prediction model since it is known to perform better than CART and is also easier to visualize.
Random Forest (RF) builds multiple decision trees and merges them to get a more stable and accurate prediction. It builds the trees the same way CART does, but to ensure that the trees are uncorrelated to protect each other from their individual errors, it uses a technique known as bagging. Bagging uses random samples from the data with replacement to train the individual trees. Another difference between trees in a Random Forest and a CART decision tree is the choice of features considered for each split. CART considers every possible feature for each split. However, each tree in a Random Forest picks only from a random subset of features. This leads to even more variation among the Random Forest trees.
The RF model was constructed separately for both the target variables.
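A toy sketch (synthetic data, not the original pipeline) of fitting a Random Forest and inspecting which feature drives the target:

```python
# toy illustration: a Random Forest picks out the feature that drives the target
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))       # three stand-in configuration features
y = 5 * X[:, 0] + rng.normal(0, 0.1, 200)  # target depends mostly on feature 0
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.feature_importances_)          # importance of feature 0 dominates
```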
### Recommend configurations
For this step, given desired throughput and response time values, along with the workload of interest, our tool searches through the database of benchmark runs to return the configuration with the performance results closest to what the user requires. It also returns the standard deviation for various samples of that run, suggesting potential variation in the actual results.
### Evaluation
To evaluate our predictive model, we used a repeated [K-Fold cross-validation][5] technique. It is a popular choice to get an accurate estimate of the efficiency of the predictive model.
To evaluate the predictive model with a dataset of 9,048 points, we used k equal to 10 and repeated the cross-validation method three times. The accuracy was calculated using the two metrics given below.
* R2 score: The proportion of the variance in the dependent variable that is predictable from the independent variable(s). Its best possible value is 1.
* Root mean squared error (RMSE): It measures the average squared difference between the estimated values and the actual values and returns its square root.
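To make the two metrics concrete, here is a small worked example on invented predictions:

```python
# the two metrics on a toy prediction, to make the definitions concrete
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])
r2 = r2_score(y_true, y_pred)                       # close to 1 for a good fit
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # average error, in target units
print(round(r2, 3), round(rmse, 3))  # 0.995 0.158
```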
Based on the above two criteria, the results for the predictive model with throughput and latency as target variables are as follows:
* Throughput (trans/sec):
* R2 score: 0.984
* RMSE: 0.012
* Latency (usec):
* R2 score: 0.930
* RMSE: 0.025
### What does the final tool look like?
We implemented our approach in a tool shown in the following figure. The tool is implemented in Python. It takes as input the dataset containing the information about benchmark runs as a CSV file, including client and server configuration, workload, and the desired values for latency and throughput. The tool uses this information to predict the latency and throughput results for the user's client server system. It then searches through the database of benchmark runs to return the configuration that has performance results closest to what the user requires, along with the standard deviation for that run. The standard deviation is part of the dataset and is calculated using repeated samples for one iteration or run.
![An infographic showing inputs and outputs for the Performance Predictor and Tuner (PPT). The inputs are client and server sosreports (tarball), workload, and expected latency and throughput. The outputs are latency (usec) and throughput (trans/sec), configuration for the client and the server, and the standard deviation for results.][6]
Image by: (Hifza Khalid, CC BY-SA 4.0)
### What were the challenges with this approach?
While working on this problem, there were several challenges that we addressed. The first major challenge was gathering benchmark data, which required learning Elasticsearch and Kibana, the two industrial tools used by Red Hat to index, store, and interact with [Pbench][7] data. Another difficulty was dealing with the inconsistencies in data, missing data, and errors in the indexed data. For example, workload data for the benchmark runs was indexed in Elasticsearch, but one of the crucial workload parameters, runtime, was missing. For that, we had to write extra code to access it from the raw benchmark data stored on Red Hat servers.
Once we overcame the above challenges, we spent a large chunk of our effort trying out almost all the feature selection techniques available and figuring out a representative set of hardware and OS parameters for network performance. It was challenging to understand the inner workings of these techniques, their limitations, and their applications and analyze why most of them did not apply to our case. Because of space limitations and shortage of time, we did not discuss all of these methods in this article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/improve-network-performance-pbench
作者:[Hifza Khalid][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hifza-khalid
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/mesh_networking_dots_connected.png
[2]: https://distributed-system-analysis.github.io/pbench/
[3]: https://distributed-system-analysis.github.io/pbench/
[4]: https://opensource.com/sites/default/files/2022-05/pbench%20figure.png
[5]: https://vitalflux.com/k-fold-cross-validation-python-example/
[6]: https://opensource.com/sites/default/files/2022-05/PPT.png
[7]: https://github.com/distributed-system-analysis/pbench
[8]: https://github.com/distributed-system-analysis/pbench

[#]: subject: "Machine Learning: Classification Using Python"
[#]: via: "https://www.opensourceforu.com/2022/05/machine-learning-classification-using-python/"
[#]: author: "Gayatri Venugopal https://www.opensourceforu.com/author/gayatri-venugopal/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Machine Learning: Classification Using Python
======
In machine learning (ML), a set of data is analysed to predict a result. Python is considered one of the best programming language choices for ML. In this article, we will discuss machine learning with respect to classification using Python.
![machine-learning-classification][1]
Let's say you want to teach a child to differentiate between apples and oranges. There are various ways to do this. You could ask the child to touch both kinds of fruit to get familiar with their shape and softness. You could also show them multiple examples of apples and oranges, so that they can visually spot the differences. The technological equivalent of this process is known as machine learning.
Machine learning teaches computers to solve a particular problem, and to get better at it through experience. The example discussed here is a classification problem, where the machine is given various labelled examples, and is expected to label an unlabelled sample using the knowledge it acquired from the labelled samples. A machine learning problem can also take the form of regression, where it is expected to predict a real-valued solution to a given problem based on known samples and their solutions. Classification and regression are broadly termed as supervised learning. Machine learning can also be unsupervised, where the machine identifies patterns in unlabelled data, and forms clusters of samples with similar patterns. Another form of machine learning is reinforcement learning, where the machine learns from its environment by making mistakes.
### Classification
Classification is the process of predicting the label of a given set of points based on the information obtained from known points. The class, or label, associated with a data set could be binary or multiple in nature. As an example, if we have to label the sentiment associated with a sentence, we could label it as positive, negative or neutral. On the other hand, problems where we have to predict whether a fruit is an apple or an orange will have binary labels. Table 1 gives a sample data set for a classification problem.
In this table, the value of the last column, i.e., loan approved, is expected to be predicted based on the other variables. In the subsequent sections, we will learn how to train and evaluate a classifier using Python.
| Age | Credit rating | Job | Property owned | Loan approved |
| :- | :- | :- | :- | :- |
| 35 | good | yes | yes | yes |
| 32 | poor | yes | no | no |
| 22 | fair | no | no | no |
| 42 | good | yes | no | yes |
Table 1
### Training and evaluating a classifier
In order to train a classifier, we need a data set containing labelled examples. Though the process of cleaning the data is not covered here, it is recommended that you read about various data preprocessing and cleaning techniques before feeding your data set to a classifier. In order to process the data set in Python, we will import the pandas package and use its data frame structure. You may then choose from a variety of classification algorithms such as decision tree, support vector classifier, random forest, XGBoost, AdaBoost, etc. We will look at the random forest classifier, which is an ensemble classifier formed using multiple decision trees.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

classifier = RandomForestClassifier()
# create a train-test split with a proportion of roughly 70:30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
classifier.fit(X_train, y_train)  # train the classifier on the training set
y_pred = classifier.predict(X_test)  # evaluate the classifier on unknown data
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))  # compare the predictions with the actual values in the test set
```
Although this program uses accuracy as the performance metric, a combination of metrics should be used, as accuracy tends to generate non-representative results when the test set is imbalanced. For instance, we will get a high accuracy if the model gives the same prediction for every record and the data set that is used to test the model is imbalanced, i.e., most of the records in the data set have the same class that the model predicted.
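This caveat can be demonstrated in a few lines (the data here is synthetic): a model that always predicts the majority class scores high accuracy while never detecting the minority class.

```python
# a constant classifier on an imbalanced test set: high accuracy, useless model
from sklearn.metrics import accuracy_score, f1_score

y_test = [0] * 95 + [1] * 5   # 95% of the records belong to class 0
y_pred = [0] * 100            # the model predicts class 0 for every record
print(accuracy_score(y_test, y_pred))             # 0.95
print(f1_score(y_test, y_pred, zero_division=0))  # 0.0: class 1 is never detected
```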
### Tuning a classifier
Tuning refers to the process of modifying the values of the hyperparameters of a model in order to improve its performance. A hyperparameter is a parameter whose value can be changed to improve the learning process of the algorithm.
The following code depicts random search hyperparameter tuning. In this, we define a search space from which the algorithm will pick different values, and choose the one that produces the best results:
```
from sklearn.model_selection import RandomizedSearchCV
#define the search space
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
grid = {"min_samples_split": min_samples_split, "min_samples_leaf": min_samples_leaf}
classifier = RandomizedSearchCV(classifier, grid, n_iter = 100)
#n_iter represents the number of samples to extract from the search space
classifier.fit(X_train, y_train)
#after fitting, classifier.best_score_ and classifier.best_params_ give the best performance of the model and the best values of the parameters
```
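For readers without a fitted model at hand, the mechanism behind random search can be sketched in plain Python: draw candidate combinations from the search space at random, score each one, and keep the best. The code below is illustrative only; the `score()` function is a toy stand-in for cross-validated model performance, not part of scikit-learn.

```python
import random

random.seed(42)

# The same search space as in the scikit-learn example above.
search_space = {
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

def score(params):
    # Toy objective standing in for a real evaluation such as
    # cross-validation accuracy on a held-out fold.
    return 1.0 / (params["min_samples_split"] + params["min_samples_leaf"])

best_params, best_score = None, float("-inf")
for _ in range(10):  # n_iter: number of samples drawn from the space
    # Sample one value per hyperparameter to form a candidate combination.
    candidate = {name: random.choice(values) for name, values in search_space.items()}
    s = score(candidate)
    if s > best_score:
        best_params, best_score = candidate, s

print(best_params, best_score)
```

`RandomizedSearchCV` does essentially this, except that each candidate is scored by fitting and cross-validating the actual estimator.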
### Voting classifier
You can also combine multiple classifiers and their predictions to create a model that gives a single prediction based on the individual ones. When only the class that each classifier votes for is counted and the majority wins, the process is called hard voting. In soft voting, each classifier generates a probability of a given record belonging to each class, and the voting classifier predicts the class with the highest average probability.
A code snippet for creating a soft voting classifier is given below:
```
from sklearn.ensemble import VotingClassifier
#rf_clf, ada_clf, xgb_clf, et_clf and gb_clf are classifiers created earlier
soft_voting_clf = VotingClassifier(
    estimators=[("rf", rf_clf), ("ada", ada_clf), ("xgb", xgb_clf), ("et", et_clf), ("gb", gb_clf)],
    voting="soft")
soft_voting_clf.fit(X_train, y_train)
```
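To make the hard/soft distinction concrete, here is a dependency-free sketch for a single record and three classifiers with made-up probability estimates; note that the two schemes can disagree:

```python
# Each classifier's probability estimate for [class 0, class 1]:
probas = [
    [0.45, 0.55],  # classifier A narrowly favours class 1
    [0.40, 0.60],  # classifier B favours class 1
    [0.90, 0.10],  # classifier C strongly favours class 0
]

# Hard voting: each classifier casts one vote for its own top class.
votes = [p.index(max(p)) for p in probas]            # [1, 1, 0]
hard_prediction = max(set(votes), key=votes.count)   # majority vote

# Soft voting: average the probabilities, then take the top class.
avg = [sum(col) / len(probas) for col in zip(*probas)]
soft_prediction = avg.index(max(avg))

print(hard_prediction)  # 1 (two of three classifiers voted for class 1)
print(soft_prediction)  # 0 (class 0 wins on averaged probability)
```

Soft voting lets a confident classifier outweigh two lukewarm ones, which is why it often performs better when the base classifiers produce well-calibrated probabilities.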
This article has summarised the use of classifiers, the tuning of a classifier, and the process of combining the results of multiple classifiers. Do use it as a reference point and explore each area in detail.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/machine-learning-classification-using-python/
作者:[Gayatri Venugopal][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/gayatri-venugopal/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/machine-learning-classification.jpg
@ -0,0 +1,198 @@
[#]: subject: "Migrate databases to Kubernetes using Konveyor"
[#]: via: "https://opensource.com/article/22/5/migrating-databases-kubernetes-using-konveyor"
[#]: author: "Yasu Katsuno https://opensource.com/users/yasu-katsuno"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Migrate databases to Kubernetes using Konveyor
======
Konveyor Tackle-DiVA-DOA helps database engineers easily migrate database servers to Kubernetes.
![Ships at sea on the web][1]
Kubernetes Database Operator is useful for building scalable database servers as a database (DB) cluster. But because you have to create new artifacts expressed as YAML files, migrating existing databases to Kubernetes requires a lot of manual effort. This article introduces a new open source tool named Konveyor [Tackle-DiVA-DOA][2] (Data-intensive Validity Analyzer-Database Operator Adaptation). It automatically generates deployment-ready artifacts for database operator migration. And it does that through datacentric code analysis.
### What is Tackle-DiVA-DOA?
Tackle-DiVA-DOA (DOA, for short) is an open source datacentric database configuration analytics tool in Konveyor Tackle. It imports target database configuration files (such as SQL and XML) and generates a set of Kubernetes artifacts for database migration to operators such as [Zalando Postgres Operator][3].
![A flowchart shows a database cluster with three virtual machines and SQL and XML files transformed by going through Tackle-DiVA-DOA into a Kubernetes Database Operator structure and a YAML file][4]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
DOA finds and analyzes the settings of an existing system that uses a database management system (DBMS). Then it generates manifests (YAML files) of Kubernetes and the Postgres operator for deploying an equivalent DB cluster.
![A flowchart shows the four elements of an existing system (as described in the text below), the manifests generated by them, and those that transfer to a PostgreSQL cluster][5]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
Database settings of an application consist of DBMS configurations, SQL files, DB initialization scripts, and program codes to access the DB.
* DBMS configurations include parameters of DBMS, cluster configuration, and credentials. DOA stores the configuration to `postgres.yaml` and secrets to `secret-db.yaml` if you need custom credentials.
* SQL files are used to define and initialize tables, views, and other entities in the database. These are stored in the Kubernetes ConfigMap definition `cm-sqls.yaml`.
* Database initialization scripts typically create databases and schema and grant users access to the DB entities so that SQL files work correctly. DOA tries to find initialization requirements from scripts and documents or guesses if it can't. The result will also be stored in a ConfigMap named `cm-init-db.yaml`.
* Code to access the database, such as host and database name, is in some cases embedded in program code. These are rewritten to work with the migrated DB cluster.
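To make the second bullet concrete, a generated `cm-sqls.yaml` might look roughly like the following sketch, bundling the application's SQL sources so they can be mounted and executed during initialization. The name matches the ConfigMap created later in this tutorial, but the keys and SQL contents are illustrative assumptions, not DOA's exact output:

```yaml
# Hypothetical sketch of a DOA-generated ConfigMap; the data keys and
# SQL statements are illustrative, not the exact generated format.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trading-app-cm-sqls
data:
  create.sql: |
    CREATE TABLE account (
      id      SERIAL PRIMARY KEY,
      balance NUMERIC NOT NULL
    );
```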
### Tutorial
DOA is expected to run within a container and comes with a script to build its image. Make sure Docker and Bash are installed in your environment, and then run the build script as follows:
```
$ cd /tmp
$ git clone https://github.com/konveyor/tackle-diva.git
$ cd tackle-diva/doa
$ bash util/build.sh
$ docker image ls diva-doa
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
diva-doa     2.2.0     5f9dd8f9f0eb   14 hours ago   1.27GB
diva-doa     latest    5f9dd8f9f0eb   14 hours ago   1.27GB
```
This builds DOA and packages it as a container image. Now DOA is ready to use.
The next step executes a bundled `run-doa.sh` wrapper script, which runs the DOA container. Specify the Git repository of the target database application. This example uses a Postgres database in the [TradeApp][6] application. You can use the `-o` option for the location of output files and an `-i` option for the name of the database initialization script:
```
$ cd /tmp/tackle-diva/doa
$ bash run-doa.sh -o /tmp/out -i start_up.sh \
      https://github.com/saud-aslam/trading-app
[OK] successfully completed.
```
The `/tmp/out/` directory and `/tmp/out/trading-app`, a directory named after the target application, are created. In this example, the application name is `trading-app`, which is the GitHub repository name. The generated artifacts (the YAML files) are placed under the application-name directory:
```
$ ls -FR /tmp/out/trading-app/
/tmp/out/trading-app/:
cm-init-db.yaml  cm-sqls.yaml  create.sh*  delete.sh*  job-init.yaml  postgres.yaml  test/
/tmp/out/trading-app/test:
pod-test.yaml
```
The prefix of each YAML file denotes the kind of resource that the file defines. For instance, each `cm-*.yaml` file defines a ConfigMap, and `job-init.yaml` defines a Job resource. At this point, `secret-db.yaml` is not created, and DOA uses credentials that the Postgres operator automatically generates.
Now you have the resource definitions required to deploy a PostgreSQL cluster on a Kubernetes instance. You can deploy them using the utility script `create.sh`. Alternatively, you can use the `kubectl create` command:
```
$ cd /tmp/out/trading-app
$ bash create.sh  # or simply "kubectl apply -f ."
configmap/trading-app-cm-init-db created
configmap/trading-app-cm-sqls created
job.batch/trading-app-init created
postgresql.acid.zalan.do/diva-trading-app-db created
```
The Kubernetes resources are created, including `postgresql` (a resource of the database cluster created by the Postgres operator), `service`, `rs`, `pod`, `job`, `cm`, `secret`, `pv`, and `pvc`. For example, you can see four database pods named `trading-app-*`, because the number of database instances is defined as four in `postgres.yaml`.
```
$ kubectl get all,postgresql,cm,secret,pv,pvc
NAME                                        READY   STATUS      RESTARTS   AGE
pod/trading-app-db-0                        1/1     Running     0          7m11s
pod/trading-app-db-1                        1/1     Running     0          5m
pod/trading-app-db-2                        1/1     Running     0          4m14s
pod/trading-app-db-3                        1/1     Running     0          4m
NAME                                      TEAM          VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgresql.acid.zalan.do/trading-app-db   trading-app   13        4      1Gi                                     15m   Running
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/trading-app-db          ClusterIP   10.97.59.252    <none>        5432/TCP   15m
service/trading-app-db-repl     ClusterIP   10.108.49.133   <none>        5432/TCP   15m
NAME                         COMPLETIONS   DURATION   AGE
job.batch/trading-app-init   1/1           2m39s      15m
```
Note that the Postgres operator comes with a user interface (UI). You can find the created cluster on the UI. You need to export the endpoint URL to open the UI on a browser. If you use minikube, do as follows:
```
$ minikube service postgres-operator-ui
```
Then a browser window automatically opens that shows the UI.
![Screenshot of the UI showing the Cluster YAML definition on the left with the Cluster UID underneath it. On the right of the screen a header reads "Checking status of cluster," and items in green under that heading show successful creation of manifests and other elements][7]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
Now you can get access to the database instances using a test pod. DOA also generated a pod definition for testing.
```
$ kubectl apply -f /tmp/out/trading-app/test/pod-test.yaml # creates a test Pod
pod/trading-app-test created
$ kubectl exec trading-app-test -it -- bash  # login to the pod
```
The database hostname and the credentials to access the DB are injected into the pod, so you can access the database with them. Run `psql` metacommands to list all the tables and views in a database:
```
# printenv DB_HOST; printenv PGPASSWORD
(values of the variable are shown)
# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dt'
             List of relations
 Schema |      Name      | Type  |  Owner  
--------+----------------+-------+----------
 public | account        | table | postgres
 public | quote          | table | postgres
 public | security_order | table | postgres
 public | trader         | table | postgres
(4 rows)
# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dv'
                List of relations
 Schema |         Name          | Type |  Owner  
--------+-----------------------+------+----------
 public | pg_stat_kcache        | view | postgres
 public | pg_stat_kcache_detail | view | postgres
 public | pg_stat_statements    | view | postgres
 public | position              | view | postgres
(4 rows)
```
After the test is done, log out from the pod and remove the test pod:
```
# exit
$ kubectl delete -f /tmp/out/trading-app/test/pod-test.yaml
```
Finally, delete the created cluster using a script:
```
$ bash delete.sh
```
### Welcome to Konveyor Tackle world!
To learn more about application refactoring, you can check out the [Konveyor Tackle site][8], join the community, and access the source code on [GitHub][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/migrating-databases-kubernetes-using-konveyor
作者:[Yasu Katsuno][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yasu-katsuno
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/kubernetes_containers_ship_lead.png
[2]: https://github.com/konveyor/tackle-diva/tree/main/doa
[3]: https://github.com/zalando/postgres-operator
[4]: https://opensource.com/sites/default/files/2022-05/tackle%20illustration.png
[5]: https://opensource.com/sites/default/files/2022-05/existing%20system%20tackle.png
[6]: https://github.com/saud-aslam/trading-app
[7]: https://opensource.com/sites/default/files/2022-05/postgreSQ-.png
[8]: https://www.konveyor.io/tools/tackle/
[9]: https://github.com/konveyor/tackle-diva
@ -0,0 +1,95 @@
[#]: subject: "Package is “set to manually installed”? What does it Mean?"
[#]: via: "https://itsfoss.com/package-set-manually-installed/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Package is “set to manually installed”? What does it Mean?
======
If you use the apt command to install packages in the terminal, you'll see all kinds of output.
If you pay attention and read the output, sometimes you'll notice a message that reads:
**package_name set to manually installed**
Have you ever wondered what this message means and why you don't see it for all packages? Let me share some details in this explainer.
### Understanding “Package set to manually installed”
You'll see this message when you try installing an already installed library or development package. This dependency package was installed automatically with another package. The dependency package gets removed with the apt autoremove command if the main package is removed.
But since you tried to install the dependency package explicitly, your Ubuntu system thinks that you need this package independent of the main package. And hence the package is marked as manually installed so that it is not removed automatically.
Not very clear, right? Take the example of [installing VLC on Ubuntu][1].
Since the main vlc package depends on a number of other packages, those packages are automatically installed with it.
![installing vlc with apt ubuntu][2]
If you check the [list of installed packages][3] that have vlc in their name, you'll see that, except for vlc itself, the rest are marked automatic. This indicates that these packages were installed automatically (with vlc) and they will be removed automatically with the apt autoremove command (when vlc is uninstalled).
![list installed packages vlc ubuntu][4]
Now suppose you decide to install "vlc-plugin-base" for some reason. If you run the apt install command on it, the system tells you that the package is already installed. At the same time, it changes the mark from automatic to manual, because the system thinks that you need vlc-plugin-base explicitly, as you tried to manually install it.
![package set manually][5]
You can see that its status has been changed to [installed] from [installed,automatic].
![listing installed packages with vlc][6]
Now, let me remove VLC and run the autoremove command. You can see that “vlc-plugin-base” is not in the list of packages to be removed.
![autoremove vlc ubuntu][7]
Check the list of installed packages again. vlc-plugin-base is still installed on the system.
![listing installed packages after removing vlc][8]
You can see two more vlc-related packages here. These are the dependencies for the vlc-plugin-base package and this is why they are also present on the system but marked automatic.
I believe things are more clear now with the examples. Let me add a bonus tip for you.
### Reset package to automatic
If the state of the package got changed to manual from automatic, you can set it back to automatic in the following manner:
```
sudo apt-mark auto package_name
```
![set package to automatic][9]
### Conclusion
This is not a major error and doesn't stop you from doing your work in your system. However, knowing these little things increases your knowledge a little.
**Curiosity may have killed the cat but it makes a penguin smarter**. That's an original quote to add humor to this otherwise dull article :)
Let me know if you would like to read more such articles that may seem insignificant but help you understand your Linux system a tiny bit better.
--------------------------------------------------------------------------------
via: https://itsfoss.com/package-set-manually-installed/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-latest-vlc/
[2]: https://itsfoss.com/wp-content/uploads/2022/05/installing-vlc-with-apt-ubuntu-800x489.png
[3]: https://itsfoss.com/list-installed-packages-ubuntu/
[4]: https://itsfoss.com/wp-content/uploads/2022/05/list-installed-packages-vlc-ubuntu-800x477.png
[5]: https://itsfoss.com/wp-content/uploads/2022/05/package-set-manually.png
[6]: https://itsfoss.com/wp-content/uploads/2022/05/listing-installed-packages-with-vlc.png
[7]: https://itsfoss.com/wp-content/uploads/2022/05/autoremove-vlc-ubuntu.png
[8]: https://itsfoss.com/wp-content/uploads/2022/05/listing-installed-packages-after-removing-vlc.png
[9]: https://itsfoss.com/wp-content/uploads/2022/05/set-package-to-automatic.png
@ -1,62 +0,0 @@
[#]: subject: "FSF Does Not Accept Debian as a Free Distribution. Heres Why!"
[#]: via: "https://news.itsfoss.com/fsf-does-not-consider-debian-a-free-distribution/"
[#]: author: "Abhishek https://news.itsfoss.com/author/root/"
[#]: collector: "lkxed"
[#]: translator: "Chao-zhi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
自由软件基金会为什么不认为 Debian 是一种自由发行版?
======
![Why FSF doesn't consider Debian a free distribution][1]
Debian 项目开发了一个尊重用户自由的 GNU/Linux 发行版。它的 non-free 软件源中拥有许多根据各式各样的自由许可证分发的软件。这些软件真正被发布到 Debian 之前会进行清理。而自由软件基金会 (Free Software Foundation, FSF) 维护着一份[自由的 GNU/Linux 发行版列表][2]但奇怪的是Debian 并不在这个列表中。事实是 Debian 不符合进入此列表的某些标准,我们很想知道到底不满足哪些标准。但首先,我们需要了解所有这些劳心劳力的工作是如何得到证明的。换句话说,为什么要费心尝试进入一些名单,尤其是这个名单?
曾于 2010 年至 2013 年担任 Debian 项目负责人的 Stefano Zacchiroli 曾表示 Debian 应该获得 FSF 的承认以维护它自由发行版的地位的几个原因。其中一个原因Stefano 称之为“外部审查”,我特别赞同。事实是 Debian 的软件应当满足一些标准和质量水平才能成为发行版的一部分,但除了 Debian 开发人员自己之外没有人控制这个过程。如果该发行版被包含在这份珍贵的清单中,那么 FSF 将密切关注 Debian 的命运,并给予适度的批评。我相信这是很好的动力。如果你也这么认为,那么现在让我们看看 FSF 认为 Debian 不够自由的原因。
### Debian 社会契约
除了自由的 GNU/Linux 发行版列表之外FSF 还维护着一份 GNU/Linux 发行版的列表,这些发行版由于某种原因被拒绝为自由状态。对于此列表中的每个发行版,都有一个带有拒绝的简短论据的评论。从对 Debian 的评论中可以清楚地看出FSF 和 Debian 项目在对“自由分发”一词的解释上产生分歧的主要根源是一份被称为 Debian 社会契约的文件。
Debian 社会契约有五点。要回答主要问题,我们只需要关注其中两个点——即第一个和第五个,其他的省略。在[此处][3]查看合同的完整版本。
第一点说:« **Debian 将保持 100% 自由**。我们在标题为“Debian 自由软件指南”的文档中提供了用于确定作品是否“自由”的指南。我们承诺根据这些指南Debian 系统及其所有组件将是自由的。我们将支持在 Debian 上创建或使用自由和非自由作品的人。我们永远不会让系统需要使用非自由组件。»
同时,第五点写道:« **不符合我们自由软件标准的作品**。我们承认我们的一些用户需要使用不符合 Debian 自由软件指南的作品。我们在我们的档案中为这些作品创建了“free”和“non-free”区域。这些区域中的软件包不是 Debian 系统的一部分,尽管它们已被配置为与 Debian 一起使用。我们鼓励 CD 制造商阅读这些区域的软件包许可证,并确定他们是否可以在其 CD 上分发这些软件包。因此,尽管非自由作品不是 Debian 的一部分,但我们支持它们的使用并为非自由软件包提供基础设施(例如我们的错误跟踪系统和邮件列表)。»
尽管合同规定分发将保持 100% 自由,但它允许官方存档的部分可能包含非自由软件或依赖于某些非自由组件的自由软件。形式上,根据同一份合同,这些部分中的软件不是 Debian 的一部分,但 FSF 对此感到困扰,因为这些部分使在系统上安装非自由软件变得更加容易。
2011 年时FSF 有合理的理由不将 Debian 视为自由发行版——该发行版附带了一个未清除二进制 blob 的 Linux 内核。但自 2011 年 2 月发布的 Squeeze 至今Debian 已经包含了完全自由的 Linux 内核。因此,简化非自由软件的安装是 FSF 无法将 Debian 认定为自由发行版的主要原因。直到 2016 年,这都是我所知道的唯一原因,但在 2016 年初出现了问题……
### 等等……这关 Firefox 什么事?
很长一段时间Debian 都包含一个名为 Iceweasel 的浏览器,它只不过是 Firefox 浏览器的更名。进行品牌重塑有两个原因。首先,浏览器标志和名称是 Mozilla 基金会的商标,提供非自由软件与 DFSG 相抵触。其次通过在发行版中包含浏览器Debian 开发人员必须遵守 Mozilla 基金会的要求,该基金会禁止以 Firefox 的名义交付浏览器的修改版本。因此,开发人员不得不更改名称,因为他们不断更改浏览器代码以修复错误并消除漏洞。但在 2016 年初Debian 有幸拥有一款经过修改的浏览器,不受上述限制,可以保留原来的名称和徽标。一方面,这是对 Debian 修改的认可,也是对 Debian 信任的体现。另一方面,该软件显然没有从非自由组件中清除,现在已成为发行版的一部分。如果此时 Debian 已被列入自由 GNU/Linux 发行版列表,那么自由软件基金会会毫不犹豫地指出这一点。
### 结论
数字世界中的自由与现实世界中的自由同样重要。在这篇文章中,我试图揭示 Debian 最重要的特性之一——开发与用户自由相关的发行版。开发人员花费额外的时间从软件中清理非自由组件,并且以 Debian 为技术基础的数十个发行版继承了它的工作,并由此获得了一部分自由。
另外,我想分享一个简单的观察:自由并不像乍看起来那么简单,人们自然会去追问什么是真正的自由、什么不是。由于 Firefox 的存在Debian 现在不能被称为自由的 GNU/Linux 发行版。但从 2011 年 Debian 终于开始清理内核及发行版其他组件时起,直到 2016 年 Firefox 成为发行版的一部分为止,自由软件基金会出于纯粹的意识形态原因仍不认为该发行版是自由的:原因是 Debian 大大简化了非自由软件的安装……现在轮到你权衡所有的论据,并决定是否将这个 GNU/Linux 发行版视为自由的了。
祝你好运!并尽可能保持自由。
由 Evgeny Golyshev 为 [Cusdeb.com][4] 撰写
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fsf-does-not-consider-debian-a-free-distribution/
作者:[Abhishek][a]
选题:[lkxed][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/root/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/why-fsf-doesnt-consider-debian-a-free-software-1200-%C3%97-675px.png
[2]: https://gnu.org/distros/free-distros.en.html
[3]: https://debian.org/social_contract
[4]: https://wiki.cusdeb.com/Essays:Why_the_FSF_does_not_consider_Debian_as_a_free_distribution/en
@ -0,0 +1,217 @@
[#]: subject: "3 ways to copy files in Go"
[#]: via: "https://opensource.com/article/18/6/copying-files-go"
[#]: author: "Mihalis Tsoukalos https://opensource.com/users/mtsouk"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
在 Go 中复制文件的三种方法
======
本文是 Go 系列的第三篇文章,我将介绍三种最流行的复制文件的方法。
![][1]
图源Opensource.com
本文将展示如何使用 [Go 编程语言][3] 来复制文件。在 Go 中复制文件的方法有很多,我只介绍三种最常见的:使用 Go 库中的 `io.Copy()` 函数调用、一次性读取输入文件并将其写入另一个文件,以及使用缓冲区一块块地复制文件。
### 方法一:使用 io.Copy()
第一种方法就是使用 Go 标准库的 `io.Copy()` 函数。你可以在 `copy()` 函数的代码中找到它的实现逻辑,如下所示:
```
func copy(src, dst string) (int64, error) {
sourceFileStat, err := os.Stat(src)
if err != nil {
return 0, err
}
if !sourceFileStat.Mode().IsRegular() {
return 0, fmt.Errorf("%s is not a regular file", src)
}
source, err := os.Open(src)
if err != nil {
return 0, err
}
defer source.Close()
destination, err := os.Create(dst)
if err != nil {
return 0, err
}
defer destination.Close()
nBytes, err := io.Copy(destination, source)
return nBytes, err
}
```
首先,上述代码做了两个判断,以便确定它可以被打开读取:一是判断将要复制的文件是否存在(`os.Stat(src)`),二是判断它是否为常规文件(`sourceFileStat.Mode().IsRegular()`)。剩下的所有工作都由 `io.Copy(destination, source)` 这行代码来完成。`io.Copy()` 函数执行结束后,会返回复制的字节数和复制过程中发生的第一条错误消息。在 Go 中,如果没有错误消息,错误变量的值就为 `nil`
你可以在 [io 包][4] 的文档页面了解有关 `io.Copy()` 函数的更多信息。
运行 `cp1.go` 将产生以下输出:
```
$ go run cp1.go
Please provide two command line arguments!
$ go run cp1.go fileCP.txt /tmp/fileCPCOPY
Copied 3826 bytes!
$ diff fileCP.txt /tmp/fileCPCOPY
```
这个方法已经非常简单了,不过它没有为开发者提供灵活性。这并不总是一件坏事,但是,有些时候,开发者可能会需要/想要告诉程序该如何读取文件。
### 方法二:使用 ioutil.WriteFile() 和 ioutil.ReadFile()
复制文件的第二种方法是使用 `ioutil.ReadFile()``ioutil.WriteFile()` 函数。第一个函数用于将整个文件的内容,一次性地读入到某个内存中的字节切片里;第二个函数则用于将字节切片的内容写入到一个磁盘文件中。
实现代码如下:
```
input, err := ioutil.ReadFile(sourceFile)
if err != nil {
fmt.Println(err)
return
}
err = ioutil.WriteFile(destinationFile, input, 0644)
if err != nil {
fmt.Println("Error creating", destinationFile)
fmt.Println(err)
return
}
```
上述代码包括了两个 if 代码块(嗯,用 Go 写程序就是这样的),程序的实际功能其实体现在 `ioutil.ReadFile()``ioutil.WriteFile()` 这两行代码中。
运行 `cp2.go`,你会得到下面的输出:
```
$ go run cp2.go
Please provide two command line arguments!
$ go run cp2.go fileCP.txt /tmp/copyFileCP
$ diff fileCP.txt /tmp/copyFileCP
```
请注意,虽然这种方法能够实现文件复制,但它在复制大文件时的效率可能不高。这是因为当文件很大时,`ioutil.ReadFile()` 返回的字节切片会很大。
### 方法三:使用 os.Read() 和 os.Write()
在 Go 中复制文件的第三种方法就是下面要介绍的 `cp3.go`。它接受三个参数:输入文件名、输出文件名和缓冲区大小。
`cp3.go` 最重要的部分位于以下 `for` 循环中,你可以在 `copy()` 函数中找到它,如下所示:
```
buf := make([]byte, BUFFERSIZE)
for {
n, err := source.Read(buf)
if err != nil && err != io.EOF {
return err
}
if n == 0 {
break
}
if _, err := destination.Write(buf[:n]); err != nil {
return err
}
}
```
该方法使用 `os.Read()` 将输入文件的一小部分读入名为 `buf` 的缓冲区,然后使用 `os.Write()` 将该缓冲区的内容写入文件。当读取出错或到达文件末尾(`io.EOF`)时,复制过程将停止。
运行 `cp3.go`,你会得到下面的输出:
```
$ go run cp3.go
usage: cp3 source destination BUFFERSIZE
$ go run cp3.go fileCP.txt /tmp/buf10 10
Copying fileCP.txt to /tmp/buf10
$ go run cp3.go fileCP.txt /tmp/buf20 20
Copying fileCP.txt to /tmp/buf20
```
在接下来的基准测试中,你会发现,缓冲区的大小极大地影响了 `cp3.go` 的性能。
### 运行基准测试
在本文的最后一部分,我将尝试比较这三个程序以及 `cp3.go` 在不同缓冲区大小下的性能(使用 `time(1)` 命令行工具)。
以下输出显示了复制 500MB 大小的文件时,`cp1.go`、`cp2.go` 和 `cp3.go` 的性能对比:
```
$ ls -l INPUT
-rw-r--r--  1 mtsouk  staff  512000000 Jun  5 09:39 INPUT
$ time go run cp1.go INPUT /tmp/cp1
Copied 512000000 bytes!
real    0m0.980s
user    0m0.219s
sys     0m0.719s
$ time go run cp2.go INPUT /tmp/cp2
real    0m1.139s
user    0m0.196s
sys     0m0.654s
$ time go run cp3.go INPUT /tmp/cp3 1000000
Copying INPUT to /tmp/cp3
real    0m1.025s
user    0m0.195s
sys     0m0.486s
```
我们可以看出,这三个程序的性能非常接近,这意味着 Go 标准库函数的实现非常聪明、经过了充分优化。
现在,让我们测试一下缓冲区大小对 `cp3.go` 的性能有什么影响吧!执行 `cp3.go`,并分别指定缓冲区大小为 10、20 和 1000 字节,在一台运行很快的机器上复制 500MB 文件,得到的结果如下:
```
$ ls -l INPUT
-rw-r--r--  1 mtsouk  staff  512000000 Jun  5 09:39 INPUT
$ time go run cp3.go INPUT /tmp/buf10 10
Copying INPUT to /tmp/buf10
real    6m39.721s
user    1m18.457s
sys 5m19.186s
$ time go run cp3.go INPUT /tmp/buf20 20
Copying INPUT to /tmp/buf20
real    3m20.819s
user    0m39.444s
sys 2m40.380s
$ time go run cp3.go INPUT /tmp/buf1000 1000
Copying INPUT to /tmp/buf1000
real    0m4.916s
user    0m1.001s
sys     0m3.986s
```
我们可以发现,缓冲区越大,`cp3.go` 运行得就越快,这或多或少是符合预期的。此外,使用小于 20 字节的缓冲区来复制大文件会非常缓慢,应该避免。
你可以在 [GitHub][5] 找到 `cp1.go`、`cp2.go` 和 `cp3.go` 的 Go 代码。
如果你有任何问题或反馈,请在(原文)下方发表评论或在 [Twitter][6] 上与我(原作者)联系。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/copying-files-go
作者:[Mihalis Tsoukalos][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mtsouk
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/LIFE_cat.png
[3]: https://golang.org/
[4]: https://golang.org/pkg/io/
[5]: https://github.com/mactsouk/opensource.com
[6]: https://twitter.com/mactsouk
@ -1,181 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (CoWave-Fall)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (31 open source text editors you need to try)
[#]: via: (https://opensource.com/article/21/2/open-source-text-editors)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
您可以尝试的 31 个开源文本编辑器
======
正在寻找新的文本编辑器? 这里有 31 个选项可供您考虑。
![open source button on keyboard][1]
计算机是基于文本的,因此您使用它们做的事情越多,您可能就越需要文本编辑应用程序。 您在文本编辑器上花费的时间越多,您就越有可能对您使用的编辑器提出更多的要求。
如果您正在寻找一个好的文本编辑器,您会发现 Linux 可以提供很多。 无论您是想在终端、桌面还是在云端工作,您都可以试一试。您可以每天一款编辑器,连续着试一个月(或每月试一个,能够试三年)。坚持不懈,您终将找到适合您的完美的编辑器。
### 与 Vim 相似的编辑器
![][2]
* [Vi][3] 通常随着 Linux 各发行版、BSD、Solaris 和 macOS 一起安装。 它是典型的 Unix 文本编辑器,具有编辑模式和超高效的单键快捷键二者的独特组合。 最初的 Vi 编辑器由 Bill Joy 编写(他也是 C shell 的作者)。 Vi 的现代版本,尤其是 Vim增加了许多特性包括多级撤消、在插入模式下更好的导航、折叠行、语法高亮、插件支持等等。但它需要学习如何使用它甚至有自己的教程程序vimtutor
* [Kakoune][4] 是一个受 Vim 启发的应用程序,它具有熟悉的简约界面、短键盘快捷键以及独立的编辑和插入模式。 乍一看,它的外观和感觉很像 Vi但它在设计和功能上有自己独特的风格。 它有一个小彩蛋:具有 Clippy 接口的实现。
### emacs 编辑器
![][5]
* 从最初的免费 emacs 开始,发展到 GNU 项目(自由软件运动的发起者)的第一批官方应用程序,[GNU Emacs][6] 是一个广受欢迎的文本编辑器。 它非常适合系统管理员、开发人员和日常用户的使用,具有大量功能和近乎无穷无尽的扩展。 一旦您开始使用 Emacs您可能会发现很难想出一个理由来关闭它因为它能做的事情非常多
* 如果您喜欢 Emacs 但觉得 GNU Emacs 过于臃肿,那么您可以试试 [Jove][7]。 Jove 是一个基于终端的 emacs 编辑器。 它很容易使用,但是如果您是使用 emacs 一类编辑器的新手,那么 Jove 也是很容易学习的,这要归功于 teajove 命令。
* 另一个轻量级的 emacs 编辑器是 [Jed][8]。它的工作流程基于宏。 它与其他编辑器的不同之处在于它使用了 [S-Lang][9],这是一种类似 C 的脚本语言,它为使用 C 而不是使用 Lisp 的开发人员提供了可扩展的选项。
### 交互式编辑器
![][10]
* [GNU nano][11] 对基于终端的文本编辑采取了大胆的立场:它提供了一个菜单。是的,这个不起眼的编辑器从 GUI 编辑器那里得到了提示,它告诉用户他们需要按哪个键来执行特定的功能。这是一种令人耳目一新的用户体验,所以难怪 nano 被设置为“用户友好”发行版的默认编辑器,而不是 Vi。
* [JOE][12] 基于一个名为 WordStar 的旧文本编辑应用程序。如果您不熟悉 WordstarJOE 也可以模仿 Emacs 或 GNU nano。默认情况下它是介于 Emacs 或 Vi 等相对神秘的编辑器和 GNU Nano 永远在线的冗长信息之间的一个很好的折衷方案(例如,它告诉您如何激活屏幕帮助显示,但默认情况下不启用)。
* [e3][13] 是一个优秀的小型文本编辑器,具有五个内置的键盘快捷键方案来模拟 Emacs、Vi、nano、NEdit 和 WordStar。换句话说无论您习惯使用哪种基于终端的编辑器您都可能对 e3 感到宾至如归。
### ed 和更多像 ed 一样的编辑器
* [ed][14] 行编辑器是 [POSIX][15] 和 Open Group 对基于 Unix 的操作系统的标准定义的一部分。它安装在您遇到的几乎所有 Linux 或 Unix 系统上。它小巧、简洁、一流。
* 基于 ed[Sed][16] 流编辑器因其功能和语法而广受欢迎。大多数 Linux 用户在搜索更新配置文件中的行的最简单和最快的方法时至少会学习一个 sed 命令,但值得仔细研究一下。 Sed 是一个强大的命令,包含许多有用的子命令。更好地了解它,您可能会发现自己打开文本编辑器应用程序的频率要低得多。
* 您并不总是需要文本编辑器来编辑文本。 [heredoc][17](或 Here Doc系统可在任何 POSIX 终端中使用,允许您直接在打开的终端中输入文本,然后将输入的内容通过管道传输到文本文件中。这不是最强大的编辑体验,但它用途广泛且始终可用。
### 极简风格的编辑器
![][18]
如果您对一个好的文本编辑器的想法是一个文字处理器除了没有所有的处理功能的话您可能正在寻找这些经典。这些编辑器可让您以最少的干扰和最少的帮助写作和编辑文本。它们提供的功能通常以标记、Markdown 或代码为中心。有些名称遵循某种模式:
* [Gedit][19] 来自 GNOME 团队;
* [medit][20] 有经典的 GNOME 手感;
* [Xedit][21] 仅使用最基本的 X11 库;
* [jEdit][22] 适用于 Java 爱好者。
KDE 用户也类似:
* [Kate][23] 是一款低调的编辑器,拥有您需要的几乎所有功能;
* [KWrite][24] 在看似简单易用的界面中隐藏了大量有用的功能。
还有一些适用于其他平台:
* [Notepad++][25] 是一种流行的 Windows 应用程序,而 Notepadqq 对 Linux 采用了类似的方法;
* [Pe][26] 适用于 Haiku OS90 年代那个古怪的孩子 BeOS 的转世);
* [FeatherPad][27] 是适用于 Linux 的基本编辑器,但对 macOS 和 Haiku 有一些支持。如果您是一名希望移植代码的 Qt 骇客,请务必看一看!
### 集成开发环境IDE
![][28]
文本编辑器和集成开发环境 (IDE) 之间存在相当大的相同之处。 后者实际上只是前者加上许多对于特定代码的添加的功能。 如果您经常使用 IDE您可能会在扩展管理器中发现一个 XML 或 Markdown 编辑器:
* [NetBeans][29] 是一个方便 Java 用户的文本编辑器。
* [Eclipse][30] 提供了一个强大的编辑套件,其中包含许多扩展,可为您提供所需的工具。
### 云端编辑器
![][31]
在云端写作? 当然,您也可以在那里写。
* [Etherpad][32] 是在网上运行的文本编辑器应用程序。 有独立免费的实例供您使用,或者您也可以设置自己的实例。
* [Nextcloud][33] 拥有蓬勃发展的应用场景,包括内置文本编辑器和具有实时预览功能的第三方 Markdown 编辑器。
### 较新的编辑器
![][34]
每个人都会有让文本编辑器变得更完美的想法。 因此,几乎每年都会发布新的编辑器。 有些以一种新的、令人兴奋的方式重新实现经典的旧想法,有些对用户体验有独特的看法,还有些则专注于特定的需求。
* [Atom][35] 是来自 GitHub 的多功能的现代文本编辑器,具有许多扩展和 Git 集成。
* [Brackets][36] 是 Adobe 为 Web 开发人员提供的编辑器。
* [Focuswriter][37] 旨在通过无干扰全屏模式、可选的打字机音效和精美的配置选项等有用功能帮助您专注于写作。
* [Howl][38] 是一个基于 Lua 和 Moonscript 的渐进式动态编辑器。
* [Norka][39] 和 [KJots][40] 模仿笔记本,每个文档代表“活页夹”中的“页面”。 您可以通过导出功能从笔记本中取出单个页面。
### 自己制作编辑器
![][41]
俗话说得好:既然可以编写自己的应用程序,为什么要使用别人的(虽然其实没有这句俗语)?虽然 Linux 有超过 30 个常用的文本编辑器,但是再说一次,开源的一部分乐趣在于能够亲手进行实验。
如果您正在寻找学习编程的理由,那么制作自己的文本编辑器是一个很好的入门方法。 您可以在大约 100 行代码中实现基础知识,并且您使用它的次数越多,您可能就越会受到启发,进而去学习更多内容,从而进行改进。 准备好开始了吗? 来吧,去[创建您自己的文本编辑器][42]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/open-source-text-editors
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[CoWave-Fall](https://github.com/CoWave-Fall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/sites/default/files/kakoune-screenshot.png
[3]: https://opensource.com/article/20/12/vi-text-editor
[4]: https://opensource.com/article/20/12/kakoune
[5]: https://opensource.com/sites/default/files/jed.png
[6]: https://opensource.com/article/20/12/emacs
[7]: https://opensource.com/article/20/12/jove-emacs
[8]: https://opensource.com/article/20/12/jed
[9]: https://www.jedsoft.org/slang
[10]: https://opensource.com/sites/default/files/uploads/nano-31_days-nano-opensource.png
[11]: https://opensource.com/article/20/12/gnu-nano
[12]: https://opensource.com/article/20/12/31-days-text-editors-joe
[13]: https://opensource.com/article/20/12/e3-linux
[14]: https://opensource.com/article/20/12/gnu-ed
[15]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[16]: https://opensource.com/article/20/12/sed
[17]: https://opensource.com/article/20/12/heredoc
[18]: https://opensource.com/sites/default/files/uploads/gedit-31_days_gedit-opensource.jpg
[19]: https://opensource.com/article/20/12/gedit
[20]: https://opensource.com/article/20/12/medit
[21]: https://opensource.com/article/20/12/xedit
[22]: https://opensource.com/article/20/12/jedit
[23]: https://opensource.com/article/20/12/kate-text-editor
[24]: https://opensource.com/article/20/12/kwrite-kde-plasma
[25]: https://opensource.com/article/20/12/notepad-text-editor
[26]: https://opensource.com/article/20/12/31-days-text-editors-pe
[27]: https://opensource.com/article/20/12/featherpad
[28]: https://opensource.com/sites/default/files/uploads/eclipse-31_days-eclipse-opensource.png
[29]: https://opensource.com/article/20/12/netbeans
[30]: https://opensource.com/article/20/12/eclipse
[31]: https://opensource.com/sites/default/files/uploads/etherpad_0.jpg
[32]: https://opensource.com/article/20/12/etherpad
[33]: https://opensource.com/article/20/12/31-days-text-editors-nextcloud-markdown-editor
[34]: https://opensource.com/sites/default/files/uploads/atom-31_days-atom-opensource.png
[35]: https://opensource.com/article/20/12/atom
[36]: https://opensource.com/article/20/12/brackets
[37]: https://opensource.com/article/20/12/focuswriter
[38]: https://opensource.com/article/20/12/howl
[39]: https://opensource.com/article/20/12/norka
[40]: https://opensource.com/article/20/12/kjots
[41]: https://opensource.com/sites/default/files/uploads/this-time-its-personal-31_days_yourself-opensource.png
[42]: https://opensource.com/article/20/12/31-days-text-editors-one-you-write-yourself

[#]: subject: "5 reasons to use sudo on Linux"
[#]: via: "https://opensource.com/article/22/5/use-sudo-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
在 Linux 上使用 sudo 命令的 5 个理由
======
以下是切换到 Linux sudo 命令的五个安全原因。下载 sudo 备忘录获取更多技巧。
![命令行提示符][1]
图片来源Opensource.com
在传统的 Unix 和类 Unix 系统上,新系统中存在的第一个也是唯一一个用户是 **root**。使用 root 账户登录并创建“普通”用户。在初始交互之后,你应该以普通用户身份登录。
以普通用户身份运行系统是一种自我施加的限制,可以防止你犯愚蠢的错误。例如,作为普通用户,你不能删除定义网络接口的配置文件或意外覆盖用户和组列表。作为普通用户,你无法犯这些错误,因为你无权访问这些重要文件。当然,作为系统的实际所有者,你始终可以使用 `su` 命令成为超级用户root并做任何你想做的事情但对于日常任务你应该使用普通账户。
几十年来,`su` 运行良好,但随后出现了 `sudo` 命令。
对于长期的超级用户来说,`sudo` 命令乍一看似乎是多余的。在某些方面,它感觉很像 `su` 命令。例如:
```
$ su root
<enter passphrase>
# dnf install -y cowsay
```
`sudo` 做同样的事情:
```
$ sudo dnf install -y cowsay
<enter passphrase>
```
它们的作用几乎完全相同。然而,大多数发行版推荐使用 `sudo` 而不是 `su`,而且许多发行版已经完全禁用了 root 账户。这是一个要把 Linux 傻瓜化的阴谋吗?
事实上,并非如此。`sudo` 使 Linux 比以往任何时候都更加灵活和可配置,并且没有损失功能,还有[几个显著的优点][2]。
### 为什么在 Linux 上 sudo 比 root 更好?
以下是你应该使用 `sudo` 而不是 `su` 的五个原因。
### 1. Root 是已被确认的攻击目标
我使用 [Firewalls][3]、[fail2ban][4] 和 [SSH 密钥][5]的常用组合来防止一些针对服务器不必要的访问。在我理解 `sudo` 的价值之前,我对日志中的暴力攻击感到恐惧。自动尝试以 root 身份登录是最常见的,这是有充分理由的。
有足够知识尝试入侵的攻击者应该也知道,在 `sudo` 被广泛使用之前,基本上每个 Unix 和 Linux 系统都有一个 root 账户。这样,攻击者就少猜一项:登录名填 root 总是对的,攻击者只需要猜出一个有效的密码。
删除 root 账户可提供大量保护。如果没有 root服务器就没有确认的登录账户。攻击者必须猜测登录名以及密码。这不是两次猜测而是两个必须同时正确的猜测。
### 2. Root 是最终的攻击媒介
在失败访问日志中root 是很常见的,因为它可能是最强大的用户。如果你要设置一个脚本强行进入他人的服务器,为什么要浪费时间尝试以一个只有部分权限的普通用户进入呢?只有最强大的用户才有意义。
Root 既是唯一已知的用户名又是最强大的用户账户。因此root 基本上使尝试暴力破解其他任何东西变得毫无意义。
### 3. 可选择的权限
`su` 命令是一种要么全有、要么全无的模式。如果你有 root 的密码,你就可以变成超级用户;如果没有,你就没有任何管理员权限。这个模型的问题在于,系统管理员必须在完全交出系统的主密钥与自己保留密钥及对系统的全部控制权之间做出选择。这并不总是你想要的,[有时候你只是想委派部分权限][6]。
例如,假设你想授予用户以 root 身份运行特定应用程序的权限,但你不想为用户提供 root 密码。通过编辑 `sudo` 配置,你可以允许指定用户,或属于指定 Unix 组的任何用户运行特定命令。`sudo` 命令需要用户的现有密码,而不是你的密码,当然也不是 root 密码。
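下面是一个极简的 sudoers 配置示意(需用 `visudo` 编辑 `/etc/sudoers`;其中的用户名 `alice` 和命令路径只是假设的示例,并非原文给出的配置):

```
# 允许用户 alice 以 root 身份运行 dnf而无需知道 root 密码
alice ALL=(root) /usr/bin/dnf
```

这样alice 运行 `sudo dnf ...` 时输入的是她自己的密码,而不是 root 密码。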
### 4. 超时
使用 `sudo` 运行命令时,经过身份验证的用户的权限会提升 5 分钟。在此期间,他们可以运行管理员授予他们运行权限的任何命令。
5 分钟后,认证缓存被清空,下次使用 `sudo` 时会再次提示输入密码。超时可以防止用户意外执行某些操作(例如,不小心翻到 shell 历史记录中的命令,或多次按下**向上**箭头)。如果第一个用户离开办公桌而没有锁定计算机屏幕,它还可以确保另一个用户无法运行这些命令。
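这个 5 分钟的超时本身也可以调整。下面是 sudoers 的标准 `Defaults` 选项(数值仅为示意,并非原文给出的配置):

```
# 将认证缓存的有效期从默认的 5 分钟改为 2 分钟
# 设为 0 则每次运行 sudo 都要求输入密码
Defaults timestamp_timeout=2
```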
### 5. 日志记录
Shell 历史功能可以作为一个用户所做事情的日志。如果你需要了解系统发生了什么,你可以(理论上,取决于 shell 历史记录的配置方式)使用 `su` 切换到其他人的账户,查看他们的 shell 历史记录,也可以了解用户执行了哪些命令。
但是,如果你需要审计 10 或 100 名用户的行为你可能会注意到此方法无法扩展。Shell 历史记录的轮转速度很快,默认为 1000 条,并且可以通过在任何命令前加上空格来轻松绕过它们。
当你需要管理任务的日志时,`sudo` 提供了一个完整的[日志记录和警报子系统][7],因此你可以在一个特定位置查看活动,甚至在发生重大事件时获得警报。
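作为示意,以下 sudoers 配置可以把 `sudo` 活动集中记录到一个文件(日志路径是假设的示例;`log_input` 和 `log_output` 是 sudoers 的标准选项,会记录会话的输入输出,供 `sudoreplay` 之类的工具回放):

```
# 将每次 sudo 调用记录到指定日志文件
Defaults logfile=/var/log/sudo.log
# 记录命令的输入与输出,以便事后审计回放
Defaults log_input, log_output
```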
### 学习 sudo 的其他功能
除了本文列举的一些功能,`sudo` 命令还有很多新功能,包括已有的或正在开发中的。因为 `sudo` 通常是你配置一次然后就忘记的东西,或者只在新管理员加入团队时才配置的东西,所以很难记住它的细微差别。
下载 [sudo 备忘录][8],在你最需要的时候把它当作一个有用的指导书。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/use-sudo-linux
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/command_line_prompt.png
[2]: https://opensource.com/article/19/10/know-about-sudo
[3]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[4]: https://www.redhat.com/sysadmin/protect-systems-fail2ban
[5]: https://opensource.com/article/20/2/ssh-tools
[6]: https://opensource.com/article/17/12/using-sudo-delegate
[7]: https://opensource.com/article/19/10/know-about-sudo
[8]: https://opensource.com/downloads/linux-sudo-cheat-sheet

[#]: subject: "Use this open source screen reader on Windows"
[#]: via: "https://opensource.com/article/22/5/open-source-screen-reader-windows-nvda"
[#]: author: "Peter Cheer https://opensource.com/users/petercheer"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
在 Windows 上使用这个开源屏幕阅读器
======
为纪念全球无障碍意识日,了解 NVDA 开源屏幕阅读器,以及你如何参与其中,为所有网络用户提高无障碍性。
![Working from home at a laptop][1]
图片提供Opensource.com
屏幕阅读器是辅助技术软件的一个专门领域,它可以阅读并说出计算机屏幕上的内容。完全没有视力的人只是视力障碍者的一小部分,屏幕阅读器软件可以帮助所有群体。屏幕阅读器大多特定于操作系统,供有视觉障碍的人和无障碍培训师使用,以及想要测试网站或应用的无障碍访问程度的开发人员和无障碍顾问。
### 如何使用 NVDA 屏幕阅读器
[WebAIM 屏幕阅读器用户调查][2]始于 2009 年,一直持续到 2021 年。在第一次调查中,最常用的屏幕阅读器是 JAWS占 74%。它是 Microsoft Windows 上的商业产品,是长期的市场领导者。NVDA 当时是一个相对较新的 Windows 开源屏幕阅读器,仅占 8%。快进到 2021 年JAWS 占 53.7%NVDA 占 30.7%。
你可以从 [NVAccess 网站][3]下载最新版本的 NVDA。为什么我要使用 NVDA 并将它推荐给我使用微软 Windows 的客户?嗯,它是开源的,速度快,功能强大,易于安装,支持多种语言,可以作为便携式应用运行,拥有庞大的用户群,并且有定期发布新版本的周期。
NVDA 已被翻译成 55 种语言,并在 175 个不同的国家/地区使用。还有一个活跃的开发者社区,拥有自己的[社区插件网站][4]。你选择安装的任何附加组件都将取决于你的需求,并且有很多可供选择,包括常见视频会议平台的扩展。
与所有屏幕阅读器一样NVDA 有很多组合键需要学习。熟练使用任何屏幕阅读器都需要培训和练习。
![Image of NVDA welcome screen][5]
向熟悉计算机和会使用键盘的人教授 NVDA 并不太难。向一个完全初学者教授基本的计算机技能(没有鼠标、触摸板和键盘技能)和使用 NVDA 是一个更大的挑战。个人的学习方式和偏好不同。此外如果人们只想浏览网页和使用电子邮件他们可能不需要学习如何做所有事情。NVDA 教程和资源的一个很好的链接来源是[无障碍中心][6]。
当你掌握了使用键盘命令操作 NVDA它就会变得更容易但是还有一个菜单驱动的系统可以完成许多配置任务。
![Image of NVDA menu][7]
### 测试无障碍性
多年来屏幕阅读器用户无法访问某些网站一直是个问题尽管美国残疾人法案ADA等残疾人平等立法仍然存在。 NVDA 在有视力的社区中的一个很好的用途是用于网站无障碍性测试。NVDA 可以免费下载,并且通过运行便携式版本,网站开发人员甚至不需要安装它。运行 NVDA关闭显示器或闭上眼睛看看你在浏览网站或应用时的表现如何。
NVDA 也可用于测试(通常被忽略的)正确[标记 PDF 文档以实现无障碍性][8]任务。
有几个指南专注于使用 NVDA 进行无障碍性测试。我可以推荐[使用 NVDA 测试网页][9]和使用 [NVDA 评估 Web 无障碍性][10]。
图片提供Peter CheerCC BY-SA 4.0
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/open-source-screen-reader-windows-nvda
作者:[Peter Cheer][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/petercheer
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://webaim.org/projects
[3]: https://www.nvaccess.org
[4]: https://addons.nvda-project.org/index.en.html
[5]: https://opensource.com/sites/default/files/2022-05/nvda1.png
[6]: http://www.accessibilitycentral.net/
[7]: https://opensource.com/sites/default/files/2022-05/nvda2.png
[8]: https://www.youtube.com/watch?v=rRzWRk6cXIE
[9]: https://www.unimelb.edu.au/accessibility/tools/testing-web-pages-with-nvda
[10]: https://webaim.org/articles/nvda

[#]: subject: "Customize GNOME 42 with A Polished Look"
[#]: via: "https://www.debugpoint.com/2022/05/customize-gnome-42-look-1/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
自定义 GNOME 42 的精致外观
======
一个关于如何在 5 分钟内为你最喜欢的 GNOME 桌面提供精美外观的教程。
你可以通过多种方式使用图标、主题、光标和壁纸自定义你最喜爱的 GNOME 桌面。本文向你展示了如何使 GNOME 42 桌面看起来更加精致。GNOME 42 桌面环境可用于最近发布的 Ubuntu 22.04 LTS 和 Fedora 36。
在你进一步阅读之前,这是并排比较(之前和之后)的外观。
![GNOME before customisation][1]
![GNOME after customisation][2]
我将把本教程分为两个部分。
第一部分涉及设置和安装所需的软件包;第二部分介绍如何应用各种设置,以获得你想要的外观。
本教程主要在 Ubuntu 22.04 LTS 上测试。但是,它应该适用于 Ubuntu 和 Fedora 的其他变体。
### 使用精致外观自定义 GNOME 42
#### 设置
* 首先,为你的系统启用 Flatpak因为我们需要安装扩展管理器来下载本教程所需的 GNOME Shell 扩展。
* 因此,要做到这一点,请打开一个终端并运行以下命令。
```
sudo apt install flatpak gnome-software-plugin-flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
* 完成后重启计算机。
* 然后从终端运行以下命令来安装扩展管理器应用以下载 GNOME Shell 扩展。
```
flatpak install flathub com.mattjakeman.ExtensionManager
```
* 打开扩展管理器应用并安装两个扩展。第一个是浮动 dock它提供了一个可以在桌面上任意位置移动的超酷 dock。第二个是用户主题扩展它可以帮助你在 Ubuntu Linux 中安装外部 GTK 主题。
![User Themes Extension][3]
![Floating Dock Extension][4]
* 其次,使用以下命令安装 [Materia 主题][5]。你必须构建它,因为它没有任何可执行文件。在 Ubuntu 中依次运行以下命令进行安装。
```
git clone https://github.com/ckissane/materia-theme-transparent.git
cd materia-theme-transparent
meson _build
meson install -C _build
```
* 此外,请从以下链接下载 [Kora 图标主题][6]。下载后解压文件,将以下四个文件夹复制到 `/home/<用户名>/.icons` 路径下。如果 .icons 文件夹不存在,请创建它。
[下载 Kora 图标主题][7]
![Kora Icon Theme][8]
* 除了上述更改,从下面的链接下载 Bibata 光标主题。下载后,解压文件夹并将其复制到相同的 `/home/<用户名>/.icons` 文件夹中。
[下载 Bibata 光标主题][9]
* 除此之外,如果你想要一个与上述主题相匹配的漂亮字体,请从 Google Fonts [下载 Roboto 字体][10],并将其复制到 `/home/<用户名>/.fonts` 文件夹。
* 最后,再次重启系统。
#### 配置
* 打开扩展管理器,启用浮动 dock 和用户主题,并禁用 Ubuntu Dock。
![Changes to Extensions][11]
* 此外,打开浮动 dock 设置并进行以下更改。
![Floating Dock Settings][12]
* 此外,打开 [GNOME Tweak Tool][13],然后转到外观选项卡。设置以下内容。
* 光标Bibata-Original-Ice
* Shell 主题Materia
* 图标Kora
* 除此之外,你可能还想更改字体。为此,请转到字体选项卡,将“文档”和“界面”字体更改为 Roboto 10pt。
* 或者,你也可以从 Ubuntu 22.04 的默认设置中更改强调色和样式。
* 最后,根据你的喜好下载漂亮的壁纸。对于本教程,我从[这里][14]下载了一个示例壁纸。
* 如果一切顺利,你应该有一个漂亮的桌面,如下图所示。
![Customize GNOME 42 Final Look][15]
享受精致的 GNOME 42。干杯。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/05/customize-gnome-42-look-1/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://i2.wp.com/www.debugpoint.com/wp-content/uploads/2022/05/GNOME-before-customisation.jpg?ssl=1
[2]: https://i0.wp.com/www.debugpoint.com/wp-content/uploads/2022/05/GNOME-after-customisation.jpg?ssl=1
[3]: https://www.debugpoint.com/wp-content/uploads/2022/05/User-Themes-Extension2.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/05/Floating-Doc-Extension.jpg
[5]: https://github.com/ckissane/materia-theme-transparent
[6]: https://github.com/bikass/kora/
[7]: https://github.com/bikass/kora/archive/refs/heads/master.zip
[8]: https://www.debugpoint.com/wp-content/uploads/2022/05/Kora-Icon-Theme.jpg
[9]: https://www.pling.com/p/1197198/
[10]: https://fonts.google.com/specimen/Roboto
[11]: https://www.debugpoint.com/wp-content/uploads/2022/05/Changes-to-Extensions.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/05/Floating-Dock-Settings.jpg
[13]: https://www.debugpoint.com/2018/05/customize-your-ubuntu-desktop-using-gnome-tweak/
[14]: https://www.pexels.com/photo/colorful-blurred-image-6985048/
[15]: https://www.debugpoint.com/wp-content/uploads/2022/05/Customize-GNOME-42-Final-Look.jpg

[#]: subject: "DAML: The Programming Language for Smart Contracts in a Blockchain"
[#]: via: "https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/"
[#]: author: "Dr Kumar Gaurav https://www.opensourceforu.com/author/dr-gaurav-kumar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
DAML区块链中智能合约的编程语言
======
DAML 智能合约语言是一种专门设计的特定领域语言,用于编码应用的共享业务逻辑。它用于区块链环境中分布式应用的开发和部署。
![blockchain-hand-shake][1]
区块链技术是一种安全机制,以一种使人难以或不可能修改或入侵的方式来跟踪信息。区块链整合了交易的数字账本,它被复制并发送至其网络上的每台计算机。在链的每个区块中,都有一些交易。当区块链上发生新的交易时,该交易的记录就会被添加到属于该链的每个人的账簿中。
区块链使用分布式账本技术DLT数据库并不保存在单一的服务器或节点中。在区块链中交易以一种被称为哈希的不可篡改的加密签名形式记录。这意味着如果链上的某个区块被改动黑客将很难隐藏这一改动因为他们必须对该链的每一个分布式副本做同样的修改。像比特币和以太坊这样的区块链随着新区块不断加入而持续增长这使得账本更加安全。
随着区块链中智能合约的实施,在没有任何人工干预的情况下,有自动执行的场景。智能合约技术使得执行最高级别的安全、隐私和反黑客实施成为可能。
![Figure 1: Market size of blockchain technology (Source: Statista.com)][2]
区块链的用例和应用是:
* 加密货币
* 智能合约
* 安全的个人信息
* 数字健康记录
* 电子政务
* 非同质化代币NFT
* 游戏
* 跨境金融交易
* 数字投票
* 供应链管理
根据 *Statista.com* 的数据,过去几年来,区块链技术的市场规模一直在快速增长,预计到 2025 年将达到 400 亿美元。
### 区块链的编程语言和工具箱
有许多编程语言和开发工具包可用于分布式应用和智能合约。区块链的编程和脚本语言包括 Solidity、Java、Vyper、Serpent、Python、JavaScript、GoLang、PHP、C++、Ruby、Rust、Erlang 等,并根据实施场景和用例进行使用。
选择一个合适的平台来开发和部署区块链,取决于一系列因素,包括对安全、隐私、交易速度和可扩展性的需求(图 2
![Figure 2: Factors to look at when selecting a blockchain platform][3]
开发区块链的主要平台有:
* Ethereum
* XDC Network
* Tezos
* Stellar
* Hyperledger
* Ripple
* Hedera Hashgraph
* Quorum
* Corda
* NEO
* OpenChain
* EOS
* Dragonchain
* Monero
### DAML一种高性能的编程语言
数字资产建模语言或 DAMLdaml.com是一种高性能的编程语言用于开发和部署区块链环境中的分布式应用。它是一个轻量级和简洁的平台用于快速应用开发。
![Figure 3: Official portal of DAML][4]
DAML 的主要特点是:
* 细粒度的权限
* 基于场景的测试
* 数据模型
* 业务逻辑
* 确定性的执行
* 存储抽象化
* 无重复开销
* 负责任的跟踪
* 原子的可组合性
* 授权检查
* 需要知道的隐私
### 安装和使用 DAML
DAML SDK 可以安装在 Linux、macOS 或 Windows 上。在多个操作系统上安装 DAML 的详细说明可访问 *https://docs.daml.com/getting-started/installation.html*
你必须具备以下条件才能使用 DAML
* Visual Studio Code
* Java 开发套件JDK
DAML 可以在 Windows 上通过下载并运行可执行的安装程序进行安装,安装程序可从 *https://github.com/digital-asset/daml/releases/download/v1.18.1/daml-sdk-1.18.1-windows.exe* 获取。
在 Linux 或 Mac 上安装 DAML 可以通过在终端执行以下内容来完成:
```
$ curl -sSL https://get.daml.com/ | sh
```
安装 DAML 后,可以创建基于区块链的新应用,如图 4 和 5 所示。
![Figure 4: Creating a new app][5]
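图 4 中创建新应用的步骤,大致对应如下命令(`myapp` 是假设的项目名,`create-daml-app` 模板名取自 DAML 官方入门文档,具体以实际文档为准):

```
daml new myapp --template create-daml-app
cd myapp
daml start
```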
在另一个终端中,进入新应用的目录并安装项目的依赖:
![Figure 5: Running DAML][6]
```
WorkingDirectory>cd myapp/ui
WorkingDirectory>npm install
WorkingDirectory>npm start
```
Web 界面启动后,即可在浏览器中通过 *http://localhost:3000/* 访问该应用。
![Figure 6: Login panel in DAML app][7]
### 研究和开发的范围
区块链技术为不同类别的应用提供了广泛的开发平台和框架。其中许多平台是免费和开源的,可以下载和部署以用于基于研究的实现。研究学者、从业者和院士可以使用这些平台为众多应用提出和实施他们的算法。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/
作者:[Dr Kumar Gaurav][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/blockchain-hand-shake.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Market-size-of-blockchain-technology.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Factors-to-look-at-when-selecting-a-blockchain-platform-2.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-Official-portal-of-DAML-1.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-Creating-a-new-app.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Running-DAML.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Login-panel-in-DAML-app.jpg