Merge pull request #5 from LCTT/master

update
This commit is contained in:
aftermath0703 2022-08-31 13:00:11 +08:00 committed by GitHub
commit 4205f37124
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
21 changed files with 2822 additions and 148 deletions


@@ -3,35 +3,34 @@
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14983-1.html"
How to automate tasks with Bash on Linux
======
![](https://img.linux.net.cn/data/attachment/album/202208/30/181814f4v7ahztuaaxwqwg.jpg)
> Bash has some handy automation features that make my life easier when working with files on Linux.
Automating tasks from the Bash command line is a great approach. Whether you run Linux on a server and need to manage log files or other files, or you are organizing files on your personal computer to keep your desktop tidy, the automation features of Bash will make your work easier.
### Automating file tasks: for
If you have a bunch of files and want to perform the same action on each one, use the `for` command. This command iterates over a list of files and executes one or more commands. The `for` command looks like this:
```
for variable in list
do
    commands
done
```
I added extra whitespace and line breaks in the example to separate the different parts of the `for` command. That may look like something you can't run all at once on the command line, but you can use `;` to put all of the commands on one line, like this:
```
for variable in list ; do commands ; done
```
Let's see it in action. I use the `for` command to rename some files. Recently, I had some screenshots that I wanted to rename. The screenshots had names like `filemgr.png` or `terminal.png`, and I wanted to put `screenshot` at the front of each name. I could use the `for` command to rename all 30 files in one go. Here is an example with just two of the files:
@@ -44,54 +43,54 @@ $ ls
screenshot-filemgr.png  screenshot-terminal.png
```
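The rename command itself is elided by the diff hunk above, but the loop could be sketched like this (the file names are illustrative, not the author's exact command):

```shell
# Work in a scratch directory with two sample screenshots
cd "$(mktemp -d)"
touch filemgr.png terminal.png

# Prefix every .png file with "screenshot-"
for f in *.png
do
    mv "$f" "screenshot-$f"
done

ls
```

Quoting `"$f"` keeps the loop safe for file names that contain spaces.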
The `for` command makes it easy to perform one or more actions on a set of files. You can use a meaningful variable name, such as `image` or `screenshot`, or you can use an "abbreviated" variable such as `f`, as in my example. When I write scripts that use a `for` loop, I choose meaningful variable names. But when I use `for` on the command line, I usually go with abbreviated names, such as `f` for files or `d` for directories.
Whichever variable name you choose, make sure you add the `$` sign when referencing the variable. This expands the variable to the name of the file you are acting on. Type `help for` at the Bash prompt to learn more about the `for` command.
### Conditional execution: if
Looping over a set of files with `for` is helpful when you need to do the same thing to every file. But what if you need to do something different for certain files? For that, you need conditional execution with the `if` statement. The `if` statement looks like this:
```
if test
then
    commands
fi
```
You can also use an `if`/`else` statement to make a decision:
```
if test
then
    commands
else
    commands
fi
```
You can write more complicated programs with `if`/`elif`/`else` statements. I use these in scripts when I need to automate a job on a batch of files all at once:
```
if test1
then
    commands
elif test2
then
    commands
elif test3
then
    commands
else
    commands
fi
```
The `if` command lets you test for all kinds of things, such as whether a file is actually a file, or whether a file is empty (zero bytes). Type `help test` at the command line to see the different kinds of tests you can use with the `if` statement.
For example, let's say I want to clean up a log directory that has dozens of files in it. A common task in log management is to delete any empty logs and compress the other logs. The easiest way to tackle this is to delete the empty files first. There is no `if` test that exactly matches "the file is empty", but we do have the `-s` test, which checks that something is a file and that the file is not empty (its size is not zero). That's the opposite of what we want, but we can negate the test with `!` to check that something is not a file or is empty.
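A minimal sketch of that idea, assuming illustrative file names (`empty.log` and `app.log`):

```shell
# Work in a scratch directory with one empty and one non-empty log
cd "$(mktemp -d)"
touch empty.log
echo "some entries" > app.log

# [ ! -s "$f" ] is true when $f is not a file or is empty
for f in *.log
do
    if [ ! -s "$f" ]
    then
        rm "$f"
    fi
done

ls
```

After the loop, only the non-empty `app.log` remains.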
Let's see this in action with an example. I have created two test files: one is empty, and the other contains some data. We can use `if` to print the message `empty` *if* the file is empty:
```
$ ls
@@ -134,7 +133,7 @@ via: https://opensource.com/article/22/7/use-bash-automate-tasks-linux
Author: [Jim Hall][a]
Collector: [lkxed][b]
Translator: [Donkey-Hao](https://github.com/Donkey-Hao)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)


@@ -0,0 +1,157 @@
[#]: subject: "5 GNOME 43 Features to Keep an Eye On"
[#]: via: "https://news.itsfoss.com/gnome-43-features/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14985-1.html"
5 GNOME 43 Features to Keep an Eye On
======
> GNOME 43 is on the way. Here are the features you can expect in the release.
![5 GNOME 43 Features to Keep an Eye On][1]
GNOME 43 is set for release on September 21, 2022. As of now, the GNOME 43 beta is available for testing.
The features and changes we spotted in the GNOME 43 beta should arrive with the final release.
So, which GNOME 43 features should you look forward to the most?
Let's take a look at some of the key changes.
This list focuses on the visual/interactive changes. For a complete list of the technical changes, you can refer to the changelog linked at the bottom of the article.
### 1. Revamped Quick Settings
![GNOME Quick Settings][2]
The GNOME desktop menu in the top-right corner, where you can quickly adjust the volume, access network connections, and power the computer on/off, finally gets a visual refresh in this release.
It now looks more like Android's quick toggles, which should enhance the user experience while cutting out a few unnecessary clicks.
![GNOME Quick Settings][3]
You no longer need to head to Settings to turn on dark mode or Night Light. The new quick toggle menu gives you access to them.
Additionally, things like selecting a Wi-Fi network and changing the audio device are easier to do than before.
### 2. Changes to the Nautilus File Manager
We already covered the most significant changes to Nautilus in GNOME 43 in an earlier report.
> **[6 New Changes in the Nautilus File Manager in GNOME 43][4]**
A few things are worth reiterating. Some of them include:
* A brand-new look using GTK 4.
* The ability to drag and select files (rubber-band selection).
* An adaptive view for compact windows.
* A new file context menu.
![Nautilus file manager][6]
Overall, in GNOME 43 you will find some visual tweaks to the Nautilus file manager, along with subtle improvements to the animations.
You can explore the differences by clicking through each of the options, accessing a directory's properties, and so on. It should feel a bit more intuitive.
### 3. Device Security Information
![][7]
We previously reported that GNOME will show a warning when you have Secure Boot disabled.
> **[Secure Boot Disabled? GNOME Will Soon Warn You About It][8]**
You will see this warning on your splash screen and lock screen.
GNOME's Settings menu also gets a new "Device Security" option, where you can see the Secure Boot status and other important information, such as:
* TPM
* Intel BootGuard
* IOMMU protection
### 4. Extension Support in GNOME Web
![GNOME Web extensions][10]
GNOME Web gets a little better with every update. With support for web extensions, it is becoming an attractive option to replace your daily-driver browser.
> **[With Extensions, GNOME Web Is Slowly Becoming an Attractive Option on Desktop Linux][11]**
At the time of writing, the support is still **experimental**, and you have to install extensions manually.
To get started, you can download .xpi extension files from the Mozilla Firefox add-ons portal.
### 5. Improvements to GNOME Software
GNOME's Software center is not the greatest experience right now.
While it has improved at presenting additional information, there is still room for improvement.
![GNOME Software][13]
In GNOME 43, you can learn more about the permissions required by Flatpak applications. And you also get an "Other Apps" section for finding more applications from the same developer.
Additionally, there are subtle visual tweaks to how the package sources are displayed.
![GNOME Software][14]
### Bonus: New Wallpapers
You get new default wallpapers, with dark and light variants. Here is what the dark wallpaper background looks like:
![][15]
And this is the light version:
![][16]
Beyond the major highlights, some of the other changes include:
* Adwaita icon theme updates.
* Performance improvements to GNOME applications.
* Various code cleanups.
* Improvements to Calendar.
* A refined "About" window.
For the complete technical details, you can refer to the [GNOME 43 beta changelog][17].
Overall, GNOME 43 largely focuses on improving usability and the user experience.
Some interesting features were initially planned but did not make it into GNOME 43. *Maybe GNOME 44 will include them?*
> **[Here's What the Developers Have Planned for GNOME 43][18]**
*What do you think about the GNOME 43 features? Let us know your thoughts in the comments below.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gnome-43-features/
Author: [Ankush Das][a]
Collector: [lkxed][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/08/gnome-43-features.jpg
[2]: https://news.itsfoss.com/content/images/2022/08/gnome-toggle-1.png
[3]: https://news.itsfoss.com/content/images/2022/08/gnome-toggle-settings.png
[4]: https://news.itsfoss.com/gnome-files-43/
[6]: https://news.itsfoss.com/content/images/2022/08/nautilus-file.gif
[7]: https://news.itsfoss.com/content/images/2022/08/secure-boot-gnome.png
[8]: https://news.itsfoss.com/gnome-secure-boot-warning/
[10]: https://news.itsfoss.com/content/images/2022/08/gnome-web-extensions-1.png
[11]: https://news.itsfoss.com/gnome-web-extensions-dev/
[13]: https://news.itsfoss.com/content/images/2022/08/gnome-software-screenshot-1.png
[14]: https://news.itsfoss.com/content/images/2022/08/gnome-43-software-center.jpg
[15]: https://news.itsfoss.com/content/images/2022/08/gnome-43-dark-wallpaper.jpg
[16]: https://news.itsfoss.com/content/images/2022/08/gnome-light-adaitwa.jpg
[17]: https://download.gnome.org/core/43/43.beta/NEWS
[18]: https://news.itsfoss.com/gnome-43-dev-plans/


@@ -0,0 +1,71 @@
[#]: subject: "Debian Finally Starts a General Resolution to Consider a Non-Free Firmware Image"
[#]: via: "https://news.itsfoss.com/debian-non-free/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Debian Finally Starts a General Resolution to Consider a Non-Free Firmware Image
======
Debian is finally considering the inclusion of non-free firmware through a general resolution. So, what's it going to be?
![Debian Finally Starts a General Resolution to Consider a Non-Free Firmware Image][1]
Debian is one of the most loved Linux distributions for its stability and its measured approach to new features.
But, it does not come with any non-free firmware.
And, that is becoming an issue for users who want to use Debian on newer hardware.
Most of the latest devices and configurations need non-free firmware to make things work, which includes Wi-Fi, graphics, and more.
To address that, **Steve McIntyre**, a Debian developer and a former Debian project leader, has been actively discussing the issue for a while. At the **DebConf 22 conference**, Steve talked about fixing the firmware mess to better highlight it to users and developers, as spotted by [Geekers Digest][2]. **As an update to that discussion**: Debian has now started a general resolution to let its stakeholders vote on what to do with non-free firmware.
### Debian's General Resolution Proposals
There are **three proposals** with the general resolution.
* Proposal A: Debian will include non-free firmware packages in its official installer media images. The included firmware will be enabled by default wherever the system detects it is required. However, the images will also offer ways for users to disable this at boot.
* Proposal B: Include non-free firmware packages in official media images, but as a separate offering alongside the images with no non-free firmware.
* Proposal C: Make installation media containing packages from the non-free section available for download alongside the free media, while informing users of what they are downloading.
These are some interesting proposals. I think Proposal A would be the most convenient for everyone, while still giving advanced users the option to disable non-free firmware.
You can learn more about the general resolution on the [official page][3].
💬 **What do you think?**
### Including Non-Free Firmware in Official Releases
As for the current situation, you can find an unofficial Debian image with non-free firmware.
However, not every user is aware of it, and even though it is promoted on Debian's download page, the "**unofficial**" label is not something a user will prefer over the recommended image.
Furthermore, it is counterintuitive to expect users to hunt for non-free firmware when they can simply choose Ubuntu, or any Ubuntu-based distribution, as an alternative.
Not limited to these issues, Steve mentioned a few other problems in his [blog][4], which include:
* Maintaining separate non-free images is time-consuming.
* The official images are not preferred by many users because of the lack of non-free firmware.
*So, which option do you think Debian's general resolution will vote for? A separate media image? Or including it in the official image?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/debian-non-free/
Author: [Ankush Das][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/wordpress/2022/07/debian-non-free-firmware.jpg
[2]: https://www.geekersdigest.com/debian-on-the-verge-to-include-non-free-firmware-in-official-releases/
[3]: https://www.debian.org/vote/2022/vote_003#timeline
[4]: https://blog.einval.com/2022/04/19#firmware-what-do-we-do


@@ -0,0 +1,35 @@
[#]: subject: "Google Reveals Vulnerability Reward Program Specifically For Open Source Software"
[#]: via: "https://www.opensourceforu.com/2022/08/google-reveals-vulnerability-reward-program-specifically-for-open-source-software/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Google Reveals Vulnerability Reward Program Specifically For Open Source Software
======
In 2010, Google introduced the Vulnerability Reward Program (VRP). As the name implies, it encourages security researchers and professionals to find security flaws and exploits and then disclose them in confidence to the vendor. Once reported, the flaws are fixed by the company, and the person who discovered the problem is granted a cash reward. Over the last few years, Google has been working to broaden the platform's reach and consolidate it. The company has now disclosed yet another expansion, this time in the area of open source software (OSS).
With projects like Golang, Angular, and Fuchsia under its wing, Google has underlined that it is one of the largest contributors to and maintainers of OSS, and that it is aware of the need to secure this area. As a result, its OSS VRP programme is designed to promote consistent effort on this front as well. Any OSS code that is part of Google's portfolio is in scope for the OSS VRP. This includes the projects Google manages as well as any OSS dependencies maintained by other vendors. The two OSS categories covered by this VRP are defined as follows:
* All up-to-date open source software (including repository settings) stored in the public repositories of GitHub organisations controlled by Google.
* The third-party dependencies of those projects (prior notice to the affected dependency is required before submission to Google's OSS VRP).
Google is currently accepting reports of supply chain compromises, design flaws, and basic security concerns such as weakened or compromised credentials and insecure deployments. The higher reward tiers target more sensitive projects like Bazel, Angular, Golang, Protocol Buffers, and Fuchsia. Rewards start at $100 and rise to $31,337.
Google aspires to improve OSS security through this community-driven, collaborative endeavour. The programme is part of the $10 billion cybersecurity investment that Google unveiled a year ago during a meeting with US President Joe Biden. Back in April, Google also pledged support for the Open Source Security Foundation's (OpenSSF) Package Analysis Project, which identifies malicious open source packages.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/google-reveals-vulnerability-reward-program-specifically-for-open-source-software/
Author: [Laveesh Kocher][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@@ -0,0 +1,74 @@
[#]: subject: "Live Debugger Tool for Apps, Sidekick, is Now Open Source"
[#]: via: "https://news.itsfoss.com/sidekick-open-source/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Live Debugger Tool for Apps, Sidekick, is Now Open Source
======
Sidekick is a live application debugger with useful features. It is now open-source and can be self-hosted.
![Live Debugger Tool for Apps, Sidekick, is Now Open Source][1]
Sidekick is a live application debugger, meaning it lets developers know about bugs and issues in their applications in real-time.
It was primarily a paid tool for the job, with a 14-day trial plan to test it out.
📢 *And now*: **it is open-source**!
So, if you were hesitating to pay for the tool as a subscription, you can now **self-host it and use it for free** as per your requirements.
### 💡 What is Sidekick?
![Meet Sidekick , Your Brand New Live Application Debugger 🔥][2]
[Sidekick][3] is a real-time application debugger.
You no longer need to recreate production environments on your local machine; Sidekick lets you debug applications as they're running. It aims to give you the same kind of perks you get when you debug in your local environment.
It offers a range of features that let you send the collected data to third-party apps like Slack, and use the Sidekick plugin with some of your favorite IDEs, including Visual Studio Code and IntelliJ IDEA.
You can filter out relevant data to quickly debug issues by using its data collection feature.
With the insights provided to you, Sidekick helps you optimize cost, eliminate issues, and collaborate efficiently to keep your application running without hiccups.
### 🚀 How to Get Started?
For starters, you can head to its [official website][6] and try it out in the [sandbox environment][7].
If you want a managed platform to use Sidekick, you can opt in to the subscription plans, which start at **$29** per month. To opt for **self-hosting**, you can use the official Docker image or build it yourself.
You can find the source code and the instructions for it on its [GitHub page][8].
In either case, you should refer to its [official documentation][9].
[Sidekick GitHub][10]
💬 *What do you think about Sidekick as a free and open-source live application debugger? Have you tried it before? Kindly let us know your thoughts in the comments below.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/sidekick-open-source/
Author: [Ankush Das][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/08/sidekick-live-debugger-open-source.png
[2]: https://youtu.be/qy4Nu6CIeuM
[3]: https://www.runsidekick.com/
[4]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[5]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[6]: https://www.runsidekick.com/
[7]: https://app.runsidekick.com/sandbox
[8]: https://github.com/runsidekick/sidekick
[9]: https://docs.runsidekick.com/
[10]: https://github.com/runsidekick/sidekick


@@ -0,0 +1,82 @@
[#]: subject: "Introduce the different Fedora Linux editions"
[#]: via: "https://fedoramagazine.org/introduce-the-different-fedora-linux-editions/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Introduce the different Fedora Linux editions
======
![Introduce the different Fedora Linux editions][1]
Photo by [Frédéric Perez][2] on [Unsplash][3]
We all have different preferences when using Fedora Linux. For example, some people choose Fedora Linux because Fedora Workstation uses GNOME as its default desktop environment, while others want Fedora Linux with a different desktop environment. Some people have specific needs but don't want to be bothered with system configuration and application installation, and some want the freedom to install Fedora Linux exactly according to their needs. Fedora Linux therefore provides several editions to match these needs. This article introduces the different Fedora Linux editions.
### Fedora Official Editions
We start with the official editions of Fedora Linux, namely Fedora Workstation, Fedora Server, and Fedora IoT. Fedora Workstation is the official edition of Fedora Linux for laptops and desktop computers. It comes with GNOME as the default desktop environment and a set of standard applications, so Fedora Linux is ready for daily use. Fedora Server targets server computers and provides installers for mail servers, DNS, and more. Finally, Fedora IoT serves the Internet of Things and Device Edge ecosystems.
On the main page of the Fedora Project website you can find two other editions: Fedora CoreOS and Fedora Silverblue. Fedora CoreOS is an automatically updating operating system designed to run containerized workloads securely and at scale, while Fedora Silverblue is an immutable desktop operating system designed to support container-focused workflows.
![Introduce the different Fedora Linux editions: Fedora Workstation][4]
More information is available at this link: [https://getfedora.org/][5]
### Fedora Spins: alternative desktops
This edition of Fedora Linux is in great demand among those who care deeply about the look of their desktop. Most people know that Fedora Linux ships only GNOME as the default desktop environment, but there are several alternatives if you really want to use a desktop environment other than GNOME. With Fedora Spins, you get your favorite desktop environment as soon as you install Fedora Linux. You can choose from KDE Plasma, XFCE, LXQt, MATE, Cinnamon, LXDE, and SoaS. Moreover, for those who like tiling window managers, Fedora Linux provides the Fedora i3 Spin, with i3 as the default window manager accompanied by several standard applications.
![Introduce the different Fedora Linux editions: Fedora Plasma][6]
![Introduce the different Fedora Linux editions: Fedora Cinnamon][7]
More information is available at this link: [https://spins.fedoraproject.org/][8]
### Fedora Labs: functional bundles
Fedora Labs is a collection of Fedora Linux packages bundled according to specific needs, so each of these editions provides the applications and content necessary for its purpose. Fedora Labs offers bundles such as Astronomy, Comp Neuro, Design Suite, Games, JAM, Python Classroom, Security Lab, Robotics Suite, and Scientific. If you want to use Fedora Linux for your design work, then Design Suite is the right choice for you. If you like playing games, you can choose Games.
![Introduce the different Fedora Linux editions: Fedora Design Suite][9]
![Introduce the different Fedora Linux editions: Fedora Games][10]
More information is available at this link: [https://labs.fedoraproject.org/][11]
### Fedora Alt Downloads
Fedora Alt Downloads is a collection of alternative Fedora Linux installers for specific purposes, such as testing or particular architectures, along with alternative formats such as network installers and torrent downloads. Here you can find the Network Installer, Torrent Downloads, Alternative Architectures, Cloud Base Images, Everything, Testing Images, and Rawhide.
More information is available at this link: [https://alt.fedoraproject.org/][12]
### Conclusion
You have the freedom to choose a Fedora Linux edition that suits your preferences beyond the official editions. If you want Fedora Linux with a variety of desktop appearances, then Fedora Spins is for you. You can choose Fedora Labs if you want Fedora Linux complete with applications and packages suited to your needs. And if you are an expert and want to install Fedora Linux with more freedom, you can browse the alternative options at Fedora Alt Downloads. Hopefully this article helps you choose the right Fedora Linux edition; please share your experience with Fedora Linux in the comments.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/introduce-the-different-fedora-linux-editions/
Author: [Arman Arisman][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lkxed
[1]: https://fedoramagazine.org/wp-content/uploads/2021/11/FedoraMagz-FedoraEditions-Intro-816x345.png
[2]: https://unsplash.com/@fredericp?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/blue-abstract?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/wp-content/uploads/2021/11/g-monitor-overview.png
[5]: https://getfedora.org/
[6]: https://fedoramagazine.org/wp-content/uploads/2021/11/screenshot-kde-1024x640.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2021/11/screenshot-cinnamon-1024x576.jpg
[8]: https://spins.fedoraproject.org/
[9]: https://fedoramagazine.org/wp-content/uploads/2021/11/Fedora-Design-1024x792.png
[10]: https://fedoramagazine.org/wp-content/uploads/2021/11/Fedora-Games-1024x792.png
[11]: https://labs.fedoraproject.org/
[12]: https://alt.fedoraproject.org/


@@ -1,115 +0,0 @@
[#]: subject: "How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish"
[#]: via: "https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish
======
The KDE developers have enabled the popular backports PPA with the updates necessary for KDE Plasma 5.25, which you can now install in Kubuntu 22.04 Jammy Jellyfish. Here's how.
KDE Plasma 5.25 was released a few days back, on June 14, 2022, with some stunning updates. With this release, you get a **dynamic accent colour**, revamped login avatars, a **floating panel**, and many other features, which we covered in the [feature highlight article][1].
But if you are running [Kubuntu 22.04 Jammy Jellyfish][2], released back in April 2022, you have KDE Plasma 5.24 with KDE Frameworks 5.92.
You are probably waiting to enjoy the new features in your stable Kubuntu 22.04 release, and it is now possible to install them in Kubuntu 22.04 via the famous backports PPA.
### How to Install KDE Plasma 5.25 in Kubuntu 22.04
Here's how you can upgrade Kubuntu 22.04 to the latest KDE Plasma 5.25.
#### GUI Method
If you are comfortable with KDE's software app Discover, open the app, browse to Settings > Sources, and add the PPA `ppa:kubuntu-ppa/backports-extra`. Then click on Updates.
#### Terminal Method (recommended)
I recommend opening a terminal and doing this upgrade there for faster execution and installation.
* Open Konsole and run the following command to add the [backports PPA][3].
```
sudo add-apt-repository ppa:kubuntu-ppa/backports-extra
```
![Upgrade Kubuntu 22.04 with KDE Plasma 5.25][4]
* Now, refresh the package list by running the following command. Then verify the 5.25 packages are available.
```
sudo apt update
```
```
apt list --upgradable | grep 5.25
```
![KDE Plasma 5.25 packages are available now][5]
Finally, run the last command to kick off the upgrade.
```
sudo apt full-upgrade
```
The total download is around 200 MB worth of packages. The entire process takes around 10 minutes, depending on your internet connection speed.
After the above command is complete, restart your system.
Post-restart, you should see the new KDE Plasma 5.25 in Kubuntu 22.04 LTS.
![KDE Plasma 5.25 in Kubuntu 22.04 LTS][6]
### Other backport PPA
Please note that the [other backports PPA][7], `ppa:kubuntu-ppa/backports`, currently has Plasma 5.24. So do not use the following PPA, which is different from the one above. I am not sure whether that PPA will get this update.
```
sudo add-apt-repository ppa:kubuntu-ppa/backports  # don't use this
```
### How to Uninstall
At any moment, if you would like to go back to the stock version of the KDE Plasma desktop, you can install ppa-purge and remove the PPA, then refresh the package list.
Open a terminal and execute the following commands in sequence.
```
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-extra
sudo apt update
```
Once the above commands are complete, restart your system.
### Closing Notes
There you have it: nice and simple steps to upgrade the stock KDE Plasma to Plasma 5.25 in Jammy Jellyfish. I hope your upgrade goes smoothly.
Do let me know in the comment section if you face any error.
Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/
Author: [Arindam][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/kde-plasma-5-25/
[2]: https://www.debugpoint.com/kubuntu-22-04-lts/
[3]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports-extra
[4]: https://www.debugpoint.com/wp-content/uploads/2022/08/Upgrade-Kubuntu-22.04-with-KDE-Plasma-5.25.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-packages-are-available-now.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-in-Kubuntu-22.04-LTS-1024x575.jpg
[7]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports


@@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/8/groovy-script-java-music"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "


@@ -0,0 +1,95 @@
[#]: subject: "4 ways to use the Linux tar command"
[#]: via: "https://opensource.com/article/22/8/linux-tar-command"
[#]: author: "AmyJune Hineline https://opensource.com/users/amyjune"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 ways to use the Linux tar command
======
How do you use the `tar` command? That's what I recently asked our community of writers. Here are some of their answers.
When you have a lot of related files, it's sometimes easier to treat them as a single object rather than as 3 or 20 or 100 unique files. There are fewer clicks involved, for instance, when you email *one* file compared to the mouse work required to email 30 separate files. This quandary was solved decades ago when programmers invented a way to create an *archive*, and so the `tar` command was born (the name stands for *tape archive* because, back then, files were saved to magnetic tape). Today, `tar` remains a useful way to bundle files together, whether it's to compress them so they take up less space on your drive, to make it easier to deal with lots of files, or to logically group files together as a convenience.
I asked Opensource.com authors how they used `tar`, and related tools like `zip` and `gzip`, in their daily work. Here's what they said.
### Backups and logs
I use `tar` and `zip` whenever I need to make a backup or archive of an entire directory tree. For example, delivering a set of files to a client, or just making a quick backup of my web root directory before I make a major change on the website. If I need to share with others, I create a ZIP archive with `zip -9r`, where `-9` uses best possible compression, and `-r` will recurse into subdirectories. For example, `zip -9r client-delivery.zip client-dir` makes a zip file of my work, which I can send to a client.
If the backup is just for me, I probably use `tar` instead. When I use `tar`, I usually use `gzip` to compress, and I do it all on one command line with `tar czf`, where `c` will create a new archive file, `z` compresses it with `gzip`, and `f` sets the archive filename. For example, `tar czf web-backup.tar.gz html` creates a compressed backup of my `html` directory.
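As a quick sketch of that backup workflow (the directory and file names are illustrative):

```shell
# Work in a scratch directory with a small "html" tree
cd "$(mktemp -d)"
mkdir html
echo "<p>hello</p>" > html/index.html

# c = create, z = compress with gzip, f = archive file name
tar czf web-backup.tar.gz html

# t lists the archive contents, a quick way to verify the backup
tar tzf web-backup.tar.gz
```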
I also have web applications that create log files. And to keep them from taking up too much space, I compress them using `gzip`. The `gzip` command is a great way to compress a *single file*. This can be a TAR archive file, or just any regular file like a log file. To make the gzipped file as small as possible, I compress the file with `gzip -9`, where `-9` uses the best possible compression.
The great thing about using `gzip` to compress files is that I can use commands like `zcat` and `zless` to view them later, without having to uncompress them on the disk. So if I want to look at my log file from yesterday, I can use `zless yesterday.log.gz`, and the `zless` command automatically uncompresses the data with `gunzip` and sends it to the `less` viewer. Recently, I wanted to look at how many log entries I had per day, and I ran that with a `zcat` command like:
```
for f in *.log.gz; do echo -n "$f,"; zcat "$f" | wc -l; done
```
This generates a comma-separated list of log files and a line count, which I can easily import to a spreadsheet for analysis.
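As a minimal sketch of that compress-then-read workflow (the log name here is only an example):

```shell
# Make a sample log file to stand in for a real one
printf 'line one\nline two\n' > yesterday.log

# Compress in place with maximum compression; the file becomes yesterday.log.gz
gzip -9 yesterday.log

# Read the contents back without writing an uncompressed copy to disk
zcat yesterday.log.gz
```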
**[—Jim Hall][2]**
### Zcat
I introduced the `zcat` command in my article [Getting started with the cat command][3]. Maybe this can act as a stimulus for further discussion of "in-place" compressed data analysis.
**[—Alan Formy-Duval][4]**
### Zless and lzop
I love having `zless` to browse log files and archives. It really helps reduce the risk of leaving random old log files around that I haven't cleaned up.
When dealing with compressed archives, `tar -zxf` and `tar -zcf` are awesome, but don't forget about `tar -j` for those bzip2 files, or even `tar -J` for the highly compressed xz files.
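As a sketch of those variants (the file and directory names here are placeholders):

```shell
# Sample directory to archive
mkdir -p somedir && echo data > somedir/file.txt

# j selects bzip2 compression
tar -cjf backup.tar.bz2 somedir

# J selects xz compression (often smaller archives, slower to create)
tar -cJf backup.tar.xz somedir

# Extraction mirrors creation: swap c for x
tar -xjf backup.tar.bz2
tar -xJf backup.tar.xz
```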
If you're dealing with a platform with limited CPU resources, you could even consider a lower overhead solution like `lzop`. For example, on the source computer:
```
tar --lzop -cf - source_directory | nc destination-host 9999
```
On the destination computer:
```
nc -l 9999 | tar --lzop -xf -
```
I've often used that to compress data between systems where we have bandwidth limitations and need a low resource option.
**[—Steven Ellis][5]**
### Ark
I've found myself using the KDE application Ark lately. It's a GUI application, but it integrates so well with the Dolphin file manager that I've gotten into the habit of just updating files straight into an archive without even bothering to unarchive the whole thing. Of course, you can do the same thing with the `tar` command, but if you're browsing through files in Dolphin anyway, Ark makes it quick and easy to interact with an archive without interrupting your current workflow.
![Ark][6]
Image by: (Seth Kenlon, CC BY-SA 4.0)
Archives used to feel a little like a forbidden vault to me. Once I put files into an archive, they were as good as forgotten because it just isn't always convenient to interact with an archive. But Ark lets you preview files without uncompressing them (technically they're being uncompressed, but it doesn't "feel" like they are because it all happens in place), remove a file from an archive, update files, rename files, and a lot more. It's a really nice and dynamic way to interact with archives, which encourages me to use them more often.
**[—Seth Kenlon][7]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/linux-tar-command
作者:[AmyJune Hineline][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/amyjune
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/collab-team-pair-programming-code-keyboard2.png
[2]: https://opensource.com/users/jim-hall
[3]: https://opensource.com/Getting%20Started%20with%20the%20Cat%20Command
[4]: https://opensource.com/users/alanfdoss
[5]: https://opensource.com/users/steven-ellis
[6]: https://opensource.com/sites/default/files/2022-08/ark.webp
[7]: https://opensource.com/users/seth


@ -0,0 +1,118 @@
[#]: subject: "Clean up unwanted files in your music directory using Groovy"
[#]: via: "https://opensource.com/article/22/8/remove-files-music-directory-groovy"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Clean up unwanted files in your music directory using Groovy
======
In this demonstration, I remove unwanted files from the album directories.
In this series, I'm developing several scripts to help in cleaning up my music collection. In the last article, we used the framework created for analyzing the directory and sub-directories of music files, checking to make sure each album has a `cover.jpg` file and recording any other files that aren't FLAC, MP3, or OGG.
I uncovered a few files that can obviously be deleted—I see the odd `foo` lying around—and a bunch of PDFs, PNGs, and JPGs that are album art. With that in mind, and thinking about the cruft removal task, I offer an improved script that uses a Groovy map to record file names and counts of their occurrences and print that in CSV format.
### Get started analyzing with Groovy
If you haven't already, read the first three articles of this series before continuing:
* [How I analyze my music directory with Groovy][2]
* [My favorite open source library for analyzing music files][3]
* [How I use Groovy to analyze album art in my music directory][4]
They'll ensure you understand the intended structure of my music directory, the framework created in that article, and how to pick up FLAC, MP3, and OGG files. In this article, I move on to removing unwanted files from the album directories.
### The framework and the album files analysis bits
Start with the code. As before, I've incorporated comments in the script that reflect the (relatively abbreviated) "comment notes" that I typically leave for myself:
```
1        // Define the music library directory
2        def musicLibraryDirName = '/var/lib/mpd/music'
3        // Define the file name accumulation map
4        def fileNameCounts = [:]
5        // Print the CSV file header
6        println "filename|count"
7        // Iterate over each directory in the music library directory
8        // These are assumed to be artist directories
9        new File(musicLibraryDirName).eachDir { artistDir ->
10            // Iterate over each directory in the artist directory
11            // These are assumed to be album directories
12            artistDir.eachDir { albumDir ->
13                // Iterate over each file in the album directory
14                // These are assumed to be content or related
15                // (cover.jpg, PDFs with liner notes etc)
16                albumDir.eachFile { contentFile ->
17                    // Analyze the file
18                    if (contentFile.name ==~ /.*\.(flac|mp3|ogg)/) {
19                        // nothing to do here
20                    } else if (contentFile.name == 'cover.jpg') {
21                        // don't need to do anything with cover.jpg
22                    } else {
23                        def fn = contentFile.name
24                        if (contentFile.isDirectory())
25                            fn += '/'
26                        fileNameCounts[fn] = fileNameCounts.containsKey(fn) ?  fileNameCounts[fn] + 1 : 1
27                    }
28                }
29            }
30        }
31        // Print the file name counts
32        fileNameCounts.each { key, value ->
33            println "$key|$value"
34        }
```
This is a pretty straightforward set of modifications to the original framework.
Lines 3-4 define `fileNameCounts`, a map for recording file name counts.
Lines 17-27 analyze the file names. I skip any files ending in `.flac`, `.mp3`, or `.ogg`, as well as `cover.jpg` files.
Lines 23-26 record file names (as keys to `fileNameCounts`) and counts (as values). If the file is actually a directory, I append a `/` to help deal with it in the removal process. Note in line 26 that Groovy maps, like Java maps, need to be checked for the presence of the key before incrementing the value, unlike, for example, the [awk programming language][5].
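For comparison, the awk idiom alluded to above can be sketched as a one-liner; `counts[$0]++` works without an existence check because awk treats a missing key as zero (the input lines here are made-up examples):

```shell
# Count duplicate input lines with an associative array; no containsKey-style
# check is needed, and the output uses the same "name|count" format as the script
printf 'cover.pdf\nfoo\ncover.pdf\n' \
    | awk '{ counts[$0]++ } END { for (k in counts) print k "|" counts[k] }' \
    | sort
```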
That's it!
I run this as follows:
```
$ groovy TagAnalyzer4.groovy > tagAnalysis4.csv
```
Then I load the resulting CSV into a LibreOffice spreadsheet by navigating to the **Sheet** menu and selecting **Insert sheet from file**. I set the delimiter character to `|`.
![Image of a screenshot of LibreOffice Calc that shows tagAnalysis][6]
Image by: (Chris Hermansen, CC BY-SA 4.0)
I've sorted this in decreasing order of the **count** column to emphasize repeat offenders. Note as well, on lines 17-20, a bunch of M3U files that refer to the name of the album, probably created by some well-intentioned ripping program. I also see, further down (not shown), files like `fix` and `fixtags.sh`, evidence of prior efforts to clean up some problem that left other cruft lying around in the process. I use the `find` command-line utility to get rid of some of these files, along the lines of:
```
$ find . \( -name \*.m3u -o -name tags.txt -o -name foo -o -name .DS_Store \
-o -name fix -o -name fixtags.sh \) -exec rm {} \;
```
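One precaution worth taking before any destructive `find ... -exec rm`: run the same expression with `-print` first, so you can review exactly what would be deleted before committing to it:

```shell
# Same match expression, but only list the candidates; nothing is removed
find . \( -name \*.m3u -o -name tags.txt -o -name foo -o -name .DS_Store \
    -o -name fix -o -name fixtags.sh \) -print
```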
I suppose I could have used another Groovy script to do that as well. Maybe next time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/remove-files-music-directory-groovy
作者:[Chris Hermansen][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/music-column-osdc-lead.png
[2]: https://opensource.com/article/22/8/groovy-script-java-music
[3]: https://opensource.com/article/22/8/analyze-music-files-jaudiotagger
[4]: https://opensource.com/article/22/8/groovy-album-music-directory
[5]: https://opensource.com/article/19/10/intro-awk
[6]: https://opensource.com/sites/default/files/2022-08/Screenshot%20of%20LibreOffice%20Calc%20showing%20some%20of%20tagAnalysis.png


@ -0,0 +1,123 @@
[#]: subject: "Fedora Linux editions part 2: Spins"
[#]: via: "https://fedoramagazine.org/fedora-linux-editions-part-2-spins/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux editions part 2: Spins
======
![Fedora Linux editions part 2 Spins][1]
Photo by [Frédéric Perez][2] on [Unsplash][3]
One of the nice things about using Linux is the wide choice of desktop environments. The official Fedora Linux Workstation edition comes with GNOME as the default desktop environment, but you can choose another desktop environment as the default via Fedora Spins. This article will go into a little more detail about the Fedora Linux Spins. You can find an overview of all the Fedora Linux variants in my previous article [Introduce the different Fedora Linux editions][4].
### KDE Plasma Desktop
This Fedora Linux spin comes with KDE Plasma as the default desktop environment. KDE Plasma is an elegant desktop environment that is very easy to customize. Therefore, you can freely and easily change the appearance of your desktop as you wish. You can customize your favorite themes, install the widgets you want, change icons, change fonts, customize panels according to your preferences, and install various extensions from the community.
Fedora Linux KDE Plasma Desktop is installed with a variety of ready-to-use applications. You're ready to go online with Firefox, Kontact, Telepathy, KTorrent, and KGet. LibreOffice, Okular, Dolphin, and Ark are ready to use for your office needs. Your multimedia needs will be met with several applications such as Elisa, Dragon Player, K3B, and GwenView.
![Fedora KDE Plasma Desktop][5]
More information is available at this link: [https://spins.fedoraproject.org/en/kde/][6]
### XFCE Desktop
This version is perfect for those who want a balance between ease of customizing appearance and performance. XFCE itself is made to be fast and light, but still has an attractive appearance. This desktop environment is becoming popular for those with older devices.
Fedora Linux XFCE is installed with various applications that suit your daily needs. These applications are Firefox, Pidgin, Gnumeric, AbiWord, Ristretto, Parole, etc. Fedora Linux XFCE also already has a System Settings menu to make it easier for you to configure your Fedora Linux.
![Fedora XFCE Desktop][7]
More information is available at this link: [https://spins.fedoraproject.org/en/xfce/][8]
### LXQT Desktop
This spin comes with a lightweight Qt desktop environment, and focuses on modern classic desktops without slowing down the system. This version of Fedora Linux includes applications based on the Qt5 toolkit and is Breeze themed. You will be ready to carry out various daily activities with built-in applications, such as QupZilla, QTerminal, FeatherPad, qpdfview, Dragon Player, etc.
![Fedora LXQt Desktop][9]
More information is available at this link: [https://spins.fedoraproject.org/en/lxqt/][10]
### MATE-Compiz Desktop
Fedora Linux MATE Compiz Desktop is a combination of MATE and Compiz Fusion. The MATE desktop allows this version of Fedora Linux to work optimally by prioritizing productivity and performance. At the same time, Compiz Fusion provides a beautiful 3D look with Emerald and GTK+ themes. This Fedora Linux spin is also equipped with various popular applications, such as Firefox, LibreOffice, Parole, and FileZilla.
![Fedora Mate-Compiz Desktop][11]
More information is available at this link: [https://spins.fedoraproject.org/en/mate-compiz/][12]
### Cinnamon Desktop
Because of its user-friendly interface, Fedora Linux Cinnamon Desktop is perfect for those who may be new to the Linux operating system. You can easily understand how to use this version of Fedora Linux. This spin has built-in applications that are ready to use for your daily needs, such as Firefox, Pidgin, GNOME Terminal, LibreOffice, Thunderbird, Shotwell, etc. You can use Cinnamon Settings to configure your operating system.
![Fedora Cinnamon Desktop][13]
More information is available at this link: [https://spins.fedoraproject.org/en/cinnamon/][14]
### LXDE Desktop
Fedora Linux LXDE Desktop has a desktop environment that performs fast but is designed to keep resource usage low. This spin is designed for low-spec hardware, such as netbooks, mobile devices, and older computers. Fedora Linux LXDE has lightweight and popular applications, such as Midori, AbiWord, Osmo, Sylpheed, etc.
![Fedora LXDE Desktop][15]
More information is available at this link: [https://spins.fedoraproject.org/en/lxde/][16]
### SoaS Desktop
SoaS stands for Sugar on a Stick. Fedora Linux Sugar Desktop is a learning platform for children, so it has a very simple interface that is easy for children to understand. The word “stick” in this context refers to a thumb drive or memory “stick”. This means this OS has a compact size and can be completely installed on a thumb drive. Schoolchildren can carry their OS on a thumb drive, so they can use it easily at home, school, library, and elsewhere. Fedora Linux SoaS has a variety of interesting learning applications for children, such as Browse, Get Books, Read, Turtle Blocks, Pippy, Paint, Write, Labyrinth, Physics, and FotoToon.
![Fedora SOAS Desktop][17]
More information is available at this link: [https://spins.fedoraproject.org/en/soas/][18]
### i3 Tiling WM
The i3 Tiling WM spin of Fedora Linux is a bit different from the others. This Fedora Linux spin does not use a desktop environment, but only uses a window manager. The window manager used is i3, which is a very popular tiling window manager among Linux users. Fedora i3 Spin is intended for those who focus on interacting using a keyboard rather than pointing devices, such as a mouse or touchpad. This spin of Fedora Linux is equipped with various applications, such as Firefox, NM Applet, brightlight, azote, htop, mousepad, and Thunar.
![Fedora i3 Tiling WM][19]
More information is available at this link: [https://spins.fedoraproject.org/en/i3/][20]
### Conclusion
Fedora Linux provides a large selection of desktop environments through Fedora Linux Spins. You can simply choose one of the Fedora Spins, and immediately enjoy Fedora Linux with the desktop environment of your choice along with its ready-to-use built-in applications. You can find complete information about Fedora Spins at [https://spins.fedoraproject.org/][21].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-editions-part-2-spins/
作者:[Arman Arisman][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lkxed
[1]: https://fedoramagazine.org/wp-content/uploads/2022/06/FedoraMagz-FedoraEditions-2-Spins-816x345.png
[2]: https://unsplash.com/@fredericp?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/blue-abstract?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/introduce-the-different-fedora-linux-editions/
[5]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-kde.jpg
[6]: https://spins.fedoraproject.org/en/kde/
[7]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-xfce.jpg
[8]: https://spins.fedoraproject.org/en/xfce/
[9]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-lxqt.jpg
[10]: https://spins.fedoraproject.org/en/lxqt/
[11]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-matecompiz.jpg
[12]: https://spins.fedoraproject.org/en/mate-compiz/
[13]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-cinnamon.jpg
[14]: https://spins.fedoraproject.org/en/cinnamon/
[15]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-lxde.jpg
[16]: https://spins.fedoraproject.org/en/lxde/
[17]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-soas.jpg
[18]: https://spins.fedoraproject.org/en/soas/
[19]: https://fedoramagazine.org/wp-content/uploads/2022/08/screenshot-i3.jpg
[20]: https://spins.fedoraproject.org/en/i3/
[21]: https://spins.fedoraproject.org/


@ -0,0 +1,365 @@
[#]: subject: "How To Manage Docker Containers Using Portainer In Linux"
[#]: via: "https://ostechnix.com/portainer-an-easiest-way-to-manage-docker/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Manage Docker Containers Using Portainer In Linux
======
Portainer - The Easiest Way To Manage Docker And Kubernetes
In this tutorial, we will learn what **Portainer** is, how to install Portainer, and how to **manage Docker containers using Portainer** in Linux.
### What Is Portainer?
**Portainer** is a lightweight, cross-platform, and open source management UI for Docker, Swarm, Kubernetes, and ACI environments.
Portainer allows you to manage containers, images, networks, and volumes via a simple web-based dashboard and/or an extensive API.
Using Portainer, we can easily deploy, configure and secure containers in minutes on Docker, Kubernetes, Swarm and Nomad in any cloud, datacenter or device.
It was originally a fork of Docker UI. The developer has since rewritten pretty much all of the original Docker UI code, completely revamped the UX, and added more functionality in recent versions.
Portainer is available in two editions: **Portainer Community Edition (CE)** and **Portainer Business Edition (BE)**.
Portainer CE is free for personal use and includes the essential features for container management. Portainer BE is a paid version that includes the complete feature set and professional support.
Portainer supports GNU/Linux, Microsoft Windows, and macOS.
### Prerequisites
For the purpose of this guide, we will be using Portainer CE, which is free.
**1.** Make sure you have installed Docker and it is working. Portainer has full support for Docker version 1.10 and higher versions.
To install Docker on Linux, refer to the following links.
* [Install Docker Engine And Docker Compose In AlmaLinux, CentOS, Rocky Linux][1]
* [How to Install Docker Engine And Docker Compose In Ubuntu][2]
**Heads Up:** You can also install **[Docker Desktop][3]** and then install Portainer as an extension via the **marketplace**. But that is outside the scope of this guide.
**2.** Make sure you have **sudo** or **root** access to deploy Portainer community edition using Docker.
**3.** Open or allow ports **9443**, **9000**, and **8000** in your router or firewall if you want to access the Portainer web UI from a remote system.
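For example, on a distribution that uses `firewalld`, opening those ports could look like the following sketch (this assumes `firewalld` is your firewall and that the default zone applies; adjust for your own setup):

```shell
# Open the Portainer HTTPS UI (9443), legacy HTTP UI (9000),
# and the optional Edge agent tunnel (8000)
sudo firewall-cmd --permanent --add-port=9443/tcp
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
```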
### Install Portainer With Docker In Linux
Portainer CE installation is pretty easy and it will take only a few minutes.
First of all, create a volume for the Portainer server to store its database.
```
$ sudo docker volume create portainer_data
```
Next, run the following command to pull the latest Portainer image and start the Portainer container:
```
$ sudo docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
**Heads Up:** By default, Portainer Server will expose the UI over port `9443` and expose a TCP tunnel server over port 8000. The latter is optional and is only required if you plan to use the Edge compute features with Edge agents.
**Sample output:**
```
Unable to find image 'portainer/portainer-ce:latest' locally
latest: Pulling from portainer/portainer-ce
772227786281: Pull complete
96fd13befc87: Pull complete
4847ec395191: Pull complete
4c2d012c4350: Pull complete
Digest: sha256:70a61e11a899c56f95c23f734c0777b26617729fcb8f0b61905780f3144498e3
Status: Downloaded newer image for portainer/portainer-ce:latest
4b3a95e8c999f5651dfde13b5519d19a93b143afbcd6fd1f8035af5645bd0e5f
```
By default, Portainer generates and uses a self-signed SSL certificate to secure port 9443. If you require HTTP port 9000 open for legacy reasons, add `-p 9000:9000` to your docker run command:
```
$ sudo docker run -d -p 8000:8000 -p 9000:9000 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Let us check whether the Portainer image has been pulled or not.
```
$ sudo docker images
```
**Sample output:**
```
REPOSITORY TAG IMAGE ID CREATED SIZE
portainer/portainer-ce latest ab836adaa325 4 weeks ago 278MB
```
We have now installed Portainer on our local Ubuntu system, and the Portainer container is already running. Let us go ahead and access the Portainer UI.
### Manage Docker Containers Using Portainer
Open your web browser and point it to any one of the following URLs depending upon the port number you used when starting Portainer.
* Portainer HTTPS URL (with self-signed certificate) - https://localhost:9443/ or https://IP_Address:9443/.
* Portainer HTTP URL - http://localhost:9000/ or http://IP_Address:9000/.
You will be presented with a screen like the one below, where you should set a strong password for the Portainer **admin** user. Enter a strong password with a minimum of 12 characters and click the **Create user** button.
![Create Password For Portainer Admin User][4]
Choose whether you want to proceed using the local environment in which Portainer is running, or connect to other environments. I don't have any other environments, so I clicked the **Get Started** button to proceed with the local environment.
![Portainer Admin Dashboard][5]
This is how the Portainer admin dashboard looks. The dashboard home screen displays the list of connected environments. As you can see in the screenshot below, we are connected to the "local" environment.
![Portainer Home][6]
Click on the local environment to see the running and stopped containers, the number of downloaded Docker images, and the number of volumes and networks.
![Environment Summary][7]
You don't have to memorize docker commands. Everything can be done from the Dashboard itself.
Let us go ahead and create some containers.
#### Creating Containers
Make sure you're in the Local environment.
Click on the **App Templates** button in the left sidebar. You will see some ready-made templates such as Docker image registry, Nginx, Httpd, MySQL, WordPress, and a few more.
![Application Templates List][8]
To deploy a Container, just click on the respective template and follow the on-screen instructions.
For instance, let us launch **MySQL** Container. To do so, click on the **MySQL** template.
![Launch MySQL Template][9]
Enter the container name, select the network type (e.g., bridge mode), and set the database root user's password. Click on **Show advanced options** to set the port number. If you're not sure what to enter, just leave the default values.
Finally, click the **Deploy the container** button to create the MySQL container.
![Create MySQL Docker Container][10]
Once the container is created, you will be redirected to the **Containers** page, where you can see the list of created and running containers.
![Container List][11]
Under the Containers list section, you will see the following:
* Name of the running and stopped containers,
* Status of the containers,
* Quick actions buttons,
* Docker image used to create the containers,
* the date and time of container creation,
* IP address of the container,
* Published ports,
* and Ownership details.
To start or stop the newly created container, just select it and hit the **Start** or **Stop** button at the top. You can restart, pause, and remove any container from this section.
#### Manage Containers
We can do all container management operations, such as adding a new container, and starting, stopping, restarting, pausing, killing, or removing existing containers, from the Containers section.
![Create And Manage Containers From Portainer][12]
You will see a few "Quick Actions" buttons next to each container. Clicking on a button will perform the respective action.
Under the Quick Actions tab, you will see the following buttons.
* Logs - Display Container logs.
* Inspect - Inspect the container.
* Stats - View Container statistics.
* Console - Access Container console.
* Attach - Attach To Container console.
![Quick Actions][13]
##### View Container Logs
Select a container from the Containers list and then click the **Logs** button under the Quick Actions tab.
![Container Logs][14]
Here, you can view complete log details of the Container.
##### Inspect Container
Click the **Inspect** button under the Quick Actions tab to inspect the container.
![Container Inspect][15]
##### View Container Stats
Click on the **Stats** button to view what's happening in the newly launched Container.
![Container Statistics][16]
##### Access Container Console
You can easily connect to the console of your Container by clicking on the **Console** button.
![Access Container Console][17]
Select the shell (BASH or SH), and hit the **Connect** button.
![Connect To Console][18]
Now you will be connected to the Container's console.
![Container Console][19]
#### View Container Details
To view the complete overview of any container, just click on the name of the container from the Containers list.
![Container Details][20]
As you see in the above output, the Containers details section is further divided into the following sub-sections:
* Actions - This section contains buttons to control the container, such as Start, Stop, Kill, Restart, Pause, Resume, Remove, Recreate, and Duplicate/Edit.
* Container status - In this section, you will see container details such as the name, IP address, status, creation time, start time, and a few more. This section also offers the same controls as the Quick Actions tab: Logs, Inspect, Stats, Console, and Attach.
* Access control - View and change ownership.
* Container health - In this section, you will see the container health status, failure count and `mysqld` service status.
* Create image - This section allows you to create an image from this container. This allows you to backup important data or save helpful configurations. You'll be able to spin up another container based on this image afterward.
* Container details - In this section, you can view the docker image used to create this container, port configuration details, and environment details etc.
* Volumes - See the list of attached volumes to the container.
* Networks - View network configuration details.
Please note that you can do all aforementioned management actions (i.e. View Stats/Logs, Inspect, Access Console etc.) from the "Container Details" section too.
![Container Control Buttons][21]
### Docker Images
In this section, you can view the list of downloaded docker images.
![Docker Image List][22]
You can also build a new image, and import, export, or delete Docker images. To remove an image, just select it and click **Remove**.
### Networks
The Networks section allows you to add a new network, change the network type, assign or change IP addresses, and remove existing networks.
![Network List][23]
### Volumes
Here, you can view existing Docker volumes, create new ones, and delete any you no longer need.
![Volume List][24]
### Events
In this section, we can view what we have done so far, such as creating a new instance, network, or volume.
![Event List][25]
### Host Overview
This section displays the Docker engine version, host OS name, type, architecture, CPU, memory, and network details.
![Host Overview][26]
Under this section, you can also configure Docker features and set up registries (i.e., Docker Hub, Quay, Azure, GitLab, etc.).
### Users
The users section allows us to add new users, add users to teams, view the list of existing users, and delete users.
![Users][27]
You can also create a team (e.g., development), add users to it, and assign different roles to them. The roles feature is available only in the Portainer Business Edition.
### Environments
In this section, you can add a new environment and view existing environments.
![Environments][28]
In Portainer CE, you can add Docker, Kubernetes and ACI environments. In business edition, you can add two more environments called Nomad and KaaS.
### Authentication Logs
The Authentication logs section shows user authentication activity details. Portainer user authentication activity logs have a maximum retention of 7 days. This is actually a Business Edition feature; if you're using the Community Edition, you can't use it.
### Settings
This section is dedicated to Portainer settings. Here you can configure settings such as:
* define the snapshot level for containers,
* use custom logo for Portainer dashboard,
* specify the URL to your own template definitions file and HELM repository,
* configure SSL certificate,
* backup Portainer configuration etc.
### Conclusion
In this detailed guide, we discussed what Portainer is, how to install it, and how to use it to create and manage Docker containers in Linux.
We also took a brief tour of each section of the Portainer web dashboard. Using Portainer, you can do complete Docker management either from the local system itself or from a remote system.
If you want a feature-rich, yet simple to use, centralized Docker management solution, you should give Portainer a try.
For more details, check the official resources given below.
**Resources:**
* [Portainer website][29]
* [Portainer on GitHub][30]
Any thoughts on Portainer? Have you already tried it? Great! Let us know in the comment section below.
--------------------------------------------------------------------------------
via: https://ostechnix.com/portainer-an-easiest-way-to-manage-docker/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/install-docker-almalinux-centos-rocky-linux/
[2]: https://ostechnix.com/install-docker-ubuntu/
[3]: https://ostechnix.com/docker-desktop-for-linux/
[4]: https://ostechnix.com/wp-content/uploads/2022/08/Create-Password-For-Portainer-Admin-User.png
[5]: https://ostechnix.com/wp-content/uploads/2022/08/Portainer-Admin-Dashboard.png
[6]: https://ostechnix.com/wp-content/uploads/2022/08/Portainer-Home-1.png
[7]: https://ostechnix.com/wp-content/uploads/2022/08/Environment-Summary.png
[8]: https://ostechnix.com/wp-content/uploads/2022/08/Application-Templates-List.png
[9]: https://ostechnix.com/wp-content/uploads/2022/08/Launch-MySQL-Template.png
[10]: https://ostechnix.com/wp-content/uploads/2022/08/Create-MySQL-Docker-Container.png
[11]: https://ostechnix.com/wp-content/uploads/2022/08/Container-List.png
[12]: https://ostechnix.com/wp-content/uploads/2022/08/Create-And-Manage-Containers-From-Portainer.png
[13]: https://ostechnix.com/wp-content/uploads/2022/08/Quick-Actions.png
[14]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Logs.png
[15]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Inspect.png
[16]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Statistics.png
[17]: https://ostechnix.com/wp-content/uploads/2022/08/Access-Container-Console.png
[18]: https://ostechnix.com/wp-content/uploads/2022/08/Connect-To-Console.png
[19]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Console.png
[20]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Details.png
[21]: https://ostechnix.com/wp-content/uploads/2022/08/Container-Control-Buttons.png
[22]: https://ostechnix.com/wp-content/uploads/2022/08/Docker-Image-List.png
[23]: https://ostechnix.com/wp-content/uploads/2022/08/Network-List.png
[24]: https://ostechnix.com/wp-content/uploads/2022/08/Volume-List.png
[25]: https://ostechnix.com/wp-content/uploads/2022/08/Event-List.png
[26]: https://ostechnix.com/wp-content/uploads/2022/08/Host-Overview.png
[27]: https://ostechnix.com/wp-content/uploads/2022/08/Users.png
[28]: https://ostechnix.com/wp-content/uploads/2022/08/Environments.png
[29]: http://www.portainer.io/
[30]: https://github.com/portainer/portainer

[#]: subject: "How to Setup EKS Cluster along with NLB on AWS"
[#]: via: "https://www.linuxtechi.com/how-to-setup-eks-cluster-nlb-on-aws/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Setup EKS Cluster along with NLB on AWS
======
Are you looking for an easy guide for setting up an EKS cluster on AWS?
The step-by-step guide on this page will show you how to set up an EKS cluster along with an NLB (Network Load Balancer) on AWS from scratch.
Amazon EKS is the Elastic Kubernetes Service; it basically has two components: the control plane and the worker nodes. Let's dive into the steps.
### 1) Create VPC for EKS Cluster
Log in to your AWS console and create a VPC with two public and two private subnets in two different availability zones.
Also create an internet gateway and a NAT gateway, and add routes to the public and private subnets' route tables respectively.
Refer to the following guide for creating a VPC:
* [How to Configure your own VPC(Virtual Private Cloud) in AWS][1]
In my case, I have created the following VPC, subnets, internet & NAT gateways, and route tables.
![VPC-for-EKS-Cluster][2]
### 2) Install and Configure AWS CLI, eksctl and kubectl
Create a virtual machine either on-premises or on AWS. Make sure the virtual machine has internet connectivity. In my case, I created an Ubuntu 22.04 virtual machine.
Log in to the virtual machine and install the AWS CLI using the following steps:
```
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
```
Get your account's access key and secret key from the AWS console.
![AWS-Account-Access-Secret-Keys][3]
Now, run the following command to configure the AWS CLI:
```
$ aws configure
```
It will prompt you to enter the access key and secret key.
![AWS-Cli-configure-Ubuntu-22-04][4]
Once the above command is executed successfully, it will create two files under the `.aws` folder:
* config
* credentials
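For reference, the two files typically look something like the following. This is an illustrative sketch only, written to a temporary directory; the key values are placeholders, not real credentials.

```shell
# Illustrative sketch: recreate the layout of ~/.aws/config and
# ~/.aws/credentials in a temporary directory (all values are placeholders)
mkdir -p /tmp/aws-demo

cat > /tmp/aws-demo/config <<'EOF'
[default]
region = us-west-2
output = json
EOF

cat > /tmp/aws-demo/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXx
EOF

cat /tmp/aws-demo/config
```

The real files live under `~/.aws/` and are written for you by `aws configure`; you normally never need to edit them by hand.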
Run the following command to test the AWS CLI:
```
$ aws sts get-caller-identity
{
    "UserId": "xxxxxxxxxxxx",
    "Account": "xxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxxx:root"
}
$
```
We will be using the eksctl command-line utility to configure the Amazon EKS cluster, so run the following set of commands to install it:
```
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
0.109.0
$
```
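Note that the download URL above embeds the platform name via `$(uname -s)`, so the same command works across supported operating systems. A quick way to see which archive name it resolves to on your machine:

```shell
# Show which eksctl archive name the curl command above would request
echo "eksctl_$(uname -s)_amd64.tar.gz"
```

On a Linux machine this prints `eksctl_Linux_amd64.tar.gz`.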
kubectl is also a command-line tool that allows us to interact with the EKS cluster. To install it, run the commands below one after another:
```
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ kubectl version --client
```
![kubectl-install-for-eks-ubuntu][5]
Perfect, we are now ready to create the EKS cluster using the eksctl utility.
Copy the public and private subnet IDs of your VPC from the VPC console. We will use these IDs in the cluster YAML file.
![Subnet-Ids-VPC-Console-AWS][6]
### 3) Create EKS Cluster with eksctl utility
Create a cluster YAML file on your virtual machine with the following content:
```
$ vi demo-eks.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks
  region: us-west-2
vpc:
  subnets:
    private:
      us-west-2a: { id: subnet-077d8aa1452f14836 }
      us-west-2b: { id: subnet-0131b18ab955c0c85 }
    public:
      us-west-2a: { id: subnet-0331b5df019a333b5 }
      us-west-2b: { id: subnet-0f92db1ada42abde3 }
nodeGroups:
  - name: ng-workers
    labels: { role: workers }
    instanceType: t2.micro
    desiredCapacity: 2
    privateNetworking: true
    iam:
      withAddonPolicies:
        imageBuilder: true
    ssh:
      publicKeyPath: /home/linuxtechi/.ssh/id_rsa.pub
```
![eks-cluster-yaml-file][7]
Here we are using the public subnets for the control plane and the private subnets for the worker nodes. eksctl will also automatically create IAM roles and security groups for the control plane and worker nodes.
Apart from this, we are also using a node group named ng-workers for the worker nodes, with a desired capacity of two and instance type t2.micro. Moreover, we have specified the linuxtechi user's public key so that we can SSH into the worker nodes.
Note: Please change these parameters as per your setup.
Run the following eksctl command to initiate the EKS cluster setup:
```
$ eksctl create cluster -f demo-eks.yaml
```
![eksctl-create-cluster-aws][8]
Once the cluster is set up successfully, we will get the following output:
![EKS-Cluster-Ready-Message-AWS][9]
Great, the output above confirms that the EKS cluster is ready. Run the following kubectl command to view the status of the worker nodes:
```
$ kubectl get nodes
```
![EKS-Cluster-Nodes-Kubectl-Command][10]
Head back to the AWS console and verify the EKS cluster status.
![EKS-Cluster-Status-AWS-Console][11]
Now, let's deploy an ingress controller along with an NLB so that applications from this cluster are accessible from outside.
### 4) Deploy Ingress Controller and NLB
We will be deploying an NGINX-based ingress controller. Download the following YAML file using the wget command:
```
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/aws/deploy.yaml
```
Change the parameter `externalTrafficPolicy: Local` to `externalTrafficPolicy: Cluster`.
Note: This YAML file has the required entries for the NGINX ingress controller and the AWS NLB.
```
$ sed  -i 's/externalTrafficPolicy: Local/externalTrafficPolicy: Cluster/g' deploy.yaml
```
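If you want to sanity-check the substitution before touching the real file, you can try the same sed expression on a minimal sample first (a sketch; the sample file and its contents are invented for illustration):

```shell
# Demonstrate the in-place substitution on a tiny sample file
printf 'spec:\n  externalTrafficPolicy: Local\n' > /tmp/deploy-sample.yaml
sed -i 's/externalTrafficPolicy: Local/externalTrafficPolicy: Cluster/g' /tmp/deploy-sample.yaml
cat /tmp/deploy-sample.yaml
```

The `-i` flag makes sed edit the file in place, exactly as the command above does to `deploy.yaml`.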
Execute the following kubectl command to deploy the ingress controller and NLB:
```
$ kubectl create -f deploy.yaml
```
Output,
![deploy-yaml-file-ingress-nlb-aws][12]
To verify the status of the ingress controller, run the following commands:
```
$ kubectl get ns
$ kubectl get all -n ingress-nginx
```
Output,
![Ingress-Controller-Status-AWS-EKS][13]
Head back to the AWS console and check the status of the NLB created via the deploy.yaml file.
![NLB-for-EKS-AWS-Console][14]
Perfect, the above confirms that the NLB has been set up properly for the EKS cluster.
### 5) Test EKS Cluster Installation
To test the EKS cluster installation, let's create an NGINX-based deployment. Run:
```
$ kubectl create deployment nginx-web --image=nginx --replicas=2
```
Create a service for the deployment. Run:
```
$ kubectl expose deployment nginx-web --name=nginx-web --type=LoadBalancer --port=80 --protocol=TCP
```
View the service status:
```
$ kubectl get svc nginx-web
```
The output of the above commands will look like below:
![Nginx-Based-Deployment-EKS-AWS][15]
To access the application, copy the URL shown in the service command output:
http://ad575eea69f5044f0ac8ac8d5f19b7bd-1003212167.us-west-2.elb.amazonaws.com
![Nginx-Default-Page-deployment-eks-aws][16]
Great, the NGINX page above confirms that we are able to access our NGINX-based deployment from outside the EKS cluster.
Once you are done with all the testing and want to remove the NLB and EKS cluster, run the following commands:
```
$ kubectl delete -f deploy.yaml
$ eksctl delete cluster -f demo-eks.yaml
```
That's all from this guide. I hope you are able to deploy an EKS cluster in your AWS account. Kindly post your queries and feedback in the comments section below.
Also Read: How to Create VPC Peering Across Two AWS Regions
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-setup-eks-cluster-nlb-on-aws/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/how-to-configure-vpc-in-aws/
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/08/VPC-for-EKS-Cluster.gif
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/08/AWS-Account-Access-Secret-Keys.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/08/AWS-Cli-configure-Ubuntu-22-04.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/08/kubectl-install-for-eks-ubuntu.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/08/Subnet-Ids-VPC-Console-AWS.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/08/eks-cluster-yaml-file.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/08/eksctl-create-cluster-aws.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/08/EKS-Cluster-Ready-Message-AWS.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/08/EKS-Cluster-Nodes-Kubectl-Command.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/08/EKS-Cluster-Status-AWS-Console.gif
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/08/deploy-yaml-file-ingress-nlb-aws.png
[13]: https://www.linuxtechi.com/wp-content/uploads/2022/08/Ingress-Controller-Status-AWS-EKS.png
[14]: https://www.linuxtechi.com/wp-content/uploads/2022/08/NLB-for-EKS-AWS-Console.gif
[15]: https://www.linuxtechi.com/wp-content/uploads/2022/08/Nginx-Based-Deployment-EKS-AWS.png
[16]: https://www.linuxtechi.com/wp-content/uploads/2022/08/Nginx-Default-Page-deployment-eks-aws.png

[#]: subject: "Scrivano: Fascinating Whiteboard App For Handwritten Notes"
[#]: via: "https://www.debugpoint.com/scrivano/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Scrivano: Fascinating Whiteboard App For Handwritten Notes
======
Let's find out the cool features of Scrivano, a whiteboard app for Linux systems.
### Scrivano
Scrivano is a new whiteboard application which has recently been getting some attention for its unique features and "ease of use". It has some seriously cool features, which I will talk about shortly.
When I wrote about the [top whiteboard applications][1] for taking handwritten notes on touch devices, I was not aware of this application, since it was probably still under development. In that article, I mentioned the major apps, which you all know about.
For example, Xournal++ is probably the most used "go to" app for taking quick notes with a stylus on supported devices. Another GTK & Rust based app which recently became famous is Rnote. It also has some excellent features.
Now, you can try out another cool app: Scrivano. It is a Qt-based application and comes with a simple user interface for utmost productivity.
Here's how it looks.
![Scrivano How it looks][2]
At the top bar, you have a simple toolbox with standard options such as a pencil with thickness settings, colours, fill-area tools, an eraser, undo, redo and so on. These are pretty common among all the apps in this category.
But what are the features that make it stand apart? Let's talk about them.
### Features
The first unique feature is "Snap to Grid". When you draw on the grid canvas, you can set your drawings to snap to the grid so that they look uniform. This is one of the best features, as it makes your notes look professional. Don't worry about your bad handwriting or drawings.
Here's a quick look at the "snap to grid" feature, with a comparison.
![Snap to Grip feature][3]
Another feature which stands out is sticker creation from your drawings. Say you are taking some math notes and you want to reuse one of the curves multiple times. You can select the drawing and make it a sticker, which you can then put back into your drawing!
![Stickers in Scrivano][4]
The editing options are so good that the only limitation is your imagination in terms of note taking.
For example, you can select and move around any part of your drawing as a separate object. Then you can clone it, copy it or do anything you want.
Similarly, the fill-stroke feature is effortless. When you are drawing with the pen, the app can close the path between the start and end points and then fill it with colour. All of this happens without choosing any additional option from the toolbar.
Scrivano has a nice option called Laser, which is useful if you are teaching someone via screen sharing or recording a tutorial video. It's a laser-like line which you can draw, and it disappears within 3 to 5 seconds.
![][5]
Other noteworthy features include:
* You can import and annotate PDFs, which is super useful.
* Different and customizable backgrounds with Plain, Lined, Grid or Dotted options
* Options to change grid spacing, canvas colour and patterns
* You can adjust the canvas size for printing (such as A4, etc.)
* Export to PDF and other image formats
* Scrivano comes with an auto-save option so that you don't lose your data (it saves in the home directory by default)
* Insert external images into your handwritten notes
* Dark mode in the UI
You should be glad to know that the developer is still working on additional features on top of the above items as we speak. I'm sure it will be a contender to legacy players such as Xournal++ and others.
Let's see how you can install it.
### Download and Install
The best way to install Scrivano is via Flatpak. All you need to do is set up your system for [Flatpak with Flathub][6] and then run the following command to install it:
```
flatpak install flathub com.github.scrivanolabs.scrivano
```
Then you can launch it via application menu.
Windows folks can grab the installer from the [official home page][7].
It is indeed a nice utility and deserves all the love. Do let me know in the comment box what you like about this app.
Cheers.
[Next: Crystal Linux: Emerging Arch Linux Spin for GNOME Fans][8]
![Join our Telegram channel and stay informed on the move.][9]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/scrivano/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/top-whiteboard-applications-linux/
[2]: https://www.debugpoint.com/wp-content/uploads/2022/08/Scrivano-How-it-looks.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2022/08/Snap-to-Grip-feature.gif
[4]: https://www.debugpoint.com/wp-content/uploads/2022/08/Stickers-in-Scrivano.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/08/Scrivano-Fill-and-Laser-Method.mp4
[6]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[7]: https://scrivanolabs.github.io/
[8]: https://www.debugpoint.com/crystal-linux-first-look/
[9]: https://t.me/debugpoint

[#]: subject: "7 Best Open Source Library Management Software"
[#]: via: "https://itsfoss.com/open-source-library-management-software/"
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 Best Open Source Library Management Software
======
Sometimes managing a digital library gives you peace of mind, as you do not need to make much effort to maintain it. It is usually easy to organize and can be backed up as well.
When it comes to managing a library, library management software can make a world of difference. It can make or break your digital library management experience.
And, with open-source library management software, an organization/library can save investment costs, have better privacy, and have more flexibility without any vendor lock-in.
So, I came up with this compilation of open-source library management software to give you some good options to help manage your digital library. You can use some tools for personal use cases, but many of them are geared toward public libraries.
### 1. Koha
![koha][1]
**Key** **Features of Koha:**
* An enterprise-grade library management software.
* Supports multiple languages.
* Powerful text search and enhanced catalog display.
* Built using standard library protocols to ensure interoperability between Koha and other library systems.
* Web-based UI.
* No vendor lock-in.
[Koha][2] is a well-known name when it comes to library management software, and it is considered the best of what you can get for your library. You may ask why. It handles everything like a charm, from backups and maintenance to system upgrades!
Being a truly enterprise-grade system, you'd get modules to manage circulation, cataloging, serials management, authorities, flexible reporting, label printing, and a lot more.
So, you can utilize Koha for anything from small libraries to multi-branch library systems.
[Koha][3]
### 2. Evergreen
![evergreen][4]
**Key features of Evergreen**
* Flexibility and scalability.
* Has self-registration and self-checkout options.
* Allows making desired changes in the catalog.
* Multiple payment options.
* Powerful search functionality.
* Allows you to retain a history of borrowed books.
[Evergreen][5] is an integrated library system that was initially developed for the Public Information Network for Electronic Services (PINES), but it also powers more than 1,800 libraries outside PINES.
Being scalable to its core, you can easily manage an entire catalog of multiple branches. It also offers good search functionality along with some interesting features.
[Evergreen][6]
### 3. BiblioteQ
![biblioteq][7]
**Key features of BiblioteQ**
* Supports ARM & Power PC.
* User-friendly interface.
* Apart from books, it also supports DVDs, Music CDs, photos, and even video games.
* Pushes notifications for unavailable items.
* Supports drag and drop for cover images.
"It's quite simple and straightforward": this was my initial impression while testing BiblioteQ for this list. But don't be fooled by its user interface.
[BiblioteQ][8] is a professional archiving, cataloging, and library management software that utilizes Qt for an eye-pleasing user interface. Furthermore, it uses PostgreSQL and SQLite for its databases.
Speaking of connectivity, it uses the Open Library, SRU, and Z39.50 protocols for a seamless experience when retrieving books and other archived items.
[BiblioteQ][9]
### 4. OPALS
![opals][10]
**Key Features of OPALS:**
* Web-based and mobile friendly.
* Minimal cost.
* Professional development, management, and support.
* Market leader for school libraries and academic libraries.
* Online public access catalog.
* Subscription Database management.
* Digital archive management.
* Support for Circulation and inventory management.
* Hosted servers with automated updates, meaning no additional hardware cost or maintenance on your side.
According to the 2022 [international survey of library automation][11], OPALS (Open-source Automated Library System) scored highest in every single category among school libraries and small academic library programs.
[OPALS][12] is used in more than 2,000 libraries daily, as it provides a full-fledged automated library management experience.
It is a paid tool that provides you technical support for installation, management, hosting, and other purposes. If you are looking for something for your academy/institution, this can be a good fit.
OPALS also provides a [3-month free demo site for your library][13], so you can have a better idea of what to expect for the asked price.
[OPALS][14]
### 5. InvenioILS
![InvenioILS][15]
**Key Features of InvenioILS**:
* Modern UI.
* Acquisition and simple interlibrary loan modules to better keep track of items.
* Uses a REST API, meaning better integration with other systems.
* Circulations can be easily managed through a few clicks.
* Powerful cataloging system based on JSON schema.
* Easy-to-use back office tools, meaning listing, searching, or even getting details of specific items will be easy.
[Invenio's ILS][16] (Integrated Library System) uses the Invenio framework, which is made up of widely used open-source products, including Python and React frameworks.
So if you have the technical expertise, there will be no limits on the customization and enhancements that you can build on top of the default base.
[InvenioILS][17]
### 6. SLiMS
![slims][18]
**Key features of SLiMS:**
* Utility to generate Barcodes.
* Responsive UI.
* Allows creating a union catalog using the Union Catalog Server.
* Membership management.
* Database backup utility.
* Master files management to manage referential data such as Publishers, Authors, and Locations.
* For Bibliography, you get faster input with peer-to-peer copy cataloging.
* Manage your patrons with instant library card allocation.
[SLiMS][19] (Senayan Library Management System) is essentially an Apache web server bundled with MySQL and PHP, and the outcome is an extremely powerful community-driven library management toolkit.
From serial publication control to system modules providing extreme flexibility, SLiMS has a lot to offer.
[SLiMS][20]
### 7. FOLIO
![folio][21]
**Key features of FOLIO:**
* Wide range of inventory management features including cataloging and bibliographic management.
* Manage vendors, budgets, orders, and invoicing while receiving materials.
* Efficient user management.
* Different patron types, loan types, fines, and fee structures are also supported.
FOLIO (Future of Libraries is Open) can be considered the best option in terms of user experience, as the community strives to bring the best out of UI/UX elements.
As with any other library management software, you'd get all the basic features such as circulation, acquisitions, cataloging, and e-resources management.
You also get a nice feature to manage multiple users, patron types, fee structures, and more.
[FOLIO][22]
### Digital Library Management Sounds Fun!
In this list, I've only considered the ones that are actively maintained. There might be more that you can explore (but with no recent development activity).
*Did I miss any of your favorites? You are welcome to share your personal experience with library management software.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-library-management-software/
作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/08/koha-1.png
[2]: https://koha-community.org/
[3]: https://koha-community.org/download-koha/
[4]: https://itsfoss.com/wp-content/uploads/2022/08/evergreen.png
[5]: https://evergreen-ils.org/
[6]: https://evergreen-ils.org/egdownloads/
[7]: https://itsfoss.com/wp-content/uploads/2022/08/biblioteq.png
[8]: https://biblioteq.sourceforge.io/
[9]: https://github.com/textbrowser/biblioteq/releases
[10]: https://itsfoss.com/wp-content/uploads/2022/08/opals.png
[11]: https://librarytechnology.org/perceptions/2021/#top-performers
[12]: https://opalsinfo.net/
[13]: https://mail.google.com/mail/?view=cm&fs=1&tf=1&source=mailto&su=Request+for+OPALS+information&to=info@opals-na.org&body=Institution+Name:%0D%0A%0ACity:%0D%0A%0AState+or+Prov.:%0D%0A%0AContact:%0D%0A%0APosition:%0D%0A%0AEmail:%0D%0A%0ACollection+size:%0D%0A%0ANumber+of+members:%25+
[14]: https://en.bibliofiche.com/showcase.jsp?n=OPALS%99&product_number=F05800
[15]: https://itsfoss.com/wp-content/uploads/2022/08/ils.png
[16]: https://inveniosoftware.org/products/ils/
[17]: https://inveniosoftware.org/products/ils/
[18]: https://itsfoss.com/wp-content/uploads/2022/08/slims.png
[19]: https://slims.web.id/web/
[20]: https://github.com/slims/slims9_bulian/releases
[21]: https://itsfoss.com/wp-content/uploads/2022/08/folio.png
[22]: https://github.com/folio-org

[#]: subject: "Clean up music tags with a Groovy script"
[#]: via: "https://opensource.com/article/22/8/groovy-script-music-tags"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Clean up music tags with a Groovy script
======
I demonstrate a Groovy script to clean up the motley assembly of tag fields.
Lately, I've been looking at how Groovy streamlines Java. In this series, I'm developing several scripts to help in cleaning up my music collection. In my last article, I used the framework developed previously to create a list of unique file names and counts of occurrences of those file names in the music collection directory. I then used the Linux `find` command to get rid of files I didn't want.
In this article, I demonstrate a Groovy script to clean up the motley assembly of tag fields.
WARNING: This script alters music tags, so it is vital that you make a backup of the music collection you test your code on.
### Back to the problem
If you haven't read the previous articles in this series, do that now before continuing so you understand the intended structure of the music directory, the framework I've created, and how to detect and use FLAC, MP3, and OGG files.
* [How I analyze my music directory with Groovy][2]
* [My favorite open source library for analyzing music files][3]
* [How I use Groovy to analyze album art in my music directory][4]
* [Clean up unwanted files in your music directory using Groovy][5]
### Vorbis and ID3 tags
I don't have many MP3 music files. Generally, I prefer to use FLAC. But sometimes only MP3 versions are available, or a free MP3 download comes with a vinyl purchase. So in this script, I have to be able to handle both. One thing I've learned as I have become familiar with [JAudiotagger][6] is what ID3 tags (used by MP3) look like, and I discovered that some of those "unwanted" field tag IDs I uncovered in part 2 of this series are actually very useful.
Now it's time to use this framework to get a list of all the tag field IDs in a music collection, with their counts, to begin deciding what belongs and what doesn't:
```
1        @Grab('net.jthink:jaudiotagger:3.0.1')
2        import org.jaudiotagger.audio.*
3        import org.jaudiotagger.tag.*
4        def logger = java.util.logging.Logger.getLogger('org.jaudiotagger');
5        logger.setLevel(java.util.logging.Level.OFF);
6        // Define the music library directory
7        def musicLibraryDirName = '/var/lib/mpd/music'
8        // Define the tag field id accumulation map
9        def tagFieldIdCounts = [:]
10        // Print the CSV file header
11        println "tagFieldId|count"
12        // Iterate over each directory in the music libary directory
13        // These are assumed to be artist directories
14        new File(musicLibraryDirName).eachDir { artistDir ->
15            // Iterate over each directory in the artist directory
16            // These are assumed to be album directories
17            artistDir.eachDir { albumDir ->
18                // Iterate over each file in the album directory
19                // These are assumed to be content or related
20                // (cover.jpg, PDFs with liner notes etc)
21                albumDir.eachFile { contentFile ->
22                    // Analyze the file and print the analysis
23                    if (contentFile.name ==~ /.*\.(flac|mp3|ogg)/) {
24                        def af = AudioFileIO.read(contentFile)
25                        af.tag.fields.each { tagField ->
26                            tagFieldIdCounts[tagField.id] = tagFieldIdCounts.containsKey(tagField.id) ? tagFieldIdCounts[tagField.id] + 1 : 1
27                        }
28                    }
29                }
30            }
31        }
32        tagFieldIdCounts.each { key, value ->
33            println "$key|$value"
34        }
```
Lines 1-7 originally appeared in part 2 of this series.
Lines 8-9 define a map for accumulating tag field IDs and counts of occurrences.
Lines 10-21 also appeared in previous articles. They get down to the level of the individual content files.
Lines 23-28 ensure that the files being used are FLAC, MP3, or OGG. Line 23 uses the Groovy match operator `==~` with a slashy regular expression to filter out the wanted files.
Line 24 uses `org.jaudiotagger.audio.AudioFileIO.read()` to get the tag body from the content file.
Lines 25-27 use `org.jaudiotagger.tag.Tag.getFields()` to get all the `TagField` instances in the tag body and the Groovy `each()` method to iterate over that list of instances.
Line 27 accumulates the count of each `tagField.id` into the `tagFieldIdCounts` map.
Finally, lines 32-34 iterate over the `tagFieldIdCounts` map, printing out the keys (the tag field IDs found) and the values (the count of occurrences of each tag field ID).
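The accumulate-then-print pattern in lines 26-27 and 32-34 is the classic occurrence-counting idiom. As a rough shell analogue (not part of the article's script; the tag IDs here are just sample data), the same counting over a list of tag field IDs could look like:

```shell
# Count occurrences of each tag field ID and print them as "id|count",
# mirroring the CSV format produced by the Groovy script
printf '%s\n' TITLE ARTIST TITLE TIT2 TITLE |
  sort | uniq -c | awk '{print $2 "|" $1}'
```

Here `TITLE` comes out with a count of 3, and `ARTIST` and `TIT2` with a count of 1 each.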
I run this script as follows:
```
$ groovy TagAnalyzer5b.groovy > tagAnalysis5b.csv
```
Then I load the results into a [LibreOffice][7] or [OnlyOffice][8] spreadsheet. In my case, this script takes quite a long time to run (several minutes) and the loaded data, sorted in descending order of the second column (count) looks like this:
![Image of a screenshot of the first few row of tagAnalysis in LibreOffic Calc][9]
Image by: (Chris Hermansen, CC BY-SA 4.0)
On row 2, you can see that there are 8,696 occurrences of the TITLE field tag ID, which is the ID that FLAC files (and Vorbis, generally) use for a song title. Down on row 28, you also see 348 occurrences of the TIT2 field tag ID, which is the ID3 tag field that contains the "actual" name of the song. At this point, it's worth going away to look at [the JavaDoc][10] for `org.jaudiotagger.tag.id3.framebody.FrameBodyTIT2` to learn more about this tag and the way in which JAudiotagger recognizes it. There, you also see the mechanisms to handle other ID3 tag fields.
In that list of field tag IDs, there are lots that I'm not interested in and that could affect the ability of various music players to display my music collection in what I consider to be a reasonable order.
### The org.jaudiotagger.tag.Tag interface
I'm going to take a moment to explore the way JAudiotagger provides a generic mechanism to access tag fields. This mechanism is described in [the JavaDocs][11] for `org.jaudiotagger.tag.Tag`. There are two methods that would help clean up the tag field situation:
```
void setField(FieldKey genericKey,String value)
```
This is used to set the value for a particular tag field.
```
void deleteField(FieldKey fieldKey)
```
This is used to delete all instances of a particular tag field (it turns out that some tag fields in some tagging schemes permit multiple occurrences).
However, this particular `deleteField()` method requires us to supply a `FieldKey` value, and as I have discovered, not all field key IDs in my music collection correspond to a known `FieldKey` value.
Looking around the JavaDocs, I see there's a `FlacTag` which "uses Vorbis Comment for most of its metadata," and declares its tag field to be of type `VorbisCommentTag`.
`VorbisCommentTag` itself extends `org.jaudiotagger.audio.generic.AbstractTag`, which offers:
```
protected void deleteField(String key)
```
As it turns out, this is accessible from the tag instance returned by `AudioFileIO.read(f).getTag()`, at least for FLAC and MP3 tag bodies.
In theory, it should be possible to do this:
1. Get the tag body using
```
def af = AudioFileIO.read(contentFile)
def tagBody = af.tag
```
2. Get the values of the (known) tag fields I want using:
```
def album = tagBody.getFirst(FieldKey.ALBUM)
def artist = tagBody.getFirst(FieldKey.ARTIST)
// etc
```
3. Delete all tag fields (both wanted and unwanted) using:
```
def originalTagFieldIdList = tagBody.fields.collect { tagField ->
tagField.id
}
originalTagFieldIdList.each { tagFieldId ->
tagBody.deleteField(tagFieldId)
}
```
4. Put only the desired tag fields back:
```
tagBody.setField(FieldKey.ALBUM, album)
tagBody.setField(FieldKey.ARTIST, artist)
// etc
```
Of course, there are a few wrinkles here.
First, notice the use of the `originalTagFieldIdList`. I can't use `each()` to iterate over the iterator returned by `tagBody.getFields()` at the same time as I modify those fields, so I get the tag field IDs into a list using `collect()`, then iterate over that list of tag field IDs to do the deletions.
Second, not all files are going to have all the tag fields I want. For example, some files might not have `ALBUM_SORT_ORDER` defined, and so on. I might not wish to write those tag fields in with empty values. Additionally, I can probably safely default some fields. For example, if `ALBUM_ARTIST` isn't defined, I can set it to ARTIST.
Third, and for me most obscure, is that Vorbis Comment tags always include a VENDOR field tag ID; if I try to delete it, I end up simply unsetting the value. Huh.
### Trying it all out
Considering these lessons, I decided to create a test music directory that contains just a few artists and their albums (because I don't want to wipe out my real music collection).
WARNING: Because this script will alter music tags, it is very important to have a backup of the music collection, so that when I discover I have deleted an essential tag, I can recover the backup, modify the script, and rerun it.
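One simple way to take such a backup is a recursive copy. This is just a sketch: the `MUSIC_DIR` default below matches the test directory used in this article, and you should point it at your own library (and a backup location with enough free space) before relying on it.

```shell
# Snapshot the music directory before running the fixer script.
# MUSIC_DIR defaults to the test directory used in this article;
# set it to your own library location first.
MUSIC_DIR=${MUSIC_DIR:-/work/Test/Music}
if [ -d "$MUSIC_DIR" ]; then
    # -a preserves permissions, timestamps, and symlinks
    cp -a "$MUSIC_DIR" "${MUSIC_DIR}.backup"
else
    echo "No such directory: $MUSIC_DIR (set MUSIC_DIR first)"
fi
```

An `rsync -a` of the same directories works equally well and can be re-run incrementally after each experiment.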
Here's the script:
```
1        @Grab('net.jthink:jaudiotagger:3.0.1')
2        import org.jaudiotagger.audio.*
3        import org.jaudiotagger.tag.*
4        def logger = java.util.logging.Logger.getLogger('org.jaudiotagger');
5        logger.setLevel(java.util.logging.Level.OFF);
6        // Define the music library directory
7        def musicLibraryDirName = '/work/Test/Music'
8        // Print the CSV file header
9        println "artistDir|albumDir|contentFile|tagField.id|tagField.toString()"
10        // Iterate over each directory in the music library directory
11        // These are assumed to be artist directories
12        new File(musicLibraryDirName).eachDir { artistDir ->
13    // Iterate over each directory in the artist directory
14    // These are assumed to be album directories
15    artistDir.eachDir { albumDir ->
16    // Iterate over each file in the album directory
17    // These are assumed to be content or related
18    // (cover.jpg, PDFs with liner notes etc)
19    albumDir.eachFile { contentFile ->
20        // Analyze the file and print the analysis
21        if (contentFile.name ==~ /.*\.(flac|mp3|ogg)/) {
22            def af = AudioFileIO.read(contentFile)
23            def tagBody = af.tag
24            def album = tagBody.getFirst(FieldKey.ALBUM)
25            def albumArtist = tagBody.getFirst(FieldKey.ALBUM_ARTIST)
26            def albumArtistSort = tagBody.getFirst(FieldKey.ALBUM_ARTIST_SORT)
27            def artist = tagBody.getFirst(FieldKey.ARTIST)
28            def artistSort = tagBody.getFirst(FieldKey.ARTIST_SORT)
29            def composer = tagBody.getFirst(FieldKey.COMPOSER)
30            def composerSort = tagBody.getFirst(FieldKey.COMPOSER_SORT)
31            def genre = tagBody.getFirst(FieldKey.GENRE)
32            def title = tagBody.getFirst(FieldKey.TITLE)
33            def titleSort = tagBody.getFirst(FieldKey.TITLE_SORT)
34            def track = tagBody.getFirst(FieldKey.TRACK)
35            def trackTotal = tagBody.getFirst(FieldKey.TRACK_TOTAL)
36            def year = tagBody.getFirst(FieldKey.YEAR)
37            if (!albumArtist) albumArtist = artist
38            if (!albumArtistSort) albumArtistSort = albumArtist
39            if (!artistSort) artistSort = artist
40            if (!composerSort) composerSort = composer
41            if (!titleSort) titleSort = title
42            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.ALBUM|${album}"
43            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.ALBUM_ARTIST|${albumArtist}"
44            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.ALBUM_ARTIST_SORT|${albumArtistSort}"
45            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.ARTIST|${artist}"
46            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.ARTIST_SORT|${artistSort}"
47            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.COMPOSER|${composer}"
48            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.COMPOSER_SORT|${composerSort}"
49            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.GENRE|${genre}"
50            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.TITLE|${title}"
51            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.TITLE_SORT|${titleSort}"
52            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.TRACK|${track}"
53            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.TRACK_TOTAL|${trackTotal}"
54            println "${artistDir.name}|${albumDir.name}|${contentFile.name}|FieldKey.YEAR|${year}"
55            def originalTagIdList = tagBody.fields.collect {
56                tagField -> tagField.id
57            }
58            originalTagIdList.each { tagFieldId ->
59                println "${artistDir.name}|${albumDir.name}|${contentFile.name}|${tagFieldId}|XXX"
60                if (tagFieldId != 'VENDOR')
61                    tagBody.deleteField(tagFieldId)
62            }
63            if (album) tagBody.setField(FieldKey.ALBUM, album)
64            if (albumArtist) tagBody.setField(FieldKey.ALBUM_ARTIST, albumArtist)
65            if (albumArtistSort) tagBody.setField(FieldKey.ALBUM_ARTIST_SORT, albumArtistSort)
66            if (artist) tagBody.setField(FieldKey.ARTIST, artist)
67            if (artistSort) tagBody.setField(FieldKey.ARTIST_SORT, artistSort)
68            if (composer) tagBody.setField(FieldKey.COMPOSER, composer)
69            if (composerSort) tagBody.setField(FieldKey.COMPOSER_SORT, composerSort)
70            if (genre) tagBody.setField(FieldKey.GENRE, genre)
71            if (title) tagBody.setField(FieldKey.TITLE, title)
72            if (titleSort) tagBody.setField(FieldKey.TITLE_SORT, titleSort)
73            if (track) tagBody.setField(FieldKey.TRACK, track)
74            if (trackTotal) tagBody.setField(FieldKey.TRACK_TOTAL, trackTotal)
75            if (year) tagBody.setField(FieldKey.YEAR, year)
76            af.commit()
77        }
78      }
79    }
80  }
```
Lines 1-21 are already familiar. Note that my music directory defined in line 7 refers to a test directory though!
Lines 22-23 get the tag body.
Lines 24-36 get the fields of interest to me (but maybe not the fields of interest to you, so feel free to adjust for your own requirements!)
Lines 37-41 adjust some values for missing ALBUM_ARTIST and sort order.
Lines 42-54 print out each tag field key and adjusted value for posterity.
Lines 55-57 get the list of all tag field IDs.
Lines 58-62 print out each tag field ID *and delete it*, with the exception of the VENDOR tag field ID.
Lines 63-75 set the desired tag field values using the known tag field keys.
Finally, line 76 *commits the changes to the file*.
The script produces output that can be imported into a spreadsheet.
I'm just going to mention one more time that this script alters music tags! It is very important to have a backup of the music collection so that when you discover you've deleted an essential tag, or somehow otherwise trashed your music files, you can recover the backup, modify the script, and rerun it.
### Check the results with this Groovy script
I have a handy little Groovy script to check the results:
```
1        @Grab('net.jthink:jaudiotagger:3.0.1')
2        import org.jaudiotagger.audio.*
3        import org.jaudiotagger.tag.*
 
4        def logger = java.util.logging.Logger.getLogger('org.jaudiotagger');
5        logger.setLevel(java.util.logging.Level.OFF);
 
6        // Define the music library directory
 
7        def musicLibraryDirName = '/work/Test/Music'
 
8        // Print the CSV file header
 
9        println "artistDir|albumDir|tagField.id|tagField.toString()"
 
10        // Iterate over each directory in the music library directory
11        // These are assumed to be artist directories
 
12        new File(musicLibraryDirName).eachDir { artistDir ->
 
13            // Iterate over each directory in the artist directory
14            // These are assumed to be album directories
 
15            artistDir.eachDir { albumDir ->
 
16                // Iterate over each file in the album directory
17                // These are assumed to be content or related
18                // (cover.jpg, PDFs with liner notes etc)
 
19                albumDir.eachFile { contentFile ->
 
20                    // Analyze the file and print the analysis
 
21                    if (contentFile.name ==~ /.*\.(flac|mp3|ogg)/) {
22                        def af = AudioFileIO.read(contentFile)
23                        af.tag.fields.each { tagField ->
24                            println "${artistDir.name}|${albumDir.name}|${tagField.id}|${tagField.toString()}"
25                        }
26                    }
 
27                }
28            }
29        }
```
This should look pretty familiar by now!
Running it produces results like this before running the fixer script in the previous section:
```
St Germain|Tourist|VENDOR|reference libFLAC 1.1.4 20070213
St Germain|Tourist|TITLE|Land Of...
St Germain|Tourist|ARTIST|St Germain
St Germain|Tourist|ALBUM|Tourist
St Germain|Tourist|TRACKNUMBER|04
St Germain|Tourist|TRACKTOTAL|09
St Germain|Tourist|GENRE|Electronica
St Germain|Tourist|DISCID|730e0809
St Germain|Tourist|MUSICBRAINZ_DISCID|jdWlcpnr5MSZE9H0eibpRfeZtt0-
St Germain|Tourist|MUSICBRAINZ_SORTNAME|St Germain
```
Once the fixer script is run, it produces results like this:
```
St Germain|Tourist|VENDOR|reference libFLAC 1.1.4 20070213
St Germain|Tourist|ALBUM|Tourist
St Germain|Tourist|ALBUMARTIST|St Germain
St Germain|Tourist|ALBUMARTISTSORT|St Germain
St Germain|Tourist|ARTIST|St Germain
St Germain|Tourist|ARTISTSORT|St Germain
St Germain|Tourist|GENRE|Electronica
St Germain|Tourist|TITLE|Land Of...
St Germain|Tourist|TITLESORT|Land Of...
St Germain|Tourist|TRACKNUMBER|04
St Germain|Tourist|TRACKTOTAL|09
```
That's it! Now I just have to work up the nerve to run my fixer script on my full music library…
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/groovy-script-music-tags
作者:[Chris Hermansen][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/programming-code-keyboard-laptop-music-headphones.png
[2]: https://opensource.com/article/22/8/groovy-script-java-music
[3]: https://opensource.com/article/22/8/analyze-music-files-jaudiotagger
[4]: https://opensource.com/article/22/8/groovy-scripting-analyzing-music-directory-part-3
[5]: https://opensource.com/article/22/8/remove-files-music-directory-groovy
[6]: http://jthink.net/jaudiotagger/index.jsp
[7]: https://opensource.com/article/21/9/libreoffice-tips
[8]: https://opensource.com/article/20/12/onlyoffice-docs
[9]: https://opensource.com/sites/default/files/2022-08/creenshot%20of%20first%20few%20rows%20of%20tagAnalysis5b.csv%20in%20LibreOffice%20Calc.png
[10]: http://www.jthink.net/jaudiotagger/javadoc/index.html
[11]: http://www.jthink.net/jaudiotagger/javadoc/index.html


@ -0,0 +1,143 @@
[#]: subject: "Crystal Linux: Emerging Arch Linux Spin for GNOME Fans"
[#]: via: "https://www.debugpoint.com/crystal-linux-first-look/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Crystal Linux: Emerging Arch Linux Spin for GNOME Fans
======
Meet Crystal Linux, a unique Arch Linux spin with a stock GNOME experience.
### Introduction
I often think that we have enough Linux distros already. The count is nearing a thousand, and fragmentation is at its peak. That is not good for software quality, especially in the open source space.
There is always a distro available for every use case you can think of.
But Arch Linux is one sector that's still emerging, largely because of its debatable [complex installation methods][1]. That's why most of the emerging Arch Linux distributions (such as [Xero Linux][2], [Hefftor Linux][3], Mabox, etc.) try to invent something unique in the installation process and other areas.
Crystal Linux is one of those distros with a different take on installation while being super user-friendly.
![Crystal Linux Desktop with GNOME 42][4]
### Crystal Linux: First Look
Before you read on, you should know that it's a new distro (less than a year old) currently under development. So use it with caution.
At first glance, it feels like a stock GNOME installation, similar to Fedora Workstation. That's true. With the Arch Linux base and stock GNOME, the performance is top-notch. Although I tried it in a virtual machine, I feel the GNOME and Arch combination performs much better than Fedora Workstation in the same virtual machine setup.
That said, no customization is available apart from what comes with GNOME. Honestly, GNOME doesn't require any additional customization of its default settings; looks-wise, it's good enough.
### What's unique about Crystal Linux?
#### jade Installer for Arch
The most important offering is its own installer, called “[jade][5]”. The Crystal Linux team created a GTK4/libadwaita and Rust-based installer to give you a streamlined Arch installation experience.
And it looks fantastic (see the below images).
![jade installer][6]
![selecting desktop to install][7]
![installation][8]
The jade installer reminds me of GNOME's Tour app, but here it uses a similar principle for installation. Basic information such as keyboard, region, and names/passwords is captured via a series of screens.
Then you get to choose the desktop environment you want to install. The default version is GNOME; however, you have the option to install all the famous desktops and window managers.
One unique feature of this new installer is that you get options to set up IPv6 and Timeshift restore points.
The partition wizard is currently under development, with custom partitioning via this app or GParted as options. Here's a mockup of the partition module under development (from [Twitter][9]).
![jade with additional options - mockup][10]
Finally, you get a summary before you install this distro/Arch Linux. The installer executes a script at the back end to perform the Arch installation.
#### Onyx custom GNOME experience (with Budgie?)
From GitHub, I found that there is a customized desktop for the base install named [Onyx][11]. Although I am not sure how it fits into this desktop, it also has a Budgie desktop component. Since there is no documentation as such, I guess we need to wait until a stable release.
![Not sure how Onyx is working in the backend][12]
#### Amethyst New AUR Helper
Do we really need another AUR helper? The [Yay helper][13] is already awesome.
Anyway.
Crystal Linux also features a homegrown AUR helper and pacman wrapper called [amethyst][14]. As the developer says, you can install it on any Arch-based distro. Amethyst provides the `ame` command, which you can use with standard pacman switches.
![ame terminal command][15]
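Since `ame` accepts standard pacman switches, usage presumably looks like the following. This is a sketch based on pacman conventions; I haven't verified every switch against amethyst itself, and these commands only make sense on an Arch-based system with amethyst installed.

```shell
# Hypothetical ame invocations mirroring pacman's switches
# (requires an Arch-based system with amethyst installed):
ame -S firefox     # install a package (repo or AUR)
ame -Syu           # sync the databases and upgrade the system
ame -R firefox     # remove a package
```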
#### Btrfs file system by default
One of the best features of this distro is the default btrfs file system during installation. Although work is ongoing to support additional file systems, btrfs as the default has its own advantages for backup and restoration.
I don't remember any other Arch spin that has btrfs as the default.
#### Applications and Packages
Since it is a stock GNOME-based distro, no additional applications are installed. So, you need to spend some time installing and configuring the necessary apps, such as LibreOffice, GIMP, media players, etc.
Firefox and native GNOME apps are available in the default installation.
Crystal Linux seems to deploy the core packages from its own server, NOT from the Arch repo. Hence, some updates to the desktop and other components may arrive a little later.
### Performance
Arch Linux always performs well, in my experience. All the popular desktops (KDE, GNOME, Xfce) somehow feel faster than they do on Ubuntu or Fedora.
With that said, the current GNOME 42 version in Crystal Linux is swift. The window animations and gestures feel smooth even in a virtual machine. There is no lag whatsoever.
![Crystal Linux - Performance][16]
The memory footprint is extremely low: 530 MB at idle. Most of the idle-state CPU usage comes from gnome-shell and systemd services.
The default GNOME desktop install takes only 3.8 GB of disk space.
### Wrapping up
The jade installer and the btrfs file system are two major highlights of Crystal Linux. Since most Arch-based distros use the Calamares installer, it's good to see a new installer in this space. And it's really user-friendly.
The distro is just a few months old and has a long road ahead. I strongly believe it will give competition to the currently famous Arch distro [EndeavourOS][17]. And fans get to experience vanilla GNOME with Arch without the hassles of [installing Arch with GNOME][18].
You can download the current ISO from the [official website][19]. As I mentioned earlier, use it with caution since it is under development.
So, what are your thoughts about this distro? What are your favourite features? Do let me know in the comment box.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/crystal-linux-first-look/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/install-arch-linux/
[2]: https://www.debugpoint.com/xerolinux-review/
[3]: https://www.debugpoint.com/hefftor-linux-review/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/08/Crystal-Linux-Desktop-with-GNOME-42-1024x579.jpg
[5]: https://github.com/crystal-linux/jade
[6]: https://www.debugpoint.com/wp-content/uploads/2022/08/jade-installer.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/08/selecting-desktop-to-install.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/08/installation.jpg
[9]: https://twitter.com/Crystal_Linux/status/1564379291529482240
[10]: https://www.debugpoint.com/wp-content/uploads/2022/08/jade-with-additional-options-mockup-1024x576.jpg
[11]: https://github.com/crystal-linux/onyx
[12]: https://www.debugpoint.com/wp-content/uploads/2022/08/Not-sure-how-Onyx-is-working-in-the-backend-1024x576.jpg
[13]: https://www.debugpoint.com/install-yay-arch/
[14]: https://github.com/crystal-linux/amethyst
[15]: https://www.debugpoint.com/wp-content/uploads/2022/08/ame-terminal-command-1024x576.jpg
[16]: https://www.debugpoint.com/wp-content/uploads/2022/08/Crystal-Linux-Performance-1024x576.jpg
[17]: https://www.debugpoint.com/tag/endeavouros
[18]: https://www.debugpoint.com/gnome-arch-linux-install/
[19]: https://getcryst.al/


@ -0,0 +1,138 @@
[#]: subject: "Share screens on Linux with GNOME Connections"
[#]: via: "https://opensource.com/article/22/8/share-screens-linux-gnome-connections"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Share screens on Linux with GNOME Connections
======
Discover the power of VNC for screen sharing on Linux.
When someone needs to share their screen with you, or you need to share your screen with someone else, you have several options to choose from. One is video conferencing software, like the open source [Jitsi][2] web app, but while we call that "screen sharing," it's really *presenting*. You're presenting your screen to others, but they can't interact with it. Sometimes you actually want to share your screen and your mouse cursor with a trusted friend or colleague. The tool for that is VNC (Virtual Network Computing), and it's built into your Linux desktop.
In any screen sharing scenario, there are two computers and possibly two users. For that reason, this article has two parts. The first part is for the person setting up their computer to *accept* screen sharing requests, and the second part is for the person trying to connect to *someone else's* screen.
### Share my screen on Linux
If you're reading this section, you're the person who needs technical help from a friend, and you want to allow your friend to connect to your screen. You need to configure your desktop to allow screen sharing.
On the GNOME desktop, open the **Settings** application from the **Activities** menu. In the **Settings** window, click on **Sharing**. In the **Sharing** window, click on **Screen Sharing**.
In the **Screen Sharing** window that appears, you have two choices.
You can set a password so the person connecting to your screen must enter a password to connect. This is convenient when you don't expect to be around the computer when your friend plans on viewing your screen.
You can require a notification so that when someone attempts to connect, you're prompted to let them in (or not).
![GNOME screen sharing settings][3]
If you're on the [KDE Plasma Desktop][4], then the application to configure screen sharing is called **krfb** (it stands for "Remote Frame Buffer", the protocol used by VNC). It's the exact same concept, just with a different layout.
![KDE screen sharing][5]
### Firewall
Normally, your computer's internal firewall keeps people out of your computer. It does that by indiscriminately blocking all incoming connections. In this case, though, you want to permit one kind of traffic, so you need to open a port in your firewall.
On Fedora, CentOS, Mageia, and many other Linux distributions, you have a firewall whether you know it or not. You may not yet have an app to help you configure your firewall, though. To install the default firewall configuration application, launch GNOME **Software** and search for *firewall*.
Once it's installed, launch the Firewall configuration application and scroll through the (very long) list of services to find and enable **vnc-server**.
![Firewalld configuration][6]
After adding `vnc-server`, open the **Options** menu and select **Runtime to permanent** so your new rule persists even after you reboot.
On Debian, Ubuntu, Linux Mint, and others, you may be running a firewall called **ufw**, so install **gufw** instead. In **gufw**, click the plus (**+**) icon at the bottom of the **Rules** tab to add a new rule. In the **Add a new firewall rule** window that appears, search for `vnc` and click the **Add** button.
![ufw configuration][7]
Your computer is now configured to accept VNC requests. You can skip down to the troubleshooting section below.
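If you'd rather work in a terminal, the same firewall rules can be added from the command line. This is a sketch assuming the stock `firewalld` setup on Fedora-family systems and the stock `ufw` setup on Debian-family systems; it requires root privileges and will do nothing useful on a machine without those firewalls.

```shell
# Fedora, CentOS, Mageia, and other firewalld-based systems:
sudo firewall-cmd --add-service=vnc-server
sudo firewall-cmd --runtime-to-permanent   # keep the rule across reboots

# Debian, Ubuntu, Linux Mint, and other ufw-based systems:
sudo ufw allow 5900/tcp   # 5900 is the standard VNC port
```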
### Viewing a shared screen
If you're reading this section, you're the person providing technical help from afar. You need to connect to a friend or colleague's computer, view their screen, and even control their mouse and keyboard. There are many applications for that, including **TigerVNC**, KDE's **krdc**, and GNOME **Connections**.
### GNOME Connections
On your local computer, install the GNOME **Connections** application from GNOME **Software**, or using your package manager:
```
$ sudo dnf install gnome-connections
```
In GNOME **Connections**, click the plus (**+**) icon in the top left to add a destination host. Select the VNC protocol, and enter the user name and host or IP address you want to connect to, and then click the **Connect** button.
![GNOME Connections][8]
If the user you're connecting to has had to create a new port for the purposes of port forwarding, then you must append the non-default port to the address. For instance, say your target user has created port 59001 to accept VNC traffic, and their home router address is 93.184.216.34. In this case, you enter `username@93.184.216.34:59001` (where `username` is the user's actual user name.)
If the user of the remote system has required a password for VNC, then you're prompted for a password before the connection is made. Otherwise, the user on the remote machine receives an alert asking whether they want to allow you to share their screen. As long as they accept, the connection is made and you can view and even control the mouse and keyboard of the remote host.
### Troubleshooting screen sharing on Linux
Outside of the work environment, it's common that the user wanting to share their screen and the person who needs to see it are on different networks. You're probably at home, with a router that connects you to the Internet (it's the box you get from your ISP when you pay your Internet bill). Your router, whether you realize it or not, is designed to keep unwanted visitors out. That's normally very good, but in this one special case, you want to let someone trusted through so they can connect to your screen.
To let someone into your network, you have to configure your router to allow traffic at a specific "port" (like a ship port, but for packets of data instead of cargo), and then configure that traffic to get forwarded on to your personal computer.
Unfortunately, there's no *single* way that this is done. Every router manufacturer does it a little differently. That means I can't guide you through the exact steps required, because I don't know what router you have, but I can tell you what information you need up front, and what to look for once you're poking around your router.
#### 1. Get your local IP address
You need to know your computer's network IP address. To get that, open GNOME **Settings** and click on **Wi-Fi** in the left column (or **Network** if you're on a wired connection.) In the **Wi-Fi** panel, click the gear icon and find **IPv4 Address** in the **Details** window that appears. A local IP address starts with 192.168 or 10.
For example, my network IP address is 10.0.1.2. Write down your network IP address for later.
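The same information is available from the terminal; for example (assuming the usual `iproute2` and `hostname` tools are installed):

```shell
# List IPv4 addresses on all interfaces; look for one that starts
# with 192.168. or 10. -- that's your local address.
ip -4 addr show 2>/dev/null || hostname -I
```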
#### 2. Get your public IP address
Click this link to obtain your public IP address: [http://ifconfig.me][9]
For example, my public IP address is 93.184.216.34. Write down your public IP address for later.
#### 3. Configure your router
Router interfaces differ from manufacturer to manufacturer, but the idea is the same regardless of what brand of router you have in your home. First, log in to your router. The router's address and login information is often printed on the router itself, or in its documentation. I own a TP-Link GX90 router, and I log in to it by pointing my web browser to 10.0.1.1, but your router might be 192.168.0.1 or some other address.
My router calls port forwarding "Virtual servers," which is a category found in the router's **NAT forwarding** tab. Other routers may just call it **Port forwarding** or **Firewall** or even **Applications**. It may take a little clicking around to find the right category, or you may need to spend some time studying your router's documentation.
When you find the port forwarding setting (whatever it might be titled in your router), you need to add a new rule that identifies an external port (I use 59001) and sends traffic that arrives at it to an internal one (5900 is the standard VNC port).
In step 1, you obtained your network IP address. Use it as the destination for traffic coming to port 59001 of your router. Here's an example of what my router configuration looks like, but yours is almost sure to be different:
![router configuration][10]
This configuration sends traffic arriving at external port 59001 to 10.0.1.2 at port 5900, which is precisely what VNC requires.
Now you can tell the friend you're trying to share your screen with to enter your *public* IP address (in this example, that's 93.184.216.34) and port 59001.
### Linux screen sharing and trust
Only share control of your screen with someone you trust. VNC can be complex to set up because there are security and privacy concerns around giving someone other than yourself access to your computer. However, once you've got it set up, you have instant and easy access to screen sharing when you want to show something cool you're working on, or get help with something that's been confusing you.
Image by: (Seth Kenlon, CC BY-SA 4.0)
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/8/share-screens-linux-gnome-connections
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/chat_video_conference_talk_team.png
[2]: https://opensource.com/article/20/5/open-source-video-conferencing
[3]: https://opensource.com/sites/default/files/2022-08/Screenshot%20from%202022-08-11%2019-47-19.png
[4]: https://opensource.com/article/22/2/screen-share-linux-kde
[5]: https://opensource.com/sites/default/files/2022-08/kde-desktop-sharing.webp
[6]: https://opensource.com/sites/default/files/2022-08/Screenshot%20from%202022-08-11%2020-09-19.png
[7]: https://opensource.com/sites/default/files/2022-08/gufw-vnc.png
[8]: https://opensource.com/sites/default/files/2022-08/Screenshot%20from%202022-08-12%2005-11-10.png
[9]: http://ifconfig.me
[10]: https://opensource.com/sites/default/files/2022-08/router-port-forward.webp


@ -0,0 +1,139 @@
[#]: subject: "Some ways to get better at debugging"
[#]: via: "https://jvns.ca/blog/2022/08/30/a-way-to-categorize-debugging-skills/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Some ways to get better at debugging
======
Hello! I've been working on writing a zine about debugging for a while (here's [an early draft of the table of contents][1]).
As part of that I thought it might be fun to read some academic papers about
debugging, and last week [Greg Wilson][2] sent me some
papers about academic research into debugging.
One of those papers ([Towards a framework for teaching debugging
[paywalled]][3]) had a
categorization I really liked of the different kinds of knowledge/skills we
need to debug effectively. It comes from another more general paper on
troubleshooting: [Learning to Troubleshoot: A New Theory-Based Design Architecture][4].
I thought the categorization was a very useful structure for thinking about how
to get better at debugging, so I've reframed the five categories in the paper
into actions you can take to get better at debugging.
Here they are:
#### 1. learn the codebase
To debug some code, you need to understand the codebase you're working with.
This seems kind of obvious (of course you can't debug code without
understanding how it works!).
This kind of learning happens pretty naturally over time, and actually
debugging is also one of the best ways to *learn* how a new codebase works:
seeing how something breaks helps you learn a lot about how it works.
The paper calls this “System Knowledge”.
#### 2. learn the system
The paper mentions that you need to understand the programming language, but I
think there's more to it than that: to fix bugs, you often need to learn a
lot about the broader environment, not just the language.
For example, if you're a backend web developer, some “system” knowledge you
might need includes:
* how HTTP caching works
* CORS
* how database transactions work
I find that I often have to be a bit more intentional about learning systemic
things like this; I need to actually take the time to look them up and read
about them.
The paper calls this “Domain Knowledge”.
#### 3. learn your tools
There are lots of debugging tools out there, for example:
* debuggers (gdb etc)
* browser developer tools
* profilers
* strace / ltrace
* tcpdump / wireshark
* core dumps
* and even basic things like error messages (how do you read them properly)
I've written a lot about debugging tools on this blog, and learning these
tools has definitely made a huge difference to me.
The paper calls this “Procedural Knowledge”.
#### 4. learn strategies
This is the fuzziest category. We all pick up a lot of strategies and heuristics
along the way for how to debug efficiently. For example:
* writing a unit test
* writing a tiny standalone program to reproduce the bug
* finding a working version of the code and seeing what changed
* printing out a million things
* adding extra logging
* taking a break
* explaining the bug to a friend and then figuring out what's wrong halfway through
* looking through the github issues to see if anything matches
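Two of these strategies (a tiny standalone reproduction and a unit test) can be sketched together. The off-by-one bug below is invented purely for illustration:

```python
# Buggy helper: supposed to return the last n items of a list.
def last_n_buggy(items, n):
    return items[-n:]   # looks right, but items[-0:] is the whole list

# Standalone reproduction: the smallest input that shows the bug.
# last_n_buggy([1, 2, 3], 0) returns [1, 2, 3] instead of [].

# Fixed version, handling the n == 0 edge case explicitly.
def last_n_fixed(items, n):
    return items[len(items) - n:] if n > 0 else []

# Unit test capturing the bug so it stays fixed.
def test_last_n():
    assert last_n_fixed([1, 2, 3], 2) == [2, 3]
    assert last_n_fixed([1, 2, 3], 0) == []
```

Once the repro is this small, the cause is usually obvious, and the test keeps the bug from coming back.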
I've been thinking a lot about this category while writing the zine, but I want
to keep this post short so I won't say more about it here.
The paper calls this “Strategic Knowledge”.
#### 5. get experience
The last category is “experience”. The paper has a really funny comment about this:
> Their findings did not show a significant difference in the strategies
> employed by the novices and experts. Experts simply formed more correct
> hypotheses and were more efficient at finding the fault. The authors suspect
> that this result is due to the difference in the programming experience between
> novices and experts.
This really resonated with me: I've had SO MANY bugs that were really
frustrating and difficult the first time I ran into them, and very straightforward
the fifth or tenth or 20th time.
This also feels like one of the most straightforward categories of knowledge to
acquire to me: all you need to do is investigate a million bugs, which is our
whole life as programmers anyway :). It takes a long time but I feel like it
happens pretty naturally.
The paper calls this “Experiential Knowledge”.
#### that's all!
I'm going to keep this post short; I just really liked this categorization and
wanted to share it.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/08/30/a-way-to-categorize-debugging-skills/
Author: [Julia Evans][a]
Topic selection: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://jvns.ca/
[b]: https://github.com/lkxed
[1]: https://twitter.com/b0rk/status/1562480240240525314?s=20&t=BwKd6i0mVCTaCud2HDEUBA
[2]: https://third-bit.com/
[3]: https://dl.acm.org/doi/abs/10.1145/3286960.3286970
[4]: https://www.researchgate.net/profile/Woei-Hung/publication/225547853_Learning_to_Troubleshoot_A_New_Theory-Based_Design_Architecture/links/556f471c08aec226830a74e7/Learning-to-Troubleshoot-A-New-Theory-Based-Design-Architecture.pdf


@ -0,0 +1,148 @@
[#]: subject: "Why We Need Time Series Databases for Site Reliability Engineering"
[#]: via: "https://www.opensourceforu.com/2022/08/why-we-need-time-series-databases-for-site-reliability-engineering/"
[#]: author: "K. Narasimha Sekhar https://www.opensourceforu.com/author/k-narasimha-sekhar/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why We Need Time Series Databases for Site Reliability Engineering
======
It's not uncommon to deal with petabytes of data today, even when carrying out traditional types of analysis and reporting. Traditional databases, however, do not offer optimal mechanisms to store and retrieve large scale time series data. To meet the demand of time series analysis, new types of databases are emerging.
### Real-world use cases
Time series data streams are gathered in many real-world scenarios. The volume of data and the velocity of data generation differ from case to case.
Typical scenarios from different fields are described below.
* Monitoring data gathered as part of site reliability engineering: Health, performance, and capacity parameters are collected periodically over time from various layers of infrastructure and applications. These time series data streams are analysed for anomalies, fault detection, forecasting, etc. Huge amounts of time series data need to be analysed in real-time to avoid service breakdowns and for quick recovery. This data is also stored and retrieved for processing later, such as capacity forecasting.
* IoT devices and sensors generate continuous streams of data.
* Autonomous trading algorithms continuously collect data on how the markets are changing in order to optimise returns.
* The retail industry collects and monitors supply chain and inventory data to optimise costs.
* Weather forecasting teams continuously collect climate parameters such as temperature, humidity, etc, for predictions.
* Autonomous vehicles continuously collect data about how their environment is changing, adjusting the drive based on weather conditions, engine status, and countless other variables.
* In the medical field, sensors generate time data streams for blood pressure tracking, weight tracking, cholesterol measurements, heart rate monitoring, etc.
There are numerous real-world scenarios where we collect time series data streams. These demand an efficient database for storing and retrieving time series data.
![Figure 1: Alerting engine based on time series data analysis][1]
### Time series data analysis
Time series data is a sequence of data points collected over time intervals, giving us the ability to track changes over time. Because data points in time series are collected at adjacent time periods, the observations can be correlated. This feature distinguishes time series data from traditional data. Time series data can be useful to help recognise patterns or a trend. Knowing the value of a specific parameter at the current time is quite different than the ability to observe its behaviour over a long time interval. Time series data allows us to measure and analyse change — what has changed in the past, what is changing in the present, and what changes may take place in the future. Time series data can track changes over milliseconds, days, or even years. Table 1 outlines the typical questions that time series analysis can help to answer.
| Category | Typical questions to be addressed |
| :- | :- |
| Prognostication | What are the short- and long-term trends for a measurement or group of measurements? |
| Introspection | How do several measurements correlate over a period of time? |
| Prediction | How do I build a machine learning model based on the temporal behaviour of many measurements correlated to externally known facts? |
| Introspection | Have similar patterns of measurements preceded similar events? |
| Diagnosis | What measurements might indicate the cause of some event, such as a system failure? |
| Forecasting | How many more servers will be needed for handling next quarter's workload? |
| Segmentation | How to divide a data stream into a sequence of discrete segments in order to reveal the underlying properties of its source? |
Typical steps in time series data analysis are:
* Collecting the data and cleaning it
* Visualising with respect to time vs key feature
* Observing the stationarity of the series
* Developing charts to understand the nature of the data
* Building models such as AR, MA, ARMA and ARIMA
* Extracting insights from the predictions
There are three components of time series analysis — trend, seasonality and residual analysis.
*Trend:* This indicates the direction in which the data is moving over a period of time.
*Seasonality:* Seasonality is about periodic behaviour — spikes or drops caused by different factors like:
* Naturally occurring events like weather fluctuations
* Business or administrative procedures like the start or end of a fiscal year
* Social and cultural behaviour like holidays or festivals
* Calendar events, like the number of Mondays per month or holidays that change every year
*Residual analysis:* These are the irregular fluctuations that cannot be predicted using trend or seasonality analysis.
An observed data stream could be additive (trend + seasonality + residual) or multiplicative (trend * seasonality * residual).
Once these components are identified, models are built to understand time series and check for anomalies, forecasting and correlations. For time series data modelling, AR, MA, ARMA and ARIMA algorithms are widely adopted. Many other advanced AI/ML algorithms are being proposed for better evaluation.
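As a rough illustration of the additive model, a toy decomposition can estimate the trend with a centered moving average, average the detrended values at each phase of the period to get seasonality, and keep the remainder as the residual. This is only a sketch under simplifying assumptions; real analyses typically use library implementations of the models listed above:

```python
def moving_average(xs, window):
    """Centered moving average; edges use whatever part of the window exists."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo = max(0, i - half)
        hi = min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def decompose_additive(xs, period):
    """Split a series into trend + seasonality + residual (additive model)."""
    trend = moving_average(xs, period)
    detrended = [x - t for x, t in zip(xs, trend)]
    # Seasonal component: mean detrended value at each phase of the period.
    means = [sum(detrended[i::period]) / len(detrended[i::period])
             for i in range(period)]
    seasonal = [means[i % period] for i in range(len(xs))]
    residual = [x - t - s for x, t, s in zip(xs, trend, seasonal)]
    return trend, seasonal, residual
```

By construction, each observation is exactly the sum of its three components, which is what "additive" means here.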
### Time series databases
A time series database (TSDB) is a database optimised for time-stamped or time series data. Time series data is simply measurements or events that are tracked, monitored, down sampled, and aggregated over time. These could be server metrics, application performance monitoring, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data.
Looking back 10 years, the amount of data that was once collected in 10 minutes for some very active systems is now generated every second. To process these high volumes, we need different tools and approaches.
To design an optimal TSDB, we must analyse the properties of time series data, and the demands of time series analysis applications. The typical characteristics of time series data and its use cases are:
* Time series is a sequence of values, each with a time value indicating when the value was recorded.
* Time series data entries are rarely amended.
* Time series data is often retrieved by reading a contiguous sequence of samples.
* Most of the time, we collect and store multiple time series. Queries to retrieve data from one or a few time series for a particular time range are very common.
* The volume and velocity of time series data is very high.
* Both long-term and short-term trends in the time series are very important for analysis.
* Summarising or aggregating high volume time series data sets is a very basic requirement.
* Traditional DB operations such as searching, sorting, joining tables, etc, are not required.
Properties that make time series data very different from other data workloads are data life cycle management, summarisation, and large range scans of many records. TSDB is designed to simplify and strengthen the process for real-world time series applications.
Storing time series data in flat files limits its utility. Data will outgrow them, and access is inefficient. Traditional RDBMS databases are not designed from the ground up for time series data storage. They will not scale well to handle huge volumes of time series data, and their schema is not appropriate. Getting good performance for time series from an SQL database requires significant customisation and configuration. Without that, unless you're working with a very small data set, an SQL-based database will simply not work properly. A NoSQL non-relational database is preferred because it scales well and efficiently enables rapid queries based on time range.
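The access pattern described above (append-only writes in time order, reads over a contiguous time range) can be sketched in a few lines. `SimpleTSDB` is an invented, in-memory toy, not a real time series database:

```python
import bisect

class SimpleTSDB:
    """Append-only store for one series; points arrive in time order."""

    def __init__(self):
        self.timestamps = []   # stays sorted because writes are append-only
        self.values = []

    def append(self, ts, value):
        if self.timestamps and ts < self.timestamps[-1]:
            raise ValueError("out-of-order write")
        self.timestamps.append(ts)
        self.values.append(value)

    def range_query(self, start, end):
        """Return (ts, value) pairs with start <= ts < end via binary search."""
        lo = bisect.bisect_left(self.timestamps, start)
        hi = bisect.bisect_left(self.timestamps, end)
        return list(zip(self.timestamps[lo:hi], self.values[lo:hi]))
```

Because writes arrive in time order, the timestamp list is sorted for free, so a range query is two binary searches plus one contiguous slice, mirroring the contiguous scans real TSDBs rely on.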
![Figure 2: Time series analytics engine on AWS Cloud][2]
A TSDB is optimised for best performance for queries based on a range of time. New NoSQL non-relational databases come with considerable advantages (like flexibility and performance) over traditional relational databases (RDBMS) for this purpose. NoSQL databases and relational databases share the same basic goals: to store and retrieve data and to coordinate changes. The difference is that NoSQL databases trade away some of the capabilities of relational databases in order to improve scalability. The benefits of making this trade include greater simplicity in the NoSQL database, the ability to handle semi-structured and denormalised data and, potentially, much higher scalability for the system.
At very large scales, time-based queries can be implemented as large, contiguous scans that are very efficient if the data is stored appropriately in a time series database. And if the amount of data is very large, a non-relational TSDB in a NoSQL system is typically needed to provide sufficient scalability.
Non-relational time series databases enable discovery of patterns in time series data, long-term trends, and correlations between data representing different types of events. The time ranges of interest extend in both directions. In addition to the very short time-range queries, long-term histories for time series data are needed, especially to discover complex trends.
Time series databases have key architectural design properties that make them very different from other databases. These include time-stamp data storage and compression, data life cycle management, data summarisation, the ability to handle large time series-dependent scans of many records, and time series aware queries.
For example, with a time series database, it is common to request a summary of data over a large time period. This requires going over a range of data points to perform computations like the percentage increase of a metric this month over the same period in the last six months, summarised by month. This kind of workload is very difficult to optimise for with a distributed key-value store. TSDBs are optimised for exactly this use case and can give millisecond-level responses over months of data. Here is another example. With time series databases, it's common to keep high-precision data around for a short period of time. This data is aggregated and downsampled into long-term trend data. This means that every data point that goes into the database has to be deleted after its retention period is up. This kind of data life cycle management is difficult for application developers to implement on top of regular databases. They must devise schemes for cheaply evicting large sets of data and constantly summarising that data at scale. With a time series database, this functionality is provided out of the box.
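The downsampling step mentioned here can be sketched as a simple bucketing operation. This is an illustrative toy; real TSDBs perform it incrementally and at far larger scale:

```python
def downsample(points, bucket_seconds):
    """Aggregate (timestamp, value) points into fixed-width time buckets.

    Returns {bucket_start: (min, max, mean, count)}, so the raw points can
    be discarded once the long-term rollup has been stored.
    """
    buckets = {}
    for ts, value in points:
        bucket = (ts // bucket_seconds) * bucket_seconds
        buckets.setdefault(bucket, []).append(value)
    return {
        b: (min(vs), max(vs), sum(vs) / len(vs), len(vs))
        for b, vs in sorted(buckets.items())
    }
```

Keeping min, max, mean and count per bucket preserves enough information for most long-term trend queries while shrinking storage dramatically.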
Since time series data comes in time order and is typically collected in real-time, time series databases are immutable and append-only to accommodate extremely high volumes of data. This append-only property distinguishes time series databases from relational databases, which are optimised for transactions but only accommodate lower ingest volumes. In general, depending on their particular use case, NoSQL databases will trade off the ACID principles for a BASE model (whose principles are basic availability, soft state and eventual consistency). For example, one individual point in a time series is fairly useless in isolation, and the important thing is the trend in total.
### Alerts based on time series data analysis for site reliability
Time series data models are very common in site reliability engineering. Time series analysis is used to monitor system health, performance, anomaly detection, security threat detection, inventory forecasting, etc. Figure 1 shows a typical alerting mechanism based on analysing time series data collected from different components.
Modern data centres are complex systems with a variety of operations and analytics taking place around the clock. Multiple teams need access at the same time, which requires coordination. In order to optimise resource use and manage workloads, systems administrators monitor a huge number of parameters with frequent measurements for a fine-grained view. For example, data on CPU usage, memory residency, IO activity, levels of disk storage, and many other parameters are all useful to collect as time series.
Once these data sets are recorded as time series, data centre operations teams can reconstruct the circumstances that lead to outages, plan upgrades by looking at trends, or even detect many kinds of security intrusions by noticing changes in the volume and patterns of data transfer between servers and the outside world.
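A very simple version of the alerting engine in Figure 1 flags samples that deviate sharply from a sliding baseline. The window size and threshold below are invented for illustration; production systems use far more robust detectors:

```python
import statistics

def detect_anomalies(values, window=10, threshold=3.0):
    """Return indices of samples more than `threshold` standard deviations
    away from the mean of the preceding `window` samples."""
    alerts = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(values[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts
```

An alerting pipeline would feed such indices to a notification service; the point here is only that the detection itself is a time-range computation over recent history, exactly the query shape TSDBs optimise for.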
### Open source TSDBs
[Time series databases][3] are the fastest growing segment in the database industry. There are many commercial and open source time series databases available. A few well-known open source time series databases are listed below:
* InfluxDB is one of the most popular time series open source databases, and is written in Go. It has been designed to provide a highly scalable data ingestion and storage engine. It is very efficient at collecting, storing, querying, visualising, and taking action on streams of time series data, events, and metrics in real-time. It uses InfluxQL, which is very similar to a structured query language, for interacting with data.
* Prometheus is an open source monitoring solution used to understand insights from metrics data and send the necessary alerts. It has a local on-disk time-series database that stores data in a custom format on disk. It provides a functional query language called PromQL.
* TimescaleDB is an open source relational database that makes SQL scalable for time series data. This database is built on PostgreSQL.
* Graphite is an all-in-one solution for storing and efficiently visualising real-time time series data. Graphite can store time series data and render graphs on demand. To collect data, we can use tools such as collectd, Ganglia, Sensu, telegraf, etc.
* QuestDB is a relational column-oriented database that can perform real-time analytics on time series data. It works with SQL and some extensions to create a relational model for time series data. It supports relational and time-series joins, which helps in correlating the data.
* OpenTSDB is a scalable time series database that has been written on top of HBase. It is capable of storing trillions of data points at millions of writes per second. It has a time-series daemon (TSD) and command-line utilities. TSD is responsible for storing data in or retrieving it from HBase. You can talk to TSD using HTTP API, telnet, or a simple built-in GUI. You need tools like flume, collectd, vacuumetrix, etc, to collect data from various sources into OpenTSDB.
### Cloud native TSDBs
Cloud hyperscalers like Azure, AWS and Google offer time series databases and analytics services as part of their cloud portfolios. AWS Timestream is a serverless time series database service that is fast and scalable. It is used mainly for IoT applications that store trillions of events per day, and AWS claims it is up to 1,000 times faster at as little as one-tenth the cost of relational databases. Using its purpose-built query engine, you can query recent and historical data simultaneously. It provides multiple built-in functions to analyse time series data and find useful insights.
Microsoft Azure Time Series Insights provides a time series analytics engine. For data ingestion, there are the Azure IoT Hub and Event Hub services. To analyse cloud infrastructure and time series streams, these cloud vendors offer a range of native tools such as AWS CloudWatch, Azure Monitor, Amazon Kinesis, etc.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/why-we-need-time-series-databases-for-site-reliability-engineering/
Author: [K. Narasimha Sekhar][a]
Topic selection: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.opensourceforu.com/author/k-narasimha-sekhar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-1-Alerting-engine-based-on-time-series-data-analysis.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-2-Time-series-analytics-engine-on-AWS-Cloud.jpg
[3]: https://en.wikipedia.org/wiki/Time_series_database


@ -0,0 +1,115 @@
[#]: subject: "How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish"
[#]: via: "https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish
======
The KDE developers have now pushed the necessary updates for KDE Plasma 5.25 to the popular backports PPA, so you can now install it in Kubuntu 22.04 Jammy Jellyfish. Here's how.
KDE Plasma 5.25 was released a few days ago, on June 14, 2022, with some amazing updates. In this release, you get **dynamic accent colours**, an improved login avatar, a **floating panel**, and many more features that we covered in the [feature highlights article][1].
However, if you are running [Kubuntu 22.04 Jammy Jellyfish][2], which was released back in April 2022, you have KDE Plasma 5.24 with KDE Framework 5.92.
You may have been waiting to enjoy the new features on the stable Kubuntu 22.04 release; you can now install them in Kubuntu 22.04 via the well-known backports PPA.
### How to install KDE Plasma 5.25 in Kubuntu 22.04
Here is how to upgrade Kubuntu 22.04 with the latest KDE Plasma 5.25.
#### The GUI way
If you are comfortable with Discover, KDE's software application, open that app. Then go to Settings > Sources and add the PPA `ppa:kubuntu-ppa/backports-extra`. Then click Update.
#### The terminal method (recommended)
I recommend you open a terminal and do this upgrade, for faster execution and installation.
* Open Konsole and run the following command to add the [backports PPA][3].
```
sudo add-apt-repository ppa:kubuntu-ppa/backports-extra
```
![Upgrade Kubuntu 22.04 with KDE Plasma 5.25][4]
* Now, refresh the package list by running the following command. Then verify that the 5.25 packages are available.
```
sudo apt update
```
```
apt list --upgradable | grep 5.25
```
![KDE Plasma 5.25 packages are available now][5]
Finally, run the last command to start the upgrade.
```
sudo apt full-upgrade
```
In total, around 200 MB of packages are downloaded. Depending on your internet connection speed, the entire process takes around 10 minutes.
After the above command completes, restart your system.
After the reboot, you should see the new KDE Plasma 5.25 in Kubuntu 22.04 LTS.
![KDE Plasma 5.25 in Kubuntu 22.04 LTS][6]
### The other backports PPA
Note that the [other backports PPA][7], `ppa:kubuntu-ppa/backports`, currently has Plasma 5.24. So, do not use that PPA; it is different from the one above. I am not sure whether that PPA will get this update.
```
sudo add-apt-repository ppa:kubuntu-ppa/backports  # do not use this one
```
### How to uninstall
At any point, if you want to go back to the stock version of the KDE Plasma desktop, you can install ppa-purge, remove the PPA, and then refresh the packages.
Open a terminal and run the following commands in order.
```
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-extra
sudo apt update
```
After the above commands finish, restart your system.
### Closing notes
That's all. A nice and simple set of steps to upgrade KDE Plasma in Jammy Jellyfish to Plasma 5.25. I hope your upgrade goes smoothly.
If you run into any errors, let me know in the comment section.
Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/
Author: [Arindam][a]
Topic selection: [lkxed][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/kde-plasma-5-25/
[2]: https://www.debugpoint.com/kubuntu-22-04-lts/
[3]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports-extra
[4]: https://www.debugpoint.com/wp-content/uploads/2022/08/Upgrade-Kubuntu-22.04-with-KDE-Plasma-5.25.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-packages-are-available-now.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-in-Kubuntu-22.04-LTS-1024x575.jpg
[7]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports