mirror of https://github.com/LCTT/TranslateProject.git (synced 2024-12-26 21:30:55 +08:00, commit b178ee4584)

[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14967-1.html"

深入了解 EPUB 文件
======

![](https://img.linux.net.cn/data/attachment/album/202208/25/223832eo3gq2o32uz0u0ll.jpg)

图片来源:Lewis Cowles,CC BY-SA 4.0

> EPUB 文件是使用开放格式发布内容的好方法。

电子书提供了一种随时随地阅读书籍、杂志和其他内容的好方法。读者可以在长途飞行和乘坐火车时用电子书打发时间。最流行的电子书文件格式是 EPUB,它是“<ruby>电子出版物<rt>electronic publication</rt></ruby>”的缩写。EPUB 文件受到各种电子阅读器的支持,是当今电子书出版的事实标准。

EPUB 文件格式是基于 XHTML 内容和 XML 元数据的开放标准,打包在一个 zip 存档中。由于一切都基于开放标准,我们可以使用通用工具来创建或检查 EPUB 文件。让我们探索一个 EPUB 文件以了解更多信息。今年早些时候在 Opensource.com 上发布的《[C 编程技巧和窍门指南][2]》提供 PDF 和 EPUB 两种格式。

因为 EPUB 文件是放在 zip 文件中的 XHTML 内容和 XML 元数据,所以你可以用 `unzip` 命令在命令行检查 EPUB:

```
$ unzip -l osdc_Jim-Hall_C-Programming-Tips.epub
```

这个 EPUB 包含很多文件,但其中大部分是内容。要了解 EPUB 文件是如何组合在一起的,请遵循电子书阅读器的流程:

1、电子书阅读器需要验证 EPUB 文件是否真的是 EPUB 文件。它通过检查 EPUB 存档根目录中的 `mimetype` 文件来验证。该文件仅包含一行,描述了 EPUB 文件的 MIME 类型:

```
application/epub+zip
```
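下面用一个示意性的 Python 片段来演示阅读器的这第一步检查。注意:其中的 `demo.epub` 文件名和 `is_epub()` 函数都是为演示虚构的,并非文章中的示例文件;片段先用标准库 `zipfile` 生成一个最小的演示存档,再读取其中的 `mimetype` 条目:

```python
import os
import tempfile
import zipfile

# 先构建一个最小的演示 EPUB(虚构文件,仅为演示)
path = os.path.join(tempfile.mkdtemp(), "demo.epub")
with zipfile.ZipFile(path, "w") as z:
    # 规范要求 mimetype 条目不压缩,且是存档中的第一个文件
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)

def is_epub(path):
    """模仿电子书阅读器的第一步:检查 mimetype 文件的内容。"""
    with zipfile.ZipFile(path) as z:
        return z.read("mimetype").decode("ascii") == "application/epub+zip"

print(is_epub(path))  # True
```

对任何真正的 EPUB 文件调用 `is_epub()`,都应该得到同样的结果。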
2、为了定位内容,电子书阅读器从 `META-INF/container.xml` 文件开始。这是一个简短的 XML 文档,指示在哪里可以找到内容。对于此 EPUB 文件,`container.xml` 文件如下所示:

```
<?xml version="1.0" encoding="UTF-8"?>
...
</container>
```

为了使 `container.xml` 文件更易于阅读,我将单行拆分为多行,并添加了一些缩进。XML 并不关心换行和空格等额外的空白,因此这种额外的缩进不会影响 XML 文件。
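要在程序中取出这个入口路径,可以用标准库的 ElementTree 解析 `container.xml`。下面是一个示意性的 Python 片段,其中的 XML 是按原文描述拼出的简化版(`full-path` 指向原文提到的 `OEBPS/content.opf`);关键点是查找元素时必须带上 OCF 规范的命名空间:

```python
import xml.etree.ElementTree as ET

# 按原文描述拼出的简化版 container.xml(入口路径 OEBPS/content.opf 来自原文)
container_xml = """<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

# 查找时必须声明命名空间,否则 find() 会返回 None
ns = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}
root = ET.fromstring(container_xml)
rootfile = root.find("c:rootfiles/c:rootfile", ns)
print(rootfile.get("full-path"))  # OEBPS/content.opf
```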
3、`container.xml` 文件表示 EPUB 的根从 `OEBPS` 目录中的 `content.opf` 文件开始。之所以使用 OPF 扩展名,是因为 EPUB 基于“<ruby>开放打包格式<rt>Open Packaging Format</rt></ruby>”,但 `content.opf` 文件实际上只是另一个 XML 文件。

4、`content.opf` 文件包含完整的 EPUB 内容清单,以及一个有序的目录和查找每一章、每一节的引用。这个 EPUB 的 `content.opf` 文件很长,因此我在此仅展示一小部分作为示例。

XML 数据包含在 `<package>` 块中,该块本身包含 `<metadata>` 块、`<manifest>` 数据和含有电子书目录的 `<spine>` 块:

```
...
</package>
```

你可以把数据对应起来,看看在哪里可以找到每个部分。EPUB 阅读器就是这样做的。例如,目录中的第一项引用了 `section0001`,它在清单中被定义为位于 `sections/section0001.xhtml` 文件中。该文件的名称不需要与 `idref` 条目相同,但 LibreOffice Writer 的自动程序就是这样创建该文件的。(你可以在元数据中看到,这个 EPUB 是在 Linux 上用 LibreOffice 7.3.0.3 版本创建的,它可以将内容导出为 EPUB 文件。)
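把 `<spine>` 中的 `idref` 与 `<manifest>` 中的条目对应起来,正是阅读器确定阅读顺序的方式。下面是一个示意性的 Python 片段,其中的 OPF 片段是根据原文示例虚构的最小版本,只保留了 `section0001` 一个条目:

```python
import xml.etree.ElementTree as ET

# 根据原文示例虚构的最小 content.opf 片段,只保留一个 section0001 条目
opf_xml = """<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="id">
  <manifest>
    <item id="section0001" href="sections/section0001.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="section0001"/>
  </spine>
</package>"""

ns = {"opf": "http://www.idpf.org/2007/opf"}
root = ET.fromstring(opf_xml)

# 清单:id -> 文件路径
manifest = {item.get("id"): item.get("href")
            for item in root.findall("opf:manifest/opf:item", ns)}
# 目录(spine):按顺序把每个 idref 换成清单中的实际文件
reading_order = [manifest[ref.get("idref")]
                 for ref in root.findall("opf:spine/opf:itemref", ns)]
print(reading_order)  # ['sections/section0001.xhtml']
```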
### EPUB 格式

...

--------------------------------------------------------------------------------

via: https://opensource.com/article/22/8/epub-file

作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[#]: subject: "Fedora 37: Top New Features and Release Wiki"
[#]: via: "https://www.debugpoint.com/fedora-37/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14968-1.html"

Fedora 37 新功能披露
======

![](https://img.linux.net.cn/data/attachment/album/202208/26/000924lz0vl82vsq2zf0v7.jpg)

> 关于 Fedora 37 及其新特性、发布细节等等。

Fedora 37 的开发工作已经结束,Beta 测试版即将来临。在这个阶段,Fedora 37 的功能和软件包已经最终确定。

在这篇常规的功能指南中,我总结了你应该知道的关于 Fedora 37 的基本功能,让你对预期的功能有一个概念。但是在这之前,先看看暂定的时间表:

* 测试版的发布日期是 2022 年 9 月 13 日,后备日期是 2022 年 9 月 20 日。
* Fedora 37 最终版计划于 2022 年 10 月 18 日发布,后备日期是 2022 年 10 月 25 日。

![Fedora 37 Workstation with GNOME 43][1]

### Fedora 37 的主要新功能

#### 内核

首先是构成核心的关键项目。Fedora 37 采用了 Linux 内核 5.19,这是目前最新的主线内核。Linux 内核 5.19 带来了一些基本功能,比如修复了 Retbleed 漏洞、支持 ARM、支持苹果 M1 NVMe SSD 控制器等等,你可以在我们的 [内核功能指南][2] 中了解更多。

使用最新内核的好处是,你可以确信自己用上了此时此刻最新、最好的硬件支持。

其次,桌面环境在这个版本中得到了更新。

#### 桌面环境

Fedora 37 是第一个带来令人惊艳的 GNOME 43 桌面的发行版,它带来了一些优秀的功能,比如:

* [重新改版后的快速设置][3],带有药丸式按钮
* 移植到 GTK4 和 libadwaita 的文件管理器(Nautilus)43
* 带有橡皮筋选择、徽章、响应式侧边栏等功能的文件管理器
* 更新的 GNOME Web,支持 WebExtension API

还有许多你期待了多年的功能。请查看我的 [GNOME 43 功能指南][4] 以了解更多。

Fedora 37 带来了 KDE Plasma 5.26 桌面环境,包括大量的新功能、性能改进和错误修复。KDE Plasma 桌面最值得注意的功能包括:

* 更新的概览屏幕
* 深色和浅色主题的动态壁纸
* 更新的 KDE 框架和应用程序

轻量级桌面 LXQt 发布了稳定版 1.1.0,它也随之来到了 Fedora 37 中。LXQt 1.1.0 为深色主题带来了外观统一的默认调色板、应用程序菜单的两个变体(简单和紧凑),并重新编排了 GTK 设置。此外,LXQt 1.1.0 也开始了将桌面组件移植到 Qt 6.0 的初始工作。所有这些错误修复和增强功能都出现在 Fedora LXQt 版本中。

此外,由于没有重要的新版本到来,其他主要桌面仍保持在当前版本,即各自 Fedora 定制版中的 Xfce 4.16 和 MATE 1.24。

让我们看看这个版本中影响所有 Fedora 定制版的系统级变化。

#### 系统级的变化

最重要的变化是对树莓派 4 的正式支持。得益于多年来的努力,你现在可以在最喜欢的树莓派上开箱即用地享受 Fedora 37 了。

Fedora Linux 一直是推动技术发展的先锋,总是在其他发行版之前采用最新的功能。因此,现在在 KDE Plasma(和 Kinoite)等定制版中,SDDM 显示管理器默认采用了 Wayland。这样,从 Fedora 发行版方面就完成了这些定制版向 Wayland 的过渡。

正如我 [之前的报道][5],Fedora Linux 37 计划为我们提供 Anaconda 网页安装程序的预览镜像。它可能不会在发布后立即可用,但应该会在发布后的几天内出现。

其他值得注意的变化包括将默认的主机名从 `fedora` 改为 `localhost`,以避免一些第三方系统配置检测问题。

除此之外,Fedora CoreOS 被打造为 Fedora 官方版本,现在与服务器版、物联网版和云计算版并列,以便你可以更好地发现和采用它。资源占用极小的 Fedora CoreOS 主要用于容器工作负载,并带来了自动更新和额外的功能。

遵循传统,这个版本也有 [全新的壁纸][6],分为夜间和白天两个版本。我必须说,它看起来很棒(见上面的桌面图片)。

最后,在这个版本中,Fedora 删除了 32 位的 Java 包,包括 JDK 8、11 和 17,因为使用率很低。此外,openssl1.1 软件包也被弃用。

工具链、应用程序和编程栈更新如下:

* Glibc 2.36 和 Binutils 2.38
* Node.js 18.x
* Perl 5.36
* Python 3.11

### Fedora 37 功能摘要

那么,这个版本的功能就介绍到这里。下面是对 Fedora 37 功能的总结:

* Linux 内核 5.19
* GNOME 43
* KDE Plasma 5.26
* Xfce 4.16
* MATE 1.24
* LXQt 1.1.0
* 新的基于网页的安装程序的预览镜像
* SDDM 显示管理器默认采用 Wayland(在 KDE Plasma 和其他桌面环境中)
* 官方支持树莓派 4
* Fedora CoreOS 成为官方版本
* 一些关键软件包放弃了 32 位支持
* 以及相关的工具链和编程语言更新

如果你有空闲时间,你可以 [体验一下][7]。不过,它非常不稳定,在测试版发布之前不推荐运行开发版。

**那么,这个版本中你最喜欢的功能是什么?请在评论区告诉我。**

--------------------------------------------------------------------------------

via: https://www.debugpoint.com/fedora-37/

作者:[Arindam][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/08/Fedora-37-Workstation-with-GNOME-43-1024x572.jpg
[2]: https://www.debugpoint.com/linux-kernel-5-19/
[3]: https://www.debugpoint.com/gnome-43-quick-settings/
[4]: https://www.debugpoint.com/gnome-43/
[5]: https://debugpointnews.com/fedora-37-anaconda-web-ui-installer/
[6]: https://debugpointnews.com/fedora-37-wallpaper/
[7]: https://dl.fedoraproject.org/pub/fedora/linux/development/37/Workstation/x86_64/iso/

[#]: subject: "Fedora 37: Top New Features and Release Wiki"
[#]: via: "https://www.debugpoint.com/fedora-37/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Fedora 37: Top New Features and Release Wiki
======
An article about Fedora 37, its new features, release details, and everything you need to know.

Fedora 37 development is wrapping up, and the beta is approaching. Hence, the features and packages are final at this stage.

In this usual feature guide, I have summarised the essential features you should know about Fedora 37 to give you an idea of what to expect. But before that, here's a tentative schedule:

* The beta release is due on September 13, 2022. The fallback date is September 20, 2022.
* The final Fedora 37 release is planned for October 18, 2022. The fallback date is October 25, 2022.

![Fedora 37 Workstation with GNOME 43][1]

### Fedora 37: Top New Features

#### Kernel

**First** up are the critical items that make up the core. Fedora 37 is powered by **Linux kernel 5.19**, the latest mainline kernel available now. Linux kernel 5.19 brings essential features such as a fix for the Retbleed vulnerability, ARM support, Apple M1 NVMe SSD controller support, and more, which you can read about in our [kernel feature guide][2].

The advantage of using the latest kernel is the assurance that you are getting the latest and greatest hardware support available at this moment.

**Next** up, the desktop environments are updated in this release.

#### Desktop Environment

Fedora 37 is the first distribution to bring the stunning **GNOME 43** desktop, which includes some excellent features such as:

* [Revamped quick settings][3] with pill buttons
* Files (Nautilus) 43, ported to GTK4 and libadwaita
* Files with rubber-band selection, emblems, and a responsive sidebar
* Updated GNOME Web with WebExtension API support

And many features you have been waiting years for. Do check out my [GNOME 43 feature guide][4] to learn more.

Fedora 37 brings the **KDE Plasma 5.26** desktop environment with tons of new features, performance improvements, and bug fixes. The most noteworthy features of the KDE Plasma desktop include:

* An updated overview screen
* Dynamic wallpaper for dark and light themes
* Updated KDE Frameworks and applications

Since the lightweight desktop LXQt got a stable 1.1.0 update, it arrives in Fedora 37. **LXQt 1.1.0** brings a default colour palette for dark themes for a uniform look, two variants (simple and compact) of the application menu, and rearranged GTK settings. Furthermore, LXQt 1.1.0 also starts the initial work of porting the desktop components to Qt 6.0. All these bug fixes and enhancements arrive in the Fedora LXQt edition.

In addition, the other primary desktop flavours remain at their current releases since no significant new updates have arrived, i.e. **Xfce 4.16 and MATE 1.24** for the respective Fedora flavours.

Let's see what the system-wide changes in this release are that impact all the Fedora flavours.

#### System-wide changes

The most significant change is the official support for **Raspberry Pi 4** boards. Thanks to the work over the years, you can now enjoy Fedora 37 on your favourite Pi boards with out-of-the-box support.

Fedora Linux is always a pioneer in advancing technology and adopting the latest features before any other distro. With that in mind, the **SDDM display manager now defaults to Wayland** in KDE Plasma (and Kinoite) and other flavours. This completes the Wayland transition for these flavours from the Fedora distro's side.

As I [reported earlier][5], Fedora Linux 37 plans to provide a preview image of a **web-based installer** for Anaconda. It might not be available immediately following the release, but it should arrive within a few days post-release.

Other noteworthy changes include changing the **default hostname from "fedora" to "localhost"** to mitigate some third-party system configuration detection issues.

Other than that, **Fedora CoreOS** is made an official Fedora edition and now stands together with the Server, IoT, and Cloud editions for better discovery and adoption. The minimal-footprint Fedora CoreOS is primarily used for container workloads and brings automatic updates and additional features.

Following tradition, this release also features a [brand-new wallpaper][6] with both night and day versions. I must say it looks awesome (see the desktop image above).

Finally, also in this release, Fedora **drops 32-bit Java** packages, including JDK 8, 11, and 17, since usage is low. In addition, the openssl1.1 package is also deprecated.

The toolchain, apps, and programming stacks are updated as follows:

* Glibc 2.36 and Binutils 2.38
* Node.js 18.x
* Perl 5.36
* Python 3.11

### Summary of features in Fedora 37

So, that's about it for the features of this release. Here's a summary of the Fedora 37 features:

* Linux kernel 5.19
* GNOME 43
* KDE Plasma 5.26
* Xfce 4.16
* MATE 1.24
* LXQt 1.1.0
* A preview image of the new web-based installer
* The SDDM display manager defaults to Wayland (in KDE Plasma and others)
* Official Raspberry Pi 4 support
* Fedora CoreOS becomes an official edition
* Key packages dropping 32-bit support
* And associated toolchain and programming language updates

If you have spare time, you can [give it a spin][7] or test drive it. Be aware that it is extremely unstable, and running the development version before the beta is not recommended.

**So, what's your favourite feature in this release? Let me know in the comment section.**

--------------------------------------------------------------------------------

via: https://www.debugpoint.com/fedora-37/

作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/08/Fedora-37-Workstation-with-GNOME-43-1024x572.jpg
[2]: https://www.debugpoint.com/linux-kernel-5-19/
[3]: https://www.debugpoint.com/gnome-43-quick-settings/
[4]: https://www.debugpoint.com/gnome-43/
[5]: https://debugpointnews.com/fedora-37-anaconda-web-ui-installer/
[6]: https://debugpointnews.com/fedora-37-wallpaper/
[7]: https://dl.fedoraproject.org/pub/fedora/linux/development/37/Workstation/x86_64/iso/

[#]: subject: "NGINX Pledges To Update, Improve, And Expand Its Open Source Ecosystem"
[#]: via: "https://www.opensourceforu.com/2022/08/nginx-pledges-to-update-improve-and-expand-its-open-source-ecosystem/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

NGINX Pledges To Update, Improve, And Expand Its Open Source Ecosystem
======
NGINX, the maker of the well-known web server of the same name, unveiled a number of upgrades at its free NGINX Sprint conference for open source programmers looking to create the newest applications. It also discussed its development over the last 18 years and presented its future vision, which will be based on the three promises of modernise, optimise, and extend.

Modernisation goes beyond just the code itself to cover code management, decision-making transparency, and community involvement. As part of this, and in recognition that the open source world lives on GitHub, all of its future projects will be hosted on GitHub rather than in the Mercurial version control system. In addition, it will carefully consider community input and add codes of conduct to all of its projects.

To enhance the developer experience, it intends to launch a new SaaS service that integrates with NGINX Open Source. It also intends to remove the paywall from several essential NGINX Open Source and NGINX Plus capabilities so that customers can access them without charge. One item that will be made accessible in this way is DNS service discovery, and the business is appealing in its Slack channel for user input on what else should be free.

The third pledge is to keep developing NGINX's functionality. Currently, NGINX is most frequently utilised as a Layer 7 data plane, which forces developers to adopt numerous workarounds for different deployment components. It aims to expand NGINX so that each requirement for testing and deployment can be fulfilled by an open source component that integrates with NGINX.

With the announcement of three upgrades that support these objectives, the company has already begun to fulfil these commitments. First, it will concentrate on its NGINX Kubernetes Gateway rather than its Kubernetes Ingress controller. Earlier this year, NGINX Kubernetes Gateway, a controller that implements the Kubernetes Gateway API, was made available.

The introduction of NGINX Agent, a compact application that can be installed alongside NGINX Open Source instances, was also announced. It will include features that were previously found only in commercial offerings.

--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/nginx-pledges-to-update-improve-and-expand-its-open-source-ecosystem/

作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

[#]: subject: "Lutris 0.5.11 Adds Open Source Macintosh Emulators and Amazon Games Integration"
[#]: via: "https://news.itsfoss.com/lutris-0-5-11-release/"
[#]: author: "Sagar Sharma https://news.itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Lutris 0.5.11 Adds Open Source Macintosh Emulators and Amazon Games Integration
======
Lutris 0.5.11 is a nice update with new Macintosh emulators and Amazon Games integration.

![Lutris 0.5.11 Adds Open Source Macintosh Emulators and Amazon Games Integration][1]

Lutris is an open-source game manager for Linux, giving you easy access to all kinds of game clients like Ubisoft Connect, Epic Games Store, and more.

It has made things so much easier for many Linux users. We also interviewed its creator in the past, with an insightful conversation:

[The Progress Linux has Made in Terms of Gaming is Simply Incredible: Lutris Creator][2]

Now, with the latest update (a minor release), we have some exciting feature additions!

### 🆕 Lutris 0.5.11: What's New?

![Lutris 0.5.11][4]

Being a point release, you may not notice any visual changes, but you get some new features and fixes to improve your user experience.

First, I'd like to mention some key features in this release:

* Integration with the Amazon Games launcher.
* Support for the open-source Macintosh emulators SheepShaver, Basilisk II, and Mini vMac.
* Changed shortcuts for toggling installed games (Ctrl + I) and hidden games (Ctrl + H).
* GNOME Terminal and Deepin Terminal are now recognized as terminal emulators.
* Support for Gamescope on Nvidia driver 515 and above.

Let me discuss the changes in more detail:

#### 🕹️ Amazon Prime Games Integration

![Lutris with Amazon prime gaming support][5]

This may not sound like much, but Amazon's game launcher is a Windows-only way of playing its games. Now, thanks to the integration support in Lutris, you can access those games and try playing them under Wine.

You can enable Amazon Prime Gaming from **Preferences > Sources**.

#### 🖥️ Addition of Open-Source Macintosh Emulators

![Lutris with support for open-source macintosh emulators][6]

This release adds three open-source Macintosh runners (emulators).

Curious about what they do?

Well, two of them (Basilisk II and Mini vMac) emulate 32-bit Macintosh machines, and the third one, SheepShaver, runs programs from the PowerPC Macintosh lineup.

#### ⌨️ Recognize GNOME Console and Deepin Terminal

![Running games in Linux terminal with Lutris][7]

With this point release, support for GNOME Console and Deepin Terminal was added for running text-based programs.

So, you no longer have to rely on what Lutris gives you by default!

#### 🛠️ Other Changes

Along with the highlights, another key change is the **support for Gamescope** with Nvidia drivers 515 and above.

Gamescope can be a boon when playing low-resolution games, as it helps you upscale the resolution.

Some other fixes and refinements include:

* Commands exiting with return code 256 for some installers have been fixed.
* Lutris no longer performs runtime updates when a game is launched through a shortcut.
* Random crashes when Lutris was unable to determine the screen resolution are now fixed.
* Crashes when MangoHud was used alongside Gamescope are now fixed.

#### 📥 Download Lutris 0.5.11

There are many ways to download the latest Lutris version for your Linux system. I would recommend using the Flatpak package from [Flathub][10].

You can also install it from your software center, or visit the official website to explore more options.

[Download Lutris][11]

--------------------------------------------------------------------------------

via: https://news.itsfoss.com/lutris-0-5-11-release/

作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://news.itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/08/lutris-0-5-11-update.jpg
[2]: https://news.itsfoss.com/lutris-creator-interview/
[4]: https://news.itsfoss.com/content/images/2022/08/Lutris.png
[5]: https://news.itsfoss.com/content/images/2022/08/Amazon-Prime-games-integration.png
[6]: https://news.itsfoss.com/content/images/2022/08/Macintosh-emulators-1.png
[7]: https://news.itsfoss.com/content/images/2022/08/Deepin-terminal.png
[8]: https://itsfoss.com/epic-games-linux/
[10]: https://flathub.org/apps/details/net.lutris.Lutris
[11]: https://lutris.net/

[#]: subject: "Want to Help Improve GNOME? This New Tool Gives You the Chance!"
[#]: via: "https://news.itsfoss.com/gnome-improve-tool/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Want to Help Improve GNOME? This New Tool Gives You the Chance!
======
A new tool that enables GNOME users to provide insights on their configuration and usage to help improve the user experience.

![Want to Help Improve GNOME? This New Tool Gives You the Chance!][1]

GNOME has come up with a tool that lets users provide **anonymous insights** about their configurations, extensions, and GNOME-tuned settings.

This should help GNOME learn more about user preferences and make better decisions to enhance the user experience.

Interestingly, an intern at **Red Hat** (*Vojtech Stanek*) created this tool.

### ℹ️ GNOME Info Collect: Ready to Install?

![gnome info collect terminal][2]

The tool (gnome-info-collect) is a simple terminal program that you need to download, install, and run to share the data with GNOME.

Here's what the tool collects from your GNOME system:

* Hardware information (including manufacturer and model).
* System settings (including workspace configuration, sharing features, SSH, etc.).
* GNOME Shell extensions installed and enabled.
* Application information (like installed apps and favorites).
* Linux distro and version.
* Flatpak and Flathub status.
* Default browser.
* A [salted hash][3] of machine ID + username.

You can find the package suitable for your distribution, and more details on the data collected, on its [GitLab page][4].

For instance, if you have an **Ubuntu-based distribution**, you can install it by typing in:

```
sudo snap install --classic gnome-info-collect
```

Once installed, fire it up using the following command in the terminal:

```
gnome-info-collect
```

Next, it displays the data that it intends to share with GNOME. So, if it looks good to you, you can choose to upload the data to GNOME's servers.

![][5]

Considering the data remains anonymous, it should help GNOME understand what their users like and focus on those improvements over time.

[Download gnome-info-collect][6]

💬 *What do you think about this new data collection tool for GNOME? Share your thoughts in the comments down below.*

--------------------------------------------------------------------------------

via: https://news.itsfoss.com/gnome-improve-tool/

作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/08/gnome-improvement-tool.jpg
[2]: https://news.itsfoss.com/content/images/2022/08/gnome-info-collect-terminal.png
[3]: https://en.wikipedia.org/wiki/Salt_(cryptography)
[4]: https://gitlab.gnome.org/vstanek/gnome-info-collect/
[5]: https://news.itsfoss.com/content/images/2022/08/gnome-info-collect-sharing.png
[6]: https://gitlab.gnome.org/vstanek/gnome-info-collect/

@ -0,0 +1,36 @@
|
||||
[#]: subject: "Wii U Emulator Cemu Going Open Source Is Significant For Emulation, Here’s Why"
|
||||
[#]: via: "https://www.opensourceforu.com/2022/08/wii-u-emulator-cemu-going-open-source-is-significant-for-emulation-heres-why/"
|
||||
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
|
||||
[#]: collector: "lkxed"
|
||||
[#]: translator: " "
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
|
||||
Wii U Emulator Cemu Going Open Source Is Significant For Emulation, Here’s Why
|
||||
======
|
||||
The Wii U emulator Cemu’s developer announced a significant 2.0 version release on Tuesday, delivering Linux binaries for the first time and opening up eight years of labour. Cemu, a Wii U emulator, made history in 2017 by earning thousands of dollars each month through Patreon to support its development. Cemu’s well-known Patreon, which briefly reached a peak income of $25,000, raised concerns about the morality of emulation, particularly when money is exchanged and when a project is “closed source” as opposed to “open source,” which means that the source code isn’t made available to the general public.
|
||||
|
||||
One of the main ways the emulation community defends itself from legal action is by making its source code available to the public, allowing litigious companies like Nintendo to examine it and verify that none of their proprietary code is used in the reverse-engineering process.
|
||||
|
||||
Linux support, according to Exzap, is “still pretty rough around the edges,” but he believes that will change rapidly as more emulator developers become familiar with Cemu and start to contribute to the project. Cemu was previously only compatible with Windows, but now that Linux is supported, it is possible to install it quickly on the Steam Deck. Before Cemu introduces flatpak support for one-click installation, it won’t be simple to start using the Deck, however that topic is already being explored on Github.
|
||||
|
||||
The author of Cemu used the 2.0 announcement to briefly discuss the emulator’s history; they were the only developers for the most of the emulator’s existence, and they claimed that the last two years have been particularly taxing on the project.
|
||||
|
||||
Exzap will continue to contribute, but anticipates that having other developers will aid in the creation of several important features, such as the ability to pause and resume emulation and enhance performance on older hardware.
|
||||
|
||||
“I have been working on Cemu for almost 8 years now, watching the project grow from an experiment that seemed infeasible, to something that, at its peak, was used by more than a million people,” exzap wrote on Tuesday. “Even today, when the Wii U has been mostly forgotten, we still get a quarter million downloads each month. There are still so many people enjoying Wii U games with Cemu and I will be eternally grateful that I got the chance to impact so many people’s life in a positive way, even if just a tiny bit.”
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/wii-u-emulator-cemu-going-open-source-is-significant-for-emulation-heres-why/

作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

@@ -0,0 +1,105 @@
[#]: subject: "“I wish the industry would not follow this ever increasing hype cycle for new stuff”"
[#]: via: "https://www.opensourceforu.com/2022/08/i-wish-the-industry-would-not-follow-this-ever-increasing-hype-cycle-for-new-stuff/"
[#]: author: "Abbinaya Kuzhanthaivel https://www.opensourceforu.com/author/abbinaya-swath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

“I wish the industry would not follow this ever increasing hype cycle for new stuff”
======
*While new technologies can lead to innovations, hype often goes with the territory. Gerald Venzl, senior director of product management at Oracle Corporation, speaks with Abbinaya Kuzhanthaivel about the risks of following the hype, and shares his thoughts on how to grow a career by contributing to open source projects.*

##### Q. Can you tell us a bit about your journey with open source?

**A.** I currently work as a database product manager looking after the Oracle Database. As part of my job, I help steer the direction of the product and help pinpoint priorities in the market that need to be addressed. Often people are surprised when I tell them that open source is an important aspect of that. For example, Oracle recently launched a Kubernetes operator for its database, which is fully open source and is available on GitHub. But we also have Docker build files on GitHub, and even some of our Oracle Database drivers are fully open source, such as our Node.js driver and the Python driver.

##### Q. Is Oracle your first venture into open source?

**A.** I was accustomed to open source in my personal life before joining Oracle. For me, open source is a natural part of my job.
Oracle has a long-standing commitment to open source with Oracle Linux, Java, MySQL and many other projects. Although the Oracle Database core is proprietary, we see the value of open source and of having people contribute to and understand the technologies around it.

##### Q. What do you think about open source as an opportunity?

**A.** The thing that I like about open source is that it’s essentially a democracy and everybody gets to participate. Not that all participants are core contributors, but it connects people from all over the world. The exposure to projects and other developers is important for growing your skillset and your career.

At Oracle, we are currently focusing a lot on the Kubernetes operator. That’s an exciting project. In my free time, I also write little tools and open source them. I have worked a little on a command-line tool called ‘csv2db’, which allows you to load CSV data into a database. I see these projects as a chance to educate myself. You read something, and then you try to implement it and see whether you really understood it, whether it works, and so on.

##### Q. How did this change happen from being a developer to a product manager?

**A.** My path was probably highly irregular. I am originally from Austria, where I worked as a developer for an American company headquartered in New York. I was part of the performance engineering team. I was actually one of those guys who tried to profile the code, and those skills eventually brought me to troubleshoot production systems at customer sites. I relocated to New York, and then a couple of years later Oracle asked me to join their pre-sales team as a result of our earlier interactions in running proof-of-concepts and performance tests on their systems together.

I liked the opportunity to learn by interacting with customers directly and discussing requirements. But for me, the turning point was moving back into development, and specifically into the technical aspects as product manager.

##### Q. What are the top two ‘must have’ qualities of your role?

**A.** I think one must definitely be able to accept ever-changing requirements. But it’s not just a reactive job. You also have to have a curious mindset. If you have a curious mind, are open to new ideas, and can think out of the box, you can establish your priorities and grow into the role.

##### Q. What would you like to say about the challenges in your current role?

**A.** Product management is a diverse and ever-changing role. One must be highly adaptive, because you need to steer the product, prioritise the internal roadmap, and also accept changes depending on market demands and customer needs. So the key challenge, and also the fun part, is that no day ever looks the same.

##### Q. Any important risk you have taken so far which you think might inspire others?

**A.** I think my shift into product management was the biggest risk. The future is unpredictable, and one must dare to take such risks. For open source projects, it’s probably taking the step and actually wanting to contribute to a project. It can be a bit daunting in the beginning, especially knowing where to start contributing on very popular, active projects. Not all contributions have to be made in code; often people appreciate help with the documentation or testing. There are many ways to get started — the important thing is that you do get started. If you are passionate about something, you will have to go after it. You may have people not reacting the way you expect at first, but that’s okay. It will ultimately help you learn.

##### Q. What do you consider as top leadership qualities?

**A.** There are a couple of things that I think are important for leadership. The first one is that it’s okay to be wrong, and to acknowledge it. This also encourages people around you to freely share ideas. The other thing would be to take chances and get out of your comfort zone. I have never formally learnt product management. I was just intrigued by it, gave it a try, and thought, let’s see how it goes. It has put me in a good space for growth.

##### Q. Anything you are not very happy about when it comes to open source?

**A.** I wish the industry would not follow this ever increasing hype cycle for new stuff. New doesn’t automatically mean great. We may talk just about all the new things out there, but the world still runs on Linux and, remember, we still use HTTP, TCP/IP, and all those technologies today. The fundamental technologies that connect us around the globe have been there for a long time. Something doesn’t have to be new to be great, and often new technologies go just as fast as they came.

##### Q. What are the major risks in going ahead with new technologies?

**A.** A major risk is forgetting, or not wanting, to ask why we need the new technology. In our industry we get excited very quickly about something that we then want to work with. Sometimes that means that we oversimplify some business requirements and kind of omit the downsides of a new technology, just so that we can use it for a new project. I agree that new things lead to innovations and there is nothing wrong with that. But I have equally seen just as many projects fail that tried to replace a legacy system because a new buzz technology was out there and looked attractive. I’ve seen a three- or four-year-long project fail because no one bothered to ask ‘why’ when replacing the previous system; although it solved the new requirements, people forgot to ask themselves what the old system did well, and they just ended up with those old issues again.

##### Q. How does one keep away from the hype around new technologies?

**A.** I would say that, as a developer, don’t just blindly follow the latest and greatest. If someone is telling you about new stuff, well, it’s great to know. But it’s no surprise to me that Linux runs the world and HTTP runs the web, because those technologies are really well designed. Have an open mind and look at what’s new, but think about whether this new technology will actually serve your needs. It’s fine if it doesn’t.

##### Q. How can a developer find projects to contribute to while keeping away from the hype?

**A.** There’s nothing wrong with working on a hype project, but you have to make sure that you actually have interest. Think small — don’t expect to become the main contributor in the next two weeks or any other short span.

Remember, most open source projects value non-code contributions just as much, such as testing, reporting bugs, or pointing out gaps in the documentation. Don’t just go into it to write some code. Initially, have some idea about which area you want to contribute to; it should be something that excites you. Go to GitHub and just read through the project’s contributing guidelines. They will tell you what contributions the project needs and how to make them correctly.

Your work may be small, like adding a sentence to the docs or correcting a typo. But it will allow you to get familiar with the process and with the other people involved in these projects. Do not expect to jump into a project and change its core; most likely, only an approved committee of committers can actually change that part of the code. Build some trust, show that you understand the project, and over time your involvement will grow naturally.

##### Q. Any examples of projects you think were overrated because they had just used a new technology?

**A.** I have seen a few working in the database space. When you think about it, relational databases have been around for a very long time and are still going strong. To some extent, we have forgotten why relational databases became so popular. The goal was to organise the data in a way so that, five years from now, somebody who comes in and has no clue what the data looks like can make sense of it. For a while there was a general hype that you no longer need any database, whether relational or non-relational, because Hadoop will do everything. And it didn’t. It actually just led to data cleansing issues for many folks. Don’t get me wrong — there are companies that successfully run Hadoop clusters and there is nothing bad about the technology itself. But you have to understand what it is and when to use it.

##### Q. Oracle has recently introduced the new MongoDB API for its autonomous database. What was the reason behind it and how did it happen?

**A.** At Oracle, we follow the converged database methodology. This methodology focuses on bringing the algorithms and computation to the data, rather than the other way around. A very good analogy for the converged database methodology is the smartphone, where you can handle multiple use cases in one device, like taking a picture and sending it to a friend while being on a phone call, for example. In recent years, we have seen a proliferation of vendors pitching their technology to address an often simple use case. For example, developers like working with JSON documents, and MongoDB allows them to store and retrieve these documents. But it is one thing to store and retrieve them, and another to analyse terabytes of them in real time. We think SQL is a really good language for any kind of analytics, and we have the best database for mixed workloads, i.e., allowing real-time analytics while transactional workloads are running. Additionally, Oracle Database has been managing JSON documents natively since 2014.

Developers love the MongoDB API as it makes database interaction very natural for them. And we have the best database for analytics and mixed workloads that can also manage JSON documents natively. So we decided to give developers the best of both worlds — the same MongoDB API on top of the world’s leading database.

##### Q. Will you say MongoDB is a new hype?

**A.** The JSON format is a very useful hierarchical format and is nothing new — it’s been around since 2001. If you want to use JSON, then go ahead, by all means. Oracle has done a lot of work to introduce JSON operations into the SQL standard, and we see more and more databases supporting these standardised operations. You will find that if you want to work with JSON, retrieve JSON documents, query and manipulate data in JSON documents, you definitely don’t need to have a document store anymore. MongoDB is a cool technology, but so were XML databases in their day. I think it is definitely not needed for data management.

##### Q. Any hiring plans?

**A.** We are constantly hiring great talent, and all our openings can be accessed on the Oracle careers page. We are looking for people in a variety of different roles, including engineering and product management. We are strong in diversity and have people from all around the world and of all ages, including graduate students.

##### Q. Your message for our readers.

**A.** Programming is a universal language, and it’s great to be a developer and write programs. Don’t be shy; always be ready to try something new and get out of your comfort zone to do things you are passionate about.

--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/i-wish-the-industry-would-not-follow-this-ever-increasing-hype-cycle-for-new-stuff/

作者:[Abbinaya Kuzhanthaivel][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/abbinaya-swath/
[b]: https://github.com/lkxed

@@ -0,0 +1,131 @@
[#]: subject: "Happy birthday, Linux! Here are 6 Linux origin stories"
[#]: via: "https://opensource.com/article/22/8/linux-birthday-origin-stories"
[#]: author: "AmyJune Hineline https://opensource.com/users/amyjune"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Happy birthday, Linux! Here are 6 Linux origin stories
======
Our contributors share their first Linux experience on the 31st anniversary of the Linux kernel.

On August 25, 1991, Linux 0.01 was announced. All of us have a story to tell about Linux. I told my story a couple of months ago, but for those who weren't here: My first exposure to Linux was when my grassroots hospice organization moved from paper to digital charting. We didn't have the funding to get something proprietary, but the IT department had Linux set up on our old machine, and we used the GNOME desktop and OpenOffice to start our journey in creating digital assets.

I recently asked some Opensource.com authors this simple question:

*What was your first Linux experience?*

### From VAX to Linux

For my junior year of high school, I was shipped off to a state-run "nerd farm" (that's the North Carolina School of Science and Mathematics). Our first day on campus, the juniors were each assigned a senior big brother or sister. My senior big sister ditched me because she had tickets to go to a big outdoor music festival with her boyfriend, but when they came back all sunburned, we hung out in my mostly empty dorm room eating takeout on the floor. That was when I first met Matt.

As the year wound on, Matt showed me how to help as a student sysadmin changing backup reels for the VAX mainframe and doing basic tasks on the "big" workstation that doubled as a campus-wide UNIX server. He had a PC in his room, with GNU and XWindows on a Minix kernel, but found this cool new alternative that some Finnish student had started posting the source code for on Usenet. I knew, right then and there, that was my future.

When I got home for the summer, the first thing I did was buy a shiny new 486 with some of my savings from odd jobs, fired up a SLIP connection through our local BBS, and downloaded and decoded all the bits and pieces I'd need to bootstrap and compile Linux 0.96.

Matt and I mostly lost touch after he graduated, but I'll always owe him for introducing me to the operating system kernel I'd use for the rest of my life. I think of him every time I see that tattered old copy of **Running Linux** adorning my office shelf.

The "Matt" in this story is Matthew D. Welsh. After we lost touch, he became the original maintainer of [The Linux Documentation Project][2] and the author of the first edition of the O'Reilly Press book **Running Linux**.

**[—Jeremy Stanley][3]**

### Computer club

Friends at a [computer club][4] inspired me to try Linux.

I used Linux to help students learn more about other operating systems from 2012 to 2015, and I would say that Linux has taught me more about computers in general.

It has probably affected my "volunteer career" because to this day I write articles about being a neurodiverse person in the Linux world. I also attend and join different Linux events and groups, so I've had access to a community I probably wouldn't have known otherwise.

**[—Rikard Grossman-Nielsen][5]**

### Galaxy

My Linux story started a long time ago in a galaxy far, far away. In the early 90s, I spent a year in the US as a high school student. Over there, I had access to e-mail and the Internet. When I came back home to Hungary, I finished high school without any Internet access. There were no public Internet providers in Hungary at that time. Only higher education, and some research labs, had Internet. But in 1994, I started university.

The very first week of school, I was at the IT department asking for an email address. At that time, there was no Gmail, Hotmail, or anything similar. Not even teachers got an email address automatically at the university. It took some time and persistence, but I eventually received my first university email address. At the same time, I was invited to work in the faculty-student IT group. At first, I got access to a Novell and a FreeBSD server, but soon I was asked to give Linux a try.

It was probably late 1994 when I installed my first Linux at home. It was Slackware, from a huge pile of floppy disks. At first, I only did a minimal installation, but later I also installed X so I could have a GUI. In early 1995, I installed my first-ever Linux server at the university on a spare machine, which was also the first Linux server at the university. At that time, I used the [Fvwm2][6] window manager both at home and at the university.

At first, I studied environmental protection at the university, but my focus quickly became IT and IT security. After a while, I was running all the Linux and Unix servers of the faculty. I also had a part-time job elsewhere, running web and e-mail servers. I started a PhD about an environmental topic, but I ended up in IT. I've worked with FreeBSD and Linux ever since, helping [sudo][7] and `syslog-ng` users.

**[—Peter Czanik][8]**

### Education

I got introduced to Linux in the late 1990s by my brother and another friend. My first distro was Red Hat 5, and I didn't like it at the time. I couldn't get a GUI running, and all I could see was the command line, and I thought, "This is like MS-DOS." I didn't much care for that.

Then a year or more passed, and I picked up a copy of Red Hat 6.1 (I still have that copy) and got it installed on an HP Vectra with a Cyrix chip. It had plenty of hard disk space, which was fortunate because the Red Hat Linux software came on a CD. I got the GUI working, and set it up in our technology office at the school district I was employed at. I started experimenting with Linux and used the browser and Star Office (an ancestor of the modern [LibreOffice][9]), which was part of the included software.

A couple years later, our school district needed a content filter, and so I created one on an extra computer we had in our office. I got Squid, Squidguard, and later Dansguardian installed on Linux, and we had the first self-hosted open source content filter in a public school district in Western New York State. Using this distribution, and later Mandrake Linux (an ancestor of [Mageia][10] Linux) on old Pentium II and Pentium III computers, I set up devices that used [SAMBA][11] to provide backup and profile storage for teachers and other staff. Teaming with members of area school districts, I set up spam filtering for a fraction of the cost that proprietary solutions were offering at the time.

Franklinville Central School District is situated in an area of high rural poverty. I could see that using Linux and open source software was a way to level the playing field for our students, and as I continued to repurpose and refurbish the "cast-off" computers in our storage closets, I built a prototype Linux terminal server running Fedora Core 3 and 4. The software was part of the K12LTSP project. Older computers could be repurposed and PXE booted from this terminal server. At one point, we had several computer labs running the LTSP software. Our staff email server ran on RHEL 2.1, and later RHEL 3.0.

That journey, which began 25 years ago, continues to this day as I continue to learn and explore Linux. As my brother once said, "Linux is a software Erector set."

**[—Don Watkins][13]**

### Out in the open

My first experience with Linux was brief, and it involved a lot of floppies. As I recall, it was entertaining until my dear wife discovered that her laptop no longer had Windows 98 installed (she was only moderately relieved when I swapped back in the original drive and the "problem" disappeared). That was around 1998, with a Red Hat release that came with a book and a poor unsuspecting ThinkPad.

But really, at work I always had a nice Sun Workstation on my desktop, so why bother? In 2005, we decided to move to France for a while, and I had to get a (usefully) working Toshiba laptop, which meant Linux. After asking around, I decided to go with Ubuntu, so that was my first "real" experience. I think I installed the first distro (codenamed Warty Warthog), but soon I was on the latest. There were a few tears along the way, caused mostly by Toshiba's choice of hardware, but once it was running, that darned laptop was every bit as fast, and way more functional, for me than the old Sun. Eventually, we returned home, and I had a nice new Dell PC desktop. I installed Feisty Fawn, and I've never looked back.

I've tried a few other distros, but familiarity has its advantages, particularly when configuring stuff at the lowest of levels. Really though, if forced to switch, I think I would be happy with any decent Linux distro.

At a few points in time, I have had to do "kernel stuff", like bisecting for bugs and fiddling around with device drivers. I really can't remember the last time something that complicated was necessary, though.

Right now, I have two desktops and one laptop, all running Ubuntu 22.04, and two aging Cubox i4-pro devices running Armbian, a great Debian-based distro created for people using single-board computers and similar devices. I'm also responsible for a very small herd of virtual private servers running several distros, from CentOS to various versions of Ubuntu. That's not to mention a lot of Android-based stuff lying around, and we should recognize that it's Linux, too.

What really strikes me, as I read this back over, is how weird it all must sound to someone who has never escaped the clutches of a proprietary operating system.

**[—Chris Hermansen][15]**

### Getting involved

The first computer I bought was an Apple; the last Apple I owned was a IIe. I got fed up with Apple's strong proprietorship over the software and hardware, and switched to an Amiga, which had a nice GUI (incidentally, I have never owned another Apple product).

Amiga eventually crumbled, and so I switched to Windows—what an awful transition! About this time, somewhere in the mid- to late 90s, I was finding out about Linux, and began reading Linux magazines and learning how to set up Linux machines. I decided to set up a dual-boot machine with Windows, then bought Red Hat Linux, which at the time came on a number of floppy disks. The kernel would have been 2.0-something. I loaded it on my hard drive, and presto! I was using Linux—the command line. At that time, Linux didn't read all of your hardware and make automatic adjustments, and it didn't have all the drivers you needed, as it does today.

So next came the process of looking in BBSes or wherever to find out where to get drivers for the particular hardware I had, such as the graphics chip. Practically, this meant booting into Windows, saving the drivers to floppy disk, booting back into Linux, and loading the drivers onto the hard drive. You then had to hand-edit the configuration files so that Linux knew which drivers to use. This all took weeks to accomplish, but I can still recall the delight I felt when I typed `startx`, and up popped X-Windows!

If you wanted to update your kernel without waiting for and buying the next release, you had to compile it yourself. I remember I had to shut down every running program so the compiler didn't crash.

It's been smooth sailing ever since, with the switch to Fedora (then called "Fedora Core"), and the ease of updating software and the kernel.

Later, I got involved with the [Scribus][16] project, and I started reading and contributing to the mail list. Eventually, I began contributing to the documentation. Somewhere around 2009, Christoph Schaefer and I, communicating over the internet and sharing files, were able to write **Scribus, The Official Manual** in the space of about 9 months.

**[—Greg Pittman][17]**

--------------------------------------------------------------------------------

via: https://opensource.com/article/22/8/linux-birthday-origin-stories

作者:[AmyJune Hineline][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/amyjune
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/rh_003499_01_linux31x_cc.png
[2]: https://tldp.org/
[3]: https://opensource.com/users/fungi
[4]: https://opensource.com/article/22/5/my-journey-c-neurodiverse-perspective
[5]: https://opensource.com/users/rikardgn
[6]: https://opensource.com/article/19/12/fvwm-linux-desktop
[7]: https://opensource.com/article/22/8/debunk-sudo-myths
[8]: https://opensource.com/users/czanik
[9]: https://opensource.com/article/21/9/libreoffice-tips
[10]: http://mageia.org
[11]: https://opensource.com/article/21/12/file-sharing-linux-samba
[12]: https://opensource.com/article/22/5/essential-linux-commands
[13]: https://opensource.com/users/don-watkins
[14]: https://www.redhat.com/sysadmin/linux-kernel-tuning
[15]: https://opensource.com/users/clhermansen
[16]: http://scribus.net
[17]: https://opensource.com/users/greg-p

@@ -0,0 +1,126 @@
[#]: subject: "Linux Mint Release Cycle: What You Need to Know"
[#]: via: "https://itsfoss.com/linux-mint-release-cycle/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Linux Mint Release Cycle: What You Need to Know
======
Linux Mint is an Ubuntu-based distribution. You probably already know that.

Ubuntu releases a new version every six months, but Linux Mint doesn’t follow that six-month release pattern.

Linux Mint uses the Ubuntu LTS ([long term support][1]) version as its base. An LTS version of Ubuntu is released every two years, and hence **you also get a major Mint version every two years** (Mint 19, 20, 21, etc.).

Like the Ubuntu LTS versions, a major Linux Mint version is also supported for five years. However, there are **three point releases in between** (Mint 20.1, 20.2, 20.3).

Compared to Ubuntu, how long does Linux Mint receive updates? When should you expect an upgrade for Linux Mint? Should you upgrade when a new version is available?

Here, let me highlight all these necessary details regarding the release cycle of Linux Mint.

### Release Cycle of Linux Mint

Ubuntu releases a long-term support release every two years. A new Mint version follows soon after. In other words, you get a new Mint version every two years.

So, Linux Mint 20 was released in 2020 based on Ubuntu 20.04, and Mint 21 came in 2022 based on Ubuntu 22.04.

Unlike Ubuntu, there is no strict release schedule for Mint. There is no predefined release date. The new version arrives when it is deemed ready by its developers.

#### Point Releases

In between two major version releases of Mint, there are three point releases that arrive at an interval of six months.

So, Mint 20 (or 20.0) was released in June ’20. Mint 20.1 came in December ’20, Mint 20.2 in June ’21, and Mint 20.3 in December ’21. After that, the Mint team works on developing the next major release.

What do these point releases bring? A new version of the desktop environment, containing mostly visual changes in the UI, and sometimes new applications.

Upgrading to a point release is optional. You can choose to stay with 20.1 and not upgrade to 20.2 or 20.3. This is preferred by people who don’t like frequent (visual) changes to their systems.

After the last point release (XX.3), your system will only get security and maintenance updates for installed software. You won’t get new major versions of the desktop environment and some other software like GIMP or LibreOffice.

#### Support Cycle

Not all Ubuntu-based distributions give you the same update cycle benefit as Canonical’s Ubuntu. Many Ubuntu-based distributions and the [official flavours][2] provide support for up to 3 years.

Fortunately, for **Linux Mint**, you get the same update perks as Ubuntu.

**Each Linux Mint release is supported for five years**. After that, you must upgrade to the next version or install the newer version afresh.

For example, Mint 20 was released in 2020, a few months after Ubuntu 20.04. Ubuntu 20.04 LTS is supported till 2025, and thus the Mint 20 series is also supported till 2025.

All point releases of a series are supported till the same date. Mint 20.1, 20.2, and 20.3 will all be supported till 2025.

Similarly, Ubuntu 22.04 LTS will be supported until April 2027. You can expect the Linux Mint 21 series (based on Ubuntu 22.04) to follow the same timeline.

**To summarize:**

* You get a new major version of Linux Mint every two years
* Each major version is supported for five years
* Each major release (version XX) is followed by three point releases (XX.1, XX.2, XX.3) before the next major release
* The point releases (XX.1, XX.2, XX.3) are supported till the same time as their major version (XX)
|
||||
|
||||
### When Should You Upgrade Linux Mint?

That totally depends on you.

A new major version comes every two years. You can choose to upgrade then, or you can stay with your current version for its entire five-year lifecycle.

Unless you want access to the latest features and improvements, you can choose not to upgrade your Linux Mint installation to another major version.

For point releases, you may or may not choose to update, like 20 to 20.1, or 20.1 to 20.2. You will still get important security and maintenance updates even if you are not using the latest point release.

You can refer to our [Linux Mint upgrade guide][3] for help.

### Linux Mint Versioning and Naming

Unlike Ubuntu’s flavours, Linux Mint has a different numbering scheme. Linux Mint likes to bump up the number with every Ubuntu LTS release.

In other words:

Linux Mint 19 → **Ubuntu 18.04 LTS**

Linux Mint 20 → **Ubuntu 20.04 LTS**

Linux Mint 21 → **Ubuntu 22.04 LTS**

So, you should steer clear of the following confusion:

*That Linux Mint 20 was based on Ubuntu 20.04 does not mean that Linux Mint 21 will be based on Ubuntu 21.04.*

Furthermore, every release has **three point releases**, with minor updates to the core and potential upgrades to some Linux Mint applications.

Now, coming to its **naming scheme**:

Every Linux Mint release, be it minor or major, has a codename. Usually, it is a female name, normally of Greek or Latin origin.

Like Ubuntu, there is a pattern in the codenames as well. The codenames are in alphabetically increasing order for the major releases. For point releases, you get a new name starting with the same letter.

For example, Mint 20 was called **Ulyana**, with 20.1 as **Ulyssa**, 20.2 as **Uma**, and 20.3 as **Una**. Similarly, the Mint 19 series had codenames starting with T.

At the time of writing this, the codename of Mint 21 (the latest release) starts with **V**, and the first release of the 21 series is called **Vanessa**.

There will be at least three more minor releases in the Mint 21 series, and they will be released every six months until the next Mint major release in 2024. They all will have a codename starting with the letter V.

### Keep it Minty

I hope this article clears any confusion with Linux Mint upgrades and educates you more about the release and update cycle of Linux Mint.

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-mint-release-cycle/

作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/long-term-support-lts/
[2]: https://itsfoss.com/which-ubuntu-install/
[3]: https://itsfoss.com/upgrade-linux-mint-version/

@ -0,0 +1,57 @@

[#]: subject: "My open source journey from user to contributor to CTO"
[#]: via: "https://opensource.com/article/22/8/my-open-source-career-story"
[#]: author: "Jesse White https://opensource.com/users/jwhite-0"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

My open source journey from user to contributor to CTO
======
The possibilities are endless for anyone thinking about a career in open source. Here's my story.

When people ask me what I love most about open source, my answer is simple: It's the *openness*. With open source, the work that community developers and contributors do is in the public domain for all to see and benefit from. I couldn't love that philosophy more.

How many people can say that about the fruits of their labor? How many, perhaps 50 years from now, can look back and say, "Check out the code I wrote that day that hundreds/thousands/tens of thousands benefited from." I find that infinitely more exciting than working on software that's hidden from most of the world.

I'm fortunate that my job puts me in the middle of an interesting area where open source and enterprise meet. Today, I'm Chief Technology Officer of [The OpenNMS Group][2], the company that maintains the [OpenNMS project][3]. OpenNMS is a leading open source network monitoring and management platform.

While my current role has me firmly rooted in open source, I started as a user and contributor.

In 2007, I got my first real tech job as a network analyst at Datavalet Technologies, a Montreal, Canada-based telecommunications service provider. Within five years, I expanded to a solutions architect role, where I was tasked with helping to select a network management solution for the organization. We chose OpenNMS, and it was through that experience that I realized the true power of open source.

While onboarding the platform, we identified some missing features that would help optimize our experience. A representative from The OpenNMS Group was on site to help us with the deployment and suggested I attend the community's upcoming DevJam to work with the core developers on building the capabilities that we needed.

During that DevJam, I quickly settled in alongside the team and community. We rolled up our sleeves and started coding to create the enhancements Datavalet needed. Within days, the additional features were ready. It was amazing and transformative—this experience really opened my eyes to the power of open source.

I left my job a year later to study math full-time at Concordia University. It was there that I once again had the opportunity to collaborate with The OpenNMS Group, this time on a project for that year's Google Summer of Code. In this annual program, participants aim to successfully complete open source software development projects.

Summer of Code turned out to be a career-changing experience for me—two of the organization's leaders attended our project demo, and a year later, The OpenNMS Group team asked me to come on board as a full-stack developer.

I worked hard, quickly rose through the ranks, and was named CTO in 2015. I consider this a personal achievement and another validation of what makes the open source world so special—if you enjoy working with the community and love what you do, your contributions are quickly recognized.

The open source ethos also informed my evolution from individual contributor to CTO, where I now lead a product development organization of more than 50 people. The community is inherently egalitarian, and my experience working with community contributors has taught me to lead with context rather than control.

I've had an amazing open source ride, from user to contributor to an executive at an open source company. The open source approach goes beyond the tech, as the barriers to entry and growth often found in proprietary development environments can be overcome through collaboration, transparency, and community. For that reason, the possibilities are endless for anyone thinking about a career in open source. I'm proof of that.

We live in a time when people are deeply examining their lives and the impact they have on the world. Working in an open source company is especially rewarding because I can interact directly with and influence the user community. The typical guardrails between the end user and developer are broken down, and I can see exactly how my work can change someone's daily life or inspire someone to contribute to a project. Building community through a mutual love for a project creates connections that can last a lifetime.

I know this has all been true for me, and it's why I am so passionate about my work. I'm an open source geek to the core and proud of it.

--------------------------------------------------------------------------------

via: https://opensource.com/article/22/8/my-open-source-career-story

作者:[Jesse White][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jwhite-0
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/career_journey_road_gps_path_map_520.png
[2]: https://www.opennms.com/
[3]: https://www.opennms.com/

@ -0,0 +1,111 @@

[#]: subject: "Why Companies Need to Set Up an Open Source Program Office"
[#]: via: "https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/"
[#]: author: "Sakshi Sharma https://www.opensourceforu.com/author/sakshi-sharma/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Why Companies Need to Set Up an Open Source Program Office
======
*Managing the use of open source software and decreasing compliance risks is key to the success of any software product. An open source program office can help an organisation do just that. Find out how.*

Open source software (OSS) is integral to building a modern software solution. Be it an internal or a customer-facing solution, organisations rely significantly on open source software today. OSS components are governed by their unique licence terms, and non-compliance with these can often expose organisations to security and intellectual property (IP) risks, which eventually may hamper a company’s brand value.

When development teams are delivering a software release, they are primarily trying to meet project deadlines. Therefore, the tracking of versions of components and libraries, or the third party code pulled into the project, is not as rigorous as it should be. This means that problematic licences and vulnerable OSS components can enter the code base and be delivered to customers. This can be risky for both the customer and the company delivering the software solution.

Another increasingly challenging area is that of developers contributing to open source projects. Companies can reap numerous benefits if they do so. This includes keeping skills current, retention of staff, attracting developers to work for the organisation, and improving the image of the company. Many open source projects require developers to sign a contributor licence agreement. This states that any IP created by the developer belongs to the project and not to the contributing developer. In this scenario, organisations need to be careful that IP and trade secrets that are not open source are not being signed over to open source projects.

Developers need to be educated about open source licensing issues, determining what to leverage, when or how much they can contribute to the community, and what packages might bring risk to the organisation’s reputation. All this can be streamlined by putting a strategic policy and operations in place. One way of doing this is by creating an entity that is dedicated to working around all things open source—an entity called the open source program office (OSPO).

An OSPO creates an ecosystem for employees to use open source software in a way that compliance risks are kept at bay. The role of an OSPO is not limited to supervising open source usage; it is also responsible for contributing back to the community and managing the company’s growth in the market by actively engaging in events, as well as conducting webinars and campaigns.

In this article we will see why there is a need for building an OSPO, and how it has emerged as a prominent entity for any open source policy and governance programme.

### Why should you have an OSPO?

With the wide use of open source software, regulating its usage and keeping the compliance strategy in check can often be overwhelming for the teams involved in the product development cycle.

Developers often overlook licence obligations, and sometimes the management or stakeholders are also not fully aware of the implications of non-compliance with these open source licences. An OSPO handles open source software right from its onboarding till the time it is delivered to the end user, and everything in between, irrespective of whether it is being used for internal or external purposes.

An OSPO builds a solid foundation by starting compliance and regulatory checks early in the software development life cycle. This usually begins by guiding and aligning the involved team members towards a common path that benefits the organisation’s values. The OSPO puts in place policies and processes around open source usage and governs the roles and responsibilities across the company.

To conclude, it aligns the efforts of all relevant teams involved in building the product and helps increase the organisation’s capacity for better and effective use of open source.

| The rise of the OSPO |
| :- |
| Companies like Microsoft, Google and Netflix have well established OSPOs within their organisations. Many others, like Porsche and Spotify, are building their own OSPOs to leverage the usage of open source in an efficient way. |

Here is what leaders from renowned companies have to say about OSPO practices.

* “As a business, it’s a culture change,” explains Jeff McAffer, who ran Microsoft’s Open Source Program Office for years and is now a director of products at GitHub focused on promoting open source in enterprises. “Many companies, they’re not used to collaboration. They’re not used to engaging with teams outside of their company.”
* “Engineering, business, and legal stakeholders each have their own goals and roles, oftentimes making trade-offs between speed, quality, and risk,” explains Remy DeCausemaker, head of open source at Spotify. “An OSPO works to balance and connect these individual goals into a holistic strategy that reduces friction.”
* Gil Yahuda, Verizon Media’s OSPO leader, states, “We seek to create a working environment that talent wants to be part of. Our engineers know that they work in an open source friendly environment where they are supported and encouraged to work with the open source communities that are relevant to their work.”

![Figure 1: OSPO prevalence by industry 2018-2021 (Source: https://github.com/todogroup/osposurvey/tree/master/2021)][1]

### The function of an OSPO

The function of an OSPO may vary from organisation to organisation, depending on the number of its employees and the number of people that are part of the OSPO team. Another factor is the purpose of using open source. An organisation may only want to use open source software for building the product, or may also look at contributing back to the community.

Evaluating factors such as which open source licences are appropriate, or whether full-time employees should be contributing to an open source project, may be part of the OSPO’s role. Putting a contributor licence agreement (CLA) in place for developers that are willing to contribute, and determining what open source components will help in accelerating a product’s growth and quality, are some other roles of an OSPO.

Some of the key functions of an OSPO involve:

* Putting an open source compliance and governance policy in place to mitigate intellectual property risks to the organisation
* Educating developers towards better decision-making
* Defining policies that lay out the requirements and rules for working with open source across the company
* Monitoring the usage of open source software inside as well as outside the organisation
* Conducting meetings after every software release to discuss what went well and what could be done better with the OSS compliance process
* Accelerating the software development life cycle (SDLC)
* Ensuring transparency and coordination amongst different departments
* Streamlining processes to help mitigate risks at an early stage
* Encouraging members to contribute upstream to gain the collaborative and innovative benefits of open source projects
* Producing a report with suitable remediation and recommendations for the product team
* Preparing compliance artifacts and ensuring licence obligations are fulfilled

### Building an OSPO

The OSPO is typically staffed with personnel from multiple departments within the company. The process involves training and educating the relevant departments regarding open source compliance basics and the risks involved in its usage. It may provide legal and technical support services so that the open source requirement goals are met.

An OSPO may be formed by the following people within the organisation (this is a non-exhaustive list):

* Principal/Chief: This role can be taken by the flag bearer, the one who runs the OSPO. The chief knows the various aspects of using open source, like the effect of using different components, licence implications, development and contributing to the community. These requirements are entirely dependent on an organisation’s needs.
* Program manager: The program manager sets the requirements and objectives for the target solution. He/she works alongside the product and engineering teams to connect workflows. This includes ensuring that policies and tools are implemented in a developer-friendly manner.
* Legal support: Legal support can come from outside the firm or in-house, but is an important part of an OSPO. The legal role works closely with the program manager to define policies that govern OSS use, including which open source licences are allowed for each product, how to (or whether to) contribute to existing open source projects, and so on.
* Product and engineering teams/developers: The engineering team should be well-versed with open source licences and their associated risks. The team must seek approval from the OSPO before consuming any open source component. The team may have to be trained with respect to open source compliance basics and its usage at regular intervals.
* CTOs/CIOs/stakeholders: A company’s leadership has a huge impact on the OSPO strategies. The stakeholders have a great say in the decision making process for any product/solution’s delivery. Due to the nature of the OSPO’s function within a company, the VP of engineering, CTO/CIO, or chief compliance/risk officer must get involved in the OSPO.
* IT teams: Having support from the IT department is very important. An OSPO may be tasked with implementing internal tools to improve developer efficiency, monitor open source compliance, or dictate open source security measures. IT teams are key in helping to connect workflows, and ensure policies are implemented in a developer-friendly manner.

In the 2021 State of OSPO Survey conducted by the TODO Group, the key findings were:

* There are many opportunities to educate companies about how OSPOs can benefit them.
* OSPOs had a positive impact on their sponsor’s software practices, but their benefits differed depending on the size of an organisation.
* Companies that intended to start an OSPO hoped it would increase innovation, but setting a strategy and a budget remained top challenges to their goals.
* Almost half of the survey participants without an OSPO believed it would help their company, but of those that didn’t think it would help, 35 per cent said they haven’t even considered it.
* 27 per cent of survey participants said a company’s open source participation is very influential in their organisation’s buying decisions.

The use of open source software when building any software solution is almost inevitable today. However, the open source licence risks cannot be overlooked. What is needed is a strategic streamlining process that helps combat the compliance issues that come in the way of using open source components effectively.

An OSPO helps set a regulatory culture by building a centralised, dedicated team that educates employees and raises awareness regarding everything related to open source usage in an organisation. An OSPO can also work as a guide to fetch top talent from the industry, which will eventually be a boon for business goals.

--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/why-companies-need-to-set-up-an-open-source-program-office/

作者:[Sakshi Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/sakshi-sharma/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/07/Figure-1-OSPO-prevalence-by-industry-2018-2021-2.jpg

@ -1,411 +0,0 @@

[#]: subject: "How to Upgrade to Linux Mint 21 [Step by Step Tutorial]"
[#]: via: "https://itsfoss.com/upgrade-linux-mint-version/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

How to Upgrade to Linux Mint 21 [Step by Step Tutorial]
======
This is a regularly updated guide for upgrading an existing Linux Mint install to a new available version.

There are three sections in this article that show the steps for upgrading between various major versions of Linux Mint:

* Section 1 is about upgrading to Mint 21 from Mint 20.3 (GUI upgrade tool)
* Section 2 is about upgrading to Mint 20 from Mint 19.3 (command-line based upgrader)
* Section 3 is about upgrading to Mint 19 from Mint 18.3 (if someone is still using it)

You can follow the appropriate steps based on your current Mint version and requirement.

The guide has been updated with the steps for upgrading to Linux Mint 21 from Mint 20.3. Linux Mint now has a GUI tool to upgrade to the latest version.

### Things to know before you upgrade to Linux Mint 21

Before you go on upgrading to Linux Mint 21, you should consider the following:

* Do you really need to upgrade? Linux Mint 20.x is supported for several more years.
* You’ll need a fast internet connection to download upgrades of around 1.4 GB.
* It may take a couple of hours to complete the upgrade procedure, based on your internet speed. You must have patience.
* It is a good idea to make a live USB of Linux Mint 21 and try it in a live session to see if it is compatible with your hardware. Newer kernels might have issues with older hardware, so testing before the real upgrade or install can save you a lot of frustration.
* A fresh installation is always better than a major version upgrade, but installing Linux Mint 21 from scratch would mean losing your existing data. You must take a backup on an external disk.
* Though upgrades are mostly safe, they are not 100% failproof. You must have system snapshots and proper backups.
* You can upgrade to Linux Mint 21 only from Linux Mint 20.3 Cinnamon, Xfce and MATE. [Check your Linux Mint version][1] first. If you are using Linux Mint 20.2 or 20.1, you need to upgrade to 20.3 first from the Update Manager. If you are using Linux Mint 19, I advise you to go for a fresh installation rather than upgrading through several Mint versions.

Once you know what you will do, let’s see how to upgrade to Linux Mint 21.

### Upgrading to Linux Mint 21 from 20.3

Check your Linux Mint version and ensure that you are using Mint 20.3. You cannot upgrade to Mint 21 from Mint 20.1 or 20.2.
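
If you are unsure which release you are running, the release files under `/etc` can tell you. A minimal sketch (the Mint-specific file only exists on Linux Mint; `/etc/os-release` is the portable fallback):

```shell
# Portable check: /etc/os-release exists on virtually every modern distribution
grep PRETTY_NAME /etc/os-release

# Mint-specific check: /etc/linuxmint/info records the exact point release,
# e.g. RELEASE=20.3 (skip silently on non-Mint systems)
[ -f /etc/linuxmint/info ] && grep RELEASE /etc/linuxmint/info || true
```

On Mint 20.3 the second command should report `RELEASE=20.3`; anything else means you need an intermediate upgrade first.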
#### Step 1: Update your system by installing any available updates

Launch the Update Manager with Menu -> Administration -> Update Manager. Check if there are any package updates available. If yes, install all the software updates first.

![Check for Pending Software Updates][2]

You may also use this command in the terminal for this step:

```
sudo apt update && sudo apt upgrade -y
```

#### Step 2: Make a backup of your files on an external disk [Optional yet recommended]

Timeshift is a good tool for creating system snapshots, but it’s not the ideal tool for your documents, pictures, and other such non-system, personal files. I advise making a backup on an external disk, just for the sake of data safety.

When I say making a backup on an external disk, I mean simply copying your Pictures, Documents, Downloads, and Videos directories onto an external USB disk.

If you don’t have a disk of that much size, at least copy the most important files you cannot afford to lose.
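
That copy-and-paste step can also be scripted. A minimal sketch, with one loud assumption: `BACKUP_DEST` is a placeholder you must point at your external disk’s mount point (it defaults to a throwaway `/tmp` directory here only so the sketch is safe to run anywhere):

```shell
# BACKUP_DEST is an assumption -- set it to your external disk's mount
# point, e.g. /media/$USER/backup, before relying on this for real backups.
DEST="${BACKUP_DEST:-/tmp/mint-backup}"
mkdir -p "$DEST"

for d in ~/Documents ~/Pictures ~/Downloads ~/Videos; do
    # cp -a preserves permissions, timestamps and subdirectories;
    # the [ -d ] test skips directories that don't exist in this home
    [ -d "$d" ] && cp -a "$d" "$DEST"/
done
true  # the last [ -d ] test may be false; don't report that as a failure
```

If `rsync` is installed, `rsync -av` over the same directories works too, and re-running it later only copies files that changed.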
#### Step 3: Install the upgrade tool

Now that your system is updated, you are ready to upgrade to Linux Mint 21. The Linux Mint team provides a GUI tool called [mintupgrade][3] for upgrading Linux Mint 20.3 to Linux Mint 21.

You can install this tool using the command below:

```
sudo apt install mintupgrade
```

#### Step 4: Run the GUI tool from the terminal

You cannot find the new GUI tool listed in the app menu. To launch it, you need to enter the following command in the terminal:

```
sudo mintupgrade
```

This simple yet comprehensive tool takes you through the upgrade process.

![Mint Upgrade Tool Home Page][4]

After some initial tests, it will prompt for a Timeshift backup. If you already have a backup created, you are good to go.

![Upgrade Tool Prompting No Timeshift Snapshots][5]

Else, you need to [create a backup][6] here, since it is mandatory to continue.

![Taking Snapshot With Timeshift][7]

Some PPAs might already be available for Ubuntu 22.04 and thus for Mint 21. But if a PPA or repository is not available for the new version, it may impact the upgrade procedure with broken dependencies. The upgrade tool will warn you about any such repositories.

![Kazam PPA Does Not Support Jammy][8]

Here, I used the [latest Kazam version][9] through its PPA. That PPA supports releases only up to Impish, hence the error, since Linux Mint 21 is based on Jammy.

You will be given the option to disable the PPAs through Software Sources within the upgrade tool.

![Disable Unsupported PPAs in Software Sources][10]

Once the PPA is disabled, its packages become ‘foreign’ because the installed versions do not match the ones available from the Mint repositories. So you need to downgrade those packages to the versions available in the repositories.

![Downgrade Package to Avoid Conflicts][11]

The upgrade tool now lists the changes that need to be carried out.

![List Changes That Need to be Done][12]

Upon accepting, the tool will start downloading packages.

![Phase 2 – Simulation and Package Download][13]

![Package Downloading][14]

![Upgrading Phase][15]

It will list orphan packages that can be removed. You can either remove all the suggested packages by pressing the “Fix” button or choose to keep certain packages.

#### Keep Certain Orphan Packages

To keep some packages from the orphan packages list, go to the preferences from the hamburger menu on the top left.

![Selecting Orphan Packages You Want to Keep with Preferences][16]

From the preferences dialog box, go to **Orphan Packages** and use the “plus” symbol to add packages by name.

![Specify Name of the Package to Keep][17]

Once done, it will continue upgrading, and after some time, you will get a notification that the upgrade was successful.

![Upgrade Successful][18]

At this point, you need to reboot your system. Upon rebooting, you will be running the new Linux Mint 21.

![Neofetch Output Linux Mint 21][19]

### How to upgrade to Linux Mint 20

Before you go on upgrading to Linux Mint 20, you should consider the following:

* Do you really need to upgrade? Linux Mint 19.x is supported till 2023.
* If you [have a 32-bit system][20], you cannot install or upgrade to Mint 20.
* You’ll need a fast internet connection to download upgrades of around 1.4 GB in size.
* Based on your internet speed, it may take a couple of hours to complete the upgrade procedure. You must have patience.
* It is a good idea to make a live USB of Linux Mint 20 and try it in a live session to see if it is compatible with your hardware. Newer kernels might have issues with older hardware, so testing before the real upgrade or install can save you a lot of frustration.
* A fresh installation is always better than a major version upgrade, but [installing Linux Mint][21] 20 from scratch would mean you’ll lose your existing data. You must take a backup on an external disk.
* Though upgrades are mostly safe, they are not 100% failproof. You must have system snapshots and proper backups.
* You can upgrade to Linux Mint 20 only from Linux Mint 19.3 Cinnamon, Xfce and MATE. [Check your Linux Mint version][22] first. If you are using Linux Mint 19.2 or 19.1, you need to upgrade to 19.3 first from the Update Manager. If you are using Linux Mint 18, I advise you to go for a fresh installation rather than upgrading through several Mint versions.
* The upgrade process is done via a command line utility. If you don’t like using the terminal and commands, avoid upgrading and go for a fresh installation.

Once you know what you are going to do, let’s see how to upgrade to Linux Mint 20.

![A Video from YouTube][23]

[Subscribe to our YouTube channel for more Linux videos][24]

#### Step 1: Make sure you have a 64-bit system

Linux Mint 20 is a 64-bit only system. If you have 32-bit Mint 19 installed, you cannot upgrade to Linux Mint 20.

In a terminal, use the following command to check whether you are running a 64-bit operating system:

```
dpkg --print-architecture
```

![Mint 20 Upgrade Check Architecture][25]

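
The decision this command's output drives can be folded into a small script. Here is a minimal sketch; the helper name and the sample inputs are assumptions for illustration (on a real system you would feed it `"$(dpkg --print-architecture)"`):

```shell
# Hypothetical helper: decide upgrade eligibility from an
# architecture string such as the one dpkg prints.
arch_ok() {
  case "$1" in
    amd64|x86_64) echo "$1: 64-bit, eligible for Mint 20" ;;
    *)            echo "$1: not 64-bit, cannot upgrade" ;;
  esac
}

# Demonstration with sample values:
arch_ok amd64
arch_ok i386
```

Debian-based systems report `amd64` for 64-bit installs, which is why the helper matches both that and the generic `x86_64` spelling.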
#### Step 2: Update your system by installing any available updates

Launch the Update Manager with Menu -> Administration -> Update Manager. Check if there are any package updates available. If yes, install all the software updates first.

![Check for pending software updates][26]

You may also use this command in the terminal for this step:

```
sudo apt update && sudo apt upgrade -y
```

#### Step 3: Create a system snapshot with Timeshift [Optional yet recommended]

[Creating a system snapshot with Timeshift][27] will save you if your upgrade procedure is interrupted or if you face any other issue. **You can even revert to Mint 19.3 this way**.

Suppose your upgrade fails because of a power interruption or some other reason and you end up with a broken, unusable Linux Mint 19. You can plug in a live Linux Mint USB and run Timeshift from the live environment. It will automatically locate your backup location and allow you to restore your broken Mint 19 system.

This also means that you should keep a live Linux Mint 19 USB handy, especially if you don’t have access to a working computer that you can use to create a live Linux Mint USB in the rare case that the upgrade fails.

![Create a system snapshot in Linux Mint][28]

#### Step 4: Make a backup of your files on an external disk [Optional yet recommended]

Timeshift is a good tool for creating system snapshots, but it’s not the ideal tool for your documents, pictures and other such non-system, personal files. I advise making a backup on an external disk, just for the sake of data safety.

By making a backup on an external disk, I mean simply copying and pasting your Pictures, Documents, Downloads and Videos directories to an external USB disk.

If you don’t have a disk of that size, at least copy the most important files that you cannot afford to lose.

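
The copy-and-paste backup described above can also be scripted. This is a minimal sketch, not a full backup tool; the function name and the destination path are assumptions you would adjust for your own mounted disk (e.g. `/media/user/backup-disk`). The demonstration below uses throwaway temp directories so it is safe to run anywhere:

```shell
# Sketch: copy key home directories to a backup destination.
backup_dirs() {
  src="$1" dest="$2"
  mkdir -p "$dest"
  for dir in Documents Pictures Downloads Videos; do
    # -a preserves permissions and timestamps; missing dirs are skipped
    [ -d "$src/$dir" ] && cp -a "$src/$dir" "$dest/"
  done
}

# Demonstration with a throwaway source tree instead of $HOME:
SRC="$(mktemp -d)"; DEST="$(mktemp -d)"
mkdir -p "$SRC/Documents"
echo "important notes" > "$SRC/Documents/notes.txt"
backup_dirs "$SRC" "$DEST"
cat "$DEST/Documents/notes.txt"
```

On your real system you would call it as `backup_dirs "$HOME" /media/user/backup-disk`.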
#### Step 5: Disable PPAs and third-party repositories [Optional yet recommended]

It’s natural that you might have installed applications using some [PPA][29] or other repositories.

Some PPAs might already be available for Ubuntu 20.04 and thus for Mint 20. But if a PPA or repository is not available for the new version, it may impact the upgrade procedure with broken dependencies.

For this reason, it is advised that you disable the PPAs and third-party repositories. You may also delete the applications installed via such external sources, if that is okay with you and doesn’t result in loss of configuration data.

In the Software Sources tool, disable the additional repositories and PPAs.

![Disable Ppa Mint Upgrade][30]

You should also **downgrade and then remove foreign packages** available in the maintenance tab.

For example, I installed Shutter using a PPA. I disabled its PPA. Now the package becomes ‘foreign’ because the version available on my system doesn’t match the one in the Mint repositories.

![Foreign Package Linux Mint][31]

#### Step 6: Install the upgrade tool

Now that your system is updated, you are ready to upgrade to Linux Mint 20. The Linux Mint team provides a command line tool called [mintupgrade][32] for the sole purpose of upgrading Linux Mint 19.3 to Linux Mint 20.

You can install this tool using the command below:

```
sudo apt install mintupgrade
```

#### Step 7: Run an upgrade sanity check

The mintupgrade tool lets you run a sanity check by simulating the initial part of the upgrade.

You can run this check to see what kind of changes will be made to your system and which packages will be upgraded. It will also show the packages that cannot be upgraded and must be removed.

```
mintupgrade check
```

There won’t be any real changes on your system yet (even if it feels like it is going to make some changes).

This step is important and helpful in determining whether your system can be upgraded to Mint 20 or not.

![Mint Upgrade Check][33]

If this step fails halfway through, type **mintupgrade restore-sources** to go back to your original APT configuration.

#### Step 8: Download package upgrades

Once you are comfortable with the output of mintupgrade check, you can download the Mint 20 upgrade packages.

Depending on your internet connection, it may take some time to download these upgrades. Make sure your system is connected to a power source.

While the packages are being downloaded, you can continue using your system for regular work.

```
mintupgrade download
```

![Mint 20 Upgrade Download][34]

Note that this command points your system to the Linux Mint 20 repositories. If you want to go back to Linux Mint 19.3 after using this command, you can still do that with the command “**mintupgrade restore-sources**“.

#### Step 9: Install the Upgrades [Point of no return]

Now that you have everything ready, you can upgrade to Linux Mint 20 using this command:

```
mintupgrade upgrade
```

Give it some time to install the new packages and upgrade your Mint to the newer version. Once the procedure finishes, it will ask you to reboot.

![Linux Mint 20 Upgrade Finish][35]

#### Enjoy Linux Mint 20

Once you reboot your system, you’ll see the Mint 20 welcome screen. Enjoy the new version.

![Welcome To Linux Mint 20][36]

### Upgrading to Mint 19 from Mint 18

The steps for upgrading to Linux Mint 19 from 18.3 are pretty much the same as the steps you saw for Mint 20. The only change is the check for the display manager.

I’ll quickly mention the steps here. If you want more details, you can refer to the Mint 20 upgrade procedure above.

**Step 1:** Create a system snapshot with Timeshift [Optional yet recommended]

**Step 2:** Make a backup of your files on an external disk [Optional yet recommended]

**Step 3: Make sure you are using LightDM**

You must use the [LightDM display manager][37] for Mint 19. To check which display manager you are using, type the command:

```
cat /etc/X11/default-display-manager
```

If the result is “/usr/sbin/**lightdm**“, you have LightDM and you are good to go.

![LightDM Display Manager in Linux Mint][38]

On the other hand, if the result is “/usr/sbin/**mdm**“, you need to install LightDM, [switch to LightDM][39] and remove MDM. Use this command to install LightDM:

```
apt install lightdm lightdm-settings slick-greeter
```

While installing, it will ask you to choose the display manager. You need to select LightDM.

Once you have set LightDM as your display manager, remove MDM and reboot using these commands:

```
apt remove --purge mdm mint-mdm-themes*
sudo dpkg-reconfigure lightdm
sudo reboot
```

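
The display-manager check above is easy to automate. A minimal sketch follows; the helper name and the sample paths fed to it are assumptions for illustration (on a real system you would pass `"$(cat /etc/X11/default-display-manager)"`):

```shell
# Hypothetical helper: classify the display manager from the path
# stored in /etc/X11/default-display-manager.
dm_check() {
  case "$1" in
    */lightdm) echo "LightDM detected: good to go" ;;
    */mdm)     echo "MDM detected: install and switch to LightDM first" ;;
    *)         echo "unknown display manager: $1" ;;
  esac
}

# Demonstration with the two values the article discusses:
dm_check /usr/sbin/lightdm
dm_check /usr/sbin/mdm
```

The trailing-pattern match (`*/lightdm`) keys on the binary name only, so the check works regardless of the install prefix.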
**Step 4: Update your system by installing any available updates**

```
sudo apt update && sudo apt upgrade -y
```

**Step 5: Install the upgrade tool**

```
sudo apt install mintupgrade
```

**Step 6: Check upgrade**

```
mintupgrade check
```

**Step 7: Download package upgrades**

```
mintupgrade download
```

**Step 8: Apply upgrades**

```
mintupgrade upgrade
```

Enjoy Linux Mint 19.

### Did you upgrade to Linux Mint 21?

Upgrading to Linux Mint 20 might not be a friendly experience, but upgrading to Mint 21 is made a lot simpler by the new dedicated GUI upgrade tool.

I hope you find the tutorial helpful. Did you upgrade to Linux Mint 21, or did you opt for a fresh installation?

If you faced any issues or if you have any questions about the upgrade procedure, please feel free to ask in the comment section.

--------------------------------------------------------------------------------

via: https://itsfoss.com/upgrade-linux-mint-version/

作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/check-linux-mint-version/
[2]: https://itsfoss.com/wp-content/uploads/2022/08/check-for-pending-software-updates.png
[3]: https://github.com/linuxmint/mintupgrade/blob/master/usr/bin/mintupgrade
[4]: https://itsfoss.com/wp-content/uploads/2022/08/mint-upgrade-tool-home-page.png
[5]: https://itsfoss.com/wp-content/uploads/2022/08/upgrade-tool-prompting-no-timeshift-snapshots.png
[6]: https://itsfoss.com/backup-restore-linux-timeshift/
[7]: https://itsfoss.com/wp-content/uploads/2022/08/taking-snapshot-with-timeshift.png
[8]: https://itsfoss.com/wp-content/uploads/2022/08/kazam-ppa-does-not-support-jammy.png
[9]: https://itsfoss.com/kazam-screen-recorder/
[10]: https://itsfoss.com/wp-content/uploads/2022/08/disable-unsupported-ppas-in-software-sources.png
[11]: https://itsfoss.com/wp-content/uploads/2022/08/downgrade-package-to-avoid-conflicts.png
[12]: https://itsfoss.com/wp-content/uploads/2022/08/list-changes-that-need-to-be-done.png
[13]: https://itsfoss.com/wp-content/uploads/2022/08/phase-2-simulation-and-package-download-.png
[14]: https://itsfoss.com/wp-content/uploads/2022/08/package-downloading.png
[15]: https://itsfoss.com/wp-content/uploads/2022/08/upgrading-phase.png
[16]: https://itsfoss.com/wp-content/uploads/2022/08/selecting-orphan-packages-you-want-to-keep-with-preferences.png
[17]: https://itsfoss.com/wp-content/uploads/2022/08/specify-name-of-the-package-to-keep.png
[18]: https://itsfoss.com/wp-content/uploads/2022/08/upgrade-successful-800x494.png
[19]: https://itsfoss.com/wp-content/uploads/2022/08/neofetch-output-linux-mint-21.png
[20]: https://itsfoss.com/32-bit-64-bit-ubuntu/
[21]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
[22]: https://itsfoss.com/check-linux-mint-version/
[23]: https://youtu.be/LYnXEaiAjsk
[24]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[25]: https://itsfoss.com/wp-content/uploads/2020/07/mint-20-upgrade-check-architecture.jpg
[26]: https://itsfoss.com/wp-content/uploads/2020/07/update-manager-linux-mint.jpg
[27]: https://itsfoss.com/backup-restore-linux-timeshift/
[28]: https://itsfoss.com/wp-content/uploads/2018/07/snapshot-linux-mint-timeshift.jpeg
[29]: https://itsfoss.com/ppa-guide/
[30]: https://itsfoss.com/wp-content/uploads/2020/07/disable-ppa-mint-upgrade.jpg
[31]: https://itsfoss.com/wp-content/uploads/2020/07/foreign-package-linux-mint.jpg
[32]: https://github.com/linuxmint/mintupgrade/blob/master/usr/bin/mintupgrade
[33]: https://itsfoss.com/wp-content/uploads/2020/07/mint-upgrade-check.jpg
[34]: https://itsfoss.com/wp-content/uploads/2020/07/mint-upgrade-download.jpg
[35]: https://itsfoss.com/wp-content/uploads/2020/07/linux-mint-20-upgrade-finish.jpg
[36]: https://itsfoss.com/wp-content/uploads/2020/07/welcome-to-linux-mint-20.jpg
[37]: https://wiki.archlinux.org/index.php/LightDM
[38]: https://itsfoss.com/wp-content/uploads/2018/07/lightdm-linux-mint.jpeg
[39]: https://itsfoss.com/switch-gdm-and-lightdm-in-ubuntu-14-04/
@ -1,185 +0,0 @@
[#]: subject: "How to List USB Devices Connected to Your Linux System"
[#]: via: "https://itsfoss.com/list-usb-devices-linux/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

How to List USB Devices Connected to Your Linux System
======
How do you list the USB devices in Linux?

The question can have two meanings.

* How many USB ports are (detected) on your system?
* How many USB devices/disks are mounted (plugged in) to the system?

Mostly, people are interested in knowing what USB devices are connected to the system. This may help in troubleshooting USB devices.

The most reliable way is to use this command:

```
lsusb
```

It shows the webcam, Bluetooth, and Ethernet ports along with the USB ports and mounted USB drives.

![list usb with lsusb command linux][1]

But understanding the output of lsusb is not easy, and you may not need to complicate things when you just want to see and access the mounted USB drives.

I will show you various tools and commands you can use to list USB devices connected to your system.

In the examples, I have connected a 2GB pen drive, a 1TB external HDD, an Android smartphone via MTP, and a USB mouse, unless stated otherwise.

Let me start with the simplest of the options for desktop users.

### Check connected USB devices graphically

Your distribution’s file manager can be used to view USB storage devices connected to your computer, as you can see in the screenshot of Nautilus (GNOME Files) below.

The connected devices are shown in the sidebar (only USB storage devices are shown here).

![Nautilus showing connected USB devices][2]

You can also use GUI applications like GNOME Disks or GParted to view, format, and partition the USB storage devices connected to your computer. GNOME Disks is preinstalled in most distributions that use the GNOME desktop environment by default.

This app also works as a very good [partition manager][3].

![Use GNOME Disks to list mounted USB devices][4]

*Enough of the graphical tools.* Let us discuss the commands you can use for listing the USB devices.

### Using the mount command to list the mounted USB devices

The mount command is used for mounting partitions in Linux. You can also list USB storage devices using the same command.

Generally, USB storage is mounted in the media directory. Thus, filtering the output of the mount command for media will give you the desired result:

```
mount | grep media
```

![][5]

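
The filtering step above is just plain text matching on the mount listing, so it can be demonstrated without any USB drive attached. A sketch on sample data follows; the device names and mount points are made up for illustration:

```shell
# Sample lines in the style of `mount` output, piped through
# the same grep filter the article uses.
printf '%s\n' \
  'sysfs on /sys type sysfs (rw,nosuid)' \
  '/dev/sda2 on / type ext4 (rw,relatime)' \
  '/dev/sdb1 on /media/user/PENDRIVE type vfat (rw,nosuid)' |
  grep media
```

Only the `/media/...` line survives the filter, which is exactly what `mount | grep media` does with real output.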
### Using the df command

The [df command][6] is a standard UNIX command used to check the amount of available disk space. You can also use it to list connected USB storage devices with the command below:

```
df -Th | grep media
```

![Use df command to list mounted USB drives][7]

### Using the lsblk command

The lsblk command is used to list block devices in the terminal. Here too, by filtering the output for the media keyword, you can get the desired result, as shown in the screenshot below.

```
lsblk | grep media
```

![Using lsblk to list connected USB devices][8]

If you are more curious, you can use the `blkid` command to find the UUID, label, block size, etc.

This command gives more output, as your internal drives are also listed. So, you have to cross-reference the output of the commands above to identify the device you wish to know about.

```
sudo blkid
```

![Using blkid to list connected USB devices][9]

### Using fdisk

fdisk, the good old command line partition manager, can also list the USB storage devices connected to your computer. The output of this command is very long, so the connected devices usually get listed at the bottom, as shown below.

```
sudo fdisk -l
```

![Use fdisk to list usb devices][10]

### Inspecting /proc/mounts

By inspecting the /proc/mounts file, you can list the USB storage devices. As you can notice, it shows you the mount options being used by the filesystem along with the mount point.

```
cat /proc/mounts | grep media
```

![][11]

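
Each line of `/proc/mounts` has a fixed field order (device, mount point, filesystem type, options, then two dump/pass fields), which makes it easy to parse beyond a simple grep. A sketch on a sample line follows; the device and mount point are made up for illustration:

```shell
# Extract device, mount point and filesystem type for /media mounts
# from /proc/mounts-style input (field 2 is the mount point).
printf '%s\n' \
  '/dev/sda2 / ext4 rw,relatime 0 0' \
  '/dev/sdb1 /media/user/USB vfat rw,nosuid,relatime 0 0' |
  awk '$2 ~ /^\/media/ {print "device:", $1, "at:", $2, "fs:", $3}'
```

On a real system you would replace the `printf` with `cat /proc/mounts` and keep the same awk program.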
### Display all the USB devices with the lsusb command

And we revisit the famed lsusb command.

Linux kernel developer [Greg Kroah-Hartman][12] developed the handy [usbutils][13] utility. It provides two commands, `lsusb` and `usb-devices`, to list USB devices in Linux.

The lsusb command lists all the information about the USB buses in the system.

```
lsusb
```

As you can see, this command also shows the mouse and smartphone I have connected, unlike the other commands (which can list only USB storage devices).

![][14]

The second command, `usb-devices`, gives more details in comparison but fails to list all devices, as shown below.

```
usb-devices
```

![][15]

Greg has also developed a small GTK application called [Usbview][16]. This application shows you the list of all the USB devices connected to your computer.

The application is available in the official repositories of most Linux distributions. You can easily install the `usbview` package using your distribution’s [package manager][17].

Once installed, you can launch it from the application menu. You can select any of the listed devices to get details, as shown in the screenshot below.

![][18]

### Conclusion

Most of the methods listed are limited to USB storage devices. There are only two methods that can also list other peripherals: usbview and usbutils. I guess we have one more reason to be grateful to the Linux kernel developer Greg for developing these handy tools.

I am aware that there are many more ways to list USB devices connected to your system. Your suggestions are welcome.

--------------------------------------------------------------------------------

via: https://itsfoss.com/list-usb-devices-linux/

作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/08/list-usb-with-lsusb-command-linux.png
[2]: https://itsfoss.com/wp-content/uploads/2022/08/nautilus-usb.png
[3]: https://itsfoss.com/partition-managers-linux/
[4]: https://itsfoss.com/wp-content/uploads/2022/08/gnome-disks-usb.png
[5]: https://itsfoss.com/wp-content/uploads/2022/08/mount-cmd-usb.png
[6]: https://linuxhandbook.com/df-command/
[7]: https://itsfoss.com/wp-content/uploads/2022/08/df-cmd-usb.png
[8]: https://itsfoss.com/wp-content/uploads/2022/08/blkid-cmd-usb.png
[9]: https://itsfoss.com/wp-content/uploads/2022/08/blkid-cmd-usb.png
[10]: https://itsfoss.com/wp-content/uploads/2022/08/fdisk-cmd-usb.png
[11]: https://itsfoss.com/wp-content/uploads/2022/08/proc-dir-usb.png
[12]: https://en.wikipedia.org/wiki/Greg_Kroah-Hartman
[13]: https://github.com/gregkh/usbutils
[14]: https://itsfoss.com/wp-content/uploads/2022/08/lsusb-cmd.png
[15]: https://itsfoss.com/wp-content/uploads/2022/08/usb-devices-cmd.png
[16]: https://github.com/gregkh/usbview
[17]: https://itsfoss.com/package-manager/
[18]: https://itsfoss.com/wp-content/uploads/2022/08/usbview.png
@ -0,0 +1,283 @@
[#]: subject: "Become A Pro Flatpak User By Learning These Commands"
[#]: via: "https://www.debugpoint.com/flatpak-commands/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Become A Pro Flatpak User By Learning These Commands
======
In this article, I will show you various Flatpak commands that will make you a pro Flatpak user.

![][1]

Flatpak’s sandboxing technology is the future of Linux app distribution. Almost all significant distributions come with Flatpak pre-installed today, since adopting it is easy and maintaining it is straightforward.

If you use Flatpak every day, you probably know these commands. But if you are still considering moving to Flatpak for every app, then you should go through this list to understand how easy it is to manage Flatpak apps.

To help you do that, I have listed some easy-to-use Flatpak commands for your reference, filtered from the huge command set in the documentation.

### Flatpak Commands Reference

First, let’s talk about some basic commands.

#### 1. Installing Flatpak

Last time I checked, all significant distros come with Flatpak pre-installed today, so you may not need to install it.

However, installing Flatpak is as easy as running the following commands for the two major distro lineups:

```
sudo apt install flatpak // for Ubuntu and related distros
```

```
sudo dnf install flatpak // for Fedora and RPM based distros
```

You may check out our [detailed guide][2] on Flatpak installation if you are running any other distro.

#### 2. Set up a Flatpak Remote

Next, you need to set up a connection to remotes after installation. Remotes are like repositories (think of a PPA) which distribute Flatpak apps.

The primary repo is Flathub, and you can set it up using the following command, which is the same for all distros. After you finish, reboot your system and you are ready to install Flatpak apps.

```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```

**Tip**: If you have a different remote, you may use the same command to add that remote. It’s normal to have multiple remotes set up on a single system.

**Tip**: Also, you can specify the `--user` or `--system` switch to add the Flatpak remote for your user ID only or for the entire system. Note that `remote-add` takes both a remote name and its location:

```
flatpak remote-add --if-not-exists --user flathub https://flathub.org/repo/flathub.flatpakrepo
```

```
flatpak remote-add --if-not-exists --system flathub https://flathub.org/repo/flathub.flatpakrepo
```

#### 3. Installing a Flatpak app from Flathub

Most of the significant GUI-based software stores in Linux allow Flatpak installation by default. For example, if you are using Software (for Ubuntu or Fedora – GNOME), you can find the app and click on the install button.

Or, in KDE Plasma’s Discover:

![KDE Discover can pull up Flatpak apps from Flathub][3]

But the easiest way is to copy the install command from the [Flathub store][4] (available at the bottom of each app info page) and paste it into the terminal. This is the fastest way to install any Flatpak app.

```
flatpak install org.kde.kdenlive
```

#### 4. Running an application

There are two ways to run a Flatpak app you installed. You can either find it in the application menu of your graphical desktop environment, or use the simple run command to launch it.

You can find the run command on the Flathub app page.

```
flatpak run org.kde.kdenlive
```

Now you have learned how to set up, install and run Flatpak apps. It’s time to go a little deeper.

#### 5. Find out the list of Flatpak apps you have installed

Over the years, you may have installed and removed many Flatpak apps. But how can you find out how many Flatpak apps are installed at any given time? Or you might be wondering which Flatpak apps were installed by the system.

Here are some Flatpak commands (to run via the terminal) that can help you in this regard.

* A simple Flatpak command to list all installed apps. This includes both system apps and your apps:

```
flatpak list
```

* A command to display only your apps:

```
flatpak --user list
```

* For a little more detail, you can add columns (such as name, size, etc.) to both of the above commands:

```
flatpak --columns=app,name,size,installation list
```

```
flatpak --columns=name,size --user list
```

![flatpak list command with additional columns][5]

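
The listing commands print one app per line with tab-separated columns, which makes the output easy to post-process with standard text tools. A sketch on sample output follows; the app IDs and sizes below are made up for illustration (real output comes from `flatpak --columns=app,size list`):

```shell
# Count apps and print their IDs from sample tab-separated
# `flatpak --columns=app,size list`-style output.
printf 'org.kde.kdenlive\t120.5 MB\norg.gimp.GIMP\t350.2 MB\n' |
  awk -F'\t' '{n++; print "app:", $1} END {print "total apps:", n}'
```

On a real system you would pipe the actual `flatpak list` output into the same awk program.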
#### 6. Find out more information about an installed app

Now that you have installed an app via the above Flatpak commands, what if you want to find out its architecture, version, branch, licence and other information? You can do that using the `info` switch. This command requires the Flatpak `Application ID`, which you can get via the `flatpak list` command above.

Example:

```
flatpak info org.kde.kdenlive
```

![flatpak info command][6]

#### 7. Find out the entire history of flatpak commands on your system

The history switch of the flatpak command gives you a list of activities that happened on your system, including installs, updates and uninstalls, with date and time stamps. It’s very useful if you are trying to investigate something.

```
flatpak history
```

#### 8. Updating Flatpak apps

The update switch of the flatpak command updates all applications and runtimes. When you run this command, it shows you the available updates and asks for your confirmation to proceed.

```
flatpak update
```

If you want to update a specific application and not the entire system, use the `--app` or `--runtime` switch for applications and runtimes respectively.

For example, if I want to update only kdenlive on my system, I would run the following:

```
flatpak update --app org.kde.kdenlive
```

**Tip**: The update command usually updates to the tip of a program’s branch. However, using the `--commit` switch with the update command, you can move to a specific commit (upgrade or downgrade) in flatpak. For example:

```
flatpak update --app org.kde.kdenlive --commit 37103f4ee56361a73d20cf6957d88f3c3cab802909a5966c27a6e81d69795a15
```

This commit switch is very helpful if you want to play around with several versions of the same app.

![Example of flatpak commands update with commit][7]

#### 9. Managing permissions of flatpak apps

Different applications require a variety of permissions, such as webcam, microphone, screen and so on. Managing these individual permissions via commands is a little overwhelming. Hence, the best way to manage Flatpak permissions is another flatpak app called Flatseal. It gives you a nice GUI with toggle buttons to enable, disable and review the permissions of installed Flatpak apps.

You can read more about [Flatseal here][8].

#### 10. Commands to uninstall Flatpak applications

There are different use cases for uninstalling a flatpak app, so here’s a quick guide.

To uninstall a single application, use the `uninstall` switch with the application ID. For example:

```
flatpak uninstall org.kde.kdenlive
```

To uninstall all apps, use the `--all` switch:

```
flatpak uninstall --all
```

To uninstall unused apps, use the following:

```
flatpak uninstall --unused
```

#### 11. Delete and remove every trace of Flatpak apps

**Use the following commands with extreme caution, since they will delete everything.**

Even if you uninstall a Flatpak app, some app data remains on your system unless you run the uninstall with an additional switch. This is necessary for cases where you want to delete everything and start afresh with Flatpak.

To uninstall and delete the data of a specific app, use the following command. For example:

```
flatpak uninstall -y --delete-data org.kde.kdenlive
```

To uninstall and delete everything related to Flatpak, use the command below:

```
flatpak uninstall --all --delete-data
```

#### 12. Cleanup and disk space usage
|
||||
|
||||
By default Flatpak gets installed in `/var/lib/flatpak`. This directory contains all flatpak related data and metadata plus runtime files. And the user specific installation directory is `~/.local/share/flatpak`.
|
||||
|
||||
You can find out the disk space used by Flatpak apps using the following command.
|
||||
|
||||
```
|
||||
du -h /var/lib/flatpak
|
||||
```
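
To check both the system-wide and the per-user locations in one go, a small sketch; the paths are the defaults mentioned above, and the helper name is made up for illustration:

```shell
#!/bin/bash
# Print the total size of a Flatpak directory, or note that it is absent.
dir_size() {
  if [ -d "$1" ]; then
    du -sh "$1" | cut -f1
  else
    echo "absent"
  fi
}

dir_size /var/lib/flatpak              # system-wide installation
dir_size "$HOME/.local/share/flatpak"  # per-user installation
```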

To clean up, you can use the unused or uninstall commands mentioned above. For details, visit our [flatpak cleanup guide][9].

### Summary

For your ready reference, here's a summary of the Flatpak commands explained above. Bookmark this page for easy reference.

```
# install and run
flatpak install org.kde.kdenlive
flatpak run org.kde.kdenlive

# various ways of listing installed apps
flatpak list
flatpak --user list
flatpak --columns=app,name,size,installation list
flatpak --columns=name,size --user list

# find out app ID and history
flatpak info org.kde.kdenlive
flatpak history

# updating flatpak apps
flatpak update
flatpak update --app org.kde.kdenlive

# uninstalling flatpak apps
flatpak uninstall org.kde.kdenlive
flatpak uninstall --unused

# uninstall everything (use with caution)
flatpak uninstall --all
flatpak uninstall -y --delete-data org.kde.kdenlive
flatpak uninstall --all --delete-data
```

Finally, do let me know in the comment box which Flatpak commands you think should also be included in this list.

*[Some examples via the official reference.][10]*

--------------------------------------------------------------------------------

via: https://www.debugpoint.com/flatpak-commands/

作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/08/fpref-1024x576.jpg
[2]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[3]: https://www.debugpoint.com/?attachment_id=10760
[4]: https://flathub.org/apps
[5]: https://www.debugpoint.com/?attachment_id=10758
[6]: https://www.debugpoint.com/?attachment_id=10757
[7]: https://www.debugpoint.com/wp-content/uploads/2022/08/Example-of-flatpak-commands-update-with-commit-1024x576.jpg
[8]: https://www.debugpoint.com/manage-flatpak-permission-flatseal/
[9]: https://www.debugpoint.com/clean-up-flatpak/
[10]: https://docs.flatpak.org/en/latest/flatpak-command-reference.html
@ -2,7 +2,7 @@
[#]: via: "https://itsfoss.com/blackbox-terminal/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

@ -0,0 +1,147 @@
[#]: subject: "sudo apt update vs upgrade: What’s the Difference?"
[#]: via: "https://itsfoss.com/apt-update-vs-upgrade/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

sudo apt update vs upgrade: What’s the Difference?
======

If you want to keep your Ubuntu or Debian system updated, you use the combination of the **sudo apt update** and **sudo apt upgrade** commands.

Some older tutorials also mention **sudo apt-get update** and **sudo apt-get upgrade**.

Both the apt and apt-get commands work pretty much the same, except for some minor differences that I'll discuss later in this article.

Let's first discuss the difference between update and upgrade. Aren't the two the same thing?

### Difference between apt update and upgrade

Though it sounds like running apt update will give you the latest version of a package, that's not true. The update command only fetches information about the latest versions of the packages available for your system. It doesn't download or install any package. It is the apt upgrade command that actually downloads and upgrades packages to their new versions.

Still confused? Let me explain a bit more. I advise [reading up on the concept of package managers][1]. It will help you understand things even better.

![Linux Package Manager Explanation][2]

Basically, your system works on a database (cache) of available packages. Note that this cache or database doesn't contain the packages themselves, just the metadata (version, repository, dependencies, etc.) about each package.

If you don't update this database, the system won't know whether newer packages are available.

When you run the apt update or apt-get update command, it fetches the updated metadata (package versions etc.) for the packages.

![apt update][3]

Your local package cache has now been updated and there are packages that can be upgraded. You can upgrade all of the (upgradable) packages with sudo apt upgrade.

It shows the packages that are going to be upgraded and asks you to confirm by pressing Enter (for the default choice Y) or the Y key. To cancel the upgrade at this stage, you can press N.

![apt upgrade][4]

If it helps you remember:

* apt update: updates the package cache (to know which package versions can be installed or upgraded)
* apt upgrade: upgrades packages to their new versions

Since these are administrative commands, you need to run them as root, and hence you use sudo with both commands. The sudo part lets you run commands as root on Ubuntu and Debian.

Now that you understand how the update and upgrade combination works, let's discuss the use of apt and apt-get.

### apt or apt-get? Which one should you be using?

Debian and Ubuntu use the APT package management system. Don't confuse it with the apt command.

There are many commands that interact with the APT package management system: apt-get, apt, dpkg, aptitude, etc.

The apt-get command was the most popular of them all. It is a low-level, feature-rich command. apt is a newer and simpler version of apt-get.

You can [read this article to learn about the differences between the apt and apt-get commands][5]. Let me focus on the difference between the update and upgrade options of these commands.

#### apt update vs apt-get update

Both `apt-get update` and `apt update` do the same task of updating the local package cache so that your system is aware of the available package versions.

Technically, there is no difference. However, apt update does one thing better than apt-get update. It **tells you the number of packages that can be upgraded**.

```
Hit:15 https://ppa.launchpadcontent.net/slimbook/slimbook/ubuntu jammy InRelease
Fetched 213 kB in 4s (55.8 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
6 packages can be upgraded. Run 'apt list --upgradable' to see them.
```

apt-get update doesn't even tell you if any package can be upgraded.

![apt get update][6]

![apt update output][7]

You can see the [list of upgradable packages][8] with apt, but apt-get doesn't have this option.

```
[email protected]:~$ apt list --upgradable
Listing... Done
fprintd/jammy-updates 1.94.2-1ubuntu0.22.04.1 amd64 [upgradable from: 1.94.2-1]
gnome-control-center-data/jammy-updates,jammy-updates 1:41.7-0ubuntu0.22.04.4 all [upgradable from: 1:41.7-0ubuntu0.22.04.1]
gnome-control-center-faces/jammy-updates,jammy-updates 1:41.7-0ubuntu0.22.04.4 all [upgradable from: 1:41.7-0ubuntu0.22.04.1]
gnome-control-center/jammy-updates 1:41.7-0ubuntu0.22.04.4 amd64 [upgradable from: 1:41.7-0ubuntu0.22.04.1]
libpam-fprintd/jammy-updates 1.94.2-1ubuntu0.22.04.1 amd64 [upgradable from: 1.94.2-1]
vivaldi-stable/stable 5.4.2753.40-1 amd64 [upgradable from: 5.4.2753.37-1]
```
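
If you want that count in a script, the output above is easy to parse. A small sketch; the helper name is my own, and it simply counts the lines containing "upgradable from":

```shell
#!/bin/bash
# Count upgradable packages from `apt list --upgradable`-style
# output read on stdin (the "Listing... Done" header never matches).
count_upgradable() {
  grep -c 'upgradable from'
}

# Feed it a sample of the output shown above:
count_upgradable <<'EOF'
Listing... Done
fprintd/jammy-updates 1.94.2-1ubuntu0.22.04.1 amd64 [upgradable from: 1.94.2-1]
vivaldi-stable/stable 5.4.2753.40-1 amd64 [upgradable from: 5.4.2753.37-1]
EOF
# → prints 2
```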

Let's compare the upgrade option of both commands.

#### apt upgrade vs apt-get upgrade

Both the apt-get upgrade and apt upgrade commands install newer versions of the upgradable packages based on the data in the local package cache (refreshed by the update command).

However, the apt upgrade command does a couple of things differently than its apt-get counterpart.

The **apt upgrade command can upgrade the Linux kernel version; apt-get upgrade cannot** do that. You need to use [apt-get dist-upgrade][9] to upgrade the kernel version with the apt-get command.

![apt-get upgrade command cannot upgrade Linux kernel version][10]

This is because upgrading the kernel version means installing a completely new package. The apt-get upgrade command cannot install a new package; it can only upgrade existing packages.

Another small thing that apt upgrade does better than apt-get upgrade is **showing a progress bar** at the bottom.

![apt upgrade progress bar][11]

### Conclusion

The words update and upgrade are similar, and this is why they confuse a lot of new users. At times, I think the apt update command should be merged with the apt upgrade command.

I mean, the upgrade (of installed package versions) works in conjunction with the update (of the local package metadata cache). Why have two separate commands for that? Combine them into a single upgrade command. This is what Fedora has done with the DNF command. That's just my opinion.

I hope this article cleared some air around the usage of the apt-get update, apt-get upgrade, apt update, and apt upgrade commands.

Do let me know if you have any questions.

--------------------------------------------------------------------------------

via: https://itsfoss.com/apt-update-vs-upgrade/

作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/package-manager/
[2]: https://itsfoss.com/wp-content/uploads/2020/10/linux-package-manager-explanation.png
[3]: https://itsfoss.com/wp-content/uploads/2022/08/apt-update.png
[4]: https://itsfoss.com/wp-content/uploads/2022/08/apt-upgrade.png
[5]: https://itsfoss.com/apt-get-upgrade-vs-dist-upgrade/
[6]: https://itsfoss.com/wp-content/uploads/2022/08/apt-get-update.png
[7]: https://itsfoss.com/wp-content/uploads/2022/08/apt-update-output.png
[8]: https://itsfoss.com/apt-list-upgradable/
[9]: https://itsfoss.com/apt-get-upgrade-vs-dist-upgrade/
[10]: https://itsfoss.com/wp-content/uploads/2022/08/apt-get-upgrade.png
[11]: https://itsfoss.com/wp-content/uploads/2022/08/apt-upgrade-progress-bar.png
@ -0,0 +1,322 @@
[#]: subject: "15 Ways to Tweak Nemo File Manager in Linux to Get More Out of it"
[#]: via: "https://itsfoss.com/nemo-tweaks/"
[#]: author: "sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

15 Ways to Tweak Nemo File Manager in Linux to Get More Out of it
======
Nemo is the default file manager of the Cinnamon desktop. You get it in Linux Mint and other distributions with the Cinnamon desktop.

It's a powerful file manager with plenty of features you might not know about. Some tweaks are hidden inside the Nemo settings, while others require installing additional extension packages.

I have included the commands for installing the extensions on Ubuntu and Debian-based distributions.

**Note: Please don't go and install all the extensions. Only use the ones you need.**

### 1. Enable quick file preview

Nemo Preview is a cool feature that comes in handy if you want to peek into some files on the go. You can access the preview feature for images, audio, video, PDFs, etc.

It also allows scrolling through documents in preview mode and adds a floating control with a seek bar in audio/video previews.

![File Preview in Nemo File Manager With Nemo Preview][1]

You can get the preview feature by installing the following extension:

```
sudo apt install nemo-preview
```

Once installed, you may need to restart the Nemo file manager.

To activate the preview, **select the file and press the Space key**. Pressing the Space key again closes the preview.

### 2. Click twice to rename

This is one of the iconic features of the Nemo file manager, which is already offered in KDE's Dolphin file manager but absent in GNOME's Nautilus.

To enable this setting, go to Edit > Preferences > Behavior and toggle the option as shown below:

![Click on File Name Twice to Rename It][2]

Once done, you can click twice on a file or folder and an inline rename option appears for the respective selection.

### 3. Bulk rename files

Nemo also offers a bulk rename feature that many Linux users are not aware of.

What you have to do is select the files and choose **Rename** from the right-click menu. You'll get different kinds of options to tweak the names of the selected group of files.

![Nemo File Manager Bulk Rename][3]

You can find and replace, or remove certain parts of the names, among many other things.

### 4. Double-click anywhere to go to the parent folder

This is rather an accessibility setting. Instead of pressing the back button or clicking on the places tree, you can simply double-click anywhere in the empty space of the window to go to the parent folder.

To enable this feature, go to Edit > Preferences > Behavior and toggle on the option as shown in the screenshot below.

![Double Click on Blank Area to go to Parent Folder][4]

### 5. Compress files and folders

This is not a secret, really. Almost all file managers have this option as far as I know.

Right-click on a file or folder and you get the Compress option to create an archive file.

![Compress Option in Right Click Context Menu][5]

You can choose between formats from .7z, .tar, and .zip to .apk, .epub, etc. Some formats, like epub, require their own defined content layout to succeed.

![Compress Options][6]

Some compression formats support password protection, encryption, and splitting, as shown in the above screenshot.

If you don't find this option, you can install the nemo-fileroller package:

```
sudo apt install nemo-fileroller
```

### 6. Configure the right-click context menu

By default, there are many options in the right-click context menu. If you are one of those users who want to control what appears in the right-click menu, this is the feature for you.

You can access this setting from Edit > Preferences > Context Menus:

![Configure Right Click Context Menu][7]

Here you can toggle on or off the various options you want to appear when you right-click anywhere. You can now populate your right-click menu with the features you use frequently.

### 7. Rotate and resize images with right click

To enable this feature, you need to install the nemo-image-converter package.

```
sudo apt install nemo-image-converter
```

Restart Nemo and you can access the additional options right within the right-click context menu.

![Rotate or Resize Images in Nemo File Manager][8]

### 8. Change folder colours and add emblems

The feature to change folder colour came preinstalled on my Linux Mint 21. To change an individual folder's colour, right-click on the folder and change the colour from the context menu.

![Change Individual Folder Color][9]

If you don't see it, you can install the extension:

```
sudo apt install folder-color-switcher
```

Another cool feature is adding emblems to files and folders. To give an emblem to a file or folder, right-click and go to the Properties dialog box.

From there, select the Emblems tab and add whatever emblem you like.

![Select Emblems for Files or Folders][10]

If it's not installed by default, you can install it with:

```
sudo apt install nemo-emblems
```

### 9. Verify checksum of files

There are dedicated tools to [verify the checksum of files in Linux][11]. You can also check hashes in the Nemo file manager with the nemo-gtkhash extension.

```
sudo apt install nemo-gtkhash
```

Now quit Nemo and re-open it. Select the file whose hash you want to check and go to the **Digests** tab in its properties.

![Check Hash Checksum of File with Nemo GTKHash][12]

It takes some time to check the hash; a tick mark, as shown in the above screenshot, indicates a successful result.

### 10. Use advanced permissions in the properties dialog box

You can view a more detailed and intuitive permissions dialog box for folders and files. To get this, go to Edit > Preferences > Display and toggle the button on as shown below:

![Show Advanced Permission in Property Dialog Box][13]

Now, instead of the old drop-down menu interface, you get a neat-looking permission manager with a toggle-button interface and more options to tweak.

![Edit Advanced Permissions in Property Dialog Box][14]

### 11. Embed a terminal

Fancy a terminal? You can get one right inside the Nemo file manager.

Each time you change directories, a cd command is issued so that the location in the embedded terminal changes as well.

To get this function, you need to install the nemo-terminal package.

```
sudo apt install nemo-terminal
```

Now restart Nemo and you get an embedded terminal at the top.

![Nemo Embedded Terminal][15]

### 12. Get the list of recently visited directories

There is a "Recent" option in the places section, where you can see recently accessed files. But what about recently visited folders?

In Nemo, at the top left, **right-click on the back arrow** to get the list of previously visited folders.

![Right Click on Top Left Back Arrow to Access Recent Folders][16]

### 13. Show the number of items in folders

You can show how many files and folders are inside a folder in the Nemo file manager.

![Show Number of Items Inside Folder Using Nemo File Manager][17]

It is a built-in feature. Go to Edit > Preferences > Display and select Size as shown in the screenshot below:

![Show Folder Item Count and File Sizes in Nemo Preferences][18]

### 14. Nemo media columns

This is a small addition, useful only if you use the 'List View' in Nemo. It provides additional column options in the list view.

![default list columns available in nemo][19]

![more media columns added to nemo list view][20]

To get this feature, you need to install nemo-media-columns:

```
sudo apt install nemo-media-columns
```

![More Columns View in Nemo List View][21]

### 15. Nemo Scripts and Actions (for expert users)

Here are a few advanced features that enhance the overall functionality of the Nemo file manager by adding user-defined functions.

#### Nemo Scripts

With this feature, users can create their own shell scripts for whatever functionality they wish and embed them in the right-click context menu.

You need to save your shell scripts in the ~/.local/share/nemo/scripts directory. With the help of tools like [zenity][22], you can even give your script a GTK interface.

Let me show an example.

Below is a script that opens a colour palette to select a colour and copies it to the [copyq clipboard manager][23]. Save the file with the name Color in the above-mentioned directory and give it executable permission. CopyQ and Zenity should be installed.

```
#!/bin/bash
name=$(zenity --color-selection --show-palette --title Color\ Select)
copyq add $name
```
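
Scripts can also act on the files you have selected. Nemo passes the selection to the script through an environment variable (to my knowledge, NEMO_SCRIPT_SELECTED_FILE_PATHS, newline-separated). Here is a sketch under that assumption; the function name and output filename are my own choices:

```shell
#!/bin/bash
# Nemo script sketch: archive the files selected in Nemo into a tarball.
# Assumes Nemo sets NEMO_SCRIPT_SELECTED_FILE_PATHS (newline-separated).
archive_selection() {
  local out="$1"
  local files=()
  local line
  # Read the newline-separated paths into an array, skipping blanks.
  while IFS= read -r line; do
    [ -n "$line" ] && files+=("$line")
  done <<< "$NEMO_SCRIPT_SELECTED_FILE_PATHS"
  tar -czf "$out" -- "${files[@]}"
}

# Only run when invoked from Nemo with a selection.
if [ -n "$NEMO_SCRIPT_SELECTED_FILE_PATHS" ]; then
  archive_selection "$HOME/selection.tar.gz"
fi
```

Save it (executable) in ~/.local/share/nemo/scripts like the Color example above, then select some files and run it from the Scripts submenu.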

![Nemo Scripts in Right Click Context][24]

![Color Select with Zenity][25]

The selected colour code will now be accessible from the clipboard.

#### Nemo Actions

This is similar to Nemo Scripts. Here, you define a script in the form of key-value pairs for additional functions on selected files.

The files should have the extension `.nemo_action` and should be located in `~/.local/share/nemo/actions`.

Here is a snippet of code provided by the Linux Mint community. It creates an option to reduce an image's size by 50%.

Save this script as reduce_50.nemo_action in the above-mentioned directory and you will find the option in the right-click context menu.

```
[Nemo Action]
Active=true
Name=Reduce Image 50%
Comment=Reduce the size of the image by 50%
Exec=ffmpeg -i %F -vf scale=iw/2:-1 copy-50%f
Icon-Name=image
Selection=any;
Extensions=jpg;jpeg;png;bmp;gif;tiff;raw;
Terminal=true
```

![Reduce Image by 50 Percent Context Menu Entry][26]

You can see the resulting file with its slightly modified name.

![Image Reduced with Nemo Actions Result][27]

This way, you can effectively enhance the Nemo file manager's functionality as per your requirements.

### More tweaks and extensions

Apart from the numerous extensions, there are other built-in features in Nemo, like integrations with cloud services, other handy right-click menu items, etc.

It is not necessary to install and use all of the features mentioned above. You can handpick those that suit your needs.

You can also **toggle on/off any of the installed extensions** by going to Edit > Plugins (or pressing Alt + P).

![Access Plugins from Menu][28]

Here you can manage your installed plugins, actions, scripts, etc. This lets you activate or deactivate certain features without the hassle of installing or uninstalling packages. Every feature can be toggled on or off as needed. Just restart Nemo for the change to take effect.

![Plugins View and Manage in Nemo][29]

When we published the [Nautilus tweaks article][30], a few readers requested a similar one for Nemo. And hence this article came into existence.

I hope you find the tweaks interesting. If you have suggestions or questions, please leave a comment.

--------------------------------------------------------------------------------

via: https://itsfoss.com/nemo-tweaks/

作者:[sreenath][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/08/file-preview-in-nemo-file-manager-with-nemo-preview.png
[2]: https://itsfoss.com/wp-content/uploads/2022/08/click-on-file-name-twice-to-rename-it.png
[3]: https://itsfoss.com/wp-content/uploads/2022/08/nemo-file-manager-bulk-rename.png
[4]: https://itsfoss.com/wp-content/uploads/2022/08/double-click-on-blank-area-to-go-to-parent-folder.png
[5]: https://itsfoss.com/wp-content/uploads/2022/08/compress-option-in-right-click-context-menu.png
[6]: https://itsfoss.com/wp-content/uploads/2022/08/compress-options.png
[7]: https://itsfoss.com/wp-content/uploads/2022/08/configure-right-click-context-menu.png
[8]: https://itsfoss.com/wp-content/uploads/2022/08/rotate-or-resize-images-in-nemo-file-manager.png
[9]: https://itsfoss.com/wp-content/uploads/2022/08/change-individual-folder-color.png
[10]: https://itsfoss.com/wp-content/uploads/2022/08/select-emblems-for-files-or-folders.png
[11]: https://itsfoss.com/checksum-tools-guide-linux/
[12]: https://itsfoss.com/wp-content/uploads/2022/08/check-hash-checksum-of-file-with-nemo-gtkhash.png
[13]: https://itsfoss.com/wp-content/uploads/2022/08/show-advanced-permission-in-property-dialog-box.png
[14]: https://itsfoss.com/wp-content/uploads/2022/08/edit-advanced-permissions-in-property-dialog-box.png
[15]: https://itsfoss.com/wp-content/uploads/2022/08/nemo-embedded-terminal.png
[16]: https://itsfoss.com/wp-content/uploads/2022/08/right-click-on-top-left-back-arrow-to-access-recent-folders.png
[17]: https://itsfoss.com/wp-content/uploads/2022/08/show-number-of-items-inside-folder-using-nemo-file-manager.png
[18]: https://itsfoss.com/wp-content/uploads/2022/08/show-folder-item-count-and-file-sizes-in-nemo-preferences.png
[19]: https://itsfoss.com/wp-content/uploads/2022/08/default-list-columns-available-in-nemo.png
[20]: https://itsfoss.com/wp-content/uploads/2022/08/more-media-columns-added-to-nemo-list-view.png
[21]: https://itsfoss.com/wp-content/uploads/2022/08/more-columns-view-in-nemo-list-view.png
[22]: https://help.gnome.org/users/zenity/stable/
[23]: https://itsfoss.com/copyq-clipboard-manager/
[24]: https://itsfoss.com/wp-content/uploads/2022/08/nemo-scripts-in-right-click-context.png
[25]: https://itsfoss.com/wp-content/uploads/2022/08/color-select-with-zenity.png
[26]: https://itsfoss.com/wp-content/uploads/2022/08/reduce-image-by-50-percent-context-menu-entry.png
[27]: https://itsfoss.com/wp-content/uploads/2022/08/image-reduced-with-nemo-actions-result.png
[28]: https://itsfoss.com/wp-content/uploads/2022/08/access-plugins-from-menu.png
[29]: https://itsfoss.com/wp-content/uploads/2022/08/plugins-view-and-manage-in-nemo.png
[30]: https://itsfoss.com/nautilus-tips-tweaks/
@ -0,0 +1,157 @@
[#]: subject: "Building a Stateless Firewall Using Netfilter in Linux"
[#]: via: "https://www.opensourceforu.com/2022/08/building-a-stateless-firewall-using-netfilter-in-linux/"
[#]: author: "Supriyo Ganguly https://www.opensourceforu.com/author/supriyo-ganguly/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Building a Stateless Firewall Using Netfilter in Linux
======
*The Linux kernel has a Netfilter framework that allows us to perform various networking-related operations. This article is a simple tutorial on how to build firewall modules using Netfilter.*

The Netfilter framework is a collection of hooks or handlers in the Linux kernel that help to filter or capture socket buffers. We can implement packet filtering at the input or output, or even on the forwarding path of a network packet. *iptables* is a popular tool implemented using the Netfilter framework.

As shown in Figure 1, a packet can be filtered or processed at five different stages, so there are five possible hooks where programmers can attach a customised handler and implement their own firewall. These hooks are (for Linux kernel 5.10 or above):
![Figure 1: Processing stages][1]

* NF_INET_PRE_ROUTING: This hook is called once a network packet enters the stack, before any routing decision takes place.
* NF_INET_LOCAL_IN: After routing, if the packet is found to be destined for a local process, this hook is triggered.
* NF_INET_FORWARD: This hook is called if, after routing, the packet is found to be for another networking domain, not for a local process.
* NF_INET_LOCAL_OUT: This is called when a packet is sent from a local process using send or sendto (POSIX calls).
* NF_INET_POST_ROUTING: This handler is called just before any local or forwarded packet is about to hit the interface, after handling by the entire stack is over.

I have written example code to show how to build a firewall using the Netfilter framework, targeting Linux kernel 5.10. In this example, all ICMP and HTTP/HTTPS packets sent from a local process are blocked. The program has to run in kernel space, not user space, so it is developed as a kernel module.

The entire code is available at *https://github.com/SupriyoGanguly/Linux-Firewall-by-netfilter*. You can download the code files to check and understand the implementation.

#### Packet filtering

We have created a firewall.c file that is available in the download from the above link. In *firewall.c*, *netfilter_ops* is a *struct nf_hook_ops* variable. In the module init section, *netfilter_ops* is initialised with the following:

```
netfilter_ops.hook = main_hook;                // the handler function
netfilter_ops.pf = PF_INET;                    // the protocol is IPv4
netfilter_ops.hooknum = NF_INET_POST_ROUTING;  // process at the post-routing stage
netfilter_ops.priority = NF_IP_PRI_FIRST;      // priority
```

Given below is a snippet from firewall.c:

```
static struct nf_hook_ops netfilter_ops;

/* This function is called by the hook. */
static unsigned int main_hook(void *priv, struct sk_buff *skb, const struct nf_hook_state *state)
{
	// struct udphdr *udp_header;
	int dstPort;
	struct tcphdr *hdr;
	struct iphdr *ip_header = (struct iphdr *)skb_network_header(skb);

	if (ip_header->protocol == IPPROTO_ICMP) {
		// udp_header = (struct udphdr *)skb_transport_header(skb);
		printk(KERN_INFO "Drop icmp packet.\n");
		return NF_DROP;
	}

	if (ip_header->protocol == IPPROTO_TCP) {
		hdr = (struct tcphdr *)skb_transport_header(skb);
		dstPort = ntohs(hdr->dest);
		if ((dstPort == 443) || (dstPort == 80)) { /* drop https and http */
			printk("Drop HTTPS/HTTP packet\n");
			return NF_DROP;
		}
	}
	return NF_ACCEPT;
}
```

*main_hook* is the name of the handler function for the *NF_INET_POST_ROUTING hook*. In this function, any packet (whether forwarded or sent from the local interface with ICMP protocol) will be dropped using the first ‘if’ statement, where it checks for the IP_PROTOCOL in the IPv4 header of the socket buffer. As the hook is returning *NF_DROP*, it tells the kernel driver not to proceed with the packet.
|
||||
|
||||
The second ‘if’ statement checks whether any TCP packet with destination port number 443 (for HTTPS) or with port number 80 (for HTTP) will be dropped.
|
||||
|
||||
Now we can use the *make* command to compile this module (*firewall.c*). Figure 2 shows that we can successfully ping a local network device with IP address 192.168.29.1 just before implementing *firewall.c*.

![Figure 2: Successful ping][2]

But after insertion of the module, ping starts to fail (as shown in Figure 3).

![Figure 3: Unsuccessful ping][3]

This indicates that in the *POST_ROUTING* hook, the packet is dropped and not sent to the wire. A log from the *dmesg* command is shown in Figure 4 and describes the functionality of the module.

![Figure 4: Output of dmesg][4]

In this example you can also see the use of the *NF_DROP* and *NF_ACCEPT* return values. The meaning of these values is self-explanatory, but there are a few more return values as well:

* NF_REPEAT: Repeat the hook function.
* NF_QUEUE: Queue the packet for user space processing. To implement this in user space, we need the nfnetlink and netfilter_queue libraries.
* NF_STOLEN: Further processing of the packet and freeing its memory is up to your module.

#### Packet mangling

Netfilter can also be used for packet mangling, i.e., modification. In the same repository (https://github.com/SupriyoGanguly/Linux-Firewall-by-netfilter), you can find one more file, *Mangle.c*. The corresponding makefile is *Makefile_mangle*.

In this hook, the source IP of an ICMP ping packet is modified just before the packet is sent out. You can see the code below:

```
/* This function is called by the hook. */
static unsigned int main_hook(void *priv, struct sk_buff *skb, const struct nf_hook_state *state)
{
        struct iphdr *ip_header = (struct iphdr *)skb_network_header(skb);

        if (ip_header->protocol == IPPROTO_ICMP) {
                printk(KERN_INFO "Mangle icmp packet. %x\n", ip_header->saddr);
                ip_header->saddr = 0xd01da8c0;
        }

        return NF_ACCEPT;
}
```

The Wireshark capture shown in Figure 5 depicts that before loading this module, the ping request to destination IP 192.168.29.1 goes from the original IP of the interface, i.e., 192.168.29.207. But after loading the module, the ping request goes from the modified IP, i.e., 192.168.29.208. The physical interface IP itself is unchanged.

![Figure 5: Output of Wireshark][5]

#### Compiling the code

To compile and test the downloaded module, just use:

```
$ make
$ sudo insmod firewall.ko
```

To remove it, use the following command:

```
$ sudo rmmod firewall
```

This article is a simple tutorial on building firewall modules using Netfilter. You can also do packet capturing by simply using the *NF_INET_PRE_ROUTING* hook number in this example. You can even use this example to simulate a man-in-the-middle attack on your devices, to test their cybersecurity.

--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/building-a-stateless-firewall-using-netfilter-in-linux/

作者:[Supriyo Ganguly][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/supriyo-ganguly/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-1-Processing-stages.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-2-Successful-ping.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-3-Unsuccessful-ping.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-4-Output-of-dmesg.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-5-Output-of-Wireshark.jpg

[#]: subject: "How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish"
[#]: via: "https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

How to Get KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish
======
The KDE developers have now enabled the popular backports PPA with the updates necessary to install KDE Plasma 5.25 in Kubuntu 22.04 Jammy Jellyfish. Here's how.

KDE Plasma 5.25 was released a few days back, on June 14, 2022, with some stunning updates. With this release, you get the **dynamic accent colour**, revamped login avatars, the **floating panel** and many other features, which we covered in the [feature highlight article][1].

But if you are running [Kubuntu 22.04 Jammy Jellyfish][2], which was released back in April 2022, you have KDE Plasma 5.24 with KDE Framework 5.92.

You are probably waiting to enjoy the new features in your stable Kubuntu 22.04 release, and it is now possible to install them via the famous backports PPA.

### How to Install KDE Plasma 5.25 in Kubuntu 22.04

Here's how you can upgrade Kubuntu 22.04 to the latest KDE Plasma 5.25.

#### GUI Method

If you are comfortable with KDE's software app Discover, open the app, browse to Settings > Sources, and add the PPA `ppa:kubuntu-ppa/backports-extra`. Then click on Updates.

#### Terminal Method (recommended)

I would recommend doing this upgrade in a terminal for faster execution and installation.

* Open Konsole and run the following command to add the [backports PPA][3]:

```
sudo add-apt-repository ppa:kubuntu-ppa/backports-extra
```

![Upgrade Kubuntu 22.04 with KDE Plasma 5.25][4]

* Now, refresh the package list by running the following command. Then verify that the 5.25 packages are available.

```
sudo apt update
```

```
apt list --upgradable | grep 5.25
```
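The grep stage simply keeps the lines that mention 5.25. On simulated output (the package names and version strings below are illustrative, not taken from the PPA), it behaves like this:

```
# Simulated `apt list --upgradable` output (illustrative lines only)
upgradable='plasma-desktop/jammy 4:5.25.4-0ubuntu1 amd64 [upgradable from: 4:5.24.7-0ubuntu1]
libkf5service-bin/jammy 5.96.0-0ubuntu1 amd64 [upgradable from: 5.92.0-0ubuntu1]'

# Only the line carrying a 5.25 version survives the filter
printf '%s\n' "$upgradable" | grep 5.25
```

If the filter prints nothing, the backports PPA packages have not been picked up yet.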

![KDE Plasma 5.25 packages are available now][5]

Finally, run the last command to kick off the upgrade:

```
sudo apt full-upgrade
```

The total download is around 200 MB worth of packages, and the entire process takes around 10 minutes, depending on your internet connection speed.

After the above command is complete, restart your system.

Post-restart, you should see the new KDE Plasma 5.25 in Kubuntu 22.04 LTS.

![KDE Plasma 5.25 in Kubuntu 22.04 LTS][6]

### Other backports PPA

Please note that the [other backports PPA][7], `ppa:kubuntu-ppa/backports`, currently has Plasma 5.24, so do not use it; it is different from the PPA above. I am not sure whether this PPA will get the update.

```
sudo add-apt-repository ppa:kubuntu-ppa/backports # don't use this
```

### How to Uninstall

If at any moment you would like to go back to the stock version of the KDE Plasma desktop, you can install ppa-purge and remove the PPA, followed by refreshing the package list.

Open a terminal and execute the following commands in sequence:

```
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-extra
sudo apt update
```

Once the above commands are complete, restart your system.

### Closing Notes

There you have it: a few nice and simple steps to upgrade stock KDE Plasma to Plasma 5.25 in Jammy Jellyfish. I hope your upgrade goes fine.

Do let me know in the comment section if you face any errors.

Cheers.

--------------------------------------------------------------------------------

via: https://www.debugpoint.com/kde-plasma-5-25-kubuntu-22-04/

作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/kde-plasma-5-25/
[2]: https://www.debugpoint.com/kubuntu-22-04-lts/
[3]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports-extra
[4]: https://www.debugpoint.com/wp-content/uploads/2022/08/Upgrade-Kubuntu-22.04-with-KDE-Plasma-5.25.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-packages-are-available-now.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/08/KDE-Plasma-5.25-in-Kubuntu-22.04-LTS-1024x575.jpg
[7]: https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports

[#]: subject: "Using eBPF for network observability in the cloud"
[#]: via: "https://opensource.com/article/22/8/ebpf-network-observability-cloud"
[#]: author: "Pravein Govindan Kannan https://opensource.com/users/praveingk"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Using eBPF for network observability in the cloud
======
eBPF extends the Linux kernel to help you monitor the cloud.

Observability is the ability to know and interpret the current state of a deployment, and a way to know when something is amiss. With cloud deployments of applications as microservices on Kubernetes and OpenShift growing, observability is getting a lot of attention. Many applications come with strict guarantees, such as service level agreements (SLAs) for downtime, latency, and throughput, so network-level observability is an imperative feature. It is provided by several orchestrators, either natively or by using plugins and operators.

Recently, [eBPF][2] (extended Berkeley Packet Filter) emerged as a popular option for implementing observability in the end-host kernel, due to its performance and flexibility. This method enables custom programs to be hooked at certain points along the network data path (for instance, a socket, TC, or XDP). Several open source eBPF-based plugins and operators have been released, and each can be plugged into end-host nodes to provide network observability through your cloud orchestrator.

### Existing Observability Tools

The core component of an observability module is how it non-invasively collects the necessary data. To that end, using instrumented code and measurements, we've studied how the design of the eBPF datapath affects the performance of an observability module and the workloads it monitors. The artifacts of our measurements are open source and available in our [research Git repo][3]. We're also able to provide some useful insights you can use when designing a scalable and high-performance eBPF monitoring datapath.

Here are existing open source tools available to achieve observability in the context of both the network and the host:

**Skydive**

[Skydive][4] is a network topology and flow analyzer. It attaches probes to nodes to collect flow-level information. The probes are attached using PCAP, `AF_PACKET`, [Open vSwitch][5], and so on. Instead of capturing entire packets, Skydive uses eBPF to capture flow metrics. The eBPF implementation, attached to the socket hook-point, uses a hash map to store flow headers and metrics (packets, bytes, and direction).

**libebpfflow**

[Libebpfflow][6] is a library that uses eBPF to provide network visibility. It hooks on to various points in the host stack, like kernel probes (`inet_csk_accept`, `tcp_retransmit_skb`) and tracepoints (`net:netif_receive_skb`, `net:net_dev_queue`), to analyze TCP/UDP traffic states, RTT, and more. In addition, it provides process and container mapping for the traffic it analyzes. Its eBPF implementation uses a perf event buffer to notify userspace of TCP state-change events. For UDP, it attaches to the tracepoint of the network device queue and uses a combination of an LRU hash map and a perf event buffer to store UDP flow metrics.

**eBPF Exporter**

Cloudflare's [eBPF Exporter][7] provides APIs for plugging in custom eBPF code to record custom metrics of interest. It requires the entire eBPF C code (along with the hook point) to be appended to a YAML file for deployment.

**Pixie**

[Pixie][8] uses bpftrace to trace syscalls. It uses TCP/UDP state messages to collect the necessary information, which is then sent to the Pixie Edge Module (PEM). In the PEM, the data is parsed according to the detected protocol and stored for querying.

**Inspektor**

[Inspektor][9] is a collection of tools for Kubernetes cluster debugging. It aids the mapping of low-level kernel primitives to Kubernetes resources. It's added as a daemonset on each node of the cluster to collect traces, using eBPF, for events such as syscalls. These events are written to a perf ring buffer. Finally, the ring buffer is consumed retrospectively when a fault occurs (for example, upon a pod crash).

**L3AF**

[L3AF][10] provides a set of eBPF packages that can be packaged and chained together using tail calls. It provides a network observability tool that mirrors traffic, based on the flow-id, to a user-space agent. Additionally, it provides an IPFIX flow exporter by storing flow records in a hash map in the eBPF datapath.

**Host-INT**

[Host-INT][11] extends in-band network telemetry (INT) to support telemetry for the host network stack. Fundamentally, INT embeds the switching delay incurred by each packet into an INT header in the packet; Host-INT does the same for the host network stack between two hosts. Host-INT has two eBPF-based datapath components: a source and a sink. The source runs on a TC hook of the sender host's interface, and the sink runs on an XDP hook of the receiver host's interface. At the source, it uses hash maps to store flow statistics, and adds an INT header with an ingress/egress port, timestamps, and so on. At the sink, it uses a perf array to send statistics to a sink userspace program on each packet arrival, and passes the packet on to the kernel.

**Falco**

Falco is a cloud-native runtime security project. It monitors system calls using eBPF probes and parses them at runtime. Falco can be configured to alert on activities such as privileged access using privileged containers, reads and writes to kernel folders, user addition, password changes, and so on. Falco comprises a userspace program (a CLI tool to specify the alerts and obtain the parsed syscall output) and a driver built over the libscap and libsinsp libraries. For syscall probes, Falco uses eBPF ring buffers.

**Cilium**

Observability in [Cilium][12] is enabled using eBPF. Hubble is a platform with eBPF hooks running on each node of a cluster. It helps draw insights on services communicating with each other to build a service dependency graph. It also aids Layer 7 monitoring (to analyze, for example, HTTP calls and Kafka topics), Layer 4 monitoring (such as the TCP retransmission rate), and more.

**Tetragon**

Tetragon is an extensible framework for security and observability in Cilium. The underlying enabler for Tetragon is eBPF, with data stored using ring buffers; beyond monitoring, eBPF is also leveraged to enforce policy spanning various kernel components such as the virtual file system (VFS), namespaces, and system calls.

**Aquasecurity Tracee**

[Tracee][13] is an event tracing tool for debugging behavioral patterns, built over eBPF. Tracee has multiple hook points, at tc, kprobes, etc., to monitor and trace network traffic. At the tc hook, it uses a (perf) ring buffer to submit packet-level events to userspace.

### Revisiting the design of the flow metric agent

While motives and implementations differ across tools, the central component common to all observability tools is the data structure used to collect the observability metrics. Although different tools adopt different data structures, no existing performance measurements study the impact of the data structure used to collect and store observability metrics. To bridge this gap, we implemented template eBPF programs using different data structures to collect the same flow metrics from host traffic. We use the following data structures (called maps) available in eBPF to collect and store metrics:

1. Ring Buffer
2. Hash
3. Per-CPU Hash
4. Array
5. Per-CPU Array

### Ring Buffer

A ring buffer is a shared queue between the eBPF datapath and userspace, where the eBPF datapath is the producer and the userspace program is the consumer. It can be used to send per-packet "postcards" to userspace for aggregation of flow metrics. Although this approach is simple and provides accurate results, it fails to scale, because sending a postcard per packet keeps the userspace program in a busy loop.

### Hash and Per-CPU Hash map

A (per-CPU) hash map can be used in the eBPF datapath to aggregate per-flow metrics by hashing on the flow-id (for example, the 5-tuple of IPs, ports, and protocol) and evicting the aggregate information to userspace upon flow completion or inactivity. While this approach overcomes the drawbacks of a ring buffer by sending postcards only once per flow rather than per packet, it has some disadvantages.

First, multiple flows may hash into the same entry, leading to inaccurate aggregation of the flow metrics. Second, the hash map necessarily has limited memory in the in-kernel eBPF datapath, so it can be exhausted. The userspace program thus has to implement eviction logic to constantly evict flows upon a timeout.

### Array-based map

A (per-CPU) array-based map can also be used to store per-packet postcards temporarily before eviction to user space, although it is not an obvious option. Arrays offer an advantage: per-packet information is stored in the array until it is full, and flushed to userspace only then. This improves on the busy-loop behavior of userspace compared to using a ring buffer per packet. Additionally, it does not suffer from the hash collisions of a hash map. However, it is complicated to implement, because it requires multiple redundant arrays to store per-packet postcards while the main array is flushing its contents to userspace.

### Measurements

So far, we have studied the data structures that can be used to implement flow metric collection. Now it's time to study the performance achieved by a reference implementation of flow metric postcards using each of the above data structures. To do that, we implemented representative eBPF programs that collect flow metrics. The code we used is available in our [Git repo][14]. Further, we conducted measurements by sending traffic using a custom-built UDP packet generator built on top of [PcapPlusPlus][15].

This graphic describes the experiment setting:

![eBPF test environment][16]

Image by: (Kannan/Naik/Lev-Ran, CC BY-SA 4.0)

The observe agent is the eBPF datapath performing flow metric collection, hooked at the tc hook-point of the sender. We use two bare-metal servers connected over a 40G link. Packet generation is done using 40 separate cores. To put these measurements in perspective, we compare against libpcap-based tcpdump, which can be used to collect similar flow information.

#### Single Flow

We initially run the test with single-flow UDP frames. A single-flow test shows us how much single-flow traffic burst the observe agent can tolerate. As shown in the figure below, native performance without any observe agent is about 4.7 Mpps (million packets per second), and with [tcpdump][17] running, the throughput falls to about 2 Mpps. With eBPF, we observed that the performance varies from 1.6 Mpps to 4.7 Mpps depending on the data structure used to store the flow metrics. Using a shared data structure such as a hash map, we observed the most significant drop in performance for a single flow, because each packet writes to the same entry in the map regardless of the CPU it originated from.

The ring buffer performs slightly better than a single hash map for a single-flow burst. Using a per-CPU hash map, we observed a good increase in throughput, because packets arriving from multiple CPUs no longer contend for the same map entry. However, the performance is still half the native performance without any *observe agent*. (Note that this performance is without handling hash collisions and evictions.)

With (per-CPU) arrays, we see a significant increase in the throughput of a single flow. We can attribute this to the fact that there is literally no contention between packets, since each packet takes up a different entry in the array, incrementally. However, the major drawback in our implementation is that we do not handle flushing the array when it is full; instead, it performs writes in a circular fashion and hence stores only the last few packet records observed at any point in time. Nevertheless, it shows us the spectrum of performance gains we can achieve by appropriately choosing the data structure in the eBPF datapath.

![eBPF data][18]

Image by: (Kannan/Naik/Lev-Ran, CC BY-SA 4.0)

#### Multi-Flow

We now test the performance of the eBPF observe agents with multiple flows. We generated 40 different UDP flows (1 flow per core) by instrumenting the packet generator. Interestingly, with multiple flows, we observed a stark difference in the performance of the per-CPU hash and hash maps compared to single flows. This can be attributed to the reduction in contention for a single hash entry. However, we do not see any performance improvement with the ring buffer, since regardless of the number of flows, the contention channel (the ring buffer itself) is fixed. The array performs marginally better with multiple flows.

### Lessons learned

From our studies, we've derived these conclusions:

1. Ringbuffer-based per-packet postcards are not scalable, and they affect performance.
2. Hash maps limit the "burstiness" of a flow, in terms of packets processed per second. Per-CPU hash maps perform marginally better.
3. To handle short bursts of packets within a flow, using an array map to store per-packet postcards is a good option, given that an array can store tens or hundreds of packet records. This ensures that the observe agent can tolerate short bursts without degrading performance.

In our research, we analyzed monitoring of packet-level and flow-level information between multiple hosts in the cloud. We started with the premise that the core feature of observability is how the data is collected in a non-invasive manner. With this outlook, we surveyed existing tools and tested different methodologies of collecting observability data, in the form of flow metrics, from packets observed in the eBPF datapath. We studied how the performance of flows was affected by the data structure used to collect flow metrics.

Ideally, to minimize the performance drop of the host traffic due to the overhead of the observability agent, our analysis points to mixed usage of per-CPU array and per-CPU hash data structures: the two can be used together to handle short bursts in flows (using an array) and aggregation (using a per-CPU hash map). We're currently working on the design of an observability agent ([https://github.com/netobserv/netobserv-ebpf-agent][19]), and plan to release a future article with the design details and a performance analysis compared to existing tools.

--------------------------------------------------------------------------------

via: https://opensource.com/article/22/8/ebpf-network-observability-cloud

作者:[Pravein Govindan Kannan][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/praveingk
[b]: https://github.com/lkxed
[2]: https://ebpf.io/
[3]: https://github.com/netobserv/ebpf-research/tree/main/ebpf-measurements
[4]: https://github.com/skydive-project/skydive
[5]: https://www.redhat.com/sysadmin/getting-started-sdn
[6]: https://github.com/ntop/libebpfflow
[7]: https://github.com/cloudflare/ebpf_exporter
[8]: https://github.com/pixie-io/pixie
[9]: https://github.com/kinvolk/inspektor-gadget
[10]: https://github.com/l3af-project/eBPF-Package-Repository/blob/main/ipfix-flow-exporter/bpf_ipfix_egress_kern.c
[11]: https://github.com/intel/host-int
[12]: https://github.com/cilium/tetragon
[13]: https://github.com/aquasecurity/tracee
[14]: https://github.com/netobserv/ebpf-research/tree/main/ebpf-measurements
[15]: https://pcapplusplus.github.io/
[16]: https://opensource.com/sites/default/files/2022-08/ebpf-tests.png
[17]: https://sysadmin.prod.acquia-sites.com/sysadmin/troubleshoot-dhcp-nmap-tcpdump-and-wireshark
[18]: https://opensource.com/sites/default/files/2022-08/ebpf-test-throughput.png
[19]: https://github.com/netobserv/netobserv-ebpf-agent

[#]: subject: "Bash Scripting – Select Loop Explained With Examples"
[#]: via: "https://ostechnix.com/bash-select-loop/"
[#]: author: "Karthick https://ostechnix.com/author/karthick/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Bash Scripting – Select Loop Explained With Examples
======
Creating Menu Driven Scripts Using Bash Select Loop

We have covered the bash **for loop**, **while loop**, and **until loop** in our previous articles, with detailed examples. Bash offers one more type of loop, called the **select loop**, which allows you to **create menu-driven scripts**.

Menu-driven scripts are a good alternative to scripts that require users to pass arguments to perform an action. You can make your menus as verbose as you like, and users just select an option for the program to do its job.

Take a look at our comprehensive articles on the `for loop`, `while loop`, and `until loop`:

* [Bash Scripting – For Loop Explained With Examples][1]
* [Bash Scripting – While And Until Loop Explained With Examples][2]

### Bash Select Loop - Syntax

The `select loop` syntax is a bit similar to the `for loop` syntax. In a `for loop`, every element is iterated over, and for each element you write the logic to process it. A `select loop`, however, automatically converts the list of elements into a numbered menu.

```
select fav in ubuntu popos mint kubuntu
do
  echo "My fav distribution = ${fav}"
done
```

**Explanation:**

* The loop starts with the `select` keyword.
* After the select keyword comes the variable that will store the value you choose from the menu. In my case, the variable name is "fav".
* After the in keyword, you give the list of elements. These elements will be converted into a menu.
* Your logic goes within the do and done block.

Now, go ahead and copy the above snippet and run it in your terminal. It will create a menu and wait for your response.

![Create Menu Driven Scripts Using Bash Select Loop][3]

### Select Loop - Response

Let's understand the behavior of the select loop when you give a response.

The `select loop` only accepts a menu number as input. Depending on the menu number you choose, the corresponding value is stored in the variable (fav). The number you typed is stored in the **REPLY** variable.

Check the following session. I have selected choice **2**.

```
$ select fav in ubuntu popos mint kubuntu
do
echo "My fav distribution = ${fav}"
done
1) ubuntu
2) popos
3) mint
4) kubuntu
#? 2
My fav distribution = popos
#?
```

![Bash Select Loop Response][4]

The select loop will not terminate until you cancel it or use the break statement to exit the loop in your script. I have used the break statement after my logic, so the loop terminates after just one selection.

The [break][5] statement exits the loop as soon as it is called, so any pending operations in the loop are skipped. The following code shows the use of the break statement.

```
select fav in ubuntu popos mint kubuntu
do
  echo "My fav distribution = ${fav}"
  break
done
```

![Bash Select Loop With Break Statement][6]

By default, when the user provides no input, the select loop prompts for input again instead of exiting the loop.

![Bash Select Loop Without Input][7]

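An out-of-range number leaves the loop variable empty while REPLY still holds the raw input, so a useful pattern (my own sketch, not from the original article) is to validate the selection before breaking:

```
select fav in ubuntu popos mint kubuntu; do
    if [[ -n ${fav} ]]; then
        echo "valid choice: ${fav} (menu number ${REPLY})"
        break
    else
        echo "'${REPLY}' is not a valid menu number" >&2
    fi
done
```

Feeding it `9` and then `2` (for example, via a pipe) rejects the first response and accepts the second.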
### Select Loop - Setting Custom Prompt

By default, the select loop uses **"#?"** as the prompt. You can set a custom prompt as you wish by setting the **PS3 environment variable**.

```
PS3="Choose your fav distribution :: "
select fav in ubuntu popos mint kubuntu; do
  echo "My fav distribution = ${fav}"
  break
done
```

![Set Custom Prompt][8]

### Creating A Simple Menu Driven Backup And Restore Script

So far we have looked at the select loop's syntax and behavior. Let's create a simple backup and restore script with a menu-driven approach.

Take a look at the code below. There are two functions: **backup()**, which takes the backup, and **restore()**, which reverts the file from the backup.

For demonstration I am backing up only the `.bashrc` file, but you can tweak this script as per your requirements. Using **[conditional statements][9]**, I validate the input and trigger the respective function.

```
#!/usr/bin/env bash

SOURCE="/home/${USER}/.bashrc"
DESTINATION="/home/${USER}/Documents/"

# This function will take the backup
function backup(){
    rsync -a --progress --delete-before --info=progress2 "${SOURCE}" "${DESTINATION}"
}

# This function will restore the backup copy of the file
function restore(){
    rsync -a --progress --delete-before --info=progress2 "${DESTINATION}$(basename "${SOURCE}")" "${SOURCE}"
}

PS3="Choose either BACKUP or RESTORE :: "
select option in backup restore
do
    if [[ ${option} = "backup" ]]; then
        backup
    elif [[ ${option} = "restore" ]]; then
        restore
    fi
    break
done
```

Once you run this script, it prompts with just two options, as shown in the image below, and the action is performed based upon your selection.

![Menu Driven Backup And Restore Script][10]
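One possible refinement (a sketch, not part of the original script): as the number of options grows, a `case` statement reads more naturally than an if/elif chain, and a quit entry plus input validation lets the menu run until the user is done. The `echo` lines below stand in for the real `backup`/`restore` function calls:

```shell
#!/usr/bin/env bash
# Menu loop with a quit option; invalid input is reported, not ignored.
PS3="Choose BACKUP, RESTORE or QUIT :: "
select option in backup restore quit
do
    case ${option} in
        backup)  echo "running backup" ;;   # call backup() here
        restore) echo "running restore" ;;  # call restore() here
        quit)    break ;;
        *)       echo "Invalid choice: ${REPLY}" ;;
    esac
done
```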
### Conclusion

In this article, I have shown you what a select statement in Bash scripting is and how to use the Bash select loop to create menu-driven scripts.

Let us know in the comments if you have implemented any cool scripts with the menu-driven approach.
--------------------------------------------------------------------------------

via: https://ostechnix.com/bash-select-loop/

作者:[Karthick][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://ostechnix.com/author/karthick/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/bash-for-loop-shell-scripting/
[2]: https://ostechnix.com/bash-while-until-loop-shell-scripting/
[3]: https://ostechnix.com/wp-content/uploads/2022/08/Create-Menu-Driven-Scripts-Using-Bash-Select-Loop.png
[4]: https://ostechnix.com/wp-content/uploads/2022/08/Bash-Select-Loop-Response.png
[5]: https://ostechnix.com/bash-for-loop-shell-scripting/#break-continue-statement-usage
[6]: https://ostechnix.com/wp-content/uploads/2022/08/Bash-Select-Loop-With-Break-Statement.png
[7]: https://ostechnix.com/wp-content/uploads/2022/08/Bash-Select-Loop-Without-Input.png
[8]: https://ostechnix.com/wp-content/uploads/2022/08/Set-Custom-Prompt.png
[9]: https://ostechnix.com/bash-conditional-statements/
[10]: https://ostechnix.com/wp-content/uploads/2022/08/Menu-Driven-Backup-And-Restore-Script.png
@ -0,0 +1,128 @@
[#]: subject: "How I analyze my music directory with Groovy"
[#]: via: "https://opensource.com/article/22/8/groovy-script-java-music"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

How I analyze my music directory with Groovy
======
To simplify Java's clunkiness, I made a Groovy tool to analyze my music directory.

Lately, I've been looking at how Groovy streamlines the slight clunkiness of Java. In this article, I begin a short series to demonstrate Groovy scripting by creating a tool to analyze my music directory.

In this article, I demonstrate how the `groovy.File` class extends and streamlines `java.File` and simplifies its use. This provides a framework for looking at the contents of a music folder to ensure that the expected content (for example, a `cover.jpg` file) is in place. I use the [JAudiotagger library][2] to analyze the tags of any music files.

### Install Java and Groovy

Groovy is based on Java and requires a Java installation. A recent and decent version of both Java and Groovy may be in your Linux distribution's repositories. Groovy can also be installed directly from the [Apache Foundation website][3]. A nice alternative for Linux users is [SDKMan][4], which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I use SDK's releases of:

* Java: version 11.0.12-open of OpenJDK 11
* Groovy: version 3.0.8
### Music metadata

Lately, I've consolidated my music consumption options. I've settled on using the excellent open source [Cantata][5] music player, which is a front end for the open source [MPD music player daemon][6]. All my computers have their music stored in the `/var/lib/mpd/music` directory. In that music directory are artist subdirectories, and in each artist subdirectory are album sub-subdirectories containing the music files, a `cover.jpg`, and occasionally PDFs of the liner notes.

Almost all of my music files are in FLAC format, with a few in MP3 and maybe a small handful in OGG. One reason I chose the JAudiotagger library is that it handles the different tag formats transparently. Of course, JAudiotagger is open source!

So what's the point of looking at audio tags? In my experience, audio tags are extremely poorly managed. The word "careless" comes to mind. But that may be as much a recognition of my own pedantic tendencies as of real problems in the tags themselves. In any case, this is a non-trivial problem that can be solved with Groovy and JAudiotagger. It's not only applicable to music collections, though. Many other real-world problems involve descending a directory tree in a filesystem to do something with the contents found there.

### Using the Groovy script

Here's the basic code required for this task. I've incorporated comments in the script that reflect the (relatively abbreviated) "comment notes" I typically leave for myself:
```
 1 // Define the music library directory
 2 def musicLibraryDirName = '/var/lib/mpd/music'
 3 // Print the CSV file header
 4 println "artistDir|albumDir|contentFile"
 5 // Iterate over each directory in the music library directory
 6 // These are assumed to be artist directories
 7 new File(musicLibraryDirName).eachDir { artistDir ->
 8     // Iterate over each directory in the artist directory
 9     // These are assumed to be album directories
10     artistDir.eachDir { albumDir ->
11         // Iterate over each file in the album directory
12         // These are assumed to be content or related
13         // (cover.jpg, PDFs with liner notes etc)
14         albumDir.eachFile { contentFile ->
15             println "$artistDir.name|$albumDir.name|$contentFile.name"
16         }
17     }
18 }
```

As noted above, I'm using `groovy.File` to move around the directory tree. Specifically:

Line 7 creates a new `groovy.File` object and calls `groovy.File.eachDir()` on it, with the code between the `{` on line 7 and the closing `}` on line 18 being a `groovy.Closure` argument to `eachDir()`.

This means that `eachDir()` executes that code for each subdirectory found in the directory. This is similar to a Java *lambda* (also called an "anonymous function"), although a Groovy closure doesn't restrict access to the calling environment in the way a lambda does (in recent versions of Groovy, you can use Java lambdas if you want to). As noted above, subdirectories within the music library directory are expected to be artist directories (for example, "Iron Butterfly" or "Giacomo Puccini"), so the `artistDir` is the argument passed by `eachDir()` to the closure.

Line 10 calls `eachDir()` on each `artistDir`, with the code between the `{` on line 10 and the `}` on line 17 forming another closure, which processes the `albumDir`.

Line 14 calls `eachFile()` on each `albumDir`, with the code between the `{` on line 14 and the `}` on line 16 forming the third-level closure that processes the contents of the album.

For the scope of this article, the only thing I need to do with each file is begin to build the table of information, which I'm creating as a bar-delimited CSV file that can be imported into [LibreOffice][7], [OnlyOffice][8], or any other spreadsheet. Right now, the code writes out the first three columns: artist directory name, album directory name, and content file name (line 4 writes out the CSV header line).
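For comparison only (this is not part of the Groovy series), the same bar-delimited listing can be sketched with standard shell tools. To keep the snippet self-contained, it builds a tiny sample tree; `MUSIC_DIR` could just as well point at `/var/lib/mpd/music`:

```shell
#!/usr/bin/env sh
# Emit the same artistDir|albumDir|contentFile listing using find/dirname/basename.
MUSIC_DIR=$(mktemp -d)   # sample tree for the demo; use your real library instead
mkdir -p "$MUSIC_DIR/Habib Koite & Bamada/Afriki"
touch "$MUSIC_DIR/Habib Koite & Bamada/Afriki/01 - Namania.flac"

echo "artistDir|albumDir|contentFile"
# Files sit exactly three levels below the library root: artist/album/file.
find "$MUSIC_DIR" -mindepth 3 -maxdepth 3 -type f |
while IFS= read -r f; do
    album=$(dirname "$f")
    artist=$(dirname "$album")
    printf '%s|%s|%s\n' "$(basename "$artist")" "$(basename "$album")" "$(basename "$f")"
done
rm -rf "$MUSIC_DIR"
```

The Groovy version is considerably more readable, which is part of the point of the series.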
Running this on my Linux laptop produces the following output:

```
$ groovy TagAnalyzer.groovy | head
artistDir|albumDir|contentFile
Habib Koite & Bamada|Afriki|02 - Ntesse.flac
Habib Koite & Bamada|Afriki|08 - NTeri.flac
Habib Koite & Bamada|Afriki|01 - Namania.flac
Habib Koite & Bamada|Afriki|07 - Barra.flac
Habib Koite & Bamada|Afriki|playlist.m3u
Habib Koite & Bamada|Afriki|04 - Fimani.flac
Habib Koite & Bamada|Afriki|10 - Massake.flac
Habib Koite & Bamada|Afriki|11 - Titati.flac
Habib Koite & Bamada|Afriki|03 – Africa.flac
[...]
Richard Crandell|Spring Steel|04-Japanese Lullaby [Richard Crandell].flac
Richard Crandell|Spring Steel|Spring Steel.pdf
Richard Crandell|Spring Steel|03-Zen Dagger [Richard Crandell].flac
Richard Crandell|Spring Steel|cover.jpg
$
```
In terms of performance:

```
$ time groovy TagAnalyzer.groovy | wc -l
9870

real 0m1.482s
user 0m4.392s
sys 0m0.230s
$
```

Nice and quick. It processes nearly 10,000 files in a second and a half! Plenty fast enough for me. Respectable performance, compact and readable code. What's not to like?

In my next article, I crack open the JAudiotagger interface and look at the tags in each file.
--------------------------------------------------------------------------------

via: https://opensource.com/article/22/8/groovy-script-java-music

作者:[Chris Hermansen][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/programming-code-keyboard-laptop-music-headphones.png
[2]: http://www.jthink.net/jaudiotagger/examples_read.jsp
[3]: https://groovy.apache.org/download.html
[4]: https://opensource.com/article/22/3/manage-java-versions-sdkman
[5]: https://opensource.com/article/17/8/cantata-music-linux
[6]: https://www.musicpd.org/
[7]: https://opensource.com/tags/libreoffice
[8]: https://opensource.com/article/20/7/nextcloud
@ -0,0 +1,508 @@
[#]: subject: "Microservices Deployment Architecture with Kubernetes Clusters"
[#]: via: "https://www.opensourceforu.com/2022/08/microservices-deployment-architecture-with-kubernetes-clusters/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Microservices Deployment Architecture with Kubernetes Clusters
======
*Scalability and resilience are two of the most important reasons to move from monoliths to microservices. The Kubernetes platform offers both while orchestrating containers. In this Part 9 of the series, a reference architecture for a user management system is presented and demonstrated around Kubernetes. This architecture includes Spring Boot microservices, Apache Kafka and MySQL.*

Look at any e-commerce business. It flourishes during the weekends and on special days. The business is normally low till noon and peaks in the evenings. So the systems that back such e-commerce, banking and government services, etc, experience different loads at different points of time, and need to be scaled up or down as automatically as possible. Such systems require an appropriate deployment architecture and orchestration tooling.

In the previous part of this series of articles, we saw how docker-compose is useful in deploying multiple containerised services all at once, on a single machine. Though docker-compose is good enough for container deployment, it falls short when it comes to container orchestration. It cannot track the containers and maintain the stability of the infrastructure. That's where Kubernetes comes to our rescue.

Kubernetes can deploy containerised services not just on one machine but also on a cluster of any number of machines. It can deploy multiple instances of the same service across the cluster. Kubernetes keeps track of each of the deployed containers. And in case of crashes, it maintains the desired scalability levels by automatically bringing up replacement containers without any manual intervention.

Let us architect the UMS (user management system) deployment with the help of Kubernetes.
### Reference architecture

We have already decomposed our UMS into four microservices, namely: *AddService*, *FindService*, *SearchService* and *JournalService*. These use H2 relational databases for storage and Apache Kafka for asynchronous inter-service collaboration. Now, let's refactor the architecture to achieve the following:

1. Replace H2 with MySQL so that the data is saved persistently and shared across all the service instances.
2. Deploy a Kafka cluster.
3. Deploy three instances of AddService.
4. Deploy six instances of FindService and SearchService.
5. Deploy one instance of JournalService.

Since we have only developed *AddService* so far, we will cover the first three goals in this part. Figure 1 gives our reference architecture.

![Figure 1: Reference architecture][1]
### Spring Boot and MySQL

The AddService currently uses the H2 database. As you know, H2 is an in-memory database engine; the data is lost once the engine is restarted. Such behaviour is not desired in production. We need a database that is persistent. It can be an RDBMS or a NoSQL database like Mongo, etc. We chose MySQL for this illustration.

Since Spring Boot does not offer the MySQL connector out of the box, we need to add it as a dependency in the *pom.xml* of the *AddService*:
```
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
```

We also need to update *application.properties* to specify the JDBC driver, along with a connection string and access details for the MySQL database engine:

```
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://mysqldb:3306/glarimy?allowPublicKeyRetrieval=true&useSSL=false
spring.datasource.username=root
spring.datasource.password=admin
```

Because of the above configuration, the repository of *AddService* attempts to connect to the *glarimy* database on a machine named *mysqldb* on port number 3306. We are recording the password in clear text in this configuration only for simplicity. We will find a better way later!

A few other JPA-specific configurations may also be provided as needed. For example, the following directs the Hibernate system to scan the code for JPA annotations and keep the schema on the database updated at the time of bootstrapping:

```
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
```
### MySQL as a Docker container

Since the *AddService* depends on MySQL, we can update the existing *docker-compose.yml* to deploy and link it:

```
mysqldb:
  image: mysql:latest
  networks:
    - glarimy
  environment:
    - MYSQL_ROOT_PASSWORD=admin
    - MYSQL_DATABASE=glarimy
  volumes:
    - "mysql_data:/glarimy"
```

The above manifest pulls the *mysql:latest* image from the Docker Hub and runs the container. The name of the container must be *mysqldb*, as the *AddService* looks for the database engine on a machine named *mysqldb*. Also, both must run on the same network to resolve the name. Since the *AddService* was configured to run on the *glarimy* network (in the previous part), *mysqldb* is also configured to run on the same network.

The above configuration also directs the container to create a database named *glarimy*, since the *AddService* is configured to use that database.

However, this is still not sufficient. The MySQL container writes the data on to the file system that is mapped to the container. Once the container is restarted, the files are gone! That is not good for us. We want the data to be written on to the disk in such a way that it outlives the containers. In other words, we want to mount a volume so that the container uses only that mount point. The last two lines in the above configuration are meant for that.

The following is the resulting full manifest in *docker-compose.yml*:
```
version: "2"
networks:
  glarimy:
    driver: bridge
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/glarimy"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - glarimy
  kafka:
    image: docker.io/bitnami/kafka:3.1
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/glarimy"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    networks:
      - glarimy
    depends_on:
      - zookeeper
  mysqldb:
    image: mysql:latest
    networks:
      - glarimy
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=glarimy
    volumes:
      - "mysql_data:/glarimy"
  ums:
    image: glarimy/ums-add-service
    networks:
      - glarimy
    depends_on:
      - zookeeper
      - mysqldb
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
  mysql_data:
    driver: local
```
|
||||
|
||||
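One pitfall worth guarding against before deploying: YAML copied from web pages often picks up typographic quotes (“ ”), which YAML parsers reject. A quick grep-based sanity check (a convenience sketch, not part of the original article; the demo writes a deliberately bad line to a temporary file):

```shell
#!/usr/bin/env sh
# Fail if a YAML file contains typographic quotes that break parsing.
check_quotes() {
    if grep -n '[“”‘’]' "$1" >/dev/null; then
        echo "typographic quotes found in $1; replace them with straight quotes" >&2
        return 1
    fi
    return 0
}

# Demo: a deliberately bad line is written to a temp file and checked.
tmp=$(mktemp)
printf 'version: “2”\n' > "$tmp"
check_quotes "$tmp" || echo "check failed as expected"
rm -f "$tmp"
```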
Run the following command to deploy the containers, like we did in the previous part:

```
$ docker-compose up
```

However, this is not our actual goal. docker-compose can deploy multiple containers in one go, but only on a single machine. It does not offer resilience, and it does not offer cluster deployment.
### Kubernetes and Minikube

The production infrastructure of microservices consists of several machines forming a cluster. The containers are expected to be distributed fairly across the machines (aka nodes) in the cluster. New nodes may be added to the cluster and existing nodes may be removed from it at any time. Yet, the containers are expected to be rescheduled on the current set of nodes.

Kubernetes takes care of such orchestration. It deploys the containers across the cluster and redistributes them whenever needed, all without any manual intervention.

Figure 2 is a high-level presentation of the Kubernetes architecture.

![Figure 2: Kubernetes architecture][2]

The Kubernetes cluster consists of one or more nodes, which may be physical or virtual machines. Each node can run several pods. A pod is a group of containers. Each pod gets an ephemeral IP address known as the cluster-ip address. This address is local to the cluster and visible to all other pods across it. In other words, the pods within the cluster can reach out to each other using the cluster-ip address.

Normally, a pod consists of only one application container that runs a microservice. Besides this, a pod may also run several other infrastructure containers that take on tasks such as monitoring, logging, etc.

A deployment unit in Kubernetes consists of a set of such pods. This set is known as a replica-set. For example, you can create a deployment unit for AddService in such a way that three pods are scheduled in the cluster, with each pod running an AddService container. If any of the pods crash for whatever reason, Kubernetes schedules another pod on the cluster in such a way that three pods of AddService are always running. Note that the pods of a replica-set do not necessarily run on a single node.

Though this is sufficient for the containers on different pods/nodes to collaborate with each other, it is very cumbersome for a pod to address another pod by an ephemeral address. To solve this problem, we can create a front-end to each of the replica-sets. Such a front-end is called a service. Each service is exposed at an address that is not ephemeral. The address is called a node-port if it is made visible only within the cluster, or an external-ip if exposed outside the cluster. A service can also be configured with a load balancer so that the incoming calls can be routed to the end-points (pods) in a fairly balanced manner.

This is Kubernetes in a nutshell. There are several tools available in the market to set up a Kubernetes cluster. Minikube is one such tool that helps in setting up single-node clusters. The instructions given below can be followed to set up a Minikube cluster on an Ubuntu machine that has the Docker engine running.

Download the Minikube distribution:
```
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
```

Give the following command to install Minikube:

```
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

Create a group named docker and add the user to it:

```
$ sudo usermod -aG docker <user> && newgrp docker
```

Start the cluster:

```
$ minikube start
```

Create this handy alias:

```
$ alias kubectl="minikube kubectl --"
```

A single-node Kubernetes cluster is now up and running on the local machine.

### Deploying MySQL on Kubernetes

Let us deploy a replica-set consisting of just one pod of MySQL. The service is exposed by the name mysqldb. Other pods must use this name in order to access the database service. Port 3306 is exposed only within the cluster; we don't want anyone from outside the cluster to log in to our database server. The deployment also mandates creating a schema by the name *glarimy* and using a mounted volume.
```
apiVersion: v1
kind: Service
metadata:
  name: mysqldb
spec:
  ports:
    - port: 3306
  selector:
    app: mysqldb
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldb
spec:
  selector:
    matchLabels:
      app: mysqldb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysqldb
    spec:
      containers:
        - image: mysql:5.6
          name: mysqldb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: admin
            - name: "MYSQL_DATABASE"
              value: "glarimy"
          ports:
            - containerPort: 3306
              name: mysqldb
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysqldb
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqldb
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqldb
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

### Deploying Kafka on Kubernetes

An Apache Kafka cluster requires Zookeeper for internal management, so we need to deploy both. Since Kafka and Zookeeper have their own discovery protocol, we expose them on a NodePort and connect them.
```
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  type: NodePort
  ports:
    - name: zookeeper-port
      port: 2181
      nodePort: 30181
      targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - image: bitnami/zookeeper
          name: zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  type: NodePort
  ports:
    - name: kafka-port
      port: 9092
      nodePort: 30092
      targetPort: 9092
  selector:
    app: kafka
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: bitnami/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: MY_MINIKUBE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "$(MY_MINIKUBE_IP):30181"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://:9092"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://$(MY_POD_IP):9092"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
```

### Deploying AddService on Kubernetes

And, finally, we want to deploy three instances of *AddService* and expose them to the outside world through a load balancer with an *external-ip*.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ums-add-service
  labels:
    app: ums-add-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ums-add-service
  template:
    metadata:
      labels:
        app: ums-add-service
    spec:
      containers:
        - name: ums-add-service
          image: glarimy/ums-add-service
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ums-add-service
  labels:
    name: ums-add-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: ums-add-service
```

This whole configuration can be written in a single manifest file and deployed with a single command:

```
$ kubectl create -f <manifest-file-name>.yml
```

In order to run the load balancer, the following command also needs to run in a separate terminal:

```
$ minikube tunnel
```

You can check the deployed services using the following command:

```
$ kubectl get services
```

It gives an output that looks like Figure 3. It lists the services, their addresses, etc.

![Figure 3: Kubernetes services][3]

The following command lists the deployments, showing the number of pods of each deployment that are running:

```
$ kubectl get deployments
```

The output looks like Figure 4.

![Figure 4: Kubernetes deployment][4]

And, finally, to see the actual pods that run the containers, use the following command:

```
$ kubectl get pods
```

Figure 5 shows that there are three pods running for *AddService*, and one pod each for Zookeeper, Kafka and MySQL.

![Figure 5: Pods][5]

Since the *AddService* is exposed with an external-ip, it can be accessed using the following command:

```
$ curl -X POST -H 'Content-Type: application/json' -i http://<service-external-ip>:8080/user --data '{"name":"Krishna Mohan", "phone":9731423166}'
```
### Why this reference architecture?

Irrespective of the nature of the application, the number of microservices, the platforms on which they are developed, and services like databases, brokers, etc, the overall architecture remains much the same as what has been described here.

The development architecture focuses on service decomposition, platform selection, framework selection, design of APIs, repositories, etc. This part was addressed using our understanding of domain-driven design, object-oriented patterns, and frameworks like Spring Boot, Flask and Express.

The deployment architecture focuses on the number of machines, nodes, replica-sets, pods, addressing mechanisms, volumes, etc. This part is addressed using our understanding of container technology and Kubernetes. We will delve into design patterns associated exclusively with microservices, such as gateways, circuit breakers and registries, in the future. The good thing is that Kubernetes and other such tools implement many of these patterns out of the box.

Before going that far, we will develop the *FindService*, *SearchService* and *JournalService* on the Python and Node platforms in the next parts of this series of articles, so that we take UMS to a conclusion.
--------------------------------------------------------------------------------

via: https://www.opensourceforu.com/2022/08/microservices-deployment-architecture-with-kubernetes-clusters/

作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-1-Reference-architecture.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-2-Kubernetes-architecture.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-3-Kubernetes-services.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-4-Kubernetes-deployment.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-5-Pods.jpg
@ -0,0 +1,411 @@
[#]: subject: "How to Upgrade to Linux Mint 21 [Step by Step Tutorial]"
[#]: via: "https://itsfoss.com/upgrade-linux-mint-version/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

图解如何升级到 Linux Mint 21
======

这是一个周期性更新的指南,主要用于将现有的 Linux Mint 安装升级到一个新的可用版本。

在这篇文章中有三个部分,分别向你展示 Linux Mint 的不同的主要版本之间的升级步骤:

* 第 1 部分是关于从 Linux Mint 20.3 升级到 Linux Mint 21(GUI 升级工具)
* 第 2 部分是关于从 Linux Mint 19.3 升级到 Linux Mint 20(基于命令行的升级程序)
* 第 3 部分是关于从 Linux Mint 18.3 升级到 Linux Mint 19(假设一些人仍然在使用它)

你可以依据你当前的 Linux Mint 版本和需要来执行适当的步骤。

这篇指南已经更新,追加了从 Mint 20.3 升级到 Linux Mint 21 的步骤。Linux Mint 现在有一个 GUI 工具来升级到最新的版本。

### 在你升级到 Linux Mint 21 之前需要知道的事情

在你继续升级到 Linux Mint 21 之前,你应该考虑下面的事情:

* 你真的需要升级吗?Linux Mint 20.x 还有好几年的支持期限。
* 你将需要高速因特网连接来下载大约 1.4 GB 的升级。
* 它可能将花费几个小时的时间来完成升级过程,这主要取决于你的因特网速度。你必需有耐心。
* 制作一个 Linux Mint 21 的 live USB,并在 live 会话中尝试它是否与你的硬件系统兼容,会是一个好主意。较新的内核可能与较旧的硬件系统有兼容性问题,因此在真正升级或安装之前对其进行测试可能会为你省去很多麻烦。
* 一次全新的安装总是比一次主要版本升级要好,但是从零开始安装 Linux Mint 21 可能意味着丢失你的现有的数据。你必须在外部磁盘上进行备份。
* 尽管大部分的升级是安全的,但它也不会是 100% 成功的。你必须要有系统快照和真正的备份。
* 你只能从 Linux Mint 20.3 的 Cinnamon、Xfce 和 MATE 版本升级到 Linux Mint 21 。首先 [检查你的 Linux Mint 版本][1] 。如果你正在使用 Linux Mint 20.2 或 20.1,你需要先使用更新管理器来升级到 20.3 。如果你正在使用 Linux Mint 19,我建议你选择进行一次全新安装,而不是多次逐级升级 Mint 版本。

在你知道你将要做什么后,让我们看看如何升级到 Linux Mint 21 。

### 从 Linux Mint 20.3 升级到 Linux Mint 21

检查你的 Linux Mint 版本,并确保你正在使用 Mint 20.3 。你不能从 Linux Mint 20.1 或 20.2 直接升级到 Linux Mint 21 。

#### 步骤 1: 通过安装任意可用的更新来更新你的系统

使用 菜单 -> 系统管理 -> 更新管理器 来启动更新管理器。查看是否有可用的软件包更新。如果有可用的更新,先安装所有的软件包更新。

![Check for Pending Software Updates][2]

针对这一步骤,你也可以在终端中使用这个命令:

```
sudo apt update && sudo apt upgrade -y
```

#### 步骤 2: 在外部磁盘上备份你的文件 [可选,但是建议]

Timeshift 是一个创建系统快照的好工具,但它不是一个针对文档、图片和其它非系统的个人文件的理想工具。我建议你在一块外部磁盘上进行备份,只是为了数据安全。

当我说在一块外部磁盘上进行一次备份时,我的意思是将你的图片、文档、下载和视频等目录简单地复制粘贴到一块外部的 USB 磁盘上。

如果你没有那么大的磁盘,至少复制那些你不可丢失的最重要的文件。
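
上面的复制粘贴步骤也可以写成一个简单的脚本。下面是一个假设性的示例:为了安全演示,这里用临时目录模拟家目录和外部磁盘;实际使用时可将 `SRC` 换成 `$HOME`、`DEST` 换成类似 `/media/$USER/backup` 的真实挂载点(这些路径均为假设):

```shell
# 假设性示例:把家目录中的重要目录复制到外部磁盘的挂载点。
# 这里用 mktemp 创建的临时目录模拟 $HOME 和外部磁盘。
SRC=$(mktemp -d)
DEST=$(mktemp -d)

# 准备一个演示文件
mkdir -p "$SRC/Documents"
echo "important notes" > "$SRC/Documents/notes.txt"

# 逐个复制需要备份的目录(目录不存在时跳过)
for dir in Documents Pictures Downloads Videos; do
    if [ -d "$SRC/$dir" ]; then
        cp -r "$SRC/$dir" "$DEST/"
    fi
done

ls "$DEST/Documents"
```

这样即使某个目录不存在,脚本也会继续处理其余目录。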

#### 步骤 3: 安装升级工具

现在,你的系统已经更新,你已经准备好升级到 Linux Mint 21 了。Linux Mint 开发组提供了一个名为 [mintupgrade][3] 的 GUI 工具,用于从 Linux Mint 20.3 升级到 Linux Mint 21 。

你可以使用下面的命令来安装这个工具:

```
sudo apt install mintupgrade
```

#### 步骤 4: 从终端中运行这个 GUI 工具

你不能在应用程序菜单列表中找到这个新的 GUI 工具。要启动它,你需要在终端中输入下面的命令:

```
sudo mintupgrade
```

这个简单且全面的工具将带领你完成升级过程。

![Mint Upgrade Tool Home Page][4]

在一些初始测试后,它将提示你进行一次 Timeshift 备份。如果你已经创建了一个备份,你就已经准备好继续了。

![Upgrade Tool Prompting No Timeshift Snapshots][5]

否则,你需要在这里 [创建一个备份][6],因为这是继续升级的强制要求。

![Taking Snapshot With Timeshift][7]

一些 PPA 可能已经适用于 Ubuntu 22.04,因此也适用于 Mint 21 。但是,如果 PPA 或存储库不适用于新的版本,它可能会因为依赖关系被破坏而影响升级过程。升级工具也会就此提示你。

![Kazam PPA Does Not Support Jammy][8]

在这里,我通过其 PPA 使用了 [Kazam 的最新版本][9] 。其 PPA 仅支持到 Impish,而 Linux Mint 21 是基于 Jammy 的,所以它会显示错误。

你可以在升级工具中通过软件源来指定禁用 PPA 的选项。

![Disable Unsupported PPAs in Software Sources][10]

在禁用该 PPA 后,该软件包会变成 “陌生的”,因为其来自 PPA 的可用版本与 Mint 存储库中的可用版本不匹配。因此,你需要将软件包降级到存储库中可用的版本。

![Downgrade Package to Avoid Conflicts][11]

升级工具现在会列出需要执行的更改。

![List Changes That Need to be Done][12]

在接受后,该工具将开始下载软件包。

![Phase 2 – Simulation and Package Download][13]

![Package Downloading][14]

![Upgrading Phase][15]

它将列出孤立的软件包,它们可以被移除。你可以通过按下 <ruby>修复<rt>Fix</rt></ruby> 按钮来移除所有建议的软件包,也可以保留某些软件包。

#### 保留某些孤立的软件包

为了保留孤立软件包列表中的某些软件包,你需要从左上角的菜单转到首选项。

![Selecting Orphan Packages You Want to Keep with Preferences][16]

在首选项对话框中,你需要转到 **孤立的软件包** 部分,并使用 “+” 符号按名称添加软件包。

![Specify Name of the Package to Keep][17]

在完成后,它将继续升级。一段时间后,将会向你提示一条成功升级的通知。

![Upgrade Successful][18]

此时,你需要重新启动你的系统。在重新启动后,你将进入全新的 Linux Mint 21 。

![Neofetch Output Linux Mint 21][19]
### 如何升级到 Linux Mint 20

在你继续升级到 Linux Mint 20 之前,你应该考虑下面的事情:

* 你真的需要升级吗?Linux Mint 19.x 将会支持到 2023 年。
* 如果你 [有一台 32 位系统][20],你不能安装或升级到 Mint 20 。
* 你将需要高速因特网连接来下载大约 1.4 GB 的升级。
* 它可能将花费几个小时的时间来完成升级过程,这主要取决于你的因特网速度。你必需有耐心。
* 制作一个 Linux Mint 20 的 live USB,并在 live 会话中查看它是否与你的硬件系统兼容,会是一个好主意。较新的内核可能与较旧的硬件系统有兼容性问题,因此在真正升级或安装之前对其进行测试可能会为你省去很多麻烦。
* 一次全新的安装总是比一次主要版本升级要好,但是从零开始 [安装 Linux Mint][21] 20 可能意味着丢失你的现有的数据。你必须在外部磁盘上进行备份。
* 尽管大部分的升级是安全的,但它也不会是 100% 成功的。你必须要有系统快照和真正的备份。
* 你只能从 Linux Mint 19.3 的 Cinnamon、Xfce 和 MATE 版本升级到 Linux Mint 20 。首先 [检查你的 Linux Mint 版本][22] 。如果你正在使用 Linux Mint 19.2 或 19.1,你需要先使用更新管理器来升级到 19.3 。如果你正在使用 Linux Mint 18,我建议你选择进行一次全新安装,而不是多次逐级升级 Mint 版本。
* 升级过程是通过命令行实用程序来完成的。如果你不喜欢使用终端和命令,请避免升级,改为进行一次全新的安装。

在你知道你将要做什么后,让我们看看如何升级到 Linux Mint 20 。

![A Video from YouTube][23]

[订阅我们的 YouTube 频道以获取更多的 Linux 视频][24]

#### 步骤 1: 确保你有一台 64 位系统

Linux Mint 20 仅提供 64 位版本。如果你安装的是 32 位的 Linux Mint 19,你不能升级到 Linux Mint 20 。

在终端中,使用下面的命令来查看你是否正在使用 64 位操作系统:

```
dpkg --print-architecture
```

![Mint 20 Upgrade Check Architecture][25]
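
如果你想在脚本中自动完成这个检查,可以参考下面这个假设性的示例(其中 `uname -m` 仅作为 `dpkg` 不可用时的后备,输出的提示文字也仅为演示):

```shell
# 假设性示例:检查当前系统架构是否满足 Mint 20 的 64 位要求
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)

case "$arch" in
    amd64|x86_64)
        echo "64 位系统,可以升级"
        ;;
    *)
        echo "检测到 $arch 架构,无法升级到 Mint 20"
        ;;
esac
```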

#### 步骤 2: 通过安装任意可用的更新来更新你的系统

使用 菜单 -> 系统管理 -> 更新管理器 来启动更新管理器。查看是否有可用的软件包更新。如果有可用的更新,先安装所有的软件包更新。

![Check for pending software updates][26]

针对这一步骤,你也可以在终端中使用这个命令:

```
sudo apt update && sudo apt upgrade -y
```

#### 步骤 3: 使用 Timeshift 创建一个系统快照 [可选,但是建议]

如果你遇到升级过程中断或其它的一些重大问题,[使用 Timeshift 创建一个系统快照][27] 将会解救你于水火之中。**你甚至可以使用这种方法恢复到 Mint 19.3 。**

假设你因为意外断电或其它一些原因导致升级失败,最终得到一个残缺的、不稳定的 Linux Mint 19 。你可以插入一个 live Linux Mint USB,并从该 live 环境中运行 Timeshift 。它将会自动地定位你的备份位置,并允许你恢复你残缺的 Mint 19 系统。

这也意味着你应该随时备有一个 live Linux Mint 19 USB,因为在极少数升级失败的情况下,你可能无法访问另一台可用的计算机来制作 live Linux Mint USB 。

![Create a system snapshot in Linux Mint][28]

#### 步骤 4: 在外部磁盘上备份你的文件 [可选,但是建议]

Timeshift 是一个创建系统快照的好工具,但它不是一个针对文档、图片和其它非系统的个人文件的理想工具。我建议你在一块外部磁盘上进行备份,只是为了数据安全。

当我说在一块外部磁盘上进行一次备份时,我的意思是将你的图片、文档、下载和视频等目录简单地复制粘贴到一块外部的 USB 磁盘上。

如果你没有那么大的磁盘,至少复制那些你不可丢失的最重要的文件。
#### 步骤 5: 禁用 PPA 和第三方存储库 [可选,但是建议]
|
||||
|
||||
不出意外的话,你可能已经使用一些 [PPA][29] 或其它的存储库来安装了一下应用程序。
|
||||
|
||||
一些 PPA 可能已经适用于 Ubuntu 20.04 ,因此也适用于 Mint 20 。但是,如果 PPA 或存储库不适用于新的版本,它可能会因为依赖关系的打断而影响升级过程。
|
||||
|
||||
对此,建议你禁用 PPA 和第三方存储库。你也可以删除通过这样的外部源来安装的应用程序,如果你这样做的话,它不会导致配置数据的丢失。
|
||||
|
||||
在软件源工具中,禁用附加的存储库、禁用 PPA 。
|
||||
|
||||
![Disable Ppa Mint Upgrade][30]
|
||||
|
||||
你也可以 **降级** ,然后在维护标签页中 **移除可用的陌生的软件包** 。
|
||||
|
||||
例如,我使用一个 PPA 来安装 Shutter 。我在禁用它的 PPA 后,现在该软件包会变成 ‘陌生的’ ,因为来自存储库中可用版本会与来自 Mnit 存储库中可用版本不匹配。
|
||||
|
||||
![Foreign Package Linux Mint][31]
|
||||
|
||||
#### 步骤 6: 安装升级工具
|
||||
|
||||
现在,你的系统已经更新,你已经准备好升级到 Linux Mint 20 。Linux Mint 开发组提供一个名称为 [mintupgrade][32] 的命令行工具,其唯一的目的是将 Linux Mint 19.3 升级到 Linux Mint 20 。
|
||||
|
||||
你可用使用下面的命令来安装这个工具:
|
||||
|
||||
```
|
||||
sudo apt install mintupgrade
|
||||
```
|
||||
|
||||
#### 步骤 7: 运行一次升级设备健康检查
|
||||
|
||||
mintupgrade 工具将会让你通过模拟升级的初始化部分来运行一次设备健康检查。
|
||||
|
||||
你可以运行这次检查来查看对你的系统做出何种更改,哪些软件包将会升级。它也将会显示不能升级和必须移除的软件包。
|
||||
|
||||
```
|
||||
mintupgrade check
|
||||
```
|
||||
|
||||
在这里,它不会在你的系统上做出任何真正的更改 (即使,感觉上它正在进行做一些更改)。
|
||||
|
||||
这一步骤是非常重要的,有助于准确算出你的系统是否可以升级到 Mint 20 。
|
||||
|
||||
![Mint Upgrade Check][33]
|
||||
|
||||
如果这一步骤中途失败,输入 **mintupgrade restore-sources** 来返回到你原始的 APT 配置。
|
||||
|
||||
#### 步骤 8: 下载软件包升级
|
||||
|
||||
在你对 mintupgrade 的检查输出感到满意后,你可以下载 Mint 20 升级软件包。
|
||||
|
||||
取决于你的因特网连接速度,它可能会在下载这些升级方面消耗一些时间。确保你的硬件系统接通到强电电源。
|
||||
|
||||
在软件包的下载期间,你可以继续使用你的系统进行常规工作。
|
||||
|
||||
```
|
||||
mintupgrade download
|
||||
```
|
||||
|
||||
![Mint 20 Upgrade Download][34]
|
||||
|
||||
注意,这行命令将把你的操作系统指向 Linux Mint 20 存储库。在使用这行命令后,如果你想降级到 Linux Mint 19.3 ,你仍然可以使用命令 “**mintupgrade restore-sources**” 来做到。
|
||||
|
||||
#### 步骤 9: 安装升级 [Point of no return]
|
||||
|
||||
现在,万事俱备,你可以使用这行命令来升级到 Linux Mint 20 :
|
||||
|
||||
```
|
||||
mintupgrade upgrade
|
||||
```
|
||||
|
||||
给它一些时间来安装新的软件包和升级你的 Mint 到相对较新的版本。在升级过程完成后,它将要求你重新启动。
|
||||
|
||||
![Linux Mint 20 Upgrade Finish][35]
|
||||
|
||||
#### 享受 Linux Mint 20
|
||||
|
||||
在你重新启动你的系统后,你将看到 Mint 20 欢迎屏幕。享受新的版本。
|
||||
|
||||
![Welcome To Linux Mint 20][36]
|
||||
|
||||
### 从 Mint 18 升级到 Mint 19
|
||||
|
||||
从 Linux Mint 18.3 升级到 Linux Mint 19 的步骤与你在升级到 Linux Mint 20 中所看到的步骤非常类似。唯一的变化是检查显示管理器。
|
||||
|
||||
我将在这里快速地提及这些步骤。如果你想要更多的信息,你可以参考 Mint 20 升级过程。
|
||||
|
||||
**步骤 1:** 使用 Timeshift 创建一个系统快照 [可选,但是建议]
|
||||
|
||||
**步骤 2:** 在一块外部的磁盘上备份你的文件 [可选,但是建议]
|
||||
|
||||
**步骤 3: 确保你正在使用 LightDM**
|
||||
|
||||
对于 Mint 19 ,你必须使用 [LightDM 显示管理器][37] 。为检查你正在使用哪种显示管理器,输入命令:
|
||||
|
||||
```
|
||||
cat /etc/X11/default-display-manager
|
||||
```
|
||||
|
||||
如果结果是 “/usr/sbin/**lightdm**”,那么你就有 LightDM ,你就可以继续前进了。
|
||||
|
||||
![LightDM Display Manager in Linux Mint][38]
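
如果想在脚本中完成这个判断,可以参考下面这个假设性的示例(文件不存在或内容不含 lightdm 时,一律归类为 "other"):

```shell
# 假设性示例:判断默认显示管理器是否为 LightDM
dm_file=/etc/X11/default-display-manager

if [ -f "$dm_file" ] && grep -qi lightdm "$dm_file"; then
    dm_status="lightdm"   # 可以直接继续升级
else
    dm_status="other"     # 需要先切换到 LightDM
fi

echo "显示管理器类型:$dm_status"
```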

另一方面,如果结果是 “/usr/sbin/**mdm**”,你需要安装 LightDM,[切换到 LightDM][39] 并移除 MDM 。使用这行命令来安装 LightDM :

```
apt install lightdm lightdm-settings slick-greeter
```

在安装期间,它将要求你选择显示管理器。你需要选择 LightDM 。

在你设置 LightDM 作为你的显示管理器后,使用下面这些命令来移除 MDM 并重新启动:

```
apt remove --purge mdm mint-mdm-themes*
sudo dpkg-reconfigure lightdm
sudo reboot
```

**步骤 4: 通过安装任意可用的更新来更新你的系统**

```
sudo apt update && sudo apt upgrade -y
```

**步骤 5: 安装升级工具**

```
sudo apt install mintupgrade
```

**步骤 6: 检查升级**

```
mintupgrade check
```

**步骤 7: 下载软件包升级**

```
mintupgrade download
```

**步骤 8: 应用升级**

```
mintupgrade upgrade
```

享受 Linux Mint 19 吧。

### 你升级到 Linux Mint 21 了吗?

升级到 Linux Mint 20 可能不是一种友好的体验,但是,使用新的专用 GUI 升级工具升级到 Mint 21 就简单多了。

我希望这篇教程对你有帮助。你是选择升级到 Linux Mint 21,还是进行一次全新的安装?

如果你遇到一些重大问题,或者你有一些关于升级过程的疑问,请在评论区随时提问。
--------------------------------------------------------------------------------

via: https://itsfoss.com/upgrade-linux-mint-version/

作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/check-linux-mint-version/
[2]: https://itsfoss.com/wp-content/uploads/2022/08/check-for-pending-software-updates.png
[3]: https://github.com/linuxmint/mintupgrade/blob/master/usr/bin/mintupgrade
[4]: https://itsfoss.com/wp-content/uploads/2022/08/mint-upgrade-tool-home-page.png
[5]: https://itsfoss.com/wp-content/uploads/2022/08/upgrade-tool-prompting-no-timeshift-snapshots.png
[6]: https://itsfoss.com/backup-restore-linux-timeshift/
[7]: https://itsfoss.com/wp-content/uploads/2022/08/taking-snapshot-with-timeshift.png
[8]: https://itsfoss.com/wp-content/uploads/2022/08/kazam-ppa-does-not-support-jammy.png
[9]: https://itsfoss.com/kazam-screen-recorder/
[10]: https://itsfoss.com/wp-content/uploads/2022/08/disable-unsupported-ppas-in-software-sources.png
[11]: https://itsfoss.com/wp-content/uploads/2022/08/downgrade-package-to-avoid-conflicts.png
[12]: https://itsfoss.com/wp-content/uploads/2022/08/list-changes-that-need-to-be-done.png
[13]: https://itsfoss.com/wp-content/uploads/2022/08/phase-2-simulation-and-package-download-.png
[14]: https://itsfoss.com/wp-content/uploads/2022/08/package-downloading.png
[15]: https://itsfoss.com/wp-content/uploads/2022/08/upgrading-phase.png
[16]: https://itsfoss.com/wp-content/uploads/2022/08/selecting-orphan-packages-you-want-to-keep-with-preferences.png
[17]: https://itsfoss.com/wp-content/uploads/2022/08/specify-name-of-the-package-to-keep.png
[18]: https://itsfoss.com/wp-content/uploads/2022/08/upgrade-successful-800x494.png
[19]: https://itsfoss.com/wp-content/uploads/2022/08/neofetch-output-linux-mint-21.png
[20]: https://itsfoss.com/32-bit-64-bit-ubuntu/
[21]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
[22]: https://itsfoss.com/check-linux-mint-version/
[23]: https://youtu.be/LYnXEaiAjsk
[24]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[25]: https://itsfoss.com/wp-content/uploads/2020/07/mint-20-upgrade-check-architecture.jpg
[26]: https://itsfoss.com/wp-content/uploads/2020/07/update-manager-linux-mint.jpg
[27]: https://itsfoss.com/backup-restore-linux-timeshift/
[28]: https://itsfoss.com/wp-content/uploads/2018/07/snapshot-linux-mint-timeshift.jpeg
[29]: https://itsfoss.com/ppa-guide/
[30]: https://itsfoss.com/wp-content/uploads/2020/07/disable-ppa-mint-upgrade.jpg
[31]: https://itsfoss.com/wp-content/uploads/2020/07/foreign-package-linux-mint.jpg
[32]: https://github.com/linuxmint/mintupgrade/blob/master/usr/bin/mintupgrade
[33]: https://itsfoss.com/wp-content/uploads/2020/07/mint-upgrade-check.jpg
[34]: https://itsfoss.com/wp-content/uploads/2020/07/mint-upgrade-download.jpg
[35]: https://itsfoss.com/wp-content/uploads/2020/07/linux-mint-20-upgrade-finish.jpg
[36]: https://itsfoss.com/wp-content/uploads/2020/07/welcome-to-linux-mint-20.jpg
[37]: https://wiki.archlinux.org/index.php/LightDM
[38]: https://itsfoss.com/wp-content/uploads/2018/07/lightdm-linux-mint.jpeg
[39]: https://itsfoss.com/switch-gdm-and-lightdm-in-ubuntu-14-04/
@ -0,0 +1,185 @@
[#]: subject: "How to List USB Devices Connected to Your Linux System"
[#]: via: "https://itsfoss.com/list-usb-devices-linux/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

如何列出连接到 Linux 系统的 USB 设备
======

你如何列出 Linux 中的 USB 设备?

这个问题可以有两种含义:

* 你的系统上有(检测到)多少个 USB 端口?
* 系统挂载(插入)了多少个 USB 设备/磁盘?

大多数情况下,人们有兴趣了解哪些 USB 设备连接到了系统。这可能有助于对 USB 设备进行故障排除。

最可靠的方法是使用这个命令:

```
lsusb
```

它显示了网络摄像头、蓝牙和以太网端口,以及 USB 端口和挂载的 USB 驱动器。

![list usb with lsusb command linux][1]

但是理解 lsusb 的输出并不容易,当你只想查看和访问已挂载的 USB 驱动器时,你可能不想把事情复杂化。

我将向你展示可用于列出连接到系统的 USB 设备的各种工具和命令。

除非另有说明,我在例子中连接了一个 2GB 的 U 盘、1TB 的外置硬盘、通过 MTP 连接的 Android 智能手机和 USB 鼠标。

让我从桌面用户最简单的选项开始。

### 以图形方式检查连接的 USB 设备

你的发行版的文件管理器可用于查看连接到你的计算机的 USB 存储设备,正如你在下面的 Nautilus(GNOME 文件管理器)的截图中看到的那样。

连接的设备显示在边栏中(此处仅显示 USB 存储设备)。

![Nautilus showing connected USB devices][2]

你还可以使用 GNOME Disks 或 GParted 等 GUI 应用来查看、格式化和分区连接到计算机的 USB 存储设备。默认情况下,大多数使用 GNOME 桌面环境的发行版都预装了 GNOME Disks。

这个应用也可以作为一个非常好的 [分区管理器][3] 。

![Use GNOME Disks to list mounted USB devices][4]

图形工具就介绍到这里。让我们讨论可用于列出 USB 设备的命令。

### 使用 mount 命令列出挂载的 USB 设备

mount 命令用于挂载 Linux 中的分区。你还可以使用同一个命令列出 USB 存储设备。

通常,USB 存储设备挂载在 media 目录中。因此,用 media 关键字过滤 mount 命令的输出将为你提供所需的结果。

```
mount | grep media
```

![][5]

### 使用 df 命令

[df 命令][6] 是一个标准的 UNIX 命令,用于了解可用磁盘空间的大小。你还可以使用此命令列出已连接的 USB 存储设备。

```
df -Th | grep media
```

![Use df command to list mounted USB drives][7]

### 使用 lsblk 命令

lsblk 命令用于在终端中列出块设备。同样,通过过滤包含 media 关键字的输出,你可以获得所需的结果,如下面的截图所示。

```
lsblk | grep media
```

![Using lsblk to list connected USB devices][8]

如果你比较好奇,也可以使用 `blkid` 命令了解 UUID、标签、块大小等。

此命令的输出更多,因为你的内部驱动器也会被列出。因此,你必须参考上述命令的输出来识别你想了解的设备。

```
sudo blkid
```

![Using blkid to list connected USB devices][9]

### 使用 fdisk

fdisk 是一款不错的老式命令行分区管理器,它也可以列出连接到你计算机的 USB 存储设备。这个命令的输出也很长,因此,通常连接的设备会列在底部,如下所示。

```
sudo fdisk -l
```

![Use fidsk to list usb devices][10]

### 检查 /proc/mounts

通过检查 /proc/mounts 文件,你可以列出 USB 存储设备。如你所见,它向你显示了文件系统使用的挂载选项以及挂载点。

```
cat /proc/mounts | grep media
```

![][11]
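
作为补充,下面是一个假设性的脚本示例,演示如何用 awk 按字段解析 /proc/mounts,只保留挂载点位于 /media 下的条目(函数名和输出格式仅为演示,并非某个现有工具):

```shell
# 假设性示例:从 /proc/mounts 风格的文件中,
# 筛选挂载点($2)位于 /media 下的条目,输出 设备 -> 挂载点 (文件系统)
list_media_mounts() {
    awk '$2 ~ "^/media" { printf "%s -> %s (%s)\n", $1, $2, $3 }' "${1:-/proc/mounts}"
}

# 不带参数时默认读取 /proc/mounts
list_media_mounts
```

相比 `grep media`,按第二个字段匹配可以避免误匹配设备名或挂载选项中恰好包含 “media” 的行。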

### 使用 lsusb 命令显示所有 USB 设备

让我们重新审视著名的 lsusb 命令。

Linux 内核开发人员 [Greg Kroah-Hartman][12] 开发了这个方便的 [usbutils][13] 程序。它为我们提供了两个命令,即 `lsusb` 和 `usb-devices`,来列出 Linux 中的 USB 设备。

lsusb 命令列出系统中有关 USB 总线的所有信息。

```
lsusb
```

如你所见,与其它只能列出 USB 存储设备的命令不同,此命令还显示了我连接的鼠标和智能手机。

![][14]

第二个命令 `usb-devices` 提供了更多详细信息,但未能列出所有设备,如下所示。

```
usb-devices
```

![][15]

Greg 还开发了一个名为 [Usbview][16] 的小型 GTK 应用。此应用向你显示连接到计算机的所有 USB 设备的列表。

该应用可在大多数 Linux 发行版的官方仓库中找到。你可以使用发行版的 [包管理器][17] 轻松安装 `usbview` 包。

安装后,你可以从应用菜单启动它。你可以选择任何列出的设备以获取详细信息,如下面的截图所示。

![][18]

### 总结

列出的大多数方法仅限于 USB 存储设备。只有两种方法可以列出其它外围设备:usbview 和 usbutils 。我想我们又多了一个理由来感谢 Linux 内核开发人员 Greg 开发了这些方便的工具。

我知道还有很多方法可以列出连接到系统的 USB 设备,欢迎你提出建议。

--------------------------------------------------------------------------------

via: https://itsfoss.com/list-usb-devices-linux/

作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/08/list-usb-with-lsusb-command-linux.png
[2]: https://itsfoss.com/wp-content/uploads/2022/08/nautilus-usb.png
[3]: https://itsfoss.com/partition-managers-linux/
[4]: https://itsfoss.com/wp-content/uploads/2022/08/gnome-disks-usb.png
[5]: https://itsfoss.com/wp-content/uploads/2022/08/mount-cmd-usb.png
[6]: https://linuxhandbook.com/df-command/
[7]: https://itsfoss.com/wp-content/uploads/2022/08/df-cmd-usb.png
[8]: https://itsfoss.com/wp-content/uploads/2022/08/blkid-cmd-usb.png
[9]: https://itsfoss.com/wp-content/uploads/2022/08/blkid-cmd-usb.png
[10]: https://itsfoss.com/wp-content/uploads/2022/08/fdisk-cmd-usb.png
[11]: https://itsfoss.com/wp-content/uploads/2022/08/proc-dir-usb.png
[12]: https://en.wikipedia.org/wiki/Greg_Kroah-Hartman
[13]: https://github.com/gregkh/usbutils
[14]: https://itsfoss.com/wp-content/uploads/2022/08/lsusb-cmd.png
[15]: https://itsfoss.com/wp-content/uploads/2022/08/usb-devices-cmd.png
[16]: https://github.com/gregkh/usbview
[17]: https://itsfoss.com/package-manager/
[18]: https://itsfoss.com/wp-content/uploads/2022/08/usbview.png