Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating

This commit is contained in:
geekpi 2022-06-21 08:34:42 +08:00
commit 65aa87ccb2
14 changed files with 1339 additions and 338 deletions

View File

@@ -3,41 +3,44 @@
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "lightchaserhy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14735-1.html"
我如何利用 Linux Xface 桌面赋予旧电脑新生命
我如何利用 Xfce 桌面为旧电脑赋予新生
======
当我为了一场会议的样例演示,用笔记本电脑安装 Linux 系统后,发现旧电脑运行 Linux 系统和 Xfce 桌面非常流畅。
几周前,我要在一个会议上简要演示自己在 Linux 下编写的一款小软件。我需要带一台 Linux 笔记本电脑参会,因此我翻出一台旧笔记本电脑并且安装上 Linux 系统。我使用的是 Fedora 36 Xfce spin使用还不错。
![](https://img.linux.net.cn/data/attachment/album/202206/20/143325vfdibhvv22qvddiv.jpg)
这台我用的笔记本是在 2012 年购买的。1.70 GHZ 的 CPU4 GB 的 内存128 GB 的驱动器,也许和我现在的桌面电脑比性能很弱,但是 Linux 和 Xfce 桌面赋予这台旧电脑新的生命。
> 当我为了在一场会议上做演示,用笔记本电脑安装 Linux 系统后,发现 Linux 和 Xfce 桌面让我的这台旧电脑健步如飞。
几周前,我要在一个会议上简要演示自己在 Linux 下编写的一款小软件。我需要带一台 Linux 笔记本电脑参会,因此我翻出一台旧笔记本电脑,并且安装上 Linux 系统。我使用的是 Fedora 36 Xfce 版,用起来还不错。
这台我用的笔记本是在 2012 年购买的。1.70 GHz 的 CPU、4 GB 的内存、128 GB 的硬盘,和我现在的桌面电脑相比性能很弱,但是 Linux 和 Xfce 桌面赋予了这台旧电脑新的生命。
### Linux 的 Xfce 桌面
Xfce 桌面是一个轻量级桌面,它提供一个精美、现代的外观。熟悉的界面,有任务栏或者顶部“面板”可以启动应用程序,在系统托盘可以改变虚拟桌面,或者查看通知信息。
Xfce 桌面是一个轻量级桌面,它提供一个精美、现代的外观。熟悉的界面,有任务栏或者顶部“面板”可以启动应用程序,在系统托盘可以改变虚拟桌面,或者查看通知信息。屏幕底部的快速访问停靠区让你可以启动经常使用的应用程序,如终端、文件管理器和网络浏览器。
![Image of Xfce desktop][6]
要开始一个新应用程序,点击左上角的应用程序按钮。这将打开一个应用程序启动菜单,顶部有常用的应用程序比如终端和文件管理。另外的应用程序会分组排列,这样你可以找到所需要的应用。
要启动一个新的应用程序,点击左上角的应用程序按钮。这将打开一个应用程序启动菜单,顶部有常用的应用程序,比如终端和文件管理器。其它的应用程序会分组排列,这样你可以找到所需要的应用。
![Image of desktop applications][7]
### 管理文件
Xfce 的文件管理器时叫 Thunar它能非常好地管理我的文件。我喜欢 Thunar 可以连接远程系统,在家里,我用一个开启 SSH 的树莓派作为个人文件服务器。Thunar 可以打开一个 SSH 文件传输窗口,这样我可以在笔记本电脑和树莓派之间拷贝文件。
Xfce 的文件管理器叫 Thunar它能很好地管理我的文件。我喜欢 Thunar 可以连接远程系统,在家里,我用一个开启 SSH 的树莓派作为个人文件服务器。Thunar 可以打开一个 SSH 文件传输窗口,这样我可以在笔记本电脑和树莓派之间拷贝文件。
![Image of Thunar remote][9]
另一个访问文件和文件夹的方式是通过屏幕底部的快速访问停靠栏。点击文件夹图标可以打开一个常规操作菜单,如在终端窗口打开一个文件夹、新建一个文件夹或进入指定文件夹等。
另一个访问文件和文件夹的方式是通过屏幕底部的快速访问停靠区。点击文件夹图标可以打开一个常用操作的菜单,如在终端窗口打开一个文件夹、新建一个文件夹或进入指定文件夹等。
![Image of desktop with open folders][10]
### 其它应用程序
热爱探索 Xfce 提供的其他应用程序。Mousepad 看起来像一个简单的文本编辑器但是比起纯文本编辑它包含更多有用的功能。Mousepad 支持许多文件类型,程序员和其他高级用户也许会非常喜欢。在文档菜单检验一下可用的部分编程语言列表。
你可以尽情探索 Xfce 提供的其他应用程序。Mousepad 看起来像一个简单的文本编辑器但是比起纯文本编辑它包含更多有用的功能。Mousepad 支持许多文件类型,程序员和其他高级用户也许会非常喜欢。可以在文档菜单中查看一下部分编程语言的列表。
![Image of Mousepad file types][11]
@@ -46,21 +49,20 @@ Xfce 的文件管理器时叫 Thunar它能非常好地管理我的文件。
![Image of Mousepad in color scheme solarized][12]
磁盘工具可以让你管理存储设备。虽然我不需要修改我的系统磁盘,但磁盘工具是一个初始化或重新格式化 USB 闪存设备的好方式。我认为这个界面非常简单好用。
![Image of disk utility][13]
我非常钦佩带有 Geany 集成开发的环境,我有一点惊讶一个旧系统可以如此流畅地运行一个完整的 IDE 开发软件
Geany 集成开发环境也给我留下了深刻印象我有点惊讶于一个完整的集成开发软件IDE可以在一个旧系统上如此流畅地运行。Geany 宣称自己是一个“强大、稳定和轻量级的程序员文本编辑器,提供大量有用的功能,而不会拖累你的工作流程”。而这正是 Geany 所提供的。
我用一个简单的 “hello world” 程序测试 Geany当我输入每一个函数名称时很高兴地看到 IDE 弹出语法帮助,弹出的信息并不唐突且刚好提供了我需要的信息。同时 printf 函数非常容易记住,我总是忘记其它函数的选项顺序,比如 fputs 和 realloc,这就是我需要弹出语法帮助的地方。
我用一个简单的 “hello world” 程序测试 Geany当我输入每一个函数名称时很高兴地看到 IDE 弹出语法帮助,弹出的信息并不特别显眼,且刚好提供了我需要的信息。虽然我能很容易记住 `printf` 函数,但总是忘记诸如 `fputs` 和 `realloc` 之类的函数的选项顺序,这就是我需要弹出语法帮助的地方。
![Image of Geany workspace][14]
在 Xfce 里探索菜单寻找其它应用程序让你的工作更简单,你将找到可以播放音乐、访问终端或浏览网页的应用程序。
深入了解 Xfce 的菜单,寻找其它应用程序,让你的工作更简单,你将找到可以播放音乐、访问终端或浏览网页的应用程序。
当我安装 Linux 到笔记本电脑,在会议上演示一些样例后,发现 Linux 和 Xfce 桌面让这台旧电脑变得更时尚。这个系统运行得如此流畅,当会议结束后,我决定把这台笔记本电脑作为备用机。
当我在笔记本电脑上安装了 Linux在会议上做了一些演示后我发现 Linux 和 Xfce 桌面让这台旧电脑变得相当敏捷。这个系统运行得如此流畅,以至于当会议结束后,我决定把这台笔记本电脑作为备用机。
我喜爱在 Xfce 上使用应用程序工作,尽管它有非常低的系统开销和极简单的方法,但我并没有感觉到不够用,我可以用 Xfce 和上面的应用程序做任何事情。如果你有一台需要翻新的旧电脑,试试安装 Linux给旧硬件带来新的生命。
图片来源: (Jim Hall, CC BY-SA 4.0)
我确实喜欢在 Xfce 中工作和使用这些应用程序,尽管系统开销不大,使用也很简单,但我并没有感觉到不够用,我可以用 Xfce 和上面的应用程序做任何事情。如果你有一台需要翻新的旧电脑,试试安装 Linux给旧硬件带来新的生命。
--------------------------------------------------------------------------------
@@ -69,7 +71,7 @@ via: https://opensource.com/article/22/6/linux-xfce-old-laptop
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[lightchaserhy](https://github.com/lightchaserhy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -3,13 +3,16 @@
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14736-1.html"
使用 Flatseal 管理 Flatpak 的权限
======
了解如何使用 Flatseal 应用管理 Flatpak 权限,它为你提供了一个友好的 GUI 和额外的功能。
![](https://img.linux.net.cn/data/attachment/album/202206/20/151550qkrkpjw4f9dpjo50.jpg)
> 了解如何使用 Flatseal 应用管理 Flatpak 权限,它为你提供了一个友好的 GUI 和额外的功能。
从新用户的角度来看,在 Linux 中安装应用可能是一个挑战。主要原因是有这么多的 [Linux 发行版][1]。而你需要为各种 Linux 发行版提供不同的安装方法或说明。对于一些用户来说,这可能会让他们不知所措。此外,对于开发者来说,为不同的发行版创建独立的软件包和构建也很困难。
@@ -39,7 +42,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
当打开 Flatseal 应用时,它应该在左边的导航栏列出所有的 Flatpak 应用。而当你选择了一个应用,它就会在右边的主窗口中显示可用的权限设置。
现在,对于每个 Flatpak 权限控制,当前值显示在切换开关中。如果该权限正在使用中,它应该被设置。否则,它应该是灰色的。
现在,对于每个 Flatpak 权限控制,当前值显示在切换开关中。如果该权限正在使用中,它应该被启用。否则,它应该是灰色的。
首先,要设置权限,你必须进入你系统中的对应应用。然后,你可以从权限列表中启用或禁用相应的控制项。
@@ -57,7 +60,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
![Figure 3: Telegram Desktop Flatpak App does not have permission to the home folders][4]
现在,如果我想允许所有的用户文件和任何特定的文件夹(例如:/home/Downloads),你可以通过打开启用开关来给予它。请看下面的图 4。
现在,如果我想允许所有的用户文件和某个特定的文件夹(例如:`/home/Downloads`),你可以通过打开启用开关来给予它。请看下面的图 4。
![Figure 4: Permission changed of Telegram Desktop to give access to folders][5]
@@ -69,7 +72,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
```
flatpak override org.telegram.desktop --filesystem=/home/Downloads
```
而要删除:
而要删除权限
```
flatpak override org.telegram.desktop --nofilesystem=/home/Downloads
```
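顺便说一下,若想确认上述修改是否生效,可以用 `flatpak override` 自带的 `--show` 参数查看某个应用当前的全部覆盖设置(仍以上文的 Telegram 为例):

```
flatpak override --show org.telegram.desktop
```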
@@ -79,7 +82,7 @@ Flatseal 还有一个很酷的功能,它在用户特定的权限变化旁边
### 我可以在所有的 Linux 发行版中安装 Flatseal 吗?
是的,你可以把 [Flatseal][6] 作为 Flatpak 安装在所有 Linux 发行版中。你可以使用[本指南][7]设置你的系统,并运行以下命令进行安装。或者,[点击这里][8]直接启动特定系统的安装程序。
是的,你可以把 [Flatseal][6] 作为 Flatpak 安装在所有 Linux 发行版中。你可以使用 [本指南][7] 设置你的系统,并运行以下命令进行安装。或者,[点击这里][8] 直接启动特定系统的安装程序。
```
flatpak install flathub com.github.tchx84.Flatseal
```
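安装完成后,除了从应用菜单打开,也可以用标准的 `flatpak run` 命令从终端启动它:

```
flatpak run com.github.tchx84.Flatseal
```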
@@ -96,7 +99,7 @@ via: https://www.debugpoint.com/2022/06/manage-flatpak-permission-flatseal/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -3,23 +3,24 @@
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14734-1.html"
有研究表明,推特能够推动开源项目的普及
======
![推特][1]
由 HongBo Fang 博士领导的研究团队发现,推特是一种吸引更多人关注和贡献 GitHub 开源项目的有效方式。Fang 博士在国际软件工程会议上发表了这项名为“‘这真是太棒了!’估计推文对开源项目受欢迎程度和新贡献者的影响”的研究,并获得了杰出论文奖。这项研究显示,发送和一个项目有关的推文,导致了该项目受欢迎程度增加了 7%(在 GitHub 上至少增加了一颗 star贡献者数量增加了 2%。一个项目收到的推文越多,它收到的 star 和贡献者就越多。
由 HongBo Fang 博士领导的研究团队发现,推特是一种吸引更多人关注和贡献 GitHub 开源项目的有效方式。Fang 博士在国际软件工程会议上发表了这项名为“‘这真是太棒了!’估计推文对开源项目受欢迎程度和新贡献者的影响”的研究,并获得了杰出论文奖。这项研究显示,发送一条与某个项目有关的推文,会使该项目的受欢迎程度增加 7%(在 GitHub 上至少增加一个星标),贡献者数量增加 2%。一个项目收到的推文越多,它收到的星标和贡献者就越多。
Fang 说:“我们已经意识到社交媒体在开源社区中变得越来越重要,吸引关注和新的贡献者将带来更高质量和更好的软件。”
大多数开源软件都是由志愿者创建和维护的。参与项目的人越多,结果就越好。开发者和其他人使用该软件、报告问题并努力解决这些问题。然而,不受欢迎的项目有可能得不到应有的关注。这些劳动力(几乎都是志愿者),维护了数百万人每天依赖的软件。例如,几乎每个 HTTPS 网站都使用开源的 OpenSSL 保护其内容。Heartbleed 是 OpenSSL 中发现的一个安全漏洞,在 2014 年被发现后,企业花费了数百万美元来修复它。另一个开源软件 cURL 允许连接的设备相互发送数据,并安装在大约 10 亿台设备上。开源软件之多,不胜枚举。
此次“推特对提高开源项目的受欢迎程度和吸引新贡献者的影响”的研究,其实是 “Vasilescu 数据挖掘与社会技术研究实验室” (STRUDEL) 的一个更大项目的其中一部分,该研究着眼于如何建立开源社区并且其工作更具可持续性。毕竟,支撑现代技术的数字基础设施、道路和桥梁都是开源软件。如果维护不当,这些基础设施可能会崩溃。
此次“推特对提高开源项目的受欢迎程度和吸引新贡献者的影响”的研究,其实是 “Vasilescu 数据挖掘与社会技术研究实验室”STRUDEL的一个更大项目的其中一部分,该研究着眼于如何建立开源社区并且其工作更具可持续性。毕竟,支撑现代技术的数字基础设施、道路和桥梁都是开源软件。如果维护不当,这些基础设施可能会崩溃。
研究人员检查了 44544 条推文,其中包含指向 2370 个开源 GitHub 存储库的链接,以证明这些推文确实吸引了新的 star 和项目贡献者。在这项研究中,研究人员使用了一种科学的方法:将 Twitter 上提及的 GitHub 项目的 star 和贡献者的增加,与 Twitter 上未提及的一组项目进行了比较。该研究还描述了高影响力推文的特征、可能被帖子吸引到项目的人的类型,以及这些人与通过其他方式吸引的贡献者有何不同。来自项目支持者而不是开发者的推文最能吸引注意力。请求针对特定任务或项目提供帮助的帖子会收到更高的回复率。推文往往会吸引新的贡献者,**他们是 GitHub 的新手,但不是经验不足的程序员**。还有,**新的关注可能不会带来新的帮助**。
研究人员检查了 44544 条推文,其中包含指向 2370 个开源 GitHub 存储库的链接,以证明这些推文确实吸引了新的星标和项目贡献者。在这项研究中,研究人员使用了一种科学的方法:将推特上提及的 GitHub 项目的星标和贡献者的增加,与推特上未提及的一组项目进行了比较。该研究还描述了高影响力推文的特征、可能被帖子吸引到项目的人的类型,以及这些人与通过其他方式吸引的贡献者有何不同。来自项目支持者而不是开发者的推文最能吸引注意力。请求针对特定任务或项目提供帮助的帖子会收到更高的回复率。推文往往会吸引新的贡献者,**他们是 GitHub 的新手,但不是经验不足的程序员**。还有,**新的关注可能不会带来新的帮助**。
提高项目受欢迎程度也存在其缺点,研究人员讨论后认为,它的潜在缺点之一,就是注意力和行动之间的差距。**更多的关注通常会导致更多的功能请求或问题报告,但不一定有更多的开发者来解决它们**。社交媒体受欢迎程度的提高,可能会导致有更多的“巨魔”或“有毒行为”出现在项目周围。
@@ -32,7 +33,7 @@ via: https://www.opensourceforu.com/2022/06/according-to-studies-twitter-drives-
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,91 @@
[#]: subject: "Manjaro 21.3.0 Ruah Release Adds Latest Calmares 3.2, GNOME 42, and More Upgrades"
[#]: via: "https://news.itsfoss.com/manjaro-21-3-0-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Manjaro 21.3.0 Ruah Release Adds Latest Calamares 3.2, GNOME 42, and More Upgrades
======
Manjaro Linux 21.3.0 release packs in some of the latest and greatest updates, including an improved installer.
![manjaro 21.3.0][1]
Manjaro Linux is a rolling-release distribution. So, technically, you will be on the latest version if you regularly update your system.
It should not be a big deal to upgrade to Manjaro 21.3.0, considering I have already been running it without issues for a few days before the official announcement.
**Also,** you might want to read my initial experience [switching to Manjaro from Ubuntu][2] (if you're still on the fence).
So, what does the Manjaro 21.3.0 upgrade introduce?
### Manjaro 21.3.0: Whats New?
![][3]
The desktop environments have been upgraded to their latest stable versions, while the core [Linux Kernel 5.15 LTS][4] remains.
Also, this release includes the final Calamares v3.2 version. Let us take a look at the changes:
#### Calamares v3.2.59
Calamares v3.2.59 installer is the final release of the 3.2 series with meaningful improvements. This time the partition module includes support for LUKS partitions and more refinements to avoid settings that can mess up the Manjaro installation.
All the future releases for Calamares 3.2 will be bug fixes only.
#### GNOME 42 + Libadwaita
While the initial release included GNOME 42, now we have GNOME 42.2 available with the latest updates.
Overall, you get all the goodies introduced with [GNOME 42][5], including the system-wide dark mode, a modern user interface based on GTK 4 for GNOME apps, upgraded applications, and several other significant changes.
![][6]
#### KDE Plasma 5.24
Unfortunately, the release couldn't feature [KDE Plasma 5.25][7], considering it was released around the same week.
[KDE Plasma 5.24][8] is a nice upgrade, with a refreshed theme and an overview effect.
#### Xfce 4.16
With Xfce 4.16, the window manager received numerous updates and refinements to support fractional scaling and more capabilities.
### Download Manjaro 21.3.0
As of now, I have no issues with Manjaro 21.3.0 GNOME edition. Everything looks good, and the upgrade went smoothly.
However, you should always take backups if you do not want to re-install or lose your important files.
You can download the latest version from [Manjaros download page][9]. The upgrade should be available through the pamac package manager.
In either case, you can enter the following command in the terminal to upgrade:
```
sudo pacman -Syu
```
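If you prefer the Pamac package manager mentioned above, a rough equivalent is sketched below (flags can vary between Pamac versions, so verify with `pamac --help`):

```
pamac upgrade
```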
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/manjaro-21-3-0-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-21-3-0-ruah-release.jpg
[2]: https://news.itsfoss.com/manjaro-linux-experience/
[3]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-gnome-42-2-1024x576.jpg
[4]: https://news.itsfoss.com/linux-kernel-5-15-release/
[5]: https://news.itsfoss.com/gnome-42-release/
[6]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-21-3-neofetch.png
[7]: https://news.itsfoss.com/kde-plasma-5-25-release/
[8]: https://news.itsfoss.com/kde-plasma-5-24-lts-release/
[9]: https://manjaro.org/download/

View File

@@ -0,0 +1,39 @@
[#]: subject: "Microsoft To Charge For Available Open Source Software In Microsoft Store"
[#]: via: "https://www.opensourceforu.com/2022/06/microsoft-to-charge-for-available-open-source-software-in-microsoft-store/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Microsoft To Charge For Available Open Source Software In Microsoft Store
======
![microsoft][1]
On June 16, 2022, Microsoft updated the Microsoft Store policies. One of the changes prohibits publishers from charging fees for open source or freely available software. Another targets the store's use of irrationally high pricing. If you've been to the Microsoft Store in the last few years, you've probably noticed that it's becoming more and more a home to open source and free products. While this would be beneficial if the original developers had uploaded the apps and games to the store, that is not the case, because the uploads were made by third parties.
Worse, many of these programs are only available as paid applications rather than free downloads. In other words, Microsoft customers must pay to purchase a Store version of an app that is free elsewhere. In the Store, free and paid versions coexist at times. Paying for a free app is bad enough, but this isn't the only problem that users may encounter when they make the purchase. Updates may also be a concern, as copycat programs may not be updated as frequently or as quickly as the source applications.
In the updated Microsoft Store Policies, Microsoft notes under 10.8.7:
In cases where you determine the pricing for your product or in-app purchases, all pricing for your digital products or services, including sales or discounts, must:
Comply with all applicable laws, regulations, and regulatory guidelines, including the Federal Trade Commission's Guides Against Deceptive Pricing. You must not attempt to profit from open-source or other software that is otherwise freely available, nor should your product be priced irrationally high in comparison to the features and functionality it provides.
The new policies are confirmed in the updated section. Open source and free products may no longer be sold on the Microsoft Store if they are generally available for free, and publishers may no longer charge irrationally high prices for their products. Developers of open source and free applications may charge for their products on the Microsoft Store; for example, the developer of Paint.net does so. Many applications will be removed from the Store if Microsoft enforces the policies. Developers could previously report applications to Microsoft, but the new policies give Microsoft direct control over application listings and submissions.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/microsoft-to-charge-for-available-open-source-software-in-microsoft-store/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/microsoft-e1655714723942.jpg

View File

@@ -1,299 +0,0 @@
[#]: subject: "Apache Kafka: Asynchronous Messaging for Seamless Systems"
[#]: via: "https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apache Kafka: Asynchronous Messaging for Seamless Systems
======
Apache Kafka is one of the most popular open source message brokers. Found in almost all microservices environments, it has become an important component of Big Data manipulation. This article gives a brief description of Apache Kafka, followed by a case study that demonstrates how it is used.
![Digital-backgrund-connecting-in-globe][1]
Have you ever wondered how e-commerce platforms are able to handle immense traffic without getting stuck? Ever thought about how OTT platforms are able to deliver content to millions of users, smoothly and simultaneously? The key lies in their distributed architecture.
A system designed around distributed architecture is made up of multiple functional components. These components are usually spread across several machines, which collaborate with each other by exchanging messages asynchronously over a network. Asynchronous messaging is what enables scalable, non-blocking communication among components, thereby allowing smooth functioning of the overall system.
### Asynchronous messaging
The common features of asynchronous messaging are:
* The producers and consumers of the messages are not aware of each other. They join and leave the system without the knowledge of the others.
* A message broker acts as the intermediary between the producers and consumers.
* The producers associate each of the messages with a type, known as topic. A topic is just a simple string.
* It is possible that producers send messages on multiple topics, and multiple producers send messages on the same topic.
* The consumers register with the broker for messages on one or more topics.
* The producers send the messages only to the broker, and not to the consumers.
* The broker, in turn, delivers the messages to all the consumers that are registered against the topic.
* The producers do not expect any response from the consumers. In other words, the producers and consumers do not block each other.
There are several message brokers available in the market, and Apache Kafka is one of the most popular among them.
### Apache Kafka
Apache Kafka is an open source distributed messaging system with streaming capabilities, developed by the Apache Software Foundation. Architecturally, it is a cluster of several brokers that are coordinated by the Apache Zookeeper service. These brokers share the load on the cluster while receiving, persisting, and delivering the messages.
#### Partitions
Kafka writes messages into buckets known as partitions. A given partition holds messages only on one topic. For example, Kafka writes messages on the topic heartbeats into the partition named *heartbeats-0*, irrespective of the producer of the messages.
![Figure 1: Asynchronous messaging][2]
However, in order to leverage the cluster-wide parallel processing capabilities of Kafka, administrators often create more than one partition for a given topic. For instance, if the administrator creates three partitions for the topic heartbeats, Kafka names them as *heartbeats-0, heartbeats-1,* and *heartbeats-2.* Kafka writes the heartbeat messages across all the three partitions in such a way that the load is evenly distributed.
There is yet another possible scenario in which the producers associate each of the messages with a key. For example, a component uses C1 as the key while another component uses C2 as the key for the messages that they produce on the topic heartbeats. In this scenario, Kafka makes sure that the messages on a topic with a specific key are always found only in one partition. However, it is quite possible that a given partition may hold messages with different keys. Figure 2 presents a possible message distribution among the partitions.
![Figure 2: Message distribution among the partitions][3]
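You can try out keyed messages yourself with the console tools bundled with Kafka (introduced later in this article). The sketch below assumes the cluster from the installation section is running and a *heartbeats* topic exists; the keys and payloads are hypothetical:

```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic heartbeats \
    --property parse.key=true --property key.separator=:
# At the prompt, everything before ':' becomes the message key:
#   C1:beat-42
#   C2:beat-43
```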
#### Leaders and ISRs
Kafka maintains several partitions across the cluster. The broker on which a partition is maintained is called the leader for the specific partition. Only the leader receives and serves the messages from its partitions.
But what happens to a partition if its leader crashes? To ensure business continuity, every leader replicates its partitions on other brokers. The latter act as the in-sync-replicas (ISRs) for the partition. In case the leader of a partition crashes, Zookeeper conducts an election and names an ISR as the new leader. Thereafter, the new leader takes the responsibility of writing and serving the messages for that partition. Administrators can choose how many ISRs are to be maintained for a topic.
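Once a cluster is running (see the installation section below), the bundled kafka-topics.sh tool reports the leader and ISRs of each partition; a sketch, assuming the single-broker setup used later in this article:

```
./bin/kafka-topics.sh --describe --topic topic-1 --zookeeper localhost:2181
# Prints one line per partition with its Leader, Replicas, and Isr.
```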
![Figure 3: Command-line producer][4]
#### Message persistence
The brokers map each of the partitions to a specific file on the disk, for persistence. By default, they keep the messages for a week on the disk! The messages and their order cannot be altered once they are written to a partition. Administrators can configure policies like message retention, compaction, etc.
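Retention is one such per-topic policy. As a sketch, the bundled kafka-configs.sh tool can override the broker default for a single topic; the one-week value below (in milliseconds) is only an example:

```
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --entity-type topics --entity-name topic-1 \
    --add-config retention.ms=604800000
```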
![Figure 4: Command-line consumer][5]
#### Consuming the messages
Unlike most other messaging systems, Apache Kafka does not actively deliver the messages to its consumers. Instead, it is the responsibility of the consumers to listen to the topics and read the messages. A consumer can read messages from more than one partition of a topic. And it is also possible that multiple consumers read messages from a given partition. Kafka guarantees that no message is read more than once by a given consumer.
Kafka also expects that every consumer is identified with a group ID. Consumers with the same group ID form a group. Typically, in order to read messages from N number of topic partitions, an administrator creates a group with N number of consumers. This way, each consumer of the group reads messages from its designated partition. If the group consists of more consumers than the available partitions, the excess consumers remain idle.
In any case, Apache Kafka guarantees that a message is read only once at the group level, irrespective of the number of consumers in the group. This architecture gives consistency, high-performance, high scalability, near-real-time delivery, and message persistence along with zero-message loss.
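The bundled kafka-consumer-groups.sh tool shows how the partitions of a topic are divided among the consumers of a group; the group ID below is a hypothetical example:

```
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-group
# Lists each partition with its assigned consumer and current lag.
```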
### Installing and running Kafka
Although, in theory, the Apache Kafka cluster can consist of any number of brokers, most of the clusters in production environments usually consist of three or five of these.
Here, we will set up a single-broker cluster that is good enough for the development environment.
Download the latest version of Kafka from *https://kafka.apache.org/downloads* using a browser. It can also be downloaded with the following command, on a Linux terminal:
```
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/2.8.0/kafka_2.12-2.8.0.tgz
```
We can move the downloaded archive file *kafka_2.12-2.8.0.tgz* to some other folder, if needed. Extracting the archive creates a folder by the name *kafka_2.12-2.8.0*, which will be referred to as *KAFKA_HOME* hereafter.
Open the file *server.properties* under the *KAFKA_HOME/config* folder and uncomment the line with the following entry:
```
listeners=PLAINTEXT://:9092
```
This configuration enables Apache Kafka to receive plain text messages on port 9092, on the local machine. Kafka can also be configured to receive messages over a secure channel, which is recommended in production environments.
Irrespective of the number of brokers, Apache Zookeeper is required for broker management and coordination. This is true even in the case of single-broker clusters. Since Zookeeper is already bundled with Kafka, we can start it with the following command from *KAFKA_HOME*, on a terminal:
```
./bin/zookeeper-server-start.sh ./config/zookeeper.properties
```
Once Zookeeper starts running, Kafka can be started in another terminal, with the following command:
```
./bin/kafka-server-start.sh ./config/server.properties
```
With this, a single-broker Kafka cluster is up and running.
### Verifying Kafka
Let us publish and receive messages on the topic topic-1. A topic can be created with a chosen number of partitions with the following command:
```
./bin/kafka-topics.sh --create --topic topic-1 --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The above command also specifies the replication factor, which should be less than or equal to the number of brokers in the cluster. Since we are working on a single-broker cluster, the replication factor is set to one.
Once the topic is created, producers and consumers can exchange messages on that topic. The Kafka distribution includes a producer and a consumer for test purposes. Both of these are command-line tools.
To invoke the producer, open the third terminal and run the following command:
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1
```
This command displays a prompt at which we can key in simple text messages. Because of the given options on the command, the producer sends the messages on *topic-1* to the Kafka broker running on port 9092 on the local machine.
Open the fourth terminal and run the following command to start the consumer tool:
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 --from-beginning
```
This command starts the consumer that connects to the Kafka on port number 9092 on the local machine. It registers for reading the messages on topic-1. Because of the last option on the command line, the consumer receives all the messages on the chosen topic from the beginning.
Since the producer and consumer are connecting to the same broker and referring the same topic, the consumer receives and displays the messages on its terminal.
Now, lets use Kafka in the context of a practical application.
### Case study
ABC is a hypothetical bus transport company, which has a fleet of passenger buses that ply between different cities across the country. Since ABC wants to track each bus in real-time for improving the quality of its operations, it comes up with a solution around Apache Kafka.
ABC first equips all its buses with devices to track their location. An operations centre is set up with Apache Kafka, to receive the location updates from each of the hundreds of buses. A dashboard is developed to display the current status of all the buses at any point in time. Figure 5 represents this architecture.
![Figure 5: Kafka based architecture][6]
In this architecture, the devices on the buses act as the message producers. They send their current location to Kafka on the topic *abc-bus-location*, periodically. For processing the messages from different buses, ABC chooses to use the trip code as the key. For example, if the bus from Bengaluru to Hubballi runs with the trip code *BLRHBL003*, then *BLRHBL003* becomes the key for all the messages from that specific bus during that specific trip.
The dashboard application acts as the message consumer. It registers with the broker against the same topic *abc-bus-location*. Consequently, the topic becomes the virtual channel between the producers (buses) and the consumer (dashboard).
The devices on the buses never expect any response from the dashboard application. In fact, none of them is even aware of the presence of the others. This architecture enables non-blocking communication between hundreds of buses and the central office.
#### Implementation
Let's assume that ABC wants to create three partitions for maintaining the location updates. Since the development environment has only one broker, the replication factor should be set to one.
The following command creates the topic accordingly:
```
./bin/kafka-topics.sh --create --topic abc-bus-location --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The producer and consumer applications can be written in multiple languages like Java, Scala, Python, JavaScript, and a host of others. The code in the following sections provides a peek into the way they are written in Java.
##### Java producer
The Fleet class simulates the Kafka producer applications running on six buses of ABC. It sends location updates on *abc-bus-location* to the specified broker. Please note that the topic name, message keys, message body, and broker address are hard-coded only for simplicity.
```
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.IntStream;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Fleet {
    public static void main(String[] args) throws Exception {
        String broker = "localhost:9092";
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        String topic = "abc-bus-location";
        // Trip codes serve as the message keys; coordinates are the values.
        Map<String, String> locations = new HashMap<>();
        locations.put("BLRHBL001", "13.071362, 77.461906");
        locations.put("BLRHBL002", "14.399654, 76.045834");
        locations.put("BLRHBL003", "15.183959, 75.137622");
        locations.put("BLRHBL004", "13.659576, 76.944675");
        locations.put("BLRHBL005", "12.981337, 77.596181");
        locations.put("BLRHBL006", "13.024843, 77.546983");

        // Send ten rounds of updates, one message per bus per round.
        IntStream.range(0, 10).forEach(i -> {
            for (String trip : locations.keySet()) {
                ProducerRecord<String, String> record
                        = new ProducerRecord<String, String>(
                                topic, trip, locations.get(trip));
                producer.send(record);
            }
        });
        producer.flush();
        producer.close();
    }
}
```
##### Java consumer
The Dashboard class implements the Kafka consumer application and it runs at the ABC Operations Centre. It listens to *abc-bus-location* with the group ID *abc-dashboard* and displays the location details from different buses as soon as messages are available. Here, too, many details that would normally be configurable are hard-coded:
```
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Dashboard {
    public static void main(String[] args) {
        String broker = "127.0.0.1:9092";
        String groupId = "abc-dashboard";
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        @SuppressWarnings("resource")
        Consumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("abc-bus-location"));

        // Poll forever, printing every location update as it arrives.
        while (true) {
            ConsumerRecords<String, String> records
                    = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                String topic = record.topic();
                int partition = record.partition();
                String key = record.key();
                String value = record.value();
                System.out.println(String.format(
                        "Topic=%s, Partition=%d, Key=%s, Value=%s",
                        topic, partition, key, value));
            }
        }
    }
}
```
##### Dependencies
A JDK of version 8 or later is required to compile and run this code. The following Maven dependencies in the *pom.xml* download and add the required Kafka client libraries to the classpath:
```
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
```
#### Deployment
As the topic *abc-bus-location* is created with three partitions, it makes sense to run three consumers to read the location updates quickly. For that, run the Dashboard in three different terminals simultaneously. Since all the three instances of Dashboard register with the same group ID, they form a group. Kafka attaches each Dashboard instance with a specific partition.
Once the Dashboard instances are up and running, start the *Fleet* on a different terminal. Figure 6, Figure 7, and Figure 8 are sample console messages on the Dashboard terminals.
![Figure 6: Dashboard Terminal 1][7]
A closer look at the console messages reveals that the consumers on the first, second and third terminals are reading messages from *partition-2, partition-1,* and *partition-0,* in that order. Also, it can be observed that the messages with the keys *BLRHBL002*, *BLRHBL004* and *BLRHBL006* are written into *partition-2*, the messages with the key *BLRHBL005* are written into *partition-1*, and the remaining are written into *partition-0*.
![Figure 7: Dashboard Terminal 2][8]
The good thing about Kafka is that it can be scaled horizontally to support a large number of buses and millions of messages as long as the cluster is designed appropriately.
![Figure 8: Dashboard Terminal 3][9]
### Beyond messaging
More than 80 per cent of the Fortune 100 companies are using Kafka, according to its website. It is deployed across many industry verticals like financial services, entertainment, etc. Though Apache Kafka started its journey as a simple messaging service, it has propelled itself into the Big Data ecosystem with industry-level stream processing capabilities. For the enterprises that prefer a managed solution, Confluent offers a cloud based Apache Kafka service for a subscription fee.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Digital-backgrund-connecting-in-globe.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-1-Asynchronous-messaging.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-2-Message-distribution-among-the-partitions.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-3-Command-line-producer.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-4-Command-line-consumer.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-5-Kafka-based-architecture.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-6-Dashboard-Terminal-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-7-Dashboard-Terminal-2.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-8-Dashboard-Terminal-3.jpg

View File

@@ -0,0 +1,150 @@
[#]: subject: "Are Low-Code Platforms Helpful for Professional Developers?"
[#]: via: "https://www.opensourceforu.com/2022/06/are-low-code-platforms-helpful-for-professional-developers/"
[#]: author: "Radhakrishna Singuru https://www.opensourceforu.com/author/radhakrishna-singuru/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Are Low-Code Platforms Helpful for Professional Developers?
======
Over the years, low-code platforms have matured immensely, and are being used in varied domains of software development for better productivity and quality. This article explores the possibility of leveraging low-code platforms for complex product development by professional developers.
![Low-Code-platfroms-developers][1]
In the last several years, companies have invested a lot of time and energy in innovating and improving the overall process of software product development. Agile working methods have helped in making the process of product development a lot smoother. However, developers still face the challenge of meeting ever expanding customer requirements quickly and easily.
In order to meet these requirements in totality, developers need tools and platforms that can enable quick delivery of software by reducing the coding timelines and without compromising on the quality aspects.
No-code platforms are one set of tools that enable creation of application or product software with zero coding. These use a Lego building-block approach that eliminates the need for hand coding and focuses on just configuration of functions based on graphical modelling experiences. These tools are more relevant to a class of business users called citizen developers, who can use them to optimise a specific process or a function by developing their own applications.
Contrary to no-code platforms, low-code platforms do not try to eliminate the need for coding. Instead, they aim to make development of software easier than the traditional method of hard coding each line of a program or software. This approach minimises hard coding with prepackaged templates, graphic design techniques, and drag-and-drop tools to make software.
The focus of this article is on general-purpose low-code platforms that can be used by professional developers to build enterprise grade applications or products.
### Different types of low-code platforms
Low-code platforms cater to different use cases. Depending on the intended usage or purpose, these can be classified as follows.
* General-purpose: These platforms can create virtually any type of product or application. With a general-purpose platform, users can build software that serves a wide variety of needs and can be deployed on cloud or on-premise.
* Process: These platforms focus specifically on software that runs business processes such as forms, workflows or integrations with other systems.
* Request handling: Request handling low-code platforms are similar to process-based low code, but are less capable. They can only handle processing requests for fixed processes.
* Database: Database low-code platforms are useful if users have large amounts of data that needs to feed into a system, without spending a lot of time on the task.
* Mobile application development platform (MADP): These platforms help developers code, test, and launch mobile applications for smartphones and tablets.
#### Key features of generic low-code platforms
General-purpose enterprise low-code development platforms support rapid software development, deployment, execution and management using declarative, high-level programming abstractions such as model-driven and metadata-based programming languages, and one-click deployments.
The key features supported by general-purpose low-code platforms are as follows.
* Visual modelling: These platforms have a comprehensive visual modelling capability, including business processes, integration workflows, UIs, business logic, data models, Web services, and APIs.
* Databases: Support for a visual editor, Excel sheet import or use of existing data models from a different database.
* Pre-built templates: Support a variety of pre-built templates that can serve as a starting point to get an application up and running quickly. Using a well designed and tested template not only increases productivity but also helps in building a more reliable and secure application.
* Integration: Provide easy integration with external enterprise systems, databases, and custom apps.
* Security and scalability: The right low-code platform makes it easy to create enterprise grade software that is secure and scalable.
* Metrics: Support for gathering metrics and monitoring the software.
* Life cycle management: Support for version management and ticket management, as well as Agile or scrum tools.
* Reusability: The code generated is reusable and can integrate easily with general-purpose IDEs, with multi-platform support for testing and staging.
* Deployment options: Ability to deploy on public or private clouds with support for container images. Also support preview functionality before publishing to the cloud.
* Licensing: Flexible licensing models with no vendor lock-in.
* Others: The platforms include other services, such as project management and analytics.
### Generic low-code platforms for professional developers
The main objective of a generic low-code platform is to help reduce the time spent by a developer working on a product as compared to traditional hand coding. Using visual interfaces, drag-and-drop modules, and more, these low-code platforms largely reduce the manual effort of coding, but some coding is still needed to completely build a product.
Generic low-code platforms cannot be directly used to build low-level products like in-memory grids, Big Data processing algorithms, image recognition, etc. However, over the years they have evolved a lot to cover the widest range of capabilities for enterprise-grade development and full life cycle management, including business process management (BPM), integration workflows, UIs, business logic, data models, Web services, and APIs. They enable high-productivity development and a faster time to market for all types of developers. They also help create applications of any complexity including enterprise applications with generic architectures and complex backends, using microservices and service bus.
![Figure 1: Important use cases of low code][2]
Figure 1 illustrates some of the key use cases that have been influenced positively in terms of productivity, scale and quality by leveraging low-code platforms.
*Enable cross-functional team collaboration:* Any product development needs a combination of business or functional experts as well as professional developers. Low-code platforms provide features that are relevant to a business user as well as professional developer. Cross-functional teams can leverage these platforms to turn great business ideas into readily deployable products much faster.
*Rapid digital transformation of existing product suites:* By leveraging client and server-side APIs of the platform, developers will be able to build, package and distribute new functionalities, such as connectors to external services like machine learning and AI. Low-code platforms enable developers to push beyond the boundaries of the core platform to build better solutions faster by extending the native features of the platform with some code.
*Help meet sudden spikes in demand:* Automated code generation combined with the one-click deployment option of low-code platforms helps in quicker product customisations, as well as building product variants and burning backlog of features faster.
*Help build MVPs at a rapid rate:* The end-to-end development and operational tools in low-code platforms provide a cohesive ecosystem that allows an enterprise to rapidly bring products to life and manage the entire SDLC process. They are used to build quick MVPs or PoCs for technology, frameworks, and architecture or for feature evaluation.
| Evaluation criteria | Description |
| :- | :- |
| Functional features | Productivity, UI flexibility, ease of use |
| Cloud and containerisation | Capability to utilise popular cloud providers services like serverless, AI/ML, blockchain, etc |
| CI/CD integration | Out-of-the-box support for automation and CI/CD toolchain |
| Integration capabilities | REST and cloud app support, ability to connect to different SQL and NoSQL databases |
| Performance | Parallel and batch execution with support for elastic scalability |
| Security | Enable security by design: security tools, development methods, and governance tooling |
| Language support | Support for popular languages like Java, .NET, Python, etc |
| Development methodologies | Support standard Agile development methodologies like Scrum, XP, Kanban, etc |
| Extensibility | Ability to extend features of existing applications |
| Others | Platform support, learning curve, documentation, etc |
*Support cloud scale architecture and design:* Low-code platforms provide flexible microservices integration options (custom, data, UI, infra) to support building next-gen low-code products with multi-cloud deployment options. They are able to scale and handle thousands of users and millions of data sets.
![Figure 2: Benefits of low-code platforms][3]
#### Pros and cons of low-code platforms
The primary advantage of low-code software development is speed (months to weeks). On average, there is a six to ten times productivity improvement using a low-code platform over traditional hand coding approaches to software development.
Figure 2 lists some of the key benefits of low-code platforms.
* Better software, lower maintenance: Low-code platforms standardise the development approach and reduce the complexity as well as the error rate of the source code.
* Cross-team collaboration: Team members with different skills and capabilities can collaborate to realise the final product faster.
* Enable Agile software development: They offer the team members a consistent product that begins as a single screen, and grows from sprint to sprint as the full product takes shape.
* Faster legacy app modernisation: Enable faster UI generation based on the existing data model, reuse of the logic from legacy databases and smart transfer of existing screens/persistence layers.
* Low risk and high RoI: Due to shorter development cycles, the risk of undertaking a new project is low, and there is a good chance of getting high returns.
* Scaling through multiple components: Allow use of a common platform to develop multiple services.
* Easy maintenance: The software can be easily updated, fixed and changed according to customer requirements.
In the last few years, low-code platforms have evolved a lot in terms of functionality and applicability. However, they still have some limitations when being used fully for any generic product development, some of which are listed below.
*Low-level product development:* Cannot be used for building products like in-memory grids, Big Data processing algorithms, image recognition, etc.
*Custom architectures and services:* Have limited use in enterprise software with unique architecture, microservices, custom back-ends, unique enterprise service bus, etc.
*Source code control:* Teams largely lose control over the code base; it's difficult to debug and handle edge conditions where the tool does not do the right thing automatically.
*Limited integration:* Custom integration with legacy systems needs some significant coding.
*Security and reliability:* These platforms are vulnerable to security breaches, because if the low-code platform gets hacked, it can immediately make the product built on it also vulnerable.
*Customisation:* Custom CSS, custom integration, advanced client-side functionality, etc, will require a good amount of coding.
#### Popular generic low-code platforms
While there are many low-code platforms available in the market, one should leverage the evaluation criteria given in Table 1 before choosing the right one relevant to the business.
Table 2 lists the highly popular generic low-code platforms, both proprietary and open source.
Gartner predicts that by 2024, 65 per cent of application development projects will rely on low-code development.
| Low-code platform | Description | Type |
| :- | :- | :- |
| Mendix | This is designed to accelerate enterprise app delivery across the entire application development life cycle, from ideation to development, deployment, and maintenance on cloud or on-premise. | Proprietary |
| OutSystems | This platform provides tools to develop, deploy and manage omni-channel enterprise applications. It addresses the full spectrum of enterprise use cases for mobile, Web and core systems. | Proprietary |
| Appian | An enterprise grade platform that combines the key capabilities needed to get work done faster using AI, RPA, decision rules, and workflow on a single low-code platform. | Proprietary |
| Budibase | Helps in building business applications. Supports multiple external data sources, and comes with prebuilt layouts, user auth, and a data provider component. Supports JavaScript for integrations. | Open source |
| WordPress | Powers more than 41 per cent of the Web — from simple blogs to enterprise websites. With over 54,000 plugins, the level of customisation without writing code is incredible. | Open source |
| Node-RED | Helps to build event-driven IoT applications. A programming tool for wiring together hardware devices, APIs, and online services. | Open source |
Will low-code development replace traditional engineering completely? This is unlikely in the near future, but it will definitely help to significantly improve developers' productivity and bring quality products to market faster. It can bridge the talent gap in the mid-term by arming non-technical people with tools to build, whilst alleviating some of the product backlog for maxed-out professional engineering teams. Low-code platforms can address critical issues like the high demand for new enterprise software, the need to modernise aging legacy systems and the shortage of full-stack engineers. The choice of when and how much to use a low-code platform depends on the range of applicability, development speed, manageability and flexibility for performance limitations.
However, if there is a need to develop a product that is high quality and unique while being specific to business requirements, custom product development may be a better option.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/are-low-code-platforms-helpful-for-professional-developers/
作者:[Radhakrishna Singuru][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/radhakrishna-singuru/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Low-Code-platfroms-developers.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-1-Important-use-cases-of-low-code.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-2-Benefits-of-low-code-platforms.jpg

View File

@@ -0,0 +1,106 @@
[#]: subject: "Compress Images in Linux Easily With Curtail GUI App"
[#]: via: "https://itsfoss.com/curtail-image-compress/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Compress Images in Linux Easily With Curtail GUI App
======
Got a bunch of images with huge file sizes taking too much disk space? Or perhaps you have to upload an image to a web portal that has file size restrictions?
There could be a number of reasons why you would want to compress images. There are tons of tools to help you with it and I am not talking about the command line ones here.
You can use a full-fledged image editor like GIMP. You may also use web tools like [Squoosh][1], an open source project from Google. It even lets you compare the files for each compression level.
However, all these tools work on individual images. What if you want to bulk compress photos? Curtail is an app that saves the day.
### Curtail: Nifty tool for image compression in Linux
Built with Python and GTK3, Curtail is a simple GUI app that uses open source libraries like OptiPNG, [jpegoptim][2], etc to provide the image compression feature.
It is available as a [Flatpak application][3]. Please make sure that you have [Flatpak support enabled on your system][4].
Add the Flathub repo first:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
And then use the command below to install Curtail:
```
flatpak install flathub com.github.huluti.Curtail
```
Once installed, look for it in your Linux system's menu and start it from there.
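Alternatively, you can launch it from a terminal with the standard Flatpak run command:

```
flatpak run com.github.huluti.Curtail
```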
![curtail app][5]
The interface is plain and simple. You can choose whether you want a lossless or lossy compression.
The lossy compression will have poor-quality images but with a smaller size. The lossless compression will have better quality but the size may not be much smaller than the original.
![curtail app interface][6]
You can either browse for images or drag and drop them into the application.
Yes. You can compress multiple images in one click with Curtail.
In fact, you don't even need a click. As soon as you select the images or drop them, they are compressed and you see a summary of the compression process.
![curtail image compression summary][7]
As you can see in the image above, I got a 35% size reduction for one image and 3 and 8 percent for the other two. This was with lossless compression.
The images are saved with a -min suffix (by default), in the same directory as the original image.
Though it looks minimalist, there are a few options to configure Curtail. Click on the hamburger menu and you are presented with a few settings options.
![curtail configuration options][8]
You can select whether you want to save the compressed file as new or replace the existing one. If you go for a new file (default behavior), you can also provide a different suffix for the compressed images. The option to keep the file attributes is also there.
In the next tab, you can configure the settings for lossy compression. By default, the compression level is at 90%.
![curtail compression options][9]
The Advanced tab gives you the option to configure the lossless compression level for PNG and WebP files.
![curtain advanced options][10]
### Conclusion
As I stated earlier, it's not a groundbreaking tool. You can do the same with other tools like GIMP. It just makes the task of image compression simpler, especially for bulk image compression.
I would love to see the option to [convert the image file formats][11] with the compression like what we have in tools like Converseen.
Overall, a good little utility for the specific purpose of image compression.
--------------------------------------------------------------------------------
via: https://itsfoss.com/curtail-image-compress/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed
[1]: https://squoosh.app/
[2]: https://github.com/tjko/jpegoptim
[3]: https://itsfoss.com/what-is-flatpak/
[4]: https://itsfoss.com/flatpak-guide/
[5]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-app.png
[6]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-app-interface.png
[7]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-image-compression-summary.png
[8]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-configuration-options.png
[9]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-compression-options.png
[10]: https://itsfoss.com/wp-content/uploads/2022/06/curtain-advanced-options.png
[11]: https://itsfoss.com/converseen/

View File

@@ -0,0 +1,146 @@
[#]: subject: "How I use the attr command with my Linux filesystem"
[#]: via: "https://opensource.com/article/22/6/linux-attr-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How I use the attr command with my Linux filesystem
======
I use the open source XFS filesystem because of the subtle convenience of extended attributes. Extended attributes are a unique way to add context to my data.
![Why the operating system matters even more in 2017][1]
Image by: Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0
The term *filesystem* is a fancy word to describe how your computer keeps track of all the files you create. Whether it's an office document, a configuration file, or thousands of digital photos, your computer has to store a lot of data in a way that's useful for both you and it. Filesystems like Ext4, XFS, JFS, BtrFS, and so on are the "languages" your computer uses to keep track of data.
Your desktop or terminal can do a lot to help you find your data quickly. Your file manager might have, for instance, a filter function so you can quickly see just the image files in your home directory, or it might have a search function that can locate a file by its filename, and so on. These qualities are known as *file attributes* because they are exactly that: Attributes of the data object, defined by code in file headers and within the filesystem itself. Most filesystems record standard file attributes such as filename, file size, file type, time stamps for when it was created, and time stamps for when it was last visited.
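On Linux, you can inspect these standard attributes for any file with the `stat` command (shown here on the example file used later in this article):

```
$ stat example.txt
```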
I use the open source XFS filesystem on my computers not for its reliability and high performance but for the subtle convenience of extended attributes.
### Common file attributes
When you save a file, data about it are saved along with it. Common attributes tell your operating system whether to update the access time, when to synchronize the data in the file back to disk, and other logistical details. Which attributes get saved depends on the capabilities and features of the underlying filesystem.
In addition to standard file attributes (insofar as there are standard attributes), the XFS, Ext4, and BtrFS filesystems can all use extended attributes.
### Extended attributes
XFS, Ext4, and BtrFS allow you to create your own arbitrary file attributes. Because you're making up attributes, there's nothing built into your operating system to utilize them, but I use them as "tags" for files in much the same way I use EXIF data on photos. Developers might choose to use extended attributes to develop custom capabilities in applications.
There are two "namespaces" for attributes in XFS: **user** and **root**. When creating an attribute, you must add your attribute to one of these namespaces. To add an attribute to the **root** namespace, you must use the `sudo` command or be logged in as root.
### Add an attribute
You can add an attribute to a file on an XFS filesystem with the `attr` or `setfattr` commands.
The `attr` command assumes the `user` namespace, so you only have to set (`-s`) a name for your attribute followed by a value (`-V`):
```
$ attr -s flavor -V vanilla example.txt
Attribute "flavor" set to a 7 byte value for example.txt:
vanilla
```
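The **root** namespace works the same way but requires elevated privileges. As a quick sketch (the attribute name `backup-policy` is just an example I made up), the `-R` option of `attr` selects the **root** namespace:
```
$ sudo attr -R -s backup-policy -V daily example.txt
```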
The `setfattr` command requires that you specify the target namespace:
```
$ setfattr --name user.flavor --value chocolate example.txt
```
### List extended file attributes
Use the `attr` or `getfattr` commands to see extended attributes you've added to a file. The `attr` command defaults to the **user** namespace and uses the `-g` option to *get* extended attributes:
```
$ attr -g flavor example.txt
Attribute "flavor" had a 9 byte value for example.txt:
chocolate
```
The `getfattr` command requires the namespace and name of the attribute:
```
$ getfattr --name user.flavor example.txt
# file: example.txt
user.flavor="chocolate"
```
### List all extended attributes
To see all extended attributes on a file, you can use `attr -l`:
```
$ attr -l example.txt
Attribute "md5sum" has a 32 byte value for example.txt
Attribute "flavor" has a 9 byte value for example.txt
```
Alternately, you can use `getfattr -d` :
```
$ getfattr -d example.txt
# file: example.txt
user.flavor="chocolate"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
Any extended file attribute can be updated with `attr` or `setfattr`, just as if you were creating the attribute:
```
$ setfattr --name user.flavor --value strawberry example.txt
$ getfattr -d example.txt
# file: example.txt
user.flavor="strawberry"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
### Attributes on other filesystems
The greatest risk when using extended attributes is forgetting that these attributes are specific to the filesystem they're on. That means when you copy a file from one drive or partition to another, the attributes are lost *even if the target filesystem supports extended attributes*.
To avoid losing extended attributes, you must use a tool that supports retaining them, such as the `rsync` command.
```
$ rsync --archive --xattrs ~/example.txt /tmp/
```
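For local copies on most Linux systems, GNU `cp` can also retain extended attributes if you ask it to preserve them:
```
$ cp --preserve=xattr example.txt /tmp/
```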
No matter what tool you use, if you transfer a file to a filesystem that doesn't know what to do with extended attributes, those attributes are dropped.
### Search for attributes
There aren't many mechanisms to interact with extended attributes, so the options for using the file attributes you've added are limited. I use extended attributes as a tagging mechanism, which allows me to associate files that have no obvious relation to one another. For instance, suppose I need a Creative Commons graphic for a project I'm working on. Assume I've had the foresight to add the extended attribute **license** to my collection of graphics. I could search my graphic folder with `find` and `getfattr` together:
```
find ~/Graphics/ -type f \
-exec getfattr \
--name user.license \
-m cc-by-sa {} \; 2>/dev/null
# file: /home/tux/Graphics/Linux/kde-eco-award.png
user.license="cc-by-sa"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
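Tagging files one at a time gets tedious for a large collection, so I sometimes script it. Here's a minimal sketch, assuming every PNG in a directory should carry the same license tag:
```
$ for f in ~/Graphics/*.png; do setfattr --name user.license --value cc-by-sa "$f"; done
```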
### Secrets of your filesystem
Filesystems aren't generally something you're meant to notice. They're literally systems for defining a file. It's not the most exciting task a computer performs, and it's not something users are supposed to have to be concerned with. But some filesystems give you some fun and safe special abilities, and extended file attributes are a good example. Their use may be limited, but extended attributes are a unique way to add context to your data.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/linux-attr-command
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/yearbook-haff-rx-linux-file-lead_0.png

View File

@ -0,0 +1,308 @@
[#]: subject: "Please A Simple Command Line Todo Manager"
[#]: via: "https://ostechnix.com/please-command-line-todo-manager/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Please A Simple Command Line Todo Manager
======
Manage Tasks And To-do Lists With 'Please' From Command Line In Linux
A while ago, we reviewed **["Taskwarrior"][1]**, a command line task manager to manage your to-do tasks right from the Terminal window. Today I stumbled upon yet another simple **command line Todo manager** called **"Please"**. Yes, the name is Please!
Please is an open source CLI application written in the **Python** programming language. Using Please, we can manage our personal tasks and to-do list without leaving the terminal.
Whenever you open a terminal window, Please will show you the current date and time, an inspirational quote and the list of personal to-do tasks in the Terminal.
Please is a very lightweight and convenient CLI task manager for those who use the terminal extensively in their daily life.
### Install Please In Linux
Since Please is written in Python, you can **install Please** using the **PiP** package manager. If you haven't installed PiP on your Linux machine yet, refer to the following link.
* [How To Manage Python Packages Using PIP][2]
To install Please using PiP, simply run:
```
$ pip install please-cli
```
Or,
```
$ pip3 install please-cli
```
To run Please every time you open a new Terminal window, add the line 'please' to your `.bashrc` file.
```
$ echo 'please' >> ~/.bashrc
```
If you use ZSH shell, run:
```
$ echo 'please' >> ~/.zshrc
```
Please note that the above step is optional. You don't have to add it to your shell config file. However, if you do, you will immediately see your pending tasks and to-do list whenever you open a Terminal.
If you don't add it, you won't see them and you may forget them after a while. So make sure you've added it to your `.bashrc` or `.zshrc` file.
Restart the current session for the changes to take effect. Alternatively, source the `.bashrc` file to apply the changes immediately.
```
$ source ~/.bashrc
```
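If you share your shell configuration across machines, you may want to run Please only in interactive shells where it is actually installed. This optional guard is my own sketch, not part of Please:
```
# Run 'please' only in interactive shells where the command exists
case $- in
    *i*) command -v please >/dev/null && please ;;
esac
```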
You will be asked to set a name at first launch. It is usually the hostname of your system. You can also use any other name of your choice.
```
Hello! What can I call you?: ostechnix
```
You can change your name later by running the following command:
```
$ please callme <Your Name Goes Here>
```
### Manage Tasks And To-do Lists With Please From Command Line
The **usage of 'Please'** is very simple!
Just run 'please' to show the current date and time, an inspirational quote and the list of tasks if there are any.
```
$ please
```
**Sample Output:**
```
─────── Hello ostechnix! It's 20 Jun | 11:59 AM ───────
"Action is eloquence!"
- William Shakespeare
Looking good, no pending tasks 😁
```
![Run Please Todo Manager][3]
As you can see, there are no todo tasks yet. Let us add some.
#### Adding New Tasks
To add a new task, run:
```
$ please add "<Task Name>"
```
Example:
```
$ please add "Publish a post about Please"
```
Replace the task name within the quotes with your own.
**Sample Output:**
```
Added "Publish a post about Please" to the list
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
Similarly, you can add any number of tasks. I have added the following 3 tasks for demonstration purposes.
```
Added "Setup Nginx In Ubuntu" to the list
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Add Tasks Using Please][4]
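By the way, if you have several tasks to add in one go, a small shell loop saves some typing (the task names below are just examples):
```
$ for task in "Update Ubuntu VM" "Setup Nginx In Ubuntu"; do please add "$task"; done
```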
#### Show Tasks
To view the list of all tasks, run:
```
$ please showtasks
```
**Sample Output:**
```
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Show All Tasks][5]
As you can see in the above output, I have 3 unfinished tasks.
#### Mark Tasks As Done Or Undone
Once you complete a task, you can **mark it as done** by specifying the task number, as shown in the command below.
```
$ please done "<Task Number>"
```
Example:
```
$ please done 1
```
This command will mark **Task 1** as completed.
**Sample Output:**
```
Updated Task List
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ✅ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Mark Tasks As Done][6]
As you can see in the above output, the completed task is marked with a **green tick mark** and the uncompleted tasks are marked with **a red cross**.
Similarly, to undo the change, i.e., to **mark a task as undone**, run:
```
$ please undone 1
```
![Mark Tasks As Undone][7]
#### Remove Tasks
To delete a task from the list, the command would be:
```
$ please delete "<Task Number>"
```
Example:
```
$ please delete 1
```
This command will **delete the specified task**.
**Sample Output:**
```
Deleted 'Publish a post about Please'
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Update Ubuntu VM │ ❌ │
│ 2 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴───────────────────────┴────────┘
```
![Delete Tasks][8]
Please note that this command will delete the given task whether it is completed or not. It will not even show you a warning message, so double-check that you are deleting the correct task.
#### Reset
To reset all settings and tasks, run:
```
$ please setup
```
You will be prompted to set a name.
**Sample Output:**
```
Hello! What can I call you?: ostechnix
Thanks for letting me know your name!
If you wanna change your name later, please use:
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ please callme <Your Name Goes Here>
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
![Reset Please][9]
### Uninstall Please
'Please' didn't please you? No problem! You can remove it using the command:
```
$ pip uninstall please-cli
```
Or,
```
$ pip3 uninstall please-cli
```
And then edit your `.bashrc` or `.zshrc` file and remove the line that says **please** at the end of the file.
### Conclusion
I briefly tried 'Please' on my Ubuntu VM and I already started liking its simplicity and efficiency. If you're looking for an easy-to-use CLI task manager for managing your tasks, please try "Please". You will be pleased!
**Resource:**
* [Please GitHub Repository][10]
*Featured Image by Pixabay.*
--------------------------------------------------------------------------------
via: https://ostechnix.com/please-command-line-todo-manager/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/taskwarrior-command-line-todo-task-manager-application/
[2]: https://ostechnix.com/manage-python-packages-using-pip/
[3]: https://ostechnix.com/wp-content/uploads/2022/06/Run-Please-Todo-Manager.png
[4]: https://ostechnix.com/wp-content/uploads/2022/06/Add-Tasks-Using-Please.png
[5]: https://ostechnix.com/wp-content/uploads/2022/06/Show-All-Tasks.png
[6]: https://ostechnix.com/wp-content/uploads/2022/06/Mark-Tasks-As-Done.png
[7]: https://ostechnix.com/wp-content/uploads/2022/06/Mark-Tasks-As-Undone.png
[8]: https://ostechnix.com/wp-content/uploads/2022/06/Delete-Tasks.png
[9]: https://ostechnix.com/wp-content/uploads/2022/06/Reset-Please.png
[10]: https://github.com/NayamAmarshe/please

View File

@ -0,0 +1,37 @@
[#]: subject: "The First Commercial Unikernel With POSIX Support"
[#]: via: "https://www.opensourceforu.com/2022/06/the-first-commercial-unikernel-with-posix-support/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The First Commercial Unikernel With POSIX Support
======
![operating system][1]
Lynx Software Technologies has released a unikernel that it claims is the first to be POSIX compatible for real-time operation and commercially available. LynxElement will be included in the MOSA.ic range of mission-critical embedded applications. To provide more security with third-party or open source software, Lynx prefers a unikernel approach over hypervisors or virtual machines. LynxElement is based on Lynx's commercially proven LynxOS-178 real-time operating system, which allows for compatibility between the unikernel and the standalone LynxOS-178 product. This enables designers to move applications between environments and is compliant with the POSIX API and US FACE specifications.
LynxElement initially focused on security on both Intel and Arm multicore processor architectures. Running security components such as virtual private networks (VPNs) is a common use case. The unikernel, by utilising a one-way software data diode and filter, can enable a customer to replace a Linux virtual machine, saving memory space and drastically reducing the attack surface while ensuring timing requirements and safety certifiability.
Unikernels are best suited for applications that require speed, agility, and a small attack surface in order to increase security and certifiability, such as aircraft systems, autonomous vehicles, and critical infrastructure. These run pre-built applications with their own libraries, reducing the attack surface caused by resource sharing. This also enables the secure use of containerised applications such as Kubernetes or Docker, which are increasingly moving from enterprise to embedded designs, owing to the need to support AI frameworks.
Unikernels are also an excellent choice for mission-critical systems with heterogeneous workloads that require the coexistence of RTOS, Linux, unikernel, and bare-metal guest operating systems. Existing open source unikernel implementations, according to Lynx, haven't fared well due to a lack of adequate functionality, a lack of a clear path to safety certification, and immature toolchains for debugging and producing images.
Lynx created the MOSA.ic software framework for developing and integrating complex multi-core safety- or security-critical systems. The framework includes built-in security for the unikernel, allowing for security and safety certification in mission-critical applications and making it enterprise-ready. With the assistance of DESE Research, Lynx created the safety-critical Unikernel solution. LynxElement is being evaluated by existing Lynx customers as well as additional organisations around the world, including naval, air force, and army organisations.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/the-first-commercial-unikernel-with-posix-support/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/operating-system-1.jpg

View File

@ -0,0 +1,116 @@
[#]: subject: "What you need to know about site reliability engineering"
[#]: via: "https://opensource.com/article/22/6/introduction-site-reliability-engineering"
[#]: author: "Robert Kimani https://opensource.com/users/robert-charles"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about site reliability engineering
======
Understand the basics and best practices for establishing and maintaining an SRE program in your organization.
![Working on a team, busy worklife][1]
Image by: opensource.com
What is site reliability engineering? The creator of the first site reliability engineering (SRE) program, [Benjamin Treynor Sloss][2] at Google, described it this way:
> Site reliability engineering is what happens when you ask a software engineer to design an operations team.
What does that mean? Unlike traditional system administrators, site reliability engineers (SREs) apply solid software engineering principles to their day-to-day work. For laypeople, a clearer definition might be:
> Site reliability engineering is the discipline of building and supporting modern production systems at scale.
SREs are responsible for maximizing reliability, performance, availability, latency, efficiency, monitoring, emergency response, change management, release planning, and capacity planning for both infrastructure and software. As applications and infrastructure grow more complex, SRE teams help ensure that these systems can evolve.
### What does an SRE organization do?
There are four primary responsibilities of an SRE organization:
* Availability: SREs are responsible for the availability of the services they support. After all, if services are not available, end users' work is disrupted, which can cause serious damage to your organization's credibility.
* Performance: A service needs to be not only available but also highly performant. For example, how useful is a website that takes 20 seconds to move from one page to another?
* Incident management: SREs manage the response to unplanned disruptions that impact customers, such as outages, service degradation, or interruptions to business operations.
* Monitoring: A foundational requirement for every SRE, monitoring involves collecting, processing, aggregating, and displaying real-time quantitative data about a system. This could include query counts and types, error counts and types, processing times, and server lifetimes.
Occasionally, release and capacity planning are also the responsibility of the SRE organization.
### How do SREs maintain site reliability?
The SRE role is a diverse one, with many responsibilities. An SRE must be able to identify an issue quickly, troubleshoot, and mitigate it with minimal disruption to operations.
Here's a partial list of the tasks a typical SRE undertakes:
* Writing code: An SRE is required to solve problems using software, whether they are a software engineer with an operations background or a system engineer with a development background.
* Being on call: This is not the most attractive part of being an SRE, but it is essential.
* Leading a war room: SREs facilitate discussions of strategy and execution during incident management.
* Performing postmortems: This is an excellent tool to learn from an incident and identify processes that can be put in place to avoid future incidents.
* Automating: SREs tend to get bored with manual steps. Automation not only saves time but reduces failures due to human errors. Spending some time on engineering by automating tasks can have a strong return on investment.
* Implementing best practices: SREs are well versed in distributed systems and web-scale architectures. They apply best practices in several areas of service management.
### Designing an effective on-call system
An on-call management system streamlines the process of adding members of the SRE team into after-hours or weekend call schedules, assigning them equitable responsibility for managing alerts outside of traditional work hours or on holidays. In some cases, an organization might designate on-call SREs around the clock.
In the medical profession, on-call doctors don't have to be on site, but they do have to be prepared to show up and deal with emergencies anytime during their on-call shift. SRE professionals likewise use on-call schedules to make sure that someone's always there to respond to major bugs, capacity issues, or product downtime. If they can't fix the problem on their own, they're also responsible for escalating the issue. For SRE teams who run services for which customers expect 24/7/365, 99.999% uptime and availability, on-call staffing is especially critical.
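To put an uptime target like 99.999% in perspective, it helps to translate the percentage into a concrete downtime budget. A quick back-of-the-envelope check with `bc`, assuming a 30-day month:
```
$ echo '30*24*60*(1-0.99999)' | bc -l
```
That works out to about 0.43 minutes, roughly 26 seconds, of allowable downtime per month, which is why fast escalation and well-rehearsed incident response matter so much.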
There are two main kinds of [on-call design structures][4] that can be used when designing an on-call system, and they focus on domain expertise and ownership of a given service:
* Single-team ownership model
* Shared ownership model
In most cases, single-team ownership will be the better model.
The on-call SRE has multiple duties:
* Protecting production systems: The SRE on call serves as a guardian to all production services they are required to support.
* Responding to emergencies within acceptable time: Your organization may choose to have a service-level objective (SLO) for SRE response time. In most cases, anywhere between 5 and 15 minutes would be an acceptable response time. Automated monitoring and alerting solutions also empower SREs to respond immediately to any interruptions to service availability.
* Involving team members and escalating issues: The on-call SRE is responsible for identifying and calling in the right team members to address specific problems.
* Tackling non-emergent issues: In some organizations, a secondary on-call engineer is scheduled to handle non-emergencies, like email alerts.
* Writing postmortems: As noted above, a good postmortem is a valuable tool for documenting and learning from significant incidents.
### 3 key tenets of an effective on-call management system
#### A focus on engineering
SREs should be spending more time designing solutions than applying band-aids. A general guideline is for SREs to spend 50% of their time on engineering work, such as writing code and automating tasks. When an SRE is on call, the remaining time should be split roughly evenly: about 25% managing incidents and 25% on operations duty.
#### Balanced workload
Being on call can quickly burn out an engineer if there are too many tickets to handle. If well-coordinated multi-region support is possible, such as a US-based team and an Asia-Pacific team, that arrangement can help limit the detrimental health effects of repeated night shifts. Otherwise, having six to eight SREs per site will help avoid exhaustion. At the same time, make sure all SREs are getting a turn being on call at least once or twice a quarter to avoid getting out of touch with production systems. Fair compensation for on-call work during overnights or holidays, such as additional hours off or cash awards, will also help SREs feel that their extra effort is appreciated.
#### Positive and safe environment
Clearly defined escalation and blameless postmortem procedures are absolutely necessary for SREs to be effective and productive. Established protocols are central to a robust incident management system. Postmortems must focus on root causes and prevention rather than individual and team actions. If you don't have a clear postmortem procedure in your organization, it is wise to start one immediately.
### SRE best practices
This article covered some SRE basics and best practices for establishing and running an SRE on-call management system.
In future articles, I will look at other categories of best practices for SRE, the technologies involved, and the processes to support those technologies. By the end of this series, you'll know how to implement SRE best practices for designing, implementing, and supporting production systems.
### More resources
* [Availability Calculator][5]
* [Error Budget Calculator][6]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/introduction-site-reliability-engineering
作者:[Robert Kimani][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/robert-charles
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png
[2]: https://sre.google/sre-book/introduction/
[3]: https://enterprisersproject.com/article/2022/2/8-reasons-site-reliability-engineer-one-most-demand-jobs-2022
[4]: https://alexwitherspoon.com/publications/on-call-design/
[5]: https://availability.sre.xyz/
[6]: https://dastergon.gr/error-budget-calculator/

View File

@ -3,7 +3,7 @@
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: (Donkey)
[#]: reviewer: ( )
[#]: reviewer: (turbokernel)
[#]: publisher: ( )
[#]: url: ( )
@ -61,8 +61,8 @@ via: https://opensource.com/article/21/5/kubernetes-chaos
作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,301 @@
[#]: subject: "Apache Kafka: Asynchronous Messaging for Seamless Systems"
[#]: via: "https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apache Kafka为“无缝系统”提供异步消息支持
======
Apache Kafka 是最流行的开源消息代理之一。它已经成为了大数据操作的重要组成部分,你能够在几乎所有的微服务环境中找到它。本文对 Apache Kafka 进行了简要介绍,并提供了一个案例来展示它的使用方式。
![][1]
你有没有想过电子商务平台是如何在处理巨大的流量时做到不会卡顿的呢有没有想过OTT 平台是如何在同时向数百万用户交付内容时,做到平稳运行的呢?其实,关键就在于它们的分布式架构。
采用分布式架构设计的系统由多个功能组件组成。这些功能组件通常分布在许多个机器上,它们通过网络,异步地交换消息,从而实现相互协作。正是由于异步消息的存在,组件之间才能实现可伸缩、无阻塞的通信,整个系统才能够平稳运行。
### 异步消息
异步消息的常见特性有:
* 消息的生产者和消费者都不知道彼此的存在。它们在不知道其他对象的情况下,加入和离开系统。
* 消息代理充当了生产者和消费者之间的中介。
* 生产者把每条消息,都与一个<ruby>“主题”<rt>topic</rt></ruby>相关联。每个主题只是一个简单的字符串。
* 一个生产者可以把消息发往多个主题,不同生产者也可以把消息发送给同一主题。
* 消费者向代理订阅一个或多个主题的消息。
* 生产者只将消息发送给代理,而不发送给消费者。
* 代理会把消息发送给订阅该主题的所有消费者。
* 生产者并不期望得到消费者的任何回应。换句话说,生产者和消费者不会相互阻塞。
市场上的消息代理有很多,而 Apache Kafka 是其中最受欢迎的一种。
### Apache Kafka
Apache Kafka 是一个支持流处理的、开源的分布式消息系统,它由 Apache 软件基金会开发。在架构上,它是由多个代理组成的集群,这些代理间通过 Apache ZooKeeper 服务来协调。在接收、持久化和发送消息时,这些代理共享集群上的负载。
#### 分区
Kafka 将消息写入称为<ruby>“分区”<rt>partitions</rt></ruby>的桶中。一个特定分区只保存一个主题上的消息。例如Kafka 会把 `heartbeats` 主题上的消息写入名为 “heartbeats-0” 的分区(假设它是个单分区主题),这个过程和生产者无关。
![图 1异步消息][2]
不过,为了利用 Kafka 集群所提供的并行处理能力,管理员通常会为指定主题创建多个分区。举个例子,假设管理员为 `heartbeats` 主题创建了三个分区Kafka 会将它们分别命名为 `heartbeats-0`、`heartbeats-1` 和 `heartbeats-2`。Kafka 会以某种方式,把消息分配到这三个分区中,并使它们均匀分布。
还有另一种可能的情况,生产者将每条消息与一个<ruby>消息键<rt>key</rt></ruby>相关联。例如,同样都是在 `heartbeats` 主题上发送消息,有个组件使用 `C1` 作为消息键,另一个则使用 `C2`。在这种情况下Kafka 会确保,在一个主题中,带有相同消息键的消息,总是会被写入到同一个分区。不过,在一个分区中,消息的消息键却不一定相同。下面的图 2 显示了消息在不同分区中的一种可能分布。
![图 2消息在不同分区中的分布][3]
#### 领导者和同步副本
Kafka 在(由多个代理组成的)集群中维护了多个分区。其中,负责维护分区的那个代理被称为<ruby>“领导者”<rt>leader</rt></ruby>。只有领导者能够在它的分区上接收和发送消息。
可是,万一分区的领导者发生故障了,又该怎么办呢?为了确保业务连续性,每个领导者(代理)都会把它的分区复制到其他代理上。此时,这些其他代理就称为该分区的<ruby>同步副本<rt>in-sync-replicas</rt></ruby>ISR。一旦分区的领导者发生故障ZooKeeper 就会发起一次选举,把选中的那个同步副本任命为新的领导者。此后,这个新的领导者将承担该分区的消息接收和发送任务。管理员可以指定分区需要维护的同步副本的数量。
![图 3命令行生产者][4]
#### 消息持久化
代理会将每个分区都映射到一个指定的磁盘文件,从而实现持久化。默认情况下,消息会在磁盘上保留一个星期。当消息写入分区后,它们的内容和顺序就不能更改了。管理员可以配置一些策略,如消息的保留时长、压缩算法等。
![图 4命令行消费者][5]
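例如,管理员可以用 Kafka 自带的 `kafka-configs.sh` 工具调整某个主题的保留时长。下面是一个示意命令(假设集群运行在本机的 9092 端口,安装步骤见下文;`604800000` 毫秒即 7 天):
```
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name heartbeats --add-config retention.ms=604800000
```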
#### 消费消息
与大多数其他消息系统不同Kafka 不会主动将消息发送给消费者。相反消费者应该监听主题并主动读取消息。一个消费者可以从某个主题的多个分区中读取消息多个消费者也可以读取来自同一个分区的消息。Kafka 保证了同一条消息不会被同一个消费者重复读取。
Kafka 中的每个消费者都有一个组 ID。那些组 ID 相同的消费者们共同组成了一个消费者组。通常,为了从 N 个主题分区读取消息,管理员会创建一个包含 N 个消费者的消费者组。这样一来,组内的每个消费者都可以从它的指定分区中读取消息。如果组内的消费者比可用分区还要多,那么多出来的消费者就会处于闲置状态。
在任何情况下Kafka 都保证:不管组内有多少个消费者,同一条消息只会被该消费者组读取一次。这个架构提供了一致性、高性能、高可扩展性、准实时交付和消息持久性,以及零消息丢失。
### 安装、运行 Kafka
尽管在理论上Kafka 集群可以由任意数量的代理组成,但在生产环境中,大多数集群通常由三个或五个代理组成。
在这里,我们将搭建一个单代理集群,对于生产环境来说,它已经够用了。
在浏览器中访问 [https://kafka.apache.org/downloads][5a],下载 Kafka 的最新版本。在 Linux 终端中,我们也可以使用下面的命令来下载它:
```
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/2.8.0/kafka_2.12-2.8.0.tgz
```
如果需要的话,我们也可以把下载来的档案文件 `kafka_2.12-2.8.0.tgz` 移动到另一个目录下。解压这个档案,你会得到一个名为 `kafka_2.12-2.8.0` 的目录,它就是之后我们要设置的 `KAFKA_HOME`
打开 `KAFKA_HOME/config` 目录下的 `server.properties` 文件,取消注释下面这一行配置:
```
listeners=PLAINTEXT://:9092
```
这行配置的作用是让 Kafka 在本机的 `9092` 端口接收普通文本消息。我们也可以配置 Kafka 通过<ruby>安全通道<rt>secure channel</rt></ruby>接收消息,在生产环境中,我们也推荐这么做。
无论集群中有多少个代理Kafka 都需要 ZooKeeper 来管理和协调它们。即使是单代理集群也是如此。Kafka 在安装时,会附带安装 ZooKeeper因此我们可以在 `KAFKA_HOME` 目录下,在命令行中使用下面的命令来启动它:
```
./bin/zookeeper-server-start.sh ./config/zookeeper.properties
```
当 ZooKeeper 运行起来后,我们就可以在另一个终端中启动 Kafka 了,命令如下:
```
./bin/kafka-server-start.sh ./config/server.properties
```
到这里,一个单代理的 Kafka 集群就运行起来了。
### 验证 Kafka
让我们在 `topic-1` 主题上尝试下发送和接收消息吧!我们可以使用下面的命令,在创建主题时为它指定分区的个数:
```
./bin/kafka-topics.sh --create --topic topic-1 --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
上述命令还同时指定了<ruby>复制因子<rt>replication factor</rt></ruby>,它的值不能大于集群中代理的数量。我们使用的是单代理集群,因此,复制因子只能设置为 1。
当主题创建完成后生产者和消费者就可以在上面交换消息了。Kafka 的发行版内附带了命令行工具生产者和消费者,供测试时用。
打开第三个终端,运行下面的命令,启动命令行生产者:
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1
```
上述命令显示了一个提示符,我们可以在后面输入简单文本消息。由于我们指定的命令选项,生产者会把 `topic-1` 上的消息,发送到运行在本机的 9092 端口的 Kafka 中。
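顺便一提,命令行生产者也可以发送带消息键的消息,可以借此验证前文“相同消息键写入同一分区”的行为。下面是一个示意(其中 `parse.key` 和 `key.separator` 属性用于从输入中解析消息键):
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1 \
  --property parse.key=true --property key.separator=:
```
启动后,输入形如 `C1:hello` 的内容,冒号之前的 `C1` 就会作为这条消息的消息键。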
打开第四个终端,运行下面的命令,启动命令行消费者:
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 --from-beginning
```
上述命令启动了一个消费者,并指定它连接到本机 9092 端口的 Kafka。它订阅了 `topic-1` 主题,以读取其中的消息。由于命令行的最后一个选项,这个消费者会从最开头的位置,开始读取该主题的所有消息。
我们注意到,生产者和消费者连接的是同一个代理,访问的是同一个主题,因此,消费者在收到消息后会把消息打印到终端上。
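在进入实际案例之前,我们还可以顺手体验一下消费者组:给命令行消费者指定 `--group` 选项,并在多个终端中运行同一条命令,这些消费者就会组成一个消费者组,各自分到不同的分区(下面的组名只是一个示例):
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 --group group-1
```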
下面,让我们在实际应用场景中,尝试使用 Kafka 吧!
### 案例
假设有一家叫做 ABC 的公共汽车运输公司,它拥有一支客运车队,往返于全国不同城市之间。由于 ABC 希望实时跟踪每辆客车,以提高其运营质量,因此,它提出了一个基于 Apache Kafka 的解决方案。
首先ABC 公司为所有公交车都配备了位置追踪设备。然后,它使用 Kafka 建立了一个操作中心,以接收来自数百辆客车的位置更新。它还开发了一个<ruby>仪表盘<rt>dashboard</rt></ruby>,以显示任一时间点所有客车的当前位置。图 5 展示了上述架构:
![图 5基于 Kafka 的架构][6]
在这种架构下,客车上的设备扮演了消息生产者的角色。它们会周期性地把当前位置发送到 Kafka 的 `abc-bus-location` 主题上。ABC 公司选择以客车的<ruby>行程码<rt>trip code</rt></ruby>作为消息键,以处理来自不同客车的消息。例如,对于从 Bengaluru 到 Hubballi 的客车,它的行程码就会是 `BLRHL003`,那么在这段旅程中,对于所有来自该客车的消息,它们的消息键都会是 `BLRHL003`
仪表盘应用扮演了消息消费者的角色。它在代理上注册了同一个主题 `abc-bus-location`。如此,这个主题就成为了生产者(客车)和消费者(仪表盘)之间的虚拟通道。
客车上的设备不会期待得到来自仪表盘应用的任何回复。事实上,它们相互之间都不知道对方的存在。得益于这种架构,数百辆客车和操作中心之间实现了非阻塞通信。
#### 实现
假设 ABC 公司想要创建三个分区来维护位置更新。由于我们的开发环境只有一个代理,因此复制因子应设置为 1。
相应地,以下命令创建了符合需求的主题:
```
./bin/kafka-topics.sh --create --topic abc-bus-location --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
生产者和消费者应用可以用多种语言编写,如 Java、Scala、Python 和 JavaScript 等。下面几节中的代码展示了它们在 Java 中的编写方式,好让我们有一个初步了解。
##### Java 生产者
下面的 `Fleet` 类模拟了在 ABC 公司的 6 辆客车上运行的 Kafka 生产者应用。它会把位置更新发送到指定代理的 `abc-bus-location` 主题上。请注意,简单起见,主题名称、消息键、消息内容和代理地址等,都在代码里写死了。
```
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.IntStream;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Fleet {
    public static void main(String[] args) throws Exception {
        String broker = "localhost:9092";
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        String topic = "abc-bus-location";
        // 以行程码作为消息键,模拟六辆客车的当前位置
        Map<String, String> locations = new HashMap<>();
        locations.put("BLRHBL001", "13.071362, 77.461906");
        locations.put("BLRHBL002", "14.399654, 76.045834");
        locations.put("BLRHBL003", "15.183959, 75.137622");
        locations.put("BLRHBL004", "13.659576, 76.944675");
        locations.put("BLRHBL005", "12.981337, 77.596181");
        locations.put("BLRHBL006", "13.024843, 77.546983");

        // 每辆客车各发送 10 条位置更新消息
        IntStream.range(0, 10).forEach(i -> {
            for (String trip : locations.keySet()) {
                ProducerRecord<String, String> record
                        = new ProducerRecord<String, String>(
                                topic, trip, locations.get(trip));
                producer.send(record);
            }
        });
        producer.flush();
        producer.close();
    }
}
```
##### Java 消费者
下面的 `Dashboard` 类实现了一个 Kafka 消费者应用,运行在 ABC 公司的操作中心。它会监听 `abc-bus-location` 主题,并且它的消费者组 ID 是 `abc-dashboard`。当收到消息后,它会立即显示来自客车的详细位置信息。我们本该配置这些详细位置信息,但简单起见,它们也是在代码里写死的:
```
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Dashboard {
    public static void main(String[] args) {
        String broker = "127.0.0.1:9092";
        String groupId = "abc-dashboard";
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        @SuppressWarnings("resource")
        Consumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // 订阅主题Kafka 会在同组消费者之间分配分区
        consumer.subscribe(Arrays.asList("abc-bus-location"));

        while (true) {
            // 轮询消息,最多阻塞 1 秒
            ConsumerRecords<String, String> records
                    = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                String topic = record.topic();
                int partition = record.partition();
                String key = record.key();
                String value = record.value();
                System.out.println(String.format(
                        "Topic=%s, Partition=%d, Key=%s, Value=%s",
                        topic, partition, key, value));
            }
        }
    }
}
```
##### 依赖
为了编译和运行这些代码,我们需要 JDK 8 及以上版本。看到下面的 `pom.xml` 文件中的 Maven 依赖了吗?它们会把所需的 Kafka 客户端库,下载并添加到类路径中:
```
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
```
#### 部署
由于 `abc-bus-location` 主题在创建时指定了 3 个分区,我们自然就会想要运行 3 个消费者,来让读取位置更新的过程更快一些。为此,我们需要同时在 3 个不同的终端中运行仪表盘。因为所有这 3 个仪表盘都注册在同一个组 ID 下它们自然就构成了一个消费者组。Kafka 会为每个仪表盘都分配一个特定的分区(来消费)。
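作为参考,下面给出一种可能的编译和启动方式(假设项目使用 Maven、两个类位于默认包中并且可以使用 `exec-maven-plugin`;这只是一个示意,并非原文作者的原始步骤):
```
# 在三个终端中分别运行一次,启动三个仪表盘实例(同属一个消费者组)
mvn -q compile exec:java -Dexec.mainClass=Dashboard

# 在第四个终端中启动生产者,模拟客车发送位置更新
mvn -q compile exec:java -Dexec.mainClass=Fleet
```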
当所有仪表盘实例都运行起来后,在另一个终端中启动 `Fleet` 类。图 6、7、8 展示了仪表盘终端中的控制台示例输出。
![图 6仪表盘终端之一][7]
仔细看看控制台消息,我们会发现第一个、第二个和第三个终端中的消费者,正在分别从 `partition-2`、`partition-1` 和 `partition-0` 中读取消息。另外,我们还能发现,消息键为 `BLRHBL002`、`BLRHBL004` 和 `BLRHBL006` 的消息写入了 `partition-2`,消息键为 `BLRHBL005` 的消息写入了 `partition-1`,剩下的消息写入了 `partition-0`
![图 7仪表盘终端之二][8]
使用 Kafka 的好处在于,只要集群设计得当,它就可以水平扩展,从而支持大量客车和数百万条消息。
![图 8仪表盘终端之三][9]
### 不止是消息
根据 Kafka 官网上的数据在《财富》100 强企业中,超过 80% 都在使用 Kafka。它部署在许多垂直行业如金融服务、娱乐等。虽然 Kafka 起初只是一种简单的消息服务但它已凭借行业级的流处理能力成为了大数据生态系统的一环。对于那些喜欢托管解决方案的企业Confluent 提供了基于云的 Kafka 服务只需支付订阅费即可。LCTT 译注Confluent 是一个基于 Kafka 的商业公司,它提供的 Confluent Kafka 在 Apache Kafka 的基础上,增加了许多企业级特性,被认为是“更完整的 Kafka”。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Digital-backgrund-connecting-in-globe.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-1-Asynchronous-messaging.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-2-Message-distribution-among-the-partitions.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-3-Command-line-producer.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-4-Command-line-consumer.jpg
[5a]: https://kafka.apache.org/downloads
[6]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-5-Kafka-based-architecture.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-6-Dashboard-Terminal-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-7-Dashboard-Terminal-2.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-8-Dashboard-Terminal-3.jpg