Merge pull request #44 from LCTT/master

update
This commit is contained in:
SamMa 2022-06-22 09:04:34 +08:00 committed by GitHub
commit 9e7bf880c5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 2687 additions and 775 deletions

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: (Donkey-Hao)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14739-1.html)
[#]: subject: (10 ways Ansible is for everyone)
[#]: via: (https://opensource.com/article/21/1/ansible)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
分享 10 篇 Ansible 文章
======
> 通过这些 Ansible 文章扩展你的知识和技能。
![](https://img.linux.net.cn/data/attachment/album/202206/21/111840akw4bjd13dh8ayky.jpg)
我希望能够激发刚刚接触 Ansible 的初学者的兴趣。这里汇总了一系列文章,供你随时查阅。
### 适合初学者的 Ansible
这五篇文章对于 Ansible 新手来说是一个非常好的起点。前三篇文章由 Seth Kenlon 撰写。
* 如果你不了解 Ansible你 [现在可以做这 7 件事][2] 来入手。这是很好的入门指导,它收集了用于管理硬件、云、容器等的链接。
* 在 《[编排与自动化有何区别?][3]》 这篇文章中,你会学到一些术语和技术路线,它们将会激发你对 Ansible 的兴趣。
* 文章 《[如何用 Ansible 安装软件][4]》 覆盖了一些脚本概念和 Ansible 的一些良好实践,并给出了一些本地或远程管理软件包的案例。
* 在 《[我编写 Ansible 剧本时学到的 3 个教训][5]》 中,你可以养成 Jeff Geerling 所传授的好习惯,他是一位真正的 Ansible 资深人士。源代码控制、文档、测试、简化和优化是自动化成功的关键。
* 《[我使用 Ansible 的第一天][6]》 介绍了记者 David Both 在解决重复性开发任务时的思考过程。这篇文章从 Ansible 的基础开始,并说明了一些简单的操作和任务。
### 尝试 Ansible 项目
一旦你掌握了基础并养成了良好习惯,就可以开始一些具体主题和实例了。
* Ken Fallon 在 《[使用 Ansible 管理你的树莓派机群][7]》 一文中介绍了一个部署和管理树莓派设备机群的示例。它介绍了受限环境中的安全和维护概念。
* 在 《[将你的日历与 Ansible 融合以避免日程冲突][8]》 一文中Nicolas Leiva 快速介绍了如何使用前置任务和条件语句,在自动日程安排中强制执行隔离窗口。
* Nicolas 在 《[创建一个整合你的谷歌日历的 Ansible 模块][9]》 中完善了他的日历隔离理念。他的文章深入探讨了如何用 Go 编写自定义 Ansible 模块,以实现所需的日历连接。Nicolas 介绍了构建和调用 Go 程序、将所需数据传递给 Ansible 并接收所需输出的不同方法。
### 提升你的 Ansible 技巧
Kubernetes 是近来的热门话题,以下文章提供了一些很好的示例来学习新技能。
* 在 《[适用于 Kubernetes 的 Ansible 模块][10]》 一文中Seth Kenlon 介绍了 Ansible 的 Kubernetes 模块,描述了用于测试的基本 Minikube 环境,并提供了一些用于 Pod 控制的 `k8s` 模块的基本示例。
* Jeff Geerling 在 《[使用 Ansible 的 Helm 模块构建 Kubernetes Minecraft 服务器][11]》 中解释了 Helm Chart 应用和 Ansible 集合的概念,并通过一个有趣的项目,在 k8s 集群中搭建你自己的 Minecraft 服务器。
我希望你的 Ansible 旅程已经开始,并能经常从这些文章中充实自己。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/ansible
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://opensource.com/article/20/9/ansible
[3]: https://opensource.com/article/20/11/orchestration-vs-automation
[4]: https://opensource.com/article/20/9/install-packages-ansible
[5]: https://opensource.com/article/20/1/ansible-playbooks-lessons
[6]: https://opensource.com/article/20/10/first-day-ansible
[7]: https://opensource.com/article/20/9/raspberry-pi-ansible
[8]: https://opensource.com/article/20/10/calendar-ansible
[9]: https://opensource.com/article/20/10/ansible-module-go
[10]: https://opensource.com/article/20/9/ansible-modules-kubernetes
[11]: https://opensource.com/article/20/10/kubernetes-minecraft-ansible
[12]: https://opensource.com/article/20/1/ansible-news-edition-six
[13]: https://opensource.com/article/20/2/ansible-news-edition-seven
[14]: https://opensource.com/article/20/3/ansible-news-edition-eight
[15]: https://opensource.com/article/20/4/ansible-news-edition-nine
[16]: https://opensource.com/article/20/5/ansible-news-edition-ten
[17]: https://opensource.com/how-submit-article

View File

@ -3,17 +3,16 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14740-1.html"
编写你的第一段 JavaScript 代码
======
JavaScript 是为 Web 而生的,但它可以做的事远不止于此。本文将带领你了解它的基础知识,然后你可以下载我们的备忘清单,以便随时掌握详细信息。
![开源编程][1]
> JavaScript 是为 Web 而生的,但它可以做的事远不止于此。本文将带领你了解它的基础知识,然后你可以下载我们的备忘清单,以便随时掌握详细信息。
图源Opensource.com
![](https://img.linux.net.cn/data/attachment/album/202206/21/114718zzb8f6na6lgb28cn.jpg)
JavaScript 是一种充满惊喜的编程语言。许多人第一次遇到 JavaScript 时,它通常是作为一种 Web 语言出现的。所有主流浏览器都有一个 JavaScript 引擎;并且,还有一些流行的框架,如 jQuery、Cash 和 Bootstrap 等,它们可以帮助简化网页设计;甚至还有用 JavaScript 编写的编程环境。它似乎在互联网上无处不在,但事实证明,它对于 [Electron][2] 等项目来说也是一种有用的语言。Electron 是一个构建跨平台桌面应用程序的开源工具包,它使用的语言就是 JavaScript。
@ -21,11 +20,11 @@ JavaScript 语言的用途多到令人惊讶,它拥有各种各样的库,而
### 安装 JavaScript
随着你的 JavaScript 水平不断提高,你可能会发现自己需要高级的 JavaScript 库和运行时。不过,刚开始学习的时候,你是根本不需要安装 JavaScript 的。因为所有主流的 Web 浏览器都包含一个 JavaScript 引擎来运行代码。你可以使用自己喜欢的文本编辑器编写 JavaScript将其加载到 Web 浏览器中,接着你就能看到代码的作用。
随着你的 JavaScript 水平不断提高,你可能会发现自己需要高级的 JavaScript 库和运行时环境。不过,刚开始学习的时候,你是根本不需要安装 JavaScript 环境的。因为所有主流的 Web 浏览器都包含一个 JavaScript 引擎来运行代码。你可以使用自己喜欢的文本编辑器编写 JavaScript将其加载到 Web 浏览器中,接着你就能看到代码的作用。
### 上手 JavaScript
要编写你的第一个 JavaScript 代码,请打开你喜欢的文本编辑器,例如 [Notepad++][3]、[Atom][4] 或 [VSCode][5] 等。因为它是为 Web 开发的,所以 JavaScript 可以很好地与 HTML 配合使用。因此,我们先来尝试一些基本的 HTML
要编写你的第一个 JavaScript 代码,请打开你喜欢的文本编辑器,例如 [Atom][4] 或 [VSCode][5] 等。因为它是为 Web 开发的,所以 JavaScript 可以很好地与 HTML 配合使用。因此,我们先来尝试一些基本的 HTML
```
<html>
@ -66,9 +65,9 @@ JavaScript 语言的用途多到令人惊讶,它拥有各种各样的库,而
![在浏览器中显示带有 JavaScript 的 HTML][7]
如你所见,`<p>` 标签仍然包含字符串 “Nothing here”但是当它被渲染时JavaScript 会改变它,使其包含 “Hello world”。是的JavaScript 具有重建​​(或只是帮助构建)网页的能力。
如你所见,`<p>` 标签仍然包含字符串 `"Nothing here"`但是当它被渲染时JavaScript 会改变它,使其包含 `"Hello world"`。是的JavaScript 具有重建​​(或只是帮助构建)网页的能力。
这个简单脚本中的 JavaScript 做了两件事。首先,它创建一个名为 `myvariable` 的变量,并将字符串 “Hello world!” 放置其中。然后,它会在当前文档(浏览器呈现的网页)中搜索 ID 为 “example” 的所有 HTML 元素。当它找到 `example` 时,它使用了 `innerHTML` 函数将 HTML 元素的内容替换为 `myvariable` 的内容。
这个简单脚本中的 JavaScript 做了两件事。首先,它创建一个名为 `myvariable` 的变量,并将字符串 `"Hello world!"` 放置其中。然后,它会在当前文档(浏览器呈现的网页)中搜索 ID 为 `example` 的所有 HTML 元素。当它找到 `example` 时,它使用了 `innerHTML` 函数将 HTML 元素的内容替换为 `myvariable` 的内容。LCTT 译注:这里作者笔误了,`innerHTML` 是“属性”而非“函数”。)
当然,我们也可以不用自定义变量。因为,使用动态创建的内容来填充 HTML 元素也是容易的。例如,你可以使用当前时间戳来填充它:
@ -94,7 +93,7 @@ JavaScript 语言的用途多到令人惊讶,它拥有各种各样的库,而
在编程中,<ruby>语法<rt>syntax</rt></ruby> 指的是如何编写句子(或“行”)的规则。在 JavaScript 中,每行代码必须以分号(`;`)结尾,以便运行代码的 JavaScript 引擎知道何时停止阅读。LCTT 译注:从实用角度看,此处的“必须”其实是不正确的,大多数 JS 引擎都支持不加分号。Vue.js 的作者尤雨溪认为“没有应该不应该只有你自己喜欢不喜欢”他同时表示“Vue.js 的代码全部不带分号”。详情可以查看他在知乎上对于此问题的 [回答][10]。)
单词(或 <ruby>字符串<rt>strings</rt></ruby>)必须用引号(`"`)括起来,而数字(或 <ruby>整数<rt>integers</rt></ruby>)则不用。
几乎所有其他东西都是 JavaScript 语言的约定,例如变量、数组、条件语句、对象、函数等等。
@ -111,7 +110,7 @@ document.getElementById("example").innerHTML = typeof(myvariable);
</script>
```
接着,你就会发现 Web 浏览器中显示出 “string” 字样,因为该变量包含的数据是 “Hello world!”。在 `myvariable` 中存储不同类型的数据(例如整数),浏览器就会把不同的数据类型打印到示例网页上。尝试将 `myvariable` 的内容更改为你喜欢的数字,然后重新加载页面。
接着,你就会发现 Web 浏览器中显示出 “string” 字样,因为该变量包含的数据是 `"Hello world!"`。在 `myvariable` 中存储不同类型的数据(例如整数),浏览器就会把不同的数据类型打印到示例网页上。尝试将 `myvariable` 的内容更改为你喜欢的数字,然后重新加载页面。
### 在 JavaScript 中创建函数
@ -160,7 +159,7 @@ document.getElementById("example").innerHTML = typeof(myvariable);
学习 JavaScript 既简单又有趣。网络上有很多网站提供了相关教程,还有超过一百万个 JavaScript 库可帮助你与设备、外围设备、物联网、服务器、文件系统等进行交互。在你学习的过程中,请将我们的 [JavaScript 备忘单][9] 放在身边,以便记住语法和结构的细节。
正文中的配图来自Seth KenlonCC BY-SA 4.0
> **[JavaScript 备忘单][9]**
--------------------------------------------------------------------------------
@ -169,7 +168,7 @@ via: https://opensource.com/article/21/7/javascript-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,41 +3,44 @@
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "lightchaserhy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14735-1.html"
我如何利用 Linux Xface 桌面赋予旧电脑新生命
我如何利用 Xfce 桌面为旧电脑赋予新生
======
当我为了一场会议的样例演示,用笔记本电脑安装 Linux 系统后,发现旧电脑运行 Linux 系统和 Xfce 桌面非常流畅。
几周前,我要在一个会议上简要演示自己在 Linux 下编写的一款小软件。我需要带一台 Linux 笔记本电脑参会,因此我翻出一台旧笔记本电脑并且安装上 Linux 系统。我使用的是 Fedora 36 Xfce spin使用还不错。
![](https://img.linux.net.cn/data/attachment/album/202206/20/143325vfdibhvv22qvddiv.jpg)
这台我用的笔记本是在 2012 年购买的。1.70 GHZ 的 CPU4 GB 的 内存128 GB 的驱动器,也许和我现在的桌面电脑比性能很弱,但是 Linux 和 Xfce 桌面赋予这台旧电脑新的生命。
> 当我为了在一场会议上做演示,用笔记本电脑安装 Linux 系统后,发现 Linux 和 Xfce 桌面让我的这台旧电脑健步如飞。
几周前,我要在一个会议上简要演示自己在 Linux 下编写的一款小软件。我需要带一台 Linux 笔记本电脑参会,因此我翻出一台旧笔记本电脑,并且安装上 Linux 系统。我使用的是 Fedora 36 Xfce 版,使用还不错。
这台我用的笔记本是在 2012 年购买的。1.70 GHZ 的 CPU、4 GB 的 内存、128 GB 的硬盘,也许和我现在的桌面电脑比性能很弱,但是 Linux 和 Xfce 桌面赋予了这台旧电脑新的生命。
### Linux 的 Xfce 桌面
Xfce 桌面是一个轻量级桌面,它提供一个精美、现代的外观。熟悉的界面,有任务栏或者顶部“面板”可以启动应用程序,在系统托盘可以改变虚拟桌面,或者查看通知信息。
Xfce 桌面是一个轻量级桌面,它提供一个精美、现代的外观。它的界面让人熟悉:顶部有任务栏(即“面板”),可以用来启动应用程序、切换虚拟桌面,或在系统托盘查看通知信息。屏幕底部的快速访问停靠区让你可以启动经常使用的应用程序,如终端、文件管理器和网络浏览器。
![Image of Xfce desktop][6]
要开始一个新应用程序,点击左上角的应用程序按钮。这将打开一个应用程序启动菜单,顶部有常用的应用程序比如终端和文件管理。另外的应用程序会分组排列,这样你可以找到所需要的应用。
要开始一个新应用程序,点击左上角的应用程序按钮。这将打开一个应用程序启动菜单,顶部有常用的应用程序,比如终端和文件管理。其它的应用程序会分组排列,这样你可以找到所需要的应用。
![Image of desktop applications][7]
### 管理文件
Xfce 的文件管理器时叫 Thunar它能非常好地管理我的文件。我喜欢 Thunar 可以连接远程系统,在家里,我用一个开启 SSH 的树莓派作为个人文件服务器。Thunar 可以打开一个 SSH 文件传输窗口,这样我可以在笔记本电脑和树莓派之间拷贝文件。
Xfce 的文件管理器叫 Thunar它能很好地管理我的文件。我喜欢 Thunar 可以连接远程系统,在家里,我用一个开启了 SSH 的树莓派作为个人文件服务器。Thunar 可以打开一个 SSH 文件传输窗口,这样我就可以在笔记本电脑和树莓派之间拷贝文件。
![Image of Thunar remote][9]
另一个访问文件和文件夹的方式是通过屏幕底部的快速访问停靠栏。点击文件夹图标可以打开一个常规操作菜单,如在终端窗口打开一个文件夹、新建一个文件夹或进入指定文件夹等。
另一个访问文件和文件夹的方式是通过屏幕底部的快速访问停靠区。点击文件夹图标可以打开一个常用操作的菜单,如在终端窗口打开一个文件夹、新建一个文件夹或进入指定文件夹等。
![Image of desktop with open folders][10]
### 其它应用程序
热爱探索 Xfce 提供的其他应用程序。Mousepad 看起来像一个简单的文本编辑器但是比起纯文本编辑它包含更多有用的功能。Mousepad 支持许多文件类型,程序员和其他高级用户也许会非常喜欢。在文档菜单检验一下可用的部分编程语言列表。
不妨探索一下 Xfce 提供的其它应用程序。Mousepad 看起来像一个简单的文本编辑器但是比起纯文本编辑它包含更多有用的功能。Mousepad 支持许多文件类型,程序员和其他高级用户也许会非常喜欢。可以在文档菜单中查看一下部分编程语言的列表。
![Image of Mousepad file types][11]
@ -46,21 +49,20 @@ Xfce 的文件管理器时叫 Thunar它能非常好地管理我的文件。
![Image of Mousepad in color scheme solarized][12]
磁盘工具可以让你管理存储设备。虽然我不需要修改我的系统磁盘,但磁盘工具是一个初始化或重新格式化 USB 闪存设备的好方式。我认为这个界面非常简单好用。
![Image of disk utility][13]
我非常钦佩带有 Geany 集成开发的环境,我有一点惊讶一个旧系统可以如此流畅地运行一个完整的 IDE 开发软件
Geany 集成开发环境也给我留下了深刻印象我有点惊讶于一个完整的集成开发环境IDE能在一个旧系统上如此流畅地运行。Geany 宣称自己是一个“强大、稳定和轻量级的程序员文本编辑器,提供大量有用的功能,而不会拖累你的工作流程”。而这正是 Geany 所提供的。
我用一个简单的 “hello world” 程序测试 Geany当我输入每一个函数名称时很高兴地看到 IDE 弹出语法帮助,弹出的信息并不唐突且刚好提供了我需要的信息。同时 printf 函数非常容易记住,我总是忘记其它函数的选项顺序,比如 fputs 和 realloc,这就是我需要弹出语法帮助的地方。
我用一个简单的 “hello world” 程序测试 Geany当我输入每一个函数名称时很高兴地看到 IDE 弹出语法帮助,弹出的信息并不特别显眼,且刚好提供了我需要的信息。虽然我能很容易记住 `printf` 函数,但总是忘记诸如 `fputs``realloc` 之类的函数的选项顺序,这就是我需要弹出语法帮助的地方。
![Image of Geany workspace][14]
在 Xfce 里探索菜单寻找其它应用程序让你的工作更简单,你将找到可以播放音乐、访问终端或浏览网页的应用程序。
深入了解 Xfce 的菜单,寻找其它应用程序,让你的工作更简单,你将找到可以播放音乐、访问终端或浏览网页的应用程序。
当我安装 Linux 到笔记本电脑,在会议上演示一些样例后,发现 Linux 和 Xfce 桌面让这台旧电脑变得更时尚。这个系统运行得如此流畅,当会议结束后,我决定把这台笔记本电脑作为备用机。
当我在笔记本电脑上安装了 Linux在会议上做了一些演示后我发现 Linux 和 Xfce 桌面让这台旧电脑变得相当敏捷。这个系统运行得如此流畅,以至于当会议结束后,我决定把这台笔记本电脑作为备用机。
我喜爱在 Xfce 上使用应用程序工作,尽管它有非常低的系统开销和极简单的方法,但我并没有感觉到不够用,我可以用 Xfce 和上面的应用程序做任何事情。如果你有一台需要翻新的旧电脑,试试安装 Linux给旧硬件带来新的生命。
图片来源: (Jim Hall, CC BY-SA 40)
我确实喜欢在 Xfce 中工作和使用这些应用程序,尽管系统开销不大,使用也很简单,但我并没有感觉到不够用,我可以用 Xfce 和上面的应用程序做任何事情。如果你有一台需要翻新的旧电脑,试试安装 Linux给旧硬件带来新的生命。
--------------------------------------------------------------------------------
@ -69,7 +71,7 @@ via: https://opensource.com/article/22/6/linux-xfce-old-laptop
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[lightchaserhy](https://github.com/lightchaserhy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,13 +3,16 @@
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14736-1.html"
使用 Flatseal 管理 Flatpak 的权限
======
了解如何使用 Flatseal 应用管理 Flatpak 权限,它为你提供了一个友好的 GUI 和额外的功能。
![](https://img.linux.net.cn/data/attachment/album/202206/20/151550qkrkpjw4f9dpjo50.jpg)
> 了解如何使用 Flatseal 应用管理 Flatpak 权限,它为你提供了一个友好的 GUI 和额外的功能。
从新用户的角度来看,在 Linux 中安装应用可能是一个挑战。主要原因是有这么多的 [Linux 发行版][1]。而你需要为各种 Linux 发行版提供不同的安装方法或说明。对于一些用户来说,这可能会让他们不知所措。此外,对于开发者来说,为不同的发行版创建独立的软件包和构建也很困难。
@ -39,7 +42,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
当打开 Flatseal 应用时,它应该在左边的导航栏列出所有的 Flatpak 应用。而当你选择了一个应用,它就会在右边的主窗口中显示可用的权限设置。
现在,对于每个 Flatpak 权限控制,当前值显示在切换开关中。如果该权限正在使用中,它应该被设置。否则,它应该是灰色的。
现在,对于每个 Flatpak 权限控制,当前值显示在切换开关中。如果该权限正在使用中,它应该被启用。否则,它应该是灰色的。
首先,要设置权限,你必须先在左侧列表中选中相应的应用。然后,你就可以从权限列表中启用或禁用各个控制项。
@ -57,7 +60,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
![Figure 3: Telegram Desktop Flatpak App does not have permission to the home folders][4]
现在,如果我想允许所有的用户文件和任何特定的文件夹(例如:/home/Downloads),你可以通过打开启用开关来给予它。请看下面的图 4。
现在,如果我想允许所有的用户文件和某个特定的文件夹(例如:`/home/Downloads`),你可以通过打开启用开关来给予它。请看下面的图 4。
![Figure 4: Permission changed of Telegram Desktop to give access to folders][5]
@ -69,7 +72,7 @@ Flatseal 是一个 Flatpak 应用,它为你提供了一个友好的用户界
flatpak override org.telegram.desktop --filesystem=/home/Downloads
```
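顺带一提,除了图形界面,你也可以在命令行中检查当前生效的权限设置(下面是一个示意性的小例子,沿用上文的 `org.telegram.desktop` 应用 ID`flatpak override --show` 只列出你做过的覆盖修改,`flatpak info --show-permissions` 则列出应用声明的全部权限):

```
# 查看你对该应用所做的权限覆盖
flatpak override --show org.telegram.desktop

# 查看该应用清单中声明的全部权限
flatpak info --show-permissions org.telegram.desktop
```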
而要删除:
而要删除权限
```
flatpak override org.telegram.desktop --nofilesystem=/home/Downloads
@ -79,7 +82,7 @@ Flatseal 还有一个很酷的功能,它在用户特定的权限变化旁边
### 我可以在所有的 Linux 发行版中安装 Flatseal 吗?
是的,你可以把 [Flatseal][6] 作为 Flatpak 安装在所有 Linux 发行版中。你可以使用[本指南][7]设置你的系统,并运行以下命令进行安装。或者,[点击这里][8]直接启动特定系统的安装程序。
是的,你可以把 [Flatseal][6] 作为 Flatpak 安装在所有 Linux 发行版中。你可以使用 [本指南][7] 设置你的系统,并运行以下命令进行安装。或者,[点击这里][8] 直接启动特定系统的安装程序。
```
flatpak install flathub com.github.tchx84.Flatseal
@ -96,7 +99,7 @@ via: https://www.debugpoint.com/2022/06/manage-flatpak-permission-flatseal/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,23 +3,24 @@
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14734-1.html"
有研究表明,推特能够推动开源项目的普及
======
![推特][1]
由 HongBo Fang 博士领导的研究团队发现,推特是一种吸引更多人关注和贡献 GitHub 开源项目的有效方式。Fang 博士在国际软件工程会议上发表了这项名为“‘这真是太棒了!’估计推文对开源项目受欢迎程度和新贡献者的影响”的研究,并获得了杰出论文奖。这项研究显示,发送和一个项目有关的推文,导致了该项目受欢迎程度增加了 7%(在 GitHub 上至少增加了一颗 star贡献者数量增加了 2%。一个项目收到的推文越多,它收到的 star 和贡献者就越多。
由 HongBo Fang 博士领导的研究团队发现,推特是一种吸引更多人关注和贡献 GitHub 开源项目的有效方式。Fang 博士在国际软件工程会议上发表了这项名为“‘这真是太棒了!’:估计推文对开源项目受欢迎程度和新贡献者的影响”的研究,并获得了杰出论文奖。这项研究显示,发送与某个项目有关的推文,会使该项目的受欢迎程度增加 7%(在 GitHub 上至少增加一个星标),贡献者数量增加 2%。一个项目收到的推文越多,它收到的星标和贡献者就越多。
Fang 说:“我们已经意识到社交媒体在开源社区中变得越来越重要,吸引关注和新的贡献者将带来更高质量和更好的软件。”
大多数开源软件都是由志愿者创建和维护的。参与项目的人越多,结果就越好。开发者和其他人使用该软件、报告问题并努力解决这些问题。然而,不受欢迎的项目有可能得不到应有的关注。这些几乎全是志愿者的劳动力,维护着数百万人每天依赖的软件。例如,几乎每个 HTTPS 网站都使用开源的 OpenSSL 保护其内容。Heartbleed 是 2014 年在 OpenSSL 中发现的一个安全漏洞,企业为修复它花费了数百万美元。另一个开源软件 cURL 允许连接的设备相互发送数据,它被安装在大约 10 亿台设备上。开源软件之多,不胜枚举。
此次“推特对提高开源项目的受欢迎程度和吸引新贡献者的影响”的研究,其实是 “Vasilescu 数据挖掘与社会技术研究实验室” (STRUDEL) 的一个更大项目的其中一部分,该研究着眼于如何建立开源社区并且其工作更具可持续性。毕竟,支撑现代技术的数字基础设施、道路和桥梁都是开源软件。如果维护不当,这些基础设施可能会崩溃。
此次“推特对提高开源项目的受欢迎程度和吸引新贡献者的影响”的研究,其实是 “Vasilescu 数据挖掘与社会技术研究实验室”STRUDEL的一个更大项目的其中一部分,该研究着眼于如何建立开源社区并且其工作更具可持续性。毕竟,支撑现代技术的数字基础设施、道路和桥梁都是开源软件。如果维护不当,这些基础设施可能会崩溃。
研究人员检查了 44544 条推文,其中包含指向 2370 个开源 GitHub 存储库的链接,以证明这些推文确实吸引了新的 star 和项目贡献者。在这项研究中,研究人员使用了一种科学的方法:将 Twitter 上提及的 GitHub 项目的 star 和贡献者的增加,与 Twitter 上未提及的一组项目进行了比较。该研究还描述了高影响力推文的特征、可能被帖子吸引到项目的人的类型,以及这些人与通过其他方式吸引的贡献者有何不同。来自项目支持者而不是开发者的推文最能吸引注意力。请求针对特定任务或项目提供帮助的帖子会收到更高的回复率。推文往往会吸引新的贡献者,**他们是 GitHub 的新手,但不是经验不足的程序员**。还有,**新的关注可能不会带来新的帮助**。
研究人员检查了 44544 条推文,其中包含指向 2370 个开源 GitHub 存储库的链接,以证明这些推文确实吸引了新的星标和项目贡献者。在这项研究中,研究人员使用了一种科学的方法:将推特上提及的 GitHub 项目的星标和贡献者的增加,与推特上未提及的一组项目进行了比较。该研究还描述了高影响力推文的特征、可能被帖子吸引到项目的人的类型,以及这些人与通过其他方式吸引的贡献者有何不同。来自项目支持者而不是开发者的推文最能吸引注意力。请求针对特定任务或项目提供帮助的帖子会收到更高的回复率。推文往往会吸引新的贡献者,**他们是 GitHub 的新手,但不是经验不足的程序员**。还有,**新的关注可能不会带来新的帮助**。
提高项目受欢迎程度也存在其缺点,研究人员讨论后认为,它的潜在缺点之一,就是注意力和行动之间的差距。**更多的关注通常会导致更多的功能请求或问题报告,但不一定有更多的开发者来解决它们**。社交媒体受欢迎程度的提高,可能会导致有更多的“巨魔”或“有毒行为”出现在项目周围。
@ -32,7 +33,7 @@ via: https://www.opensourceforu.com/2022/06/according-to-studies-twitter-drives-
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,12 +3,13 @@
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14738-1.html"
Mattermost 7.0 发布,扩展了工作流平台
======
![Mattermost][1]
自 2016 年开源以来Mattermost 一直在开发一个用例不断增加的消息传递平台。6 月 16 日Mattermost 7.0 平台发布,其中包括了新的语音呼叫、工作流模板和用于开源技术的应用框架。新版本扩展了 2021 年 10 月发布的 6.0 版本引入的功能。一直以来Mattermost 都在与包括 Slack、Atlassian 和 Asana 在内的几家大公司竞争不断增长的协作工具市场。另一方面Mattermost 侧重于对开发者的支持,尽管该平台也可用于安全和 IT 运营。
@ -17,11 +18,11 @@ Mattermost 的软件同时提供有商业版和开源版,目前它们都升级
Tien 认为开源也关乎社区贡献。Mattermost 开源项目有超过 4000 名个人贡献者,他们贡献了超过 30000 行代码。
以前Mattermost 依赖集成第三方呼叫服务(例如 Zoom来启用语音呼叫功能。在 7.0 版本中,它通过开源的 WebRTC 协议引入了直接集成的呼叫功能,所有现代 Web 浏览器都支持该协议。直接集成呼叫功能的目标是为工作提供单一平台,这符合 Tien 对该平台的总体愿景。现在,除了提供集成工具以实现协作之外,该平台还会增加“工作流模板”功能,以帮助(用户)组织构建可重复的流程。
工作流概念采用了 <ruby>剧本<rt>playbooks</rt></ruby>,其中包含了为“特定类型的操作”所执行的动作和操作的清单。例如,在发生服务故障或网络安全事件时,公司可以为事件响应创建工作流模板。
工作流概念采用了 <ruby>剧本<rt>playbook</rt></ruby>,其中包含了为“特定类型的操作”所执行的动作和操作的清单。例如,在发生服务故障或网络安全事件时,公司可以为事件响应创建工作流模板。
这个清单可以链接到 Mattermost 操作例如让特定用户发起呼叫并协助生成报告。Tien 表示Mattermost 还与常见的开发者工具集成,并且工作流模板的功能将随着时间的推移而扩展,以便使用第三方工具来实现更多自动化。
这个清单可以链接到 Mattermost <ruby>操作<rt>operation</rt></ruby>例如让特定用户发起呼叫并协助生成报告。Tien 表示Mattermost 还与常见的开发者工具集成,并且工作流模板的功能将随着时间的推移而扩展,以便使用第三方工具来实现更多自动化。
--------------------------------------------------------------------------------
@ -30,7 +31,7 @@ via: https://www.opensourceforu.com/2022/06/mattermost-extends-workflow-platform
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,91 @@
[#]: subject: "Manjaro 21.3.0 Ruah Release Adds Latest Calmares 3.2, GNOME 42, and More Upgrades"
[#]: via: "https://news.itsfoss.com/manjaro-21-3-0-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Manjaro 21.3.0 Ruah Release Adds Latest Calamares 3.2, GNOME 42, and More Upgrades
======
Manjaro Linux 21.3.0 release packs in some of the latest and greatest updates, including an improved installer.
![manjaro 21.3.0][1]
Manjaro Linux is a rolling-release distribution. So, technically, you will be on the latest version if you regularly update your system.
It should not be a big deal to upgrade to Manjaro 21.3.0, considering I had already been running it without issues for a few days before the official announcement.
**Also,** you might want to read my initial experience [switching to Manjaro from Ubuntu][2] (if you're still on the fence).
So, what does the Manjaro 21.3.0 upgrade introduce?
### Manjaro 21.3.0: What's New?
![][3]
The desktop environments upgraded to their latest stable versions while the core [Linux Kernel 5.15 LTS][4] remains.
Also, this release includes the final Calamares v3.2 version. Let us take a look at the changes:
#### Calamares v3.2.59
Calamares v3.2.59 installer is the final release of the 3.2 series with meaningful improvements. This time the partition module includes support for LUKS partitions and more refinements to avoid settings that can mess up the Manjaro installation.
All the future releases for Calamares 3.2 will be bug fixes only.
#### GNOME 42 + Libadwaita
While the initial release included GNOME 42, now we have GNOME 42.2 available with the latest updates.
Overall, you get all the goodies introduced with [GNOME 42][5], including the system-wide dark mode, a modern user interface based on GTK 4 for GNOME apps, upgraded applications, and several other significant changes.
![][6]
#### KDE Plasma 5.24
Unfortunately, the release couldn't feature [KDE Plasma 5.25][7], considering it was released around the same week.
[KDE Plasma 5.24][8] is a nice upgrade, with a refreshed theme and an overview effect.
#### Xfce 4.16
With Xfce 4.16, the window manager received numerous updates and refinements to support fractional scaling and more capabilities.
### Download Manjaro 21.3.0
As of now, I have no issues with Manjaro 21.3.0 GNOME edition. Everything looks good, and the upgrade went smoothly.
However, you should always take backups if you do not want to re-install or lose your important files.
You can download the latest version from [Manjaro's download page][9]. The upgrade should be available through the pamac package manager.
In either case, you can enter the following command in the terminal to upgrade:
```
sudo pacman -Syu
```
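Before a large upgrade, many Manjaro users also refresh their mirror list first (a hypothetical but common pre-upgrade step; the `pacman-mirrors` tool ships with Manjaro):

```
# rank fast mirrors, then force-refresh the databases and upgrade
sudo pacman-mirrors --fasttrack && sudo pacman -Syyu
```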
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/manjaro-21-3-0-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-21-3-0-ruah-release.jpg
[2]: https://news.itsfoss.com/manjaro-linux-experience/
[3]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-gnome-42-2-1024x576.jpg
[4]: https://news.itsfoss.com/linux-kernel-5-15-release/
[5]: https://news.itsfoss.com/gnome-42-release/
[6]: https://news.itsfoss.com/wp-content/uploads/2022/06/manjaro-21-3-neofetch.png
[7]: https://news.itsfoss.com/kde-plasma-5-25-release/
[8]: https://news.itsfoss.com/kde-plasma-5-24-lts-release/
[9]: https://manjaro.org/download/

View File

@ -0,0 +1,39 @@
[#]: subject: "Microsoft To Charge For Available Open Source Software In Microsoft Store"
[#]: via: "https://www.opensourceforu.com/2022/06/microsoft-to-charge-for-available-open-source-software-in-microsoft-store/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Microsoft To Charge For Available Open Source Software In Microsoft Store
======
![microsoft][1]
On June 16, 2022, Microsoft updated the Microsoft Store policies. One of the changes prohibits publishers from charging fees for open source or freely available software. Another prohibits irrationally high pricing in the store. If you've been to the Microsoft Store in the last few years, you've probably noticed that it's becoming more and more home to open source and free products. While this would be beneficial if the original developers had uploaded the apps and games to the store, that is not the case, because the uploads were made by third parties.
Worse, many of these programs are only available as paid applications rather than free downloads. In other words, Microsoft customers must pay to purchase a Store version of an app that is free elsewhere. In the Store, free and paid versions coexist at times. Paying for a free app is bad enough, but this isn't the only problem that users may encounter when they make the purchase. Updates may also be a concern, as copycat programs may not be updated as frequently or as quickly as the source applications.
In the updated Microsoft Store Policies, Microsoft notes under 10.8.7:
In cases where you determine the pricing for your product or in-app purchases, all pricing for your digital products or services, including sales or discounts, must:
Comply with all applicable laws, regulations, and regulatory guidelines, including the Federal Trade Commission's Guides Against Deceptive Pricing. You must not attempt to profit from open-source or other software that is otherwise freely available, nor should your product be priced irrationally high in comparison to the features and functionality it provides.
The new policies are confirmed in the updated section. Open source and free products may no longer be sold on the Microsoft Store if they are generally available for free, and publishers may no longer charge irrationally high prices for their products. Developers of open source and free applications may charge for their products on the Microsoft Store; for example, the developer of Paint.net does so. Many applications will be removed from the Store if Microsoft enforces the policies. Developers could previously report applications to Microsoft, but the new policies give Microsoft direct control over application listings and submissions.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/microsoft-to-charge-for-available-open-source-software-in-microsoft-store/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/microsoft-e1655714723942.jpg

View File

@ -0,0 +1,75 @@
[#]: subject: "Mysterious GeckoLinux Creator Reveals a New Debian Remix Distro"
[#]: via: "https://news.itsfoss.com/debian-remix-spiral-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Mysterious GeckoLinux Creator Reveals a New Debian Remix Distro
======
GeckoLinux creator unveils a new Linux distribution based on Debian, focusing on simplicity and usability.
![spiral linux][1]
The creator of GeckoLinux (providing an improved openSUSE experience) remains anonymous.
And, I won't comment on whether it is a good or bad thing, but now the developer is back with another similar project based on Debian.
**SpiralLinux** is a Debian-based distribution that aims to make Debian usable for end users.
### SpiralLinux: A Distro Built from Debian
![spirallinux][2]
It is no surprise that most user-friendly Linux distributions have Debian as their original base. Ubuntu has made a lot of improvements on top of it to provide a good desktop experience, even for users without prior Linux experience.
So, how is this different?
Well, the creator says that this project aims to help you use Debian with all its core strengths without customizing a lot of things.
SpiralLinux is a close-to-vanilla experience if you want to use Debian on your desktop. You can also upgrade to the latest stable Debian version (or unstable/testing) as you require without losing the user-friendly customizations.
In other words, SpiralLinux makes Debian fit for desktop usage with minimal effort from the end user.
And, to achieve this, SpiralLinux uses official Debian package repositories, providing a live installation method to let you set up a customized Debian system.
Additionally, you have the following features in SpiralLinux:
* VirtualBox support out-of-the-box
* Preinstalled proprietary media codecs and non-free package repositories
* Preinstalled proprietary firmware
* Printer support
* Flatpak support through a GUI (software center)
* zRAM swap enabled by default
* Multiple desktop environments (Cinnamon, XFCE, Gnome, Plasma, MATE, Budgie, LXQt)
Considering Debian always sticks to the open-source and free packages, the end-user has to figure out the codecs, drivers, and other packages to make a lot of things work for a proper desktop experience.
And, it seems like SpiralLinux could live as a useful alternative to Debian just like GeckoLinux is to openSUSE.
### Download SpiralLinux
If you always wanted to try Debian, but did not want to fiddle around a lot for initial configuration, you can try SpiralLinux.
You can head to its official webpage hosted on GitHub to explore more about it.
[SpiralLinux][3]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/debian-remix-spiral-linux/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/spiral-linux-debian-remix-distro.jpg
[2]: https://news.itsfoss.com/wp-content/uploads/2022/06/spirallinux.jpg
[3]: https://spirallinux.github.io/

View File

@ -0,0 +1,55 @@
[#]: subject: "The Final Version Of 7-Zip 22.00 Is Now Available"
[#]: via: "https://www.opensourceforu.com/2022/06/the-final-version-of-7-zip-22-00-is-now-available/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The Final Version Of 7-Zip 22.00 Is Now Available
======
![7-zip-2200-500x312][1]
7-Zip is a well-known open source file archiver for Windows, Mac, and Linux. 7-Zip 22.00 is now available; it is the first stable release of 2022. The previous release, 7-Zip 21.07, came out in December 2021. Users of 7-Zip can obtain the most recent version of the application from the official website. Downloads are available for Windows 64-bit, 32-bit, and ARM versions. The program is still compatible with out-of-date versions of Windows, such as XP and Vista. All officially supported Windows versions, including server versions, are also supported. 7-Zip 22.00 for Linux is already available for download, but the Mac OS version is not.
7-Zip 22.00 includes several new features that enhance the application's functionality. The archiver now supports the extraction of Apple File System APFS images. Several years ago, Apple introduced the Apple File System in Mac OS 10.13 and iOS. The file system has been designed with flash and solid state drive storage in mind.
7-Zip 22.00 includes several enhancements to its TAR archive support. Using the switches -ttar -mm=pax or -ttar -mm=posix, 7-Zip can now create TAR archives in POSIX tar format. Additionally, using the switches -ttar -mm=pax -mtp=3 -mtc -mta, 7-Zip can store file timestamps with high precision in tar/pax archives.
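As a concrete illustration, those switches could be combined in a single invocation (a hypothetical example based on the switch list above; the archive and directory names are placeholders):

```
# create a POSIX (pax) tar archive with high-precision timestamps
7z a -ttar -mm=pax -mtp=3 -mtc -mta backup.tar mydir/
```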
Finally, Linux users can use the following two new switches with TAR archives:
* -snoi: save owner/group IDs in the archive, or copy owner/group IDs from the archive to the extracted files.
* -snon: keep owner/group names in the archive
7-Zip 22.00 for Windows adds support for the -snz switch, which propagates the Zone.Identifier stream to extracted files. Windows uses the stream for security purposes; it can be used to determine whether a file was created locally or downloaded from the Internet.
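A minimal hypothetical invocation on Windows (the archive name is a placeholder):

```
7z x downloaded.zip -snz
```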
In the “add to archive” configuration dialogue, 7-Zip 22.00 includes a new options window. It includes options for changing the timestamp precision, changing other time-related configuration options, and preventing the source file's last access time from changing.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/the-final-version-of-7-zip-22-00-is-now-available/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/7-zip-2200-500x312-1.jpg

View File

@ -0,0 +1,69 @@
[#]: subject: "This Open-Source Project Proves Chrome Extensions Can Track You"
[#]: via: "https://news.itsfoss.com/chrome-extension-tracking/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
This Open-Source Project Proves Chrome Extensions Can Track You
======
Is this a reason to ditch Chromium-based browsers and start using Firefox? Maybe, you decide.
![chrome extension tracker][1]
Even with all the privacy extensions and fancy protection features, there are still ways to identify you or track you.
Note that this is not the case for all browsers; here, we focus on Chromium-based browsers, with Google Chrome as the prime suspect.
While detecting installed extensions in a Chromium browser was already possible, numerous extensions implemented certain protections to prevent it.
However, a security researcher, also known as “**z0ccc**”, discovered a new method to detect installed Chrome browser extensions, which can be further used to track you through browser fingerprinting.
**In case you did not know**: Browser fingerprinting refers to the tracking method where various information about your device/browser gets collected to create a unique fingerprint ID (hash) to identify you across the internet. Information like browser name, version, operating system, extensions installed, screen resolution, and similar technical data.
It sounds like a harmless data collection technique, but you can be tracked online with this tracking method.
### Detecting Google Chrome Extensions
The researcher shared an open-source project “**Extension Fingerprints**” which you can use to test if Chrome extensions installed on your browser are being detected.
The new technique involves a time-difference method where the tool compares the time to fetch resources for the extensions. A protected extension takes more time to fetch compared to other extensions not installed on your browser. So, that helps identify some extensions from the list of over 1000 extensions.
The point is—even with all the advancements and techniques to prevent tracking, extensions from the Google Web Store can be detected.
![][2]
And, with the installed extensions detected, you can be tracked online using browser fingerprinting.
Surprisingly, even if you have extensions like **uBlocker**, **AdBlocker**, or **Privacy Badger** (some popular privacy-focused extensions), they all get detected using this method.
You can explore all the technical details on its [GitHub page][3]. And, if you want to test it yourself, head to its [Extension Fingerprints site][4] to check for yourself.
### Firefox To The Rescue?
It seems like it, considering [I keep coming back to Firefox][5] for various reasons.
The discovered method should work with all the Chromium-based browsers. I tested it with Brave and Google Chrome. The researcher also mentions that the tool does not work with Microsoft Edge using extensions from Microsoft's store. But, it is possible with the same method of tracking.
Mozilla Firefox is safe from this because the extension IDs for every browser instance are unique, as the researcher suggested.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/chrome-extension-tracking/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/opensource-project-tracker-chrome-extensions.jpg
[2]: https://news.itsfoss.com/wp-content/uploads/2022/06/extension-fingerprints.jpg
[3]: https://github.com/z0ccc/extension-fingerprints
[4]: https://z0ccc.github.io/extension-fingerprints/
[5]: https://news.itsfoss.com/why-mozilla-firefox/

View File

@ -0,0 +1,90 @@
[#]: subject: "Should Businesses Opt for Serverless Computing?"
[#]: via: "https://www.opensourceforu.com/2021/12/should-businesses-opt-for-serverless-computing/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Should Businesses Opt for Serverless Computing?
======
Serverless computing removes the server from the equation and enables businesses to give undivided attention to the application functionality. Does that make serverless computing the automatic choice? Let's find out.
![Severless-Cloud-Computing-Featured-image-OSFY-Oct-2021][1]
Until recently, almost every product manager organised his or her engineering resources into two separate teams — the development team and the operations team. The development team is usually involved in coding, testing and building the application functionality, whereas the operations team takes the responsibility of delivery, deployment, and operational maintenance of the application.
When a development team builds an e-commerce application, the operations team sets up the server to host that application. Setting up a server involves several aspects, which include:
* Choosing the appropriate hardware and operating system
* Applying the required set of patches
* Installing applicable server environments like JDK, Python, Tomcat, NodeJS, etc
* Deploying, configuring, and provisioning the actual application
* Opening and securing appropriate ports
* Setting up required database engines
… and the list just goes on.
Besides this, managers also break their heads on capacity planning. After all, any non-trivial application is expected to be 100 per cent available, reliable and scalable all the time. This requires optimal investment in the hardware. As we all know, shortage of hardware at crucial periods results in business loss, and on the other hand, redundant hardware hurts the bottomline. Capacity planning is crucial, irrespective of whether the application is targeted for the on-premises data centre or for cloud infrastructure.
By now, it is clear that businesses spend a lot of time not only in building the functionality but also in delivering it.
Serverless computing aims at offering a seamless way to deliver the functionality without worrying about the server setup and maintenance. In other words, a serverless computing platform offers a ready-to-use environment in such a way that the businesses build and deploy the applications as a set of smaller functions as quickly as possible. That is why this approach is referred to as Function as a Service (FaaS).
Remember that there is still a server in serverless computing, but it is taken care of by the FaaS vendors like AWS, Microsoft, and Google.
For example, AWS offers a serverless computing environment in the name of Lambda functions. Developers can choose to build the applications as a set of Lambda functions that can be written in NodeJS, Java, Python, and a few other languages. AWS offers a ready-to-use environment to deploy these functions. It also offers ready-to-use database servers, file servers, application gateways, authentication servers, etc.
Similarly, Microsoft Azure offers an environment to build and deploy Azure functions in languages like C#.
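To make this concrete, here is a minimal sketch of packaging, deploying, and invoking a function with the AWS CLI (all names here are hypothetical; it assumes a `lambda_function.py` containing a `lambda_handler` entry point and an existing IAM execution role):

```
# package the function code
zip function.zip lambda_function.py

# create the function; runtime, handler, and role ARN are illustrative
aws lambda create-function --function-name hello-fn \
    --runtime python3.9 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/lambda-basic-execution

# invoke it and write the result to response.json
aws lambda invoke --function-name hello-fn response.json
```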
### Why serverless?
There are two main factors driving the popularity of serverless computing.
#### Ready-to-use environment
Obviously, this is the topmost selling point in favour of serverless computing. Businesses need not procure/book hardware or instances in advance, or worry about licences and setting up and provisioning the server. And they need not bother about scaling up and down. All of this is taken care of by the FaaS vendor.
#### Optimal cost

Since FaaS vendors always charge the clients based on the utilisation of the environment (pay-as-you-use model), businesses need not worry about upfront costs and wastage of resources. For example, AWS charges the clients based on the number of requests a Lambda function receives, the number of queries run on a table, etc.
### Challenges with serverless computing
Like with any other approach, serverless computing is also not the perfect approach that everyone can blindly follow. It has its own set of limitations. Here are a few of them.
#### Vendor locking
The first and most important problem associated with serverless computing is that functions like Lambda or Azure Functions are to be written using vendor-supplied APIs. For example, the functions that are written using an AWS Lambda API cannot be deployed into Google Cloud and vice versa. Hence, serverless computing forces businesses to commit to a vendor for years. The success or failure of the application depends not only on the functionality but also on the ability of the vendor with respect to performance, etc.
#### Programming language
No serverless computing platform supports all the possible programming languages. Moreover, it may not support all the versions of a given programming language. The application development teams are constrained to choose only the languages that are offered. This may turn out to be very crucial in terms of the capabilities of the team.
#### Optimal cost, really?
It all depends on the usage of the resources. If your application is attracting a huge load, like millions of requests per second, the bill that you foot might be exorbitant. At such a scale, having your own server on-premises or on the cloud might work out cheaper. It doesn't mean that applications with Web-scale are not suitable for serverless computing. It all boils down to the way you architect around the platform and the deal you sign with the vendor.
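To put rough, hypothetical numbers on that: at AWS's published rate of about $0.20 per million Lambda requests, a sustained load of one million requests per second would cost about $0.20 every second for invocations alone, or upwards of $17,000 a day, before any compute-duration charges are added.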
#### Ecosystem
No application is written for an isolated environment. It requires other components like data stores, databases, security engines, gateways, messaging servers, queues, cache, etc. Every platform offers its own set of such tools. For instance, AWS offers Dynamo DB as one of the NoSQL solutions. Obviously, other vendors offer their own NoSQL solutions. The teams are thus forced to architect the applications based on the chosen platform. Though most of the commercial FaaS vendors offer one or another component for a particular requirement, not every component may be best-in-class.
### What about containers?
Many of us migrated to containerised deployment models in the last decade, since they offer a lightweight alternative to the costly machines, physical or virtual. With orchestration tools like Kubernetes, we love to deploy containerised applications and meet Web-scale requirements. Containers offer a certain degree of isolation from the underlying environment, which makes deployment relatively easy. However, we still need investments in hardware (on-premises or cloud), licences, networking, provisioning, etc, which demands forward planning, applicable technical skills, and careful monitoring. Serverless computing frees us even from these responsibilities, albeit with its own set of pros and cons.
### Road ahead
We are in the days of continuous development, continuous integration, and continuous deployment. Every business is facing competition. Time to market (TTM) plays a significant role in gaining and retaining customers. In this backdrop, businesses love to spend more time churning out the functionality as quickly as possible rather than struggling with the nitty-gritty of deployment and maintenance. Serverless computing has the potential to meet these demands. Big players are committing huge investments to make FaaS as seamless and as affordable as possible. The future looks bright for serverless computing.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2021/12/should-businesses-opt-for-serverless-computing/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2021/10/Severless-Cloud-Computing-Featured-image-OSFY-Oct-2021.jpg

View File

@ -0,0 +1,184 @@
[#]: subject: "7 summer book recommendations from open source enthusiasts"
[#]: via: "https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list"
[#]: author: "Joshua Allen Holm https://opensource.com/users/holmja"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 summer book recommendations from open source enthusiasts
======
Members of the Opensource.com community recommend this mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics.
![Ceramic mug of tea or coffee with flowers and a book in front of a window][1]
Image by: Photo by [Carolyn V][2] on [Unsplash][3]
It is my great pleasure to introduce Opensource.com's 2022 summer reading list. This year's list contains seven wonderful reading recommendations from members of the Opensource.com community. You will find a nice mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics. I hope you find something on this list that interests you.
Enjoy!
![Book title 97 Things Every Java Programmer Should Know][4]
Image by: O'Reilly Press
**[97 Things Every Java Programmer Should Know: Collective Wisdom from the Experts, edited by Kevlin Henney and Trisha Gee][5]**
*[Recommendation written by Seth Kenlon][6]*
Written by 73 different authors working in all aspects of the software industry, the secret to this book's greatness is that it actually applies to much more than just Java programming. Of course, some chapters lean into Java, but there are topics like "Be aware of your container surroundings", "Deliver better software, faster", and "Don't hIDE your tools" that apply to development regardless of language.
Better still, some chapters apply to life in general. "Break problems and tasks into small chunks" is good advice on how to tackle any problem, "Build diverse teams" is important for every group of collaborators, and "From puzzles to products" is a fascinating look at how the mind of a puzzle-solver can apply to many different job roles.
Each chapter is just a few pages, and with 97 to choose from, it's easy to skip over the ones that don't apply to you. Whether you write Java code all day, just dabble, or if you haven't yet started, this is a great book for geeks interested in code and the process of software development.
![Book title A City is Not a Computer][7]
Image by: Princeton University Press
**[A City is Not a Computer: Other Urban Intelligences, by Shannon Mattern][8]**
*[Recommendation written by Scott Nesbitt][9]*
These days, it's become fashionable (if not inevitable) to make everything *smart*: Our phones, our household appliances, our watches, our cars, and, especially, our cities.
With the latter, that means putting sensors everywhere, collecting data as we go about our business, and pushing information (whether useful or not) to us based on that data.
This begs the question, does embedding all that technology in a city make it smart? In *A City Is Not a Computer*, Shannon Mattern argues that it doesn't.
A goal of making cities smart is to provide better engagement with and services to citizens. Mattern points out that smart cities often "aim to merge the ideologies of technocratic managerialism and public service, to reprogram citizens as 'consumers' and 'users'." That, instead of encouraging citizens to be active participants in their cities' wider life and governance.
Then there's the data that smart systems collect. We don't know what and how much is being gathered. We don't know how it's being used and by whom. There's *so much* data being collected that it overwhelms the municipal workers who deal with it. They can't process it all, so they focus on low-hanging fruit while ignoring deeper and more pressing problems. That definitely wasn't what cities were promised when they were sold smart systems as a balm for their urban woes.
*A City Is Not a Computer* is a short, dense, well-researched polemic against embracing smart cities because technologists believe we should. The book makes us think about the purpose of a smart city, who really benefits from making a city smart, and makes us question whether we need to or even should do that.
![Book title git sync murder][10]
Image by: Tilted Windmill Press
**[git sync murder, by Michael Warren Lucas][11]**
*[Recommendation written by Joshua Allen Holm][12]*
Dale Whitehead would rather stay at home and connect to the world through his computer's terminal, especially after what happened at the last conference he attended. During that conference, Dale found himself in the role of an amateur detective solving a murder. You can read about that case in the first book in this series, *git commit murder*.
Now, back home and attending another conference, Dale again finds himself in the role of detective. *git sync murder* finds Dale attending a local tech conference/sci-fi convention where a dead body is found. Was it murder or just an accident? Dale, now the "expert" on these matters, finds himself dragged into the situation and takes it upon himself to figure out what happened. To say much more than that would spoil things, so I will just say *git sync murder* is engaging and enjoyable to read. Reading *git commit murder* first is not necessary to enjoy *git sync murder*, but I highly recommend both books in the series.
Michael Warren Lucas's *git murder* series is perfect for techies who also love cozy mysteries. Lucas has literally written the book on many complex technical topics, and it carries over to his fiction writing. The characters in *git sync murder* talk tech at conference booths and conference social events. If you have not been to a conference recently because of COVID and miss the experience, Lucas will transport you to a tech conference with the added twist of a murder mystery to solve. Dale Whitehead is an interesting, if somewhat unorthodox, cozy mystery protagonist, and I think most Opensource.com readers would enjoy attending a tech conference with him as he finds himself thrust into the role of amateur sleuth.
![Book title Kick Like a Girl][13]
Image by: Inner Wings Foundation
**[Kick Like a Girl, by Melissa Di Donato Roos][14]**
*[Recommendation written by Joshua Allen Holm][15]*
Nobody likes to be excluded, but that is what happens to Francesca when she wants to play football at the local park. The boys won't play with her because she's a girl, so she goes home upset. Her mother consoles her by relating stories about various famous women who have made an impact in some significant way. The historical figures detailed in *Kick Like a Girl* include women from throughout history and from many different fields. Readers will learn about Frida Kahlo, Madeleine Albright, Ada Lovelace, Rosa Parks, Amelia Earhart, Marie Curie, Valentina Tereshkova, Florence Nightingale, and Malala Yousafzai. After hearing the stories of these inspiring figures, Francesca goes back to the park and challenges the boys to a football match.
*Kick Like a Girl* features engaging writing by Melissa Di Donato Roos (SUSE's CEO) and excellent illustrations by Ange Allen. This book is perfect for young readers, who will enjoy the rhyming text and colorful illustrations. Di Donato Roos has also written two other books for children, *How Do Mermaids Poo?* and *The Magic Box*, both of which are also worth checking out.
![Book title Mine!][16]
Image by: Doubleday
**[Mine!: How the Hidden Rules of Ownership Control Our Lives, by Michael Heller and James Salzman][17]**
*[Recommendation written by Bryan Behrenshausen][18]*
"A lot of what you know about ownership is wrong," authors Michael Heller and James Salzman write in *Mine!* It's the kind of confrontational invitation people drawn to open source can't help but accept. And this book is certainly one for open source aficionados, whose views on ownership—of code, of ideas, of intellectual property of all kinds—tend to differ from mainstream opinions and received wisdom. In this book, Heller and Salzman lay out the "hidden rules of ownership" that govern who controls access to what. These rules are subtle, powerful, deeply historical conventions that have become so commonplace they just seem incontrovertible. We know this because they've become platitudes: "First come, first served" or "You reap what you sow." Yet we see them play out everywhere: On airplanes in fights over precious legroom, in the streets as neighbors scuffle over freshly shoveled parking spaces, and in courts as juries decide who controls your inheritance and your DNA. Could alternate theories of ownership create space for rethinking some essential rights in the digital age? The authors certainly think so. And if they're correct, we might respond: Can open source software serve as a model for how ownership works—or doesn't—in the future?
![Book Title Not All Fairy Tales Have Happy Endings][19]
Image by: Lulu.com
**[Not All Fairy Tales Have Happy Endings: The Rise and Fall of Sierra On-Line, by Ken Williams][20]**
*[Recommendation written by Joshua Allen Holm][21]*
During the 1980s and 1990s, Sierra On-Line was a juggernaut in the computer software industry. From humble beginnings, this company, founded by Ken and Roberta Williams, published many iconic computer games. King's Quest, Space Quest, Quest for Glory, Leisure Suit Larry, and Gabriel Knight are just a few of the company's biggest franchises.
*Not All Fairy Tales Have Happy Endings* covers everything from the creation of Sierra's first game, [Mystery House][22], to the company's unfortunate and disastrous acquisition by CUC International and the aftermath. The Sierra brand would live on for a while after the acquisition, but the Sierra founded by the Williams was no more. Ken Williams recounts the entire history of Sierra in a way that only he could. His chronological narrative is interspersed with chapters providing advice about management and computer programming. Ken Williams had been out of the industry for many years by the time he wrote this book, but his advice is still extremely relevant.
Sierra On-Line is no more, but the company made a lasting impact on the computer gaming industry. *Not All Fairy Tales Have Happy Endings* is a worthwhile read for anyone interested in the history of computer software. Sierra On-Line was at the forefront of game development during its heyday, and there are many valuable lessons to learn from the man who led the company during those exciting times.
![Book title The Soul of a New Machine][23]
Image by: Back Bay Books
**[The Soul of a New Machine, by Tracy Kidder][24]**
*[Recommendation written by Guarav Kamathe][25]*
I am an avid reader of the history of computing. It's fascinating to know how these intelligent machines that we have become so dependent on (and often take for granted) came into being. I first heard of [The Soul of a New Machine][26] via [Bryan Cantrill][27]'s [blog post][28]. This is a non-fiction book written by [Tracy Kidder][29] and published in 1981 for which he [won a Pulitzer prize][30]. Imagine it's the 1970s, and you are part of the engineering team tasked with designing the [next generation computer][31]. The backdrop of the story begins at Data General Corporation, a then mini-computer vendor who was racing against time to compete with the 32-bit VAX computers from Digital Equipment Corporation (DEC). The book outlines how two competing teams within Data General, both wanting to take a shot at designing the new machine, results in a feud. What follows is a fascinating look at the events that unfold. The book provides insights into the minds of the engineers involved, the management, their work environment, the technical challenges they faced along the way and how they overcame them, how stress affected their personal lives, and much more. Anybody who wants to know what goes into making a computer should read this book.
That is the 2022 suggested reading list. It provides a variety of great options that I believe will provide Opensource.com readers with many hours of thought-provoking entertainment. Be sure to check out our previous reading lists for even more book recommendations.
* [2021 Opensource.com summer reading list][32]
* [2020 Opensource.com summer reading list][33]
* [2019 Opensource.com summer reading list][34]
* [2018 Open Organization summer reading list][35]
* [2016 Opensource.com summer reading list][36]
* [2015 Opensource.com summer reading list][37]
* [2014 Opensource.com summer reading list][38]
* [2013 Opensource.com summer reading list][39]
* [2012 Opensource.com summer reading list][40]
* [2011 Opensource.com summer reading list][41]
* [2010 Opensource.com summer reading list][42]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list
作者:[Joshua Allen Holm][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/tea-cup-mug-flowers-book-window.jpg
[2]: https://unsplash.com/@sixteenmilesout?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://opensource.com/sites/default/files/2022-06/97_Things_Every_Java_Programmer_Should_Know_1.jpg
[5]: https://www.oreilly.com/library/view/97-things-every/9781491952689/
[6]: https://opensource.com/users/seth
[7]: https://opensource.com/sites/default/files/2022-06/A_City_is_Not_a_Computer_0.jpg
[8]: https://press.princeton.edu/books/paperback/9780691208053/a-city-is-not-a-computer
[9]: https://opensource.com/users/scottnesbitt
[10]: https://opensource.com/sites/default/files/2022-06/git_sync_murder_0.jpg
[11]: https://mwl.io/fiction/crime#gsm
[12]: https://opensource.com/users/holmja
[13]: https://opensource.com/sites/default/files/2022-06/Kick_Like_a_Girl.jpg
[14]: https://innerwings.org/books/kick-like-a-girl
[15]: https://opensource.com/users/holmja
[16]: https://opensource.com/sites/default/files/2022-06/Mine.jpg
[17]: https://www.minethebook.com/
[18]: https://opensource.com/users/bbehrens
[19]: https://opensource.com/sites/default/files/2022-06/Not_All_Fairy_Tales.jpg
[20]: https://kensbook.com/
[21]: https://opensource.com/users/holmja
[22]: https://en.wikipedia.org/wiki/Mystery_House
[23]: https://opensource.com/sites/default/files/2022-06/The_Soul_of_a_New_Machine.jpg
[24]: https://www.hachettebookgroup.com/titles/tracy-kidder/the-soul-of-a-new-machine/9780316204552/
[25]: https://opensource.com/users/gkamathe
[26]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[27]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[28]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[29]: https://en.wikipedia.org/wiki/Tracy_Kidder
[30]: https://www.pulitzer.org/winners/tracy-kidder
[31]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[32]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list
[33]: https://opensource.com/article/20/6/summer-reading-list
[34]: https://opensource.com/article/19/6/summer-reading-list
[35]: https://opensource.com/open-organization/18/6/summer-reading-2018
[36]: https://opensource.com/life/16/6/2016-summer-reading-list
[37]: https://opensource.com/life/15/6/2015-summer-reading-list
[38]: https://opensource.com/life/14/6/annual-reading-list-2014
[39]: https://opensource.com/life/13/6/summer-reading-list-2013
[40]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading
[41]: https://opensource.com/life/11/7/summer-reading-list
[42]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list

View File

@ -2,7 +2,7 @@
[#]: via: "https://www.debugpoint.com/2021/08/enable-minimize-maximize-elementary/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,299 +0,0 @@
[#]: subject: "Apache Kafka: Asynchronous Messaging for Seamless Systems"
[#]: via: "https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apache Kafka: Asynchronous Messaging for Seamless Systems
======
Apache Kafka is one of the most popular open source message brokers. Found in almost all microservices environments, it has become an important component of Big Data manipulation. This article gives a brief description of Apache Kafka, followed by a case study that demonstrates how it is used.
![Digital-backgrund-connecting-in-globe][1]
Have you ever wondered how e-commerce platforms are able to handle immense traffic without getting stuck? Ever thought about how OTT platforms are able to deliver content to millions of users, smoothly and simultaneously? The key lies in their distributed architecture.
A system designed around distributed architecture is made up of multiple functional components. These components are usually spread across several machines, which collaborate with each other by exchanging messages asynchronously over a network. Asynchronous messaging is what enables scalable, non-blocking communication among components, thereby allowing smooth functioning of the overall system.
### Asynchronous messaging
The common features of asynchronous messaging are:
* The producers and consumers of the messages are not aware of each other. They join and leave the system without the knowledge of the others.
* A message broker acts as the intermediary between the producers and consumers.
* The producers associate each of the messages with a type, known as a topic. A topic is just a simple string.
* It is possible that producers send messages on multiple topics, and multiple producers send messages on the same topic.
* The consumers register with the broker for messages on one or more topics.
* The producers send the messages only to the broker, and not to the consumers.
* The broker, in turn, delivers the messages to all the consumers that are registered against the topic.
* The producers do not expect any response from the consumers. In other words, the producers and consumers do not block each other.
There are several message brokers available in the market, and Apache Kafka is one of the most popular among them.
### Apache Kafka
Apache Kafka is an open source distributed messaging system with streaming capabilities, developed by the Apache Software Foundation. Architecturally, it is a cluster of several brokers that are coordinated by the Apache Zookeeper service. These brokers share the load on the cluster while receiving, persisting, and delivering the messages.
#### Partitions
Kafka writes messages into buckets known as partitions. A given partition holds messages only on one topic. For example, Kafka writes messages on the topic heartbeats into the partition named *heartbeats-0*, irrespective of the producer of the messages.
![Figure 1: Asynchronous messaging][2]
However, in order to leverage the cluster-wide parallel processing capabilities of Kafka, administrators often create more than one partition for a given topic. For instance, if the administrator creates three partitions for the topic heartbeats, Kafka names them *heartbeats-0*, *heartbeats-1*, and *heartbeats-2*. Kafka writes the heartbeat messages across all three partitions in such a way that the load is evenly distributed.
There is yet another possible scenario in which the producers associate each of the messages with a key. For example, a component uses C1 as the key while another component uses C2 as the key for the messages that they produce on the topic heartbeats. In this scenario, Kafka makes sure that the messages on a topic with a specific key are always found only in one partition. However, it is quite possible that a given partition may hold messages with different keys. Figure 2 presents a possible message distribution among the partitions.
![Figure 2: Message distribution among the partitions][3]
#### Leaders and ISRs
Kafka maintains several partitions across the cluster. The broker on which a partition is maintained is called the leader for the specific partition. Only the leader receives and serves the messages from its partitions.
But what happens to a partition if its leader crashes? To ensure business continuity, every leader replicates its partitions on other brokers. The latter act as the in-sync-replicas (ISRs) for the partition. In case the leader of a partition crashes, Zookeeper conducts an election and names an ISR as the new leader. Thereafter, the new leader takes the responsibility of writing and serving the messages for that partition. Administrators can choose how many ISRs are to be maintained for a topic.
![Figure 3: Command-line producer][4]
#### Message persistence
The brokers map each of the partitions to a specific file on the disk, for persistence. By default, they keep the messages for a week on the disk! The messages and their order cannot be altered once they are written to a partition. Administrators can configure policies like message retention, compaction, etc.
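For example, with the single-broker setup described later in this article, the retention period for a topic named *heartbeats* could be adjusted with the `kafka-configs.sh` tool bundled with Kafka (a sketch; the topic name and value are illustrative):
```
./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name heartbeats \
  --add-config retention.ms=604800000
```
Here, `retention.ms=604800000` keeps the messages for seven days, matching the default retention mentioned above.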
![Figure 4: Command-line consumer][5]
#### Consuming the messages
Unlike most other messaging systems, Apache Kafka does not actively deliver the messages to its consumers. Instead, it is the responsibility of the consumers to listen to the topics and read the messages. A consumer can read messages from more than one partition of a topic. And it is also possible that multiple consumers read messages from a given partition. Kafka guarantees that no message is read more than once by a given consumer.
Kafka also expects that every consumer is identified with a group ID. Consumers with the same group ID form a group. Typically, in order to read messages from N number of topic partitions, an administrator creates a group with N number of consumers. This way, each consumer of the group reads messages from its designated partition. If the group consists of more consumers than the available partitions, the excess consumers remain idle.
In any case, Apache Kafka guarantees that a message is read only once at the group level, irrespective of the number of consumers in the group. This architecture gives consistency, high-performance, high scalability, near-real-time delivery, and message persistence along with zero-message loss.
### Installing and running Kafka
Although, in theory, an Apache Kafka cluster can consist of any number of brokers, most clusters in production environments consist of three or five brokers.
Here, we will set up a single-broker cluster that is good enough for the development environment.
Download the latest version of Kafka from *https://kafka.apache.org/downloads* using a browser. It can also be downloaded with the following command, on a Linux terminal:
```
wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
```
We can move the downloaded archive file *kafka_2.12-2.8.0.tgz* to some other folder, if needed. Extracting the archive creates a folder named *kafka_2.12-2.8.0*, which will be referred to as *KAFKA_HOME* hereafter.
Open the file *server.properties* under the *KAFKA_HOME/config* folder and uncomment the line with the following entry:
```
listeners=PLAINTEXT://:9092
```
This configuration enables Apache Kafka to receive plain text messages on port 9092, on the local machine. Kafka can also be configured to receive messages over a secure channel, which is recommended in production environments.
Irrespective of the number of brokers, Apache Zookeeper is required for broker management and coordination. This is true even in the case of single-broker clusters. Since Zookeeper is already bundled with Kafka, we can start it with the following command from *KAFKA_HOME*, on a terminal:
```
./bin/zookeeper-server-start.sh ./config/zookeeper.properties
```
Once Zookeeper starts running, Kafka can be started in another terminal, with the following command:
```
./bin/kafka-server-start.sh ./config/server.properties
```
With this, a single-broker Kafka cluster is up and running.
### Verifying Kafka
Let us publish and receive messages on the topic *topic-1*. A topic can be created with a chosen number of partitions with the following command:
```
./bin/kafka-topics.sh --create --topic topic-1 --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The above command also specifies the replication factor, which should be less than or equal to the number of brokers in the cluster. Since we are working on a single-broker cluster, the replication factor is set to one.
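To see how Kafka laid out the new topic, including which broker leads each partition and its ISRs as discussed earlier, the same tool can describe it (a sketch; on our single-broker cluster, broker 0 leads all three partitions):
```
./bin/kafka-topics.sh --describe --topic topic-1 --zookeeper localhost:2181
```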
Once the topic is created, producers and consumers can exchange messages on that topic. The Kafka distribution includes a producer and a consumer for test purposes. Both of these are command-line tools.
To invoke the producer, open the third terminal and run the following command:
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1
```
This command displays a prompt at which we can key in simple text messages. Because of the given options on the command, the producer sends the messages on *topic-1* to the Kafka that is running on port 9092 on the local machine.
Open the fourth terminal and run the following command to start the consumer tool:
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 --from-beginning
```
This command starts the consumer that connects to the Kafka on port 9092 on the local machine. It registers for reading the messages on *topic-1*. Because of the last option on the command line, the consumer receives all the messages on the chosen topic from the beginning.
Since the producer and consumer are connecting to the same broker and referring to the same topic, the consumer receives and displays the messages on its terminal.
Now, let's use Kafka in the context of a practical application.
### Case study
ABC is a hypothetical bus transport company, which has a fleet of passenger buses that ply between different cities across the country. Since ABC wants to track each bus in real-time for improving the quality of its operations, it comes up with a solution around Apache Kafka.
ABC first equips all its buses with devices to track their location. An operations centre is set up with Apache Kafka, to receive the location updates from each of the hundreds of buses. A dashboard is developed to display the current status of all the buses at any point in time. Figure 5 represents this architecture.
![Figure 5: Kafka based architecture][6]
In this architecture, the devices on the buses act as the message producers. They send their current location to Kafka on the topic *abc-bus-location*, periodically. For processing the messages from different buses, ABC chooses to use the trip code as the key. For example, if the bus from Bengaluru to Hubballi runs with the trip code *BLRHBL003*, then *BLRHBL003* becomes the key for all the messages from that specific bus during that specific trip.
The dashboard application acts as the message consumer. It registers with the broker against the same topic *abc-bus-location*. Consequently, the topic becomes the virtual channel between the producers (buses) and the consumer (dashboard).
The devices on the buses never expect any response from the dashboard application. In fact, none of them is even aware of the presence of the others. This architecture enables non-blocking communication between hundreds of buses and the central office.
#### Implementation
Let's assume that ABC wants to create three partitions for maintaining the location updates. Since the development environment has only one broker, the replication factor should be set to one.
The following command creates the topic accordingly:
```
./bin/kafka-topics.sh --create --topic abc-bus-location --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The producer and consumer applications can be written in multiple languages like Java, Scala, Python, JavaScript, and a host of others. The code in the following sections provides a peek into the way they are written in Java.
##### Java producer
The Fleet class simulates the Kafka producer applications running on six buses of ABC. It sends location updates on *abc-bus-location* to the specified broker. Please note that the topic name, message keys, message body, and broker address are hard-coded only for simplicity.
```
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.IntStream;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Fleet {
    public static void main(String[] args) throws Exception {
        String broker = "localhost:9092";
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        String topic = "abc-bus-location";

        // Trip code -> current location (latitude, longitude)
        Map<String, String> locations = new HashMap<>();
        locations.put("BLRHBL001", "13.071362, 77.461906");
        locations.put("BLRHBL002", "14.399654, 76.045834");
        locations.put("BLRHBL003", "15.183959, 75.137622");
        locations.put("BLRHBL004", "13.659576, 76.944675");
        locations.put("BLRHBL005", "12.981337, 77.596181");
        locations.put("BLRHBL006", "13.024843, 77.546983");

        // Send ten rounds of updates, using the trip code as the message key
        IntStream.range(0, 10).forEach(i -> {
            for (String trip : locations.keySet()) {
                ProducerRecord<String, String> record
                        = new ProducerRecord<String, String>(
                                topic, trip, locations.get(trip));
                producer.send(record);
            }
        });
        producer.flush();
        producer.close();
    }
}
```
##### Java consumer
The Dashboard class implements the Kafka consumer application and it runs at the ABC Operations Centre. It listens to *abc-bus-location* with the group ID *abc-dashboard* and displays the location details from different buses as soon as messages are available. Here, too, many details that would otherwise be configurable are hard-coded:
```
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Dashboard {
    public static void main(String[] args) {
        String broker = "127.0.0.1:9092";
        String groupId = "abc-dashboard";
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        @SuppressWarnings("resource")
        Consumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("abc-bus-location"));

        // Keep polling for new messages and print each location update
        while (true) {
            ConsumerRecords<String, String> records
                    = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                String topic = record.topic();
                int partition = record.partition();
                String key = record.key();
                String value = record.value();
                System.out.println(String.format(
                        "Topic=%s, Partition=%d, Key=%s, Value=%s",
                        topic, partition, key, value));
            }
        }
    }
}
```
##### Dependencies
A JDK of version 8 or later is required to compile and run this code. The following Maven dependencies in the *pom.xml* download and add the required Kafka client libraries to the classpath:
```
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
```
#### Deployment
As the topic *abc-bus-location* is created with three partitions, it makes sense to run three consumers to read the location updates quickly. For that, run the Dashboard in three different terminals simultaneously. Since all three instances of Dashboard register with the same group ID, they form a group. Kafka attaches each Dashboard instance to a specific partition.
Once the Dashboard instances are up and running, start the *Fleet* on a different terminal. Figure 6, Figure 7, and Figure 8 are sample console messages on the Dashboard terminals.
![Figure 6: Dashboard Terminal 1][7]
A closer look at the console messages reveals that the consumers on the first, second and third terminals are reading messages from *partition-2*, *partition-1*, and *partition-0*, in that order. Also, it can be observed that the messages with the keys *BLRHBL002*, *BLRHBL004* and *BLRHBL006* are written into *partition-2*, the messages with the key *BLRHBL005* are written into *partition-1*, and the remaining are written into *partition-0*.
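These assignments can also be confirmed with the `kafka-consumer-groups.sh` tool bundled with Kafka, which describes the members, assigned partitions, and offsets of a group (a sketch, assuming the same single-broker setup):
```
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group abc-dashboard
```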
![Figure 7: Dashboard Terminal 2][8]
The good thing about Kafka is that it can be scaled horizontally to support a large number of buses and millions of messages as long as the cluster is designed appropriately.
![Figure 8: Dashboard Terminal 3][9]
### Beyond messaging
More than 80 per cent of the Fortune 100 companies use Kafka, according to its website. It is deployed across many industry verticals like financial services, entertainment, etc. Though Apache Kafka started its journey as a simple messaging service, it has propelled itself into the Big Data ecosystem with industry-level stream processing capabilities. For enterprises that prefer a managed solution, Confluent offers a cloud-based Apache Kafka service for a subscription fee.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Digital-backgrund-connecting-in-globe.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-1-Asynchronous-messaging.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-2-Message-distribution-among-the-partitions.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-3-Command-line-producer.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-4-Command-line-consumer.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-5-Kafka-based-architecture.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-6-Dashboard-Terminal-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-7-Dashboard-Terminal-2.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-8-Dashboard-Terminal-3.jpg

View File

@ -1,141 +0,0 @@
[#]: subject: "Integrating Zeek with ELK Stack"
[#]: via: "https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/"
[#]: author: "Tridev Reddy https://www.opensourceforu.com/author/tridev-reddy/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Integrating Zeek with ELK Stack
======
Zeek is an open source network security monitoring tool. This article discusses how to integrate Zeek with ELK.
![Integrating-Zeek-with-ELK-Stack-Featured-image][1]
In the article titled Network Security Monitoring Made Easy with Zeek, published in the March 2022 edition of this magazine, we looked into the capabilities of Zeek and learned how to get started with it. We will now take our learning experience a bit further and see how to integrate it with ELK (also known as Elasticsearch, Kibana, Beats, and Logstash).
For this, we will use a tool called Filebeat, which monitors, collects and forwards the logs to Elasticsearch. We will configure Filebeat with Zeek, so that the data collected by the latter will be forwarded and centralised in our Kibana dashboard.
### Installing Filebeat
Let's first set up Filebeat with Zeek. To install Filebeat using *apt*, give the following command:
```
sudo apt install filebeat
```
Next, we need to configure the *.yml* file, which is present in the */etc/filebeat/* folder:
```
sudo nano /etc/filebeat/filebeat.yml
```
We need to configure only two things here. In the *Filebeat inputs* section, change the type to `log`, uncomment the `enabled: false` line and change its value to `true`. We also need to specify the path where the logs are stored, i.e., `/opt/zeek/logs/current/*.log`.
Once this is done, the first part of the settings should look similar to whats shown in Figure 1.
![Figure 1: Filebeat config (a)][2]
The second thing to be changed is in the *Elasticsearch output* section, under *Outputs*. Uncomment `output.elasticsearch` and `hosts`, and make sure the host URL and port number match what you configured while installing ELK. We kept it as localhost with port number 9200.
In the same section, uncomment the user name and password at the bottom, and enter the user name and password of the elastic user that you generated while configuring ELK after installation. Once this is done, refer to Figure 2 and check the settings.
![Figure 2: Filebeat config (b)][3]
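For reference, the relevant parts of *filebeat.yml* would look roughly like the following sketch; adjust the path, host, user name, and password to match your own setup:
```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/zeek/logs/current/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<your-elastic-password>"
```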
Now that we have completed installing and configuring Filebeat, we need to configure Zeek so that it stores the logs in JSON format. For that, ensure your Zeek instance is stopped. If it's not, execute the command given below to stop it:
```
cd /opt/zeek/bin
./zeekctl stop
```
Now we need to add a small line to *local.zeek*, which is present in the */opt/zeek/share/zeek/site/* directory.
Open the file as root and add the following line:
```
@load policy/tuning/json-logs.zeek
```
Refer to Figure 3 and make sure the settings are done correctly.
![Figure 3: local.zeek file][4]
As we have changed a few configurations of Zeek, we need to re-deploy it, which can be done by executing the following command:
```
cd /opt/zeek/bin
./zeekctl deploy
```
Now we need to enable the Zeek module in Filebeat so that it forwards the logs from Zeek. Execute the following command:
```
sudo filebeat modules enable zeek
```
We are almost ready; in the last step, configure the *zeek.yml* file to mention what type of data is to be logged. This can be done by modifying the */etc/filebeat/modules.d/zeek.yml* file.
In this *.yml* file, we must mention the directory where these specified logs are stored. We know that the logs are stored in the current folder, which has several files like *dns.log*, *conn.log*, *dhcp.log*, and many more. We need to mention each path in its own section. You can leave out unwanted files by changing the enabled value to false if you don't want logs from that file/program.
For example, for *dns*, make sure the enabled value is true and the path is mentioned as:
```
var.paths: [ "/opt/zeek/logs/current/dns.log", "/opt/zeek/logs/*.dns.json" ]
```
Repeat this for the rest of the files. We did this for the few files we needed, which covered everything that was mainly required. You can do the same. Refer to Figure 4.
![Figure 4: zeek.yml configuration][5]
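For instance, the *conn* section of our *zeek.yml* follows the same pattern as the *dns* entry shown above (a sketch; the surrounding structure of the module file is abbreviated):
```
conn:
  enabled: true
  var.paths: [ "/opt/zeek/logs/current/conn.log", "/opt/zeek/logs/*.conn.json" ]
```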
Now it's time to start Filebeat. Execute the following commands:
```
sudo filebeat setup
sudo service filebeat start
```
Now that everything is done, let's move to our Kibana dashboard and check whether we are receiving data from Zeek via Filebeat.
![Figure 5: Dashboard of Kibana (Destination Geo)][6]
Navigate to the dashboard; you can see a clear statistical analysis of the data it has captured (Figure 5 and Figure 6).
![Figure 6: Dashboard of Kibana (Network)][7]
Now let's move to the *Discover* tab and check the results by filtering with the query:
```
event.module: "zeek"
```
This query filters all the data received in the given time range and shows us only the data from the module named Zeek (Figure 7).
![Figure 7: Filtered data by event.module query][8]
### Acknowledgements
*The authors are grateful to Sibi Chakkaravarthy Sethuraman, Sudhakar Ilango, Nandha Kumar R. and Anupama Namburu at the School of Computer Science and Engineering, VIT-AP for their continuous guidance and support. A special thanks to the Center for Excellence in Artificial Intelligence and Robotics (AIR).*
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/
作者:[Tridev Reddy][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/tridev-reddy/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Integrating-Zeek-with-ELK-Stack-Featured-image.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Filebeat-config-a.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Filebeat-config-b.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-local.zeek-file-1.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-zeek.yml-configuration.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Dashboard-of-Kibana-Destination-Geo.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Dashboard-of-Kibana-Network-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-7-Filtered-data-by-event.jpg

View File

@ -1,181 +0,0 @@
[#]: subject: "Run Windows Apps And Games Using WineZGUI On Linux"
[#]: via: "https://ostechnix.com/winezgui-run-windows-apps-and-games-on-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Run Windows Apps And Games Using WineZGUI On Linux
======
WineZGUI - A Wine GUI Frontend Using Zenity
A while ago we wrote about **[Bottles][1]**, an open source graphical application to easily run Windows software and games on Linux operating systems. Today, we will discuss a similar interesting project. Say hello to **WineZGUI**, a Wine GUI frontend to **[run Windows apps and games with Wine on Linux][2]**.
#### Contents
1. What Is WineZGUI?
2. Bottles Vs WineZGUI
3. How To Install WineZGUI In Linux
4. Run Windows Apps And Games With WineZGUI In Linux
5. Conclusion
### What Is WineZGUI?
WineZGUI is a collection of Bash scripts that allows you to easily manage Wine prefixes and provides an easier Wine gaming experience on Linux using **Zenity**.
Using WineZGUI, we can directly launch Windows exe files or games from the file manager without installing them.
WineZGUI creates a shortcut for each application or game for easier access, and also creates a separate prefix for each exe binary file.
When you launch a Windows exe file with WineZGUI, it will prompt you whether to use the default wine prefix or create a new one. The default prefix is `~/.local/share/winezgui/default`.
If you choose to create a new prefix for the Windows binary or exe, WineZGUI will try to extract the product name and icon from the exe file and create a desktop shortcut.
When you launch the same exe or binary file later, it will suggest running it with the previously associated prefix.
To put this in layman's terms, WineZGUI is simply a basic GUI over Wine and Winetricks for official vanilla Wine. Wine prefix setup is automatic when we launch an exe to play a game.
You simply open an exe and it creates a prefix and a desktop shortcut, with the name and icon extracted from that exe.
It uses the **exiftool** and **icotool** utilities to extract the name and icon respectively. You can either open an exe to launch that game from the existing prefix, or use the desktop shortcut.
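To get a feel for the kind of metadata WineZGUI pulls out, you can run these utilities manually against an exe. This is a rough sketch with a hypothetical *game.exe*; exact resource types and names vary between programs:
```
$ exiftool -ProductName game.exe               # read the embedded product name
$ wrestool -x --type=14 game.exe -o game.ico   # extract the icon resource (wrestool ships with icoutils)
$ icotool -x game.ico                          # convert the .ico into PNG images
```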
WineZGUI is a shell script that is freely hosted in GitHub. You can grab the source code, improve it, fix bugs and add features.
### Bottles Vs WineZGUI
You might wonder how WineZGUI compares with Bottles. There is a subtle difference between these applications.
**Bottles is prefix oriented** and **runner oriented**. Meaning, Bottles first creates a prefix and then uses different exe files with it. Bottles does not remember an exe's prefix. Bottles uses different runners.
**WineZGUI is exe oriented**. It uses the exe to create one prefix for that exe only. The next time we open an exe, it will ask whether to launch it with the existing prefix.
WineZGUI does not offer advanced features like **Bottles** or **[Lutris][3]** do, such as runners, online installers, etc.
### How To Install WineZGUI In Linux
Make sure you have installed the necessary prerequisites for WineZGUI.
**Debian/Ubuntu:**
```
$ sudo dpkg --add-architecture i386
$ sudo apt install zenity wine winetricks libimage-exiftool-perl icoutils gnome-terminal
```
**Fedora:**
```
$ sudo dnf install zenity wine winetricks perl-Image-ExifTool icoutils gnome-terminal
```
The officially recommended way to install WineZGUI is by using **[Flatpak][4]**.
After installing Flatpak, run the following commands one by one to install WineZGUI in Linux.
```
$ flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
```
$ flatpak --user -y install flathub org.winehq.Wine/x86_64/stable-21.08
```
```
$ wget https://github.com/fastrizwaan/WineZGUI-Releases/releases/download/WineZGUI-0.4_20220608/io.github.WineZGUI_0_4_20220608.flatpak
```
```
$ flatpak --user -y install io.github.WineZGUI_0_4_20220608.flatpak
```
### Run Windows Apps And Games With WineZGUI In Linux
Launch WineZGUI from Dash or Menu.
![Launch WineZGUI][5]
This is what the default interface of WineZGUI looks like.
![WineZGUI Interface][6]
As you can see in the above screenshot, the WineZGUI interface is very simple and easy to understand. From the main window, you can:
* Open an EXE file,
* Open Winetricks GUI and CLI,
* Launch Wine configuration,
* Launch explorer,
* Open BASH shell,
* Kill all apps/games including WineZGUI interface,
* Delete wine prefix,
* View installed WineZGUI version.
For the purpose of the demonstration, I am going to open an .exe file.
In the next window, choose the EXE file to run. In my case, it is WinRAR.
![Choose The EXE File To Run][7]
Next, choose whether you want to run the EXE file with the default prefix or create a new one. I chose the default prefix.
![Run WinRAR With Default Prefix][8]
A few seconds later, the WinRAR setup wizard will appear. Click Install to continue.
![Install WinRAR In Linux][9]
Click OK to complete the WinRAR installation.
![Complete WinRAR Installation][10]
Click "Run WinRAR" to launch it.
![Run WinRAR][11]
Here is WinRAR running on my Fedora 36 desktop!
![WinRAR Is Running In Fedora Using Wine][12]
### Conclusion
WineZGUI is a newcomer to the club. If you're looking for an easier way to run Windows apps and games using Wine on a Linux desktop, WineZGUI might be a good choice.
With the help of WineZGUI, users have the option to create a Wine prefix right in the same folder as the `.exe`, along with a relatively-linked `.desktop` entry that launches it automatically.
The reason is that it's easier to back up and delete a game along with its Wine prefix, and having a generated `.desktop` file makes the setup resilient to being moved and transferred.
A cool use case would be to set up a game using the app, then share the Wine prefix with your friends and others who just want a working Wine prefix with all the dependencies, saves, etc.
Give it a try and let us know what you think about this project in the comment section below.
**Resource:**
* [WineZGUI GitHub Repository][13]
--------------------------------------------------------------------------------
via: https://ostechnix.com/winezgui-run-windows-apps-and-games-on-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/run-windows-software-on-linux-with-bottles/
[2]: https://ostechnix.com/run-windows-games-softwares-ubuntu-16-04/
[3]: https://ostechnix.com/manage-games-using-lutris-linux/
[4]: https://ostechnix.com/how-to-install-and-use-flatpak-in-linux/
[5]: https://ostechnix.com/wp-content/uploads/2022/06/Launch-WineZGUI.png
[6]: https://ostechnix.com/wp-content/uploads/2022/06/WineZGUI-Interface.png
[7]: https://ostechnix.com/wp-content/uploads/2022/06/Choose-The-EXE-File-To-Run.png
[8]: https://ostechnix.com/wp-content/uploads/2022/06/Run-WinRAR-With-Default-Prefix.png
[9]: https://ostechnix.com/wp-content/uploads/2022/06/Install-WinRAR-In-Linux.png
[10]: https://ostechnix.com/wp-content/uploads/2022/06/Complete-WinRAR-Installation.png
[11]: https://ostechnix.com/wp-content/uploads/2022/06/Run-WinRAR.png
[12]: https://ostechnix.com/wp-content/uploads/2022/06/WinRAR-Is-Running-In-Fedora-Using-Wine.png
[13]: https://github.com/fastrizwaan/WineZGUI

View File

@ -0,0 +1,150 @@
[#]: subject: "Are Low-Code Platforms Helpful for Professional Developers?"
[#]: via: "https://www.opensourceforu.com/2022/06/are-low-code-platforms-helpful-for-professional-developers/"
[#]: author: "Radhakrishna Singuru https://www.opensourceforu.com/author/radhakrishna-singuru/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Are Low-Code Platforms Helpful for Professional Developers?
======
Over the years, low-code platforms have matured immensely, and are being used in varied domains of software development for better productivity and quality. This article explores the possibility of leveraging low-code platforms for complex product development by professional developers.
![Low-Code-platfroms-developers][1]
In the last several years, companies have invested a lot of time and energy in innovating and improving the overall process of software product development. Agile working methods have helped in making the process of product development a lot smoother. However, developers still face the challenge of meeting ever expanding customer requirements quickly and easily.
In order to meet these requirements in totality, developers need tools and platforms that can enable quick delivery of software by reducing the coding timelines and without compromising on the quality aspects.
No-code platforms are one set of tools that enable creation of application or product software with zero coding. These use a lego building blocks approach that eliminates the need for hand coding and focuses on just configuration of functions based on graphical modelling experiences. These tools are more relevant to a class of business users called citizen developers, who can use them to optimise a specific process or a function by developing their own applications.
Contrary to no-code platforms, low-code platforms do not try to eliminate the need for coding. Instead, they aim to make development of software easier than the traditional method of hard coding each line of a program or software. This approach minimises hard coding with prepackaged templates, graphic design techniques, and drag-and-drop tools to make software.
The focus of this article is on general-purpose low-code platforms that can be used by professional developers to build enterprise grade applications or products.
### Different types of low-code platforms
Low-code platforms cater to different use cases. Depending on the intended usage or purpose, these can be classified as follows.
* General-purpose: These platforms can create virtually any type of product or application. With a general-purpose platform, users can build software that serves a wide variety of needs and can be deployed on cloud or on-premise.
* Process: These platforms focus specifically on software that runs business processes such as forms, workflows or integrations with other systems.
* Request handling: Request handling low-code platforms are similar to process-based low code, but are less capable. They can only handle processing requests for fixed processes.
* Database: Database low-code platforms are useful if users have large amounts of data that needs to feed into a system, without spending a lot of time on the task.
* Mobile application development platform (MADP): These platforms help developers code, test, and launch mobile applications for smartphones and tablets.
#### Key features of generic low-code platforms
General-purpose enterprise low-code development platforms support rapid software development, deployment, execution and management using declarative, high-level programming abstractions such as model-driven and metadata-based programming languages, and one-click deployments.
The key features supported by general-purpose low-code platforms are as follows.
* Visual modelling: These platforms have a comprehensive visual modelling capability, including business processes, integration workflows, UIs, business logic, data models, Web services, and APIs.
* Databases: Support for a visual editor, Excel sheet import or use of existing data models from a different database.
* Pre-built templates: Support a variety of pre-built templates that can serve as a starting point to get an application up and running quickly. Using a well designed and tested template not only increases productivity but also helps in building a more reliable and secure application.
* Integration: Provide easy integration with external enterprise systems, databases, and custom apps.
* Security and scalability: The right low-code platform makes it easy to create enterprise grade software that is secure and scalable.
* Metrics: Support for gathering metrics and monitoring the software.
* Life cycle management: Support for version management and ticket management, as well as Agile or scrum tools.
* Reusability: The code generated is reusable and can integrate easily with general-purpose IDEs, with multi-platform support for testing and staging.
* Deployment options: Ability to deploy on public or private clouds with support for container images. Also support preview functionality before publishing to the cloud.
* Licensing: Flexible licensing models with no vendor lock-in.
* Others: The platforms include other services, such as project management and analytics.
### Generic low-code platforms for professional developers
The main objective of a generic low-code platform is to help reduce the time spent by a developer working on a product as compared to traditional hand coding. Using visual interfaces, drag-and-drop modules, and more, these low-code platforms largely reduce the manual effort of coding, but some hand coding is still needed to build a complete product.
Generic low-code platforms cannot be directly used to build low-level products like in-memory grids, Big Data processing algorithms, image recognition, etc. However, over the years they have evolved a lot to cover the widest range of capabilities for enterprise-grade development and full life cycle management, including business process management (BPM), integration workflows, UIs, business logic, data models, Web services, and APIs. They enable high-productivity development and a faster time to market for all types of developers. They also help create applications of any complexity including enterprise applications with generic architectures and complex backends, using microservices and service bus.
![Figure 1: Important use cases of low code][2]
Figure 1 illustrates some of the key use cases that have been influenced positively in terms of productivity, scale and quality by leveraging low-code platforms.
*Enable cross-functional team collaboration:* Any product development needs a combination of business or functional experts as well as professional developers. Low-code platforms provide features that are relevant to a business user as well as professional developer. Cross-functional teams can leverage these platforms to turn great business ideas into readily deployable products much faster.
*Rapid digital transformation of existing product suites:* By leveraging client and server-side APIs of the platform, developers will be able to build, package and distribute new functionalities, such as connectors to external services like machine learning and AI. Low-code platforms enable developers to push beyond the boundaries of the core platform to build better solutions faster by extending the native features of the platform with some code.
*Help meet sudden spikes in demand:* Automated code generation combined with the one-click deployment option of low-code platforms helps in quicker product customisations, as well as building product variants and burning backlog of features faster.
*Help build MVPs at a rapid rate:* The end-to-end development and operational tools in low-code platforms provide a cohesive ecosystem that allows an enterprise to rapidly bring products to life and manage the entire SDLC process. They are used to build quick MVPs or PoCs for technology, frameworks, and architecture or for feature evaluation.
Table 1: Criteria for evaluating a low-code platform

| Evaluation criteria | Description |
| :- | :- |
| Functional features | Productivity, UI flexibility, ease of use |
| Cloud and containerisation | Capability to utilise popular cloud providers services like serverless, AI/ML, blockchain, etc |
| CI/CD integration | Out-of-the-box support for automation and CI/CD toolchain |
| Integration capabilities | REST and cloud app support, ability to connect to different SQL and NoSQL databases |
| Performance | Parallel and batch execution with support for elastic scalability |
| Security | Enable security by design: security tools, development methods, and governance tooling |
| Language support | Support for popular languages like Java, .NET, Python, etc |
| Development methodologies | Support standard Agile development methodologies like Scrum, XP, Kanban, etc |
| Extensibility | Ability to extend features of existing applications |
| Others | Platform support, learning curve, documentation, etc |
*Support cloud scale architecture and design:* Low-code platforms provide flexible microservices integration options (custom, data, UI, infra) to support building next-gen low-code products with multi-cloud deployment options. They are able to scale and handle thousands of users and millions of data sets.
![Figure 2: Benefits of low-code platforms][3]
**Pros and cons of low-code platforms**
The primary advantage of low-code software development is speed (months to weeks). On average, there is a six to ten times productivity improvement using a low-code platform over traditional hand coding approaches to software development.
Figure 2 lists some of the key benefits of low-code platforms.
* Better software, lower maintenance: Low-code platforms standardise the development approach and reduce the complexity as well as the error rate of the source code.
* Cross-team collaboration: Team members with different skills and capabilities can collaborate to realise the final product faster.
* Enable Agile software development: They offer the team members a consistent product that begins as a single screen, and grows from sprint to sprint as the full product takes shape.
* Faster legacy app modernisation: Enable faster UI generation based on the existing data model, reuse of the logic from legacy databases and smart transfer of existing screens/persistence layers.
* Low risk and high RoI: Due to shorter development cycles, the risk of undertaking a new project is low, and there is a good chance of getting high returns.
* Scaling through multiple components: Allow use of a common platform to develop multiple services.
* Easy maintenance: The software can be easily updated, fixed and changed according to customer requirements.
In the last few years, low-code platforms have evolved a lot in terms of functionality and applicability. However, they still have some limitations when being used fully for any generic product development, some of which are listed below.
*Low-level product development:* Cannot be used for building products like in-memory grids, Big Data processing algorithms, image recognition, etc.
*Custom architectures and services:* Have limited use in enterprise software with unique architecture, microservices, custom back-ends, unique enterprise service bus, etc.
*Source code control:* With these platforms, you largely lose control over the code base; it's difficult to debug and handle edge conditions where the tool does not do the right thing automatically.
*Limited integration:* Custom integration with legacy systems needs some significant coding.
*Security and reliability:* These platforms are vulnerable to security breaches, because if the low-code platform gets hacked, it can immediately make the product built on it also vulnerable.
*Customisation:* Custom CSS, custom integration, advanced client-side functionality, etc, will require a good amount of coding.
#### Popular generic low-code platforms
While there are many low-code platforms available in the market, one should leverage the evaluation criteria given in Table 1 before choosing the right one relevant to the business.
Table 2 lists the highly popular generic low-code platforms, both proprietary and open source.
Gartner predicts that by 2024, 65 per cent of application development projects will rely on low-code development.
Table 2: Popular generic low-code platforms

| Low-code platform | Description | Type |
| :- | :- | :- |
| Mendix | This is designed to accelerate enterprise app delivery across the entire application development life cycle, from ideation to development, deployment, and maintenance on cloud or on-premise. | Proprietary |
| OutSystems | This platform provides tools to develop, deploy and manage omni-channel enterprise applications. It addresses the full spectrum of enterprise use cases for mobile, Web and core systems. | Proprietary |
| Appian | An enterprise grade platform that combines the key capabilities needed to get work done faster using AI, RPA, decision rules, and workflow on a single low-code platform. | Proprietary |
| Budibase | Helps in building business applications. Supports multiple external data sources, and comes with prebuilt layouts, user auth, and a data provider component. Supports JavaScript for integrations. | Open source |
| WordPress | Powers more than 41 per cent of the Web — from simple blogs to enterprise websites. With over 54,000 plugins, the level of customisation without writing code is incredible. | Open source |
| Node-RED | Helps to build event-driven IoT applications. A programming tool for wiring together hardware devices, APIs, and online services. | Open source |
Will low-code development replace traditional engineering completely? This is unlikely in the near future, but it will definitely help to significantly improve developers' productivity and bring quality products to market faster. It can bridge the talent gap in the mid-term by arming non-technical people with tools to build, whilst alleviating some of the product backlog for maxed-out professional engineering teams. Low-code platforms can address critical issues like the high demand for new enterprise software, the need to modernise aging legacy systems, and the shortage of full-stack engineers. The choice of when and how much to use a low-code platform depends on the range of applicability, development speed, manageability and flexibility for performance limitations.
However, if there is a need to develop a product that is high quality and unique while being specific to business requirements, custom product development may be a better option.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/are-low-code-platforms-helpful-for-professional-developers/
作者:[Radhakrishna Singuru][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/radhakrishna-singuru/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Low-Code-platfroms-developers.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-1-Important-use-cases-of-low-code.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-2-Benefits-of-low-code-platforms.jpg

View File

@ -0,0 +1,106 @@
[#]: subject: "Compress Images in Linux Easily With Curtail GUI App"
[#]: via: "https://itsfoss.com/curtail-image-compress/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Compress Images in Linux Easily With Curtail GUI App
======
Got a bunch of images with huge file sizes taking too much disk space? Or perhaps you have to upload an image to a web portal that has file size restrictions?
There could be a number of reasons why you would want to compress images. There are tons of tools to help you with it and I am not talking about the command line ones here.
You can use a full-fledged image editor like GIMP. You may also use web tools like [Squoosh][1], an open source project from Google. It even lets you compare the files for each compression level.
However, all these tools work on individual images. What if you want to bulk compress photos? Curtail is an app that saves the day.
### Curtail: Nifty tool for image compression in Linux
Built with Python and GTK3, Curtail is a simple GUI app that uses open source libraries like OptiPNG, [jpegoptim][2], etc to provide the image compression feature.
It is available as a [Flatpak application][3]. Please make sure that you have [Flatpak support enabled on your system][4].
Add the Flathub repo first:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
And then use the command below to install Curtail:
```
flatpak install flathub com.github.huluti.Curtail
```
Once installed, look for it in your Linux system's menu and start it from there.
![curtail app][5]
The interface is plain and simple. You can choose whether you want a lossless or lossy compression.
The lossy compression will give you lower-quality images but with a smaller size. The lossless compression will keep better quality, but the size may not be much smaller than the original.
![curtail app interface][6]
You can either browse for images or drag and drop them into the application.
Yes. You can compress multiple images in one click with Curtail.
In fact, you don't even need a click. As soon as you select the images or drop them, they are compressed and you see a summary of the compression process.
![curtail image compression summary][7]
As you can see in the image above, I got a 35% size reduction for one image and 3 and 8 percent for the other two. This was with lossless compression.
The images are saved with a -min suffix (by default), in the same directory as the original image.
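If you want to double-check the savings from a terminal, compare the original and compressed files side by side (assuming an image named *photo.png*):
```
ls -lh photo.png photo-min.png
```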
Though it looks minimalist, there are a few options to configure Curtail. Click on the hamburger menu and you are presented with a few settings options.
![curtail configuration options][8]
You can select whether you want to save the compressed file as new or replace the existing one. If you go for a new file (default behavior), you can also provide a different suffix for the compressed images. The option to keep the file attributes is also there.
In the next tab, you can configure the settings for lossy compression. By default, the compression level is at 90%.
![curtail compression options][9]
The Advanced tab gives you the option to configure the lossless compression level for PNG and WebP files.
![curtain advanced options][10]
### Conclusion
As I stated earlier, it's not a groundbreaking tool. You can do the same with other tools like GIMP. It just makes the task of image compression simpler, especially for bulk image compression.
I would love to see an option to [convert the image file formats][11] along with the compression, like what we have in tools such as Converseen.
Overall, a good little utility for the specific purpose of image compression.
--------------------------------------------------------------------------------
via: https://itsfoss.com/curtail-image-compress/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed
[1]: https://squoosh.app/
[2]: https://github.com/tjko/jpegoptim
[3]: https://itsfoss.com/what-is-flatpak/
[4]: https://itsfoss.com/flatpak-guide/
[5]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-app.png
[6]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-app-interface.png
[7]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-image-compression-summary.png
[8]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-configuration-options.png
[9]: https://itsfoss.com/wp-content/uploads/2022/06/curtail-compression-options.png
[10]: https://itsfoss.com/wp-content/uploads/2022/06/curtain-advanced-options.png
[11]: https://itsfoss.com/converseen/

View File

@ -0,0 +1,146 @@
[#]: subject: "How I use the attr command with my Linux filesystem"
[#]: via: "https://opensource.com/article/22/6/linux-attr-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How I use the attr command with my Linux filesystem
======
I use the open source XFS filesystem because of the subtle convenience of extended attributes. Extended attributes are a unique way to add context to my data.
![Why the operating system matters even more in 2017][1]
Image by: Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0
The term *filesystem* is a fancy word to describe how your computer keeps track of all the files you create. Whether it's an office document, a configuration file, or thousands of digital photos, your computer has to store a lot of data in a way that's useful for both you and it. Filesystems like Ext4, XFS, JFS, BtrFS, and so on are the "languages" your computer uses to keep track of data.
Your desktop or terminal can do a lot to help you find your data quickly. Your file manager might have, for instance, a filter function so you can quickly see just the image files in your home directory, or it might have a search function that can locate a file by its filename, and so on. These qualities are known as *file attributes* because they are exactly that: Attributes of the data object, defined by code in file headers and within the filesystem itself. Most filesystems record standard file attributes such as filename, file size, file type, time stamps for when it was created, and time stamps for when it was last visited.
I use the open source XFS filesystem on my computers not for its reliability and high performance but for the subtle convenience of extended attributes.
### Common file attributes
When you save a file, data about it are saved along with it. Common attributes tell your operating system whether to update the access time, when to synchronize the data in the file back to disk, and other logistical details. Which attributes get saved depends on the capabilities and features of the underlying filesystem.
In addition to standard file attributes (insofar as there are standard attributes), the XFS, Ext4, and BtrFS filesystems can all use extended attributes.
### Extended attributes
XFS, Ext4, and BtrFS allow you to create your own arbitrary file attributes. Because you're making up attributes, there's nothing built into your operating system to utilize them, but I use them as "tags" for files in much the same way I use EXIF data on photos. Developers might choose to use extended attributes to develop custom capabilities in applications.
There are two "namespaces" for attributes in XFS: **user** and **root**. When creating an attribute, you must add your attribute to one of these namespaces. To add an attribute to the **root** namespace, you must use the `sudo` command or be logged in as root.
### Add an attribute
You can add an attribute to a file on an XFS filesystem with the `attr` or `setfattr` commands.
The `attr` command assumes the `user` namespace, so you only have to set (`-s`) a name for your attribute followed by a value (`-V`):
```
$ attr -s flavor -V vanilla example.txt
Attribute "flavor" set to a 7 byte value for example.txt:
vanilla
```
The `setfattr` command requires that you specify the target namespace:
```
$ setfattr --name user.flavor --value chocolate example.txt
```
### List extended file attributes
Use the `attr` or `getfattr` commands to see extended attributes you've added to a file. The `attr` command defaults to the **user** namespace and uses the `-g` option to *get* extended attributes:
```
$ attr -g flavor example.txt
Attribute "flavor" had a 9 byte value for example.txt:
chocolate
```
The `getfattr` command requires the namespace and name of the attribute:
```
$ getfattr --name user.flavor example.txt
# file: example.txt
user.flavor="chocolate"
```
### List all extended attributes
To see all extended attributes on a file, you can use `attr -l`:
```
$ attr -l example.txt
Attribute "md5sum" has a 32 byte value for example.txt
Attribute "flavor" has a 9 byte value for example.txt
```
Alternately, you can use `getfattr -d`:
```
$ getfattr -d example.txt
# file: example.txt
user.flavor="chocolate"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
Any extended file attribute can be updated with `attr` or `setfattr`, just as if you were creating the attribute:
```
$ setfattr --name user.flavor --value strawberry example.txt
$ getfattr -d example.txt
# file: example.txt
user.flavor="strawberry"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
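You can also remove an attribute you no longer need. The `attr` command uses `-r` to remove an attribute, and `setfattr` uses `-x`:
```
$ attr -r flavor example.txt
$ setfattr -x user.flavor example.txt
```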
### Attributes on other filesystems
The greatest risk when using extended attributes is forgetting that these attributes are specific to the filesystem they're on. That means when you copy a file from one drive or partition to another, the attributes are lost *even if the target filesystem supports extended attributes*.
To avoid losing extended attributes, you must use a tool that supports retaining them, such as the `rsync` command.
```
$ rsync --archive --xattrs ~/example.txt /tmp/
```
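GNU `cp` can do the same when you explicitly ask it to preserve extended attributes:
```
$ cp --preserve=xattr ~/example.txt /tmp/
```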
No matter what tool you use, if you transfer a file to a filesystem that doesn't know what to do with extended attributes, those attributes are dropped.
### Search for attributes
There aren't many mechanisms to interact with extended attributes, so the options for using the file attributes you've added are limited. I use extended attributes as a tagging mechanism, which allows me to associate files that have no obvious relation to one another. For instance, suppose I need a Creative Commons graphic for a project I'm working on. Assume I've had the foresight to add the extended attribute **license** to my collection of graphics. I could search my graphic folder with `find` and `getfattr` together:
```
find ~/Graphics/ -type f \
-exec getfattr \
--name user.license \
-m cc-by-sa {} \; 2>/dev/null
# file: /home/tux/Graphics/Linux/kde-eco-award.png
user.license="cc-by-sa"
user.md5sum="969181e76237567018e14fe1448dfd11"
```
### Secrets of your filesystem
Filesystems aren't generally something you're meant to notice. They're literally systems for defining a file. It's not the most exciting task a computer performs, and it's not something users are supposed to have to be concerned with. But some filesystems give you some fun, and safe, special abilities, and extended file attributes are a good example. Their use may be limited, but extended attributes are a unique way to add context to your data.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/linux-attr-command
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/yearbook-haff-rx-linux-file-lead_0.png

View File

@ -0,0 +1,308 @@
[#]: subject: "Please A Simple Command Line Todo Manager"
[#]: via: "https://ostechnix.com/please-command-line-todo-manager/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Please A Simple Command Line Todo Manager
======
Manage Tasks And To-do Lists With 'Please' From Command Line In Linux
A while ago, we reviewed **["Taskwarrior"][1]**, a command line task manager to manage your to-do tasks right from the Terminal window. Today I stumbled upon yet another simple **command line Todo manager** called **"Please"**. Yes, the name is Please!
Please is an open source CLI application written in the **Python** programming language. Using Please, we can manage our personal tasks and to-do list without leaving the terminal.
Whenever you open a terminal window, Please will show you the current date and time, an inspirational quote, and your list of personal to-do tasks.
Please is a very lightweight and convenient CLI task manager for those who use the terminal extensively in their daily life.
### Install Please In Linux
Since Please is written in Python, you can **install Please** using the **pip** package manager. If you haven't installed pip on your Linux machine yet, refer to the following link.
* [How To Manage Python Packages Using PIP][2]
To install Please using pip, simply run:
```
$ pip install please-cli
```
Or,
```
$ pip3 install please-cli
```
To run Please every time you open a new Terminal window, add the line 'please' to your `.bashrc` file.
```
$ echo 'please' >> ~/.bashrc
```
If you use ZSH shell, run:
```
$ echo 'please' >> ~/.zshrc
```
Please note that the above step is optional. You don't have to add it to your shell config file. However, if you do, you will immediately see your pending tasks and to-do list whenever you open a terminal.
If you don't add it, you won't see them and may forget them after a while. So make sure you've added it to your `.bashrc` or `.zshrc` file.
Restart the current session for the changes to take effect. Alternatively, source the `.bashrc` file to apply the changes immediately.
```
$ source ~/.bashrc
```
You will be asked to set a name at first launch. You can use the hostname of your system or any other name of your choice.
```
Hello! What can I call you?: ostechnix
```
You can change your name later by running the following command:
```
$ please callme <Your Name Goes Here>
```
### Manage Tasks And To-do Lists With Please From Command Line
The **usage of 'Please'** is very simple!
Just run 'please' to show the current date and time, an inspirational quote and the list of tasks if there are any.
```
$ please
```
**Sample Output:**
```
─────── Hello ostechnix! It's 20 Jun | 11:59 AM ───────
"Action is eloquence!"
- William Shakespeare
Looking good, no pending tasks 😁
```
![Run Please Todo Manager][3]
As you can see, there are no todo tasks yet. Let us add some.
#### Adding New Tasks
To add a new task, run:
```
$ please add "<Task Name>"
```
Example:
```
$ please add "Publish a post about Please"
```
Replace the task name within the quotes with your own.
**Sample Output:**
```
Added "Publish a post about Please" to the list
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
Similarly, you can add any number of tasks. I have added the following 3 tasks for demonstration purposes.
```
Added "Setup Nginx In Ubuntu" to the list
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Add Tasks Using Please][4]
#### Show Tasks
To view the list of all tasks, run:
```
$ please showtasks
```
**Sample Output:**
```
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ❌ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Show All Tasks][5]
As you see in the above output, I have 3 unfinished tasks.
#### Mark Tasks As Done Or Undone
Once you complete a task, you can **mark it as done** by specifying the task number, as shown in the command below.
```
$ please done "<Task Number>"
```
Example:
```
$ please done 1
```
This command will mark **Task 1** as completed.
**Sample Output:**
```
Updated Task List
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Publish a post about Please │ ✅ │
│ 2 │ Update Ubuntu VM │ ❌ │
│ 3 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴─────────────────────────────┴────────┘
```
![Mark Tasks As Done][6]
As you see in the above output, the completed task is marked with a **green tick mark** and the uncompleted tasks are marked with **a red cross**.
Similarly, to undo the change, i.e., **mark a task as undone**, run:
```
$ please undone 1
```
![Mark Tasks As Undone][7]
#### Remove Tasks
To delete a task from the list, the command would be:
```
$ please delete "<Task Number>"
```
Example:
```
$ please delete 1
```
This command will **delete the specified task**.
**Sample Output:**
```
Deleted 'Publish a post about Please'
Tasks
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Number ┃ Task ┃ Status ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 1 │ Update Ubuntu VM │ ❌ │
│ 2 │ Setup Nginx In Ubuntu │ ❌ │
└────────┴───────────────────────┴────────┘
```
![Delete Tasks][8]
Please note that this command will delete the given task whether it is completed or not, and it will not even show you a warning message. So double-check that you are deleting the correct task.
#### Reset
To reset all settings and tasks, run:
```
$ please setup
```
You will be prompted to set a name.
**Sample Output:**
```
Hello! What can I call you?: ostechnix
Thanks for letting me know your name!
If you wanna change your name later, please use:
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ please callme <Your Name Goes Here>
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
![Reset Please][9]
### Uninstall Please
'Please' didn't please you? No problem! You can remove it using the command:
```
$ pip uninstall please-cli
```
Or,
```
$ pip3 uninstall please-cli
```
And then edit your `.bashrc` or `.zshrc` file and remove the line that says **please** at the end of the file.
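If you prefer to do that from the shell, a quick `sed` one-liner works, assuming the line contains nothing but the word `please`:
```
$ sed -i '/^please$/d' ~/.bashrc
```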
### Conclusion
I briefly tried 'Please' on my Ubuntu VM and quickly came to like its simplicity and efficiency. If you're looking for an easy-to-use CLI task manager for managing your tasks, please try "Please". You will be pleased!
**Resource:**
* [Please GitHub Repository][10]
*Featured Image by Pixabay.*
--------------------------------------------------------------------------------
via: https://ostechnix.com/please-command-line-todo-manager/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/taskwarrior-command-line-todo-task-manager-application/
[2]: https://ostechnix.com/manage-python-packages-using-pip/
[3]: https://ostechnix.com/wp-content/uploads/2022/06/Run-Please-Todo-Manager.png
[4]: https://ostechnix.com/wp-content/uploads/2022/06/Add-Tasks-Using-Please.png
[5]: https://ostechnix.com/wp-content/uploads/2022/06/Show-All-Tasks.png
[6]: https://ostechnix.com/wp-content/uploads/2022/06/Mark-Tasks-As-Done.png
[7]: https://ostechnix.com/wp-content/uploads/2022/06/Mark-Tasks-As-Undone.png
[8]: https://ostechnix.com/wp-content/uploads/2022/06/Delete-Tasks.png
[9]: https://ostechnix.com/wp-content/uploads/2022/06/Reset-Please.png
[10]: https://github.com/NayamAmarshe/please

View File

@ -0,0 +1,37 @@
[#]: subject: "The First Commercial Unikernel With POSIX Support"
[#]: via: "https://www.opensourceforu.com/2022/06/the-first-commercial-unikernel-with-posix-support/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The First Commercial Unikernel With POSIX Support
======
![operating system][1]
Lynx Software Technologies has released a unikernel that it claims is the first to be POSIX compatible for real-time operation and commercially available. LynxElement will be included in the MOSA.ic range of mission-critical embedded applications. To provide more security with third-party or open source software, Lynx prefers a unikernel approach over hypervisors or virtual machines. LynxElement is based on Lynxs commercially proven LynxOS-178 real-time operating system, which allows for compatibility between the Unikernel and the standalone LynxOS-178 product. This enables designers to move applications between environments and is compliant with the POSIX API and US FACE specifications.
LynxElement initially focused on security on both Intel and Arm multicore processor architectures. Running security components such as virtual private networks (VPNs) is a common use case. The unikernel, by utilising a one-way software data diode and filter, can enable a customer to replace a Linux virtual machine, saving memory space and drastically reducing the attack surface while ensuring timing requirements and safety certifiability.
Unikernels are best suited for applications that require speed, agility, and a small attack surface in order to increase security and certifiability, such as aircraft systems, autonomous vehicles, and critical infrastructure. These run pre-built applications with their own libraries, reducing the attack surface caused by resource sharing. This also enables the secure use of containerised applications such as Kubernetes or Docker, which are increasingly moving from enterprise to embedded designs, owing to the need to support AI frameworks.
Unikernels are also an excellent choice for mission-critical systems with heterogeneous workloads that require the coexistence of RTOS, Linux, Unikernel, and bare-metal guest operating systems. Existing open source unikernel implementations, according to Lynx, havent fared well due to a lack of adequate functionality, a lack of a clear path to safety certification, and immature toolchains for debugging and producing images.
Lynx created the MOSA.ic software framework for developing and integrating complex multi-core safety- or security-critical systems. The framework includes built-in security for the unikernel, allowing for security and safety certification in mission-critical applications and making it enterprise-ready. With the assistance of DESE Research, Lynx created the safety-critical Unikernel solution. LynxElement is being evaluated by existing Lynx customers as well as additional organisations around the world, including naval, air force, and army organisations.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/the-first-commercial-unikernel-with-posix-support/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/operating-system-1.jpg

View File

@ -0,0 +1,116 @@
[#]: subject: "What you need to know about site reliability engineering"
[#]: via: "https://opensource.com/article/22/6/introduction-site-reliability-engineering"
[#]: author: "Robert Kimani https://opensource.com/users/robert-charles"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about site reliability engineering
======
Understand the basics and best practices for establishing and maintaining an SRE program in your organization.
![Working on a team, busy worklife][1]
Image by: opensource.com
What is site reliability engineering? The creator of the first site reliability engineering (SRE) program, [Benjamin Treynor Sloss][2] at Google, described it this way:
> Site reliability engineering is what happens when you ask a software engineer to design an operations team.
What does that mean? Unlike traditional system administrators, site reliability engineers (SREs) apply solid software engineering principles to their day-to-day work. For laypeople, a clearer definition might be:
> Site reliability engineering is the discipline of building and supporting modern production systems at scale.
SREs are responsible for availability, latency, performance, efficiency, monitoring, emergency response, change management, release planning, and capacity planning for both infrastructure and software. As applications and infrastructure grow more complex, SRE teams help ensure that these systems can evolve.
### What does an SRE organization do?
There are four primary responsibilities of an SRE organization:
* Availability: SREs are responsible for the availability of the services they support. After all, if services are not available, end users' work is disrupted, which can cause serious damage to your organization's credibility.
* Performance: A service needs to be not only available but also highly performant. For example, how useful is a website that takes 20 seconds to move from one page to another?
* Incident management: SREs manage the response to unplanned disruptions that impact customers, such as outages, service degradation, or interruptions to business operations.
* Monitoring: A foundational requirement for every SRE, monitoring involves collecting, processing, aggregating, and displaying real-time quantitative data about a system. This could include query counts and types, error counts and types, processing times, and server lifetimes.
Occasionally, release and capacity planning are also the responsibility of the SRE organization.
### How do SREs maintain site reliability?
The SRE role is a diverse one, with many responsibilities. An SRE must be able to identify an issue quickly, troubleshoot, and mitigate it with minimal disruption to operations.
Here's a partial list of the tasks a typical SRE undertakes:
* Writing code: An SRE is required to solve problems using software, whether they are a software engineer with an operations background or a system engineer with a development background.
* Being on call: This is not the most attractive part of being an SRE, but it is essential.
* Leading a war room: SREs facilitate discussions of strategy and execution during incident management.
* Performing postmortems: This is an excellent tool to learn from an incident and identify processes that can be put in place to avoid future incidents.
* Automating: SREs tend to get bored with manual steps. Automation not only saves time but reduces failures due to human errors. Spending some time on engineering by automating tasks can have a strong return on investment.
* Implementing best practices: SREs are well versed in distributed systems and web-scale architectures. They apply best practices in several areas of service management.
### Designing an effective on-call system
An on-call management system streamlines the process of adding members of the SRE team into after-hours or weekend call schedules, assigning them equitable responsibility for managing alerts outside of traditional work hours or on holidays. In some cases, an organization might designate on-call SREs around the clock.
In the medical profession, on-call doctors don't have to be on site, but they do have to be prepared to show up and deal with emergencies anytime during their on-call shift. SRE professionals likewise use on-call schedules to make sure that someone's always there to respond to major bugs, capacity issues, or product downtime. If they can't fix the problem on their own, they're also responsible for escalating the issue. For SRE teams who run services for which customers expect 24/7/365 availability and 99.999% uptime, on-call staffing is especially critical.
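To put that uptime figure in perspective, here is a quick back-of-the-envelope calculation (an illustrative `awk` one-liner) showing how little downtime a 99.999% target actually allows:
```
$ awk 'BEGIN { slo = 0.99999; printf "%.2f minutes of downtime per year\n", (1 - slo) * 365 * 24 * 60 }'
5.26 minutes of downtime per year
```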
There are two main kinds of [on-call design structures][4] that can be used when designing an on-call system, and they focus on domain expertise and ownership of a given service:
* Single-team ownership model
* Shared ownership model
In most cases, single-team ownership will be the better model.
The on-call SRE has multiple duties:
* Protecting production systems: The SRE on call serves as a guardian to all production services they are required to support.
* Responding to emergencies within acceptable time: Your organization may choose to have a service-level objective (SLO) for SRE response time. In most cases, anywhere between 5 to 15 minutes would be an acceptable response time. Automated monitoring and alerting solutions also empower SREs to respond immediately to any interruptions to service availability.
* Involving team members and escalating issues: The on-call SRE is responsible for identifying and calling in the right team members to address specific problems.
* Tackling non-emergent issues: In some organizations, a secondary on-call engineer is scheduled to handle non-emergencies, like email alerts.
* Writing postmortems: As noted above, a good postmortem is a valuable tool for documenting and learning from significant incidents.
### 3 key tenets of an effective on-call management system
#### A focus on engineering
SREs should be spending more time designing solutions than applying band-aids. A general guideline is for SREs to spend 50% of their time on engineering work, such as writing code and automating tasks. When an SRE is on call, the remaining time should be split roughly evenly, with about 25% spent managing incidents and 25% on operations duty.
#### Balanced workload
Being on call can quickly burn out an engineer if there are too many tickets to handle. If well-coordinated multi-region support is possible, such as a US-based team and an Asia-Pacific team, that arrangement can help limit the detrimental health effects of repeated night shifts. Otherwise, having six to eight SREs per site will help avoid exhaustion. At the same time, make sure all SREs are getting a turn being on call at least once or twice a quarter to avoid getting out of touch with production systems. Fair compensation for on-call work during overnights or holidays, such as additional hours off or cash awards, will also help SREs feel that their extra effort is appreciated.
#### Positive and safe environment
Clearly defined escalation and blameless postmortem procedures are absolutely necessary for SREs to be effective and productive. Established protocols are central to a robust incident management system. Postmortems must focus on root causes and prevention rather than individual and team actions. If you don't have a clear postmortem procedure in your organization, it is wise to start one immediately.
### SRE best practices
This article covered some SRE basics and best practices for establishing and running an SRE on-call management system.
In future articles, I will look at other categories of best practices for SRE, the technologies involved, and the processes to support those technologies. By the end of this series, you'll know how to implement SRE best practices for designing, implementing, and supporting production systems.
### More resources
* [Availability Calculator][5]
* [Error Budget Calculator][6]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/introduction-site-reliability-engineering
作者:[Robert Kimani][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/robert-charles
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png
[2]: https://sre.google/sre-book/introduction/
[3]: https://enterprisersproject.com/article/2022/2/8-reasons-site-reliability-engineer-one-most-demand-jobs-2022
[4]: https://alexwitherspoon.com/publications/on-call-design/
[5]: https://availability.sre.xyz/
[6]: https://dastergon.gr/error-budget-calculator/

View File

@ -0,0 +1,99 @@
[#]: subject: "An Introduction to Teradata"
[#]: via: "https://www.opensourceforu.com/2022/06/an-introduction-to-teradata/"
[#]: author: "Saurabh Kumar https://www.opensourceforu.com/author/saurabh-kumar/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
An Introduction to Teradata
======
Teradata is one of the most popular relational database management systems (RDBMS) appropriate for huge data warehousing applications. It is highly scalable and capable of handling large volumes of data. This article provides a basic understanding of Teradata architecture, and the types of spaces and tables it has.
![an-introduction-of-teradata-featured-image][1]
As per official Teradata documentation: “Teradata database is a massively parallel analytics engine that can help businesses achieve breakthrough results across all industries. Whether companies need to improve profitability, mitigate risks, innovate products, enhance the customer experience, or achieve other objectives, analytics paves the way to success.”
### Teradata architecture
**Node:** This is an essential building block of the Teradata database system and is made up of hardware and software components. A server can also be considered as a node.
*PE (parsing engine):* This is a type of vproc (virtual processor) for session control, task dispatching and SQL parsing in the multi-tasking and possibly parallel-processing environment of the Teradata database. A vproc is a software process running in an SMP (symmetric multiprocessing) environment or a node. Another type of vproc, the access module processor (AMP), is responsible for storing and retrieving data.
The different components of a parsing engine are listed below.
* Parser: This decomposes SQL queries into relational data management processing steps.
* Query optimizer: This determines the most efficient path to access data.
* Step generator: This produces processing steps and encapsulates them into packages.
* Dispatcher: This transmits the encapsulated steps from the parser to the relevant AMPs, and performs monitoring and error-handling functionalities during step processing.
* Session controller: This manipulates session-control activities (e.g., log on, log off and authentication) and restores a session only after client or server failures.
*Message passing layer:* This is also called BYNET (Banyan network), and is the networking layer in the Teradata system. It allows interaction between PE and AMP, and also between the nodes. It receives the execution plan from the PE and sends it to AMP. Similarly, it receives the outputs from the AMP and sends them to PE. In the Teradata system, there are always two BYNET systems. These are called BYNET 0 and BYNET 1. But this is generally referred to as a single BYNET system. If one BYNET fails, the second one takes its place. But when data is large, both BYNETs can be made functional, which enhances the communication between PE and AMPs.
*Disk:* This component organises and stores the data while performing data manipulation activities. Teradata database employs redundant array of independent disks (RAID) storage technology to provide data security at the disk level. When AMPs are associated with disks, they are called vdisks (virtual disks).
> Note: Each AMP is allowed to read and write on its own disk only. That is why it is also called shared-nothing architecture.
Teradata acts as a single data store that receives a large number of concurrent requests from various client applications. It is defined as an information repository supported by tools and utilities that make it, as part of Teradata Warehouse, a complete and active RDBMS.
### Types of spaces in Teradata
*Permanent space:* This is the maximum amount of space allocated to the user/database to hold data rows. It is used to create database objects (like permanent tables, indexes, etc.) and to hold their data. The total permanent space is divided among the total number of AMPs. Whenever the space used on an AMP exceeds its allocated per-AMP limit, a 'No more room in database' error is generated. Permanent tables, journals, fallback tables, and secondary index sub-tables use permanent space. All databases have a predefined upper limit of permanent space. Teradata doesn't physically preallocate permanent space for databases and users at object definition time.
![Figure 1: Teradata architecture][2]
*Spool space:* This is the unused permanent space that is used by the system to keep the intermediate results of the SQL query. Once the query is complete, the spool space is released. The volatile tables use the spool space. Users without spool space cant execute any query. Data is active up to the current session only. It is divided among the number of AMPs. Whenever the per AMP limit exceeds the allocated space, the user will get a spool space error.
*Temporary space:* This is the unused permanent space used by global temporary tables (GTT). It is divided by the number of AMPs. Data is active up to the current session only. It is reserved prior to spool space for any user defined operations. Temp space is allocated at the database level or user level, but not at the table level. This is the amount of space used for GTT, and these results remain available to the user until the session is terminated.
Tables created in temp space will survive a restart. A query example is given below:
```
CREATE DATABASE new_db FROM existing_db
AS PERMANENT = 2000000, SPOOL = 5000000, TEMP = 2000000;
```
A new database must be created from an existing database.
Permanent space can be used for tables in the database. Spool space is allocated for the maximum amount of workspace available for requests. Temp space is allocated for temp tables in the database.
### Types of tables in Teradata
**Permanent tables:** These remain in the system until they are dropped. The table definition is stored in the data dictionary. Data and structure can be shared across multiple sessions and users. Data is stored in the permanent space, and collect statistics is supported. Indexes can be created. COMPRESS columns, as well as DEFAULT and TITLE clauses are supported. Partition primary index (PPI) is also supported. If the primary index clause is not defined in the Create table statement, then Teradata will create the first column as primary by default.
*Global temporary tables (GTT):* This is a kind of temporary table. The table definition is stored in the data dictionary. It requires a minimum of 512 bytes from the permanent space to store table definitions in the data dictionary. The structure can be shared across multiple users but data will remain private to the session (and its user). Data is stored in the temporary space. Data will be purged for that session once the session ends. Collect statistics is supported. Indexes can be created. COMPRESS columns, as well as DEFAULT and TITLE clauses are supported. PPI is also supported.
This table can be identified from the data dictionary table (dbc.tables) using the CommitOpt column. If its value is D (on commit Delete rows) or P (on commit Preserve rows), then its a GTT. If the primary index clause is not specified while creating the table, then Teradata will create the first column as primary by default. One session can generate up to 2000 global temporary tables at a time.
*Volatile tables:* These are created in the spool space for temporary use, and their life span extends for the duration of the session only. The table definition is not stored in the data dictionary. Structure and data are private to the session and its user. Data is stored in the spool space. The table gets dropped once the session ends; if you log in again, no volatile tables show up. Collect statistics is supported. Indexes cannot be created. The COMPRESS column is supported, but the DEFAULT and TITLE clauses are not. PPI is supported. If the primary index clause is not specified while creating the table, then Teradata will create the first column as the primary index by default.
```
ON COMMIT DELETE ROWS (Default)
ON COMMIT PRESERVE ROWS (If we want to retain rows)
```
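For instance, a minimal volatile table definition might look like this (the table and column names are illustrative):
```
CREATE VOLATILE TABLE session_scratch (
    id INTEGER,
    note VARCHAR(100)
) ON COMMIT PRESERVE ROWS;
```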
*Derived tables:* This temporary table is derived from one or more other tables as the result of a sub-query. Derived tables are local to the query and exist only for the duration of the query. The table is automatically deleted once the query is done. Data is stored in the spool space. Table definition is not stored in the data dictionary. The first column in a derived table acts like a PI column for it.
### Properties of Teradata tables
* Fallback: This protects data by storing a second copy of each row of a table on a different AMP in the same cluster. If an AMP fails, the system accesses the fallback rows to meet requests.
* Journals: BEFORE JOURNAL holds the image of impacted rows before any changes are made. AFTER JOURNAL holds the image of affected rows after changes are done. In DUAL BEFORE/AFTER JOURNAL, two images are taken and are stored in two separate AMPs.
* Checksum: This detects some forms of lost writes. A lost write is a write of a file system block that received successful status on completion, but the write either never actually occurred or was written to an incorrect location. As a result, subsequent reads of the same block return the old data. NONE, LOW, MEDIUM, and HIGH levels are available in the checksum.
Teradata is a powerful RDBMS and can help in transforming how businesses work. We have just discussed its basics in this article and a lot still needs to be unearthed. We hope you find this basic information useful to explore the features and capabilities of Teradata further.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/an-introduction-to-teradata/
作者:[Saurabh Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/saurabh-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/an-introduction-of-teradata-featured-image.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/05/Figure-2-Teradata-Architecture.jpg

View File

@ -0,0 +1,178 @@
[#]: subject: "EdUBudgie Linux: Ubuntu Spin with Budgie Desktop for Students, Teachers"
[#]: via: "https://www.debugpoint.com/2022/06/edubudgie-linux-22-04/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
EdUBudgie Linux: Ubuntu Spin with Budgie Desktop for Students, Teachers
======
EdUBudgie is a new Ubuntu LTS flavour featuring the stunning Budgie desktop. We will give you a tour.
![EdUBudgie Linux 22.04 Budgie Desktop][1]
A Linux hobbyist and educator, [Adam Dix][2], announced that a new Budgie desktop Ubuntu spin is now available. The current version is based on the recently released Ubuntu 22.04 LTS “Jammy Jellyfish” and Budgie Desktop 10.6.
### Why another distro?
As per the creator of this distribution, the primary goal is to give an out-of-the-box experience with a FOSS operating system, especially for the underprivileged education sector. How? Firstly, the distro pre-loads most of the required educational software in its ISO image. Secondly, EdUBudgie focuses on running on Chromebooks if needed. As per Google's educational initiatives, millions of well-made Chromebooks are still out there, and EdUBudgie can be installed easily on that hardware (either as a dual boot or a fresh install). This makes those Chromebooks usable for underfunded schools worldwide.
Besides, there are not many educational Linux distros out there. Hence, this distro aims to save time and cost for schools and university administrations by pre-installing additional packages, solving dependencies, etc.
### EdUBudgie Linux 22.04 Review
#### Installation
The latest ISO image of EdUBudgie Linux is 5.7 GB, which is relatively large. The primary reason is that many apps are pre-loaded into the ISO to help educators.
EdUBudgie uses Ubuntu's Ubiquity installer, and installation takes around 5 minutes with basic packages. There were no errors or problems during our quick test.
However, this distro requires a minimum of ~32 GB of disk space on the root partition for installation. It is a hard requirement for installation.
![EdUBudgie requirement on disk space][3]
#### First Look
Budgie desktop looks sleek and beautiful by itself. The primary taskbar cum system tray is at the top. The main application menu is at the left of the top bar. In the middle, you get the fixed app shortcuts which act as a Dock. The system tray is on the right side of the bar.
The application menu is your traditional “Budgie app menu” and looks nice and clean.
Moreover, it has a GRID view and a search and list view. Also, the power options are present inside the App menu itself.
![App Menu Grid View][4]
![App Menu List View][5]
The overall design strikes a good balance: professional, but not too fancy. It uses the chromeos-compact GTK2 theme with the win10 icon pack.
#### Applications
The application stack of EdUBudgie Linux is well curated: a mix of GNOME apps, Mint apps, and Budgie apps. Let me give you a basic example. A calculator is an essential app for a student, so EdUBudgie brings the excellent Qalculate advanced calculator for that need. As you can see, much thought went into curating the pre-loaded apps.
Lets have a quick recap of the apps by their categories.
##### Accessories and Educational Apps
Firstly, in the Accessories section, you get the Nemo file manager (from Linux Mint). Perhaps Nemo is the second-best File manager after Dolphin if you compare the features. Hence, its a good choice.
In addition, it brings the new GNOME Screenshot tool, new [Gnome Text Editor][6], a To-Do List and Break Timer to remind you to take breaks during your study sessions.
Secondly, the main Educational applications are chosen to cater to different classes of students. Heres a list:
* gbrainy: An educational quiz game featuring math and other subjects to keep your mind active
* GeoGebra: Famous open-source program to plot complex mathematical graphs.
* Kig: An interactive Geometry application which makes you think about your ideas. This is one of the best [KDE apps][7] out there.
* KWordQuiz: Improve your English language vocabulary using this excellent app.
* OpenBoard: A handy tool for taking notes and rough drawing, and perhaps one of the [best whiteboard apps][8].
* Scratch: A point-and-click programmable animation environment developed by MIT for junior students. It helps them learn the basics of logic and flow in programming.
An impressive list, isn't it?
Here are some of the images of the above apps for your reference.
![Scratch][9]
![Kig][10]
![GeoGebra][11]
![gbrainy][12]
Lets talk about the Science and Programming applications.
##### Science and Programming
The programming applications include Geany and the Visual Studio Code editor, both excellent choices as IDEs for development. In addition, it contains the following applications for various needs.
* BASIC-256: A program to learn BASIC for young students
* KAlgebra: Math expression solver and graph plotter
* Kalzium: View and understand the Chemical periodic table of elements
* KGeography: View the maps of continents and countries
* KiCad App Suite: CAD drawing
* LibreCAD: Free CAD Drawing app
##### Office and Graphics Apps
An office suite is an important part of any educational operating system. EdUBudgie brings WPS Office as the default office program for documents, spreadsheets, and presentations. This is an interesting choice over LibreOffice.
In addition, you can use Calibre for e-book management and Wondershare EdrawMind for diagrams and flowcharts. However, Wondershare EdrawMind is not open source and comes with limited features; you can instead use the free and open source Dia, which is also pre-loaded.
The Graphics suite of apps includes all famous open-source apps. A quick list is presented below.
* Blender 3D Drawing
* Darktable RAW Photo manager
* GIMP for raster image processing
* Inkscape for Vector Image
* Krita
* Scribus
Finally, EdUBudgie uses the Geary email client, Google Chrome as the default web browser, and the Tilix terminal.
##### Remote Communication
Finally, look at the essential apps needed in today's world. Due to the pandemic, schools and universities also conduct classes online. Furthermore, in-person communication is problematic for students for various reasons. With that in mind, EdUBudgie brings communication apps for everyone.
Firstly, the famous Zoom client is pre-installed, which helps you participate in video conferences and meetings. Secondly, the Skype and Microsoft Teams Linux clients are installed for joining discussions on Microsoft networks. Besides that, Rambox brings a productivity boost to your studies, letting you create workspaces with multiple apps such as mail, messengers, Slack, etc.
![EdUBudgie Brings well-curated Communication Apps for Linux][13]
##### A mix of backend tech
As you can see from the above list of applications, the apps are chosen from different desktop ecosystems: KDE Plasma, Linux Mint, GNOME, and others. Hence, all the backend packages, such as Qt, Java (OpenJDK), and GTK4, are well tested and bundled into the ISO. This is a significant feat by itself.
#### Performance
You may be wondering about performance because there are so many applications. Well, with a significant workload that includes multiple heavy apps (Teams, Skype, etc.) running simultaneously, it consumes around 62% of the available RAM (i.e., 2.4 GB of 4 GB). CPU usage is at 20% on average.
It is an excellent performance metric considering all the apps and their memory footprint. The Budgie desktop itself is well optimized and contributes to this performance.
In addition, the idle-state performance is also good: about 1 GB of memory, with CPU usage within 5%. The idle-state resources are mostly used by X.Org, lightdm, and systemd.
Finally, EdUBudgie Linux uses 19 GB of disk space for its base installation.
![EdUBudgie Linux Performance (average workload)][14]
![EdUBudgie Linux Performance (idle)][15]
### Closing Notes
I am impressed by how well designed this distribution is and how it stays focused on a single purpose, not to mention the super-fast performance of the Budgie desktop itself. Also, thanks to the mix of GTK4, Budgie desktop components, and icon themes, it looks professional.
Moreover, the stability, performance, and long-term support (via Ubuntu LTS) are just add-ons to the excellent list of features. You can use this distribution as your daily driver, which makes it appealing even if you are not a student.
You can download EdUBudgie Linux at their [official website][16].
That said, I hope you find this review helpful and dont forget to check out our other [reviews here][17].
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/06/edubudgie-linux-22-04/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/06/EdUBudgie-Linux-22.04-Budgie-Desktop.jpg
[2]: https://www.linkedin.com/in/adam-dix-0339358/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/06/EdUBudgie-requirement-on-disk-space.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/06/App-Menu-Grid-View.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/06/App-Menu-List-View.jpg
[6]: https://www.debugpoint.com/2021/12/gnome-text-editor/
[7]: https://www.debugpoint.com/tag/kde-apps/
[8]: https://www.debugpoint.com/2022/02/top-whiteboard-applications-linux/
[9]: https://www.debugpoint.com/wp-content/uploads/2022/06/Scratch.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/06/Kig.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/06/GeoGebra.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/06/gbrainy.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/06/EdUBudgie-Brings-well-curated-Communication-Apps-for-Linux.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/06/EdUBudgie-Linux-Performance-average-workload.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2022/06/EdUBudgie-Linux-Performance-idle.jpg
[16]: https://www.edubudgie.com/download
[17]: https://www.debugpoint.com/tag/linux-distro-review

View File

@ -0,0 +1,176 @@
[#]: subject: "How SREs can achieve effective incident response"
[#]: via: "https://opensource.com/article/22/6/effective-incident-response-site-reliability-engineers"
[#]: author: "Robert Kimani https://opensource.com/users/robert-charles"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How SREs can achieve effective incident response
======
Get back to business and continue services in a timely manner by implementing a thorough incident response strategy.
![Person using a laptop][1]
Incident response includes monitoring, detecting, and reacting to unplanned events such as security breaches or other service interruptions. The goal is to get back to business, satisfy service level agreements (SLAs), and provide services to employees and customers. Incident response is the planned reaction to a breach or interruption. One goal is to avoid unmanaged incidents.
### Establish an on-call system
One way of responding is to establish an on-call system. These are the steps to consider when you're setting up an on-call system:
1. Design an [effective on-call system][2]
2. Understand managed vs. unmanaged incidents
3. Build and implement an effective postmortem process
4. Learn the tools and templates for postmortems
### Understand managed and unmanaged incidents
An *unmanaged incident* is an issue that an on-call engineer handles, often with whatever team member happens to be available to help. More often than not, unmanaged incidents become serious issues because they are not handled correctly. Issues include:
* No clear roles.
* No incident command.
* Random team members involved (freelancing), the primary killer of the management process.
* Poor (or lack of) communication.
* No central body running troubleshooting.
A *managed incident* is one handled with clearly defined procedures and roles. Even when an incident isn't anticipated, it's still met with a team that's prepared. A managed incident is ideal. It includes:
* Clearly defined roles.
* Designated incident command that leads the effort.
* Only the ops-team defined by the incident command updates systems.
* A dedicated communications role exists until a communication person is identified. The Incident Command can fill in this role.
* A recognized command post such as a "war room." Some organizations have a defined "war room bridge number" where all the incidents are handled.
Incident management takes place in a war room. The Incident Command is the role that leads the war room. This role is also responsible for organizing people around the operations team, planning, and communication.
The Operations Team is the only team that can touch the production systems. Hint: Next time you join an incident management team, the first question to ask is, "Who is running the Incident Command?"
### Deep dive into incident management roles
Incident management roles clearly define who is responsible for what activities. These roles should be established ahead of time and well-understood by all participants.
**Incident Command**: Runs the war room and assigns responsibilities to others.
**Operations Team**: Only role allowed to make changes to the production system.
**Communication Team**: Provides periodic updates to stakeholders such as the business partners or senior executives.
**Planning Team**: Supports operations by handling long-term items such as providing bug fixes, postmortems, and anything that requires a planning perspective.
As an SRE, you'll probably find yourself in the Operations Team role, but you may also have to fill other roles.
### Build and implement an effective postmortem process
Postmortem is a critical part of incident management that occurs once the incident is resolved.
#### Why postmortem?
* Fully understand/document the incident using postmortems. You can ask questions such as "What could have been done differently?"
* Conduct a deep dive "root cause" analysis, producing valuable insights.
* Learn from the incident. This is the primary benefit of doing postmortems.
* Identify opportunities for prevention as part of postmortem analysis, e.g., identify a monitoring enhancement to catch an issue sooner in the future.
* Plan and follow through with assigned activities as part of the postmortem.
#### Blameless postmortem: a fundamental tenet of SRE
No finger-pointing. People are quite honestly scared of postmortems because one person or team may be held responsible for the outage. Avoid finger-pointing at all costs; instead, focus solely on systems and processes and *not* on individuals. Isolating individuals or teams can create an unhealthy culture: the next time someone makes a mistake, they will not come forward and admit it. They may hide the activity for fear of being blamed.
Though there is no room for finger-pointing, the postmortem must call out improvement opportunities. This approach helps avoid further similar incidents.
#### When is a postmortem needed?
Is a postmortem necessary for all incidents or only for certain situations? Here are some suggestions for when a postmortem is useful:
* End-user experience impact beyond a threshold (SLO), such as when the SLO in place is impacted due to:
  * Unavailable services
  * Unacceptable performance
  * Erratic functionality
  * Data loss
* Organization/group-specific requirements with different policies and protocols to follow.
#### Six minimum items required in a postmortem
The postmortem should include the following six components (a minimal skeleton follows the list):
1. Summary: Provide a succinct incident summary.
2. Impact (must include any financial impact): Executives will look for impact and financial information.
3. Root cause(s): Identify the root cause, if possible.
4. Resolution: What the team actually did to fix the issue.
5. Monitoring (issue detection): Specify how the incident was identified. Hopefully, this was a monitoring system rather than an end-user complaint.
6. Action items with due dates and owners: This is important. Do not simply conduct a postmortem and forget the incident. Establish action items, assign owners, and follow through on these. Some organizations may also include a detailed timeline of occurrences in the postmortem, which can be useful to walk through the sequence of events.
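Putting those six items together, a minimal postmortem skeleton might look like this (purely illustrative; adapt it to your organization's template):
```
# Postmortem: <incident title> (<date>)

## Summary
One-paragraph description of what happened and for how long.

## Impact
Affected services, users, SLOs, and any financial impact.

## Root cause(s)
What actually went wrong, as deep as the analysis allows.

## Resolution
What the team did to restore service.

## Monitoring (detection)
How the incident was detected (alert, dashboard, or user report).

## Action items
- [ ] <action> (owner: <name>, due: <date>)
```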
Before the postmortem is published, a supervisor or senior team member(s) must review the document to avoid any errors or misrepresentation of facts.
#### Find postmortem tools and templates
If you haven't done postmortems before, you may be wondering how to get started. You've learned a lot about postmortems thus far, but how do you actually implement one?
That's where tools and templates come into play. There are many tools available. Consider the following:
1. Existing ITSM tools in your organization. Popular examples include ServiceNow, Remedy, Atlassian ITSM, etc. Existing tools likely provide postmortem tracking capabilities.
2. Open source tools are also available, the most popular being [Morgue][3], released by Etsy. Another popular choice is [PagerDuty][4].
3. Develop your own. Remember, SREs are also software engineers! It doesn't have to be fancy, but it must have an easy-to-use interface and a way to store the data reliably.
4. Templates. These are documents that you can readily use to track your postmortems. There are many templates available, but the most popular ones are:
* Google: [Postmortem Culture: Learning from Failure][5] and [Example Postmortem][6]
  * PagerDuty: [The Postmortem][7]
  * Atlassian: [Incident Postmortem Template][8] and [Incident Postmortems][9], which include a root cause analysis built on the "5 whys"
* [Splunk On-Call, formerly VictorOps][12]
* Other [GitHub Template resources][13]
* A custom in-house template: This may be the most effective option as it suits your organization's needs.
### Wrap up
Here are the key points for the above incident response discussion:
  * An effective on-call system is necessary to ensure service availability and health.
* Balance workload for on-call engineers.
* Allocate resources.
* Use multi-region support.
* Promote a safe and positive environment.
* Incident management must facilitate a clear separation of duties.
* Incident command, operations, planning, and communication.
* Blameless postmortems help prevent repeated incidents.
Incident management is only one side of the equation. For an SRE organization to be effective, it must also have a change management system in place. After all, changes cause many incidents.
The next article looks at ways to apply effective change management.
#### Further reading
* [Blameless Postmortems and a Just Culture][16]
* [Postmortem Checklist][17]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/effective-incident-response-site-reliability-engineers
Author: [Robert Kimani][a]
Selected by: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/robert-charles
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/laptop_screen_desk_work_chat_text.png
[2]: https://opensource.com/article/22/6/introduction-site-reliability-engineering
[3]: https://github.com/etsy/morgue
[4]: https://github.com/PagerDuty/postmortem-docs
[5]: https://sre.google/sre-book/postmortem-culture/
[6]: https://sre.google/sre-book/example-postmortem/
[7]: https://postmortems.pagerduty.com/
[8]: https://www.atlassian.com/incident-management/postmortem/templates
[9]: https://www.atlassian.com/incident-management/handbook/postmortems
[10]: https://www.atlassian.com/incident-management/postmortem/templates
[11]: https://www.atlassian.com/incident-management/handbook/postmortems
[12]: https://help.victorops.com/
[13]: https://github.com/dastergon/postmortem-templates
[14]: https://www.atlassian.com/incident-management/postmortem/templates
[15]: https://www.atlassian.com/incident-management/handbook/postmortems
[16]: https://www.etsy.com/codeascraft/blameless-postmortems/
[17]: https://docs.google.com/document/d/1iaEgF0ICSmKKLG3_BT5VnK80gfOenhhmxVnnUcNSQBE/edit

View File

@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Donkey)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 ways Ansible is for everyone)
[#]: via: (https://opensource.com/article/21/1/ansible)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
10 ways Ansible is for everyone
======
Expand your knowledge and skills with the top 10 Ansible articles and five news digests from 2020.
![gears and lightbulb to represent innovation][1]
It's the end of the year once again, and we've arrived at another great collection of Ansible articles published on Opensource.com. I thought it would be good to review them as a progression of themes. I hope to inspire beginners with an interest in Ansible. There is also a series of round-up articles, which I have included for your future reference.
### Ansible for beginners
The first five articles on this year's list are a very good starting point for those new to Ansible. The first three were written by Opensource.com editor Seth Kenlon.
  * If you don't know Ansible, [_here are 7 things you can do right now_][2] to get started. It is a good introductory guide that collects links for managing hardware, clouds, containers, and more.
  * In [_What's the difference between orchestration and automation?_][3], you'll pick up some terminology and technical directions that will get you interested in Ansible.
  * The article [_How to install software with Ansible_][4] covers some beginner concepts and good Ansible habits, with examples of managing software packages locally or remotely.
  * With [_3 lessons I learned writing Ansible playbooks_][5], build up the good habits passed along by Jeff Geerling, a true Ansible veteran. Source control, documentation, testing, simplification, and optimization are the keys to automation success.
  * [_My first day using Ansible_][6] walks through journalist David Both's thought process in tackling a repetitive development task. The article starts with Ansible basics and illustrates a few simple operations and tasks.
### Trying Ansible projects
Once you have the basics and good habits in place, you can move on to some specific topics and examples.
  * Ken Fallon presents an example of deploying and managing a fleet of Raspberry Pi units in [_Manage your Raspberry Pi fleet with Ansible_][7]. It covers security and maintenance concepts in constrained environments.
  * In [_Mix your calendar with Ansible to avoid schedule conflicts_][8], Nicolas Leiva gives a quick introduction to using pre-tasks and conditionals to enforce quiet windows in automated scheduling.
  * Nicolas completes his calendar quiet-window idea in [_Create an Ansible module that integrates with your Google Calendar_][9]. His article goes deeper into writing a custom Ansible module in Go to achieve the required calendar connectivity. Nicolas covers different ways of building and invoking a Go program, passing the required data to Ansible, and receiving the desired output.
### Leveling up your Ansible skills
Kubernetes has been a hot topic lately, and the following articles provide some good examples for learning new skills.
  * In [_Automate your Ansible modules for Kubernetes orchestration_][10], Seth Kenlon introduces the Ansible Kubernetes module, describes a basic Minikube installation for testing, and provides some basic examples of the `k8s` module for pod control.
  * Jeff Geerling explains the concepts of Helm chart applications, Ansible collections, and a fun project to set up your own Minecraft server in a k8s cluster, in [_Build a Kubernetes Minecraft server using Ansible's Helm modules_][11].
### Other Ansible news
This year, Mark Phillips wrote the "Ansible around the web" series of articles, covering many Ansible topics. They contain links to interesting Ansible developments, ranging from basic guides, module writing, Kubernetes, and video demos to Ansible community news. People of all interests and skill levels should check them out; there is plenty of value in them!
  * [_Container network security, and more Ansible news_][12]
  * [_Tips for CI/CD pipelines and Windows users, and more Ansible news_][13]
  * [_Collections mark a significant shift in the Ansible ecosystem, and more Ansible news_][14]
  * [_Jeff Geerling's Ansible 101 videos, and more Ansible news_][15]
  * [_Guides for beginners, Windows, networking, and more Ansible news_][16]
### Happy 2021!
I hope your Ansible journey has begun, and that you often find enrichment in the articles on Opensource.com. Let us know in the comments what you might want to learn about Ansible next, and if you have something to share, please consider [writing an article][17] on Opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/ansible
Author: [James Farrell][a]
Selected by: [lujun9972][b]
Translator: [Donkey](https://github.com/Donkey-Hao)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://opensource.com/article/20/9/ansible
[3]: https://opensource.com/article/20/11/orchestration-vs-automation
[4]: https://opensource.com/article/20/9/install-packages-ansible
[5]: https://opensource.com/article/20/1/ansible-playbooks-lessons
[6]: https://opensource.com/article/20/10/first-day-ansible
[7]: https://opensource.com/article/20/9/raspberry-pi-ansible
[8]: https://opensource.com/article/20/10/calendar-ansible
[9]: https://opensource.com/article/20/10/ansible-module-go
[10]: https://opensource.com/article/20/9/ansible-modules-kubernetes
[11]: https://opensource.com/article/20/10/kubernetes-minecraft-ansible
[12]: https://opensource.com/article/20/1/ansible-news-edition-six
[13]: https://opensource.com/article/20/2/ansible-news-edition-seven
[14]: https://opensource.com/article/20/3/ansible-news-edition-eight
[15]: https://opensource.com/article/20/4/ansible-news-edition-nine
[16]: https://opensource.com/article/20/5/ansible-news-edition-ten
[17]: https://opensource.com/how-submit-article

View File

@ -3,7 +3,7 @@
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: (Donkey)
[#]: reviewer: ( )
[#]: reviewer: (turbokernel)
[#]: publisher: ( )
[#]: url: ( )
@ -61,8 +61,8 @@ via: https://opensource.com/article/21/5/kubernetes-chaos
Author: [Jessica Cherry][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/Donkey-Hao)
Proofreader: [校对者ID](https://github.com/校对者ID)
Translator: [Donkey](https://github.com/Donkey-Hao)
Proofreader: [turbokernel](https://github.com/turbokernel)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).

View File

@ -0,0 +1,301 @@
[#]: subject: "Apache Kafka: Asynchronous Messaging for Seamless Systems"
[#]: via: "https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apache Kafka为“无缝系统”提供异步消息支持
======
Apache Kafka 是最流行的开源消息代理之一。它已经成为了大数据操作的重要组成部分,你能够在几乎所有的微服务环境中找到它。本文对 Apache Kafka 进行了简要介绍,并提供了一个案例来展示它的使用方式。
![][1]
你有没有想过电子商务平台是如何在处理巨大的流量时做到不会卡顿的呢有没有想过OTT 平台是如何在同时向数百万用户交付内容时,做到平稳运行的呢?其实,关键就在于它们的分布式架构。
采用分布式架构设计的系统由多个功能组件组成。这些功能组件通常分布在许多个机器上,它们通过网络,异步地交换消息,从而实现相互协作。正是由于异步消息的存在,组件之间才能实现可伸缩、无阻塞的通信,整个系统才能够平稳运行。
### 异步消息
异步消息的常见特性有:
* 消息的生产者和消费者都不知道彼此的存在。它们在不知道其他对象的情况下,加入和离开系统。
* 消息代理充当了生产者和消费者之间的中介。
* 生产者把每条消息,都与一个<ruby>“主题”<rt>topic</rt></ruby>相关联。每个主题只是一个简单的字符串。
* 一个生产者可以把消息发往多个主题,不同生产者也可以把消息发送给同一主题。
* 消费者向代理订阅一个或多个主题的消息。
* 生产者只将消息发送给代理,而不发送给消费者。
* 代理会把消息发送给订阅该主题的所有消费者。
* 生产者并不期望得到消费者的任何回应。换句话说,生产者和消费者不会相互阻塞。
市场上的消息代理有很多,而 Apache Kafka 是其中最受欢迎的一种(之一)。
### Apache Kafka
Apache Kafka is an open source distributed messaging system with stream-processing capabilities, developed under the Apache Software Foundation. Architecturally, it is a cluster of multiple brokers that are coordinated through the Apache ZooKeeper service. The brokers share the load on the cluster while receiving, persisting, and delivering messages.
#### Partitions
Kafka writes messages into buckets known as "partitions". A given partition holds messages on one topic only. For example, Kafka writes messages on the `heartbeats` topic into a partition named `heartbeats-0` (assuming it is a single-partition topic), independently of the producers involved.
![Figure 1: Asynchronous messaging][2]
However, to take advantage of the parallel processing that a Kafka cluster offers, administrators often create multiple partitions for a given topic. Say the administrator creates three partitions for the `heartbeats` topic. Kafka names them `heartbeats-0`, `heartbeats-1`, and `heartbeats-2`, and distributes messages across the three partitions in such a way that they remain evenly spread.
There is another possible scenario, in which producers associate each message with a message key. For example, one component uses `C1` as the key while another uses `C2` when sending messages on the same `heartbeats` topic. In that case, Kafka ensures that messages on a topic that carry the same key are always written to the same partition. Within a partition, however, messages do not necessarily share the same key. Figure 2 shows one possible distribution of messages across the partitions.
![Figure 2: Message distribution among the partitions][3]
#### Leaders and in-sync replicas
Kafka maintains many partitions across a cluster (made up of multiple brokers). The broker responsible for maintaining a partition is known as its "leader". Only the leader can receive and deliver messages on its partition.
But what happens if a partition's leader fails? To ensure business continuity, every leader (broker) replicates its partitions onto other brokers. These other brokers are then known as the in-sync replicas (ISR) of the partition. Should a partition's leader fail, ZooKeeper holds an election and appoints one of the in-sync replicas as the new leader. That new leader then takes over receiving and delivering messages for the partition. Administrators can specify how many in-sync replicas a partition must maintain.
![Figure 3: Command-line producer][4]
#### Message persistence
Brokers map each partition to a dedicated disk file for persistence. By default, messages are retained on disk for one week. Once written to a partition, messages cannot have their content or order changed. Administrators can configure policies such as the retention period and the compression algorithm.
![Figure 4: Command-line consumer][5]
#### Consuming messages
Unlike most other messaging systems, Kafka does not actively push messages to consumers. Instead, consumers are expected to listen on topics and read the messages on their own. A consumer can read messages from multiple partitions of a topic, and multiple consumers can read messages from the same partition. Kafka guarantees that a given consumer never reads the same message twice.
Every consumer in Kafka has a group ID, and consumers that share a group ID form a consumer group. Typically, to read messages from the N partitions of a topic, an administrator creates a consumer group with N consumers, so that each consumer in the group reads from its designated partition. If a group has more consumers than available partitions, the surplus consumers remain idle.
In every case, Kafka guarantees that no matter how many consumers a group contains, a given message is read only once by that consumer group. This architecture provides consistency, high performance, high scalability, near-real-time delivery, and message persistence, along with zero message loss.
### Installing and running Kafka
Although in theory a Kafka cluster can consist of any number of brokers, most production clusters consist of three or five.
Here we'll set up a single-broker cluster, which is plenty for development and testing purposes.
Visit [https://kafka.apache.org/downloads][5a] in a browser to download the latest version of Kafka. From a Linux terminal, we can also download it with the following command:
```
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/2.8.0/kafka_2.12-2.8.0.tgz
```
If needed, we can move the downloaded archive `kafka_2.12-2.8.0.tgz` to another directory. Extracting the archive yields a directory named `kafka_2.12-2.8.0`, which serves as the `KAFKA_HOME` we'll refer to from here on.
Open the `server.properties` file under `KAFKA_HOME/config` and uncomment the following line:
```
listeners=PLAINTEXT://:9092
```
This line configures Kafka to receive plain-text messages on port `9092` of the local machine. Kafka can also be configured to receive messages over a secure channel, which is recommended in production environments.
No matter how many brokers a cluster contains, Kafka needs ZooKeeper to manage and coordinate them; this holds even for a single-broker cluster. Since ZooKeeper ships with the Kafka installation, we can start it from the `KAFKA_HOME` directory with the following command:
```
./bin/zookeeper-server-start.sh ./config/zookeeper.properties
```
Once ZooKeeper is up and running, we can start Kafka in another terminal with:
```
./bin/kafka-server-start.sh ./config/server.properties
```
At this point, a single-broker Kafka cluster is up and running.
### Verifying Kafka
Let's try sending and receiving messages on a topic named `topic-1`! We can specify the number of partitions for a topic when creating it, using the following command:
```
./bin/kafka-topics.sh --create --topic topic-1 --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The command above also specifies the replication factor, whose value must not exceed the number of brokers in the cluster. Since we're running a single-broker cluster, the replication factor can only be set to 1.
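Before moving on, we can double-check the layout Kafka actually created. The same script that created the topic can also describe it; its output lists each partition with its leader and in-sync replicas, tying back to the concepts above:
```
./bin/kafka-topics.sh --describe --topic topic-1 --zookeeper localhost:2181
```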
Once the topic is created, producers and consumers can exchange messages on it. The Kafka distribution ships with a command-line producer and a command-line consumer for testing purposes.
Open a third terminal and run the following command to start the command-line producer:
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1
```
The command displays a prompt at which we can type simple text messages. Because of the options given on the command, the producer sends the messages on `topic-1` to the Kafka broker running on port 9092 of the local machine.
Open a fourth terminal and run the following command to start the command-line consumer:
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 -from-beginning
```
This command starts a consumer that connects to the Kafka broker on port 9092 of the local machine and subscribes to the `topic-1` topic to read its messages. Because of the last option on the command line, the consumer reads all the messages on the topic from the very beginning.
Since the producer and the consumer connect to the same broker and refer to the same topic, the consumer receives the messages and prints them to the terminal.
Now, let's try using Kafka in a practical scenario!
### Case study
Say a public bus transport company named ABC operates a fleet of passenger buses that run between different cities across the country. Because ABC wants to track every bus in real time to improve the quality of its operations, it comes up with a solution based on Apache Kafka.
ABC first equips all of its buses with location-tracking devices. It then sets up an operations center built on Kafka to receive location updates from the hundreds of buses. It also develops a dashboard to display the current position of every bus at any point in time. Figure 5 shows this architecture:
![Figure 5: Kafka-based architecture][6]
In this architecture, the devices on the buses play the role of message producers. They periodically send their current location to Kafka on the `abc-bus-location` topic. ABC chooses the bus's trip code as the message key in order to handle messages from the different buses. For example, the trip code for the bus from Bengaluru to Hubballi would be `BLRHBL003`, so every message from that bus during that trip carries the message key `BLRHBL003`.
The dashboard application plays the role of the message consumer. It registers with the broker on the same topic, `abc-bus-location`. The topic thus becomes the virtual channel between the producers (the buses) and the consumer (the dashboard).
The devices on the buses never expect any reply from the dashboard application. In fact, none of them is even aware of the others' existence. Thanks to this architecture, non-blocking communication takes place between the hundreds of buses and the operations center.
#### Implementation
Say ABC wants to create three partitions to maintain the location updates. Since our development environment has only a single broker, the replication factor should be set to 1.
Accordingly, the following command creates the topic to match those requirements:
```
./bin/kafka-topics.sh --create --topic abc-bus-location --zookeeper localhost:2181 --partitions 3 --replication-factor 1
```
The producer and consumer applications can be written in many languages, such as Java, Scala, Python, and JavaScript. The code in the following sections shows how they'd be written in Java, to give us a first-hand feel.
##### A Java producer
The `Fleet` class below simulates the Kafka producer application running on six of ABC's buses. It sends location updates on the `abc-bus-location` topic to the specified broker. Note that, for simplicity's sake, the topic name, the message keys, the message contents, and the broker address are all hard-coded.
```
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.stream.IntStream;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Fleet {
    public static void main(String[] args) throws Exception {
        String broker = "localhost:9092";

        // Configure the producer with the broker address and string serializers
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        String topic = "abc-bus-location";

        // Trip code -> current coordinates, hard-coded for simplicity
        Map<String, String> locations = new HashMap<>();
        locations.put("BLRHBL001", "13.071362, 77.461906");
        locations.put("BLRHBL002", "14.399654, 76.045834");
        locations.put("BLRHBL003", "15.183959, 75.137622");
        locations.put("BLRHBL004", "13.659576, 76.944675");
        locations.put("BLRHBL005", "12.981337, 77.596181");
        locations.put("BLRHBL006", "13.024843, 77.546983");

        // Send ten rounds of location updates, one message per trip per round;
        // the trip code doubles as the message key
        IntStream.range(0, 10).forEach(i -> {
            for (String trip : locations.keySet()) {
                ProducerRecord<String, String> record
                        = new ProducerRecord<String, String>(
                                topic, trip, locations.get(trip));
                producer.send(record);
            }
        });
        producer.flush();
        producer.close();
    }
}
```
##### A Java consumer
The `Dashboard` class below implements a Kafka consumer application that runs at ABC's operations center. It listens on the `abc-bus-location` topic with the consumer group ID `abc-dashboard`, and it displays the detailed location of a bus as soon as a message arrives. The location details would normally be configurable, but again for simplicity's sake they are hard-coded:
```
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Dashboard {
    public static void main(String[] args) {
        String broker = "127.0.0.1:9092";
        String groupId = "abc-dashboard";

        // Configure the consumer with the broker address, string deserializers,
        // and the group ID that makes multiple dashboards form one consumer group
        Properties props = new Properties();
        props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, broker);
        props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        @SuppressWarnings("resource")
        Consumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("abc-bus-location"));

        // Poll the broker indefinitely and print every location update received
        while (true) {
            ConsumerRecords<String, String> records
                    = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                String topic = record.topic();
                int partition = record.partition();
                String key = record.key();
                String value = record.value();
                System.out.println(String.format(
                        "Topic=%s, Partition=%d, Key=%s, Value=%s",
                        topic, partition, key, value));
            }
        }
    }
}
```
##### Dependencies
Compiling and running this code requires JDK 8 or above. The Maven dependencies in the `pom.xml` file below download the required Kafka client libraries and add them to the classpath:
```
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
```
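With the dependencies in place, one convenient way to compile and launch the classes is the Maven exec plugin. This is just an illustrative invocation; it assumes a standard Maven project layout with the two classes in the default package:
```
mvn -q compile exec:java -Dexec.mainClass=Dashboard    # run this in each consumer terminal
mvn -q compile exec:java -Dexec.mainClass=Fleet        # then start the producer in another terminal
```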
#### Deployment
Since the `abc-bus-location` topic was created with three partitions, it's natural to run three consumers to read the location updates faster. To do that, we need to run the dashboard in three different terminals at the same time. Because all three dashboards register under the same group ID, they form a consumer group, and Kafka assigns each dashboard a specific partition (to consume).
Once all the dashboard instances are running, start the `Fleet` class in another terminal. Figures 6, 7, and 8 show sample console output in the dashboard terminals.
![Figure 6: Dashboard terminal 1][7]
A closer look at the console messages shows that the consumers in the first, second, and third terminals are reading messages from `partition-2`, `partition-1`, and `partition-0`, respectively. We can also see that the messages with the keys `BLRHBL002`, `BLRHBL004`, and `BLRHBL006` were written to `partition-2`, the messages with the key `BLRHBL005` went to `partition-1`, and the remaining messages went to `partition-0`.
![Figure 7: Dashboard terminal 2][8]
The beauty of using Kafka is that, as long as the cluster is designed appropriately, it scales horizontally to support a large number of buses and millions of messages.
![Figure 8: Dashboard terminal 3][9]
### More than messaging
According to the Kafka website, more than 80% of Fortune 100 companies use Kafka. It is deployed across many industry verticals, such as financial services and entertainment. Although Kafka started out as a simple messaging service, it has become part of the big data ecosystem, with industrial-strength stream-processing capabilities. For enterprises that prefer managed solutions, Confluent offers a cloud-based Kafka service for a subscription fee. (LCTT translator's note: Confluent is a commercial company built around Kafka; its Confluent Kafka adds many enterprise features on top of Apache Kafka and is regarded as a "more complete Kafka.")
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-for-seamless-systems/
Author: [Krishna Mohan Koyya][a]
Selected by: [lkxed][b]
Translator: [lkxed](https://github.com/lkxed)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Digital-backgrund-connecting-in-globe.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-1-Asynchronous-messaging.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-2-Message-distribution-among-the-partitions.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-3-Command-line-producer.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-4-Command-line-consumer.jpg
[5a]: https://kafka.apache.org/downloads
[6]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-5-Kafka-based-architecture.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-6-Dashboard-Terminal-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-7-Dashboard-Terminal-2.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2021/09/Figure-8-Dashboard-Terminal-3.jpg

View File

@ -0,0 +1,143 @@
[#]: subject: "Integrating Zeek with ELK Stack"
[#]: via: "https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/"
[#]: author: "Tridev Reddy https://www.opensourceforu.com/author/tridev-reddy/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Integrating Zeek with the ELK Stack
======
Zeek is an open source network security monitoring tool. This article discusses how to integrate Zeek with ELK.
![Integrating-Zeek-with-ELK-Stack-Featured-image][1]
In the article titled "Network security monitoring made easy with Zeek", published in the March 2022 edition of this magazine, we looked at Zeek's capabilities and learned how to get started with it. We'll now take that learning one step further and see how to integrate it with ELK (also known as Elasticsearch, Kibana, Beats, and Logstash).
To do so, we'll use a tool called Filebeat, which monitors, collects, and forwards logs to Elasticsearch. We'll configure Filebeat together with Zeek, so that the data the latter collects is forwarded and centralized on our Kibana dashboard.
### Installing Filebeat
Let's first install Filebeat alongside Zeek. Use *apt* to install Filebeat with the following command:
```
sudo apt install filebeat
```
Next, we need to configure the *.yml* file located in the */etc/filebeat/* folder:
```
sudo nano /etc/filebeat/filebeat.yml
```
We only need to configure two things here. In the *Filebeat inputs* section, change the type to `log`, and uncomment *enabled: false*, changing it to `true`. We also need to specify the path where the logs are stored, that is, */opt/zeek/logs/current/\*.log*.
Once that's done, the first part of the setup should look similar to what's shown in Figure 1.
![Figure 1: Filebeat config (a)][2]
The second thing to modify is in the Elasticsearch output section: under *Outputs*, uncomment output.elasticsearch and hosts. Make sure the host URL and port number match what you configured when installing ELK. We keep it at localhost, with port number 9200.
In the same section, uncomment the username and password at the bottom, and enter the username and password of the elastic user that were generated when you configured ELK after installation. Once done, check your settings against Figure 2.
![Figure 2: Filebeat config (b)][3]
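Pulling Figures 1 and 2 together, the relevant parts of *filebeat.yml* end up looking roughly like the sketch below; the credentials are placeholders, and the host and port should match your own ELK setup:
```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/zeek/logs/current/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<your-elastic-password>"
```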
Now that we're done with installation and configuration, we need to configure Zeek to store its logs in JSON format. For that, make sure your Zeek instance is stopped. If it isn't, run the following commands to stop it:
```
cd /opt/zeek/bin
./zeekctl stop
```
Now we need to add one small line to local.zeek, which lives in the */opt/zeek/share/zeek/site/* directory.
Open the file as root and add the following line:
```
@load policy/tuning/json-logs.zeek
```
Refer to Figure 3 to make sure the setting is correct.
![Figure 3: local.zeek file][4]
Since we've changed some of Zeek's configuration, we need to redeploy it, which is done by running the following commands:
```
cd /opt/zeek/bin
./zeekctl deploy
```
Now we need to enable the Zeek module in Filebeat so that it forwards Zeek's logs. Run the following command:
```
sudo filebeat modules enable zeek
```
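To confirm that the module really is switched on, Filebeat can list its modules; `zeek` should now show up among the enabled ones:
```
sudo filebeat modules list
```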
We're almost there. As a final step, configure the *zeek.yml* file to specify what kinds of data to log. This is done by modifying the */etc/filebeat/modules.d/zeek.yml* file.
In this *.yml* file, we have to mention the directory in which each of these specified logs is stored. We know that the logs are stored in the `current` folder, which holds several files such as *dns.log*, *conn.log*, *dhcp.log*, and so on. We need to mention each path in each section. If, and only if, you don't need the logs of a given file/program, you can discard the files you don't need by changing the enabled value to `false`.
For example, for *dns*, make sure the enabled value is `true` and the path is configured:
```
var.paths: [ "/opt/zeek/logs/current/dns.log", "/opt/zeek/logs/*.dns.json" ]
```
Repeat this for the rest of the files. We did this for the files we needed, adding all the main required ones; you can do the same. Refer to Figure 4.
![Figure 4: zeek.yml configuration][5]
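Each section of *zeek.yml* follows the same pattern as the *dns* example above. As a trimmed illustration (fileset names follow the Filebeat Zeek module's conventions; the paths are the ones used in this setup), a couple of enabled datasets and one disabled dataset might look like this:
```
- module: zeek
  dns:
    enabled: true
    var.paths: [ "/opt/zeek/logs/current/dns.log" ]
  connection:
    enabled: true
    var.paths: [ "/opt/zeek/logs/current/conn.log" ]
  dhcp:
    enabled: false
```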
It's time to start Filebeat. Run the following commands:
```
sudo filebeat setup
sudo service filebeat start
```
Now that everything is done, let's move to the Kibana dashboard and check whether we're receiving the data from Zeek through Filebeat.
![Figure 5: Dashboard of Kibana (Destination Geo)][6]
Go to the dashboard. You can see a clear statistical analysis of the data it has captured (Figures 5 and 6).
![Figure 6: Dashboard of Kibana (Network)][7]
Now let's go to the Discover tab and check the results by filtering with a query:
```
event.module: “zeek”
```
This query filters all the data received over a given time span and shows us only the data from the module named Zeek (Figure 7).
![Figure 7: Filtered data by event.module query][8]
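The filter can be narrowed further using the fields the Zeek module populates. Assuming the standard `event.dataset` naming that Filebeat modules use, DNS traffic alone can be isolated with a query like:
```
event.module: "zeek" and event.dataset: "zeek.dns"
```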
### Acknowledgements
*The author thanks Sibi Chakkaravarthy Sethuraman, Sudhakar Ilango, Nandha Kumar R., and Anupama Namburu of the School of Computer Science and Engineering, VIT-AP, for their constant guidance and support. Special thanks to the Centre of Excellence in Artificial Intelligence and Robotics (AIR).*
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/
Author: [Tridev Reddy][a]
Selected by: [lkxed][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://www.opensourceforu.com/author/tridev-reddy/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Integrating-Zeek-with-ELK-Stack-Featured-image.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Filebeat-config-a.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Filebeat-config-b.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-local.zeek-file-1.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-zeek.yml-configuration.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Dashboard-of-Kibana-Destination-Geo.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Dashboard-of-Kibana-Network-1.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-7-Filtered-data-by-event.jpg

View File

@ -0,0 +1,182 @@
[#]: subject: "Run Windows Apps And Games Using WineZGUI On Linux"
[#]: via: "https://ostechnix.com/winezgui-run-windows-apps-and-games-on-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Run Windows Apps and Games Using WineZGUI on Linux
======
WineZGUI - a Wine GUI frontend that uses Zenity
A while ago, we wrote about **[Bottles][1]**, an open source graphical application that makes it easy to run Windows software and games on Linux operating systems. Today, we'll discuss a similar, interesting project. Say hello to **WineZGUI**, a Wine GUI frontend for **[running Windows apps and games with Wine on Linux][2]**.
#### Contents
1. What is WineZGUI?
2. Bottles vs WineZGUI
3. How to install WineZGUI in Linux
4. Run Windows apps and games with WineZGUI in Linux
5. Conclusion
### What is WineZGUI?
WineZGUI is a collection of Bash scripts that lets you easily manage Wine prefixes and provides an easier Wine gaming experience on Linux using **Zenity**.
With WineZGUI, we can launch Windows exe files or games directly from the file manager, without installing them.
WineZGUI creates a shortcut for each app or game for easy access, and it also creates a separate prefix for each exe binary.
When you launch a Windows exe file with WineZGUI, it prompts you to either use the default Wine prefix or create a new one. The default prefix is `~/.local/share/winezgui/default`.
If you choose to create a new prefix for a Windows binary or exe, WineZGUI tries to extract the product name and icon from the exe file and creates a desktop shortcut.
When you launch the same exe or binary later, it suggests running it with the previously associated prefix.
In layman's terms, WineZGUI is just a simple GUI over official, vanilla Wine and winetricks. When we launch an exe to play a game, the Wine prefix setup is automatic.
You just open an exe, and it creates a prefix and a desktop shortcut, extracting the name and icon from that exe.
It uses the **exiftool** and **icotool** utilities to extract the name and the icon, respectively. You can launch the game either by opening an exe with its existing prefix or by using the desktop shortcut.
WineZGUI is a shell script, freely hosted on GitHub. You can grab the source code, improve it, fix bugs, and add features.
### Bottles vs WineZGUI
You may be wondering how WineZGUI compares with Bottles. There is a subtle difference between these applications.
**Bottles is prefix-oriented** and **runner-oriented**. Meaning, Bottles first creates a prefix and then uses it with different exe files. Bottles does not remember an exe's prefix. Bottles uses different runners.
**WineZGUI is exe-oriented**. It uses an exe to create a prefix just for that exe. The next time we open an exe, it asks whether to launch it with the exe's existing prefix.
WineZGUI does not offer advanced features like **Bottles** or **[Lutris][3]** do, such as runners, online installers, and so on.
### How to install WineZGUI in Linux
Make sure you have installed the necessary prerequisites for WineZGUI.
**Debian/Ubuntu**
```
$ sudo dpkg --add-architecture i386
$ sudo apt install zenity wine winetricks libimage-exiftool-perl icoutils gnome-terminal
```
**Fedora**
```
$ sudo dnf install zenity wine winetricks perl-Image-ExifTool icoutils gnome-terminal
```
The officially recommended way to install WineZGUI is using **[Flatpak][4]**.
After installing Flatpak, run the following commands one by one to install WineZGUI in Linux:
```
$ flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
```
$ flatpak --user -y install flathub org.winehq.Wine/x86_64/stable-21.08
```
```
$ wget https://github.com/fastrizwaan/WineZGUI-Releases/releases/download/WineZGUI-0.4_20220608/io.github.WineZGUI_0_4_20220608.flatpak
```
```
$ flatpak --user -y install io.github.WineZGUI_0_4_20220608.flatpak
```
### Run Windows apps and games with WineZGUI in Linux
Launch WineZGUI from the Dash or the menu.
![Launch WineZGUI][5]
This is what WineZGUI's default interface looks like.
![WineZGUI Interface][6]
As you can see in the screenshot above, WineZGUI's interface is very simple and self-explanatory. From the main window, you can:
  * Open an EXE file.
  * Open the Winetricks GUI and CLI.
  * Launch the Wine configuration.
  * Launch the file explorer.
  * Open a Bash shell.
  * Close all apps/games, including the WineZGUI interface.
  * Delete Wine prefixes.
  * View the installed WineZGUI version.
For demonstration purposes, I'll open an .exe file.
In the next window, choose the EXE file to run. In my case, it's WinRAR.
![Choose The EXE File To Run][7]
Next, choose whether to run the EXE file with the default prefix or to create a new one. I go with the default prefix.
![Run WinRAR With Default Prefix][8]
After a few seconds, the WinRAR installation wizard appears. Click Install to continue.
![Install WinRAR In Linux][9]
Click OK to complete the WinRAR installation.
![Complete WinRAR Installation][10]
Click "Run WinRAR" to launch it.
![Run WinRAR][11]
And here is WinRAR running on my Fedora 36 desktop!
![WinRAR Is Running In Fedora Using Wine][12]
### Conclusion
WineZGUI is the new kid on the block. If you're looking for an easier way to run Windows apps and games with Wine on the Linux desktop, WineZGUI may be a good choice.
With the help of WineZGUI, users can choose to create a Wine prefix in the same folder as the `.exe`, with a relatively linked `.desktop` entry created to automate this.
The reasoning is that backing up and deleting a game together with its Wine prefix is easier this way, and having it generate a `.desktop` makes the setup survive moves and transfers.
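The generated launcher is an ordinary freedesktop desktop entry. A hand-written equivalent would look something like the sketch below; the paths are illustrative only, and the exact Exec line WineZGUI writes may differ:
```
[Desktop Entry]
Type=Application
Name=WinRAR
Comment=Launched through a per-exe Wine prefix (illustrative paths)
Icon=/home/user/Games/WinRAR/winrar.ico
Exec=env WINEPREFIX="/home/user/Games/WinRAR/prefix" wine "/home/user/Games/WinRAR/winrar-x64.exe"
Categories=Wine;
```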
A cool scenario is to set up an application, then share the Wine prefix with your friends and others, who only need a working Wine prefix with all dependencies saved.
Give it a try, and let us know what you think of this project in the comments section below.
**Resources:**
  * [WineZGUI GitHub repository][13]
--------------------------------------------------------------------------------
via: https://ostechnix.com/winezgui-run-windows-apps-and-games-on-linux/
Author: [sk][a]
Selected by: [lkxed][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated as part of the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/run-windows-software-on-linux-with-bottles/
[2]: https://ostechnix.com/run-windows-games-softwares-ubuntu-16-04/
[3]: https://ostechnix.com/manage-games-using-lutris-linux/
[4]: https://ostechnix.com/how-to-install-and-use-flatpak-in-linux/
[5]: https://ostechnix.com/wp-content/uploads/2022/06/Launch-WineZGUI.png
[6]: https://ostechnix.com/wp-content/uploads/2022/06/WineZGUI-Interface.png
[7]: https://ostechnix.com/wp-content/uploads/2022/06/Choose-The-EXE-File-To-Run.png
[8]: https://ostechnix.com/wp-content/uploads/2022/06/Run-WinRAR-With-Default-Prefix.png
[9]: https://ostechnix.com/wp-content/uploads/2022/06/Install-WinRAR-In-Linux.png
[10]: https://ostechnix.com/wp-content/uploads/2022/06/Complete-WinRAR-Installation.png
[11]: https://ostechnix.com/wp-content/uploads/2022/06/Run-WinRAR.png
[12]: https://ostechnix.com/wp-content/uploads/2022/06/WinRAR-Is-Running-In-Fedora-Using-Wine.png
[13]: https://github.com/fastrizwaan/WineZGUI