Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating

This commit is contained in:
geekpi 2022-06-29 08:35:49 +08:00
commit 6daffc36ee
17 changed files with 1700 additions and 302 deletions

View File

@ -3,44 +3,44 @@
[#]: author: "Tridev Reddy https://www.opensourceforu.com/author/tridev-reddy/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14770-1.html"
将 Zeek 与 ELK 栈集成
======
Zeek 是一个开源的网络安全监控工具。本文讨论了如何将 Zeek 与 ELK 集成。
![Integrating-Zeek-with-ELK-Stack-Featured-image][1]
> Zeek 是一个开源的网络安全监控工具。本文讨论了如何将 Zeek 与 ELK 集成。
在本杂志 2022 年 3 月版发表的题为“用 Zeek 轻松实现网络安全监控”的文章中,我们研究了 Zeek 的功能,并学习了如何开始使用它。现在我们将把我们的学习经验再进一步,看看如何将其与 ELK(也称为 Elasticsearch、Kibana、Beats 和 Logstash)整合。
![](https://img.linux.net.cn/data/attachment/album/202206/28/164550v4nuk3g7ux77y77v.jpg)
在本杂志 2022 年 3 月版发表的题为“用 Zeek 轻松实现网络安全监控”的文章中,我们研究了 Zeek 的功能,并学习了如何开始使用它。现在我们将把我们的学习经验再进一步,看看如何将其与 ELK(即 Elasticsearch、Kibana、Beats 和 Logstash)整合。
为此,我们将使用一个叫做 Filebeat 的工具,它可以监控、收集并转发日志到 Elasticsearch。我们将把 Filebeat 和 Zeek 配置在一起,这样后者收集的数据将被转发并集中到我们的 Kibana 仪表盘上。
### 安装 Filebeat
让我们首先将 Filebeat 与 Zeek 安装在一起。使用 *apt* 来安装 Filebeat使用以下命令
让我们首先将 Filebeat 与 Zeek 安装在一起。使用 `apt` 来安装 Filebeat使用以下命令
```
sudo apt install filebeat
```
接下来,我们需要配置 *.yml* 文件,它位于 /etc*/filebeat/* 文件夹中:
接下来,我们需要配置 `.yml` 文件,它位于 `/etc/filebeat/` 文件夹中:
```
sudo nano /etc/filebeat/filebeat.yml
```
我们只需要在这里配置两件事。在 *Filebeat* 输入部分,将类型改为 log并取消对 *enabled*:false 的注释,将其改为 true。我们还需要指定存储日志的路径也就是说我们需要指定*/opt/zeek/logs/current/\*.log*
我们只需要在这里配置两件事。在 Filebeat 输入部分,将类型改为 `log`,并取消对 `enabled:false` 的注释,将其改为 `true`。我们还需要指定存储日志的路径,也就是说,我们需要指定 `/opt/zeek/logs/current/*.log`
完成这些后,设置的第一部分应该类似于图 1 所示的内容。
![Figure 1: Filebeat config (a)][2]
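根据上面的描述,输入部分配置完成后大致如下(这只是笔者整理的示意片段,具体字段以你本地的 `filebeat.yml` 为准):

```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/zeek/logs/current/*.log
```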
在 Elasticsearch 输出部分,第二件要修改的事情是在 *Outputs*下,取消对 output.elasticsearch 和 hosts 的注释。确保主机的 URL 和端口号与你安装 ELK 时配置的相似。我们把它保持为 localhost端口号为 9200。
第二件要修改的事情是输出下的 Elasticsearch 输出部分,取消对 `output.elasticsearch` 和 `hosts` 的注释。确保主机的 URL 和端口号与你安装 ELK 时配置的相似。我们把它保持为 `localhost`,端口号为 `9200`。
在同一部分中,取消底部的用户名和密码,输入安装后配置 ELK 时生成的弹性用户的用户名和密码。完成这些后,参考图 2检查设置。
在同一部分中,取消底部的用户名和密码的注释,输入安装后配置 ELK 时生成的 Elasticsearch 用户的用户名和密码。完成这些后,参考图 2检查设置。
![Figure 2: Filebeat config (b)][3]
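输出部分配置完成后大致如下(示意片段,用户名和密码请替换为你配置 ELK 时生成的实际值):

```
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<你的密码>"
```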
@ -51,7 +51,7 @@ cd /opt/zeek/bin
./zeekctl stop
```
现在我们需要在 local.zeek 中添加一小行,它存在于 *opt/zeek/share/zeek/site/* 目录中。
现在我们需要在 `local.zeek` 中添加一小行,它存在于 `/opt/zeek/share/zeek/site/` 目录中。
以 root 身份打开该文件,添加以下行:
@ -76,11 +76,11 @@ cd /opt/zeek/bin
sudo filebeat modules enable zeek
```
我们几乎要好了。在最后一步,配置 *zeek.yml* 文件要记录什么类型的数据。这可以通过修改 */etc/filebeat/modules.d/zeek.yml* 文件完成。
我们几乎要好了。在最后一步,配置 `zeek.yml` 文件要记录什么类型的数据。这可以通过修改 `/etc/filebeat/modules.d/zeek.yml` 文件完成。
在这个 *.yml 文件*中,我们必须提到这些指定的日志存放在哪个目录下。我们知道,这些日志存储在当前文件夹中,其中有几个文件,如 *dns.log*、*conn.log、dhcp.log* 等等。我们需要在每个部分提到每个路径。如果而且只有在你不需要该文件/程序的日志时,你可以通过把启用值改为 false 来舍弃不需要的文件。
在这个 .yml 文件中,我们必须提到这些指定的日志存放在哪个目录下。我们知道,这些日志存储在当前文件夹中,其中有几个文件,如 `dns.log`、`conn.log`、`dhcp.log` 等等。我们需要在每个部分提到每个路径。如果而且只有在你不需要该文件/程序的日志时,你可以通过把启用值改为 `false` 来舍弃不需要的文件。
例如,对于 *dns*,确保启用值为 “true”,并且路径被配置:
例如,对于 `dns`,确保启用值为 `true`,并且路径被配置:
```
var.paths: [ "/opt/zeek/logs/current/dns.log", "/opt/zeek/logs/*.dns.json" ]
@ -108,7 +108,7 @@ sudo service filebeat start
现在让我们进入发现选项卡,通过使用查询进行过滤来检查结果:
```
event.module: “zeek”
event.module: "zeek"
```
这个查询将过滤它在一定时间内收到的所有数据,只向我们显示名为 Zeek 的模块的数据(图 7)。
@ -127,7 +127,7 @@ via: https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/
作者:[Tridev Reddy][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,17 +3,16 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14769-1.html"
使用 Python 的 requests 和 Beautiful Soup 来分析网页
======
学习这个 Python 教程,轻松提取网页的有关信息。
![带问号的 Python 语言图标][1]
![](https://img.linux.net.cn/data/attachment/album/202206/28/132859owwf9az49k2oje2o.jpg)
图源Opensource.com
> 学习这个 Python 教程,轻松提取网页的有关信息。
浏览网页可能占了你一天中的大部分时间。然而,你总是需要手动浏览,这很讨厌,不是吗?你必须打开浏览器,然后访问一个网站,单击按钮,移动鼠标……相当费时费力。如果能够通过代码与互联网交互,岂不是更好吗?
@ -69,7 +68,7 @@ print(SOUP.p)
### 循环
使用 Beautiful Soup 的 `find_all` 函数,你可以创建一个 for 循环,从而遍历 `SOUP` 变量中包含的整个网页。除了 `<p>` 标签之外,你可能也会对其他标签感兴趣,因此最好将其构建为自定义函数,由 Python 中的 `def` 关键字(意思是 <ruby>“定义”<rt>define</rt></ruby>)指定。
使用 Beautiful Soup 的 `find_all` 函数,你可以创建一个 `for` 循环,从而遍历 `SOUP` 变量中包含的整个网页。除了 `<p>` 标签之外,你可能也会对其他标签感兴趣,因此最好将其构建为自定义函数,由 Python 中的 `def` 关键字(意思是 <ruby>“定义”<rt>define</rt></ruby>)指定。
```
def loopit():
@ -77,7 +76,7 @@ def loopit():
        print(TAG)
```
你可以随意更改临时变量 `TAG` 的名字,例如 `ITEM`、`i` 或任何你喜欢的。每次循环运行时,`TAG` 中都会包含`find_all` 函数的搜索结果。在此代码中,它搜索的是 `<p>` 标签。
你可以随意更改临时变量 `TAG` 的名字,例如 `ITEM`、`i` 或任何你喜欢的。每次循环运行时,`TAG` 中都会包含 `find_all` 函数的搜索结果。在此代码中,它搜索的是 `<p>` 标签。
函数不会自动执行,除非你显式地调用它。你可以在代码的末尾调用这个函数:
@ -92,7 +91,7 @@ if __name__ == '__main__':
### 只获取内容
你可以通过指定只需要 <ruby>字符串<rt>string</rt></ruby>(它是 <ruby>单词<rt>words</rt></ruby> 的编程术语)来排除打印标签。
你可以通过指定只需要 <ruby>字符串<rt>string</rt></ruby>(它是 <ruby>单词<rt>words</rt></ruby> 的编程术语)来排除打印标签。
```
def loopit():
@ -125,8 +124,8 @@ def loopit():
你可以使用 Beautiful Soup 和 Python 提取更多信息。以下是有关如何改进你的应用程序的一些想法:
* [接受输入][3],这样你就可以在启动应用程序时,指定要下载和分析的 URL。
* 统计页面上图片(<img> 标签)的数量。
* 统计另一个标签中的图片(<img> 标签)的数量(例如,仅出现在 `<main>` div 中的图片,或仅出现在 `</p>` 标签之后的图片)。
* 统计页面上图片(`<img>` 标签)的数量。
* 统计另一个标签中的图片(`<img>` 标签)的数量(例如,仅出现在 `<main>` div 中的图片,或仅出现在 `</p>` 标签之后的图片)。
--------------------------------------------------------------------------------
@ -135,7 +134,7 @@ via: https://opensource.com/article/22/6/analyze-web-pages-python-requests-beaut
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,35 +3,36 @@
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14768-1.html"
这个开源项目证明了 Chrome 扩展可以跟踪你
你安装的 Chrome 扩展的组合可以跟踪你
======
这会成为放弃基于 Chromium 的浏览器并开始使用 Firefox 的一个理由吗?也许吧,决定权在你。
> 这会成为放弃基于 Chromium 的浏览器并开始使用 Firefox 的一个理由吗?也许吧,决定权在你。
![Chrome 扩展追踪器][1]
即使你有了所有的隐私扩展和高级的保护功能,别人仍然有方法可以识别你或跟踪你。
即使你有了所有的隐私扩展和各种保护功能,别人仍然有方法可以识别你或跟踪你。
请注意,并非所有浏览器都是如此,本文中,我们只关注基于 Chromium 的浏览器,并将 Google Chrome 作为“主要嫌疑人”。
请注意,并非所有浏览器都是如此,本文中,我们主要关注基于 Chromium 的浏览器,并将谷歌 Chrome 作为“主要嫌疑人”。
以前,尽管别人已经能够你的检测 Chromium 浏览器上,检测到你已安装的扩展程序,但许多扩展程序都实施了某些保护措施来防止这种检测。
以前,在 Chromium 浏览器上,尽管别人已经能够检测到你已安装的扩展程序,但许多扩展程序都实施了某些保护措施来防止这种检测。
然而,一位名为 “**z0ccc**” 的安全研究人员发现了一种检测已安装 Chrome 浏览器扩展程序的新方法,该方法可进一步用于**通过“浏览器指纹识别”来跟踪你**。
**如果你还不知道的话**:<ruby>浏览器指纹识别<rt>Browser Fingerprinting</rt></ruby>是指收集有关你的设备/浏览器的各种信息,以创建唯一的指纹 ID(哈希),从而在互联网上识别你的一种跟踪方法。“各种信息”包括浏览器名称、版本、操作系统、已安装的扩展程序、屏幕分辨率和类似的技术数据。
如果你还不知道的话:<ruby>浏览器指纹识别<rt>Browser Fingerprinting</rt></ruby>是指收集有关你的设备/浏览器的各种信息,以创建唯一的指纹 ID(哈希),从而在互联网上识别你的一种跟踪方法。“各种信息”包括浏览器名称、版本、操作系统、已安装的扩展程序、屏幕分辨率和类似的技术数据。
这听起来像是一种无害的数据收集技术,但可以使用这种跟踪方法在线跟踪你。
### 检测 Google Chrome 扩展
### 检测谷歌 Chrome 扩展
研究人员分享了一个开源项目 “**Extension Fingerprints**”,你可以使用它来测试,你安装的 Chrome 扩展,是否正在被人检测
研究人员发布了一个开源项目 “**Extension Fingerprints**”,你可以使用它来测试你安装的 Chrome 扩展是否能被检测到
新技术涉及一种“时差”方法,该工具比较了扩展获取资源的时间。与浏览器上未安装的其他扩展相比,受保护的扩展需要更多时间来获取资源。因此,这有助于从 1000 多个扩展列表中识别出一些扩展。
新技术涉及一种“时差”方法,该工具比较了扩展程序获取资源的时间。与浏览器上未安装的其他扩展相比,受保护的扩展需要更多时间来获取资源。因此,这有助于从 1000 多个扩展列表中识别出一些扩展。
关键是:即使有了这些新的进步和技术来防止跟踪Chrome 网上应用店的扩展也可以被检测到。
关键是:即使有了各种新的进步和技术来防止跟踪Chrome 网上应用店的扩展也可以被检测到。
![][2]
@ -41,11 +42,11 @@
你可以在它的 [GitHub 页面][3] 上查看所有技术细节。如果你想自己测试它,请前往它的 [扩展指纹识别网站][4] 自行检查。
### 大救星 Firefox
### 拯救 Firefox
嗯,似乎是的,毕竟我出于各种原因,[不断回到 Firefox][5]。
嗯,似乎是的,毕竟我出于各种原因,[不断回到 Firefox][5]。
这个新发现的(跟踪)方法应该适用于所有基于 Chromium 的浏览器。我在 Brave 和 Google Chrome 上都测试了这个方法。研究人员还提到,该工具不适用于在 Microsoft Edge 中使用的,微软应用商店中的扩展。但是,相同的跟踪方法仍然有效。
这个新发现的(跟踪)方法应该适用于所有基于 Chromium 的浏览器。我在 Brave 和谷歌 Chrome 上都测试了这个方法。研究人员还提到,该工具不能在使用微软应用商店中的扩展的微软 Edge 上工作。但是,相同的跟踪方法仍然有效。
正如研究人员指出Mozilla Firefox 可以避免这种情况,因为每个浏览器实例的扩展 ID 都是唯一的。
@ -56,7 +57,7 @@ via: https://news.itsfoss.com/chrome-extension-tracking/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,101 @@
[#]: subject: "EndeavourOS Artemis is the First ISO with ARM Installation Support"
[#]: via: "https://news.itsfoss.com/endeavouros-artemis-release/"
[#]: author: "nikhil https://news.itsfoss.com/author/nikhil/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
EndeavourOS Artemis is the First ISO with ARM Installation Support
======
EndeavourOS includes the latest and greatest with the new release, along with beta support for ARM installations.
![endeavour os][1]
The popular Arch-based Linux distribution [EndeavourOS][2] released their latest ISO refresh called Artemis. Interestingly, the release is named after NASA's upcoming lunar mission.
Apart from the usual improvements, the latest upgrade includes the latest [Linux Kernel 5.18.5][3] and an updated Calamares installer.
More importantly, with this release, EndeavourOS is closer to a complete ARM installation support. Let us talk more about it!
### Closing in on ARM Installation
The devs at EndeavourOS have updated the Calamares installer to handle installations to ARM devices, but it is still in beta.
Technically, you will find an integrated install option in the welcome app of the main ISO. The developer also mentions that both the repos for ARM and the main ISO are more in sync from now on.
Of course, it is exciting to see the addition, nevertheless! The announcement post also mentioned:
> The new installer is a beta release and has limited device support for now, but we are going to add more devices in the future. The team currently is brainstorming to add the first step, the base install, in the Calamares installer also, so it will only take one step to install ARM
Note that only the Odroid N2/N2+ and the Raspberry Pi are supported right now. So, you can test it out if you are interested in experimenting.
If you are curious about the process of installation for ARM devices, here's a quick summary:
There are two stages to the new installation method:
![][4]
**Stage 1:**
* Boot into a live environment using the EndeavourOS Artemis ISO on your x86_64 computer.
* Connect the SD Card/SSD for your ARM computer as its primary storage.
* Launch Calamares.
* Click on the ARM install button and select the SD Card/SSD you connected and follow with the installation.
**Stage 2**
* Once the previous stage is over, unplug the SD Card/SSD, connect it to your ARM computer, and boot into it.
* You'll be greeted with a modified Welcome app that lets you set up the device's keyboard layout, timezone, passwords, etc.
* You'll also be able to download other DEs/WMs from this screen.
* After this setup, Calamares will delete itself and you can use the device as usual.
For more details, you can check out their blog post on [ARM installation support][5].
### Other Improvements
Being an Arch-based distro, you can obviously expect the latest and greatest package updates.
You will find Firefox 101.0.1 out of the box, but you should soon be able to update it to Firefox 102.
This release also comes with the latest versions of Mesa and Calamares.
Some of the other changes include:
* Wireplumber has replaced pipewire-media-session as the session and policy manager for Pipewire.
* The package budgie-control-center has been added to the EndeavourOS repo for a smoother and native Budgie experience.
* Offline Xfce install received improvements.
* Xfce4 and i3 install will not autostart firewall-applet by default anymore.
Also, EndeavourOS packages can now be downgraded with eos-downgrade.
You can check out the [official announcement][6] for more details.
### Download EndeavourOS Artemis
The latest release ISO is available on the official website. Head over to the [download page][7] and get the latest image from one of the available mirrors.
[EndeavourOS Artemis][8]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/endeavouros-artemis-release/
作者:[nikhil][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/nikhil/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/endeavour-os-artemis-iso.jpg
[2]: https://endeavouros.com
[3]: https://news.itsfoss.com/linux-kernel-5-18-release/
[4]: https://news.itsfoss.com/wp-content/uploads/2022/06/endeavour-os-arm.jpg
[5]: https://arm.endeavouros.com/2022/06/24/artemis-with-new-endeavouros-arm-install/
[6]: https://endeavouros.com/news/artemis-is-launched/
[7]: https://endeavouros.com/latest-release/
[8]: https://endeavouros.com/

View File

@ -0,0 +1,75 @@
[#]: subject: "Firefox 102 Release Lets You Disable Download Panel and Improves Picture-in-Picture Mode"
[#]: via: "https://news.itsfoss.com/firefox-102-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Firefox 102 Release Lets You Disable Download Panel and Improves Picture-in-Picture Mode
======
Mozilla Firefox 102 release is here with some solid changes and useful feature additions!
![][1]
A new Firefox version upgrade is here. While it may not be a major feature update like [Firefox 100][2], it includes some useful enhancements for a better browsing experience.
Here's what's new:
### Firefox 102: New Additions and Improvements
With this release, you can finally disable the automatic opening of the download panel every time a new download starts. So, you won't have too many windows crowding your screen.
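If you would rather flip the switch manually, the behavior appears to be tied to an about:config preference; the name below is an assumption based on how the setting has been exposed in recent builds, so verify it in yours:

```
# about:config: set to false to keep the download panel from popping up on every download (assumed pref name)
browser.download.alwaysOpenPanel = false
```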
It also seems that they have added some refinements to the Picture-in-Picture feature with subtitles. You should have better support for it with more streaming platforms, including Disney+ Hotstar, HBO Max, SonyLIV, and a few others.
![][3]
Firefox 102 also improves security by moving audio decoding into a separate process with enhanced sandboxing. The process remains isolated, thus giving you more security.
Additionally, there are some screen reader improvements on Windows.
For **developers**, there are a couple of important changes that include:
* Introducing support for [Transform streams][4] which also includes new interfaces.
* Support for [readable byte streams][5].
* Removal of the Firefox-only property Window.sidebar.
* You can now filter style sheets in the Style Editor tab of the developer tools.
Firefox also adds a new enterprise policy with a configuration setting that ensures downloads meant to be opened are initially stored in a temporary folder.
If the downloaded file is saved, it will be stored in the download folder. Mozilla explains more about it:
> There is now an enterprise policy (`StartDownloadsInTempDirectory`) and an about:config pref (`browser.download.start_downloads_in_tmp_dir`) that will once again cause Firefox to initially put downloads in (a subfolder of) the OS temp folder, instead of the download folder configured in Firefox. Files opened from the “what should Firefox do with this file” dialog, or set to open in helper applications automatically, will stay in this folder. Files saved (not opened as previously mentioned) will still end up in the Firefox download folder.
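As a sketch, enabling that enterprise policy on Linux might look like this in a `policies.json` file (the standard Firefox policy format; the exact install path of the file varies by distribution):

```
{
  "policies": {
    "StartDownloadsInTempDirectory": true
  }
}
```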
To learn more about the release, refer to the [full release notes][6].
### Download Firefox 102
You can download the latest Firefox 102 release from its official website or wait for the package update on your Linux distribution.
In either case, you can always use the [Snap package][7] to get the latest update now.
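For example, with snapd set up, installing or refreshing Firefox takes one command each:

```
$ sudo snap install firefox
$ sudo snap refresh firefox   # updates an existing install to the latest release
```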
[Firefox 102 Download][8]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/firefox-102-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/firefox-102.jpg
[2]: https://news.itsfoss.com/firefox-100-release/
[3]: https://news.itsfoss.com/wp-content/uploads/2022/06/firefox-102-pip.jpg
[4]: https://developer.mozilla.org/en-US/docs/Web/API/TransformStream
[5]: https://developer.mozilla.org/en-US/docs/Web/API/Streams_API#bytestream-related_interfaces
[6]: https://www.mozilla.org/en-US/firefox/102.0/releasenotes/
[7]: https://snapcraft.io/firefox
[8]: https://www.mozilla.org/en-US/firefox/

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,217 +0,0 @@
[#]: subject: "How static linking works on Linux"
[#]: via: "https://opensource.com/article/22/6/static-linking-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How static linking works on Linux
======
Learn how to combine multiple C object files into a single executable with static libraries.
![Woman using laptop concentrating][1]
Image by Mapbox Uncharted ERG, [CC-BY 3.0 US][2]
Code for applications written using C usually has multiple source files, but ultimately you will need to compile them into a single executable.
You can do this in two ways: by creating a static library or a dynamic library (also called a shared library). These two types of libraries vary in terms of how they are created and linked. Your choice of which to use depends on your use case.
In a [previous article][3], I demonstrated how to create a dynamically linked executable, which is the more commonly used method. In this article, I explain how to create a statically linked executable.
### Using a linker with static libraries
A linker is a command that combines several pieces of a program together and reorganizes the memory allocation for them.
The functions of a linker include:
* Integrating all the pieces of a program
* Figuring out a new memory organization so that all the pieces fit together
* Revising (relocating) addresses so that the program can run under the new memory organization
* Resolving symbolic references
As a result of all these linker functionalities, a runnable program called an executable is created.
Static libraries are created by copying all necessary library modules used in a program into the final executable image. The linker links static libraries as a last step in the compilation process. An executable is created by resolving external references, combining the library routines with program code.
### Create the object files
Here's an example of a static library, along with the linking process. First, create the header file `mymath.h` with these function signatures:
```
int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);
```
Create `add.c`, `sub.c` , `mult.c` and `divi.c` with these function definitions:
```
// add.c
int add(int a, int b){
return (a+b);
}
//sub.c
int sub(int a, int b){
return (a-b);
}
//mult.c
int mult(int a, int b){
return (a*b);
}
//divi.c
int divi(int a, int b){
return (a/b);
}
```
Now generate object files `add.o`, `sub.o`, `mult.o`, and `divi.o` using GCC:
```
$ gcc -c add.c sub.c mult.c divi.c
```
The `-c` option skips the linking step and creates only object files.
Create a static library called `libmymath.a`, then remove the object files, as they're no longer required. (Note that using a `trash` [command][4] is safer than `rm`.)
```
$ ar rs libmymath.a add.o sub.o mult.o divi.o
$ trash *.o
$ ls
add.c  divi.c  libmymath.a  mult.c  mymath.h  sub.c
```
You have now created a simple example math library called `libmymath`, which you can use in C code. There are, of course, very complex C libraries out there, and this is the process their developers use to generate the final product that you and I install for use in C code.
Next, use your math library in some custom code and then link it.
### Create a statically linked application
Suppose you've written a command for mathematics. Create a file called `mathDemo.c` and paste this code into it:
```
#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));
  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}
```
Notice that the first line is an `include` statement referencing, by name, your own `libmymath` library.
Create an object file called `mathDemo.o` for `mathDemo.c`:
```
$ gcc -I . -c mathDemo.c
```
The `-I` option tells GCC to search for header files listed after it. In this case, you're specifying the current directory, represented by a single dot (`.`).
Link `mathDemo.o` with `libmymath.a` to create the final executable. There are two ways to express this to GCC.
You can point to the files:
```
$ gcc -static -o mathDemo mathDemo.o libmymath.a
```
Alternately, you can specify the library path along with the library name:
```
$ gcc -static -o mathDemo -L . mathDemo.o -lmymath
```
In the latter example, the `-lmymath` option tells the linker to link the object files present in `libmymath.a` with the object file `mathDemo.o` to create the final executable. The `-L` option directs the linker to look for libraries in the following argument (similar to what you would do with `-I`).
### Analyzing the result
Confirm that it's statically linked using the `file` command:
```
$ file mathDemo
mathDemo: ELF 64-bit LSB executable, x86-64...
statically linked, with debug_info, not stripped
```
Using the `ldd` command, you can see that the executable is not dynamically linked:
```
$ ldd ./mathDemo
        not a dynamic executable
```
You can also check the size of the `mathDemo` executable:
```
$ du -h ./mathDemo
932K    ./mathDemo
```
In the example from my [previous article][5], the dynamic executable took up just 24K.
Run the command to see it work:
```
$ ./mathDemo
Enter two numbers
10
5
10 + 5 = 15
10 - 5 = 5
10 * 5 = 50
10 / 5 = 2
```
Looks good!
### When to use static linking
Dynamically linked executables are generally preferred over statically linked executables because dynamic linking keeps an application's components modular. Should a library receive a critical security update, it can be easily patched because it exists outside of the applications that use it.
When you use static linking, a library's code gets "hidden" within the executable you create, meaning the only way to patch it is to re-compile and re-release a new executable every time a library gets an update—and you have better things to do with your time, trust me.
However, static linking is a reasonable option if the code of a library exists either in the same code base as the executable using it or in specialized embedded devices that are expected to receive no updates.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/static-linking-linux
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jayashree-huttanagoudar
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png
[2]: https://creativecommons.org/licenses/by/3.0/us/
[3]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux
[4]: https://www.redhat.com/sysadmin/recover-file-deletion-linux
[5]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux

View File

@ -0,0 +1,97 @@
[#]: subject: "Kuro: An Unofficial Microsoft To-Do Desktop Client for Linux"
[#]: via: "https://itsfoss.com/kuro-to-do-app/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kuro: An Unofficial Microsoft To-Do Desktop Client for Linux
======
Microsoft says that they love Linux and open-source, but we still do not have native support for a lot of its products on Linux.
While they could be trying to add more support, like the ability to [install Microsoft Edge on Linux][1], it's not excellent for a multi-trillion dollar company.
Similarly, Microsoft's To-Do service is also a popular one, replacing Wunderlist, which was shut down in 2020.
In case you're curious, we have a lot of [to-do list applications available for Linux][2]. So, if you want to switch away from Microsoft To-Do, you've got options.
Microsoft To-Do is a cloud-based task management application that lets you organize your tasks from your phone, desktop, and the web. It is available to download for Windows, Mac, and Android.
So, if you would rather not use the web browser but a separate application, what can you do on Linux?
Kuro to the rescue.
### Kuro: Unofficial Open-Source Microsoft To-Do App
![kuro todo][3]
Kuro is an unofficial open-source application that provides you a desktop experience for Microsoft To-Do on Linux with some extra features.
It is a fork of Ao, which was an open-source project that stepped up to become a solution for it. Unfortunately, it is no longer being actively maintained. So, I came across a new fork for it that seems to get the job done.
![kuro todo options][4]
Kuro provides some extra features that let you toggle themes, enable global shortcuts, and more from within the application.
Note that this application is fairly new, but a stable release is available to try. Furthermore, the developer plans to add more themes and features in the near future.
### Features of Kuro
![kuro todo 1][5]
If you tend to use Microsoft services (like Outlook), its To-Do app should be a perfect option to organize your tasks. You can even flag emails to create tasks out of them.
With the Kuro desktop client, you get a few quick features to configure, including:
* Ability to launch the program on start.
* Get a system tray icon to quickly create a task, search, or check the available list for the day.
* Enable Global shortcut keys.
* Toggle available themes (Sepia, Dracula, Black, Dark).
* Toggle Auto Night mode, if you do not want to constantly change themes.
* Hide the tray icon, if you do not need it.
* Customize the font size as required.
![kuro todo settings][6]
In addition to some features, you can also access certain settings to enable/disable email notifications, confirm before deleting, and more such controls for the to-do app experience.
Overall, the experience wasn't terrible, but I noticed some weird graphical glitches in the user interface for a few minutes. I am not sure if it is a known issue.
### Install Kuro in Linux
You can find the .deb package for Ubuntu-based distributions from its [GitHub releases section][7].
Alternatively, you can get it from the [Snap store][8] for any Linux distribution of your choice. The package is also available in the [AUR][9] for Arch Linux distributions.
The developer also mentions that a Flatpak package is on its way. So, you can keep an eye on its [GitHub page][10] for more information on that.
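For reference, the install commands should look roughly like this (the Snap name is inferred from the store link above, and the AUR package name is an assumption, so double-check both):

```
# Any distribution with snapd
$ sudo snap install kuro-desktop

# Arch Linux, via an AUR helper such as yay (package name assumed)
$ yay -S kuro
```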
[Kuro][11]
Have you tried this already? Do you know of a better Microsoft to-do client for Linux? Let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kuro-to-do-app/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/microsoft-edge-linux/
[2]: https://itsfoss.com/to-do-list-apps-linux/
[3]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-800x507.png
[4]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-options-800x444.png
[5]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-1.png
[6]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-settings.png
[7]: https://github.com/davidsmorais/kuro/releases
[8]: https://snapcraft.io/kuro-desktop
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://github.com/davidsmorais/kuro
[11]: https://github.com/davidsmorais/kuro

View File

@ -0,0 +1,100 @@
[#]: subject: "Linux Lite 6 Review: Well Designed “bridging-distro” for Windows Users"
[#]: via: "https://www.debugpoint.com/linux-lite-6-review/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Lite 6 Review: Well Designed “bridging-distro” for Windows Users
======
**We took [Linux Lite 6 “Fluorite”][1] for a test drive in a physical and virtual machine for a review.**
It has been more than a year since we reviewed the Linux Lite distribution. The [last review][2] was for its 5.0 series. And it's time for a refreshed review of this excellent distribution with its latest major release, Linux Lite 6.0.
Linux Lite 6.0, AKA Linux Lite OS, is based on Ubuntu and follows its LTS (Long Term Support) lifecycle. That means you get a similar release schedule and security updates for five years following Ubuntu Linux. The lightweight desktop environment Xfce is the primary and only desktop it offers. Linux Lite OS primarily focuses on Windows users who want to kick start their Linux journey. Hence you may think of it as a “bridging” Linux operating system.
![Linux Lite 6 Xfce Desktop][3]
### Linux Lite 6: Review
Linux Lite 6 comes two years after its last major release. Due to its dependency on Ubuntu LTS, you should expect some significant changes in this version. First, let's wrap up the new features in this release. And then, we can talk about the installation, performance and review pointers.
#### Core updates and changes
At its core, it is based on Linux Kernel 5.15, the default LTS kernel for Ubuntu 22.04 LTS Jammy Jellyfish. In addition, this release introduces a set of desktop applications from Assistive Technologies to help hearing and sight-impaired people. The apps are “Onscreen Keyboard Onboard”, “Screen reader Orca”, and “screen magnifier”. With this change, Linux Lite 6 becomes more similar to Windows for its target users.
In addition to the above change, a controversial decision is the addition of Google Chrome as the default browser, replacing the Snap version of Firefox. Undoubtedly, Google Chrome is the market leader in the browser space and is well built. But many have issues with it because it's from Google.
Besides, the team also weighed the Firefox deb version and Microsoft Edge as alternatives (considering Linux Lite 6 targets Windows users).
Another beneficial core change for users is the team's decision to bring the latest LibreOffice stable edition in each point release over the next two years. Ubuntu might delay specific LibreOffice versions, but with Linux Lite point releases you would definitely get the latest version.
Moreover, if you are a fan of look and feel, the new Materia window theme is going to give you a pleasant and sleek desktop.
Overall, it's a good set of changes and choices (such as the browser) in Linux Lite 6 to stay ahead with the times. Now, let's discuss some review findings during our test run.
![Linux Lite has a nice update tool][4]
#### Download, Installation
The Linux Lite 6 ISO size is 2.1 GB, and I believe it's reasonably well-composed, considering the vanilla Ubuntu 22.04 desktop ISO size is a whopping 3 GB+.
In all fairness, unlike other Linux distributions, Linux Lite doesn't ask you which desktop you want, because you have only one choice: the Xfce desktop.
During testing, we could not get it installed on a physical system. The Ubiquity installer became unresponsive on the “read partition” module. After a few hours of research, we found that Ubiquity doesn't play well with a non-GPT table with more than three logical partitions.
However, it installs fine in a virtual-machine environment.
![Ubiquity gave some errors in test machines][5]
The Lite Welcome app gives you a single point to perform various maintenance activities in the first boot. Critical tasks such as updating the system, patching and installing/removing software are easy with its native Lite Software and Updater.
Moreover, if you want to install the Firefox web browser, Lite Software gives you a fair warning that it is a Snap. Although it doesn't matter much from a new Windows user's standpoint whether it's a Snap or anything else.
![Firefox is available as Snap from Lite Software][6]
### Performance
Linux Lite 6 takes around 590 MB of RAM in an idle state with an uptime of 3 hours. The CPU is at about 2% to 3% in an inactive state. Furthermore, if you run more applications, the resource usage would increase eventually. However, I believe it's good performance considering the target hardware of this distro. Besides, old Windows 10 or Windows 7 devices would work fine with this distro.
And it uses 9 GB of disk space for the default install.
![Linux Lite 6 Performance][7]
### Closing Notes
Overall, it is an excellent release and perhaps one of the early mainstream distros based on Ubuntu 22.04. Tiny additions in this release such as new accessibility tools, a new system monitoring tool and other changes are definitely good. However, some users may not like Google Chrome considering the privacy debates.
Moreover, the lack of a major upgrade path may be a roadblock and troublesome for new users. I hope the Linux Lite team brings the upgrade feature in the future. Other than that, it's well built and a good release. You can easily try it out and choose it as your daily driver.
You can download Linux Lite on the [official website][8].
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-lite-6-review/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://debugpointnews.com/linux-lite-6-0/
[2]: https://www.debugpoint.com/linux-lite-5-2-review/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-6-Xfce-Desktop.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-has-a-nice-update-tool.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/06/Ubiquity-gave-some-errors-in-test-machines.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/06/Firefox-is-available-as-Snap-from-Lite-Software.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-6-Performance.jpg
[8]: https://www.linuxliteos.com/download.php
[9]: https://t.me/debugpoint
[10]: https://twitter.com/DebugPoint
[11]: https://www.youtube.com/c/debugpoint?sub_confirmation=1
[12]: https://facebook.com/DebugPoint
[13]: https://t.me/debugpoint

View File

@ -0,0 +1,75 @@
[#]: subject: "Open Programmable Infrastructure: 1+1=3"
[#]: via: "https://www.linux.com/news/open-programmable-infrastructure-113/"
[#]: author: "Dan Whiting https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Programmable Infrastructure: 1+1=3
======
At last week's Open Source Summit North America, [Robin Ginn][1], Executive Director of the [OpenJS Foundation][2], relayed a principle her mentor taught: “1+1=3”. No, this isn't new math; it demonstrates the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all of the time, teamwork makes the dream work.
This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project.
Stepping back a bit, as “the new guy” around here, I am still constantly running across projects where I want to dig in more and understand what it does, how it does it, and why it is important. I had that very thought last week as we launched another new project, the [Open Programmable Infrastructure Project][3]. As I was [reading up on it][4], they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?”
### What are DPUs/IPUs? 
First, and this is important, they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.
In most personal computers, you have a separate graphics processing unit (GPU) that helps the central processing unit (CPU) handle the tasks related to processing and displaying the graphics. The GPU offloads that work from the CPU, allowing it to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately.
Servers powering the cloud also have CPUs, but they have other tasks that can consume tremendous computing power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best.
In other words, 1+1=3.
### DPUs/IPUs are highly customizable
While separate processing units have been around for some time, like your PC's GPU, their functionality was primarily dedicated to a particular task. Instead, DPUs/IPUs combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out, and each organization uses software to configure the units according to their specific needs. And they can do this on the fly.
Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasn't too long ago that if you wanted a server, you spent thousands of dollars on one, built all kinds of infrastructure around it, and hoped it was what you needed at the time. Now, pretty much anyone can quickly set up a virtual server in a matter of minutes for virtually no initial cost.
DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability.
### What is Open Programmable Infrastructure (OPI)?
OPI is focused on utilizing open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor's hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”
In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate, separately, on top of. They are living out 1+1=3.
I, for one, can't wait to see the innovation.
A special thanks to [Yan][5] [Fisher][6] of Red Hat for helping me understand open programmable infrastructure concepts. He and his colleague, Kris Murphy, have a more [technical blog post on Red Hat's blog][7]. Check it out.
For more information on the OPI Project, visit their [website][8] and start contributing at [https://github.com/opiproject/opi][9].
The post [Open Programmable Infrastructure: 1+1=3][10] appeared first on [Linux Foundation][11].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/open-programmable-infrastructure-113/
作者:[Dan Whiting][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/
[b]: https://github.com/lkxed
[1]: https://github.com/opiproject/opi
[2]: https://openjsf.org/
[3]: https://opiproject.org/
[4]: https://www.linuxfoundation.org/press-release/linux-foundation-announces-open-programmable-infrastructure-project/
[5]: https://www.redhat.com/en/authors/yan-fisher
[6]: https://www.redhat.com/en/authors/yan-fisher
[7]: https://www.redhat.com/en/blog/why-red-hat-joining-open-programmable-infrastructure-project
[8]: https://opiproject.org/
[9]: https://github.com/opiproject/opi
[10]: https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/
[11]: https://www.linuxfoundation.org/

View File

@ -0,0 +1,117 @@
[#]: subject: "HandBrake: Free Tool for Converting Videos from Any Format"
[#]: via: "https://www.debugpoint.com/handbrake/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
HandBrake: Free Tool for Converting Videos from Any Format
======
Learn about HandBrake, an excellent utility for converting videos from any format to your desired destination type.
This article contains features, download instructions and a usage guide.
### HandBrake
In this age of social media, we all play around with videos, reels and, of course, the formats that come with them. So, whether you are on a Linux platform or even on Windows, you may use any number of programs to convert various videos for several platforms. But if you need a simple but feature-rich video converter that takes care of all your video formats from multiple sources, try HandBrake.
#### Features
HandBrake has a huge set of options that make it a unique tool. Firstly, the workflow is super easy. In fact, it's just three steps:
* Select a video
* Choose a target format
* Convert
As you can see, if you are a novice user, it is super easy to work with this tool because the attributes of the target format (e.g. bit rate, dimensions) are based on the default preset.
Secondly, if you want advanced editing, such as adding subtitles from subtitle files while converting, that is also possible using this tool.
In addition, you can also change the dimensions, flip the video, change resolutions, modify the aspect ratio, and crop. Moreover, a set of basic filter configurations such as Denoise and Sharpen can also be done.
Moreover, adding Chapters, tags and audio tracks to your video files is always easy.
Perhaps the most vital feature of HandBrake is the availability of presets, which cater to the modern needs of social media and streams. For example, the presets are aligned with streaming platforms and streaming devices such as:
* Discord
* GMail
* Vimeo
* Amazon Fire Stick
* Apple Devices
* Chromecast
* Playstation
* Roku
* Xbox
A pretty impressive list, isn't it? Not only that: if you are a professional user, it helps you define and create a queue for your conversions. The Queue feature allows you to batch convert multiple video files in your workflow.
Finally, you can convert to MPEG-4 (mp4), Matroska (mkv) and WebM formats.
![HandBrake with various features][1]
### Download and Installation
Downloading and installing HandBrake is easy on any platform (Linux, Mac and Windows). The developers provide direct executables, which are free to download.
Since the primary target audience of this portal is Linux users, we will talk about the installation of HandBrake in Linux.
For Ubuntu, Linux Mint and all other distributions, the preferable method is Flatpak. You can [set up Flatpak][2] and then click the below button to install HandBrake:
[Install HandBrake via Flathub][3]
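If you prefer the terminal, the equivalent Flatpak command should be the following (the application ID is taken from the Flathub link above):

```
flatpak install flathub fr.handbrake.ghb
```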
For the Windows and macOS installers, visit this page.
One interesting feature is that you can use this application via the command line! That means you can further customize your workflow using the command line utility, which you can find [here][4].
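As a small taste of the CLI workflow, a typical invocation looks like this (file names are placeholders; `--preset` accepts the same preset names as the GUI):

```
# Convert an MP4 to MKV using one of the built-in presets
HandBrakeCLI -i input.mp4 -o output.mkv --preset "Fast 1080p30"
```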
### How to Use HandBrake to convert Videos? [Example]
Now that you have installed it, let's see how you can convert a sample video in just three steps.
1. Open HandBrake and click on the “Open Source” button at the top toolbar. Select your video file.
2. Now, select the target file type from the Format dropdown. Make sure to check the destination folder (the default is Videos).
3. Finally, click on the Start button at the top toolbar to convert a video using HandBrake.
![HandBrake Video Conversion in three simple steps][5]
You can find a nice display of the conversion progress at the bottom of the window.
![Encoding status][6]
The above steps are the most basic ones. If you want further control over the video, you can change the options and also choose from a vast list of presets I explained earlier.
### FAQ
**Is HandBrake free to use?** Yes, it is a free and open-source application, and you can download it for free.
**Can I install HandBrake on macOS and Windows?** Yes, you can easily install HandBrake on macOS, Windows 10, and Windows 11.
**Where should I download HandBrake from?** You can download HandBrake only from the official website https://handbrake.fr/ and no other place.
### Closing Notes
HandBrake is one of the professional-grade free and open-source video encoders available today. It is a time-tested application used by millions of users daily. I hope this guide helps you to learn about this fantastic tool and get you started with your video projects.
**The demo video is used from [Pexels cottonbro][7]**
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/handbrake/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/06/HandBrake-with-various-features.jpg
[2]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[3]: https://dl.flathub.org/repo/appstream/fr.handbrake.ghb.flatpakref
[4]: https://handbrake.fr/downloads2.php
[5]: https://www.debugpoint.com/wp-content/uploads/2022/06/HandBrake-Video-Conversion-in-three-simple-steps.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/06/Encoding-status.jpg
[7]: https://www.pexels.com/video/hands-hand-table-colorful-3997786/

View File

@ -0,0 +1,288 @@
[#]: subject: "How to Install and Configure HAProxy on Ubuntu 22.04"
[#]: via: "https://www.linuxtechi.com/install-configure-haproxy-on-ubuntu/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install and Configure HAProxy on Ubuntu 22.04
======
In this post, we will demonstrate how to install HAProxy on Ubuntu 22.04 (Jammy Jellyfish) step by step. We will later configure it to act as a load balancer by distributing incoming requests between two web servers.
##### What is HAProxy?
HAProxy, short for High Availability Proxy, is a free and open-source HTTP load balancer and reverse-proxy solution that is widely used to provide high availability to web applications and guarantee maximum possible uptime.
It is a high-performance application that injects performance improvements to your web apps by distributing traffic across multiple endpoints. This way, it ensures that no webserver is overloaded with incoming HTTP requests since the workload is equitably distributed across several nodes.
While the core product is free, the Enterprise Edition provides added features such as WAF (Web Application Firewall), application acceleration, advanced DDoS protection, advanced health checks and so much more.
##### Lab setup
To demonstrate HAProxy in action, you need to have at least three Linux systems. One will act as the HAProxy load balancer, while the rest will act as web servers.
![Haproxy-Lab-Setup-Ubuntu][1]
### Step 1) Install HAProxy Load Balancer
The first step is to install HAProxy on Ubuntu. Ubuntu repositories provide HAProxy by default, but it is not the latest one.
To view the available haproxy package version from the default repositories, run:
```
$ sudo apt update
$ sudo apt show haproxy
```
![default-haproxy-version-ubuntu-22-04][2]
But the latest long-term support release of HAProxy is 2.6. So, to install HAProxy 2.6, first enable the PPA repository by running the following command:
```
$ sudo add-apt-repository ppa:vbernat/haproxy-2.6 -y
```
Now install haproxy 2.6 by executing the following commands:
```
$ sudo apt update
$ sudo apt install -y haproxy=2.6.*
```
Once installed, confirm the version of HAProxy installed as shown.
```
$ haproxy -v
```
![Haproxy-version-ubuntu-22-04][3]
Upon installation, the HAProxy service starts by default and listens on TCP port 80. To verify HAProxy is running, run the command:
```
$ sudo systemctl status haproxy
```
![Haproxy-Status-Ubuntu-Linux][4]
It's recommended to enable the service to auto-start on every system reboot, as shown:
```
$ sudo systemctl enable haproxy
```
### Step 2) Configure HAProxy
The next step is to configure HAProxy to distribute traffic evenly between two web servers. The configuration file for haproxy is /etc/haproxy/haproxy.cfg.
Before making any changes to the file, first, make a backup copy.
```
$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bk
```
Then open the file using your preferred text editor. Here, we are using Nano.
```
$ sudo nano /etc/haproxy/haproxy.cfg
```
The HAProxy configuration file is made up of the following sections:
* global: This is the first section that you see at the very top. It contains system-wide settings that handle performance tuning and security.
* defaults: As the name suggests, this section contains settings that should work well without additional customization. These settings include timeout and error reporting configurations.
* frontend and backend: These sections define the frontend and backend settings. For the frontend, we will define the HAProxy server as the front end, which will distribute requests to the backend servers, i.e. the web servers. We will also set HAProxy to use the round-robin load balancing criteria for distributing traffic.
* listen: This is an optional section that lets you enable monitoring of HAProxy statistics.
Now define the frontend and backend settings:
```
frontend linuxtechi
   bind 10.128.0.25:80
   stats uri /haproxy?stats
   default_backend web-servers
backend web-servers
    balance roundrobin
    server web1 10.128.0.27:80
    server web2 10.128.0.26:80
```
Here, we have configured both the HAProxy server and the web server nodes to listen on port 80. Be sure to replace the IP addresses for the HAProxy server and the web servers with those of your setup.
In order to enable viewing the HAProxy statistics from a browser, add the following listen section.
```
listen stats
   bind *:8080
   stats enable
   stats uri /
   stats refresh 5s
   stats realm Haproxy\ Statistics
   stats auth linuxtechi:[email protected]     #Login User and Password for the monitoring
```
The stats auth directive specifies the username and password for the login user for viewing statistics on the browser.
![HAproxy-Config-File-Ubuntu][5]
Now save all the changes and exit the configuration file. To reload the new settings, restart the haproxy service.
```
$ sudo systemctl restart haproxy
```
Next edit the /etc/hosts file.
Define the hostnames and IP addresses of the haproxy main server and the webservers.
```
10.128.0.25 haproxy
10.128.0.27 web1
10.128.0.26 web2
```
Save the changes and exit.
### Step 3) Configure Web Servers
In this step, we will configure the remaining Linux systems which are the web servers.
So, log in to each of the web servers and install the Apache web server package.
```
$ sudo apt update
$ sudo apt install -y apache2
```
Next, verify that Apache is running on each of the servers.
```
$ sudo systemctl status apache2
```
Then enable the Apache web server to start on boot on both servers.
```
$ sudo systemctl enable apache2
```
Next, modify the index.html files for each web server.
For Web Server 1
Switch to the root user
```
$ sudo su
```
Then run the following command.
```
# echo "<H1>Hello! This is webserver1: 10.128.0.27 </H1>" > /var/www/html/index.html
```
For Web Server 2
Similarly, switch to the root user
```
$ sudo su
```
And create the index.html file as shown.
```
# echo "<H1>Hello! This is webserver2: 10.128.0.26 </H1>" > /var/www/html/index.html
```
Next, configure the /etc/hosts file.
```
$ sudo nano /etc/hosts
```
Add the HAProxy entry to each node.
```
10.128.0.25 haproxy
```
Save the changes and exit the configuration file.
Be sure you can ping the HAProxy server from each of the web server nodes.
![Haproxy-Connectivity-from-web1][6]
![Haproxy-Connectivity-from-web2][7]
Note: Make sure port 80 is allowed in the OS firewall, in case it is enabled on the web servers.
### Step 4) Test HAProxy Load Balancing
Up until this point, we have successfully configured our HAProxy and both of the back-end web servers. To test if your configuration is working as expected, browse to the IP address of the HAProxy server:
http://10.128.0.25
When you browse for the first time, you should see the web page for the first web server:
![Access-WebPage1-Over-Haproxy][8]
Upon refreshing, you should see the web page for the second web server:
![Access-WebPage2-Over-Haproxy][9]
This shows that the HAProxy server is performing its load balancing job spectacularly by distributing incoming web traffic across the two web servers using the round-robin algorithm.
Moreover, you can use the following while loop with the curl command:
```
$ while true; do curl 10.128.0.25; sleep 1; done
```
![While-Loop-Access-Webpage-over-Haproxy][10]
To view monitoring statistics, browse the following URL:
http://10.128.0.25:8080/stats
You will be required to authenticate, so provide your details as specified in Step 2.
![HAproxy-GUI-Login-Page][11]
You will see the following page with statistics on the performance of the HAProxy server.
![Haproxy-Stats-Ubuntu-Linux][12]
##### Conclusion
There you have it! We have successfully installed HAProxy on Ubuntu 22.04 and configured it to serve requests across two web servers using the round-robin load balancing criteria.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-haproxy-on-ubuntu/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Lab-Setup-Ubuntu.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/06/default-haproxy-version-ubuntu-22-04.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-version-ubuntu-22-04.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Status-Ubuntu-Linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/06/HAproxy-Config-File-Ubuntu.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Connectivity-from-web1.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Connectivity-from-web2.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Access-WebPage1-Over-Haproxy.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Access-WebPage2-Over-Haproxy.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/06/While-Loop-Access-Webpage-over-Haproxy.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/06/HAproxy-GUI-Login-Page.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Stats-Ubuntu-Linux.png

View File

@ -0,0 +1,150 @@
[#]: subject: "Linux su vs sudo: what's the difference?"
[#]: via: "https://opensource.com/article/22/6/linux-su-vs-sudo-sysadmin"
[#]: author: "David Both https://opensource.com/users/dboth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux su vs sudo: what's the difference?
======
A comparison of Linux commands for escalating privileges for non-root users.
![bash logo on green background][1]
Image by: Opensource.com
Both the `su` and the `sudo` commands allow users to perform system administration tasks that are not permitted for non-privileged users—that is, everyone but the root user. Some people prefer the `sudo` command: For example, [Seth Kenlon][2] recently published "[5 reasons to use sudo on Linux][3]", in which he extols its many virtues.
I, on the other hand, am partial to the `su` command and prefer it to `sudo` for most of the system administration work I do. In this article, I compare the two commands and explain why I prefer `su` over `sudo` but still use both.
### Historical perspective of sysadmins
The `su` and `sudo` commands were designed for a different world. Early Unix computers required full-time system administrators, and they used the root account as their only administrative account. In this ancient world, the person entrusted with the root password would log in as root on a teletype machine or CRT terminal such as the DEC VT100, then perform the administrative tasks necessary to manage the Unix computer.
The root user would also have a non-root account for non-root activities such as writing documents and managing their personal email. There were usually many non-root user accounts on those computers, and none of those users needed total root access. A user might need to run one or two commands as root, but very infrequently. Many sysadmins log in as root to work as root and log out of their root sessions when finished. Some days require staying logged in as root all day long. Most sysadmins rarely use `sudo` because it requires typing more than necessary to run essential commands.
These tools both provide escalated privileges, but the way they do so is significantly different. This difference is due to the distinct use cases for which they were originally intended.
### sudo
The original intent of `sudo` was to enable the root user to delegate to one or two non-root users access to one or two specific privileged commands they need regularly. The `sudo` command gives non-root users temporary access to the elevated privileges needed to perform tasks such as adding and deleting users, deleting files that belong to other users, installing new software, and generally any task required to administer a modern Linux host.
Allowing the users access to a frequently used command or two that requires elevated privileges saves the sysadmin a lot of requests from users and eliminates the wait time. The `sudo` command does not switch the user account to become root; most non-root users should never have full root access. In most cases, `sudo` lets a user issue one or two commands then allows the privilege escalation to expire. During this brief time interval, usually configured to be 5 minutes, the user may perform any necessary administrative tasks that require elevated privileges. Users who need to continue working with elevated privileges but are not ready to issue another task-related command can run the `sudo -v` command to revalidate the credentials and extend the time for another 5 minutes.
Using the `sudo` command does have the side effect of generating log entries of commands used by non-root users, along with their IDs. The logs can facilitate a problem-related postmortem to determine when users need more training. (You thought I was going to say something like "assign blame," didn't you?)
### su
The `su` command is intended to allow a non-root user to elevate their privilege level to that of root—in fact, the non-root user becomes the root user. The only requirement is that the user know the root password. There are no limits on this because the user is now logged in as root.
No time limit is placed on the privilege escalation provided by the su command. The user can work as root for as long as necessary without needing to re-authenticate. When finished, the user can issue the exit command to revert from root back to their own non-root account.
### Controversy and change
There has been some recent disagreement about the uses of `su` versus `sudo`.
> Real [Sysadmins] don't use sudo. —Paul Venezia
Venezia contends in his [InfoWorld article][4] that `sudo` is used as an unnecessary prop for many people who act as sysadmins. He does not spend much time defending or explaining this position; he just states it as a fact. And I agree with him—for sysadmins. We don't need the training wheels to do our jobs. In fact, they get in the way.
However,
> The times they are a-changin'. —Bob Dylan
Dylan was correct, although he was not singing about computers. The way computers are administered has changed significantly since the advent of the one-person, one-computer era. In many environments, the user of a computer is also its administrator. This makes it necessary to provide some access to the powers of root for those users.
Some modern distributions, such as Ubuntu and its derivatives, are configured to use the `sudo` command exclusively for privileged tasks. In those distros, it is impossible to log in directly as the root user or even to `su` to root, so the `sudo` command is required to allow non-root users any access to root privileges. In this environment, all system administrative tasks are performed using `sudo`.
This configuration is possible by locking the root account and adding the regular user account(s) to the wheel group. This configuration can be circumvented easily. Try a little experiment on any Ubuntu host or VM. Let me stipulate the setup here so you can reproduce it if you wish. I installed Ubuntu 16.04 LTS in a VM using VirtualBox. During the installation, I created a non-root user, student, with a simple password for this experiment.
Log in as the user student and open a terminal session. Look at the entry for root in the `/etc/shadow` file, where the encrypted passwords are stored.
```
student@ubuntu1:~$ cat /etc/shadow
cat: /etc/shadow: Permission denied
```
Permission is denied, so we cannot look at the `/etc/shadow` file. This is common to all distributions to prevent non-privileged users from seeing and accessing the encrypted passwords, which would make it possible to use common hacking tools to crack those passwords.
Now let's try to `su -` to root.
```
student@ubuntu1:~$ su -
Password: <Enter root password but there isn't one>
su: Authentication failure
```
This fails because the root account has no password and is locked out. Use the `sudo` command to look at the `/etc/shadow` file.
```
student@ubuntu1:~$ sudo cat /etc/shadow
[sudo] password for student: <enter the student password>
root:!:17595:0:99999:7:::
<snip>
student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
<snip>
```
I have truncated the results to show only the entry for the root and student users. I have also shortened the encrypted password so the entry will fit on a single line. The fields are separated by colons (`:`) and the second field is the password. Notice that the password field for root is a bang, known to the rest of the world as an exclamation point (`!`). This indicates that the account is locked and that it cannot be used.
Now all you need to do to use the root account as a proper sysadmin is to set up a password for the root account.
```
student@ubuntu1:~$ sudo su -
[sudo] password for student: <Enter password for student>
root@ubuntu1:~# passwd root
Enter new UNIX password: <Enter new root password>
Retype new UNIX password: <Re-enter new root password>
passwd: password updated successfully
root@ubuntu1:~#
```
Now you can log in directly on a console as root or `su` directly to root instead of using `sudo` for each command. Of course, you could just use `sudo su -` every time you want to log in as root, but why bother?
Please do not misunderstand me. Distributions like Ubuntu and their up- and downstream relatives are perfectly fine, and I have used several of them over the years. When using Ubuntu and related distros, one of the first things I do is set a root password so that I can log in directly as root. Other distributions, like Fedora and its relatives, now provide some interesting choices during installation. The first Fedora release where I noticed this was Fedora 34, which I have installed many times while writing an upcoming book.
One of those installation options can be found on the page to set the root password. The new option allows the user to choose "Lock root account" in the way an Ubuntu root account is locked. There is also an option on this page that allows remote SSH login to this host as root using a password, but that only works when the root account is unlocked. The second option is on the page that allows the creation of a non-root user account. One of the options on this page is "Make this user administrator." When this option is checked, the user ID is added to a special group called the wheel group, which authorizes members of that group to use the `sudo` command. Fedora 36 even mentions the wheel group in the description of that checkbox.
More than one non-root user can be set as an administrator. Anyone designated as an administrator using this method can use the `sudo` command to perform all administrative tasks on a Linux computer. The installer only allows the creation of one non-root user during installation, so other new users can be added to the wheel group when created. Existing users can be added to the wheel group by the root user or another administrator, either directly with a text editor or by using the `usermod` command.
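For example, adding an existing user to the wheel group takes a single command as root; the username here is the `student` account from the experiment above:
```
# usermod -aG wheel student
```
The `-aG` options append the user to the named group without removing any existing group memberships.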
In most cases, today's administrators need to do only a few essential tasks, such as adding a new printer, installing updates or new software, or deleting software that is no longer needed. The GUI tools for these tasks require a root or administrative password and will accept the password from a user designated as an administrator.
### How I use su and sudo on Linux
I use both `su` and `sudo`. They each have an important place in my sysadmin toolbox.
I can't lock the root account because I need to use it to run my [Ansible][5] playbooks and the [rsbu][6] Bash program I wrote to perform backups. Both of these need to be run as root, and so do several other administrative Bash scripts I have written. I use the `su` command to switch users to the root user so I can perform these and many other common tasks. Elevating my privileges to root using `su` is especially helpful when performing problem determination and resolution. I really don't want a `sudo` session timing out on me while I am in the middle of my thought process.
I use the `sudo` command for tasks that need root privilege when a non-root user needs to perform them. I set the non-root account up in the sudoers file with access to only those one or two commands needed to complete the tasks. I also use `sudo` myself when I need to run only one or two quick commands with escalated privileges.
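As a minimal sketch of such a sudoers entry, edited with `visudo`, a rule like the following grants one user exactly one privileged command; the username and command are illustrative only, not a recommendation:
```
# Allow user "student" to restart the web service, and nothing else
student ALL=(root) /usr/bin/systemctl restart httpd
```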
### Conclusions
The tools you use don't matter nearly as much as getting the job done. What difference does it make if you use vim or Emacs, systemd or SystemV, RPM or DEB, `sudo` or `su` ? The bottom line here is that you should use the tools with which you are most comfortable and that work best for you. One of the greatest strengths of Linux and open source is that there are usually many options available for each task we need to accomplish.
Both `su` and `sudo` have strengths, and both can be secure when applied properly for their intended use cases. I choose to use both `su` and `sudo` mostly in their historical roles because that works for me. I prefer `su` for most of my own work because it works best for me and my workflow.
Share how you prefer to work in the comments!
This article is taken from Chapter 19 of my book The Linux Philosophy for Sysadmins (Apress, 2018) and is republished with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/linux-su-vs-sudo-sysadmin
作者:[David Both][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/bash_command_line.png
[2]: https://opensource.com/users/seth
[3]: https://opensource.com/article/22/5/use-sudo-linux
[4]: http://www.infoworld.com/t/unix/nine-traits-the-veteran-unix-admin-276?page=0,0&source=fssr
[5]: https://opensource.com/article/20/10/first-day-ansible
[6]: https://opensource.com/article/17/1/rsync-backup-linux

View File

@ -0,0 +1,126 @@
[#]: subject: "Why organizations need site reliability engineers"
[#]: via: "https://opensource.com/article/22/6/benefits-sre-site-reliability-engineering"
[#]: author: "Robert Kimani https://opensource.com/users/robert-charles"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why organizations need site reliability engineers
======
SRE is a valuable component of an efficient organization, supporting software engineering, systems engineering, DevSecOps implementation, and more.
![Puzzle pieces coming together to form a computer screen][1]
Image by: Opensource.com
In this final article that concludes my series about best practices for effective site reliability engineering (SRE), I cover some of the practical applications of site reliability engineering.
There are some significant differences between software engineering and systems engineering.
### Software engineering
* Focuses on software development and engineering only.
* Involves writing code to create useful functionality.
* Time is spent on developing repeatable and reusable software that can be easily extended.
* Has problem-solving orientation.
* Software engineering aids the SRE.
### Systems engineering
* Focuses on the whole system including software, hardware and any associated technologies.
* Time is spent on building, analyzing, and managing solutions.
* Deals with defining characteristics of a system and feeds requirements to software engineering.
* Has systems-thinking orientation.
* Systems engineering enables SRE.
The site reliability engineer (SRE) utilizes both software engineering and system engineering skills, and in so doing adds value to an organization.
As the SRE team runs production systems, an SRE produces the most impactful tools to manage and automate manual processes. Software can be built faster when an SRE is involved, because most of the time the SRE creates software for their own use. As most of the tasks for an SRE are automated, which entails a lot of coding, this introduces a healthy mix of development and operations, which is great for site reliability.
Finally, an SRE enables an organization to automatically scale rapidly whether it's scaling up or scaling down.
### SRE and DevSecOps
An SRE helps build effective end-to-end monitoring systems by utilizing logs, metrics, and traces. An SRE enables fast, effective, and reliable rollbacks, and automatic scaling of infrastructure up or down as needed. These capabilities are especially valuable during a security breach.
With the advent of cloud and container-based architectures, data processing pipelines have become a prominent component in IT architectures. An SRE helps configure the most restrictive access to data processing pipelines.
Finally, an SRE helps develop tools and procedures to handle incidents. While most of these incidents focus on IT operations and reliability, it can be easily extended to security. For example, DevSecOps deals with integrating development, security, and operations with heavy emphasis on automation. It's a field where development, security and operations teams work together to support and maintain an organization's applications and infrastructure.
### Designing SRE and pre-production computing environments
A pre-production or non-production environment is an environment used by an SRE to develop, deploy, and test.
The non-production environment is the testing ground for automation. But it's not just application code that requires a non-production environment. Any associated automated processes, primarily the ones that an SRE develops, requires a pre-production environment. Most organizations have more than one pre-production environment. By resembling production as much as possible, the pre-production environment improves confidence in releases. At least one of your non-production environments should resemble the production environment. In many cases it's not possible to replicate production data, but you should try your best to make the non-production environments match the production environments as closely as possible.
### Pre-production computing and the SRE
An SRE helps spin up identical application-serving environments by using automation and specialized tools. This is essential, as you can quickly spin up a non-production environment in a matter of seconds using scripts and tools developed by SREs.
A smart SRE treats configuration as code to ensure fast implementation of testing and deployment. Through the use of automated CI/CD pipelines, application releases and hot fixes can be made seamlessly.
Finally, by developing effective monitoring solutions, an SRE helps to ensure the reliability of a pre-production computing environment.
One of the closely related fields to pre-production computing is inner loop development.
### Executing on inner loop development
Picture two loops, an inner loop and an outer loop, forming the DevOps loop. In the inner loop, you code, build, run, and debug. This cycle mostly happens in a developer's workstation or some other non-production environment.
Once the code is ready, it is moved to the outer loop, where the process starts with code review, build, deploy, integration tests, security and compliance, and finally pre-production release.
Many of the processes in the outer loop and inner loop are automated by the SRE.
![Image of a DevOps Loop][3]
Image by: (Robert Kimani, CC BY-SA 4.0)
### SRE and inner loop development
The SRE speeds up inner loop development by enabling fast, iterative development by providing tools for containerized deployment. Many of the tools an SRE develops revolve around container automation and container orchestration, using tools such as Podman, Docker, Kubernetes, or platforms like OpenShift.
An SRE also develops tools to help debug crashes with tools such as Java heap dump analysis tools, and Java thread dump analysis tools.
### Overall value of SRE
By utilizing both systems engineering and software engineering, an SRE organization delivers impactful solutions. An SRE helps to implement DevSecOps where development, security, and operations intersect with a primary focus on automation.
SRE principles help maximize the function of pre-production environments by utilizing tools and processes that SRE organizations deliver, so one can easily spin up a non-production environment in a matter of seconds. An SRE organization enables efficient inner loop development by developing and providing the necessary tools. The key benefits include:
* Improved end user experience: It's all about ensuring that the users of the applications and services get the best experience possible. This includes uptime of the applications or services. Applications should be up and running all the time and should be healthy.
* Minimizes or eliminates outages: It's better for users and developers alike.
* Automation: As the saying goes, you should always be trying to automate yourself out of the job that you are currently performing manually.
* Scale: In the age of cloud-native applications and containerized services, massive automated scalability is critical for an SRE to scale up or down in a safe and fast manner.
* Integrated: The principles and processes that the SRE organization embraces can be, and in many cases should be, extended to other parts of the organization, as with DevSecOps.
The SRE is a valuable component in an efficient organization. As demonstrated over the course of this series, the benefits of SRE affect many departments and processes.
### Further reading
Below are some GitHub links to a few of my favorite SRE resources:
* [Awesome site reliability engineering resources][4]
* [Awesome site reliability tools][5]
* [SRE cheat sheet][6]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/benefits-sre-site-reliability-engineering
作者:[Robert Kimani][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/robert-charles
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/puzzle_computer_solve_fix_tool.png
[2]: https://opensource.com/downloads/guide-implementing-devsecops
[3]: https://opensource.com/sites/default/files/2022-06/SREFinalDevOps-Loop.png
[4]: https://github.com/dastergon/awesome-sre
[5]: https://github.com/SquadcastHub/awesome-sre-tools
[6]: https://github.com/shibumi/SRE-cheat-sheet

View File

@ -0,0 +1,269 @@
[#]: subject: "ripgrep-all Command in Linux: One grep to Rule Them All"
[#]: via: "https://itsfoss.com/ripgrep-all/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ripgrep-all Command in Linux: One grep to Rule Them All
======
[rga][1], called ripgrep-all, is an excellent tool that allows you to search almost all files for a text pattern. While the OG grep command is limited to plaintext files, rga can search for text in a wide range of file types such as PDF, e-Books, Word documents, zip, tar, and even embedded subtitles.
### What is it exactly?
The [grep][2] command is used for searching for text-based patterns in files. Its name comes from **g**lobal **r**egular **e**xpression **p**rint. You can not only search for simple words, but can also specify that the word should be the first word in a line, at the end of a line, or that a specific word should come before it. That is why grep is so powerful: it uses regex (regular expressions).
There is also a limitation on grep, kind of. You can only use grep to search for patterns in a plaintext file. That means you cannot [search for patterns in a PDF document][3], in a compressed tar/zip archive, or in a database like SQLite.
Now imagine having the powerful search that grep offers, but for other file types too. That is rga, or ripgrep-all, whatever you might call it.
It is ripgrep, but with added functionality. We also have a tutorial covering [ripgrep][4], in case you are interested in it.
### How to install ripgrep-all
Arch Linux users can easily install ripgrep-all using the following command:
```
sudo pacman -S ripgrep-all
```
The Nix package manager also packages ripgrep-all; to install it, use the following command:
```
nix-env -iA nixpkgs.ripgrep-all
```
Mac users can use the Homebrew package manager like so:
```
brew install ripgrep-all
```
#### Debian/Ubuntu users
At the moment, ripgrep-all is available neither in Debian's first-party repositories nor in Ubuntu's repositories. Fret not, that doesn't mean it is unobtainium.
On any other Debian based operating system (Ubuntu and its derivatives too), install the necessary dependencies first:
```
sudo apt-get install ripgrep pandoc poppler-utils ffmpeg
```
Once those are installed, visit [this page that contains the installer][5]. Find the file that has the “x86_64-unknown-linux-musl” suffix. Download and extract it.
That tar archive contains two necessary binary executable files. They are “rga” and “rga-preproc”.
Copy them to the “~/.local/bin” directory. In most cases, this directory will exist, but in case you do not have it, create it using the following command:
```
mkdir -p $HOME/.local/bin
```
Finally, add the following lines to your “~/.bashrc” file:
```
if ! [[ $PATH =~ "$HOME/.local/bin" ]]; then
PATH="$HOME/.local/bin:$PATH"
fi
```
Now, close and re-open the terminal to make the changes made in “~/.bashrc” effective. With that, ripgrep-all is installed.
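Alternatively, you can reload the file in the current session without restarting the terminal:
```
source ~/.bashrc
```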
### Using ripgrep-all
ripgrep-all is the name of the project, not the command; the command name is `rga`.
The rga utility supports the following file extensions:
* media: `.mkv`, `.mp4`, `.avi`
* documents: `.epub`, `.odt`, `.docx`, `.fb2`, `.ipynb`, `.pdf`
* compressed archives: `.zip`, `.tar`, `.tgz`, `.tbz`, `.tbz2`, `.gz`, `.bz2`, `.xz`, `.zst`
* databases: `.db`, `.db3`, `.sqlite`, `.sqlite3`
* images (OCR): `.jpg`, `.png`
You might be [familiar with grep][6], but let us look at some examples nonetheless. This time, with rga instead of grep.
Before you proceed further, please take a look at the directory hierarchy given down below:
```
.
├── my_demo_db.sqlite3
├── my_demo_document.odt
└── TLCL-19.01.pdf.zip
```
#### Case insensitive and case sensitive search
The simplest pattern matching is to search for a word in a file. Let us try that. I will use the rga command to perform a case-sensitive search for the phrase "red hat enterprise linux" across all files in the current directory.
While grep has case sensitivity turned on by default, with rga, the `-s` option needs to be used.
```
rga -s 'red hat enterprise linux'
```
![Case sensitive search using rga][7]
As you can see, with a case sensitive search, I only got the result from a sqlite3 database file. Now, let us try a case insensitive search using the `-i` option and see what results we get.
![Case insensitive search using rga][8]
```
rga -i 'red hat enterprise linux'
```
Ah, this time we also got a match from [The Linux Command Line][9], the book by William Shotts.
#### Inverse match
With grep, and by extension with ripgrep-all, you can do an inverse match, which means: "Show only lines that do NOT have this pattern."
The option for that is `-v` and that needs to be present immediately before the pattern.
![Inverse match using rga][10]
```
rga -v linux *.sqlite3
rga linux *.sqlite3
```
Hey! Hold on. That isn't Linux!
This time I only selected the database file. That is because every other file has a lot of lines that do not contain the word linux in them.
And as you can see, the first command's output does not have the word linux in it. The second command is only there to demonstrate that linux is present in the database.
#### Contextual search
One thing I love about rga's ability to search databases in particular is that it can not only find your match but also provide relevant context (when asked). Although searching in databases is nothing special, it is always an "Oh wow, it can do that?!" moment.
A contextual search is performed using the following three options:
* -A: show context after the matched line
* -B: show context before the matched line
* -C: show context before and after the matched line
If this sounds confusing, fret not. I will discuss each option to help you understand it better.
**Using the -C option**
To show you what I am talking about, let us take a look at the following command and its output. This is an example of using the `-C` option.
```
rga -C 2 'red hat enterprise linux'
```
![Fully contextual search using rga][11]
As you can see, not only do I get the match from my database file, but I can also see the rows that are chronologically before the match and the rows that are after it. This did not randomly jumble my rows, which is quite nice because I did not use keys to number each row.
You might be wondering if something is wrong. I specified 2, but only got 1 line after. Well, that is because there is no row after the fedora linux row in my database. :)
**Using the -A option**
To better understand the use of `-A` option, let us have a look at an example.
```
rga -A 2 Yours
```
![Contextual search (after) using rga][12]
I see that this is a letter of some sort… it makes me wonder what was in the body.
**Using the -B option**
I think that document is incomplete… Let us get some context from the lines above it.
To see the previous lines, we need to use the `-B` option.
```
rga -B 6 Yours
```
![Contextual search (before) using rga][13]
As you can see, I asked "Show me the 6 lines that come before my matched line" and I got this in the output. Quite handy for some situations, don't you think?
#### Multi-threaded search
Since ripgrep-all is a wrapper around ripgrep, you can make use of various options [that LinuxHandbook has already covered][14].
One of those options is multi-threading. By default, ripgrep chooses the thread count based on heuristics, and so ripgrep-all does the same.
That doesn't mean you cannot specify it yourself! :)
The option to do so is `-j`. Use it like so:
```
rga -j NUM-OF-THREADS PATTERN
```
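For instance, to repeat the earlier case-insensitive search while forcing four threads (the thread count here is arbitrary):
```
rga -j 4 -i linux
```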
There isn't a practical example to show this *reliably*, so I will leave it for you to test yourself ;)
#### Caching
One of the main selling points of rga, besides supporting the vast number of file extensions, is that it efficiently caches data.
By default, depending on the OS, the following directories will store the cache generated by rga:
* Linux: `~/.cache/rga`
* macOS: `~/Library/Caches/rga`
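To get a rough idea of how much disk space the cache occupies on Linux (using the path from the list above), you can check with du:
```
du -sh ~/.cache/rga
```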
I will first run the following command to remove my cache:
```
rm -rf ~/.cache/rga
```
Once the cache is cleared, I will run a simple query 2 times. I expect to see a performance improvement the second time.
```
time rga -i linux > /dev/null
time rga --rga-no-cache -i linux > /dev/null
```
![Automatic caching done by rga][15]
I deliberately chose the pattern linux, as it occurs many times in The Linux Command Line book's PDF and also in my .odt document, as well as in my database file. To check speed, I don't need the output, so it is redirected to /dev/null.
I see that the first time the command is run, it does not have a cache, but running the same command a second time yields a faster run.
In the end, I also use the `--rga-no-cache` option to disable the use of the cache, even if it is present. The result is similar to the first run of the rga command.
### Conclusion
rga is the Swiss Army knife of grep. It is one tool that can be used for almost any kind of file, and it behaves similarly to grep, at least regarding the regex, less so with the options.
But all in all, rga is one of the tools that I recommend you use. Do comment and share your experience/thoughts!
--------------------------------------------------------------------------------
via: https://itsfoss.com/ripgrep-all/
作者:[Pratham Patel][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lkxed
[1]: https://github.com/phiresky/ripgrep-all
[2]: https://linuxhandbook.com/what-is-grep/
[3]: https://itsfoss.com/pdfgrep/
[4]: https://linuxhandbook.com/ripgrep/
[5]: https://github.com/phiresky/ripgrep-all/releases
[6]: https://linuxhandbook.com/grep-command-examples/
[7]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-33-19-800x197.webp
[8]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-33-43-800x242.webp
[9]: https://www.linuxcommand.org/tlcl.php
[10]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-36-50-800x239.webp
[11]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-37-21-800x181.webp
[12]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-37-40-800x161.webp
[13]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-38-01-800x305.webp
[14]: https://linuxhandbook.com/ripgrep
[15]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-39-32-800x468.webp

View File

@ -3,16 +3,16 @@
[#]: author: "Nived Velayudhan https://opensource.com/users/nivedv"
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: reviewer: "turbokernel"
[#]: publisher: " "
[#]: url: " "
Kubernetes 架构指南
======
学习 Kubernetes 架构的不同组件是如何组合在一起的,这样你就可以更好地诊断问题、维护健康的集群和优化你的工作流。
学习 Kubernetes 架构中不同组件是如何组合在一起的,这样您就可以更好地排查问题、维护集群健康以及优化工作流。
![部件、模块、软件容器][1]
使用 Kubernetes 来编排容器,这是一个简单的描述,但理解它的实际含义和你如何实现它完全是另外一回事。如果你正在运行或管理 Kubernetes 集群,那么你就会知道 Kubernetes 由一台称为 _控制平面_计算机和许多其他 _工作节点_ 计算机组成。每一个都有一个复杂但健壮的堆栈,这使编排成为可能,熟悉每个组件有助于理解它是如何工作的。
使用 Kubernetes 来编排容器,这种描述说起来简单,但理解它的实际含义以及如何实现它完全是另外一回事。如果你正在运行或管理 Kubernetes 集群,那么你就会知道 Kubernetes 由一台称为 _控制平面_机器和许多其他 _工作节点_ 机器组成。每种类型都有一个复杂但稳定的堆栈,这使编排成为可能,熟悉每个组件有助于理解它是如何工作的。
![Kubernetes 架构图][2]
@ -20,26 +20,26 @@ Kubernetes 架构指南
### 控制平面组件
Kubernetes 安装在一个称为控制平面的机器上,它会运行 Kubernetes 守护进程,并在启动容器和吊舱时与之通信。下面介绍控制平面的各个组件。
Kubernetes 安装在一个称为控制平面的机器上,它会运行 Kubernetes 守护进程,并在启动容器和容器组时与之通信。下面介绍控制平面的各个组件。
#### Etcd
Etcd 是一种快速、分布式且一致的键值存储器,用作持久存储 Kubernetes 对象数据如吊舱、Replication Controller、密钥和服务的后台存储。Etcd 是 Kubernetes 存储集群状态和元数据的唯一地方。唯一直接与 etcd 对话的组件是 Kubernetes API 服务器。所有的其他组件都通过 API 服务器间接的从 etcd 读写数据。
Etcd 是一种快速、分布式一致性键值存储器,用作 Kubernetes 对象数据的持久存储,如 容器组 、副本控制器、密钥和服务。Etcd 是 Kubernetes 存储集群状态和元数据的唯一地方。唯一与 etcd 直连的组件是 Kubernetes API 服务器。其他所有组件都通过 API 服务器间接的从 etcd 读写数据。
Etcd 还实现了一个监控功能它提供了一个基于事件的接口用于异步监控密钥的更改。一旦你更改了一个密钥它的监控者就会收到通知。API 服务器组件严重依赖于此来获得通知,并将 etcd 移动到所需状态。
Etcd 还实现了一个监控功能它提供了一个基于事件的接口用于异步监控密钥的更改。一旦你更改了一个密钥它的监控者就会收到通知。API 服务器组件严重依赖于此来获得通知,并将 etcd 变更至期望状态。
_为什么 etcd 实例的数量应该是奇数_
你通常会在高可用HA环境中运行三个、五个或七个 etcd 实例,但这是为什么呢?因为 etcd 是分布式数据存储,可以水平扩展它,但你需要确保每个实例中的数据是一致的。为此,系统需要当前状态是什么达成共识Etcd 为此使用 [RAFT 共识算法][4]。
你通常会运行三个、五个或七个 etcd 实例实现高可用HA环境,但这是为什么呢?因为 etcd 是分布式数据存储,可以水平扩展它,但你需要确保每个实例中的数据是一致的。因此,需要为系统当前状态达成共识Etcd 为此使用 [RAFT 共识算法][4]。
RAFT 算法需要多数(或仲裁)集群才能进入下一个状态。如果你只有两个 etcd 实例并且他们其中一个失败的话,那么 etcd 集群无法转换到新的状态,因为不存在多数这个概念。如果你有三个 etcd 实例,一个实例可能会失败,但仍有 2 个实例可用于达到仲裁
RAFT 算法需要经过选举(或仲裁)集群才能进入下一个状态。如果你只有两个 etcd 实例并且他们其中一个失败的话,那么 etcd 集群无法转换到新的状态,因为不存在过半这个概念。如果你有三个 etcd 实例,一个实例可能会失败,但仍有 2 个实例可用于进行选举
#### API 服务器
API 服务器是 Kubernetes 中唯一直接与 etcd 交互的组件。Kubernetes 中的其他所有组件都必须通过 API 服务器来处理集群状态包括客户端kubectl。API 服务器具有以下功能:
* 提供在 etcd 中存储对象的一致方式。
* 对对象执行验证,方便客户端无法存储配置不正确的对象(如果它们直接写入 etcd 数据存储,可能会发生这种情况)。
* 执行验证对象,防止客户端存储配置不正确的对象(如果它们直接写入 etcd 数据存储,可能会发生这种情况)。
* 提供 RESTful API 来创建、更新、修改或删除资源。
* 提供[乐观并发锁][5],在发生更新时,其他客户端永远不会有机会重写对象。
* 对客户端发送的请求进行身份验证和授权。它使用插件提取客户端的用户名、ID、所属组并确定通过身份验证的用户是否可以对请求的资源执行请求的操作。
@ -48,57 +48,57 @@ API 服务器是 Kubernetes 中唯一直接与 etcd 交互的组件。Kubernetes
#### 控制器管理器
在 Kubernetes 中,控制器是监控集群状态的控制循环,然后根据需要进行或请求更改。每个控制器都尝试将当前集群状态移动到所需状态。控制器至少跟踪一种 Kubernetes 资源类型,这些对象都有一个字段来表示所需的状态。
在 Kubernetes 中,控制器持续监控集群状态,然后根据需要进行或请求更改。每个控制器都尝试将当前集群状态变更至期望状态。控制器至少跟踪一种 Kubernetes 资源类型,这些对象均有一个字段来表示期望的状态。
控制器示例:
* Replication ManagerReplicationController 资源的控制器)
* 复本控制器、DaemonSet 和 Job 控制器
* 部署控制器
* StatefulSet 控制器
* 副本管理器(管理副本管理器 资源的控制器)
* 副本控制器、守护进程集 和 任务控制器
* 无状态负载控制器
* 有状态负载控制器
* 节点控制器
* 服务控制器
* 点控制器
* 接入点控制器
* 命名空间控制器
* 持久卷控制器
控制器使用监控机制来获得更改通知。它们监视 API 服务器对资源的更改,对每次更改执行操作,无论是新建对象还是更新或删除现有对象。大多数时候,这些操作包括创建其他资源或更新监控的资源本身。不过,由于使用监控并不能保证控制器不会错过任何事件,它们还会定期执行一系列操作,确保没有错过任何事件。
控制器通过监控机制来获得变更通知。它们监视 API 服务器对资源的变更,对每次更改执行操作,无论是新建对象还是更新或删除现有对象。大多数时候,这些操作包括创建其他资源或更新监控的资源本身。不过,由于使用监控并不能保证控制器不会错过任何事件,它们还会定期执行一系列操作,确保没有错过任何事件。
控制器管理器还执行生命周期功能。例如命名空间创建和生命周期、事件垃圾收集、终止吊舱垃圾收集、[级联删除垃圾收集][7]和节点垃圾收集。有关更多信息,参考[云控制器管理器][8]。
#### 调度器
调度器是一个将吊舱分配给节点的控制平面进程。它会监视新创建没有分配节点的吊舱。调度器会给每个发现的吊舱分配运行它的最佳节点。
调度器是一个将容器组分配给节点的控制平面进程。它会监视新创建没有分配节点的容器组。调度器会给每个发现的容器组分配运行它的最佳节点。
满足吊舱调度要求的节点称为可行节点。如果没有合适的节点,那么吊舱会一直处于未调度状态,直到调度器可以放置它。一旦找到可行节点,它就会运行一组函数来对节点进行评分,并选择得分最高的节点,然后它会告诉 API 服务器所选节点的信息。这个过程称为绑定。
满足容器组调度要求的节点称为可调度节点。如果没有合适的节点,那么容器组会一直处于未调度状态,直到调度器可以放置它。一旦找到可调度节点,它就会运行一组函数来对节点进行评分,并选择得分最高的节点,然后它会告诉 API 服务器所选节点的信息。这个过程称为绑定。
节点的选择分为两步:
1. 过滤所有节点的列表,获得可以调度吊舱的可接受节点列表例如PodFitsResources 过滤器检查候选节点是否有足够的可用资源来满足吊舱的特定资源请求)。
1. 过滤所有节点的列表,获得可以调度容器组的节点列表例如PodFitsResources 过滤器检查候选节点是否有足够的可用资源来满足容器组的特定资源请求)。
2. 对第一步得到的节点列表进行评分和排序,选择最佳节点。如果得分最高的有多个节点,循环过程可确保吊舱会均匀地部署在所有节点上。
2. 对第一步得到的节点列表进行评分和排序,选择最佳节点。如果得分最高的有多个节点,循环过程可确保容器组会均匀地部署在所有节点上。
调度决策要考虑的因素包括:
* 吊舱是否请求硬件/软件资源?节点是否报告内存或磁盘压力情况?
* 容器组是否请求硬件/软件资源?节点是否报告内存或磁盘压力情况?
* 节点是否有与吊舱规范中的节点选择器匹配的标签?
* 节点是否有与容器组规范中的节点选择器匹配的标签?
* 如果吊舱请求绑定到特定地主机端口,该端口是否可用?
* 如果容器组请求绑定到特定的主机端口,该端口是否可用?
* 吊舱是否容忍节点的污点?
* 容器组是否容忍节点的污点?
* 吊舱是否指定节点亲和性或反亲和性规则?
* 容器组是否指定节点亲和性或反亲和性规则?
调度器不会指示所选节点运行吊舱。调度器所做的就是通过 API 服务器更新吊舱定义。然后 API 服务器通过监控机制通知 kubelet 吊舱已被调度,然后目标节点上的 kubelet 服务看到吊舱被调度到它的节点,它创建并运行吊舱的容器
调度器不会指示所选节点运行容器组。调度器所做的就是通过 API 服务器更新容器组定义。然后 API 服务器通过监控机制通知 kubelet 容器组已被调度,然后目标节点上的 kubelet 服务看到容器组被调度到它的节点,它创建并运行容器组
**[ 下一篇: [Kubernetes 如何创建和运行容器: 图解指南][9] ]**
### 工作节点组件
工作节点运行 kubelet 代理,这允许控制平面招募它们来处理作业。与控制平面类似,工作节点使用几个不同的组件来实现这一点。 以下部分描述了工作节点组件。
工作节点运行 kubelet 代理,这允许控制平面接纳它们来处理负载。与控制平面类似,工作节点使用几个不同的组件来实现这一点。 以下部分描述了工作节点组件。
#### Kubelet
@ -108,19 +108,19 @@ kubelet服务的主要功能有
* 通过在 API 服务器中创建节点资源来注册它正在运行的节点。
* 持续监控 API 服务器上调度到节点的吊舱
* 持续监控 API 服务器上调度到节点的容器组
* 使用配置的容器运行时启动吊舱的容器。
* 使用配置的容器运行时启动容器组的容器。
* 持续监控正在运行的容器,并将其状态、事件和资源消耗报告给 API 服务器。
* 运行容器存活探测,在探测失败时重启容器,当 API 服务器中删除吊舱时终止(通知服务器吊舱终止的消息)。
* 运行容器存活探测,在探测失败时重启容器,当 API 服务器中删除容器组时终止(通知服务器容器组终止的消息)。
#### 服务代理
服务代理kube-proxy在每个节点上运行确保一个吊舱可以与另一个吊舱对话,一个节点可以与另一个节点对话,一个容器可以与另一个容器对话。它负责监视 API 服务器对服务和吊舱定义的更改,以保持整个网络配置是最新的。当一项服务得到多个吊舱的支持时,代理会在这些吊舱之间执行负载平衡。
服务代理kube-proxy在每个节点上运行确保一个容器组可以与另一个容器组通讯,一个节点可以与另一个节点对话,一个容器可以与另一个容器对话。它负责监视 API 服务器对服务和容器组定义的更改,以保持整个网络配置是最新的。当一项服务得到多个容器组的支持时,代理会在这些容器组之间执行负载平衡。
kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务器,用于接受连接并将它们代理到吊舱。当前的实现是使用 iptables 规则将数据包重定向到随机选择的后端吊舱,而无需通过实际的代理服务器。
kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务器,用于接受连接并将它们代理到容器组。当前的实现是使用 iptables 规则将数据包重定向到随机选择的后端容器组,而无需通过实际的代理服务器。
它工作原理的高级视图:
@ -128,7 +128,7 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
* API 服务器会通知在工作节点上运行的 kube-proxy 代理有一个新服务。
* 每个 kube-proxy 通过设置 iptables 规则使服务可寻址,确保截获每个服务 IP/端口对,并将目的地址修改为支持服务的一个吊舱
* 每个 kube-proxy 通过设置 iptables 规则使服务可寻址,确保截获每个服务 IP/端口对,并将目的地址修改为支持服务的一个容器组
* 监控 API 服务器对服务或其端点对象的更改。
@ -142,7 +142,7 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
容器运行时负责:
* 如果容器镜像本地没有,则从镜像仓库中提取。
* 如果容器镜像本地不存在,则从镜像仓库中提取。
* 将镜像解压到写时复制文件系统,所有容器层叠加创建一个合并的文件系统。
@ -150,9 +150,9 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
* 设置容器镜像的元数据,如覆盖命令、用户输入的入口命令,并设置 SECCOMP 规则,确保容器按预期运行。
* 提醒内核将进程、网络和文件系统等隔离分配给容器。
* 通知内核将进程、网络和文件系统等隔离分配给容器。
* 提醒内核分配一些资源限制,如 CPU 或内存限制。
* 通知内核分配一些资源限制,如 CPU 或内存限制。
* 将系统调用syscall传递给内核启动容器。
@ -170,7 +170,7 @@ via: https://opensource.com/article/22/2/kubernetes-architecture
作者:[Nived Velayudhan][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,217 @@
[#]: subject: "How static linking works on Linux"
[#]: via: "https://opensource.com/article/22/6/static-linking-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Linux 上静态链接
======
学习如何使用静态库将多个 C <ruby>对象<rt>object</rt></ruby> 文件组合成一个单个的可执行文件。
![Woman using laptop concentrating][1]
图片作者Mapbox Uncharted ERG, [CC-BY 3.0 US][2]
使用 C 编写的应用程序代码通常有多个源代码文件,但最终,你需要将它们编译成一个单个的可执行文件。
你可以通过两种方式来完成这项工作:创建一个 <ruby>静态<rt>static</rt></ruby> 库,或者创建一个 <ruby>动态<rt>dynamic</rt></ruby> 库(也被称为 <ruby>共享<rt>shared</rt></ruby> 库)。这两种类型的库从创建和链接的方式来看是不同的。选择使用哪种方式取决于你的具体使用情况。
在 [上一篇文章][3] 中,我演示了如何创建一个动态链接的可执行文件,这是一种更常用的方法。在这篇文章中,我将解释如何创建一个静态链接的可执行文件。
### 链接器使用静态库
链接器是一个命令,它将一个程序的数个部分组合到一起,并为它们重新组织存储器分配。
链接器的功能包括:
* 集成一个程序的所有的部分
* 计算组织出一个新的存储器结构,以便所有的部分组合在一起
* 重新分配存储器地址,以便程序可以在新的存储器组织下运行
* 解析符号引用
作为这些链接器功能的结果,创建了一个名称为可执行文件的一个可运行程序。
静态库是通过将程序中所有必需的库模块复制到最终的可执行镜像中来创建的。链接器会在编译过程的最后一步链接静态库。可执行文件是通过解析外部引用、将库例程与程序代码组合来创建的。
### 创建对象文件
这里是一个静态库的示例,以及其链接过程。首先,创建带有这些函数原型的头文件 `mymath.h`:
```
int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);
```
使用这些函数定义来创建 `add.c` 、`sub.c` 、`mult.c` 和 `divi.c` 文件:
```
// add.c
int add(int a, int b){
return (a+b);
}
//sub.c
int sub(int a, int b){
return (a-b);
}
//mult.c
int mult(int a, int b){
return (a*b);
}
//divi.c
int divi(int a, int b){
return (a/b);
}
```
现在,使用 GCC 来创建对象文件 `add.o` 、`sub.o` 、`mult.o` 和 `divi.o` :
```
$ gcc -c add.c sub.c mult.c divi.c
```
`-c` 选项跳过链接步骤,并且只创建对象文件。
创建一个名称为 `libmymath.a` 的静态库,接下来,移除对象文件,因为它们不再被需要。(注意,使用一个 `trash` 命令比使用一个 `rm` 命令更安全。)
```
$ ar rs libmymath.a add.o sub.o mult.o divi.o
$ trash *.o
$ ls
add.c  divi.c  libmymath.a  mult.c  mymath.h  sub.c
```
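你可以使用 `ar t` 命令列出静态库中打包的对象文件,以验证上一步的结果(下面的输出是按上述步骤操作后的预期结果):
```
$ ar t libmymath.a
add.o
sub.o
mult.o
divi.o
```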
现在,你已经创建了一个名为 `libmymath` 的简单示例数学库,你可以在 C 代码中使用它。当然,现实中还有非常复杂的 C 库,开发者们正是通过这样的流程来生成最终产品的,你和我可以安装这些库并在 C 代码中使用。
接下来,在一些自定义代码中使用你的数学库,然后链接它。
### 创建一个静态链接的应用程序
假设你已经为数学运算编写了一个命令。创建一个名称为 `mathDemo.c` 的文件,并将这些代码复制粘贴至其中:
```
#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));
  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}
```
注意:第一行是一个 `include` 语句,通过名称来引用你自己的 `libmymath` 库。
针对 `mathDemo.c` 创建一个名称为 `mathDemo.o` 的对象文件:
```
$ gcc -I . -c mathDemo.c
```
`-I` 选项告诉 GCC 在其后列出的路径中搜索头文件。在这个实例中,你指定的是当前目录,用一个单个点(`.`)来表示。
将 `mathDemo.o` 与 `libmymath.a` 链接起来,以创建最终的可执行文件。这里有两种方法来向 GCC 表达这一点。
你可以指向文件:
```
$ gcc -static -o mathDemo mathDemo.o libmymath.a
```
或者,你可以具体指定库的路径和库的名称:
```
$ gcc -static -o mathDemo -L . mathDemo.o -lmymath
```
在后一个示例中,`-lmymath` 选项告诉链接器将 `libmymath.a` 中的对象文件与对象文件 `mathDemo.o` 链接起来,以创建最终的可执行文件。`-L` 选项指示链接器在其后的参数中查找库(类似于你使用 `-I` 所做的工作)。
### 分析结果
使用 `file` 命令来确认它是静态链接的:
```
$ file mathDemo
mathDemo: ELF 64-bit LSB executable, x86-64...
statically linked, with debug_info, not stripped
```
使用 `ldd` 命令,你将会看到该可执行文件不是动态链接的:
```
$ ldd ./mathDemo
        not a dynamic executable
```
你也可以检查 `mathDemo` 可执行文件的大小:
```
$ du -h ./mathDemo
932K    ./mathDemo
```
在我 [前一篇文章][5] 的示例中,动态链接的可执行文件只占有 24K 大小。
运行该命令来看看它的工作内容:
```
$ ./mathDemo
Enter two numbers
10
5
10 + 5 = 15
10 - 5 = 5
10 * 5 = 50
10 / 5 = 2
```
看起来令人满意!
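顺便一提,前面 `file` 命令的输出显示该可执行文件是 "not stripped"(未剥离符号)的。如果想进一步减小文件体积,可以尝试用 `strip` 命令去除符号表。注意,这只是一个可选的演示,剥离之后将无法进行符号级调试:
```
$ strip ./mathDemo
$ du -h ./mathDemo
```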
### 何时使用静态链接
动态链接可执行文件通常优于静态链接可执行文件,因为动态链接会保持应用程序的组件模块化。假如一个库接收到一次关键安全更新,那么它可以很容易地修补,因为它存在于应用程序的外部。
当你使用静态链接时,库的代码会 "隐藏" 在你创建的可执行文件之中,这意味着每当库更新时(相信我,库总是会更新的),修补它的唯一方法就是重新编译并重新发布一个新的可执行文件。
不过,如果一个库的代码只会被这一个正在使用它的可执行文件使用,或者它运行在预期不会接收到任何更新的专用嵌入式设备中,那么静态链接将是一种可接受的选项。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/static-linking-linux
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jayashree-huttanagoudar
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png
[2]: https://creativecommons.org/licenses/by/3.0/us/
[3]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux
[4]: https://www.redhat.com/sysadmin/recover-file-deletion-linux
[5]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux