Merge pull request #48 from LCTT/master

sync
This commit is contained in:
SamMa 2022-06-29 19:29:09 +08:00 committed by GitHub
commit 9b9f62979b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 2602 additions and 723 deletions

View File

@ -1,51 +1,47 @@
[#]: collector: (lujun9972)
[#]: translator: (Donkey)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (Donkey-Hao)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14773-1.html)
[#]: subject: (3 stress-free steps to tackling your task list)
[#]: via: (https://opensource.com/article/21/1/break-down-tasks)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
轻松解决你的任务清单的三个步骤
======
将你的大任务分为小步骤,避免自己不堪重负。
![Team checklist][1]
在往年,这个年度系列文章介绍的是单个的应用。今年,除了介绍能在 2021 年帮到你的策略之外,我们还在寻找一体化的解决方案。欢迎来到 2021 年“21 天生产力”系列的第 14 天。
> 将你的大任务分为小步骤,避免自己不堪重负。
本周开始我先回顾我的日程安排看看我需要或想要完成的事情。通常列表上有些较大的项目。无论来自工作上的问题一系列关于生产力的文章或者改进鸡舍当作为一项工作时这个任务真的很艰巨。很有可能在一个时间段我不能坐下来甚至在一天内完成类似例如请注意21 篇文章之类的事情。
![](https://img.linux.net.cn/data/attachment/album/202206/29/145852zcqqw24v2svulswl.jpg)
本周开始我先回顾我的日程安排看看我需要或想要完成的事情。通常列表上有些较大的项目。无论来自工作上的问题还是一系列关于生产力的文章或者改进我家的鸡舍当作为一项工作时这个任务真的很艰巨。很有可能我无法坐下来在一个时间段内甚至在一天内完成类似请注意只是举例21 篇文章之类的事情。
![21 Days of Productivity project screenshot][2]
21 天的生产力 (Kevin Sonney, [CC BY-SA 4.0][3])
所以当我的清单上有这样的东西时,我做的第一件事就是把它分解成更小的部分。如著名的诺贝尔文学奖得主 [William Faulkner][4] 说的“移山的人,从小石头开始。”(译注:感觉与“千里之行,始于足下”是一个意思) 我们要解决大任务(山)并且需要完成各个步骤(小石头)。
*21 天的生产力 (Kevin Sonney, [CC BY-SA 4.0][3])*
所以当我的清单上有这样的东西时,我做的第一件事就是把它分解成更小的部分。正如著名的诺贝尔文学奖得主 [William Faulkner][4] 所说:“移山的人,从小石头开始。”(LCTT 译注:感觉与“千里之行,始于足下”是一个意思)我们要解决大任务(山),就需要完成各个步骤(小石头)。
我使用下面的步骤将大任务分割为小步骤:
1. 我通常很清楚完成一项任务需要做什么。 如果没有,我会做一些研究来弄清楚这一点。
1. 我通常很清楚完成一项任务需要做什么。如果没有,我会做一些研究来弄清楚这一点。
2. 我会按顺序写下完成任务所需的各个步骤。
3. 最后,我坐下来拿着我的日历和清单,开始将任务分散到几天(或几周或几个月),以了解我何时可以完成它。
现在我不仅有计划还知道多久能完成。逐步完成,我可以看到这项大任务不仅变得更小,而且更接近完成。
军队有句古话,“遇敌无计”。 几乎可以肯定的是,有一两点(或五点)我意识到像“截屏”这样简单的事情需要扩展到更复杂的事情。 事实上,在 [Easy!Appointments][5] 的截图中,竟然是:
军队有句老话:“任何计划在与敌人接触之后都会改变。”几乎可以肯定的是,总有一两个(或五个)环节会让我意识到,像“截屏”这样简单的事情,也需要扩展成更复杂的一系列步骤。事实上,给 [Easy!Appointments][5] 截图这件事,最后变成了:
1. 安装和配置 Easy!Appointments
2. 安装和配置 Easy!Appointments WordPress 插件
3. 生成 API 密钥来同步日历
4. 截屏
即便如此,我也不得不将这些任务分解成更小的部分——下载软件、配置 NGINX、验证安装……你明白了。 没关系。 一个计划或一组任务不是一成不变的,可以根据需要进行更改。
即便如此,我也不得不将这些任务分解成更小的部分——下载软件、配置 NGINX、验证安装……你明白了吧。没关系。一个计划或一组任务不是一成不变的可以根据需要进行更改。
![project completion pie chart][6]
今年的计划已经完成了 2/3 ! (Kevin Sonney, [CC BY-SA 4.0][3])
*今年的计划已经完成了 2/3 ! (Kevin Sonney, [CC BY-SA 4.0][3])*
这是一项后天习得的技能,最初几次需要一些努力。学习如何将大任务分解成更小的步骤,可以让你跟踪实现目标或完成大任务的进度,而不会在过程中不知所措。
@ -56,7 +52,7 @@ via: https://opensource.com/article/21/1/break-down-tasks
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,42 +3,44 @@
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14772-1.html"
Apache Kafka为“无缝系统”提供异步消息支持
======
Apache Kafka 是最流行的开源消息代理之一。它已经成为了大数据操作的重要组成部分,你能够在几乎所有的微服务环境中找到它。本文对 Apache Kafka 进行了简要介绍,并提供了一个案例来展示它的使用方式。
![][1]
> Apache Kafka 是最流行的开源消息代理之一。它已经成为了大数据操作的重要组成部分,你能够在几乎所有的微服务环境中找到它。本文对 Apache Kafka 进行了简要介绍,并提供了一个案例来展示它的使用方式。
![](https://img.linux.net.cn/data/attachment/album/202206/29/094326fbo6zzsrxiava661.jpg)
你有没有想过,电子商务平台是如何在处理巨大流量时做到不会卡顿的呢?有没有想过,OTT 平台是如何在同时向数百万用户交付内容时,做到平稳运行的呢?其实,关键就在于它们的分布式架构。
采用分布式架构设计的系统由多个功能组件组成。这些功能组件通常分布在多个机器上,它们通过网络,异步地交换消息,从而实现相互协作。正是由于异步消息的存在,组件之间才能实现可伸缩、无阻塞的通信,整个系统才能够平稳运行。
### 异步消息
异步消息的常见特性有:
* 消息的生产者和消费者都不知道彼此的存在。它们在不知道其他对象的情况下,加入和离开系统。
* 消息代理充当了生产者和消费者之间的中介。
* 生产者把每条消息,都与一个<ruby>主题<rt>topic</rt></ruby>相关联。每个主题是一个简单的字符串。
* 一个生产者可以把消息发往多个主题,不同生产者也可以把消息发送给同一主题
* 消息的<ruby>生产者<rt>producer</rt></ruby><ruby>消费者<rt>consumer</rt></ruby>都不知道彼此的存在。它们在不知道对方的情况下,加入和离开系统。
* 消息<ruby>代理<rt>broker</rt></ruby>充当了生产者和消费者之间的中介。
* 生产者把每条消息,都与一个<ruby>主题<rt>topic</rt></ruby>相关联。主题是一个简单的字符串。
* 生产者可以在多个主题上发送消息,不同的生产者也可以在同一主题上发送消息。
* 消费者向代理订阅一个或多个主题的消息。
* 生产者只将消息发送给代理,而不发送给消费者。
* 代理会把消息发送给订阅该主题的所有消费者。
* 代理将消息传递给针对该主题注册的所有消费者。
* 生产者并不期望得到消费者的任何回应。换句话说,生产者和消费者不会相互阻塞。
市场上的消息代理有很多,而 Apache Kafka 是其中最受欢迎的一种(之一
市场上的消息代理有很多,而 Apache Kafka 是其中最受欢迎的之一。
### Apache Kafka
Apache Kafka 是一个支持流处理的、开源的分布式消息系统,它由 Apache 软件基金会开发。在架构上,它是多个代理组成的集群,这些代理间通过 Apache ZooKeeper 服务来协调。在接收、持久化和发送消息时,这些代理共享集群上的负载。
Apache Kafka 是一个支持流处理的、开源的分布式消息系统,它由 Apache 软件基金会开发。在架构上,它是多个代理组成的集群,这些代理间通过 Apache ZooKeeper 服务来协调。在接收、持久化和发送消息时,这些代理分担集群上的负载。
#### 分区
Kafka 将消息写入称为<ruby>分区<rt>partitions</rt></ruby>的桶中。一个特定分区只保存一个主题上的消息。例如Kafka 会把 `heartbeats` 主题上的消息写入名为 “heartbeats-0” 的分区(假设它是个单分区主题),这个过程和生产者无关。
Kafka 将消息写入称为<ruby>分区<rt>partition</rt></ruby>的桶中。一个特定分区只保存一个主题上的消息。例如Kafka 会把 `heartbeats` 主题上的消息写入名为 `heartbeats-0` 的分区(假设它是个单分区主题),这个过程和生产者无关。
![图 1异步消息][2]
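作为一个简单的补充示意(并非本文后面使用的 Java 代码),下面用 Python 的 kafka-python 客户端向 `heartbeats` 主题发送一条消息;假设已经安装了 kafka-python,且 Kafka 运行在本机的 9092 端口上:
```
from kafka import KafkaProducer

# 连接本地代理(假设 Kafka 监听 localhost:9092)
producer = KafkaProducer(bootstrap_servers='localhost:9092')

# 向 heartbeats 主题发送一条消息;
# 如果该主题只有一个分区,这条消息就会被写入 heartbeats-0 分区
producer.send('heartbeats', value=b'node-1 alive')

producer.flush()   # 等待消息真正发送出去
producer.close()
```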
@ -50,21 +52,21 @@ Kafka 将消息写入称为<ruby>“分区”<rt>partitions</rt></ruby>的桶中
#### 领导者和同步副本
Kafka 在(由多个代理组成的)集群中维护了多个分区。其中,负责维护分区的那个代理被称为<ruby>领导者<rt>leader</rt></ruby>。只有领导者能够在它的分区上接收和发送消息。
可是,万一分区的领导者发生故障了,又该怎么办呢?为了确保业务连续性,每个领导者(代理)都会把它的分区复制到其他代理上。此时,这些其他代理就称为该分区的<ruby>同步副本<rt>in-sync-replicas</rt></ruby>(ISR)。一旦分区的领导者发生故障,ZooKeeper 就会发起一次选举,把选中的那个同步副本任命为新的领导者。此后,这个新的领导者将承担该分区的消息接收和发送任务。管理员可以指定分区需要维护的同步副本的数量。
![图 3命令行生产者][4]
![图 3生产者命令行工具][4]
#### 消息持久化
代理会将每个分区都映射到一个指定的磁盘文件,从而实现持久化。默认情况下,消息会在磁盘上保留一个星期。当消息写入分区后,它们的内容和顺序就不能更改了。管理员可以配置一些策略,如消息的保留时长、压缩算法等。
![图 4命令行消费者][5]
![图 4消费者命令行工具][5]
#### 消费消息
与大多数其他消息系统不同Kafka 不会主动将消息发送给消费者。相反消费者应该监听主题并主动读取消息。一个消费者可以某个主题的多个分区中读取消息。多个消费者也可以读取来自同一个分区的消息。Kafka 保证了同一条消息不会被同一个消费者重复读取。
与大多数其他消息系统不同,Kafka 不会主动将消息发送给消费者。相反,消费者应该监听主题,并主动读取消息。一个消费者可以从某个主题的多个分区中读取消息。多个消费者也可以读取来自同一个分区的消息。Kafka 保证了同一条消息不会被同一个消费者重复读取。
Kafka 中的每个消费者都有一个组 ID。那些组 ID 相同的消费者们共同组成了一个消费者组。通常,为了从 N 个主题分区读取消息,管理员会创建一个包含 N 个消费者的消费者组。这样一来,组内的每个消费者都可以从它的指定分区中读取消息。如果组内的消费者比可用分区还要多,那么多出来的消费者就会处于闲置状态。
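下面是一个消费者组的简单示意,仍然假设使用 kafka-python 客户端(其中的主题名和组 ID 只是示例):消费者在订阅主题时指定组 ID,然后主动轮询读取消息:
```
from kafka import KafkaConsumer

# 以 demo-group 这个组 ID 订阅 topic-1 主题
consumer = KafkaConsumer(
    'topic-1',
    bootstrap_servers='localhost:9092',
    group_id='demo-group',
    auto_offset_reset='earliest',  # 从最早的消息开始读取
)

# 消费者主动读取消息,而不是等代理推送
for message in consumer:
    print(message.partition, message.offset, message.value)
```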
@ -116,9 +118,9 @@ listeners=PLAINTEXT://:9092
上述命令还同时指定了<ruby>复制因子<rt>replication factor</rt></ruby>,它的值不能大于集群中代理的数量。我们使用的是单代理集群,因此,复制因子只能设置为 1。
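除了命令行工具,也可以在程序里通过管理客户端创建主题并指定复制因子。下面是一个基于 kafka-python 的简单示意(其中的主题名 `topic-2` 只是示例):
```
from kafka.admin import KafkaAdminClient, NewTopic

# 连接单代理集群(假设运行在 localhost:9092)
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# 单代理集群中,复制因子最大只能设置为 1
topic = NewTopic(name='topic-2', num_partitions=1, replication_factor=1)
admin.create_topics([topic])

admin.close()
```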
当主题创建完成后生产者和消费者就可以在上面交换消息了。Kafka 的发行版内附带了命令行工具生产者和消费者,供测试时用。
当主题创建完成后生产者和消费者就可以在上面交换消息了。Kafka 的发行版内附带了生产者和消费者的命令行工具,供测试时用。
打开第三个终端,运行下面的命令,启动命令行生产者:
打开第三个终端,运行下面的命令,启动生产者:
```
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-1
@ -126,7 +128,7 @@ listeners=PLAINTEXT://:9092
上述命令显示了一个提示符,我们可以在后面输入简单文本消息。由于我们指定的命令选项,生产者会把 `topic-1` 上的消息,发送到运行在本机的 9092 端口的 Kafka 中。
打开第四个终端,运行下面的命令,启动命令行消费者:
打开第四个终端,运行下面的命令,启动消费者:
```
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-1 --from-beginning
@ -146,7 +148,7 @@ listeners=PLAINTEXT://:9092
![图 5基于 Kafka 的架构][6]
在这种架构下,客车上的设备扮演了消息生产者的角色。它们会周期性地把当前位置发送到 Kafka 的 `abc-bus-location` 主题上。ABC 公司选择以客车的<ruby>行程<rt>trip code</rt></ruby>作为消息键,以处理来自不同客车的消息。例如,对于从 Bengaluru 到 Hubballi 的客车,它的行程就会是 `BLRHL003`,那么在这段旅程中,对于所有来自该客车的消息,它们的消息键都会是 `BLRHL003`
在这种架构下,客车上的设备扮演了消息生产者的角色。它们会周期性地把当前位置发送到 Kafka 的 `abc-bus-location` 主题上。ABC 公司选择以客车的<ruby>行程编号<rt>trip code</rt></ruby>作为消息键,以处理来自不同客车的消息。例如,对于从 Bengaluru 到 Hubballi 的客车,它的行程编号就会是 `BLRHL003`,那么在这段旅程中,对于所有来自该客车的消息,它们的消息键都会是 `BLRHL003`
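作为补充示意(并非原文的 Java 代码),下面用 kafka-python 演示这种以行程编号作为消息键的发送方式;键相同的消息会被写入同一个分区,从而保持同一趟行程内位置消息的先后顺序(其中的坐标值只是示例):
```
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# 来自行程 BLRHL003 的所有位置更新都使用相同的消息键
producer.send(
    'abc-bus-location',
    key=b'BLRHL003',
    value=b'13.071362, 77.461906',
)

producer.flush()
producer.close()
```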
仪表盘应用扮演了消息消费者的角色。它在代理上注册了同一个主题 `abc-bus-location`。如此,这个主题就成为了生产者(客车)和消费者(仪表盘)之间的虚拟通道。
@ -166,7 +168,7 @@ listeners=PLAINTEXT://:9092
##### Java 生产者
下面的 `Fleet` 类模拟了在 ABC 公司的 6 辆客车上运行的 Kafka 生产者应用。它会把位置更新发送到指定代理的 `abc-bus-location` 主题上。请注意,简单起见,主题名称、消息键、消息内容和代理地址等,都在代码里写死了
下面的 `Fleet` 类模拟了在 ABC 公司的 6 辆客车上运行的 Kafka 生产者应用。它会把位置更新发送到指定代理的 `abc-bus-location` 主题上。请注意,简单起见,主题名称、消息键、消息内容和代理地址等,都是硬编码在代码里的:
```
public class Fleet {
@ -205,7 +207,7 @@ public class Fleet {
##### Java 消费者
下面的 `Dashboard` 类实现了一个 Kafka 消费者应用,运行在 ABC 公司的操作中心。它会监听 `abc-bus-location` 主题,并且它的消费者组 ID 是 `abc-dashboard`。当收到消息后,它会立即显示来自客车的详细位置信息。我们本该配置这些详细位置信息,但简单起见,它们也是在代码里写死的:
下面的 `Dashboard` 类实现了一个 Kafka 消费者应用,运行在 ABC 公司的操作中心。它会监听 `abc-bus-location` 主题,并且它的消费者组 ID 是 `abc-dashboard`。当收到消息后,它会立即显示来自客车的详细位置信息。我们本该配置这些详细位置信息,但简单起见,它们也是在代码里硬编码的:
```
public static void main(String[] args) {
@ -241,7 +243,7 @@ public static void main(String[] args) {
##### 依赖
为了编译和运行这些代码,我们需要 JDK 8 及以上版本。看到下面的 `pom.xml` 文件中的 Maven 依赖了吗?它们会把所需的 Kafka 客户端库下载并添加到类路径中:
```
<dependency>
@ -283,7 +285,7 @@ via: https://www.opensourceforu.com/2021/11/apache-kafka-asynchronous-messaging-
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,44 +3,44 @@
[#]: author: "Tridev Reddy https://www.opensourceforu.com/author/tridev-reddy/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14770-1.html"
将 Zeek 与 ELK 栈集成
======
Zeek 是一个开源的网络安全监控工具。本文讨论了如何将 Zeek 与 ELK 集成。
![Integrating-Zeek-with-ELK-Stack-Featured-image][1]
> Zeek 是一个开源的网络安全监控工具。本文讨论了如何将 Zeek 与 ELK 集成。
在本杂志 2022 年 3 月版发表的题为“用 Zeek 轻松实现网络安全监控”的文章中,我们研究了 Zeek 的功能,并学习了如何开始使用它。现在我们将把我们的学习经验再进一步,看看如何将其与 ELK也称为 Elasticsearch、Kibana、Beats 和 Logstash整合。
![](https://img.linux.net.cn/data/attachment/album/202206/28/164550v4nuk3g7ux77y77v.jpg)
在本杂志 2022 年 3 月版发表的题为“用 Zeek 轻松实现网络安全监控”的文章中,我们研究了 Zeek 的功能,并学习了如何开始使用它。现在我们将把我们的学习经验再进一步,看看如何将其与 ELK即 Elasticsearch、Kibana、Beats 和 Logstash整合。
为此,我们将使用一个叫做 Filebeat 的工具,它可以监控、收集并转发日志到 Elasticsearch。我们将把 Filebeat 和 Zeek 配置在一起,这样后者收集的数据将被转发并集中到我们的 Kibana 仪表盘上。
### 安装 Filebeat
让我们首先将 Filebeat 与 Zeek 安装在一起。使用 *apt* 来安装 Filebeat使用以下命令
让我们首先将 Filebeat 与 Zeek 安装在一起。使用 `apt` 来安装 Filebeat使用以下命令
```
sudo apt install filebeat
```
接下来,我们需要配置 *.yml* 文件,它位于 /etc*/filebeat/* 文件夹中:
接下来,我们需要配置 `.yml` 文件,它位于 `/etc/filebeat/` 文件夹中:
```
sudo nano /etc/filebeat/filebeat.yml
```
我们只需要在这里配置两件事。在 *Filebeat* 输入部分,将类型改为 log并取消对 *enabled*:false 的注释,将其改为 true。我们还需要指定存储日志的路径也就是说我们需要指定*/opt/zeek/logs/current/\*.log*
我们只需要在这里配置两件事。在 Filebeat 输入部分,将类型改为 `log`,并取消对 `enabled:false` 的注释,将其改为 `true`。我们还需要指定存储日志的路径,也就是说,我们需要指定 `/opt/zeek/logs/current/*.log`
完成这些后,设置的第一部分应该类似于图 1 所示的内容。
![Figure 1: Filebeat config (a)][2]
在 Elasticsearch 输出部分,第二件要修改的事情是在 *Outputs*下,取消对 output.elasticsearch 和 hosts 的注释。确保主机的 URL 和端口号与你安装 ELK 时配置的相似。我们把它保持为 localhost端口号为 9200。
第二件要修改的事情是“输出”部分中的 Elasticsearch 输出设置:取消对 `output.elasticsearch` 和 `hosts` 的注释。确保主机的 URL 和端口号与你安装 ELK 时的配置一致。我们把它保持为 `localhost`,端口号为 `9200`。
在同一部分中,取消底部的用户名和密码,输入安装后配置 ELK 时生成的弹性用户的用户名和密码。完成这些后,参考图 2检查设置。
在同一部分中,取消底部的用户名和密码的注释,输入安装后配置 ELK 时生成的 Elasticsearch 用户的用户名和密码。完成这些后,参考图 2检查设置。
![Figure 2: Filebeat config (b)][3]
@ -51,7 +51,7 @@ cd /opt/zeek/bin
./zeekctl stop
```
现在我们需要在 local.zeek 中添加一小行,它存在于 *opt/zeek/share/zeek/site/* 目录中。
现在我们需要在 `local.zeek` 中添加一小行,它位于 `/opt/zeek/share/zeek/site/` 目录中。
以 root 身份打开该文件,添加以下行:
@ -76,11 +76,11 @@ cd /opt/zeek/bin
sudo filebeat modules enable zeek
```
我们几乎要好了。在最后一步,配置 *zeek.yml* 文件要记录什么类型的数据。这可以通过修改 */etc/filebeat/modules.d/zeek.yml* 文件完成。
我们就快完成了。最后一步是配置 `zeek.yml` 文件,指定要记录哪些类型的数据。这可以通过修改 `/etc/filebeat/modules.d/zeek.yml` 文件完成。
在这个 *.yml 文件*中,我们必须提到这些指定的日志存放在哪个目录下。我们知道,这些日志存储在当前文件夹中,其中有几个文件,如 *dns.log*、*conn.log、dhcp.log* 等等。我们需要在每个部分提到每个路径。如果而且只有在你不需要该文件/程序的日志时,你可以通过把启用值改为 false 来舍弃不需要的文件。
在这个 .yml 文件中,我们必须指明这些日志存放在哪个目录下。我们知道,这些日志存储在 current 文件夹中,其中有几个文件,如 `dns.log`、`conn.log`、`dhcp.log` 等等。我们需要在每个部分写明对应的路径。只有当你不需要某个文件/程序的日志时,才可以把对应的启用值改为 `false`,以舍弃不需要的文件。
例如,对于 *dns*,确保启用值为 “true”,并且路径被配置:
例如,对于 `dns`,确保启用值为 `true`,并且路径被配置:
```
var.paths: [ "/opt/zeek/logs/current/dns.log", "/opt/zeek/logs/*.dns.json" ]
@ -108,7 +108,7 @@ sudo service filebeat start
现在让我们进入发现选项卡,通过使用查询进行过滤来检查结果:
```
event.module: “zeek”
event.module: "zeek"
```
这个查询将过滤它在一定时间内收到的所有数据,只向我们显示名为 Zeek 的模块的数据(图 7
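除了在 Kibana 界面中过滤,也可以直接查询 Elasticsearch 来验证 Zeek 的数据是否已经入库。下面是一个简单的示意,假设使用 Elasticsearch 8.x 的 Python 客户端,且地址和 elastic 用户的密码与前文配置一致:
```
from elasticsearch import Elasticsearch

# 连接本地 Elasticsearch(密码换成你自己配置的)
es = Elasticsearch('http://localhost:9200', basic_auth=('elastic', '<你的密码>'))

# 查询 Filebeat 索引中由 Zeek 模块产生的文档
resp = es.search(
    index='filebeat-*',
    query={'match': {'event.module': 'zeek'}},
    size=5,
)

print('命中数:', resp['hits']['total']['value'])
for hit in resp['hits']['hits']:
    print(hit['_source'].get('@timestamp'), hit['_source'].get('event', {}).get('dataset'))
```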
@ -127,7 +127,7 @@ via: https://www.opensourceforu.com/2022/06/integrating-zeek-with-elk-stack/
作者:[Tridev Reddy][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,17 +3,16 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14769-1.html"
使用 Python 的 requests 和 Beautiful Soup 来分析网页
======
学习这个 Python 教程,轻松提取网页的有关信息。
![带问号的 Python 语言图标][1]
![](https://img.linux.net.cn/data/attachment/album/202206/28/132859owwf9az49k2oje2o.jpg)
图源Opensource.com
> 学习这个 Python 教程,轻松提取网页的有关信息。
浏览网页可能占了你一天中的大部分时间。然而,你总是需要手动浏览,这很讨厌,不是吗?你必须打开浏览器,然后访问一个网站,单击按钮,移动鼠标……相当费时费力。如果能够通过代码与互联网交互,岂不是更好吗?
@ -69,7 +68,7 @@ print(SOUP.p)
### 循环
使用 Beautiful Soup 的 `find_all` 函数,你可以创建一个 for 循环,从而遍历 `SOUP` 变量中包含的整个网页。除了 `<p>` 标签之外,你可能也会对其他标签感兴趣,因此最好将其构建为自定义函数,由 Python 中的 `def` 关键字(意思是 <ruby>“定义”<rt>define</rt></ruby>)指定。
使用 Beautiful Soup 的 `find_all` 函数,你可以创建一个 `for` 循环,从而遍历 `SOUP` 变量中包含的整个网页。除了 `<p>` 标签之外,你可能也会对其他标签感兴趣,因此最好将其构建为自定义函数,由 Python 中的 `def` 关键字(意思是 <ruby>“定义”<rt>define</rt></ruby>)指定。
```
def loopit():
@ -77,7 +76,7 @@ def loopit():
        print(TAG)
```
你可以随意更改临时变量 `TAG` 的名字,例如 `ITEM``i` 或任何你喜欢的。每次循环运行时,`TAG` 中都会包含`find_all` 函数的搜索结果。在此代码中,它搜索的是 `<p>` 标签。
你可以随意更改临时变量 `TAG` 的名字,例如 `ITEM``i` 或任何你喜欢的。每次循环运行时,`TAG` 中都会包含 `find_all` 函数的搜索结果。在此代码中,它搜索的是 `<p>` 标签。
函数不会自动执行,除非你显式地调用它。你可以在代码的末尾调用这个函数:
@ -92,7 +91,7 @@ if __name__ == '__main__':
### 只获取内容
你可以通过指定只需要 <ruby>字符串<rt>string</rt></ruby>(它是 <ruby>单词<rt>words</rt></ruby> 的编程术语)来排除打印标签。
```
def loopit():
@ -125,8 +124,8 @@ def loopit():
你可以使用 Beautiful Soup 和 Python 提取更多信息。以下是有关如何改进你的应用程序的一些想法:
* [接受输入][3],这样你就可以在启动应用程序时,指定要下载和分析的 URL。
* 统计页面上图片(<img> 标签)的数量。
* 统计另一个标签中的图片(<img> 标签)的数量(例如,仅出现在 `<main>` div 中的图片,或仅出现在 `</p>` 标签之后的图片)。
* 统计页面上图片(`<img>` 标签)的数量。
* 统计另一个标签中的图片(`<img>` 标签)的数量(例如,仅出现在 `<main>` div 中的图片,或仅出现在 `</p>` 标签之后的图片)。可以参考下面的示意代码。
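下面是一个简单的示意,结合了上面列表中的第一个和最后一个想法:从命令行参数接收要分析的 URL,并统计 `<main>` 标签内的图片数量(假设已经安装了 requests 和 beautifulsoup4,函数名只是示例):
```
import sys

import requests
from bs4 import BeautifulSoup

def count_main_images(url):
    # 下载页面并用 html.parser 解析
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')

    # 先定位 <main> 标签,再统计其中的 <img> 标签数量
    main = soup.find('main')
    if main is None:
        return 0
    return len(main.find_all('img'))

if __name__ == '__main__':
    print(count_main_images(sys.argv[1]))
```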
--------------------------------------------------------------------------------
@ -135,7 +134,7 @@ via: https://opensource.com/article/22/6/analyze-web-pages-python-requests-beaut
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,35 +3,36 @@
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14768-1.html"
这个开源项目证明了 Chrome 扩展可以跟踪你
你安装的 Chrome 扩展的组合可以跟踪你
======
这会成为放弃基于 Chromium 的浏览器并开始使用 Firefox 的一个理由吗?也许吧,决定权在你。
> 这会成为放弃基于 Chromium 的浏览器并开始使用 Firefox 的一个理由吗?也许吧,决定权在你。
![Chrome 扩展追踪器][1]
即使你有了所有的隐私扩展和高级的保护功能,别人仍然有方法可以识别你或跟踪你。
即使你有了所有的隐私扩展和各种保护功能,别人仍然有方法可以识别你或跟踪你。
请注意,并非所有浏览器都是如此,本文中,我们只关注基于 Chromium 的浏览器,并将 Google Chrome 作为“主要嫌疑人”。
请注意,并非所有浏览器都是如此,本文中,我们主要关注基于 Chromium 的浏览器,并将谷歌 Chrome 作为“主要嫌疑人”。
以前,尽管别人已经能够你的检测 Chromium 浏览器上,检测到你已安装的扩展程序,但许多扩展程序都实施了某些保护措施来防止这种检测。
以前,在 Chromium 浏览器上,尽管别人已经能够检测到你已安装的扩展程序,但许多扩展程序都实施了某些保护措施来防止这种检测。
然而,一位名为 “**z0ccc**” 的安全研究人员发现了一种检测已安装 Chrome 浏览器扩展程序的新方法,该方法可进一步用于**通过“浏览器指纹识别”来跟踪你**。
**如果你还不知道的话**<ruby>浏览器指纹识别<rt>Browser Fingerprinting</rt></ruby>是指收集有关你的设备/浏览器的各种信息,以创建唯一的指纹 ID哈希从而在互联网上识别你的一种跟踪方法。“各种信息”包括浏览器名称、版本、操作系统、已安装的扩展程序、屏幕分辨率和类似的技术数据。
如果你还不知道的话:<ruby>浏览器指纹识别<rt>Browser Fingerprinting</rt></ruby>是指收集有关你的设备/浏览器的各种信息,以创建唯一的指纹 ID哈希从而在互联网上识别你的一种跟踪方法。“各种信息”包括浏览器名称、版本、操作系统、已安装的扩展程序、屏幕分辨率和类似的技术数据。
这听起来像是一种无害的数据收集技术,但可以使用这种跟踪方法在线跟踪你。
### 检测 Google Chrome 扩展
### 检测谷歌 Chrome 扩展
研究人员分享了一个开源项目 “**Extension Fingerprints**”,你可以使用它来测试,你安装的 Chrome 扩展,是否正在被人检测
研究人员发布了一个开源项目 “**Extension Fingerprints**”,你可以使用它来测试你安装的 Chrome 扩展是否能被检测到
新技术涉及一种“时差”方法,该工具比较了扩展获取资源的时间。与浏览器上未安装的其他扩展相比,受保护的扩展需要更多时间来获取资源。因此,这有助于从 1000 多个扩展列表中识别出一些扩展。
新技术涉及一种“时差”方法,该工具比较了扩展程序获取资源的时间。与浏览器上未安装的其他扩展相比,受保护的扩展需要更多时间来获取资源。因此,这有助于从 1000 多个扩展列表中识别出一些扩展。
关键是:即使有了这些新的进步和技术来防止跟踪Chrome 网上应用店的扩展也可以被检测到。
关键是:即使有了各种新的进步和技术来防止跟踪Chrome 网上应用店的扩展也可以被检测到。
![][2]
@ -41,11 +42,11 @@
你可以在它的 [GitHub 页面][3] 上查看所有技术细节。如果你想自己测试它,请前往它的 [扩展指纹识别网站][4] 自行检查。
### 大救星 Firefox
### 拯救 Firefox
嗯,似乎是的,毕竟我出于各种原因,[不断回到 Firefox][5]。
这个新发现的(跟踪)方法应该适用于所有基于 Chromium 的浏览器。我在 Brave 和 Google Chrome 上都测试了这个方法。研究人员还提到,该工具不适用于在 Microsoft Edge 中使用的,微软应用商店中的扩展。但是,相同的跟踪方法仍然有效。
这个新发现的(跟踪)方法应该适用于所有基于 Chromium 的浏览器。我在 Brave 和谷歌 Chrome 上都测试了这个方法。研究人员还提到,该工具不能在使用微软应用商店中的扩展的微软 Edge 上工作。但是,相同的跟踪方法仍然有效。
正如研究人员指出Mozilla Firefox 可以避免这种情况,因为每个浏览器实例的扩展 ID 都是唯一的。
@ -56,7 +57,7 @@ via: https://news.itsfoss.com/chrome-extension-tracking/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,29 +3,30 @@
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14774-1.html"
Minetest一个开源的 Minecraft 替代品
Minetest一个开源的 Minecraft 替代品
======
早在 2009 年Minecraft 就被介绍给了世界。从那时起它已经成为一种文化现象。在这段时间里一些开发者发布了具有类似想法和机制的开源游戏。今天我们将看看其中最大的一个Minetest。
早在 2009 年Minecraft 就来到了这个世界。从那时起它已经成为一种文化现象。在这段时间里一些开发者发布了具有类似想法和机制的开源游戏。今天我们将看看其中最大的一个Minetest。
### 什么是 Minetest
![minetest start][1]
![](https://img.linux.net.cn/data/attachment/album/202206/29/151524eem52oyatm2tz2dr.jpg)
[Minetest][2],简单地说,是一个基于体素的沙盒游戏,与 Minecraft 非常相似。与 Minecraft 不同的是Minetest 是用 C++ 编写的,并被设计成可以在大多数系统上原生运行。它也有一个非常大的地图区域。地图大小为 “62,000 × 62,000 × 62,000 块”,“你可以向下开采 31,000 块,或向上建造 31,000 块”。
[Minetest][2],简单地说,是一个基于<ruby>体素<rt>voxel</rt></ruby>的沙盒游戏,与 Minecraft 非常相似。与 Minecraft 不同的是Minetest 是用 C++ 编写的,并被设计成可以在大多数系统上原生运行。它也有一个非常大的地图区域。地图大小为 “62,000 × 62,000 × 62,000 块”,“你可以向下开采 31,000 块,或向上建造 31,000 块”。
有趣的是Minetest 最初是以专有许可证发布的,但后来被重新授权为 GPL。此后它又被重新授权为 LGPL。
Minetest 有几种模式。你可以建造并发挥创意或者你可以尝试在各种元素中生存。你并不局限于这些模式。Minetest 有大量的[可用的额外内容][3],包括 mods、纹理包和在 Minetest 中建立的游戏。这主要是通过 Minetest 的 [modding API][4] 和 Lua 完成的。
Minetest 有几种模式。你可以建造并发挥创意或者你可以尝试在各种元素中生存。你并不局限于这些模式。Minetest 有大量的 [额外内容][3],包括 <ruby>模组<rt>mod</rt></ruby>、纹理包和在 Minetest 中建立的游戏。这主要是通过 Minetest 的 [模组 API][4] 和 Lua 完成的。
![minetest packages][5]
对于那些玩过 Minecraft 的人来说,你会发现 Minetest 中的体验非常相似。你可以挖掘资源,建造结构,并结合材料来制作工具。我在 Minetest 中没有注意到的一件事是怪物。我认为 Minetest 中没有任何生物,但话说回来,我只在创意模式中玩过。我还没有玩过生存模式。
Minetest 也被用于[教育][6]。例如,瑞士 CERN 的人用 Minetest 创造了一个游戏,以[展示互联网是如何工作的][7]以及它是如何被创造出来的。Minetest 还被[用于教授][8]编程、地球科学以及微积分和三角学。
Minetest 也被用于 [教育][6]。例如,瑞士 CERN 的人用 Minetest 创造了一个游戏,以 [展示互联网是如何工作的][7] 以及它是如何被创造出来的。Minetest 还被用于 [教授][8] 编程、地球科学以及微积分和三角学。
![minetes map1][9]
@ -73,8 +74,6 @@ FreeBSD 用户很幸运。他们可以用这个命令安装 Minetest:
pkg install minetest minetest_game
```
![minetest map2][10]
#### Snap
要安装 Minetest 的 Snap 包,请在终端输入以下命令:
@ -91,13 +90,13 @@ sudo snap install minetest
flatpak install flathub net.minetest.Minetest
```
你可以在[这里][11]下载 Windows 的可移植执行文件。你也可以在 Android 上安装 Minetest可以通过 [Google Play][12] 或[下载 APK][13]。
你可以在 [这里][11] 下载 Windows 的可移植执行文件。你也可以在 Android 上安装 Minetest可以通过 [Google Play][12] 或 [下载 APK][13]。
### 最后的想法
### 总结
![minetest about][14]
我已经在 Minetest 中花了几个小时在我的本地系统上进行构建和探索。它非常有趣。我还没来得及尝试任何额外的内容,因为我对我玩过的相对较少的游戏部分非常满意。我遇到的唯一麻烦是,由于某种原因,它在 Fedora 上运行缓慢。我可能一些配置上的错误。
我已经在 Minetest 中花了几个小时,在我的本地系统上进行构建和探索。它非常有趣。我还没来得及尝试任何额外的内容,因为我对我玩过的相对较少的游戏部分非常满意。我遇到的唯一麻烦是,由于某种原因,它在 Fedora 上运行缓慢。可能是我的配置有些问题。
如果你曾经认为 Minecraft 看起来很有趣,但不想花钱,那就去看看 Minetest。你会很高兴你这么做。
@ -110,7 +109,7 @@ via: https://itsfoss.com/minetest/
作者:[John Paul][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,101 @@
[#]: subject: "EndeavourOS Artemis is the First ISO with ARM Installation Support"
[#]: via: "https://news.itsfoss.com/endeavouros-artemis-release/"
[#]: author: "nikhil https://news.itsfoss.com/author/nikhil/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
EndeavourOS Artemis is the First ISO with ARM Installation Support
======
EndeavourOS includes the latest and greatest with the new release, along with beta support for ARM installations.
![endeavour os][1]
The popular Arch-based Linux distribution [EndeavourOS][2] released their latest ISO refresh called Artemis. Interestingly, the release is named after NASA's upcoming lunar mission.
Apart from the usual improvements, the latest upgrade includes the latest [Linux Kernel 5.18.5][3] and an updated Calamares installer.
More importantly, with this release, EndeavourOS is closer to a complete ARM installation support. Let us talk more about it!
### Closing in on ARM Installation
The devs at EndeavourOS have updated the Calamares installer to handle installations to ARM devices, but it is still in beta.
Technically, you will find an integration install option on the welcome app of the main ISO. The developer also mentions that both the repos for ARM and the main ISO are more in sync from now on.
Of course, it is exciting to see the addition, nevertheless! The announcement post also mentioned:
> The new installer is a beta release and has limited device support for now, but we are going to add more devices in the future. The team currently is brainstorming to add the first step, the base install, in the Calamares installer also, so it will only take one step to install ARM
Note that only Odroid N2/N2+ and the Raspberry PI are supported right now. So, you can test it out, if you are interested to experiment.
If you are curious about the process of installation for ARM devices, here's a quick summary:
There are two stages to the new installation method:
![][4]
**Stage 1:**
* Boot into a live environment using the EndeavourOS Artemis ISO on your x86_64 computer.
* Connect the SD Card/SSD for your ARM computer as its primary storage.
* Launch Calamares.
* Click on the ARM install button and select the SD Card/SSD you connected and follow with the installation.
**Stage 2**
* Once the previous stage is over, unplug the SD Card/SSD, connect it to your ARM computer, and boot into it.
* You'll be greeted with a modified Welcome app that lets you set up the device's keyboard layout, timezone, passwords, etc.
* You'll also be able to download other DEs/WMs from this screen.
* After this setup, Calamares will delete itself and you can use the device as usual.
For more details, you can check out their blog post on [ARM installation support][5].
### Other Improvements
It is obvious to expect the latest and greatest package updates, being an Arch-based distro.
You will find Firefox 101.0.1 out of the box, but you should soon be able to update it to Firefox 102.
This release also comes with the latest versions of Mesa and Calamares.
Some of the other changes include:
* Wireplumber has replaced pipewire-media-session as the session and policy manager for Pipewire
* The package budgie-control-center has been added to the EndeavourOS repo for a smoother and native Budgie experience.
* Offline Xfce install received improvements.
* Xfce4 and i3 install will not autostart firewall-applet by default anymore.
Also, EndeavourOS packages can now be downgraded with eos-downgrade.
You can check out the [official announcement][6] for more details.
### Download EndeavourOS Artemis
The latest release ISO is available on the official website. Head over to the [download page][7] and get the latest image from one of the available mirrors.
[EndeavourOS Artemis][8]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/endeavouros-artemis-release/
作者:[nikhil][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/nikhil/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/endeavour-os-artemis-iso.jpg
[2]: https://endeavouros.com
[3]: https://news.itsfoss.com/linux-kernel-5-18-release/
[4]: https://news.itsfoss.com/wp-content/uploads/2022/06/endeavour-os-arm.jpg
[5]: https://arm.endeavouros.com/2022/06/24/artemis-with-new-endeavouros-arm-install/
[6]: https://endeavouros.com/news/artemis-is-launched/
[7]: https://endeavouros.com/latest-release/
[8]: https://endeavouros.com/

View File

@ -0,0 +1,75 @@
[#]: subject: "Firefox 102 Release Lets You Disable Download Panel and Improves Picture-in-Picture Mode"
[#]: via: "https://news.itsfoss.com/firefox-102-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Firefox 102 Release Lets You Disable Download Panel and Improves Picture-in-Picture Mode
======
Mozilla Firefox 102 release is here with some solid changes, and useful feature additions!
![][1]
A new Firefox version upgrade is here. While it may not be a major feature update like [Firefox 100][2], it includes some useful enhancements for a better browsing experience.
Here's what's new:
### Firefox 102: New Additions and Improvements
With this release, you can finally disable the automatic opening of the download panel every time a new download starts. So, you wont have too many windows crowding your screen.
It also seems that they have added some refinements to the Picture-in-Picture feature with subtitles. You should have better support for it with more streaming platforms, including Disney+ Hotstar, HBO Max, SonyLIV, and a few others.
![][3]
Firefox 102 also improves security by moving audio decoding into a separate process with enhanced sandboxing. The process remains isolated, thus giving you more security.
Additionally, there are some screen reader improvements on Windows.
For **developers**, there are a couple of important changes that include:
* Introducing support for [Transform streams][4] which also includes new interfaces.
* Support for [readable byte streams][5].
* Removal of the Firefox-only property Window.sidebar.
* You can now filter style sheets in the Style Editor tab of our developer tools.
Firefox also adds a new enterprise policy and a configuration setting that make sure that downloads meant to be opened are initially stored in a temporary folder.
If the downloaded file is saved, it will be stored in the download folder. Mozilla explains more about it:
> There is now an enterprise policy (`StartDownloadsInTempDirectory`) and an about:config pref (`browser.download.start_downloads_in_tmp_dir`) that will once again cause Firefox to initially put downloads in (a subfolder of) the OS temp folder, instead of the download folder configured in Firefox. Files opened from the “what should Firefox do with this file” dialog, or set to open in helper applications automatically, will stay in this folder. Files saved (not opened as previously mentioned) will still end up in the Firefox download folder.
To learn more about the release, refer to the [full release notes][6].
### Download Firefox 102
You can download the latest Firefox 102 release from its official website or wait for the package update on your Linux distribution.
In either case, you can always use the [Snap package][7] to get the latest update now.
[Firefox 102 Download][8]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/firefox-102-release/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/06/firefox-102.jpg
[2]: https://news.itsfoss.com/firefox-100-release/
[3]: https://news.itsfoss.com/wp-content/uploads/2022/06/firefox-102-pip.jpg
[4]: https://developer.mozilla.org/en-US/docs/Web/API/TransformStream
[5]: https://developer.mozilla.org/en-US/docs/Web/API/Streams_API#bytestream-related_interfaces
[6]: https://www.mozilla.org/en-US/firefox/102.0/releasenotes/
[7]: https://snapcraft.io/firefox
[8]: https://www.mozilla.org/en-US/firefox/

View File

@ -1,261 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hanszhao80)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Djinn: A Code Generator and Templating Language Inspired by Jinja2)
[#]: via: (https://theartofmachinery.com/2021/01/01/djinn.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Djinn: A Code Generator and Templating Language Inspired by Jinja2
======
Code generators can be useful tools. I sometimes use the command line version of [Jinja2][1] to generate highly redundant config files and other text files, but it's feature-limited for transforming data. Obviously the author of Jinja2 thinks differently, but I wanted something like list comprehensions or D's composable range algorithms.
I decided to make a tool that's like Jinja2, but lets me generate complex files by transforming data with range algorithms. The idea was dead simple: a templating language that gets rewritten directly to D code. That way it supports everything D does, simply because it _is_ D. I wanted a standalone code generator, but thanks to [D's `mixin` feature][2], the same templating language works as an embedded templating language (for HTML in a web app, for example). (For more on that trick, see [this post about translating Brainfuck to D to machine code all at compile time using `mixin`s][3].)
As usual, [it's on GitLab][4]. [The examples in this post can be found there, too.][5]
### Hello world example
Here's an example to demonstrate the idea:
```
Hello [= retro("dlrow") ]!
[: enum one = 1; :]
1 + 1 = [= one + one ]
```
`[= some_expression ]` is like `{{ some_expression }}` in Jinja2, and it renders a value to the output. `[: some_statement; :]` is like `{% some_statement %}` and causes full code statements to be executed. I changed the syntax because D also uses curly braces a lot, and mixing the two made templates hard to read. (There are also special non-D directives, like `include`, that get wrapped in `[<` and `>]`.)
If you save the above to a file called `hello.txt.dj` and run the `djinn` command line tool against it, you'll get a file called `hello.txt` containing what you might guess:
```
Hello world!
1 + 1 = 2
```
If you've used Jinja2, you might be wondering what happened to the second line. Djinn has a special rule that simplifies formatting and whitespace handling: if a source line contains `[:` statements or `[<` directives but doesn't contain any non-whitespace output, the whole line is ignored for output purposes. Blank lines are still rendered.
### Generating data
Okay, now for something a bit more practical: generating CSV data.
```
x,f(x)
[: import std.mathspecial;
foreach (x; iota(-1.0, 1.0, 0.1)) :]
[= "%0.1f,%g", x, normalDistribution(x) ]
```
A `[=` and `]` pair can contain multiple expressions separated by commas. If the first expression is a double-quoted string, it's interpreted as a [format string][6]. Here's the output:
```
x,f(x)
-1.0,0.158655
-0.9,0.18406
-0.8,0.211855
-0.7,0.241964
-0.6,0.274253
-0.5,0.308538
-0.4,0.344578
-0.3,0.382089
-0.2,0.42074
-0.1,0.460172
0.0,0.5
0.1,0.539828
0.2,0.57926
0.3,0.617911
0.4,0.655422
0.5,0.691462
0.6,0.725747
0.7,0.758036
0.8,0.788145
0.9,0.81594
```
### Making an image
This example is just for the heck of it. [The classic netpbm image library defined a bunch of image formats][7], some of which are text-based. For example, here's an image of a 3x3 cross:
```
P2 # identifier for Portable GrayMap
3 3 # width and height
7 # value for pure white (0 is black)
7 0 7
0 0 0
7 0 7
```
You can save the above text to a file named something like `cross.pgm` and many image tools will understand it. Here's some Djinn code that generates a [Mandelbrot set][8] fractal in the same format:
```
[:
import std.complex;
enum W = 640;
enum H = 480;
enum kMaxIter = 20;
ubyte mb(uint x, uint y)
{
const c = complex(3.0 * (x - W / 1.5) / W, 2.0 * (y - H / 2.0) / H);
auto z = complex(0.0);
ubyte ret = kMaxIter;
while (abs(z) <= 2 && --ret) z = z * z + c;
return ret;
}
:]
P2
[= W ] [= H ]
[= kMaxIter ]
[: foreach (y; 0..H) :]
[= "%(%s %)", iota(W).map!(x => mb(x, y)) ]
```
The resulting file is about 800kB, but it compresses nicely as a PNG:
```
$ # Converting with GraphicsMagick
$ gm convert mandelbrot.pgm mandelbrot.png
```
And here it is:
![][9]
### Solving a puzzle
Here's a puzzle:
![][10]
The 5x5 grid needs to be filled in with numbers from 1 to 5, using each number once in each row, and once in each column. (I.e., to make a 5x5 Latin square.) The numbers in neighbouring cells must also satisfy the inequalities indicated by any `>` greater-than signs.
[I used linear programming (LP) some months ago.][11] LP problems are systems of continuous variables with linear constraints. This time I'll use mixed integer linear programming (MILP), which generalises LP by also allowing integer-constrained variables. It turns out that's enough to be NP-complete, and MILP happens to be reasonably good for modelling this puzzle.
In that previous post, I used the Julia library JuMP to help spec the problem. This time I'll use the [CPLEX text-based format][12], which is supported by several LP and MILP solvers (and can be easily converted to other formats by off-the-shelf tools if needed). Here's the LP from the previous post in CPLEX format:
```
Minimize
obj: v
Subject To
ptotal: pr + pp + ps = 1
rock: 4 ps - 5 pp - v <= 0
paper: 5 pr - 8 ps - v <= 0
scissors: 8 pp - 4 pr - v <= 0
Bounds
0 <= pr <= 1
0 <= pp <= 1
0 <= ps <= 1
End
```
CPLEX format is nice to read, but non-trivial problems take a lot of variables and constraints to model, making it painful and error-prone to write out manually. There are domain-specific languages like [ZIMPL][13] for speccing MILPs and LPs in a high-level way. They're pretty cool for many problems, but ultimately they're not as expressive as a general-purpose language with a good library like JuMP — or as a code generator with D.
I'll model the puzzle using two sets of variables: \(v_{r,c}\) and \(i_{r,c,v}\). \(v_{r,c}\) will hold the value (1-5) of the cell at row \(r\) and column \(c\). \(i_{r,c,v}\) will be an indicator binary that's 1 if the cell at row \(r\) and column \(c\) has value \(v\), and 0 otherwise. These two sets of variables are redundant representations of the grid, but the first representation makes it easier to model the inequality constraints, while the second representation makes it easier to model the uniqueness constraints. I just need to add some extra constraints to force the two representations to be consistent. But first, let's start with the basic constraint that each cell must have exactly one value. Mathematically, that means all the indicators for a given row and column must be 0, except for one that is 1. That can be enforced by this equation:
\[i_{r,c,1} + i_{r,c,2} + i_{r,c,3} + i_{r,c,4} + i_{r,c,5} = 1\]
The CPLEX constraints for all rows and columns can be generated with this Djinn code:
```
\ Cell has one value
[:
foreach (r; iota(N))
foreach (c; iota(N))
:]
[= "%-(%s + %)", vs.map!(v => ivar(r, c, v)) ] = 1
[::]
```
`ivar()` is a helper function that gives us the string identifier for an \(i\) variable, and `vs` stores the numbers 1-5 for convenience. The constraints for uniqueness within rows and columns are exactly the same, but iterating over the other two dimensions of \(i\).
To make the \(i\) vars consistent with the \(v\) vars, we need constraints like this (remember, only one of the \(i\) vars is non-zero):
\[i_{r,c,1} + 2i_{r,c,2} + 3i_{r,c,3} + 4i_{r,c,4} + 5i_{r,c,5} = v_{r,c}\]
CPLEX requires all variables to be on the left, so the Djinn code looks like this:
```
\ Link i vars with v vars
[:
foreach (r; iota(N))
foreach (c; iota(N))
:]
[= "%-(%s + %)", vs.map!(v => text(v, ' ', ivar(r, c, v))) ] - [= vvar(r,c) ] = 0
[::]
```
The constraints for the neighbouring cell inequalities and for the bottom left corner being 4 are all trivial to write. All that's left is to declare the indicator variables to be binary, and set the bounds for the \(v\) vars. All up, there are 150 variables and 111 constraints, plus bounds for the variables. [You can see the full code in the repo.][14]
The [GNU Linear Programming Kit][15] has a command line tool that can solve this CPLEX MILP. Unfortunately, its output is a big dump of everything, so I used awk to pull out what's needed:
```
$ time glpsol --lp inequality.lp -o /dev/stdout | awk '/v[0-9][0-9]/ { print $2, $4 }' | sort
v00 1
v01 3
v02 2
v03 5
v04 4
v10 2
v11 5
v12 4
v13 1
v14 3
v20 3
v21 1
v22 5
v23 4
v24 2
v30 5
v31 4
v32 3
v33 2
v34 1
v40 4
v41 2
v42 1
v43 3
v44 5
real 0m0.114s
user 0m0.106s
sys 0m0.005s
```
Here's the solution written out in the original grid:
![][16]
These examples are just for playing around, but I'm sure you get the idea. The `README.md` for the Djinn repo is itself generated using a Djinn template, by the way.
As I said, Djinn can also be used as a compile-time templating language embedded inside D code. I primarily wanted a code generator, but that's a bonus thanks to D's metaprogramming features.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2021/01/01/djinn.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[hanszhao80](https://github.com/hanszhao80)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://jinja2docs.readthedocs.io/en/stable/
[2]: https://dlang.org/articles/mixin.html
[3]: https://theartofmachinery.com/2017/12/31/compile_time_brainfuck.html
[4]: https://gitlab.com/sarneaud/djinn
[5]: https://gitlab.com/sarneaud/djinn/-/tree/v0.1.0/examples
[6]: https://dlang.org/phobos/std_format.html#format-string
[7]: http://netpbm.sourceforge.net/doc/#formats
[8]: https://en.wikipedia.org/wiki/Mandelbrot_set
[9]: https://theartofmachinery.com/images/djinn/mandelbrot.png
[10]: https://theartofmachinery.com/images/djinn/inequality.svg
[11]: https://theartofmachinery.com/2020/05/21/glico_weighted_rock_paper_scissors.html
[12]: http://lpsolve.sourceforge.net/5.0/CPLEX-format.htm
[13]: https://zimpl.zib.de/
[14]: https://gitlab.com/sarneaud/djinn/-/tree/v0.1.0/examples/inequality.lp.dj
[15]: https://www.gnu.org/software/glpk/
[16]: https://theartofmachinery.com/images/djinn/inequality_solution.svg

View File

@ -2,7 +2,7 @@
[#]: via: (https://jvns.ca/blog/2021/05/11/what-s-the-osi-model-/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (hanszhao80)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -81,7 +81,7 @@ via: https://jvns.ca/blog/2021/05/11/what-s-the-osi-model-/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[hanszhao80](https://github.com/hanszhao80)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,217 +0,0 @@
[#]: subject: "How static linking works on Linux"
[#]: via: "https://opensource.com/article/22/6/static-linking-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How static linking works on Linux
======
Learn how to combine multiple C object files into a single executable with static libraries.
![Woman using laptop concentrating][1]
Image by Mapbox Uncharted ERG, [CC-BY 3.0 US][2]
Code for applications written using C usually has multiple source files, but ultimately you will need to compile them into a single executable.
You can do this in two ways: by creating a static library or a dynamic library (also called a shared library). These two types of libraries vary in terms of how they are created and linked. Your choice of which to use depends on your use case.
In a [previous article][3], I demonstrated how to create a dynamically linked executable, which is the more commonly used method. In this article, I explain how to create a statically linked executable.
### Using a linker with static libraries
A linker is a command that combines several pieces of a program together and reorganizes the memory allocation for them.
The functions of a linker include:
* Integrating all the pieces of a program
* Figuring out a new memory organization so that all the pieces fit together
* Reviving addresses so that the program can run under the new memory organization
* Resolving symbolic references
As a result of all these linker functionalities, a runnable program called an executable is created.
Static libraries are created by copying all necessary library modules used in a program into the final executable image. The linker links static libraries as a last step in the compilation process. An executable is created by resolving external references, combining the library routines with program code.
### Create the object files
Here's an example of a static library, along with the linking process. First, create the header file `mymath.h` with these function signatures:
```
int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);
```
Create `add.c`, `sub.c` , `mult.c` and `divi.c` with these function definitions:
```
// add.c
int add(int a, int b){
return (a+b);
}
//sub.c
int sub(int a, int b){
return (a-b);
}
//mult.c
int mult(int a, int b){
return (a*b);
}
//divi.c
int divi(int a, int b){
return (a/b);
}
```
Now generate object files `add.o`, `sub.o`, `mult.o`, and `divi.o` using GCC:
```
$ gcc -c add.c sub.c mult.c divi.c
```
The `-c` option skips the linking step and creates only object files.
Create a static library called `libmymath.a`, then remove the object files, as they're no longer required. (Note that using a `trash` [command][4] is safer than `rm`.)
```
$ ar rs libmymath.a add.o sub.o mult.o divi.o
$ trash *.o
$ ls
add.c  divi.c  libmymath.a  mult.c  mymath.h  sub.c
```
You have now created a simple example math library called `libmymath`, which you can use in C code. There are, of course, very complex C libraries out there, and this is the process their developers use to generate the final product that you and I install for use in C code.
Next, use your math library in some custom code and then link it.
### Create a statically linked application
Suppose you've written a command for mathematics. Create a file called `mathDemo.c` and paste this code into it:
```
#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));
  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}
```
Notice that the first line is an `include` statement referencing, by name, your own `libmymath` library.
Create an object file called `mathDemo.o` for `mathDemo.c` :
```
$ gcc -I . -c mathDemo.c
```
The `-I` option tells GCC to search for header files listed after it. In this case, you're specifying the current directory, represented by a single dot (`.` ).
Link `mathDemo.o` with `libmymath.a` to create the final executable. There are two ways to express this to GCC.
You can point to the files:
```
$ gcc -static -o mathDemo mathDemo.o libmymath.a
```
Alternately, you can specify the library path along with the library name:
```
$ gcc -static -o mathDemo -L . mathDemo.o -lmymath
```
In the latter example, the `-lmymath` option tells the linker to link the object files present in the `libmymath.a` with the object file `mathDemo.o` to create the final executable. The `-L` option directs the linker to look for libraries in the following argument (similar to what you would do with `-I` ).
### Analyzing the result
Confirm that it's statically linked using the `file` command:
```
$ file mathDemo
mathDemo: ELF 64-bit LSB executable, x86-64...
statically linked, with debug_info, not stripped
```
Using the `ldd` command, you can see that the executable is not dynamically linked:
```
$ ldd ./mathDemo
        not a dynamic executable
```
You can also check the size of the `mathDemo` executable:
```
$ du -h ./mathDemo
932K    ./mathDemo
```
In the example from my [previous article][5], the dynamic executable took up just 24K.
Run the command to see it work:
```
$ ./mathDemo
Enter two numbers
10
5
10 + 5 = 15
10 - 5 = 5
10 * 5 = 50
10 / 5 = 2
```
Looks good!
### When to use static linking
Dynamically linked executables are generally preferred over statically linked executables because dynamic linking keeps an application's components modular. Should a library receive a critical security update, it can be easily patched because it exists outside of the applications that use it.
When you use static linking, a library's code gets "hidden" within the executable you create, meaning the only way to patch it is to re-compile and re-release a new executable every time a library gets an update—and you have better things to do with your time, trust me.
However, static linking is a reasonable option if the code of a library exists either in the same code base as the executable using it or in specialized embedded devices that are expected to receive no updates.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/static-linking-linux
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jayashree-huttanagoudar
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png
[2]: https://creativecommons.org/licenses/by/3.0/us/
[3]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux
[4]: https://www.redhat.com/sysadmin/recover-file-deletion-linux
[5]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux

View File

@ -1,96 +0,0 @@
[#]: subject: "Download YouTube Videos with VLC (Because, Why Not?)"
[#]: via: "https://itsfoss.com/download-youtube-videos-vlc/"
[#]: author: "Community https://itsfoss.com/author/itsfoss/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Download YouTube Videos with VLC (Because, Why Not?)
======
[VLC][1] is one of the [most popular video players for Linux][2] and other platforms.
It's not just a video player. It provides a number of multimedia and network-related features among other things. You'll be surprised to [learn what VLC is capable of][3].
I'll demonstrate a simple VLC feature and that is to download YouTube videos with it.
Yes. You can play YouTube videos in VLC and download them too. Let me show you how.
### Download YouTube videos with VLC Media Player
Now, there are ways to [download YouTube videos][4]. Use a browser extension or use dedicated websites or tools for it.
But if you don't want to use anything additional, the already installed VLC player can be used for this purpose.
**Important:** Before copying the link from YouTube, make sure to choose the desired video quality in the YouTube player, because you will get the same quality that the video was streaming in when you copied the link.
#### Step 1: Get the Video link of your desired video
You can use any of your favorite browsers and copy the video link from the address bar.
![copy youtube link][5]
#### Step 2: Paste copied Link to Network Stream
The option of Network Stream lies under Media, which is the first option of the top menu bar. Just click on Media and you will get the option of "Open Network Stream". You can also use the shortcut CTRL + N to open Network Stream.
![click on media and select network stream][6]
Now, you just have to paste copied YouTube video link and click on the play button. I know it just plays video in our VLC but there is a little extra step that will allow us to download currently streaming video.
![paste video link][7]
#### Step 3: Get Location Link from Codec Information
Under the codec information prompt, we will get a location link for the currently playing video. To open the Codec Information prompt, you can use the shortcut CTRL + J or you'll find the option for Codec Information under "Tools".
![click on tools and then codec information][8]
It will bring detailed info about currently streaming video. But what we need is Location. You just have to copy the location link and 90% of our task is complete.
![copy location link][9]
#### Step 4: Paste Location Link to New Tab
Open any of your favorite browsers and paste copied location link to the new tab and it will start playing the exact video in the browser.
Now, right-click on playing video and you will get an option for “Save Video As”.
![click on save][10]
It will open the file manager and ask you whether you want to save this video locally or not. You can also rename that file as by default it will be named “videoplayback.mp4”
![showing file in folder][11]
### Conclusion
If you have internet connection issues or if you want to save some video for future viewing, downloading the YouTube video makes sense.
Of course, we don't encourage piracy. This method is just for fair use. Please make sure that the creator of the video has allowed the video for fair usage, and also make sure to give credit to the original owner of the video before using it somewhere else.
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-youtube-videos-vlc/
作者:[Community][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lkxed
[1]: https://www.videolan.org/vlc/
[2]: https://itsfoss.com/video-players-linux/
[3]: https://itsfoss.com/vlc-pro-tricks-linux/
[4]: https://itsfoss.com/download-youtube-videos-ubuntu/
[5]: https://itsfoss.com/wp-content/uploads/2022/06/copy-Youtube-link-800x190.jpg
[6]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-media-and-select-network-stream.png
[7]: https://itsfoss.com/wp-content/uploads/2022/06/paste-video-link.png
[8]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-tools-and-then-codec-information-800x249.png
[9]: https://itsfoss.com/wp-content/uploads/2022/06/copy-location-link.png
[10]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-save-800x424.jpg
[11]: https://itsfoss.com/wp-content/uploads/2022/06/showing-file-in-folder-800x263.png

View File

@ -0,0 +1,97 @@
[#]: subject: "Kuro: An Unofficial Microsoft To-Do Desktop Client for Linux"
[#]: via: "https://itsfoss.com/kuro-to-do-app/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kuro: An Unofficial Microsoft To-Do Desktop Client for Linux
======
Microsoft says that they love Linux and open-source, but we still do not have native support for a lot of its products on Linux.
While they could be trying to add more support, like the ability to [install Microsoft Edge on Linux][1], it's not excellent for a multi-trillion dollar company.
Similarly, Microsoft's To-Do service is also a popular one, replacing Wunderlist as it was shut down in 2020.
In case you're curious, we have a lot of [to-do list applications available for Linux][2]. So, if you want to switch away from Microsoft To-Do, you've got options.
Microsoft To-Do is a cloud-based task management application that lets you organize your tasks from your phone, desktop, and the web. It is available to download for Windows, Mac, and Android.
So, if you would rather not use the web browser but a separate application, what can you do on Linux?
Kuro to the rescue.
### Kuro: Unofficial Open-Source Microsoft To-Do App
![kuro todo][3]
Kuro is an unofficial open-source application that provides you a desktop experience for Microsoft To-Do on Linux with some extra features.
It is a fork of Ao, which was an open-source project that stepped up to become a solution for it. Unfortunately, it is no longer being actively maintained. So, I came across a new fork for it that seems to get the job done.
![kuro todo options][4]
Kuro provides some extra features that let you toggle themes, enable global shortcuts, and more from within the application.
Note that this application is fairly new, but a stable release is available to try. Furthermore, the developer plans to add more themes and features in the near future.
### Features of Kuro
![kuro todo 1][5]
If you tend to use Microsoft services (like Outlook), its To-Do app should be a perfect option to organize your tasks. You can even flag emails to create tasks out of it.
With Kuro desktop client, you get a few quick features to configure that include:
* Ability to launch the program on start.
* Get a system tray icon to quickly create a task, search, or check the available list for the day.
* Enable Global shortcut keys.
* Toggle available themes (Sepia, Dracula, Black, Dark).
* Toggle Auto Night mode, if you do not want to constantly change themes.
* Hide the tray icon, if you do not need it.
* Customize the font size as required.
![kuro todo settings][6]
In addition to some features, you can also access certain settings to enable/disable email notifications, confirm before deleting, and more such controls for the to-do app experience.
Overall, the experience wasn't terrible, but I noticed some weird graphical glitches in the user interface for a few minutes. I am not sure if it is a known issue.
### Install Kuro in Linux
You can find the .deb package for Ubuntu-based distributions from its [GitHub releases section][7].
In either case, you can also get it from the [Snap store][8] for any Linux distribution of your choice. The package is also available in [AUR][9] for Arch Linux distributions.
The developer also mentions that a Flatpak package is on its way. So, you can keep an eye on its [GitHub page][10] for more information on that.
[Kuro][11]
Have you tried this already? Do you know of a better Microsoft to-do client for Linux? Let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kuro-to-do-app/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/microsoft-edge-linux/
[2]: https://itsfoss.com/to-do-list-apps-linux/
[3]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-800x507.png
[4]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-options-800x444.png
[5]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-1.png
[6]: https://itsfoss.com/wp-content/uploads/2022/06/kuro-todo-settings.png
[7]: https://github.com/davidsmorais/kuro/releases
[8]: https://snapcraft.io/kuro-desktop
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://github.com/davidsmorais/kuro
[11]: https://github.com/davidsmorais/kuro

View File

@ -0,0 +1,100 @@
[#]: subject: "Linux Lite 6 Review: Well Designed “bridging-distro” for Windows Users"
[#]: via: "https://www.debugpoint.com/linux-lite-6-review/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Lite 6 Review: Well Designed “bridging-distro” for Windows Users
======
**We took [Linux Lite 6 “Fluorite”][1] for a test drive in a physical and virtual machine for a review.**
It has been more than a year since we reviewed the Linux Lite distribution. The [last review][2] was for its 5.0 series. And its time for a refreshed review of this excellent distribution with its latest major release Linux Lite 6.0.
Linux Lite 6.0, AKA Linux Lite OS, is based on Ubuntu and follows its LTS (Long Term Support) lifecycle. That means you get a similar release schedule and security updates for five years following Ubuntu Linux. The lightweight desktop environment Xfce is the primary and only desktop it offers. Linux Lite OS primarily focuses on Windows users who want to kick start their Linux journey. Hence you may think of it as a “bridging” Linux operating system.
![Linux Lite 6 Xfce Desktop][3]
### Linux Lite 6: Review
Lite 6 is coming after two years of its last major release. Due to its dependency on Ubuntu LTS, you should expect some significant changes in this version. First, lets wrap up the new features in this release. And then, we can talk about the installation, performance and review pointers.
#### Core updates and changes
At its core, it is based on Linux Kernel 5.15, the default LTS kernel for Ubuntu 22.04 LTS Jammy Jellyfish. In addition, this release introduces a set of desktop applications from Assistive Technologies to help hearing and sight-impaired people. The apps are the “Onboard” onscreen keyboard, the “Orca” screen reader, and a screen magnifier. With this change, Linux Lite 6 becomes more similar to Windows for its target users.
In addition to the above change, a controversial decision is to add Google Chrome as its default browser replacing the Snap version of Firefox. Undoubtedly, Google Chrome is the market leader in the browser space and is well built. But many have issues with it because its from Google.
Besides Chrome, the team also weighed the Firefox deb version and Microsoft Edge as candidates (considering that Linux Lite 6 targets Windows users).
Another beneficial core change for users is the teams decision to ship the latest stable LibreOffice edition in each point release over the next two years. Ubuntu might delay specific LibreOffice versions, but with Linux Lite point releases you would reliably get the latest version.
Moreover, if you are a fan of look and feel, the new Materia window theme is going to give you a pleasant and sleek desktop.
Overall, its a good set of changes and choices (such as the browser) in Linux Lite 6 to stay ahead with the times. Now, lets discuss some review findings during our test run.
![Linux Lite has a nice update tool][4]
#### Download, Installation
Linux Lite 6 ISO size is 2.1 GB, and I believe its reasonably well-composed, considering the vanilla Ubuntu 22.04 ISO desktop size is a whopping 3 GB+.
In all fairness, unlike other Linux distributions, Linux Lite doesnt ask you which desktop you want because you have only one choice: the Xfce desktop.
During testing, we could not get it installed in a physical system. The Ubiquity installer became unresponsive on the “read partition” module. After a few hours of research, we found that Ubiquity doesnt play well with a non-GPT table with more than three logical partitions.
However, it installs fine in a virtual-machine environment.
![Ubiquity gave some errors in test machines][5]
The Lite Welcome app gives you a single point to perform various maintenance activities in the first boot. Critical tasks such as updating the system, patching and installing/removing software are easy with its native Lite Software and Updater.
Moreover, if you want to install the Firefox web browser, the Lite Software gives you a fair warning that it is a snap. Although, from a new Windows users standpoint, it does not matter much whether it is a snap or anything else.
![Firefox is available as Snap from Lite Software][6]
### Performance
Linux Lite 6 takes around 590 MB of RAM in an idle state with an uptime of 3 hours. The CPU sits at about 2% to 3% when inactive. Furthermore, resource usage will naturally increase as you run more applications. However, I believe this is good performance considering the target hardware of this distro. Older Windows 10 or Windows 7 devices should run it just fine.
And it uses 9 GB of disk space for the default install.
![Linux Lite 6 Performance][7]
### Closing Notes
Overall, it is an excellent release and perhaps one of the early mainstream distros based on Ubuntu 22.04. Tiny additions in this release such as new accessibility tools, a new system monitoring tool and other changes are definitely good. However, some users may not like Google Chrome considering the privacy debates.
Moreover, the lack of a major upgrade path may be a roadblock and troublesome for new users. I hope the Linux Lite team brings the upgrade feature in the future. Other than that, it is well built and a good release. You can easily try it out and choose it as your daily driver.
You can download Linux Lite on the [official website][8].
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-lite-6-review/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://debugpointnews.com/linux-lite-6-0/
[2]: https://www.debugpoint.com/linux-lite-5-2-review/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-6-Xfce-Desktop.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-has-a-nice-update-tool.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/06/Ubiquity-gave-some-errors-in-test-machines.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/06/Firefox-is-available-as-Snap-from-Lite-Software.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/06/Linux-Lite-6-Performance.jpg
[8]: https://www.linuxliteos.com/download.php
[9]: https://t.me/debugpoint
[10]: https://twitter.com/DebugPoint
[11]: https://www.youtube.com/c/debugpoint?sub_confirmation=1
[12]: https://facebook.com/DebugPoint
[13]: https://t.me/debugpoint

View File

@ -0,0 +1,75 @@
[#]: subject: "Open Programmable Infrastructure: 1+1=3"
[#]: via: "https://www.linux.com/news/open-programmable-infrastructure-113/"
[#]: author: "Dan Whiting https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Programmable Infrastructure: 1+1=3
======
At last weeks Open Source Summit North America, [Robin Ginn][1], Executive Director of the [OpenJS Foundation][2], relayed a principle her mentor taught: “1+1=3”. No, this isnt new math, it is demonstrating the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all of the time, teamwork makes the dream work.
This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project.
Stepping back a bit, as “the new guy” around here, I am still constantly running across projects where I want to dig in more and understand what it does, how it does it, and why it is important. I had that very thought last week as we launched another new project, the [Open Programmable Infrastructure Project][3]. As I was [reading up on it][4], they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?”
### What are DPUs/IPUs? 
First, and this is important: they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.
In most personal computers, you have one or more separate graphics processing units (GPUs) that help the central processing unit (CPU) handle the tasks related to processing and displaying graphics. They offload that work from the CPU, allowing it to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately.
Servers powering the cloud also have CPUs, but they have other tasks that can consume tremendous computing  power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best.
In other words, 1+1=3.
### DPUs/IPUs are highly customizable
While separate processing units have been around for some time, like your PCs GPU, their functionality was primarily dedicated to a particular task. DPUs/IPUs, by contrast, combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out and each organization uses software to configure the units according to its specific needs. And they can do this on the fly.
Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasnt too long ago that if you wanted a server, you spent thousands of dollars on one and built all kinds of infrastructure around it and hoped it was what you needed for the time. Now, pretty much anyone can quickly setup a virtual server in a matter of minutes for virtually no initial cost.
DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability.
### What is Open Programmable Infrastructure (OPI)?
OPI is focused on utilizing  open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendors hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”
In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate, separately, on top of. They are living out 1+1=3.
I, for one, cant wait to see the innovation.
A special thanks to [Yan][5] [Fisher][6] of Red Hat for helping me understand open programmable infrastructure concepts. He and his colleague, Kris Murphy, have a more [technical blog post on Red Hats blog][7]. Check it out.
For more information on the OPI Project, visit their [website][8] and start contributing at [https://github.com/opiproject/opi][9].
The post [Open Programmable Infrastructure: 1+1=3][10] appeared first on [Linux Foundation][11].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/open-programmable-infrastructure-113/
作者:[Dan Whiting][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/
[b]: https://github.com/lkxed
[1]: https://github.com/opiproject/opi
[2]: https://openjsf.org/
[3]: https://opiproject.org/
[4]: https://www.linuxfoundation.org/press-release/linux-foundation-announces-open-programmable-infrastructure-project/
[5]: https://www.redhat.com/en/authors/yan-fisher
[6]: https://www.redhat.com/en/authors/yan-fisher
[7]: https://www.redhat.com/en/blog/why-red-hat-joining-open-programmable-infrastructure-project
[8]: https://opiproject.org/
[9]: https://github.com/opiproject/opi
[10]: https://www.linuxfoundation.org/blog/open-programmable-infrastructure-113/
[11]: https://www.linuxfoundation.org/

View File

@ -0,0 +1,117 @@
[#]: subject: "HandBrake: Free Tool for Converting Videos from Any Format"
[#]: via: "https://www.debugpoint.com/handbrake/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
HandBrake: Free Tool for Converting Videos from Any Format
======
Learn about HandBrake, an excellent utility for converting videos from virtually any format into your desired target format.
This article contains features, download instructions and a usage guide.
### HandBrake
In this age of social media, we all play around with videos, reels and, of course, the formats that come with them. So, whether you are on a Linux platform or even on Windows, you may be using various other programs to convert videos for different platforms. But if you need a simple yet feature-rich video converter that takes care of all your video formats from multiple sources, try HandBrake.
#### Features
HandBrake has a huge set of options that make it a unique tool. Firstly, the workflow is super easy. In fact, its just three steps:
* Select a video
* Choose a target format
* Convert
As you can see, if you are a novice user, it is super easy to work with this tool because the attributes of the target format (e.g. bit rate, dimensions) are based on the default preset.
Secondly, advanced editing, such as adding subtitles from subtitle files while converting, is also possible with this tool.
In addition, you can also change the dimensions, flip the video, change resolutions, modify the aspect ratio, and crop. Moreover, a set of basic filter configurations such as Denoise and Sharpen can also be done.
Moreover, adding Chapters, tags and audio tracks to your video files is always easy.
Perhaps the most vital feature of HandBrake is its collection of presets, which caters to the modern needs of social media and streaming. For example, the presets are aligned with streaming platforms and streaming devices such as:
* Discord
* GMail
* Vimeo
* Amazon Fire Stick
* Apple Devices
* Chromecast
* Playstation
* Roku
* Xbox
A pretty impressive list, isnt it? Not only that, if you are a professional user, it helps you define and create a queue for your conversions. The Queue feature allows you to batch-convert multiple video files in your workflow.
Finally, you can convert to MPEG-4 (mp4), Matroska (mkv) and WebM formats.
![HandBrake with various features][1]
### Download and Installation
Downloading and installing HandBrake is easy on any platform (Linux, Mac and Windows). The developers provide direct executables, which are free to download.
Since the primary target audience of this portal is Linux users, we will talk about the installation of HandBrake in Linux.
For Ubuntu, Linux Mint and all other distributions, the preferable method is Flatpak. You can [set up Flatpak][2] and then click the below button to install HandBrake:
[Install HandBrake via Flathub][3]
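If you prefer the command line, the same package can be installed directly with Flatpak. A minimal sketch, assuming Flatpak and the Flathub remote are already set up (the application ID `fr.handbrake.ghb` comes from the Flathub link above):

```
# Install and launch the HandBrake GUI from Flathub
flatpak install flathub fr.handbrake.ghb
flatpak run fr.handbrake.ghb
```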
For the Windows and macOS installers, visit the official downloads page.
One interesting feature is that you can use this application via the command line! That means you can further customize your workflow using the command line utility, which you can find [here][4].
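As a rough illustration of that command-line workflow, a single HandBrakeCLI invocation can convert a file using one of the built-in presets. This is a sketch, assuming you have installed the CLI variant, that `input.mov` exists, and that the named preset ships with your HandBrake version:

```
# Convert input.mov to MP4 using a built-in preset
HandBrakeCLI -i input.mov -o output.mp4 --preset "Fast 1080p30"
```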
### How to Use HandBrake to convert Videos? [Example]
Since you installed it, lets see how you can convert a sample video with just three steps.
1. Open HandBrake and click on the “Open Source” button at the top toolbar. Select your video file.
2. Now, select the target file type from the Format dropdown. Make sure to check the destination folder (the default is Videos).
3. Finally, click on the Start button at the top toolbar to convert a video using HandBrake.
![HandBrake Video Conversion in three simple steps][5]
You can find a nice display on the conversion progress at the bottom of the window.
![Encoding status][6]
The above steps are the most basic ones. If you want further control over the video, you can change the options and also choose from a vast list of presets I explained earlier.
### FAQ
**Is HandBrake free to use?**

Yes, it is a free and open-source application, and you can download it for free.

**Can I install HandBrake on macOS and Windows?**

Yes, you can easily install HandBrake in macOS, Windows 10, and Windows 11.

**Where can I download HandBrake?**

You can download HandBrake only from the official website https://handbrake.fr/ and no other place.
### Closing Notes
HandBrake is one of the professional-grade free and open-source video encoders available today. It is a time-tested application used by millions of users daily. I hope this guide helps you learn about this fantastic tool and gets you started with your video projects.
**The demo video is used from [Pexels cottonbro][7]**
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/handbrake/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/06/HandBrake-with-various-features.jpg
[2]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[3]: https://dl.flathub.org/repo/appstream/fr.handbrake.ghb.flatpakref
[4]: https://handbrake.fr/downloads2.php
[5]: https://www.debugpoint.com/wp-content/uploads/2022/06/HandBrake-Video-Conversion-in-three-simple-steps.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/06/Encoding-status.jpg
[7]: https://www.pexels.com/video/hands-hand-table-colorful-3997786/

View File

@ -0,0 +1,288 @@
[#]: subject: "How to Install and Configure HAProxy on Ubuntu 22.04"
[#]: via: "https://www.linuxtechi.com/install-configure-haproxy-on-ubuntu/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install and Configure HAProxy on Ubuntu 22.04
======
In this post, we will demonstrate how to install HAProxy on Ubuntu 22.04 (Jammy Jellyfish) step by step. We will later configure it to act as a load balancer by distributing incoming requests between two web servers.
##### What is HAProxy?
HAProxy, short for High Availability Proxy, is a free and open-source HTTP load balancer and reverse-proxy solution that is widely used to provide high availability to web applications and guarantee the maximum possible uptime.
It is a high-performance application that injects performance improvements to your web apps by distributing traffic across multiple endpoints. This way, it ensures that no webserver is overloaded with incoming HTTP requests since the workload is equitably distributed across several nodes.
While the community edition is free, the Enterprise Edition provides added features such as a WAF (Web Application Firewall), application acceleration, advanced DDoS protection, advanced health checks and much more.
##### Lab setup
To demonstrate HAProxy in action, you need to have at least three Linux systems. One will act as the HAProxy load balancer, while the rest will act as web servers.
![Haproxy-Lab-Setup-Ubuntu][1]
### Step 1) Install HAProxy Load Balancer
The first step is to install HAProxy on Ubuntu. Ubuntu repositories provide HAProxy by default, but it is not the latest one.
To view the haproxy package version available from the default repositories, run:
```
$ sudo apt update
$ sudo apt show haproxy
```
![default-haproxy-version-ubuntu-22-04][2]
But the latest long-term support release of HAProxy is 2.6. So, to install HAProxy 2.6, first enable its PPA repository by running the following command:
```
$ sudo add-apt-repository ppa:vbernat/haproxy-2.6 -y
```
Now install haproxy 2.6 by executing the following commands
```
$ sudo apt update
$ sudo apt install -y haproxy=2.6.\*
```
Once installed, confirm the version of HAProxy installed as shown.
```
$ haproxy -v
```
![Haproxy-version-ubuntu-22-04][3]
Upon installation, the HAProxy service starts by default and listens on TCP port 80. To verify that HAProxy is running, run the command:
```
$ sudo systemctl status haproxy
```
![Haproxy-Status-Ubuntu-Linux][4]
Its recommended to enable the service to auto-start on every system reboot, as shown:
```
$ sudo systemctl enable haproxy
```
### Step 2) Configure HAProxy
The next step is to configure HAProxy to distribute traffic evenly between two web servers. The configuration file for haproxy is /etc/haproxy/haproxy.cfg.
Before making any changes to the file, first, make a backup copy.
```
$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bk
```
Then open the file using your preferred text editor. Here, we are using Nano.
```
$ sudo nano /etc/haproxy/haproxy.cfg
```
The HAProxy configuration file is made up of the following sections:
* global: This is the first section that you see at the very top. It contains system-wide settings that handle performance tuning and security.
* defaults: As the name suggests, this section contains settings that should work well without additional customization. These settings include timeout and error reporting configurations.
* frontend and backend: These sections define the frontend and backend settings. For the frontend, we will define the HAProxy server as the entry point, which will distribute requests to the backend servers (the web servers). We will also set HAProxy to use the round robin load balancing algorithm for distributing traffic.
* listen: This is an optional section that lets you enable monitoring of HAProxy statistics.
Now define the frontend and backend settings:
```
frontend linuxtechi
   bind 10.128.0.25:80
   stats uri /haproxy?stats
   default_backend web-servers
backend web-servers
    balance roundrobin
    server web1 10.128.0.27:80
    server web2 10.128.0.26:80
```
Here, we have configured both the HAProxy server and the web server nodes to listen on port 80. Be sure to replace the IP addresses of the HAProxy server and web servers to match your setup.
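Optionally, you can also append the `check` keyword to each server line so that HAProxy performs active health checks and stops routing traffic to a backend that goes down, for example:

```
backend web-servers
    balance roundrobin
    server web1 10.128.0.27:80 check
    server web2 10.128.0.26:80 check
```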
In order to enable viewing the HAProxy statistics from a browser, add the following listen section.
```
listen stats
   bind *:8080
   stats enable
   stats uri /
   stats refresh 5s
   stats realm Haproxy\ Statistics
   stats auth linuxtechi:<password>     # Login user and password for the monitoring page
```
The stats auth directive specifies the username and password for the login user for viewing statistics on the browser.
![HAproxy-Config-File-Ubuntu][5]
Now save all the changes and exit the configuration file. To reload the new settings, restart haproxy service.
```
$ sudo systemctl restart haproxy
```
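If you want to catch syntax errors before restarting, HAProxy can also validate the configuration file without affecting the running service:

```
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```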
Next edit the /etc/hosts file.
Define the hostnames and IP addresses of the haproxy main server and the webservers.
```
10.128.0.25 haproxy
10.128.0.27 web1
10.128.0.26 web2
```
Save the changes and exit.
### Step 3) Configure Web Servers
In this step, we will configure the remaining Linux systems which are the web servers.
So, log in to each of the web servers and install the Apache web server package.
```
$ sudo apt update
$ sudo apt install -y apache2
```
Next, verify that Apache is running on each of the servers.
```
$ sudo systemctl status apache2
```
Then enable Apache webserver to start on boot on both servers.
```
$ sudo systemctl enable apache2
```
Next, modify the index.html files for each web server.
For Web Server 1
Switch to the root user
```
$ sudo su
```
Then run the following command.
```
# echo "<H1>Hello! This is webserver1: 10.128.0.27 </H1>" > /var/www/html/index.html
```
For Web Server 2
Similarly, switch to the root user
$ sudo su
And create the index.html file as shown.
```
# echo "<H1>Hello! This is webserver2: 10.128.0.26 </H1>" > /var/www/html/index.html
```
Next, configure the /etc/hosts file.
```
$ sudo nano /etc/hosts
```
Add the HAProxy entry to each node.
```
10.128.0.25 haproxy
```
Save the changes and exit the configuration file.
Be sure you can ping the HAProxy server from each of the web server nodes.
![Haproxy-Connectivity-from-web1][6]
![Haproxy-Connectivity-from-web2][7]
Note: Make sure port 80 is allowed through the OS firewall if one is enabled on the web servers.
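For example, if the web servers use UFW (the default firewall front end on Ubuntu), port 80 can be opened like this:

```
$ sudo ufw allow 80/tcp
```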
### Step 4) Test HAProxy Load Balancing
Up until this point, we have successfully configured our HAProxy and both of the back-end web servers. To test if your configuration is working as expected, browse the IP address of the HAProxy server
http://10.128.0.25
When you browse for the first time, you should see the web page for the first web server
![Access-WebPage1-Over-Haproxy][8]
Upon refreshing, you should see the webpage for the second web server:
![Access-WebPage2-Over-Haproxy][9]
This shows that the HAProxy server is performing its load balancing job spectacularly by distributing incoming web traffic across the two web servers using the round robin algorithm.
Moreover, you can use the following while loop with the curl command:
```
$ while true; do curl 10.128.0.25; sleep 1; done
```
![While-Loop-Access-Webpage-over-Haproxy][10]
To view monitoring statistics, browse the following URL:
http://10.128.0.25:8080/stats
You will be required to authenticate, so provide your details as specified in Step 2.
![HAproxy-GUI-Login-Page][11]
You will see the following page with statistics on the performance of the HAProxy server.
![Haproxy-Stats-Ubuntu-Linux][12]
##### Conclusion
There you have it! We have successfully installed HAProxy on Ubuntu 22.04 and configured it to serve requests across two web servers using the round robin load balancing algorithm.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-haproxy-on-ubuntu/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Lab-Setup-Ubuntu.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/06/default-haproxy-version-ubuntu-22-04.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-version-ubuntu-22-04.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Status-Ubuntu-Linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/06/HAproxy-Config-File-Ubuntu.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Connectivity-from-web1.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Connectivity-from-web2.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Access-WebPage1-Over-Haproxy.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Access-WebPage2-Over-Haproxy.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/06/While-Loop-Access-Webpage-over-Haproxy.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/06/HAproxy-GUI-Login-Page.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/06/Haproxy-Stats-Ubuntu-Linux.png

View File

@ -0,0 +1,150 @@
[#]: subject: "Linux su vs sudo: what's the difference?"
[#]: via: "https://opensource.com/article/22/6/linux-su-vs-sudo-sysadmin"
[#]: author: "David Both https://opensource.com/users/dboth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux su vs sudo: what's the difference?
======
A comparison of Linux commands for escalating privileges for non-root users.
![bash logo on green background][1]
Image by: Opensource.com
Both the `su` and the `sudo` commands allow users to perform system administration tasks that are not permitted for non-privileged users—that is, everyone but the root user. Some people prefer the `sudo` command: For example, [Seth Kenlon][2] recently published "[5 reasons to use sudo on Linux][3]", in which he extols its many virtues.
I, on the other hand, am partial to the `su` command and prefer it to `sudo` for most of the system administration work I do. In this article, I compare the two commands and explain why I prefer `su` over `sudo` but still use both.
### Historical perspective of sysadmins
The `su` and `sudo` commands were designed for a different world. Early Unix computers required full-time system administrators, and they used the root account as their only administrative account. In this ancient world, the person entrusted with the root password would log in as root on a teletype machine or CRT terminal such as the DEC VT100, then perform the administrative tasks necessary to manage the Unix computer.
The root user would also have a non-root account for non-root activities such as writing documents and managing their personal email. There were usually many non-root user accounts on those computers, and none of those users needed total root access. A user might need to run one or two commands as root, but very infrequently. Many sysadmins log in as root to work as root and log out of their root sessions when finished. Some days require staying logged in as root all day long. Most sysadmins rarely use `sudo` because it requires typing more than necessary to run essential commands.
These tools both provide escalated privileges, but the way they do so is significantly different. This difference is due to the distinct use cases for which they were originally intended.
### sudo
The original intent of `sudo` was to enable the root user to delegate to one or two non-root users access to one or two specific privileged commands they need regularly. The `sudo` command gives non-root users temporary access to the elevated privileges needed to perform tasks such as adding and deleting users, deleting files that belong to other users, installing new software, and generally any task required to administer a modern Linux host.
Allowing the users access to a frequently used command or two that requires elevated privileges saves the sysadmin a lot of requests from users and eliminates the wait time. The `sudo` command does not switch the user account to become root; most non-root users should never have full root access. In most cases, `sudo` lets a user issue one or two commands then allows the privilege escalation to expire. During this brief time interval, usually configured to be 5 minutes, the user may perform any necessary administrative tasks that require elevated privileges. Users who need to continue working with elevated privileges but are not ready to issue another task-related command can run the `sudo -v` command to revalidate the credentials and extend the time for another 5 minutes.
Using the `sudo` command does have the side effect of generating log entries of commands used by non-root users, along with their IDs. The logs can facilitate a problem-related postmortem to determine when users need more training. (You thought I was going to say something like "assign blame," didn't you?)
### su
The `su` command is intended to allow a non-root user to elevate their privilege level to that of root—in fact, the non-root user becomes the root user. The only requirement is that the user know the root password. There are no limits on this because the user is now logged in as root.
No time limit is placed on the privilege escalation provided by the su command. The user can work as root for as long as necessary without needing to re-authenticate. When finished, the user can issue the exit command to revert from root back to their own non-root account.
### Controversy and change
There has been some recent disagreement about the uses of `su` versus `sudo`.
> Real [Sysadmins] don't use sudo. —Paul Venezia
Venezia contends in his [InfoWorld article][4] that `sudo` is used as an unnecessary prop for many people who act as sysadmins. He does not spend much time defending or explaining this position; he just states it as a fact. And I agree with him—for sysadmins. We don't need the training wheels to do our jobs. In fact, they get in the way.
However,
> The times they are a'changin.' —Bob Dylan
Dylan was correct, although he was not singing about computers. The way computers are administered has changed significantly since the advent of the one-person, one-computer era. In many environments, the user of a computer is also its administrator. This makes it necessary to provide some access to the powers of root for those users.
Some modern distributions, such as Ubuntu and its derivatives, are configured to use the `sudo` command exclusively for privileged tasks. In those distros, it is impossible to log in directly as the root user or even to `su` to root, so the `sudo` command is required to allow non-root users any access to root privileges. In this environment, all system administrative tasks are performed using `sudo`.
This configuration is possible by locking the root account and adding the regular user account(s) to the wheel group. This configuration can be circumvented easily. Try a little experiment on any Ubuntu host or VM. Let me stipulate the setup here so you can reproduce it if you wish. I installed Ubuntu 16.04 LTS in a VM using VirtualBox. During the installation, I created a non-root user, student, with a simple password for this experiment.
Log in as the user student and open a terminal session. Look at the entry for root in the `/etc/shadow` file, where the encrypted passwords are stored.
```
student@ubuntu1:~$ cat /etc/shadow
cat: /etc/shadow: Permission denied
```
Permission is denied, so we cannot look at the `/etc/shadow` file. This is common to all distributions to prevent non-privileged users from seeing and accessing the encrypted passwords, which would make it possible to use common hacking tools to crack those passwords.
Now let's try to `su -` to root.
```
student@ubuntu1:~$ su -
Password: <Enter root password but there isn't one>
su: Authentication failure
```
This fails because the root account has no password and is locked out. Use the `sudo` command to look at the `/etc/shadow` file.
```
student@ubuntu1:~$ sudo cat /etc/shadow
[sudo] password for student: <enter the student password>
root:!:17595:0:99999:7:::
<snip>
student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
<snip>
```
I have truncated the results to show only the entry for the root and student users. I have also shortened the encrypted password so the entry will fit on a single line. The fields are separated by colons (`:` ) and the second field is the password. Notice that the password field for root is a bang, known to the rest of the world as an exclamation point (`!` ). This indicates that the account is locked and that it cannot be used.
Now all you need to do to use the root account as a proper sysadmin is to set up a password for the root account.
```
student@ubuntu1:~$ sudo su -
[sudo] password for student: <Enter password for student>
root@ubuntu1:~# passwd root
Enter new UNIX password: <Enter new root password>
Retype new UNIX password: <Re-enter new root password>
passwd: password updated successfully
root@ubuntu1:~#
```
Now you can log in directly on a console as root or `su` directly to root instead of using `sudo` for each command. Of course, you could just use `sudo su -` every time you want to log in as root, but why bother?
Please do not misunderstand me. Distributions like Ubuntu and their up- and downstream relatives are perfectly fine, and I have used several of them over the years. When using Ubuntu and related distros, one of the first things I do is set a root password so that I can log in directly as root. Other distributions, like Fedora and its relatives, now provide some interesting choices during installation. The first Fedora release where I noticed this was Fedora 34, which I have installed many times while writing an upcoming book.
One of those installation options can be found on the page to set the root password. The new option allows the user to choose "Lock root account" in the way an Ubuntu root account is locked. There is also an option on this page that allows remote SSH login to this host as root using a password, but that only works when the root account is unlocked. The second option is on the page that allows the creation of a non-root user account. One of the options on this page is "Make this user administrator." When this option is checked, the user ID is added to a special group called the wheel group, which authorizes members of that group to use the `sudo` command. Fedora 36 even mentions the wheel group in the description of that checkbox.
More than one non-root user can be set as an administrator. Anyone designated as an administrator using this method can use the `sudo` command to perform all administrative tasks on a Linux computer. Linux only allows the creation of one non-root user during installation, so other new users can be added to the wheel group when created. Existing users can be added to the wheel group by the root user or another administrator directly by using a text editor or the `usermod` command.
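For example, the root user or an existing administrator can add another account to the wheel group with `usermod`; a quick sketch, using the `student` account from the earlier experiment purely as an illustration:

```
$ sudo usermod -aG wheel student
```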
In most cases, today's administrators need to do only a few essential tasks such as adding a new printer, installing updates or new software, or deleting software that is no longer needed. These GUI tools require a root or administrative password and will accept the password from a user designated as an administrator.
### How I use su and sudo on Linux
I use both `su` and `sudo`. They each have an important place in my sysadmin toolbox.
I can't lock the root account because I need to use it to run my [Ansible][5] playbooks and the [rsbu][6] Bash program I wrote to perform backups. Both of these need to be run as root, and so do several other administrative Bash scripts I have written. I use the `su` command to switch users to the root user so I can perform these and many other common tasks. Elevating my privileges to root using `su` is especially helpful when performing problem determination and resolution. I really don't want a `sudo` session timing out on me while I am in the middle of my thought process.
I use the `sudo` command for tasks that need root privilege when a non-root user needs to perform them. I set the non-root account up in the sudoers file with access to only those one or two commands needed to complete the tasks. I also use `sudo` myself when I need to run only one or two quick commands with escalated privileges.
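As an illustration of that kind of narrow grant, a sudoers entry (edited with `visudo`) can restrict a user to a single privileged command. The user name and command below are hypothetical:

```
# Allow backupuser to run only rsync as root, and nothing else
backupuser ALL=(root) /usr/bin/rsync
```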
### Conclusions
The tools you use don't matter nearly as much as getting the job done. What difference does it make if you use vim or Emacs, systemd or SystemV, RPM or DEB, `sudo` or `su` ? The bottom line here is that you should use the tools with which you are most comfortable and that work best for you. One of the greatest strengths of Linux and open source is that there are usually many options available for each task we need to accomplish.
Both `su` and `sudo` have strengths, and both can be secure when applied properly for their intended use cases. I choose to use both `su` and `sudo` mostly in their historical roles because that works for me. I prefer `su` for most of my own work because it works best for me and my workflow.
Share how you prefer to work in the comments!
This article is taken from Chapter 19 of my book The Linux Philosophy for Sysadmins (Apress, 2018) and is republished with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/linux-su-vs-sudo-sysadmin
作者:[David Both][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/bash_command_line.png
[2]: https://opensource.com/users/seth
[3]: https://opensource.com/article/22/5/use-sudo-linux
[4]: http://www.infoworld.com/t/unix/nine-traits-the-veteran-unix-admin-276?page=0,0&source=fssr
[5]: https://opensource.com/article/20/10/first-day-ansible
[6]: https://opensource.com/article/17/1/rsync-backup-linux

View File

@ -0,0 +1,480 @@
[#]: subject: "Notes on running containers with bubblewrap"
[#]: via: "https://jvns.ca/blog/2022/06/28/some-notes-on-bubblewrap/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Notes on running containers with bubblewrap
======
Hello! About a year ago I got mad about Docker container startup time. This was
because I was building an [nginx playground][1]
where I was starting a new “container” on every HTTP request, and so for it to
feel reasonably snappy, nginx needed to start quickly.
Also, I was running this project on a pretty small cloud machine (256MB RAM and a
small CPU), so I really wanted to avoid unnecessary overhead.
Ive been looking for a way to run containers faster since then, but I couldnt
find one until last week when I discovered
[bubblewrap][2]!! Its very fast and I
think its super cool, but I also ran into a bunch of fun problems that I
wanted to write down for my future self.
#### some disclaimers
* Im not sure if the way Im using bubblewrap in this post is how its intended to be used
* there are a lot of sharp edges when using bubblewrap in this way, you need to
think a lot about Linux namespaces and how containers work
* bubblewrap is a security tool but I am not a security person and I am only
doing this for weird tiny projects. you should definitely not take security
advice from me.
Okay, all of that said, lets talk about how Im trying to use bubblewrap to run
containers fast and in a relatively secure way :)
#### Docker containers take ~300ms to start on my machine
I ran a quick benchmark to see how long a Docker container takes to run a
simple command (`ls` ). For both Docker and Podman, its about 300ms.
```
$ time docker run --network none -it ubuntu:20.04 ls / > /dev/null
Executed in 378.42 millis
$ time podman run --network none -it ubuntu:20.04 ls / > /dev/null
Executed in 279.27 millis
```
Almost all of this time is overhead from Docker and Podman; just running `ls`
by itself takes about 3ms:
```
$ time ls / > /dev/null
Executed in 2.96 millis
```
I want to stress that, while Im not sure exactly what the slowest part of
Docker and podman startup time is (I spent 5 minutes trying to profile them and
gave up), Im 100% sure its something important.
The way were going to run containers faster with bubblewrap has a lot of
limitations and its a lower level interface which is a lot trickier to use.
#### goal 1: containers that start quickly
I felt like it *should* be possible to have containers that start essentially
instantly or at least in less than 5ms. My thought process:
* creating a new namespace with `unshare` is basically instant
* [containers are basically just a bunch of namespaces][3]
* whats the problem?
#### container startup time is (usually) not that important
Most of the time when people are using containers, theyre running some
long-running process inside the container like a webserver, so it doesnt
really matter if it takes 300ms to start.
So it makes sense to me that there arent a lot of container tools that
optimize for startup time. But I still wanted to optimize for startup time :)
#### goal 2: run the containers as an unprivileged user
Another goal I had was to be able to run my containers as an unprivileged user
instead of root.
I was surprised the first time I learned that Docker actually runs containers
as root: even though I run `docker run ubuntu:20.04` as an unprivileged user (`bork`), that
message is actually sent to a daemon running as root, and the Docker container
process itself also runs as root (albeit a `root` thats stripped of all its
capabilities).
Thats fine for Docker (they have lots of very smart people making sure that
they get it right!), but if Im going to do container stuff *without* using
Docker (for the speed reasons mentioned above), Id rather not do it as root to
keep everything a bit more secure.
#### podman can run containers as an non-root user
Before we start talking about how to do weird stuff with bubblewrap, I want to
quickly talk about a much more normal tool to run containers: podman!
Podman, unlike Docker, can run containers as an unprivileged user!
If I run this from my normal user:
```
$ podman run -it ubuntu:20.04 ls
```
it doesnt secretly run as root behind the scenes! It just starts the container
as my normal user, and then uses something called “user namespaces” so that
*inside the container* I appear to be root.
The other cool thing about podman is that it has exactly the same interface as
Docker, so you can just take a Docker command and replace `docker` with
`podman` and itll Just Work. Ive found that sometimes I need to do some extra
work to get podman to work in practice, but its still pretty nice that it has
the same command line interface.
This “run containers as a non-root user” feature is normally called “rootless
containers”. (I find that name kind of counterintuitive, but thats what people call it)
#### failed attempt 1: write my own tool using runc
I knew that Docker and podman use
[runc][4] under the hood, so I thought
well, maybe I can just use `runc` directly to make my own tool that starts
containers faster than Docker does!
I tried to do this 6 months ago and I dont remember most of the details, but basically
I spent 8 hours working on it, got frustrated because I couldnt get anything
to work, and gave up.
One specific detail I remember struggling with was setting up a working `/dev`
for my programs to use.
#### enter bubblewrap
Okay, that was a very long preamble so lets get to the point! Last week, I
discovered a tool called `bubblewrap` that was basically exactly the thing I
was trying to build with `runc` in my failed attempt, except that it actually
works and has many more features and its built by people who know things about
security! Hooray!
The interface to bubblewrap is pretty different than the interface to Docker: its
much lower level. Theres no concept of a container image; instead, you
map a bunch of directories on your host to directories in the container.
For example, heres how to run a container with the same root directory as your
host operating system, but with only read access to that root directory, and only write access to `/tmp`.
```
bwrap \
--ro-bind / / \
--bind /tmp /tmp \
--proc /proc --dev /dev \
--unshare-pid \
--unshare-net \
bash
```
For example, you could imagine running some untrusted process under bubblewrap
this way and then putting all the files you want the process to access in `/tmp`.
#### bubblewrap runs containers as an unprivileged (non-root) user
Like podman, bubblewrap runs containers as a non-root user, using user
namespaces. It can also run containers as root, but in this post were just
going to be talking about using it as an unprivileged user.
#### bubblewrap is fast
Lets see how long it takes to run `ls` in a bubblewrap container!
```
$ time bwrap --ro-bind / / --proc /proc --dev /dev --unshare-pid ls /
Executed in 8.04 millis
```
Thats a big difference! 8ms is a lot faster than 279ms.
Of course, like we said before, the reason bubblewrap is faster is that it does
a lot less. So lets talk about some things bubblewrap doesnt do.
#### some things bubblewrap doesnt do
Here are some things that Docker/podman do that bubblewrap doesnt do:
* set up overlayfs mounts for you, so that your changes to the filesystem dont affect the base image
* set up networking bridges so that you can connect to a webserver inside the container
* probably a bunch more stuff that Im not thinking of
In general, bubblewrap is a much lower level tool than something like Docker.
Also, bubblewrap seems to have pretty different goals than Docker: the README
seems to say that its intended as a tool for sandboxing desktop software (I
think it comes from [flatpak][5]).
#### running a container image with bubblewrap
I couldnt find instructions for running a Docker container image with
bubblewrap, so here they are. Basically I just use Docker to download the
container image and put it into a directory and then run it with `bwrap` :
Theres also a tool called [bwrap-oci][6] which looks cool but I
couldnt get it to compile.
```
mkdir rootfs
docker export $(docker create frapsoft/fish) | tar -C rootfs -xf -
bwrap \
--bind $PWD/rootfs / \
--proc /proc --dev /dev \
--uid 0 \
--unshare-pid \
--unshare-net \
fish
```
One important thing to note is that this doesnt create a temporary overlay
filesystem for the containers file writes, so itll let the container edit
files in the image.
I wrote a post about [overlay filesystems][7] if
you want to see how you could do that yourself though.
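If you do want throwaway writes, one approach (outside of bubblewrap itself) is to mount an overlay filesystem over the extracted image and bind the merged directory into the container instead. A rough sketch, assuming the `rootfs` directory from the example above and that you are allowed to mount overlays on your system:

```
mkdir upper work merged
sudo mount -t overlay overlay -o lowerdir=rootfs,upperdir=upper,workdir=work merged
# then pass --bind $PWD/merged / to bwrap instead of rootfs;
# writes land in "upper" and the original image stays untouched
```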
#### running “containers” with bubblewrap isnt the same as with podman
I just gave an example of how to “run a container” with bubblewrap, and you
might think “cool, this is just like podman but faster!”. It is not, and its
actually unlike using podman in even more ways than I expected.
I put “container” in scare quotes because there are two ways to define “container”:
* something that implements [OCI runtime specification][8]
* any way of running a process in a way thats somehow isolated from the host system
bubblewrap is a “container” tool in the second sense. It definitely provides
isolation, and it does that using the same features Linux namespaces as
Docker.
But its not a container tool in the first sense. And its a lower level tool
so you can get into a bunch of weird states and you really need to think about
all the weird details of how containers work while using it.
For the rest of the post Im going to talk about some weird things that can
happen with bubblewrap that would not happen with podman/Docker.
#### weird thing 1: processes that dont exist
Heres an example of a weird situation I got into with bubblewrap that confused
me for a minute:
```
$ bwrap --ro-bind / / --unshare-all bash
$ ps aux
... some processes
root 390073 0.0 0.0 2848 124 pts/9 S 14:28 0:00 bwrap --ro-bind / / --unshare-all --uid 0 bash
... some other processes
$ kill 390073
bash: kill: (390073) - No such process
$ ps aux | grep 390073
root 390073 0.0 0.0 2848 124 pts/9 S 14:28 0:00 bwrap --ro-bind / / --unshare-all --uid 0 bash
```
Heres what happened
* I started a bash shell inside bubblewrap
* I ran `ps aux`, and saw a process with PID `390073`
* I try to kill the process. It fails with the error `no such process`. What?
* I ran `ps aux`, and still see the process with PID `390073`
Whats going on? Why doesnt the process `390073` exist, even though `ps` says it does? Isnt that impossible?
Well, the problem is that `ps` doesnt actually list all the processes in your
current PID namespace. Instead, it iterates through all the entries in `/proc`
and prints those out. Usually, whats in `/proc` is actually the same as the processes on your system.
But with Linux containers these things can get out of sync. Whats happening in
this example is that we have the `/proc` from the host PID namespace, but those
arent actually the processes that we have access to in our PID namespace.
Passing `--proc /proc` to bwrap fixes the issue: `ps` then actually lists the correct processes.
```
$ bwrap --ro-bind / / --unshare-all --dev /dev --proc /proc ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
bork 1 0.0 0.0 3644 136 ? S+ 16:21 0:00 bwrap --ro-bind / / --unshare-all --dev /dev --proc /proc ps au
bork 2 0.0 0.0 21324 1552 ? R+ 16:21 0:00 ps aux
```
Just 2 processes! Everything is normal!
#### weird thing 2: trying to listen on port 80
Passing `--uid 0` to bubblewrap makes the user inside the container `root`. You
might think that this means that the root user has administrative privileges
inside the container, but thats not true!
For example, lets try to listen on port 80:
```
$ bwrap --ro-bind / / --unshare-all --uid 0 nc -l 80
nc: Permission denied
```
Whats going on here is that the new root user actually doesnt have the
**capabilities** it needs to listen on port 80. (you need special permissions
to listen on ports less than 1024, and 80 is less than 1024)
Theres actually a capability specifically for listening on privileged ports
called `CAP_NET_BIND_SERVICE`.
So to fix this all we need to do is to tell bubblewrap to give our user that
capability.
```
$ bwrap --ro-bind / / --unshare-all --uid 0 --cap-add cap_net_bind_service nc -l 80
(no output, success!!!)
```
This works! Hooray!
#### finding the right capabilities is pretty annoying
bubblewrap doesnt give out any capabilities by default, and I find that
figuring out all the right capabilities and adding them manually is kind of
annoying. Basically my process is
* run the thing
* see what fails
* read `man capabilities` to figure out what capabilities Im missing
* add the capability with `--cap-add`
* repeat until everything is running
But thats the price I pay for wanting things to be fast I guess :)
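One small trick that can shorten this loop is to print the capability sets the sandboxed process actually ends up with. `capsh` (from the libcap package, assuming it is present on the filesystem you bind in) can do that from inside the sandbox:

```
$ bwrap --ro-bind / / --unshare-all --uid 0 --cap-add cap_net_bind_service capsh --print
```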
#### weird thing 2b: --dev /dev makes listening on privileged ports not work
One other strange thing is that if I take the exact same command above (which
worked!) and add `--dev /dev` (to set up the `/dev/` directory), it causes it to not work again:
```
$ bwrap --ro-bind / / --dev /dev --unshare-all --uid 0 --cap-add cap_net_bind_service nc -l 80
nc: Permission denied
```
I think this might be a bug in bubblewrap, but I havent mustered the courage
to dive into the bubblewrap code and start investigating yet. Or maybe theres
something obvious Im missing!
#### weird thing 3: UID mappings
Another slightly weird thing was I tried to run `apt-get update` inside a bubblewrap Ubuntu container and everything went very poorly.
Heres how I ran `apt-get update` inside the Ubuntu container:
```
mkdir rootfs
docker export $(docker create ubuntu:20.04) | tar -C rootfs -xf -
bwrap \
--bind $PWD/rootfs / \
--proc /proc\
--uid 0 \
--unshare-pid \
apt-get update
```
And here are the error messages:
```
E: setgroups 65534 failed - setgroups (1: Operation not permitted)
E: setegid 65534 failed - setegid (22: Invalid argument)
E: seteuid 100 failed - seteuid (22: Invalid argument)
E: setgroups 0 failed - setgroups (1: Operation not permitted)
.... lots more similar errors
```
At first I thought “ok, this is a capabilities problem, I need to set
`CAP_SETGID` or something to give the container permission to change groups.” But I did that and it didnt help at all!
I think whats going on here is a problem with UID maps. What are UID maps?
Well, every time you run a container using “user namespaces” (which podman is
doing), it creates a mapping of UIDs inside the container to UIDs on the host.
Lets look at the UID maps! Heres how to do that:
```
# cat /proc/self/uid_map
0 1000 1
# cat /proc/self/gid_map
1000 1000 1
```
This is saying that user 0 in the container is mapped to user 1000 on the
host, and group 1000 is mapped to group 1000. (My normal user's UID/GID is 1000, so this makes sense). You can find out
about this `uid_map` file in `man user_namespaces`.
All other users/groups that aren't 1000 are mapped to user 65534 by default, according
to `man user_namespaces`.
#### whats going on: non-mapped users cant be used
The only users and groups that have been mapped are `0` and `1000`. But `man user_namespaces` says:
> After the uid_map and gid_map files have been written, only the mapped values may be used in system calls that change user and group IDs.
`apt` is trying to use users 100 and 65534. Those aren't on the list of mapped
users! So they can't be used!
This works fine in podman, because podman sets up its UID and GID mappings differently:
```
$ podman run -it ubuntu:20.04 bash
# cat /proc/self/uid_map
0 1000 1
1 100000 65536
# cat /proc/self/gid_map
0 1000 1
1 100000 65536
```
All the users get mapped, not just 1000.
I dont quite know how to fix this, but I think its probably possible in
bubblewrap to set up the uid mappings the same way as podman does; theres an
[issue about it here that links to a workaround][13].
But this wasnt an actual problem I was trying to solve so I didnt dig further
into it.
#### it works pretty great!
Ive talked about a bunch of issues, but the things Ive been trying to do in bubblewrap
have been very constrained and its actually been pretty simple. For example, I
was working on a git project where I really just want to run `git` inside a
container and map a git repository from the host.
Thats very simple to get to work with bubblewrap! There were basically no weird problems!
Its really fast!
So Im pretty excited about this tool and I might use it for more stuff in the
future.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/06/28/some-notes-on-bubblewrap/
作者:[Julia Evans][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lkxed
[1]: https://jvns.ca/blog/2021/09/24/new-tool--an-nginx-playground/
[2]: https://github.com/containers/bubblewrap
[3]: https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
[4]: https://github.com/opencontainers/runc
[5]: https://flatpak.org/
[6]: https://github.com/projectatomic/bwrap-oci
[7]: https://jvns.ca/blog/2019/11/18/how-containers-work--overlayfs/
[8]: https://opencontainers.org/about/overview/
[9]: https://jvns.ca/cdn-cgi/l/email-protection
[10]: https://jvns.ca/cdn-cgi/l/email-protection
[11]: https://jvns.ca/cdn-cgi/l/email-protection
[12]: https://jvns.ca/cdn-cgi/l/email-protection
[13]: https://github.com/containers/bubblewrap/issues/468

View File

@ -0,0 +1,126 @@
[#]: subject: "Why organizations need site reliability engineers"
[#]: via: "https://opensource.com/article/22/6/benefits-sre-site-reliability-engineering"
[#]: author: "Robert Kimani https://opensource.com/users/robert-charles"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why organizations need site reliability engineers
======
SRE is a valuable component in an efficient organization for software engineering, systems engineering, implementing DevSecOps, and more.
![Puzzle pieces coming together to form a computer screen][1]
Image by: Opensource.com
In this final article that concludes my series about best practices for effective site reliability engineering (SRE), I cover some of the practical applications of site reliability engineering.
There are some significant differences between software engineering and systems engineering.
### Software engineering
* Focuses on software development and engineering only.
* Involves writing code to create useful functionality.
* Time is spent on developing repeatable and reusable software that can be easily extended.
* Has problem-solving orientation.
* Software engineering aids the SRE.
### Systems engineering
* Focuses on the whole system including software, hardware and any associated technologies.
* Time is spent on building, analyzing, and managing solutions.
* Deals with defining characteristics of a system and feeds requirements to software engineering.
* Has systems-thinking orientation.
* Systems engineering enables SRE.
The site reliability engineer (SRE) utilizes both software engineering and system engineering skills, and in so doing adds value to an organization.
As the SRE team runs production systems, an SRE produces the most impactful tools to manage and automate manual processes. Software can be built faster when an SRE is involved, because most of the time the SRE creates software for their own use. As most of the tasks for an SRE are automated, which entails a lot of coding, this introduces a healthy mix of development and operations, which is great for site reliability.
Finally, an SRE enables an organization to automatically scale rapidly whether it's scaling up or scaling down.
### SRE and DevSecOps
An SRE helps build effective end-to-end monitoring systems by utilizing logs, metrics, and traces. An SRE enables fast, effective, and reliable rollbacks, and automatic scaling of infrastructure up or down as needed. These are especially effective during a security breach.
With the advent of cloud and container-based architectures, data processing pipelines have become a prominent component in IT architectures. An SRE helps configure the most restrictive access to data processing pipelines.
Finally, an SRE helps develop tools and procedures to handle incidents. While most of these incidents focus on IT operations and reliability, it can be easily extended to security. For example, DevSecOps deals with integrating development, security, and operations with heavy emphasis on automation. It's a field where development, security and operations teams work together to support and maintain an organization's applications and infrastructure.
### Designing SRE and pre-production computing environments
A pre-production or non-production environment is an environment used by an SRE to develop, deploy, and test.
The non-production environment is the testing ground for automation. But it's not just application code that requires a non-production environment. Any associated automated processes, primarily the ones that an SRE develops, require a pre-production environment. Most organizations have more than one pre-production environment. By resembling production as much as possible, the pre-production environment improves confidence in releases. At least one of your non-production environments should resemble the production environment. In many cases it's not possible to replicate production data, but you should try your best to make the non-production environments match the production environments as closely as possible.
### Pre-production computing and the SRE
An SRE helps spin up identical application-serving environments by using automation and specialized tools. This is essential, as you can quickly spin up a non-production environment in a matter of seconds using scripts and tools developed by SREs.
A smart SRE treats configuration as code to ensure fast implementation of testing and deployment. Through the use of automated CI/CD pipelines, application releases and hot fixes can be made seamlessly.
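As a small, hypothetical illustration of that kind of tooling (the registry, image name, and port below are made-up placeholders, not a prescribed SRE stack), a short script like this can stand up a disposable environment in seconds:
```
#!/usr/bin/env sh
# Hypothetical helper: spin up a throwaway pre-production copy of a service.
set -eu
ENV_NAME="${1:-preprod-$(date +%s)}"        # unique name for this environment
IMAGE="registry.example.com/myapp:latest"   # placeholder image reference
podman run -d --name "$ENV_NAME" -p 8080:8080 "$IMAGE"
echo "Environment $ENV_NAME is listening on port 8080"
```
The point is not the specific commands, but that the environment is reproducible, scripted, and disposable.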
Finally, by developing effective monitoring solutions, an SRE helps to ensure the reliability of a pre-production computing environment.
One of the closely related fields to pre-production computing is inner loop development.
### Executing on inner loop development
Picture two loops, an inner loop and an outer loop, forming the DevOps loop. In the inner loop, you code, build, run, and debug. This cycle mostly happens in a developer's workstation or some other non-production environment.
Once the code is ready, it is moved to the outer loop, where the process starts with code review, build, deploy, integration tests, security and compliance, and finally pre-production release.
Many of the processes in the outer loop and inner loop are automated by the SRE.
![Image of a DevOps Loop][3]
Image by: (Robert Kimani, CC BY-SA 4.0)
### SRE and inner loop development
The SRE speeds up inner loop development by enabling fast, iterative development by providing tools for containerized deployment. Many of the tools an SRE develops revolve around container automation and container orchestration, using tools such as Podman, Docker, Kubernetes, or platforms like OpenShift.
An SRE also develops tools to help debug crashes with tools such as Java heap dump analysis tools, and Java thread dump analysis tools.
### Overall value of SRE
By utilizing both systems engineering and software engineering, an SRE organization delivers impactful solutions. An SRE helps to implement DevSecOps where development, security, and operations intersect with a primary focus on automation.
SRE principles help maximize the function of pre-production environments by utilizing tools and processes that the SRE organization delivers, so one can easily spin up a non-production environment in a matter of seconds. An SRE organization enables efficient inner loop development by developing and providing the necessary tools.
The key benefits of SRE include:
* Improved end user experience: It's all about ensuring that the users of the applications and services, get the best experience as possible. This includes uptime of the applications or services. Applications should be up and running all the time and should be healthy.
* Minimizes or eliminates outages: It's better for users and developers alike.
* Automation: As the saying goes, you should always be trying to automate yourself out of the job that you are currently performing manually.
* Scale: In the age of cloud-native applications and containerized services, massive automated scalability is critical for an SRE to scale up or down in a safe and fast manner.
* Integrated: The principles and processes that the SRE organization embraces can be, and in many cases should be, extended to other parts of the organization, as with DevSecOps.
The SRE is a valuable component in an efficient organization. As demonstrated over the course of this series, the benefits of SRE affect many departments and processes.
### Further reading
Below are some GitHub links to a few of my favorite SRE resources:
* [Awesome site reliability engineering resources][4]
* [Awesome site reliability tools][5]
* [SRE cheat sheet][6]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/benefits-sre-site-reliability-engineering
作者:[Robert Kimani][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/robert-charles
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/puzzle_computer_solve_fix_tool.png
[2]: https://opensource.com/downloads/guide-implementing-devsecops
[3]: https://opensource.com/sites/default/files/2022-06/SREFinalDevOps-Loop.png
[4]: https://github.com/dastergon/awesome-sre
[5]: https://github.com/SquadcastHub/awesome-sre-tools
[6]: https://github.com/shibumi/SRE-cheat-sheet

View File

@ -0,0 +1,269 @@
[#]: subject: "ripgrep-all Command in Linux: One grep to Rule Them All"
[#]: via: "https://itsfoss.com/ripgrep-all/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ripgrep-all Command in Linux: One grep to Rule Them All
======
[rga][1], called ripgrep-all, is an excellent tool that allows you to search almost all files for a text pattern. While the OG grep command is limited to plaintext files, rga can search for text in a wide range of file types such as PDF, e-Books, Word documents, zip, tar, and even embedded subtitles.
### What is it exactly?
The [grep][2] command is used for searching for text-based patterns in files. It actually means **g**lobal **re**gex **p**rint. You can not only search for simple words, but can also specify that the word should be the first word in a line, at the end of a line, or that a specific word should come before it. That is why grep is so powerful, because it uses regex (regular expressions).
There is also a limitation on grep, kind of. You can only use grep to search for patterns in a plaintext file. That means you can not [search for patterns in a PDF document][3], in a compressed tar/zip archive, nor in a database like SQLite.
Now imagine having the powerful search that grep offers, but for other file types too. That is rga, or ripgrep-all, whatever you might call it.
It is ripgrep, but with added functionality. We also have a tutorial covering [ripgrep][4], in case you are interested in it.
### How to install ripgrep-all
Arch Linux users can easily install ripgrep-all using the following command:
```
sudo pacman -S ripgrep-all
```
The Nix package manager has ripgrep-all packaged; to install it, use the following command:
```
nix-env -iA nixpkgs.ripgrep-all
```
Mac users can use the Homebrew package manager like so:
```
brew install ripgrep-all
```
#### Debian/Ubuntu users
At the moment, ripgrep-all is neither available in Debians first-party repositories nor Ubuntus repositories. Fret not, that doesnt mean it is unobtainium.
On any other Debian based operating system (Ubuntu and its derivatives too), install the necessary dependencies first:
```
sudo apt-get install ripgrep pandoc poppler-utils ffmpeg
```
Once those are installed, visit [this page that contains the installer][5]. Find the file that has the “x86_64-unknown-linux-musl” suffix. Download and extract it.
That tar archive contains two necessary binary executable files. They are “rga” and “rga-preproc”.
Copy them to the “~/.local/bin” directory. In most cases, this directory will exist, but in case you do not have it, create it using the following command:
```
mkdir -p $HOME/.local/bin
```
Finally, add the following lines to your “~/.bashrc” file:
```
if ! [[ $PATH =~ "$HOME/.local/bin" ]]; then
PATH="$HOME/.local/bin:$PATH"
fi
```
Now, close and re-open the terminal to make the changes made in “~/.bashrc” effective. With that, ripgrep-all is installed.
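To confirm the manual installation worked, a quick sanity check (the exact version string will differ on your system) is:
```
rga --version
```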
### Using ripgrep-all
ripgrep-all is the name of the project, not the command; the command name is `rga`.
The rga utility supports the following file extensions:
* media: `.mkv`, `.mp4`, `.avi`
* documents: `.epub`, `.odt`, `.docx`, `.fb2`, `.ipynb`, `.pdf`
* compressed archives: `.zip`, `.tar`, `.tgz`, `.tbz`, `.tbz2`, `.gz`, `.bz2`, `.xz`, `.zst`
* databases: `.db`, `.db3`, `.sqlite`, `.sqlite3`
* images (OCR): `.jpg`, `.png`
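For a quick taste of what that list means in practice, here is a hypothetical one-liner (the file name is just a placeholder) that searches inside a zipped PDF, something plain grep cannot do:
```
rga -i "open source" some-ebook.pdf.zip
```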
You might be [familiar with grep][6], but let us look at some examples nonetheless. This time, with rga instead of grep.
Before you proceed further, please take a look at the directory hierarchy given down below:
```
.
├── my_demo_db.sqlite3
├── my_demo_document.odt
└── TLCL-19.01.pdf.zip
```
#### Case insensitive and case sensitive search
The simplest pattern matching is to search for a word in a file. Let us try that. I will use the rga command to perform a case sensitive search for the words “red hat enterprise linux” in all files in the current directory.
While grep has case sensitivity turned on by default, with rga, the `-s` option needs to be used.
```
rga -s 'red hat enterprise linux'
```
![Case sensitive search using rga][7]
As you can see, with a case sensitive search, I only got the result from a sqlite3 database file. Now, let us try a case insensitive search using the `-i` option and see what results we get.
![Case insensitive search using rga][8]
```
rga -i 'red hat enterprise linux'
```
Ah, this time we also got a match from the [The Linux Command Line][9] book by William Shotts.
#### Inverse match
With grep, and by extension, with ripgrep-all, you can do an inverse match. Which means, “Show only lines that do NOT have this pattern”.
The option for that is `-v` and that needs to be present immediately before the pattern.
![Inverse match using rga][10]
```
rga -v linux *.sqlite3
rga linux *.sqlite3
```
Hey! Hold on. That isnt Linux!
This time I only selected the database file, that is because every other file has a lot of lines that do not contain the word linux in them.
And as you can see, the first commands output does not have the word linux in it. The second command is only to demonstrate that linux is present in the database.
#### Contextual search
One thing I love about rgas ability to search databases, in particular, is that it can not only search for your match but also provide relevant context (when asked). Although search in databases is not special, it is always an “Oh wow, it can do that?!” moment.
A contextual search is performed using the following three options:
* -A: show context after the matched line
* -B: show context before the matched line
* -C: show context before and after the matched line
If this sounds confusing, fret not. I will discuss each option to help you understand it better.
**Using the -C option**
To show you what I am talking about, let us take a look at the following command and its output. This is an example of using the `-C` option.
```
rga -C 2 'red hat enterprise linux'
```
![Fully contextual search using rga][11]
As you can see, not only I get the match from my database file, but I can also see the rows that are chronologically before the match and also rows that are after the match. This did not randomly jumble my rows, which is quite nice because I did not use keys to number each row.
You might be wondering if something is wrong. I specified 2, but only got 1 line after. Well, that is because there is no row after the fedora linux row in my database. :)
**Using the -A option**
To better understand the use of `-A` option, let us have a look at an example.
```
rga -A 2 Yours
```
![Contextual search (after) using rga][12]
I see that is a letter of some sort… Makes me wonder what was in the body.
**Using the -B option**
I think that document is incomplete… Let us get a context of the lines that are above it.
To see the previous lines, we need to use the `-B` option.
```
rga -B 6 Yours
```
![Contextual search (before) using rga][13]
As you can see, I asked “Show me the 6 lines that come before my matched line” and I got this in the output. Quite handy for some situations, dont you think?
#### Multi-threaded search
Since ripgrep-all is a wrapper around ripgrep, you can make use of various options [that LinuxHandbook has already covered][14].
One of those options is multi-threading. By default ripgrep chooses the thread count based on heuristics. And so, ripgrep-all does the same too.
That doesnt mean you can not specify them yourself! :)
The option to do so is `-j`. Use it like so:
```
rga -j NUM-OF-THREADS
```
There isnt a practical example to show this *reliably*, so I will leave this for you to test it yourself ;)
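If you want to experiment anyway, one rough (and admittedly unscientific) approach is to time the same query with different thread counts and compare:
```
time rga -j 1 -i linux > /dev/null
time rga -j 8 -i linux > /dev/null
```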
#### Caching
One of the main selling points of rga, besides supporting the vast number of file extensions, is that it efficiently caches data.
As a default, depending on the OS, the following directories will store the cache generated by rga:
* Linux: `~/.cache/rga`
* macOS: `~/Library/Caches/rga`
I will first run the following command to remove my cache:
```
rm -rf ~/.cache/rga
```
Once the cache is cleared, I will run a simple query 2 times. I expect to see a performance improvement the second time.
```
time rga -i linux > /dev/null
time rga --rga-no-cache -i linux > /dev/null
```
![Automatic caching done by rga][15]
I deliberately chose the pattern linux as it occurs many times in The Linux Command Line books PDF and also in my .odt document as well as my database file. To check speed, I dont need to check the output, so that is redirected to the /dev/null file.
I see that the first time the command is run, it does not have a cache. But the second time, running the same command yields a faster run.
In the end, I also use the `--rga-no-cache` option to disable the use of the cache, even if it is present. The result is similar to the first run of the rga command.
### Conclusion
rga is the Swiss Army Knife of grep. It is one tool that can be used for almost any kind of file and it behaves similarly to grep, at least with the regex, less so with the options.
But all in all, rga is one of the tools that I recommend you use. Do comment and share your experience/thoughts!
--------------------------------------------------------------------------------
via: https://itsfoss.com/ripgrep-all/
作者:[Pratham Patel][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lkxed
[1]: https://github.com/phiresky/ripgrep-all
[2]: https://linuxhandbook.com/what-is-grep/
[3]: https://itsfoss.com/pdfgrep/
[4]: https://linuxhandbook.com/ripgrep/
[5]: https://github.com/phiresky/ripgrep-all/releases
[6]: https://linuxhandbook.com/grep-command-examples/
[7]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-33-19-800x197.webp
[8]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-33-43-800x242.webp
[9]: https://www.linuxcommand.org/tlcl.php
[10]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-36-50-800x239.webp
[11]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-37-21-800x181.webp
[12]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-37-40-800x161.webp
[13]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-38-01-800x305.webp
[14]: https://linuxhandbook.com/ripgrep
[15]: https://itsfoss.com/wp-content/uploads/2022/06/Screenshot-from-2022-06-27-22-39-32-800x468.webp

View File

@ -0,0 +1,265 @@
[#]: collector: (lujun9972)
[#]: translator: (hanszhao80)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Djinn: A Code Generator and Templating Language Inspired by Jinja2)
[#]: via: (https://theartofmachinery.com/2021/01/01/djinn.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Djinn一个受 Jinja2 启发的代码生成器和模板语言
======
代码生成器是非常有用的工具。我有时使用 [jinja2][1] 的命令行版本来生成高度冗余的配置文件和其他文本文件但它在转换数据方面功能有限。显然Jinja2 的作者有不同的想法,但我想要类似于 <ruby>列表推导<rt>list comprehensions</rt></ruby> 或 D 语言的 <ruby>可组合范围<rt>composable range</rt></ruby> 算法之类的东西。
我决定制作一个类似于 Jinja2 的工具,但让我可以通过使用范围算法转换数据来生成复杂的文件。这个想法非常简单:一种会被直接重写为 D 语言代码的模板语言。这样它就支持 D 语言所能做的一切,仅仅因为它 _是_ D 语言。我想要的是一个独立的代码生成器,但是由于 [D 语言的 `mixin` 特性][2]相同的模板语言也可以作为嵌入式模板语言工作例如Web 应用程序中的 HTML。有关该技巧的更多信息请参阅 [这篇关于在编译时将 Brainfuck 先编译为 D、再编译为机器代码的帖子][3]。
像往常一样,[源码在 GitLab 上][4]。[这篇文章中的例子也可以在这里找到。][5]
### 你好世界示例
这是一个演示这个想法的例子:
```
Hello [= retro("dlrow") ]!
[: enum one = 1; :]
1 + 1 = [= one + one ]
```
`[= some_expression ]` 类似于 Jinja2 中的 `{{ some_expression }}`,它在输出中呈现一个值。`[: some_statement; :]` 类似于 `{% some_statement %}` ,用于执行完整的代码语句。我更改了语法,因为 D 也大量使用花括号,并且将两者混合使模板难以阅读(还有一些特殊的非 D 指令,比如 `include`,它们被包裹在 `[<``>]` 中)。
如果你将上面的内容保存到一个名为 `hello.txt.dj` 的文件中并运行 `djinn` 命令行工具,你会得到一个名为 `hello.txt` 的文件,其中包含你可能猜到的内容:
```
Hello world!
1 + 1 = 2
```
如果你使用过 Jinja2你可能想知道第二行发生了什么。Djinn 有一个简化格式化和空格处理的特殊规则:如果源代码行包含 `[:` 语句或 `[<` 指令,但不包含任何非空格的输出,则整行都不会被输出。空行则仍会原样呈现。
### 生成数据
好的,现在来讲一些更实用的东西:生成 CSV 数据。
```
x,f(x)
[: import std.mathspecial;
foreach (x; iota(-1.0, 1.0, 0.1)) :]
[= "%0.1f,%g", x, normalDistribution(x) ]
```
一个 `[=``]` 对可以包含多个用逗号分隔的表达式。如果第一个表达式是一个由双引号包裹的字符串,则会被解释为 [格式化字符串][6]。下面是输出结果:
```
x,f(x)
-1.0,0.158655
-0.9,0.18406
-0.8,0.211855
-0.7,0.241964
-0.6,0.274253
-0.5,0.308538
-0.4,0.344578
-0.3,0.382089
-0.2,0.42074
-0.1,0.460172
0.0,0.5
0.1,0.539828
0.2,0.57926
0.3,0.617911
0.4,0.655422
0.5,0.691462
0.6,0.725747
0.7,0.758036
0.8,0.788145
0.9,0.81594
```
### 制作图片
这个例子展示了一个图片的生成过程。[经典的 Netpbm 图像库定义了一堆图像格式][7],其中一些是基于文本的。例如,这是一个 3 x 3 向量的图像:
```
P2 # <ruby>便携式灰色地图<rt>Portable GrayMap</rt></ruby>格式标识
3 3 # 宽和高
7 # 代表纯白色的值 (0 代表黑色)
7 0 7
0 0 0
7 0 7
```
你可以将上述文本保存到名为 `cross.pgm` 之类的文件中,很多图像工具都知道如何解析它。下面是一些 Djinn 代码,它以相同的格式生成 [Mandelbrot 集][8] 分形:
```
[:
import std.complex;
enum W = 640;
enum H = 480;
enum kMaxIter = 20;
ubyte mb(uint x, uint y)
{
const c = complex(3.0 * (x - W / 1.5) / W, 2.0 * (y - H / 2.0) / H);
auto z = complex(0.0);
ubyte ret = kMaxIter;
while (abs(z) <= 2 && --ret) z = z * z + c;
return ret;
}
:]
P2
[= W ] [= H ]
[= kMaxIter ]
[: foreach (y; 0..H) :]
[= "%(%s %)", iota(W).map!(x => mb(x, y)) ]
```
生成的文件大约为 800 kB但它可以很好地被压缩为 PNG
```
$ # 使用 GraphicsMagick 进行转换
$ gm convert mandelbrot.pgm mandelbrot.png
```
结果如下:
![][9]
### 解决谜题
这里有一个谜题:
![][10]
一个 5 行 5 列的网格需要用 1 到 5 的数字填充,每个数字在每一行中只能使用一次,在每一列中也只能使用一次(即,构成一个 5×5 的<ruby>拉丁方格<rt>Latin square</rt></ruby>)。相邻单元格中的数字还必须满足所有用大于号 `>` 表示的不等式。
[几个月前我使用了 <ruby>线性规划<rt>linear programming</rt></ruby>(英文缩写 LP][11]。线性规划问题是具有线性约束的连续变量系统。这次我将使用<ruby>混合整数线性规划<rt>mixed integer linear programming</rt></ruby>(英文缩写 MILP),它通过允许将变量约束为整数来推广 LP。事实证明这足以使其成为 NP 完全问题,而 MILP 恰好可以很好地模拟这个谜题。
在上一篇文章中,我使用 Julia 库 JuMP 来帮助解决这个问题。这次我将使用 [CPLEX基于文本的格式][12],它受到多个 LP 和 MILP 求解器的支持(如果需要,可以通过现成的工具轻松转换为其他格式)。这是上一篇文章中 CPLEX 格式的 LP
```
Minimize
obj: v
Subject To
ptotal: pr + pp + ps = 1
rock: 4 ps - 5 pp - v <= 0
paper: 5 pr - 8 ps - v <= 0
scissors: 8 pp - 4 pr - v <= 0
Bounds
0 <= pr <= 1
0 <= pp <= 1
0 <= ps <= 1
End
```
CPLEX 格式易于阅读,但复杂度高的问题需要大量变量和约束来建模,这使得手工编码既痛苦又容易出错。有一些特定领域的语言,例如 [ZIMPL][13],用于以高级方式描述 MILP 和 LP。对于许多问题来说它们非常酷但最终它们不如具有良好库如 JuMP支持的通用语言或使用 D 语言的代码生成器那样富有表现力。
我将使用两组变量来模拟这个谜题:`v_{r,c}` 和 `i_{r,c,v}`。`v_{r,c}` 将保存 r 行 c 列单元格的值(从 1 到 5。`i_{r,c,v}` 是一个二进制指示器,如果 r 行 c 列的单元格的值是 v则该指示器值为 1否则为 0。这两组变量是网格的冗余表示但第一种表示更容易对不等式约束进行建模而第二种表示更容易对唯一性约束进行建模。我只需要添加一些额外的约束来强制这两个表示是一致的。但首先让我们从每个单元格必须只有一个值的基本约束开始。从数学上讲这意味着对于给定的行和列即某个单元格其所有指示器中恰好有一个为 1其余都必须为 0。这可以通过以下等式来强制约束
```
[i_{r,c,1} + i_{r,c,2} + i_{r,c,3} + i_{r,c,4} + i_{r,c,5} = 1]
```
可以使用以下 Djinn 代码生成对所有行和列的 CPLEX 约束:
```
\ 单元格只有一个值
[:
foreach (r; iota(N))
foreach (c; iota(N))
:]
[= "%-(%s + %)", vs.map!(v => ivar(r, c, v)) ] = 1
[::]
```
`ivar()` 是一个辅助函数,它为我们提供变量名为 i 的字符串标识符,而 `vs` 存储从 1 到 5 的数字以方便使用。行和列内唯一性的约束完全相同,但在 i 的其他两个维度上迭代。
为了使变量组 i 与变量组 v 保持一致,我们需要如下约束(请记住,变量组 i 中只有一个元素的值是非零的):
```
[i_{r,c,1} + 2i_{r,c,2} + 3i_{r,c,3} + 4i_{r,c,4} + 5i_{r,c,5} = v_{r,c}]
```
CPLEX 要求所有变量都位于左侧,因此 Djinn 代码如下所示:
```
\ 连接变量组 i 和变量组 v
[:
foreach (r; iota(N))
foreach (c; iota(N))
:]
[= "%-(%s + %)", vs.map!(v => text(v, ' ', ivar(r, c, v))) ] - [= vvar(r,c) ] = 0
[::]
```
相邻单元格之间的不等号约束和左下角单元格值为 4 的约束写起来都很简单。剩下的便是将指示器变量声明为二进制变量,并为变量组 v 设置边界。加上变量的边界,总共有 150 个变量和 111 个约束。[你可以在仓库中看到完整的代码][14]。
[GNU 线性规划工具集][15] 有一个命令行工具可以解决这个 CPLEX MILP。不幸的是它的输出是一个包含了所有内容的体积很大的转储所以我使用 awk 命令来提取需要的内容:
```
$ time glpsol --lp inequality.lp -o /dev/stdout | awk '/v[0-9][0-9]/ { print $2, $4 }' | sort
v00 1
v01 3
v02 2
v03 5
v04 4
v10 2
v11 5
v12 4
v13 1
v14 3
v20 3
v21 1
v22 5
v23 4
v24 2
v30 5
v31 4
v32 3
v33 2
v34 1
v40 4
v41 2
v42 1
v43 3
v44 5
real 0m0.114s
user 0m0.106s
sys 0m0.005s
```
这是在原始网格中写出的解决方案:
![][16]
这些例子只是用来玩的但我相信你已经明白了。顺便说一下Djinn 代码仓库的 `README.md` 文件本身是使用 Djinn 模板生成的。
正如我所说Djinn 也可以用作嵌入在 D 语言代码中的编译期模板语言。我最初只是想要一个代码生成器,得益于 D 语言的元编程功能,这算是一个额外获得的功能。
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2021/01/01/djinn.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[hanszhao80](https://github.com/hanszhao80)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://jinja2docs.readthedocs.io/en/stable/
[2]: https://dlang.org/articles/mixin.html
[3]: https://theartofmachinery.com/2017/12/31/compile_time_brainfuck.html
[4]: https://gitlab.com/sarneaud/djinn
[5]: https://gitlab.com/sarneaud/djinn/-/tree/v0.1.0/examples
[6]: https://dlang.org/phobos/std_format.html#format-string
[7]: http://netpbm.sourceforge.net/doc/#formats
[8]: https://en.wikipedia.org/wiki/Mandelbrot_set
[9]: https://theartofmachinery.com/images/djinn/mandelbrot.png
[10]: https://theartofmachinery.com/images/djinn/inequality.svg
[11]: https://theartofmachinery.com/2020/05/21/glico_weighted_rock_paper_scissors.html
[12]: http://lpsolve.sourceforge.net/5.0/CPLEX-format.htm
[13]: https://zimpl.zib.de/
[14]: https://gitlab.com/sarneaud/djinn/-/tree/v0.1.0/examples/inequality.lp.dj
[15]: https://www.gnu.org/software/glpk/
[16]: https://theartofmachinery.com/images/djinn/inequality_solution.svg

View File

@ -3,16 +3,16 @@
[#]: author: "Nived Velayudhan https://opensource.com/users/nivedv"
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: reviewer: "turbokernel"
[#]: publisher: " "
[#]: url: " "
Kubernetes 架构指南
======
学习 Kubernetes 架构的不同组件是如何组合在一起的,这样你就可以更好地诊断问题、维护健康的集群和优化你的工作流。
学习 Kubernetes 架构中不同组件是如何组合在一起的,这样您就可以更好地排查问题、维护集群健康以及优化工作流。
![部件、模块、软件容器][1]
使用 Kubernetes 来编排容器,这是一个简单的描述,但理解它的实际含义和你如何实现它完全是另外一回事。如果你正在运行或管理 Kubernetes 集群,那么你就会知道 Kubernetes 由一台称为 _控制平面_计算机和许多其他 _工作节点_ 计算机组成。每一个都有一个复杂但健壮的堆栈,这使编排成为可能,熟悉每个组件有助于理解它是如何工作的。
使用 Kubernetes 来编排容器,这种描述说起来简单,但理解它的实际含义以及如何实现它完全是另外一回事。如果你正在运行或管理 Kubernetes 集群,那么你就会知道 Kubernetes 由一台称为 _控制平面_机器和许多其他 _工作节点_ 机器组成。每种类型都有一个复杂但稳定的堆栈,这使编排成为可能,熟悉每个组件有助于理解它是如何工作的。
![Kubernetes 架构图][2]
@ -20,26 +20,26 @@ Kubernetes 架构指南
### 控制平面组件
Kubernetes 安装在一个称为控制平面的机器上,它会运行 Kubernetes 守护进程,并在启动容器和吊舱时与之通信。下面介绍控制平面的各个组件。
Kubernetes 安装在一个称为控制平面的机器上,它会运行 Kubernetes 守护进程,并在启动容器和容器组时与之通信。下面介绍控制平面的各个组件。
#### Etcd
Etcd 是一种快速、分布式且一致的键值存储器,用作持久存储 Kubernetes 对象数据如吊舱、Replication Controller、密钥和服务的后台存储。Etcd 是 Kubernetes 存储集群状态和元数据的唯一地方。唯一直接与 etcd 对话的组件是 Kubernetes API 服务器。所有的其他组件都通过 API 服务器间接的从 etcd 读写数据。
Etcd 是一种快速、分布式一致性键值存储器,用作 Kubernetes 对象数据的持久存储,如 容器组 、副本控制器、密钥和服务。Etcd 是 Kubernetes 存储集群状态和元数据的唯一地方。唯一与 etcd 直连的组件是 Kubernetes API 服务器。其他所有组件都通过 API 服务器间接的从 etcd 读写数据。
Etcd 还实现了一个监控功能它提供了一个基于事件的接口用于异步监控密钥的更改。一旦你更改了一个密钥它的监控者就会收到通知。API 服务器组件严重依赖于此来获得通知,并将 etcd 移动到所需状态。
Etcd 还实现了一个监控功能它提供了一个基于事件的接口用于异步监控密钥的更改。一旦你更改了一个密钥它的监控者就会收到通知。API 服务器组件严重依赖于此来获得通知,并将 etcd 变更至期望状态。
_为什么 etcd 实例的数量应该是奇数_
你通常会在高可用HA环境中运行三个、五个或七个 etcd 实例,但这是为什么呢?因为 etcd 是分布式数据存储,可以水平扩展它,但你需要确保每个实例中的数据是一致的。为此,系统需要当前状态是什么达成共识Etcd 为此使用 [RAFT 共识算法][4]。
你通常会运行三个、五个或七个 etcd 实例实现高可用HA环境,但这是为什么呢?因为 etcd 是分布式数据存储,可以水平扩展它,但你需要确保每个实例中的数据是一致的。因此,需要为系统当前状态达成共识Etcd 为此使用 [RAFT 共识算法][4]。
RAFT 算法需要多数(或仲裁)集群才能进入下一个状态。如果你只有两个 etcd 实例并且他们其中一个失败的话,那么 etcd 集群无法转换到新的状态,因为不存在多数这个概念。如果你有三个 etcd 实例,一个实例可能会失败,但仍有 2 个实例可用于达到仲裁
RAFT 算法需要经过选举(或仲裁)集群才能进入下一个状态。如果你只有两个 etcd 实例并且他们其中一个失败的话,那么 etcd 集群无法转换到新的状态,因为不存在过半这个概念。如果你有三个 etcd 实例,一个实例可能会失败,但仍有 2 个实例可用于进行选举
#### API 服务器
API 服务器是 Kubernetes 中唯一直接与 etcd 交互的组件。Kubernetes 中的其他所有组件都必须通过 API 服务器来处理集群状态包括客户端kubectl。API 服务器具有以下功能:
* 提供在 etcd 中存储对象的一致方式。
* 对对象执行验证,方便客户端无法存储配置不正确的对象(如果它们直接写入 etcd 数据存储,可能会发生这种情况)。
* 执行验证对象,防止客户端存储配置不正确的对象(如果它们直接写入 etcd 数据存储,可能会发生这种情况)。
* 提供 RESTful API 来创建、更新、修改或删除资源。
* 提供[乐观并发锁][5],在发生更新时,其他客户端永远不会有机会重写对象。
* 对客户端发送的请求进行身份验证和授权。它使用插件提取客户端的用户名、ID、所属组并确定通过身份验证的用户是否可以对请求的资源执行请求的操作。
@ -48,57 +48,57 @@ API 服务器是 Kubernetes 中唯一直接与 etcd 交互的组件。Kubernetes
#### 控制器管理器
在 Kubernetes 中,控制器是监控集群状态的控制循环,然后根据需要进行或请求更改。每个控制器都尝试将当前集群状态移动到所需状态。控制器至少跟踪一种 Kubernetes 资源类型,这些对象都有一个字段来表示所需的状态。
在 Kubernetes 中,控制器持续监控集群状态,然后根据需要进行或请求更改。每个控制器都尝试将当前集群状态变更至期望状态。控制器至少跟踪一种 Kubernetes 资源类型,这些对象均有一个字段来表示期望的状态。
控制器示例:
* Replication ManagerReplicationController 资源的控制器)
* 复本控制器、DaemonSet 和 Job 控制器
* 部署控制器
* StatefulSet 控制器
* 副本管理器(管理副本管理器 资源的控制器)
* 副本控制器、守护进程集 和 任务控制器
* 无状态负载控制器
* 有状态负载控制器
* 节点控制器
* 服务控制器
* 点控制器
* 接入点控制器
* 命名空间控制器
* 持久卷控制器
控制器使用监控机制来获得更改通知。它们监视 API 服务器对资源的更改,对每次更改执行操作,无论是新建对象还是更新或删除现有对象。大多数时候,这些操作包括创建其他资源或更新监控的资源本身。不过,由于使用监控并不能保证控制器不会错过任何事件,它们还会定期执行一系列操作,确保没有错过任何事件。
控制器通过监控机制来获得变更通知。它们监视 API 服务器对资源的变更,对每次更改执行操作,无论是新建对象还是更新或删除现有对象。大多数时候,这些操作包括创建其他资源或更新监控的资源本身。不过,由于使用监控并不能保证控制器不会错过任何事件,它们还会定期执行一系列操作,确保没有错过任何事件。
控制器管理器还执行生命周期功能。例如命名空间创建和生命周期、事件垃圾收集、终止吊舱垃圾收集、[级联删除垃圾收集][7]和节点垃圾收集。有关更多信息,参考[云控制器管理器][8]。
#### 调度器
调度器是一个将吊舱分配给节点的控制平面进程。它会监视新创建没有分配节点的吊舱。调度器会给每个发现的吊舱分配运行它的最佳节点。
调度器是一个将容器组分配给节点的控制平面进程。它会监视新创建没有分配节点的容器组。调度器会给每个发现的容器组分配运行它的最佳节点。
满足吊舱调度要求的节点称为可行节点。如果没有合适的节点,那么吊舱会一直处于未调度状态,直到调度器可以放置它。一旦找到可行节点,它就会运行一组函数来对节点进行评分,并选择得分最高的节点,然后它会告诉 API 服务器所选节点的信息。这个过程称为绑定。
满足容器组调度要求的节点称为可调度节点。如果没有合适的节点,那么容器组会一直处于未调度状态,直到调度器可以放置它。一旦找到可调度节点,它就会运行一组函数来对节点进行评分,并选择得分最高的节点,然后它会告诉 API 服务器所选节点的信息。这个过程称为绑定。
节点的选择分为两步:
1. 过滤所有节点的列表,获得可以调度吊舱的可接受节点列表例如PodFitsResources 过滤器检查候选节点是否有足够的可用资源来满足吊舱的特定资源请求)。
1. 过滤所有节点的列表,获得可以调度容器组的节点列表例如PodFitsResources 过滤器检查候选节点是否有足够的可用资源来满足容器组的特定资源请求)。
2. 对第一步得到的节点列表进行评分和排序,选择最佳节点。如果得分最高的有多个节点,循环过程可确保吊舱会均匀地部署在所有节点上。
2. 对第一步得到的节点列表进行评分和排序,选择最佳节点。如果得分最高的有多个节点,循环过程可确保容器组会均匀地部署在所有节点上。
调度决策要考虑的因素包括:
* 吊舱是否请求硬件/软件资源?节点是否报告内存或磁盘压力情况?
* 容器组是否请求硬件/软件资源?节点是否报告内存或磁盘压力情况?
* 节点是否有与吊舱规范中的节点选择器匹配的标签?
* 节点是否有与容器组规范中的节点选择器匹配的标签?
* 如果吊舱请求绑定到特定地主机端口,该端口是否可用?
* 如果容器组请求绑定到特定地主机端口,该端口是否可用?
* 吊舱是否容忍节点的污点?
* 容器组是否容忍节点的污点?
* 吊舱是否指定节点亲和性或反亲和性规则?
* 容器组是否指定节点亲和性或反亲和性规则?
调度器不会指示所选节点运行吊舱。调度器所做的就是通过 API 服务器更新吊舱定义。然后 API 服务器通过监控机制通知 kubelet 吊舱已被调度,然后目标节点上的 kubelet 服务看到吊舱被调度到它的节点,它创建并运行吊舱的容器
调度器不会指示所选节点运行容器组。调度器所做的就是通过 API 服务器更新容器组定义。然后 API 服务器通过监控机制通知 kubelet 容器组已被调度,然后目标节点上的 kubelet 服务看到容器组被调度到它的节点,它创建并运行容器组
**[ 下一篇: [Kubernetes 如何创建和运行容器: 图解指南][9] ]**
### 工作节点组件
工作节点运行 kubelet 代理,这允许控制平面招募它们来处理作业。与控制平面类似,工作节点使用几个不同的组件来实现这一点。 以下部分描述了工作节点组件。
工作节点运行 kubelet 代理,这允许控制平面接纳它们来处理负载。与控制平面类似,工作节点使用几个不同的组件来实现这一点。 以下部分描述了工作节点组件。
#### Kubelet
@ -108,19 +108,19 @@ kubelet服务的主要功能有
* 通过在 API 服务器中创建节点资源来注册它正在运行的节点。
* 持续监控 API 服务器上调度到节点的吊舱
* 持续监控 API 服务器上调度到节点的容器组
* 使用配置的容器运行时启动吊舱的容器。
* 使用配置的容器运行时启动容器组的容器。
* 持续监控正在运行的容器,并将其状态、事件和资源消耗报告给 API 服务器。
* 运行容器存活探测,在探测失败时重启容器,当 API 服务器中删除吊舱时终止(通知服务器吊舱终止的消息)。
* 运行容器存活探测,在探测失败时重启容器,当 API 服务器中删除容器组时终止(通知服务器容器组终止的消息)。
#### 服务代理
服务代理kube-proxy在每个节点上运行确保一个吊舱可以与另一个吊舱对话,一个节点可以与另一个节点对话,一个容器可以与另一个容器对话。它负责监视 API 服务器对服务和吊舱定义的更改,以保持整个网络配置是最新的。当一项服务得到多个吊舱的支持时,代理会在这些吊舱之间执行负载平衡。
服务代理kube-proxy在每个节点上运行确保一个容器组可以与另一个容器组通讯,一个节点可以与另一个节点对话,一个容器可以与另一个容器对话。它负责监视 API 服务器对服务和容器组定义的更改,以保持整个网络配置是最新的。当一项服务得到多个容器组的支持时,代理会在这些容器组之间执行负载平衡。
kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务器,用于接受连接并将它们代理到吊舱。当前的实现是使用 iptables 规则将数据包重定向到随机选择的后端吊舱,而无需通过实际的代理服务器。
kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务器,用于接受连接并将它们代理到容器组。当前的实现是使用 iptables 规则将数据包重定向到随机选择的后端容器组,而无需通过实际的代理服务器。
它工作原理的高级视图:
@ -128,7 +128,7 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
* API 服务器会通知在工作节点上运行的 kube-proxy 代理有一个新服务。
* 每个 kube-proxy 通过设置 iptables 规则使服务可寻址,确保截获每个服务 IP/端口对,并将目的地址修改为支持服务的一个吊舱
* 每个 kube-proxy 通过设置 iptables 规则使服务可寻址,确保截获每个服务 IP/端口对,并将目的地址修改为支持服务的一个容器组
* 监控 API 服务器对服务或其端点对象的更改。
@ -142,7 +142,7 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
容器运行时负责:
* 如果容器镜像本地没有,则从镜像仓库中提取。
* 如果容器镜像本地不存在,则从镜像仓库中提取。
* 将镜像解压到写时复制文件系统,所有容器层叠加创建一个合并的文件系统。
@ -150,9 +150,9 @@ kube-proxy 之所以叫代理,是因为它最初实际上是一个代理服务
* 设置容器镜像的元数据,如覆盖命令、用户输入的入口命令,并设置 SECCOMP 规则,确保容器按预期运行。
* 提醒内核将进程、网络和文件系统等隔离分配给容器。
* 通知内核将进程、网络和文件系统等隔离分配给容器。
* 提醒内核分配一些资源限制,如 CPU 或内存限制。
* 通知内核分配一些资源限制,如 CPU 或内存限制。
* 将系统调用syscall传递给内核启动容器。
@ -170,7 +170,7 @@ via: https://opensource.com/article/22/2/kubernetes-architecture
作者:[Nived Velayudhan][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,217 @@
[#]: subject: "How static linking works on Linux"
[#]: via: "https://opensource.com/article/22/6/static-linking-linux"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Linux 上静态链接
======
学习如何使用静态库将多个 C 语言<ruby>对象<rt>object</rt></ruby>文件组合成一个单一的可执行文件。
![Woman using laptop concentrating][1]
图片作者Mapbox Uncharted ERG, [CC-BY 3.0 US][2]
使用 C 编写的应用程序通常有多个源代码文件,但最终你需要将它们编译成一个单一的可执行文件。
你可以通过两种方式来完成这项工作:创建一个<ruby>静态<rt>static</rt></ruby>库,或者创建一个<ruby>动态<rt>dynamic</rt></ruby>库(也被称为<ruby>共享<rt>shared</rt></ruby>库)。这两种类型的库在创建和链接的方式上有所不同。选择使用哪种方式取决于你的具体使用情况。
在 [上一篇文章][3] 中,我演示了如何创建一个动态链接的可执行文件,这是一种更常用的方法。在这篇文章中,我将解释如何创建一个静态链接的可执行文件。
### 链接器使用静态库
链接器是一个命令,它将一个程序的数个部分组合到一起,并为它们重新组织存储器分配。
链接器的功能包括:
* 集成一个程序的所有的部分
* 计算出新的存储器组织结构,以便把所有部分组合在一起
* 重新调整存储器地址,以便程序可以在新的存储器组织下运行
* 解析符号引用
这些链接器功能的结果,就是创建出一个被称为可执行文件的可运行程序。
静态库是通过将程序所需的所有库模块复制到最终的可执行镜像中来创建的。链接器会在编译过程的最后一步链接静态库。可执行文件是通过解析外部引用、将库例程与程序代码组合来创建的。
### 创建对象文件
这里是一个静态库的示例,以及其链接过程。首先,创建包含这些函数签名的头文件 `mymath.h`
```
int add(int a, int b);
int sub(int a, int b);
int mult(int a, int b);
int divi(int a, int b);
```
使用这些函数定义来创建 `add.c` 、`sub.c` 、`mult.c` 和 `divi.c` 文件:
```
// add.c
int add(int a, int b){
return (a+b);
}
//sub.c
int sub(int a, int b){
return (a-b);
}
//mult.c
int mult(int a, int b){
return (a*b);
}
//divi.c
int divi(int a, int b){
return (a/b);
}
```
现在,使用 GCC 来编译生成对象文件 `add.o`、`sub.o`、`mult.o` 和 `divi.o`
```
$ gcc -c add.c sub.c mult.c divi.c
```
`-c` 选项跳过链接步骤,并且只创建对象文件。
创建一个名为 `libmymath.a` 的静态库,接下来移除对象文件,因为它们不再被需要。(注意,使用 `trash` 命令比使用 `rm` 命令更安全。)
```
$ ar rs libmymath.a add.o sub.o mult.o divi.o
$ trash *.o
$ ls
add.c  divi.c  libmymath.a  mult.c  mymath.h  sub.c
```
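作为一个小小的补充检查(这只是基于上文文件名的示意),可以用 `ar t` 列出静态库中归档的对象文件,确认四个模块都在其中:
```
$ ar t libmymath.a
add.o
sub.o
mult.o
divi.o
```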
现在,你已经创建了一个名为 `libmymath` 的简单示例数学库,你可以在 C 代码中使用它。当然,现实中还有一些非常庞大复杂的 C 库,它们的开发者就是用这样的流程来生成最终产品的,你和我可以安装这些库并在 C 代码中使用。
接下来,在一些自定义代码中使用你的数学库,然后链接它。
### 创建一个静态链接的应用程序
假设你已经为数学运算编写了一个命令。创建一个名称为 `mathDemo.c` 的文件,并将这些代码复制粘贴至其中:
```
#include <mymath.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
  int x, y;
  printf("Enter two numbers\n");
  scanf("%d%d",&x,&y);
 
  printf("\n%d + %d = %d", x, y, add(x, y));
  printf("\n%d - %d = %d", x, y, sub(x, y));
  printf("\n%d * %d = %d", x, y, mult(x, y));
  if(y==0){
    printf("\nDenominator is zero so can't perform division\n");
      exit(0);
  }else{
      printf("\n%d / %d = %d\n", x, y, divi(x, y));
      return 0;
  }
}
```
注意:第一行是一个 `include` 语句,通过名称来引用你自己的 `libmymath` 库。
针对 `mathDemo.c` 创建一个名称为 `mathDemo.o` 的对象文件:
```
$ gcc -I . -c mathDemo.c
```
`-I` 选项告诉 GCC 在其后列出的路径中搜索头文件。在这个实例中,你指定的是当前目录,用一个单独的点(`.`)来表示。
将 `mathDemo.o` 与 `libmymath.a` 链接起来,创建最终的可执行文件。有两种方法可以向 GCC 表达这一点。
你可以直接指定相关文件:
```
$ gcc -static -o mathDemo mathDemo.o libmymath.a
```
或者,你可以具体指定库的路径和库的名称:
```
$ gcc -static -o mathDemo -L . mathDemo.o -lmymath
```
在后一个示例中,`-lmymath` 选项告诉链接器将 `libmymath.a` 中的对象文件与 `mathDemo.o` 链接起来,以创建最终的可执行文件。`-L` 选项指示链接器在其后给出的路径中查找库(类似于你之前使用 `-I` 的方式)。
### 分析结果
使用 `file` 命令来确认它是静态链接的:
```
$ file mathDemo
mathDemo: ELF 64-bit LSB executable, x86-64...
statically linked, with debug_info, not stripped
```
使用 `ldd` 命令,你将会看到该可执行文件不是动态链接的:
```
$ ldd ./mathDemo
        not a dynamic executable
```
你也可以检查 `mathDemo` 可执行文件的大小:
```
$ du -h ./mathDemo
932K    ./mathDemo
```
在我 [前一篇文章][5] 的示例中,动态链接的可执行文件只占有 24K 大小。
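作为补充,你还可以用 `nm` 确认这些库函数的符号确实被嵌入到了可执行文件之中(下面的命令只是基于本文函数名的一个示意):
```
$ nm mathDemo | grep -E ' T (add|sub|mult|divi)$'
```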
运行该命令,看看它是否正常工作:
```
$ ./mathDemo
Enter two numbers
10
5
10 + 5 = 15
10 - 5 = 5
10 * 5 = 50
10 / 5 = 2
```
看起来令人满意!
### 何时使用静态链接
动态链接可执行文件通常优于静态链接可执行文件,因为动态链接会保持应用程序的组件模块化。假如一个库接收到一次关键安全更新,那么它可以很容易地修补,因为它存在于应用程序的外部。
当你使用静态链接时,库的代码会“隐藏”在你创建的可执行文件之中,这意味着每当库更新时(相信我,库总是会更新的),唯一的修补方法就是重新编译并重新发布新的可执行文件。
不过,如果一个库的代码要么只存在于使用它的那个可执行文件中,要么运行在预期不会接收到任何更新的专用嵌入式设备中,那么静态链接就是一种可以接受的选择。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/static-linking-linux
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jayashree-huttanagoudar
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png
[2]: https://creativecommons.org/licenses/by/3.0/us/
[3]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux
[4]: https://www.redhat.com/sysadmin/recover-file-deletion-linux
[5]: https://opensource.com/article/22/5/dynamic-linking-modular-libraries-linux

View File

@ -3,7 +3,7 @@
[#]: author: "Gaurav Kamathe https://opensource.com/users/gkamathe"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: reviewer: "turbokernel"
[#]: publisher: " "
[#]: url: " "
@ -131,7 +131,7 @@ via: https://opensource.com/article/22/6/rust-toolchain-rustup
作者:[Gaurav Kamathe][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,96 @@
[#]: subject: "Download YouTube Videos with VLC (Because, Why Not?)"
[#]: via: "https://itsfoss.com/download-youtube-videos-vlc/"
[#]: author: "Community https://itsfoss.com/author/itsfoss/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 VLC 下载 YouTube 视频(因为,为什么不呢?)
======
[VLC][1] 是 [Linux 和其他平台上最受欢迎的视频播放器][2]之一。
它不仅仅是一个视频播放器。它提供了许多多媒体和网络相关的功能。你会惊讶地[了解 VLC 的能力][3]。
我将演示一个简单的 VLC 功能,即使用它下载 YouTube 视频。
是的。你可以在 VLC 中播放 YouTube 视频并下载它们。让我告诉你怎么做。
### 使用 VLC 媒体播放器下载 YouTube 视频
现在,有一些方法可以[下载 YouTube 视频][4]。使用浏览器扩展或使用专门的网站或工具。
但是如果你不想使用任何额外的东西,已经安装的 VLC 播放器可以用于此目的。
**重要提示:**在从 YouTube 复制链接之前,请确保在 YouTube 播放器中选择所需的视频质量,因为我们下载到的视频质量与复制链接时正在播放的质量相同。
#### 步骤 1获取所需视频的视频链接
你可以使用任何你喜欢的浏览器并从地址栏中复制视频链接。
![copy youtube link][5]
#### 步骤 2将复制的链接粘贴到网络流
“打开网络流”选项位于“媒体”菜单之下,“媒体”是顶部菜单栏的第一项。只需单击“媒体”,你就会看到“打开网络流”选项。你也可以使用快捷键 Ctrl + N 打开网络流。
![click on media and select network stream][6]
现在,你只需粘贴复制的 YouTube 视频链接,然后单击播放按钮。我知道它只是在我们的 VLC 中播放视频,但还有一点额外的步骤可以让我们下载当前的流媒体视频。
![paste video link][7]
#### 步骤 3从编解码器信息中获取位置链接
在“编解码器信息”窗口中,我们可以得到当前播放视频的位置链接。要打开“编解码器信息”窗口,你可以使用快捷键 Ctrl + J也可以在“工具”菜单下找到“编解码器信息”选项。
![click on tools and then codec information][8]
它将带来有关当前流媒体视频的详细信息。但我们需要的是“位置”。你只需复制位置链接,我们的任务就完成了 90%。
![copy location link][9]
#### 步骤 4将位置链接粘贴到新选项卡
打开任何你喜欢的浏览器并将复制的位置链接粘贴到新选项卡,它将开始在浏览器中播放确切的视频。
现在,右键单击播放视频,你将看到“将视频另存为”的选项。
![click on save][10]
它将打开文件管理器并询问你是否要在本地保存此视频。你还可以重命名该文件,默认情况下它将被命名为 “videoplayback.mp4”
![showing file in folder][11]
### 结论
如果你有互联网连接问题,或者如果你想保存一些视频以供将来观看,下载 YouTube 视频是有意义的。
当然,我们不鼓励盗版。此方法仅用于合理使用,请确保视频的创建者已允许该视频进行合理使用,并确保在将其用于其他地方之前将其归属于视频的原始所有者。
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-youtube-videos-vlc/
作者:[Community][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lkxed
[1]: https://www.videolan.org/vlc/
[2]: https://itsfoss.com/video-players-linux/
[3]: https://itsfoss.com/vlc-pro-tricks-linux/
[4]: https://itsfoss.com/download-youtube-videos-ubuntu/
[5]: https://itsfoss.com/wp-content/uploads/2022/06/copy-Youtube-link-800x190.jpg
[6]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-media-and-select-network-stream.png
[7]: https://itsfoss.com/wp-content/uploads/2022/06/paste-video-link.png
[8]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-tools-and-then-codec-information-800x249.png
[9]: https://itsfoss.com/wp-content/uploads/2022/06/copy-location-link.png
[10]: https://itsfoss.com/wp-content/uploads/2022/06/click-on-save-800x424.jpg
[11]: https://itsfoss.com/wp-content/uploads/2022/06/showing-file-in-folder-800x263.png