Merge pull request #30 from LCTT/master

sync lctt
This commit is contained in:
perfiffer 2022-08-08 23:45:43 +08:00 committed by GitHub
commit 9a4bd3fa9d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
164 changed files with 13317 additions and 3426 deletions

View File

@ -0,0 +1,75 @@
[#]: subject: "relaying mail to multiple smarthosts with opensmtpd"
[#]: via: "https://jao.io/blog/2021-11-09-relaying-mail-to-multiple-smarthosts.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14891-1.html"
使用 OpenSMTPD 将邮件中继到多个 smarthost
======
![](https://img.linux.net.cn/data/attachment/album/202208/03/162813rc900xbgx3xggxxg.jpg)
我喜欢使用本地 SMTP 守护进程从我的笔记本电脑发送电子邮件,因为这样我即使在断开连接的情况下也可以发送电子邮件,而且,即使是在网络正常的情况下,因为我不需要等待网络协议在远程 smarthost 上完成。哦,我还需要本地邮件投递。
多年来,我一直使用 Postfix 来达到这些目的,它的配置也算简单。但最近我开始喜欢上了 VPN(是 [mullvad][1],如果你想知道的话),而 Postfix 在 `/etc/resolv.conf` 发生变化时会变得混乱(例如,你在 Postfix 的服务启动后才启动 VPN)。我找到了一个非常简单的替代方案:[OpenSMTPD][2]。
假设我想在使用 [jao@gnu.org][3] 发送电子邮件时使用 SMTP 服务器 fencepost.gnu.org,而在我的 `From` 头中使用 [mail@jao.io][4] 或 [news@xmobar.org][5] 时使用 smtp.jao.io。OpenSMTPD 让你通过一个非常简单的配置文件 `/etc/smtpd.conf` 来实现:
(这是我的 Debian 机器中的默认配置文件位置。另一个流行的替代位置是 `/etc/opensmtpd.conf`。)
```
table aliases file:/etc/aliases
table secrets db:/etc/mail/secrets.db
table sendergnu { jao@gnu.org }
table senderjao { mail@jao.io, news@xmobar.org }
listen on localhost
action "local" mbox alias <aliases>
action "relaygnu" relay host smtp+tls://gnu@fencepost.gnu.org:587 auth <secrets>
action "relayjao" relay host smtps://jao@smtp.jao.io:465 auth <secrets>
match for local action "local"
match for any from mail-from <sendergnu> action "relaygnu"
match for any from mail-from <senderjao> action "relayjao"
```
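上面 `match` 规则的选路逻辑,可以用几行 Python 来示意(纯属帮助理解的草图,OpenSMTPD 本身并不是这样实现的;表名、动作名取自上面的配置,“reject” 占位值是本示例的假设):

```python
# 模拟 smtpd.conf 中 sendergnu / senderjao 两张发件人表到动作的映射
SENDER_TABLES = {
    "relaygnu": {"jao@gnu.org"},
    "relayjao": {"mail@jao.io", "news@xmobar.org"},
}

def pick_action(mail_from: str) -> str:
    """根据信封发件人地址(mail-from)选择要执行的中继动作。"""
    for action, senders in SENDER_TABLES.items():
        if mail_from in senders:
            return action
    # 没有命中任何 match 规则的非本地邮件会被拒绝(这里用占位值表示)
    return "reject"
```

例如,`pick_action("news@xmobar.org")` 会命中 `senderjao` 表,于是走 `relayjao` 动作,也就是通过 smtp.jao.io 中继。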
我们还为此配置了本地投递。这就是完整的配置文件!唯一还需要做的一件事是生成 `secrets.db` 文件,其中包含与键 `gnu` 和 `jao` 对应的用户和密码(这两个键只是任意取的名称)。为此,我们创建一个纯文本文件,其中的条目形式为 `<key> <user>:<password>`:
```
gnu jao:my fencepost password
jao mail@jao.io:xxxxxxxxxxxxxxxxx
```
`fencepost.gnu.org` 的用户是 `jao`,`smtp.jao.io` 的用户是 `mail@jao.io`(你看,不需要转义空格或 `@`)。然后我们使用 `makemap` 程序来创建密钥数据库:
```
makemap secrets && rm secrets
```
--------------------------------------------------------------------------------
via: https://jao.io/blog/2021-11-09-relaying-mail-to-multiple-smarthosts.html
作者:[jao][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jao.io
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Mullvad
[2]: https://www.opensmtpd.org/
[3]: mailto:jao@gnu.org
[4]: mailto:mail@jao.io
[5]: mailto:news@xmobar.org

View File

@ -0,0 +1,176 @@
[#]: subject: "7 key components of observability in Python"
[#]: via: "https://opensource.com/article/21/11/observability-python"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "Yufei-Yan"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14889-1.html"
Python 中可观测性的 7 个关键部分
======
> 学习为什么 Python 中的可观测性很重要,以及如何在你的软件开发生命周期中实现它。
![](https://img.linux.net.cn/data/attachment/album/202208/02/115713cbml51nooltb21bx.jpg)
你写的应用会执行很多代码,而且是以一种基本上看不到的方式执行。所以你是怎么知道:
* 代码是否在运行?
* 是不是在正常工作?
* 谁在使用它,如何使用?
可观测性是一种能力,可以通过查看数据来告诉你,你的代码在做什么。在这篇文章中,主要关注的问题是分布式系统中的服务器代码。并不是说客户端应用代码的可观测性不重要,只是说客户端往往不是用 Python 写的。也不是说可观测性对数据科学不重要,而是数据科学领域的可观测性工具(大多是 Jupyter 和快速反馈)是不同的。
### 为什么可观测性很重要
所以为什么可观测性重要呢?在软件开发生命周期(SDLC)中,可观测性是一个关键的部分。
交付一个应用不是结束,这只是一个新周期的开始。在这个周期中,第一个阶段是确认这个新版本运行正常。否则的话,很有可能需要回滚。哪些功能正常运行?哪些功能有细微的错误?你需要知道发生了什么,才能知道接下来要怎么做。这些东西有时候会以奇怪的方式不能正常运行。不管是天灾,还是底层基础设施的问题,或者应用进入了一种奇怪的状态,这些东西可能在任何时间以任何理由停止工作。
在标准 SDLC 之外,你需要知道一切都在运行中。如果没有,有办法知道是怎么不能运行的,这是非常关键的。
### 反馈
可观测性的第一部分是获得反馈。当代码给出它正在做什么的信息时,反馈可以在很多方面提供帮助。在模拟环境或测试环境中,反馈有助于发现问题,更重要的是,以更快的方式对它们进行分类。这可以改善在验证步骤中的工具和交流。
当进行<ruby>金丝雀部署<rt>canary deployment</rt></ruby>或更改特性标志时,你需要知道是继续、再等待一段时间,还是回滚,这时反馈就显得很重要了。
### 监控
有时候你怀疑有些东西不太对。也许是一个依赖服务有问题,或者是社交网站爆出了大量你的网站的问题。也许在相关的系统中有复杂的操作,然后你想确保你的系统能完美处理。在这些情况下,你就想把可观测性系统的数据整合到控制面板上。
当写一个应用的时候,这些控制面板需要是设计标准的一部分。只有当你的应用能把数据共享给这些控制面板,它们才会把这些数据显示出来。
### 警报
看控制面板超过 15 分钟,就像看着油漆变干一样,任何人都不应该遭受这种折磨。对于这种任务,我们要有报警系统。报警系统将可观测性数据与预期数据进行对比,当它们不匹配的时候就发出通知。完全深入研究事件管理超出了本文的范围。不过,从两方面来说,可观测应用是<ruby>报警友好的<rt>alert-friendly</rt></ruby>:
* 它们有足够多,足够好的数据,发出的警报才是高质量的。
* 警报有足够的数据,或者接收者可以很容易的得到数据,这样有助于找到源头。
高质量警报有三个特点:
* 较少的错报:如果有警报,那一定是有问题了。
* 较少的漏报:如果有问题,那一定有警报触发。
* 及时性:警报会迅速发出以减少恢复时间。
这三个特点是互相有冲突的。你可以通过提高监测的标准来减少错误警报,代价是增加了漏报。你也可以通过降低监测的门槛来减少漏报,代价是增加错报。通过收集更多数据,你也可以同时减少错报和漏报,而代价是降低了及时性。
同时改善这三个参数就更难了。这就要求高质量的可观测性数据。更高质量的数据可以同时改善这三个特点。
### 日志
有的人喜欢嘲笑用打印来调试的方法。但是,在一个大多数软件都不在你本机运行的世界里,你所能做的只有打印调试。日志记录就是打印调试的一种形式。尽管它有很多缺点,但 Python 日志库提供了标准化的日志记录。更重要的是,它意味着你可以通过这些库去记录日志。
应用程序要负责配置日志的记录方式。讽刺的是,在应用程序对配置日志负责了多年以后,现在越来越不是这样了。在现代容器<ruby>编排<rt>orchestration</rt></ruby>环境中,现代应用程序把日志写到标准错误和标准输出,并且信任<ruby>编排<rt>orchestration</rt></ruby>系统来合理地处理日志。
然而,你不应该依赖日志库或其他任何机制把日志送到某个特定的地方。如果你想让运维的人知道发生了什么,_使用日志,而不是打印_。
#### 日志级别
日志记录的一个最重要功能就是 _日志级别_。不同的日志级别可以让你合理地过滤并分流日志,但这只有在日志级别保持一致的情况下才能做到。最终,你应该在整个应用程序中保持日志级别的一致性。
即便选用的库的日志级别语义不兼容,也可以事后在应用层面通过适当的配置来修复,前提是库遵循了 Python 中最重要的通用惯例:使用 `getLogger(__name__)`。
大多数合理的库都会遵循这个约定。<ruby>过滤器<rt>Filters</rt></ruby>可以在日志对象发出之前就地修改它们。你可以给处理程序附加一个过滤器,这个处理程序会根据名称修改消息,使其具有合适的级别。
```
import logging
LOGGER = logging.getLogger(__name__)
```
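下面是一个这样的过滤器的最小示意(假设有一个名为 `noisy.lib` 的第三方库,它把普通情况也记录成 `ERROR`,我们在应用层把它降级为 `WARNING`;这个库名纯属虚构,仅作演示):

```python
import logging

class LevelRemapFilter(logging.Filter):
    """按 logger 名称前缀,就地修改日志记录的级别。"""

    def __init__(self, prefix, from_level, to_level):
        super().__init__()
        self.prefix = prefix
        self.from_level = from_level
        self.to_level = to_level

    def filter(self, record):
        if record.name.startswith(self.prefix) and record.levelno == self.from_level:
            record.levelno = self.to_level
            record.levelname = logging.getLevelName(self.to_level)
        return True  # 返回 True:保留该条记录,只是级别被改写了

# 把过滤器附加到处理程序上,这样所有经过它的记录都会被检查
handler = logging.StreamHandler()
handler.addFilter(LevelRemapFilter("noisy.lib", logging.ERROR, logging.WARNING))
```

由于库遵循 `getLogger(__name__)` 约定,`record.name` 就是库的模块名,这正是过滤器能按名称前缀匹配的原因。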
考虑到这一点,你现在必须明确日志级别的语义。这其中有很多选项,但是下面这些是我的最爱:
* `Error`:发送一个即时警报。应用程序处于一个需要操作人员关注的状态。(这意味着包含 `Critical` 和 `Error`。)
* `Warning`:我喜欢把这些称作“工作时间警报”。这种情况下,应该有人在一个工作日内关注一下。
* `Info`:这是在正常工作流程中发出的。如果怀疑有问题的时候,这个是用来帮助人们了解应用程序在做什么的。
* `Debug`:默认情况下,这个不应该在生产环境中出现。在模拟环境或开发环境下,可以发出来,也可以不发。如果需要更多的信息,在生产环境也可以特地被打开。
任何情况下都不要在日志中包含<ruby>个人身份信息<rt>Personal Identifiable Information</rt></ruby>(PII)或密码。无论日志级别是什么,都是如此:比如级别被更改、调试级别被激活,等等。日志聚合系统很少是 <ruby>PII 安全<rt>PII-safe</rt></ruby>的,特别是随着 PII 法规的不断发展(HIPAA、GDPR 等等)。
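一个常见的防御手段,是在日志发出之前用过滤器把疑似 PII 的内容抹掉。下面是一个只处理邮箱地址的最小示意(真实的 PII 检测远比一个正则表达式复杂,这里仅演示思路):

```python
import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+")

class RedactEmailFilter(logging.Filter):
    """在日志记录发出前,就地把形似邮箱地址的内容替换为占位符。"""

    def filter(self, record):
        record.msg = EMAIL_RE.sub("[REDACTED]", str(record.msg))
        record.args = ()  # 简化假设:消息不再需要 % 格式化参数
        return True
```

把它加到 logger 或处理程序上(`logger.addFilter(RedactEmailFilter())`)即可生效。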
#### 日志聚合
现代系统几乎都是分布式的。<ruby>冗余<rt>redundancy</rt></ruby>、<ruby>扩展性<rt>scaling</rt></ruby>,有时还有<ruby>管辖权<rt>jurisdictional</rt></ruby>方面的需要,都要求更多的水平分布;微服务则意味着垂直分布。登录到每台机器去查看日志已经不现实了,而且出于合理的访问控制原因,允许开发人员登录到机器中会给予他们过多的权限,这不是个好主意。
所有的日志都应该被发到一个聚合器。有一些商业方案,你也可以配置一个 ELK 栈,或者使用其他的数据库(SQL 或者 NoSQL)。作为一个真正的低技术含量的解决方案,你可以将日志写入文件,然后将它们发送到对象存储中。有很多解决方案,但最重要的事情是选择一个,并且将所有东西聚合到一起。
#### 记录查询
在将所有东西记录到一个地方后,会有很多日志。具体的聚合器可以定义如何写查询,但是无论是通过从存储中搜索还是写 NoSQL 查询,记录查询以匹配源和细节都是很有用的。
### 指标抓取
<ruby>指标抓取<rt>Metric Scraping</rt></ruby>是一个<ruby>服务器拉取<rt>server pull</rt></ruby>模型。指标服务器定时和应用程序连接,并且拉取指标。
最后,这意味着服务器需要连接和找到所有相关的应用服务器。
#### 以 Prometheus 为标准
如果你的指标聚合器是 Prometheus,那么以 [Prometheus][2] 格式提供一个<ruby>端点<rt>endpoint</rt></ruby>是很有用的。而且,即使聚合器不是 Prometheus,这样做也很有用:几乎所有的系统都包含与 Prometheus 端点兼容的<ruby>垫片<rt>shim</rt></ruby>。
使用客户端 Python 库给你的应用程序加一个 Prometheus 垫片,这将使它能够被大多数的指标聚合器所抓取。当 Prometheus 发现一个服务器,它就期望找到一个指标端点。这经常是应用程序路由的一部分,通常在 `/metrics` 路径下。不管 Web 应用的平台是什么如果你能在一个端点下运行一个定制类型的定制字节流Prometheus 就可以将它抓取。
对于大多数流行的框架,总有一个中间件插件或者类似的东西收集指标,如延迟和错误率。通常这还不够。你需要收集定制的应用数据:比如,每个端点的缓存<ruby>命中/缺失<rt>hit/miss</rt></ruby>率,数据库延迟,等等。
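为了说明这些自定义指标和指标端点大致长什么样,下面用纯 Python 手写一个极简的计数器,并按 Prometheus 文本格式渲染(仅为示意;真实项目应使用官方的 `prometheus_client` 库,示例中的指标名也是随意取的):

```python
class Counter:
    """只增不减的计数器。"""

    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        if amount < 0:
            raise ValueError("counter can only go up")
        self.value += amount

def render(metrics):
    """按 Prometheus 文本格式渲染,通常由挂在 /metrics 路径下的处理器返回。"""
    lines = []
    for m in metrics:
        lines.append(f"# HELP {m.name} {m.help_text}")
        lines.append(f"# TYPE {m.name} counter")
        lines.append(f"{m.name} {m.value}")
    return "\n".join(lines) + "\n"

# 应用自定义指标:缓存命中/未命中
cache_hits = Counter("cache_hits_total", "Cache hits")
cache_misses = Counter("cache_misses_total", "Cache misses")
```

应用代码在命中或未命中缓存时调用 `cache_hits.inc()` / `cache_misses.inc()`,而 `/metrics` 端点只需返回 `render(...)` 的结果,供指标服务器抓取。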
#### 使用计数器
Prometheus 支持多个数据类型。一个重要且巧妙的类型就是计数器。计数器总是在前进 —— 但有一点需要注意。
当应用重置,计数器会归零。计数器中的这些“<ruby>历时<rt>epochs</rt></ruby>”通过将计数器“创建时间”作为元数据发送来管理。Prometheus 知道不去比较两个不同<ruby>历时<rt>epochs</rt></ruby>的计数器。
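计数器“历时”的含义可以用一个纯 Python 草图说明:计算增量时,一旦数值回落,就视为发生过一次重置,从新起点重新累计(这是简化模型;Prometheus 的实际实现还会利用“创建时间”等元数据来区分历时):

```python
def total_increase(samples):
    """给定按时间排序的计数器采样值,计算总增量;数值回落视为一次重置。"""
    increase = 0.0
    prev = None
    for value in samples:
        if prev is None:
            prev = value
            continue
        if value < prev:
            # 应用重启,计数器归零后重新累计:新历时的起点贡献整个新值
            increase += value
        else:
            increase += value - prev
        prev = value
    return increase
```

例如采样序列 `[0, 5, 12, 3, 4]` 中,`12 → 3` 的回落被当作一次重置,总增量是 `12 + 3 + 1 = 16`,而不是简单的 `4 - 0`。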
#### 使用仪表值
仪表值会简单很多:它们测量瞬时值。用它们来测量会上下起伏的数据:比如,分配的总内存大小,缓存大小,等等。
#### 使用枚举值
枚举值对于整个应用程序的状态是很有用的,尽管它们可以以更精细的方式被收集。比如,你正使用一个<ruby>功能门控<rt>feature-gating</rt></ruby>框架,一个有多个状态(比如,使用中、关闭、<ruby>屏蔽<rt>shadowing</rt></ruby> 等)的功能,也许使用枚举会更有用。
### 分析
分析不同于指标,因为它们要对应连续的事件。比如,在网络服务器中,事件是一个外部请求及其产生的工作。特别是,在事件完成之前事件分析是不能被发送的。
事件包含特定的指标:延迟,数量,以及可能产生的对其他服务请求的细节,等等。
#### 结构化日志
现在一个可能的选择是将日志结构化:发送事件,就是发送一条带有正确格式的有效<ruby>载荷<rt>payload</rt></ruby>的日志。这些数据可以从日志聚合器中请求出来,然后解析,并放入一个合适的系统,从而获得对它的可见性。
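“结构化日志”最简单的形式,就是让每条日志成为一行可解析的 JSON。下面是一个最小的格式化器示意(字段的选择是随意的,仅作演示):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """把每条日志记录渲染成一行 JSON,便于日志聚合器解析。"""

    def format(self, record):
        return json.dumps({
            "logger": record.name,
            "level": record.levelname,
            "event": record.getMessage(),
        }, ensure_ascii=False)
```

把它挂到处理程序上(`handler.setFormatter(JSONFormatter())`)之后,聚合器一侧就可以按字段查询,而不必用正则去匹配自由文本。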
### 错误追踪
你可以使用日志来追踪错误,也可以用分析来追踪错误,但一个专门的错误系统还是值得的。一个为错误而优化的系统可以为每个错误发送更多的数据,因为错误毕竟还是罕见的。这样它就可以发送正确的数据,并用这些数据做更多智能的事情。Python 中的错误追踪系统通常和一般的异常处理关联,收集数据,并把它发到一个专门的错误聚合器。
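错误追踪系统“和一般的异常处理关联”这一点,在 Python 中通常通过 `sys.excepthook` 之类的钩子实现。下面是一个不依赖任何第三方服务的最小示意(这里用一个列表充当“错误聚合器”的占位;真实场景中会把数据发送到 Sentry 之类的服务):

```python
import sys
import traceback

ERRORS = []  # 占位:真实实现会把错误发送到专门的错误聚合器

def report_uncaught(exc_type, exc, tb):
    """收集未捕获异常的类型、消息和调用栈,然后交还给默认处理。"""
    ERRORS.append({
        "type": exc_type.__name__,
        "message": str(exc),
        "stack": "".join(traceback.format_exception(exc_type, exc, tb)),
    })
    sys.__excepthook__(exc_type, exc, tb)

# 所有未捕获的异常都会先经过我们的收集函数
sys.excepthook = report_uncaught
```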
#### 使用 Sentry
很多情况下,自己运行 Sentry 是正确的做法。当错误发生时,就说明有些东西出问题了。可靠地删除敏感数据是不可能的,因为一定会出现敏感数据被发送到不该去的地方的情况。
通常,这种工作量并不会很大:异常并不常出现。最后,这个系统并不需要很高的质量,也不需要高可靠性的备份。昨天的错误应该已经修复了(希望如此),如果没有,你还会再发现它的!
### 快速、安全、可重复:三者都要
可观测的系统开发起来更快,因为它们可以给你提供反馈。它们运行起来也更安全,因为当出问题的时候,它们也会更早的让你知道。最后,因为有反馈回路,可观测性也有助于围绕它构建可重复的过程。可观测性可以让你了解你的应用程序。而更了解它们,就胜利了一半。
#### 磨刀不误砍柴功
构建所有的可观测层是一件困难的事情,总会让人感觉是在做无用功,或者更像是“可以有,但是不急”的东西。
之后再做这个可以吗?也许吧,但是不应该。正确的构建可观测性可以加速后面所有阶段的开发:测试、监控,甚至是培训新人。在一个和科技行业一样动荡的行业,减少培训新人的工作量绝对是值得的。
事实上,可观测性很重要,所以尽早把它写出来,然后就可以在整个过程中进行维护。反过来,它也会帮你维护你的软件。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/observability-python
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[MCGA](https://github.com/Yufei-Yan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://opensource.com/article/21/7/run-prometheus-home-container

View File

@ -0,0 +1,59 @@
[#]: subject: "An open source project that opens the internet for all"
[#]: via: "https://opensource.com/article/22/6/equalify-open-internet-accessibility"
[#]: author: "Blake Bertuccelli https://opensource.com/users/blake"
[#]: collector: "lkxed"
[#]: translator: "yjacks"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14905-1.html"
Equalify让每一个人都可以无障碍访问互联网
======
> Equalify 是一个为了让互联网更易于使用的开源项目。
![](https://img.linux.net.cn/data/attachment/album/202208/07/114828xkk55krbsprkx7kk.jpg)
<ruby>无障碍访问<rt>Accessibility</rt></ruby> 是一把促进社会更加开放的钥匙。
我们在网上学习,我们在网上花钱,也在网上吵吵嚷嚷。更重要的是,我们在网上获取的信息激励我们创造一个更好的世界。当我们忽视无障碍访问的要求时,出生时就失明的人,或在战争中失去四肢的人,都将被阻挡在他人可以享受的网上信息之外。
*我们必须确保每个人都有通往开放互联网的通道*,而我正在通过开发 [Equalify][2],为实现这一目标而努力。
### 什么是 Equalify?
Equalify 是“无障碍访问平台”。
这个平台允许使用者们对数以千计的网站进行多种无障碍访问的扫描。通过使用我们的最新版本,用户还可以过滤无数的警告,创建一个对他们来说有意义的统计仪表盘。
这个项目才刚刚开始。Equalify 的目的是开源像 SiteImprove 这样的昂贵服务所提供的各种收费服务。有了更好的工具,我们可以确保互联网更容易访问、我们的社会更开放。
### 如何判断网站的无障碍访问?
W3C 的网络无障碍访问组织发布了《网络内容无障碍访问指南》(WCAG),为无障碍访问设定了标准。Equalify 和包括美国联邦政府在内的其它机构,都使用 WCAG 来定义网站的无障碍访问。我们扫描的网站越多,我们就越能了解 WCAG 标准的不足和潜力。
### 如何使用 Equalify?
花点时间查看一下我们的 GitHub这样你能更多的了解这个产品。[README][3] 提供了如何开始支持和使用 Equalify 的分步教程。
### 我们的目标
我们的最终目标是让开放的互联网更易于使用。根据 [The WebAIM Million][4] 的数据96.8% 的网站主页不满足 WCAG 标准。随着越来越多的人们开发和使用 Equalify我们将与有障碍的页面斗争。每个人都应该有平等的机会进入开放的互联网。在我们朝着为所有人建设一个更强大、更开放的社会而努力时Equalify 也正在朝着这个目标努力。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/equalify-open-internet-accessibility
作者:[Blake Bertuccelli][a]
选题:[lkxed][b]
译者:[yjacks](https://github.com/yjacks)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/blake
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/2022-06/plumeria-frangipani-bernard-spragg.jpg
[2]: https://equalify.app/
[3]: https://github.com/bbertucc/equalify
[4]: https://webaim.org/projects/million/

View File

@ -0,0 +1,158 @@
[#]: subject: "Build a Smart Parking System for a Metro Station"
[#]: via: "https://www.opensourceforu.com/2022/06/build-a-smart-parking-system-for-a-metro-station/"
[#]: author: "Dr Maheswari R. https://www.opensourceforu.com/author/dr-maheswari-r/"
[#]: collector: "lkxed"
[#]: translator: "Maisie-x"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14881-1.html"
为地铁站构建一个智能停车系统
======
> 本文将帮助你设计一个基于 Web 的应用程序,使用 Node-RED 为地铁站的汽车提供一个自动智能停车系统。
![Smart car parking][1]
Web 应用程序是在 Web 服务器上运行的软件,终端用户通过 Web 浏览器访问它。Web 应用程序使用客户端—服务器(C/S)架构编写:在该架构中,用户(客户端)由远程服务器(可能由第三方托管)提供服务。Web API(应用程序编程接口)在整个 Web 上都可用,用户可以通过 HTTP 协议访问该接口,如图 1 所示。
![Figure 1: Web API][4]
本文将演示如何为地铁站设计一个基于 Web 的汽车自动智能停车系统。它是使用开源的 Node-RED 设计的。该系统使用模板节点创建了一个交互式的、时尚的用户登录表单,用 HTML 和 CSS 编码以获取车主的详细信息,从而实现停车系统的自动化。我们可以在图 2 和图 3 中看到登录表单和提交表单的流程图。
使用的节点如下:
![table function][3]
### 地铁智能停车节点流程设计
Node-RED 由 `node-red` 命令激活。访问网址 `http://127.0.0.1:1880/` 可以看到 Node-RED 用户界面流程浏览器已经启用,可以认为 Node-RED 设置已完成,可以正常工作了。
按照下面给出的步骤创建登录表单和提交表单。
![Figure 2: Login form flow diagram][5]
![Figure 3: Submission form flow diagram][6]
### 登录表单
1、在节点画布中拖放 <ruby>http 输入<rt>http in</rt></ruby> 节点,这会为创建 Web 服务创建一个 HTTP 访问点。
2、将 <ruby>http 输入<rt>http in</rt></ruby> 节点连接到 <ruby>函数<rt>function</rt></ruby> 节点。函数节点有助于编写 JavaScript 函数处理节点接收到的消息。
![Figure 4: Login form for smart parking for cars][7]
3、将 <ruby>函数<rt>function</rt></ruby> 节点连接到 <ruby>模板<rt>template</rt></ruby> 节点,模板节点基于提供的模板创建一个 Web API。
4、将 <ruby>模板<rt>template</rt></ruby> 节点连接到 <ruby>http 响应<rt>http response</rt></ruby> 节点,它将响应 <ruby>http 输入<rt>http in</rt></ruby> 节点的请求。
![Figure 5: Submission form for smart parking for cars][8]
### 提交表单
1、拖放 <ruby>http 输入<rt>http in</rt></ruby> 节点并将其连接到 json 节点json 节点将数据转换为 JSON 字符串进行通信。
2、将 <ruby>http 输入<rt>http in</rt></ruby> 节点连接到 <ruby>调试<rt>debug</rt></ruby> 节点,调试节点的调试监控器会输出结果。
3、将 json 节点放置并连接到 <ruby>函数<rt>function</rt></ruby> 节点,将后者连接到 <ruby>http 响应<rt>http response</rt></ruby> 节点。
创建完整流程后,单击 Node-RED 窗口右上角的 <ruby>部署<rt>Deploy</rt></ruby> 按钮。访问 `http://127.0.0.1:1880/ui/` 这个链接查看用户界面。
输入链接然后单击 <ruby>提交<rt>Submit</rt></ruby> 后,该链接会跳转到下一页,你可以在该页面阅读所有新闻。
### Node-RED 工作流程
在单个 Node-RED 流程中,你可以创建登录表单和提交表单,如图 4 和图 5 所示。
现在我们将配置节点属性。
#### 登录表单
编辑 <ruby>http 输入<rt>http in</rt></ruby> 属性:
- <ruby>方法<rt>method</rt></ruby> 选择 “Get”
- <ruby>网址<rt>URL</rt></ruby> 设为 `/MetroStation`
- <ruby>名称<rt>name</rt></ruby> 配置为 “<ruby>智能停车系统<rt>Smart Parking</rt></ruby>”。
(LCTT 译注:下文 http 响应节点的名称为 “Smart parking”,p 字母小写;为了区分,此处中文翻译成“智能停车系统”。)
![Figure 6: Http in node property configurations][9]
> 注意URL 可以使用任何用户定义的本地变量。
现在选择 <ruby>函数<rt>function</rt></ruby> 节点,编辑函数节点属性:输入代码 `msg.url = project` ,并配置代码 <ruby>名称<rt>name</rt></ruby> 字段为 “<ruby>项目提交<rt>Project Submission</rt></ruby>”。
![Figure 7: Function node property configurations][10]
在<ruby>模板<rt>template</rt></ruby> 节点的属性窗口,为登录表单配置相应的 HTML 代码,并将代码的<ruby>名称<rt>name</rt></ruby>命名为 “<ruby>显示面板<rt>Display panel</rt></ruby>”。在此流程中使用了 Mustache 模板格式。(LCTT 译注:Mustache 是胡子的意思,因为它的嵌入标记 `{{ }}` 非常像胡子。Mustache 是一个简单的 Web 模板系统,被描述为无逻辑的模板引擎。Mustache 没有任何显式的控制流语句,例如 `if` 和 `else` 条件或 `for` 循环,但可以通过使用块标签处理列表和 lambda 来实现循环和条件求值。)
![Figure 8: Template node property configurations][11]
配置编辑 <ruby>http 响应<rt>http response</rt></ruby> 节点的属性,<ruby>名称<rt>name</rt></ruby> 设为 “<ruby>智能停车<rt>Smart parking</rt></ruby>”(图 9
![Figure 9: Http response node property configurations][12]
#### 提交表单
<ruby>http 输入<rt>http in</rt></ruby> 节点的编辑属性窗口,<ruby>方法<rt>method</rt></ruby> 选择 “POST” <ruby>网址<rt>URL</rt></ruby> 设为 `/project`
![Figure 10: Http in node property configurations][13]
在 JSON 节点的编辑窗口,<ruby>操作<rt>Action</rt></ruby>设为 “<ruby>JSON字符串与对象互转<rt>Convert between JSON String & Object</rt></ruby>”,参考图 11。
![Figure 11: JSON node property configurations][14]
<ruby>函数<rt>function</rt></ruby> 节点的配置如图 12 所示。
![Figure 12: Function node property configurations][15]
<ruby>http 响应<rt>http response</rt></ruby> 节点,编辑属性 <ruby>名称<rt>name</rt></ruby> 为 “<ruby>已提交项目<rt>Project Submitted</rt></ruby>”。
![Figure 13: Http response node property configurations][16]
> 注意:添加带有注释的注释节点,作为 “登录表单” 和 “提交表单” 的标识。
![Figure 14: Debug node property configurations][17]
### 用户界面的控制面板
当用户单击 <ruby>提交<rt>Submit</rt></ruby>,给出的数据将显示在用户界面和调试节点。如果单击 <ruby>重置<rt>Reset</rt></ruby>详细信息将被清除允许用户输入新的详细信息图15
![Figure 15: User login UI][18]
地铁停车费率通过超链接提供,收费表在用户界面显示。因此,汽车智能停车系统通过适当的超链接实现自动化,展示地铁站的停车费。该自动化系统的最终输出可以在 Node-RED 控制面板的用户界面和调试监控器调取和展示。
![Figure 16: Metro parking tariff][19]
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/build-a-smart-parking-system-for-a-metro-station/
作者:[Dr Maheswari R.][a]
选题:[lkxed][b]
译者:[Maisie-x](https://github.com/Maisie-x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-maheswari-r/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Smart-car-parking.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/table-function-node-red.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/table-function-node-red.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Web-Application-Programming-Interface300.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Login-Form-Flow-Diagram300.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-Submission-Form-Flow-Diagram300.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-Login-Form-of-Metro-Smart-Car-Parking300.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Submission-Form-of-Metro-Smart-Car-Parking300.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Http-in-Node-Property-Configurations300.jpg
[10]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-7-Function-Node-Property-Configurations300.jpg
[11]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-8-Template-Node-Property-Configurations300.jpg
[12]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-9-Template-Node-Property-Configurations300.jpg
[13]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-10-Http-in-Node-Property-Configurations300.jpg
[14]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-11-Json-Node-Property-Configurations300.jpg
[15]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-12-Function-Node-Property-Configurations300.jpg
[16]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-13-Http-Response-Node-Property-Configurations300.jpg
[17]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-14-Debug-Node-Property-Configurations300.jpg
[18]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-15-User-Login-UI300.jpg
[19]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-16-Parking-Tariff-Metro300.jpg

View File

@ -2,28 +2,29 @@
[#]: via: "https://ostechnix.com/install-docker-ubuntu/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "Donkey"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14871-1.html"
如何在 Ubuntu 22.04 LTS 中安装 Docker 和 Docker Compose
======
在 Ubuntu 中使用 Docker Compose 安装 Docker 引擎详细指导
在这篇文章中,我们将会明白 **Docker** 是什么,如何 **在 Ubuntu 中安装 Docker 引擎** 。此外,我们也将会明白如何 **安装 Docker Compose** ,它是一个定义并运行多容器的 Docker 应用。
![](https://img.linux.net.cn/data/attachment/album/202207/28/142549iwrj25mw9turhc9o.jpg)
我们已经在 Ubuntu 22.04 LTS 中正式的测试了这份指南。然而,它也应该对旧版本如 20.04 LTS 和 18.04 LTS 有效。为了更好的安全性和稳定性,我推荐你使用最新的版本—— Ubuntu 22.04 LTS
> 在 Ubuntu 中使用 Docker Compose 安装 Docker 引擎的分步指导
在这篇文章中,我们将会明白 Docker 是什么,如何 **在 Ubuntu 中安装 Docker 引擎** 。此外,我们也将会明白如何 **安装 Docker Compose** ,它是一个定义并运行多容器的 Docker 应用。
我们已经在 Ubuntu 22.04 LTS 中正式的测试了这份指南。然而,它也应该对旧版本如 20.04 LTS 和 18.04 LTS 有效。为了更好的安全性和稳定性,我推荐你使用最新的版本 —— Ubuntu 22.04 LTS 。
### 什么是 Docker
**Docker** 是一个快捷轻便的系统级虚拟化技术,开发者和系统管理员可以使用它构建具备所有必要依赖项的应用程序,并将其作为一个包发布。
**Docker** 是一个快捷轻便的系统级虚拟化技术,开发者和系统管理员可以使用它构建具备所有必要依赖项的应用程序,并将其作为一个包发布。
Docker 与其他如 VMWare 、Xen 、以及 VirtualBox 等工具的虚拟化方式不同,每个虚拟机不需要单独的客户操作系统。
所有的 Docker 容器有效地共享主机系统内核。每个容器都在同一操作系统中的隔离用户空间中运行。
所有的 Docker 容器有效地共享同一个主机系统内核。每个容器都在同一操作系统中的隔离用户空间中运行。
Docker 容器可以在任何 Linux 版本上运行。比如说你使用 Fedora ,我用 Ubuntu 。我们能相互开发、共享并分发 Docker 镜像。
@ -39,11 +40,10 @@ Docker 容器可以在任何 Linux 版本上运行。比如说你使用 Fedora
为了安装并配置 Docker ,你的系统必须满足下列最低要求:
1. 64 位 Linux 或 Windows 系统
2. 如果使用 Linux ,内核版本必须不低于 3.10
3. 能够使用 `sudo` 权限的用户
4. 在你系统 BIOS 上启用了 VT虚拟化技术支持 on your system BIOS. [参考: [如何查看 CPU 支持 虚拟化技术VT][1]]
4. 在你系统的 BIOS 中启用了 VT(虚拟化技术)支持(参考:[如何查看 CPU 是否支持虚拟化技术(VT)][1])
5. 你的系统应该联网
在 Linux ,在终端上运行以下命令验证内核以及架构详细信息:
@ -52,43 +52,37 @@ Docker 容器可以在任何 Linux 版本上运行。比如说你使用 Fedora
$ uname -a
```
**输出样例:**
输出样例:
```
Linux Ubuntu22CT 5.15.35-3-pve #1 SMP PVE 5.15.35-6 (Fri, 17 Jun 2022 13:42:35 +0200) x86_64 x86_64 x86_64 GNU/Linux
```
正如上面你看到的那样,我的 Ubuntu 系统内核版本是 **5.15.35-3-pve** 并且系统架构是 **64 位****x86_64 x86_64 x86_64 GNU/Linux**)。查看上方结果的黑体字。
正如上面你看到的那样,我的 Ubuntu 系统内核版本是 **5.15.35-3-pve** 并且系统架构是 **64 位****x86_64 x86_64 x86_64 GNU/Linux**)。
**注意:** 这里,我在 **[Proxmox][2]** 中使用 Ubuntu 22.04 容器。这是你看到上方内核版本中有 “pve” 字符的原因。如果你正在使用 Ubuntu 实体(或者虚拟)机,你将看到系统版本为 **5.15.35-3-generic**
> **注意:** 这里,我在 [Proxmox][2] 中使用 Ubuntu 22.04 容器。这是你看到上方内核版本中有 “pve” 字符的原因。如果你正在使用 Ubuntu 实体(或者虚拟)机,你将看到系统版本为 **5.15.35-3-generic**
内核版本需要不低于最低要求的版本,并且是 64 位机器。这样不会有任何问题,我们能顺利安装并使用 Docker 。
请注意你使用哪一个 Ubuntu 系统不重要。并且你使用 Ubuntu 桌面或服务器版本,亦或者其他 Ubuntu 变种如 Lubuntu 、Kubuntu 、Xubuntu ,都不重要。
Docker 会正常运行,只要你的系统内核版本不低于 3.10 ,并且是 64 位系统。
只要你的系统内核版本不低于 3.10 ,并且是 64 位系统Docker 都会正常运行
### 在 Ubuntu 22.04 LTS 中安装 Docker
首先,更新你的 Ubuntu 系统。
#### 1. 更新 Ubuntu
#### 1更新 Ubuntu
打开终端,依次运行下列命令:
```
$ sudo apt update
```
```
$ sudo apt upgrade
```
```
$ sudo apt full-upgrade
```
#### 2. 添加 Docker 库
#### 2添加 Docker 库
首先,安装必要的证书并允许 apt 包管理器使用以下命令通过 HTTPS 使用存储库:
@ -114,9 +108,9 @@ $ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/doc
$ sudo apt update
```
#### 3. 安装 Docker
#### 3安装 Docker
最后,运行下列命令在 Ubuntu 22.04 LTS 服务器中安装最新 Docker CE
最后,运行下列命令在 Ubuntu 22.04 LTS 服务器中安装最新 Docker CE
```
$ sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
@ -130,7 +124,7 @@ $ sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
$ apt-cache madison docker-ce
```
**输出样例:**
输出样例:
```
docker-ce | 5:20.10.17~3-0~ubuntu-jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages
@ -199,7 +193,7 @@ $ sudo systemctl enable docker
$ sudo docker version
```
**输出样例:**
输出样例:
```
Client: Docker Engine - Community
@ -234,7 +228,7 @@ Server: Docker Engine - Community
![Check Docker Version][4]
#### 4. 测试 Docker
#### 4测试 Docker
让我们继续,测试 Docker 是否运行正常:
@ -244,9 +238,9 @@ Server: Docker Engine - Community
$ sudo docker run hello-world
```
上述命令会下载一个 Docker 测试镜像,并在容器内执行一个 **hello_world** 样例程序。
上述命令会下载一个 Docker 测试镜像,并在容器内执行一个 “hello_world” 样例程序。
如果你看到类似下方的输出那么祝贺你Docker 正常运行在你的 Ubuntu 系统中。
如果你看到类似下方的输出那么祝贺你Docker 正常运行在你的 Ubuntu 系统中
```
Unable to find image 'hello-world:latest' locally
@ -281,9 +275,9 @@ For more examples and ideas, visit:
很好!可以使用 Docker 了。
#### 5. 作为非 root 用户运行 Docker (选做)
#### 5作为非 root 用户运行 Docker (选做)
默认情况下Docker 守护进程Docker daemon绑定到 Unix 套接字而不是 TCP 端口。由于 **Unix 套接字由 root** 用户拥有Docker 守护程序将仅以 root 用户身份运行。因此,普通用户无法执行大多数 Docker 命令。
默认情况下Docker 守护进程绑定到 Unix 套接字而不是 TCP 端口。由于 **Unix 套接字由 root 用户拥有**Docker 守护程序将仅以 root 用户身份运行。因此,普通用户无法执行大多数 Docker 命令。
如果你想要在 Linux 中作为非 root 用户运行 Docker ,参考下方链接:
@ -297,7 +291,7 @@ For more examples and ideas, visit:
下列任何方式都可以安装 Docker Compose 。
#### 方式 1 - 使用二进制文件安装 Docker Compose
#### 方式 1使用二进制文件安装 Docker Compose
从 [这里][7] 下载最新 Docker Compose 。
@ -317,22 +311,22 @@ $ sudo curl -L "https://github.com/docker/compose/releases/download/v2.6.1/docke
$ sudo chmod +x /usr/local/bin/docker-compose
```
运行下列命令检查安装的 Docker Composer 版本:
运行下列命令检查安装的 Docker Compose 版本:
```
$ docker-compose version
Docker Compose version v2.6.1
```
#### 方式 2 - 使用 PiP 安装 Docker Compose
#### 方式 2、使用 Pip 安装 Docker Compose
或许,我们可以使用 **PIP** 安装 Docker Compose 。Pip 是 Python 包管理器,用来安装使用 Python 编写的应用程序。
另外,我们也可以使用 **Pip** 来安装 Docker Compose。Pip 是 Python 包管理器,用来安装使用 Python 编写的应用程序。
参考下列链接安装 Pip 。
* [如何使用 Pip 管理 Python 包][8]
安装 pip 后,运行以下命令安装 Docker Compose。下列命令对于所有 Linux 发行版都是相同的!
安装 Pip 后,运行以下命令安装 Docker Compose。下列命令对于所有 Linux 发行版都是相同的!
```
$ pip install docker-compose
@ -350,8 +344,7 @@ $ docker-compose --version
docker-compose version 2.6.1, build 8a1c60f6
```
恭喜你!我们已经成功安装了 Docker Community 版本和 Docker Compose 。
恭喜你!我们已经成功安装了 Docker 社区版和 Docker Compose 。
安装了 Docker然后呢查看本系列的下一篇文章了解 Docker 基础知识。
@ -365,7 +358,7 @@ docker-compose version 2.6.1, build 8a1c60f6
在这篇教程中,我们讨论了 Docker 是什么,如何在 Ubuntu 22.04 LTS Jammy Jellyfish 中安装 Docker 。然后学习了如何通过运行 hello-world Docker 镜像测试 Docker 是否成功安装。最后,我们通过使用两种不同的方式安装 Docker Compose 作为本教程的结尾。
**资料:**
### 资料
* [Docker 主页][11]
@ -376,7 +369,7 @@ via: https://ostechnix.com/install-docker-ubuntu/
作者:[sk][a]
选题:[lkxed][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,146 @@
[#]: subject: "Use secret keyboard keys on Linux"
[#]: via: "https://opensource.com/article/22/7/linux-compose-key-cheat-sheet"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14882-1.html"
在 Linux 中使用组合键输入隐藏的字形
======
> 使用组合键,你不会被键盘所限制住。
![](https://img.linux.net.cn/data/attachment/album/202207/31/095532p72762ekberw7eb6.jpg)
典型的键盘只有约 100 个键位。
由于 `Shift` 键,许多键得以有两个字符(也称之为 <ruby>字形<rt>glyph</rt></ruby>)。字形常用于键入带有重音和变音符号的字母,生成数学公式或者货币中的符号,或者添加有趣的表情符号。在一些地区,有些键甚至有三个字形。
然而不论你身处何处有一些字形不会出现在你的键盘上。幸运的是Linux 提供了使用 <ruby>组合键<rt>Compose Key</rt></ruby> 来获取这些字形。
你的键盘上并没有一个专门的组合键,至少默认情况下没有,但是你可以把一个不常用的键设定为组合键。我在电脑上使用空格键旁边的 `Alt` 键,而在平板上使用菜单键,来作为组合键。
> **[下载 Linux 组合键速查表][2]**
### 在 GNOME 中设置组合键
![A screenshot shows the keyboard and mouse options visible. The "Compose Key" option is set to Right Alt.][3]
在 GNOME 桌面,从软件库中安装 <ruby>优化<rt>Tweaks</rt></ruby> 应用。你也可以从终端安装(基于 Debian 发行版用 `apt` 命令Fedora 用 `dnf`
```
$ sudo dnf install gnome-tweaks
```
启动优化应用后:
1. 单击左侧栏中的 <ruby>键盘和鼠标<rt>Keyboard & Mouse</rt></ruby> 类别
2. 找到 <ruby>组合键<rt>Compose key</rt></ruby> 设置并指定一个键
3. 关闭优化应用
### 在 KDE Plasma 桌面设置组合键
![A screenshot shows the advanced options threaded under Keyboard settings. "Configure keyboard options" is checked, "Position of Compose Key" is checked within that menu, and "Right Alt" is checked within that menu.][4]
在 KDE Plasma 桌面上,打开 <ruby>系统设置<rt>System Settings</rt></ruby>,找到 <ruby>输入设备<rt>Input Devices</rt></ruby> 控制界面。然后:
1. 在 <ruby>输入设备<rt>Input Devices</rt></ruby> 界面,点击 “<ruby>高级<rt>Advanced</rt></ruby>” 标签
2. 找到 <ruby>组合键<rt>Compose key</rt></ruby> 列表项并指定一个键
3. 点击右下角 “<ruby>应用<rt>Apply</rt></ruby>” 按钮,然后关闭 <ruby>系统设置<rt>System Settings</rt></ruby>
### 使用组合序列
为了输入隐藏字符,先按下组合键再松开,这样就进入了组合模式。在组合模式下,你按下并松开一个键,再按下并松开另一个键,来组合出一个字符。
例如:
1. 按下组合键并释放,你会进入组合模式
2. 按下单引号(`'`)并松开
3. 按下 `E` 并松开,这是一个有效的组合,所以现在退出了组合模式
你输入了一个字符:`É`
一些组合序列只需要两个键的组合,然而还有一些需要三个键,并且至少有一个特殊字符要按四次键。
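组合序列本质上就是一张查找表:一串按键对应一个字形。可以用几行 Python 模拟这个查表过程(表项摘自本文的例子,远非完整):

```python
# 组合序列 → 字形(节选自正文的例子)
COMPOSE_TABLE = {
    ("'", "E"): "É",
    ("'", "e"): "é",
    ("~", "n"): "ñ",
    ("<", "3"): "♥",
    ("p", "o", "o"): "💩",
}

def compose(*keys):
    """依次按下(并松开)的键组成一个序列,命中表项即得到对应字形。"""
    return COMPOSE_TABLE.get(tuple(keys))
```

例如 `compose("'", "E")` 就对应上文 “按组合键、按 `'`、按 `E`” 的过程。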
### 变音字符
这个世界很小,所以你的朋友的名字很有可能用到你的键盘上没有的字形。现在你不必再省略变音符号,而是可以带上适当的修饰符,正确地输入名字。
以下是常见变音符号的组合序列示例:
* `' + <字母>` = `á é í ó ú ć ń ý j́́ ẃ ź`
* "\` + <字母>" = `à è ì ò ù ǹ ỳ ẁ`
* `~ + <字母>` = `ã ẽ ĩ õ ũ ñ ỹ`
* `^ + <字母>` = `â ê î ô û ĉ ŷ ĵ ŵ ẑ`
* `u + <字母>` = `ă ĕ ĭ ŏ ŭ`
* `c + c` = `č`
* `- + <字母>` = `ā ē ī ō ū đ`
* `, + <字母>` = `ą ę į ǫ ų ç ḑ ţ`
这里仅仅罗列了常见的几个,并不是所有的组合。
#### 货币符号
得益于组合键,国际银行业务也变得容易:
* `- + Y` = `¥`
* `- + L` = `£`
* `= + E` = `€`
* `= + L` = `₤`
* `= + N` = `₦`
* `= + R` = `₹`
* `= + W` = `₩`
* `/ + m` = `₥`
* `R + s` = `₨`
* `C + r` = `₢`
* `F + r` = `₣`
重申,这不是完整的列表,但是一个好的开始。
#### 有趣的字形
变音符号和货币符号具有实用性,但是组合键也可以用来娱乐:
* `< + 3` = `♥`
* `< + >` = `⋄`
* `# + q` = `♩`
* `: + )` = `☺`
* `: + (` = `☹`
* `p + o + o` = `💩`
#### 长寿和繁荣
在 Linux 中我最喜欢的“秘密”字形是传统的 Vulcan 称呼,“长寿和繁荣”。
* `L + L + A + P` = `🖖`
### 找到所有的字形
通过组合键可以使用更多字形,你可以通过按随机组合序列来发现新的字形。查找字形的一种更有条理的方法是参考位于 `/usr/share/X11/locale/en_US.UTF-8` 中的 `Compose` 文件(需要根据你键盘使用的语言环境调整绝对路径)。
这个文件令人崩溃,因为它包含超过 6000 行的组合序列,其中许多是 ASCII 和 Unicode 的复杂组合。要快速轻松地参考常见和基础序列,你可以 [下载我们的组合键速查表][5]。它提供涵盖数学、排版、音乐、箭头、变音符号、货币等的序列。
现在你知道了这个秘密,你可以表达更多内容了。
*图片源自Seth Kenlon, CC BY-SA 4.0*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/linux-compose-key-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/linux_keyboard_desktop.png
[2]: https://opensource.com/downloads/linux-compose-key-cheat-sheet
[3]: https://opensource.com/sites/default/files/2022-04/gnome-tweaks-compose.jpeg
[4]: https://opensource.com/sites/default/files/2022-04/kde-settings-input-advanced-compose.jpeg
[5]: https://opensource.com/downloads/linux-compose-key-cheat-sheet

View File

@ -0,0 +1,121 @@
[#]: subject: "7 kinds of garbage collection for Java"
[#]: via: "https://opensource.com/article/22/7/garbage-collection-java"
[#]: author: "Jayashree Huttanagoudar https://opensource.com/users/jayashree-huttanagoudar"
[#]: collector: "lkxed"
[#]: translator: "Veryzzj"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14862-1.html"
Java 的七种垃圾收集器
======
![](https://img.linux.net.cn/data/attachment/album/202207/25/075744nw0c9c4vtvkgiuct.jpg)
> 了解 Java 中的内存管理。
用 C 或 C++ 这样的编程语言写一个应用时,需要编写代码来销毁内存中不再需要的对象。当应用程序扩展得越来越复杂时,未使用对象被忽略释放的可能性就越大。这会导致内存泄露,最终内存耗尽,在某个时刻将没有更多的内存可以分配。结果就是应用程序运行失败并出现 OutOfMemoryError 错误。但在 Java 中,<ruby>垃圾收集器<rt>Garbage Collection</rt></ruby>GC会在程序执行过程中自动运行减轻了手动分配内存和可能的内存泄漏的任务。
垃圾收集器并不只有一种Java 虚拟机JVM有七种不同的垃圾收集器了解每种垃圾收集器的目的和优点是很有用的。
### 1、Serial 收集器
![Serial threaded garbage collection][1]
垃圾收集器的原始实现,使用单线程。当垃圾收集器运行时,会停止应用程序,这通常称为 “停止世界”(stop the world)事件。它适用于能够承受短暂停顿的应用程序。该垃圾收集器占用的内存空间比较小,因此这是嵌入式应用程序的首选垃圾收集器类型。在运行时使用以下命令启用该垃圾收集器:
```
$ java -XX:+UseSerialGC
```
### 2、Parallel 收集器
![Parallel garbage collection][2]
像 Serial 收集器一样Parallel 收集器也使用“stop the world”方法。这意味着当垃圾收集器运行时应用程序线程会停止。但是不同的是Parallel 收集器运行时有多个线程执行垃圾收集操作。这种类型的垃圾收集器适用于在多线程和多处理器环境中运行中到大型数据集的应用程序。
这是 JVM 中的默认垃圾收集器,也被称为*吞吐量收集器*。使用该垃圾收集器时可以通过使用各种合适的 JVM 参数进行调优,例如吞吐量、暂停时间、线程数和内存占用。如下:
* 线程数:`-XX:ParallelGCThreads=<N>`
* 暂停时间:`-XX:MaxGCPauseMillis=<N>`
* 吞吐量(垃圾收集花费的时间与实际应用程序执行的时间相比):`-XX:GCTimeRatio=<N>`
* 最大堆内存:`-Xmx<N>`
Parallel 收集器可以使用该命令显式启用:`java -XX:+UseParallelGC`。使用这个命令时,新生代中的垃圾回收由多个线程进行,而老年代中的垃圾收集和内存压缩仍由单个线程完成。
还有一个版本的 Parallel 收集器叫做 “Parallel Old GC”,它对新生代和老年代都使用多线程,启用命令如下:
```
$ java -XX:+UseParallelOldGC
```
### 3、Concurrent Mark SweepCMS收集器
![Concurrent garbage collection][3]
Concurrent Mark Sweep(CMS)垃圾收集器与应用程序并发运行,对新生代和老年代都使用多线程。CMS 垃圾收集器删除无用对象后,不会对存活对象进行内存压缩。该垃圾收集器和应用程序一起运行,可以缩短应用程序的停顿时间,适用于只能接受短暂停顿的应用程序。这个收集器在 Java 9 中已被弃用,并在 Java 14 中被移除。如果你仍在使用带有这个垃圾收集器的 Java 版本,可以使用如下命令启用:
```
$ java -XX:+UseConcMarkSweepGC
```
在 CMS 垃圾收集器使用过程中,应用程序将暂停两次。首次暂停发生在标记可直接访问的存活对象时,这个暂停被称为*初始标记*。第二次暂停发生在 CMS 收集器结束时期,来修正在并发标记过程中,应用程序线程在 CMS 垃圾回收完成后更新对象时被遗漏的对象。这就是所谓的*重新标记*。
### 4、G1 收集器
![Garbage first][4]
G1 垃圾收集器旨在替代 CMS。G1 垃圾收集器具备并行、并发以及增量压缩的能力,且暂停时间较短。与 CMS 收集器使用的内存布局不同,G1 收集器将堆内存划分为大小相同的区域,并通过多个线程触发全局标记阶段。标记阶段完成后,G1 知道哪些区域可能大部分是空的,并在清除/删除阶段优先处理这些区域。
在 G1 收集器中,一个对象如果大小超过半个区域容量会被认为是一个“大对象” 。这些对象被放置在老年代中在一个被称为“humongous region”的区域中。 启用 G1 收集器的命令如下:
```
$ java -XX:+UseG1GC
```
### 5、Epsilon 收集器
该垃圾收集器是在 Java11 中引入的,是一个 *no-op*无操作收集器。它不做任何实际的内存回收只负责管理内存分配。Epsilon 只在当你知道应用程序的确切内存占用情况并且不需要垃圾回收时使用。启用命令如下:
```
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC
```
### 6、Shenandoah 收集器
Shenandoah 是在 JDK12 中引入的,是一种 CPU 密集型垃圾收集器。它会进行内存压缩,立即删除无用对象,并将释放的空间归还给操作系统。所有这一切都与应用程序线程并行发生。启用命令如下:
```
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC
```
### 7、ZGC 收集器
ZGC 为低延迟需要和大量堆空间使用而设计,允许当垃圾回收器运行时 Java 应用程序继续运行。ZGC 收集器在 JDK11 引入,在 JDK12 改进。在 JDK15ZGC 和 Shenandoah 都被移出了实验阶段。启用 ZGC 收集器使用如下命令:
```
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC
```
### 灵活的垃圾收集器
Java 为我们提供了灵活的内存管理方式,熟悉不同的可用方法有助于为正在开发或运行的应用程序选择最合适的内存管理方式。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/garbage-collection-java
作者:[Jayashree Huttanagoudar][a]
选题:[lkxed][b]
译者:[Veryzzj](https://github.com/Veryzzj)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jayashree-huttanagoudar
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/2022-07/jaya-java-gc-serial.webp
[2]: https://opensource.com/sites/default/files/2022-07/jaya-java-gc-parallel.webp
[3]: https://opensource.com/sites/default/files/2022-07/jaya-java-gc-concurrent.webp
[4]: https://opensource.com/sites/default/files/2022-07/g1.png
[5]: https://opensource.com/home-page-new


@ -3,14 +3,16 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14861-1.html"
在 Ubuntu 中使用 apt 命令列出可升级的软件包
======
[apt 命令][1] 用于 Debian 和 Ubuntu 中的包管理。虽然你可能已经熟悉安装和删除选项,但 apt 还提供了一些额外的功能。
![](https://img.linux.net.cn/data/attachment/album/202207/24/230954qjko0c0sn55ohjf0.jpg)
[apt 命令][1] 用于 Debian 和 Ubuntu 中的包管理。虽然你可能已经熟悉安装和删除选项,但 `apt` 还提供了一些额外的功能。
其中之一是能够查看系统上所有可升级的软件包。要显示它们,你所要做的就是在终端中使用以下命令:
@ -18,9 +20,9 @@
apt list --upgradable
```
如你所见,你甚至不需要 sudo 来列出可更新的包。它只是列出了可以更新的包。它不会更新它们。
如你所见,你甚至不需要使用 `sudo` 来列出可更新的包。它只是列出了可以更新的包,而不会更新它们。
实际上,当你运行 `sudo apt update` 命令更新本地包仓库缓存时apt 命令会添加此提示。
实际上,当你运行 `sudo apt update` 命令更新本地包仓库缓存时,`apt` 命令会添加此提示。
```
Fetched 1,243 kB in 17s (71.4 kB/s)
@ -30,17 +32,17 @@ Reading state information... Done
30 packages can be upgraded. Run 'apt list --upgradable' to see them.
```
我不记得在旧的 apt-get 命令中有任何类似的直接选项来列出所有可升级的包。这是 apt 在旧的 apt-get 命令之上添加的几个新功能之一。
我不记得在旧的 `apt-get` 命令中有任何类似的直接选项来列出所有可升级的包。这是 `apt` 在旧的 `apt-get` 命令之上添加的几个新功能之一。
让我们更详细地讨论一下。
### 列出所有可升级的包
你在这里应该知道的是**你只能通过 APT 包管理器列出可用的更新**。 因此,如果你已将 PPA 或[外部仓库][2]添加到系统的 sources.list你将看它们的更新。
你在这里应该知道的是**你只能列出通过 APT 包管理器可用的更新**。因此,如果你已将 PPA 或 [外部仓库][2] 添加到系统的 `sources.list`,你将看到来自它们的更新。
但是你不会在这里获得 AppImage、Flatpak、Snap 或其他一些打包格式的更新。
但是你不会在这里获得 AppImage、Flatpak、Snap 或一些其他打包格式的更新。
换句话说,它只适用于 apt 包。
换句话说,它只适用于 APT 包。
因此,要列出 Ubuntu 或 Debian 系统上的所有可升级包,你应该首先更新本地包缓存:
@ -48,10 +50,10 @@ Reading state information... Done
sudo apt update
```
然后你的系统将知道可用的软件包更新。 apt 命令告诉你在 update 命令结束时可以升级多少个软件包:
然后你的系统将知道可用的软件包更新。`apt` 命令告诉你在 update 命令结束时可以升级多少个软件包:
![The apt command shows the number of upgradable packages at the bottom of the apt update command output][3]
To see what package can be upgraded, run the command:
要查看可以升级的软件包,请运行以下命令:
```
@ -61,7 +63,7 @@ apt list --upgradable
你应该看到这样的输出:
```
[email protected]:~$ apt list --upgradable
~$ apt list --upgradable
Listing... Done
apparmor/jammy-updates 3.0.4-2ubuntu2.1 amd64 [upgradable from: 3.0.4-2ubuntu2]
brave-browser/stable 1.40.113 amd64 [upgradable from: 1.40.107]
@ -89,11 +91,11 @@ brave-browser/stable 1.40.113 amd64 [upgradable from: 1.40.107]
sudo apt upgrade
```
它列出了将要升级的软件包,然后要求按回车或 Y 确认升级。
它列出了将要升级的软件包,然后要求按回车或 `Y` 确认升级。
![Upgrade all packages][5]
如果你确定要升级所有软件包,则可以通过在命令中添加 -y 来跳过 “Do you want to continue” 部分。
如果你确定要升级所有软件包,则可以通过在命令中添加 `-y` 来跳过 “Do you want to continue” 部分。
```
sudo apt upgrade -y
@ -101,27 +103,27 @@ sudo apt upgrade -y
### 模拟升级(但不升级任何包)
这是人们在 apt list 命令之前所做的。使用模拟选项,你实际上不会进行任何更改。它仅显示运行升级时将安装或升级的软件包。
这是人们在 `apt list` 命令之前所做的。使用模拟选项,你实际上不会进行任何更改。它仅显示运行升级时将安装或升级的软件包。
```
apt -s upgrade
```
你不需要使用 sudo即使我在下面的截图中使用了它
你不需要使用 `sudo`(即使我在下面的截图中使用了它)。
![Running an upgrade simulation with apt command][6]
### 仅升级选定的包
如果你正在管理一个 Ubuntu 服务器,并且你不想升级所有软件包,而只想升级少数选定的软件包中的一个(如 MySQL/Ngnix你可以使用 apt 命令轻松完成。
如果你正在管理一个 Ubuntu 服务器,并且你不想升级所有软件包,而只想升级少数选定的软件包中的一个(如 MySQL/Nginx你可以使用 `apt` 命令轻松完成。
```
sudo apt --only-upgrade install package_name
```
实际上,如果你在已安装且有可用更新的软件包上运行 apt install 命令,它将升级该软件包。
实际上,如果你在已安装且有可用更新的软件包上运行 `apt install` 命令,它将升级该软件包。
使用 `--only-upgrade` 标志,你可以确保仅升级软件包(如果已安装)。如果尚未安装,它将不会安装给定的包。
使用 `--only-upgrade` 标志,你可以确保**仅升级**软件包(如果已安装)。如果尚未安装,它将不会安装给定的包。
你还可以通过提供名称来升级选定的几个包:
@ -129,7 +131,7 @@ sudo apt --only-upgrade install package_name
sudo apt --only-upgrade install package1 package2
```
你也可以做相反的事情并[保留升级中的选定软件包][7]。
你也可以做相反的事情[升级时保留选定的软件包][7]。
```
sudo apt-mark hold package_name
@ -137,7 +139,7 @@ sudo apt-mark hold package_name
这样,当你升级所有系统包时,将不会升级给定的包。
你可以使用以下命令删除保留:
你可以使用以下命令删除保留设置
```
sudo apt-mark unhold package_name
@ -147,15 +149,15 @@ sudo apt-mark unhold package_name
这有点棘手。
当你运行“apt list upgradable”命令时,它会显示所有可以升级的包。
当你运行 `apt list upgradable` 命令时,它会显示所有可以升级的包。
但是如果有新的内核版本可用,它们可能不会显示,因为内核包名称以 linux-headers-x-y 开头。这是因为系统将它们视为新包,而不是对已安装的包 linux-headers-a-b 的升级。
但是如果有新的内核版本可用,它们可能不会显示,因为内核包名称以 `linux-headers-x-y` 开头。这是因为系统将它们视为新包,而不是对已安装的包 `linux-headers-a-b` 的升级。
但是,你仍然会在可升级包列表中看到 “linux-generic-hwe” 类型的包。因为该软件包将被升级(使用较新的内核)。
但是,你仍然会在可升级包列表中看到 `linux-generic-hwe` 类型的包,因为该软件包将被升级(使用较新的内核)。
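如果想在可升级列表里单独查看内核相关的包,可以用 `grep` 过滤输出。下面用一段假设的示例输出演示这个管道的用法(实际使用时,把 `printf` 那部分换成 `apt list --upgradable` 即可):

```shell
# sample 是一段假设的 apt list --upgradable 输出,仅作演示
sample='linux-generic-hwe-22.04/jammy-updates 6.1.0 amd64
apparmor/jammy-updates 3.0.4-2ubuntu2.1 amd64
linux-headers-generic-hwe-22.04/jammy-updates 6.1.0 amd64'
# 内核相关的包名以 linux- 开头
printf '%s\n' "$sample" | grep '^linux-'
```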
### 总结
列出可升级包的能力是 apt 命令为旧的 apt-get 命令带来的几个新功能之一。有关此主题的更多信息,你可以阅读我的文章[解释 apt 和 apt-get 命令之间的区别][8]
列出可升级包的能力是 `apt` 命令为旧的 `apt-get` 命令带来的几个新功能之一。有关此主题的更多信息,你可以阅读我 [解释 apt 和 apt-get 命令之间的区别][8] 的文章
作为桌面用户,我并不总是检查可以升级的软件包。我直接去升级。但是,当我管理服务器时,我更喜欢查看可用的更新,然后决定是否进行升级。
@ -168,7 +170,7 @@ via: https://itsfoss.com/apt-list-upgradable/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,32 +3,31 @@
[#]: author: "Rikard Grossman-Nielsen https://opensource.com/users/rikardgn"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14868-1.html"
如何在 Linux 上创建音乐播放列表
如何编写 C 程序在 Linux 上创建音乐播放列表
======
使用我在 Linux 上制作的这个 C 程序在旅途中聆听你喜爱的歌曲。
![Open source software helps artists create music][1]
![](https://img.linux.net.cn/data/attachment/album/202207/26/223349t4yiqd1yikb9k117.jpg)
图片来源Opensource.com
> 使用我在 Linux 上制作的这个 C 程序在旅途中聆听你喜爱的歌曲。
我最近在 Linux 中编写了一个 C 程序,从我广泛的 MP3 库中创建一个较小的随机 MP3 文件选。该程序会遍历一个包含我的 MP3 库的目录,然后创建一个包含随机的、较小的歌曲选择的目录。然后我将 MP3 文件复制到我的智能手机上,以便随时随地收听。
我最近在 Linux 中编写了一个 C 程序,从我庞大的 MP3 库中创建一个较小的随机 MP3 文件选集。该程序会遍历一个包含我的 MP3 库的目录,然后创建一个包含随机的、较小的歌曲选集的目录。然后我将这些 MP3 文件复制到我的智能手机上,以便随时随地收听。
瑞典是一个人口稀少的国家,有许多农村地区没有完整的手机覆盖。这就是在智能手机上拥有 MP3 文件的原因之一。另一个原因是我并不总是有钱购买流媒体服务,所以我喜欢拥有自己喜欢的歌曲的副本。
你可以从它的 [Git 仓库][2]下载我的应用。我专门为 Linux 编写了它,部分原因是在 Linux 上很容易找到经过良好测试的文件 I/O 例程。多年前,我尝试使用专有的 C 库在 Windows 上编写相同的程序,但在尝试文件复制时遇到了困难。 Linux 使用户可以轻松直接地访问文件系统。
你可以从它的 [Git 仓库][2] 下载我的应用。我专门为 Linux 编写了它,部分原因是在 Linux 上很容易找到经过良好测试的文件 I/O 例程。多年前,我尝试使用专有的 C 库在 Windows 上编写相同的程序但在尝试文件复制时遇到了困难。Linux 使用户可以轻松直接地访问文件系统。
本着开源的精神,我没费多少力气就找到了 Linux 的文件 I/O 代码来激发我的灵感。我还发现了一些启发了我的分配内存的代码。我编写了随机数生成的代码。
该程序的工作方式如下所述:
1. 请求源目录和目标目录。
2. 请求 MP3 文件目录下的文件个数。
3. 搜索你希望复制的收藏的百分比(从 1.0% 到 88.0%)。如果你有 1000 个文件的集合并希望从你的集合中复制 125 个文件而不是 120 个文件,你也可以输入 12.5% 之类的数字。我将上限设置为 88%,因为复制超过 88% 的库将基本生成与你的基础库相似的库。当然,代码是开源的,因此你可以根据自己的喜好自由修改。
4. 使用指针和 malloc 分配内存。一些操作需要内存,包括代表音乐收藏中文件的字符串列表。还有一个列表来保存随机生成的数字。
1. 询问源目录和目标目录。
2. 询问存放 MP3 文件的目录下的文件个数。
3. 询问你希望复制的收藏的百分比(从 1.0% 到 88.0%)。如果你有 1000 个文件的集合,并希望从你的集合中复制 125 个文件而不是 120 个文件,你也可以输入 12.5% 之类的数字。我将上限设置为 88%,因为复制超过 88% 的库将基本生成与你的基础库相似的库。当然,代码是开源的,因此你可以根据自己的喜好自由修改。
4. 使用指针和 `malloc` 分配内存。一些操作需要内存,包括代表音乐收藏中文件的字符串列表。还有一个列表来保存随机生成的数字。
5. 生成所有文件范围内的随机数列表(例如,如果集合有 1000 个文件,则为 1 到 1000
6. 复制文件。
@ -181,13 +180,13 @@ while(1) {
}
```
这将从指定的文件中读取多个字节 (readByteCount) 到字符缓冲区中。该函数的第一个参数是文件名srcFileDesc。第二个参数是一个指向字符缓冲区的指针这之前在程序中声明过。该函数的最后一个参数是缓冲区的大小。
这将从指定的文件中读取多个字节`readByteCount`到字符缓冲区中。该函数的第一个参数是文件名(`srcFileDesc`)。第二个参数是一个指向字符缓冲区的指针,这之前在程序中声明过。该函数的最后一个参数是缓冲区的大小。
程序返回读取的字节数(在本例中为 4 个字节)。如果返回的数字为 0 或更少,则第一个 `if` 子句会跳出循环。
如果读取字节数为 0则所有写入完成循环中断以写入下一个文件。如果读取的字节数小于 0则发生错误并退出程序。
当读取 4 个字节时它会写入它们。write 函数接受三个参数。第一个是要写入的文件第二个是字符缓冲区第三个是要写入的字节数4 个字节) .该函数返回写入的字节数。
当读取 4 个字节时,它会写入它们。`write` 函数接受三个参数。第一个是要写入的文件第二个是字符缓冲区第三个是要写入的字节数4 个字节)。该函数返回写入的字节数。
如果写入了 0 个字节,则发生了写入错误,因此第二个 `if` 子句退出程序。
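文中 C 代码按固定字节数读入再写出、直到读不到数据为止的复制思路,也可以在 shell 里用 `dd` 做一个直观的类比(以下仅为演示,临时文件路径是随意取的):

```shell
# 准备一个小的源文件
printf 'hello world' > /tmp/copy-demo-src
# 以 4 字节为一块进行复制,类比 C 代码里 read/write 的循环
dd if=/tmp/copy-demo-src of=/tmp/copy-demo-dst bs=4 2>/dev/null
# 校验两个文件内容一致
cmp /tmp/copy-demo-src /tmp/copy-demo-dst && echo "copy ok"
```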
@ -206,7 +205,7 @@ via: https://opensource.com/article/22/7/c-linux-mp3
作者:[Rikard Grossman-Nielsen][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,104 @@
[#]: subject: "5 ways to learn C programming on Linux"
[#]: via: "https://opensource.com/article/22/7/learn-c-linux"
[#]: author: "Alan Smithee https://opensource.com/users/alansmithee"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14869-1.html"
在 Linux 上学习 C 语言的五种方式
======
![](https://img.linux.net.cn/data/attachment/album/202207/26/232122wc4c5g55363bgj5g.jpg)
> 请下载我们的电子书获得在 Linux 和 FreeDOS 上 C 语言编程的提示和技巧。
有许多关于为什么 C 语言能够经久不衰的说法。或许是因为它语法简单明了。又或许是因为它常被认为是实用的语言因为它不基于其他高级语言可以在任何平台上编译运行。C 显然是一种强大的语言,并且我认为它经久不衰与它作为其他技术的基础的方式相关。这里有 5 项我喜爱的基于 C 语言的技术,希望它们能够帮助你更多的了解 C 语言。
### 1、GObject 和 GTK
C 语言不是面向对象的编程语言,它没有 `class` 关键字。一些人用 C++ 进行面向对象编程,但还有一些人坚持用 C 和 GObject 库。GObject 库为 C 语言提供了一个 `class` 结构体GTK 项目则以提供可通过 C 访问的工具包而闻名。没有 GTK就没有 GIMPGTK 最初就是为 GIMP 开发的、GNOME以及其他成百上千的流行开源应用。
#### 了解更多
GObject 和 GTK 是使用 C 开始进行 GUI 编程的绝佳方式。它们“装备精良”,可以让你用 C 语言进行图形应用的编程,因为开发者为你做了许多“繁重工作”。他们定义了类和数据类型,创建了工具包,你所要做的就是将所有东西放在一起。
### 2、Ncurses
如果 GTK 超过了你的需求,你或许认为一个<ruby>终端用户界面<rt>terminal user interface</rt></ruby>TUI更适合你。Ncurses 库可以在终端创建“小部件”,创建一种在终端窗口上绘制图形的应用程序。你可以使用方向键控制界面,选择按钮和元素,就像不用鼠标来使用 GUI 应用一样。
#### 了解更多
利用 Ncurses 库使用 C 语言写一个 [猜数字][3] 游戏。
### 3、Lua 和 Moonscript
Lua 是一种脚本语言,它可以使用内置的 C API 访问 C 语言库。它十分精巧、快捷以及简单,拥有约 30 个函数和少量内置库。你可以使用 Lua 进行系统自动化、游戏修改和脚本编写、使用 LÖVE 之类的前端进行游戏开发,或者使用 GTK 进行一般应用程序开发(例如 [Howl 文本编辑器][4])。
#### 了解更多
Lua 十分好的一点是你可以从它开始学习掌握基本的编程理念,然后当你有足够勇气直面基础编程语言时,再探索它的 C 语言 API 。另一方面,如果你只会 Lua 那也没事儿。Lua 有很多的 [外部库][5] ,使其成为各种开发方式的绝佳选择。
### 4、Cython
Lua 不是唯一带有 C 接口的编程语言。[Cython][6] 是一种编译器和编程语言,旨在让为 Python 编写 C 扩展像编写 Python 代码一样容易。本质上,你编写的是 Python最终得到的是 C 语言程序。最简单的示例(保存为 `hello.pyx`
```
print("hello world")
```
创建一个 `setup.py` 脚本:
```
from setuptools import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("hello.pyx")
)
```
运行该 `setup` 脚本:
```
$ python3 setup.py build_ext --inplace
```
最后你会在同一个目录中得到一个 `hello.c``hello.cpython-39-x86_64-linux-gnu.so` 文件。
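构建出 `.so` 文件之后,就可以在同一目录下像普通 Python 模块一样导入它。由于 `hello.pyx` 的模块级 `print` 会在导入时执行,你应该能看到输出(以下假设上面的构建步骤已成功完成):

```
$ python3 -c "import hello"
hello world
```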
#### 了解更多
[Cython][7] 是 Python 的一个超集,支持 C 语言的函数和数据类型。它不可能帮你直接学习 C 语言,但它为希望学习 C 代码并将其集成到 Python 中的 Python 开发人员开辟了新的可能性。
### 5、FreeDOS
了解 C 语言的最好方式是编写 C 代码没有什么比写出你可以真正使用的代码更令人激动的了。FreeDOS 项目是 DOS 的开源实现,而 DOS 是 Windows 的前身。或许你已经用过 FreeDOS 了,比如把它作为运行 BIOS 更新程序的便捷开源方案,或者在模拟器中玩经典的计算机游戏。但你可以用 FreeDOS 做的远不止这些。它是学习 C 语言的理想平台,其中包含一系列鼓励你编写自己的命令和简单(如果你愿意,也可以不那么简单)应用程序的工具。当然你可以在任何系统上写 C 代码,但 FreeDOS 的便利可能会让你感到耳目一新。潜力没有上限,即使从小处着手,你也可以用 C 做一些非常有趣的事情。
### 下载电子书
你可以从我们编写的新 [电子书][8] 中学到更多 C 语言,并在我们的电子书中了解有关 FreeDOS 上 C 语言的更多信息。这些是编程文章的集合,可帮助你学习 C 语言,并演示如何以有用的方式用 C 写一些代码。
> **[下载电子书][8]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/learn-c-linux
作者:[Alan Smithee][a]
选题:[lkxed][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/laptop_screen_desk_work_chat_text.png
[2]: https://opensource.com/downloads/guide-c-programming
[3]: https://opensource.com/article/21/8/guess-number-game-ncurses-linux
[4]: https://opensource.com/article/20/12/howl
[5]: https://opensource.com/article/19/11/getting-started-luarocks
[6]: http://cython.org
[7]: https://opensource.com/article/21/4/cython
[8]: https://opensource.com/downloads/guide-c-programming


@ -3,13 +3,15 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14878-1.html"
修复 Ubuntu Linux 中 “Command python not found” 的错误
======
![](https://img.linux.net.cn/data/attachment/album/202207/30/071627r176w1k1y5dkkw6w.jpg)
如何在 Linux 终端中运行一个 Python 程序?像这样,对吗?
```
@ -18,31 +20,33 @@ python program.py
然而,如果你试图在 Ubuntu和其他一些发行版中使用 `python` 命令,它会抛出一个错误。
```
command python not found, did you mean:
command python3 from deb python3
command python from deb python-is-python3
```
如果你注意这个错误信息,它可以清除很多东西。**这里的 python 命令实际上是 python3**。
如果你注意这个错误信息,它说明了很多东西。**这里的 `python` 命令实际上是 `python3`**。
如果你不理解,不用担心。我将在这里详细解释。
### 为什么在 Ubuntu 上没有发现 python 命令?
这是因为 Python 语言不是以 python 的形式安装的,而是以 python3 或 python2 的形式安装的(在一些老的 Ubuntu 版本中)。
这是因为 Python 语言不是以 `python` 的形式安装的,而是以 `python3``python2` 的形式安装的(在一些老的 Ubuntu 版本中)。
在遥远的过去的某个时间点Python 实际上是作为 `python` 包/可执行文件提供的。当 Python 发布第二版时Ubuntu 和其他发行版不得不同时支持 Python 1.x 和 2.x 版本。
因此,他们将较新的 Python 版本命名为 `python2`,以区分这两个版本。其他应用或库也在其代码中指定 python 或 python2。
因此,他们将较新的 Python 版本命名为 `python2`,以区分这两个版本。其他应用或库也在其代码中指定 `python``python2`
最终Python 1 版本被完全停用,但软件包继续被命名为 python2。
最终Python 1 版本被完全停用,但软件包继续被命名为 `python2`
类似地,当 Python 3 版本发布时,发行版开始同时提供 `python2``python3` 包。
Python 2 不再被支持Python 3.x 是你在 Ubuntu 上安装的版本。该软件包仍被命名为 python3。
Python 2 不再被支持Python 3.x 是你在 Ubuntu 上安装的版本。该软件包仍被命名为 `python3`
**总结一下,你已经在 Ubuntu 上安装了 Python。它可以作为 python3 软件包使用。**
**总结一下,你已经在 Ubuntu 上安装了 Python。它是以 `python3` 软件包方式使用的。**
那么,当你[在 Ubuntu 上看到 Python command not found 的错误][1]时,你有什么选择?让我来介绍一下。
那么,当你 [在 Ubuntu 上看到 Python command not found 的错误][1] 时,你有什么选择?让我来介绍一下。
### 确保你的系统中已经安装了 Python
@ -66,7 +70,7 @@ sudo apt install python3
### 使用 python3 而不是 python
如果对你来说不是太麻烦,在需要的地方使用 python3 命令而不是 python。
如果对你来说不是太麻烦,在需要的地方使用 `python3` 命令而不是 `python`
想检查已安装的 Python 版本吗?请这样输入:
@ -77,7 +81,7 @@ python3 --version
然后你会在输出中得到版本的详细信息:
```
[email protected]:~$ python3 --version
~$ python3 --version
Python 3.10.4
```
@ -91,7 +95,7 @@ python3 program.py
### 将 python3 链接为 python
你可以在你的 .bashrc 文件中创建一个永久别名,像这样:
你可以在你的 `.bashrc` 文件中创建一个永久别名,像这样:
```
alias python='python3'
@ -99,9 +103,9 @@ alias python='python3'
这样,你可以运行 `python` 命令,而你的系统运行 `python3`
这在大多数情况下都会起作用,除非某些程序期望运行 /usr/bin/python。现在你可以在 /usr/bin/python 和 /usr/bin/python3 之间建立符号链接,但对于 Ubuntu 用户来说,存在一个更简单的选择。
这在大多数情况下都会起作用,除非某些程序期望运行 `/usr/bin/python`。现在,你可以在 `/usr/bin/python``/usr/bin/python3` 之间建立符号链接,但对于 Ubuntu 用户来说,存在一个更简单的选择。
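符号链接的原理可以用一个小演示说明:在临时目录里放一个假的 `python3` 桩脚本,再建立 `python -> python3` 的链接,这与 `python-is-python3` 包所做的事情在原理上类似(桩脚本输出的版本号是随意写的,仅作演示):

```shell
# 在临时目录中创建一个假的 python3 桩脚本
dir=$(mktemp -d)
printf '#!/bin/sh\necho "Python 3.10.4 (stub)"\n' > "$dir/python3"
chmod +x "$dir/python3"
# 建立 python -> python3 的符号链接
ln -s "$dir/python3" "$dir/python"
# 通过链接调用,实际执行的是 python3
"$dir/python"
```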
对于 Ubuntu 20.04 和更高版本,如果你安装了 python-is-python3 软件包,你有一个软件包可以自动完成所有链接创建。这也是原始错误信息所提示的。
对于 Ubuntu 20.04 和更高版本,如果你安装了 `python-is-python3` 软件包,你有一个软件包可以自动完成所有链接创建。这也是原始错误信息所提示的。
```
sudo apt install python-is-python3
@ -109,7 +113,7 @@ sudo apt install python-is-python3
![install python is python3 ubuntu][3]
你可以看到符号链接已经被创建,你可以使用 python 命令(实际上是运行 python3没有任何问题。
你可以看到符号链接已经被创建,你可以使用 `python` 命令(实际上是运行 `python3`),没有任何问题。
![checking python ubuntu][4]
@ -122,7 +126,7 @@ via: https://itsfoss.com/python-not-found-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,314 @@
[#]: subject: "How to Install Deepin Desktop in Arch Linux [Complete Guide]"
[#]: via: "https://www.debugpoint.com/deepin-arch-linux-install-20/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14867-1.html"
如何在 Arch Linux 中安装深度桌面DDE
======
![](https://img.linux.net.cn/data/attachment/album/202207/26/170414x01pmevoo8o8b6ob.jpg)
> 在本指南中,我们将解释在 Arch Linux 中安装漂亮的深度桌面DDE所需的步骤。
指南的第一部分解释了安装 Arch 基本系统的步骤。第二部分是在 Arch Linux 的基础上安装完整的深度桌面。
### 什么是深度桌面DDE
[深度操作系统][1] 是一个基于 Debian 稳定分支的、功能丰富且漂亮的桌面环境。深度桌面环境DDE是深度操作系统自主开发的桌面环境。它由它自己的 dde-kwin 窗口管理器驱动。深度桌面带有漂亮的停靠区和许多预装的深度原生的应用程序。
这个令人眼花缭乱的桌面环境 [可在 Arch 仓库中找到][2];这篇文章介绍了如何在 Arch Linux 中安装深度桌面。
本指南安装深度桌面环境 20.1。然而,其他版本的步骤也应该是类似的。
### 第一部分:安装 Arch Linux
如果你已经安装了 Arch Linux你可以跳过这一步直接进入安装深度桌面部分。
要快速安装基本的 Arch Linux请按照以下步骤进行。你也可以访问 [本指南][3] 了解以双启动或在虚拟机上安装 Arch Linux 的完整教程。
#### 下载 Arch Linux
从下面的链接下载 Arch Linux 的 .iso 文件。这里有磁力链和 BT 链接。一旦你下载好了,就把 ISO 写入 U 盘。然后从该驱动器启动。
> **[下载 Arch Linux][4]**
如果你打算通过 [GNOME Boxes][5]、[virt-manager][6] 将其安装为虚拟机镜像 —— 那么你不需要将其写入 U 盘。
#### 启动和配置分区
从 Arch Linux ISO 启动后,你必须运行一系列的命令来安装基本系统。
首先,运行下面的命令,找出设备的标识符。
```
fdisk -l
```
![fdisk -l 之前的分区][7]
然后用此设备标识符,运行下面的命令,开始对你的磁盘进行分区。请确保根据你的系统而修改下面的 `/dev/sda` 参数。
```
cfdisk /dev/sda
```
在下一个提示中选择 `label type = dos`
选择可用空间,并从底部选择 “NEW” 选项。在这个例子中,我将创建三个分区,如下所示:
```
/dev/sda1 - 1G - for /boot
/dev/sda2 - 5G - for root
/dev/sda3 - 1G - for swap
```
![cfdisk][8]
在下一个屏幕中,提供启动分区(`/boot`)的大小(在这个例子中,我给出了 1GB。选择它作为主分区。
对 5GB 大小的主根分区(`/`)重复同样的步骤。
![改变交换分区的类型][9]
用同样的步骤创建一个大小为 1G 的交换分区(你可以根据你的需要改变大小)。创建交换分区后,确保在底部选择类型,并将其标记为 “Linux Swap/Solaris” 选项的交换分区。
![cfdisk 的最终分区列表][10]
完成后,用底部的 “Write” 选项将变化写到磁盘上。确保你在写之前做一个备份,因为这是你系统中的一个永久性的改变。
在你继续之前,运行下面的命令来检查。在这个例子中,你可以看到,列出了三个分区。
```
fdisk -l
```
![fdisk 中的最终分区列表][11]
依次运行下面的命令,在上面新创建的分区中格式化并创建一个 ext4 文件系统。确保你根据你的需要改变 `/dev/sda1``/dev/sda2` 参数。
```
mkfs.ext4 /dev/sda1
mkfs.ext4 /dev/sda2
mkswap /dev/sda3
swapon /dev/sda3
```
完成后,挂载系统并创建必要的目录。
```
mount /dev/sda2 /mnt
mkdir /mnt/boot /mnt/var /mnt/home
mount /dev/sda1 /mnt/boot
```
同样,确保你根据你的系统改变 `/dev/sda1`、`/dev/sda2` 和 `/dev/sda3` 参数。
![准备文件系统][12]
#### 安装基本系统
我希望你已经连接到互联网了。如果没有,请尝试使用 USB 网卡或有线网络连接Arch 安装程序会自动配置和检测。如果你没有可用的有线连接,请按照本指南使用 Arch Linux 安装程序配置无线 Wi-Fi 网络。
依次运行下面的命令,将基本系统安装到挂载的分区中。下载的大小约为 400MB。
```
pacman -Syy
pacstrap /mnt base base-devel linux linux-firmware nano dhcpcd net-tools grub
```
![安装基本系统][13]
一旦完成,生成一个文件系统表,没有这个表你就无法启动系统。
```
genfstab -U /mnt >> /mnt/etc/fstab
```
#### 配置基本系统
依次按照下面的命令来配置基本系统。这包括设置你的地区和语言,添加一个登录用户,以及设置互联网。
```
arch-chroot /mnt
nano /etc/locale.gen
```
去掉开头的 `#`,取消对你选择的语言环境的注释。在本指南中,我选择了 `en_US.UTF-8 UTF-8`。按 `CTRL+O`、回车和 `CTRL+X` 退出 nano。
![改变语言环境][14]
使用以下方法生成语言环境数据。
```
locale-gen
```
使用下面的命令设置语言。
```
echo LANG=en_US.UTF-8 > /etc/locale.conf
export LANG=en_US.UTF-8
```
设置本地时区。
```
ln -s /usr/share/zoneinfo/America/New_York /etc/localtime
```
同样,你可以根据你的需要来选择它们。你可以通过以下命令列出本地时区。
```
ls /usr/share/zoneinfo
ls /usr/share/zoneinfo/America
```
依次使用下面的命令设置硬件时钟、创建主机名并启用互联网的 DHCP。你可以根据你的想法把 `debugpoint-pc` 改为任何主机名。
```
hwclock --systohc --utc
echo debugpoint-pc > /etc/hostname
systemctl enable dhcpcd
```
下一步是设置 `root` 用户的密码、创建一个管理员用户,并将该用户添加到 `sudoers` 文件中。
按照下面的命令依次进行。确保根据你的需要将用户名`debugpoint` 改为其他名称。
```
passwd root
useradd -m -g users -G wheel -s /bin/bash debugpoint
passwd debugpoint
```
![创建用户][15]
打开 `sudoers` 文件,添加以下几行。
```
nano /etc/sudoers
```
添加下面几行。由于你已经创建了 `root` 用户,该条目应该已经有了。
```
root ALL=(ALL) ALL
debugpoint ALL=(ALL) ALL
```
![更新 sudoers 文件][16]
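由于上面的 `useradd` 已经把用户加入了 `wheel` 组,另一种常见的做法是改为启用 `wheel` 组的整行配置(在 `/etc/sudoers` 中取消下面这行的注释即可,与为单个用户添加条目二者选其一):

```
%wheel ALL=(ALL) ALL
```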
安装 GRUB建立初始的 Ramdisk 环境,并使用下面的命令卸载系统。
```
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
mkinitcpio -p linux
exit
```
![配置 GRUB][17]
然后重新启动你的系统。
```
umount /mnt/boot
umount /mnt
reboot
```
现在你已经成功地安装了 Arch Linux 基本系统。现在是安装完整的深度桌面的时候了。
### 第二部分:在 Arch Linux 中安装深度桌面
重新启动后,从 GRUB 中选择 Arch Linux。在 Arch Linux 的提示符下,开始依次运行以下命令。这些命令安装 Xorg 服务器、Lightdm 显示管理器和深度桌面组件。
对于所有的命令,使用默认的包版本,即在询问时按回车。
安装 Xorg 和显示管理器。大约安装大小为 80 MB。
```
sudo pacman -S --need xorg lightdm
```
安装额外的组件和应用程序(约 550 MB
```
sudo pacman -S --need deepin deepin-extra
```
安装完成后,通过修改 Lightdm 配置文件启用深度欢迎页。按照下面的命令。
```
nano /etc/lightdm/lightdm.conf
```
并添加下面这一行。保存该文件(`CTRL+O`、`CTRL+X`)。
```
greeter-session=lightdm-deepin-greeter
```
![在 Lightdm 登录页中添加深度欢迎欢迎页][18]
现在是时候把显示管理器和网络管理器作为服务启用了。这样,下次登录时,它们就可以由 systemd 自动运行。
```
systemctl enable lightdm
systemctl enable NetworkManager
```
![启用 Lightdm 和网络][19]
使用 `reboot` 命令重新启动系统。
```
reboot
```
如果一切顺利,你应该看到深度桌面的登录提示。使用你刚刚在上面的步骤中创建的凭证登录。你应该会看到最新的深度桌面环境。
![Arch Linux 中的深度 20.1 登录屏幕][20]
![Arch Linux中的深度桌面 20.1][21]
### 总结
我希望这个指南能帮助你在 Arch Linux 中安装深度桌面。不过它不是我的日常使用环境,我觉得深度桌面本质上有些慢,可能是因为有太多的颜色渲染和动画,而且尽管它是建立在 Qt 上的,但没有为深度桌面进行适当的优化。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/deepin-arch-linux-install-20/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/2020/09/deepin-20-review/
[2]: https://archlinux.org/groups/x86_64/deepin/
[3]: https://www.debugpoint.com/2020/11/install-arch-linux/
[4]: https://www.archlinux.org/download/
[5]: https://www.debugpoint.com/2020/05/install-use-gnome-boxes/
[6]: https://www.debugpoint.com/2020/11/virt-manager/
[7]: https://www.debugpoint.com/wp-content/uploads/2020/12/fdisk-l-before.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2020/12/cfdisk-1024x159.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2020/12/Swap-parition-type-change.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2020/12/final-partition-list-in-cfdisk-1024x178.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2020/12/final-partition-list-in-fdisk.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2020/12/prepare-file-system.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2020/12/Install-base-system-1024x205.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2020/12/change-locale.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2020/12/create-user.jpg
[16]: https://www.debugpoint.com/wp-content/uploads/2020/12/update-sudoers-file.jpg
[17]: https://www.debugpoint.com/wp-content/uploads/2020/12/configure-grub-1024x639.jpg
[18]: https://www.debugpoint.com/wp-content/uploads/2021/01/add-deepin-greeter-in-lightdm-login.jpg
[19]: https://www.debugpoint.com/wp-content/uploads/2020/12/Enable-lightdm-and-network-Install-Xfce-Desktop-in-Arch-Linux.jpg
[20]: https://www.debugpoint.com/wp-content/uploads/2021/01/Deepin-20.1-Login-screen-in-Arch-Linux-1024x771.jpg
[21]: https://www.debugpoint.com/wp-content/uploads/2021/01/Deepin-Desktop-20.1-in-Arch-Linux-1024x770.jpg


@ -3,25 +3,24 @@
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "duoluoxiaosheng"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14865-1.html"
在 Linux 上使用 Rhythbox 听音乐
在 Linux 上使用 Rhythmbox 听音乐
======
下面我将介绍我是如何在 Linux 的 GNOME 桌面上使用 Rhythmbox 听在线音乐和 MP3 列表的。
![Woman programming][1]
![](https://img.linux.net.cn/data/attachment/album/202207/25/234644f4rgrx1vrpgfk86n.jpg)
Image by: WOCinTech Chat. Modified by Opensource.com. CC BY-SA 4.0
> 下面我将介绍我是如何在 Linux 的 GNOME 桌面上使用 Rhythmbox 听在线音乐和 MP3 列表的。
对我来说,在完全安静的环境下工作是很困难的。我需要某种背景音,最好是一些熟悉的音乐。我在音乐上的需求很简单:我只需要一个音乐播放器,可以播放我的 MP3 音乐库和少数几个我喜欢的网站的在线音乐。
我在 Linux 上尝试了多个音乐播放器,最终我还是选择了 Rhythmbox。 Rhythmbox 是一个 GNOME 桌面音乐播放器。如果你的 Linux 发行版使用的是 GNOME 桌面,很可能已经安装了 Rhythmbox。它很简单用来播放我本地的音乐库和广播网站的在线音乐。我很乐意在 Linux 上使用 Rhythmbox 收听在线音乐和我自己的音乐库。
我在 Linux 上尝试了多个音乐播放器,最终我还是选择了 Rhythmbox。 Rhythmbox 是一个 GNOME 桌面音乐播放器。如果你的 Linux 发行版使用的是 GNOME 桌面,很可能已经安装了 Rhythmbox。它很简单用来播放我本地的音乐库和广播网站的在线音乐。我很乐意在 Linux 上使用 Rhythmbox 收听在线音乐和我自己的音乐库。
### 在 Linux 上收听在线音乐
Rhythmbox 支持多个在线音乐服务商。如果你拥有一个 Last.fm 或者 Libre.fm 的帐号你可以点击左侧的标签登录。或者你想收听在线广播点击左侧的广播标签在预设的广播网站中选择一个。在我写代码的时候我通常喜欢听迷幻舞曲HBR1 Tranceponder 是我最喜欢的一个在线广播网站。
Rhythmbox 支持多个在线音乐服务商。如果你拥有一个 Last.fm 或者 Libre.fm 的帐号,你可以点击左侧的标签登录。或者,你想收听在线广播,点击左侧的<ruby>广播<rt>Radio</rt></ruby>标签在预设的广播网站中选择一个。在我写代码的时候我通常喜欢听迷幻舞曲HBR1 Tranceponder 是我最喜欢的一个在线广播网站。
![Streaming HBR1 Traceponder][2]
@ -29,21 +28,21 @@ Rhythmbox 支持多个在线音乐服务商。如果你拥有一个 Last.fm 或
在过去的几年中,我收集了大量的 MP3 音乐。由于几年前 MP3 的专利在美国已经到期,它在 Linux 是一种很好用的开放的音乐格式。
我把我 20GB 的 MP3 音乐保存在我的主目录之外,在 `/usr/local/music` 。要把音乐导入 Rhythmbox点击 **导入** 按钮,选择 `usr/local/music` 目录,或者任何你保存音乐的目录,让 Rhythmbox 去识别 MP3 音乐。结束以后点击 **导入列出的曲目** 按钮导入就完成了。
我把我 20GB 的 MP3 音乐保存在我的主目录之外,在 `/usr/local/music` 。要把音乐导入 Rhythmbox点击 <ruby>导入<rt>Import</rt></ruby> 按钮,选择 `/usr/local/music` 目录,或者任何你保存音乐的目录,让 Rhythmbox 去识别 MP3 音乐。结束以后点击 <ruby>导入列出的曲目<rt>Import listed tracks</rt></ruby> 按钮导入就完成了。
![Use the Import button to add music to Rhythmbox][3]
![Rhythmbox identifies new music files][4]
Rhythmbox 播放我的音乐,并通过类型,艺术家和专辑组织歌曲,所以我可以很容易找到我想听的音乐。
Rhythmbox 可以播放我的音乐,并通过类型,艺术家和专辑组织歌曲,所以我可以很容易找到我想听的音乐。
![Listening to a music library in Rhythmbox][5]
### 旋律永存
### 旋律永存
我愿意在 Linux 上使用 Rhythmbox 作为我的音乐播放器,它是如此简洁,不会影响到我。而且听音乐可以帮我和谐掉日常的噪音,让我每一天都可以过的快一点。
Image by: Streaming HBR1 Tranceponder in Rhythmbox (image: Jim Hall, license: CC BY SA)
*(文内图片来自 Jim HallCC BY SA*
--------------------------------------------------------------------------------
@ -52,7 +51,7 @@ via: https://opensource.com/article/22/7/listen-music-rhythmbox-linux
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[duoluoxiaosheng](https://github.com/duoluoxiaosheng)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,77 @@
[#]: subject: "How much JavaScript do you need to know before learning ReactJS?"
[#]: via: "https://opensource.com/article/22/7/learn-javascript-before-reactjs"
[#]: author: "Sachin Samal https://opensource.com/users/sacsam005"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14874-1.html"
学习 ReactJS 之前,你需要了解多少 JavaScript
======
![](https://img.linux.net.cn/data/attachment/album/202207/29/082104d5zn1xn77r1n8p1n.jpg)
> 最主要的是要精通 JavaScript这样你就可以减少 ReactJS 之旅的复杂性。
React 是一个建立在 HTML、CSS 和 JavaScript 之上的 UI 框架,其中 JavaScriptJS负责大部分的逻辑。如果你对变量、数据类型、数组函数、回调、作用域、字符串方法、循环和其他 JS DOM 操作相关的主题有一定了解,这些将极大地加快学习 ReactJS 的步伐。
你对现代 JavaScript 的概念将决定你能多快地掌握 ReactJS 的步伐。你不需要成为一个 JavaScript 专家来开始你的 ReactJS 之旅,但就像对食材的了解是任何希望掌握烹饪的厨师所必须的一样,学习 ReactJS 也是如此。它是一个现代的 JavaScript UI 库,所以你需要了解一些 JavaScript。问题是需要多少
### 示例解释
假设我被要求用英语写一篇关于“牛”的文章,但我对这种语言一无所知。在这种情况下,为了让我成功地完成任务,我不仅要对主题有概念,还要对指定的语言有概念。
假设我获得了一些关于主题(牛)的知识,我如何计算我需要知道多少英语才能写出规定的主题?如果我必须用英语写一篇关于其他复杂话题的文章呢?
这很难搞清楚,不是吗?我不知道我要写关于这个话题的什么东西,但它可能是任何东西。所以要想开始,我必须要有适当的英语知识,但还不止于此。
### 极端现实
在开始使用 ReactJS 之前,所需的 JavaScript 数量也是如此。根据我的例子情景ReactJS 是话题“牛”,而 JavaScript 是英语。要想在 ReactJS 中获得成功,对 JavaScript 的掌握很重要。如果没有适当的 JavaScript 基础,一个人是很难专业地掌握 ReactJS 的。无论我对这个主题有多少知识,如果我不知道语言的基础,我就不能正确地表达自己。
### 多少才算够?
根据我的经验,当你开始你的 ReactJS 之旅时,你应该已经熟悉了:
* 变量
* 数据类型
* 字符串方法
* 循环
* 条件式
你应该对这些具体的 JavaScript 熟悉。但这些只是最基本的先决条件。当你试图创建一个简单的 React 应用时,你将不可避免地需要处理事件。所以,普通函数、函数表达式、语句、箭头函数的概念,箭头函数和普通函数的区别,以及这两类函数中 `this` 关键字的词义范围,这确实很重要。
但问题是,如果我必须使用 ReactJS 创建一个复杂的应用怎么办?
### 获得启发
在 JavaScript 中处理事件、传播操作符、解构、命名导入和默认导入将帮助你理解 React 代码的工作机制。
最重要的是,你必须了解 JavaScript 本身背后的核心概念。JavaScript 在设计上是异步的。当出现在文件底部的代码在文件顶部的代码之前执行时,不要惊讶。像 promise、callback、async-await、map、filter 和 reduce 这样的结构,是 ReactJS 中最常见的方法和概念,尤其是在开发复杂的应用时。
最主要的是要精通 JavaScript这样你可以减少 ReactJS 之旅的复杂性。
### 越来越好
我很容易说出你需要知道的东西,但你去学习它完全是另一回事。大量练习 JavaScript 是必不可少的,但你可能会感到惊讶,我认为这并不意味着你必须等到完全掌握它。有些概念需要事先掌握,但还有很多东西可以边做边学。练习本身就是学习的一部分,所以你可以从 JavaScript 甚至 React 的一些基础知识入手,只要你以舒适的节奏前进,并且明白在尝试任何严肃的项目之前,先做好“家庭作业”是必须的。
### 立即开始使用 JavaScript
不要费心等到你了解了 JavaScript 的所有方面。那永远不会发生。如果这样做,你将陷入学习 JavaScript 的永远循环中。你们都知道技术领域是如何不断发展和迅速变化的。如果你想开始学习 JavaScript请尝试阅读 Mandy Kendall 的介绍性文章 [通过编写猜谜游戏学习 JavaScript][2]。这是一种快速入门的好方法,当你看到了可能的情况,我认为你可能会发现很难停下来。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/learn-javascript-before-reactjs
作者:[Sachin Samal][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sacsam005
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/OSDC_women_computing_5.png
[2]: https://opensource.com/article/21/1/learn-javascript


@ -0,0 +1,289 @@
[#]: subject: "What happens when you press a key in your terminal?"
[#]: via: "https://jvns.ca/blog/2022/07/20/pseudoterminals/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14863-1.html"
当你在终端上按下一个键时会发生什么?
======
![](https://img.linux.net.cn/data/attachment/album/202207/25/110217dlbzqvm9lltkq244.jpg)
我对<ruby>终端<rt>Terminal</rt></ruby>是怎么回事困惑了很久。
但在上个星期,我使用 [xterm.js][1] 在浏览器中显示了一个交互式终端,我终于想到要问一个相当基本的问题:当你在终端中按下键盘上的一个键(比如 `Delete`,或 `Escape`,或 `a`),发送了哪些字节?
像往常一样,我们将通过做一些实验来回答这个问题,看看会发生什么 : )
### 远程终端是非常古老的技术
首先,我想说的是,用 `xterm.js` 在浏览器中显示一个终端可能看起来像一个新事物,但它真的不是。在 70 年代,计算机很昂贵。因此,一个机构的许多员工会共用一台电脑,每个人都可以有自己的 “终端” 来连接该电脑。
例如,这里有一张 70 年代或 80 年代的 VT100 终端的照片。这看起来像是一台计算机(它有点大!),但它不是 —— 它只是显示实际计算机发送的任何信息。
[![DEC VT100终端][2]][3]
当然,在 70 年代,他们并没有使用 Websocket 来做这个,但来回发送的信息的方式和当时差不多。
(照片中的终端是来自西雅图的 <ruby>[活电脑博物馆][4]<rt> Living Computer Museum</rt></ruby>,我曾经去过那里,并在一个非常老的 Unix 系统上用 `ed` 编写了 FizzBuzz所以我有可能真的用过那台机器或它的一个兄弟姐妹我真的希望活电脑博物馆能再次开放能玩到老式电脑是非常酷的。
### 发送了什么信息?
很明显,如果你想连接到一个远程计算机(用 `ssh` 或使用 `xterm.js` 和 Websocket或其他任何方式那么需要在客户端和服务器之间发送一些信息。
具体来说:
**客户端** 需要发送用户输入的键盘信息(如 `ls -l`)。
**服务器** 需要告诉客户端在屏幕上显示什么。
让我们看看一个真正的程序,它在浏览器中运行一个远程终端,看看有哪些信息会被来回发送!
### 我们将使用 goterm 来进行实验
我在 GitHub 上发现了这个叫做 [goterm][5] 的小程序,它运行一个 Go 服务器,可以让你在浏览器中使用 `xterm.js` 与终端进行交互。这个程序非常不安全,但它很简单,很适合学习。
我 [复刻了它][6],使它能与最新的 `xterm.js` 一起工作,因为它最后一次更新是在 6 年前。然后,我添加了一些日志语句,以打印出每次通过 WebSocket 发送/接收的字节数。
让我们来看看在几个不同的终端交互过程中的发送和接收情况吧!
### 示例ls
首先,让我们运行 `ls`。下面是我在 `xterm.js` 终端上看到的情况:
```
~:/play$ ls
file
~:/play$
```
以下是发送和接收的内容:(在我的代码中,我记录了每次客户端发送的字节:`sent: [bytes]`,每次它从服务器接收的字节:`recv: [bytes]`
```
sent: "l"
recv: "l"
sent: "s"
recv: "s"
sent: "\r"
recv: "\r\n\x1b[?2004l\r"
recv: "file\r\n"
recv: "\x1b[~:/play$ "
```
我在这个输出中注意到 3 件事:
1. 回显:客户端发送 `l`,然后立即收到一个 `l` 发送回来。我想这里的意思是,客户端真的很笨 —— 它不知道当我输入`l` 时,我想让 `l` 被回显到屏幕上。它必须由服务器进程明确地告诉它来显示它。
2. 换行:当我按下回车键时,它发送的是一个 `\r`(回车)字符,而不是 `\n`(换行)。
3. 转义序列:`\x1b` 是 ASCII 转义字符,所以 `\x1b[?2004h` 是在告诉终端要显示什么之类的信息。我想这是一个颜色序列,但我不确定。我们稍后会详细讨论转义序列。
好了,现在我们来做一些稍微复杂的事情。
### 示例Ctrl+C
接下来,让我们看看当我们用 `Ctrl+C` 中断一个进程时会发生什么。下面是我在终端中看到的情况:
```
~:/play$ cat
^C
~:/play$
```
而这里是客户端发送和接收的内容。
```
sent: "c"
recv: "c"
sent: "a"
recv: "a"
sent: "t"
recv: "t"
sent: "\r"
recv: "\r\n\x1b[?2004l\r"
sent: "\x03"
recv: "^C"
recv: "\r\n"
recv: "\x1b[?2004h"
recv: "~:/play$ "
```
当我按下 `Ctrl+C` 时,客户端发送了 `\x03`。如果我查 ASCII 表,`\x03` 是 “文本结束”,这似乎很合理。我认为这真的很酷,因为我一直对 `Ctrl+C` 的工作原理有点困惑 —— 很高兴知道它只是在发送一个 `\x03` 字符。
我相信当我们按 `Ctrl+C` 时,`cat` 被中断的原因是服务器端的 Linux 内核收到这个 `\x03` 字符,识别出它意味着 “中断”,然后发送一个 `SIGINT` 到拥有伪终端的进程组。所以它是在内核而不是在用户空间处理的。
### 示例Ctrl+D
让我们试试完全相同的事情,只是用 `Ctrl+D`。下面是我在终端看到的情况:
```
~:/play$ cat
~:/play$
```
而这里是发送和接收的内容:
```
sent: "c"
recv: "c"
sent: "a"
recv: "a"
sent: "t"
recv: "t"
sent: "\r"
recv: "\r\n\x1b[?2004l\r"
sent: "\x04"
recv: "\x1b[?2004h"
recv: "~:/play$ "
```
它与 `Ctrl+C` 非常相似,只是发送 `\x04` 而不是 `\x03`。很好!`\x04` 对应于 ASCII “传输结束”。
### Ctrl + 其它字母呢?
接下来我开始好奇 —— 如果我发送 `Ctrl+e`,会发送什么字节?
事实证明,这只是该字母在字母表中的编号,像这样。
* `Ctrl+a` => 1
* `Ctrl+b` => 2
* `Ctrl+c` => 3
* `Ctrl+d` => 4
* ...
* `Ctrl+z` => 26
另外,`Ctrl+Shift+b` 的作用与 `Ctrl+b` 完全相同(它写的是`0x2`)。
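这个映射关系可以直接验证:用 `printf` 发出控制字符对应的字节,再用 `od` 查看十六进制值。下面的例子发送的是 Ctrl+A、Ctrl+C、Ctrl+D 对应的字节

```shell
# \001、\003、\004 分别是 Ctrl+A、Ctrl+C、Ctrl+D 发送的字节
printf '\001\003\004' | od -An -tx1
```

输出应为 `01 03 04`,正好是这三个字母在字母表中的编号 1、3、4。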
键盘上的其他键呢?下面是它们的映射情况:
* `Tab` -> `\x09`(与 `Ctrl+I` 相同,因为 I 是第 9 个字母)
* `Escape` -> `\x1b`
* `Backspace` -> `\x7f`
* `Home` -> `\x1b[H`
* `End` -> `\x1b[F`
* `Print Screen` -> `\x1b\x5b\x31\x3b\x35\x41`
* `Insert` -> `\x1b\x5b\x32\x7e`
* `Delete` -> `\x1b\x5b\x33\x7e`
* 我的 `Meta` 键完全没有作用
`Alt` 呢?根据我的实验(和一些搜索),似乎 `Alt``Escape` 在字面上是一样的,只是按 `Alt` 本身不会向终端发送任何字符,而按 `Escape` 本身会。所以:
* `alt + d` => `\x1bd`(其他每个字母都一样)
* `alt + shift + d` => `\x1bD`(其他每个字母都一样)
* 诸如此类
让我们再看一个例子!
### 示例nano
下面是我运行文本编辑器 `nano` 时发送和接收的内容:
```
recv: "\r\x1b[~:/play$ "
sent: "n" [[]byte{0x6e}]
recv: "n"
sent: "a" [[]byte{0x61}]
recv: "a"
sent: "n" [[]byte{0x6e}]
recv: "n"
sent: "o" [[]byte{0x6f}]
recv: "o"
sent: "\r" [[]byte{0xd}]
recv: "\r\n\x1b[?2004l\r"
recv: "\x1b[?2004h"
recv: "\x1b[?1049h\x1b[22;0;0t\x1b[1;16r\x1b(B\x1b[m\x1b[4l\x1b[?7h\x1b[39;49m\x1b[?1h\x1b=\x1b[?1h\x1b=\x1b[?25l"
recv: "\x1b[39;49m\x1b(B\x1b[m\x1b[H\x1b[2J"
recv: "\x1b(B\x1b[0;7m GNU nano 6.2 \x1b[44bNew Buffer \x1b[53b \x1b[1;123H\x1b(B\x1b[m\x1b[14;38H\x1b(B\x1b[0;7m[ Welcome to nano. For basic help, type Ctrl+G. ]\x1b(B\x1b[m\r\x1b[15d\x1b(B\x1b[0;7m^G\x1b(B\x1b[m Help\x1b[15;16H\x1b(B\x1b[0;7m^O\x1b(B\x1b[m Write Out \x1b(B\x1b[0;7m^W\x1b(B\x1b[m Where Is \x1b(B\x1b[0;7m^K\x1b(B\x1b[m Cut\x1b[15;61H"
```
你可以看到一些来自用户界面的文字,如 “GNU nano 6.2”,而这些 `\x1b[27m` 的东西是转义序列。让我们来谈谈转义序列吧!
### ANSI 转义序列
上面这些 `nano` 发给客户端的 `\x1b[` 东西被称为“转义序列”或 “转义代码”。这是因为它们都是以 “转义”字符 `\x1b` 开头。它们可以改变光标的位置,使文本变成粗体或下划线,改变颜色,等等。[维基百科介绍了一些历史][7],如果你有兴趣的话可以去看看。
举个简单的例子:如果你在终端运行
```
echo -e '\e[0;31mhi\e[0m there'
```
它将打印出 “hi there”其中 “hi” 是红色的“there” 是黑色的。[本页][8] 有一些关于颜色和格式化的转义代码的例子。
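顺带一提,`echo -e` 的行为因 shell 而异;用 `printf` 写法可移植性更好,下面是与上面等价的例子:

```shell
# \033[0;31m前景色设为红色\033[0m重置所有属性
printf '\033[0;31m%s\033[0m %s\n' "hi" "there"
```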
我认为转义代码有几个不同的标准,但我的理解是,人们在 Unix 上最常用的一套转义代码来自 VT100就是博客文章顶部图片中的那个老终端它在过去 40 年里没有什么真正的变化。
转义代码也解释了为什么把二进制数据 `cat` 到屏幕上会把终端搞乱:只要 `cat` 到终端的二进制数据足够多,其中几乎必然会出现 `0x1b` 字节,于是你就会不小心打印出一堆随机的转义代码,把终端弄得一团糟。
### 可以手动输入转义序列吗?
在前面几节中,我们谈到了 `Home` 键是如何映射到 `\x1b[H` 的。这 3 个字节是 `Escape + [ + H`(因为 `Escape` 是`\x1b`)。
如果我在 `xterm.js` 终端手动键入 `Escape` ,然后是 `[`,然后是 `H`,我就会出现在行的开头,与我按下 `Home` 完全一样。
我注意到这在我的电脑上的 Fish shell 中不起作用 —— 如果我键入 `Escape`,然后输入 `[`,它只是打印出 `[`,而不是让我继续转义序列。我问了我的朋友 Jesse他写过 [一堆 Rust 终端代码][9]Jesse 告诉我,很多程序为转义代码实现了一个 **超时** —— 如果你在某个最小的时间内没有按下另一个键,它就会决定它实际上不再是一个转义代码了。
显然,这在 Fish shell 中可以用 `fish_escape_delay_ms` 来配置,所以我运行了 `set fish_escape_delay_ms 1000`,然后我就能手动输入转义代码了。工作得很好!
### 终端编码有点奇怪
我想在这里暂停一下,我觉得你按下的键被映射到字节的方式是非常奇怪的。比如,如果我们今天从头开始设计按键的编码方式,我们可能不会把它设置成这样:
* `Ctrl + a``Ctrl + Shift + a` 做的事情完全一样。
* `Alt``Escape` 是一样的
* 控制序列(如颜色/移动光标)使用与 `Escape` 键相同的字节,因此你需要依靠时间来确定它是一个控制序列还是用户只是想按 `Escape`
但所有这些都是在 70 年代或 80 年代或什么时候设计的,然后需要永远保持不变,以便向后兼容,所以这就是我们得到的东西 :)
### 改变窗口大小
在终端中,并不是所有你能做的事情都是通过来回发送字节发生的。例如,当终端被调整大小时,我们必须以不同的方式告诉 Linux 窗口大小已经改变。
下面是 [goterm][10] 中用来做这件事的 Go 代码的样子:
```
// 摘自 goterm其中 tty 是伪终端的主设备文件,
// resizeMessage 是一个包含行数和列数的 winsize 结构体
syscall.Syscall(
    syscall.SYS_IOCTL,      // 发起 ioctl 系统调用
    tty.Fd(),               // 目标文件描述符:伪终端
    syscall.TIOCSWINSZ,     // 请求类型:设置终端窗口大小
    uintptr(unsafe.Pointer(&resizeMessage)), // 指向新的窗口尺寸
)
```
这是在使用 `ioctl` 系统调用。我对 `ioctl` 的理解是,它是一个系统调用,用于处理其他系统调用没有涉及到的一些随机的东西,通常与 IO 有关,我猜。
`syscall.TIOCSWINSZ` 是一个整数常数,它告诉 `ioctl` 我们希望它在本例中做哪件事(改变终端的窗口大小)。
### 这也是 xterm 的工作方式
在这篇文章中,我们一直在讨论远程终端,即客户端和服务器在不同的计算机上。但实际上,如果你使用像 xterm 这样的终端模拟器,所有这些工作方式都是完全一样的,只是很难注意到,因为这些字节并不是通过网络连接发送的。
### 文章到此结束啦
关于终端,肯定还有很多东西要了解我们可以讨论更多关于颜色或者原始raw模式与规范cooked模式或者 Unicode 支持,或者 Linux 伪终端接口),但我将在这里停止,因为现在是晚上 10 点,这篇文章有点长,而且我认为我的大脑今天无法处理更多关于终端的新信息。
感谢 [Jesse Luehrs][11] 回答了我关于终端的十亿个问题,所有的错误都是我的 :)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2022/07/20/pseudoterminals/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://xtermjs.org/
[2]: https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/DEC_VT100_terminal.jpg/512px-DEC_VT100_terminal.jpg
[3]: https://commons.wikimedia.org/wiki/File:DEC_VT100_terminal.jpg (Jason Scott, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons)
[4]: https://livingcomputers.org/
[5]: https://github.com/freman/goterm
[6]: https://github.com/jvns/goterm
[7]: https://en.wikipedia.org/wiki/ANSI_escape_code
[8]: https://misc.flogisoft.com/bash/tip_colors_and_formatting
[9]: https://github.com/doy/vt100-rust
[10]: https://github.com/freman/goterm/blob/a644c10e180ce8af789ea3e4e4892dcf078e97e2/main.go#L110-L115
[11]: https://github.com/doy/

View File

@ -0,0 +1,71 @@
[#]: subject: "Debian May Consider Including Non-Free Firmware in Official Releases"
[#]: via: "https://news.itsfoss.com/debian-non-free/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14873-1.html"
Debian 可能会考虑在官方版本中包含非自由固件
======
> Debian 会考虑在官方版本中添加非自由固件吗?如果他们想解决 Debian 开发人员重点提出的问题,这似乎是一种可能性。
![debian][1]
由于 Debian 的稳定性和新功能之间的平衡,它是最受欢迎的 Linux 发行版之一。
此外,它不附带任何非自由固件。
但是,对于想要在较新硬件上使用 Debian 的用户来说,这已成为一个问题。
大多数最新的设备和配置都需要非自由固件才能正常工作,其中包括 Wi-Fi、图形等。
为了解决这个问题Debian 开发人员、前 Debian 项目负责人 Steve McIntyre 已经积极讨论了一段时间。
正如 [Geeker's Digest][2] 所发现的,在 DebConf 22 会议上Steve 最近向用户和开发人员着重谈到了修复固件混乱这件事。
### 在官方版本中包含非自由固件
至于目前的情况,你可以找到带有非自由固件的非官方 Debian 镜像。
然而,并不是每个用户都知道它;即使 Debian 的下载页面上有所提及,“非官方”这个标签也很难让用户放心地把它当作推荐镜像来使用。
此外,当用户可以选择 Ubuntu 或任何基于 Ubuntu 的发行版作为替代方案时,期望用户安装非自由固件也是违反直觉的。
不仅限于这些问题Steve 在他的 [博客][3] 中还提到了其他一些问题,包括:
* 维护单独的非自由镜像非常耗时。
* 由于缺乏非自由固件,许多用户不喜欢官方镜像。
如果我们希望更多用户在通用硬件上使用 DebianSteve 建议尽早解决这个问题。
可能会通过安装程序中的提示让用户安装非自由固件,类似于 Ubuntu 所做的。
此外,在他在 DebConf 22 的演讲中,似乎大多数开发人员投票支持在官方 Debian 镜像中添加非自由固件。
随着他重点提出这件事Steve 得到了寻找解决这个问题的社区用户/开发人员的更多关注。
**简单/方便的出路**:在官方发布的镜像中添加非自由固件。
那么Debian 最终会在其新版本中添加对非自由固件的支持吗? Debian 12 会让这成为现实吗?
在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/debian-non-free/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/07/debian-non-free-firmware.jpg
[2]: https://www.geekersdigest.com/debian-on-the-verge-to-include-non-free-firmware-in-official-releases/
[3]: https://blog.einval.com/2022/04/19#firmware-what-do-we-do

View File

@ -0,0 +1,70 @@
[#]: subject: "Pop!_OS 22.04 Linux Distro is Finally Adding Raspberry Pi 4 Support"
[#]: via: "https://news.itsfoss.com/pop-os-22-04-raspberry-pi-4/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14880-1.html"
Pop!_OS 22.04 Linux 发行版现在支持树莓派 4 了
======
> System76 终于为其最新的 Pop!_OS 22.04 LTS 增加了对树莓派 4 的支持。
![Pop os][1]
Pop!_OS 是 [最好的初学者友好的 Linux 发行版][2] 之一。
它基于 Ubuntu显然Pop!_OS 22.04 LTS 是基于 [Ubuntu 22.04 LTS][3] 的。
然而,与 Ubuntu 不同Pop!_OS 22.04 在发布时并没有正式支持树莓派。
因此,期待 [Pop!_OS 22.04 LTS][4] 版本对树莓派的支持是合理的。
如果你还记得System76 在 **Pop!_OS 21.10** 中首次增加了对树莓派的支持。我们在测试时也 [报道过][5]。
而且,据 System76 的首席工程师 Jeremy Soller 透露,最新的 Pop!_OS 版本现在正准备支持树莓派 4。
### Pop!_OS 22.04 LTS for Raspberry Pi 4
如果你一直在你的树莓派 4 上使用 Pop!_OS 21.10,这对你来说是个好消息。
而且,对于任何想在树莓派 4 上尝试 Pop!_OS 的人来说,它终于有了一个 LTS 版本。
截至目前,该 ISO 是作为技术预览版提供的。因此,如果你想试试它,你应该有出现错误和可用性问题的心理预期。请注意,目前还 **只限于树莓派 4**,不支持其他树莓派设备,这是个遗憾。
我们不知道 System76 是否计划在这个 LTS 版本中支持其他树莓派板,或者他们是否坚持只支持树莓派 4。
然而,考虑到树莓派 4 现在相当流行,对于许多寻求替代 Ubuntu 的 [树莓派的替代操作系统][6] 的爱好者们来说,这应该是一个很好的进展。
有了 Pop!_OS 22.04 LTS树莓派 4 的用户应该能够体验到一些最令人兴奋的升级,以及更新的 [Linux 内核 5.15 LTS][7]。
要下载该技术预览版,请前往 Pop!_OS 的 [官方网站][8],点击下载按钮,找到该选项。
![Pop OS][9]
你对树莓派 4 上的 Pop!_OS 22.04 有什么期望?请在下面的评论中告诉我们你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/pop-os-22-04-raspberry-pi-4/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/07/pop-os-raspberry-pi-4.jpg
[2]: https://itsfoss.com/best-linux-beginners/
[3]: https://news.itsfoss.com/ubuntu-22-04-release/
[4]: https://news.itsfoss.com/pop-os-22-04-release/
[5]: https://news.itsfoss.com/pop-os-raspberry-pi-coming-soon/
[6]: https://itsfoss.com/raspberry-pi-os/
[7]: https://news.itsfoss.com/linux-kernel-5-15-release/
[8]: https://pop.system76.com/
[9]: https://news.itsfoss.com/wp-content/uploads/2022/07/pop-os-raspberry-pi-4-download-1024x526.png

View File

@ -0,0 +1,86 @@
[#]: subject: "Its Time to Ditch 32-Bit Linux for 64-Bit"
[#]: via: "https://news.itsfoss.com/64-bit-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14876-1.html"
是时候抛弃 32 位的 Linux改用 64 位的了
======
> 如果你想获得安全的体验,你可能不会再继续使用 32 位 Linux 内核。
![](https://news.itsfoss.com/wp-content/uploads/2022/07/linux-64-bit.jpg)
我们有很多 [为 32 位系统定制的 Linux 发行版][1]。
那么,为什么我不鼓励继续使用 32 位,而是建议升级到 64 位 Linux 呢?
有几个原因,其中一个最大的原因,在本周引发了很多关注。
### 32 位:古老的电子垃圾硬件?
没错与其他操作系统不同的是Linux 发行版允许你重新利用旧硬件。
你能够将一个老机器转换为 [媒体服务器][2]、存储服务器,等等。
在这里,我并不是要给你一些如何贡献更多的电子垃圾的思路。尽可能长地利用你的硬件,而不更换它们总是好的。
然而,不使用 32 位系统的理由可能比以往更有说服力。关键的问题是在安全和维护方面。
### 利用 64 位 Linux 提高安全性
2018 年,危险的处理器安全问题 Spectre 漏洞引发了热议。虽然英特尔和 AMD 对这个漏洞进行了修复,但情况并不乐观。
不幸的是,一个新的漏洞 Retbleed它是 Spectre 的一个变种,正在影响英特尔和 AMD 芯片。
你可以在下面由发现它的研究人员分享的视频中看到它的情况。
![][3]
因此,我们自然需要适当的措施来解决这个新的安全漏洞的修复问题。
**令人震惊的事情来了**。64 位 Linux 内核已经收到了对它的修复,以保护有关的英特尔/AMD 的处理器。但是,正如 [Phoronix][4] 所报道的Linux 32 位内核仍然容易受到 Retbleed 漏洞的影响。
英特尔的 Pawan Gupta 在 [内核邮件列表][5] 中回应了这些担忧,他提到:
> 英特尔不知道还有谁在 Skylake 那一代的 CPU 上使用 32 位模式的生产环境。所以这不应该是一个问题。
另外,很少看到为 32 位维护所做的任何努力。所以,这应该不算什么意外。
因此,如果你使用你的系统进行任何可能受到安全问题影响的任务,你应该避开 32 位内核。
当然,完全离线的环境可以算作例外:你可以继续这样用,但并不建议。
### 不关心安全问题?
即使你认为得不到像 Retbleed 这样的关键安全修复没有关系2022 年的 32 位系统也会有更多的麻烦。
软件维护者们最终会放弃对 32 位系统上的工具和 Linux 发行版的更新。
因此,你的 32 位 Linux 系统可能很快就不会再有积极维护的程序了。
因此,现在进行转换(和升级)将是一个好主意。
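在动手迁移之前,可以先确认当前系统运行的是不是 64 位内核(这只是一个快速检查的小技巧):

```shell
# uname -m 打印内核运行的机器架构
# x86_64、aarch64 等是 64 位i686、i386、armv7l 等是 32 位
uname -m
```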
_你还在使用 32 位的 Linux 吗你对此有什么看法在下面的评论中分享你的想法。_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/64-bit-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/32-bit-linux-distributions/
[2]: https://itsfoss.com/best-linux-media-server/
[3]: https://i.ytimg.com/vi/dmSPvJxPm80/hqdefault.jpg
[4]: https://www.phoronix.com/news/Linux-x86-Retbleed
[5]: https://lore.kernel.org/lkml/20220715221901.xm3c4w4idqt67uja@desk/

View File

@ -0,0 +1,108 @@
[#]: subject: "How to Clean Up Snap Versions to Free Up Disk Space"
[#]: via: "https://www.debugpoint.com/clean-up-snap/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14904-1.html"
如何清理 Snap 保留的旧软件包以释放磁盘空间
======
![](https://img.linux.net.cn/data/attachment/album/202208/07/105824nyac4m66a6886x6q.jpg)
> 这个带有脚本的快速指南有助于清理旧的 Snap 软件包,并释放 Ubuntu 系统中的一些磁盘空间。
我的 Ubuntu 测试系统中出现磁盘空间不足。因此,我通过 GNOME 的磁盘使用分析器进行调查,以找出哪个软件包正在消耗宝贵的 SSD 空间。除了通常的缓存和主目录,令我惊讶的是,我发现 Snap 和 Flatpak 消耗了大量的存储空间。
![Snap size before cleanup][1]
我始终坚持一个规则:除非必要,否则不要使用 Snap 或 Flatpak。这主要是因为它们的安装大小和一些其他问题。我更喜欢原生 deb 和 rpm 包。多年来,我在这个测试系统中安装和移除了一些 Snap 包。
问题出现在卸载后。Snap 在系统中保留了一些残留文件,而一般用户不知道。
所以我打开了 Snap 文件夹 `/var/lib/snapd/snaps`,发现 Snap 会保留以前安装/卸载的软件包的旧版本。
例如,在下图中,你可以看到 GNOME 3.28、3.34 和 Wine 这些都被删除了。但它们还在那里。这是因为 Snap 设计上在正确卸载后保留已卸载软件包的版本。
![Files under snaps directory][2]
或者,你可以在终端中使用:
```
snap list --all
```
![snap list all][3]
对于保留的版本数量,默认值为 3。这意味着 Snap 会保留每个软件包的 3 个旧版本,包括当前安装版本。如果你对磁盘空间没有限制,这是可以的。
但对于服务器等场景,这会白白消耗你的磁盘空间,很容易带来成本问题。
不过,你可以使用以下命令轻松修改计数。该值可以在 2 到 20 之间。
```
sudo snap set system refresh.retain=2
```
### 清理 Snap 版本
在 SuperUser 的一篇文章中Canonical 的前工程经理 Popey [提供了一个简单的脚本][4] 可以清理旧的 Snap 版本并保留最新版本。
这是我们将用来清理 Snap 的脚本。
```
#!/bin/bash
#Removes old revisions of snaps
#CLOSE ALL SNAPS BEFORE RUNNING THIS
set -eu
LANG=en_US.UTF-8 snap list --all | awk '/disabled/{print $1, $3}' |
while read snapname revision; do
snap remove "$snapname" --revision="$revision"
done
```
将上述脚本以 `.sh` 扩展名保存在某个目录中(例如 `clean_snap.sh`),赋予其可执行权限并运行:

```
chmod +x clean_snap.sh
./clean_snap.sh
```
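脚本的核心是那行 awk它从 `snap list --all` 的输出中挑出状态为 disabled 的旧版本。下面用一段虚构的输出演示它的效果(包名和版本号都是编造的):

```shell
# 三行虚构的 snap list --all 输出,只有 disabled 的那行会被选中
printf 'Name Version Rev Tracking Publisher Notes
core20 20220527 1518 latest/stable canonical disabled
firefox 103.0 1670 latest/stable mozilla -
' | awk '/disabled/{print $1, $3}'
# 输出core20 1518
```

脚本随后把这两个字段作为 `snap remove --revision` 的参数,逐个删除旧修订版。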
当我运行脚本时,它减少了很多磁盘空间。该脚本还将显示要删除的包的名称。
![Executing the script][5]
![Snaps size after cleanup][6]
### 结束语
关于 Snap 的设计效率如何,人们总是争论不休。许多人说,它的设计是糟糕的,是臃肿的,且消耗系统资源。该论点的某些部分是正确的,我不会否认。如果正确实施和增强,沙盒应用的整个概念就很棒。我相信,与 Snap 相比Flatpak 做得更好。
也就是说,我希望这可以帮助你清理一些磁盘空间。尽管它只在 Ubuntu 中进行了测试,但它应该适用于所有支持 Snap 的 Linux 发行版。
此外,请查看我们关于 [如何清理 Ubuntu][7] 的指南以及其他步骤。
最后,如果你正在寻找清理 **Flatpak** 应用,请参阅 [这个指南][8]。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/clean-up-snap/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2021/03/Snap-size-before-cleanup.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2021/03/Files-under-snaps-directory.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2021/03/snap-list-all.jpg
[4]: https://superuser.com/a/1330590
[5]: https://www.debugpoint.com/wp-content/uploads/2021/03/Executing-the-script.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2021/03/Snaps-size-after-cleanup.jpg
[7]: https://www.debugpoint.com/2018/07/4-simple-steps-clean-ubuntu-system-linux/
[8]: https://www.debugpoint.com/clean-up-flatpak/

View File

@ -0,0 +1,167 @@
[#]: subject: "AppFlowy: An Open-Source Alternative to Notion"
[#]: via: "https://itsfoss.com/appflowy/"
[#]: author: "Sagar Sharma https://itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14888-1.html"
AppFlowyNotion 的开源替代品
======
![](https://img.linux.net.cn/data/attachment/album/202208/02/102316f1g6p369uyeeybgo.jpg)
> AppFlowy 旨在成为 Notion 的开源替代品,为你提供更好的隐私保护。让我们了解一下它。
虽然项目管理/笔记工具 Notion 功能非常出色,但它并不是一个开源解决方案。此外,它没有 Linux 桌面客户端。
那么,对于 Linux 用户来说,更透明、更私密和可用的替代方案是什么?
这就是 AppFlowy 大放异彩的地方!
AppFlowy 使用 Rust 和 Flutter 构建,遵循极简原则,但提供了足够的调整空间。
### AppFlowy 是隐私和用户体验的完美结合
![appflowy][1]
AppFlowy 是相当新的。在它去年首次推出后,我们曾 [报告][2] 了它的发展状况。
这是一个开源项目,旨在克服 [Notion][3] 在安全和隐私方面的一些限制。它可以帮助你管理任务、添加待办事项列表、截止日期、跟踪事件、添加页面,以及为你的笔记/任务设置文本格式。
不仅仅是安全性。用户体验也很重要。而 AppFlowy 在这方面做得很好,甚至比 Notion 更好。
请注意,该项目仍处于 **测试阶段**
目前,该项目的目标不是提供更好的设计和功能,而是数据隐私、原生体验和社区驱动。
### Notion 与 AppFlowy如何选择
虽然它旨在作为取代 Notion 的开源解决方案,但它可能并不适合所有人。
因此,如果你要选择 AppFlowy 而不是 Notion你将获得以下好处
#### 透明度
AppFlowy 是一个开源项目,因此你可以随时查看和修改代码。
#### 隐私
作为闭源软件Notion 可以直接访问你在云中的私有数据。与之相比,你可以根据自己的喜好自行托管 AppFlowy。
你的所有个人数据都将保留在你身边,你可以完全控制它。开发人员还提到他们正在使用离线模式来更好的支持本地安装。
#### 性能和原生体验
AppFlowy 使用 Rust 和 Flutter 构建,在提供现代用户体验的同时将性能置于优先位置。
不仅限于此,你还可以在 Linux 上获得良好的原生体验,这是 Notion 所没有的。
### AppFlowy 的功能
![appflowy screenshot 1][4]
AppFlowy 在功能方面可能并不优越,但它确实提供了基本的功能。
随着开发的继续,你可以期待它会添加更多的功能。一些现有的功能包括:
* 原生的跨平台支持。
* 能够自行托管或将其安装在你的本地计算机上。
* 可定制。
* 数据隐私(重中之重)。
* 单一代码库,便于更好地维护。
* 社区驱动的可扩展性。
* 简约的用户界面。
* 可以添加待办事项、管理任务。
* 文本高亮和基本的格式化。
* 用于编辑单元格/网格的键盘快捷键。
* 支持深色模式。
#### 在 Linux 上安装 AppFlowy
由于它仍处于测试阶段,在默认仓库中还不可用,并且没有维护任何 PPA也没有 Flatpak/Snap 包。
但是,你可以通过给定的命令轻松安装 AppFlowy仅在 Ubuntu 20.04 LTS 和 Arch X86_64 上测试过):
```
wget https://github.com/AppFlowy-IO/AppFlowy/releases/download/0.0.4/AppFlowy-linux-x86.tar.gz
tar -xzvf AppFlowy-linux-x86.tar.gz
cd AppFlowy
```
要运行 AppFlowy请使用该命令
```
./app_flowy
```
要在你的系统菜单中注册 AppFlowy你必须执行以下附加步骤
首先,你必须更改 AppFlowy 徽标的默认名称:
```
mv flowy_logo.svg app_flowy.svg
```
现在,你必须将 Linux 桌面文件模板复制为正式的 Linux 桌面文件。
```
cp appflowy.desktop.temp app_flowy.desktop
```
然后对配置文件进行一些更改。
```
nano app_flowy.desktop
```
在这里,你必须将 `[CHANGE_THIS]` 替换为图标和可执行文件的对应路径。
![add location of icon and exec file][5]
使用 `CTRL + O` 保存更改并使用 `CTRL + X` 退出。
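如果不想在编辑器里手动替换,也可以用 `sed` 一步完成(这里假设你把 AppFlowy 解压到了 `~/AppFlowy`,请按实际路径调整):

```shell
# 将文件中所有 [CHANGE_THIS] 占位符替换为实际安装路径(此处路径仅为假设)
sed -i "s|\[CHANGE_THIS\]|$HOME/AppFlowy|g" app_flowy.desktop
```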
最后,移动桌面文件,以便你的系统可以读取它。
```
mv app_flowy.desktop ~/.local/share/applications/.
```
它应该是这样的:
![appflowy in system menu][6]
无论哪种情况,你都可以查看 AppFlowy 的 [官方文档][7] 以从源代码构建它。在其官方网站上了解更多关于它的信息。
> **[AppFlowy][8]**
### 总结
如果你需要具有原生 Linux 体验的简单的类 Notion 应用AppFlowy 是一个有趣的选择。
考虑到它正在积极开发中,并且远非 Notion 的完全替代品,肯定会出现一些错误/问题。
作为 Notion 的开源替代品?它可以的!你可以使用它来管理任务、添加笔记和制作待办事项列表。
--------------------------------------------------------------------------------
via: https://itsfoss.com/appflowy/
作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/07/AppFlowy.png
[2]: https://news.itsfoss.com/appflowy-development/
[3]: https://www.notion.so/
[4]: https://itsfoss.com/wp-content/uploads/2022/07/appflowy-screenshot-1.png
[5]: https://itsfoss.com/wp-content/uploads/2022/07/Add-location-of-icon-and-exec-file-800x524.png
[6]: https://itsfoss.com/wp-content/uploads/2022/07/AppFlowy-in-System-menu-1.png
[7]: https://appflowy.gitbook.io/docs/essential-documentation/contribute-to-appflowy/software-contributions/environment-setup/building-on-linux
[8]: https://www.appflowy.io/

View File

@ -0,0 +1,216 @@
[#]: subject: "How to Install Rocky Linux 9 Step by Step with Screenshots"
[#]: via: "https://www.linuxtechi.com/how-to-install-rocky-linux-9-step-by-step/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14909-1.html"
图解 Rocky Linux 9 安装步骤
======
![](https://img.linux.net.cn/data/attachment/album/202208/08/172822s7zwhj7wuzzjfm25.jpg)
> 这篇教程中,我们将图解 Rocky Linux 9 安装步骤。
<ruby>Rocky 企业软件基金会<rt>Rocky Enterprise Software Foundation</rt></ruby> 已经发布了它的最新的操作系统 “Rocky Linux 9”。Rocky Linux 是针对工作站和服务器的自由而开源的操作系统。它被认为是 CentOS Linux 的继承者。
Rocky Linux 9 是 RHEL 9 的复制品其开发代号是“Blue Onyx”。Rocky Linux 和 RHEL 之间的主要不同是,它有自己的名为 “Peridot” 的开源构建系统。
### Rocky Linux 9 的更新和特色
* Gnome 40 是默认的桌面环境
* 在 XFS 文件系统上支持<ruby>直接访问<rt>Direct Access</rt></ruby>DAX操作
* 更新了运行时和编译器,如 GCC 11.2.1 、LLVM 13.0.1 、Rust 1.58.1 和 Go 1.17.1
* 更新了开发工具链,如 Python 3.9 、Node.js 16 、Ruby 3.0.3 、Perl 5.32 和 PHP 8.0
* ssh 默认禁用了 root 用户身份验证
* 更新了 OpenSSL 3.0,改进了 Cockpit 网页主控台
* 社区提供支持到 2032 年 05 月 31 日
### 前置条件
* 2 GB 及更多的内存
* 2 个 CPU 核心1.1 GHz 及更高)
* 20 GB 硬盘空间
* 可启动介质USB 或 DVD
* 互联网连接(可选)
不再耽误时间,让我们直接进入 Rocky Linux 9 的安装步骤:
### 1、下载 Rocky Linux 9 的 ISO 文件
使用下面的官方网址来下载 ISO 文件
> **[Rocky Linux 9 ISO][1]**
在你下载 ISO 文件后,使用已下载的 ISO 文件制作一个可启动介质USB/DVD
在 Windows 中,你可以利用 Rufus 软件来使用 ISO 文件来制作可启动 USB 驱动器。在 Linux 中,参考下面的内容:
> **[在 Ubuntu / Linux Mint 上,如何创建可启动 USB 驱动器][2]**
### 2、使用可启动媒介盘启动系统
在你计划安装 Rocky Linux 9 的硬件系统上,在 BIOS 设置中将启动介质从硬盘驱动器更改为 USB 驱动器,然后重新启动它。
在硬件系统使用可启动介质启动后,我们将看到下面的屏幕,
![Select-Install-Rocky-Linux-9-option][3]
选择第一个选项, <ruby>安装 Rocky Linux 9.0<rt>Install Rocky Linux 9.0</rt></ruby> ,并按下 <ruby>回车<rt>enter</rt></ruby> 按键。
### 3、选择首选语言
选择**安装过程**的首选语言,然后单击 <ruby>继续<rt>Continue</rt></ruby> 按钮,
![Preferred-Language-for-RockyLinux9-Installation][4]
### 4、安装过程摘要
在这个步骤中,我们将看到如下的初始安装摘要。要开始安装,首先,我们必须完成标记项目,如 <ruby>安装目标<rt>Installation Destination</rt></ruby><ruby>用户设置<rt>User settings</rt></ruby>
除了已标记的项目外,我们也可以更改现有的项目,只需要按照你的要求单击它们就可以进行更改。
![Initial-Installation-Summary-Rocky-Linux9][5]
#### 配置安装目标
在这个项目中,我们将为 Rocky Linux 具体指定分区方案。单击 <ruby>安装目标<rt>Installation Destination</rt></ruby>
在这里,我们可以为 <ruby>存储配置<rt>storage configuration</rt></ruby> 的 <ruby>分区方案<rt>partition scheme</rt></ruby> 选择 <ruby>自动<rt>automatic</rt></ruby> 选项或 <ruby>自定义<rt>custom</rt></ruby> 选项。
在自动选项中,安装程序将在磁盘上自动地创建分区,而自定义选项允许我们在磁盘上手动创建分区。
![Choose-custom-Storage-Configuration-Rocky-Linux9][6]
在这篇指南中,我将使用 <ruby>自定义<rt>Custom</rt></ruby> 选项,单击 <ruby>完成<rt>Done</rt></ruby> 按钮。
![Standard-Partition-Scheme-RockyLinux9][7]
在该 40 GB 的磁盘上,我们将创建以下分区,
* `/boot`2GBxfs 文件系统)
* `/`10 GBxfs 文件系统)
* `/home`25 GBxfs 文件系统)
* 交换分区2 GB
开始创建分区,选择 <ruby>标准分区<rt>Standard Partition</rt></ruby> 方案,然后单击 “+” 符号。
创建第一个分区,大小为 2 GB 的 `/boot` 分区,
![Boot-Partition-RockyLinux9-Installation][8]
单击 <ruby>添加挂载点<rt>Add mount point</rt></ruby> 按钮。
类似地,接下来分别创建大小为 10 GB 的 `/` 分区和 25 GB 的 `/home` 分区。
![Slash-Partition-Rocky-Linux9-installation][9]
![Home-Partition-Rocky-Linux9-Installation][10]
现在,创建最后一个分区,大小为 2 GB 的交换分区LCTT 校注:如果你的内存非常多,你可以选择不创建交换分区。另外,对于生产环境,建议将存储数据的目录单独划分分区。)
![Swap-Partition-RockyLinux9-Installation][11]
在你完成手动分区后,单击 <ruby>完成<rt>Done</rt></ruby> 按钮来完成这个项目。
![Finish-Manual-Partitioning-RockyLinux9-Installation][12]
选择 <ruby>接受更改<rt>Accept Changes</rt></ruby> 按钮来将这些更改写入磁盘。它也将返回到安装摘要屏幕。
![Accept-Changes-to-Write-on-Disk-RockyLinux9][13]
#### 配置用户设置
<ruby>用户设置<rt>User Settings</rt></ruby> 下,单击 <ruby>root 密码 <rt>Root Password</rt></ruby> 按钮。
![Set-Root-Password-RockyLinux9-Instalation][14]
设置 root 用户的密码,并单击 <ruby>完成<rt>Done</rt></ruby> 按钮。
再次回到 <ruby>用户设置<rt>User Settings</rt></ruby> 下,单击 <ruby>用户创建<rt>User Creation</rt></ruby> 按钮,具体指定本地用户的详细信息,例如用户名称和密码。
![Local-User-Create-During-RockyLinux9-Installation][15]
单击 <ruby>完成<rt>Done</rt></ruby> 按钮,它也将返回到安装摘要。
现在,我们准备开始安装,单击<ruby>开始安装<rt>Begin Installation</rt></ruby> 按钮,
![Begin-Installation-Option-RockyLinux9][16]
### 5、安装过程开始
在这一步骤中,安装程序已经开始了,并在正在进行中,
![RockyLinux9-Installation-Progress][17]
在安装过程完成后,安装程序将提示你重新启动系统。
![Reboot-System-after-RockyLinux9-Installation][18]
单击 <ruby>重新启动系统<rt>Reboot System</rt></ruby> 按钮。
注意:不要忘记在 BIOS 设置中将可启动介质从 USB 启动更改为硬盘驱动器启动。
### 6、安装后的登录屏幕和桌面环境
在成功安装后,当系统启动时,我们将看到下面的登录屏幕:
![RockyLinux9-Loginscreen-Post-Installation][19]
使用我们在安装期间创建的用户名称和密码,按下 <ruby>回车<rt>enter</rt></ruby> 按键来登录。
![Desktop-Env-RockyLinux9][20]
打开终端,依次运行下面的命令:
```
$ sudo dnf install epel-release -y
$ sudo dnf install neofetch -y
```
现在,来验证系统的详细信息,运行 `neofetch` 命令:
```
$ neofetch
```
![neofetch-rockylinux9-post-installation][21]
这就是这篇指南的全部内容,我希望它对你有用。请在下面的评论区贴出你的疑问和反馈。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-install-rocky-linux-9-step-by-step/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://rockylinux.org/download
[2]: https://www.linuxtechi.com/create-bootable-usb-disk-dvd-ubuntu-linux-mint/
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Select-Install-Rocky-Linux-9-option.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Preferred-Language-for-RockyLinux9-Installation.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Initial-Installation-Summary-Rocky-Linux9.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Choose-custom-Storage-Configuration-Rocky-Linux9.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Standard-Partition-Scheme-RockyLinux9.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Boot-Partition-RockyLinux9-Installation.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Slash-Partition-Rocky-Linux9-installation.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Home-Partition-Rocky-Linux9-Installation.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Swap-Partition-RockyLinux9-Installation.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Finish-Manual-Partitioning-RockyLinux9-Installation.png
[13]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Accept-Changes-to-Write-on-Disk-RockyLinux9.png
[14]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Set-Root-Password-RockyLinux9-Instalation.png
[15]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Local-User-Create-During-RockyLinux9-Installation.png
[16]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Begin-Installation-Option-RockyLinux9.png
[17]: https://www.linuxtechi.com/wp-content/uploads/2022/07/RockyLinux9-Installation-Progress.png
[18]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Reboot-System-after-RockyLinux9-Installation.png
[19]: https://www.linuxtechi.com/wp-content/uploads/2022/07/RockyLinux9-Loginscreen-Post-Installation.png
[20]: https://www.linuxtechi.com/wp-content/uploads/2022/07/Desktop-Env-RockyLinux9.png
[21]: https://www.linuxtechi.com/wp-content/uploads/2022/07/neofetch-rockylinux9-post-installation.png

View File

@ -0,0 +1,133 @@
[#]: subject: "How to Uninstall Deb Packages in Ubuntu"
[#]: via: "https://itsfoss.com/uninstall-deb-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14885-1.html"
如何在 Ubuntu 中卸载 deb 包
======
![](https://img.linux.net.cn/data/attachment/album/202208/01/180906afaqifcsqqsfsxyq.jpg)
[从 .deb 文件安装应用][1] 非常简单。双击它,它会在软件中心中打开,然后从那里安装它。
但是如何在 Ubuntu 或 Debian 中卸载 deb 包呢?又如何删除一段时间前安装的软件包呢?
虽然这有几个如果和但是,但删除 .deb 文件的最简单和最可靠的方法是使用 `apt remove` 命令。
```
sudo apt remove program_name
```
如你所见,**你需要在这里知道确切的包名称**。这可能并不总是显而易见的。例如,如果你在 Ubuntu 上安装 Google Chrome则该程序在命令行中称为 “google-chrome-stable”。你已经知道了吗我猜你不知道。
在本教程中,我将详细介绍如何找到确切的包名称,然后使用它来删除应用。我还将讨论使用图形方法删除 deb 包。
### 从 Ubuntu 中删除通过 .deb 文件安装的软件包
在我向你展示如何从命令行删除 deb 包之前,让我们在软件中心应用中快速查看它。
#### 方法 1检查应用是否可以从软件中心移除
Ubuntu 有软件中心 GUI 应用,允许搜索、安装和删除应用。
搜索时,软件中心可能不会显示已安装的应用。
![Searching for installed applications may not show any results in Ubuntu Software Center][2]
但是,如果向下滚动,你仍可能在“已安装”部分下找到它。外部应用通常不带徽标显示。
![Some installed applications can be found in the installed tab of the Software Center][3]
如果找到它,你可以通过单击“垃圾桶”图标或“删除”按钮来删除该应用。
![Removing applications from the Ubuntu software center][4]
**一句话:检查是否可以从软件中心删除应用。**
#### 方法 2使用 apt 命令删除应用
我假设你不知道该应用命令的确切名称。你可能不知道 Google Chrome 安装为 google-chrome-stable 而 Edge 安装为 microsoft-edge-stable这很正常。
如果你知道前几个字母,那么 tab 补全可能会有所帮助。否则,你可以 [使用 apt 命令列出已安装的应用][5] 并使用 `grep` 搜索应用程序名称:
```
apt list --installed | grep -i possible_package_name
```
例如,你可以智能地猜测 Google Chrome 包的名称中应该包含 chrome。你可以这样搜索
```
apt list --installed | grep -i chrome
```
在某些情况下,你可能会得到多个结果。
![check if google chrome installed in ubuntu][6]
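“多个结果”是什么样的?下面用两行虚构的输出演示一下(包名是编造的,仅作示例):

```shell
# 两个虚构的包名都含有 chromegrep -ic 会统计匹配到的行数
printf 'google-chrome-stable/stable,now 104.0 amd64 [installed]
google-chrome-beta/stable,now 105.0 amd64 [installed]
' | grep -ic chrome
# 输出2
```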
如果你不确定这些软件包的作用,你可以随时通过以下方式获取它们的详细信息:
```
apt info exact_package_name
```
获得确切的软件包名称后,你可以使用 `apt remove` 命令将其删除。
```
sudo apt remove exact_package_name
```
你还可以使用 `apt-get remove` 或 `dpkg -r` 命令来删除。
![Removing applications installed via .deb files using the apt command][7]
#### 方法 3使用 Synaptic 包管理器删除 deb 应用
另一种方法是使用 [Synaptic 包管理器][8]。在 GNOME 以“软件中心”的形式创建其图形包管理器之前Synaptic 是 Ubuntu 和许多其他发行版中的默认 GUI 包管理器。
它仍然是 [Xfce 桌面环境][9] 上的推荐工具。
首先安装它:
```
sudo apt install synaptic
```
打开 Synaptic 并搜索包名称。查找标记为绿色的已安装软件包。右键单击它们,然后单击“标记为删除”。之后点击应用。
![Removing Deb packages using Synaptic package manager][10]
### 对你有帮助吗?
我非常乐意使用 `apt` 命令删除从 .deb 文件中安装的软件包。但我可以理解,并不是每个人都喜欢使用命令行。
在删除从外部 .deb 文件安装的应用时,我发现软件中心中找不到它。软件中心还可以做的更好一些。
我希望你现在对删除 deb 包有更好的了解。如果你有任何问题,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/uninstall-deb-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-deb-files-ubuntu/
[2]: https://itsfoss.com/wp-content/uploads/2022/07/search-for-installed-applications-ubuntu-software-center.png
[3]: https://itsfoss.com/wp-content/uploads/2022/07/installed-applications-in-ubuntu-software-center-scaled.webp
[4]: https://itsfoss.com/wp-content/uploads/2022/07/removing-applications-from-ubuntu-software-center-scaled.webp
[5]: https://itsfoss.com/list-installed-packages-ubuntu/
[6]: https://itsfoss.com/wp-content/uploads/2022/07/check-if-google-chrome-installed-in-Ubuntu.png
[7]: https://itsfoss.com/wp-content/uploads/2022/07/removing-deb-files-applications-ubuntu.png
[8]: https://itsfoss.com/synaptic-package-manager/
[9]: https://www.xfce.org/
[10]: https://itsfoss.com/wp-content/uploads/2022/07/removing-deb-files-using-synaptic-scaled.webp

View File

@ -0,0 +1,188 @@
[#]: subject: "Top 10 Features of Linux Mint 21 “Vanessa”"
[#]: via: "https://www.debugpoint.com/linux-mint-21-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14894-1.html"
Linux Mint 21 “Vanessa” 的 10 大特色
======
> 我们总结了 Linux Mint 21 “Vanessa” 的 10 大特色,你可以看看有哪些是为你而准备的。
![](https://www.debugpoint.com/wp-content/uploads/2022/07/mint21feature.jpg)
Linux Mint 21 “Vanessa” 是 [Linux Mint][2] 的第 36 个发布版本,它带来了一系列特色,以及对桌面上的有用改善。这些特色散落在 Cinnamon 桌面、内核变化、Xapps 更新等处。
我在这份 Linux Mint 21 的重要特色列表对它们做了个总结。
### Linux Mint 21 “Vanessa” 的重要特色
![Linux Mint 21 Cinnamon Desktop][1]
#### 1、Ubuntu 22.04 及其相关更新
也许最重要的变化就是 Linux Mint 21 的基础了,它现在基于 [Ubuntu 22.04 “Jammy Jellyfish”][3] 。上一次的主要版本,即 Linux Mint 20 “Ulyana” ,是基于四年前发布的 Ubuntu 20.04 “Focal Fossa” 。沧海桑田,现在与 2020 年的世界已然完全不同。
因此,大量的软件包、版本升级、新的性能改善 —— 所有的这些底层更新都来到了 Linux Mint 21 。这包括最新的长期支持的 [Linux 内核 5.15][4] ,这带来了更多硬件系列的支持、以及针对编程、开发和网络的工具链的更新。
#### 2、Timeshift 备份工具的重大变化
几个月前Mint 开发团队 [宣布][5] :他们将接管著名的备份工具 Timeshift并将其作为一个 “XApps” 继续开发。这是一个重大变化。你可能会问为什么?
好吧Timeshift 工具的开发者 Tony George 正忙于其它的项目。你可能听说过 Linux 的 “[TeeJeeTech][6]” 应用。它是由 Tony 创建的,并且有一些很酷的应用。因此,他没有足够多的时间来专注于 Timeshift 的开发和改进。
![Timeshift creating snapshot][7]
说到这里,由于 Linux Mint 现在在维护它,这个发布版本带来了一些新的功能,例如,在 rsync 模式(不是 btrfs 模式)时,现在 Timeshift 可以确定进行下一次备份需要多少磁盘空间。此外,如果它看到磁盘空间在备份后小于 1 GB ,会停止备份过程。
#### 3、WebP 支持
WebP 图像是谷歌为 Web 创建的一种相当新的图像格式。它带来了更好的压缩率,在保持与传统的 JPEG 和 PNG 图片相当的良好质量的同时,减少了文件大小。
在 Linux 桌面支持 WebP如查看图像、缩略图或编辑需要 [额外安装][8] 一些软件包。考虑到其流行程度Linux Mint 开发团队为桌面应用及这个衍生发行版带来了开箱即用的 WebP 支持。
这意味着,在 Nemo 文件管理器中可以显示 WebP 图像的缩略图,并可以在 xviewer 中查看它们。Mint 开发团队总是优先考虑到最终用户,而诸如 Ubuntu 之类的其它发行版在默认支持 WebP 方面仍然落后。不仅如此,新的应用程序 [xapp-thumbnailers][9] 现在还能帮助 Nemo 文件管理器预览更多的文件类型,如:
* ePub
* 带有专辑封面的 MP3
* RAW 图像
* AppImage
#### 4、进程监视器
一个名称为 <ruby>进程监视器<rt>process monitor</rt></ruby> 的小巧方便的工具,将会告知你系统中正在发生什么。当你的系统正在自动更新或通过 Timeshift 备份时,系统托盘上的这个小图标就会显示出来。在这些情况下,你的系统可能会变慢,而这个漂亮的图标可以告诉你原因。
#### 5、改善打印支持
Linux Mint 针对硬件设备配置了各种驱动程序,默认情况下就支持打印机。这个版本的 Mint 带来 <ruby>[网络打印协议][10]<rt>Internet Printing Protocol</rt></ruby>IPP支持可以免驱动进行打印和扫描。
另外,它也默认安装了 HP 的驱动程序 HPLIP 的最新版本 3.21.12 。
所有的这些变化都简化了打印机和扫描仪的使用,而像你这样的最终用户可以轻松地打印和扫描。这是一个 Linux 发行版的一个重要的方面,但并不是总是能顺利工作的。在 [点评过很多发行版][11] 后,我发现很多发行版无法检测到打印机,乃至不能打印。
很高兴看到 Mint 开发团队对这个关键功能做出了贡献。
#### 6、窗口动画更新
窗口和桌面动画效果有一些相当大的变化。首先,合并了窗口和桌面的效果设置。先前,是在不同的部分对动画进行细微的控制。
这里是对比视图:
![][12]
![][13]
其次,取消了映射窗口和桌面效果选项。
第三,带来一个新的控件,用于更改整体动画的快慢速度。
最后,还有一个可以禁用或启用在整个桌面上的所有动画的全局开关,给予你更多的控制选项。
我相信这是一个经过精心设计的、可以让人更清楚地了解的对话框和高级选项。
#### 7、Mutter 重新构建
让我们来看一下随 Linux Mint 21 而来的 [Cinnamon 桌面环境版本 5.4][14]。它是最新的 Cinnamon 发布版本Mint 是第一个将其带给用户的发行版(当然,滚动更新的 Arch Linux 用户 [更早一点][15] 就用上了)。
最后,开发团队对 Cinnamon 5.4 中的窗口管理器 Muffin 根据上游的 Mutter 进行了重新构建。由于 Muffin 最初是从 Mutter 复刻出来的,所以它总是落后于上游的 Mutter 的功能,即使是有一些后期移植的改变。为使 Muffin 尽可能地接近 Mutter 代码库,团队在包含的特色功能、错误修复及清理方面付出了大量的努力。
因此,在未来,更容易从 Mutter 上游移植变化和在需要的时候清理 Muffin。
#### 8、窗口管理器和 GTK 主题
伴随着 Muffin 的变化,开发团队也将 GNOME 控制中心的一些显示设置移动到了 Cinnamon 控制中心。此外,在 Cinnamon 5.4 中,来自 csd-xrandr 的显示配置移动到了 Muffin 窗口管理器中。显然,你不会在显示设置窗口中看到什么不同。不过,在缩放显示或在高分辨率窗口中时,你可能会发现一些性能的提升,以及错误或问题更少一些。
Mint 开发团队在 Cinnamon 5.4 引入的另外一个关键变化是,在应用程序中实现 GTK 窗口的统一渲染。先前,如果一个 GTK 应用程序使用了标题栏那么对话框会是CSD客户端装饰和 GTK 主题的混合体。
现在随着 Cinnamon 5.4 的到来,所有的窗口都使用 GTK 主题进行渲染,而不再与它们的设计相关联。于是,传统的 Metacity 主题也被抛弃。
顺便说一句,我喜欢 Metacity 及其 “传统外观”,它们是 GNOME 的早期 [产物][16] 。
#### 9、软件包管理器更新
跟随 Debian、KDE Plasma 桌面的趋势Linux Mint 也开始保护你的系统不会卸载重要的依赖关系软件包。
当你尝试卸载软件包时Mint 现在会检查依赖关系,并检查重要的桌面软件包是否将会被移除。
如果发现这种情况,你将会得到一条阻止你继续卸载软件包的错误信息。
在另一方面,当成功地卸载一个软件包时,它会清理所有与之同时安装的依赖软件包。
#### 10、禁用 systemd OOMD 服务
自从 Ubuntu 22.04 LTS 发布以来,内存不足守护进程(`systemd-oomd`)收到了不少负面反馈。网上的很多用户都 [报告][17] 说:在没有任何警告或用户干预的情况下,应用程序(例如 Firefox会被突然关闭。进一步的调查表明`systemd-oomd` 的实现情况“不是很好”。
理论上说,[systemd-oomd.service][18] 会监视你的系统的内存不足情况,并且它有权杀死任何过多消耗系统资源的进程。Ubuntu 开发团队并没有向用户强调这一点,最后导致了不愉快的用户体验。
基于这一认识Linux Mint 21 决定 [不提供][19] 这种服务,禁用它。因为 Linux Mint 的用户群体是普通用户、学生等,如果应用程序意外关闭,对用户来说将是一种不好的体验。
![Systemd OOMD service is not enabled][20]
#### 11、其它变化
最后,让我们归纳一些微小却有影响的变化来结束这篇 Linux Mint 21 特色介绍。
* 默认的文档阅读器应用程序 Xreader 现在能够进行微小注释。这是一个很方便的功能。
* WebApp 管理器现在带来了一些自定义的浏览器参数。
* Warpinator 文件传输器实用工具现在可以向你显示来自 Windows 、Android 和 iOS 设备上的其它的源文件。
* Mint 将 Firefox 浏览器打包为 deb 版本,而不是 Ubuntu 22.04 LTS 中默认的 Snap 版本。感谢 Mint 开发团队,用户不必为卸载 Jammy 中的 Firefox Snap 版本而运行 [一套复杂的命令][21]。
![Firefox 102 in Linux Mint 21 Exclusively packaged as deb executable][22]
* 批量重命名应用程序 Thingy 在用户界面上做了一些改善。
* GRUB2 的操作系统检测程序(`os-prober`)现在能够检测出你的硬件系统上所有的操作系统(对双启动或多启动有用)。
* 蓝牙管理器 Blueman 取代了 Blueberry ,为连接和管理你的蓝牙设备带来了其它的功能。
* 最后,在这个发布版本中也有为你的新桌面而准备的新壁纸。
![New Wallpapers in Linux Mint 21][23]
### 没有变化的部分
从表面上来看,你可能会觉得 Linux Mint 21 的绝大部分功能与先前的版本相同。默认桌面外观和默认壁纸保持不变。Xfce 和 MATE 桌面也没有发布任何重要的功能。因此,它们是完全一样的。此外,默认图标主题、应用程序菜单等等都可能会给你一种似曾相识的感觉。
### 总结
总体来说最终用户需要的是一套完好的特色功能而不是花哨的手势之类的东西。鉴于此对初学者或最终用户来说Linux Mint 是当今最好的 Linux 发行版。至此,这篇 Linux Mint 21 特色的总结就此结束了。
你认为 Linux Mint 21 的新特色怎么样?在这个发布版本中,是否有一些你所求而未得的特色?让我们在下面的评论区讨论这个问题。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-mint-21-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/07/Linux-Mint-21-Cinnamon-Desktop.jpg
[2]: https://www.debugpoint.com/linux-mint/
[3]: https://www.debugpoint.com/web-stories/ubuntu-22-04-review/
[4]: https://www.debugpoint.com/linux-kernel-5-15/
[5]: https://blog.linuxmint.com/?p=4323
[6]: https://teejeetech.com/
[7]: https://www.debugpoint.com/wp-content/uploads/2022/07/Timeshift-creating-snapshot.jpg
[8]: https://www.debugpoint.com/view-webp-ubuntu-linux/
[9]: https://github.com/linuxmint/xapp-thumbnailers
[10]: https://datatracker.ietf.org/doc/html/rfc8011
[11]: https://www.debugpoint.com/tag/linux-distro-review/
[12]: https://www.debugpoint.com/wp-content/uploads/2022/07/Effects-in-Linux-Mint-20.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/07/Effects-in-Linux-Mint-21.jpg
[14]: https://github.com/linuxmint/cinnamon-desktop/releases/tag/5.4.0
[15]: https://www.debugpoint.com/cinnamon-arch-linux-install/
[16]: https://www.debugpoint.com/gnome-classic-ubuntu-22-04/
[17]: https://askubuntu.com/questions/1404888/how-do-i-disable-the-systemd-oom-process-killer-in-ubuntu-22-04
[18]: https://www.freedesktop.org/software/systemd/man/systemd-oomd.service.html
[19]: https://debugpointnews.com/linux-mint-21-systemd-oom/
[20]: https://www.debugpoint.com/wp-content/uploads/2022/07/Systemd-OOMD-service-is-not-enabled.jpg
[21]: https://www.debugpoint.com/remove-firefox-snap-ubuntu/
[22]: https://www.debugpoint.com/wp-content/uploads/2022/07/Firefox-102-in-Linux-Mint-21-Exclusively-packaged-as-deb-executable.jpg
[23]: https://www.debugpoint.com/wp-content/uploads/2022/07/New-Wallpapers-in-Linux-Mint-21.jpg
[24]: https://github.com/linuxmint/mint21-beta/issues
[#]: subject: "Update a Single Package With apt Command in Ubuntu and Debian"
[#]: via: "https://itsfoss.com/apt-upgrade-single-package/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14895-1.html"
在 Ubuntu 和 Debian 中使用 apt 命令更新单个软件包
======
![](https://img.linux.net.cn/data/attachment/album/202208/04/165705li66yephvx464ivt.jpg)
如何 [在命令行中更新你的 Ubuntu 系统][1]?你可以使用 `apt update`(刷新包缓存)和 `apt upgrade` 命令。
```
sudo apt update && sudo apt upgrade
```
它会更新所有可以立即升级的已安装 apt 包。这也包括 Linux 内核版本。
这似乎是一件好事,尤其是对于桌面用户。但对于运行关键 Web 服务的 Ubuntu 服务器用户而言,情况可能并非如此。
如果你想对更新有选择性,并且**只想升级单个软件包**,请使用以下命令:
```
sudo apt install --only-upgrade package_name
```
让我们更详细地了解一下。
### 使用 apt 命令升级单个包
第一步是更新本地包仓库缓存,以便你的系统知道有新版本的软件包可用。
```
sudo apt update
```
**这是可选的**。查看一下你要升级的软件包是否在 [可升级软件包列表][2] 中。
```
apt list --upgradable
```
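如果你只关心某一个软件包,也可以把上面的列表过滤一下(这里以 `vim` 为例,包名仅作示意):

```shell
# 在可升级软件包列表中查找指定的软件包(以 vim 为例)
apt list --upgradable 2>/dev/null | grep '^vim/'
```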
如果所需的软件包有可用的新版本,你可以选择使用以下命令仅升级该单个软件包:
```
sudo apt install --only-upgrade package_name
```
如果你对已安装的软件包运行 `apt install` 命令,它将升级到下一个可用版本。
但如果该软件包尚未安装,`apt` 命令也会安装它。
这就是为什么 `--only-upgrade` 部分是必要的。使用该选项,`apt` 命令只会升级已安装的软件包。如果尚未安装,它将不会安装该软件包。
这不是最适合 Ubuntu 服务器用户的示例,但你仍然可以在下面的截图中看到我如何只升级了七个可升级包中的一个。
![Update only a single package in Ubuntu][3]
### 仅升级选定的软件包
如果要升级选定的几个软件包,那么不必一一更新。只需使用前面提到的命令提供包名称。
```
sudo apt install --only-upgrade package1 package2 package3
```
这是一个例子。
![Upgrade selected packages in Ubuntu][4]
### 总结
当你面临必须升级选定软件包的情况时,你可以使用带有 `--only-upgrade` 选项的 `apt install` 命令。
我建议阅读 [如何更有效地使用 apt 命令][5]。
--------------------------------------------------------------------------------
via: https://itsfoss.com/apt-upgrade-single-package/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/update-ubuntu/
[2]: https://itsfoss.com/apt-list-upgradable/
[3]: https://itsfoss.com/wp-content/uploads/2022/07/update-single-package-ubuntu-scaled.webp
[4]: https://itsfoss.com/wp-content/uploads/2022/07/upgrade-selected-packages-ubuntu.png
[5]: https://itsfoss.com/apt-command-guide/
[#]: subject: "How I use the Linux fmt command to format text"
[#]: via: "https://opensource.com/article/22/7/fmt-trivial-text-formatter"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lkxed"
[#]: translator: "perfiffer"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14886-1.html"
我是如何使用 Linux fmt 命令来格式化文本
======
![](https://img.linux.net.cn/data/attachment/album/202208/01/184300zbyfjayeyqa5pmcb.jpg)
> fmt 命令是一个简单的文本格式化程序。我将在这里展示如何使用它来格式化文本和邮件回复。
当我为项目编写文档时,我经常以纯文本的形式编写自述文件和安装说明。我不需要使用 HTML 或者 Markdown 之类的标记语言来描述项目的功能或如何编译它。但是维护这样的文档可能会很痛苦。如果我需要更新我的 `Readme` 文件中的一个句子的中间位置,我需要重新格式化文本,以避免在我的其它文本中间出现一个很长或很短的行,而其它的行的格式是整整齐齐的 75 列。一些编辑器包含可以自动重新格式化文本以填充段落的功能,但并非所有的编辑器都这样做。这就是 Linux `fmt` 命令的用武之地。
### 使用 Linux fmt 命令格式化文本
`fmt` 命令是一个简单的文本格式化程序;它收集单词并填充段落,但不应用任何其它文本样式,例如斜体或粗体。这一切都是纯文本。使用 `fmt` 命令,你可以快速调整文本,使其更易于阅读。让我们从这个熟悉的示例文本开始:
```
$ cat trek.txt
Space: the final
frontier. These are the voyages
of the starship Enterprise. Its
continuing mission: to explore
strange new worlds. To
seek out new life and new
civilizations. To boldly go
where no one has gone before!
```
在这个示例文件中,每行都有不同的长度,并且它们以一种奇怪的方式换行。如果你对纯文本文件进行大量更改,你可能会遇到类似的奇怪换行。要重新格式化此文本,你可以使用 `fmt` 命令将段落的行填充为统一长度:
```
$ fmt trek.txt
Space: the final frontier. These are the voyages of the starship
Enterprise. Its continuing mission: to explore strange new worlds. To
seek out new life and new civilizations. To boldly go where no one has
gone before!
```
默认情况下,`fmt` 会将文本格式化为 75 的列宽大小,但你可以使用 `-w``--width` 选项进行更改:
```
$ fmt -w 60 trek.txt
Space: the final frontier. These are the voyages of
the starship Enterprise. Its continuing mission: to
explore strange new worlds. To seek out new life and new
civilizations. To boldly go where no one has gone before!
```
### 使用 Linux fmt 命令格式化电子邮件回复
我加入的一个邮件列表更喜欢纯文本电子邮件,因为这使得在列表服务器上归档电子邮件更加容易。但现实是,并非每个人都以纯文本形式发送电子邮件。有时候,当我以纯文本形式回复这些电子邮件时,我的电子邮件客户端会将整个段落放在一行中,这使得在电子邮件中“引用”回复变得困难。
这是一个简单的例子。当我以纯文本形式回复电子邮件时,我的电子邮件客户端通过在每行前添加 `>` 字符来“引用”对方的电子邮件。对于一条短消息,可能如下所示:
```
> I like the idea of the interim development builds.
```
没有正确“换行”的长行将无法在我的纯文本电子邮件回复中正确显示,因为它只是前面带有 `>` 字符的长行,如下所示:
```
> I like the idea of the interim development builds. This should be a great way to test new changes that everyone can experiment with.
```
为了解决这个问题,我打开了一个终端并将引用的文本复制并粘贴到一个新文件中。然后我使用 `-p``--prefix` 选项来告诉 `fmt` 在每一行之前使用什么字符作为“前缀”。
```
$ cat > email.txt
> I like the idea of the interim development builds. This should be a great way to test new changes that everyone can experiment with.
^D
$ fmt -p '>' email.txt
> I like the idea of the interim development builds. This should be a
> great way to test new changes that everyone can experiment with.
```
`fmt` 命令是一个非常简单的文本格式化程序,但它可以做很多有用的事情,可以帮助以纯文本形式编写和更新文档。你还可以了解其它选项:`-c` 或 `--crown-margin` 可以匹配段落前两行的缩进(例如项目列表);`-t` 或 `--tagged-paragraph` 可以保留段落中第一行的缩进,就像缩进段落一样;`-u` 或 `--uniform-spacing` 则在单词之间使用一个空格,在句子之间使用两个空格。
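上面提到的 `-u` 选项可以这样简单演示(示例输入中多余的空格会被规整):

```shell
# -u/--uniform-spacing单词之间保留一个空格句子末尾后保留两个空格
echo "one.    two   three" | fmt -u
```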
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/fmt-trivial-text-formatter
作者:[Jim Hall][a]
选题:[lkxed][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/osdc-docdish-typewriterkeys-3-series.png
[#]: subject: "Fixing the “Pending Update of Firefox snap” Error in Ubuntu"
[#]: via: "https://itsfoss.com/pending-update-firefox-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14908-1.html"
修复 Ubuntu 中的 “Pending Update of Firefox snap” 错误
======
![](https://img.linux.net.cn/data/attachment/album/202208/08/154842wquoflgffwyyn2cw.jpg)
如果你使用的是 Ubuntu 22.04,你可能已收到过此通知。
![Notification about pending Firefox app][1]
它会通知你 Firefox 更新正在等待中,并要求你关闭应用以避免中断。
因此,就像一个听话的 Ubuntu 用户一样,你在保存或完成工作后关闭了 Firefox 浏览器。
你认为 Firefox 已在后台更新,重启浏览器将运行较新版本。
只是,并非如此。
**即使在你重启浏览器甚至计算机后,它仍可能显示相同的 “pending update of Firefox” 通知**。
令人沮丧么?我可以告诉你发生了什么。
让我解释一下为什么会发生这种情况,以及你可以做些什么来“修复”它。
### 修复 “pending update of Firefox snap” 问题
早些时候Firefox 曾经在后台更新,然后要求你重启浏览器。在你重启浏览器之前 [不能][2] 打开任何网站。
![Firefox forced restart in the past][3]
在将 [Firefox 浏览器切换为默认 Snap 包格式][4] 后Ubuntu 团队对更新流程进行了一些改动。
此通知是“改进的用户体验”的一部分。现在Firefox 不再阻止你浏览。你可以在方便时重新启动浏览器以进行更新。
但是为什么即使在你重新启动浏览器或系统后它仍然显示这个通知?
因为这是一条糟糕的通知消息,无法为你提供完整的信息。
#### Firefox 更新还没有开始
当你看到 “pending Firefox update” 时,你错误地认为应用已在后台更新,重启会将其升级到较新的版本。
而对于现在这种情况Ubuntu 中的 Snap 包每天会自动刷新(更新)一次或几次。为了避免在重新启动安装更新之前 Firefox 不允许你浏览任何内容而导致工作中断Ubuntu 甚至不会在后台更新 Firefox Snap 包。
相反,当 Snap 包刷新时,**它会显示通知并希望你立即关闭浏览器**,以便可以使用其他 Snap 包进行更新。
但像你我这样的用户不能这样做,对吧?看到通知,立即关闭浏览器?并不是很方便。
而当你有时间关闭浏览器时Snap 刷新却不会马上更新浏览器。
你可以看到更新的 Snap 版本的 Firefox 可用,但只要 Firefox 正在运行,它就不会自动安装。
![Firefox snap wont be updated automatically if the browser is running][5]
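顺便一提,你也可以在终端里查看有哪些 Snap 包正等待更新(示例命令,具体输出取决于你的系统):

```shell
# 列出所有有可用更新的 Snap 包(可能包括 firefox
snap refresh --list
```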
#### 更新 Firefox Snap
这是你摆脱每天不断出现的更新通知所需要做的事情。
* 关闭 Firefox 浏览器
* 手动运行 Snap 刷新(更新已安装的 Snap 包)
确保你在 Firefox 浏览器中的工作已保存。现在,使用鼠标关闭所有 Firefox 浏览器或在终端中运行以下命令:
```
sudo killall firefox
```
现在 Firefox 不再运行,更新 Snap 软件包:
```
sudo snap refresh
```
你会看到它开始下载更新的 Firefox 包。
![Firefox is being updated with Snap][6]
更新完成后,你将看到 Firefox 已升级到更新版本的摘要信息。
![Updated Firefox snap version][7]
### 总结
安装非 Snap 版本的 Firefox 也可能是个解决方案,但不是每个人都可以走这条路。
Firefox 和 Snap 的开发人员必须齐心协力改进这个模棱两可的更新过程。他们应该提供更好的机制,不仅显示待处理更新的通知,还提供启动更新的选项。
这是我们最近在 Ubuntu 上看到的许多奇怪的事情之一。这必须改变才能使 Ubuntu (再次)成为一个对初学者友好的发行版。
--------------------------------------------------------------------------------
via: https://itsfoss.com/pending-update-firefox-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/07/pending-update-firefox-ubuntu.png
[2]: https://news.itsfoss.com/mozilla-annoying-new-tab/
[3]: https://itsfoss.com/wp-content/uploads/2022/07/firefox-restart.webp
[4]: https://news.itsfoss.com/ubuntu-firefox-snap-default/
[5]: https://itsfoss.com/wp-content/uploads/2022/07/pending-Firefox-update-issue-Ubuntu.png
[6]: https://itsfoss.com/wp-content/uploads/2022/07/updating-firefox-snap-package-ubuntu.png
[7]: https://itsfoss.com/wp-content/uploads/2022/07/firefox-snap-update-ubuntu.png
[#]: subject: "Koodo is an All-in-one Open Source eBook Reader App for Linux"
[#]: via: "https://itsfoss.com/koodo-ebook-reader/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14902-1.html"
Koodo一款适用于 Linux 的一体化开源电子书阅读器应用
======
![](https://img.linux.net.cn/data/attachment/album/202208/06/200116wwgeawub7ge0tard.jpg)
有几个可供桌面 Linux 用户使用的 [电子书阅读器][1]。
几乎所有发行版都带有可以打开 PDF 文件的文档阅读器。它还可能支持其他文件格式,例如 epub 或 Mobi但不一定。
这就是为什么需要像 [Foliate][2] 和 Calibre 这样的专门应用来阅读和管理各种格式的电子书的原因。
最近,我遇到了另一个开源软件,它为电子书阅读器提供了几个令人兴奋的功能。
### Koodo它有你能想到的一切
[Koodo][3] 是一款多合一的开源电子书阅读器,具有帮助你更好地管理和阅读电子书的功能。它是一个跨平台应用,你可以在 Linux、Windows 和 macOS 上下载。你甚至可以 [在浏览器中使用它][4]。
它的用户界面看起来很现代,可能是因为它是一个 Electron 应用。你必须先导入书籍,将它们添加到 Koodo 中。它不按文件夹导入书籍,不过你可以选择多个文件一起导入。书太多了?可以将一些书添加到你的收藏夹,以便快速访问。
![Koodo ebook reader interface][5]
我使用了 AppImage 格式的软件包,但由于未知原因,它没有显示文件的缩略图。
![Koodo ebook reader dark mode interface][6]
它支持流行的电子书文件格式,如 PDF、Mobi 和 ePub。不止这些它还支持 CBR、CBZ 和 CBT 等漫画书格式,以及 FictionBook.fb2、Markdown、富文本格式RTF和微软 Office Word 文档(.docx。
除了支持很多文件格式外,它还提供了多种功能来改善你的阅读体验。
你可以高亮显示文本并使用文本注释对其进行注释。你还可以在当前文档或谷歌上搜索选定的文本。
![Annotate, highlight or translate selected text][7]
你可以从主应用窗口的侧边栏中访问高亮显示的文本和注释。
也有文本到语音和翻译选定文本的选项。但是,这两个功能在我的测试中都不起作用。我使用的是 Koodo 的 AppImage 版本。
Koodo 支持各种布局。你可以以单列、双列或连续滚动布局阅读文档。对于 ePub 和 Mobi 格式,它会自动以双列布局打开。对于 PDF默认选择单列布局。
你可以根据自己的喜好自定义 UI。更改字体、大小、段落间距、文本颜色、背景颜色、行间距、亮度等。
![koodo additional features][8]
Koodo 支持夜间阅读模式以及五个不同的主题。你可以根据自己的喜好在主题之间切换。
你还可以使用 Dropbox 或其他支持 Webdav 协议的 [云服务][9] 跨设备同步你的书籍和阅读数据(如高亮、笔记等)。
![You can backup your data in your preferred cloud service][10]
### 在 Linux 上获取 Koodo
如果你想体验一下 Koodo你可以试试它的在线版本。你可以在浏览器中使用 Koodo。你的数据本地存储在浏览器中如果你清理浏览器缓存你会丢失数据高亮、笔记等但不会丢失计算机上存储的书籍
> **[在线尝试 Koodo][11]**
如果你喜欢它的功能,可以选择在你的计算机上安装 Koodo。
Linux 用户有多种选择。你有 Debian 和基于 Ubuntu 的发行版的 deb 文件、Red Hat 和 Fedora 的 RPM以及面向所有发行版的 Snap、AppImage 和可执行文件。
你可以从项目主页获取你选择的安装程序。
> **[下载 Koodo][12]**
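如果你选择 AppImage 版本,下载后还需要赋予其可执行权限才能运行(下面的文件名仅作示意,实际文件名可能带有版本号):

```shell
# 赋予 AppImage 文件可执行权限并运行(文件名为假设)
chmod +x Koodo-Reader.AppImage
./Koodo-Reader.AppImage
```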
### 总结
Koodo 并不完美。它有大量功能,但并非所有功能都能完美运行,正如我在测试中发现的那样。
尽管如此,它仍然是一个很好的应用,有可能在用户中流行起来。只有少数几个应用包含如此多的功能。
感谢 Koodo 开发人员为桌面用户创建了一个有前途的开源应用。
你可以 [访问该项目的仓库][13] 来查看源代码、报告 bug 或者通过给项目加星来向开发者表达喜爱。
--------------------------------------------------------------------------------
via: https://itsfoss.com/koodo-ebook-reader/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/best-ebook-readers-linux/
[2]: https://itsfoss.com/foliate-ebook-viewer/
[3]: https://koodo.960960.xyz/en
[4]: https://reader.960960.xyz/#/manager/empty
[5]: https://itsfoss.com/wp-content/uploads/2022/07/koodo-ebook-reader-interface.webp
[6]: https://itsfoss.com/wp-content/uploads/2022/07/koodo-interface.png
[7]: https://itsfoss.com/wp-content/uploads/2022/07/koobo-ebook-reader-features.webp
[8]: https://itsfoss.com/wp-content/uploads/2022/07/koodo-additional-features.webp
[9]: https://itsfoss.com/cloud-services-linux/
[10]: https://itsfoss.com/wp-content/uploads/2022/07/koodo-backup-restore-feature.png
[11]: https://reader.960960.xyz/
[12]: https://koodo.960960.xyz/en
[13]: https://github.com/troyeguo/koodo-reader
[#]: subject: "How To Build Custom Docker Image Using Dockerfile"
[#]: via: "https://ostechnix.com/a-brief-introduction-to-dockerfile/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14896-1.html"
如何使用 Dockerfile 创建自定义 Docker 镜像
======
![](https://img.linux.net.cn/data/attachment/album/202208/04/172001acb136363vi6vcgk.jpg)
在这份指南中,我们将看到 **Dockerfile** 的简要介绍以及如何在 Linux 中使用 Dockerfile 来自动的 **创建自定义 Docker 镜像**
### 什么是 Dockerfile
Dockerfile 是附有构建 Docker 镜像说明的易于理解的文本文件。它囊括了用户在创建镜像时可以调用的所有命令。
我们可以使用 Dockerfile 创建自定义的镜像,并通过 Docker Hub 分享它们。
如果你还不知道Docker Hub 是 Docker 提供的托管存储库服务,用于团队查找和共享容器镜像,当然世界上任何人也都可以访问。
想象一下,早期如果我们想用 **Nginx**我们要经过很多步骤才能安装和配置好它。得益于 Docker Hub现在我们可以在几分钟内下载并运行 Nginx 的预置容器镜像。
![Nginx Docker Image In Dockerhub][1]
运行如下命令从 Docker Hub 上拉取 Nginx 镜像:
```
# docker pull nginx
```
一旦我们拉取了 Docker 镜像,可以运行如下命令使用它:
```
# docker run -it -d -p 8080:80 nginx
```
就这样,十分简单!
参考下方链接,了解更多使用 Docker 的方式:
* [开始使用 Docker][2]
Docker Hub 上有超过十万个来自软件供应商、开源项目以及社区的容器镜像。
你可以从 Docker Hub 上下载你选择的镜像,并且使用上面的命令开始使用它。
### 理解 Dockerfile 格式
Docker 可以读取 Dockerfile 中的 **指令** 来自动的创建镜像。
典型的 Dockerfile 包含如下指令:
1、`FROM` —— 这会设置容器的基础镜像。
例如:
```
FROM ubuntu:22.04
```
这会将容器的基础镜像设置为 Ubuntu。如果没有指明 `22.04` 这样的标签,则会使用最新版本(`latest`)。
2、`LABEL` —— 这是用来明确镜像的元数据信息的键值对。
例如:
```
LABEL ENV=“DEVELOPMENT”
```
3、`RUN` —— 这会在基础镜像中执行指令并创建一个新层。
例如:
```
RUN apt-get update
RUN apt-get install tomcat
```
4、`CMD` —— 这用来设置容器启动后先执行的命令。
例如:
```
CMD ["java", "-jar", "app.jar"]
```
5、`EXPOSE` —— 设置用于访问容器的端口。容器将会监听该端口,我们可以通过它访问容器提供的服务。
例如:
```
EXPOSE 8080
```
6、`MAINTAINER` —— 指明创建该镜像的作者信息。
例如:
```
MAINTAINER info@ostechnix.com
```
7、`ENV` —— 用来设置环境变量的键值对。这些变量在镜像创建的时候设置,并在容器创建好后可以使用。
例如:
```
ENV DB_NAME=”MySQL”
ENV DB_VERSION=”8.0”
```
8、`COPY` —— 用来拷贝本地文件至容器中。
例如:
```
COPY /target/devops.jar devops.jar
```
9、`ADD` —— 具有与 `COPY` 相同的功能,此外还可以提取本地的 tar 文件,或从 URL 拷贝文件。
例如:
```
ADD devops.tar.xz / .
ADD http://example.com/abc.git /usr/local/devops/
```
10、`ENTRYPOINT` —— 用来设置镜像的主要命令,与 `CMD` 指令功能类似。不同的是,`ENTRYPOINT` 中的指令不会在运行容器时被覆盖。
例如:
```
ENTRYPOINT ["java", "-jar", "app.jar"]
```
11、`VOLUME` —— 该指令用来创建指定位置的挂载点。
例如:
```
VOLUME /app/devops
```
12、`USER` —— 设置运行镜像时所使用的用户名及用户组。
例如:
```
USER dhruv
USER admin
```
13、`WORKDIR` —— 这会设置工作目录。如果目录不存在,则会创建。
例如:
```
WORKDIR /var/lib/
```
这是一个 Dockerfile 的样本,可以参考一下:
```
FROM ubuntu:latest
MAINTAINER Senthilkumar Palani "info@ostechnix.com"
RUN apt-get install -y software-properties-common python
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://us.archive.ubuntu.com/ubuntu/ jammy universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nodejs
RUN mkdir /var/www
ADD app.js /var/www/app.js
CMD ["/usr/bin/node", "/var/www/app.js"]
```
我将向你展示创建一个 Dockerfile 、创建并使用镜像的简单例子。
### 创建一个 Dockerfile
创建一个名为 `Dockerfile` 的文件(注意:`docker build` 默认查找的文件名是首字母大写的 `Dockerfile`
```
# nano Dockerfile
```
添加下面几行命令。我们将更新并安装 `vim``curl` 包:
```
FROM alpine
RUN apk update
RUN apk add vim
RUN apk add curl
```
![Dockerfile For Alpine Linux][3]
按下 `CTRL+O` 保存文件,再按 `CTRL+X` 退出编辑器。
现在 Dockerfile 已经就位。让我们继续,用该 Dockerfile 创建一个镜像。
> **注意:** 如果你在使用 [Docker 桌面版][4],你可以以一个普通用户运行 `docker` 命令。
### 使用 Dockerfile 创建 Docker 镜像
只需运行以下命令,便可以使用 Dockerfile 创建 Docker 镜像:
```
# docker build -t alpine .
```
请注意最后有一个 **点**`.`)。
输出示例:
```
[+] Building 51.2s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 104B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 38.8s
=> [1/4] FROM docker.io/library/alpine@sha256:7580ece7963bfa863801466c0a 2.7s
=> => resolve docker.io/library/alpine@sha256:7580ece7963bfa863801466c0a 0.0s
=> => sha256:d7d3d98c851ff3a95dbcb70ce09d186c9aaf7e25d48 1.47kB / 1.47kB 0.0s
=> => sha256:530afca65e2ea04227630ae746e0c85b2bd1a179379 2.80MB / 2.80MB 2.4s
=> => sha256:7580ece7963bfa863801466c0a488f11c86f85d9988 1.64kB / 1.64kB 0.0s
=> => sha256:9b2a28eb47540823042a2ba401386845089bb7b62a9637d 528B / 528B 0.0s
=> => extracting sha256:530afca65e2ea04227630ae746e0c85b2bd1a179379cbf2b 0.2s
=> [2/4] RUN apk update 4.3s
=> [3/4] RUN apk add vim 3.5s
=> [4/4] RUN apk add curl 1.3s
=> exporting to image 0.4s
=> => exporting layers 0.4s
=> => writing image sha256:14231deceb6e8e6105d2e551799ff174c184e8d9be8af 0.0s
=> => naming to docker.io/library/alpine 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
```
运行上面的命令后Docker 会根据保存在当前工作目录中的 Dockerfile 里的指令,自动开始创建镜像。还记得我们在 Dockerfile 中写入的 `apk update`、`apk add vim` 和 `apk add curl` 命令吗?这些命令将会自动执行。
如果 Dockerfile 保存在其他目录,你可以使用 `-f` 标志来指定路径,例如:
```
# docker build -f /path/to/a/Dockerfile .
```
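顺便一提,构建完成后,你可以用 `docker images` 确认新镜像已出现在本地镜像列表中(输出取决于你的环境):

```shell
# 列出本地名为 alpine 的镜像,确认构建成功
docker images alpine
```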
创建好镜像后,我们可以使用如下命令运行它:
```
# docker run -it alpine
```
该命令会启动这个 Alpine 容器并连接到它。
```
/ # uname -a
Linux 8890fec82de8 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 Linux
/ # cat /etc/alpine-release
3.16.1
/ #
```
如果你使用 Docker 桌面版,你可以通过<ruby>容器<rt>Containers</rt></ruby>标签页界面来查看运行中的容器。
![View Containers In Docker Desktop][5]
这就是使用 Dockerfile 构建自定义容器镜像的方式。
我们仅仅讲了基础内容。你可以用 Dockerfile 做到很多东西。建议你参考一下官方 [Dockerfile 参考][6] ,以了解更多内容。
--------------------------------------------------------------------------------
via: https://ostechnix.com/a-brief-introduction-to-dockerfile/
作者:[sk][a]
选题:[lkxed][b]
译者:[Donkey](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/wp-content/uploads/2022/07/Nginx-Docker-Image-In-Dockerhub.png
[2]: https://ostechnix.com/getting-started-with-docker/
[3]: https://ostechnix.com/wp-content/uploads/2022/07/Dockerfile-For-Alpine-Linux.png
[4]: https://ostechnix.com/docker-desktop-for-linux/
[5]: https://ostechnix.com/wp-content/uploads/2022/07/View-Containers-In-Docker-Desktop-1024x524.png
[6]: https://docs.docker.com/engine/reference/builder/
[#]: subject: "How to Install Latest Vim 9.0 on Ubuntu Based Linux Distributions"
[#]: via: "https://itsfoss.com/install-latest-vim-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14899-1.html"
如何在基于 Ubuntu 的 Linux 发行版上安装最新的 Vim 9.0
======
![](https://img.linux.net.cn/data/attachment/album/202208/05/174903f3zu3nqrrnwclwrz.jpg)
> 这个快速教程展示了在 Ubuntu Linux 上安装最新版本的 Vim 的步骤。
Vim 是最 [流行的基于终端的文本编辑器][1] 之一。然而,它在 Ubuntu 上没有被默认安装。
Ubuntu 使用 Nano 作为默认的终端编辑器。Nano 也是一个优秀的工具,我并不打算参与 [Nano 与 Vim 孰优孰劣的辩论][2]。
如果你已经花了一些时间掌握了 Vim 的快捷键,你就不必忘记它们,而开始使用一个新的编辑器。
你可以在终端使用以下命令在 Ubuntu 上安装 Vim
```
sudo apt install vim
```
这很简单,对吗?这种方法的主要问题是,你不会得到最新的 Vim 版本。
你可以用以下命令检查已安装的 Vim 版本:
```
vim --version
```
而如果你查看 [Vim 网站][3],你会发现 Vim 已经发布了更新的版本。
在写这篇文章的时候,[Vim 9.0 已经发布][4],但在 Ubuntu 仓库中还没有。
好消息是,你可以使用一个 [非官方的,但积极维护的 PPA][5] 安装最新的 Vim。
### 使用 PPA 在 Ubuntu 上安装 Vim 9
如果你有特定的 Vim 配置文件,为它们做个备份也无妨。
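例如,可以这样简单地备份(备份文件名仅作示意):

```shell
# 备份现有的 Vim 配置文件(如果存在)
[ -f ~/.vimrc ] && cp ~/.vimrc ~/.vimrc.bak
[ -d ~/.vim ] && cp -r ~/.vim ~/.vim.bak
```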
现在,要安装最新的 Vim 版本,先添加 PPA 仓库:
```
sudo add-apt-repository ppa:jonathonf/vim
```
![Adding the PPA to get the latest Vim version][6]
你不需要在 Ubuntu 上更新软件包缓存,但其他发行版如 Mint 可能仍然需要:
```
sudo apt update
```
现在,使用下面的命令来安装 PPA 提供的最新 Vim 版本:
```
sudo apt install vim
```
如果你已经安装了一个较早的 Vim 版本,它将被升级。你可以用以下方法检查已安装的 Vim 版本:
```
vim --version
```
![Checking installed Vim version][7]
这是一个维护得非常好的 PPA适用于所有活跃的 Ubuntu 版本。
如果你是 PPA 的新手,我有一个关于这个主题的详细指南。你应该阅读以对 [Ubuntu 中 PPA 的概念][8] 了解更多。
### 降级或删除
如果你想回到 Ubuntu 提供的旧版 Vim你应该删除现有的版本删除 PPA 并重新安装它。
在删除 Vim 之前,如果你做了自定义修改并打算再次使用 Vim你应该复制 vimrc 或其他类似的配置文件。
那么,打开一个终端,使用以下命令:
```
sudo apt remove vim
```
现在删除 PPA否则你会再次得到最新的 Vim如果你尝试安装旧版本的 Vim
```
sudo add-apt-repository -r ppa:jonathonf/vim
```
现在,如果你想要旧的、官方的 Ubuntu 版本的 Vim只需再次 [使用 apt 命令][9] 安装它。
享受 Ubuntu 上的 Vim 吧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-latest-vim-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/command-line-text-editors-linux/
[2]: https://itsfoss.com/vim-vs-nano/
[3]: https://www.vim.org/
[4]: https://news.itsfoss.com/vim-9-0-release/
[5]: https://launchpad.net/~jonathonf/+archive/ubuntu/vim
[6]: https://itsfoss.com/wp-content/uploads/2022/07/install-latest-vim-on-ubuntu-using-ppa.png
[7]: https://itsfoss.com/wp-content/uploads/2022/07/vim-9-ubuntu.png
[8]: https://itsfoss.com/ppa-guide/
[9]: https://itsfoss.com/apt-command-guide/
[#]: subject: "The Much Awaited Linux Mint 21 is Released and Available to Download"
[#]: via: "https://news.itsfoss.com/linux-mint-21-release/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14884-1.html"
期待已久的 Linux Mint 21 发布
======
> Linux Mint 终于发布了基于 Ubuntu 22.04 LTS 的 “Vanessa”并带来了很多有用的改进。
![linux mint 21][1]
Linux Mint 是 [最受欢迎的 Linux 发行版之一][2]。它使用 Ubuntu 作为其基础,特别是它基于 [长期支持][3] 的 Ubuntu 版本,以获得长达 5 年的软件支持。
现在,我们有一个新的版本升级,即 **Linux Mint 21 “Vanessa”**,它基于 4 月份最新发布的 [Ubuntu 22.04 LTS 版本][4]。因此,用户可以预期它的安全更新可以支持到 2027 年。
让我们来看看这个版本的亮点。
### Linux Mint 21 Vanessa 的新亮点
![][5]
Linux Mint 21 采用了稳定且有所改进的 [Linux 5.15 LTS 内核][6],带来了一系列新增功能、变化和完善。
#### 现有用户的升级工具
![][7]
现有的 Mint 20.3 用户可以使用新的基于 GUI 的升级工具轻松更新他们的系统。
用户会看到一个需要安装或升级的新软件包的列表,这也包括了对你可能手动添加的第三方 PPA 库的检查。
#### 新的蓝牙管理器
![][8]
Blueman 现在取代了图形界面的 GNOME 蓝牙管理器 Blueberry。
之所以这样做,主要是因为 Blueman 提供了更多的功能和连接选项以及对多种桌面环境的更好支持。此外Blueman 的用户界面与 Linux Mint 完美地融合在一起。
Blueman 包括一些高级选项,可能大多数用户用不到,但它是一个好工具。
#### 新的进程监控托盘图标
![][9]
不管是对于资深用户还是初级用户来说,一个非常有用的功能是引入了一个新的托盘图标,可以监控进程!
这个托盘图标将通知用户是否有任何自动化进程(如更新和系统快照)在后台运行。
当系统变得缓慢时Mint 用户将很容易知道该去哪里找到问题!
#### 增强的缩略图支持
![][10]
以前,一些文件类型没有任何缩略图显示,这样的用户体验不是很好。
为了解决这个问题,这个版本引入了一个新的项目 *xapp-thumbnails*,并为包括 AppImage、ePub、MP3、RAW 图片和 WebP 在内的文件类型带来了缩略图支持。
#### XApp 的改进
以前的 Timeshift 备份工具现在成为了一个 XApp并由 Mint 团队正式维护。此外,在 rsync 模式下,如果快照导致磁盘上的可用空间少于 1GB则会计算出下一次快照所需的空间并跳过下一次快照。
Xviewer、Warpinator、Thingy 和 WebApp 管理器也有了其他改进。
#### Cinnamon 5.4.2
Linux Mint 的旗舰桌面环境 Cinnamon 得到了良好的内部升级。
默认的窗口管理器 Muffin 现在重新基于较新的 Mutter 3.36 代码库开发。
窗口 UI 也有一些细微的改进,包括主题和动画。
![][11]
### 其他增加的功能和改进
其他一些变化包括:
* 改进了对 AppImage 的支持,这与 Ubuntu 22.04 不同
* 目录中出现了一组新的漂亮的壁纸
你可以在我们专门的 [Linux Mint 21 功能][12] 文章中探索更多关于它的新亮点。
### 获取 Linux Mint 21
如果你正在使用 Mint 20.3,你应该能在几天内升级到 Mint 21。图形化的更新过程应该会在几天后出现。
你可以选择从 Linux Mint 的下载页面下载 ISO进行全新安装。
> **[获取 Linux Mint 21][13]**
如果你的网络速度慢或不稳定,你也可以 [用这个种子链接][14]。
享受新鲜的 Mint 吧 🙂
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/linux-mint-21-release/
作者:[Rishabh Moharir][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/07/linux-mint-21-release.jpg
[2]: https://itsfoss.com/best-linux-distributions/
[3]: https://itsfoss.com/long-term-support-lts/
[4]: https://news.itsfoss.com/ubuntu-22-04-release/
[5]: https://news.itsfoss.com/wp-content/uploads/2022/07/linux-mint-21-new.jpg
[6]: https://news.itsfoss.com/linux-kernel-5-15-release/
[7]: https://news.itsfoss.com/wp-content/uploads/2022/07/upgradetool.webp
[8]: https://news.itsfoss.com/wp-content/uploads/2022/07/blueman.png
[9]: https://news.itsfoss.com/wp-content/uploads/2022/07/monitor.png
[10]: https://news.itsfoss.com/wp-content/uploads/2022/07/thumbnails.png
[11]: https://news.itsfoss.com/wp-content/uploads/2022/07/animations.png
[12]: https://itsfoss.com/linux-mint-21-features/
[13]: https://linuxmint.com/download.php
[14]: https://linuxmint.com/torrents/
[#]: subject: "Secure Boot Disabled? GNOME Will Soon Warn You About it!"
[#]: via: "https://news.itsfoss.com/gnome-secure-boot-warning/"
[#]: author: "Anuj Sharma https://news.itsfoss.com/author/anuj/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14892-1.html"
如果禁用了安全启动GNOME 就会发出警告
======
> GNOME 正计划通知用户其固件安全状态,来保护不安全的硬件。
![](https://news.itsfoss.com/wp-content/uploads/2022/08/gnome-secure-boot-warning.jpg)
当你在支持 UEFI 的电脑上安装 Linux 时,你必须禁用“<ruby>安全启动<rt>Secure Boot</rt></ruby>”,因为启用该选项后,不能使用<ruby>现场 USB<rt>Live USB</rt></ruby> 启动。
一些主流的 Linux 发行版支持安全启动,但对于许多其他发行版(以及板载的 Nvidia 硬件)来说,它的设置仍然具有挑战性。
虽然一年又一年,情况似乎并没有改善,但总的来说,安全启动是一个必不可少的保护功能。
因此,正如 [Phoronix][1] 所发现的,为了方便并让用户意识到这一点GNOME 和红帽的开发者正在努力做到在安全启动被禁用时通知(或警告)用户。
### 它有什么用?
UEFI/安全启动被批评为 DRM因为它剥夺了用户的自由。开源社区的许多人仍然不赞同实施 UEFI/安全启动和 TPM因为它带来了不便。这就催生了像 [Coreboot][2] 这样的项目在开源世界中蓬勃发展。
当然,如果你每天都用 Linux我会建议你购买支持 Coreboot 的新硬件,这是一个不同的故事。
话虽如此,但可以肯定的是,安全启动是最简单的方法。
考虑到捆绑的专有固件,安全启动的安全性仍然值得商榷。但是,它是一个确保系统的固件安全的基本保护机制。
所以,开发者准备在启动闪屏([Plymouth][3]、GNOME 显示管理器GDM和 GNOME 控制中心显示警告。
![图片来源GNOME 博客][4]
GNOME 的一位开发者在 [博客文章][5] 中分享了它的更多细节,同时给出了其中的一些屏幕截图。
![][6]
一位来自红帽的开发者在 [合并请求][7] 中提到:
> 安全启动被用来对付一些恶意软件试图感染系统的固件的安全威胁。用户可能会无意中禁用或软件可能会有意禁用安全启动。因此,配置不正确的话,系统就运行在一个不安全的平台上。如果启动闪屏能向用户提供一个警告,用户可以重新启动并重新配置他们的系统,或者立即寻求帮助。
所以,作为一个 GNOME 用户,当它进入 GNOME 43 的最终版本或任何未来的版本时,我乐于看到它所带来的变化。
如果你也想看看,你可以在 GNOME 控制中心的“<ruby>隐私<rt>Privacy</rt></ruby>”标签下的“<ruby>设备安全<rt>Device Security</rt></ruby>”部分找到这个选项,如下图所示,我的机器在 Arch Linux 上运行 GNOME 43 alpha。
![][8]
该菜单还可以显示 TPM、英特尔 BootGuard 和 IOMMU 保护的细节。
![][9]
看来我的系统并不像我想象的那么安全……但也许这就是这个功能的意义所在?
如果你只在你的 Linux 发行版上使用 UEFI 模式,并且为了方便而关闭了安全保护功能,这能让你意识到这一点吗?
有可能。但是,考虑到各 Linux 发行版的现状和启用安全启动的种种麻烦,我不觉得它会成为一个大问题。我们很快就会知道了。
### 如何禁用这个警告?
正如在 GNOME Gitlab 的 [合并请求][10] 中提到的,在你的内核参数中添加 `sb-check=false` 就可以禁用这些警告。
不过,作为终端用户,你不需要担心这个问题。
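如果你现在就想知道自己系统的安全启动状态,在安装了 `mokutil` 工具的前提下,可以在终端直接查询:

```shell
# 查询当前固件的安全启动状态(需要 mokutil 工具UEFI 系统)
mokutil --sb-state
```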
你对即将在 GNOME 43 或更高版本中增加的这个功能有什么看法?你对 UEFI/安全启动有什么看法?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gnome-secure-boot-warning/
作者:[Anuj Sharma][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/anuj/
[b]: https://github.com/lujun9972
[1]: https://www.phoronix.com/news/GNOME-Secure-Boot-Warning
[2]: https://www.coreboot.org/
[3]: https://gitlab.freedesktop.org/plymouth
[4]: https://news.itsfoss.com/wp-content/uploads/2022/08/gnome-secure-boot-mockup.png
[5]: https://blogs.gnome.org/hughsie/2022/07/29/emulated-host-profiles-in-fwupd/
[6]: https://news.itsfoss.com/wp-content/uploads/2022/08/boot-security.png
[7]: https://gitlab.freedesktop.org/plymouth/plymouth/-/merge_requests/176
[8]: https://news.itsfoss.com/wp-content/uploads/2022/07/secure-boot-gnome.png
[9]: https://news.itsfoss.com/wp-content/uploads/2022/07/secure-boot-gnome1.png
[10]: https://gitlab.gnome.org/GNOME/gnome-shell/-/merge_requests/2333
[#]: subject: "Peppermint OS Now Also Offers a Systemd-free Devuan Variant!"
[#]: via: "https://news.itsfoss.com/peppermint-os-devuan/"
[#]: author: "Sagar Sharma https://news.itsfoss.com/author/sagar/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14906-1.html"
Peppermint OS 现在也提供无 systemd 的 Devuan 变体了!
======
> 基于 Devuan 的 Peppermint OS 可能是无 systemd 发行版中一个令人振奋的新成员。听起来不错吧?
![peppermint][1]
作为 [最轻量级和最灵活的 Linux 发行版之一][2]Peppermint OS 现在提供一个基于 Devuan 的 ISO可以让高级用户对他们的系统有更多的控制。
随着他们发布了 Peppermint OS 11[他们放弃使用 Ubuntu][3] 作为基础,而使用 Debian使 Peppermint OS 更加稳定和可靠。
### 基于 Devuan 的 Peppermint OS
![Peppermint OS devuan][4]
那么,首先 Devuan 是什么?
Devuan 是 Debian 的一个分叉,没有 systemd所以用户可以拥有移植性和选择的自由。
是否使用 systemd 经常发生争论,这就是为什么我们有一个 [无 systemd 的 Linux 发行版][5] 的列表,但只有少数几个可以提供开箱即用的精良体验。
现在,基于 Devuan 的 Peppermint OS 版本应该是这个列表中令人振奋的补充。
如果你想要一个无 systemd 的发行版,给你的操作系统更多的自由,这应该是一个不错的尝试。
别担心Peppermint OS 的 Debian 版将会继续存在。所以,你可以期待基于 Devuan 和基于 Debian 的 ISO 都可以使用。
### 你需要无 systemd 发行版吗?
systemd 是一个初始化系统。当你启动你的 Linux 机器时,初始化系统是最先启动的程序之一,并将一直运行到你使用电脑为止。
但 [systemd 不仅仅是一个初始系统][6],它还包含其他软件,如 logind、networkd 等,用于管理 Linux 系统的不同方面。
总的来说,它演变成了一个复杂的初始模块。虽然它使许多事情变得简单,但在一些用户看来,它是一个臃肿的解决方案。
因此,有用户开始喜欢 Devuan 这样的选项。而且Peppermint OS 的开发者现在正试图通过使用 Devuan 作为另一个版本的基础,来改善桌面用户的体验。
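如果你不确定自己的系统用的是哪个初始化系统,可以查看 PID 为 1 的进程名:

```shell
# 查看 PID 1 的进程名systemd 系统上通常输出 "systemd"
# Devuan 这类发行版上则通常是 "init"
ps -p 1 -o comm=
```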
### 下载基于 Devuan 的 Peppermint OS
对于习惯于无 systemd 的用户来说,这是一个很好的选择。
但是,如果你从来没有尝试过无 systemd 的发行版,除非你知道自己在做什么,否则进行切换可能不是一个明智的主意。
> **[Peppermint OS (Devuan)][7]**
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/peppermint-os-devuan/
作者:[Sagar Sharma][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sagar/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/08/peppermint-devuan.jpg
[2]: https://itsfoss.com/lightweight-linux-beginners/
[3]: https://news.itsfoss.com/peppermint-11-release/
[4]: https://news.itsfoss.com/wp-content/uploads/2022/08/Peppermint-OS-Devuan-edition.png
[5]: https://itsfoss.com/systemd-free-distros/#systemd-or-not
[6]: https://freedesktop.org/wiki/Software/systemd/
[7]: https://peppermintos.com/2022/08/peppermint-os-releases-for-08-02-2022/

[#]: subject: "Slax Linux Re-Introduces a Slackware Variant With Slax 15 Release"
[#]: via: "https://news.itsfoss.com/slax-15-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14900-1.html"
Slax Linux 的 Slackware 变体重新复活
======
> 基于 Slackware 的 Slax 版本在 Slackware 15.0 的基础上进行了升级,并带来一些基本的改进。
![](https://news.itsfoss.com/wp-content/uploads/2022/08/slax-15.jpg)
Slax 是最有趣的 [轻量级 Linux 发行版][1] 之一。
它是基于 Slackware 的,是 32 位系统的一个合适选择。如果你尚不知道Slackware 是最古老的、活跃的 Linux 发行版,并在 6 年后见证了一次重大版本升级,即 [Slackware 15][2] 的发布。
此外Slax 还提供了一个基于 Debian 的替代版本,该版本正在积极维护。正如创作者在博文中提到的,这是由于基于 Slackware 的版本Slax 14在很长一段时间内9 年)没有得到更新。
因此,看到它最终以 **Slax 15.0** 的形式发布了重大升级版本,同时其 Debian 版本(即 **Slax 11.4.0**)也得到了小幅更新,还是令人欣喜的。
有趣的是,这个版本早在 2022 年 7 月就向其支持者提供了。而现在,所有人都可以下载和试用了。
让我来介绍一下新的变化。
### Slax 15.0 和 Slax 11.4 发布
为了解决关键的升级问题Slax 15.0 带来了 Slackware 15.0 中添加的改进。
其中包括加入了 [Linux 内核 5.15 LTS][3],带来了增强的 NTFS 驱动支持,以及对英特尔/AMD 处理器更完善的支持。你可以选择提供更多内置驱动程序的内核变体,或者选择节省内存、但启动时会显示警告的通用选项。
该发布版本通过插件支持 slackpkg这意味着你可以从各种软件库中安装软件包括官方的 Slackware 仓库和 SlackOnly 仓库。
Slax 15.0 还更新了关机程序,对设备的卸载处理更加完善。
需要注意的是Slax 不再是一个基于 KDE 的发行版。因此,无论你下载的是 Slackware 还是 Debian 版本的 ISO你得到的都是一个基于 Fluxbox 的版本。
而对于 Debian 版本,你会发现它的更新是基于 **Debian 11.4** “Bullseye” 的。
### 下载 Slax 15.0 和 Slax 11.4
你无法找到基于 Slackware 的版本的 32 位版本,而只能找到基于 Debian 的。
其 ISO 文件可以在其官方网站上下载。如果你想以某种方式支持该项目,也可以选择购买。
> **[Slax 15.0][4]**
无论哪种情况,你都可以前往其 [Patreon 页面][5] 以示支持。
你对 Slax 15.0 的发布有什么看法?你试过了吗?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/slax-15-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/lightweight-linux-beginners/
[2]: https://news.itsfoss.com/slackware-15-release/
[3]: https://news.itsfoss.com/linux-kernel-5-15-release/
[4]: https://www.slax.org/
[5]: https://patreon.com/slax/

[#]: subject: "How Much of a Genius-Level Move Was Using Binary Space Partitioning in Doom?"
[#]: via: "https://twobithistory.org/2019/11/06/doom-bsp.html"
[#]: author: "Two-Bit History https://twobithistory.org"
[#]: collector: "lujun9972"
[#]: translator: "aREversez"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How Much of a Genius-Level Move Was Using Binary Space Partitioning in Doom?
======
In 1993, id Software released the first-person shooter _Doom_, which quickly became a phenomenon. The game is now considered one of the most influential games of all time.
A decade after _Doom_'s release, in 2003, journalist David Kushner published a book about id Software called _Masters of Doom_, which has since become the canonical account of _Doom_'s creation. I read _Masters of Doom_ a few years ago and don't remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of _Doom_, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because _Doom_ was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called “binary space partitioning,” never before used in a video game, that dramatically sped up the _Doom_ engine.
That story about Carmack applying cutting-edge academic research to video games has always impressed me. It is my explanation for why Carmack has become such a legendary figure. He deserves to be known as the archetypal genius video game programmer for all sorts of reasons, but this episode with the academic papers and the binary space partitioning is the justification I think of first.
Obviously, the story is impressive because “binary space partitioning” sounds like it would be a difficult thing to just read about and implement yourself. I've long assumed that what Carmack did was a clever intellectual leap, but because I've never understood what binary space partitioning is or how novel a technique it was when Carmack decided to use it, I've never known for sure. On a spectrum from Homer Simpson to Albert Einstein, how much of a genius-level move was it really for Carmack to add binary space partitioning to _Doom_?
I've also wondered where binary space partitioning first came from and how the idea found its way to Carmack. So this post is about John Carmack and _Doom_, but it is also about the history of a data structure: the binary space partitioning tree (or BSP tree). It turns out that the BSP tree, rather interestingly, and like so many things in computer science, has its origins in research conducted for the military.
That's right: E1M1, the first level of _Doom_, was brought to you by the US Air Force.
### The VSD Problem
The BSP tree is a solution to one of the thorniest problems in computer graphics. In order to render a three-dimensional scene, a renderer has to figure out, given a particular viewpoint, what can be seen and what cannot be seen. This is not especially challenging if you have lots of time, but a respectable real-time game engine needs to figure out what can be seen and what cannot be seen at least 30 times a second.
This problem is sometimes called the problem of visible surface determination. Michael Abrash, a programmer who worked with Carmack on _Quake_ (id Software's follow-up to _Doom_), wrote about the VSD problem in his famous _Graphics Programming Black Book_:
> I want to talk about what is, in my opinion, the toughest 3-D problem of all: visible surface determination (drawing the proper surface at each pixel), and its close relative, culling (discarding non-visible polygons as quickly as possible, a way of accelerating visible surface determination). In the interests of brevity, I'll use the abbreviation VSD to mean both visible surface determination and culling from now on.
> Why do I think VSD is the toughest 3-D challenge? Although rasterization issues such as texture mapping are fascinating and important, they are tasks of relatively finite scope, and are being moved into hardware as 3-D accelerators appear; also, they only scale with increases in screen resolution, which are relatively modest.
> In contrast, VSD is an open-ended problem, and there are dozens of approaches currently in use. Even more significantly, the performance of VSD, done in an unsophisticated fashion, scales directly with scene complexity, which tends to increase as a square or cube function, so this very rapidly becomes the limiting factor in rendering realistic worlds.[1][1]
Abrash was writing about the difficulty of the VSD problem in the late 90s, years after _Doom_ had proved that regular people wanted to be able to play graphically intensive games on their home computers. In the early 90s, when id Software first began publishing games, the games had to be programmed to run efficiently on computers not designed to run them, computers meant for word processing, spreadsheet applications, and little else. To make this work, especially for the few 3D games that id Software published before _Doom_, id Software had to be creative. In these games, the design of all the levels was constrained in such a way that the VSD problem was easier to solve.
For example, in _Wolfenstein 3D_, the game id Software released just prior to _Doom_, every level is made from walls that are axis-aligned. In other words, in the Wolfenstein universe, you can have north-south walls or west-east walls, but nothing else. Walls can also only be placed at fixed intervals on a grid—all hallways are either one grid square wide, or two grid squares wide, etc., but never 2.5 grid squares wide. Though this meant that the id Software team could only design levels that all looked somewhat the same, it made Carmack's job of writing a renderer for _Wolfenstein_ much simpler.
The _Wolfenstein_ renderer solved the VSD problem by “marching” rays into the virtual world from the screen. Usually a renderer that uses rays is a “raycasting” renderer—these renderers are often slow, because solving the VSD problem in a raycaster involves finding the first intersection between a ray and something in your world, which in the general case requires lots of number crunching. But in _Wolfenstein_, because all the walls are aligned with the grid, the only location a ray can possibly intersect a wall is at the grid lines. So all the renderer needs to do is check each of those intersection points. If the renderer starts by checking the intersection point nearest to the player's viewpoint, then checks the next nearest, and so on, and stops when it encounters the first wall, the VSD problem has been solved in an almost trivial way. A ray is just marched forward from each pixel until it hits something, which works because the marching is so cheap in terms of CPU cycles. And actually, since all walls are the same height, it is only necessary to march a single ray for every _column_ of pixels.
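The one-ray-per-column idea is small enough to sketch in a few lines of Python. This is an illustrative toy, not id Software's code: the map, coordinates, and the fixed-step marching (a simplification of the exact grid-intersection stepping described above) are all assumptions made for brevity.

```python
import math

# A hypothetical 5x4 grid map: "#" cells are walls, "." cells are empty.
WORLD = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, max_dist=20.0, step=0.05):
    """March a ray forward in small steps until it enters a wall cell.
    Returns the distance to the first wall, or max_dist if none is hit."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if not (0 <= int(y) < len(WORLD) and 0 <= int(x) < len(WORLD[0])):
            return dist  # ray left the map
        if WORLD[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

def render_columns(px, py, facing, fov=math.pi / 3, columns=10):
    """One ray per screen column; a column's on-screen wall height is
    inversely proportional to the distance of the wall its ray hits."""
    heights = []
    for c in range(columns):
        a = facing - fov / 2 + fov * c / (columns - 1)
        heights.append(1.0 / max(cast_ray(px, py, a), 1e-6))
    return heights
```

A player standing at (2, 2) facing straight right hits the east wall at a distance of about 2 grid squares, and nearer walls produce taller columns.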
This rendering shortcut made _Wolfenstein_ fast enough to run on underpowered home PCs in the era before dedicated graphics cards. But this approach would not work for _Doom_, since the id team had decided that their new game would feature novel things like diagonal walls, stairs, and ceilings of different heights. Ray marching was no longer viable, so Carmack wrote a different kind of renderer. Whereas the _Wolfenstein_ renderer, with its ray for every column of pixels, is an “image-first” renderer, the _Doom_ renderer is an “object-first” renderer. This means that rather than iterating through the pixels on screen and figuring out what color they should be, the _Doom_ renderer iterates through the objects in a scene and projects each onto the screen in turn.
In an object-first renderer, one easy way to solve the VSD problem is to use a z-buffer. Each time you project an object onto the screen, for each pixel you want to draw to, you do a check. If the part of the object you want to draw is closer to the player than what was already drawn to the pixel, then you can overwrite what is there. Otherwise you have to leave the pixel as is. This approach is simple, but a z-buffer requires a lot of memory, and the renderer may still expend a lot of CPU cycles projecting level geometry that is never going to be seen by the player.
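The depth test at the heart of a z-buffer is only a few lines. This is a minimal sketch under assumed names (`make_buffers`, `plot` and the tiny screen size are invented for illustration), not any particular engine's implementation.

```python
# Hypothetical tiny 4x3 "screen".
WIDTH, HEIGHT = 4, 3

def make_buffers():
    """A color buffer plus a z-buffer, with every depth at infinity."""
    color = [[None] * WIDTH for _ in range(HEIGHT)]
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    return color, depth

def plot(color, depth, x, y, z, c):
    """Write pixel (x, y) only if the new fragment at depth z is closer
    to the viewer than whatever was drawn there before."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c
```

Drawing a far fragment, then a near one, then an even farther one leaves the near fragment on screen regardless of draw order — which is exactly why the approach is simple, and also why it can waste work on pixels that get overwritten.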
In the early 1990s, there was an additional drawback to the z-buffer approach: On IBM-compatible PCs, which used a video adapter system called VGA, writing to the output frame buffer was an expensive operation. So time spent drawing pixels that would only get overwritten later tanked the performance of your renderer.
Since writing to the frame buffer was so expensive, the ideal renderer was one that started by drawing the objects closest to the player, then the objects just beyond those objects, and so on, until every pixel on screen had been written to. At that point the renderer would know to stop, saving all the time it might have spent considering far-away objects that the player cannot see. But ordering the objects in a scene this way, from closest to farthest, is tantamount to solving the VSD problem. Once again, the question is: What can be seen by the player?
Initially, Carmack tried to solve this problem by relying on the layout of _Doom_'s levels. His renderer started by drawing the walls of the room currently occupied by the player, then flooded out into neighboring rooms to draw the walls in those rooms that could be seen from the current room. Provided that every room was convex, this solved the VSD issue. Rooms that were not convex could be split into convex “sectors.” You can see how this rendering technique might have looked if run at extra-slow speed [in this video][2], where YouTuber Bisqwit demonstrates a renderer of his own that works according to the same general algorithm. This algorithm was successfully used in Duke Nukem 3D, released three years after _Doom_, when CPUs were more powerful. But, in 1993, running on the hardware then available, the _Doom_ renderer that used this algorithm struggled with complicated levels—particularly when sectors were nested inside of each other, which was the only way to create something like a circular pit of stairs. A circular pit of stairs led to lots of repeated recursive descents into a sector that had already been drawn, strangling the game engine's speed.
Around the time that the id team realized that the _Doom_ game engine might be too slow, id Software was asked to port _Wolfenstein 3D_ to the Super Nintendo. The Super Nintendo was even less powerful than the IBM-compatible PCs of the day, and it turned out that the ray-marching _Wolfenstein_ renderer, simple as it was, didn't run fast enough on the Super Nintendo hardware. So Carmack began looking for a better algorithm. It was actually for the Super Nintendo port of _Wolfenstein_ that Carmack first researched and implemented binary space partitioning. In _Wolfenstein_, this was relatively straightforward because all the walls were axis-aligned; in _Doom_, it would be more complex. But Carmack realized that BSP trees would solve _Doom_'s speed problems too.
### Binary Space Partitioning
Binary space partitioning makes the VSD problem easier to solve by splitting a 3D scene into parts ahead of time. For now, you just need to grasp why splitting a scene is useful: If you draw a line (really a plane in 3D) across your scene, and you know which side of the line the player or camera viewpoint is on, then you also know that nothing on the other side of the line can obstruct something on the viewpoint's side of the line. If you repeat this process many times, you end up with a 3D scene split into many sections, which wouldn't be an improvement on the original scene except now you know more about how different parts of the scene can obstruct each other.
The first people to write about dividing a 3D scene like this were researchers trying to establish for the US Air Force whether computer graphics were sufficiently advanced to use in flight simulators. They released their findings in a 1969 report called “Study for Applying Computer-Generated Images to Visual Simulation.” The report concluded that computer graphics could be used to train pilots, but also warned that the implementation would be complicated by the VSD problem:
> One of the most significant problems that must be faced in the real-time computation of images is the priority, or hidden-line, problem. In our everyday visual perception of our surroundings, it is a problem that nature solves with trivial ease; a point of an opaque object obscures all other points that lie along the same line of sight and are more distant. In the computer, the task is formidable. The computations required to resolve priority in the general case grow exponentially with the complexity of the environment, and soon they surpass the computing load associated with finding the perspective images of the objects.[2][3]
One solution these researchers mention, which according to them was earlier used in a project for NASA, is based on creating what I am going to call an “occlusion matrix.” The researchers point out that a plane dividing a scene in two can be used to resolve “any priority conflict” between objects on opposite sides of the plane. In general you might have to add these planes explicitly to your scene, but with certain kinds of geometry you can just rely on the faces of the objects you already have. They give the example in the figure below, where \\(p_1\\), \\(p_2\\), and \\(p_3\\) are the separating planes. If the camera viewpoint is on the forward or “true” side of one of these planes, then \\(p_i\\) evaluates to 1. The matrix shows the relationships between the three objects based on the three dividing planes and the location of the camera viewpoint—if object \\(a_i\\) obscures object \\(a_j\\), then entry \\(a_{ij}\\) in the matrix will be a 1.
![][4]
The researchers propose that this matrix could be implemented in hardware and re-evaluated every frame. Basically the matrix would act as a big switch or a kind of pre-built z-buffer. When drawing a given object, no video would be output for the parts of the object when a 1 exists in the object's column and the corresponding row object is also being drawn.
The major drawback with this matrix approach is that to represent a scene with \\(n\\) objects you need a matrix of size \\(n^2\\). So the researchers go on to explore whether it would be feasible to represent the occlusion matrix as a “priority list” instead, which would only be of size \\(n\\) and would establish an order in which objects should be drawn. They immediately note that for certain scenes like the one in the figure above no ordering can be made (since there is an occlusion cycle), so they spend a lot of time laying out the mathematical distinction between “proper” and “improper” scenes. Eventually they conclude that, at least for “proper” scenes—and it should be easy enough for a scene designer to avoid “improper” cases—a priority list could be generated. But they leave the list generation as an exercise for the reader. It seems the primary contribution of this 1969 study was to point out that it should be possible to use partitioning planes to order objects in a scene for rendering, at least _in theory_.
It was not until 1980 that a paper, titled “On Visible Surface Generation by A Priori Tree Structures,” demonstrated a concrete algorithm to accomplish this. The 1980 paper, written by Henry Fuchs, Zvi Kedem, and Bruce Naylor, introduced the BSP tree. The authors say that their novel data structure is “an alternative solution to an approach first utilized a decade ago but due to a few difficulties, not widely exploited”—here referring to the approach taken in the 1969 Air Force study.[3][5] A BSP tree, once constructed, can easily be used to provide a priority ordering for objects in the scene.
Fuchs, Kedem, and Naylor give a pretty readable explanation of how a BSP tree works, but let me see if I can provide a less formal but more concise one.
You begin by picking one polygon in your scene and making the plane in which the polygon lies your partitioning plane. That one polygon also ends up as the root node in your tree. The remaining polygons in your scene will be on one side or the other of your root partitioning plane. The polygons on the “forward” side or in the “forward” half-space of your plane end up in the left subtree of your root node, while the polygons on the “back” side or in the “back” half-space of your plane end up in the right subtree. You then repeat this process recursively, picking a polygon from your left and right subtrees to be the new partitioning planes for their respective half-spaces, which generates further half-spaces and further sub-trees. You stop when you run out of polygons.
Say you want to render the geometry in your scene from back-to-front. (This is known as the “painter's algorithm,” since it means that polygons further from the camera will get drawn over by polygons closer to the camera, producing a correct rendering.) To achieve this, all you have to do is an in-order traversal of the BSP tree, where the decision to render the left or right subtree of any node first is determined by whether the camera viewpoint is in either the forward or back half-space relative to the partitioning plane associated with the node. So at each node in the tree, you render all the polygons on the “far” side of the plane first, then the polygon in the partitioning plane, then all the polygons on the “near” side of the plane—“far” and “near” being relative to the camera viewpoint. This solves the VSD problem because, as we learned several paragraphs back, the polygons on the far side of the partitioning plane cannot obstruct anything on the near side.
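The build-and-traverse procedure can be sketched in miniature. This is a toy, not Doom's implementation: “walls” here are points on a 1-D line, so a partitioning “plane” is a single coordinate and nothing ever needs splitting, whereas a real BSP tree partitions polygons in 2-D or 3-D and must split any polygon that crosses a partitioning plane. All names are invented for illustration.

```python
class Node:
    def __init__(self, wall, front=None, back=None):
        self.wall = wall    # the coordinate chosen as the partition "plane"
        self.front = front  # subtree of walls on the lesser side
        self.back = back    # subtree of walls on the greater side

def build(walls):
    """Pick the first wall as the partition and recurse on each side."""
    if not walls:
        return None
    pivot, rest = walls[0], walls[1:]
    return Node(pivot,
                front=build([w for w in rest if w < pivot]),
                back=build([w for w in rest if w > pivot]))

def back_to_front(node, camera, out):
    """Painter's-algorithm ordering: at each node, visit the half-space
    NOT containing the camera first, then the node's own wall, then the
    camera's half-space."""
    if node is None:
        return out
    if camera < node.wall:
        back_to_front(node.back, camera, out)
        out.append(node.wall)
        back_to_front(node.front, camera, out)
    else:
        back_to_front(node.front, camera, out)
        out.append(node.wall)
        back_to_front(node.back, camera, out)
    return out

tree = build([5, 2, 8, 1, 9])
order = back_to_front(tree, 0, [])  # camera at coordinate 0
```

With the camera at 0, every wall lies on the same side, so back-to-front is simply farthest-first: `[9, 8, 5, 2, 1]`. Note that the tree itself never changes; only the traversal decisions depend on where the camera is.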
The following diagram shows the construction and traversal of a BSP tree representing a simple 2D scene. In 2D, the partitioning planes are instead partitioning lines, but the basic idea is the same in a more complicated 3D scene.
![][6] _Step One: The root partitioning line along wall D splits the remaining geometry into two sets._
![][7] _Step Two: The half-spaces on either side of D are split again. Wall C is the only wall in its half-space so no split is needed. Wall B forms the new partitioning line in its half-space. Wall A must be split into two walls since it crosses the partitioning line._
![][8] _A back-to-front ordering of the walls relative to the viewpoint in the top-right corner, useful for implementing the painter's algorithm. This is just an in-order traversal of the tree._
The really neat thing about a BSP tree, which Fuchs, Kedem, and Naylor stress several times, is that it only has to be constructed once. This is somewhat surprising, but the same BSP tree can be used to render a scene no matter where the camera viewpoint is. The BSP tree remains valid as long as the polygons in the scene don't move. This is why the BSP tree is so useful for real-time rendering—all the hard work that goes into constructing the tree can be done beforehand rather than during rendering.
One issue that Fuchs, Kedem, and Naylor say needs further exploration is the question of what makes a “good” BSP tree. The quality of your BSP tree will depend on which polygons you decide to use to establish your partitioning planes. I skipped over this earlier, but if you partition using a plane that intersects other polygons, then in order for the BSP algorithm to work, you have to split the intersected polygons in two, so that one part can go in one half-space and the other part in the other half-space. If this happens a lot, then building a BSP tree will dramatically increase the number of polygons in your scene.
Bruce Naylor, one of the authors of the 1980 paper, would later write about this problem in his 1993 paper, “Constructing Good Partitioning Trees.” According to John Romero, one of Carmack's fellow id Software co-founders, this paper was one of the papers that Carmack read when he was trying to implement BSP trees in _Doom_.[4][9]
### BSP Trees in Doom
Remember that, in his first draft of the _Doom_ renderer, Carmack had been trying to establish a rendering order for level geometry by “flooding” the renderer out from the player's current room into neighboring rooms. BSP trees were a better way to establish this ordering because they avoided the issue where the renderer found itself visiting the same room (or sector) multiple times, wasting CPU cycles.
“Adding BSP trees to _Doom_” meant, in practice, adding a BSP tree generator to the _Doom_ level editor. When a level in _Doom_ was complete, a BSP tree was generated from the level geometry. According to Fabien Sanglard, the generation process could take as long as eight seconds for a single level and 11 minutes for all the levels in the original _Doom_.[5][10] The generation process was lengthy in part because Carmack's BSP generation algorithm tries to search for a “good” BSP tree using various heuristics. An eight-second delay would have been unforgivable at runtime, but it was not long to wait when done offline, especially considering the performance gains the BSP trees brought to the renderer. The generated BSP tree for a single level would have then ended up as part of the level data loaded into the game when it starts.
Carmack put a spin on the BSP tree algorithm outlined in the 1980 paper, because once _Doom_ is started and the BSP tree for the current level is read into memory, the renderer uses the BSP tree to draw objects front-to-back rather than back-to-front. In the 1980 paper, Fuchs, Kedem, and Naylor show how a BSP tree can be used to implement the back-to-front painter's algorithm, but the painter's algorithm involves a lot of over-drawing that would have been expensive on an IBM-compatible PC. So the _Doom_ renderer instead starts with the geometry closer to the player, draws that first, then draws the geometry farther away. This reverse ordering is easy to achieve using a BSP tree, since you can just make the opposite traversal decision at each node in the tree. To ensure that the farther-away geometry is not drawn over the closer geometry, the _Doom_ renderer uses a kind of implicit z-buffer that provides much of the benefit of a z-buffer with a much smaller memory footprint. There is one array that keeps track of occlusion in the horizontal dimension, and another two arrays that keep track of occlusion in the vertical dimension from the top and bottom of the screen. The _Doom_ renderer can get away with not using an actual z-buffer because _Doom_ is not technically a fully 3D game. The cheaper data structures work because certain things never appear in _Doom_: The horizontal occlusion array works because there are no sloping walls, and the vertical occlusion arrays work because no walls have, say, two windows, one above the other.
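The horizontal-occlusion idea can be illustrated with a toy sketch (not Doom's actual code, which tracks partial vertical spans per column as well): when geometry arrives in front-to-back order, a screen column is marked solid the first time a wall fills it, and all farther geometry is skipped for solid columns. The names and the tiny screen width are assumptions.

```python
SCREEN_COLUMNS = 8
solid = [False] * SCREEN_COLUMNS  # True once a column is fully drawn

def draw_wall_span(first_col, last_col, drawn):
    """Draw a wall covering the given screen columns, assuming walls
    arrive in front-to-back order: columns already occluded by nearer
    geometry are skipped, and newly drawn columns are marked solid so
    that everything behind them is skipped in turn."""
    for col in range(first_col, last_col + 1):
        if not solid[col]:
            drawn.append(col)
            solid[col] = True
```

A near wall over columns 2 through 5 followed by a far wall over columns 0 through 7 draws the far wall only into columns 0, 1, 6, and 7 — no pixel is ever written twice, which is the whole point on hardware where frame-buffer writes are expensive.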
The only other tricky issue left is how to incorporate _Doom_'s moving characters into the static level geometry drawn with the aid of the BSP tree. The enemies in _Doom_ cannot be a part of the BSP tree because they move; the BSP tree only works for geometry that never moves. So the _Doom_ renderer draws the static level geometry first, keeping track of the segments of the screen that were drawn to (with yet another memory-efficient data structure). It then draws the enemies in back-to-front order, clipping them against the segments of the screen that occlude them. This process is not as optimal as rendering using the BSP tree, but because there are usually fewer enemies visible than there is level geometry in a level, speed isn't as much of an issue here.
Using BSP trees in _Doom_ was a major win. Obviously it is pretty neat that Carmack was able to figure out that BSP trees were the perfect solution to his problem. But was it a _genius_-level move?
In his excellent book about the _Doom_ game engine, Fabien Sanglard quotes John Romero saying that Bruce Naylor's paper, “Constructing Good Partitioning Trees,” was mostly about using BSP trees to cull backfaces from 3D models.[6][11] According to Romero, Carmack thought the algorithm could still be useful for _Doom_, so he went ahead and implemented it. This description is quite flattering to Carmack—it implies he saw that BSP trees could be useful for real-time video games when other people were still using the technique to render static scenes. There is a similarly flattering story in _Masters of Doom_: Kushner suggests that Carmack read Naylor's paper and asked himself, “what if you could use a BSP to create not just one 3D image but an entire virtual world?”[7][12]
This framing ignores the history of the BSP tree. When those US Air Force researchers first realized that partitioning a scene might help speed up rendering, they were interested in speeding up _real-time_ rendering, because they were, after all, trying to create a flight simulator. The flight simulator example comes up again in the 1980 BSP paper. Fuchs, Kedem, and Naylor talk about how a BSP tree would be useful in a flight simulator that pilots use to practice landing at the same airport over and over again. Since the airport geometry never changes, the BSP tree can be generated just once. Clearly what they have in mind is a real-time simulation. In the introduction to their paper, they even motivate their research by talking about how real-time graphics systems must be able to create an image in at least 1/30th of a second.
So Carmack was not the first person to think of using BSP trees in a real-time graphics simulation. Of course, it's one thing to anticipate that BSP trees might be used this way and another thing to actually do it. But even in the implementation Carmack may have had more guidance than is commonly assumed. The [Wikipedia page about BSP trees][13], at least as of this writing, suggests that Carmack consulted a 1991 paper by Chen and Gordon as well as a 1990 textbook called _Computer Graphics: Principles and Practice_. Though no citation is provided for this claim, it is probably true. The 1991 Chen and Gordon paper outlines a front-to-back rendering approach using BSP trees that is basically the same approach taken by _Doom_, right down to what I've called the “implicit z-buffer” data structure that prevents farther polygons being drawn over nearer polygons. The textbook provides a great overview of BSP trees and some pseudocode both for building a tree and for displaying one. (I've been able to skim through the 1990 edition thanks to my wonderful university library.) _Computer Graphics: Principles and Practice_ is a classic text in computer graphics, so Carmack might well have owned it.
Still, Carmack found himself faced with a novel problem—“How can we make a first-person shooter run on a computer with a CPU that can't even do floating-point operations?”—did his research, and proved that BSP trees are a useful data structure for real-time video games. I still think that is an impressive feat, even if the BSP tree had first been invented a decade prior and was pretty well theorized by the time Carmack read about it. Perhaps the accomplishment that we should really celebrate is the _Doom_ game engine as a whole, which is a seriously nifty piece of work. I've mentioned it once already, but Fabien Sanglard's book about the _Doom_ game engine (_Game Engine Black Book: DOOM_) is an excellent overview of all the different clever components of the game engine and how they fit together. We shouldn't forget that the VSD problem was just one of many problems that Carmack had to solve to make the _Doom_ engine work. That he was able, on top of everything else, to read about and implement a complicated data structure unknown to most programmers speaks volumes about his technical expertise and his drive to perfect his craft.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I've wanted to learn more about GNU Readline for a while, so I thought I'd turn that into a new blog post. Includes a few fun facts from an email exchange with Chet Ramey, who maintains Readline (and Bash):<https://t.co/wnXeuyjgMx>
>
> — TwoBitHistory (@TwoBitHistory) [August 22, 2019][16]
1. Michael Abrash, “Michael Abrash's Graphics Programming Black Book,” James Gregory, accessed November 6, 2019, <http://www.jagregory.com/abrash-black-book/#chapter-64-quakes-visible-surface-determination>. [↩︎][17]
2. R. Schumacher, B. Brand, M. Gilliland, W. Sharp, “Study for Applying Computer-Generated Images to Visual Simulation,” Air Force Human Resources Laboratory, December 1969, accessed on November 6, 2019, <https://apps.dtic.mil/dtic/tr/fulltext/u2/700375.pdf>. [↩︎][18]
3. Henry Fuchs, Zvi Kedem, Bruce Naylor, “On Visible Surface Generation By A Priori Tree Structures,” ACM SIGGRAPH Computer Graphics, July 1980. [↩︎][19]
4. Fabien Sanglard, Game Engine Black Book: DOOM (CreateSpace Independent Publishing Platform, 2018), 200. [↩︎][20]
5. Sanglard, 206. [↩︎][21]
6. Sanglard, 200. [↩︎][22]
7. David Kushner, Masters of Doom (Random House Trade Paperbacks, 2004), 142. [↩︎][23]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2019/11/06/doom-bsp.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[aREversez](https://github.com/aREversez)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: tmp.eMwywbWYsp#fn:1
[2]: https://youtu.be/HQYsFshbkYw?t=822
[3]: tmp.eMwywbWYsp#fn:2
[4]: https://twobithistory.org/images/matrix_figure.png
[5]: tmp.eMwywbWYsp#fn:3
[6]: https://twobithistory.org/images/bsp.svg
[7]: https://twobithistory.org/images/bsp1.svg
[8]: https://twobithistory.org/images/bsp2.svg
[9]: tmp.eMwywbWYsp#fn:4
[10]: tmp.eMwywbWYsp#fn:5
[11]: tmp.eMwywbWYsp#fn:6
[12]: tmp.eMwywbWYsp#fn:7
[13]: https://en.wikipedia.org/wiki/Binary_space_partitioning
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1164631020353859585?ref_src=twsrc%5Etfw
[17]: tmp.eMwywbWYsp#fnref:1
[18]: tmp.eMwywbWYsp#fnref:2
[19]: tmp.eMwywbWYsp#fnref:3
[20]: tmp.eMwywbWYsp#fnref:4
[21]: tmp.eMwywbWYsp#fnref:5
[22]: tmp.eMwywbWYsp#fnref:6
[23]: tmp.eMwywbWYsp#fnref:7

View File

@ -0,0 +1,56 @@
[#]: subject: "Why Design Thinking is a box office hit for open source teams"
[#]: via: "https://opensource.com/article/22/7/design-thinking-open-source-teams"
[#]: author: "Aoife Moloney https://opensource.com/users/aoifemoloney4"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why Design Thinking is a box office hit for open source teams
======
You need the freedom to express your opinion at work. Why should planning processes try to box us into silos and acquiescence?
![4 hot skills for Linux pros in 2017][1]
Image by: Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0
For the past several years, Design Thinking has been providing a way to enhance problem solving within teams, to ensure learning goals are met, and to increase team engagement. In our [previous article][2], we discussed how we used movie posters to "pitch" our projects to stakeholders. In this article, we're going to review the many lessons we've learned from Design Thinking.
### The box office reviews
"If you love what you do, then you'll never work a day in your life" was the angle we were going for when we decided to break our self-imposed ceiling of needing to be formal and business-like to be able to both plan and deliver work. Our planning team was loving the reaction and engagement we were getting from our stakeholders and wider team from using the movie posters instead of documents. It was at this point that we began to realize how we had created this arduous and, quite frankly, boring way of planning our team's work. We had imposed requirements on ourselves of needing to write up documents, transpose that information into a spreadsheet, and then (yes, a third time) into a slide deck. Product owners and planning teams have to do this four times a year, and the cycle was starting to feel heavy and burdensome to us.
It's crucial for a product owner or a manager (or whoever is planning a development team's work) to present information about what their team could potentially deliver to stakeholders so that the best decision on where to invest time and resources can be made. But that information had begun to feel like it was just text on a page or screen. Stakeholders weren't really buying into what our planning team was half-heartedly selling. If we didn't enjoy talking about our projects, how could we expect our stakeholders to enjoy discussing them? Our project planning team quickly realized we not only needed to introduce the movie posters for the wider teams' and stakeholders' best interests, but for our own sense of enjoyment in our roles, too. We needed to Kill Bill.
So with Uma Thurman leading the way in our new concept of using movie posters as cover stories, we found ourselves getting excited about creating new ones for each project request. We were going into each call looking forward to the few moments when the movie posters were unveiled, and those on the calls got a laugh out of what was chosen. If they hadn't seen a particular movie before, they often shared that, which resulted in a side conversation where people shared movie trivia and began getting to know each other better. All of this from a software project brief in the style of a *Kill Bill Vol. 2* movie poster. It was incredible to watch the interactions happen and relationships form. The conversations in those calls were freeform, unreserved, and extremely valuable to our planning team. Movie posters got everyone talking, which made it easier for participants on the call to ask for more detail about the project, too.
Our new and improved planning calls were a "box office smash" and the results spoke for themselves. Our quarterly planning calls went from being 90-plus minutes to under 45 minutes, with both team and stakeholders commenting on how included they felt in the decision-making process. A lot of this came from developing and expanding on the requirements and insight-gathering sessions we'd conducted in the run-up to our quarterly planning calls. This was the last remnant of our formal, stiff approach, but there was no denying how useful the information gained from those sessions could be to our projects. So we kept it simple again, and started to introduce the movie posters during what we coined the "insight sessions" instead. Our insight sessions were making a big difference by providing space for people to meet and converse about technical details, potential blockers, risks, and opportunities to leverage one piece of technology against another. This was all happening naturally. The ice had been broken with a reveal of a *Ghostbusters* poster or *A Bug's Life*. People turned on cameras and mics to get involved in the conversation about whether they had seen the movies, the remakes, or the original. It became easy for our planning team to guide the already flowing conversations back to the work at hand.
### That's a wrap
We were now delivering valuable, enjoyable, and important calls that were generating a lot of success for our team. Our project delivery success rate hit, and stayed at, 100%. We have never, for three years now, had to halt or drop a project once it's been actioned from quarterly planning. We attribute this to the high engagement levels on our calls, and we believe people are engaged because our calls are novel and fun while still being informative. We're capturing crucial information about each project before it even begins development, so our teams are able to hit the ground running.
Yes, planning needs to be considered and measured and thorough, but it's to no one's benefit when only one person is doing the thinking, talking, and measuring. It's important to create a culture of open communication and trust for product owners and project managers, so that they can plan work for their teams and bring real value to their stakeholders and end users. It's even more critical for development teams to have the space and tools to plan openly and collaboratively, both with each other and with adjacent teams.
You need the freedom to express your opinion at work. Why should planning processes try to box us into silos and acquiescence? That's a quick way to lose friends and gain robots.
Our team is of the firm belief that people deliver projects. We made some changes when we realized that we needed people, with all of their personalities and perspectives, to see what each project meant to them. Then we could determine which ones to work on. We needed the heart and spirit of Agile methodologies in a process that was open, collaborative, and fun for everyone.
This way of planning and working together is based on using visuals, and on trying to enjoy the work we do and the time we spend together at work. It's a means to promote discussion and move those in planning roles from drivers to facilitators so that a unanimous decision can be reached through shared understanding of the work, and the potential trade-offs that come with it. We chose movie posters, but the concept can be expanded to just about anything! This is our whole point. You don't have to limit your creativity to plan work. If you like music, make an album! Comics? Design one! Or maybe TV shows are more your speed. Take your pick, my friend—it's your cover story. Tell it in the way you enjoy the most, and enjoy not only the informative projects you will generate for your team, but also the sense of camaraderie everyone will feel from having the freedom and safety to grab a coffee or tea, join a call, and talk amongst friends about movies and technology.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/design-thinking-open-source-teams
作者:[Aoife Moloney][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/aoifemoloney4
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lightbulb-idea-think-yearbook-lead.png
[2]: https://opensource.com/article/22/7/design-thinking-engagement-movie-poster

View File

@ -0,0 +1,135 @@
[#]: subject: "Wayland Core Protocol is Tailored Only for GNOME and Thats Not a Good Thing [Opinion]"
[#]: via: "https://news.itsfoss.com/wayland-core-protocol-issue/"
[#]: author: "Community https://news.itsfoss.com/author/team/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Wayland Core Protocol is Tailored Only for GNOME and That's Not a Good Thing
======
And the idea of protocol extensions doesn't work
![Wayland GNOME][1]
Wayland is a display server system based on the idea of [protocols][2]. That means that there is no Wayland display server that clients need to talk to. Instead, Wayland defines a protocol for creating display servers. Any client application that is programmed to use this protocol can work on any display server (or compositor) that fully supports this protocol. It's like the [world wide web protocols][3], where any website can work on any browser as long as the browser fully supports the web protocols, so you don't create websites for specific browsers.
That also means that the functions defined in the protocol decide what applications (aka clients) can and cannot do. Returning to the example of the website: if the protocol doesn't define the necessary functions, it limits web developers. Take CSS as an example; if it weren't available in the web protocols, all websites would have looked the same and been boring. So the protocol must include all the necessary basics, in a way that doesn't limit developers to a few cases and uses.
When Wayland developers started defining the protocol, they had to decide what functionality to include. Their decision was to make the protocol as minimal as possible, with compositors expected to create new protocols for their specific use cases if they want to offer functionality not included in the main protocol. The main protocol is called the [Wayland Core protocol][4], and the other protocols are called [protocol extensions][5]. All compositors are expected to support the core protocol, but they may not support the protocol extensions. That means that applications that depend on functionality defined in one of the protocol extensions will not work on all compositors.
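The core-versus-extension split above can be sketched as a toy compatibility check. This is illustrative Python, not real Wayland client code: the extension names are real protocol names, but the per-compositor feature sets are simplified assumptions made for the example.

```python
# Toy model: a compositor advertises a set of protocol globals, and a
# client can only run if every protocol it depends on is advertised.
# The compositor feature sets below are illustrative, not exhaustive.

CORE = {"wl_compositor", "wl_shm", "xdg_wm_base"}

COMPOSITORS = {
    "gnome": CORE,  # core protocol only
    "sway":  CORE | {"zwlr_layer_shell_v1", "zxdg_decoration_manager_v1"},
    "kwin":  CORE | {"zxdg_decoration_manager_v1"},
}

def missing_protocols(client_needs, compositor):
    """Return the set of protocols the compositor lacks (empty set = the client can run)."""
    return set(client_needs) - COMPOSITORS[compositor]

# A plain app needs only core protocols; a dock needs the layer-shell extension.
print(missing_protocols({"wl_compositor", "xdg_wm_base"}, "gnome"))  # set()
print(missing_protocols({"zwlr_layer_shell_v1"}, "gnome"))           # {'zwlr_layer_shell_v1'}
print(missing_protocols({"zwlr_layer_shell_v1"}, "sway"))            # set()
```

The last two lines mirror the article's point: the same dock-style client is viable on one compositor and impossible on another, purely because of which extensions each one chose to implement.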
All of the above is what Wayland developers intended for the Wayland world to be. Now let's delve into more detail. How minimal is the Wayland core protocol? In other words, what determines what shall be in the core protocol and what shall not? In this article, I'm going to give you an answer to this question based on my opinion, which is in turn based on a group of simple facts.
**My opinion is that Wayland's core protocol is tailored only to GNOME's needs.**
I mean that the functionality which exists in Wayland's core protocol is the bare minimum required for the GNOME desktop and apps to work on Wayland.
That's bad (still in my opinion) because it's simply not enough for desktop environments and apps other than the GNOME desktop and apps, as I will show you in this article.
### 1. The core protocol requires that desktop visual components be drawn by the compositor
First, let's explain something. In most desktop environments, desktop components (like the dock, panel, wallpaper, desktop icons, etc.) are regular clients. For those components to work, they need certain functions to be implemented by the compositor; those functions include:
* Ability to move the window
* Ability to tell the compositor to not draw decorations around said windows.
* Ability to keep it above all windows (in case of the panel) or keep it below all windows (in case of the background).
* In addition to some other functionalities.
On X11, those were defined in the ICCCM specification, which allows X11 clients to tell the compositor to do any of the above. On Wayland, there is nothing in the core protocol that allows that. This means that desktop environment creators have to draw all these components in the compositor.
GNOME is the only desktop that does that, while many other desktops (KDE, Xfce, LXQt, etc.) draw their components outside the compositor (an exception is Cinnamon, because it started as a fork of GNOME 3). The situation is even worse: apps like [Plank dock][6], [Latte dock][7], and other independent desktop components can't exist on Wayland. There are protocol extensions that fix that, and I will discuss them later.
In summary, the situation is:
* Desktop environments have to draw everything in the compositor.
* It's impossible to create cross-desktop components like the Plank and Latte docks
### 2. CSD is implementable, although clients can't move their windows
We established above that the core protocol doesn't define a way for clients to move their windows. So how is CSD implemented? Well, there is a [function in the protocol][8] that tells the compositor to start dragging the window. So instead of having a function for moving the window, which would have been useful in many cases, they resorted to having a function only helpful in implementing CSD.
### 3. CSD is a must in Wayland core protocol
On X11, apps expect to be decorated by the compositor (which is SSD), and if they wish to decorate themselves (with CSD) they tell the compositor not to draw its decorations. On Wayland, compositors are free to draw decorations if they wish.
The problem is that there is no way (inside the core protocol) for apps to know whether they are being decorated or not. In other words, clients can't tell the compositor whether they prefer CSD or SSD, which is problematic for both CSD and SSD (in theory). But in practice, GNOME decided not to decorate clients at all. So, due to GNOME's decision, apps have to assume that they are not decorated and must go for CSD. Again, there is a protocol extension that fixes that; more on that later.
### To summarize
The above three facts regarding the core protocol in summary are:
1. Desktop components need to be drawn by the compositor
2. CSD is a must.
3. CSD is implementable, although clients can't move their windows.
According to these three facts, I've concluded that Wayland's core protocol is tailored only to GNOME's needs.
What if you want some functionality not available in the core protocol? The Wayland (or GNOME) developers' answer is Wayland's protocol extensions: compositors can offer extra functionality by creating new protocols. The problem with this approach is that some apps may work on some compositors and not on the rest (that is, if they need some of the protocol extensions). In theory that could have resulted in severe fragmentation, but the reality is less dire thanks to the efforts of the [wlroots project][9] and KDE.
### Wlroots has mostly saved the situation
[Wlroots][10] is a library created by the [Sway compositor][11] developers. It enables developers to create Wayland/X11 compositors easily; their main focus is Wayland. There are already many compositors based on wlroots. What is interesting, though, are the protocol extensions that wlroots implements.
Wlroots has many protocol extensions, including:
* [LayerShell][12] protocol
* [xdg-decoration][13] protocol
The LayerShell protocol enables desktop components to be drawn outside the compositor, which also makes it possible to create independent, cross-desktop components. Many projects utilize this protocol; you can explore them in the following repositories:
* [nwg-shell][14]
* [wf-shell][15]
* [awesome-wayland][16]
Also, have a look at the [GtkLayerShell library][17], a library for writing GTK apps with the LayerShell protocol. Because LayerShell is not part of the core protocol, apps using it work on wlroots-based compositors and KDE; it is unsupported only on GNOME.
The second protocol is the xdg-decoration protocol. Made by wlroots and KDE, it enables apps to choose between CSD and SSD.
These protocols work on wlroots compositors and KDE. The only obstacle preventing the unification of the Linux desktop is GNOME, which has decided not to implement any of these protocol extensions. That puts all apps that use SSD in a situation where they have to use SSD in supporting environments and CSD on GNOME. The people actually feeling the pain are toolkit developers. For more context, have a look at the "CSD initiative" started by Tobias Bernard from GNOME, the blog post from Martin's blog (KWin's developer), and the related issue. The situation is mostly resolved by now: Qt and GTK always draw CSD on GNOME and utilize xdg-decoration in other environments. However, in my opinion, that is not good, because it makes the platform less standardized/unified for no good reason. After all, in the future, toolkit developers may decide to just go with CSD everywhere to avoid the pain.
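The fallback behavior described above can be sketched as a small decision function. This is a hedged toy model, not the actual toolkit logic or the real Wayland API; it only captures the rule that a client on a compositor without the xdg-decoration extension has no way to negotiate and must assume client-side decorations.

```python
# Toy sketch of decoration selection: with the xdg-decoration extension a
# client can express a preference (and the compositor has the final say;
# here we pretend it simply honors the preference). Without the extension
# there is no negotiation at all, so the client must fall back to CSD.

def pick_decorations(compositor_globals, preferred="ssd"):
    """Return 'ssd' or 'csd' for a client on a given compositor."""
    if "zxdg_decoration_manager_v1" not in compositor_globals:
        # No way to ask: assume the compositor draws nothing (GNOME's case).
        return "csd"
    return preferred

print(pick_decorations({"wl_compositor"}))                                # csd
print(pick_decorations({"wl_compositor", "zxdg_decoration_manager_v1"}))  # ssd
```

This is why a toolkit that prefers SSD still ships a full CSD code path: the decision can only be made per compositor, at runtime.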
### The root of all these problems
The root of all these problems is the GNOME or Wayland developers' decision to keep the core protocol as minimal as possible and to require the creation of protocol extensions.
Imagine if the web worked in a similar way: websites would not be able to target the standardized (minimal) protocols, because they are not enough, and would rely on protocols created by certain browsers. So websites would target specific browsers, not the core protocol. That would have been a nightmare, right? Well, that's the current Wayland situation.
### What is the solution?
The solution, in my opinion, is to put all these protocol extensions in the core protocol. Or that might not be necessary if GNOME implements the above protocols (which is not likely to happen anytime soon). In simple words, GNOME is the root cause of the problem, and it can solve the problem if it decides to do so.
Author Info: This article has been contributed by Its FOSS reader Hamza Algohary. Hamza is a computer engineering student and a Linux and open source enthusiast. He also develops apps for Linux desktop. You can find his work on [his GitHub profile][18].
*The views and opinions expressed are those of the authors and do not necessarily reflect the official policy or position of Its FOSS.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/wayland-core-protocol-issue/
作者:[Community][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/team/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/07/wayland-core-protocol-gnome.png
[2]: https://wayland.freedesktop.org/docs/html/ch04.html
[3]: https://www.w3.org/standards/
[4]: https://wayland.app/protocols/wayland
[5]: https://wayland.app/protocols/
[6]: https://github.com/ricotz/plank
[7]: https://github.com/KDE/latte-dock
[8]: https://wayland-book.com/xdg-shell-in-depth/interactive.html
[9]: https://gitlab.freedesktop.org/wlroots/wlroots
[10]: https://gitlab.freedesktop.org/wlroots/wlroots
[11]: https://swaywm.org/
[12]: https://wayland.app/protocols/wlr-layer-shell-unstable-v1
[13]: https://wayland.app/protocols/xdg-decoration-unstable-v1
[14]: https://github.com/nwg-piotr/nwg-shell
[15]: https://github.com/WayfireWM/wf-shell
[16]: https://github.com/natpen/awesome-wayland
[17]: https://github.com/wmww/gtk-layer-shell
[18]: https://github.com/hamza-Algohary

View File

@ -0,0 +1,100 @@
[#]: subject: "Open Source Software: Is There an Easy Path to Success?"
[#]: via: "https://www.opensourceforu.com/2022/07/open-source-software-is-there-an-easy-path-to-success/"
[#]: author: "Jules Graybill https://www.opensourceforu.com/author/jules-graybill/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Source Software: Is There an Easy Path to Success?
======
There's so much that goes on behind the scenes while developing open source software. So how does one make an open source project successful? Is there a shortcut? This article indicates there isn't one.
![team work working together][1]
Open source has become all the rage now. Many entrepreneurs are drawn to it by the allure of quick success. But the truth is, there is no easy path to such success. There is no one thing that you can do to get all of open source right.
In fact, a lot of the challenges that companies face early on are not technology challenges, but are people and cultural challenges.
There are many layers of open source that need to be worked on to build a project that can become a hit in the market. And maintaining that success is an ongoing process. But at the crux of it all lies finding the right answer to a very basic question: what exactly is open source?
### Open source is the code
For many new users who may not be fully aware of the layers that make up open source, the answer is fairly simple: open source is software! And that is not wrong, of course, since that is how most of us use it. But there's just so much more to open source than merely being touted as software.
The essence of any open source project still remains the code. It is what makes the project special and useful for the user. When you're working in open source, the code is just as much a part of the product as the software.
Developing a project from the ground up, or even creating a fork of an existing project, requires thousands and thousands of lines of code to be written, even while handling such a large and complex codebase. Especially in the case of creating a fork, care must be taken to remove any prior licences, marketing material, or anything that might not be useful for the user anymore. After all, it is the features of a project that attract its user base and what retains it. When end users are deciding whether to use open source software, they will read the source code, and what they see there should be something that builds their confidence.
### Open source is the community
How you engage with the community is also a part of the task of building a product. Building a community and maintaining a healthy relationship with it is one of the most crucial aspects of open source, but it is also often the hardest for most leaders, as there is very little one can do to control it. One can try to lay the foundation and can be supportive but, ultimately, it's the people who decide whether they want to join a community.
It is important to maintain a level of transparency with the community and keep it in the loop. The community can get involved at any step that it wants to. It's really important that you show most of your work to the community while you are doing it, apart from things that need to be done confidentially, like setting up security, signing certificates, branding, and so on. This helps in winning its trust because, in the end, it is the community that you are liable to, and it can make or break your project. This may make the project work a lot more deliberate, slower, and more exposed, but it works well in the end.
Making your work-in-progress so open can seem daunting, especially when you are worried about the repercussions of a delay in updates or having a bug. Yet, making the community members privy to your moves will not only help you build a trustful relationship with them, but also make them feel appreciated.
However, making your workflow public will also invite scrutiny from the community members, who will often have their opinions and offer you their feedback. Taking note of this feedback is important, because that is how you can make your open source project truly for them. They are the end users and their feedback will reflect how they see your project panning out for them in the long run, and ultimately, how successful and mainstream your software becomes.
As an example, when we are thinking about a new feature, we publish a request for comments (RFC). We get a lot of feedback, and we have to think hard about how we can incorporate it.
Since open source is largely collaborative work, there will be initiatives by the community to offer support in making the project the best version possible. Not all of it will work out. But as long as you are listening, the community will feel involved.
Engaging with the community has its pitfalls too. There may be differences of opinion within the community, and also between the maintainer and the community, especially when it comes to the matter of governance. Governance is something which is really important for an open source project to have. That is why it is important to have clear documented governance practices, which also include the community.
Community governance is a tough, but essential, nut to crack. Delegation in itself requires a lot of trust. For a project with millions of lines of code, it can be cumbersome to find someone in the community who can meaningfully lead it. But open source projects often consist of smaller sub-projects, which are better left handled by someone from the community. This helps the community to be more closely involved too.
Building a community always has its highs and lows. Let me list some of the tricks that helped maintain a healthy balance between the community's vision and my team's vision.
**State your principles:** Especially in the early stage of an open source project, when the source code is still coming together and things are not exactly going perfectly, it is hard for somebody coming from outside to really understand why you are making the decisions that you are making. Communicating the principles on which you take actions helps you to be upfront about your thought process so that the community does not interpret things incorrectly. This practice is really helpful. It is also important to follow through and show that when you make a decision, it is guided by one of these principles.
**Decide how you are going to collaborate:** This may be through channels like Discord, Slack, or simply emails. But if you try to use all of them, you will immediately diffuse the community. People will be communicating with each other all over the place. Choose one or two collaboration tools, and really invest in them for synchronised communication.
**Treasure the feedback:** Listen to feedback from the community and act on it. Show that you care about what the community says, even if it requires you to make tough decisions.
**Maintain a code of conduct:** If you interact with a community, you need to define what is going to be acceptable conduct. Having that in place helps warn people in case they go out of line. You can avoid a lot of trouble if you can just define this early on.
**Think about how you will distribute your project:** There may be instances when you may not be willing to make your project available to the public because you do not have a certain component in place, or you have features you may not want to make accessible to everyone. Creating distribution terms that suit your preference without compromising on what the user wants is key, so that people who want certain features can access them while those who can do without them can start using the project without having to compromise.
**Avoid polls as much as you can:** Often, certain members vote for an option that is not what the majority goes with. This can create a sense of failure in these members and make them feel excluded from the project. Instead, try asking them what problems they would like to be solved, and then try to invent a solution that does not involve trade-offs.
### Open source is licensing
Open source is about giving your users autonomy over how they want to use your software, and licensing provides just that. What's great about an open source licence is that regardless of what you as a maintainer do, all your end users and stakeholders can always maintain a certain set of forks, and those are important forks.
Licensing gives people the option to take the project into a different direction if they deem it fit. They have the right to create a fork, which results in a lot of amazing software being developed. Maintainers have more responsibility to listen to their community members and to run the project in a way that works for them.
It's advisable to make use of the many licences available instead of setting your own terms separately, simply because stakeholders and users are usually familiar with commonly used licences, so you do not have to waste time explaining them. This also helps you to focus your energy on the other parts of the project.
### Finally, open source is a movement
Open source involves many, many dimensions and people. Most importantly, it is about understanding what these people want and creating an environment that encourages collaboration and transparency. It is about building communities that help to build the open source project the way they want it to be. The more opportunity maintainers create to let them do that, the better the product is and the more successful it gets.
Open source is all of these things and the more expansive view you take, the better you can leverage it. Think about how you can excel in every dimension of open source because, at the end of the day, there is no easy path to open source success.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/07/open-source-software-is-there-an-easy-path-to-success/
作者:[Jules Graybill][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/jules-graybill/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/07/team-work-working-together-1.jpg

[#]: subject: "My honest review of the HP Dev One"
[#]: via: "https://opensource.com/article/22/7/hp-dev-one-review"
[#]: author: "Anderson Silva https://opensource.com/users/ansilva"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
My honest review of the HP Dev One
======
Here are my first impressions of the hardware, running the pre-installed Pop!_OS, and running Fedora on HP's new Linux-based laptop.
![Working from home at a laptop][1]
Image by:
Opensource.com
A few weeks ago, HP joined the bandwagon of major laptop manufacturers releasing a Linux-based laptop, the [HP Dev One][2]. The brand joins others such as Lenovo and Dell in offering a laptop with a pre-installed Linux distribution in the US market. HP joined forces with the smaller Linux-focused laptop brand System76 to pre-install Pop!_OS as the distribution of choice on the device. Pop!_OS is an Ubuntu-based distribution that System76 started (and currently maintains as primary maintainer) to maximize the features of the laptops sold on its own website.
This article is a quick look at the HP Dev One, including first impressions of the hardware itself and running the pre-installed Pop!_OS and then Fedora on it after a few days. It is not about comparing them, just a few notes on how well they did on the HP Dev One.
### HP Dev One hardware
I haven't owned an HP laptop in over a decade. I don't even remember why I distanced myself from the brand, but somehow it just happened. So, when I read about the HP Dev One, several things sparked my interest. Here's a list of them. Some may be silly or nit-picking, but they still carried some weight in my decision:
* The most obvious reason was it came with Linux and not Windows.
* I have never used Pop!_OS, so the fact that HP chose Pop!_OS made me curious to use it.
* I have never owned an AMD-based laptop. The HP Dev One comes with an AMD RYZEN™ 7 PRO 5850U Processor with eight cores and 16 threads.
* The specs versus price seemed good. The price is $1099 USD, which is very reasonable compared to other brands with similar specs.
* No Windows key on the keyboard. Instead, it says “super,” which I think is cool.
* Upgradeable RAM. The laptop comes with 16 GB of RAM, but unlike so many laptops nowadays, it is not soldered on the board, so you can upgrade it (more on upgrading below).
* The laptop was in stock with a commitment for fast shipping.
* Reviews were favorable.
For all of the reasons above, I ordered it, and two days later, I had the HP Dev One on my doorstep.
![HP Dev One super key][3]
Image by:
(Anderson Silva, CC BY-SA 4.0)
By the time the laptop arrived, the extra 64 GB of RAM I had ordered had also arrived, so the first thing I wanted to do was upgrade the RAM. It turned out that the bottom plate of the HP Dev One has very small, special (yet not proprietary) screws, so I had to run to the hardware store to get the proper screwdriver.
![HP Dev One RAM upgrade][4]
Image by:
(Anderson Silva, CC BY-SA 4.0)
I agree with [other online reviews][5] regarding the quality of the laptop. It does feel sturdy. The trackpad feel is good enough, and I had no issue with it. I found the keyboard not to be as good as some other reviewers claim. To me, the keys are a little heavy, and they feel almost a bit like silicone or rubber. I didn't find it terribly comfortable. In fact, I am typing this article in the HP Dev One, and I almost feel like I need to take a break here and there to let my fingertips rest.
The 1080p screen is bright, but also very reflective. If you are a Thinkpad trackpoint fan, you will definitely enjoy this feature on the HP Dev One. The backlit keyboard is nice, and the built-in camera cover is something more laptops should adopt.
![HP Dev One top view][6]
Image by:
(Anderson Silva, CC BY-SA 4.0)
One question or possible issue I have with the HP Dev One is that the website mentions one-year customer service and warranty for the machine, but I haven't been able to find a way to extend that warranty, or to upgrade to something more premium like onsite or next-day part replacement in case I were ever to need it.
### Pop!_OS on HP Dev One
As previously mentioned, I've never used Pop!_OS. I've used Ubuntu and many other distributions, but to me, Pop!_OS was a relatively familiar yet new experience. Here are a few notes:
#### The good
* Coming from a Fedora and mostly vanilla GNOME background, the four-finger gesture in Pop!_OS took a little getting used to, but once I got into it, it made navigating their implementation of GNOME borderline fun.
* Pop!_OS's software shop is called Pop!_Shop. It contains a great variety of software without any need to enable special repositories. It is easy to use, and installation is fast.
* The integration between Pop!_OS and the hardware really is well done. Congratulations to both System76 and HP engineers for putting this machine together.
* Pop!_OS has a nice built-in feature for backing up and restoring your installation without destroying your home directory.
* I installed Steam and a few games, and it worked pretty well.
* I ran a couple of test containers with podman, and it worked very nicely as well.
* I installed a virtual machine, and it ran very smoothly, too.
#### The not so good
* The default wallpaper that comes with the HP Dev One looks a bit fuzzy to me. For such a nice machine, it feels like the wallpaper is running at the wrong resolution. To be fair, there are other wallpapers to choose from in the Settings.
* The out-of-the-box fonts for Pop!_OS could be better. I find them hard to read, and in my opinion, they make the UI feel too cramped.
* Adding the USB-C powered 4K monitor worked OK, but my eyes noticed slight flickering in some parts of the screen. Could this be an X11 issue, given that Pop!_OS defaults to X11?
### Fedora on HP Dev One
I played around with Pop!_OS for about two days before deciding to boot into Fedora using a live USB media. I did that first to see what type of hardware detection Fedora could do out of the box. To my surprise, everything worked right away. That's when I decided to wipe the entire 1 TB SSD and install Fedora on the HP Dev One. As promised, this is not a Fedora vs. Pop!_OS comparison article; it is merely a few notes on both distributions running on this Linux-focused hardware.
In case you haven't read my bio in this article, and for the sake of transparency, I am a Fedora contributor, so it is fair for me to say that I am biased towards the Fedora distribution, but don't let that make you think I recommend Fedora over Pop!_OS on the HP Dev One. They are both great distributions, and they both run very nicely on it. Take your pick!
I can tell you that Fedora runs smoothly on the HP Dev One, and although there may be some performance tuning needed to match some of the benchmark numbers against Pop!_OS, I have been very pleased with its performance. Using the three-finger gestures to move between virtual desktops is a lot more natural to me than the four-finger ones in Pop!_OS, and I've been able to run Steam and Proton-based games on Fedora just like on Pop!_OS.
The only comparison I will make is that when using the secondary USB-C 4K monitor with Fedora, I did not experience any flickering. Was it because of Wayland?
### Final thoughts
I've had the HP Dev One for a little over four days now, and I've run Pop!_OS and Fedora on it so far. I even restored Pop!_OS after a full Fedora installation, which was a very easy process. Somehow, Pop!_OS detected it was an HP Dev One and did all the needed installation, including the HP-based wallpapers, without me having to do any extra steps.
As I finished this article, yet again, I went back to Fedora (force of habit), but I wouldn't have any issue staying on Pop!_OS on the HP Dev One. Who knows, maybe I might even try different distributions in the future.
At the end of the day, the HP Dev One is a solid Linux laptop without a Windows key and no AMD, Intel, or Windows stickers on it. It is fast, feels well built, and is reasonably priced, especially given how quickly it ships to you (US only). I would love to see HP provide more documentation on their website about extending the warranty, and I hope they will be able to make this laptop available in other parts of the world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/hp-dev-one-review
作者:[Anderson Silva][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://hpdevone.com/
[3]: https://opensource.com/sites/default/files/2022-07/superkey.jpg
[4]: https://opensource.com/sites/default/files/2022-07/ram.jpg
[5]: https://www.techrepublic.com/article/system76-hp-perfect-dev-one/
[6]: https://opensource.com/sites/default/files/2022-07/topview.jpg

[#]: subject: "What Made Fedora Choose To Use CC0 Licensed Code As The Boot"
[#]: via: "https://www.opensourceforu.com/2022/08/what-made-fedora-choose-to-use-cc0-licensed-code-as-the-boot/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What Made Fedora Choose To Use CC0 Licensed Code As The Boot
======
![fedora-1024x614][1]
Open source is a challenging concept. Many people interpret this to mean that they can use a specific piece of software however they choose and that it is free to download. The actual rights you as a user are granted, however, depend largely on which licence the developers chose to share their code under. Open source software can be expensive, can restrict how you can use it, and in rare circumstances, can even land you in legal trouble.
The Fedora Project recently decided to reject all code licenced under the Creative Commons “Public Domain Dedication” CC0 licence in an effort to avoid precisely this situation. CC0 will soon be removed from the list of permissible code licences for all new submissions; however, it will still be permitted for material like artwork, and there might even be case-by-case exceptions for current packages.
It wouldn't ordinarily make the news if Fedora objected to a software licence. In fact, the project rejects a number of licences on a fairly extensive list. The unexpected aspect of this situation is that CC0 was originally regarded as a valid licence, and is only now being reclassified as a result of a shift in perspective within the greater free and open source software (FOSS) community.
What exactly is wrong with CC0 that Fedora decided to stop supporting it, and does this indicate you shouldn't use the licence for your own projects?
The part of this story that may most surprise those familiar with Creative Commons and its family of licences is that the Fedora Project ever approved CC0 for software in the first place. After all, the goal from the beginning was to develop a range of licences expressly for artistic works. The organization's mission and licence requirements are stated in its very name.
Creative Commons, the successor to the earlier Open Content Project, was established in 2001 to “overcome legal hurdles to the sharing of information and creativity” by offering a free framework under which people and organisations might share things like music, artwork, or educational material. Software, however, was never a factor. Why might that be? At that time, important software licences like the MIT License and the GNU General Public License had already been around for more than ten years.
It seems obvious that you should probably believe a company if they go out of their way to warn you that something they have made is unsuitable for a particular use. The Creative Commons FAQ lists a number of compelling arguments against using their licences for software, but one in particular jumps out for users like the Fedora Project: patent rights.
This may seem contradictory given that the CC0 licence is meant for works in the public domain and that by using it, the creator expressly “waives all of his or her rights to the work globally under copyright law.” However, the issue is that copyright legislation does not apply to patents. In fact, a review of the licence's complete wording reveals that it directly tackles this in a worrying section that reads, “No trademark or patent rights held by Affirmer are waived, abandoned, relinquished, leased or otherwise modified by this document.”
In other words, even while the author of something that has been licenced under CC0 may be willing to give up the rights to it, they are still free to patent it. What's even worse is that they still retain the ability to use that patent however they see fit. Theoretically, this means that the creator of a piece of source code that was first made available under CC0 may later assert that anyone who utilised the code violated their patent, and could demand royalties.
It's very obvious why something like this would worry the Fedora Project. Consider a scenario where CC0-licensed code is incorporated into a system's core package and then made available to millions of users. Out of nowhere, the original creator appears, alleges patent violation, and wants payment. Could Red Hat's or Fedora's attorneys refute such a claim? Maybe. Is it worth it to use CC0 code in order to find out for sure? Zero chance.
It's important to note that this is not in any way a new issue. In fact, back in 2012, the patent clause prevented the Open Source Initiative's (OSI) License Review Committee from conclusively determining whether CC0 genuinely complied with its definition of an open source licence. The Committee was unable to come to an agreement because its members believed that including such terms in a software licence would set a risky precedent. Fedora's decision to ever accept CC0 in the first place is even more puzzling given this turbulent history.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/08/what-made-fedora-choose-to-use-cc0-licensed-code-as-the-boot/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/fedora-1024x614-1-e1659346500461.jpg

[#]: subject: "How open organizations can harness energy disruptions"
[#]: via: "https://opensource.com/open-organization/22/8/energy-disruption"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How open organizations can harness energy disruptions
======
Learn strategies for communities on the use of electric vehicles over internal combustion engined cars, the use of self-driving vehicles over human-driven vehicles, and the use of solar power generation over nuclear power generation.
![Solar panels][1]
Image by: Photo by Jason Hibbets
Many people talk a lot about the values of [Open Organization Principles][2], but in many ways, they require people to change how they do things, which can be difficult. That is true for businesses and industries as well. Disruption in many sectors is coming. How do we use Open Principles to address them? This article looks at what's happening in industries related to energy and transportation when it comes to drastic costing changes that will lead to industrial disruption.
Business disruption is happening through new technology or methods, which will slash costs. This is forcing industrial change. Consider the oil, coal, natural gas, nuclear, petroleum, biofuels, and charcoal (the primary energy in many developing countries) industries. All these industries are grouped in the fossil fuel-burning energy-generating industry. Imagine them all becoming obsolete and totally replaced by the solar and wind industries in the next decade or so because of costs. That is industrial disruption.
As a resource, I have read [Clean Disruption of Energy and Transportation][3] by Tony Seba.
The book was written before 2014, but many of his concepts are valid today. In his book, Seba presents an energy-usage case for electric vehicles (EV) replacing internal combustion engine vehicles (ICE), and the entire automotive industry shrinking with autonomous (self-driving) car disruption strictly on cost factors. If this is true, adapting to those changes will be required, and Open Organization Principles can be helpful.
He also builds a case for current electrical power generation being completely replaced in the years ahead by solar and wind with battery storage based strictly on cost competitive advantages. I discussed this in my article on [Jeremy Rifkin's][4] book [The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism.][5]
Seba's book also reminds me of [Michael E Porter's five competitive forces, particularly the force of substitutes][6].
For example, automobiles replaced the horse and carriage in about a decade in the early 1900s, and digital cameras replaced all film cameras on the market. This replacement was due to:
1. A cost advantage, with prices coming down at exponential rates (from dollars to pennies, much as computers shrank in size while growing in capacity) through increased innovation, scale, and competition.
2. Superior product features.
3. Added new product features.
4. Central control replaced by distributed/shared control.
For these reasons, in the past, products became obsolete by something innovative. They were disrupted.
Imagine wanting to make an open organization community that can produce its own electricity instead of being forced to buy electricity at much higher prices. Could such a community be formed? Throughout his book, Seba makes a case that this is possible using the four above factors.
This article is the first of a two-part series on this topic. This article gives strategies for open organization communities on the use of electric vehicles over internal combustion engined (ICE) cars, the use of self-driving vehicles over human-driven vehicles, and the use of solar power generation over nuclear power generation. I'll give more in the second part of this series.
### A question of power
Seba assumes that electrical power is a commodity with no distinct quality difference and that there is no such thing as high-quality and low-quality electricity to the user. The only difference is the price of electricity at a given time.
The price of electricity fluctuates greatly. He also assumes that the automotive industry only has one quality, the movement of goods and people. Therefore, Seba's whole book looks at the cost (particularly end-user price) of both electricity (in kilowatt-hours) and movement (miles/kilometers over the life of a vehicle). He does a wonderful job of breaking down indirect and direct costs in great detail, including many costs I had never considered. This seems to be the same conclusion Jeremy Rifkin came to, calling it marginal costs in his book, *The Zero Marginal Cost Society* (cited above).
By providing general cost breakdowns, Seba enables readers to seriously consider how they and their community might use solar and wind electric power generation. One could build an open organization community based on the information this book makes available.
Coincidentally, I've just read Bill Gates' book, [How to Avoid a Climate Disaster][7]. Gates' book has one primary goal: To remove 51 billion tons of greenhouse gases from the atmosphere by 2050. Gates would love it if Seba's prediction comes true.
Seba believes that by 2030, all electricity generation and all vehicles on the road will be carbon-free. He believes that over a 10-15 year period, disruptive forces will penetrate industries along an "S"-shaped growth curve. They will start slowly while users adjust to the new purchasing option.
Then, once several success stories are in operation, disruption will take off like a rocket, making all old technology and processes obsolete. After several decades, the demand will flatten when the disruptive technology fully serves the market.
![S demand chart relating growth and time][8]
Image by:
(Ronald McFarland, CC BY-SA 4.0)
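The S-shaped demand pattern Seba describes can be sketched with a simple logistic function. This is only an illustrative model; the midpoint and steepness values below are made-up parameters, not figures from the book:

```python
from math import exp

def adoption(t, midpoint=7.5, steepness=0.8):
    """Logistic S-curve: fraction of the market served at year t.

    midpoint and steepness are illustrative values chosen to span
    roughly a 15-year disruption window, as assumed here.
    """
    return 1 / (1 + exp(-steepness * (t - midpoint)))

# Slow start, rapid middle, flattening end
for year in (0, 5, 7.5, 10, 15):
    print(f"year {year:>4}: {adoption(year):.0%} of market")
```

Running the sketch shows the three phases in the chart: almost no adoption in the early years, a steep climb around the midpoint, and saturation near the end of the window.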
Here is how the "S" demand works for electrical energy according to Seba:
1. Some solar and wind systems are accepted in the market, making capital more available and less costly.
2. More creative financing plans are developed and offered.
3. Local, distributed energy generation increases.
4. The business structure of energy flips from centralized to distributed.
5. Enabling technologies, such as sensors, artificial intelligence, big data, and mobile communications, improve exponentially. Sensors, smaller and more energy efficient than ever before, will have more information on energy usage than any conventional utility could ever obtain.
6. The increasing cost of resource-based energy will push more people away from them.
7. Complementary markets, such as wind power generation, electric vehicles, and self-driving cars will increase exponentially.
8. Investments in other storage technologies for shared solar facilities, wind operations, electric vehicles, and self-driving cars will increase.
9. The business environment of energy will become more and more distributed.
10. The conventional command-and-control energy business model will enter a vicious cycle of rising prices and obsolete assets.
Seba writes, "World demand for energy is expected to be 16.9 TW by 2030, according to the US Energy Information Agency." That may seem dire, but Seba adds, "Should solar continue on its exponential trajectory, the energy generation will be 100% solar by 2030."
I'm sure that Gates would be delighted if that turns out to be true, but he is taking no chances on just those predictions. Gates is investing in future nuclear power plants and carbon capture systems for current fossil fuel burning plants to reduce carbon dioxide emissions further.
### Building a community
Assume we want to build an open organization community to introduce three things:
* The use of electric vehicles (EV) over internal combustion engined (ICE) vehicles.
* Self-driving vehicles over human-driven vehicles.
* Solar power generation over nuclear power generation.
Here is additional detail on Seba's arguments for each.
#### 1. Use of electric over internal combustion engined vehicles
Consider the following attributes of EV over ICE vehicles:
* Seba argues that an electric vehicle is inexpensive to operate. An EV is five times more efficient in energy use than an ICE vehicle, which, he says, makes an EV 10 times cheaper to run.
* An electric vehicle is cheaper to maintain because it contains few moving parts.
* In the future, there will be wireless charging stations along the road, so vehicles need not stop to charge. ICE vehicles will always have to stop at filling stations.
* Electric vehicles are safer, with more safety sensors.
* By 2025, EVs will likely be cheaper to purchase than ICE, with battery costs coming down rapidly.
* Battery performance and range are increasing with new developments.
* Electric vehicles can gather driving data in real-time for added safety, performance analytics, and future improvements.
* Electric vehicles can use excess electricity in the owners' home solar system or sell the excess to the local utility company.
* An EV company can offer free fuel and maintenance for five years if it wants to. No ICE vehicle manufacturer can do this.
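The efficiency claim above can be turned into a rough per-mile fuel-cost comparison. Everything below is an illustrative assumption (the fuel prices and efficiency figures are mine, not Seba's), so treat it as a back-of-the-envelope sketch:

```python
# Rough per-mile energy cost, EV vs ICE. All figures are assumptions:
ELECTRICITY_PRICE = 0.125   # $/kWh (US residential figure cited later in this article)
GASOLINE_PRICE = 3.50       # $/gallon, assumed
EV_EFFICIENCY = 3.5         # miles per kWh, assumed
ICE_EFFICIENCY = 30.0       # miles per gallon, assumed

ev_cost_per_mile = ELECTRICITY_PRICE / EV_EFFICIENCY
ice_cost_per_mile = GASOLINE_PRICE / ICE_EFFICIENCY

print(f"EV:  ${ev_cost_per_mile:.3f}/mile")
print(f"ICE: ${ice_cost_per_mile:.3f}/mile")
print(f"ICE costs about {ice_cost_per_mile / ev_cost_per_mile:.1f}x more per mile")
```

Even with these conservative assumptions the EV comes out several times cheaper per mile; higher gasoline prices or cheaper electricity widen the gap further.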
#### 2. Use of self-driving vehicles over human-driven vehicles
Young people are moving away from owning vehicles in favor of Uber or Lyft services. It is becoming cheaper in the long term. Ride-sharing offers many advantages, including:
* Each Uber/Lyft vehicle on the road is estimated to replace around 15 cars purchased. This result shrinks the overall vehicle numbers, impacting the need for parking lots and vehicle insurance.
* Owning an asset used less than 10% of the time is not practical. Most cars are parked most of the time, except for commercial vehicles that operate during work hours.
The next step is self-driving cars, according to Seba. When self-driving cars are fully in the market, there will be disruption in the automotive, transportation, logistics, and oil industries. Self-driving cars will be mainly purchased, owned, and operated by corporations with fleets, not individuals.
Self-driving cars provide many advantages, a few of which are listed here:
* Self-driving cars expand the transportation of people. Children, the elderly, and those with disabilities will be new customers.
* The benefits of self-driving cars include:
* Save lives (no human error, advanced machine learning, and sensing).
* Save time (less time on the road with less congestion or looking for parking spaces).
* Save space (fewer roads, garages, and parking spaces required).
* Save energy (lower fuel consumption).
* Save money (in purchasing, operating, maintaining, and insuring for transportation).
* The cost of transportation will be cheaper, safer, and easier.
#### 3. Use of solar power generation over nuclear power generation
Solar power generation pricing is falling rapidly and is projected to continue falling with more use. This is where the Internet of Things (IoT) comes in. While nuclear power costs will also come down, it's not competitive, particularly as decommissioning is extremely expensive.
Here are some considerations for solar power generation:
* Solar cost has improved by a factor of 1,540 since 1970. Based on Seba's research, solar costs were expected to come down by another two-thirds by 2020 (the book was written in 2014).
* Currently, operating nuclear power plants' costs are increasing and are continually over budget in cost. If their costs don't come down, they will phase out. Bill Gates is betting on improved low-cost designs.
* Solar panel costs dropped from US$100/Wh (watt-hour) to 65¢/Wh between 1970 and around 2012, far lower than nuclear prices.
* New nuclear power plants are expected to provide electricity at 25¢/kWh-30¢/kWh. That is not competitive, with solar being around 12.5¢/kWh. In the United States, residential customers pay around 12.5¢/kWh, and some solar power suppliers expect to reach 5.79¢/kWh soon.
* Nuclear power generation uses twice as much water as natural gas, which is 10,000 times more than solar. This places added stress on the freshwater supply.
* There are now solar manufacturing companies providing power 24 hours/day. [Torresol Gemasolar][9] is a company in Spain putting one of these facilities into operation. They use safer and cheaper molten salt thermal energy storage. This energy storage method is a tenth of the cost of Li-ion batteries.
* Seba says, "Large, centralized, top-down, supplier-centric energy is on the way out. It is being replaced by modular, distributed, bottom-up, open, knowledge-based, consumer-centric energy." For solar, many users will also be producers and sharers of electricity and information.
* Solar now regularly provides 20-35% of Germany's power. That will continue with improved battery storage, especially as Germany moves away from Russian oil and gas due to the war in Ukraine. Both for residential and commercial solar plants, tens of thousands of power generation units are being installed in the United States, Europe, and Australia.
* In 2014, 430 reactors in 31 countries provided 370 GW of nuclear power capacity, mainly in France, Japan, Russia, and the United States. These most likely will be replaced with solar, wind, or [4th generation][10] nuclear, which uses waste as fuel. Europe will shut down 80% of its 186 nuclear power plants by 2030 ([Global Data][11]). Many current nuclear power plants are close to their shutdown age.
* Nuclear power plants are not insurable at any premium, so only governments protect them.
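The per-kWh prices cited above translate directly into annual household costs. The usage figure below is an assumption (roughly a US-average household), while the prices are the ones quoted earlier in this list:

```python
# Annual electricity cost at the per-kWh prices cited in the list above.
ANNUAL_KWH = 10_000  # assumed household usage, ~US average

prices = {
    "solar (today)": 0.125,       # $/kWh, cited above
    "solar (projected)": 0.0579,  # $/kWh, cited above
    "new nuclear (low)": 0.25,    # $/kWh, cited above
    "new nuclear (high)": 0.30,   # $/kWh, cited above
}

for source, price in prices.items():
    print(f"{source:>20}: ${ANNUAL_KWH * price:,.0f}/year")
```

At these rates a household served by new nuclear would pay roughly two to five times what it would pay for solar, which is the cost gap driving Seba's argument.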
### Wrap up
In the second part of this series, I'll discuss ideas regarding the use of solar power generation over oil power generation, the use of solar power generation over natural gas (with methane) power generation, the use of solar power generation over biofuels power generation, the use of solar power generation over coal power generation, and distributed electricity generation (small and simple) over conventional large utilities.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/22/8/energy-disruption
作者:[Ron McFarland][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/LIFE_solar_panels_520x292_JH.png
[2]: https://theopenorganization.org/definition/open-organization-definition/
[3]: https://www.amazon.co.jp/Clean-Disruption-Energy-Transportation-Conventional/dp/B00RWNDUZA
[4]: https://en.wikipedia.org/wiki/Jeremy_Rifkin
[5]: https://www.goodreads.com/book/show/18594514-the-zero-marginal-cost-society
[6]: https://www.youtube.com/watch?v=mYF2_FBCvXw
[7]: https://www.gatesnotes.com/Energy/My-new-climate-book-is-finally-here
[8]: https://opensource.com/sites/default/files/2022-07/s-demand.png
[9]: https://www.youtube.com/watch?v=QN-8DMZLpyI
[10]: https://en.wikipedia.org/wiki/Generation_IV_reactor
[11]: https://www.globaldata.com/

[#]: subject: "What is Firefox Multi-Account Containers? Why and How to Use It?"
[#]: via: "https://itsfoss.com/firefox-containers/"
[#]: author: "Hunter Wittenborn https://itsfoss.com/author/hunter/"
[#]: collector: "lujun9972"
[#]: translator: "hanszhao80"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What is Firefox Multi-Account Containers? Why and How to Use It?
======
As the needs of users running various programs on their devices become increasingly complex, the programs themselves also need to follow suit to keep up with what users want and expect.
Something that I find I need on a daily basis is an easy way to be able to stay logged in to multiple accounts inside my web browser at the same time. I _could_ just log in and out of each of my accounts as needed, but this becomes extremely tedious when I'm moving across multiple accounts in a short period of time.
Originally, I was using Google Chrome's ability to have multiple accounts, which worked, but was a tad too tedious to manage, and it felt a bit clunky to create an entire new Google account just to do something that I felt should be possible from a _single_ account.
This is the point where I moved to Firefox's Multi-Account Containers feature. Not only is it much more flexible than my setup on Google Chrome, but I am also using something created by my browser's developers themselves, making for an overall smoother and simpler experience.
![Illustration of containers in Firefox][1]
### What are Multi-Account Containers in Firefox?
Multi-Account Containers also work tremendously well if you want to separate parts of your digital life from each other. When you are using containers, your browsing activity from one container is not shared with other containers. This isolation means that you can log in to two separate accounts on the same website in different containers. Your login session, site preferences, and tracking data will be confined to the container where you used a certain website.
What other advantages does it have? Imagine you were shopping on Amazon or some other e-commerce site. You browsed a few items but did not buy anything. Now as you browse the web, you'll see ads related to the products you viewed. Some websites show ads despite ad-blockers. With containers, you can separate your shopping websites from other websites.
Let me share another example with you. Firefox provides a Facebook container by default. This container includes the Facebook, Messenger, and Instagram websites by default. This means that when you open any of these three websites, they are opened in the Facebook container. Thus, Facebook won't be able to track your activity on other websites.
This is one of the [Firefox features that not many people know or use][2].
### Using Multi-Account Containers
Installing Firefox Multi-Account containers is an extremely simple process, and only takes a few clicks.
First, head over to the [extensions page][3] on the Firefox Add-ons website. The only thing you need to do after that is click the `Add to Firefox` button.
![][4]
And you're done! Now we can get straight into actually using the new extension.
If you didn't notice already, there should now be a new icon to the right of your search bar:
![][5]
This is the icon that you'll use to interact with Firefox Multi-Account Containers. If you click the icon, you'll be greeted with a little menu:
![][6]
With that, let's try some examples so we can see how Multi-Account Containers actually works.
#### Setting up the container
First off, we need to make a container. We can do this by going to `Manage Containers -> New Container` in the Multi-Account Containers menu.
![][7]
![][8]
Next, enter the name for the new container, and select a color and icon. Then, hit `OK` to save the new container.
![][9]
And that's it! We can now go back to the main menu to open a new tab inside the new container:
![][10]
You will also notice that the new tab has some styling to denote that it's running inside of a container:
![][11]
#### Watching the container work in action
Let's now look at what the container actually does when you use it.
We're going to go to the Linode manager website in a normal browser tab, where I'm currently signed in:
![][12]
Let's now try opening the same page inside our Firefox container, at which point I'm redirected to the Linode login screen:
![][13]
Why was I redirected? Because now I am not logged in. That's one of the joys of Firefox containers: you can be signed in in one browser session, then enter a container, and it's like you've never visited the site before.
If you sign in to a website within the container, however, you will stay signed in while visiting websites from the container. You could also use this feature to sign in to a website from inside the container, thus keeping all the data from that site away from your normal browser data.
Note
Things like your browser history itself will still be exposed to your normal browser session. The container feature simply provides a way to separate things like signed-in accounts as mentioned in this article.
### Wrapping Up
Multi-Account Containers prove to be a wonderful feature for those who are conscious about their privacy, or who just want to tighten the security of their systems.
For example, you could sign in to your Google account inside of a container, and Google would never know who you are whenever you aren't inside the container.
And lastly, this extension is just a great choice for people who want to sign in to multiple accounts at once, without resorting to making a separate browser account for each thing you want to use.
And there you go, that's the premise of Firefox's Multi-Account Containers.
Need any help getting everything going, or just got a general question? Feel free to leave any of it in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/firefox-containers/
作者:[Hunter Wittenborn][a]
选题:[lujun9972][b]
译者:[hanszhao80](https://github.com/hanszhao80)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/hunter/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Firefox-container.png?resize=800%2C450&ssl=1
[2]: https://itsfoss.com/firefox-useful-features/
[3]: https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/?utm_source=addons.mozilla.org&utm_medium=referral&utm_content=search
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-install-page.png?resize=800%2C366&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-searchbar-icon-1.png?resize=800%2C48&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-main-menu.png?resize=302%2C474&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-manage-containers-1.png?resize=291%2C402&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-new-container.png?resize=290%2C399&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-new-container-itsfoss.png?resize=292%2C401&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-opening-new-container.png?resize=290%2C398&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-new-container-styling.png?resize=800%2C370&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-linode.png?resize=800%2C114&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/firefox-containers-linode-login.png?resize=800%2C405&ssl=1


@ -1,166 +0,0 @@
[#]: subject: "Write a chess game using bit-fields and masks"
[#]: via: "https://opensource.com/article/21/8/binary-bit-fields-masks"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Write a chess game using bit-fields and masks
======
Using bit-fields and masks is a common method to combine data without
using structures.
![Chess pieces on a chess board][1]
Let's say you were writing a chess game in C. One way to track the pieces on the board is by defining a structure that defines each possible piece on the board, and its color, so every square contains an element from that structure. For example, you might have a structure that looks like this:
```
struct chess_pc {
   int piece;
   int is_black;
};
```
With this programming structure, your program will know what piece is in every square and its color. You can quickly identify if the piece is a pawn, rook, knight, bishop, queen, or king—and if the piece is black or white. But there's a more straightforward way to track the same information while using less data and memory. Rather than storing a structure of two `int` values for every square on a chessboard, we can store a single `int` value and use binary _bit-fields_ and _masks_ to identify the pieces and color in each square.
### Bits and binary
When using bit-fields to represent data, it helps to think like a computer. Let's start by listing the possible chess pieces and assigning a number to each. I'll help us along to the next step by representing the number in its binary form, the way the computer would track it. Remember that binary numbers are made up of _bits_, which are either zero or one.
* `00000000:` empty (0)
* `00000001:` pawn (1)
* `00000010:` rook (2)
* `00000011:` knight (3)
* `00000100:` bishop (4)
* `00000101:` queen (5)
* `00000110:` king (6)
To list all pieces on a chessboard, we only need the three bits that represent (from right to left) the values 1, 2, and 4. For example, the number 6 is binary `110`. All of the other bits in the binary representation of 6 are zeroes.
And with a bit of cleverness, we can use one of those extra always-zero bits to track if a piece is black or white. We can use the number 8 (binary `00001000`) to indicate if a piece is black. If this bit is 1, it's black; if it's 0, it's white. That's called a _bit-field_, which we can pull out later using a binary _mask_.
### Storing data with bit-fields
To write a chess program using bit-fields and masks, we might start with these definitions:
```
/* game pieces */
#define EMPTY 0
#define PAWN 1
#define ROOK 2
#define KNIGHT 3
#define BISHOP 4
#define QUEEN 5
#define KING 6
/* piece color (bit-field) */
#define BLACK 8
#define WHITE 0
/* piece only (mask) */
#define PIECE 7
```
When you assign a value to a square, such as when initializing the chessboard, you can assign a single `int` value to track both the piece and its color. For example, to store a black rook in position 0,0 of an array, you would use this code:
```
  int board[8][8];
..
  board[0][0] = BLACK | ROOK;
```
The `|` is a binary OR, which means the computer will combine the bits from two numbers. For every bit position, if that bit from _either_ number is 1, the result for that bit position is also 1. Binary OR of the value `BLACK` (8, or binary `00001000`) and the value `ROOK` (2, or binary `00000010`) is binary `00001010`, or 10:
```
    00001000 = 8
 OR 00000010 = 2
    ________
    00001010 = 10
```
Similarly, to store a white pawn in position 6,0 of the array, you could use this:
```
  board[6][0] = WHITE | PAWN;
```
This stores the value 1 because the binary OR of `WHITE` (0) and `PAWN` (1) is just 1:
```
    00000000 = 0
 OR 00000001 = 1
    ________
    00000001 = 1
```
### Getting data out with masks
During the chess game, the program will need to know what piece is in a square and its color. We can separate the piece using a binary mask.
For example, the program might need to know the contents of a specific square on the board during the chess game, such as the array element at `board[5][3]`. What piece is there, and is it black or white? To identify the chess piece, combine the element's value with the `PIECE` mask using the binary AND:
```
  int board[8][8];
  int piece;
..
  piece = board[5][3] & PIECE;
```
The binary AND operator (`&`) combines two binary values so that for any bit position, if that bit in _both_ numbers is 1, then the result is also 1. For example, if the value of `board[5][3]` is 11 (binary `00001011`), then the binary AND of 11 and the mask PIECE (7, or binary `00000111`) is binary `00000011`, or 3. This is a knight, which also has the value 3.
```
    00001011 = 11
AND 00000111 = 7
    ________
    00000011 = 3
```
Separating the piece's color is a simple matter of using binary AND with the value and the `BLACK` bit-field. For example, you might write this as a function called `is_black` to determine if a piece is either black or white:
```
int
is_black(int piece)
{
  return (piece & BLACK);
}
```
This works because the value `BLACK` is 8, or binary `00001000`. And in the C programming language, any non-zero value is treated as True, and zero is always False. So `is_black(board[5][3])` will return a True value (8) if the piece in array element `5,3` is black and will return a False value (0) if it is white.
### Bit fields
Using bit-fields and masks is a common method to combine data without using structures. They are worth adding to your programmer's "tool kit." While data structures are a valuable tool for ordered programming where you need to track related data, using separate elements to track single On or Off values (such as the colors of chess pieces) is less efficient. In these cases, consider using bit-fields and masks to combine your data more efficiently.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/binary-bit-fields-masks
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-chess-games.png?itok=U1lWMZ0y (Chess pieces on a chess board)


@ -1,212 +0,0 @@
[#]: subject: "MAKE MORE with Inkscape Ink/Stitch"
[#]: via: "https://fedoramagazine.org/make-more-with-inkscape-ink-stitch/"
[#]: author: "Sirko Kemter https://fedoramagazine.org/author/gnokii/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
MAKE MORE with Inkscape Ink/Stitch
======
![MAKE more with Inkscape - Ink/Stitch][1]
Inkscape, the most used and loved tool of Fedora's Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Also, Inkscape can do a lot more than just graphics. The first article of this [series][2] showed how to [produce GCode with Inkscape][3]. This article will examine another Inkscape extension, [Ink/Stitch][4]. Ink/Stitch is an extension for designing embroidery with Inkscape.
### DIY Embroidery
In the last few years the do-it-yourself or maker scene has experienced a boom. You could say it all began with the inexpensive option of [3D printing][5], followed by similarly affordable [CNC][6] machines and laser cutters/engravers. The prices for more "traditional" machines such as embroidery machines have also fallen in recent years. [Home embroidery machines are now available for 500 US dollars][7].
If you don't want to or can't buy one yourself, the nearest [MakerSpace][8] often has one. Even the prices for commercial single-head embroidery machines are down to 5,000 US dollars. They are an investment that can pay off quickly.
### Software for Embroidery Design
Some of the home machines include their own software for designing embroidery. But most, if not all, of these applications are Windows-only. [Embird][9], the most widely used manufacturer-independent software in this area, is also only available for Windows, although you could run it in Wine.
Another solution for Linux, [Embroidermodder][10], is not really developed anymore, even after a fundraising campaign.
Today, only one solution is left: [Ink/Stitch][4].
![The logo of the Ink/Stitch project][11]
### Open Source and Embroidery Design
Ink/Stitch started out using [libembroidery][12]. Today [pyembroidery][13] is used. The manufacturers can't be blamed for the prices of these machines, and given the number of Linux users, it is hardly worthwhile for them to develop applications for Linux.
#### The Embroidery File Format Problem
There is a problem with the proliferation of file formats for embroidery machines, especially among manufacturers that cook up their own file formats for their machines. In some cases, even a single manufacturer may use several different file formats.
* **.10o** Toyota embroidery machines
* **.100** Toyota embroidery machines
* **.CSD** Poem, Huskygram, and Singer EU embroidery home sewing machines.
* **.DSB** Baruda embroidery machines
* **.JEF** MemoryCraft 10000 machines.
* **.SEW** MemoryCraft 5700, 8000, and 9000 machines.
* **.PES** Brother and Babylock embroidery home sewing machines.
* **.PEC** Brother and Babylock embroidery home sewing machines.
* **.HUS** Husqvarna/Viking embroidery home sewing machines.
* **.PCS** Pfaff embroidery home sewing machines.
* **.VIP** old Pfaff format also used by Husqvarna machines.
* **.VP3** newer Pfaff embroidery home sewing machines.
* **.DST** Tajima commercial embroidery sewing machines.
* **.EXP** Melco commercial embroidery sewing machines.
* **.XXX** Compucon, Singer embroidery home sewing machines.
* **.ZSK** ZSK machines on the American market
This is just a small selection of the file formats that are available for embroidery. You can find a more complete list [here][14]. If you are interested in [deeper knowledge about these file formats, see here for more information][15].
#### File Formats of Ink/Stitch
Ink/Stitch can currently read the following file formats: 100, 10o, BRO, DAT, DSB, DST, DSZ, EMD, EXP, EXY, FXY, GT, INB, JEF, JPX, KSM, MAX, MIT, NEW, PCD, PCM, PCQ, PCS, PEC, PES, PHB, PHC, SEW, SHV, STC, STX, TAP, TBF, U01, VP3, XXX, ZXY and also GCode as TXT file.
For the more important task of writing/saving your work, Ink/Stitch supports far fewer formats: DST, EXP, JEF, PEC, PES, U01, VP3 and, of course, SVG, CSV and GCode as TXT.
Besides the problem of all these file formats, there are other problems that a potential stitch program has to overcome.
Working with the different kinds of stitches is one difficulty. The integration of tools for drawing and lettering is another. But why invent such a thing from scratch? Why not take an existing vector program and just add the functions for embroidery to it? That was the idea behind the [Ink/Stitch project][4] over three years ago.
### Install Ink/Stitch
Ink/Stitch is an [extension for Inkscape][16]. Inkscape's new functionality for downloading and installing extensions is still experimental, and you will not find Ink/Stitch among the extensions offered there. You must [download][17] the extension manually. After it is downloaded, unzip the package into your directory for Inkscape extensions. The default location is _~/.config/inkscape/extensions_ (or _/usr/share/inkscape/extensions_ for system-wide availability). If you have changed the defaults, you may need to check Inkscape's settings to find the location of the extensions directory.
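As a rough sketch, the manual installation boils down to something like the following shell commands (the archive name is only an example; use whatever file you actually downloaded from the releases page):

```shell
# Create the per-user Inkscape extensions directory if it does not exist yet
EXT_DIR="$HOME/.config/inkscape/extensions"
mkdir -p "$EXT_DIR"

# Unpack the downloaded Ink/Stitch release archive into it
# (example file name; adjust to the release you downloaded)
ZIP="inkstitch-v2.0.0-linux.zip"
if [ -f "$ZIP" ]; then
    unzip "$ZIP" -d "$EXT_DIR"
fi
```

After unpacking, restart Inkscape so it picks up the new extension.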
### Customization: Install Add-ons for Ink/Stitch
The Ink/Stitch extension provides a function called Install Add-Ons for Inkscape, which you should run first.
The execution of this function, _Extensions > Ink/Stitch > Thread Color Management > Install thread color palettes for Inkscape_, will take a while.
Don't worry that there is no progress bar or similar indicator to watch.
This function will install 70 color palettes of various yarn manufacturers and a symbol library for Ink/Stitch.
![Inkscape with the swatches dialogue open, which shows the Madeira Rayon color palette][18]
If you use the download from GitHub version 2.0.0, the ZIP file contains the color palette files. You only need to unpack them into the right directory (_~/.config/inkscape/palettes/_). If you need a [hoop template, you can download][19] one and save it to _~/.config/inkscape/templates_.
The next time you start Inkscape, you will find it under _File > New From Template_.
### Lettering with Ink/Stitch
By far the easiest and most widely used way to get an embroidery design is the _Lettering_ function of Ink/Stitch. It is located under _Extensions > Ink/Stitch > Lettering_. Lettering for embroidery is not simple. What you expect are so-called satin-stitched letters. For this, special font settings are needed.
![Inkscape with a “Chopin” glyph for satin stitching defined for the Lettering function][20]
You can convert paths to satin stitching. But this is more work-intensive than using the Lettering function. Thanks to the work of an active community, the May 2021 release of Ink/Stitch 2.0 brought more predefined fonts for this. An English tutorial on how to create such fonts can be found [here][21].
Version 2.0 also brings functions (_Extensions > Ink/Stitch > Font Management_) to make managing these kinds of fonts easier. There are also functions for creating these kinds of fonts, but you will need knowledge about font design with Inkscape to do so. First, you create an entire SVG font, which is then fed through a JSON script that converts the SVG font into the type of files that Ink/Stitch's font management function works with.
![On the left side the Lettering dialogue and on the right the preview of this settings][22]
The function will open a dialogue window where you just have to put in your text, choose the size and font, and then it will render a preview.
### Embroider Areas/Path-Objects
The easiest thing to do with Ink/Stitch is to embroider areas or paths. Just draw your path. If you use shapes, you have to convert them first and then run _Extensions > Ink/Stitch > Fill Tools > Break Apart Fill Objects…_
This breaks apart the path into its different parts. You have to use this function; the _Path > Break apart_ function of Inkscape won't work for this.
Next, you can run Ink/Stitch's built-in simulator: _Extensions > Ink/Stitch > Visualise and Export > Simulator/Realistic Preview_.
![The new Fedora logo as Stitch Plan Preview][23]
Be careful with the simulator: it takes a lot of system resources and it will take a while to start. You may find it easier to use the function _Extensions > Ink/Stitch > Visualise and Export > Stitch Plan Preview_. The latter renders the threading of the embroidery outside of the document.
![Nicubunu's Fedora hat icon as embroidery. The angles for the stitches of the head part and the brim are different so that it looks more realistic. The outline is done in satin stitching][24]
### Simple Satin and Satin Embroidery
Ink/Stitch will convert each stroke with a continuous line (no dashes) to what they call Zig-Zag or Simple Satin. Stitches are created along the path using the stroke width you have specified. This will work as long as there aren't too many curves on the path.
![Parameter setting dialogue and on the right the Fedora logo shape embroidered as Zig-Zag line][25]
This is simple, but it is far from the best way. It is better to use the Satin Tools for this. The functions for satin embroidery can be found under _Extensions > Satin Tools_. The most important is the conversion function, which converts paths to satin strokes.
![Fedora logo shape as Satin Line embroidery][26]
You can also reverse the stitch direction using _Extensions > Satin Tools > Flip Satin Column Rails_. This underlines the 3D effect satin embroidery gets, especially when you make puff embroidery. For machines that have this capability, you can also set the markings for the trims of jump stitches. To visualize these trims, Ink/Stitch uses the symbols that were installed from its own symbol library.
### The Ink/Stitch Stitch Library
What is called the stitch library is simply the kind of stitches that Ink/Stitch can create. The Fill Stitch and Zig-Zag/Satin Stitch have already been introduced. But there are more.
* **Running Stitches**: These are used for outline designs. The running stitch produces a series of small stitches following a line or curve. Each dashed line will be converted into a running stitch. The size of the dashes does not matter.
![A running stitch; each dashed line will be converted into one][27]
* **Bean Stitches**: These can also be used for outline designs or to add details to a design. The bean stitch describes a repetition of running stitches back and forth, which results in thicker threading.
![Bean Stitches creating a thicker line][28]
* **Manual Stitch**: In this mode, Ink/Stitch will use each node of a path as a needle penetration point, exactly as they are placed.
![In manual mode each node will be the needle penetration point][29]
* **E-Stitch**: The main use for e-stitch is a simple but strong cover stitch for applique items. It is often used for baby clothes because their skin tends to be more sensitive.
![E-Stitch, mostly used for appliques on baby clothes: a soft but strong connection][30]
### Embroidery Thread List
Some embroidery machines (especially those designed for commercial use) allow different threads to be fitted in advance according to what will be needed for the design. These machines will automatically switch to the right thread when needed. Some file formats for embroidery support this feature. But some do not. Ink/Stitch can apply custom thread lists to an embroidery design.
If you want to work on an existing design, you can import a thread list: _Extensions > Ink/Stitch > Import Threadlist_. Thread lists can also be exported (_Save As_, different file formats, as *.zip). You can also print them: _Extensions > Ink/Stitch > Visualise and Export > Print PDF_.
### Conclusion
Writing software for embroidery design is not easy. Many functions are needed and diverse (sometimes closed-source) file formats make the task difficult. Ink/Stitch has managed to create a useful tool with many functions. It enables the user to get started with basic embroidery design. Some things could be done a little better. But it is definitely a good tool as-is and I expect that it will become better over time. Machine embroidery can be an interesting hobby and with Ink/Stitch the Fedora Linux user can begin designing breathtaking things.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/make-more-with-inkscape-ink-stitch/
作者:[Sirko Kemter][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/gnokii/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/drawing2-816x345.png
[2]: https://fedoramagazine.org/series/make-more/
[3]: https://fedoramagazine.org/make-more-with-inkscape-g-code-tools/
[4]: https://inkstitch.org/
[5]: https://fedoramagazine.org/3d-printing-in-fedora-from-an-idea-to-the-thing/
[6]: https://en.wikipedia.org/wiki/Numerical_control
[7]: https://www.amazon.com/-/de/dp/B07VZ2YBLL/ref=sr_1_11?__mk_de_DE=%C3%85M%C3%85%C5%BD%C3%95%C3%91&crid=1MFJJWXMKQD6R&dchild=1&keywords=home+embroidery+machine&qid=1628388092&rnid=2941120011&s=arts-crafts&sprefix=home+embroider+%2Caps%2C-1&sr=1-11
[8]: https://www.fablabs.io/labs/map
[9]: https://www.embird.net/
[10]: https://embroidermodder.org/
[11]: https://fedoramagazine.org/wp-content/uploads/2021/08/inkstitch_logo.png
[12]: https://github.com/Embroidermodder/libembroidery
[13]: https://github.com/inkstitch/pyembroidery
[14]: http://www.needlework.ru/page/embroidery.htm
[15]: http://edutechwiki.unige.ch/en/Embroidery_format
[16]: https://inkscape.org/~wwderw/%E2%98%85inkstitch-embroidery-extension
[17]: https://github.com/inkstitch/inkstitch/releases/tag/v2.0.0
[18]: https://fedoramagazine.org/wp-content/uploads/2021/08/swatches-1024x556.png
[19]: https://inkstitch.org/assets/images/tutorials/templates/hoop-template.svg
[20]: https://fedoramagazine.org/wp-content/uploads/2021/08/satinfont-1024x556.png
[21]: https://inkstitch.org/tutorials/font-creation/
[22]: https://fedoramagazine.org/wp-content/uploads/2021/08/lettering-1024x523.png
[23]: https://fedoramagazine.org/wp-content/uploads/2021/08/stitch-preview-1024x556.png
[24]: https://fedoramagazine.org/wp-content/uploads/2021/08/nicu-stitch.gif
[25]: https://fedoramagazine.org/wp-content/uploads/2021/08/zigzag-1024x463.png
[26]: https://fedoramagazine.org/wp-content/uploads/2021/08/satin.png
[27]: https://fedoramagazine.org/wp-content/uploads/2021/08/running-stitch-detail.jpg
[28]: https://fedoramagazine.org/wp-content/uploads/2021/08/bean-stitch-detail.jpg
[29]: https://fedoramagazine.org/wp-content/uploads/2021/08/manual-stitch-detail.png
[30]: https://fedoramagazine.org/wp-content/uploads/2021/08/e-stitch-detail.jpg


@ -1,85 +0,0 @@
[#]: subject: "relaying mail to multiple smarthosts with opensmtpd"
[#]: via: "https://jao.io/blog/2021-11-09-relaying-mail-to-multiple-smarthosts.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
relaying mail to multiple smarthosts with opensmtpd
======
I like to use a local SMTP daemon for sending email from my laptop: that way I can send emails even while disconnected, and, even when the network is up, I don't have to wait for the SMTP dialogue with a remote smarthost to complete. Oh, and I also need local mail delivery.
For many years I've used Postfix to those ends; it has an acceptably simple configuration. But recently I've become fond of VPNs ([Mullvad][1], if you want to know), and was annoyed by Postfix getting confused when `/etc/resolv.conf` changes (for instance, because you bring the VPN up after Postfix's service has started). I've found a pleasantly simple alternative: [OpenSMTPD][2].
Say I want to use the SMTP server fencepost.gnu.org when sending an email as [jao@gnu.org][3], and smtp.jao.io when writing with [mail@jao.io][4] or [news@xmobar.org][5] in my `From` header. OpenSMTPD lets you do that with a very simple configuration file in `/etc/smtpd.conf`[1][6]:
```
table aliases file:/etc/aliases
table secrets db:/etc/mail/secrets.db
table sendergnu { jao@gnu.org }
table senderjao { mail@jao.io, news@xmobar.org }
listen on localhost
action "local" mbox alias <aliases>
action "relaygnu" relay host smtp+tls://gnu@fencepost.gnu.org:587 auth <secrets>
action "relayjao" relay host smtps://jao@smtp.jao.io:465 auth <secrets>
match for local action "local"
match for any from mail-from <sendergnu> action "relaygnu"
match for any from mail-from <senderjao> action "relayjao"
```
where we have also configured local delivery for good measure. That's the full configuration file! The only other thing needed is generating the `secrets.db` file with the users and passwords corresponding to the keys `gnu` and `jao` (those are just arbitrary names). To that end, we create a plain text file with them, using entries of the form `<key> <user>:<password>`:
```
gnu jao:my fencepost password
jao mail@jao.io:xxxxxxxxxxxxxxxxx
```
where my user for `fencepost.gnu.org` is `jao` and for `smtp.jao.io` is `mail@jao.io` (as you can see, there's no need to escape spaces or at signs). Then we use the program `makemap` to create the secrets db:
```
makemap secrets && rm secrets
```
### Footnotes:
[1][7]
That's the default configuration file on my Debian box; another popular alternative is `/etc/opensmtpd.conf`.
[Tags][8]: [sundry][9]
--------------------------------------------------------------------------------
via: https://jao.io/blog/2021-11-09-relaying-mail-to-multiple-smarthosts.html
作者:[jao][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jao.io
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Mullvad
[2]: https://www.opensmtpd.org/
[3]: mailto:jao@gnu.org
[4]: mailto:mail@jao.io
[5]: mailto:news@xmobar.org
[6]: tmp.zHAc8OxDnm#fn.1
[7]: tmp.zHAc8OxDnm#fnr.1
[8]: https://jao.io/blog/tags.html
[9]: https://jao.io/blog/tag-sundry.html


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/11/cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "Veryzzj"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -1,188 +0,0 @@
[#]: subject: "7 key components of observability in Python"
[#]: via: "https://opensource.com/article/21/11/observability-python"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "MCGA"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 key components of observability in Python
======
Learn why observability is important for Python and how to implement it
into your software development lifecycle.
![Searching for code][1]
The applications you write execute a lot of code, in a way that's essentially invisible. So how can you know:
* Is the code working?
* Is it working well?
* Who's using it, and how?
Observability is the ability to look at data that tells you what your code is doing. In this context, the main problem area is server code in distributed systems. It's not that observability isn't important for client applications; it's that clients tend not to be written in Python. It's not that observability does not matter for, say, data science; it's that tooling for observability in data science (mostly Jupyter and quick feedback) is different.
### Why observability matters
So why does observability matter? Observability is a vital part of the software development life cycle (SDLC).
Shipping an application is not the end; it is the beginning of a new cycle. In that cycle, the first stage is to confirm that the new version is running well. Otherwise, a rollback is probably needed. Which features are working well? Which ones have subtle bugs? You need to know what's going on to know what to work on next. Things fail in weird ways. Whether it's a natural disaster, a rollout of underlying infrastructure, or an application getting into a strange state, things can fail at any time, for any reason.
Outside of the standard SDLC, you need to know that everything is still running. If it's not running, it's essential to have a way to know how it is failing.
### Feedback
The first part of observability is getting feedback. When code gives information about what it is doing, feedback can help in many ways. In a staging or testing environment, feedback helps find problems and, more importantly, triage them in a faster way. This improves the tooling and communication around the validation step.
When doing a canary deployment or changing a feature flag, feedback is also important to let you know whether to continue, wait longer, or roll it back.
### Monitor
Sometimes you suspect that something has gone wrong. Maybe a dependent service is having issues, or maybe social media is barraging you with questions about your site. Maybe there's a complicated operation in a related system, and you want to make sure your system is handling it well. In those cases, you want to aggregate the data from your observability system into dashboards.
When writing an application, these dashboards need to be part of the design criteria. The only way they have data to display is when your application shares it with them.
### Alerts
Watching dashboards for more than 15 minutes at a time is like watching paint dry. No human should be subjected to this. For that task, we have alerting systems. Alerting systems compare the observability data to the expected data and send a notification when it doesn't match up. Fully delving into incident management is beyond the scope of this article. However, observable applications are alert-friendly in two ways:
* They produce enough data, with enough quality, that high-quality alerts can be sent.
* The alert has enough data, or the receiver can easily get the data, to help triage the source.
High-quality alerts have three properties:
* Low false alarms: If there's an alert, there's definitely a problem.
* Low missing alarms: When there's a problem, an alert is triggered.
* Timely: An alert is sent quickly to minimize time to recovery.
These three properties are in a three-way conflict. You can reduce false alarms by raising the threshold of detection at the cost of increasing missing alarms. You can reduce missing alarms by lowering the threshold of detection at the expense of increasing false alarms. You can reduce both false alarms and missing alarms by collecting more data at the cost of timeliness.
Improving all three parameters is harder. This is where the quality of observability data comes in. Higher quality data can reduce all three.
### Logging
Some people like to make fun of print-based debugging. But in a world where most software runs on not-your-local-PC, print debugging is all you can do. Logging is a formalization of print debugging. The Python logging library, for all of its faults, allows standardized logging. Most importantly, it means you can log from libraries.
The application is responsible for configuring which logs go where. Ironically, after many years where applications were literally responsible for configuration, this is less and less true. Modern applications in a modern container orchestration environment log to standard error and standard output and trust the orchestration system to manage the log properly.
However, you should not rely on it in libraries, or pretty much anywhere. If you want to let the operator know what's going on, _use logging, not print_.
#### Logging levels
One of the most important features of logging is _logging levels_. Logging levels allow you to filter and route logs appropriately. But this can only be done if logging levels are consistent. At the very least, you should make them consistent across your applications.
With a little help, libraries that choose incompatible semantics can be retroactively fixed by appropriate configuration at the application level. Do this by following the most important universal convention in Python: calling `getLogger(__name__)`.
Most reasonable libraries follow this convention. Filters can modify logging objects in place before they are emitted. You can attach a filter to the handler that will modify the messages based on the name to have appropriate levels.
```
import logging
LOGGER = logging.getLogger(__name__)
```
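The retroactive fix described above can be sketched as a `logging.Filter` attached to a handler that relabels records by logger name before they are emitted. The `chatty.lib` logger name is a made-up example:

```python
import logging

class DemoteNoisyInfo(logging.Filter):
    """Relabel INFO records from a library whose level semantics are too chatty."""

    def filter(self, record: logging.LogRecord) -> bool:
        if record.name.startswith("chatty.lib") and record.levelno == logging.INFO:
            record.levelno = logging.DEBUG
            record.levelname = "DEBUG"
        return True  # never drop the record, only relabel it

handler = logging.StreamHandler()
handler.addFilter(DemoteNoisyInfo())
logging.getLogger("chatty.lib").addHandler(handler)
```

Because the library used `getLogger(__name__)`, its records carry the `chatty.lib.*` name, which is what makes this targeting possible.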
With this in mind, you now have to actually specify semantics for logging levels. There are a lot of options, but the following are my favorite:
* Error: This sends an immediate alert. The application is in a state that requires operator attention. (This means that Critical and Error are folded.)
* Warning: I like to call these “Business hours alerts.” Someone should look at this within one business day.
* Info: This is emitted during normal flow. It's designed to help people understand what the application is doing if they already suspect a problem.
* Debug: This is not emitted in the production environment by default. It might or might not be emitted in development or staging, and it can be turned on explicitly in production if more information is needed.
In no case should you include PII (Personal Identifiable Information) or passwords in logs. This is true regardless of levels. Levels change, debug levels are activated, and so on. Logging aggregation systems are rarely PII-safe, especially with evolving PII regulation (HIPAA, GDPR, and others).
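One defensive measure is a filter that scrubs obvious PII before a record is emitted. This is only a sketch, not a complete PII solution: the regex below catches email addresses and nothing else.

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+")

class RedactEmails(logging.Filter):
    """Scrub email addresses from the rendered message before emission."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = EMAIL.sub("<redacted>", record.getMessage())
        record.args = None  # the message is already fully rendered
        return True
```

Attach it to a handler (or the root logger) so it runs regardless of which level is active.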
#### Log aggregation
Modern systems are almost always distributed. Redundancy, scaling, and sometimes jurisdictional needs mean horizontal distribution. Microservices mean vertical distribution. Logging into each machine to check the logs is no longer realistic. It is often a bad idea for proper control reasons: allowing developers to log into machines gives them too many privileges.
All logs should be sent to an aggregator. There are commercial offerings, you can configure an ELK stack, or you can use any other database (SQL or NoSQL). As a really low-tech solution, you can write the logs to files and ship them to object storage. There are too many solutions to list, but the most important thing is to choose one and aggregate everything.
#### Logging queries
After logging everything to one place, there are too many logs. The specific aggregator defines how to write queries, but whether it's grepping through storage or writing NoSQL queries, queries that match on source and details are useful.
### Metric scraping
Metrics scraping is a server pull model. The metrics server connects to the application periodically and pulls the metrics.
At the very least, this means the server needs connectivity and discovery for all relevant application servers.
#### Prometheus as a standard
The [Prometheus][2] format as an endpoint is useful if your metrics aggregator is Prometheus. But it is also useful if it is not! Almost all systems contain a compatibility shim for Prometheus endpoints.
Adding a Prometheus shim to your application using the client Python library allows it to be scraped by most metrics aggregators. Once Prometheus discovers a server, it expects to find a metrics endpoint. This is usually part of the application routing, often at `/metrics`. Regardless of the platform of the web application, if you can serve a custom byte stream with a custom content type at a given endpoint, you can be scraped by Prometheus.
For the most popular frameworks, there are also middleware plugins or equivalents that automatically collect some metrics, such as latency and error rates. This is not usually enough. You want to collect custom application data: for example, cache hit/miss rates per endpoint, database latency, and so on.
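As an illustration with the official `prometheus_client` library (assuming it is installed), a per-endpoint cache hit/miss counter can be registered and rendered in the exposition format Prometheus scrapes; the metric and endpoint names are made up:

```python
from prometheus_client import CollectorRegistry, Counter, generate_latest

registry = CollectorRegistry()  # a private registry instead of the global one
CACHE_REQUESTS = Counter(
    "cache_requests_total",
    "Cache lookups by endpoint and result",
    ["endpoint", "result"],
    registry=registry,
)

def record_cache_lookup(endpoint: str, hit: bool) -> None:
    CACHE_REQUESTS.labels(endpoint=endpoint, result="hit" if hit else "miss").inc()

record_cache_lookup("/users", True)
record_cache_lookup("/users", False)
payload = generate_latest(registry)  # the bytes you would serve at /metrics
```

Serving `payload` with the `text/plain; version=0.0.4` content type at `/metrics` is all a scraper needs.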
#### Using counters
Prometheus supports several data types. One important and subtle type is the counter. Counters always advance—with one caveat.
When the application resets, the counter goes back to zero. These “epochs” in counters are managed by having the counter “creation time” sent as metadata. Prometheus will know not to compare counters from two different epochs.
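The reset caveat can be illustrated with a pure-Python sketch (this is not Prometheus code): a scraper that sees a counter go *down* infers a restart and treats the new value as the increase since the reset.

```python
def counter_increase(samples):
    """Total increase across scraped samples, tolerating resets to zero."""
    total, prev = 0, None
    for value in samples:
        if prev is None or value < prev:  # first sample, or a new epoch
            total += value
        else:
            total += value - prev
        prev = value
    return total

# A restart between the 9 and the 2 contributes 2 + 2 more increments,
# so the total increase over [0, 5, 9, 2, 4] is 13, not 4 - 0.
```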
#### Using gauges
Gauges are much simpler: They measure instantaneous values. Use them for measurements that go up and down: for example, total allocated memory, size of cache, and so on.
#### Using enums
Enums are useful for states of the application as a whole, although they can be collected on a more granular basis. For example, if you are using a feature-gating framework, a feature that can have several states (e.g., in use, disabled, shadowing) might be useful to have as an enum.
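Both types are available in `prometheus_client` (assuming it is installed); the metric names and the feature-gate states below are invented for the sketch:

```python
from prometheus_client import CollectorRegistry, Enum, Gauge

registry = CollectorRegistry()
CACHE_SIZE = Gauge("cache_size_bytes", "Current cache size", registry=registry)
FEATURE = Enum(
    "new_checkout_flow",
    "State of a hypothetical feature gate",
    states=["in_use", "disabled", "shadowing"],
    registry=registry,
)

CACHE_SIZE.set(4096)        # gauges move both ways
CACHE_SIZE.dec(1024)
FEATURE.state("shadowing")  # exactly one enum state is active at a time
```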
### Analytics
Analytics are different from metrics in that they correspond to coherent events. For example, in network servers, an event is one outside request and its resulting work. In particular, the analytics event cannot be sent until the event is finished.
An event contains specific measurements: latency, number and possibly details of resulting requests to other services, and so on.
#### Structured Logging
One current possible option is structured logging. The send event is just sending a log with a properly formatted payload. This data can be queried from the log aggregator, parsed, and ingested into an appropriate system for allowing visibility into it.
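A minimal sketch of such a structured event, using only the standard library: the finished event's measurements are serialized as JSON in a single log line, so the aggregator can parse fields instead of grepping free text. The field names are arbitrary.

```python
import json
import logging
import time

logger = logging.getLogger("events")

def emit_request_event(endpoint: str, started: float, status: int) -> str:
    """Log one finished-request event as a JSON payload; returns the payload."""
    payload = json.dumps({
        "event": "request_finished",
        "endpoint": endpoint,
        "latency_ms": round((time.monotonic() - started) * 1000, 2),
        "status": status,
    })
    logger.info(payload)
    return payload
```

Note that the event is emitted only once the request is done, so the latency field can be included.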
### Error tracking
You can use logs to track errors, and you can use analytics to track errors. But a dedicated error system is worthwhile. A system optimized for errors can afford to send more data since errors are rare. It can send the right data, and it can do smart things with the data. Error-tracking systems in Python usually hook into a generic exception handler, collect data, and send it to a dedicated error aggregator.
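The generic-exception-handler hook can be sketched with `sys.excepthook` (this is not the Sentry SDK; `report` stands in for a hypothetical aggregator callable):

```python
import sys
import traceback

def make_error_hook(report):
    """Build an excepthook that sends a dict per crash, then chains the old hook."""
    previous = sys.excepthook
    def hook(exc_type, exc, tb):
        report({
            "type": exc_type.__name__,
            "message": str(exc),
            "stack": "".join(traceback.format_exception(exc_type, exc, tb)),
        })
        previous(exc_type, exc, tb)  # keep the default crash output
    return hook

# Installed with: sys.excepthook = make_error_hook(my_aggregator.send)
```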
#### Using Sentry
In many cases, running Sentry yourself is the right thing to do. When an error has occurred, something has gone wrong. Reliably removing sensitive data is not possible, since these are precisely the cases where the sensitive data might have ended up somewhere it shouldn't.
It is often not a big load: exceptions are supposed to be rare. Finally, this is not a system that needs high-quality, high-reliability backups. Yesterday's errors are already fixed, hopefully, and if they are not—you'll know!
### Fast, safe, repeatable: choose all three
Observable systems are faster to develop since they give you feedback. They are safer to run since, when they go wrong, they let you know sooner. Finally, observability lends itself to building repeatable processes around it since there is a feedback loop. Observability gives you knowledge about your application. And knowing is half the battle.
#### Upfront investment pays off
Building all the observability layers is hard work. It also often feels like wasted work, or at least like “nice to have but not urgent.”
Can you build it later? Maybe, but you shouldn't. Building it right lets you speed up the rest of development so much at all stages: testing, monitoring, and even onboarding new people. In an industry with as much churn as tech, just reducing the overhead of onboarding a new person is worth it.
The fact is, observability is important, so write it in early in the process and maintain it throughout. In turn, it will help you maintain your software.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/observability-python
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://opensource.com/article/21/7/run-prometheus-home-container
@ -2,16 +2,19 @@
[#]: via: "https://itsfoss.com/ubuntu-vs-manjaro/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "Return7g"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu vs Manjaro: Comparing the Different Linux Experiences
对比 Ubuntu 和 Manjaro:比较不同 Linux 发行版体验
======
Ubuntu is the most popular Debian-based Linux distribution for desktops and servers.
Ubuntu 是基于 Debian 最流行的桌面和服务器 Linux 发行版
And Manjaro Linux is an Arch-based distro tailored for desktops.
Manjaro Linux 是基于 Arch 量身定制的桌面发行版
Both are entirely different when it comes to user experience and features.